Talk:Motorola 68000: Difference between revisions
Revision as of 13:05, 30 November 2007
Code density
The first reference link is dead.
(The VAX and the 320xx microprocessors usually produced more compact code than the 68000. Sometimes the 8086 did as well.)
---
Hard to believe. The VAX is a 32-bit machine.
(Yes, but it was byte-coded, very compact. GJ)
The TI 32000 series is a RISC machine, isn't it?
(possibly, for some definitions of RISC. But i meant the Natsemi chip 32016 and successors. I have now fixed the link! GJ)
RISCs execute fast, but their code is not compact.
(RISC is not usually compact. But several are not much worse than 68K, and the HP-PA allegedly beats nearly everything for code size. No i don't know how, and i never got my hands on one to test. GJ)
Even space-optimized RISCs like the ARM need larger code than the 68K. They had to really sweat the ARM down with the "thumb" and "thumbscrew" approaches to reduce it to less than the 68K. Just reading about it tells me somebody had a bad 6 months getting there.
Certainly the 8086 is not smaller; you'll cram about 2x as much code into a 68K machine per byte as an x86. If you don't believe -me-, see the 6/27/97 entry:
http://vis-www.cs.umass.edu/~heller/cgi-bin/heller.cgi/?Frame=Main
x86 code is just not that compact. Ray Van De Walker
(It was 20% smaller than 68K the only time i actually coded something in both and cared enough to check the sizes. It did depend on what you were coding. 32 bit arithmetic on 8086 was horrible, and running out of registers was nearly as bad. And if you could make use of auto increment and decrement on 68K that was a big win. But the stuff i did mostly avoided all that, and was extremely compact on 8086. This experience was apparently almost normal for hand coded 16 bit stuff. 68K usually won for stuff from compilers. With the 80386, Intel became more "normal" and all the comparisons probably changed. -- Geronimo Jones)
That web page (~heller) seems like an especially bogus comparison. 1) using C++ is a joke; neither CPU architecture was designed to support it. C would be a better language to compile. 2) using compilers of different breeds is silly; you should use compilers from the same stable. For example, the Lattice C compiler targets both architectures, as does Metrowerks (just about), and of course gcc. 3) the program you compile probably makes a big difference. As GJ points out, 32-bit ops on an 8086 are a pain, but if your C program uses mostly 'int' then that's not a problem. On a 80486 it might not make much difference. --drj
--- These are all reasonable objections. However, there's no doubt that many designers thought that it was more compact. So, I rewrote it from a NPOV to say so. I also rewrote the orthogonality discussion from a NPOV. I hope that helps. Ray Van De Walker
---
A common misunderstanding among assembly-language programmers had to do with the DBcc loop instructions. These would unconditionally decrement the count register, and then take the branch unless the count register had been set to -1 (instead of 0, as on most other machines). I have seen many examples of code sequences which subtracted 1 from the loop counter before entering the loop, as a "workaround" for this feature.
In fact, the loop instructions were designed this way to allow for skipping the loop altogether if the count was initially zero, without the need for a separate check before entering the loop.
The following simple code sequence for clearing <count> bytes beginning at <addr> illustrates the basic technique for using a loop instruction:
        move.l  <addr>, a0
        move.w  <count>, d0
        bra.s   @7
@1:     clr.b   (a0)+
@7:     dbra    d0, @1
Notice how you enter the loop by branching directly to the decrement instruction. This makes it execute the loop body precisely <count> times, simply falling right out if <count> is initially zero.
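This decrement-then-test behaviour can be checked with a small Python model (names like `dbra_loop` are illustrative, not Motorola's; the model mimics DBRA's rule of always decrementing the 16-bit counter and falling through at -1):

```python
def dbra_loop(count):
    """Model of the 68000 idiom above: enter the loop by branching
    straight to the DBRA, so the body runs exactly `count` times
    (zero times when count == 0)."""
    d0 = count & 0xFFFF          # 16-bit loop counter in D0
    executed = 0                 # how many times the body (clr.b) runs
    while True:
        d0 = (d0 - 1) & 0xFFFF   # DBRA always decrements first
        if d0 == 0xFFFF:         # counter reached -1: fall out of the loop
            break
        executed += 1            # @1: the loop body executes
    return executed
```

Running it confirms the zero-trip case works without any separate check before the loop.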
Also, even though the DBcc instructions only support a 16-bit count, it is possible to use them with a full 32-bit count as follows:
        move.l  <addr>, a0
        move.l  <count>, d0
        bra.s   @7
@1:     swap    d0
@2:     clr.b   (a0)+
@7:     dbra    d0, @2
        swap    d0
        dbra    d0, @1
This does involve a bit more code, but because the inner loop executes up to 65536 times for each time round the outer loop, the extra time taken is insignificant. Ldo 10:05, 12 Sep 2003 (UTC)
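The 32-bit variant can likewise be modelled in Python (a hypothetical sketch, not part of the original post; SWAP exchanges the two 16-bit halves of D0, and each DBRA touches only the low word):

```python
def dbra_loop32(count):
    """Model of the nested swap/DBRA loop above: a full 32-bit count,
    handled by two 16-bit DBRA counters (low word inner, high word outer)."""
    def swap(r):                       # SWAP: exchange the 16-bit halves
        return ((r >> 16) | (r << 16)) & 0xFFFFFFFF

    def dbra(r):                       # DBRA: decrement the low word only
        return (r & 0xFFFF0000) | ((r - 1) & 0xFFFF)

    d0 = count & 0xFFFFFFFF
    executed = 0                       # how many times clr.b (a0)+ runs
    while True:                        # bra.s @7: enter at the inner DBRA
        while True:
            d0 = dbra(d0)              # @7: dbra d0, @2
            if d0 & 0xFFFF == 0xFFFF:  # low word exhausted: fall through
                break
            executed += 1              # @2: clr.b (a0)+
        d0 = swap(d0)                  # swap d0: high word into low half
        d0 = dbra(d0)                  # dbra d0, @1
        if d0 & 0xFFFF == 0xFFFF:      # high word exhausted: done
            break
        d0 = swap(d0)                  # @1: swap d0 back
        executed += 1                  # @2: clr.b before rejoining @7
    return executed
```

Tracing it shows the body executes exactly `count` times for any 32-bit count, including zero.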
All Motorola 68k
It's nice and it needs improvement.
Virtualization
The main page claimed that "the 68000 could not easily run a virtual image of itself without simulating a large number of instructions.". This is false; the only 68000 instruction which violates the Popek_and_Goldberg_virtualization_requirements is the "MOVE from SR" instruction. The 68010 made "MOVE from SR" privileged for that reason, and added an unprivileged "MOVE from CCR" instruction that could be used in its place.
It was further claimed that "This lack caused the later versions of the Intel 80386 to win designs in avionic control, where software reliability was achieved by executing software virtual machines.". The i386 is MUCH harder to virtualize than a 68000, as it has very many violations of the Popek and Goldberg requirements, and they are much more difficult to deal with than the 68000's "MOVE from SR" instruction. See X86 virtualization, and in particular Mr. Lawton's paper referenced there.
I'm not sure how common the i386 was in avionics, but the 68000 and later 68K family parts were in fact widely used.
--Brouhaha 22:52, 23 Nov 2004 (UTC)
I know the early Apollo workstations had to include a major KLUDGE in order to implement virtual memory: they had TWO 68000s, one running one clock cycle ahead of the other. If the one ahead got a page fault, the one behind would service the fault, then they'd exchange places.
"Its name was derived from the 68,000 transistors on the chip."
Please supply a reference for this. Mirror Vax 21:12, 18 Jun 2005 (UTC)
It's really unlikely the 68000 has only 68,000 transistors. More likely the name came as an upgrade of the good old Motorola "6800" series, although there's almost no resemblance between the two architectures.
- Actually the MC68000 did have approximately 68,000 transistor "sites"; that count included PLA locations that might or might not have an actual transistor depending on whether that PLA bit was a one or a zero. This information was widely publicized by Motorola FAEs back then, but wasn't in the data sheets, so it's hard to find anything that would be considered definitive today. At one time the Motorola SPS BBS had information on transistor counts of various devices in the 68K family, which ranged from 68,000 for the MC68000 to 273,000 for the MC68030. If someone had time to dig through electronics trade journals (Electronics, Electronics Design, EDN, EE Times) from 1979-1980, they might find mention of the transistor count.
- Or one might pester one of the original designers of the MC68000. His email address isn't that hard to find, but I'm not going to put it here since that would probably result in the guy getting tons of email with dumb 68K questions.
- Anyhow, it's accurate to say that the MC68000 designation derived from BOTH the transistor count and as a logical successor to the MC6800 family. --Brouhaha 01:03, 17 October 2005 (UTC)
- We can only include verifiable information, not speculation (however plausible). Besides, how the name was arrived at is not important. Mirror Vax 01:54, 17 October 2005 (UTC)
- The 68000 transistor count was widely known at the time - I'm sure one could find it mentioned in Byte (magazine) etc. There was a great deal of rivalry between the 8086 and the 68000. The transistor count was presumably a way of advertising how advanced the 68000 was compared to the 8086, and of explaining why the 68000 was delayed. An important piece of information, IMHO. -- Egil 04:56, 17 October 2005 (UTC)
- Just doing a quick google on "68000 transistors" I easily found:
- http://www.faqs.org/faqs/motorola/68k-chips-faq/
- http://www.cs.berkeley.edu/~pattrsn/talks/NAE.ppt
- http://www.cse.fau.edu/~mike/MP/day1comments.pdf
- http://www.sscs.org/awards/2003TFA.htm
- Note the 29000 transistors of the 8086. Not mentioning the rivalry between the 68k and the 8086, and not mentioning the transistor count issue would be the wrong thing here, it is an important piece of historical information. The actual transistor count ended up slightly over 68000, I've seen 70000 mentioned. -- Egil 05:12, 17 October 2005 (UTC)
- Mirror VAX wrote "We can only include verifiable information" -- since when? You *REALLY* would not like what the 68000 page would turn into if we removed everything that wasn't 100% verifiable from actual printed, customer-distributed Motorola literature. Since multiple people (myself included) have personal recollection of Motorola FAEs giving the 68000 transistor number and stating that it influenced the part number, I think it's fair game to include, and certainly it's closer to being authoritative than a lot of the other rubbish on the page.
- The 68000 FAQ has a list of transistor counts that appears to have been derived from information Motorola put on their now-defunct "Freeware BBS". It confirms the transistor count. --Brouhaha 23:41, 17 October 2005 (UTC)
- First of all, the subject is not the transistor count. We are discussing how the chip was named. Sorry if I didn't make that clear. Why isn't it good enough to simply state the transistor count, and leave to the reader speculation about what "influenced" the name? Mirror Vax 02:11, 18 October 2005 (UTC)
- Also, it's possible that the influence worked in the other direction - perhaps they decided on the 68000 name, and then creatively rounded the transistor count to match (maybe there are really 67000, or 69000...) Mirror Vax 02:21, 18 October 2005 (UTC)
- Your latest suggestion is ridiculous. If you had even bothered reading the references, you would have seen that the final design ended up with around 70,000. Motorola made a great marketing fuss about the transistor count wrt. the chip name, and that is certainly something that should be mentioned. -- Egil 05:35, 18 October 2005 (UTC)
- Mirror VAX asks "Why isn't it good enough": because various Motorola employees including FAEs at the time of introduction made a point of the transistor count being related to the part number; it's not just some random coincidence that customers noticed after the fact. Why do you have such a big problem with it? As for your other suggestion, I've been in the industry for over 20 years, currently work for a semiconductor company, and I've never yet heard of anyone basing design characteristics of a chip on the numerical portion of a part number.
- It is much more likely the case that they had an approximate transistor budget in mind when they started the design, based on the process technology and die size they wanted. As the design progressed, the transistor budget probably changed. For instance, it could have been decided to increase the transistor budget to allow for more GPRs, or a larger instruction set. It is possible but rather less likely that the design ended up needing fewer transistors than the original budget. Without contacting the designers, we're unlikely to ever know what the original transistor budget at the outset of the project was.
- In any case, it is common practice for the final part number to be determined AFTER the design is complete and ready for production. One chip my employer developed was known by the number 4000 during development, then 3035 for first silicon (not shipped to customers), then 3023 for the final product. Except possibly for the original 4000 designation, the part numbers were determined by the marketing department and were essentially unrelated to the engineering details. --Brouhaha 00:23, 19 October 2005 (UTC)
- You don't know how the marketers arrived at the name. You weren't in the room. I wasn't in the room. So we have two choices: (1) we can invent a history that seems plausible, and might be wholly true, partly true, or wholly false, or (2) we can stick to what we know to be true. You prefer (1); I prefer (2). Mirror Vax 02:26, 19 October 2005 (UTC)
- I know what the Motorola FAEs *said* was the basis for the name. So we can assume that they were telling the truth, or we can assume that they were lying, or we can assume that I am lying. Which seems more plausible to you?
- Did you actually have any contact with Motorola FAEs regarding the MC68000 in the 1979-1981 timeframe? I dealt with the local Motorola FAEs in Denver as part of my job. --Brouhaha 05:36, 19 October 2005 (UTC)
- OK, found a reasonably definitive reference. Harry "Nick" Tredennick, one of the engineers responsible for the logic design and microcode of the 68000 (and listed as one of the inventors on six of the Motorola patents regarding the 68000), posted to comp.sys.intel on 22-Aug-1996 a response to comments about the 68000 designation being derived from the transistor count, or as a followon to the 6800: "I think there was a little of each in the naming, but definitely some contribution from its being a follow-on to the 6800. We (the lowly engineers) were concerned at the time that the press would confuse the 6800 with the 68000 in reporting. It happened." This confirms what the Motorola FAEs were telling customers at the time. --Brouhaha 09:29, 24 October 2005 (UTC)
- The current version of the article says, "The transistor cell count, which was said to be 68,000 (in reality around 70,000)...". I don't know if that's true or not, but if it is, it undermines the notion that the name was derived from the transistor count (as does Tredennick's statement that there was "definitely some contribution from its being a follow-on to the 6800"). Rather, it suggests that the stated transistor count was derived from the name. Why not name it the MC70000? Why, if you are bragging about large transistor count, would you "round down" 70,000 to 68,000? Mirror Vax 15:06, 24 October 2005 (UTC)
- Which part of "I think there was a little of each" did you not understand? He didn't say that the part number was based exclusively on the MC6800. And given your insistence on authoritative information, where is the authoritative source for the 70,000 count? --Brouhaha 19:03, 24 October 2005 (UTC)
- Good question. As I said, I don't know if it's true or not. Mirror Vax 19:29, 24 October 2005 (UTC)
Motorola 6800
What about the Motorola 6800 (one zero less)? --Abdull 13:12, 2 October 2005 (UTC)
- What about it? --Brouhaha 01:03, 17 October 2005 (UTC)
- The Motorola 6800 is an 8-bit CPU.
Talking about claims
The article says: "Originally, the MC68000 was designed for use in household products (at least, that is what an internal memo from Motorola claimed)." I very much doubt this. What sort of household product would need the computing power of the 68k? The 68k was totally state of the art wrt complexity, pin count and chip area at the time, with a price to match. (I would have believed the above statement if it were about the MC6800, but that is another issue.) -- Egil 05:59, 17 October 2005 (UTC)
So, how many bits?
To help clarify this, is the 68000 code word size 16 or 32 bits wide? --Arny 09:03, 30 January 2006 (UTC)
- Do you mean the size of the instructions? They could vary from 16 bits (e.g. 0x4e75 for RTS, 0x4e71 for NOP, and 0x60xx for short relative branches) to 80 bits (0x23f9aaaaaaaa55555555 for MOVE.L $AAAAAAAA, $55555555). The data bus for reading/writing memory was 16 bits wide, and the registers A0-A7 and D0-D7 were 32 bits wide. Cmdrjameson 14:17, 30 January 2006 (UTC)
- I think the conventional view is that the 68000 is a 16-bit implementation of a 32-bit architecture; the later 68020, '030 and '040 are 32-bit implementations of the same architecture. This is what it basically says in my copy of "68000 primer" by Kelly-Bootle and Fowler. Graham 22:55, 30 January 2006 (UTC)
- Yes, this is what I've heard too. I'm next to certain this is explicitly documented in Motorola's reference manuals about the 680x0. By way of contrast, the 68008 was also a 16-bit implementation of the same architecture, but this time in a smaller physical package and as a result had an 8-bit data bus, and only a 20-bit external address. Cmdrjameson 01:20, 31 January 2006 (UTC)
- It was still a 16/32-bit chip though. The narrow bus was only to keep the physical pin count down. It used as many fetches as needed to bring in the data byte by byte on the 8 lines. Of course this made it slow but more than adequate for the applications it was intended for. I think this approach was really clever on the part of Moto - they allowed people to learn the instruction set once and apply it over a very wide range of different chips and applications. The same code would run unchanged on all varieties of the processor and the hardware just did what it needed to do to make it work. I guess it could be said that this was one of the first micros to be designed mainly from the software perspective rather than the hardware one. Graham 01:27, 31 January 2006 (UTC)
- Oh absolutely, it was still definitely a 16/32-bit chip. Mind you, this notion of having a common instruction set/architecture and a large range of implementations with different price/performance characteristics wasn't unique to Motorola. DEC differentiated the VAX product line with the horribly slow but cheap 11/730 vs the faster 11/780, and later with systems like the MicroVAX vs the 8650. And of course the granddaddy of them all is IBM's System/360, which did all this back in the 60s... --Cmdrjameson 11:00, 31 January 2006 (UTC)
- The 68000 was a mainframe on a chip ;-) Graham 11:10, 31 January 2006 (UTC)
- Hardly. It may have been the first microprocessor to have an architecture similar to that of a mainframe CPU, but it didn't particularly have any of the other attributes of mainframes, nor was the raw computing performance comparable to a contemporary mainframe. That's not a dig at the 68000; it wasn't trying to be a mainframe, and it was definitely a best-in-class microprocessor for several years after its introduction.
- Intel called their Intel iAPX 432 a "Micromainframe", and it had a few attributes that were more mainframe-like than the 68000, but its uniprocessor performance was significantly worse than the 68000. --Brouhaha 23:09, 31 January 2006 (UTC)
- Earnestness alert! I was joking. Graham 23:47, 31 January 2006 (UTC)
The article claims that the 68000 has 3 ALUs. This is completely wrong. It has a single 16-bit ALU. And this is probably the most important aspect that makes the 68000 a 16/32 chip, even internally.
32-bit ALU operations are performed internally in two 16-bit steps, using a carry when needed. 32-bit address calculations are performed by a separate, simple AU unit. This unit is not an ALU; the only operation it can perform is a simple 32-bit addition. Ijor 19:40, 14 December 2006 (UTC)
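The two-step scheme described here can be sketched in Python (an illustrative model of the general technique, not Motorola's actual microcode): a single 16-bit ALU first adds the low words, then adds the high words together with the carry out of the first pass.

```python
def add32_via_16bit_alu(a, b):
    """Hypothetical model: a 32-bit ADD performed as two passes
    through a single 16-bit ALU, with a carry between the passes.
    Inputs are treated as 32-bit unsigned values."""
    lo = (a & 0xFFFF) + (b & 0xFFFF)                # pass 1: low words
    carry = lo >> 16                                # carry out of bit 15
    hi = ((a >> 16) + (b >> 16) + carry) & 0xFFFF   # pass 2: high words + carry
    return (hi << 16) | (lo & 0xFFFF)
```

The result matches a native 32-bit add modulo 2^32, which is why the trick is invisible to software apart from the extra cycles.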
- If you look at a die photo there are three equal sized ALUs. Multiplication in particular is handled by two ALU's chained together. Potatoswatter 10:36, 15 December 2006 (UTC)
- Can you point to some document that states that those 3 sections are 3 ALUs, as you think? Can you point to any source describing that multiplication is performed by two ALUs? Can you explain, if multiplication uses two 16-bit ALUs, why it takes the number of cycles that it does? Can you explain, if it has more than a single ALU, why logical 32-bit operations (which don't require a carry) take longer than 16-bit ones? If it has 3 ALUs, can you explain the timing of the DIVx instructions? Can you explain why an ALU would be implemented exclusively for address calculation, when all that is needed is a simple small addition? Ijor 17:01, 15 December 2006 (UTC)
- I'm pretty busy so I can't do research for you, and I won't be around for the next month either. See The M68000 Family, vol. 1 by Hilf and Nausch. Microphotograph on p40. The low address word has a much smaller ALU. They describe in detail the layout of the microcode and nanocode ROMs and how the ALUs are ported to each other. There are two LSW ALUs and one MSW ALU. 70 cycles for multiplication = 16 instruction cycles * 4 cycles/instruction cycle + 6 cycles overhead. The ALUs form three 16 bit registers of internal state. I'd guess two are being used to calculate a running total, with the first operand latched into the low word ALU's input, and a left shift performed every insn cycle. The third ALU is used to right-shift through the second operand.
- Don't underestimate the importance of instruction cycles as opposed to clock cycles. The ALUs just couldn't be programmed to do an operation every cycle. The above algo fits with the address ALU doing two additions per insn cycle and the data ALU being able to do right shifts one per insn cycle. The microcode needed one cycle to branch, assuming the operation was programmed as a loop. (Otherwise that cycle would be a conditional branch, so same difference.) No real activity could happen when the microcode state machine was dedicated to controlling itself. Making a microcoded ~68000 transistor machine do multiplication that fast is harder than it might sound.
- It's not fair to demand an explanation of division. Generally, what you seem to be confused about is the fact that the address ALU had to compute a 32-bit addition for every insn just to increment the PC. Potatoswatter 04:47, 16 December 2006 (UTC)
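For readers following this exchange, the generic shift-and-add scheme being debated can be written out in Python. This is purely illustrative of the technique (one add-or-skip step per multiplier bit); it makes no claim about what the 68000's microcode actually does.

```python
def mulu16_shift_add(a, b):
    """Generic shift-and-add multiply, 16 x 16 -> 32 bits unsigned.
    One step per multiplier bit: add the shifted multiplicand when
    the bit is set, skip when it is clear. Illustrative only."""
    a &= 0xFFFF
    b &= 0xFFFF
    product = 0
    for bit in range(16):          # scan multiplier bits, LSB first
        if (b >> bit) & 1:
            product += a << bit    # add the shifted multiplicand
    return product & 0xFFFFFFFF
```

Because a step with a clear bit does no addition, any data-dependent timing falls out naturally from this structure.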
- I don't need you to do any research for me; I already did. I researched and investigated the 68000 far beyond what was ever done before, at least in disclosed form. The questions I asked were rhetorical, just to prove my point. I already know the answers to all of them.
- I don't have that book, but if it states that the 68000 has 3 ALUs, then the book is wrong. The book seems to be confusing an ALU, with a simple AU. The 68000 32-bit AU, which indeed can be separated in two halves, is not an ALU. Among other things it can't perform logical operations, and it can't perform shifts or rotations. It can only perform a simple addition. That's why it is called AU, and not ALU. Btw, the term AU is used in Motorola documentation, it is not my own one.
- The above MUL algo is wrong, for starters because 70 cycles is only the maximum; the actual timing depends on the operand bit pattern and is not fixed.
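The data dependence mentioned here can be made concrete. Motorola's published instruction timings give MULU.W as 38 + 2n clock cycles, where n is the number of one bits in the 16-bit multiplier, so the 70-cycle figure holds only in the worst case ($FFFF). A small helper based on that published formula (not on any unpublished analysis):

```python
def mulu_w_cycles(multiplier):
    """Published 68000 timing for MULU.W: 38 + 2n clocks, where n is
    the number of set bits in the 16-bit multiplier. Worst case is
    70 cycles, reached only when the multiplier is 0xFFFF."""
    n = bin(multiplier & 0xFFFF).count("1")
    return 38 + 2 * n
```

For example, a multiplier of zero finishes in the minimum 38 cycles, while 0xFFFF takes the full 70.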
- It is true that the ALU can't perform operations on every CPU cycle. Actually the 68000 needs a minimum of two cycles for any internal operation, not just ALU ones. But it is wrong that it needs an extra cycle to perform microcode branches. Microcode branches, conditional or not, don't take any overhead at all. The reason is that every microcode instruction performs a branch. There is no concept of a microcode PC; every micro instruction must specify the next one.
- Instructions must of course perform a 32-bit addition to compute the next PC. This is not an impediment or an explanation for why 32-bit operations take longer. The only explanation is because there is a single 16-bit ALU. See my article about the 68000 prefetch and cycle-by-cycle execution to understand why.
- It is perfectly fair to ask about division. I solved all the details about the 68000 division algo already and published the results about a year ago. Can you or the authors of that book explain the exact timing of both DIV instructions for multiple ALUs?
- As you can see by reading my articles, I know exactly the difference between a CPU clock cycle, a bus cycle, and a micro-cycle. Ijor 16:04, 16 December 2006 (UTC)
- Some quotes from Motorola papers and patents: "A limited arithmetic unit is located in the HIGH and LOW sections, and a general capability arithmetic and logical unit is located in the DATA section. This allows address and data calculations to occur simultaneously. For example, it is possible to do a register-to-register word addition concurrently with a program counter increment".
- I think this clearly shows that there is a single ALU, and that the other one is an AU. Again, it wouldn't make any sense to implement an ALU that would never perform any LOGICAL operations.
- Btw, re-reading your post, it seems you think that one microcycle (what you call an instruction cycle) takes four CPU clock cycles. This is also wrong: it takes two clock cycles, not four. Ijor 04:18, 2 January 2007 (UTC)
Here's a bit of real history. (I was at Motorola in the 80's).
The original design was for a machine with 32 bit address registers and only 16 bit data registers, eight of each! The microcode writer convinced the powers that be to make it a 32Address/32data machine.
This history is reflected in the fact that the high 16 bits of the 8 data registers are located beside the high 16 bits of the address registers and are physically located on the far side of the chip from the data ALU rather than beside it like the lower 16 bits are.
Yes, there were 3 math units. One 16 bit ALU and two 16 bit Address Units. The ALU was complex enough for the math instruction set while the AUs were only able to do address related math (add, subtract). And of course the 2 16 bit AUs worked together to make the 32 bit address calculations.
Therefore, to do a simple 16 bit math instruction, the ALU can do the operation while the AUs can perform address calculations during the same micro-cycle.
To perform a general 32 bit data operation, it is necessary to move the high 16 bit register data past/through the AUs to the ALU. This is why 32 bit ops take many more cycles than 16 bit ops.
One could therefore say it was a 16 bit processor pretending to be a 32 bit one. A real 32 bit ALU came out in the 68020.
BTW: If you look at a die, the top half is the microcode, the middle the random logic, and the bottom the registers and ALU/AUs. The bottom third is ordered left to right: High A & D registers, AUhigh, (X), AUlow, Low A registers, (X), Low D registers, ALU, I/O aligner. The Xs are gates (usually closed) that allow data to travel between the three areas, and the I/O aligner was the interface to the Data pins with byte/word align logic.
Some more interesting History.
After making the 68000, IBM and Motorola got together and made the 360 on a chip. It was thought that because the 68000 core was '32 bits' and regular enough that this would be an easy task.
What they failed to realize was that the random logic in the middle of the chip would have to change considerably to make the 360 on a chip. This took more resources and time than Motorola expected, delaying the 68010 and 68020, giving Intel a better chance to jump ahead with the i286/i386.
71.231.43.16 (talk) 13:05, 30 November 2007 (UTC)HenriS
Content from Amiga virtual machine (AFD)
The article Amiga virtual machine has come to my attention as it was proposed for deletion today. While my take is that it should be deleted, I think it contains some useful information, and I've proceeded to add some of it (namely, the section originally titled Bytecodes) to the 68k article (is that a good name for an article, by the way?). Only later did I realize that this is possibly a more appropriate place for such content. However, at this point, I prefer to avoid moving the content around further until someone else reviews it and discusses the most appropriate placement for it. LjL 20:16, 16 May 2006 (UTC)
68020 addressing modes influenced by printer applications?
I'm rather dubious of the claim added by Wayne Hardman that the addressing modes of the 68020 were influenced specifically by printer applications. Can anyone cite a reference? It seems much more likely that Motorola chose new addressing modes based on a survey of compiler-generated code for a wide variety of applications, and that some of those addressing modes happened to be fairly useful in printers (or graphic rendering in general). --Brouhaha 23:07, 21 August 2006 (UTC)
- I question the premise that the new addressing modes are useful. This sort of feature creep is typical of microcoded CPUs and doesn't necessarily reflect any measurable benefit. Coldfire dumped the new modes (except for the scaled index feature). Mirror Vax 00:03, 22 August 2006 (UTC)
- Obviously Motorola thought at the time that they would be useful. What I question is whether they were thought to be especially useful for printer applications. If no one can cite any reference for that, the sentence should be removed. --Brouhaha 22:23, 22 August 2006 (UTC)
- I agree that the claim about 68020 addressing modes being influenced by printer applications is dubious, and even if true, it would be more appropriate for the 68020 article. I removed it as part of some other edits. --Colin Douglas Howell 03:31, 20 September 2006 (UTC)
Ti-calculators with M68k
Some TI calculators (Texas Instruments) use the Motorola 68000, e.g. the TI-89 Titanium (or original) and the TI-92/Voyage. How about adding info about them? --Red_Hat_Eagle 03:31, 30 October 2006 (UTC)
- TI graphing calculators are already mentioned briefly, but I agree that a bit more detail should be added. --Colin Douglas Howell 06:39, 31 October 2006 (UTC)
- I've added more detail on the 68000's use in TI calculators. --Colin Douglas Howell 22:19, 1 November 2006 (UTC)
History section needs to be restructured
The History section is currently too long and contains a mix of general historical info about the 68000 and descriptions of specific 68000 applications. I confess that I've recently made the problem worse. The section needs to be split into separate History and Applications sections, or something along those lines, but I'm not sure how best to go about this. There doesn't seem to be any agreed-upon structure for Wikipedia articles on microprocessors. (I know "Be Bold" is the Wikipedia motto, but I'm naturally oriented towards caution rather than boldness.) --Colin Douglas Howell 22:38, 1 November 2006 (UTC)
- OK, I just went ahead and did it. There's still room for further improvement, of course. --Colin Douglas Howell 05:02, 2 November 2006 (UTC)
- Nice work. Chris Cunningham 09:05, 2 November 2006 (UTC)
Article has vague "cheerleading" statements which need eliminating
This article has some vague statements with a sort of "cheerleading" tone. The earliest version of the article seems to have been written by a fan of the 68000 and contains a number of such statements, some of which are still in the current article. While I agree that the 68000 was a well-designed processor, I think that such POV expressions are out of line here; it's better to describe and explain the processor's advantages in a clear, unbiased way.
Here's one example which I've just removed, referring to uses of the 68000: "It also sees use by medical manufacturers and many printer manufacturers because of its low cost, convenience, and good stability." The 68000 is certainly cheap now, but no more so than many other processors and microcontrollers competing for these markets. Whether it is "convenient" can depend on all sorts of factors, such as "which microcontrollers contain the particular devices we need?" and "what architectures are our embedded programmers most fluent in?" As for "good stability", I'm sure that applies to most microcontrollers; the embedded market has little tolerance for flakiness. (I felt free to remove the sentence because the uses in medical and printer fields are now also mentioned elsewhere.) --Colin Douglas Howell 19:02, 3 November 2006 (UTC)
Another one I've just eliminated claimed that the 68000 family was popular in the Unix world "because the architecture so strongly resembles the Digital PDP-11 and VAX, and is an excellent computer for running C code". It's certainly true that 68000-based Unix systems were popular, but these reasons for this popularity are hard to support. The 68000 did not "strongly resemble" either the PDP-11 or the VAX. True, it was a general-register architecture like those others, but there were lots of differences in detail. For example, the PDP-11 was purely 16-bit, had no separate address registers, and included the program counter as a general register. The VAX, on the other hand, had 16 general registers and was a three-address machine rather than a two-address machine like the 68000 and PDP-11. You could say that the 68000 had some limited general similarity to the PDP-11 and VAX, but it's not clear that was important for its popularity in Unix systems.
As for "an excellent computer for running C code", that statement doesn't even seem meaningful. The 68000 was no better for running compiled C code than many other architectures. It's true that C programmers on the 8086 and 80286 might have had to worry about far and near pointers, but even that deficiency was eliminated with the 80386. In any case, architectural issues are normally the C compiler's problem, not the user's, and again it's not clear whether such concerns were important in the popularity of 68000-based Unix. --Colin Douglas Howell 21:12, 3 November 2006 (UTC)
Removed unverifiable statement about recent Hitachi 68000 production
I've removed this statement: "As of 2001, Hitachi continued to manufacture the 68000 under license", because I can't find any information backing it up. Although I know that Hitachi made 68000 versions, I can't find anything about 68000 production on their web sites, either on the current pages or in the Internet Archive. If someone could find some sources of information about this, it would be a big help. --Colin Douglas Howell 07:09, 19 November 2006 (UTC)
Same pinout as ever?
Just wondering if today's 68K is the same size as the monster was in my Mac and Genesis? --24.249.108.133 23:42, 28 December 2006 (UTC)
- 'Today's' 68K CPUs are not pin-compatible. The pinout changed with the 68020.
Usefulness of PC-relative mode
The article makes the statement:
* 16-bit signed offset, e.g. 16(PC). This mode was very useful.
Very useful for what? It seems like it'd be useful for position-independent code, but not much else. Since its usefulness is non-obvious, should this be called out, or should we remove this editorialization? --Mr z 20:15, 9 August 2007 (UTC)
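For readers following the discussion, the mechanics of the 16-bit PC-relative mode can be sketched in a few lines. This is an illustrative model only: the function names are mine, and a real 68000 bases the displacement on the address of the instruction's extension word, a detail glossed over here.

```python
def sign_extend_16(value):
    """Interpret a 16-bit value as a signed two's-complement number."""
    return value - 0x10000 if value & 0x8000 else value

def pc_relative_ea(pc, displacement):
    """Effective address for a d16(PC)-style mode: base PC plus a
    signed 16-bit displacement, wrapped to the 68000's 24-bit bus."""
    return (pc + sign_extend_16(displacement)) & 0xFFFFFF

# A table 0x100 bytes ahead of the PC is reachable with a positive offset...
assert pc_relative_ea(0x1000, 0x0100) == 0x1100
# ...and data behind the PC with a negative one (0xFF00 encodes -256).
assert pc_relative_ea(0x1000, 0xFF00) == 0x0F00
```

Because addresses are computed relative to wherever the code happens to be running, the same machine code works unmodified at any load address, which is the position-independence argument made above.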
Amiga 500 processor
The Amiga 500 article makes a very precise but unsourced claim of 7.15909 MHz for the 68000, but this article makes no mention of the 68000 being manufactured at this speed. Is this correct? Miremare 18:20, 13 August 2007 (UTC)
- Slight underclocking. The NTSC color subcarrier frequency is 3.579545 MHz, and the CPU speed (on NTSC models) is exactly twice that. (83.245.252.197 22:47, 31 August 2007 (UTC))
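The arithmetic behind that figure is easy to check, given that the NTSC color subcarrier is defined as exactly 315/88 MHz:

```python
# NTSC color subcarrier: exactly 315/88 MHz
ntsc_subcarrier_hz = 315_000_000 / 88      # 3,579,545.45... Hz

# The NTSC Amiga 500 clocks its 68000 at twice the subcarrier
amiga_cpu_hz = 2 * ntsc_subcarrier_hz      # 7,159,090.9... Hz

# Matches the 7.15909 MHz figure quoted from the Amiga 500 article
assert round(amiga_cpu_hz / 1e6, 5) == 7.15909
```

So the unusual-looking 7.15909 MHz is not a special 68000 speed grade; it falls out of deriving the CPU clock from the video crystal.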
With only 56 instructions the minimal instruction size was huge for its day at 16 bits.
What does this mean? The Strela, 1953, had a minimum instruction size of 36 bits. The PDP-10 (from the early 60's) did also. PDP-11 instructions are always 16 bits (because everything is always contained in one word: one opcode and up to two operands, each including a register and an addressing mode)... I have to regard this statement as a mistake, unless it is supposed to say that the minimum opcode length is 16 bits, which sounds implausible.
History section dispute
Theaveng, Intel and the 68000 are not analogous to VHS and Betamax, and comparisons between those two products don't go into details non-techies can't follow.
Let's take a closer look at the current history section and I'll tell you some of my troubles with it.
- The 68000 grew out of the MACSS (Motorola Advanced Computer System on Silicon) project, begun in 1976. One of the early decisions made was to develop a new "no compromise" architecture that paid little attention to backward compatibility. This was a gamble because it would mean adopters of the chip would have to learn the new system entirely from scratch. In the end, the only concession to backward compatibility was a hardware one: the 68k could interface to existing 6800 peripheral devices, but could not run 6800 code. However, the designers built in a tremendous amount of forward compatibility, which paid off once 68k expertise had been gained by end users. For instance, the CPU registers were 32-bits wide, though address and data buses outside the CPU were initially narrower. In contrast the Intel 8086/8088 were 16-bits wide internally.
With the exception of the register file, the 68000 was also 16 bits wide internally. Also, this is a typical throwaway statement that doesn't quite fit. Making random comparisons with Intel CPUs just dilutes what the article is actually trying to say. Also note that the word tremendous is a clear case of POV.
- At the time, there was fierce competition among several of the then established manufacturers of 8-bit processors to bring out 16-bit designs. National Semiconductor had been first with its IMP-16 and PACE processors in 1973-1975, but these had issues with speed. Texas Instruments was next, with its TMS9900, but its CPU was not widely accepted. Next came Intel with the Intel 8086/8088 in 1977/78. However, Motorola marketing stressed the (true) point that the 68000 was a much more complete 16-bit design than the others. In fact, it is an implementation of a 32-bit architecture on a 16-bit CPU. This was reflected in its complexity. The transistor cell count, which was then a fairly direct measure of power in that era, was more than twice that of the 29,000 of the 8086.
Okay, but this centers a bit too much on 'other CPUs'. Let's split this up into two paragraphs, one discussing all the other CPUs and another for 68000 marketing. As for the transistor count comparison, the article has already touched upon transistor count; delving back into this topic is a typical amateur mistake. And why are we comparing with the two-year-older Intel architecture? The Z8000 was closer in age, 16-bit, and like the 68000 not 'backwards constrained'.
- The simplest 68000 instructions took four clock cycles, but the most complex ones required many more. An 8 MHz 68000 had an average performance of roughly 1 MIPS.
Nice to know, but it's just a random technical detail that's not discussed further. It should be cut if that improves the article's readability.
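For what it's worth, the "roughly 1 MIPS at 8 MHz" figure quoted above is just back-of-envelope arithmetic. The 8-cycles-per-instruction average below is an assumption of mine for illustration; the real mix of 4-cycle and longer instructions varies by program:

```python
clock_hz = 8_000_000                 # 8 MHz 68000
avg_cycles_per_instruction = 8       # assumed average (real mix varies by program)

# Million instructions per second = clock / cycles-per-instruction / 1e6
mips = clock_hz / avg_cycles_per_instruction / 1_000_000
assert mips == 1.0                   # roughly 1 MIPS, matching the quoted claim
```

Any averaged-cycles figure in that range gives "roughly 1 MIPS", which is why the article's number is plausible without a citation for the exact instruction mix.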
- On average, the 68000's instructions in typical program code did more work per instruction than competing Intel processors, which meant that 68000 designs both needed less RAM to store machine code and were faster.[citation needed] Additionally, the 68000 offered a "flat" 24-bit addressing system supporting up to 16 MB of memory; at the time, this was a very large memory space. Intel chips instead used a segmented system which was much more difficult to program.
much more difficult is clearly POV. The RAM/faster statement is uncited, vague, and only seems to exist to tell us that the 68000 CPU was better than competing Intel CPUs (which is wrong - later Intel CPUs were faster). It says nothing about other CPU architectures and leaves it up to the reader to figure out what historical impact this had.
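As an aside for readers unfamiliar with the terminology, the 16 MB figure in the quoted paragraph above follows directly from the 24-bit address width:

```python
address_bits = 24
address_space = 2 ** address_bits          # number of distinct byte addresses

# 2^24 bytes is exactly 16 MB (using 1 MB = 1024 * 1024 bytes)
assert address_space == 16 * 1024 * 1024
```

"Flat" simply means every one of those addresses is reachable directly, without the segment-register bookkeeping the quoted text attributes to the Intel chips.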
Also, keep in mind that the 68000 had strong competitors in the Z8000, NS16032, iAPX 432 and other contenders for the title of CPU of the eighties. The article as it currently stands does a poor job of detailing how the 68000 fared technically against these CPUs. Why single out the Intel 8086? The 8086 was two years older, had starkly different design goals, and would have faded away if it hadn't gotten on the IBM rocket ship.
That said, I say again that technical details should not be the focus of the history section. The history of the 68000 CPU can be discussed without readers knowing about bit width, transistor count, address range, clock cycles, etc. These are technical terms that fly over many people's heads - rightfully so - and we should make an effort to keep them in a more technically focused section of the article. If we do feel the need to mention, say, clock cycles, then we should (within reason) give a short explanation of just what that means.
--Anss123 22:41, 2 October 2007 (UTC)
- Well then EDIT it to eliminate words like "massive" and anything else you think seems biased. And move the technical stuff out of history & into a subsection titled technology. I'd have no objection to that. ----- But what you did was basically excise half the article (including the Intel comparisons, which I liked), and I object to throwing away information so indiscriminately (it's such purges that led to the loss of early BBC TV shows & early silent movies) (because someone carelessly threw them in the trash). "EDIT to improve; don't do massive deletions" is the wiki mantra.
- Also I suggest you take a look at the Betamax article. It does indeed do technical comparisons to VHS (comparing different video resolutions and frequencies). And I love that article, because it's a great resource. So too is this 68000 article. Please don't turn it into a worthless fluff piece that has no technical value whatsoever. - Theaveng 13:51, 3 October 2007 (UTC)
It's okay now, but a while ago I read this page and I noticed how much fanboyism was in it. A few seconds ago I read the page again and I noticed that all the fanboyism has already been taken out. I'm sorry I deleted your page and put my opinion in about it. It's just that it was not the best processor out there like the article said it was a few months ago. I'm sorry I did not reread it to see if it had already been edited. I thought it was the same old fanboy-made article that was here back in September.
What I'm sick of hearing is Sega fanboys claiming that their processor was like a god and Nintendo's processor was a piece of poop. I hear that all day long and I'm sick of it. Wikipedia supporting their opinion makes it even worse.
Thank you for fanboy-proofing your article.
By the way, the "4 cycles to add bytes, 8 cycles to add words, 4 cycles to access memory, not at the same time" stuff is true and is a major hit to its CPU performance. Most other CPUs of its time (e.g. 6502, 65C816, Z80, 6809, ARM, etc.) didn't have those cycle limitations. This makes this processor very weak compared to others.
The 65C816 is one of the most underrated processors of all time. Sure, the 68000 has a lot more internal registers, but it has the above performance problem. The 65C816 doesn't have that performance problem, so it's able to access external memory faster than the 68000 could even access internal memory.
Nintendo is as fast, if not faster, than Sega, because of the above 68000-only "bandwidth" limitation that almost no other processor, including the 65C816, had.
—Preceding unsigned comment added by 75.58.34.62 (talk) 02:47, 24 November 2007 (UTC)
- Yes those evil Sega fanboys! That said, the SNES CPU is slower. At full speed the SNES CPU runs at 3.5 MHz - less than half of the Genny - but when accessing ROM, video or other hardware it slows down to 1.5 MHz or 2.5 MHz depending on what it accesses. Further troubling it is its limited register set, resulting in programs often needing more instructions to be equivalent to the 68000's. Worse, there's a 2-cycle penalty for fetching 16-bit words on the SNES (8-bit data bus), whereas the Genny might pull it off in one cycle with a little luck. At the end of the day the Genny CPU is significantly faster, even if some Genny games suffer from slowdowns.
- --Anss123 (talk) 10:25, 24 November 2007 (UTC)
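The bus-width point being argued here reduces to simple arithmetic. This sketch of mine counts only bus cycles for the transfer itself, ignoring wait states, DMA contention, and the 68000's own multi-clock bus cycle, so it illustrates only the 2:1 ratio:

```python
import math

def fetch_cycles(word_bits, bus_bits):
    """Bus cycles needed to move word_bits of data over a data bus
    that is bus_bits wide (idealized: no wait states or contention)."""
    return math.ceil(word_bits / bus_bits)

# Genesis/Mega Drive 68000: 16-bit data bus, one bus cycle per 16-bit word
assert fetch_cycles(16, 16) == 1
# SNES 65C816 as wired by Nintendo: 8-bit data bus, two bus cycles per word
assert fetch_cycles(16, 8) == 2
```

This is why both posters can be right in part: the 65C816 core supports a 16-bit external bus, but the SNES's 8-bit wiring forces the two-cycle word fetch described above.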
Stop, just stop!!! You didn't read my post: the 68000 takes four clock cycles to access memory and do addition on bytes, and cannot do both at the same time. The 65C816 takes only one cycle to do both, and can do them simultaneously.
and all this "drops down to 2.5 megahertz when accessing certain parts of memory" proves nothing, because a good programmer would avoid using that part of memory anyway
plus it has an H-DMA chip that does all the raster-interrupt VRAM loading so the CPU doesn't have to worry about it. Something the Sega Genesis doesn't have.
and let me remind you that the SNES uses a specialized version of the 65C816 that has two buses that can be used at the same time and many extra "external registers" built on the inside of the chip so it can access them even faster.
edit: by the way, can you tell me one way you could possibly pull off something on the 68000 in 1 cycle when the shortest opcode takes 4? —Preceding unsigned comment added by 75.58.75.4 (talk) 19:59, 24 November 2007 (UTC)
- I was talking about fetching 16-bit words. A Genny can pull that off in one cycle, as it has a 16-bit data bus, whereas the SNES has an 8-bit data bus and needs at the very least two cycles.
- The two address buses in the SNES are not data buses. When you say both can be used at the same time, I believe you refer to when the SNES does DMA. Both buses are in use then, with the system running at 2.5 MHz. If you use the 8-bit address bus the CPU slows down to about 1.5 MHz; that bus is used for reading the controllers and audio chip, among other things.
- Early SNES games ran the system at 2.5 MHz so that they could use cheaper ROM chips. Later games allow the CPU to run at 3.5 MHz, but not when accessing video hardware, audio hardware, etc.
give me a link to that information please, about it slowing down to 1.5 MHz when accessing the video and audio chips. And NO, Wikipedia articles are NOT proof, because they can be manipulated by anyone, and especially by you. You'll never find detailed hardware-level information about it because you made that up. I'll find detailed hardware information on the SNES's memory map and it will completely agree with me.
and you still fail to name me a 68000 opcode that takes less than four cycles.
edit: I found real proof before you found your own bs. www.romhacking.net/docs/memmap.txt
there you go, BUS-B is on fast speed. The only slow part of memory is the W-RAM, which is the part that saves game files. These registers only have to be written to after levels.
- I did not say it slowed down to 1.5 MHz when accessing the video chip; I said it slowed down to 1.5 MHz when using the 8-bit bus to access the controllers and audio chip. When working with the video chip you generally want to perform memory copies, and that is most efficiently done with DMA. When doing DMA the system is slowed to 2.5 MHz; the CPU is actually halted. I believe romhacking is used as a source for this on Wikipedia; the document you referenced is written by the same user that wrote the Wikipedia stuff (user:Anomie).
- Oh, and about that opcode. Why are you insisting that I name a one-cycle opcode? I never claimed there was a one-cycle opcode. I did claim that a 68K can perform a 16-bit fetch in one cycle, whereas the SNES needs two cycles. This is due to Nintendo's decision to use an 8-bit data bus on the SNES; the 65C816 can be used with a 16-bit data bus too - but not on the SNES.--Anss123 (talk) 13:34, 25 November 2007 (UTC)
Yeah, there is no one-cycle opcode in its entire instruction set. The shortest opcode takes four. Yes, theoretically it can access 16 bits per cycle, but in reality it can't, because there is no opcode that enables it to do that. I don't want to break your fantasy, but the chip's microcoding is really bad. Sure, Motorola could've done a lot of nice stuff with the same architecture, but the microcode didn't utilize its own hardware.
The 65C816's microcode did a much better job of utilizing the hardware it runs on. The 65C816 actually accesses memory every cycle. The 68000 can only access memory every 4 clock cycles because of poor decisions in microcoding.
Please stop using this theoretical philosophy, because we do not live in a theoretical world.
case closed, good night —Preceding unsigned comment added by 75.57.173.83 (talk) 03:39, 26 November 2007 (UTC)
- Anon dude: if either the 68000 or the 65816 were that bad, they wouldn't both have so many design wins. The 68000 accessed the bus every 4 cycles, but it was clocked faster, eventually over 60 MHz, and had better compilers. In the end it was used in more applications, notably the Palm series of PDAs and the TI calculators in this decade.
- It is generally pointless (and thus unencyclopaedic) to compare competing products on aesthetic merits. The marketplace does that for us, and decides what aesthetics actually matter. (That marketplace consists of engineers who decide which chip is better for actual applications.) Nintendo chose the 65816. Sega chose the 68000. Apple chose both but knew one was a dead end. Behind each of those decisions was a broad discussion including the issues you've brought up. So let's make use of the research of others and stop clogging this talk page. Potatoswatter (talk) 04:15, 26 November 2007 (UTC)