16-Bit vs. 32-Bit Integers

Continued from Introduction

The Intel 8088 used in the first IBM PCs was a 16-bit microprocessor. A processor's bit count used to be determined by the number of its data lines. A 16-bit processor has 16 data lines, which means it can input and output data in chunks of 16 bits, or 2 bytes. The 8088 was something of an anomaly in this regard because it had only 8 data lines (it was actually a scaled-down version of the 8086, which did have 16 data lines), but it was still considered a 16-bit processor because it could process data internally in 16-bit units.

The phrase "process data internally in 16-bit units" means that the 8088's instruction set was built to handle 16-bit quantities, which seemed like a lot in those days. Most microprocessors used in personal computers back then, including the Apple II's 6502 and the Commodore 64's 6510, were 8-bit chips designed to process information just 8 bits at a time. An 8-bit number can store values only from 0 to 255, so to handle anything bigger, an application written for the Apple or Commodore had to string two or more bytes together and operate on them separately. Likewise, an 8088 could handle 16-bit integers as large as 65,535 quite efficiently, but to deal with larger numbers it was forced to do some extra work. Code written to the 8088 instruction set is 16-bit code. And an application that uses 16-bit code is a 16-bit application.

One of the chief differences between 16- and 32-bit programs is how efficiently they handle large numbers. Suppose a 16-bit application were to add two 16-bit integers named a and b and store the result in a third variable named c. The microprocessor instructions generated by the compiler might look like this:

        mov ax,[a]
        add ax,[b]
        mov [c],ax

The first instruction retrieves a from memory and stores it in the microprocessor's 16-bit AX register. The second instruction adds the value of b to the number in AX, and the third copies the result from AX to the memory location where c is stored.

Now suppose that a, b, and c are 32-bit variables--how would a 16-bit application perform the same operation? The code is a little less straightforward this time around:

        mov ax,word ptr [a]
        add ax,word ptr [b]
        mov word ptr [c],ax
        mov ax,word ptr [a+2]
        adc ax,word ptr [b+2]
        mov word ptr [c+2],ax

Never mind the word ptrs and the +2s; they make the instructions look more complicated, but they don't add to the complexity of the compiled code. What's important is that the operation now requires six instructions--and six separate memory accesses--instead of three. The first three instructions add the least significant words (the lowest 16 bits) of the two variables, and the next three add the most significant words, factoring in a possible carry from the previous operation. A 32-bit variable is treated like two 16-bit variables, and each 16-bit half is operated on separately.

The 386 was the first Intel microprocessor to feature a 32-bit instruction set, and the same basic instruction set is still used today in 486s and Pentiums. A 32-bit application handles 32-bit quantities as easily as a 16-bit application handles 16-bit quantities. The same program that adds a and b to get c when a, b, and c are 32-bit integers compiles like this for a 32-bit platform:

        mov eax,[a]
        add eax,[b]
        mov [c],eax

EAX is the 32-bit version of AX. We're back down to three instructions, because 32-bit instructions allow data to be manipulated 32 bits at a time.
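The assembly fragments above are compiler output; the high-level source that produces them never appears in the column. As a rough illustration (the choice of C, the long type, the initial values, and the function name are assumptions added here, not taken from the original), a single statement like the one below could compile to either the six-instruction sequence or the three-instruction sequence, depending on the target:

        /* Illustrative sketch only. On x86 C compilers of this era, long
           is a 32-bit integer on both 16-bit and 32-bit targets, so the
           same statement maps to either sequence shown above. */
        long a = 100000L;
        long b = 200000L;
        long c;

        void add_longs(void)
        {
            c = a + b;  /* 16-bit target: the six-instruction add/adc sequence;
                           32-bit target: the three-instruction EAX sequence */
        }

The source is identical either way; what changes is the instruction set the compiler is allowed to use.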
It's clear from these examples that 32-bit code is superior to 16-bit code when handling 32-bit values, and it will probably run faster because fewer instructions and fewer memory accesses are required. Less obvious is the fact that 32-bit code is no better at handling 16-bit quantities than 16-bit code is. To benefit from 32-bit code, it helps to be dealing with 32-bit data.
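That flip side can be sketched the same way. If the data itself is only 16 bits wide, a 32-bit compiler has no real advantage; it still needs roughly the same three 16-bit instructions a 16-bit compiler would emit. Again, the short type and the names below are illustrative assumptions, not code from the column:

        /* Illustrative sketch only: short is a 16-bit integer on both
           targets, so 32-bit code gains nothing here. */
        short x = 100;
        short y = 200;
        short z;

        void add_shorts(void)
        {
            z = x + y;  /* roughly the same three instructions on either target */
        }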
Published as Tutor in the 11/07/95 issue of PC Magazine.
Copyright (c) 1997 Ziff-Davis Inc.