Why does x86 mean 32-bit?
What does x86 or x64 mean?
Why is 32-bit called x86 and not x32? So, the correct question is: why is the 64-bit x86 called x64?

It began with Intel's 8086, which was followed by the 80186, the 80286, and the 80386. After a while, Intel launched a new chip but decided to drop the 80, so it became the 486 instead of the 80486. Likewise, people were dropping the "80" from the front of "80x86" and calling this stuff just x86. Now, I'm pretty sure some will come and say Intel branded their chips x86 at such and such time, which they did, but I don't care.
The fact is that the ever-increasing middle digit gave rise to 80x86, and x86 came from that, even though the 8086 and the 80286 were not 32-bit chips. So, once Intel finally went 64-bit, what did it call its new architecture? IA-64: a brand-new design with no x86 compatibility. And then came AMD, which decided the market wanted a 64-bit CPU that was compatible, to the extent possible, with the x86 family. As a marketing appeal, they called it the "x86-64" family, and they were hugely successful. So much so that Intel ended up grudgingly following with its own 64-bit CPUs based on x86-64.
There is more to a CPU architecture than just the number of bits per register: at the time x86 became established as a name for the architecture, 32 bits per register was not considered an important enough feature to justify building it into the name.

Well, x64 never really came from an instruction set or a chip manufacturer; it originally came from Windows XP. The first 64-bit edition of Windows was given the title Windows XP x64 Edition, and I guess the term x64 stuck from there.

The x86 moniker comes from the 32-bit instruction set.
So all x86 processors (with or without a leading "80") run the same 32-bit instruction set and hence are all compatible, and x86 has become a de facto name for that set, and hence for 32-bit. AMD designed the 64-bit extension to x86, and Intel later licensed it; even still, the name for the 64-bit x86 instruction set is typically AMD64. But mostly it's brand distinction.
Here's what a few different major operating system distributions call the 64-bit flavor: Windows calls it x64; Debian, Ubuntu, and FreeBSD call it amd64; Red Hat and Fedora call it x86_64. But the big thing to note is that they all mean the same thing for marketing purposes.
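To see those labels in practice, here's a minimal Python sketch. It only uses the standard platform module, and the X64_ALIASES set below is my own illustrative collection of the names mentioned above, not an official list.

```python
import platform

# platform.machine() reports whatever label the running OS uses for
# the hardware architecture, so the same 64-bit x86 CPU shows up as:
#   'AMD64'  on Windows
#   'x86_64' on Linux and macOS
arch = platform.machine()
print(f"This OS labels the architecture: {arch}")

# Illustrative set of the aliases discussed above (not an official list).
X64_ALIASES = {"AMD64", "amd64", "x86_64", "x64"}
print("Same old 64-bit x86?", arch in X64_ALIASES)
```

On a 64-bit Windows box this prints AMD64; on a 64-bit Linux box, x86_64. Different names, same instruction set.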
Technically, x86 simply refers to a family of processors and the instruction set they all use; it doesn't actually say anything specific about data sizes. It used to be written as 80x86 to reflect the changing value in the middle of the chip model numbers, but somewhere along the line the 80 in the front was dropped, leaving just x86. Blame the Pentium and its offspring for changing the way in which processors were named and marketed, although all newer processors using Intel's x86 instruction set are still referred to as x86, i386, or i686 compatible, which means they all use extensions of the original instruction set.
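As a rough illustration of that layering, the following sketch (Linux-specific, and written for this answer rather than taken from anywhere) reads the extension flags a CPU advertises in /proc/cpuinfo. The lm ("long mode") flag is the one that marks a chip implementing the 64-bit extension.

```python
# Linux-specific: /proc/cpuinfo exposes the instruction-set extensions
# each core supports as a space-separated "flags" line.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
# 'lm' ("long mode") marks a CPU implementing the 64-bit extension;
# 'sse2', 'avx', and friends are later additions layered on the same
# x86 base, which is why old x86 binaries keep running on new chips.
print("64-bit capable ('lm' flag):", "lm" in flags)
print("Has SSE2:", "sse2" in flags)
```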
The first name for the 64-bit extension to the x86 set was x86-64, which AMD later renamed to AMD64.
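And since the whole x86-vs-x64 distinction comes down to register and pointer width, one last standard-library sketch (again just an illustration) shows how a running program can tell which flavor it is:

```python
import struct

# Pointer size tells you whether the *running process* is 32-bit or
# 64-bit; a 32-bit (x86) program can still run on a 64-bit (x64) OS,
# so this can legitimately print 32 on a 64-bit machine.
bits = struct.calcsize("P") * 8  # "P" = native pointer, 4 or 8 bytes
print(f"This process is {bits}-bit")
```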