Why did Intel drop the Itanium? [closed]
I was reading up on the history of the computer and I came across the IA-64 (Itanium) processors. They sounded really interesting and I was confused as to why Intel would decide to drop them.
The ability to choose explicitly which instructions you want to run together in a cycle is a great idea, especially when writing your program in assembly, for example, in a faster bootloader.
The hundreds of registers should be convincing for any assembly programmer. You could essentially keep all of a function's variables in registers if it doesn't call any other functions.
The ability to do three operand instructions like this:
(qp) xor r1 = r2, r3 ; r1 = r2 XOR r3
(qp) xor r1 = (imm8), r3 ; r1 = (imm8) XOR r3
versus having only two to work with:
; eax = r1
; ebx = r2
; ecx = r3
mov eax, ebx ; r1 = r2
xor eax, ecx ; r1 = r2 XOR r3
mov eax, (imm8) ; r1 = (imm8)
xor eax, ecx ; r1 = (imm8) XOR r3
I heard it was because of the lack of backwards x86 compatibility, but couldn't that have been fixed by just adding the Pentium circuitry plus a processor flag that would switch it into Itanium mode (like switching to Protected or Long mode)?
All those great things would surely have put them a giant leap ahead of AMD.
Any ideas?
When I was working on a WinForms project (targeting only .NET 2.0) in Visual Studio 2010, IA-64 was one of the available compile targets. That means there was a .NET runtime compiled for IA-64, and a .NET runtime implies Windows. Plus, Hamilton's answer mentions Windows NT. Having a full-blown OS like Windows NT means there was a compiler capable of generating IA-64 machine code.
Performance was very disappointing compared to expectations, and it didn't sell well compared to Intel's x86 architectures.
Intel talked me into building my Hamilton C shell on Itanium running Windows NT sometime around 2000 for a trade show. Itaniums were hard to come by so I used a VPN to a machine in their lab. Having already built versions for NT on x86, MIPS, Alpha and PowerPC, the "port" was trivial, just minor tweaks mostly to my makefiles. I think it took me maybe a half hour.
But the performance was truly underwhelming, definitely so over the VPN, and still disappointing when I got to the trade show and could try it right there in person. Itanium went nowhere because it wasn't a great product and nobody bought it.
Added:
For a while, Intel touted my experience porting to the Itanium over their VPN-based remote development service on their website. It's gone now but snapshotted at archive.org; here's what their remote FAQ said:
Q: Do you have a customer I can talk to about the Remote Access service?
A: Yes, Hamilton Laboratories*. For an in-depth look at the benefits Hamilton Laboratories derived from the service, see the Hamilton Laboratories case study.
In the "case study" it says I built an Itanium version because customers were clamoring for it. But I don't recall ever selling a copy for Itanium. Sold them for everything else, including PowerPC (and how many of those running NT do suppose there were?) just never for Itanium.
Challenge: To accelerate development of its Hamilton C Shell product to ensure a favorable time-to-market port of its customers' architecture tools for Intel® Itanium® and Windows* 2000.
Solution: Used the Remote Access Program, including high speed Internet access and Shiva® VPN client to access an Itanium development environment, modifying source code and make files, testing debugging and recompiling 64-bit application remotely in just 7 hours time.
Quick answer: Poor performance. Intel tried to release a revolutionary product when they should have evolved to the product they wanted.
More specifically: the processor was not fast enough under general circumstances. Intel released it just as the gap between processor speed and memory speed was widening. Itanium's EPIC (Explicitly Parallel Instruction Computing) encoding, which packs three instructions into each 128-bit bundle, required more bytes per instruction than its x86 cousins. The increased memory traffic caused the processor to run slowly.
All this was exacerbated by the entire architecture being, essentially, a first release. While the underlying ideas were not new, many of the hardware components were, and they needed new layout designs. There were also many new ideas in the Itanium instruction layout that needed to be thoroughly digested by the development community before high-quality software would become available.
In the end, a lot of the technology did end up getting used in Intel's later chips, just not in a way easily visible to the end user.
The Itanium is a great design if you can leverage its advantages.
Sadly, this means you need a very advanced compiler to do it, possibly even one per specific model of the CPU (e.g., a newer version of the Itanium with an extra feature would require a different compiler).
Creating such a compiler once is a hard task. Doing it for every variation of a CPU is not economical.
The other important part of the Itanium history that hasn't really been touched on is that in 2001, at the Itanium's debut, it was impossible to get large amounts of RAM into commodity hardware. x86_64 was just a blip on the horizon, and AMD's Opterons wouldn't be released for another two years.
My first (and only) experience with an Itanium server was in 2002 at a chemical company that needed an SQL Server to analyze oil for defects. This oil was coming from and going into multi-million dollar machines at a billion-dollar company, so they had a cluster of Itaniums, each with 128 GB of RAM. 128 GB is still a fair amount of RAM today, but it is now easy and cheap to install in a server.
In 2002, 128 GB of RAM was a mammoth amount, and as they already had an existing SQL Server infrastructure, it was cheaper to fork out for a few Itanium machines and load them up with RAM than it was to switch to a different platform and a different database.
Now that it's trivial to get 128 GB (or more) into a commodity server, one of the large parts of the Itanium market that had no real viable competitors (the Opteron came along in 2003, and servers that can take hundreds of gigs of memory are now ubiquitous) is flooded with options that are cheaper to buy, cheaper to own, and faster.
I heard that it was because AMD pushed Intel to allocate more resources to its mainstream processors in order to compete. AMD came out with the Athlon 64 in 2003, which had better price/performance than the Pentiums. Some believe that if Intel had continued to develop Itanium in full force, it would be faster than today's x86 processors.