http://www.notforidiots.com/ULE.php

6. The NV30 debacle

The 3Dfx Rampage was to be called the Spectre 5600/5800. It's sad that NVIDIA decided to name the "FX 5800" after such a weak product. I guess we didn't say "RIP, 3DFX" enough. Or maybe they just didn't listen enough. Like the FX 5800, the Spectre 5800 would have been a nice product if it had shipped on time. Unfortunately, neither 5800 shipped on time, and today's 5800 can't compare with ATI's 2003 product line.

The saga began in late 2000, shortly before NVIDIA bought 3DFX's intellectual property. The NV30 had already been under development for a few months, yet the 3DFX acquisition resulted in a redesign of the NV30, with the delay estimated as only slight: a launch around SIGGRAPH 2002. The original plan seems to have been to launch the NV30 at approximately the same time the GF4 was launched (which did show some 3DFX influence, since its 2D engine very closely resembles Rampage's). After all, NVIDIA seemed omnipotent at the time. A slight delay couldn't kill them. Unfortunately, it wasn't a slight delay for long. Nothing went right after this decision.

The first problem seems to have been the time it took to settle on the final design and to get both the NVIDIA and 3DFX employees productive. The second was a general underestimation of all the related risks, uncertainties, and potential implementation delays. The final problem, which is really the biggest one IMO, was massively exaggerated optimism about the specs, about the (obviously awful) paper-to-silicon transition, and about yields.

The original designs were completely out of this world: 500/1000 clocks (yes, even the original design called for 1000MHz GDDR2 on a 128-bit memory bus, so that's clearly a big design mistake), Programmable Primitive Processor (PPP) support as in the NV40/R400/R500, and multichip support (the exact meaning of "multichip" in that context remains mysterious at best). There were so many redesigns that nobody would ever bother to count them; probably as many as the mythical Rampage went through, and pretty big changes, too. The PPP went poof, and the design went from 6x2/12x0 to the current 4x2/8x0. Heck, had the NV35 been a 6x2/12x0, NVIDIA would significantly beat the Radeons in just about every benchmark but pixel-shading-intensive ones – if 6x2/12x0 means what I think it means, at least. But it obviously didn't happen that way.

But if they did so darn badly, where did the $347M go? One possible explanation is that their "free caviar and champagne" parties were just a TAD too extravagant. But someone, who I hope will excuse me for quoting him here, gives a much more sensible explanation:

"nV had seven NV30 tapeouts (each costs around 10 mil), paid an extra 20 mil for speeding up wafer production, and in the end only the 4th or 5th tapeout chip came back in a working state... it was pure panic. Each wafer can have 126 or 127 (not sure) NV30 chips […] They went into mass production with 7-10 chips per wafer. Each NV30 chip cost nV around 60 bucks. Instead of 10-20."

[Correction, 11/9: The $10M/tape-out cost seems significantly too high; it should be more like $1M – so either that was a typo from the source, or it included a lot of other, barely related costs. Also, the 7-10 chips per wafer figure seems to be for one of the first working tape-outs, when NVIDIA absolutely needed chips to show to the world – the $60-per-chip figure would then be the lowest NVIDIA managed to get costs down to, after several respins. Also, there most likely weren't 7 tape-outs; more likely 4 tape-outs and 3 respins. - Uttar]
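
To put those quoted numbers in perspective, here is a rough back-of-the-envelope sketch; the wafer cost it derives is my own inference from the quote, not a published figure, and the exact die counts and prices are just the ones quoted above:

    # Rough sketch based only on the figures quoted above; the implied
    # wafer cost is an inference, not an official number.
    dies_per_wafer = 126          # candidate NV30 dies per wafer (quoted as "126 or 127")
    good_dies_early = 8           # middle of the quoted 7-10 working chips per wafer
    cost_per_chip_early = 60      # quoted early per-chip cost, in dollars
    cost_per_chip_target = 15     # middle of the quoted $10-20 target range

    # If 7-10 good dies each carry ~$60 of cost, the wafer itself runs roughly:
    implied_wafer_cost = good_dies_early * cost_per_chip_early       # ~$480

    # Good dies needed per wafer to hit the $10-20 target at that wafer cost:
    good_dies_needed = implied_wafer_cost / cost_per_chip_target     # ~32 dies

    print(f"Implied wafer cost: ~${implied_wafer_cost}")
    print(f"Early yield: ~{100 * good_dies_early / dies_per_wafer:.0f}%, "
          f"vs ~{100 * good_dies_needed / dies_per_wafer:.0f}% needed for $10-20/chip")

If those numbers are anywhere near right, NVIDIA was shipping off something like a 6% yield when it needed roughly a quarter of each wafer to work just to hit its own cost target.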