
Intel set to announce graphics partnership with Nvidia?

Discussion in 'Novidades Hardware PC' started by Zarolho, 1 June 2007. (Replies: 8; Views: 1304)

  1. Zarolho

    Zarolho Power Member

    Chicago (IL) – Intel may soon be announcing a close relationship with Nvidia, which apparently will be contributing to the company’s Larrabee project, TG Daily has learned. Larrabee is expected to roll out in 2009 and debut as a floating point accelerator product with a performance of more than 1 TFlops as well as a high-end graphics card with dual-graphics capabilities.

    Rumors about Intel’s Larrabee processor have been floating around for more than a year. Especially since the product’s official announcement at this year’s spring IDF and an accelerating interest in floating point accelerators, the topic itself and surrounding rumors are gaining traction every day.

    Industry sources told TG Daily that Intel is preparing a “big” announcement involving technologies that will be key to develop Larrabee. And at least some of those technologies may actually be coming from Nvidia, we hear: Our sources described Larrabee as a “joint effort” between the two companies, which may expand over time. A scenario in which Intel may work with Nvidia to develop Intel-tailored discrete graphics solutions is speculation but is considered to be a likely relationship between the two companies down the road. Clearly, Intel and Nvidia are thinking well beyond their cross-licensing agreements that are in place today.

    It is unclear when the collaboration will be announced; however, details could surface as early as June 26, when the International Supercomputing Conference 2007 will open its doors in Dresden, Germany.

    Asked about a possible announcement with Intel, Nvidia spokesperson Ken Brown provided us with a brief statement: “We enjoy a good working relationship with Intel and have agreements and ongoing engineering activities as a result. This said, we cannot comment further about items that are covered by confidentiality agreements between Intel and Nvidia.”

    We contacted Intel as well, but were unable to obtain a statement by the time this article was published. We will be updating this article as soon as we receive a statement or more information from Intel.

    The AMD-ATI and Intel-Nvidia thingy

    In the light of the AMD-ATI merger, it is only to be expected that the relationship between Intel and Nvidia is examined on an ongoing basis. So, what does a closer relationship between Intel and Nvidia mean?

    The combination with ATI enabled AMD to grow into a different class of company. It evolved from being CPU-focused into a platform company that not only can match some key technologies of Intel, but at least for now has an edge in areas such as visualization capabilities. At a recent press briefing, the company showed off some of its ideas, and it was clear to us that general purpose GPUs in particular will pave the way to a whole new world of enterprise and desktop computing.

    Nvidia is taking a similar approach with its CUDA software interface, which allows developers to take advantage of the (general purpose) floating point horsepower of Geforce 8 graphics processors - more than 500 GFlops per chip. Intel’s Larrabee processor is also aimed at applications that benefit from floating point acceleration – such as physics, enhanced AI and ray tracing.
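
    As a back-of-the-envelope check on the "more than 500 GFlops per chip" figure, the commonly cited breakdown for the Geforce 8800 GTX (our assumption; the article itself does not give the arithmetic) is 128 stream processors at a 1.35 GHz shader clock, dual-issuing a multiply-add plus a multiply, i.e. 3 floating point operations per cycle:

    ```python
    # Peak single-precision throughput estimate for a GeForce 8800 GTX (G80).
    # The inputs (128 SPs, 1.35 GHz shader clock, MAD+MUL = 3 flops/cycle)
    # are commonly cited specs, not figures taken from the article.
    stream_processors = 128
    shader_clock_hz = 1.35e9
    flops_per_cycle = 3          # dual-issued multiply-add (2) + multiply (1)

    peak_gflops = stream_processors * shader_clock_hz * flops_per_cycle / 1e9
    print(f"{peak_gflops:.1f} GFlops")  # ~518 GFlops, i.e. "more than 500 GFlops"
    ```

    The result lands comfortably above the 500 GFlops the article quotes.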

    While it has been speculated that Intel may be creating Larrabee with an IA CPU architecture, we were told there may be more GPU elements in this processor than we previously had thought. A Larrabee card with a (general purpose) graphics processing unit will support CPUs in applications that at least partially benefit from massively parallel processing (as opposed to the traditional sequential processing); in gaming, the Larrabee processor can be used for physics processing, for example.
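
    The sequential-versus-parallel distinction can be sketched with a toy physics step: each particle's update is independent of the others, so the same kernel can run as one sequential loop on a CPU or be mapped over all particles at once on a massively parallel device. A minimal Python illustration (the particle data is invented for the example):

    ```python
    # Toy physics integration step: each particle's update is independent,
    # which is exactly the property a massively parallel processor exploits.
    particles = [(0.0, 1.0), (2.0, -0.5), (4.0, 0.25)]  # (position, velocity)
    dt = 0.1

    def step(p):
        pos, vel = p
        return (pos + vel * dt, vel)

    # Sequential: one particle after another (traditional CPU style).
    seq = [step(p) for p in particles]

    # Data-parallel in spirit: the same kernel mapped over every element;
    # on hardware like Larrabee each application could run on its own lane.
    par = list(map(step, particles))

    assert seq == par  # same result, different execution model
    ```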

    An imminent collaboration announcement between Intel and Nvidia, which reminds us of a recent Digitimes story claiming that Nvidia was trading technologies with Intel, of course raises the question of how close the relationship between Intel and Nvidia might be. It also raises the question, once again, of whether Intel may actually be interested in buying Nvidia – which could make a whole lot of sense for Intel, but appears rather unlikely at this time. Nvidia could cost Intel more than $15 billion, given the firm’s current market cap of $12.6 billion, and the talk in Silicon Valley indicates that Nvidia co-founder and CEO Jen-Hsun Huang isn’t really interested in selling the company.

    But a deal with Intel, involving the licensing of technologies or even supply of GPUs could have a huge impact on Nvidia’s bottom line and catapult the company into a new phase of growth. However, a closer collaboration could be important for Intel as well: AMD’s acquisition of ATI was not a measure to raise the stakes in the graphics market or to battle Nvidia; it was a move to compete in the future CPU market – with Intel. Having Nvidia on board provides Intel with a graphics advantage, at least from today’s point of view, and could allow the company to more easily access advanced graphics technology down the road.

    What we know about Larrabee

    Intel has recently shared more information with the public about its intents in the realm of general purpose GPU (GPGPU) computing. In a presentation from March 7 of this year, Intel discussed its data parallelism programming implementation called Ct. The presentation discusses the use of flat vectors and very long instruction words (VLIW, as utilized in ATI/AMD's R600). In essence, the Ct application programming interface (API) bridges the gap by working with existing legacy APIs and libraries and co-existing with current multiprocessing APIs (Pthreads and OpenMP), while providing “extended functionality to address irregular algorithms.”
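
    The flat-vector idea can be approximated in plain Python. The names below (`TVec`, `scale`) are hypothetical stand-ins, not Ct's real API; the sketch only illustrates whole-vector expressions replacing explicit element loops:

    ```python
    # Illustrative flat-vector operations in the spirit of Ct's data-parallel
    # model. TVec and its methods are invented names for this sketch; the real
    # Ct API had its own types and syntax.
    class TVec:
        def __init__(self, data):
            self.data = list(data)

        def __add__(self, other):      # elementwise add over the whole vector
            return TVec(a + b for a, b in zip(self.data, other.data))

        def scale(self, k):            # elementwise multiply by a scalar
            return TVec(a * k for a in self.data)

    a = TVec([1.0, 2.0, 3.0])
    b = TVec([4.0, 5.0, 6.0])
    c = (a + b).scale(0.5)             # one "vector expression", no explicit loop
    print(c.data)                      # [2.5, 3.5, 4.5]
    ```

    The runtime, not the programmer, would decide how to split such an expression across cores or SIMD lanes.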


    There are several things to point out from the image above, which is a block diagram of a board utilizing Larrabee. First is the PCIe 2.0 interface with the system. Intel is currently testing PCIe 2.0 as part of the Bearlake-X (Beachwood) chipset (commercial name: X38), which could be coming out as part of the Wolfdale 45 nm processor rollout late this year or early in 2008. Larrabee won’t arrive until 2009, but our sources indicate that if you buy an X38-based board, you will be able to run a Larrabee board in such a system.

    In the upper right hand corner, the power connections indicate 150 watts and 75 watts. These correspond to the 8-pin and 6-pin power connections that we have seen on the recent ATI HD2900XT. Intel expects the power consumption of such a board to be higher than 150 watts. There are video outputs to the far left as well as video in. Larrabee appears to have VIVO functionality as well as HDMI output, based on the audio-in block seen at the top left.
    A set of BSI connections sits next to the audio-in connection. We are not positive what the abbreviation stands for, but we speculate that these are connections for using these cards in parallel, like ATI’s Crossfire or Nvidia’s SLI technologies. Finally, there is the size of the processor, which is over twice the size of current GPUs: ATI’s R600 is roughly 21 mm by 20 mm (420 mm²). Intel describes the chip as a “discrete high end GPU” on a general purpose platform, using at least 16 cores and providing a “fully programmable performance of 1 TFlops.”
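
    The 1 TFlops figure is plausible from the stated specs. One possible breakdown (our assumption; Intel has not published the arithmetic) combines 16 cores, 16-wide vector units, a fused multiply-add counting as 2 flops per lane per cycle, and a clock near the top of the projected range:

    ```python
    # One way 16 cores could reach ~1 TFlops; the per-core assumptions here
    # (16-wide SIMD, FMA, 2 GHz clock) are illustrative, not confirmed by Intel.
    cores = 16
    simd_width = 16          # lanes per vector unit (Vec16)
    flops_per_lane = 2       # fused multiply-add counts as two operations
    clock_hz = 2.0e9         # within the projected core frequency range

    peak_tflops = cores * simd_width * flops_per_lane * clock_hz / 1e12
    print(f"{peak_tflops:.3f} TFlops")  # 1.024 TFlops
    ```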


    Moving on, we can see that Larrabee will be based on a multi-SIMD configuration. From other discussions about the chip across the net, it would seem that each core is a scalar unit that works on Vec16 (16-wide vector) instructions. That would mean that, for graphics applications, it could work on blocks of 2x2 pixels at a time. These “in-order” execution SIMDs will have floating point precision as outlined by IEEE 754 (32-bit single precision). Also of note is the use of a ring memory architecture. In a presentation by Intel Chief Architect Ed Davis called “tera Tera Tera”, Davis outlines that the internal bandwidth on the bus will be 256 B/cycle, while the external memory will have a bandwidth of 128 GB/s. This is extremely fast yet achievable given the 1.7-2.5 GHz projections for the core frequency. Attached to each core will be some form of texturing unit as well as a dynamically partitioned cache and a ring stop on the memory ring.
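
    Those ring-bus numbers can be cross-checked: 256 bytes per cycle at the projected clocks yields internal bandwidth well above the 128 GB/s external figure. A quick calculation over both ends of the 1.7-2.5 GHz range:

    ```python
    # Internal ring bandwidth at 256 B/cycle across the projected clock range,
    # compared against the stated 128 GB/s external memory bandwidth.
    bytes_per_cycle = 256
    external_gb_s = 128

    for clock_ghz in (1.7, 2.5):
        internal_gb_s = bytes_per_cycle * clock_ghz  # B/cycle * Gcycles/s = GB/s
        print(f"{clock_ghz} GHz -> {internal_gb_s:.0f} GB/s internal "
              f"({internal_gb_s / external_gb_s:.1f}x external)")
    ```

    Even at the low end the ring would carry more than three times the external memory bandwidth, which is what you would expect for cores sharing data on-die.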

    In the final image below, you will notice that each device will have 17 GB/s of bandwidth per link. These links tie into a next-generation Southbridge tentatively labeled “ICH-n”, as the exact version is yet to be determined. From discussions with those in the industry, it would appear that the external memory might not be soldered onto the board but could in fact come as plug-in modules. The slide denotes DDR3, GDDR, as well as FBD, or fully buffered DIMMs. It will be interesting to see in what form this will actually be implemented, but that is the fun of speculation.


    The current layout of project Larrabee deviates from previous Intel roadmap targets. In a 2005 whitepaper entitled “Platform 2015: Intel Processor and Platform Evolution for the Next Decade”, the company outlined a series of Xscale processors based on Explicitly Parallel Instruction Computing, or EPIC. Intel has deviated slightly from its initial roadmap since the release of this paper: Intel sold Xscale to Marvell last year, which makes it a rather unlikely basis for Larrabee – and could have opened up the discussion for other processing units.

    What is interesting is that rumors that Intel was looking for talent for an upcoming “project” involving graphics began circulating more than a year and a half ago. In August of last year, you could apply for positions on Career Builder and Intel’s own website. A generic job description still exists on Intel’s website.

    Concluding note

    While this is an interesting approach to graphics, physics, and general purpose processing, the proof will be in the final product as well as in its acceptance by independent software vendors (ISVs). In our opinion, the concept of the GPGPU is the most significant development in the computing environment in at least 15 years. The topic has been gaining ground lately, and this new implementation from Intel could take things to a whole new level. As for graphics performance, only time will tell.

    It will be interesting to see which role Nvidia will play in Intel’s strategy. Keep a close eye on this one.

  2. JPgod

    JPgod Moderador
    Staff Member

    CSI, USB 3.0, Robson, etc., deals with Nvidia. :wow:

    Intel-Nvidia should completely vaporize AMD-ATI...
  3. RuFuS

    RuFuS Power Member

    and PCIe 3.0 too... 8|

    will it come with 250 W of power just from the slot? :004: :x2:
  4. Romani48

    Romani48 Power Member

    Well, now this either turns into a monopoly with Intel's prices going through the roof, and then AMD-ATI becomes the system for the "poor" :P

    or, if Intel-nVidia knows how to make the most of it, we will get excellent systems at "low" prices and AMD-ATI will have just been put away and K.O.'d, to my misfortune...

  5. JPgod

    JPgod Moderador
    Staff Member

    Nope, it's PCI-E 2.0
  6. DJ_PAPA

    DJ_PAPA Power Member

    I still haven't figured out what is so extraordinary here to justify that closing comment :rolleyes:

    Do we have to wait until 2009 for 1 teraflop?
    For that, just grab two R600s in CrossFire and you have your teraflop now, in June 2007.

    In GPGPU, it is ATI alone that has been leading the way. Nvidia made a lot of noise with CUDA, but seven months after its launch there is still nothing to show for it. :007:

    PCIe 2.0 is no news either. AMD has already demonstrated its new boards running an HD 2900XT without a single power connector, that is, with both board and graphics card PCIe 2.0 compliant.
    Including a board like this:
    Gigabyte has RD790 with 4 PCIe 2.0

    It won't be this that drives either of them into bankruptcy. Besides, neither of the two is going under.
    Everything will instead depend on Fusion and on Intel's equivalent, because that is a CPU set to take some 70% of the market, not 0.1% like these millionaire platforms.

    Don't even dream of having this in your PC by 2009 at friendly prices:

    BTW, Robson will have to evolve toward some other use, because in 2009 SSDs will arrive in force, which consequently negates the current usefulness of Robson technology.
    Last edited: 1 June 2007
  7. blastarr

    blastarr Power Member

    CUDA only reaches version 1.0 around Computex, a few days from now.
  8. ajax

    ajax Banido

    Maybe we will have to wait for version 2.0. :D

    I could have sworn Blastarr said a few weeks ago that CSI had been discontinued. :D
    Last edited by a moderator: 1 June 2007
  9. blastarr

    blastarr Power Member

    You swear wrong.
    The only thing that happened was a name change, although the new name is also provisional, as it is only for "internal consumption" at Intel:

