Intel goes into graphics

Zarolho

Power Member
Meet Larrabee, Intel's answer to a GPU

WE FIRST TOLD you about Intel's GPU plans last spring, and the name, Larrabee, last summer. That brings up the question of just what the heck it is, other than the utter death of Nvidia. Intel decided to talk about Larrabee last week to VR-Zone (nice catch, guys), so I guess that makes it open season on info. VRZ got it almost dead on: the target is 16 cores in the early 2009 time frame, but that is not a fixed number. Due to the architecture, the count can go down in an ATI x900/x600/x300 fashion, maybe 16/8/4 cores respectively, but technically speaking it can also go up by quite a bit.

What are those cores? They are not GPUs, they are x86 'mini-cores', basically small, dumb, in-order cores with a staggeringly short pipeline. They also run four threads per core, so a total of 64 threads per "CGPU". To make this work as a GPU you need instructions, vector instructions, so there is a hugely wide vector unit strapped onto each core. The instruction set, an x86 extension for those paying attention, will have a lot of the functionality of a GPU.

What you end up with is a ton of threads running on a super-wide vector unit with the controls in x86. You use the same tools to program the GPU as you do the CPU, the same mnemonics, the same everything. It also makes it a snap to use the GPU as an extension of the main CPU.
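
To make the 'same tools as the CPU' point concrete, here is a minimal, purely illustrative sketch (ours, not Intel's toolchain or API): graphics-style per-pixel work written as an ordinary C++ loop that an x86 compiler can spread across wide vector lanes and hardware threads, which is exactly the kind of code these mini-cores are built to chew through.

#include <cstddef>
#include <vector>

// Illustrative only: a "pixel shader" written as plain CPU code.
struct Pixel { float r, g, b; };

void shade_scanline(std::vector<Pixel>& out, float light)
{
    for (std::size_t x = 0; x < out.size(); ++x) {
        // Every iteration is independent, so the work maps cleanly
        // onto SIMD lanes and onto the chip's many hardware threads.
        out[x].r *= light;
        out[x].g *= light;
        out[x].b *= light;
    }
}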

Rather than simply speeding up the traditional 3D pipeline of putting points in space, connecting them, painting the resultant triangles and then twiddling them, Intel is throwing that model out the window. Instead you get the tools to do things any way you want; if you can build a better mousetrap, you are more than welcome to do so, and Intel will support you there.
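
To give an idea of what 'any way you want' looks like in practice, here is a tiny, hypothetical sketch of one stage that would normally be fixed-function hardware, a triangle coverage test using edge functions, written as ordinary software; none of this is Intel's actual interface, the point is only that a renderer built this way is just code.

// Illustrative only: hand-rolled triangle coverage, the sort of
// pipeline step a software renderer on such a chip would implement itself.
struct Vec2 { float x, y; };

// Signed area term: positive when p lies to the left of edge a->b.
static float edge(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if pixel centre p is inside the counter-clockwise triangle abc.
bool covered(Vec2 a, Vec2 b, Vec2 c, Vec2 p)
{
    return edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0;
}
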
Those are the cores, but how are they connected? That one is easy: a hugely wide bi-directional ring bus. Think four digits of bit width, not three, and Tbps, not Gbps, of bandwidth. It should be 'enough' for the average user; if you need more, well, now is the time to contact your friendly Intel exec and ask.
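
To put those units in perspective, a back-of-the-envelope check with made-up but plausible numbers: a 1024-bit ring clocked at 2 GHz moves 1024 bits x 2 x 10^9 transfers per second, roughly 2 Tbps (about 256 GB/s) in each direction, so a four-digit bus width at multi-GHz clocks really does land in the terabit range.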

As you can see, the architecture is stupidly scalable: if you want more cores, just plop them on; if you want fewer, delete nodes, not a big deal. That is why we said 16, but it could change up or down on a whim. The biggest problem is bandwidth usage as a limiter to scalability, but 20 and 24 core variants seem quite doable.

The current chip is 65nm and was set for first silicon in late '07 last we heard, but this was undoubtedly delayed when the project was moved from late '08 to '09. This info is for a test chip; if you see a production part, it will almost assuredly be on 45 nanometres. The one being worked on now is a test chip, but if it works out spectacularly, it could be made into a production piece. What would have been a hot and slow single-threaded CPU makes for an average GPU nowadays.

Why bring up CPUs? When we first heard about Larrabee, it was undecided where the thing would slot in, CPU or GPU. It could have gone the way of Keifer/Kevet, or been promoted to full CPU status. There was a lot of risk in putting out a massively threaded CPU that can't run a single thread at speed to save its life.

The solution would be to plop a Merom or two in the middle, but seeing as the chip was already too hot and too big, that isn't going to happen, so instead a GPU was born. I would think that the whole GPU notion is going away soon, as the whole concept gets pulled on die, or more likely adapted as tiles on a "Fusion-like" marchitecture.

In any case, the whole idea of a GPU as a separate chip is a thing of the past. The first step is a GPU on a CPU like AMD's Fusion, but this is transitional. Both sides will pull the functionality into the core itself, and GPUs will cease to be. Now do you see why Nvidia is dead?

So, in two years, the first steps to GPUs going away will hit the market. From there, it is a matter of shrinking and adding features, but there is no turning back. Welcome the CGPU. Now do you understand why AMD had to buy ATI to survive? µ

http://www.theinquirer.net/default.aspx?article=37548
 
«...Larrabee is ~16 tiny in-order cores with runahead execution, 4 threads per core (don't know if it's SMT or like Niagara's FMT), and large vector processing capability. I also heard that there's a regular powerful OoO CPU on-die as well (maybe it's off-die?).

This news item is definitely real. I have confirmation from multiple sources about the truth of this claim (actually Intel did a presentation on this a few days ago).

If it helps, you can think of Larrabee as Cell x 2...»

http://www.xtremesystems.org/forums/showthread.php?t=133286
 
Intel Discrete GPU Roadmap Overview

Intel's Visual Computing Group (VCG) gave an interesting overview of its discrete graphics plans this week. There seem to be a few interesting developments down the pipeline that could prove quite a challenge to NVIDIA and AMD in two years' time. As already stated on their website, the group is focused on developing advanced products based on a many-core architecture, targeting high-end client platforms initially. Their first flagship product for games and graphics-intensive applications is likely to arrive in the late 2008-09 timeframe, and the GPU is based on a multi-core architecture. We heard there could be as many as 16 graphics cores packed into a single die.

The process technology we speculate for such a product is probably 32nm, judging from the timeframe. Intel clearly has the advantage of its advanced process technology, since it is always at least one node ahead of its competitors and good at tweaking for better yield. Intel is likely to reuse its CPU naming convention for GPUs, so you can probably guess that the highest end could be called Extreme Edition, with mainstream and value editions below it. The performance? How about 16x the performance of the fastest graphics card out there now [referring to G80], as claimed. Anyway, it is hard to speculate who will lead by then, as it will be the DX10.1/11 era, with NVIDIA G9x and ATi R7xx around.


http://www.vr-zone.com/?i=4605




 
But what the hell is this thing???

An aberration... it's almost a direct translation of the card's codename...

So Valve recently came out saying they were working on a multi-threaded version of the "Source" engine, that it was a massive undertaking and that hardly anyone would even dare go down that road, and now Intel shows up with a "CGPU" running hundreds of threads????

Man... I don't even want to comment on this...

As for the performance of this "supposed" card, maybe only for running SuperPi... because everyone is well used to programming in DX and to the current 3D approach, and I want to see how Intel is going to get people to switch programming languages just like that...

I see this CGPU the same way as the Cell...

The Cell was also supposed to handle the 3D calculations and such, with no need for a traditional graphics chip. It clearly didn't work, hence the inclusion of the RSX (although the Cell does handle the vertex shading calculations; yes, CPUs are very good at that kind of maths). Now...

This approach from Intel??? I can even understand it... CPUs are what they know how to make!!! GPUs, not so much... and as for catching their rivals, forget it... they are way behind. Whether this will ever take off... I honestly doubt it very much... it looks good on paper and in the PowerPoint slides...
 
I was also starting to think Intel was taking too long to enter this business, since I believe they have the technology to do it (or the ability to research it).
 
I think an attitude like this deserves praise. They bring something new, yet "simple". If they can make it as compatible as the alternatives, something decent might come out of this :)
What is really impressive is that only yesterday the Cell was a "supercomputer" all by itself... and look how far things have come already... 8|
 
Do you really think they need to buy nvidia? I don't think so... Maybe hire a handful of experienced people so they don't have to "reinvent the wheel", but I don't think they need anything more.

If they don't need anything, why are they licensing technology from Imagination Technologies (which the "old-timers" will know as the creators of the PowerVR/Kyro/Kyro 2 chip series, as well as the graphics chipset of the Sega Dreamcast/Naomi arcade system)?
And there is still the matter of patents...
 
Well... Intel was already announcing CPUs with integrated GPUs (GMAs) for low-end PCs and laptops before AMD bought/merged with ATi, so it's natural that they invest in them now that the bar has been raised...

But I have doubts this will reach the high end: graphics on the CPU can't be upgraded, and you don't get the options a separate product line gives you. The CPU is also made by a single brand; there are no OEMs for the CPU itself, only for motherboards, whereas graphics cards don't work that way, with several companies putting out their own PCBs and their own choice of RAM. A high-end integrated part doesn't suit Nvidia at all (for example).

The way I see it, it's more a way of maximising the performance and reducing the power consumption of graphics that would otherwise be integrated on the motherboard.

A high-end GPU would greatly increase the core die, and everything that used to live on the card's PCB would have to move onto the motherboard, which would really limit expandability.

But the performance gains from doing this would also be welcome for conventional graphics cards, no doubt.

EDIT: I've only now read the article in full; their intention is to "kill off" the notion of the GPU. As I said above, I don't think that's likely to happen.
 
If they don't need anything, why are they licensing technology from Imagination Technologies (which the "old-timers" will know as the creators of the PowerVR/Kyro/Kyro 2 chip series, as well as the graphics chipset of the Sega Dreamcast/Naomi arcade system)?
And there is still the matter of patents...

I didn't mean to say they don't need anything or anyone, of course they do. I was just trying to say that I believe Intel is more than capable of carrying this project through successfully, but I also think what I_Eat_All said is spot on.
 