[Interview] Tim Sweeney - UT2007, GPU, PPU and CPUs

SilveRRIng

An excellent interview with one of the gurus of graphics engines.

He talks about his studio's upcoming games, the architecture of PPUs, the possible impact and limitations of Havok, as well as the GPU solution for PhysX...

He also gives a preview of what all this hardware will (or should) look like about 10 years from now.

Finally, he has the courage to say what few people have said: the P4 SUCKS!

(I don't entirely agree with that last point, it should be said.)



Jacob- How will UT2007 use Ageia-enhanced physics effects? Increased object counts? Fluid effects?

Sweeney- Anywhere from explosions that have physically interacting particles... we are also looking at fluid effects to see where we can use those, gee, blood spurts sound like they might be a good candidate! A lot of other special effects like that, where they don't affect the core gameplay, so that players with the physics hardware and players without the physics hardware can all play together without any restrictions.

Jacob- There was a lot of controversy with the recently released Ghost Recon, where some players got lower performance when enabling Ageia effects because the video card has to render more objects. Is that something that should be expected, or should the framerate be the same?

Sweeney- For the record, acceleration hardware is supposed to accelerate your framerate, not decrease it! [laughs] That seems like it's just a messy tradeoff that they made there. You certainly want your physics hardware to improve your framerate. That means that the physics hardware might in some cases be able to update more objects than you can actually render in a frame, so you need to have some sort of rendering LOD scheme to manage the object counts, and obviously you don't want to take this ultra-fast physics card and plug it into a machine with a crummy video card. You really want to have a great video card to match up with your physics hardware, and also a decent CPU, to have your system in balance and really be able to take full advantage of it.
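(As an illustration of the kind of rendering LOD scheme Sweeney mentions, here is a minimal sketch. The names, numbers and approach are hypothetical, not Epic's actual code: the client simply caps how many physics-driven debris objects it draws, keeping the closest ones first, so a weaker video card isn't swamped by everything the physics hardware can simulate.)

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal sketch of a render-side LOD budget for physics debris.
struct DebrisPiece {
    float x, y, z;      // world position
    float distanceSq;   // filled in each frame
};

// Keep only the closest `budget` pieces; the physics hardware may simulate
// far more objects than the video card can afford to draw.
std::vector<DebrisPiece> selectVisibleDebris(std::vector<DebrisPiece> pieces,
                                             float camX, float camY, float camZ,
                                             std::size_t budget) {
    for (auto& p : pieces) {
        const float dx = p.x - camX, dy = p.y - camY, dz = p.z - camZ;
        p.distanceSq = dx * dx + dy * dy + dz * dz;
    }
    std::sort(pieces.begin(), pieces.end(),
              [](const DebrisPiece& a, const DebrisPiece& b) {
                  return a.distanceSq < b.distanceSq;
              });
    if (pieces.size() > budget)
        pieces.resize(budget);  // drop the farthest pieces first
    return pieces;
}
```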

Jacob- How about Ageia effects over a network? Is that supported, or is it client-side? I imagine that trying to push that amount of physics data through the network, there might be a bottleneck.

Sweeney- There are a number of networking solutions for physics. What we are doing in UT2007 is using the physics hardware only for accelerating special-effects objects, things where the server tells the client, "Spawn this special effect here!" The client responds with an explosion with thousands of particles, and each of those operates as a separate physics object, but it doesn't affect the gameplay... it's just purely a visual effect there. That's the easiest and most common solution.
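(A minimal sketch of that pattern, with hypothetical names rather than the real UE3 networking API: the server replicates only a tiny "spawn effect" event, and each client expands it into its own local, purely cosmetic particles, so the physics never has to cross the wire.)

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical event the server replicates to every client.
// Only this small message crosses the network; the particles themselves don't.
struct SpawnEffectEvent {
    std::uint16_t effectId;   // e.g. "big explosion"
    float x, y, z;            // where to spawn it
};

// Purely visual particle; it never feeds back into gameplay state.
struct Particle {
    float px, py, pz;
    float vx, vy, vz;
};

// Client-side handler: expand the event into thousands of local particles
// that the physics hardware (or a CPU fallback) can then simulate.
std::vector<Particle> onSpawnEffect(const SpawnEffectEvent& ev,
                                    std::size_t particleCount) {
    std::vector<Particle> particles;
    particles.reserve(particleCount);
    for (std::size_t i = 0; i < particleCount; ++i) {
        // Each client can spread particles however it likes;
        // nothing here affects gameplay, so clients need not match.
        const float spread = 0.01f * static_cast<float>(i % 100);
        particles.push_back({ev.x, ev.y, ev.z, spread, 1.0f, -spread});
    }
    return particles;
}
```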

Some of the other solutions that it looks like other teams are using only enable the physics hardware's networking in a LAN environment, where the entire physics state of the world is replicated to all the clients. That requires a vast amount of bandwidth, more than even a broadband connection has, so that's not very practical.
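(As a rough, purely illustrative back-of-the-envelope figure: replicating, say, 1,000 rigid bodies at 30 updates per second, with around 50 bytes of position, orientation and velocity per body, already works out to about 1.5 MB/s, roughly 12 Mbit/s per client, well beyond what a typical broadband connection of the time could sustain.)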

The other approach is to run a peer-to-peer lockstep game, which would be ideal for something like a fighting game or some other game with 2 or 4 players playing against each other, where the entire game runs in lockstep, everybody has the hardware, and the entire game state evolves deterministically on all of the machines.
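(A minimal sketch of the lockstep idea, with hypothetical names, not taken from any shipping engine: every peer applies the same inputs in the same order to a fully deterministic simulation, so only the inputs ever need to be exchanged.)

```cpp
#include <cstdint>
#include <vector>

// Per-player input for one simulation frame.
struct PlayerInput {
    std::uint8_t buttons;
    std::int8_t  stickX, stickY;
};

struct GameState {
    std::uint32_t frame = 0;
    // ... positions, physics state, etc.
};

// Must be fully deterministic: the same inputs on every machine
// have to produce exactly the same resulting state.
void simulateFrame(GameState& state, const std::vector<PlayerInput>& inputs) {
    // advance physics and gameplay using only `state` and `inputs`
    (void)inputs;
    state.frame++;
}

// Each peer runs this once per frame, but only after it has received every
// player's input for that frame, which is why lockstep suits small player counts.
void lockstepTick(GameState& state,
                  const std::vector<PlayerInput>& allPlayerInputsForFrame) {
    simulateFrame(state, allPlayerInputsForFrame);
}
```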

Jacob- Havok recently announced the ability to accelerate physics on the GPU. Is that necessarily a bad idea?

Sweeney- That's a good approach, they have some good technology there. Havok has a physics system that runs largely on the GPU to accelerate the computations. It seems to be lower-precision physics than you have for the rest of the game, which is problematic. You really want all the physics in the world to run at a consistent level of precision, so you don't have to make weird trade-offs there. I guess there is also the trade-off that, if your GPU is doing both physics and graphics, then you are not getting the full utilization out of the system.

Jacob- Have you guys ever considered the possibility of maybe allowing console players to play against PC players in UT2007? Or is it too difficult to balance the different players?

Sweeney- The question of PC players playing against console players, or even console players on PS3 playing against Xbox 360, is really more of a gameplay question than a technical one, because right now we do support running a PC server and having PC clients join and play alongside PS3 and Xbox 360 clients in our game's networking framework. That basic approach works, but on the Xbox 360 side Microsoft has chosen to keep the network completely closed, so you won't have PC players playing on Xbox Live. Kind of an unfortunate decision, but it allows them to secure their network and control it carefully, which you kind of want in a console environment. On PS3, we might actually enable that. When we get down to balancing the game to really play well on a controller, if it turns out we don't have to change the gameplay significantly, then we will enable PC players and PS3 players to play together, and we are really looking forward to that. We really like the idea a lot, and it looks like Sony will be very supportive of the open network approach.

Jacob- It seems like both Nvidia and ATI are looking at unified shader architectures instead of separate vertex and pixel shaders. Do you think that is beneficial to gaming on a graphical level?

Sweeney- Having one unified shader architecture enables you to do dynamic load balancing. Some scenes have an enormous number of pixels with a small number of triangles or vertices; some have huge amounts of geometry with simple pixel shaders. You really want to have all the computing power in the chip utilized all the time, and that means being able to shift the resources around between any potential use: pixels, geometry, vertices, or just general computation that you happen to be doing on the GPU. So I think this is just the very beginning of a long-term trend where GPUs will become more and more generalized and eventually turn into full CPU-like computing devices. Obviously the problem that GPUs are optimized to solve will always be a very different problem from CPUs, but they will certainly become more and more general-purpose, to the point where you can write a C program and compile it for your GPU in the future.

Jacob- This isn't really a UT2007 question, this is more a UE3 question: do you guys have any plans to add DX10 features to UE3?

Sweeney- Oh yeah, absolutely, we will have full support for DX10. We will use their geometry shader stuff to accelerate shadow generation and other techniques in the engine, and we will be using virtual texturing. With both UT2K7 and Gears of War we are offering our textures at extremely high resolution, like 2000x2000, which is a higher resolution than you can effectively use on a console because of the limited memory, but it is something that certainly will be appropriate for PCs with virtualized texturing in the future, so we will wholeheartedly be supporting DX10. It's hard to say what the timeframe will be on that, because Vista could ship this year, or next year, or whatever. But we will certainly be supporting it really well when it comes along.

Jacob- And ten years from now, do you envision that we will see... GPUs handling graphics, PPUs handling physics, the CPU doing A.I. and that kind of thing, or do you think we will see some kind of blend of the 3 technologies, or maybe 2 of them?

Sweeney- Looking at the long-term future, the next 10 years or so, my hope and expectation is that there will be a real convergence between the CPU, the GPU, and non-traditional architectures like the PhysX chip from Ageia and the Cell technology from Sony. You really want all of those to evolve toward a large-scale multicore CPU that has a lot of the non-traditional computing power that a GPU has now. A GPU processes a huge number of pixels in parallel using relatively simple control flow; CPUs are extremely good at random-access logic, lots of branching, handling cache and things like that. I think, really, essentially, graphics and computing need to evolve together, to the point where future renderers, I hope and expect, will look a lot more like a software renderer from previous generations than the fixed-function rasterizer pipeline and the stuff we have currently. I think GPUs will ultimately end up being... you know, when we look at this 10 years from now, we will look back at GPUs as being kind of a temporary fixed-function hardware solution to a problem that ultimately was just general computing.

Jacob- And this isn't really related to UT2007 or UE3, but I'm sure you have heard John Carmack talking a lot about Mega-texturing and how he uses it. I was just wondering what your thoughts on it were?

Sweeney- So, Mega-texturing is this idea of applying absolutely unique textures to every object in your environment, everywhere. Computationally it looks kind of difficult, because our resolutions are always going up at a steady rate, the amount of detail in our environments is increasing, and the size of our environments is increasing. This, to me, implies that you want to reuse your content very frequently. You want to be able to build a mesh in one place and reuse it in hundreds of places in the environment, just with minor modifications here and there. So, if you're going to move in a mega-texturing direction, I think you really have to look at that in the context of a larger material system that lets you instance objects and share assets and not have an explosion in the amount of content-creation work that's required, because if an artist has to sit down and paint every little detail on every object in the world, that's an uneconomical approach to game development. So in order for mega-texturing to work on a large scale, I think you need excellent tools for being able to instance and reuse all this data, so you save the artists' time and they don't have to rebuild custom things all over the place.
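(A minimal, hypothetical sketch of the kind of instancing Sweeney is describing, not any engine's real API: one authored mesh and material shared by many placements, each carrying only small per-instance overrides, instead of uniquely painted texture data everywhere.)

```cpp
#include <cstdint>
#include <vector>

// One authored asset, built once by an artist.
struct MeshAsset {
    std::uint32_t meshId;
    std::uint32_t baseMaterialId;
};

// Cheap per-placement overrides: the asset is reused hundreds of times
// with minor modifications instead of unique hand-painted detail.
struct MeshInstance {
    const MeshAsset* asset;
    float position[3];
    float scale;
    std::uint32_t tintColor;   // small material override
};

// Illustrative scene fragment: many instances, a single authored asset.
std::vector<MeshInstance> buildRuinedWallRow(const MeshAsset& wall, int count) {
    std::vector<MeshInstance> instances;
    instances.reserve(static_cast<std::size_t>(count));
    for (int i = 0; i < count; ++i) {
        instances.push_back({&wall,
                             {static_cast<float>(i) * 4.0f, 0.0f, 0.0f},
                             1.0f + 0.05f * static_cast<float>(i % 3),
                             0xFFCCBBAAu});
    }
    return instances;
}
```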

Jacob- Just a few more questions here, kind of in the tradition of last year... R600 or G80?

Sweeney- Haha, well, at Epic we are using mainly the GeForce 7800 and 7900, and we have a few ATI cards; they perform really well, and we are really happy with the solutions from both companies.

Jacob- One more question, kind of in the same fashion: Conroe or AM2?

Sweeney- Haha, well, that's hard to say. I don't know much about AM2, but Conroe is a really fantastic chip. The funny thing is that very few people in the industry have been willing to come out and say that the Pentium 4 architecture sucks. It sucked all along. Even at the height of its sucking, when it was running at 3.6GHz and not performing as well as a 2GHz AMD64... people were reluctant to say it sucked... so IT SUCKS! But Conroe really makes up for that, and I am really happy to see that Intel is back on this track of extremely high computing performance at reasonable clock rates, not having to scale up to these extremely hot-running systems that require a lot of cooling. I think they are on a good track there, and if they scale the Conroe architecture up to four and eight cores in the future, then that will put the industry on a really good track for improving performance dramatically, at a faster rate than we have seen in the past.


Source
 
Interesting interview :)
But then again, wasn't this the guy who made a prophecy along the lines of "in 2007, 1GB on graphics cards will be the norm"?

In any case, I found his view of how the technology should evolve interesting. Basically, not Cell, not CPUs, not GPUs, not PPUs... CGPCellPUs lol :D
 
"Por fim, tem a coragem de dizer o que pouca gente disse: P4 SUCKS!"

:D

Well, I think he means gaming performance, since for other CPU workloads the Pentium 4 did have some advantages. But in general I never admired the P4, since I use my machine almost exclusively for gaming, and AMD parts have generally turned out cheaper as well. That's also why I only partly agree with what he said.

Anyway, Conroe is on its way, so that's the end of the whining. :D

That aside, Tim (not to dismiss his talent) keeps sketching out a dead end for both nVidia and ATI. It's not the first time he has ended up saying that GPUs will be useless a few years from now. The same may happen with PPUs.

I believe that could happen, for the reasons he mentioned. But 10 years from now, what kind of CPUs will we have? How many cores?
 
4

You may not agree, but I do.. very much so.. the P4 sucks, big time...

Could it be that they did some industrial espionage and realized that it's the performance-per-megahertz balance that counts?? Do they have an insider at AMD??
 