Fx5900

Ray tracing on our graphics cards? Only ten years from now. :-D

I may be exaggerating a bit, but ray tracing takes a huge amount of processing power. :)
 
David Kirk is NVIDIA’s chief scientist, and his personal enthusiasm for Cg is easy to see. “What we’re doing here”, he says, “is allowing people to develop graphics at a higher level than was possible before, without getting their fingers dirty with the difficult, low-level assembly language.” Graphics development, he claims, has become much more complex simply due to the speed at which GPUs (Graphics Processing Units) are evolving. Standard PC CPUs follow Moore’s Law, which states that performance doubles roughly every 18 months; however, according to Kirk, GPU speed is currently doubling every six months, and with this increase in speed comes a related increase in complexity and power which has made modern GPUs very difficult to develop for.

“Right now, if you’re a really hotshot programmer you can do some cool stuff with the shaders on current GPUs”, according to Kirk, “but most programmers really struggle to get things up and running... What we’ve created in Cg is a technology that allows game developers to get more out of the time they’re spending on their graphics, and opens up these powerful tools to everyone, not just programming wizards… An experienced C coder can pick up Cg and be writing shaders in about an hour.” An impressive claim, although of course the actual 3D graphics knowledge required to make the shaders do anything useful may take somewhat longer to learn.
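To get a sense of the scale involved, here is a minimal sketch of what a complete Cg fragment shader can look like (a hypothetical illustration, not code from the article; the entry point, parameter names and texture are all assumptions):

// Hypothetical minimal Cg fragment shader: modulates a texture
// sample by lighting interpolated down from the vertex stage.
float4 main(float2 uv       : TEXCOORD0,
            float4 lighting : COLOR0,
            uniform sampler2D baseTexture) : COLOR
{
    // tex2D fetches a texel; the multiply applies the lighting.
    return tex2D(baseTexture, uv) * lighting;
}

The C-style syntax is exactly Kirk's point: anyone comfortable with C can read this at a glance, whereas the equivalent shader assembly would spell out every register move by hand.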


Universal translator

When Kirk says “everyone” in this context, he really does mean everyone, too; he talks about the forthcoming Cg support for industry-standard 3D animation packages Softimage 3D, Lightwave, 3DS Max and Maya, which will allow artists to tweak shader effects in Cg within the editor itself. A third party is also working with NVIDIA on creating a tool to translate shaders from RenderMan – the film industry's standard shader package, used on a variety of movies – into real-time shaders in Cg, and Kirk expects that many movie production studios will use the real-time rendering capabilities of modern hardware, combined with Cg, to prototype their effects and rendering and streamline their production line. “We’re bringing the advantages of real-time rendering into the film production process, just as we’re bringing film quality graphics into the games industry”, boasts Kirk.

By way of example of the type of graphics quality he’s talking about, I’m shown a piece of Cg code for rendering realistic skin onto a face. The end results are certainly impressive – the skin does look appropriately lit and textured, and the entire scene is surprisingly life-like – but perhaps what is most impressive is the fact that the whole process is performed by a mere 20 to 30 lines of Cg code. “In assembly language, this would be thousands of lines of code”, Kirk tells me. The compiler NVIDIA has written for Cg outputs shader assembly at runtime, dynamically creating code for OpenGL or DirectX, depending on which is required, and Kirk is adamant that it creates shaders just as tightly optimised as the most lovingly hand-tweaked code. “Once you start to have huge assembly programs, computers are simply better at optimising them than humans are – we’re looking ahead here to when shaders are gigantic, complex programs which would be impossible to optimise by hand.”
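NVIDIA's actual skin code isn't reproduced in the article, but one common real-time trick for soft, skin-like shading is "wrapped" diffuse lighting, which lets light bleed past the geometric terminator. A hedged sketch in Cg (the names, semantics and wrap parameter are illustrative assumptions, not the demo's real code):

// Hypothetical wrap-lighting shader -- NOT NVIDIA's skin demo.
float4 main(float3 normal   : TEXCOORD0,
            float3 lightDir : TEXCOORD1,
            float2 uv       : TEXCOORD2,
            uniform sampler2D skinTexture,
            // wrap = 0 gives plain Lambert; ~0.5 gives a softer, skin-like falloff
            uniform float wrap) : COLOR
{
    float3 N = normalize(normal);
    float3 L = normalize(lightDir);
    // Wrapped Lambert: remaps dot(N, L) from [-wrap, 1] into [0, 1],
    // so surfaces facing slightly away from the light stay gently lit.
    float diffuse = saturate((dot(N, L) + wrap) / (1.0 + wrap));
    return tex2D(skinTexture, uv) * diffuse;
}

A dozen lines like these are what the Cg compiler expands into the much longer API-specific assembly Kirk describes, whether the target is OpenGL or DirectX.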

:D
 
Man.. believe me.. the cards are used to make PREVIEWS! and not for the final rendering. As has already been said, it's all done by raytracing.

As for the future.. "that belongs to God alone" :D


I remember seeing on the Fight Club DVD that the computer-generated scenes rendered at 8 frames per DAY!! But then again, that was done a few years ago.

-= OK, BACK TO TOPIC NOW =-
 
Originally posted by SoundSurfer
Man.. believe me.. the cards are used to make PREVIEWS! and not for the final rendering. As has already been said, it's all done by raytracing.

As for the future.. "that belongs to God alone" :D


I remember seeing on the Fight Club DVD that the computer-generated scenes rendered at 8 frames per DAY!! But then again, that was done a few years ago.

-= OK, BACK TO TOPIC NOW =-


Where did I ever say anything claiming that a single card made the whole film?
Where did I ever deny the information you guys are stating?

I'm just provoking the ATI lovers with "provocations" in which I always underline the "maybe"!

Maybe this link is better:
http://millimeter.com/ar/video_making_mega_matrix/index.htm

:D

During production, Gaeta's team had a cornucopia of tools at its disposal to make sure the production could figure out real-world parameters for the many 3D elements.

Most of the facilities involved used Maya-based pipelines, with rendering approaches varying between RenderMan, Mental Ray for its global illumination rendering capabilities, and a handful of custom solutions. Shake was widely used for compositing, with both films featuring some Inferno work as well. ESC used its proprietary, photographically based pipeline for acquiring textures for synthetic humans and virtual backgrounds, while other facilities also used Maya 3D StudioPaint for texture work. Filmbox processed motion-capture data, and LightWave performed some modeling work. ESC and most of the other facilities involved also relied heavily on a wide range of proprietary plug-ins.

The toolbox also included laser scanning systems, photogrammetry, tracking markers, tracking cameras, observation cameras, optical trackers, and 3D lasers. All of these provided data to permit Gaeta and his colleagues to create a viable relationship between the 3D world and the physical world. Also, the production extensively used miniatures, optical tracking, a complete package of coders, and what Gaeta claims are probably today's largest blue- and greenscreens.

:D
“The way it was set up, we could take those six disks offline to a [Sony] Tape Robot system, which pulled data off the disks onto tape, and replace them with six other disks,” says Cooper. “We also had a backup capture station always running, and a couple of extra disks, so we had about 20TB of storage on stage during the capture shoot.”

That data was later fed into computer systems running proprietary algorithms designed to analyze facial movement and calculate 3D information, triangulating points on the face from the positions of the five cameras. This allowed Gaeta's artists to view the animated face from nearly any angle.

:D
 
Just so it's clear: I'm an nvidia fan.

I think this thread, in the end, only served to confuse most of the ppl even more.

So now we know that ... all the rendering is done on the processor and in memory, and that the graphics card might as well be a 256 KB Oak Technology.



lol
 