Futuremark audit revealed cheats in NVIDIA Detonator FX 44.03 and 43.51 WHQL drivers

Well, here is ATI's justification:

ATI's Chris Evenden responds to the Futuremark 3DMark03 Patch and Driver Audit Report:

"The 1.9% performance gain comes from optimization of the two DX9 shaders (water and sky) in Game Test 4. We render the scene exactly as intended by Futuremark, in full-precision floating point. Our shaders are mathematically and functionally identical to Futuremark's and there are no visual artifacts; we simply shuffle instructions to take advantage of our architecture. These are exactly the sort of optimizations that work in games to improve frame rates without reducing image quality and, as such, are a realistic approach to a benchmark intended to measure in-game performance. However, we recognize that these can be used by some people to call into question the legitimacy of benchmark results, and so we are removing them from our driver as soon as is physically possible. We expect them to be gone by the next release of CATALYST."
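To make that "shuffle instructions" claim concrete, here is a small, purely illustrative sketch in C (not ATI's shader code; the function names and inputs are invented). Both functions compute exactly the same value, with each dependency chain's operations in the same order, so the results are bit-identical; the second version just interleaves the two independent chains the way a shader compiler might reschedule instructions so hardware that can issue several operations per clock has more to work with.

/* Toy analogue of "shuffling instructions without changing the math".
 * Both functions return bit-identical results for the same inputs. */
#include <stdio.h>

/* Version A: the two dependency chains are evaluated one after the other. */
static float shade_serial(float a, float b, float c, float d)
{
    float x = a * b;   /* chain 1 */
    x = x + c;
    x = x * d;
    float y = a + d;   /* chain 2 */
    y = y * b;
    y = y + c;
    return x + y;
}

/* Version B: same operations, same order within each chain, but the two
 * independent chains are interleaved so they can overlap in the pipeline. */
static float shade_interleaved(float a, float b, float c, float d)
{
    float x = a * b;
    float y = a + d;   /* independent of x */
    x = x + c;
    y = y * b;
    x = x * d;
    y = y + c;
    return x + y;
}

int main(void)
{
    float a = 0.25f, b = 0.5f, c = 1.0f, d = 2.0f;
    printf("serial:      %f\n", shade_serial(a, b, c, d));
    printf("interleaved: %f\n", shade_interleaved(a, b, c, d));
    return 0;
}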

So they only gained 1.9%; clearly the "optimizations" nvidia made really are that good. :D
 
Wow, I am pleasantly surprised by ATI's response.

I don't want to turn this into a fight between ATI and nVidia, but compare the two statements.
ATI at least admits the mistake and is going to fix it. I think it would look very good if nVidia came out and cleaned up the image it is leaving behind.

The technology may be the best in the world, but if they start making enemies everywhere (Rambus, for example), they will start losing customers.

Honestly, I don't recognize this nVidia. Back in the days of the Riva 128 and the TNT, things were very different.
 
Futuremark replies to NVIDIA

Well, here it is; I haven't had time to read it yet, I just got home from school...

http://www.guru3d.com/comments.php?category=1&id=1852
http://www.hardavenue.com/#newsitem1054020435,86380,

but I liked this part:

For the bigger part of it you should not blame FutureMark for this though, but blame the parties that started cheating. These are both nVIDIA and ATI and I don't care whether its a 2% or a 25% difference, cheating is cheating. I actually applaud nVIDIA for the way they did it, if you do it then have the b@lls to do it well
:-D :-D :-D
 
For the bigger part of it you should not blame FutureMark for this though, but blame the parties that started cheating. These are both nVIDIA and ATI and I don't care whether its a 2% or a 25% difference, cheating is cheating. I actually applaud nVIDIA for the way they did it, if you do it then have the b@lls to do it well
:-D :-D :-D

That sounds more like the comment of some dumb kid than of a guy who has been paid off...
 
Continuing the Futuremark saga, John Carmack has stated his opinions on shader optimization over at Slashdot.

Rewriting shaders behind an application's back in a way that changes the output under non-controlled circumstances is absolutely, positively wrong and indefensible.
Rewriting a shader so that it does exactly the same thing, but in a more efficient way, is generally acceptable compiler optimization, but there is a range of defensibility from completely generic instruction scheduling that helps almost everyone, to exact shader comparisons that only help one specific application. Full shader comparisons are morally grungy, but not deeply evil.

The significant issue that clouds current ATI / Nvidia comparisons is fragment shader precision. Nvidia can work at 12 bit integer, 16 bit float, and 32 bit float. ATI works only at 24 bit float. There isn't actually a mode where they can be exactly compared. DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit. For just about any given set of operations, the Nvidia card operating at 16 bit float will be faster than the ATI, while the Nvidia operating at 32 bit float will be slower. When DOOM runs the NV30 specific fragment shader, it is faster than the ATI, while if they both run the ARB2 shader, the ATI is faster.

When the output goes to a normal 32 bit framebuffer, as all current tests do, it is possible for Nvidia to analyze data flow from textures, constants, and attributes, and change many 32 bit operations to 16 or even 12 bit operations with absolutely no loss of quality or functionality. This is completely acceptable, and will benefit all applications, but will almost certainly induce hard to find bugs in the shader compiler. You can really go overboard with this -- if you wanted every last possible precision savings, you would need to examine texture dimensions and track vertex buffer data ranges for each shader binding. That would be a really poor architectural decision, but benchmark pressure pushes vendors to such lengths if they avoid outright cheating. If really aggressive compiler optimizations are implemented, I hope they include a hint or pragma for "debug mode" that skips all the optimizations.
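As a purely hypothetical illustration of the "exact shader comparisons" Carmack places at the grungy end of that range (this is not any vendor's actual code; the hash constant, shader strings and function names are all invented): a driver can fingerprint the shader text an application hands it and, on a match with one known benchmark shader, swap in a hand-tuned replacement that only ever helps that one application.

/* Hypothetical sketch of application-specific shader substitution.
 * Everything here (names, hash value, shader text) is invented. */
#include <stdint.h>
#include <stdio.h>

/* FNV-1a hash of the shader source text, used as a cheap fingerprint. */
static uint64_t shader_fingerprint(const char *src)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (; *src; ++src) {
        h ^= (uint8_t)*src;
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Hand-tuned replacement for one specific, known shader (placeholder). */
static const char *HAND_TUNED_WATER_SHADER =
    "...equivalent but hand-rescheduled instructions...";

/* Fingerprint of the benchmark's water shader (invented placeholder value). */
#define KNOWN_WATER_SHADER_FP 0x0123456789abcdefULL

/* Returns the shader the driver will actually compile and run. */
static const char *select_shader(const char *app_src)
{
    if (shader_fingerprint(app_src) == KNOWN_WATER_SHADER_FP) {
        /* Only this one shader from this one application benefits:
         * the "full shader comparison" case Carmack calls morally grungy. */
        return HAND_TUNED_WATER_SHADER;
    }
    /* Anything else falls through to generic optimization, the end of the
     * range that helps almost everyone. */
    return app_src;
}

int main(void)
{
    const char *src = "float4 main(...) { ... }";  /* stand-in shader text */
    printf("substituted: %s\n", select_shader(src) == src ? "no" : "yes");
    return 0;
}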
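And a minimal sketch, under assumptions of my own (the per-pixel expression, the inputs and the crude fp16 emulation are all made up), of why the precision demotion described in that last paragraph can be lossless when the result lands in an ordinary 8-bit-per-channel framebuffer: a 16-bit float keeps about 11 significant bits, so its rounding error stays well below the 1/255 step the framebuffer can express, and the quantized output comes out the same.

/* Compare a small shader-like computation done in 32-bit float against the
 * same computation with every intermediate rounded to roughly 16-bit-float
 * precision, then quantize both to one 8-bit framebuffer channel. */
#include <stdint.h>
#include <stdio.h>

/* Crude stand-in for fp16: round a float to 10 mantissa bits. It ignores
 * fp16's smaller exponent range, which is irrelevant for values near [0,1]. */
static float half_ish(float x)
{
    union { float f; uint32_t u; } v;
    v.f = x;
    v.u = (v.u + 0x00001000u) & 0xFFFFE000u;  /* round off low 13 mantissa bits */
    return v.f;
}

/* Quantize a [0,1] color value to an 8-bit framebuffer channel. */
static uint8_t to_channel(float x)
{
    if (x < 0.0f) x = 0.0f;
    if (x > 1.0f) x = 1.0f;
    return (uint8_t)(x * 255.0f + 0.5f);
}

int main(void)
{
    /* Made-up per-pixel expression: texture sample * diffuse term + ambient. */
    float tex = 0.7f, n_dot_l = 0.4f, ambient = 0.1f;

    float full = tex * n_dot_l + ambient;                 /* fp32 path */

    float demoted = half_ish(half_ish(tex) * half_ish(n_dot_l));
    demoted = half_ish(demoted + half_ish(ambient));      /* "fp16" path */

    /* Both paths land in the same 8-bit bucket for these inputs. */
    printf("fp32 path: value %.7f -> channel %u\n", full, (unsigned)to_channel(full));
    printf("fp16-ish:  value %.7f -> channel %u\n", demoted, (unsigned)to_channel(demoted));
    return 0;
}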


What a huge mess.

In any case, I don't consider optimizations to be cheats unless the final output is altered...
 