
Carmack apologizes to Matrox, speaks well of the P10, shows little interest in Cg

Discussion in 'Novidades Hardware PC' started by ToTTenTranz, 29 June 2002. (Replies: 5; Views: 859)

  1. ToTTenTranz

    ToTTenTranz Power Member

    "Welcome to id Software's Finger Service V1.5!

    Name: John Carmack
    Description: Programmer
    Last Updated: 06/27/2002 21:18:25 (Central Standard Time)
    June 27, 2002
    More graphics card notes:

    I need to apologize to Matrox -- their implementation of hardware displacement
    mapping is NOT quad based. I was thinking about a certain other company's
    proposed approach. Matrox's implementation actually looks quite good, so even
    if we don't use it because of the geometry amplification issues, I think it
    will serve the noble purpose of killing dead any proposal to implement a quad
    based solution.

    I got a 3Dlabs P10 card in last week, and yesterday I put it through its
    paces. Because my time is fairly over committed, first impressions often
    determine how much work I devote to a given card. I didn't speak to ATI for
    months after they gave me a beta 8500 board last year with drivers that
    rendered the console incorrectly. :-)

    I was duly impressed when the P10 just popped right up with full functional
    support for both the fallback ARB_ extension path (without specular
    highlights), and the NV10 NVidia register combiners path. I only saw two
    issues that were at all incorrect in any of our data, and one of them is
    debatable. They don't support NV_vertex_program_1_1, which I use for the NV20
    path, and when I hacked my programs back to 1.0 support for testing, an
    issue did show up, but still, this is the best showing from a new board from
    any company other than Nvidia.

    It is too early to tell what the performance is going to be like, because they
    don't yet support a vertex object extension, so the CPU is hand feeding all
    the vertex data to the card at the moment. It was faster than I expected for
    those circumstances.
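
    (Aside, not part of the .plan: a minimal sketch of the difference being
    described here, assuming plain OpenGL 1.1 C code. In immediate mode the
    CPU pushes every vertex through a function call; a client-side vertex
    array at least hands the driver the whole buffer in one call, and a
    vertex object extension would go further by keeping the data on the card.)

    #include <GL/gl.h>

    void draw_hand_fed(const float *xyz, int n)
    {
        /* One function call per vertex: the CPU is doing the feeding. */
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < n; i++)
            glVertex3fv(&xyz[3 * i]);
        glEnd();
    }

    void draw_vertex_array(const float *xyz, int n)
    {
        /* The driver pulls the whole buffer itself from one pointer. */
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, xyz);
        glDrawArrays(GL_TRIANGLES, 0, n);
        glDisableClientState(GL_VERTEX_ARRAY);
    }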

    Given the good first impression, I was willing to go ahead and write a new
    back end that would let the card do the entire Doom interaction rendering in
    a single pass. The most expedient sounding option was to just use the Nvidia
    extensions that they implement, NV_vertex_program and NV_register_combiners,
    with seven texture units instead of the four available on GF3/GF4. Instead, I
    decided to try using the prototype OpenGL 2.0 extensions they provide.

    The implementation went very smoothly, but I did run into the limits of their
    current prototype compiler before the full feature set could be implemented.
    I like it a lot. I am really looking forward to doing research work with this
    programming model after the compiler matures a bit. While the shading
    languages are the most critical aspects, and can be broken out as extensions
    to current OpenGL, there are a lot of other subtle-but-important things that
    are addressed in the full OpenGL 2.0 proposal.
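
    (Aside, not part of the .plan: a hedged sketch of what the GL2-style
    path looks like, written against GLSL as it was later standardized
    rather than the prototype 3Dlabs compiler actually used here. The
    shader is an illustrative one-pass diffuse-plus-specular interaction,
    not Doom's; lightDir and halfDir are hypothetical uniforms standing in
    for a directional light.)

    #include <GL/gl.h>  /* GL 2.0 entry points; usually via an extension loader */

    static const char *interaction_fs =
        "uniform sampler2D diffuseMap;\n"
        "uniform sampler2D normalMap;\n"
        "uniform vec3 lightDir;  /* tangent-space light dir (hypothetical) */\n"
        "uniform vec3 halfDir;   /* tangent-space half vector (hypothetical) */\n"
        "void main() {\n"
        "    vec2 st   = gl_TexCoord[0].st;\n"
        "    vec3 N    = normalize(texture2D(normalMap, st).xyz * 2.0 - 1.0);\n"
        "    float dif = max(dot(N, normalize(lightDir)), 0.0);\n"
        "    float spc = pow(max(dot(N, normalize(halfDir)), 0.0), 16.0);\n"
        "    gl_FragColor = texture2D(diffuseMap, st) * dif + vec4(spc);\n"
        "}\n";

    GLuint build_interaction_program(void)
    {
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &interaction_fs, NULL);
        glCompileShader(fs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, fs);
        glLinkProgram(prog);   /* compile/link error checks omitted for brevity */
        return prog;
    }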

    I am now committed to supporting an OpenGL 2.0 renderer for Doom through all
    the spec evolutions. If anything, I have been somewhat remiss in not pushing
    the issues as hard as I could with all the vendors. Now really is the
    critical time to start nailing things down, and the decisions may stay with
    us for ten years.

    A GL2 driver won't give any theoretical advantage over the current back ends
    optimized for cards with 7+ texture capability, but future research work will
    almost certainly be moving away from the lower level coding practices, and if
    some new vendor pops up (say, Rendition back from the dead) with a next-gen
    card, I would strongly urge them to implement GL2 instead of proprietary
    extensions.

    I have not done a detailed comparison with Cg. There are a half dozen C-like
    graphics languages floating around, and honestly, I don't think there is a
    hell of a lot of usability difference between them at the syntax level. They
    are all a whole lot better than the current interfaces we are using, so I hope
    syntax quibbles don't get too religious. It won't be too long before all real
    work is done in one of these, and developers that stick with the lower level
    interfaces will be regarded like people that write all-assembly PC
    applications today. (I get some amusement from the all-assembly crowd, and it
    can be impressive, but it is certainly not effective)

    I do need to get up on a soapbox for a long discourse about why the upcoming
    high level languages MUST NOT have fixed, queried resource limits if they are
    going to reach their full potential. I will go into a lot of detail when I
    get a chance, but drivers must have the right and responsibility to multipass
    arbitrarily complex inputs to hardware with smaller limits. Get over it."
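
    (Aside: a toy illustration of the multipass virtualization demanded
    above, using only classic copy-to-texture GL calls. draw_first_half and
    draw_second_half are hypothetical placeholders; in the proposal it is
    the driver's compiler, not the application, that would split the work
    transparently when a shader exceeds the hardware's per-pass limits.)

    #include <GL/gl.h>

    void draw_first_half(void);         /* hypothetical: uses texture units 0..K-1 */
    void draw_second_half(GLuint tex);  /* hypothetical: blends in the saved result */

    void render_in_two_passes(int width, int height)
    {
        GLuint partial;
        glGenTextures(1, &partial);

        /* Pass 1: as much of the expression as fits the hardware limits. */
        draw_first_half();

        /* Save the intermediate result to a texture instead of failing. */
        glBindTexture(GL_TEXTURE_2D, partial);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);

        /* Pass 2: finish the computation, reading the intermediate back. */
        draw_second_half(partial);
    }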

    See? JC isn't such a mean, arrogant guy after all! He even apologized to Matrox for being wrong about their displacement mapping!

    Driver-wise the P10 seems to rule, especially the OpenGL 2.0 detail. And if, thanks to its fully programmable architecture, it can do in a single pass what other cards need 3, 4 or more for, that's spectacular!

    As for Cg... he says there are already half a dozen languages like Cg floating around and that real work isn't done in any of them yet... Cg is nothing new...
  2. Zealot

    Zealot I quit My Job for Folding

    True, but he acknowledges that 3D programming needs to evolve to free the programmer from low-level code, which he compares to writing an application entirely in assembly!

    Let's hope Cg opens the door to a new generation of 3D programs, thanks to high-level programming!
  3. SUp3rFM

    SUp3rFM Guest

    Carmack seems eager to get on television lately :)

    His latest statements always stir up controversy. He can't have much on his plate; he must be waiting for UT2K3 to come out before getting to work. :D
  4. ToTTenTranz

    ToTTenTranz Power Member

    Or he's waiting for the R300 to come out so he'll have something decent to work with :-D :-D :-D :-D :-D
  5. SilveRRIng

    SilveRRIng Power Member


    Going only by what Anand's said about the P10, I have to correct you on one point: the P10's architecture isn't fully programmable either, and neither are today's cards. Supposedly that will only happen with the next generation: NV30, R300 and the upcoming parts from 3Dlabs and Matrox.

    What I took from what he said about the P10 is that its architecture, having several parallel pipes (I saw this at Anand's too), gives it an advantage for rendering in a single pass. Sounds good to me. The only pity is that it still has some fixed functions.
  6. ToTTenTranz

    ToTTenTranz Power Member


    Yeah, you're right. I think I recently read that the chip only supports pixel shaders up to v1.2... and yet it fully supports OpenGL 2.0... strange...

    But it's a very interesting approach. If it can do everything in a single pass, then its performance scales directly with the VPU's clock speed!
    And the 16 GB of virtual memory is appealing too (I still don't quite understand how that memory works... but it's surely not like Windows virtual memory, otherwise if the card went off to store textures on the hard drive in the middle of a game... heaven help us!).

    Anyway, let's see what Creative manages to get out of this chip. Competition is always a good thing ^^
