GeForce 256 pixel shaders vs. GeForce 3 pixel shaders

What’s the big deal actually between the GeForce 256 and the GeForce 3? Except maybe for the 4 texture units compared to the 2 texture units of the GeForce 256…
But what is the big fuss about the GeForce 3 pixel shaders? Isn’t the GeForce 256 supposed to have 4 pixel shader registers or something? I just don’t get it… I can use them on my GeForce 256 DDR using OpenGL (using NVShaderAid from NVIDIA), but with DirectX 8 it’s like I don’t have any (using the NVEffectsBrowser).

What’s the difference between the GeForce 256 DDR and the GeForce 3? I don’t get it.

A whole lot of difference.

More textures, of course.

Both use the basic register combiners model, but GeForce3 has a number of major improvements – number of stages, number of constant registers, and combiner performance.
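
To give a feel for the stage-count and per-stage-constant difference, here is a minimal sketch using NV_register_combiners / NV_register_combiners2; the numbers and setup below are only an illustration, not something from Matt's post:

    /* Sketch: GeForce3-style combiner setup.
       GeForce 256 exposes 2 general combiner stages and two global
       constant colors; GeForce3 raises that to 8 stages and adds
       per-stage constants via NV_register_combiners2. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    void setup_gf3_combiners(void)
    {
        static const GLfloat stage_const[4] = { 0.5f, 0.5f, 1.0f, 1.0f };

        glEnable(GL_REGISTER_COMBINERS_NV);
        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 8);

        /* A separate constant color per stage instead of two globals. */
        glEnable(GL_PER_STAGE_CONSTANTS_NV);
        glCombinerStageParameterfvNV(GL_COMBINER0_NV,
                                     GL_CONSTANT_COLOR0_NV, stage_const);
    }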

Direct shadow map support.

GeForce3 also adds a dependent texturing engine capable of far more than the simple “EMBM” effect. Lots of possibilities there.

This is just a broad summary, of course. Suffice it to say that there are a lot of new features.

  • Matt

The register combiners of a GeForce3 aren’t just the same ones as before; they have many more features (meaning you can do more math in one pass), and the texture shaders of a GeForce3 add some completely new features. For example, with the texture rectangle you can possibly copy the whole screen onto a texture (not sure, but possible), then render a quad with a nice tessellation, moving the vertices around a little, and you can do Photoshop-like effects (I mean warping the screen here)… And with the vertex_program you can set up the texture coordinates much more efficiently for these things… and you can even say: the texture coordinate of the second texture = the color of the first texture (at the coordinate of the first texture).
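
Roughly, the screen-warp idea could look like this with NV_texture_rectangle; the grid size, the jitter() helper, and the assumption of a window-aligned orthographic projection are all made up for illustration:

    /* Sketch: copy the frame into a texture rectangle, then redraw it
       on a tessellated quad whose vertices are displaced a little.   */
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <math.h>

    #define W    640
    #define H    480
    #define GRID 16

    static float jitter(int i, int j, float t)   /* toy displacement */
    {
        return 4.0f * sinf(t + 0.7f * i + 1.3f * j);
    }

    void warp_screen(GLuint tex, float t)
    {
        int i, j;

        /* Grab the current frame; texture rectangles use pixel coords. */
        glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
        glCopyTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB, 0, 0, W, H, 0);

        glEnable(GL_TEXTURE_RECTANGLE_NV);
        for (j = 0; j < GRID; ++j) {
            glBegin(GL_QUAD_STRIP);
            for (i = 0; i <= GRID; ++i) {
                float x  = (float)i * W / GRID;
                float y0 = (float)j * H / GRID;
                float y1 = (float)(j + 1) * H / GRID;
                glTexCoord2f(x, y0); glVertex2f(x + jitter(i, j, t),     y0);
                glTexCoord2f(x, y1); glVertex2f(x + jitter(i, j + 1, t), y1);
            }
            glEnd();
        }
        glDisable(GL_TEXTURE_RECTANGLE_NV);
    }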

etc…

just, stuff

I also had a question about the GeForce3. Since the GF3 has added the new spiffy programmable vertex and pixel shaders, don’t they basically take the place of the T&L engine or am I missing something? I read that the GF3 will still be compatible with programs that were written for the T&L engine. Is that right? Is there like a default program that is used whenever the T&L code is run? Or is the old T&L engine built in as well?
Anybody know?

Can’t wait to get one.

Funk.

Vertex programs and the “fixed-function” T&L mode both use the same engine, but they use it in slightly different ways.
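
For illustration, a minimal NV_vertex_program sketch of that point: the program only reproduces the fixed-function position transform (with the modelview-projection matrix tracked into c[0]..c[3]), and disabling GL_VERTEX_PROGRAM_NV simply drops you back to the fixed-function path. The surrounding setup is an assumption, not Cass's code:

    /* Sketch: a trivial vertex program that does what fixed-function
       T&L does for position and color, and nothing more.             */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <string.h>

    static const char vp[] =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"   /* position * tracked MVP */
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[COL0], v[COL0];\n"           /* pass the color through */
        "END\n";

    void use_passthrough_vp(GLuint id)
    {
        glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                        (GLsizei)strlen(vp), (const GLubyte *)vp);
        glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);

        /* Keep c[0]..c[3] tracking the current modelview-projection
           matrix, just as fixed-function T&L uses it.                */
        glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,
                        GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);

        glEnable(GL_VERTEX_PROGRAM_NV);  /* disable => fixed function */
    }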

Thanks -
Cass

I’m interested to know whether the pixel shaders replace the register combiners. I don’t know the pixel shaders very well, though I’ve read about the vertex shaders, but I assume we won’t need the register combiners anymore if we have pixel shaders. Or is it just a way of controlling the combiners more easily?

From what I’ve read from NVIDIA people in here, I’d guess that pixel shaders are just an easier way to access the register combiners, plus something else.
(read Matt’s post about the latter)

So basically, do today’s GeForce cards have pixel shaders in hardware?

Pixel shaders are DX8, not OpenGL… read what I think about pixel shaders and other things made by Microsoft in my newest post (the one dated a few minutes before this one) in the EMBM thread.

(NV_texture_shader + NV_register_combiners + NV_register_combiners2) > (DX8 Pixel Shaders)

They’re roughly equivalent, except that the OpenGL extensions expose more of what GeForce3 can do.

more important:

(NV_texture_shader + NV_register_combiners + NV_register_combiners2) == gf3

(DX8 Pixel Shaders) > gf3
=> (DX8 Pixel Shaders) == gf3 + software

=> slow

(as far as I know from the currently available information…)

Isn’t that contrary to what Cass just said?

No, because in fact dx8shaders > glshaders, but when you restrict yourself to the part of the dx8shaders that is <= gf3, you have a smaller set, so dx8shader_to_use < glshaders.

OK, so we got that settled.
And what about the hardware shadow maps?
Will it allow for high-resolution depth maps, or “just” do the transformation from eye space to light space for each pixel?

two things:

First, a new texture data format: compressed depth_stencil (or something like that)… it is a direct copy of the depth buffer, so you can copy your depth values to a texture really fast, and in 24-bit resolution! (instead of, for example, 8-bit or 16-bit with complex register_combiner requirements…)

Second, it has GL_SGIX_shadow or something like that… it automatically tests a texture defined as a shadow map against a distance value, which you have to set up yourself, I think… (with a projective 1D texture, for example…)

But, as always, NVIDIA doesn’t let the complete info out, so we have to wait for the GF3… and for the 1050 drivers for Win2k…
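
For what it’s worth, a rough sketch of how that could look, based only on the GL_SGIX_depth_texture and GL_SGIX_shadow extension specs (not on released GeForce3 docs); the light-space R coordinate still has to come from projective texgen as described above:

    /* Sketch: copy the depth buffer into a 24-bit depth texture and
       enable the per-texel shadow compare against the R coordinate.  */
    #include <GL/gl.h>
    #include <GL/glext.h>

    void grab_shadow_map(GLuint tex, int size)
    {
        glBindTexture(GL_TEXTURE_2D, tex);

        /* The fast path described above: the depth buffer is copied
           straight into a DEPTH_COMPONENT texture, 24 bits per texel. */
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_SGIX,
                         0, 0, size, size, 0);

        /* Compare the interpolated R coordinate (the fragment's depth
           in light space) against the stored depth value.             */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_SGIX, GL_TRUE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_OPERATOR_SGIX,
                        GL_TEXTURE_LEQUAL_R_SGIX);
    }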

wait, wait…
so using the programmable shaders on the GF3 will work ‘better’ in OpenGL than in DX?

Sounds pretty good to me.

Also, this is what I have gathered:
The new programmable pixel shader stuff is a souped-up version of the old pixel shaders.
And
The new programmable vertex shader stuff is a souped-up T&L engine.
‘Souped up’: meaning new modes, processor instructions, etc.

Pretty cool. I also want to add that the cross-bar memory doodad in the GF3 is a damn good idea. Kudos to one heck of a design.

Funk.

Yeah, it’s souped up somehow, but it’s much more… if you took something like GL_SPHERE_MAP, GL_EYE_LINEAR, and GL_OBJECT_LINEAR and just added a new GL_RANDOM_POINT mode, that would be a simple souping-up… but the trick is, with the vertex_program (don’t call it a vertex shader, we’re in the OpenGL forum here), you get full access to everything you used to manipulate with glLight, glTranslate, GL_SPHERE_MAP, etc… but now logically organized, and therefore with much more power (because now YOU program these effects, meaning you can do a GL_RANDOM_POINT effect if you want)…
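
As a sketch of that “you program the effect yourself” point: instead of picking GL_SPHERE_MAP or GL_OBJECT_LINEAR, the vertex program just writes whatever it likes into the texcoord output. Copying the normal into a texture coordinate is only an example I made up, and the program is loaded and enabled exactly like the passthrough one shown earlier:

    /* Sketch: hand-rolled "texgen" in an NV_vertex_program. */
    static const char texgen_vp[] =
        "!!VP1.0\n"
        "DP4 o[HPOS].x, c[0], v[OPOS];\n"
        "DP4 o[HPOS].y, c[1], v[OPOS];\n"
        "DP4 o[HPOS].z, c[2], v[OPOS];\n"
        "DP4 o[HPOS].w, c[3], v[OPOS];\n"
        "MOV o[TEX0],   v[NRML];\n"    /* the normal becomes a texcoord */
        "MOV o[COL0],   v[COL0];\n"
        "END\n";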

The pixel shaders (or correctly, texture_shader!) are another souping-up… you have more stuff you can do… but you get something completely new, too.

You can now say:

OK, first, we have our texture coordinates for texture0.

Let’s get the color of texture0.

Now we have the texture coordinates of texture1.

Add the color “vector” of texture0 to these coordinates, and get THIS COLOR at THESE NEW COORDINATES.

etc…

Like that you can set up your normals per pixel, and then do GL_SPHERE_MAP even per pixel (better: use a cube map, create the reflection vector per pixel, and use that vector for the cube map; it then looks like this: http://www.nvidia.com/Marketing/Developer/DevRel.nsf/pages/18DFDEA7C06BD6738825694B000806A2)
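
A rough sketch of that setup with NV_texture_shader, using the offset-texture operation (the “add texture0’s value to texture1’s coordinates” case; GL_DEPENDENT_AR_TEXTURE_2D_NV is the “use the color directly as coordinates” variant). The matrix values and texture names are only placeholders:

    /* Sketch: stage 0 fetches a signed offset (DSDT) map, stage 1
       perturbs its own coordinates by that value before fetching.    */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    void setup_offset_texture(GLuint offset_map, GLuint color_map)
    {
        /* 2x2 matrix scaling the offset value into texcoord space. */
        static const GLfloat offset_mat[4] = { 0.05f, 0.0f, 0.0f, 0.05f };

        glEnable(GL_TEXTURE_SHADER_NV);

        glActiveTextureARB(GL_TEXTURE0_ARB);
        glBindTexture(GL_TEXTURE_2D, offset_map);
        glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
                  GL_TEXTURE_2D);

        glActiveTextureARB(GL_TEXTURE1_ARB);
        glBindTexture(GL_TEXTURE_2D, color_map);
        glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
                  GL_OFFSET_TEXTURE_2D_NV);
        glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV,
                  GL_TEXTURE0_ARB);
        glTexEnvfv(GL_TEXTURE_SHADER_NV, GL_OFFSET_TEXTURE_2D_MATRIX_NV,
                   offset_mat);
    }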