I’ve got a question about TrackMatrixNV(enum target, uint address, enum matrix, enum transform).
The last parameter, transform, can be one of the following values: IDENTITY_NV, INVERSE_NV, TRANSPOSE_NV and INVERSE_TRANSPOSE_NV.
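For context, a minimal usage sketch (the register addresses 0, 4 and 8 are arbitrary choices of mine, not anything the extension mandates): each call loads the chosen variant of the modelview matrix into four consecutive vertex-program parameter registers starting at the given address.

```c
/* Sketch, assuming an NV_vertex_program context is already set up. */
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW, GL_IDENTITY_NV);          /* c[0]..c[3]  */
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 4, GL_MODELVIEW, GL_INVERSE_NV);           /* c[4]..c[7]  */
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 8, GL_MODELVIEW, GL_INVERSE_TRANSPOSE_NV); /* c[8]..c[11] */
```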
So, what I understood:
IDENTITY_NV: tracks the matrix as it is, e.g. (vector)a’ = (matrix)M * (vector)a.
INVERSE_NV: tracks the inverse of the matrix, e.g. (vector)a = (inverse matrix)M^-1 * (vector)a’. If you have eye-space coordinates, you can transform them back into world-space ones.
BUT: what do TRANSPOSE_NV and INVERSE_TRANSPOSE_NV mean (my poor English…)? And what are they for? Why do you track normals with INVERSE_TRANSPOSE_NV and not with IDENTITY_NV?
I’m trying to understand Nutty’s spec_nvp on www.nutty.org . There he makes the following transformations (all with the ModelView matrix): he transforms the vertex position with GL_IDENTITY_NV, the normal with GL_INVERSE_TRANSPOSE_NV and the light with GL_INVERSE_NV.
Why? Why can’t I (he) transform everything with GL_IDENTITY_NV?
Because normals don’t have to be translated the way vertex positions are; only the rotation part of the matrix is useful for them (the scaling part can easily be undone by renormalizing or rescaling the normals).
Ok. And if I use GL_INVERSE_TRANSPOSE_NV, only the rotation part of the matrix is applied? I can’t imagine that. The inverse of a matrix A transforms a point already transformed by A back again.
Is there a rule that vertices are transformed with the identity, the normals with the inverse transpose, and the light with the inverse?
I don’t get it.
That’s the point: you won’t transform points with the inverse transpose, you transform the normals with it. It takes into account that translation doesn’t affect normals, and neither does uniform scaling (after renormalization).
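In symbols, the standard argument runs like this: a normal must stay perpendicular to every tangent of the surface, and that requirement alone forces the inverse transpose.

```latex
% n perpendicular to tangent t; tangents transform by M: t' = M t.
% Seek a matrix G with n' = G n such that perpendicularity survives:
\[
  n^{\top} t = 0, \qquad
  n'^{\top} t' = (G n)^{\top} (M t) = n^{\top} G^{\top} M \, t .
\]
% This vanishes for all tangents whenever G^T M = I, i.e.
\[
  G^{\top} M = I \;\Rightarrow\; G = (M^{-1})^{\top}.
\]
```

For a rotation R we have (R^-1)^T = R, so the rule is invisible until the matrix also scales or shears.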
But what about the light position? With the normals it’s clear: they stand for a direction, so translating them would be fatal. But the light position should be translated and rotated like an ordinary vertex, shouldn’t it?
It depends on what Nutty has done in his program. If he uses the vertices transformed into eye space (with the identity), I think you have to put the light in eye space too; but if he uses the object’s vertices in object space, I think you have to use the inverse?!
First I set the viewer’s position with some rotation and translation: matrix A.
Then I draw an object: I “push” the current matrix, do some more rotations and translations, draw (matrix B), and “pop”.
So which matrix does the VP receive while drawing, A or B? If only matrix B, how can I transform the vertices into eye space? How can I distinguish in the VP between object space (matrix B) and eye space (matrix A)?
Ok, Nutty, that makes sense. So for the normals we consider only the rotation, and for the lighting we transform the light’s world-space coordinates with the inverse. That way you have everything in object space and can calculate the lighting…
Michael, no. Not per pixel, just vertex lighting, done in object space. It avoids redundant transforms of the vertices to world space by doing the lighting in object space; that’s why I did it like that.