GPU for matrix maths?

Hello.

I did a little test to see what the performance implications of doing matrix math on the GPU and reading the matrix back were. A lot of people I've spoken to seem to say this is very slow, because getting the matrix back off the GPU can be very slow. I had a little demo running at 80 fps, on a PIII 800 with a GeForce 256, with no glGetFloatv calls being performed. I then added 500 of them every frame, and the frame rate only dropped by 0.2 fps.

It seems a bit pointless doing any matrix math myself when I can get the GPU to do it for me with very little performance hit, and 500 is a very excessive number of glGetFloatv calls to make per frame.
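For what it's worth, here's a minimal sketch of the kind of thing I mean, assuming a current legacy GL context. The helper name and the saving/restoring of the stack are just how I'd write it, not anything special:

    /* Multiply two matrices on the GL matrix stack and read the
     * result back with glGetFloatv. Note GL matrices are
     * column-major. */
    #include <GL/gl.h>

    void multiply_on_gl_stack(const GLfloat a[16], const GLfloat b[16],
                              GLfloat out[16])
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();               /* don't clobber the real modelview */
        glLoadMatrixf(a);
        glMultMatrixf(b);             /* stack now holds a * b */
        glGetFloatv(GL_MODELVIEW_MATRIX, out);  /* read the result back */
        glPopMatrix();
    }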

Am I right??

Nutty