Apologies if I shouldn’t ask this here; I thought I would try the forum before trying the mailing lists.
Does anyone know whether the current version of Mesa 3D can render entirely in software on the CPU, without any hardware acceleration from the graphics card? I just need to confirm this little nugget of information before setting up the environment. If not, does anyone know a suitable alternative that can? Preferably the fastest one lol
OK, after further research and reading, it seems that if I build the Mesa project, opengl32.dll, glu32.dll and osmesa32.dll are created.
If I run the compiled ‘gears’ program that is available from the website, I get 60fps. I presume that this is using the opengl32.dll from my /Windows/System32 directory, and is thus supplied by the ATI driver (it’s a Radeon card) and therefore hardware-accelerated.
If I drop the compiled opengl32.dll and glu32.dll into the directory of the .exe, the fps drops to 25. I’m assuming it is now using the software renderer, based on the following from the Mesa site…
3.1 Rendering is slow / why isn’t my graphics hardware being used?
Stand-alone Mesa (downloaded as MesaLib-x.y.z.tar.gz) doesn’t have any support for hardware acceleration (with the exception of the 3DFX Voodoo driver).
What you really want is a DRI or NVIDIA (or another vendor’s OpenGL) driver for your particular hardware.
If anyone can confirm I am speaking sense, that would be really helpful.
So it seems to be software rendering. Is Mesa’s software renderer going to perform much worse than any other software renderer? Coco3D seems to be an alternative, though I’m used to OpenGL, so Mesa 3D would suit me better from a development point of view.
Dark Photon:
The following are the results from the ‘info’ arg.
I wouldn’t say it’s slow. It’s slow compared to hardware, of course, but the project has been maintained for years, has stable versions, and according to the changelog it has been optimized here and there. You should note that you can get artifacts like I did; check the picture for more info (left side is Mesa 7.2): http://i31.tinypic.com/w6rtw5.jpg
_NK47: isn’t this caused by a low-precision z-buffer? (i.e. it could be using the default depth buffer precision, which can be 16 bits with Mesa but 24 bits on your hardware implementation)
Do you use shaders or just line rendering for the outlines?
I’d like to know more about these artifacts.
ZbufferR: that was the first thing that entered my mind as well, that the depth buffer precision isn’t sufficient. The thing is that I just dropped in the compiled Mesa opengl32.dll and let it run, while my application enforces 24-bit depth precision paired with an 8-bit stencil. The shader is a GLSL 1.1 Gooch shader with two passes: the first pass does the color computation and the second does the line drawing, using a line width of 5.0, with no fallback to the fixed-function pipeline on the line pass (simple vertex/fragment shader). I didn’t dig much into the issue but saved the screenshot as a reference. If you need more details on the setup I could look into it further, since it’s been half a year since then.