shader attributes... always required?

Hey there,

I’m wondering if it is possible to have an attribute defined in a shader but then not use it.

For example, say you want your shader to support textures, so you have an attribute vec2 textureCord, but then you may have an object you want to render that does not use textures and has no texture coordinates defined. When I try to render such an object, it doesn't work. Is there a way to have attributes defined but not use them, or is it necessary to have a second shader?

If it is necessary to have a second shader, how steep is the penalty for binding a new shader?

Thanks,
Achilles

Well, if your fragment shader does a texture lookup, how could it work if you don’t give a texture and/or texture coordinates?

You could add a test similar to:

  precision mediump float;
  uniform bool bTextured;
  uniform sampler2D uSampler;
  uniform vec4 color;
  varying vec2 vTexCoord;

  void main()
  {
      if (bTextured)
          gl_FragColor = texture2D(uSampler, vTexCoord);
      else
          gl_FragColor = color;
  }

But something like that is a performance killer. Fragment shaders should be as fast as possible (i.e. avoid conditionals unless really necessary, or you'll be GPU bound faster than you think). Having two shaders will be better. Even though the bindings have a cost, it is negligible, and sending the extra uniform has a cost too anyway, maybe a worse one, so…
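To make the two-shader idea concrete, here is a minimal sketch (TypeScript against the WebGL 1 API; compileProgram, vertexSource and the gl context are placeholder names for this example, not part of any particular library):

  // Sketch only: compileProgram is a hypothetical helper that compiles and
  // links a vertex/fragment shader pair into a WebGLProgram.
  declare function compileProgram(
    gl: WebGLRenderingContext, vs: string, fs: string): WebGLProgram;
  declare const gl: WebGLRenderingContext;
  declare const vertexSource: string;

  const texturedFS = `
    precision mediump float;
    uniform sampler2D uSampler;
    varying vec2 vTexCoord;
    void main() { gl_FragColor = texture2D(uSampler, vTexCoord); }
  `;
  const flatFS = `
    precision mediump float;
    uniform vec4 color;
    void main() { gl_FragColor = color; }
  `;

  const texturedProgram = compileProgram(gl, vertexSource, texturedFS);
  const flatProgram = compileProgram(gl, vertexSource, flatFS);

  function drawObject(obj: { textured: boolean; draw: () => void }) {
    // Pick the program that matches the material; no per-fragment branch needed.
    gl.useProgram(obj.textured ? texturedProgram : flatProgram);
    obj.draw(); // set uniforms, bind buffers and issue the draw call
  }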

If you really want a single shader, you could also bind a tiny white texture, but the texture lookup still occurs (and at a GPU cost). Overall, having several shaders is almost always the best approach.
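For reference, such a dummy texture can be created like this (a sketch; it assumes a WebGL 1 context and a shader that multiplies the texture sample by the color, so white leaves the color unchanged):

  // Sketch: a 1x1 opaque white texture to bind for untextured objects.
  function createWhiteTexture(gl: WebGLRenderingContext): WebGLTexture {
    const tex = gl.createTexture()!;
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // One white RGBA pixel; no mipmaps needed for a 1x1 texture.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA,
                  gl.UNSIGNED_BYTE, new Uint8Array([255, 255, 255, 255]));
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    return tex;
  }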

Thank you very much for your detailed reply. I did indeed have a boolean uniform for textured and untextured objects. I had been told that binding shaders is very costly and to avoid that at any cost. It would seem that person was misinformed.

As a side note, what methods or general rules can be used to test for performance hits on different shader code?

You have been very helpful!
Achilles

No, that person was not misinformed, but the advice only applies at the geometry level: you have to batch your rendering to minimize the number of bindings. I was talking about the fill rate of fragment shaders.
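A rough sketch of what I mean by batching (TypeScript; the DrawCall shape is made up for the example):

  // Sketch: group draw calls by program so useProgram() runs once per group
  // instead of once per object.
  interface DrawCall {
    program: WebGLProgram;
    draw(gl: WebGLRenderingContext): void; // binds its buffers/uniforms and draws
  }

  function renderBatched(gl: WebGLRenderingContext, calls: DrawCall[]): void {
    const byProgram = new Map<WebGLProgram, DrawCall[]>();
    for (const call of calls) {
      const bucket = byProgram.get(call.program);
      if (bucket) bucket.push(call);
      else byProgram.set(call.program, [call]);
    }
    byProgram.forEach((bucket, program) => {
      gl.useProgram(program); // one bind per program
      for (const call of bucket) call.draw(gl);
    });
  }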

As a side note, what methods or general rules can be used to test for performance hits on different shader code?

It is hard to tell, especially with WebGL… At the fragment level there is a quick way to do it in a desktop app (i.e. without involving other tools): render at a high resolution with big primitives (like quads the size of the viewport) to maximize fill rate, then compare frame rates (which, AFAIK, are only accurate enough with Vsync disabled, and you cannot disable Vsync with WebGL). But this only gives a rough idea of how costly one fragment shader is compared to another.
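If it helps, a crude way to do that frame-rate comparison in WebGL itself (a sketch; drawFullscreenQuad is a placeholder for rendering the viewport-sized quad with the shader under test, and requestAnimationFrame is usually capped at the display refresh, so the numbers only separate shaders that actually drop below it):

  // Sketch: average frames per second over a few seconds while drawing a
  // full-viewport quad with the shader under test; compare shader variants.
  declare function drawFullscreenQuad(): void; // placeholder for the test draw

  function measureFps(seconds: number = 5): Promise<number> {
    return new Promise((resolve) => {
      let frames = 0;
      const start = performance.now();
      const tick = () => {
        drawFullscreenQuad();
        frames++;
        if (performance.now() - start < seconds * 1000) {
          requestAnimationFrame(tick);
        } else {
          resolve(frames / seconds);
        }
      };
      requestAnimationFrame(tick);
    });
  }

  // e.g. measureFps().then(fps => console.log('shader A:', fps));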

For more precise profiling (and maybe better WebGL support), manufacturers may provide free tools (free as in "free beer", though), like NVidia here: http://developer.nvidia.com/opengl.

There are other desktop-based cross-GPU solutions:

glslDevil (closed source, free): http://www.vis.uni-stuttgart.de/glsldevil/index.html

gDEBugger GL (closed source, 1-year license): an OpenGL and OpenCL debugger, profiler and memory analyzer

Let’s be honest, the open-source scene on the subject is largely underground, but http://glintercept.nutty.org/ may be a good start.