texcoord binding and colladafx

Looking at the 1.4 specification it is not clear to me how COLLADA defines the binding of texture coordinates to the texture coordinate inputs for a shader.

I have a test scene containing a plane, to which is assigned a material that uses a .cgfx shader. This shader has multiple texture coordinate inputs.
e.g.

texture texture1 : TEXTURE0

texture texture2 : TEXTURE1

In Maya, I can use the cgfxshader and define which UVSet is assigned to each of these texture coordinate inputs.

On exporting to COLLADA it seems to lose this information. Nowhere can I find which texture coordinate input in the COLLADA file is bound to the appropriate shader input.

I see that <input> has a 'set' attribute, but I don't think that is related to what I want - or is it?

In one example in the collada doc I see this:

<instance_material symbol="bark" target="MidsummerBark03">
  <bind semantic="TEXCOORD0" url="BeechTree#texcoord2"/>
</instance_material>

So if the shader had a semantic TEXCOORD0, does this mean that the BeechTree#texcoord2 input is bound to this shader input?

I have tried exporting the scene with both the Max and Maya plug-ins, but these don't export the information, so I'm none the wiser.

So how are the texcoord inputs bound to the texture coordinate inputs of a shader in terms of the COLLADA spec?

When defining an <effect> or specializing an effect into a <material>, you have no idea what vertex attributes the geometry it is going to be applied to will have. This is why effect and material definitions talk about semantics rather than target specific streams of geometric data. It’s a form of “loose binding” that allows us to define effects and materials in different documents than the geometry without causing linkage problems.

It is when you get to the scene graph definition that you can solve the binding problem. When you instance geometry into the scene using <node> you get the chance to use the <bind_material> element. This is where different streams of data in the geometry are bound to the parameters in the material, and the binding is done explicitly, as using a "standard" mapping was found to be not flexible enough for many cases.
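As a minimal sketch of what that looks like (the geometry URL, material name and set numbers here are invented, and this uses the <bind_vertex_input> form from COLLADA 1.4.1):

<node id="plane-node">
  <instance_geometry url="#plane-geometry">
    <bind_material>
      <technique_common>
        <instance_material symbol="shadedSurface" target="#myCgfxMaterial">
          <!-- route the geometry's TEXCOORD set 0 to the shader's TEXCOORD0 input -->
          <bind_vertex_input semantic="TEXCOORD0" input_semantic="TEXCOORD" input_set="0"/>
          <!-- and a second UV set to TEXCOORD1 -->
          <bind_vertex_input semantic="TEXCOORD1" input_semantic="TEXCOORD" input_set="1"/>
        </instance_material>
      </technique_common>
    </bind_material>
  </instance_geometry>
</node>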

Imagine this situation: you have a model that has three sets of texcoords and you want to apply a material that uses a shader requiring only one set of texcoords. How do you choose which set of texcoords you are talking about? You solve the binding problem in <bind_material> by attaching semantics to streams of data. The same solution is used when a material references, say, two light sources but your scene contains 60 light sources distributed around the entire level you are modelling - <bind_material> tells the effect which light sources you are talking about.
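A rough sketch of that light-source case (the element names are from the spec, but the semantics and targets are invented for illustration): <bind> children of <instance_material> point the effect's semantics at specific things in the scene.

<instance_material symbol="bark" target="#BarkMaterial">
  <!-- tell the effect which of the scene's many lights these two semantics refer to -->
  <bind semantic="LIGHT0_POSITION" target="torch-01/translation"/>
  <bind semantic="LIGHT1_POSITION" target="torch-02/translation"/>
</instance_material>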

A note: In <bind_material>, if the effect is multi-pass then ALL parameters across every pass with the same semantic get bound to the same input. This allows you to explicitly define when passes reuse information or when they need different streams.

<bind_material> is good for static binding of geometry to shader parameters, but sometimes you need to make runtime decisions about what value to set a shader parameter to. This is where user-defined semantics and <annotate> annotations come into play. If you need a shader parameter to be set to the position of the nearest light source, that is a decision only your program can make at runtime, so you can annotate the parameter with a message communicating this fact to your application, e.g. string LightType = "CLOSEST". User-defined semantics can be used when a shader requires information that can never be encoded into the COLLADA file, e.g. game state values like "DAMAGEPERCENT" or "MAGIC_LEVEL". When binding an effect to a model, the application must search for unbound parameters, inspect their semantics and give them a value before rendering.
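As a rough illustration (the parameter sid, semantic string and annotation value are made up; only the element structure comes from COLLADA FX), such a parameter could be declared like this:

<newparam sid="nearestLightPosition">
  <!-- hint to the application: fill this with the closest light at runtime -->
  <annotate name="LightType">
    <string>CLOSEST</string>
  </annotate>
  <semantic>CLOSEST_LIGHT_POSITION</semantic>
  <float3>0 0 0</float3>
</newparam>

At load time the application would notice that nothing in the document binds this parameter, read the semantic and annotation, and write in the position of whichever light it decides is closest before rendering.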

Applying an effect to your geometry is always a two-way communication between the effect runtime and your application. The effect runtime can only automatically bind certain values, and it’s up to your application to fill in the unknowns before rendering. Semantics and annotations are the tools to help this process work smoothly.

Hope that helps.

  • Robin Green

Yes that has helped.

So what I was asking was correct.

Shader inputs such as TEXCOORD0, which are identified by semantic rather than by an explicitly defined variable in the shader, should be bound in the <bind_material> section, and it is up to the program that reads and understands the COLLADA file (e.g. an engine) to correctly bind these to the effect as required.

In Maya you can create geometry, give it a cgfxShader as a material and set up these bindings (uvSet1 -> TEXCOORD0), and these can be written to the COLLADA file in the <bind_material> section. This, in effect, allows you to give a hint to the program reading the COLLADA file to use these bindings, although it can do anything it wants.
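For what it's worth, my understanding is that the two halves of such a binding would look roughly like this in the document (source names, material name and set numbers invented). The geometry carries the UV set as an <input> with a set number:

<triangles material="shadedSurface" count="1">
  <input semantic="VERTEX"   source="#plane-vertices" offset="0"/>
  <input semantic="TEXCOORD" source="#plane-uvSet1"   offset="1" set="1"/>
  <p>0 0  1 1  2 2</p>
</triangles>

and the <bind_material> in the scene maps that set onto the shader's TEXCOORD0 semantic:

<instance_material symbol="shadedSurface" target="#myCgfxMaterial">
  <bind_vertex_input semantic="TEXCOORD0" input_semantic="TEXCOORD" input_set="1"/>
</instance_material>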