Clarifying the relationship between <mesh>'s <vertices> and its primitives

Sorry, I guess this thread may duplicate another one, but I searched and couldn't find it, or I can't use the forum's search correctly (I don't like it) :confused:

The <mesh> element has a <vertices> element and mesh primitives, and at least one <input> of each primitive (e.g. <polylist>) must specify semantic="VERTEX" (according to the 1.5 spec).
I think POSITION could be located inside <polylist> directly, but okay, that is not my confusion:

https://www.khronos.org/collada/public-mailing-list/archives/1101/msg00014.php

According to the spec, <mesh> can contain multiple primitives:

Geometric primitives, which assemble values from the inputs into vertex attribute data. Can be any combination of the following in any order: <lines> <linestrips> <polygons> <polylist> <triangles> <trifans> <tristrips>

Only one <vertices> element is allowed inside <mesh>, and the source of a primitive's input with semantic="VERTEX" is the id of that <vertices> element.
So how can multiple primitives have different <vertices>, different positions, accessors, and so on? If <mesh> has only one primitive, e.g. a <polylist>, it is OK, but what about multiple primitives?

1.5 XSD also says:

The mesh element must contain one vertices element.

Maybe a single source could contain all the positions of all primitives at different offsets, but to get an offset we would need to access different accessors, am I wrong? What should I do?
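To make my question concrete: is the intended reading that every primitive references the same <vertices> id and just picks out the vertices it uses through its own <p> indices? Something like this (the ids here are my own invention):

<mesh>
  <source id="positions"><!-- one float_array + accessor for the whole mesh --></source>
  <vertices id="verts">
    <input semantic="POSITION" source="#positions"/>
  </vertices>
  <triangles count="1" material="mat1">
    <input semantic="VERTEX" source="#verts" offset="0"/>
    <p>0 1 2</p>
  </triangles>
  <polylist count="1" material="mat2">
    <input semantic="VERTEX" source="#verts" offset="0"/>
    <vcount>4</vcount>
    <p>3 4 5 6</p>
  </polylist>
</mesh>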

There is a statement in the 1.5 PDF:

In a situation where you want to share index data, that is, to optimize the index data, and still have distinct set attributes, you can move the <input> element from the <vertices> element into the primitive element(s) and reuse the offset attribute value of the input with VERTEX semantic:

<vertices>
  <input semantic="POSITION"/>
  <input semantic="TEXCOORD"/>
  <input semantic="NORMAL"/>
</vertices>
<polygons>
  <input semantic="VERTEX" offset="0"/>

<vertices>
  <input semantic="POSITION"/>
</vertices>
<polygons>
  <input semantic="VERTEX" offset="0"/>
  <input semantic="TEXCOORD" offset="0" set="1"/>
  <input semantic="NORMAL" offset="0" set="4"/>

So we can move some inputs from <vertices> into the primitives' scope; can we then move POSITION too? That would solve my problem:
then we could leave <vertices></vertices> empty and move all POSITION inputs into the different primitives, like:

<polygons>
  <input semantic="VERTEX" offset="0"/> <!-- just to keep the document spec-valid -->
  <input semantic="POSITION" offset="0"/>
  <input semantic="NORMAL" offset="1"/>
  <input semantic="TEXCOORD" offset="2"/>

Would this be valid?

The spec says:

Mesh-vertices must have at least one <input> (unshared) element with a semantic attribute whose value is POSITION.

So I think this means that I can't leave <vertices> empty; also, different positions would make different meshes, right?

Can I assume that a <mesh> can only contain a single position source, with a single <input semantic="POSITION"…, and that the input must be in <vertices>? It makes sense then.

The limitations on <vertices> seem to have to do with <morph>-based animations. A "vertex" is a special semantic that combines all of the elements into a logical package. I worked with this somewhat when I developed a trial loader earlier in the year. I'm just now getting to apply the newly rewritten COLLADA-DOM library to that loader.

Unfortunately it’s hard to say anything authoritative about these things because we software developers can’t afford to commit things to memory: we just have to relearn everything as it’s required to implement something. All I can offer is my foggy old impressions, and the possibility that I might find myself retracing that code soon as I begin work testing a geometry loader.

PS: This isn’t really the COLLADA forum. There’s another locked forum somewhere around here that existed when collada.org was a website. It was moved here to khronos.org and archived long after COLLADA discussion petered out. This is a new forum, that was offered after the transition/shuttering of collada.org.

So for now I assume this: POSITION can only (and, for a mesh, must) exist in <vertices>, until someone clarifies this. After finishing the implementation, I'll come back to investigate.

It seems three.js also does it this way (I'm not saying whether that implementation is correct or not).

Unfortunately it’s hard to say anything authoritative about these things because we software developers can’t afford to commit things to memory: we just have to relearn everything as it’s required to implement something. All I can offer is my foggy old impressions, and the possibility that I might find myself retracing that code soon as I begin work testing a geometry loader.

Please let me know if you discover the correct way to implement the POSITION semantic, or whether my assumption is correct or not…

I think <vertices> is POSITION+. I have the polylist referring to VERTEX. I think that means that's the POSITION source, + anything else built into the vertices. Because <morph> needs (or wants) all of the attributes bound to a vertex, it can only blend the associated attributes. The other attributes can exist, but do not take part in blending. Anyway, this is the basic view; I am confident in it. There may be an alternative way; there may not.

EDITED: BTW, disregard what I said about there being two forums. I’m pretty sure that was the case only earlier this year. But now it’s plain that this forum goes back to the beginning. I’m either imagining things, or the two forums had their databases merged lately. (Or possibly it was just two boards that someone decided to merge.)

I think <vertices> is POSITION+. I have the polylist referring to VERTEX. I think that means that’s the POSITION source, + anything else built into the vertices.

Thanks, so I can still assume that, unlike other semantics, the POSITION semantic is only ever part of the <vertices> element.

Because <morph> needs (or wants) all of the attributes bound to a vertex, it can only blend the associated attributes. The other attributes can exist, but do not take part in blending. Anyway, this is the basic view; I am confident in it. There may be an alternative way; there may not.

One question/confusion:

If a mesh has different normal sources, then how are those normals blended? The spec says that morph only blends the normals inside <vertices>.

Am I right about this: all sources inside <vertices> must share the same index? I mean, in the <p></p> element, if VERTEX has offset=0 and <vertices> has more than one source (e.g. normals, texcoords…), then we can assume position index = 0, normal index = 0, texcoord index = 0, and so on?

If that is true, then we can't add extra references to the primitives' input sources into <vertices> for blending/morphing? Because <vertices> inputs are unshared <input> elements and have no offset attribute :confused: If we want to blend, must we join all sources and indices into one (interleaved)?

I don’t know. I say “POSITION+.” There may be other ways, but they’d not work with blending, because the weights must be assigned to a “vertex.”

On the second point, I'm pretty sure you can assign attributes via separate indices, and they will affect the image, but will not be blended by the morph-weight equations. I think I made a post around here, and there is more in the bug tracker that addresses some of these limitations. I would think of COLLADA as only expressive enough to describe relatively primitive video games. You can draw a straight line to it from the PlayStation's old hierarchical-model-data library.

The weights system needs to be more flexible, but there’s no rationale to push for a new version of COLLADA if there is not sound technology undergirding it. I think there are much more pressing issues with the current standard. To use COLLADA you have to scale back your ambition for certain. But in 3-D we’ve long gotten away from fundamentals, and no one has the faintest what the hell they’re doing, so I think that can be a healthy attitude.

The <p> element stores multiple indices per vertex, but I'm trying to convert them to a single index to send to OpenGL, and I'm having some trouble :confused:

For instance this looks nice:


<triangles count="286" material="Material1">
  <input offset="0" semantic="VERTEX"  />
  <input offset="1" semantic="NORMAL" />
  <p>1 0      2 0      1 0      3 1</p>
</triangles>

but this??


<triangles count="286" material="Material1">
  <input offset="0" semantic="VERTEX" />
  <input offset="1" semantic="NORMAL" />
  <p>1 0      2 0      1 1      1 1      1 3      1 0</p>
</triangles>

Position 1 is used many times (this seems good), but position 1 needs to be duplicated for "1 1" and "1 3", am I right? Because those are not the same vertex, since they use different normals.

I don’t think there is any restriction about this in specs so I think it must be fixed when parsing but finding positions to be duplicated is very costly :confused:
Any suggestions? Mick?

Finally I’ve purchased COLLADA book and found some answers for my first question:

The “POSITION” semantic is strongly correlated with the identity of individual mesh vertices and the <vertices> element.

Arnaud, Remi; Barnes, Mark C. COLLADA: Sailing the Gulf of 3D Digital Content Creation (Page 54). CRC Press. Kindle Edition.

“VERTEX”—Vertex attribute data principally containing the position of each vertex. This semantic is used to reference the <vertices> element. This is an indirection into the global definition of the mesh vertex attribute data described previously. Recall that the “POSITION” semantic cannot be used within a primitive collation element.

Arnaud, Remi; Barnes, Mark C. COLLADA: Sailing the Gulf of 3D Digital Content Creation (Page 55). CRC Press. Kindle Edition.

I have that book. There’s no solution. This is a common problem in graphics. COLLADA stores art, at a high level. Graphics APIs turn vertices into triangles.

In very old OpenGL there was “immediate mode” where you could specify every vertex manually, but that is equivalent to not using indexing, and sending “verbose data” over the AGP bus.

When vertices have a different index for each attribute they must be pre-processed before they can be rendered by a modern graphics driver, via an API like OpenGL ES. That means you must compare the indices and convert edges into multiple edges where necessary; generating new vertices and indices in the process.
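As a rough sketch only (toy types; this is not code from COLLADA-DOM, OpenCOLLADA, or Assimp), the conversion treats each tuple of indices as the identity of a vertex and allocates a new vertex the first time a tuple is seen. Using the index pairs from your second <triangles> example:

// Sketch: multi-index (position, normal) pairs -> single-index data.
// Toy types; a real loader must handle arbitrary inputs and offsets.
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

int main()
{
    // The <p> stream "1 0  2 0  1 1  1 1  1 3  1 0" as (position, normal) pairs.
    std::vector<std::pair<int,int> > p = {{1,0},{2,0},{1,1},{1,1},{1,3},{1,0}};

    std::map<std::pair<int,int>, int> seen;    // index tuple -> new vertex index
    std::vector<std::pair<int,int> > vertices; // new, unshared vertex table
    std::vector<int> indices;                  // single index stream for drawElements

    for (auto &tuple : p)
    {
        auto it = seen.find(tuple);
        if (it == seen.end()) // first appearance of this combination: new vertex
        {
            it = seen.insert(std::make_pair(tuple, (int)vertices.size())).first;
            vertices.push_back(tuple);
        }
        indices.push_back(it->second);
    }
    // "1 1" and "1 3" become distinct vertices that duplicate position 1.
    for (int i : indices) std::printf("%d ", i); // prints: 0 1 2 2 3 0
}

With a hash map (std::unordered_map plus a hash for the tuple) the whole pass is linear in the number of indices, so finding the duplicates does not have to be the expensive part.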

The normal workflow for an application is to not use COLLADA. Ideally the “run-time” (as described in Sailing the Gulf) converts the COLLADA description into its runtime file format or data-representation. It might want to cache that, and compare timestamps on the COLLADA file to see if the cache is up-to-date. The problem in this field is everyone is eager to cut corners. That’s how we end up in this sorry state.
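The timestamp comparison can be as simple as this (a C++17 sketch; the ".cache" suffix is an invented naming convention):

// Sketch: re-convert the COLLADA file only when it is newer than the cache.
// Assumes the .dae file itself exists.
#include <cstdio>
#include <filesystem>
namespace fs = std::filesystem;

static bool cache_is_stale(const fs::path &dae)
{
    fs::path cache = dae; cache += ".cache";
    return !fs::exists(cache)
        || fs::last_write_time(cache) < fs::last_write_time(dae);
}

int main()
{
    std::printf(cache_is_stale("model.dae") ? "convert & cache\n"
                                            : "load cache\n");
}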

In the project I am working toward I have rewritten the Assimp library to serve as the run-time component and legacy file format back-end. It has lots of post-processing (or pre-processing depending on perspective) procedures that have accrued over time. I’ve contributed some. Usually developers just write these things as they require them. I don’t have any more advice.

P.S. I am very eager to begin rewriting the viewer part of the COLLADA-DOM package. It will be a great break from the backbreaking chore of rewriting the core library. This week I’ve been writing software less. I finished adding text-node support to the new COLLADA-DOM today (comments, processing-instructions, mixed-text content models, XML-declaration) and worked on removing children from complex decision trees earlier in the week. I don’t know if databinding libraries that work like COLLADA-DOM are common or not. It’s a world apart from your standard XML library. They share nothing in common.

EDITED: I actually developed an all-purpose post-process for converting what I called a "complex" mesh into a "simple" mesh for Assimp, which the gatekeepers of the project would not accept. I did it because there was no way in hell I was going to recode the logic in every loader. I was going to post a link to the file, but it doesn't appear to be in the repository on my website. I will have to update the repository first.

Here ( http://svn.swordofmoonlight.net/code/Daedalus/post-SimplifySceneProcess.cpp ) is an example of code that converts a “scene” with maximal flexibility in terms of indexing into an unshared vertex model. It’s designed so that any file format maps to it naturally, and so the loaders can be straightforward and simple. This code then unpacks the model. Which is a prerequisite for most post-processing steps. Then if desired, the model would have to be repackaged after processing so that it’s stored in a shared vertex model in a COLLADA XML file for example.

The code is a little dense. I did the rewrite in a somewhat experimental style. I always change styles when tackling a new project. I don't believe in a single coding style. I think every code base has a coding style that is naturally suited to it, and experimentation is important. Anyway, I apologize for that. If nothing else you can get an idea of how much code is involved in the logic, or how big the job is. The ^ in the code is a special overloaded operator for invoking C++11 for-each-like lambda expressions over a container. The rest is standard C++ -ese. It's definitely not C code :slight_smile:
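To give an idea of its shape, a toy version of such an operator might look like this (this is not the actual operator from that file):

// Toy sketch: an overloaded ^ that runs a lambda over each element of a container.
#include <iostream>
#include <vector>

template<class C, class F>
void operator^(C &c, F &&each){ for (auto &e : c) each(e); }

int main()
{
    std::vector<int> v = {1, 2, 3};
    v ^ [](int &i){ std::cout << i*i << '\n'; }; // prints 1, 4, 9
}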

There are some cases which must be considered in the fix, I think: some primitives may have more inputs than others, or no common inputs except VERTEX, or a semantic (e.g. TEXCOORD) may occur many times with different set="NUMBER" attributes…

My first try at fixing the indices went like this: loading a large COLLADA file took ~0.24 to ~0.25 secs, and finding the positions (not including fixing, just finding) took ~4.2 secs!!! Also with lots of bugs. But I found a very fast way to find and fix them; now loading the whole COLLADA file plus finding and fixing the indices takes only ~0.26 to ~0.27 secs, not so bad for a single thread. I spent lots of time on it :confused: It finally seems to work, and valgrind is also happy: no leaks, no invalid reads/writes…

Recently I’ve tried to compare my library with COLLADA-DOM but I couldn’t do that, I got invalid version or like this error for 1.4.1 COLLADA file. But I’ve downloaded a version on Github which is built for 1.4.1 but still got same error and I gave up :confused:

Here ( http://svn.swordofmoonlight.net/code...eneProcess.cpp ) is an example of code that converts a “scene” with maximal flexibility in terms of indexing into an unshared vertex model. It’s designed so that any file format maps to it naturally, and so the loaders can be straightforward and simple. This code then unpacks the model. Which is a prerequisite for most post-processing steps. Then if desired, the model would have to be repackaged after processing so that it’s stored in a shared vertex model in a COLLADA XML file for example.

Thank you for sharing this and for your advice.

It’s definitely not C code

:stuck_out_tongue: After every critical piece of work I run valgrind to check for invalid reads/writes and leaks; in C++ it would be safer, yeah.

The normal workflow for an application is to not use COLLADA. Ideally the “run-time” (as described in Sailing the Gulf) converts the COLLADA description into its runtime file format or data-representation. It might want to cache that, and compare timestamps on the COLLADA file to see if the cache is up-to-date. The problem in this field is everyone is eager to cut corners. That’s how we end up in this sorry state.

My library's (C99) structure/data representation is based on the COLLADA XML structure, but that doesn't mean my application/runtime depends on COLLADA; it loads COLLADA, but it will load other file formats too in the future, e.g. glTF. I will work on an unofficial COLLADA binary format, or my own app/runtime format, in the future…

I’ll add some extra infos/meta to <asset><extra> while exporting then we can skip fix indices for next load, seems good idea

I am trying to get the new COLLADA-DOM sources up on Sourceforge.net before too long. I've removed all of the dependencies, more or less, but the new library is only a preview for some time, since it requires extensive testing. The original COLLADA-DOM is more or less defunct. It is used by some people, but like a lot of opensource software it takes a lot of fuss to set up, and so I don't particularly recommend it.

I would have recommended some COLLADA based unpacking code, but I don’t know of any specifically. There must be some in the COLLADA-DOM source code, but it would be outside of the core library, in the viewer demo code. The OpenCOLLADA project is no doubt geared toward that kind of thing. Basically COLLADA files are free to use any indexing strategy that they like, and so any converter/loader/viewer software must be able to unpack every possible indexing strategy or else it doesn’t fully implement COLLADA.

EDITED: BTW. Converting model geometry is an “offline” step. For large data sets it may be impossible to do it in a timely fashion. This is why COLLADA is intended to be used in an editor context for the most part. When editing files the user has time to wait, and doesn’t mind if a loading message or placeholder image is displayed.

OpenCOLLADA seems very complicated, and I didn't see any docs on how to load a file. Blender uses it, so maybe I could try to learn from there… I may study it after finishing my library, to compare speeds.

I’ve took a look at COLLADA-DOM on SF, I haven’t look at rt and fx library in dom project before. I’ve downloaded snapshot of trunk and couldn’t compile viewer and cound’nt run exsisting binary on my Mac OS 10.11. The binary required Cg.framework and I downloaded it from nVidia then I changed dylib install name to help binary to find it, Does Cg.framerowork is same as I downloaded, I just tried my chance and seems doesn’t work (with no error) :slight_smile: maybe it requires older mac version

I’m rendering with shaders but rt renders with old OpenGL, it doesn’t matter it may help me to find some answers quickly, I’ll also check (later) indexing/de-indexing/unpacking codes if available anywhere in project

RT and FX (or whatever the non-core components are) have been broken on the Sourceforge.net repository for a long time, since people who use it (mostly in robotics I think) only use the core-library component. I kind of broke the repository there early in 2016, because I didn’t understand the extent of changes compared to the versions on GitHub, which I had been using up to that point to rewrite the PHP component (the generator.)

I can’t recommend anything beyond looking at the development snapshots I’ve shared. The last one can be built with Visual Studio 2010 and forward. They include a rewrite of the core-library and the PHP code-generator. Right now I am working on implementing several new features that were not required for testing, but I feel are important enough to delay future work. The only reason I recommended OpenCOLLADA to you is to find the code inside of it that unpacks vertices to see if it might be of any use to your effort.

Here are the makeshift links with development snapshots for review:

http://sdk.swordofmoonlight.net/Very.Much.Unfinished.COLLADA-DOM.2.5.zip
http://sdk.swordofmoonlight.net/Generated_snapshot_in_ColladaDOM-3_mode.zip
http://sdk.swordofmoonlight.net/ColladaDOM-3_PHP_codeGen_snapshot.zip

The files could stand to be reorganized, but I've left them with their original names for the initial commit so that historians or whoever can conceivably see what maps to what. It's not that confusing, but some class names have changed. The second link has the generated classes in the back-compatible 2.x style and the new 3 style that is recommended going forward, especially with C++11 compilers. Because I updated the files for this experimental project yesterday, you can see the toy loader I developed to get a feel for COLLADA-DOM, only now rewritten to reflect the new library's design imperatives. This program is very embryonic and the loader has many idiosyncrasies, because it was originally written against the old COLLADA-DOM, which was maximally unexpressive, and has since been converted in place to the new style.

http://svn.swordofmoonlight.net/code/Daedalus/CollaDAPP.cpp

Again, the general style is experimental. When I wrote this code I was using C++11 lambdas for the very first time. They require braces and a semicolon in places that normally would not, so I was curious to know how that would change some of my default coding practices.

I’ll check them in my free time, IIRC I couldn’t see any index related codes in OpenCOLLADA and maybe Blender or Maya do this job theyself

Maya and 3DSMax have parts of the OpenCOLLADA code base dedicated to them. It’s possible it’s in there. Blender should have its own section, but probably no one has taken an interest in doing that. The Blender implementation is very poor. It was moved out of the Blender repository and isn’t really supported. Mainly people, non-programmers, just fight for it to not be removed. It’s funny that it’s still the default export/import target really…

BUT that just goes to show I think that there is a major vacuum in the non-commercial side of things that needs to be filled by something; and that COLLADA looks like the most realistic candidate.

I think unpacking vertices is probably in the OpenCOLLADA core. That is role numero uno for it to play. But it's not especially high quality. The code may not cover every use case systematically. COLLADA-DOM has plugins based on TinyXML originally, and then LibXML. The TinyXML library is very small (and frankly useless) so it's not much of a problem, but the LibXML code base is surprisingly completely undocumented, and it's a procedural (C-style) library. Trying to make heads or tails of it is ridiculously frustrating. I don't know why anybody takes that project seriously. And it's not a good support library either.

Most things in opensource are disappointingly very slapdash. I don’t think it helps even when commercial operations offer nominal support. And something else I’ve observed is once code goes opensource, it becomes almost impossible to alter it in a meaningful way, because of inertia and expectations. But that all stems from people not having the time and focus to do jobs correctly. But to be honest, I think most commercial code is not opensource only because it’s embarrassingly bad code. Whenever I’ve seen commercial code, it’s just like a troupe of poorly paid monkeys (in bananas one imagines) just had a blast making the stuff. It really shows when these systems begin to scale and cannot be made to marry with one another.

But to be honest, I think most commercial code is not opensource only because it’s embarrassingly bad code. Whenever I’ve seen commercial code, it’s just like a troupe of poorly paid monkeys (in bananas one imagines) just had a blast making the stuff
+1 for it :slight_smile:

Okay I’ll try to find in OpnCOLLADA repo again but I don’t think I’ll find anything about multi-index -> sinlge-index array. Maybe Maya or Blender just read COLLADA for direct-draw (drawArrays) not for indexed draw (drawElements) this would be really faster for parser even not for GPU.

FYI: I am now admin (technically co-admin it seems) of the SF.net COLLADA-DOM project, and it’s officially been taken out of “Inactive” status.

I also wrote about my taking an interest in those “FX” and “RT” components, which I did yesterday evening. I’ve begun the early stages of rejuvenating their code (for better or worse.) I will be committing some changes to the core-library section of the repository next week. In changing the viewer components I’ve opted to work in the SF.net repository directly. I intend to move my changes to the DOM component into that repository so that they will all be together again.