What is most recommended: FCollada or the COLLADA DOM?

I am currently working on the tool chain for our engine and we want to move from our proprietary exporters (which currently only support XSI) to using the Collada format. I looked at it previously with the DOM but had to move on to some other components, so I didn’t get a lot of time with it.

Now I see that there is both the DOM and FCollada, so which is recommended to use?

Thanks ahead of time :slight_smile:

Joe Woynillowicz
Technical Director
Creoterra Inc.

It depends. The two APIs take wildly different approaches.

The COLLADA DOM is a low-level API for working directly with the XML. It handles URIs nicely and lets you load and cross-reference many documents if your tool chain needs that. The DOM does not do things like generate VBOs or deindex the mesh data for you; you have to take care of all that yourself. Also, the DOM is locked to a single version of COLLADA. The DOM supports <extra> elements and lets you easily add tool-specific data.
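To give you a feel for it, loading a document and walking the geometry libraries with the DOM looks roughly like this. It’s a quick sketch against the 1.4-era DOM, so double-check the exact names against whatever version you pick up; “model.dae” is just a placeholder:

```cpp
#include <dae.h>
#include <dom/domCOLLADA.h>
#include <cstdio>

int main()
{
    DAE dae;

    // Load and parse the document; the DOM resolves URIs and can hold
    // several documents at once if you cross-reference between files.
    if (dae.load("model.dae") != DAE_OK)
        return 1;

    // Walk the typed element tree by hand. The DOM hands you the raw
    // <library_geometries>/<geometry> data; deindexing the mesh and
    // building VBOs is entirely up to you.
    domCOLLADA* root = dae.getDom("model.dae");
    const domLibrary_geometries_Array& libs = root->getLibrary_geometries_array();
    for (size_t i = 0; i < libs.getCount(); ++i)
    {
        const domGeometry_Array& geoms = libs[i]->getGeometry_array();
        for (size_t j = 0; j < geoms.getCount(); ++j)
            printf("geometry: %s\n", geoms[j]->getId() ? geoms[j]->getId() : "(no id)");
    }
    return 0;
}
```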

The FCOLLADA API is high level. I think it’s great if all you need to do is load some COLLADA and get rendering information from it. To tell you the truth, though, I don’t have much experience with FCOLLADA; I only glanced over it a while ago, and I know Feeling Software has been continuing to improve it. When I did look at it, it seemed that it didn’t give you a lot of freedom for modifying the COLLADA that was loaded. I don’t know if it handles cross-document references, and I am unsure of how well it handles <extra> data. The Maya and Max plugins both use FCOLLADA to some extent and they output <extra> data, but since it is all done by the same company, that <extra> data might be all it supports.
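From the quick look I took, the high-level flavour is roughly this. Since I haven’t really used it, treat the class and method names below as a sketch from memory that may well differ between FCollada releases; check the headers before relying on any of it:

```cpp
#include "FCDocument/FCDocument.h"
#include "FCDocument/FCDLibrary.h"
#include "FCDocument/FCDGeometry.h"

int main()
{
    // FCollada gives you a typed document with the scene, geometry,
    // materials, etc. already resolved into objects.
    // NOTE: method names here are from memory and may differ by release.
    FCDocument* document = new FCDocument();
    if (!document->LoadFromFile(FC("model.dae")))
        return 1;

    FCDGeometryLibrary* geometries = document->GetGeometryLibrary();
    for (size_t i = 0; i < geometries->GetEntityCount(); ++i)
    {
        FCDGeometry* geometry = geometries->GetEntity(i);
        // ... pull mesh/polygon data out of the FCDGeometry here ...
        (void)geometry;
    }

    delete document;
    return 0;
}
```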

Like I said, I am not very familiar with FCOLLADA. I think your decision boils down to a standard high-level vs. low-level argument: do you want ease of use with less freedom, or more freedom with more complexity?

I would even suggest starting with FCOLLADA, and if you run into something you need to do that FCOLLADA doesn’t allow, then 1) switch to the DOM, or 2) it’s open source… fix it yourself.

-Andy

I agree that, as usual, the answer depends on your situation.

If you currently have functional exporters from XSI, then presumably your engine isn’t starved for assets - in other words, you don’t need to get a Collada exporter working before you can do work on your engine. If this presumption is correct, you’ve got a bit more flexibility in your choice.

It also depends on how much time you have to apply to the tool chain, versus how much fine-grained control you need. If you don’t have a lot of time to ramp up, the higher-level solution is probably the right one.

For me, I’ve had quite a bit of experience writing data handling pipelines and dealing with different packages and formats. I also have a functioning engine that I already had data available for - I used the .x format as my intermediary format, until I started to need features that either the format or the .x exporters didn’t handle (not sure which). Then I started switching over to Collada as my intermediary format.

I decided that I’d rather dig into things at the DOM level so there wouldn’t be much between me and the raw data. It took me a little while to ramp up, but I think it was the right call in retrospect.

I will say that a drawback to approaching importing Collada data at the DOM level (or, if you’re a serious masochist, at the raw XML level) is that the format is rather flexible. That is to say, there are a bunch of different ways of specifying the same data that are all valid within the Collada schema.

I’ve decided to take the somewhat paradoxical sounding rigid/flexible approach in response to this: The import code I’m writing doesn’t pretend to accept all valid Collada input - rather, it robustly handles the subset of Collada data that I’ve seen in exporting our current assets, and will throw exceptions if the data is outside the handled subset (so, in that sense, it’s rigid); the code is also nimble and adaptable enough so that adjusting it to handle additional subsets of valid Collada input that we encounter is fairly trivial (so, in that sense, it’s flexible).
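To make that concrete, the shape of it is roughly this (UnsupportedColladaData, ImportTriangles and ImportMeshPrimitives are just names I’m making up here for illustration, not anything from the DOM or from our actual code):

```cpp
#include <dom/domMesh.h>
#include <stdexcept>
#include <string>

// Hypothetical handler that lives elsewhere in the importer.
void ImportTriangles(domMesh* mesh);

// Thrown whenever the importer meets valid COLLADA it doesn't handle yet.
class UnsupportedColladaData : public std::runtime_error
{
public:
    explicit UnsupportedColladaData(const std::string& what)
        : std::runtime_error(what) {}
};

// Accept exactly the primitive layouts we've seen from our exporters
// (rigid), and fail loudly on anything else so the gap is obvious and
// a new case is trivial to add (flexible).
void ImportMeshPrimitives(domMesh* mesh)
{
    if (mesh->getTriangles_array().getCount() > 0)
        ImportTriangles(mesh);
    else if (mesh->getPolygons_array().getCount() > 0)
        throw UnsupportedColladaData("<polygons> not handled yet; add a case here");
    else
        throw UnsupportedColladaData("mesh has no primitives we recognise");
}
```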

Anyways, this is clearly not a definitive answer, but hopefully this sheds a bit of light on the decisions that went into our choices. Best of luck.

I do think this is a very sane approach.
When an element is not handled by your import code and is reported as such, one approach is to add the handling directly in the import code.
I would like to recommend another approach, which is to have a separate conditioning utility (COLLADA in, COLLADA out) that reformats the input into what the importer requires. This can be extended with several other tools, such as automatic LOD generation and geometry clean-up, that can be pipelined to create optimized data. Some of this processing is purely optimization, so for faster iteration it does not need to be run every time.

If the ‘conditioning’ process is called directly by the import code, this can be done in memory (using the COLLADA DOM, for example) and transparently. But keeping it as a separate step is more flexible if you want a build process that can condition all your data.
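For example, the conditioning could be structured as a chain of COLLADA-in / COLLADA-out passes over the in-memory document; something like the following, where ConditioningPass, GenerateLods and friends are just placeholder names for the idea, not an existing API:

```cpp
#include <dae.h>
#include <dom/domCOLLADA.h>
#include <vector>

// One COLLADA-in / COLLADA-out step; passes can be chained in any order.
class ConditioningPass
{
public:
    virtual ~ConditioningPass() {}
    virtual void Run(domCOLLADA* root) = 0;
};

// The kinds of passes mentioned above -- all placeholders.
class CleanUpGeometry : public ConditioningPass
{
public:
    void Run(domCOLLADA* root) { /* weld vertices, drop unused sources, ... */ }
};

class GenerateLods : public ConditioningPass
{
public:
    void Run(domCOLLADA* root) { /* add simplified geometry as extra <geometry> elements */ }
};

// Run the chain over an in-memory document. The same function can be
// called from the importer directly, or from a standalone build tool
// that loads, conditions and re-saves the .dae.
void Condition(domCOLLADA* root, const std::vector<ConditioningPass*>& passes)
{
    for (size_t i = 0; i < passes.size(); ++i)
        passes[i]->Run(root);
}
```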

This is really what we started with in the beginning, using .X as the intermediate format. We had too many issues with .X and moved to building an exporter plug-in for XSI 4.2 / 5.0, which we have been using since, but we are looking to switch the full import/export toolchain to Collada (especially since the NV tools are moving that way) and only support the shader builder / engine view plugins directly in XSI.

I appreciate all of the comments, many thanks to everyone. It looks like realistically it just boils down to the time issue which I will have to evaluate.

Thanks!

Yep, that’s the plan.

Hmm, I hadn’t considered that approach specifically. I was planning on using Asset Creation Software -> Collada -> Platform specific binary, rather than ACS -> Collada -> Conditioned Collada -> PSB. I’ll think about that some, as there could be benefits to keeping some more of the intermediaries around in Collada.

In any event, yes, having the asset conditioning pipeline as an external process that’s part of an overall integrated build process is definitely the way to go, for far too many reasons to enumerate here. :slight_smile: LOD, vertex cache friendly stripification, convex hull generation, mesh validation, and a bunch of others are certainly candidates for phases of that pipeline.

Cheers,
Jason

Just to correct and update this thread. In the past months FCollada has evolved rapidly. It now supports import, modification and export of COLLADA data. It also supports file referencing. ColladaMaya uses FCollada’s export capability, and ColladaMax and other FCollada-based tools are planned to support it too in the near future.

jwoynillowicz
If you’re using Visual Studio 2005, you have no choice: FCollada doesn’t work with VS2005.

Feeling Software uses Visual Studio 2003 for most of its development, but that doesn’t mean that FCollada can’t compile on more recent versions. We’ve compiled FCollada on a bunch of more exotic platforms, including the PlayStation 3.

Why don’t you open a bug at www.feelingsoftware.com/bugzilla? This is likely a simple compilation error.

Why don’t you use Visual Studio 2005? :slight_smile:

Because most of the software we support (e.g. Maya and 3dsmax API plug-ins) doesn’t support that version of Visual Studio.

Hi, look here for how to get FCollada running with VS2005:

https://collada.org/public_forum/viewtopic.php?t=446

greetings,
Sebastian Jancke

Thanks, I compiled it in Visual Studio 2005. Everything works fine. :slight_smile: