Evaluating Collada for our modeling project

Hi, I’m in the process of evaluating Collada for use as a model interchange format at my company. We have a lot of the same goals that I imagine many companies using 3d modeling content do:

  • We want our artists to be able to use their preferred tool for 3d content creation.
  • We’d prefer not to write importers/exporters for every package our artists or customers might possibly use.
  • The model format should be extensible and the import/export tools for the dcc packages should preserve custom data in the model as much as possible.

To that end, I have a few questions about Collada’s capabilities and dcc support.

(1) Many of our artists are fond of LightWave. Does anyone have any word on the status of Collada support in LightWave? I’m familiar with LwCollada, but it’s just a geometry importer (no exporter) that looks like it hasn’t been worked on in months. I also read a post where remi mentioned that NewTek engineers had told him at Siggraph 05 that they wanted to support Collada. Searching on the NewTek boards turns up a bunch of people asking for Collada support, but no response from NewTek representatives. Any word on this?

(2) If I understand things correctly the <extra> element is the key method for extensibility in Collada. Can anyone offer insight about how well the dcc tools preserve <extra> data blocks during the import/export process?

(3) Regarding the conformance tests, what does the “Sch” column heading mean? Also, I notice conformance results are posted for SoftImage and Maya, but not Max or Blender. Any particular reason?

Thanks,
Steve

Howdy sthomas!

Maybe you would be interested in collada-verse asset management… i think that using webdav to share access to a centralized version-controlled repo, one smart enough to check links between models and whatnot, could be super useful, and i even have some ideas about how it could be implemented and optimized for performance at any scale…

respond to my post, please, if you think this is an interesting idea. :slight_smile:

As regards:

(1) if you are a lightwave customer, perhaps we ought to try and get you in touch with the people who remi spoke with. if they are having trouble showing that this is important to lw customers, you can give them a use case.

(2) any compliant xml dom should preserve unrecognized CDATA, but no telling what specific importers do. i suggest in all cases testing the software you expect to interact with as much as possible.

(3) i don’t know. :wink: I suspect if conformance results are not posted for Max or Blender that not enough data has been received for conformance testing. As I understand it, the conformance suite is Free / Open-Source software, so you should be able to do your own conformance tests. I might be available to help with Blender stuff esp.
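
The distinction in (2), between the XML layer keeping unknown elements and a lossy importer dropping them, can be sketched like this. The element names and the `known` set are made up for illustration; real importers each behave differently, which is why testing is the only safe answer:

```python
import xml.etree.ElementTree as ET

# A COLLADA-style fragment with a vendor <extra> block the importer
# may not understand.
doc = """<node id="box">
  <extra type="myapp">
    <technique profile="MYAPP"><param name="friction">0.42</param></technique>
  </extra>
</node>"""

root = ET.fromstring(doc)
round_tripped = ET.tostring(root, encoding="unicode")

# A plain parse/serialize cycle keeps the unrecognized element intact...
assert "<extra" in round_tripped

# ...but an importer that rebuilds the scene from only the elements it
# recognizes silently drops it:
known = {"node", "translate", "instance_geometry"}
rebuilt = ET.Element(root.tag, root.attrib)
for child in root:
    if child.tag in known:
        rebuilt.append(child)
assert "<extra" not in ET.tostring(rebuilt, encoding="unicode")
```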

Ciao!

-=JR

I am the maintainer of the Blender plug-in and I hope I can answer a few questions.

From my understanding, the data files used for conformance testing were created a while ago (by Sony) for Maya and SoftImage but not for other software. The data was later used in conformance tests with the SoftImage plug-in and the Maya plug-in, which at that time were the only two plug-ins that had been “officially” announced.

For Blender I can use conformance results that I have created on my own, or use similar data sets. I decided a while ago that instead of focusing on conformance testing I would encourage people to write to me and ask for features they would like implemented, or to post their specific content pipeline needs here on the forum. I wish I could devote more time to the Blender plug-in, but until I can make a living from it instead of doing it in my spare time, that will not change.

If I understand things correctly the <extra> element is the key method for extensibility in Collada. Can anyone offer insight about how well the dcc tools preserve <extra> data blocks during the import/export process?

Do you mean collada 1.3?

There is also a <technique> element, which contains different profiles. The Maya and Max exporters export many of their tools' own properties that are not featured in collada through this element. I believe Maya and Max have better data exchange with their specific profiles. Blender does not use any profiles besides the COMMON type. If you need a specific property for a dcc tool, then request that feature from the developer and it could hopefully be included in the tool's specific profile.
If you need application-specific properties (for example AI behavior) on different objects, then the <extra> element could be exported. I hope to include this feature in upcoming versions of the Blender plug-in. This would allow artists to export information into the <extra> element through Blender's object links. This would hopefully solve the problem of extending the plug-in without rewriting (or even looking at!) the python plug-in code and without breaking the collada format specification.
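
As a rough sketch of what such an export might produce, here is a node carrying both a tool-specific <technique> profile and an application-specific <extra> param. The profile name "MYGAME" and the parameter are invented for illustration, not part of any real exporter:

```python
import xml.etree.ElementTree as ET

# Build a node with app-specific data attached via <extra>/<technique>.
# Profile and parameter names are hypothetical.
node = ET.Element("node", {"id": "enemy01"})
extra = ET.SubElement(node, "extra")
tech = ET.SubElement(extra, "technique", {"profile": "MYGAME"})
param = ET.SubElement(tech, "param", {"name": "ai_behavior", "type": "string"})
param.text = "patrol"

xml = ET.tostring(node, encoding="unicode")
assert 'profile="MYGAME"' in xml and "ai_behavior" in xml
```

An importer that understands the "MYGAME" profile can pick the data up; any other tool should simply carry the block through untouched.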

Collada 1.4 seems to have support for so many different elements and properties that the extensibility of the format is not necessary.

In any case, extensibility should mean you, a user of the plug-in, bugging the developers to ship more stable plug-ins and add more features. :wink:

From the 1.4 spec:

Each feature required in this section is tested by one or more test cases in the COLLADA Conformance Test Suite. The COLLADA Conformance Test Suite is a set of tools that automate the testing of exporters and importers for Maya®, XSI, and 3DS Max. Each test case compares the native content against that content after it has gone through the tool’s COLLADA import/export plug-in. The results are captured in both an HTML page and a spreadsheet.
This implies that conformance results should be available at least for Max. Conformance test info is very important for evaluating the reliability of an importer/exporter.

Do you mean collada 1.3?
No, 1.4. I’m in the process of reading the spec now so I might be using incorrect or outdated terminology.

Collada 1.4 seems to have support for so many different elements and properties that the extensibility of the format is not necessary.
Are you saying that custom data is unnecessary? If so, I strongly disagree. A lot of applications have very app-specific data that could never (and should never) be incorporated into the standard directly. Supporting this data requires format extensibility. And in order to use Collada files as source data, importers/exporters should preserve custom data as much as possible.

Let me rephrase the situation as follows: Let’s say I have a single float value associated with a material. This is custom data that my application needs. So I write a Collada model editor that lets the user open a model and edit (only) this parameter. Is there any way I could add this float value to the Collada document such that importing it into a modeling package and then exporting wouldn’t throw out the data? I don’t care if you can edit it in the modeling package. I just don’t want it thrown out.
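
A minimal sketch of the editor half of that scenario, assuming the float lives in an <extra> block on the material (the "MYEDITOR" profile and the param name are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical editor step: read and update a single float stored in a
# material's <extra> block. The open question in the thread is whether
# this block survives a round trip through a DCC tool.
doc = """<material id="metal">
  <extra><technique profile="MYEDITOR">
    <param name="shininess" type="float">0.25</param>
  </technique></extra>
</material>"""

root = ET.fromstring(doc)
param = root.find("./extra/technique/param")
value = float(param.text)    # the one value the editor exposes
param.text = str(value * 2)  # ...and the one edit it allows

assert root.find("./extra/technique/param").text == "0.5"
```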

I can see your problem now. And yes, the conformance results should be your answer to this question. If the conformance results don't provide information on this subject, then contact the developers about it (by e-mail, this forum, or private messages).

I have written a collada viewer myself for a third-party engine while working with the Blender plug-in so I am familiar with this problem both from a plug-in developers perspective and as a programmer. Here are some solutions I can think of.

  1. Wait for stable 1.4 plug-ins that feature import and export of the <extra> element in the dcc tools your company uses.

  2. Use collada params, supported in current plug-ins, as stand-ins for custom properties. For example, a friction coefficient in your material could carry your application-specific property (unless your editor happens to use the friction coefficient), or a material emissive param that uses 3 floats could instead represent other properties in your application. This ensures safe travel between dcc tools and allows you to edit the param in every package. Sure it’s a quick hack, but it gets the job done for time-critical projects.

  3. Use element names as hints for your application. Element names should be preserved by each dcc tool, so a clever naming system could carry information where collada is not sufficient.

  4. Modify and build your own plug-in from the plug-in sources to include support for the features you want.

For a long-term solution, I would post a request to each plug-in's developers for stable 1.4 releases with support for the <extra> element.
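
Workaround 3 above could be sketched like this. The "@key=value" naming convention is invented here, and note that some tools restrict which characters survive in names (Maya, for instance, is picky), so test the separator characters against your actual pipeline first:

```python
# Smuggle custom properties through element names, which most
# importers preserve even when they drop <extra> blocks.
def encode_name(base, props):
    """Append hypothetical @key=value pairs to an element name."""
    return base + "".join(f"@{k}={v}" for k, v in sorted(props.items()))

def decode_name(name):
    """Split the name back into (base, properties)."""
    base, *pairs = name.split("@")
    return base, dict(p.split("=", 1) for p in pairs)

name = encode_name("crate01", {"breakable": "1", "loot": "ammo"})
assert name == "crate01@breakable=1@loot=ammo"
assert decode_name(name) == ("crate01", {"breakable": "1", "loot": "ammo"})
```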

EDIT:
The 1.4 specification (5-6) clearly states as a requirement that:
It must be possible to import all conformant COLLADA data, even if some data is not understood by the tool and retained for later export. The <asset> element will be used by external tools to recognize that some exported data may require synchronization.

This was also a requirement for Collada 1.3 plug-ins but I don’t think any plug-in actually supported it.

The COLLADA plugin for XSI actually persists any extra data stored on known elements. You can also add custom (blind) data on models, materials, etc. as a Custom Parameter Set in XSI, and it will be exported as extra data on the respective owner element. In 1.3, extras are parameterized and map perfectly onto our Custom Parameter Set concept in XSI.

However, just to set expectations right, our plugin does not try to preserve the entire COLLADA document and do partial updates to it. Importing and exporting are duplication/transformation actions performed on a COLLADA document. In other words, we try our best to preserve the data in the COLLADA document as is, but when we export back, things that have a different equivalent in XSI are exported as an XSI-friendly equivalent. For example, say you have two meshes in the COLLADA document that share the same position array element. Importing this into XSI will result in two meshes with their own copy of the position array, because the DCC doesn’t support the original COLLADA mesh construct natively. Re-exporting this back as a COLLADA mesh will result in two meshes, each with its own position array element. That being said, improvements come with every new version of COLLADA support, so maybe one day :-). That’s it for this little parenthesis.

Using your original numbering of the relevant issues:

(3) Conformance data for 3dsMax: right now, the zipped release files should contain a .XLS file with the conformance test results. Since the conformance test suite is being redone for COLLADA 1.4, there are currently no up-to-date conformance test results for any of the DCC plug-ins.

The “Sch” column in the conformance test results stands for “Schema Validation”. That said, the test suite’s validation system for the COLLADA 1.3 conformance tests was broken, so disregard that column completely.

(2) Keeping <extra> data through import/export: this is a fairly complex problem, as I’m sure you know. For Maya, it would be possible to add dynamic attributes, but for 3dsMax that isn’t possible (except for scene nodes, as far as I know). So, we’re considering options right now. If you have any good ideas on how to do this, please feel free to share them with us ;). The only solution we have found so far would take quite a few months of development.

I like Master Tonberry’s solutions of re-using some of the known parameters for “custom” uses, or, for flags, adding them to the entity names. These methods have plenty of drawbacks, but they are time-proven and work!

Regards,

Yeah, I bumped into that yesterday but I’m having a difficult time figuring out exactly what it means. Does the term “conformant COLLADA data” include custom data included in <extra> blocks? Is there a difference between the phrase “It must be possible to import all conformant COLLADA data” and “It must import all conformant COLLADA data”? Is it trying to say that the tool must import all conformant data and retain it for export, or that it must import all conformant data even if it won’t retain it for export? What would be the point of importing data if it isn’t retained and exported? I wish the spec were a little more precise with its wording here.

In my opinion, this should be the model for how custom data should be handled in all tools. I understand that the process of importing a Collada document into a dcc tool is inherently transformative rather than a simple deserialization process, and as such the document won’t be perfectly preserved when passing through a tool. But I want at least some way to get my custom data to pass through cleanly. Now, if only I could get the damn thing to work so I could test it :lol:.

The ones available here don’t contain any xls files it seems. I’m more interested in 1.4 conformance results so I’ll wait for those tests to be finalized before asking again.

Yes, I know it’s tough. No, I don’t have any bright ideas I’m afraid. It seems to me that whether or not it’s doable depends on how flexible the dcc tool’s architecture is. SoftImage has a remarkably flexible (and very elegant) architecture that allows for this type of extensibility, and I’m sure that’s why the SoftImage tools have “proper” support for <extra> blocks whereas the other tools don’t. The only solution I see is for the tool vendors to modify their data structures to support this type of extensibility generically, the way SoftImage does, or to hack in more specific support for Collada <extra> blocks. Realistically, Collada probably isn’t a big enough player yet for us to convince the tool vendors to do that.

Can you share any info on this? We had thought about the problem for a while and had a few potential solutions, but nothing that couldn’t easily be broken via normal usage patterns.

But the tools can choose to throw out even “standard” Collada data if it wants to, right? I’m looking for something that’s sort of “guaranteed” to work in a standard-conforming Collada tool. I think what I’ll do is itemize all the custom data types we need, and see if we can map them into standard Collada elements that we won’t be using. Then I’ll do some tests on the dcc tools to see if they correctly import/export these data elements.

Thanks for all the replies and suggestions.

I just had a lengthy post thrown out by hitting the “Preview” button. Annoying.

I’ve spent some time evaluating Max and SoftImage’s support for <extra> data. Max just throws out any <extra> data it sees. SoftImage tries to import <extra> data into its custom parameter sets that are completely editable and exportable, but there are several bugs that I won’t get into here. Suffice it to say that I think the SoftImage guys have the right idea here. As far as Blender and Maya, from what I know they’re both similar to Max in that they just throw out <extra> data.

The problem: Two goals of Collada (as I understand it) are that it serve as a data interchange format and that it be extensible so that developers can adapt it to their needs. The key method of extensibility in Collada is the <extra> tag, but <extra> data isn’t interchangeable amongst most of the tools that support Collada. This greatly limits the usefulness of Collada’s extensibility.

In the importer requirements section the spec says that anything that requires export also requires import, and that “It must be possible to import all conforming COLLADA data, even if some data is not understood by the tool, and retained for later export.” I think this language is somewhat confusing and can be interpreted in a few ways, and that the people writing support tools for Collada aren’t totally sure what their responsibilities are with respect to custom data. This has led to the current situation where most tools just ignore <extra> data altogether.

The solution: The requirements specific to custom data should be fleshed out and made clear to provide direction to the tool developers. My suggestions for how exactly to do that:

(1) Identify a subset of all the elements that <extra> data can be attached to and require that a conformant Collada tool cleanly import/export that data. I think a good set to start with would be <scene>s, <node>s, and <material>s. Alternatively we could just require that all <extra> data be imported/exported cleanly regardless of what it’s attached to, but that would likely place a prohibitively high burden on the tool programmer, especially in cases such as Max and Blender where it’s difficult to arbitrarily attach data blocks to scene elements.

(2) Standardize the <param> types. As of now the spec just says that a param type “must be understood by the application”. That’s good, but int, float, bool, string, etc. should all be standard param types with a specified string representation. This would make it easier for a tool to provide gui support for <extra> data if it wanted to.

(3) Add custom data (<extra>) tests to the conformance test suite.
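
To illustrate suggestion (2), a tool with standard string representations could convert typed params to native values generically. The type names below are my own assumptions, not part of any spec:

```python
# Generic conversion from a hypothetical set of standardized param
# type names to native values, as a GUI-capable importer might do.
PARSERS = {
    "int": int,
    "float": float,
    "bool": lambda s: s.strip().lower() in ("true", "1"),
    "string": str,
}

def parse_param(type_name, text):
    """Convert a param's text content according to its declared type."""
    return PARSERS[type_name](text)

assert parse_param("int", "3") == 3
assert parse_param("float", "0.5") == 0.5
assert parse_param("bool", "true") is True
```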

I’m eager to hear your thoughts on this.

Thanks,
Steve

Also, I know that I could probably figure out some hack involving object names and hijacking various parameters to get around my problem for now. This is good as a temporary band-aid, but I’m hoping for something more generic and less hackish in the long term.

The problem is that you want to store the imported extra attributes directly in your .max, .mb, or .blend file for later data exchange. So if adding properties is not supported in some cases, then we (the developers) have to find other ways of storing the data.

Some of my ideas:
Blender allows script links for material and scene objects. This would allow artists to link script files to a node object or to a material. The script file could be in any format, and the text could contain any information the user wants. In any case, this text should provide enough information that, at export time, I can put the extra elements and data where I have decided they belong in the collada document.
My idea is that the artist creates a dummy scene node object (in Blender called an Empty) and puts it in the node hierarchy, named for example “COLLADAExtraData”. The artist then adds child nodes whose names correspond to the names of the objects (data or instances) that should receive properties. These child nodes have script links containing xml text that should be added as an <extra> element to the very object referenced by the node name.
When exporting, I can find the “COLLADAExtraData” node and, from it, its children. Whenever my exporter writes an xml element, it checks whether the element's ID matches one of the dummy nodes. If there is a match, it adds the information from the script link into an <extra> element in the document.
The “COLLADAExtraData” node itself is not exported into the final document, since its information has already been preserved there.
The COLLADA importer then simply recreates this node system whenever it encounters an <extra> element.

That was a bit implementation-specific for Blender, but other DCC tools could perhaps use the same node system for storing extra data. Even if a link system is missing for nodes, there should be some way of storing text information in objects other than nodes.

To allow artists to add and/or modify extra properties, a GUI can be scripted.
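
The export-time half of the dummy-node scheme above could look roughly like this. The scene representation and data shapes are invented stand-ins for Blender's actual API, just to show the ID-matching idea:

```python
# Children of the "COLLADAExtraData" empty are named after the IDs they
# annotate; their script links hold the raw <extra> XML to splice back
# into the document at export time.
def collect_extra_blocks(scene_nodes, script_links):
    """Map target IDs -> extra XML, from the dummy node's children."""
    holder = next(n for n in scene_nodes if n["name"] == "COLLADAExtraData")
    return {child: script_links[child] for child in holder["children"]}

scene = [
    {"name": "COLLADAExtraData", "children": ["metal", "enemy01"]},
    {"name": "metal", "children": []},
]
links = {"metal": "<extra>...</extra>", "enemy01": "<extra>...</extra>"}

extras = collect_extra_blocks(scene, links)
assert extras["metal"] == "<extra>...</extra>"
```

The exporter would then consult this map while writing each element: if the element's ID appears as a key, its <extra> XML is appended, and the "COLLADAExtraData" node itself is skipped.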


On a similar topic, most modelling software has features that can be exported into collada, but the tools' different content creation philosophies make it hard to get a perfect data exchange between them. Maya cannot understand light falloff data, and Blender cannot understand polygons with more than 4 vertices. Exchanging such data between DCC tools is not merely hard. It’s impossible.

Yes. In COLLADA, extension by addition is the purpose of <extra>. Extension by alternative (substitution) is what <technique> is for.

This is indeed a problem, and the DCC vendors have asked customers to give them direct feedback to help them prioritize their COLLADA feature support.

I think it’s more a matter of priorities for the DCC vendors than confusion. :wink:

The wording in the spec comes from a recognition that each DCC tool handles metadata differently. We wanted to make a strong statement that COLLADA requires import/export to be lossless and that DCC tools should improve their internal architectures (if need be) to comply with that requirement. Representatives of Alias, Discreet, and Softimage all participated in and approved these requirements. It’s a matter of time and priority as to which COLLADA features get implemented first, and customer feedback is a good way to help them set those priorities.

Thanks for taking the time to offer suggested improvements!

Without making exceptions, the spec currently requires that all elements be imported/exported faithfully. Perhaps a prioritized list of elements could be added to the requirements to give implementors an idea of the order of importance?

The <param> element is largely removed in COLLADA 1.4 in favor of <extra> and <technique> because they supply a proper context for extension.

Absolutely!

Thanks,
Marcus

This sounds like it might work. This is similar to what we were going to do for Blender when we were considering writing plugins for it. We were going to drop xml into the text document area of the Blender model. If you could get something going on this I’d be happy to bang on it to try to find bugs.

Are you absolutely certain the dcc guys knew they were agreeing to that? I mean, as it currently stands, there’s a monstrous gap between what the spec says a Collada tool must do and what the current Collada tools do, with respect to custom data. Perhaps it’s solely an issue of priorities as you say, but I don’t see any evidence that the dcc tools are moving in the direction of lossless Collada file import/export, e.g. ajclaude: “… our plugin does not try to preserve the entire COLLADA document …”.

Without making exceptions, the spec. currently requires that all elements be imported/exported faithfully. Perhaps a prioritized list of elements could be added to the requirements to give implementors an idea of the order of importance is?
Ok, I wasn’t totally sure that the spec required full data import/export. If that’s the case then I don’t think a prioritized list is necessary. What elements get <extra> data support will be decided by the dcc tool architecture rather than any prioritized list we’d come up with.

The <param> element is largely removed in COLLADA 1.4 in favor of <extra> and <technique> because they supply a proper context for extension.
Hmm, ok. It’s still in the spec and the spec says nothing about it being deprecated.

The reason <param> is nice is that it provides information about the type of data contained inside. So apps like SoftImage that want to provide a gui for this data can do a better job. A string gets a text box, a bool gets a check box, etc. That doesn’t actually require <param> of course… it just requires a standard way of specifying the type of data contained inside the <extra> block.

Are you absolutely certain the dcc guys knew they were agreeing to that? I mean, as it currently stands, there’s a monstrous gap between what the spec says a Collada tool must do and what the current Collada tools do, with respect to custom data.
Yes I’m sure! :wink: We have the verbal commitment written into the specification as requirements, but it still takes time and resources to follow through on them by each company.

COLLADA as an interchange format is everyone’s near-term goal, I think, with source (i.e. lossless) format support a longer-term goal and vision.

Khronos is working on a conformance test framework this year for COLLADA. Softimage and Autodesk have both expressed the intention of passing their products through this test process. We think the test suite is very important and will help to drive development of COLLADA in DCC and middleware tools.

It’s not deprecated. It’s just that 1.4 is different enough, due to strong typing in the schema, that 1.3 uses of <param> are largely gone. The meta data associated with weakly typed elements (e.g. <param>) has moved from each and every instance document into the schema, where it belongs. Instance documents will be a little bit smaller as a nice side-effect.

Exactly! The <extra> element is designed for this usage. It has a type attribute and techniques so that all kinds of information can be added.

Ah, of course. For some reason I thought <extra> didn’t have a way of specifying the type of data contained inside, and thus that it was advantageous to use <param>. My mistake.

Ok, I think that all my questions have been answered, and I want to thank everyone who contributed to the discussion. Due to the current dcc tool limitations with respect to custom data and the lack of Collada support in LightWave, Collada probably isn’t an ideal format for us at the moment. However, I’m very encouraged by Collada (the design is remarkably elegant imo), and I do think we’ll be adopting it at some point in the future. I hope that I raised awareness of the desirability of custom data persistence in the dcc tools.

Thanks,
Steve