Loading large Meshes

Quote from WebGL - General/Working Group Info:

I’d love to see the results!
I haven’t really implemented several ways to load meshes, but I have thought about it for some time. The idea was to include mesh and shader data in one single library file. A ResourceManager class loads one or more of these library files, e.g. one default file and some specific files for the current map or level. After that you can request resources by ID, for example “defaultshader.vs”.
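In code, the idea might look roughly like this (a sketch only; the class and method names are hypothetical, the actual implementation is part of the engine mentioned further down):

// Hypothetical usage of the ResourceManager idea; names are illustrative.
var manager = new ResourceManager();
manager.load("default.library");   // shaders and common meshes
manager.load("level1.library");    // resources for the current map/level
// Once loaded, resources are requested by ID:
var vsSource = manager.get("defaultshader.vs");
var mesh = manager.get("spheres.mesh");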

XML
The first thing I tried was to load data from an XML file like this:

<library>
	<vertexshader id="defaultshader.vs">
		uniform mat4 uModelViewProjection;
		uniform mat4 uModelView;
		...
		void main() {
			...
		}
	</vertexshader>	 
	<fragmentshader id="defaultshader.fs">
		varying vec3 vPosition;
		varying vec3 vNormal;
		varying vec2 vTexCoord;
		...
		void main()
		{
			gl_FragColor = ...
		}
	</fragmentshader>
	<mesh id="spheres.mesh">
		<vertices>
	0,-1,2,-2.94008e-07,-1,3.2738e-07,0.0,0.0,
	0.7236,-0.447215,2.52572,0.723607,-0.447219,0.525726,0.0,0.0,
	-0.276385,-0.447215,2.85064,-0.276387,-0.44722,0.850649,0.0,0.0,
	-0.894425,-0.447215,2,-0.894426,-0.447216,1.49148e-09,0.0,0.0,
	...
	0.528274,0.628275,-0.571137,0.528362,0.628139,-0.571205,0.0,0.0,
	0.589185,0.578092,-0.564509,0.589041,0.578138,-0.564612,0.0,0.0
		</vertices>
		<indices>
	1602,1123,402,1123,1602,885,283,885,1602,885,102,1123,1602,1603,283,1603,1602,1604,402,1604,1602,
	1604,172,1603,663,1122,47,1122,663,1604,172,1604,663,1604,402,1122,662,1603,172,1603,662,884,14,884,662,
	...
	20894,21610,23057,23057,21332,20755,21611,20894,23056,23056,22094,21611,20656,21611,22094,22094,
	23056,21136
		</indices>
	</mesh>
</library>

This is pretty fast, because browsers are designed to parse XML data. To parse the mesh data itself I put brackets (“[…]”) around the string and use the eval function, which returns an array. This is also fast, because eval is implemented in native code. I didn’t try a pure JavaScript parser, because even if eval is slow, it should still be at least as fast as a JS implementation.
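Extracting and parsing the vertex data then looks roughly like this (a sketch, assuming the library file was fetched with XMLHttpRequest and is available as xhr.responseXML):

// Pull the text content out of the XML and let eval build the array.
var doc = xhr.responseXML;
var mesh = doc.getElementById("spheres.mesh");   // requires the DTD, see below
var text = mesh.getElementsByTagName("vertices")[0].textContent;
var vertices = eval("[" + text + "]");           // brackets turn it into an array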

However, the problem with XML files is that getElementById() does not work out of the box: you need to specify a DTD. Because Firefox (I didn’t try other browsers) does not support external DTDs, you have to include the DTD definition in every library file. The DTD for the document above looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE library [
	<!ELEMENT library (vertexshader|fragmentshader|mesh)*>
	<!ELEMENT vertexshader (#PCDATA)>
	<!ATTLIST vertexshader id ID #REQUIRED>
	<!ELEMENT fragmentshader (#PCDATA)>
	<!ATTLIST fragmentshader id ID #REQUIRED>
	<!ELEMENT mesh (vertices, indices?)>
	<!ATTLIST mesh id ID #REQUIRED>
	<!ELEMENT vertices (#PCDATA)>
	<!ATTLIST vertices type CDATA #IMPLIED>
	<!ELEMENT indices (#PCDATA)>
]>

JSON
Because this DTD stuff is annoying, I tried JSON instead. Actually this was the idea of Jeff Chimene at the GWT groups. JSON is a really simple format; it’s just JavaScript code. You can parse the complete file at once with the eval function, and as a result you get a JavaScript object structure.

The library file above then looks like this:

{
"defaultshader.fs" : "varying vec3 vPosition;
varying vec3 vNormal;
varying vec2 vTexCoord;

uniform vec3 uLightPos;

const float cShininess = 100.0;
const vec4 cAmbient = vec4(0.2,0.2,0.2,1);
const vec4 cDiffuse = vec4(0.5,0.5,0.5,0);
const vec4 cSpecular = vec4(1,1,1,0);

void main() {
	vec3 lightDir = normalize(uLightPos-vPosition);
	vec3 normal = normalize(vNormal);
	gl_FragColor = cAmbient;
	float diffuse = max(dot(lightDir, normal), 0.0);
	gl_FragColor += cDiffuse * diffuse;
	//gl_FragColor = vec4(lightDir,1.0);
	if (diffuse > 0.0) {
		vec3 r = normalize( (2.0 * dot(normal, lightDir) * normal) - lightDir );
		float f = dot(r, normalize(-vPosition));
		float specular = pow(max(f, 0.0), cShininess);
		gl_FragColor += cSpecular * specular;
	}
}
",
"defaultshader.vs" : "uniform mat4 uModelViewProjection;
uniform mat4 uModelView;
uniform mat4 uNormalMatrix;

attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec2 aTexCoord;

varying vec3 vPosition;
varying vec3 vNormal;
varying vec2 vTexCoord;

void main() {
	gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
	vPosition = (uModelView * vec4(aPosition, 1.0)).xyz;
	vTexCoord = aTexCoord;
	//vNormal = normalize(uNormalMatrix * aNormal);
	vNormal = normalize(uNormalMatrix * vec4(aNormal, 0.0)).xyz;
}
",
"spheres.mesh" : {
	"vertices" : [
0,-1,2,-2.94008e-07,-1,3.2738e-07,0.0,0.0,
0.7236,-0.447215,2.52572,0.723607,-0.447219,0.525726,0.0,0.0,
...
0.528274,0.628275,-0.571137,0.528362,0.628139,-0.571205,0.0,0.0,
0.589185,0.578092,-0.564509,0.589041,0.578138,-0.564612,0.0,0.0],
	"indices" : [
1602,1123,402,1123,1602,885,283,885,1602,885,102,1123,1602,1603,283,1603,1602,1604,402,1604,1602,
...
21332,21332,23057,21610,20894,21610,23057,23057,21332,20755,21611,20894,23056,23056,22094,21611,20656,21611,22094,
22094,23056,21136]
}
}
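Loading the whole library is then a one-liner (a sketch, assuming the file was fetched as text via XMLHttpRequest; the parentheses keep eval from treating the object literal as a code block):

// Parse the complete library at once; eval does all the work.
var library = eval("(" + xhr.responseText + ")");
var mesh = library["spheres.mesh"];        // -> { vertices: [...], indices: [...] }
var fsSource = library["defaultshader.fs"];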

=> result: JSON is fast and also easy to implement, since you just use eval, which does the parsing for you. The only drawback is that you need some kind of tool to convert shader files into this format, since JSON does not support multi-line strings. However, a tool that converts newlines into “\n” is not that hard to write.

In both cases you need a tool that converts your mesh data into the required format. I wrote such a tool based on lib3ds for Linux x86_64; it also converts shader files. I will provide the source code if someone asks for it.

I have also implemented that ResourceManager stuff, but it is integrated into a complete WebGL engine. The engine is a module for the Google Web Toolkit and provides a WebGL wrapper, the ResourceManager, some math classes, and classes for comfortable use of shaders, textures, meshes and so on. Since GWT compiles Java 1.5 code into JavaScript, you don’t have to bother with JavaScript. Coding in Java is much better :wink: However, it will take some weeks until I can release that engine.

Since GWT compiles Java 1.5 code into JavaScript

What do you mean? Can someone explain this compiling stuff? I’m not familiar with the subject.

The Google Web Toolkit translates Java to JavaScript. This means you simply write your website in Java and use all the benefits like compile-time type checking, real classes, inheritance, polymorphism… You can create dynamic HTML elements by using AWT-like widget classes. GWT also optimizes the code (e.g. inlining functions) and takes care of browser-specific quirks. Because of these optimizations there is almost no overhead from the emulation. Since your client-side code is written in Java, you can even share parts of it with the server side when that is also written in Java. That makes things like AJAX really easy. Whenever Java is not sufficient, you can always switch back to JavaScript by using the native keyword. I should also mention that GWT is open source and available under the Apache 2.0 license.

Quote from the GWT page:

Writing web apps today is a tedious and error-prone process. Developers can spend 90% of their time working around browser quirks. In addition, building, reusing, and maintaining large JavaScript code bases and AJAX components can be difficult and fragile. Google Web Toolkit (GWT), especially when combined with the Google Plugin for Eclipse, eases this burden by allowing developers to quickly build and maintain complex yet highly performant JavaScript front-end applications in the Java programming language.

Coolcat – thanks for posting this, it’s really interesting stuff! Personally I’m happy with JavaScript (my own programming career has been BASIC -> Pascal -> C -> C++ -> Java -> Python, so I’m currently quite keen on dynamic languages) but the structure you’re suggesting for JSON meshes definitely looks good. What kind of speed did you get when you were rendering your 21,000-vertex mesh?

I’m loading a mesh with 23058 vertices and 46080 faces in about 0.65 seconds. That includes loading the library (2.2 MB!) from the server, parsing all the JSON data, compiling the shaders, creating a texture (38 kB, also loaded from the server), and creating the vertex and index buffers. Each vertex consists of 8 floats (position, normal and texcoords).

My system is an Intel Core 2 Quad Q9300 @ 2.5 GHz, and I’m using a local Apache server. If the browser has already cached the files, that saves about 0.03 seconds. However, loading 2.2 MB over the Internet would obviously take much longer. So I think my method is fast enough.

So the bottleneck here is the file size, not JavaScript speed. What about GZIP compression or something like that? Is there a free zlib implementation for JavaScript? Since the mesh file contains mostly digits and few other symbols, there should be great compression rates.

OK, browsers should already have native support for compressed data. For example, see mod_deflate for Apache. GZIP compression reduces the file size of the JSON library in my case by a factor of 4.

Hmmm. Browsers should certainly be able to uncompress, but a bit of googling around gave me the impression that JavaScript doesn’t have access to that code. Here’s a typical discussion: http://stackoverflow.com/questions/9022 … javascript

…so if you do find a way to do it, I’m sure there are a lot of people who’d be interested in hearing about it even outside the WebGL community :slight_smile:

So when you display your mesh, do you get a decent framerate?

but a bit of googling around gave me the impression that JavaScript doesn’t have access to that code.

That’s not required; it’s built into HTTP itself. A browser that supports compression automatically includes, for example,

Accept-Encoding: x-compress; x-zip 

in the header of each HTTP request. If the server also supports this, it delivers compressed data and the browser handles the decompression automatically. It’s part of the protocol… no JavaScript implementation required.

However, I’m not exactly sure how to configure Apache to do that…:wink:

So when you display your mesh, do you get a decent framerate?

I’m rendering at 93 fps using my Nvidia/Gainward GeForce 9800 GT. The mesh size doesn’t matter here: once the mesh is on the graphics card, everything is done on the GPU.

That’s not required; it’s built into HTTP itself. A browser that supports compression automatically includes…

That’s cool!

However, I’m not exactly sure how to configure Apache to do that…:wink:

Looking at the mod_mime documentation (Apache HTTP Server Version 2.2), it looks like if you have

AddEncoding x-gzip .gz
AddEncoding x-compress .Z

…in your mod_mime config, then at least pre-compressed mesh files stored as .gz on your server’s disk should go out with the right headers – so perhaps the browser would then decompress them happily?
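Alternatively, mod_deflate can compress responses on the fly. A minimal sketch (the directives are standard Apache 2.2, but the choice of MIME types to compress is an assumption):

LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/xml application/json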

Re: the performance – good point! I’ll eventually get used to thinking about this the right way… Do you know how much time it took to create the buffers – that is, the time to go from having the mesh stored in JavaScript arrays to having it sitting on the graphics card?

Do you know how much time it took to create the buffers – that is, the time to go from having the mesh stored in JavaScript arrays to having it sitting on the graphics card?

About 0.015 seconds – fast enough for static meshes, but a bit slow for larger dynamic data.
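For reference, “creating the buffers” means essentially this (a sketch; vertices and indices are assumed to be the arrays produced by parsing the library):

// Upload the parsed arrays into vertex/index buffer objects on the GPU.
var vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
var ibo = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);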

So I guess that would be 60fps for that mesh alone, and would scale roughly linearly? (Apologies for my ignorance!)

and would scale roughly linearly?

Yes, at least it should be in the O(n) class :wink:

If performance/memory usage is important, you might want to look at the latest open-source XML parsing lib called vtd-xml:

http://vtd-xml.sf.net

@barriers: Without a JavaScript implementation this will not be helpful here…

Thanks for the effort Coolcat

What’s the fastest way to load large binary data?
I have files with up to 2 GB of binary data in this format:
float32 float32 float32 uint8 uint8 uint8 uint8, repeating from the beginning.

At the moment I request the binary file content with AJAX and then use jDataView to parse it value by value. This takes about 40 seconds for 353 files with a total size of 20 MB. I’m using a local web server, so connection speed shouldn’t be the issue.

Browser support is spotty at this point, but I’ve been having a good deal of luck lately with requesting binary files using xhr.responseType = “arraybuffer”; and then sending vertex/index data directly to the GPU with subarrays of that buffer. There are endianness issues to contend with if you want it to be cross-platform, and the data has to already be in the right format/order, but otherwise I can’t think of a faster way to move buffers into video memory.
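A minimal sketch of that approach (the URL is hypothetical, and the file is assumed to contain tightly packed, ready-to-use vertex data):

// Fetch raw bytes and hand them to WebGL without any per-value parsing.
var xhr = new XMLHttpRequest();
xhr.open("GET", "meshes/level1.bin", true);
xhr.responseType = "arraybuffer";
xhr.onload = function () {
    var buffer = xhr.response;                // an ArrayBuffer
    var vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array(buffer), gl.STATIC_DRAW);
};
xhr.send();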

For binary data that you do need to manipulate, I’ve found DataViews to be pretty efficient. I even wrote a binary parsing utility around them. Again, though, support is a bit spotty.
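For the record format described above (three float32s followed by four uint8s, 16 bytes per record), a DataView-based parse might look like this (a sketch; buffer is assumed to be the ArrayBuffer from the XHR, and little-endian byte order is an assumption):

// Walk the 16-byte records: 3 x float32 (12 bytes) + 4 x uint8 (4 bytes).
var view = new DataView(buffer);
var stride = 16;
for (var offset = 0; offset + stride <= buffer.byteLength; offset += stride) {
    var x = view.getFloat32(offset, true);       // true = little-endian
    var y = view.getFloat32(offset + 4, true);
    var z = view.getFloat32(offset + 8, true);
    var r = view.getUint8(offset + 12);
    var g = view.getUint8(offset + 13);
    var b = view.getUint8(offset + 14);
    var a = view.getUint8(offset + 15);
    // ...use the values...
}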