Low texture performance via remote X connection

Hello everybody!

I guess I’m not the first one to run into the problem described below.

My Linux application draws a lot of text labels in an OpenGL scene using texture-mapped fonts from the FTGL library. It works well locally, but graphics performance gets significantly worse (several times slower) when I run the application on that same workstation from another machine via rlogin.
This is true both for a native Linux X server and for a Win32 X server (X-Win32, which advertises a hardware-accelerated remote GLX context).

My question: is there a way to make an OpenGL application achieve texture-mapping performance on a remote X server comparable to local performance, provided that the remote machine has the same or even better graphics hardware?

I have already seen some recommendations here about advanced OpenGL features over a remote X server, and AFAIU people prefer to use remote display software instead (e.g. VNC, VirtualGL, …). However, in my case the workstation on which the OpenGL code runs must be able to connect to several X servers and be used simultaneously by several users. I’m afraid that isn’t possible with a remote display solution, is it?

Thanks a lot in advance for any ideas how to improve texturing performance in my case!

Well, you need to work with your application running in a remote X server setup where you can see this perf problem, and get to the bottom of which aspect of what you’re doing is causing it. Then you’ll/we’ll have a better idea whether there’s anything you can do about it.

Are you uploading textures during rendering (not just during startup)? Potential perf problem. Are you rendering a bunch of really tiny batches? Potential perf problem. Are you uploading and rendering big textures? Potential perf problem.

Once you identify the main problem, post back and I’m sure folks can give you some ideas.

Short of that, one thing you can just try is to use ssh’s remote X forwarding feature to route the X client protocol through the SSH connection, and you can also tell ssh to compress that connection. That might help your perf, particularly if what you’re sending is very compressible and/or your network is relatively slow.

To do this, use this command to have a client connect to the remote server:

ssh -C -X servermachine

which requests compression and X forwarding. Now in the resulting shell, verify that your X connection is going to be routed through SSH:

> echo $DISPLAY
localhost:10.0

The display number will be something other than 0. Then try a test app to verify it’s working:

> glxgears

And if so, go ahead and try your app.

Hi Dark Photon,

Thanks a lot for your answer, it’s a good roadmap to approach my problem, indeed! I’ve already started following it, here are some results.

The font texture is uploaded “on demand” - that is, when a font is rendered for the first time. Afterwards the texture remains unchanged, that is, no glTexImage2D() or glTexSubImage2D() calls take place; I’ve checked this with gdb.
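Just to be clear about what I mean by “on demand”, the pattern is roughly the following (a minimal sketch of the idea, not the actual FTGL code; the function and variable names are made up): the glyph texture is created and uploaded the first time it is needed, and every later draw only binds the existing texture.

```cpp
// Minimal sketch of "upload on first use, reuse afterwards".
// Not the real FTGL code path, just the pattern described above:
// glTexImage2D runs once; later frames only bind the existing texture.
#include <GL/gl.h>

static GLuint glyphAtlas = 0;   // 0 means "not created yet"

void bindGlyphAtlas(const unsigned char* pixels, int w, int h)
{
    if (glyphAtlas == 0) {
        // First use: create and upload the atlas once.
        glGenTextures(1, &glyphAtlas);
        glBindTexture(GL_TEXTURE_2D, glyphAtlas);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, w, h, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
    } else {
        // Every later call: no upload, only a bind.
        glBindTexture(GL_TEXTURE_2D, glyphAtlas);
    }
}
```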

As for “a bunch of really tiny batches” - probably yes, as I draw about 20K glyphs (each one a texture-mapped quad) - but I don’t see why that is a problem. Previously we used polygonal text in our application, where each character was represented by its own mesh containing dozens of triangles, and still the “remote performance” was acceptable… Probably I don’t quite get what you mean, so could you please explain this point in more detail?

I have also tried your proposal related to SSH, unfortunately it gives no visible performance boost in my case :frowning:

Now I’m playing with a slightly modified FTGL demo application (http://sourceforge.net/projects/ftgl/, see demo/FTGLDemo.cpp): I removed all font types except the texture-mapped one, and I draw the sample string 200 times in a 20 x 10 lattice to simulate the amount of text drawn by my app.
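For reference, the modified demo loop looks roughly like this (a sketch from memory, not the exact FTGLDemo.cpp code; the font path and layout constants are placeholders):

```cpp
// Sketch of the modified FTGL demo: draw the same sample string in a
// 20 x 10 lattice with a texture-mapped font (200 strings total).
// Font path and layout constants are placeholders, not the demo's values.
#include <GL/gl.h>
#include <FTGL/ftgl.h>

FTTextureFont font("/usr/share/fonts/truetype/somefont.ttf");

void drawLattice()
{
    font.FaceSize(18);
    for (int row = 0; row < 10; ++row) {
        for (int col = 0; col < 20; ++col) {
            glPushMatrix();
            glTranslatef(col * 120.0f, row * 40.0f, 0.0f);
            font.Render("The quick brown fox");
            glPopMatrix();
        }
    }
}
```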

I have some progress: VirtualGL + TurboVNC give me what I need!!! Performance is OK! :slight_smile:

However, it would be great to find out the reason for the bad texturing performance, even over a hardware-accelerated X connection… Some other OpenGL features that also require hardware acceleration, like point sprites, work fine in X-Win32 - but something went wrong with the “good old” textures :frowning:

Hmm. If you suspect this may be a problem, you might try prerendering all of the font characters to force them to upload at startup.
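Something along these lines, done once right after the font is created, should force the glyph texture to be built and uploaded before the first real frame. This is just a sketch, assuming FTGL builds its glyph textures lazily on first Render, which matches what you described:

```cpp
// Sketch: render the whole printable ASCII range once at startup so the
// glyph texture generation/upload happens before the first real frame.
// Assumes FTGL builds glyph textures lazily on first Render, as observed.
#include <string>
#include <GL/gl.h>
#include <FTGL/ftgl.h>

void warmUpFont(FTTextureFont& font)
{
    std::string all;
    for (char c = 32; c < 127; ++c)   // printable ASCII
        all += c;

    glPushMatrix();
    font.Render(all.c_str());          // forces glyph generation + upload
    glPopMatrix();
    // Optionally clear the framebuffer afterwards so this warm-up draw
    // is never visible on screen.
}
```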

As for “a bunch of really tiny batches” - probably yes, as I draw about 20K glyphs (each one a texture-mapped quad) - but I don’t see why that is a problem. Previously we used polygonal text in our application, where each character was represented by its own mesh containing dozens of triangles, and still the “remote performance” was acceptable… Probably I don’t quite get what you mean, so could you please explain this point in more detail?

A batch is a “draw call” (e.g. glDrawElements).

There are at least two ways a font could be rendered: 1) each character is its own batch (draw call), or 2) a bunch of characters in one batch (draw call). Almost certainly there’s a lot less GLX overhead for option #2 than option #1. But of course, this implies that an entire font is in one texture, not just single characters.
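To make the difference concrete, here’s a rough sketch of the two options (immediate-mode-era GL; the glyphQuad() helper and the 16x16 atlas layout are made up for illustration, this isn’t FTGL’s actual internals):

```cpp
// Rough sketch of the two options, assuming a single font-atlas texture and
// a hypothetical glyphQuad() helper that fills the 4 corner vertices
// (position + atlas texcoords) of one character's quad.
#include <GL/gl.h>
#include <vector>

struct Vertex { float x, y, s, t; };

// Hypothetical helper: here the atlas is assumed to be a 16x16 grid of glyphs.
static void glyphQuad(unsigned char c, float penX, Vertex out[4])
{
    float s0 = (c % 16) / 16.0f, t0 = (c / 16) / 16.0f;
    float s1 = s0 + 1.0f / 16.0f, t1 = t0 + 1.0f / 16.0f;
    Vertex v[4] = { { penX,          0.0f, s0, t1 },
                    { penX + 10.0f,  0.0f, s1, t1 },
                    { penX + 10.0f, 12.0f, s1, t0 },
                    { penX,         12.0f, s0, t0 } };
    for (int i = 0; i < 4; ++i) out[i] = v[i];
}

// Option 1: one tiny draw call per character -> ~20K batches per frame.
void drawPerGlyph(const char* text, GLuint atlas)
{
    glBindTexture(GL_TEXTURE_2D, atlas);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    float pen = 0.0f;
    for (const unsigned char* p = (const unsigned char*)text; *p; ++p, pen += 10.0f) {
        Vertex quad[4];
        glyphQuad(*p, pen, quad);
        glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &quad[0].x);
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &quad[0].s);
        glDrawArrays(GL_QUADS, 0, 4);                    // one tiny batch per glyph
    }
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

// Option 2: accumulate every glyph's quad, then issue a single draw call.
void drawBatched(const char* text, GLuint atlas)
{
    std::vector<Vertex> verts;
    float pen = 0.0f;
    for (const unsigned char* p = (const unsigned char*)text; *p; ++p, pen += 10.0f) {
        Vertex quad[4];
        glyphQuad(*p, pen, quad);
        verts.insert(verts.end(), quad, quad + 4);
    }
    glBindTexture(GL_TEXTURE_2D, atlas);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].s);
    glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());    // one batch for everything
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

With option 1 your 20K glyphs mean 20K tiny requests per frame going through the X connection; with option 2 they collapse into one large one.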

I don’t know how this app is doing font rendering, but I’d hope it’s option 2). When you’re talking to a local X server, bandwidth isn’t that relevant – you’ve got tons. When talking over the net, bandwidth and latency can become issues.

That said, I’ll caveat this by saying I’ve never dived into the GLX protocol, so I don’t know exactly what form it takes.

Speaking of latency, I wonder. Does this app do anything that would break pipelining and stall your app? That is, request any information from the GL X server? For example, glGetError, occlusion query, glFinish, etc. Those should make for nice delays over the network.
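For instance, a glGetError check every frame is a synchronous round trip to the X server, which is cheap locally but not remotely. Something like this (DEBUG_GL is a made-up compile-time switch, not a standard macro) keeps the check available for debugging without paying for it in normal runs:

```cpp
// Sketch: keep per-frame glGetError checks for debug builds only, since each
// call is a synchronous round trip to the (possibly remote) X server.
// DEBUG_GL is a hypothetical compile-time switch, not a standard macro.
#include <GL/gl.h>
#include <cstdio>

inline void checkGlError(const char* where)
{
#ifdef DEBUG_GL
    GLenum err = glGetError();          // round trip: stalls the pipeline
    if (err != GL_NO_ERROR)
        std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
#else
    (void)where;                        // no-op in release / remote runs
#endif
}
```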
