Many rendering contexts

Hi all,
at the moment I'm writing an application with many OpenGL windows. I'd like to switch from a stacked layout to e.g. a 12x12 layout. My first attempt was to create a separate rendering context for every window (144 in total), but the initialisation is very, very slow. Perhaps the approach is wrong?!?

What can I do? Thanks for any help… :slight_smile:

Are you actually using the individual windows? I.e. do they have borders, or are they just individual areas on the screen, each of which can have something different rendered to it? If it's the latter, then you should be using a single window with multiple viewports.

If you have 144 windows, with borders, and the windows must look like windows, then you're probably stuck with a long initialisation time (what do you mean by slow? 30 seconds? 60 seconds? A day?). Have you measured to see whether it's the creation of so many windows or the actual initialisation of your rendering contexts that is taking so long?

The windows (segments) don't have borders etc. They are just individual areas on the screen. But it should be possible to do different things in every segment, e.g. to play movie A on segment 12 and movie B on segment 18 at the same time.
I think I need a context for every segment, because how else can I update (or SwapBuffers) each segment individually with one context?

The software is written in C# and I have written a managed C++ OpenGL wrapper. On a Pentium 4 (1700 MHz) in dual-monitor compatibility mode with an nVIDIA FX5600, the initialisation takes 1:49.712 min!!! I know the dual-monitor mode is a big problem, but even on a Pentium 4 (2400 MHz) in single-monitor mode with an nVIDIA FX 5600 Ultra the initialisation takes 14 sec. This is also too long… :frowning:

Thanks …

rgpc: nice hobby – I also do Paragliding :slight_smile:

You only need a single context.

You can use glViewport() and glScissor() to only affect certain parts of a bigger window. Create the context with SWAP_COPY pixel-format parameters (PFD_SWAP_COPY on Windows), and a SwapBuffers will then only affect the part that's viewported/scissored.
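Something along these lines should work; this is just an untested sketch, assuming a Win32 window whose pixel format was created with PFD_SWAP_COPY. winWidth, winHeight, hdc and drawSegmentContent() are placeholders for your own code:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Placeholders -- supply these from your own window setup.
extern int winWidth, winHeight;     // client area size
extern HDC hdc;                     // device context of the GL window
void drawSegmentContent(int col, int row);

// Redraw only one cell of a 12x12 grid inside a single window/context.
void updateSegment(int col, int row)
{
    const int segW = winWidth  / 12;
    const int segH = winHeight / 12;
    const int x = col * segW;
    const int y = row * segH;       // GL origin is the lower-left corner

    glViewport(x, y, segW, segH);   // map the scene into this cell
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, segW, segH);    // clip clears and draws to the cell

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawSegmentContent(col, row);

    glDisable(GL_SCISSOR_TEST);
    SwapBuffers(hdc);               // with PFD_SWAP_COPY the back buffer is
                                    // preserved across swaps, so the other
                                    // segments keep their last contents
}
```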

jwatte’s suggestion will get you across the line. With two monitors you would (AFAIK) have two contexts (one for each monitor).

As for Paragliding (I presume it’s in my profile - XP SP2 stops you from being able to see the popups - and won’t let me specify that this site is “safe”) I haven’t been paragliding since my honeymoon - almost 5 years ago. :smiley: Shows how often I look at my profile. :wink:

I just want to point out one thing. Suppose that each of these areas is to be rendered at a different "frame rate", and by this I mean at different time intervals. If all of these areas are to be truly independent of each other, I would assume that one needs to implement a different context for each one and have something like a different thread render each one. In more modern OpenGL implementations this is possible (I use 3D-accelerated data visualisation programs simultaneously on a Linux workstation and never have a problem). This is somewhat non-intuitive, because OpenGL is a state machine… but I know different contexts can work independently on modern implementations.

Am I making sense? Is this possible, and do you think this is a good approach?

Maybe I am thinking too simply, but wouldn't it suffice to treat every "section" as a quad with a texture?

This way you can update the sections at any interval you want, and you don't run into trouble when the sections overlap each other (just assign a different z value to each quad).

But perhaps I am missing something.

Multiple contexts are possible, but I don't think they're a good idea. The screen only refreshes every 16 milliseconds (at 60 Hz); as long as you can update the areas that need updating within that time period, it doesn't matter whether you're single-threaded or multi-threaded. However, I've found overall throughput with single-threading and a single context to be significantly higher when drawing to the same device – because, after all, there's just a single piece of hardware doing all the rendering.

Many thanks for all the answers and hints…

It is not possible to use only one thread, because it should be possible to play different (size, speed, data) movies in the segments. Is this possible with one context? The data of each movie are preprocessed in a pipeline, and these pipelines run in different threads.
Regarding the refresh rate: I think if you switch off the vertical sync of your graphics card you can reach a higher refresh rate.

If I treat every segment as a quad with a texture, I'll lose a lot of performance, because OpenGL will synchronise the whole screen. And if I have many different OpenGL objects in every segment, all objects will be updated every time – this is not possible.
I need a solution where I can update only a part of my screen – is this possible if I use one context?
I will test the suggestion of glViewport() and glScissor()… Thanks

rgpc: What do you mean by AFAIK?!?

We have a lot of apps using multiple threads, and in that case we find it easier to use multiple rendering contexts. The design also scales well if you want to move to multiple computers…

Each thread renders its own context, then waits for a sync that is generated by the last thread to finish, which triggers all threads to do the swap. That way you get minimal tearing between multiple windows.
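As a rough illustration (not our actual code), the idea maps onto something like C++20's std::barrier; renderFrame() and swapWindow() below are hypothetical stand-ins for the per-context work:

```cpp
#include <barrier>
#include <thread>
#include <vector>

constexpr int kContexts = 4;

// Hypothetical per-context hooks -- replace with real rendering/swapping.
void renderFrame(int /*contextId*/) { /* draw into this thread's context */ }
void swapWindow(int /*contextId*/)  { /* SwapBuffers for this window */ }

// Every thread renders, then blocks; the last one to arrive releases the
// group, so all the swap calls happen as close together as possible.
std::barrier swapSync(kContexts);

void renderLoop(int contextId, int frames)
{
    for (int f = 0; f < frames; ++f) {
        renderFrame(contextId);
        swapSync.arrive_and_wait();   // wait for the slowest thread
        swapWindow(contextId);
    }
}

int main()
{
    std::vector<std::jthread> threads;
    for (int i = 0; i < kContexts; ++i)
        threads.emplace_back(renderLoop, i, 100);
}   // jthreads join automatically on destruction
```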

AFAIK - As Far As I Know.

I haven't used multiple monitors, but from memory I think you have to have a window/context per monitor (I might be wrong; no doubt someone else on this list will know the answer to this).

It is true that you can reach a higher "frame rate" by disabling vsync. However, as jwatte was indicating, while the rendering frame rate may be higher than you achieve with vsync, the actual displayed frame rate is still only that of the vsync rate. This is because your monitor is refreshing at a specified rate. Any extra frames are discarded because they can't be displayed if the monitor is not ready for them.

Displaying movies, and using them as textures, is just a matter of timing. There's a tutorial kicking about (I think it's on gametutorials.com or maybe on nehe.gamedev.net) where an AVI is used as a texture. The way it works is that it queries the AVI stream to find out which frame should be drawn at the specified time. The time specified is simply the number of seconds from the start of the movie. The frame rate doesn't really come into it. If the time since the last render hasn't progressed enough, then you get the same frame as on the last rendering pass.
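The lookup itself is nothing more than this (a hypothetical sketch; frameRate and frameCount would come from the actual AVI stream):

```cpp
#include <algorithm>

// Map elapsed playback time to a movie frame index. If two render passes
// fall within the same frame interval, they get the same index, so the
// same texture is simply drawn again.
int frameForTime(double secondsSinceStart, double frameRate, int frameCount)
{
    int frame = static_cast<int>(secondsSinceStart * frameRate);
    return std::min(frame, frameCount - 1);   // clamp at the final frame
}
```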

And when you think about it, an AVI plays at something like 29.97/30 fps – and your vsync is at least 60 fps – so every second frame will be identical to the previous frame that was rendered.

With 144 "contexts" the overhead of switching those contexts would be horrendous. All you really need to do is render each image (as a texture) each frame, based on how far you have progressed through the length of the video.

Originally posted by psp:

If I treat every segment as a quad with a texture, I'll lose a lot of performance, because OpenGL will synchronise the whole screen. And if I have many different OpenGL objects in every segment, all objects will be updated every time – this is not possible.

On the contrary: your "main" drawing loop just draws a bunch of quads with textures, that's all.

One or more worker threads can then update the content of their associated texture(s) whenever you like and however you like – for example, by just replacing the texture with the next frame from a video file, or by rendering a complete OpenGL scene to a texture (render-to-texture).

Of course you need some sort of signalling so the worker threads know it's safe to update the textures and so on, but you need something along those lines anyway.
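To make the idea concrete, here is a minimal fixed-function sketch (texIds, the 12x12 grid and the texture setup are assumptions, not a complete program):

```cpp
#include <windows.h>
#include <GL/gl.h>

const int kCols = 12, kRows = 12;
GLuint texIds[kCols * kRows];   // one texture per segment, created elsewhere

// Draw the whole grid as textured quads. Assumes GL_TEXTURE_2D is enabled
// and an orthographic projection mapping (0,0)-(kCols,kRows) is in place.
void drawAllSegments()
{
    for (int r = 0; r < kRows; ++r) {
        for (int c = 0; c < kCols; ++c) {
            glBindTexture(GL_TEXTURE_2D, texIds[r * kCols + c]);
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(c,     r);
            glTexCoord2f(1, 0); glVertex2f(c + 1, r);
            glTexCoord2f(1, 1); glVertex2f(c + 1, r + 1);
            glTexCoord2f(0, 1); glVertex2f(c,     r + 1);
            glEnd();
        }
    }
}

// Called when a worker has decoded a new movie frame; glTexSubImage2D
// replaces the pixels without reallocating the texture.
void updateSegmentTexture(int segment, int w, int h, const void* pixels)
{
    glBindTexture(GL_TEXTURE_2D, texIds[segment]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
```

(Note that with a single context, the actual glTexSubImage2D upload has to happen on the thread that owns the context; the worker threads just decode and hand over the pixel data.)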

Regards,
LG