Concerning combining shaders

Hello,

If a shader-program is used without rasterisation, I have a function that (in order to simplify my program) automatically creates a minimal fragment-shader, to avoid shader-optimisation problems.

Because for rendering images (as far as I know - but I still have to learn a lot) the vertex-shader is only used to pass varyings to the following shader, I now wonder if it may also be possible to automatically generate the vertex-shader in this case. So my question is: what else can the vertex-shader be used for, if tff is not used?

Best,
Frank

The intended purpose of the vertex shader is to assign the output variables which are interpolated to provide the fragment shader’s inputs. Initially that was all it could do, but later versions added images, atomic counters and buffer variables.

If a shader-program is used without rasterisation, I have a function that (in order to simplify my program) automatically creates a minimal fragment-shader, to avoid shader-optimisation problems.

If you are discarding rasterized primitives, then you don’t need a fragment shader at all. I don’t know what “shader-optimisation-problems” you’re referring to, but it is perfectly valid to not have a fragment shader in a linked program.
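
For illustration, a minimal sketch (assuming a compiled vertex shader `vs` and a made-up varying name):

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);                     /* vertex shader only; no FS attached */
const char *varyings[] = { "tff_out" };       /* made-up varying name */
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);                          /* links fine without a fragment shader */

glEnable(GL_RASTERIZER_DISCARD);              /* nothing is rasterized at all */
/* ... glBeginTransformFeedback / glDrawArrays / glEndTransformFeedback ... */
glDisable(GL_RASTERIZER_DISCARD);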

Because for rendering images (as far as I know - but I still have to learn a lot) the vertex-shader is only used to pass varyings to the following shader, I now wonder if it may also be possible to automatically generate the vertex-shader in this case.

“Automatically generate” it to do… what, exactly? And “automatically generate” it from what? Not every vertex shader does the same things in the same way. Even if you hand someone a list of the VS’s input values, they could only “generate” the code for it if they follow a bunch of assumptions about what those values should be doing.

if tff is not used

What is “tff”?

I see, some explanations are needed :wink:

tff

Transform feedback - sorry, I thought it was a common abbreviation.

If you are discarding rasterized primitives, then you don’t need a fragment shader at all. I don’t know what “shader-optimisation-problems” you’re referring to, but it is perfectly valid to not have a fragment shader in a linked program.

I had a lot of problems getting it to work last week. One piece of information I got from this forum was that a fragment-shader may be needed, because OpenGL might optimise shader-code in a way that an output varying of the vertex-shader that has no corresponding input in the fragment-shader is optimised out. And so it is. I just tested it again: without a minimal fragment-shader my transform-feedback varyings stay empty.

“Automatically generate” it to do… what, exactly?

In order to have non-empty transform-feedback varyings it’s quite simple: I use the following fragment-shader template and replace XXXXX by the name of my transform-feedback varying.


// Template: XXXXX is replaced by the transform-feedback varying's name.
// The #version directive is assumed here (in/out declarations need GLSL 1.30+).
#version 400
in float XXXXX[12];
out vec4 color;
void main() {
    color = vec4(XXXXX[0], 1, 1, 1);
}

So whenever a transform-feedback varying is linked to the shader-program, this code is automatically compiled as the fragment-shader by my application.
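
For what it’s worth, such template splicing might look roughly like this in C (hypothetical sketch; `varyingName` and the fixed array size are assumptions, not the application’s actual code):

char src[512];                                 /* needs <stdio.h> for snprintf */
snprintf(src, sizeof src,
         "#version 400\n"
         "in float %s[12];\n"
         "out vec4 color;\n"
         "void main() { color = vec4(%s[0], 1, 1, 1); }\n",
         varyingName, varyingName);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
const char *p = src;
glShaderSource(fs, 1, &p, NULL);               /* hand the generated source to GL */
glCompileShader(fs);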

One of the advantages of this: as my shader sources are stored in separate files, no fragment-shader file is needed this way for programs without rasterisation.

Now, the second point is that my application uses shader-programs mostly in pairs: one (without rasterisation) is used to fill vertex-arrays via feedback, the second one is used when the array is drawn to screen. The first one needs no fragment-shader (or only the auto-generated placeholder), and in the second one most things happen in the fragment-shader.
So my idea was to use the same method to generate a vertex-shader for the second program. This will be a bit more complicated than this simple fragment-shader, but it seems possible.

The intended purpose of the vertex shader is to assign the output variables which are interpolated to provide the fragment shader’s inputs. Initially that was all it could do, but later versions added images, atomic counters and buffer variables.

Images and atomic counters I do not understand yet. But as far as I understand you, without these the vertex-shader is only used to transfer varyings to the following shaders. This is really interesting. So I’ll write this auto-generation function. If I later implement things like counters etc., it’ll be easy to add a flag that switches off this feature when a “real” vertex-shader is needed.

Thank you!

Frank

[QUOTE=art-ganseforth;1292694]

I had a lot of problems getting it to work last week. One piece of information I got from this forum was that a fragment-shader may be needed, because OpenGL might optimise shader-code in a way that an output varying of the vertex-shader that has no corresponding input in the fragment-shader is optimised out. And so it is. I just tested it again: without a minimal fragment-shader my transform-feedback varyings stay empty.[/QUOTE]

Even if you explicitly provide specific TF outputs with glTransformFeedbackVaryings before you call glLinkProgram? That sounds like a driver bug to me.
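
That is, assuming a sequence like this (a sketch; the varying name is made up):

const char *names[] = { "tff_out" };          /* made-up name */
glTransformFeedbackVaryings(prog, 1, names, GL_SEPARATE_ATTRIBS);
glLinkProgram(prog);                          /* must come AFTER the varyings call */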

In order to have non-empty transform-feedback varyings it’s quite simple

When I said “automatically generate it”, the “it” in question was the vertex shader you were talking about.

Also, as Dark Photon mentioned, if you have to make up a fragment shader you don’t use, then something is going wrong.

So my idea was to use the same method to generate a vertex-shader, for the second program. This will be a bit more complicated, like this simple fragment-shader, but it seems possible.

Why do you need a feedback operation here at all? If a pass-through VS is adequate for rendering the feedback vertices (that is, you’re not doing anything to further transform the data, like camera transforms and the like), then you could have just rendered the object instead of doing transform feedback. So there seems to be a lot of time spent writing and reading data that doesn’t need to happen.

Are you reading the output vertex data on the CPU or something? If that’s the case, you can render while doing a feedback operation; it doesn’t have to be either/or. Are you processing the data between the feedback and the eventual render? If so, why is a geometry shader not sufficient to do that processing?
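
For reference, a minimal sketch of rendering during a feedback operation (assumes a feedback buffer `tfBuf` already exists and that the primitive modes match):

glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuf);
glBeginTransformFeedback(GL_TRIANGLES);
/* without GL_RASTERIZER_DISCARD, this draw rasterizes to the framebuffer
   AND records the vertex outputs into tfBuf at the same time */
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glEndTransformFeedback();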

Even if you explicitly provide specific TF outputs with glTransformFeedbackVaryings before you call glLinkProgram? That sounds like a driver bug to me.

Yes. This was one of the problems I had. I thought it should work without one, until GClements recently told me to try it. I’m not totally, but mostly (95%), sure that this is not caused by my program.

Why do you need a feedback operation here at all?

Puh…
The principles of my application…

What I currently do is draw my vertex-array, without rasterisation, into a second (temporary) one. This may be done (without using any of the input values) just to set completely new values. After this I swap the pointers and delete the old vertex-array. The result is then rendered to an FBO in the second step.
The point is that for transform feedback a glDrawArrays call is used. So there must exist an array to be drawn, even if I don’t use its data.

If I, for example, want to generate a torus or a sphere, I only need the vertex-id and some data concerning “width” and “height” of the array. So I can draw a NULL-array, just using the shader calls, to generate vertices (normals etc.) for a 3d-object. But there may also be data inside the drawn array. It’s up to the shader-program to use it or not.
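
A rough sketch of that idea (not my actual shader; only MapSize matches my real uniforms, the rest is illustrative):

const char *vsSource =
    "#version 400\n"
    "uniform vec2 MapSize;                 // grid width/height\n"
    "out vec3 tff_out;                     // captured by transform feedback\n"
    "void main() {\n"
    "    float x = mod(float(gl_VertexID), MapSize.x);\n"
    "    float y = floor(float(gl_VertexID) / MapSize.x);\n"
    "    tff_out = vec3(x / MapSize.x, y / MapSize.y, 0.0);\n"
    "}\n";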

So, if there has to be an array to be drawn anyway, why shouldn’t I always:

  • create a destination-array
  • use the existing vertex-array just for calling the shader, even if the data is not used
  • swap pointers afterwards?

Best,
Frank

Concerning this (maybe you need it):


GPU-Vendor:  Intel 
GPU-Renderer:  Intel(R) HD Graphics 4000 
GL-Version:  4.0.0 - Build 10.18.10.4885 
SL-Version:  4.00 - Build 10.18.10.4885 
GL-Extensions:  188 

[QUOTE=art-ganseforth;1292703]Puh…
The principles of my application…

What I currently do is draw my vertex-array, without rasterisation, into a second (temporary) one. This may be done (without using any of the input values) just to set completely new values. After this I swap the pointers and delete the old vertex-array. The result is then rendered to an FBO in the second step.
The point is that for transform feedback a glDrawArrays call is used. So there must exist an array to be drawn, even if I don’t use its data.[/quote]

FYI: no, there doesn’t have to be an array. Your VAO can be completely empty, with no attached buffer objects, so long as you’re only using built-in input variables.
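
For example (a sketch; assumes the VS reads only built-ins like gl_VertexID):

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);              /* completely empty: no buffers, no attributes */
glDrawArrays(GL_POINTS, 0, count);   /* the VS still runs "count" times */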

[QUOTE=art-ganseforth;1292703]If I, for example, want to generate a torus or a sphere, I only need the vertex-id and some data concerning “width” and “height” of the array. So I can draw a NULL-array, just using the shader calls, to generate vertices (normals etc.) for a 3d-object. But there may also be data inside the drawn array. It’s up to the shader-program to use it or not.

So, if there has to be an array to be drawn anyway, why shouldn’t I always:

  • create a destination-array
  • use the existing vertex-array just for calling the shader, even if the data is not used
  • swap pointers afterwards?[/quote]

So… what happens if you want to render two spheres in two different locations? Are you going to do two different feedback operations which will generate the exact same vertex data, just in a different place? Or what happens if you want to animate the sphere moving? To render it in one place in one frame, then in a different place next frame?

In normal code, you would generate a basic sphere mesh of whatever density you want. Then you transform it into whatever position you want to show it. So you have a single sphere mesh that you can re-use in different locations. The thing that does the transformation into that position is called the “vertex shader”.

I don’t see a reason why you need to feedback this data into buffers, when you could just render those triangles you generate to the screen. I mean, you’re going to do it anyway; what do you think you’re gaining by using feedback? Sphere/torus computations are hardly expensive per-vertex computations. It’s probably going to be a lot cheaper to just repeatedly generate the triangles, since you don’t have to pay the cost of reading 32 bytes of data for each vertex.

GPU-Vendor: Intel
GPU-Renderer: Intel(R) HD Graphics 4000
GL-Version: 4.0.0 - Build 10.18.10.4885
SL-Version: 4.00 - Build 10.18.10.4885

Oh, that explains a lot. You’re using old Intel hardware, and Intel isn’t very good at writing OpenGL drivers at the best of times. Since they don’t seem to be providing updated drivers for it, you’re pretty much out of luck.

Oh, that explains a lot. …

I did not know this, but for some reason I suspected something like it - at least since I had to set a special glewExperimental (or something like this) flag to avoid crashes using glTransformFeedbackVaryings…

FYI: no, there doesn’t have to be an array. Your VAO can be completely empty, with no attached buffer objects, so long as you’re only using built-in input variables.

Okay… But without “glBufferData(GL_ARRAY_BUFFER, size, data, …);”, where data may be NULL, it won’t work. As far as I understand it, this causes the GPU to allocate a buffer of type GL_ARRAY_BUFFER with the given size.

So…
There is an existing array and one to be allocated - either as source or as destination. But two are always needed, even if one isn’t filled with data (NULL pointer). For me that means: giving a RAM pointer (sorry, I don’t know how to express it better) will cause the CPU to transfer data to the GPU. Using NULL as data pointer does not.

Now my destination is (re)written. Therefore I don’t care what is inside (it may be trash data from completely other things), but the source data I need in some cases (when I use it as source for feedbacks).
As explained, I have to allocate (glBufferData) the memory anyway. So I see no reason why I should not allocate a new destination instead of a temporary source, simply swap pointers after drawing, and be free to use or to ignore / overwrite the previous data in the shader-program, or to use it as feedback source.
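
In code, what I mean is roughly this (names made up):

/* NULL data pointer: "size" bytes are allocated on the GPU, but nothing
   is transferred from the CPU; the contents are simply undefined */
glBindBuffer(GL_ARRAY_BUFFER, dstBuf);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_COPY);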

So… what happens if you want to render two spheres in two different locations?

Again: Puh!
Almost nothing in my program is fixed. It has its own programming language that controls everything. So, what happens if… depends on what you program, like this, for example, which is one of the text-files interpreted by my interpreter to create vertex-arrays:


    var Type         = "MACRO";
    var Title        = 'Render TFF';

    InitControl      ( Type, __File, 'Array functions');
    LinkTimer        ( _T_THREAD_IDX_ARRAYS );

    SetPosition      ( x,  y);
    SetSize          ( 1,  2);

    var TitleBar     = new titleBar ('_MD_BUTTONS_ALL', '_MD_STORE_TEXTURE_MENU');
        TitleBar     : ModuleMenu   : CommandItem (20, "Save array as CSV");  
        TitleBar     : ModuleMenu   : CommandItem (21, "Save vertex-shader");  

    var OnMenu       = '
        if (caller.MenuValue == 20) {
            SaveArray("txt", _DYNAMIC_FOLDER + "Array functions/" + CreateFileName("Vertex-array") );
        };
        if (caller.MenuValue == 21) {
            Shader.VertexShader.Save(_DYNAMIC_FOLDER + "Array functions/" + (parent.CreateFileName("Vertex-shader")) );
        };
'; 

    var Shader       = new IncludeShader ( 0, 0, (_DYNAMIC_FOLDER + "Array functions/shader/shader-vbo.txt"));

        Shader       : AddUniforms  ('
uniform   float DrawType;
uniform   vec2  MapSize;
uniform   vec2  Cc1;
uniform   vec4  Cc2;
');

    var vcW          = new valAny   ('Width',          3.0, 0,  3.0,     1, 500,   1,    24);
    var vcH          = new valAny   ('Height',         6.0, 0,  3.0,     1, 500,   1,    24);

    var vc1          = new valAny   ('Cc1.x',           0.0, 1,  3.0,     0,   1,   0.01,  0.25);
    var vc2          = new valAny   ('Cc2.x',           3.0, 1,  3.0,     0,   1,   0.01,  1);
    var vc3          = new valAny   ('Cc2.y',           6.0, 1,  3.0,     0,   1,   0.01,  1);
    var vc4          = new valAny   ('Cc2.z',           9.0, 1,  3.0,    -1,   1,   0.01,  0);
    var vc5          = new valAny   ('Cc2.r',          12.0, 1,  3.0,    -1,   1,   0.01,  0);

    var OnPrepare    = '0;';

    var OnNext       = '

        Shader       . Apply        ( "", GlobalUniforms, "tff_out", 8);

        PrepareArray ( vcW.GetInt(), vcH.GetInt(), 4, "tff_in", GL_T2F_N3F_V3F);

        Shader       . glUniform    ( "DrawType", GL_QUADS ); 
        Shader       . glUniform    ( "MapSize",  vcW.GetInt(), vcH.GetInt() ); 
        Shader       . glUniform    ( "Cc1",      vc1.Get01(),  vc2.Get01()  ); 
        Shader       . glUniform    ( "Cc2",      vc2.Get01(),  vc3.Get01(),  vc4.Get01(),  vc5.Get01() ); 

        RefeedArray  ();

     //   ResetGL      ();
';

    var OnApply      = '
        DrawInterleaved ( GL_T2F_N3F_V3F, GL_QUADS );

';

So, if I want to use a sphere vertex-array several times, I add several instances of


        glPush();
glTranslate3f(vc1.GetValue(), vc2.GetValue(), vc3.GetValue());
        DrawInterleaved ( GL_T2F_N3F_V3F, GL_QUADS );
        glPop();

… to the OnApply string. And the vertex-array will be drawn several times at different positions.

As all of this is kept in strings and interpreted by my interpreter at runtime, only some copy/paste is needed to draw the same array several times.

Best,
Frank

But without “glBufferData(GL_ARRAY_BUFFER, size, data, …);”, where data may be NULL, it won’t work.

If that’s the case, it is only because of your broken graphics drivers, not because of what OpenGL says or allows. Equally important, that line does not set a buffer to be used by a VAO. It is only glVertexAttrib*Pointer calls that attach a buffer to a VAO.

My point is that if you are not using any user-defined vertex shader inputs, then you do not need buffers for the attributes that you are not actually using.
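
A minimal sketch of the distinction:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, buf);   /* by itself, this changes nothing in the VAO */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);   /* THIS records buf in the VAO */
glEnableVertexAttribArray(0);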

And the vertex-array will be drawn several times at different positions.

When I asked how that would work, I wasn’t asking about some high-level scripting language. I wasn’t asking what was responsible for making OpenGL calls. I was asking about what OpenGL commands you use to actually cause it to happen. How those commands get generated is essentially irrelevant.

As for the specifics of what you wrote, you didn’t really explain what you’re doing. The code you posted suggests that you are using fixed-function OpenGL to render your feedback-generated data, as the glTranslate call suggests. If so, then you cannot be using a vertex shader of any kind. So your original question about “generating” your VS is kind of moot; you’re not using a VS, and you can’t unless you’re willing to ditch the fixed-function pipeline.

And when it comes time to switch to a VS, you are going to have to send matrix data to the shader in order to do the equivalent of that glTranslate call. Which was part of my point: it will not be a “pass-through” vertex shader; it needs to do actual work.
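
Sketched shader-side, that would look something like this (the uniform name is made up):

const char *vsSource =
    "#version 400\n"
    "uniform mat4 modelViewProj;   // replaces glTranslate & friends\n"
    "in vec3 position;\n"
    "void main() {\n"
    "    gl_Position = modelViewProj * vec4(position, 1.0);\n"
    "}\n";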

Lastly, you keep dodging my question: why are you using transform feedback at all, instead of just rendering the generated vertices directly? Do you think that writing and then reading memory is faster than executing shader code? Everything you are doing can be achieved without feedback, and unless your vertex generation code is exceedingly complex (FYI: spheres and tori are not), it will likely be faster without the feedback operation.

Lastly, you keep dodging my question

I’m very sorry about this. I probably need a lot more experience even to understand your questions.

Like, for example:

instead of just rendering the generated vertices directly?

… where I’ve no idea what “directly” or (as its opposite) “indirectly” may mean.

Also:

Do you think that writing and then reading memory is faster than executing shader code?

Surely not, but I don’t really see where I do things like “writing and then reading memory”. It might be that I still don’t understand what I’m doing. So I hope that I don’t lose motivation and find the time to read some more tutorials these days.

Best,
Frank

[QUOTE=art-ganseforth;1292721]I’m very sorry about this. I probably need a lot more experience even to understand your questions.

Like, for example:

… where I’ve no idea what “directly” or (as its opposite) “indirectly” may mean.[/quote]

What you’re doing is basically:

  1. Render using transform feedback to build some vertex data, using a vertex shader that generates vertex data.
  2. Render the stuff you built in step 1 to the screen.

So you’re “rendering” to generate vertex data. And then rendering the vertex data you generated. That’s “indirectly” rendering, since it requires an intermediate step.

What I’m saying you should do is:

  1. Render, using a vertex shader that generates vertex data, to the screen.

No transform feedback, no writing to buffer objects, no intermediate step. You just take your VS that generates the vertex data and link it to the fragment shader that you would have used in your step 2. And you render with the same drawing command you would have used in step 1.
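
In terms of calls, the direct path is roughly just this (hypothetical names):

glUseProgram(directProg);       /* generator VS linked with the real FS */
glBindVertexArray(emptyVao);    /* attributeless; the VS works from gl_VertexID */
glDrawArrays(GL_TRIANGLES, 0, vertexCount);   /* straight to the framebuffer */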

Now I understand you.

In fact, this helps to correct an older misunderstanding. A long time ago I wanted to do exactly that. The result was distorted, which was (now that I remember it) surely some matrix problem. At that time I posted some questions about this, but I probably misunderstood something, so I thought this was not possible. Therefore I was not able to understand you.

By the way:
I know that for using more than one transform-feedback varying, the function glTransformFeedbackVaryings takes an array of name strings as parameter. Now I was wondering if it is also possible to use several calls: instead of calling it once with an array containing two names, calling it twice, each time with one name?

Best,
Frank

No. glTransformFeedbackVaryings sets all of the feedback variables, overriding all prior invocations of this function on that program (before linking).
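
So both names have to go into one call (a sketch; the names are made up):

const char *names[] = { "tff_pos", "tff_norm" };   /* made-up names */
glTransformFeedbackVaryings(prog, 2, names, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);    /* one call with both names, then relink */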

Thank you. I thought so, but I was not sure…