I'm a little bit frustrated and here is why.

I was reading an article that had an exercise about creating a colorful flag using the step function in GLSL, but I didn't manage to do it. I think the problem is that I don't know the underlying math: graphs, polynomial functions, or whatever is used. :)

Also, let's say I want to create a nebula, or any other shape (like the one in the attachment). Where should I start with the math to understand how I can create whatever shape I want?

I hope you can guide me through this, and thank you in advance!

I'm working on a simple ray-marcher in OpenGL. The ray marching is done in the fragment shader of a full-screen quad. The shader code is as follows:

Code :

#version 460 core

in vec3 ray_view;

uniform mat4 viewMatrixInv;

vec3 rayDirection_world;
vec3 rayOrigin_world;
vec3 marchPoint;
vec3 normal;
vec3 toEye;
vec3 lightDirection = vec3(3, -1, -1);
vec3 toLight;
vec3 sdfVector;
float sdfValue;

out vec3 color;

const vec3 sphereCenter_world = vec3(0, 0, 0);
const float sphereRadius = 1.0f;
const vec3 cubeCenter_world = vec3(2, 0, 0);
const vec3 cubeDimensions = vec3(1, 1, 1);

vec3 deformationRepeat4(vec3 wp) {
    return vec3(
        mod(wp.x + 2, 4) - 2,
        mod(wp.y + 2, 4) - 2,
        mod(wp.z + 2, 4) - 2
    );
}

vec3 reflectAcrossPlane(vec3 n, float offsetFromOrigin, vec3 p) {
    if (dot(n, p) - offsetFromOrigin < 0)
        return p - 2 * n * (dot(normalize(n), p) - offsetFromOrigin);
    return p;
}

float sdfCube(vec3 p) {
    vec3 d = abs(p) - cubeDimensions;
    return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0) - 0.01;
}

float sdfSphere(vec3 p) {
    return length(abs(p - cubeCenter_world)) - sphereRadius;
}

vec3 deformation(vec3 wp) {
    return wp;
}

float distanceToNearest(vec3 p) {
    // First, apply the deformation
    vec3 pDeformed = deformation(p);
    // Then evaluate the actual SDF of the shape
    return sdfCube(pDeformed - cubeCenter_world);
}

void main() {
    //while(true);
    rayDirection_world = vec3(viewMatrixInv * vec4(normalize(ray_view), 0));
    rayOrigin_world = vec3(viewMatrixInv * vec4(0, 0, 0, 1));
    marchPoint = rayOrigin_world;
    while (true) {
        sdfValue = distanceToNearest(marchPoint);
        // If the ray has hit the shape
        if (sdfValue < 0.001) {
            // Draw the hit color
            // Find the cube normal
            //vec3 normal = vec3(
            //    (distanceToNearest(vec3(marchPoint.x + 0.001, marchPoint.y, marchPoint.z)) - sdfValue) / 0.001,
            //    (distanceToNearest(vec3(marchPoint.x, marchPoint.y + 0.001, marchPoint.z)) - sdfValue) / 0.001,
            //    (distanceToNearest(vec3(marchPoint.x, marchPoint.y, marchPoint.z + 0.001)) - sdfValue) / 0.001
            //);
            toLight = -normalize(lightDirection);
            toEye = normalize(rayOrigin_world - marchPoint);
            color = vec3(1, 0, 0);// * clamp(dot(normal, toLight), 0, 1) + vec3(1, 1, 1) * pow(clamp(dot(normalize(toLight + toEye), normal), 0, 1), 7);
            return;
        }
        // If the ray is beyond the maximum distance from the camera
        if (length(marchPoint - rayOrigin_world) > 1000) {
            // Draw the background color
            color = vec3(0, 0.5, 0.2);
            return;
        }
        // March the ray forward
        marchPoint += rayDirection_world * sdfValue;
    }
    //color = vec3(distanceToNearest(rayOrigin_world + rayDirection_world) / 15, 0, 0);
}

I compile and run the whole program using Visual Studio 2017. Everything compiles fine and starts to run, but after a few seconds Visual Studio breaks and shows an error message that reads, "Unhandled exception at 0x69CA9C29 (nvoglv32.dll) in GravelMarcher.exe: Fatal program exit requested."

This is the same error I get when the shader can't exit and loops forever, but I cannot for the life of me figure out why the shader code would be looping forever. Any input would be greatly appreciated, and I am happy to post whatever extra code I need to.

Thank you in advance for your help!

But it doesn't work. How can I get the view-space coordinates of the fragment from gl_FragCoord, and then transform them into the other frame's coordinates?

Code :

const std::string transfFragVertexShader =
R"(#version 130
out mat4 newViewMatrix;
out mat4 projMat;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    newViewMatrix = gl_ModelViewMatrix;
    projMat = gl_ProjectionMatrix;
})";
const std::string transfFragFragmentShader =
R"(#version 130
uniform mat4 oldViewMatrix;
in mat4 newViewMatrix;
in mat4 projMat;
uniform vec3 resolution;
uniform sampler2D oldFrameBuffer;
void main() {
    vec4 oldFragPos = inverse(projMat) * gl_FragCoord;
    oldFragPos /= oldFragPos.w;
    oldFragPos = inverse(newViewMatrix) * oldViewMatrix * oldFragPos;
    oldFragPos /= oldFragPos.w;
    oldFragPos = oldFragPos * projMat;
    vec2 position = (resolution.xy / oldFragPos.xy);
    gl_FragColor = texture2D(oldFrameBuffer, position);
})";

I have two problems with the shader code. I'm trying to get lighting to work, and:

1. Parts of the bunny I'm using the shader on are transparent.

2. The lighting is static.

This is kind of new for me, so I don't really know how to debug it. Please ask for anything you need. Thanks.

Video

Code :

const std::string vertexShader =
    "#version 130 \n"
    "out mat4 projMat;"
    "void main () {"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"
    "    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;"
    "    gl_FrontColor = gl_Color;"
    "    projMat = gl_ProjectionMatrix;"
    "}";

    "#version 130 \n"
    "in mat4 projMat;"
    "uniform sampler2D texture;"
    "uniform float haveTexture;"
    "void main () {"
    "    vec4 texel = texture2D(texture, gl_TexCoord[0].xy);"
    "    vec4 colors[2];"
    "    colors[1] = texel * gl_Color;"
    "    colors[0] = gl_Color;"
    "    bool b = (haveTexture > 0.9);"
    "    vec4 color = colors[int(b)];"
    "    float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;"
    "    gl_FragColor = vec4(0, 0, z, color.a);"
    "}";

I've written another shader that discards the closest fragments.

Code :

const std::string frameBufferGenFragShader =
    "#version 130 \n"
    "uniform sampler2D depthBuffer;"
    "uniform sampler2D texture;"
    "uniform vec3 resolution;"
    "uniform float haveTexture;"
    "in mat4 projMat;"
    "void main () {"
    "    vec2 position = (gl_FragCoord.xy / resolution.xy);"
    "    float max_z = texture2D(depthBuffer, position).z;"
    "    vec4 texel = texture2D(texture, gl_TexCoord[0].xy);"
    "    vec4 colors[2];"
    "    colors[1] = texel * gl_Color;"
    "    colors[0] = gl_Color;"
    "    bool b = (haveTexture > 0.9);"
    "    vec4 color = colors[int(b)];"
    "    float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;"
    "    colors[1] = color;"
    "    colors[0] = vec4(0, 0, 0, 0);"
    "    b = (z < max_z);"
    "    gl_FragColor = colors[int(b)];"
    "}";

But it's not good: some of the closest fragments are still written to the texture. What is gl_FragCoord.z: the z in window space (between 0 and 1), or the result of multiplying the interpolated vertex position by the modelViewProjectionMatrix?

And it seems there isn't enough space to store z in a GL_RGBA8 texture.

But I use exactly the same technique to test whether a light fragment is behind an object or not, and there it works.

Code :

Texture tmpDepthBufferTexture = Texture(depthBuffer.getTexture());
Texture tmpStencilBufferTexture = Texture(stencilBuffer.getStencilTexture());
depthBuffer.clear(sf::Color::Transparent);
stencilBuffer.clear(sf::Color::Transparent);
frameBufferGenerator.setParameter("haveTexture", 1);
for (unsigned int i = 0; i < m_instances.size(); i++) {
    currentStates.texture = &tmpStencilBufferTexture;
    stencilBuffer.draw(m_instances[i].getAllVertices(), currentStates);
    frameBufferGenerator.setParameter("depthBuffer", tmpDepthBufferTexture);
    depthBuffer.draw(m_instances[i].getAllVertices(), currentStates);
}

The texture is not changed, so it's the cleared one that is referenced in my fragment shader.

Code :

const std::string frameBufferGenFragShader =
    "#version 130 \n"
    "uniform sampler2D depthBuffer;"
    "uniform sampler2D texture;"
    "uniform vec3 resolution;"
    "uniform float haveTexture;"
    "in mat4 projMat;"
    "void main () {"
    "    vec2 position = (gl_FragCoord.xy / resolution.xy);"
    "    float max_z = texture2D(depthBuffer, position).z;"
    "    vec4 texel = texture2D(texture, gl_TexCoord[0].xy);"
    "    vec4 colors[2];"
    "    colors[1] = texel * gl_Color;"
    "    colors[0] = gl_Color;"
    "    bool b = (haveTexture > 0.9);"
    "    vec4 color = colors[int(b)];"
    "    float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;"
    "    colors[1] = color;"
    "    colors[0] = vec4(0, 0, 0, 0);"
    "    b = (z < max_z);"
    "    gl_FragColor = colors[int(b)];"
    "}";

Here we see an example with color on a simple quad. I want to eliminate the clearly visible diagonal, though it may just be a perceptual ("human eye") effect. I remember barycentric coordinates from a graphics class a few years ago, but I believe the pipeline interpolates them correctly, so that's probably not related to this issue.

cap1.jpg

Here is an example with the Phong reflection model's exponent set very high. This time it's the normals that are not quite right.

cap2.jpg

So, to smooth things out, am I right in suspecting that I will just have to subdivide more until there are enough points to make it unnoticeable? Or is it likely I've messed something up? I'll be happy to post any code if needed.