I want to enable the user to add a point to the screen. With an orthographic projection matrix (left = -0.5, right = 0.5, top = 0.5, bottom = -0.5), I simply need to normalise the coordinates (step 1) and send them to the vertex shader (step 2):
- A point is given by the user at (lx, ly). I normalise the coordinates of that point and get (nx, ny)=(lx/screenWidth - 0.5, -ly/screenHeight + 0.5).
- The normalised point is sent to the vertex shader through a buffer, and with a fixed Z position. Let’s say the point is (nx, ny, Z).
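Roughly, steps 1 and 2 look like this (a minimal Python-style sketch; the screen size and click position are just example values):

```python
def normalise(lx, ly, screen_width, screen_height):
    """Map window coordinates (y growing down) to the ortho volume [-0.5, 0.5] x [-0.5, 0.5]."""
    nx = lx / screen_width - 0.5
    ny = -ly / screen_height + 0.5
    return nx, ny

Z = -1.0  # some fixed depth, chosen arbitrarily here
nx, ny = normalise(400, 300, 800, 600)
# the vertex sent to the shader is (nx, ny, Z)
```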
From my understanding, what happens after that is:
- Vertex shader transformation occurs. We get (a, b, c, w)=MVP*(nx, ny, Z, 1.0)
- Division by w. We are now in NDC space, and we have a new point (x, y, z, 1) = (a/w, b/w, c/w, w/w)
- That point is then transformed to window coordinates and rendered. Let’s say that point is (sx, sy).
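The three steps above can be sketched as follows (the identity MVP is only a placeholder to show the data flow, and the viewport transform assumes window y grows downwards):

```python
def mat_vec(m, v):
    """Apply a row-major 4x4 matrix to a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def to_window(mvp, p, width, height):
    a, b, c, w = mat_vec(mvp, [p[0], p[1], p[2], 1.0])   # clip coordinates
    x, y, z = a / w, b / w, c / w                        # NDC after the divide by w
    sx = (x * 0.5 + 0.5) * width                         # viewport transform
    sy = (1.0 - (y * 0.5 + 0.5)) * height                # flip y for window coords
    return sx, sy

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
```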
The problem is that I can’t figure out how to do this process properly with a perspective projection matrix. What happens is: the user gives me a point in window coordinates and I normalise it, getting (nx, ny). But when that point is rendered, it’s drawn somewhere else due to the perspective distortion. I need some way, given a point (nx, ny), to find another point (wx, wy) to feed to the shader instead of (nx, ny), so that, in the end (step 5), the point rendered to the screen lands in the same place as (nx, ny) in window coordinates.
With some math, I’ve managed to write a point (Px, Py, Z) in terms of (nx, ny, Z) and the MVP matrix such that the transformed (Px, Py, Z) equals (nx, ny, Z) in step 4. However, there is still an offset when rendering occurs, which may or may not be introduced between step 4 and step 5 (I’m not sure where or why it happens, but I’ve checked that (Px, Py, Z), transformed to NDC coordinates, has the same x and y as (nx, ny)).
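Concretely, the inverse transform I’m attempting is roughly the following (a sketch, not my exact code: `invert4` is just a Gauss-Jordan helper, the perspective MVP in the test is a placeholder, and `ndc_z` here is the depth already in NDC, i.e. my fixed Z after it has gone through the MVP and the divide by w):

```python
def invert4(m):
    """Gauss-Jordan inverse of a 4x4 row-major matrix."""
    n = 4
    aug = [row[:] + [1.0 if r == c else 0.0 for c in range(n)]
           for r, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def unproject(mvp, nx, ny, ndc_z):
    """Map an NDC point back through the MVP: multiply by the inverse, then divide by w."""
    inv = invert4(mvp)
    v = [nx, ny, ndc_z, 1.0]
    p = [sum(inv[r][c] * v[c] for c in range(4)) for r in range(4)]
    return p[0] / p[3], p[1] / p[3], p[2] / p[3]
```

The returned point, fed to the shader in place of (nx, ny, Z), should project back to (nx, ny) in NDC; that part checks out for me, yet the rendered position is still off.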
A simple picture illustrating the issue: https://imgur.com/a/M57s0XZ
Can anyone help me find (wx, wy) given the MVP matrix and (nx, ny, Z)? Any help is appreciated.