I’ve written ELTs (electronic light tables) to exploit complex-pixel synthetic aperture radar data using C# GUIs and C++ servers/DLLs. I’d love to move into the WebGL world, but if I’m stuck with PNG, JPEG, and TIFF, how do you make this work? My images are 4-20 GB in size, so I have to convert them into RSets (image pyramids) of tiles and then glTexImage2D these through a tiling scheme, loading only what I need to cover the display surface. I’ve spent days googling for ways to connect a C++ server (running FFTW Fourier-transform code) via WebSockets (libwebsockets, WebSocket++, etc.) and haven’t found the way.
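To make the tiling scheme concrete, here is a minimal sketch of the viewport-to-tiles math I have in mind. The function name, the 512x512 default, and the convention that each pyramid level halves both dimensions are my assumptions, not anything from a particular library:

```cpp
#include <vector>

// One tile index within a pyramid level.
struct TileIndex { int level, col, row; };

// Hypothetical sketch: given a viewport in full-resolution pixel
// coordinates and a pyramid level (level 0 = full res, each level
// halves both dimensions), list the tiles needed to cover it.
std::vector<TileIndex> tilesForViewport(long vx, long vy, long vw, long vh,
                                        int level, int tileSize = 512) {
    // Scale the viewport down into this level's coordinate space.
    long scale = 1L << level;
    long x0 = vx / scale, y0 = vy / scale;
    long x1 = (vx + vw - 1) / scale, y1 = (vy + vh - 1) / scale;
    std::vector<TileIndex> out;
    for (long row = y0 / tileSize; row <= y1 / tileSize; ++row)
        for (long col = x0 / tileSize; col <= x1 / tileSize; ++col)
            out.push_back({level, (int)col, (int)row});
    return out;
}
```

Each returned index would map to one glTexImage2D (or glTexSubImage2D) upload on the WebGL side.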
If I had to, I could dynamically write thousands of 512x512 tiles as individual JPEGs (after converting the floating-point complex pixels to 8-bit magnitude), but there has to be a better way. I’d like to generate my tiles dynamically and send them in binary form over a socket to WebGL. That means something has to translate between raw TCP/IP and WebSockets.
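As a sketch of what I mean, here are the two server-side pieces: collapsing interleaved complex floats to 8-bit magnitude, and wrapping the resulting buffer in an RFC 6455 binary data frame (which is what a hand-rolled server would send after completing the HTTP upgrade handshake; server-to-client frames are unmasked). The function names and the caller-supplied full-scale value are my inventions:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>
#include <cmath>

// Hypothetical: collapse interleaved complex float pixels (re, im, ...)
// to 8-bit magnitude, clamped against a caller-supplied full scale.
std::vector<uint8_t> toMagnitude8(const float* iq, size_t nPixels,
                                  float fullScale) {
    std::vector<uint8_t> out(nPixels);
    for (size_t i = 0; i < nPixels; ++i) {
        float m = std::sqrt(iq[2*i]*iq[2*i] + iq[2*i+1]*iq[2*i+1]);
        float v = 255.0f * m / fullScale;
        out[i] = (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
    }
    return out;
}

// Build an RFC 6455 binary data frame around a raw tile buffer.
std::vector<uint8_t> wsBinaryFrame(const uint8_t* payload, uint64_t len) {
    std::vector<uint8_t> f;
    f.push_back(0x82);                     // FIN=1, opcode 0x2 = binary
    if (len < 126) {
        f.push_back((uint8_t)len);
    } else if (len <= 0xFFFF) {
        f.push_back(126);                  // 16-bit extended length
        f.push_back((uint8_t)(len >> 8));
        f.push_back((uint8_t)len);
    } else {
        f.push_back(127);                  // 64-bit extended length
        for (int s = 56; s >= 0; s -= 8) f.push_back((uint8_t)(len >> s));
    }
    f.insert(f.end(), payload, payload + len);
    return f;
}
```

On the browser side, the received ArrayBuffer could then go straight into a LUMINANCE texture without any JPEG round trip.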
And from an image-processing perspective, after a user has manipulated an image, how do they save it to their local machine? I’m guessing I would have to run a shadow compiled-code server locally on the host PC, communicate with it via sockets (again, how do you do this?), and have that server implement the save-file commands it gets from the HTML5/JavaScript code driving WebGL.
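To make the shadow-server idea concrete, here is a parser for a purely hypothetical wire format I just made up for illustration: the page would send "SAVE <filename> <byteCount>\n" followed by the raw pixel bytes, and the local server would parse the header before writing the payload to disk:

```cpp
#include <string>
#include <optional>
#include <cstdint>

// Hypothetical command header from the browser: "SAVE <filename> <byteCount>".
struct SaveCommand { std::string filename; uint64_t byteCount; };

// Parse the header line; the raw pixel bytes would follow on the socket.
std::optional<SaveCommand> parseSaveHeader(const std::string& line) {
    if (line.rfind("SAVE ", 0) != 0) return std::nullopt;
    size_t sp = line.find(' ', 5);
    if (sp == std::string::npos) return std::nullopt;
    SaveCommand c;
    c.filename = line.substr(5, sp - 5);
    try { c.byteCount = std::stoull(line.substr(sp + 1)); }
    catch (...) { return std::nullopt; }
    return c;
}
```

This is just a sketch of the protocol-parsing half; the socket plumbing and the actual file write are the parts I am asking about.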
Is anybody out there doing image processing with WebGL? Any groups, discussions, or tips for how to make this work?