Right tool for the job

Hi everyone! I am new to the Khronos community and would like some insight into the Open*L APIs before I dive into the details.

I am currently developing a simulation in MATLAB – this is the tool it will be used in – and would like to offload part of the simulation to the GPU. Although I have experience with CUDA C, which is more natively supported by the platform, portability is key: the tool must also run on non-NVIDIA GPUs (mostly Intel's integrated GPUs). Linking C code to MATLAB is, however, quite trivial, so I wanted to use either OpenCL or OpenGL for the job.
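
For context, the MEX glue itself is the part I'm comfortable with. A minimal C gateway sketch (the file name gpu_step.c and the placeholder loop are just illustrative, not from any real tool):

/* Minimal C MEX gateway sketch -- gpu_step.c and the placeholder
 * loop are illustrative only. Build from MATLAB with:
 *   mex gpu_step.c
 */
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[],
                 int nrhs, const mxArray *prhs[])
{
    if (nrhs < 1)
        mexErrMsgIdAndTxt("sim:gpu_step:nrhs", "Expected one input array.");

    /* Pull a double array out of MATLAB. */
    const double *in = mxGetPr(prhs[0]);
    size_t n = mxGetNumberOfElements(prhs[0]);

    /* Allocate the output; MATLAB owns this memory. */
    plhs[0] = mxCreateDoubleMatrix(1, (mwSize)n, mxREAL);
    double *out = mxGetPr(plhs[0]);

    /* Placeholder: this copy is where the OpenCL/OpenGL host code
     * would upload `in`, run the kernel, and download into `out`. */
    for (size_t i = 0; i < n; i++)
        out[i] = in[i];
}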

The computations that need to run on the GPU are quite simple, and I think either API should be able to handle them. My question is which one is easier to learn and implement. An overview of the necessary computations (without going into detail):

let v_a be an array containing ~1E4 floats
let v_b be an array containing ~20 floats
let v_r be an array containing ~1E4 floats
let m_a be a 2d array containing ~20*1E4 floats
let m_b be a 2d array containing ~20*1E4 floats
let m_c be a 2d array containing ~20*1E4 floats


let f(x) be a trigonometric function

for (int i=0; i<20; i++) {
    m_a(i,:) = v_a + v_b(i); // each row in m_a equals the vector v_a plus one element of v_b
}

m_b = m_c .* f(m_a); // apply f to each element of m_a, then pointwise-multiply the result with m_c
v_r = sum(m_b, 1); // sum over the first dimension of the matrix
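
Since everything is elementwise along the long (~1E4) dimension, I imagine the three steps could be fused into a single kernel with one work-item per element of v_r. A rough OpenCL sketch of what I have in mind (assuming f = sin and row-major storage m[i*N + j], both assumptions on my part):

// OpenCL C kernel sketch: one work-item per element of v_r.
// f = sin and row-major storage m[i*N + j] are assumed here.
__kernel void reduce_rows(__global const float *v_a,  // length N (~1E4)
                          __global const float *v_b,  // length M (~20)
                          __global const float *m_c,  // M x N
                          __global float       *v_r,  // length N
                          const int M,
                          const int N)
{
    int j = get_global_id(0);
    if (j >= N) return;

    float acc = 0.0f;
    for (int i = 0; i < M; i++) {
        // m_a(i,j) = v_a(j) + v_b(i); fusing the three steps means
        // m_a and m_b never have to exist in GPU memory.
        acc += m_c[i * N + j] * sin(v_a[j] + v_b[i]);
    }
    v_r[j] = acc;
}

Indexing m_c as m_c[i*N + j] keeps neighbouring work-items on neighbouring addresses, and the 20-iteration inner loop seems cheap enough to keep inside each work-item.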

What do you guys think is the way to go? I have no experience with either. Thank you very much for reading my post; I would really appreciate advice on the matter!

Kind regards,
Tom

If you’ve written MEX functions before, an OpenCL MEX is not too different from a CUDA MEX.
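
To give you an idea of the boilerplate, here is roughly what the host side could look like, called from mexFunction and linked with -lOpenCL. Error checking is dropped for brevity, the names are mine, and it assumes the kernel source from your sketch above is available as a string:

#include <CL/cl.h>

extern const char *reduce_rows_src;  /* the kernel source shown earlier */

static void run_reduce_rows(const float *v_a, const float *v_b,
                            const float *m_c, float *v_r, int M, int N)
{
    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Compile the kernel at runtime -- the main structural difference
     * from a CUDA MEX, where nvcc compiles the kernel ahead of time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &reduce_rows_src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "reduce_rows", NULL);

    /* Upload inputs; CL_MEM_COPY_HOST_PTR copies at buffer creation. */
    cl_mem d_va = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 (size_t)N * sizeof(float), (void *)v_a, NULL);
    cl_mem d_vb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 (size_t)M * sizeof(float), (void *)v_b, NULL);
    cl_mem d_mc = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 (size_t)M * (size_t)N * sizeof(float), (void *)m_c, NULL);
    cl_mem d_vr = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                 (size_t)N * sizeof(float), NULL, NULL);

    clSetKernelArg(k, 0, sizeof(cl_mem), &d_va);
    clSetKernelArg(k, 1, sizeof(cl_mem), &d_vb);
    clSetKernelArg(k, 2, sizeof(cl_mem), &d_mc);
    clSetKernelArg(k, 3, sizeof(cl_mem), &d_vr);
    clSetKernelArg(k, 4, sizeof(int), &M);
    clSetKernelArg(k, 5, sizeof(int), &N);

    /* One work-item per output element; the blocking read returns v_r. */
    size_t global = (size_t)N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, d_vr, CL_TRUE, 0, (size_t)N * sizeof(float),
                        v_r, 0, NULL, NULL);

    clReleaseMemObject(d_va); clReleaseMemObject(d_vb);
    clReleaseMemObject(d_mc); clReleaseMemObject(d_vr);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
}

Note the buffers are single precision: MATLAB hands you doubles by default, so the MEX layer would convert on the way in and out.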