NVIDIA's GeForce 9400 GT: CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT = 1

Hello,

On my device (NVIDIA’s GeForce 9400 GT) I made the following call:

cl_uint vec_width;
clGetDeviceInfo(devices[i], CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT,
                sizeof(vec_width), &vec_width, NULL);

I got vec_width = 1.
Does that make sense?

I ran the following kernel and got correct results:

__kernel void matvec_mult(__global float4* matrix,
                          __global float4* vector,
                          __global float* result) {

   /* one work-item per matrix row */
   int i = get_global_id(0);
   result[i] = dot(matrix[i], vector[0]);
}

So it seems the NVIDIA device supports at least vectors of 4 floats.

I’m using AMD’s SDK. Could that be causing the problem?

Thanks,
Zvika

Hi Zvika,

The preferred vector width is just a hint for improving performance. In this case, NVIDIA’s OpenCL implementation is telling you that it prefers vectors of width 1 (i.e. scalar), because the hardware doesn’t have native support for vectors. Your kernel will still work perfectly with vectors of any other size (as it will on every implementation that conforms to the standard), so there are no restrictions on which vector sizes you can actually use in your kernels.
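If you do want to act on the hint, here is a rough host-side sketch of how you might query it and pick a kernel variant accordingly. The kernel names matvec_mult_float4 and matvec_mult_scalar are made up for illustration; they aren't part of any SDK:

#include <stdio.h>
#include <CL/cl.h>

/* Query the preferred float vector width and choose a kernel variant.
   Both variants are valid everywhere; the hint only guides performance. */
const char* pick_kernel_name(cl_device_id device) {
    cl_uint vec_width = 0;
    clGetDeviceInfo(device, CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT,
                    sizeof(vec_width), &vec_width, NULL);
    printf("preferred float vector width: %u\n", vec_width);
    return (vec_width >= 4) ? "matvec_mult_float4" : "matvec_mult_scalar";
}

A scalar variant of your kernel, assuming the same layout of 4 columns per row, could look like:

__kernel void matvec_mult_scalar(__global float* matrix,
                                 __global float* vector,
                                 __global float* result) {
    int i = get_global_id(0);
    float sum = 0.0f;
    for (int j = 0; j < 4; j++)   /* 4 columns per row, matching the float4 version */
        sum += matrix[i * 4 + j] * vector[j];
    result[i] = sum;
}

In practice the float4 version should perform just as well on your device, since the compiler breaks vector operations down into scalar instructions anyway.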

Best wishes,

James
