Hi,
I have recently started learning OpenCL. I started with a simple matrix multiplication example to see how much a GPU can reduce computation time, and also to learn how to optimize data movement. I tried the following:
Matrices A, B, and C are all 1024x1024.
GPU: C(i, j) per work-item, all global memory [1D Work Space](1024*1024 work item)
GPU: C(i, j) per work-item, all global memory [2D Work Space]
GPU: C row per work-item, all global memory [1D Work Space]
GPU: C row per work-item, A private, B in global memory [1D Work Space]
GPU: C row per work-item, A private, B in local memory [1D Work Space]
The results are as follows: 0.4308 s, 3.9784 s, 2.3082 s, 1.6315 s, and 1.6561 s, respectively.
I have already checked, and all versions give the correct results.
I am using OpenCL 1.2 with Catalyst 12.4 drivers on an AMD 3400M APU. The following is the kernel for the first:
    __kernel
    void matrixmultiply(__global float *A,
                        __global float *B,
                        __global float *C, int WidthA, int WidthB)
    {
        // Get the work-item's unique ID
        int idx = get_global_id(0);

        float sum = 0;
        int row = idx / WidthB;
        int column = idx % WidthB;

        // Multiply the corresponding elements of 'A' and 'B',
        // accumulate, and store the result in 'C'
        for (int i = 0; i < WidthA; i++)
        {
            sum += A[row * WidthA + i] * B[i * WidthB + column];
        }
        C[idx] = sum;
    }
The kernel for the second:
    __kernel
    void matrixmultiply(__global float *A,
                        __global float *B,
                        __global float *C, int WidthA, int WidthB)
    {
        // Get the work-item's unique ID
        float sum = 0;
        int row = get_global_id(0);
        int column = get_global_id(1);

        // Multiply the corresponding elements of 'A' and 'B',
        // accumulate, and store the result in 'C'
        for (int i = 0; i < WidthA; i++)
        {
            sum += A[row * WidthA + i] * B[i * WidthB + column];
        }
        C[row * WidthB + column] = sum;
    }
My question is: why is the first version the fastest, even though it accesses all of its data from global memory?