Why does my CUDA kernel crash (unspecified launch failure) with a different dataset size?

I have a kernel to calculate different elements of a matrix, based on their position (diagonal or off-diagonal). The kernel works as expected when calculating matrices of sizes:

  • 14 x 14 (I realise this is too small to make proper use of the GPU; it was purely for testing that the results were correct),
  • 118 x 118, and
  • 300 x 300

However, when I try to calculate a matrix of size 2383 x 2383, the kernel crashes. Specifically, the error "unspecified launch failure" is thrown on the cudaMemcpy() line that returns the results from device to host. From research, I understand that this error usually arises from an out-of-bounds memory access (e.g. in an array). What I don't understand is why it works for the three previous cases but not for the 2383 x 2383 case. The kernel code is shown below:

__global__ void createYBus(float *R, float *X, float *B, int numberOfBuses, int numberOfBranches, int *fromBus, int *toBus, cuComplex *y)
{
    int rowIdx = blockIdx.y*blockDim.y + threadIdx.y;
    int colIdx = blockIdx.x*blockDim.x + threadIdx.x;
    int index = rowIdx*numberOfBuses + colIdx;
    if (rowIdx<numberOfBuses && colIdx<numberOfBuses)
    {
        for (int i=0; i<numberOfBranches; ++i)
        {
            if (rowIdx==fromBus[i] && colIdx==fromBus[i]) { //diagonal element
                y[index] = cuCaddf(y[index], make_cuComplex((R[i]/((R[i]*R[i])+(X[i]*X[i]))), (-(X[i]/((R[i]*R[i])+(X[i]*X[i])))+ (B[i]/2))));
            }
            if (rowIdx==toBus[i] && colIdx==toBus[i]) { //diagonal element
                y[index] = cuCaddf(y[index], make_cuComplex((R[i]/((R[i]*R[i])+(X[i]*X[i]))), (-(X[i]/((R[i]*R[i])+(X[i]*X[i])))+ (B[i]/2))));
            }
            if (rowIdx==fromBus[i] && colIdx==toBus[i]) { //off-diagonal element
                y[index] = make_cuComplex(-(R[i]/((R[i]*R[i])+(X[i]*X[i]))), X[i]/((R[i]*R[i])+(X[i]*X[i])));
            }
            if (rowIdx==toBus[i] && colIdx==fromBus[i]) { //off-diagonal element
                y[index] = make_cuComplex(-(R[i]/((R[i]*R[i])+(X[i]*X[i]))), X[i]/((R[i]*R[i])+(X[i]*X[i])));
            }
        }
    }
}

Global memory allocations are done via calls to cudaMalloc(). The allocations made in the code are as follows:

cudaStat1 = cudaMalloc((void**)&dev_fromBus, numLines*sizeof(int));
cudaStat2 = cudaMalloc((void**)&dev_toBus, numLines*sizeof(int));
cudaStat3 = cudaMalloc((void**)&dev_R, numLines*sizeof(float));
cudaStat4 = cudaMalloc((void**)&dev_X, numLines*sizeof(float));
cudaStat5 = cudaMalloc((void**)&dev_B, numLines*sizeof(float));
cudaStat6 = cudaMalloc((void**)&dev_y, numberOfBuses*numberOfBuses*sizeof(cuComplex));
cudaStat7 = cudaMalloc((void**)&dev_Pd, numberOfBuses*sizeof(float));
cudaStat8 = cudaMalloc((void**)&dev_Qd, numberOfBuses*sizeof(float));
cudaStat9 = cudaMalloc((void**)&dev_Vmag, numberOfBuses*sizeof(float));
cudaStat10 = cudaMalloc((void**)&dev_theta, numberOfBuses*sizeof(float));
cudaStat11 = cudaMalloc((void**)&dev_Peq, numberOfBuses*sizeof(float));
cudaStat12 = cudaMalloc((void**)&dev_Qeq, numberOfBuses*sizeof(float));
cudaStat13 = cudaMalloc((void**)&dev_Peq1, numberOfBuses*sizeof(float));
cudaStat14 = cudaMalloc((void**)&dev_Qeq1, numberOfBuses*sizeof(float));
...
...
cudaStat15 = cudaMalloc((void**)&dev_powerMismatch, jacSize*sizeof(float));
cudaStat16 = cudaMalloc((void**)&dev_jacobian, jacSize*jacSize*sizeof(float));
cudaStat17 = cudaMalloc((void**)&dev_stateVector, jacSize*sizeof(float));
cudaStat18 = cudaMalloc((void**)&dev_PQindex, jacSize*sizeof(int));

where the cudaStatN variables are of type cudaError_t and are used to catch errors. The last four allocations were made later in the code for another kernel, but before the kernel in question was launched.

The launch parameters are as follows:

dim3 dimBlock(16, 16); //number of threads 
dim3 dimGrid((numberOfBuses+15)/16, (numberOfBuses+15)/16);  //number of blocks

//launch kernel once data has been copied to GPU
createYBus<<<dimGrid, dimBlock>>>(dev_R, dev_X, dev_B, numberOfBuses, numLines, dev_fromBus, dev_toBus, dev_y);

//copy results back to CPU
cudaStat6 = cudaMemcpy(y_bus, dev_y, numberOfBuses*numberOfBuses*sizeof(cuComplex), cudaMemcpyDeviceToHost);
if (cudaStat6 != cudaSuccess) {
    cout<<"Device memcpy failed"<<endl;
    cout<<cudaGetErrorString(cudaStat6)<<endl;
    return 1;
}

I removed the timing code just to show the block and grid dimensions and error checking technique used.
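In case it helps diagnose this, I also tried checking for errors at the launch itself rather than waiting for the cudaMemcpy() to fail. This is just a sketch of the pattern I used, with the same variable names as above:

    //check the launch immediately (catches invalid configuration etc.),
    //then synchronize to catch errors raised during kernel execution
    createYBus<<<dimGrid, dimBlock>>>(dev_R, dev_X, dev_B, numberOfBuses,
                                      numLines, dev_fromBus, dev_toBus, dev_y);

    cudaError_t launchErr = cudaGetLastError();
    if (launchErr != cudaSuccess) {
        cout<<"Launch failed: "<<cudaGetErrorString(launchErr)<<endl;
        return 1;
    }

    cudaError_t syncErr = cudaDeviceSynchronize();
    if (syncErr != cudaSuccess) {
        cout<<"Kernel execution failed: "<<cudaGetErrorString(syncErr)<<endl;
        return 1;
    }

With this in place, the error is reported by cudaDeviceSynchronize() rather than by the later cudaMemcpy(), which at least confirms it occurs during kernel execution.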

I also have a host (C++) version of this function, and I'm passing the same data to both and comparing the results: firstly to ensure the kernel produces correct results, and secondly to compare execution times. I have double-checked the data for the 2383 x 2383 case (it's read in from a text file and copied to global memory) and I'm not finding any anomalies in array accesses or indexing.

I'm using Visual Studio 2010, so I tried using Nsight to find the error (I'm not too well versed with Nsight). The summary report overview states: "There was 1 runtime API call error reported. (Please see the CUDA Runtime API Calls report for further information)." In the list of runtime API calls, cudaMemcpy returns error 4. I'm not sure whether the Thread ID (5012) shown in the table is of any significance; this number varies with every run. The cuda-memcheck tool (run from the command line) returns the following:

Thank you for using this program
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaMemcpy.
=========     Saved host backtrace up to driver entry point at error
=========
========= ERROR SUMMARY: 1 error

I know my kernel isn't the most efficient as there are many global memory accesses. Why is the kernel crashing for this larger matrix? Is there an out of bounds array access that I'm missing? Any assistance would be greatly appreciated.


Solution 1:

Solved the problem. It turns out that WDDM TDR (Timeout Detection and Recovery) was enabled, with the delay set to 2 seconds. This means that if kernel execution exceeds 2 s, the driver resets the GPU and kills the kernel. TDR exists to keep the display responsive for graphics and rendering, so for general-purpose GPU computation it must either be disabled or the delay increased. After increasing the delay to 10 s, the "unspecified launch failure" error ceased to appear and the kernel ran to completion as before.

The TDR delay (as well as enabling/disabling TDR) can be changed through the Nsight options in the Nsight Monitor, or through the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers via the DWORD values TdrDelay and TdrLevel.
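To confirm that the kernel really was exceeding the timeout (rather than masking a different bug by raising the delay), its execution time can be measured with CUDA events. A sketch, reusing the launch from the question:

    //time the kernel with CUDA events; an elapsed time over ~2000 ms
    //would trip the default WDDM TDR limit
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    createYBus<<<dimGrid, dimBlock>>>(dev_R, dev_X, dev_B, numberOfBuses,
                                      numLines, dev_fromBus, dev_toBus, dev_y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  //elapsed time in milliseconds
    cout<<"Kernel time: "<<ms<<" ms"<<endl;

    cudaEventDestroy(start);
    cudaEventDestroy(stop);

This also makes sense of why only the 2383 x 2383 case failed: each of the N x N threads loops over every branch, so the total work grows with N squared times the branch count and only the largest case crossed the 2 s threshold.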