Comparing GPU programmable platforms on NVIDIA and AMD

So far, we have explored the scope of computing on NVIDIA and AMD GPUs in two separate chapters. Now, let's compare their respective APIs directly:

| NVIDIA CUDA | AMD ROCm |
| --- | --- |
| The API name stands for Compute Unified Device Architecture | The API name stands for Radeon Open Compute platform |
| Proprietary | Open source |
| Released in 2007 | Released in 2016 |
| Wider support | Still under adoption and actively catching up |
| Significant number of programmable libraries | Fewer libraries than CUDA, but under active ongoing development |
| Cannot be used with non-NVIDIA devices | Cross-platform independence due to open standards |
| Uses the CUDA-C language | Uses HIP for cross-platform code; HC for AMD GPUs |
| .cu extension used for source files | .cpp extension used for source files |
| Non-portable | CUDA code can be ported with HIP |
| OpenCL compatible | Also OpenCL compatible |
| Significant progress in machine learning | Steady progress in machine learning |

It is interesting to note that C++ programmers learning GPU computing for the first time can keep using the familiar .cpp extension with AMD ROCm, so in terms of familiarity, ROCm looks like the more approachable option. NVIDIA GPU owners can opt for HIPCC, whereas AMD GPU owners can go for HCC.
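
To make the portability point concrete, here is a minimal HIP sketch of an element-wise vector addition. The file name (vector_add.cpp), problem size, and launch configuration are illustrative assumptions, but the point stands: the same .cpp source can be built for either vendor's GPU.

```cpp
// vector_add.cpp -- illustrative HIP example; note the .cpp extension
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Element-wise vector addition kernel: c[i] = a[i] + b[i]
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                  // one million elements (assumed size)
    const size_t bytes = n * sizeof(float);

    // Host data
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    // Device buffers
    float *da, *db, *dc;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);

    // Copy inputs to the device
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    // Launch the kernel: 256 threads per block (assumed configuration)
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

    // Copy the result back and check one element
    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    printf("hc[0] = %f\n", hc[0]);          // expected: 3.000000

    hipFree(da);
    hipFree(db);
    hipFree(dc);
    return 0;
}
```

On an NVIDIA system, hipcc dispatches the build to nvcc, while on an AMD system it targets the ROCm compiler, so a command line such as `hipcc vector_add.cpp -o vector_add` should work on either platform.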