Comparing Numba to NumPy, ROCm, and CUDA

Let's now compare Numba to NumPy, ROCm, and CUDA in terms of simplicity of parallelization. In the following table, we explore the scope of Numba with respect to NumPy, ROCm, and CUDA to understand the scenarios in which Numba can be advantageous over each of them. Some of the differences are as follows:

| CUDA | ROCm | NumPy | Numba |
| --- | --- | --- | --- |
| Based on the C/C++ programming language. | Based on the C/C++ programming language. | Based on the Python programming language. | Based on the Python programming language. |
| Uses C/C++ combined with specialized code to accelerate computations. | Uses C/C++ combined with specialized code for HCC and HIP to accelerate computations. | The fundamental package for scientific computing with Python on conventional CPUs. | Natively understands NumPy arrays, shapes, and dtypes, and can index a NumPy array without relying on the Python interpreter (close to C efficiency). |
| Universal functions can be implemented as CUDA ufuncs in Numba. | Universal functions can be implemented as ROCm ufuncs (experimental) in Numba. | Universal functions (ufuncs) are much more prevalent in NumPy and allow scalar operations to be mapped over whole arrays. | ufuncs are typically built using NumPy's C API; Numba instead provides the @vectorize decorator to build them in Python (see the first sketch after this table). |
| cuRAND is available in CUDA C/C++ for random value generation. | rocRAND is available in HIP, and hcRNG is available in HCC, for random value generation. | NumPy's random generator does not support a dtype option and always returns float64 values. | Numba provides its own GPU random module, numba.cuda.random (based on the xoroshiro128+ algorithm), which can generate both float32 and float64 values (see the second sketch after this table). |
| Out-of-bounds integer array indexing in CUDA C/C++ is not range-checked; it is undefined behavior that typically surfaces as an illegal memory access error at runtime. | Out-of-bounds integer array indexing in HCC/HIP is likewise not range-checked and is undefined behavior. | NumPy handles out-of-bounds integer array indexing by raising an IndexError. | Numba performs no range checking, which allows it to generate faster code. Code needs to be examined carefully, as any indexing that goes out of range can cause a bad access or a memory overwrite and crash the interpreter process. |
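
To make the ufunc row concrete, here is a minimal sketch of building a ufunc with Numba's @vectorize decorator. The scalar function and its name (rel_diff) are illustrative only; passing target='cuda' instead of the default CPU target compiles the same function into a CUDA ufunc (which requires a CUDA-capable GPU):

```python
import numpy as np
from numba import vectorize

# Compile a scalar Python function into a NumPy-style ufunc.
# Use target='cuda' to build a CUDA ufunc instead of a CPU one.
@vectorize(['float32(float32, float32)'])
def rel_diff(x, y):
    return 2 * (x - y) / (x + y)

a = np.arange(1, 10, dtype=np.float32)
b = np.arange(2, 11, dtype=np.float32)
print(rel_diff(a, b))  # broadcasts elementwise, like any built-in ufunc
```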

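The random generation row can likewise be sketched as follows, assuming a CUDA-capable GPU. Note the explicit i < out.size guard inside the kernel: because Numba performs no range checking (see the last row of the table), bounds checks in kernels are your responsibility. A float64 variant, xoroshiro128p_uniform_float64, is also available:

```python
import numpy as np
from numba import cuda
from numba.cuda.random import (create_xoroshiro128p_states,
                               xoroshiro128p_uniform_float32)

@cuda.jit
def fill_uniform(rng_states, out):
    i = cuda.grid(1)
    if i < out.size:  # explicit bounds guard; Numba will not add one for you
        out[i] = xoroshiro128p_uniform_float32(rng_states, i)

threads_per_block = 64
blocks = 16
n = threads_per_block * blocks
rng_states = create_xoroshiro128p_states(n, seed=42)
out = np.zeros(n, dtype=np.float32)
fill_uniform[blocks, threads_per_block](rng_states, out)
print(out[:5])  # uniform float32 samples in [0, 1)
```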

To achieve a GPU reduction in Numba, you can use the @cuda.reduce decorator, which converts a simple binary operation into a reduction kernel by creating an instance of the Reduce class.
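
For example, a sum reduction can be sketched as follows (the function name sum_reduce is illustrative; running it requires a CUDA-capable GPU):

```python
import numpy as np
from numba import cuda

# @cuda.reduce converts this simple binary operation into a
# full GPU reduction kernel (an instance of the Reduce class).
@cuda.reduce
def sum_reduce(a, b):
    return a + b

arr = np.arange(1, 1001, dtype=np.float64)
got = sum_reduce(arr)   # reduction on the GPU
print(got, arr.sum())   # should match NumPy's CPU sum
```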