In this chapter, we learned about different GPU manufacturers and about computing on NVIDIA and AMD GPU platforms. We also compared these two leading GPU manufacturers, exploring their scope and applicability through a CUDA versus ROCm comparison. We looked at different GPUs and saw how to choose one according to a specific requirement. Finally, we revisited the configuration options from Chapter 2, Designing a GPU Computing Strategy, and saw how to modify them for a liquid-cooled setup. Considering the RTX 2080 Ti and the Radeon VII, we explored their applicability by modifying two of the configurations previously listed in the High-end budget section of Chapter 2, Designing a GPU Computing Strategy.
Now that you have come to the end of this chapter, you should be able to distinguish between NVIDIA and AMD GPUs based on your computational requirements. You should also be able to decide whether to opt for CUDA or ROCm based on your project goals. You have also become much more acquainted with how customized liquid cooling works, and you can now think of ways to apply it in your own computing field.
In the next chapter, we will learn about the fundamentals of GPU programming. These fundamental concepts will help you understand how GPUs can reduce the CPU's workload by handling intensive computational tasks. We will then introduce you to CUDA, ROCm, PyCUDA, PyOpenCL, OpenCL, Anaconda, CuPy, and Numba, all from a Python programming perspective.