In this chapter, we discussed programmable environments from both system-wide and virtual perspectives, along with the scenarios in which each option is preferred. We explored the directory structures of system-wide and virtual environments, together with their advantages and disadvantages. We then introduced containerization as an evolution of virtualization and, finally, explored local and cloud containers in detail with a hands-on approach.
You are now familiar with the different environments available for setting up and using a GPU-based development platform. With Virtualenv and VirtualBox, you can create your own isolated development environments, and understanding the benefits of Virtualenv will help you customize future setups of closed programmable environments. Depending on your requirements, you can choose to work offline in a local container or online in a cloud container, and you can now start testing Docker applications on either an NVIDIA or an AMD GPU platform. If you do not own a GPU, you can still make use of the freely accessible Tesla T4 GPU to develop your GPU-based Python code.
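As a quick recap of the isolated-environment workflow, the following minimal sketch creates an environment using Python's built-in venv module, which works in the same spirit as Virtualenv (the `gpu-dev` name and temporary location are illustrative, not from the chapter):

```python
import os
import subprocess
import sys
import tempfile

# Create an isolated environment in a throwaway directory.
# --without-pip keeps the example fast and offline; drop it
# to have pip installed into the new environment as well.
env_dir = os.path.join(tempfile.mkdtemp(), "gpu-dev")
subprocess.run(
    [sys.executable, "-m", "venv", "--without-pip", env_dir],
    check=True,
)

# The environment gets its own interpreter and scripts directory,
# separate from the system-wide installation.
bin_dir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(env_dir, bin_dir)))
```

Activating the environment (`source gpu-dev/bin/activate` on Linux) then scopes any installed packages to that environment alone, which is exactly the isolation property the chapter relied on.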
In the next chapter, we move on to a new section focused purely on machine learning. In Chapter 10, Accelerated Machine Learning on GPUs, we will learn about the significance of GPU-enabled Python in Artificial Intelligence (AI) and science, and look at how deep learning evolved from AI through machine learning. An introduction to machine learning frameworks such as TensorFlow and Theano will follow, with a particular focus on their GPU-enabled modules in Python.