This method is highly recommended to get started with DeepChem. Why? At the time of writing, Google has recently replaced the Tesla K80 with the Tesla T4, an AI inference accelerator GPU with tensor cores and 16 GB of memory:
As mentioned in Chapter 10, Accelerated Machine Learning on GPUs, it remains free to access for anyone with a Google account. An AI accelerator built for inference is intriguing because fast inference is a very important requirement in one-shot learning, as discussed earlier. The Tesla T4 GPU is based on the Turing architecture.
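If you are curious which GPU your Colab session has been assigned, one way to check (assuming a GPU runtime is enabled under Runtime | Change runtime type) is to run nvidia-smi in a notebook cell; its output lists the GPU model and memory:
!nvidia-smi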
We are going to use a system-wide installation strategy, but only on Colab, so you can safely experiment with your deep learning skills in that notebook while checking out DeepChem. DeepChem is not installed by default, as you can see here:
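As a minimal check on a fresh Colab runtime, attempting the import fails, for example:
try:
    import deepchem
except ImportError as error:
    # On a fresh runtime, this typically prints: No module named 'deepchem'
    print(error)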
To set up DeepChem within Conda on Colab, follow these steps:
- Download Miniconda using the following command:
!wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
Here, the -O option saves the Miniconda installer under a simplified name, miniconda.sh:
- Configure Conda and install the GPU build of DeepChem system-wide, that is, on your Colab virtual machine. The =2.1.0 suffix pins the deepchem-gpu package to its most recent version at the time of writing:
!chmod +x miniconda.sh
!bash ./miniconda.sh -b -f -p /usr/local
!conda install -y --prefix /usr/local -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.1.0
- Colab will then perform all three operations step by step:
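As an optional sanity check (not part of the three commands themselves), you can verify that the Conda installed under /usr/local responds before moving on:
!conda --version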
- Once this is done, we need to add /usr/local/lib/python3.6/site-packages/ to Python's module search path, sys.path:
import sys
sys.path
This shows you the existing paths:
- Now, use the following line of code to add it:
sys.path.append('/usr/local/lib/python3.6/site-packages/')
Executing this line will append a new path:
Confirm the change:
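One straightforward way to confirm it, for instance, is to check that the new directory is now present in sys.path:
print('/usr/local/lib/python3.6/site-packages/' in sys.path)  # Should print True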
- Confirm your DeepChem installation:
import deepchem
This time, you should not get any Python error, and the outcome of the command will be as shown.
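As a further, optional sanity check (a small sketch, assuming the pinned package installed correctly), you can also print the installed version, which should match the deepchem-gpu=2.1.0 you requested:
import deepchem
print(deepchem.__version__)  # Expected to report 2.1.0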
Now, you're all set to start programming with DeepChem.