
Index
Title Page
Copyright and Credits
Hands-On GPU Computing with Python
Dedication
About Packt
Why subscribe?
Packt.com
Contributors
About the author
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Code in Action
Conventions used
Get in touch
Reviews
Section 1: Computing with GPUs – Introduction, Fundamental Concepts, and Hardware
Introducing GPU Computing
The world of GPU computing beyond PC gaming
What is a GPU?
Conventional CPU computing – before the advent of GPUs
How the gaming industry made GPU computing affordable for individuals
The emergence of full-fledged GPU computing
The rise of AI and the need for GPUs
The simplicity of Python code and the power of GPUs – a dual advantage
The C language – a short prologue
From C to Python
The simplicity of Python as a programming language – why many researchers and scientists prefer it
The power of GPUs
Ray tracing
Artificial intelligence (AI)
Programmable shading
RTX-OPS
Latest GPUs at the time of writing this book (can be subject to change)
NVIDIA GeForce RTX 2070
NVIDIA GeForce RTX 2080
NVIDIA GeForce RTX 2080 Ti
NVIDIA Titan RTX
Radeon RX Vega 56
Radeon RX Vega 64
Radeon VII
Significance of FP64 in GPU computing
The dual advantage – Python and GPUs, a powerful combination
How GPUs empower science and AI in current times
Bioinformatics workflow management
Magnetic Resonance Imaging (MRI) reconstruction techniques
Digital-signal processing for communication receivers
Studies on the brain – neuroscience research
Large-scale molecular dynamics simulations
GPU-powered AI and self-driving cars
Research work posited by AI scientists
Deep learning on commodity Android devices
Motif discovery with deep learning
Structural biology meets data science
Heart-rate estimation on modern wearable devices
Drug target discovery
Deep learning for computational chemistry
The social impact of GPUs
Archaeological restoration/reconstruction
Numerical weather prediction
Composing music
Real-time segmentation of sports players
Creating art
Security
Agriculture
Economics
Summary
Further reading
Designing a GPU Computing Strategy
Getting started with the hardware
The significance of compatible hardware for your GPU
Beginners
Intermediate users
Advanced users
Motherboard
Case
Power supply unit (PSU)
CPU
RAM
Hard-disk drive (HDD)
Solid-state drive (SSD)
Monitor
Building your first GPU-enabled parallel computer – minimum system requirements
Scope of hardware scalability
Branded desktops
Do it yourself (DIY) desktops
Beginner range
Mid-range
High-end range
Liquid cooling – should you consider it?
The temperature factor
Airflow management
Thermal paste
Conventional air cooling
Stock coolers
Overclocking
So, what are custom/aftermarket coolers?
Liquid cooling
The specific heat capacity of cooling agents
Why is water the best liquid coolant?
Branded GPU-enabled PCs
Purpose
Feasibility
Upgradeability
Refining an effective budget
Warranty
Bundled monitors
Ready-to-deploy GPU systems
GPU solutions for individuals
Branded solutions in liquid cooling
Why not DIY?
GPU
CPU
Motherboard
RAM
Storage
PSU
Uninterruptible power supply (UPS)
Thermal paste
Heat sink
Radiator
Types of cooling fans
Bottlenecking
Estimating the build and performing compatibility checks
Purpose
Feasibility
Upgradeability
Refining an effective budget
Warranty for individual components
DIY solutions in liquid cooling
Assembling your system
Connecting all the power and case cables in place
Installing CUDA on a fresh Ubuntu installation
Entry-level budget
Mid-range budget
High-end budget
Summary
Further reading
Setting Up a GPU Computing Platform with NVIDIA and AMD
GPU manufacturers
First generation
Second generation
Third generation
Fourth generation
Fifth generation
Sixth generation
Seventh generation and beyond
Computing on NVIDIA GPUs
GeForce platforms
Quadro platforms
Tesla platforms
GPUDirect
SXM and NVLink
NVIDIA CUDA
Computing on AMD APUs and GPUs
Accelerated processing units (APUs)
The GPU in the APU – the significance of APU design
AMD GPUs – programmable platforms
Radeon platforms
Radeon Pro platforms
Radeon Instinct platforms
AMD ROCm
Comparing GPU programmable platforms on NVIDIA and AMD
GPUOpen
The significance of double precision in scientific computing from a GPU perspective
Current models from both brands that are ideal for GPU computing
AMD Radeon VII GPU – the new people's champion
NVIDIA Titan V GPU – raw compute power
An enthusiast's guide to GPU computing hardware
Summary
Further reading
Section 2: Hands-On Development with GPU Programming
Fundamentals of GPU Programming
GPU-programmable platforms
Basic CUDA concepts
Installing and testing
Compute capability
Threads, blocks, and grids
Threads
Blocks
Grids
Managing memory
Unified Memory Access (UMA)
Dynamic parallelism
Predefined libraries
OpenCL
Basic ROCm concepts
Installation procedure and testing
Official deprecation notice for HCC from AMD
Generating chips
ROCm components (APIs), including OpenCL
CUDA-like memory management with HIP
hipify
Predefined libraries
OpenCL
The Anaconda Python distribution for package management and deployment
Installing the Anaconda Python distribution on Ubuntu 18.04
Application-specific usage
GPU-enabled Python programming
The dual advantage
PyCUDA
PyOpenCL
CuPy
Numba (formerly Accelerate)
Summary
Further reading
Setting Up Your Environment for GPU Programming
Choosing a suitable IDE for your Python code
PyCharm – an IDE exclusively made for Python
Different versions of PyCharm
The Community edition
The Professional edition
The Educational edition – PyCharm Edu
Features for learners
Features for educators
PyCharm for Anaconda
Installing PyCharm
First run
EduTools plugin for existing PyCharm users
Alternative IDEs for Python – PyDev and Jupyter
Installing the PyDev Python IDE for Eclipse
Installing Jupyter Notebook and Jupyter Lab
Summary
Further reading
Working with CUDA and PyCUDA
Technical requirements
Understanding how CUDA-C/C++ works via a simple example
Installing PyCUDA for Python within an existing CUDA environment
Anaconda-based installation of PyCUDA
pip – system-wide Python-based installation of PyCUDA
Configuring PyCUDA on your Python IDE
Conda-based virtual environment
pip-based system-wide environment
How computing in PyCUDA works on Python
Comparing PyCUDA to CUDA – an introductory perspective on reduction
What is reduction?
Writing your first PyCUDA programs to compute a general-purpose solution
Useful exercise on computational problem solving
Exercise
Summary Further reading
Working with ROCm and PyOpenCL
Technical requirements
Understanding how ROCm-C/C++ works with hipify, HIP, and OpenCL
Converting CUDA code into cross-platform HIP code with hipify
Understanding how ROCm-C/C++ works with HIP
Output on an NVIDIA platform
Output on an AMD platform
Understanding how OpenCL works
Installing PyOpenCL for Python (AMD and NVIDIA)
Anaconda-based installation of PyOpenCL
pip – system-wide Python-based installation of PyOpenCL
Configuring PyOpenCL on your Python IDE
Conda-based virtual environment
pip-based system-wide environment
How computing in PyOpenCL works on Python
Comparing PyOpenCL to HIP and OpenCL – revisiting the reduction perspective
Reduction with HIP, OpenCL, and PyOpenCL
Writing your first PyOpenCL programs to compute a general-purpose solution
Useful exercise on computational problem solving
Solution assistance
Summary Further reading
Working with Anaconda, CuPy, and Numba for GPUs
Technical requirements
Understanding how Anaconda works with CuPy and Numba
Conda
CuPy
Numba
GPU-accelerated Numba on Python
Installing CuPy and Numba for Python within an existing Anaconda environment
Coupling Python with CuPy
Conda-based installation of CuPy
pip-based installation of CuPy
Coupling Python with Numba for CUDA and ROCm
Installing Numba with Conda for NVIDIA CUDA GPUs
Installing Numba with Conda for AMD ROC GPUs
System-wide installation of Numba with pip (optional)
Configuring CuPy on your Python IDE
How computing in CuPy works on Python
Implementing multiple GPUs with CuPy
Configuring Numba on your Python IDE
How computing in Numba works on Python
Using vectorize
Explicit kernels
Writing your first CuPy- and Numba-enabled accelerated programs to compute GPGPU solutions
Interoperability between CuPy and Numba within a single Python program
Comparing CuPy to NumPy and CUDA
Comparing Numba to NumPy, ROCm, and CUDA
Useful exercise on computational problem solving
Summary
Further reading
Section 3: Containerization and Machine Learning with GPU-Powered Python
Containerization on GPU-Enabled Platforms
Programmable environments
Programmable environments – system-wide and virtual
Specific situations of usage
Preferring virtual over system-wide
Preferring system-wide over virtual
System-wide (open) environments
$HOME directory
System directories
Advantages of open environments
Disadvantages of open environments
Virtual (closed) environments
$HOME directory
Virtual system directories
Advantages of closed environments
Disadvantages of closed environments
Virtualization
Virtualenv
Installing virtualenv on an Ubuntu Linux system
Using Virtualenv to create and manage a virtual environment
Key benefits of using Virtualenv
VirtualBox
Installing VirtualBox
GPU passthrough
Local containers
Docker
Installing Docker Community Edition (CE) on Ubuntu 18.04
NVIDIA Docker
Installing NVIDIA Docker
ROCm Docker
Kubernetes
Cloud containers
An overview on GPU computing with Google Colab
Summary
Further reading
Accelerated Machine Learning on GPUs
Technical requirements
The significance of Python in AI – the dual advantage
The need for big data management
Using Python for machine learning
Exploring machine learning training modules
The advent of deep learning
Introducing machine learning frameworks
Tensors by example
Introducing TensorFlow
Dataflow programming
Differentiable programming
TensorFlow on GPUs
Introducing PyTorch
The two primary features of PyTorch
Installing TensorFlow and PyTorch for GPUs
Installing cuDNN
Coupling Python with TensorFlow for GPUs
Coupling Python with PyTorch for GPUs
Configuring TensorFlow on PyCharm and Google Colab
Using TensorFlow on PyCharm
Using TensorFlow on Google Colab
Configuring PyTorch on PyCharm and Google Colab
Using PyTorch on PyCharm
Using PyTorch on Google Colab
Machine learning with TensorFlow and PyTorch
MNIST
Fashion-MNIST
CIFAR-10
Keras
Dataset downloads
Downloading Fashion-MNIST with Keras
Downloading CIFAR-10 with PyTorch
Writing your first GPU-accelerated machine learning programs
Fashion-MNIST prediction with TensorFlow
TensorFlow output on the PyCharm console
Training Fashion-MNIST for 100 epochs
CIFAR-10 prediction with PyTorch
PyTorch output on a PyCharm console
Revisiting our computational exercises with a machine learning approach
Solution assistance
Summary
Further reading
GPU Acceleration for Scientific Applications Using DeepChem
Technical requirements
Decoding scientific concepts for DeepChem
Atom
Molecule
Protein molecule
Biological cell
Medicinal drug – a small molecule
Ki
Crystallographic structures
Assays
Histogram
Open Source Drug Discovery (OSDD)
Convolution
Ensemble
Random Forest (RF)
Graph convolutional neural networks (GCN)
One-shot learning
Multiple ways to install DeepChem
Installing Google Colab
Conda on your local PyCharm IDE
NVIDIA Docker-based deployment
Configuring DeepChem on PyCharm
Testing an example from the DeepChem repository
How medicines reach their targets in our body
Alzheimer's disease
IC50
The Beta-Site APP-Cleaving Enzyme (BACE)
A DeepChem programming example
Output on the PyCharm console
Developing your own deep learning framework like DeepChem – a brief outlook
Summary
Final thoughts
References
Appendix A
GPU-accelerated machine learning in Python – benchmark research
GPU-accelerated machine learning with Python applied to cancer research
Deep Learning with GPU-accelerated Python for applied computer vision – Pavement Distress
Other Books You May Enjoy
Leave a review - let other readers know what you think