Index
Title Page
Copyright and Credits
Python: Beginner's Guide to Artificial Intelligence
About Packt
Why subscribe?
Packt.com
Contributors
About the authors
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Reviews
Become an Adaptive Thinker
Technical requirements
How to be an adaptive thinker
Addressing real-life issues before coding a solution
Step 1 – MDP in natural language
Step 2 – the mathematical representation of the Bellman equation and MDP
From MDP to the Bellman equation
Step 3 – implementing the solution in Python
The lessons of reinforcement learning
How to use the outputs
Machine learning versus traditional applications
Summary
Think Like a Machine
Technical requirements
Designing datasets – where the dream stops and the hard work begins
Designing datasets in natural language meetings
Using the McCulloch-Pitts neuron
The McCulloch-Pitts neuron
The architecture of Python TensorFlow
Logistic activation functions and classifiers
Overall architecture
Logistic classifier
Logistic function
Softmax
Summary
Apply Machine Thinking to a Human Problem
Technical requirements
Determining what and how to measure
Convergence
Implicit convergence
Numerical – controlled convergence
Applying machine thinking to a human problem
Evaluating a position in a chess game
Applying the evaluation and convergence process to a business problem
Using supervised learning to evaluate result quality
Summary
Become an Unconventional Innovator
Technical requirements
The XOR limit of the original perceptron
XOR and linearly separable models
Linearly separable models
The XOR limit of a linear model, such as the original perceptron
Building a feedforward neural network from scratch
Step 1 – Defining a feedforward neural network
Step 2 – how two children solve the XOR problem every day
Implementing a vintage XOR solution in Python with an FNN and backpropagation
A simplified version of a cost function and gradient descent
Linear separability was achieved
Applying the FNN XOR solution to a case study to optimize subsets of data
Summary
Manage the Power of Machine Learning and Deep Learning
Technical requirements
Building the architecture of an FNN with TensorFlow
Writing code using the data flow graph as an architectural roadmap
A data flow graph translated into source code
The input data layer
The hidden layer
The output layer
The cost or loss function
Gradient descent and backpropagation
Running the session
Checking linear separability
Using TensorBoard to design the architecture of your machine learning and deep learning solutions
Designing the architecture of the data flow graph
Displaying the data flow graph in TensorBoard
The final source code with TensorFlow and TensorBoard
Using TensorBoard in a corporate environment
Using TensorBoard to explain the concept of classifying customer products to a CEO
Will your views on the project survive this meeting?
Summary
Focus on Optimizing Your Solutions
Technical requirements
Dataset optimization and control
Designing a dataset and choosing an ML/DL model
Approval of the design matrix
Agreeing on the format of the design matrix
Dimensionality reduction
The volume of a training dataset
Implementing a k-means clustering solution
The vision 
The data
Conditioning management
The strategy
The k-means clustering program
The mathematical definition of k-means clustering
Lloyd's algorithm 
The goal of k-means clustering in this case study
The Python program
1 – The training dataset
2 – Hyperparameters
3 – The k-means clustering algorithm
4 – Defining the result labels
5 – Displaying the results – data points and clusters
Test dataset and prediction
Analyzing and presenting the results
AGV virtual clusters as a solution
Summary
When and How to Use Artificial Intelligence
Technical requirements
Checking whether AI can be avoided
Data volume and applying k-means clustering
Proving your point
NP-hard – the meaning of P
NP-hard – the meaning of non-deterministic
The meaning of hard
Random sampling
The law of large numbers – LLN
The central limit theorem
Using a Monte Carlo estimator
Random sampling applications
Cloud solutions – AWS
Preparing your baseline model
Training the full sample training dataset
Training a random sample of the training dataset
Shuffling as an alternative to random sampling
AWS – data management
Buckets
Uploading files
Access to output results
SageMaker notebook
Creating a job
Running a job
Reading the results
Recommended strategy
Summary
Revolutions Designed for Some Corporations and Disruptive Innovations for Small to Large Companies
Technical requirements
Is AI disruptive?
What is new and what isn't in AI
AI is based on mathematical theories that are not new
Neural networks are not new
Cloud server power, data volumes, and web sharing of the early 21st century started to make AI disruptive
Public awareness contributed to making AI disruptive
Inventions versus innovations
Revolutionary versus disruptive solutions
Where to start?
Discover a world of opportunities with Google Translate
Getting started
The program
The header
Implementing Google's translation service
Google Translate from a linguist's perspective
Playing with the tool
Linguistic assessment of Google Translate
Lexical field theory
Jargon
Translating is not just translating but interpreting
How to check a translation
AI as a new frontier
Lexical field and polysemy
Exploring the frontier – the program
k-nearest neighbor algorithm
The KNN algorithm
The knn_polysemy.py program
Implementing the KNN compressed function in Google_Translate_Customized.py
Conclusions on the Google Translate customized experiment
The disruptive revolutionary loop
Summary
Getting Your Neurons to Work
Technical requirements
Defining a CNN
Defining a CNN
Initializing the CNN
Adding a 2D convolution
Kernel
Intuitive approach
Developers' approach
Mathematical approach
Shape
ReLU
Pooling
Next convolution and pooling layer
Flattening
Dense layers
Dense activation functions
Training a CNN model
The goal
Compiling the model
Loss function
Quadratic loss function
Binary cross-entropy
Adam optimizer
Metrics
Training dataset
Data augmentation
Loading the data
Testing dataset
Data augmentation
Loading the data
Training with the classifier
Saving the model
Next steps
Summary
Applying Biomimicking to Artificial Intelligence
Technical requirements
Human biomimicking
TensorFlow, an open source machine learning framework
Does deep learning represent our brain or our mind?
A TensorBoard representation of our mind
Input data
Layer 1 – managing the inputs to the network
Weights, biases, and preactivation
Displaying the details of the activation function through the preactivation process
The activation function of Layer 1
Dropout and Layer 2
Layer 2
Measuring the precision of prediction of a network through accuracy values
Correct prediction accuracy
Cross-entropy
Training
Optimizing speed with Google's Tensor Processing Unit
Summary
Conceptual Representation Learning
Technical requirements
Generate profit with transfer learning
The motivation of transfer learning
Inductive thinking
Inductive abstraction
The problem AI needs to solve
The Γ gap concept
Loading the Keras model after training
Loading the model to optimize training
Loading the model to use it
Using transfer learning to be profitable or see a project stopped
Defining the strategy
Applying the model
Making the model profitable by using it for another problem
Where transfer learning ends and domain learning begins
Domain learning
How to use the programs
The trained models used in this section
The training model program
GAP – loaded or unloaded
GAP – jammed or open lanes
The gap dataset
Generalizing the Γ(gap conceptual dataset)
Generative adversarial networks
Generating conceptual representations
The use of autoencoders
The motivation of conceptual representation learning meta-models
The curse of dimensionality
The blessing of dimensionality
Scheduling and blockchains
Chatbots
Self-driving cars
Summary
Optimizing Blockchains with AI
Technical requirements
Blockchain technology background
Mining bitcoins
Using cryptocurrency
Using blockchains
Using blockchains in the A-F network
Creating a block
Exploring the blocks
Using naive Bayes in a blockchain process
A naive Bayes example
The blockchain anticipation novelty
The goal
Step 1 – the dataset
Step 2 – frequency
Step 3 – likelihood
Step 4 – naive Bayes equation
Implementation
Gaussian naive Bayes
The Python program
Summary
Cognitive NLP Chatbots
Technical requirements
IBM Watson
Intents
Testing the subsets
Entities
Dialog flow
Scripting and building up the model
Adding services to a chatbot
A cognitive chatbot service
The case study
A cognitive dataset
Cognitive natural language processing
Activating an image + word cognitive chat
Solving the problem
Implementation
Summary
Improve the Emotional Intelligence Deficiencies of Chatbots
Technical requirements
Building a mind
How to read this chapter
The profiling scenario
Restricted Boltzmann Machines
The connections between visible and hidden units
Energy-based models
Gibbs random sampling
Running the epochs and analyzing the results
Sentiment analysis
Parsing the datasets
Conceptual representation learning meta-models
Profiling with images
RNN for data augmentation
RNNs and LSTMs
RNN, LSTM, and vanishing gradients
Prediction as data augmentation
Step 1 – providing an input file
Step 2 – running an RNN
Step 3 – producing data augmentation
Word embedding
The Word2vec model
Principal component analysis
Intuitive explanation
Mathematical explanation
Variance
Covariance
Eigenvalues and eigenvectors
Creating the feature vector
Deriving the dataset
Summing it up
TensorBoard Projector
Using Jacobian matrices
Summary
Building Deep Learning Environments
Building a common DL environment
Get focused and into the code!
DL environment setup locally
Downloading and installing Anaconda
Installing DL libraries
Setting up a DL environment in the cloud
Cloud platforms for deployment
Prerequisites
Setting up the GCP
Automating the setup process
Summary
Training NN for Prediction Using Regression
Building a regression model for prediction using an MLP deep neural network
Exploring the MNIST dataset
Intuition and preparation
Defining regression
Defining the project structure
Let's code the implementation!
Defining hyperparameters
Model definition
Building the training loop
Overfitting and underfitting 
Building inference
Concluding the project
Summary
Generative Language Model for Content Creation
LSTM for text generation
Data pre-processing
Defining the LSTM model for text generation
Training the model
Inference and results
Generating lyrics using deep (multi-layer) LSTM
Data pre-processing
Defining the model
Training the deep TensorFlow-based LSTM model
Inference
Output
Generating music using a multi-layer LSTM
Pre-processing data
Defining the model and training
Generating music
Summary
Building Speech Recognition with DeepSpeech2
Data preprocessing
Corpus exploration
Feature engineering
Data transformation
DS2 model description and intuition
Training the model
Testing and evaluating the model
Summary
Handwritten Digits Classification Using ConvNets
Code implementation
Importing all of the dependencies
Exploring the data
Defining the hyperparameters
Building and training a simple deep neural network
Fitting a model
Evaluating a model
MLP – Python file
Convolution
Convolution in Keras
Fitting the model
Evaluating the model
Convolution – Python file
Pooling
Fitting the model
Evaluating the model
Convolution with pooling – Python file
Dropout
Fitting the model
Evaluating the model
Convolution with pooling – Python file
Going deeper
Compiling the model
Fitting the model
Evaluating the model
Convolution with pooling and Dropout – Python file
Data augmentation
Using ImageDataGenerator
Fitting ImageDataGenerator
Compiling the model
Fitting the model
Evaluating the model
Augmentation – Python file
Additional topic – convolution autoencoder
Importing the dependencies
Generating low-resolution images
Scaling
Defining the autoencoder
Fitting the autoencoder
Loss plot and test results
Autoencoder – Python file
Conclusion
Summary
Object Detection Using OpenCV and TensorFlow
Object detection intuition
Improvements in object detection models
Object detection using OpenCV
A handcrafted red object detector
Installing dependencies
Exploring image data
Normalizing the image
Preparing a mask
Post-processing of a mask
Applying a mask
Object detection using deep learning
Quick implementation of object detection
Installing all the dependencies
Implementation
Deployment
Object Detection in Real-Time Using YOLOv2
Preparing the dataset
Using the pre-existing COCO dataset
Using the custom dataset
Installing all the dependencies
Configuring the YOLO model
Defining the YOLO v2 model
Training the model
Evaluating the model
Image segmentation
Importing all the dependencies
Exploring the data
Images
Annotations
Preparing the data
Normalizing the image
Encoding
Model data
Defining hyperparameters
Defining SegNet
Compiling the model
Fitting the model
Testing the model
Conclusion
Summary
Building Face Recognition Using FaceNet
Setting up the environment
Getting the code
Building the Docker image
Downloading pre-trained models
Building the pipeline
Preprocessing of images
Face detection
Aligning faces
Feature extraction
Execution on Docker
Training the classifier
Evaluation
Summary
Generative Adversarial Networks
GANs
Implementation of GANs
Real data generator
Random data generator
Linear layer
Generator
Discriminator
GAN
Keep calm and train the GAN
GAN for 2D data generation
MNIST digit dataset
DCGAN
Discriminator
Generator
Transposed convolutions
Batch normalization
GAN-2D
Training the GAN model for 2D data generation
GAN Zoo
BiGAN – bidirectional generative adversarial networks
CycleGAN
GraspGAN – for deep robotic grasping
Progressive growth of GANs for improved quality
Summary
From GPUs to Quantum Computing – AI Hardware
Computers – an ordinary tale
A brief history
Central Processing Unit
CPU for machine learning
Motherboard
Processor
Clock speed
Number of cores
Architecture
RAM
HDD and SSD
Operating system (OS)
Graphics Processing Unit (GPU)
GP-GPUs and NVIDIA CUDA
cuDNN
ASICs, TPUs, and FPGAs
ASIC
TPU
Systolic array
Field-programmable gate arrays
Quantum computers
Can we really build a quantum computer?
How far are we from the quantum era?
Summary
TensorFlow Serving
What is TensorFlow Serving?
Understanding the basics of TensorFlow Serving
Servables
Servable versions
Models
Sources
Loaders
Aspired versions
Managers
Installing and running TensorFlow Serving
Virtual machines
Containers
Installation using Docker Toolbox
Operations for model serving
Model creation
Saving the model
Serving a model
What is gRPC?
Calling the model server
Running the model from the client side
Summary
Other Books You May Enjoy
Leave a review – let other readers know what you think