Index
A
Accuracy performance
bias term
deep learning vs. traditional ML
optimal weight and bias values
Activation functions
Adam optimizer
Aggregated error terms
AlexNet
Anaconda
Anomaly detection
Artificial intelligence (AI)
problem
taxonomy
timeline
Artificial neural networks (ANNs)
activation function
deep neural networks
hidden layers
LTU
McCulloch-Pitts Neuron
perceptron
Artificial neurons
Autoencoders
advantages and disadvantages
architecture
encoding and decoding structure
Fashion MNIST
See Image denoising with Fashion MNIST (case study)
layers
use cases
variations
regularized
See Regularized autoencoders
undercomplete
visualization
Automatic summarization
Auto MPG dataset (case study)
attributes
data preparation
categorical variables
DataFrame creation
dropping null values
training and testing
downloading
library installation
model building and training
configuration
sequential API
TensorFlow imports
observation
overview
predictions
results, evaluating
TensorFlow Docs library
B
Backpropagation
Bag of words
Basic autoencoder network
Bias and variance trade-off
Bidirectional layers
Big data
Binary classification
Bioinformatic sequence analysis
C
Capabilities of deep learning
Cell-based recurrent neural network activity
Chatbots
Classification problems
Cliché dataset
Clustering analysis
Cognitive skills
Collaborative filtering
depiction
issues
memory-based approach
model-based approach
MovieLens dataset (case study)
build models
custom model training
data processing
initial imports
load data
model subclassing
ratings dataframe
recommendations
splitting dataset
primary assumption
Confusion matrix
Content-based filtering
Continuous training
Contractive autoencoders (CAEs)
Convolutional layer
Convolutional neural networks (CNNs)
accuracy
architecture
convolutional layer
fully connected layer
pooling layers
deep learning networks
feedforward neural networks
MNIST dataset
See Image classification with MNIST (case study)
Cost function
Crossentropy function
Cross-sectional data
Cross-validation
Custom training
D
Data augmentation
Data science
libraries
taxonomy
Dataset API
Dataset object (tf.data.Dataset)
Datasets catalog, TensorFlow
importing
installation
Keras
load function
DataSets module
Decision tree
Deep belief nets
Deep deterministic policy gradient (DDPG)
Deep feedforward neural networks
Deep Learning (DL)
See also Machine learning
activation function
cost function
distinct accuracy curve
framework, power scores
history
loss functions
optimizer
Deep neural networks
Deep Q Network (DQN)
Deep reinforcement learning
Define-by-run approach
Denoising autoencoders (DAEs)
Digit generation with MNIST (case study)
animate
display_images() function
GAN model, building
checkpoint set
discriminator network
generator network
loss function
optimizers
initial imports
load and process
train, GAN model
image generation function
starting training loop
training loop
train_step() function
Dimensionality reduction methods
Directed acyclic graph (DAG)
Discriminator network
display_images() function
DistBelief
Dummy variable
E
Eager execution
Edge TPU
Error function
Estimator API
Explicit data collection
F
Facebook’s AI Research Lab (FAIR)
Fashion MNIST dataset
Feature scaling
Feedforward neural networks
Auto MPG
See Auto MPG dataset (case study)
deep
hidden layers
input layer
layers
limitations
output layer
shallow
supervised learning tasks
Filtering
Fully connected network
G
Gated recurrent units (GRUs)
Generative adversarial networks (GANs)
applications
art and fashion
malicious applications and deep fake
manufacturing, research, and R&D
miscellaneous applications
video games
architecture
components
generator and discriminator networks
“mode collapse” issue
invention
method
visualization
Generator network
Genetic clustering
get_file() function
Google Colab
Anaconda distribution
Pip
setup process
GPU for deep learning
Gradient descent algorithm
Grammar induction
Graph execution
H
Hardware options
Hebbian learning
Hierarchical clustering
Hybrid recommender systems
Hyperparameter tuning
I
Image classification with MNIST (case study)
building, CNN
compiling and fitting, model
downloading
evaluate, trained model
reshaping and normalizing
trained model, saving
Image denoising with Fashion MNIST (case study)
adding noise to images
data loading and data processing
denoising noisy images
initial imports
model, building
ImageNet
Image processing
Image recognition
Implicit data collection
__init__ function
Interactive programming environments
advantages
build and train models, options
Google Colab
IPython
Jupyter Notebook
Interpreter
IPython
J
Jupyter Notebook
Anaconda distribution
installation
Mac
Windows
K
Keras
Keras datasets
Keras Functional API
Keras Sequential API
K-means clustering
k-nearest neighbors algorithm
Kubeflow
L
Latent space
Layer subclassing
Learning rate
Libraries, TensorFlow
Flask
Matplotlib
NumPy
Pandas
Scikit-learn
SciPy
use cases
Limited customization
Linear regression
Linear Threshold Unit (LTU)
load() function
load_data() function
load_model() function
Local minima
Logistic regression
Long short-term memory (LSTM) networks
Loss functions
M
Machine learning (ML)
AI
algorithms
applications
big data
characteristics, approaches
data science
description
DL
evaluations
history
model
process flow
evaluation
gathering data
hyperparameter tuning
model selection
prediction
preparing data
training
reinforcement learning
semi-supervised learning approach
supervised learning approach
unsupervised learning
Machine translation
Market intelligence
Matplotlib
McCulloch-Pitts Neuron
Mean absolute error (MAE)
Mean absolute percentage error (MAPE)
Mean squared error (MSE)
Memory-based approach
Microsoft cognitive toolkit (CNTK)
Miles per gallon (MPG)
Mobile recommender systems
Mode collapse
Model-based approach
Model building
estimator API
keras API
model.compile() function
model.evaluate() function
model.fit() function
model.predict() function
Model selection
Model subclassing
Model training performance
Morphological segmentation
Morphosyntax
Multi-criteria recommender systems
Multilayer perceptron (MLP)
MXNet
N
Named entity recognition (NER)
Natural language generation
Natural language processing (NLP)
history
early ideas
rule-based NLP
statistical NLP and supervised learning
unsupervised and semi-supervised NLP
problems
real-world applications
scope
tasks
cognition
dialogue
discourse
morphosyntax
semantics
speech
Natural language toolkit (NLTK)
Neural networks
activation functions
history
loss functions
Noise reduction
np.hstack() function
NumPy arrays
NumPy (Numerical Python)
O
Object-oriented programming (OOP)
Object recognition
Open source
Optical character recognition (OCR)
Optimization algorithm
Optimization in deep learning
backpropagation algorithm
challenges
local minima
saddle points
vanishing gradients
optimization algorithm
Optimizer algorithms
Origin variable
Overfitting
P
Pandas
Pandas DataFrame
Part-of-speech (POS) tagging
Pattern mining
Perceptron
Pip installation
complementary libraries
confirmation
libraries
Pooling layers
Potential sequence data tasks
predict() function
Principal component analysis (PCA)
Proof of concept (POC)
Python
benefits
community support
data science libraries
ease of learning
visualization options
interpreted language
object-oriented programming (OOP)
Python 2 vs. Python 3
timeline
PyTorch
Q
Q-Learning
R
Ragged tensors
Random forest algorithm
Read-eval-print loop (REPL)
RecommenderNet model
Recommender systems (RSs)
approach
collaborative filtering
See Collaborative filtering
content-based filtering
mobile recommender systems
multi-criteria recommender systems
risk-aware recommender systems
cold start
scalability
sparsity
Recurrent gated units (GRUs)
See Gated recurrent units (GRUs)
Recurrent neural networks (RNNs)
applications
characteristics
GRUs
history
LSTM
mechanism
sequence data
simple RNNs
time-series data
Regression
Regularization
Regularized autoencoders
CAEs
DAEs
SAEs
VAEs
variations
Regular neural networks
Reinforcement learning
action
agent
comprehensive module support
deep learning
environment
models
power
reward
Restricted Boltzmann Machines (RBMs)
Risk-aware recommender systems
Root mean squared error (RMSE)
Rule-based NLP
S
Saddle points
SavedModel
save_model() function
Scaling methods
Scikit-learn
SciPy
Seaborn
Semantics
Semi-supervised learning approach
Sentence breaking
Sentiment analysis (case study)
compiling and fitting, model
dataset preparation
Google Drive
Colab access
trained model
GPU acceleration in Google Colab
IMDB reviews
load() function, tensorflow_datasets API
TensorFlow import, dataset downloading
text encoding and decoding
loaded model object
model evaluation
predictions
RNN model, building
bidirectional layers
encoding layer
flowchart
Keras Sequential API
saved_model
saving and loading, model
Sequence data
Sequential()
Sequential() model object
Shakespeare Corpus
Shallow feedforward neural network
Sigmoid functions
Simple RNNs
Single-layer perceptron
Softmax function
Sparse autoencoders (SAEs)
Sparse tensors
Speech recognition
Spell checking
Standard training method
model.compile()
model.evaluate()
model.fit()
model.predict()
State-action-reward-state-action (SARSA)
Stochastic gradient descent (SGD) optimizer
summary() function
Supervised learning
classification problems
decision trees and ensemble methods
k-nearest neighbors algorithm
linear and logistic regression
neural networks
regression problems
support vector machines
Support vector machine
T
TensorFlow
competitors
CNTK
Keras
MXNet
PyTorch
dataset object
deep learning pipeline
eager execution
objects
open-source machine learning platform
Python and C++
tensors
timeline
variable
TensorFlow 1.0.0
TensorFlow 2.0
architecture
experimentation experience for researchers
model building with Keras and eager execution
AutoGraph API
build, train, and validate
data loading, tf.data
distributed training
SavedModel
robust model deployment in production
TensorFlow Docs
tensorflow_datasets API
TensorFlow Graphics
TensorFlow.js
TensorFlow pipeline guide
TensorFlow serving
tensor.numpy() function
Tensor processing unit (TPU)
Tensors
test_dataset
Text classification
Text generation with deep NLP (case study)
compiling and training model
corpus loading
dataset creation
goal
model, building
required libraries, import
Shakespeare Corpus
text vectorization
trained model
Text vectorization
tf.constant()
tf.data module
tf.estimator (Estimator API)
tf.feature_column module
tf.function decorator
tf.GradientTape()
tf.keras (TensorFlow Keras API)
tf.Tensor class
tf.zeros() function
Time-series data
Training data
train_step() function
U
Undercomplete autoencoders
Universal approximation theorem
Unsupervised learning
anomaly detection problems and generative systems
clustering analysis
dimensionality reduction
hierarchical clustering
K-means clustering
machine learning algorithms
neural networks
PCA
util module
V
Vanishing gradients
Variable
Variational autoencoders (VAEs)
Vector space
Video analysis
W, X, Y
Word segmentation
Word sense disambiguation
Z
ZipFile() function