
Index
Title Page
Copyright and Credits
Hands-On One-shot Learning with Python
About Packt
Why subscribe?
Contributors
About the authors
About the reviewer
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
Section 1: One-shot Learning Introduction
Introduction to One-shot Learning
Technical requirements
The human brain – overview
How the human brain learns
Comparing human neurons and artificial neurons
Machine learning – historical overview
Challenges in machine learning and deep learning
One-shot learning – overview
Prerequisites of one-shot learning
Types of one-shot learning
Setting up your environment
Coding exercise
kNN – basic one-shot learning
Summary
Questions
Section 2: Deep Learning Architectures
Metrics-Based Methods
Technical requirements
Parametric methods – an overview
Neural networks – learning procedure
Visualizing parameters
Understanding Siamese networks
Architecture
Preprocessing
Contrastive loss function
Triplet loss function
Applications
Understanding matching networks
Model architecture
Training procedure
Modeling level – the matching networks architecture
Coding exercise
Siamese networks – the MNIST dataset
Matching networks – the Omniglot dataset
Summary
Questions
Further reading
Model-Based Methods
Technical requirements
Understanding Neural Turing Machines
Architecture of an NTM
Modeling
Reading
Writing
Addressing
Memory-augmented neural networks
Reading
Writing
Understanding meta networks
Algorithm of meta networks
Algorithm
Coding exercises
Implementation of NTM
Implementation of MANN
Summary
Questions
Further reading
Optimization-Based Methods
Technical requirements
Overview of gradient descent
Understanding model-agnostic meta-learning
Understanding the logic behind MAML
Algorithm
MAML application – domain-adaptive meta-learning
Understanding LSTM meta-learner
Architecture of the LSTM meta-learner
Data preprocessing
Algorithm – pseudocode implementation
Exercises
A simple implementation of model-agnostic meta-learning
A simple implementation of domain-adaptive meta-learning
Summary
Questions
Further reading
Section 3: Other Methods and Conclusion
Generative Modeling-Based Methods
Technical requirements
Overview of Bayesian learning
Understanding directed graphical models
Overview of probabilistic methods
Bayesian program learning
Model
Type generation
Token generation
Image generation
Discriminative k-shot learning
Representational learning
Probabilistic model of the weights
Choosing a model for the weights
Computation and approximation for each phase
Phase 1 – representation learning
Phase 2 – concept learning
Phase 3 – k-shot learning
Phase 4 – k-shot testing
Summary
Further reading
Conclusions and Other Approaches
Recent advancements
Object detection in few-shot domains
Image segmentation in few-shot domains
Related fields
Semi-supervised learning
Imbalanced learning
Meta-learning
Transfer learning
Applications
Further reading
Other Books You May Enjoy
Leave a review - let other readers know what you think
