
Index
Copyright
Brief Table of Contents
Table of Contents
Foreword
Preface
Acknowledgments
About this Book
About the Authors
About the Cover Illustration
Part 1. Wordy machines
Chapter 1. Packets of thought (NLP overview)
  1.1. Natural language vs. programming language
  1.2. The magic
  1.3. Practical applications
  1.4. Language through a computer’s “eyes”
  1.5. A brief overflight of hyperspace
  1.6. Word order and grammar
  1.7. A chatbot natural language pipeline
  1.8. Processing in depth
  1.9. Natural language IQ
  Summary
Chapter 2. Build your vocabulary (word tokenization)
  2.1. Challenges (a preview of stemming)
  2.2. Building your vocabulary with a tokenizer
  2.3. Sentiment
  Summary
Chapter 3. Math with words (TF-IDF vectors)
  3.1. Bag of words
  3.2. Vectorizing
  3.3. Zipf’s Law
  3.4. Topic modeling
  Summary
Chapter 4. Finding meaning in word counts (semantic analysis)
  4.1. From word counts to topic scores
  4.2. Latent semantic analysis
  4.3. Singular value decomposition
  4.4. Principal component analysis
  4.5. Latent Dirichlet allocation (LDiA)
  4.6. Distance and similarity
  4.7. Steering with feedback
  4.8. Topic vector power
  Summary
Part 2. Deeper learning (neural networks)
Chapter 5. Baby steps with neural networks (perceptrons and backpropagation)
  5.1. Neural networks, the ingredient list
  Summary
Chapter 6. Reasoning with word vectors (Word2vec)
  6.1. Semantic queries and analogies
  6.2. Word vectors
  Summary
Chapter 7. Getting words in order with convolutional neural networks (CNNs)
  7.1. Learning meaning
  7.2. Toolkit
  7.3. Convolutional neural nets
  7.4. Narrow windows indeed
  Summary
Chapter 8. Loopy (recurrent) neural networks (RNNs)
  8.1. Remembering with recurrent networks
  8.2. Putting things together
  8.3. Let’s get to learning our past selves
  8.4. Hyperparameters
  8.5. Predicting
  Summary
Chapter 9. Improving retention with long short-term memory networks
  9.1. LSTM
  Summary
Chapter 10. Sequence-to-sequence models and attention
  10.1. Encoder-decoder architecture
  10.2. Assembling a sequence-to-sequence pipeline
  10.3. Training the sequence-to-sequence network
  10.4. Building a chatbot using sequence-to-sequence networks
  10.5. Enhancements
  10.6. In the real world
  Summary
Part 3. Getting real (real-world NLP challenges)
Chapter 11. Information extraction (named entity extraction and question answering)
  11.1. Named entities and relations
  11.2. Regular patterns
  11.3. Information worth extracting
  11.4. Extracting relationships (relations)
  11.5. In the real world
  Summary
Chapter 12. Getting chatty (dialog engines)
  12.1. Language skill
  12.2. Pattern-matching approach
  12.3. Grounding
  12.4. Retrieval (search)
  12.5. Generative models
  12.6. Four-wheel drive
  12.7. Design process
  12.8. Trickery
  12.9. In the real world
  Summary
Chapter 13. Scaling up (optimization, parallelization, and batch processing)
  13.1. Too much of a good thing (data)
  13.2. Optimizing NLP algorithms
  13.3. Constant RAM algorithms
  13.4. Parallelizing your NLP computations
  13.5. Reducing the memory footprint during model training
  13.6. Gaining model insights with TensorBoard
  Summary
Appendix A. Your NLP tools
  A.1. Anaconda3
  A.2. Install NLPIA
  A.3. IDE
  A.4. Ubuntu package manager
  A.5. Mac
  A.6. Windows
  A.7. NLPIA automagic
Appendix B. Playful Python and regular expressions
  B.1. Working with strings
  B.2. Mapping in Python (dict and OrderedDict)
  B.3. Regular expressions
  B.4. Style
  B.5. Mastery
Appendix C. Vectors and matrices (linear algebra fundamentals)
  C.1. Vectors
Appendix D. Machine learning tools and techniques
  D.1. Data selection and avoiding bias
  D.2. How fit is fit?
  D.3. Knowing is half the battle
  D.4. Cross-fit training
  D.5. Holding your model back
  D.6. Imbalanced training sets
  D.7. Performance metrics
  D.8. Pro tips
Appendix E. Setting up your AWS GPU
  E.1. Steps to create your AWS GPU instance
Appendix F. Locality sensitive hashing
  F.1. High-dimensional vectors are different
  F.2. High-dimensional indexing
  F.3. “Like” prediction
Resources
  Applications and project ideas
  Courses and tutorials
  Tools and packages
  Research papers and talks
  Competitions and awards
  Datasets
  Search engines
Glossary
  Acronyms
  Terms
Chatbot Recirculating (Recurrent) Pipeline
Index
List of Figures
List of Tables
List of Listings