Imperial Library
Index
Cover
Title
Copyright
Dedication
Acknowledgements
Foreword
Preface
Chapter 1: Introduction
Organization of the book
Other books
Part I: Probability
Chapter 2: Introduction to probability
2.1 Random variables
2.2 Joint probability
2.3 Marginalization
2.4 Conditional probability
2.5 Bayes’ rule
2.6 Independence
2.7 Expectation
Chapter 3: Common probability distributions
3.1 Bernoulli distribution
3.2 Beta distribution
3.3 Categorical distribution
3.4 Dirichlet distribution
3.5 Univariate normal distribution
3.6 Normal-scaled inverse gamma distribution
3.7 Multivariate normal distribution
3.8 Normal inverse Wishart distribution
3.9 Conjugacy
Chapter 4: Fitting probability models
4.1 Maximum likelihood
4.2 Maximum a posteriori
4.3 The Bayesian approach
4.4 Worked example 1: Univariate normal
4.5 Worked example 2: Categorical distribution
Chapter 5: The normal distribution
5.1 Types of covariance matrix
5.2 Decomposition of covariance
5.3 Linear transformations of variables
5.4 Marginal distributions
5.5 Conditional distributions
5.6 Product of two normals
5.7 Change of variable
Part II: Machine learning for machine vision
Chapter 6: Learning and inference in vision
6.1 Computer vision problems
6.2 Types of model
6.3 Example 1: Regression
6.4 Example 2: Binary classification
6.5 Which type of model should we use?
6.6 Applications
Chapter 7: Modeling complex data densities
7.1 Normal classification model
7.2 Hidden variables
7.3 Expectation maximization
7.4 Mixture of Gaussians
7.5 The t-distribution
7.6 Factor analysis
7.7 Combining models
7.8 Expectation maximization in detail
7.9 Applications
Chapter 8: Regression models
8.1 Linear regression
8.2 Bayesian linear regression
8.3 Nonlinear regression
8.4 Kernels and the kernel trick
8.5 Gaussian process regression
8.6 Sparse linear regression
8.7 Dual linear regression
8.8 Relevance vector regression
8.9 Regression to multivariate data
8.10 Applications
Chapter 9: Classification models
9.1 Logistic regression
9.2 Bayesian logistic regression
9.3 Nonlinear logistic regression
9.4 Dual logistic regression
9.5 Kernel logistic regression
9.6 Relevance vector classification
9.7 Incremental fitting and boosting
9.8 Classification trees
9.9 Multiclass logistic regression
9.10 Random trees, forests, and ferns
9.11 Relation to non-probabilistic models
9.12 Applications
Part III: Connecting local models
Chapter 10: Graphical models
10.1 Conditional independence
10.2 Directed graphical models
10.3 Undirected graphical models
10.4 Comparing directed and undirected graphical models
10.5 Graphical models in computer vision
10.6 Inference in models with many unknowns
10.7 Drawing samples
10.8 Learning
Chapter 11: Models for chains and trees
11.1 Models for chains
11.2 MAP inference for chains
11.3 MAP inference for trees
11.4 Marginal posterior inference for chains
11.5 Marginal posterior inference for trees
11.6 Learning in chains and trees
11.7 Beyond chains and trees
11.8 Applications
Chapter 12: Models for grids
12.1 Markov random fields
12.2 MAP inference for binary pairwise MRFs
12.3 MAP inference for multilabel pairwise MRFs
12.4 Multilabel MRFs with non-convex potentials
12.5 Conditional random fields
12.6 Higher order models
12.7 Directed models for grids
12.8 Applications
Part IV: Preprocessing
Chapter 13: Image preprocessing and feature extraction
13.1 Per-pixel transformations
13.2 Edges, corners, and interest points
13.3 Descriptors
13.4 Dimensionality reduction
Part V: Models for geometry
Chapter 14: The pinhole camera
14.1 The pinhole camera
14.2 Three geometric problems
14.3 Homogeneous coordinates
14.4 Learning extrinsic parameters
14.5 Learning intrinsic parameters
14.6 Inferring three-dimensional world points
14.7 Applications
Chapter 15: Models for transformations
15.1 Two-dimensional transformation models
15.2 Learning in transformation models
15.3 Inference in transformation models
15.4 Three geometric problems for planes
15.5 Transformations between images
15.6 Robust learning of transformations
15.7 Applications
Chapter 16: Multiple cameras
16.1 Two-view geometry
16.2 The essential matrix
16.3 The fundamental matrix
16.4 Two-view reconstruction pipeline
16.5 Rectification
16.6 Multiview reconstruction
16.7 Applications
Part VI: Models for vision
Chapter 17: Models for shape
17.1 Shape and its representation
17.2 Snakes
17.3 Shape templates
17.4 Statistical shape models
17.5 Subspace shape models
17.6 Three-dimensional shape models
17.7 Statistical models for shape and appearance
17.8 Non-Gaussian statistical shape models
17.9 Articulated models
17.10 Applications
Chapter 18: Models for style and identity
18.1 Subspace identity model
18.2 Probabilistic linear discriminant analysis
18.3 Nonlinear identity models
18.4 Asymmetric bilinear models
18.5 Symmetric bilinear and multilinear models
18.6 Applications
Chapter 19: Temporal models
19.1 Temporal estimation framework
19.2 Kalman filter
19.3 Extended Kalman filter
19.4 Unscented Kalman filter
19.5 Particle filtering
19.6 Applications
Chapter 20: Models for visual words
20.1 Images as collections of visual words
20.2 Bag of words
20.3 Latent Dirichlet allocation
20.4 Single author–topic model
20.5 Constellation models
20.6 Scene models
20.7 Applications
Part VII: Appendices
Appendix A: Notation
Appendix B: Optimization
B.1 Problem statement
B.2 Choosing a search direction
B.3 Line search
B.4 Reparameterization
Appendix C: Linear algebra
C.1 Vectors
C.2 Matrices
C.3 Tensors
C.4 Linear transformations
C.5 Singular value decomposition
C.6 Matrix calculus
C.7 Common problems
C.8 Tricks for inverting large matrices
Bibliography
Index
