Half-title page
Title page
Copyright page
Dedication
Contents
1. Introduction
1.1 Overview
1.2 Birth of the text
1.3 Who will benefit
1.4 Why this book is relevant
1.5 Examples
1.5.1 CO₂ emissions
1.5.2 Age-earnings
1.5.3 Hedonic price function
1.6 Examples in the text
1.6.1 Density
1.6.2 Regression
1.7 Outline of the remainder of the book
1.8 Supplemental materials
1.9 Acknowledgments
2. Univariate density estimation
2.1 Smoothing preliminaries
2.2 Estimation
2.2.1 A crude estimator
2.2.2 Naïve estimator
2.2.3 Kernel estimator
2.3 Kernel selection
2.4 Kernel efficiency
2.5 Bandwidth selection
2.5.1 Optimal selection
2.5.2 Data-driven methods
2.5.3 Plug-in or cross-validation?
2.6 Density derivatives
2.6.1 Bias and variance
2.6.2 Bandwidth selection
2.6.3 Relative efficiency
2.7 Application
2.7.1 Histograms
2.7.2 Kernel densities
3. Multivariate density estimation
3.1 Joint densities
3.2 Bias, variance, and AMISE
3.3 The curse of dimensionality
3.4 Bandwidth selection
3.4.1 Rule-of-thumb bandwidth selection
3.4.2 Cross-validation bandwidth selection
3.5 Conditional density estimation
3.5.1 Bias, variance, and AMSE
3.5.2 Bandwidth selection
3.5.3 Inclusion of irrelevant variables
3.6 Application
4. Inference about the density
4.1 Fundamentals
4.1.1 Consistent test
4.1.2 Distance measures
4.1.3 Centering terms
4.1.4 Degenerate U-statistics
4.1.5 Bootstrap
4.2 Equality
4.3 Parametric specification
4.4 Independence
4.5 Symmetry
4.6 Silverman test for multimodality
4.7 Testing in practice
4.7.1 Bootstrap versus asymptotic distribution
4.7.2 Role of bandwidth selection on reliability of tests
4.8 Application
4.8.1 Equality
4.8.2 Correct parametric specification
4.8.3 Independence
4.8.4 Symmetry
4.8.5 Modality
5. Regression
5.1 Smoothing preliminaries
5.2 Local-constant estimator
5.2.1 Derivation from density estimators
5.2.2 An indicator approach
5.2.3 Kernel regression on a constant
5.3 Bias, variance, and AMISE of the LCLS estimator
5.4 Bandwidth selection
5.4.1 Univariate digression
5.4.2 Optimal bandwidths in higher dimensions
5.4.3 Least-squares cross-validation
5.4.4 Cross-validation based on Akaike information criteria
5.4.5 Interpretation of bandwidths for LCLS
5.5 Gradient estimation
5.6 Limitations of LCLS
5.7 Local-linear estimation
5.7.1 Choosing LLLS over LCLS
5.7.2 Efficiency of the local-linear estimator
5.8 Local-polynomial estimation
5.9 Gradient-based bandwidth selection
5.10 Standard errors and confidence bounds
5.10.1 Pairs bootstrap
5.10.2 Residual bootstrap
5.10.3 Wild bootstrap
5.11 Displaying estimates
5.12 Assessing fit
5.13 Prediction
5.14 Application
5.14.1 Data
5.14.2 Results
6. Testing in regression
6.1 Testing preliminaries
6.1.1 Goodness-of-fit tests
6.1.2 Conditional-moment test
6.2 Correct parametric specification
6.2.1 Goodness-of-fit test
6.2.2 Conditional-moment test
6.3 Irrelevant regressors
6.3.1 Goodness-of-fit test
6.3.2 Conditional-moment test
6.4 Heteroskedasticity
6.5 Testing in practice
6.5.1 Bootstrap versus asymptotic distribution
6.5.2 Role of bandwidth selection on reliability of tests
6.6 Application
6.6.1 Correct functional form
6.6.2 Relevance
6.6.3 Heteroskedasticity
6.6.4 Density tests
7. Smoothing discrete variables
7.1 Estimation of a density
7.1.1 Kernels for smoothing discrete variables
7.1.2 Generalized product kernel
7.2 Finite sample properties
7.2.1 Discrete-only bias
7.2.2 Discrete-only variance
7.2.3 Discrete-only MSE
7.2.4 Mixed-data bias
7.2.5 Mixed-data variance
7.2.6 Mixed-data MSE
7.3 Bandwidth estimation
7.3.1 Discrete-data only
7.3.2 Mixed data
7.4 Why the faster rate of convergence?
7.5 Alternative discrete kernels
7.6 Testing
7.7 Application
8. Regression with discrete covariates
8.1 Estimation of the conditional mean
8.1.1 Local-constant least-squares
8.1.2 Local-linear least-squares
8.2 Estimation of gradients
8.2.1 Continuous covariates
8.2.2 Discrete covariates
8.3 Bandwidth selection
8.3.1 Automatic bandwidth selection
8.3.2 Upper and lower bounds for discrete bandwidths
8.4 Testing
8.4.1 Correct parametric specification
8.4.2 Significance of continuous regressors
8.4.3 Significance of discrete regressors
8.5 All discrete regressors
8.6 Application
8.6.1 Bandwidths
8.6.2 Elasticities
8.6.3 Numerical gradients
8.6.4 Testing
9. Semiparametric methods
9.1 Semiparametric efficiency
9.2 Partially linear models
9.2.1 Estimation
9.2.2 Bandwidth selection
9.2.3 Testing
9.3 Single-index models
9.3.1 Estimation
9.3.2 Bandwidth selection
9.3.3 Testing
9.4 Semiparametric smooth coefficient models
9.4.1 Estimation
9.4.2 Bandwidth selection
9.4.3 Testing
9.5 Additive models
9.5.1 Estimation
9.5.2 Bandwidth selection
9.5.3 Testing
9.6 Application
9.6.1 Bandwidths
9.6.2 Plotting estimates
9.6.3 Specification testing
10. Instrumental variables
10.1 The ill-posed inverse problem
10.2 Tackling the ill-posed inverse
10.3 Local-polynomial estimation of the control-function model
10.3.1 Multiple endogenous regressors
10.3.2 Bandwidth selection
10.3.3 Choice of polynomial order
10.3.4 Simulated evidence of the counterfactual simplification
10.3.5 A valid bootstrap procedure
10.4 Weak instruments
10.4.1 Weak identification
10.4.2 Estimation in the presence of weak instruments
10.4.3 Importance of nonlinearity in the first stage
10.5 Discrete endogenous regressor
10.6 Testing
10.7 Application
11. Panel data
11.1 Pooled models
11.2 Random effects
11.2.1 Local-linear weighted least-squares
11.2.2 Wang’s iterative estimator
11.3 Fixed effects
11.3.1 Additive individual effects
11.3.2 Discrete individual effects
11.4 Dynamic panel estimation
11.5 Semiparametric estimators
11.6 Bandwidth selection
11.7 Standard errors
11.7.1 Pairs bootstrap
11.7.2 Residual bootstrap
11.8 Testing
11.8.1 Poolability
11.8.2 Functional form specification
11.8.3 Nonparametric Hausman test
11.9 Application
11.9.1 Bandwidths
11.9.2 Estimation
11.9.3 Testing
12. Constrained estimation and inference
12.1 Rearrangement
12.1.1 Imposing convexity
12.1.2 Existing literature
12.2 Motivating alternative shape-constrained estimators
12.3 Implementation methods via reweighting
12.3.1 Constraint-weighted bootstrapping
12.3.2 Data sharpening
12.4 Practical issues
12.4.1 Selecting the distance metric
12.4.2 Choice of smoothing parameter
12.4.3 Linear-in-p implementation issues
12.4.4 Imposing additive separability
12.5 Hypothesis testing on shape constraints
12.6 Further extensions
12.7 Application
12.7.1 Imposing positive marginal product
12.7.2 Imposing constant returns to scale
Bibliography
Index