Imperial Library

Index
Cover
Title Page
Copyright Page
Contents
Preface to the Dover Edition
Preface

Part One - Fundamentals of Digital Communication and Block Coding
Chapter 1: Digital Communication Systems: Fundamental Concepts and Parameters
1.1 Sources, Entropy, and the Noiseless Coding Theorem
1.2 Mutual Information and Channel Capacity
1.3 The Converse to the Coding Theorem
1.4 Summary and Bibliographical Notes
Appendix 1A Convex Functions
Appendix 1B Jensen Inequality for Convex Functions
Problems
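As a brief aside on the entropy function central to Section 1.1, here is a minimal illustration (added here for orientation, not taken from the book's text):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete memoryless source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Example: a binary source with P = (0.9, 0.1)
H = entropy([0.9, 0.1])  # roughly 0.469 bits per symbol
# The noiseless coding theorem (Section 1.1) says any uniquely decodable
# code for this source needs at least H bits per symbol on average,
# and fewer than H + 1 bits per symbol always suffice at block length 1.
```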
Chapter 2: Channel Models and Block Coding
2.1 Block-coded Digital Communication on the Additive Gaussian Noise Channel
2.2 Minimum Error Probability and Maximum Likelihood Decoder
2.3 Error Probability and a Simple Upper Bound
2.4 A Tighter Upper Bound on Error Probability
2.5 Equal Energy Orthogonal Signals on the AWGN Channel
2.6 Bandwidth Constraints, Intersymbol Interference, and Tracking Uncertainty
2.7 Channel Input Constraints
2.8 Channel Output Quantization: Discrete Memoryless Channels
2.9 Linear Codes
2.10 Systematic Linear Codes and Optimum Decoding for the BSC
2.11 Examples of Linear Block Code Performance on the AWGN Channel and Its Quantized Reductions
2.12 Other Memoryless Channels
2.13 Bibliographical Notes and References
Appendix 2A Gram-Schmidt Orthogonalization and Signal Representation
Problems
Chapter 3: Block Code Ensemble Performance Analysis
3.1 Code Ensemble Average Error Probability: Upper Bound
3.2 The Channel Coding Theorem and Error Exponent Properties for Memoryless Channels
3.3 Expurgated Ensemble Average Error Probability: Upper Bound at Low Rates
3.4 Examples: Binary-Input, Output-Symmetric Channels, and Very Noisy Channels
3.5 Chernoff Bounds and the Neyman-Pearson Lemma
3.6 Sphere-Packing Lower Bounds
3.7 Zero Rate Lower Bounds
3.8 Low Rate Lower Bounds
3.9 Conjectures and Converses
3.10 Ensemble Bounds for Linear Codes
3.11 Bibliographical Notes and References
Appendix 3A Useful Inequalities and the Proofs of Lemma 3.2.1 and Corollary 3.3.2
Appendix 3B Kuhn-Tucker Conditions and Proofs of Theorems 3.2.2 and 3.2.3
Appendix 3C Computational Algorithm for Capacity
Problems
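The computational algorithm for capacity in Appendix 3C is, in its usual formulation, the Blahut-Arimoto iteration for a discrete memoryless channel. A hedged sketch of that iteration (my illustration, not the appendix's text):

```python
import math

def capacity(P, iters=200):
    """Blahut-Arimoto iteration for the capacity (in bits) of a DMC.
    P[x][y] is the channel transition probability p(y|x)."""
    nx, ny = len(P), len(P[0])
    q = [1.0 / nx] * nx  # input distribution, initialized uniform
    for _ in range(iters):
        # Output distribution induced by the current input distribution
        r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # Update: q(x) proportional to q(x) * exp(D(p(.|x) || r)), D in nats
        w = []
        for x in range(nx):
            d = sum(P[x][y] * math.log(P[x][y] / r[y])
                    for y in range(ny) if P[x][y] > 0)
            w.append(q[x] * math.exp(d))
        z = sum(w)
        q = [v / z for v in w]
    # Mutual information at the final input distribution, in bits
    r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    return sum(q[x] * P[x][y] * math.log2(P[x][y] / r[y])
               for x in range(nx) for y in range(ny) if P[x][y] > 0)
```

For a binary symmetric channel with crossover 0.1 this converges to 1 - H(0.1), about 0.531 bits per channel use.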
Part Two - Convolutional Coding and Digital Communication
Chapter 4: Convolutional Codes
4.1 Introduction and Basic Structure
4.2 Maximum Likelihood Decoder for Convolutional Codes—The Viterbi Algorithm
4.3 Distance Properties of Convolutional Codes for Binary-Input Channels
4.4 Performance Bounds for Specific Convolutional Codes on Binary-Input, Output-Symmetric Memoryless Channels
4.5 Special Cases and Examples
4.6 Structure of Rate 1/n Codes and Orthogonal Convolutional Codes
4.7 Path Memory Truncation, Metric Quantization, and Code Synchronization in Viterbi Decoders
4.8 Feedback Decoding
4.9 Intersymbol Interference Channels
4.10 Coding for Intersymbol Interference Channels
4.11 Bibliographical Notes and References
Problems
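To illustrate the Viterbi algorithm of Section 4.2, here is a minimal hard-decision sketch for the familiar rate-1/2, constraint-length-3 code with generators (7, 5) octal; this is my own illustration under those assumptions, the book treats the algorithm in full generality:

```python
G = [0b111, 0b101]  # generator polynomials, (7, 5) in octal
K = 3               # constraint length -> 4 trellis states

def encode(bits):
    """Encode a list of bits; two coded bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        for g in G:
            out.append(bin(reg & g).count("1") & 1)
        state = reg >> 1
    return out

def viterbi(received):
    """Minimum-Hamming-distance (maximum-likelihood on a BSC) path search."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)  # encoder starts in the zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                branch = [bin(reg & g).count("1") & 1 for g in G]
                d = sum(x != y for x, y in zip(branch, r))  # branch metric
                ns = reg >> 1
                if metric[s] + d < new_metric[ns]:  # keep the survivor
                    new_metric[ns] = metric[s] + d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

With a noiseless channel the decoder recovers the input exactly, and a single flipped channel bit is still corrected, consistent with this code's free distance of 5.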
Chapter 5: Convolutional Code Ensemble Performance
5.1 The Channel Coding Theorem for Time-varying Convolutional Codes
5.2 Examples: Convolutional Coding Exponents for Very Noisy Channels
5.3 Expurgated Upper Bound for Binary-Input, Output-Symmetric Channels
5.4 Lower Bound on Error Probability
5.5 Critical Lengths of Error Events
5.6 Path Memory Truncation and Initial Synchronization Errors
5.7 Error Bounds for Systematic Convolutional Codes
5.8 Time-varying Convolutional Codes on Intersymbol Interference Channels
5.9 Bibliographical Notes and References
Problems
Chapter 6: Sequential Decoding of Convolutional Codes
6.1 Fundamentals and a Basic Stack Algorithm
6.2 Distribution of Computation: Upper Bound
6.3 Error Probability Upper Bound
6.4 Distribution of Computations: Lower Bound
6.5 The Fano Algorithm and Other Sequential Decoding Algorithms
6.6 Complexity, Buffer Overflow, and Other System Considerations
6.7 Bibliographical Notes and References
Problems
Part Three - Source Coding for Digital Communication
Chapter 7: Rate Distortion Theory: Fundamental Concepts for Memoryless Sources
7.1 The Source Coding Problem
7.2 Discrete Memoryless Sources—Block Codes
7.3 Relationships with Channel Coding
7.4 Discrete Memoryless Sources—Trellis Codes
7.5 Continuous Amplitude Memoryless Sources
7.6 Evaluation of R(D)—Discrete Memoryless Sources
7.7 Evaluation of R(D)—Continuous Amplitude Memoryless Sources
7.8 Bibliographical Notes and References
Appendix 7A Computational Algorithm for R(D)
Problems
Chapter 8: Rate Distortion Theory: Memory, Gaussian Sources, and Universal Coding
8.1 Memoryless Vector Sources
8.2 Sources with Memory
8.3 Bounds for R(D)
8.4 Gaussian Sources with Squared-Error Distortion
8.5 Symmetric Sources with Balanced Distortion Measures and Fixed Composition Sequences
8.6 Universal Coding
8.7 Bibliographical Notes and References
Appendix 8A Chernoff Bounds for Distortion Distributions
Problems
Bibliography
Index
Authors’ Biographies
