Table of Contents
Preface
  Who This Book Is For
  How This Book Is Organized
  Conventions Used in This Book
  O’Reilly Online Learning
  How to Contact Us
  Acknowledgments
I. MLOps: What and Why
1. Why Now and Challenges
  Defining MLOps and Its Challenges
  MLOps to Mitigate Risk
    Risk Assessment
    Risk Mitigation
    MLOps for Responsible AI
  MLOps for Scale
  Closing Thoughts
2. People of MLOps
  Subject Matter Experts
  Data Scientists
  Data Engineers
  Software Engineers
  DevOps
  Model Risk Manager/Auditor
  Machine Learning Architect
  Closing Thoughts
3. Key MLOps Features
  A Primer on Machine Learning
  Model Development
    Establishing Business Objectives
    Data Sources and Exploratory Data Analysis
    Feature Engineering and Selection
    Training and Evaluation
    Reproducibility
    Responsible AI
  Productionalization and Deployment
    Model Deployment Types and Contents
    Model Deployment Requirements
  Monitoring
    DevOps Concerns
    Data Scientist Concerns
    Business Concerns
  Iteration and Life Cycle
    Iteration
    The Feedback Loop
  Governance
    Data Governance
    Process Governance
  Closing Thoughts
II. MLOps: How
4. Developing Models
  What Is a Machine Learning Model?
    In Theory
    In Practice
    Required Components
    Different ML Algorithms, Different MLOps Challenges
  Data Exploration
  Feature Engineering and Selection
    Feature Engineering Techniques
    How Feature Selection Impacts MLOps Strategy
  Experimentation
  Evaluating and Comparing Models
    Choosing Evaluation Metrics
    Cross-Checking Model Behavior
    Impact of Responsible AI on Modeling
  Version Management and Reproducibility
  Closing Thoughts
5. Preparing for Production
  Runtime Environments
    Adaptation from Development to Production Environments
    Data Access Before Validation and Launch to Production
    Final Thoughts on Runtime Environments
  Model Risk Evaluation
    The Purpose of Model Validation
    The Origins of ML Model Risk
  Quality Assurance for Machine Learning
    Key Testing Considerations
  Reproducibility and Auditability
  Machine Learning Security
    Adversarial Attacks
    Other Vulnerabilities
  Model Risk Mitigation
    Changing Environments
    Interactions Between Models
    Model Misbehavior
  Closing Thoughts
6. Deploying to Production
  CI/CD Pipelines
  Building ML Artifacts
    What’s in an ML Artifact?
    The Testing Pipeline
  Deployment Strategies
    Categories of Model Deployment
    Considerations When Sending Models to Production
    Maintenance in Production
  Containerization
  Scaling Deployments
  Requirements and Challenges
  Closing Thoughts
7. Monitoring and Feedback Loop
  How Often Should Models Be Retrained?
  Understanding Model Degradation
    Ground Truth Evaluation
    Input Drift Detection
  Drift Detection in Practice
    Example Causes of Data Drift
    Input Drift Detection Techniques
  The Feedback Loop
    Logging
    Model Evaluation
    Online Evaluation
  Closing Thoughts
8. Model Governance
  Who Decides What Governance the Organization Needs?
  Matching Governance with Risk Level
  Current Regulations Driving MLOps Governance
    Pharmaceutical Regulation in the US: GxP
    Financial Model Risk Management Regulation
    GDPR and CCPA Data Privacy Regulations
  The New Wave of AI-Specific Regulations
  The Emergence of Responsible AI
  Key Elements of Responsible AI
    Element 1: Data
    Element 2: Bias
    Element 3: Inclusiveness
    Element 4: Model Management at Scale
    Element 5: Governance
  A Template for MLOps Governance
    Step 1: Understand and Classify the Analytics Use Cases
    Step 2: Establish an Ethical Position
    Step 3: Establish Responsibilities
    Step 4: Determine Governance Policies
    Step 5: Integrate Policies into the MLOps Process
    Step 6: Select the Tools for Centralized Governance Management
    Step 7: Engage and Educate
    Step 8: Monitor and Refine
  Closing Thoughts
III. MLOps: Real-World Examples
9. MLOps in Practice: Consumer Credit Risk Management
  Background: The Business Use Case
  Model Development
  Model Bias Considerations
  Prepare for Production
  Deploy to Production
  Closing Thoughts
10. MLOps in Practice: Marketing Recommendation Engines
  The Rise of Recommendation Engines
    The Role of Machine Learning
    Push or Pull?
  Data Preparation
  Design and Manage Experiments
  Model Training and Deployment
    Scalability and Customizability
    Monitoring and Retraining Strategy
    Real-Time Scoring
    Ability to Turn Recommendations On and Off
  Pipeline Structure and Deployment Strategy
  Monitoring and Feedback
    Retraining Models
    Updating Models
    Runs Overnight, Sleeps During Daytime
    Option to Manually Control Models
    Option to Automatically Control Models
    Monitoring Performance
  Closing Thoughts
11. MLOps in Practice: Consumption Forecast
  Power Systems
  Data Collection
  Problem Definition: Machine Learning, or Not Machine Learning?
  Spatial and Temporal Resolution
  Implementation
  Modeling
  Deployment
  Monitoring
  Closing Thoughts
Index