Index
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Downloading the color images of this book
Errata
Piracy
Questions
Introducing Ceph Storage
The history and evolution of Ceph
Ceph releases
New since the first edition
The future of storage
Ceph as the cloud storage solution
Ceph is software-defined
Ceph is a unified storage solution
The next-generation architecture
RAID: the end of an era
Ceph Block Storage
Ceph compared to other storage solutions
GPFS
iRODS
HDFS
Lustre
Gluster
Ceph
Summary
Ceph Components and Services
Introduction
Core components
Reliable Autonomic Distributed Object Store (RADOS)
MONs
Object Storage Daemons (OSDs)
Ceph manager
RADOS Gateway (RGW)
Admin host
CephFS MetaData server (MDS)
The community
Core services
RADOS Block Device (RBD)
RADOS Gateway (RGW)
CephFS
Librados
Summary
Hardware and Network Selection
Introduction
Hardware selection criteria
Corporate procurement policies
Power requirements: amps, volts, and outlets
Compatibility with management infrastructure
Compatibility with physical infrastructure
Configuring options for one-stop shopping
Memory
RAM capacity and speed
Storage drives
Storage drive capacity
Storage drive form factor
Storage drive durability and speed
Storage drive type
Number of storage drive bays per chassis
Controllers
Storage HBA / controller type
Networking options
Network versus serial versus KVM management
Adapter slots
Processors
CPU socket count
CPU model
Emerging technologies
Summary
Planning Your Deployment
Layout decisions
Convergence: Wisdom or Hype?
Planning Ceph component servers
Rack strategy
Server naming
Architectural decisions
Pool decisions
Replication
Erasure Coding
Placement Group calculations
OSD decisions
Back end: FileStore or BlueStore?
OSD device strategy
Journals
Filesystem
Encryption
Operating system decisions
Kernel and operating system
Ceph packages
Operating system deployment
Time synchronization
Packages
Networking decisions
Summary
Deploying a Virtual Sandbox Cluster
Installing prerequisites for our Sandbox environment
Bootstrapping our Ceph cluster
Deploying our Ceph cluster
Scaling our Ceph cluster
Summary
Operations and Maintenance
Topology
The 40,000 foot view
Drilling down
OSD dump
OSD list
OSD find
CRUSH dump
Pools
Monitors
CephFS
Configuration
Cluster naming and configuration
The Ceph configuration file
Admin sockets
Injection
Configuration management
Scrubs
Logs
MON logs
OSD logs
Debug levels
Common tasks
Installation
Ceph-deploy
Flags
Service management
Systemd: the wave (tsunami?) of the future
Upstart
sysvinit
Component failures
Expansion
Balancing
Upgrades
Working with remote hands
Summary
Monitoring Ceph
Monitoring Ceph clusters
Ceph cluster health
Watching cluster events
Utilizing your cluster
OSD variance and fillage
Cluster status
Cluster authentication
Monitoring Ceph MONs
MON status
MON quorum status
Monitoring Ceph OSDs
OSD tree lookup
OSD statistics
OSD CRUSH map
Monitoring Ceph placement groups
PG states
Monitoring Ceph MDS
Open source dashboards and tools
Kraken
Ceph-dash
Decapod
Rook
Calamari
Ceph-mgr
Prometheus and Grafana
Summary
Ceph Architecture: Under the Hood
Objects
Accessing objects
Placement groups
Setting PGs on pools
PG peering
PG Up and Acting sets
PG states
CRUSH
The CRUSH Hierarchy
CRUSH Lookup
Backfill, Recovery, and Rebalancing
Customizing CRUSH
Ceph pools
Pool operations
Creating and listing pools
Ceph data flow
Erasure coding
Summary
Storage Provisioning with Ceph
Client Services
Ceph Block Device (RADOS Block Device)
Creating and Provisioning RADOS Block Devices
Resizing RADOS Block Devices
RADOS Block Device Snapshots
RADOS Block Device Clones
The Ceph Filesystem (CephFS)
CephFS with Kernel Driver
CephFS with the FUSE Driver
Ceph Object Storage (RADOS Gateway)
Configuration for the RGW Service
Performing S3 Object Operations Using s3cmd
Enabling the Swift API
Performing Object Operations using the Swift API
Summary
Integrating Ceph with OpenStack
Introduction to OpenStack
Nova
Glance
Cinder
Swift
Ganesha / Manila
Horizon
Keystone
The best choice for OpenStack storage
Integrating Ceph and OpenStack
Guest Operating System Presentation
Virtual OpenStack Deployment
Summary
Performance and Stability Tuning
Ceph performance overview
Kernel settings
pid_max
kernel.threads-max, vm.max_map_count
XFS filesystem settings
Virtual memory settings
Network settings
Jumbo frames
TCP and network core
iptables and nf_conntrack
Ceph settings
max_open_files
Recovery
OSD and FileStore settings
MON settings
Client settings
Benchmarking
RADOS bench
CBT
FIO
Fill volume, then random 1M writes for 96 hours, no read verification
Fill volume, then small block writes for 96 hours, no read verification
Fill volume, then 4k random writes for 96 hours, occasional read verification
Summary