Ceph Block Storage
Block-based storage interfaces are the most common way to store data on rotating media such as HDDs and CDs. The Ceph virtual block device is a strong candidate for mass data storage applications. Ceph block devices are thin-provisioned and resizable, and they offer high data durability. They provide capabilities such as snapshots, geo-replication, and asynchronous mirroring.
Ceph’s block devices deliver high performance with vast scalability to kernel modules, to KVM-based virtual machines such as QEMU, and to cloud computing systems such as OpenStack and CloudStack that rely on libvirt and QEMU to integrate with Ceph block devices.
Scale-out SAN: Ceph RADOS Block Device (RBD)
Ceph Block Storage Spotlight

Snapshot
A snapshot is a read-only copy of the state of an image at a particular point in time. One of the advanced features of Ceph block devices is that you can create snapshots of an image to retain a history of its state. Ceph also supports snapshot layering, which allows you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots using the rbd command and many higher-level interfaces, including QEMU, libvirt, OpenStack, and CloudStack.
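As a brief sketch of the `rbd` snapshot and layering workflow (the pool `rbd`, image `vm-disk`, and snapshot `snap1` are illustrative names, and a running cluster with client credentials is assumed):

```shell
# Create a point-in-time, read-only snapshot of an image
rbd snap create rbd/vm-disk@snap1

# Protect the snapshot so it cannot be deleted while clones depend on it
rbd snap protect rbd/vm-disk@snap1

# Layering: create a copy-on-write clone, e.g. to back a new VM
rbd clone rbd/vm-disk@snap1 rbd/vm-disk-clone

# List an image's snapshot history, or roll the image back to a snapshot
rbd snap ls rbd/vm-disk
rbd snap rollback rbd/vm-disk@snap1
```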
RBD Mirroring

Ceph can asynchronously mirror RBD images between two Ceph clusters. This capability uses the RBD journaling image feature to ensure crash-consistent replication between clusters. Mirroring is configured on a per-pool basis within peer clusters and can be set to automatically mirror all images within a pool or only a specific subset of images.
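A minimal sketch of enabling mirroring on one peer, assuming journal-based mirroring and illustrative names (`mypool`, `vm-disk`); peer bootstrap between the two clusters is omitted here:

```shell
# Enable mirroring for a pool in "pool" mode (mirrors all journaled images)...
rbd mirror pool enable mypool pool
# ...or in "image" mode, where images are opted in individually:
# rbd mirror pool enable mypool image
# rbd mirror image enable mypool/vm-disk

# Journal-based mirroring requires the journaling feature on the image
rbd feature enable mypool/vm-disk journaling

# Inspect replication health from either peer
rbd mirror pool status mypool
```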
Image Live-Migration

RBD images can be live-migrated between different pools within the same cluster or between different image formats and layouts. When migration is started, the source image is deep-copied to the destination image, pulling all snapshot history and optionally keeping a link to the source image’s parent to help preserve sparseness.
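The live-migration flow can be sketched with the `rbd migration` subcommands (the pool and image names are illustrative):

```shell
# Prepare: create the destination image and link it to the source
# (the destination pool, format, or layout may differ from the source)
rbd migration prepare sourcepool/vm-disk destpool/vm-disk

# Execute: deep-copy data blocks and snapshot history in the background;
# clients can already use the destination image while this runs
rbd migration execute destpool/vm-disk

# Commit: remove the link to the source once the copy is complete
rbd migration commit destpool/vm-disk
```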
Integrate Ceph RBD with Cloud Native Hosts

The Ceph Block Device can be integrated with many kinds of hosts and cloud platforms:
● Kernel Module - A Linux kernel module maps an RBD image as a virtual block device on the host.
● QEMU - Ceph Block Devices can integrate with QEMU virtual machines. The ability to make copy-on-write clones of a snapshot enables Ceph to provision block device images to virtual machines quickly, because the client doesn’t have to download an entire image each time it spins up a new virtual machine.
● Kubernetes - You can use Ceph Block Device images with Kubernetes through Ceph-CSI, which dynamically provisions RBD images to back Kubernetes volumes. Ceph-CSI can also map these RBD images as block devices on worker nodes running pods that reference an RBD-backed volume. Because RBD images are striped across the entire cluster, large images can outperform the local disks of a standalone server.
● OpenStack - OpenStack images, volumes, and guest disks integrate natively with Ceph’s block devices. OpenStack uses Ceph Block Device images through libvirt, which configures the QEMU interface to librbd. You can use OpenStack Glance to store images in a Ceph Block Device and use Cinder to boot a VM from a copy-on-write clone of an image.
● CloudStack - CloudStack integrates with Ceph’s block devices to provide a backend for CloudStack’s Primary Storage.
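As an example of the kernel-module path from the list above (the pool `mypool` and image `disk1` are illustrative names; a configured cluster and a client keyring on the host are assumed):

```shell
# Create a 4 GiB image (--size is given in MiB by default)
rbd create mypool/disk1 --size 4096

# Map it through the kernel RBD driver; prints a device such as /dev/rbd0
sudo rbd map mypool/disk1

# Use it like any local block device
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt

# Unmap when finished
sudo umount /mnt
sudo rbd unmap /dev/rbd0
```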
- Create and Manage RBD Images with Ceph Dashboard
The Ambedded Ceph & SUSE Enterprise Storage Appliance lets users manage Ceph with both the Ambedded UVS Manager and the Ceph Dashboard. The UVS Manager offers richer deployment features, while the Ceph Dashboard provides more detailed configuration options. In this video, you can learn how to use the Ceph Dashboard to manage block devices.
- Edit Ceph RBD images via the UVS manager