Functional Ceph RadosGW Docker container and config

This article provides a basic guide to deploying RadosGW in a Docker container with Keystone authentication support. RadosGW is part of the Ceph distributed storage system and exposes S3- and Swift-compatible APIs, making it versatile for various cloud storage applications.

Here, we will go through setting up RadosGW in a Docker container and configuring it to use Keystone for authentication. This setup allows you to leverage tools like s3cmd for S3 API or OpenStack commands (e.g., openstack container list) for Swift API.

Setting Up RadosGW in Docker

First, we start by running the RadosGW daemon inside a Docker container. Here's the command you need:

radosgw -d -f --cluster ceph --name client.rgw.mon-1.rgw0 --setuser ceph --setgroup ceph

This command starts the RadosGW daemon in the foreground with debug logging (-d and -f), and specifies the cluster name, the client name, and the user/group the daemon runs as.
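The daemon authenticates to the cluster using the keyring that belongs to its --name. If that keyring does not exist yet, it can be created from a node with admin access; the sketch below uses the usual capabilities for an RGW client, and the keyring path matches the ceph.conf shown later in this article:

```shell
# Create the cephx keyring for the gateway client with typical RGW caps.
mkdir -p /var/lib/ceph/radosgw/ceph-rgw.mon-1.rgw0
ceph auth get-or-create client.rgw.mon-1.rgw0 \
  mon 'allow rw' osd 'allow rwx' \
  -o /var/lib/ceph/radosgw/ceph-rgw.mon-1.rgw0/keyring
```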

Creating the Dockerfile

Next, we need to create a Dockerfile to build our Docker image. The Dockerfile should be set up as follows:

FROM ubuntu:18.04
RUN apt update
RUN apt install wget gnupg -y
RUN wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
RUN echo deb https://download.ceph.com/debian-octopus/ bionic main | tee /etc/apt/sources.list.d/ceph.list
RUN apt update
ENV DEBIAN_FRONTEND=noninteractive
RUN apt install -y radosgw

This Dockerfile is based on Ubuntu 18.04. It includes steps to update the package list, install necessary tools, add the Ceph repository, and finally install RadosGW.
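With the Dockerfile in place, the image can be built and the daemon from the previous section launched inside it. A sketch, with an illustrative image tag; it assumes the host's Ceph configuration and the gateway keyring live under /etc/ceph and /var/lib/ceph, and uses host networking so the frontend port is reachable directly:

```shell
# Build the image from the Dockerfile above (tag name is illustrative).
docker build -t radosgw:octopus .

# Run the gateway, mounting the host's Ceph config and the gateway
# keyring so the daemon can reach the cluster.
docker run -d --name rgw --network host \
  -v /etc/ceph:/etc/ceph:ro \
  -v /var/lib/ceph:/var/lib/ceph \
  radosgw:octopus \
  radosgw -d -f --cluster ceph --name client.rgw.mon-1.rgw0 \
    --setuser ceph --setgroup ceph
```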

Configuring RadosGW with Keystone

For RadosGW to work with Keystone authentication, you need to edit the ceph.conf file. Here's an example configuration:

[client.rgw.mon-1.rgw0]
host = mon-1
keyring = /var/lib/ceph/radosgw/ceph-rgw.mon-1.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-mon-1.rgw0.log
rgw frontends = beast port=8080
#endpoint=0.0.0.0:8080
rgw thread pool size = 51

rgw keystone api version = 3
rgw keystone url = http://172.17.8.101:5000
rgw keystone admin user = admin
rgw keystone admin password = c5Tl0IsBeIS2XDZ58AkBmI2M
rgw keystone admin domain = default
rgw keystone admin project = admin
rgw keystone accepted roles = member,admin
rgw keystone token cache size = 500
#rgw keystone implicit tenants = {true for private tenant for each new user}

# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
cluster network = 172.21.0.0/16
fsid = 8b8dff27-af09-4a7a-bc71-f12b57832ecd
mon host = [v2:172.21.15.12:3300,v1:172.21.15.12:6789],[v2:172.21.15.13:3300,v1:172.21.15.13:6789],[v2:172.21.15.14:3300,v1:172.21.15.14:6789]
mon initial members = mon-1,mon-2,mon-3
osd pool default crush rule = -1
public network = 172.21.0.0/16

This configuration sets up the client with the necessary Keystone information, including the API version, URL, admin credentials, and accepted roles.
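With Keystone wired in, the gateway can be exercised through either API, as mentioned at the start of the article. A sketch, assuming your OpenStack credentials are already sourced, the gateway answers on mon-1:8080 (hostname illustrative), and s3cmd has been configured with keys from `openstack ec2 credentials create`:

```shell
# Swift API: list containers through the gateway using Keystone auth.
openstack container list

# S3 API: point s3cmd at the gateway and list buckets.
s3cmd --host=mon-1:8080 --host-bucket=mon-1:8080 --no-ssl ls
```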

Automating Database Backups

The script below automates database backups and stores them on S3-compatible storage. It reads database connection details from a CSV file, backs up each database with mysqldump, then compresses the dump and uploads it to an S3 bucket. Here's the script:

#!/bin/bash

# Set the environment variable so the read builtin splits fields on ",".
export IFS=","

NOW=$(date +"%Y_%m_%d_%H_%M")

DATABASES_CONFIG_FILE="/home/bfnadmin/databases.csv"

S3_ENDPOINT="https://adleast-ceres.copyworldit.com.au"
S3_BUCKET="s3://sql-backups"

TEMP_BACKUP_DIR="backups"

while read HOST USERNAME PASSWORD DB_SRV DB
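The loop expects each CSV row to supply, in order, the host label, credentials, database server address, and database name. A hypothetical databases.csv row (all values illustrative), with the IFS-based splitting checked in isolation:

```shell
# Illustrative databases.csv row format (hypothetical values):
#   HOST,USERNAME,PASSWORD,DB_SRV,DB
line="app-1,backup,s3cr3t,db.example.internal,shopdb"

# The script sets IFS="," so `read` splits each row on commas.
IFS="," read HOST USERNAME PASSWORD DB_SRV DB <<< "$line"
echo "$DB_SRV / $DB"   # prints: db.example.internal / shopdb
```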