Functional Ceph RadosGW docker container and config

This is just a basic example of getting RadosGW running in Docker with Keystone auth support. It serves both the S3 and Swift APIs out of the box, so with the configs below you can use tools like s3cmd or 'openstack container list' against it.
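Once everything below is in place, you can sanity-check both APIs with something like this (the endpoint, access keys and credentials here are placeholders, not values from this setup):

# Swift API through the Keystone-authenticated OpenStack CLI
openstack container list

# S3 API with s3cmd, pointing straight at the gateway
s3cmd --host=rgw.example.com:8080 --host-bucket=rgw.example.com:8080 --no-ssl \
      --access_key=ACCESS --secret_key=SECRET ls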

Command to run inside the Docker container

radosgw -d -f --cluster ceph --name client.rgw.mon-1.rgw0 --setuser ceph --setgroup ceph
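
The client.rgw.mon-1.rgw0 identity referenced here needs a keyring on the cluster first. Something along these lines creates one with the usual RGW caps (adjust caps and path to your deployment):

ceph auth get-or-create client.rgw.mon-1.rgw0 mon 'allow rw' osd 'allow rwx' \
    -o /var/lib/ceph/radosgw/ceph-rgw.mon-1.rgw0/keyring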

Dockerfile

FROM ubuntu:18.04
RUN apt update
RUN apt install wget gnupg -y
RUN wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
RUN echo deb https://download.ceph.com/debian-octopus/ bionic main | tee /etc/apt/sources.list.d/ceph.list
RUN apt update
ENV DEBIAN_FRONTEND=noninteractive
RUN apt install -y radosgw
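
A rough way to build and start it (the image name, host networking and bind mounts are assumptions; the gateway needs ceph.conf, its keyring and a writable log directory):

docker build -t radosgw .
docker run -d --net=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -v /var/log/ceph:/var/log/ceph \
    radosgw \
    radosgw -d -f --cluster ceph --name client.rgw.mon-1.rgw0 --setuser ceph --setgroup ceph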

ceph.conf example

[client.rgw.mon-1.rgw0]
host = mon-1
keyring = /var/lib/ceph/radosgw/ceph-rgw.mon-1.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-mon-1.rgw0.log
rgw frontends = beast port=8080
#endpoint=0.0.0.0:8080
rgw thread pool size = 51

rgw keystone api version = 3
rgw keystone url = http://172.17.8.101:5000
rgw keystone admin user = admin
rgw keystone admin password = c5Tl0IsBeIS2XDZ58AkBmI2M
rgw keystone admin domain = default
rgw keystone admin project = admin
rgw keystone accepted roles = member,admin
rgw keystone token cache size = 500
#rgw keystone implicit tenants = {true for private tenant for each new user}


# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
cluster network = 172.21.0.0/16
fsid = 8b8dff27-af09-4a7a-bc71-f12b57832ecd
mon host = [v2:172.21.15.12:3300,v1:172.21.15.12:6789],[v2:172.21.15.13:3300,v1:172.21.15.13:6789],[v2:172.21.15.14:3300,v1:172.21.15.14:6789]
mon initial members = mon-1,mon-2,mon-3
osd pool default crush rule = -1
public network = 172.21.0.0/16
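
With the settings above, Swift clients authenticate against Keystone directly. S3 clients still need access/secret keys: either create a local RGW user with radosgw-admin, or, if you also want S3 keys validated by Keystone, enable the rgw s3 auth use keystone option and hand out EC2-style credentials. A rough sketch with placeholder names:

# Local RGW user (the keys are printed in the command output)
radosgw-admin user create --uid=backup --display-name="Backup user"

# Or Keystone-backed EC2 credentials for the current project
openstack ec2 credentials create
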
MySQL backup script using s3cmd

#!/bin/bash

# Set IFS so read splits fields on "," (the config file is CSV).
export IFS=","

NOW=$(date +"%Y_%m_%d_%H_%M")

DATABASES_CONFIG_FILE="/home/bfnadmin/databases.csv"

S3_ENDPOINT="https://adleast-ceres.copyworldit.com.au"
S3_BUCKET="s3://sql-backups"

TEMP_BACKUP_DIR="backups"

while read -r HOST USERNAME PASSWORD DB_SRV DB_NAME;
do
    echo "[$DB_SRV - $DB_NAME]"

    mysqldump --single-transaction --quick --lock-tables=false \
    -h "$HOST" \
    -u "$USERNAME" \
    -p"$PASSWORD" \
    "$DB_NAME" | gzip > "$TEMP_BACKUP_DIR/$DB_SRV-$DB_NAME-$NOW.sql.gz"

    echo "Uploading backup to S3 storage - $DB_SRV-$DB_NAME-$NOW.sql.gz"

    s3cmd put "$TEMP_BACKUP_DIR/$DB_SRV-$DB_NAME-$NOW.sql.gz" "$S3_BUCKET"

    rm "$TEMP_BACKUP_DIR/$DB_SRV-$DB_NAME-$NOW.sql.gz"

    echo -e "\n\n"

done < "$DATABASES_CONFIG_FILE"
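
The script leaves the actual endpoint and keys to s3cmd's own config, so the S3_ENDPOINT variable is informational. A minimal ~/.s3cfg for the gateway plus a nightly cron entry might look like this (the access/secret keys and the script path are placeholders):

# ~/.s3cfg (relevant entries only)
host_base = adleast-ceres.copyworldit.com.au
host_bucket = adleast-ceres.copyworldit.com.au
access_key = ACCESS
secret_key = SECRET
use_https = True

# crontab entry: run the backup at 01:00 every night
0 1 * * * /home/bfnadmin/sql-backup.sh >> /var/log/sql-backup.log 2>&1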