I recently rebuilt my Ceph lab to more closely mirror a real production deployment, rather than the usual "it works, but don't look too closely" lab setup. The goals were simple but non-negotiable:

- 3 MONs (odd quorum)
- 2 MGRs (HA control plane)
- Host-level fault domain
- Replication size = 3
- RGW (S3) only; no CephFS, no RBD
- Clean DNS (no /etc/hosts hacks)

This post walks through the exact process I used to deploy a clean, repeatable Ceph RGW cluster using cephadm on Ubuntu, with explicit placement control and zero surprises.

## 🧠 Cluster Design – Nodes & IPs

### Monitor / Manager Nodes (Control Plane)

| Hostname | Role | IP |
|---|---|---|
| ceph-mon01 | MON + MGR | 172.16.1.81 |
| ceph-mon02 | MON + MGR | 172.16.1.82 |
| ceph-mon03 | MON + MGR | 172.16.1.83 |

### RGW (S3 Gateway)

| Hostname | Role | IP |
|---|---|---|
| ceph-rgw01 | RGW Gateway | 172.16.1.86 |

### OSD Storage Nodes

| Hostname | Role | IP |
|---|---|---|
| ceph-osd01 | OSD Node (6 × 80 GB disks) | 172.16.1.91 |
| ceph-osd02 | OSD Node (6 × 80 GB disks) | 172.16.1.92 |
| ceph-osd03 | OSD Node (6 × 80 GB disks) | 172.16.1.93 |

## 📋 Cluster Requirements

...
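For orientation, the node layout in the design section above maps almost one-to-one onto cephadm service specs. The following is only a minimal sketch of that placement, not the exact spec used in this post: the filename, the `service_id` values, the `count: 2` MGR handling, and the "use all available devices" OSD rule are my own assumptions.

```yaml
# cluster-spec.yaml (hypothetical sketch of the placement described above)
service_type: mon
placement:
  hosts:
    - ceph-mon01
    - ceph-mon02
    - ceph-mon03
---
service_type: mgr
placement:
  count: 2                # 2 MGRs (active + standby) chosen from the MON hosts
  hosts:
    - ceph-mon01
    - ceph-mon02
    - ceph-mon03
---
service_type: rgw
service_id: s3            # assumed service name; pick your own realm/zone naming
placement:
  hosts:
    - ceph-rgw01
---
service_type: osd
service_id: lab-osds      # assumed drive-group name
placement:
  hosts:
    - ceph-osd01
    - ceph-osd02
    - ceph-osd03
spec:
  data_devices:
    all: true             # claim every unused disk (6 × 80 GB per OSD node)
```

Once the hosts are known to the orchestrator, a spec file like this would be applied with `ceph orch apply -i cluster-spec.yaml`; the point is that every daemon is pinned to explicit hosts rather than left to default scheduling.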