October 23, 2025 | 17:13 | 2 minutes

MinIO OSS: Another Rug Pull in Open Source?

MinIO, once a darling of the open-source object storage world, has followed a path we’ve seen before: slowly pulling the rug on its OSS community. It started subtly: features were stripped from the community edition, leaving users with a barebones experience. Now, MinIO has taken it a step further by ceasing distribution of pre-built binaries altogether. The project is now source-only, as confirmed in their GitHub README. This move significantly raises the barrier to entry for users who relied on the simplicity of ready-to-run binaries. One of MinIO’s biggest selling points was its sleek UI. With that gone from the OSS version and enterprise pricing remaining steep, it’s time to consider alternatives.

Enter Ceph

A strong contender is Ceph, especially when paired with Rook for Kubernetes-native deployments. Ceph supports:

  • Multi-node HA setups
  • Single-node deployments with replication or erasure coding
  • CephFS for true RWX support
  • RBD for block storage

Depending on your architecture, Ceph can scale impressively and offers a rich feature set that MinIO’s OSS version no longer matches.
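
To give a feel for how direct Ceph's native interface is, here's a minimal sketch using the Python rados bindings. The pool name, config path, and keyring path are illustrative assumptions; adjust them to your cluster.

    import rados

    # Connect to the cluster; conffile and keyring are the usual default
    # paths, but may differ on your setup (assumptions).
    cluster = rados.Rados(
        conffile='/etc/ceph/ceph.conf',
        conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'},
    )
    cluster.connect()

    try:
        # 'demo-pool' is a hypothetical pool created beforehand,
        # e.g. with `ceph osd pool create demo-pool`.
        ioctx = cluster.open_ioctx('demo-pool')
        try:
            # Store and read back a small object directly in RADOS.
            ioctx.write_full('greeting', b'hello from RADOS')
            print(ioctx.read('greeting'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()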

Personal use case

Personally, I use Ceph in my homelab as a single-host NAS setup. I’ve configured different pools for:

  • VM storage
  • Object storage (S3 example after this list)
  • File storage
  • RWX-capable storage for my Talos Kubernetes cluster via the Ceph CSI driver
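
For the object storage pool, assuming it is exposed through Ceph's RADOS Gateway (RGW), which speaks the S3 protocol, any S3 client that worked against MinIO keeps working. A sketch with boto3; the endpoint, bucket name, and credentials below are placeholders:

    import boto3

    # Endpoint and credentials are placeholders; RGW exposes an
    # S3-compatible API, so existing S3 tooling works unchanged.
    s3 = boto3.client(
        's3',
        endpoint_url='http://ceph-rgw.local:8080',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    s3.create_bucket(Bucket='backups')
    s3.put_object(Bucket='backups', Key='hello.txt', Body=b'served by Ceph RGW')
    print(s3.get_object(Bucket='backups', Key='hello.txt')['Body'].read())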

My hardware setup includes:

  • 4 × 4TB NVMe datacenter-grade drives
  • 2 × 18TB SAS drives
  • 12 × 16TB SATA drives (joining the pool soon)

Using CRUSH maps, I assign disks to different use cases:

  • VMs reside on NVMe with three-way replication
  • CephFS metadata also uses three-way replicated NVMe
  • File and object storage use erasure coding (EC 18+3) on HDDs, yielding ~86% usable space while tolerating up to 3 disk failures
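
For anyone double-checking the numbers, the efficiency and failure tolerance fall straight out of the EC parameters k and m; a quick sanity check:

    # EC k+m: k data chunks plus m coding chunks per object stripe.
    k, m = 18, 3

    # Usable fraction of raw capacity is k / (k + m).
    print(f"EC {k}+{m} usable: {k / (k + m):.1%}")  # 85.7%, the ~86% above

    # The pool survives the loss of any m chunks, i.e. up to 3 disks
    # (with a failure domain of one disk per chunk).
    print(f"tolerates {m} disk failures")

    # For contrast, three-way replication keeps only a third of raw space.
    print(f"replica 3 usable: {1 / 3:.1%}")         # 33.3%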

Performance-wise, the HDD pool delivers around 2.6 GB/s read/write, bottlenecked by my Xeon E5-2640 v4 CPU, as the EC 18+3 configuration is quite compute-intensive. Replication, by contrast, delivers native NVMe speeds.

If you’re exploring Ceph or Rook and have questions, feel free to reach out; I’m happy to help!
