Recently it was announced that MinIO, a popular S3 server used everywhere from homelabs to enterprises, removed a lot of features from its open source front end and put the project into "maintenance mode". The company backing the project proposed an alternative called "AIStor", which does not interest me and, from the discussion I have seen, not many others either. So I looked for alternatives and, as the title suggests, landed on Garage as the direct replacement. I also use Rook-Ceph Object Store for S3 storage within my homelab's Kubernetes cluster, however it's intended for direct, in-cluster storage. What I need, and what I previously used MinIO for, is S3 storage backed by my two NASs and, importantly, outside of the cluster.
Available Options
Garage certainly isn't the only self-hosted S3 solution out there. Alongside Ceph, SeaweedFS and RustFS are well-known examples. However, what drew me to Garage first, and what fits best with what I had in mind, is its single-node capability and lightweight resource usage. The other options are much more suited to large-scale enterprise deployments running on distributed hardware and providing very high performance. A basic Garage deployment can be handled with a simple docker compose file.
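As a rough illustration of how small such a deployment can be, here is a minimal single-node compose sketch, assuming the official dxflrs/garage image and a garage.toml next to the compose file; the pinned version, paths, and directory names are placeholders to adapt:

```yaml
services:
  garage:
    image: dxflrs/garage:v1.0.1   # pin to a current release
    restart: unless-stopped
    volumes:
      - ./garage.toml:/etc/garage.toml:ro   # node configuration
      - ./meta:/var/lib/garage/meta         # metadata directory
      - ./data:/var/lib/garage/data         # object data directory
    ports:
      - "3900:3900"   # S3 API
      - "3901:3901"   # RPC between nodes
      - "3902:3902"   # S3 web (static site) endpoint
      - "3903:3903"   # admin API
```

The bind mounts keep metadata and data on the host, so the container itself stays disposable.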
Planning the Install
The storage backing my S3 endpoints will be my two NASs, one large and one small. Alongside a public cloud S3 service, this completes my 3-2-1 backup strategy for my homelab's data. Primarily I use a combination of CloudNativePG, VolSync, and Backrest to back up data, and all of these can use S3 endpoints as the backup target.
My larger NAS is in the same rack as my cluster, and I use an iSCSI driver to provision storage from this device into my cluster. I generally only use it for large volumes that I don't need or want taking up space on the Ceph cluster, which uses the nodes' SSDs. This NAS will provide storage to the Garage instance used for failure and recovery of the application domain only, as well as for real-time backups such as WALs from my databases. I will run the application and provide endpoints in the cluster, backed by out-of-cluster storage, and use my other two S3 endpoints for possible cluster failure and recovery operations.
The smaller NAS is a Raspberry Pi 4 with two 2TB SSDs in RAID1. It will serve as an out-of-rack and geographically distant backup. While I can't back up all my data to this device, it can hold the data that I consider most important and irrecoverable. Here I will run the Garage instance alongside Tailscale for a secure connection.
To have an easy-to-use web UI for Garage, I will be using this project. It is simple and offers all the options that I need from a web experience.
Installation
I started by adapting the documentation on deploying a cluster for the instance that will run inside the Kubernetes cluster. I utilized the app-template chart by bjw-s-labs as a way to adapt docker run/compose deployments into a Helm Chart. This resulted in this Chart, which is integrated into my deployment pattern. It creates three deployments with accompanying services, routes, config maps, persistence, etc. One deployment runs an Ubuntu "debug" container that gives me shell access to Garage when I need to run commands.
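From a shell like that debug container, the initial setup comes down to a handful of garage commands. This is a sketch following the Garage cluster docs; the zone name, capacity, bucket, and key names below are placeholders for your own:

```shell
# Show cluster health and the node ID of this (single) node
garage status

# Assign a layout role to the node, then apply the staged layout
garage layout assign -z dc1 -c 1T <node_id>
garage layout apply --version 1

# Create a bucket and an access key, then grant the key access
garage bucket create backups
garage key create backup-key
garage bucket allow --read --write backups --key backup-key
```

The key ID and secret printed by `garage key create` are what go into the backup tools' S3 credentials.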
Shortly after I installed this on my cluster, I found that a new project, a Garage Operator, had recently been started. In the future I may migrate to it instead of using Helm Charts. I likely would have used it had it been available at the start, as I prefer the operator pattern to Helm.
On my smaller NAS I used this set of configuration files, which I deployed using Komodo. Here I only exposed the Garage endpoints over Tailscale for secure remote access.
Operations
One of my first concerns with Garage was S3 API compatibility. Here is the list of API features that Garage supports; it covers most of the S3 API, but there are exceptions. My setup does not need those specific features, so it is not a concern.
However, I did run into one issue: I needed to add explicit region configuration, such as the ACCESS_REGION credential env. In Garage and in my backup configurations I set a default region of "us-east-1", since I do not have any real geographic distribution.
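On the Garage side, the region lives in garage.toml; its default value is "garage", so whatever the client sends has to match it. A sketch of the relevant section, with the bind address and root domain as illustrative values:

```toml
[s3_api]
api_bind_addr = "[::]:3900"
s3_region = "us-east-1"   # must match the region configured on clients
root_domain = ".s3.garage.localhost"
```

With both sides agreeing on the region string, the SigV4 signatures validate and requests go through.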
Once that was set and I deployed my changes, files began to populate the appropriate buckets. The first thing I did next was to test restoration of these backups. With VolSync and CloudNativePG this process is easy with GitOps, and I followed the documented practices I used with the cloud provider and the prior MinIO setup. I may go into more depth in a future article about how I use these two applications specifically, since I use a customized "helper" Chart to vastly simplify the process. But for now, everything works!
Conclusion
Migrating to Garage turned out to be a lot less work than I imagined, in large part due to the mature tooling and deployment practices I have across my infrastructure. And with the increased popularity that Garage is experiencing, no doubt because others share my opinion of the MinIO situation, I expect Garage to see good support and maintenance in the future. Outside of the possible migration to the operator, I don't expect much future work maintaining this system. I am very thankful to the Garage devs for providing this reliable software!
