
My Homelab

Homelab / Jul 24, 2025 / 5 minutes to read

There is probably no better place to start than with an overview of how my homelab works and a bit of its history. It has pulled double duty as both the place where I tinker and the place where I deploy my selfhosted apps. That has forced me to make sure I know how to do disaster recovery: I've had a few times where I needed to rebuild from backups, and fortunately that has always worked. Even with those inconveniences I see it as a positive, since it lets me try things out with real data and usage while keeping the risk quite low.

Hardware

Over the years I've had a homelab I've explored most of the trends for managing and deploying software. I started with VMs running on CentOS on noisy Dell Rx720 rack servers. Then, as containers became more popular in the selfhosting space, I migrated over to managing docker compose files. For the RaspberryPis I have running this is still what I do, though now managed remotely using Komodo. My current iteration is the one I feel most comfortable with, but certainly the most complicated: Kubernetes using NUCs as hosts and a Synology NAS for storage.

Repository

In the past I had separate repos for each host, sometimes for application groups, sometimes for different domains, but now I use a monorepo for all my homelab and selfhosting infrastructure code. I host it on my Gitea instance, which also mirrors to my GitHub account.

Infrastructure Repository

I have the main directory split into clusters and hosts at the top level for logical domains. For each domain I use a common naming scheme: the first two characters are a type code, followed by a two-digit unique id and two characters of description (oftentimes the OS). For example, my main cluster uses cl01tl, which means "cluster, 01, Talos Linux". In that cluster the entire folder is organized for my ApplicationSet and Application generator in ArgoCD, which I'll explain more about in a bit. For hosts it's quite similar, but I group by application with a docker compose file at the root of each.
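To make the naming scheme concrete, here is a sketch of what the layout could look like. Only cl01tl is from my real setup; every other folder name here is made up for illustration:

```
infrastructure/
├── clusters/
│   └── cl01tl/                    # cluster, 01, Talos Linux
│       ├── app-one/               # one folder per application
│       └── app-two/
└── hosts/
    └── rp01rb/                    # hypothetical: RaspberryPi, 01, Raspbian
        └── app-three/
            └── docker-compose.yaml
```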

ArgoCD

When I first made the migration to Kubernetes it took a bit of time to settle on an idea of how to deploy applications, which led me to ArgoCD. Once I began to get familiar with it I took some more time testing out various ways to find what felt natural to me. I've always liked this period of discovery, and I try to avoid looking up guides or reading too deeply into documentation while I'm at it. It's probably why I've never finished a game of Civilization. But I settled on this pattern where I generate ApplicationSets and Applications from a folder structure.

This has made it near frictionless to deploy and tear down applications: just create or delete a folder and ArgoCD will reconcile that change. Inside each application folder is a Helm chart, where I often pull vendor charts or use bjw-s-labs app-template as a generic chart. This lets me add specific resources to customize for my needs, such as an ExternalSecret to pull secrets from Vault or my postgres-cluster chart, which gives me quick and easy access to a database managed by the CloudNativePG operator.
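The folder-driven pattern described above maps to ArgoCD's git directory generator. A minimal sketch, assuming a repo URL and cluster path like mine (the URL and names here are placeholders, not my actual values):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cl01tl-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://gitea.example.com/me/infrastructure.git
        revision: HEAD
        directories:
          # every folder under the cluster becomes an Application
          - path: clusters/cl01tl/*
  template:
    metadata:
      name: '{{ path.basename }}'
    spec:
      project: default
      source:
        repoURL: https://gitea.example.com/me/infrastructure.git
        targetRevision: HEAD
        path: '{{ path }}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{ path.basename }}'
      syncPolicy:
        automated:
          prune: true   # deleting a folder tears the app down
```

With `prune: true`, removing a folder from the repo removes the corresponding Application and its resources, which is what makes the create/delete-a-folder workflow frictionless.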

Renovate

Renovate runs as a Gitea workflow managing updates. So far I've found automerging patches, plus selecting a few apps I trust to automerge on minor versions, to be the best balance of stability and low maintenance. I have a few other workflows doing linting on Helm and docker compose files, mostly to keep things tidy.
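That automerge split can be expressed with Renovate's `packageRules`. A sketch of what such a config might look like (the package name is purely illustrative, not my actual trust list):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    },
    {
      "matchUpdateTypes": ["minor"],
      "matchPackageNames": ["grafana/grafana"],
      "automerge": true
    }
  ]
}
```

The first rule automerges all patch updates; the second opts specific, trusted packages into automerging minors as well, while everything else waits for manual review.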

Future Plans

In the future I have two major changes planned. The first is to migrate my remaining RaspberryPis to single-node Kubernetes. This may seem like adding a bit too much complexity and overhead, and perhaps it will prove to be, but I'm curious about how well or how poorly it works out. Talos Linux has made managing the host software easy, and whether intentionally or not it also makes single-node Kubernetes a bit easier. Using a common deployment method, monitoring tools, and so on seems like a useful simplification of my process. My big concern is how they would handle outages of my main cluster, since I would be using its ArgoCD instance to remotely manage them.
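One reason Talos makes the single-node case easier is that allowing workloads on the control plane is a single machine-config setting. A fragment, assuming the one node is both control plane and worker (this is my understanding of the option, not a config from my setup):

```yaml
# Talos machine config patch for a one-node cluster:
# let regular workloads schedule onto the control plane node.
cluster:
  allowSchedulingOnControlPlanes: true
```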

Rendered Manifests

The other major change is how exactly manifests get generated and applied to my cluster(s). Right now I use Helm with ArgoCD, but for a while I have been curious about other approaches. Instead of generating manifests within ArgoCD, I want to use a Gitea workflow where manifests are generated from cdk8s code. I expect this combination will give me greater flexibility and ease of management, while also ensuring I have proper oversight of what specifically gets deployed. Right now, unless I manually render Helm within my application folders, I don't know all the resources that will be deployed until ArgoCD renders and applies them. That process isn't something I'm happy with, as it has occasionally surprised me when new applications are created.
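A rendered-manifests pipeline along those lines might be sketched as the workflow below. The paths, job name, and commit step are all assumptions for illustration, since I haven't built this yet:

```yaml
# .gitea/workflows/render.yaml — hypothetical sketch
name: render-manifests
on:
  push:
    paths:
      - "cdk8s/**"
jobs:
  synth:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Synthesize plain YAML from cdk8s code
        run: |
          cd cdk8s
          npm ci
          npx cdk8s synth    # writes rendered manifests to dist/
      - name: Commit rendered manifests for ArgoCD to sync
        run: |
          git add cdk8s/dist
          git commit -m "chore: render manifests" || echo "nothing to commit"
          git push
```

Because ArgoCD would then point at the committed `dist/` directory rather than a Helm chart, every resource is visible in the repo (and reviewable in the diff) before anything is applied to the cluster.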

I expect both of those changes to be good topics to write about, among many other ideas I'd like to share.

infrastructure homelab selfhosting