GitOps with NixOS · 2026-05-05
Two servers, one Git repo
by Max Dollinger
A few months ago I decided to give self-hosting another shot. To keep things convenient, I rented two small servers from a cloud provider and got going. It didn’t take long before managing them started to feel like a chore. Every change meant SSHing in, editing files in place, or copying configs back and forth from my laptop, then doing it all again on the other box. The two ran different application stacks, which didn’t make any of it easier; every update, every package bump, every tweak still had to happen on both. I wanted to spend my time on the software actually running, not on babysitting the machines underneath it.
That’s when I learned about GitOps.
#Enter GitOps
GitOps is a way of managing infrastructure and apps where Git is the single source of truth. Instead of logging into a server, running commands from your laptop, or clicking around in the AWS console, you write a file (YAML, HCL, Nix, whatever) that describes what you want: “an S3 bucket named logs-prod with versioning on” or “run this container from myapp:1.4.2 with these env vars.” You commit that file and open a pull request. Once it’s merged, an agent (just a program running on the server, watching your repo in the background) notices the change and makes reality match the file. If something drifts (someone tweaks a setting by hand, a service gets disabled), the agent quietly puts it back. Want to update a container? Change the image tag, maybe tweak a setting or two, commit, and you’re done.

Every change flows through Git, so infrastructure gets the same review, audit trail, and rollback story as your application code: revert the commit, and the system follows. For developers, shipping a new service or bumping an image tag is just a normal Git workflow. For DevOps teams, it means no more “who ran that command at 2am,” less drift, and a delivery pipeline that’s reproducible by design.
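As a preview of where this post ends up, the “run this container” example above might look like the following in NixOS terms. The image tag and environment variables are placeholders, not a real app:

```nix
# Hypothetical fragment of a machine's config, committed to Git.
# An agent on the server notices the merge and makes reality match.
{
  virtualisation.oci-containers.containers.myapp = {
    image = "myapp:1.4.2";   # bump this tag in a PR to deploy
    environment = {
      PORT = "8080";
      LOG_LEVEL = "info";
    };
  };
}
```

Deploying the next version is a one-line diff to the `image` attribute, reviewed and merged like any other change.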
#A spaceship to the grocery store
Naturally, I did what most GitOps tutorials point you at and reached for Kubernetes, Talos Linux specifically. I got it working eventually, though the bootstrapping alone gave me more grey hair than my kids have. Once the dust settled, I took a step back and realized I’d built a small spaceship to drive to the grocery store. Multiple nodes, autoscaling, rolling fleet upgrades, none of that was actually on my list. I had two little servers and wanted them to behave.
So I went looking for something simpler, and that’s how I ended up at NixOS: the same “the file is the truth, the system follows” idea as GitOps, but scoped down to a single machine instead of a cluster.
#NixOS: the same idea, smaller
Nix is a package manager with an unusual trick: instead of installing software into shared system folders like most Linux distros, it tucks every package into its own isolated directory, labeled with a hash of everything that went into building it. That means you can have ten versions of Python on one machine without them stepping on each other, and an upgrade that breaks something can be rolled back instantly. NixOS takes this idea and applies it to the whole operating system. Instead of configuring a server by installing packages, editing files in /etc, and tweaking services by hand, you write a single file that describes what the machine should look like: which packages are installed, which services run, what the firewall allows, who the users are. You rebuild, and NixOS makes the system match the file atomically, with the old version kept around in case you want to boot back into it. If GitOps is “the repo describes the system, and an agent makes it real,” NixOS is the same idea pushed down to the level of a single machine: the config file is the source of truth, and the OS itself is the agent.
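To make that concrete, here’s a rough sketch of such a file. The hostname, packages, and user are illustrative, not my actual setup:

```nix
# /etc/nixos/configuration.nix — the whole machine, described in one place.
{ pkgs, ... }:
{
  networking.hostName = "web-1";  # illustrative name

  # Packages available system-wide.
  environment.systemPackages = with pkgs; [ git htop ];

  # Services are declared, not started by hand.
  services.nginx.enable = true;
  services.openssh.enable = true;

  # The firewall state lives in the same file.
  networking.firewall.allowedTCPPorts = [ 80 443 ];

  # Users, too.
  users.users.max = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Running `nixos-rebuild switch` makes the system match this file; previous generations stay in the boot menu in case you need to roll back.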
#Why not Ansible?
Ansible was the obvious alternative, but it didn’t quite fit. Playbooks are imperative: you describe every step needed to bring a machine to your desired state, rather than the state itself. That doesn’t fit the declarative model GitOps is built on, and anything you didn’t write a task for stays as it was: a manually installed package stays installed, a hand-edited config stays edited. NixOS goes the other way; the file describes the whole state, and anything not in the file isn’t on the machine. Runs are also slow, since each one is a stack of SSH connections, Python bootstraps, and tasks executed one at a time; a NixOS rebuild on the same hardware is usually seconds. And the delivery model doesn’t fit either: Ansible pushes from a runner, so “commit and forget” means standing up a CI pipeline with credentials and network access into every box. The pull model GitOps actually wants (the server watches the repo and applies changes itself) is what NixOS plus a small daemon gives you for free.
#The missing pieces
One more piece of the usual GitOps stack I skipped: OpenTofu, for provisioning the infrastructure from code. For two servers I set up once, it wasn’t worth it. I clicked them into existence and moved on; if I ever outgrow that, OpenTofu is waiting.
That left three questions about the machines themselves. How do I get NixOS onto a cloud VM that only ships Ubuntu or Debian images? How do I keep secrets in a Git repo without leaking them? And what plays the role of the agent, the thing that watches the repo and applies changes? Three tools answer them. nixos-anywhere reinstalls a stock cloud image as NixOS from a config file. sops with an age key keeps secrets encrypted in the repo, with each machine holding the key for its own bits. comin is the agent: a small daemon that runs on each box, watches a branch, and rebuilds the system on every change.
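As a sketch of how the last two pieces plug into the machine config, here are the corresponding NixOS module options. The repo URL, branch, key path, and secret names are placeholders, and the option names follow the comin and sops-nix module documentation at the time of writing, so check them against the versions you actually use:

```nix
{
  # comin: the pull-based agent. It watches the branch and rebuilds
  # the system whenever it moves — no push pipeline needed.
  services.comin = {
    enable = true;
    remotes = [{
      name = "origin";
      url = "https://github.com/example/infra.git";  # placeholder repo
      branches.main.name = "main";
    }];
  };

  # sops-nix: secrets stay encrypted in the repo; this machine's age key
  # decrypts only its own entries at activation time.
  sops = {
    defaultSopsFile = ./secrets/web-1.yaml;  # placeholder path
    age.keyFile = "/var/lib/sops-nix/key.txt";
    secrets."myapp/apiKey" = { };  # decrypted under /run/secrets
  };
}
```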
A walkthrough of the whole setup will follow in the next posts.