From Single Host to Global Distribution

The evolution of a FreeBSD infrastructure from one VM to six hosts across two continents

Post 1: From a Single Host to Global Distribution - never touch a running system!

The Beginning: When (Nikola) Tesla Was Enough

Like many infrastructure projects, mine started simple: a single FreeBSD host named tesla in West Europe, running a few services, which later moved into jails. FreeBSD's jail system provided excellent isolation, the ZFS filesystem offered robust storage management, and the ports system gave me control over every component. Life was good.

But as I grew in life and career, I felt the need to try more and more things. And I happened to get Oracle's free-tier Ampere VM. And a discounted VM in the US. And free company resources for personal use and testing.

The Current Landscape: Six Hosts, Two Continents

Today, the infrastructure spans six hosts across three regions:

West Europe (Primary Region)

North Europe

USA

Each of the three main hosts (tesla, newton, bohr), one per region, runs identical core services, while specialized hosts handle specific functions like package building and load balancing.

Why FreeBSD for This Distributed Chaos?

When people ask why I chose FreeBSD for a multi-continent setup (and in general :D ), the answer isn't just technical—it's philosophical (Because I can). FreeBSD embodies the Unix principle of "do one thing and do it well," but at the system level. And yes, I do like FreeBSD.

Jails: The Unsung Heroes - While everyone else wrestles with Docker orchestration and Kubernetes complexity, FreeBSD jails provide lightweight, secure isolation that just works. Each service gets its own jail, with complete process isolation and dedicated networking, but they all share the same kernel. No overhead, no complexity, just clean separation. I use Bastille for jail management.
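As a sketch of how this looks on a single host, the standard Bastille rc knobs in /etc/rc.conf start every jail at boot (everything else about the host here is illustrative, not my actual config):

```ini
# /etc/rc.conf fragment (illustrative)
bastille_enable="YES"     # run the Bastille rc script at boot
bastille_list="ALL"       # start every jail; or a space-separated list
zfs_enable="YES"          # jails live on ZFS datasets

# Jails themselves get created with commands like:
#   bastille bootstrap 14.2-RELEASE
#   bastille create web 14.2-RELEASE 10.0.0.10
```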

ZFS: Storage That Thinks Ahead - Native ZFS integration means snapshots, compression, and data integrity features work seamlessly across all hosts. When you're managing services across continents, having storage that can detect and correct corruption automatically isn't just nice—it's essential.
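As an example of the kind of automation this enables, a cron-driven rolling-snapshot schedule keeps point-in-time copies of every jail. The sketch below assumes the zfs-auto-snapshot script from the sysutils/zfstools port; the retention counts and dataset name are placeholders:

```ini
# /etc/crontab fragment (assumes sysutils/zfstools; retention counts illustrative)
PATH=/usr/local/sbin:/usr/sbin:/usr/bin:/bin
15 *  * * * root zfs-auto-snapshot hourly 24   # keep 24 hourly snapshots
30 0  * * * root zfs-auto-snapshot daily   7   # keep 7 daily snapshots

# Datasets opt in via a ZFS property, e.g.:
#   zfs set com.sun:auto-snapshot=true zroot/bastille
```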

Network Stack Maturity - FreeBSD's network stack handles the complex routing required for WireGuard mesh networking without breaking a sweat. Multiple jails, multiple network interfaces, complex routing—it all just works.

Predictable Performance - When your database is in Northern Europe, your storage is distributed across three regions, and your users might be anywhere, you need an OS that behaves consistently. FreeBSD's predictable performance characteristics make debugging and optimization possible across geographic distances.

The Service Ecosystem: More Complex Than It Started

What began as a few simple services has evolved into a "sophisticated" ecosystem:

Database Layer - PostgreSQL cluster managed by Patroni, with HAProxy providing intelligent load balancing and automatic failover. My applications connect to one endpoint, but traffic gets routed to the healthiest database instance across three regions.
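The usual way to wire HAProxy to Patroni is to health-check Patroni's REST API, which on recent versions returns 200 on /primary only from the current leader. A sketch of that pattern (the hostnames, ports, and mesh addresses below are placeholders, not my actual config):

```ini
# haproxy.cfg fragment (addresses and ports are placeholders)
listen postgres_write
    bind *:5000
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server tesla  10.0.0.1:5432 check port 8008   # Patroni REST API on 8008
    server newton 10.0.0.2:5432 check port 8008
    server bohr   10.0.0.3:5432 check port 8008
```

Because only the leader passes the health check, clients always land on the writable instance without knowing anything about the failover state.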

Storage Layer - Garage S3-compatible storage creates a distributed object store that backs everything from Nextcloud files to static website content and SSL certificates. Think of it as your own private AWS S3, but spread across continents.
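For flavor, a minimal per-node garage.toml might look like this. Key names follow the Garage reference configuration; the paths, addresses, and replication setting are placeholders (older Garage releases spell the latter replication_mode):

```toml
# garage.toml fragment (paths and addresses are placeholders)
metadata_dir = "/var/db/garage/meta"
data_dir     = "/var/db/garage/data"
replication_factor = 3               # one replica per region

rpc_bind_addr   = "[::]:3901"
rpc_public_addr = "10.0.0.1:3901"    # reachable over the WireGuard mesh
# rpc_secret = "..."                 # shared by all nodes, omitted here

[s3_api]
s3_region     = "garage"
api_bind_addr = "[::]:3900"
```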

Caching & Messaging - Redis/Valkey instances with Sentinel provide both caching and messaging services. Each region has its own instance for performance, but they coordinate through the WireGuard mesh when needed.
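A Sentinel quorum is what turns a single cache instance into a highly available service; the canonical sentinel.conf stanza looks like this (the master name and address are placeholders):

```ini
# sentinel.conf fragment (master name and address are placeholders)
sentinel monitor mymaster 10.0.0.1 6379 2    # quorum of 2 sentinels to declare failure
sentinel down-after-milliseconds mymaster 10000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```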

Application Services - Nextcloud runs locally in each region for performance, DNS services ensure everything resolves correctly, and a complete mail infrastructure handles email with the reliability people expect in 2025.

Monitoring & GitOps - Prometheus, Loki, and Grafana provide observability across all services, while Uptime Kuma watches endpoints and alerts via Discord. The newest addition is a GitOps system, still under development, using Gitea with Act runners—think GitHub Actions, but distributed across my own infrastructure.
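To make the cross-region scraping concrete, a prometheus.yml sketch that pulls node_exporter metrics from all three primary hosts over the mesh (the job name and mesh addresses are placeholders):

```yaml
# prometheus.yml fragment (targets are placeholder mesh addresses)
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "10.0.0.1:9100"   # tesla
          - "10.0.0.2:9100"   # newton
          - "10.0.0.3:9100"   # bohr
```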

The Networking Foundation

All hosts connect via WireGuard in a full-mesh configuration, creating encrypted tunnels across the public internet. This approach eliminates the need for expensive dedicated connections while providing the security and reliability needed for production services.
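In a full mesh, every host carries one [Peer] section per other host. A wg-quick-style sketch for a single host (the keys, endpoint, and the 10.0.0.0/24 overlay network are placeholders):

```ini
# /usr/local/etc/wireguard/wg0.conf fragment on one host (all values placeholders)
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <this-host-private-key>

# one [Peer] section per remote host; five of them in a six-host full mesh
[Peer]
PublicKey  = <newton-public-key>
Endpoint   = newton.example.org:51820
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25
```

With six hosts, a full mesh means 15 tunnels (n(n-1)/2), which is still perfectly manageable by hand at this scale.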

Jails communicate with each other through this WireGuard mesh, enabling services on tesla to seamlessly interact with databases on bohr or storage on newton, regardless of geographic location.

What Makes This Interesting

This isn't just another homelab setup: it demonstrates patterns like multi-region database failover, full-mesh encrypted networking, distributed object storage, and self-hosted GitOps, all on modest hardware.

Coming Up in This Series

In the following posts, we'll dive deep into each component of this infrastructure. You'll see actual configurations, learn about the challenges of running distributed services, and understand the operational practices that keep everything running smoothly.

Whether you're considering FreeBSD for your own infrastructure, curious about distributed systems, or interested in alternatives to traditional cloud architectures, this series will provide practical insights from a working production environment.

The next post will explore the WireGuard mesh that makes this all possible: how six hosts across two continents communicate securely and efficiently.