Infrastructure · Linux · Kubernetes · August 2025

Personal Deployment Server

A self-hosted Linux server running Kubernetes (k3s), NGINX, and containerized services. Built to understand the full stack of production infrastructure — from bare metal to live deployments — on my own hardware.

k3s · Linux · NGINX · Docker · Systemd

Overview

This project is a self-hosted, production-grade server running on physical hardware. The goal was to build and operate a real infrastructure environment — not a managed cloud platform — to develop a genuine understanding of how production systems work beneath the abstractions.

The server runs k3s (lightweight Kubernetes), NGINX for ingress and routing, and multiple containerized services deployed via Docker. All configuration is declarative and version-controlled, so the entire environment can be rebuilt from scratch.

This serves as the deployment target for other projects including the Dracula AI Agent and the Splunk monitoring platform — making it a real platform, not just a demo.

Architecture

☸️
Kubernetes (k3s)
  • Lightweight k3s distribution on Ubuntu
  • Declarative manifests for all workloads
  • Pod restarts and health checks
  • Namespaced service isolation
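The declarative, namespaced setup above can be sketched as a minimal Deployment manifest. All names, namespaces, image tags, and probe paths below are placeholders, not the cluster's actual workloads:

```yaml
# Illustrative manifest; names, namespace, image, and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
  namespace: apps            # per-service namespace isolation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: registry.local/example-service:1.2.0   # versioned by tag
          ports:
            - containerPort: 8080
          livenessProbe:      # k3s restarts the pod if this check fails
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:     # traffic is routed only once the pod reports ready
            httpGet:
              path: /healthz
              port: 8080
```

Because the whole workload is described in one file like this, `kubectl apply` can recreate it from scratch on a rebuilt machine.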
🌐
NGINX Ingress
  • Reverse proxy for all services
  • Host-based routing to containers
  • Static file serving
  • TLS termination
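A server block covering all four responsibilities above might look like the following sketch; the hostname, certificate paths, and upstream port are illustrative assumptions:

```nginx
# Illustrative reverse-proxy config; hostname, certs, and ports are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;            # host-based routing

    ssl_certificate     /etc/nginx/certs/app.example.com.pem;   # TLS termination
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location /static/ {
        root /var/www/app;                  # static files served directly
    }

    location / {
        proxy_pass http://127.0.0.1:30080;  # forward to the service's exposed port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```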
📦
Container Runtime
  • Docker for image builds
  • containerd runtime in k3s
  • Multi-service deployments
  • Image versioning by tag
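The build side of this card can be illustrated with a multi-stage Dockerfile. The base images and paths are purely hypothetical — the actual services' stacks are not specified here — but the pattern (build stage, slim runtime stage, tag per release) is the one described:

```dockerfile
# Hypothetical multi-stage build; base images and paths are placeholders.
FROM node:20-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Slim runtime image: only the built artifacts ship.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

The resulting image is tagged per release (e.g. `example-service:1.2.0`) so k3s manifests can pin exact versions rather than `latest`.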
🐧
Linux Foundation
  • Ubuntu Server, hardened config
  • Systemd service management
  • SSH key auth, UFW firewall
  • Automated system updates
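The SSH side of the hardened baseline boils down to a few `sshd_config` directives; this is a generic illustration of key-only auth, not the server's actual file:

```
# /etc/ssh/sshd_config — illustrative key-only baseline, not the actual config
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```

With these in place, UFW then only needs to allow SSH and HTTP/HTTPS inbound, and unattended upgrades cover the patching cadence.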

Features

♻️
Reproducible deployments
All k3s manifests and NGINX configs are version-controlled. The full environment can be rebuilt from scratch deterministically.
🔀
Multi-service routing
NGINX routes traffic to multiple services by hostname or path prefix — all running on the same physical machine.
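If this routing is expressed inside the cluster as a Kubernetes Ingress resource (rather than host-level NGINX config), it might look like the sketch below; the hostnames, service names, and ports are invented for illustration:

```yaml
# Hypothetical Ingress expressing host-based routing; all names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services
  namespace: apps
spec:
  ingressClassName: nginx
  rules:
    - host: agent.example.com        # one hostname per service,
      http:                          # all on the same machine
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dracula-agent
                port:
                  number: 80
    - host: monitor.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk
                port:
                  number: 8000
```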
🔒
Hardened Linux baseline
UFW firewall, SSH key-only auth, minimal installed packages, and automatic security updates.
📦
Container isolation
Each service runs in its own container with scoped resources, preventing interference between workloads.
🔁
Zero-downtime redeploy
k3s rolling updates replace containers while traffic continues routing to the old pod until the new one is healthy.
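The zero-downtime behaviour comes from the Deployment's rollout strategy; a fragment like this (values illustrative) tells k3s to keep the old pod serving until its replacement passes its readiness probe:

```yaml
# Deployment strategy fragment for zero-downtime rollouts (illustrative values).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take the old pod down early
      maxSurge: 1         # start one replacement pod alongside it
```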

Deployment Workflow

Build image
Docker image built locally or in CI, tagged and pushed to the registry.
Update manifest
k3s deployment YAML updated with new image tag. Change committed to version control.
Apply to cluster
kubectl apply -f deployment.yaml rolls out the new pod. k3s handles scheduling and health checks.
NGINX routes traffic
Ingress rules forward requests to the new service. Host-based routing keeps services isolated.
Verify
Check pod status with kubectl get pods and confirm live routing via NGINX logs.
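The five steps above can be condensed into a runbook sketch. The registry, service name, namespace, and tags are placeholders, and the `sed` edit stands in for however the manifest is actually updated:

```shell
# Illustrative runbook; registry, names, and tags are placeholders.
docker build -t registry.local/example-service:1.2.1 .
docker push registry.local/example-service:1.2.1

# Point the manifest at the new tag and record the change.
sed -i 's|example-service:1.2.0|example-service:1.2.1|' deployment.yaml
git commit -am "Deploy example-service 1.2.1"

# Roll out and verify.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/example-service -n apps
kubectl get pods -n apps
tail -f /var/log/nginx/access.log   # confirm live traffic reaches the new pod
```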

Tech Stack

☸️
k3s · Kubernetes
🐧
Ubuntu Server · Host OS
🌐
NGINX · Reverse proxy / ingress
📦
Docker · Container builds
⚙️
Systemd · Service management
🔒
UFW · Firewall