Engineering Guide

The Hard Way: Deploying Production Kubernetes on Bare Metal

By DevOps Team · 20 min read · Updated: Feb 2026

Running Kubernetes on Virtual Machines (EC2/Droplets) introduces a hypervisor tax: virtualization overhead can cost roughly 10-15% of your CPU cycles. For high-performance workloads, running K8s directly on Bare Metal is the most direct way to unlock the full potential of your hardware.

Prerequisites

Unlike managed services (EKS/GKE), here you are the master of your network. We will assume you have provisioned 3 Bare Metal servers from FORESTER CREDO LIMITED:

  • node-1 (Master): AMD EPYC, 64GB RAM
  • node-2 (Worker): AMD EPYC, 128GB RAM
  • node-3 (Worker): AMD EPYC, 128GB RAM
  • OS: Ubuntu 24.04 LTS

Step 1: Preparing the Kernel

Kubernetes requires specific kernel modules for bridging network traffic. Run this on all nodes:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
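Loading `br_netfilter` alone is not enough: the kernel must also be told to pass bridged traffic through iptables and to forward IPv4 packets, or pod-to-pod traffic will silently drop. A sketch of the sysctl settings the upstream kubeadm documentation recommends, persisted to a `/etc/sysctl.d/k8s.conf` file (the filename is our choice; any `.conf` under that directory works):

```shell
# Persist the networking sysctls Kubernetes expects (run on all nodes)
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply immediately, without a reboot
sudo sysctl --system
```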

Step 2: Installing Containerd

We will use `containerd` as our CRI (Container Runtime Interface). It is lighter than the full Docker Engine, which itself runs on top of containerd anyway.

# Install containerd (the containerd.io package is published in Docker's apt
# repository, so that repo and its GPG key must already be configured)
sudo apt-get update && sudo apt-get install -y containerd.io

# Generate the default config, then switch the cgroup driver to systemd
# (the default config ships with SystemdCgroup = false, which conflicts
# with the kubelet on systemd-based distros like Ubuntu 24.04)
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
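Step 3 invokes `kubeadm`, which has not been installed yet. A minimal sketch, assuming the upstream pkgs.k8s.io apt repository and pinning to the v1.30 minor line (adjust the version to taste); swap is also disabled here, since the kubelet refuses to start while swap is on:

```shell
# Disable swap now and on future boots (kubelet requirement)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Add the upstream Kubernetes apt repo (v1.30 line shown as an example)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl  # prevent accidental version skew
```

Run this on all three nodes; the workers need `kubeadm` and `kubelet` for the join step later.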

Step 3: Initializing the Control Plane

On the Master Node only, initialize the cluster using `kubeadm`. Note that we specify a pod network CIDR for Calico.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint="10.0.0.5:6443"

This command will output a `kubeadm join` token. Save this! You will need it to join the worker nodes.
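Before `kubectl` works on the master, copy the admin kubeconfig into place (kubeadm prints these exact commands at the end of `init`); then run the join command on each worker. The `<token>` and `<hash>` below are placeholders for the values printed by your own `kubeadm init`, not real values:

```shell
# On the master: point kubectl at the cluster admin credentials
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On node-2 and node-3: join the cluster using the saved token
sudo kubeadm join 10.0.0.5:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```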

Step 4: Networking (CNI)

Bare metal networking is tricky. We recommend Calico for its BGP capabilities, allowing you to advertise pod IPs directly to your top-of-rack router.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
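The tigera-operator manifest installs only the operator itself; Calico is not actually deployed until an Installation custom resource is applied. A sketch using the stock `custom-resources.yaml` for the same release (its default pod CIDR, 192.168.0.0/16, matches the one passed to `kubeadm init` above), followed by a quick health check:

```shell
# Deploy Calico via the operator's default Installation resource
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

# Watch the calico pods come up, then confirm all three nodes report Ready
watch kubectl get pods -n calico-system
kubectl get nodes -o wide
```

Until the CNI is running, nodes stay in `NotReady`; this is expected, not a failure of the earlier steps.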

Why Bare Metal?

When you run this setup on FORESTER CREDO LIMITED hardware, you get:

  • Direct NVMe Access: Database pods (PostgreSQL/Redis) issue I/O against locally attached NVMe drives over PCIe, with no virtual block device layer in between, keeping storage latency in the microsecond range.
  • No "Steal Time": You never wait for a neighbor VM to finish its task. All CPU cores are 100% yours.

Conclusion

Managing your own Kubernetes cluster is complex, but the performance gains for specific workloads are undeniable. If you prefer to focus on code rather than `etcd` backups, consider our Managed Kubernetes service.

Need a Cluster Ready to Go?

We can provision a fully managed, hardened Bare Metal Kubernetes cluster for you in under 10 minutes.

View Managed Plans · Rent Bare Metal