Proxmox Networking Patterns — Complete Tutorial
A progressive lab guide from raw Linux bridges to production-grade multi-cluster networking. Each exercise builds on the previous one, ending with a Kubernetes multi-cluster topology that mirrors real CAPI deployments.
Requirements: Proxmox VE 7+ · Linux / FRR / Cilium · 6 Exercises
Table of Contents
- Linux Bridge · OVS · Proxmox SDN
- VLAN Segmentation
- VXLAN Overlay Networks
- BGP with FRR
- Network Namespaces & CNI Primitives
- Multi-Cluster Networking Lab
01 — Linux Bridge · OVS · Proxmox SDN
Understand the three networking layers available in Proxmox and when to use each.
Theory
Proxmox gives you three distinct ways to connect VMs to networks. Linux Bridge is the default: a simple L2 switch implemented in the kernel. It's reliable, well-understood, and sufficient for most cases. Open vSwitch (OVS) adds programmable flow tables, port mirroring, and VXLAN tunneling at the interface level — this is what OpenStack Neutron uses under the hood. Proxmox SDN is a management layer built on top of either bridges or OVS that brings a declarative API, VNets, zones, and EVPN routing via FRR.
The conceptual leap matters: Linux Bridge operates at L2 in a single host; OVS adds programmability and multi-host L2; SDN adds L3 routing and network lifecycle management across your cluster.
Architecture Comparison
| Mode | Description |
|---|---|
| Linux Bridge (default) | Kernel-native, zero config overhead. VMs connect like ports on a dumb switch. |
| Open vSwitch (advanced) | Programmable via OpenFlow. Port mirroring, QoS, VXLAN tunnels. Used by OpenStack Neutron/OVN. |
| Proxmox SDN (cluster) | Built-in since PVE 7. Zones, VNets, subnets. Backed by Linux bridges or OVS. |
Step 1 — Inspect your existing Linux Bridges
# Show all bridges and their members
brctl show
# More detailed: see the bridge in ip link context
ip -d link show type bridge
# See which VMs tap interfaces are attached
bridge link show
Step 2 — Create a second isolated bridge for experiments
Add a new bridge in /etc/network/interfaces that has no upstream NIC — useful as an internal lab network where VMs can talk to each other without leaving the host.
# Add to /etc/network/interfaces
auto vmbr1
iface vmbr1 inet static
address 10.10.0.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
# Apply without rebooting
ifreload -a
Note: Set VMs to use vmbr1 for their network interface. They'll be isolated from the outside world but can reach each other and the host at 10.10.0.1.
Step 3 — Install Open vSwitch
OVS is not installed by default on Proxmox. Install it and create your first OVS bridge alongside the existing Linux bridge.
apt install openvswitch-switch -y
# Create an OVS bridge
ovs-vsctl add-br ovsbr0
# Verify
ovs-vsctl show
# Add a port (e.g. a second NIC if available)
ovs-vsctl add-port ovsbr0 enp2s0
# List all OVS bridges and ports
ovs-vsctl list-br
ovs-vsctl list-ports ovsbr0
Step 4 — Enable Proxmox SDN and create a Simple Zone
In the Proxmox web UI, navigate to Datacenter → SDN. Start with a Simple zone (a standard Linux bridge managed by SDN) and add a VNet to it.
# Via pvesh CLI (alternative to UI)
pvesh create /cluster/sdn/zones \
--zone lab-zone \
--type simple
pvesh create /cluster/sdn/vnets \
--vnet lab-net \
--zone lab-zone
# Apply the SDN config
pvesh set /cluster/sdn
# Verify generated bridge
ip link show lab-net
Tip: After applying, Proxmox generates a Linux bridge named after your VNet. Attach VMs to this VNet from their network settings in the UI.
Step 5 — Try OVS Flow Tables (traffic inspection)
This is where OVS gets interesting. You can view and insert flow rules that govern how packets are forwarded — the foundation of SDN controllers.
# View the current flow table (by default: normal forwarding)
ovs-ofctl dump-flows ovsbr0
# Insert a rule: drop all ARP from a specific MAC
ovs-ofctl add-flow ovsbr0 \
"dl_type=0x0806,dl_src=52:54:00:aa:bb:cc,action=drop"
# Mirror all traffic to a monitoring port
ovs-vsctl add-port ovsbr0 mirror0 -- set Interface mirror0 type=internal
ovs-vsctl -- --id=@p get Port mirror0 \
  -- --id=@m create Mirror name=m0 \
     select-all=true output-port=@p \
  -- set Bridge ovsbr0 mirrors=@m
02 — VLAN Segmentation
Segment traffic across multiple isolated networks on a single physical interface.
Theory
VLANs (802.1Q) let you carry multiple isolated L2 networks over a single physical cable. Tagged frames carry a VLAN ID (1-4094); untagged frames belong to the native VLAN. A trunk port carries multiple VLANs (tagged); an access port carries one VLAN (untagged, towards a VM or end device).
In a real Kubernetes cluster deployment, this is how you separate management traffic, storage replication traffic, and workload traffic — each gets a dedicated VLAN, preventing broadcast storms and enabling QoS policies per segment. CAPI and OpenStack use exactly this model for their internal networks.
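The 4094 figure above can be sanity-checked with a line of arithmetic (nothing Proxmox-specific here): the 802.1Q tag reserves 12 bits for the VLAN ID, and the standard reserves IDs 0 and 4095.

```shell
# 802.1Q VLAN ID field is 12 bits wide
total=$(( 1 << 12 ))      # 4096 raw values
usable=$(( total - 2 ))   # IDs 0 and 4095 are reserved
echo "raw=$total usable=$usable"   # raw=4096 usable=4094
```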
VLAN-Aware Bridge Topology
Warning: Make sure your upstream switch port (or home router port) is configured as a trunk if you want tagged traffic to leave the host. For a pure single-Proxmox-box lab, you don't need a real switch — the bridge handles it internally.
Step 1 — Enable VLAN-awareness on the bridge
Edit /etc/network/interfaces to mark vmbr0 as VLAN-aware. This lets the bridge process 802.1Q tags instead of treating them as unknown traffic.
auto vmbr0
iface vmbr0 inet static
address 192.168.1.10/24
gateway 192.168.1.1
bridge-ports eth0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094 # allow all VLAN IDs
ifreload -a
Step 2 — Create three VMs and assign them to different VLANs
In the Proxmox UI, edit each VM's network device. Set the "VLAN Tag" field to 10, 20, or 30 respectively. This adds the tag to the tap interface connecting the VM to the bridge.
# Verify VLAN assignments on the bridge
bridge vlan show
# Expected output shows tap interfaces tagged with their VLANs:
# tap100i0 10 PVID Egress Untagged
# tap101i0 20 PVID Egress Untagged
# tap102i0 30 PVID Egress Untagged
Step 3 — Configure IPs inside each VM
Each VM should have an IP in its designated subnet. VMs in the same VLAN can ping each other; VMs in different VLANs cannot (yet) — that's the isolation working correctly.
# On VM in VLAN 10 (k8s-cp)
ip addr add 10.0.10.2/24 dev eth0
ip link set eth0 up
# On VM in VLAN 20 (k8s-wk1)
ip addr add 10.0.20.2/24 dev eth0
# Test: ping within same VLAN (should work)
ping 10.0.10.3 # from 10.0.10.2, another VLAN 10 VM
# Test: ping across VLANs (should FAIL — no router yet)
ping 10.0.20.2 # from 10.0.10.2
Step 4 — Add inter-VLAN routing on the Proxmox host
Create sub-interfaces on the bridge for each VLAN. This turns the Proxmox host into a router, enabling controlled inter-VLAN traffic.
# Add to /etc/network/interfaces
auto vmbr0.10
iface vmbr0.10 inet static
address 10.0.10.1/24
auto vmbr0.20
iface vmbr0.20 inet static
address 10.0.20.1/24
auto vmbr0.30
iface vmbr0.30 inet static
address 10.0.30.1/24
ifreload -a
# Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# Now cross-VLAN pings work (via the host as gateway)
# Set gateway on each VM: 10.0.10.1, 10.0.20.1, etc.
03 — VXLAN Overlay Networks
Build L2 tunnels over an L3 network — the foundation of every container CNI plugin.
Theory
VXLAN (Virtual eXtensible LAN) encapsulates L2 Ethernet frames inside UDP packets (port 4789). This lets you create a virtual L2 network that spans multiple physical hosts connected only by L3 routing. Each VXLAN network is identified by a 24-bit VNI (VXLAN Network Identifier), giving you up to 16 million isolated segments — far more than 4094 VLANs.
This is exactly what Flannel (VXLAN backend), Cilium (in VXLAN mode), and Calico use to build pod networks. The VTEP (VXLAN Tunnel Endpoint) is the local IP that sends and receives encapsulated traffic. Understanding VXLAN manually first makes the CNI layer far less magical.
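Two numbers from the paragraphs above can be derived directly (a back-of-the-envelope sketch): the size of the 24-bit VNI space, and the ~50 bytes of encapsulation overhead that later forces a 1450-byte overlay MTU.

```shell
# VNI is 24 bits -> number of isolated segments
echo "VNIs: $(( 1 << 24 ))"   # VNIs: 16777216

# Encapsulation overhead on the underlay (IPv4, no VLAN tag):
# outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN header 8
overhead=$(( 14 + 20 + 8 + 8 ))
echo "overhead: $overhead bytes"                            # overhead: 50 bytes
echo "overlay MTU on 1500-byte underlay: $(( 1500 - overhead ))"   # 1450
```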
VXLAN Encapsulation — Two Proxmox VMs
Step 1 — Spin up two lightweight VMs
Create two Alpine Linux or Debian VMs in Proxmox, both connected to vmbr1 (internal bridge from Exercise 1). Assign them IPs on the 10.10.0.0/24 underlay network.
# VM-A: 10.10.0.2/24 — VM-B: 10.10.0.3/24
# Verify underlay connectivity first — from VM-A:
ping 10.10.0.3 # must succeed before VXLAN setup
Step 2 — Create the VXLAN interface on VM-A
The vxlan device is the local VTEP. It knows the remote VTEP IP (VM-B) and the VNI to use.
# On VM-A (10.10.0.2)
ip link add vxlan10 type vxlan \
id 10 \
dstport 4789 \
remote 10.10.0.3 \
local 10.10.0.2 \
dev eth0
ip addr add 172.20.0.1/24 dev vxlan10
ip link set vxlan10 up
# Verify
ip -d link show vxlan10
Step 3 — Create the VXLAN interface on VM-B
Mirror the setup on VM-B with reversed local/remote addresses and a different overlay subnet.
# On VM-B (10.10.0.3)
ip link add vxlan10 type vxlan \
id 10 \
dstport 4789 \
remote 10.10.0.2 \
local 10.10.0.3 \
dev eth0
ip addr add 172.20.1.1/24 dev vxlan10
ip link set vxlan10 up
Step 4 — Test and observe the encapsulation
Ping across the tunnel and capture packets on the underlay interface to see VXLAN encapsulation in action.
# From VM-A, ping VM-B's overlay address
ping 172.20.1.1
# On VM-A, capture underlay to see encapsulation
tcpdump -i eth0 -n udp port 4789 -v
# You'll see packets like:
# IP 10.10.0.2.PORT > 10.10.0.3.4789: VXLAN, flags [I], vni 10
# IP 172.20.0.1 > 172.20.1.1: ICMP echo
# Check MTU — VXLAN adds 50 bytes of overhead
ip link show vxlan10 # MTU should be ~1450
Warning: MTU matters. VXLAN encapsulation adds ~50 bytes. If your underlay MTU is 1500, set the vxlan10 MTU to 1450 to avoid fragmentation — the same issue CNI plugins handle automatically.
Step 5 — Upgrade to FDB-based discovery
The static remote IP works for 2 nodes. For more, manage the Forwarding Database (FDB) manually — exactly what a CNI control plane does.
# Create VXLAN without static remote (learning mode)
ip link add vxlan10 type vxlan \
id 10 dstport 4789 \
local 10.10.0.2 dev eth0
# Manually add remote VTEP entries in FDB
bridge fdb append 00:00:00:00:00:00 dev vxlan10 dst 10.10.0.3
bridge fdb append 00:00:00:00:00:00 dev vxlan10 dst 10.10.0.4
# This is what kube-proxy / Cilium does in the control plane
bridge fdb show dev vxlan10
04 — BGP with FRR
Advertise routes between cluster nodes the way Cilium BGP Control Plane does.
Theory
BGP (Border Gateway Protocol) is the routing protocol of the Internet, but it's increasingly used inside Kubernetes clusters. Cilium's BGP Control Plane uses BGP to advertise pod CIDRs and LoadBalancer IPs to upstream routers, enabling bare-metal load balancing without a cloud provider. FRR (Free Range Routing) is the open-source routing suite that both Cilium and Proxmox SDN use underneath.
In this exercise, you'll set up a simple iBGP (internal BGP, same AS number) topology with a route reflector, then advertise a "pod CIDR" from one node and verify another node learns the route. This is the exact pattern Cilium uses.
BGP Topology — Route Reflector Pattern
Step 1 — Install FRR on all VMs
FRR (Free Range Routing) is the successor to Quagga. Install it on your router VM and your node VMs.
# On Debian/Ubuntu VMs
curl -s https://deb.frrouting.org/frr/keys.gpg | \
gpg --dearmor > /usr/share/keyrings/frr.gpg
echo "deb [signed-by=/usr/share/keyrings/frr.gpg] \
https://deb.frrouting.org/frr $(lsb_release -sc) frr-stable" \
> /etc/apt/sources.list.d/frr.list
apt update && apt install frr frr-pythontools -y
# Enable BGP daemon
sed -i 's/bgpd=no/bgpd=yes/' /etc/frr/daemons
systemctl restart frrStep 2 — Configure the Route Reflector (router-vm)
The route reflector accepts iBGP sessions from all nodes and re-advertises (reflects) their routes to all other peers — avoiding the O(n²) full mesh problem.
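The scaling win is easy to quantify (illustrative arithmetic only, not part of the FRR config): a full iBGP mesh needs a session between every pair of routers, while a route reflector needs just one session per client.

```shell
nodes=50
mesh=$(( nodes * (nodes - 1) / 2 ))  # every pair peers directly: n(n-1)/2
rr=$nodes                            # each node peers with the reflector only
echo "full mesh: $mesh sessions, route reflector: $rr sessions"
# full mesh: 1225 sessions, route reflector: 50 sessions
```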
# /etc/frr/frr.conf on router-vm
frr defaults traditional
hostname router-vm
router bgp 65000
bgp router-id 10.10.0.1
bgp cluster-id 10.10.0.1
# Define each node as a neighbor
neighbor 10.10.0.2 remote-as 65000
neighbor 10.10.0.3 remote-as 65000
neighbor 10.10.0.4 remote-as 65000
address-family ipv4 unicast
# Enable route reflection for all peers
neighbor 10.10.0.2 route-reflector-client
neighbor 10.10.0.3 route-reflector-client
neighbor 10.10.0.4 route-reflector-client
exit-address-familyStep 3 — Configure each node to advertise its pod CIDR
Each node connects to the route reflector and announces its assigned pod subnet. In Cilium BGP mode, this config is generated automatically from CiliumBGPPeeringPolicy.
# /etc/frr/frr.conf on node-1 (10.10.0.2)
frr defaults traditional
hostname node-1
router bgp 65000
bgp router-id 10.10.0.2
neighbor 10.10.0.1 remote-as 65000 # route reflector
address-family ipv4 unicast
network 172.16.1.0/24 # advertise pod CIDR
exit-address-family
# Add a local dummy route so BGP has something to advertise
ip link add dummy0 type dummy
ip addr add 172.16.1.1/24 dev dummy0 # .1, so the ping test below has a target
ip link set dummy0 upStep 4 — Verify BGP sessions and route propagation
Use vtysh (FRR's interactive CLI) to inspect sessions and routes — like kubectl but for your routing layer.
# Enter FRR interactive shell
vtysh
# Check BGP neighbors
show bgp neighbors
# See BGP routing table
show bgp ipv4 unicast
# On node-2, verify it learned node-1's route
show ip route 172.16.1.0/24
# Should show: B> 172.16.1.0/24 via 10.10.0.2
# Test reachability
ping -I 172.16.2.1 172.16.1.1 # from node-2 pod CIDR to node-1
Tip: The B> prefix in the route table means the route was learned via BGP and selected as the best path. This is exactly the output you'd see on a physical router connected to a Cilium-managed cluster.
05 — Network Namespaces & CNI Primitives
Manually replicate what a CNI plugin does when setting up a pod network.
Theory
Every container runtime isolates network using Linux network namespaces — a kernel feature that gives each namespace its own interfaces, routing table, and iptables rules. When a pod starts, the CNI plugin: creates a new netns, creates a veth pair, moves one end into the pod netns, assigns an IP, and connects the other end to a bridge or directly to the host. You'll do all of this manually to demystify the process.
This exercise also covers iptables/nftables rules for pod-level network policy — which is what Calico and the iptables backend of Cilium operate on. After this, Cilium's eBPF path will feel like a natural evolution of the same concept.
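The sequence above can be condensed into a single script. This is a sketch, not a real CNI plugin: the run() wrapper only prints each command while DRY_RUN=1 (the default), and the real commands require root. The names (pod-1, cni0, 10.244.0.0/24) match the exercise steps that follow.

```shell
#!/bin/sh
# Minimal bridge-CNI-style pod attach. DRY_RUN=1 prints commands instead
# of executing them; set DRY_RUN=0 and run as root to apply for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

attach_pod() {  # attach_pod <netns> <ip/cidr> <host-veth> <pod-veth>
  run ip netns add "$1"
  run ip link add "$3" type veth peer name "$4"
  run ip link set "$4" netns "$1"      # pod end into the namespace
  run ip link set "$3" master cni0     # host end onto the bridge
  run ip link set "$3" up
  run ip netns exec "$1" ip addr add "$2" dev "$4"
  run ip netns exec "$1" ip link set "$4" up
  run ip netns exec "$1" ip route add default via 10.244.0.1
}

attach_pod pod-1 10.244.0.2/24 veth0a veth0b
```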
Manual Pod Network Setup — veth pair + bridge
Step 1 — Create two network namespaces
These simulate two pods. Each gets its own isolated network stack.
ip netns add pod-1
ip netns add pod-2
# List them
ip netns list
# Execute commands inside a namespace
ip netns exec pod-1 ip link # only sees loopback by default
Step 2 — Create a bridge and veth pairs
The bridge simulates the CNI bridge plugin. Each veth pair connects a namespace to the bridge — one end in the namespace (like eth0 in a pod), one end on the bridge.
# Create bridge
ip link add cni0 type bridge
ip addr add 10.244.0.1/24 dev cni0
ip link set cni0 up
# Veth pair for pod-1
ip link add veth0a type veth peer name veth0b
ip link set veth0b netns pod-1 # move one end into pod-1
ip link set veth0a master cni0 # attach host end to bridge
ip link set veth0a up
# Veth pair for pod-2
ip link add veth1a type veth peer name veth1b
ip link set veth1b netns pod-2
ip link set veth1a master cni0
ip link set veth1a up
Step 3 — Assign IPs and bring up interfaces inside namespaces
# Inside pod-1
ip netns exec pod-1 ip addr add 10.244.0.2/24 dev veth0b
ip netns exec pod-1 ip link set veth0b up
ip netns exec pod-1 ip link set lo up
ip netns exec pod-1 ip route add default via 10.244.0.1
# Inside pod-2
ip netns exec pod-2 ip addr add 10.244.0.3/24 dev veth1b
ip netns exec pod-2 ip link set veth1b up
ip netns exec pod-2 ip link set lo up
ip netns exec pod-2 ip route add default via 10.244.0.1
# Test pod-to-pod connectivity
ip netns exec pod-1 ping 10.244.0.3
Step 4 — Apply Network Policy with iptables
Simulate a NetworkPolicy that denies traffic from pod-2 to pod-1 on port 80 — what Calico or the iptables backend does for you automatically.
# Drop traffic from pod-2 (10.244.0.3) to pod-1 port 80
iptables -I FORWARD \
-s 10.244.0.3 -d 10.244.0.2 \
-p tcp --dport 80 \
-j DROP
# Verify: start a listener in pod-1
ip netns exec pod-1 nc -lp 80 &
# This should be blocked (timeout)
ip netns exec pod-2 nc -w 2 10.244.0.2 80
# Remove the rule
iptables -D FORWARD \
-s 10.244.0.3 -d 10.244.0.2 \
-p tcp --dport 80 -j DROP
Note: Cilium replaces iptables with eBPF programs attached to the veth interfaces for much better performance, but the logical model is identical — you're just doing it by hand here.
Step 5 — Enable external access via NAT
Allow pods to reach the outside world — exactly what kube-proxy's masquerade rule does for pod traffic leaving the cluster.
# Enable forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# Masquerade outbound traffic from pod CIDR
iptables -t nat -A POSTROUTING \
-s 10.244.0.0/24 ! -d 10.244.0.0/24 \
-j MASQUERADE
# Test from pod-1
ip netns exec pod-1 ping 8.8.8.8
06 — Multi-Cluster Networking Lab
Build a full two-cluster topology mirroring a real CAPI multi-region deployment.
Theory
This is where all previous exercises converge. You'll create two isolated k3s clusters (simulating CAPI-provisioned clusters), separated by a virtual router running FRR with BGP. Each cluster runs Cilium with BGP Control Plane enabled. Cilium will advertise pod CIDRs and LoadBalancer IPs to the FRR router, which distributes them to the other cluster — enabling direct cross-cluster pod routing.
This is functionally equivalent to a multi-region Cluster API setup where each cluster lives in a different OpenStack tenant or availability zone, connected by a transit network. It also sets the foundation for Cilium Cluster Mesh, which adds cross-cluster service discovery on top of this routing layer.
Full Lab Topology
Step 1 — Set up the network topology
Use the VLAN setup from Exercise 2. Create VLAN 10 for Cluster A and VLAN 20 for Cluster B, with the router-vm having a leg in each VLAN.
# router-vm: two NICs, one in VLAN 10, one in VLAN 20
# In Proxmox VM config:
# net0: vmbr0, tag=10 → eth0: 10.10.10.1/24
# net1: vmbr0, tag=20 → eth1: 10.10.20.1/24
# Cluster A VMs: VLAN tag 10, IPs 10.10.10.2, 10.10.10.3
# Cluster B VMs: VLAN tag 20, IPs 10.10.20.2, 10.10.20.3
# On router-vm: enable IP forwarding
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
Step 2 — Deploy k3s on each cluster
Install k3s without its default CNI (Flannel) and with custom pod/service CIDRs so the two clusters don't overlap.
# On cp-a (10.10.10.2) — Cluster A
curl -sfL https://get.k3s.io | sh -s - server \
--flannel-backend=none \
--disable-network-policy \
--cluster-cidr=10.42.0.0/16 \
--service-cidr=10.96.0.0/16 \
--disable=traefik \
--node-ip=10.10.10.2
# On cp-b (10.10.20.2) — Cluster B
curl -sfL https://get.k3s.io | sh -s - server \
--flannel-backend=none \
--disable-network-policy \
--cluster-cidr=10.43.0.0/16 \
--service-cidr=10.97.0.0/16 \
--disable=traefik \
--node-ip=10.10.20.2
Step 3 — Install Cilium with BGP Control Plane enabled
# Install Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --remote-name-all \
"https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz"
tar xf cilium-linux-amd64.tar.gz -C /usr/local/bin
# Install Cilium on Cluster A (run with KUBECONFIG pointing to cp-a)
cilium install \
--version 1.15.0 \
--set tunnel=vxlan \
--set bgpControlPlane.enabled=true \
--set k8sServiceHost=10.10.10.2 \
--set k8sServicePort=6443
# Verify
cilium status --wait
Step 4 — Configure Cilium BGP peering with FRR router
Create a CiliumBGPPeeringPolicy that tells each cluster to peer with the FRR route reflector and advertise its pod CIDR and LoadBalancer IPs.
# cluster-a-bgp.yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
name: cluster-a-bgp
spec:
nodeSelector:
matchLabels: {}
virtualRouters:
- localASN: 65000
exportPodCIDR: true
neighbors:
- peerAddress: "10.10.10.1/32" # router-vm eth0
peerASN: 65000
serviceSelector:
matchExpressions:
- key: somekey
operator: NotIn
values: [""] # select all LB services
kubectl apply -f cluster-a-bgp.yaml
# /etc/frr/frr.conf on router-vm
router bgp 65000
bgp router-id 10.10.10.1
# Cluster A nodes
neighbor 10.10.10.2 remote-as 65000
neighbor 10.10.10.3 remote-as 65000
# Cluster B nodes
neighbor 10.10.20.2 remote-as 65000
neighbor 10.10.20.3 remote-as 65000
address-family ipv4 unicast
neighbor 10.10.10.2 route-reflector-client
neighbor 10.10.10.3 route-reflector-client
neighbor 10.10.20.2 route-reflector-client
neighbor 10.10.20.3 route-reflector-client
# Advertise routes between the two VLANs
redistribute connected
exit-address-family
Step 6 — Verify cross-cluster pod routing
# On router-vm: verify both cluster's routes are known
vtysh -c "show ip route"
# Should show:
# B>* 10.42.0.0/16 via 10.10.10.2 (Cluster A pod CIDR)
# B>* 10.43.0.0/16 via 10.10.20.2 (Cluster B pod CIDR)
# Deploy a pod in Cluster A
kubectl --context=cluster-a run test-a --image=alpine \
--command -- sleep 3600
# Get its IP
kubectl --context=cluster-a get pod test-a -o wide
# From Cluster B, ping Cluster A pod IP directly
# First create a pod in Cluster B to ping from
kubectl --context=cluster-b run test-b --image=alpine \
--command -- sleep 3600
kubectl --context=cluster-b exec -it test-b -- ping <cluster-a-pod-ip>
Tip: If this works, you have a functioning multi-cluster routed network — the same foundation used in production CAPI multi-region setups. The next step is adding Cilium Cluster Mesh on top for cross-cluster service discovery and identity-aware policy.
Step 7 (Bonus) — Enable Cilium Cluster Mesh
Cluster Mesh adds a control plane overlay (etcd-based) that synchronizes service endpoints and identities between clusters, enabling cross-cluster Service access by DNS name.
# Enable Cluster Mesh on both clusters
cilium clustermesh enable --context cluster-a
cilium clustermesh enable --context cluster-b
# Connect the two clusters
cilium clustermesh connect \
--context cluster-a \
--destination-context cluster-b
# Verify mesh status
cilium clustermesh status --context cluster-a
# Annotate a service as global (visible across clusters)
kubectl --context=cluster-a annotate svc nginx-a \
service.cilium.io/global="true"
# Now from Cluster B, nginx-a is reachable by its ClusterIP
# and Cilium handles load balancing across both clusters
What You Built
| Exercise | Skill |
|---|---|
| 🔌 Bridges | Linux Bridge, OVS, and Proxmox SDN zones — the three layers of virtual networking in a PVE cluster. |
| 🏷 VLANs | VLAN-aware bridge with segmented management, workload, and storage networks — the production Kubernetes network model. |
| 🌐 VXLAN | Manual overlay tunnels with encapsulation visible via tcpdump — the primitive every CNI plugin uses. |
| 📡 BGP | FRR route reflector with pod CIDR advertisement — the exact mechanism Cilium BGP Control Plane automates. |
| 📦 Namespaces | Manual CNI simulation: netns, veth pairs, bridge, iptables — what every container runtime does at pod start. |
| 🔗 Multi-Cluster | Two k3s + Cilium clusters peered via BGP with optional Cluster Mesh — a working CAPI multi-region analog. |