Setting up a homelab on a Raspberry Pi cluster with k3s
Introduction
Since beginning my career in DevOps, my focus has largely been on cloud services (SaaS, PaaS, IaaS), and I haven’t had the opportunity to engage deeply with traditional sysadmin tasks or server administration. I believe that setting up my own homelab and hosting applications will enhance my DevOps skills. This hands-on experience will allow me to explore storage, networking, and container orchestration at a more fundamental level, beyond the cloud abstractions I am used to. This article covers the hardware I chose for my homelab and the configuration needed to deploy k3s on a Raspberry Pi cluster.
The hardware
Below is a list of components I selected for my homelab setup. After researching budget-friendly options online, I decided on a configuration that includes at least three servers, a minimal number of power cables, external storage, and straightforward networking.
| Component | Model | Quantity |
|---|---|---|
| Raspberry Pi | 4B, 8GB RAM | 1 |
| Raspberry Pi | 4B, 4GB RAM | 2 |
| Raspberry Pi PoE+ HAT | - | 3 |
| Raspberry Pi cluster rack | GeeekPi Raspberry Pi 4 Cluster Case | 1 |
| Micro SD card | Lexar 64GB | 3 |
| HDD | 4TB Seagate Expansion | 1 |
| SSD | 500GB Kingston M.2 NVMe | 3 |
| SSD adapter | UGREEN M.2 NVMe SSD Enclosure | 3 |
| Router | TP-Link TL-WR802N N300 WLAN Nano Router | 1 |
| Network switch | TP-Link LS105G 5-Port Switch with 4-Port PoE+ | 1 |
| Ethernet cables | - | 4 |
RPi (Raspberry Pi) was an easy choice due to its powerful specs for its size and low power consumption. It has plenty of ports to meet all my future requirements, and there is a huge community of RPi enthusiasts who are very helpful with troubleshooting. I went for the model 4B because it was slightly cheaper than the latest model 5 and still has great specs. The 8GB RPi serves as the master node while the two 4GB RPis are the worker nodes. I also bought 3 micro SD cards because the RPis initially boot from a micro SD card (the boot drive is moved to an external SSD later). To save space and ensure efficient cooling, I placed all the RPis in a cluster case, which took me a bit of time to assemble:

I wanted to avoid using individual power supplies for each RPi to reduce the clutter of wires and plugs. Instead, I opted to power the RPis through Power over Ethernet (PoE). Since RPis don’t natively support PoE, I had to purchase three PoE+ HATs. The cluster, equipped with the RPis and PoE HATs, looks like this:

For PoE I needed a switch with at least 3 PoE ports. Since each RPi needs around 15W to run reliably and power an external SSD, I picked a PoE switch with a total output of up to 65W spread across 4 ports, leaving plenty of headroom for the 3 × 15W = 45W the cluster draws. The switch is connected to a nano router running in client mode: it connects to the ISP router via WiFi and to the switch via Ethernet. I wanted to store all future logs and media files on an external HDD, which is mounted on an old RPi 3 and shared via NFS with the rest of the cluster nodes. The external SSDs are used as boot drives since they have a longer lifespan and much better IOPS than the micro SD cards. Later, they will also serve as OSDs in a Rook Ceph cluster, which will provide Persistent Volume Claims (PVCs) for my k3s cluster. The final setup looks like this (could not get the wiring tidier than that 😄):

Installing the OS
Initially, all the RPis boot from the Micro SD cards. I used the very popular Raspberry Pi Imager for writing the boot files to the SD cards. The OS of choice was Raspberry Pi OS Lite, a Debian Bookworm flavour which is optimized for the RPi firmware and used by many RPi enthusiasts. As part of the RPi Imager configuration I also added my public SSH key since I will need SSH access to the servers.
After I inserted the SD cards and the RPis booted, I partitioned the external SSDs into 3 partitions: /dev/sda1, /dev/sda2 and /dev/sda3. The first partition serves as the /boot partition, the second is mounted on the root path, and the third is left empty and unformatted to be used later as a Ceph OSD. I used the rpi-clone tool to move the /boot and / contents from the micro SD card to the first and second partitions on the SSD.
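For illustration, the layout can be created with parted along these lines; the partition boundaries below are placeholders and should be adjusted to mirror the SD card:

# Inspect the SD card layout that the first two SSD partitions should mirror
lsblk -o NAME,SIZE,FSTYPE /dev/mmcblk0

# Create three partitions on the SSD (this wipes /dev/sda; boundaries are illustrative)
sudo parted --script /dev/sda \
  mklabel msdos \
  mkpart primary fat32 4MiB 516MiB \
  mkpart primary ext4 516MiB 60GiB \
  mkpart primary 60GiB 100%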
I checked the existing partitions on the micro SD card and mirrored their sizes on the first 2 partitions of my SSDs. This was necessary because I wanted rpi-clone to sync only the first 2 partitions and leave the third one untouched. I also needed to create the matching filesystems on those partitions with the commands below:
mkfs -t vfat -F 32 /dev/sda1
mkfs.ext4 /dev/sda2
This scenario is explained well here. To copy the boot files, I ran the command below:
sudo rpi-clone -l /dev/sda
It detected that the first 2 partitions on my SSD matched the filesystems and capacities of the partitions on the SD card and copied the files across. The -l flag was added so that /etc/fstab and /boot/firmware/cmdline.txt were updated to mount the new SSD partitions at boot time and to point the boot process at the new partitions.
After rebooting the RPi and checking the mounted volumes, I could see the new /boot and / mounts backed by the first 2 partitions on the SSD. The contents of /etc/fstab and /boot/firmware/cmdline.txt also referenced the PARTUUIDs of the SSD partitions.
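For reference, a quick sanity check after the reboot looks roughly like this:

# / and /boot/firmware should now be mounted from the SSD
lsblk -o NAME,SIZE,MOUNTPOINT /dev/sda

# The PARTUUIDs referenced at boot should belong to /dev/sda1 and /dev/sda2
cat /boot/firmware/cmdline.txt
grep PARTUUID /etc/fstab
sudo blkid /dev/sda1 /dev/sda2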
Installing k3s
Once the OS was installed and SSH access was set up, I chose to manage my Raspberry Pis with Ansible. The first step was to install a container orchestration tool. I opted for k3s because of its simplicity and lightweight nature, which lets me concentrate more on deploying applications rather than managing a fully fledged Kubernetes cluster.
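I won’t cover the whole Ansible setup here, but an inventory along the lines of the sketch below is enough for the roles that follow. The hostnames and IP addresses are placeholders; the important detail is that the master host is literally named master, so the hostvars['master'] lookups used later resolve correctly.

# inventory.yml - illustrative only; adjust hostnames, IPs and the SSH user
all:
  children:
    control_plane:
      hosts:
        master:
          ansible_host: 192.168.1.10   # the 8GB RPi (placeholder IP)
    workers:
      hosts:
        worker-1:
          ansible_host: 192.168.1.11   # placeholder IP
        worker-2:
          ansible_host: 192.168.1.12   # placeholder IP
  vars:
    ansible_user: pi                   # the user configured in RPi Imager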
There are 2 main prerequisites for installing k3s on Raspberry Pi OS: installing iptables and enabling cgroups. I used the tasks below as part of my k3s role to achieve that:
- name: Install necessary packages
  package:
    name:
      - iptables
    state: present

- name: Append kernel parameters to /boot/firmware/cmdline.txt
  lineinfile:
    path: /boot/firmware/cmdline.txt
    regexp: '^(.*?)(\s+cgroup_enable=cpuset\s+cgroup_memory=1\s+cgroup_enable=memory)?$'
    line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
    state: present
    backrefs: yes
    insertafter: BOF
  register: cmdline_change

- name: Reboot the servers
  tags: reboot
  become: yes
  shell: "reboot"
  async: 1
  poll: 0
  when: cmdline_change.changed == true

- name: Wait for the reboot to complete if there was a change.
  wait_for_connection:
    connect_timeout: 10
    sleep: 5
    timeout: 300
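Once the nodes come back up, the change can be verified with something like:

# The cpuset and memory controllers should be listed as enabled (last column = 1)
grep -E 'cpuset|memory' /proc/cgroups

# The appended parameters should show up in the running kernel command line
cat /proc/cmdline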
I decided to go with the default k3s settings, which include the Traefik ingress controller and the Flannel CNI. I wanted the installation to be as straightforward as possible and to get a working cluster in the shortest amount of time. These are the tasks I used to install k3s on the master node:
---
- name: Install k3s master node
  shell: |
    curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -

- name: Fetch K3s node token
  shell: "cat /var/lib/rancher/k3s/server/token"
  register: k3s_token

- name: Set k3s token fact
  set_fact:
    k3s_token: "{{ k3s_token.stdout }}"

- name: Get Master Hostname
  command: hostname
  register: master_hostname

- name: Set master hostname fact
  set_fact:
    master_hostname: "{{ master_hostname.stdout }}"
On the worker nodes:
---
- name: Install k3s worker node
  shell: |
    curl -sfL https://get.k3s.io | K3S_URL=https://{{ hostvars['master']['master_hostname'] }}:6443 K3S_TOKEN={{ hostvars['master']['k3s_token'] }} sh -
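Once the workers have joined, the cluster state can be checked from the master node with something like:

# All three nodes should eventually report Ready; the 8GB RPi carries the
# control-plane role and the two 4GB RPis join as agents
sudo k3s kubectl get nodes -o wide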
Configuring NFS
The external HDD is mounted on a fourth RPi, a model 3 with 1GB RAM. This RPi serves as an NFS server which shares the mounted HDD with the k3s cluster. Raspberry Pi OS already ships with the nfs-common package, so no extra configuration was needed on the client side. The configuration of the NFS server was done with the tasks below:
---
- name: Ensure the drive is unmounted
  shell: |
    umount /dev/sda1 || true
  ignore_errors: yes

- name: Create a single partition on the USB drive
  parted:
    device: /dev/sda
    number: 1
    label: gpt
    state: present
    part_start: 0%
    part_end: 100%

- name: Format the new partition as ext4
  filesystem:
    fstype: ext4
    dev: /dev/sda1

- name: Create the mount directory
  file:
    path: /mnt/share
    state: directory

- name: Mount the partition on /mnt/share
  ansible.posix.mount:
    path: /mnt/share
    src: /dev/sda1
    fstype: ext4
    opts: defaults
    state: mounted

- name: Install NFS server packages
  apt:
    name: nfs-kernel-server
    state: present
    update_cache: yes

- name: Configure NFS export
  lineinfile:
    path: /etc/exports
    line: "/mnt/share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)"
    state: present

- name: Restart NFS server to apply changes
  systemd:
    name: nfs-kernel-server
    state: restarted
Checking the available NFS exports from the client side showed the following:
showmount -e 192.168.1.82
/mnt/share 192.168.1.0/24
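Before wiring the share into Kubernetes, it can also be mounted manually from any node as a quick test (the mount point below is just an example):

# Temporarily mount the export and confirm it is writable
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 192.168.1.82:/mnt/share /mnt/nfs-test
sudo touch /mnt/nfs-test/hello
sudo umount /mnt/nfs-test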
For now, this NFS share will be utilized to create Kubernetes Persistent Volumes for applications deployed in the cluster. In the future, a Rook Ceph cluster will be set up using the SSDs to serve as the storage pool.
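As a sketch of how that will look, a PersistentVolume backed by this share can be declared roughly like this (the name and capacity are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-media                  # placeholder name
spec:
  capacity:
    storage: 500Gi                 # placeholder size; NFS does not enforce it
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.82           # the RPi 3 exporting /mnt/share
    path: /mnt/share

A matching PersistentVolumeClaim can then bind to this volume and be mounted by the applications deployed on the cluster.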
Conclusion
I hope this post has provided you with a solid foundation for getting started with your homelab. It offers a great starting point for exploring day-to-day sysadmin tasks and automation while stepping away from the cloud-managed services we’re all familiar with. I plan to host a streaming application on this k3s cluster and set up a Rook Ceph cluster to fully utilize the unused partitions on my SSDs. I’ll keep you updated on the progress. Until next time, happy experimenting and learning!