Setting up Longhorn on a Raspberry Pi k3s cluster
Introduction
My article on setting up a Ceph cluster with Rook on a Raspberry Pi k3s cluster turned out to be my most popular post. That’s not surprising, considering the process involves several steps and often requires patience and digging through related GitHub issues for troubleshooting. While Rook can be a solid choice for administrators familiar with managing Ceph storage clusters, it does come with a steep learning curve and considerable complexity. As an operator, Rook wraps around Ceph, automating many of the intricate tasks that cluster admins previously handled manually. One drawback of using Ceph is its requirement for unformatted partitions or devices on each node, which makes it less suitable for setups with tighter resource constraints.
Longhorn is a more approachable choice for those seeking a simpler solution for distributed storage in a Kubernetes environment. It works with existing partitions already mounted on the nodes, making setup more straightforward. Unlike Rook, Longhorn includes a user-friendly CLI (introduced in version 1.8.0, replacing an earlier script—more on that later) that clearly flags any issues with installation prerequisites. Upon successful setup, it also creates a default StorageClass, allowing users to provision PVCs without extra configuration. The Longhorn deployment introduces fewer supporting resources compared to Rook, and its UI is more polished and intuitive than what Ceph offers.
In this post we will run through the installation steps, expose the UI, and demo restoring a Postgres database from a PVC created from a restored Longhorn backup.
Installation
The installation steps are fairly straightforward, however I will still outline the main prerequisites for completeness. To minimize the risk of running into setup issues, use the longhornctl CLI, which clearly flags any missing requirements (a preflight check example follows the list below). Initially I used the environment check shell script, which failed to flag that the dm_crypt kernel module was not loaded on my nodes and led to installation errors.
Here are the main prerequisites that should be met on each node:
- Service iscsid enabled and running
- NFS4 is supported
- Package nfs-common is installed
- Package open-iscsi is installed
- Package cryptsetup is installed
- Package dmsetup is installed
- Kernel module dm_crypt is loaded
- Kubernetes version >= 1.25
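A minimal sketch of running the preflight check with longhornctl, assuming the v1.8.0 binary from the longhorn/cli GitHub releases and arm64 nodes (the release asset URL is an assumption; adjust it for your version and architecture):
# Download the longhornctl binary for arm64 (assumed release asset name)
curl -L -o longhornctl https://github.com/longhorn/cli/releases/download/v1.8.0/longhornctl-linux-arm64
chmod +x longhornctl
# Check that every node meets the installation prerequisites listed above
./longhornctl check preflight
# Optionally install missing packages and load the required kernel modules
./longhornctl install preflight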
Given that all the above requirements are met, the helm installation takes around 5 minutes to complete. The helm release also deploys the UI and exposes it via a ClusterIP service named longhorn-frontend.
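For completeness, the installation itself follows the standard Longhorn chart workflow (release and namespace names below match the chart defaults):
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace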
I decided to enable https for the ingress exposing the UI and came across mkcert, a very neat tool to create locally-trusted development certificates. The steps needed to generate the certificates and create the TLS secret for the ingress are highlighted below:
mkcert -install
mkcert longhorn.local
kubectl -n longhorn-system create secret tls longhorn-tls \
--cert=longhorn.local.pem \
--key=longhorn.local-key.pem
I am using Traefik as the ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  name: longhorn-ui
  namespace: longhorn-system
spec:
  ingressClassName: traefik
  rules:
    - host: longhorn.local
      http:
        paths:
          - backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - longhorn.local
      secretName: longhorn-tls
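After applying the manifest, the ingress can be checked with kubectl (the file name is an assumption):
kubectl apply -f longhorn-ui-ingress.yaml
kubectl -n longhorn-system get ingress longhorn-ui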
I added an entry to the /etc/hosts file on my local machine to map longhorn.local to the IP address assigned to the LoadBalancer behind the ingress, created with MetalLB.
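The entry looks like the following, where the IP address is an assumption; use the external IP of the Traefik LoadBalancer service reported by kubectl get svc:
192.168.1.240   longhorn.local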
The Longhorn UI is then accessible locally:

I deployed Longhorn on a 3-node k3s cluster, each node with a 100 GB partition mounted on the root filesystem. I did not modify the default Reserved Storage value, which is 30% of the total disk capacity; it can be changed in the helm values by configuring the storageReservedPercentageForDefaultDisk setting.
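A minimal sketch of overriding that setting through the chart's defaultSettings block (the 20% figure is just an illustrative value):
# values.yaml (partial), passed to helm install/upgrade with -f values.yaml
defaultSettings:
  # percentage of each disk kept free and excluded from volume scheduling
  storageReservedPercentageForDefaultDisk: 20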
Demo
I recently made changes to my url-shortener app to allow restoring the Postgres database from an existing PVC. This was a good opportunity to demo PV/PVC provisioning and backup in Longhorn.
The application is dependent on both Redis and Postgres. Both the database and the cache are provisioned using operators and configured to provision volumes dynamically through PVCs that use the longhorn StorageClass.
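For illustration, a claim requesting a dynamically provisioned Longhorn volume looks roughly like this; the name and size are assumptions, and in practice the operators create the equivalent claims on my behalf:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: url-shortener-postgres-1   # hypothetical name matching the claim shown further below
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi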
After deploying the application helm release, the Volume tab in the Longhorn UI looks like this:

The bound PVCs are confirmed by the kubectl command:
$ kubectl get pvc
NAME                                         STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
url-shortener-postgres-1                     Bound    pvc-b345e0f5-76fc-4761-8a21-9066980275db   5Gi        RWO            longhorn       <unset>                 7m30s
url-shortener-redis-url-shortener-redis-0    Bound    pvc-b8d25f39-a549-43bf-844f-baa7801f4bb2   5Gi        RWO            longhorn       <unset>                 7m33s
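The volumes also show up as Longhorn custom resources in the longhorn-system namespace (output omitted):
kubectl -n longhorn-system get volumes.longhorn.io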
The application can write to the database as expected:
$ curl -X POST "http://k8s.local/url-shortener/shorten" \
-H "Content-Type: application/json" \
-d '{"url": "http://google.com"}'
{"short_url":"http://k8s.local/url-shortener/qiI5wX"}
It is time to test the backup functionality in Longhorn and see whether previously created short URLs remain accessible on a new database deployment configured to recover from an existing PVC backed by a restored volume.
A prerequisite for creating backups is configuring a backup target. I mounted an external HDD on one of my k8s nodes and configured the nfs-server service on it. The configuration in /etc/exports allowed read/write access to all IPs from my local subnet.
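The export entry looks roughly like this; the mount path and subnet are assumptions specific to my setup:
# /etc/exports: expose the backup directory to the local subnet with read/write access
/mnt/backup    192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
After editing the file, running exportfs -ra reloads the exports. Longhorn accepts NFS backup targets in the form nfs://<server-ip>:/<export-path>, for example nfs://192.168.1.50:/mnt/backup.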
I modified the default backup target to point to the NFS share endpoint:

Once the default Backup Target transitioned into the Available status, I could back up my Postgres volume:

The application helm release was uninstalled and the corresponding PVs were automatically removed from Longhorn.
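This is the expected behaviour for dynamically provisioned volumes, since the default longhorn StorageClass ships with the Delete reclaim policy; it can be verified with:
kubectl get storageclass longhorn -o jsonpath='{.reclaimPolicy}'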
The backup was restored to a volume:

Before I deploy the application and point the Postgres db operator to the PVC from which it can restore the database, I need to create a PV/PVC from the restored volume:

A new application helm release was deployed with Postgres configured to recover from the postgres-restored PVC. When shortening the same URL as before, the existing database entry is detected:
$ curl -X POST "http://k8s.local/url-shortener/shorten" \
-H "Content-Type: application/json" \
-d '{"url": "http://google.com"}'
{"message":"URL http://google.com is already shortened.","short_url":"http://k8s.local/url-shortener/qiI5wX"}
Conclusion
Given the relative ease of setup, better user experience, and lower resource requirements, I find Longhorn to be a more practical and user-friendly choice over Rook with Ceph—especially for lightweight clusters or users who prefer a less complex storage solution. I hope you found the demo on restoring a PostgreSQL database from a PVC helpful and enjoyable. It demonstrated how Longhorn can simplify data recovery in Kubernetes environments, making it a solid option for stateful workloads.