My lab at home for PostgreSQL things
From sandbox to sand castle!
- tags: #Kubernetes #Operators #Tools
- published
- reading time: 8 minutes
Why a home lab?
I don’t know whether “home labs” are becoming less and less frequent, because people tend to use cloud machines or services, or if, on the contrary, they are (re?)gaining interest, because people like me are still interested in understanding how things work.
This became even more true when I started putting my hands on Kubernetes for real, and this time, really deep.
My company has a well-known offering for PostgreSQL in Kubernetes. Once I felt comfortable enough with the Ansible-based products, and before learning the PostgreSQL in Kubernetes products, I decided to learn Kubernetes itself, and I saw no better way than installing it from scratch. I can say that even today I’m very happy to have achieved that, because it gives me insights one can’t get by only doing things with kubectl or k9s.
I don’t have much hardware at home; I rather try to minimize it to reduce my energy footprint. It’s good for the planet, of course, and I did it for that. Also, electricity prices have risen like crazy in France for some years now: in about 3 years, the bill simply doubled…
My main project for 2025 will be to have my full Kubernetes stack running on Raspberry Pis, switching on the old Dell in the garage only when I need it :-)
Proxmox
So my home runs on Proxmox. It’s still version 7.4-18, and I’m currently looking at upgrades.
I use it for everything, work and personal projects! I’ve counted about 40 VMs, plus some templates I created to instantiate VMs faster when I need them. This has become less and less necessary, as I’m more and more into Kubernetes daily, and my current k3s cluster is just perfectly fine for learning and testing purposes.
Proxmox is installed on an old Dell PowerEdge T410, running 2 Intel Xeon E5506 CPUs @ 2.13GHz, so 8 cores in total. I’ve been able to recycle some RAM modules I was given here and there, so it has 112 GB of RAM. It has 2 arrays of disks:
- 3 x SATA 7.2K 500 GB and
- 3 x SAS 15K 500 GB
The main OS hosting Proxmox is Debian bullseye. Once everything was installed, I had one storage of 1.2 TB on the SAS disks and another of 850 GB on the SATAs.
Needless to say, it’s a bit short for everything I do, but everything “non-live” or “less live”, like backups, resides on the QNAP I have at home, where a few TB are available, mounted as a nas-nfs-storage on the Proxmox.
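Mounting the NAS as storage on Proxmox is a one-liner with pvesm; this is just a sketch, the server IP and export path below are placeholders, not my real values:

```shell
# Declare the QNAP NFS export as a Proxmox storage (placeholder values).
pvesm add nfs nas-nfs-storage \
    --server 192.168.2.50 \
    --export /share/proxmox \
    --content backup,iso,images
```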
QNAP NAS
I really recommend having a NAS somewhere with lots of storage you can mount over NFS if you want to play with PostgreSQL more seriously, simply because backups often need 2 or 3 times the volume of your databases.
I use this NFS storage for both bare-metal and VM deployments of PostgreSQL (based on Crunchy Postgres and deployed with Ansible), as well as Crunchy Postgres for Kubernetes and sometimes CloudNativePG.
On the Kubernetes side, I use the NFS subdir external provisioner, and it just works as expected:
$ k get sc nfs-client
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 437d
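For reference, installing the provisioner is short; a minimal sketch with Helm, where the NFS server IP and export path are placeholders, not my real values:

```shell
# Install the NFS subdir external provisioner from its official chart.
helm repo add nfs-subdir-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-client \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.2.50 \
    --set nfs.path=/exports/k8s \
    --set storageClass.name=nfs-client
```

Every PVC bound to that StorageClass then gets its own subdirectory on the share, which makes cleanup easy.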
TrueNAS
I also have a dedicated box for TrueNAS, installed directly on a pair of (old) 3 TB drives I had, in RAID 1. This is a former Linux workstation I used some 10 years ago, with an old Intel i7 920 CPU @ 2.67GHz and 32 GB of RAM. It was still working, so I recycled it into a TrueNAS server.
I use TrueNAS exclusively to store data from Kubernetes, when I need to play (mostly testing, but also preparing work for customers) with democratic-csi drivers.
So with the default local-path, the NFS one and the CSI ones, I have different options to match my needs:
$ k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
freenas-iscsi-csi org.democratic-csi.iscsi Delete Immediate true 110d
freenas-nfs-csi org.democratic-csi.nfs Delete Immediate true 110d
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 646d
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 437d
I use TrueNAS to demo PostgreSQL snapshots, and to do some tests around them. If you’re not familiar with this, I recommend reading Brian Pace’s excellent blog post PostgreSQL Snapshots and Backups with pgBackRest in Kubernetes.
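To give an idea of what those tests look like, here is a minimal sketch of a PVC on the iSCSI class plus a snapshot of it. The VolumeSnapshotClass name is an assumption on my side: check yours with `kubectl get volumesnapshotclass`.

```shell
# Create a small PVC on the democratic-csi iSCSI class, then snapshot it.
# The VolumeSnapshotClass name below is an assumption, adjust to yours.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: freenas-iscsi-csi
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pgdata-test-snap
spec:
  volumeSnapshotClassName: freenas-iscsi-csi
  source:
    persistentVolumeClaimName: pgdata-test
EOF
```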
Different, specialized VMs
I have 3 distinct VMs running different versions of Ansible, on both RPM- and DEB-based distros (namely Red Hat 8, Ubuntu 22.04 and 24.04).
3 others are dedicated to personal projects I’ll blog about another day. One of those 3 is a bit hybrid: since it already had nginx installed, I added a docker-compose file to run a registry service, so I have a local registry of images to be used by my k3s cluster (see later in this article).
So when I need to build an image to be used in CPK, I can push it to that registry and it’s immediately available in my k3s.
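The registry part is really nothing fancy; a minimal sketch, assuming Docker is available and using a made-up hostname (registry.lab.local) instead of my real one:

```shell
# Run the standard Docker registry image, listening on port 5000.
docker run -d --name registry --restart=always -p 5000:5000 registry:2

# Build, tag and push an image so k3s can pull it from the LAN:
docker build -t registry.lab.local:5000/my-cpk-image:dev .
docker push registry.lab.local:5000/my-cpk-image:dev
```

Note that if the registry is not served over trusted TLS, k3s also needs it declared in /etc/rancher/k3s/registries.yaml on the nodes.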
A bunch of VMs are used to deploy, work on, test… whatever is related to Crunchy Postgres deployed with Ansible:
- 2 VMs here
- 12 VMs here, for a certain project I had to do
- 5 VMs here for another one…
I back those up, install/restore, depending on the needs, just because I’m a bit limited in terms of disk space on the Proxmox server.
k3s VMs
5 VMs are dedicated to my Kubernetes cluster. I chose k3s because I liked the “lightweight” install. I really recommend the Small homelab K8s cluster on Proxmox VE documentation for such an install! It saved me a lot of time!
$ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3s-node-01 Ready worker 646d v1.31.3+k3s1 10.0.0.11 192.168.2.101 Debian GNU/Linux 11 (bullseye) 5.10.0-33-amd64 containerd://1.7.23-k3s2
k3s-node-02 Ready worker 646d v1.31.3+k3s1 10.0.0.12 192.168.2.102 Debian GNU/Linux 11 (bullseye) 5.10.0-33-amd64 containerd://1.7.23-k3s2
k3s-node-03 Ready worker 642d v1.31.3+k3s1 10.0.0.13 192.168.2.103 Debian GNU/Linux 11 (bullseye) 5.10.0-33-amd64 containerd://1.7.23-k3s2
k3s-node-04 Ready worker 544d v1.31.3+k3s1 10.0.0.14 192.168.2.104 Debian GNU/Linux 11 (bullseye) 5.10.0-33-amd64 containerd://1.7.23-k3s2
k3s-server-01 Ready control-plane,master 646d v1.31.3+k3s1 10.0.0.2 192.168.2.100 Debian GNU/Linux 11 (bullseye) 5.10.0-33-amd64 containerd://1.7.23-k3s2
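For completeness, the k3s install itself is pleasantly short; a minimal sketch assuming one server and several agents (the IP matches my server above, the token is a placeholder):

```shell
# On the server node:
curl -sfL https://get.k3s.io | sh -

# Read the join token from the server:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node, join using the server URL and that token:
curl -sfL https://get.k3s.io | \
    K3S_URL=https://10.0.0.2:6443 K3S_TOKEN=<node-token> sh -
```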
How this is managed, monitored, upgraded, etc., could be another dedicated article here!
MinIO
I have one VM dedicated to MinIO, used in my daily work on Crunchy Postgres for Kubernetes (aka “CPK”).
Many customers use S3 storage for pgBackRest, which CPK relies on for a lot of things. Since MinIO is perfectly S3-compatible and free, I use that instead. It works very (very) well. So well that I see more and more of our customers move from S3 to MinIO, when it’s possible for them, of course.
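To illustrate, pointing pgBackRest at MinIO mostly comes down to an S3-type repo configuration; this is just a sketch with placeholder bucket, endpoint and credentials (in CPK itself, the equivalent settings go through the PostgresCluster manifest and a Secret rather than this file):

```ini
# /etc/pgbackrest/pgbackrest.conf (excerpt, placeholder values)
[global]
repo1-type=s3
repo1-s3-bucket=pgbackrest
repo1-s3-endpoint=minio.lab.local:9000
repo1-s3-region=us-east-1
repo1-s3-key=ACCESS_KEY
repo1-s3-key-secret=SECRET_KEY
repo1-s3-uri-style=path
```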
A lot of everything
Of course, on the different VMs, there’s a bunch of things installed that I had to learn, in complete disorder; what comes to mind is:
- how to create subnets in Proxmox, to simulate many DCs in one lab… and manage dnsmasq and QEMU/KVM for all those VMs
- 5 VMs dedicated to building an etcd cluster, where I could test a lot of things around performance, and see how it can become a real problem when it is not working properly… especially if Patroni relies on it :-)
- backing up everything, everywhere, anytime… a challenge in itself, because backing up is one thing, but validating restores is probably even more important :-)
- docker, a.. lot.. of docker
- a lot of Go: read pgSimload (#1): why & what. This was a great idea, because when I have questions about CPK, I can go straight to the postgres-operator code and answer them myself
- a lot of Ansible. I used to play with it many (many) years ago, but playing wasn’t enough when your company uses it for customers, so I got back to reading books on it. One I’d recommend is Ansible: Up and Running.
- prometheus, grafana, postgres-exporter, sql-exporter, mainly because they are used in pgMonitor. I’ve also capitalized a lot on that knowledge: it helped me create my own personal projects (in the IoT sector, with Shelly Plus Plugs, an MQTT broker, etc.)
- old Linux tools, as always, there are so many… curl, scp, etc. You know them all, I guess! One I use daily now is probably jq, when it comes to decoding the JSON some API sends me when I curl it :-)
- newer tools like k9s: this tool is simply fantastic, and I recommend it very warmly! I discover something new with it almost every day. It is so powerful!
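As a tiny example of that jq habit (the JSON here is made up, of course, pretend it came back from a curl call):

```shell
# A made-up API response:
json='{"status":"ok","nodes":[{"name":"k3s-node-01","ready":true},{"name":"k3s-node-02","ready":false}]}'

# Extract a single field:
echo "$json" | jq -r '.status'
# prints: ok

# List only the nodes that are not ready:
echo "$json" | jq -r '.nodes[] | select(.ready | not) | .name'
# prints: k3s-node-02
```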
Conclusion
I realize that my home lab is a lot of things! I’m quite sure I forgot many things here, so I just plan to review this article from time to time!
During my last holidays, I was able to automate a lot of things, starting with upgrades. Even though all of this is 100% offline, at home, I really need to be up to date with everything, because customers are too, and mostly, I want to avoid bugs.
What’s next on my plate for the coming year, as a TODO list:
- upgrade Proxmox… or maybe move away from it. I don’t know yet.
- buy 4 more Raspberry Pi 5s to build another k3s cluster on them, and migrate everything, including TrueNAS and MinIO.
- continue learning!
I started 2025 by learning ArgoCD, thanks to Bob Pacheco’s excellent blog posts (starting with CI/CD with Crunchy Postgres for Kubernetes and Argo).
By “learning” I mean: install, configure, use, etc… Not just knowing the strict minimum needed to understand customers: that, I already had :-)
I must say that two and a half years ago, when I started to install Proxmox, I couldn’t imagine I would get this far with it, and learn this many things.
I really don’t regret never having stopped “doing things” in the Linux console to stay up to date on infrastructure tools, even when I was in charge of a company :-)