Ever since I got my AMD Athlon XP 2500+, I’ve been into overclocking. While my overclocking activities were limited at the time (as a student I couldn’t risk burning up my CPU or motherboard), I’ve made sure that ever since, none of my desktops ran at stock speeds. Even the trusty Intel 2500K that I’m writing this blog on still hums along at 4.4 GHz all-core.
Overclocking has always been a Windows thing though, and for good reason: in 2009 the Linux market share was only 0.6%, while Windows dominated with a 95% market share. With such a dominant OS, motherboard manufacturers focused fully on (usually terrible) software which allowed you to overclock and monitor your system without leaving Windows. The overclocking community didn’t stop there either; tools like 8rdavcore (apparently ported from Linux), SetFSB, MemSet, CPU-Tweaker and many more made it possible to overclock and tweak your system to the max. Combine that with monitoring software like HWiNFO, AIDA64, SpeedFan, CPU-Z and benchmarks like 3DMark, SiSoft Sandra and Cinebench, and it was clear: overclocking belonged to Windows.
Fast forward to 2025, and things have changed: Linux has a market share of 3% while Windows has dropped to 66%. OCCT is now also available on Linux, GreenWithEnvy makes it easier to overclock NVIDIA GPUs, and benchmarks like y-cruncher, 7-Zip and Geekbench run fine on Linux. But when it comes to graphical monitoring applications, we only have Psensor or xsensors. Both work fine, but there’s still room for improvement.
Xsensors and PSensor side by side
This is where I want to change a couple of things, and after this year’s release of Java 25 with its Foreign Function and Memory API, I can finally work in a language I love while using C libraries like libsensors, libcpuid, the NVIDIA management API and many more.
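As a taste of what the FFM API makes possible, here’s a minimal, self-contained sketch of the downcall pattern — binding libc’s getpid instead of a libsensors function so it runs anywhere, but the same SymbolLookup/downcallHandle steps apply to sensors_init and friends:

```java
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class FfmDemo {
    // Bind a C function once; for libsensors or NVML you'd use
    // SymbolLookup.libraryLookup("libsensors.so", arena) instead of
    // the native linker's default (libc) lookup.
    static int currentPid() throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle getpid = linker.downcallHandle(
                linker.defaultLookup().find("getpid").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_INT));
        return (int) getpid.invokeExact();
    }

    public static void main(String[] args) throws Throwable {
        System.out.println("pid = " + currentPid());
    }
}
```

No JNI glue code and no native build step: the whole binding is plain Java.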
After returning from Devoxx I decided to create a Linux alternative to Open Hardware Monitor, HWMonitor and HWiNFO, and that’s how lnxsense was born. It’s still in the early alpha stage, and what it can show depends heavily on what the underlying libraries can return (e.g. NVIDIA’s NVML doesn’t even have an option to get the hotspot temperature or the actual fan RPM). Even so, I’m already really happy with what it can do.
In its very early stage it supports (when running the back-end server as root):
CPU Frequencies (as reported by the Linux kernel)
CPU Utilization
Memory Utilization
Core temperatures
Intel requested VCore (the VID)
Intel Core multipliers
Intel Throttling reasons
Intel RAPL Power Management information like PP0, PP1 and Platform power limits and usage
NVIDIA Clocks, Utilization, Temperature and Fan speed (in %, because why would NVML expose the actual fan speed), P-state and current PCIe speed
SMART and NVMe log
Blockdevice IOPS and read/write speed
Remote monitoring using sockets
If you want to try it out, you can download a release version from Codeberg. Just be sure to read INSTALL.md: it’s still in early development, so it’s not a one-click experience and definitely not production-ready.
// 2025/12/15: I decided to rename the project from HWJinfo to lnxsense; it just makes more sense, doesn’t it?
So, I let my certificates expire (again), and thus I had to re-run all my Ansible playbooks to roll out my new self-signed certificates on all my servers. The reality was that a lot of my playbooks didn’t run or hadn’t survived the galaxy update I ran a couple of weeks before this happened.
The weirdest thing of all was that the Postgres role I use failed on an assert for a variable that 100% exists and had worked before.
TASK [robertdebock.postgres : assert | Test postgres_hba_entries] *************************************************************************************************************************************************************************************************************
fatal: [postgresql-01]: FAILED! => changed=false
assertion: postgres_hba_entries is defined
evaluated_to: false
msg: Assertion failed
Running an ansible.builtin.debug in a pre-task did confirm that the variable “did not exist”:
TASK [Debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [postgresql-01] =>
postgres_hba_entries: VARIABLE IS NOT DEFINED!
Even with the verbosity set to 6 there was no sign of anything being wrong. While debugging other variables, I noticed the same behavior when trying to output the value of postgres_listen_addresses: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}" while gathering facts was disabled.
As it turns out, if you use an undefined variable to build another variable, the resulting variable will simply not exist either, even if you only reference it somewhere inside a map like postgres_hba_entries. So in the example below, the non-existing DOES_NOT_EXIST variable results in the complete map missing from the environment.
postgres_hba_entries:
  - type: local
    database: all
    user: all
    method: peer
  - type: host
    database: all
    user: all
    address: 127.0.0.1/32
    method: ident
  - type: hostssl
    address: all
    database: "{{ DOES_NOT_EXIST }}"
    method: md5
    user: all
In some ways this is pretty good, because you don’t want to unknowingly roll out only half of your config. On the other hand, it’s pretty annoying that there’s absolutely no feedback about what is going on (even though I can come up with many reasons why it is so).
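If losing the whole map is not what you want, Jinja2’s default() filter is a way out: the derived variable stays defined even when the referenced variable is missing. A minimal sketch based on the map above:

```yaml
postgres_hba_entries:
  - type: hostssl
    address: all
    # With a fallback in place, an undefined DOES_NOT_EXIST no longer
    # makes the whole postgres_hba_entries map silently disappear.
    database: "{{ DOES_NOT_EXIST | default('all') }}"
    method: md5
    user: all
```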
I recently wanted to switch from Google Photos to Immich, and while doing so I stumbled across some difficulties adding the photos on my NAS as an external library. For the past 20+ years I have organized my library by hand without relying on any tools, so I did not want Immich to make any changes to my photo library; hence I mounted the Samba share as read-only.
If I try to add a folder from this share as an external library I get the following error: “Lacking read permissions for folder”
Disabling SELinux would fix the issue, but even if the instance is not publicly available, it’s still a bad idea to disable any security measures. So we need to tell SELinux it’s fine for the container to access the share. Usually this is done by appending :z to the volume:
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.ml.yml
    #   service: vaapi # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - /var/immich/media/:/usr/src/app/upload:z
      - /etc/localtime:/etc/localtime:ro
      - /mnt/photo:/usr/src/app/external:z
    env_file:
      - immich.env
    ports:
      - '2283:2283'
    depends_on:
      - database
    restart: always
    healthcheck:
      disable: false
But simply adding “:z” in the Docker compose file won’t work for two reasons:
The user does not have any root privileges to change the SELinux context
The filesystem is mounted read-only and changing the context is a write operation
Luckily, we can mount the SMB share with an SELinux context which allows the container to access the files:
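A sketch of what that looks like as an /etc/fstab entry — the share path and credentials file are assumptions for illustration; the context= option is the part that matters (container_file_t is the SELinux type that container processes are allowed to access):

```
# Read-only CIFS mount, pre-labeled so containers may read it
//nas/photo  /mnt/photo  cifs  ro,credentials=/etc/cifs-credentials,context=system_u:object_r:container_file_t:s0  0  0
```

Because the label is applied at mount time, no relabeling (and thus no write access) is needed on the share itself.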
Yesterday I wanted to do a system upgrade on my Manjaro box because there were 250+ packages that needed an update. There were quite a few conflicts that prevented me from upgrading, so I manually excluded the conflicting packages until pacman was happy, and I left my system to upgrade while I went to watch The Office.
An hour later I went to check the progress and was greeted with a “screen locker is broken” error message all over the screen, with instructions on how to fix it. After logging in on another TTY I got an error that flatpak was missing some libraries, and this error blocked the whole terminal. A hard reset later, I got nothing but a black screen. No services scrolling by, no splash screen, no kernel panic, not even the sound of my spinning rust stopping. My Manjaro installation was dead.
To fix this, I downloaded the latest Manjaro ISO and booted from it. Once in the live environment it was fairly easy to chroot into my dead system:
manjaro-chroot -a
The first thing people suggested was to update the mirrors and re-run the system update:
pacman-mirrors -f
pacman -Syyu
Unfortunately, pacman was also broken and gave me the following error message:
pacman: error while loading shared libraries: libicuuc.so.76: cannot open shared object file: No such file or directory
This is unfortunate, because without pacman I couldn’t re-install the missing library. So I headed to the package repository and downloaded the package straight from the mirror. After unpacking the archive in the live environment’s Downloads folder, I moved everything to my Manjaro installation (your target location may vary depending on how manjaro-chroot mounted your volumes).
After copying the files, pacman worked, but I couldn’t install the missing icu package because the files were already there. Telling pacman to overwrite the files anyway solved the issue.
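The overwrite step boils down to something like this — the exact package file name and download location are assumptions; use whatever version you pulled from the mirror:

```
pacman -U --overwrite '*' /root/Downloads/icu-*.pkg.tar.zst
```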
Unfortunately, after rebooting my system I still got nothing but a black screen. So once more I went to the live environment and chrooted into my system. At this point I started suspecting that something was wrong with my NVIDIA drivers, and after looking through the installed packages I quickly discovered that they weren’t even installed anymore. So I reinstalled them (depending on your GPU, you might need another version):
pacman -Sy nvidia-470xx-dkms
After ignoring the packagekit error, I rebooted the system and lo and behold, it was alive again.
So, what did we learn this weekend?
Don’t cherry-pick when doing system updates.
Make sure flatpak isn’t installed
NVIDIA drivers can really kill your system (if I remove my GPU and boot from the iGPU, I actually get the same black screen)
In my previous post I explored the usage of libvirt and CoreOS with the main goal of quickly setting up a Kubernetes cluster with k3s. Admittedly, I gave myself a lot of challenges, which led to me not being able to deploy a complete cluster, but only a single k3s node.
Sudo make me a VM
Having not had enough challenges, I wanted to play around with Makefiles, because I needed to run the same commands a lot with regularly changing parameters. That’s why I made this abomination of a Makefile. I know I could have written everything in a simple Bash script, but my goal is to learn things, and so I did. I can now spin up and destroy a three-node Kubernetes cluster using nothing but:
make download
PROJECT=kube1 make
PROJECT=kube2 make
PROJECT=kube3 make
# play around with kubernetes
PROJECT=kube1 make delete
PROJECT=kube2 make delete
PROJECT=kube3 make delete
If I need to ssh into one of the instances I can just do it with PROJECT=kube2 make ssh.
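For reference, a stripped-down sketch of the pattern — the recipe bodies here are assumptions, and the real (much longer) Makefile lives in the repo:

```make
PROJECT ?= kube1

.PHONY: default download delete ssh

download:
	# Fetch the Fedora CoreOS qemu image once, shared by all projects
	coreos-installer download -s stable -p qemu -f qcow2.xz --decompress

default:
	# Render the per-project Butane config and boot a VM from it
	butane --strict $(PROJECT).bu --output $(PROJECT).ign
	virt-install --name $(PROJECT) --import --noautoconsole \
		--qemu-commandline='-fw_cfg name=opt/com.coreos/config,file=$(CURDIR)/$(PROJECT).ign'

delete:
	virsh destroy $(PROJECT) || true
	virsh undefine $(PROJECT) --remove-all-storage

ssh:
	ssh core@$(PROJECT)
```

Because PROJECT defaults with ?=, a bare make operates on kube1, while PROJECT=kube2 make targets the second node.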
Setting up the nodes
The first node is a bit special since this one needs to initialize the cluster, so there’s a different Butane config file for this node:
variant: fcos
version: 1.6.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 XYZ"
      shell: /bin/bash
storage:
  files:
    - path: /etc/systemd/zram-generator.conf
      mode: 0644
      contents:
        inline: |
          # This config file enables a /dev/zram0 device with the default settings
          [zram0]
          zram-size = ram / 2
          compression-algorithm = lzo
    # Set vim as default editor
    # We use `zz-` as prefix to make sure this is processed last in order to
    # override any previously set defaults.
    - path: /etc/profile.d/zz-default-editor.sh
      overwrite: true
      contents:
        inline: |
          export EDITOR=vim
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: ${IGN_HOSTNAME}
    - path: /etc/vconsole.conf
      mode: 0644
      contents:
        inline: KEYMAP=be
    - path: /etc/rancher/k3s/config.yaml
      mode: 0666
      contents:
        inline: |
          write-kubeconfig-mode: "0644"
          node-label:
            - "server=first"
            - "something=amazing"
          cluster-init: true
          token: meh-123
          disable-apiserver: false
          disable-controller-manager: false
          disable-scheduler: false
    - path: /etc/NetworkManager/system-connections/ens2.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=enp1s0
          type=ethernet
          interface-name=enp1s0

          [ipv4]
          address1=${IGN_IP}/16,10.0.0.1
          dns=1.1.1.1
          may-fail=false
          method=manual
systemd:
  units:
    # Install tools as a layered package with rpm-ostree
    - name: rpm-ostree-install-tools.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer tools with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # We run before `zincati.service` to avoid conflicting rpm-ostree
        # transactions.
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # `--allow-inactive` ensures that rpm-ostree does not return an error
        # if the package is already installed. This is useful if the package is
        # added to the root image in a future Fedora CoreOS release as it will
        # prevent the service from failing.
        ExecStart=/usr/bin/rpm-ostree install -y --allow-inactive vim qemu-guest-agent htop
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target
    - name: install-k3s.service
      enabled: true
      contents: |
        [Unit]
        Description=Install kubernetes
        Wants=rpm-ostree-install-tools.service
        After=rpm-ostree-install-tools.service
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/sh -c '/usr/bin/curl -sfL https://get.k3s.io | sh -'
        ExecStart=/bin/systemctl enable k3s.service
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target
    - name: print-k3s.service
      enabled: true
      contents: |
        [Unit]
        Description=Print KubeInfo
        Wants=install-k3s.service
        After=install-k3s.service
        After=k3s.service

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/sh -c 'kubectl cluster-info | tee /etc/issue.d/33_kubectl.issue'

        [Install]
        WantedBy=multi-user.target
Once the first node is set up, we can deploy new agents with the following template:
variant: fcos
version: 1.6.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 XYZ"
      shell: /bin/bash
storage:
  files:
    - path: /etc/systemd/zram-generator.conf
      mode: 0644
      contents:
        inline: |
          # This config file enables a /dev/zram0 device with the default settings
          [zram0]
          zram-size = ram / 2
          compression-algorithm = lzo
    # Set vim as default editor
    # We use `zz-` as prefix to make sure this is processed last in order to
    # override any previously set defaults.
    - path: /etc/profile.d/zz-default-editor.sh
      overwrite: true
      contents:
        inline: |
          export EDITOR=vim
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: ${IGN_HOSTNAME}
    - path: /etc/vconsole.conf
      mode: 0644
      contents:
        inline: KEYMAP=be
    - path: /etc/rancher/k3s/config.yaml
      mode: 0666
      contents:
        inline: |
          node-label:
            - "server=agent"
            - "something=amazing"
          token: meh-123
    - path: /etc/NetworkManager/system-connections/ens2.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=enp1s0
          type=ethernet
          interface-name=enp1s0

          [ipv4]
          address1=${IGN_IP}/16,10.0.0.1
          dns=1.1.1.1
          may-fail=false
          method=manual
systemd:
  units:
    # Install tools as a layered package with rpm-ostree
    - name: rpm-ostree-install-tools.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer tools with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # We run before `zincati.service` to avoid conflicting rpm-ostree
        # transactions.
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/rpm-ostree install -y --allow-inactive vim qemu-guest-agent htop
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target
    - name: install-k3s.service
      enabled: true
      contents: |
        [Unit]
        Description=Install kubernetes
        Wants=rpm-ostree-install-tools.service
        After=rpm-ostree-install-tools.service
        # We run before `zincati.service` to avoid conflicting rpm-ostree
        # transactions.
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/sh -c '/usr/bin/curl -sfL https://get.k3s.io | K3S_URL=https://10.0.12.12:6443 K3S_TOKEN=meh-123 sh -'
        ExecStart=/bin/systemctl enable k3s-agent.service
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target
So, after running make a couple of times, we have a working three-node Kubernetes cluster.
Local config
Those who read my previous post probably noticed that I also challenged myself to use Nix, so let’s deviate a bit and set up kubectl locally using nix-shell.
First we’ll need to get our Kubernetes config into ~/.kube/admin.conf. The content of the file can be extracted from the first node with the command k3s kubectl config view --raw.
What’s important here is to make sure to change cluster.server from 127.0.0.1 to the IP of the first node (in my case 10.0.12.12).
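Assuming the first node is reachable over SSH, the copy-and-rewrite step can be sketched as:

```shell
# Pull the kubeconfig from the first node (the IP is my setup's address)
ssh core@10.0.12.12 'k3s kubectl config view --raw' > admin.conf
# Rewrite the loopback API server address to the node's real IP
sed -i 's|https://127.0.0.1:6443|https://10.0.12.12:6443|' admin.conf
mkdir -p ~/.kube && mv admin.conf ~/.kube/admin.conf
```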
Next we need to configure the nix-shell to not only install kubectl but also ensure that we use our admin.conf file when running kubectl (and for fun we’re also installing k9s).
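The shell.nix itself can be as small as this sketch (package attribute names as found in nixpkgs; the KUBECONFIG path matches the admin.conf location above):

```nix
{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  packages = [ pkgs.kubectl pkgs.k9s ];
  # Make every kubectl/k9s invocation in this shell use our cluster config
  shellHook = ''
    export KUBECONFIG=$HOME/.kube/admin.conf
  '';
}
```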
With the above config we can use kubectl from our host to manage the Kubernetes cluster in our VMs.
My first “real” application
To have at least something deployed in my cluster, I went for the Guestbook application. One caveat is that exposing the application requires some extra configuration in the form of an Ingress, which will be picked up by Traefik.
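As a sketch, such an Ingress could look like this — the service name and port assume the Guestbook tutorial’s front-end service, and ingressClassName: traefik matches the ingress controller bundled with k3s:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook
spec:
  ingressClassName: traefik
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # the Guestbook front-end Service
                port:
                  number: 80
```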