lnxsense, a system monitoring tool for Linux

Ever since I got my AMD Athlon XP 2500+, I’ve been into overclocking. While my overclocking activities were limited at the time (as a student I couldn’t risk burning up my CPU or motherboard), I’ve made sure ever since that none of my desktops run at stock speeds. Even the trusty Intel i5-2500K that I’m writing this blog on still hums along at 4.4 GHz all-core.

Overclocking has always been a Windows thing though, and for good reason: in 2009 the Linux market share was only 0.6%, while Windows dominated the market with a 95% share. With such a dominant OS, motherboard manufacturers focused fully on (usually terrible) software which allowed you to overclock and monitor your system without leaving Windows. The overclocking community didn’t stop there either: tools like 8rdavcore (apparently ported from Linux), SetFSB, MemSet, CPU-Tweaker and many more made it possible to overclock and tweak your system to the max. Add monitoring software like HWiNFO, AIDA64, SpeedFan, CPU-Z and benchmarks like 3DMark, SiSoft Sandra and Cinebench, and it was clear: overclocking belonged to Windows.

Fast forward to 2025 and things have changed: Linux has a market share of 3% while Windows has dropped to 66%. OCCT is now also available on Linux, GreenWithEnvy makes it easier to overclock NVIDIA GPUs, and benchmarks like y-cruncher, 7-Zip and Geekbench run fine on Linux. But when it comes to graphical monitoring applications, we only have Psensor or XSensors; both work fine, but there is still room for improvement.

A screenshot showing XSensors and Psensor side by side

This is where I want to change a couple of things, and after this year’s release of Java 25 and its Foreign Function & Memory API, I can finally work in a language I love while using C libraries like libsensors, libcpuid, the NVIDIA Management Library and many more.
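As a hedged illustration (not lnxsense’s actual code): with the FFM API a C function such as libsensors’ sensors_init(FILE *) can be called without any JNI glue. This sketch assumes Java 22 or newer and the common libsensors.so.5 soname; both are assumptions, not something lnxsense prescribes.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.SymbolLookup;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class SensorsProbe {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        SymbolLookup lib;
        try {
            // soname assumed: lm-sensors 3.x ships libsensors.so.5 on most distros
            lib = SymbolLookup.libraryLookup("libsensors.so.5", Arena.global());
        } catch (IllegalArgumentException notInstalled) {
            System.out.println("libsensors not found, skipping");
            return;
        }
        // int sensors_init(FILE *input); passing NULL loads the default config
        MethodHandle sensorsInit = linker.downcallHandle(
                lib.find("sensors_init").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_INT, ValueLayout.ADDRESS));
        int rc = (int) sensorsInit.invoke(MemorySegment.NULL);
        System.out.println("sensors_init returned " + rc);
    }
}
```

On recent JDKs this prints a native-access warning unless you pass --enable-native-access, but it runs fine.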

After returning from Devoxx I decided to create a Linux alternative to Open Hardware Monitor, HWMonitor and HWiNFO, and that’s how lnxsense was born. It’s still in an early alpha stage, and what it can show depends heavily on what the underlying libraries can return (e.g. NVIDIA’s NVML doesn’t even have an option to get the hotspot temperature or the actual fan RPM). Even so, I’m already really happy with what it can do.

lnxsense showing different metrics like cpu usage, power draw, GPU frequency.

In its very early stage it supports (when running the back-end server as root):

  • CPU Frequencies (as reported by the Linux kernel)
  • CPU Utilization
  • Memory Utilization
  • Core temperatures
  • Intel requested VCore (the VID)
  • Intel Core multipliers
  • Intel Throttling reasons
  • Intel RAPL Power Management information like PP0, PP1 and Platform power limits and usage
  • NVIDIA clocks, utilization, temperature and fan speed (in %, because why would NVML expose the actual fan speed), P-state and current PCIe speed
  • SMART and NVMe logs
  • Blockdevice IOPS and read/write speed
  • Remote monitoring using sockets

If you want to try it out, you can download a release version from Codeberg. Just be sure to read the INSTALL.md: it’s still in early development, so it’s not a one-click experience and definitely not production-ready.

// 2025/12/15: I decided to rename the project from HWJinfo to lnxsense, it just makes more sense, doesn’t it?

J-ExifTool v0.0.11

Today I’ve released version 0.0.11 of J-ExifTool. After more than 10 years, this release does not add any new functionality; it is mainly a long-overdue maintenance release:

  • Java 17: this version is built with and for Java 17 [BREAKING CHANGE]
  • A lot of boilerplate code was replaced with Lombok
  • General code cleanup
  • Eclipse configuration removed from git

The jar is not yet in the Maven repository because it doesn’t support my Bitbucket username.

The new jar (+ sources) can be downloaded from Bitbucket.

For the record only: v0.0.11 is commit c6d76be.

Slow performance with NamedParameterJdbcTemplate

Today I tried inserting 256 rows into a single, empty PostgreSQL table with only one index on it, using Spring’s NamedParameterJdbcTemplate. To my surprise, the single transaction took over 3 minutes to complete, over 500 ms per INSERT statement. To make things worse, the same inserts completed within a second during integration testing on an H2 database.

My first guess was that I had an issue with the TOAST tables, since the actual table has 28 columns and most of them are VARCHAR(256). As I didn’t find any issue with it, I continued my quest … right up to the point where I replaced all named parameters with hardcoded values and used an EmptySqlParameterSource instead. To my great surprise, this resulted in sub-second completion of all inserts.

So obviously, there had to be an issue with the NamedParameterJdbcTemplate, right? I fired up VisualVM to verify my idea and sampled the CPU time of all org.springframework classes:

The obvious pain point is the setNull() method of StatementCreatorUtils, and looking at the source code it’s quite obvious what’s going on: every time I set a null value in a statement, this method tries to find out which SqlType the null value should have, because I didn’t specify it myself.

I decided not to waste more time on this issue and just rewrote parts of my code. Instead of writing

source.addValue("myParam", null);

I now write

source.addValue("myParam", null,
     JDBCType.VARCHAR.getVendorTypeNumber());
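For the record, getVendorTypeNumber() simply returns the corresponding java.sql.Types constant, so nothing Spring-specific is involved; a quick check with the plain JDK:

```java
import java.sql.JDBCType;
import java.sql.Types;

public class VendorTypeDemo {
    public static void main(String[] args) {
        // getVendorTypeNumber() returns the matching java.sql.Types constant
        System.out.println(JDBCType.VARCHAR.getVendorTypeNumber());             // 12
        System.out.println(JDBCType.VARCHAR.getVendorTypeNumber() == Types.VARCHAR); // true
    }
}
```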


Et voilà, instant turbo-charged insert statements.

Adding Elasticsearch Filebeat to Docker images

One of the projects I’m working on uses a micro-service architecture. Every micro-service is shipped as a Docker image and executed on Amazon AWS using ECS. This basically means that a (somewhat) random number of instances of each micro-service can run on a random number of servers … and this for each of the roughly 15 micro-services we have.

In the best case this means that 15 Docker containers are running on 4 EC2 instances, and thus there are 15 log files to gather. To handle this we use the ELK stack: Elasticsearch, Logstash and Kibana. Our micro-services do not connect directly to the Logstash server; instead we use Filebeat to read the log files and send them to Logstash for parsing (as such, the load of processing the logs is moved to the Logstash server).

Filebeat configuration

Filebeat is configured using YAML files; the following is a basic configuration which uses a secured connection to Logstash (using certificates). Using the fields property we can inject additional parameters like the environment and the application (in this case the micro-service’s name). The multiline pattern makes sure that stack traces are sent to the server as a single event.

In this case Filebeat will monitor the /tmp/application*.log files (which is where we write our logs).

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/application*.log
  document_type: logback
  multiline.pattern: '^\d\d\d\d'
  multiline.negate: true
  multiline.match: after
fields:
  env: "%ENVIR%"
  app: "%APP%"
output.logstash:
  hosts: ["%STASH%"]
  ssl.certificate_authorities: ["/app/ca.pem"]
  ssl.certificate: "/app/cert.pem"
  ssl.key: "/app/key.pem"


The %ENVIR%, %APP% and %STASH% parts will be replaced later so that we can customize the logging for each environment and micro-service.

Installing filebeat in a Docker image

This is actually pretty straightforward. The following Dockerfile installs the Filebeat service; on startup of the container it runs entrypoint.sh with the command of your liking.

FROM openjdk:8u111-jdk
#
#
# Stuff to get the micro-service running (this is not important for this blog)
#
#

# Monitoring
RUN wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.0-amd64.deb \
 && dpkg -i filebeat-5.2.0-amd64.deb \
 && rm filebeat-5.2.0-amd64.deb

COPY filebeat.yml /etc/filebeat/filebeat.yml
COPY entrypoint.sh /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh

ENV APP unk
ENV ENVIR unk
ENV STASH your-server:5044
#because we have a secured connection to logstash
COPY ca.pem /app/ca.pem
COPY cert.pem /app/cert.pem
COPY key.pem /app/key.pem

CMD /bin/entrypoint.sh "java -jar /app/service.jar -server --spring.profiles.active=${PROFILE}"

A couple of notes: APP and ENVIR are configured to ‘unk’ here because this is a base image for other micro-services. Each micro-service has its own Dockerfile, which extends this image and configures the APP variable. The ENVIR variable is set during startup of the container by passing it as an environment variable.

Starting the service

Because Docker images don’t contain running services, we need to start the Filebeat service ourselves; that’s why we need the entrypoint.sh file. This script also alters the filebeat.yml file so that it’s configured with information about the environment and the micro-service.

#!/usr/bin/env bash
# Quote the sed expressions so values containing spaces don't break the command
sed -i -e "s/%STASH%/$STASH/g" /etc/filebeat/filebeat.yml
sed -i -e "s/%APP%/$APP/g" /etc/filebeat/filebeat.yml
sed -i -e "s/%ENVIR%/$ENVIR/g" /etc/filebeat/filebeat.yml
service filebeat start
echo "$*"
/bin/sh -c "$*"
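For illustration, assuming the container was started with ENVIR=prod, APP=orders and STASH=logstash.internal:5044 (hypothetical values), the sed calls above would turn the placeholder sections of filebeat.yml into:

```yaml
fields:
  env: "prod"
  app: "orders"
output.logstash:
  hosts: ["logstash.internal:5044"]
```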


Log contextual information to all log messages in Spring Boot using Logback’s MDC

Logging is sometimes one of the things that gets too little attention, but when production errors start to arrive you’ll surely want the maximum amount of information you can get. Most logging implementations in Java will give you the time, the name of the class (or to be more precise, the name of the logger), the name of the thread and a message, and this is actually the best a logging implementation can do by default.

Sometimes it’s interesting to add extra information to every message you log, for example the id of the user or the tenant’s id (in case of a multi-tenant application). It would be silly to manually append it to each log message because it’s tedious and error-prone (and you can’t reliably parse it for use in an ELK stack). To automate this we can use Logback’s (SLF4J’s) ‘Mapped Diagnostic Context’. Everything you put in the MDC can be used in the log pattern, and it’s comparable to a ThreadLocal (each incoming REST request will have different values).

For example:

MDC.put("userId", SecurityUtil.getUserId() == null ? "-1" : SecurityUtil.getUserId().toString());

would put the userId in the MDC and it can then be added to the log message using

%mdc{userId:--2}

The :- introduces the default value, -2 in this case, which will be logged in case the MDC entry is empty. I’ll explain later when this happens.
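As an aside, the “comparable to a ThreadLocal” part is easy to demonstrate with a tiny plain-JDK sketch (the SimpleMdc class and its -2 fallback are mine for illustration, not Logback’s API):

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleMdc {
    // Each thread sees its own map, just like Logback's MDC
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CTX.get().put(key, value); }
    static String get(String key) { return CTX.get().getOrDefault(key, "-2"); }

    public static void main(String[] args) throws InterruptedException {
        put("userId", "123456789");
        // Another thread has its own (empty) map, so it falls back to the default
        Thread other = new Thread(() ->
                System.out.println("other thread sees: " + get("userId")));
        other.start();
        other.join();
        System.out.println("main thread sees: " + get("userId"));
    }
}
```

This is exactly why a log message from another thread (e.g. an @Async method) falls back to the default value.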

Filling in the MDC

What we want to achieve is that on every REST request the current user id and the tenant are stored in the MDC, and that this information is logged. A Spring FilterRegistrationBean will register a custom servlet Filter (javax.servlet.Filter) which is triggered on each request and sets the values in the MDC.

It’s important to note that Spring executes all filters in a certain order, and the MDC filter should be executed after the security filter (otherwise we can’t get the user id, because the security filter hasn’t yet extracted this information from the request).

By default (in older versions of Spring) the Spring Security filter runs quite late in the chain, so it’s best to force it to run a bit earlier by putting this in your application.properties file. This is optional, but the default value might change in the future and you want to be sure that, when this happens, the filter still runs at an earlier stage.

security.filter-order=0

The filter registration bean looks like this:

@Component
public class LogbackDiagnosticContext extends FilterRegistrationBean {

   public LogbackDiagnosticContext() {
      super(new MDCContextFilter());
      addUrlPatterns("/*");
      setOrder(Integer.MAX_VALUE);
   }

   public static class MDCContextFilter implements Filter {
      /**
       * {@inheritDoc}
       */
      @Override
      public void init(FilterConfig
                         filterConfig) throws ServletException {
         // NOOP
      }

      @Override
      public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
         MDC.put("userId", SecurityUtil.getUserId() == null ? "-1" : SecurityUtil.getUserId().toString());
         MDC.put("tenant", StringUtils.isBlank(CurrentContext.getTenant()) ? "none" : CurrentContext.getTenant());
         filterChain.doFilter(servletRequest, servletResponse);
      }

      /**
       * {@inheritDoc}
       */
      @Override
      public void destroy() {
         // NOOP
      }
   }

}

The Logback configuration then uses these MDC values in its log patterns:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/application.log}"/>
    <property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %clr([%mdc{userId:--2}] [%-10.10mdc{tenant:-null}]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    <property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : [%mdc{userId:--2}] [%-10.10mdc{tenant:-null}] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>

    <logger name="org.springframework" level="INFO"/>
    <logger name="be.pw999" level="INFO"/>
</configuration>


This pattern will result in log messages like this (given that the tenant is junit and the userId is 12345679):

2017-05-06 17:42:11.410  INFO   --- [           main] be.pw999.secretproject.base.LogTest      : [12345679] [junit     ] INFO
2017-05-06 17:42:11.440  WARN   --- [           main] be.pw999.secretproject.base.LogTest      : [12345679] [junit     ] WARNING
2017-05-06 17:42:11.441 ERROR   --- [           main] be.pw999.secretproject.base.LogTest      : [12345679] [junit     ] ERRORRRR


As previously said, it’s possible that the MDC is empty. This can happen in a couple of cases:

  • A message is being logged before the custom filter was executed. You can make it run earlier in the chain by passing a smaller number to setOrder(int).
  • A message is being logged for something other than a REST call. Since this is a servlet filter, it won’t work for things like JMS messages or Spring Batch jobs.
  • A message is logged from an asynchronous thread (e.g. when using @Async).

Parsing the log message using Grok

Here’s a little bonus for you. Our log messages are captured using Filebeat and sent to Logstash before being stored in Elasticsearch (a classic ELK stack). Logstash will parse the log messages and convert them so that we can search on the tenant and userId using Kibana. For this we use the following grok pattern:

filter {
    if [type] == "logback" {
       grok {
          patterns_dir => "/etc/logstash/grok/patterns"
          # Do multiline matching with (?m) as the above multiline filter may add newlines to the log messages.
          match => [ "message", "^%{LOGBACK_TIMESTAMP:logtime}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER:pid}%{SPACE}---%{SPACE}%{SYSLOG5424SD:thread}%{SPACE}%{JAVACLASSSPRING:javaclass}%{SPACE}:%{SPACE}\[%{USERID:userId}\]%{SPACE}\[%{TENANT:tenant}\]%{SPACE}%{GREEDYDATA:logmessage}"]
        }
        mutate {
            convert => [ "pid", "integer"]
            convert => [ "userId", "integer" ]
        }
        date {
            match => [ "logtime" , "yyyy-MM-dd HH:mm:ss.SSS" ]
            timezone => "Europe/Brussels"
            add_tag => [ "dateparsed" ]
        }
    }
}


And these are the extra regex patterns used by the grok parser:

JAVACLASSSPRING (?:[\.]?[\[\]/a-zA-Z0-9-]+\.)*[\[\]/A-Za-z0-9$]+
MSEC (\d{3})
LOGBACK_TIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}.%{MSEC}
USERID [\-0-9]*
TENANT [a-zA-Z0-9 ]+
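Since these custom patterns are plain regular expressions, you can sanity-check them before deploying to Logstash. A quick check with java.util.regex, using the same character classes as the USERID, TENANT and MSEC patterns above:

```java
import java.util.regex.Pattern;

public class GrokPatternCheck {
    public static void main(String[] args) {
        // Same regexes as the custom grok patterns
        Pattern userId = Pattern.compile("[\\-0-9]*");
        Pattern tenant = Pattern.compile("[a-zA-Z0-9 ]+");
        Pattern msec   = Pattern.compile("(\\d{3})");

        System.out.println(userId.matcher("-1").matches());         // the default user id
        System.out.println(tenant.matcher("junit     ").matches()); // trailing padding is allowed
        System.out.println(msec.matcher("410").matches());          // three-digit milliseconds
    }
}
```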