lnxsense, a system monitoring tool for Linux

Ever since I got my AMD Athlon XP 2500+, I’ve been into overclocking. While my overclocking activities were limited at the time (as a student I couldn’t risk burning up my CPU or motherboard), I’ve made sure that ever since, none of my desktops ran at stock speeds. Even the trusty Intel 2500K I’m writing this blog on still hums along at 4.4 GHz all-core.

Overclocking has always been a Windows thing though, and for good reason: in 2009 Linux had a market share of only 0.6%, while Windows dominated with 95%. With such a dominant OS, motherboard manufacturers focused fully on (usually terrible) software which allowed you to overclock and monitor your system without leaving Windows. The overclocking community didn’t stop there either: tools like 8rdavcore (apparently ported from Linux), SetFSB, MemSet, CPU-Tweaker and many more made it possible to overclock and tweak your system to the max. Combine that with monitoring software like HWiNFO, AIDA64, SpeedFan, CPU-Z and benchmarks like 3DMark, SiSoft Sandra and Cinebench, and it was clear: overclocking belonged to Windows.

Fast forward to 2025, and things have changed: Linux has a market share of 3% while Windows has dropped to 66%. OCCT is now also available on Linux, GreenWithEnvy makes it easier to overclock NVIDIA GPUs, and benchmarks like y-cruncher, 7-Zip and Geekbench run fine on Linux. But when it comes to graphical monitoring applications, we only have Psensor and xsensors. Both work fine, but there’s room for improvement.

A screenshot showing xsensors and psensor side by side
Xsensors and PSensor side by side

This is where I want to change a couple of things, and after this year’s release of Java 25 and its Foreign Function and Memory API, I can finally work in a language I love while using C libraries like libsensors, libcpuid, the NVIDIA management library (NVML) and many more.
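As a small sketch of what the FFM API makes possible (this is not lnxsense code; it uses strlen from the C standard library as a stand-in for a libsensors call, and assumes Java 22 or later where the API is final):

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class FfmDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Look up a symbol in the C standard library and bind it to a MethodHandle:
        // size_t strlen(const char *s)
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy a Java String into native memory as a NUL-terminated C string
            MemorySegment cString = arena.allocateFrom("lnxsense");
            long len = (long) strlen.invoke(cString);
            System.out.println("strlen(\"lnxsense\") = " + len);
        }
    }
}
```

Binding an actual library like libsensors works the same way, except you load it with SymbolLookup.libraryLookup("libsensors.so", arena) instead of using the default lookup.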

After returning from Devoxx I decided to create a Linux alternative to Open Hardware Monitor, HWMonitor and HWiNFO, and that’s how lnxsense was born. It’s still in its early alpha stages, and what it can show depends heavily on what the underlying libraries can return (e.g. NVIDIA’s NVML doesn’t even have an option to get the hotspot temperature or the actual fan RPM). Even so, I’m already really happy with what it can do.

lnxsense showing different metrics like CPU usage, power draw and GPU frequency.

In its very early stage it supports (when running the back-end server as root):

  • CPU Frequencies (as reported by the Linux kernel)
  • CPU Utilization
  • Memory Utilization
  • Core temperatures
  • Intel requested VCore (the VID)
  • Intel Core multipliers
  • Intel Throttling reasons
  • Intel RAPL Power Management information like PP0, PP1 and Platform power limits and usage
  • NVIDIA Clocks, Utilization, Temperature and Fan speed (in %, because why would NVML expose the actual fan speed), P-state and current PCIe speed
  • SMART and NVMe log
  • Blockdevice IOPS and read/write speed
  • Remote monitoring using sockets
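Most of these metrics ultimately come from kernel interfaces like /proc and sysfs. As a minimal illustration (plain Java, not lnxsense code), the raw counters behind CPU utilization can be read from /proc/stat; note that these are cumulative jiffies, so an actual utilization percentage requires the delta between two samples:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ProcStat {
    public static void main(String[] args) throws IOException {
        // First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
        String cpu = Files.readAllLines(Path.of("/proc/stat")).get(0);
        String[] fields = cpu.trim().split("\\s+");
        long user = Long.parseLong(fields[1]);   // cumulative time spent in user mode
        long idle = Long.parseLong(fields[4]);   // cumulative time spent idle
        System.out.println("user=" + user + " idle=" + idle);
        System.out.println(user >= 0 && idle >= 0);
    }
}
```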

If you want to try it out, you can download a release version from Codeberg. Just be sure to read the INSTALL.md: it’s still in early development, so it’s not a one-click experience and definitely not production-ready.

// 2025/12/15: I decided to rename the project from HWJinfo to lnxsense, it just makes more sense, doesn’t it?

J-ExifTool v0.0.11

Today I’ve released version 0.0.11 of J-ExifTool. After more than 10 years, this release doesn’t add any new functionality; it’s mainly a long-overdue maintenance release:

  • Java 17: this version is built with and for Java 17 [BREAKING CHANGE]
  • A lot of boilerplate code was replaced with Lombok
  • General code cleanup
  • Eclipse configuration removed from git

The jar is not yet on the Maven repo because it doesn’t support my Bitbucket username.

The new jar (+ sources) can be downloaded from Bitbucket.

For the record only: v0.0.11 is commit c6d76be.

Slow performance with NamedParameterJdbcTemplate

Today I tried inserting 256 rows into a single, empty PostgreSQL table with only one index on it, using Spring’s NamedParameterJdbcTemplate. To my surprise, the single transaction took over 3 minutes to complete: over 500 ms per INSERT statement. To make things worse, the same inserts completed within a second during integration testing on an H2 database.

My first guess was that I had an issue with the TOAST tables, since the actual table has 28 columns and most of them are VARCHAR(256). As I didn’t find any issue with it, I continued my quest … right up to the point where I replaced all named parameters with hardcoded values and used an EmptySqlParameterSource instead. To my great surprise, this resulted in sub-second completion of all inserts.

So obviously, there had to be an issue with the NamedParameterJdbcTemplate, right? I fired up VisualVM to verify my idea and sampled the CPU time of all org.springframework classes:

The obvious pain point is the setNull() method of StatementCreatorUtils, and looking at the source code it’s quite obvious what’s going on: every time I set a null value in a statement, this method tries to find out which SQL type the null value should have, because I didn’t specify it myself.

I decided not to waste more time on this issue and just fixed it by rewriting parts of my code. Instead of writing

source.addValue("myParam", null);

I now write

source.addValue("myParam", null, 
     JDBCType.VARCHAR.getVendorTypeNumber());


Et voilà, instant turbo-charged INSERT statements.
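For what it’s worth, the explicit type hint is nothing more than the matching java.sql.Types constant, which is what the three-argument addValue expects:

```java
import java.sql.JDBCType;
import java.sql.Types;

public class TypeHint {
    public static void main(String[] args) {
        // JDBCType.getVendorTypeNumber() returns the java.sql.Types constant for that type,
        // so Spring no longer has to guess the SQL type of a null parameter.
        int sqlType = JDBCType.VARCHAR.getVendorTypeNumber();
        System.out.println(sqlType + " == Types.VARCHAR: " + (sqlType == Types.VARCHAR));
    }
}
```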

Log contextual information to all log messages in Spring Boot using Logback’s MDC

Logging is one of those things that often gets too little attention, but when production errors start rolling in you’ll want as much information as you can get. Most logging implementations in Java will give you the time, the name of the class (or, more precisely, the name of the logger), the name of the thread, and a message, and that’s about the best a logging implementation can do by default.

Sometimes it’s interesting to add extra information to every message you log, for example the id of the user or the tenant’s id (in case of a multi-tenant application). It would be stupid to manually append it to each log message: it’s boring, error-prone, and you can’t reliably parse it for use in an ELK stack. To automate this we can use Logback’s (SLF4J) ‘Mapped Diagnostic Context’. Everything you put in the MDC can be used in the log pattern, and it behaves like a ThreadLocal (each incoming REST request will see different values).

For example:

MDC.put("userId", SecurityUtil.getUserId() == null ? "-1" : SecurityUtil.getUserId().toString());

would put the userId in the MDC and it can then be added to the log message using

%mdc{userId:--2}

The :--2 refers to the default value of -2, which is logged in case the MDC is empty. I’ll explain later when this happens.

Filling in the MDC

What we want to achieve is that on every REST request the current user id and tenant are stored in the MDC, and that this information ends up in every log message. A Spring FilterRegistrationBean will register a custom servlet Filter (javax.servlet.Filter) which is triggered on each request and sets the values in the MDC.

It’s important to note that Spring executes all filters in a certain order, and the MDC filter should be executed after the security filter (otherwise we can’t get the user id, because security hasn’t extracted this information from the request yet).

By default (in older versions of Spring) the Spring Security filter runs quite late in the chain, so it’s best to force it to run a bit earlier by putting this in your application.properties file. This is optional, but the default value might change in the future and you want to be sure that, when this happens, the security filter still runs before ours.

security.filter-order=0

The filter registration bean looks like this:

@Component
public class LogbackDiagnosticContext extends FilterRegistrationBean {

   public LogbackDiagnosticContext() {
      super(new MDCContextFilter());
      addUrlPatterns("/*");
      setOrder(Integer.MAX_VALUE);
   }

   public static class MDCContextFilter implements Filter {
      /**
       * {@inheritDoc}
       */
      @Override
      public void init(FilterConfig
                         filterConfig) throws ServletException {
         // NOOP
      }

      @Override
      public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
         MDC.put("userId", SecurityUtil.getUserId() == null ? "-1" : SecurityUtil.getUserId().toString());
         MDC.put("tenant", StringUtils.isBlank(CurrentContext.getTenant()) ? "none" : CurrentContext.getTenant());
         try {
            filterChain.doFilter(servletRequest, servletResponse);
         } finally {
            // Threads are pooled, so clear the MDC or the next request handled by this thread inherits stale values
            MDC.clear();
         }
      }

      /**
       * {@inheritDoc}
       */
      @Override
      public void destroy() {
         // NOOP
      }
   }

}
The MDC values can then be referenced from the Logback configuration (logback-spring.xml):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/application.log}"/>
<property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %clr([%mdc{userId:--2}] [%-10.10mdc{tenant:-null}]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
<property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : [%mdc{userId:--2}] [%-10.10mdc{tenant:-null}] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

<include resource="org/springframework/boot/logging/logback/console-appender.xml" />
<include resource="org/springframework/boot/logging/logback/file-appender.xml" />
<root level="INFO">
<appender-ref ref="CONSOLE" />
<appender-ref ref="FILE" />
</root>
<logger name="org.springframework" level="INFO"/>
<logger name="be.pw999" level="INFO"/>
</configuration>


This pattern will result in log messages like these (given that the tenant is junit and the userId is 12345679):

2017-05-06 17:42:11.410  INFO   --- [           main] be.pw999.secretproject.base.LogTest      : [12345679] [junit     ] INFO
2017-05-06 17:42:11.440  WARN   --- [           main] be.pw999.secretproject.base.LogTest      : [12345679] [junit     ] WARNING
2017-05-06 17:42:11.441 ERROR   --- [           main] be.pw999.secretproject.base.LogTest      : [12345679] [junit     ] ERRORRRR


As previously said, it’s possible that the MDC is empty. This can happen in a couple of cases:

  • A message is logged before the custom filter has executed. You can make the filter run earlier in the chain by passing a smaller number to setOrder(int).
  • A message is logged for something other than a REST call. Since this is a servlet filter, it won’t work for things like JMS messages or Spring Batch jobs.
  • A message is logged from an asynchronous thread (e.g. using @Async).

Parsing the log message using Grok

Here’s a little bonus for you. Our log messages are captured by Filebeat and sent to Logstash before being stored in Elasticsearch (a classic ELK stack). Logstash parses the log messages and converts them so that we can search on the tenant and userId using Kibana. For this we use the following grok pattern:

filter {
    if [type] == "logback" {
       grok {
          patterns_dir => "/etc/logstash/grok/patterns"
          # Do multiline matching with (?m) as the above multiline filter may add newlines to the log messages.
          match => [ "message", "^%{LOGBACK_TIMESTAMP:logtime}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER:pid}%{SPACE}---%{SPACE}%{SYSLOG5424SD:thread}%{SPACE}%{JAVACLASSSPRING:javaclass}%{SPACE}:%{SPACE}\[%{USERID:userId}\]%{SPACE}\[%{TENANT:tenant}\]%{SPACE}%{GREEDYDATA:logmessage}"]
        }
        mutate {
            convert => [ "pid", "integer"]
            convert => [ "userId", "integer" ]
        }
        date {
            match => [ "logtime" , "yyyy-MM-dd HH:mm:ss.SSS" ]
            timezone => "Europe/Brussels"
            add_tag => [ "dateparsed" ]
        }
    }
}


And these are the extra regex patterns used by the grok parser:

JAVACLASSSPRING (?:[\.]?[\[\]/a-zA-Z0-9-]+\.)*[\[\]/A-Za-z0-9$]+
MSEC (\d{3})
LOGBACK_TIMESTAMP %{YEAR}-%{MONTHNUM}-%{MONTHDAY}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}.%{MSEC}
USERID [\-0-9]*
TENANT [a-zA-Z0-9 ]+
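The custom USERID and TENANT patterns are plain regexes, so they’re easy to sanity-check against a sample log line outside Logstash. A quick check in Java (a hypothetical helper, not part of the actual pipeline):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrokCheck {
    public static void main(String[] args) {
        // The USERID and TENANT grok patterns, written as plain Java regexes
        String userId = "[\\-0-9]*";
        String tenant = "[a-zA-Z0-9 ]+";
        Pattern p = Pattern.compile("\\[(" + userId + ")\\]\\s+\\[(" + tenant + ")\\]\\s+(.*)");

        // Sample taken from the log output shown earlier
        Matcher m = p.matcher("[12345679] [junit     ] INFO");
        if (m.find()) {
            System.out.println("userId=" + m.group(1) + " tenant=" + m.group(2).trim() + " msg=" + m.group(3));
        }
    }
}
```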


Using alternative credentials for Liquibase in Spring Boot

One of the projects I’m working on uses Spring Boot with Liquibase to handle all database changes for each micro-service. One of the obvious requirements to make this work is a database user with DBA rights; otherwise it cannot create, alter or drop tables.

You could configure the default datasource to use such a user, but that would mean every component uses this datasource, and in case of a security breach (e.g. SQL injection) suddenly someone else has DBA access to your database.

Therefore it’s best to configure a second datasource for Liquibase with a DBA user, and a primary datasource with a read-write database user.

Configuring the Liquibase datasource

	@LiquibaseDataSource
	@Bean
	public DataSource liquibaseDataSource() {
		DataSource ds =  DataSourceBuilder.create()
				.username(liquibaseDataSourceProperties.getUser())
				.password(liquibaseDataSourceProperties.getPassword())
				.url(liquibaseDataSourceProperties.getUrl())
				.driverClassName(liquibaseDataSourceProperties.getDriver())
				.build();
		if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
			// Keep the pool minimal: Liquibase only needs a connection during start-up
			org.apache.tomcat.jdbc.pool.DataSource tomcatDs = (org.apache.tomcat.jdbc.pool.DataSource) ds;
			tomcatDs.setInitialSize(0);
			tomcatDs.setMaxActive(2);
			tomcatDs.setMaxAge(30000);
			tomcatDs.setMinIdle(0);
			tomcatDs.setMinEvictableIdleTimeMillis(60000);
		} else {
			LOG.warn("#################################################################");
			LOG.warn("Datasource was not of type org.apache.tomcat.jdbc.pool.DataSource");
			LOG.warn("but was of type {}", ds.getClass().getName());
			LOG.warn("Number of leaked connections might be 10 per instance !");
			LOG.warn("#################################################################");
		}

		LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
		return ds;
	}

FYI: LiquibaseDataSourceProperties is just a standard bean annotated with

@Component
@ConfigurationProperties("datasource.liquibase")

in order to have different configurations per environment. You must configure the pool to use only a couple of connections and to release them after a while, otherwise your user will keep 10 connections open. With 10 micro-services that can be scaled up and down, you’ll quickly block over 100 database connections, which might prevent your application from making new ones. In our case Spring uses the default Tomcat pool as it’s readily available on the classpath, but it might be different for your setup.
For more info see the original Stackoverflow question.
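For illustration, the matching application.properties could look like this (the property names follow the datasource.liquibase prefix and the getters used above; the values are placeholders, not from a real environment):

```properties
# Hypothetical example values, adjust per environment
datasource.liquibase.url=jdbc:postgresql://localhost:5432/mydb
datasource.liquibase.user=dba_user
datasource.liquibase.password=changeme
datasource.liquibase.driver=org.postgresql.Driver
```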

Configuring the default datasource

If you already have a datasource configured in your application, then you just need to annotate it with @Primary to make sure that this read-write user is the one used by all other Spring components. If you don’t, Spring Boot won’t start, because you have two DataSource beans configured and Spring doesn’t know which one to pick.