Slow performance with NamedParameterJdbcTemplate

Today I tried inserting 256 rows into a single, empty PostgreSQL table with only one index on it, using Spring's NamedParameterJdbcTemplate. To my surprise, the single transaction took over 3 minutes to complete, more than 500 ms per INSERT statement. To make things worse, the same inserts completed within a second during integration testing on an H2 database.

My first guess was that I had an issue with the TOAST tables, since the actual table has 28 columns and most of them are VARCHAR(256). As I didn't find any issue there, I continued my quest … right up to the point where I replaced all named parameters with hardcoded values and used an EmptySqlParameterSource instead. To my great surprise, this resulted in sub-second completion of all inserts.

So obviously, there had to be an issue with the NamedParameterJdbcTemplate, right? I fired up VisualVM to verify my idea and sampled the CPU time of all org.springframework classes:

The obvious pain point is the setNull() method of StatementCreatorUtils, and looking at the source code it's quite obvious what's going on: every time I set a null value in a statement, this method tries to find out which SqlType the null value should have, because I didn't specify it myself.

I decided not to waste any more time on this issue, so I simply fixed it by rewriting parts of my code. Instead of writing

source.addValue("myParam", null);

I now write

source.addValue("myParam", null, 
     JDBCType.VARCHAR.getVendorTypeNumber());

Et voilà, instant turbo-charged insert statements.
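If a lot of parameters are nullable, you could centralize these type hints instead of repeating vendor type numbers everywhere. The sketch below is a minimal, hypothetical helper (the NullTypeHints class and its parameter names are my own invention, not part of Spring), using only the JDK's java.sql.JDBCType:

```java
import java.sql.JDBCType;
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: keep the JDBC type of every nullable parameter in one
// place, so a null value always carries an explicit SQL type and Spring never
// has to guess it inside StatementCreatorUtils.setNull().
public class NullTypeHints {

	private static final Map<String, Integer> TYPES = new HashMap<>();
	static {
		TYPES.put("myParam", JDBCType.VARCHAR.getVendorTypeNumber());
		TYPES.put("createdAt", JDBCType.TIMESTAMP.getVendorTypeNumber());
	}

	/** Returns the java.sql.Types constant registered for this parameter. */
	public static int typeOf(String paramName) {
		Integer type = TYPES.get(paramName);
		if (type == null) {
			throw new IllegalArgumentException("No type hint for " + paramName);
		}
		return type;
	}

	public static void main(String[] args) {
		// java.sql.Types.VARCHAR is 12, TIMESTAMP is 93
		System.out.println(typeOf("myParam"));
		System.out.println(typeOf("createdAt"));
	}
}
```

With something like this in place, the call becomes source.addValue("myParam", value, NullTypeHints.typeOf("myParam")), whether the value is null or not.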

Using alternative credentials for Liquibase in Spring Boot

One of the projects I'm working on uses Spring Boot with Liquibase to handle all database changes for each micro-service. An obvious requirement to make this work is a database user with DBA rights; otherwise Liquibase cannot create, alter or drop tables.

You could configure the default datasource to use such a user, but then every component would use this datasource, and in case of a security breach (e.g. SQL injection) this would be bad: all of a sudden someone else has DBA access to your database.

Therefore it's best to configure a second datasource with a DBA user for Liquibase, and a primary datasource with a read-write database user for everything else.

Configuring the Liquibase datasource

	@LiquibaseDataSource
	@Bean
	public DataSource liquibaseDataSource() {
		DataSource ds = DataSourceBuilder.create()
				.username(liquibaseDataSourceProperties.getUser())
				.password(liquibaseDataSourceProperties.getPassword())
				.url(liquibaseDataSourceProperties.getUrl())
				.driverClassName(liquibaseDataSourceProperties.getDriver())
				.build();
		if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
			org.apache.tomcat.jdbc.pool.DataSource tomcatDs = (org.apache.tomcat.jdbc.pool.DataSource) ds;
			// Keep the pool small and short-lived: Liquibase only needs a
			// connection during startup.
			tomcatDs.setInitialSize(0);
			tomcatDs.setMaxActive(2);
			tomcatDs.setMaxAge(30000);
			tomcatDs.setMinIdle(0);
			tomcatDs.setMinEvictableIdleTimeMillis(60000);
		} else {
			LOG.warn("#################################################################");
			LOG.warn("Datasource was not of type org.apache.tomcat.jdbc.pool.DataSource");
			LOG.warn("but was of type {}", ds.getClass().getName());
			LOG.warn("Number of leaked connections might be 10 per instance!");
			LOG.warn("#################################################################");
		}

		LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
		return ds;
	}

FYI: LiquibaseDataSourceProperties is just a standard bean annotated with

@Component
@ConfigurationProperties("datasource.liquibase")

in order to have different configurations per environment. You must configure the pool to use as few connections as possible and to release them after a while, otherwise your user will keep 10 connections open (the pool's default). With 10 micro-services which can be scaled up and down, you'll quickly block over 100 database connections, which might prevent your application from making new connections. In our case Spring uses the default Tomcat pool, as it's readily available on the classpath, but it might be different for your setup.
For more info see the original Stackoverflow question.
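For reference, the matching configuration could look like the sketch below. The exact keys (url, user, password, driver) are assumptions based on the getters used above; adapt them to the fields of your own LiquibaseDataSourceProperties class.

```properties
# Hypothetical per-environment configuration for the Liquibase datasource;
# the key names must match the fields of LiquibaseDataSourceProperties.
datasource.liquibase.url=jdbc:postgresql://localhost:5432/mydb
datasource.liquibase.user=liquibase_dba
datasource.liquibase.password=change-me
datasource.liquibase.driver=org.postgresql.Driver
```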

Configuring the default datasource

If you already have a datasource configured in your application, then you just need to annotate it with the @Primary annotation to make sure this read-write user is the one used by all the other Spring components. If you don't, Spring Boot won't start, because there are 2 instances of DataSource configured and Spring doesn't know which one to pick.
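A minimal sketch of such a primary datasource bean, assuming the same DataSourceBuilder approach as above and a hypothetical appDataSourceProperties holder for the read-write credentials:

```java
	// Sketch only: appDataSourceProperties is a hypothetical properties bean,
	// analogous to LiquibaseDataSourceProperties but holding the read-write user.
	@Primary
	@Bean
	public DataSource primaryDataSource() {
		return DataSourceBuilder.create()
				.username(appDataSourceProperties.getUser())
				.password(appDataSourceProperties.getPassword())
				.url(appDataSourceProperties.getUrl())
				.driverClassName(appDataSourceProperties.getDriver())
				.build();
	}
```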


My IBM DB2 Cheat Sheet

This is a set of DB2 snippets I often use at work. I'm glad to share them with you.

Generate GRANT statements for a new DB2 user named MYUSER on all tables of a schema named MYSCHEMA:

select 'GRANT SELECT ON TABLE ' || TRIM(tabschema) || '.' || TRIM(tabname) || ' TO USER MYUSER @' from syscat.tables where tabschema = 'MYSCHEMA'
UNION
select 'GRANT INSERT ON TABLE ' || TRIM(tabschema) || '.' || TRIM(tabname) || ' TO USER MYUSER @' from syscat.tables where tabschema = 'MYSCHEMA'
UNION
select 'GRANT UPDATE ON TABLE ' || TRIM(tabschema) || '.' || TRIM(tabname) || ' TO USER MYUSER @' from syscat.tables where tabschema = 'MYSCHEMA' @

The following stored procedure will automatically generate and execute the GRANT statements:

CREATE OR REPLACE PROCEDURE MYSCHEMA.GRANT_ME_SOME (IN SCHEMANAME VARCHAR(128), IN USRNME VARCHAR(128))
       DYNAMIC RESULT SETS 1
       MODIFIES SQL DATA
P1: BEGIN
		DECLARE q VARCHAR(1024);
		DECLARE AT_END SMALLINT DEFAULT 0 ;
		DECLARE NOT_FOUND CONDITION FOR SQLSTATE '02000' ;
		
		DECLARE CUR CURSOR WITH HOLD FOR 
		select 'GRANT SELECT ON TABLE ' || TRIM(tabschema) || '.' || TRIM(tabname) || ' TO USER ' || USRNME AS KWERIE from syscat.tables where tabschema = SCHEMANAME
		UNION
		select 'GRANT INSERT ON TABLE ' || TRIM(tabschema) || '.' || TRIM(tabname) || ' TO USER ' || USRNME  AS KWERIE from syscat.tables where tabschema = SCHEMANAME
		UNION
		select 'GRANT UPDATE ON TABLE ' || TRIM(tabschema) || '.' || TRIM(tabname) || ' TO USER ' || USRNME  AS KWERIE from syscat.tables where tabschema = SCHEMANAME; 
	
		DECLARE CONTINUE HANDLER FOR NOT_FOUND SET AT_END = 1 ;
		OPEN CUR;
		
		UPDATE_LOOP : LOOP
		
		FETCH CUR INTO q;
		IF AT_END <> 0 THEN
	        LEAVE UPDATE_LOOP ;
	    END IF ;
		
		EXECUTE IMMEDIATE q;
		
		END LOOP;
		
		CLOSE CUR;
END P1;

Generate RUNSTATS statements (detailed index and all-column statistics) for all tables of a schema MYSCHEMA

select 'CALL SYSPROC.ADMIN_CMD(''RUNSTATS ON TABLE ' ||TRIM(tabschema)||'.'||TRIM(tabname)||' FOR DETAILED INDEX ALL '')@' from syscat.tables where tabschema='MYSCHEMA' and type='T' 
UNION
select 'CALL SYSPROC.ADMIN_CMD(''RUNSTATS ON TABLE ' ||TRIM(tabschema)||'.'||TRIM(tabname)||' ON ALL COLUMNS'')@' from syscat.tables where tabschema='MYSCHEMA' and type='T' @

Generate reorg table statements for all tables of a schema MYSCHEMA

select 'CALL SYSPROC.ADMIN_CMD(''REORG TABLE '||TRIM(tabschema)||'.'||TRIM(tabname)||''') @' from syscat.tables where tabschema='MYSCHEMA' and type='T'

Find all inactive tables in your database and generate the SET INTEGRITY statements to activate them

SELECT COALESCE(
 'SET INTEGRITY FOR  "' || TRIM(TABSCHEMA) || '"."' || TRIM(TABNAME) || 
  '"  IMMEDIATE CHECKED @' , '')
FROM

 TABLE
 ( SELECT TABSCHEMA, TABNAME
 FROM SYSCAT.TABLES
 WHERE (CONST_CHECKED LIKE '%N%'
 OR STATUS = 'C')
 UNION

 SELECT a.REFTABSCHEMA,a.REFTABNAME
 FROM SYSCAT.REFERENCES a,SYSCAT.TABLES b
 WHERE (a.TABSCHEMA, a.TABNAME) IN
 (SELECT TABSCHEMA, TABNAME
 FROM SYSCAT.TABLES
 WHERE (CONST_CHECKED LIKE '%N%'
 OR STATUS = 'C'))
 AND a.REFTABSCHEMA = b.TABSCHEMA
 AND a.REFTABNAME = b.TABNAME
 AND (b.CONST_CHECKED LIKE '%N%'
 OR b.STATUS = 'C')

 ) AS TAB( TABSCHEMA, TABNAME )
ORDER BY TABSCHEMA
FETCH FIRST 50 ROWS ONLY
OPTIMIZE FOR 1 ROW

Quickly empty a table (sets your table in an invalid state). Works from the CLI only

db2 LOAD FROM /dev/null of del REPLACE INTO MYSCHEMA.MYTABLE

An example of a multi-table DELETE using a WITH statement and the SELECT FROM OLD TABLE construct. I previously covered this statement.

WITH 
	DOCLINK(ID) AS ( SELECT ID FROM OLD TABLE (
			DELETE FROM MYSCHEMA.MYTABLE1 WHERE CODE = 'XYZ'
		)
	), 
	DOCU(ID) AS (SELECT ID FROM OLD TABLE (
			DELETE FROM MYSCHEMA.MYTABLE2 WHERE DOC_ID IN (SELECT ID FROM DOCLINK)
		)
	)	
	SELECT COUNT(*) FROM MYSCHEMA.MYTABLE1 WHERE CODE = 'XYZ'
 @

An example of a MERGE statement. The field MYTABLE.JAAR will be updated with the value of the matching field MYTABLE2.JAAR. It's what you would try to express with UPDATE MYSCHEMA.MYTABLE SET JAAR = (SELECT JAAR FROM MYTABLE2 WHERE …) if that were a valid statement.

MERGE INTO MYSCHEMA.MYTABLE VELD 
	USING (
		select DISTINCT V.ID AS VELD_ID, T.JAAR AS JAAR 
		 from MYSCHEMA.MYTABLE2 T, MYSCHEMA.MYTABLE3 V
                 WHERE    V.ID = T.V_ID
			AND V.TYPE_ID IN (  100000,   100065,   100130)
			AND T.JAAR IS NOT NULL
	) AS VELD_TO_UPDATE
	ON VELD_TO_UPDATE.VELD_ID = VELD.ID
	WHEN MATCHED THEN UPDATE SET VELD.JAAR = VELD_TO_UPDATE.JAAR 
	@

A simple WITH statement

WITH IDS(FRST, SCND, CNT) AS (
	SELECT MIN(ID), MAX(ID), COUNT(ID) FROM MYSCHEMA.MYTABLE
		WHERE ID > 105220
		GROUP BY EXT_ID, INT_ID
		HAVING COUNT(ID) > 1
) SELECT ID, REF FROM MYSCHEMA.MYTABLE2 WHERE
	ID IN (SELECT FRST FROM IDS) 
	OR ID IN (SELECT SCND FROM IDS)
	ORDER BY REF
	FOR READ ONLY @

List aggregation (LISTAGG)

SELECT LISTAGG(REF, ', ') WITHIN GROUP(ORDER BY ID) 
   FROM MYSCHEMA.MYTABLE GROUP BY REF_PREFIX

Drop a schema (CLI)

db2 "CALL SYSPROC.ADMIN_DROP_SCHEMA('MYSCHEMA', NULL, 
 'DELETESCHEMA', 'ERR_DEL')"

Copy a schema (CLI)

db2 "CALL SYSPROC.ADMIN_COPY_SCHEMA('MYSCHEMA', 'MYNEWSCHEMA', 
 'COPY', null,  null, null, 'ERRORSCHEMA', 'ERRORTAB') "

DB2Advisor on schema MYSCHEMA (CLI) using an input file YOUR_QUERIES (you can also pass a password separately with -x)

db2advis -d MY_DB -t 30 -m IMP -noxml -i YOUR_QUERIES.SQL 
 -a USERNAME/PASSWORD -o OUTPUT.OUT -n MYSCHEMA -q MYSCHEMA

Create the explain tables in your schema.

CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'C', CAST (NULL AS VARCHAR(128)), 
 CAST ('MYUSER' AS VARCHAR(128)))

Migrating Sonar from Derby to MySQL

For those people who are not familiar with Sonar, it's an open-source source code analysis tool which you can use to improve the quality of your code. By default, Sonar comes with a simple embedded Java Derby database, which is great for trying it out, but with at least 8 different projects of 25,000 to 200,000 lines of code each, it became clear that the embedded Derby database was not a good solution. Analysis of a single project took about 1.5 hours, and these projects keep growing, so we obviously needed a serious performance boost.
Sonar Screenshot

The great thing about Sonar is that it supports several databases out of the box: MySQL, DB2, PostgreSQL and MSSQL. I decided it was time to move from Derby to MySQL, and so began my quest to transfer all the data from one database to the other…

It took me a while to find the great (but unfortunately not free) tool RazorSQL. It was one of the few tools able to easily make a SQL dump of a Derby database. Dumping this 500 MB database generated over 400,000 SQL queries (many of them containing LOBs) and took a good two hours to complete. Using some old-school

mysql -u root < dump.sql

I then tried to import the data into MySQL. Unfortunately this didn't work, since RazorSQL had used the Derby SQL dialect.

After a good night's sleep I went back to the internet and quickly found a great tool: openDBCopy. This tool makes it easy to migrate from one database to another!
It did require some cheating though, because Derby isn't supported by default. So here's a small step-by-step guide to help you migrate your Sonar database. You can find mysql.exe and mysqldump.exe in the MySQL Workbench.
FYI: I don't think you can ruin anything by following this guide, but everything should be done at your own risk. I'm not responsible if something goes wrong.
FYI 2: this guide describes a Derby to MySQL migration for Sonar. However, it's easy to turn this into an x -> y migration guide for z. Feel free to do so, and don't forget to mention me ;) . Of course, all at your own risk.

Configuring the MySQL database:
1) Execute the MySQL query you can find in Sonar\extras\database\mysql. This will set up the schema and user.
2) Change the sonar.properties file in the conf folder so that Sonar uses the MySQL database.

# Comment the 3 following lines to deactivate the default embedded database
#sonar.jdbc.url:                            jdbc:derby://localhost:1537/sonar;create=true
#sonar.jdbc.driverClassName:                org.apache.derby.jdbc.ClientDriver
#sonar.jdbc.validationQuery:                values(1)
# directory containing Derby database files. By default it's the /data directory in the sonar installation.
#sonar.embeddedDatabase.dataDir:
# derby embedded database server listening port, defaults to 1527
#sonar.derby.drda.portNumber:               1537

#----- MySQL 5.x/6.x
# Comment the embedded database and uncomment the following lines to use MySQL
sonar.jdbc.url:                            jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.jdbc.driverClassName:                com.mysql.jdbc.Driver
sonar.jdbc.validationQuery:                select 1

3) Start Sonar and verify it is working. If Sonar is up and running then it has populated the schema for you (if not, check the log files).
4) Stop Sonar and change the sonar.properties file so that it uses the Derby database again (we'll need that database up and running later).
5) To prevent duplicate primary keys, we'll have to empty all tables in the MySQL database. This can be done in two steps:
a)

mysqldump.exe -u root --no-data --add-drop-table --databases sonar > mysql.sql

b)

mysql.exe -u root < mysql.sql

Moving the data from derby to MySQL with openDBCopy:
1) You’ll need to add some libraries to the lib folder of openDBCopy:
derbyclient.jar
derbytools.jar
mysql.jar

You can find these libraries on the MySQL site and the Derby site, or copy them from another application.
2) Start openDBCopy (I used v0.51rc2)
3) Plugin > Select new Plugin > copy data from a source into a destination database
4) Important step: click on * dir() and select a directory, otherwise it won’t work.
5) Make sure Sonar is running (since the Derby database needs to be running) and fill in the second tab like this:
OpenDBCopy settings for Sonar migration

PS: the default Derby port is 1527 ;)
6) Click the “apply and test” buttons; you should see the connection URL appear at the bottom of the application. You'll only get a pop-up if there's a problem.
7) The next screens are just a matter of clicking on the available buttons. Leave the mapping as it is, it should work fine.
8) On the last tab, just press execute and wait for it to finish.
9) Change the sonar.properties file so it uses the MySQL database and start Sonar.

Optional steps:
These steps are optional, but might be required depending on your setup:

  • Update your project’s pom.xml to match the new database url and driver
  • Update your continuous integration tool (e.g. Hudson)
  • Test new builds (not really optional, I guess :p )

If you forget to update the database url you’ll probably get something like this (in Maven):

Embedded error: PicoLifecycleException: method ‘public void org.sonar.jpa.session.AbstractDatabaseConnector.start()’, instance ‘org.sonar.jpa.session.DriverDatabaseConnector@168c1e1, java.lang.RuntimeException: wrapper
org.apache.derby.jdbc.ClientDriver

If everything went fine then you’ve successfully migrated the Sonar database from Derby to MySQL.

This post was originally posted on my My.Opera blog on Sunday, April 3, 2011 10:00:38 AM. This post has been copied to my WordPress blog since the My.Opera blogging service will be discontinued.

Delete from multiple tables in DB2 in a single query

Consider following, simple, relational database model:

Document DB model

We have a table with documents and a table linking these documents to something unimportant. The relation between DOCUMENT and LINK is one-to-one or one-to-none.

Now imagine that we have to delete all documents where the TYPE column has the value 'TEMPORARY'.

DELETE FROM LINK WHERE TYPE = 'TEMPORARY' @

If we executed this query first, we would no longer be able to tell which rows to delete from the DOCUMENT table.

Starting the other way around:

DELETE FROM DOCUMENT WHERE ID IN (SELECT ID FROM LINK WHERE TYPE = 'TEMPORARY') @

won't work either, because the foreign key from LINK to DOCUMENT restricts the deletion.

Fortunately, DB2 allows us to do multiple deletes in a single query:

WITH
-- DELETE THE LINKS FIRST AND LET DB2 STORE THE IDs IN AN 'OLD TABLE'
DELETED_LINKS(ID) AS (SELECT ID FROM OLD TABLE (
	DELETE FROM LINK 
	WHERE TYPE = 'TEMPORARY' 
	)
),
-- DELETE THE DOCUMENTS USING THE DELETED_LINK IDs
DELETED_DOCUMENTS(ID) AS (SELECT ID FROM OLD TABLE (
	DELETE FROM DOCUMENT
	WHERE ID IN (SELECT DISTINCT ID FROM DELETED_LINKS) 
	)
) 
(
-- A SELECT IN A WITH STATEMENT IS MANDATORY -> USELESS STATEMENT
SELECT * FROM SYSIBM.SYSDUMMY1
) @

What this query does is delete all LINK rows where TYPE is 'TEMPORARY' and store the IDs of the deleted rows in a temporary table called DELETED_LINKS, which has a single column named ID.
The next query uses the values from this temporary table to delete the correct rows from the DOCUMENT table.
Last but not least, there's a completely useless SELECT statement, because without it the statement won't work ;)

Ps. I used this online schema designer.