Do you want to save your mental energy for solving complex coding challenges instead of fixing syntax issues in code generated by Gen AI? The Qt AI Assistant is the world's first coding assistant that seamlessly embeds a QML linter agent for the prompts you write. The latest release also comes with the ability to configure your own LLM.
Embedded QML Linter
This second agent of the Qt AI Assistant (the first one was the code review agent) springs into action whenever you ask your LLM for expert help. If the LLM response includes a QML code snippet that can be linted, the QML linter analyses the code. If the snippet contains issues such as syntax errors or outdated QML definitions, the linter agent requests a fix for them. The (hopefully) improved response is displayed as additional information alongside the original response.
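To give a concrete picture of what the linter catches, here is a snippet with two typical qmllint warnings, an unused import and an unqualified property access (a minimal sketch, assuming default qmllint settings; the exact diagnostics forwarded to the LLM may differ):

```qml
import QtQuick
import QtQuick.Controls  // qmllint warning: unused import

Item {
    id: root
    property int margin: 8

    Rectangle {
        // qmllint warning: unqualified access; 'margin' is resolved
        // through the enclosing scope and should be written 'root.margin'
        x: margin
        width: 50
        height: 50
    }
}
```

With those warnings fed back, the requested fix would drop the unused import and qualify the access as root.margin.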

While the linter doesn't improve an LLM's pre-trained knowledge, it can improve the quality of the generated code. When using the embedded linter with Sonnet 4, we were able to improve the QML100 benchmark results by 3%. And even if the LLM does not know how to fix the issues, at least you are aware of them.
The QML linter is also used for the /fix and /review smart commands, providing the LLM with more context. The QML linter can be disabled in the AI Assistant preferences in Qt Creator.
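As an example of where that extra context pays off: the Qt 5 module below no longer exists under that name in Qt 6, and qmllint reports the failed import. A /fix request that carries this diagnostic lets the LLM correct the import instead of guessing (a sketch, assuming a Qt 6 kit; the diagnostic wording varies by version):

```qml
import QtQuick
import QtGraphicalEffects  // Qt 5 module; qmllint on a Qt 6 kit reports a
                           // failed import, and the fix is to use the Qt 6
                           // replacement, Qt5Compat.GraphicalEffects

Rectangle {
    width: 100
    height: 100
    layer.enabled: true
}
```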
Custom LLM Configuration
You can now configure a custom LLM for prompts and code completion.

Please remember that connecting to other LLMs is a system integration effort that requires significant LLMOps knowledge. Help with integrating custom LLMs is not in the scope of Qt's Technical Support; it is available as a Professional Service. We will provide documentation and examples. However, a lot of prompt engineering has gone into optimizing the experience for the pre-configured LLMs, so expect to spend comparable effort on your custom LLM.
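To give a rough idea of the scope, here is what a custom LLM entry might look like (a hypothetical sketch; the key names below are illustrative, not the shipped schema, which is covered by the documentation and examples mentioned above):

```toml
# Illustrative sketch only: these key names are hypothetical, not the
# actual Qt AI Assistant schema.
[[models]]
name = "my-company-llm"          # name shown in the model selector
endpoint = "https://llm.example.com/v1/chat/completions"
api_key_env = "MY_LLM_API_KEY"   # read the secret from the environment
max_tokens = 2048
temperature = 0.2
```

The connection settings are the easy part; as noted above, expect the prompt templates for your model to absorb most of the tuning effort.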
How to upgrade to v0.9.4
You can install or upgrade the Qt AI Assistant in the Extensions view of Qt Creator. You need to upgrade to Qt Creator 17 to benefit from all features of the Qt AI Assistant. Remember that the installation can still take quite a while...
Meanwhile… we also made the following changes:
- Clicking the Send button while LLM content is being streamed stops the request processing
- StarCoder has been removed from the LLM portfolio because of its disappointing coding performance
- The LLM configuration file has been migrated from JSON to TOML for much better readability, especially of prompts (see the sketch below)
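To illustrate that last point with hypothetical content: a multi-line prompt that JSON forces onto a single escaped line stays readable in TOML:

```toml
# In JSON, this prompt was one escaped line:
#   "prompt": "You are a QML expert.\nPrefer Qt 6 APIs..."
# TOML's triple-quoted strings keep it readable (content illustrative):
prompt = """
You are a QML expert.
Prefer Qt 6 APIs and avoid deprecated QML definitions.
"""
```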