Context, Context, Context – Qt AI Assistant v0.9.5 is out!
September 11, 2025 by Peter Schneider
The big picture matters! Based on popular requests, we are adding support to deliver additional context from the active project to the LLM.
Qt AI Assistant v0.9.5 also supports Gemini 2.5 Pro and DeepSeek v3.1 as pre-configured LLMs for code completion and prompts.
Why is Project Context Important?
Project context is important for coding assistants because it helps them understand the overall purpose of the application and how different parts of the codebase fit together. With this knowledge, they can better detect syntax issues when reusing code objects and ensure suggestions integrate smoothly into the existing structure. Context also enables assistants to align with project-specific patterns and goals, reducing the risk of generic or incompatible solutions.
Project Context for Prompts
The Qt AI Assistant uses other files in the project as context to respond to prompts. This allows the AI Assistant to:
- Explain the entire project in a nutshell
- Create code documentation, including dependencies
- Help identify where objects such as custom QML types are used
- Fix syntax issues related to custom QML types and custom properties
You can manage the scope of the project context in several complementary ways:
- Limit the number of tokens that will be used for the project context
- Limit the scope of the project context to cover only open files
- Limit the scope of the project context by creating an aiignore file
Screen capture: Context configuration for the Qt AI Assistant
You can create an aiignore file from the AI Assistant template in Qt Creator's file wizard. The initial aiignore file contains a formatting suggestion and a list of commonly excluded file types, such as binaries. Any files or file types listed in the aiignore file are excluded from the project context.
Screen capture: Qt Creator file creation wizard for aiignore file
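As an illustration, an aiignore file might look like the following. This is our own sketch, assuming gitignore-style patterns; the actual template generated by the wizard is the authoritative reference:

```
# Exclude build output and binary assets from the AI context
build/
*.so
*.dll
*.png
*.ico
*.qm
```

Excluding large binary and generated files keeps the context budget free for source files that actually help the LLM.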
From the remaining project context, we prioritise files based on their proximity to the file in question: files in the same subfolder are more likely to be included. When the source code doesn't fit within the LLM's input limits, or the content is binary (such as images or icons), we include a list of file paths as context instead.
Please note: Enabling context from other project files significantly increases token consumption. While additional context provides a dramatic improvement in coding assistance, it does come at a cost. You can limit the amount of additional context, and therefore, the number of tokens to be used, by setting a context limit for each LLM.
The project context is not shared with custom models due to the complexity of defining the related prompts.
Gemini 2.5 Pro Support
Gemini 2.5 Pro has become a real alternative to Anthropic’s Sonnet 4 model for QML development.
Gemini 2.5 Pro scores 58% on the QML100 QML coding benchmark, performing better than GPT-4o (47%) and the DeepSeek R1 models (57%) but lower than the Sonnet models (up to 77%). The gap stems from Gemini frequently suggesting deprecated QML syntax from Qt 5, whereas the Sonnet models have adopted the Qt 6 syntax thanks to more recent training data.
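To illustrate the kind of deprecated syntax involved (our own example, not benchmark output): Qt 5-era QML uses versioned imports and implicit signal-handler properties in Connections, both of which Qt 6 replaces:

```qml
// Qt 5 style (deprecated): versioned import, implicit handler property
import QtQuick 2.15

Connections {
    target: myButton
    onClicked: console.log("clicked")  // deprecated handler form in Qt 6
}

// Qt 6 style: versionless import, explicit handler function
import QtQuick

Connections {
    target: myButton
    function onClicked() { console.log("clicked") }
}
```

An LLM trained mostly on Qt 5 material tends to emit the first form, which triggers deprecation warnings in modern Qt projects.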
You can use Gemini 2.5 Pro for code completion, but it occasionally fails to follow the instructions on indentation and line breaks, leading to code suggestions that do not fit the format of the existing code. Therefore, we have labelled it as "Experimental" for code completion.
DeepSeek v3.1 Support
For our customers in China, we upgraded DeepSeek support to v3.1. DeepSeek v3.1 improves its rating on the QML100 coding benchmark for natural language prompts by 5% to an overall score of 62%. The code completion score remains unchanged at an 87% success rate on the QML100FIM benchmark. The maximum context window has been raised to 128k tokens.
DeepSeek v3.1 tends to suggest unnecessary imports, especially Qt Quick Controls. It also produces code with many unqualified accesses. Furthermore, it prefers to build UI controls such as spin boxes, drop-downs, and progress bars from atomic components instead of using the available Qt Quick Controls.
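To illustrate the difference (our own sketch, not output captured from the model): instead of assembling a progress bar from atomic Rectangle items, the idiomatic approach uses the ready-made Qt Quick Controls component:

```qml
import QtQuick
import QtQuick.Controls

// Hand-rolled from atomic components, the pattern the model tends to produce
Rectangle {
    width: 200; height: 8
    color: "lightgray"
    Rectangle {
        width: parent.width * 0.5; height: parent.height
        color: "green"
    }
}

// Preferred: the ready-made Qt Quick Controls component
ProgressBar {
    from: 0
    to: 1
    value: 0.5
}
```

The control variant is shorter, styleable, and accessible out of the box, which is why hand-rolled equivalents count against a model in code review.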
Customers upgrading from DeepSeek v3 or R1 to DeepSeek v3.1 can use the pre-configured DeepSeek model support in the Qt AI Assistant over the existing chat API.
How to Install
You can install (or upgrade to) the Qt AI Assistant v0.9.5 from the Qt Creator Extensions view in Qt Creator 17.0.1 and later releases. Due to the application's size, the installation may take a moment or two.
Claude Sonnet 3.5 Deprecated
We are discontinuing support for Claude Sonnet 3.5 as a pre-configured LLM to maintain a high-quality portfolio of LLMs. Claude Sonnet 3.7 and Claude Sonnet 4 remain available as excellent coding models from Anthropic.
Meanwhile… we made the following minor enhancements:
- Reasoning mode is now supported for Claude models and can be enabled from the Advanced LLM settings. Responses to your own prompts and to /doc, /explain, and /review then show more transparently which logic the LLM follows to generate the response. Reasoning is deactivated for the /inlinecomments, /fix, and /qtest smart commands for efficiency.
- The context window for Claude Sonnet 4 has been raised to 1 million tokens.
- You can stop requests by clicking the stop button (which temporarily replaces the send button in the inline prompt window). This won't necessarily stop the LLM from consuming tokens, but it lets you move on faster.
- Prompt caching is now in use for all LLMs supporting such functionality to reduce token utilisation, especially when more context is provided to the LLM.