I have an SQL query that fetches some stats, and those stats are displayed on a dashboard. The dashboard runs this query every 15 seconds to fetch the most up-to-date data.
The query is very CPU-heavy for the database server (according to the metrics Azure SQL provides) and consumes about 15% of the server's available CPU capacity.
The query runs against a log table (which records messages received from users and the answers returned by a chatbot). It returns four values, namely conversations, messages, licenses and failed:
SELECT
    -- Total unique `conversationId`s:
    COUNT(DISTINCT conversationId) AS conversations,
    -- Total records in the table:
    COUNT(*) AS messages,
    -- Total unique `licenseId`s:
    COUNT(DISTINCT licenseId) AS licenses,
    -- Total messages where `intent` meets certain criteria:
    (
        SELECT COUNT(*)
        FROM MessageLog
        WHERE intent LIKE 'None' OR intent IS NULL OR intent LIKE ''
    ) AS failed
FROM MessageLog
What do you think is the most CPU-intensive part of this query, and how could it be made to consume less CPU?
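One variant I'm considering (untested against production, so I can't say yet whether it actually helps) folds the scalar subquery into a conditional aggregate, so the table should only be scanned once instead of twice:

```sql
SELECT
    COUNT(DISTINCT conversationId) AS conversations,
    COUNT(*) AS messages,
    COUNT(DISTINCT licenseId) AS licenses,
    -- Conditional aggregation replaces the separate
    -- `SELECT COUNT(*) ... WHERE intent ...` subquery:
    SUM(CASE WHEN intent IS NULL OR intent IN ('None', '')
             THEN 1 ELSE 0 END) AS failed
FROM MessageLog
```

Would that be expected to reduce CPU meaningfully, or are the `COUNT(DISTINCT ...)` aggregates the real cost?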
Update: Here's the DB schema:
CREATE TABLE [dbo].[MessageLog](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [licenseId] [int] NOT NULL,
    [message] [nvarchar](1000) NOT NULL,
    [timestamp] [datetime] NOT NULL,
    [intent] [nvarchar](70) NULL,
    [entities] [nvarchar](3000) NULL,
    [conversationId] [nvarchar](100) NULL,
    [confidence] [float] NULL,
    [failed] [bit] NOT NULL,
    [ipAddress] [varchar](25) NULL,
    [userAgent] [varchar](256) NULL,
    [nlpExecutionTime] [int] NULL,
    [answer] [nvarchar](1000) NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[MessageLog] ADD CONSTRAINT [DF_MessageLog_failed] DEFAULT ((0)) FOR [failed]
GO
There are non-clustered indexes on id, intent, conversationId and licenseId, but they don't seem to improve performance.
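The existing indexes are single-column non-clustered indexes, roughly like the following (the actual names and options may differ slightly):

```sql
-- Approximate definitions of the existing indexes:
CREATE NONCLUSTERED INDEX IX_MessageLog_id ON dbo.MessageLog (id);
CREATE NONCLUSTERED INDEX IX_MessageLog_intent ON dbo.MessageLog (intent);
CREATE NONCLUSTERED INDEX IX_MessageLog_conversationId ON dbo.MessageLog (conversationId);
CREATE NONCLUSTERED INDEX IX_MessageLog_licenseId ON dbo.MessageLog (licenseId);
```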