I'm doing a stress test on an application that I'm developing. I added a few million records to a table, roughly the volume it should reach after about 5 years of use. I use Entity Framework. Counting the records of a query took a long time, and I first thought it was EF's problem. But when I ran a trace in SQL Server and executed the query directly against the database, I noticed that the delay is in SQL Server itself.
Consider the following example query, similar to the one EF generates but much simpler; it still takes a long time:
select count(1) from tbMovimentoConta where con_codigo_fk = 1 and mcon_data >= '2017-01-01' and mcon_data <= '2018-01-01'
Result: 1,162,158 records
Time: 24 seconds
Repeating the query is instantaneous, I believe because of the cache. However, if I clear the cache with the DBCC DROPCLEANBUFFERS command, it takes about 25 seconds again.
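For reference, the cold-cache test I'm running looks like this (issuing CHECKPOINT first so dirty pages are flushed before the buffer pool is cleared; this should only be done on a test server):

```sql
-- Flush dirty pages to disk so DROPCLEANBUFFERS can evict everything
CHECKPOINT;
-- Clear the buffer pool to force physical reads on the next execution
DBCC DROPCLEANBUFFERS;

-- Re-run the query against a cold cache
SELECT COUNT(1)
FROM tbMovimentoConta
WHERE con_codigo_fk = 1
  AND mcon_data >= '2017-01-01'
  AND mcon_data <= '2018-01-01';
```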
The con_codigo_fk field used in the WHERE clause of the query above is a foreign key and has an index. The mcon_data field, also used in the WHERE clause, has an index too.
This table has four more foreign-key indexes, plus two indexes on the mcon_dataInclusao fields that are used in searches.
Here is the script for the index on the mcon_data column, in case someone can spot a possible flaw in it:
CREATE NONCLUSTERED INDEX [ix_mcon_data] ON [dbo].[tbMovimentoConta]
(
    [mcon_data] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
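One variant I plan to test is a composite index covering both filter columns, so the count could be answered by a single index seek instead of combining two single-column indexes (the index name is my own; I have not yet confirmed it actually helps):

```sql
-- Hypothetical composite index: equality column first, range column second
CREATE NONCLUSTERED INDEX [ix_con_codigo_fk_mcon_data]
ON [dbo].[tbMovimentoConta] ([con_codigo_fk] ASC, [mcon_data] ASC);
```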
Is this delay normal for this number of records? It does not seem like that many records to me. I do not know whether the problem is the number of indexes or the way they were created. I am still investigating.
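To check whether the time is going into physical reads or somewhere else, I am measuring I/O and timing like this:

```sql
-- Report page reads and CPU/elapsed time in the Messages tab
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(1)
FROM tbMovimentoConta
WHERE con_codigo_fk = 1
  AND mcon_data >= '2017-01-01'
  AND mcon_data <= '2018-01-01';
```

The logical vs. physical read counts should show whether the cold-cache runs are dominated by disk I/O.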
Computer configuration: 8 GB of RAM and a Core i7.