Question

I have a table called Users with 10 million records in it. This is the table structure:

CREATE TABLE [dbo].[Users](
    [UsersID] [int] IDENTITY(100000,1) NOT NULL,
    [LoginUsersName] [nvarchar](50) NOT NULL,
    [LoginUsersPwd] [nvarchar](50) NOT NULL,
    [Email] [nvarchar](80) NOT NULL,
    [IsEnable] [int] NOT NULL,
    [CreateTime] [datetime] NOT NULL,
    [LastLoginTime] [datetime] NOT NULL,
    [LastLoginIp] [nvarchar](50) NOT NULL,
    [UpdateTime] [datetime] NOT NULL,
 CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED 
(
    [UsersID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

I have a nonclustered index on the UpdateTime column.
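For reference, the index was created with something like this (the exact name may differ; IX_Users_UpdateTime is just a placeholder):

CREATE NONCLUSTERED INDEX IX_Users_UpdateTime
ON dbo.Users (UpdateTime);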

The paging SQL:

;WITH UserCTE AS (
    SELECT * FROM (
        SELECT
            ROW_NUMBER() OVER (ORDER BY UpdateTime DESC) AS row,
            UsersID AS rec_id -- select the primary key only
        FROM
            dbo.Users WITH (NOLOCK)
    ) A WHERE row BETWEEN 9700000 AND 9700020
)
SELECT
    *
FROM
    dbo.Users WITH (NOLOCK)
WHERE
    UsersID IN (SELECT UserCTE.rec_id FROM UserCTE)

The query above produces these statistics:

SQL Server parse and compile time: 
CPU time = 0 ms, elapsed time = 3 ms.
(21 row(s) affected)

SQL Server Execution Times:
CPU time = 2574 ms,  elapsed time = 3549 ms.


Any suggestions on how to improve the paging speed would be appreciated. Thanks!


Solution

That looks about as good as it is going to get without changing the way it works or doing some sort of pre-calculation.

The index used to locate the UsersIDs on the page is as narrow as it can be (the leaf pages contain just the UpdateTime values and the clustered index key, UsersID). You could make the index slightly narrower by changing the column to datetime2, but this won't make a significant difference. You could also check that this index doesn't have excessive fragmentation.
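To check fragmentation, something along these lines should do it (a sketch; it assumes the index lives on dbo.Users in the current database):

SELECT i.name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Users'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id;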

If you had an indexed, sequential integer column UpdateTimeOrder, then you could simply do

SELECT *
FROM dbo.Users
WHERE UpdateTimeOrder BETWEEN 9700000 AND 9700020

But maintaining such a column in the face of concurrent INSERTs/UPDATEs/DELETEs will be difficult. An easier but less effective pre-calculation would be to create an indexed view:

CREATE VIEW dbo.UserCount
WITH SCHEMABINDING
AS
SELECT COUNT_BIG(*) AS Count
FROM [dbo].[Users]

GO

CREATE UNIQUE CLUSTERED INDEX IX ON dbo.UserCount(Count)

Then retrieve the pre-calculated count and issue a different query with ROW_NUMBER() OVER (ORDER BY UpdateTime ASC) when looking for rows more than halfway through the index (subtracting the original row numbers from the count, of course), as sketched below.
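A sketch of that reversed query, assuming the total comes from the indexed view above (with a count of 10,000,000, DESC rows 9,700,000-9,700,020 map to ASC rows 299,981-300,001):

-- NOEXPAND makes non-Enterprise editions use the view's index
DECLARE @Count BIGINT = (SELECT [Count] FROM dbo.UserCount WITH (NOEXPAND));

;WITH UserCTE AS (
    SELECT ROW_NUMBER() OVER (ORDER BY UpdateTime ASC) AS row,
           UsersID AS rec_id
    FROM dbo.Users
)
SELECT *
FROM dbo.Users
WHERE UsersID IN (SELECT rec_id
                  FROM UserCTE
                  -- rows 9,700,000-9,700,020 of the DESC order, read from the ASC end
                  WHERE row BETWEEN @Count - 9700020 + 1
                                AND @Count - 9700000 + 1)
ORDER BY UpdateTime DESC;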

But why do you need this at all? Do people really visit page 485,000?

Licensed under: CC-BY-SA with attribution