Question

I have SQL Server 2008 Standard Edition.

Some of our tables contain around 2 million rows of data. We are using a Microsoft Access front end (horrible, I know, but the application is too big to rewrite at the moment).

I want to increase performance and speed as we are starting to see a dip in our Access performance.

I have looked into partitioned tables, which seemed perfect, but they are not supported in Standard Edition, and as upgrading to Enterprise costs thousands of pounds, it's out of the question.

I could split the database into separate databases (one per year) and use partitioned views to access the data, but I'm not sure how much of a performance increase this would give me for the effort.
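For anyone unfamiliar with the approach, a partitioned view is just a `UNION ALL` over per-range tables whose `CHECK` constraints let the optimizer skip irrelevant tables. The sketch below is hypothetical (the `SalesYYYY` database, `Orders` table, and column names are made up for illustration), showing one yearly member table and the view over two years:

```sql
-- Hypothetical sketch: yearly Orders tables in separate databases,
-- each with a CHECK constraint on the partitioning column so the
-- optimizer can eliminate the tables a query cannot match.
CREATE TABLE Sales2012.dbo.Orders (
    OrderID   INT      NOT NULL,
    OrderDate DATETIME NOT NULL
        CONSTRAINT CK_Orders_2012
        CHECK (OrderDate >= '20120101' AND OrderDate < '20130101'),
    CONSTRAINT PK_Orders_2012 PRIMARY KEY (OrderID, OrderDate)
);

-- The partitioned view unions the yearly tables.
CREATE VIEW dbo.OrdersAll
AS
SELECT OrderID, OrderDate FROM Sales2011.dbo.Orders
UNION ALL
SELECT OrderID, OrderDate FROM Sales2012.dbo.Orders;

-- A query that filters on OrderDate should only touch one table:
SELECT *
FROM dbo.OrdersAll
WHERE OrderDate >= '20120301' AND OrderDate < '20120401';
```

Note that for the view to be updatable, the partitioning column must be part of each table's primary key and the `CHECK` constraints must not overlap.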

We already rebuild the indexes every night, so that is ok.

Any ideas or suggestions?

Partitioned views look like the main thing that could help, but I'm unsure of the real gains.

Thanks


Solution

Have you tried to use indexes? Have you profiled the workload on your database and looked for costly SELECTs? Once you have the expensive queries, run them with the actual execution plan included and look for missing indexes and key lookups. The Database Engine Tuning Advisor (http://technet.microsoft.com/en-us/library/ms173494(v=sql.105).aspx) can recommend indexes for you. For index maintenance, rebuilding all indexes every night is definitely not the way to go; Ola Hallengren's maintenance solution (http://ola.hallengren.com/) only rebuilds or reorganises the indexes that are actually fragmented. As a last step you can think about upgrading the hardware to get better performance. And just to give you a sense of scale: I work with blazing-fast tables of 50 million rows and database sizes exceeding 1 TB.
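As a starting point for the "look for missing indexes" step, SQL Server's missing-index DMVs record suggestions gathered since the last restart. A rough sketch (treat the output as hints to investigate, not indexes to create blindly):

```sql
-- Sketch: surface the highest-impact missing-index suggestions
-- recorded by the optimizer since the last server restart.
SELECT TOP 10
    migs.avg_user_impact,        -- estimated % improvement
    migs.user_seeks,             -- how often the index would have helped
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```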

OTHER TIPS

Firstly, you could archive off data that is no longer required, if this is an option.

For example, I used to archive off data older than 3 years from certain tables every day, as the production system had no requirement to refer to data older than that. This keeps table sizes manageable for full index rebuilds and the like.
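A nightly archive job along those lines can be sketched as a batched move, so each transaction (and the log growth) stays small. The `Orders` / `OrdersArchive` table names and the `OrderDate` column are hypothetical:

```sql
-- Hypothetical sketch: move rows older than 3 years into an archive
-- table in 5,000-row batches to keep transactions and log usage small.
DECLARE @cutoff DATETIME = DATEADD(YEAR, -3, GETDATE());
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Delete a batch and capture the deleted rows into the archive.
    DELETE TOP (5000) o
    OUTPUT DELETED.* INTO dbo.OrdersArchive
    FROM dbo.Orders AS o
    WHERE o.OrderDate < @cutoff;

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;
```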

2 million rows doesn't sound like much unless each row holds a lot of data, though (the tables I'm talking about above have > 100 million rows).

Secondly, you could evaluate the queries you're using to see if there's any way they could be optimised.

Thirdly, re-evaluate which indexes you do and don't have. SQL Server Profiler and the Database Engine Tuning Advisor can help you build a list of recommended indexes to create (or drop!).
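For the "or drop" side, the index-usage DMV shows indexes that cost writes but are rarely read. A sketch (usage stats reset on each restart, so only judge after the server has been up through a representative workload):

```sql
-- Sketch: list non-clustered indexes that are updated far more often
-- than they are read -- candidates for review and possible removal.
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name                   AS index_name,
    ius.user_seeks,
    ius.user_scans,
    ius.user_lookups,
    ius.user_updates         -- maintenance cost on every write
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
    ON  ius.object_id  = i.object_id
    AND ius.index_id   = i.index_id
    AND ius.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.index_id > 1   -- skip heaps and clustered indexes
ORDER BY ius.user_updates DESC;
```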

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow