arsalan


The Background:

I have a SQL Server database with more than 10 million records.

I have to update/delete/insert records on a daily basis.

I am using the standard edition of SQL Server.

The tables are constantly having data inserted into them and the server has different jobs running all day long.

My Problem:

I cannot create indexes on my database, and the database is getting slower with each passing month and year.

Any/All suggestions are welcomed.




Re: Suggestions Required!

carlop


The only alternative is to spend money on hardware... an 8 GB dual Xeon with 12 disks in RAID 10 may help...

or maybe to partition your DB across several servers...

But both solutions are more 'expensive' and 'intrusive' than adding an index... may I ask why you cannot add an index?







Re: Suggestions Required!

arsalan

I already have a 16 GB dual Xeon.

The problem is that I am running the Standard edition of SQL Server, which does not support online index operations!






Re: Suggestions Required!

carlop

AFAIK, adding an index without online indexing will block users until the index build is complete... I understand that this may be a problem if the db must be online 24/7... but even eBay and Google plan some maintenance... It seems that you already have a good server; I don't think that building an index on a 10-million-row table will take more than a few seconds.


Anyway, you can create a copy of the DB on the same machine and try it on this second db to see how long it takes.
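For example, you could time the build on the restored copy; a minimal sketch is below (the database, table, and column names are placeholders for your own schema):

```sql
-- Hypothetical example: time an index build on a restored copy of the db.
-- Database, table, and column names are placeholders; adjust to your schema.
USE TestCopyDB;
GO
SET STATISTICS TIME ON;  -- report elapsed and CPU time for each statement
GO
CREATE NONCLUSTERED INDEX IX_MainTable_ImportDate
    ON dbo.MainTable (ImportDate)
    WITH (SORT_IN_TEMPDB = ON);  -- do the sort work in tempdb, not the user db
GO
SET STATISTICS TIME OFF;
```

The elapsed time reported on the copy gives a realistic upper bound for the outage the production build would cause.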




Re: Suggestions Required!

Vidhya Sagar

If the data is all in a single table, try partitioning the table; that may help you.

If you are interested in partitioning the table, refer to Link1 & Link2.

I don't know whether the table partitioning feature is available in the Standard edition.
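As a rough sketch of what table partitioning looks like (note: in SQL Server 2005/2008 this is an Enterprise edition feature; all names and date boundaries below are placeholders):

```sql
-- Hypothetical sketch of range partitioning by date; names are placeholders.
-- Partition function: split rows into yearly ranges at these boundaries.
CREATE PARTITION FUNCTION pfYearly (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01', '2009-01-01');
GO
-- Partition scheme: map every partition to PRIMARY for simplicity
-- (in practice you would spread partitions over filegroups).
CREATE PARTITION SCHEME psYearly
    AS PARTITION pfYearly ALL TO ([PRIMARY]);
GO
-- Create the table on the partition scheme, partitioned by the date column.
CREATE TABLE dbo.MainTable (
    Id         int IDENTITY(1,1) NOT NULL,
    ImportDate datetime NOT NULL,
    Payload    varchar(100) NULL
) ON psYearly (ImportDate);
```

With this layout, daily imports touch only the current partition, and old partitions can be switched out or rebuilt independently.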







Re: Suggestions Required!

arsalan

Here is what I am thinking about!

My Solution 1:

I create 2 databases db1 and db2.

Both db's are replicas of each other.

What I can do is take db1 down, index it, and shrink the log files while db2 is attached to my applications.

Then I backup db1.

Set db1 as the active db for all applications.

Restore this back up on db2.

Now I can do all the tasks such as inserts and updates on db2, while indexing is done on the other db before it goes live again.

I keep repeating this process after every 12 or 24 hours as my application requires.

(setting db1 active or db2 active through the code)

Now I guess there is more food for thought!

Can anyone please suggest an alternative method, or point out the pros and cons of this solution?





Re: Suggestions Required!

carlop

I cannot understand why you would use such a complicated process.

Once an index has been created it is always kept up to date; your only problem without online indexing is that, upon creation of the index, db users will be blocked for a while or, at least, you will need to keep them offline for that period...

So you have to keep your db offline for a few minutes, create the index, and restart it. That's all.
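A minimal sketch of that short offline window (the database, table, and index names are placeholders):

```sql
-- Hypothetical maintenance-window sketch; all names are placeholders.
USE master;
GO
-- Disconnect all users and roll back their open transactions.
ALTER DATABASE MyAppDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
USE MyAppDB;
GO
-- Build the missing index while no one else can connect.
CREATE NONCLUSTERED INDEX IX_MainTable_ImportDate
    ON dbo.MainTable (ImportDate);
GO
-- Reopen the database to all users.
USE master;
GO
ALTER DATABASE MyAppDB SET MULTI_USER;
```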

What you suggest is periodically shutting down and exchanging databases, and IMO that is much slower and more error-prone.







Re: Suggestions Required!

Arnie Rowland

Besides, data will be added to one database that is then not in the second database.

Synchronization will be a major headache.

Follow the previous suggestions: find a maintenance window and build the indexes (or re-index).
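For instance, once indexes exist, re-indexing in a maintenance window is a one- or two-statement job (the database, table, and index names below are placeholders):

```sql
-- Hypothetical maintenance-window re-index; names are placeholders.
USE MyAppDB;
GO
-- Rebuild a single fragmented index...
ALTER INDEX IX_MainTable_ImportDate ON dbo.MainTable REBUILD;
GO
-- ...or rebuild every index on the table in one statement.
ALTER INDEX ALL ON dbo.MainTable
    REBUILD WITH (SORT_IN_TEMPDB = ON);
```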






Re: Suggestions Required!

arsalan

It's because I have importers running against my database, and it takes about 8 hours to get the data inserted into the database's main table.

I get approx. 3-4 million records updated every day.

So, as far as I know, if about 1,000 or so rows in a table change, the whole index is changed.

Could you please suggest any link that shows how much time is required to build an index on a table, any benchmarking reports, etc.?

Or do you have any other suggestions?





Re: Suggestions Required!

Arnie Rowland

I must have something very wrong in my understanding of your problem.

If your tables are indexed, I just can't imagine that tables with millions of rows could possibly get their indexes 'messed up' by the alteration of a few thousand rows.

SQL Server does an excellent job of managing indexes.

BUT perhaps you need to rebuild the statistics...

Could it be that managing the statistics is the 'real' issue here?
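Refreshing statistics is much cheaper than rebuilding indexes and does not block users the same way; a quick sketch (the database and table names are placeholders):

```sql
-- Hypothetical example; database and table names are placeholders.
USE MyAppDB;
GO
-- Refresh statistics on one heavily-modified table with a full scan...
UPDATE STATISTICS dbo.MainTable WITH FULLSCAN;
GO
-- ...or refresh all out-of-date statistics across the whole database.
EXEC sp_updatestats;
```

If query plans improve after this, stale statistics (not the indexes themselves) were the bottleneck.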






Re: Suggestions Required!

Mohamad Tantawy

I was suffering from the same problem, and I applied the following after research, testing, and Microsoft consultants' recommendations:

  1. It is highly recommended to have RAID
  2. Move tempdb to a separate partition

USE master

GO

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\Tempdb\tempdb.mdf')

GO

ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\Tempdb\templog.ldf')

GO

  3. Partition tempdb over multiple files [number of files = number of logical CPUs]

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev1', FILENAME = N'T:\Tempdb\tempdev1.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev2', FILENAME = N'T:\Tempdb\tempdev2.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev3', FILENAME = N'T:\Tempdb\tempdev3.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev4', FILENAME = N'T:\Tempdb\tempdev4.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev5', FILENAME = N'T:\Tempdb\tempdev5.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev6', FILENAME = N'T:\Tempdb\tempdev6.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev7', FILENAME = N'T:\Tempdb\tempdev7.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev8', FILENAME = N'T:\Tempdb\tempdev8.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev9', FILENAME = N'T:\Tempdb\tempdev9.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev10', FILENAME = N'T:\Tempdb\tempdev10.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev11', FILENAME = N'T:\Tempdb\tempdev11.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev12', FILENAME = N'T:\Tempdb\tempdev12.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev13', FILENAME = N'T:\Tempdb\tempdev13.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev14', FILENAME = N'T:\Tempdb\tempdev14.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev15', FILENAME = N'T:\Tempdb\tempdev15.ndf' , SIZE = 2048000KB , FILEGROWTH = 10% ) TO FILEGROUP [PRIMARY]

GO

sp_helpdb tempdb

USE tempdb

GO

DBCC SHRINKFILE (N'tempdev' , EmptyFile)

GO

  4. Configure the Max Degree of Parallelism option

sp_configure 'show advanced options', 1

RECONFIGURE WITH OVERRIDE

sp_configure 'max degree of parallelism', 1

RECONFIGURE WITH OVERRIDE

  5. Configure the Max Server Memory option [leave some memory for the OS]

sp_configure 'max server memory (MB)',15360

reconfigure with override

  6. Partition the data file over multiple files [number of files = 0.25 to 1 × number of physical CPUs]

USE [master]

go

sp_helpdb [Test-DB]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data1', FILENAME = N'D:\data\Test_Data1.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data2', FILENAME = N'D:\data\Test_Data2.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data3', FILENAME = N'D:\data\Test_Data3.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data4', FILENAME = N'D:\data\Test_Data4.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data5', FILENAME = N'D:\data\Test_Data5.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data6', FILENAME = N'D:\data\Test_Data6.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data7', FILENAME = N'D:\data\Test_Data7.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

ALTER DATABASE [Test-DB] ADD FILE ( NAME = N'Test_Data8', FILENAME = N'D:\data\Test_Data8.ndf' , SIZE = 2048000KB , FILEGROWTH = 1024KB ) TO FILEGROUP [PRIMARY]

GO

USE [Test-DB]

GO

DBCC SHRINKFILE (N'Test_Data' , EmptyFile)

  7. Create a job that rebuilds the most-used indexes on a daily basis.
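A sketch of what such a job step might run, rebuilding only the indexes that have become noticeably fragmented (the 30% threshold is a common rule of thumb, and the database name reuses the [Test-DB] placeholder from step 6):

```sql
-- Hypothetical daily job step; database name and 30% threshold are assumptions.
USE [Test-DB];
GO
DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Collect ALTER INDEX ... REBUILD statements for fragmented indexes,
-- using the physical-stats DMV; heaps (i.name IS NULL) are skipped.
SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
    + N' REBUILD;' + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;

-- Execute the generated rebuild batch.
EXEC sp_executesql @sql;
```

Scheduled through SQL Server Agent during the quietest hour, this keeps index maintenance bounded to the indexes that actually need it.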