Hi all,

I include the 'Reorganize pages with the original amount of free space'
option in my database maintenance plan. On some of my databases this makes
the transaction log balloon. What are the mechanics of this option?

The database recovery models are set to full. Is it simply that all the
indexes are being rebuilt, and under the full recovery model everything is
being logged?

> The database recovery models are set to full. Is it simply that all the
> indexes are being rebuilt, and under the full recovery model everything is
> being logged?
Correct. The maintenance plan executes DBCC DBREINDEX for all tables. This command first creates a
new index, then drops the old one, and all of it is logged. Two suggestions:

Use the sample code in Books Online, under DBCC SHOWCONTIG, instead. That approach reindexes only
the indexes that are fragmented in the first place.

But first read http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx so you
can determine whether to reindex at all, and which method to use (DBREINDEX or INDEXDEFRAG).
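The Books Online approach boils down to capturing DBCC SHOWCONTIG output into a table and acting only on fragmented indexes. A minimal sketch along those lines (SQL Server 2000 syntax; the temp-table layout follows the Books Online example and must match the TABLERESULTS column set on your build, and the 10%/30% thresholds are illustrative assumptions, not fixed rules):

```sql
-- Sketch: reindex only fragmented indexes, in the spirit of the
-- Books Online example under DBCC SHOWCONTIG. Run in the target database.
SET NOCOUNT ON

CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname, IndexId int,
    Lvl int, CountPages int, CountRows int, MinRecSize int, MaxRecSize int,
    AvgRecSize int, ForRecCount int, Extents int, ExtentSwitches int,
    AvgFreeBytes int, AvgPageDensity int, ScanDensity decimal,
    BestCount int, ActualCount int, LogicalFrag decimal, ExtentFrag decimal)

-- Capture fragmentation for all indexes on all tables in the current database.
INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

DECLARE @tbl sysname, @idx sysname, @frag decimal
DECLARE c CURSOR FOR
    SELECT ObjectName, IndexName, LogicalFrag
    FROM #frag
    WHERE LogicalFrag >= 10                -- ignore lightly fragmented indexes
      AND IndexId BETWEEN 1 AND 250        -- skip heaps (0) and text pages (255)

OPEN c
FETCH NEXT FROM c INTO @tbl, @idx, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @frag >= 30
        DBCC DBREINDEX (@tbl, @idx)        -- heavy fragmentation: full rebuild, fully logged
    ELSE
        DBCC INDEXDEFRAG (0, @tbl, @idx)   -- light fragmentation: online defrag, small transactions
    FETCH NEXT FROM c INTO @tbl, @idx, @frag
END
CLOSE c
DEALLOCATE c
DROP TABLE #frag
```

The design point: INDEXDEFRAG works in many small transactions, so it is far gentler on the log under the full recovery model, while DBREINDEX rebuilds the whole index as one fully logged operation, which is exactly what makes the log balloon.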
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"BenUK" <BenUK@.discussions.microsoft.com> wrote in message
news:4E359BFE-8A82-445C-BD22-8136F6F9C506@.microsoft.com...
> Hi all
> I include the 'Reorganize pages with the original amount of free space'
> option on my database maintenance plan. On some of my databases this makes
> the transaction log balloon. What's the mechanics of this option?
> The database recovery models are set to full, is it simply all the index's
> are being rebuilt and under the full recovery model everything is being
> logged?|||Cool thanks for your help Tibor
"Tibor Karaszi" wrote:
> > The database recovery models are set to full, is it simply all the index's
> > are being rebuilt and under the full recovery model everything is being
> > logged?
> Correct. Maint Plan executes DBCC DBREINDEX for all tables. This command will first create a new
> index, then drop the old one. All is logged. Two suggestions:
> Use the code in Books Online, under DBCC SHOWCONTIG instead. This will only reindex the indexes that
> are fragmented in the first place.
> But first read http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx so you
> can determine whether to do reindex or not and using which method (DBREINDEX or INDEXDEFRAG).
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "BenUK" <BenUK@.discussions.microsoft.com> wrote in message
> news:4E359BFE-8A82-445C-BD22-8136F6F9C506@.microsoft.com...
> > Hi all
> >
> > I include the 'Reorganize pages with the original amount of free space'
> > option on my database maintenance plan. On some of my databases this makes
> > the transaction log balloon. What's the mechanics of this option?
> >
> > The database recovery models are set to full, is it simply all the index's
> > are being rebuilt and under the full recovery model everything is being
> > logged?
Thursday, March 22, 2012
DBCC Reindex
Friday, February 24, 2012
DBCC CLEANTABLE (SQL Server 2000) question
Hi,

I've got a 40-million-row table and I've normalised one of the columns out
to a separate table. I have then dropped the original varchar(100) column.

I want to reclaim the space as efficiently as possible.

I set the recovery model to simple, then ran DBCC CLEANTABLE with the
batch option set to 10000 rows.

Here I ran into a problem: it seems to block itself... has anyone ever
encountered that before?

I then ran it without the batch parameter and it is still running after
2 hours, and the transaction log is still creeping up. Several GB so far...

Any advice very welcome.

Cheers,
James

JimLad wrote:
> Hi,
>
> I've got a 40-million-row table and I've normalised one of the columns out
> to a separate table. I have then dropped the original varchar(100) column.
>
> I want to reclaim the space as efficiently as possible.
>
> I set the recovery model to simple, then ran DBCC CLEANTABLE with the
> batch option set to 10000 rows.
>
> Here I ran into a problem: it seems to block itself... has anyone ever
> encountered that before?
>
> I then ran it without the batch parameter and it is still running after
> 2 hours, and the transaction log is still creeping up. Several GB so far...
>
> Cheers,
> James
Instead of CLEANTABLE you could rebuild the indexes with DBCC DBREINDEX.
This has the same effect, but is usually more resource-intensive than
CLEANTABLE.

hth
Gregor Stefka

|||On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
> Instead of CLEANTABLE you could rebuild the indexes with DBCC REINDEX.
> This has the same effect but is usually more resource-intensive than
> cleantable.
> hth
> Gregor Stefka
Hi,

When I say it is blocking itself, I mean I'm getting PAGEIOLATCH_EX
waits when I set a value for the batch size. Is this normal? The wait
resource keeps changing, but I'm wondering why this is happening. Can
someone explain this for me?

Without the batch size being set, it took 3.25 hours to run and
created 10GB of log. That seems rather long to me. Am I doing something
wrong?

Wouldn't DBCC DBREINDEX result in even worse performance?

Cheers,
James

|||On May 1, 5:41 pm, JimLad <jamesdbi...@.yahoo.co.uk> wrote:
> Without the batch size being set, it took 3.25 hours to run and
> created 10GB of log. Seems rather overlong to me. Am I doing something
> wrong?
Hi,

In answer to my own questions:

PAGEIOLATCH_EX waits don't matter here.

The reason for the slow performance is the amount of data written to the
transaction log: 12GB was written with filegrowth set to 1MB! Most of the
3 hours was spent growing the log.

So, remembering that the original number of rows is 40 million: if we set
the batch size on DBCC CLEANTABLE to 4,000,000, then the maximum size of
the transaction log will be ~1.2GB. This obviously assumes the SIMPLE
recovery model.

So now, rather than growing the file in 1MB increments up to 1.2GB, which
is terribly inefficient and time-consuming, we set the log size manually
or increase the autogrow value.
-- *********************************************************************
-- This script temporarily changes the recovery model to SIMPLE.
-- A FULL backup must be taken PRIOR TO AND AFTER executing this script.
-- This script is not transaction safe. On error, RESTORE FROM BACKUP.
-- *********************************************************************
SELECT DATABASEPROPERTYEX('db', 'Recovery')
    AS [Initial Recovery Model (script leaves db in FULL recovery model)]
GO
-- Allow the log to be truncated for this large volume of changes.
ALTER DATABASE [db] SET RECOVERY SIMPLE
GO
-- Drop the varchar or text column that you want to reclaim the space for.
ALTER TABLE [table]
DROP COLUMN column_name
GO
-- Set the log size slightly bigger than what is required for the data change,
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', SIZE = 2000)
-- or set a larger filegrowth size (the first is better).
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 100)
GO
-- Batches of 4 million rows: 10% of the table, so the transaction log will
-- only reach ~1.2GB instead of 12GB (in this case).
DBCC CLEANTABLE ('db', 'table', 4000000)
GO
-- Reset filegrowth to its original value.
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 1)
GO
-- Put the db back into FULL recovery mode.
ALTER DATABASE [db] SET RECOVERY FULL
GO
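After the cleanup, it's worth confirming that the space actually came back. A quick check along these lines ('db' and 'table' are placeholders, as in the script above):

```sql
-- Correct any stale space accounting left behind by the bulk changes,
-- then report the table's actual footprint.
DBCC UPDATEUSAGE ('db', 'table')
GO
EXEC sp_spaceused 'table'
GO
-- Optionally, return the freed pages to the operating system as well.
-- Use with care: shrinking can reintroduce fragmentation.
-- DBCC SHRINKDATABASE ('db', TRUNCATEONLY)
```

Note that DBCC CLEANTABLE frees space inside the data file; the file itself stays the same size unless you shrink it.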
I've got a 40mill row table and I've normalised one of the columns out
to a separate table. I have then dropped the original varchar(100)
column.
I want to reclaim the space as efficiently as possible.
I set recovery model to simple. Then ran DBCC CLEANTABLE with the
batch option set to 10000 rows.
Here I ran into a problem: it seems to block itself... anyone ever
encountered that before?
I then ran it without the batch parameter and it is still running
after 2 hours and the transaction log is still creeping up. Several GB
so far...
Any advice very welcome...
Cheers,
JamesJimLad schrieb:
> Hi,
> I've got a 40mill row table and I've normalised one of the columns out
> to a separate table. I have then dropped the original varchar(100)
> column.
> I want to reclaim the space as efficiently as possible.
> I set recovery model to simple. Then ran DBCC CLEANTABLE with the
> batch option set to 10000 rows.
> Here I ran into a problem: it seems to block itself... anyone ever
> encountered that before?
> I then ran it without the batch parameter and it is still running
> after 2 hours and the transaction log is still creeping up. Several GB
> so far...
> Any advice very welcome...
> Cheers,
> James
>
Instead of CLEANTABLE you could rebuild the indexes with DBCC REINDEX.
This has the same effect but is usually more resource-intensive than
cleantable.
hth
Gregor Stefka|||On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
> JimLad schrieb:
>
> > Hi,
> > I've got a 40mill row table and I've normalised one of the columns out
> > to a separate table. I have then dropped the original varchar(100)
> > column.
> > I want to reclaim the space as efficiently as possible.
> > I set recovery model to simple. Then ran DBCC CLEANTABLE with the
> > batch option set to 10000 rows.
> > Here I ran into a problem: it seems to block itself... anyone ever
> > encountered that before?
> > I then ran it without the batch parameter and it is still running
> > after 2 hours and the transaction log is still creeping up. Several GB
> > so far...
> > Any advice very welcome...
> > Cheers,
> > James
> Instead of CLEANTABLE you could rebuild the indexes with DBCC REINDEX.
> This has the same effect but is usually more resource-intensive than
> cleantable.
> hth
> Gregor Stefka- Hide quoted text -
> - Show quoted text -
Hi,
When I say it is blocking itself I mean I'm getting PAGEIOLATCH_EX
waits when I set a value for the batch size. Is this normal? The wait
resource keeps changing, but I'm wondering why this is happening? Can
someone explain this for me?
Without the batch size being set, it took 3.25 hours to run and
created 10GB of log. Seems rather overlong to me. Am I doing something
wrong?
Wouldn't DBCC REINDEX result in even worse performance?
Cheers,
James|||On May 1, 5:41 pm, JimLad <jamesdbi...@.yahoo.co.uk> wrote:
> On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
>
>
> > JimLad schrieb:
> > > Hi,
> > > I've got a 40mill row table and I've normalised one of the columns out
> > > to a separate table. I have then dropped the original varchar(100)
> > > column.
> > > I want to reclaim the space as efficiently as possible.
> > > I set recovery model to simple. Then ran DBCC CLEANTABLE with the
> > > batch option set to 10000 rows.
> > > Here I ran into a problem: it seems to block itself... anyone ever
> > > encountered that before?
> > > I then ran it without the batch parameter and it is still running
> > > after 2 hours and the transaction log is still creeping up. Several GB
> > > so far...
> > > Any advice very welcome...
> > > Cheers,
> > > James
> > Instead of CLEANTABLE you could rebuild the indexes with DBCC REINDEX.
> > This has the same effect but is usually more resource-intensive than
> > cleantable.
> > hth
> > Gregor Stefka- Hide quoted text -
> > - Show quoted text -
> Hi,
> When I say it is blocking itself I mean I'm getting PAGEIOLATCH_EX
> waits when I set a value for the batch size. Is this normal? The wait
> resource keeps changing, but I'm wondering why this is happening? Can
> someone explain this for me?
> Without the batch size being set, it took 3.25 hours to run and
> created 10GB of log. Seems rather overlong to me. Am I doing something
> wrong?
> Wouldn't DBCC REINDEX result in even worse performance?
> Cheers,
> James- Hide quoted text -
> - Show quoted text -
Hi,
In answer to my own questions:
PAGEIOLATCH_EX waits don't matter.
Reason for slow performance is due to amount of data written to
transaction log. 12GB is written with filegrowth set to 1MB! Most of
the 3hrs is spent growing the log!
So remembering that the original number of rows is 40mill, then if we
set the batch size on DBCC CLEANTABLE to 4,000,000 then the max size
of the transaction log will be ~1.2GB. This obviously assumes SIMPLE
recovery model.
So now rather than filegrowth in 1MB increments up to 1.2GB, which is
terribly inefficient and time consuming, we set the log size manually
or increase the autogrow value.
*************************************************************************************************
--This script temporarily changes recovery model to SIMPLE.
--A FULL backup must be taken PRIOR TO AND AFTER executing this
script...
--This script is not transaction safe. On error, RESTORE FROM
BACKUP...
--
*************************************************************************************************
SELECT DATABASEPROPERTYEX('db','Recovery') AS [Initial Recovery Model
(script leaves db in FULL recovery model)]
GO
-- Allow log to be truncated for these large amount of changes.
ALTER DATABASE [db] SET RECOVERY SIMPLE
GO
-- Drop the varchar or text column that you want to reclaim the space
for.
ALTER TABLE [table]
DROP COLUMN column_name
GO
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', SIZE = 2000) -- Set
log size to slightly bigger than what is required for the data change.
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 100)
-- or set a larger filegrowth size (first is better).
GO
DBCC CLEANTABLE ('db', 'table', 4000000) -- batches of 4mill rows.
this is 10% of the table so transaction log will only reach 1.2GB
instead of 12GB (in this case).
GO
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 1) --
reset filegrowth to original value.
GO
-- Put db back into FULL recovery mode.
ALTER DATABASE [db] SET RECOVERY FULL
GO
DBCC CLEANTABLE (SQL Server 2000) question
Hi,
I've got a 40mill row table and I've normalised one of the columns out
to a separate table. I have then dropped the original varchar(100)
column.
I want to reclaim the space as efficiently as possible.
I set recovery model to simple. Then ran DBCC CLEANTABLE with the
batch option set to 10000 rows.
Here I ran into a problem: it seems to block itself... anyone ever
encountered that before?
I then ran it without the batch parameter and it is still running
after 2 hours and the transaction log is still creeping up. Several GB
so far...
Any advice very welcome...
Cheers,
JamesJimLad schrieb:
> Hi,
> I've got a 40mill row table and I've normalised one of the columns out
> to a separate table. I have then dropped the original varchar(100)
> column.
> I want to reclaim the space as efficiently as possible.
> I set recovery model to simple. Then ran DBCC CLEANTABLE with the
> batch option set to 10000 rows.
> Here I ran into a problem: it seems to block itself... anyone ever
> encountered that before?
> I then ran it without the batch parameter and it is still running
> after 2 hours and the transaction log is still creeping up. Several GB
> so far...
> Any advice very welcome...
> Cheers,
> James
>
Instead of CLEANTABLE you could rebuild the indexes with DBCC REINDEX.
This has the same effect but is usually more resource-intensive than
cleantable.
hth
Gregor Stefka|||On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
> JimLad schrieb:
>
>
>
>
>
>
>
>
>
>
> Instead of CLEANTABLE you could rebuild the indexes with DBCC REINDEX.
> This has the same effect but is usually more resource-intensive than
> cleantable.
> hth
> Gregor Stefka- Hide quoted text -
> - Show quoted text -
Hi,
When I say it is blocking itself I mean I'm getting PAGEIOLATCH_EX
waits when I set a value for the batch size. Is this normal? The wait
resource keeps changing, but I'm wondering why this is happening? Can
someone explain this for me?
Without the batch size being set, it took 3.25 hours to run and
created 10GB of log. Seems rather overlong to me. Am I doing something
wrong?
Wouldn't DBCC REINDEX result in even worse performance?
Cheers,
James|||On May 1, 5:41 pm, JimLad <jamesdbi...@.yahoo.co.uk> wrote:
> On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> Hi,
> When I say it is blocking itself I mean I'm getting PAGEIOLATCH_EX
> waits when I set a value for the batch size. Is this normal? The wait
> resource keeps changing, but I'm wondering why this is happening? Can
> someone explain this for me?
> Without the batch size being set, it took 3.25 hours to run and
> created 10GB of log. Seems rather overlong to me. Am I doing something
> wrong?
> Wouldn't DBCC REINDEX result in even worse performance?
> Cheers,
> James- Hide quoted text -
> - Show quoted text -
Hi,
In answer to my own questions:
PAGEIOLATCH_EX waits don't matter.
Reason for slow performance is due to amount of data written to
transaction log. 12GB is written with filegrowth set to 1MB! Most of
the 3hrs is spent growing the log!
So remembering that the original number of rows is 40mill, then if we
set the batch size on DBCC CLEANTABLE to 4,000,000 then the max size
of the transaction log will be ~1.2GB. This obviously assumes SIMPLE
recovery model.
So now rather than filegrowth in 1MB increments up to 1.2GB, which is
terribly inefficient and time consuming, we set the log size manually
or increase the autogrow value.
****************************************
************************************
*********************
--This script temporarily changes recovery model to SIMPLE.
--A FULL backup must be taken PRIOR TO AND AFTER executing this
script...
--This script is not transaction safe. On error, RESTORE FROM
BACKUP...
--
****************************************
************************************
*********************
SELECT DATABASEPROPERTYEX('db','Recovery') AS [Initial Recovery Model
(script leaves db in FULL recovery model)]
GO
-- Allow log to be truncated for these large amount of changes.
ALTER DATABASE [db] SET RECOVERY SIMPLE
GO
-- Drop the varchar or text column that you want to reclaim the space
for.
ALTER TABLE [table]
DROP COLUMN column_name
GO
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', SIZE = 2000) -- Set
log size to slightly bigger than what is required for the data change.
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 100)
-- or set a larger filegrowth size (first is better).
GO
DBCC CLEANTABLE ('db', 'table', 4000000) -- batches of 4mill rows.
this is 10% of the table so transaction log will only reach 1.2GB
instead of 12GB (in this case).
GO
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 1) --
reset filegrowth to original value.
GO
-- Put db back into FULL recovery mode.
ALTER DATABASE [db] SET RECOVERY FULL
GO
I've got a 40mill row table and I've normalised one of the columns out
to a separate table. I have then dropped the original varchar(100)
column.
I want to reclaim the space as efficiently as possible.
I set recovery model to simple. Then ran DBCC CLEANTABLE with the
batch option set to 10000 rows.
Here I ran into a problem: it seems to block itself... anyone ever
encountered that before?
I then ran it without the batch parameter and it is still running
after 2 hours and the transaction log is still creeping up. Several GB
so far...
Any advice very welcome...
Cheers,
JamesJimLad schrieb:
> Hi,
> I've got a 40mill row table and I've normalised one of the columns out
> to a separate table. I have then dropped the original varchar(100)
> column.
> I want to reclaim the space as efficiently as possible.
> I set recovery model to simple. Then ran DBCC CLEANTABLE with the
> batch option set to 10000 rows.
> Here I ran into a problem: it seems to block itself... anyone ever
> encountered that before?
> I then ran it without the batch parameter and it is still running
> after 2 hours and the transaction log is still creeping up. Several GB
> so far...
> Any advice very welcome...
> Cheers,
> James
>
Instead of CLEANTABLE you could rebuild the indexes with DBCC DBREINDEX.
This has the same effect but is usually more resource-intensive than
CLEANTABLE.
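For anyone trying this suggestion, a minimal sketch of the call follows; the database and table names here are placeholders, not names from this thread:

```sql
-- Rebuild every index on one table. The empty string '' means all indexes
-- on the table, and a fillfactor of 0 keeps each index's original fill factor.
USE [db]
GO
DBCC DBREINDEX ('dbo.table', '', 0)
GO
```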
hth
Gregor Stefka|||On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
> JimLad schrieb:
> Instead of CLEANTABLE you could rebuild the indexes with DBCC DBREINDEX.
> This has the same effect but is usually more resource-intensive than
> CLEANTABLE.
> hth
> Gregor Stefka
Hi,
When I say it is blocking itself I mean I'm getting PAGEIOLATCH_EX
waits when I set a value for the batch size. Is this normal? The wait
resource keeps changing, but I'm wondering why this is happening? Can
someone explain this for me?
Without the batch size being set, it took 3.25 hours to run and
created 10GB of log. Seems rather overlong to me. Am I doing something
wrong?
Wouldn't DBCC DBREINDEX result in even worse performance?
Cheers,
James|||On May 1, 5:41 pm, JimLad <jamesdbi...@.yahoo.co.uk> wrote:
> On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
> Hi,
> When I say it is blocking itself I mean I'm getting PAGEIOLATCH_EX
> waits when I set a value for the batch size. Is this normal? The wait
> resource keeps changing, but I'm wondering why this is happening? Can
> someone explain this for me?
> Without the batch size being set, it took 3.25 hours to run and
> created 10GB of log. Seems rather overlong to me. Am I doing something
> wrong?
> Wouldn't DBCC DBREINDEX result in even worse performance?
> Cheers,
> James
Hi,
In answer to my own questions:
PAGEIOLATCH_EX waits don't matter.
The reason for the slow performance is the amount of data written to the
transaction log: 12GB was written with filegrowth set to 1MB! Most of
the 3 hours was spent growing the log!
So, remembering that the original number of rows is 40 million: if we
set the batch size on DBCC CLEANTABLE to 4,000,000, the maximum size of
the transaction log will be ~1.2GB. This obviously assumes the SIMPLE
recovery model.
So now, rather than growing the file in 1MB increments up to 1.2GB, which
is terribly inefficient and time-consuming, we set the log size manually
or increase the autogrow value.
--*********************************************************************************
--This script temporarily changes recovery model to SIMPLE.
--A FULL backup must be taken PRIOR TO AND AFTER executing this script...
--This script is not transaction safe. On error, RESTORE FROM BACKUP...
--*********************************************************************************
SELECT DATABASEPROPERTYEX('db','Recovery') AS [Initial Recovery Model (script leaves db in FULL recovery model)]
GO
-- Allow the log to be truncated during this large volume of changes.
ALTER DATABASE [db] SET RECOVERY SIMPLE
GO
-- Drop the varchar or text column whose space you want to reclaim.
ALTER TABLE [table]
DROP COLUMN column_name
GO
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', SIZE = 2000) -- Set log size slightly bigger than what is required for the data change.
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 100) -- Or set a larger filegrowth increment (the first option is better).
GO
DBCC CLEANTABLE ('db', 'table', 4000000) -- Batches of 4 million rows: 10% of the table, so the transaction log only reaches ~1.2GB instead of 12GB (in this case).
GO
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 1) -- Reset filegrowth to its original value.
GO
-- Put the db back into FULL recovery model.
ALTER DATABASE [db] SET RECOVERY FULL
GO
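As an optional follow-up while the batches run (not part of the script above), DBCC SQLPERF reports current log size and percent used, which makes it easy to confirm the log really stays near the expected ~1.2GB:

```sql
-- Report log file size and space used for every database on the instance.
DBCC SQLPERF (LOGSPACE)
```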
Sunday, February 19, 2012
DBCC CHECKDB REPAIR error 5244
Hi Luc,
Could you please let me know which newsgroup you posted the same message
to? Is it a managed newsgroup? If your original post was not made in a
managed newsgroup, it cannot be monitored by the Microsoft Newsgroup
Support Team.
As for your issue, I understand that you were trying to recover your
database from the latest database backup, and when you ran DBCC CHECKDB
with the REPAIR_FAST option, you encountered the error.
If I have misunderstood, please let me know.
After you restore the latest version of your database, I recommend that you
first try detaching your database and then attaching it again from the
single data file, for example:
exec sp_detach_db 'database name'
Go
exec sp_attach_single_file_db @dbname='database name', @physname=N'your mdf
file path'
Go
After that, run DBCC CHECKDB with REPAIR_FAST to see if it can succeed.
Please note that REPAIR_FAST does not actually perform any repair actions.
If you want to repair the errors reported by DBCC CHECKDB, please specify
REPAIR_ALLOW_DATA_LOSS or REPAIR_REBUILD (minor repairs).
If the same error occurs, I recommend that you try running DBCC CHECKTABLE
on each of your tables to see which tables cannot be repaired. You may
check whether you can replace such a table by creating a new one. As a more
convenient alternative, you may consider exporting all of your data tables to
another database via the Import/Export Data wizard in SQL Server Management
Studio and then running DBCC CHECKDB there to see if any errors remain.
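The per-table check suggested here can be scripted; a sketch using the undocumented but widely used sp_MSforeachtable procedure (this assumes it is available on your instance):

```sql
-- Run DBCC CHECKTABLE against every user table; '?' is replaced
-- by each table's name as the procedure iterates.
EXEC sp_MSforeachtable 'DBCC CHECKTABLE (''?'')'
```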
Hope this helps. If you have any other questions or concerns, please feel
free to let me know.
Best regards,
Charles Wang
Microsoft Online Community Support
===========================================================
Delighting our customers is our #1 priority. We welcome your
comments and suggestions about how we can improve the
support we provide to you. Please feel free to let my manager
know what you think of the level of service provided. You can
send feedback directly to my manager at: msdnmg@.microsoft.com.
===========================================================
Get notification to my posts through email? Please refer to
http://msdn.microsoft.com/subscriptions/managednewsgroups/default.aspx#notifications.
Note: The MSDN Managed Newsgroup support offering is for
non-urgent issues where an initial response from the community
or a Microsoft Support Engineer within 1 business day is acceptable.
Please note that each follow up response may take approximately
2 business days as the support professional working with you may
need further investigation to reach the most efficient resolution.
The offering is not appropriate for situations
that require urgent, real-time or phone-based interactions or complex
project analysis and dump analysis issues. Issues of this nature are best
handled working with a dedicated Microsoft Support Engineer by
contacting Microsoft Customer Support Services (CSS) at
http://msdn.microsoft.com/subscriptions/support/default.aspx.
===========================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
===========================================================
Hi Luc,
Could you please post the DBCC CHECKDB error here for further research? Or
you may mail it to me (changliw_at_microsoft_dot_com).
Also I would like to know if you have tried my suggestion of
importing/exporting your data tables to another database? Does it help?
Note that data corruption is never easy to deal with, and some serious
corruptions are impossible to recover from. We will try our best to give
you some suggestions; however, if nothing helps in the end, you may
consider contacting a third-party data recovery company to repair your
data.
If you have any other questions or concerns, please feel free to let me
know. Have a nice day!
Best regards,
Charles Wang
Microsoft Online Community Support
|||Hi Luc,
Thanks for your detailed response.
I notice that almost all the errors concern indexes on the two tables
'EPISLog' and 'PhotoPresence'. Also, they are consistency errors, not
allocation errors, so I recommend that you run "DBCC CHECKDB('GT',REPAIR_REBUILD)"
to repair the indexes and see if that helps. Please set your database to single
user mode with the following statement before you run it:
alter database <database name>
set single_user
Go
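Putting the two steps together (a sketch: the database name 'GT' is taken from the post, and the final step, returning to multi-user mode, is an addition not mentioned above):

```sql
-- Force single-user mode, rolling back any open connections immediately.
ALTER DATABASE [GT] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
-- Repair the reported index consistency errors without data loss.
DBCC CHECKDB ('GT', REPAIR_REBUILD)
GO
-- Return the database to normal multi-user access afterwards.
ALTER DATABASE [GT] SET MULTI_USER
GO
```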
Please feel free to let me know if you have any other questions or concerns.
Best regards,
Charles Wang
Microsoft Online Community Support
|||Hi Luc,
I am interested in this issue. Would you mind letting me know the result of
the suggestions? If you need further assistance, feel free to let me know.
I will be more than happy to be of assistance.
Best regards,
Charles Wang
Microsoft Online Community Support
|||Hi Luc,
Appreciate your letting me know the situation of this issue.
If you have any other questions or concerns, please feel free to let me
know.
Have a nice day!
Best regards,
Charles Wang
Microsoft Online Community Support