Hi,
I've got a 40 million row table and I've normalised one of the
columns out to a separate table. I have then dropped the original
varchar(100) column.
I want to reclaim the space as efficiently as possible.
I set the recovery model to SIMPLE, then ran DBCC CLEANTABLE with the
batch size set to 10,000 rows.
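Something along these lines (the db and table names here are
placeholders, not the real ones):

-- Reclaim space from the dropped variable-length column,
-- committing after every 10,000 rows processed.
DBCC CLEANTABLE ('MyDb', 'dbo.BigTable', 10000)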
Here I ran into a problem: it seems to block itself... anyone ever
encountered that before?
I then ran it without the batch parameter and it is still running
after 2 hours and the transaction log is still creeping up. Several GB
so far...
Any advice very welcome...
Cheers,
James

JimLad wrote:
> I want to reclaim the space as efficiently as possible.
> I set the recovery model to SIMPLE, then ran DBCC CLEANTABLE with
> the batch size set to 10,000 rows.
Instead of CLEANTABLE you could rebuild the indexes with DBCC
DBREINDEX. This has the same effect but is usually more
resource-intensive than CLEANTABLE.
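For example, something like this rebuilds every index on the table
(the table name is a placeholder; the empty string means all indexes,
and fillfactor 0 keeps the existing fill factor):

-- Rebuild all indexes on the table with their original fill factor.
DBCC DBREINDEX ('dbo.BigTable', '', 0)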
hth
Gregor Stefka

On May 1, 2:32 pm, Gregor Stefka <ste...@.zuehlke-bieker.de> wrote:
> Instead of CLEANTABLE you could rebuild the indexes with DBCC
> DBREINDEX. This has the same effect but is usually more
> resource-intensive than CLEANTABLE.
Hi,
When I say it is blocking itself, I mean I'm getting PAGEIOLATCH_EX
waits when I set a value for the batch size. Is this normal? The wait
resource keeps changing, so I'm wondering why this is happening. Can
someone explain it for me?
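For reference, this is roughly how I'm watching the waits (SQL Server
2005 DMVs; the session id below is a placeholder for whichever SPID
is running the DBCC):

-- Show what the DBCC session is currently waiting on.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id = 55  -- placeholder SPID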
Without the batch size being set, it took 3.25 hours to run and
created 10GB of log. Seems rather overlong to me. Am I doing something
wrong?
Wouldn't DBCC DBREINDEX result in even worse performance?
Cheers,
James

On May 1, 5:41 pm, JimLad <jamesdbi...@.yahoo.co.uk> wrote:
> When I say it is blocking itself, I mean I'm getting PAGEIOLATCH_EX
> waits when I set a value for the batch size. Is this normal?
> Without the batch size being set, it took 3.25 hours to run and
> created 10GB of log.
Hi,
In answer to my own questions:
The PAGEIOLATCH_EX waits don't matter: they are just waits on
physical page I/O, not real blocking.
The reason for the slow performance is the amount of data written to
the transaction log: 12GB was written with filegrowth set to 1MB, so
most of the 3+ hours was spent growing the log!
So, remembering that the table has 40 million rows: if we set the
batch size on DBCC CLEANTABLE to 4,000,000 (10% of the table), the
log only has to hold one batch at a time and should reach ~1.2GB
instead of ~12GB. This obviously assumes the SIMPLE recovery model,
so the log can truncate between batches.
And rather than growing the log in 1MB increments all the way up to
1.2GB, which is terribly inefficient and time consuming, we pre-size
the log manually or increase the autogrow value.
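To check the log size and autogrow setting before and after (SQL
Server 2005; run this in the database in question):

-- Percentage of log space in use, per database.
DBCC SQLPERF(LOGSPACE)

-- Size (in 8KB pages) and growth setting of each file in this db.
SELECT name, size, growth, is_percent_growth
FROM sys.database_files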
-- *****************************************************************
-- This script temporarily changes the recovery model to SIMPLE.
-- A FULL backup must be taken PRIOR TO AND AFTER executing this script.
-- This script is not transaction safe. On error, RESTORE FROM BACKUP.
-- *****************************************************************
SELECT DATABASEPROPERTYEX('db', 'Recovery')
    AS [Initial Recovery Model (script leaves db in FULL recovery model)]
GO
-- Allow the log to be truncated during this large batch of changes.
ALTER DATABASE [db] SET RECOVERY SIMPLE
GO
-- Drop the varchar or text column that you want to reclaim the space for.
ALTER TABLE [table]
DROP COLUMN column_name
GO
-- Pre-size the log to slightly bigger than the data change requires (in MB).
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', SIZE = 2000)
-- Or set a larger filegrowth increment instead (pre-sizing is better).
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 100)
GO
-- Batches of 4 million rows: 10% of the table, so the transaction
-- log will only reach ~1.2GB instead of ~12GB (in this case).
DBCC CLEANTABLE ('db', 'table', 4000000)
GO
-- Reset filegrowth to its original value.
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 1)
GO
-- Put the db back into the FULL recovery model.
ALTER DATABASE [db] SET RECOVERY FULL
GO
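Once it's done you can shrink the log back down and double-check the
recovery model (file name and target size are placeholders):

-- Shrink the log file back to ~100MB now the batched work is done.
DBCC SHRINKFILE (N'db_log', 100)
-- Confirm we are back in FULL recovery.
SELECT DATABASEPROPERTYEX('db', 'Recovery') AS [Recovery Model]

And don't forget the FULL backup afterwards, since switching back
from SIMPLE breaks the log backup chain.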