Thursday, March 8, 2012
DBCC IND
I need a brief description of the columns in the DBCC IND output.
Thanks in advance,
Leila
http://www.sql-server-performance.com/gv_index_data_structures.asp
http://www.sqlmag.com/Articles/ArticleID/44572/pg/2/2.html
Andrew J. Kelly SQL MVP
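In case it helps future readers: DBCC IND is undocumented, so the sketch below reflects the commonly cited column meanings (as in the linked articles), not official documentation, and the database and table names are placeholders.

```sql
-- DBCC IND is undocumented; syntax and column notes below are the commonly
-- cited community descriptions, not official docs. 'MyDatabase'/'MyTable'
-- are placeholders. The third argument is an index ID; -1 is commonly used
-- to list pages for all indexes of the table.
DBCC IND ('MyDatabase', 'MyTable', -1)

-- Columns typically returned:
--   PageFID / PagePID     file ID and page number of each page
--   IAMFID / IAMPID       file ID / page number of the IAM page mapping it
--   ObjectID / IndexID    object and index the page belongs to
--   PageType              1 = data, 2 = index, 3/4 = text, 10 = IAM
--   IndexLevel            0 = leaf level; higher values are B-tree levels above
--   NextPageFID / NextPagePID,
--   PrevPageFID / PrevPagePID   page-chain linkage at that index level
```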
Friday, February 24, 2012
DBCC CLEANTABLE (SQL Server 2000) question
I've got a 40mill row table and I've normalised one of the columns out
to a separate table. I have then dropped the original varchar(100)
column.
I want to reclaim the space as efficiently as possible.
I set recovery model to simple. Then ran DBCC CLEANTABLE with the
batch option set to 10000 rows.
Here I ran into a problem: it seems to block itself... anyone ever
encountered that before?
I then ran it without the batch parameter and it is still running
after 2 hours and the transaction log is still creeping up. Several GB
so far...
Any advice very welcome...
Cheers,
James
Instead of CLEANTABLE you could rebuild the indexes with DBCC DBREINDEX.
This has the same effect but is usually more resource-intensive than
CLEANTABLE.
hth
Gregor Stefka
Hi,
When I say it is blocking itself, I mean I'm getting PAGEIOLATCH_EX
waits when I set a value for the batch size. Is this normal? The wait
resource keeps changing, and I'm wondering why this is happening. Can
someone explain this for me?
Without the batch size being set, it took 3.25 hours to run and
created 10GB of log. That seems rather long to me. Am I doing something
wrong?
Wouldn't DBCC DBREINDEX result in even worse performance?
Cheers,
James
Hi,
In answer to my own questions:
The PAGEIOLATCH_EX waits don't matter.
The reason for the slow performance is the amount of data written to the
transaction log: 12GB is written with filegrowth set to 1MB! Most of
the 3 hours is spent growing the log!
So, remembering that the original number of rows is 40 million, if we
set the batch size on DBCC CLEANTABLE to 4,000,000 then the maximum size
of the transaction log will be ~1.2GB. This obviously assumes the SIMPLE
recovery model.
So rather than growing the log in 1MB increments up to 1.2GB, which is
terribly inefficient and time-consuming, we set the log size manually
or increase the autogrow value.
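As a quick sanity check on that arithmetic, here is the same estimate as pure T-SQL arithmetic (no tables touched; the figures are the ones reported in this thread):

```sql
-- Back-of-envelope check of the batching claim above. Pure arithmetic,
-- no tables touched; the numbers are the ones reported in this thread.
DECLARE @total_rows bigint, @batch_rows bigint, @full_run_log_gb float
SET @total_rows = 40000000
SET @batch_rows = 4000000
SET @full_run_log_gb = 12.0

DECLARE @batches int
SET @batches = CEILING(1.0 * @total_rows / @batch_rows)

-- Under SIMPLE recovery the log can truncate between batches, so peak
-- log use is roughly the single-run figure divided by the batch count.
SELECT @batches AS batches,                        -- 10
       @full_run_log_gb / @batches AS peak_log_gb  -- ~1.2
```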
-- *************************************************************************
-- This script temporarily changes the recovery model to SIMPLE.
-- A FULL backup must be taken PRIOR TO AND AFTER executing this script...
-- This script is not transaction safe. On error, RESTORE FROM BACKUP...
-- *************************************************************************
SELECT DATABASEPROPERTYEX('db', 'Recovery')
    AS [Initial Recovery Model (script leaves db in FULL recovery model)]
GO
-- Allow the log to be truncated during this large amount of changes.
ALTER DATABASE [db] SET RECOVERY SIMPLE
GO
-- Drop the varchar or text column that you want to reclaim the space for.
ALTER TABLE [table]
DROP COLUMN column_name
GO
-- Set the log size to slightly bigger than what the data change requires,
-- or set a larger filegrowth increment (the first option is better).
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', SIZE = 2000)
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 100)
GO
-- Batches of 4 million rows: 10% of the table, so the transaction log
-- only reaches ~1.2GB instead of 12GB (in this case).
DBCC CLEANTABLE ('db', 'table', 4000000)
GO
-- Reset filegrowth to its original value.
ALTER DATABASE [db] MODIFY FILE (NAME = N'db_log', FILEGROWTH = 1)
GO
-- Put the db back into FULL recovery mode.
ALTER DATABASE [db] SET RECOVERY FULL
GO
DBCC CHECKIDENT Atomically Change Identity Seed?
I have a set of staging tables that need to be used to update a hierarchy of tables with foreign keys between them, and identity columns for the primary keys. One way I'm thinking of doing this is to reset the identity seed on the target tables based on the number of rows I have in the staging tables, then to update the staging tables keys to match the vacated range of identity values. I'd insert them with SET IDENTITY_INSERT ON.
The question is: can this be done atomically? It seems that DBCC CHECKIDENT will return the current identity value, but can only change the seed to an absolute value. That would require that I get the current value, add "n" to it, then set the seed value. This would seem to be non-atomic, in that a new row could be inserted between the time I find the "current" value and the time I set the new value.
Does anyone know of a way to pre-allocate a block of identity values atomically? This has to be done in a live OLTP database.
Further research shows that a fundamental assumption I was making is incorrect. DBCC CHECKIDENT does not return the current identity seed. I was depending on this to allow me to increment the seed value, but it appears that cannot be done.
It looks like replication might help here, but since this project must be finished by September 1, we don't have time to learn anything new. It looks like I'll be stuck using cursors and doing these inserts one at a time.
You can use the IDENT_CURRENT() function to return the last identity value used and IDENT_SEED() to find the seed the column was created with.
e.g.
Code Snippet
SELECT TABLE_SCHEMA, TABLE_NAME,IDENT_SEED(TABLE_SCHEMA + '.' + TABLE_NAME) AS [IDENT_SEED],
IDENT_CURRENT(TABLE_SCHEMA + '.' + TABLE_NAME) AS [IDENT_CURRENT]
FROM INFORMATION_SCHEMA.TABLES
WHERE IDENT_SEED(TABLE_SCHEMA + '.' + TABLE_NAME) IS NOT NULL;
Then use dbcc checkident() to reseed as needed.
Thanks. This should do it for me. I can check after the DBCC CHECKIDENT to see whether any rows were created within the block that I "reserved", and try again until I get all the values I need.
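That "reserve, verify, retry" idea might be sketched like this. The table dbo.Target, its id column, and @BlockSize are hypothetical placeholders, not names from this thread; the read-then-reseed pair is deliberately not atomic, which is exactly why the verification step at the end is needed.

```sql
-- Hypothetical sketch of the "reserve, verify, retry" pattern above.
-- dbo.Target, its id column, and @BlockSize are placeholder names.
DECLARE @BlockSize bigint, @first bigint, @last bigint
SET @BlockSize = 10000

SET @first = IDENT_CURRENT('dbo.Target') + 1   -- last used value + 1
SET @last  = @first + @BlockSize - 1

-- Push the seed past the reserved block. NOT atomic with the read above,
-- so a concurrent insert can still land inside [@first, @last].
DBCC CHECKIDENT ('dbo.Target', RESEED, @last)

-- Verify the reserved range is still empty; if not, reserve a new block.
IF EXISTS (SELECT 1 FROM dbo.Target WHERE id BETWEEN @first AND @last)
    PRINT 'Collision inside reserved range - pick a new block and retry.'
```

After a clean reservation, the staged rows would be inserted with SET IDENTITY_INSERT ON using values in [@first, @last], while normal inserts continue from @last + 1.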