
Tuesday, March 27, 2012

DBCC shrinkdatabase

Hi All
I am currently in a catch-22 situation. I need to shrink my database before
I can back it up (the time it takes to back up has now exceeded the time
allocated to the backup process).
Is it safe to run the DBCC SHRINKDATABASE (DBName, 0, TRUNCATEONLY) command on
a production database (450 GB with 9000+ connections)?
Any info in this regard will be highly appreciated.
Thanks
Elrond
The backup process only backs up the data, not the empty space in the MDF
file. Shrink only removes the empty space, so I don't think it will gain you
anything.
Unless I am incorrect in the above, you may want to investigate some 3rd-party
utilities that compress the backup as it is taken, which saves time and
drive space.
Red Gate makes SQL Backup ($295/server).
Quest sells SQL LiteSpeed (price based on version/processors, but higher
than Red Gate).
Both are good products.
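A quick way to confirm that backup time tracks the data rather than the file size is to compare allocated space with used space before deciding to shrink. This is a minimal sketch; the database name is a placeholder:

```sql
-- Compare allocated file size with actual data size.
-- Backup reads only the used pages, not the whole file on disk.
USE DBName;   -- placeholder; substitute your database name
GO
-- Total size, unallocated space, and reserved/data/index breakdown:
EXEC sp_spaceused;
GO
-- Per-file view: allocated size vs. pages actually used (8 KB pages):
SELECT name,
       size AS allocated_pages,
       FILEPROPERTY(name, 'SpaceUsed') AS used_pages
FROM sysfiles;
```

If `used_pages` is far below `allocated_pages`, the file has free space a shrink would reclaim, but the backup duration is driven by the used pages either way.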
Kevin Hill
3NF Consulting
http://www.3nf-inc.com/NewsGroups.htm
Real-world stuff I run across with SQL Server:
http://kevin3nf.blogspot.com
Kevin is right on the money. The time to back up a database is not affected by
the amount of free space, only by the data. I would use one of the 3rd-party
tools to compress the backups on the fly. You may also want to look at some
sort of hardware-based backup using the SAN, or filegroup backups.
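If the database is already split into filegroups, the filegroup-backup idea can spread the work across several nights instead of one window. A hedged sketch, with placeholder database, filegroup, and path names:

```sql
-- Back up one filegroup per night instead of the whole database at once.
-- Database, filegroup, and file paths below are placeholders.
BACKUP DATABASE DBName
    FILEGROUP = 'PRIMARY'
    TO DISK = 'E:\Backups\DBName_Primary.bak';
GO
-- Transaction log backups are still required so the filegroup
-- backups can be rolled forward to a consistent point in time:
BACKUP LOG DBName
    TO DISK = 'E:\Backups\DBName_Log.trn';
```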
Andrew J. Kelly SQL MVP

Sunday, March 11, 2012

DBCC INDEXDEFRAG

Hi Guru,
One of my vendor databases has about 500 tables whose owner is not dbo. I'm
having a hard time running an online defrag Monday through Friday because my
code is not working. Please look at my code and modify it to handle tables
whose owner is not dbo.
USE ObjectManager
GO
IF EXISTS (SELECT * FROM dbo.sysobjects
           WHERE id = OBJECT_ID(N'[dbo].[uspOM_OnlineFragmentation]')
             AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    DROP PROCEDURE [dbo].[uspOM_OnlineFragmentation]
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
CREATE PROC dbo.uspOM_OnlineFragmentation
    @dbname sysname
AS
SET NOCOUNT ON
CREATE TABLE #UserTables(
    [id] INT,
    [name] VARCHAR(60)
)
--Create temporary table to hold DBCC SHOWCONTIG output
CREATE TABLE #FragmentationResult(
    ObjectName VARCHAR(255), ObjectId INT, IndexName VARCHAR(255),
    IndexId INT, [Level] INT, Pages INT, [Rows] INT,
    MinimumRecordSize INT, MaximumRecordSize INT,
    AverageRecordSize FLOAT, ForwardedRecords INT, Extents INT,
    ExtentSwitches INT, AverageFreeBytes FLOAT,
    AveragePageDensity FLOAT, ScanDensity FLOAT, BestCount INT,
    ActualCount INT, LogicalFragmentation FLOAT,
    ExtentFragmentation FLOAT
)
--Create temporary table to hold tables/indexes that require defragmentation
CREATE TABLE #Defragmentation(
    [id] INT IDENTITY,
    ObjectName VARCHAR(255),
    IndexName VARCHAR(255),
    ScanDensity FLOAT
)
--Identify all user tables in the target database to analyze fragmentation
INSERT INTO #UserTables([id], [name])
EXEC ('SELECT [id], [name] FROM ' + @dbname + '.dbo.sysobjects
       WHERE xtype = ''U'' AND [name] <> ''dtproperties''')
--Determine fragmentation of every user table/index
DECLARE @id INT, @name VARCHAR(255), @TableCnt INT
SET @id = 0
SELECT @TableCnt = COUNT(*) FROM #UserTables
WHILE @TableCnt > 0
BEGIN
    SELECT TOP 1 @id = [id], @name = [name]
    FROM #UserTables
    WHERE [id] > @id
    ORDER BY [id]   --without ORDER BY, TOP 1 is not guaranteed to walk the ids in order
    INSERT INTO #FragmentationResult
    EXEC('USE ' + @dbname + ' DBCC SHOWCONTIG ([' + @name + ']) WITH
          ALL_INDEXES, TABLERESULTS')
    SET @TableCnt = @TableCnt - 1
END
--Determine user tables/indexes that require defragmentation
INSERT INTO #Defragmentation
SELECT ObjectName, IndexName, ScanDensity
FROM #FragmentationResult
WHERE IndexId NOT IN (0, 255)
  AND ScanDensity < 40            --Scan Density is low
  AND LogicalFragmentation > 10   --Logical Scan Fragmentation is high
  AND Pages > 8                   --Not a very small table
DROP TABLE #FragmentationResult
--Defragment tables/indexes with high fragmentation
DECLARE @oname VARCHAR(255), @iname VARCHAR(255), @sdensity FLOAT
SET @id = 0
SELECT @TableCnt = COUNT(*) FROM #Defragmentation
WHILE @TableCnt > 0
BEGIN
    SELECT TOP 1 @id = [id]
         , @oname = ObjectName
         , @iname = IndexName
         , @sdensity = ScanDensity
    FROM #Defragmentation
    WHERE [id] > @id
    ORDER BY [id]
    PRINT '** De-fragmentation #' + CAST(@id AS VARCHAR(15)) + ' **'
    PRINT 'DBCC INDEXDEFRAG on [' + @oname + '].[' + @iname
        + '] with ScanDensity = ' + CAST(@sdensity AS VARCHAR(15)) + '%'
    --DBCC DBREINDEX(@oname, @iname)
    DBCC INDEXDEFRAG(@dbname, @oname, @iname)
    SET @TableCnt = @TableCnt - 1
END
--Release resources
DROP TABLE #UserTables
DROP TABLE #Defragmentation
SET NOCOUNT OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Thanks,
Silaphet,
Hi Silaphet
It's not the code that needs to change, but the roles assigned to whoever is
running it.
dbo is not a role; it is a user name. You can add the users who need to run
this to the db_owner or db_ddladmin role. Table owners can only defrag or run
SHOWCONTIG on their own tables.
FROM BOL:
Permissions
DBCC INDEXDEFRAG permissions default to members of the sysadmin fixed server
role, the db_owner and db_ddladmin fixed database role, and the table owner,
and are not transferable.
DBCC SHOWCONTIG permissions default to members of the sysadmin fixed server
role, the db_owner and db_ddladmin fixed database roles, and the table
owner, and are not transferable.
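The role change above can be sketched in one statement; the user name is a placeholder for whichever database user runs the maintenance job:

```sql
-- Add the user to the db_ddladmin fixed database role, which the BOL
-- excerpt above lists as sufficient for both DBCC SHOWCONTIG and
-- DBCC INDEXDEFRAG on all tables, regardless of owner.
-- 'MaintenanceUser' is a placeholder user name.
USE ObjectManager;
GO
EXEC sp_addrolemember N'db_ddladmin', N'MaintenanceUser';
```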
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com


DBCC fails, why?

Every time I run DBCC CHECKDB to do an integrity check I get the following
errors. The first one I don't understand: it says Expected value 0_PCT_FULL,
actual value 100_PCT_FULL, but I checked and it was set to 0...
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:628) in object ID
949578421, index ID 0, partition ID 72057594042253312, alloc unit ID
72057594046644224 (type LOB data). Expected value 0_PCT_FULL, actual value
100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:632) in object ID
949578421, index ID 0, partition ID 72057594042253312, alloc unit ID
72057594046644224 (type LOB data). Expected value 0_PCT_FULL, actual value
100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:640) in object ID
949578421, index ID 0, partition ID 72057594042253312, alloc unit ID
72057594046644224 (type LOB data). Expected value 0_PCT_FULL, actual value
100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:3410) in object ID
949578421, index ID 0, partition ID 72057594042253312, alloc unit ID
72057594046644224 (type LOB data). Expected value 0_PCT_FULL, actual value
100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:3510) in object ID
949578421, index ID 0, partition ID 72057594042253312, alloc unit ID
72057594046644224 (type LOB data). Expected value 0_PCT_FULL, actual value
100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:4948) in object ID
949578421, index ID 0, partition ID 72057594042253312, alloc unit ID
72057594046644224 (type LOB data). Expected value 0_PCT_FULL, actual value
100_PCT_FULL.
CHECKDB found 0 allocation errors and 6 consistency errors in table
'sysmaintplan_logdetail' (object ID 949578421).
CHECKDB found 0 allocation errors and 6 consistency errors in database
'msdb'.
repair_allow_data_loss is the minimum repair level for the errors found by
DBCC CHECKDB (msdb).
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:158) in object ID 60, index
ID 1, partition ID 281474980642816, alloc unit ID 71776119065149440 (type
LOB data). Expected value 0_PCT_FULL, actual value 100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:161) in object ID 60, index
ID 1, partition ID 281474980642816, alloc unit ID 71776119065149440 (type
LOB data). Expected value 0_PCT_FULL, actual value 100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:338) in object ID 60, index
ID 1, partition ID 281474980642816, alloc unit ID 71776119065149440 (type
LOB data). Expected value 0_PCT_FULL, actual value 100_PCT_FULL.
Msg 8914, Level 16, State 1, Line 1
Incorrect PFS free space information for page (1:448) in object ID 60, index
ID 1, partition ID 281474980642816, alloc unit ID 71776119065149440 (type
LOB data). Expected value 0_PCT_FULL, actual value 100_PCT_FULL.
CHECKDB found 0 allocation errors and 4 consistency errors in table
'sys.sysobjvalues' (object ID 60).
CHECKDB found 0 allocation errors and 4 consistency errors in database
'BENE_Testing'.
repair_allow_data_loss is the minimum repair level for the errors found by
DBCC CHECKDB (BENE_Testing).
Here is the SQL used for this:
USE [master]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [model]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [msdb]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [BENE_Live]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [BENE_Users]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [BDB_FileTransferImports]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [RESC_Intranet]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [RPTS_Reports]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [RSMN_Messaging]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [BUGS_GeminiWeb]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [BENE_Testing]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [ReportServer]
GO
DBCC CHECKDB WITH NO_INFOMSGS
GO
USE [ReportServerTempDB]
GO
DBCC CHECKDB WITH NO_INFOMSGS
You have a corruption in your database. I'm surprised that allow data loss is
considered the minimum repair level for these messages. Download Books Online
for SQL Server 2000 and search for the error numbers there for further
explanations (these error numbers aren't documented yet in BOL 2005). Also,
you might want to check out
http://www.karaszi.com/SQLServer/in..._suspect_db.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
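For completeness, this is the usual shape of a last-resort repair when no clean backup is available. It is only a sketch: take a full backup first and try it on a copy, because REPAIR_ALLOW_DATA_LOSS can discard pages.

```sql
-- Last-resort repair for the reported msdb corruption.
-- BACK UP THE DATABASE FIRST: REPAIR_ALLOW_DATA_LOSS may delete data.
USE master;
GO
-- CHECKDB repair options require the database in single-user mode:
ALTER DATABASE msdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB (msdb, REPAIR_ALLOW_DATA_LOSS);
GO
ALTER DATABASE msdb SET MULTI_USER;
```

Run DBCC CHECKDB again afterwards to confirm the errors are gone, and investigate the I/O subsystem, since PFS corruption often points at hardware or driver problems.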
> GO
> USE [RPTS_Reports]
> GO
> DBCC CHECKDB WITH NO_INFOMSGS
> GO
> USE [RSMN_Messaging]
> GO
> DBCC CHECKDB WITH NO_INFOMSGS
> GO
> USE [BUGS_GeminiWeb]
> GO
> DBCC CHECKDB WITH NO_INFOMSGS
> GO
> USE [BENE_Testing]
> GO
> DBCC CHECKDB WITH NO_INFOMSGS
> GO
> USE [ReportServer]
> GO
> DBCC CHECKDB WITH NO_INFOMSGS
> GO
> USE [ReportServerTempDB]
> GO
> DBCC CHECKDB WITH NO_INFOMSGS
>|||well I repaired one of them successfully with no data loss... but the msdb
database of course I can't repair with that allow data loss because it
requires single user mode, and you can't take a system db into that... so I'm a
little lost on what to do with that one
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:uyYjlM7YGHA.4916@.TK2MSFTNGP04.phx.gbl...
> You have a corruption in your database. I'm surprised that allow data loss
> is considered the minimum repair level for these messages. Download
> Books Online for SQL Server 2000, and search for the error numbers there
> for further explanations (these error numbers aren't documented yet in BOL
> 2005). Also, you might want to check out
> http://www.karaszi.com/SQLServer/in..._suspect_db.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Brian Henry" <nospam@.nospam.com> wrote in message
> news:ulc0G36YGHA.4620@.TK2MSFTNGP04.phx.gbl...
>|||RESTORE DATABASE msdb? Assuming of course you do backups of your system databases.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Brian Henry" <nospam@.nospam.com> wrote in message news:OaEPGX7YGHA.1764@.TK2MSFTNGP05.phx.gbl...
> well I repaired one of them successfully with no data loss... but the msdb databsae of course i
> cant repair with that allow data loss because it requires single user mode, but you cant take a
> system db into that... so a little lost on what to do with that one
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:uyYjlM7YGHA.4916@.TK2MSFTNGP04.phx.gbl...
>|||we do backups, but the big problem is, apparently this has been like this for months now!...
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:u3rbdM8YGHA.428@.TK2MSFTNGP02.phx.gbl...
> RESTORE DATABASE msdb ? Assuming of course you do backup of your system
> databases.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Brian Henry" <nospam@.nospam.com> wrote in message
> news:OaEPGX7YGHA.1764@.TK2MSFTNGP05.phx.gbl...
>|||Come to think about it, how did you conclude that you can't set msdb to single user? I just tried it
on both a 2000 SP3 instance as well as a 2005 instance. Both were successful. You have to stop Agent
first, of course.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Brian Henry" <nospam@.nospam.com> wrote in message news:uyZEFr8YGHA.4936@.TK2MSFTNGP05.phx.gbl...
> we do backups, but the big problem is, apparently this has been like this for months now!...
>
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
> news:u3rbdM8YGHA.428@.TK2MSFTNGP02.phx.gbl...
>|||didn't try stopping the agent, just tried altering the db to single user mode
and it failed and said it couldn't be... I'll try it without the agent and see
what happens (this is SQL 2005 btw)
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%23eo3O58YGHA.3704@.TK2MSFTNGP03.phx.gbl...
> Come to think about it, how did you conclude that you can't set msdb to
> single user? I just tried it on both a 2000 sp3 instance as well as 2005
> instance. Both were successful. You have to stop Agent first, of course.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Brian Henry" <nospam@.nospam.com> wrote in message
> news:uyZEFr8YGHA.4936@.TK2MSFTNGP05.phx.gbl...
>|||it works with stopping the agent, thanks! wouldn't have thought of that..
"Brian Henry" <nospam@.nospam.com> wrote in message
news:%238qrve9YGHA.3448@.TK2MSFTNGP04.phx.gbl...
> didn't try stoping the anget, just tried altering the db to single user
> mode and it failed and said it couldnt be... ill try it without the agent
> and see what happens (this is sql 2005 btw)
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote
> in message news:%23eo3O58YGHA.3704@.TK2MSFTNGP03.phx.gbl...
>
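Tying the thread together, here is a sketch of the msdb repair sequence the posters converged on. It is an illustration rather than a recipe: REPAIR_ALLOW_DATA_LOSS can discard pages, so back up msdb before trying it.

```sql
-- Stop SQL Server Agent first (it holds connections to msdb),
-- e.g. via SQL Server Configuration Manager or: NET STOP SQLSERVERAGENT

ALTER DATABASE msdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Minimum repair level CHECKDB reported for the PFS errors above.
-- This option can discard data, so take a backup of msdb first.
DBCC CHECKDB (msdb, REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE msdb SET MULTI_USER;
```

After the repair, run a plain DBCC CHECKDB (msdb) again to confirm the errors are gone, then restart Agent.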

Saturday, February 25, 2012

DBCC DBREINDEX

Can anyone confirm this for me: if I were to run the script below, will it
reindex all tables' indexes at one time as it goes through the cursor, or
will it reindex one table's indexes at a time as it goes through the cursor?
Basically, will it pause each time the DBCC command is executed?
If the first part is true, where it executes all at once, then what is a good
way to only do one at a time? Can you pause between executions till each
execution is done?
Thanks for the help.
DECLARE @TableName varchar(255)
DECLARE ReindexTableCursor CURSOR FOR
SELECT name FROM sysobjects WHERE xtype = 'U' AND (name = 'Practice' OR name = 'ID_Master')
ORDER BY name
OPEN ReindexTableCursor
-- Perform the first fetch.
FETCH NEXT FROM ReindexTableCursor INTO @TableName
-- Check @@FETCH_STATUS to see if there are any rows to fetch.
WHILE @@FETCH_STATUS = 0
BEGIN
-- This is executed as long as the previous fetch succeeds.
PRINT 'Reindexing' + ' ' + @TableName
DBCC DBREINDEX (@TableName)
FETCH NEXT FROM ReindexTableCursor INTO @TableName
END
CLOSE ReindexTableCursor
DEALLOCATE ReindexTableCursor
See the reply in the other newsgroup.
Andrew J. Kelly SQL MVP
"Dave Mortenson" <dmortenson@.dentrix.com> wrote in message
news:enpAF$obEHA.1356@.TK2MSFTNGP09.phx.gbl...
> Can anyone confirm this for me. If I were to run this script below. Will
it
> reindex all tables indexes at one time as it goes through the cursor. Or
> will it reindex one table indexes at a time as it goes through the cursor.
> Basically will it pause each time the DBCC command is executed?
> If the first part is true where it executes all at once then what is a
good
> way to only do one at a time? Can you pause between executions till each
> execution is done?
> THanks for the help.
> DECLARE @.TableName varchar(255)
> DECLARE ReindexTableCursor CURSOR FOR
> SELECT name FROM sysobjects WHERE xtype = 'U' AND (name = 'Practice' OR
name
> = 'ID_Master')
> ORDER BY name
> OPEN ReindexTableCursor
> -- Perform the first fetch.
> FETCH NEXT FROM ReindexTableCursor INTO @.TableName
> -- Check @.@.FETCH_STATUS to see if there are any rows to fetch.
> WHILE @.@.FETCH_STATUS = 0
> BEGIN
> -- This is executed as long as the previous fetch succeeds.
> PRINT 'Reindexing' + ' ' + @.TableName
> DBCC DBREINDEX (@.TableName )
> FETCH NEXT FROM ReindexTableCursor into @.TableName
> END
> CLOSE ReindexTableCursor
> DEALLOCATE ReindexTableCursor
>
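To answer the question directly: T-SQL statements in a batch run sequentially, so the cursor above already reindexes one table at a time; each DBCC DBREINDEX finishes before the next FETCH executes. A sketch of the loop with per-table timestamps and an optional explicit pause (the WAITFOR is there only to illustrate a deliberate delay between tables, if one is wanted):

```sql
DECLARE @TableName sysname;
DECLARE ReindexTableCursor CURSOR FOR
    SELECT name FROM sysobjects
    WHERE xtype = 'U' AND name IN ('Practice', 'ID_Master')
    ORDER BY name;

OPEN ReindexTableCursor;
FETCH NEXT FROM ReindexTableCursor INTO @TableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Reindexing ' + @TableName + ' at '
        + CONVERT(varchar(30), GETDATE(), 121);
    DBCC DBREINDEX (@TableName);   -- blocks until this table's indexes are rebuilt
    WAITFOR DELAY '00:00:05';      -- optional breather between tables
    FETCH NEXT FROM ReindexTableCursor INTO @TableName;
END
CLOSE ReindexTableCursor;
DEALLOCATE ReindexTableCursor;
```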

Sunday, February 19, 2012

DBCC Checkdb runs 12 hours

We have a really ancient Sql Server db -- version 6.5 --
with a 7.6 GB database on it. The DBCC Checkdb runs 12
hours, and now it fails every time it runs, without giving
any meaningful error messages. Is there some way to speed
up DBCC Checkdb? Or should we be running some other
utility?|||The best way to speed it up is to migrate to SQL 7.0 or better yet 2000. I
don't know of any way in 6.5.
Andrew J. Kelly
SQL Server MVP
"khabita" <anonymous@.discussions.microsoft.com> wrote in message
news:fc0901c3f24c$16443170$a001280a@.phx.gbl...
> We have a really ancient Sql Server db -- version 6.5 --
> with a 7.6 GB database on it. The DBCC Checkdb runs 12
> hours, and now it fails every time it runs, without giving
> any meaningful error messages. Is there some way to speed
> up DBCC Checkdb? Or should we be running some other
> utility?|||Thanks. But we are moving away from Sql Server, so I doubt
we'll upgrade. I guess we'll just have to live with the
slow performance until we move the db to Oracle.

>--Original Message--
>The best way to speed it up is to migrate to SQL 7.0 or
better yet 2000. I
>don't know of any way in 6.5.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"khabita" <anonymous@.discussions.microsoft.com> wrote in
message
>news:fc0901c3f24c$16443170$a001280a@.phx.gbl...
>> We have a really ancient Sql Server db -- version 6.5 --
>> with a 7.6 GB database on it. The DBCC Checkdb runs 12
>> hours, and now it fails every time it runs, without giving
>> any meaningful error messages. Is there some way to speed
>> up DBCC Checkdb? Or should we be running some other
>> utility?
>
>.
>|||Too bad. It would cost you tons less just to upgrade to SQL Server 2000. I
did an upgrade in place (i.e. on the same box) from 6.5 to 7.0 and tripled
the speed of an app.
Tom
---
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
<anonymous@.discussions.microsoft.com> wrote in message
news:f71201c3f25a$37fc35c0$a301280a@.phx.gbl...
Thanks. But we are moving away from Sql Server, so I doubt
we'll upgrade. I guess we'll just have to live with the
slow performance until we move the db to Oracle.

>--Original Message--
>The best way to speed it up is to migrate to SQL 7.0 or
better yet 2000. I
>don't know of any way in 6.5.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"khabita" <anonymous@.discussions.microsoft.com> wrote in
message
>news:fc0901c3f24c$16443170$a001280a@.phx.gbl...
>> We have a really ancient Sql Server db -- version 6.5 --
>> with a 7.6 GB database on it. The DBCC Checkdb runs 12
>> hours, and now it fails every time it runs, without giving
>> any meaningful error messages. Is there some way to speed
>> up DBCC Checkdb? Or should we be running some other
>> utility?
>
>.
>
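For anyone hitting the same wall on SQL Server 2000 or later (these options don't exist in 6.5), two documented ways to trade check depth for runtime are a physical-only pass and skipping nonclustered index checks; the database name below is a placeholder:

```sql
-- Physical-consistency pass only: reads every page and checks allocation
-- and page-level integrity, but skips the expensive logical checks.
DBCC CHECKDB ('MyDatabase') WITH PHYSICAL_ONLY;

-- Full logical checks, but without checking nonclustered indexes.
DBCC CHECKDB ('MyDatabase', NOINDEX);
```

A common pattern is to run the physical-only pass nightly and reserve the full CHECKDB for a weekly window.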

dbcc checkdb on a large table

We have a 200 GB database with a single table that spans 36 partitions.
Each time we run a dbcc checkdb on this database, the process takes
close to 40 hours to finish, and in the course of doing it expands
tempdb to almost 200 GB. What are the recommended options to speed up
the dbcc process? What kind of dbcc's can be done? Suggestions are
welcome.
The database runs on SQL Server 2005 - there are 4 CPUs on the box and
the total memory is 8 GB.
Thanks,
Prakash|||Hi,
DBCC CHECKDB on my 320 GB database on an 8 CPU x 8 GB machine is taking me only
1 hour. 40 hours is too long on a 4 CPU machine. Just see the disk transfer
and queue length at that point in time. Check with your hardware person to
do a hardware check.
From your side you could enable the degree of parallelism (DOP) during the DBCC
and see if that helps. Hope you have already enabled the AWE memory since
you have 8 GB RAM.
Thanks
Hari
SQL Server MVP
"pnat331@.gmail.com" wrote:

> We have a 200 GB database with a single table that spans 36 partitions.
> Each time we run a dbcc checkdb on this database, the process takes
> close to 40 hours to finish, and in the course of doing it expands
> tempdb to almost 200 GB. What are the recommended options to speed up
> the dbcc process? What kind of dbcc's can be done? Suggestions are
> welcome.
> The database runs on SQL Server 2005 - there are 4 CPUs on the box and
> the total memory is 8 GB.
> Thanks,
> Prakash
>
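Beyond the hardware check, a few documented CHECKDB options on SQL Server 2005 can tame both the runtime and the tempdb bill; the database and table names below are placeholders:

```sql
-- How much tempdb space would a full CHECKDB of this database need?
DBCC CHECKDB ('MyBigDatabase') WITH ESTIMATEONLY;

-- Nightly: physical-only pass, far lighter on tempdb and CPU than the full check.
DBCC CHECKDB ('MyBigDatabase') WITH PHYSICAL_ONLY;

-- Weekly, spread over several nights: full logical checks one table at a time.
USE MyBigDatabase;
DBCC CHECKTABLE ('dbo.BigPartitionedTable') WITH ALL_ERRORMSGS;
```

Splitting the work this way keeps a large partitioned table from forcing the entire 200 GB check into a single maintenance window.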

Friday, February 17, 2012

dbcc checkdb found 2 errors

I'm in trouble...
dbcc checkdb found 2 errors on my database and this is the first time it was
checked. It says the lowest repair is the "allow data loss". If I run repair
and data is lost, how can I find out which data it is?
Select * from table
does not finish due to
error on page xxxxx.
oh I forgot, the trouble is I don't have a backup without that error, but I
have a way to recreate the database, but it's huge and I wonder if I could
identify the defective rows and insert only them?
tia
Zarko
Make sure you have an updated Books Online (Jan 2004) and search for the error numbers in there. Almost all
corruption errors have specific recommendations in the updated Books Online.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it was
> checked. It says the lowest repair is the "allow data loss". If I run repair
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
>
|||Also the errors should give you the page numbers,
you might try to use dbcc page to see if you can look at the page and see
what data is on it..
Google dbcc page
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it was
> checked. It says the lowest repair is the "allow data loss". If I run repair
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
>
|||If you could post the errors, we might be able to give you a bit more insight.
Thanks,
Ryan Stonecipher
MS SQL Server Storage Engine
"Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message news:e4uZK0WSEHA.1544@.TK2MSFTNGP09.phx.gbl...
Also the errors should give you the page numbers,
you might try to use dbcc page to see if you can look at the page and see
what data is on it..
Google dbcc page
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it was
> checked. It says the lowest repair is the "allow data loss". If I run repair
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
>
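Wayne's DBCC PAGE suggestion looks roughly like this. DBCC PAGE is undocumented, and the database name and page number below are placeholders taken from the "error on page xxxxx" message; trace flag 3604 routes the DBCC output to the client session so you can read it:

```sql
DBCC TRACEON (3604);   -- send DBCC output to the client instead of the error log

-- DBCC PAGE (database, file_id, page_number, print_option)
-- print option 3 dumps the page header plus a per-row interpretation,
-- which helps identify which rows sit on the damaged page.
DBCC PAGE ('MyDatabase', 1, 12345, 3);

DBCC TRACEOFF (3604);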

dbcc checkdb found 2 errors

I'm in trouble...
dbcc checkdb found 2 errors on my database and this is the first time it was
checked. It says the lowest repair is the "allow data loss". If I run repair
and data is lost, how can I find out which data it is?
Select * from table
does not finish due to
error on page xxxxx.
oh i forgot, the trouble is I don't have a backup without that error, but I
have the way to recreate database but it's huge and I wonder if I could
identify the defect rows and insert only them?
tia
ZarkoMake sure you have an updated books online (Jan 2004) and search for the err
or numbers in there. Almost all
corruption errors has specific recommendations in the updated Books Online.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it w
as
> checked. It says the lowest repair is the "allow data loss". If I run repa
ir
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but
I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
>|||Also the errors should give you the page numbers,
you might try to use dbcc page to see if you can look at the page and see
what data is on it..
Google dbcc page
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it
was
> checked. It says the lowest repair is the "allow data loss". If I run
repair
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but
I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
>|||If you could post the errors, we might be able to give you a bit more insigh
t.
Thanks,
Ryan Stonecipher
MS SQL Server Storage Engine
"Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message news:e
4uZK0WSEHA.1544@.TK2MSFTNGP09.phx.gbl...
Also the errors should give you the page numbers,
you might try to use dbcc page to see if you can look at the page and see
what data is on it..
Google dbcc page
--
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it was
> checked. It says the lowest repair is the "allow data loss". If I run repair
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
>

dbcc checkdb found 2 errors

I'm in trouble...
dbcc checkdb found 2 errors on my database and this is the first time it was
checked. It says the lowest repair is the "allow data loss". If I run repair
and data is lost, how can I find out which data it is?
Select * from table
does not finish due to
error on page xxxxx.
Oh, I forgot: the trouble is I don't have a backup without that error, but I
do have a way to recreate the database. It's huge, though, so I wonder if I
could identify just the defective rows and re-insert only them?
tia
Zarko|||Make sure you have an updated Books Online (Jan 2004) and search for the error numbers in there. Almost all
corruption errors have specific recommendations in the updated Books Online.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Zarko Jovanovic" <RmEaMkOnViEoTHvIoSmind_less@.bigfoot.com> wrote in message
news:1086238949.478311@.internet.fina.hr...
> I'm in trouble...
> dbcc checkdb found 2 errors on my database and this is the first time it was
> checked. It says the lowest repair is the "allow data loss". If I run repair
> and data is lost, how can I find out which data it is?
> Select * from table
> does not finish due to
> error on page xxxxx.
> oh i forgot, the trouble is I don't have a backup without that error, but I
> have the way to recreate database but it's huge and I wonder if I could
> identify the defect rows and insert only them?
> tia
> Zarko
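For reference, the repair path being discussed looks like this (a sketch only; the database name is a placeholder, and REPAIR_ALLOW_DATA_LOSS can discard whole pages, so take a backup first even if it contains the error):

```sql
-- CHECKDB repairs require the database to be in single-user mode
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Run the minimum repair level that CHECKDB reported as necessary
DBCC CHECKDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE MyDatabase SET MULTI_USER;
```

Comparing a pre-repair dump of the damaged pages with the table contents afterwards is the only way to see exactly what the repair discarded.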

Tuesday, February 14, 2012

Dbcc Checkdb , so time consuming ?

hi,
I ran DBCC CHECKDB over my 200 GB DB. It has been running for more than 21 hours. Is this
normal?
Will I face any problems if I cancel it now?
Thanks
Banu
Should be fine if you cancel the DBCC CHECKDB, it's just reading pages,
etc. (not changing anything) so there would be nothing to rollback. It
does have to read every page in the DB, so it will be time consuming,
and the duration will depend a lot on the hardware it's running on too
(speed of the processor(s), I/O throughput, etc.). However, 21 hours
does sound a little excessive for a 200GB database, but it does depend
a lot, as I said, on hardware.
*mike hodgson*
blog: http://sqlnerd.blogspot.com
Banu_tr wrote:

>hi,
>I run dbcc checkdb over my 200 GB db. It runs for more than 21 hours. Is it
>normal?
>Will I face any problem if I cancel it now?
>Thanks
>Banu
>
|||Hi Mike,
In fact my hardware is fine, I think: 3 GB RAM, 2.8 GHz Xeon CPU, MSA 1000
controller, ProLiant DL380 GB system.
I have an active/passive cluster.
I started the script last night, when load is relatively low.
I can't estimate whether it is getting close to the end or not.
thanks
"Mike Hodgson" wrote:

> Should be fine if you cancel the DBCC CHECKDB, it's just reading pages,
> etc. (not changing anything) so there would be nothing to rollback. It
> does have to read every page in the DB, so it will be time consuming,
> and the duration will depend a lot on the hardware it's running on too
> (speed of the processor(s), I/O throughput, etc.). However 21 hours
> does sounds a little excessive for a 200GB database, but it does depend
> a lot, as I said, on hardware.
> --
> *mike hodgson*
> blog: http://sqlnerd.blogspot.com
>
> Banu_tr wrote:
>
|||I'd check sp_who2 for the amount of IO & CPU that particular spid has
consumed, and run sp_who2 a few times to see if the figures are changing
at all (particularly the IO figure). Also, check to make sure it's not
part of a blocking chain, and you can check the wait time, last wait
type & wait resource (in sysprocesses), if it is waiting, to see what
the problem is.
*mike hodgson*
blog: http://sqlnerd.blogspot.com
Banu_tr wrote:
[vbcol=seagreen]
>Hi Mike,
>Infact my hardware is fine I think. 3 Gb Ram, 2.8 Ghz Xeon CPU, MSA 1000
>Controller, Proliant DL380 GB System,
>I have Active/pasive cluster.
>I started the script last night which means when load is relativeley low.
>I couldn't estimate whether it comes closer to end or not.
>thanks
>
>
>"Mike Hodgson" wrote:
>
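Mike's monitoring suggestion, sketched (the spid value is hypothetical - take it from the sp_who2 output row for the CHECKDB session):

```sql
-- Run a few times; if CPUTime/DiskIO for the CHECKDB spid keep climbing,
-- the check is still making progress rather than sitting blocked.
EXEC sp_who2;

-- Wait details for the CHECKDB session (sysprocesses on SQL Server 2000)
SELECT spid, status, blocked, waittime, lastwaittype, waitresource
FROM master..sysprocesses
WHERE spid = 55;  -- hypothetical spid of the CHECKDB session
```

A non-zero `blocked` column, or a steadily growing `waittime`, points at contention rather than slow I/O.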
|||This hardware sounds small for a 200GB DB. Of course, a lot depends on what
you are doing with the database. How are your single CPU and average disk
queue length doing during the DBCC?
"Banu_tr" <abuslu@.hotmail.com> wrote in message
news:863EB638-0EF6-40AC-B5D5-DB93F4820F9F@.microsoft.com...[vbcol=seagreen]
> Hi Mike,
> Infact my hardware is fine I think. 3 Gb Ram, 2.8 Ghz Xeon CPU, MSA 1000
> Controller, Proliant DL380 GB System,
> I have Active/pasive cluster.
> I started the script last night which means when load is relativeley low.
> I couldn't estimate whether it comes closer to end or not.
> thanks
>
>
> "Mike Hodgson" wrote:
|||I gave up after 23 hours. I checked what you recommended, but it all seemed normal.
Thanks
"Mike Hodgson" wrote:

> I'd check sp_who2 for the amount of IO & CPU that particular spid has
> consumed, and run sp_who2 a few times to see if the figures are changing
> at all (particularly the IO figure). Also, check to make sure it's not
> part of a blocking chain, and you can check the wait time, last wait
> type & wait resource (in sysprocesses), if it is waiting, to see what
> the problem is.
> --
> *mike hodgson*
> blog: http://sqlnerd.blogspot.com
>
> Banu_tr wrote:
>
|||A single CPU is not a lot of horsepower for a 200GB DBCC, especially if
anything else is requesting time on the CPU. You can also run into blocking
if other users are updating the pages it wants to read. 3GB is not a lot of
memory either.
Andrew J. Kelly SQL MVP
"Banu_tr" <abuslu@.hotmail.com> wrote in message
news:863EB638-0EF6-40AC-B5D5-DB93F4820F9F@.microsoft.com...[vbcol=seagreen]
> Hi Mike,
> Infact my hardware is fine I think. 3 Gb Ram, 2.8 Ghz Xeon CPU, MSA 1000
> Controller, Proliant DL380 GB System,
> I have Active/pasive cluster.
> I started the script last night which means when load is relativeley low.
> I couldn't estimate whether it comes closer to end or not.
> thanks
>
>
> "Mike Hodgson" wrote:
|||Right - this sounds like normal runtime on a single-proc box, especially if
it's found errors and has had to go back to do more in-depth checks of the
non-clustered indexes. Have you had a faster runtime in the past?
How much concurrent activity is there that's generating txn-log? CHECKDB has
to internally roll back all the txn-log that's occurred while it's been
running, which will further slow things down.
In SQL Server 2005, the behavior has changed so it doesn't have to do this,
and there's progress reporting so you can see how far it's progressed.
Thanks
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OhSVgn6hFHA.2852@.TK2MSFTNGP15.phx.gbl...
>A single CPU is not a lot of horse power for a 200GB DBCC. Especially if
>anything else is requesting time on the cpu. You can have blocking as well
>if other users are updating the pages it wants to read. 3GB is not a lot
>of memory either.
> --
> Andrew J. Kelly SQL MVP
>
> "Banu_tr" <abuslu@.hotmail.com> wrote in message
> news:863EB638-0EF6-40AC-B5D5-DB93F4820F9F@.microsoft.com...
>
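If a full CHECKDB routinely cannot fit the maintenance window, a couple of documented lighter-weight variants can help (a sketch; the database name is a placeholder):

```sql
-- Check only physical page consistency; far less CPU than a full check
DBCC CHECKDB ('MyDatabase') WITH PHYSICAL_ONLY;

-- Or skip the non-clustered index checks
DBCC CHECKDB ('MyDatabase', NOINDEX);
```

Neither replaces a periodic full CHECKDB, but they make more frequent checks feasible on a single-CPU box like the one described above.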
