Showing posts with label online. Show all posts

Monday, March 19, 2012

DBCC Log does not show all the logs in LDF file

Hi,
I am using the DBCC LOG command to see all the log records in the online log file. After a
checkpoint these records disappear from the DBCC LOG output, as expected.
But when I open the .LDF file using third-party tools available in the market for reading
.LDF files, I found that the records from before the checkpoint are still there in the .LDF file.
So when will these records actually leave the .LDF file, and what am I gaining by doing a
checkpoint?
How does the DBCC LOG command work? Does it read the .LDF and explicitly ignore the
records before the checkpoint, or something else?
Please clear my thoughts on this issue.
Thanks a lot in advance.
Pushkar

Could you please post the whole statement that you run?
--
Current location: Alicante (ES)
I just launch dbcc log(dbname, 2),
and to read the .LDF file I used external tools.
> I am using DBCC Log command to see all the logs in online log file. After
> checkpoint these logs disappear from the dbcc log result as expected.
So I assume the database is in simple recovery mode?

> But when I open the .LDF file using tools available in market to read the
> .LDF file, I found that logs before checkpoint are still there in .LDF file,
> so when these logs will go from .LDF file and what I am gaining by doing
> checkpoint?
My guess is that the checkpoint (when in simple recovery mode) makes the space
available for re-use, but doesn't zero out the data; and that the log reader
tools do show this data, which isn't part of the active log but hasn't been
zeroed out.
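Tibor's explanation can be checked on a scratch database. Note that DBCC LOG is undocumented and unsupported, so the syntax below (the second argument, usually described as a detail level from 0 to 4) and its output format are folklore, not a contract:

```sql
-- Sketch on a throwaway database; DBCC LOG is undocumented/unsupported.
CREATE DATABASE LogDemo
GO
ALTER DATABASE LogDemo SET RECOVERY SIMPLE
GO
USE LogDemo
CREATE TABLE t (c INT)
INSERT INTO t VALUES (1)
GO
DBCC LOG('LogDemo', 2)   -- log records for the insert are visible
GO
CHECKPOINT               -- in simple recovery, marks the inactive log reusable
DBCC LOG('LogDemo', 2)   -- the earlier records no longer appear, though the
                         -- bytes may still sit un-zeroed in the .LDF
GO
```

This matches the observed behavior: the checkpoint removes the records from the active log that DBCC LOG reports, while a raw file reader can still see the stale bytes until they are overwritten.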
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
Regarding the simple recovery model: we have that option enabled on our
development databases as well.
--
Current location: Alicante (ES)
Thanks!!!

Sunday, March 11, 2012

DBCC INDEXDEFRAG

Hi Guru,
One of my vendor databases has about 500 tables whose owner is not dbo. I'm
having a hard time doing an online defrag Monday through Friday because my
code is not working. Please look at my code and modify it for the case where
the owner is not dbo.
USE ObjectManager
GO
IF EXISTS (SELECT * FROM dbo.sysobjects
           WHERE id = OBJECT_ID(N'[dbo].[uspOM_OnlineFragmentation]')
             AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    DROP PROCEDURE [dbo].[uspOM_OnlineFragmentation]
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
CREATE PROC dbo.uspOM_OnlineFragmentation
    @dbname sysname
AS
SET NOCOUNT ON
CREATE TABLE #UserTables(
    [id] INT,
    [name] VARCHAR(60)
)
--Create temporary table to hold DBCC SHOWCONTIG output
CREATE TABLE #FragmentationResult(
    ObjectName VARCHAR(255), ObjectId INT, IndexName VARCHAR(255),
    IndexId INT, [Level] INT, Pages INT, [Rows] INT,
    MinimumRecordSize INT, MaximumRecordSize INT,
    AverageRecordSize FLOAT, ForwardedRecords INT, Extents INT,
    ExtentSwitches INT, AverageFreeBytes FLOAT,
    AveragePageDensity FLOAT, ScanDensity FLOAT, BestCount INT,
    ActualCount INT, LogicalFragmentation FLOAT,
    ExtentFragmentation FLOAT
)
--Create temporary table to hold tables/indexes that require defragmentation
CREATE TABLE #Defragmentation(
    [id] INT IDENTITY,
    ObjectName VARCHAR(255),
    IndexName VARCHAR(255),
    ScanDensity FLOAT
)
--Identify all user tables in the target database to analyze fragmentation
INSERT INTO #UserTables([id], [name])
EXEC ('SELECT [id], [name] FROM ' + @dbname + '.dbo.sysobjects
       WHERE xtype = ''U'' AND [name] <> ''dtproperties''')
--Determine fragmentation of every user table/index
DECLARE @id INT, @name VARCHAR(255), @TableCnt INT
SET @id = 0
SELECT @TableCnt = COUNT(*) FROM #UserTables
WHILE @TableCnt > 0
BEGIN
    SELECT TOP 1 @id = [id], @name = [name]
    FROM #UserTables
    WHERE [id] > @id
    ORDER BY [id]   --TOP 1 without ORDER BY picks an arbitrary row
    INSERT INTO #FragmentationResult
    EXEC('USE ' + @dbname + ' DBCC SHOWCONTIG ([' + @name + '])
          WITH ALL_INDEXES, TABLERESULTS')
    SET @TableCnt = @TableCnt - 1
END
--Determine user tables/indexes that require defragmentation
INSERT INTO #Defragmentation
SELECT ObjectName, IndexName, ScanDensity
FROM #FragmentationResult
WHERE IndexId NOT IN (0, 255)       --Skip heaps and text/image entries
  AND ScanDensity < 40              --Scan density is low
  AND LogicalFragmentation > 10     --Logical scan fragmentation is high
  AND Pages > 8                     --Not a very small table
DROP TABLE #FragmentationResult
--Defragment tables/indexes with high fragmentation
DECLARE @oname VARCHAR(255), @iname VARCHAR(255), @sdensity FLOAT
SET @id = 0
SELECT @TableCnt = COUNT(*) FROM #Defragmentation
WHILE @TableCnt > 0
BEGIN
    SELECT TOP 1 @id = [id]
         , @oname = ObjectName
         , @iname = IndexName
         , @sdensity = ScanDensity
    FROM #Defragmentation
    WHERE [id] > @id
    ORDER BY [id]
    PRINT '** De-fragmentation #' + CAST(@id AS VARCHAR(15)) + ' **'
    PRINT 'DBCC INDEXDEFRAG on [' + @oname + '].[' + @iname
        + '] with ScanDensity = ' + CAST(@sdensity AS VARCHAR(15)) + '%'
    --DBCC DBREINDEX(@oname, @iname)
    DBCC INDEXDEFRAG(@dbname, @oname, @iname)
    SET @TableCnt = @TableCnt - 1
END
--Release resources
DROP TABLE #UserTables
DROP TABLE #Defragmentation
SET NOCOUNT OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
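For reference, a sketch of how the procedure above would be invoked, e.g. from a scheduled SQL Agent job ('VendorDb' is a placeholder database name):

```sql
-- Run the online defrag against the vendor database.
USE ObjectManager
GO
EXEC dbo.uspOM_OnlineFragmentation @dbname = 'VendorDb'
GO
```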
Thanks,
Silaphet,
Hi Silaphet
It's not the code that needs to change, but the roles assigned to whoever
is running this code.
DBO is not a role; it is a user name. You can assign the users who need to
run this to the db_owner or db_ddladmin role. Table owners can only run
DBCC INDEXDEFRAG or SHOWCONTIG on their own tables.
FROM BOL:
Permissions
DBCC INDEXDEFRAG permissions default to members of the sysadmin fixed server
role, the db_owner and db_ddladmin fixed database role, and the table owner,
and are not transferable.
DBCC SHOWCONTIG permissions default to members of the sysadmin fixed server
role, the db_owner and db_ddladmin fixed database roles, and the table
owner, and are not transferable.
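Acting on the permissions above, the fix might look like this ('VendorDb' and 'MaintUser' are placeholder names for the vendor database and the account that runs the defrag job):

```sql
-- Add the maintenance user to db_ddladmin in the vendor database so it can
-- run DBCC SHOWCONTIG / INDEXDEFRAG on tables it does not own.
USE VendorDb
GO
EXEC sp_addrolemember @rolename = 'db_ddladmin', @membername = 'MaintUser'
GO
```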
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com

Thursday, March 8, 2012


DBCC dllname (FREE)

According to SQL Server 2000 Books Online, this command requires membership
in the sysadmin or db_owner role. We have added the user to the db_owner
role of database 'master', yet the command fails and reports error 2571. Is
this the correct database? (The command succeeds so long as the login is a
member of sysadmin.)
Thanks,
Dana

Hi Dana,
My understanding of your issue is that you found DBCC dllname (FREE) could
not be executed by a member of db_owner, but could be run with sysadmin.
If I have misunderstood, please let me know.
I looked through our internal database and found that this is a
documentation issue (223167) in BOL. BOL should read:
DBCC dllname (FREE) permissions default to members of the sysadmin fixed
server role, and are not transferable.
Please assign the login account with sysadmin permission to run the
statement.
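The grant Charles describes would look something like this ('SomeLogin' is a placeholder login name):

```sql
-- SQL Server 2000: add a login to the sysadmin fixed server role so it
-- can run DBCC dllname (FREE).
EXEC sp_addsrvrolemember @loginame = 'SomeLogin', @rolename = 'sysadmin'
-- To verify the membership afterwards:
EXEC sp_helpsrvrolemember 'sysadmin'
```

Note that sysadmin membership grants full control of the server, so this is appropriate only for trusted maintenance accounts.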
If you have any other questions or concerns, please feel free to let me
know. It is my pleasure to be of assistance.
Charles Wang
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via
your newsreader so that others may learn and benefit
from this issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
==================================================

Hi Dana,
I am interested in this issue. Would you mind letting me know the result of
the suggestions? If you need further assistance, feel free to let me know.
I will be more than happy to be of assistance.
Have a great day!
Charles Wang
Microsoft Online Community Support

Friday, February 24, 2012

DBCC commands stability

Hi all,

I have a server from which keeping clients off for maintenance is
difficult. They all have VPN connections and can be online any time
they want, and uptime as always is important.

Now I need to run dbcc shrinkdatabase, checkdb and of course a
checkpoint right before backups, and whenever the log seems to grow. I
just tried dbcc checkdb on my home computer and apparently it's really
I/O and CPU intensive on this dual P3. Can users be running queries and
the occasional update and insert while checkdb is doing its thing? Or
is it better to lock everyone out?

How about shrinkdatabase? Any benchmarks on the stability of these
commands while other clients are running? If tables are getting locked
during these commands, the log file will grow even if shrinkdatabase
is running...

Any commands to show which tables are locked, and by whom or what?
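On the locking question, SQL Server 2000 ships system procedures for exactly this; a quick sketch (the object id below is a placeholder taken from sp_lock output):

```sql
-- See which objects are locked and by whom.
EXEC sp_lock    -- one row per held/requested lock: spid, ObjId, mode, status
EXEC sp_who2    -- map each spid to its login, host, and any blocking spid
-- Translate an ObjId from sp_lock into a table name; run this inside the
-- database the lock belongs to:
SELECT OBJECT_NAME(123456789)
```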

I just tried our 2.5GB database on my dual P3 home computer with 256MB
RAM: checkdb took 8 minutes and shrinkdb took 1.5 minutes. We have a
dual P3 server at work, an IBM xSeries 232 with 1GB RAM, but even 5
minutes of downtime can hurt if shrinkdb needs to be run during work
hours.

Any thoughts?

"Ghazan Haider" <ghazan@.ghazan.haider.name> wrote in message
news:2f57764a.0404091155.33f4439f@.posting.google.com...
> Hi all,
> I have a server from which keeping clients off for maintenance is
> difficult. They all have VPN connections and can be online any time
> they want, and uptime as always is important.
> Now I need to run dbcc shrinkdatabase, checkdb and of course
> checkpoint right before backups, and when the log seems to grow. I
> just tried dbcc checkdb on my home computer and apparently its really
> io and CPU intensive on this dual P3. Can users be running queries and
> the occasional update and insert while checkdb is doing its thing? Or
> is it better to lock everyone out?
> How about shrinkdatabase? Any benchmarks on the stability of these
> commands while other clients are running? If tables are getting locked
> during these commands, the log file will grow even if shrinkdatabase
> is running...
> Any commands to show which tables are locked, and by whom or what?
> I just tried our 2.5GB database on my dual P3 with 256MB ram home
> computer.. checkdb took 8 minutes and shrinkdb took 1.5 minutes. We've
> a dualP3 server at work, IBM xSeries 232 with 1GB ram, but even 5
> minutes of downtime can hurt if shrinkdb needs to be run during work
> hours.
> Any thoughts?

Yes, why should anyone contemplate running these things during
production hours when you have an automated task queue at your
disposal? Surely any realtime high volume transaction database
system has its natural cycles of usage and minimal usage?

Always move routine tasks into this window of opportunity via
automation and queuing of the task to the off-peak times.

--
Pete Brown
Winluck P/L
IT Managers & Engineers
Falls Creek
Australia
www.mountainman.com.au/software|||> Yes, why should anyone contemplate running these things during
> production hours when you have an automated task queue at your
> disposal? Surely any realtime high volume transaction database
> system has its natural cycles of usage and minimal usage?
> Always move routine tasks into this window of opportunity via
> automation and queuing of the task to the off-peak times.

The log file simply blows up at the worst possible times, swallowing all
of the 36GB disk in about 12 minutes. Apparently something is
locked while some other heavy transaction or bulk upload is going on.
Some of the financial transactions are really heavy and update lots of
rows. I'd just like to have the flexibility to checkpoint and shrinkdb
the database, and know what is locked and why.|||Why do you want to shrink the database? You've said it has very heavy
usage - if you shrink it, it'll grow again. That's because it needs all the
space for regular running - this is demonstrated by your (I'm assuming) need
to shrink every so often. Why cause the extra work for no gain? You'd be far
better off not shrinking the database at all.

You can run shrink and checkdb at any time, although they can cause up to a
20% drop (observed on a test system - YMMV) in transaction throughput.
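For reference, the two commands under discussion, in SQL Server 2000 syntax. This is only a sketch: the database name 'mydb' and the target free-space percentage are placeholders, not values from this thread.

```sql
-- Check the logical and physical integrity of everything in the database
DBCC CHECKDB ('mydb')

-- Shrink the database's data and log files, aiming to leave 10 percent free space
DBCC SHRINKDATABASE ('mydb', 10)
```

As the paragraph above points out, if the database needs the space for regular running, shrinking it only forces it to grow again later.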

Regards.

--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

"Ghazan Haider" <ghazan@.ghazan.haider.name> wrote in message
news:2f57764a.0404092151.27001b80@.posting.google.com...
> > Yes, why should anyone contemplate running these things during
> > production hours when you have an automated task queue at your
> > disposal? Surely any realtime high volume transaction database
> > system has its natural cycles of usage and minimal usage?
> > Always move routine tasks into this window of opportunity via
> > automation and queuing of the task to the off-peak times.
> The log file simply blows up at the worst possible times, swallowing all
> of the 36GB disk in about 12 minutes. Apparently something is
> locked while some other heavy transaction or bulk upload is going on.
> Some of the financial transactions are really heavy and update lots of
> rows. I'd just like to have the flexibility to checkpoint and shrinkdb
> the database, and know what is locked and why.|||"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<4078339a$1@.news.microsoft.com>...
> Why do you want to shrink the database? You've said it has very heavy
> usage - if you shrink it, it'll grow again. That's because it needs all the
> space for regular running - this is demonstrated by your (I'm assuming) need
> to shrink every so often. Why cause the extra work for no gain? You'd be far
> better off not shrinking the database at all.
> You can run shrink and checkdb at any time, although they can cause up to a
> 20% drop (observed on a test system - YMMV) in transaction throughput.

We've had the system freeze with the message "transaction log full" (no
more transactions or ERP system logins, which insert rows). That was
when the log file grew to several gigabytes and filled up the disk.

It'd be nice to be able to check what is locked and why, making the
logs grow. The shrink and checkdb are just an assurance after
unlocking whatever is locked, and running checkpoint to make sure
everything has been committed, so work can begin all over again. I
wouldn't really need to shrinkdb if there's a command to show the number
of uncommitted transactions in the log file, so I know everything has
been flushed.|||"Ghazan Haider" <ghazan@.ghazan.haider.name> wrote:

...[trim]...

> wouldn't really need to shrinkdb if there's a command to show the number
> of uncommitted transactions in the log file, so I know everything has
> been flushed.

Look up DBCC OPENTRAN.
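A minimal sketch of that command in SQL Server 2000 syntax ('mydb' is a placeholder database name). It reports the oldest active transaction in the log, including its spid and start time.

```sql
-- Report the oldest active transaction in the given database
DBCC OPENTRAN ('mydb')

-- The same report returned as a result set, easier to capture from a script
DBCC OPENTRAN ('mydb') WITH TABLERESULTS
```

A long-running open transaction is the usual reason the log cannot be truncated at a checkpoint, so this answers the "has everything been flushed" question more directly than shrinking does.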

--
Pete Brown
Winluck P/L
IT Managers & Engineers
Falls Creek
Australia
www.mountainman.com.au/software
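The earlier questions in this thread (which tables are locked, by whom, and how full the log is) can be sketched with standard SQL Server 2000 procedures; the object id below is a placeholder to be replaced with an ObjId taken from sp_lock output.

```sql
-- One row per lock held: spid, database id, object id, lock type and mode
EXEC sp_lock

-- One row per session: spid, status, login, host, and BlkBy (the blocking spid)
EXEC sp_who2

-- Resolve an ObjId from sp_lock to a table name; run this in that lock's database
SELECT OBJECT_NAME(123456789)

-- Log size and percent of the log in use for every database, to watch for runaway growth
DBCC SQLPERF (LOGSPACE)
```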