Tuesday, March 27, 2012
DBCC Shrinkdatabase
A question related to shrinking a database. Find below details of a
database I use, extracted from the sp_helpdb and sp_spaceused commands.
Database
Name : ABC
Size : 56999.44 MB
Status : select into/bulkcopy, trunc. log on chkpt
Name      Size         Max Size   Growth  Usage
--------  -----------  ---------  ------  ---------
Abc_Data  53734848 KB  Unlimited  10%     data only
Abc_Log   4632576 KB   Unlimited  10%     log only

database_name  database_size  unallocated space
-------------  -------------  -----------------
Abc            56999.44 MB    23566.41 MB

reserved     data         index_size  unused
-----------  -----------  ----------  -------
34235424 KB  30824144 KB  3404648 KB  6632 KB
I am looking to free up the unallocated space of 23 GB. How do I go
about doing that?
DBCC SHRINKDATABASE with TRUNCATEONLY did not free up any space.
Should I try "DBCC SHRINKDATABASE"? If so,
a) Is the amount of time taken for the process related to the size of
the database ?
b) If the answer to a) is in the affirmative, how long do you think
shrinking a 56 GB database would take ?
c) Should I stop all activity on the database before doing this ?
d) Will the process lock the database in entirety for the duration of
the execution of the command ?
Thank you,
Narayanan B
> a) Is the amount of time taken for the process related to the size of
> the database ?
Depends more on the level of fragmentation. Obviously, you have parts of
objects at the end of the data file, otherwise the TRUNCATEONLY option would
have been enough.
> b) If the answer to a) is in the affirmative, how long do you think
> shrinking a 56 GB database would take ?
Again, depends on the level of fragmentation.
> c) Should I stop all activity on the database before doing this ?
Would help, but not necessary.
> d) Will the process lock the database in entirety for the duration of
> the execution of the command ?
No, not the entire database.
It may sometimes be a good idea to leave the database size as is, if you
think it would grow to that size some day. This is because if the database
needs to grow dynamically, it comes with a small impact on performance. If
performance is currently an issue, you could try defragmenting individual
tables / indexes.
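A minimal sketch of what such a shrink might look like, assuming the logical file name Abc_Data from the sp_helpdb output above (the target sizes are illustrative, and the table/index names in the last line are hypothetical):

```sql
-- Sketch only: shrink the data file in steps rather than all at once,
-- so each pass moves fewer pages and holds locks for a shorter time.
USE Abc

-- Roughly 33 GB is actually used, so a target around 35000 MB
-- leaves some headroom while releasing most of the 23 GB.
DBCC SHRINKFILE (Abc_Data, 45000)
DBCC SHRINKFILE (Abc_Data, 40000)
DBCC SHRINKFILE (Abc_Data, 35000)

-- A shrink moves pages and tends to fragment indexes, so defragment
-- afterwards (INDEXDEFRAG is online; DBREINDEX is more thorough but offline).
DBCC INDEXDEFRAG (Abc, 'SomeLargeTable', 'SomeIndex')  -- hypothetical names
```

Shrinking file by file with DBCC SHRINKFILE also lets you leave the log file alone and target only the data file that holds the unallocated space.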
Peter Yeoh
http://www.yohz.com
Need smaller backup files? Try MiniSQLBackup
"narayanan" <narayanan@.mytiger.com> wrote in message
news:13949517.0406030114.52d7432d@.posting.google.c om...
> Hello,
> A question related to shrinking a database. Find below details of a
> database I use, extracted from sp_helpdb abd sp_spaceused commands
> Database
> Name : ABC
> Size : 56999.44 MB
> Status : select into/bulkcopy, trunc. log on chkpt
>
> Name Size Max Size Growth Usage
> ----
> Abc_Data 53734848 KB Unlimited 10% data only
> Abc_Log 4632576 KB Unlimited 10% log only
>
> database_name database_size unallocated space
> ----
--
> Abc 56999.44 MB 23566.41 MB
>
> reserved data index_size unused
> -- -- --
> --
> 34235424 KB 30824144 KB 3404648 KB 6632 KB
>
> I am looking to free up the unallocated space of 23 GB. How do I go
> about doing the same ?
> DBCC SHRINKDATABASE Truncateonly did not result in any freeing up of
> space.
> Should I try "DBCC Shrinkdatabase" ? If so,
> a) Is the amount of time taken for the process related to the size of
> the database ?
> b) If the answer to a) is in the affirmative, how long do you think
> shrinking a 56 GB database would take ?
> c) Should I stop all activity on the database before doing this ?
> d) Will the process lock the database in entirety for the duration of
> the execution of the command ?
> Thank you,
> Narayanan B
|||Also, a bit more info is found at:
http://www.karaszi.com/sqlserver/info_dont_shrink.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Peter Yeoh" <nospam@.nospam.com> wrote in message news:eieOE4USEHA.2480@.TK2MSFTNGP10.phx.gbl...
Sunday, March 25, 2012
Dbcc Showcontig
P.S. If possible, this should work on both SQL Server 7.0 and 2000.
USE master
DECLARE dnames_cursor CURSOR
FOR
SELECT name FROM sysdatabases
WHERE name NOT IN ('master', 'model', 'tempdb', 'pubs', 'Northwind', 'msdb')
OPEN dnames_cursor
DECLARE @dbname sysname
FETCH NEXT FROM dnames_cursor INTO @dbname
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        SELECT @dbname = RTRIM(@dbname)
        SET NOCOUNT ON
        SELECT 'ZZ_Date; ', GETDATE()
        PRINT 'ZZ_Name; ' + @dbname
        -- EXEC('USE ...') changes database context only inside the EXEC
        -- batch, so DBCC SHOWCONTIG must run in the same dynamic batch;
        -- otherwise it executes against master every time.
        EXEC ('USE ' + @dbname + '; DBCC SHOWCONTIG')
    END
    FETCH NEXT FROM dnames_cursor INTO @dbname
END
CLOSE dnames_cursor
DEALLOCATE dnames_cursor

You may check this DB Journal (http://www.databasejournal.com/features/mssql/article.php/1442901) link for more information.
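On SQL Server 2000 (not 7.0), a hedged alternative to parsing the message output is capturing SHOWCONTIG as a rowset; the column list below is as documented for 2000 and should be verified against your build:

```sql
-- Sketch: capture DBCC SHOWCONTIG output into a table for filtering/sorting.
CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
    [Level] int, Pages int, [Rows] int, MinimumRecordSize int,
    MaximumRecordSize int, AverageRecordSize float, ForwardedRecords int,
    Extents int, ExtentSwitches int, AverageFreeBytes float,
    AveragePageDensity float, ScanDensity float, BestCount int,
    ActualCount int, LogicalFragmentation float, ExtentFragmentation float
)

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

-- Worst-fragmented indexes first
SELECT ObjectName, IndexName, LogicalFragmentation
FROM #frag
ORDER BY LogicalFragmentation DESC

DROP TABLE #frag
```

This avoids scraping the 'ZZ_Name;' markers out of the text output when you only need the fragmentation numbers.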
Thursday, March 22, 2012
DBCC Results into a Table Brings Errors!
If you run the script below, it will cause an error. However, if you
run the select portion after the error, it will have accomplished the
desired result. Unfortunately, the table has to be manually dropped
after the proc is run. Any ideas why? Using the GO keyword is not
an option, as it will blow up the rest of the script.
CREATE TABLE #db_file_information(
fileid integer,
theFileGroup integer,
Total_Extents integer,
Used_Extents integer,
db varchar(30),
file_Path_name varchar(300)--,
-- File_Free_space decimal(15,2),
-- Percent_Free decimal(15,2)
)
-- Get the size of the datafiles
insert into #db_file_information exec('DBCC showfilestats')
-- add two columns to the temp table
alter table #db_file_information add dude as
((Total_Extents-Used_Extents)/(Total_extents*1.0))
select * from #db_file_information
drop table #db_file_information

Try,
use northwind
go
CREATE TABLE #db_file_information(
Fileid bigint,
[FileGroup] bigint,
TotalExtents bigint,
UsedExtents bigint,
[Name] sysname,
[FileName] varchar(260)
)
go
-- Get the size of the datafiles
insert into #db_file_information
exec('DBCC showfilestats with NO_INFOMSGS')
-- add two columns to the temp table
alter table #db_file_information
add dude as ((TotalExtents - UsedExtents) / nullif((TotalExtents * 1.0), 0))
go
select * from #db_file_information
go
drop table #db_file_information
go
AMB
"dpaskiet@.comcast.net" wrote:
Monday, March 19, 2012
DBCC MemoryStatus dump and help 'Insufficient memory'
Hey guys. Today morning at about 9:58am I had an insufficient memory exception in SQL. I've reserved 12gb memory for my SQL server. I've pasted below the dbcc MemoryStatus dump gotten from SQL log. It also gave me a BPool::Map: No remappable address found
My SQL Server version is 8.00.2148. Enterprise Ed. Can you see any problems in the below pasted log?
2006-08-09 09:58:11.63 spid175 BPool::Map: no remappable address found.
2006-08-09 09:58:11.69 spid101 Buffer Distribution: Stolen=156843 Free=1634 Procedures=17623
Inram=0 Dirty=108669 Kept=0
I/O=0, Latched=1220, Other=1286875
2006-08-09 09:58:11.69 spid101 Buffer Counts: Commited=1572864 Target=1572864 Hashed=1396751
InternalReservation=510 ExternalReservation=130 Min Free=1552 Visible= 185824
2006-08-09 09:58:11.69 spid101 Procedure Cache: TotalProcs=3811 TotalPages=17623 InUsePages=16353
2006-08-09 09:58:11.69 spid101 Dynamic Memory Manager: Stolen=151680 OS Reserved=1432
OS Committed=1397
OS In Use=1387
Query Plan=90752 Optimizer=21
General=17603
Utilities=3220 Connection=10805
2006-08-09 09:58:11.69 spid101 Global Memory Objects: Resource=6560 Locks=31195
SQLCache=651 Replication=2
LockBytes=2 ServerGlobal=22
Xact=1021
2006-08-09 09:58:11.69 spid101 Query Memory Manager: Grants=2 Waiting=0 Maximum=26799 Available=3906
Thank you.
Are you running a server app on this SQL Server machine?

I don't quite understand your question, but I think this is what you are asking. There are 4 machines which can access the db server. Two of these are the web servers. The front-end app is written in vb.net. The other two are backend application servers which have windows services installed on them. What they do is read data from a file and, depending on the data, do some inserts, updates, deletes etc. I hope I answered your question...

Hi Tej, I experienced a similar problem a week back. The performance counters were normal but it ran out of memory for some reason. A restart fixed it but I don't know the cause.

Tej,
I'm having the same issue on build 2187. I'm working with PSS on resolution (SLOWLY). It's my theory that the optimizer is the root issue, as dbcc memorystatus is bringing back huge swings within a minute or two timeframe (from 0 to 125,000 buffers) just before the 701 error pops into the applog. Have you been able to reproduce this issue or find a 701 error message in your application log? It doesn't show up in the sql server error log... but there is a good command for getting a mini-dump during the 701 errors. Simply use dbcc dumptrigger ('set', 701). I wouldn't recommend using this "just because", but it will help MS PSS further diagnose your problem.
Determining the root cause of a memory issue is sometimes difficult.
From the output in the original log it seems there are a number of contributing factors.
Here's the breakdown on that system.
There are 185824 (Visible=185824) buffers in virtual address space that can be mapped in at any one time. This is a normal value on 32-bit systems.
Of those buffers, 156843 are "stolen" (Stolen=156843), that is, in use by a component other than the buffer pool.
Stolen buffers can further be broken down:
- Procedure cache = 17623
- Query plan = 90752
- Connection = 10805
- Locks = 31195
Procedure cache and query plan could well be driven up by a large number of adhoc queries that are not parameterized.
The connection size indicates there are a fair number of clients connected.
The memory stolen for locks is large, but not abnormal.
Resolving the insufficient memory issues here will most likely involve some tuning of the apps to use better parameterized queries, and improving query plans to reduce the number of locks taken.
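As one illustration of "better parameterized queries", a hedged sketch (the table and column names, and the id value, are hypothetical):

```sql
-- Adhoc form: one cached plan per distinct literal value, e.g.
--   EXEC ('SELECT * FROM Orders WHERE CustomerId = ' + @id)
-- Parameterized form: a single reusable plan, via sp_executesql.
DECLARE @id int
SET @id = 42  -- hypothetical value

EXEC sp_executesql
    N'SELECT * FROM Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = @id
```

Because the statement text is identical on every call, SQL Server can match and reuse the cached plan instead of compiling and caching a new one per value, which is what inflates the Query Plan figure above.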
Hi, I have a similar question, in that our server only shows 100MB of memory being used by SQL when AWE is enabled, and we have 8GB in the box with a max setting of 7GB. We have PAE enabled, and the boot.ini is below as well. Does this dump mean that SQL is only using 104MB of memory at the moment?
Thanks in advance,
Scott
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /noexecute=OptIn /PAE
Here is our dump.
Memory Manager KB
--
VM Reserved 1674140
VM Committed 104384
AWE Allocated 6747704
Reserved Memory 1024
Reserved Memory In Use 0
(5 row(s) affected)
Memory node Id = 0 KB
--
VM Reserved 3456
VM Committed 34976
AWE Allocated 2540400
MultiPage Allocator 3264
SinglePage Allocator 183128
(5 row(s) affected)
Memory node Id = 1 KB
--
VM Reserved 1666588
VM Committed 65460
AWE Allocated 4207304
MultiPage Allocator 9632
SinglePage Allocator 183128
I presume you see the 100MB value through Task Manager or perfmon. This is normal when AWE is enabled, as these tools do not account for physical page allocations done by applications.
Have a look at the AWE Allocated entry. It indicates that over 6GB has been allocated.
AWE Allocated 6747704
Another way to confirm is to look at the SQL Server:Memory Manager\Total Server memory(KB) counter from performance monitor.
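The same counter can also be read from T-SQL; a sketch for SQL Server 2000, assuming the standard master..sysperfinfo view:

```sql
-- Read "Total Server Memory (KB)" from the performance counters view
-- instead of opening Performance Monitor.
SELECT cntr_value AS total_server_memory_kb
FROM master..sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name = 'Total Server Memory (KB)'
```

With AWE enabled this figure should agree with the AWE Allocated entry in the dump rather than with Task Manager's working-set number.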
DBCC MemoryStatus dump and help 'Insufficient memory'
Hey guys. Today morning at about 9:58am I had an insufficient memory exception in SQL. I've reserved 12gb memory for my SQL server. I've pasted below the dbcc MemoryStatus dump gotten from SQL log. It also gave me a BPool::Map: No remappable address found
My SQL Server version is 8.00.2148. Enterprise Ed. Can you see any problems in the below pasted log?
2006-08-09 09:58:11.63 spid175 BPool::Map: no remappable address found.
2006-08-09 09:58:11.69 spid101 Buffer Distribution: Stolen=156843 Free=1634 Procedures=17623
Inram=0 Dirty=108669 Kept=0
I/O=0, Latched=1220, Other=1286875
2006-08-09 09:58:11.69 spid101 Buffer Counts: Commited=1572864 Target=1572864 Hashed=1396751
InternalReservation=510 ExternalReservation=130 Min Free=1552 Visible= 185824
2006-08-09 09:58:11.69 spid101 Procedure Cache: TotalProcs=3811 TotalPages=17623 InUsePages=16353
2006-08-09 09:58:11.69 spid101 Dynamic Memory Manager: Stolen=151680 OS Reserved=1432
OS Committed=1397
OS In Use=1387
Query Plan=90752 Optimizer=21
General=17603
Utilities=3220 Connection=10805
2006-08-09 09:58:11.69 spid101 Global Memory Objects: Resource=6560 Locks=31195
SQLCache=651 Replication=2
LockBytes=2 ServerGlobal=22
Xact=1021
2006-08-09 09:58:11.69 spid101 Query Memory Manager: Grants=2 Waiting=0 Maximum=26799 Available=3906
Thank you.
Are you running a server app on this SQL Server machine?|||I don't quite understand your question but I think this is what you are asking. There are 4 machines which could acces the db server. Two of these are the web servers. The front app is written in vb.net. The other two are backend application servers which have windows services installed on them. What they do is, they read data from a file and depending on the data, do some ins,upd,del etc. I hope I answered your question...|||Hi Tej, I experienced a similar problem a week back. The performancecounters were normal but it ran out of memory for some reason. A
restart fixed it but I don't know the cause.|||
Tej,
I'm having the same issue on build 2187. I'm working with PSS on resolution (SLOWLY). It's my theory that the optimizer is the root issue as dbcc memorystatus is bringing back huge swings within a minute or two timeframe (from 0 to 125,000 buffers) just before the 701 error pops into the applog. Have you been able to reproduce this issue or find a 701 error message in your application log? It doesn't show up in the sql server error log... but there is a good command for getting a mini-dump during the 701 errors. Simply use dbcc dumptrigger ('set',701) . I wouldn't recommend using this "just because" but it will help MS PSS further diagnose your problem.
|||Determining the root cause of a memory issue is sometimes difficult.
From the output in the original log it seems there are a number of contributing factors.
Here's the breakdown on that system.
There are 185824 (Visible= 185824) buffers in virtual address space that can be mapped in at any one time. This is a normal value on 32 bit systems.
Of those buffers 156843 are "stolen" (Stolen=156843) - or in use by a component other than the buffer pool.
Stolen buffers can further be broken down:
- Procedure cache 17623
- Query Plan=90752
- Connection= 10805
- Locks=31195
Procedure cache and query plan could well be driven up by a large number of adhoc queries that are not parameterized.
The connection size indicates there are a fair amount of client connected.
The memory stolen for locks is large, but not abnormal.
Resolving the insufficient memory issues here will most likely involve some tuning of the apps to use better parameterized queries, and improving query plans to reduce the number of locks taken.
|||Hi, I have a similar question in that our server only shows 100MB of memory being used by SQL when AWE is enabled and we have 8GB in the box with a max setting of 7GB. We have PAE enabled and the boot.ini is below as well. Does this dump mean that SQL is only using 104MB of memory at the moment?
Thanks in advance,
Scott
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /noexecute=OptIn /PAE
Here is our dump.
Memory Manager KB
--
VM Reserved 1674140
VM Committed 104384
AWE Allocated 6747704
Reserved Memory 1024
Reserved Memory In Use 0
(5 row(s) affected)
Memory node Id = 0 KB
--
VM Reserved 3456
VM Committed 34976
AWE Allocated 2540400
MultiPage Allocator 3264
SinglePage Allocator 183128
(5 row(s) affected)
Memory node Id = 1 KB
--
VM Reserved 1666588
VM Committed 65460
AWE Allocated 4207304
MultiPage Allocator 9632
SinglePage Allocator 183128
|||I presume you see the 100MB value through task manager or perfmon. This is normal when AWE is enabled as these tools do not account for physical page allocations done by applications.
Have a look at the AWE Allocated entry. It indicates that over 6GB has been allocated.
AWE Allocated 6747704
Another way to confirm is to look at the SQL Server:Memory Manager\Total Server memory(KB) counter from performance monitor.
DBCC MemoryStatus dump and help 'Insufficient memory'
Hey guys. Today morning at about 9:58am I had an insufficient memory exception in SQL. I've reserved 12gb memory for my SQL server. I've pasted below the dbcc MemoryStatus dump gotten from SQL log. It also gave me a BPool::Map: No remappable address found
My SQL Server version is 8.00.2148. Enterprise Ed. Can you see any problems in the below pasted log?
2006-08-09 09:58:11.63 spid175 BPool::Map: no remappable address found.
2006-08-09 09:58:11.69 spid101 Buffer Distribution: Stolen=156843 Free=1634 Procedures=17623
Inram=0 Dirty=108669 Kept=0
I/O=0, Latched=1220, Other=1286875
2006-08-09 09:58:11.69 spid101 Buffer Counts: Commited=1572864 Target=1572864 Hashed=1396751
InternalReservation=510 ExternalReservation=130 Min Free=1552 Visible= 185824
2006-08-09 09:58:11.69 spid101 Procedure Cache: TotalProcs=3811 TotalPages=17623 InUsePages=16353
2006-08-09 09:58:11.69 spid101 Dynamic Memory Manager: Stolen=151680 OS Reserved=1432
OS Committed=1397
OS In Use=1387
Query Plan=90752 Optimizer=21
General=17603
Utilities=3220 Connection=10805
2006-08-09 09:58:11.69 spid101 Global Memory Objects: Resource=6560 Locks=31195
SQLCache=651 Replication=2
LockBytes=2 ServerGlobal=22
Xact=1021
2006-08-09 09:58:11.69 spid101 Query Memory Manager: Grants=2 Waiting=0 Maximum=26799 Available=3906
Thank you.
Are you running a server app on this SQL Server machine?|||I don't quite understand your question but I think this is what you are asking. There are 4 machines which could acces the db server. Two of these are the web servers. The front app is written in vb.net. The other two are backend application servers which have windows services installed on them. What they do is, they read data from a file and depending on the data, do some ins,upd,del etc. I hope I answered your question...|||Hi Tej, I experienced a similar problem a week back. The performancecounters were normal but it ran out of memory for some reason. A
restart fixed it but I don't know the cause.|||
Tej,
I'm having the same issue on build 2187. I'm working with PSS on resolution (SLOWLY). It's my theory that the optimizer is the root issue as dbcc memorystatus is bringing back huge swings within a minute or two timeframe (from 0 to 125,000 buffers) just before the 701 error pops into the applog. Have you been able to reproduce this issue or find a 701 error message in your application log? It doesn't show up in the sql server error log... but there is a good command for getting a mini-dump during the 701 errors. Simply use dbcc dumptrigger ('set',701) . I wouldn't recommend using this "just because" but it will help MS PSS further diagnose your problem.
|||Determining the root cause of a memory issue is sometimes difficult.
From the output in the original log it seems there are a number of contributing factors.
Here's the breakdown on that system.
There are 185824 (Visible= 185824) buffers in virtual address space that can be mapped in at any one time. This is a normal value on 32 bit systems.
Of those buffers 156843 are "stolen" (Stolen=156843) - or in use by a component other than the buffer pool.
Stolen buffers can further be broken down:
- Procedure cache 17623
- Query Plan=90752
- Connection= 10805
- Locks=31195
Procedure cache and query plan could well be driven up by a large number of ad hoc queries that are not parameterized.
The connection size indicates there are a fair number of clients connected.
The memory stolen for locks is large, but not abnormal.
Resolving the insufficient memory issues here will most likely involve some tuning of the apps to use better parameterized queries, and improving query plans to reduce the number of locks taken.
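To illustrate the parameterization point above, here is a minimal sketch (the table and parameter names are hypothetical, not from the poster's workload). Building the statement as a string caches a separate plan per literal value; sp_executesql lets the server reuse one cached plan across all parameter values:

```sql
-- Ad hoc pattern: a distinct plan may be cached for every @id value.
-- EXEC ('SELECT * FROM dbo.Orders WHERE OrderId = ' + @id)

-- Parameterized pattern: one plan, reused across calls.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE OrderId = @OrderId',
    N'@OrderId INT',
    @OrderId = 42
```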
|||Hi, I have a similar question in that our server only shows 100MB of memory being used by SQL when AWE is enabled and we have 8GB in the box with a max setting of 7GB. We have PAE enabled and the boot.ini is below as well. Does this dump mean that SQL is only using 104MB of memory at the moment?
Thanks in advance,
Scott
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /noexecute=OptIn /PAE
Here is our dump.
Memory Manager KB
--
VM Reserved 1674140
VM Committed 104384
AWE Allocated 6747704
Reserved Memory 1024
Reserved Memory In Use 0
(5 row(s) affected)
Memory node Id = 0 KB
--
VM Reserved 3456
VM Committed 34976
AWE Allocated 2540400
MultiPage Allocator 3264
SinglePage Allocator 183128
(5 row(s) affected)
Memory node Id = 1 KB
--
VM Reserved 1666588
VM Committed 65460
AWE Allocated 4207304
MultiPage Allocator 9632
SinglePage Allocator 183128
|||I presume you see the 100MB value through task manager or perfmon. This is normal when AWE is enabled as these tools do not account for physical page allocations done by applications.
Have a look at the AWE Allocated entry. It indicates that over 6GB has been allocated.
AWE Allocated 6747704
Another way to confirm is to look at the SQL Server:Memory Manager\Total Server memory(KB) counter from performance monitor.
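On SQL Server 2005 and later, the same counter can also be read from T-SQL (a sketch; requires VIEW SERVER STATE permission, and the object name varies for named instances):

```sql
-- Total Server Memory includes AWE-allocated pages, unlike the working-set
-- figure task manager shows.
SELECT cntr_value AS total_server_memory_kb
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Total Server Memory (KB)%'
```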
Sunday, March 11, 2012
DBCC INDEXDEFRAG, DBREINDEX
I modified the DBCC INDEXDEFRAG script from SQL Server
BOL and added DBCC DBREINDEX to it. In the script listed
below, if the logical fragmentation is greater than 75 I
use DBCC DBREINDEX; if it is between 5 and 75 I use
DBCC INDEXDEFRAG.
The clause
INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
in the script eliminates all clustered indexes.
Should this clause be eliminated from my script?
I would like to re-index or defragment all indexes in my
database without using the SQL Server maintenance
Optimization job.
Thanks,
Don
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag DECIMAL
DECLARE @IndexName VARCHAR (255)
DECLARE @maxfrag DECIMAL
DECLARE @CmdType VARCHAR (255)

-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 5.0

-- Declare a cursor over all user tables
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

-- Create the work table for the DBCC SHOWCONTIG results
CREATE TABLE #fraglist (
ObjectName CHAR (255) NULL,
ObjectId INT NULL,
IndexName CHAR (255) NULL,
IndexId INT NULL,
Lvl INT NULL,
CountPages INT NULL,
CountRows INT NULL,
MinRecSize INT NULL,
MaxRecSize INT NULL,
AvgRecSize INT NULL,
ForRecCount INT NULL,
Extents INT NULL,
ExtentSwitches INT NULL,
AvgFreeBytes INT NULL,
AvgPageDensity INT NULL,
ScanDensity DECIMAL NULL,
BestCount INT NULL,
ActualCount INT NULL,
LogicalFrag DECIMAL NULL,
ExtentFrag DECIMAL NULL)

-- Open the cursor
OPEN tables

-- Loop through all the tables in the database
FETCH NEXT
FROM tables
INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Run DBCC SHOWCONTIG on all indexes of the table
    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
        WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT
    FROM tables
    INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

-- Declare a cursor over the indexes to be defragmented;
-- the IndexDepth filter skips rows that are not real indexes
DECLARE indexes CURSOR FOR
SELECT ObjectName
     , ObjectId
     , IndexId
     , LogicalFrag
     , IndexName
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
  AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
ORDER BY LogicalFrag DESC

-- Open the cursor
OPEN indexes

-- Loop through the indexes
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag, @IndexName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Executing check on (0, ' + RTRIM(@tablename)
        + ', ' + RTRIM(@IndexName) + ') - fragmentation currently '
        + RTRIM(CONVERT(varchar(15), @frag)) + '%'
    IF @frag > 75
        SELECT @CmdType = 'DBCC DBREINDEX (''' + RTRIM(@tablename)
            + ''', ''' + RTRIM(@IndexName) + ''', 0)'
    ELSE IF @frag BETWEEN @maxfrag AND 75
        SELECT @CmdType = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid)
            + ', ' + RTRIM(@indexid) + ')'
    EXEC (@CmdType)
    FETCH NEXT
    FROM indexes
    INTO @tablename, @objectid, @indexid, @frag, @IndexName
END

-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes

-- Drop the temporary table
DROP TABLE #fraglist
GO
Don,
My reading of INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0 is that
it tells how many levels there are in the index tree (levels above the
leaves). That should not _specifically_ target clustered indexes.
In any case, your use of this property depends on what you are wanting to
do. (I wouldn't use it, personally.)
Russell Fields
"Don" <anonymous@discussions.microsoft.com> wrote in message
news:1b6eb01c44fbf$b00964d0$a001280a@phx.gbl...
> I modified the DBCC INDEXDEFRAG script from SQL Server
> BOL. I added the DBCC DBREINDEX to the script. I used in
> the script listed below if the logical fragmentation is
> greater than 75 then used DBCC DBREINDEX, else then
> between 5 and 75 then use DBCC INDEXDEFRAG.
>
> The script
> INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
> eliminates all cluster indexes.
> Should this statement be eliminate from my script?
> I would like to re-index or defragment all indexes in my
> database without using the SQL Server maintenance
> Optimization job.
> Thanks,
> Don
> [script snipped]
>
|||The INDEXPROPERTY clause I included in the BOL example removes all heaps
from consideration. All indexes (clustered or non-clustered) have depth > 0.
You have to keep that clause in there or you may get errors.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Russell Fields" <RussellFields@NoMailPlease.Com> wrote in message
news:e$$F$T#TEHA.3872@TK2MSFTNGP10.phx.gbl...
> Don,
> My reading of INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0 is
that
> it tells how many levels there are in the index tree (levels above the
> leaves). That should not _specifically_ target clustered indexes.
> In any case, your use of this property depends on what you are wanting to
> do. (I wouldn't use it, personally.)
> Russell Fields
>
> "Don" <anonymous@.discussions.microsoft.com> wrote in message
> news:1b6eb01c44fbf$b00964d0$a001280a@.phx.gbl...
>
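A quick way to see what the clause keeps, per the reply above (a sketch with hypothetical object names): any real index, clustered or nonclustered, reports an index depth of at least 1, so its row survives the `IndexDepth > 0` filter, while heap rows do not.

```sql
-- Hypothetical names: for an existing index this returns the number of
-- B-tree levels (>= 1), so the row passes the IndexDepth > 0 test.
SELECT INDEXPROPERTY(OBJECT_ID('dbo.MyTable'),
                     'IX_MyTable_Col1',
                     'IndexDepth') AS IndexDepth
```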
Saturday, February 25, 2012
DBCC dbreindex and Backup Size
We have two one-time scheduled jobs running the DBCC command below; they
happened to run on Monday and Tuesday, after the full backup on Sunday:
DBCC DBREINDEX ('My Table', '', 90)
The tables being reindexed are rather big, about 30 GB in total.
Our backup size increased on those two days; our differential backups on
Mondays and Tuesdays are usually significantly smaller. I wonder if someone
can relate this to DBCC DBREINDEX, or suggest any other possibilities.
Thanks
TO
> Our backup size increases in those two days.
Can you elaborate further? What type of backup? Database, diff or log? Is the backup large before or
after the operation?
If a diff backup is larger than usual, that is expected: you have shuffled (= modified) a lot of data
with the reindex.
If it is a log backup, then the same goes, you have modified a lot of data. These modifications have
been logged.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"C TO" <CTO@discussions.microsoft.com> wrote in message
news:86C5C2CC-F1E8-4148-88D4-EADFC2599D05@microsoft.com...
> We have two one-time scheduled jobs running the DBCC below, this happened to
> be Mon and Tuesday after the full backup on Sunday:
> DBCC dbreindex (My Table,'',90)
> The table being DBCC'ed are rather big, can be 30GB in total.
> Our backup size increases in those two days. Our differential backups on Mon
> and Tuesdays are usually significantly lower. I wonder someone can relate
> this to DBCC DBReindex or any other possibilites.
> Thanks
> TO
>
|||There is a regular, large amount of inserts (because of an ETL process)
***every day***; no noticeable difference on those days, though.
We have a full DB backup on Sunday and a diff DB backup daily.
Should I not run DBCC DBREINDEX with a fill factor of 90 the day after the
full backup?
>You have shuffled (= modified) a lot of data with the reindex).
Can you further explain? It sounds like this may be the answer.
The database is 58 GB and the diff for the day is 55 GB. Same day last week
was 26.961 GB.
BTW, a 27 GB diff seems way too big to me. What strategy can I use to
reduce our backup size without jeopardizing or compromising our
recoverability (longer restores are acceptable)?
Thanks.
"Tibor Karaszi" wrote:
> Can you elaborate further? What type of backup? Database, diff or log? Is the backup large before or
> after the operation?
> If a diff backup is larger than usual it is expected. You have shuffled (= modified) a lot of data
> with the reindex.
> If it is a log backup, then the same goes, you have modified a lot of data. These modifications have
> been logged.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "C TO" <CTO@discussions.microsoft.com> wrote in message
> news:86C5C2CC-F1E8-4148-88D4-EADFC2599D05@microsoft.com...
>
|||That is a logged operation, so it will expand your transaction log. Take a
peek at the BULK_LOGGED recovery model to see if perhaps it makes sense for
your scenario.
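The suggested switch is a one-statement change each way (a sketch; 'Abc' stands in for the actual database name). Under BULK_LOGGED, index rebuilds are minimally logged, at the cost that a log backup covering the rebuild cannot be used for point-in-time restore within that interval:

```sql
-- 'Abc' is a placeholder database name.
ALTER DATABASE Abc SET RECOVERY BULK_LOGGED
-- ... run DBCC DBREINDEX here ...
ALTER DATABASE Abc SET RECOVERY FULL
```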
"C TO" wrote:
> We have two one-time scheduled jobs running the DBCC below, this happened to
> be Mon and Tuesday after the full backup on Sunday:
> DBCC dbreindex (My Table,'',90)
> The table being DBCC'ed are rather big, can be 30GB in total.
> Our backup size increases in those two days. Our differential backups on Mon
> and Tuesdays are usually significantly lower. I wonder someone can relate
> this to DBCC DBReindex or any other possibilites.
> Thanks
> TO
>
|||Forgot to mention that the recovery model is "Simple", my bad. We have no
logged operations.
Is there any DBCC + backup strategy we can implement?
Thanks.
"Dennis Forbes" wrote:
> That is a logged operation, so it will expand your transaction log. Take a
> peek at the BULK_LOGGED recovery model to see if perhaps it makes sense for
> your scenario.
> "C TO" wrote:
|||Ahh, okay.
The DBCC DBREINDEX is marking the pages as changed. Perhaps you should run
the reindex routines before your full backup.
"C TO" wrote:
> Forgot to mention that the recovery mode is "Simple", my bad. We have no
> logged operation.
> Is there any DBCC + Backup strategy can be implement?
> Thanks.
> "Dennis Forbes" wrote:
|||>>You have shuffled (= modified) a lot of data with the reindex.
> Can you further explain? It sounds like this may be the answer.
Yes, the index is rebuilt, hence all the data in the table has been modified (assuming there's a
clustered index on the table). All that data is included in the diff backup.
Can you be more selective with your rebuilds? Also, perhaps DBCC INDEXDEFRAG would suit you better?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"C TO" <CTO@.discussions.microsoft.com> wrote in message
news:91C42295-D1E9-4545-990D-43AEDF876184@.microsoft.com...
> There are regular large amounts of inserts (because of the ETL process)
> ***every day***, though with no noticeable difference in the diff size on those days.
> We have full db backup on Sunday and diff db backup daily.
> Should I avoid running DBCC DBREINDEX with a fill factor of 90 the day after the full
> backup?
> Can you further explain? It sounds like this may be the answer.
>
> The database is 58 GB and the diff for the day is 55 GB. The same day last week
> it was 26.961 GB.
> BTW, a 27 GB diff is far too big to me. What strategy can I use to
> reduce our backup size without jeopardizing or compromising our recoverability
> (longer restore times are acceptable)?
> Thanks.
>
>
> "Tibor Karaszi" wrote:
|||Does that explain the 28 GB increase in backup size? The DBCC DBREINDEX job
was reindexing tables amounting to about one third of the 59 GB database
size. Assuming this is true, the marked pages total 28 GB. Are these all index
pages?
"Dennis Forbes" wrote:
> Ahh, okay.
> The DBCC DBREINDEX is marking the pages as changed. Perhaps you should run
> the reindex routines before your full backup.
> "C TO" wrote:
|||You probably have a clustered index on the table, and a clustered index
contains the entirety of the data (a clustered index is the data), so in that
case the answer would be yes.
"C TO" wrote:
> Does that explain the 28 GB increase in backup size? The DBCC DBREINDEX job
> was reindexing tables amounting to about one third of the 59 GB database
> size. Assuming this is true, the marked pages total 28 GB. Are these all index
> pages?
>
> "Dennis Forbes" wrote: