Saturday, February 25, 2012

DBCC dbreindex and transactional replication ..

I have transactional replication going. The database size is about 27 gigs
and we purge data every week to keep it around that size. Obviously I want to
run DBCC DBREINDEX after that. What are the performance implications?
Any gotchas?
Regards
-srini
After the reindex, queries that benefit from range scans on clustered
indexes will run faster. What database recovery mode are you using for this
database? After the reindex, your transaction log could be huge.
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
"srini" <srini@.discussions.microsoft.com> wrote in message
news:4ED82FB1-02C3-4DDF-A2AA-8D12040BDD63@.microsoft.com...
I have transactional replication going. The database size is about 27 gigs
and we purge data every week to keep it around that size. Obviously I want to
run DBCC DBREINDEX after that. What are the performance implications?
Any gotchas?
Regards
-srini

Srini,
the choice is between DBCC DBREINDEX and DBCC INDEXDEFRAG. Unlike DBCC
DBREINDEX, DBCC INDEXDEFRAG is an online operation - it doesn't hold locks
long term and thus will not block running queries or updates. However, a very
fragmented index might take considerably longer to defragment than to
rebuild. So, if you have a definite maintenance window I'd recommend DBCC DBREINDEX;
otherwise use DBCC INDEXDEFRAG.
Regards,
Paul Ibison
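For anyone weighing the two options Paul describes, a minimal sketch of the decision on SQL 2000 might look like the following; the table and index names are invented for illustration. Check fragmentation first with DBCC SHOWCONTIG, then pick the offline rebuild or the online defrag.
-- look at Logical Scan Fragmentation and Avg. Page Density in the output
DBCC SHOWCONTIG ('dbo.Orders', 'IX_Orders_OrderDate')
-- offline rebuild: fastest on a badly fragmented index, but locks the table,
-- so it needs a maintenance window (0 = keep the existing fill factor)
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_OrderDate', 0)
-- online alternative: slower on very fragmented indexes, but does not block
-- queries or updates for long periods (0 = current database)
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_OrderDate')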
|||Paul Ibison wrote:
> Srini,
> the choice is between DBCC DBREINDEX and DBCC INDEXDEFRAG. Unlike DBCC
> DBREINDEX , DBCC INDEXDEFRAG is an online operation - it doesn't hold locks
> long term and thus will not block running queries or updates, however a very
> fragmented index might take considerably longer to defragment than to
> rebuild. So, if you have a definite window then I'd recommend DBCC DBREINDEX
> otherwise DBCC INDEXDEFRAG.
> Regards,
> Paul Ibison
On SQL Server 7.0 you have only DBCC DBREINDEX, so this reduces your
options. But if there are a lot of indexes on your tables, you can
reindex them one by one (or only some of them), which can be much faster.
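A rough sketch of that one-at-a-time approach; the table and index names below are placeholders, and the list comes from sysindexes (which also holds auto-created statistics, hence the name filter):
-- real indexes on one table: indid 1 = clustered, 2-250 = nonclustered
SELECT name, indid
FROM sysindexes
WHERE id = OBJECT_ID('dbo.BigTable')
  AND indid BETWEEN 1 AND 250
  AND name NOT LIKE '[_]WA[_]Sys%'
-- rebuild only the indexes that actually need it, one at a time
DBCC DBREINDEX ('dbo.BigTable', 'IX_BigTable_CustomerId')
DBCC DBREINDEX ('dbo.BigTable', 'IX_BigTable_OrderDate')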
|||Olivier,
this is a useful caveat, but there is no mention of SQL 7.0 in the original
post.
Regards,
Paul Ibison
|||I guess my question was more like what impact it will have on replication.
The resident DBA tells me that doing DBCC DBREINDEX will delete and insert
rows on the subscriber(s), obviously causing a lot of headaches. Apparently he
has seen this happen. Has anyone else seen this?
-sk
"srini" wrote:

> I have transactional replication going. The database size is about 27 gigs
> and we purge data every week to keep it around that size. Obviously I want to
> run DBCC DBREINDEX after that. What are the performance implications?
> Any gotchas?
> Regards
> -srini
|||Srini,
I'm only aware of blocking issues - I've never heard of DBCC DBREINDEX
removing rows. If possible, please try to get more information and post back
because this would be very important if it is reproducible.
Regards,
Paul Ibison
|||Paul Ibison wrote:
> Srini,
> I'm only aware of blocking issues - I've never heard of DBCC DBREINDEX
> removing rows. If possible, please try to get more information and post back
> because this would be very important if it is reproducible.
> Regards,
> Paul Ibison
>
In my experience with SQL Server 7.0 as the primary server (I hope that, stated
this way, I won't offend Paul),
DBCC DBREINDEX does not produce any insert/delete activity or log consumption.
But beware of your data files: typically the table is physically duplicated.
For example, if your data file has only 100 MB left, do not reindex a 1 GB
table.
|||> In my experience with sqlserver 7 as primary server ( I hope that stated
> this way I won't offend Paul )
Not at all :-)

> dbcc reindex does not produce any insert/delete or log consumption.
In SQL 2000 I've seen the log impact of DBCC DBREINDEX as high, but it can be
minimal if the bulk-logged recovery model is used. This may be another
reason, apart from locking, to select DBCC INDEXDEFRAG, as the logging is
much less.
Regards,
Paul Ibison
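A minimal sketch of the recovery-model switch Paul is referring to, assuming a database called MyDB that normally runs in full recovery (the names and the backup path are made up). Note that bulk-logged removes point-in-time recovery for the interval covered by that log backup, so take a log backup straight after switching back:
-- minimally log the rebuild
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
DBCC DBREINDEX ('dbo.BigTable', '')   -- '' rebuilds every index on the table
ALTER DATABASE MyDB SET RECOVERY FULL
-- resume the normal log chain
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.bak'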
|||Thanks for the responses. I am aware of the log impact. I was puzzled myself
about the delete/insert rows impact; I have still not been able to reproduce
this.
Thanks again
-srini
"Paul Ibison" wrote:

> Not at all :-)
>
> In SQL 2000 I've seen the log impact of DBCC DBREINDEX as high, but it can be
> minimal if the bulk-logged recovery model is used. This may be another
> reason, apart from locking, to select DBCC IndexDefrag as the logging is
> much less.
> Regards,
> Paul Ibison
>
>

dbcc dbreindex and tables with no clustered index

I have tables in my database with no clustered indexes, just some
nonclustered ones. Is there any benefit to running DBCC DBREINDEX on
these tables?
DBCC DBREINDEX will not do anything for your heap (data pages in a table without a clustered index).
It can defrag your non-clustered indexes, though. See
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx for elaboration.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Derek" <gepetto_2000@.yahoo.com> wrote in message
news:1178199551.534111.4840@.h2g2000hsg.googlegroups.com...
> I have tables in my database with no clustered indexes, just some
> nonclustered ones. Is there any benefit to running DBCC DBREINDEX on
> these tables?
>
|||Derek wrote:
> I have tables in my database with no clustered indexes, just some
> nonclustered ones. Is there any benefit to running DBCC DBREINDEX on
> these tables?
If any of your nonclustered indexes covers 100 disk pages or more, and
it is highly fragmented, and you have some SELECTs that require a full
or partial index scan, then yes, it could be beneficial. Otherwise you
probably won't see any difference.
Please note that a nonclustered index usually occupies only a few
percent when compared to the table size. So you would have to have a
really big table, or a very wide nonclustered index to notice any
effect.
HTH,
Gert-Jan
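To apply Gert-Jan's rough 100-page threshold, something along these lines works on SQL 2000; the table and index names are hypothetical, and dpages in sysindexes is only an approximate page count:
-- approximate size in pages of each nonclustered index on a heap
SELECT name, indid, dpages, used
FROM sysindexes
WHERE id = OBJECT_ID('dbo.MyHeapTable')
  AND indid BETWEEN 2 AND 250
-- for anything large enough to matter, look at logical scan fragmentation
DBCC SHOWCONTIG ('dbo.MyHeapTable', 'IX_MyHeapTable_Status')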

DBCC DBREINDEX and Statistics

In SQL Server 2000 when you create an Index, statistics are created with
fullscan. My question is when you issue the DBCC DBREINDEX command, are
statistics updated with a fullscan or the default sample size? If the
answer is fullscan, then I am curious as to why the DBCC DBREINDEX runs
faster than an UPDATE STATISTICS WITH FULLSCAN?
Thanks
Donnie
Can you provide some timing details along with index size etc?
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Donnie" <anonymous@.discussions.microsoft.com> wrote in message
news:94BBF0E3-894C-4702-BC24-F7760A0CF40C@.microsoft.com...
> In SQL Server 2000 when you create an Index, statistics are created with
> fullscan. My question is when you issue the DBCC DBREINDEX command, are
> statistics updated with a fullscan or the default sample size? If the
> answer is fullscan, then I am curious as to why the DBCC DBREINDEX runs
> faster than an UPDATE STATISTICS WITH FULLSCAN?
> Thanks
> Donnie
|||Yes, the reindex takes about 4 hours on the table; the update statistics with a fullscan was killed after 11 hours. The size of the table is about 200 million rows.
-- Paul S Randal [MS] wrote: --
Can you provide some timing details along with index size etc?
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Donnie" <anonymous@.discussions.microsoft.com> wrote in message
news:94BBF0E3-894C-4702-BC24-F7760A0CF40C@.microsoft.com...
> In SQL Server 2000 when you create an Index, statistics are created with
> fullscan. My question is when you issue the DBCC DBREINDEX command, are
> statistics updated with a fullscan or the default sample size? If the
> answer is fullscan, then I am curious as to why the DBCC DBREINDEX runs
> faster than an UPDATE STATISTICS WITH FULLSCAN?
|||Hi Donnie,
An index rebuild always has a stats recreate piggy-backed on top of it.
I can only guess that you have a concurrent workload while you're doing the
update statistics (which is an online operation), whereas the index rebuild
is effectively offline (as it takes a table S or X lock).
Also, an index rebuild will do an allocation-order scan of the data whereas
an update statistics is probably doing a logical-order scan. Does the index
have high logical scan fragmentation?
According to a colleague in the query optimizer team, a full scan update of
such a large index is a highly unusual operation. Are you certain that a
lower sampling rate is producing bad statistics?
Regards.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Donnie" <anonymous@.discussions.microsoft.com> wrote in message
news:379598EC-0F22-4BC0-A070-B30B860090A3@.microsoft.com...
> Yes, the reindex takes about 4 hours on the table; the update statistics
> with a fullscan was killed after 11 hours. The size of the table is about
> 200 million rows.
> -- Paul S Randal [MS] wrote: --
> Can you provide some timing details along with index size etc?
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no rights.
> "Donnie" <anonymous@.discussions.microsoft.com> wrote in message
> news:94BBF0E3-894C-4702-BC24-F7760A0CF40C@.microsoft.com...
> In SQL Server 2000 when you create an Index, statistics are created with
> fullscan. My question is when you issue the DBCC DBREINDEX command, are
> statistics updated with a fullscan or the default sample size? If the
> answer is fullscan, then I am curious as to why the DBCC DBREINDEX runs
> faster than an UPDATE STATISTICS WITH FULLSCAN?
>
>
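If you want to confirm what the rebuild did to the statistics, a small sketch like this (the table and index names are invented) shows the last update time and, in the DBCC SHOW_STATISTICS header, the Rows vs. Rows Sampled figures:
-- when were the statistics on each index last updated?
SELECT name, STATS_DATE(id, indid) AS stats_updated
FROM sysindexes
WHERE id = OBJECT_ID('dbo.BigTable')
  AND indid BETWEEN 1 AND 250
-- detail for one index: if Rows Sampled equals Rows, the last update was a full scan
DBCC SHOW_STATISTICS ('dbo.BigTable', 'IX_BigTable_CustomerId')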
|||OK, that makes sense as to why the UPDATE STATISTICS takes longer.
Just so I understand: when a reindex is run, the statistics are re-created for the table that is being reindexed? Is that correct?
Thanks for your help. It is much appreciated.
Thanks
Donnie
|||Yes, correct. And you're most welcome.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Donnie" <anonymous@.discussions.microsoft.com> wrote in message
news:EB8918BF-B938-47A8-8D88-7E308D748340@.microsoft.com...
> OK, that makes sense as to why the UPDATE STATISTICS takes longer.
> Just so I understand: when a reindex is run, the statistics are re-created
> for the table that is being reindexed? Is that correct?
>
> Thanks for your help. It is much appreciated.
> Thanks
> Donnie

DBCC DBREINDEX and FillFactor

My database has one data file only, and the data file was 900M of data after
I ran DBCC SHRINKDATABASE(DBName). But I wanted to give it 10% free space so
it does not have to expand in a while, so I ran DBCC
DBREINDEX(TableName,N'',90) on all of my user tables (all tables have a
clustered index). I was expecting the database to expand by about 900M x 10% = 90M.
However, when it completed, the data file became 1.9G in total. When I view
the database from EM (taskpad view), it looks like 50% free space. Why? Thanks.
"John" <john@.aicinc.com> wrote in message
news:%23ss5hYFbEHA.1248@.TK2MSFTNGP11.phx.gbl...
> My database has one data file only, and the data file was 900M of data after
> I ran DBCC SHRINKDATABASE(DBName). But I wanted to give it 10% free space so
> it does not have to expand in a while; I ran DBCC
> DBREINDEX(TableName,N'',90) on all of my user tables (all tables have
> clustered index). I was expecting the database to expand 900M x 10% = 90M.
> However, when it completed, the data file became 1.9G in total. When I view
> the database from EM (taskpad view), looks like 50% free space. Why? Thanks.
>
The SHRINKDATABASE by default will leave 10% of your database as free space.
It also does a few other things similar to a defrag on a hard disk, but for
your database objects.
The DBREINDEX command that you specified tells the indexes to be rebuilt
with a fillfactor of 90%. This means that each index will use 90% of a
data page and leave 10% as free space for future growth. For non-clustered
indexes, this doesn't do a whole lot to your overall database growth. For
clustered indexes however, this forces 10% free space at the leaf level of
the index (which is the actual table itself).
Every table that you created a primary key on using default settings will
create a clustered index on that primary key column. So in essence,
setting the fill-factor of those indexes to 90% forces up to a 10% growth in
the size of each table.
I hope this explanation helps.
Rick Sawtell
MCT, MCSD, MCDBA
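As a concrete illustration of Rick's point (the table name is made up): rebuilding with a fill factor of 90 leaves about 10% of every leaf page empty, and for a clustered index the leaf level is the data itself, so the table can grow by roughly that much. The effect shows up as page density:
-- rebuild all indexes on the table, leaving ~10% free space per page
DBCC DBREINDEX ('dbo.Orders', '', 90)
-- "Avg. Page Density (full)" in the output should now be close to 90%
DBCC SHOWCONTIG ('dbo.Orders')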
|||Rick,
Sorry but that's a little misleading.

> The SHRINKDATABASE by default will leave 10% of your database as free space.
> It also does a few other things similar to a defrag on a hard disk, but for
> your database objects.
Actually a shrink operation will fragment the objects, not defrag them. Shrinking
a database is the worst thing to do if you're concerned with fragmentation and
the objects being contiguous on disk. Another thing: SHRINKDATABASE will
shrink both the data files and the log file to get the target amount and
does not discriminate between how much of each it will shrink. You should
consider using SHRINKFILE if shrinking is absolutely necessary, so that you get
the control you want.

> The DBREINDEX command that you specified tells the indexes to be rebuilt
> with a fillfactor of 90%. This means that each index will use 90% of a
> data page and leave 10% as free space for future growth. For non-clustered
> indexes, this doesn't do a whole lot to your overall database growth.
A non clustered index is simply implemented behind the scenes as a table
with a clustered index using the data from the column(s) in the index
expression with the addition of the Clustered Index key value appended. It
will adhere to the same fill factor and actually requires less space and
trouble to rebuild than a clustered index.

> For
> clustered indexes however, this forces 10% free space at the leaf level of
> the index (which is the actual table itself).
> Every table that you created a primary key on using default settings will
> create a clustered index on that primary key column. So in essence,
> setting the fill-factor of those indexes to 90% forces up to a 10% growth in
> the size of each table.
Any time you rebuild a clustered index you need at least 1.2 times the size
of the table in free space in the database or it will grow. And each
nonclustered index requires similar free space equal to the size of the
index. This is why it is so important to have lots of free space in the
database at all times, and especially before reindexing. Reindexing all
the tables in a database with little free space will force multiple
autogrows, and you end up with numbers like John reported.
Andrew J. Kelly SQL MVP
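A quick way to sanity-check Andrew's 1.2x guideline before kicking off a rebuild is sp_spaceused; the table name here is just a placeholder:
-- how big is the table you are about to rebuild?
EXEC sp_spaceused 'dbo.BigTable'
-- database totals: the unallocated space figure is what the rebuild can use,
-- and it should comfortably exceed ~1.2 times the table's reserved size
EXEC sp_spaceused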
|||Thanks Andrew. I am still confused why it is doubling the space in my data
file (log file) instead of somewhere around 10% increase. We do have a lot
of indexes on each table.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23r2zZAHbEHA.3476@.tk2msftngp13.phx.gbl...
> Any time you rebuild a clustered index you need at least 1.2 times the size
> of the table in free space in the database or it will grow. And each
> nonclustered index requires similar free space equal to the size of the
> index. This is why it is so important to have lots of free space in the
> database at all times, and especially before reindexing. Reindexing all
> the tables in a database with little free space will force multiple
> autogrows, and you end up with numbers like John reported.
>
> --
> Andrew J. Kelly SQL MVP
|||Thanks Andrew. I read your reply again and I think I got it.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23r2zZAHbEHA.3476@.tk2msftngp13.phx.gbl...
|||John,

> Thanks Andrew. I am still confused why it is doubling the space in my data
> file (log file) instead of somewhere around 10% increase. We do have a lot
> of indexes on each table.
OK, do you mean the data file or the log file? You say both. Data files hold the
actual data and have extensions such as .mdf and .ndf. Log files hold
transaction information and have a .ldf extension. And when you talk about
the database size, are you referring to just the data file size or both the
data and log files? You really need to treat them as two completely
separate things. It is likely your data file autogrew many times during the
reindexing operation and not just once. Each time it autogrows it gets
larger, and then it will grow more the next time. 10% free space is usually
nowhere near the amount of free space required to do a reindex properly.
Another thing to keep in mind is that the total free space in the database
does not really mean you can use it. For instance, if your tables have a lot
of fragmentation or a very low fill factor, that free space on those pages is
not usable for anything other than inserts to that particular table. A
reindex on table A cannot use the free space from pages on table B, and so
on.
Andrew J. Kelly SQL MVP
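To look at the data files and the log file separately, as Andrew suggests, a sketch like this helps on SQL 2000 (run it in the database in question):
-- per-file sizes for the current database: .mdf/.ndf hold data, .ldf is the log
SELECT name, filename, size / 128 AS size_mb,
       CASE WHEN growth = 0 THEN 'fixed size' ELSE 'autogrow' END AS growth_setting
FROM sysfiles
-- log size and percentage used for every database on the server
DBCC SQLPERF(LOGSPACE)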
|||If not, please let us know what we can help with.
Andrew J. Kelly SQL MVP
"John" <john@.aicinc.com> wrote in message
news:eny1c3NbEHA.3792@.TK2MSFTNGP09.phx.gbl...
> Thanks Andrew. I read your reply again and I think I got it.
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23r2zZAHbEHA.3476@.tk2msftngp13.phx.gbl...

DBCC DBREINDEX and diskspace

I have a large table that is fragmented and causing poor query
performance. I want to use DBCC DBREINDEX to defragment its indexes. I used
it previously and noticed that I almost ran out of drive space. Then
the files compacted down to a manageable size. The table has grown
again and I fear that it might run out of drive space.
What happens if DBCC DBREINDEX maxes out the drive space? Does it
partially create the index? Does it just not run because it knows it
doesn't have enough space to complete? Or does it just use as much as
it can without running out of space?
It will fail and the index rebuild will be rolled back. You need 1.2X the
size of the data in your table in free space in order to use DBREINDEX. If
you cannot get more space, consider using DBCC INDEXDEFRAG.
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
"dramos" <dan.ramos@.gartner.com> wrote in message
news:641f8b7.0404190802.2e6ca0e1@.posting.google.com...
I have a large table that is fragmented and causing poor query
performance. I want to use DBCC DBREINDEX to defragment its indexes. I used
it previously and noticed that I almost ran out of drive space. Then
the files compacted down to a manageable size. The table has grown
again and I fear that it might run out of drive space.
What happens if DBCC DBREINDEX maxes out the drive space? Does it
partially create the index? Does it just not run because it knows it
doesn't have enough space to complete? Or does it just use as much as
it can without running out of space?
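If the extra space really cannot be found, a sketch of the DBCC INDEXDEFRAG alternative Tom mentions (the database, table and index names are made up). Unlike DBREINDEX it works in place in many small transactions, so running out of space mid-way does not throw away the work already done:
-- defragment a single index in place; the first argument 0 means the current database
DBCC INDEXDEFRAG (0, 'dbo.BigTable', 'IX_BigTable_OrderDate')
-- progress is reported as it runs, and if it is stopped the pages already
-- moved stay defragmented, so it can simply be rerun later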
|||This helps me out a lot:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
-- Rebuilds all indexes on every user table in the current database, one table
-- at a time, backing up the log between tables to keep it from filling up.
ALTER procedure admin_DBCCDBReindex
as
declare @Tablename varchar(128), @db varchar(128), @exec varchar(500)  -- 500 so long names are not truncated
set nocount on
set @db = db_name()
-- collect the user tables, skipping system- and temp-style names
declare @Table table (TableName varchar(128))
insert into @Table
select so.name
from sysobjects so
where so.name not like 'sys%' and
      so.name not like 'dt%' and
      so.name not like 'tmp%' and
      so.xtype = 'u'
order by so.name
declare mycur cursor for select TableName from @Table
open mycur
fetch next from mycur into @Tablename
while @@fetch_status = 0
begin
    -- rebuild every index on this table (default fill factor)
    set @exec = 'dbcc dbreindex (''' + @db + '.dbo.' + @Tablename + ''')'
    exec (@exec)
    -- environment-specific: back up the log of the tmpReindex database to the
    -- backup device named tmp so the log does not fill during the run
    BACKUP LOG [tmpReindex] TO [tmp]
    WITH INIT, NOUNLOAD, NAME = N'tmp log backup', NOSKIP, STATS = 10, NOFORMAT
    fetch next from mycur into @Tablename
end
close mycur
deallocate mycur
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
"dramos" <dan.ramos@.gartner.com> wrote in message
news:641f8b7.0404190802.2e6ca0e1@.posting.google.com...
> I have a large table that is fragmented and causing poor query
> performance. I want to use DBCC DBREINDEX to defragment its indexes. I used
> it previously and noticed that I almost ran out of drive space. Then
> the files compacted down to a manageable size. The table has grown
> again and I fear that it might run out of drive space.
> What happens if DBCC DBREINDEX maxes out the drive space? Does it
> partially create the index? Does it just not run because it knows it
> doesn't have enough space to complete? Or does it just use as much as
> it can without running out of space?
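Assuming the procedure above compiles in your database (and that the hard-coded tmpReindex log backup is adapted to your own environment), running it is just:
EXEC admin_DBCCDBReindex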
|||As Tom suggests, consider DBCC INDEXDEFRAG. There's a whitepaper at
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
that explains the difference between the two, how to choose between them,
performance comparison etc.
Let us know if you have any more questions after reading the whitepaper.
Regards.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"dramos" <dan.ramos@.gartner.com> wrote in message
news:641f8b7.0404190802.2e6ca0e1@.posting.google.com...
> I have a large table that is fragmented and causing poor query
> performance. I want to use DBCC DBREINDEX to defragment its indexes. I used
> it previously and noticed that I almost ran out of drive space. Then
> the files compacted down to a manageable size. The table has grown
> again and I fear that it might run out of drive space.
> What happens if DBCC DBREINDEX maxes out the drive space? Does it
> partially create the index? Does it just not run because it knows it
> doesn't have enough space to complete? Or does it just use as much as
> it can without running out of space?


DBCC DBREINDEX and DBCC INDEXDEFRAG not defragging indexes

I am running DBCC DBREINDEX and DBCC INDEXDEFRAG against
my database tables and they are not defragging any of my
indexes. Why not?
Why are you running both? Also, what command exactly are you running? And
do the tables have clustered indexes? And how do you determine that the
fragmentation isn't reduced? And remember that there is no point wasting
time on fragmentation unless an index uses at least somewhere between 500
and 1000 pages.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
"Colby Jones" <anonymous@.discussions.microsoft.com> wrote in message
news:b8bc01c407b2$c7b3d0c0$a101280a@.phx.gbl...
> I am running DBCC DBREINDEX and DBCC INDEXDEFRAG against
> my database tables and they are not defragging any of my
> indexes. Why not?|||Please post a before and after listing of DBCC SHOWCONTIG for one of the
indexes you are concerned about.
Andrew J. Kelly SQL MVP
"Colby Jones" <anonymous@.discussions.microsoft.com> wrote in message
news:b8bc01c407b2$c7b3d0c0$a101280a@.phx.gbl...
> I am running DBCC DBREINDEX and DBCC INDEXDEFRAG against
> my database tables and they are not defragging any of my
> indexes. Why not?|||Most likely because the tables in question don't have clustered indexes.
Heaps cannot be defragged.
Greg Jackson
PDX, Oregon|||Looks like the number of pages being used is too low for
the reindexing to make any difference. Thanks for the info...
>--Original Message--
>Why are you running both? Also, what command exactly are you running? And
>do the tables have clustered indexes? And how do you determine that the
>fragmentation isn't reduced? And remember that there is no point wasting
>time on fragmentation unless an index uses at least somewhere between 500
>and 1000 pages.
>--
>Tibor Karaszi, SQL Server MVP
>http://www.karaszi.com/sqlserver/default.asp
>
>"Colby Jones" <anonymous@.discussions.microsoft.com> wrote in message
>news:b8bc01c407b2$c7b3d0c0$a101280a@.phx.gbl...
|||The following is from after running DBREINDEX.
DBCC SHOWCONTIG scanning 'PA33203' table...
Table: 'PA33203' (186964984); index ID: 1, database ID: 10
TABLE level scan performed.
- Pages Scanned........................: 361
- Extent Switches.......................: 45
- Scan Density [Best Count:Actual Count]......: 100.00% [46:46]
- Logical Scan Fragmentation ..............: 0.00%

>--Original Message--
>Please post a before and after listing of DBCC SHOWCONTIG for one of the
>indexes you are concerned about.
>--
>Andrew J. Kelly SQL MVP
>
>"Colby Jones" <anonymous@.discussions.microsoft.com> wrote in message
>news:b8bc01c407b2$c7b3d0c0$a101280a@.phx.gbl...
|||That looks perfect! No fragmentation at all!
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
<anonymous@.discussions.microsoft.com> wrote in message
news:bdc301c40847$ec11db70$a401280a@.phx.gbl...
> The following is from after running DBREINDEX.
> DBCC SHOWCONTIG scanning 'PA33203' table...
> Table: 'PA33203' (186964984); index ID: 1, database ID: 10
> TABLE level scan performed.
> - Pages Scanned........................: 361
> - Extent Switches.......................: 45
> - Scan Density [Best Count:Actual Count]......: 100.00% [46:46]
> - Logical Scan Fragmentation ..............: 0.00%
|||As Tibor states the fragmentation is 0 so how much better do you want<g>?
Besides a table this small is hardly worth spending a lot of effort on.
Andrew J. Kelly SQL MVP
<anonymous@.discussions.microsoft.com> wrote in message
news:bdc301c40847$ec11db70$a401280a@.phx.gbl...
> The following is from after running DBREINDEX.
> DBCC SHOWCONTIG scanning 'PA33203' table...
> Table: 'PA33203' (186964984); index ID: 1, database ID: 10
> TABLE level scan performed.
> - Pages Scanned........................: 361
> - Extent Switches.......................: 45
> - Scan Density [Best Count:Actual Count]......: 100.00% [46:46]
> - Logical Scan Fragmentation ..............: 0.00%
|||OK, looks like I picked a bad example. How about this one?
DBCC SHOWCONTIG scanning 'POP30000' table...
Table: 'POP30000' (4195065); index ID: 0, database ID: 10
TABLE level scan performed.
- Pages Scanned........................: 25
- Extents Scanned.......................: 21
- Extent Switches.......................: 20
- Avg. Pages per Extent..................: 1.2
- Scan Density [Best Count:Actual Count]......: 19.05% [4:21]
- Extent Scan Fragmentation ...............: 95.24%
- Avg. Bytes Free per Page................: 2410.2
- Avg. Page Density (full)................: 70.22%

>--Original Message--
>As Tibor states the fragmentation is 0 so how much better do you want<g>?
>Besides a table this small is hardly worth spending a lot of effort on.
>--
>Andrew J. Kelly SQL MVP
>
><anonymous@.discussions.microsoft.com> wrote in message
>news:bdc301c40847$ec11db70$a401280a@.phx.gbl...
|||And another:
DBCC SHOWCONTIG scanning 'SOP40500' table...
Table: 'SOP40500' (16719112); index ID: 0, database ID: 10
TABLE level scan performed.
- Pages Scanned........................: 161
- Extents Scanned.......................: 86
- Extent Switches.......................: 85
- Avg. Pages per Extent..................: 1.9
- Scan Density [Best Count:Actual Count]......: 24.42% [21:86]
- Extent Scan Fragmentation ...............: 96.51%
- Avg. Bytes Free per Page................: 1650.9
- Avg. Page Density (full)................: 79.60%

>--Original Message--
>As Tibor states the fragmentation is 0 so how much better do you want<g>?
>Besides a table this small is hardly worth spending a lot of effort on.
>--
>Andrew J. Kelly SQL MVP
>
><anonymous@.discussions.microsoft.com> wrote in message
>news:bdc301c40847$ec11db70$a401280a@.phx.gbl...
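Tying the replies together: both POP30000 and SOP40500 report index ID: 0, so they are
heaps with no clustered index, which is likely why neither command improves their
numbers, and at 25 and 161 pages they are also well below the 500-1000 page threshold
mentioned above. A hedged sketch, using the SQL 2000 system tables, for spotting such
tables before spending time on them:

-- tables stored as heaps (no clustered index); DBCC INDEXDEFRAG cannot touch these,
-- and DBCC DBREINDEX only rebuilds their nonclustered indexes
select object_name(id) as TableName
from sysindexes
where indid = 0
  and objectproperty(id, 'IsUserTable') = 1

-- indexes probably too small for defragmentation to be worth the effort
select object_name(id) as TableName, name as IndexName, dpages
from sysindexes
where indid between 1 and 250
  and dpages < 1000
  and objectproperty(id, 'IsUserTable') = 1
order by dpages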

DBCC dbreindex and Backup Size

We have two one-time scheduled jobs running the DBCC below; this happened to
be Mon and Tuesday after the full backup on Sunday:
DBCC dbreindex (My Table,'',90)
The tables being DBCC'ed are rather big, can be 30GB in total.
Our backup size increases on those two days. Our differential backups on Mon
and Tuesdays are usually significantly lower. I wonder if someone can relate
this to DBCC DBReindex or any other possibilities.
Thanks
TO
> Our backup size increases in those two days.
Can you elaborate further? What type of backup? Database, diff or log? Is the backup large before or
after the operation?
If a diff backup is larger than usual it is expected. You have shuffled (= modified) a lot of data
with the reindex.
If it is a log backup, then the same goes, you have modified a lot of data. These modifications have
been logged.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"C TO" <CTO@.discussions.microsoft.com> wrote in message
news:86C5C2CC-F1E8-4148-88D4-EADFC2599D05@.microsoft.com...
> We have two one-time scheduled jobs running the DBCC below, this happened to
> be Mon and Tuesday after the full backup on Sunday:
> DBCC dbreindex (My Table,'',90)
> The table being DBCC'ed are rather big, can be 30GB in total.
> Our backup size increases in those two days. Our differential backups on Mon
> and Tuesdays are usually significantly lower. I wonder someone can relate
> this to DBCC DBReindex or any other possibilites.
> Thanks
> TO
>
|||There are regular large amounts of inserts (because of the ETL process)
***every day***; no noticeable difference on those days though.
We have a full db backup on Sunday and a diff db backup daily.
Should I not run the DBCC DBREINDEX with a fill factor of 90 the day after the full
backup?

>You have shuffled (= modified) a lot of data with the reindex).
Can you further explain? It sounds like this may be the answer.
The database is 58 GB and the diff for the day is 55 GB. The same day last week
was 26.961 GB.
BTW, a 27 GB diff is way too big to me. What strategy can I use to
reduce our backup size without jeopardizing or compromising our recoverability
(longer restores are acceptable)?
Thanks.
"Tibor Karaszi" wrote:

> Can you elaborate further? What type of backup? Database, diff or log? Is the backup large before of
> after the operation?
> If a diff backup is larger than usual it is expected. You have shuffled (= modified) a lot of data
> with the reindex).
> If it is a log backup, then the same goes, you have modified a lot of data. These modifications have
> been logged.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "C TO" <CTO@.discussions.microsoft.com> wrote in message
> news:86C5C2CC-F1E8-4148-88D4-EADFC2599D05@.microsoft.com...
>
|||That is a logged operation, so it will expand your transaction log. Take a
peek at the BULK_LOGGED recovery model to see if perhaps it makes sense for
your scenario.
"C TO" wrote:

> We have two one-time scheduled jobs running the DBCC below, this happened to
> be Mon and Tuesday after the full backup on Sunday:
> DBCC dbreindex (My Table,'',90)
> The table being DBCC'ed are rather big, can be 30GB in total.
> Our backup size increases in those two days. Our differential backups on Mon
> and Tuesdays are usually significantly lower. I wonder someone can relate
> this to DBCC DBReindex or any other possibilites.
> Thanks
> TO
>
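A minimal sketch of what Dennis suggests, assuming the database is in FULL recovery
(database, table and path names are placeholders, not from the posts):

-- switch to BULK_LOGGED for the maintenance window so the rebuild is minimally logged
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
DBCC DBREINDEX ('MyTable', '', 90)
-- switch back and take a log backup; note that a log backup containing bulk
-- operations cannot be restored to a point in time within that backup
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'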
|||Forgot to mention that the recovery model is "Simple", my bad. We have no
logged operations.
Is there any DBCC + backup strategy that can be implemented?
Thanks.
"Dennis Forbes" wrote:
> That is a logged operation, so it will expand your transaction log. Take a
> peek at the BULK_LOGGED recovery model to see if perhaps it makes sense for
> your scenario.
> "C TO" wrote:
|||Ahh, okay.
The DBCC DBREINDEX is marking the pages as changed. Perhaps you should run
the reindex routines before your full backup.
"C TO" wrote:
> Forgot to mention that the recovery mode is "Simple", my bad. We have no
> logged operation.
> Is there any DBCC + Backup strategy can be implement?
> Thanks.
> "Dennis Forbes" wrote:
|||>>You have shuffled (= modified) a lot of data with the reindex).
> Can you further explain? It sounds like this may be the answer.
Yes, the index is rebuilt, hence all the data in the table has been modified (assuming there's a
clustered index on the table). All that data is included in the diff backup.
Can you be more selective with your rebuilds? Also, perhaps DBCC INDEXDEFRAG will suit you better?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"C TO" <CTO@.discussions.microsoft.com> wrote in message
news:91C42295-D1E9-4545-990D-43AEDF876184@.microsoft.com...
> There are regular large amount of inserts (because of ETL process)
> ***everyday***, no noticeable different for those days though.
> We have full db backup on Sunday and Diff db backup daily.
> Should I not run the DBCC REINDEX with fill factor of 90 a day after full
> backup?
> Can you further explain? It sounds like this may be the answer.
>
> The database is 58 GB and the diff for the day is 55 GB. Same day last week
> was 26.961 GB.
> BTW, 27 GB diff is way too big of size to me. What strategy I can use to
> save our backup size, without jeopardizing or compromising our recoverability
> (longer restoring is acceptable)?
> Thanks.
>
>
> "Tibor Karaszi" wrote:
|||Does that explain the 28 GB increase in backup size? The DBCC DBREINDEX job
was reindexing tables amounting to just about one third of the 59 GB database
size. Assuming this is true, the marked pages are 28 GB. Are these all index
pages?
"Dennis Forbes" wrote:
> Ahh, okay.
> The DBCC DBREINDEX is marking the pages as changed. Perhaps you should run
> the reindex routines before your full backup.
> "C TO" wrote:
|||You probably have a clustered index on the table, and a clustered index
contains the entirety of the data (a clustered index is the data), so in that
case the answer would be yes.
"C TO" wrote:
> Does that explain the 28 GB increases in backup size? the DBCC DBREINDEX job
> was reindexing the tables with just about one third of the 59 GB database
> size. Assuming this is true, the marked pages are 28GB. Are these all index
> pages?
>
> "Dennis Forbes" wrote:
