Thursday, March 29, 2012
DBCC SHRINKFILE
I have recently changed the indexes in a large table in our database. I have
a lot of excess unused space, approx 64GB, in the .MDF and want to give it
back to Windows 2000 Server.
Our .MDF is not in the SQL DATA directory; it is on another volume.
Is there a way to specify a path to another volume when using DBCC SHRINKFILE?
Right now I am forced to use DBCC SHRINKDATABASE and it is taking a long time.
TIA,
-Ben
DBCC SHRINKFILE doesn't care where the file is, as it uses the logical
filename. There is no reason you need to use SHRINKDATABASE.
--
Andrew J. Kelly SQL MVP
"Ben" <Ben@.discussions.microsoft.com> wrote in message
news:33AB1ADE-1D27-45F0-A054-B6F4B5F88727@.microsoft.com...
>I have recently changed the indexes ina large table in our database. I
>have
> a lot of excess unused space ,approx 64GB, in the .MDF and want to give
> it
> back to Windows 200 Server.
> Our .MDF is not in the SQL DATA directory it is on another volume.
> Is there a way to specify a path to another volume when using DBCC
> SHRINKFILE?
> Right now I am forced to use DBCC SHRINKDATABASE and it is taking a long
> time.
> TIA,
> -Ben
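Since SHRINKFILE takes the logical file name, the physical path of the .MDF never enters into it. A minimal sketch of what Andrew describes, where the database name MyDb and the logical file name MyDb_Data are hypothetical:

```sql
-- Look up the logical names of the database's files (SQL Server 2000 era).
USE MyDb;
SELECT name, filename FROM sysfiles;

-- Shrink the data file by its logical name to a 10000 MB target,
-- regardless of which volume the .MDF actually lives on.
DBCC SHRINKFILE (MyDb_Data, 10000);
```

The target size is in megabytes; SHRINKFILE on just the oversized file is typically much faster than SHRINKDATABASE, which touches every file.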
Tuesday, March 27, 2012
dbcc shrinkdatabase vs dbcc shrinkfile
I just chopped off millions of stale records from a table, bringing the
row count from 85M to 5 and also freeing up around 30GB of data. What
is the safest way to shrink the database to reclaim this freed-up space?
Hello,
I recommend you use DBCC SHRINKFILE to shrink the MDF and LDF files
separately. Before executing the SHRINKFILE, use the command SP_SPACEUSED
to identify the unallocated space.
Thanks
Hari
"AF" <af.at.work@.gmail.com> wrote in message
news:1168388869.505893.31950@.i56g2000hsf.googlegroups.com...
> Hello all,
> I just chopped off millions of stale records from a table bringing the
> row count from 85M to 5 and also freeing up around 30GB of data. What
> is the safest way to shrink the database to reclaim this freed up space?
>
Hari Prasad wrote:
> Hello,
> I recommend you to use DBCC SHRINKFILE to shrink the MDF and LDF file
> seperately. Before executing the SHRINKFILE use the command SP_SPACEUSED
> to identify the unallocated space.
> Thanks
> Hari
>
> "AF" <af.at.work@.gmail.com> wrote in message
> news:1168388869.505893.31950@.i56g2000hsf.googlegroups.com...
> > Hello all,
> >
> > I just chopped off millions of stale records from a table bringing the
> > row count from 85M to 5 and also freeing up around 30GB of data. What
> > is the safest way to shrink the database to reclaim this freed up space?
> >
Thanks Hari, just curious -- why separately?
These are the results of sp_spaceused, but how do I interpret these to
use dbcc shrinkfile?
name      size         unallocated space
mydb      39536.81 MB  504.16 MB
reserved    data        index size  unused
8589984 KB  7921840 KB  647688 KB   20456 KB
This is the usage reported from SSEM:
mydb.mdf
7721MB USED
1172MB FREE
mydb.ldf
158MB USED
30486MB FREE
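Reading Hari's advice against AF's numbers: the data file holds ~39,537 MB with only ~504 MB unallocated, while the log file is 158 MB used out of ~30,644 MB, so the log is where nearly all of the 30 GB comes back. A sketch of the two separate shrinks, where the logical file names mydb_Data and mydb_Log are hypothetical:

```sql
-- Check unallocated space first, as Hari suggests.
EXEC sp_spaceused;

-- Data file: ~39537 MB total, only ~504 MB unallocated -> little to reclaim.
DBCC SHRINKFILE (mydb_Data, 39040);

-- Log file: 158 MB used, ~30486 MB free -> shrink to a modest size, e.g. 1024 MB.
DBCC SHRINKFILE (mydb_Log, 1024);
```

Shrinking them separately lets each file get its own sensible target instead of the single percentage SHRINKDATABASE applies to everything.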
DBCC SHRINKDATABASE QUESTION
DBCC SHRINKDATABASE
( database_name [ , target_percent ] )
--I set the target percent low(about 5%), current db size after archive is 75 gigs so it should shrink to about 79 gigs...
and instead of releasing the free space to the operating system, the routine actually increases the used space of the db to fill almost all the newly freed space. Basically I remove 25 gigs of data, run the routine, and now magically the database grows 25 gigs instead of freeing up 25 gigs (now the db is back to the original size before the archive process!!). I've only experienced this when dealing with image datatypes. I am wondering if the way images are stored has something to do with this unexpected behavior. If anybody has any insight please let me know. Thanks
Yes, this is a known issue that can happen sometimes, but I don't know if
there is a KB regarding this or not. Try using DBCC CLEANTABLE and see if
that helps. If not then you might have to BCP out all the data from that
table, truncate it and bcp it back in.
--
Andrew J. Kelly
SQL Server MVP
"mike petanovitch" <mpetanovitch@.hotmail.com> wrote in message
news:5DB04B5D-64C6-4422-AEB4-A1E99BEE8574@.microsoft.com...
> I am archiving about 25 gigs of images from our OLTP database to an
archive database. After archiving these images(table which contains a
column datatype image), there is about 25 gigs of unallocated space reported
by EM. I run the normal Shrink database routine:
> DBCC SHRINKDATABASE
> ( database_name [ , target_percent ] )
> --I set the target percent low(about 5%), current db size after archive is
75 gigs so it should shrink to about 79 gigs...
> and instead of releasing the free space to the operating system the
routine actually increases the used space of the db to fill almost all the
newly freed space. Basically I remove 25 gigs of data, run the routine
and now magically the database grows 25 gigs instead of freeing up 25
gigs.(now the db is back to the orginal size before archive process!!)
I've only experienced this when dealing with image datatypes. I am
wondering if the way images are stored has something to do with this
unexpected behavior. If anybody has any insight please let me know. Thanks
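DBCC CLEANTABLE reclaims space left behind in a table's variable-length and text/image allocations; whether it helps after archiving image rows, as Andrew suggests trying, varies. A sketch, with hypothetical database and table names:

```sql
-- Try to reclaim unused space in the table's text/image allocations.
DBCC CLEANTABLE ('MyDb', 'dbo.ImageArchive');

-- Then re-run the shrink, asking SQL Server to leave ~5% free space.
DBCC SHRINKDATABASE (MyDb, 5);
```

If CLEANTABLE doesn't release the space, Andrew's fallback applies: BCP the data out, truncate the table, and BCP it back in.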
Sunday, March 25, 2012
dbcc showcontig with tableresults
"Number of data or index records at that level of the index. For a heap, this is the number of data records in the entire heap."
By the way, if I select from sysindexes for a one of the above indexes, there is a correct rowcount for the index.
Any ideas?
Clive
I found it. I was using the 'WITH FAST' parameter. Without FAST, it returns the row count.
Clive
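The behavior Clive hit: WITH FAST skips the pass that gathers row counts and record sizes. A sketch of both forms, with a hypothetical table name:

```sql
-- FAST is quicker but leaves Rows and record-size columns unpopulated:
DBCC SHOWCONTIG ('dbo.MyTable') WITH FAST, TABLERESULTS;

-- Without FAST, the Rows column comes back filled in:
DBCC SHOWCONTIG ('dbo.MyTable') WITH TABLERESULTS, ALL_INDEXES;
```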
DBCC SHOWCONTIG incorrect?
Hi. I have a table that has 58 million rows and is about 24 GB. I ran a DBCC
DBREINDEX which appeared to run fine - new fillfactor showing etc. However,
showcontig still shows the indexes as being badly fragmented. I've run an
update stats with fullscan and update usage which has corrected incorrect
space stats but that's all. I run this process every month and normally
everything works as expected. Any ideas why this time it didn't please?Perhaps DBCC UPDATEUSAGE will fix the incorrect stats.
Keith Kratochvil
"Shirley" <Shirley@.discussions.microsoft.com> wrote in message
news:B05EB837-51D3-40E1-B517-E758862A412A@.microsoft.com...
> Hi. I have a table that has 58 million rows and is about 24 GB. I ran a
> DBCC
> DBREINDEX which appeared to run fine - new fillfactor showing etc.
> However,
> showcontig still shows the indexes as being badly fragmented. I've run an
> update stats with fullscan and update usage which has corrected incorrect
> space stats but that's all. I run this process every month and normally
> everything works as expected. Any ideas why this time it didn't please?
Thanks. I've already done that and it didn't.
"Keith Kratochvil" wrote:
> Perhaps DBCC UPDATEUSAGE will fix the incorrect stats.
> --
> Keith Kratochvil
>
> "Shirley" <Shirley@.discussions.microsoft.com> wrote in message
> news:B05EB837-51D3-40E1-B517-E758862A412A@.microsoft.com...
>
>
Hi Shirley
Can you show us the details from DBCC SHOWCONTIG?
Also, how many indexes are there? Are they all fragmented, clustered and
nonclustered?
How much free space is in the database?
HTH
Kalen Delaney, SQL Server MVP
"Shirley" <Shirley@.discussions.microsoft.com> wrote in message
news:B05EB837-51D3-40E1-B517-E758862A412A@.microsoft.com...
> Hi. I have a table that has 58 million rows and is about 24 GB. I ran a
> DBCC
> DBREINDEX which appeared to run fine - new fillfactor showing etc.
> However,
> showcontig still shows the indexes as being badly fragmented. I've run an
> update stats with fullscan and update usage which has corrected incorrect
> space stats but that's all. I run this process every month and normally
> everything works as expected. Any ideas why this time it didn't please?
Hi Kalen, thanks for replying.
There is one clustered IX and 3 NC IX. These are the results
- Scan Density [Best Count:Actual Count]......: 98.18% [292775:298197] CL
- Scan Density [Best Count:Actual Count]......: 54.10% [22661:41886] NC
- Scan Density [Best Count:Actual Count]......: 52.88% [30754:58162] NC
- Scan Density [Best Count:Actual Count]......: 54.74% [27210:49709] NC
There is about 30 GB free space in the DB and it can autogrow.
"Kalen Delaney" wrote:
> Hi Shirley
> Can you show us the details from DBCC SHOWCONTIG?
> Also, how many indexes are there? Are they all fragmented, clustered and
> nonclustered?
> How much free space is in the database?
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Shirley" <Shirley@.discussions.microsoft.com> wrote in message
> news:B05EB837-51D3-40E1-B517-E758862A412A@.microsoft.com...
>
Problem fixed. I ran an UPDATE STATISTICS again but with the FULLSCAN option,
then re-ran the reorg and all looks fine now. Don't know why this just
started to happen but will do the updatestats before the reorg every time now.
"Shirley" wrote:
[vbcol=seagreen]
> Hi Kalen, thanks for replying.
> There is one clustered IX and 3 NC IX. These are the results
> - Scan Density [Best Count:Actual Count]......: 98.18% [292775:298197] CL
> - Scan Density [Best Count:Actual Count]......: 54.10% [22661:41886] NC
> - Scan Density [Best Count:Actual Count]......: 52.88% [30754:58162] NC
> - Scan Density [Best Count:Actual Count]......: 54.74% [27210:49709] NC
> There is about 30 GB free space in the DB and it can autogrow.
>
> "Kalen Delaney" wrote:
>
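The sequence Shirley ended up with - full-scan statistics first, then the rebuild - can be sketched as follows, with hypothetical table and fill-factor values:

```sql
-- Refresh statistics with a full scan before the rebuild.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- Rebuild all indexes on the table (empty index name = all), fill factor 90.
DBCC DBREINDEX ('dbo.BigTable', '', 90);

-- Verify that scan density has recovered.
DBCC SHOWCONTIG ('dbo.BigTable') WITH ALL_INDEXES;
```

Running UPDATE STATISTICS before the reorg, as she notes, is cheap insurance when SHOWCONTIG numbers look stale after a rebuild.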
DBCC SHOWCONTIG inconsistency??
I have a table where I store around 1 million rows for 7 days.
I run a DBCC SHOWCONTIG every week and I noticed that the number of
Rows and the Average Free Bytes are stable but the number of Extents
and the Average Record Size keeps increasing.
Any idea how to explain this incrase? and how to stop it? Will DBCC
DBREINDEX will help?
Here is my data:
Rows: 1166273 (same as last week)
Extents: 147099 (+10% since last week)
Max RecordSize: 7050
Average Record Size: 7688 (+10% since last week)
AverageFreeBytes: 372
Average Page Density: 95
Scan density: 30
Guillaume (loizeau@.hotmail.com) writes:
> I have a table where I store around 1 million rows for 7 days.
> I run a DBCC SHOWCONTIG every week and I noticed that the number of
> Rows and the Average Free Bytes are stable but the number of Extents
> and the Average Record Size keeps increasing.
> Any idea how to explain this incrase? and how to stop it? Will DBCC
> DBREINDEX will help?
> Here is my data:
> Rows: 1166273 (same as last week)
> Extents: 147099 (+10% since last week)
> Max RecordSize: 7050
> Average Record Size: 7688 (+10% since last week)
> AverageFreeBytes: 372
> Average Page Density: 95
> Scan density: 30
If the row size is growing, that indicates that the newer data
has a higher row size. But it cannot increase by 10% for long,
since 8060 bytes is the max row size. (Unless that row size includes
text/image data.)
> Around 150,000 new rows are inserted every day and 150,000 rows (older
> than 7 days) are deleted every day.
That can certainly do a fragmentation, and you should run DBCC DBREINDEX
at least once a week.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
Erland,
The Average Record Size has reached 7984 bytes and it's still
increasing.
The first time I had the problem my filegroup was full and I didn't have
time to investigate the issue so I just truncated the table. Now, I have
plenty of space in the filegroup and I'd like to understand the problem
before running any DBCC DBREINDEX.
Could the statistics for the table be wrong? or not updated?
*** Sent via Developersdex http://www.developersdex.com ***
Guillaume (anonymous@.devdex.com) writes:
> The Average Record Size has reached 7984 bytes and it's still
> increasing.
> The first time I had the problem my filegroup was full and I didn't have
> time to investigate the issue so I just truncated the table. Now, I have
> plenty of space in the filegroup and I'd like to understand the problem
> before running any DBCC DBREINDEX.
> Could the statistics for the table be wrong? or not updated?
If the average record size keeps increaseing, it's probably because of
your data. If that is abnormal, you will need to look at the data you
load. Or do you have any reason to assume that the number is incorrect?
What is your maximum record size?
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
Erland
The table contains only "events" monitoring my system and the data I
store in that table should not grow over time. I have several systems
running in parallel and this is the only DB where I have the problem.
Here is what I get from the SHOWCONTIG on the suspicious DB:
Minimum Record Size: 116
Maximum record Size: 7050
Average Record Size: 7984
Do you know why the Average Record Size is bigger then the Max Record
Size? it doesn't make sense.
Here is another SHOWCONTIG result from a different system.
Minimum Record Size: 111
Maximum record Size: 7231
Average Record Size: 420
*** Sent via Developersdex http://www.developersdex.com ***
Guillaume (anonymous@.devdex.com) writes:
> The table contains only "events" monitoring my system and the data I
> store in that table should not grow over time. I have several systems
> running in parallel and this is the only DB where I have the problem.
> Here is what I get from the SHOWCONTIG on the suspicious DB:
> Minimum Record Size: 116
> Maximum record Size: 7050
> Average Record Size: 7984
> Do you know why the Average Record Size is bigger then the Max Record
> Size? it doesn't make sense.
That's undoubtedly a little funny. Maybe Paul Randal knows about some
reason how this can arise. (And if Paul does not know, no one else is
likely to know.)
Then again, while the number is funny, it's not likely to be a major
problem in itself, or do you have other problems with the table? A DBCC
CHECKDB could reveal some corruption.
I would try a DBCC DBREINDEX and see what happens.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
> Then again, while the number is funny, it's not likely to be a major
> problem in itself, or do you have other problems with the table?
No I don't. The only problem I have is the size of this single table
growing every day.
>A DBCC CHECKDB could reveal some corruption.
Yes I will try to run it.
>I would try a DBCC DBREINDEX and see what happens.
I haven't done it yet because I was trying to figure out why the table
size was growing steadily. If there is nothing I can do right now, I'll
try the DBCC DBREINDEX
Thanks for your help
*** Sent via Developersdex http://www.developersdex.com ***
The only thing I can think of is a bug in showcontig where we're dividing
the cumulative row size count by the wrong number of pages, but I find it
hard to believe that such a bug has existed unnoticed for 6 years (since I
rewrote most of showcontig in Spring '99).
Can you post the full showcontig output for the two tables (one on the
suspicious system and the comparable table on another system)?
Thanks
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns96477EB94FE49Yazorman@.127.0.0.1...
> Guillaume (anonymous@.devdex.com) writes:
> > The table contains only "events" monitoring my system and the data I
> > store in that table should not grow over time. I have several systems
> > running in parallel and this is the only DB where I have the problem.
> > Here is what I get from the SHOWCONTIG on the suspicious DB:
> > Minimum Record Size: 116
> > Maximum record Size: 7050
> > Average Record Size: 7984
> > Do you know why the Average Record Size is bigger then the Max Record
> > Size? it doesn't make sense.
> That's undoubtedly a little funny. Maybe Paul Randal knows about some
> reason how this can arise. (And if Paul does not know, no one else is
> likely to know.)
> Then again, while the number is funny, it's not likely to be a major
> problem in itself, or do you have other problems with the table? A DBCC
> CHECKDB could reveal some corruption.
> I would try a DBCC DBREINDEX and see what happens.
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp
I looked through the code and found the problem.
In SQL Server 2000, showcontig uses the size information from all the rows, but only counts non-ghost rows in the rowcount. For an index with a bunch of ghost rows, the avg rowsize will be higher than the real value, as the total rowsize is being divided by the wrong (smaller) number of rows.
In SQL Server 2005 this problem has been fixed. I checked back through the internal bug database and this issue was found internally in 2002 and the decision was made not to fix it in SQL Server 2000. Shows how good my memory is.
If you heavily rely on this value I recommend rebuilding the index as we won't provide a fix for it at this stage of SQL Server 2000's lifecycle.
My apologies for the time you've spent trying to figure this out, but I'm glad we finally did. Let me know if you have any further questions.
Thanks and regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Guillaume" <anonymous@.devdex.com> wrote in message news:Nvzce.442$zX2.13250@.news.uswest.net...
Here is the SHOWCONTIG for the same table on 3 Databases. I have NOT
done a DBCC DBREINDEX on any of those tables.
Suspicious table 1.3 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 1372708
Rows 1323293
Min RecordSize 116
Max RecordSize 7050
Average RecordSize 7984.698
Extents 173093
ExtentSwitches 640050
AverageFreeBytes 370.9679871
Average Page Density 95.41675568
Scan Density 26.80864494
Best Count 171589
Actual Count 640051
Logical Fragmentation 12.87914085
ExtentFragmentation 6.509795189
Normal table 1.1 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 63520
Rows 1170832
Min RecordSize 111
Max RecordSize 7231
Average RecordSize 420.889
Extents 7981
ExtentSwitches 8036
AverageFreeBytes 301.098999
Average Page Density 96.27997589
Scan Density 98.793082
Best Count 7940
Actual Count 8037
Logical Fragmentation 0.127518892
ExtentFragmentation 33.07855988
Normal table 9 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 467440
Rows 9048200
Min RecordSize 130
Max RecordSize 7059
Average RecordSize 375.19
Extents 61,950
ExtentSwitches 115,936
AverageFreeBytes 794.7390137
Average Page Density 90.18113708
Scan Density 50.39806102
Best Count 58430
Actual Count 115937
Logical Fragmentation 11.53752327
ExtentFragmentation 12.42292213
*** Sent via Developersdex http://www.developersdex.com ***
BTW - forgot to say - the reason you're seeing the avg row size increase is that (ironically) you're deleting more records and so the number of ghost records is growing, further distorting the calculation.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
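Paul's explanation, restated: SQL Server 2000's SHOWCONTIG divides the cumulative row size over all rows (ghosts included) by the non-ghost row count only, so heavy deletes inflate the average. On SQL Server 2005 and later, where the calculation is fixed, ghost records can also be inspected directly; a sketch with hypothetical database and table names:

```sql
-- DETAILED mode reports ghost records alongside the corrected average record size.
SELECT index_id, alloc_unit_type_desc,
       ghost_record_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
    DB_ID('MyDb'), OBJECT_ID('dbo.Events'), NULL, NULL, 'DETAILED');
```

On 2000, per Paul's advice, rebuilding the index (DBCC DBREINDEX) flushes the ghost records and brings the reported average back in line.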
"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:4272e24a$1@.news.microsoft.com...
I looked through the code and found the problem.
In SQL Server 2000, showcontig uses the size information from all the rows, but only counts non-ghost rows in the rowcount. For an index with a bunch of ghost rows, the avg rowsize will be higher than the real value, as the total rowsize is being divided by the wrong (smaller) number of rows.
In SQL Server 2005 this problem has been fixed. I checked back through the internal bug database and this issue was found internally in 2002 and the decision was made not to fix it in SQL Server 2000. Shows how good my memory is.
If you heavily rely on this value I recommend rebuilding the index as we won't provide a fix for it at this stage of SQL Server 2000's lifecycle.
My apologies for the time you've spent trying to figure this out but I'm glad we finally did. Let me know if you have an further questions.
Thanks and regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
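The effect of the bug Paul describes is easy to sketch numerically. The figures below are hypothetical (not taken from the thread) and the function name is mine; the point is only that dividing the total row bytes by the non-ghost row count inflates the reported average:

```python
def avg_record_size(total_row_bytes, live_rows, ghost_rows, fixed=True):
    # SQL Server 2000's DBCC SHOWCONTIG summed the sizes of all rows,
    # ghosts included, but divided by the non-ghost row count only.
    # The SQL Server 2005 fix divides by the full row count.
    rows = live_rows + ghost_rows if fixed else live_rows
    return total_row_bytes / rows

# Hypothetical numbers: ~1.3M live rows, a large ghost-row backlog,
# every record 375 bytes on disk.
live, ghosts, size = 1_323_293, 25_000_000, 375
total = (live + ghosts) * size

true_avg = avg_record_size(total, live, ghosts, fixed=True)    # 375.0
buggy_avg = avg_record_size(total, live, ghosts, fixed=False)  # ~7460, wildly inflated
```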
"Guillaume" <anonymous@.devdex.com> wrote in message news:Nvzce.442$zX2.13250@.news.uswest.net...
Here is the SHOWCONTIG for the same table on 3 Databases. I have NOT
done a DBCC DBREINDEX on any of those tables.
Suspicious table 1.3 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 1372708
Rows 1323293
Min RecordSize 116
Max RecordSize 7050
Average RecordSize 7984.698
Extents 173093
ExtentSwitches 640050
AverageFreeBytes 370.9679871
Average Page Density 95.41675568
Scan Density 26.80864494
Best Count 171589
Actual Count 640051
Logical Fragmentation 12.87914085
ExtentFragmentation 6.509795189
Normal table 1.1 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 63520
Rows 1170832
Min RecordSize 111
Max RecordSize 7231
Average RecordSize 420.889
Extents 7981
ExtentSwitches 8036
AverageFreeBytes 301.098999
Average Page Density 96.27997589
Scan Density 98.793082
Best Count 7940
Actual Count 8037
Logical Fragmentation 0.127518892
ExtentFragmentation 33.07855988
Normal table 9 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 467440
Rows 9048200
Min RecordSize 130
Max RecordSize 7059
Average RecordSize 375.19
Extents 61,950
ExtentSwitches 115,936
AverageFreeBytes 794.7390137
Average Page Density 90.18113708
Scan Density 50.39806102
Best Count 58430
Actual Count 115937
Logical Fragmentation 11.53752327
ExtentFragmentation 12.42292213
*** Sent via Developersdex http://www.developersdex.com ***
Thanks a lot for your answer,
Based on your description, it turns out that my problem is related to
ghost rows.
Do you have any idea why I have such a large number of ghost rows on one
single table from only one database? I assume the ghost rows are
periodically deleted from the database. If I'm right then something
might be going wrong during the physical delete of the rows.
I ran a TRUNCATE TABLE once on my table and it fixed my issue for a
period of time, but now the problem is back. Do you think it might make
a difference if I deleted and then recreated the table, or should I just
run DBREINDEX from time to time?
Guillaume
*** Sent via Developersdex http://www.developersdex.com ***
Do you have any trace flags turned on? I'm wondering if someone's caused
ghost cleanup to be turned off.
Also, do you have any (very) long running transactions in that database?
That would also prevent ghost rows needed for potential rollback of those
transactions from being deleted.
Truncate will remove them as it deletes all rows from the table.
Yes, a reindex will have the same effect but shouldn't be needed because of
the automatic ghost cleanup process.
This seems to be the root of the problem - the number of ghost rows climbing
steadily.
Thanks
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Guillaume" <anonymous@.devdex.com> wrote in message
news:Zalde.50$wu5.2310@.news.uswest.net...
> Thanks a lot for your answer,
> Based on your description, it turns out that my problem is related to
> ghost rows.
> Do you have any idea why I have such a large number of ghost rows on one
> single table from only one database? I assume the ghost rows are
> periodically deleted from the database. If I'm right then something
> might be going wrong during the physical delete of the rows.
> I ran a TRUNCATE TABLE once on my table and it fixed my issue for a
> period of time but now the problem is back so do you think it might make
> a difference if I deleted and then recreated the table or should I just
> run DBREINDEX from time to time?
> Guillaume
> *** Sent via Developersdex http://www.developersdex.com ***
Paul,
I don't have any specific trace flags on, and I don't have any long
running transactions. Do you know if there is a way to check why the
ghost cleanup doesn't run on a single table on my machine?
I recently ran the following command:
DBCC TRACEON (2514)
GO
DBCC CHECKTABLE (my event table)
GO
The result was:
Ghost Record count = 52 million
How can I delete those ghost records from my table?
Can I force the ghost cleanup task to run?
Thanks a lot.
Guillaume
*** Sent via Developersdex http://www.developersdex.com ***
Guillaume,
Normally there is a background task that wakes up to clean some ghosted
records every 5 seconds or so. To force the ghost cleanup activities, you
can:
First, make sure traceflag 661 is turned off in the server by
dbcc traceoff(661, -1)
dbcc tracestatus(661, -1)
go
Then, issue a scan on the table/index that contains the 52 million
ghosted records, something like:
select * from [my event table] with (index=the index id of the index that
contains ghosted record)
go
Upon seeing a ghosted record, the scan will enqueue a request to clean it up
aggressively.
--
Gang He
Software Design Engineer
Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Guillaume" <anonymous@.devdex.com> wrote in message
news:Vzage.300$693.7473@.news.uswest.net...
> Paul,
> I don't have any specific Trace flags on. And I don't have any long
> running transaction. Do you know if there is a way to check why the
> ghost cleanup doesn't run on a single table on my machine?
> I recently ran the following command:
> DBCC TRACEON (2514)
> GO
> DBCC CHECKTABLE (my event table)
> GO
> The result was:
> Ghost Record count = 52 million
> How can I delete those ghost records from my table?
> Can I force the ghost cleanup service to run?
> Thanks a lot.
> Guillaume
> *** Sent via Developersdex http://www.developersdex.com ***
DBCC SHOWCONTIG and SQL2k Maintenance Plan Question
When performing the DBCC SHOWCONTIG will this perform table locks? If
I use the WITH FAST option will this allow data to be written to the file?
Also when using the maintenance plan wizard to perform maintenance, will
tables be locked when the indexes are rebuilt and data is reorganized?
Thanks,
-Ben
(I'm assuming you're using SQL Server 2000 as you mention the WITH FAST
option I added in that release)
DBCC SHOWCONTIG ... WITH FAST takes a table IS lock - thus allowing updates.
Without WITH FAST it takes a table S lock. Look in BOL for more details
here.
Rebuilding an index requires a table S lock if you're rebuilding a
non-clustered index, and a table X lock for rebuilding a clustered index.
Running DBCC INDEXDEFRAG only requires a table IX lock - so allows updates.
You should read the whitepaper below for more details:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp
Regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
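The lock behaviour Paul describes can be summarised as a small compatibility table. The sketch below (Python, illustrative only) encodes just the subset of SQL Server's lock compatibility matrix relevant here, not the full matrix:

```python
# Simplified subset of SQL Server's table-level lock compatibility matrix:
# enough to show why SHOWCONTIG ... WITH FAST (table IS lock) runs alongside
# updates (table IX lock) while the default SHOWCONTIG (table S lock) blocks them.
COMPATIBLE = {
    frozenset(["IS", "IX"]): True,  # WITH FAST scan + concurrent updates: OK
    frozenset(["S", "IX"]): False,  # default SHOWCONTIG + updates: blocked
    frozenset(["S"]): True,         # two shared scans coexist (S vs S)
    frozenset(["X", "IS"]): False,  # clustered index rebuild blocks even WITH FAST
}

def can_run_together(a, b):
    return COMPATIBLE.get(frozenset([a, b]), False)

print(can_run_together("IS", "IX"))  # True
print(can_run_together("S", "IX"))   # False
```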
"Ben" <ben@.brainspout.com> wrote in message
news:uoGzfuO5DHA.1368@.TK2MSFTNGP10.phx.gbl...
> When performing the DBCC SHOWCONTIG will this perform table locks? If
> I use the WITH FAST option will this allow data to be written to the file?
> Also when using the maintenance plan wizard to perform maintenance, will
> tables be locked when the indexes are rebuilt and data is reorganized?
> Thanks,
> -Ben
>
DBCC Showcontig and extent fragmentation
I have a large table that I cannot seem to bring the extent fragmentation
down to an acceptable level. First the logical scan density and logical
fragmentation were extremely high so I rebuilt the index. This helped both
the logical scan density and logical fragmentation but the extent
fragmentation stayed at 65%. I then did an index defrag and the extent
fragmentation stayed at 65%. I then reindexed again and the extent
fragmentation dropped to 33%. I defragged and reindexed once more and it
dropped to 21%. I could not get this number to drop any more from 21%, so I
dropped the index and recreated it - and it came back with an extent
fragmentation of 34%. Is there a way, short of moving to another file group,
to get my extent fragmentation down without jumping through all these hoops?
Hi Kirk
Extent scan fragmentation (ESF) measures how contiguous the extents in your
table or index are. To have no ESF, you would basically need to have free
space in your file equal to the total size of your structure, and it would
need to be allocated all at once, which SQL Server doesn't do. Even if you
have the contiguous space, you still might end up with ESF. If while you are
rebuilding your structures, other users are doing any kind of work that
requires allocation of extents, their allocations could come from an extent
right next to the one you just got, so your next allocation would be
separated from your previous one.
Why do you want to get it down lower? If you don't have any other
fragmentation, ESF just isn't worth worrying about, IMO.
HTH
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
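The idea behind the extent scan fragmentation number Kalen describes can be sketched in a few lines: count how often the next extent of the structure is not physically adjacent to the previous one. This is an illustrative approximation in Python; the actual DBCC algorithm differs in detail.

```python
def extent_scan_fragmentation(extent_start_pages):
    # Share of extents that are not physically adjacent to the previous
    # extent of the structure. An extent is 8 pages, so adjacency means
    # the next extent starts exactly 8 pages later. Illustrative only;
    # DBCC's real algorithm differs in detail.
    pairs = zip(extent_start_pages, extent_start_pages[1:])
    gaps = sum(1 for prev, cur in pairs if cur != prev + 8)
    return 100.0 * gaps / (len(extent_start_pages) - 1)

print(extent_scan_fragmentation([0, 8, 16, 24, 32]))   # 0.0 - perfectly contiguous
print(extent_scan_fragmentation([0, 8, 40, 48, 120]))  # 50.0 - interleaved allocations
```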
"kirk1880" <kirk1880@.discussions.microsoft.com> wrote in message
news:13DDB330-4ADD-4B67-B627-623CAB48C95E@.microsoft.com...
>I have a large table that I cannot seem to bring the extent fragmentation
> down to an acceptable level. First the logical scan density and logical
> fragmentation were extremely high so I rebuilt the index. This helped both
> the logical scan density and logical fragmentation but the extent
> fragmentation stayed at 65%. I then did an index defrag and the extent
> fragmentation stayed at 65%. I then reindexed again and the extent
> fragmentation dropped to 33%. I defragged and reindexed once more and it
> dropped to 21%. I could not get this number to drop any more from 21% so I
> dropped the index and recreated it - and it came back with an extent
> fragmentation of 34%. Is there a way, short of moving to another file
> group
> to get my extent fragmentation down without jumping through all these
> hoops?
Also, if you have multiple files, extent scan fragmentation is meaningless
(as documented in Books OnLine). No amount of rebuilding/defragging will get
it down in that case because the algorithm doesn't cope with multiple files.
You should read the whitepaper below which will explain about fragmentation
and when you should be worried about it.
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
Regards.
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Kalen Delaney" <replies@.public_newsgroups.com> wrote in message
news:uid1C2SvEHA.3872@.TK2MSFTNGP11.phx.gbl...
> Hi Kirk
> Extent scan fragmentation (ESF) measures how contiguous the extents in
your
> table or index are. To have no ESF, you would basically need to have free
> space in your file equal to the total size of your structure, and it would
> need to be allocated all at once, which SQL Server doesn't do. Even if you
> have the contiguous space, you still might end up with ESF. If while you
are
> rebuilding your structures, other users are doing any kind of work that
> requires allocation of extents, their allocations could come from an
extent
> right next to the one you just got, so your next allocation would be
> separated from your previous one.
> Why do you want to get it down lower? If you don't have any other
> fragmentation, ESF just isn't worth worrying about, IMO.
> --
> HTH
> --
> Kalen Delaney
> SQL Server MVP
> www.SolidQualityLearning.com
>
> "kirk1880" <kirk1880@.discussions.microsoft.com> wrote in message
> news:13DDB330-4ADD-4B67-B627-623CAB48C95E@.microsoft.com...
DBCC SHOWCONTIG (SQL 7.0)
Can anybody tell me how I can put the output of
DBCC SHOWCONTIG into a table in SQL Server 7.0?
I've already done the same on SQL Server 2000, however I can't
get this to run on my servers with SQL Server 7.0. Can you
help me?
There's no output when I execute this:
..
..
insert into frag_tb
exec ('dbcc showcontig ('+ @.objectid + ')')
..
..
Best regards
CC&JM,
Yes, DBCC SHOWCONTIG in SQL7 does not have the WITH TABLERESULTS option.
You may be able to output the result into a perl script, and then parse
the results in a scripting language. It ain't going to be pretty.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
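A sketch of the parsing Mark suggests, in Python rather than Perl (illustrative; the sample output format below is approximated from SQL Server 7.0/2000's text output, so verify the labels against your server before relying on it):

```python
import re

def parse_showcontig(text):
    # Parse "- Label....: value" lines from DBCC SHOWCONTIG's text output
    # into a dict. The dotted-leader format is approximated from SQL Server
    # 7.0/2000; check it against your server's actual output.
    result = {}
    for line in text.splitlines():
        m = re.match(r"-\s*(.+?)\.{2,}:\s*(\S+)", line.strip())
        if m:
            result[m.group(1).strip()] = m.group(2)
    return result

sample = """
- Pages Scanned................................: 1372708
- Extents Scanned..............................: 173093
- Scan Density [Best Count:Actual Count].......: 26.81%
"""
parsed = parse_showcontig(sample)
print(parsed["Pages Scanned"])  # 1372708
```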
CC&JM wrote:
> Hi,
> Can anybody tell me how can i put the output of the
> DBCC SHOWCONTIG into a table in SQL Server 7.0
> I've already done the same on SQL Server 2000, however I can't
> get this to run on my servers with SQL Server 7.0. Can you
> help me?
> There's no output when I execute this:
> .
> .
> insert into frag_tb
> exec ('dbcc showcontig ('+ @.objectid + ')')
> .
> .
> Best regards
DBCC SHOWCONTIG (No Lock)
Is there a way to execute the DBCC SHOWCONTIG without this program creating a
table lock?
Thank You,
I think that the FAST option will take a light-weight lock (if any). And it is a lot faster.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Joe K." <JoeK@.discussions.microsoft.com> wrote in message
news:C763E461-B833-4DBF-B8E9-E7222F73DB4F@.microsoft.com...
> Is there a way to execute the DBCC SHOWCONTIG without this program creating a
> table lock?
> Thank You,|||The FAST option will only take a table IS lock rather than a table S lock.
That's as non-blocking as it can do.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:unLlLucvEHA.3640@.TK2MSFTNGP10.phx.gbl...
> I think that the FAST option will take a light-weight lock (if any). And
it is a lot faster.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Joe K." <JoeK@.discussions.microsoft.com> wrote in message
> news:C763E461-B833-4DBF-B8E9-E7222F73DB4F@.microsoft.com...
> >
> > Is there a way to execute the DBCC SHOWCONTIG without this program
creating a
> > table lock?
> >
> > Thank You,
>