Showing posts with label inconsistency. Show all posts

Sunday, March 25, 2012

DBCC SHOWCONTIG inconsistency??

I have a table where I store around 1 million rows for 7 days.
I run a DBCC SHOWCONTIG every week and I noticed that the number of
Rows and the Average Free Bytes are stable but the number of Extents
and the Average Record Size keeps increasing.
Any idea how to explain this increase, and how to stop it? Will DBCC
DBREINDEX help?

Here is my data:
Rows: 1166273 (same as last week)
Extents: 147099 (+10% since last week)
Max RecordSize: 7050
Average Record Size: 7688 (+10% since last week)
AverageFreeBytes: 372
Average Page Density: 95
Scan density: 30

Guillaume (loizeau@.hotmail.com) writes:
> I have a table where I store around 1 million rows for 7 days.
> I run a DBCC SHOWCONTIG every week and I noticed that the number of
> Rows and the Average Free Bytes are stable but the number of Extents
> and the Average Record Size keeps increasing.
> Any idea how to explain this incrase? and how to stop it? Will DBCC
> DBREINDEX will help?
> Here is my data:
> Rows: 1166273 (same as last week)
> Extents: 147099 (+10% since last week)
> Max RecordSize: 7050
> Average Record Size: 7688 (+10% since last week)
> AverageFreeBytes: 372
> Average Page Density: 95
> Scan density: 30

If the row size is growing, that indicates that the newer data
has a higher row size. But it cannot increase by 10% for long,
since 8060 bytes is the maximum row size. (Unless that row size includes
text/image data.)

> Around 150,000 new rows are inserted every day and 150,000 rows (older
> than 7 days) are deleted every day.

That can certainly cause fragmentation, and you should run DBCC DBREINDEX
at least once a week.
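For reference, the weekly maintenance suggested here might look like the following sketch (the Events table and the fill factor of 90 are taken from later in the thread; adjust names to your own schema):

```sql
-- SQL Server 2000 syntax: check fragmentation, then rebuild.
DBCC SHOWCONTIG ('Events') WITH TABLERESULTS  -- capture the stats as a rowset
DBCC DBREINDEX ('Events', '', 90)             -- '' = rebuild all indexes, fill factor 90
GO
```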

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Erland,
The Average Record Size has reached 7984 bytes and it's still
increasing.
The first time I had the problem my filegroup was full and I didn't have
time to investigate the issue so I just truncated the table. Now, I have
plenty of space in the filegroup and I'd like to understand the problem
before running any DBCC DBREINDEX.
Could the statistics for the table be wrong? or not updated?

*** Sent via Developersdex http://www.developersdex.com ***

Guillaume (anonymous@.devdex.com) writes:
> The Average Record Size has reached 7984 bytes and it's still
> increasing.
> The first time I had the problem my filegroup was full and I didn't have
> time to investigate the issue so I just truncated the table. Now, I have
> plenty of space in the filegroup and I'd like to understand the problem
> before running any DBCC DBREINDEX.
> Could the statistics for the table be wrong? or not updated?

If the average record size keeps increasing, it's probably because of
your data. If that is abnormal, you will need to look at the data you
load. Or do you have any reason to assume that the number is incorrect?
What is your maximum record size?

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Erland,

The table contains only "events" monitoring my system and the data I
store in that table should not grow over time. I have several systems
running in parallel and this is the only DB where I have the problem.
Here is what I get from the SHOWCONTIG on the suspicious DB:
Minimum Record Size: 116
Maximum record Size: 7050
Average Record Size: 7984

Do you know why the Average Record Size is bigger than the Max Record
Size? It doesn't make sense.

Here is another SHOWCONTIG result from a different system.
Minimum Record Size: 111
Maximum record Size: 7231
Average Record Size: 420

*** Sent via Developersdex http://www.developersdex.com ***

Guillaume (anonymous@.devdex.com) writes:
> The table contains only "events" monitoring my system and the data I
> store in that table should not grow over time. I have several systems
> running in parallel and this is the only DB where I have the problem.
> Here is what I get from the SHOWCONTIG on the suspicious DB:
> Minimum Record Size: 116
> Maximum record Size: 7050
> Average Record Size: 7984
> Do you know why the Average Record Size is bigger then the Max Record
> Size? it doesn't make sense.

That's undoubtedly a little funny. Maybe Paul Randal knows of some
reason this can arise. (And if Paul does not know, no one else is
likely to know.)

Then again, while the number is funny, it's not likely to be a major
problem in itself, or do you have other problems with the table? A DBCC
CHECKDB could reveal some corruption.

I would try a DBCC DBREINDEX and see what happens.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

>Then again, while the number is funny, it's not likely to be a major
>problem in itself, or do you have other problems with the table?
No I don't. The only problem I have is the size of this single table
growing every day.

>A DBCC CHECKDB could reveal some corruption.
Yes I will try to run it.

>I would try a DBCC DBREINDEX and see what happens.
I haven't done it yet because I was trying to figure out why the table
size was growing steadily. If there is nothing I can do right now, I'll
try the DBCC DBREINDEX

Thanks for your help

*** Sent via Developersdex http://www.developersdex.com ***

The only thing I can think of is a bug in showcontig where we're dividing
the cumulative row size count by the wrong number of pages, but I find it
hard to believe that such a bug has existed unnoticed for 6 years (since I
rewrote most of showcontig in Spring '99).

Can you post the full showcontig output for the two tables (one on the
suspicious system and the comparable table on another system)?

Thanks

--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns96477EB94FE49Yazorman@.127.0.0.1...
> Guillaume (anonymous@.devdex.com) writes:
> > The table contains only "events" monitoring my system and the data I
> > store in that table should not grow over time. I have several systems
> > running in parallel and this is the only DB where I have the problem.
> > Here is what I get from the SHOWCONTIG on the suspicious DB:
> > Minimum Record Size: 116
> > Maximum record Size: 7050
> > Average Record Size: 7984
> > Do you know why the Average Record Size is bigger then the Max Record
> > Size? it doesn't make sense.
> That's undoubtedly a little funny. Maybe Paul Randal knows about some
> reason how this can arise. (And if Paul does not know, no one else is
> likely to know.)
> Then again, while the number is funny, it's not likely to be a major
> problem in itself, or do you have other problems with the table? A DBCC
> CHECKDB could reveal some corruption.
> I would try a DBCC DBREINDEX and see what happens.
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

I looked through the code and found the problem.

In SQL Server 2000, showcontig uses the size information from all the rows, but only counts non-ghost rows in the rowcount. For an index with a bunch of ghost rows, the avg rowsize will be higher than the real value, as the total rowsize is being divided by the wrong (smaller) number of rows.

In SQL Server 2005 this problem has been fixed. I checked back through the internal bug database and this issue was found internally in 2002 and the decision was made not to fix it in SQL Server 2000. Shows how good my memory is.
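The bug can be sketched numerically. In the illustration below, every figure is invented for the example (not taken from the poster's system); the point is only that dividing the total row bytes by the non-ghost row count alone inflates the average:

```sql
-- Sketch of the SQL Server 2000 showcontig bug: total row size
-- (which includes ghost rows) divided by non-ghost rows only.
DECLARE @total_bytes bigint, @live_rows int, @ghost_rows int
SELECT @total_bytes = 800000000,  -- assumed: total bytes of all rows, ghosts included
       @live_rows   = 1000000,    -- assumed: non-ghost rows
       @ghost_rows  = 1000000     -- assumed: ghost rows awaiting cleanup
SELECT @total_bytes / @live_rows                 AS reported_avg,  -- inflated (800)
       @total_bytes / (@live_rows + @ghost_rows) AS true_avg       -- correct (400)
GO
```

With enough ghost rows, the reported average can even exceed the genuine maximum record size, which is exactly what the poster saw.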

If you heavily rely on this value I recommend rebuilding the index as we won't provide a fix for it at this stage of SQL Server 2000's lifecycle.

My apologies for the time you've spent trying to figure this out, but I'm glad we finally did. Let me know if you have any further questions.

Thanks and regards.

--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

"Guillaume" <anonymous@.devdex.com> wrote in message news:Nvzce.442$zX2.13250@.news.uswest.net...
Here is the SHOWCONTIG for the same table on 3 Databases. I have NOT
done a DBCC DBREINDEX on any of those tables.

Suspicious table 1.3 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 1372708
Rows 1323293
Min RecordSize 116
Max RecordSize 7050
Average RecordSize 7984.698
Extents 173093
ExtentSwitches 640050
AverageFreeBytes 370.9679871
Average Page Density 95.41675568
Scan Density 26.80864494
Best Count 171589
Actual Count 640051
Logical Fragmentation 12.87914085
ExtentFragmentation 6.509795189

Normal table 1.1 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 63520
Rows 1170832
Min RecordSize 111
Max RecordSize 7231
Average RecordSize 420.889
Extents 7981
ExtentSwitches 8036
AverageFreeBytes 301.098999
Average Page Density 96.27997589
Scan Density 98.793082
Best Count 7940
Actual Count 8037
Logical Fragmentation 0.127518892
ExtentFragmentation 33.07855988

Normal table 9 million rows
ObjectName Events
ObjectId 1221579390
IndexName IX_Time
IndexId 1
Level 0
Pages 467440
Rows 9048200
Min RecordSize 130
Max RecordSize 7059
Average RecordSize 375.19
Extents 61,950
ExtentSwitches 115,936
AverageFreeBytes 794.7390137
Average Page Density 90.18113708
Scan Density 50.39806102
Best Count 58430
Actual Count 115937
Logical Fragmentation 11.53752327
ExtentFragmentation 12.42292213

*** Sent via Developersdex http://www.developersdex.com ***

BTW - forgot to say - the reason you're seeing the avg row size increase is that (ironically) you're deleting more records and so the number of ghost records is growing, further distorting the calculation.

--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

Thanks a lot for your answer,

Based on your description, it turns out that my problem is related to
ghost rows.
Do you have any idea why I have such a large number of ghost rows on one
single table from only one database? I assume the ghost rows are
periodically deleted from the database. If I'm right then something
might be going wrong during the physical delete of the rows.
I ran a TRUNCATE TABLE once on my table and it fixed my issue for a
period of time, but now the problem is back. Do you think it might make
a difference if I deleted and then recreated the table, or should I just
run DBCC DBREINDEX from time to time?

Guillaume

*** Sent via Developersdex http://www.developersdex.com ***

Do you have any trace flags turned on? I'm wondering if someone's caused
ghost cleanup to be turned off.

Also, do you have any (very) long-running transactions in that database?
That would also prevent any ghost rows needed by a potential rollback of
these transactions from being deleted.

Truncate will remove them as it deletes all rows from the table.

Yes, a reindex will have the same effect but shouldn't be needed because of
the automatic ghost cleanup process.

This seems to be the root of the problem - the number of ghost rows climbing
steadily.

Thanks

--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

"Guillaume" <anonymous@.devdex.com> wrote in message
news:Zalde.50$wu5.2310@.news.uswest.net...
> Thanks a lot for your answer,
> Based on your description, it turns out that my problem is related to
> ghost rows.
> Do you have any idea why I have such a large number of ghost rows on one
> single table from only one database? I assume the ghost rows are
> periodically deleted from the database. If I'm right then something
> might be going wrong during the physical delete of the rows.
> I ran a TRUNCATE TABLE once on my table and it fixed my issue for a
> period of time but now the problem is back so do you think it might make
> a difference if I deleted and then recreated the table or should I just
> run DBREINDEX from time to time?
> Guillaume
> *** Sent via Developersdex http://www.developersdex.com ***

Paul,

I don't have any specific Trace flags on. And I don't have any long
running transaction. Do you know if there is a way to check why the
ghost cleanup doesn't run on a single table on my machine?
I recently ran the following commands:
DBCC TRACEON (2514)
GO
DBCC CHECKTABLE (my event table)
GO
The result was:
Ghost Record count = 52 million

How can I delete those ghost records from my table?
Can I force the ghost cleanup service to run?

Thanks a lot.

Guillaume

*** Sent via Developersdex http://www.developersdex.com ***

Guillaume,

Normally there is a background task that wakes up every 5 seconds or so to
clean some ghosted records. To force the ghost cleanup activities, you
can:

First, make sure traceflag 661 is turned off in the server by
dbcc traceoff(661, -1)
dbcc tracestatus(661, -1)
go

Then, issue a scan on the table/index that contains the 52 million
ghosted records, something like:
select * from [my event table] with (index=<id of the index that
contains the ghosted records>)
go
Upon seeing a ghosted record, the scan will enqueue a request to clean it up
aggressively.
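A concrete version of those steps, using the poster's Events table and its clustered index IX_Time (index id 1, per the earlier SHOWCONTIG output; both names are specific to this thread):

```sql
-- Make sure ghost cleanup is not disabled by trace flag 661,
-- then force a full scan of the index holding the ghost records.
DBCC TRACEOFF (661, -1)
DBCC TRACESTATUS (661, -1)
GO
-- COUNT(*) is used here so the scan does not return 1.3 million rows
-- to the client; the index hint still forces the scan of IX_Time.
SELECT COUNT(*) FROM Events WITH (INDEX = 1)
GO
```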

--
Gang He
Software Design Engineer
Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.
"Guillaume" <anonymous@.devdex.com> wrote in message
news:Vzage.300$693.7473@.news.uswest.net...
> Paul,
> I don't have any specific Trace flags on. And I don't have any long
> running transaction. Do you know if there is a way to check why the
> ghost cleanup doesn't run on a single table on my machine?
> I recently ran the following command:
> DBCC TRACEON (2514)
> GO
> DBCC CHECKTABLE (my event table)
> GO
> The result was:
> Ghost Record count = 52 millions
> How can delete those ghost records from my table?
> Can I force the ghost cleanup service to run?
> Thanks a lot.
> Guillaume
> *** Sent via Developersdex http://www.developersdex.com ***

Thursday, March 8, 2012

DBCC inconsistency

We have a very minor problem with a db. It's not
affecting how the db runs, but it's causing BackupExec to
show red for the server every time it backs up. Here is
the error.
Consistency checking magic
Consistency Check including Indexes.
There was a problem running the DBCC.
^ ^ ^ ^ ^
SQL Server returned the following error message:
Table Corrupt: Object ID 2042490355 (object '2042490355')
does not match between 'SYSCOMMENTS' and 'SYSOBJECTS'.
^ ^ ^ ^ ^
I'm told that I just need to reconcile this
object "2042490355" in those two db's using a simple sql
script. Can anyone tell me what that script is or where
to find the article for what I need? I've been searching
support for at least an hour.

Shannon,
Who told you you just needed to reconcile this object in the two dbs? I have
no idea what someone might mean by 'reconcile' without knowing what the
problem is.
First of all, we're most likely talking about just ONE database, but there
are two system tables involved.
Sysobjects has one row for each object in the entire database, and
syscomments contains the textual definition for any objects with a TSQL code
definition such as stored procedures, functions, constraints, views, etc.
So make sure you are in the right database.
Then see what object the message is referring to:
SELECT object_name(2042490355)
Then run sp_helptext 'whatever that object name was'
You might try just dropping the object, and recreating it, but save the
definition before you do that.
Since the object is not a table (or so it seems from this evidence), you
don't have to worry about losing data.
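The diagnostic steps described above, sketched as a script (run it in the affected database, not in master; the object name returned by the first query is of course unknown here):

```sql
-- Which object does the DBCC error refer to?
SELECT object_name(2042490355)
-- Is there a matching row in each system table?
SELECT * FROM sysobjects  WHERE id = 2042490355
SELECT * FROM syscomments WHERE id = 2042490355
-- Save the definition before dropping and recreating the object.
EXEC sp_helptext 'name_returned_by_first_query'
GO
```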
Good luck
--
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"Shannon" <srubin@.pec.com> wrote in message
news:006a01c3602a$482282a0$a401280a@.phx.gbl...
> We have a very minor problem with a db. It's not
> affecting how the db runs, but it's causing BackupExec to
> show red for the server every time it back up. Here is
> the error.
> Consistency checking magic
> Consistency Check including Indexes.
> There was a problem running the DBCC.
> ^ ^ ^ ^ ^
> SQL Server returned the following error message:
> Table Corrupt: Object ID 2042490355 (object '2042490355')
> does not match between 'SYSCOMMENTS' and 'SYSOBJECTS'.
>
> ^ ^ ^ ^ ^
> I'm told that I just need to reconcile this
> object "2042490355" in those two db's using a simple sql
> script. Can anyone tell me what that script is or where
> to find the article for what I need? I've been searching
> support for at least an hour.

Friday, February 24, 2012

Dbcc Checkident Reseed

On some tables, when I execute the following statement, the identity value will start at 1; other times, at 0. Has anyone experienced this inconsistency before?

DBCC CHECKIDENT ('TABLE_NAME', RESEED, 0)

Is the miracle effect?

Can you post some code where this happens?

Please make sure to post the DDL, any DML and sample data.

Thanks

Here ya go. Most of the tables are very simplistic. I start by deleting all of the records, then executing DBCC CHECKIDENT ('table_name', RESEED, 0) in Query Analyzer against the table. Here's the DDL for one of the tables. 90% of the time the identity value starts at 1; the other 10%, at 0.

t_unique_id = int
t_varchar = varchar(50)
t_flag = int with rule limiting values to 1 or 0
t_guid = char(36)

/*==============================================================*/
/* Table: producer_type */
/*==============================================================*/
create table dbo.producer_type (
producer_type_id t_unique_id identity,
producer_type_name t_varchar not null,
org_flag t_flag not null,
description t_long_varchar not null,
comments t_long_varchar not null,
created_by t_unique_id not null,
created_dt datetime not null,
changed_by t_unique_id not null,
changed_dt datetime not null,
logical_unit_of_work_id t_guid not null
)
go

alter table dbo.producer_type
add constraint pk_producer_type primary key (producer_type_id)
with
fillfactor= 90
go

/*==============================================================*/
/* Index: ux_producer_type_01 */
/*==============================================================*/
create unique index ux_producer_type_01 on dbo.producer_type (
producer_type_name
)
with
fillfactor= 90
go

I'll take a look, but why don't you specify the seed and increment in the DDL?

I'm sure there's a default...but...

I've always set IDENTITY(1,1) (that is when I have to use it...I try to avoid it)

Just out of curiosity...how are you building this table?

You can't do what you're doing unless you use dynamic sql...unless you're building from the application layer, and if you are, what's with the GO?

Also out of curiosity, it sounds like whatever process you're doing is placing an undue amount of meaning on the identity column.

Why does it need to be reset to 1 (or 0)?

Sorry, these are in no particular order:

First, I'm building the tables with DDL and executing with query builder. I didn't want to post all of the DDL for the user-defined data types, FKs, etc... The 'GO' is used as a terminator for the SQL. I'm building over 150 tables, PKs, FKs, indexes, etc. in one script. It helps me if I encounter an error: the script will stop at that point and allow me to correct the problem and continue running.

Second, the default on the identity is (1,1)

Third, we are placing an extra burden on the identity value because we are trying to migrate data into the new system from an existing system and are heavily coding at the same time. The developers need to know the primary key values for reference or metadata for coding purposes. I know the order of the values going into the tables and, as a standard, told all of the developers the PK would start at 1. This hurts if it periodically starts at 0 because we are constantly rebuilding the database from the DDL.

Finally, I'm regretting the decision not to turn the identity insert on and specify the values on insert. I could change my scripts but it would take some time to do. I didn't know if there was something simple I was missing with the DBCC CHECKIDENT and RESEED.
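For what it's worth, the 0-versus-1 behavior matches the documented semantics of DBCC CHECKIDENT: if no row has ever been inserted into the table (or all rows were removed with TRUNCATE TABLE), the first insert after a reseed uses the new seed value itself; if rows were removed with DELETE, the next insert uses seed + 1. A sketch against the producer_type table from the DDL above (assuming no foreign keys reference it, since TRUNCATE would fail otherwise):

```sql
-- After TRUNCATE (or on a brand-new table): next identity = reseed value.
TRUNCATE TABLE dbo.producer_type
DBCC CHECKIDENT ('producer_type', RESEED, 0)
-- the first insert now gets producer_type_id = 0

-- After DELETE: next identity = reseed value + 1.
DELETE FROM dbo.producer_type
DBCC CHECKIDENT ('producer_type', RESEED, 0)
-- the first insert now gets producer_type_id = 1
```

If the 10% of runs that start at 0 are the ones where the table was freshly created (or truncated) rather than emptied with DELETE, this would explain the inconsistency.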

Hope this helps, sorry for the long reply.

Are you not using Query Analyzer?

And if you are, then how does this work?

t_unique_id = int
t_varchar = varchar(50)
t_flag = int with rule limiting values to 1 or 0
t_guid = char(36)

/*==============================================================*/
/* Table: producer_type */
/*==============================================================*/
create table dbo.producer_type (
producer_type_id t_unique_id identity,
producer_type_name t_varchar not null,
org_flag t_flag not null,
description t_long_varchar not null,
comments t_long_varchar not null,
created_by t_unique_id not null,
created_dt datetime not null,
changed_by t_unique_id not null,
changed_dt datetime not null,
logical_unit_of_work_id t_guid not null
)
go

You can't assign the datatype to a variable like that...unless it's a different interface/language...

I'm amazed at a 10% occurrence...it's not often that the same process behaves differently.

Since you're blowing the data away anyway, can you just do a drop, then recreate the table?

It will save on logging.

Why not create a shell db and recreate it when you need. Then have load scripts...or better just dump a "clean slate" db, and restore over...

and I'm sure NONE of these suggestion will work for you because of either RI, developer data, yada yada yada...

And in light of the fact that you're having these problems...why not give it a try and build the table with IDENTITY(1,1)?

My Own Opinion (MOO)

Good Luck