Friday, February 24, 2012

DBCC commands stability

Hi all,

I have a server from which keeping clients off for maintenance is
difficult. They all have VPN connections and can be online any time
they want, and uptime as always is important.

Now I need to run DBCC SHRINKDATABASE, CHECKDB, and of course
CHECKPOINT right before backups, and when the log seems to grow. I
just tried DBCC CHECKDB on my home computer and apparently it's really
I/O and CPU intensive on this dual P3. Can users be running queries and
the occasional update and insert while CHECKDB is doing its thing? Or
is it better to lock everyone out?

How about shrinkdatabase? Any benchmarks on the stability of these
commands while other clients are running? If tables are getting locked
during these commands, the log file will grow even if shrinkdatabase
is running...

Any commands to show which tables are locked, and by whom or what?
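For reference, on SQL Server 2000-era systems the built-in sp_lock and sp_who2 procedures give a rough picture of current locks and who holds them (a sketch; the ObjId below is a placeholder):

```sql
-- List all current locks: spid, database, object, lock type/mode/status
EXEC sp_lock;

-- Show who each spid belongs to: login, host, command, blocking spid (BlkBy)
EXEC sp_who2;

-- Resolve a locked object's id (from sp_lock's ObjId column) to its name;
-- run this in the database that owns the object
SELECT OBJECT_NAME(123456789);  -- 123456789 is a placeholder ObjId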

I just tried our 2.5GB database on my dual P3 with 256MB RAM home
computer. CHECKDB took 8 minutes and SHRINKDATABASE took 1.5 minutes. We
have a dual P3 server at work, an IBM xSeries 232 with 1GB RAM, but even 5
minutes of downtime can hurt if the shrink needs to be run during work
hours.

Any thoughts?

"Ghazan Haider" <ghazan@.ghazan.haider.name> wrote in message
news:2f57764a.0404091155.33f4439f@.posting.google.com...
> Hi all,
> I have a server from which keeping clients off for maintenance is
> difficult. They all have VPN connections and can be online any time
> they want, and uptime as always is important.
> Now I need to run dbcc shrinkdatabase, checkdb and of course
> checkpoint right before backups, and when the log seems to grow. I
> just tried dbcc checkdb on my home computer and apparently its really
> io and CPU intensive on this dual P3. Can users be running queries and
> the occasional update and insert while checkdb is doing its thing? Or
> is it better to lock everyone out?
> How about shrinkdatabase? Any benchmarks on the stability of these
> commands while other clients are running? If tables are getting locked
> during these commands, the log file will grow even if shrinkdatabase
> is running...
> Any commands to show which tables are locked, and by whom or what?
> I just tried our 2.5GB database on my dual P3 with 256MB ram home
> computer.. checkdb took 8 minutes and shrinkdb took 1.5 minutes. We've
> a dualP3 server at work, IBM xSeries 232 with 1GB ram, but even 5
> minutes of downtime can hurt if shrinkdb needs to be run during work
> hours.
> Any thoughts?

Yes, why should anyone contemplate running these things during
production hours when you have an automated task queue at your
disposal? Surely any realtime high volume transaction database
system has its natural cycles of usage and minimal usage?

Always move routine tasks into this window of opportunity via
automation and queuing of the task to the off-peak times.
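In SQL Server, that kind of queuing is usually set up with SQL Server Agent jobs; a rough sketch, assuming a database named YourDB and a 3 a.m. quiet window (both placeholders):

```sql
USE msdb;
GO
-- Create a job that runs CHECKDB nightly in the off-peak window
EXEC sp_add_job @job_name = N'Nightly CHECKDB';
EXEC sp_add_jobstep
    @job_name = N'Nightly CHECKDB',
    @step_name = N'Run CHECKDB',
    @subsystem = N'TSQL',
    @database_name = N'YourDB',          -- placeholder database name
    @command = N'DBCC CHECKDB (''YourDB'') WITH NO_INFOMSGS;';
EXEC sp_add_jobschedule
    @job_name = N'Nightly CHECKDB',
    @name = N'3am daily',
    @freq_type = 4,                      -- daily
    @freq_interval = 1,
    @active_start_time = 30000;          -- 03:00:00
EXEC sp_add_jobserver
    @job_name = N'Nightly CHECKDB',
    @server_name = N'(local)';
```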

--
Pete Brown
Winluck P/L
IT Managers & Engineers
Falls Creek
Australia
www.mountainman.com.au/software

> Yes, why should anyone contemplate running these things during
> production hours when you have an automated task queue at your
> disposal? Surely any realtime high volume transaction database
> system has its natural cycles of usage and minimal usage?
> Always move routine tasks into this window of opportunity via
> automation and queuing of the task to the off-peak times.

The log file simply blows up at the worst of times, swallowing all of
the 36GB disk in a matter of 12 minutes. Apparently something is
locked while some other heavy transaction or bulk upload is going on.
Some of the financial transactions are really heavy and update lots of
rows. I'd just like to have the flexibility to checkpoint and shrink
the database, and know what is locked and why.

Why do you want to shrink the database? You've said it has very heavy
usage - if you shrink it, it'll grow again. That's because it needs all the
space for regular running - this is demonstrated by your (I'm assuming) need
to shrink every so often. Why cause the extra work for no gain? You'd be far
better off not shrinking the database at all.

You can run shrink and checkdb at any time, although they can cause up to a
20% drop (observed on a test system - YMMV) in transaction throughput.
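For completeness, the basic forms of both commands (the database name is a placeholder):

```sql
-- Consistency check; NO_INFOMSGS suppresses the per-table summary output
DBCC CHECKDB ('YourDB') WITH NO_INFOMSGS;

-- Shrink the database, leaving 10% free space in the files (generally not
-- recommended, per the advice above, since a busy database will just regrow)
DBCC SHRINKDATABASE ('YourDB', 10);
```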

Regards.

--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine

This posting is provided "AS IS" with no warranties, and confers no rights.

"Ghazan Haider" <ghazan@.ghazan.haider.name> wrote in message
news:2f57764a.0404092151.27001b80@.posting.google.com...
> > Yes, why should anyone contemplate running these things during
> > production hours when you have an automated task queue at your
> > disposal? Surely any realtime high volume transaction database
> > system has its natural cycles of usage and minimal usage?
> > Always move routine tasks into this window of opportunity via
> > automation and queuing of the task to the off-peak times.
> The log file simply blows up at the wrongest of times, swallows all of
> the 36GB disk in a matter of 12 minutes. Apparently something is
> locked while some other heavy transaction or bulk upload is going on.
> Some of the financial transactions are really heavy and update lots of
> rows. I'd just like to have the flexibility to checkpoint and shrinkdb
> the database, and know what is locked and why.

"Paul S Randal [MS]" <prandal@.online.microsoft.com> wrote in message news:<4078339a$1@.news.microsoft.com>...
> Why do you want to shrink the database? You've said it has very heavy
> usage - if you shrink it, it'll grow again. That's because it needs all the
> space for regular running - this is demonstrated by your (I'm assuming) need
> to shrink every so often. Why cause the extra work for no gain? You'd be far
> better off not shrinking the database at all.
> You can run shrink and checkdb at any time, although they can cause up to a
> 20% drop (observed on a test system - YMMV) in transaction throughput.

We've had the system freeze with the message "transaction log full" (no
more transactions or ERP system logins, which insert rows). That was
when the log file grew to several gigabytes and filled up the disk.
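One way to watch log growth before it fills the disk is DBCC SQLPERF, which reports log size and percent used for every database on the server:

```sql
-- Log file size (MB) and percentage in use, per database
DBCC SQLPERF (LOGSPACE);
```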

It'd be nice to be able to check what is locked and why, making the
logs grow. The shrink and checkdb are just an assurance after
unlocking whatever is locked, and running checkpoint to make sure
everything has been committed, so work can begin all over again. I
wouldn't really need to shrinkdb if there's a command to show the number
of uncommitted transactions in the log file, so I know everything has
been flushed.

"Ghazan Haider" <ghazan@.ghazan.haider.name> wrote:

...[trim]...

> wouldnt need to really shrinkdb if theres a command to show the number
> of uncommitted transactions in the log file, so I know everything has
> been flushed.

Look up dbcc opentran
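For reference, DBCC OPENTRAN reports the oldest active transaction in a database's log, including its SPID and start time (database name is a placeholder):

```sql
-- Oldest active transaction holding up log truncation in YourDB
DBCC OPENTRAN ('YourDB');
```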

--
Pete Brown
Winluck P/L
IT Managers & Engineers
Falls Creek
Australia
www.mountainman.com.au/software
