Thursday, March 29, 2012

DBCC SHRINKFILE ('filename' , EMPTYFILE) not working

I created a file (size = 1 GB, filegrowth = 1 GB) but I forgot to add it to the
correct filegroup (it defaulted to PRIMARY).
I am trying to use DBCC SHRINKFILE with the EMPTYFILE option so that I can
then go and do an 'ALTER DATABASE ... REMOVE FILE'. I start the process, and
with sp_who2 active it shows the command DbccFilesCompact, but it just sits
there and spins forever.
I see CPU and disk I/O cycles escalating, but nothing ever happens.
Is there a step before this that I am forgetting?
Setup is: x64 SQL 2005 SP1 (Windows 2003 EE x64), DB in simple recovery
model.
gracias,
Robert
I should state a few more facts.
It is a data file (not log), and the whole DB is 1.8 TB. The file is only 100 GB.
I have tried to back up just that filegroup (only 1 file in it) and do an
EMPTYFILE right afterwards, but no luck...
"sql411@.nospam.com" wrote:

> I created a file (size = 1 GB, filegrowth=1 GB) but I forgot to add it to the
> correct filegroup (it defaulted to the primary).
> I am trying to use DBCC shrinkfile with the emptyfile option so that I can
> then go and do a 'alter database remove file'. I start the process and with
> sp_who2 active it shows the command DbccFilesCompact, but it just sits there
> and spins forever.
> I see cpu and diskio cycles escalating but nothing ever happens.
> is there a step before this that i am forgetting?
> setup is: x64 sql 2005, sp1 (windwos 2003 ee x64), db in simple recovery
> model.
> gracias,
> Robert
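For reference, the sequence being attempted here can be sketched as follows; the database and file names are placeholders. Note that EMPTYFILE can only relocate pages to other files in the *same* filegroup, and the primary data file itself cannot be emptied this way:

```sql
USE MyDatabase;
GO

-- Move all data out of the misplaced file into the remaining
-- files of its filegroup.
DBCC SHRINKFILE ('MisplacedFile', EMPTYFILE);
GO

-- Once the file is empty it can be dropped from the database.
ALTER DATABASE MyDatabase REMOVE FILE MisplacedFile;
GO
```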

dbcc shrinkfile & recovery mode

Occasionally, we have gotten messages that the logfile is too big on one
of our servers. At that point, we will run a backup of the database and
then dbcc shrinkfile on it.
From what I've read, I thought that some of the properties we have set
would make it unnecessary to have to do that. On the Options tab on the
database properties, we have the Recovery model set to Simple, and have
the following settings checked: Auto Update Statistics, Auto Shrink,
and Auto Create Statistics.
Is there another setting I'm missing? Or, should I just set up a job to
back up the database and run the dbcc shrinkfile on it at set intervals?
BC
AutoShrink is taking your log file down to a very minimal size when it runs,
and then your autogrow has to kick in when the db has a lot of
activity... if it is very busy, it could grow quite large. I've never seen
a SQL Server message saying "too big"... what are you really getting, and what
is the size at that point?
In theory, if you set the db to Simple and DBCC SHRINKFILE it down to a size
that is big enough to handle your busiest time frames, you should be good
from there.
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
www.experts-exchange.com - experts compete for points to answer your
questions
"Blasting Cap" <goober@.christian.net> wrote in message
news:%23xn12hncFHA.1384@.TK2MSFTNGP09.phx.gbl...
> Occasionally, we have gotten messages that the logfile is too big on one
> of our servers. At that point, we will run a backup of the database and
> then dbcc shrinkfile on it.
> From what I've read, I thought that some of the properties we have set
> would make it unnecessary to have to do that. On the Options tab on the
> database properties, we have the Recovery model set to Simple, and have
> the following settings checked: Auto Update Statistics, Auto Shrink, and
> Auto Create Statistics.
> Is there another setting I'm missing? Or, should I just set up a job to
> back up the database and run the dbcc shrinkfile on it at set intervals?
> BC
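A sketch of Kevin's suggestion: shrink the log once to a size that covers the busiest periods, and switch AutoShrink off so it stops undercutting that size. The names and the 2 GB target are illustrative:

```sql
USE MyDatabase;
GO

-- One-off shrink of the log to a 2 GB target (size is in MB).
DBCC SHRINKFILE ('MyDatabase_Log', 2048);
GO

-- Stop AutoShrink from repeatedly shrinking it back down,
-- which would force autogrow to kick in during busy periods.
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;
GO
```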


dbcc shrinkfile - SQL Server 2000 - invalidate log?

Hi,

I need to shrink a database file and was wondering whether it is
required to run a full backup after the shrink operation.

In SQL Server 7.0 shrinkfile was a non-logged operation so would
invalidate your transaction logs. Is the same true for 2000?

Obviously as a matter of course I would backup before and after the
operation but going forward I may want to implement this on a regular
basis.

Cheers
Dee

deebeeay@.gmail.com wrote:
> Hi,
> I need to shrink a database file and was wondering whether it is
> required to run a full backup after the shrink operation.
> In SQL Server 7.0 shrinkfile was a non-logged operation so would
> invalidate your transaction logs. Is the same true for 2000?
> Obviously as a matter of course I would backup before and after the
> operation but going forward I may want to implement this on a regular
> basis.
> Cheers
> Dee

Shrinking log files does not invalidate the log. It is however a very
bad idea to shrink on a regular basis. If you are running full recovery
and doing regular transaction log backups then why would you want to
keep shrinking the log?

For more information see:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp

--
David Portas, SQL Server MVP

Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.

SQL Server Books Online:
http://msdn2.microsoft.com/library/...US,SQL.90).aspx
Hi David,

It's not the log I want to shrink but the database.

deebeeay@.gmail.com wrote:
> Hi David,
> Its not the log I want to shrink but the database.

Shrinking the database regularly is also unlikely to be a good idea in
most environments. Do you enable autogrow? Auto-growing a database is
potentially a very expensive operation. Much better to preset the
database size, turn auto-grow OFF, and then don't shrink it at all.

--
David Portas, SQL Server MVP
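David's advice (preset the file size and disable auto-grow) might look like the following; the file name and the 20 GB figure are placeholders:

```sql
-- Preset the data file large enough for expected growth, then
-- disable autogrow so the size stays fixed and predictable.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = 'MyDatabase_Data', SIZE = 20480MB, FILEGROWTH = 0);
```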
Yes, I agree: auto-growth isn't good, and by the same token a regular
shrink is probably not a good idea.

However, I'd still like to know if a DBCC SHRINKFILE would then require
a full database backup afterwards to ensure recoverability.

deebeeay@.gmail.com wrote:
> Yes I agree, auto-growth isn't good and by that same token a regular
> shrink is probably not a good idea.
> However I'd still like to know if a dbcc shrinkfile would then require
> a post full database backup to ensure recoverability.

Shrinking a data file does not invalidate the log. However, it will
cause an exceptional amount of logging. Shrinking may require at least
as much log as you have data in the file(s) being shrunk, so your next
log backup could be vastly inflated. Assuming your database is offline,
you may want to back up the log with the TRUNCATE_ONLY option
immediately after shrinking and then perform a database backup. That's
not essential, but it does mean you can return the log to its more
typical size. On the other hand, if you think you need to do this
regularly then you'll have to ensure sufficient log space to support it
- so shrinking the data file is usually a false economy because it
moves data around without reducing the storage requirement.

--
David Portas, SQL Server MVP
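The sequence David outlines could be sketched like this on SQL Server 2000/2005 (TRUNCATE_ONLY was removed in later versions); the names, sizes, and backup path are placeholders:

```sql
USE MyDatabase;
GO

-- Shrink the data file to a 10 GB target (in MB); this step is
-- heavily logged, so the log can balloon while it runs.
DBCC SHRINKFILE ('MyDatabase_Data', 10240);
GO

-- With no other activity, discard the accumulated log records,
-- then take a full backup as the new recovery baseline.
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak';
GO
```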
Many thanks for the reply, David.

If I do a DBCC SHRINKFILE using the TRUNCATEONLY option, would this
reduce the amount of logging required?

deebeeay@.gmail.com (deebeeay@.gmail.com) writes:
> Many thanks for the reply David.
> If I do a dbcc shrinkfile using the TRUNCATEONLY option - would this
> reduce the amount of logging required?

As I understand Books Online, this should not be an expensive operation
with regards to the transaction log. On the other hand, it may not have
much effect, since it removes only extents at the end of the data file.
If you mistakenly created a file 10 times too large, then I would
expect TRUNCATEONLY to be useful. But if you want to shrink the database
because you just deleted 5 years' worth of data, TRUNCATEONLY is not likely
to have any effect at all.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
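Erland's point can be illustrated with a minimal sketch (placeholder names): TRUNCATEONLY releases only the free space at the end of the file, without moving any pages.

```sql
USE MyDatabase;
GO

-- Give back unused space at the end of the file to the OS.
-- No pages are moved, so there is very little logging - but
-- free space in the middle of the file stays allocated.
DBCC SHRINKFILE ('MyDatabase_Data', TRUNCATEONLY);
GO
```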

dbcc shrinkfile - how long will it take

Hi,
We are shrinking a large database using DBCC SHRINKFILE and it has been
running for hours. Is there any way of determining how long the job still has
left to run (the equivalent of the Oracle dynamic view v$session_longops)?
Thanks,
Andy
Not that I am aware of. I avoid this by shrinking in increments of 100MB.
Kevin Hill
3NF Consulting
"Squirrel" <Squirrel@.discussions.microsoft.com> wrote in message
news:956637B6-5C96-4F71-9630-E13702C94D78@.microsoft.com...
> Hi,
> We are shrinking a large databse using dbcc shrinkfile and it has been
> running for hours. Is there any way of determining how long the job still
> has
> left to run (the equivalent of the Oracle dynamic view
> v$session_longops)?.
> Thanks,
> Andy
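Kevin's increments can be scripted as a loop that walks the target size down 100 MB at a time, so each DBCC call finishes quickly and progress is visible between steps. The file name and the start/stop sizes are illustrative:

```sql
DECLARE @target INT, @floor INT, @sql VARCHAR(200);
SELECT @target = 50000, @floor = 40000;   -- sizes in MB

WHILE @target > @floor
BEGIN
    SET @target = @target - 100;
    -- Build the command dynamically so the target can vary.
    SET @sql = 'DBCC SHRINKFILE (''MyDatabase_Data'', '
             + CONVERT(VARCHAR(12), @target) + ')';
    EXEC (@sql);
END;
```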
Hi,
In SQL Server we cannot say exactly how long the process is going to run. But
as Kevin mentioned, you could try shrinking the files in smaller steps (500 MB
or so). Ensure that you do a BACKUP LOG command before shrinking the
transaction log.
Thanks
Hari
SQL Server MVP
"Squirrel" <Squirrel@.discussions.microsoft.com> wrote in message
news:956637B6-5C96-4F71-9630-E13702C94D78@.microsoft.com...
> Hi,
> We are shrinking a large databse using dbcc shrinkfile and it has been
> running for hours. Is there any way of determining how long the job still
> has
> left to run (the equivalent of the Oracle dynamic view
> v$session_longops)?.
> Thanks,
> Andy
Not in SQL Server 2000. We've put progress reporting in for SQL Server 2005,
but even that is just a percentage complete with an elapsed-time based
extrapolation of the completion time. Basically, there are far too many
variables to consider to have a hope of being able to predict the run-time -
the worst ones being blocking, the starting state of the file/database, and
how much work shrink needs to do. For instance, if another process takes a
lock that shrink needs, shrink will wait forever for that lock.
Have you checked to make sure it is actually progressing and isn't blocked?
Regards
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Squirrel" <Squirrel@.discussions.microsoft.com> wrote in message
news:956637B6-5C96-4F71-9630-E13702C94D78@.microsoft.com...
> Hi,
> We are shrinking a large databse using dbcc shrinkfile and it has been
> running for hours. Is there any way of determining how long the job still
has
> left to run (the equivalent of the Oracle dynamic view
v$session_longops)?.
> Thanks,
> Andy
Not sure if this is related to the original post, but I have SQL 2000 SP4
installed and have tried running DBCC SHRINKDATABASE (MyDB). This results in a
lock that is not released. Ent. Mgr. shows "spid 58 (blocked by 58)" and the
wait type is "PAGEIOLATCH_SH".
I think this may be a bug in SP4? I haven't had this problem with a DB
shrink before.
Regards,
John
"Paul S Randal [MS]" wrote:

> Not in SQL Server 2000. We've put progress reporting in for SQL Server 2005,
> but even that is just a percentage complete with an elapsed-time based
> extrapolation of the completion time. Basically, there are far too many
> variables to consider to have a hope of being able to predict the run-time -
> the worst ones being blocking, the starting state of the file/database, and
> how much work shrink needs to do. For instance, if another process takes a
> lock that shrink needs, shrink will wait forever for that lock.
> Have you checked to make sure it is actually progressing and isn't blocked?
> Regards
> --
> Paul Randal
> Dev Lead, Microsoft SQL Server Storage Engine
> This posting is provided "AS IS" with no warranties, and confers no rights.
> "Squirrel" <Squirrel@.discussions.microsoft.com> wrote in message
> news:956637B6-5C96-4F71-9630-E13702C94D78@.microsoft.com...
> has
> v$session_longops)?.
>
>
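Paul's question (is the shrink actually blocked?) can be checked from a second connection on SQL Server 2000, for example:

```sql
-- Find the shrink session and anything blocking it. A nonzero
-- 'blocked' value names the blocking spid; a spid blocked by
-- itself (as in John's case) usually indicates an I/O wait.
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE cmd LIKE 'Dbcc%' OR blocked <> 0;
```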