Thursday, March 29, 2012
DBCC Shrinkfile
I ran a DBCC SHRINKFILE after deleting data in order to take the DB down in size, and it has been running for 2 days. Can I cancel the command? Will the file be partially shrunk? Last year when we did this I ran a re-index command. Will I have to run a backup or truncate log command? I have been trying to get the client to upgrade from SQL 7, but they just have not done it yet. Follow-up question: if I cancel, how long will it take to finish the cancel operation?
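For reference, the basic shape of the command being discussed - a sketch with placeholder database, file, and target-size values, not the poster's actual command:

```sql
-- Hypothetical example: shrink one data file down to 2048 MB.
-- 'MyDB' and 'MyDB_Data' are placeholder names.
USE MyDB;
DBCC SHRINKFILE (MyDB_Data, 2048);
```

Shrink operations move pages in small units, so cancelling a long-running shrink generally keeps the space already reclaimed rather than rolling the whole operation back.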
Tuesday, March 27, 2012
DBCC SHRINKDATABASE
We deleted a lot of temp tables in our DB that we did not need any longer. Afterwards, I ran DBCC SHRINKDATABASE and left the free space at 10%. The DB shrank from 8GB to 6.5GB. Sunday nights we run the Maintenance Plan wizard, which reorganizes the data and index pages and changes free space to 10%. When I came in on Monday the DB had grown back to 8GB. The DB in our Test environment is not set up to run the maintenance plan and stays at 6.5GB. Has anyone ever heard of this, or might know a solution to keep the DB at 6.5GB as on the Test server? Thank you.
Perhaps you could schedule the DBCC SHRINKDATABASE to run after your index
rebuild process? It's highly likely that the database expansion is required
to handle the index rebuilds.
Steve
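The suggested ordering might look like this as a scheduled job - a sketch with placeholder names, not the poster's actual maintenance plan:

```sql
-- Rebuild indexes first (this is what needs the extra space) ...
USE MyDB;
DBCC DBREINDEX ('dbo.MyTable', '', 90);  -- 90 = fill factor, i.e. 10% free space per page
-- ... then shrink afterwards, leaving 10% free space in the files.
DBCC SHRINKDATABASE (MyDB, 10);
```

Bear in mind that shrinking tends to re-fragment the indexes just rebuilt, so the shrink step is only worth keeping if the space genuinely needs to be released.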
Sunday, March 25, 2012
DBCC SHOWCONTIG incorrect?
Hi. I have a table that has 58 million rows and is about 24 GB. I ran a DBCC
DBREINDEX, which appeared to run fine - the new fill factor is showing, etc. However,
SHOWCONTIG still shows the indexes as being badly fragmented. I've run an
update stats with FULLSCAN and an update usage, which has corrected incorrect
space stats, but that's all. I run this process every month and normally
everything works as expected. Any ideas why it didn't this time, please?
Perhaps DBCC UPDATEUSAGE will fix the incorrect stats.
Keith Kratochvil
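The suggested command, as it might be run against the database in question (the 0 argument means "the current database"):

```sql
-- Correct page-count inaccuracies in the current database's system tables.
DBCC UPDATEUSAGE (0);
```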
Thanks. I've already done that and it didn't help.
Hi Shirley
Can you show us the details from DBCC SHOWCONTIG?
Also, how many indexes are there? Are they all fragmented, clustered and
nonclustered?
How much free space is in the database?
HTH
Kalen Delaney, SQL Server MVP
Hi Kalen, thanks for replying.
There is one clustered IX and 3 NC IX. These are the results
- Scan Density [Best Count:Actual Count]......: 98.18% [292775:298197] CL
- Scan Density [Best Count:Actual Count]......: 54.10% [22661:41886] NC
- Scan Density [Best Count:Actual Count]......: 52.88% [30754:58162] NC
- Scan Density [Best Count:Actual Count]......: 54.74% [27210:49709] NC
There is about 30 GB free space in the DB and it can autogrow.
Problem fixed. I ran the UPDATE STATISTICS again, but this time with the FULLSCAN option,
then re-ran the reorg, and all looks fine now. I don't know why this just
started to happen, but I will do the update stats before the reorg every time now.
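The fix described above, written out as commands - the table name is a placeholder:

```sql
-- Refresh statistics with a full scan, then redo the index rebuild.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
DBCC DBREINDEX ('dbo.BigTable', '', 90);  -- '' = all indexes on the table, 90 = fill factor
```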
Thursday, March 22, 2012
dbcc showcontig
When I run the command dbcc showcontig with all_indexes, I get a lot of indexes with high logical fragmentation,
so I ran dbcc dbreindex on all the indexes.
When I re-ran showcontig to verify each table individually, it reported 0% fragmentation per index.
For one last time, I ran a full report of the entire database, and the LogicalFrag figures quoted the original numbers of 100%.
Why would dbcc showcontig with all_indexes and dbcc showcontig (tblname) with all_indexes have different results for LogicalFrag?
100% fragmentation is only possible with tiny indexes (i.e. several pages).
Can you give some examples of the output you're comparing?
You don't need to fix fragmentation on all indexes - it depends on your
access patterns whether fragmentation will affect your performance. Read the
whitepaper below for full details:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp
Regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
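For clarity, the two invocations whose LogicalFrag results are being compared are:

```sql
-- Database-wide report, covering all indexes on every table:
DBCC SHOWCONTIG WITH ALL_INDEXES;
-- Single-table report, covering all indexes on that one table:
DBCC SHOWCONTIG ('ACCESSPROFILE') WITH ALL_INDEXES;
```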
***Here's the result w/ dbcc showcontig with all_indexes***
DBCC SHOWCONTIG scanning 'DEDUCTRULE' table...
Table: 'DEDUCTRULE' (158623608); index ID: 2, database ID: 7
LEAF level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 6797.0
- Avg. Page Density (full)................: 16.02%
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 100.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 5335.0
- Avg. Page Density (full)................: 34.09%
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 2, database ID: 7
LEAF level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 7207.0
- Avg. Page Density (full)................: 10.96%
****BUT WHEN I RUN dbcc showcontig (ACCESSPROFILE) with all_indexes****
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 5335.0
- Avg. Page Density (full)................: 34.09%
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 2, database ID: 7
LEAF level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 7207.0
- Avg. Page Density (full)................: 10.96%
****HOW COME THE STATS FOR LOGICALFRAG CHANGED?*****
Interesting - there must be a corner case in one of the algorithms for
single-page indexes, but that's not something I'd consider fixing in a service pack,
as you shouldn't even bother about fragmentation with an index under, say,
1000 pages - it's just not worth it.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Achieng O" <snow.barb@.verizon.net> wrote in message
news:#gC0eCO7DHA.1596@.TK2MSFTNGP10.phx.gbl...
> ***Here's the result w/ dbcc showcontig with all_indexes****
> DBCC SHOWCONTIG scanning 'DEDUCTRULE' table...
> Table: 'DEDUCTRULE' (158623608); index ID: 2, database ID: 7
> LEAF level scan performed.
> - Pages Scanned........................: 1
> - Extents Scanned.......................: 1
> - Extent Switches.......................: 0
> - Avg. Pages per Extent..................: 1.0
> - Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 0.00%
> - Avg. Bytes Free per Page................: 6797.0
> - Avg. Page Density (full)................: 16.02%
> DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
> Table: 'ACCESSPROFILE' (162099618); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 1
> - Extents Scanned.......................: 1
> - Extent Switches.......................: 0
> - Avg. Pages per Extent..................: 1.0
> - Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
> - Logical Scan Fragmentation ..............: 100.00%
> - Extent Scan Fragmentation ...............: 0.00%
> - Avg. Bytes Free per Page................: 5335.0
> - Avg. Page Density (full)................: 34.09%
> DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
> Table: 'ACCESSPROFILE' (162099618); index ID: 2, database ID: 7
> LEAF level scan performed.
> - Pages Scanned........................: 1
> - Extents Scanned.......................: 1
> - Extent Switches.......................: 0
> - Avg. Pages per Extent..................: 1.0
> - Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 0.00%
> - Avg. Bytes Free per Page................: 7207.0
> - Avg. Page Density (full)................: 10.96%
> ****BUT WHEN I RUN dbcc showcontig (ACCESSPROFILE) with all_indexes****
> DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
> Table: 'ACCESSPROFILE' (162099618); index ID: 1, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 1
> - Extents Scanned.......................: 1
> - Extent Switches.......................: 0
> - Avg. Pages per Extent..................: 1.0
> - Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 0.00%
> - Avg. Bytes Free per Page................: 5335.0
> - Avg. Page Density (full)................: 34.09%
> DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
> Table: 'ACCESSPROFILE' (162099618); index ID: 2, database ID: 7
> LEAF level scan performed.
> - Pages Scanned........................: 1
> - Extents Scanned.......................: 1
> - Extent Switches.......................: 0
> - Avg. Pages per Extent..................: 1.0
> - Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
> - Logical Scan Fragmentation ..............: 0.00%
> - Extent Scan Fragmentation ...............: 0.00%
> - Avg. Bytes Free per Page................: 7207.0
> - Avg. Page Density (full)................: 10.96%
> ****HOW COME THE STATS FOR LOGICALFRAG CHANGED?*****
> Paul S Randal [MS] wrote:
> > 100% fragmentation is only possible with tiny indexes (i.e. several
> > pages).
> > Can you give some examples of the output you're comparing?
> >
> > You don't need to fix fragmentation on all indexes - it depends on your
> > access patterns whether fragmentation will affect your performance. Read
> > the
> > whitepaper below for full details:
> >
> > http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp
> >
> > Regards.
> >
>
So I ran dbcc dbreindex on all the indexes.
When I re-ran showcontig to verify each table individually, it reported 0% frag per index.
For one last time, I ran a full report of the entire database, and the logicalfrag quoted the original numbers of 100%.
Why would dbcc showcontig with all_indexes and dbcc showcontig (tblname) with all_indexes have different results for logicalfrag?|||100% fragmentation is only possible with tiny indexes (i.e. several pages).
Can you give some examples of the output you're comparing?
You don't need to fix fragmentation on all indexes - it depends on your
access patterns whether fragmentation will affect your performance. Read the
whitepaper below for full details:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp
Regards.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Achieng O" <snow.barb@.verizon.net> wrote in message
news:AEA21E32-7345-4600-B116-434F1D3E398D@.microsoft.com...
> When I run command: dbcc showcontig with all_indexes, I got a lot of
> indexes w/ high logical fragmentation.
> So I ran dbcc dbreindex on all the indexes.
> When I re-ran showcontig to verify each table individually, it reported 0%
> frag per index.
> For one last time, I ran a full report of the entire database, the
> logicalfrag quoted the original numbers of 100%.
> Why would dbcc showcontig all_indexes and dbcc showcontig (tblname)
> all_indexes have different results for logicalfrag?
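For anyone reproducing this comparison, the two invocations being contrasted in the thread are sketched below (SQL 2000 syntax; ACCESSPROFILE is the table from the posted output):

```sql
-- Database-wide report: every table, all indexes in the current database
DBCC SHOWCONTIG WITH ALL_INDEXES

-- Single-table report: all indexes on one table
DBCC SHOWCONTIG ('ACCESSPROFILE') WITH ALL_INDEXES
```

As the reply above notes, any difference between the two appears to be a corner case that only shows up on single-page indexes.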
>|||***Here's the result w/ dbcc showcontig with all_indexes****
DBCC SHOWCONTIG scanning 'DEDUCTRULE' table...
Table: 'DEDUCTRULE' (158623608); index ID: 2, database ID: 7
LEAF level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 6797.0
- Avg. Page Density (full)................: 16.02%
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 100.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 5335.0
- Avg. Page Density (full)................: 34.09%
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 2, database ID: 7
LEAF level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 7207.0
- Avg. Page Density (full)................: 10.96%
****BUT WHEN I RUN dbcc showcontig (ACCESSPROFILE) with all_indexes****
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 5335.0
- Avg. Page Density (full)................: 34.09%
DBCC SHOWCONTIG scanning 'ACCESSPROFILE' table...
Table: 'ACCESSPROFILE' (162099618); index ID: 2, database ID: 7
LEAF level scan performed.
- Pages Scanned........................: 1
- Extents Scanned.......................: 1
- Extent Switches.......................: 0
- Avg. Pages per Extent..................: 1.0
- Scan Density [Best Count:Actual Count]......: 100.00% [1:1]
- Logical Scan Fragmentation ..............: 0.00%
- Extent Scan Fragmentation ...............: 0.00%
- Avg. Bytes Free per Page................: 7207.0
- Avg. Page Density (full)................: 10.96%
****HOW COME THE STATS FOR LOGICALFRAG CHANGED?*****
Paul S Randal [MS] wrote:
> 100% fragmentation is only possible with tiny indexes (i.e. several pages).
> Can you give some examples of the output you're comparing?
> You don't need to fix fragmentation on all indexes - it depends on your
> access patterns whether fragmentation will affect your performance. Read the
> whitepaper below for full details:
> http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp
> Regards.
>|||Interesting - there must be a corner case in one of the algorithms for
single-page indexes, but that's not something I'd consider fixing in an SP
as you shouldn't even bother about fragmentation with an index under say
1000 pages - it's just not worth it.
--
Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
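The rule of thumb above (don't bother with indexes under roughly 1000 pages) can be scripted on SQL 2000 using SHOWCONTIG's TABLERESULTS option. This is a sketch only: the temp-table column list follows the documented TABLERESULTS output and should be verified against your build, and the 1000-page/10% thresholds are just the values suggested in this thread:

```sql
-- Capture DBCC SHOWCONTIG output into a temp table (SQL 2000 era).
-- NOTE: column names, types, and order must match the TABLERESULTS
-- result set exactly; verify against Books Online for your build.
CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
    [Level] int, Pages int, [Rows] int,
    MinimumRecordSize int, MaximumRecordSize int, AverageRecordSize float,
    ForwardedRecords int, Extents int, ExtentSwitches int,
    AverageFreeBytes float, AveragePageDensity float, ScanDensity float,
    BestCount int, ActualCount int,
    LogicalFragmentation float, ExtentFragmentation float
)
INSERT #frag EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

-- Report only indexes large enough for fragmentation to matter.
SELECT ObjectName, IndexName, Pages, LogicalFragmentation
FROM #frag
WHERE Pages >= 1000              -- ignore tiny indexes, per the advice above
  AND LogicalFragmentation > 10.0
ORDER BY LogicalFragmentation DESC

DROP TABLE #frag
```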
DBCC Reindex factor
DBCC DBREINDEX ('mytable', '', 90)
A factor of 90 means the index is rebuilt 90% full... is that correct?
I feel that my table ends up with 90% free space after that dbcc statement I ran,
because after the reindex the db size is 20GB instead of 5GB. I have to shrink
it after the reindex. Any ideas?
Thanks|||SQL 2005, no SP applied yet.
After doing dbcc dbreindex with a 90% fill factor, all the dbs' data and log file
allocations are very big (70-80% of the actual data space). That means I
have to shrink the db (both data and log) after the reindex.
Thanks
"mecn" <mecn2002@.yahoo.com> wrote in message
news:Oz3LYFTzHHA.464@.TK2MSFTNGP02.phx.gbl...
> DBCC DBREINDEX ('mytable', '', 90)
> fACTOR IS 90% MEANS 90% full... is that correct?
> I feel that my table is 90% free space for that dbcc statement i ran.
> Because after reindex the db size is 20gb instead of 5 gb. I have to
> shrink it after reindex. any ideas?
> Thanks
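On the original question: a fill factor of 90 leaves the rebuilt leaf pages about 90% full (10% free), not 90% empty. A sketch of the corrected syntax, plus a way to read back the stored value using the SQL 2000/2005 system objects ('mytable' is the poster's placeholder name):

```sql
-- Fill factor 90 = leaf pages rebuilt ~90% full, leaving ~10% free for inserts.
-- '' as the index name means "all indexes on the table".
DBCC DBREINDEX ('mytable', '', 90)

-- Read back the fill factor recorded for each index:
SELECT name AS index_name,
       INDEXPROPERTY(OBJECT_ID('mytable'), name, 'IndexFillFactor') AS fill_factor
FROM sysindexes
WHERE id = OBJECT_ID('mytable')
  AND indid BETWEEN 1 AND 254   -- skip the heap (0) and text/image (255) rows
  AND name IS NOT NULL
```

Note that sysindexes also lists auto-created statistics, so the result may need further filtering; this is a sketch, not a production query.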
>|||mecn,
How much of it is data space and how much is log space? If the log became
very large, that is easy to shrink, but shrinking the data is usually not a
wise thing to do. www.karaszi.com/SQLServer/info_dont_shrink.asp
If you are actually working that close to full on your disks, then it is
time to consider buying more space.
RLF
"mecn" <mecn2002@.yahoo.com> wrote in message
news:uL61FJTzHHA.4476@.TK2MSFTNGP06.phx.gbl...
> Sql2005 no sp applied yet.
> after doing dbcc dbreindex with 90% factor. All dbs data and log files
> allocation space is very big (70-80% of the actual data space). That means I
> have to do shrink db(both data and log) after reindex'
> Thanks
> "mecn" <mecn2002@.yahoo.com> wrote in message
> news:Oz3LYFTzHHA.464@.TK2MSFTNGP02.phx.gbl...
>> DBCC DBREINDEX ('mytable', '', 90)
>> fACTOR IS 90% MEANS 90% full... is that correct?
>> I feel that my table is 90% free space for that dbcc statement i ran.
>> Because after reindex the db size is 20gb instead of 5 gb. I have to
>> shrink it after reindex. any ideas?
>> Thanks
>>
>|||The problem is that the data size is 5GB and that db doesn't have that many
transactions.
I do a weekly reindex. After the reindex finishes, the log size (allocation) is
around 21 GB, and I have to shrink the log file.
"Russell Fields" <russellfields@.nomail.com> wrote in message
news:eeRBGaTzHHA.5408@.TK2MSFTNGP02.phx.gbl...
> mecn,
> How much of it is data space and how much is log space? If the log became
> very large, that is easy to shrink, but shrinking the data is usually not
> a wise thing to do. www.karaszi.com/SQLServer/info_dont_shrink.asp
> If you are actually working that close to full on your disks, then it is
> time to consider buying more space.
> RLF
> "mecn" <mecn2002@.yahoo.com> wrote in message
> news:uL61FJTzHHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Sql2005 no sp applied yet.
>> after doing dbcc dbreindex with 90% factor. All dbs data and log files
>> allocation space is very big (70-80% of the actual data space). That means I
>> have to do shrink db(both data and log) after reindex'
>> Thanks
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:Oz3LYFTzHHA.464@.TK2MSFTNGP02.phx.gbl...
>> DBCC DBREINDEX mytable , 90)
>> fACTOR IS 90% MEANS 90% full... is that correct?
>> I feel that my table is 90% free space for that dbcc statement i ran.
>> Because after reindex the db size is 20gb instead of 5 gb. I have to
>> shrink it after reindex. any ideas?
>> Thanks
>>
>>
What recovery mode are you running? If you are in FULL and don't do any log
backups during the reindex process, your log must be able to hold all the log
entries generated by the reindex. If you are using DBCC INDEXDEFRAG this can
result in a lot of log entries. Try setting the recovery mode to SIMPLE just
before the reindex and then back to FULL when done. Just remember to do a
FULL backup afterwards to restart the log backup chain.
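A sketch of that sequence (database, table, and backup path are placeholders; the DBCC DBREINDEX call rebuilds with a 90% fill factor, matching the thread):

```sql
-- Switch to SIMPLE so the rebuild generates far less retained log
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Rebuild the table's indexes, leaving pages 90% full
DBCC DBREINDEX ('MyTable', '', 90);

-- Back to FULL, then a full backup to restart the log backup chain
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak';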
--
Andrew J. Kelly SQL MVP
"mecn" <mecn2002@.yahoo.com> wrote in message
news:%23%23SZtgTzHHA.748@.TK2MSFTNGP04.phx.gbl...
> The problem is that data size is 5gb, that db doesn't have that much
> transactions.
> I do weekly reindex. after I finish db reindex the log size (allocation )
> is like 21 GB. I have to do shrink log file.
>
> "Russell Fields" <russellfields@.nomail.com> wrote in message
> news:eeRBGaTzHHA.5408@.TK2MSFTNGP02.phx.gbl...
>> mecn,
>> How much of it is data space and how much is log space? If the log
>> became very large, that is easy to shrink, but shrinking the data is
>> usually not a wise thing to do.
>> www.karaszi.com/SQLServer/info_dont_shrink.asp
>> If you are actually working that close to full on your disks, then it is
>> time to considering buying more space.
>> RLF
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:uL61FJTzHHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Sql2005 no sp applied yet.
>> after doing dbcc dbreindex with 90% factor. All dbs data and log files
>> allocation space is very big (70-80% of acture data space). That means I
>> have to do shrink db(both data and log) after reindex'
>> Thanks
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:Oz3LYFTzHHA.464@.TK2MSFTNGP02.phx.gbl...
>> DBCC DBREINDEX mytable , 90)
>> fACTOR IS 90% MEANS 90% full... is that correct?
>> I feel that my table is 90% free space for that dbcc statement i ran.
>> Because after reindex the db size is 20gb instead of 5 gb. I have to
>> shrink it after reindex. any ideas?
>> Thanks
>>
>>
>>
|||Full.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%238xyvfXzHHA.5980@.TK2MSFTNGP04.phx.gbl...
> What recovery mode are you running? If you are in FULL and don't do any
> log backups during the reindex process your log must be able to hold all
> the log entries from the reindex process. If you are using DBCC
> INDEXDEFRAG this can result in a lot of log entries. Try setting the
> recovery mode to simple just before the reindex and then back to full when
> done. Just remember to do a FULL backup to reinitialize the log chain.
> --
> Andrew J. Kelly SQL MVP
> "mecn" <mecn2002@.yahoo.com> wrote in message
> news:%23%23SZtgTzHHA.748@.TK2MSFTNGP04.phx.gbl...
>> The problem is that data size is 5gb, that db doesn't have that much
>> transactions.
>> I do weekly reindex. after I finish db reindex the log size (allocation )
>> is like 21 GB. I have to do shrink log file.
>>
>> "Russell Fields" <russellfields@.nomail.com> wrote in message
>> news:eeRBGaTzHHA.5408@.TK2MSFTNGP02.phx.gbl...
>> mecn,
>> How much of it is data space and how much is log space? If the log
>> became very large, that is easy to shrink, but shrinking the data is
>> usually not a wise thing to do.
>> www.karaszi.com/SQLServer/info_dont_shrink.asp
>> If you are actually working that close to full on your disks, then it is
>> time to considering buying more space.
>> RLF
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:uL61FJTzHHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Sql2005 no sp applied yet.
>> after doing dbcc dbreindex with 90% factor. All dbs data and log files
>> allocation space is very big (70-80% of acture data space). That means
>> I have to do shrink db(both data and log) after reindex'
>> Thanks
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:Oz3LYFTzHHA.464@.TK2MSFTNGP02.phx.gbl...
>> DBCC DBREINDEX mytable , 90)
>> fACTOR IS 90% MEANS 90% full... is that correct?
>> I feel that my table is 90% free space for that dbcc statement i ran.
>> Because after reindex the db size is 20gb instead of 5 gb. I have to
>> shrink it after reindex. any ideas?
>> Thanks
>>
>>
>>
>>
Well, that is the answer then. If you are in FULL recovery mode, every part of
the reindex will be fully logged, and the log will grow when it runs out of
room.
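If switching recovery models is not an option, one alternative sketch is to take log backups while the reindex runs, so committed portions of the log can be reused instead of forcing growth (backup path is a placeholder):

```sql
-- Run from a second connection or a scheduled job during the reindex;
-- each log backup frees committed log space for reuse
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_Log.trn';
```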
--
Andrew J. Kelly SQL MVP
"mecn" <mecn2002@.yahoo.com> wrote in message
news:%232kuyUgzHHA.3448@.TK2MSFTNGP03.phx.gbl...
> Full
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%238xyvfXzHHA.5980@.TK2MSFTNGP04.phx.gbl...
>> What recovery mode are you running? If you are in FULL and don't do any
>> log backups during the reindex process your log must be able to hold all
>> the log entries from the reindex process. If you are using DBCC
>> INDEXDEFRAG this can result in a lot of log entries. Try setting the
>> recovery mode to simple just before the reindex and then back to full
>> when done. Just remember to do a FULL backup to reinitialize the log
>> chain.
>> --
>> Andrew J. Kelly SQL MVP
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:%23%23SZtgTzHHA.748@.TK2MSFTNGP04.phx.gbl...
>> The problem is that data size is 5gb, that db doesn't have that much
>> transactions.
>> I do weekly reindex. after I finish db reindex the log size
>> (allocation ) is like 21 GB. I have to do shrink log file.
>>
>> "Russell Fields" <russellfields@.nomail.com> wrote in message
>> news:eeRBGaTzHHA.5408@.TK2MSFTNGP02.phx.gbl...
>> mecn,
>> How much of it is data space and how much is log space? If the log
>> became very large, that is easy to shrink, but shrinking the data is
>> usually not a wise thing to do.
>> www.karaszi.com/SQLServer/info_dont_shrink.asp
>> If you are actually working that close to full on your disks, then it
>> is time to considering buying more space.
>> RLF
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:uL61FJTzHHA.4476@.TK2MSFTNGP06.phx.gbl...
>> Sql2005 no sp applied yet.
>> after doing dbcc dbreindex with 90% factor. All dbs data and log files
>> allocation space is very big (70-80% of acture data space). That means
>> I have to do shrink db(both data and log) after reindex'
>> Thanks
>> "mecn" <mecn2002@.yahoo.com> wrote in message
>> news:Oz3LYFTzHHA.464@.TK2MSFTNGP02.phx.gbl...
>> DBCC DBREINDEX mytable , 90)
>> fACTOR IS 90% MEANS 90% full... is that correct?
>> I feel that my table is 90% free space for that dbcc statement i ran.
>> Because after reindex the db size is 20gb instead of 5 gb. I have to
>> shrink it after reindex. any ideas?
>> Thanks
>>
>>
>>
>>
>>
>
Sunday, March 11, 2012
DBCC INDEXDEFRAG
I index my database nightly since I have many changes to
clustered and non-clustered indexes daily. I ran a
script that tells me, for every table, the number of rows,
index space, data space, and unused space. Next I
applied DBCC INDEXDEFRAG to every table and each
index, which ran for approximately 40 minutes. Then I ran
the script again to get, for every table, the number of
rows, index space, data space, and unused space. There
was no change in the values.
Please help me with this problem.
Don
My guess is that the script you are mentioning is using the sysindexes
system table. This table is updated asynchronously. Try updating the data
for a specific table using the sp_spaceused
(http://msdn.microsoft.com/library/de...-us/tsqlref/ts
_sp_sa-sz_0kro.asp) system sp.
Next, what do you expect to change? The number of rows should not change during
index defragmentation! DBCC INDEXDEFRAG is used primarily to remove logical
fragmentation (logical order <> physical order). Sometimes it might compact
indexes as well, if they are physically fragmented; if not, you won't see
any changes in index space
(http://msdn.microsoft.com/library/de...-us/tsqlref/ts
_dbcc_30o9.asp).
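A sketch of forcing the space figures current before comparing them (table name is a placeholder; @updateusage corrects the sysindexes counters before reporting):

```sql
-- Report rows, reserved, data, index, and unused space for one table,
-- refreshing the counters first so the figures are not stale
EXEC sp_spaceused 'MyTable', @updateusage = 'TRUE';
```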
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>
|||In addition to what Dejan said, INDEXDEFRAG does not recompute the
statistics. You might want to consider running sp_updatestats after
INDEXDEFRAG.
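A sketch of that pairing (database, table, and index names are placeholders):

```sql
-- Defragment one index, then refresh statistics database-wide,
-- since INDEXDEFRAG itself does not recompute them
DBCC INDEXDEFRAG (MyDatabase, 'MyTable', 'IX_MyTable_Col');
EXEC sp_updatestats;
```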
Andrew J. Kelly
SQL Server MVP
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>
Thursday, March 8, 2012
DBCC INDEXDEFRAG
I index my database nightly since I have many changes to
clustered and non-clustered indexes daily. I ran a
script that tells me, for every table, the number of rows,
index space, data space, and unused space. Next I
applied DBCC INDEXDEFRAG to every table and each
index, which ran for approximately 40 minutes. Then I ran
the script again to get, for every table, the number of
rows, index space, data space, and unused space. There
was no change in the values.
Please help me with this problem.
Don
My guess is that the script you are mentioning is using the sysindexes
system table. This table is updated asynchronously. Try to update the data
for a specific table using sp_spaceused
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts
_sp_sa-sz_0kro.asp) system sp.
Next, what do you expect to change? N of rows should not change during index
defragmentation! DBCC INDEXDEFRAG is used primarily to defragment logical
fragmentation (logical <> physical order). Sometimes it might compact
indexes as well, if they are physically fragmented; if not, you won't see
any changes in index space
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts
_dbcc_30o9.asp).
--
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>|||In addition to what Dejan said, INDEXDEFRAG does not recompute the
statistics. You might want to consider running sp_updatestats after
INDEXDEFRAG.
Andrew J. Kelly
SQL Server MVP
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>
DBCC INDEXDEFRAG
I index my database nightly since I have many changes to
cluster indexes and non-cluster indexes daily. I ran a
script the would tells me every table the number of rows,
index space, data space, and unused space. Next I
applied the DBCC INDEXDEFRAG to every table and each
index which ran for approximately 40 minutes. Then I ran
the script again to tell me every table the number of
rows, index space, data space, and unused space. There
was no change in values.
Please help me with this problem?
DonMy guess is that the script you are mentioning is using the sysindexes
system table. This table is updated asynchronously. Try to update the data
for a specific table using sp_spaceused
(http://msdn.microsoft.com/library/d...n-us/tsqlref/ts
_sp_sa-sz_0kro.asp) system sp.
Next, what do you expect to change? N of rows should not change during index
defragmentation! DBCC INDEXDEFRAG is used primarily to defragment logical
fragmentation (logical <> physical order). Sometimes it might compact
indexes as well, if they are physically fragmented; if not, you won't see
any changes in index space
(http://msdn.microsoft.com/library/d...n-us/tsqlref/ts
_dbcc_30o9.asp).
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx
.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>|||In addition to what Dejan said, INDEXDEFRAG does not recompute the
statistics. You might want to consider running sp_updatestats after
INDEXDEFRAG.
Andrew J. Kelly
SQL Server MVP
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx
.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>
cluster indexes and non-cluster indexes daily. I ran a
script the would tells me every table the number of rows,
index space, data space, and unused space. Next I
applied the DBCC INDEXDEFRAG to every table and each
index which ran for approximately 40 minutes. Then I ran
the script again to tell me every table the number of
rows, index space, data space, and unused space. There
was no change in values.
Please help me with this problem?
DonMy guess is that the script you are mentioning is using the sysindexes
system table. This table is updated asynchronously. Try to update the data
for a specific table using sp_spaceused
(http://msdn.microsoft.com/library/d...n-us/tsqlref/ts
_sp_sa-sz_0kro.asp) system sp.
Next, what do you expect to change? N of rows should not change during index
defragmentation! DBCC INDEXDEFRAG is used primarily to defragment logical
fragmentation (logical <> physical order). Sometimes it might compact
indexes as well, if they are physically fragmented; if not, you won't see
any changes in index space
(http://msdn.microsoft.com/library/d...n-us/tsqlref/ts
_dbcc_30o9.asp).
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx
.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>|||In addition to what Dejan said, INDEXDEFRAG does not recompute the
statistics. You might want to consider running sp_updatestats after
INDEXDEFRAG.
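A sketch of the nightly sequence this implies ('Orders' is again a placeholder table):

```sql
DBCC INDEXDEFRAG (0, 'Orders', 1)  -- defragments, but does not recompute statistics
EXEC sp_updatestats                -- so refresh statistics across the database afterwards
```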
Andrew J. Kelly
SQL Server MVP
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:1573e01c446c9$71f94970$a401280a@.phx
.gbl...
> I index my database nightly since I have many changes to
> cluster indexes and non-cluster indexes daily. I ran a
> script the would tells me every table the number of rows,
> index space, data space, and unused space. Next I
> applied the DBCC INDEXDEFRAG to every table and each
> index which ran for approximately 40 minutes. Then I ran
> the script again to tell me every table the number of
> rows, index space, data space, and unused space. There
> was no change in values.
> Please help me with this problem?
> Don
>
dbcc errorlog fails
I have run the sp_cycle_errorlog and DBCC ERRORLOG commands on a SQL 2000
Datacenter cluster. The logs for .2 through .6 change. However, the
original errorlog does not; I do not get a new error log started. I do not
want to stop and start the SQL Server, as this would shut down production. Has
anyone seen this, and how should I go about fixing it?
Thanks for your support.
Rick Schantz
How are you determining that the log hasn't changed? If you
are viewing the logs in Enterprise Manager, did you refresh
the view? Are you viewing the files in Explorer and
expecting a new file? The logs are cycled rather than new
files being created.
What happens if you execute sp_cycle_errorlog a few times?
If you are sure data from errorlog isn't moving to
errorlog.1, you could check for locking on the log files by
some other process.
-Sue
On Mon, 4 Oct 2004 12:13:03 -0700, Rick Schantz
<RickSchantz@.discussions.microsoft.com> wrote:
>I have ran the sp_cycle_errorlog and dbcc errorlog command on a SQL 200
>Datacenter cluster. The logs for .2 through .6 change. However, the
>origional errorlog does not. I do not get a new error log started. I do not
>want to stop and start the sql server as this would shutdown production. Has
>anyone seen this and how should I go about fixing this?
>Thanks for you support.
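One way to verify that the cycle actually happened from Query Analyzer, rather than watching the file list (xp_readerrorlog is undocumented but present in SQL Server 2000):

```sql
EXEC sp_cycle_errorlog             -- close the current errorlog and start a new one
EXEC master..xp_readerrorlog       -- no argument = the current errorlog; its first
                                   -- lines should show the new log's start time
EXEC master..xp_readerrorlog 1     -- the previous log, now errorlog.1
```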
|||I tested the command on another server. The log size for errorlog. went to
errorlog.1 and so on, while a new errorlog. was created. Errorlog.6 was
deleted. I executed sp_cycle_errorlog a few times and .2 and .3 are missing; .4
and .5 still exist; .6 never existed. All of this was viewed in Enterprise
Manager, and I was refreshing it several times. I did not look for locking on
the errorlog. file. What process would be locking the errorlog file? Our
goal is to cycle the logs on a weekly basis and archive the oldest one. Any
suggestions? Thanks for your fast response.
"Sue Hoegemeier" wrote:
> How are you determining that the log hasn't changed? If you
> are viewing the logs in Enterprise Manager, did you refresh
> the view? Are you viewing the files in Explorer and
> expecting a new file? The logs are cycled rather than new
> files being created.
> What happens if you execute sp_cycle_errorlog a few times?
> If you are sure data from errorlog isn't moving to
> errorlog.1, you could check for locking on the log files by
> some other process.
> -Sue
> On Mon, 4 Oct 2004 12:13:03 -0700, Rick Schantz
> <RickSchantz@.discussions.microsoft.com> wrote:
>
>
|||Backup or antivirus software would be a couple of programs that could
lock a file. You can check for this using filemon from sysinternals:
http://www.sysinternals.com/
Once you figure out whatever the problem is with cycling the logs, it
should be pretty easy to set up a job to archive. You can just save .6
file off somewhere using an ActiveX script and FSO or DOS commands and
a CmdExec step and then run sp_cycle_errorlog.
-Sue
On Tue, 5 Oct 2004 07:03:06 -0700, Rick Schantz
<RickSchantz@.discussions.microsoft.com> wrote:
>I tested the command on another server. The log size for errorlog. went to
>errorlog.1 and so on while a new errorlog. was created. Errorlog.6 was
>deleted. I executed sp_cycle_error a few times and .2 and .3 are missing. .4
>and .5 still exist. .6 never existed. All of this was viewed by Enterpise
>Manager and I was refreshing it several times. I did not look for locking on
>the errorlog. file. What process would be locking the errorlog file? Our
>goal is to cycle the logs on a weekly basis and archive the oldest one. Any
>suggestions? Thanks for your fast response.
>"Sue Hoegemeier" wrote:
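Sue's archive-then-cycle job could also be sketched as a single T-SQL job step (the paths are assumptions, and xp_cmdshell requires appropriate rights):

```sql
-- Copy the oldest archived log off before the next cycle discards it, then cycle.
EXEC master..xp_cmdshell 'copy "C:\MSSQL\LOG\ERRORLOG.6" "D:\Archive\ERRORLOG.6"'
EXEC sp_cycle_errorlog
```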
|||I renamed the file errorlog.6 and then ran the EXEC sp_cycle_errorlog
command. The errorlog.5 went to errorlog.6 and errorlog. went to errorlog.1.
A new errorlog. was created. Not sure what was going on, but I'll monitor it
from here. I also used the filemon tool. Great tool.
Thanks
"Sue Hoegemeier" wrote:
> Backup or antivirus software would be a couple of programs that could
> lock a file. You can check for this using filemon from sysinternals:
> http://www.sysinternals.com/
> Once you figure out whatever the problem is with cycling the logs, it
> should be pretty easy to set up a job to archive. You can just save .6
> file off somewhere using an ActiveX script and FSO or DOS commands and
> a CmdExec step and then run sp_cycle_errorlog.
> -Sue
> On Tue, 5 Oct 2004 07:03:06 -0700, Rick Schantz
> <RickSchantz@.discussions.microsoft.com> wrote:
>
>
dbcc emptyfile
Hi,
(SQL 2K)
We used to have a database with one 100 GB data file.
I've now created five 25 GB files and ran DBCC SHRINKFILE with the EMPTYFILE
option to move all the data out of the primary file to the new ones.
Enterprise Mgr reports the primary file is 99% empty - excellent.
Problem is, it won't shrink from being 100 GB in size - no amount of DBCC
SHRINKFILE, backing up logs, full backups etc. will shrink it.
Can anyone suggest what I might be missing?
Thanks
Nick
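For reference, the empty-then-shrink sequence described above looks like this in SQL 2000 (the file name is taken from later in this thread):

```sql
-- Move all data out of the primary file into the other files in the filegroup...
DBCC SHRINKFILE (N'Merlin_Data', EMPTYFILE)
-- ...then release the now-unused space back to the OS (target size in MB).
DBCC SHRINKFILE (N'Merlin_Data', 200)
```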
Hi
Can you post your DBCC SHRINKFILE command and the output from sp_spaceused,
sp_helpfile.
John
"Nick" wrote:
> Hi,
> (SQL 2K)
> We used to have a database with 1 100Gb data file
> I've now created 5 25Gb files and ran the DBCC EmptyFile command to move all
> the data out of the primary file to the new ones.
> Enterprise Mgr reports the primary file is 99% empty - excellent.
> Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
> ShrinkFile, backing up logs, full backups etc will shrink it.
> Can anyone suggest what I might be missing?
> Thanks
> Nick
>
>
|||John,
sp_spaceused:
MyDB 290343.88 MB 69625.83 MB
Reserved Data Index Unused
155007152 KB 92248808 KB 5123888 KB 57634456 KB
sp_helpfile:
name fileid filename filegroup size maxsize growth usage
Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB
Unlimited 153600 KB data only
Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB
Unlimited 10% log only
Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited
10% log only
MerlinDB 5 e:\data1\MerlinDB\MerlinDB.ndf PRIMARY 30720000 KB Unlimited 10%
data only
MerlinDB_2 6 e:\data1\merlindb\MerlinDB_2.ndf PRIMARY 30720000 KB Unlimited
10% data only
MerlinDB_3 7 e:\data1\merlindb\MerlinDB_3.ndf PRIMARY 33792000 KB Unlimited
10% data only
MerlinDB_4 8 e:\data1\merlindb\MerlinDB_4.ndf PRIMARY 33792000 KB Unlimited
10% data only
Merlin_Log3 9 e:\logs1\merlindb\Merlin_Log3.ldf NULL 22561024 KB Unlimited
10% log only
Merlin_Log4 10 e:\logs1\merlindb\Merlin_Log4.ldf NULL 11293632 KB Unlimited
10% log only
I'm trying to shrink Merlin_Data - the primary file - by running:
DBCC SHRINKFILE (N'Merlin_Data' , 200)
and I've also tried:
DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
Any pointers really appreciated!
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:0790504B-90AF-4CC8-8E9B-0FB395B51927@.microsoft.com...
> Hi
> Can you post your DBCC SHRINKFILE command and the output from
> sp_spaceused,
> sp_helpfile.
> John
> "Nick" wrote:
|||Hi
The file Merlin_Data is 97280000 KB and only uses 153600 KB (150 MB) so your
shrinkfile should have worked!
Try
DBCC SHRINKFILE ( 1, 200 )
or
DBCC SHRINKFILE ( 1 )
John
"Nick" wrote:
> John,
> sp_spaceused:
> MyDB 290343.88 MB 69625.83 MB
> Reserved Data Index Unused
> 155007152 KB 92248808 KB 5123888 KB 57634456 KB
> sp_helpfile:
> name fileid filename filegroup size maxsize growth usage
> Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB
> Unlimited 153600 KB data only
> Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB
> Unlimited 10% log only
> Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited
> 10% log only
> MerlinDB 5 e:\data1\MerlinDB\MerlinDB.ndf PRIMARY 30720000 KB Unlimited 10%
> data only
> MerlinDB_2 6 e:\data1\merlindb\MerlinDB_2.ndf PRIMARY 30720000 KB Unlimited
> 10% data only
> MerlinDB_3 7 e:\data1\merlindb\MerlinDB_3.ndf PRIMARY 33792000 KB Unlimited
> 10% data only
> MerlinDB_4 8 e:\data1\merlindb\MerlinDB_4.ndf PRIMARY 33792000 KB Unlimited
> 10% data only
> Merlin_Log3 9 e:\logs1\merlindb\Merlin_Log3.ldf NULL 22561024 KB Unlimited
> 10% log only
> Merlin_Log4 10 e:\logs1\merlindb\Merlin_Log4.ldf NULL 11293632 KB Unlimited
> 10% log only
> I'm trying to shrink Merlin_Data - the primary file - by running:
> DBCC SHRINKFILE (N'Merlin_Data' , 200)
> and I've also tried:
> DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
> Any pointers really appreciated!
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:0790504B-90AF-4CC8-8E9B-0FB395B51927@.microsoft.com...
>
>
|||That's what I thought.
Even in Enterprise Manager, it says "Min file size 150mb", so I try shrinking
it to 200Mb and it just doesn't budge. Not an inch.
Geeze, it's frustrating, as I need this file as small as possible since I've
moved all the data out of it.
Is there any way of seeing if SQL is even attempting to shrink the file? The
DBCC command returns very quickly (< 1sec) so I guess it's an off-line
process.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:F2199DEB-6438-467B-B909-DBB935583740@.microsoft.com...
> Hi
> The file Merlin_Data is 97280000 KB and only uses 153600 KB (150 MB) so
> your
> shrinkfile should have worked!
> Try
> DBCC SHRINKFILE ( 1, 200 )
> or
> DBCC SHRINKFILE ( 1 )
> John
> "Nick" wrote:
|||Hi Nick
If the command were working, it would take some time. You may want to try a
larger size, say 94000. What size is model?
John
"Nick" wrote:
> Thats what I thought.
> Even in enterprise manager, it says "Min file size 150mb" so I try shrinking
> it to 200Mb and it just doesn't budge. Not an inch.
> Geeze, it's frustrating as I need this file as small as possible as I've
> moved all the data out of it.
> Is there any way of seeing if SQL is even attempting to shrink the file? The
> DBCC command returns very quickly (< 1sec) so I guess it's an off-line
> process.
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:F2199DEB-6438-467B-B909-DBB935583740@.microsoft.com...
>
>
|||Model - what a good idea! SQL can't shrink a database smaller than that can
it?
With much excitement, I check model's size... 1.5Mb!
Damm!!!
DBCC SHRINKFILE (N'Merlin_Data' , 94000) also comes back instantly, less
than 1 sec.
Logged in as sa.
File permissions maybe?
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:FE88B764-03DC-4AD0-A6D7-5264410E1F91@.microsoft.com...
> Hi Nick
> If the command was working it should take some time, you may want to try a
> larger size say 94000. What size is model?
> John
> "Nick" wrote:
|||Hi Nick
I presume that you did a sp_helpfile again after the DBCC SHRINKFILE?
I doubt file permissions are the problem.
Have you tried DBCC SHRINKDATABASE?
John
"Nick" wrote:
> Model - what a good idea! SQL can't shrink a database smaller than that can
> it?
> With much excitement, I check model's size... 1.5Mb!
> Damm!!!
> DBCC SHRINKFILE (N'Merlin_Data' , 94000) also comes back instantly, less
> than 1 sec.
> Logged in as sa.
> File permissions maybe?
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:FE88B764-03DC-4AD0-A6D7-5264410E1F91@.microsoft.com...
>
>
|||Well, it took a while, but I've sussed it.
It turns out that my log files were on a Windows mount point, which should
be supported on SQL2K & Windows Server 2003 Enterprise non-clustered.
I thought maybe SQL Server needed some more log space for the data file
shrink, but, whenever I tried expanding the logs SQL told me "File not
found". Very weird. So, I detached the db then moved the logs to a normal
drive and re-attached it. I could then increase the log sizes and the data
file shrank straight away.
Maybe SQL Server was unable to claim log-space for the shrink but didn't
report the error correctly as it was getting confused with the mount-points?
I'm guessing, but it works at last and many thanks for your assistance.
Nick
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:285E26C2-FF94-4539-B890-088B681AC999@.microsoft.com...[vbcol=seagreen]
> Hi Nick
> I presume that you did a sp_helpfile again after the DBCC SHRINKFILE?
> I doubt it is file permissions are a problem.
> Have you tried DBCC SHRINKDATABASE?
> John
> "Nick" wrote:
|||Hi Nick
I hae certainly had problems with mount points in the past so gave up using
them, they didn't allow database files larger than the host disc size to be
created. I don't know if that is still a problem though!!
John
"Nick" wrote:
> Well, it took a while but I've sussed it.
> It turns out, that my log files were on a Windows mount-point which should
> be supported on SQL2K & Windows Server 2003 Enterprise non-clustered.
> I thought maybe SQL Server needed some more log space for the data file
> shrink, but, whenever I tried expanding the logs SQL told me "File not
> found". Very weird. So, I detached the db then moved the logs to a normal
> drive and re-attached it. I could then increase the log sizes and the data
> file shrank straight away.
> Maybe SQL Server was unable to claim log-space for the shrink but didn't
> report the error correctly as it was getting confused with the mount-points?
> I'm guessing, but it works at last and many thanks for your assistance.
> Nick
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:285E26C2-FF94-4539-B890-088B681AC999@.microsoft.com...
>
>
(SQL 2K)
We used to have a database with one 100 GB data file.
I've now created five 25 GB files and ran the DBCC EmptyFile command to move all
the data out of the primary file into the new ones.
Enterprise Mgr reports the primary file is 99% empty - excellent.
Problem is, it won't shrink from 100 GB in size - no amount of DBCC
ShrinkFile, backing up logs, full backups etc. will shrink it.
Can anyone suggest what I might be missing?
Thanks
Nick
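[Editor's note: the empty-then-shrink procedure described above can be sketched in T-SQL. This is a general outline of the approach, not Nick's exact script; the file name is taken from the sp_helpfile output he posts later in the thread.]

```sql
-- Move all data out of the primary data file into the other files
-- in the same filegroup (EMPTYFILE also marks the file so that no
-- new data is placed in it).
DBCC SHRINKFILE (N'Merlin_Data', EMPTYFILE);

-- Then shrink the emptied file down to a target size in MB.
-- The primary file cannot be dropped entirely, only shrunk.
DBCC SHRINKFILE (N'Merlin_Data', 200);
```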
Hi
Can you post your DBCC SHRINKFILE command and the output from sp_spaceused,
sp_helpfile.
John
"Nick" wrote:
> Hi,
> (SQL 2K)
> We used to have a database with 1 100Gb data file
> I've now created 5 25Gb files and ran the DBCC EmptyFile command to move all
> the data out of the primary file to the new ones.
> Enterprise Mgr reports the primary file is 99% empty - excellent.
> Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
> ShrinkFile, backing up logs, full backups etc will shrink it.
> Can anyone suggest what I might be missing?
> Thanks
> Nick
>
>
|||John,
sp_spaceused:
database_name   database_size   unallocated space
MyDB            290343.88 MB    69625.83 MB
reserved        data            index_size    unused
155007152 KB    92248808 KB     5123888 KB    57634456 KB
sp_helpfile:
name          fileid  filename                                    filegroup  size         maxsize    growth     usage
Merlin_Data   1       E:\Data1\MerlinDB\Merlin_Data.MDF           PRIMARY    97280000 KB  Unlimited  153600 KB  data only
Merlin_Log    2       E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF   NULL       14592448 KB  Unlimited  10%        log only
Merlin_Log2   3       e:\logs1\merlindb\Merlin_Log2.ldf           NULL       22561024 KB  Unlimited  10%        log only
MerlinDB      5       e:\data1\MerlinDB\MerlinDB.ndf              PRIMARY    30720000 KB  Unlimited  10%        data only
MerlinDB_2    6       e:\data1\merlindb\MerlinDB_2.ndf            PRIMARY    30720000 KB  Unlimited  10%        data only
MerlinDB_3    7       e:\data1\merlindb\MerlinDB_3.ndf            PRIMARY    33792000 KB  Unlimited  10%        data only
MerlinDB_4    8       e:\data1\merlindb\MerlinDB_4.ndf            PRIMARY    33792000 KB  Unlimited  10%        data only
Merlin_Log3   9       e:\logs1\merlindb\Merlin_Log3.ldf           NULL       22561024 KB  Unlimited  10%        log only
Merlin_Log4   10      e:\logs1\merlindb\Merlin_Log4.ldf           NULL       11293632 KB  Unlimited  10%        log only
I'm trying to shrink Merlin_Data - the primary file - by running:
DBCC SHRINKFILE (N'Merlin_Data' , 200)
and I've also tried:
DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
Any pointers really appreciated!
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:0790504B-90AF-4CC8-8E9B-0FB395B51927@.microsoft.com...[vbcol=seagreen]
> Hi
> Can you post your DBCC SHRINKFILE command and the output from
> sp_spaceused,
> sp_helpfile.
> John
> "Nick" wrote:
|||Hi
The file Merlin_Data is 97280000 KB and only uses 153600 KB (150 MB) so your
shrinkfile should have worked!
Try
DBCC SHRINKFILE ( 1, 200 )
or
DBCC SHRINKFILE ( 1 )
John
"Nick" wrote:
> John,
> sp_spaceused:
> MyDB 290343.88 MB 69625.83 MB
> Reserved Data Index Unused
> 155007152 KB 92248808 KB 5123888 KB 57634456 KB
> sp_helpfile:
> name fileid filename filegroup size maxsize growth usage
> Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB
> Unlimited 153600 KB data only
> Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB
> Unlimited 10% log only
> Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited
> 10% log only
> MerlinDB 5 e:\data1\MerlinDB\MerlinDB.ndf PRIMARY 30720000 KB Unlimited 10%
> data only
> MerlinDB_2 6 e:\data1\merlindb\MerlinDB_2.ndf PRIMARY 30720000 KB Unlimited
> 10% data only
> MerlinDB_3 7 e:\data1\merlindb\MerlinDB_3.ndf PRIMARY 33792000 KB Unlimited
> 10% data only
> MerlinDB_4 8 e:\data1\merlindb\MerlinDB_4.ndf PRIMARY 33792000 KB Unlimited
> 10% data only
> Merlin_Log3 9 e:\logs1\merlindb\Merlin_Log3.ldf NULL 22561024 KB Unlimited
> 10% log only
> Merlin_Log4 10 e:\logs1\merlindb\Merlin_Log4.ldf NULL 11293632 KB Unlimited
> 10% log only
> I'm trying to shrink Merlin_Data - the primary file - by running:
> DBCC SHRINKFILE (N'Merlin_Data' , 200)
> and I've also tried:
> DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
> Any pointers really appreciated!
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:0790504B-90AF-4CC8-8E9B-0FB395B51927@.microsoft.com...
>
>
|||That's what I thought.
Even in Enterprise Manager it says "Min file size 150mb", so I try shrinking
it to 200Mb and it just doesn't budge. Not an inch.
Geeze, it's frustrating, as I need this file as small as possible as I've
moved all the data out of it.
Is there any way of seeing if SQL is even attempting to shrink the file? The
DBCC command returns very quickly (< 1 sec) so I guess it's an off-line
process.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:F2199DEB-6438-467B-B909-DBB935583740@.microsoft.com...[vbcol=seagreen]
> Hi
> The file Merlin_Data is 97280000 KB and only uses 153600 KB (150 MB) so
> your
> shrinkfile should have worked!
> Try
> DBCC SHRINKFILE ( 1, 200 )
> or
> DBCC SHRINKFILE ( 1 )
> John
> "Nick" wrote:
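[Editor's note: Nick's question about watching a shrink in progress has no good answer on SQL Server 2000, but on SQL Server 2005 and later the operation is visible from another connection. A sketch, assuming a version with the sys.dm_exec_requests DMV:]

```sql
-- A running shrink shows up as a 'DbccFilesCompact' request,
-- with percent_complete populated while it works.
SELECT session_id, command, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%';
```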
|||Hi Nick
If the command were working, it would take some time; you may want to try a
larger size, say 94000. What size is model?
John
"Nick" wrote:
> Thats what I thought.
> Even in enterprise manager, it says "Min file size 150mb" so I try shrinking
> it to 200Mb and it just doesn't budge. Not an inch.
> Geeze, it's frustrating as I need this file as small as possible as I've
> moved all the data out of it.
> Is there any way of seeing if SQL is even attempting to shrink the file? The
> DBCC command returns very quickly (< 1sec) so I guess it's an off-line
> process.
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:F2199DEB-6438-467B-B909-DBB935583740@.microsoft.com...
>
>
|||Model - what a good idea! SQL can't shrink a database smaller than that, can
it?
With much excitement, I check model's size... 1.5Mb!
Damn!!!
DBCC SHRINKFILE (N'Merlin_Data' , 94000) also comes back instantly, less
than 1 sec.
Logged in as sa.
File permissions maybe?
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:FE88B764-03DC-4AD0-A6D7-5264410E1F91@.microsoft.com...[vbcol=seagreen]
> Hi Nick
> If the command was working it should take some time, you may want to try a
> larger size say 94000. What size is model?
> John
> "Nick" wrote:
|||Hi Nick
I presume that you ran sp_helpfile again after the DBCC SHRINKFILE?
I doubt that file permissions are the problem.
Have you tried DBCC SHRINKDATABASE?
John
"Nick" wrote:
> Model - what a good idea! SQL can't shrink a database smaller than that can
> it?
> With much excitement, I check model's size... 1.5Mb!
> Damm!!!
> DBCC SHRINKFILE (N'Merlin_Data' , 94000) also comes back instantly, less
> than 1 sec.
> Logged in as sa.
> File permissions maybe?
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:FE88B764-03DC-4AD0-A6D7-5264410E1F91@.microsoft.com...
>
>
|||Well, it took a while but I've sussed it.
It turns out that my log files were on a Windows mount point, which should
be supported on SQL2K & Windows Server 2003 Enterprise non-clustered.
I thought maybe SQL Server needed some more log space for the data file
shrink, but whenever I tried expanding the logs SQL told me "File not
found". Very weird. So, I detached the db, moved the logs to a normal
drive and re-attached it. I could then increase the log sizes and the data
file shrank straight away.
Maybe SQL Server was unable to claim log space for the shrink but didn't
report the error correctly because it was getting confused by the mount points?
I'm guessing, but it works at last and many thanks for your assistance.
Nick
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:285E26C2-FF94-4539-B890-088B681AC999@.microsoft.com...[vbcol=seagreen]
> Hi Nick
> I presume that you did a sp_helpfile again after the DBCC SHRINKFILE?
> I doubt it is file permissions are a problem.
> Have you tried DBCC SHRINKDATABASE?
> John
> "Nick" wrote:
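[Editor's note: the detach/move/re-attach fix Nick describes maps roughly onto the SQL 2000-era procedures below. The destination path is a hypothetical example, and in practice every data and log file of the database should be listed when re-attaching.]

```sql
-- Detach the database (no active connections allowed).
EXEC sp_detach_db @dbname = N'MerlinDB';

-- (Move the .ldf files off the mount point at the OS level.)

-- Re-attach, pointing at the new log location. sp_attach_db accepts
-- the primary data file plus up to 16 file paths in total.
EXEC sp_attach_db @dbname = N'MerlinDB',
    @filename1 = N'E:\Data1\MerlinDB\Merlin_Data.MDF',
    @filename2 = N'E:\NewLogs\MerlinDB_Log1.LDF';
```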
|||Hi Nick
I have certainly had problems with mount points in the past, so I gave up using
them; they didn't allow database files larger than the host disc size to be
created. I don't know if that is still a problem though!!
John
"Nick" wrote:
> Well, it took a while but I've sussed it.
> It turns out, that my log files were on a Windows mount-point which should
> be supported on SQL2K & Windows Server 2003 Enterprise non-clustered.
> I thought maybe SQL Server needed some more log space for the data file
> shrink, but, whenever I tried expanding the logs SQL told me "File not
> found". Very weird. So, I detached the db then moved the logs to a normal
> drive and re-attached it. I could then increase the log sizes and the data
> file shrank straight away.
> Maybe SQL Server was unable to claim log-space for the shrink but didn't
> report the error correctly as it was getting confused with the mount-points?
> I'm guessing, but it works at last and many thanks for your assistance.
> Nick
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:285E26C2-FF94-4539-B890-088B681AC999@.microsoft.com...
>
>
dbcc emptyfile
Hi,
(SQL 2K)
We used to have a database with 1 100Gb data file
I've now created 5 25Gb files and ran the DBCC EmptyFile command to move all
the data out of the primary file to the new ones.
Enterprise Mgr reports the primary file is 99% empty - excellent.
Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
ShrinkFile, backing up logs, full backups etc will shrink it.
Can anyone suggest what I might be missing?
Thanks
NickHi
Can you post your DBCC SHRINKFILE command and the output from sp_spaceused,
sp_helpfile.
John
"Nick" wrote:
> Hi,
> (SQL 2K)
> We used to have a database with 1 100Gb data file
> I've now created 5 25Gb files and ran the DBCC EmptyFile command to move all
> the data out of the primary file to the new ones.
> Enterprise Mgr reports the primary file is 99% empty - excellent.
> Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
> ShrinkFile, backing up logs, full backups etc will shrink it.
> Can anyone suggest what I might be missing?
> Thanks
> Nick
>
>|||John,
sp_spaceused:
MyDB 290343.88 MB 69625.83 MB
Reserved Data Index Unused
155007152 KB 92248808 KB 5123888 KB 57634456 KB
sp_helpfile:
name fileid filename filegroup size maxsize growth usage
Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB
Unlimited 153600 KB data only
Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB
Unlimited 10% log only
Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited
10% log only
MerlinDB 5 e:\data1\MerlinDB\MerlinDB.ndf PRIMARY 30720000 KB Unlimited 10%
data only
MerlinDB_2 6 e:\data1\merlindb\MerlinDB_2.ndf PRIMARY 30720000 KB Unlimited
10% data only
MerlinDB_3 7 e:\data1\merlindb\MerlinDB_3.ndf PRIMARY 33792000 KB Unlimited
10% data only
MerlinDB_4 8 e:\data1\merlindb\MerlinDB_4.ndf PRIMARY 33792000 KB Unlimited
10% data only
Merlin_Log3 9 e:\logs1\merlindb\Merlin_Log3.ldf NULL 22561024 KB Unlimited
10% log only
Merlin_Log4 10 e:\logs1\merlindb\Merlin_Log4.ldf NULL 11293632 KB Unlimited
10% log only
I'm trying to shrink Merlin_Data - the primary file - by running:
DBCC SHRINKFILE (N'Merlin_Data' , 200)
and I've also tried:
DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
Any pointers really appreciated!
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:0790504B-90AF-4CC8-8E9B-0FB395B51927@.microsoft.com...
> Hi
> Can you post your DBCC SHRINKFILE command and the output from
> sp_spaceused,
> sp_helpfile.
> John
> "Nick" wrote:
>> Hi,
>> (SQL 2K)
>> We used to have a database with 1 100Gb data file
>> I've now created 5 25Gb files and ran the DBCC EmptyFile command to move
>> all
>> the data out of the primary file to the new ones.
>> Enterprise Mgr reports the primary file is 99% empty - excellent.
>> Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
>> ShrinkFile, backing up logs, full backups etc will shrink it.
>> Can anyone suggest what I might be missing?
>> Thanks
>> Nick
>>|||Hi
The file Merlin_Data is 97280000 KB and only uses 153600 KB (150 MB) so your
shrinkfile should have worked!
Try
DBCC SHRINKFILE ( 1, 200 )
or
DBCC SHRINKFILE ( 1 )
John
"Nick" wrote:
> John,
> sp_spaceused:
> MyDB 290343.88 MB 69625.83 MB
> Reserved Data Index Unused
> 155007152 KB 92248808 KB 5123888 KB 57634456 KB
> sp_helpfile:
> name fileid filename filegroup size maxsize growth usage
> Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB
> Unlimited 153600 KB data only
> Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB
> Unlimited 10% log only
> Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited
> 10% log only
> MerlinDB 5 e:\data1\MerlinDB\MerlinDB.ndf PRIMARY 30720000 KB Unlimited 10%
> data only
> MerlinDB_2 6 e:\data1\merlindb\MerlinDB_2.ndf PRIMARY 30720000 KB Unlimited
> 10% data only
> MerlinDB_3 7 e:\data1\merlindb\MerlinDB_3.ndf PRIMARY 33792000 KB Unlimited
> 10% data only
> MerlinDB_4 8 e:\data1\merlindb\MerlinDB_4.ndf PRIMARY 33792000 KB Unlimited
> 10% data only
> Merlin_Log3 9 e:\logs1\merlindb\Merlin_Log3.ldf NULL 22561024 KB Unlimited
> 10% log only
> Merlin_Log4 10 e:\logs1\merlindb\Merlin_Log4.ldf NULL 11293632 KB Unlimited
> 10% log only
> I'm trying to shrink Merlin_Data - the primary file - by running:
> DBCC SHRINKFILE (N'Merlin_Data' , 200)
> and I've also tried:
> DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
> Any pointers really appreciated!
|||That's what I thought.
Even in Enterprise Manager it says "Min file size 150 MB", so I try shrinking
it to 200 MB and it just doesn't budge. Not an inch.
Geez, it's frustrating, as I need this file as small as possible now that I've
moved all the data out of it.
Is there any way of seeing if SQL is even attempting to shrink the file? The
DBCC command returns very quickly (< 1 sec), so I guess it's an off-line
process.
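For what it's worth, one way to see whether the server is doing any shrink work is to watch the session in sysprocesses while the DBCC runs. A rough sketch for SQL 2000 (the exact cmd text shown for a shrink varies, so this just filters on DBCC):

SELECT spid, cmd, status, physical_io
FROM master..sysprocesses
WHERE cmd LIKE 'DBCC%'

A climbing physical_io value for the spid suggests pages really are being moved.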
>>|||Hi Nick
If the command was working it should take some time. You may want to try a
larger size, say 94000. What size is model?
John
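model's size is easy to check, for instance:

USE model
EXEC sp_spaceused

With no arguments, sp_spaceused reports the current database's total size and unallocated space.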
>|||Model - what a good idea! SQL can't shrink a database smaller than that can
it?
With much excitement, I check model's size... 1.5 MB!
Damn!!!
DBCC SHRINKFILE (N'Merlin_Data' , 94000) also comes back instantly, less
than 1 sec.
Logged in as sa.
File permissions, maybe?
>>|||Hi Nick
I presume that you ran sp_helpfile again after the DBCC SHRINKFILE?
I doubt that file permissions are the problem.
Have you tried DBCC SHRINKDATABASE?
John
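For reference, a whole-database shrink on SQL 2000 looks like this (MyDB is the name reported by sp_spaceused above; the 10 asks for 10% free space to be left in each file):

DBCC SHRINKDATABASE (MyDB, 10)

SHRINKDATABASE works file by file, so it may well behave no differently from the failing SHRINKFILE, but it is a quick way to rule that out.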
>|||Well, it took a while but I've sussed it.
It turns out that my log files were on a Windows mount point, which should
be supported on SQL 2K & Windows Server 2003 Enterprise non-clustered.
I thought maybe SQL Server needed some more log space for the data file
shrink, but whenever I tried expanding the logs SQL told me "File not
found". Very weird. So I detached the db, moved the logs to a normal
drive, and re-attached it. I could then increase the log sizes and the data
file shrank straight away.
Maybe SQL Server was unable to claim log space for the shrink but didn't
report the error correctly because it was getting confused by the mount points?
I'm guessing, but it works at last and many thanks for your assistance.
Nick
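The detach/move/re-attach sequence Nick describes can be sketched roughly as follows for SQL 2000. The new log path below is hypothetical, and every data and log file must be listed (sp_attach_db accepts up to 16 @filename arguments):

EXEC sp_detach_db 'MyDB'
-- move the .ldf files off the mount point at the operating-system level, then:
EXEC sp_attach_db @dbname = 'MyDB',
    @filename1 = 'E:\Data1\MerlinDB\Merlin_Data.MDF',
    @filename2 = 'E:\NewLogs\MerlinDB_Log1.LDF'  -- hypothetical new log location
    -- ...plus the remaining .ndf and .ldf files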
>>|||Hi Nick
I have certainly had problems with mount points in the past, so I gave up using
them; they didn't allow database files larger than the host disc size to be
created. I don't know if that is still a problem though!
John
(SQL 2K)
We used to have a database with 1 100Gb data file
I've now created 5 25Gb files and ran the DBCC EmptyFile command to move all
the data out of the primary file to the new ones.
Enterprise Mgr reports the primary file is 99% empty - excellent.
Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
ShrinkFile, backing up logs, full backups etc will shrink it.
Can anyone suggest what I might be missing?
Thanks
NickHi
Can you post your DBCC SHRINKFILE command and the output from sp_spaceused,
sp_helpfile.
John
"Nick" wrote:
> Hi,
> (SQL 2K)
> We used to have a database with 1 100Gb data file
> I've now created 5 25Gb files and ran the DBCC EmptyFile command to move all
> the data out of the primary file to the new ones.
> Enterprise Mgr reports the primary file is 99% empty - excellent.
> Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
> ShrinkFile, backing up logs, full backups etc will shrink it.
> Can anyone suggest what I might be missing?
> Thanks
> Nick
>
>|||John,
sp_spaceused:
MyDB 290343.88 MB 69625.83 MB
Reserved Data Index Unused
155007152 KB 92248808 KB 5123888 KB 57634456 KB
sp_helpfile:
name fileid filename filegroup size maxsize growth usage
Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB
Unlimited 153600 KB data only
Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB
Unlimited 10% log only
Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited
10% log only
MerlinDB 5 e:\data1\MerlinDB\MerlinDB.ndf PRIMARY 30720000 KB Unlimited 10%
data only
MerlinDB_2 6 e:\data1\merlindb\MerlinDB_2.ndf PRIMARY 30720000 KB Unlimited
10% data only
MerlinDB_3 7 e:\data1\merlindb\MerlinDB_3.ndf PRIMARY 33792000 KB Unlimited
10% data only
MerlinDB_4 8 e:\data1\merlindb\MerlinDB_4.ndf PRIMARY 33792000 KB Unlimited
10% data only
Merlin_Log3 9 e:\logs1\merlindb\Merlin_Log3.ldf NULL 22561024 KB Unlimited
10% log only
Merlin_Log4 10 e:\logs1\merlindb\Merlin_Log4.ldf NULL 11293632 KB Unlimited
10% log only
I'm trying to shrink Merlin_Data - the primary file - by running:
DBCC SHRINKFILE (N'Merlin_Data' , 200)
and I've also tried:
DBCC SHRINKFILE (N'Merlin_Data' , 200, TRUNCATEONLY)
Any pointers really appreciated!
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:0790504B-90AF-4CC8-8E9B-0FB395B51927@.microsoft.com...
> Hi
> Can you post your DBCC SHRINKFILE command and the output from
> sp_spaceused,
> sp_helpfile.
> John
> "Nick" wrote:
>> Hi,
>> (SQL 2K)
>> We used to have a database with 1 100Gb data file
>> I've now created 5 25Gb files and ran the DBCC EmptyFile command to move
>> all
>> the data out of the primary file to the new ones.
>> Enterprise Mgr reports the primary file is 99% empty - excellent.
>> Problem is, it wont shrink from being 100Gb in size - no amout of DBCC
>> ShrinkFile, backing up logs, full backups etc will shrink it.
>> Can anyone suggest what I might be missing?
>> Thanks
>> Nick
>>|||Hi
The file Merlin_Data is 97280000 KB and only uses 153600 KB (150 MB) so your
shrinkfile should have worked!
Try
DBCC SHRINKFILE ( 1, 200 )
or
DBCC SHRINKFILE ( 1 )
John
"Nick" wrote:
> John,
> sp_spaceused:
> MyDB 290343.88 MB 69625.83 MB
> Reserved Data Index Unused
> 155007152 KB 92248808 KB 5123888 KB 57634456 KB
> sp_helpfile:
> name fileid filename filegroup size maxsize growth usage
> Merlin_Data 1 E:\Data1\MerlinDB\Merlin_Data.MDF PRIMARY 97280000 KB Unlimited 153600 KB data only
> Merlin_Log 2 E:\Logs1\MerlinDB\Logs1\MerlinDB_Log1.LDF NULL 14592448 KB Unlimited 10% log only
> Merlin_Log2 3 e:\logs1\merlindb\Merlin_Log2.ldf NULL 22561024 KB Unlimited 10% log only
|||That's what I thought.
Even in Enterprise Manager it says "Min file size 150 MB", so I try shrinking
it to 200 MB and it just doesn't budge. Not an inch.
Geeze, it's frustrating as I need this file as small as possible as I've
moved all the data out of it.
Is there any way of seeing if SQL is even attempting to shrink the file? The
DBCC command returns very quickly (< 1 sec), so I guess it's an off-line
process.
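SQL 2000 does not expose a progress counter for a shrink, but you can at least see whether the session is accumulating I/O from another connection. A sketch; the spid shown is hypothetical, and the cmd text for a shrink varies, so filtering on the known spid is the reliable approach:

```sql
-- Watch the shrinking session's cumulative I/O and CPU; if physical_io
-- keeps climbing, the shrink is actually doing work.
SELECT spid, cmd, status, physical_io, cpu, waittype
FROM master..sysprocesses
WHERE spid = 55;  -- hypothetical: the spid running DBCC SHRINKFILE
-- (SQL 2005 and later instead expose sys.dm_exec_requests.percent_complete
--  for shrink operations.)
```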
|||Hi Nick
If the command was working it should take some time; you may want to try a
larger size, say 94000. What size is model?
John
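John's question about model can be answered without Enterprise Manager. A sketch; model matters here because a new or shrunk database cannot normally end up smaller than model:

```sql
-- Report model's total size and properties:
EXEC sp_helpdb 'model';
-- Or list its files individually:
USE model;
EXEC sp_helpfile;
```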
|||Model - what a good idea! SQL can't shrink a database smaller than that, can it?
With much excitement, I check model's size... 1.5 MB!
Damn!!!
DBCC SHRINKFILE (N'Merlin_Data' , 94000) also comes back instantly, in less
than a second.
Logged in as sa.
File permissions, maybe?
|||Hi Nick
I presume that you ran sp_helpfile again after the DBCC SHRINKFILE?
I doubt that file permissions are the problem.
Have you tried DBCC SHRINKDATABASE?
John
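For reference, John's suggestion applied here would look something like the following sketch. The database name is assumed from the file names in the thread, and the second argument is the percentage of free space to leave in each file after the shrink:

```sql
-- Shrink every file in MerlinDB, leaving 10% free space in each:
DBCC SHRINKDATABASE (N'MerlinDB', 10);
```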
|||Well, it took a while, but I've sussed it.
It turns out that my log files were on a Windows mount point, which should
be supported on SQL2K & Windows Server 2003 Enterprise non-clustered.
I thought maybe SQL Server needed some more log space for the data file
shrink, but, whenever I tried expanding the logs SQL told me "File not
found". Very weird. So, I detached the db then moved the logs to a normal
drive and re-attached it. I could then increase the log sizes and the data
file shrank straight away.
Maybe SQL Server was unable to claim log space for the shrink but didn't
report the error correctly because it was getting confused by the mount points?
I'm guessing, but it works at last and many thanks for your assistance.
Nick
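The workaround Nick describes maps onto the SQL 2000 system procedures like this. A sketch only: the paths are illustrative, and when any file has moved, every file of the database must be listed on attach (sp_attach_db accepts up to 16), so just two of this database's ten files are shown:

```sql
-- Detach, move the log files at the OS level, then re-attach.
EXEC sp_detach_db 'MerlinDB';
-- ...copy the .ldf files off the mount point with Explorer or xcopy...
EXEC sp_attach_db 'MerlinDB',
     'E:\Data1\MerlinDB\Merlin_Data.MDF',
     'D:\Logs\MerlinDB\MerlinDB_Log1.LDF';  -- illustrative new log path
```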
|||Hi Nick
I have certainly had problems with mount points in the past, so I gave up using
them; they didn't allow database files larger than the host disk to be
created. I don't know if that is still a problem, though!
John