"Primary Filegroup is Full" in SQL Server 2008 Standard for no apparent reason
Our database is currently at 64 GB, and one of our apps started to fail with the following error:
System.Data.SqlClient.SqlException: Could not allocate space for object 'cnv.LoggedUnpreparedSpos'.'PK_LoggedUnpreparedSpos' in database 'travelgateway' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
I double-checked everything: all files in the single filegroup are allowed to autogrow with reasonable increments (100 MB for the data file, 10% for the log file), more than 100 GB of free space is available for the database, and tempdb is set to autogrow as well, with plenty of free HDD space on its drive.
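For reference, the growth settings can be read straight from the catalog views; this is a generic check, not the exact query I ran:

-- Size and autogrowth settings per file of the current database
-- (growth is in 8 KB pages when is_percent_growth = 0, otherwise a percentage)
SELECT  name,
        physical_name,
        size / 128.0 AS SizeMB,
        CASE WHEN is_percent_growth = 1
             THEN CAST(growth AS varchar(10)) + ' %'
             ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
        END AS GrowthSetting,
        max_size                                   -- -1 means unlimited
FROM sys.database_files;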
To resolve the problem, I added a second file to the filegroup and the error went away. But I feel uneasy about this whole situation.
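What I ran was essentially the following (the logical file name and path here are placeholders, not the real ones):

ALTER DATABASE travelgateway
ADD FILE
(
    NAME       = travelgateway_data2,                  -- placeholder logical name
    FILENAME   = N'D:\Data\travelgateway_data2.ndf',   -- placeholder path
    SIZE       = 1024MB,
    FILEGROWTH = 100MB
)
TO FILEGROUP [PRIMARY];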
Where's the problem here, guys?
Solution 1:
OK, got it working. It turns out that the NTFS volume where the DB files were located had become heavily fragmented. I stopped SQL Server, defragmented the whole volume, and it has been fine ever since.
Solution 2:
Anton,
As a best practice, you shouldn't create user objects in the primary filegroup. When you have the bandwidth, create a new filegroup, move the user objects into it, and leave the system objects in primary (see the sketch after the diagnostic queries below).
The following queries will help you identify the space used in each file, the top tables with the highest row counts, and whether there are any heaps. It's a good starting point for investigating this issue.
-- Space used per file, per filegroup (sizes are stored in 8 KB pages, hence /128.0)
SELECT
      ds.name AS FileGroupName
    , df.name AS [FileName]
    , df.physical_name AS PhysicalName
    , df.size / 128.0 AS TotalSizeInMB
    , df.size / 128.0 - CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int) / 128.0 AS AvailableSpaceInMB
    , CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int) / 128.0 AS ActualSpaceUsedInMB
    , (CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int) / 128.0) / (df.size / 128.0) * 100.0 AS [%SpaceUsed]
FROM sys.database_files df
LEFT OUTER JOIN sys.data_spaces ds
    ON df.data_space_id = ds.data_space_id;
EXEC xp_fixeddrives;  -- free space in MB per fixed drive
-- Tables (all indexes) in the PRIMARY filegroup, ordered by row count.
-- sys.indexes is joined to sys.filegroups directly on data_space_id;
-- going through sys.database_files would duplicate rows once the filegroup has more than one file.
SELECT  t.name AS TableName,
        i.name AS IndexName,
        p.rows AS [Rows]
FROM sys.filegroups fg
JOIN sys.indexes i    ON fg.data_space_id = i.data_space_id
JOIN sys.tables t     ON i.object_id = t.object_id
JOIN sys.partitions p ON t.object_id = p.object_id AND i.index_id = p.index_id
WHERE fg.name = 'PRIMARY' AND t.type = 'U'
ORDER BY p.rows DESC;
-- Heaps (tables without a clustered index, index_id = 0) in the PRIMARY filegroup
SELECT  t.name AS TableName,
        i.name AS IndexName,
        p.rows AS [Rows]
FROM sys.filegroups fg
JOIN sys.indexes i    ON fg.data_space_id = i.data_space_id
JOIN sys.tables t     ON i.object_id = t.object_id
JOIN sys.partitions p ON t.object_id = p.object_id AND i.index_id = p.index_id
WHERE fg.name = 'PRIMARY' AND t.type = 'U' AND i.index_id = 0
ORDER BY p.rows DESC;
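Once the big tables are identified, moving them out of PRIMARY boils down to creating the new filegroup, adding a file to it, and rebuilding the clustered index on the new filegroup. A rough sketch; the filegroup name, file path, and key column are placeholders, not taken from your database:

ALTER DATABASE travelgateway ADD FILEGROUP UserData;          -- placeholder filegroup name

ALTER DATABASE travelgateway
ADD FILE
(
    NAME       = travelgateway_userdata1,                     -- placeholder logical name
    FILENAME   = N'D:\Data\travelgateway_userdata1.ndf',      -- placeholder path
    SIZE       = 1024MB,
    FILEGROWTH = 100MB
)
TO FILEGROUP UserData;

-- Rebuilding the clustered index (here the primary key) on the new filegroup moves the table's data with it
CREATE UNIQUE CLUSTERED INDEX PK_LoggedUnpreparedSpos
    ON cnv.LoggedUnpreparedSpos (LoggedUnpreparedSposId)      -- placeholder key column
    WITH (DROP_EXISTING = ON)
    ON UserData;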
Solution 3:
Ran into the same problem, and at first defragmenting seemed to work, but only for a short while. It turned out the server the customer was using was running the Express edition, which has a licensing limit of about 10 GB per database.
So even though the size was set to "unlimited", it wasn't.
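If you suspect the same thing, a quick way to check the edition and how close the data files are to the cap (a generic check, nothing specific to this setup):

SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;

-- Total size of the current database's data files (the Express limit counts data files, not the log)
SELECT SUM(size) / 128.0 AS DataFileSizeMB
FROM sys.database_files
WHERE type_desc = 'ROWS';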
Solution 4:
Go to the database properties, select Files, increase the initial size of the database, and set the primary filegroup's files to autogrow (a T-SQL equivalent is sketched below). Restart SQL Server.
You will be able to use the database as before.
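Roughly the same thing in T-SQL, with a placeholder database and logical file name:

ALTER DATABASE MyDatabase            -- placeholder database name
MODIFY FILE
(
    NAME       = MyDatabase_Data,    -- placeholder logical name of the primary data file
    SIZE       = 10240MB,            -- new, larger initial size
    FILEGROWTH = 512MB               -- make sure autogrowth is on and reasonably sized
);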
Solution 5:
I also ran into the same problem, where the initial database size was set to 4 GB and autogrowth was set to 1 MB. The virtual encrypted TrueCrypt drive that the database was on seemed to have plenty of space.
I changed a couple of (the above) things:
- I turned the Windows service for SQL Server Express from automatic to manual, so only the 'regular' SQL Server is running. (Even though I am running SQL Server 2008 R2, which should allow 10 GB.)
- I changed the autogrowth from 1 MB to 10%
- I changed the autogrowth increment-size from 10% to 1000 MB
- I defragmented the drive
- I shrank the database (see the T-SQL sketch after this list):
- manually: DBCC SHRINKDATABASE('...')
- automatically: right-click the database | "Properties" | "Auto Shrink" | "Truncate log on check point"
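For reference, the two shrink routes look roughly like this in T-SQL (placeholder database name):

DBCC SHRINKDATABASE (N'MyDatabase');          -- manual, one-off shrink
ALTER DATABASE MyDatabase SET AUTO_SHRINK ON; -- let SQL Server shrink it automatically in the background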
All to little avail (I could insert some more records, but soon ran into the same problem). The pagefile mentioned by Tobbi made me try a larger virtual drive. (Even though my drive should not contain any such system files, since I run without it being mounted a lot of the time.)
- I made a new larger virtual drive with TrueCrypt
When making this, I ran into a TrueCrypt question about whether I am going to store files larger than 4 GB (as shown in this SuperUser question).
- I told TrueCrypt I would store files larger than 4 GB
After these last two I was doing fine, and I am assuming this last one did the trick. I think TrueCrypt otherwise chooses a FAT file system (as described here), which limits individual files to 4 GB. (So I probably did not need to enlarge the drive after all, but I did anyway.)
This is probably a very rare border case, but maybe it is of help to somebody.