Just to be clear we're on the same page: you're saying that each log backup is hundreds of MBs, not that the log files (LDFs) themselves are large. The latter is usually explained by the log being pinned by replication, mirroring, a forgotten active user transaction, or any of the other reasons listed in sys.databases.log_reuse_wait_desc. However, I understand that is not the case here. If the log backups taken every 10 minutes are each hundreds of MBs, then there must be hundreds of MBs' worth of log activity occurring in each 10-minute window. Perhaps there is some activity going on constantly that you are not aware of?
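Just to rule the first case out, you can check what (if anything) is preventing log truncation with a quick query (substitute your database name for the placeholder):

```sql
-- LOG_BACKUP or NOTHING is normal; anything else points at what is pinning the log
select name, log_reuse_wait_desc
from sys.databases
where name = N'<myproblemdb>';
```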

Of course, I assume your log backup job does not use the WITH NO_TRUNCATE or WITH COPY_ONLY options; neither of those truncates the log, so the same log records would be captured over and over in successive backups.

The following three queries should give you an idea of what the log is filled with:

use [<myproblemdb>];
go

-- Total number and size of log records in the active log
select count(*) as CountLogRecords,
    sum([Log Record Length]) as [Total Log Length]
from fn_dblog(null, null);

-- Log usage broken down by transaction, largest first
select count(*) as CountTransactionRecords,
    sum([Log Record Length]) as [Transaction Log Length],
    [Transaction ID]
from fn_dblog(null, null)
group by [Transaction ID]
order by 2 desc;

-- Log usage broken down by operation type, largest first
select count(*) as CountLogOperations,
    sum([Log Record Length]) as [Operation Log Length],
    [Operation]
from fn_dblog(null, null)
group by [Operation]
order by 2 desc;

How much data is in the transaction log file just before each backup is taken? You can see this with DBCC SQLPERF(LOGSPACE), with DBCC LOGINFO (VLFs with Status = 2 are active), or via the disk usage report in SQL Server Management Studio. And how big are the log backups themselves?
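For example, to capture the log size and percent used right before the backup runs:

```sql
-- Reports log file size (MB) and percentage of the log in use, for every database
dbcc sqlperf(logspace);
```

If the "Log Space Used (%)" figure is consistently high just before each backup, that confirms the volume of logged activity rather than a backup-side problem.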


You may also have too many VLFs. Run DBCC LOGINFO and see how many rows are returned. If there is a large number, consider shrinking the database's transaction log and then growing it back to an appropriate size in a single step.
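A sketch of that shrink-and-regrow step follows; the logical log file name and the 4 GB target size are placeholders, so substitute the values from your own environment (you can find the logical name in sys.database_files):

```sql
use [<myproblemdb>];
go
-- Each row returned is one VLF; a count in the thousands is worth fixing
dbcc loginfo;
go
-- Take a log backup first, then shrink the log file down
dbcc shrinkfile (N'<myproblemdb>_log', 1);
go
-- Regrow in one step to the working size you expect (4 GB here as an example)
alter database [<myproblemdb>]
    modify file (name = N'<myproblemdb>_log', size = 4096MB);
go
```

Growing in one step (rather than letting autogrow add many small increments) is what keeps the resulting VLF count low.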