Solution 1:

I wrote two comprehensive blog posts that explain ghost cleanup (it's the only place it's explained in depth anywhere, either in print or online).

The first one is Inside the Storage Engine: Ghost cleanup in depth and the second one is Ghost cleanup redux. Yes, the cleanup task only processes 10 pages each time it runs, so it's possible for it to never catch up with a high volume of continuous deletes.

Apart from the background 10-pages-every-5-seconds process, ghost cleanup can be triggered more aggressively by making sure the Storage Engine 'sees' the ghost records. Force a scan of the affected table or index using something like

SELECT * FROM [problem-table] WITH (INDEX = [problem-index]);

That will queue up a request to clean out the ghost records aggressively. Beware, though, that it will generate a lot of transaction log while it does so. Ghost records should also get cleared out by index rebuilds or reorganizations as part of regular index maintenance.
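
If you want to confirm that ghost records are actually piling up before forcing a scan, something like the sketch below shows the counts per index via sys.dm_db_index_physical_stats (the dbo.problem_table name is a placeholder; DETAILED mode reads every page, so run it off-hours):

-- Check ghost record counts for one table's indexes
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    ips.index_id,
    ips.ghost_record_count,
    ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID(N'dbo.problem_table'), NULL, NULL, 'DETAILED') AS ips;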

Hope this helps.

Solution 2:

I've always thought that index rebuilds would "fix" any remaining issues when the structures are shifted around.
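
For reference, the maintenance commands in question are along these lines (the table name is a placeholder). A rebuild writes a fresh copy of the index structure, so any ghost records in the old pages go away with it; a reorganize is the lighter-weight, always-online alternative:

-- Rebuild every index on the table
ALTER INDEX ALL ON dbo.problem_table REBUILD;
-- Or the lighter-weight option:
-- ALTER INDEX ALL ON dbo.problem_table REORGANIZE;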

Solution 3:

A couple of years ago I ran into this problem and was not able to fix it through configuration, triggering ghost cleanup, or anything of the sort. I was working with a database that had continuous inserts and deletes, and SQL Server would effectively lock up periodically when ghost cleanup kicked in.

My eventual solution was to partition the problem tables by time, so that instead of deleting old rows I could simply drop the old tables. That helped immensely. This may not be feasible for your situation, but with some variation it applies to many.
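
The answer above rolled its own time-based tables and dropped the old ones; with SQL Server table partitioning, a similar sliding-window cleanup looks roughly like the sketch below. All object names, the partition function, and the boundary value are illustrative, not from the original answer:

-- Switch the oldest partition out to an empty, identically structured staging table,
-- then truncate it: a metadata operation rather than a row-by-row (ghost-producing) delete.
ALTER TABLE dbo.events SWITCH PARTITION 1 TO dbo.events_staging;
TRUNCATE TABLE dbo.events_staging;
-- Merge away the now-empty boundary so partition numbers stay compact.
ALTER PARTITION FUNCTION pf_events_by_month() MERGE RANGE ('2023-01-01');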