Windows 8 Defragmenter?
This PDF seems to have an explanation of this, along with new NTFS features.
It says:
Slab Consolidation
- Efficiently defrags files to minimize the number of allocated slabs
- A slab is the unit of allocation on a thin provisioned volume
- Requires support for IOCTL_STORAGE_QUERY_PROPERTY, requesting a property ID of StorageDeviceLBProvisioningProperty
  - Retrieves a volume's slab size
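For reference, that property can be queried directly through DeviceIoControl. Below is a minimal sketch (assuming volume C: and a Windows 8 or later SDK) that reports the thin-provisioning flag and the optimal unmap granularity, which is where the slab size mentioned in the slide comes from:

    #define _WIN32_WINNT 0x0602   /* Windows 8, for the LB provisioning definitions */
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Query-only access (0) is enough for IOCTL_STORAGE_QUERY_PROPERTY. */
        HANDLE h = CreateFileW(L"\\\\.\\C:", 0,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        STORAGE_PROPERTY_QUERY query = {0};
        query.PropertyId = StorageDeviceLBProvisioningProperty;
        query.QueryType  = PropertyStandardQuery;

        DEVICE_LB_PROVISIONING_DESCRIPTOR desc = {0};
        DWORD bytes = 0;
        if (DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                            &query, sizeof(query), &desc, sizeof(desc),
                            &bytes, NULL)) {
            printf("Thin provisioning enabled: %u\n",
                   (unsigned)desc.ThinProvisioningEnabled);
            printf("Optimal unmap granularity (logical blocks): %llu\n",
                   (unsigned long long)desc.OptimalUnmapGranularity);
        } else {
            printf("DeviceIoControl failed: %lu\n", GetLastError());
        }

        CloseHandle(h);
        return 0;
    }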
I couldn't find anything specifically explaining what this means in the context of Windows 8's defragmenter. But "slab consolidation" generally refers to moving objects so that objects that round up to the same allocation size are placed together.
The benefit of doing this is usually pretty minimal. But it does tend to reduce the average seek time when a large number of small objects are accessed.
Actually, I don't think slabs are meant to group the allocations of many files of the same size in order to reduce the average seek time.
My opinion is that slabs are used to reduce latency for allocations on large volumes, which would otherwise suffer from too many concurrent accesses by parallel threads needing to allocate space on the volume, because every allocation would take a lock on the same part of the volume's allocation bitmap. To avoid processing one large bitmap, it can be subdivided into "slabs", each covering a contiguous area of the disk described by the same bitmap fragment, occupying at least one cluster: if your cluster size is 4 KB, one cluster of the bitmap represents 4K*8 = 32K allocatable clusters, i.e. 128 MB of storage. The actual slab size on a volume is tuned so that there are roughly between 33 and 64 slabs, allowing about 33 concurrent threads to allocate space in the on-disk bitmap without blocking each other.
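To make that arithmetic concrete, here is a tiny sketch using the 4 KB cluster size assumed above (the figures are illustrative, not read from a real volume):

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative figures only: a 4 KB cluster size, as in the example above. */
        unsigned long long cluster_size = 4096;                        /* bytes per cluster        */
        unsigned long long bits_per_bitmap_cluster = cluster_size * 8; /* one bit tracks 1 cluster */
        unsigned long long bytes_covered = bits_per_bitmap_cluster * cluster_size;

        /* Prints: one bitmap cluster tracks 32768 clusters = 128 MB of storage */
        printf("One bitmap cluster tracks %llu clusters = %llu MB of storage\n",
               bits_per_bitmap_cluster, bytes_covered / (1024 * 1024));
        return 0;
    }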
So slabs are used to speed up space allocation on the volume: the assumption is that a thread creating many files will usually do so within its own slab before unlocking it and trying another one, falling back to smaller allocations in the current slab, then to another available, unlocked slab, and only as a last resort taking a blocking access to the slab currently used by another thread.
This explains why allocation on disk is "spread" across the volume. It also explains why the MFT on an NTFS volume has at least two fragments, belonging to different slabs, as this avoids severe lock contention between the many threads using the volume. You may defragment the MFT, but at least one fragment will remain in its "reserved area" for concurrent allocations that must avoid performing blocking I/O on the NTFS volume.
In the past, an NTFS volume was not subdivided into multiple slabs, and there was a huge performance penalty, with many threads blocking and too many thread switches in the kernel while waiting for I/O completion (even though allocation in the bitmap is in fact extremely fast, taking nanoseconds, since most of the interesting part of the bitmap is already cached in memory). When writes to the volume are then flushed and journaled, another lock is taken for allocation in the journal, so the journal now also uses a separate slab on the volume (when possible).
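As a toy illustration of that idea (not the real NTFS allocator, just a sketch of per-slab locking), giving each slab its own lock lets threads allocate in different regions without contending on a single global bitmap lock:

    #include <windows.h>

    #define SLAB_COUNT 64   /* hypothetical number of slabs on the volume */

    typedef struct {
        SRWLOCK   lock;           /* protects this slab's bitmap fragment  */
        ULONGLONG free_clusters;  /* stand-in for the real bitmap contents */
    } SLAB;

    /* Static zero-initialization is equivalent to SRWLOCK_INIT for each lock. */
    static SLAB g_slabs[SLAB_COUNT];

    /* Try the caller's preferred slab first, then any other slab whose lock can
       be taken without blocking; return the slab index used, or -1 on failure. */
    int allocate_clusters(int preferred, ULONGLONG want)
    {
        for (int pass = 0; pass < SLAB_COUNT; pass++) {
            int i = (preferred + pass) % SLAB_COUNT;
            if (!TryAcquireSRWLockExclusive(&g_slabs[i].lock))
                continue;                    /* slab busy in another thread: skip it */
            if (g_slabs[i].free_clusters >= want) {
                g_slabs[i].free_clusters -= want;
                ReleaseSRWLockExclusive(&g_slabs[i].lock);
                return i;
            }
            ReleaseSRWLockExclusive(&g_slabs[i].lock);
        }
        return -1;                           /* no unlocked slab had enough space */
    }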
But I don't think that NTFS dedicates any slab to files of a specific size. Internally, NTFS will slightly defragment the slabs when data is removed and their allocated size falls below some threshold, so that two such slabs can be merged.
You can get info about slab sizes with:
fsutil fsinfo ntfsinfo c:
Clearly, slabs are a tuning parameter intended for performance, but many third-party defragmentation tools ignore this setting and don't use optimal placement. Ideally you should have some free space in each slab of the volume, unless the slabs are full of files and indexes that are never reallocated and should remain stable. For many small temporary files and transactions that are constantly created and recycled, you need to spread them over enough slabs (depending on the number of concurrent threads) while avoiding placing them too far away from the other clusters that need to be read, if the volume is a hard disk or RAID array (this does not matter on SSDs).
Slabs can also be useful for remote filesystems, but their optimal size is hard to predict. By contrast, slabs are very small for differencing volumes in hierarchies of virtualized volumes, and there the placement strategy is very different, given that allocation is virtual and remapped to different physical locations.
We still need info from Microsoft about the following tuning parameters in the registry:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\SlabifyFunction]
MinimumReclaimSlabsMB = REG_DWORD: 10240
MinimumReclaimSlabsPercent = REG_DWORD: 10
SlabEvictUpperBoundKB = REG_DWORD: 204800
SlabEvictUpperBoundPercent = REG_DWORD: 20
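If that key exists on your machine, its current values can be inspected from an elevated prompt, for example:

reg query "HKLM\SOFTWARE\Microsoft\Dfrg\SlabifyFunction"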
I think that these are left undocumented on purpose, because Microsoft is still thinking about the placement strategies and may change them over time. They are not exposed by the API; you only find evidence of them in the registry and in the internal implementation of the NTFS driver (I'm not sure whether anyone can access those sources, or whether they were ported, with Microsoft's help, into the NTFS "NG" driver for Linux, which may still be tuned differently to better fit the most common Linux workloads; in any case, Microsoft now sponsors work on the Linux kernel it ships for Windows and for its own Azure cloud offerings, now that it works a lot with Linux, Linux-based Android, and the BSD-based iOS/macOS kernels, where it wants to sell cloud services. NTFS is no longer just for Windows: it has to work with other OSes as well, and Windows has to live inside a larger set of OSes serving different needs and scales).
Alternatives to NTFS also exist in Windows, such as ReFS (with its additional features, but also its own limitations); maybe Windows will later be extended to accept other filesystems, including Ext4, ZFS, or HFS+ from macOS: everything seems to be ready in the Windows kernel and in the API (and several companies, like Paragon Software, already support these filesystems on Windows, though still not for booting the OS or for critical components of Windows like Hyper-V). Microsoft is constantly changing its mind; it adapts to market demand, and it no longer wants to isolate itself in a niche market with only its own proprietary solutions, because it wants to target more applications and usages. All we know is that slabs are briefly exposed by the "/K" parameter of the DEFRAG.EXE command-line tool, which does not detail them much. But it is easy to observe that the /K optimization gives huge performance gains after the initial installation of Windows (even before the Bootvis optimization performed after 6 reboots and measurements). There is also the /L parameter, related to trimming on SSDs.
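For completeness, slab consolidation and retrim can be triggered manually from an elevated prompt, e.g.:

defrag C: /K /V
defrag C: /L /V

(/K performs slab consolidation on thinly provisioned volumes, /L performs a retrim, and /V just makes the output verbose.)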