Why are sleepimage and swapfile filled with zeros?
If I look at the contents of either sleepimage or swapfile in /var/vm, I find that it's just 1.07G of null bytes. I'd expect them to contain actual data, or, if they're not being used, to be 0B files. I've checked a few different OS versions and verified that this is the case from 10.9 to 11.6, so I doubt it's something particular to a given setup or filesystem.
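For reference, the observation is easy to reproduce by scanning the file (a minimal sketch, assuming root, since /var/vm isn't readable otherwise; it reports the first non-zero byte, if any):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/var/vm/sleepimage", "rb");
    if (!f) { perror("fopen"); return 1; }

    static unsigned char buf[1 << 20];   /* 1 MiB read buffer */
    long long total = 0;
    size_t n;

    /* Walk the file chunk-by-chunk, stopping at the first non-zero byte. */
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i < n; i++) {
            if (buf[i] != 0) {
                printf("first non-zero byte 0x%02x at offset %lld\n",
                       buf[i], total + (long long)i);
                fclose(f);
                return 0;
            }
        }
        total += (long long)n;
    }
    printf("all %lld bytes are zero\n", total);
    fclose(f);
    return 0;
}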
I decided to do some digging into the kernel source and I have a rough answer, but it's still not 100% clear. If you take a look at IOHibernateIO.cpp, inside the IOHibernateSystemPostWake method, you see the following call (note: slightly different for 10.11+, but the same function ends up being called):
if (kFSOpened == gFSState)
{
    // invalidate & close the image file
    gIOHibernateCurrentHeader->signature = kIOHibernateHeaderInvalidSignature;
    if ((fileRef = gIOHibernateFileRef))
    {
        gIOHibernateFileRef = 0;
        IOSleep(TRIM_DELAY);
        kern_close_file_for_direct_io(fileRef,
#if DISABLE_TRIM
                                      0, 0, 0, 0, 0);
#else
                                      0, (caddr_t) gIOHibernateCurrentHeader,
                                      sizeof(IOHibernateImageHeader),
                                      0,
                                      gIOHibernateCurrentHeader->imageSize);
#endif
    }
    gFSState = kFSIdle;
}
return (kIOReturnSuccess);
Importantly, DISABLE_TRIM is defined as 0, so we end up calling kern_close_file_for_direct_io, whose signature is as follows:
void
kern_close_file_for_direct_io(struct kern_direct_file_io_ref_t * ref,
                              off_t write_offset, caddr_t addr, vm_size_t write_length,
                              off_t discard_offset, off_t discard_end)
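Lining the call from IOHibernateSystemPostWake up against this signature makes the mapping easier to see (the parameter labels below are my own annotation, not xnu source):

kern_close_file_for_direct_io(fileRef,
                              /* write_offset   */ 0,
                              /* addr           */ (caddr_t) gIOHibernateCurrentHeader,
                              /* write_length   */ sizeof(IOHibernateImageHeader),
                              /* discard_offset */ 0,
                              /* discard_end    */ gIOHibernateCurrentHeader->imageSize);

In other words: write sizeof(IOHibernateImageHeader) bytes from the in-memory header back to offset 0 of the file, and discard the extents covering bytes 0 through imageSize.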
The implementation first seems to issue a DKIOCUNMAP ioctl for the extents belonging to the hibernate file, then copies write_length bytes from memory address addr into the file. Looking back a bit, gIOHibernateCurrentHeader is zeroed out in memory after the hibernate image is written, during IOHibernateSystemSleep.
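As an aside, and foreshadowing the documentation quoted below, the same unmap request can be issued from user space via <sys/disk.h>. A hypothetical sketch, with a placeholder device node and extent range that are not from the question (warning: running this against a real disk discards data, and raw device access requires root):

#include <fcntl.h>
#include <stdio.h>
#include <sys/disk.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder raw device node; use a disposable test device only. */
    int fd = open("/dev/rdisk9", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* One extent: byte offset and length on the device (placeholder values). */
    dk_extent_t extent = { .offset = 0, .length = 4096 };
    dk_unmap_t  unmap  = { .extents = &extent, .extentsCount = 1, .options = 0 };

    /* Tell the driver these blocks are no longer in use (i.e., TRIM them). */
    if (ioctl(fd, DKIOCUNMAP, &unmap) < 0)
        perror("ioctl(DKIOCUNMAP)");

    close(fd);
    return 0;
}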
So this leads to a few conclusions, as well as a few lingering questions:
- This does indeed seem to be intended behavior, and I can see a similar DKIOCUNMAP being issued within vm_compressor_backing_file.c, so I assume that something similar happens for swap files as well.
- However, I'm uncertain as to what this means for the space physically taken on disk. For APFS this is trivially answerable since it supports sparse files (so even if the file is nominally 1.07G of null bytes it won't matter), but what happens for HFS+? My understanding is that the DKIOCUNMAP ioctl effectively translates into the SSD TRIM command to release no-longer-used blocks:
The doUnmap method was introduced as a replacement for the doDiscard method. It performs a similar function, which is to release disk blocks that are not used by the file system. Unlike the doDiscard method, which is capable of releasing only a single physically contiguous run of disk blocks, the doUnmap method is provided with an array containing one or more ranges of disk blocks that are no longer in use. A user space process can perform this action by sending the ioctl DKIOCUNMAP.
which would seem to imply that even though the filesystem reports the size of the file as 1.07G, none of those extents actually point to valid blocks on disk, and so the physical size is far smaller (a quick way to check is sketched below). This would also explain why the file reads as all zeros, since I assume the SSD controller, upon receiving the TRIM command, may implement RZAT (return zeros after trim). This hypothesis could be tested by trying the same thing on a spinning disk and seeing whether the old data is returned.
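One cheap way to probe the physical footprint without reading the contents is to compare the logical size against the allocated block count from stat(2) (a minimal sketch; st_blocks is in 512-byte units on macOS, and the interesting question is whether HFS+ reports fewer allocated blocks after the unmap):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("/var/vm/sleepimage", &st) != 0) { perror("stat"); return 1; }

    /* Logical size vs. blocks actually allocated to the file. */
    printf("logical size:  %lld bytes\n", (long long)st.st_size);
    printf("physical size: %lld bytes (%lld blocks of 512)\n",
           (long long)st.st_blocks * 512, (long long)st.st_blocks);
    return 0;
}

If the allocated count were to stay at the full 1.07G on HFS+, that would point to the zeros coming back from the device itself (the RZAT hypothesis) rather than from filesystem-level sparseness.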