Pagefile prioritization on multiple drives
I am managing a Dell R710 server used for some very large non-linear finite element analysis (FEA) computations. Occasionally, these runs will take upwards of 500GB of allocated memory. Since this machine currently has only 132GB of RAM, the excess allocation spills into the paging file.
The paging file is currently on a spinning HDD array and is causing a huge bottleneck. I have investigated maxing out the memory (288GB) and adding a 400GB Intel 750 NVMe SSD as a dedicated pagefile disk. This should relieve much of the pagefile I/O bottleneck, but I want to make sure that we don't max out the pagefile and crash a large run.
Short of buying the 800GB Intel 750 to cover the understood maximum pagefile size of 864GB (3x 288GB of RAM), can I tell Windows to use the HDD array as failover for extra pagefile space? Is there any way to prioritize the SSD as the primary pagefile location? Thanks.
Solution 1:
You are not "required" to have a pagefile on the HDD array. You can simply remove it or set it to absolute minimum, if you want Crash Dumps (the OS will tell you when you change the individual pagefile size on the HDD array). Assuming the array is the location of the OS.
This automatically forces paging writes to the SSD once the small pagefile on the OS drive has been used up.
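If you would rather script this than click through the System Properties dialog, a sketch along the following lines should work. The E: drive letter for the SSD, the 800 MB floor on C:, and the 380 GB (389120 MB) SSD pagefile size are my assumptions; substitute your own values, and run it from an elevated prompt (the wmic commands can equally be typed directly into an elevated command prompt):

    import subprocess

    def wmic(cmd):
        """Echo and run a WMIC command (needs an elevated prompt)."""
        print("> " + cmd)
        subprocess.run(cmd, shell=True, check=True)  # shell expands %COMPUTERNAME%

    # 1. Take pagefile placement away from Windows' automatic management.
    wmic('wmic computersystem where name="%COMPUTERNAME%" '
         'set AutomaticManagedPagefile=False')

    # 2. Shrink the pagefile on the OS/HDD array to a crash-dump-sized
    #    minimum. (WQL "where" clauses need the backslash escaped.)
    wmic(r'wmic pagefileset where name="C:\\pagefile.sys" '
         r'set InitialSize=800,MaximumSize=800')

    # 3. Create a fixed-size pagefile on the NVMe SSD. Equal initial and
    #    maximum sizes avoid runtime growth stalls; sizes are in MB.
    wmic(r'wmic pagefileset create name="E:\pagefile.sys"')
    wmic(r'wmic pagefileset where name="E:\\pagefile.sys" '
         r'set InitialSize=389120,MaximumSize=389120')

A reboot is required before the new pagefile layout takes effect.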
Having the pagefile on an array has drawbacks. Each page write goes to a controller and unnecessarily passes through the controller board's logic to determine which drive(s) should actually hold that page. A pagefile is by nature temporary storage, so there is no benefit to any kind of RAID or array underneath it, especially when a faster subsystem (the SSD, in this case) is available.
Someone might ask, "what about the large caches found on most array controllers?" Those are not useful for a pagefile: by definition, what gets paged out is what has not been read in a while, so the cache is unlikely to be hit when the pagefile is read back. An SSD with its basic built-in cache will be faster than an array cache in this scenario.
In your very particular situation (FEA computations) it gets a bit trickier if the solver regularly sweeps the whole allocated memory, because then the pagefile is read back constantly. In that case a large controller cache "could" help, depending on the sequence in which your algorithm accesses memory: a LIFO (last in, first out) access pattern will benefit from the cache, random access will see limited benefit, and a FIFO (first in, first out) pattern will likely hurt, as the sketch below illustrates.
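To see why the read-back order matters, here is a toy illustration of my own (nothing Windows-specific): it models the controller cache as an LRU cache in front of the pagefile, writes out a run of pages, then reads them back in LIFO and FIFO order:

    from collections import OrderedDict

    def hit_rate(write_order, read_order, cache_pages):
        """Fraction of read-backs served from an LRU cache of cache_pages
        entries, after the cache was filled by writing write_order out."""
        cache = OrderedDict()

        def touch(page):
            hit = page in cache
            cache[page] = True
            cache.move_to_end(page)           # mark most recently used
            if len(cache) > cache_pages:
                cache.popitem(last=False)     # evict least recently used
            return hit

        for page in write_order:              # paging out fills the cache
            touch(page)
        reads = [touch(page) for page in read_order]
        return sum(reads) / len(reads)

    pages = list(range(100_000))              # pages paged out, oldest first
    cache_size = 10_000                       # cache holds 10% of the pages
    print(f"LIFO read-back: {hit_rate(pages, pages[::-1], cache_size):.0%}")  # 10%
    print(f"FIFO read-back: {hit_rate(pages, pages, cache_size):.0%}")        # 0%

With a cache holding 10% of the pages, the LIFO read-back is served from cache about 10% of the time, while the FIFO read-back misses on every single page yet still churns the cache, which is why it can actively hurt.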
Various Microsoft MVP write-ups say that faster drives will be auto-magically favored, though my empirical observation over the years is that the OS drive is favored. Either way, the configuration above addresses both of your concerns.
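If you want to check empirically which pagefile Windows actually leans on, Win32_PageFileUsage reports statistics per file; a quick query after a big run shows where the paging pressure went:

    import subprocess

    # "wmic pagefile" maps to Win32_PageFileUsage; compare the
    # CurrentUsage / PeakUsage values reported for each pagefile.sys.
    result = subprocess.run("wmic pagefile list full",
                            shell=True, capture_output=True, text=True)
    print(result.stdout)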