Why does attempting to write a large file cause js heap to run out of memory
The out-of-memory error happens because you're not waiting for the `drain` event to be emitted; without waiting, Node.js will buffer all written chunks until maximum memory usage occurs.
`.write` will return `false` if the internal buffer has reached the `highWaterMark`, which defaults to 16384 bytes (16 KiB). In your code you're not handling the return value of `.write`, so the buffer is never flushed while the loop is running.
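To see that backpressure signal in isolation, here's a minimal sketch (the file name and the tiny `highWaterMark` are just for illustration) showing the return value flip once the internal buffer reaches the limit:

const fs = require("fs");

// Tiny highWaterMark so the flip is easy to see (the default is 16384 bytes)
const file = fs.createWriteStream("./hwm-demo.dat", { highWaterMark: 4 });

console.log(file.write("ab")); // true  - internal buffer still below 4 bytes
console.log(file.write("cd")); // false - buffer reached the highWaterMark
file.once("drain", () => {
  console.log("drained, safe to write again");
  file.end();
});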
The buffering itself can be tested very easily with `tail -f test.dat`: while your script is executing, you will see that nothing is written to `test.dat` until the script finishes.
For `1e7` writes the buffer should be cleared about 610 times (`1e7 / 16384 ≈ 610`).
A solution is to check the return value of `.write`, and if `false` is returned, wait for the `drain` event using `file.once('drain')` wrapped in a promise.
NOTE: `writable.writableHighWaterMark` was added in Node v9.3.0.
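On those versions you can confirm the threshold with `console.log(file.writableHighWaterMark)`, which prints `16384` for the stream created below (just a sanity check, not part of the fix).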
const file = require("fs").createWriteStream("./test.dat");

(async () => {
  for (let i = 0; i < 1e7; i++) {
    if (!file.write('a')) {
      // Will pause every 16384 iterations until `drain` is emitted
      await new Promise(resolve => file.once('drain', resolve));
    }
  }
})();
Now if you do `tail -f test.dat` you will see that data is being written while the script is still running.
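If you want to double-check the ~610 estimate from above, a small variation of the same script (purely illustrative) can count how many times the loop has to pause:

const file = require("fs").createWriteStream("./test.dat");

(async () => {
  let drains = 0;
  for (let i = 0; i < 1e7; i++) {
    if (!file.write('a')) {
      drains++;
      await new Promise(resolve => file.once('drain', resolve));
    }
  }
  file.end();
  console.log(`Paused for 'drain' ${drains} times`); // ~610
})();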
As for why you get memory issues with `1e7` but not `1e6`, we have to take a look at how Node.js does the buffering, which happens in the `writeOrBuffer` function. This sample code gives us a rough estimate of the memory usage:
const count = Number(process.argv[2]) || 1e6;

// Mimic the relevant parts of the writable stream's internal state
const state = { bufferedRequestCount: 0 };

function nop() {}

const buffer = (data) => {
  const last = state.lastBufferedRequest;
  state.lastBufferedRequest = {
    chunk: Buffer.from(data),
    encoding: 'buffer',
    isBuf: true,
    callback: nop,
    next: null
  };
  if (last)
    last.next = state.lastBufferedRequest;
  else
    state.bufferedRequest = state.lastBufferedRequest;
  state.bufferedRequestCount += 1;
};

const start = process.memoryUsage().heapUsed;

for (let i = 0; i < count; i++) {
  buffer('a');
}

const used = (process.memoryUsage().heapUsed - start) / 1024 / 1024;
console.log(`${Math.round(used * 100) / 100} MB`);
When executed:
// node memory.js <count>
1e4: 1.98 MB
1e5: 16.75 MB
1e6: 160 MB
5e6: 801.74 MB
8e6: 1282.22 MB
9e6: 1442.22 MB - Out of memory
1e7: 1602.97 MB - Out of memory
So each object uses ~0.16 KB, and when doing `1e7` writes without waiting for the `drain` event, you end up with 10 million of those objects in memory (to be fair, it crashes before reaching 10M).
It doesn't matter whether you use a single `a` or 1000; the memory increase from that is negligible.
You can increase the maximum memory used by Node with the `--max_old_space_size={MB}` flag (of course this is not the solution, it's just for checking the memory consumption without crashing the script):
node --max_old_space_size=4096 memory.js 1e7
UPDATE: I made a mistake in the memory snippet, which led to a 30% increase in memory usage: I was creating a new callback for every `.write`, whereas Node reuses the `nop` callback.
UPDATE II:
If you're always writing the same value (doubtful in a real scenario), you can greatly reduce the memory usage and execution time by passing the same buffer every time:
const file = require("fs").createWriteStream("./test.dat");
const buf = Buffer.from('a');

(async () => {
  for (let i = 0; i < 1e7; i++) {
    if (!file.write(buf)) {
      // Will pause every 16384 iterations until `drain` is emitted
      await new Promise(resolve => file.once('drain', resolve));
    }
  }
})();
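Passing a `Buffer` also skips the string-to-`Buffer` conversion that `.write` otherwise performs for every chunk (that's the `Buffer.from(data)` call in the estimate above), which is where most of the extra time and memory was going.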