MongoDB 80,000+ insertions per second

To test my MongoDB 3.2 / WiredTiger installation on CentOS 6, I use the script from http://vladmihalcea.com/2013/12/01/mongodb-facts-80000-insertssecond-on-commodity-hardware/. Here is the script (slightly modified):

// Run me as:
// mongo random --eval "var arg1=50000000;arg2=1" create_random.js

var minDate = new Date(2012, 0, 1, 0, 0, 0, 0);
var maxDate = new Date(2013, 0, 1, 0, 0, 0, 0);
var delta = maxDate.getTime() - minDate.getTime();

var job_id = arg2;

var documentNumber = arg1;
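// insert in batches of 10,000 documents per insert() call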
var batchNumber = 5 * 2000;

var job_name = 'Job#' + job_id;
var start = new Date();
var tsPrev = start;

var batchDocuments = new Array();
var index = 0;

while(index < documentNumber) {
    var date = new Date(minDate.getTime() + Math.random() * delta);
    var value = Math.random();
    var document = {
        created_on : date,
        value : value
    };
    batchDocuments[index % batchNumber] = document;
    if((index + 1) % batchNumber == 0) {
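        // unacknowledged (w:0), unordered bulk insert of the current batch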
        db.randomData.insert(batchDocuments, {writeConcern:{w:0}, ordered: false});
    }
    index++;

    if(index % 100000 == 0) {
        var tsNow = new Date();
        print(job_name + ' inserted ' + index + ' documents in ' + (tsNow-tsPrev)/1000 + 's; Total time: ' + (tsNow-start)/1000 + 's');
        tsPrev = tsNow;
    }
}
print(job_name + ' inserted ' + documentNumber + ' documents in ' + (new Date() - start)/1000.0 + 's');

I get on average 11,000 inserts/s, independent of the number of documents per batch, the write concern, or the ordered flag. The data from mongostat:

insert query update delete getmore command % dirty % used flushes   vsize     res qr|qw ar|aw netIn netOut conn                      time
 15424    *0     *0     *0       0     1|0     0.5   17.6       0    3.2G    2.8G   0|0   0|1  930k    18k   33 2015-12-15T23:32:40+01:00
 12430    *0     *0     *0       0     6|0     0.5   17.6       0    3.2G    2.8G   0|0   0|0  807k    19k   33 2015-12-15T23:32:41+01:00
 14330    *0     *0     *0       0     1|0     0.5   17.6       0    3.2G    2.8G   0|0   0|1  868k    18k   33 2015-12-15T23:32:42+01:00
 15670    *0     *0     *0       0     1|0     0.5   17.6       0    3.2G    2.8G   0|0   0|0  992k    18k   33 2015-12-15T23:32:43+01:00
 10794    *0     *0     *0       0     1|0     0.5   17.6       0    3.2G    2.8G   0|0   0|1  620k    18k   33 2015-12-15T23:32:44+01:00
 15370    *0     *0     *0       0     1|0     0.6   17.7       0    3.2G    2.8G   0|0   0|1  992k    18k   33 2015-12-15T23:32:45+01:00
 13836    *0     *0     *0       0     6|0     0.6   17.7       0    3.2G    2.8G   0|0   0|0  869k    19k   33 2015-12-15T23:32:46+01:00
 13900    *0     *0     *0       0     1|0     0.6   17.7       0    3.2G    2.8G   0|0   0|1  806k    18k   33 2015-12-15T23:32:47+01:00
 15121    *0     *0     *0       0     1|0     0.6   17.7       0    3.2G    2.9G   0|0   0|1  992k    18k   33 2015-12-15T23:32:48+01:00
 11040    *0     *0     *0       0     1|0     0.6   17.7       0    3.2G    2.9G   0|0   0|1  682k    18k   33 2015-12-15T23:32:49+01:00

The data from the script itself:

Job#3 inserted 100000 documents in 7.353s; Total time: 7.353s
Job#3 inserted 200000 documents in 7.261s; Total time: 14.614s
Job#3 inserted 300000 documents in 7.495s; Total time: 22.109s
Job#3 inserted 400000 documents in 8.094s; Total time: 30.203s
Job#3 inserted 500000 documents in 7.39s; Total time: 37.593s
Job#3 inserted 600000 documents in 8.088s; Total time: 45.681s
Job#3 inserted 700000 documents in 7.682s; Total time: 53.363s
Job#3 inserted 800000 documents in 7.728s; Total time: 61.091s

Not even close to 80,000 inserts/s.

iostat 10 gives:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.40         0.00         2.40          0         24
sdb              34.70         0.00      1801.20          0      18012

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.72    0.00    0.58    0.03    0.00   95.67

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              26.10         0.40      5464.40          4      54644
sdb              32.50         0.00      1889.60          0      18896

I have 32 GB RAM, an Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz, and a 1 TB SSD behind an IBM ServeRAID M5110e controller (SCSI).

What could be the reason for such low performance compared to the blog's results? And where can I find more MongoDB benchmarks to use as a reference point?


Back when that article was written, MongoDB's default install made unsafe (unacknowledged) writes. That performed great in benchmarks, but not so great at actually saving your data.

https://aphyr.com/posts/284-call-me-maybe-mongodb

Up until recently, clients for MongoDB didn't bother to check whether or not their writes succeeded, by default: they just sent them and assumed everything went fine. This goes about as well as you'd expect.

https://blog.rainforestqa.com/2012-11-05-mongodb-gotchas-and-how-to-avoid-them/

MongoDB allows very fast writes and updates by default. The tradeoff is that you are not explicitly notified of failures. By default most drivers do asynchronous, ‘unsafe’ writes - this means that the driver does not return an error directly, similar to INSERT DELAYED with MySQL. If you want to know if something succeeded, you have to manually check for errors using getLastError.

For cases where you want an error thrown if something goes wrong, it’s simple in most drivers to enable “safe” queries which are synchronous. This makes MongoDB act in a familiar way to those migrating from a more traditional database.

Chances are you're seeing the results of safer default settings.
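
As a rough illustration in the mongo shell (the collection name foo is just a placeholder, and the exact output varies by shell and server version), this is the difference between acknowledged and unacknowledged writes:

// Acknowledged insert (the MongoDB 3.2 default, w:1): the shell waits for the
// server's response, so errors such as a duplicate key are reported immediately.
db.foo.insert({_id: 1, value: 1}, {writeConcern: {w: 1}});
db.foo.insert({_id: 1, value: 2}, {writeConcern: {w: 1}});  // duplicate key error is returned

// Unacknowledged insert (the old "unsafe" default the 2013 numbers relied on, w:0):
// the write is fired and forgotten, so the duplicate key failure is silent.
db.foo.insert({_id: 2, value: 1}, {writeConcern: {w: 0}});
db.foo.insert({_id: 2, value: 2}, {writeConcern: {w: 0}});

// getLastError is the legacy command older drivers had to call to find out
// whether the previous "unsafe" write actually succeeded.
db.runCommand({getLastError: 1});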