Modern releases of pymongo (greater than 3.x) wrap bulk operations in a consistent interface that downgrades where the server release does not support bulk operations. This is now consistent across the officially supported MongoDB drivers.

So the preferred method for coding is to use bulk_write() instead, where you use an UpdateOne or other appropriate operation action. And of course it is now preferred to use native language lists (a plain Python list of operations) rather than a specific builder.

The direct translation of the old documentation:

from pymongo import UpdateOne

# "collection" is an existing pymongo Collection handle

operations = [
    UpdateOne({"field1": 1}, {"$push": {"vals": 1}}, upsert=True),
    UpdateOne({"field1": 1}, {"$push": {"vals": 2}}, upsert=True),
    UpdateOne({"field1": 1}, {"$push": {"vals": 3}}, upsert=True)
]

result = collection.bulk_write(operations)

Or the classic document transformation loop:

import random
from pymongo import UpdateOne

random.seed()

operations = []

for doc in collection.find():
    # Set a random number on every document update
    operations.append(
        UpdateOne({"_id": doc["_id"]}, {"$set": {"random": random.randint(0, 10)}})
    )

    # Send once every 1000 in batch
    if len(operations) == 1000:
        collection.bulk_write(operations, ordered=False)
        operations = []

if len(operations) > 0:
    collection.bulk_write(operations, ordered=False)
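
Since ordered=False lets the server carry on past individual failures, any errors are collected and raised in a single BulkWriteError once the batch completes. A minimal sketch of handling that (the details layout follows the pymongo docs):

from pymongo.errors import BulkWriteError

try:
    collection.bulk_write(operations, ordered=False)
except BulkWriteError as exc:
    # exc.details holds the server's report, including the
    # individual write errors and the counters that did succeed
    for err in exc.details.get("writeErrors", []):
        print(err["index"], err["errmsg"])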

The returned result is a BulkWriteResult, which contains counters for matched and modified documents as well as the returned _id values for any "upserts" that occur.
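
For illustration, those counters can be read straight off the result object (attribute names as in pymongo's BulkWriteResult):

result = collection.bulk_write(operations)

print(result.matched_count)   # documents matched by the update filters
print(result.modified_count)  # documents actually modified
print(result.upserted_ids)    # dict mapping operation index -> upserted _id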

There is a bit of a misconception about the size of the bulk operations array. The actual request as sent to the server cannot exceed the 16MB BSON limit, since that limit also applies to the "request" sent to the server, which uses BSON format as well.

However, that does not govern the size of the operations array that you can build, as the operations will only be sent and processed in batches of 1000 anyway. The only real restriction is that those 1000 operation instructions themselves do not create a BSON document greater than 16MB, which is indeed a pretty tall order.

The general concept of bulk methods is "less traffic": many operations are sent at once and there is only one server response to deal with. Removing the overhead attached to every single update request saves a lot of time.


MongoDB 2.6+ has support for bulk operations, including bulk inserts, upserts, updates, etc. The point of this is to reduce/eliminate the delays from the round-trip latency of doing record-by-record operations ("document by document", to be correct).

So, how does this work? Example in Python, because that's what I'm working in.

>>> import pymongo
>>> pymongo.version
'2.7rc0'

To use this feature, we create a "bulk" object, add operations to it, then call execute() on it, and it will send all the updates at once. Caveat: the BSON size of the collected operations (the sum of their BSON sizes) cannot exceed the 16 MB document size limit. The number of operations that fit can thus vary significantly; Your Mileage May Vary.

Example in Pymongo of Bulk upsert operation:

import pymongo

conn = pymongo.MongoClient('myserver', 8839)
db = conn['mydbname']
coll = db.myCollection

# Queue three upserting updates against the same document,
# then send them all to the server in a single execute()
bulkop = coll.initialize_ordered_bulk_op()
retval = bulkop.find({'field1': 1}).upsert().update({'$push': {'vals': 1}})
retval = bulkop.find({'field1': 1}).upsert().update({'$push': {'vals': 2}})
retval = bulkop.find({'field1': 1}).upsert().update({'$push': {'vals': 3}})
retval = bulkop.execute()

This is the essential method. More info available at:

http://api.mongodb.org/python/2.7rc1/examples/bulk.html

Edit: since version 3.5 of the Python driver, initialize_ordered_bulk_op() is deprecated. Use bulk_write() instead. [ http://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.bulk_write ]
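
For reference, a sketch of the same upsert example rewritten for bulk_write(), with the server address and names carried over from the snippet above:

from pymongo import MongoClient, UpdateOne

conn = MongoClient('myserver', 8839)
coll = conn['mydbname'].myCollection

result = coll.bulk_write([
    UpdateOne({'field1': 1}, {'$push': {'vals': 1}}, upsert=True),
    UpdateOne({'field1': 1}, {'$push': {'vals': 2}}, upsert=True),
    UpdateOne({'field1': 1}, {'$push': {'vals': 3}}, upsert=True),
])  # ordered=True is the default, matching the old ordered bulk op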