SQLAlchemy: Scan huge tables using ORM?

I am currently playing around with SQLAlchemy a bit, which is really quite neat.

For testing I created a huge table containing my picture archive, indexed by SHA1 hashes (to remove duplicates :-)), which was impressively fast...

For fun I did the equivalent of a select * over the resulting SQLite database:

session = Session()
for p in session.query(Picture):
    print(p)

I expected to see hashes scrolling by, but instead it just kept scanning the disk. At the same time, memory usage skyrocketed, reaching 1 GB after a few seconds. This seems to come from the identity map feature of SQLAlchemy, which I thought kept only weak references.

Can somebody explain this to me? I thought that each Picture p would be collected after the hash is written out!?


Okay, I just found a way to do this myself. Changing the code to

session = Session()
for p in session.query(Picture).yield_per(5):
    print(p)

loads only 5 pictures at a time. It seems that the query loads all rows at once by default. However, I don't yet understand the disclaimer on that method. A quote from the SQLAlchemy docs:

WARNING: use this method with caution; if the same instance is present in more than one batch of rows, end-user changes to attributes will be overwritten. In particular, it’s usually impossible to use this setting with eagerly loaded collections (i.e. any lazy=False) since those collections will be cleared for a new load when encountered in a subsequent result batch.

So if using yield_per is actually the right way (tm) to scan over copious amounts of SQL data while using the ORM, when is it safe to use it?
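As an aside, querying only the columns you need instead of full Picture objects sidesteps the identity map entirely, since the rows come back as plain tuples. A rough sketch, assuming the hash column is mapped as Picture.hash:

session = Session()
# Column-only queries return plain tuples, so no Picture instances
# ever enter the session's identity map.
for (sha1,) in session.query(Picture.hash).yield_per(1000):
    print(sha1)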


Here's what I usually do for this situation:

def page_query(q):
    """Iterate over a query in chunks of 1000 rows using LIMIT/OFFSET."""
    offset = 0
    while True:
        r = False
        for elem in q.limit(1000).offset(offset):
            r = True
            yield elem
        offset += 1000
        if not r:
            break

for item in page_query(Session.query(Picture)):
    print(item)

This also avoids the buffering that various DBAPIs (such as psycopg2 and MySQLdb) do when fetching a full result set. It still needs to be used with care if your query has explicit JOINs, although eagerly loaded collections are guaranteed to load fully since they are applied to a subquery which carries the actual LIMIT/OFFSET.
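For example (assuming, hypothetically, that Picture has a tags collection configured for eager loading), the helper can wrap such a query directly:

from sqlalchemy.orm import joinedload

# Hypothetical: assumes Picture has a 'tags' relationship.
q = Session.query(Picture).options(joinedload(Picture.tags))
for item in page_query(q):
    print(item)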

I have noticed that PostgreSQL takes almost as long to return the last 100 rows of a large result set as it does to return the entire result (minus the actual row-fetching overhead), since OFFSET just does a simple scan of the whole thing.
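If the OFFSET scan itself becomes the bottleneck, a common alternative is keyset pagination, i.e. paging on the primary key instead of an offset. A sketch, assuming Picture has an integer primary key mapped as Picture.id:

def keyset_query(q, pk, batch=1000):
    # Page with "WHERE pk > last seen value" instead of OFFSET, so the
    # database can seek into the primary-key index for each batch.
    last = None
    while True:
        page = q if last is None else q.filter(pk > last)
        rows = page.order_by(pk).limit(batch).all()
        if not rows:
            break
        for row in rows:
            yield row
        last = getattr(rows[-1], pk.key)

for item in keyset_query(Session.query(Picture), Picture.id):
    print(item)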


You can defer the picture column so that it is only retrieved on access. You can do this on a query-by-query basis, like so:

import sqlalchemy.orm

session = Session()
for p in session.query(Picture).options(sqlalchemy.orm.defer("picture")):
    print(p)

or you can do it in the mapper:

from sqlalchemy.orm import mapper, deferred

mapper(Picture, pictures, properties={
    'picture': deferred(pictures.c.picture)
})

Both approaches are covered in the SQLAlchemy documentation on deferred column loading.

Doing it either way will make sure that the picture is only loaded when you access the attribute.
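A quick way to see the effect: the deferred column is fetched lazily with its own SELECT the first time you touch it.

p = session.query(Picture).first()
# Only the non-deferred columns are loaded at this point; accessing
# the deferred attribute issues a separate SELECT for the blob.
data = p.picture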