Object pools in high-performance JavaScript?

I'm writing some JavaScript code which needs to run fast and uses a lot of short-lived objects. Am I better off using an object pool, or just creating objects as I need them?

I wrote a jsPerf test which suggests that there is no benefit to using an object pool; however, I'm not sure whether jsPerf benchmarks run long enough for the browser's garbage collector to kick in.

The code is part of a game, so I don't care about legacy browser support. My graphics engine won't work on old browsers anyway.


Let me start by saying: I would advise against pools, unless you are developing visualizations, games or other computationally expensive code that actually does a lot of work. Your average web app is I/O bound, and your CPU and RAM will be idle most of the time. In that case, you gain much more by optimizing I/O speed rather than execution speed. In most scenarios, you want to start with performance-monitoring tools like Lighthouse to figure out your bottlenecks, get your caching figured out, make sure your files load fast, be smart about splitting client- vs. server-side rendering tasks, etc.

However, if you are toying around with games, scientific computation or other CPU-bound JavaScript code, this post might be interesting for you.

Short Version:

In performance-critical code:

  1. Start by using general-purpose optimizations [1] [2] [3] [4] (and many more). Don't jump into pools right away (you know what I mean!).
  2. Be careful with syntactic sugar and external libraries, as even Promises and many built-ins (such as Array.concat etc.) do a lot of evil stuff under the hood, including allocations.
  3. Avoid immutables (such as String), since those will create new objects during state-changing operations you perform on them.
  4. Know your allocations. Use encapsulation for object creation, so you can easily find all allocations, and quickly change your allocation strategy, during profiling.
  5. If you worry about performance, always profile and compare different approaches. Ideally, you should not randomly believe someone on the intarwebz (including me). Remember that our definitions of words such as "fast", "long-lived" etc. might differ vastly.
  6. If you decide to use pooling:
  • You might have to use different pools for long-lived and short-lived objects to avoid fragmentation of the short-lived pool.
  • You want to compare different algorithms and different pooling granularity (pool entire objects or only pool some object properties?) for different scenarios.
  • Pooling increases code complexity and thus makes the optimizer's job more difficult, potentially reducing performance.

Long Version:

First, consider that the system heap is essentially the same as a large object pool. That means that whenever you create a new object (using new, [], {}, (), nested functions, string concatenation, etc.), the system will use a (very sophisticated, fast and low-level performance-tuned) algorithm to give you some unused space (i.e. an object), make sure its bytes are zeroed out, and return it. That is very similar to what an object pool has to do. However, JavaScript's run-time heap manager uses the GC to retrieve "borrowed" objects, whereas a pool gets its objects back at almost zero cost, but requires the developer to take care of tracking all such objects herself.

Modern JavaScript run-time environments, such as V8, have a run-time profiler and run-time optimizer that ideally can (but do not necessarily (yet)) optimize aggressively when they identify performance-critical code sections. They can also use that information to determine a good time for garbage collection. If the run-time realizes you are running a game loop, it might just run the GC after every few loops (maybe even reduce older-generation collection to a minimum, etc.), thereby not actually letting you feel the work it is doing (however, it will still drain your battery faster if it is an expensive operation). Sometimes, the optimizer can even move the allocation to the stack, and that sort of allocation is basically free and much more cache-friendly. That being said, these kinds of optimization techniques are not perfect (and they actually cannot be, since perfect code optimization is NP-hard, but that's another topic).

Let us take games for example: This talk on fast vector math in JS explains how repeated vector allocation (and you need A LOT of vector math in most games) slowed down something that should be very fast: Vector math with Float32Array. In this case, you can benefit from a pool, if you use the right kind of pool in the right way.

These are my lessons learned from writing games in Javascript:

  • Encapsulate creation of all often-used objects in functions. Let it return a new object first, then compare it with a pool version:

Instead of

var x = new X(...);

use:

var x = X.create(...);

or even:

// this keeps all your allocation in the control of `Allocator`:
var x = Allocator.createX(...);      // or:
var y = Allocator.create('Y', ...);

This way, you can implement X.create or Allocator.createX with return new X(); first, and then replace it with a pool later on, to easily compare the speed. Better yet, it allows you to quickly find all allocations in your code, so you can review them one by one when the time comes. Don't worry about the extra function invocation, as it will be inlined by any decent optimizer, and possibly even by the run-time optimizer.
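As a rough sketch of this idea (the names `Allocator`, `Vector` and the `usePool` switch are illustrative, not from any library), the encapsulated factory can start as a thin wrapper around `new` and later be swapped for a pooled version without touching any call sites:

```javascript
// Hypothetical sketch: a factory that starts out as plain `new`
// and can be switched to a pooled implementation while profiling.
function Vector(x, y) { this.x = x; this.y = y; }

const Allocator = {
  usePool: false,        // flip this during profiling to compare strategies
  _freeVectors: [],      // simple free list backing the pooled variant

  createVector(x, y) {
    if (this.usePool && this._freeVectors.length > 0) {
      const v = this._freeVectors.pop();
      v.x = x; v.y = y;  // re-initialize the recycled object
      return v;
    }
    return new Vector(x, y);
  },

  releaseVector(v) {
    if (this.usePool) this._freeVectors.push(v);
  }
};

// Call sites never change, regardless of the strategy:
const v = Allocator.createVector(1, 2);
Allocator.releaseVector(v);
```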

  • Try to keep object creation to a minimum in general. If you can re-use existing objects, just do that. Take 2D vector math as an example: Don't make vectors (or other often-used objects) immutable. Even though immutability produces prettier and more bug-resilient code, it tends to be extremely expensive (because suddenly every vector operation requires either creating a new vector or getting one from the pool, instead of just adding or multiplying a few numbers). The reason you can make vectors immutable in other languages is that those allocations can often be done on the stack, reducing allocation cost to practically zero. In JavaScript, however -

Instead of:

function add(a, b) { return new Vector(a.x + b.x, a.y + b.y); }
// ...
var z = add(x, y);

try:

function add(out, a, b) { out.set(a.x + b.x, a.y + b.y); return out; }
// ...
var z = add(x, x, y);   // you can do that here, if you don't need x anymore (Note: z = x)
  • Don't create temp variables. Those make parallel optimizations practically impossible.

Avoid:

var tmp = new X(...);
for (var x ...) {
    tmp.set(x);
    use(tmp);       // use() will modify tmp instead of x now, and x remains unchanged.
}
  • Just like temp variables in front of your loops, simple pooling will hamper parallelization optimizations of simple loops: The optimizer will have a hard time proving that your pool operations don't require a specific order, and at the very least it will need additional synchronization that might not be necessary for new (because the run-time has full control over how to allocate things). In case of tight computational loops, you might want to consider doing multiple computations per iteration, rather than just one (that is also known as a partially unrolled loop).
  • Unless you really like to tinker, don't write your own pool. There are already plenty of them out there. This article, for example, lists a whole bunch.
  • Only try pooling, if you find that memory churn ruins your day. In that case, make sure to properly profile your application, figure out the bottlenecks, and react. As always: Don't optimize blindly.
  • Depending on the type of pool querying algorithm, you might want to use different pools for long-lived and short-lived objects to avoid fragmentation of the short-lived pool. Querying short-lived objects is much more performance-critical than querying long-lived objects (because the former can happen hundreds, thousands or even millions of times per second).
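The partially unrolled loop mentioned above can be sketched like this (the `sumSquares` function and its data are purely illustrative): the loop body does two independent computations per iteration, using separate accumulators so the two streams of work don't depend on each other:

```javascript
// Illustrative sketch of a partially unrolled loop:
// process two elements per iteration instead of one.
function sumSquares(values) {
  let sum0 = 0, sum1 = 0;
  const n = values.length;
  const even = n - (n % 2);

  // Main unrolled body: two independent accumulators per iteration.
  for (let i = 0; i < even; i += 2) {
    sum0 += values[i] * values[i];
    sum1 += values[i + 1] * values[i + 1];
  }
  // Handle the leftover element when the length is odd.
  if (even < n) sum0 += values[even] * values[even];

  return sum0 + sum1;
}
```

Whether this actually helps depends on the engine and the workload, so as always: profile before and after.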

Pool Algorithms

Unless you write a very sophisticated pool-querying algorithm, you are generally stuck with two or three options. Each of these options is faster in some scenarios and slower in others. The ones I saw most often are:

  1. Linked list: Only keep empty objects in the list. Whenever an object is needed, remove it from the list at little cost. Put it back, when the object is no longer needed.
  2. Array: Keep all objects in the array. Whenever an object is needed, iterate over all pooled objects, return the first one that is free, and set its inUse flag to true. Unset it when the object is no longer needed.

Play around with those options. Unless your linked-list implementation is rather sophisticated, you will probably find that the array-based solution is faster for short-lived objects (which is where pool performance actually matters), provided there are no long-lived objects in the array that cause the search for a free object to become unnecessarily long. If you usually need to allocate more than one object at a time (e.g. for your partially unrolled loops), consider a bulk-allocation option that hands out (small) arrays of objects rather than just one, to reduce the lookup overhead for unallocated objects. If you are really hot for a fast pool (and/or just wanna try out something new), look at how system heaps are implemented; they are fast and allow for allocations of varying sizes.
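A minimal version of the array-based option (option 2) might look like the sketch below; the `inUse` flag and the linear scan match the description above, while the class name and fixed capacity are illustrative:

```javascript
// Minimal array-based pool with an `inUse` flag, as in option 2 above.
// The name `ArrayPool` and the fixed capacity are illustrative.
class ArrayPool {
  constructor(capacity, factory) {
    this.items = [];
    for (let i = 0; i < capacity; i++) {
      const obj = factory();
      obj.inUse = false;
      this.items.push(obj);
    }
  }

  acquire() {
    // Linear scan for the first free object; this is exactly where
    // long-lived objects lingering in the pool would hurt performance.
    for (const obj of this.items) {
      if (!obj.inUse) {
        obj.inUse = true;
        return obj;
      }
    }
    return null; // pool exhausted; a real pool might grow instead
  }

  release(obj) {
    obj.inUse = false;
  }
}

const pool = new ArrayPool(4, () => ({ x: 0, y: 0 }));
const p = pool.acquire();
p.x = 10;
pool.release(p);
```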

Final Words

Whatever you decide to use, keep profiling, researching and sharing successful approaches of making our beloved JS code run even faster!


Generally speaking (in my personal experience), pooling objects is not going to improve speed. Creating objects is typically very cheap. Rather, the purpose of object pooling is to cut down on jank (periodic lag) caused by garbage-collection pauses.

As a concrete example (not necessarily for JavaScript, but as a general illustration), think of games with advanced 3D graphics. If one game has an average frame rate of 60fps, that's faster than another game with an average frame rate of 40fps. But if the second game's fps is consistently 40, the graphics look smooth, whereas if the first's is often much higher than 60fps but occasionally dips down to 10fps every now and then, the graphics look choppy.

If you create a benchmark that runs both games for 10 minutes and samples the frame rate every so often, it will tell you that the first game has better performance. But it won't pick up on the choppiness. That's the problem object pools are meant to address.

This isn't a blanket statement that covers all cases, of course. One scenario where pooling can improve not only choppiness but also raw performance is when you are frequently allocating large arrays: by simply setting arr.length = 0 and reusing arr, you can improve performance by avoiding future re-sizings. Similarly, if you're frequently creating very large objects that all share a common schema (i.e., they have a well-defined set of properties, so you don't have to "clean" every object when returning it to the pool), you might see a performance improvement from pooling in that case as well.
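The array-reuse idea can be sketched like this (the `scratch` name and the `collectVisible` helper are illustrative): setting `length` to 0 empties the array while keeping the array object itself around, so the next call reuses it instead of allocating a fresh one each time:

```javascript
// Reuse one scratch array across calls instead of allocating a new
// array every time (the names here are illustrative).
const scratch = [];

function collectVisible(entities) {
  scratch.length = 0;            // clear contents without reallocating
  for (const e of entities) {
    if (e.visible) scratch.push(e);
  }
  return scratch;                // callers must not hold onto this array!
}
```

The obvious trade-off, noted in the comment, is that the result is only valid until the next call, so this pattern is best kept local to a tight loop.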

As I said, generally speaking though, that is not the primary aim of object pools.