If I strictly follow RAII rules, which seems to be a good thing, why would that be any different from having a garbage collector in C++?

While both deal with allocations, they do so in completely different ways. If you are referring to a GC like the one in Java, it adds its own overhead, removes some of the determinism from the resource-release process, and handles circular references.

You can, though, implement a GC for particular cases, with very different performance characteristics. I implemented one once for closing socket connections in a high-performance/high-throughput server (just calling the socket close API took too long and borked throughput). It involved no memory, only network connections, and no cyclic-dependency handling.
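A minimal sketch of that idea (the class name and the queue-and-flush design are my own assumptions, not the original code): closes are queued on the hot path and flushed in batches elsewhere.

```cpp
// Sketch only: a deferred-close queue for sockets, so the hot path never
// blocks on close(). All names here are hypothetical.
#include <mutex>
#include <vector>
#include <unistd.h>   // ::close on POSIX

class DeferredSocketCloser {
public:
    void scheduleClose(int fd) {             // called from the hot path
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(fd);
    }

    void flush() {                           // called from a background thread
        std::vector<int> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            batch.swap(pending_);
        }
        for (int fd : batch)
            ::close(fd);                     // the slow call happens off the hot path
    }

private:
    std::mutex mutex_;
    std::vector<int> pending_;
};
```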

I know that with RAII the programmer is in full control of when the resources are freed again, but is that in any case beneficial to just having a garbage collector?

This determinism is a feature that GC simply doesn't allow. Sometimes you want to be able to know that after some point, a cleanup operation has been performed (deleting a temporary file, closing a network connection, etc).

In such cases GC doesn't cut it, which is why C# (for example) has the IDisposable interface.
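In C++ that guarantee falls out of a destructor. A hedged sketch (the class name is made up for illustration):

```cpp
// Sketch: deterministic cleanup via RAII. The temporary file is gone the
// moment the object goes out of scope, not whenever a collector runs.
#include <cstdio>
#include <string>

class TempFile {
public:
    explicit TempFile(std::string path)
        : path_(std::move(path)), file_(std::fopen(path_.c_str(), "w")) {}

    ~TempFile() {
        if (file_) std::fclose(file_);
        std::remove(path_.c_str());          // cleanup happens here, deterministically
    }

    TempFile(const TempFile&) = delete;
    TempFile& operator=(const TempFile&) = delete;

    std::FILE* handle() const { return file_; }

private:
    std::string path_;
    std::FILE* file_;
};

void work() {
    TempFile tmp("/tmp/scratch.dat");
    // ... use tmp.handle() ...
}   // <- file closed and deleted exactly here
```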

I even heard that having a garbage collector can be more efficient, as it can free larger chunks of memory at a time instead of freeing small memory pieces all over the code.

Can be ... depends on the implementation.


Garbage collection solves certain classes of resource problems that RAII cannot solve. Basically, it boils down to circular dependencies where you do not identify the cycle beforehand.

This gives it two advantages. First, there are going to be certain types of problem that RAII cannot solve. These are, in my experience, rare.

The bigger one is that it lets the programmer be lazy and not care about the lifetimes of memory and of certain other resources whose delayed cleanup you don't mind. When you don't have to care about certain kinds of problems, you can care more about other problems. This lets you focus on the parts of your problem you want to focus on.

The downside is that without RAII, managing resources whose lifetime you want constrained is hard. GC languages basically reduce you to either extremely simple scope-bound lifetimes or manual resource management, as in C, where you explicitly state that you are done with a resource. Their object-lifetime system is strongly tied to the GC, and doesn't work well for tight lifetime management of large, complex (yet cycle-free) systems.

To be fair, resource management in C++ takes a lot of work to do properly in such large complex (yet cycle-free) systems. C# and similar languages just make it a touch harder, in exchange they make the easy case easy.
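For illustration, a hedged sketch of what "tight lifetime management" buys you in C++: the resource's lifetime follows its owner, wherever that owner lives, rather than a single lexical block the way a using/with statement does.

```cpp
// Sketch: the log file lives exactly as long as the Session that owns it,
// whether the Session is on the stack, in a container, or held elsewhere.
#include <fstream>
#include <string>
#include <vector>

class Session {
public:
    explicit Session(const std::string& name) : log_(name + ".log") {}
    void record(const std::string& line) { log_ << line << '\n'; }
private:
    std::ofstream log_;   // closed automatically in ~Session(), no 'using' block needed
};

int main() {
    std::vector<Session> sessions;
    sessions.emplace_back("alice");
    sessions.emplace_back("bob");
    sessions[0].record("hello");
}   // all log files closed here, when the vector destroys its Sessions
```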

Most GC implementations also force non-locality on full-fledged classes; creating contiguous buffers of general objects, or composing general objects into one larger object, is not something most of them make easy. (C#, on the other hand, does let you create value-type structs with somewhat limited capabilities.) In the current era of CPU architecture, cache friendliness is key, and the lack of locality GC forces on you is a heavy burden. Since these languages mostly run on a bytecode runtime, in theory the JIT environment could move commonly used data together, but more often than not you just get a uniform performance loss from frequent cache misses compared to C++.
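To make the locality point concrete, a small sketch: an array of value types is one contiguous block, while an array of references to full-fledged objects is a pointer chase per element.

```cpp
// Sketch: contiguous values vs. pointer-chasing. The first loop walks one
// cache-friendly block; the second dereferences a separate allocation per
// element, which is roughly what a reference-only object model forces on you.
#include <memory>
#include <vector>

struct Point { double x, y; };

double sumContiguous(const std::vector<Point>& pts) {
    double s = 0;
    for (const Point& p : pts) s += p.x + p.y;      // sequential memory access
    return s;
}

double sumIndirect(const std::vector<std::unique_ptr<Point>>& pts) {
    double s = 0;
    for (const auto& p : pts) s += p->x + p->y;     // one pointer dereference per element
    return s;
}
```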

The last problem with GC is that deallocation is indeterminate, and can sometimes cause performance problems. Modern GCs make this less of a problem than it has been in the past.


RAII and GC solve problems in completely different directions. They are completely different, despite what some would say.

Both address the issue that managing resources is hard. Garbage Collection solves it by making it so that the developer doesn't need to pay as much attention to managing those resources. RAII solves it by making it easier for developers to pay attention to their resource management. Anyone who says they do the same thing has something to sell you.

If you look at recent trends in languages, you're seeing both approaches being used in the same language because, frankly, you really need both sides of the puzzle. You're seeing lots of languages which use garbage collection of sorts so that you don't have to pay attention to most objects, and those languages also offer RAII solutions (such as Python's with statement) for the times you really want to pay attention to them.

  • C++ offers RAII through constructors/destructors and GC through shared_ptr (if I may argue that refcounting and GC are in the same class of solutions, because both are designed to free you from paying attention to lifespan)
  • Python offers RAII through with and GC through a refcounting system plus a garbage collector
  • C# offers RAII through IDisposable and using and GC through a generational garbage collector

The patterns are cropping up in every language.
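In C++ terms, a hedged sketch of both halves side by side: a scope-bound RAII guard for the resource whose release point matters, and reference counting for the object whose exact end of life you don't care about.

```cpp
// Sketch: RAII for the resource whose release point matters (the lock),
// shared_ptr for the data whose exact destruction point does not.
#include <memory>
#include <mutex>
#include <vector>

struct Config { std::vector<int> values; };

std::mutex configMutex;
std::shared_ptr<Config> currentConfig = std::make_shared<Config>();

std::shared_ptr<Config> snapshot() {
    std::lock_guard<std::mutex> guard(configMutex);   // RAII: released at the closing brace
    return currentConfig;                             // refcount keeps the Config alive
}                                                     // as long as any reader holds it
```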


Notice that RAII is a programming idiom, while GC is a memory management technique. So we are comparing apples with oranges.

But we can restrict RAII to its memory management aspects only and compare that to GC techniques.

The main difference between so-called RAII-based memory management techniques (which really means reference counting, at least when you consider memory resources and ignore the other ones such as files) and genuine garbage collection techniques is the handling of circular references (cyclic graphs).

With reference counting, you need to handle them specially in your code (using weak references, for example).
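A minimal sketch of that special-casing with std::weak_ptr (the parent/child names are just for illustration):

```cpp
// Sketch: breaking an ownership cycle by hand. With shared_ptr in both
// directions the Parent/Child pair would keep each other alive forever;
// the weak_ptr back-reference is the "special coding" reference counting requires.
#include <memory>
#include <vector>

struct Child;

struct Parent {
    std::vector<std::shared_ptr<Child>> children;    // owning edges
};

struct Child {
    std::weak_ptr<Parent> parent;                    // non-owning back edge
};

int main() {
    auto parent = std::make_shared<Parent>();
    auto child  = std::make_shared<Child>();
    parent->children.push_back(child);
    child->parent = parent;
}   // both objects are destroyed here; with two shared_ptrs they would leak
```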

In many useful cases (think of std::vector<std::map<std::string,int>>) the reference count is implicit (since it can only be 0 or 1) and is practically omitted, but the constructor and destructor functions (essential to RAII) behave as if there were a reference-count bit (which is practically absent). With std::shared_ptr there is a genuine reference counter. But memory is still implicitly manually managed (with new and delete triggered inside constructors and destructors); that "implicit" delete (in destructors) gives the illusion of automatic memory management, yet the calls to new and delete still happen (and they cost time).
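The "implicit" new and delete described above look roughly like this (a simplified sketch of what containers such as std::vector do for you):

```cpp
// Sketch: RAII hides the new/delete inside constructor and destructor,
// but both calls still happen and still cost time.
#include <cstddef>

class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(new int[n]), size_(n) {}  // 'new' in the constructor
    ~Buffer() { delete[] data_; }                                    // 'delete' in the destructor
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;

    int* data() { return data_; }
    std::size_t size() const { return size_; }

private:
    int* data_;
    std::size_t size_;
};
```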

BTW the GC implementation may (and often does) handle circularity in some special way, but you leave that burden to the GC (e.g. read about Cheney's algorithm).

Some GC algorithms (notably generational copying garbage collectors) don't bother releasing memory for individual objects; it is released en masse after the copy. In practice the OCaml GC (or the SBCL one) can be faster than a genuine C++ RAII programming style (for some, not all, kinds of algorithms).
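The closest C++ analogue of "released en masse" is an arena: nothing is paid per object at release time, the whole block goes at once. A hedged sketch (alignment and growth ignored for brevity):

```cpp
// Sketch: an arena hands out memory by bumping a pointer and releases it all
// in one shot, much like a copying collector abandons the old semispace,
// instead of running one delete per object. Alignment is ignored for brevity.
#include <cstddef>
#include <vector>

class Arena {
public:
    explicit Arena(std::size_t capacity) : storage_(capacity), used_(0) {}

    void* allocate(std::size_t n) {
        if (used_ + n > storage_.size()) return nullptr;  // no growth in this sketch
        void* p = storage_.data() + used_;
        used_ += n;                                       // allocation is a pointer bump
        return p;
    }

    void releaseAll() { used_ = 0; }                      // "free" everything at once

private:
    std::vector<char> storage_;
    std::size_t used_;
};
```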

Some GCs provide finalization (mostly used to manage non-memory external resources like files), but you'll rarely use it (since most values consume only memory resources). The disadvantage is that finalization does not offer any timing guarantee. Practically speaking, a program should treat finalization as a last resort (e.g. closing files should still happen more or less explicitly outside of finalization, with finalizers only as a backstop).

You still can have memory leaks with GC (and also with RAII, at least when used improperly), e.g. when a value is kept in some variable or some field but will never be used in the future. They just happen less often.

I recommend reading the Garbage Collection Handbook.

In your C++ code, you might use Boehm's GC or Ravenbrook's MPS, or code your own tracing garbage collector. Of course using a GC is a tradeoff (there are some inconveniences, e.g. non-determinism, lack of timing guarantees, etc.).
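For reference, a minimal sketch of plugging Boehm's collector into a C++ program (the exact header path and build flags depend on your installation; typically you link with -lgc):

```cpp
// Sketch: using the Boehm-Demers-Weiser collector from C++. Memory obtained
// from GC_MALLOC is traced and reclaimed automatically; there is no matching
// free/delete call.
#include <gc.h>     // may be <gc/gc.h> on some systems

struct Node {
    Node* next;
    int value;
};

int main() {
    GC_INIT();
    Node* head = nullptr;
    for (int i = 0; i < 1000; ++i) {
        Node* n = static_cast<Node*>(GC_MALLOC(sizeof(Node)));
        n->next = head;
        n->value = i;
        head = n;
    }
    head = nullptr;   // the whole list becomes unreachable; the collector reclaims it
}
```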

I don't think that RAII is the ultimate way of dealing with memory in all cases. On several occasions, coding your program in a language with a genuine and efficient GC implementation (think of OCaml or SBCL) can be simpler (to develop) and faster (to execute) than coding it in fancy RAII style in C++17. In other cases it is not. YMMV.

As an example, if you code a Scheme interpreter in C++17 with the fanciest RAII style, you would still need to code (or use) an explicit GC inside it (because a Scheme heap has circularities). And most proof assistants are coded in GC-ed languages, often functional ones (the only one I know of that is coded in C++ is Lean), for good reasons.

BTW, I'm interested in finding such a C++17 implementation of Scheme (but less interested in coding it myself), preferably with some multi-threading ability.


One of the problems with garbage collectors is that it's hard to predict program performance.

With RAII you know that at the exact moment a resource goes out of scope some memory will be cleared, and that it will take some time. But unless you are a master of your garbage collector's settings, you cannot predict when cleanup will happen.

For example: cleaning up a bunch of small objects can be done more effectively with GC, because it can free one large chunk, but it will not be a fast operation, it's hard to predict when it will occur, and because of that "large chunk cleanup" it will take some processor time and can affect your program's performance.