Is using an outdated C compiler a security risk?

We have some build systems in production that no one cares about, and these machines run ancient versions of GCC, like GCC 3 or even GCC 2.

And I can't persuade management to upgrade to a more recent compiler: they say, "if it ain't broke, don't fix it".

Since we maintain a very old code base (written in the 80s), this C89 code compiles just fine on these compilers.

But I'm not sure it is a good idea to keep using this old stuff.

My question is:

Can using an old C compiler compromise the security of the compiled program?

UPDATE:

The same code is built with Visual Studio 2008 for Windows targets, and MSVC doesn't support C99 or C11 yet (I don't know whether newer versions of MSVC do). I can also build it on my Linux box using the latest GCC, so if we just dropped in a newer GCC it would probably build just as fine as before.


Solution 1:

Actually, I would argue the opposite.

There are a number of cases where behaviour is undefined by the C standard but where it is obvious what would happen with a "dumb compiler" on a given platform: cases like allowing a signed integer to overflow, or accessing the same memory through variables of two different types.
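For concreteness, here is a minimal sketch of both cases (the function names are mine, purely illustrative):

```c
/* Signed overflow: on two's-complement hardware a "dumb compiler"
   wraps INT_MAX + 1 to INT_MIN, so this "obviously" returns 0 when
   x == INT_MAX. A modern optimiser may instead assume overflow
   never happens and fold the whole comparison to 1. */
int no_overflow_yet(int x) {
    return x + 1 > x;
}

/* Strict aliasing: *i and *f may point at the same memory, but the
   compiler is allowed to assume pointers to different types don't
   alias, so it may return the stale value 1 even after the write
   through f clobbered it. */
int type_pun(int *i, float *f) {
    *i = 1;
    *f = 2.0f;
    return *i;
}
```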

Recent versions of gcc (and clang) have started treating these cases as optimisation opportunities, not caring whether they change how the binary behaves in the "undefined behaviour" condition. This is very bad if your codebase was written by people who treated C like a "portable assembler". As time has gone on, the optimisers have started looking at larger and larger chunks of code when doing these optimisations, increasing the chance that the binary will end up doing something other than what a binary built by a "dumb compiler" would do.

There are compiler switches to restore the "traditional" behaviour (-fwrapv and -fno-strict-aliasing for the two cases I mentioned above), but first you have to know about them.
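As a quick illustration of what -fwrapv changes, here is a small self-contained demo (the file name is a placeholder; exact behaviour depends on compiler version and optimisation level):

```c
/* overflow_check.c -- a hypothetical test file.
 *
 *   gcc -O2 overflow_check.c && ./a.out           # may print 1
 *   gcc -O2 -fwrapv overflow_check.c && ./a.out   # prints 0
 *
 * With -fwrapv, signed overflow is defined to wrap, so
 * INT_MAX + 1 == INT_MIN and the comparison is genuinely false.
 * Without it, the optimiser is allowed to fold "x + 1 > x" to 1.
 */
#include <limits.h>
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argv;
    int x = INT_MAX - 1 + argc;  /* INT_MAX when run with no arguments */
    printf("%d\n", x + 1 > x);   /* undefined behaviour without -fwrapv */
    return 0;
}
```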

While in principle a compiler bug could turn compliant code into a security hole, I would consider the risk of this to be negligible in the grand scheme of things.

Solution 2:

There are risks in both courses of action.


Older compilers have the advantage of maturity, and whatever was broken in them has probably (but there's no guarantee) been worked around successfully.

In this case, a new compiler is a potential source of new bugs.


On the other hand, newer compilers come with additional tooling:

  • GCC and Clang both now feature sanitizers, which can instrument the runtime to detect undefined behaviors of various sorts (Chandler Carruth, of the Google compiler team, claimed last year that he expects them to have reached full coverage); a sketch of the sanitizer workflow follows this list
  • Clang, at least, features hardening: for example, Control Flow Integrity is about detecting hijacks of control flow, and there are also hardening mechanisms to protect against stack-smashing attacks (by separating the control-flow part of the stack from the data part); hardening features are generally low overhead (< 1% CPU overhead)
  • Clang/LLVM is also working on libFuzzer, a tool to create instrumented fuzzing unit tests that explore the input space of the function under test smartly (by tweaking the input to take not-yet-explored execution paths)
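As promised above, here is a minimal sketch of the sanitizers catching a classic memory error at runtime (the file name is a placeholder; -fsanitize=address and -fsanitize=undefined are the real Clang/GCC flags):

```c
/* sanitizer_demo.c -- build and run with:
 *   clang -g -fsanitize=address,undefined sanitizer_demo.c && ./a.out 5
 * Address Sanitizer reports the heap-buffer-overflow with a full
 * stack trace instead of letting the write silently corrupt memory.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int n = (argc > 1) ? atoi(argv[1]) : 0;
    int *buf = malloc(4 * sizeof *buf);
    if (!buf) return 1;
    buf[n] = 1;               /* out-of-bounds write when n >= 4 */
    printf("%d\n", buf[0]);
    free(buf);
    return 0;
}
```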

Instrumenting your binary with the sanitizers (Address Sanitizer, Memory Sanitizer, or Undefined Behavior Sanitizer) and then fuzzing it (using American Fuzzy Lop, for example) has uncovered vulnerabilities in a number of high-profile pieces of software; see for example this LWN.net article.
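As a sketch of what such a harness looks like with libFuzzer (parse_record() is a hypothetical stand-in for whatever part of your codebase consumes untrusted input):

```c
/* fuzz_harness.c -- build with:
 *   clang -g -O1 -fsanitize=fuzzer,address fuzz_harness.c parse.c
 * The resulting binary repeatedly calls the entry point below with
 * generated inputs; Address Sanitizer flags any memory error hit.
 */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: a parser from the legacy codebase. */
int parse_record(const uint8_t *buf, size_t len);

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```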

Those new tools, and all future tools, are inaccessible to you unless you upgrade your compiler.

By staying on an underpowered compiler, you are putting your head in the sand and crossing your fingers that no vulnerability is found. If your product is a high-value target, I urge you to reconsider.


Note: even if you do NOT upgrade the production compiler, you might want to use a new compiler to check for vulnerabilities anyway; do be aware that since those are different compilers, the guarantees are lessened, though.