How much faster is C++ than C#?

Solution 1:

There is no strict reason why a bytecode-based language with a JIT, like C# or Java, cannot be as fast as C++ code. However, C++ code was significantly faster for a long time, and in many cases it still is today. This is mainly because the more advanced JIT optimizations are complicated to implement, and the really cool ones are only arriving now.

So C++ is faster, in many cases. But this is only part of the answer. The cases where C++ is actually faster are highly optimized programs, where expert programmers have thoroughly optimized the hell out of the code. This is not only very time-consuming (and thus expensive), but also commonly leads to errors caused by over-optimization.

On the other hand, code in managed, JIT-compiled languages gets faster in later versions of the runtime (.NET CLR or Java VM) without you doing anything. And there are a lot of useful optimizations JIT compilers can do that are simply impossible in languages with raw pointers. Also, some argue that garbage collection should generally be as fast as or faster than manual memory management, and in many cases it is. You can generally implement and achieve all of this in C++ or C, but it's going to be much more complicated and error-prone.

As Donald Knuth said, "premature optimization is the root of all evil". If you know for sure that your application will mostly consist of very performance-critical arithmetic, that it will be the bottleneck, that it's certainly going to be faster in C++, and that C++ won't conflict with your other requirements, go for C++. In any other case, concentrate first on implementing your application correctly in whatever language suits you best, then find the performance bottlenecks if it runs too slowly, and only then think about how to optimize the code. In the worst case, you might need to call out to C code through a foreign function interface, so you'll still have the ability to write critical parts in a lower-level language.
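
For illustration, here is a minimal sketch of calling into native code from C# via P/Invoke; the library name (fastmath) and the exported dot function are hypothetical, purely to show the mechanism:

using System;
using System.Runtime.InteropServices;

static class NativeMath
{
    // Hypothetical native library "fastmath" exporting:
    //   double dot(const double* a, const double* b, int n);
    [DllImport("fastmath", CallingConvention = CallingConvention.Cdecl)]
    private static extern double dot(double[] a, double[] b, int n);

    public static double Dot(double[] a, double[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");
        // The arrays are pinned and passed to the native code as raw pointers.
        return dot(a, b, a.Length);
    }
}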

Keep in mind that it's relatively easy to optimize a correct program, but much harder to correct an optimized program.

Giving actual percentages of speed advantage is impossible; it largely depends on your code. In many cases, the programming language implementation isn't even the bottleneck. Take the benchmarks at http://benchmarksgame.alioth.debian.org/ with a great deal of scepticism, as these largely test arithmetic code, which is most likely not similar to your code at all.

Solution 2:

C# may not be faster, but it makes YOU/ME faster. That's the most important measure for what I do. :)

Solution 3:

I'm going to start by disagreeing with part of the accepted (and well-upvoted) answer to this question, namely its claim that there is no strict reason why JIT-compiled code cannot be as fast as C++ code.

There are actually plenty of reasons why JITted code will run slower than a properly optimized C++ (or other language without runtime overhead) program, including:

  • compute cycles spent on JITting code at runtime are by definition unavailable for use in program execution.

  • any hot paths in the JITter will be competing with your code for instruction and data cache in the CPU. We know that cache dominates when it comes to performance, and native languages like C++ do not have this type of contention, by design.

  • a run-time optimizer's time budget is necessarily much more constrained than that of a compile-time optimizer (as another commenter pointed out)

Bottom line: Ultimately, you will almost certainly be able to create a faster implementation in C++ than you could in C#.

Now, with that said, how much faster really isn't quantifiable, as there are too many variables: the task, problem domain, hardware, quality of implementations, and many other factors. You'll have to run tests on your scenario to determine the difference in performance, and then decide whether it is worth the additional effort and complexity.

This is a very long and complex topic, but I feel it's worth mentioning for the sake of completeness that C#'s runtime optimizer is excellent, and is able to perform certain dynamic optimizations at runtime that are simply not available to C++ with its compile-time (static) optimizer. Even with this, the advantage is still typically deeply in the native application's court, but the dynamic optimizer is the reason for the "almost certainly" qualifier given above.

--

In terms of relative performance, I was also disturbed by the figures and discussions I saw in some other answers, so I thought I'd chime in and, at the same time, provide some support for the statements I've made above.

A huge part of the problem with those benchmarks is that you can't write C++ code as if you were writing C# and expect to get representative results (e.g. performing thousands of memory allocations in C++ is going to give you terrible numbers).

Instead, I wrote slightly more idiomatic C++ code and compared it against the C# code @Wiory provided. The two major changes I made to the C++ code were:

  1. used vector::reserve()

  2. flattened the 2d array to 1d to achieve better cache locality (contiguous block)

C# (.NET 4.6.1)

private static void TestArray()
{
    const int rows = 5000;
    const int columns = 9000;
    DateTime t1 = System.DateTime.Now;
    // Jagged array: one allocation for the row table plus one per row.
    double[][] arr = new double[rows][];
    for (int i = 0; i < rows; i++)
        arr[i] = new double[columns];
    DateTime t2 = System.DateTime.Now;

    Console.WriteLine(t2 - t1);

    t1 = System.DateTime.Now;
    // Fill every element; each row is a separate heap object.
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++)
            arr[i][j] = i;
    t2 = System.DateTime.Now;

    Console.WriteLine(t2 - t1);
}

Run time (Release): Init: 124ms, Fill: 165ms

C++14 (Clang v3.8/C2)

#include <chrono>
#include <cstddef>
#include <utility>
#include <vector>

auto ColMajorArray()
{
    constexpr std::size_t ROWS = 5000;
    constexpr std::size_t COLS = 9000;

    auto initStart = std::chrono::steady_clock::now();

    // Reserve one contiguous block up front so the fill loop never reallocates.
    auto arr = std::vector<double>();
    arr.reserve(ROWS * COLS);

    auto initFinish = std::chrono::steady_clock::now();
    auto initTime = std::chrono::duration_cast<std::chrono::microseconds>(initFinish - initStart);

    auto fillStart = std::chrono::steady_clock::now();

    // Flattened 1d fill; push_back stays within the reserved capacity.
    for (std::size_t r = 0; r < ROWS; ++r)
    {
        for (std::size_t c = 0; c < COLS; ++c)
        {
            arr.push_back(static_cast<double>(r * c));
        }
    }

    auto fillFinish = std::chrono::steady_clock::now();
    auto fillTime = std::chrono::duration_cast<std::chrono::milliseconds>(fillFinish - fillStart);

    return std::make_pair(initTime, fillTime);
}

Run time (Release): Init: 398µs (yes, that's microseconds), Fill: 152ms

Total run times: C#: 289ms, C++: 152ms (roughly 90% faster)

Observations

  • Changing the C# implementation to the same 1d array implementation yielded Init: 40ms, Fill: 171ms, Total: 211ms (C++ was still almost 40% faster). A sketch of that 1d variant appears after this list.

  • It is much harder to design and write "fast" code in C++ than it is to write "regular" code in either language.

  • It's (perhaps) astonishingly easy to get poor performance in C++; we saw that with the performance of unreserved vectors. And there are lots of pitfalls like this.

  • C#'s performance is rather amazing when you consider all that is going on at runtime. And that performance is comparatively easy to access.

  • More anecdotal data comparing the performance of C++ and C#: https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=gpp&lang2=csharpcore
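
For reference, here is a minimal sketch of what the 1d-array C# variant mentioned in the first observation might look like; the exact code behind the 40ms/171ms figures isn't shown in this answer, so treat this as an assumption about the change:

private static void TestArray1D()
{
    const int rows = 5000;
    const int columns = 9000;

    DateTime t1 = System.DateTime.Now;
    // One contiguous allocation instead of one allocation per row.
    double[] arr = new double[rows * columns];
    DateTime t2 = System.DateTime.Now;

    Console.WriteLine(t2 - t1);

    t1 = System.DateTime.Now;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++)
            arr[i * columns + j] = i;
    t2 = System.DateTime.Now;

    Console.WriteLine(t2 - t1);
}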

The bottom line is that C++ gives you much more control over performance. Do you want to use a pointer? A reference? Stack memory? The heap? Dynamic polymorphism, or static polymorphism (via templates/CRTP) to eliminate the runtime overhead of a vtable? In C++ you have to... er, get to make all these choices (and more) yourself, ideally so that your solution best addresses the problem you're tackling.

Ask yourself if you actually want or need that control, because even for the trivial example above, you can see that although there is a significant improvement in performance, it requires a deeper investment to access.

Solution 4:

It's five oranges faster. Or rather: there can be no (correct) blanket answer. C++ is a statically compiled language (but then, there's profile-guided optimization, too), while C# runs aided by a JIT compiler. There are so many differences that questions like “how much faster” cannot be answered, not even by giving orders of magnitude.

Solution 5:

In my experience (and I have worked a lot with both languages), the main problem with C# compared to C++ is high memory consumption, and I have not found a good way to control it. It was the memory consumption that would eventually slow down .NET software.

Another factor is that the JIT compiler cannot afford to spend too much time on advanced optimizations, because it runs at runtime, and the end user would notice if it took too long. On the other hand, a C++ compiler has all the time it needs to perform optimizations at compile time. This factor is much less significant than memory consumption, IMHO.