Are C++ enums slower to use than integers?

It's really a simple problem:

I'm writing a program for the board game Go. Should I represent the board with a QVector<int> or a QVector<Player>, where

enum Player
{
    EMPTY = 0,
    BLACK = 1,
    WHITE = 2
};

I assume that using Player instead of plain integers will be slower, but I wonder by how much, because I believe that using an enum makes for better code.

I've done a few tests of assigning and comparing Players (as opposed to ints):

QVector<int> vec;
vec.resize(10000000);
int size = vec.size();

for (int i = 0; i < size; ++i)
{
    vec[i] = 0;
}

for (int i = 0; i < size; ++i)
{
    bool b = (vec[i] == 1);
}


QVector<Player> vec2;
vec2.resize(10000000);
int size2 = vec2.size();

for (int i = 0; i < size2; ++i)
{
    vec2[i] = EMPTY;
}

for (int i = 0; i < size2; ++i)
{
    bool b = (vec2[i] == BLACK);
}

Basically, the Player version is only about 10% slower. Is there anything else I should know before continuing?

Thanks!

Edit: The 10% difference is not a figment of my imagination; it seems to be specific to Qt and QVector. When I use std::vector, the speeds are the same.


Enums are completely resolved at compile time (enum constants as integer literals, enum variables as integer variables), so there's no speed penalty in using them.

In general, an enumeration won't have an underlying type bigger than int (unless you put very big constants in it); in fact, §7.2 ¶5 of the C++ standard explicitly says:

The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int.

You should use enumerations where appropriate because they usually make the code easier to read and maintain (have you ever tried to debug a program full of "magic numbers"? :S).
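A contrived sketch of that readability difference (the function names are mine, not from the question):

```cpp
#include <vector>

enum Player { EMPTY = 0, BLACK = 1, WHITE = 2 };

// With magic numbers, the reader has to remember the encoding:
bool is_white_raw(const std::vector<int>& board, int i) {
    return board[i] == 2;   // what does 2 mean again?
}

// With the enum, the intent is stated in the code itself:
bool is_white(const std::vector<Player>& board, int i) {
    return board[i] == WHITE;
}
```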

As for your results: your test methodology probably doesn't account for the normal speed fluctuations you get when running code on "normal" machines1; have you tried running the test many (100+) times and computing the mean and standard deviation of your timings? The results should be compatible: the difference between the means shouldn't be bigger than 1 or 2 times the RSS2 of the two standard deviations (assuming, as usual, a Gaussian distribution for the fluctuations).

Another check you could do is to compare the generated assembly code (with g++ you can get it with the -S switch).


  1. On "normal" PCs you get some nondeterministic fluctuations because of other tasks running, cache/RAM/VM state, ...
  2. Root Sum Squared, the square root of the sum of the squared standard deviations.

In general, using an enum should make absolutely no difference to performance. How did you test this?

I just ran tests myself. The differences are pure noise.

Just now, I compiled both versions to assembler. Here's the main function from each:

int version:

LFB1778:
        pushl   %ebp
LCFI11:
        movl    %esp, %ebp
LCFI12:
        subl    $8, %esp
LCFI13:
        movl    $65535, %edx
        movl    $1, %eax
        call    __Z41__static_initialization_and_destruction_0ii
        leave
        ret

Player version:

LFB1774:
        pushl   %ebp
LCFI10:
        movl    %esp, %ebp
LCFI11:
        subl    $8, %esp
LCFI12:
        movl    $65535, %edx
        movl    $1, %eax
        call    __Z41__static_initialization_and_destruction_0ii
        leave
        ret

It's hazardous to base any statement regarding performance on micro-benchmarks. There are too many extraneous factors skewing the data.


Enums should be no slower. They're implemented as integers.


If you use Visual Studio, for example, you can create a simple project containing

     a = Player::EMPTY;

and if you right-click and choose "Go To Disassembly", the generated code will be

mov         dword ptr [a],0

So the compiler replaces the enum constant with its integer value, and normally this generates no overhead.