Why are flag enums usually defined with hexadecimal values?
A lot of times I see flag enum declarations that use hexadecimal values. For example:
[Flags]
public enum MyEnum
{
    None = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}
When I declare an enum, I usually declare it like this:
[Flags]
public enum MyEnum
{
    None = 0,
    Flag1 = 1,
    Flag2 = 2,
    Flag3 = 4,
    Flag4 = 8,
    Flag5 = 16
}
Is there a reason or rationale why some people choose to write the value in hexadecimal rather than decimal? The way I see it, it's easier to get confused when using hex values and accidentally write Flag5 = 0x16 instead of Flag5 = 0x10.
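That slip is easy to miss because 0x16 is decimal 22 (binary 10110), not 16, so the mistyped flag silently shares bits with several of its neighbours. Here is a minimal sketch of the consequence (the BrokenEnum and Demo names are just for illustration):

using System;

[Flags]
public enum BrokenEnum
{
    None  = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x16   // typo: intended 0x10 (16), actually 22 == 10110
}

public static class Demo
{
    public static void Main()
    {
        // Prints True, even though Flag2 and Flag5 were meant to be independent flags.
        Console.WriteLine((BrokenEnum.Flag5 & BrokenEnum.Flag2) != 0);
    }
}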
Solution 1:
Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits - the machine's world - and we're gonna play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of the data matters, so using it hints that that's the situation we're in now.
Also, I'm not sure about C#, but I know that in C x << y is a valid compile-time constant.
Using bit shifts seems the clearest:
[Flags]
public enum MyEnum
{
    None = 0,
    Flag1 = 1 << 0, // 1
    Flag2 = 1 << 1, // 2
    Flag3 = 1 << 2, // 4
    Flag4 = 1 << 3, // 8
    Flag5 = 1 << 4  // 16
}
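Whichever notation you pick, consuming the flags looks the same: combine with | and test with & (or Enum.HasFlag). A small self-contained sketch, with the shifted enum repeated so it compiles on its own:

using System;

[Flags]
public enum MyEnum
{
    None = 0,
    Flag1 = 1 << 0,
    Flag2 = 1 << 1,
    Flag3 = 1 << 2,
    Flag4 = 1 << 3,
    Flag5 = 1 << 4
}

public static class FlagsDemo
{
    public static void Main()
    {
        // Combine flags with bitwise OR; each flag keeps its own bit.
        var options = MyEnum.Flag1 | MyEnum.Flag3;        // 00001 | 00100 == 00101

        // Test membership with bitwise AND, or with Enum.HasFlag.
        Console.WriteLine((options & MyEnum.Flag3) != 0); // True
        Console.WriteLine(options.HasFlag(MyEnum.Flag4)); // False
        Console.WriteLine(options);                       // "Flag1, Flag3"
    }
}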
Solution 2:
It makes it easy to see that these are binary flags.
None = 0x0, // == 00000
Flag1 = 0x1, // == 00001
Flag2 = 0x2, // == 00010
Flag3 = 0x4, // == 00100
Flag4 = 0x8, // == 01000
Flag5 = 0x10 // == 10000
Though the progression makes it even clearer:
Flag6 = 0x20 // == 00100000
Flag7 = 0x40 // == 01000000
Flag8 = 0x80 // == 10000000
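And because each hex digit covers exactly four bits, composite values stay readable at a glance too. A hypothetical extension of the list above:

// Hypothetical composite members built from the flags above:
LowNibble  = 0x0F, // == 00001111  (Flag1 | Flag2 | Flag3 | Flag4)
HighNibble = 0xF0, // == 11110000  (Flag5 | Flag6 | Flag7 | Flag8)
All        = 0xFF  // == 11111111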