What is the memory usage overhead for a 64-bit application?
From what I have found so far it's clear that programs compiled for a 64-bit architecture use twice as much RAM for pointers as their 32-bit alternatives - https://superuser.com/questions/56540/32-bit-vs-64-bit-systems.
Does that mean that code compiled for 64-bit uses on average two times more RAM than the 32-bit version?
I somehow doubt it, but I am wondering what the real overhead is. I suppose that small types like `short`, `byte`, and `char` are the same size on a 64-bit architecture? I am not really sure about `byte`, though. Given that many applications work with large strings (like web browsers, etc.), which consist mostly of `char` arrays in most implementations, the overhead may not be so large.
So even if numeric types like `int` and `long` are larger on 64-bit, would it have a significant effect on RAM usage or not?
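For reference, this is the kind of check I mean (C has no built-in `byte` type, so I really mean `char`, which is 1 byte by definition):

```c
#include <stdio.h>

int main(void)
{
    /* char is 1 byte by definition; short is typically 2 bytes
       on both 32-bit and 64-bit targets */
    printf("char:  %zu\n", sizeof(char));
    printf("short: %zu\n", sizeof(short));
    printf("int:   %zu\n", sizeof(int));
    return 0;
}
```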
Solution 1:
It depends on the programming style (and on the language, but you are referring to C).
- If you work a lot with pointers (or you have a lot of references in some languages), RAM consumption goes up.
- If you use a lot of data with fixed size, such as `double` or `int32_t`, RAM consumption does not go up.
- For types like `int` or `long`, it depends on the architecture; there may be differences between Linux and Windows. There are several 64-bit data models. In short, Windows uses LLP64, meaning that `long long` and pointers are 64 bit, while Linux uses LP64, where `long` is 64 bit as well (the snippet below prints these sizes on your platform). Other data models might make `int` or even `short` 64 bit as well, but these are quite uncommon.
- `float` and `double` should remain the same size in all cases.
So you see, it strongly depends on how you use the data types.
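As a quick check, here is a minimal sketch that prints the relevant sizes; on LP64 (Linux) `long` should come out as 8 bytes, while on LLP64 (Windows) it stays at 4 and only `long long` and pointers are 8:

```c
#include <stdio.h>

/* Prints the size of each type in bytes; the results reveal the
 * data model: LP64 (Linux) gives long = 8, LLP64 (Windows) gives
 * long = 4, but pointers are 8 bytes on both. */
int main(void)
{
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    printf("void *:    %zu\n", sizeof(void *));
    printf("float:     %zu\n", sizeof(float));
    printf("double:    %zu\n", sizeof(double));
    return 0;
}
```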
Solution 2:
There are a few reasons for memory consumption to go up. However, the overhead of 64-bit vs. 32-bit varies from one application to another.
The main reason is using a lot of pointers in your code. However, an array allocated dynamically in code compiled for 64-bit and running on a 64-bit OS will be the same size as the same array allocated on a 32-bit system. Only the address of the array will be larger; the content size will be the same (except when a type's size changes, but that should not happen and should be well documented).
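To illustrate (the `node` struct here is hypothetical): the payload array costs the same number of bytes on both targets, while the pointer-heavy node roughly doubles:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical pointer-heavy node: its pointers double in size
 * on 64-bit, so sizeof grows from 12 bytes (32-bit) to 24 bytes
 * (64-bit, including 4 bytes of tail padding). */
struct node {
    struct node *next;
    struct node *prev;
    int32_t      value;
};

int main(void)
{
    int32_t samples[1000];  /* 4000 bytes on 32-bit and 64-bit alike */
    printf("array: %zu bytes\n", sizeof(samples));
    printf("node:  %zu bytes\n", sizeof(struct node));
    return 0;
}
```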
Another footprint increase comes from memory alignment. In 64-bit mode, pointers and other 8-byte types need to sit on 8-byte boundaries, so structures can pick up extra padding; that adds a small overhead.
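You can see the padding with a struct that mixes a one-byte field with a pointer (a hypothetical layout, assuming natural alignment):

```c
#include <stdio.h>

/* Hypothetical layout: the pointer must sit on a naturally aligned
 * boundary, so the compiler pads after 'tag'.
 *   32-bit: 1 + 3 padding + 4 pointer =  8 bytes
 *   64-bit: 1 + 7 padding + 8 pointer = 16 bytes */
struct tagged {
    char  tag;
    void *ptr;
};

int main(void)
{
    printf("sizeof(struct tagged) = %zu bytes\n", sizeof(struct tagged));
    return 0;
}
```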
The size of the code itself will probably increase, too. On some architectures the 64-bit instruction encoding is slightly larger, and calls now target 64-bit addresses.
When running in 64-bit mode the registers are larger (64 bits), so if you use many numeric types the compiler may well keep them in registers, which means your RAM footprint does not necessarily go up. Using `double` variables is likely to increase the memory footprint if they cannot be kept in 64-bit registers.
With JIT-compiled languages like Java or .NET, the footprint increase of 64-bit code is likely to be larger, because the runtime environment adds its own overhead through pointer usage, hidden control structures, and so on.
However, there is no magic number describing the 64-bit memory footprint overhead; it has to be measured for each application. From what I have seen, I never got more than a 20% increase in footprint for an application running on 64-bit compared to 32-bit. But that is purely based on the applications I encountered, and I work mostly in C and C++.
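One way to measure it on Linux is to build the same program with `-m32` and `-m64` and compare peak resident set size, for example via `getrusage` (a sketch; the workload shown is a made-up pointer-heavy allocation pattern):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

/* Report peak resident set size at the end of a run; compile the
 * same program with -m32 and -m64 and compare the two numbers.
 * On Linux, ru_maxrss is reported in kilobytes. */
int main(void)
{
    /* Stand-in workload: a big table of pointers. The table itself
     * is 4 MB on 32-bit and 8 MB on 64-bit, since each slot is
     * pointer-sized. */
    enum { N = 1 << 20 };
    void **nodes = malloc(N * sizeof *nodes);
    for (size_t i = 0; i < N; i++)
        nodes[i] = malloc(16);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("peak RSS: %ld kB\n", ru.ru_maxrss);
    return 0;
}
```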