Why does my array of structs take up so much memory?
Question: How does the Micro Framework allocate memory for an array of structs?
BitBucket repository with code to replicate.
Context and Detail
I'm making a queue using a fixed-size array to insert delays in processing keystrokes from a USB keyboard. I'm using a struct to represent the key up and down events and the delay.
public struct QueuedEvent
{
    public readonly EventType Type;       // Byte
    public readonly byte KeyPressed;
    public readonly TinyTimeSpan Delay;   // Int16

    public readonly static QueuedEvent Empty = new QueuedEvent();
}
public enum EventType : byte
{
    None = 0,
    Delay = 1,
    KeyDown = 2,
    KeyUp = 3,
    KeyPress = 4,
}
public class FixedSizeQueue
{
    private readonly QueuedEvent[] _Array;
    private int _Head = 0;
    private int _Tail = 0;

    public FixedSizeQueue(int size)
    {
        _Array = new QueuedEvent[size];
    }

    // Enqueue and Dequeue methods follow.
}
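The Enqueue and Dequeue methods are the usual ring-buffer logic. A minimal sketch (not the exact repository code; the _Count field is assumed here to distinguish a full queue from an empty one):

// Sketch of the ring-buffer operations. _Count is an assumed extra field;
// without it, _Head == _Tail is ambiguous between "empty" and "full".
private int _Count = 0;

public void Enqueue(QueuedEvent e)
{
    if (_Count == _Array.Length)
        throw new InvalidOperationException("Queue is full.");
    _Array[_Tail] = e;
    _Tail = (_Tail + 1) % _Array.Length;
    _Count++;
}

public QueuedEvent Dequeue()
{
    if (_Count == 0)
        return QueuedEvent.Empty;   // sentinel for "nothing queued"
    QueuedEvent e = _Array[_Head];
    _Head = (_Head + 1) % _Array.Length;
    _Count--;
    return e;
}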
I would have thought my QueuedEvent would occupy 4 bytes in memory but, based on the debug output of the garbage collector (specifically the VALUETYPE and SZARRAY figures), it is actually taking up 84 bytes each! This strikes me as overkill! (And it really does appear to be 84 bytes each, because I get an OutOfMemoryException if I allocate 512 of them. At 4 bytes each, 512 events would need only 2kB, and I have ~20kB of RAM available, so I should be able to allocate 512 easily; at 84 bytes each, though, they need ~43kB.)
Question (again): How does the Micro Framework manage to allocate 84 bytes for a struct which could fit in 4?
Garbage Collector Figures
Here's a table for different-sized arrays of QueuedEvent (after subtracting the amounts reported when I allocate zero elements):
+--------+-----------+-------------+---------+-------------+-------+
| Number | VALUETYPE | Bytes/Event | SZARRAY | Bytes/Event | Total |
+--------+-----------+-------------+---------+-------------+-------+
|     16 |      1152 |          72 |     192 |          12 |    84 |
|     32 |      2304 |          72 |     384 |          12 |    84 |
|     64 |      4608 |          72 |     768 |          12 |    84 |
|    128 |      9216 |          72 |    1536 |          12 |    84 |
+--------+-----------+-------------+---------+-------------+-------+
Based on the SZARRAY numbers, I'd guess my QueuedEvent fields are each being aligned to Int32 boundaries (three fields at 4 bytes apiece = 12 bytes). But I have no idea where the extra 72 bytes come from.
Edit: I'm getting these numbers by calling Debug.GC(true) and observing the dump in my debugger output. I have not found a reference documenting exactly what each of the numbers means.
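The harness itself is trivial. A minimal sketch (class name and array length are illustrative; the interesting output is the GC dump in the debugger, not anything the program prints):

using Microsoft.SPOT;

public static class MeasureProgram
{
    public static void Main()
    {
        // Force a collection and note the baseline VALUETYPE/SZARRAY
        // figures printed to the debug output.
        Debug.GC(true);

        QueuedEvent[] events = new QueuedEvent[128];

        // Force another collection; the difference from the baseline
        // is the cost of the array above.
        Debug.GC(true);

        // Keep the array reachable so it survives both collections.
        Debug.Print("Allocated: " + events.Length);
    }
}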
I realise I could simply allocate an int[], but that means I lose the nice encapsulation and type safety of the struct. And I'd really like to know what the true cost of a struct is in the Micro Framework.
My TinyTimeSpan is much like a regular TimeSpan, except it uses an Int16 to represent a number of milliseconds rather than an Int64 representing 100ns ticks.
public struct TinyTimeSpan
{
    public static readonly TinyTimeSpan Zero = new TinyTimeSpan(0);

    private short _Milliseconds;

    public TinyTimeSpan(short milliseconds)
    {
        _Milliseconds = milliseconds;
    }
    public TinyTimeSpan(TimeSpan ts)
    {
        _Milliseconds = (short)(ts.Ticks / TimeSpan.TicksPerMillisecond);
    }

    public int Milliseconds { get { return _Milliseconds; } }
    public int Seconds { get { return _Milliseconds / 1000; } }
}
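Usage is straightforward; for illustration (values are arbitrary):

// A 1500ms delay stored in 2 bytes rather than TimeSpan's 8-byte tick count.
TinyTimeSpan delay = new TinyTimeSpan(1500);
TinyTimeSpan converted = new TinyTimeSpan(new TimeSpan(1500 * TimeSpan.TicksPerMillisecond));
int ms = delay.Milliseconds;   // 1500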
I'm using a FEZ Domino as hardware. It's totally possible this is hardware specific. Also, Micro Framework 4.1.
Edit - More Testing And Comment Answers
I ran a whole bunch more tests (in the emulator this time, not on real hardware, but the numbers for QueuedEvent are the same, so I'm assuming my hardware would behave identically for the other tests).
BitBucket repository with code to replicate.
The following integral types and structs do not attract any overhead as VALUETYPE:
- Byte (1 byte)
- Int32 (4 bytes)
- Int16 (2 bytes)
- Int64 (8 bytes)
- Double (8 bytes)
- TimeSpan (12 bytes - strange, as its internal member is an Int64)
- DateTime (12 bytes - strange)
However, Guid does: 36 bytes each.
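Each of these was measured with the same Debug.GC(true) bracketing as above, one array per run; roughly (a sketch, array length illustrative):

// One of these per run, bracketed by Debug.GC(true) calls.
byte[] bytes = new byte[128];          // no VALUETYPE entries appear
Int32[] ints = new Int32[128];         // no VALUETYPE entries appear
TimeSpan[] spans = new TimeSpan[128];  // no VALUETYPE; 12 bytes per element
Guid[] guids = new Guid[128];          // VALUETYPE overhead: 36 bytes each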
The Empty static member on QueuedEvent does allocate VALUETYPE, using 72 bytes (12 bytes less than the same struct costs in an array).
Allocating the array as a static member does not change anything.
Running in Debug or Release mode makes no difference. I don't know how to get the GC debug info without a debugger attached, though. But the Micro Framework is interpreted, so I don't know what effect a non-attached debugger would have anyway.
The Micro Framework does not support unsafe code. Nor does it support StructLayout Explicit (well, technically it does, but there is no FieldOffset attribute). StructLayout Auto and Sequential make no difference.
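For illustration, this is the sort of thing I mean (standard .NET attribute usage; the struct name is just for the test):

using System.Runtime.InteropServices;

// Accepted by the compiler, but the measured sizes were identical with
// Auto, Sequential, or no attribute at all. Explicit is unusable anyway,
// since there is no FieldOffset attribute to place the fields with.
[StructLayout(LayoutKind.Sequential)]
public struct QueuedEventSequentialTest
{
    public readonly byte Type;
    public readonly byte KeyPressed;
    public readonly short DelayMilliseconds;
}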
Here are a few more structs and their measured memory allocation:
// Uses 12 bytes in SZARRAY and 24 in VALUETYPE, total = 36 each
public struct JustAnInt32
{
    public readonly Int32 Value;
}

// Uses 12 bytes in SZARRAY and 48 in VALUETYPE, total = 60 each
// Same as the original QueuedEvent but only uses integral types.
public struct QueuedEventSimple
{
    public readonly byte Type;
    public readonly byte KeyPressed;
    public readonly short DelayMilliseconds;
    // Replacing the short with a TimeSpan does not change memory usage.
}

// Uses 12 bytes in SZARRAY and 12 in VALUETYPE, total = 24 each
// I have to admit 24 bytes is a bit much for an empty struct!!
public struct Empty
{
}
It seems every time I use a custom struct, I incur some sort of overhead. And no matter what I include in the struct, it always requires 12 bytes in SZARRAY. So I tried this:
// Uses 12 bytes in SZARRAY and 36 in VALUETYPE, total = 48 each
public struct DifferentEntity
{
    public readonly Double D;
    public readonly TimeSpan T;
}

// Uses 12 bytes in SZARRAY and 108 in VALUETYPE, total = 120 each
public struct MultipleEntities
{
    public readonly DifferentEntity E1;
    public readonly DifferentEntity E2;
}

// Uses 12 bytes in SZARRAY and 60 in VALUETYPE, total = 72 each
// This is equivalent to MultipleEntities, but has quite different memory usage.
public struct TwoDoublesAndTimeSpans
{
    public readonly double D1;
    public readonly TimeSpan T1;
    public readonly double D2;
    public readonly TimeSpan T2;
}
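For what it's worth, every figure above fits a simple model (my own back-of-the-envelope fit, not anything from documentation): each struct instance pays a fixed 12-byte header in VALUETYPE plus 12 bytes per field, and a nested struct pays both its own full cost and a 12-byte field slot in its parent. So JustAnInt32 = 12 + 12 = 24; QueuedEventSimple = 12 + 3 × 12 = 48; DifferentEntity = 12 + 2 × 12 = 36; TwoDoublesAndTimeSpans = 12 + 4 × 12 = 60; MultipleEntities = 12 + 2 × (12 + 36) = 108; and the original QueuedEvent = 12 + 2 × 12 + (12 + 24) = 72.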
Minor Edit
After posting my own answer, I realised there is always a 12-byte overhead per item in SZARRAY. So I tested an object[]: reference types consume 12 bytes each in the Micro Framework.
An empty struct (public struct Empty { }) consumes 24 bytes each.
Based on my tests, I'm guessing ValueTypes in the Micro Framework are not true value types as we're used to on the desktop CLR. At the very least, they are being boxed, and there may be another level of indirection involved too. These costs add up to a memory overhead that is quite substantial for an embedded platform.
I'll be converting to an int[] in my FixedSizeQueue. Actually, I ended up using a UInt32[] and added some extension methods to wrap the bit bashing.
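Those extension methods look roughly like this (a sketch, not the exact repository code; the packing layout within the UInt32 is my own choice here):

public static class PackedEventExtensions
{
    // Assumed layout, high bits to low: type (8) | key (8) | delay in ms (16).
    public static UInt32 PackEvent(EventType type, byte key, short delayMilliseconds)
    {
        return ((UInt32)type << 24)
             | ((UInt32)key << 16)
             | (UInt16)delayMilliseconds;
    }

    public static EventType UnpackType(this UInt32 packed)
    {
        return (EventType)(byte)(packed >> 24);
    }

    public static byte UnpackKey(this UInt32 packed)
    {
        return (byte)(packed >> 16);
    }

    public static short UnpackDelayMilliseconds(this UInt32 packed)
    {
        return (short)(UInt16)packed;
    }
}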
I poked around a bit in the source code, but couldn't find anything helpful (and I really don't know what to look for either).