Are Java HashMap.clear() and remove() memory-efficient?
Consider the following HashMap.clear() code:
/**
 * Removes all of the mappings from this map.
 * The map will be empty after this call returns.
 */
public void clear() {
    modCount++;
    Entry[] tab = table;
    for (int i = 0; i < tab.length; i++)
        tab[i] = null;
    size = 0;
}
It seems that the internal array (table) of Entry objects is never shrunk. So when I add 10000 elements to a map and then call map.clear(), it keeps 10000 nulls in its internal array. My question is: how does the JVM handle this array of nothing, and is HashMap therefore memory-efficient?
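You can observe the retained capacity yourself with a bit of reflection. The sketch below is illustrative only: it assumes the OpenJDK field name table, and on JDK 9+ it needs --add-opens java.base/java.util=ALL-UNNAMED to be allowed to read that field.

import java.lang.reflect.Field;
import java.util.HashMap;

public class ClearCapacityDemo {
    // Reads the length of HashMap's private backing array via reflection.
    // Assumes the OpenJDK field name "table"; on JDK 9+ run with
    // --add-opens java.base/java.util=ALL-UNNAMED.
    static int tableLength(HashMap<?, ?> map) throws Exception {
        Field f = HashMap.class.getDeclaredField("table");
        f.setAccessible(true);
        Object[] table = (Object[]) f.get(map);
        return table == null ? 0 : table.length;
    }

    public static void main(String[] args) throws Exception {
        HashMap<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(i, i);
        }
        System.out.println("before clear: " + tableLength(map)); // 16384 with the default load factor
        map.clear();
        System.out.println("after clear:  " + tableLength(map)); // still 16384
    }
}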
Solution 1:
The idea is that clear() is only called when you want to re-use the HashMap. Reusing an object should only be done for the same purpose it was used for before, so chances are that you'll have roughly the same number of entries. To avoid useless shrinking and resizing of the Map, the capacity is held the same when clear() is called.
If all you want to do is discard the data in the Map, then you need not (and in fact should not) call clear() on it, but simply clear all references to the Map itself, in which case it will be garbage collected eventually.
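To make the two cases concrete, here is a minimal sketch (the map name and sizes are invented for illustration):

import java.util.HashMap;
import java.util.Map;

public class ClearVsDiscard {
    public static void main(String[] args) {
        Map<Integer, String> cache = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            cache.put(i, "value-" + i);
        }

        // Case 1: the map will be refilled to a similar size.
        // clear() keeps the large backing array, avoiding re-growth.
        cache.clear();

        // Case 2: the data is no longer needed at all.
        // Dropping the reference lets the whole map, including its
        // oversized table, become eligible for garbage collection.
        cache = new HashMap<>();
    }
}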
Solution 2:
Looking at the source code, it does look like HashMap never shrinks. The resize method is called to double the capacity whenever required, but it has nothing along the lines of ArrayList.trimToSize().
If you're using a HashMap in such a way that it grows and shrinks dramatically and often, you may want to just create a new HashMap instead of calling clear().
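If you want to keep the entries but drop the excess capacity, copying into a fresh map is the closest thing to a trim. trimmedCopy below is a hypothetical helper, not a JDK method; it relies on HashMap's copy constructor sizing the new table for the source's current entry count.

import java.util.HashMap;
import java.util.Map;

public class TrimDemo {
    // Hypothetical helper: HashMap's copy constructor allocates a table
    // sized for source.size(), which effectively "trims" the capacity.
    static <K, V> HashMap<K, V> trimmedCopy(Map<K, V> source) {
        return new HashMap<>(source);
    }

    public static void main(String[] args) {
        HashMap<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(i, i);
        }
        map.keySet().removeIf(k -> k >= 100); // shrinks size, not capacity
        map = trimmedCopy(map);               // capacity now matches ~100 entries
    }
}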
Solution 3:
You are right, but considering that growing the array is a much more expensive operation, it's not unreasonable for the HashMap to assume that once the user has grown the array, chances are they'll need an array of this size again later, and to simply keep the array rather than shrinking it and risking an expensive re-expansion later. It's a heuristic, I guess; you could advocate the other way around too.