JavaScript ES6 computational/time complexity of collections

What time complexity (in big-O notation) is provided by the ES6 specification for the Keyed Collections (Set, Map, WeakSet, and WeakMap)?

My expectation, and I suspect that of most developers, is that the specifications and implementations would use widely accepted performant algorithms, in which case Set.prototype.has, add, and delete would all be O(1) in the average case. The same goes for the Map and Weak- equivalents.
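For concreteness, these are the operations in question (a minimal illustration of the API, not a demonstration of its complexity):

const set = new Set();
set.add('a');                // expected O(1) average insert
console.log(set.has('a'));   // expected O(1) average lookup → true
set.delete('a');             // expected O(1) average removal

const map = new Map();
map.set('key', 42);
console.log(map.get('key')); // → 42
map.delete('key');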

It is not entirely apparent to me whether the time complexity of the implementations is mandated, e.g. in the ECMAScript 2015 Language Specification - 6th Edition, 23.2 Set Objects.

Unless I misunderstand it (and it's certainly possible I do), it looks like the ECMA spec mandates that the implementations (e.g. Set.prototype.has) are to use a linear-time (O(n)) algorithm. It would strike me as exceedingly surprising that more performant algorithms would not be mandated, or even permitted, by the spec, and I would be very interested in an explanation for why this is the case.


Solution 1:

Right from that very paragraph you linked to:

Set objects must be implemented using [mechanisms] that, on average, provide access times that are sublinear on the number of elements in the collection.

You will find the same sentence for Maps, WeakMaps and WeakSets.

It looks like the ECMA spec mandates that the implementations (e.g. Set.prototype.has) are to use a linear-time (O(n)) algorithm.

No:

The data structures used in this Set objects specification is only intended to describe the required observable semantics of Set objects. It is not intended to be a viable implementation model.

The observable semantics are mostly related to the predictable iteration order (which can still be implemented efficiently and fast). The specification indeed expects implementations to use a hash table or something similar with constant access times, though trees (with logarithmic access complexity) are allowed as well.
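For example, the iteration order that the spec's list-based description pins down is insertion order, which a hash-table-backed implementation can preserve just as well (a quick illustration):

const s = new Set();
s.add('c');
s.add('a');
s.add('b');

// Iteration order is insertion order, independent of the backing structure.
console.log([...s]); // ['c', 'a', 'b']

s.delete('a');
s.add('a'); // deleting and re-adding moves the element to the end
console.log([...s]); // ['c', 'b', 'a']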

Solution 2:

For anyone who is curious, I did a very quick benchmark:

const benchmarkMap = size => {
  console.time('benchmarkMap');
  var map = new Map();
  // Write `size` entries, then read each one back.
  for (var i = 0; i < size; i++) map.set(i, i);
  for (var i = 0; i < size; i++) var x = map.get(i); // assign so the read isn't a no-op
  console.timeEnd('benchmarkMap');
};

const benchmarkObj = size => {
  console.time('benchmarkObj');
  var obj = {};
  // Same workload against a plain object with integer keys.
  for (var i = 0; i < size; i++) obj[i] = i;
  for (var i = 0; i < size; i++) var x = obj[i];
  console.timeEnd('benchmarkObj');
};

var size = 1000000;

benchmarkMap(size);
benchmarkObj(size);

I ran this a few times and got the following results:

(2017 MacBook Pro, 2.5 GHz i7, 16 GB RAM)

benchmarkMap: 189.120ms
benchmarkObj: 44.214ms

benchmarkMap: 200.817ms
benchmarkObj: 38.963ms

benchmarkMap: 187.968ms
benchmarkObj: 41.633ms

benchmarkMap: 186.533ms
benchmarkObj: 35.850ms

benchmarkMap: 187.339ms
benchmarkObj: 44.515ms
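Note that a single size only tells you about constant factors, not about growth. A rough way to probe the sublinear guarantee itself is to grow the collection and check that the per-lookup time stays roughly flat (a sketch, not a rigorous benchmark; performance.now() is available in browsers and recent Node versions):

// If Map.prototype.get is O(1) on average, ms/lookup should stay
// roughly constant as the map grows; O(n) behavior would show it
// increasing with size.
const perLookup = size => {
  const map = new Map();
  for (let i = 0; i < size; i++) map.set(i, i);
  const start = performance.now();
  for (let i = 0; i < size; i++) map.get(i);
  return (performance.now() - start) / size;
};

for (const size of [1e4, 1e5, 1e6]) {
  console.log(`${size}: ${perLookup(size).toFixed(6)} ms/lookup`);
}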