Why would I use Enumerable.ElementAt() versus the [] operator?

Solution 1:

Because IEnumerable<T> is more general, and a collection exposed as an enumerable may not have an indexer.

But if it does, don't use ElementAt(); it's probably not going to be as efficient.
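A minimal sketch of the distinction (the iterator method below is just an illustration of a sequence with no indexer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // A List<T> has an indexer, so prefer it directly.
        var list = new List<int> { 10, 20, 30 };
        Console.WriteLine(list[1]);           // direct access

        // An iterator method yields an IEnumerable<int> with no indexer,
        // so ElementAt() is the only built-in positional lookup.
        IEnumerable<int> lazy = Numbers();
        Console.WriteLine(lazy.ElementAt(1)); // walks the sequence
    }

    static IEnumerable<int> Numbers()
    {
        yield return 10;
        yield return 20;
        yield return 30;
    }
}
```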

Solution 2:

ElementAt() provides a unified interface for all enumerations in C#. I tend to use it quite often as I like the common API.

If the underlying type supports random access (i.e., it implements IList<T> and so supports the [] operator), then ElementAt() will make use of that. The only overhead is an extra method call, which is almost never significant.

If the underlying type does not support random access, then ElementAt() simulates it by iterating the enumeration until it arrives at the element you care about. This can be very expensive, and if the sequence is lazily evaluated it can even trigger side effects.
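A small sketch of that cost, using an illustrative iterator whose element production we can count as a stand-in for a side effect:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static int produced;

    // Each element is created on demand; producing one is the
    // "side effect" we count here.
    static IEnumerable<int> Source()
    {
        for (int i = 0; i < 100; i++)
        {
            produced++;
            yield return i;
        }
    }

    static void Main()
    {
        // ElementAt(50) must walk elements 0..50, producing 51 of them.
        int value = Source().ElementAt(50);
        Console.WriteLine($"value={value}, produced={produced}");

        // Calling it again re-enumerates from the start.
        produced = 0;
        Source().ElementAt(50);
        Console.WriteLine($"produced again={produced}");
    }
}
```

This is why repeated ElementAt() calls on a lazy sequence get expensive: every call re-walks the sequence from the beginning.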

There is also ElementAtOrDefault() which is often very handy.
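The difference between the two is how they handle an out-of-range index:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] xs = { 1, 2, 3 };

        // In range: behaves exactly like ElementAt().
        Console.WriteLine(xs.ElementAtOrDefault(1));

        // Out of range: returns default(T) (0 for int) instead of
        // throwing ArgumentOutOfRangeException like ElementAt() would.
        Console.WriteLine(xs.ElementAtOrDefault(10));
    }
}
```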

Solution 3:

AVOID using ElementAt() in certain scenarios!!!

If you know you're going to look up each element and you have (or could have) more than about 500, then call ToArray() once, store the result in a reusable array variable, and index into that instead.
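A sketch of the pattern (LoadItems is a hypothetical stand-in for an expensive, lazily evaluated source such as rows parsed from a file):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    // Hypothetical expensive, lazily evaluated source.
    static IEnumerable<string> LoadItems()
    {
        for (int i = 0; i < 16000; i++)
            yield return "item" + i;
    }

    static void Main()
    {
        // Slow pattern: ElementAt() re-walks the sequence on every lookup,
        // so N lookups cost O(N^2) element visits in total:
        //   string s = LoadItems().ElementAt(i);  // O(i) per call

        // Fast pattern: materialize once, then every lookup is O(1).
        string[] cache = LoadItems().ToArray();
        Console.WriteLine(cache[15999]);
    }
}
```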

For example, my code was reading data from an Excel file.
I was using ElementAt() to find the SharedStringItem my Cell was referencing.
With 500 or fewer rows, you probably won't notice the difference.
With 16K lines, it was taking 100 seconds.

To make things worse, the index grew with each row read, so ElementAt() had to iterate further on every call and each batch of rows took longer than the last.
The first 1,000 rows took 2.5 Seconds, while the last 1,000 rows took 10.7 Seconds.

Looping through this line of code:

SharedStringItem ssi = sst.Elements<SharedStringItem>().ElementAt(ssi_index);

Resulted in this Logged Output:

...Using ElementAt():
RowIndex:  1000 Read: 1,000 Seconds: 2.4627589
RowIndex:  2000 Read: 1,000 Seconds: 2.9460492
RowIndex:  3000 Read: 1,000 Seconds: 3.1014865
RowIndex:  4000 Read: 1,000 Seconds: 3.76619
RowIndex:  5000 Read: 1,000 Seconds: 4.2489844
RowIndex:  6000 Read: 1,000 Seconds: 4.7678506
RowIndex:  7000 Read: 1,000 Seconds: 5.3871863
RowIndex:  8000 Read: 1,000 Seconds: 5.7997721
RowIndex:  9000 Read: 1,000 Seconds: 6.4447562
RowIndex: 10000 Read: 1,000 Seconds: 6.8978011
RowIndex: 11000 Read: 1,000 Seconds: 7.4564455
RowIndex: 12000 Read: 1,000 Seconds: 8.2510054
RowIndex: 13000 Read: 1,000 Seconds: 8.5758217
RowIndex: 14000 Read: 1,000 Seconds: 9.2953823
RowIndex: 15000 Read: 1,000 Seconds: 10.0159931
RowIndex: 16000 Read: 1,000 Seconds: 10.6884988
Total Seconds: 100.6736451

Once I created an intermediary array to store the SharedStringItems for referencing, my time went from 100 seconds down to 10 seconds, and every row is now handled in roughly the same amount of time.

This line of code:

SharedStringItem[] ssia = sst == null ? null : sst.Elements<SharedStringItem>().ToArray();
Console.WriteLine("ToArray():" + watch.Elapsed.TotalSeconds + " Len:" + ssia.LongCount());

and Looping through this line of code:

SharedStringItem ssi = ssia[ssi_index];

Resulted in this Logged Output:

...Using Array[]:
ToArray(): 0.0840583 Len: 33560
RowIndex:  1000 Read: 1,000 Seconds: 0.8057094
RowIndex:  2000 Read: 1,000 Seconds: 0.8183683
RowIndex:  3000 Read: 1,000 Seconds: 0.6809131
RowIndex:  4000 Read: 1,000 Seconds: 0.6530671
RowIndex:  5000 Read: 1,000 Seconds: 0.6086124
RowIndex:  6000 Read: 1,000 Seconds: 0.6232579
RowIndex:  7000 Read: 1,000 Seconds: 0.6369397
RowIndex:  8000 Read: 1,000 Seconds: 0.629919
RowIndex:  9000 Read: 1,000 Seconds: 0.633328
RowIndex: 10000 Read: 1,000 Seconds: 0.6356769
RowIndex: 11000 Read: 1,000 Seconds: 0.663076
RowIndex: 12000 Read: 1,000 Seconds: 0.6633178
RowIndex: 13000 Read: 1,000 Seconds: 0.6580743
RowIndex: 14000 Read: 1,000 Seconds: 0.6518182
RowIndex: 15000 Read: 1,000 Seconds: 0.6662199
RowIndex: 16000 Read: 1,000 Seconds: 0.6360254
Total Seconds: 10.7586264

As you can see, converting to an Array took a fraction of a second for 33,560 items and was well worth it to speed up my import process.