Solution 1:

Very easy to measure...

In a small amount of tight-loop processing code, where I know the length is fixed, I use arrays for that extra tiny bit of micro-optimisation; arrays can be marginally faster if you use the indexer / for form - but IIRC it depends on the type of data in the array. But unless you need to micro-optimise, keep it simple and use List<T> etc.

Of course, this only applies if you are reading all of the data; a dictionary would be quicker for key-based lookups.
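
For the key-lookup point, here's a minimal sketch (mine, not part of the original answer; the LookupSketch class and the order-id data are made up) contrasting a linear scan of a List<T> with a hash lookup in a Dictionary<TKey, TValue>:

using System;
using System.Collections.Generic;

static class LookupSketch
{
    static void Main()
    {
        // Hypothetical data: map ids to totals, held both ways.
        var asList = new List<KeyValuePair<int, decimal>>();
        var asDict = new Dictionary<int, decimal>();
        for (int id = 0; id < 100000; id++)
        {
            asList.Add(new KeyValuePair<int, decimal>(id, id * 1.5m));
            asDict[id] = id * 1.5m;
        }

        // List: O(n) linear scan to find a key.
        decimal fromList = asList.Find(kvp => kvp.Key == 99999).Value;

        // Dictionary: O(1) hash lookup for the same key.
        decimal fromDict = asDict[99999];

        Console.WriteLine("{0} {1}", fromList, fromDict);
    }
}

The dictionary probe stays roughly constant-time regardless of size, while List<T>.Find has to walk the list.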

Here's my results using "int" (the second number is a checksum to verify they all did the same work):

(edited to fix bug)

List/for: 1971ms (589725196)
Array/for: 1864ms (589725196)
List/foreach: 3054ms (589725196)
Array/foreach: 1860ms (589725196)

based on the test rig:

using System;
using System.Collections.Generic;
using System.Diagnostics;
static class Program
{
    static void Main()
    {
        List<int> list = new List<int>(6000000);
        Random rand = new Random(12345);
        for (int i = 0; i < 6000000; i++)
        {
            list.Add(rand.Next(5000));
        }
        int[] arr = list.ToArray();

        int chk = 0;
        Stopwatch watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            int len = list.Count;
            for (int i = 0; i < len; i++)
            {
                chk += list[i];
            }
        }
        watch.Stop();
        Console.WriteLine("List/for: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        chk = 0;
        watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            for (int i = 0; i < arr.Length; i++)
            {
                chk += arr[i];
            }
        }
        watch.Stop();
        Console.WriteLine("Array/for: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        chk = 0;
        watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            foreach (int i in list)
            {
                chk += i;
            }
        }
        watch.Stop();
        Console.WriteLine("List/foreach: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        chk = 0;
        watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            foreach (int i in arr)
            {
                chk += i;
            }
        }
        watch.Stop();
        Console.WriteLine("Array/foreach: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        Console.ReadLine();
    }
}

Solution 2:

Summary:

  • Use an array:

    • As often as possible. It's fast and takes the least RAM for the same amount of information.
    • If you know the exact number of cells needed
    • If the data stored is under 85,000 bytes (85,000 / 4 bytes ≈ 21,250 elements for int data), so it stays off the large object heap
    • If you need high random-access speed
  • Use a List:

    • If you need to add cells to the end of the list (often)
    • If you need to add cells to the beginning/middle of the list (NOT often)
    • If the data stored in the backing array is under 85,000 bytes (≈ 21,250 elements for int data)
    • If you need high random-access speed
  • Use a LinkedList:

    • If you need to add cells to the beginning/middle/end of the list (often)

    • If you only need sequential access (forward/backward)

    • If you need to store LARGE items, but the item count is low

    • Better not to use it for a large number of items, as it uses additional memory for the links

      If you are not sure whether you need a LinkedList -- YOU DON'T NEED IT.

      Just don't use it. (A short sketch contrasting all three follows below.)
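
A minimal sketch of the operations those bullets are weighing (my illustration, not from the original answer; the ChoiceSketch class and the sizes are arbitrary):

using System;
using System.Collections.Generic;

static class ChoiceSketch
{
    static void Main()
    {
        // Array: fixed size, contiguous memory, fastest random access.
        int[] arr = new int[1000];
        arr[500] = 42;                      // O(1) indexed read/write

        // List<int>: array-backed; appending is cheap (amortised O(1)),
        // but inserting at the front shifts every element in the backing array.
        var list = new List<int>(1000);
        list.Add(42);
        list.Insert(0, 7);                  // O(n)

        // LinkedList<int>: each item lives in its own node with prev/next links,
        // so inserting next to a node you already hold is O(1), but there is no
        // indexer and every node costs extra memory for the links.
        var linked = new LinkedList<int>();
        LinkedListNode<int> node = linked.AddLast(42);
        linked.AddFirst(7);
        linked.AddAfter(node, 13);

        Console.WriteLine("{0} {1} {2}", arr[500], list[0], linked.First.Value);
    }
}

That per-node overhead is why the answer warns against LinkedList for large item counts: every element carries its own node object on top of its payload.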


More details:

[Image in the original answer: colour-coded comparison chart - Array vs List vs LinkedList]

Much more details:

https://stackoverflow.com/a/29263914/4423545

Solution 3:

I think the performance will be quite similar. The overhead involved in using a List vs an array comes, IMHO, when you add items to the list and the list has to grow the array it uses internally once that array's capacity is reached.

Suppose you have a List with a Capacity of 10; the List will then increase its capacity once you want to add the 11th element. You can reduce the performance impact by initializing the Capacity of the list to the number of items it will hold, as the sketch below shows.
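
Here's a small sketch of that growth (mine, not from the original answer; the CapacitySketch class is a made-up name). It prints Capacity as items are added so you can watch the backing array get reallocated:

using System;
using System.Collections.Generic;

static class CapacitySketch
{
    static void Main()
    {
        var list = new List<int>();          // default capacity: 0
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Typically prints 4, 8, 16, 32, 64, 128: the backing array is
                // reallocated and copied each time (growth factor is an implementation detail).
                Console.WriteLine("Count={0,3}  Capacity={1,3}", list.Count, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }

        // Pre-sizing avoids every one of those reallocations:
        var presized = new List<int>(100);
        Console.WriteLine("Pre-sized capacity: {0}", presized.Capacity); // 100
    }
}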

But, in order to figure out whether iterating over a List is as fast as iterating over an array, why don't you benchmark it?

int numberOfElements = 6000000;

List<int> theList = new List<int>(numberOfElements);
int[] theArray = new int[numberOfElements];

for (int i = 0; i < numberOfElements; i++)
{
    theList.Add(i);
    theArray[i] = i;
}

Stopwatch chrono = new Stopwatch();

chrono.Start();

int j;

for (int i = 0; i < numberOfElements; i++)
{
    j = theList[i];
}

chrono.Stop();
Console.WriteLine(String.Format("iterating the List took {0} msec", chrono.ElapsedMilliseconds));

chrono.Reset();

chrono.Start();

for (int i = 0; i < numberOfElements; i++)
{
    j = theArray[i];
}

chrono.Stop();
Console.WriteLine(String.Format("iterating the array took {0} msec", chrono.ElapsedMilliseconds));

Console.ReadLine();

On my system, iterating over the array took 33 msec; iterating over the list took 66 msec.

To be honest, I didn't expect the variation to be that large. So I put my iterations in a loop: now I execute both iterations 1000 times. The results are:

iterating the List took 67146 msec
iterating the array took 40821 msec

Now, the variation is not that large anymore, but still ...

Therefore, I started up .NET Reflector, and the getter of the indexer of the List class looks like this:

public T get_Item(int index)
{
    if (index >= this._size)
    {
        ThrowHelper.ThrowArgumentOutOfRangeException();
    }
    return this._items[index];
}

As you can see, when you use the indexer of the List, the List checks that you're not going out of the bounds of its internal array, on top of the CLR's own check when that internal array is accessed. This additional check comes with a cost.
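
If that extra check ever matters in a hot path, one hedged workaround (my sketch, not something this answer proposes; SnapshotSketch and Sum are made-up names) is to snapshot the list into an array first, where the i < arr.Length loop shape lets the JIT drop its own bounds check:

using System;
using System.Collections.Generic;

static class SnapshotSketch
{
    // Hypothetical hot path: sum a list that is not modified while summing.
    static long Sum(List<int> list)
    {
        int[] arr = list.ToArray();          // one O(n) copy up front
        long sum = 0;
        for (int i = 0; i < arr.Length; i++) // JIT can elide the bounds check for this pattern
        {
            sum += arr[i];
        }
        return sum;
    }

    static void Main()
    {
        var data = new List<int> { 1, 2, 3, 4, 5 };
        Console.WriteLine(Sum(data)); // 15
    }
}

Whether the up-front copy pays for itself depends on how many passes you make over the same data; for a single pass it usually won't.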

Solution 4:

If you are just getting a single value out of either (not in a loop), then both do bounds checking (you're in managed code, remember); it's just that the list does it twice. See the notes later for why this is likely not a big deal.

If you are using your own for (int i = 0; i < x.Length; i++) (or i < x.Count for a list), then the key difference is as follows:

  • Array:
    • bounds checking is removed
  • List:
    • bounds checking is performed

If you are using foreach then the key difference is as follows:

  • Array:
    • no object is allocated to manage the iteration
    • bounds checking is removed
  • List, via a variable known to be List:
    • the iteration-management variable (the enumerator) is stack allocated
    • bounds checking is performed
  • List, via a variable known to be IList:
    • the iteration-management variable (the enumerator) is heap allocated
    • bounds checking is performed; also, the List's values may not be altered during the foreach, whereas the array's can be

The bounds checking is often no big deal (especially if you are on a CPU with a deep pipeline and branch prediction - the norm for most these days), but only your own profiling can tell you whether it is an issue. If you are in parts of your code where you are avoiding heap allocations (good examples are libraries or hashcode implementations), then ensuring the variable is typed as List rather than IList will avoid that pitfall. As always, profile if it matters.
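
To illustrate that last point, here's a hedged sketch (mine, not from the original answer; ForeachTypingSketch and the Sum* methods are made-up names). foreach over a variable typed as List<T> binds to the struct List<T>.Enumerator, so nothing is allocated; foreach over the same object typed as IList<T> goes through IEnumerable<T>, which boxes that enumerator onto the heap:

using System;
using System.Collections.Generic;

static class ForeachTypingSketch
{
    static int SumAsList(List<int> items)
    {
        int sum = 0;
        foreach (int i in items)   // List<int>.Enumerator is a struct: stays on the stack
        {
            sum += i;
        }
        return sum;
    }

    static int SumAsIList(IList<int> items)
    {
        int sum = 0;
        foreach (int i in items)   // resolved via IEnumerable<int>: the enumerator is boxed,
        {                          // so each call to this method allocates on the heap
            sum += i;
        }
        return sum;
    }

    static void Main()
    {
        var data = new List<int> { 1, 2, 3 };
        Console.WriteLine("{0} {1}", SumAsList(data), SumAsIList(data));
    }
}

That boxed enumerator is exactly the allocation the answer suggests avoiding in libraries and hashcode implementations.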

Solution 5:

[See also this question]

I've modified Marc's answer to use actual random numbers and actually do the same work in all cases.

Results:

         for      foreach
Array : 1575ms     1575ms (+0%)
List  : 1630ms     2627ms (+61%)
         (+3%)     (+67%)

(Checksum: -1000038876)

Compiled as Release under VS 2008 SP1. Running without debugging on .NET 3.5 SP1.

Code:

using System;
using System.Collections.Generic;
using System.Diagnostics;
class Program
{
    static void Main(string[] args)
    {
        List<int> list = new List<int>(6000000);
        Random rand = new Random(1);
        for (int i = 0; i < 6000000; i++)
        {
            list.Add(rand.Next());
        }
        int[] arr = list.ToArray();

        int chk = 0;
        Stopwatch watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            int len = list.Count;
            for (int i = 0; i < len; i++)
            {
                chk += list[i];
            }
        }
        watch.Stop();
        Console.WriteLine("List/for: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        chk = 0;
        watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            int len = arr.Length;
            for (int i = 0; i < len; i++)
            {
                chk += arr[i];
            }
        }
        watch.Stop();
        Console.WriteLine("Array/for: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        chk = 0;
        watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            foreach (int i in list)
            {
                chk += i;
            }
        }
        watch.Stop();
        Console.WriteLine("List/foreach: {0}ms ({1})", watch.ElapsedMilliseconds, chk);

        chk = 0;
        watch = Stopwatch.StartNew();
        for (int rpt = 0; rpt < 100; rpt++)
        {
            foreach (int i in arr)
            {
                chk += i;
            }
        }
        watch.Stop();
        Console.WriteLine("Array/foreach: {0}ms ({1})", watch.ElapsedMilliseconds, chk);
        Console.WriteLine();

        Console.ReadLine();
    }
}