Difference between average case and amortized analysis
I am reading an article on amortized analysis of algorithms. The following is a text snippet.
Amortized analysis is similar to average-case analysis in that it is concerned with the cost averaged over a sequence of operations. However, average case analysis relies on probabilistic assumptions about the data structures and operations in order to compute an expected running time of an algorithm. Its applicability is therefore dependent on certain assumptions about the probability distribution of algorithm inputs.
An average case bound does not preclude the possibility that one will get “unlucky” and encounter an input that requires more-than-expected time even if the assumptions for probability distribution of inputs are valid.
My questions about above text snippet are:
In the first paragraph, how does average-case analysis “rely on probabilistic assumptions about data structures and operations?” I know average-case analysis depends on probability of input, but what does the above statement mean?
What does the author mean in the second paragraph when he says that the average-case bound may not hold even if the assumptions about the input distribution are valid?
Thanks!
Average-case analysis makes assumptions about the input that may not hold in practice. Therefore, if your input is not random, the actual performance of an algorithm may be much worse in the worst case than the average case suggests.
Amortized analysis makes no such assumptions, but it considers the total performance of a sequence of operations instead of just one operation.
Dynamic array insertion provides a simple example of amortized analysis. One algorithm is to allocate a fixed-size array, and when a new element no longer fits, allocate a new array of double the old length and copy the elements over. In the worst case an insertion requires time proportional to the current length of the array, so in the worst case insertion is an O(n) operation. However, you can guarantee that such a worst case is infrequent, so insertion is an O(1) operation using amortized analysis. Amortized analysis holds no matter what the input is.
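For concreteness, here is a minimal Python sketch of such a doubling dynamic array (the class name `DynamicArray` and the counting of element copies as the cost measure are my illustrative choices, not part of the answer above). Appending n elements causes fewer than 2n element copies in total, so each append is O(1) amortized even though a single append can cost O(n):

```python
class DynamicArray:
    """Fixed-capacity backing store that doubles when full."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity
        self.copies = 0  # total element copies: our cost measure

    def append(self, value):
        if self._size == self._capacity:
            # Worst case: copy every existing element into a new,
            # twice-as-large array -- an O(n) operation.
            self._capacity *= 2
            new_data = [None] * self._capacity
            for i in range(self._size):
                new_data[i] = self._data[i]
                self.copies += 1
            self._data = new_data
        self._data[self._size] = value
        self._size += 1

if __name__ == "__main__":
    arr = DynamicArray()
    n = 1_000_000
    for i in range(n):
        arr.append(i)
    # Total copies stay below 2n (1 + 2 + 4 + ... < 2n), so the
    # amortized cost per append is O(1) for *any* input sequence.
    print(f"{n} appends caused {arr.copies} copies "
          f"({arr.copies / n:.2f} per append)")
```

Note that this guarantee needs no assumption about the inserted values: the geometric doubling alone bounds the total cost.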
To get an average-case time complexity, you need to make assumptions about what the "average case" is. If inputs are strings, what is the "average string"? Does only the length matter? If so, what is the average length of the strings I will get? If not, what are the average characters in these strings? These questions become difficult to answer definitively if the strings are, for instance, last names. What is the average last name?
In most interesting statistical samples, the maximum value is greater than the mean. This means that average-case analysis will sometimes underestimate the time/resources needed for certain (problematic) inputs. If you think about it, for a symmetrical probability density function, average-case analysis should underestimate as much as it overestimates. Worst-case analysis, OTOH, considers only the most problematic case(s), and so is guaranteed to overestimate.
Consider the computation of the minimum in an unsorted array. Maybe you know that it has O(n) running time, but if we want to be more precise it does n/2 comparisons in the average case. Why? Because we are making an assumption about the data: we are assuming that the minimum can be in any position with the same probability. If we change this assumption, and say for example that the probability of the minimum being in position i increases with i, we could prove a different comparison count, even a different asymptotic bound.
In the second paragraph the author says that with average-case analysis we can be very unlucky and measure an average cost greater than the theoretical one. Recalling the previous example: if we are unlucky on m different arrays of size n, and the minimum is in the last position every time, then we will measure an average of n and not n/2. This just can't happen when an amortized bound is proven.
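To make the "unlucky inputs" point concrete, here is a small Python sketch (mine, not the author's) that measures one natural cost for find-minimum: the number of times the running minimum is updated. The exact average depends on which operations you count (the n/2 figure above counts comparisons), but the illustrated point is the same: adversarial inputs can push the measured average far above the theoretical average case, whereas an amortized bound holds for every input.

```python
import random

def min_updates(arr):
    """Find the minimum, counting how often the running minimum changes."""
    updates = 0
    current = arr[0]
    for x in arr[1:]:
        if x < current:
            current = x
            updates += 1
    return updates

if __name__ == "__main__":
    n, m = 1000, 500  # array size, number of trial arrays
    random_arrays = [random.sample(range(10 * n), n) for _ in range(m)]
    unlucky_arrays = [list(range(n, 0, -1)) for _ in range(m)]  # minimum always last

    avg_random = sum(min_updates(a) for a in random_arrays) / m
    avg_unlucky = sum(min_updates(a) for a in unlucky_arrays) / m

    # On uniformly random arrays the measured average is small; on the
    # strictly decreasing "unlucky" arrays every element updates the
    # minimum, so the measured average is n - 1 -- far above it.
    print(f"random inputs:  {avg_random:.1f} updates on average")
    print(f"unlucky inputs: {avg_unlucky:.1f} updates on average")
```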