Expectation of an event

If I am understanding the problem correctly, it appears that we can express everything in terms of the number $X_n$ of 1s in the array at step $n$. This gives a much simpler Markov chain to analyze.

Let $N = 1000$, and suppose at time $n$ we have $k$ 1s in the array. Suppose first that $A[j_1] = 0$ (happens with probability $(N-k)/N$). We set it to 1, gaining one 1 (so now there are $k+1$ 1s). If $A[j_2]$ and $A[j_3]$ are both 1 (prob $((k+1)/N)^2$) or both 0 (prob $((N-(k+1))/N)^2$), no more 1s are gained. If exactly one of $A[j_2], A[j_3]$ is 1 (prob $2(k+1)(N-(k+1))/N^2$), an additional 1 is gained.

Similarly, if $A[j_1] = 1$ (prob $k/N$), then we gain an additional 1 with probability $2k(N-k)/N^2$ (exactly one of $A[j_2], A[j_3]$ is 1), or no additional 1s with probability $(k^2 + (N-k)^2)/N^2$ (both are 1 or both are 0).

So putting this all together, the transition probabilities for $X_n$ are: $$\begin{align*} p(k, k) &= \frac{k\bigl(k^2 + (N-k)^2\bigr)}{N^3} \\ p(k, k+1) &= \frac{2k^2(N-k) + (N-k)\bigl((k+1)^2 + (N-k-1)^2\bigr)}{N^3} \\ p(k, k+2) &= \frac{2(N-k)(k+1)(N-k-1)}{N^3} \end{align*} $$
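
Indeed, these sum to 1: grouping the $A[j_1]=1$ and $A[j_1]=0$ cases, $$\begin{align*} N^3\sum_m p(k,m) &= k\bigl(k^2 + 2k(N-k) + (N-k)^2\bigr) + (N-k)\bigl((k+1)^2 + 2(k+1)(N-k-1) + (N-k-1)^2\bigr) \\ &= k\,N^2 + (N-k)\,N^2 = N^3. \end{align*}$$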

These sum to 1, as checked above, so this should at least reduce the problem. There might be a slick martingale solution; I'll edit if I think of something.
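
In the meantime, here is a rough Monte Carlo sketch of the reduced chain (plain Python, simulating $X_n$ directly from the transition probabilities above; the number of runs is just illustrative). Averaging the fill time over many runs should give a value close to the expected number of steps.

import random

N = 1000

def step(k):
    # One step of the reduced chain, using the transition probabilities above.
    p_stay  = k*(k**2 + (N-k)**2) / N**3
    p_plus2 = 2*(N-k)*(k+1)*(N-k-1) / N**3
    u = random.random()
    if u < p_stay:
        return k
    if u < p_stay + p_plus2:
        return k + 2
    return k + 1          # remaining probability: gain exactly one 1

def fill_time():
    k, steps = 0, 0
    while k < N:
        k = step(k)
        steps += 1
    return steps

runs = 200                # illustrative; more runs give a tighter estimate
print(sum(fill_time() for _ in range(runs)) / runs)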


Just to complete Nate Eldredge's answer. As his answer says, it's enough to take the state at any point to be the number of ones in the array. Let $\mathsf E[k]$ be the expected number of steps needed to fill the array with ones, starting from a state with $k$ ones. Then calculations essentially as in his answer (see the code comments below) give a recurrence relation for $\mathsf E[k]$, which can be solved, for instance, with the following code (it uses Sage; you can put it in a .sage file and run it), giving the answer: $$2861.36107443079$$

This agrees with user Listing's simulation.
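
Written out, the recurrence encoded in the comments of the code below is $$\mathsf E[k] = 1 + \frac{k}{N}\Bigl(p_{1d}\,\mathsf E[k+1] + (1-p_{1d})\,\mathsf E[k]\Bigr) + \frac{N-k}{N}\Bigl(p_{0d}\,\mathsf E[k+2] + (1-p_{0d})\,\mathsf E[k+1]\Bigr),$$ where $p_{1d} = 2\frac{k}{N}\cdot\frac{N-k}{N}$ and $p_{0d} = 2\frac{k+1}{N}\cdot\frac{N-k-1}{N}$ (the quantities p1d and p0d in the code) and $\mathsf E[k] = 0$ for $k \ge N$; solving for $\mathsf E[k]$ gives the expression returned by findE.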

N = 1000
E = [0]*(N+2)
def findE(k):
    if k>=N: return 0
    p1 = k/N          #Pr. that A[j1]=1
    p0 = 1-p1         #Pr. that A[j1]=0
    p2 = (k+1)/N      #Pr. that A[j2]=1, assuming A[j1] was 0 and was set to 1
    p1d = 2*p1*(1-p1) #Pr. that {A[j2],A[j3]} = {1, 0}, assuming A[j1] was 1
    p0d = 2*p2*(1-p2) #Pr. that {A[j2],A[j3]} = {1, 0}, assuming A[j1] was 0
    #Now,
    #E[k] = 1 + p1*(p1d*E[k+1]+(1-p1d)*E[k]) + p0*(p0d*E[k+2]+(1-p0d)*E[k+1])
    #so
    #E[k]*(1-p1*(1-p1d)) = 1 + p1*p1d*E[k+1] + p0*(p0d*E[k+2]+(1-p0d)*E[k+1])
    return (1 + p1*p1d*E[k+1] + p0*(p0d*E[k+2]+(1-p0d)*E[k+1])) / (1-p1*(1-p1d))

for k in range(N+1, -1, -1): E[k] = findE(k)
print(n(E[0]))

If the last statement is changed to print(E[0]), it prints the exact answer as a rational number, which in lowest terms is a 3075-digit integer divided by a 3072-digit integer: clearly not worth reporting, and also dashing any hopes that a human of reasonable patience could arrive at the answer by hand.
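
For anyone without Sage at hand, here is a rough plain-Python sketch of the same computation, using fractions.Fraction for the exact rational arithmetic; it should reproduce the value above.

from fractions import Fraction

N = 1000
E = [Fraction(0)]*(N+2)         # E[k] = expected remaining steps, starting from k ones
for k in range(N-1, -1, -1):
    p1  = Fraction(k, N)        # Pr. that A[j1]=1
    p0  = 1 - p1                # Pr. that A[j1]=0
    p2  = Fraction(k+1, N)      # Pr. that A[j2]=1, assuming A[j1] was 0 and was set to 1
    p1d = 2*p1*(1 - p1)         # Pr. that {A[j2],A[j3]} = {1, 0}, assuming A[j1] was 1
    p0d = 2*p2*(1 - p2)         # Pr. that {A[j2],A[j3]} = {1, 0}, assuming A[j1] was 0
    E[k] = (1 + p1*p1d*E[k+1] + p0*(p0d*E[k+2] + (1 - p0d)*E[k+1])) / (1 - p1*(1 - p1d))
print(float(E[0]))              # approximately 2861.36107443079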