Expected time till absorption in specific state of a Markov chain

[A partial answer illustrating the first method.]

State $1$ represents losing the game, so remove it from the system and condition the remaining transition probabilities on the event that the system does not transition from state $i$ to state $1$. In practical terms, delete column $1$ from your transition matrix, along with every row that has a $1$ in that column, and scale each remaining row $i$ by $1/(1-p_{i1})$. For your example system, this produces the reduced matrix
$$P' = \begin{bmatrix}0&1&0&0&0 \\ 0&0&0&1&0 \\ q&0&0&0&p \\ 0&0&q&0&p \\ 0&0&0&0&1 \end{bmatrix}.$$
Applying standard techniques to this matrix (e.g. solving $t = (I-Q)^{-1}\mathbf 1$, where $Q$ is the transient block of $P'$), we can verify that the absorption probability is $1$ from every starting state and that the conditional expected absorption times are
$$\begin{bmatrix}{3+q\over1-q^2} \\ {2+q+q^2\over1-q^2} \\ {1+3q\over1-q^2} \\ {1+q+2q^2\over1-q^2}\end{bmatrix}.$$
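
For concreteness, here is a minimal sympy sketch of that last step, assuming $p = 1-q$ (as the row sums of $P'$ suggest): it takes the transient block $Q$ of $P'$, solves $t = (I-Q)^{-1}\mathbf 1$, and checks the result against the closed forms above.

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# Transient block of P' (the four non-absorbed states).  The column for
# the absorbing state is dropped, so the p-entries do not appear here.
Q = sp.Matrix([
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [q, 0, 0, 0],
    [0, 0, q, 0],
])

# Conditional expected absorption times: t = (I - Q)^{-1} * 1
t = (sp.eye(4) - Q).inv() * sp.ones(4, 1)

expected = sp.Matrix([
    (3 + q)          / (1 - q**2),
    (2 + q + q**2)   / (1 - q**2),
    (1 + 3*q)        / (1 - q**2),
    (1 + q + 2*q**2) / (1 - q**2),
])

print(sp.simplify(t - expected))  # zero vector: the closed forms check out
```

For a specific numeric value of $q$ the same computation can be done with a plain linear solve (e.g. `numpy.linalg.solve` on $I-Q$).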