How many times would you have to roll a single die on average to reach a sum of at least 30?
I am primarily asking what this kind of problem is called.
Say I have one six-sided die. How many times on average would I have to roll it in order to have a sum of at least 30?
From some googling, the closest match I have found is the geometric distribution, but I am unsure whether that applies only to pass/fail scenarios (such as flipping a coin or drawing a name out of a hat).
The closest I can get is that it will take no more than 30 rolls (which happens only if every roll is a 1) and no fewer than 5 rolls (five sixes, with probability $1/6^{5}$). My problem arises when I think about the number of ways it could take, say, exactly 6 rolls. It would be easy if each die had 2 outcomes, but each one has 6, and many combinations can accomplish this. I don't even know where to begin figuring out how many combinations exist, other than brute-force counting (which would take hours).
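For what it's worth, here is a quick Monte Carlo sketch (in Python; the function name `rolls_to_reach` and the trial count are arbitrary choices of mine) that estimates the average empirically instead of counting combinations:

```python
import random

def rolls_to_reach(target):
    """Roll a fair six-sided die until the running sum is at least `target`;
    return the number of rolls used."""
    total = rolls = 0
    while total < target:
        total += random.randint(1, 6)
        rolls += 1
    return rolls

trials = 100_000
estimate = sum(rolls_to_reach(30) for _ in range(trials)) / trials
print(estimate)  # empirical mean, roughly 9 rolls
```

Running something like this suggests the average is around 9 rolls, but it still doesn't tell me what the underlying process is called.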
EDIT: I found this related question: Help with the Probability of Rolling Two Ten-Sided Dice Multiple Times Until 100 is Reached.
This is very much the same kind of problem, but it doesn't mention what the process is called, which, in the end, is my main question.
A standard way to solve this is to consider simultaneously the mean number of rolls $t_n$ needed to reach a sum of at least $n$, for every nonnegative integer $n$ (and to remember at the end that the OP is asking for $t_{30}$). So, let us do that...
For every $n\leqslant0$, $t_n=0$. For every $n\geqslant1$, conditioning on the result of the first roll, one sees that $$t_n=1+\frac16\sum_{k=1}^6t_{n-k} $$ Thus, the series $$T(s)=\sum_nt_ns^n$$ solves $$T(s)=\frac s{1-s}+\frac16\sum_{k=1}^6s^kT(s)$$ from which it follows that $$T(s)=\frac{6s}{(1-s)\left(6-\sum\limits_{k=1}^6s^k\right)}=\frac{6s}{6-7s+s^7}$$

To extract the coefficient $t_{30}=[s^{30}]T(s)$ of $s^{30}$ in $T(s)$, rewrite this as $$T(s)=s\left(1-rs\left(1-\frac t7\right)\right)^{-1}=\sum_{n=0}^\infty r^ns^{n+1}\left(1-\frac t7\right)^n$$ where $$r=\frac76\qquad t=s^6$$ Since $t=s^6$, the term of index $n$ contributes to the coefficient of $s^{30}$ only when $n+1+6j=30$ for some $j\geqslant0$, that is, only for $n=5+6k$ with $0\leqslant k\leqslant4$ and $j=4-k$, hence $$[s^{30}]T(s)=\sum_{k=0}^4r^{5+6k}[t^{4-k}]\left(1-\frac t7\right)^{5+6k}$$ or, equivalently, $$[s^{30}]T(s)=\sum_{k=0}^46^{-5-6k}[t^{4-k}]\left(7-t\right)^{5+6k}=\frac7{6^5}\sum_{k=0}^4(-1)^k\left(7r^6\right)^k{5+6k\choose4-k}$$ that is, setting $u=7r^6=\frac{7^7}{6^6}$, $$t_{30}=\frac7{6^5}\left(5-165u+136u^2-23u^3+u^4\right) $$ which can be "simplified" into the exact result $$t_{30}= \frac{333366007330230566034343}{36845653286788892983296}$$ with the numerical approximation $$t_{30}\approx 9.047634594384022902065997942672796588425278684184104625$$
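As a sanity check, the recurrence itself can be evaluated directly with exact rational arithmetic. Here is a minimal Python sketch (the function name `mean_rolls` is mine), which should reproduce the fraction above:

```python
from fractions import Fraction

def mean_rolls(target):
    """t_n = 1 + (1/6) * sum_{k=1}^{6} t_{n-k}, with t_n = 0 for n <= 0."""
    t = {n: Fraction(0) for n in range(-5, 1)}  # boundary values t_n = 0, n <= 0
    for n in range(1, target + 1):
        t[n] = 1 + Fraction(1, 6) * sum(t[n - k] for k in range(1, 7))
    return t[target]

t30 = mean_rolls(30)
print(t30)         # should match the exact fraction above
print(float(t30))  # ~9.0476
```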
Edit: To get some estimates of $t_n$ when $n\to\infty$, note that $$T(s)=\frac s{(1-s)^2Q(s)}$$ where $$Q(s)=\frac16(6+5s+4s^2+3s^3+2s^4+s^5)$$ Furthermore, $(1-s)Q(s)=1-\frac16\sum\limits_{k=1}^6s^k$ has no zero in the closed unit disk except the simple zero $s=1$, hence $Q(s)$ has no zero in the closed unit disk. This implies that $$T(s)=\frac a{(1-s)^2}+\frac b{1-s}+R(s)$$ for some constants $(a,b)$ and some rational fraction $R(s)=\sum\limits_nr_ns^n$ with no pole in the closed unit disk. Then there exist some finite $c$ and some $\varrho$ in $(0,1)$ such that, for every $n$, $$|r_n|\leqslant c\varrho^n$$ This yields, again for every $n$, $$|t_n-a(n+1)-b|=|r_n|\leqslant c\varrho^n$$ Equivalently, $$t_n=an+(a+b)+O(\varrho^n)$$ To identify $(a,b)$, note that $$(1-s)^2T(s)=a+b(1-s)+(1-s)^2R(s)$$ hence $$a=\left.(1-s)^2T(s)\right|_{s=1}=\frac1{Q(1)}$$ and $$b=-\left.\frac d{ds}[(1-s)^2T(s)]\right|_{s=1}=-\frac1{Q(1)}+\frac{Q'(1)}{Q(1)^2}$$ or, equivalently, $$a+b=\frac{Q'(1)}{Q(1)^2}$$ Finally, $Q(1)=\frac72$ and $Q'(1)=\frac{35}6$, hence $a=\frac27$ and $a+b=\frac{10}{21}$, which implies $$t_n=\frac27n+\frac{10}{21}+O(\varrho^n)$$

Edit-edit: More generally, repeatedly throwing a "die" producing a random number of points distributed like $X$ and following the same route, one gets $a=\frac1{E(X)}$ and $a+b=Q'(1)/E(X)^2$ with $Q'(1)=\frac12E(X(X-1))$, hence, for some $\varrho_X$ in $(0,1)$ depending on the distribution of $X$, $$t_n=\frac1{E(X)}n+\frac{E(X(X-1))}{2E(X)^2}+O(\varrho_X^n)$$
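To illustrate the general statement, here is a small sketch (Python again; the helper names `mean_rolls_general` and `die` are mine) comparing the exact values $t_n$ from the recurrence with the affine approximation $\frac1{E(X)}n+\frac{E(X(X-1))}{2E(X)^2}$ in the case of a fair die; the gap should shrink geometrically in $n$:

```python
from fractions import Fraction

def mean_rolls_general(target, pmf):
    """Exact t_n from t_n = 1 + sum_k pmf[k] * t_{n-k}, with t_n = 0 for n <= 0."""
    max_step = max(pmf)
    t = {n: Fraction(0) for n in range(-max_step + 1, 1)}
    for n in range(1, target + 1):
        t[n] = 1 + sum(p * t[n - k] for k, p in pmf.items())
    return t[target]

# Fair six-sided die: P(X = k) = 1/6 for k = 1..6.
die = {k: Fraction(1, 6) for k in range(1, 7)}

EX = sum(k * p for k, p in die.items())              # E(X) = 7/2
EXXm1 = sum(k * (k - 1) * p for k, p in die.items()) # E(X(X-1)) = 35/3
slope = 1 / EX                                       # 2/7
intercept = EXXm1 / (2 * EX**2)                      # 10/21

for n in (10, 20, 30, 60):
    exact = mean_rolls_general(n, die)
    approx = slope * n + intercept
    print(n, float(exact), float(approx))  # the two columns converge rapidly
```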
Let's replace the target "$30$" with a general target $M$. Let $X_n$ denote the 'score' after $n$ throws, and define $Y_n = X_n - \tfrac72 n$. Then $E(Y_n) = 0$ and, moreover, $E(Y_n \mid Y_{n-1}) = Y_{n-1}$, so $(Y_n)$ is a martingale. Let $$ \tau = \inf\{ n \ge 0 \mid X_n \ge M \} = \inf\{ n \ge 0 \mid Y_n \ge M - \tfrac72 n \}. $$ Then $\tau$ is a stopping time for $Y$, and so by the optional stopping theorem $E(Y_\tau) = Y_0 = 0$. (Here $\tau$ is deterministically bounded by $M$, and so this is the easy case for OST.) Since $Y_\tau = X_\tau - \tfrac72 \tau$, $$ E(Y_\tau) = 0 \iff E(\tau) = \tfrac27 E(X_\tau). $$ A key point is that while $X_\tau \ge M$, we don't necessarily have equality. Of course, $X_\tau \in [M,M+5]$, and so $$ E(\tau) \in \bigl[ \tfrac27M, \tfrac27(M+5) \bigr], $$ an interval of width $10/7$. As $M \to \infty$, this tells us that $E(\tau)$ grows linearly in $M$ (and we even know the slope, $\tfrac27$), and that this estimate is off by at most $\mathcal O(1)$.
This is as far as I am able to get. It is not clear to me what the distribution of $X_\tau$ is. (Note that one only needs to know its mean.)
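That said, for any fixed $M$ the distribution of $X_\tau$ can at least be tabulated exactly by dynamic programming, which lets one check the optional-stopping identity numerically (this is just computation, not a closed form). A sketch in Python; the helper `stopping_stats` and its layout are mine:

```python
from fractions import Fraction

def stopping_stats(M):
    """Exact quantities for the first time tau the running sum reaches M.

    v[s] = probability that the running sum ever equals s (for s < M); each
    such visit costs exactly one more roll, so E(tau) = sum(v).  The stopped
    sum X_tau lies in {M, ..., M+5}, reached from some s < M by one roll.
    """
    v = [Fraction(0)] * M
    v[0] = Fraction(1)
    for s in range(1, M):
        v[s] = sum(v[s - k] for k in range(1, 7) if s - k >= 0) / 6
    E_tau = sum(v)
    law = {s: Fraction(1, 6) * sum(v[j] for j in range(max(0, s - 6), M))
           for s in range(M, M + 6)}           # distribution of X_tau
    E_Xtau = sum(s * p for s, p in law.items())
    return E_tau, E_Xtau, law

E_tau, E_Xtau, law = stopping_stats(30)
print(float(E_tau))                      # ~9.0476
print(E_tau == Fraction(2, 7) * E_Xtau)  # the optional-stopping identity, exactly
```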
Update. I was interested in this, and posted a very related question: Distribution at First Time a Sum Reaches a Threshold. The answer given there (not by me, unfortunately!) gives the limiting distribution of $X_\tau$, in a simple form from which it is easy to calculate its mean (in the limit $M \to \infty$). Using that question/answer, and a little bit of algebra, one finds that the answer to the question at hand is $$ E(\tau) = \tfrac27 M + \tfrac{10}{21} + o(1). $$
The answer there also gives an iterative solution for any $M$, but it involves working out a large power of a matrix.
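I don't know which matrix the linked answer uses, but one natural possibility is a companion-style matrix for the linear recurrence from the generating-function answer above: the state $(t_n,\dots,t_{n-5},1)$ is updated by a fixed $7\times7$ matrix, so $t_M$ can be read off from its $M$-th power. A sketch under that assumption (all names are mine):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two matrices of Fractions."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(A, e):
    """A**e by repeated squaring."""
    n = len(A)
    R = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

# State u_n = (t_n, t_{n-1}, ..., t_{n-5}, 1), updated by u_n = A u_{n-1},
# using t_n = 1 + (1/6)(t_{n-1} + ... + t_{n-6}).
A = [[Fraction(1, 6)] * 6 + [Fraction(1)]]                           # builds t_n
A += [[Fraction(int(j == i)) for j in range(7)] for i in range(5)]   # shifts history
A += [[Fraction(0)] * 6 + [Fraction(1)]]                             # keeps the constant 1

u0 = [Fraction(0)] * 6 + [Fraction(1)]      # t_0 = ... = t_{-5} = 0
M = 30
uM = [sum(a * x for a, x in zip(row, u0)) for row in mat_pow(A, M)]
print(uM[0], float(uM[0]))                  # t_30 again, ~9.0476
```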