When can we interchange the derivative with an expectation?
Interchanging a derivative with an expectation or an integral can be done using the dominated convergence theorem. Here is a version of such a result.
Lemma. Let $X$ be a random variable with values in $\mathcal{X}$ and let $g\colon \mathbb{R}\times \mathcal{X} \to \mathbb{R}$ be a function such that $g(t, X)$ is integrable for all $t$ and $g$ is continuously differentiable w.r.t. $t$. Assume that there is a random variable $Z$ such that $|\frac{\partial}{\partial t} g(t, X)| \leq Z$ a.s. for all $t$ and $\mathbb{E}(Z) < \infty$. Then $$\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) = \mathbb{E}\bigl(\frac{\partial}{\partial t} g(t, X)\bigr).$$
Proof. We have $$\begin{align*} \frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) &= \lim_{h\to 0} \frac1h \Bigl( \mathbb{E}\bigl(g(t+h, X)\bigr) - \mathbb{E}\bigl(g(t, X)\bigr) \Bigr) \\ &= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{g(t+h, X) - g(t, X)}{h} \Bigr) \\ &= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(\tau(h), X) \Bigr), \end{align*}$$ where $\tau(h)$, which lies between $t$ and $t+h$ and may depend on $X$, exists by the mean value theorem. By assumption we have $$\Bigl| \frac{\partial}{\partial t} g(\tau(h), X) \Bigr| \leq Z$$ and thus we can use the dominated convergence theorem to conclude $$\begin{equation*} \frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) = \mathbb{E}\Bigl( \lim_{h\to 0} \frac{\partial}{\partial t} g(\tau(h), X) \Bigr) = \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(t, X) \Bigr), \end{equation*}$$ where the last equality uses the continuity of $\frac{\partial}{\partial t} g$ in $t$. This completes the proof.
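To see the lemma in action, here is a small numerical sketch of my own (the concrete choices $g(t,X)=e^{tX}$ and $X\sim\mathcal N(0,1)$ are mine, not from the lemma): on any bounded $t$-interval $[t-h, t+h]$ one can take $Z = |X|e^{(t+h)|X|}$, which is integrable, and a Monte Carlo estimate confirms that the finite-difference derivative of the expectation matches the expectation of the pathwise derivative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(200_000)

t, h = 0.5, 1e-4

# g(t, X) = exp(t X); with X ~ N(0,1) we know E[g(t, X)] = exp(t^2/2),
# so d/dt E[g(t, X)] = t exp(t^2/2), which should equal E[X exp(t X)].
# On [t - h, t + h] the dominating variable Z = |X| exp((t + h)|X|) is integrable.

# Left-hand side: central finite difference of the sample mean
lhs = (np.exp((t + h) * X).mean() - np.exp((t - h) * X).mean()) / (2 * h)

# Right-hand side: sample mean of the pathwise derivative
rhs = (X * np.exp(t * X)).mean()

exact = t * np.exp(t**2 / 2)
print(lhs, rhs, exact)  # all three agree up to Monte Carlo error
```

Since the same samples are used on both sides, the two estimates differ only by the $O(h^2)$ finite-difference error, while the comparison to the closed form is subject to the usual $O(n^{-1/2})$ Monte Carlo error.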
In your case you would have $g(t, X) = \int_0^t f(X_s) \,ds$ and a sufficient condition to obtain $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ would be for $f$ to be bounded.
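As a hedged illustration of this sufficient condition (the concrete choices here are mine): take $f=\cos$, which is bounded by $1$, and let $X$ be a standard Brownian motion, so that $\mathbb{E}\bigl(f(X_t)\bigr)=e^{-t/2}$ is known in closed form. A crude path simulation with a Riemann-sum approximation of $Y_t = \int_0^t f(X_s)\,ds$ then lets one check $\frac{d}{dt}\mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 5_000, 400, 1.0
dt = T / n_steps

# Brownian increments are exact normals on the grid
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)          # W[:, k] is the path at time (k + 1) * dt

# f = cos is bounded by 1, so Z = 1 dominates the pathwise derivative
fW = np.cos(W)
Y = np.cumsum(fW, axis=1) * dt     # Y[:, k] ~ \int_0^{(k+1) dt} cos(W_s) ds

t_idx = n_steps // 2 - 1
t = (t_idx + 1) * dt               # t = 0.5

# d/dt E[Y_t] via a central finite difference of the sample mean
lhs = (Y[:, t_idx + 1].mean() - Y[:, t_idx - 1].mean()) / (2 * dt)
# E[f(X_t)] directly, and its closed form E[cos(W_t)] = exp(-t/2)
rhs = fW[:, t_idx].mean()
exact = np.exp(-t / 2)
print(lhs, rhs, exact)
```

The finite difference of the Riemann sum collapses to an average of $\cos(W)$ at adjacent grid points, so the two estimates agree up to one time step of discretisation error plus Monte Carlo noise.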
If you want to take the derivative only at a single point $t=t^\ast$, the domination of the derivative is required only for $t$ in a neighbourhood of $t^\ast$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. the Vitali convergence theorem.
The lemma stated in jochen's answer is quite useful. However, there are cases in which the integrand is not differentiable with respect to the parameter. Here is a discussion of some results that hold in a more general setup.
Let $\left(\mathbf{X},\mathcal{X},\mu\right)$ be a general measure space (e.g., a probability space) and let $\xi:\mathbf{X}\times[0,\infty)\rightarrow\mathbb{R}$ be such that:
(a) For every $s\geq0$, $x\mapsto\xi(x,s)$ is $\mathcal{X}$-measurable.
(b) For every $x\in\mathbf{X}$, $s\mapsto\xi(x,s)$ is right-continuous. (This assumption can be weakened by requiring it to hold only $\mu$-a.s., but then $\left(\mathbf{X},\mathcal{X},\mu\right)$ has to be complete.)
In particular, notice that (a) and the right-continuity assumption in (b) imply that $\xi$ is $\mathcal{X}\otimes\mathcal{B}[0,\infty)$-measurable, where $\mathcal{B}[0,\infty)$ is the Borel $\sigma$-field on $[0,\infty)$. For details see, e.g., Remark 1.4 on p. 5 of I. Karatzas, S.E. Shreve, Brownian Motion and Stochastic Calculus, Springer, 1988. Then, for every $(x,t)\in\mathbf{X}\times[0,\infty)$ define $g(x,t)=\int_0^t\xi(x,s)\,ds$ and note that, by right-continuity, $t\mapsto g(x,t)$ has a right-derivative at every $t$ which equals $\xi(x,t)$. In addition, for every $t\geq0$ let
$$\varphi(t)\equiv\int_{\mathbf{X}}g(x,t)\mu(dx)=\int_{\mathbf{X}}\int_0^t\xi(x,s)ds\mu(dx)\,.$$
To make $\varphi(\cdot)$ well-defined, let $m$ be the Lebesgue measure on $[0,\infty)$ and assume that the preconditions of Fubini's theorem are satisfied, e.g., $\xi$ is nonnegative (this assumption can be weakened to hold only $\mu$-a.s., but then $\left(\mathbf{X},\mathcal{X},\mu\right)$ has to be complete) or integrable with respect to $\mu\otimes m$. Then, deduce that
$$\varphi(t)=\int_0^t\zeta(s)ds\ \ , \ \ \forall t\geq0$$
where $\zeta(t)\equiv\int_{\mathbf{X}}\xi(x,t)\mu(dx)$ for every $t\geq0$. This means that if there is a right-continuous version of $\zeta(\cdot)$, then it equals the right-derivative of $\varphi(\cdot)$. Moreover, if this version is continuous, then the fundamental theorem of calculus implies that it is the derivative of $\varphi(\cdot)$.
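The identity $\varphi(t)=\int_0^t\zeta(s)\,ds$ can be checked numerically for a concrete choice (mine, purely for illustration): $\xi(x,s)=e^{-xs}$ with $\mu$ the uniform distribution on $[0,1]$, for which $\zeta(s)=\int_0^1 e^{-xs}\,dx=(1-e^{-s})/s$ in closed form. A sketch comparing the iterated integral with $\int_0^t\zeta(s)\,ds$:

```python
import numpy as np

def trapezoid(y, x, axis=-1):
    """Composite trapezoidal rule along the given axis."""
    y = np.moveaxis(y, axis, -1)
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

# Hypothetical concrete choice: xi(x, s) = exp(-x s), mu = Uniform(0, 1),
# so zeta(s) = \int_0^1 exp(-x s) dx = (1 - exp(-s)) / s.
t = 2.0
s = np.linspace(0.0, t, 2001)
x = np.linspace(0.0, 1.0, 2001)

# phi(t) = \int_X \int_0^t xi(x, s) ds mu(dx), as an iterated integral
g = trapezoid(np.exp(-np.outer(x, s)), s, axis=1)   # g(x, t) for each x
phi = trapezoid(g, x)

# \int_0^t zeta(s) ds, using the closed form (zeta(0) = 1 by continuity)
zeta = np.ones_like(s)
zeta[1:] = (1.0 - np.exp(-s[1:])) / s[1:]
int_zeta = trapezoid(zeta, s)

print(phi, int_zeta)   # both approximate the same number
```

Here $\zeta$ is even continuous, so $\varphi'=\zeta$ holds in the full two-sided sense; the quadrature errors of both sides are of order $h^2$ in the grid spacing.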
In particular, if some convergence theorem can be used to show that the right-continuity of $s\mapsto\xi(x,s)$ for every $x\in\mathbf{X}$ implies the right-continuity of $\zeta(\cdot)$, then
$$\partial_+\varphi(t)=\zeta(t)\ \ ,\ \ \forall t\geq0$$
where $\partial_+$ denotes the right-derivative. For example, this happens when
$$|\xi(x,s)|\leq \psi(x) \ \ , \ \ \mu\text{-a.s.}$$
for some $\psi\in L_1(\mu)$.
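To make the non-differentiable case concrete, here is a toy example of my own: take $\mu=\delta_{1/2}$ (the unit point mass at $x=1/2$) and $\xi(x,s)=\mathbb{1}\{s\geq x\}$. Then $\zeta(t)=\mathbb{1}\{t\geq 1/2\}$ is right-continuous but not continuous, $\varphi(t)=\max(t-1/2,0)$, and $\partial_+\varphi(t)=\zeta(t)$ for all $t\geq0$, even at the kink $t=1/2$ where the two-sided derivative does not exist.

```python
# mu = delta_{1/2} and xi(x, s) = 1{s >= x}, so zeta jumps at t = 1/2
# and phi has a kink there; the right-derivative of phi still equals zeta.

def phi(t):
    # phi(t) = integral over mu of \int_0^t 1{s >= 1/2} ds = (t - 1/2)^+
    return max(t - 0.5, 0.0)

def zeta(t):
    # zeta(t) = xi(1/2, t) = 1{t >= 1/2}, right-continuous with a jump
    return 1.0 if t >= 0.5 else 0.0

h = 1e-6
for t in (0.3, 0.5, 0.8):
    right_derivative = (phi(t + h) - phi(t)) / h
    print(t, right_derivative, zeta(t))  # the last two columns agree
```

At $t=1/2$ the left difference quotient is $0$ while the right one is $1$, so only the right-derivative recovers $\zeta$, exactly as in the statement above.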