When can one use the Leibniz rule for integration?
TL;DR: if the partial derivative $\frac{\partial f}{\partial t}$ is jointly continuous in $x$ and $t$, then the Leibniz rule applies. If you use the Lebesgue integral (which gives you the dominated convergence theorem), this condition can be relaxed.
Leibniz rule for Riemann integration
When working with Riemann integrals, the standard criterion for interchanging a limit and an integral is the following statement, which relies on uniform convergence (it is, in fact, a special case of the dominated convergence theorem):
Theorem 1. (Interchanging limits and integrals) If $g_n : [a, b] \to \mathbb{R}$ is a sequence of Riemann integrable functions that converges uniformly to a Riemann integrable function $g : [a, b] \to \mathbb{R}$, then $$ \lim_{n \to \infty} \int_a^b g_n(x)\mathrm{d}x = \int_a^b g(x)\mathrm{d}x. $$
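To see that the uniform convergence hypothesis cannot simply be dropped, here is a standard example (not from the question, just for illustration): on $[0, 1]$, take $$ g_n(x) = n \cdot \mathbf{1}_{(0, 1/n)}(x). $$ Each $g_n$ is Riemann integrable and $g_n \to 0$ pointwise, but $\int_0^1 g_n(x)\mathrm{d}x = 1$ for every $n$ while $\int_0^1 0 \,\mathrm{d}x = 0$. The convergence is pointwise but not uniform, and the conclusion of Theorem 1 fails.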
Using this result, we can establish a Leibniz rule for Riemann integration. Because notation with multiple variables can get confusing, let us define $F : \mathbb{R} \to \mathbb{R}$ to be the function $$ F(t) = \int_a^b f(x, t)\mathrm{d}x, $$ where $f : [a, b] \times \mathbb{R} \to \mathbb{R}$ is the function in your question. For a fixed $t_0 \in \mathbb{R}$, we would like to determine whether $F'(t_0)$ exists and whether it is given by the Leibniz rule. The key observation is that we can write the derivative as the limit $$ \tag{1} F'(t_0) = \lim_{h \to 0} \frac{F(t_0 + h) - F(t_0)}{h} = \lim_{h \to 0} \int_a^b \frac{f(x, t_0 + h) - f(x, t_0)}{h} \mathrm{d}x. $$ To apply Theorem 1, we would like the difference quotient to converge uniformly. (That is to say, for every sequence $h_n \to 0$, the difference quotients $\frac{f(x, t_0 + h_n) - f(x, t_0)}{h_n}$ should converge uniformly in $x$.) The difference quotient is a bit unwieldy to work with, however, so we instead use the mean-value theorem to write $$ \tag{2} F'(t_0) = \lim_{h \to 0} \int_a^b \frac{\partial f}{\partial t}(x, t_0 + h_x) \mathrm{d}x, $$ where $|h_x| \leq |h|$ for all $x \in [a, b]$.
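To spell out the mean-value theorem step: for each fixed $x$, applying the mean-value theorem to $t \mapsto f(x, t)$ on the interval between $t_0$ and $t_0 + h$ (this assumes $\frac{\partial f}{\partial t}(x, \cdot)$ exists there) gives $$ \frac{f(x, t_0 + h) - f(x, t_0)}{h} = \frac{\partial f}{\partial t}(x, t_0 + h_x) $$ for some $h_x$ between $0$ and $h$. Note that $h_x$ depends on both $x$ and $h$, which is why all we can say in general is $|h_x| \leq |h|$.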
Theorem 2. (Leibniz rule for Riemann integration) Let $f, F, t_0$ be defined as above. If $\frac{\partial f}{\partial t}$ is continuous on a rectangle $[a, b] \times [t_0 - \delta, t_0 + \delta]$, then $F'(t_0)$ exists and is given by the formula $$ F'(t_0) = \int_a^b \frac{\partial f}{\partial t}(x, t_0) \mathrm{d}x. $$ In particular, if $\frac{\partial f}{\partial t}$ is continuous on all of $[a, b] \times \mathbb{R}$, then $F$ is differentiable everywhere and its derivative is given by the Leibniz rule.
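Here is a quick concrete illustration (my own example, just to show the theorem in action): take $$ F(t) = \int_0^1 e^{-t x^2}\,\mathrm{d}x. $$ The partial derivative $\frac{\partial f}{\partial t}(x, t) = -x^2 e^{-t x^2}$ is jointly continuous on $[0, 1] \times \mathbb{R}$, so Theorem 2 applies at every $t$ and gives $$ F'(t) = -\int_0^1 x^2 e^{-t x^2}\,\mathrm{d}x. $$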
By $(2)$, it suffices to show that for every sequence $h_n \to 0$, the functions $$g_n(x) := \frac{\partial f}{\partial t}(x, t_0 + (h_n)_x)$$ converge uniformly to $g(x) := \frac{\partial f}{\partial t}(x, t_0)$. This can be done using the uniform continuity of $\frac{\partial f}{\partial t}$ on the rectangle. I'll leave the rest of the proof to you.
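(In case a hint helps: $\frac{\partial f}{\partial t}$ is continuous on the compact rectangle $[a, b] \times [t_0 - \delta, t_0 + \delta]$, hence uniformly continuous there. So given $\varepsilon > 0$ there is $\eta > 0$ such that $$ \left|\frac{\partial f}{\partial t}(x, s) - \frac{\partial f}{\partial t}(x, t_0)\right| < \varepsilon \quad \text{whenever } |s - t_0| < \eta, $$ and since $|(h_n)_x| \leq |h_n|$ uniformly in $x$, this gives $\sup_{x \in [a, b]} |g_n(x) - g(x)| \leq \varepsilon$ as soon as $|h_n| < \eta$.)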
Also (if I'm understanding correctly), your criterion that $f(x, t + 1 / n)$ converges uniformly to $f(x, t)$ doesn't quite work. For one, it says nothing about the uniform convergence of the difference quotient in $(1)$, since it only concerns discrete steps of size $1 / n$. Even if $f(x, t + h)$ were to converge uniformly to $f(x, t)$ as $h \to 0$, that would not guarantee uniform convergence of the difference quotient (it does not even guarantee that the derivative exists!).
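For a quick illustration of that last point: take $f(x, t) = |t|$, which does not depend on $x$ at all. Then $f(x, t + h) \to f(x, t)$ uniformly in $x$ as $h \to 0$ (indeed $|f(x, t + h) - f(x, t)| \leq |h|$), yet $$ \frac{f(x, 0 + h) - f(x, 0)}{h} = \frac{|h|}{h} $$ has no limit as $h \to 0$, and $F(t) = (b - a)|t|$ is not differentiable at $t = 0$.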
Leibniz rule for Lebesgue integration
Finally, here's a criterion for the Leibniz rule if we are using the Lebesgue integral.
Theorem 3. (Leibniz rule for Lebesgue integration) Let $X$ be an open subset of $\mathbb{R}$, and let $f : [a, b] \times X \to \mathbb{R}$ be a function such that $x \mapsto f(x, t)$ is Lebesgue integrable for each $t \in X$ and the partial derivative $\frac{\partial f}{\partial t}(x, t)$ exists everywhere. Further suppose there is a measurable function $g : [a, b] \to [0, +\infty]$ such that $\int_a^b g(x)\mathrm{d}x$ is finite and $\left|\frac{\partial f}{\partial t}(x, t)\right| \leq g(x)$ for all $t \in X$ and $x \in [a, b]$. Then $$ \frac{\mathrm{d}}{\mathrm{d}t}\int_a^b f(x, t)\mathrm{d}x = \int_a^b \frac{\partial f}{\partial t}(x, t) \mathrm{d}x. $$
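As a simple illustration of the hypotheses (again my own example): take $X = (0, \infty)$ and $f(x, t) = e^{-t x}$ on $[0, 1] \times X$. Then $\left|\frac{\partial f}{\partial t}(x, t)\right| = x e^{-t x} \leq x =: g(x)$ for all $t > 0$ and $x \in [0, 1]$, and $g$ is integrable, so $$ \frac{\mathrm{d}}{\mathrm{d}t} \int_0^1 e^{-t x}\mathrm{d}x = -\int_0^1 x e^{-t x}\mathrm{d}x \quad \text{for all } t > 0. $$ (This particular example is also covered by Theorem 2; the extra strength of Theorem 3 is that it only requires $\frac{\partial f}{\partial t}$ to be dominated, not continuous.)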
Observe that Theorem 3 supersedes Theorem 2, because continuous functions are bounded on compact sets. Indeed, take $X = (t_0 - \delta, t_0 + \delta)$ and let $g : [a, b] \to \mathbb{R}$ be a constant bound for $\left|\frac{\partial f}{\partial t}\right|$ on the rectangle $[a, b] \times [t_0 - \delta, t_0 + \delta]$; Theorem 2 then follows.
The proof of Theorem 3 is arguably easier than in the case of Riemann integration, at least if one is equipped with the machinery of measure theory. After obtaining $(2)$, the result follows directly from the dominated convergence theorem.
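Concretely: for any sequence $h_n \to 0$, the mean-value theorem gives $$ \left|\frac{f(x, t_0 + h_n) - f(x, t_0)}{h_n}\right| = \left|\frac{\partial f}{\partial t}(x, t_0 + (h_n)_x)\right| \leq g(x), $$ while the difference quotients converge pointwise to $\frac{\partial f}{\partial t}(x, t_0)$ because the partial derivative exists. The dominated convergence theorem then lets us pass the limit in $(1)$ inside the integral.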