What is the relationship of $\mathcal{L}_1$ (total variation) distance to hypothesis testing?

Kullback-Leibler divergence (a.k.a. relative entropy) has a nice property in hypothesis testing. Given an observed measurement $m\in \mathcal{Q}$ and two probability distributions $P_0$ and $P_1$ defined over the measurement space $\mathcal{Q}$, let $H_0$ be the hypothesis that $m$ was generated from $P_0$ and let $H_1$ be the hypothesis that $m$ was generated from $P_1$. Then, for any test, the Type I and Type II error probabilities are related as follows:

$$d(\alpha,\beta)\leq D(P_0\|P_1)$$

where

$$D(P_0\|P_1)=\sum_{x\in\mathcal{Q}}P_0(x)\log_2\left(\frac{P_0(x)}{P_1(x)}\right)$$

is the Kullback-Leibler divergence,

$$d(\alpha,\beta)=\alpha\log_2\frac{\alpha}{1-\beta}+(1-\alpha)\log_2\frac{1-\alpha}{\beta}$$

is called binary relative entropy, and $\alpha$ and $\beta$ are probabilities of Type I and Type II errors, respectively.

This relationship constrains the achievable pairs of Type I and Type II error probabilities of any test in terms of $D(P_0\|P_1)$.
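For concreteness, here is a minimal numerical sketch in Python that evaluates both sides of the inequality for a toy pair of distributions and an arbitrary deterministic test; the distributions, the rejection region, and the helper names (`kl`, `binary_re`) are my own choices for illustration, not anything standard.

```python
import numpy as np

# Toy distributions over a 3-symbol alphabet (chosen arbitrarily).
P0 = np.array([0.5, 0.3, 0.2])
P1 = np.array([0.2, 0.3, 0.5])

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in bits."""
    return np.sum(p * np.log2(p / q))

def binary_re(alpha, beta):
    """Binary relative entropy d(alpha, beta) in bits."""
    return (alpha * np.log2(alpha / (1 - beta))
            + (1 - alpha) * np.log2((1 - alpha) / beta))

# An arbitrary deterministic test: reject H0 whenever the third symbol is observed.
A = np.array([False, False, True])   # rejection region
alpha = P0[A].sum()        # Type I error:  P0(reject H0)
beta = 1 - P1[A].sum()     # Type II error: P1(accept H0)

# d(alpha, beta) <= D(P0||P1): here roughly 0.278 <= 0.397 bits.
print(binary_re(alpha, beta), "<=", kl(P0, P1))
```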

I am wondering if something similar exists for Total Variation distance:

$$TV(P_0,P_1)=\frac{1}{2}\sum_{x\in\mathcal{Q}}\left| P_0(x)-P_1(x)\right|$$

I am aware that

$$2\,TV(P_0,P_1)^2\leq D(P_0\|P_1)$$

Is there more?
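As a quick numerical sanity check, here is a small Python sketch (reusing the same arbitrarily chosen toy distributions as above) comparing $2\,TV^2$ with the divergence. The tight form of this bound is Pinsker's inequality, which is stated with the divergence in nats; the base-2 version above still holds, just more loosely, since the divergence in bits is larger.

```python
import numpy as np

P0 = np.array([0.5, 0.3, 0.2])   # toy distributions, chosen arbitrarily
P1 = np.array([0.2, 0.3, 0.5])

tv = 0.5 * np.abs(P0 - P1).sum()          # total variation distance
kl_nats = np.sum(P0 * np.log(P0 / P1))    # D(P0||P1) in nats
kl_bits = np.sum(P0 * np.log2(P0 / P1))   # D(P0||P1) in bits, as defined above

# Pinsker's inequality is 2*TV^2 <= D with D in nats; since D in bits is
# larger, the base-2 version holds as well (but is looser).
print(2 * tv**2, "<=", kl_nats, "<=", kl_bits)   # ~0.18 <= ~0.275 <= ~0.397
```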

Unfortunately, I am not very well-versed in hypothesis testing and statistics (I know the basics and have pretty good background in probability theory). Any help would be appreciated.


Here's an informal argument toward a lower bound that I recently learned in a lecture.

Suppose we have two probability measures $P_0(\cdot)$ and $P_1(\cdot)$, and suppose I reject $H_0$ (the hypothesis that the data came from $P_0$) exactly when the event $A$ occurs. Then

$$\begin{aligned}
\text{Type I error} + \text{Type II error} &= P_0(A) + P_1(A^C) \\
&= P_0(A) + [1 - P_1(A)] \\
&= 1 + [P_0(A) - P_1(A)] \\
&\geq 1 + \inf_{A}\,[P_0(A)-P_1(A)] \\
&= 1 - \sup_{A}\,[P_1(A)-P_0(A)] \\
&= 1 - TV(P_0, P_1),
\end{aligned}$$

where the last two steps use $\inf_{A}[P_0(A)-P_1(A)]=-\sup_{A}[P_1(A)-P_0(A)]$ and the identity $\sup_{A}[P_1(A)-P_0(A)]=\sup_{A}[P_0(A)-P_1(A)]=TV(P_0,P_1)$, the supremum being attained at $A=\{x: P_1(x)>P_0(x)\}$.
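To see this bound concretely, here is a small Python sketch (again with toy distributions of my own choosing) that enumerates every deterministic test, i.e., every possible rejection region $A$, and checks that the sum of the two error probabilities never falls below $1 - TV(P_0,P_1)$; the minimum is attained by rejecting $H_0$ exactly where $P_1(x) > P_0(x)$.

```python
from itertools import chain, combinations
import numpy as np

P0 = np.array([0.5, 0.3, 0.2])   # toy distributions, chosen arbitrarily
P1 = np.array([0.2, 0.3, 0.5])

tv = 0.5 * np.abs(P0 - P1).sum()

# Enumerate every rejection region A, i.e., every subset of the alphabet.
alphabet = range(len(P0))
regions = chain.from_iterable(combinations(alphabet, r) for r in range(len(P0) + 1))

# Type I error = P0(A), Type II error = 1 - P1(A); take the best achievable sum.
best = min(P0[list(A)].sum() + (1 - P1[list(A)].sum()) for A in regions)

print(best, ">=", 1 - tv)   # here both sides equal 0.7, so the bound is tight
```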