Show that a set $A \subset \mathbb{R}^2$ of positive Lebesgue measure contains the vertices of an equilateral triangle

I have the following problem for which I'm trying to figure out a solution, and I'm a bit stuck. Any hints or insights would be appreciated.

Suppose $A \subseteq \mathbb{R}^2$ is a Lebesgue measurable set of positive Lebesgue measure. Show that it contains the vertices of an equilateral triangle.

Here is what I've done so far.

First, we can write $$A =\bigcup_{i = 1}^\infty \left(A \cap (B(0, i) \backslash B(0, i-1))\right)$$ where this is a disjoint union. If $$m\left(A \cap (B(0, i) \backslash B(0, i-1))\right) = 0$$ for all $i$, then $m(A) = 0$ by countable subadditivity, which is a contradiction. Hence some piece $A \cap (B(0, i) \backslash B(0, i-1))$ has positive measure, and we may replace $A$ by it; that is, we can take $A$ to be bounded. Furthermore, by inner regularity, for any $\epsilon > 0$ there is a closed subset $F \subseteq A$ with $m(A \backslash F) < \epsilon$; choosing $\epsilon < m(A)$ gives $m(F) > 0$, and $F$ is closed and bounded, hence compact. Since it suffices to show that $F$ contains the vertices of an equilateral triangle, we can take our set to be compact.

The above so far I believe to be correct. What follows is the rest of my thought process, which I don't believe is correct, but I wonder if anyone has any idea how to fix the gap in it.

Since $F$ is compact, we can cover it with finitely many balls of radius $\epsilon$, so we can write $$F = \bigcup_{i = 1}^n (F \cap B(x_i, \epsilon))$$ Hence, by the countable subadditivity of measure, we have $$m(F) \leq \sum_{i = 1}^n m(F \cap B(x_i, \epsilon))$$ Dividing by the measure of a ball of radius $\epsilon$, we have $$\frac{m(F)}{\pi \epsilon^2} \leq \sum_{i = 1}^n \frac{m(F \cap B(x_i, \epsilon))}{m(B(x_i, \epsilon))}$$ Now, by Lebesgue's density theorem, taking the limit as $\epsilon \to 0$, the right hand side is finite, which implies that $m(F) = 0$, a contradiction. Now, one problem with this is that $n$ actually depends on $\epsilon$, so we can't actually move the limit inside the finite sum. I do think solving this problem is supposed to make use of Lebesgue's density theorem though.

Any hints or insights would be appreciated! Thank you!


I'll use $\mathcal{L}^n$ to denote the Lebesgue (outer) measure on $\mathbb{R}^n$.

Your problem is essentially a variant of the Steinhaus theorem. We will prove:

Let $S$ be a Lebesgue measurable subset of $\mathbb{R}^n$ such that $\mathcal{L}^n (S)>0$, and let $\mathbf{v}_1,\ldots,\mathbf{v}_N$ be a finite collection of vectors in $\mathbb{R}^n$. Then there exists $R>0$, depending on the set $S$, the integer $N$, and the number $M:=\max\left\{ \left\Vert \mathbf{v}_1 \right\Vert, \ldots, \left\Vert \mathbf{v}_N \right\Vert \right\}$, with the following property:

For all $0<r<R$, there exists $p\in S$ such that the $(N+1)$ points $p,\ p+r\mathbf{v}_1,\ \ldots,\ p+r\mathbf{v}_1+\cdots+r\mathbf{v}_N$ all belong to $S$.
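Before turning to the proof, note how this answers the original question: with $n=2$ and $N=2$, take (this particular choice of vectors is just one convenient option) $$\mathbf{v}_1 = (1,0), \qquad \mathbf{v}_2 = \left(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\right).$$ For any $0<r<R$ the theorem produces $p \in S$ such that the three points $$p, \qquad p + r\mathbf{v}_1 = p + (r,0), \qquad p + r\mathbf{v}_1 + r\mathbf{v}_2 = p + \left(\tfrac{r}{2}, \tfrac{r\sqrt{3}}{2}\right)$$ all belong to $S$, and these are the vertices of an equilateral triangle with side length $r$.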

Proof. By inner regularity, we can take a compact subset $K_1$ of $S$ such that $\mathcal{L}^n(K_1)>0$. Let $\beta$ be a small positive number (how small $\beta$ needs to be will be specified later in the proof), and take an open set $U$ such that $$K_1\subset U,\quad \mathcal{L}^n(U) \leq (1+\beta)\mathcal{L}^n(K_1),$$ which is possible by outer regularity, since $\mathcal{L}^n(K_1)$ is finite.

Since $\mathcal{L}^n(U)<\infty$, the complement $U^c$ is a nonempty closed set disjoint from the compact set $K_1$, so $d_1:=\text{dist}(K_1,U^c)$ is positive. The case $M=0$ is obvious, so we may assume $M>0$. We claim that choosing $R=\frac{d_1}{M}$ works. To verify this, take any $r$ such that $0<r<\frac{d_1}{M}$. We shall first show that the set $K_1 \cap (K_1 + r\mathbf{v}_1)$ is nonempty: in fact, we shall prove $$\mathcal{L}^n (K_1 \cap (K_1 + r\mathbf{v}_1)) \geq (1-\beta) \mathcal{L}^n (K_1).$$

Observe that the set $K_1 + r\mathbf{v}_1$ is still contained in $U$: otherwise some $x\in K_1$ would satisfy $x+r\mathbf{v}_1\in U^c$, giving $\text{dist}(K_1,U^c)\leq \left\Vert r\mathbf{v}_1 \right\Vert \leq rM < d_1$, contradicting the definition of $d_1$. Thus we have $K_1 \cup (K_1 + r\mathbf{v}_1) \subset U$, and $$ \mathcal{L}^n(U) \geq \mathcal{L}^n (K_1 \cup (K_1 + r\mathbf{v}_1)) = \mathcal{L}^n (K_1) + \mathcal{L}^n (K_1 + r\mathbf{v}_1) - \mathcal{L}^n (K_1 \cap (K_1 + r\mathbf{v}_1)).$$ Since $\mathcal{L}^n$ is translation invariant, we may rewrite this in the form $$ \mathcal{L}^n (K_1 \cap (K_1 + r\mathbf{v}_1)) \geq 2\mathcal{L}^n (K_1) - \mathcal{L}^n(U),$$ and the right-hand side is at least $(1-\beta) \mathcal{L}^n(K_1)$ by our choice of $U$.

Now we proceed inductively. For each $i=1,\ldots,N$, put $K_{i+1}:=K_i \cap (K_i + r\mathbf{v}_i)$. Clearly $K_{i+1}\subset K_i\subset K_1\subset U$, so by the same distance argument as before, each set $K_i + r\mathbf{v}_i$ is still contained in $U$. We claim that the estimate $$ \mathcal{L}^n (K_{i+1}) \geq \left( 1 - (2^i -1)\beta \right) \mathcal{L}^n (K_1)$$ holds for $i=1,\ldots,N$. For $i=1$, this has already been done. Assuming the inequality for some $i<N$, we have $$ \mathcal{L}^n(U) \geq \mathcal{L}^n \left(K_{i+1} \cup (K_{i+1} + r\mathbf{v}_{i+1})\right) = \mathcal{L}^n (K_{i+1}) + \mathcal{L}^n (K_{i+1} + r\mathbf{v}_{i+1}) - \mathcal{L}^n (K_{i+2}) = 2 \mathcal{L}^n (K_{i+1}) - \mathcal{L}^n (K_{i+2}),$$ so that the induction hypothesis implies $$ \mathcal{L}^n (K_{i+2}) \geq 2 \mathcal{L}^n (K_{i+1}) - \mathcal{L}^n(U) \geq 2\left( 1 - (2^i -1)\beta \right) \mathcal{L}^n (K_1) - (1+\beta)\mathcal{L}^n (K_1) = \left( 1 - (2^{i+1} -1)\beta \right) \mathcal{L}^n (K_1),$$ which is the claimed estimate for $i+1$ and completes the induction. Hence, if we choose $\beta$ small enough at the outset that $\beta<\frac{1}{2^N -1}$, then $\mathcal{L}^n(K_{N+1})>0$, and we obtain the following nested sequence of nonempty compact subsets of $U$: $$ \varnothing\neq K_{N+1}\subset K_N \subset \cdots \subset K_1 \subset U.$$

Assume now that $\beta$ was chosen this way, and take any $q\in K_{N+1}$. Since $K_{N+1}=K_N \cap (K_N + r\mathbf{v}_N)$, we have $q-r\mathbf{v}_N \in K_N$. Again, since $K_N=K_{N-1} \cap (K_{N-1} + r\mathbf{v}_{N-1})$, we have $q-r\mathbf{v}_N-r\mathbf{v}_{N-1} \in K_{N-1}$. Going `backward' in this manner $N$ times, we finally arrive at $q-r\mathbf{v}_{N}-\cdots-r\mathbf{v}_1 \in K_1$. If we let $p=q-r\mathbf{v}_{N}-\cdots-r\mathbf{v}_1$, it is easy to see that $p,\ p+r\mathbf{v}_1,\ \ldots,\ p+r\mathbf{v}_1+\cdots+r\mathbf{v}_N$ all belong to $K_1\subset S$, completing the proof.
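In more detail, unwinding the definitions: for each $i=0,1,\ldots,N$ the backward chain above gives $q - r\mathbf{v}_N - \cdots - r\mathbf{v}_{i+1} \in K_{i+1}\subset K_1$ (for $i=N$ there is nothing to subtract and the point is $q$ itself), and this point equals $$p + r\mathbf{v}_1 + \cdots + r\mathbf{v}_i = q - r\mathbf{v}_N - \cdots - r\mathbf{v}_{i+1},$$ so each of the $N+1$ points in question indeed lies in $K_1$.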


Remark. The proof really shows that $R=\frac{d_1}{M}$ depends only on $S$, $N$, and $M$. Thus, if we fix the set $S$ and consider the case where $N=1$ and $M=1$, a single choice of $R$ works for every $\mathbf{v}_1\in \mathbb{S}^{n-1}$. This means that the difference set $S-S$ contains the open ball $B_{R}(0)$ with radius $R$ and center $0$, which is exactly the content of the classical Steinhaus theorem.
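Spelling out that last implication: any nonzero $v$ with $\Vert v\Vert<R$ can be written as $v=r\mathbf{v}_1$ with $r=\Vert v\Vert\in(0,R)$ and $\mathbf{v}_1=v/\Vert v\Vert\in\mathbb{S}^{n-1}$, so the theorem yields $p\in S$ with $p+r\mathbf{v}_1\in S$, whence $$v=(p+r\mathbf{v}_1)-p\in S-S;$$ together with $0\in S-S$ (as $S$ is nonempty), this gives $B_{R}(0)\subset S-S$.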