The Plank Problem in 2 Dimensions
We were trying to solve this wonderful problem, but have not succeeded.
It goes like this: Let $R=[0,1]^2$, and let $D\subseteq R$ be a convex set which intersects each side of $R$. Define a plank to be a set of the form $[a,b]\times[0,1]$ or $[0,1]\times[a,b]$, so that planks are parallel to the axes. Show that for every set of planks that covers $D$, the sum of the areas of the planks is at least 1.
Thanks, LLAP.
A hint from the oracle: the Marriage Theorem. (We have still not solved it.)
As promised, here is a geometric solution, without "Marriage Theorem" (I also would like to see a solution which uses it).
A remark concerning probabilistic methods: I do not think that the Lebesgue measure on $R$ is the right object here, since $D$ can have Lebesgue measure zero (e.g. if $D$ equals a diagonal of $R$). I do suspect, however, that there is a proof using a non-Lebesgue measure on $R$ which detects "widths" of subsets rather than their areas (see my comment on George Lowther's proof). However, I do not know how to construct one (at least, not without first proving Theorem 1).
Below is my solution almost completely rewritten (I originally missed two cases).
I will label the vertices of $R$ as $a=(0,0), b=(0,1), c=(1,1), d=(1,0)$. By the "width" of a plank I mean the length of its shorter side. I let $\pi_1, \pi_2$ denote the projections of $R$ to its horizontal side $ad$ and vertical side $ab$ respectively.
Theorem 1. For every finite collection $C$ of planks covering $D$, the total width $W$ of $C$ is $\ge 1$.
Proof. The proof is by induction on the total number $N$ of planks. If $N=1$, it is clear that $W\ge 1$.
Now, consider the case $N=2$, when $C$ consists of one vertical plank $P$ and one horizontal plank $Q$ of widths $v$ and $w$ respectively. (The case $N=2$ with just two horizontal or two vertical planks reduces immediately to $N=1$.)
Lemma 1. If $N=2$ then $W=v+w\ge 1$.
Proof. Since $D$ intersects all four sides of $R$, consider the quadrilateral $D'=x_1y_1x_2 y_2\subset D$, where $x_1\in ad\cap D$, $x_2\in bc\cap D$, $y_1\in ab\cap D$, $y_2\in cd \cap D$. The planks $P$ and $Q$ break the edges of $R$ into subsegments of lengths $g_1, g_2$ and $h_1, h_2$; see the figure below. I will consider only the critical case depicted in the figure, where the segments $y_1x_2, x_2y_2, y_2x_1, x_1y_1$ pass through the four intersection points $z_1, z_2, z_3, z_4$ of the boundaries of the planks (if this is not the case, one can modify $D'$, $P$ and $Q$ so that the total width of $P$ and $Q$ strictly decreases while they still cover $D'$).
Consider the shaded rectangles as in the figure: rectangles marked with the same color have the same area. The total area of the four shaded rectangles adjacent to the corners of $R$ equals $$ (h_1+h_2)(g_1+g_2)= (1-v)(1-w). $$ On the other hand, the area of the rectangle $z_1 z_2 z_3 z_4$ equals $vw$ and, clearly, the total area of the shaded rectangles inside $z_1 z_2 z_3 z_4$ also equals $(1-v)(1-w)$. From this, we get $$ (1-v)(1-w)\le vw, $$ which expands to $1-v-w+vw\le vw$, i.e. $$ 1\le v+w=W, $$ as claimed. qed.
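For readers who like to sanity-check such algebra numerically, here is a tiny Python sketch (not part of the proof) verifying that $(1-v)(1-w)\le vw$ is equivalent to $v+w\ge 1$ for randomly sampled widths:

```python
import random

random.seed(0)
for _ in range(10_000):
    v, w = random.random(), random.random()  # candidate plank widths
    if abs(v + w - 1) < 1e-9:
        continue  # skip near-boundary samples to avoid float rounding issues
    # (1-v)(1-w) <= vw expands to 1 - v - w <= 0, i.e. v + w >= 1
    assert ((1 - v) * (1 - w) <= v * w) == (v + w >= 1)
```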
Now, let $N\ge 3$ and assume that $W\ge 1$ for every collection of fewer than $N$ planks covering $D$. I will consider only collections of planks in which each plank is different from the entire $R$ (otherwise, the claim is clear).
Consider a collection $C$ of planks which consists of $m$ vertical planks $P_i$ and $n$ horizontal planks $Q_j$, where $m\le n$ (otherwise we swap the vertical and horizontal coordinates) and $N=m+n\ge 3$. I number the vertical planks $P_i$ from left to right. Without loss of generality, we can assume that the vertical planks are pairwise disjoint and the horizontal planks are also pairwise disjoint (otherwise, we amalgamate the non-disjoint vertical/horizontal planks, without increasing the total width $W$, while reducing the number of planks). For the same reason, we can assume that no proper subset of $C$ covers $D$.
The complement $$ D\setminus \bigcup_{i=1}^m P_i $$ is a union of convex sets $D_j$. I also number $D_j$'s from left to right. Each $D_j$ is covered by exactly one horizontal plank (by the disjointness property). Hence, the number $k$ of the convex sets $D_j$ is $\le n$. Furthermore, $k\in \{m-1, m, m+1\}$ since there exists exactly one $D_j$ between each consecutive pair $P_i, P_{i+1}$. Since $m\le n$, it follows that at most one of the planks $Q_i$ covers two different components $D_j$, which means that $$ k-1\le n\le k. $$ By combining these inequalities, we see that $$ m\le k\le m+1, $$ and, hence, either $P_1$ or $P_m$ does not contain one of the two vertical sides of $R$. (I assume that it is $P_1$ which does not contain $ab$, otherwise, we change the orientation.) Lastly, if $k=m$ (i.e, $P_m$ contains $cd$) then $n=k=m$ and, hence, each plank $Q_i$ covers exactly one of the components $D_j$. In any case, I let $Q_1$ denote the horizontal plank covering $D_1$.
Case 1: $k=n$, i.e., each $Q_i$ covers exactly one $D_i$, $i=1,...,n$. As we noted above, the component $D_1$ has nonempty intersection with the vertical segment $ab$. Let $h_1$ denote the length of the projection $\pi_2(D_1)\subset ab$ and let $v_1$ denote the length of the projection $\pi_1(D_1)$ of $D_1$. Since $D_1$ is covered by $Q_1$, it is clear that the width of $Q_1$ is $\ge h_1$.
Lemma 2. $v_1\le h_1$.
Proof. Consider a point $x\in D_1\cap ab$ and the segment $yz=D_1\cap P_1$. The triangle $\Delta=xyz$ is contained in $D_1$. Moreover, convexity of $D$ and the fact that it intersects all four sides of $R$ imply that the rays from $x$ extending the segments $xy$ and $xz$ both intersect the union of the horizontal segments $bc\cup da$. It is then an elementary geometric calculation to conclude that the height $v_1$ (from the vertex $x$) of $\Delta$ is $\le$ the length of the base $yz$ of $\Delta$. The latter is $\le h_1$. qed.
Let $P_0\subset R$ denote the vertical plank which is the product $\pi_1(D_1)\times ab$ ($P_0$ is not a part of our collection $C$ of planks). By Lemma 2, the width of $P_0$ is $\le$ the width of $Q_1$.
Now replace $Q_1$ with $P_0\cup P_1$. Since $Q_1$ covers only the component $D_1$, it follows that the new collection $C'$ of planks still covers $D$. By Lemma 2, $C'$ has total width $W'\le W$ and, by construction, $C'$ consists of $N-1$ planks. Therefore, by the induction assumption, $1\le W'\le W$ and we are done. This concludes the proof of Theorem 1 in Case 1.
Case 2: $n=k-1$. Then $k=m+1$, $n=m$.
Subcase 2a: $Q_1$ covers $D_1$ and (after renumbering if necessary) a distinct horizontal plank $Q_n$ covers $D_{n+1}$. Then we repeat the argument from Case 1 and conclude that $W\ge 1$.
Subcase 2b: $Q=Q_1$ covers both $D_1$ and $D_{n+1}$. We then switch the roles of the horizontal and vertical directions (this is OK since $n=m$). Repeating the above case-by-case analysis, we conclude that there exists a vertical plank $P:=P_\ell$ which covers two components $E_1, E_{n+1}$ of $$ D\setminus \bigcup_{i=1}^n Q_i. $$ The components $E_1, E_{n+1}$ have nonempty intersections with the horizontal sides of $R$. Now, pick from each of the subsets $D_1, D_{n+1}, E_1, E_{n+1}$ one point $y_1, y_2, x_1, x_2$, respectively, which belongs to a side of $R$.
Lemma 3. The convex hull $D'$ of $x_1, x_2, y_1, y_2$ is covered by the planks $P$ and $Q$.
Proof. It suffices to show that every edge of $D'$ is contained in $K=P\cup Q$. Suppose that there exists an edge of $D'$ (after relabeling, we can assume that this edge is $x_1y_1$) which is not contained in $P\cup Q$. Let $x_1'\in x_1y_1$ and $y_1'\in x_1y_1$ be the points of intersection of the boundary of $K$ with $x_1y_1$ (these intersection points are labelled so that $x_1'$ belongs to the left side $s_1$ of $P$ and $y_1'$ belongs to the bottom side $t_1$ of $Q$). Let $z_1'=s_1\cap t_1$; then the solid right-angled triangle $x_1'y_1'z_1'$ is not covered by $P\cup Q$. However, by the disjointness property of the planks, it follows that a small corner of $x_1'y_1'z_1'$ at the vertex $z_1'$ is not covered by any plank in $C$.
Contradiction. qed.
In view of Lemma 3, we can now apply Lemma 1 (since we have a covering of $D'$ by two planks) and conclude that the total width of $P$ and $Q$ is $\ge 1$.
This concludes the proof of Theorem 1. qed
I gave a proof of Theorem 1 for finite collections of planks. An easy approximation argument implies that the same conclusion holds for countable collections of planks. (For uncountable collections, the statement is meaningless, as far as I can tell.)
Theorem 1 is a special case of a much harder theorem (Theorem 2 below), proved by T. Bang in
"A solution of the plank problem", Proceedings of the AMS 2 (1951).
The "Plank Problem" itself was posed by A.Tarski in 1932.
Theorem 2: Let $D\subset R^d$ be a bounded convex subset and let $C$ be a countable collection of planks covering $D$. Then the total width of $C$ is $\ge$ the width of $D$.
Here are the definitions: A plank in $R^d$ is the closed region bounded by two parallel hyperplanes. The width of a plank is the distance between the hyperplanes. The total width of $C$ is the sum of the widths of the planks in $C$. The width of $D$ is the infimum of the lengths of its orthogonal projections to straight lines in $R^d$.
Bang's proof is clever and nonelementary. It has led to a number of problems some of which are still open. Read here to find more about it.
Here's an alternative proof, using a probabilistic 'coupling' method, via the following Lemma. Throughout, I let $D$ be as in the question: a convex subset of $R=[0,1]^2$ intersecting each of the four sides of $R$.
Lemma: There exist random variables $U_1,U_2$ (defined on some probability space) which are both uniform on $[0,1]$, and such that $(U_1,U_2)$ lies in $D$ with probability 1.
[Note: Although $U_1$ and $U_2$ here have the uniform distribution, there is no requirement for the joint distribution of $(U_1,U_2)$ to be uniform, or even to be absolutely continuous. For example, if $D$ is one of the diagonals of the unit square, which has zero Lebesgue measure but satisfies the requirements, then we would take $U_1=U_2$ (or $U_1=1-U_2$). This ensures that $(U_1,U_2)\in D$ and, if $U_2$ is chosen with the uniform distribution, then $U_1$ is also uniform.]
If $A\subseteq[0,1]$ is the set of $x$-coordinates of the vertical planks, then it is a finite union of closed intervals, the vertical planks cover the region $A\times[0,1]$ and have total area $\lambda(A)$ ($\lambda$ is Lebesgue measure). Similarly, if $B\subseteq[0,1]$ is the set of $y$-coordinates of the horizontal planks then they cover the region $[0,1]\times B$ and have total area $\lambda(B)$. The following corollary of the above lemma is then a positive answer to the original question.
Corollary: If $A,B$ are measurable subsets of $[0,1]$ such that $D\subseteq(A\times[0,1])\cup([0,1]\times B)$, then $\lambda(A)+\lambda(B)\ge1$.
Proof: Letting $U_1,U_2$ be random variables uniformly distributed on $[0,1]$ with $(U_1,U_2)\in D$ (almost surely), \begin{align} \lambda(A)+\lambda(B)&=\mathbb{P}(U_1\in A)+\mathbb{P}(U_2\in B)\\ &\ge\mathbb{P}(U_1\in A{\rm\ or\ }U_2\in B)\\ &=\mathbb{P}\left((U_1,U_2)\in(A\times[0,1])\cup([0,1]\times B)\right)\\ &\ge\mathbb{P}\left((U_1,U_2)\in D\right)\\ &=1. \end{align}
I'll give a proof of the lemma now, which will require going through several cases. Letting $\pi_i\colon\mathbb{R}^2\to\mathbb{R}$ ($i=1,2$) denote projection onto the $x$ and $y$ coordinates respectively, $\pi_i(x_1,x_2)=x_i$, we need to find a random variable $X$ taking values in $D$ such that $\pi_1(X)$ and $\pi_2(X)$ are both uniform on $[0,1]$. The lemma then follows by taking $U_i=\pi_i(X)$.
I'll make repeated use of the fact that if $X$ is uniformly distributed on one of the diagonals of a rectangle $[a_1,b_1]\times[a_2,b_2]$ then $\pi_i(X)$ is uniformly distributed on $[a_i,b_i]$. The idea is to find a finite set of rectangles $R^{(k)}=[a^{(k)}_1,b^{(k)}_1]\times[a^{(k)}_2,b^{(k)}_2]$ and numbers $p^{(k)}\ge0$ summing to $1$ such that each $R^{(k)}$ has a pair of opposite corners in $D$ (and, by convexity, a diagonal in $D$); then we can choose $X$ to be uniformly distributed on the diagonal of $R^{(k)}$ with probability $p^{(k)}$. Then, $U_i=\pi_i(X)$ is uniformly distributed on the interval $[a^{(k)}_i,b^{(k)}_i]$ with probability $p^{(k)}$, so has probability density $$ p_i(x)=\sum_kp^{(k)}(b^{(k)}_i-a^{(k)}_i)^{-1}1_{\{a^{(k)}_i < x\le b^{(k)}_i\}}. $$ I'll do this so that the probability density is $1$ on $(0,1]$.
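The projection fact can be checked by simulation. Here is a hedged Python sketch (the rectangle values are arbitrary examples): sample $X$ uniformly along the diagonal of $[a_1,b_1]\times[a_2,b_2]$ and confirm that each coordinate has the mean and variance of the uniform distribution on the corresponding interval:

```python
import random

random.seed(1)
a1, b1, a2, b2 = 0.2, 0.9, 0.1, 0.5  # an arbitrary example rectangle
n = 200_000

xs, ys = [], []
for _ in range(n):
    t = random.random()            # uniform parameter along the diagonal
    xs.append(a1 + t * (b1 - a1))  # pi_1(X)
    ys.append(a2 + t * (b2 - a2))  # pi_2(X)

# Uniform([a,b]) has mean (a+b)/2 and variance (b-a)^2/12.
mean_x = sum(xs) / n
var_x = sum((x - mean_x) ** 2 for x in xs) / n
mean_y = sum(ys) / n
assert abs(mean_x - (a1 + b1) / 2) < 1e-2
assert abs(var_x - (b1 - a1) ** 2 / 12) < 1e-2
assert abs(mean_y - (a2 + b2) / 2) < 1e-2
```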
First, let $P_1=(x_1,0)$, $Q_1=(0,y_1)$, $P_2=(x_2,1)$ and $Q_2=(1,y_2)$ be points of intersection of $D$ with the edges of $R$. Then, the quadrilateral $P_1Q_1P_2Q_2$ is contained in $D$.
By symmetry, we suppose that $x_1\le x_2$, $x_1\le y_1$. There are a few cases to consider (I include diagrams, with the shaded area being the convex hull of the points $P_1,Q_1,P_2,Q_2$, which is contained in $D$, and the red lines are the rectangle diagonals supporting the distribution of $X$).
Case I: $y_1\ge y_2$
In this case, the four rectangles with diagonals $P_1Q_1$, $Q_1P_2$, $P_2Q_2$, $Q_2P_1$ do not intersect. Together with the center rectangle $[x_1,x_2]\times[y_2,y_1]$, which is contained in $D$, they partition $R$. Explicitly, the rectangles $[0,x_1]\times[0,y_1]$, $[0,x_2]\times[y_1,1]$, $[x_2,1]\times[y_2,1]$, $[x_1,1]\times[0,y_2]$ and $[x_1,x_2]\times[y_2,y_1]$ partition $R$ and each has a diagonal in $D$. So, we can choose $X$ to be uniformly distributed on a diagonal of any of these rectangles with probability equal to the area of the rectangle.
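A quick numerical check of this partition, with hypothetical values satisfying the Case I ordering ($x_1\le x_2$, $x_1\le y_1$, $y_2\le y_1$): the five areas sum to $1$ and the resulting mixture gives each projection density $1$:

```python
# Hypothetical Case I values: x1 <= x2, x1 <= y1, y2 <= y1.
x1, x2, y1, y2 = 0.2, 0.6, 0.7, 0.3

# Each rectangle is (x-interval, y-interval); its probability is its area.
rects = [((0, x1), (0, y1)), ((0, x2), (y1, 1)), ((x2, 1), (y2, 1)),
         ((x1, 1), (0, y2)), ((x1, x2), (y2, y1))]
probs = [(hx - lx) * (hy - ly) for (lx, hx), (ly, hy) in rects]
assert abs(sum(probs) - 1) < 1e-12  # the rectangles partition R

# Mixture density of each projection should be identically 1 on (0,1].
def density(intervals, x):
    return sum(p / (hi - lo) for (lo, hi), p in intervals if lo < x <= hi)

proj1 = [(xint, p) for (xint, _), p in zip(rects, probs)]
proj2 = [(yint, p) for (_, yint), p in zip(rects, probs)]
for x in [0.05, 0.25, 0.5, 0.65, 0.95]:
    assert abs(density(proj1, x) - 1) < 1e-9
    assert abs(density(proj2, x) - 1) < 1e-9
```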
In the remaining cases, the rectangles with diagonals $Q_1P_2$ and $Q_2P_1$ overlap, so the construction in Case I does not work.
Case II: $x_1\le y_1\le x_2\le y_2$. In this case $P_1Q_1$ is the diagonal of a rectangle which is taller than it is wide. We add a rectangle to the right of this to make a square, and take diagonals of these. Similarly, we add a rectangle below the one with diagonal $P_2Q_2$ to make a square, and take diagonals of these. These two squares do not overlap, so we can add the line segment in $D$ joining the nearest corners of these squares (which is a diagonal of the square $[y_1,x_2]^2$). More explicitly, using 'BL-TR' to mean 'bottom-left to top-right diagonal', etc.
- $R^{(1)}=[0,x_1]\times[0,y_1]$, $p^{(1)}=x_1$ (TL-BR = $P_1Q_1$ is in $D$).
- $R^{(2)}=[x_1,y_1]\times[0,y_1]$, $p^{(2)}=y_1-x_1$ (BL-TR is in $D$).
- $R^{(3)}=[y_1,x_2]\times[y_1,x_2]$, $p^{(3)}=x_2-y_1$ (BL-TR is in $D$).
- $R^{(4)}=[x_2,1]\times[x_2,y_2]$, $p^{(4)}=y_2-x_2$ (BL-TR is in $D$).
- $R^{(5)}=[x_2,1]\times[y_2,1]$, $p^{(5)}=1-y_2$ (BR-TL = $P_2Q_2$ is in $D$).
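As a sanity check of this list (with hypothetical values satisfying the Case II ordering $x_1\le y_1\le x_2\le y_2$), the probabilities $p^{(k)}$ sum to $1$ and both projection densities are identically $1$:

```python
# Hypothetical Case II values: x1 <= y1 <= x2 <= y2.
x1, y1, x2, y2 = 0.1, 0.3, 0.6, 0.8

# (x-interval, y-interval, probability) triples copied from the list above.
rects = [((0, x1), (0, y1), x1),
         ((x1, y1), (0, y1), y1 - x1),
         ((y1, x2), (y1, x2), x2 - y1),
         ((x2, 1), (x2, y2), y2 - x2),
         ((x2, 1), (y2, 1), 1 - y2)]
assert abs(sum(p for _, _, p in rects) - 1) < 1e-12

# Mixture density of each projection should be identically 1 on (0,1].
def density(intervals, x):
    return sum(p / (hi - lo) for (lo, hi), p in intervals if lo < x <= hi)

proj1 = [(xint, p) for xint, _, p in rects]
proj2 = [(yint, p) for _, yint, p in rects]
for x in [0.05, 0.2, 0.45, 0.7, 0.9]:
    assert abs(density(proj1, x) - 1) < 1e-9
    assert abs(density(proj2, x) - 1) < 1e-9
```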
Case III: $x_1\le y_1\le y_2\le x_2$. This is a similar arrangement to Case II except that the rectangle with diagonal $P_2Q_2$ is taller than it is wide, so we add a rectangle to the left of it rather than below to make a square.
- $R^{(1)}=[0,x_1]\times[0,y_1]$, $p^{(1)}=x_1$ (TL-BR = $P_1Q_1$ is in $D$).
- $R^{(2)}=[x_1,y_1]\times[0,y_1]$, $p^{(2)}=y_1-x_1$ (BL-TR is in $D$).
- $R^{(3)}=[y_1,y_2]\times[y_1,y_2]$, $p^{(3)}=y_2-y_1$ (BL-TR is in $D$).
- $R^{(4)}=[y_2,x_2]\times[y_2,1]$, $p^{(4)}=x_2-y_2$ (BL-TR is in $D$).
- $R^{(5)}=[x_2,1]\times[y_2,1]$, $p^{(5)}=1-x_2$ (TL-BR = $P_2Q_2$ is in $D$).
Case IV: $x_1\le x_2 < y_1 < y_2$. In this case, any two squares (with standard orientation) containing the diagonals $P_1Q_1$ and $P_2Q_2$ overlap, so the constructions in Cases II and III do not work.
The point $(x_2,y_1)$ is in $D$, and there exist unique $a_0\in[0,y_1]$, $b_0\in[x_2,1]$ such that $(x_2,a_0)$ and $(b_0,y_1)$ are on $Q_2P_1$. Explicitly, $a_0=y_2\frac{x_2-x_1}{1-x_1}$ and $b_0=y_2^{-1}(y_1+x_1(y_2-y_1))$. Choosing $a\in[a_0,y_1]$ and $b\in[x_2,b_0]$, so that $(x_2,a)$ and $(b,y_1)$ are in $D$, take
- $R^{(1)}=[0,x_1]\times[0,y_1]$, $p^{(1)}=x_1$ (TL-BR = $P_1Q_1$ is in $D$).
- $R^{(2)}=[x_1,x_2]\times[0,a]$, $p^{(2)}=x_2-x_1$ (BL-TR is in $D$).
- $R^{(3)}=[x_2,b]\times[a,y_1]$, $p^{(3)}=y_1-x_2$ (BL-TR is in $D$).
- $R^{(4)}=[b,1]\times[y_1,y_2]$, $p^{(4)}=y_2-y_1$ (BL-TR is in $D$).
- $R^{(5)}=[x_2,1]\times[y_2,1]$, $p^{(5)}=1-y_2$ (BR-TL = $P_2Q_2$ is in $D$).
It can be seen that, using this distribution, $U_2=\pi_2(X)$ is uniform on $[y_1,1]$ with probability $1-y_1$ and is conditionally uniform on each of $[0,a]$ and $[a,y_1]$. The probability of $U_2$ lying in $[0,a]$ is $x_1\frac{a}{y_1}+(x_2-x_1)$. For $U_2$ to be uniform on $[0,1]$ we need to solve $x_1\frac{a}{y_1}+(x_2-x_1)-a=0$. It can be checked that the left-hand side is positive for $a=a_0$ and negative for $a=y_1$ (where it equals $x_2-y_1<0$), so the equation can be solved for some $a$ in $[a_0,y_1]$. Explicitly, $a=y_1\frac{x_2-x_1}{y_1-x_1}$. Similarly, choosing $b=(y_2-x_2)^{-1}(y_1+x_2(y_2-y_1-1))$ gives the uniform distribution for $U_1=\pi_1(X)$.
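To double-check the Case IV formulas, here is a short Python sketch (the numeric values are hypothetical, chosen to satisfy $x_1\le x_2<y_1<y_2$) verifying that $a\in[a_0,y_1]$, $b\in[x_2,b_0]$, and that both projection densities equal $1$:

```python
# Hypothetical Case IV values: x1 <= x2 < y1 < y2.
x1, x2, y1, y2 = 0.1, 0.3, 0.5, 0.8

a = y1 * (x2 - x1) / (y1 - x1)            # explicit solution for a
b = (y1 + x2 * (y2 - y1 - 1)) / (y2 - x2)  # explicit solution for b
a0 = y2 * (x2 - x1) / (1 - x1)
b0 = (y1 + x1 * (y2 - y1)) / y2
assert a0 <= a <= y1 and x2 <= b <= b0

# (x-interval, y-interval, probability) triples from the Case IV list.
rects = [((0, x1), (0, y1), x1),
         ((x1, x2), (0, a), x2 - x1),
         ((x2, b), (a, y1), y1 - x2),
         ((b, 1), (y1, y2), y2 - y1),
         ((x2, 1), (y2, 1), 1 - y2)]
assert abs(sum(p for _, _, p in rects) - 1) < 1e-12

# Mixture density of each projection should be identically 1 on (0,1].
def density(intervals, x):
    return sum(p / (hi - lo) for (lo, hi), p in intervals if lo < x <= hi)

proj1 = [(xint, p) for xint, _, p in rects]
proj2 = [(yint, p) for _, yint, p in rects]
for x in [0.05, 0.2, 0.4, 0.7, 0.95]:
    assert abs(density(proj1, x) - 1) < 1e-9
    assert abs(density(proj2, x) - 1) < 1e-9
```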