Continuously Differentiable Curves in $\mathbb{R}^{d}$ and their Lebesgue Measure

Show that the image of the curve $\Gamma\in\mathscr{C}^{1}\left([a,b]\to\mathbb{R}^{d}\right)$ has $d$-dimensional Lebesgue measure zero (of course, $d\geq2$).

This can be proved using the absolute continuity of the integral of $|\Gamma'|$ (which holds since $\Gamma'$ is continuous on the compact interval $[a,b]$, hence in $L^{1}([a,b])$) together with the fundamental theorem of calculus to obtain an $\epsilon$-small cover of the image of $\Gamma$ by balls.

But I am trying to prove this using more elementary means (i.e. without integration). Intuitively, since $\Gamma$ is smooth, we ought to (for fine enough partitions) be able to cover the image of $\Gamma$ by boxes which arise from its tangent lines. And by taking the partition of $[a,b]$ finer and finer, this "tangent box" cover ought to have smaller and smaller total measure.

More rigorously, the vector version of the mean value theorem (the mean value inequality) can be applied: $$|\Gamma(t_{i})-\Gamma(t_{i-1})|\leq(t_{i}-t_{i-1})|\Gamma'(t_{i}^{\star})|\leq M_{i}\Delta t,$$ where $t_{i}^{\star}\in(t_{i-1},t_{i})$, $\Delta t=t_{i}-t_{i-1}$, and $M_{i}=\sup_{t\in[t_{i-1},t_{i}]}|\Gamma'(t)|$, which exists and is finite since $\Gamma'$ is continuous.
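
For concreteness, here is a quick numerical sanity check of this inequality on a sample curve (the helix and the uniform partition below are illustrative choices of mine, not part of the problem):

```python
import numpy as np

# Illustrative C^1 curve Gamma: [0, 2*pi] -> R^3 (a helix) -- an assumption for the demo.
Gamma = lambda t: np.array([np.cos(t), np.sin(t), 0.5 * t])
Gamma_prime = lambda t: np.array([-np.sin(t), np.cos(t), 0.5])

a, b, n = 0.0, 2 * np.pi, 50
t = np.linspace(a, b, n + 1)                     # uniform partition t_0 < ... < t_n

for i in range(1, n + 1):
    dt = t[i] - t[i - 1]
    chord = np.linalg.norm(Gamma(t[i]) - Gamma(t[i - 1]))
    # M_i = sup of |Gamma'| on [t_{i-1}, t_i], approximated on a fine subgrid
    M_i = max(np.linalg.norm(Gamma_prime(s)) for s in np.linspace(t[i - 1], t[i], 100))
    assert chord <= M_i * dt + 1e-12             # |Gamma(t_i) - Gamma(t_{i-1})| <= M_i * dt
print("mean value inequality verified on all", n, "subintervals")
```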

But to me, it's not quite clear how to rigorously construct a cover by boxes from here.

NOTE:

In the proof I mentioned (using integration), you essentially sum the left-hand side over all partition intervals of uniform length $\delta$ (which depends on $\Gamma'$) and "integrate" the right-hand side.

Actually, to be more specific: for each $\epsilon>0$ there exists a $\delta>0$ such that $||P||<\delta$ implies $\int_{t_{i-1}}^{t_{i}}|\Gamma'(t)|\,dt<\epsilon$ (by absolute continuity of the integral). This allows you to define the numbers $\epsilon_{i}=\sup_{t,\bar{t}\in[t_{i-1},t_{i}]}|\Gamma(t)-\Gamma(\bar{t})|\leq\epsilon$, so that $\sum_{i=1}^{\#P}\epsilon_{i}\leq||\Gamma'||_{L^{1}([a,b])}$. Then you can use these $\epsilon_{i}$ to place a ball of radius (say) $2\epsilon_{i}$ at each point $\Gamma(t_{i})$, thus giving you an $\epsilon$-small cover. Again though, this is harder to establish without integration theory.
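
To make these quantities concrete, here is a rough numerical sketch (again on the illustrative helix, with the grid approximations being my own shortcuts): it approximates the $\epsilon_i$, checks $\sum_i \epsilon_i \leq ||\Gamma'||_{L^{1}}$, and shows the total measure of the $2\epsilon_i$-balls shrinking as the partition is refined.

```python
import numpy as np
from itertools import combinations

# Same illustrative helix as above; |Gamma'(t)| = sqrt(1.25) for all t.
Gamma = lambda t: np.array([np.cos(t), np.sin(t), 0.5 * t])
speed = lambda t: np.sqrt(np.sin(t)**2 + np.cos(t)**2 + 0.25)

a, b, d = 0.0, 2 * np.pi, 3
omega_d = 4 * np.pi / 3                              # measure of the unit ball in R^3
grid = np.linspace(a, b, 2001)
L1_norm = sum(speed(s) for s in grid[:-1]) * (grid[1] - grid[0])   # Riemann sum for ||Gamma'||_{L^1}

for n in (10, 100, 1000):
    t = np.linspace(a, b, n + 1)                     # uniform partition, mesh (b-a)/n
    eps = []
    for i in range(1, n + 1):
        pts = [Gamma(s) for s in np.linspace(t[i - 1], t[i], 20)]
        # eps_i = diameter of Gamma([t_{i-1}, t_i]), approximated on a grid of sample points
        eps.append(max(np.linalg.norm(p - q) for p, q in combinations(pts, 2)))
    total_ball_volume = sum(omega_d * (2 * e)**d for e in eps)
    print(f"n = {n:5d}   sum eps_i <= ||Gamma'||_L1: {sum(eps) <= L1_norm + 1e-9}"
          f"   total volume of the 2*eps_i balls = {total_ball_volume:.3e}")
```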


Let's prove it more generally for a Lipschitz curve $\gamma: [a,b]\to \mathbb R^d$ with Lipschitz constant $L = \mathrm{Lip}(\gamma)$ instead. (This covers your setting: a $\mathscr{C}^1$ curve on the compact interval $[a,b]$ is Lipschitz with $L = \max_{t\in[a,b]}|\Gamma'(t)|$, by the mean value inequality you quoted.)

Given an interval $I = (c-\epsilon/2, c+\epsilon/2)\cap [a,b]$ of length $\le \epsilon$, it is immediate that $\gamma(I)\subset B_{L\epsilon}(\gamma(c))$. Hence, letting $\lambda$ denote the $d$-dimensional Lebesgue measure and $\omega_d$ the measure of the unit ball in $\mathbb R^d$, we obtain $$\lambda(\gamma(I)) \le \omega_d L^d \epsilon^d$$
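
As a quick sanity check (on an illustrative helix, whose exact Lipschitz constant is $L=\sqrt{1.25}$ since $|\gamma'|$ is constant), one can verify the containment and the resulting volume bound numerically; the particular $c$ and $\epsilon$ below are arbitrary choices:

```python
import numpy as np

# Illustrative Lipschitz curve: a helix, with exact Lipschitz constant L = sqrt(1.25).
gamma = lambda t: np.array([np.cos(t), np.sin(t), 0.5 * t])
L, d = np.sqrt(1.25), 3
omega_d = 4 * np.pi / 3                      # measure of the unit ball in R^3

eps, c = 0.3, 1.0                            # interval I = (c - eps/2, c + eps/2), length eps
I = np.linspace(c - eps / 2, c + eps / 2, 500)

# gamma(I) is contained in the ball of radius L*eps around gamma(c)
assert all(np.linalg.norm(gamma(t) - gamma(c)) <= L * eps for t in I)
print("volume bound for gamma(I):", omega_d * L**d * eps**d)
```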

Now given $n\in \mathbb N$, let $\epsilon = (b-a)/n$. Then we can cover $[a,b]$ by $n$ closed intervals $I_1, \dots, I_n$ of length $\epsilon$, and by the above together with subadditivity it follows that $$\lambda(\gamma([a,b])) \le \sum_{i=1}^n \lambda(\gamma(I_i)) \le n\,\omega_d L^d \epsilon^d = \frac{\omega_dL^d(b-a)^d}{n^{d-1}}$$
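
A minimal numerical illustration of how fast this bound decays (same illustrative helix, so $d = 3$, $L = \sqrt{1.25}$, $[a,b] = [0, 2\pi]$):

```python
import numpy as np

# Decay of the covering bound omega_d * L^d * (b-a)^d / n^(d-1)
# for the illustrative helix: d = 3, L = sqrt(1.25), [a, b] = [0, 2*pi].
d, L, a, b = 3, np.sqrt(1.25), 0.0, 2 * np.pi
omega_d = 4 * np.pi / 3

for n in (10, 100, 1000, 10_000):
    bound = omega_d * L**d * (b - a)**d / n**(d - 1)
    print(f"n = {n:6d}   lambda(gamma([a,b])) <= {bound:.3e}")
```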

Since $n$ can be chosen arbitrarily large, this implies that $\lambda(\gamma([a,b])) = 0$ if $d\ge 2$.

Remark: Essentially the same argument shows that a Lipschitz function can never increase the Hausdorff dimension of a set.