Question about the proof that "a graph with maximum degree at most k is (k + 1)-colorable"

I'm trying to follow MIT's introductory Mathematics for Computer Science course.

In the reading on graph theory, the proof that a graph with maximum degree at most k is (k + 1)-colorable is given as follows:

We use induction on the number of vertices in the graph, which we denote by n. Let P(n) be the proposition that an n-vertex graph with maximum degree at most k is (k + 1)-colorable.

Base case (n = 1):

  1. A 1-vertex graph has maximum degree 0 and is 1-colorable, so P(1) is true.

Inductive step:

  1. Now assume that P(n) is true, and let G be an (n + 1)-vertex graph with maximum degree at most k.
  2. Remove a vertex v (and all edges incident to it), leaving an n-vertex subgraph, H. The maximum degree of H is at most k, and so H is (k + 1)-colorable by our assumption P(n).
  3. Now add back vertex v. We can assign v a color (from the set of k + 1 colors) that is different from the colors of all its adjacent vertices, since there are at most k vertices adjacent to v, and so at least one of the k + 1 colors is still available.
  4. Therefore, G is (k + 1)-colorable. This completes the inductive step, and the theorem follows by induction.
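
To make sure I'm following the argument, here is my own attempt to translate the inductive step into a recursive sketch (Python; the adjacency-dict representation and the name `greedy_color` are my choices, not from the course notes):

```python
def greedy_color(adj):
    """(k + 1)-color a graph with maximum degree at most k, mirroring the proof.
    `adj` maps each vertex to the set of its neighbors."""
    if not adj:                          # base case: the empty graph needs no colors
        return {}
    v = next(iter(adj))                  # step 2: remove a vertex v...
    rest = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
    coloring = greedy_color(rest)        # ...and color the n-vertex subgraph H
    used = {coloring[u] for u in adj[v]}              # at most k colors on v's neighbors
    coloring[v] = next(c for c in range(len(used) + 1) if c not in used)  # step 3
    return coloring

# A 4-cycle has maximum degree 2, so 3 colors always suffice (here 2 are used):
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_color(cycle))               # e.g. {3: 0, 2: 1, 1: 0, 0: 1}
```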

Simple enough, but I'm confused as to why we need steps 1 and 2 of the inductive step.

That is to say, why do we need to start with an (n + 1)-vertex graph, remove a vertex, and then add it back again? Can't we just define H from the start to be a graph of size n with maximum degree at most k? Doesn't step 3 work just the same?


I think you're asking "Why can't we prove the $(n+1)$-node case by starting with an $n$-node graph and adding a vertex?" The answer is "Because that proves the $(n+1)$-node case only for graphs that can be constructed that way, which might, a priori, not be all of them." What steps 1 and 2 do is show how, given an $(n+1)$-node graph $G$, to find an $n$-node graph $H$ which, when a new node and the right edges are added, becomes the graph $G$ that you're interested in.

That seems completely trivial, and maybe in this case it actually is. But in general, for inductions that prove things on larger and larger sets, it's not always clear how to get from an element of the smaller set to every possible element of the larger set. Students often think that they see "an obvious way" that's in fact wrong.

As an example: consider the $n \times n$ symmetric matrices $M$ whose "outer" entries (i.e., those for which at least one index is either $1$ or $n$) are all $1$s, and whose inner entries are integers other than $1$.
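
For concreteness, here is a $3 \times 3$ member of this family (the inner entry can be any integer other than $1$; the $7$ is an arbitrary choice):

$$M = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 7 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$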

It's pretty clear that to get from a size-$n$ example of this to a size-$(n+1)$ example, you can't just add a row at the bottom and a column at the right. But you might think, "yeah, but you could just add a row in the MIDDLE and a column at the corresponding position in the middle," with the condition that the inserted row starts and ends with $1$ and has non-$1$s in the other places.
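
For instance, that step would take the all-$1$s $2 \times 2$ matrix to exactly the $3 \times 3$ example above, by inserting the middle row $(1, 7, 1)$ and the matching middle column:

$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} 1 & 1 & 1 \\ 1 & 7 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$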

Clear? Pretty much. Unfortunately, it's also wrong. You can't get from the (only) $1 \times 1$ example of this kind of matrix, namely $[1]$, to the only $2 \times 2$ example (which is all $1$s) through this "obvious" step: a $1 \times 1$ matrix simply has no middle position to insert a row into.