Can I use the distributive law to write $\sum_{i=0}^{\infty}2^i$ as $1+ 2\sum_{i=0}^{\infty}2^i$?
Solution 1:
Actually, this is a perfectly valid proof that if this series converges, then its value must be $-1$. (As also pointed out by Thomas Andrews in the comments.)
It turns out, however, that the premise "this series converges" is false, so the truth of this conditional statement is not very useful.
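To spell out both halves of that claim (my own summary): writing $S$ for the would-be value of the sum, the manipulation in the question gives, assuming $S$ is a real number,
$$S=\sum_{i=0}^{\infty}2^i = 1+2\sum_{i=0}^{\infty}2^i = 1+2S \quad\Longrightarrow\quad S=-1,$$
while the partial sums are
$$\sum_{i=0}^{n}2^i = 2^{n+1}-1 \longrightarrow \infty \quad (n\to\infty),$$
so the series diverges and the assumption of convergence fails.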
Solution 2:
As Mark raised in the comments, doing any sort of arithmetic with divergent series is meaningless. We often do arithmetic with limits and then justify it after the fact by showing that the limits involved were convergent. Here is another example of this kind of manipulation going wrong:
$$\lim_{n\to\infty} 1 = \lim_{n\to\infty} n\cdot\frac{1}{n}=\lim_{n\to\infty} n\lim_{n\to\infty} \frac{1}{n}=\lim_{n\to\infty} n \cdot 0=0$$
This is obviously false; the problem is that $\lim_{n\to\infty} n$ diverges, so the rule $\lim_{n\to\infty}(a_nb_n)=\lim_{n\to\infty}a_n\cdot\lim_{n\to\infty}b_n$ does not apply, since it requires both limits to exist.
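If a numerical picture helps, here is a minimal sketch (in Python; the choice of language is mine, not the original answer's): the product $n\cdot\frac{1}{n}$ is identically $1$, and what goes wrong is splitting it into two separate limits, since the first factor has no limit.

```python
# Illustration only: the product n * (1/n) equals 1 for every n, so its limit
# is 1.  Splitting it into (lim n) * (lim 1/n) is not justified, because
# lim n does not exist: the first factor grows without bound.
for n in (10, 1_000, 1_000_000):
    print(n, n * (1 / n), 1 / n)   # the product stays 1.0 while 1/n shrinks to 0
```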
Solution 3:
In mathematics, the question is not whether something is "allowed" or not. Rather, everything is based on logic and logically precise definitions.
$\sum_{k=0}^∞ f(k)$ is defined as $\lim_{n\overset{∈ℕ}→∞} \sum_{k=0}^n f(k)$. If the limit exists, then the sum exists (and is said to converge); otherwise the sum is undefined (and is said to diverge). Of course, you also need the precise definition of the limit. With that and the basic properties of limits (which you also need to know), it is obvious that $\sum_{k=0}^∞ (c·f(k)) = c·\sum_{k=0}^∞ f(k)$ (or both sides are undefined) for any constant $c$, because $\lim_{n\overset{∈ℕ}→∞} \sum_{k=0}^n (c·f(k)) = \lim_{n\overset{∈ℕ}→∞} \big( c · \sum_{k=0}^n f(k) \big) = c · \lim_{n\overset{∈ℕ}→∞} \sum_{k=0}^n f(k)$. Make sure you understand that this works because scaling commutes with finite summation and with limits.
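As a quick numerical illustration (not a proof) of that scaling property, here is a small Python sketch with the arbitrarily chosen convergent example $f(k)=(1/2)^k$ and $c=3$:

```python
# Numerical illustration (not a proof): for f(k) = (1/2)**k and c = 3, the
# partial sums of c*f(k) equal c times the partial sums of f(k) (up to
# floating-point rounding), so the two limits agree (both tend to 3*2 = 6).
c = 3.0

def f(k):
    return 0.5 ** k

for n in (5, 20, 50):
    lhs = sum(c * f(k) for k in range(n + 1))   # partial sum of c*f(k)
    rhs = c * sum(f(k) for k in range(n + 1))   # c times the partial sum of f(k)
    print(n, lhs, rhs)
```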
Thus if you use "$E=F$" to denote that either $E$ and $F$ are the same object or that both $E$ and $F$ are undefined (and you understood what I said above), then all the steps except the last one are in fact okay. However, it is best to restrict even those steps to the case where the equalities are between well-defined objects, unless you are absolutely certain of what you are doing. This is because undefined values propagate and do not satisfy the basic arithmetic properties. In particular, if $S$ is undefined then $S-S$ is not $0$ but is also undefined. This also explains why at the end you cannot get from $S = 1+2S$ to $0 = 1+S$: subtracting $S$ from both sides is not a valid step when $S$ is undefined.
Once you fully grasp the above explanation, it may be of interest to you to know that in (advanced) real analysis we frequently use the affinely extended reals, which are essentially $ℝ$ with positive and negative infinity added. Some arithmetic operations on the extended reals are still undefined, such as $(∞-∞)$. But there are significant advantages, especially once you go on to measure theory. For instance, every infinite sum of non-negative terms now has a value, which may be $∞$. In this setting, all the steps except the last are actually correct and the expressions involved are well-defined (the sum here is simply $∞$).
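If you like, IEEE floating-point arithmetic gives a rough (and only rough) analogy for how these extended-real rules propagate; the following Python sketch is purely illustrative:

```python
import math

# Rough analogy only: floating-point infinity behaves much like +infinity in
# the affinely extended reals.  With S = +inf, every step except the last one
# is consistent ...
S = math.inf
print(1 + 2 * S)          # inf
print(1 + 2 * S == S)     # True: S = 1 + 2*S holds with S = infinity

# ... but the last step would require S - S, i.e. infinity minus infinity,
# which is undefined (the float result is NaN), so you cannot pass to 0 = 1 + S.
print(S - S)              # nan
print(math.isnan(S - S))  # True
```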
The bottom line is that you must be absolutely clear about what you mean by any expression that you write; otherwise it may very well be meaningless.