How does $x_i$ become $(x_i - \overline{x})$ in the proof for ordinary least squares regression?

I am trying to understand how this:

$\sum_{i = 1}^{n} x_i(y_i - \overline{y})-\beta_1\sum_{i = 1}^{n} x_i(x_i - \overline{x})$

could possibly simplify to this:

$\sum_{i = 1}^{n} (x_i - \overline{x})(y_i - \overline{y})-\beta_1\sum_{i = 1}^{n} (x_i - \overline{x})(x_i - \overline{x})$

which, collapsing the repeated factor, is

$\sum_{i = 1}^{n} (x_i - \overline{x})(y_i - \overline{y})-\beta_1\sum_{i = 1}^{n} (x_i - \overline{x})^2$

Set the above expression equal to zero and solve for $\beta_1$, and you have your OLS estimator. I understand the entire proof up to this point.

Am I crazy? Unless the average value of $x_i$ is zero (which nothing in linear regression says it has to be), isn't this impossible? I'm convinced this is a typo.
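(A quick numerical sanity check, sketched below with made-up data and variable names of my own choosing, suggests the two expressions do in fact agree even when $\overline{x} \neq 0$, which only deepens my confusion.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=50)           # deliberately non-zero mean
y = 2.0 + 3.0 * x + rng.normal(size=50)
b1 = 1.7                                   # arbitrary value standing in for beta_1

xbar, ybar = x.mean(), y.mean()

# the original expression vs. the "simplified" one
lhs = np.sum(x * (y - ybar)) - b1 * np.sum(x * (x - xbar))
rhs = np.sum((x - xbar) * (y - ybar)) - b1 * np.sum((x - xbar) ** 2)

print(np.isclose(lhs, rhs))  # True, despite mean(x) != 0
```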


Can you see it from here?

$\sum_{i = 1}^{n} (x_i - \overline{x})(y_i - \overline{y}) = \sum_{i = 1}^{n}(y_i-\overline{y})x_i - \sum_{i = 1}^{n}(y_i-\overline{y})\overline{x} $

But: $ \sum_{i = 1}^{n}(y_i-\overline{y})\overline{x} = \overline{x} \sum_{i = 1}^{n}(y_i-\overline{y}) =0 $.

Since: $ \sum_{i = 1}^{n}(y_i-\overline{y}) = \sum_{i = 1}^{n}y_i - \sum_{i = 1}^{n} \overline{y} = \sum_{i = 1}^{n}y_i - n\overline{y} = n\overline{y} -n\overline{y} =0 $

It follows that: $\sum_{i = 1}^{n} (x_i - \overline{x})(y_i - \overline{y}) = \sum_{i = 1}^{n}(y_i-\overline{y})x_i - 0 = \sum_{i = 1}^{n}(y_i-\overline{y})x_i $
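This identity is easy to verify numerically (a minimal numpy sketch with arbitrary data; the names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(3.0, 2.0, size=30)
y = rng.normal(-1.0, 4.0, size=30)

# deviations from the mean sum to (numerically) zero, so the xbar term drops out
print(np.isclose((y - y.mean()).sum(), 0.0))
# hence centering x or not gives the same sum
print(np.isclose(np.sum((x - x.mean()) * (y - y.mean())),
                 np.sum(x * (y - y.mean()))))
```

Both print `True`.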

For the other part, following the same pattern (the last step uses $\sum_{i = 1}^{n}(x_i-\overline{x}) = 0$, which follows by the same argument as above):
$\sum_{i = 1}^{n} (x_i - \overline{x})(x_i - \overline{x}) = \sum_{i = 1}^{n}(x_i-\overline{x})x_i - \overline{x} \sum_{i = 1}^{n}(x_i-\overline{x}) = \sum_{i = 1}^{n}(x_i-\overline{x})x_i $
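Putting the two pieces together and solving the zeroed expression for $\beta_1$ gives the usual estimator $\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \overline{x})(y_i - \overline{y})}{\sum_{i=1}^{n}(x_i - \overline{x})^2}$. As a final sketch (the data and names are mine; `np.polyfit` serves only as an independent reference), the centered and uncentered forms of the sums yield the same slope:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10.0, 3.0, size=100)        # non-zero mean on purpose
y = 1.5 + 0.8 * x + rng.normal(size=100)

xc, yc = x - x.mean(), y - y.mean()

b1_centered = np.sum(xc * yc) / np.sum(xc ** 2)    # centered form
b1_uncentered = np.sum(x * yc) / np.sum(x * xc)    # uncentered form

# both agree with the slope from an off-the-shelf least-squares fit
print(np.allclose([b1_centered, b1_uncentered], np.polyfit(x, y, 1)[0]))
```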