It seems that the limit function is "cherry-picking" as values approach $0$
A rule that I've been taught over and over is that dividing by zero gives an undefined value. Another is that multiplying by zero gives zero. Also, zero is nothing. Now that I'm learning calculus, it seems that the limit only cares about the fact that "zero = nothing" and pays no attention to the other rules, allowing one, without consequence, to let a value approach zero so as to remove certain terms.
Here are a few concrete examples:
$$\lim_{b \rightarrow a}\frac{f(b) - f(a)}{b-a}$$
That this would amount to anything, when applying arithmetic or algebraic thinking, seems crazy. First of all, as $b$ approaches $a$, the numerator and the denominator both approach $0$, no? What calculus does here looks like trickery to me, not rigor; let's say $f(x) = x^2$.
$$\frac{b^2 - a^2}{b-a} = b + a$$
I agree with this; it's simple algebra. Now, after procuring this expression, calculus says the answer is simply $2a$ as $b$ approaches $a$. That disregards the fact that having $b = a$ changes what the fraction equals; algebraically, it would go from $b + a$ to UNDEF. However, calculus isn't simply substituting $a$ for $b$; it's letting $b$ approach $a$. That's different, but I don't see how that difference allows us to say the limit of the fraction is $2a$ and not UNDEF. To put it simply:
1. The logical conclusion of $b + a$ as $b \rightarrow a$ is $2a$.
2. The logical conclusion of $\displaystyle \frac{x}{b-a}$ as $b \rightarrow a$ is UNDEF.
The above difference quotient happens to be both $(1)$ and $(2)$, yet calculus chooses $(1)$, which is understandable, as it produces more helpful answers; but I don't know how to accept it. Why is this okay? I mean, just because something is helpful doesn't mean it's true. In math, we define things based on convenience, but here we're not talking about simply defining something; we're evaluating the result of an already defined expression. The only thing that is defined by calculus is the limit itself, which makes me think that the answer lies in how it works. I just can't see that answer.
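To make the tension concrete with numbers of my own choosing (say $a = 1$), the quotient takes the values

$$\frac{1.1^2 - 1}{1.1 - 1} = 2.1, \qquad \frac{1.01^2 - 1}{1.01 - 1} = 2.01, \qquad \frac{1.001^2 - 1}{1.001 - 1} = 2.001,$$

which follow conclusion $(1)$, i.e. $b + 1$; but at $b = 1$ itself the quotient becomes $\frac{0}{0}$, which is conclusion $(2)$.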
Another example, just for good measure:
$$f(x) = x^2$$
$$(x + \Delta x)^2 = x^2 + 2x\Delta x + (\Delta x)^2$$
Brilliant.org's course says: "When $\Delta x$ is very small, the area of that smaller shaded piece, $(\Delta x)^2$, is so tiny that we can ignore it completely."
And that's fine, because the course goes on to use $\approx$, and not $=$. So, by ignoring the little piece, we get this:
$$(x + \Delta x)^2 \approx x^2 + 2x\Delta x$$
No problems here. Now, let's figure out the rate of change for the square: we take the difference in area and divide it by the difference in side length, $\Delta x$, which is just the rise over run.
$$\frac{(x + \Delta x)^2 - x^2}{\Delta x} \approx \frac{2x\Delta x}{\Delta x}$$
Still, I have no problem. It is clear that at this point one could cancel the two $\Delta x$'s and be left with the approximate answer, $2x$. What is not clear is how this goes from approximate to exact when $\Delta x \rightarrow 0$:
$$2x\times 0 = 0, \ \ \frac{2x\times 0}{0} = \text{UNDEF}$$
As I said, this feels like trickery. We rearrange the expression so that the factors of $0$ are "out of sight, out of mind", and then we get our answer. The original expression and the rearranged expression are equal, so if the original expression ends up with issues like "UNDEF due to $\div 0$", then the rearranged expression should suffer the same consequences, no?
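For reference, here is the exact algebra with no approximation at all (assuming only $\Delta x \neq 0$):

$$\frac{(x + \Delta x)^2 - x^2}{\Delta x} = \frac{2x\Delta x + (\Delta x)^2}{\Delta x} = 2x + \Delta x.$$

I can see that $2x + \Delta x$ gets arbitrarily close to $2x$, but the same worry applies: the moment $\Delta x$ actually reaches $0$, the left-hand side turns into $\frac{0}{0}$.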
Your concerns are very reasonable from the perspective of someone learning calculus for the first time, but they do not actually pose a real problem. There are no arbitrary choices being made for reasons of convenience. I think a similar complaint could be lodged against much of the modern theory of quantum field theories, where proper mathematical foundations have often not been worked out yet, and so choices have to be made during computations on the basis of what turns out to give a well-defined answer. I hope I can convince you that this is not the situation in modern calculus.
Suppose we have an expression like $a+b$ and wish to take the limit $\lim_{b\to a}(b+a).$ For this specific limit it is valid to apply a rule where we replace all occurrences of $b$ by $a$ and so get $2a.$ This step is valid in this context because, for fixed values of $a$, the function $b+a$ is continuous in the variable $b$. In fact, whenever a function $f(b)$ is continuous at $b=a$, it is valid to apply the rule $$ \lim_{b\to a} f(b)=f(a). $$
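If it helps to see this rule in action numerically, here is a throwaway Python sketch (my own choice of $a = 3$, nothing more than a sanity check): because $b + a$ is continuous at $b = a$, feeding it inputs that approach $a$ produces outputs that approach $2a = 6$.

```python
# Numerical sanity check of lim_{b -> a} (b + a) = 2a for the
# continuous function g(b) = b + a, using a = 3 (an arbitrary choice).
a = 3.0
for step in (0.1, 0.01, 0.001, 0.0001):
    b = a + step          # b approaches a from above
    print(b, b + a)       # the second column approaches 2a = 6
```

No such check is a proof, of course, but it shows what "replace $b$ by $a$" is tracking: the outputs of a continuous function settle on the value at the point being approached.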
Now let's look at the second context where you are trying to apply this rule. This is in the evaluation of the limit $$ \lim_{b\to a}\frac{b^2-a^2}{b-a}. $$ Of course it does not make sense in this context to evaluate the expression at $b=a$ and get $\frac{0}{0}.$ But the deeper reason that replacing $b$ by $a$ to evaluate the limit is not valid in this context is that, for fixed $a$, the function $\frac{b^2-a^2}{b-a}$ is not continuous at $b=a$. In fact, it cannot be continuous at this point because the function $f(b)=\frac{b^2-a^2}{b-a}$ is not even defined at $b=a$! A function $f(b)$ can only be continuous at values of $b$ for which $f$ is defined. This might seem like a tricky way of saying that we just don't get a convenient result when $b=a$, but the real issue is that we would be applying a rigorous theorem (we can replace $b$ by $a$ whenever the function is continuous there) outside the context in which that theorem holds.
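To spell out the step that actually resolves your first example (this is the standard argument, nothing extra is being assumed): the limit only ever inspects values $b \neq a$, and for every such $b$ the quotient equals $b + a$, so

$$\lim_{b\to a}\frac{b^2-a^2}{b-a} \;=\; \lim_{b\to a}\,(b+a) \;=\; 2a.$$

The first equality uses only the fact that the two functions agree for all $b \neq a$; the second uses the continuity of $b + a$. At no point does anyone divide by zero, because the point $b = a$ is never plugged in. The same reasoning turns your square example from approximate to exact: for $\Delta x \neq 0$ the exact quotient is $2x + \Delta x$, and its limit as $\Delta x \to 0$ is exactly $2x$.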
About your claim that $\lim_{b\to a}\frac{x}{b-a}$ is undefined: this actually depends on whether and how $b$ occurs in the expression $x$. If $x$ in no way depends on $b$, and so is fixed (and nonzero) relative to changes in $b$, then your claim is correct. However, if the expression represented by $x$ depends on $b$, then the limit $\lim_{b\to a}\frac{x}{b-a}$ may actually be well defined.
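For instance (examples of my own choosing, just to illustrate how much depends on what $x$ stands for):

$$\lim_{b\to a}\frac{(b-a)^2}{b-a}=\lim_{b\to a}(b-a)=0, \qquad \lim_{b\to a}\frac{b^2-a^2}{b-a}=2a, \qquad \text{whereas } \lim_{b\to a}\frac{1}{b-a} \text{ does not exist.}$$

In the first two cases $x$ depends on $b$ in a way that cancels the troublesome factor; in the last case it does not, and your UNDEF intuition is exactly right.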