What exactly is calculus?
In a nutshell, calculus (as seen in most basic undergraduate courses) is the study of change and of the behaviour of functions and sequences. The three main topics are:
- Limits: How sequences and functions behave when getting closer and closer to a desired point (geometrically, what happens when you "zoom in" near a point)
- Derivatives: How functions change over a parameter (geometrically, the "slope of a graph at a given point")
- Integrals: What's the cumulative effect of a function (geometrically, the "area under a graph")
And obviously (and maybe especially), how these relate to one another; the crown jewel of calculus is probably the Fundamental Theorem of Calculus, which truly lives up to its name and was developed by none other than Newton and Leibniz.
In math, a "calculus" is a set of rules for computing/manipulating some set of objects. For instance the rules $\log AB = \log A+\log B$, etc, are the "logarithm calculus."
But commonly, "calculus" refers to "differential calculus" and "integral calculus." There is a set of rules (product rule, quotient rule, chain rule) for manipulating and computing derivatives. There is also a set of rules (integration by parts, trig substitution, etc.) for manipulating and computing integrals. So, at least etymologically, "calculus" refers to these two sets of rules.
But so far, this has been a bad answer. You really want to know what differential calculus and integral calculus have come to mean. So here goes:
There are a lot of formulas out there that involve multiplying two quantities: $D = RT$, area $= lw$, work $=$ force $\times$ distance, etc. All of these formulas are equivalent to finding the area of a rectangle. If you drive 30 miles per hour for 4 hours, your distance is $D = RT = 30\cdot 4 = 120$ miles. Easy-peasy.
But what if the speed varies during the drive? Now your $30 \times 4$ rectangle is warped. The left and right sides and the bottom are still straight, but the top is all curvy. Still, the distance traveled is the area of the warped rectangle. Integral calculus cuts the area into infinitely many, infinitely tiny rectangles, computes the areas of all of them, and glues them back together to find the total area.
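The cutting-and-gluing can be sketched in a few lines of code. Here the varying speed $v(t) = 20 + 5t$ mph is a made-up profile, chosen so the exact area of the warped rectangle works out to $120$ miles again:

```python
# Hypothetical speed profile for a 4-hour drive: v(t) = 20 + 5t mph.
# The exact area under v is 20*4 + 5*4**2/2 = 120 miles.
def v(t):
    return 20 + 5 * t

def distance(n):
    """Cut [0, 4] hours into n thin rectangles and add up their areas."""
    dt = 4 / n
    return sum(v(i * dt) * dt for i in range(n))

for n in (10, 100, 1000):
    print(n, distance(n))   # closes in on 120 as n grows
```

The more (and thinner) the rectangles, the closer the glued-together area gets to the true distance.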
How much work does it take to lift a 10 pound rock to the top of a 200 foot cliff? $10 \times 200$ foot-pounds. But now what if the rock is ice and it melts on the way up, so that when it reaches the top it weighs only 1 pound? Again the rectangle is warped, and again integral calculus finds its area by summing the tiny bits of work done over each tiny step of the climb.
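A hedged sketch of the melting-rock computation, assuming (purely for illustration) that the weight drops linearly from $10$ lb at the bottom to $1$ lb at the top:

```python
# Assumed melt model: weight at height h is w(h) = 10 - 9*h/200 pounds,
# i.e. linear from 10 lb at h = 0 to 1 lb at h = 200.
def w(h):
    return 10 - 9 * h / 200

def work(n):
    """Add up the tiny bits of work w(h)*dh over n thin slices of height."""
    dh = 200 / n
    return sum(w(i * dh) * dh for i in range(n))

# The exact area of this warped rectangle is 10*200 - 9*200/2 = 1100 ft-lb,
# well short of the 2000 ft-lb for a rock that never melts.
print(work(1000))
```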
Differential calculus is sort of the opposite problem. When a rock is falling, it starts at $0$ ft/sec and accelerates. At each point in time, it is going a different speed. Differential calculus gives us a formula for that constantly changing speed. If you graph the position of the rock against time, then the speed is the slope of that curve at each point in time.
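That "slope at each point in time" can be watched numerically. With the standard free-fall model $s(t) = 16t^2$ feet (ignoring air resistance), shrinking the time interval makes the average speed close in on $32t$ ft/sec:

```python
# Position of the falling rock after t seconds (standard free-fall model).
def s(t):
    return 16 * t ** 2

def average_speed(t, h):
    """Rise over run: distance covered over the short interval [t, t+h]."""
    return (s(t + h) - s(t)) / h

for h in (0.1, 0.001, 0.00001):
    print(average_speed(2.0, h))   # closes in on 32 * 2 = 64 ft/sec
```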
In both integral and differential calculus, we do nothing more than take the formula for the area of a rectangle and the slope formula for a line and make them sit up and do tricks.
Another way of understanding calculus is that it is the science of refining approximations. The idea is that if we cannot calculate a value directly, we come up with a scheme that allows us to approximate the value as closely as we want. If that scheme is good enough that by sufficiently refining the approximation, we can eliminate all but one particular value as being the number we are after, then we have our answer.
There are a great many values that we cannot calculate directly, but which we can approximate. You mentioned one example: area. From our physical experience, we expect shapes to have a comparable quantity called area. If it takes the same amount of paint to cover two different shapes, then if I paint them again, applying the paint to the same thickness, it will still take the same amount of paint the second time. To quantify this, we can define a square with side length $1$ to have area $1$. From this, together with the principles that area should not change under rigid motions and that if a shape is divided into two shapes, the sum of the areas of the parts should be the area of the whole, we can quickly calculate that the area of a rectangle has to be the width $w$ times the height $h$, provided that $w$ and $h$ are both rational values. With a little creativity, we can even show that it holds for some irrational values.
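To make the rational case concrete (the numbers here are chosen purely for illustration): to see that a $\frac 34 \times \frac 25$ rectangle has area $\frac 34 \cdot \frac 25$, cut the unit square into a $20 \times 20$ grid of little squares of side $\frac 1{20}$. Since $400$ of them tile the unit square, each little square has area $\frac 1{400}$. The rectangle is tiled by exactly $15 \times 8 = 120$ of these little squares, so its area is $$\frac{120}{400} = \frac 3{10} = \frac 34 \cdot \frac 25.$$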
But by direct calculation, we can never show that the area of a rectangle is always its width times its height for all irrational values. And even worse, we cannot arrive at an area for any figure whose boundary is not made of line segments strung together. So we have to find a means to calculate such areas indirectly. That means is refining approximations.
While Newton and Leibniz truly do deserve their titles as the fathers of calculus for their discovery of the Fundamental Theorem of Calculus, the basic ideas pre-date them - by nearly 2000 years. The key idea is attributed to Eudoxus, though it may predate him as well. That idea is this: if you are comparing two values $x$ and $y$, and can show that $x$ cannot be less than $y$ and $x$ cannot be greater than $y$, then it has to be that $x = y$. Pretty obvious. Let's apply it to finding areas:
Suppose you have some arbitrary shape $S$. We can't directly calculate the area of $S$, but we can cover it with a grid of squares of side length $\frac 1n$ for some natural number $n$.
Now count the number $M_n$ of squares that overlap $S$ at all, and the number $m_n$ of squares that lie completely inside $S$. Since every square of the latter type is also of the former, it is always the case that $m_n \le M_n$. The total area of the covering squares is the sum of the areas of the individual squares, each of which we already know is $\frac 1{n^2}$, so it will be $\frac {M_n}{n^2}$, and similarly for the contained squares. If $S$ has an area, then it should be true that $$\frac {m_n}{n^2} \le \text{ area of }S \le \frac {M_n}{n^2}$$ for every $n$.
Now for most shapes $S$ we cannot exactly calculate the area this way - not unless $S$ happens to be the union of a bunch of squares. But for nice shapes, we can come up with approximations that are as good as we want. That is, if we say that we want to know the area to a tolerance of $\epsilon$ (epsilon is a traditional variable for this role) for any given $\epsilon > 0$, then by dint of effort we can produce an $n$ big enough that $$0 \le \frac {M_n}{n^2} - \frac {m_n}{n^2} < \epsilon$$ Since the actual area lies between the two, either value differs from the area by an amount less than $\epsilon$.
Now suppose there is a number $A$ that we think should be the area. Let $x < A < y$ for some values $x, y$, and let $\epsilon$ be the smaller of $A - x$ and $y - A$. If we can always find $n$ as above, then $$x = A - (A - x) \le A - \epsilon < \frac {m_n}{n^2} \le \text{area of }S$$ Hence the area cannot be $x$. And similarly, it cannot be $y$. So, per Eudoxus, the area has to be $A$. (If we cannot find such an $n$, then we were wrong about $A$ being the area.)
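This bracketing procedure is concrete enough to run. The sketch below counts covering and contained squares for a unit disk, whose area we expect to be $\pi$. The inside test uses convexity (a convex shape contains a square whenever it contains all four corners), and for this disk and grid, any square containing an interior point of the disk has a corner strictly inside it, so the corner tests suffice:

```python
import math

def bracket(n):
    """Return (m_n/n^2, M_n/n^2) for the unit disk covered by a 1/n grid."""
    M = m = 0
    for i in range(-n, n):               # square [i/n,(i+1)/n] x [j/n,(j+1)/n]
        for j in range(-n, n):
            corner_dists = [math.hypot(i + a, j + b) / n
                            for a in (0, 1) for b in (0, 1)]
            if min(corner_dists) < 1:    # a corner inside -> square overlaps
                M += 1
            if max(corner_dists) <= 1:   # all corners inside -> square contained
                m += 1
    return m / n ** 2, M / n ** 2

lo, hi = bracket(100)
print(lo, hi)   # the true area pi = 3.14159... sits between the two
```

Raising $n$ squeezes the two bounds together, ruling out every candidate area except one.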
"Limits" are just a terminology used to describe this concept of refining approximations. Derivatives (slopes of tangent lines to curves) and integrals (areas of regions defined by curves) are two very common and very useful values that usually cannot be calculated directly, and which turn out to be closely related.
In a nutshell: calculus is about derivatives and integrals. A derivative generalizes the idea of slope to graphs that are not lines. For instance, you might look at the graph of $y = x^2$ and notice that it gets steeper as $x$ increases, but how can we make this observation precise? You might ask what the slope of the graph is at $x = 1$, for instance.
But what could the "slope at a point" mean? Rise-over-run gives the formula for the slope of a secant line, but what I really want is a formula for the slope of a tangent line, which would have a "rise" and "run" of zero.
Traditionally, we resolve this paradox using limits (though it can also be done with "infinitesimals"). You'll see how limits work when you take calculus.
As it turns out, the answer we get to our question is that when $y = x^2$, $\frac{dy}{dx}|_{x = a} = 2a$. So: at $(0,0)$, the graph has a slope of $2(0) = 0$. At $(-1,1)$, the graph has a slope of $2(-1) = -2$. At $(2,4)$, the graph has a slope of $2(2) = 4$. This function $2x$ is called the derivative of the function $x^2$.
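We can watch the secant slopes settle down numerically. For $y = x^2$, the slope of the secant through $x = a$ and $x = a + h$ works out algebraically to $2a + h$, so as the run $h$ shrinks it closes in on $2a$:

```python
# Secant slope of y = x**2 between x = a and x = a + h.
def secant_slope(a, h):
    return ((a + h) ** 2 - a ** 2) / h   # algebraically, 2*a + h

for a in (0, -1, 2):
    print(a, [secant_slope(a, h) for h in (0.1, 0.01, 0.001)])
```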
Integration, as you said, is about computing an area, usually between a graph and the $x$-axis, from $x = a$ to $x = b$. For instance, $$ \int_1^4 2x\,dx $$ means "the area underneath the graph $y = 2x$, between the values $x = 1$ and $x = 4$". The fundamental theorem of calculus relates integrals to derivatives. In this particular case: because we know a function whose derivative is $2x$ (namely, $x^2$), we can find this area by calculating $$ \int_1^4 2x\,dx = \left. x^2\right|_1^4 = (4)^2 - (1)^2 = 15 $$
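A quick numerical check of that area, cutting $[1, 4]$ into thin strips instead of invoking the fundamental theorem (strip heights are sampled at midpoints):

```python
# Riemann-sum approximation of the integral of 2x over [1, 4].
def area(n):
    dx = 3 / n                                   # width of each strip
    return sum(2 * (1 + (i + 0.5) * dx) * dx     # height at strip midpoint
               for i in range(n))

print(area(10), area(1000))   # both agree with 4**2 - 1**2 = 15
```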
As other answers have broken down some of the applications and the categories of calculus, I'll try to give a more intuitive and motivating explanation.
At its core, calculus is about trying to do meaningful computations with quantities that are infinitely large or infinitely small ("infinitesimals"). What math student, upon being introduced to the idea of infinity, hasn't wondered about $\infty - \infty$ or $0 \cdot \infty$? Or wondered what $\frac{0}{0}$ should be? As Siri will tell you:
Imagine that you have zero cookies and you split them evenly among zero friends. How many cookies does each person get? See? It doesn’t make sense. And Cookie Monster is sad that there are no cookies, and you are sad that you have no friends.
Furthermore, there are similar ancient questions such as Zeno's Paradox---what happens if you take infinitely many steps that are infinitely small? Even the great Archimedes grappled with such questions and worked with a "proto-calculus" of sorts, almost two millennia before Newton and Leibniz.
In its original formulation by Newton and Leibniz, calculus was about trying to perform these sorts of computations and get meaningful answers. In Leibniz's version, $\mathrm{d}x$ and $\mathrm{d}y$ were literally meant to be infinitesimal quantities in the $x$- and $y$-directions that were smaller than any real number, but still non-zero (Newton had a similar sort of notation). Consider the slope of a curve at a single point, which would ordinarily be $\frac{0}{0} = \frac{f(a) - f(a)}{a-a}$. In Leibniz's calculus, we could compute it as $\frac{\mathrm{d} y}{\mathrm{d}x}$---the infinitesimal change in the $y$-direction divided by the infinitesimal change in the $x$-direction.
While this "infinitesimal" approach worked for nearly a hundred years, the logical inconsistencies grew more problematic. Thus Cauchy and other luminaries introduced the modern limit definitions of derivatives and integrals to give calculus a rigorous foundation. While modern calculus is phrased in terms of limits, generally of a sequence of approximations that become arbitrarily precise, the entire field is fundamentally about trying to make sense of calculations using infinitely small and infinitely large quantities.
Examples
- What happens if you add up $\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots$? Intuitively, it feels like the answer should be $1$, as you cover half the remaining distance with each additional term. However, you never actually get to $1$, no matter how many terms you add up. Furthermore, how can adding up infinitely many non-zero numbers give a finite result?
- How does a car speedometer work? What does it mean to be moving at $60$ mph when you haven't actually traveled $60$ miles or driven for an hour in the instant that you glance at the speedometer? It certainly makes sense to calculate an average speed over a finite distance, but that requires some finite amount of time as well. What does it mean to be traveling at a certain speed at one instant? We can try to think of this as moving an "infinitesimal distance in an infinitesimal amount of time," but how can we actually calculate anything with that?
- Given an arbitrary geometric shape, we can approximate its length/area/volume with nice shapes, such as rectangles. If we allow ourselves to approximate a shape with lots and lots of rectangles, we seem to get a better approximation. What if we use infinitely many rectangles that are infinitely small? How can we use this to find the true area?
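The first of these questions is easy to experiment with. The partial sums of $\frac 12 + \frac 14 + \frac 18 + \cdots$ never reach $1$, but the shortfall after $n$ terms is exactly $\frac 1{2^n}$, which can be made as small as we please:

```python
# Partial sums of the geometric series 1/2 + 1/4 + 1/8 + ...
def partial_sum(n):
    return sum(1 / 2 ** k for k in range(1, n + 1))

for n in (1, 5, 10, 20):
    print(n, partial_sum(n), 1 - partial_sum(n))   # shortfall is 1/2**n
```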