What is the difference between partial and normal derivatives?

I have a clarifying question about this question:

What is the difference between $d$ and $\partial$?

I understand the idea that $\frac{d}{dx}$ is the derivative where all variables are assumed to be functions of other variables, while with $\frac{\partial}{\partial x}$ one assumes that $x$ is the only variable and everything else is a constant (as stated in one of the answers).

Example 1: If $z = xa + x$, then I would guess that $$ \frac{\partial z}{\partial x} = a + 1 $$ and $$ \frac{d z}{d x} = a + x\frac{da}{dx} + 1, $$ since now $a$ should be considered a function of $x$.
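For what it's worth, here is a minimal SymPy sketch of that computation (assuming SymPy is available); the two readings of $a$, as a fixed symbol versus an unknown function of $x$, give exactly the two answers above:

```python
# Sketch with SymPy: the same expression z = x*a + x, differentiated two ways.
import sympy as sp

x = sp.symbols('x')
a_const = sp.symbols('a')             # a treated as an independent symbol (held fixed)
a_func = sp.Function('a')(x)          # a treated as an unknown function of x

print(sp.diff(x*a_const + x, x))      # a + 1
print(sp.diff(x*a_func + x, x))       # a(x) + x*Derivative(a(x), x) + 1
```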

When, in Calculus 1, we have $y = ax^2 + bx + c$, should we then technically use $\partial$, since we are assuming $a$, $b$, and $c$ are constants?

Is this correct?

Example 2: Maybe the thing that is confusing me is that when we do implicit differentiation we use $d$. So if $$ x^2 + y^2 = 1 $$ then taking $\frac{d}{dx}$ gives $$ 2x + 2y\frac{dy}{dx} = 0 $$ again because $y$ is considered a function.
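A quick SymPy sketch of this implicit differentiation (assuming SymPy is available), with $y$ declared as an unknown function of $x$:

```python
# Sketch with SymPy: differentiate x**2 + y(x)**2 = 1 with respect to x and solve for dy/dx.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)               # y regarded as a function of x

eq = sp.Eq(x**2 + y**2, 1)
deq = sp.Eq(sp.diff(eq.lhs, x), sp.diff(eq.rhs, x))
print(deq)                            # Eq(2*x + 2*y(x)*Derivative(y(x), x), 0)

print(sp.solve(deq, sp.diff(y, x)))   # [-x/y(x)]
```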

How would taking $\frac{\partial}{\partial x}$ of an equation like $x^2 + y^2 =1$ work? Does that even make sense?

Example 3: Is it ever possible that using $\partial$ and $d$ gives the same result? If, for example, $y = x^2$, does it make sense to say that $$ \frac{\partial}{\partial x} y = 2x? $$

Edit: My overall question, I guess, is how the notations of partial derivatives vs. ordinary derivatives are formally defined. I am looking for a bit more background.


Solution 1:

Some key things to remember about partial derivatives are:

  • You need to have a function of one or more variables.
  • You need to be very clear about what that function is.
  • You can only take partial derivatives of that function with respect to each of the variables it is a function of.

So for your Example 1, $z = xa + x$, if what you mean by this is to define $z$ as a function of two variables, $$z = f(x, a) = xa + x,$$ then $\frac{\partial z}{\partial x} = a + 1$ and $\frac{dz}{dx} = a + 1 + x\frac{da}{dx},$ as you surmised, though you could also have gotten that last result by considering $a$ as a function of $x$ and applying the Chain Rule.
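Spelled out, that chain-rule route (with $z = f(x,a)$ and $a = a(x)$) is $$ \frac{dz}{dx} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial a}\,\frac{da}{dx} = (a + 1) + x\,\frac{da}{dx}, $$ which matches the total derivative written above.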

But when we write something like $y = ax^2 + bx + c,$ and we say explicitly that $a$, $b$, and $c$ are (possibly arbitrary) constants, $y$ is really only a function of one variable: $$y = g(x) = ax^2 + bx + c.$$ Sure, you can say that $\frac{\partial y}{\partial x}$ is what happens when you vary $x$ while holding $a$, $b$, and $c$ constant, but that's about as meaningful as saying you vary $x$ while holding the number $3$ constant.

I suppose technically $\frac{\partial y}{\partial x}$ is defined even if $y$ is a single-variable function of $x$, but it would then just be $\frac{dy}{dx}$ (the ordinary derivative), and I can't remember seeing such a thing ever written as a partial derivative. It would not make it possible to do anything you cannot do with the ordinary derivative, and it might confuse people (who might try to guess what other variables $y$ is a function of).

The previous paragraph implies that the answer to your Example 3 is "yes." It also hints at why I almost wrote "a function of two or more variables" as part of the first requirement for using partial derivatives. Technically I think you only need a function of one or more variables, but you should want a function of at least two variables before you think about taking partial derivatives.

For Example 2, where we have $x^2 + y^2 = 1$, it is not obvious what the function is that we would be taking partial derivatives of. Either $x$ or $y$ could be a function of the other. (The function would be defined only over a limited domain, and would produce only some of the points that satisfy the equation, but it can still be useful to do some analysis under those conditions.) If you write something besides the equation to make it clear that (say) $y$ is a function of $x$, giving a sufficiently clear idea which of the possible functions of $x$ you mean, then I think technically you could write $\frac{\partial y}{\partial x}$, and you would simply find that $\frac{\partial y}{\partial x} = \frac{dy}{dx}$, but again this is a lot of trouble and confusion to get a result you could get simply by using ordinary derivatives.

On the other hand, suppose we say that $$h(x,y) = x^2 + y^2 - 1,$$ and we are interested in the points that satisfy $x^2 + y^2 = 1$, that is, where $h(x,y) = 0$. Now we have a function of multiple variables, so we can do interesting things with partial derivatives, such as compute $\frac{\partial h}{\partial x}$ and $\frac{\partial h}{\partial y}$ and perhaps use these to look for trajectories in the $x,y$ plane along which $h$ is constant. OK, we don't really need partial derivatives to figure out that those trajectories will run along circular arcs, but we could have some other two-variable function where the answer is not so obvious.
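As a concrete illustration (a SymPy sketch, assuming SymPy is available): the partial derivatives of this $h$ also recover the implicit-differentiation result from Example 2, since along a level curve $h(x,y) = \text{const}$ the implicit function theorem gives $dy/dx = -h_x/h_y$.

```python
# Sketch with SymPy: partial derivatives of h(x, y) = x**2 + y**2 - 1.
import sympy as sp

x, y = sp.symbols('x y')
h = x**2 + y**2 - 1

hx, hy = sp.diff(h, x), sp.diff(h, y)
print(hx, hy)                         # 2*x 2*y

# Slope of the level curve h(x, y) = const, by the implicit function theorem:
print(sp.simplify(-hx/hy))            # -x/y
```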

Solution 2:

I hope this answers your question.

The partial derivative notation is used to specify the derivative of a function of more than one variable with respect to one of its variables.

E.g. let $y$ be a function of three variables such that $y(s, t, r) = r^2 - srt$. Then

$$\frac{\partial y}{\partial r} = 2r-st$$

The $\frac{d}{dx}$ notation is used when the function to be differentiated depends on only one variable, e.g. $y(x) = x^2 \implies \frac{dy}{dx} = 2x$.
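Both computations can be checked in one line each with SymPy (assuming it is available):

```python
# Sketch with SymPy: the partial derivative in three variables, and the ordinary one.
import sympy as sp

s, t, r, x = sp.symbols('s t r x')
print(sp.diff(r**2 - s*r*t, r))       # 2*r - s*t
print(sp.diff(x**2, x))               # 2*x
```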

I hope this clarifies it a bit for you.

So really, they both mean the same thing but one is used within the context of multivariable calculus whilst the other is reserved for univariate calculus.

Solution 3:

The (calculus-of-variations) tag doesn't seem to be the most popular one, so maybe it needs some more advertising (-:

  • Intuition behind variational principle
Seriously, though: the Euler-Lagrange equations of the calculus of variations provide an example in which both partial and ordinary differentiation are involved. That's why it might help here.
Let there be given a curve $\vec{q}(t)$ and a real-valued function $L$ with the following arguments: the curve itself, its time derivative $\dot{\vec{q}}(t)$, and the time $t$.
Minimize the following integral as a functional of the curve $\vec{q}(t)$: $$ W\left(\vec{q},\dot{\vec{q}}\right) = \int_{t_1}^{t_2} L\left(\vec{q},\dot{\vec{q}},t\right) dt = \mbox{minimum} $$

It is proved in the reference that the curve minimizing the integral $W$ is given by the following system of mixed partial-ordinary differential equations, one for each of the coordinates $q_k(t)$ of the curve $\vec{q}(t)$: $$ \frac{\partial L}{\partial q_k} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_k}\right) = 0 $$ These are the well-known Euler-Lagrange equations.

Here they are specialized to the following problem: find all curves in the Euclidean plane for which the length $W$ between two given end-points is minimal. This makes $\vec{q} = (x,y)$ and $\dot{\vec{q}} = (\dot{x},\dot{y})$ in: $$ W = \int_{t_1}^{t_2} L(\dot{x},\dot{y})\, dt = \mbox{minimal} \qquad \mbox{with} \quad L(\dot{x},\dot{y}) = \sqrt{\dot{x}^2 + \dot{y}^2} $$ giving for the Euler-Lagrange equations: $$ \frac{\partial L}{\partial x} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{x}}\right) = 0 \\ \frac{\partial L}{\partial y} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{y}}\right) = 0 $$

Partial derivatives. Obviously: $$ \frac{\partial L}{\partial x} = \frac{\partial L}{\partial y} = 0 $$ Somewhat less obviously: $$ \frac{\partial \sqrt{\dot{x}^2 + \dot{y}^2}}{\partial \dot{x}} = \frac{\dot{x}}{\sqrt{\dot{x}^2 + \dot{y}^2}} \\ \frac{\partial \sqrt{\dot{x}^2 + \dot{y}^2}}{\partial \dot{y}} = \frac{\dot{y}}{\sqrt{\dot{x}^2 + \dot{y}^2}} $$

Ordinary derivatives: $$ \frac{d}{dt} \frac{\dot{x}}{\sqrt{\dot{x}^2 + \dot{y}^2}} = \frac{ \ddot{x} \sqrt{\dot{x}^2 + \dot{y}^2} - \dot{x} \left( \dot{x} \ddot{x} + \dot{y} \ddot{y} \right) / \sqrt{\dot{x}^2 + \dot{y}^2}} {\left(\sqrt{\dot{x}^2 + \dot{y}^2}\right)^2} = \dot{y}\, \frac{\dot{y}\ddot{x} - \dot{x}\ddot{y}}{\left(\dot{x}^2 + \dot{y}^2\right)^{3/2}} = - \kappa \, \dot{y} $$ $$ \frac{d}{dt} \frac{\dot{y}}{\sqrt{\dot{x}^2 + \dot{y}^2}} = \frac{ \ddot{y} \sqrt{\dot{x}^2 + \dot{y}^2} - \dot{y} \left( \dot{x} \ddot{x} + \dot{y} \ddot{y} \right) / \sqrt{\dot{x}^2 + \dot{y}^2}} {\left(\sqrt{\dot{x}^2 + \dot{y}^2}\right)^2} = \dot{x}\, \frac{\dot{x}\ddot{y} - \dot{y}\ddot{x}}{\left(\dot{x}^2 + \dot{y}^2\right)^{3/2}} = + \kappa \, \dot{x} $$ where $\kappa$ is recognized as the curvature.

The Euler-Lagrange equations thus say that $- \kappa\, \dot{x} = +\kappa \, \dot{y} = 0$, which can only be fulfilled if $\kappa = 0$: the curvature is zero.
Indeed, the shortest path between two points in the Euclidean plane is a straight line.
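If you want to double-check the mixed partial/ordinary bookkeeping above, here is a SymPy sketch (assuming SymPy is available) that verifies $\frac{d}{dt}\left(\partial L/\partial \dot{x}\right) = -\kappa\,\dot{y}$ and $\frac{d}{dt}\left(\partial L/\partial \dot{y}\right) = +\kappa\,\dot{x}$ symbolically:

```python
# Sketch with SymPy: verify the ordinary-derivative step in the Euler-Lagrange example.
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)
xd, yd = sp.diff(x, t), sp.diff(y, t)
xdd, ydd = sp.diff(x, t, 2), sp.diff(y, t, 2)

speed = sp.sqrt(xd**2 + yd**2)                 # L(x', y')
kappa = (xd*ydd - yd*xdd) / speed**3           # signed curvature of (x(t), y(t))

print(sp.simplify(sp.diff(xd/speed, t) + kappa*yd))   # expect 0
print(sp.simplify(sp.diff(yd/speed, t) - kappa*xd))   # expect 0
```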

Solution 4:

Suppose $F (t) = f (x (t), y (t)) $. Then, by the chain rule, $$ F'(t) = \frac{\partial f(x(t),y(t))}{\partial x} x'(t) + \frac{\partial f(x(t),y(t))}{\partial y} y'(t). $$ That is perfectly clear. The only thing that's confusing is that people sometimes give $F$ and $f$ the same name, and call them both $f$, even though they are different functions. Then the equation above is (confusingly) written $$ \frac{df}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}. $$

It seems crazy to call $F$ and $f$ by the same name, but here is a typical example on a Wikipedia page.
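To see the chain rule at work with the two functions kept distinct, here is a small SymPy sketch with a made-up $f$, $x(t)$ and $y(t)$ (all three are assumptions chosen only for illustration):

```python
# Sketch with SymPy: F(t) = f(x(t), y(t)) differentiated directly vs. via the chain rule.
import sympy as sp

t, u, v = sp.symbols('t u v')
f = u**2 * v                          # hypothetical f(u, v)
x_t, y_t = sp.cos(t), sp.sin(t)       # hypothetical x(t), y(t)

F = f.subs({u: x_t, v: y_t})          # F(t) = f(x(t), y(t))

chain = (sp.diff(f, u).subs({u: x_t, v: y_t}) * sp.diff(x_t, t)
         + sp.diff(f, v).subs({u: x_t, v: y_t}) * sp.diff(y_t, t))

print(sp.simplify(sp.diff(F, t) - chain))   # 0
```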

Solution 5:

For a function $V(r,h)=\pi r^2 h$, which is the volume of a cylinder of radius $r$ and height $h$, $V$ depends on two quantities, the values of $r$ and $h$, which are both variables. $V(r,h)$ is our function here.

When we take the (ordinary) derivative of $V(r,h)$ with respect to, say, $r$, we measure the function's sensitivity to change as one of its parameters (the independent variables) changes. However, we don't know what the other independent variables are doing; they may change, or they may not. They are still variables (unknowns) to us, and we treat them as such.

By contrast, when we take the partial derivative of the function $V(r,h)$ with respect to $r$, we also measure the function's sensitivity to change as one of its parameters changes, but now the other variables are held constant, so we treat them as numbers.
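A short SymPy sketch of both readings (assuming SymPy is available): the partial derivatives with the other variable frozen, and the total derivative when $r$ and $h$ are both allowed to depend on some parameter $t$:

```python
# Sketch with SymPy: partial derivatives of V = pi*r**2*h, and a total derivative in t.
import sympy as sp

r, h, t = sp.symbols('r h t', positive=True)
V = sp.pi * r**2 * h

print(sp.diff(V, r))                  # 2*pi*h*r   (h held constant)
print(sp.diff(V, h))                  # pi*r**2    (r held constant)

r_t, h_t = sp.Function('r')(t), sp.Function('h')(t)
print(sp.diff(sp.pi * r_t**2 * h_t, t))
# pi*r(t)**2*Derivative(h(t), t) + 2*pi*r(t)*h(t)*Derivative(r(t), t)
```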

This is how I learned it.