What is divergence in image processing?
Solution 1:
The gradient is the directional rate of change of a scalar function in $\mathbb{R}^n$, whereas the divergence measures the net outflow per unit volume of a vector-valued "flow" in $\mathbb{R}^n$.
The gradient points in the direction of greatest increase, and its magnitude is the rate of change in that direction: $$ \nabla f(\vec{x})=\left\langle\frac{\partial}{\partial x_1}f,\frac{\partial}{\partial x_2}f,\dots,\frac{\partial}{\partial x_n}f\right\rangle $$ For example, the gradient of the distance from a given point is a vector field of unit vectors pointing away from that point.
The divergence, by contrast, measures the amount of flow out of a given volume minus the amount of flow into it: $$ \nabla\cdot\vec{f}(\vec{x})=\frac{\partial}{\partial x_1}f_1+\frac{\partial}{\partial x_2}f_2+\dots+\frac{\partial}{\partial x_n}f_n $$ For example, the divergence of a flow with no source or sink is $0$. If there is a net source, the divergence is positive, and if there is a net sink, the divergence is negative.
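To make these definitions concrete, here is a small numerical sketch (assuming NumPy; the grid and field names are just illustrative) that evaluates the gradient of the distance-from-origin function on a 2-D grid and then takes the divergence of the resulting vector field:

```python
import numpy as np

# 2-D grid over [-2, 2] x [-2, 2].
y, x = np.mgrid[-2:2:81j, -2:2:81j]
dy, dx = y[1, 0] - y[0, 0], x[0, 1] - x[0, 0]

# Scalar field: distance from the origin.
f = np.sqrt(x**2 + y**2)

# Gradient: a vector field (f_y, f_x) of (approximately) unit
# vectors pointing away from the origin.
fy, fx = np.gradient(f, dy, dx)

# Divergence: sum of the partial derivatives of the components.
# It is positive away from the origin, since this flow spreads outward.
div = np.gradient(fx, dx, axis=1) + np.gradient(fy, dy, axis=0)
```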
The Divergence of the Gradient
The $\color{#C00000}{\text{divergence}}$ of the $\color{#00A000}{\text{gradient}}$ is also called the $\color{#0000FF}{\text{Laplacian}}$: $$ \color{#C00000}{\nabla\cdot}\color{#00A000}{\nabla}=\color{#0000FF}{\Delta} $$ which is given by $$ \Delta f(\vec{x})=\frac{\partial^2}{\partial x_1^2}f+\frac{\partial^2}{\partial x_2^2}f+\dots+\frac{\partial^2}{\partial x_n^2}f $$ In one dimension, it is the second derivative. In higher dimensions it behaves in a similar manner: at a minimum point of $f$, $\Delta f\gt0$ and at a maximum point of $f$, $\Delta f\lt0$.
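On a discrete image with unit grid spacing, the Laplacian is usually approximated by the 5-point stencil. A minimal sketch (assuming SciPy; the function and kernel names are my own):

```python
import numpy as np
from scipy.ndimage import convolve

# 5-point stencil: f_xx + f_yy with unit grid spacing.
LAPLACIAN_KERNEL = np.array([[0.,  1., 0.],
                             [1., -4., 1.],
                             [0.,  1., 0.]])

def laplacian(img):
    # Positive at local minima, negative at local maxima,
    # matching the sign behaviour described above.
    return convolve(img.astype(float), LAPLACIAN_KERNEL, mode='nearest')
```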
What your first equation says is that $$ \frac{\partial}{\partial t}I=c\Delta I $$ If $c\gt0$ then the diffusion is working to fill in depressions and tear down accumulations. If $c\lt0$, the diffusion has the opposite effect.
Isotropic diffusion acts the same everywhere (constant $c$), whereas anisotropic diffusion acts differently depending on the magnitude of the gradient of the field.
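For the isotropic case, the heat equation above can be integrated with explicit Euler steps. A sketch under standard assumptions (here $c$ absorbs the time step and must stay below $0.25$ for the scheme to be stable on a unit-spaced grid):

```python
import numpy as np

def isotropic_diffusion(img, c=0.2, steps=20):
    # Explicit Euler integration of dI/dt = c * Laplacian(I).
    out = img.astype(float).copy()
    for _ in range(steps):
        p = np.pad(out, 1, mode='edge')        # replicate borders
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]      # north + south neighbours
               + p[1:-1, :-2] + p[1:-1, 2:]    # west + east neighbours
               - 4.0 * out)
        out += c * lap
    return out
```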
Edge Stopping Function
The idea of an edge stopping function is to impede diffusion at an edge in an image (in a region where the magnitude of the gradient is large). That is, $g$ is a decreasing function of the gradient magnitude, close to $1$ where the gradient is small and falling toward $0$ where it is large.
When the gradient is small, diffusion proceeds much as in the isotropic case, but when the gradient is large (near an edge), diffusion is suppressed, so edges are preserved and remain detectable.
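A common concrete choice is Perona and Malik's scheme, with $g(s)=1/(1+(s/K)^2)$ as the edge stopping function. A minimal sketch (this particular $g$ and the parameter values are one standard choice, not the only one):

```python
import numpy as np

def anisotropic_diffusion_step(img, kappa=10.0, lam=0.2):
    # One explicit Perona-Malik step: diffusion toward each of the four
    # neighbours is weighted by g(|difference|), so it nearly stops
    # across large gradients (edges) and acts isotropically elsewhere.
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge stopping function
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    dn = p[:-2, 1:-1] - img    # differences toward north,
    ds = p[2:, 1:-1] - img     # south,
    de = p[1:-1, 2:] - img     # east,
    dw = p[1:-1, :-2] - img    # and west neighbours
    return img + lam * (g(np.abs(dn)) * dn + g(np.abs(ds)) * ds
                        + g(np.abs(de)) * de + g(np.abs(dw)) * dw)
```

Here `kappa` sets the gradient magnitude regarded as an "edge", and `lam` is the step size (it should stay at or below $0.25$ for stability).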
Solution 2:
A glib answer is that "gradient" is a vector and "divergence" is a scalar. More specifically (and perhaps helpfully), the gradient vector points in the direction of the fastest (local) increase in the value of the (scalar) function. The divergence (of a vector field) provides a measure of how much "flux" (or flow) is passing through a surface surrounding a point in the field (positive for flow away from that point, negative for flow toward, zero for no net flow).
An example for gradient: the "distance from the origin" function, $ \ f(x,y,z) = \sqrt{x^2+y^2+z^2} \ $ has the gradient
$$ \nabla f \ = \ \frac{\langle \ x, y, z \ \rangle}{\sqrt{x^2+y^2+z^2}} \ , $$
which gives vectors pointing radially away from the origin, the "fastest" way to get farther from the origin.
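A quick symbolic check of this (assuming SymPy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = sp.sqrt(x**2 + y**2 + z**2)
grad_f = [sp.diff(f, v) for v in (x, y, z)]

# Each component is x_i / sqrt(x^2 + y^2 + z^2), and the squared
# length simplifies to 1: the gradient field has unit length.
print(sp.simplify(sum(c**2 for c in grad_f)))   # prints 1
```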
An example for divergence: the vector function $ \ \overrightarrow{g}(x,y,z) = \langle x^2 , y^2 , z^2 \rangle \ $ has the divergence
$$ \nabla \cdot \overrightarrow{g} \ = \ 2x + 2y + 2z \ , $$
which indicates that, at points where the coordinates are positive (so each component of the field grows with distance from the origin), the amount of "flow" outward through an (imaginary) surface around the point is also getting "stronger".
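The divergence can be checked the same way (again assuming SymPy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
g = (x**2, y**2, z**2)

# Divergence: the sum of the partial of each component with
# respect to its own variable.
div_g = sum(sp.diff(gi, v) for gi, v in zip(g, (x, y, z)))
print(div_g)   # prints 2*x + 2*y + 2*z
```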
These are formal definitions, but I'm afraid I'm not familiar with the context in which they appear in image processing.