*Recursive* vs. *inductive* definition
I once had an argument with a professor of mine about whether the following definition is a recursive or an inductive definition:
Suppose you have a sequence of real numbers. Define $a_0:=2$ and $a_{i+1}:=\frac{a_i a_{i-1}}{5}$. (Of course this is just an example and purely illustrative; I could just as well have taken the definition of some family of sets as the example.)
I claimed that this definition is recursive, since we take $a_{i+1}$ and define it by going "downwards", and that the word "inductive" should only be used as an adjective for the word "proof". My professor insisted that we should distinguish between these types of definition and that this one is inductive, since we start with $a_0$ and work "upwards".
Now, can either of us even be right? To me it seems that mathematically every recursive definition is also inductive (whatever these two expressions ultimately mean), since the mathematical means used in both (namely equations) are the same. (Wikipedia also seems to think they are the same - but I trust a sound mathematical community, i.e. you guys, more than Wikipedia.)
And if there is a difference, who is right? If the above is a recursive definition, what would an inductive definition look like (and vice versa)?
(Please don't ask me to ask my professor again - or anything similar, since I often get that answer here after mentioning that a question resulted from a discussion with some faculty member - because our discussion ended with him saying that "it definitely is inductive, but I just can't explain it".)
Solution 1:
Both the terms "recursive definition" and "inductive definition" are common, and the differences between the terms are usually so insignificant that either one will work (so it's not worth losing sleep over). Some people are very fastidious about whether they call each definition "inductive" or "recursive", which is respectable. But I cannot immediately think of an example where changing from one word to the other would change what is going on in a particular definition.
My best description is that "inductive definition" is more common when we are defining a set of objects "out of nothing", while "recursive definition" is more common when we are defining a function on an already-existing collection of objects.
A prototypical inductive definition is the following definition of the set of natural numbers:
1. 0 is a natural number.
2. If $n$ is a natural number, so is $n + 1$.
3. Only numbers obtained from rules 1 and 2 are natural numbers.
Here we don't already have a set of natural numbers; we are describing a certain way to characterize which numbers are "natural numbers".
On the other hand, the following definition of the factorial function is usually called a "recursive definition" $$ n! = \begin{cases} 1 & n = 0 \\ n \cdot (n-1)! & n > 0 \end{cases} $$
The inductive definition of $\mathbb{N}$, which we already have, is what justifies the use of a recursive definition for $n!$. Any inductive definition leads to an induction principle and a method of defining functions by recursion.
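For concreteness, here is a minimal Haskell sketch of that relationship (the names `Nat`, `natToInteger` and `factorial` are mine): the data type plays the role of the inductive definition, and the function is the recursive definition it licenses.

```haskell
-- The inductive definition: a natural number is Zero, or the
-- successor of a natural number already constructed.
data Nat = Zero | Succ Nat

-- Its numeric value, by structural recursion.
natToInteger :: Nat -> Integer
natToInteger Zero     = 0
natToInteger (Succ n) = 1 + natToInteger n

-- The recursive definition of factorial, mirroring
--   0! = 1,  n! = n * (n - 1)!  for n > 0.
-- The recursion is justified because every Nat is reached from Zero
-- by finitely many applications of Succ.
factorial :: Nat -> Integer
factorial Zero     = 1
factorial (Succ n) = natToInteger (Succ n) * factorial n
```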
It does matter that, in defining $\mathbb{N}$, we have to start with 0 and "work up". It doesn't work to try to define the set $\mathbb{N}$ by saying that $n$ is a natural number if $n-1$ is a natural number. But once we have an inductive definition of $\mathbb{N}$ we can leverage that for other purposes.
A few other things that can be defined inductively are the set of polynomials over a ring, the collection of rooted finite trees, and the collection of Borel sets. Inductive definitions are particularly important in computer science, mathematical logic, and related areas. One additional aspect of them is that they usually give a set of "terms" for the objects being described. For example, we know from the inductive definition of $\mathbb{N}$ that every natural number can be expressed as a finite string of the form $0 + 1 + 1 + \cdots + 1$ (of some length).
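As a sketch of how one of these examples looks in code, here is a hypothetical Haskell rendering of rooted finite trees (the names are mine), together with one function defined by the recursion the inductive definition licenses:

```haskell
-- Rooted finite trees, defined inductively: a tree is a root node
-- together with a finite (possibly empty) list of subtrees.
data Tree = Node [Tree]

-- The induction principle again licenses recursive definitions,
-- e.g. the number of nodes of a tree.
size :: Tree -> Int
size (Node children) = 1 + sum (map size children)
```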
Solution 2:
Recursive definitions are technically unrestricted, whereas inductive definitions must usually have a well-founded "induction principle" which actually lets you do induction (in the proof sense) on the object. Recursive definitions don't a priori give you inductive definitions, but an inductive definition is recursive.
Solution 3:
Wikipedia actually seems to distinguish between these two notions. I've never heard "inductive definition" being used myself, but I'll try to get at the idea:
In your example, however, it basically feels like thinking about the same thing in two different ways:
Recursive:
In order to compute $a_n$, first compute $a_{n-1}$ and then let $a_n = 2a_{n-1}$. Terminate when you reach $a_0=1$.
Inductive:
Start with $a_0=1$. Now if you know $a_n$, you can compute $a_{n+1}$ by $a_{n+1}=2a_n$.
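In code, the two readings might look like the following Haskell sketch (function names are mine):

```haskell
-- Recursive reading: to get a_n, call yourself on n - 1 and stop at a_0 = 1.
aRec :: Int -> Integer
aRec 0 = 1
aRec n = 2 * aRec (n - 1)

-- Inductive reading: start at a_0 = 1 and repeatedly double, working upwards.
aInd :: Int -> Integer
aInd n = iterate (2 *) 1 !! n

-- Both describe the same sequence: 1, 2, 4, 8, ...
```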
That's pretty much the same! However, one can think of cases where the recursive definition is obvious but the inductive one feels strange, namely whenever choices occur. Consider the stopping times for the Collatz sequence:
In order to compute $s_n$ for $n\neq 1$, compute $$ s_n = 1+s_{f(n)} \quad\text{where}\quad f(n) = \begin{cases} \frac{n}{2}, & n \text{ even} \\ 3n+1, & n \text{ odd} \end{cases} $$ Terminate with $s_1=1$.
Now try giving a nice inductive definition ;)
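The recursive version, by contrast, is immediate to write down; a Haskell sketch (and whether it terminates for every positive $n$ is exactly the Collatz conjecture):

```haskell
-- Collatz stopping time, defined recursively "downwards":
-- s 1 = 1, and otherwise 1 plus the stopping time of f(n).
s :: Integer -> Integer
s 1 = 1
s n
  | even n    = 1 + s (n `div` 2)
  | otherwise = 1 + s (3 * n + 1)
```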
In other contexts, the inductive approach feels more natural:
If $T$ is a valid term and $b$ is a variable, then $\lambda b . T$ is a valid term.
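As an inductive definition of a set of objects, this is naturally a data type; a minimal Haskell sketch (constructor names are mine, and application is omitted, as in the rule above):

```haskell
-- Valid terms, defined inductively: a variable is a term,
-- and if t is a term and b is a variable, then \b . t is a term.
data Term
  = Var String       -- a variable
  | Lam String Term  -- the abstraction  \b . t
```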
However, I wouldn't really bother with the distinction, as one gets the point anyway. In the case of the sequence, what we actually mean is
Let $(a_n)$ be the unique sequence that satisfies the recursive equation $$a_n = 2a_{n-1}, \qquad a_0=1 $$
Nobody would call this an inductive equation, right?
Solution 4:
Bear in mind that in computer programming these terms have a strict meaning relating to whether programs terminate.
It is possible to inductively define the sum of a list:
sum([]) = 0
sum([H|T]) = H + sum(T)
Here the list decreases in length at each recursive call, so we know the recursion will terminate. However, for an infinite list such as:
helper(N) = [N | helper(N + 1)]
nats = helper(0)
nats = [0, 1, 2, 3, 4, 5...
such an inductive definition will not terminate.
You might define a list as:
List(a) = unit ∨ (a ∧ List(a))
Read this as: a base case of a unit type, which has one inhabitant, or a pair of the type a and the rest of the list.
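In Haskell-like notation, that definition is the familiar recursive data type (a sketch):

```haskell
-- The same shape as  List(a) = unit ∨ (a ∧ List(a)):
-- either the empty list, or an element paired with the rest of the list.
data List a = Nil | Cons a (List a)
```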
One way of dealing with this is to distinguish between finite and possibly infinite data with the helper functions:
μ(f) = ∀a. (f(a) → a) → a
ν(f) = ∃a. a ∧ (a → f a)
These sorts of encodings don't give you dependent recursion, but they're simpler than the full story of polynomial endofunctors.
And a finite list would be defined as List(a) = μ(λrest. unit ∨ (a ∧ rest)).
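One common way to write these two encodings in Haskell is the following sketch (the names are mine):

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Least fixed point, Church-encoded:  mu(f) = forall a. (f(a) -> a) -> a
newtype Mu f = Mu { fold :: forall a. (f a -> a) -> a }

-- Greatest fixed point:  nu(f) = exists a. a /\ (a -> f(a))
data Nu f = forall a. Nu a (a -> f a)

-- The one-step "shape" of a list:  unit ∨ (e ∧ rest)
data ListF e rest = NilF | ConsF e rest

-- Finite lists are the least fixed point of that shape;
-- possibly infinite lists would be  Nu (ListF e)  instead.
type List e = Mu (ListF e)

-- The list [1, 2] in this encoding.
oneTwo :: List Int
oneTwo = Mu (\alg -> alg (ConsF 1 (alg (ConsF 2 (alg NilF)))))

-- Summing a finite list by supplying one reduction step:
-- sumList oneTwo == 3.
sumList :: List Int -> Int
sumList xs = fold xs step
  where
    step NilF        = 0
    step (ConsF x s) = x + s
```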
The finite list then becomes a program that lets you apply functions to it to reduce it to a smaller value.
The infinite list then becomes a program that starts with a seed and a function to go to the next step that can be used to generate an infinite stream of values.
So your example is inherently an inductive definition, because it doesn't work with an infinite datatype.
Consider the natural numbers defined as:
nat = zero or succ(nat)
Then what of the following number?
omega = succ(omega)
Your algorithm will loop forever.
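In Haskell, where constructors are lazy and such a circular value can actually be written down, the situation looks like this (a sketch):

```haskell
data Nat = Zero | Succ Nat

-- Structural recursion: fine on every finite Nat.
toInt :: Nat -> Integer
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

-- Laziness lets us write this "number" down anyway...
omega :: Nat
omega = Succ omega

-- ...but  toInt omega  recurses forever.
```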
Most of the time in math, people use inductive definitions. But these are specifically different from other recursive algorithms in that they can only work on finite data.
Now you might ask: what kind of "coinductive" algorithm can work on an infinite list anyway?
Well, for example, it is possible to increment each individual member of the list. Remember that the coinductive definition represents the infinite list as a pair, so:
inc((seed, succ)) = ((seed, succ), helper)
helper((seed, succ)) = (seed + 1, (succ(seed), succ))
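Here is a Haskell sketch of the same idea (the names are mine, and `inc` is written for an arbitrary stream rather than just the seed-plus-successor pair above):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A possibly infinite list as a pair: a hidden seed plus a step function
-- that produces the current element and the next seed.
data Stream e = forall s. Stream s (s -> (e, s))

-- The naturals as a stream: start from 0, emit the seed, move to seed + 1.
nats :: Stream Integer
nats = Stream 0 (\n -> (n, n + 1))

-- Increment every member: keep the seed, post-process what the step
-- function emits. Nothing ever tries to traverse the whole (infinite) list.
inc :: Stream Integer -> Stream Integer
inc (Stream seed step) = Stream seed (\s -> let (n, s') = step s in (n + 1, s'))

-- Observe finitely many elements:  observeN 5 (inc nats) == [1,2,3,4,5].
observeN :: Int -> Stream e -> [e]
observeN 0 _                  = []
observeN k (Stream seed step) =
  let (e, s') = step seed in e : observeN (k - 1) (Stream s' step)
```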
That said, I wouldn't say that these are the definitions of the terms inductive and coinductive in every case. This is just jargon specific to algorithms.
See also https://en.wikipedia.org/wiki/Coinduction