Functions with different codomain the same according to my book?

Yes. You can change the codomain, and as long as it still includes the range of $f$, the function is the same.

For example, $\sin\colon\Bbb R\to\Bbb R$ and $\sin\colon\Bbb R\to[-1,1]$ are the same function. Or the function given by $f(1)=1$ is the same whether $f\colon\{1\}\to\Bbb N$ or $f\colon\{1\}\to\{0,1\}$.
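To make the point concrete, here is a small sketch (the variable names are mine, not from the answer): if a function is identified with its set of argument/value pairs, the intended codomain simply never enters the comparison.

```python
# Model a function purely as its set of (argument, value) pairs.
f_into_N = {(1, 1)}    # thought of as f : {1} -> N,      f(1) = 1
f_into_01 = {(1, 1)}   # thought of as f : {1} -> {0, 1}, f(1) = 1

# As sets of ordered pairs, the two are literally equal:
print(f_into_N == f_into_01)  # True
```

The codomain lives only in the comments; nothing in the data distinguishes the two.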

As the other answers indicate, in some contexts a function is a triplet where the codomain is specified explicitly. It can be useful to know the codomain as part of the function. In set theoretic contexts, it is sometimes better to treat a function as a set of ordered pairs with a certain property.

Why is it useful? For example, it dispenses with the need to talk about the "canonical injection", and we can just talk about inclusion. Now we can say that $f\subseteq g$ when $g$ is a larger function, or that $f\cap g$ is a function. It allows us to define partial functions more easily, especially in the context of predicate logic: we can define a predicate which is a function on its domain, and it is not necessary that this domain be the entire universe of the structure (so the predicate cannot be a function symbol). This is very useful, for example, in computability theory.
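A sketch of that payoff (my own illustration, with made-up sets): when functions are sets of pairs, "restriction of" is literally the subset relation, and intersecting two functions yields another (partial) function.

```python
# x -> x^2 on the domain {0, 1, 2, 3}, as a set of pairs:
g = {(0, 0), (1, 1), (2, 4), (3, 9)}
# Its restriction to {1, 2}:
f = {(1, 1), (2, 4)}

print(f <= g)  # True: f ⊆ g, with no "canonical injection" in sight

# Intersecting with another function keeps only the agreeing pairs,
# which is again a function (on a smaller domain):
h = {(2, 4), (3, 10)}
print(f & h)   # {(2, 4)}
```

Note how the intersection is naturally a *partial* function: its domain shrank to the points where the two agree.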


If you look at a function as a pure set of ordered pairs, then yes, your observation is true. If you look at it as a triple $(A,B,\text{set of ordered pairs})$, then $B$ matters. If you replace $B$ in the triple by the image of $f$ (which some books call the range), i.e. by the subset $B' \subseteq B$ actually hit by $f$, then again your observation is true.

Basically, this is a nit you should not worry too much about.


This really depends on how you formally define "the same". In mathematics, this is done by specifying an equivalence relation giving rise to equivalence classes of functions. It is possible to specify an equivalence relation between functions that requires them to have the same codomain in order to be considered "the same" (i.e., functions with different codomain are not the same), and it is possible to specify an equivalence relation that ignores the codomain (i.e., functions can be the same even if they have different codomains).

Formally, functions with specified codomains are triples $(\mathcal{X}, \mathcal{Y}, \mathscr{G})$ containing a domain $\mathcal{X}$, a codomain $\mathcal{Y}$, and a graph $\mathscr{G} \equiv \{ (x,f(x)) \mid x \in \mathcal{X} \}$. You can define the equivalence relations $\sim$ and $\overset{*}{\sim}$ respectively by:

$$\begin{matrix} f_0 \sim f_1 & & \iff & & (\mathcal{X}_0, \mathcal{Y}_0, \mathscr{G}_0) = (\mathcal{X}_1, \mathcal{Y}_1, \mathscr{G}_1), \\[6pt] f_0 \overset{*}{\sim} f_1 & & \iff & & (\mathcal{X}_0, \mathscr{G}_0) = (\mathcal{X}_1, \mathscr{G}_1). \\[6pt] \end{matrix}$$

The first equivalence relation requires equivalence of the codomains of the two functions, whereas the second does not. Both give a well-defined notion of when two functions are "the same".
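The two relations can be sketched in code (a toy model, assuming the triple representation above; the example functions are mine):

```python
def same_strict(f0, f1):
    """The relation ~ : compare domain, codomain, and graph."""
    return f0 == f1

def same_loose(f0, f1):
    """The relation ~* : compare only domain and graph, ignoring codomain."""
    (X0, _, G0), (X1, _, G1) = f0, f1
    return (X0, G0) == (X1, G1)

# x -> x^2 on {1, 2}, once with codomain "R" and once with codomain "N":
sq_R = (frozenset({1, 2}), "R", frozenset({(1, 1), (2, 4)}))
sq_N = (frozenset({1, 2}), "N", frozenset({(1, 1), (2, 4)}))

print(same_strict(sq_R, sq_N))  # False: the codomains differ
print(same_loose(sq_R, sq_N))   # True:  same domain and same graph
```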

So, the real question here is: which of these two equivalence relations is more useful? Well, it turns out that the codomain isn't really important for most properties of functions. Indeed, the "meat" of a function is given by the graph $\mathscr{G}$, which records all pairs of values (i.e., every value in the domain together with the value it maps to). (However, the codomain does matter for some notions; surjectivity, for instance, can only be judged relative to a specified codomain.) Consequently, we tend to adopt the convention that two functions are considered "the same" if they have the same domain and graph, but we do not require the same codomain. This means that we adopt the above equivalence relation $\overset{*}{\sim}$ as specifying when two functions are "the same".


Let's stand back a bit -- sometimes it is worth thinking about what's behind a convention.

So: start by thinking how, in practice, we informally prove that there's exactly one function satisfying a certain description --- e.g. that there is a unique order-isomorphism between $(A, <)$ and $(B,\prec)$. We need to show that there's at least one such function, and that there is at most one. How do we do the second bit? We show that if $f$ and $f'$ are both candidates, then they are the same function. And how do we do that? We show that for every $a$ in $A$, $f(a) = f'(a)$. But why does that show that $f$ and $f'$ are the same function?

It only does so on an extensional understanding of what a function is: we are taking it that functions aren't individuated by the rules we give for associating an argument with a value, but by the resulting association. So, even if we have two different rules of association, if they both generate the same collection of pairings between arguments and values, we count the rules as two different ways of presenting the same function.
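The extensional check described above can be sketched as follows (a toy example of my own, on a small finite domain):

```python
A = range(5)           # a small finite domain

f = lambda a: a + a    # one rule ...
f2 = lambda a: 2 * a   # ... a syntactically different rule

# Two different rules, one function: the pointwise check succeeds,
# so on the extensional view f and f2 present the same function.
print(all(f(a) == f2(a) for a in A))  # True
```

This is exactly the "at most one" step of the uniqueness argument: show the candidates agree at every argument, and conclude they are one and the same function.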

Now this doesn't yet warrant defining functions as collections of argument/value pairings; that identification involves another idea beyond extensionality, call it plenitude. This is the further idea that any old association of arguments with values (one value per argument) determines a function, even if that association is beyond any possibility of description. It is one thing to say that different rules can determine the same function (extensionality); it is another thing to say that there can be functions which have no describable rule determining their values (that functions are as plenitudinous as arbitrary argument/value pairings). Taking the second step might seem natural in hindsight, but its almost universal acceptance was the result of hard-won achievements in 19th century mathematics.

OK: if we go for extensionality and plenitude, it becomes entirely natural to define a function from $A$ into $B$ -- or perhaps we should really say "model" or "implement" a function -- as a set of ordered pairs $(a, b)$ with exactly one pair for each $a \in A$. That fixes functions as extensional items, and is naturally understood as making the functions as plenitudinous as the relevant sets.

Hence the characterization of functions we find in elementary analysis texts, which implements functions as sets of ordered pairs in such a way that changing the codomain doesn't change the function. That's because what matters in the elementary context is to emphasize an extensional understanding of what functions are, and to stress that you can have functions with no describable rule associating argument to value.

Now sure, in further not-so-elementary contexts it can be useful to build explicitly into our characterization of a function the domain (if we are going to start seriously dealing with partial functions) and the codomain. That refinement doesn't make the implementation in (some of) the elementary texts wrong -- it serves perfectly to make the points the elementary texts need to make.