How to think of a function as a vector?
Solution 1:
The physical intuition and the mathematical definition of a vector are not entirely compatible.
In the plane, or in 3D space (and so forth), it is fine to represent a vector as a magnitude and a direction.
However, the formal definition of a vector space requires neither of those notions. In fact a vector, formally, is just an element of a vector space.
And this goes further: not all vector spaces have norms defined on them, and without the axiom of choice not all of them have a basis, a decomposition into a direct sum, nontrivial functionals, and so on.
Similarly, not all topological spaces are normal, regular, Hausdorff, etc.; however, we like to think of the physical world as $\mathbb R^3$, which is normal, regular, Hausdorff, and so on.
In the finite dimensional case, or assuming the axiom of choice, we have a basis for the space. That is, every vector can be written as a linear combination of the elements of the basis. If so, you can think of a vector as a function from a set into the field.
The set, of course, is the basis; or some other set with the same number of elements. For a vector $v = \sum_{n=1}^k\alpha_n\cdot v_n$ we can think of $v$ as a function from the set $\{1,\ldots,k\}$ into the field: $v(n)=\alpha_n$. Of course, after changing a basis we "change the function", but this is why vector spaces are isomorphic and not the same.
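To make this concrete, here is a minimal Python sketch of a vector $v = 3v_1 - 2v_2 + 5v_3$ viewed as a function $v(n)=\alpha_n$ from the index set $\{1,2,3\}$ into $\mathbb R$ (the function name and coefficients are my own illustrative choices):

```python
def v(n):
    """Return the n-th coefficient alpha_n of v = 3*v1 - 2*v2 + 5*v3,
    i.e. treat the vector as a function {1, 2, 3} -> R."""
    coefficients = {1: 3.0, 2: -2.0, 3: 5.0}
    return coefficients[n]

# Evaluating the "function" at each index recovers the coordinate tuple:
print([v(n) for n in (1, 2, 3)])  # [3.0, -2.0, 5.0]
```

Changing the basis would change the dictionary of coefficients, i.e. "change the function", exactly as described above.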
Solution 2:
Any vector is a function in the trivial sense that you can reinterpret it as the constant map sending everything to that particular vector, $f_v:D\to\{v\}\subseteq V$. In this way every $v\in V$ can be understood as the associated "function" $f_v$, regardless of the domain $D$.
The other direction - the idea that functions can form a vector space - is more general than the usual $\mathbb{R}^n$ vector spaces with their canonically understood magnitude and direction, both of which come from an inner product $\langle\cdot,\cdot\rangle$ on $V$. The general idea is that vector spaces model situations where mathematical objects can be decomposed into linear combinations of components over a field of scalars (or modules, when the scalars form only a ring). In this way they form the natural backdrop for linear algebra, which does not always come with a geometric interpretation, though the two coincide exactly in the obvious cases.

Bottom line: it's just the universal practice in mathematics of noticing that one structure is a special case of a bigger one, where sometimes the smaller structure has special meaning (e.g. geometry) attached to it.
Solution 3:
What's going on here is that physics and mathematics use the word "vector" with different (but related) meanings. It's just a problem of terminology, not a sign of anything technically deep.
Everything that physics calls a vector is also a vector in mathematics. But there are things that mathematicians call vectors which physicists wouldn't. My understanding is that the physicist's sense of "vector" corresponds best to what mathematics would call a "tangent vector" of a manifold. (The familiar $\mathbb R^3$ vectors in Euclidean space are a special case of this).
Computer science has a third, related but again different, sense of "vector" (a growable array). That's somewhat unfortunate, but it's just the way the world is.
Solution 4:
Vectors, by definition, are 'algebraic objects that can be added and scaled.'
Many of us as students are first shown vectors while illustrating forces or directions in physics courses. One of my favorite math professors used to begin his first vector calculus and linear algebra courses by drawing a circle on the board and posing the question: 'can this circle be a vector?' It is a common misconception to deem vectors as objects that carry only direction and magnitude, and at first thought many of us would agree.
But think about it: can circles actually be vectors? Can functions be vectors? If so, they ought to support the aforementioned addition and scaling operations (check out linear combination).
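Functions do support exactly those operations, pointwise. A minimal Python sketch (the helper names `add` and `scale` are mine, not standard):

```python
# Functions R -> R form a vector space under pointwise operations:
def add(f, g):
    """(f + g)(x) = f(x) + g(x)"""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """(c * f)(x) = c * f(x)"""
    return lambda x: c * f(x)

f = lambda x: x ** 2
g = lambda x: 2 * x

# A linear combination of "vectors": h(x) = 3*x^2 + 2*x
h = add(scale(3, f), g)
print(h(2))  # 3*4 + 2*2 = 16
```

The result of adding or scaling functions is again a function, which is precisely the closure property a vector space demands.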
There's an important topic in linear algebra called the inner product, which, interestingly enough, begets geometry. The inner product can be thought of as a map that takes two vectors and produces a scalar, and there are various ways to do this. For example, you can integrate the product of two functions (functions can be vectors!) over specified limits to produce a scalar.
The conditions that must be met for an inner product are bilinearity, symmetry, and positive-definiteness (which is what induces a norm). Further analysis of the inner product will lead one to orthogonal basis expansions, Fourier expansion, and Hilbert spaces. Real juicy stuff and a really good question!
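Here is a sketch of that integral inner product $\langle f,g\rangle = \int_{-\pi}^{\pi} f(x)g(x)\,dx$, approximated by a midpoint Riemann sum (the function name, the limits, and the grid size are my own choices):

```python
import math

def inner(f, g, a=-math.pi, b=math.pi, n=100_000):
    """Approximate <f, g> = integral of f(x)*g(x) over [a, b]
    with a midpoint Riemann sum."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
               for i in range(n)) * dx

# sin and cos are orthogonal on [-pi, pi]: their inner product vanishes.
print(round(inner(math.sin, math.cos), 6))

# <sin, sin> = pi, so sin has norm sqrt(pi) in this inner product.
print(round(inner(math.sin, math.sin), 4))
```

This orthogonality of the trigonometric functions is exactly what makes the Fourier expansion mentioned above work: it is an orthogonal basis expansion in this inner product.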