Algebraic interpretation of Lyapunov functions
From what little I know, the notion of a Lyapunov function is nowadays used for almost every class of dynamical systems in which some notion of stability makes sense. For example, they are even used in computer science and software verification - see e.g. this article, and also note the comment about ranking functions left there.
Notions of stability crucially depend on the dynamical system under consideration: if you have a vector field on some manifold $\mathscr M$, e.g. $\mathscr M = \Bbb R^n$, you often talk about the stability of a point (an equilibrium point), of a limit cycle, or more generally of some invariant submanifold - a set of points that is mapped to itself by the flow.
Let's think about the simplest example - the stability of an equilibrium point: the lowest dimension in which it makes sense is $1$, that is $\Bbb R$ (for instance, it does not make sense to talk about limit cycles in $\Bbb R$). The idea of a Lyapunov function $$ f:\mathscr M\to \Bbb R $$ is to push the original dynamics to a simpler set and study stability there. The condition that $f$ decreases along trajectories further ensures that stability over $\Bbb R$ can be pulled back to $\mathscr M$ (a one-dimensional toy example is sketched below). A nice feature of this approach is that it is not tied to the particular situation above. In hybrid dynamical systems the classical notions of stability are enriched with Zeno stability; since Zeno behavior does not occur in smooth dynamical systems, the construction of Lyapunov functions takes the form $$ f':\mathscr M'\to H $$ where $\mathscr M'$ is a hybrid manifold and $H$ is a simple two-dimensional hybrid system. In particular, real-valued Lyapunov functions are no longer suitable for studying this notion of stability.
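For concreteness, here is the usual one-dimensional toy example of the decrease condition (my own illustration of the standard definition): for $\dot x = -x$ on $\Bbb R$ with equilibrium $0$, take $f(x) = x^2$; then
$$ \frac{d}{dt} f(x(t)) = 2x\dot x = -2x^2 \le 0, $$
with equality only at $x = 0$, so $f$ strictly decreases along every nonconstant trajectory and witnesses the asymptotic stability of the origin.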
Finally, the notion of incremental stability concerns the stability of trajectories with respect to one another rather than with respect to some fixed set. Similar ideas can be applied to that case as well. As a result, one may think of Lyapunov functions as certain morphism-like maps between dynamical systems that preserve stability structures.
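One common formalization (standard in the literature, not spelled out above): a system is incrementally asymptotically stable if any two trajectories approach each other,
$$ \big| x(t;\xi_1) - x(t;\xi_2) \big| \;\le\; \beta\big(|\xi_1 - \xi_2|,\, t\big) $$
for some class-$\mathcal{KL}$ function $\beta$; the corresponding incremental Lyapunov functions are then functions $V(\xi_1,\xi_2)$ of pairs of states that decrease along pairs of trajectories.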
With regard to the construction of Lyapunov functions, it again depends on the stability problem you are interested in. It often holds that their existence is equivalent to stability. One direction (*existence*$\,\Rightarrow\,$*stability*) is always easy to prove, since it follows more or less directly from the conditions in the definition of a Lyapunov function. The reverse direction (*stability*$\,\Rightarrow\,$*existence*) is much harder, as it requires actually constructing a Lyapunov function - such converse results often invoke the existence of suitable diffeomorphisms. For example, one of the proofs I've seen required the Poincaré conjecture to hold true in the low-dimensional cases.
In general, there are many ad hoc rules for constructing Lyapunov functions within certain classes of systems. As an example, for nonlinear systems with polynomial dynamics, Sum of Squares (SOS) polynomial Lyapunov functions can be constructed algorithmically. The latter can be regarded as an algebraic method, since it relies on symbolic computation.
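To illustrate the flavour of the SOS approach, here is a minimal sketch in Python/SymPy of the *verification* step only - checking that $-\dot V$ is an explicit sum of squares for a hand-picked candidate $V$ on a toy system. The actual synthesis of $V$ is usually done with semidefinite programming tools, which this sketch does not attempt; the system and candidate below are my own toy choices.

```python
import sympy as sp

# Toy polynomial system: dx1/dt = -x1**3 - x2,  dx2/dt = x1 - x2**3.
x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([-x1**3 - x2, x1 - x2**3])

# Hand-picked candidate Lyapunov function (positive definite).
V = x1**2 + x2**2

# Lie derivative of V along the vector field: Vdot = grad(V) . f
Vdot = sp.expand((sp.Matrix([V]).jacobian([x1, x2]) * f)[0])

print(Vdot)
# -2*x1**4 - 2*x2**4, i.e. -Vdot = 2*(x1**2)**2 + 2*(x2**2)**2 is a sum of
# squares, so V decreases along trajectories and certifies stability of 0.
print(sp.simplify(-Vdot - 2*(x1**2)**2 - 2*(x2**2)**2))  # prints 0
```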
What should hint at the existence of such a function is local stability/attractivity of the set of interest. I am not sure whether there are general recipes for this, since this kind of analysis always depends on the particular system you have at hand. If you are interested, I can look for some comprehensive references that discuss both the ideas and the construction of Lyapunov functions for some particular classes of problems.
Let me start with a general discussion, in arbitrary dimensions. Suppose we have a manifold $M$ (possibly with boundary) and a vector field $V$. (I will assume it points inwards at the boundary for simplicity.) I also take the sign convention that a Lyapunov function has $df(V) \le 0$ with equality only at zeros of $V$.
The existence of a Lyapunov function is a strong topological condition on $V$, but in some sense, if it exists, it is unique (from a topologist's point of view). Indeed, if I have two Lyapunov functions $f_1$ and $f_2$, then any convex combination of them is again a Lyapunov function (actually, I can take any positive combination, so the Lyapunov functions form a cone). This means the space of Lyapunov functions is contractible and thus "unique" as far as a topologist is concerned.
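The verification is a one-line computation: for $0 \le \lambda \le 1$,
$$ d\big(\lambda f_1 + (1-\lambda) f_2\big)(V) \;=\; \lambda\, df_1(V) + (1-\lambda)\, df_2(V) \;\le\; 0, $$
with equality only at zeros of $V$, so the combination again satisfies the defining inequality.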
Now, let's see some constraints that having a Lyapunov function imposes on the vector field. As the question correctly points out, you can't have any periodic orbits, since the values of the Lyapunov function have to decrease along flow lines of the vector field. Let's now assume that $V$ has non-degenerate, isolated zeros, again for simplicity. Having a Lyapunov function now also rules out having zeros $p_1, \dots, p_N$ with a trajectory going from $p_1$ to $p_2$, from $p_2$ to $p_3$, etc., and from $p_N$ back to $p_1$. (I will refer to this as a heteroclinic cycle, though the case $N=1$ is a homoclinic orbit.) In higher dimensions, there are many more complicated configurations that can also prevent the existence of a Lyapunov function.
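For the cycle obstruction, the point is that $f$ strictly decreases along each nonconstant connecting trajectory, so such a cycle would force
$$ f(p_1) > f(p_2) > \cdots > f(p_N) > f(p_1), $$
which is impossible.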
Let's consider the case of a surface. I can easily prove a special case of what you want, but will have to think a little bit about the general statement. The easy claim is that if $V$ has non-degenerate zeros, none of whose linearizations have purely imaginary eigenvalues, and if $V$ doesn't have periodic orbits and doesn't have heteroclinic cycles, then there exists a Lyapunov function.
The claim is more-or-less constructive (though I don't believe this will give you a usable algorithm). The heart of the argument is the Poincaré-Bendixson theorem. Combined with the absence of periodic orbits, it tells us that for any forward-time flow-invariant set $S$, the $\omega$-limit set consists of a collection of critical points and their connecting trajectories. By our second hypothesis, the connecting trajectories do not form any loops. We therefore have a tree in our surface whose vertices are zeros of $V$ and whose edges are flow lines of $V$. By the Poincaré-Bendixson theorem, this is the $\omega$-limit set of the surface. We can construct a local Lyapunov function in a neighbourhood of the tree by using the linearizations of $V$ at the zeros (so the function will be quadratic in small neighbourhoods of the zeros), and then patch these together (this needs a little bit of care).
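For the local step near a zero whose linearization $A$ is a hyperbolic sink, the quadratic function can be obtained by solving a Lyapunov matrix equation; here is a minimal numerical sketch (the matrix $A$ below is just an example of mine, and near saddles one would instead work with the stable/unstable splitting of the linearization):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example linearization at a zero that is a hyperbolic sink
# (all eigenvalues have negative real part).
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

# Solve the Lyapunov equation A^T P + P A = -Q for a chosen Q > 0.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# V(x) = x^T P x is then a local Lyapunov function for the linearized flow,
# and hence for the nonlinear flow in a small enough neighbourhood of the zero:
#   d/dt V = x^T (A^T P + P A) x = -x^T Q x < 0   for x != 0.
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)  # P is positive definite
```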
Once we have the Lyapunov function in the neighbourhood of this tree, we can extend it to the surface by defining $f(x) = t + f(\phi_t(x))$, where $t$ is the first time such that $\phi_t(x)$ is in the neighbourhood. This isn't quite smooth, but can be smoothed easily.
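As a quick sanity check (writing $t(x)$ for that first hitting time): for $0 \le s \le t(x)$ we have $t(\phi_s(x)) = t(x) - s$, hence
$$ f(\phi_s(x)) = \big(t(x) - s\big) + f\big(\phi_{t(x)}(x)\big) = f(x) - s, $$
so the extended function still strictly decreases along flow lines outside the neighbourhood.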
More generally, the existence of a Lyapunov function for a vector field is telling you that the vector field is "gradient-like" (or a "pseudogradient vector field"). Such vector fields allow you to reconstruct the homology of the manifold, so they are objects of interest.