You can reduce it to an algebraic problem as follows:

The definition of a surface of revolution is that there is some axis such that rotations about this axis leave the surface invariant. Let's denote the action of a rotation by angle $\theta$ about this axis by the map $\vec x\mapsto \vec R_\theta(\vec x)$. With your surface given as a level set $f(\vec{x})=0$, the invariance condition is that $f(\vec R_\theta(\vec x))=0$ whenever $f(\vec{x})=0$, for all $\theta$.

In particular, we can differentiate this with respect to $\theta$ to get $\vec\nabla f(\vec R_\theta(\vec x))\cdot \frac{\partial}{\partial\theta}\vec R_\theta(\vec x)=0$, which at $\theta=0$ gives $\vec\nabla f(\vec x)\cdot \vec k(\vec x)=0$, where $\vec k(\vec x)=\left.\frac{\partial}{\partial\theta}\vec R_\theta(\vec x)\right|_{\theta=0}$ is the vector field representing the action of an infinitesimal rotation. (If the language of differential geometry is familiar, this is a Killing vector field).

So with this language established, what we need to check is whether there is any Killing field $\vec k(\vec x)$ associated to a rotation, which is orthogonal to the gradient of $f$ everywhere on the surface (i.e., whenever $f(\vec x)=0$). In fact, this will be not just necessary, but also sufficient, since (intuitively) any rotation can be built from many small ones.

Luckily, it's quite straightforward to write down the most general Killing field: if $\vec x_0$ is a point on the axis of rotation, and $\vec a$ a unit vector pointing along the axis, we have $\vec k(\vec x)=\vec a \times (\vec x-\vec x_0)$. (Note that this is a degenerate parametrization, since we can pick any point on the axis, corresponding to shifting $\vec x_0$ by a multiple of $\vec a$, and also send $\vec a$ to $-\vec a$, to give the same rotation).
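If you want to convince yourself of this formula numerically, here is a quick sanity check (a sketch using NumPy and SciPy's `Rotation`; the particular axis, point on the axis, and test point below are arbitrary illustrative values) that differentiating a finite rotation at $\theta=0$ really produces $\vec a \times (\vec x - \vec x_0)$:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Check numerically that d/dtheta R_theta(x) at theta = 0 equals a x (x - x0)
# for rotation about the axis through x0 with unit direction a.
a = np.array([1.0, 2.0, 2.0]) / 3.0   # unit axis direction (illustrative)
x0 = np.array([0.5, -1.0, 0.0])       # point on the axis (illustrative)
x = np.array([1.0, 0.3, -0.7])        # arbitrary test point

def rotate(theta):
    # Rotation by angle theta about the axis through x0 with direction a.
    R = Rotation.from_rotvec(theta * a).as_matrix()
    return x0 + R @ (x - x0)

# Central finite difference approximates the theta-derivative at theta = 0.
eps = 1e-6
finite_diff = (rotate(eps) - rotate(-eps)) / (2 * eps)
print(np.allclose(finite_diff, np.cross(a, x - x0), atol=1e-8))  # True
```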

To summarize: the question has been recast as "are there any solutions $\vec x_0, \vec a$ to $\left(\vec a \times (\vec x-\vec x_0)\right)\cdot\vec\nabla f(\vec x)=0$, holding for all $\vec x$ such that $f(\vec x)=0$?".

(You could also write the equation as $\det[\vec a, \, \vec x-\vec x_0, \,\vec\nabla f(\vec x)]=0$).
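As a concrete illustration, here is a small SymPy sketch of the determinant form (the surface $x^2+y^2+z^4=1$ is an arbitrary example of a surface of revolution about the $z$-axis, with the axis taken as known):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Illustrative surface of revolution about the z-axis: x^2 + y^2 + z^4 = 1.
f = x**2 + y**2 + z**4 - 1

a = sp.Matrix([0, 0, 1])   # axis direction
x0 = sp.Matrix([0, 0, 0])  # point on the axis
X = sp.Matrix([x, y, z])
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

# det[a, x - x0, grad f] vanishes identically for this f (here even off the
# surface, since every level set of f is a surface of revolution).
condition = sp.Matrix.hstack(a, X - x0, grad_f).det()
print(sp.expand(condition))  # 0
```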

For your example, I carried out this method in Mathematica. I let $\vec x_0=(x_0,y_0,0)$, taking $z_0=0$ to remove some of the degeneracy, and $\vec a=(a_x,a_y,a_z)$, giving 5 unknowns for the axis. I then found four points on the surface by solving $f(x,y,z)=0$ with $y=z=0$ and with $x=z=0$. Next, I solved $\left(\vec a \times (\vec x-\vec x_0)\right)\cdot\vec\nabla f(\vec x)=0$ at the four points together with $|\vec a|^2=1$ (5 equations for the 5 unknowns), getting a unique solution up to the sign of $\vec a$. Finally, I substituted the solution into $\left(\vec a \times (\vec x-\vec x_0)\right)\cdot\vec\nabla f(\vec x)$ for general $\vec x$ and got identically zero, so the solution indeed gives an axis of revolution. This case is simpler than the general one because all the level sets of $f$ are surfaces of revolution about the same axis, so the last step did not require assuming $f(\vec x)=0$.
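If you prefer a numerical route over exact solving, a least-squares variant of the same idea might look like the sketch below (the surface, the way sample points are generated, and the initial guess are all illustrative assumptions, not part of the method above):

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative surface of revolution about the vertical line through (1, 2):
# f = (x-1)^2 + (y-2)^2 + z^4 - 1.
def grad_f(p):
    x, y, z = p
    return np.array([2 * (x - 1), 2 * (y - 2), 4 * z**3])

# Sample points on the surface (generated here from the known parametrization;
# in practice they would come from numerically solving f = 0).
rng = np.random.default_rng(0)
pts = []
for _ in range(20):
    z = rng.uniform(-0.9, 0.9)
    phi = rng.uniform(0.0, 2 * np.pi)
    s = np.sqrt(1 - z**4)
    pts.append([1 + s * np.cos(phi), 2 + s * np.sin(phi), z])
pts = np.array(pts)

# Unknowns: axis point (p, q, 0) (fixing z0 = 0 removes the shift freedom
# along a non-horizontal axis) and axis direction a = (ax, ay, az).
def residuals(params):
    p, q, ax, ay, az = params
    x0 = np.array([p, q, 0.0])
    a = np.array([ax, ay, az])
    res = [np.linalg.det(np.column_stack([a, pt - x0, grad_f(pt)]))
           for pt in pts]
    res.append(a @ a - 1.0)  # pin the scale of a
    return np.array(res)

sol = least_squares(residuals, x0=[0.0, 0.0, 0.1, -0.2, 0.9])
a_unit = sol.x[2:] / np.linalg.norm(sol.x[2:])
# sol.x[:2] should recover the axis point (1, 2); a_unit should be ~(0, 0, ±1)
```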


If you are working numerically you are probably not interested in degenerate cases, so let's assume that the surface is "generic" in a suitable sense (this rules out the sphere, for example). As you point out, it is helpful to use the Gaussian curvature $K$, because $K$ is independent of the presentation, i.e., of the particular choice of the function $f(x,y,z)$. The good news is that $K$ can be calculated directly from $f$ using a suitable differential operator. For curves in the plane this is the Riess operator from algebraic geometry; for surfaces it is more complicated. This was treated in detail in an article by Goldman:

Goldman, Ron: Curvature formulas for implicit curves and surfaces. Comput. Aided Geom. Design 22 (2005), no. 7, 632–658.


Now the punchline is that if this is a surface of revolution, then its lines of curvature $\gamma$ are very special: one of the two families consists entirely of circles, namely the parallels (degenerate cases like the sphere aside). In particular, such a $\gamma$ has constant curvature and zero torsion $\tau$ as a curve in 3-space. If $f$ is a polynomial, one should get a rational function, say $g$, for the curvature, and the condition that $\gamma$ have constant curvature should then be expressible by a pair of equations.
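For reference, Goldman's paper expresses $K$ in terms of the gradient and the adjugate of the Hessian of $f$, namely $K = \left(\nabla f \cdot H^{*}\, \nabla f^{T}\right)/|\nabla f|^{4}$. A SymPy sketch of that formula, with the unit sphere as a sanity check, might look like:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def gaussian_curvature(f):
    """K = (grad f . adj(H) . grad f) / |grad f|^4, with H the Hessian of f."""
    g = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
    adjH = sp.hessian(f, [x, y, z]).adjugate()
    return sp.simplify((g.T * adjH * g)[0, 0] / (g.T * g)[0, 0] ** 2)

# Sanity check on the unit sphere, where K should equal 1 on the surface:
K = gaussian_curvature(x**2 + y**2 + z**2 - 1)
print(K)  # 1/(x**2 + y**2 + z**2), which is 1 on the surface
```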


This is not really a full answer, just some thoughts that should get you started.

Suppose we have already identified a candidate axis of revolution for the surface. How do we tell whether the surface was really created by revolution? Well, we can simply change to the new coordinate system given by the candidate axis and any two orthogonal vectors, and see whether the equation for the surface in the new coordinates depends on only two variables.

Now, let's make the above more precise. Instead of the qualitative statement "we do/don't have a surface of revolution", let's consider some function $R$ (depending on the axis of revolution and perhaps some other parameters to be specified) that vanishes precisely when we have a surface of revolution. Can we find such a function? You bet we can! Enter the Fourier transform.

You might be familiar with the fact that the Fourier transform of a constant function on a circle is the delta function. Explicitly, $\hat 1 = \delta_0$ (there should also be a normalization factor, but let's disregard that). So our function $R$ can be $\lVert \delta_0 - \hat f\rVert$, where $\lVert \cdot \rVert$ is any reasonable norm and $\hat f$ is the Fourier transform of the surface's equation around the candidate axis of rotation. The reason this works is that, for a surface of revolution, $f$ doesn't depend on one of the variables in the new coordinates. Another way to say this is that it is constant in that variable, or that its Fourier transform is a delta function.

Summing up, we have reduced the problem to minimizing the function $R$. Arguably there's still quite some work involved in computing the Fourier transform of $f$ around an arbitrary axis and finding a good norm to use. The good news is that there are surely packages for calculating the Fourier transform, and that you can truncate the norm to include just the first few terms (the so-called low-frequency ones) of the Fourier transform.
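As a minimal numerical sketch of this idea (assuming for simplicity that the candidate axis is the $z$-axis, truncating to finitely many sample circles, and using two illustrative test surfaces), one can measure the energy in the nonzero angular Fourier modes of $f$ around the axis:

```python
import numpy as np

# Sketch: for a candidate axis (taken here to be the z-axis), sample f on
# circles around the axis and sum the energy in the nonzero angular Fourier
# modes; this plays the role of the function R above.
def revolution_defect(f, radii, heights, n_angles=64):
    theta = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    defect = 0.0
    for r in radii:
        for z in heights:
            vals = f(r * np.cos(theta), r * np.sin(theta), z)
            coeffs = np.fft.fft(vals) / n_angles
            defect += np.sum(np.abs(coeffs[1:]) ** 2)  # drop the constant mode
    return defect

f_rev = lambda x, y, z: x**2 + y**2 + z**4 - 1      # revolution about z-axis
f_not = lambda x, y, z: x**2 + 2 * y**2 + z**2 - 1  # triaxial ellipsoid

d_rev = revolution_defect(f_rev, [0.5, 1.0], [0.0, 0.5])
d_not = revolution_defect(f_not, [0.5, 1.0], [0.0, 0.5])
# d_rev vanishes up to rounding; d_not is strictly positive
```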


The defining equation should have an infinitesimal symmetry that, after a translation of the coordinate system to be centered on the axis of rotation, is linear and skew-symmetric (it is an infinitesimal rotation) with coefficients independent of $(x,y,z)$. In other words,

$f(x,y,z)=0 \hskip15pt$ implies, after ignoring terms of order $(dt)^k$ with $k > 1$, that $$f(x+ (\alpha + Py + Qz) dt, y + (\beta - Px + Rz)dt, z + (\gamma -Qx -Ry) dt) = 0$$

for constants $P,Q,R$ (the entries of the skew-symmetric matrix) and $(a,b,c)$ (the new center of coordinates, which can be any point on the axis of rotation). The triple of constants $(\alpha,\beta,\gamma)$ is defined to equal $(-Pb-Qc, Pa-Rc, Qa +Rb)$; this is just the correction term resulting from the coordinate change, since we have for example $x \mapsto x + (P(y-b) + Q(z-c))\, dt$ as the effect on $x$ of an infinitesimal rotation through an angle proportional to $dt$.

As $f(x,y,z)$ is zero, this is a set of equations stating that the degree-$1$ term (in $dt$) of the power series of $f(x + \ldots,\, y + \ldots,\, z + \ldots)$ must vanish.

The vanishing is not only numerical, but up to whatever order of vanishing $f(x,y,z)$ has, in the case of a non-reduced equation like $f(x,y,z)=(1 + g^2) h(x^2+y^2)^2$ where $g$ and $h$ are polynomials. The equation can be seen as taking place in a ring of functions modulo $f(x,y,z)$, or as matching the asymptotic expansions up to the correct order. If there is no foolproof way to reduce the equation before starting, picking several generic points $(x_i,y_i,z_i)$ can fail, because partial derivatives of $f$ might vanish everywhere $f$ does. For polynomials you could factor $f$ before starting.

The equations are linear (and homogeneous, hence invariant under scaling) in $P,Q,R$, but contain bilinear products like $Pa$. The condition on $(P,Q,R)$ is invariant under changes of $(a,b,c)$ corresponding to a different choice of point on the axis of rotation, so if there is a solution, there is one with $a=0$, or $b=0$, or $c=0$. Taking these two facts into consideration, the dimension of the solution space is $6 - 1 - 1 = 4$, which is the same as the number of parameters needed to define a line in $3$ dimensions. The degree of the system is $2$ after imposing $abc=0$ and requiring $P=1$ (or the same for $Q$ or $R$, if no solution has $P=1$); this reflects the twofold redundancy that reversing both the orientation of the rotation axis and the direction of the rotation does not change the rotation.

All in all this ends up being similar to the answer of Holographer using random points, since the variables parameterizing the problem are equivalent. If $f$ is a polynomial you can compute exactly in a computer algebra system without picking random points; $\mathbb{C}[x,y,z]/(f)$ will be an infinite-dimensional vector space over $\mathbb{C}$ (or whatever other field of coefficients), and after exhausting the relations in low degree (getting a numerical degree-$2$ system that is solved for the constants $P,Q,R,a,b,c$ after imposing the different cases of $abc=0$ and $(P-1)(Q-1)(R-1)=0$), one can test the solution to see whether the degree-$1$ term in $dt$ is divisible by $f(x,y,z)$ or not.
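As a small illustration of the divisibility test (in SymPy rather than a full computer-algebra treatment; the surface and the symmetry parameters $P,Q,R,a,b,c$ are chosen by hand for the example), one can form the degree-$1$ term and reduce it modulo $f$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def first_order_term(f, P, Q, R, a, b, c):
    # Velocity field of the infinitesimal rotation with skew-symmetric matrix
    # [[0,P,Q],[-P,0,R],[-Q,-R,0]] about (a, b, c); alpha, beta, gamma as in
    # the text above.
    alpha, beta, gamma = -P*b - Q*c, P*a - R*c, Q*a + R*b
    v = (alpha + P*y + Q*z, beta - P*x + R*z, gamma - Q*x - R*y)
    return sp.expand(sum(vi * sp.diff(f, xi) for vi, xi in zip(v, (x, y, z))))

# Example surface revolved about the vertical line through (1, 2, 0):
f = (x - 1)**2 + (y - 2)**2 + z**4 - 1
L = first_order_term(f, P=-1, Q=0, R=0, a=1, b=2, c=0)

# The symmetry condition: L must vanish modulo f. Here every level set of f
# is symmetric, so L is identically zero; in general one checks the remainder
# of polynomial division of L by f.
q, r = sp.div(L, f, x, y, z)
print(L, r)  # 0 0
```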