Why isn't integer factorization in complexity P, when you can factorize n in O(√n) steps?

It is said that integer factorization is an NP problem.

Why isn't it P? You can solve it in $O(\sqrt{n})$ time with trial division, and since $\sqrt{n} = n^{1/2}$, that looks to me like an expression of the form $n^k$, i.e., a polynomial.
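
Concretely, this is the kind of trial division I mean (a rough Python sketch, not optimized):

```python
def trial_division(n):
    """Factor n by trying divisors up to sqrt(n): roughly sqrt(n) iterations."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # pull out each copy of the divisor d
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is a prime factor
        factors.append(n)
    return factors

print(trial_division(60))   # [2, 2, 3, 5]
```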

I'm having trouble telling P, NP, NP-complete, and NP-hard apart, because I don't understand how the definitions differ or how complexity is measured and defined.


Let $k$ denote the length of the input value $n$.

Since $k=\log_2n$, a complexity of $O(\sqrt{n})$ is equivalent to $O(\sqrt{2^k})$.

Since $\sqrt{2^k}=2^{k/2}$, this complexity is exponential in the input length $k$.
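
To make the growth concrete, here is a small illustrative snippet (the step counts are approximate; the point is that two extra bits of input roughly double the work):

```python
# Trial division tries about sqrt(n) = 2^(k/2) candidate divisors,
# so the work grows exponentially in the bit length k.
for k in (16, 32, 48, 64):
    n = 2 ** k                 # an n that is k bits long
    steps = int(n ** 0.5)      # ~ sqrt(n) = 2^(k/2) candidate divisors
    print(f"k = {k:2d} bits -> ~{steps:,} candidate divisors")
```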


Strictly speaking, it is not well posed to ask whether an abstract problem like integer factoring is in P. When deciding whether an algorithm runs in polynomial time, we don't measure against $n$, the number being factored (or its equivalent); we measure against the length of the input, and that length depends on exactly how you've chosen to represent the input. So we have to ask whether integer factoring with a particular representation is in P.

Integer factoring with the numbers represented in binary is (as far as we know) not in P. In this case, the length of the input is $\log_2 n$.

Integer factoring with the numbers represented in unary is in P. In this case the number of bits is $n$.

Integer factoring with the number represented as a list of the numbers from 1 to $n$ is in P. In this case the number of bits is $O(n \log n)$.
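
To get a feel for how much these encodings differ in size, here is a small illustrative comparison (the choice of $n$ is arbitrary):

```python
n = 1000

binary_bits   = n.bit_length()                                 # ~ log2(n) bits
unary_symbols = n                                              # n marks
list_bits     = sum(i.bit_length() for i in range(1, n + 1))   # ~ n log2(n) bits

print(f"binary: {binary_bits} bits, unary: {unary_symbols} symbols, "
      f"list of 1..n: {list_bits} bits")
```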

Why the length of the input? That's just the definition of P.

Typically, when talking about P and NP, we assume you are using a reasonable representation. What makes a representation reasonable is left somewhat underspecified; there is no strict definition. But representing numbers in unary, or as lists of numbers, is clearly not reasonable, since it uses exponentially more space than binary.

Ok, what about the maximum of $n$ numbers?

The obvious way to represent this would be using $k$ bits for each number, giving $nk$ bits total. That puts the problem in P. In order for the problem to end up outside of P, the length would have to be something more like $O(\log n)$. But a representation that short couldn't hold the numbers in the list. Whatever problem was being solved there, it would not recognizably be finding the maximum of a list.
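
As a sketch of why that encoding puts the problem in P: with $n$ numbers of $k$ bits each, the input is about $nk$ bits, and a single scan is linear in that length (the list here is just an arbitrary example):

```python
def maximum(numbers):
    """Find the max with a single scan: time linear in the input length."""
    best = numbers[0]
    for x in numbers[1:]:
        if x > best:
            best = x
    return best

nums = [13, 7, 42, 5]                            # arbitrary example
input_bits = sum(x.bit_length() for x in nums)   # ~ n * k bits
print(maximum(nums), input_bits)                 # 42 16
```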

To some degree it comes down to this: I can represent $n$ in the integer factorization problem with $\log n$ bits without losing information, so we assume that you do. But I can't do the same for the maximum-of-a-list problem.


Why do you think the problem is solvable in $n^{1/2}$ steps? Keep in mind that you have to find each factor, and testing "Does $x$ divide $y$?" (and computing $y/x$, if so) takes more than one step. (Also, $n$ here is the number of bits of the input, not the input itself. See bof's comment.)


By the way, strictly speaking, we don't know that integer factorization isn't in $P$; we just know that the obvious approaches don't work.