Existence of a function
Solution 1:
Here's a slightly different approach.
Suppose for contradiction that such an $f$ exists. In fact we will reach the same contradiction even if $f$ only satisfies $f(x+y) \geq f(x)(1+yf(x))$ for $x,y>0$.
Define a sequence $x_0 < x_1 < x_2 < \ldots$ via the recurrence $$x_0 := 2011 \text{ and } x_{n+1} := x_n + 1/f(x_n) \text{ for } n \geq 0.$$
Making repeated use of the bound $f(x + 1/f(x)) \geq f(x)(1+f(x)/f(x)) = 2f(x)$, we find that $f(x_n) \geq 2^n f(x_0)$ for all $n$.
Taking the reciprocal of the latter inequality and summing a geometric series enables us to establish $$ x_n = x_0 + \sum_{i=0}^{n-1} (x_{i+1} - x_i) = x_0 + \sum_{i=0}^{n-1} 1/f(x_i) \leq x_0 + \sum_{i=0}^{n-1} 1/(2^i f(x_0)) < x_0 + 2/f(x_0) =: M.$$
But now the sequence $x_0,x_1,\ldots$ is bounded above by $M$ while the sequence $f(x_0),f(x_1),\ldots$ diverges to infinity, so there is some integer $N$ with $f(x_N) > f(M)$ even though $x_N < M$. This contradicts the fact that $f$ is increasing, which follows directly from the hypothesis: $f(x+y) \geq f(x)(1+yf(x)) > f(x)$ for all $x,y>0$.
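As a purely numerical illustration (not part of the proof), one can iterate the boundary case in which $f$ doubles exactly at each step, starting from an assumed value $f(x_0)=1$: the points $x_n$ creep up toward $M = x_0 + 2/f(x_0)$ while $f(x_n)$ explodes.

```python
# Illustration only: boundary case where f doubles exactly at each step.
# f(x_0) = 1 is an arbitrary assumed value; x_0 = 2011 as in the proof.
x, fx = 2011.0, 1.0
M = x + 2.0 / fx                      # M = x_0 + 2/f(x_0) = 2013
for n in range(1, 21):
    x, fx = x + 1.0 / fx, 2.0 * fx    # x_n = x_{n-1} + 1/f(x_{n-1}), f doubles
    print(f"n={n:2d}  x_n={x:.6f}  f(x_n)={fx:.0f}  M={M}")
# x_n increases toward (but stays below) M = 2013, while f(x_n) = 2^n grows without bound.
```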
Solution 2:
There exists no such $f$. My solution is formally not based on analysis, but it is inspired by my analysis "solution" in the comments. Hopefully there's no bug here :-).
Pick a sufficiently small $\epsilon > 0$. Applying the hypothesis with $x = 1+n\epsilon$ and $y = \epsilon$, we get, for every $n \in \mathbb N_{0}$: $$ f(1+(n+1)\epsilon) - f(1+n\epsilon) > \epsilon f(1+ n\epsilon)^2. $$ I find it easier to work with the related sequence $u_n := \epsilon f(1+n\epsilon)$. Multiplying the above inequality by $\epsilon$ turns it into the pleasant recursive inequality $u_{n+1} > u_n + u_n^2$, with the base value $u_0 = \epsilon f(1)$.
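To get a feel for this recursion, here is a small sketch (illustration only) of its boundary case $u_{n+1} = u_n + u_n^2$, with an assumed starting value $u_0 = 0.1$:

```python
# Sketch only: boundary case u_{n+1} = u_n + u_n**2, with an assumed
# starting value u_0 = 0.1 (i.e. eps*f(1) = 0.1).
u = 0.1
for n in range(15):
    print(f"u_{n} = {u:.4f}")
    u = u + u * u
# The sequence is increasing, and each step gains at least u_0**2,
# which is exactly what the Proposition below exploits.
```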
Proposition. If $u_k \geq \eta$, then $u_{k + \lceil 1/\eta \rceil} \geq 2 \eta$.
Proof. We have $u_{k+1} - u_k > u_k^2 \geq \eta^2$, $u_{k+2}-u_{k+1} > \eta^2$, and so on; here I am implicitly using the fact that $u_n$ is increasing, so every intermediate term is still at least $\eta$. Adding these $\lceil 1/\eta \rceil$ inequalities gives the claim. $\Box$
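A quick numerical sanity check of the Proposition, run on the worst case allowed by the inequality (the boundary recursion $u_{n+1} = u_n + u_n^2$):

```python
import math

# Sanity check: starting from u_k = eta and iterating the worst-case
# recursion u_{n+1} = u_n + u_n**2 for ceil(1/eta) steps, the value
# should end up >= 2*eta.
for eta in (0.5, 0.1, 0.01, 0.001):
    u = eta
    for _ in range(math.ceil(1 / eta)):
        u = u + u * u
    print(f"eta = {eta}:  u after ceil(1/eta) steps = {u:.4f}  (>= 2*eta: {u >= 2 * eta})")
```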
Number of iterations before reaching $1$. For $\eta \leq 1$ we have the simplification $\lceil 1/\eta \rceil \leq 2/\eta$. I also find it convenient to use the informal phrase "$u$-value at iteration $i$" for $u_i$. The Proposition then says: if the $u$-value at the current iteration is at least $\eta \leq 1$, then after at most $2/\eta$ further iterations the $u$-value is at least $2\eta$. If $2\eta \geq 1$, we stop; otherwise, after at most $2/(2\eta)$ additional iterations the $u$-value is at least $2^2\eta$, and so on. Repeating this argument, the number of iterations needed for the $u$-value to reach at least $1$ is at most $$ \frac{2}{\eta} + \frac{2}{2\eta} + \frac{2}{2^2 \eta} + \ldots $$ (actually a finite sum with $\approx \log_2(1/\eta)$ terms), which is smaller than the sum of the corresponding infinite geometric series, namely $4/\eta$.
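The $4/\eta$ bound can likewise be spot-checked on the worst-case recursion (again, illustration only):

```python
# Spot check: number of steps for the worst-case recursion u_{n+1} = u_n + u_n**2
# to reach 1, compared against the 4/eta bound.
for eta in (0.5, 0.1, 0.01, 0.001):
    u, steps = eta, 0
    while u < 1.0:
        u, steps = u + u * u, steps + 1
    print(f"eta = {eta}:  {steps} steps  (bound 4/eta = {4 / eta:g})")
```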
Final contradiction. Set $\eta := u_0 = \epsilon f(1)$. By the above, there is some integer $N \leq 4/\eta = 4/(\epsilon f(1))$ with $u_N \geq 1$, that is, $$ 1 \leq u_N = \epsilon \cdot f(1+N\epsilon) \leq \epsilon \cdot f\left(1+\frac{4}{f(1)} \right), $$ where the last step uses $N\epsilon \leq 4/f(1)$ together with the fact that $f$ is increasing (the hypothesis gives $f(x+y) > f(x)$ for $y>0$). This is a contradiction, since $\epsilon > 0$ was arbitrary while the second factor $f(1+4/f(1))$ is a fixed positive quantity depending only on $f$.
Note. While no $f$ exists satisfying the requirements of the problem on all of $(0,\infty)$, such an $f$ does exist if we relax the domain to an interval of the form $(0,A)$: the function $f(x) = \frac{1}{A-x}$ works. Indeed, for $0 < x < x+y < A$, clearing denominators reduces $\frac{1}{A-x-y} \geq \frac{1}{A-x}\left(1+\frac{y}{A-x}\right)$ to $(A-x)^2 \geq (A-x)^2 - y^2$, which clearly holds.
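And a quick numerical sanity check of this claim (illustration only, with an assumed value $A = 5$), sampling random points $0 < x < x+y < A$:

```python
import random

# Numeric sanity check (not a proof): verify f(x+y) >= f(x)*(1 + y*f(x))
# for f(x) = 1/(A - x) at random points 0 < x < x + y < A.
A = 5.0

def f(t):
    return 1.0 / (A - t)

random.seed(0)
for _ in range(100_000):
    x = random.uniform(0.01, A - 0.05)
    y = random.uniform(0.01, A - x - 0.01)
    assert f(x + y) >= f(x) * (1 + y * f(x))
print("no counterexample found")
```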