Trust region problem using dual problem

I would like to solve the following problem:

$$\min_{x} \frac{1}{2}x^THx+c^Tx+d \quad \text{s.t.} \quad x^Tx\leq r^2$$ where $H$ is a symmetric, positive definite matrix.

So this is a convex quadratic function over a compact Euclidean ball. To solve this problem, I formed the Lagrangian dual problem and obtained: $$\max_{\lambda}\left\{-\frac{1}{2}c^T(H+2\lambda I)^{-1}c+d-\lambda r^2\right\} \quad \text{s.t.} \quad \lambda\geq0$$
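(For reference, this comes from minimizing the Lagrangian $L(x,\lambda)=\frac{1}{2}x^THx+c^Tx+d+\lambda(x^Tx-r^2)$ over $x$: setting $\nabla_x L=(H+2\lambda I)x+c=0$ gives $x(\lambda)=-(H+2\lambda I)^{-1}c$, and substituting this back into $L$ yields the expression above.)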

How can I solve this problem? Slater's condition holds for this problem, so strong duality holds. The dual objective is, I believe, differentiable with respect to $\lambda$, so I could use simple gradient techniques, but I am not familiar with matrix differentiation, so I could not go further. How can I solve the dual problem? Thanks in advance.


Solution 1:

In general, there is no closed-form solution for the trust-region subproblem (see pp. 83-88 of Nocedal & Wright, Numerical Optimization, 2nd edition). The idea is that by restricting $x$ to lie in a norm ball of radius $r$, we introduce implicit regularization into the problem.

For example, the unconstrained problem gives $x^* = -H^{-1}c$, which also solves the trust-region subproblem as long as $\|H^{-1}c\|_2\leq r$. In contrast, when the constraint $\|x\|_2^2\leq r^2$ is active, we get $$x^* = -(H+2\lambda^* I)^{-1}c,$$ where $\lambda^*>0$ is chosen so that the constraint is tight, $\|x^*\|_2=r$ (this $\lambda^*$ solves the dual problem). That is, we can view $x^*$ as a function of $\lambda$, i.e., $x^*(\lambda)=-(H+2\lambda I)^{-1}c$, and perform a one-dimensional root-finding search for $\|x^*(\lambda)\|_2-r=0$ in $\lambda$. Nocedal and Wright give a procedure for this. Differentiating the dual objective with respect to $\lambda$ leads to essentially the same condition, $$\frac{d}{d\lambda}\left(-\frac{1}{2}c^T(H+2\lambda I)^{-1}c+d-\lambda r^2\right)=c^T(H+2\lambda I)^{-2}c-r^2=\|x^*(\lambda)\|_2^2-r^2,$$ which again has no closed-form solution in general.
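As a minimal sketch of that one-dimensional search, assuming NumPy/SciPy and using Brent's method as a generic bracketing root finder (not the specific iteration described in Nocedal & Wright):

```python
import numpy as np
from scipy.optimize import brentq

def trust_region_dual(H, c, r):
    """Minimize 1/2 x^T H x + c^T x subject to ||x||_2 <= r, with H symmetric
    positive definite, via a 1-D search over the dual variable lambda >= 0.
    (The constant d does not affect the minimizer.)"""
    n = H.shape[0]

    # Unconstrained minimizer x = -H^{-1} c; if it is feasible, lambda* = 0.
    x_unc = np.linalg.solve(H, -c)
    if np.linalg.norm(x_unc) <= r:
        return x_unc, 0.0

    # x(lambda) = -(H + 2*lambda*I)^{-1} c, and ||x(lambda)|| decreases in lambda.
    def phi(lam):
        x = np.linalg.solve(H + 2.0 * lam * np.eye(n), -c)
        return np.linalg.norm(x) - r

    # Bracket the root: phi(0) > 0 in this branch, and phi(lam) -> -r as lam grows.
    lam_hi = 1.0
    while phi(lam_hi) > 0:
        lam_hi *= 2.0

    lam_star = brentq(phi, 0.0, lam_hi)            # ||x(lambda*)|| = r
    x_star = np.linalg.solve(H + 2.0 * lam_star * np.eye(n), -c)
    return x_star, lam_star

# Hypothetical usage on random data:
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
H = A @ A.T + 3.0 * np.eye(3)                      # symmetric positive definite
c = rng.standard_normal(3)
x, lam = trust_region_dual(H, c, r=0.5)
print(x, lam, np.linalg.norm(x))                   # ||x|| ~ 0.5 if the constraint is active
```

Because $\|x^*(\lambda)\|_2$ is strictly decreasing in $\lambda$, the doubling loop always brackets the root whenever the unconstrained minimizer lies outside the ball.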