Finding the MVUE among all unbiased estimators of the form $\sum_{i=1}^ka_iW_i$ using Lagrange multipliers

Given unbiased estimators $W_1,\ldots,W_k$ of $\theta$, where $\operatorname{Var}_{\theta}(W_i) = \sigma_i^2$ and $\operatorname{Cov}_{\theta}(W_i, W_j) = 0$ for all $i\neq j$, I need to show that the estimator $$W^*=\frac{\sum_{i=1}^k(W_i/\sigma_i^2)}{\sum_{i=1}^k(1/\sigma_i^2)}$$ is the MVUE among all unbiased estimators of the form $$\hat\theta=\sum_{i=1}^ka_iW_i,$$ where the $a_i$ are constants and $E_{\theta}\left[\sum_{i=1}^ka_iW_i\right] = \theta$.

I understand that I need to apply the method of Lagrange multipliers here, but I have not used this method before. How would I use it to show that $W^*$ is the MVUE?

Then I need to show that $\operatorname{Var}(W^*)= \frac{1}{\sum_{i=1}^k(1/\sigma_i^2)}$.


You need $\hat\theta$ to be unbiased for $\theta$. Since each $W_i$ is unbiased, $E_{\theta}[\hat\theta]=\sum_{i=1}^k a_iE_{\theta}[W_i]=\theta\sum_{i=1}^k a_i$, so unbiasedness gives the constraint $\sum\limits_{i=1}^k a_i=1$. Subject to this constraint, you have to minimize the variance of $\hat\theta$.

Now,

\begin{align} \operatorname{Var}_{\theta}(\hat\theta)&=\sum_{i=1}^k a_i^2\operatorname{Var}_{\theta}(W_i)+\sum_{i\ne j}a_ia_j\operatorname{Cov}_{\theta}(W_i,W_j) \\&=\sum_{i=1}^k a_i^2\sigma_i^2, \end{align}

where the second equality uses $\operatorname{Cov}_{\theta}(W_i,W_j)=0$ for $i\ne j$.

So you have the optimization problem (with respect to $a_1,\ldots,a_k$):

$$\text{Minimize} \quad \sum_{i=1}^k a_i^2\sigma_i^2\quad\text{subject to }\sum_{i=1}^k a_i=1$$

To solve this using a Lagrange multiplier, introduce a multiplier $\lambda$ for the equality constraint and minimize the resulting Lagrangian; this is a straightforward application of the method.
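For completeness, here is a sketch of that calculation. Form the Lagrangian

$$\mathcal{L}(a_1,\ldots,a_k,\lambda)=\sum_{i=1}^k a_i^2\sigma_i^2-\lambda\left(\sum_{i=1}^k a_i-1\right).$$

Setting $\frac{\partial\mathcal{L}}{\partial a_i}=2a_i\sigma_i^2-\lambda=0$ gives $a_i=\frac{\lambda}{2\sigma_i^2}$. Plugging this into the constraint $\sum_{i=1}^k a_i=1$ yields $\frac{\lambda}{2}=\left(\sum_{j=1}^k \frac{1}{\sigma_j^2}\right)^{-1}$, and hence

$$a_i=\frac{1/\sigma_i^2}{\sum_{j=1}^k (1/\sigma_j^2)},$$

which are exactly the weights of $W^*$. Since the objective $\sum_{i=1}^k a_i^2\sigma_i^2$ is strictly convex and the constraint is linear, this stationary point is the unique global minimizer.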

But you can alternatively use the Cauchy–Schwarz inequality to say directly that

$$\left(\sum_{i=1}^k a_i^2\sigma_i^2\right) \left(\sum_{i=1}^k \frac{1}{\sigma_i^2}\right)\ge \left(\sum_{i=1}^k a_i\right)^2=1,$$

where the right-hand side equals $1$ by the unbiasedness constraint. Hence $\operatorname{Var}_{\theta}(\hat\theta)\ge\left(\sum\limits_{i=1}^k \frac{1}{\sigma_i^2}\right)^{-1}$ for every unbiased estimator of this form.

Equality holds in Cauchy–Schwarz if and only if the two vectors are proportional, i.e. $a_i\sigma_i=\frac{c}{\sigma_i}$, which means $a_i=\frac{c}{\sigma_i^2}$, for all $i$ and some constant $c\,(\ne 0)$.

Since $\sum\limits_{i=1}^k a_i=1$, you have $c=\left(\sum\limits_{i=1}^k \frac{1}{\sigma_i^2}\right)^{-1}$. So equality is attained exactly at $\hat a_i=\frac{1/\sigma_i^2}{\sum\limits_{i=1}^k (1/\sigma_i^2)}$.
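This also settles the last part of the question: substituting the optimal weights into the variance formula,

$$\operatorname{Var}(W^*)=\sum_{i=1}^k \hat a_i^2\sigma_i^2=\frac{\sum_{i=1}^k \sigma_i^2/\sigma_i^4}{\left(\sum_{i=1}^k 1/\sigma_i^2\right)^2}=\frac{\sum_{i=1}^k 1/\sigma_i^2}{\left(\sum_{i=1}^k 1/\sigma_i^2\right)^2}=\frac{1}{\sum_{i=1}^k (1/\sigma_i^2)},$$

which matches the lower bound above, confirming that $W^*$ attains it.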

The estimator $W^*=\sum\limits_{i=1}^k \hat a_i W_i$ is what is called the best linear unbiased estimator (BLUE) of $\theta$.
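If you want a quick numerical sanity check (not a proof), here is a small Monte Carlo sketch in Python/NumPy. The normal distribution, the value of $\theta$, and the specific variances are arbitrary choices for illustration; the result only depends on unbiasedness and the $\sigma_i^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 5.0                            # true parameter (arbitrary choice)
sigma2 = np.array([1.0, 4.0, 0.25])    # hypothetical variances sigma_i^2
n = 200_000                            # Monte Carlo replications

# Simulate independent unbiased estimators W_i ~ N(theta, sigma_i^2).
W = rng.normal(theta, np.sqrt(sigma2), size=(n, sigma2.size))

# Inverse-variance weights: a_i = (1/sigma_i^2) / sum_j (1/sigma_j^2).
w = (1 / sigma2) / np.sum(1 / sigma2)
W_star = W @ w

print("mean of W*:       ", W_star.mean())          # ~ theta (unbiased)
print("empirical Var(W*):", W_star.var())           # ~ 1 / sum(1/sigma_i^2)
print("theoretical Var:  ", 1 / np.sum(1 / sigma2))
```

The empirical variance of `W_star` should agree with $\left(\sum_{i=1}^k 1/\sigma_i^2\right)^{-1}$ up to Monte Carlo error, and it will be smaller than the variance of any other fixed weighting you try with weights summing to $1$.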