Shifted Exponential Distribution and MLE
I was doing my homework and the following problem came up! We have the CDF of an exponential distribution shifted by $L$ units, where $L>0$ and $x\ge L$. The CDF is:
$$1-e^{-\lambda(x-L)}$$
The question says that we should assume the following data are lifetimes of electric motors, in hours:
$$\begin{align*} 153.52,103.23,31.75,28.91,37.91,7.11,99.21,31.77,11.01,217.40 \end{align*}$$
Please note that the mean of these numbers is $72.182$.
Now the question has two parts which I will go through one by one:
Part 1: Evaluate the log likelihood for the data when $\lambda=0.02$ and $L=3.555$. The way I approached the problem was to take the derivative of the CDF with respect to $\lambda$ to get the PDF, which gave me:
$$(x-L)e^{-\lambda(x-L)}$$
Then, since we have $n$ independent observations with $n=10$, the joint pdf is
$$\prod_{i=1}^n(x_i-L)\,e^{-\lambda\sum_{i=1}^n(x_i-L)},$$ which gives the following log likelihood:
$$\sum_{i=1}^n\ln(x_i-L)-\lambda\sum_{i=1}^n(x_i-L).$$ Is this the correct approach? It would take quite a while and be pretty cumbersome to evaluate $\ln(x_i-L)$ for every observation.
Part 2: The question also asks for the ML estimate of $L$. So, assuming the log likelihood above is correct, we can take the derivative with respect to $L$, set it to zero,
$$-\sum_{i=1}^n\frac{1}{x_i-L}+n\lambda=0,$$ and solve for $L$? Is this correct? I am not quite sure how I should proceed.
Thanks so much for your help! I greatly appreciate it :)
Step 1. Find the pdf of $X$: $$f(x)=\frac{d}{dx}F(x)=\frac{d}{dx}\left(1-e^{-\lambda(x-L)}\right)=\lambda e^{-\lambda(x-L)}$$ for $x\ge L$.
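As a quick sanity check (my own addition, not part of the original solution): this is just the ordinary exponential pdf shifted by $L$, which corresponds to scipy's `expon` distribution with `loc=L` and `scale=1/lambda`. A minimal sketch, assuming `numpy` and `scipy` are available:

```python
import numpy as np
from scipy import stats

lam, L = 0.02, 3.555              # parameter values from Part 1
x = np.linspace(L, L + 300, 5)    # a few test points with x >= L

# pdf derived above: f(x) = lambda * exp(-lambda * (x - L)) for x >= L
manual_pdf = lam * np.exp(-lam * (x - L))

# scipy's shifted exponential: loc plays the role of L, scale = 1/lambda
scipy_pdf = stats.expon(loc=L, scale=1 / lam).pdf(x)

print(np.allclose(manual_pdf, scipy_pdf))  # True
```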
Step 2. Now the log likelihood (written $\ln\mathcal{L}$ to avoid clashing with the shift parameter $L$) is $$\ln\mathcal{L}(\lambda,L)=\ln\left(\lambda^n\cdot e^{-\lambda\sum_{i=1}^{n}(x_i-L)}\right)=n\ln(\lambda)-\lambda\sum_{i=1}^{n}(x_i-L)=n\ln(\lambda)-n\lambda\bar{x}+n\lambda L,$$ which can be evaluated directly from the given data.
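For Part 1, here is a short Python sketch (an illustration I am adding, not part of the original answer) that plugs the given data, $\lambda=0.02$ and $L=3.555$ into this expression:

```python
import numpy as np

data = np.array([153.52, 103.23, 31.75, 28.91, 37.91,
                 7.11, 99.21, 31.77, 11.01, 217.40])
lam, L = 0.02, 3.555
n = len(data)

# log likelihood: n*ln(lambda) - lambda * sum(x_i - L)
loglik = n * np.log(lam) - lam * np.sum(data - L)
print(loglik)  # approximately -52.85
```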
Step 3. Find the MLE of $L$. Taking the derivative of the log likelihood with respect to $L$ gives $$\frac{d}{dL}\left(n\ln(\lambda)-n\lambda\bar{x}+n\lambda L\right)=n\lambda>0,$$ so the log likelihood is monotone increasing in $L$. To maximize it we should therefore take the largest admissible value of $L$. Looking at the domain (support) of $f$, we need $x\ge L$, so every observation in the sample must be greater than or equal to $L$, which gives us an upper bound (constraint) on $L$. The largest $L$ we can choose without violating the condition $X_i\ge L$ for all $1\le i\le n$ is the sample minimum, so the MLE of $L$ is $$\hat{L}=X_{(1)},$$ where $X_{(1)}$ denotes the minimum value of the sample, here $7.11$.
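To see the monotonicity argument numerically, here is a small sketch (again my own addition, with $\lambda$ held at the Part 1 value of $0.02$) that evaluates the log likelihood on a grid of admissible $L$ values and confirms it is largest at the sample minimum:

```python
import numpy as np

data = np.array([153.52, 103.23, 31.75, 28.91, 37.91,
                 7.11, 99.21, 31.77, 11.01, 217.40])
lam = 0.02  # lambda held fixed, as in Part 1

def loglik(L, x=data, lam=lam):
    """Log likelihood n*ln(lambda) - lambda*sum(x_i - L), valid only for L <= min(x)."""
    return len(x) * np.log(lam) - lam * np.sum(x - L)

# The log likelihood increases in L, so the constrained maximum sits at L = min(data).
grid = np.linspace(0, data.min(), 6)
print([round(loglik(L), 3) for L in grid])  # strictly increasing values
print("MLE of L:", data.min())              # 7.11
```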