Suppose $X_1,\ldots, X_n$ are i.i.d. random variables having pdf $$ f_{\theta}(x)=\left\{\begin{array}{ll}{\theta,} & {0 \leqslant x \leqslant 1} \\ {1-\theta,} & {1<x \leqslant 2}\end{array}\right. $$ Give the maximum likelihood estimate of $\theta$.

I know the likelihood function of $(X_1,\ldots, X_n)$ is $$\ell(\theta;x)=\prod_{i=1}^{n}\left[\theta I(0\leqslant x_i\leqslant 1)+(1-\theta)I(1<x_i\leqslant 2)\right]$$ But I don't know how to solve $\frac{\partial \log \ell(\theta)}{\partial \theta}=0$ to get $\hat{\theta}$.

Can anyone help me?


What you have written for the likelihood function is technically correct, but the sum inside the product makes it awkward to differentiate directly. Instead, suppose that $n_1$ observations in the sample fall in $[0,1]$ and $n_2$ fall in $(1,2]$, with $n_1+n_2=n$.

Given the sample, rewrite the likelihood function as $$L(\theta)=\left(\prod_{i:0\le x_i\le 1}\theta\right)\left(\prod_{i:1<x_i\le 2}(1-\theta)\right)\quad,\,\theta\in(0,1)$$

This is the same as saying $$L(\theta)=\theta^{n_1}(1-\theta)^{n_2}\quad,\,\theta\in(0,1)$$

Differentiating the log-likelihood with respect to $\theta$ yields

$$\frac{\partial}{\partial\theta}\ln L(\theta)=\frac{n_1}{\theta}-\frac{n_2}{1-\theta}$$

Setting this equal to zero gives $n_1(1-\theta)=n_2\theta$, so the unique critical point is $$\hat\theta=\frac{n_1}{n_1+n_2}=\frac{n_1}{n}$$ Since the second derivative $-\frac{n_1}{\theta^2}-\frac{n_2}{(1-\theta)^2}$ is negative, this critical point is indeed the maximizer.
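As a quick sanity check (not part of the derivation above), here is a short Python sketch that simulates data from $f_\theta$ and compares the closed-form MLE $n_1/n$ with a direct numerical maximization of the log-likelihood. It assumes NumPy and SciPy are available; the variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulate n draws from f_theta: a draw lands in [0, 1] with probability
# theta (uniformly there), otherwise in (1, 2] (uniformly there).
theta_true, n = 0.3, 10_000
in_first = rng.random(n) < theta_true
x = np.where(in_first, rng.uniform(0.0, 1.0, n), rng.uniform(1.0, 2.0, n))

n1 = int(np.sum(x <= 1.0))      # observations in [0, 1]
theta_closed_form = n1 / n      # the MLE n1 / n derived above

# Numerically maximize the log-likelihood as a cross-check.
def neg_log_lik(theta):
    return -(n1 * np.log(theta) + (n - n1) * np.log(1.0 - theta))

res = minimize_scalar(neg_log_lik, bounds=(1e-9, 1 - 1e-9), method="bounded")

print(theta_closed_form, res.x)  # the two estimates should agree
```

Both estimates should be close to the true $\theta=0.3$, with the numerical maximizer matching $n_1/n$ up to optimizer tolerance.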


Your expression of the p.d.f. is not very convenient for estimation.

It's better to use $$ f_\theta(x) = \theta^{I(0 \leq x \leq 1)}(1-\theta)^{I( 1 < x \leq 2)}.$$ If you spend two minutes on it you'll see it's the same.
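To spell out the check: plugging the two cases of the indicators into this form recovers the original piecewise pdf,

```latex
f_\theta(x) =
\begin{cases}
\theta^{1}\,(1-\theta)^{0} = \theta, & 0 \le x \le 1,\\
\theta^{0}\,(1-\theta)^{1} = 1-\theta, & 1 < x \le 2.
\end{cases}
```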

Then, $$L(\theta) = \prod_{i=1}^N f_\theta(x_i) = \theta ^{\#[0,1]}(1-\theta)^{\#(1,2]},$$ where $\#[0,1]$ and $\#(1,2]$ are the numbers of $x_i$ that fall in each interval. Setting the derivative of the log-likelihood to zero, the maximum likelihood estimate satisfies $$ \frac{\#[0,1]}{\hat\theta} - \frac{\#(1,2]}{1-\hat\theta} = 0;$$ so $$\hat \theta = \frac{\#[0,1]}{\#[0,1] + \#(1,2]} = \frac{\#[0,1]}{N}.$$