STAT111 2019-02-19
Zhu, Justin

Tue, Feb 19, 2019


Prove the result known as the Cramér-Rao lower bound.

If $\hat{\theta}$ is unbiased for $\theta$, then $Var(\hat{\theta}) \geq \frac{1}{n I_1(\theta^*)}$
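A minimal simulation sketch of the bound, assuming i.i.d. $N(\theta, 1)$ data (an illustrative model choice): there the sample mean is an unbiased MLE, $I_1(\theta) = 1$, and the bound is exactly $1/n$.

```python
import numpy as np

# Check the Cramér-Rao bound for i.i.d. N(theta, 1) data, where the
# sample mean is an unbiased MLE, I_1(theta) = 1, and the bound is 1/n.
rng = np.random.default_rng(0)
theta_star, n, reps = 2.0, 50, 100_000

samples = rng.normal(theta_star, 1.0, size=(reps, n))
theta_hat = samples.mean(axis=1)  # MLE = sample mean

print("empirical Var(theta_hat):", theta_hat.var())
print("Cramér-Rao bound 1/(n * I_1):", 1 / n)  # the two should match closely
```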

Usually, in statistics we don’t have big conjectures: every problem is open-ended, whereas in math solid proofs govern how people think about the numbers. There are different ways to measure which estimator is better.

Schrödinger’s cat: we don’t know the real world. Did we run the simulation long enough?

The result universally tells us that no unbiased estimator can do better than the Cramér-Rao bound.

Asymptotically, you’ll never beat the MLE: no other estimator achieves lower variance.

This doesn’t mean we can do better than the MLE: as $n$ goes to infinity, we’ll never beat it. Now let’s prove this:

The fact that correlation is between $-1$ and $1$ is a variation of the Cauchy-Schwarz inequality.

Proof via Correlation

$$|Cov(X,Y)| \leq SD(X)\, SD(Y)$$

We look at the covariance between the estimator and the score: $$Cov(\hat{\theta}(y), S(\theta^*, y))$$

Since the score has mean zero, this is equal to the expected product $$E(\hat{\theta}(y)S(\theta^*,y))$$

Correlation is between $-1$ and $1$, and the variance of the score is the Fisher information $n I_1(\theta^*)$, so the standard deviation of the score is the square root of the Fisher information. We still need to compute the covariance.

By LOTUS, the expected product is the integral $\int \hat{\theta}(y) S(\theta^*, y) f_{\theta^*}(y)\, dy$, where the score $S(\theta, y) = \frac{\partial}{\partial \theta} \log f_{\theta}(y)$ comes from the log-likelihood function.
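Putting the pieces together, a sketch of the full chain (assuming the usual regularity conditions that let us exchange derivative and integral). Since $S(\theta, y) f_{\theta}(y) = \frac{\partial}{\partial \theta} f_{\theta}(y)$ and $\hat{\theta}$ is unbiased,

$$Cov(\hat{\theta}(y), S(\theta^*, y)) = \int \hat{\theta}(y) \frac{\partial}{\partial \theta} f_{\theta}(y) \Big|_{\theta = \theta^*} dy = \frac{\partial}{\partial \theta} E_{\theta}(\hat{\theta}(y)) \Big|_{\theta = \theta^*} = \frac{\partial}{\partial \theta} \theta = 1$$

Then, since correlation is between $-1$ and $1$,

$$1 = Cov(\hat{\theta}, S)^2 \leq Var(\hat{\theta})\, Var(S) = Var(\hat{\theta})\, n I_1(\theta^*)$$

which rearranges to $Var(\hat{\theta}) \geq \frac{1}{n I_1(\theta^*)}$.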

Coverage Probability

Put a prior distribution on $\theta$, and then we get Bayesian and frequentist interpretations of an interval. In the Bayesian view we condition on the data, treating $\theta$ as random; in the frequentist view the data are random but $\theta$ is fixed.

Start with a prior, get the data, condition, and obtain the posterior distribution. We can plot that posterior distribution and keep the middle 95 percent: the total area under the curve is one, so the interval between the 2.5th and 97.5th posterior percentiles contains $\theta$ with 95 percent posterior probability.
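A minimal sketch of that middle-95-percent computation, assuming Binomial data with a Uniform(0, 1) prior so the posterior is Beta (an illustrative model choice):

```python
from scipy import stats

# Assumed model: Binomial data with a Uniform(0, 1) prior on theta
# gives a Beta(successes + 1, failures + 1) posterior.
successes, failures = 30, 70
posterior = stats.beta(successes + 1, failures + 1)

# Keep the middle 95 percent of the posterior area: the interval
# between the 2.5th and 97.5th posterior percentiles.
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```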

There are several ways to build confidence intervals: asymptotic intervals, bootstrap intervals, and intervals based on the asymptotic distribution of the MLE.
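As a sketch of the MLE-based approach, assume i.i.d. Poisson($\theta$) data (an illustrative choice), where the MLE is the sample mean and $I_1(\theta) = 1/\theta$, so the asymptotic 95 percent interval is $\hat{\theta} \pm 1.96 / \sqrt{n I_1(\hat{\theta})}$:

```python
import numpy as np

# Assumed model: i.i.d. Poisson(theta) data. The MLE is the sample
# mean and the Fisher information is I_1(theta) = 1/theta, so the
# asymptotic 95% interval is theta_hat +/- 1.96 * sqrt(theta_hat / n).
rng = np.random.default_rng(1)
y = rng.poisson(lam=4.0, size=200)

n = len(y)
theta_hat = y.mean()          # MLE
se = np.sqrt(theta_hat / n)   # = 1 / sqrt(n * I_1(theta_hat))
print(f"asymptotic 95% CI: ({theta_hat - 1.96*se:.3f}, {theta_hat + 1.96*se:.3f})")
```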

For the bootstrap, we start with some dataset and wish we had more replications. Resampling with replacement gives us as many replications as we want, and from them we can compute the distribution of our estimator.
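A minimal bootstrap sketch, assuming we resample the observed data with replacement and track the sample mean (the dataset here is a stand-in):

```python
import numpy as np

# Bootstrap: resample the observed data with replacement to get as
# many "replications" as we want, then look at the distribution of
# the estimator across resamples.
rng = np.random.default_rng(2)
data = rng.exponential(scale=3.0, size=100)  # stand-in observed dataset

B = 10_000
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(B)
])

# Percentile bootstrap 95% interval for the mean.
print("bootstrap 95% CI:", np.percentile(boot_means, [2.5, 97.5]))
```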