estimator==

introduction==

## intuition==

$X_1, X_2, \ldots, X_n \sim N(\mu, \sigma^2)$, with $\mu, \sigma^2$ unknown.

$\hat{\mu} = \frac{X_1+\ldots+X_n}{n}$

$\hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^{n} \left( x_i - \overline{x} \right)^2$ OR $\frac{1}{n}\sum_{i=1}^{n} \left( x_i - \overline{x} \right)^2$

If $\mu$ is known, the estimator $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2$ divides by $n$ and is unbiased.
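A quick numerical sketch of these estimators with NumPy (the data and the values of $\mu$, $\sigma$ are assumptions for illustration, not from the note):

```python
import numpy as np

# Simulated normal data; mu and sigma are assumed values for the sketch.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
x = rng.normal(mu, sigma, size=1000)

mu_hat = x.mean()                     # sample mean (X_1 + ... + X_n) / n
s2_unbiased = x.var(ddof=1)           # divide by n - 1 (mu unknown)
s2_known_mu = np.mean((x - mu) ** 2)  # divide by n (mu known)

print(mu_hat, s2_unbiased, s2_known_mu)
```

With 1000 samples all three land close to the true values $\mu = 2$ and $\sigma^2 = 9$.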

To choose between two unbiased estimators, prefer the one with lower variance.

Efficiency := Lower variance

Relative Efficiency := $\frac{\mathrm{Var}(X)}{\mathrm{Var}(Y)}$

Mean_square_error := $\mathbb{E}[(\hat{\theta} - \theta)^2] = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^2$
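The decomposition can be checked by Monte Carlo (a sketch assuming normal data; `n`, `n_rep`, and the biased $1/n$ variance estimator as the example $\hat{\theta}$ are choices made for illustration):

```python
import numpy as np

# Monte Carlo check of MSE = Var + Bias^2 for the biased (1/n)
# variance estimator; mu, sigma, n, n_rep are assumed values.
rng = np.random.default_rng(1)
mu, sigma, n, n_rep = 0.0, 2.0, 10, 200_000
samples = rng.normal(mu, sigma, size=(n_rep, n))

theta = sigma ** 2                 # true parameter sigma^2
est = samples.var(axis=1, ddof=0)  # one estimate per repeated sample

mse = np.mean((est - theta) ** 2)  # E[(theta_hat - theta)^2]
var = est.var()
bias = est.mean() - theta          # true bias is -sigma^2 / n

print(mse, var + bias ** 2)        # the two agree (up to floating point)
```

With the Monte Carlo expectations taken as sample averages, the identity holds exactly up to floating-point error, and the empirical bias sits near $-\sigma^2/n$.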

An estimator with smaller MSE is better.

## rigour==

{{file:../figures/screenshot_20211216_103835.png}}

## exam clinic==

## examples and non-examples==

## resources==

tags :math:

$X_1, \ldots, X_n \sim N(\mu, \sigma^2)$

$\hat{\mu} = \frac{X_1+\ldots+X_n}{n} \sim N\left(\mu, \frac{\sigma^2}{n}\right)$

$\mathbb{P}(\left| \hat{\mu} - \mu \right| > \epsilon) = \mathbb{P}\left(\left|\frac{\sigma}{\sqrt{n}} Z\right| > \epsilon\right) = \mathbb{P}\left(Z > \frac{\sqrt{n}\epsilon}{\sigma}\right) + \mathbb{P}\left(Z < \frac{-\sqrt{n}\epsilon}{\sigma}\right) = 2\Phi\left(\frac{-\epsilon\sqrt{n}}{\sigma}\right) \to 0$ as n → ∞, where $Z \sim N(0,1)$ and $\Phi$ is the standard normal CDF.

This probability therefore becomes arbitrarily small as $n$ grows.
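A simulation sketch of this shrinking probability (the values of $\epsilon$, $\sigma$, and the sample sizes are assumptions):

```python
import numpy as np

# Estimate P(|mu_hat - mu| > eps) for growing n by repeated sampling.
rng = np.random.default_rng(2)
mu, sigma, eps, n_rep = 0.0, 1.0, 0.1, 20_000

probs = []
for n in (10, 100, 1000):
    mu_hat = rng.normal(mu, sigma, size=(n_rep, n)).mean(axis=1)
    probs.append(np.mean(np.abs(mu_hat - mu) > eps))

print(probs)  # decreasing toward 0, tracking 2 * Phi(-eps * sqrt(n) / sigma)
```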

convergence_in_probability := The sequence of random variables $X_n$ converges in probability to a random variable $X$ (or a constant) if for every $\epsilon > 0$, $\mathbb{P}(|X_n - X| > \epsilon) \to 0$ as n → ∞. E.g. $\hat{\mu}_n \to \mu$ in probability

{{file:../figures/screenshot_20211026_092149.png}}

consistent := If a sequence of estimators $\hat{\theta}_n$ of some parameter θ converges to θ in probability, we say that $\hat{\theta}_n$ is consistent: $\mathbb{P}(|\hat{\theta}_n - \theta| > \epsilon) \to 0$ as n → ∞ for any ϵ > 0

Example of a consistent estimator :todo:

$X_1, \ldots, X_n \sim \mathrm{Bernoulli}(p)$, $\mathbb{P}(\text{heads}) = p \in (0, 1)$
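The sample proportion $\hat{p}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$ is a consistent estimator of $p$; a simulation sketch (the values of $p$ and $\epsilon$ are assumptions):

```python
import numpy as np

# P(|p_hat - p| > eps) shrinks as n grows, so p_hat is consistent.
rng = np.random.default_rng(3)
p, eps, n_rep = 0.3, 0.05, 20_000

probs = []
for n in (10, 100, 1000):
    p_hat = rng.binomial(1, p, size=(n_rep, n)).mean(axis=1)
    probs.append(np.mean(np.abs(p_hat - p) > eps))

print(probs)  # decreasing toward 0
```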
