Rayleigh sigma estimate
Estimating parameter \(\sigma\) for the Rayleigh distribution

Let \(X, Y\) be uncorrelated, jointly normally distributed Cartesian coordinates with means \(\mu_{X}, \mu_{Y}\), and equal variances \(\sigma_{X}^{2} = \sigma_{Y}^{2}\). The radius of an \((X,Y)\)-coordinate pair around the true mean is: \[ R := \sqrt{(X-\mu_{X})^{2} + (Y-\mu_{Y})^{2}} \]

With \(N\) observations of \((x,y)\)-coordinates, \(\text{SSR}\) is the sum of squared radii: \[ \begin{array}{rcl} \text{SSR} &:=& \sum\limits_{i=1}^{N} r_{i}^{2} = \sum\limits_{i=1}^{N} ((x_{i}-\mu_{X})^{2} + (y_{i}-\mu_{Y})^{2})\\ &=& \sum\limits_{i=1}^{N} (x_{i}-\mu_{X})^{2} + \sum\limits_{i=1}^{N} (y_{i}-\mu_{Y})^{2} \end{array} \]

The Maximum-Likelihood-estimate of the variance \(\sigma^{2}\) is the total variance of all \(2N\) separate \(x\)- and \(y\)-coordinates: \[ \widehat{\sigma^{2}_{ML}} = \frac{1}{2N} \cdot \text{SSR} \]
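As a minimal numerical sketch of these definitions (the coordinates and the true center at the origin are made up for illustration, not taken from the article):

```python
import numpy as np

# Hypothetical (x, y) impact coordinates; the true center is assumed known at (0, 0).
x = np.array([0.3, -1.1, 0.8, 0.2, -0.6])
y = np.array([-0.4, 0.5, 1.2, -0.9, 0.1])
mu_x, mu_y = 0.0, 0.0
N = len(x)

# Radii about the true center and the sum of squared radii (SSR).
r = np.sqrt((x - mu_x) ** 2 + (y - mu_y) ** 2)
ssr = np.sum((x - mu_x) ** 2 + (y - mu_y) ** 2)

# Maximum-Likelihood estimate of sigma^2: SSR pooled over all 2N coordinates.
sigma2_ml = ssr / (2 * N)
print(r, ssr, sigma2_ml)
```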

Unknown true center

Deriving Singh's (1992) \(C_{2}\) estimator

When the \((\mu_{X}, \mu_{Y})\)-center is estimated by \((\bar{x}, \bar{y})\), the radii in \(\text{SSR}\) are taken about \((\bar{x}, \bar{y})\), and the Bessel correction is required to make the ML-estimate of the variance unbiased: \[ \widehat{\sigma^{2}_{ub}} = \frac{N}{N-1} \cdot \frac{1}{2N} \cdot \text{SSR} = \frac{1}{2(N-1)} \cdot \text{SSR} \]
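A corresponding sketch for the estimated-center case, again with made-up coordinates (the data and variable names are illustrative only):

```python
import numpy as np

# Hypothetical (x, y) impact coordinates; the true center is treated as unknown.
x = np.array([0.3, -1.1, 0.8, 0.2, -0.6])
y = np.array([-0.4, 0.5, 1.2, -0.9, 0.1])
N = len(x)

# Center estimated by the sample means.
xbar, ybar = x.mean(), y.mean()

# Sum of squared radii about the estimated center.
ssr = np.sum((x - xbar) ** 2 + (y - ybar) ** 2)

# Bessel-corrected (unbiased) variance estimate: SSR / (2(N - 1)).
sigma2_ub = ssr / (2 * (N - 1))
print(sigma2_ub)
```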

However, taking the square root of the unbiased variance estimate yields a biased estimate of \(\sigma\) (Jensen's inequality). Specifically, since the square root is concave, the bias is negative and \(\sqrt{\widehat{\sigma^{2}_{ub}}}\) underestimates \(\sigma\). Another correction factor is needed to counter this effect. To this end, let \(Q\) be a \(\chi\)-distributed variable with \(k\) degrees of freedom. Then its mean is: \[ E(Q) = \sqrt{2} \cdot \frac{\Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{k}{2}\right)} \]
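To see the negative bias numerically, here is a small Monte Carlo sketch; the group size, number of replications, and true \(\sigma = 1\) are arbitrary choices for illustration. On average, \(\sqrt{\widehat{\sigma^{2}_{ub}}}\) comes out below the true \(\sigma\):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 1.0   # true sigma of the underlying normal coordinates
N = 5              # shots per group
reps = 200_000     # number of simulated groups

x = rng.normal(0.0, sigma_true, size=(reps, N))
y = rng.normal(0.0, sigma_true, size=(reps, N))

# SSR about each group's estimated center, then the Bessel-corrected variance.
ssr = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2
             + (y - y.mean(axis=1, keepdims=True)) ** 2, axis=1)
sigma2_ub = ssr / (2 * (N - 1))

# The variance estimate averages close to sigma_true**2,
# but the average of its square root falls short of sigma_true.
print(sigma2_ub.mean())           # ~ 1.0
print(np.sqrt(sigma2_ub).mean())  # noticeably below 1.0
```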

Let \(X\) be a normally distributed random variable observed \(n\) times, and let \(s^{2} := \frac{1}{n-1} \sum\limits_{i=1}^{n}(x_{i}-\bar{x})^{2}\) be the Bessel-corrected variance estimate, so that \(E(s^{2}) = \sigma^{2}\). Then \(c_{4}(n)\) is the correction factor such that \(E(s) = c_{4}(n) \cdot \sigma\). With \(Q\) as given above, \(c_{4}(n) = \frac{1}{\sqrt{k}} \cdot E(Q)\) with \(k := n-1\): \[ c_{4}(n) = \sqrt{\frac{2}{n-1}} \cdot \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)} \]
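A small helper for \(c_{4}(n)\), written with log-gammas for numerical stability; the function name is my own choice, and nothing here is from the article beyond the formula itself:

```python
from math import exp, lgamma, log

def c4(n: int) -> float:
    """Correction factor c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return exp(0.5 * (log(2.0) - log(n - 1.0)) + lgamma(n / 2.0) - lgamma((n - 1.0) / 2.0))

# Example: c4 approaches 1 as n grows.
print(c4(5), c4(10), c4(100))
```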

Estimating the center costs 2 degrees of freedom, so only \(2N-2\) remain. We therefore set \(n := 2N-1\), so that \(c_{4}(n)\) is the scaled mean of a \(\chi\)-distributed variable with \(k = n-1 = 2N-1-1 = 2N-2\) degrees of freedom. This gives: \[ \begin{array}{rcl} \widehat{\sigma_{ub}} &=& \frac{1}{c_{4}(2N-1)} \cdot \sqrt{\widehat{\sigma^{2}_{ub}}} \\ &=& \frac{1}{\sqrt{\frac{2}{2N-1-1}} \cdot \frac{\Gamma\left(\frac{2N-1}{2}\right)}{\Gamma\left(\frac{2N-1-1}{2}\right)}} \cdot \sqrt{\frac{N}{N-1} \cdot \frac{1}{2N} \cdot \text{SSR}} \\ &=& \sqrt{\frac{2(N-1)}{2}} \cdot \frac{\Gamma\left(\frac{2(N-1)}{2}\right)}{\Gamma\left(\frac{2N-1}{2}\right)} \cdot \sqrt{\frac{N}{N-1} \cdot \frac{1}{2N} \cdot \text{SSR}} \\ &=& \sqrt{N} \cdot \frac{\Gamma(N-1)}{\Gamma\left(\frac{2N-1}{2}\right)} \cdot \sqrt{\frac{1}{2N} \cdot \text{SSR}} \\ &=& \frac{\Gamma(N-1)}{\Gamma\left(N - \frac{1}{2}\right)} \cdot \sqrt{\frac{1}{2} \cdot \text{SSR}} \end{array} \]

The penultimate expression is Singh's 1992 definition 3.3 for estimator \(C_{2}\), also given by Moranda (1959).
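As a sketch of this estimator in code, the simplified last form \(\frac{\Gamma(N-1)}{\Gamma\left(N-\frac{1}{2}\right)} \cdot \sqrt{\frac{1}{2}\text{SSR}}\) can be evaluated directly with log-gammas; the function name and example coordinates below are made up for illustration:

```python
import numpy as np
from math import exp, lgamma

def rayleigh_sigma_c2(x, y):
    """Singh's C2-type estimate of sigma when the true center is unknown:
    Gamma(N-1) / Gamma(N - 1/2) * sqrt(SSR / 2), with SSR taken about the sample means."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    N = len(x)
    ssr = np.sum((x - x.mean()) ** 2 + (y - y.mean()) ** 2)
    correction = exp(lgamma(N - 1.0) - lgamma(N - 0.5))
    return correction * np.sqrt(ssr / 2.0)

# Example with hypothetical coordinates (N >= 2 required).
print(rayleigh_sigma_c2([0.3, -1.1, 0.8, 0.2, -0.6],
                        [-0.4, 0.5, 1.2, -0.9, 0.1]))
```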

Known true center

Deriving Singh's (1992) \(C_{1}\) estimator

When the \((\mu_{X}, \mu_{Y})\)-center is known, the Bessel correction is not required to make the Maximum-Likelihood-estimate of the variance unbiased. In this case we simply set \(n := 2N+1\) for the \(c_{4}(n)\) correction factor so that \(c_{4}(n)\) is the scaled mean of a \(\chi\)-distributed variable with \(k = n-1 = 2N+1-1 = 2N\) degrees of freedom. This gives: \[ \begin{array}{rcl} \widehat{\sigma_{ub}} &=& \frac{1}{c_{4}(2N+1)} \cdot \sqrt{\widehat{\sigma^{2}_{ub}}} \\ &=& \frac{1}{\sqrt{\frac{2}{2N+1-1}} \cdot \frac{\Gamma\left(\frac{2N+1}{2}\right)}{\Gamma\left(\frac{2N+1-1}{2}\right)}} \cdot \sqrt{\frac{1}{2N} \cdot \text{SSR}} \\ &=& \sqrt{N} \cdot \frac{\Gamma(N)}{\Gamma\left(\frac{2N+1}{2}\right)} \cdot \sqrt{\frac{1}{2N} \cdot \text{SSR}} \\ &=& \frac{\Gamma(N)}{\Gamma\left(N + \frac{1}{2}\right)} \cdot \sqrt{\frac{1}{2} \cdot \text{SSR}} \end{array} \]

The penultimate expression is Singh's 1992 definition 2.2 for estimator \(C_{1}\).
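The analogous sketch for the known-center case evaluates \(\frac{\Gamma(N)}{\Gamma\left(N+\frac{1}{2}\right)} \cdot \sqrt{\frac{1}{2}\text{SSR}}\); again, the function name and the assumed center are illustrative:

```python
import numpy as np
from math import exp, lgamma

def rayleigh_sigma_c1(x, y, mu_x=0.0, mu_y=0.0):
    """Singh's C1-type estimate of sigma when the true center (mu_x, mu_y) is known:
    Gamma(N) / Gamma(N + 1/2) * sqrt(SSR / 2), with SSR taken about the known center."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    N = len(x)
    ssr = np.sum((x - mu_x) ** 2 + (y - mu_y) ** 2)
    correction = exp(lgamma(N) - lgamma(N + 0.5))
    return correction * np.sqrt(ssr / 2.0)

# Example with hypothetical coordinates and a known center at the origin.
print(rayleigh_sigma_c1([0.3, -1.1, 0.8, 0.2, -0.6],
                        [-0.4, 0.5, 1.2, -0.9, 0.1]))
```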