Derivation of the Rayleigh Distribution Equation

From ShotStat
 
__TOC__
= Mathematical Formulas and Derivations  =
 
 
 
= Bivariate Normal Distribution =

Starting only with the assumptions that the horizontal and vertical measurements are normally distributed, as notated by:

: <math>h \sim \mathcal{N}(\mu_h,\sigma_h^2), \, \, v \sim \mathcal{N}(\mu_v,\sigma_v^2)</math>

then the horizontal and vertical measures follow the general bivariate normal distribution, which is given by the following equation:

: <math>
     f(h,v) =
       \frac{1}{2 \pi  \sigma_h \sigma_v \sqrt{1-\rho^2}}
       \exp\left(
         -\frac{1}{2(1-\rho^2)}\left[
           \frac{(h-\mu_h)^2}{\sigma_h^2} +
           \frac{(v-\mu_v)^2}{\sigma_v^2} -
           \frac{2\rho(h-\mu_h)(v-\mu_v)}{\sigma_h \sigma_v}
         \right]
       \right)
   </math>
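As a quick numerical sanity check (the parameter values here are illustrative only, not from the article), sampling from this bivariate normal recovers the assumed standard deviations and correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_h, mu_v, sd_h, sd_v, rho = 0.3, -0.2, 1.5, 2.0, 0.5   # illustrative values
cov = np.array([[sd_h**2,           rho * sd_h * sd_v],
                [rho * sd_h * sd_v, sd_v**2          ]])
shots = rng.multivariate_normal([mu_h, mu_v], cov, size=200_000)
h, v = shots[:, 0], shots[:, 1]

# empirical moments match the assumed parameters
assert abs(h.std() - sd_h) < 0.02 and abs(v.std() - sd_v) < 0.02
assert abs(np.corrcoef(h, v)[0, 1] - rho) < 0.01
```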
  
= Correction Factors =
 
The following three correction factors will be used throughout the statistical inference below.
 
 
Note that all of these correction factors are > 1, are significant for very small ''n'', and converge towards 1 as <math>n \to \infty</math>.  Their values are listed for ''n'' up to 100 in [[Media:Sigma1ShotStatistics.ods]].  [[File:SymmetricBivariate.c]] uses Monte Carlo simulation to confirm that their application produces valid corrected estimates.
 
 
== [http://en.wikipedia.org/wiki/Bessel%27s_correction Bessel correction factor] ==
 
The Bessel correction removes bias in sample variance.
 
:&nbsp; <math>c_{B}(n) = \frac{n}{n-1}</math>
 
 
== [http://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation#Results_for_the_normal_distribution Gaussian correction factor] ==
 
The Gaussian correction (sometimes called <math>c_4</math>) removes bias introduced by taking the square root of variance.
 
:&nbsp; <math>\frac{1}{c_{G}(n)} = \sqrt{\frac{2}{n-1}}\,\frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)} \, = \, 1 - \frac{1}{4n} - \frac{7}{32n^2} - \frac{19}{128n^3} + O(n^{-4})</math>
 
 
The third-order approximation is adequate.  The following spreadsheet formula gives a more direct calculation:&nbsp; <math>c_{G}(n)</math> <code>=1/EXP(LN(SQRT(2/(N-1))) + GAMMALN(N/2) - GAMMALN((N-1)/2))</code>
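The same calculation can be sketched outside a spreadsheet. This hypothetical helper (`c_G` is a name chosen here, not from the wiki) mirrors the formula above using `math.lgamma`, and the reciprocal of the quoted series expansion agrees with it closely for moderate ''n'':

```python
from math import exp, lgamma, log, sqrt

def c_G(n: int) -> float:
    """Gaussian correction factor, mirroring the spreadsheet formula above."""
    return 1.0 / exp(log(sqrt(2.0 / (n - 1))) + lgamma(n / 2) - lgamma((n - 1) / 2))

def c_G_approx(n: int) -> float:
    """Reciprocal of the third-order series expansion quoted in the text."""
    return 1.0 / (1 - 1/(4*n) - 7/(32*n**2) - 19/(128*n**3))

# > 1, largest for small n, converging toward 1, as the text states
assert c_G(2) > c_G(10) > c_G(100) > 1.0
assert abs(c_G(10) - c_G_approx(10)) < 1e-4
```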
 
 
== Rayleigh correction factor ==
 
As with the normal distribution, the unbiased estimator for the Rayleigh distribution estimates <math>\sigma^2</math>.  The following factor corrects for the concavity introduced by taking the square root to get ''σ''.
 
:&nbsp; <math>c_{R}(n) = 4^n \sqrt{\frac{n}{\pi}} \frac{n!\,(n-1)!}{(2n)!}</math> <ref>[[Media:Statistical Inference for Rayleigh Distributions - Siddiqui, 1964.pdf|''Statistical Inference for Rayleigh Distributions'', M. M. Siddiqui, 1964, p.1007]]</ref>
 
 
To avoid overflows this is better calculated using log-gammas, as in the following spreadsheet formula: <code>=EXP(LN(SQRT(N/PI())) + N*LN(4) + GAMMALN(N+1) + GAMMALN(N) - GAMMALN(2*N+1))</code>
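A Python equivalent of that spreadsheet formula (a sketch; `c_R` is a hypothetical helper name) confirms the stated behavior: the factor exceeds 1, is largest for small ''n'', and converges toward 1:

```python
from math import exp, lgamma, log, pi, sqrt

def c_R(n: int) -> float:
    """Rayleigh correction factor, computed with log-gammas to avoid overflow."""
    return exp(log(sqrt(n / pi)) + n * log(4)
               + lgamma(n + 1) + lgamma(n) - lgamma(2 * n + 1))

assert abs(c_R(1) - 2 / sqrt(pi)) < 1e-12      # exact value for n = 1
assert c_R(2) > c_R(10) > c_R(100) > 1.0       # > 1, converging toward 1
```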
 
 
 
= The Hoyt Distribution =

== Derivation From the Bivariate Normal distribution ==

A simple translation of the Cartesian coordinate system converts the bivariate normal distribution to the Hoyt distribution. This translation does not affect measurements made about the COI, but it would of course affect measurements made about the POA.

Given a translation to the point <math>(\mu_h, \mu_v)</math>, let:

: <math>h_* =  h - \mu_h, \, \, v_* =  v - \mu_v</math>

Since <math>dh_* = dh</math> (and similarly <math>dv_* = dv</math>), the change of variables has unit Jacobian, so the normalization constant of the density is unchanged. Thus <math>h_*</math> can be substituted for <math>(h - \mu_h)</math> and <math>v_*</math> for <math>(v - \mu_v)</math>. At this point the asterisk subscript is superfluous and will be dropped, giving the Hoyt distribution:

: <math>
     f(h,v) =
       \frac{1}{2 \pi  \sigma_h \sigma_v \sqrt{1-\rho^2}}
       \exp\left(
         -\frac{1}{2(1-\rho^2)}\left[
           \frac{h^2}{\sigma_h^2} +
           \frac{v^2}{\sigma_v^2} -
           \frac{2\rho h v}{\sigma_h \sigma_v}
         \right]
       \right)
   </math>
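A short simulation (illustrative parameters, not part of the original derivation) shows the point about the translation numerically: radii measured about the COI are unchanged, while radii about the POA are not:

```python
import numpy as np

rng = np.random.default_rng(7)
mu_h, mu_v, sd_h, sd_v, rho = 1.0, -0.5, 2.0, 3.0, 0.4   # illustrative values
cov = [[sd_h**2, rho * sd_h * sd_v], [rho * sd_h * sd_v, sd_v**2]]
shots = rng.multivariate_normal([mu_h, mu_v], cov, size=10_000)

def radii_about(points, center):
    return np.linalg.norm(points - center, axis=1)

translated = shots - [mu_h, mu_v]     # translate the POA offset away

# COI-relative radii are identical before and after the translation
assert np.allclose(radii_about(shots, shots.mean(axis=0)),
                   radii_about(translated, translated.mean(axis=0)))
# POA-relative radii (about the origin) do change
assert not np.allclose(radii_about(shots, [0, 0]), radii_about(translated, [0, 0]))
```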
  
= The Rayleigh distribution =

The Rayleigh distribution makes the following simplifying assumptions to the general bivariate normal distribution:

* Horizontal and vertical dispersion are independent.
* <math>\sigma_h = \sigma_v</math> (realistically <math>\sigma_h \approx \sigma_v</math>)
* <math>\rho = 0</math>
* No Fliers

for which the PDF for any shot <math>i</math>, about the point <math>(\mu_h, \mu_v)</math>, is given by:

: <math>PDF(r; \sigma_{\Re}) = \frac{r}{\sigma_{\Re}^2 }
       \exp\left(
         - \frac{r^2}{2\sigma_{\Re}^2}
       \right)
   </math>

where

: <math>\sigma_{\Re} = \sigma_h = \sigma_v, \, \, r = \sqrt{(h_i - \mu_h)^2 + (v_i - \mu_v)^2}</math>
== Proof ==
  
 
Using the assumptions in the first section, the distribution of an individual shot is easily simplified from the bivariate normal distribution, which has the equation:

: <math>
     f(h,v) =
       \frac{1}{2 \pi  \sigma_h \sigma_v \sqrt{1-\rho^2}}
       \exp\left(
         -\frac{1}{2(1-\rho^2)}\left[
           \frac{(h-\mu_h)^2}{\sigma_h^2} +
           \frac{(v-\mu_v)^2}{\sigma_v^2} -
           \frac{2\rho(h-\mu_h)(v-\mu_v)}{\sigma_h \sigma_v}
         \right]
       \right)
   </math>

By substituting <math>\rho = 0</math> the equation reduces to:

: <math>
     f(h,v) =
       \frac{1}{2 \pi  \sigma_h \sigma_v }
       \exp\left(
         -\frac{1}{2}\left[
           \frac{(h-\mu_h)^2}{\sigma_h^2} +
           \frac{(v-\mu_v)^2}{\sigma_v^2}
         \right]
       \right)
   </math>

Since <math>\sigma_h</math> and <math>\sigma_v</math> are equal, substitute <math>\sigma</math> for each, then collect terms in the exponential, after which the equation reduces to:

: <math>
     f(h,v) =
       \frac{1}{2 \pi  \sigma^2 }
       \exp\left(
         - \frac{(h-\mu_h)^2 + (v-\mu_v)^2}{2\sigma^2}
       \right)
   </math>

Letting <math>r^2 = (h-\mu_h)^2 + (v-\mu_v)^2</math> the equation becomes:

: <math>
     f(h,v) =
       \frac{1}{2 \pi  \sigma^2 }
       \exp\left(
         - \frac{r^2}{2\sigma^2}
       \right)
   </math>

Now transform to the polar coordinate system. The area element transforms as <math>dh\,dv = r\,dr\,d\theta</math>, so the joint density of <math>(r, \theta)</math> is <math>r\,f(h,v)</math>. Since this density does not depend on <math>\theta</math>, integrating <math>\theta</math> over <math>[0, 2\pi)</math> contributes a factor of <math>2\pi</math>, and finally:

: <math>
     f(r) =
       \frac{r}{\sigma^2 }
       \exp\left(
         - \frac{r^2}{2\sigma^2}
       \right)
   </math>
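As a numeric sanity check of the final step (a sketch, not part of the proof): the derived <math>f(r)</math> should integrate to 1 over <math>[0, \infty)</math>, and simple trapezoidal integration confirms this for an arbitrary σ:

```python
import math

sigma = 1.7                                    # arbitrary test value
f = lambda r: (r / sigma**2) * math.exp(-r**2 / (2 * sigma**2))

# trapezoidal rule; f(r) is negligible beyond 10*sigma
a, b, steps = 0.0, 10 * sigma, 100_000
hstep = (b - a) / steps
total = 0.5 * hstep * (f(a) + f(b)) + hstep * sum(f(a + i * hstep) for i in range(1, steps))
assert abs(total - 1.0) < 1e-6
```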
  
=== Accuracy of <math>n</math> Sighting Shots ===

Given the assumptions in the starting section we again substitute <math>\sigma</math> for both <math>\sigma_h</math> and <math>\sigma_v</math>. This simplifies the distributions of <math>h</math> and <math>v</math> to:

: <math>h \sim \mathcal{N}(\mu_h,\sigma^2), \, \, v \sim \mathcal{N}(\mu_v,\sigma^2)</math>

Now take some number <math>n</math> of shots <math>(n \geq 1)</math> and calculate their center <math>(\bar{h}, \bar{v})</math>. The coordinates of the center are normally distributed as well:

: <math>\bar{h} \sim \mathcal{N}(\mu_h,\sigma^2/n), \, \, \bar{v} \sim \mathcal{N}(\mu_v,\sigma^2/n)</math>

Let <math>r_n</math> be the distance of this sample center <math>(\bar{h}, \bar{v})</math> from the true distribution center <math>(\mu_h, \mu_v)</math>:

: <math>r_n = \sqrt{(\bar{h}-\mu_h)^2 + (\bar{v}-\mu_v)^2}</math>

Define random variables <math>Z_h</math> and <math>Z_v</math> as the squared ''Studentized'' horizontal and vertical errors, dividing by the respective standard deviations. Each of these variables has a chi-squared distribution with one degree of freedom:

: <math>Z_h = \left(\frac{\bar{h}-\mu_h}{\sigma/\sqrt n}\right)^2 = \frac n{\sigma^2}(\bar{h}-\mu_h)^2 \sim \chi^2(1)</math>

: <math>Z_v = \left(\frac{\bar{v}-\mu_v}{\sigma/\sqrt n}\right)^2 = \frac n{\sigma^2}(\bar{v}-\mu_v)^2 \sim \chi^2(1)</math>

Define the random variable <math>W</math>, which has a chi-squared distribution with two degrees of freedom:

: <math>W = Z_h + Z_v = \frac n{\sigma^2}\left((\bar{h}-\mu_h)^2+(\bar{v}-\mu_v)^2\right) \sim \chi^2(2)</math>

Rescale <math>W</math> by <math>\frac {\sigma^2}{n}</math> and denote the new variable <math>w_n</math>:

: <math>w_n=\frac {\sigma^2}{n}W</math> and note that <math>w_n=r_n^2</math>

By the properties of a chi-squared random variable, we have:

: <math>w_n \sim \text{Gamma}(k=1, \theta = 2\sigma^2/n) = \text{Exp}(2\sigma^2/n)</math>

so:

: <math>PDF(w_n) = \frac {n}{2\sigma^2}\, \exp\Big\{-\frac {n}{2\sigma^2} w_n\Big\}</math>

But from above <math>r_n = \sqrt{w_n}</math>. By the change-of-variable formula we have:

: <math>w_n = r_n^2 \Rightarrow \frac {dw_n}{dr_n} = 2r_n</math>

and so:

: <math> PDF(r_n) = 2r_n\frac {n}{2\sigma^2}\, \exp\Big\{-\frac {n}{2\sigma^2} r_n^2\Big\} = \frac {r_n}{\alpha^2} \exp\Big\{-\frac {r_n^2}{2\alpha^2}\Big\},\;\;\alpha \equiv \sigma/\sqrt n</math>

So for any number of shots <math>n</math>, the accuracy <math>r_n</math> follows a Rayleigh distribution with parameter <math>\alpha = \sigma / \sqrt{n}</math>, where <math>\sigma</math> is the Rayleigh shape factor for one shot.

'''Thanks to Alecos Papadopoulos for the solution.'''
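A Monte Carlo check of this result (with illustrative values of σ and n): the mean distance of an ''n''-shot center from the true center should match the Rayleigh mean <math>\alpha\sqrt{\pi/2}</math> with <math>\alpha = \sigma/\sqrt{n}</math>:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
sigma, n, trials = 3.0, 5, 100_000                        # illustrative values
hbar = rng.normal(0.0, sigma, (trials, n)).mean(axis=1)   # centers of n-shot groups
vbar = rng.normal(0.0, sigma, (trials, n)).mean(axis=1)   # (true center at origin)
r_n = np.hypot(hbar, vbar)                                # distance of center from truth

alpha = sigma / math.sqrt(n)
# Rayleigh(alpha) mean is alpha * sqrt(pi/2)
assert abs(r_n.mean() - alpha * math.sqrt(math.pi / 2)) < 0.02
```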
 
= Calculate the Precision of Mean Radius of <math>n</math> Shots About COI =
 
 
= Calculate the Cumulative Distribution Function for the Rayleigh distribution =
 
Given the Rayleigh distribution, calculate the Cumulative Distribution Function (CDF) for the Rayleigh distribution.
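The standard result follows by direct integration of the PDF (the integrand has antiderivative <math>-\exp(-t^2/2\sigma_{\Re}^2)</math>):

: <math>CDF(r; \sigma_{\Re}) = \int_0^r \frac{t}{\sigma_{\Re}^2}\exp\left(-\frac{t^2}{2\sigma_{\Re}^2}\right)dt = 1 - \exp\left(-\frac{r^2}{2\sigma_{\Re}^2}\right)</math>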
 
 
= Calculate the Mode of the Rayleigh distribution =
 
Given the Rayleigh distribution, calculate the mode for the Rayleigh distribution.
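The standard result follows by setting the derivative of the PDF to zero:

: <math>\frac{d}{dr}\left[\frac{r}{\sigma_{\Re}^2}\exp\left(-\frac{r^2}{2\sigma_{\Re}^2}\right)\right] = \frac{1}{\sigma_{\Re}^2}\left(1 - \frac{r^2}{\sigma_{\Re}^2}\right)\exp\left(-\frac{r^2}{2\sigma_{\Re}^2}\right) = 0 \quad\Rightarrow\quad \text{mode} = \sigma_{\Re}</math>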
 
 
= Calculate the Median of the Rayleigh distribution =
 
Given the Rayleigh distribution, calculate the median for the Rayleigh distribution.
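The standard result follows by setting the CDF equal to one half:

: <math>1 - \exp\left(-\frac{r^2}{2\sigma_{\Re}^2}\right) = \tfrac{1}{2} \quad\Rightarrow\quad \text{median} = \sigma_{\Re}\sqrt{2\ln 2} \approx 1.1774\,\sigma_{\Re}</math>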
 
 
= Calculate the Mean Radius of the Rayleigh distribution =
 
Given the Rayleigh distribution, calculate the mean radius for the Rayleigh distribution.
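The standard result is the first moment of the Rayleigh distribution:

: <math>\text{mean} = \int_0^\infty r\,\frac{r}{\sigma_{\Re}^2}\exp\left(-\frac{r^2}{2\sigma_{\Re}^2}\right)dr = \sigma_{\Re}\sqrt{\frac{\pi}{2}} \approx 1.2533\,\sigma_{\Re}</math>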
 
 
= Fit a set of experimental data to calculate the best value of <math>\sigma_{\Re}</math> for the Rayleigh distribution =
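One standard approach can be sketched as follows (a sketch, not the wiki's full treatment; the helper names are hypothetical): take the maximum-likelihood estimate of <math>\sigma_{\Re}^2</math> from the radii about a known center, then correct the square root with the Rayleigh correction factor <math>c_{R}(n)</math> from the Correction Factors section:

```python
import random
from math import exp, lgamma, log, pi, sqrt

def c_R(n: int) -> float:
    """Rayleigh sqrt-bias correction factor, via log-gammas."""
    return exp(log(sqrt(n / pi)) + n * log(4)
               + lgamma(n + 1) + lgamma(n) - lgamma(2 * n + 1))

def sigma_hat(radii):
    """Estimate sigma from radii measured about the known center:
    MLE of sigma^2, then a bias-corrected square root."""
    n = len(radii)
    var_hat = sum(r * r for r in radii) / (2 * n)   # MLE for sigma^2
    return c_R(n) * sqrt(var_hat)

# Simulated check with a known sigma (illustrative values)
random.seed(3)
sigma, n_shots, trials = 2.0, 5, 20_000
ests = []
for _ in range(trials):
    # Rayleigh radii via inverse-CDF sampling: r = sigma * sqrt(-2 ln U)
    radii = [sigma * sqrt(-2.0 * log(1.0 - random.random())) for _ in range(n_shots)]
    ests.append(sigma_hat(radii))
assert abs(sum(ests) / trials - sigma) < 0.02    # unbiased on average
```

Without the <math>c_{R}(n)</math> factor the square root would, on average, underestimate σ for small ''n''.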
 

Latest revision as of 18:30, 9 January 2024
