Difference between revisions of "Talk:Home"

From ShotStat

Revision as of 19:10, 15 May 2015

Herb, 4/19/2015

RE: "Extreme Spread is not only a statistically inefficient measure but also one frequently and easily abused."

The most frequent abuse of extreme spread is chasing the "best group size" (the smallest group). The smallest group size is absolutely meaningless; the valid estimator is the average group size. If you want a smaller "best" group, just shoot more groups. Sooner or later you'll get lucky and shoot an even smaller group by pure luck.
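
A quick Monte Carlo sketch of that point (the Rayleigh sigma, 5-shot groups, and trial count below are arbitrary assumptions, not anything from the discussion): the average extreme spread stays essentially constant no matter how many groups are shot, while the smallest group keeps shrinking simply because more groups were fired.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 1.0   # assumed Rayleigh sigma (arbitrary units)
SHOTS = 5     # shots per group

def extreme_spread(group):
    """Largest center-to-center distance within one group of shots."""
    d = group[:, None, :] - group[None, :, :]
    return np.sqrt((d ** 2).sum(axis=-1)).max()

def average_and_best(n_groups, trials=1000):
    """Mean of the average ES and of the smallest ES over many simulated outings."""
    avg, best = [], []
    for _ in range(trials):
        spreads = [extreme_spread(rng.normal(0.0, SIGMA, size=(SHOTS, 2)))
                   for _ in range(n_groups)]
        avg.append(np.mean(spreads))
        best.append(min(spreads))
    return np.mean(avg), np.mean(best)

for n in (1, 5, 20, 100):
    a, b = average_and_best(n)
    print(f"{n:3d} groups: average ES = {a:.2f}, smallest ES = {b:.2f}")
```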


Herb 5/11/2015

Danielson's 2-shot method is very inefficient. Assuming that the horizontal and vertical deflections are both Gaussian with equal standard deviations and zero correlation, the gold standard is the radial standard deviation, which is 100% efficient. In his 2-shot method Danielson analyzed two different brands of ammo: he used 24 shots of each type but got only 12 measurements per type. Combining all 24 shots for each type and analyzing them with the Rayleigh model would be 100% efficient.
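
For reference, a minimal sketch of what fitting the Rayleigh model to all 24 shots could look like, assuming the group center is estimated from the sample itself and the sum of squared radii is divided by 2(n-1); the exact normalization and any small-sample correction factor for sigma are choices made here, not taken from Danielson.

```python
import numpy as np

def rayleigh_sigma(shots):
    """Rayleigh sigma estimate from an (n, 2) array of impact coordinates.

    Radii are measured from the sample center; sum(r^2) / (2 * (n - 1)) is
    used so that estimating the center does not bias sigma^2 downward.
    """
    shots = np.asarray(shots, dtype=float)
    center = shots.mean(axis=0)
    r2 = ((shots - center) ** 2).sum(axis=1)
    return np.sqrt(r2.sum() / (2.0 * (len(shots) - 1)))

# Example: 24 simulated shots with true sigma = 1
rng = np.random.default_rng(2)
print(rayleigh_sigma(rng.normal(0.0, 1.0, size=(24, 2))))
```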

I believe the example worked out in the DanielsonExample.xlsx spreadsheet shows how 2-shot samples can be transformed to provide an efficient sample set for the Rayleigh model. The only trick to note is that each pair represents two observations, not just one. Thus from 24 shots we have 24 radius measurements (though only 12 are unique), and this allows us to compute the Rayleigh MLE for a 24-shot sample. If there is an error in that math or example please note it. David (talk) 13:00, 15 May 2015 (EDT)
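
If I read the description above correctly, the pair transformation amounts to something like the following sketch. This is my reading of it, not the spreadsheet's exact formulas: each pair's midpoint serves as its center, each shot then lies at a radius of half the pair's shot-to-shot distance, and a Rayleigh estimate is formed from those radii. The denominator correction used here is an assumption on my part.

```python
import numpy as np

def paired_rayleigh_sigma(pairs):
    """Rayleigh sigma from 2-shot groups via the transformation described above.

    `pairs` is an (m, 2, 2) array: m pairs, 2 shots each, (x, y) coordinates.
    Each pair contributes two radii (both equal to half the shot-to-shot
    distance), for 2*m radius observations in total.
    """
    pairs = np.asarray(pairs, dtype=float)
    half_dist = 0.5 * np.sqrt(((pairs[:, 0] - pairs[:, 1]) ** 2).sum(axis=1))
    radii = np.repeat(half_dist, 2)      # two (identical) radii per pair
    m, n = len(pairs), 2 * len(pairs)
    # sum(r^2) / (2 * (n - m)): each pair's own midpoint served as its center,
    # which costs one (x, y) mean per pair, hence n - m in the denominator.
    # Whether the spreadsheet uses this exact normalization is not known here;
    # without some such correction the estimate comes out low.
    return np.sqrt((radii ** 2).sum() / (2.0 * (n - m)))

# Example: 12 pairs (24 shots) with true sigma = 1
rng = np.random.default_rng(3)
print(paired_rayleigh_sigma(rng.normal(0.0, 1.0, size=(12, 2, 2))))
```
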
The problem is that for 24 shots the Rayleigh method would use one average position for the horizontal and vertical deflection. By just cutting the difference between pairs of shots in half, you instead have 12 average positions for the horizontal and vertical deflection. User: Herb 15 May 2015, 6:24 EST
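
To make the disagreement concrete, here is a rough simulation comparing the scatter of the two sigma estimates sketched above over many simulated 24-shot samples: one measuring radii about a single estimated group center, the other about 12 estimated pair midpoints. The estimator definitions are the same assumed sketches as above, so this illustrates the degrees-of-freedom point rather than the spreadsheet's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
SIGMA, N_SHOTS, N_PAIRS, TRIALS = 1.0, 24, 12, 5000

full, paired = [], []
for _ in range(TRIALS):
    shots = rng.normal(0.0, SIGMA, size=(N_SHOTS, 2))

    # Full-sample estimate: radii about the single estimated group center.
    r2 = ((shots - shots.mean(axis=0)) ** 2).sum(axis=1)
    full.append(np.sqrt(r2.sum() / (2.0 * (N_SHOTS - 1))))

    # Pair-based estimate: radii about each pair's own midpoint,
    # i.e. half the shot-to-shot distance, with 12 midpoints estimated.
    p = shots.reshape(N_PAIRS, 2, 2)
    half2 = 0.25 * ((p[:, 0] - p[:, 1]) ** 2).sum(axis=1)   # (d/2)^2 per pair
    paired.append(np.sqrt(2.0 * half2.sum() / (2.0 * N_PAIRS)))

for name, est in (("full sample", full), ("pair-based ", paired)):
    est = np.asarray(est)
    print(f"{name}: mean sigma-hat = {est.mean():.3f}, spread (SD) = {est.std():.3f}")
```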

The "best" number of shots per group depends on the % of flyers. No flyers, 5-7 shots are about the same and are "best". A high % of flyers would mean that lower number of shots per group would be better.

Can you describe a statistically unbiased method of identifying flyers? David (talk) 13:00, 15 May 2015 (EDT)
You can't describe a distribution for flyers, since the distribution function for flyers and its parameters are unknown. All that can be done is to "trim" the data based on the analytical model being used. So essentially you'd set a clip level and throw out the "worst" (largest) 5% of the measurements based on simulation.

There are probably multiple ways to trim the data too. For instance, think of group size for 5-shot groups. You could set clip levels at the highest and lowest 2.5% levels based on simulation. You could also compare the 5-shot group size to the 4-shot group size for every group. So let's assume that an "average" 5-shot group is 1 inch. I have a 5-shot group which is 6 inches but has 4 shots that are in a group of 0.75 inches. The 5-shot to 4-shot ratio would put this group well beyond the clip level of the largest 5% of simulated groups, and thus the 5-shot group could be viewed as "abnormal".

That is the rub with statistics. I can calculate the exact probability of flipping a penny and getting a hundred tails in a row even though such a result is practically impossible. So if someone did flip a hundred tails in a row, you would have to be very skeptical that the flips were fair. You can't use statistics to "prove" that the flips were unfair; you can only infer that the result was highly unusual at some confidence level. User: Herb 15 May 2015, 6:24 EST
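
A sketch of the "clip level from simulation" idea for the 5-shot versus 4-shot comparison. The 5% level, the circular Gaussian (no-flyer) null model, and the definition of the 4-shot size as the best four of the five shots are all assumptions made for illustration here.

```python
import numpy as np

rng = np.random.default_rng(5)

def extreme_spread(group):
    """Largest center-to-center distance within a group of shots."""
    d = group[:, None, :] - group[None, :, :]
    return np.sqrt((d ** 2).sum(axis=-1)).max()

def ratio_5_to_best4(group):
    """5-shot extreme spread divided by the smallest 4-shot extreme spread
    obtained by dropping one shot (i.e. dropping the suspected flyer)."""
    es5 = extreme_spread(group)
    es4 = min(extreme_spread(np.delete(group, i, axis=0)) for i in range(5))
    return es5 / es4

# Null model: no flyers, circular Gaussian impacts.
ratios = np.array([ratio_5_to_best4(rng.normal(0.0, 1.0, size=(5, 2)))
                   for _ in range(10000)])
clip = np.percentile(ratios, 95)
print(f"95% clip level for the 5-shot / 4-shot ratio: {clip:.2f}")

# Herb's example: a 6 inch 5-shot group whose best 4 shots span 0.75 inches.
example_ratio = 6.0 / 0.75
print(f"example ratio {example_ratio:.1f} vs clip {clip:.2f}:",
      "abnormal" if example_ratio > clip else "normal")
```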