# Talk:300BLK Subsonic 20-shot 100-yard Example

## Further explanation of the velocity data adjustments

The problem here is that we want to know the precision of the shooting system, but our data come from a 100-yard target. At 100 yards, at subsonic velocities, differences in muzzle velocity introduce significant vertical dispersion. So the data set we observe incorporates two distinct sources of dispersion: the gun’s shot dispersion, which is normally distributed, *plus* the *vertical* dispersion caused by differences in bullet velocity. If we could separate those two things out then we could say things like, “This is how accurate my gun would be if every bullet left the barrel at the same speed.”

In this case we do have a second set of data: the chronographed muzzle velocity of each round that hit the target. This should allow us to remove the velocity effect from the precision measurement. Except that I didn’t record which impact on the target was produced by which shot. Ideally, I would take a shot, then run down to the target and mark the muzzle velocity next to the hole it produced before taking the next shot. If I had done that, I could have calculated the ballistic drop for each shot and elevated the data point accordingly.
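To illustrate why the velocity spread matters at this range, here is a minimal sketch of the drop calculation. It uses a crude constant-velocity, no-drag model purely for illustration (a real correction would use a ballistic calculator with the bullet’s BC); the velocities are made-up numbers around a nominal subsonic load.

```python
# Sketch of the per-shot drop correction I could have applied if each
# impact were tagged with its muzzle velocity. Crude no-drag model,
# assuming constant velocity over the whole flight -- illustration only.
G = 386.09          # gravity, inches per second squared
RANGE_IN = 3600.0   # 100 yards in inches

def vacuum_drop(velocity_fps):
    """Approximate bullet drop (inches) at 100 yards, ignoring drag."""
    t = RANGE_IN / (velocity_fps * 12.0)  # time of flight, seconds
    return 0.5 * G * t * t

# A 100 fps spread around a nominal 1050 fps subsonic load:
for v in (1000, 1050, 1100):
    print(f"{v} fps -> {vacuum_drop(v):.1f} in of drop")
```

Even this rough model shows roughly three inches of extra drop between the fastest and slowest rounds in a 100 fps spread, which is why the velocity effect can’t be ignored at 100 yards.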

But since I didn’t, we have to make some assumptions in order to apply the velocity data set. There’s no perfect way to make up for the fact that I can’t attach the specific velocity to each impact. But it’s not unreasonable to assume that the slower shots tended to hit lower than the faster shots. So I use two different assumptions about the way the velocity data attach to the point data to put bounds on the precision measure:

- The Statistical Adjustment based on variances is a lower bound on the inherent precision of this system after controlling for the spread in muzzle velocity. The Rank Adjustment is an upper bound, because it assumes that the vertical point of impact is 100% correlated with muzzle velocity, which is probably not true.

- Nevertheless, the Rank Adjustment is probably close to the true value because it reduces the vertical variance to the same magnitude as the horizontal variance, and also because the resulting sigma is in line with typical precision for light factory rifles.

In the end, I make the empirical observation that the “rank adjustment” is probably close to what I would have found if I had attached the measured velocity to each shot as it was fired.

The “rank adjustment” simply assumes that, yes, the highest hits were made by the fastest bullets and the lowest hits by the slowest bullets, in perfect rank order. It probably isn’t exactly true, but it appears to be pretty close to the truth.
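The rank adjustment can be sketched in a few lines: sort the vertical impact positions, pair them with the sorted velocities, and add back each shot’s predicted extra drop relative to the mean velocity. The drop model is a crude no-drag approximation, and the heights and velocities below are hypothetical numbers, not my target data.

```python
import statistics

G = 386.09          # in/s^2
RANGE_IN = 3600.0   # 100 yards in inches

def vacuum_drop(v_fps):
    """Approximate drop (inches) at 100 yards, ignoring drag."""
    t = RANGE_IN / (v_fps * 12.0)
    return 0.5 * G * t * t

def rank_adjust(heights_in, velocities_fps):
    """Pair sorted impact heights with sorted velocities, then remove
    each shot's predicted drop relative to the mean-velocity drop."""
    mean_drop = vacuum_drop(statistics.mean(velocities_fps))
    pairs = zip(sorted(heights_in), sorted(velocities_fps))
    # Slower shots have more drop, so they get a positive (upward) correction.
    return [h + (vacuum_drop(v) - mean_drop) for h, v in pairs]

heights = [-2.1, -0.4, 0.3, 1.8]  # hypothetical vertical impacts, inches
vels = [1005, 1040, 1060, 1095]   # hypothetical chrono readings, fps

adjusted = rank_adjust(heights, vels)
print(statistics.stdev(heights), statistics.stdev(adjusted))
```

Because the pairing is perfect by construction, this removes the maximum possible amount of vertical spread attributable to velocity, which is what makes it an upper bound on precision.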

The “statistical adjustment” is very conservative and statistically valid: Without knowing which hit corresponds to which velocity, we can at least look at the variance of the velocity data and subtract that effect out of the vertical variance we see in the target. This gives us a true “adjusted” sample variance for this target, but when you only have one sample sometimes it pays to push the data a little harder ;)
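The variance subtraction behind the statistical adjustment can be sketched as follows: estimate how much vertical sigma the velocity spread induces (via the sensitivity of drop to velocity), then subtract that variance from the observed vertical variance. Again, the no-drag drop model and all the numbers are illustrative assumptions, not my actual data.

```python
import math
import statistics

G = 386.09          # in/s^2
RANGE_IN = 3600.0   # 100 yards in inches

def vacuum_drop(v_fps):
    """Approximate drop (inches) at 100 yards, ignoring drag."""
    t = RANGE_IN / (v_fps * 12.0)
    return 0.5 * G * t * t

heights = [-2.1, -0.4, 0.3, 1.8]  # hypothetical vertical impacts, inches
vels = [1005, 1040, 1060, 1095]   # hypothetical chrono readings, fps

v_mean = statistics.mean(vels)
v_sd = statistics.stdev(vels)

# In this model drop varies as v^-2, so |d(drop)/dv| = 2 * drop / v.
sensitivity = 2.0 * vacuum_drop(v_mean) / v_mean  # inches per fps
sigma_drop = sensitivity * v_sd                   # velocity-induced vertical sigma

var_vertical = statistics.variance(heights)
var_adjusted = max(var_vertical - sigma_drop**2, 0.0)
print(math.sqrt(var_adjusted))  # adjusted vertical sigma, inches
```

This subtracts the velocity effect without assuming anything about which shot made which hole, which is why it is the conservative, statistically defensible version of the adjustment.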