There are a couple of tasks that come up when you are looking at experimental measurements with margins of error, and they can be solved with simple equations that are nonetheless not widely known by people who have only taken introductory statistics. This post is mostly based upon a question and answer about the subject at Physics Stack Exchange.

I intentionally omit a derivation of these formulas and a discussion of the conditions under which they apply. In the real world, some of those conditions do not hold exactly, and the formulas below are merely good approximations.

I used these formulas to produce, for example, the results reported in my recent post about new measurements of the Higgs boson mass, which had different margins of error in studies by the independent ATLAS and CMS experiments and were not combined in the source. A truly correct analysis would probe more deeply into the sources of systematic error in the underlying measurements conducted by each experiment, since some of those systematic errors may be perfectly correlated with each other (both ATLAS and CMS use the same apparatus, the Large Hadron Collider, to conduct their experiments), but the impact of that more rigorous analysis is slight.

A more serious systematic issue is ignored by practicing scientists in the area, except qualitatively, because accounting for it would make the calculations much harder and would not necessarily provide useful information, since we do not know the truly correct calculation. There is pretty good accumulated empirical evidence that the errors in the kinds of measurements that are assumed to be Gaussian are, in fact, non-Gaussian, with fatter tails in their distributions than a Gaussian distribution would suggest.

Thus, while the standard method of computing standard errors in experimental measurements in high energy physics, which assumes a Gaussian error distribution, is a valid way to determine the degree of error in one experiment relative to another in a unit-free manner, the likelihood that the true result is more than X sigma away from the measured value, computed from a Gaussian distribution, is systematically understated by an amount that is particularly large for high values of sigma. This is why physicists consider only a 5 sigma result to be a true discovery, even though something close to a 3 sigma result ought to be sufficiently certain if errors were really distributed in a Gaussian manner.
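To see why the 5 sigma convention is so conservative under a Gaussian assumption, you can compute the two-sided tail probability of a Gaussian fluctuation directly. A minimal sketch in Python (the function name is my own; the math follows from the standard relation between the normal distribution and the complementary error function):

```python
import math

def gaussian_tail(sigma):
    """Two-sided probability that a Gaussian variable fluctuates
    more than `sigma` standard deviations from its mean."""
    return math.erfc(sigma / math.sqrt(2))

# A 3 sigma fluctuation happens about 0.27% of the time by chance;
# a 5 sigma fluctuation happens less than once in a million trials.
p3 = gaussian_tail(3.0)
p5 = gaussian_tail(5.0)
```

If the true error distribution has fatter tails than a Gaussian, these probabilities are underestimates, which is exactly the concern described above.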

**1. Combining different kinds of error in the same measurement to obtain a total margin of error.**
You will often see an experimental measurement in a research paper reported in the form:

Measurement (k) +/- Statistical Sampling Error (Δk_{1}) +/- Systematic Error (Δk_{2}) +/- Theoretical Error (Δk_{3}), where Δk_{1}, Δk_{2}, and Δk_{3} are in the same units as k and give the magnitude of one standard deviation of the type of error described, assuming that the error is distributed on a Gaussian basis (i.e., that it follows a "normal distribution").

But often what you need to know is Measurement (k) +/- Combined Error (Δk).

How do you do this?

You square each of the independent sources of error, add these squares together, and take the square root, which gives you the total combined error from all sources. For two types of error Δk_{1} and Δk_{2} the formula is:

Δk = √((Δk_{1})² + (Δk_{2})²)
Usually, this formula will result in a total margin of error that is a bit more than the largest individual source of error, but much less than the sum of the different margins of error. Intuitively, this makes sense: having multiple independent sources of error should increase the total margin of error, as this formula does, but not linearly (i.e., by just adding up the potential sources of error), because error from one source will often be partially cancelled out by an error in the opposite direction from another independent source.
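The addition-in-quadrature rule above is a one-liner in code. A minimal sketch in Python (the function name and numbers are illustrative, not from any particular paper):

```python
import math

def combined_error(*errors):
    """Combine independent one-sigma errors in quadrature:
    square each, sum the squares, take the square root."""
    return math.sqrt(sum(e ** 2 for e in errors))

# e.g. statistical, systematic, and theoretical errors:
total = combined_error(0.3, 0.4, 0.1)
# The result is a bit more than the largest term (0.4),
# but much less than the straight sum (0.8).
```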

**2. Combining two independent measurements with different margins of error.**
Suppose that more than one experiment independently measures the same quantity and each experiment gets a result. The first paper measures a physical constant k, to be k_{1} +/- Δk_{1}. The second paper measures the physical constant k, in the same physical units, to be k_{2} +/- Δk_{2}.

The best estimate of the true value of k given this information is calculated using an error-weighted mean, weighting each measurement by the reciprocal of the square of its uncertainty. An accurate measurement must contribute more to the best value than an inaccurate one.

Thus, for

X = (1/(Δk_{1})²) * k_{1} + (1/(Δk_{2})²) * k_{2}

and

Y = 1/(Δk_{1})² + 1/(Δk_{2})²,

the error-weighted mean value of k = X/Y.
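The inverse-variance weighting above generalizes to any number of measurements. A minimal sketch in Python (function name and sample values are illustrative):

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean of independent measurements
    of the same quantity: weight each value by 1/(error squared)."""
    weights = [1.0 / (e * e) for e in errors]
    x = sum(w * v for w, v in zip(weights, values))  # numerator X
    y = sum(weights)                                 # denominator Y
    return x / y

# Two measurements of the same constant: 10.0 +/- 0.5 and 11.0 +/- 1.0.
# The more precise measurement pulls the mean toward 10.
best = weighted_mean([10.0, 11.0], [0.5, 1.0])  # 10.2
```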

**3. Combining margins of error from two combined independent measurements.**
As in the previous section, suppose that two experiments independently measure the same quantity: the first paper measures a physical constant k to be k_{1} +/- Δk_{1}, and the second measures it, in the same physical units, to be k_{2} +/- Δk_{2}.

What is the combined margin of error Δk for the combined error weighted mean measurement k?

(Δk)^{-1} = √((Δk_{1})^{-2} + (Δk_{2})^{-2})

Intuitively, a very uncertain measurement must make little contribution to the combined result. The combined uncertainty Δk must always be less than or equal to the smallest of the individual uncertainties, and multiple, equally accurate measurements must decrease the uncertainty.
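Those intuitions can be checked directly from the formula. A minimal sketch in Python (the function name is my own):

```python
import math

def combined_uncertainty(errors):
    """Uncertainty of the inverse-variance weighted mean:
    1/Δk = sqrt(sum of 1/(Δk_i squared))."""
    return 1.0 / math.sqrt(sum(1.0 / (e * e) for e in errors))

# Combining 0.5 with 1.0 gives about 0.447, slightly below the
# smaller individual uncertainty of 0.5.
dk = combined_uncertainty([0.5, 1.0])

# Two equally accurate measurements shrink the error by sqrt(2):
# combining 2.0 with 2.0 gives 2/sqrt(2) = sqrt(2).
dk_equal = combined_uncertainty([2.0, 2.0])
```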