BLUE (Best Linear Unbiased Estimator)


Code and instructions:

My code for BLUE combinations of N measurements can be downloaded from this location.

To use my code you need to write a python script with the following elements.
(But no actual knowledge of python is needed. Just copy-paste into a file, let's say myfile.py, adapt the numbers, and type python myfile.py.)

Load the BLUE class from my file blue.py, which has to be in the same directory:

from blue import BLUE

Then define the BLUE object, specifying the number of measurements, n:

blue = BLUE(n)
Suppose that your measurements have values a, b, c. Put them in a bracketed list and pass that list to BLUE:
values = [a, b, c]
blue = BLUE(len(values))
blue.AddMeasurement(values)
or equivalently:
blue = BLUE(3)
blue.AddMeasurement([a, b, c])
And this is how you pass the uncertainties of one source and define its correlation across the N channels:
blue.AddUncertainty([d, e, f], corr)
This assumes that the same correlation factor holds for any pair of the N measurements, for each source of systematics. This is in general not true, so be careful.
I may generalize this (which would make the code significantly more complicated) if I ever find myself in need of it. If you make that modification yourself, I would appreciate it if you shared the modified code with me.
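
For concreteness, here is a minimal sketch (the function name is mine, and this is not a copy of the actual code) of the covariance matrix that a call like blue.AddUncertainty([d, e, f], corr) implies under this assumption:

import numpy as np

def covariance_from_uncertainty(errors, corr):
    # One uncertainty source: corr * d_i * d_j off the diagonal,
    # and the squared errors d_i**2 on the diagonal.
    d = np.asarray(errors, dtype=float)
    cov = corr * np.outer(d, d)    # same correlation factor for every pair
    np.fill_diagonal(cov, d ** 2)  # variances on the diagonal
    return cov

The total covariance matrix used in the combination is then the sum of these matrices over all uncertainty sources.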

Your script must end with either

blue.Simple()
for the standard application of the BLUE method, or
blue.Iterative(n)
for the iterative variant of BLUE, where n is the number of iterations.

Note that Simple() is the core routine, where the BLUE formulas are applied; Iterative() calls Simple() repeatedly.

The output will contain the weight of each input measurement (beware, it can be negative; this is discussed for example here), the combination result and its uncertainty.
In the iterative case, the printouts from the intermediate iterations are shown as well, as they can give you a feeling for the quality of the convergence.
In the plain case, i.e. blue.Simple(), the output also contains the error matrix. (This would be too verbose for the iterative running.)
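
For reference, here is a minimal standalone sketch of what the "BLUE formulas" mentioned above look like in numpy (this is the textbook algebra, not a copy of my Simple() routine, and the function name is mine):

import numpy as np

def blue_combine(values, cov):
    # Standard BLUE: weights w = C^-1 u / (u^T C^-1 u), with u a vector of ones;
    # combined value = w . x, combined variance = 1 / (u^T C^-1 u).
    x = np.asarray(values, dtype=float)
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    u = np.ones_like(x)
    norm = u @ cinv @ u
    weights = cinv @ u / norm  # individual weights can be negative
    return weights, weights @ x, np.sqrt(1.0 / norm)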


Plain BLUE vs iterative BLUE:

In most cases the standard BLUE method is what you need, but you may want to use iterative BLUE whenever your main uncertainties are proportional to the central value: in that case the combination with plain BLUE would be biased (a problem known as "Peelle's Pertinent Puzzle"; see the bibliography at the bottom of this page).
In general, statistics-limited cross section measurements fall in this category; the same holds for the luminosity uncertainty on cross sections.
Note: in real life, not all uncertainties are absolute (i.e., independent of the central value) and not all are proportional to the central value. My personal rule of thumb is to run both the plain and the iterative version and check how distant the results are. If the difference is small, one can safely assume that these subtleties are unimportant.

For the iterative routine I originally considered a stopping criterion, such as the difference between two iterations falling below some threshold, but in most practical cases the convergence is very fast (< 4 iterations). So in practice I prefer to set a small number of iterations, like 5.
A large number of iterations is typically needed only when the relative uncertainties are very large.
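
Schematically, the iterative variant amounts to something like the following sketch, written under the simplifying assumption that every uncertainty is relative, i.e. proportional to the central value (the names are mine, and this is not a copy of my Iterative() routine):

import numpy as np

def iterative_blue(values, rel_errors_per_source, corrs, n_iter=5):
    # At each iteration, the relative uncertainties are rescaled to the
    # current combined value (instead of the individual central values),
    # the covariance matrix is rebuilt, and plain BLUE is applied again.
    x = np.asarray(values, dtype=float)
    combined = x.mean()  # any reasonable starting point
    for _ in range(n_iter):
        cov = np.zeros((len(x), len(x)))
        for rel, corr in zip(rel_errors_per_source, corrs):
            d = np.asarray(rel, dtype=float) * combined
            block = corr * np.outer(d, d)
            np.fill_diagonal(block, d ** 2)
            cov += block
        cinv = np.linalg.inv(cov)
        u = np.ones_like(x)
        norm = u @ cinv @ u
        combined = (cinv @ u / norm) @ x  # plain BLUE at this iteration
    return combined, np.sqrt(1.0 / norm)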


Example:

Your script may look like the following (just an example with random numbers), with some errors fully correlated and some fully uncorrelated:

from blue import BLUE

emu = 80.
mumu = 68.
ljets = 75.

values = [emu, mumu, ljets]

blue = BLUE(len(values))

blue.AddMeasurement(values)

### uncorrelated
# statistical
blue.AddUncertainty([0.24*emu, 0.49*mumu, 0.09*ljets], 0.0)
# MC statistical
blue.AddUncertainty([0.01*emu, 0.02*mumu, 0.03*ljets], 0.0)

### correlated
# jet energy scale
blue.AddUncertainty([0.05*emu, 0.03*mumu, 0.02*ljets], 1.0)
# hadronization
blue.AddUncertainty([0.02*emu, 0.05*mumu, 0.06*ljets], 1.0)
# luminosity
blue.AddUncertainty([0.03*emu, 0.03*mumu, 0.03*ljets], 1.0)

blue.Simple()
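
To run the iterative variant on the same inputs, just replace the last line with:

blue.Iterative(5)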


Limitations:

The limitations of the BLUE method are well known (most blatantly, it assumes that all errors are Gaussian).

This particular code has additional built-in limitations, originating from short-cuts that I took for the sake of simplicity; the most notable one, already mentioned above, is that each uncertainty source gets a single correlation factor for all pairs of measurements.

Overcoming those limitations probably requires a far more complicated code structure. If you need that, you may consider other BLUE implementations; for example, Richard Nisius' C++ implementation for ROOT is very general and has been extensively validated.


Acknowledgments (and some history):

My very first version of this code, in 2013, was considerably uglier and longer. It was one of my first exercises in python and my very first usage of the numpy library, whose full power was still unknown to me. Among other things, it was hard-coded for only two measurements (as that was my use case at the time).
Steffen Roecker (my co-author in CMS-TOP-13-001) came to the rescue by drastically reducing the number of lines of code and detaching the core algorithm from the rest. His BLUE class is still available here. (It was still hard-coded for only two measurements, and had no iterative option.)
For testing purposes, Steffen's class is also available in my file with a different name (class BLUE2x2).

Luca Lista helped me to understand the logic of the iterative BLUE variant, and noticed a couple of serious mistakes in my original implementation.

The current version was written in October 2016 for the purposes of this paper.

If you like this code and use it for any purpose, I would appreciate it if you made a reference to this webpage (where appropriate) or at least gave credit somehow.


Bibliography:

Some useful references:

Talks:

Other codes:

