
Errors as a Monster

One problem physicists have to overcome in experimental work is the propagation of error in data. Errors can be negligible in some data sets, but it is just as possible for an error to mimic real data so closely that the two are almost impossible to distinguish. For example, dark matter experiments face the problem of separating a signal caused by a neutrino from a possible signal caused by a WIMP (a dark matter candidate), because both interact with the detector in a similar manner and leave similar signatures. Error in data can produce unreliable values for physically relevant quantities and even a spurious falsification of related theories, which can have a huge impact on our understanding of the Universe.
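
To make this concrete, here is a minimal, purely illustrative sketch; all event counts below are hypothetical and not taken from any real experiment. It shows how an excess of events loses its statistical meaning once the uncertainty on the background is comparable to the expected signal.

```python
import math

# Hypothetical numbers, chosen only to illustrate the point:
# an expected "signal" of 5 events sitting on a background of
# 20 events that itself carries an uncertainty of +/- 5 events.
signal = 5.0
background = 20.0
background_uncertainty = 5.0

# Naive significance: the excess divided by the combined statistical
# (Poisson) and systematic uncertainty on the background.
total_uncertainty = math.sqrt(background + background_uncertainty**2)
significance = signal / total_uncertainty

print(f"significance ~ {significance:.2f} sigma")
# ~0.75 sigma here: the excess is swamped by the background uncertainty,
# so the "signal" and the background cannot be told apart.
```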

The problem of error propagation is as old as quantitative experiment itself, reaching back to the centuries in which physicists first set out to measure the parameters of the theories proposed in their time. Galileo's timing measurements of falling bodies and the later efforts to test Einstein's general theory of relativity by measuring the deflection of starlight by the Sun are notable examples in which the treatment of measurement error was decisive. The theory of error analysis grew out of the work of physicists and mathematicians around the world who developed various techniques to handle error propagation; among the most useful is the confidence level, formalised by statisticians in the twentieth century, which quantifies how reliable a given set of data is.
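
As a minimal sketch of what a confidence-level statement looks like in practice, the snippet below computes an approximate 95% confidence interval for the mean of a set of repeated measurements, assuming roughly Gaussian errors; the measurement values are hypothetical.

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity.
measurements = [9.8, 10.1, 9.9, 10.3, 10.0, 9.7, 10.2]

mean = statistics.mean(measurements)
# Standard error of the mean = sample standard deviation / sqrt(N).
std_err = statistics.stdev(measurements) / math.sqrt(len(measurements))

# Approximate 95% confidence interval assuming Gaussian errors
# (1.96 standard errors on either side of the mean).
lower, upper = mean - 1.96 * std_err, mean + 1.96 * std_err
print(f"mean = {mean:.2f}, 95% CI = [{lower:.2f}, {upper:.2f}]")
```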

Error can enter both during data collection and during data analysis. Systematic errors creep in during data collection, for example from detectors that are not 100% efficient. Random errors arise from the statistical scatter inherent in any measurement, and further mistakes come from applying an inappropriate analysis technique or from simple human error. Computational errors can occur through incorrect programming or through a shortage of the storage space that the data demand.
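
The difference between these error types can be illustrated with a short simulation; the true value, bias, and scatter below are hypothetical. Random scatter averages away as more measurements are taken, while a systematic bias, such as one from a miscalibrated or inefficient detector, does not.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.0       # hypothetical true value of the quantity
SYSTEMATIC_BIAS = 0.5   # e.g. an offset from a miscalibrated detector
RANDOM_SPREAD = 2.0     # statistical scatter of a single measurement

def measure():
    # Every measurement carries the same bias plus random scatter.
    return TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0.0, RANDOM_SPREAD)

for n in (10, 1000, 100000):
    mean = statistics.mean(measure() for _ in range(n))
    print(f"N = {n:6d}: mean = {mean:.3f} (true value {TRUE_VALUE})")

# The scatter of the mean shrinks roughly as 1/sqrt(N),
# but the 0.5 offset from the systematic bias never averages away.
```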


Random and computational errors can be reduced by carefully tracking and reviewing each step of the data analysis, but systematic errors are much harder to minimise. Physicists have tried to reduce them by building more efficient and more sensitive detectors, but there appears to be a bottleneck to this approach: the uncertainty principle does not allow certain pairs of quantities to be measured beyond a given precision. This fundamental limit means that when the error in the data is comparable to the quantity being measured, the two can no longer be distinguished.
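
How an error grows as it propagates through an analysis is usually estimated with the standard quadrature rule for independent uncertainties, sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2. The sketch below implements that rule with finite-difference derivatives; the propagate helper and the voltage/current numbers are hypothetical and serve only to illustrate the formula.

```python
import math

def propagate(f, values, sigmas, eps=1e-6):
    """Propagate independent 1-sigma uncertainties through f using the
    quadrature rule sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2.
    Partial derivatives are estimated by finite differences."""
    variance = 0.0
    for i, sigma in enumerate(sigmas):
        shifted = list(values)
        shifted[i] += eps
        dfdx = (f(*shifted) - f(*values)) / eps
        variance += (dfdx * sigma) ** 2
    return math.sqrt(variance)

# Hypothetical example: a resistance R = V / I from a measured
# voltage V = 5.0 +/- 0.1 V and current I = 2.0 +/- 0.05 A.
resistance = lambda v, i: v / i
sigma_R = propagate(resistance, [5.0, 2.0], [0.1, 0.05])
print(f"R = {5.0 / 2.0:.2f} +/- {sigma_R:.2f} ohm")
```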

The propagation of error is an inherent problem of collecting and working with experimental data. The lack of more sophisticated measurement technology is one reason for it, and factors such as detector efficiency and well-written analysis code can influence the outcome of an experiment enormously.
