
Degrees of Freedom

History of Degrees of Freedom

The earliest and most basic concept of degrees of freedom appeared in the early 1800s,
intertwined with the works of the mathematician and astronomer Carl Friedrich Gauss. The modern
usage and understanding of the term were first expounded by William Sealy Gosset, an
English statistician, in his article "The Probable Error of a Mean," published in Biometrika in
1908 under a pen name to preserve his anonymity. In his writings, Gosset did not specifically
use the term "degrees of freedom." He did, however, explain the concept in the course of
developing what would eventually be known as Student's t-distribution.
The term itself was not popularized until 1922, when the English statistician and geneticist
Ronald Fisher began using "degrees of freedom" in his publications on the chi-square statistic.
What Are 'Degrees of Freedom'?
Degrees of freedom are the number of values in a study that are free to vary. In a
statistical calculation, they represent how many values involved in the calculation
are free to vary.

Degrees of freedom are commonly discussed in relation to various forms of hypothesis
testing in statistics, such as the chi-square test. Calculating the degrees of freedom
is essential for judging the significance of a chi-square statistic and the validity of
the null hypothesis.
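
To make the idea concrete, here is a minimal sketch in Python (the numbers are made up for illustration): once the mean of a sample of n values is fixed, only n - 1 of the values are free to vary, so estimating the mean costs one degree of freedom.

    n = 5
    fixed_mean = 10.0

    # Choose any four of the five values freely...
    free_values = [8.0, 12.0, 9.5, 11.0]

    # ...and the fifth is fully determined by the fixed mean.
    last_value = n * fixed_mean - sum(free_values)
    sample = free_values + [last_value]

    print(sample)                        # [8.0, 12.0, 9.5, 11.0, 9.5]
    print(sum(sample) / n)               # 10.0, as required
    print("degrees of freedom:", n - 1)  # 4

    # For a chi-square goodness-of-fit test with k categories, the
    # degrees of freedom are k - 1; for an r-by-c contingency table,
    # they are (r - 1) * (c - 1).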
Standard Error of the Mean
What Is a 'Standard Error'?

A standard error is the standard deviation of the sampling distribution of a
statistic. Standard error is a statistical term that measures the accuracy with which
a sample represents a population. In statistics, a sample mean deviates from the
actual mean of the population; this deviation is the standard error. Standard error
statistics are a class of statistics that appear in the output of many inferential
analyses but function as descriptive statistics. Specifically, the term standard error
refers to a group of statistics that provide information about the dispersion of the
values within a set.
The standard error is considered part of descriptive statistics. It represents the standard deviation of the
sample mean within a dataset and serves as a measure of variation for random variables, providing a
measurement of the spread. The smaller the spread, the more accurately the sample mean estimates the
population mean.

The standard error of the mean is the sample standard deviation divided by the square root of the sample
size n:

    s_M = s / √n

This shows that the larger the sample size, the smaller the standard error, since a larger divisor yields a
smaller result. The symbol for the standard error of the mean is s_M; when subscripts are difficult to
produce, it may be written as S.E. mean or, more simply, SEM.
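
As a concrete check of this formula, here is a minimal sketch in Python using only the standard library; the sample values are made up for illustration (SciPy users can get the same quantity from scipy.stats.sem).

    import math
    import statistics

    # Hypothetical sample data.
    sample = [4.2, 5.1, 3.8, 5.5, 4.9, 4.4, 5.0, 4.7]

    n = len(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    sem = s / math.sqrt(n)        # standard error of the mean: s_M = s / sqrt(n)

    print(f"mean = {statistics.mean(sample):.3f}")
    print(f"s    = {s:.3f}")
    print(f"SEM  = {sem:.3f}")

Because n appears under a square root, quadrupling the sample size only halves the standard error.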
Thank you ;)
