
What percentage of the population do you need in a representative sample?

A:

Technically, a representative sample requires only whatever percentage of the statistical population is necessary to replicate as closely as possible the quality or characteristic being studied or analyzed. For example, in a population of 1,000 that is made up of 600 men and 400 women used in an analysis of buying trends by gender, a representative sample can consist of a mere five members, three men and two women, or 0.5 percent of the population. However, while this sample is nominally representative of the larger population, it is likely to result in a high degree of sampling error or bias when making inferences regarding the larger population because it is so small.
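As a quick illustration, here is a minimal Python sketch of the proportional allocation just described; the counts are the article's hypothetical 600/400 split and five-member sample.

```python
# A minimal sketch of proportional allocation for a representative sample.
# The population counts (600 men, 400 women) and sample size of 5 are the
# article's illustrative figures.
population = {"men": 600, "women": 400}
sample_size = 5
total = sum(population.values())

# Allocate sample slots in proportion to each group's share of the population.
allocation = {group: round(sample_size * count / total)
              for group, count in population.items()}
print(allocation)  # {'men': 3, 'women': 2} -> 0.5% of the population
```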

Sampling bias is an unavoidable consequence of employing samples to analyze a larger group. Obtaining data from a sample is a process that is limited and incomplete by its very nature. But because sampling is so often necessary given the limited availability of resources, economic analysts employ methods that can reduce sampling bias to statistically negligible levels. While representative sampling is one of the most effective methods used to reduce bias, it is often not enough to do so sufficiently on its own.

One strategy used in combination with representative sampling is making sure that the sample is big enough to optimally reduce error. In general, the larger the sample, the more likely that error is reduced; but at a certain point, the reduction becomes so minimal that it does not justify the additional expense necessary to make the sample larger.

Just as the use of a technically representative but tiny sample is not enough to reduce sampling
bias on its own, simply choosing a large group without taking representation into account may
lead to even more flawed results than using the small representative sample. Returning to the
example above, a group of 600 males is statistically useless on its own when analyzing gender
differences in buying trends.

When is it better to use systematic over simple random sampling?

A:
Under simple random sampling, a sample of items is chosen randomly from a population, and each item has an equal probability of being chosen. Simple random sampling uses a table of random numbers or an electronic random number generator to select items for its sample. Systematic sampling involves selecting items from an ordered population using a skip, or sampling, interval. Systematic sampling is more appropriate than simple random sampling when a project's budget is tight and calls for simplicity in execution and in interpreting the results of a study. Systematic sampling is also better when the data do not exhibit patterns and there is a low risk of data manipulation by the researcher.
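To make the two selection rules concrete, here is a minimal Python sketch; the population of 1,000 items and the sample size of 100 are illustrative assumptions.

```python
# Contrasting the two selection rules on a hypothetical ordered population.
import random

population = list(range(1000))  # an ordered population of 1,000 items
n = 100                         # desired sample size

# Simple random sampling: every item has an equal chance of selection.
simple_sample = random.sample(population, n)

# Systematic sampling: pick a random start, then take every k-th item,
# where k is the skip (sampling) interval.
k = len(population) // n        # interval of 10
start = random.randrange(k)
systematic_sample = population[start::k]
```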

Execution Simplicity

Simple random sampling requires that each element of the population be separately identified
and selected, while systematic sampling relies on a sampling interval rule to select all
individuals. If the population size is small or the size of the individual samples and their number
are relatively small, random sampling provides the best results. However, as the required sample
size increases and a researcher needs to create multiple samples from the population, this can be
very time-consuming and expensive, making systematic sampling a preferred method under such
circumstances.

Pattern Presence

Systematic sampling is better than simple random sampling when there is no pattern in the data.
However, if the population is not random, a researcher runs the risk of selecting elements for the
sample that exhibit the same characteristics. For instance, if every eighth widget in a factory was
damaged due to a certain malfunctioning machine, a researcher is more likely to select these
broken widgets with systematic sampling than with simple random sampling, resulting in a
biased sample.
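A short simulation makes the risk visible; the 1,000-widget production run with every eighth widget broken is the hypothetical scenario from the paragraph above.

```python
# Simulating the every-eighth-widget scenario: a sampling interval that
# aligns with the defect cycle produces a badly biased sample.
import random

widgets = ["broken" if i % 8 == 0 else "ok" for i in range(1000)]

# Systematic sampling with an interval of 8: depending on the random
# start, the sample contains either all broken widgets or none at all.
start = random.randrange(8)
systematic = widgets[start::8]
print(systematic.count("broken") / len(systematic))  # 1.0 or 0.0

# Simple random sampling recovers roughly the true 12.5% defect rate.
simple = random.sample(widgets, 125)
print(simple.count("broken") / len(simple))  # close to 0.125
```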

Data Manipulation

Systematic sampling is preferable to simple random sampling when there is a low risk of data manipulation. If that risk is high, such as when a researcher can manipulate the interval length to obtain desired results, a simple random sampling technique is more appropriate.

What are the advantages of using a simple random sample to study a larger population?

A:
Simple random sampling is a method used to cull a smaller sample size from a larger population and use it to research and make generalizations about the larger group. It is one of several methods statisticians and researchers use to extract a sample from a larger population; other probability-based methods include stratified random sampling and systematic sampling. The advantages of a simple random sample include its ease of use and its accurate representation of the larger population.

How a Simple Random Sample Is Generated

Researchers generate a simple random sample by obtaining an exhaustive list of a larger population and then selecting, at random, a certain number of individuals to comprise the sample. With a simple random sample, every member of the larger population has an equal chance of being selected.

Researchers have two ways to generate a simple random sample. One is a manual lottery method: each member of the larger population group is assigned a number, and numbers are then drawn at random to form the sample group. If a simple random sample were to be taken of 100 students in a high school with a population of 1,000, then every student should have a one in 10 chance of being selected.

The manual lottery method works well for smaller populations, but it isn't feasible for larger
ones. In these situations, researchers prefer computer-generated selection. It works via the same
principle, but a sophisticated computer system, rather than a human being, assigns numbers
and selects them at random.
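A sketch of the computer-generated approach, applied to the 100-of-1,000 student example above (the student IDs are hypothetical):

```python
# Computer-generated selection for the 100-of-1,000 student example.
import random

student_ids = list(range(1, 1001))        # assign each student a number
sample = random.sample(student_ids, 100)  # draw 100 IDs at random

# Every student has an equal, one-in-10 chance of being selected.
print(len(sample), len(set(sample)))      # 100 distinct students
```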

Room for Error

With a simple random sample, there has to be room for error, represented by a plus-or-minus variance. For example, if a survey were taken in that same high school to determine how many students are left-handed, random sampling might determine that eight out of the 100 sampled are left-handed. The conclusion would be that 8% of the student population of the high school is left-handed, when in fact the global average is closer to 10%.

The same is true regardless of subject matter. A survey on the percentage of the student population that has green eyes or is physically incapacitated would produce a result with a high mathematical probability of being close to the true figure, but always with a plus-or-minus variance. The only way to achieve 100% accuracy would be to survey all 1,000 students, which, while possible, would be impractical. (For related reading, see: What percentage of the population do you need in a representative sample?)
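One common way to quantify that plus-or-minus variance is a margin of error. The normal-approximation formula below is our assumption for illustration (the article does not specify one), applied to the left-handed example above.

```python
# A 95% margin of error for the left-handed proportion, using the
# normal approximation for a sample proportion (an illustrative
# assumption, not a formula given in the article).
import math

p_hat = 8 / 100   # 8 of the 100 sampled students are left-handed
n = 100
z = 1.96          # z-score for 95% confidence

margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"{p_hat:.0%} +/- {margin:.1%}")  # 8% +/- 5.3%, consistent with ~10%
```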

Advantages of Random Sampling

Simple random sample advantages include ease of use and accuracy of representation. No easier
method exists to extract a research sample from a larger population than simple random
sampling. There is no need to divide the population into sub-populations or take any steps further
than plucking the number of research subjects needed at random from the larger group. Again,
the only requirements are that randomness governs the selection process and that each member
of the larger population has an equal probability of selection.

Selecting subjects completely at random from the larger population also yields a sample that is
representative of the group being studied. Even sample sizes as small as 40 can exhibit low
sampling error when simple random sampling is performed correctly. For any type of research
on a population, using a representative sample to make inferences and generalizations about the
larger group is critical; a biased sample can lead to incorrect conclusions being drawn about the
larger population.

Simple random sampling is as simple as its name indicates, and it is accurate. These two
characteristics give simple random sampling a strong advantage over other sampling methods
when conducting research on a larger population.

What are the disadvantages of using a simple random sample to approximate a larger population?
By Melissa Horton | Updated January 29, 2018 — 9:55 AM EST


A:

Simple random sampling statistically measures a subset of individuals selected from a larger group or population to approximate a response from the entire group. Unlike some other surveying techniques, simple random sampling is an unbiased approach to garnering responses from a large group. Because the individuals who make up the subset are chosen at random, each individual in the large population set has the same probability of being selected. This creates, in most cases, a balanced subset that carries the greatest potential for representing the larger group as a whole.

Although there are distinct advantages to using a simple random sample in research, it has
inherent drawbacks. These disadvantages include the time needed to gather the full list of a
specific population, the capital necessary to retrieve and contact that list, and the bias that could
occur when the sample set is not large enough to adequately represent the full population.

The Time and Costs of Simple Random Sampling

In simple random sampling, an accurate statistical measure of a large population can only be
obtained when a full list of the entire population to be studied is available. In some instances,
details on a population of students at a university or a group of employees at a specific company
are accessible through the organization that connects each population. However, gaining access
to the full list can present challenges. Some universities or colleges are not willing to provide a
full list of students or faculty for research. Similarly, specific companies may not be willing or
able to hand over information about employee groups due to privacy policies.

When a full list of a larger population is not available, individuals attempting to produce simple
random sampling must gather information from other sources. If publicly available, smaller
subset lists can be used to recreate a full list of a larger population, but this strategy takes time to
complete. Organizations that keep data on students, employees and individual consumers often
impose lengthy retrieval processes that can stall a person's ability to obtain the most accurate
information on the entire population set. (For related reading, see: What are the best selection
methods for creating a simple random sample?)

In addition to the time it takes to gather information from various sources, the process may cost a
company or individual a substantial amount of capital. Retrieving a full list of a population or
smaller subset lists from a third-party data provider may require payment each time population
data is provided. If the sample is not large enough to represent the views of the entire population
during the first round of simple random sampling, purchasing additional lists or databases to
avoid a sampling error can be prohibitive.

Bias in Random Sampling

Although simple random sampling is intended to be an unbiased approach to surveying, sample selection bias can occur. When a sample set of the larger population is not inclusive enough, representation of the full population is skewed and requires additional sampling techniques. To ensure a bias does not occur, researchers must acquire responses from an adequate number of respondents, which may not be possible due to time or budget constraints. (For related reading, see: When is it better to use systematic over simple random sampling?)

What is the difference between a simple random sample and a stratified random sample?

A:

Simple random samples and stratified random samples differ in how the sample is drawn from
the overall population of data. Simple random samples involve the random selection of data from
the entire population so each possible sample is equally likely to occur. In contrast, stratified
random sampling divides the population into smaller groups, or strata, based on shared
characteristics. A random sample is taken from each stratum in direct proportion to the size of
the stratum compared to the population. The sample subsets are then combined to create a
random sample.

Simple random sampling and stratified sampling are both types of probability sampling where
each sample has a known probability of being selected. This is different from judgmental
sampling, where the units to be sampled are handpicked by the researcher.

The population is the total set of observations or data. A sample is a set of observations from the
population. The sampling method is the process used to pull samples from the population. A
simple random sample is a random sample pulled from the entire population with no constraints
placed on how the sample is pulled. This method has no bias in selecting the sample from the
population, so each population element has an equal chance of being included in the sample.

How Stratified Random Sampling Works

Stratified random samples group the population elements into strata based on certain criteria, then randomly choose elements from each stratum in proportion to the stratum's size versus the population. The researchers must take care to ensure the strata do not overlap: each point in the population must belong to only one stratum, so the strata are mutually exclusive. Overlapping strata would increase the likelihood that some data points are counted more than once, thus skewing the sample.
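A minimal sketch of this procedure, with a hypothetical population of 1,000 labeled elements:

```python
# Proportional stratified sampling over non-overlapping strata.
# The population and the three strata (600, 300 and 100 members)
# are hypothetical.
import random

population = [(i, "A" if i < 600 else "B" if i < 900 else "C")
              for i in range(1000)]
sample_size = 50

# Group elements into mutually exclusive strata...
strata = {}
for element_id, label in population:
    strata.setdefault(label, []).append(element_id)

# ...then draw from each stratum in proportion to its size.
sample = []
for label, members in strata.items():
    n = round(sample_size * len(members) / len(population))
    sample.extend(random.sample(members, n))
# -> 30 elements from stratum A, 15 from B, 5 from C
```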

Stratified sampling offers certain advantages and disadvantages compared to simple random
sampling. A stratified sample can provide a more accurate representation of the population based
on the characteristic used to divide the population into strata.

For populations with important distinguishing characteristics, stratified sampling can create a
more representative sample. This often requires a smaller sample size, which can save resources
and time. In addition, by including sufficient sample points from each stratum, the researchers
can conduct a separate analysis on each individual stratum. (For related reading, see: What Are
Some Examples of Stratified Random Sampling?)

A stratified sample can ensure that certain strata are represented in the sample. Random sampling may not pull any data points from a smaller stratum, but a stratified sample includes those points with proportional representation. However, more work is required to pull a stratified sample than a random sample. Researchers must individually track and verify the data for each stratum for inclusion, which can take a lot more time compared with random sampling. (For related reading, see: What are the criteria for a simple random sampling?)

What's An Example of Stratified Random Sampling?
By Steven Nickolas | Updated April 23, 2018 — 11:00 AM EDT


A:

Simple random sampling draws individuals at random from a population and places them into a sample. This method of randomly selecting individuals seeks to produce a sample that is an unbiased representation of the population. However, it is not advantageous when the members of the population vary widely.

Stratified random sampling is a method of sampling that involves the division of a population
into smaller groups known as strata. In stratified random sampling or stratification, the strata are
formed based on members' shared attributes or characteristics. Stratified random sampling is also
called proportional random sampling or quota random sampling.

Stratified random sampling can be a better method than simple random sampling when the target population is heterogeneous. It divides a population into subgroups, or strata, and random samples are taken, in proportion to the population, from each of the strata created. The members of each stratum formed have similar attributes and characteristics, and a simple random sample should be taken from each stratum. This method of sampling is widely used. Stratified random sampling can be used, for example, to sample students' grade point averages (GPA) across the nation, people who work overtime hours, and life expectancy across the world.

Example

Suppose a research team wants to determine the GPA of college students across the U.S. The
research team has difficulty collecting data from all 21 million college students; it decides to take
a random sample of the population by using 4,000 students.

Now assume that the team looks at the different attributes of the sample participants and wonders if there are any differences in GPAs across students' majors. Suppose it finds that 560 students are English majors, 1,135 are science majors, 800 are computer science majors, 1,090 are engineering majors, and 415 are math majors. The team wants to use a proportional stratified random sample, where the size of each stratum in the sample is proportional to the size of that stratum in the population.

Assume the team researches the demographics of college students in the U.S. and finds the breakdown of majors: 12% major in English, 28% in science, 24% in computer science, 21% in engineering, and 15% in mathematics. Thus, five strata are created from the stratified random sampling process.

The team then needs to confirm that the strata of the sample are in proportion to the strata of the population; however, it finds the proportions are not equal. The team then resamples 4,000 students from the population and randomly selects 480 English, 1,120 science, 960 computer science, 840 engineering, and 600 mathematics students. With those, it has a proportionate stratified random sample of college students, which provides a better representation of students' college majors in the U.S. The researchers can then highlight specific strata, observe the varying fields of study of U.S. college students, and observe the various grade point averages. For more information, please read Stratified Random Sampling.
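The allocation in the example is just the published percentages applied to the 4,000-student sample, which can be checked directly:

```python
# Reproducing the proportional allocation from the example above.
majors = {"English": 0.12, "science": 0.28, "computer science": 0.24,
          "engineering": 0.21, "mathematics": 0.15}
sample_size = 4000

allocation = {major: round(sample_size * share)
              for major, share in majors.items()}
print(allocation)
# {'English': 480, 'science': 1120, 'computer science': 960,
#  'engineering': 840, 'mathematics': 600}
```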

Applications

The same method used above can be applied to the polling of elections, the income of varying
populations, and income for different jobs across a nation.

For more on the differences between a simple sample and a stratified sample, please read What is
the Difference Between a Simple Random Sample and a Stratified Random Sample?

What is the difference between the standard error of means and standard deviation?
By Investopedia


A:

The standard deviation, or SD, measures the amount of variability or dispersion of a set of data around its mean, while the standard error of the mean, or SEM, measures how far the sample mean of the data is likely to be from the true population mean. The SEM is always smaller than the SD. The formula for the SEM is the standard deviation divided by the square root of the sample size. The formula for the SD requires a few steps: first, take the square of the difference between each data point and the sample mean, and find the sum of those values; then, divide that sum by the sample size minus one, which gives the variance; finally, take the square root of the variance to get the SD.
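Both formulas are short enough to verify in code; the data set below is illustrative.

```python
# SD and SEM as described above, computed on illustrative data.
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

sd = statistics.stdev(data)       # sample SD: divides by n - 1
sem = sd / math.sqrt(len(data))   # SEM = SD / sqrt(n)
print(sd, sem)                    # the SEM is smaller than the SD
```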

The SEM describes how precise the mean of the sample is as an estimate of the true mean of the population. As the size of the sample grows, the SEM decreases relative to the SD, and the true mean of the population is known with greater precision. Increasing the sample size also provides a more precise estimate of the SD; however, the SD itself may be larger or smaller depending on the dispersion of the additional data added to the sample.

The SD is a measure of volatility and can be used as a risk measure for an investment. Assets with higher prices tend to have a higher SD, in absolute terms, than assets with lower prices. The SD can be used to measure the importance of a price move in an asset. Assuming a normal distribution, around 68% of daily price changes are within one SD of the mean, and around 95% of daily price changes are within two SDs of the mean.

What is the difference between standard deviation and variance?

A:

Standard deviation and variance, though basic mathematical concepts, play important roles in
many areas of the financial sector, including accounting, economics and investing. In investing,
for example, a firm grasp of the calculation and interpretation of these two measurements is
crucial for the creation of an effective trade strategy.

Both standard deviation and variance are derived from the mean of a given data set. Whereas the mean is simply the average of all data points, the variance measures the average degree to which each point differs from the mean. The greater the variance, the larger the overall data range. To find the variance, first calculate the difference between each point and the mean; the results are then squared and averaged to produce the variance. For simplicity's sake, this example uses a data set consisting of the integers 1 through 10, giving a mean of 5.5. Squaring the difference between each data point and the mean and averaging the squares yields a variance of 8.25.
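The 8.25 figure can be reproduced directly; note that this example averages the squared deviations over n, which is the population variance.

```python
# Reproducing the variance of 8.25 for the integers 1 through 10.
import statistics

data = list(range(1, 11))
print(statistics.mean(data))       # 5.5
print(statistics.pvariance(data))  # 8.25  (averages squares over n)
print(statistics.pstdev(data))     # ~2.87, the standard deviation
```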

Standard deviation is simply the square root of the variance. The calculation of variance uses squares because squaring weights outliers more heavily than data very near the mean. It also prevents differences above the mean from canceling out those below, which would otherwise always sum to zero. However, because of this squaring, the variance is no longer in the same unit of measurement as the original data. Taking the square root of the variance restores the standard deviation to the original unit of measure. For traders and analysts, these two concepts are of paramount importance, as the standard deviation is used to measure market volatility, which in turn plays a large role in creating a profitable trade strategy.

What is the difference between standard deviation and average deviation?

A:

While there are many different ways to measure variability within a set of data, two of the most
popular are standard deviation and average deviation. Though very similar, the calculation and
interpretation of these two differ in some key ways. Determining range and volatility is
especially important in the finance industry, so professionals in areas such as accounting,
investing and economics should be very familiar with both concepts.

Standard deviation is the most common measure of variability and is frequently used to
determine the volatility of stock markets or other investments. To calculate the standard
deviation, you must first determine the variance. This is done by subtracting the mean from each
data point and then squaring, summing and averaging the differences. Variance in itself is an
excellent measure of variability and range, as a larger variance reflects a greater spread in the
underlying data. The standard deviation is simply the square root of the variance. Squaring the
differences between each point and the mean avoids the issue of negative differences for values
below the mean, but it means the variance is no longer in the same unit of measure as the original
data. Taking the root of the variance means the standard deviation returns to the original unit of
measure and is easier to interpret and utilize in further calculations.

The average deviation, also called the mean absolute deviation, is another measure of variability. However, average deviation utilizes absolute values instead of squares to circumvent the issue of negative differences between the data and the mean. To calculate the average deviation, simply subtract the mean from each value, then sum and average the absolute values of the differences. The mean absolute deviation is used less frequently because the use of absolute values makes further calculations more complicated and unwieldy than using the simple standard deviation.
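Both measures side by side, on an illustrative data set:

```python
# Standard deviation vs. mean absolute (average) deviation.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(data)

mad = statistics.mean(abs(x - mean) for x in data)  # average deviation: 1.5
sd = statistics.pstdev(data)                        # standard deviation: 2.0
print(mad, sd)
```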

What assumptions are made when conducting a t-test?

A:

The common assumptions made when conducting a t-test include those regarding the scale of measurement, random sampling, normality of the data distribution, adequacy of sample size, and equality of variance.

The T-Test

The t-test was developed by a chemist working for the Guinness brewing company as a simple
way to measure the consistent quality of stout. It was further developed and adapted, and now
refers to any test of a statistical hypothesis in which the statistic being tested for is expected to
correspond to a t-distribution if the null hypothesis is supported.

A t-test is an analysis of two population means through the use of statistical examination; a t-test with two samples is commonly used with small sample sizes, testing the difference between the samples when the variances of the two normal distributions are not known.
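A minimal sketch of a two-sample t-test, assuming SciPy is available; the two samples are made-up numbers.

```python
# A two-sample t-test on illustrative data (assumes scipy is installed).
# equal_var=True reflects the equal-variance assumption discussed below;
# passing equal_var=False instead runs Welch's t-test, which drops it.
from scipy import stats

sample_a = [19.8, 20.4, 19.6, 17.8, 18.5, 18.9, 18.3, 18.9]
sample_b = [28.2, 26.6, 20.1, 23.3, 25.2, 22.1, 17.7, 27.6]

t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=True)
print(t_stat, p_value)  # a small p-value argues against the null hypothesis
```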

The t-distribution is a continuous probability distribution that arises when estimating the mean of a normally distributed population using a small sample size and an unknown population standard deviation. The null hypothesis is the default assumption that no relationship exists between two different measured phenomena. (For related reading, see: What does a strong null hypothesis mean?)

T-Test Assumptions

The first assumption made regarding t-tests concerns the scale of measurement. The assumption
for a t-test is that the scale of measurement applied to the data collected follows a continuous or
ordinal scale, such as the scores for an IQ test.

The second assumption made is that of a simple random sample, that the data is collected from a
representative, randomly selected portion of the total population.

The third assumption is that the data, when plotted, follow a normal, bell-shaped distribution curve.

The fourth assumption is that a reasonably large sample size is used. A larger sample size means the distribution of results should approach a normal bell-shaped curve.

The final assumption is homogeneity of variance. Homogeneous, or equal, variance exists when
the standard deviations of samples are approximately equal.

What is the 'Normal Distribution'

The normal distribution, also known as the Gaussian distribution, is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent than data far from the mean.


BREAKING DOWN 'Normal Distribution'

The normal distribution is the most common type of distribution assumed in technical stock market analysis and in other types of statistical analyses. The normal distribution has two parameters: the mean and the standard deviation. For a normal distribution, 68% of the observations fall within ±1 standard deviation of the mean, 95% fall within ±2 standard deviations, and 99.7% fall within ±3 standard deviations.
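The three percentages can be checked against the normal cumulative distribution function (this sketch assumes SciPy is available):

```python
# Verifying the 68-95-99.7 rule from the normal CDF.
from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"within +/-{k} SD: {coverage:.1%}")
# within +/-1 SD: 68.3%
# within +/-2 SD: 95.4%
# within +/-3 SD: 99.7%
```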

While real data are usually not precisely normally distributed, the normal model is motivated by
the Central Limit Theorem, which states that averages calculated from independent identically
distributed random variables have approximately normal distributions, regardless of the type of
distribution that the variables are sampled from (provided it has finite variance).

Skewness and Kurtosis

Real data rarely if ever come from normal distributions. The skewness and kurtosis coefficients
measure how different the real distribution is from a normal distribution. The skewness measures
the symmetry of a distribution. The normal distribution is symmetric and has a skewness of zero,
as is the case with all symmetric distributions. If the distribution of a data set has a skewness less
than zero, the distribution of the data is skewed to the left; positive skewness implies that the
distribution is skewed to the right. Asset prices can be modelled using a lognormal distribution,
which is skewed to the right because asset prices are non-negative, and because there are
occasional assets with extremely high prices relative to the majority.

The kurtosis statistic measures the tail ends of a distribution in relation to the tails of the normal
distribution. The normal distribution has a kurtosis of three, which indicates the distribution has
neither fat nor thin tails. Therefore, if observed data have a kurtosis greater than three, the
distribution is said to have heavy tails when compared to the normal distribution. If the data have
a kurtosis less than three, it is said to have thin tails when compared to the normal distribution.
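Both coefficients are easy to compute on simulated data. This sketch assumes NumPy and SciPy are available; note that SciPy's kurtosis() reports excess kurtosis by default, so fisher=False is passed to match the normal-equals-three convention used here.

```python
# Skewness and kurtosis of simulated normal and lognormal data.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
normal_data = rng.normal(size=100_000)
lognormal_data = rng.lognormal(size=100_000)  # right-skewed, like asset prices

# fisher=False returns Pearson kurtosis, for which the normal equals 3.
print(skew(normal_data), kurtosis(normal_data, fisher=False))        # ~0, ~3
print(skew(lognormal_data), kurtosis(lognormal_data, fisher=False))  # >0, >3
```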

Stock market returns are often assumed to follow a normal distribution. However, in reality,
return distributions tend to have fat tails, and therefore have kurtosis greater than three. Such
returns have typically had moves greater than three standard deviations beyond the mean more
often than expected under the assumption of a normal distribution.

Central Limit Theorem - CLT

What is the 'Central Limit Theorem - CLT'

The central limit theorem (CLT) is a statistical theory stating that, given a sufficiently large sample size drawn from a population with a finite level of variance, the mean of all samples from the same population will be approximately equal to the mean of the population. Furthermore, the sample means will follow an approximately normal distribution, with the variance of the sample means being approximately equal to the variance of the population divided by each sample's size.


BREAKING DOWN 'Central Limit Theorem - CLT'


According to the central limit theorem, the mean of a sample of data will be closer to the mean of the
overall population in question as the sample size increases, notwithstanding the actual distribution of
the data, and whether it is normal or non-normal. As a general rule, sample sizes equal to or greater
than 30 are considered sufficient for the central limit theorem to hold, meaning the distribution of the
sample means is fairly normally distributed.
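A quick simulation illustrates the theorem: sample means drawn from a decidedly non-normal (exponential) population still cluster in an approximately normal pattern at n = 30. The population and sample counts below are arbitrary choices, and the sketch assumes NumPy is available.

```python
# Simulating the central limit theorem with a non-normal population.
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=1.0, size=100_000)  # skewed population

# Draw many samples of size 30 and record each sample's mean.
sample_means = [rng.choice(population, size=30).mean() for _ in range(5_000)]

print(np.mean(sample_means))  # ~1.0, the population mean
print(np.std(sample_means))   # ~ population SD / sqrt(30) ~ 0.18
```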

The Central Limit Theorem in Finance

The central limit theorem is very useful when examining returns for a given stock or index
because it simplifies many analysis procedures. An appropriate sample size depends on the data
available, but generally speaking, having a sample size of at least 50 observations is sufficient.
Due to the relative ease of generating financial data, it is often easy to produce much larger
sample sizes. The central limit theorem is the basis for sampling in statistics, so it forms the foundation for sampling and statistical analysis in finance as well. Investors of all types rely on the central limit theorem to analyze stock returns, construct portfolios and manage risk.

Example of Central Limit Theorem

If an investor is looking to analyze the overall return for a stock index made up of 1,000 stocks, he can take random samples of stocks from the index to get an estimate for the return of the total index. The samples must be random, and at least 30 stocks must be evaluated in each sample for the central limit theorem to hold. Random samples ensure a broad range of stocks across industries and sectors is represented in the sample. Stocks previously selected must also be replaced for selection in other samples to avoid bias. The average returns from these samples approximate the return for the whole index and are approximately normally distributed. The approximation holds even if the actual returns for the whole index are not normally distributed.
