
ASSIGNMENT SET 1:

Q 1. (a) Statistics is the backbone of decision-making. Comment.

Answer: STATISTICS AS THE BACKBONE OF DECISION MAKING:

Due to advanced communication networks, rapid changes in consumer behavior, the varied expectations of a wide variety of consumers and new market openings, modern managers face the difficult task of making quick and appropriate decisions. They therefore need to depend more on quantitative techniques such as mathematical models, statistics, operations research and econometrics.

Decision making is a key part of our day-to-day life. Even when we wish to purchase a television, we like to know the price, quality, durability and maintainability of various brands and models before buying one. In this scenario we are collecting data and making an optimum decision; in other words, we are using Statistics.

Statistics is broadly divided into two main categories:
1. Descriptive statistics - collecting, organizing, summarizing and presenting data.
2. Inferential statistics - making inferences, testing hypotheses, determining relationships and making predictions.

Suppose a company wishes to introduce a new product. It has to collect data on market potential, consumer likings, availability of raw materials and the feasibility of producing the product. Hence, data collection is the backbone of any decision-making process. Many organizations find themselves data-rich but poor at drawing information from the data, so it is important to develop the ability to extract meaningful information from raw data in order to make better decisions. We can therefore say that Statistics is the backbone of decision making.

Q 1. (b) Give the plural meaning of the word Statistics.

Answer: PLURAL MEANING OF THE WORD STATISTICS:

The word statistics is used as the plural of the word statistic, which refers to a numerical quantity such as the mean, median or variance calculated from sample values. In the plural sense, the word statistics refers to numerical facts and figures collected in a systematic manner, with a definite purpose, in any field of study. In this sense, statistics are aggregates of facts expressed in numerical form, for example statistics on industrial production, or statistics on the population growth of a country in different years. For example, if we select 15 students from a class of 80 students, measure their heights and find the average height, this average is a statistic.

Q 2. a. In bivariate data on x and y, variance of x = 49, variance of y = 9 and covariance(x, y) = -17.5. Find the coefficient of correlation between x and y.

Ans: We know that the coefficient of correlation is

r = Cov(x, y) / (sd(x) * sd(y))

Given: Var(x) = 49, so sd(x) = 7; Var(y) = 9, so sd(y) = 3; Cov(x, y) = -17.5.

Therefore r = -17.5 / (7 * 3) = -17.5 / 21 = -0.833.

Hence, there is a highly negative correlation between x and y.
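As a quick check of the arithmetic, here is a minimal Python sketch; the values are exactly those given in the problem:

```python
import math

# Summary statistics given in the problem
var_x = 49.0     # variance of x, so the standard deviation of x is 7
var_y = 9.0      # variance of y, so the standard deviation of y is 3
cov_xy = -17.5   # covariance of x and y

# Coefficient of correlation: r = Cov(x, y) / (sd_x * sd_y)
r = cov_xy / (math.sqrt(var_x) * math.sqrt(var_y))
print(round(r, 3))  # prints -0.833
```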

Q 3. The percentage sugar content of tobacco in two samples is given in Table 1. Test whether the population variances are the same.

Table 1. Percentage sugar content of tobacco in two samples
Sample A: 2.4, 2.7, 2.6, 2.1, 2.5
Sample B: 2.7, 3.0, 2.8, 3.1, 2.2, 3.6

Ans: Null hypothesis H0: the two population variances are equal.

Values required to calculate the variance of Sample A (assumed mean 2.5):

X      d = X - 2.5    d^2
2.4    -0.1           0.01
2.7     0.2           0.04
2.6     0.1           0.01
2.1    -0.4           0.16
2.5     0.0           0.00
Total  -0.2           0.22

S1^2 = (1 / (n1 - 1)) [ Sum(d^2) - (Sum d)^2 / n1 ]
     = (1/4) [ 0.22 - (-0.2)^2 / 5 ]
     = (1/4) (0.22 - 0.008)
     = 0.053

Values required to calculate the variance of Sample B (assumed mean 3.0):

X      d = X - 3.0    d^2
2.7    -0.3           0.09
3.0     0.0           0.00
2.8    -0.2           0.04
3.1     0.1           0.01
2.2    -0.8           0.64
3.6     0.6           0.36
Total  -0.6           1.14

S2^2 = (1 / (n2 - 1)) [ Sum(d^2) - (Sum d)^2 / n2 ]
     = (1/5) [ 1.14 - (-0.6)^2 / 6 ]
     = (1/5) (1.14 - 0.06)
     = 0.216

F = S2^2 / S1^2 = 0.216 / 0.053 = 4.08

The table value of F at the 5% level of significance for (5, 4) degrees of freedom is 6.26. Since the calculated F (4.08) is less than the table value, the difference is not significant; the null hypothesis is accepted and the population variances may be regarded as equal.
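The same test can be checked programmatically. The following sketch assumes SciPy is available; it recomputes the sample variances and compares the F statistic with the 5% critical value:

```python
import statistics
from scipy import stats  # only needed for the critical value

sample_a = [2.4, 2.7, 2.6, 2.1, 2.5]
sample_b = [2.7, 3.0, 2.8, 3.1, 2.2, 3.6]

# Unbiased sample variances (divisor n - 1), as in the worked solution
s1_sq = statistics.variance(sample_a)  # ~0.053
s2_sq = statistics.variance(sample_b)  # ~0.216

# F statistic: larger variance over the smaller one
f = max(s1_sq, s2_sq) / min(s1_sq, s2_sq)

# 5% critical value with (5, 4) degrees of freedom
f_crit = stats.f.ppf(0.95, dfn=5, dfd=4)
print(round(f, 2), round(f_crit, 2))  # 4.08 6.26 -> F < critical, not significant
```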

Q 4. a. Explain the characteristics of business forecasting. Answer:

CHARACTERISTICS OF BUSINESS FORECASTING:

Business forecasting has always been one component of running an enterprise. However, forecasting traditionally was based less on concrete and comprehensive data than on face-to-face meetings and common sense. In recent years, business forecasting has developed into a much more scientific endeavor, with a host of theories, methods and techniques designed for forecasting certain types of data. The development of information technologies and the Internet propelled this development into overdrive, as companies adopted such technologies not only into their business practices but into their forecasting schemes as well. In the 2000s, projecting the optimal levels of goods to buy or products to produce involved sophisticated software and electronic networks that incorporate mounds of data and advanced mathematical algorithms tailored to a company's particular market conditions and line of business.

Business forecasting involves a wide range of tools, including simple electronic spreadsheets, enterprise resource planning (ERP) and electronic data interchange (EDI) networks, advanced supply chain management systems, and other Web-enabled technologies. The practice attempts to pinpoint key factors in business production and to extrapolate from given data sets to produce accurate projections of future costs, revenues and opportunities, normally with an eye toward adjusting current and near-future business practices to take maximum advantage of expectations.

In the Internet age, the field of business forecasting was propelled by three interrelated phenomena. First, the Internet provided a new series of tools to aid the science of business forecasting. Second, business forecasting had to take the Internet itself into account in trying to construct viable models and make predictions. Finally, the Internet fostered vastly accelerated transformations in all areas of business that made the job of business forecasters that much more exacting. By the 2000s, as the Internet and its myriad functions highlighted the central importance of information in economic activity, more and more companies came to recognize the value, and often the necessity, of business forecasting techniques and systems.

Business forecasting is indeed big business, with companies investing tremendous resources in systems, time and employees aimed at bringing useful projections into the planning process. According to a survey by the Hudson, Ohio-based AnswerThink Consulting Group, which specializes in studies of business planning, the average U.S. company spends more than 25,000 person-days on business forecasting and related activities for every billion dollars of revenue.

Companies have a vast array of business forecasting systems and software from which to choose, but choosing the correct one for their particular needs requires a good deal of investigation. Any forecasting system needs to be able to facilitate data-sharing partnerships between businesses, accept input from several different data sources and platforms, operate on an open architecture, and feature an array of analysis techniques and approaches. Forecasting systems draw on several sources for their input, including databases, e-mails, documents and Web sites. After processing data from these sources, sophisticated forecasting systems integrate all the necessary data into a single spreadsheet, which the company can then manipulate by entering various projections, such as different estimates of future sales, that the system will incorporate into a new readout.

A flexible and sound architecture is crucial, particularly in the fast-paced, rapidly developing Internet economy. If a system's base is rigid or inadequate, it can be impossible to reconfigure it to adjust to changing market conditions. Along the same lines, it is important to invest in systems that will remain useful over the long term, weathering alterations in the business climate.
Q 4. b. Differentiate between prediction, projection and forecasting.

Answer: b. A prediction or forecast is a statement about the way things will happen in the future, often but not always based on experience or knowledge. While there is much overlap between prediction and forecast, a prediction may be a statement that some outcome is expected, while a forecast may cover a range of possible outcomes. A projection, by contrast, is usually an estimate of future values that is conditional on a stated set of assumptions holding: it says what would happen if the assumptions were borne out, rather than what is most likely to happen.

Forecasting is the process of making statements about events whose actual outcomes have (typically) not yet been observed. A commonplace example is the estimation of some variable of interest at some specified future date. Prediction is a similar but more general term. Both may refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods. Usage can differ between areas of application: in hydrology, for example, the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.

Risk and uncertainty are central to forecasting and prediction, and it is generally considered good practice to indicate the degree of uncertainty attaching to forecasts. Reference class forecasting was developed to eliminate or reduce this uncertainty in prediction. Although guaranteed information about the future is in many cases impossible to obtain, prediction is necessary to allow plans to be made about possible developments; Howard H. Stevenson writes that prediction in business "is at least two things: important and hard."

Climate change and increasing energy prices have led to the use of EGain forecasting for buildings, a method that uses forecasting to reduce the energy needed to heat the building and thus the emission of greenhouse gases. Forecasting is also used in the practice of customer demand planning in everyday business forecasting for manufacturing companies. The discipline of demand planning, sometimes referred to as supply chain forecasting, embraces both statistical forecasting and a consensus process.

An important, albeit often ignored, aspect of forecasting is its relationship with planning. Forecasting can be described as predicting what the future will look like, whereas planning predicts what the future must look like. There is no single right forecasting method; selection of a method should be based on your objectives and your conditions (data etc.), and a good way to choose one is to consult a method selection tree.

Q 5. What are the components of time series? Bring out the significance of moving average in analysing a time series and point out its limitations.

Answer: COMPONENTS OF TIME SERIES:

The four components of a time series are:
1. Secular trend
2. Seasonal variation
3. Cyclical variation
4. Irregular variation

Secular trend: A time series may show an upward or a downward trend over a period of years, due to factors such as increase in population, technological progress and large-scale shifts in consumer demand. For example, population increases over a period of time, prices increase over a period of years, and the production of goods in a country increases over a period of years; these are examples of an upward trend. The sales of a commodity may decrease over a period of time because better products come to the market; this is an example of a declining or downward trend. The long-term increase or decrease in the movements of a time series is called the secular trend.

Seasonal variation: Seasonal variations are short-term fluctuations in a time series which occur periodically within a year and repeat year after year. The major factors responsible for this repetitive pattern are weather conditions and the customs of people. More woolen clothes are sold in winter than in summer; regardless of the trend, more ice creams are sold in summer and very few in winter; and sales in departmental stores are higher during festive seasons than on normal days.

Cyclical variations: Cyclical variations are recurrent upward or downward movements in a time series whose period is greater than a year. Unlike seasonal variations they are not regular, and the cycles vary in length and size. The ups and downs in business activity are the effect of cyclical variation: a business cycle showing these oscillatory movements passes through four phases - prosperity, recession, depression and recovery - completed one after another in this order.

Irregular variation: Irregular variations are fluctuations in a time series that are short in duration, erratic in nature and follow no regular pattern of occurrence. They are also referred to as residual variations since, by definition, they represent what is left in a time series after trend, cyclical and seasonal variations have been accounted for. Irregular fluctuations result from unforeseen events such as floods, earthquakes, wars and famines.

SIGNIFICANCE AND LIMITATIONS OF MOVING AVERAGES: The moving average smooths out short-term fluctuations by replacing each observation with the average of the observations in a window around it; when the window length equals the period of the seasonal or cyclical movement, those movements largely cancel out and the underlying trend is isolated. The method is simple to compute and makes no assumption about the mathematical form of the trend. Its limitations are that trend values cannot be obtained for the first and last few periods of the series, that the choice of window length is somewhat arbitrary yet critical to the result, and that it does not yield an equation that can be used for forecasting.
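To make the moving-average idea concrete, here is a minimal Python sketch; the monthly sales figures are hypothetical, chosen only for illustration:

```python
def moving_average(series, window):
    """Centered moving average; `window` is assumed to be odd so that
    each average is centered on an actual observation.  Positions where
    the window does not fully fit are left as None."""
    half = window // 2
    out = [None] * len(series)
    for i in range(half, len(series) - half):
        out[i] = sum(series[i - half:i + half + 1]) / window
    return out

# Hypothetical monthly sales figures
sales = [120, 135, 150, 128, 140, 155, 162, 148, 158, 170, 165, 180]
print(moving_average(sales, 3))
```

Note that the first and last positions remain None, which illustrates the limitation that trend values are lost at the ends of the series.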

Q 6. a. List the various measures of central tendency and explain the differences between them. Answer:

VARIOUS MEASURES OF CENTRAL TENDENCY AND THE DIFFERENCES BETWEEN THEM:

Graphical representation is a good way to present summarized data. However, graphs provide only an overview and so may not be used for further analysis. Hence, we use summary statistics, such as averages, to analyze the data. Mass data which has been collected, classified, tabulated and presented systematically is analyzed further to reduce it to a single representative figure. This single figure is a measure found in the central part of the range of all values; it represents the entire data set and is therefore called a measure of central tendency. In other words, the tendency of data to cluster around a central figure is known as central tendency. A measure of central tendency, or average of the first order, describes the concentration of a large number of values around a particular value: it is a single value which represents all the units.

A measure of central tendency is thus a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are sometimes called measures of central location, and they are also classed as summary statistics. The mean (often called the average) is most likely the measure of central tendency you are most familiar with, but there are others, such as the median and the mode. For instance, if we draw a sample of five women and measure their weights as 100 pounds, 100 pounds, 130 pounds, 140 pounds and 150 pounds, we can describe this sample by its mean (124 pounds) or its median (130 pounds). The mean, median and mode are all valid measures of central tendency, but under different conditions some become more appropriate to use than others. In the following sections we look at the mean, median and mode, learn how to calculate them, and see under what conditions each is most appropriate.

Mean (Arithmetic): The mean (or average) is the most popular and well-known measure of central tendency. It can be used with both discrete and continuous data, although it is most often used with continuous data. The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. So, if a data set has n values x1, x2, ..., xn, the sample mean, usually denoted by x-bar, is:

x-bar = (x1 + x2 + ... + xn) / n

This formula is usually written in a slightly different manner using the Greek capital letter sigma, which means "sum of":

x-bar = (Sum of x) / n

You may have noticed that the formula above refers to the sample mean. So why have we called it a sample mean? This is because, in statistics, samples and populations have very different meanings, and these differences are very important, even if, in the case of the mean, the sample and population versions are calculated in the same way. To acknowledge that we are calculating the population mean rather than the sample mean, we use the Greek lower-case letter mu:

mu = (Sum of x) / N

The mean is essentially a model of your data set: a single value chosen to represent the whole set. You will notice, however, that the mean is often not one of the actual values observed in your data set. One of its important properties is that it minimizes error in the prediction of any one value in your data set; that is, it is the value that produces the lowest total (squared) error relative to all other values in the data set. Another important property of the mean is that its calculation includes every value in the data set. In addition, the mean is the only measure of central tendency for which the sum of the deviations of each value from it is always zero.

When not to use the mean

The mean has one main disadvantage: it is particularly susceptible to the influence of outliers. These are values that are unusual compared with the rest of the data set, being especially small or large in numerical value. For example, consider the wages of staff at a factory below:

Staff   Salary
1       15k
2       18k
3       16k
4       14k
5       15k
6       15k
7       12k
8       17k
9       90k
10      95k

The mean salary for these ten staff is $30.7k. However, inspecting the raw data suggests that this mean value might not be the best way to reflect the typical salary of a worker, as most workers have salaries in the $12k to $18k range. The mean is being skewed by the two large salaries. In this situation we would like a better measure of central tendency; as we will see, the median is a better measure here.

Another time when we usually prefer the median over the mean (or mode) is when our data is skewed (i.e. the frequency distribution of our data is skewed). If we consider the normal distribution, as this is the distribution most frequently assessed in statistics, then when the data is perfectly normal the mean, median and mode are identical, and all represent the most typical value in the data set. As the data becomes skewed, however, the mean loses its ability to provide the best central location, because the skewed data drags it away from the typical value; the median best retains this position and is not as strongly influenced by the skewed values.

Median: The median is the middle score of a set of data that has been arranged in order of magnitude. The median is less affected by outliers and skewed data.

Mode: The mode is the most frequent score in a data set; on a histogram it corresponds to the highest bar, so you can sometimes think of the mode as the most popular option. Normally, the mode is used for categorical data where we wish to know the most common category: in a data set of transport choices, for example, the mode might show that the most common form of transport is the bus. One problem with the mode is that it is not unique, which causes difficulty when two or more values share the highest frequency; we are then stuck as to which mode best describes the central tendency of the data. This is particularly problematic with continuous data, where we are unlikely to find any one value that is more frequent than the others. For example, consider measuring 30 people's weights (to the nearest 0.1 kg). How likely is it that we will find two or more people with exactly the same weight, e.g. 67.4 kg? The answer is: very unlikely. Many people might be close, but with such a small sample (30 people) and a large range of possible weights, you are unlikely to find two people with exactly the same weight to the nearest 0.1 kg. This is why the mode is very rarely used with continuous data. Another problem with the mode is that it does not provide a good measure of central tendency when the most common value is far away from the rest of the data. Suppose, for instance, that the mode of a data set is 2 while the data is mostly concentrated in the 20 to 30 range; the mode is then not representative of the data, and using it to describe the central tendency of this data set would be misleading.

Skewed Distributions and the Mean and Median: We often test whether our data is normally distributed, as this is a common assumption underlying many statistical tests. When you have a normally distributed sample you can legitimately use both the mean and the median as your measure of central tendency; in fact, in any symmetrical distribution the mean, median and mode are equal. In this situation, however, the mean is widely preferred as the best measure of central tendency because it is the measure that includes all the values in the data set in its calculation, and any change in any of the scores will affect its value, which is not the case with the median or mode. When our data is skewed, for example with a right-skewed data set, we find that the mean is dragged in the direction of the skew. In these situations, the median is generally considered the best representative of the central location of the data: the more skewed the distribution, the greater the difference between the median and the mean, and the greater the emphasis that should be placed on using the median rather than the mean. A classic example of a right-skewed distribution is income (salary), where higher earners give a false impression of the typical income if it is expressed as a mean rather than a median.

If tests of normality show that the data is non-normal, it is customary to use the median instead of the mean, though this is a rule of thumb rather than a strict guideline. Sometimes researchers wish to report the mean of a skewed distribution if the median and mean are not appreciably different (a subjective assessment) and if it allows easier comparisons with previous research.

Summary of when to use the mean, median and mode: the following table shows the best measure of central tendency for the different types of variable.

Type of Variable              Best measure of central tendency
Nominal                       Mode
Ordinal                       Median
Interval/Ratio (not skewed)   Mean
Interval/Ratio (skewed)       Median
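As an illustration, the following sketch computes all three measures for the salary data from the example above, using Python's standard statistics module:

```python
import statistics

# Salaries (in $k) of the ten staff from the factory example above
salaries = [15, 18, 16, 14, 15, 15, 12, 17, 90, 95]

print(statistics.mean(salaries))    # 30.7  -> dragged up by the two outliers
print(statistics.median(salaries))  # 15.5  -> close to the typical salary
print(statistics.mode(salaries))    # 15    -> the most frequent value
```

The mean is pulled toward the two large salaries, while the median and mode stay near the typical wage, which is exactly the behavior described above.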

Q 6. b. What is a confidence interval, and why is it useful? What is a confidence level?

Answer: A confidence interval (CI) is a particular kind of interval estimate of a population parameter and is used to indicate the reliability of an estimate. It is an observed interval (i.e. it is calculated from the observations), in principle different from sample to sample, that frequently includes the parameter of interest if the experiment is repeated. How frequently the observed interval contains the parameter is determined by the confidence level or confidence coefficient.

A confidence interval with a particular confidence level is intended to give the assurance that, if the statistical model is correct, then taken over all the data that might have been obtained, the procedure for constructing the interval would deliver an interval that included the true value of the parameter the proportion of the time set by the confidence level. More specifically, the meaning of the term "confidence level" is that, if confidence intervals are constructed across many separate data analyses of repeated (and possibly different) experiments, the proportion of such intervals that contain the true value of the parameter will approximately match the confidence level; this is guaranteed by the reasoning underlying the construction of confidence intervals. A confidence interval does not say that the true value of the parameter has a particular probability of being in the interval, given the data actually obtained. (An interval intended to have such a property, called a credible interval, can be estimated using Bayesian methods, but such methods bring with them their own distinct strengths and weaknesses.)

In using interval estimates, we are not confined to 1, 2 and 3 standard errors. For example, 1.64 standard errors on either side of the mean include about 90 percent of the area under the curve (44.95 percent of the area on each side of the mean in a normal distribution); similarly, 2.58 standard errors include about 99 percent of the area (49.51 percent on each side of the mean). This probability indicates how confident we are that the interval estimate will include the population parameter: a higher probability means more confidence. In estimation, the most commonly used confidence levels are 90 percent, 95 percent and 99 percent, but we are free to apply any confidence level.
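For concreteness, here is a minimal sketch of a 95 percent confidence interval for a mean; the ten measurements are hypothetical, and SciPy is assumed to be available for the t critical value:

```python
import math
import statistics
from scipy import stats

# Hypothetical sample of 10 measurements
data = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7, 12.4, 12.1]
n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(n)  # estimated standard error of the mean

# 95% confidence interval using Student's t (population sigma unknown)
t_crit = stats.t.ppf(0.975, df=n - 1)
print(mean - t_crit * se, mean + t_crit * se)
```

If this procedure were repeated on many independent samples, about 95 percent of the intervals so constructed would contain the true population mean.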

ASSIGNMENT SET 2:

Q 1. (a) What are the characteristics of a good measure of central tendency? (b) What are the uses of averages?

Answer: (a) The characteristics of a good measure of central tendency are:

Presents mass data in a concise form: The mass data is condensed into a single figure, making the data readable and usable for further analysis.

Facilitates comparison: It is difficult to compare two different sets of mass data, but we can compare them after computing the averages of the individual data sets. While comparing, the same measure of average should be used; it leads to incorrect conclusions if, say, the mean salary of one group of employees is compared with the median salary of another.

Establishes relationships between data sets: Averages can be used to draw inferences about the unknown relationships between data sets, and computing the averages of sample data sets is helpful for estimating the average of the population.

Provides a basis for decision-making: In many fields, such as business, finance, insurance and other sectors, managers depend on averages as a basis for their decisions.

(b) The use or application of a particular average depends upon the purpose of the investigation. Some of the uses of the different averages are as follows:

Arithmetic Mean: The arithmetic mean is considered an ideal average. It is frequently used in all aspects of life and possesses many mathematical properties, which gives it immense utility in further statistical analysis. In economic analysis the arithmetic mean is used extensively to calculate average production, average wage, average cost, per capita income, exports, imports, consumption, prices, etc. When different items of a series have different relative importance, the weighted arithmetic mean is used.

Geometric Mean: The geometric mean is important in a series having items of wide dispersion. It is used in the construction of index numbers. Averages of proportions, percentages and compound rates are computed by the geometric mean, and the growth of population is measured with it, since population increases in geometric progression.

Harmonic Mean: The harmonic mean is applied in problems where small items must get more relative importance than large ones. It is useful in cases involving time, speed, values given in quantities, rates and prices, although in practice it has little applicability.

Median and Partition Values: The median and partition values are positional measures of central tendency. They are mainly used in qualitative cases such as honesty, intelligence and ability. In distributions which are positively skewed, the median is a more suitable average. These measures are also suitable for problems concerning the distribution of income, wealth, investment, etc.

Mode: The mode is also a positional average, and its applicability to daily problems is increasing. The mode is used to calculate the modal size of a collar, the modal size of a shoe, or the modal size of readymade garments, and it is also used in biology, meteorology, business and industry.
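The following short sketch contrasts the three mathematical averages on hypothetical figures (growth factors for the geometric mean, speeds over equal distances for the harmonic mean):

```python
import statistics

growth = [1.05, 1.10, 0.98, 1.07]  # hypothetical year-on-year growth factors
speeds = [40, 60]                  # km/h over two equal distances

print(statistics.mean(growth))            # arithmetic mean of the factors
print(statistics.geometric_mean(growth))  # average compound growth rate
print(statistics.harmonic_mean(speeds))   # average speed for the round trip: 48.0
```

The harmonic mean of 40 and 60 is 48, not 50, because more time is spent travelling at the lower speed; this is the kind of case where small items get more relative importance.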

Q 3. (a) What is meant by secular trend? Discuss any two methods of isolating trend values in a time series. (b) What is seasonal variation of a time series? Describe the various methods you know to evaluate it and examine their relative merits.

Answer:
(a) A secular trend is one that is sustained (or is expected to be sustained) over the long term. The term is most often used to distinguish underlying long-term trends from seasonal variations and the effects of economic cycles. There are many techniques for separating secular trends from seasonal and other variations in historical data, ranging from simple year-on-year comparisons to complex econometric models. Secular trends are what ultimately matter to investors, but investors need to look beyond historical trends, or even short-term forecasts, and consider the long-term sustainability of trends.

Two common methods of isolating trend values in a time series are: (i) the moving-average method, in which each observation is replaced by the average of the observations in a window around it, so that short-term fluctuations cancel out and only the trend values remain; and (ii) the method of least squares, in which a straight line (or other curve) y = a + bt is fitted to the data so as to minimize the sum of the squared deviations of the observations from the line, giving a trend equation that can also be used for forecasting.
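As an illustration of the least-squares method, here is a minimal sketch that fits a straight-line trend to hypothetical annual production figures; NumPy is assumed to be available:

```python
import numpy as np

# Hypothetical annual production figures
years = np.arange(2001, 2011)
values = np.array([50, 53, 55, 58, 62, 61, 66, 70, 73, 75])

# Fit the straight-line trend y = a + b*t by least squares
b, a = np.polyfit(years, values, 1)  # polyfit returns the slope first
trend = a + b * years
print(f"trend line: y = {a:.1f} + {b:.2f} * t")
```

Unlike the moving average, the fitted equation extends naturally beyond the observed years, which is why the least-squares method is also usable for forecasting.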

Q 3. (b) In statistics, signal processing, econometrics and mathematical finance, a time series is a sequence of data points measured at successive times. Examples of time series are the daily closing value of the Dow Jones index and the annual flow volume of the Nile River at Aswan. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data, while time series forecasting is the use of a model to predict future values based on previously observed values. Time series are very frequently plotted as line charts.

In an evenly spaced time series, the time intervals between data points are all equal, while in an unevenly spaced time series the intervals differ. The amplitude of an audio signal sampled 44,000 times per second at regular intervals of exactly 1/44,000 s would make an evenly spaced time series. Observations of natural events typically occur at unevenly spaced intervals: a time series recording transactions at the NYSE would be unevenly spaced, since transactions occur at irregular intervals, and so would a series holding an event every time a car passes some point on a street, or every time a plane takes off at some airport.

Time series data have a natural temporal ordering. This makes time series analysis distinct from other common data analysis problems in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their education level, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations (e.g. accounting for house prices by location as well as the intrinsic characteristics of the houses). A time series model will generally reflect the fact that observations close together in time are more closely related than observations further apart. In addition, time series models often make use of the natural one-way ordering of time, so that values for a given period are expressed as deriving in some way from past values rather than from future values.

Methods for time series analysis may be divided into two classes: frequency-domain methods, which include spectral analysis and, more recently, wavelet analysis; and time-domain methods, which include autocorrelation and cross-correlation analysis. Seasonal variation itself is commonly measured by methods such as simple averages, ratio-to-trend, ratio-to-moving-average and link relatives; the ratio-to-moving-average method is generally regarded as the most satisfactory of these, because it removes trend and cyclical movements before measuring the seasonal indices, whereas the simpler methods assume these away.
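As a small illustration of a time-domain method, the following sketch computes the lag-1 autocorrelation of a hypothetical evenly spaced series:

```python
def lag1_autocorrelation(series):
    """Lag-1 autocorrelation: how strongly each observation is related
    to the one immediately before it (near 1 = strong persistence)."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t - 1] - mean) for t in range(1, n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

# Hypothetical evenly spaced observations
series = [112, 118, 125, 121, 130, 135, 132, 140]
print(round(lag1_autocorrelation(series), 3))
```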

Q 6. (a) Why do we use a chi-square test? (b) Why do we use analysis of variance?

Answer: (a) The chi-square test allows us to do a lot more than just test for the equality of several proportions. If we classify a population into several categories with respect to two attributes (such as age and job performance), we can then use a chi-square test to determine whether the two attributes are independent of each other; chi-square tests can therefore be applied to a contingency table. In testing of hypotheses for large and small samples, we used one-sample tests to determine whether a mean or a proportion was significantly different from a hypothesized value, and in the two-sample tests we examined whether the difference between two means or two proportions was significant.

The chi-square (χ²) test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. Do the numbers of individuals or objects that fall in each category differ significantly from the numbers we would expect? Is this difference between the expected and the observed due to sampling error, or is it a real difference?

Chi-Square Test Requirements:
- Quantitative data.
- One or more categories.
- Independent observations.
- Adequate sample size (at least 10).
- Simple random sample.
- Data in frequency form.
- All observations must be used.

Expected frequencies: When you find the value of chi-square, you determine whether the observed frequencies differ significantly from the expected frequencies. You find the expected frequencies for chi-square in two main ways. First, you may hypothesize that the frequencies are equal in each category: for example, you might expect that half of the entering freshman class of 200 at Tech College will be identified as women and half as men. You figure the expected frequency by dividing the number in the sample by the number of categories; in this example, with 200 entering freshmen and two categories (male and female), you divide 200 by 2 to get an expected frequency of 100 in each category. Second, you may determine the expected frequencies on the basis of some prior knowledge.

(b) In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their associated procedures, in which the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are all equal, and it therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a Type I error; for this reason, ANOVA is useful for comparing the means of two, three or more groups.
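To make both tests concrete, here is a minimal sketch using SciPy; the observed counts and group scores are hypothetical:

```python
from scipy import stats

# Chi-square goodness of fit: 200 freshmen, hypothesised 50/50 split
observed = [92, 108]   # hypothetical observed counts of men and women
expected = [100, 100]  # 200 students / 2 categories
chi2, p = stats.chisquare(observed, f_exp=expected)
print(chi2, p)  # p > 0.05 here, so the difference is consistent with sampling error

# One-way ANOVA: are the means of three groups all equal?
group1 = [23, 25, 28, 22, 26]
group2 = [30, 31, 29, 33, 32]
group3 = [24, 27, 26, 25, 28]
f_stat, p_anova = stats.f_oneway(group1, group2, group3)
print(f_stat, p_anova)  # a small p-value suggests at least one mean differs
```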
