
STATISTICS FOR MANAGEMENT

DRIVE: FALL 2014
PROGRAM: MBA/ MBADS/ MBAFLEX/ MBAHCSN3/ PGDBAN2
SEMESTER: I
SUBJECT CODE & NAME: MB0040 - STATISTICS FOR MANAGEMENT
BK ID: B1731
CREDITS: 4
MARKS: 60
Note: Answer all questions. Kindly note that answers for 10-mark questions should be approximately 400 words. Each question is followed by an evaluation scheme.
Q.1 Statistics plays a vital role in almost every facet of human life. Describe the functions of statistics. Explain the applications of statistics.

Meaning of statistics

Functions of statistics

Applications of statistics

ANS. (a)
DEFINITION OF 'STATISTICS'
Statistics is a type of mathematical analysis involving the use of quantified representations, models and summaries for a given set of empirical data or real-world observations. Statistical analysis involves collecting and analyzing data and then summarizing the data into a numerical form.
It is the branch of mathematics dealing with gathering, analyzing, and making inferences from data. Originally associated with government data (e.g., census data), the subject now has applications in all the sciences. Statistical tools not only summarize past data through such indicators as the mean (see MEAN, MEDIAN, AND MODE) and the standard deviation but can also predict future events using FREQUENCY DISTRIBUTION functions. Statistics provides ways to design efficient experiments that eliminate time-consuming trial and error. Double-blind tests for polls, intelligence and aptitude tests, and medical, biological, and industrial experiments all benefit from statistical methods and theories. The results of all of them serve as predictors of future performance, though reliability varies.
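As an illustrative sketch (the data below are hypothetical, not from the original answer), the summary indicators mentioned above can be computed with Python's standard statistics module:

```python
import statistics

# Hypothetical monthly sales figures, for illustration only
sales = [23, 29, 20, 32, 23, 21, 33, 25]

mean = statistics.mean(sales)      # arithmetic average
median = statistics.median(sales)  # middle value of the sorted data
mode = statistics.mode(sales)      # most frequently occurring value
stdev = statistics.stdev(sales)    # sample standard deviation

print(mean, median, mode, stdev)
```

The mean and standard deviation summarize the level and spread of past data, which is exactly the descriptive role of statistics described above.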
Functions or Uses of Statistics
(1) Statistics helps in providing a better understanding and exact description of a phenomenon of nature.
(2) Statistics helps in proper and efficient planning of a statistical inquiry in any field of study.
(3) Statistics helps in collecting appropriate quantitative data.
(4) Statistics helps in presenting complex data in a suitable tabular, diagrammatic and graphic form for an easy and clear comprehension of the data.
(5) Statistics helps in understanding the nature and pattern of variability of a phenomenon through quantitative observations.
(6) Statistics helps in drawing valid inferences, along with a measure of their reliability, about the population parameters from the sample data.
Applications of Statistics
Statistics and Sociology
Sociology is one of the social sciences aiming to discover the basic structure of human society, to identify
the main forces that hold groups together or weaken them and to learn the conditions that transform social
life. It highlights and illuminates aspects of social life that otherwise might be only obscurely recognized
and understood. The sociologist may be called upon for help with a special problem such as social conflict, urban plight, the war on poverty, or crime. His practical contribution lies in the ability to clarify the underlying nature of social problems, to estimate their dimensions more exactly, and to identify the aspects that seem most amenable to remedy with the knowledge and skills at hand. He naturally turns to sociological research, which is the purposeful effort to learn more about society than one can in the ordinary course of living. Keeping the problem in view, he sets forth his objectives, collects materials or data, and uses statistical techniques, together with the knowledge and theory already established on similar topics, to achieve his objectives. So statistical data and statistical methods are quite indispensable for sociological research studies. There is a growing emphasis recently on social survey methods or research methodology in all faculties of arts.
Sociologists seek the help of statistical tools to study cultural change in society, family patterns, prostitution, crime, marriage systems, etc. They also study statistically the relation between prostitution and poverty, crime and poverty, drunkenness and crime, illiteracy and crime, etc. Thus statistics is of immense use in various sociological studies.
Statistics and Government
The functions of a government are varied and complex. Various departments in the state are required to collect and record statistical data in a systematic manner for effective administration. Data pertaining to various fields, namely population, natural resources, production (both agricultural and industrial), finance, trade, exports and imports, prices, labour, transport and communication, health, education, defence, crimes, etc., are the most fundamental requirements of the state for its administration. It is only on the basis of such data that the government decides on the priority areas, gives more attention to them through target-oriented programmes, and studies the impact of the programmes for its future guidelines.
Statistics and Planning
The modern age is an age of planning, and statistics are indispensable for planning. According to Tippett, planning, to a greater or lesser degree according to the government in power, is the order of the day, and without statistics planning is inconceivable. Proper planning can be made only on the basis of a correct assessment of the various resources, both human and material, of the country. A study of data relating to population, agriculture, industry, prices, employment, health and education enables the planners to fix time-bound targets on the social and economic fronts. Evaluation of such economic and social programmes at different stages, by means of related data gathered continuously and systematically, is also done to decide whether the programmes are moving towards the goals or targets set.
Statistics and Economics
In the field of economics, it is almost impossible to think of a problem which does not require an
extensive use of statistical data. Most of the laws in economics are based on a study of a large number of
units and their analysis is enabled by statistical data and the statistical methods. The important economic
aspects like production, consumption, exchange and distribution are described, compared and correlated
with the aid of statistical tools. By a statistical study of time series on prices, sales, production one can

study their trends, fluctuations and the underlying causes. Thus statistics is indispensable in economic analysis.

Q.2 a) Explain the approaches to define probability.


b) State the addition and multiplication rules of probability giving an example of each case.
Ans. 2
The notion of "the probability of something" is one of those ideas, like "point" and "time," that we can't
define exactly, but that are useful nonetheless. The following should give a good working understanding
of the concept.
Events
First, some related terminology: The "somethings" that we consider the probabilities of are usually called
events. For example, we may talk about the event that the number showing on a die we have rolled is 5;
or the event that it will rain tomorrow; or the event that someone in a certain group will contract a certain
disease within the next five years.
Four Perspectives on Probability
Four perspectives on probability are commonly used: Classical, Empirical, Subjective, and Axiomatic.
1. Classical (sometimes called "A priori" or "Theoretical")
This is the perspective on probability that most people first encounter in formal education (although they
may encounter the subjective perspective in informal education).
For example, suppose we consider tossing a fair die. There are six possible numbers that could come up
("outcomes"), and, since the die is fair, each one is equally likely to occur. So we say each of these
outcomes has probability 1/6. Since the event "an odd number comes up" consists of exactly three of
these basic outcomes, we say the probability of "odd" is 3/6, i.e. 1/2.
More generally, if we have a situation (a "random process") in which there are n equally likely outcomes,
and the event A consists of exactly m of these outcomes, we say that the probability of A is m/n. We may
write this as "P(A) = m/n" for short.
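A sketch of the classical m/n computation for the die example above, using exact fractions:

```python
from fractions import Fraction

# Sample space for one toss of a fair die: six equally likely outcomes
outcomes = [1, 2, 3, 4, 5, 6]

# Event A: "an odd number comes up" -- m favourable outcomes out of n
event_a = [x for x in outcomes if x % 2 == 1]

# Classical definition: P(A) = m / n
p_a = Fraction(len(event_a), len(outcomes))
print(p_a)  # 1/2
```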
2. Empirical (sometimes called "A posteriori" or "Frequentist")
This perspective defines probability via a thought experiment.
To get the idea, suppose that we have a die which we are told is weighted, but we don't know how it is
weighted. We could get a rough idea of the probability of each outcome by tossing the die a large number
of times and using the proportion of times that the die gives that outcome to estimate the probability of
that outcome.
This idea is formalized to define the probability of the event A as
P(A) = the limit as n approaches infinity of m/n,
where n is the number of times the process (e.g., tossing the die) is performed, and m is the number of
times the outcome A happens.
(Notice that m and n stand for different things in this definition from what they meant in Perspective 1.)
In other words, imagine tossing the die 100 times, 1000 times, 10,000 times, ... . Each time we expect to
get a better and better approximation to the true probability of the event A. The mathematical way of

describing this is that the true probability is the limit of the approximations, as the number of tosses
"approaches infinity" (that just means that the number of tosses gets bigger and bigger indefinitely).
Example
This view of probability generalizes the first view: If we indeed have a fair die, we expect that the number
we will get from this definition is the same as we will get from the first definition (e.g., P(getting 1) = 1/6;
P(getting an odd number) = 1/2). In addition, this second definition also works for cases when outcomes
are not equally likely, such as the weighted die. It also works in cases where it doesn't make sense to talk
about the probability of an individual outcome. For example, we may consider randomly picking a
positive integer ( 1, 2, 3, ... ) and ask, "What is the probability that the number we pick is odd?"
Intuitively, the answer should be 1/2, since every other integer (when counted in order) is odd. To apply
this definition, we consider randomly picking 100 integers, then 1000 integers, then 10,000 integers, ... .
Each time we calculate what fraction of these chosen integers are odd. The resulting sequence of fractions
should give better and better approximations to 1/2.
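The thought experiment for a die can be imitated on a computer (a simulated fair die stands in for the physical one; this sketch is illustrative, not part of the original answer):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 10_000  # number of simulated tosses of a fair die
m = sum(1 for _ in range(n)
        if random.randint(1, 6) % 2 == 1)  # count the odd outcomes

estimate = m / n  # empirical probability of "odd"
print(estimate)   # close to the classical value 1/2
```

As n grows, the ratio m/n settles near 1/2, exactly as the limit definition predicts.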
3. Subjective
Subjective probability is an individual person's measure of belief that an event will occur. With this view
of probability, it makes perfectly good sense intuitively to talk about the probability that the Dow Jones
average will go up tomorrow. You can quite rationally take your subjective view to agree with the
classical or empirical views when they apply, so the subjective perspective can be taken as an expansion
of these other views.
4. Axiomatic
This is a unifying perspective. The coherence conditions needed for subjective probability can be proved
to hold for the classical and empirical definitions. The axiomatic perspective codifies these coherence
conditions, so can be used with any of the above three perspectives.
The axiomatic perspective says that probability is any function (we'll call it P) from events to numbers
satisfying the three conditions (axioms) below. (Just what constitutes events will depend on the situation
where probability is being used.)
The three axioms of probability:
I. 0 ≤ P(E) ≤ 1 for every allowable event E. (In other words, 0 is the smallest allowable probability and 1 is the largest allowable probability.)
II. The certain event (the event containing every possible outcome) has probability 1.
III. The probability of the union of mutually exclusive events is the sum of the probabilities of the individual events. (Two events are called mutually exclusive if they cannot both occur simultaneously. For example, the events "the die comes up 1" and "the die comes up 4" are mutually exclusive, assuming we are talking about the same toss of the same die. The union of events is the event that at least one of the events occurs. For example, if E is the event "a 1 comes up on the die" and F is the event "an even number comes up on the die," then the union of E and F is the event "the number that comes up on the die is either 1 or even.")
If we have a fair die, the axioms of probability require that each number comes up with probability 1/6: Since the die is fair, each number comes up with the same probability. Since the outcomes "1 comes up," "2 comes up," ..., "6 comes up" are mutually exclusive and their union is the certain event, Axiom III says that
P(1 comes up) + P(2 comes up) + ... + P(6 comes up) = P(the certain event), which is 1 (by Axiom II).
Since all six probabilities on the left are equal, that common probability must be 1/6.
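Part (b) of the question asks for the addition and multiplication rules. The additivity axiom above is exactly the addition rule for mutually exclusive events; for independent events, the multiplication rule P(A and B) = P(A) × P(B) applies. Both can be checked by enumerating two fair dice (an illustrative sketch):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of tossing two fair dice
space = list(product(range(1, 7), repeat=2))

def prob(event):
    """Classical probability: favourable outcomes / total outcomes."""
    return Fraction(sum(1 for s in space if event(s)), len(space))

# Addition rule (mutually exclusive events, first die only):
# P(first die is 1 OR first die is 4) = P(1) + P(4)
p_union = prob(lambda s: s[0] in (1, 4))
assert p_union == prob(lambda s: s[0] == 1) + prob(lambda s: s[0] == 4)

# Multiplication rule (independent events):
# P(first die is 1 AND second die is even) = P(first is 1) x P(second even)
p_both = prob(lambda s: s[0] == 1 and s[1] % 2 == 0)
assert p_both == prob(lambda s: s[0] == 1) * prob(lambda s: s[1] % 2 == 0)

print(p_union, p_both)
```

Here the union has probability 1/6 + 1/6 = 1/3, and the joint event has probability 1/6 × 1/2 = 1/12.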

3. a) The procedure of testing hypothesis requires a researcher to adopt several steps. Describe in
brief all such steps.
b) Explain the components of time series
Ans. 3 (a) A statistical hypothesis test is a method of statistical inference using data from a scientific
study. In statistics, a result is called statistically significant if it is unlikely to have occurred by chance alone, according to a pre-determined threshold probability, the significance level. The
phrase "test of significance" was coined by statistician Ronald Fisher.[1] These tests are used in
determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified
level of significance; this can help to decide whether results contain enough information to cast doubt on
conventional wisdom, given that conventional wisdom has been used to establish the null hypothesis. The
critical region of a hypothesis test is the set of all outcomes which cause the null hypothesis to be rejected
in favor of the alternative hypothesis. Statistical hypothesis testing is sometimes called confirmatory data
analysis, in contrast to exploratory data analysis, which may not have pre-specified hypotheses. In the
Neyman-Pearson framework (see below), the process of distinguishing between the null & alternative
hypotheses is aided by identifying two conceptual types of errors (type 1 & type 2), and by specifying
parametric limits on e.g. how much type 1 error will be permitted.
The testing process
In the statistics literature, statistical hypothesis testing plays a fundamental role. The usual line of
reasoning is as follows:

There is an initial research hypothesis of which the truth is unknown.
1. The first step is to state the relevant null and alternative hypotheses. This is important, as misstating the hypotheses will muddy the rest of the process.
2. The second step is to consider the statistical assumptions being made about the sample in doing the test; for example, assumptions about statistical independence or about the form of the distributions of the observations. This is equally important, as invalid assumptions will mean that the results of the test are invalid.
3. Decide which test is appropriate, and state the relevant test statistic T.
4. Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic might follow a Student's t distribution or a normal distribution.
5. Select a significance level (α), a probability threshold below which the null hypothesis will be rejected. Common values are 5% and 1%.
6. The distribution of the test statistic under the null hypothesis partitions the possible values of T into those for which the null hypothesis is rejected, the so-called critical region, and those for which it is not. The probability of the critical region is α.
7. Compute from the observations the observed value t_obs of the test statistic T.
8. Decide either to reject the null hypothesis in favor of the alternative or not to reject it. The decision rule is to reject the null hypothesis H0 if the observed value t_obs is in the critical region, and to accept or "fail to reject" the hypothesis otherwise.
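A minimal sketch of these steps in Python, using only the standard library. The sample data and the null value μ0 are made-up illustrations; the critical value 2.262 is the standard tabulated t value for α = 5% (two-sided) with 9 degrees of freedom:

```python
import math
import statistics

# Hypothetical sample of 10 measurements; H0: population mean mu0 = 5.0
sample = [5.1, 4.9, 5.0, 5.2, 4.8, 5.3, 5.1, 4.7, 5.0, 5.2]
mu0 = 5.0

n = len(sample)
mean = statistics.mean(sample)
s = statistics.stdev(sample)  # sample standard deviation

# Step 7: observed value of the test statistic T (one-sample t-test)
t_obs = (mean - mu0) / (s / math.sqrt(n))

# Steps 5-6: critical region |T| > t_crit for alpha = 5%, df = n - 1 = 9
t_crit = 2.262

# Step 8: decision rule
reject = abs(t_obs) > t_crit
print(t_obs, reject)
```

Here |t_obs| ≈ 0.50 is well inside the acceptance region, so H0 is not rejected.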

Ans. (b) Time series components There are three types of time series patterns.
Trend
A trend exists when there is a long-term increase or decrease in the data. It does not have to be
linear. Sometimes we will refer to a trend changing direction when it might go from an
increasing trend to a decreasing trend.

Seasonal
A seasonal pattern exists when a series is influenced by seasonal factors (e.g., the quarter of the
year, the month, or day of the week). Seasonality is always of a fixed and known period.
Cyclic
A cyclic pattern exists when data exhibit rises and falls that are not of fixed period. The duration
of these fluctuations is usually of at least 2 years.
Many people confuse cyclic behaviour with seasonal behaviour, but they are really quite different. If the
fluctuations are not of fixed period then they are cyclic; if the period is unchanging and associated with
some aspect of the calendar, then the pattern is seasonal. In general, the average length of cycles is longer
than the length of a seasonal pattern, and the magnitude of cycles tends to be more variable than the
magnitude of seasonal patterns.
The following four examples show different combinations of the above components.

Figure 6.1: Four time series exhibiting different types of time series patterns.
R code
library(fpp)  # provides the hsales, ustreas, elec and dj data sets
par(mfrow=c(2,2))  # arrange the four plots in a 2 x 2 grid
plot(hsales,xlab="Year",ylab="Monthly housing sales (millions)")
plot(ustreas,xlab="Day",ylab="US treasury bill contracts")
plot(elec,xlab="Year",ylab="Australian monthly electricity production")
plot(diff(dj),xlab="Day",ylab="Daily change in Dow Jones index")
1. The monthly housing sales (top left) show strong seasonality within each year, as well as some strong cyclic behaviour with a period of about 6-10 years. There is no apparent trend in the data over this period.
2. The US treasury bill contracts (top right) show results from the Chicago market for 100 consecutive
trading days in 1981. Here there is no seasonality, but an obvious downward trend. Possibly, if we had a

much longer series, we would see that this downward trend is actually part of a long cycle, but when
viewed over only 100 days it appears to be a trend.
3. The Australian monthly electricity production (bottom left) shows a strong increasing trend, with strong
seasonality. There is no evidence of any cyclic behaviour here.
4. The daily change in the Dow Jones index (bottom right) has no trend, seasonality or cyclic behaviour.
There are random fluctuations which do not appear to be very predictable, and no strong patterns that
would help with developing a forecasting model.

4. a) What is a Chi-square test? Point out its applications. Under what conditions is this test
applicable?
b) Discuss the types of measurement scales with examples
Ans.4. (a) The Chi squared tests
The tests
The distribution of a categorical variable in a sample often needs to be compared with the distribution of a categorical variable in another sample. For example, over a period of 2 years a psychiatrist has classified by socioeconomic class the women aged 20-64 admitted to her unit suffering from self poisoning (sample A). At the same time she has likewise classified the women of similar age admitted to a gastroenterological unit in the same hospital (sample B). She has employed the Registrar General's five socioeconomic classes, and generally classified the women by reference to their father's or husband's occupation. The results are set out in table 8.1.

The psychiatrist wants to investigate whether the distribution of the patients by social class differed in these two units. She therefore erects the null hypothesis that there is no difference between the two distributions. This is what is tested by the chi squared (χ²) test (pronounced with a hard ch as in "sky"). By default, all χ² tests are two sided.
It is important to emphasise here that χ² tests may be carried out for this purpose only on the actual numbers of occurrences, not on percentages, proportions, means of observations, or other derived statistics. Note, we distinguish here the Greek (χ²) for the test and the distribution and the Roman (x²) for the calculated statistic, which is what is obtained from the test.
The test is carried out in the following steps:
For each observed number (O) in the table find an "expected" number (E); this procedure is discussed below.

To calculate the expected number for each cell of the table consider the null hypothesis, which in this case is that the
numbers in each cell are proportionately the same in sample A as they are in sample B. We therefore construct a
parallel table in which the proportions are exactly the same for both samples. This has been done in columns (2) and
(3) of table 8.2 . The proportions are obtained from the totals column in table 8.1 and are applied to the totals row.
For instance, in table 8.2 , column (2), 11.80 = (22/289) x 155; 24.67 = (46/289) x 155; in column (3) 10.20 =
(22/289) x 134; 21.33 = (46/289) x 134 and so on.
Thus by simple proportions from the totals we find an expected number to match each observed number. The sum of
the expected numbers for each sample must equal the sum of the observed numbers for each sample, which is a
useful check. We now subtract each expected number from its corresponding observed number.
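The expected-number calculation described above can be sketched in Python. The row and column totals are those quoted from tables 8.1 and 8.2; the per-cell observed values themselves are not reproduced here:

```python
# Row totals (socioeconomic classes I-V) and column totals (samples A, B)
row_totals = [22, 46, 73, 91, 57]
col_totals = [155, 134]        # sample A, sample B
grand_total = sum(row_totals)  # 289, which also equals sum(col_totals)

# Expected count for each cell: (row total x column total) / grand total
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Useful check from the text: expected numbers in each column
# must sum to that sample's observed total
assert abs(sum(row[0] for row in expected) - col_totals[0]) < 1e-9
assert abs(sum(row[1] for row in expected) - col_totals[1]) < 1e-9

# First two rows reproduce the worked figures: 11.80, 24.67 (sample A)
# and 10.20, 21.33 (sample B), to two decimal places
print([[round(e, 2) for e in row] for row in expected])
```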

The results are given in columns (4) and (5) of table 8.2. Here two points may be noted.
1. The sum of these differences always equals zero in each column.
2. Each difference for sample A is matched by the same figure, but with opposite sign, for sample B.
Again these are useful checks.
The figures in columns (4) and (5) are then each squared and divided by the corresponding expected numbers in columns (2) and (3). The results are given in columns (6) and (7). Finally these results, (O − E)²/E, are added. The sum of them is x².
A helpful technical procedure in calculating the expected numbers may be noted here. Most electronic calculators
allow successive multiplication by a constant multiplier by a short cut of some kind. To calculate the expected
numbers a constant multiplier for each sample is obtained by dividing the total of the sample by the grand total for
both samples. In table 8.1 for sample A this is 155/289 = 0.5363. This fraction is then successively multiplied by 22,
46, 73, 91, and 57. For sample B the fraction is 134/289 = 0.4636. This too is successively multiplied by 22, 46, 73,
91, and 57.
When a comparison is made between one sample and another, as in table 8.1 , a simple rule is that the degrees of
freedom equal (number of columns minus one) x (number of rows minus one) (not counting the row and column
containing the totals). For the data in table 8.1 this gives (2 - 1) x (5 - 1) = 4. Another way of looking at this is to ask
for the minimum number of figures that must be supplied in table 8.1 , in addition to all the totals, to allow us to
complete the whole table. Four numbers disposed anyhow in samples A and B provided they are in separate rows
will suffice.
Entering Table C at four degrees of freedom and reading along the row, we find that the value of x² (7.147) lies between 3.357 and 7.779. The corresponding probability is 0.10 < P < 0.50. This is well above the conventionally significant level of 0.05, or 5%, so the null hypothesis is not disproved. It is therefore quite conceivable that, in the distribution of the patients between socioeconomic classes, the population from which sample A was drawn was the same as the population from which sample B was drawn.

Ans. 4(b)

Scales of Measurement
One of the most influential distinctions made in measurement was Stevens' (1946, 1957) classification of
scales of measurement. He made the distinction between nominal, ordinal, interval, and ratio scales of
measurement, which are briefly defined below. A more detailed discussion of these scales can be found in
Chapter 4 of the text.
Nominal: Nominal scales are naming scales. They represent categories where there is no basis for
ordering the categories.
Ordinal: Ordinal scales involve categories that can be ordered along a pre-established dimension.
However, we have no way of knowing how different the categories are from one another. We state the
latter property by saying that we do not have equal intervals between the items. Rankings also represent
ordinal scales because we know the order but do not know how different each person is from the next
person.
Interval: Interval scales are very similar to standard numbering scales except that they do not have a true
zero. That means that the distance between successive numbers is equal, but that the number zero does
NOT mean that there is none of the property being measured. Many measures that involve psychological
scales, especially those that use a form of normal standardization (e.g., IQ), are assumed to be interval
scales of measurement.
Ratio: Ratio scales are the easiest to understand because they are numbers as we usually think of them.
The distance between adjacent numbers are equal on a ratio scale and the score of zero on the ratio scale
means that there is none of whatever is being measured. Most ratio scales are counts of things.
The most important reason for making the distinction between these scales of measurement is that it
affects the statistical procedures that you will use in describing and analyzing your data.
In this unit, we will be presenting dozens of examples of measures at each of these levels of measurement,
along with some exercises to help you to refine your understanding of these distinctions. We recommend
that you complete the exercises since the best way to learn anything is to actively process the information
by using it to solve real-life problems.
Examples of Each Scale of Measurement
Listed below are several examples of each scale of measurement. We have focused on general categories
to help illustrate what each of the scales represent. We have tried to provide a wide variety of examples to
help make these distinctions clear for you.
Nominal Scale Examples
diagnostic categories
sex of the participant
classification based on discrete characteristics (e.g., hair color)
group affiliation (e.g., Republican, Democrat, Boy Scout, etc.)
the town people live in
a person's name
an arbitrary identification, including identification numbers that are arbitrary
menu items selected
any yes/no distinctions
most forms of classification (species of animals or type of tree)
location of damage in the brain
Ordinal Scale Examples
any rank ordering
class ranks
social class categories
order of finish in a race
Interval Scale Examples

Scores on scales that are standardized (i.e., with an arbitrary mean and standard deviation, usually
designed to always give a positive score)
Scores on scales that are known to not have a true zero (e.g., most temperature scales except for the
Kelvin Scale)
Scores on measures where it is not clear that zero means none of the trait (e.g., a math test)
Scores on most personality scales based on counting the number of endorsed items
Ratio Scale Examples
Time to complete a task
Number of responses given in a specified time period
Weight of an object
Size of an object
Number of objects detected
Number of errors made in a specified time period
Proportion of responses in a specified category.
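As an illustrative sketch (the mapping of a "strongest" meaningful summary statistic to each scale follows Stevens' classic treatment, not the text's Chapter 4), the four scales and one example of each can be encoded as:

```python
# Each scale of measurement, one example from the lists above, and the
# strongest measure of central tendency that is meaningful for it
scales = {
    "nominal":  {"example": "hair color",              "central_tendency": "mode"},
    "ordinal":  {"example": "order of finish",         "central_tendency": "median"},
    "interval": {"example": "IQ score",                "central_tendency": "mean"},
    "ratio":    {"example": "time to complete a task", "central_tendency": "mean"},
}

# The levels form a hierarchy from weakest to strongest: each level
# permits every statistic allowed at the levels below it
order = ["nominal", "ordinal", "interval", "ratio"]
print(order)
```

This hierarchy is why the scale of measurement constrains which statistical procedures are appropriate for describing and analyzing the data.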

Q.5 (a) Meaning of Business forecasting (b) Objectives of Business forecasting (c)Theories of
Business forecasting
Ans. (a) BUSINESS FORECASTING is an estimate or prediction of future developments in business
such as sales, expenditures, and profits. Given the wide swings in economic activity and the drastic
effects these fluctuations can have on profit margins, it is not surprising that business forecasting has
emerged as one of the most important aspects of corporate planning. Forecasting has become an
invaluable tool for businesspeople to anticipate economic trends and prepare themselves either to benefit
from or to counteract them. If, for instance, businesspeople envision an economic downturn, they can cut
back on their inventories, production quotas, and hirings. If, on the contrary, an economic boom seems
probable, those same businesspeople can take necessary measures to attain the maximum benefit from it.
Good business forecasts can help business owners and managers adapt to a changing economy.
At a minimum, businesses now need annual forecasts. One reason business planners prefer the annual
averages is that sudden changes in the economic climate can play havoc with the quarter-to-quarter
measurements. For instance, during the first half of 1984, a sudden growth spurt in the economy upset
most business forecasts. Spurred to expansiveness by a surging cash flow, businesses added to their stock
of plant and equipment at the fastest rate in five years. Government spending also went up faster than
expected, as did business inventories. That set the stage for the sharp second-half slowdown that included
an increased demand for credit and, consequently, higher interest rates. At the time, few had foreseen the
short-term trend.
Ans. (b)
The objective of business forecasting is to produce better forecasts. But in the broader sense, the objective is to improve organizational performance: more revenue, more profit, increased customer satisfaction. Better forecasts, by themselves, are of no inherent value if those forecasts are ignored by management or otherwise not used to improve organizational performance.
A wonderfully sinister way to improve forecast accuracy (while ignoring more important things like order fill, customer satisfaction, revenue generation, and profit) was provided by Ruud Teunter of Lancaster University at the 2008 International Symposium on Forecasting. Teunter compared various forecasting methods for a data set of 5,000 items having intermittent demand patterns. (Intermittent patterns have zero demand in many or most time periods.)
Teunter found that if the goal is simply to minimize forecast error, then forecasting zero in every period was the best method to use! (The zero forecast had lower error than a moving average, exponential smoothing, bootstrapping, and three variations of Croston's method that were tested.) However, for proper inventory management to serve customer needs, forecasting zero demand every period is probably not the right thing to do.
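The zero-forecast effect is easy to reproduce in a small sketch (the demand series below is made up for illustration; it is not Teunter's data set):

```python
# Intermittent demand: mostly zeros, with an occasional order of 10 units
demand = [0, 0, 0, 10] * 6  # 24 periods

def mae(actual, forecast):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Forecast 1: always predict zero demand
zero_fc = [0] * len(demand)

# Forecast 2: 3-period moving average of past demand (3 warm-up periods)
ma_fc = [sum(demand[t - 3:t]) / 3 for t in range(3, len(demand))]

# Compare over the periods both methods can forecast
zero_mae = mae(demand[3:], zero_fc[3:])
ma_mae = mae(demand[3:], ma_fc)

# On this highly intermittent series the zero forecast "wins" on error,
# even though it is useless for planning inventory
print(zero_mae, ma_mae)
```

The zero forecast achieves the lower MAE, illustrating why minimizing forecast error alone can reward a forecast that serves no business purpose.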
A similar point was made last fall in a Foresight article by Stephan Kolassa and Roland Martin (discussed in "Tumbling Dice"). Using a simple dice-tossing experiment, they showed the implications for bias in commonly used percentage error metrics. What makes this important to management is that if the sole incentive for forecasters is to minimize MAPE, the forecaster could do best by purposely forecasting too low. This, of course, could have bad consequences for inventory management and customer service.
Ans. (c) The Theory of Business Forecasts
Although businesses and governments pay millions of dollars for forecasts, those forecasts are not always
on target, particularly during turbulent economic times. Perhaps one of the worst years on record for
business forecasters was 1982. Experts generally believe that business forecasters, caught up in the
excitement of President Reagan's supply-side economic programs, simply stopped paying attention to
what was really happening. As a result, the 1982 forecasts are among the worst in economic history.

Making accurate business forecasts is most difficult for companies that produce durable goods such as
automobiles or appliances, as well as for companies that supply the basic materials to these industries.
Problems arise because sales of such goods are subject to extreme variations. During the early 1970s,
annual sales of automobiles in the United States increased by 22 percent in one year and declined by 22.5
percent in another. Consequently, the durable goods industries in general, and automobile companies in
particular, have developed especially complex and sophisticated forecasting techniques. In addition to
careful analysis of income trends (based on a general economic forecast), automobile companies, which
are acutely sensitive to competition from imports, underwrite a number of studies of consumer attitudes
and surveys of intentions to purchase automobiles.
The Future of Business Forecasting
Today, many executives are unhappy with the economic forecasts they receive. As a result, they have
fired economists and are paying less attention to macroeconomic forecasts, arguing that these forecasts
cost too much and reveal too little. Instead, they are now leaning more heavily on their own rough-and-ready indicators of what is likely to happen to their businesses and industries. When they do consult
economists, they increasingly send them into the field with line managers to forecast the particulars that
really matter.
Executives are now exploring other means of forecasting the business future. Some watch the growth of
the Gross National Product (GNP). Disposable personal income is another broad measure that suffices,
particularly in retailing. By observing whether economic indicators rise or fall, executives can more
accurately predict their retail sales picture in six months or a year.
For many companies, however, no single indicator works to predict the future. Some might use the
monthly consumer confidence index or study the stock market with regard to certain companies.
Depending on the circumstances, interest rates may have a bearing on the future. High or low rates may
determine whether the consumer will be in the market to buy or just keep looking at certain products such
as cars, boats, houses, and other big-ticket items. Many companies are taking one or more basic indicators
and building them into economic models tailor-made for specific industries and markets.
Scenario Planning versus Business Forecasting
In the 1990s, economists developed new methods of business forecasting that rest more on hard data and
less on theoretical assumptions. They acknowledge that the economy is dynamic and volatile, and have
tried to keep in mind that all forecasts, however sophisticated, are greatly simplified representations of
reality that will likely be incorrect in some respects.
One of the newer forecasting techniques is called "scenario forecasting." More businesses are using the
scenario method to devise their "strategic direction." In scenario forecasting, companies develop scenarios
to identify major changes that could happen in the world and determine the possible effects those changes
will have on their operations. They then map out ways in which to react if those occurrences come to
pass, hoping that the hypothetical exercise will make them better prepared to take action when a real
economic crisis takes place.
One of the biggest reasons to use the scenario method is that traditional forecasting does not keep up with
the lightning-quick pace at which modern business moves. Where change could once be anticipated over
a period of time, the advent of sophisticated technology, which by itself is ever changing, has shown
businesspeople that they need a new way of looking at and thinking about the economic future.

Q.6. (a) Meaning and Assumptions


(b) Formulas/Calculation/Solution to the problem
Ans.6 (a) Assumptions are accepted cause-and-effect relationships, or estimates of the existence of a fact
from the known existence of other fact(s). Although useful in providing a basis for action and in creating
"what if" scenarios to simulate different realities or possible situations, assumptions are dangerous when
accepted as reality without thorough examination. See also critical thinking and rule of thumb.
Ans. 6(b)
When you enter an equation into the calculator, the calculator will begin by expanding (simplifying) the
problem. Then it will attempt to solve the equation by using one or more of the following: addition,
subtraction, division, taking the square root of each side, factoring, and completing the square.
Variables
Any lowercase letter may be used as a variable.
Exponents
Exponents are supported on variables using the ^ (caret) symbol. For example, to express x², enter x^2.
Note: exponents must be positive integers, no negatives, decimals, or variables. Exponents may not
currently be placed on numbers, brackets, or parentheses.
Parentheses and Brackets
Parentheses ( ) and brackets [ ] may be used to group terms as in a standard equation or expression.
Multiplication, Addition, and Subtraction
For addition and subtraction, use the standard + and - symbols respectively. For multiplication, use the *
symbol. A * symbol is not necessary when multiplying a number by a variable. For instance: 2 * x can
also be entered as 2x. Similarly, 2 * (x + 5) can also be entered as 2(x + 5); 2x * (5) can be entered as
2x(5). The * is also optional when multiplying with parentheses, example: (x + 1)(x - 1).
Order of Operations
The calculator follows the standard order of operations taught by most algebra books - Parentheses,
Exponents, Multiplication and Division, Addition and Subtraction. The only exception is that division is
not currently supported; attempts to use the / symbol will result in an error.
Division, Square Root, Radicals, Fractions
The above features are not supported at this time. A future release will add this functionality.
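The implicit-multiplication rule described above (2x for 2 * x, 2(x + 5) for 2 * (x + 5), and so on) amounts to a simple text normalization before parsing. The following is a rough sketch of how such a rule could be implemented; the function name and regular expressions are illustrative assumptions, not the calculator's actual code.

```python
# Normalize implicit multiplication: insert '*' between a number and a
# variable or parenthesis, between a variable and an opening parenthesis,
# and between a closing parenthesis and whatever follows it.
import re

def insert_implicit_mul(expr):
    expr = re.sub(r'(\d)\s*([a-z(])', r'\1*\2', expr)   # 2x -> 2*x, 2( -> 2*(
    expr = re.sub(r'([a-z)])\s*(\()', r'\1*\2', expr)   # x( -> x*(, )( -> )*(
    expr = re.sub(r'(\))\s*([a-z\d])', r'\1*\2', expr)  # )x -> )*x, )2 -> )*2
    return expr

print(insert_implicit_mul('2x(5)'))          # → 2*x*(5)
print(insert_implicit_mul('(x + 1)(x - 1)')) # → (x + 1)*(x - 1)
```

After this pass, the expression contains only explicit operators and can be handed to an ordinary expression parser.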
The Discriminant
For y = 3x² + 4x + 5, the discriminant is b² − 4(a)(c) = 4² − 4(3)(5) = 16 − 60 = −44 (negative, so 2
imaginary solutions).
The Work
x = (−b ± √(b² − 4(a)(c))) / (2a) = (−4 ± √(−44)) / 6 = (−4 ± 2i√11) / 6 = (−2 ± i√11) / 3
The Actual Solutions
x = −0.6666666666666666 + 1.1055415967851332i and x = −0.6666666666666666 − 1.1055415967851332i
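The worked example for y = 3x² + 4x + 5 can be checked numerically with Python's standard cmath module, which handles the square root of a negative discriminant; this short verification sketch is an addition for illustration, not part of the calculator itself.

```python
# Verify the quadratic-formula example for y = 3x^2 + 4x + 5.
import cmath

a, b, c = 3, 4, 5
disc = b**2 - 4*a*c          # -44: negative, so two complex conjugate roots
root = cmath.sqrt(disc)      # principal square root of a negative number
x1 = (-b + root) / (2*a)
x2 = (-b - root) / (2*a)
print(disc)                  # → -44
print(x1)                    # about -0.667 + 1.106i
print(x2)                    # about -0.667 - 1.106i
```

Both roots match the hand-derived form x = (−2 ± i√11) / 3.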
