
# Break-Even Analysis

Listed below is a glossary of terms used by businesses when talking about the concepts involved in break-even analysis.

## Definitions used in Break-Even Analysis:

Fixed Cost: The sum of all costs required to produce the first unit of a product. This amount does not vary as production increases or decreases, until new capital expenditures are needed.

Variable Unit Cost: Costs that vary directly with the production of one additional unit.

Expected Unit Sales: The number of units of the product projected to be sold over a specific period of time.

Unit Price: The amount of money charged to the customer for each unit of a product or service.

Total Variable Cost: The product of expected unit sales and variable unit cost.
(Expected Unit Sales * Variable Unit Cost)

Total Cost: The sum of the fixed cost and total variable cost for any given level of production.
(Fixed Cost + Total Variable Cost )

Total Revenue: The product of expected unit sales and unit price.
(Expected Unit Sales * Unit Price )

Profit (or Loss): The monetary gain (or loss) resulting from revenues after subtracting all associated costs.
(Total Revenue - Total Costs)

Break Even: Number of units that must be sold in order to produce a profit of zero (but will recover all associated costs).
(Break Even = Fixed Cost / (Unit Price - Variable Unit Cost))
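The glossary formulas above can be captured in a minimal Python sketch (the numbers below are made up purely for illustration):

```python
def break_even_units(fixed_cost, unit_price, variable_unit_cost):
    """Number of units at which profit is exactly zero."""
    contribution_margin = unit_price - variable_unit_cost  # profit earned per unit
    return fixed_cost / contribution_margin

def profit(fixed_cost, unit_price, variable_unit_cost, unit_sales):
    """Profit (or loss) = total revenue - total cost."""
    total_variable_cost = unit_sales * variable_unit_cost
    total_cost = fixed_cost + total_variable_cost
    total_revenue = unit_sales * unit_price
    return total_revenue - total_cost

be = break_even_units(5000.0, 10.0, 6.0)
print(be)                               # 1250.0
print(profit(5000.0, 10.0, 6.0, be))    # 0.0 -- selling exactly this many breaks even
```

Note that profit evaluated at the break-even volume is zero by construction, which is a quick sanity check on the two formulas.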

## Break-Even Analysis

Break-even analysis depends on the following variables:

1. The fixed production costs for a product.
2. The variable production costs for a product.
3. The product's unit price.
4. The product's expected unit sales (sometimes called projected sales).

On the surface, break-even analysis is a tool to calculate at which sales volume the variable and fixed costs of producing your product will be recovered. Another way to look at it is that the break-even point is the point at which your product stops costing you money to produce and sell, and starts to generate a profit for your company. You can also use break-even analysis to solve managerial problems:

- setting price levels
- targeting optimal variable/fixed cost combinations
- determining the financial attractiveness of different strategic options for your company

## Using the Break-Even Calculator

Imagine that you are an entrepreneur at a location in the United States. You are planning to enter the gourmet soy-based burger market. Using break-even analysis, here is what you want to know...

At what volume of burger sales will you start to make money? One of your MBA partners developed an expected unit sales forecast. You want to compare this 18-month forecast of 150,000 units to the volume of burgers you will have to sell in order to break even.

You are forecasting 18-month sales because that is your banker's deadline for showing a profit. If you are not making money in 18 months, your banker may call your loan, and you would be facing bankruptcy.

Here is what you know now: The variable unit cost for making one burger is \$0.97. The fixed cost of making burgers for 18 months will be a total of \$140,000. Remember, fixed costs cover things like your rent, your phone bill, and insurance coverage; these items tend not to vary in amount per month over the term of one year. Your MBA partner has forecast expected unit sales of 150,000 burgers in 18 months. The unit price you are projecting for the burger is \$1.99. This is your best estimate of what the average consumer will pay for your soy-burger.

If you charge \$1.99 for your burger, how many burgers will you have to sell before you make back your total cost: \$140,000 + (150,000 burgers x \$0.97)? Enter the variables into the Break-Even Calculator. Then click Calculate.
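In place of the calculator, the same result follows from the break-even formula in the glossary (a quick sketch, not the actual tool):

```python
fixed_cost = 140_000.00       # 18 months of fixed costs
variable_unit_cost = 0.97     # cost to make one burger
unit_price = 1.99             # projected selling price

break_even_units = fixed_cost / (unit_price - variable_unit_cost)
print(round(break_even_units))   # about 137,255 burgers
```

This is the roughly 137,000-unit figure discussed below.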

Discussion... I don't know about you, but I am uncomfortable with a break-even volume of 137,000 units when expected unit sales are only 150,000 units. Wouldn't you be? What if your colleague overestimated the demand? What if the economy slows down? Sales forecasts for new products are notoriously inaccurate, and giving yourself less than a 10% margin for error seems risky. If your banker is willing to wait longer than 18 months to see a profit, no problem. But if you think that the banker will call your loan, you may want to consider a different pricing strategy. Lucky for you, another MBA partner has been doing additional research and discovered that because Clevelanders are concerned with their health, they are willing to pay up to \$2.79 for a gourmet burger of the top quality you propose.

Use the break-even calculator once again. What is the break even point when the unit price is raised to \$2.79?

Here is what you know now: The variable unit cost for making one burger is \$0.97. The unit price you think you might sell the burger for is \$2.79. The fixed cost of making burgers for 18 months will be a total of \$140,000. If you charge \$2.79 for your burger, how many burgers will you have to sell before you make back your total cost?

At a unit price of \$2.79, our break-even volume on burgers sold drops to 76,900 units. You should be able to sell this many burgers in nine to eleven months if your forecasts are accurate to +/- 20%. This is going to make your banker happier!

If you have the time, try some other alternatives with our Break Even calculator.

Lower your variable cost: try \$0.80 as the variable unit cost to produce one unit and see what happens. Or adjust your fixed costs: try \$100,000 as your fixed cost over 18 months. What could you do to lower your fixed costs, if you select this strategy?
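A small loop makes it easy to compare these what-if scenarios side by side (the figures are the ones suggested above):

```python
def break_even(fixed_cost, unit_price, variable_unit_cost):
    return fixed_cost / (unit_price - variable_unit_cost)

scenarios = [
    ("base case",           140_000, 1.99, 0.97),
    ("higher price",        140_000, 2.79, 0.97),
    ("lower variable cost", 140_000, 2.79, 0.80),
    ("lower fixed cost",    100_000, 2.79, 0.97),
]

for name, fc, price, vc in scenarios:
    print(f"{name}: {break_even(fc, price, vc):,.0f} units")
```

The base case reproduces the roughly 137,255-unit figure, and the higher price drops it to about 76,923 units, matching the discussion above.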

Each time you change a parameter in Break-Even Analysis, the break-even volume changes, and so does your risk/profit profile. By now you know what we are driving at - all these factors can be controlled by managers! Therefore, each can be the focal variable in a break-even analysis. Next time you face a case where break-even analysis is applicable, use our calculator to assist you!


## In unit sales

If the product can be sold in a larger quantity than occurs at the break-even point, then the firm will make a profit; below this point, a loss. The break-even quantity is calculated by:

Total fixed costs / (selling price - average variable costs)

Explanation: in the denominator, "price minus average variable cost" is the variable profit per unit, or contribution margin of each unit that is sold. This relationship is derived from the profit equation:

Profit = Revenues - Costs

where Revenues = (selling price * quantity of product) and Costs = (average variable costs * quantity) + total fixed costs. Therefore,

Profit = (selling price * quantity) - (average variable costs * quantity + total fixed costs)

Setting Profit equal to zero and solving for quantity, the quantity of product at break-even is Total fixed costs / (selling price - average variable costs).

Firms may still decide not to sell low-profit products, for example those not fitting well into their sales mix. Firms may also sell products that lose money, as a loss leader or to offer a complete line of products. But if a product does not break even, or a potential product looks like it clearly will not sell better than the break-even point, then the firm will not sell, or will stop selling, that product. An example:

Assume we are selling a product for \$2 each. Assume that the variable cost associated with producing and selling the product is 60 cents. Assume that the fixed cost related to the product (the basic costs that are incurred in operating the business even if no product is produced) is \$1000. In this example, the firm would have to sell 1000 / (2.00 - 0.60) = 714.3, or 715 whole units, to break even. At exactly the break-even volume, the margin of safety is nil: the firm neither makes a profit nor incurs a loss.

Break Even = FC / (SP - VC), where FC is fixed cost, SP is selling price, and VC is variable cost.
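The example can be checked numerically; since only whole units can be sold, the exact quotient is rounded up:

```python
import math

fixed_cost = 1000.0
selling_price = 2.00
variable_cost = 0.60

exact = fixed_cost / (selling_price - variable_cost)
units_to_break_even = math.ceil(exact)   # can't sell a fraction of a unit
print(exact)                  # about 714.29
print(units_to_break_even)    # 715
```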

## Definition

A breakeven analysis is used to determine how much sales volume your business needs to start making a profit. The breakeven analysis is especially useful when you're developing a pricing strategy, either as part of a marketing plan or a business plan. To conduct a breakeven analysis, use this formula:

Fixed Costs divided by (Revenue per unit - Variable costs per unit)

Fixed costs are costs that must be paid whether or not any units are produced. These costs are "fixed" over a specified period of time or range of production. Variable costs are costs that vary directly with the number of products produced. For instance, the cost of the materials needed and the labour used to produce units isn't always the same.

For example, suppose that your fixed costs for producing 100,000 widgets were \$30,000 a year. Your variable costs are \$2.20 materials, \$4.00 labour, and \$0.80 overhead, for a total of \$7.00. If you choose a selling price of \$12.00 for each widget, then:

\$30,000 divided by (\$12.00 - \$7.00) equals 6,000 units.

This is the number of widgets that have to be sold at a selling price of \$12.00 before your business will start to make a profit.

Example: Alison used a breakeven analysis to determine what prices she should set for her software products.
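The widget arithmetic above, as a quick check:

```python
fixed_costs = 30_000.00
materials, labour, overhead = 2.20, 4.00, 0.80
variable_cost_per_unit = materials + labour + overhead   # 7.00 per widget
selling_price = 12.00

units = fixed_costs / (selling_price - variable_cost_per_unit)
print(units)   # 6000.0 widgets to break even
```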

## Lac Leman Case Regression Model

SUMMARY OUTPUT

ANOVA

| Source     | df | SS            | MS            | F          | Significance F |
|------------|----|---------------|---------------|------------|----------------|
| Regression | 2  | 1,470,051,228 | 735,025,614.2 | 52.4201587 | 1.17363E-06    |
| Residual   | 12 | 168,261,744.9 | 14,021,812.07 |            |                |
| Total      | 14 | 1,638,312,973 |               |            |                |

Estimated Sat Attendance = 6812.78 + 1.19*Fri Attendance - 8636.35*Rain Indicator

## Standard error (statistics)

The standard error of a method of measurement or estimation is the estimated standard deviation of the error in that method. Specifically, it estimates the standard deviation of the difference between the measured or estimated values and the true values. Notice that the true value of the standard deviation is usually unknown, and the use of the term standard error carries with it the idea that an estimate of this unknown quantity is being used. It also carries with it the idea that it measures not the standard deviation of the estimate itself but the standard deviation of the error in the estimate, and these are very different.

In applications where a standard error is used, it would be good to be able to take proper account of the fact that the standard error is only an estimate. Unfortunately this is not often possible, and it may then be better to use an approach that avoids using a standard error, for example by using maximum likelihood or a more formal approach to deriving confidence intervals. One well-known case where a proper allowance can be made arises where Student's t-distribution is used to provide a confidence interval for an estimated mean or difference of means. In other cases, the standard error may usefully be used to provide an indication of the size of the uncertainty, but its formal or semi-formal use to provide confidence intervals or tests should be avoided unless the sample size is at least moderately large. Here "large enough" would depend on the particular quantities being analysed.

## Standard error of the mean

Figure: expected error in the mean of A for a sample of n data points with sample bias coefficient ρ. The unbiased standard error plots as the ρ = 0 line with log-log slope -1/2.

The standard error of the mean (SEM), an unbiased estimate of expected error in the sample estimate of a population mean, is the sample estimate of the population standard deviation (sample standard deviation) divided by the square root of the sample size (assuming statistical independence of the values in the sample):

SE = s / √n

where s is the sample standard deviation (i.e. the sample-based estimate of the standard deviation of the population), and n is the size (number of items) of the sample.

A practical result: decreasing the uncertainty in your mean value estimate by a factor of two requires that you acquire four times as many samples. Worse, decreasing standard error by a factor of ten requires a hundred times as many samples. This estimate may be compared with the formula for the true standard deviation of the mean:

σ_mean = σ / √n

where σ is the standard deviation of the population.

Note: standard error may also be defined as the standard deviation of the residual error term (Kenney and Keeping, p. 187; Zwillinger 1995, p. 626). If values of the measured quantity A are not statistically independent but have been obtained from known locations in parameter space x, an unbiased estimate of error in the mean may be obtained by multiplying the standard error above by the square root of (1 + (n − 1)ρ) / (1 − ρ), where the sample bias coefficient ρ is the average of the autocorrelation coefficient ρ_AA[x] (a quantity between −1 and 1) over all sample point pairs.
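The 1/√n scaling noted above is easy to see numerically; here is a small sketch with simulated data (the population standard deviation of 10 is arbitrary):

```python
import math
import random
import statistics

def sem(sample):
    """Standard error of the mean: sample std dev over sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

random.seed(0)
population_sd = 10.0
for n in (100, 400, 10_000):
    sample = [random.gauss(0, population_sd) for _ in range(n)]
    print(n, round(sem(sample), 3))
# quadrupling n roughly halves the standard error;
# 100x as many samples shrinks it about tenfold
```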

## Assumptions and usage

If the data are assumed to be normally distributed, quantiles of the normal distribution and the sample mean and standard error can be used to calculate confidence intervals for the mean. The following expressions can be used to calculate the upper and lower 95% confidence limits, where x̄ is the sample mean, SE is the standard error for the sample mean, and 1.96 is the 0.975 quantile of the normal distribution:

Upper 95% limit = x̄ + 1.96 × SE
Lower 95% limit = x̄ − 1.96 × SE
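A minimal sketch of these limits in Python (the sample values are made up for illustration):

```python
import math
import statistics

sample = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 4.7]
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean

lower = mean - 1.96 * se
upper = mean + 1.96 * se
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```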

In particular, the standard error of a sample statistic (such as the sample mean) is the estimated standard deviation of the error in the process by which it was generated. In other words, it is the standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or S.E. Standard errors provide simple measures of uncertainty in a value and are often used because:

- If the standard error of several individual quantities is known, then the standard error of some function of the quantities can be easily calculated in many cases;
- Where the probability distribution of the value is known, it can be used to calculate an exact confidence interval;
- Where the probability distribution is unknown, relationships like Chebyshev's inequality or the Vysochanski-Petunin inequality can be used to calculate a conservative confidence interval; and
- As the sample size tends to infinity, the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal.

## Margin of error

The top portion of this graphic depicts probability densities that show the relative likelihood that the "true" percentage is in a particular area given a reported percentage of 50%. The bottom portion shows the 95% confidence intervals (horizontal line segments), the corresponding margins of error (on the left), and sample sizes (on the right). In other words, for each sample size, one is 95% confident that the "true" percentage is in the region indicated by the corresponding segment. The larger the sample is, the smaller the margin of error is.

The margin of error is a statistic expressing the amount of random sampling error in a survey's results. The larger the margin of error, the less faith one should have that the poll's reported results are close to the "true" figures; that is, the figures for the whole population.


## Explanation
The margin of error is usually defined as the "radius" (or half the width) of a confidence interval for a particular statistic from a survey. One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%.

The margin of error has been described as an "absolute" quantity, equal to a confidence interval radius for the statistic. For example, if the true value is 50 percentage points, and the statistic has a confidence interval radius of 5 percentage points, then we say the margin of error is 5 percentage points. As another example, if the true value is 50 people, and the statistic has a confidence interval radius of 5 people, then we might say the margin of error is 5 people.

In some cases, the margin of error is not expressed as an "absolute" quantity; rather it is expressed as a "relative" quantity. For example, suppose the true value is 50 people, and the statistic has a confidence interval radius of 5 people. If we use the "absolute" definition, the margin of error would be 5 people. If we use the "relative" definition, then we express this absolute margin of error as a percent of the true value. So in this case, the absolute margin of error is 5 people, but the "percent relative" margin of error is 10% (because 5 people are ten percent of 50 people). Often the distinction is not explicitly made, yet it is usually apparent from context.

Like confidence intervals, the margin of error can be defined for any desired confidence level, but usually a level of 90%, 95% or 99% is chosen (typically 95%). This level is the probability that a margin of error around the reported percentage would include the "true" percentage.
Along with the confidence level, the sample design for a survey, and in particular its sample size, determines the magnitude of the margin of error. A larger sample size produces a smaller margin of error, all else remaining equal. If exact confidence intervals are used, then the margin of error takes into account both sampling error and non-sampling error. If an approximate confidence interval is used (for example, by assuming the distribution is normal and then modeling the confidence interval accordingly), then the margin of error may only take random sampling error into account. It does not represent other potential sources of error or bias, such as a non-representative sample design, poorly phrased questions, people lying or refusing to respond, the exclusion of people who could not be contacted, or miscounts and miscalculations.

## Concept
## Running example

A running example from the 2004 U.S. presidential campaign will be used to illustrate concepts throughout this article. According to an October 2, 2004 survey by Newsweek, 47% of registered voters would vote for John Kerry/John Edwards if the election were held on that day, 45% would vote for George W. Bush/Dick Cheney, and 2% would vote for Ralph Nader/Peter Camejo. The size of the sample was 1,013. Unless otherwise stated, the remainder of this article uses a 95% level of confidence.
## Basic concept

Polls typically involve taking a sample from a certain population. In the case of the Newsweek poll, the population of interest is the population of people who will vote. Because it is impractical to poll everyone who will vote, pollsters take smaller samples that are intended to be representative, that is, a random sample of the population. It is possible that pollsters sample 1,013 voters who happen to vote for Bush when in fact the population is evenly split between Bush and Kerry, but this is extremely unlikely (p = 2^−1013 ≈ 1.1 × 10^−305) given that the sample is random. Sampling theory provides methods for calculating the probability that the poll results differ from reality by more than a certain amount, simply due to chance; for instance, that the poll reports 47% for Kerry but his support is actually as high as 50%, or is really as low as 44%. This theory and some Bayesian assumptions suggest that the "true" percentage will probably be fairly close to 47%. The more people that are sampled, the more confident pollsters can be that the "true" percentage is close to the observed percentage. The margin of error is a measure of how close the results are likely to be. However, the margin of error only accounts for random sampling error, so it is blind to systematic errors that may be introduced by non-response or by interactions between the survey and subjects' memory, motivation, communication and knowledge.
## Calculations assuming random sampling

This section will briefly discuss the standard error of a percentage, the corresponding confidence interval, and connect these two concepts to the margin of error. For simplicity, the calculations here assume the poll was based on a simple random sample from a large population. The standard error of a reported proportion or percentage p measures its accuracy, and is the estimated standard deviation of that percentage. It can be estimated from just p and the sample size, n, if n is small relative to the population size, using the following formula:

Standard error = √( p(1 − p) / n )

When the sample is not a simple random sample from a large population, the standard error and the confidence interval must be estimated through more advanced calculations. In most cases, the true confidence interval is approximated by assuming the distribution is normal and computing the interval accordingly. For normal distributions, the confidence interval radii are proportional to the standard error. Usually, the true standard error is unknown, so an estimate's standard error is calculated from the sample data.

Note that there is not necessarily a strict connection between the true confidence interval and the true standard error. The true p percent confidence interval is the interval [a, b] that contains p percent of the distribution, where (100 − p)/2 percent of the distribution lies below a, and (100 − p)/2 percent of the distribution lies above b. The true standard error of the statistic is the square root of the true sampling variance of the statistic. These two may not be directly related, although in general, for large distributions that look like normal curves, there is a direct relationship.

In the Newsweek poll, Kerry's level of support p = 0.47 and n = 1,013. The standard error (0.016 or 1.6%) helps to give a sense of the accuracy of Kerry's estimated percentage (47%). A Bayesian interpretation of the standard error is that although we do not know the "true" percentage, it is highly likely to be located within two standard errors of the estimated percentage (47%). The standard error can be used to create a confidence interval within which the "true" percentage should be, to a certain level of confidence. The estimated percentage plus or minus its margin of error is a confidence interval for the percentage. In other words, the margin of error is half the width of the confidence interval.
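The Newsweek figures can be reproduced directly from the standard-error formula:

```python
import math

p = 0.47      # Kerry's reported support
n = 1013      # poll sample size

se = math.sqrt(p * (1 - p) / n)
margin_95 = 1.96 * se                      # radius of the 95% confidence interval
print(round(se, 3))                        # 0.016
print(f"95% CI: {p - margin_95:.3f} to {p + margin_95:.3f}")
```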
It can be calculated as a multiple of the standard error, with the factor depending on the level of confidence desired; a margin of one standard error gives a 68% confidence interval, while the estimate plus or minus 1.96 standard errors is a 95% confidence interval, and a 99% confidence interval runs 2.58 standard errors on either side of the estimate.
## Definition

The margin of error for a particular statistic of interest is usually defined as the radius (or half the width) of the confidence interval for that statistic. The term can also be used to mean sampling error in general. In media reports of poll results, the term usually refers to the maximum margin of error for any percentage from that poll.
## Maximum margin of error

The maximum margin of error for any percentage is the radius of the confidence interval when p = 50%. As such, it can be calculated directly from the number of poll respondents. For 95% confidence, assuming a simple random sample from a large population:

Maximum margin of error = 1.96 × √( 0.5 × 0.5 / n ) = 0.98 / √n

This calculation gives a margin of error of 3% for the Newsweek poll, which reported a margin of error of 4%. The difference was probably due to weighting or complex features of the sampling design that required alternative calculations for the standard error. It is also possible that Newsweek have rounded conservatively to avoid overstating the confidence of their results.
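The 3% figure for the Newsweek poll follows directly:

```python
import math

n = 1013                               # Newsweek poll respondents
max_moe = 0.98 / math.sqrt(n)          # maximum margin of error at 95% confidence
print(f"{max_moe:.1%}")                # about 3.1%
```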
## Different confidence levels

For a simple random sample from a large population, the maximum margin of error is a simple re-expression of the sample size n. The numerators of these equations are rounded to two decimal places.

Margin of error at 99% confidence = 1.29 / √n
Margin of error at 95% confidence = 0.98 / √n
Margin of error at 90% confidence = 0.82 / √n

If an article about a poll does not report the margin of error, but does state that a simple random sample of a certain size was used, the margin of error can be calculated for a desired degree of confidence using one of the above formulae. Also, if the 95% margin of error is given, one can find the 99% margin of error by increasing the reported margin of error by about 30%.
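These re-expressions are easy to apply; for instance, converting a reported 95% margin of error to the 99% level:

```python
import math

def margin_of_error(n, numerator=0.98):
    """Maximum margin of error for a simple random sample of size n."""
    return numerator / math.sqrt(n)

n = 1013
moe_95 = margin_of_error(n, 0.98)
moe_99 = margin_of_error(n, 1.29)
print(round(moe_99 / moe_95, 2))   # about 1.32: the 99% margin is ~30% larger
```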
## Maximum and specific margins of error

While the margin of error typically reported in the media is a poll-wide figure that reflects the maximum sampling variation of any percentage based on all respondents from that poll, the term margin of error also refers to the radius of the confidence interval for a particular statistic. The margin of error for a particular individual percentage will usually be smaller than the maximum margin of error quoted for the survey. This maximum only applies when the observed percentage is 50%, and the margin of error shrinks as the percentage approaches the extremes of 0% or 100%. In other words, the maximum margin of error is the radius of a 95% confidence interval for a reported percentage of 50%. If p moves away from 50%, the confidence interval for p will be shorter. Thus, the maximum margin of error represents an upper bound to the uncertainty; one is at least 95% certain that the "true" percentage is within the maximum margin of error of a reported percentage for any reported percentage.

## Effect of population size

The formulae above for the margin of error assume that there is an infinitely large population and thus do not depend on the size of the population of interest. According to sampling theory, this assumption is reasonable when the sampling fraction is small. The margin of error for a particular sampling method is essentially the same regardless of whether the population of interest is the size of a school, city, state, or country, as long as the sampling fraction is less than 5%. In cases where the sampling fraction exceeds 5%, analysts can adjust the margin of error using a "finite population correction" (FPC) to account for the added precision gained by sampling a larger percentage of the population. The FPC can be calculated using the formula:

FPC = √( (N − n) / (N − 1) )

To adjust for a large sampling fraction, the FPC is factored into the calculation of the margin of error, which has the effect of narrowing the margin of error. The FPC approaches zero as the sample size (n) approaches the population size (N), which has the effect of eliminating the margin of error entirely. This makes intuitive sense because when N = n, the sample becomes a census and sampling error becomes moot. Analysts should be mindful that the sample remain truly random as the sampling fraction grows, lest sampling bias be introduced.
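A sketch of the correction for a small population (the population and sample sizes are illustrative):

```python
import math

def fpc(N, n):
    """Finite population correction factor."""
    return math.sqrt((N - n) / (N - 1))

N, n = 2000, 500                    # sampling fraction of 25%, well above 5%
moe = 0.98 / math.sqrt(n)           # uncorrected 95% maximum margin of error
print(round(moe, 4))                # about 0.0438
print(round(moe * fpc(N, n), 4))    # narrower after correction
print(fpc(N, N))                    # 0.0: a census has no sampling error
```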
## Other statistics

Confidence intervals can be calculated, and so can margins of error, for a range of statistics including individual percentages, differences between percentages, averages, medians and totals. The margin of error for the difference between two percentages is larger than the margins of error for each of these percentages, and may even be larger than the maximum margin of error for any individual percentage from the survey.

## Comparing percentages

In a plurality voting system, it is important to know who is ahead. The terms "statistical tie" and "statistical dead heat" are sometimes used to describe reported percentages that differ by less than a margin of error, but these terms can be misleading. For one thing, the margin of error as generally calculated is applicable to an individual percentage and not the difference between percentages, so the difference between two percentage estimates may not be statistically significant even when they differ by more than the reported margin of error. The survey results also often provide strong information even when there is not a statistically significant difference.

When comparing percentages, it can accordingly be useful to consider the probability that one percentage is higher than another. In simple situations, this probability can be derived with 1) the standard error calculation introduced earlier, 2) the formula for the variance of the difference of two random variables, and 3) an assumption that if anyone does not choose Kerry they will choose Bush, and vice versa; that is, they are perfectly negatively correlated. This may not be a tenable assumption when there are more than two possible poll responses. For more complex survey designs, different formulas for calculating the standard error of difference must be used. The standard error of the difference of percentages p for Kerry and q for Bush, assuming that they are perfectly negatively correlated, follows:

Standard error of difference = √( (p + q − (p − q)²) / n )

Given the observed percentage difference p − q (2% or 0.02) and the standard error of the difference calculated above (0.03), any statistical calculator may be used to calculate the probability that a sample from a normal distribution with mean 0.02 and standard deviation 0.03 is greater than 0.
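That probability can be computed with the normal CDF, using only the standard library:

```python
import math

p, q, n = 0.47, 0.45, 1013          # Kerry, Bush, sample size

se_diff = math.sqrt((p + q - (p - q) ** 2) / n)
diff = p - q

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Probability that the true difference favours Kerry (is greater than 0)
prob_kerry_ahead = 1 - normal_cdf(0, mu=diff, sigma=se_diff)
print(round(se_diff, 3))            # about 0.03
print(round(prob_kerry_ahead, 2))   # about 0.75
```

So under these assumptions, one would be roughly 75% confident that Kerry was genuinely ahead, which is far short of the usual 95% standard.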