
# Capital asset pricing model

An estimation of the CAPM and the Security Market Line (purple) for the Dow Jones Industrial Average over 3 years of monthly data.

In finance, the capital asset pricing model (CAPM) is used to determine a theoretically appropriate required rate of return of an asset, if that asset is to be added to an already well-diversified portfolio, given that asset's non-diversifiable risk. The model takes into account the asset's sensitivity to non-diversifiable risk (also known as systematic risk or market risk), often represented by the quantity beta (β) in the financial industry, as well as the expected return of the market and the expected return of a theoretical risk-free asset. The model was introduced by Jack Treynor (1961, 1962),[1] William Sharpe (1964), John Lintner (1965a,b) and Jan Mossin (1966) independently, building on the earlier work of Harry Markowitz on diversification and modern portfolio theory. Sharpe, Markowitz and Merton Miller jointly received the Nobel Memorial Prize in Economics for this contribution to the field of financial economics.

##  The formula

The Security Market Line, seen here in a graph, describes a relation between the beta and the asset's expected rate of return. The CAPM is a model for pricing an individual security or a portfolio. For individual securities, we make use of the security market line (SML) and its relation to expected return and systematic risk (beta) to show how the market must price individual securities in relation to their security risk class. The SML enables us to calculate the reward-to-risk ratio for any security in relation to that of the overall market. Therefore, when the expected rate of return for any security is deflated by its beta coefficient, the reward-to-risk ratio for any individual security in the market is equal to the market reward-to-risk ratio, thus:

$$\frac{E(R_i) - R_f}{\beta_i} = E(R_m) - R_f$$

The market reward-to-risk ratio is effectively the market risk premium, and by rearranging the above equation and solving for E(Ri), we obtain the capital asset pricing model (CAPM):

$$E(R_i) = R_f + \beta_i \left( E(R_m) - R_f \right)$$

where:

- $E(R_i)$ is the expected return on the capital asset
- $R_f$ is the risk-free rate of interest, such as interest arising from government bonds
- $\beta_i$ (the beta) is the sensitivity of the expected excess asset returns to the expected excess market returns, that is, $\beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)}$
- $E(R_m)$ is the expected return of the market
- $E(R_m) - R_f$ is sometimes known as the market premium (the difference between the expected market rate of return and the risk-free rate of return)
- $E(R_i) - R_f$ is also known as the risk premium

Restated in terms of risk premium, we find that:

$$E(R_i) - R_f = \beta_i \left( E(R_m) - R_f \right)$$

which states that the individual risk premium equals the market premium times $\beta_i$.

Note 1: the expected market rate of return is usually estimated by measuring the geometric average of the historical returns on a market portfolio (e.g. the S&P 500).

Note 2: the risk-free rate of return used for determining the risk premium is usually the arithmetic average of historical risk-free rates of return and not the current risk-free rate of return.

For the full derivation see Modern portfolio theory.
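As a quick numerical sketch of the formula above (all figures are hypothetical illustrative values, not market data):

```python
def capm_expected_return(risk_free_rate: float, beta: float, market_return: float) -> float:
    """E(Ri) = Rf + beta * (E(Rm) - Rf)."""
    return risk_free_rate + beta * (market_return - risk_free_rate)

rf = 0.03      # assumed risk-free rate (e.g. a government bond yield)
e_rm = 0.08    # assumed expected market return
beta = 1.2     # assumed sensitivity to market moves

# 0.03 + 1.2 * (0.08 - 0.03) = 0.09, i.e. a 9% required return
print(capm_expected_return(rf, beta, e_rm))
```

A beta of 1.2 means the asset's excess return moves 1.2 times as much as the market's, so its risk premium is 1.2 times the market premium.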

##  Security market line

The SML essentially graphs the results from the capital asset pricing model (CAPM) formula. The x-axis represents the risk (beta), and the y-axis represents the expected return. The market risk premium is determined from the slope of the SML. The relationship between $\beta$ and required return is plotted on the security market line (SML), which shows expected return as a function of $\beta$. The intercept is the nominal risk-free rate available for the market, while the slope is the market premium, $E(R_m) - R_f$. The security market line can be regarded as representing a single-factor model of the asset price, where beta is exposure to changes in the value of the market. The equation of the SML is thus:

$$\mathrm{SML}: E(R_i) = R_f + \beta_i \left( E(R_m) - R_f \right)$$

It is a useful tool for determining whether an asset being considered for a portfolio offers a reasonable expected return for its risk. Individual securities are plotted on the SML graph. A security plotted above the SML is undervalued, since the investor can expect a greater return for the inherent risk; a security plotted below the SML is overvalued, since the investor would be accepting less return for the amount of risk assumed.
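The above/below-the-SML test can be sketched in a few lines; the risk-free rate, market premium, and betas below are made-up illustrative values:

```python
def sml_required_return(beta: float, rf: float = 0.03, market_premium: float = 0.05) -> float:
    """Required return on the SML for a given beta (assumed rf and premium)."""
    return rf + beta * market_premium

def classify(expected_return: float, beta: float) -> str:
    """Compare a security's expected return with its SML-required return."""
    required = sml_required_return(beta)
    if expected_return > required:
        return "undervalued"   # plots above the SML
    if expected_return < required:
        return "overvalued"    # plots below the SML
    return "fairly priced"

# Required return for beta 1.2 is 0.03 + 1.2 * 0.05 = 0.09
print(classify(0.11, beta=1.2))  # above the SML -> "undervalued"
print(classify(0.07, beta=1.2))  # below the SML -> "overvalued"
```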

##  Asset pricing

Once the expected/required rate of return, E(Ri), is calculated using CAPM, we can compare this required rate of return to the asset's estimated rate of return over a specific investment horizon to determine whether it would be an appropriate investment. To make this comparison, you need an independent estimate of the return outlook for the security based on either fundamental or technical analysis techniques, including P/E, M/B etc. Assuming that the CAPM is correct, an asset is correctly priced when its estimated price is the same as the present value of future cash flows of the asset, discounted at the rate suggested by CAPM. If the observed price is higher than the CAPM valuation, then the asset is overvalued (and undervalued when the observed price is below the CAPM valuation).[2] When the asset does not lie on the SML, this could also suggest mis-pricing. Since the expected return of the asset at time $t$ is $E(R_t) = \frac{E(P_{t+1}) - P_t}{P_t}$, a higher expected return than what CAPM suggests indicates that $P_t$ is too low (the asset is currently undervalued), assuming that at time $t+1$ the asset returns to the CAPM-suggested price.[3] The asset price $P_0$ using CAPM, sometimes called the certainty equivalent pricing formula, is a linear relationship given by

$$P_0 = \frac{1}{1 + R_f}\left[ E(P_T) - \frac{\mathrm{Cov}(P_T, R_M)\left(E(R_M) - R_f\right)}{\mathrm{Var}(R_M)} \right]$$
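The comparison described above can be sketched as follows: discount an assumed set of expected cash flows at the CAPM rate and compare the result with an assumed observed market price. All numbers are hypothetical:

```python
def capm_rate(rf: float, beta: float, e_rm: float) -> float:
    """CAPM discount rate: Rf + beta * (E(Rm) - Rf)."""
    return rf + beta * (e_rm - rf)

def present_value(expected_cash_flows, rate: float) -> float:
    """Present value of cash flows received at the end of years 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(expected_cash_flows, start=1))

rate = capm_rate(rf=0.03, beta=1.1, e_rm=0.08)        # 8.5% discount rate
fair_price = present_value([5.0, 5.0, 105.0], rate)   # e.g. a bond-like cash flow stream
observed_price = 95.0                                 # assumed market price

# Observed price above the CAPM valuation -> overvalued
print("overvalued" if observed_price > fair_price else "undervalued")
```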

##  Asset-specific required return

The CAPM returns the asset-appropriate required return or discount rate, i.e. the rate at which future cash flows produced by the asset should be discounted given that asset's relative riskiness. Betas exceeding one signify more-than-average "riskiness"; betas below one indicate lower-than-average. Thus, a riskier stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. Given the accepted concave utility function, the CAPM is consistent with intuition: investors (should) require a higher return for holding a riskier asset. Since beta reflects asset-specific sensitivity to non-diversifiable, i.e. market, risk, the market as a whole, by definition, has a beta of one. Stock market indices are frequently used as local proxies for the market, and in that case (by definition) have a beta of one. An investor in a large, diversified portfolio (such as a mutual fund) therefore expects performance in line with the market.
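Beta itself is typically estimated as the covariance of the asset's returns with the market's returns divided by the variance of the market's returns. A minimal sketch, using short made-up return series:

```python
def beta_estimate(asset_returns, market_returns) -> float:
    """Sample beta: Cov(Ri, Rm) / Var(Rm), computed from paired return series."""
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset_returns, market_returns)) / (n - 1)
    var = sum((m - mean_m) ** 2 for m in market_returns) / (n - 1)
    return cov / var

# Hypothetical monthly returns: the asset amplifies each market move.
market = [0.01, -0.02, 0.03, 0.00, 0.02]
asset = [0.015, -0.025, 0.04, 0.001, 0.03]

print(beta_estimate(asset, market))  # > 1: riskier than the market
```

In practice the regression would use a much longer sample (the figure captioned at the top of this article uses 3 years of monthly data).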

##  Risk and diversification

The risk of a portfolio comprises systematic risk, also known as undiversifiable risk, and unsystematic risk, also known as idiosyncratic risk or diversifiable risk. Systematic risk refers to the risk common to all securities, i.e. market risk. Unsystematic risk is the risk associated with individual assets. Unsystematic risk can be diversified away to smaller levels by including a greater number of assets in the portfolio (specific risks "average out"). The same is not possible for systematic risk within one market. Depending on the market, a portfolio of approximately 30-40 securities in developed markets such as the UK or US will render the portfolio sufficiently diversified such that risk exposure is limited to systematic risk only. In developing markets a larger number is required, owing to the higher asset volatilities.

A rational investor should not take on any diversifiable risk, as only non-diversifiable risks are rewarded within the scope of this model. Therefore, the required return on an asset, that is, the return that compensates for risk taken, must be linked to its riskiness in a portfolio context, i.e. its contribution to overall portfolio riskiness, as opposed to its "stand-alone riskiness". In the CAPM context, portfolio risk is represented by higher variance, i.e. less predictability. In other words, the beta of the portfolio is the defining factor in rewarding the systematic exposure taken by an investor.
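The "averaging out" of specific risk can be illustrated with a small simulation: each asset's return is modelled as an assumed common market shock plus an independent idiosyncratic shock, and the equal-weighted portfolio's volatility falls toward the systematic floor as assets are added. The shock sizes are arbitrary illustrative choices:

```python
import random

random.seed(0)

def portfolio_volatility(n_assets: int, n_periods: int = 5000,
                         systematic_sd: float = 0.02,
                         idiosyncratic_sd: float = 0.05) -> float:
    """Simulated volatility of an equal-weighted portfolio of one-factor assets."""
    returns = []
    for _ in range(n_periods):
        market = random.gauss(0, systematic_sd)  # shock common to every asset
        port = sum(market + random.gauss(0, idiosyncratic_sd)
                   for _ in range(n_assets)) / n_assets
        returns.append(port)
    mean = sum(returns) / n_periods
    return (sum((r - mean) ** 2 for r in returns) / n_periods) ** 0.5

for n in (1, 10, 40):
    print(n, round(portfolio_volatility(n), 4))
# Volatility falls toward the systematic floor (~0.02) as n grows;
# by ~40 assets almost all remaining risk is market risk.
```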

##  The efficient frontier

Main article: Efficient frontier

The (Markowitz) efficient frontier. CAL stands for the capital allocation line.

The CAPM assumes that the risk-return profile of a portfolio can be optimized: an optimal portfolio displays the lowest possible level of risk for its level of return. Additionally, since each additional asset introduced into a portfolio further diversifies the portfolio, the optimal portfolio must comprise every asset (assuming no trading costs), with each asset value-weighted to achieve the above (assuming that any asset is infinitely divisible). All such optimal portfolios, i.e. one for each level of return, comprise the efficient frontier. Because the unsystematic risk is diversifiable, the total risk of a portfolio can be viewed as beta.

##  The market portfolio

An investor might choose to invest a proportion of his or her wealth in a portfolio of risky assets with the remainder in cash, earning interest at the risk-free rate (or indeed may borrow money to fund his or her purchase of risky assets, in which case there is a negative cash weighting). Here, the ratio of risky assets to risk-free asset determines overall return; this relationship is clearly linear. It is thus possible to achieve a particular return in one of two ways:

1. by investing all of one's wealth in a risky portfolio, or
2. by investing a proportion in a risky portfolio and the remainder in cash (either borrowed or invested).

For a given level of return, however, only one of these portfolios will be optimal (in the sense of lowest risk). Since the risk-free asset is, by definition, uncorrelated with any other asset, option 2 will generally have the lower variance and hence be the more efficient of the two. This relationship also holds for portfolios along the efficient frontier: a higher-return portfolio plus cash is more efficient than a lower-return portfolio alone for that lower level of return. For a given risk-free rate, there is only one optimal portfolio which can be combined with cash to achieve the lowest level of risk for any possible return. This is the market portfolio.
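The linear trade-off between a risky portfolio and cash can be sketched as follows; the return and risk figures are assumed for illustration only:

```python
def mix(weight_risky: float, risky_return: float, risky_sd: float,
        rf: float = 0.03) -> tuple[float, float]:
    """Return and risk of a portfolio split between a risky portfolio and cash.

    weight_risky > 1 corresponds to borrowing at the risk-free rate
    (a negative cash weighting)."""
    ret = weight_risky * risky_return + (1 - weight_risky) * rf
    sd = abs(weight_risky) * risky_sd  # cash is riskless and uncorrelated
    return ret, sd

# Half in a risky portfolio (10% return, 20% volatility), half in cash:
ret, sd = mix(0.5, risky_return=0.10, risky_sd=0.20)
print(ret, sd)  # return 6.5%, risk 10%: both scale linearly with the weight
```

Because both return and risk are linear in the weight, every mix of the market portfolio and cash lies on a straight line (the capital allocation line mentioned in the efficient-frontier figure).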

##  Assumptions of CAPM

All investors:[4]

1. Aim to maximize economic utilities.
2. Are rational and risk-averse.
3. Are broadly diversified across a range of investments.
4. Are price takers, i.e., they cannot influence prices.
5. Can lend and borrow unlimited amounts at the risk-free rate of interest.
6. Trade without transaction or taxation costs.
7. Deal with securities that are all highly divisible into small parcels.
8. Assume all information is available at the same time to all investors.

Further, the model assumes that standard deviation of past returns is a perfect proxy for the future risk associated with a given security.

##  Problems of CAPM

The model assumes that either asset returns are (jointly) normally distributed random variables or that active and potential shareholders employ a quadratic form of utility. It is, however, frequently observed that returns in equity and other markets are not normally distributed. As a result, large swings (3 to 6 standard deviations from the mean) occur in the market more frequently than the normal distribution assumption would predict.[5] The model assumes that the variance of returns is an adequate measurement of risk. This might be justified under the assumption of normally distributed returns, but for general return distributions other risk measures (like coherent risk measures) will likely reflect the active and potential shareholders' preferences more adequately. Indeed, risk in financial investments is not variance in itself; rather, it is the probability of losing: it is asymmetric in nature.
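The fat-tail point can be made concrete: under a normal distribution, a move beyond 3 standard deviations should be rare, and this is the benchmark against which observed market swings occur "more frequently":

```python
from statistics import NormalDist

# Probability of a move beyond 3 standard deviations (either direction)
# under the normality assumption the CAPM relies on.
p_3sd = 2 * (1 - NormalDist().cdf(3))
print(f"{p_3sd:.4%}")  # about 0.27%, i.e. roughly 1 trading day in 370
```

Empirically, daily equity-index moves of this size occur considerably more often than once every 370 trading days, which is the non-normality objection in a nutshell.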

The model assumes that all active and potential shareholders have access to the same information and agree about the risk and expected return of all assets (the homogeneous expectations assumption).[citation needed]

The model assumes that the probability beliefs of active and potential shareholders match the true distribution of returns. A different possibility is that active and potential shareholders' expectations are biased, causing market prices to be informationally inefficient. This possibility is studied in the field of behavioral finance, which uses psychological assumptions to provide alternatives to the CAPM, such as the overconfidence-based asset pricing model of Kent Daniel, David Hirshleifer, and Avanidhar Subrahmanyam (2001).[6]

The model does not appear to adequately explain the variation in stock returns. Empirical studies show that low-beta stocks may offer higher returns than the model would predict. Some data to this effect was presented as early as a 1969 conference in Buffalo, New York, in a paper by Fischer Black, Michael Jensen, and Myron Scholes. Either that fact is itself rational (which saves the efficient-market hypothesis but makes CAPM wrong), or it is irrational (which saves CAPM, but makes the EMH wrong; indeed, this possibility makes volatility arbitrage a strategy for reliably beating the market).[citation needed]

The model assumes that given a certain expected return, active and potential shareholders will prefer lower risk (lower variance) to higher risk, and conversely, given a certain level of risk, will prefer higher returns to lower ones. It does not allow for active and potential shareholders who will accept lower returns for higher risk. Casino gamblers pay to take on more risk, and it is possible that some stock traders will pay for risk as well.[citation needed]

The model assumes that there are no taxes or transaction costs, although this assumption may be relaxed with more complicated versions of the model.[citation needed]

The market portfolio consists of all assets in all markets, where each asset is weighted by its market capitalization. This assumes no preference between markets and assets for individual active and potential shareholders, and that active and potential shareholders choose assets solely as a function of their risk-return profile. It also assumes that all assets are infinitely divisible as to the amount which may be held or transacted.[citation needed]

The market portfolio should in theory include all types of assets that are held by anyone as an investment (including works of art, real estate, human capital...). In practice, such a market portfolio is unobservable and people usually substitute a stock index as a proxy for the true market portfolio. Unfortunately, it has been shown that this substitution is not innocuous and can lead to false inferences as to the validity of the CAPM, and it has been said that due to the unobservability of the true market portfolio, the CAPM might not be empirically testable. This was presented in greater depth in a paper by Richard Roll in 1977, and is generally referred to as Roll's critique.[7]

The model assumes just two dates, so that there is no opportunity to consume and rebalance portfolios repeatedly over time. The basic insights of the model are extended and generalized in the intertemporal CAPM (ICAPM) of Robert Merton, and the consumption CAPM (CCAPM) of Douglas Breeden and Mark Rubinstein.[citation needed]

CAPM assumes that all active and potential shareholders will consider all of their assets and optimize one portfolio. This is in sharp contradiction with portfolios that are held by individual shareholders: humans tend to have fragmented portfolios or, rather, multiple portfolios: one portfolio for each goal; see behavioral portfolio theory[8] and Maslowian portfolio theory.[9]

## See also

- Arbitrage pricing theory (APT)
- Consumption beta (C-CAPM)
- Efficient-market hypothesis
- Fama-French three-factor model
- Hamada's equation
- ICAPM
- Modern portfolio theory
- Risk
- Risk management tools
- Roll's critique
- Valuation (finance)

## Difference Between CML and SML

CML stands for Capital Market Line, and SML stands for Security Market Line. The CML is a line that is used to show the rates of return, which depend on risk-free rates of return and levels of risk for a specific portfolio. The SML, which is also called a Characteristic Line, is a graphical representation of the market's risk and return at a given time.

One of the differences between the CML and SML is how the risk factors are measured. While standard deviation is the measure of risk for the CML, the beta coefficient determines the risk factors of the SML. The CML measures risk through standard deviation, or through a total risk factor. The SML, on the other hand, measures risk through beta, which helps to find the security's risk contribution to the portfolio.

While Capital Market Line graphs define efficient portfolios, Security Market Line graphs define both efficient and non-efficient portfolios. When calculating returns, the expected return of the portfolio for the CML is shown along the Y-axis; for the SML, the return of the securities is shown along the Y-axis. The standard deviation of the portfolio is shown along the X-axis for the CML, whereas the beta of the security is shown along the X-axis for the SML. Where the market portfolio and risk-free assets are determined by the CML, all security factors are determined by the SML. Unlike the Capital Market Line, the Security Market Line shows the expected returns of individual assets. The CML determines the risk or return for efficient portfolios, and the SML demonstrates the risk or return for individual stocks. The Capital Market Line is considered to be superior when measuring the risk factors.

Summary:

1. The CML is a line that is used to show the rates of return, which depend on risk-free rates of return and levels of risk for a specific portfolio. The SML, which is also called a Characteristic Line, is a graphical representation of the market's risk and return at a given time.
2. While standard deviation is the measure of risk in the CML, the beta coefficient determines the risk factors of the SML.
3. While Capital Market Line graphs define efficient portfolios, Security Market Line graphs define both efficient and non-efficient portfolios.
4. The Capital Market Line is considered to be superior when measuring the risk factors.
5. Where the market portfolio and risk-free assets are determined by the CML, all security factors are determined by the SML.

Source: Difference Between CML and SML, http://www.differencebetween.net/business/difference-between-cml-and-sml/

## Arbitrage pricing theory

In finance, arbitrage pricing theory (APT) is a general theory of asset pricing that holds that the expected return of a financial asset can be modeled as a linear function of various macroeconomic factors or theoretical market indices, where sensitivity to changes in each factor is represented by a factor-specific beta coefficient. The model-derived rate of return will then be used to price the asset correctly - the asset price should equal the expected end of period price discounted at the rate implied by the model. If the price diverges, arbitrage should bring it back into line. The theory was initiated by the economist Stephen Ross in 1976.

Contents

1. The APT model
2. Arbitrage and the APT
   1. Arbitrage in expectations
   2. Arbitrage mechanics
3. Relationship with the capital asset pricing model (CAPM)
4. Using the APT
   1. Identifying the factors
   2. APT and asset management
5. See also
6. References
7. External links

##  The APT model

Risky asset returns are said to follow a factor structure if they can be expressed as:

$$r_j = a_j + b_{j1} F_1 + b_{j2} F_2 + \cdots + b_{jn} F_n + \epsilon_j$$

where

- $a_j$ is a constant for asset $j$,
- $F_k$ is a systematic factor,
- $b_{jk}$ is the sensitivity of the $j$th asset to factor $k$, also called factor loading, and
- $\epsilon_j$ is the risky asset's idiosyncratic random shock with mean zero.

Idiosyncratic shocks are assumed to be uncorrelated across assets and uncorrelated with the factors.

The APT states that if asset returns follow a factor structure, then the following relation exists between expected returns and the factor sensitivities:

$$E(r_j) = r_f + b_{j1} RP_1 + b_{j2} RP_2 + \cdots + b_{jn} RP_n$$

where

- $RP_k$ is the risk premium of factor $k$, and
- $r_f$ is the risk-free rate.

That is, the expected return of an asset j is a linear function of the asset's sensitivities to the n factors. Note that there are some assumptions and requirements that have to be fulfilled for the latter to be correct: there must be perfect competition in the market, and the total number of factors may never surpass the total number of assets (in order to avoid the problem of matrix singularity).
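The APT relation is just a linear combination, sketched here with hypothetical factor sensitivities and risk premia (imagine, say, an inflation-surprise factor and an industrial-production factor):

```python
def apt_expected_return(rf: float, sensitivities, risk_premia) -> float:
    """E(rj) = rf + sum over k of b_jk * RP_k."""
    assert len(sensitivities) == len(risk_premia), "one premium per factor"
    return rf + sum(b * rp for b, rp in zip(sensitivities, risk_premia))

# Hypothetical two-factor example: loadings 0.8 and 1.5,
# factor risk premia 2% and 3%, risk-free rate 3%.
print(apt_expected_return(0.03, [0.8, 1.5], [0.02, 0.03]))
# 0.03 + 0.8*0.02 + 1.5*0.03 = 0.091, i.e. a 9.1% expected return
```

With a single factor whose premium is the market premium, this collapses to the CAPM formula, which is the "special case" relationship discussed below.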

##  Arbitrage and the APT

Arbitrage is the practice of earning a positive expected return from overvalued or undervalued securities in an inefficient market, without any incremental risk and with zero additional investment.

##  Arbitrage in expectations

The capital asset pricing model and its extensions are based on specific assumptions about investors' asset demand. For example:

- Investors care only about mean return and variance.
- Investors hold only traded assets.

##  Arbitrage mechanics

In the APT context, arbitrage consists of trading in two assets, with at least one being mispriced. The arbitrageur sells the asset which is relatively too expensive and uses the proceeds to buy one which is relatively too cheap. Under the APT, an asset is mispriced if its current price diverges from the price predicted by the model. The asset price today should equal the sum of all future cash flows discounted at the APT rate, where the expected return of the asset is a linear function of various factors, and sensitivity to changes in each factor is represented by a factor-specific beta coefficient.

A correctly priced asset here may in fact be a synthetic asset: a portfolio consisting of other correctly priced assets. This portfolio has the same exposure to each of the macroeconomic factors as the mispriced asset. The arbitrageur creates the portfolio by identifying n + 1 correctly priced assets (one per factor plus one) and then weighting the assets such that portfolio beta per factor is the same as for the mispriced asset. When the investor is long the asset and short the portfolio (or vice versa), he has created a position which has a positive expected return (the difference between asset return and portfolio return) and which has a net-zero exposure to any macroeconomic factor and is therefore risk-free (other than for firm-specific risk). The arbitrageur is thus in a position to make a risk-free profit.

Where today's price is too low: the implication is that at the end of the period the portfolio would have appreciated at the rate implied by the APT, whereas the mispriced asset would have appreciated at more than this rate. The arbitrageur could therefore:

Today:
1. short sell the portfolio
2. buy the mispriced asset with the proceeds.

At the end of the period:
1. sell the mispriced asset
2. use the proceeds to buy back the portfolio
3. pocket the difference.
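The weighting step can be sketched for the single-factor case: with one factor, two correctly priced assets suffice, and the weights follow from matching the target beta under full investment. The betas below are hypothetical:

```python
def replicating_weights(beta_target: float, beta_a: float, beta_b: float):
    """Weights on two correctly priced assets replicating a target factor beta.

    Solves w_a * beta_a + w_b * beta_b = beta_target with w_a + w_b = 1."""
    w_a = (beta_target - beta_b) / (beta_a - beta_b)
    return w_a, 1 - w_a

# Replicate a mispriced asset with beta 1.2 using assets with betas 1.5 and 0.5:
w_a, w_b = replicating_weights(1.2, beta_a=1.5, beta_b=0.5)
print(round(w_a, 3), round(w_b, 3))          # 70% / 30% split
print(round(w_a * 1.5 + w_b * 0.5, 3))       # portfolio beta matches the target
```

Long the mispriced asset and short this portfolio (or vice versa) then has zero net factor exposure, which is what makes the expected-return difference a (firm-specific-risk-only) arbitrage profit. With n factors, the same idea becomes an (n + 1)-equation linear system.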

Where today's price is too high: the implication is that at the end of the period the portfolio would have appreciated at the rate implied by the APT, whereas the mispriced asset would have appreciated at less than this rate. The arbitrageur could therefore:

Today:
1. short sell the mispriced asset
2. buy the portfolio with the proceeds.

At the end of the period:
1. sell the portfolio
2. use the proceeds to buy back the mispriced asset.

##  Relationship with the capital asset pricing model (CAPM)

The APT, along with the capital asset pricing model (CAPM), is one of two influential theories on asset pricing. The APT differs from the CAPM in that it is less restrictive in its assumptions. It allows for an explanatory (as opposed to statistical) model of asset returns. It assumes that each investor will hold a unique portfolio with its own particular array of betas, as opposed to the identical "market portfolio". In some ways, the CAPM can be considered a "special case" of the APT, in that the security market line represents a single-factor model of the asset price, where beta is exposure to changes in the value of the market.

Additionally, the APT can be seen as a "supply-side" model, since its beta coefficients reflect the sensitivity of the underlying asset to economic factors. Thus, factor shocks would cause structural changes in assets' expected returns, or in the case of stocks, in firms' profitabilities. On the other hand, the capital asset pricing model is considered a "demand-side" model. Its results, although similar to those of the APT, arise from a maximization problem of each investor's utility function, and from the resulting market equilibrium (investors are considered to be the "consumers" of the assets).

##  Using the APT

##  Identifying the factors

As with the CAPM, the factor-specific betas are found via a linear regression of historical security returns on the factor in question. Unlike the CAPM, however, the APT does not itself reveal the identity of its priced factors; the number and nature of these factors are likely to change over time and between economies. As a result, this issue is essentially empirical in nature. Several a priori guidelines as to the characteristics required of potential factors are, however, suggested:

1. their impact on asset prices manifests in their unexpected movements
2. they should represent undiversifiable influences (these are, clearly, more likely to be macroeconomic rather than firm-specific in nature)
3. timely and accurate information on these variables is required
4. the relationship should be theoretically justifiable on economic grounds

Chen, Roll and Ross (1986) identified the following macro-economic factors as significant in explaining security returns:

- surprises in inflation;
- surprises in GNP as indicated by an industrial production index;
- surprises in investor confidence due to changes in default premium in corporate bonds;
- surprise shifts in the yield curve.

As a practical matter, indices or spot or futures market prices may be used in place of macro-economic factors, which are reported at low frequency (e.g. monthly) and often with significant estimation errors. Market indices are sometimes derived by means of factor analysis. More direct "indices" that might be used are:

- short-term interest rates;
- the difference between long-term and short-term interest rates;
- a diversified stock index such as the S&P 500 or NYSE Composite Index;
- oil prices;
- gold or other precious metal prices;
- currency exchange rates.

##  APT and asset management

The linear factor model structure of the APT is used as the basis for many of the commercial risk systems employed by asset managers. These include MSCI Barra, APT, Northfield and Axioma.

## See also

- Beta coefficient
- Capital asset pricing model
- Cost of capital
- Earnings response coefficient
- Efficient-market hypothesis
- Fundamental theorem of arbitrage-free pricing
- Investment theory
- Roll's critique
- Rational pricing
- Modern portfolio theory
- Post-modern portfolio theory
- Value investing

## Efficient-market hypothesis


In finance, the efficient-market hypothesis (EMH) asserts that financial markets are "informationally efficient". That is, one cannot consistently achieve returns in excess of average market returns on a risk-adjusted basis, given the information available at the time the investment is made. There are three major versions of the hypothesis: "weak", "semi-strong", and "strong". The weak-form EMH claims that prices on traded assets (e.g., stocks, bonds, or property) already reflect all past publicly available information. The semi-strong-form EMH claims both that prices reflect all publicly available information and that prices instantly change to reflect new public information. The strong-form EMH additionally claims that prices instantly reflect even hidden or "insider" information. There is evidence for and against the weak-form and semi-strong-form EMHs, while there is evidence against the strong-form EMH.[citation needed]

Various studies have pointed out signs of inefficiency in financial markets.[1] Critics have blamed the belief in rational markets for much of the late-2000s financial crisis.[2][3][4] In response, proponents of the hypothesis have stated that market efficiency does not mean having no uncertainty about the future, that market efficiency is a simplification of the world which may not always hold true, and that the market is practically efficient for investment purposes for most individuals.[1]

Contents

1. Historical background
2. Theoretical background
3. Criticism and behavioral finance
4. Late 2000s financial crisis
5. See also
6. Notes
7. References
8. External links

##  Historical background

Historically, there was a very close link between the EMH and the random-walk model, and then the martingale model. The random character of stock market prices was first modelled by Jules Regnault, a French broker, in 1863 and then by Louis Bachelier, a French mathematician, in his 1900 PhD thesis, "The Theory of Speculation".[5] His work was largely ignored until the 1950s; however, beginning in the 1930s, scattered, independent work corroborated his thesis. A small number of studies indicated that US stock prices and related financial series followed a random walk model.[6] Research by Alfred Cowles in the 1930s and 1940s suggested that professional investors were in general unable to outperform the market.

The efficient-market hypothesis was developed by Professor Eugene Fama at the University of Chicago Booth School of Business as an academic concept of study through his published Ph.D. thesis in the early 1960s at the same school. It was widely accepted up until the 1990s, when behavioral finance economists, who had been a fringe element, became mainstream.[7] Empirical analyses have consistently found problems with the efficient-market hypothesis, the most consistent being that stocks with low price to earnings (and similarly, low price to cash-flow or book value) outperform other stocks.[8][9] Alternative theories have proposed that cognitive biases cause these inefficiencies, leading investors to purchase overpriced growth stocks rather than value stocks.[7] Although the efficient-market hypothesis has become controversial because substantial and lasting inefficiencies are observed, Beechey et al. (2000) consider that it remains a worthwhile starting point.[10]

The efficient-market hypothesis emerged as a prominent theory in the mid-1960s. Paul Samuelson had begun to circulate Bachelier's work among economists. In 1964 Bachelier's dissertation, along with the empirical studies mentioned above, was published in an anthology edited by Paul Cootner.[11] In 1965 Eugene Fama published his dissertation arguing for the random walk hypothesis,[12] and Samuelson published a proof for a version of the efficient-market hypothesis.[13] In 1970 Fama published a review of both the theory and the evidence for the hypothesis. The paper extended and refined the theory, and included the definitions for three forms of financial market efficiency: weak, semi-strong and strong (see below).[14]

It has been argued that the stock market is micro efficient but macro inefficient. The main proponent of this view was Samuelson, who asserted that the EMH is much better suited for individual stocks than it is for the aggregate stock market. Research based on regression and scatter diagrams has strongly supported Samuelson's dictum.[15]

Further to evidence that the UK stock market is weak-form efficient, other studies of capital markets have pointed toward their being semi-strong-form efficient. A study by Khan of the grain futures market indicated semi-strong form efficiency following the release of large trader position information (Khan, 1986). Studies by Firth (1976, 1979, and 1980) in the United Kingdom have compared the share prices existing after a takeover announcement with the bid offer. Firth found that the share prices were fully and instantaneously adjusted to their correct levels, thus concluding that the UK stock market was semi-strong-form efficient. However, the market's ability to efficiently respond to a short-term, widely publicized event such as a takeover announcement does not necessarily prove market efficiency related to other, more long-term, amorphous factors. David Dreman has criticized the evidence provided by this instant "efficient" response, pointing out that an immediate response is not necessarily efficient, and that the long-term performance of the stock in response to certain movements is a better indication.

A study of stocks' response to dividend cuts or increases over three years found that after an announcement of a dividend cut, stocks underperformed the market by 15.3% for the three-year period, while stocks outperformed the market by 24.8% for the three years following the announcement of a dividend increase.[16]

##  Theoretical background

Beyond the normal utility-maximizing agents, the efficient-market hypothesis requires that agents have rational expectations: that on average the population is correct (even if no one person is) and that whenever new relevant information appears, the agents update their expectations appropriately. Note that it is not required that the agents be rational. EMH allows that when faced with new information, some investors may overreact and some may underreact. All that is required by the EMH is that investors' reactions be random and follow a normal distribution, so that the net effect on market prices cannot be reliably exploited to make an abnormal profit, especially when considering transaction costs (including commissions and spreads). Thus, any one person can be wrong about the market (indeed, everyone can be), but the market as a whole is always right.

The efficient-market hypothesis is commonly stated in three forms: weak-form efficiency, semi-strong-form efficiency and strong-form efficiency, each of which has different implications for how markets work.

In weak-form efficiency, future prices cannot be predicted by analyzing prices from the past. Excess returns cannot be earned in the long run by using investment strategies based on historical share prices or other historical data. Technical analysis techniques will not be able to
consistently produce excess returns, though some forms of fundamental analysis may still provide excess returns. Share prices exhibit no serial dependencies, meaning that there are no "patterns" to asset prices. This implies that future price movements are determined entirely by information not contained in the price series. Hence, prices must follow a random walk. This 'soft' EMH does not require that prices remain at or near equilibrium, but only that market participants not be able to systematically profit from market 'inefficiencies'.

However, while EMH predicts that all price movement (in the absence of change in fundamental information) is random (i.e., non-trending), many studies have shown a marked tendency for stock markets to trend over time periods of weeks or longer,[17] and that, moreover, there is a positive correlation between degree of trending and length of the time period studied[18] (though note that over long time periods, the trending is sinusoidal in appearance). Various explanations for such large and apparently non-random price movements have been promulgated. The problem of algorithmically constructing prices which reflect all available information has been studied extensively in the field of computer science.[19][20] For example, the complexity of finding arbitrage opportunities in pair betting markets has been shown to be NP-hard.[21]

In semi-strong-form efficiency, it is implied that share prices adjust to publicly available new information very rapidly and in an unbiased fashion, such that no excess returns can be earned by trading on that information. Semi-strong-form efficiency implies that neither fundamental analysis nor technical analysis techniques will be able to reliably produce excess returns. To test for semi-strong-form efficiency, the adjustments to previously unknown news must be of a reasonable size and must be instantaneous.
To test for this, one must look for consistent upward or downward adjustments after the initial change. If there are any such adjustments, it would suggest that investors had interpreted the information in a biased fashion and hence in an inefficient manner.

In strong-form efficiency, share prices reflect all information, public and private, and no one can earn excess returns. If there are legal barriers to private information becoming public, as with insider trading laws, strong-form efficiency is impossible, except in the case where the laws are universally ignored. To test for strong-form efficiency, a market needs to exist where investors cannot consistently earn excess returns over a long period of time. Even if some money managers are consistently observed to beat the market, no refutation even of strong-form efficiency follows: with hundreds of thousands of fund managers worldwide, even a normal distribution of returns (as efficiency predicts) should be expected to produce a few dozen "star" performers.
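This survivorship arithmetic can be illustrated with a small simulation; the population size, horizon, and return volatility below are illustrative assumptions, not market estimates:

```python
import random

# With enough fund managers whose yearly excess returns are pure noise,
# some will "beat the market" many years running by chance alone.
random.seed(42)

N_MANAGERS = 100_000
N_YEARS = 10

def beats_market_every_year(n_years: int) -> bool:
    """True if a zero-skill manager's random excess return is positive every year."""
    return all(random.gauss(0.0, 0.10) > 0.0 for _ in range(n_years))

stars = sum(beats_market_every_year(N_YEARS) for _ in range(N_MANAGERS))

# P(10 straight positive years) = 0.5**10, so roughly 100 of the
# 100,000 zero-skill managers look like decade-long "stars".
print(f"{stars} of {N_MANAGERS:,} managers beat the market {N_YEARS} years straight")
```

Observing a handful of long-running "star" records in a large population is therefore consistent with pure luck, which is exactly the point made above.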

##  Criticism and behavioral finance

Price-earnings ratios as a predictor of twenty-year returns, based upon the plot by Robert Shiller (Figure 10.1,[22] source). The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index as computed in Irrational Exuberance (inflation-adjusted price divided by the prior ten-year mean of inflation-adjusted earnings). The vertical axis shows the geometric average real annual return on investing in the S&P Composite Stock Price Index, reinvesting dividends, and selling twenty years later. Data from different twenty-year periods are color-coded as shown in the key. See also ten-year returns. Shiller states that this plot "confirms that long-term investors, investors who commit their money to an investment for ten full years, did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low."[22] Burton Malkiel stated that this correlation may be consistent with an efficient market due to differences in interest rates.[23]
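The geometric average annual return on the plot's vertical axis is the annualized rate implied by the start and end values of the investment; a minimal sketch on made-up numbers:

```python
# The quantity on Shiller's vertical axis: the geometric average real
# annual return over a holding period, computed here on hypothetical values.
def geometric_annual_return(start_value: float, end_value: float, years: int) -> float:
    """Annualized return r such that start * (1 + r)**years == end."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical: $100 invested (dividends reinvested, inflation-adjusted)
# grows to $320 over twenty years.
r = geometric_annual_return(100.0, 320.0, 20)
print(f"{r:.2%} per year")  # about 5.99% per year
```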

Daniel Kahneman

Investors and researchers have disputed the efficient-market hypothesis both empirically and theoretically. Behavioral economists attribute the imperfections in financial markets to a combination of cognitive biases such as overconfidence, overreaction, representative bias, information bias, and various other predictable human errors in reasoning and information processing. These have been researched by psychologists such as Daniel Kahneman, Amos Tversky, Richard Thaler, and Paul Slovic. These errors in reasoning lead most investors to avoid value stocks and buy growth stocks at expensive prices, which allows those who reason correctly to profit from bargains in neglected value stocks and the overreacted selling of growth stocks. Empirical evidence has been mixed, but has generally not supported strong forms of the efficient-market hypothesis.[8][9][24] According to Dreman and Berry, in a 1995 paper, low P/E stocks have greater returns.[25] In an earlier paper, Dreman also refuted the assertion by Ray Ball that these higher returns could be attributed to higher beta;[26] Ball's research had been accepted by efficient-market theorists as explaining the anomaly[27] in neat accordance with modern portfolio theory.

One can identify "losers" as stocks that have had poor returns over some number of past years, and "winners" as those stocks that had high returns over a similar period. The main result of one such study is that losers have much higher average returns than winners over the following period of the same number of years.[28] A later study showed that beta (β) cannot account for this difference in average returns.[29] This tendency of returns to reverse over long horizons (i.e., losers become winners) is yet another contradiction of EMH: losers would have to have much higher betas than winners in order to justify the return difference, and the study showed that the beta difference required to save the EMH is just not there.
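The winner/loser test just described can be sketched on synthetic data; the returns below are made up and constructed to mean-revert, so the reversal appears by design rather than as evidence:

```python
import random

# Sketch of a winner/loser reversal test: rank stocks by past-period
# return, then compare the subsequent returns of the "loser" and
# "winner" groups. Synthetic, deliberately mean-reverting data.
random.seed(0)

N_STOCKS = 500
past = [random.gauss(0.0, 0.30) for _ in range(N_STOCKS)]
# Mean reversion built in: the next-period return partially reverses the past one.
future = [-0.5 * p + random.gauss(0.0, 0.10) for p in past]

ranked = sorted(range(N_STOCKS), key=lambda i: past[i])
losers, winners = ranked[:50], ranked[-50:]

def avg(idx):
    return sum(future[i] for i in idx) / len(idx)

loser_ret, winner_ret = avg(losers), avg(winners)
print(f"losers {loser_ret:+.1%} vs winners {winner_ret:+.1%}")
```

On real data the empirical question is whether returns actually exhibit this reversal; the studies cited above report that they do.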
Speculative economic bubbles are an obvious anomaly, in that the market often appears to be driven by buyers operating on irrational exuberance, who take little notice of underlying value. These bubbles are typically followed by an overreaction of frantic selling, allowing shrewd investors to buy stocks at bargain prices. Rational investors have difficulty profiting by shorting irrational bubbles because, as John Maynard Keynes commented, "Markets can remain irrational far longer than you or I can remain solvent."[citation needed] Sudden market crashes such as the one on Black Monday in 1987 are mysterious from the perspective of efficient markets, but are allowed as a rare statistical event under the weak form of EMH.

Burton Malkiel, a well-known proponent of the general validity of EMH, has warned that certain emerging markets such as China are not empirically efficient; the Shanghai and Shenzhen markets, unlike markets in the United States, exhibit considerable serial correlation (price trends), non-random walk, and evidence of manipulation.[30]

Behavioral psychology approaches to stock market trading are among the more promising alternatives to EMH (and some investment strategies seek to exploit exactly such inefficiencies). But Daniel Kahneman, Nobel laureate and co-founder of the programme, announced his skepticism of investors beating the market: "They're [investors] just not going to do it [beat the market]. It's just not going to happen."[31] Indeed, defenders of EMH maintain that
behavioral finance strengthens the case for EMH in that it highlights biases in individuals and committees, not in competitive markets. For example, one prominent finding in behavioral finance is that individuals employ hyperbolic discounting, while it is palpably true that bonds, mortgages, annuities and other similar financial instruments subject to competitive market forces do not exhibit such discounting. Any manifestation of hyperbolic discounting in the pricing of these obligations would invite arbitrage, thereby quickly eliminating any vestige of individual biases. Similarly, diversification, derivative securities and other hedging strategies assuage if not eliminate potential mispricings from the severe risk intolerance (loss aversion) of individuals underscored by behavioral finance. On the other hand, economists, behavioral psychologists and mutual fund managers are drawn from the human population and are therefore subject to the biases that behavioralists showcase. By contrast, the price signals in markets are far less subject to the individual biases highlighted by the behavioral finance programme.

Richard Thaler has started a fund based on his research on cognitive biases. In a 2008 report he identified complexity and herd behavior as central to the global financial crisis of 2008.[32]

Further empirical work has highlighted the impact transaction costs have on the concept of market efficiency, with much evidence suggesting that any anomalies pertaining to market inefficiencies are the result of a cost-benefit analysis made by those willing to incur the cost of acquiring the valuable information in order to trade on it. Additionally, the concept of liquidity is a critical component of capturing "inefficiencies" in tests for abnormal returns.
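The contrast between the exponential discounting embedded in bond pricing and the hyperbolic discounting found in individuals can be made concrete; the discount parameters below are illustrative assumptions:

```python
# Exponential discounting (as in standard bond pricing) versus a
# Mazur-style hyperbolic discount function. k and r are illustrative.
def exponential_discount(t: float, r: float = 0.05) -> float:
    """Present value of $1 at time t under a constant discount rate r."""
    return 1.0 / (1.0 + r) ** t

def hyperbolic_discount(t: float, k: float = 0.05) -> float:
    """Hyperbolic discount factor 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

# Hyperbolic discounters devalue short delays steeply but long delays
# mildly, producing the time inconsistency that, in a competitively
# priced bond, arbitrage would eliminate.
for t in (1, 10, 30):
    print(t, round(exponential_discount(t), 3), round(hyperbolic_discount(t), 3))
```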
Any test of this proposition faces the joint hypothesis problem, whereby it is impossible to ever test for market efficiency, since doing so requires the use of a measuring stick against which abnormal returns are compared: one cannot know if the market is efficient if one does not know whether a model correctly stipulates the required rate of return. Consequently, a situation arises in which either the asset pricing model is incorrect or the market is inefficient, but one has no way of knowing which is the case.[citation needed]

A key work on random walk was done in the late 1980s by Professors Andrew Lo and Craig MacKinlay; they effectively argue that a random walk does not exist, nor ever has.[33] Their paper took almost two years to be accepted by academia, and in 1999 they published "A Non-Random Walk Down Wall Street", which collects their research papers on the topic up to that time.

Economists Matthew Bishop and Michael Green claim that full acceptance of the hypothesis goes against the thinking of Adam Smith and John Maynard Keynes, who both believed irrational behavior had a real impact on the markets.[34] Warren Buffett has also argued against EMH, saying the preponderance of value investors among the world's best money managers rebuts the claim of EMH proponents that luck is the reason some investors appear more successful than others.[35]

##  Late 2000s financial crisis

The financial crisis of 2007–2010 has led to renewed scrutiny and criticism of the hypothesis.[36] Market strategist Jeremy Grantham has stated flatly that the EMH is responsible for the current financial crisis, claiming that belief in the hypothesis caused financial leaders to have a "chronic underestimation of the dangers of asset bubbles breaking".[3] Noted financial journalist Roger Lowenstein blasted the theory, declaring "The upside of the current Great Recession is that it could drive a stake through the heart of the academic nostrum known as the efficient-market hypothesis."[4] At the International Organization of Securities Commissions annual conference, held in June 2009, the hypothesis took center stage. Martin Wolf, the chief economics commentator for the Financial Times, dismissed the hypothesis as a useless way to examine how markets function in reality. Paul McCulley, managing director of PIMCO, was less extreme in his criticism, saying that the hypothesis had not failed but was "seriously flawed" in its neglect of human nature.[37]

The financial crisis led Richard Posner, a prominent judge, University of Chicago law professor, and innovator in the field of law and economics, to back away from the hypothesis and express some degree of belief in Keynesian economics. Posner accused some of his Chicago School colleagues of being "asleep at the switch", saying that "the movement to deregulate the financial industry went too far by exaggerating the resilience (the self-healing powers) of laissez-faire capitalism."[38] Others, such as Fama himself, said that the hypothesis held up well during the crisis and that the markets were a casualty of the recession, not the cause of it. Despite this, Fama has conceded that "poorly informed investors could theoretically lead the market astray" and that stock prices could become "somewhat irrational" as a result.[39] Critics have suggested that financial institutions and corporations have been able to reduce the efficiency of financial markets by creating private information and reducing the accuracy of conventional disclosures, and by developing new and complex products which are challenging for most market participants to evaluate and correctly price.[40][41]

##  See also

- Adaptive market hypothesis
- Arbitrage
- Financial market efficiency
- Eugene Fama
- Finance
- Insider trading
- Investment theory
- Market anomaly
- Microeconomics
- Paul Samuelson
- Technical analysis
- Transparency (market)
- Noisy market hypothesis
- Dumb agent theory

# Technical analysis


In finance, technical analysis is a security analysis discipline for forecasting the direction of prices through the study of past market data, primarily price and volume.[1] Behavioral economics and quantitative analysis incorporate technical analysis, which, being an aspect of active management, stands in contradiction to much of modern portfolio theory. The efficacy of both technical and fundamental analysis is disputed by the efficient-market hypothesis, which states that stock market prices are essentially unpredictable.[2]


##  History
The principles of technical analysis are derived from hundreds of years of financial market data.[3] Some aspects of technical analysis began to appear in Joseph de la Vega's accounts of the Dutch markets in the 17th century. In Asia, technical analysis is said to be a method developed by Homma Munehisa during the early 18th century, which evolved into the use of candlestick techniques, today a technical analysis charting tool.[4][5]

In the 1920s and 1930s Richard W. Schabacker published several books which continued the work of Charles Dow and William Peter Hamilton in their books Stock Market Theory and Practice and Technical Market Analysis. In 1948 Robert D. Edwards and John Magee published Technical Analysis of Stock Trends, which is widely considered to be one of the seminal works of the discipline. It is exclusively concerned with trend analysis and chart patterns and remains in use to the present. Early technical analysis was almost exclusively the analysis of charts, because the processing power of computers was not available for statistical analysis. Charles Dow reportedly originated a form of point and figure chart analysis.

Dow Theory is based on the collected writings of Dow Jones co-founder and editor Charles Dow, and inspired the use and development of modern technical analysis at the end of the 19th century. Other pioneers of analysis techniques include Ralph Nelson Elliott, William Delbert Gann and Richard Wyckoff, who developed their respective techniques in the early 20th century. More technical tools and theories have been developed and enhanced in recent decades, with an increasing emphasis on computer-assisted techniques using specially designed computer software.

##  General description


While fundamental analysts examine earnings, dividends, new products, research and the like, technical analysts examine what investors fear or think about those developments and whether or not investors have the wherewithal to back up their opinions; these two concepts are called psych (psychology) and supply/demand. Technicians employ many techniques, one of which is the use of charts. Using charts, technical analysts seek to identify price patterns and market trends in financial markets and attempt to exploit those patterns.[6] Technicians use various methods and tools; the study of price charts is but one. Technicians using charts search for archetypal price chart patterns, such as the well-known head and shoulders or double top/bottom reversal patterns, study technical indicators and moving averages, and look for forms such as lines of support, resistance, channels, and more obscure formations such as flags, pennants, balance days and cup and handle patterns.

Technical analysts also widely use market indicators of many sorts, some of which are mathematical transformations of price, often including up and down volume, advance/decline data and other inputs. These indicators are used to help assess whether an asset is trending, and if it is, the probability of its direction and of continuation. Technicians also look for relationships between price/volume indices and market indicators; examples include the relative strength index (RSI) and MACD. Other avenues of study include correlations between changes in options (implied volatility) and put/call ratios with price. Also important are sentiment indicators such as put/call ratios, bull/bear ratios, short interest and implied volatility.

There are many techniques in technical analysis. Adherents of different techniques (for example, candlestick charting, Dow Theory, and Elliott wave theory) may ignore the other approaches, yet many traders combine elements from more than one technique.
Some technical analysts use subjective judgment to decide which pattern(s) a particular instrument reflects at a given time and what the interpretation of that pattern should be. Others employ a strictly mechanical or systematic approach to pattern identification and interpretation. Technical analysis is frequently contrasted with fundamental analysis, the study of economic factors that influence the way investors price financial markets. Technical analysis holds that prices already reflect all such trends before investors are aware of them. Uncovering those trends is what technical indicators are designed to do, imperfect as they may be. Fundamental indicators are subject to the same limitations, naturally. Some traders use technical or fundamental analysis exclusively, while others use both types to make trading decisions.
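Two of the indicator calculations mentioned above, a simple moving average and the relative strength index, can be sketched with the standard textbook formulas on a made-up price series:

```python
def sma(prices, window):
    """Simple moving average: None until a full window is available."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def rsi(prices, period=14):
    """Relative Strength Index using simple averages of gains and losses."""
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

prices = [44, 44.3, 44.1, 44.2, 44.5, 43.9, 44.6, 45.1, 45.4, 45.8,
          46.1, 45.9, 46.2, 46.3, 46.0]
print(sma(prices, 5)[-1])   # average of the last five closes: 46.1
print(round(rsi(prices), 1))
```

Production implementations usually use Wilder's smoothed averages for RSI rather than the simple averages shown here; the simple form keeps the sketch short.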

##  Characteristics

##  Principles

Stock chart showing levels of support (4, 5, 6, 7, and 8) and resistance (1, 2, and 3); levels of resistance tend to become levels of support and vice versa.

A fundamental principle of technical analysis is that a market's price reflects all relevant information, so technicians look at the history of a security's trading pattern rather than at external drivers such as economic, fundamental and news events. Price action also tends to repeat itself, because investors collectively tend toward patterned behavior; hence technicians' focus on identifiable trends and conditions.[citation needed]
##  Market action discounts everything

Based on the premise that all relevant information is already reflected by prices, technical analysts believe it is important to understand what investors think of that information, known and perceived; studies such as that by Cutler, Poterba, and Summers titled "What Moves Stock Prices?" do not cover this aspect of investing.[citation needed]

##  Prices move in trends

Technical analysts believe that prices trend directionally, i.e., up, down, or sideways (flat), or some combination. The basic definition of a price trend was originally put forward by Dow Theory.[6] An example of a security that had an apparent trend is AOL from November 2001 through August 2002. A technical analyst or trend follower recognizing this trend would look for opportunities to sell this security. AOL consistently moved downward in price. Each time the stock rose, sellers would enter the market and sell the stock; hence the "zig-zag" movement in the price. The series of "lower highs" and "lower lows" is a telltale sign of a stock in a downtrend.[21] In other words, each time the stock moved lower, it fell below its previous relative low price; each time the stock moved higher, it could not reach the level of its previous relative high price. Note that the sequence of lower lows and lower highs did not begin until August. Then AOL made a low price that did not pierce the relative low set earlier in the month. Later in the same month, the stock made a relative high equal to the most recent relative high. In this a technician sees strong indications that the downtrend is at least pausing and possibly ending, and would likely stop actively selling the stock at that point.
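The lower-highs/lower-lows pattern described above can be checked mechanically; a minimal sketch on a made-up series of daily closes:

```python
def swing_points(prices):
    """Local highs and lows (strictly above/below both neighbours)."""
    highs, lows = [], []
    for i in range(1, len(prices) - 1):
        if prices[i] > prices[i - 1] and prices[i] > prices[i + 1]:
            highs.append(prices[i])
        if prices[i] < prices[i - 1] and prices[i] < prices[i + 1]:
            lows.append(prices[i])
    return highs, lows

def is_downtrend(prices):
    """Downtrend if every swing high and swing low undercuts the previous one."""
    highs, lows = swing_points(prices)
    lower_highs = all(b < a for a, b in zip(highs, highs[1:]))
    lower_lows = all(b < a for a, b in zip(lows, lows[1:]))
    return len(highs) >= 2 and len(lows) >= 2 and lower_highs and lower_lows

zigzag_down = [50, 47, 48, 44, 46, 42, 43, 40]  # lower highs, lower lows
print(is_downtrend(zigzag_down))  # True
```

Real charting software uses more tolerant swing-point definitions (e.g. fractal or zig-zag filters with a minimum percentage move); the strict neighbour comparison here is the simplest possible version.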
##  History tends to repeat itself

Technical analysts believe that investors collectively repeat the behavior of the investors that preceded them. To a technician, the emotions in the market may be irrational, but they exist. Because investor behavior repeats itself so often, technicians believe that recognizable (and predictable) price patterns will develop on a chart.[6] Technical analysis is not limited to charting, but it always considers price trends.[1] For example, many technicians monitor surveys of investor sentiment. These surveys gauge the attitude of market participants, specifically whether they are bearish or bullish. Technicians use these surveys to help determine whether a trend will continue or if a reversal could develop; they are
most likely to anticipate a change when the surveys report extreme investor sentiment.[22] Surveys that show overwhelming bullishness, for example, are evidence that an uptrend may reverse, the premise being that if most investors are bullish they have already bought the market (anticipating higher prices). And because most investors are bullish and invested, one assumes that few buyers remain. This leaves more potential sellers than buyers, despite the bullish sentiment. This suggests that prices will trend down, and is an example of contrarian trading.[23]

Recently, Kim Man Lui, Lun Hu, and Keith C.C. Chan have suggested that there is statistical evidence of association relationships between some of the stocks in an index, whereas there is no evidence of such a relationship between others. They show that the price behavior of these Hang Seng index composite stocks is easier to understand than that of the index itself.[24]

##  Industry
The industry is globally represented by the International Federation of Technical Analysts (IFTA), which is a federation of regional and national organizations. In the United States, the industry is represented by both the Market Technicians Association (MTA) and the American Association of Professional Technical Analysts (AAPTA). The United States is also represented by the Technical Security Analysts Association of San Francisco (TSAASF). In the United Kingdom, the industry is represented by the Society of Technical Analysts (STA). In Canada the industry is represented by the Canadian Society of Technical Analysts.[25] In Australia, the industry is represented by the Australian Professional Technical Analysts (APTA) Inc[26] and the Australian Technical Analysts Association (ATAA).

Professional technical analysis societies have worked on creating a body of knowledge that describes the field of technical analysis. A body of knowledge is central to the field as a way of defining how and why technical analysis may work. It can then be used by academia, as well as regulatory bodies, in developing proper research and standards for the field.[27] The Market Technicians Association (MTA) has published a body of knowledge, which is the structure for the MTA's Chartered Market Technician (CMT) exam.[28]

##  Neural networks

Since the early 1990s when the first practically usable types emerged, artificial neural networks (ANNs) have rapidly grown in popularity. They are artificial intelligence adaptive software systems that have been inspired by how biological neural networks work. They are used because they can learn to detect complex patterns in data. In mathematical terms, they are universal function approximators,[29][30] meaning that given the right data and configured correctly, they can capture and model any input-output relationships. This not only removes the need for human interpretation of charts or the series of rules for generating entry/exit signals, but also provides a bridge to fundamental analysis, as the variables used in fundamental analysis can be used as input.

As ANNs are essentially non-linear statistical models, their accuracy and prediction capabilities can be both mathematically and empirically tested. In various studies, authors have claimed that neural networks used for generating trading signals given various technical and fundamental inputs have significantly outperformed buy-and-hold strategies as well as traditional linear technical analysis methods when combined with rule-based expert systems.[31][32][33] While the advanced mathematical nature of such adaptive systems has kept neural networks for financial analysis mostly within academic research circles, in recent years more user-friendly neural network software has made the technology more accessible to traders. However, large-scale application is problematic because of the difficulty of matching the correct neural topology to the market being studied.
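The "universal function approximator" claim can be illustrated with a minimal one-hidden-layer tanh network trained by full-batch gradient descent to fit a nonlinear function; the network size, learning rate, target function and epoch count below are arbitrary illustrative choices, not a trading system:

```python
import math
import random

random.seed(1)
HIDDEN, LR, EPOCHS = 8, 0.05, 3000

xs = [i / 10.0 for i in range(-30, 31)]   # 61 inputs in [-3, 3]
ys = [math.sin(x) for x in xs]            # toy nonlinear target

w1 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b1 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    """Hidden activations and network output for one input."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(HIDDEN)]
    return h, sum(w2[j] * h[j] for j in range(HIDDEN)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(EPOCHS):
    g1, gb1 = [0.0] * HIDDEN, [0.0] * HIDDEN
    g2, gb2 = [0.0] * HIDDEN, 0.0
    for x, y in zip(xs, ys):
        h, pred = forward(x)
        err = 2.0 * (pred - y) / len(xs)            # d(MSE)/d(pred)
        for j in range(HIDDEN):
            g2[j] += err * h[j]
            dh = err * w2[j] * (1.0 - h[j] ** 2)    # tanh' = 1 - tanh^2
            g1[j] += dh * x
            gb1[j] += dh
        gb2 += err
    for j in range(HIDDEN):
        w1[j] -= LR * g1[j]
        b1[j] -= LR * gb1[j]
        w2[j] -= LR * g2[j]
    b2 -= LR * gb2
loss_after = mse()
print(f"MSE before {loss_before:.3f}, after {loss_after:.3f}")
```

In the trading-signal setting the inputs would be lagged technical or fundamental variables rather than a single x, but the mechanics of fitting an input-output mapping are the same.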

##  Combination with other market forecast methods

John Murphy states that the principal sources of information available to technicians are price, volume and open interest.[6] Other data, such as indicators and sentiment analysis, are considered secondary. However, many technical analysts reach outside pure technical analysis, combining other market forecast methods with their technical work. One advocate for this approach is John Bollinger, who coined the term rational analysis in the mid-1980s for the intersection of technical analysis and fundamental analysis.[34] Another such approach, fusion analysis,[35] overlays fundamental analysis with technical, in an attempt to improve portfolio manager performance. Technical analysis is also often combined with quantitative analysis and economics. For example, neural networks may be used to help identify intermarket relationships.[36] A few market forecasters combine financial astrology with technical analysis. Chris Carolan's article "Autumn Panics and Calendar Phenomenon", which won the Market Technicians Association Dow Award for best technical analysis paper in 1998, demonstrates how technical analysis and lunar cycles can be combined.[37] Calendar phenomena, such as the January effect in the stock market, are generally believed to be caused by tax and accounting related transactions, and are not related to the subject of financial astrology. Investor and newsletter polls, and magazine cover sentiment indicators, are also used by technical analysts.[38]

##  Empirical evidence

Whether technical analysis actually works is a matter of controversy. Methods vary greatly, and different technical analysts can sometimes make contradictory predictions from the same data. Many investors claim that they experience positive returns, but academic appraisals often find that it has little predictive power.[39] Of 95 modern studies, 56 concluded that technical analysis had positive results, although data-snooping bias and other problems make the analysis difficult.[7] Nonlinear prediction using neural networks occasionally produces statistically significant prediction results.[40] A Federal Reserve working paper[15] regarding support and
resistance levels in short-term foreign exchange rates "offers strong evidence that the levels help to predict intraday trend interruptions," although the "predictive power" of those levels was "found to vary across the exchange rates and firms examined". Technical trading strategies were found to be effective in the Chinese marketplace by a recent study that states, "Finally, we find significant positive returns on buy trades generated by the contrarian version of the moving-average crossover rule, the channel breakout rule, and the Bollinger band trading rule, after accounting for transaction costs of 0.50 percent."[41]

An influential 1992 study by Brock et al. which appeared to find support for technical trading rules was tested for data snooping and other problems in 1999;[42] the sample covered by Brock et al. was robust to data snooping. Subsequently, a comprehensive study of the question by Amsterdam economist Gerwin Griffioen concludes that: "for the U.S., Japanese and most Western European stock market indices the recursive out-of-sample forecasting procedure does not show to be profitable, after implementing little transaction costs. Moreover, for sufficiently high transaction costs it is found, by estimating CAPMs, that technical trading shows no statistically significant risk-corrected out-of-sample forecasting power for almost all of the stock market indices."[10] Transaction costs are particularly applicable to "momentum strategies"; a comprehensive 1996 review of the data and studies concluded that even small transaction costs would lead to an inability to capture any excess from such strategies.[43]

In a paper published in the Journal of Finance, Dr. Andrew W. Lo, director of the MIT Laboratory for Financial Engineering, working with Harry Mamaysky and Jiang Wang, found that "
Technical analysis, also known as "charting," has been a part of financial practice for many decades, but this discipline has not received the same level of academic scrutiny and acceptance as more traditional approaches such as fundamental analysis. One of the main obstacles is the highly subjective nature of technical analysis: the presence of geometric shapes in historical price charts is often in the eyes of the beholder. In this paper, we propose a systematic and automatic approach to technical pattern recognition using nonparametric kernel regression, and apply this method to a large number of U.S. stocks from 1962 to 1996 to evaluate the effectiveness of technical analysis. By comparing the unconditional empirical distribution of daily stock returns to the conditional distribution, conditioned on specific technical indicators such as head-and-shoulders or double-bottoms, we find that over the 31-year sample period, several technical indicators do provide incremental information and may have some practical value."[44]

In that same paper Dr. Lo wrote that "several academic studies suggest that ... technical analysis may well be an effective means for extracting useful information from market prices."[45] Some techniques such as Drummond Geometry attempt to overcome the past data bias by projecting support and resistance levels from differing time frames into the near-term future and combining that with reversion to the mean techniques.[46]

##  Efficient market hypothesis

The efficient-market hypothesis (EMH) contradicts the basic tenets of technical analysis by stating that past prices cannot be used to profitably predict future prices. Thus it holds that technical analysis cannot be effective. Economist Eugene Fama published the seminal paper on the EMH in the Journal of Finance in 1970, saying "In short, the evidence in support of the efficient markets model is extensive, and (somewhat uniquely in economics) contradictory evidence is sparse."[47]

Technicians say[who?] that the EMH ignores the way markets work, in that many investors base their expectations on past earnings or track record, for example. Because future stock prices can be strongly influenced by investor expectations, technicians claim it only follows that past prices influence future prices.[48] They also point to research in the field of behavioral finance, specifically that people are not the rational participants the EMH makes them out to be. Technicians have long said that irrational human behavior influences stock prices, and that this behavior leads to predictable outcomes.[49] Author David Aronson says that the theory of behavioral finance blends with the practice of technical analysis: "By considering the impact of emotions, cognitive errors, irrational preferences, and the dynamics of group behavior, behavioral finance offers succinct explanations of excess market volatility as well as the excess returns earned by stale information strategies ... cognitive errors may also explain the existence of market inefficiencies that spawn the systematic price movements that allow objective TA [technical analysis] methods to work."[48]

EMH advocates reply that while individual market participants do not always act rationally (or have complete information), their aggregate decisions balance each other, resulting in a rational outcome (optimists who buy stock and bid the price higher are countered by pessimists who sell their stock, which keeps the price in equilibrium).[50] Likewise, complete information is reflected in the price because all market participants bring their own individual, but incomplete, knowledge together in the market.[50]

##  Random walk hypothesis

The random walk hypothesis may be derived from the weak-form efficient markets hypothesis, which is based on the assumption that market participants take full account of any information contained in past price movements (but not necessarily other public information). In his book A Random Walk Down Wall Street, Princeton economist Burton Malkiel said that technical forecasting tools such as pattern analysis must ultimately be self-defeating: "The problem is that once such a regularity is known to market participants, people will act in such a way that prevents it from happening in the future."[51] Malkiel has stated that while momentum may explain some stock price movements, there is not enough momentum to make excess profits. He has compared technical analysis to "astrology".[52]

In the late 1980s, professors Andrew Lo and A. Craig MacKinlay published a paper which cast doubt on the random walk hypothesis. In a 1999 response to Malkiel, Lo and MacKinlay collected empirical papers that questioned the hypothesis' applicability[53] and suggested a non-random and possibly predictive component to stock price movement, though they were careful to point out that rejecting the random walk does not necessarily invalidate the EMH, which is an entirely separate concept from the RWH. In a 2000 paper, Andrew Lo analyzed U.S. stock data from 1962 to 1996 and found that "several technical indicators do provide incremental information and may have some practical value".[45] Burton Malkiel dismissed the irregularities mentioned by Lo and MacKinlay as being too small to profit from.[52]

Technicians say[who?] that the EMH and random walk theories both ignore the realities of markets, in that participants are not completely rational and that current price moves are not independent of previous moves.[21][54] Some signal processing researchers reject the random walk hypothesis' premise that stock market prices resemble Wiener processes, because the statistical moments of such processes and of real stock data vary significantly with window size and similarity measure.[55] They argue that feature transformations used for the description of audio and biosignals can also be used to predict stock market prices successfully, which would contradict the random walk hypothesis.

The random walk index (RWI) is a technical indicator that attempts to determine whether a stock's price movement is random in nature or the result of a statistically significant trend. It does so by measuring how far price has ranged over N periods and comparing that with the range a random walk (randomly going up or down) would be expected to produce; the greater the range, the stronger the indicated trend.[56]
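As a sketch of the indicator just described, one common formulation of the RWI (an assumption here, since variants exist) compares the realized N-period range with the range expected of a random walk, namely the average true range scaled by the square root of N:

```python
import math

def random_walk_index(highs, lows, closes, n):
    """One common formulation of the Random Walk Index (RWI): how far price
    has travelled over n periods, relative to the range a pure random walk
    would be expected to cover (average true range scaled by sqrt(n))."""
    # Average true range over the last n periods
    trs = []
    for i in range(len(closes) - n, len(closes)):
        tr = max(highs[i] - lows[i],
                 abs(highs[i] - closes[i - 1]),
                 abs(lows[i] - closes[i - 1]))
        trs.append(tr)
    atr = sum(trs) / n
    expected_range = atr * math.sqrt(n)
    rwi_high = (highs[-1] - lows[-n]) / expected_range  # uptrend strength
    rwi_low = (highs[-n] - lows[-1]) / expected_range   # downtrend strength
    return rwi_high, rwi_low
```

Readings above 1.0 would indicate price has moved further than a random walk typically would, i.e. a trend.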

##  Charting terms and indicators

##  Concepts

- Resistance: a price level that may prompt a net increase of selling activity
- Support: a price level that may prompt a net increase of buying activity
- Breakout: the concept whereby prices forcefully penetrate an area of prior support or resistance, usually, but not always, accompanied by an increase in volume
- Trending: the phenomenon by which price movement tends to persist in one direction for an extended period of time
- Average true range: averaged daily trading range, adjusted for price gaps
- Chart pattern: distinctive pattern created by the movement of security prices on a chart
- Dead cat bounce: the phenomenon whereby a spectacular decline in the price of a stock is immediately followed by a moderate and temporary rise before resuming its downward movement
- Elliott wave principle and the golden ratio: used to calculate successive price movements and retracements
- Fibonacci ratios: used as a guide to determine support and resistance
- Momentum: the rate of price change
- Point and figure analysis: a price-based analytical approach employing numerical filters with only passing references to time, and which ignores time entirely in its construction
- Cycles: time targets for potential change in price action (price only moves up, down, or sideways)
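The breakout concept above can be sketched in code. The volume-confirmation threshold used here (1.5 times average volume) is an illustrative assumption, not a standard value:

```python
def detect_breakout(closes, volumes, resistance, vol_factor=1.5):
    """Flag a breakout: the latest close penetrates a prior resistance level,
    ideally confirmed by above-average volume (vol_factor is illustrative)."""
    avg_vol = sum(volumes[:-1]) / (len(volumes) - 1)  # average of prior volume
    broke_price = closes[-1] > resistance             # price condition
    vol_confirms = volumes[-1] > vol_factor * avg_vol # volume condition
    return broke_price, vol_confirms
```

A breakout flagged without volume confirmation would, per the definition above, be treated with more suspicion.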

##  Types of charts

- Open-high-low-close chart: OHLC charts, also known as bar charts, plot the span between the high and low prices of a trading period as a vertical line segment at the trading time, and the open and close prices with horizontal tick marks on the range line, usually a tick to the left for the open price and a tick to the right for the closing price.
- Candlestick chart: Of Japanese origin and similar to OHLC, candlesticks widen and fill the interval between the open and close prices to emphasize the open/close relationship. In the West, black or red candle bodies often represent a close lower than the open, while white, green or blue candles represent a close higher than the open price.
- Line chart: Connects the closing price values with line segments.
- Point and figure chart: a chart type employing numerical filters with only passing references to time, and which ignores time entirely in its construction.

##  Overlays

Overlays are generally superimposed over the main price chart.

- Resistance: a price level that may act as a ceiling above price
- Support: a price level that may act as a floor below price
- Trend line: a sloping line described by at least two peaks or two troughs
- Channel: a pair of parallel trend lines
- Moving average: the average of the last n bars of price, where n is the number of bars specified by the length of the average; a moving average can be thought of as a kind of dynamic trend line
- Bollinger bands: a range of price volatility
- Parabolic SAR: Wilder's trailing stop, based on prices tending to stay within a parabolic curve during a strong trend
- Pivot point: derived by calculating the numerical average of a particular currency's or stock's high, low and closing prices
- Ichimoku kinko hyo: a moving average-based system that factors in time and the average point between a candle's high and low
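As a sketch, the moving-average and Bollinger-band overlays can be computed together, since the bands are simply an envelope of k standard deviations around the moving average. The 20-bar window and k = 2 used as defaults here are common choices, assumed for illustration:

```python
import statistics

def bollinger_bands(prices, n=20, k=2):
    """Middle band = simple moving average of the last n prices;
    upper/lower bands = middle band +/- k standard deviations of
    the same window, tracking recent price volatility."""
    mid, upper, lower = [], [], []
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1:i + 1]
        m = sum(window) / n                 # simple moving average
        sd = statistics.pstdev(window)      # population std. dev. of the window
        mid.append(m)
        upper.append(m + k * sd)
        lower.append(m - k * sd)
    return mid, upper, lower
```

The `mid` series on its own is the simple moving average overlay from the list above.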

##  Price-based indicators

These indicators are generally shown below or above the main price chart.

- Average Directional Index: a widely used indicator of trend strength
- Commodity Channel Index: identifies cyclical trends
- MACD: moving average convergence/divergence
- Momentum: the rate of price change
- Relative Strength Index (RSI): oscillator showing price strength
- Stochastic oscillator: close position within recent trading range
- Trix: an oscillator showing the slope of a triple-smoothed exponential moving average
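For illustration, here is a minimal RSI in the spirit of the list above, using a simple average of gains and losses over the lookback window (Wilder's original formulation uses a smoothed average, so this is a sketch, not the canonical indicator):

```python
def rsi(prices, n=14):
    """Relative Strength Index (simple-average variant): compares the
    magnitude of recent gains to recent losses on a 0-100 scale."""
    gains, losses = [], []
    for prev, cur in zip(prices[-n - 1:-1], prices[-n:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains) / n
    avg_loss = sum(losses) / n
    if avg_loss == 0:
        return 100.0        # no losses in the window: maximum strength
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A value near 100 indicates uninterrupted gains, near 0 uninterrupted losses, and 50 a balance of the two.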

##  Breadth indicators

These indicators are based on statistics derived from the broad market.

- Advance Decline Line: a popular indicator of market breadth
- McClellan Oscillator: a popular closed-form indicator of breadth
- McClellan Summation Index: a popular open-form indicator of breadth
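The advance-decline line is simple enough to sketch directly: it is the running total of advancing issues minus declining issues per session.

```python
def advance_decline_line(advances, declines):
    """Advance-decline line: cumulative sum of (advancing issues minus
    declining issues) for each session, a basic breadth measure."""
    line, total = [], 0
    for a, d in zip(advances, declines):
        total += a - d
        line.append(total)
    return line
```

A rising index accompanied by a falling advance-decline line is the classic breadth divergence this indicator is used to spot.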

##  Volume-based indicators

- Accumulation/distribution index: based on the close within the day's range
- Money Flow: the amount of stock traded on days the price went up
- On-balance volume: the momentum of buying and selling stocks
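On-balance volume, the last item above, can be sketched as a running total that adds the session's volume on an up-close and subtracts it on a down-close:

```python
def on_balance_volume(closes, volumes):
    """On-balance volume (OBV): running total that adds the period's volume
    when price closes higher and subtracts it when price closes lower."""
    obv = [0]  # conventional starting value
    for i in range(1, len(closes)):
        if closes[i] > closes[i - 1]:
            obv.append(obv[-1] + volumes[i])
        elif closes[i] < closes[i - 1]:
            obv.append(obv[-1] - volumes[i])
        else:
            obv.append(obv[-1])  # unchanged close leaves OBV flat
    return obv
```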

##  See also

- Market analysis
- Market timing
- Price action trading
- Chartered Market Technician
- Behavioral finance
- Mathematical finance
- Multimedia Information Retrieval

# Dow Theory

Dow Theory on stock price movement is a form of technical analysis that includes some aspects of sector rotation. The theory was derived from 255 Wall Street Journal editorials written by Charles H. Dow (1851–1902), journalist, founder and first editor of the Wall Street Journal, and co-founder of Dow Jones and Company. Following Dow's death, William Peter Hamilton, Robert Rhea, and E. George Schaefer organized and collectively represented "Dow Theory," based on Dow's editorials. Dow himself never used the term "Dow Theory," nor presented it as a trading system. The six basic tenets of Dow Theory as summarized by Hamilton, Rhea, and Schaefer are described below.


##  Six basic tenets of Dow Theory

1. The market has three movements

(1) The "main movement", primary movement, or major trend may last from less than a year to several years. It can be bullish or bearish. (2) The "medium swing", secondary reaction, or intermediate reaction may last from ten days to three months and generally retraces from 33% to 66% of the primary price change since the previous medium swing or start of the main movement. (3) The "short swing" or minor movement varies with opinion from hours to a month or more. The three movements may be simultaneous; for instance, a daily minor movement in a bearish secondary reaction in a bullish primary movement.

2. Market trends have three phases

Dow Theory asserts that major market trends are composed of three phases: an accumulation phase, a public participation phase, and a distribution phase. The accumulation phase (phase 1) is a period when investors "in the know" are actively buying (selling) stock against the general opinion of the market. During this phase, the stock price does not change much because these investors are in the minority demanding (absorbing) stock that the market at large is supplying (releasing). Eventually, the market catches on to these astute investors and a rapid price change occurs (phase 2). This occurs when trend followers and other technically oriented investors participate. This phase continues until rampant speculation occurs. At this point, the astute investors begin to distribute their holdings to the market (phase 3).

3. The stock market discounts all news

Stock prices quickly incorporate new information as soon as it becomes available. Once news is released, stock prices will change to reflect this new information. On this point, Dow Theory agrees with one of the premises of the efficient market hypothesis.

4. Stock market averages must confirm each other

In Dow's time, the US was a growing industrial power. The US had population centers but factories were scattered throughout the country. Factories had to ship their goods to market, usually by rail. Dow's first stock averages were an index of industrial (manufacturing) companies and rail companies. To Dow, a bull market in industrials could not occur unless the railway average rallied as well, usually first. According to this logic, if manufacturers' profits are rising, it follows that they are producing more. If they produce more, then they have to ship more goods to consumers. Hence, if an investor is looking for signs of health in manufacturers, he or she should look at the performance of the companies that ship their output to market, the railroads. The two averages should be moving in the same direction. When the performance of the averages diverges, it is a warning that change is in the air. Both Barron's Magazine and the Wall Street Journal still publish the daily performance of the Dow Jones Transportation Index in chart form. The index contains major railroads, shipping companies, and air freight carriers in the US.

5. Trends are confirmed by volume

Dow believed that volume confirmed price trends. When prices move on low volume, there could be many different explanations; an overly aggressive seller could be present, for example. But when price movements are accompanied by high volume, Dow believed this represented the "true" market view. If many participants are active in a particular security, and the price moves significantly in one direction, Dow maintained that this was the direction in which the market anticipated continued movement. To him, it was a signal that a trend is developing.

6. Trends exist until definitive signals prove that they have ended

Dow believed that trends existed despite "market noise". Markets might temporarily move in the direction opposite to the trend, but they will soon resume the prior move. The trend should be given the benefit of the doubt during these reversals. Determining whether a reversal is the start of a new trend or a temporary movement in the current trend is not easy. Dow Theorists often disagree in this determination. Technical analysis tools attempt to clarify this, but they can be interpreted differently by different investors.

##  Analysis

There is little academic support for the profitability of the Dow Theory. Alfred Cowles, in a study published in Econometrica in 1934, showed that trading based upon the editorial advice would have earned less than a buy-and-hold strategy using a well-diversified portfolio. Cowles concluded that a buy-and-hold strategy produced 15.5% annualized returns from 1902–1929, while the Dow Theory strategy produced annualized returns of 12%. After numerous studies supported Cowles over the following years, many academics stopped studying Dow Theory, believing Cowles's results were conclusive.

In recent years, however, Cowles's conclusions have been revisited. William Goetzmann, Stephen Brown, and Alok Kumar believe that Cowles's study was incomplete and that Dow Theory produces excess risk-adjusted returns.[1] Specifically, the return of a buy-and-hold strategy was higher than that of a Dow Theory portfolio by 2%, but the riskiness and volatility of the Dow Theory portfolio was lower, so that the Dow Theory portfolio produced higher risk-adjusted returns according to their study. Nevertheless, adjusting returns for risk is controversial in the context of the Dow Theory.

One key problem with any analysis of Dow Theory is that the editorials of Charles Dow did not contain explicitly defined investing "rules", so some assumptions and interpretations are necessary. Many technical analysts consider Dow Theory's definition of a trend and its insistence on studying price action as the main premises of modern technical analysis.

##  References
1. The Dow Theory: William Peter Hamilton's Record Reconsidered

Scott Peterson, "Technically, A Challenge for Blue Chips", The Wall Street Journal, Vol. 250, No. 122, November 23, 2007.


# Candlestick chart

Candlestick chart of EUR/USD

A candlestick chart is a style of bar chart used primarily to describe price movements of a security, derivative, or currency over time. It combines a line chart and a bar chart, in that each bar represents the range of price movement over a given time interval. It is most often used in technical analysis of equity and currency price patterns. Candlesticks appear superficially similar to error bars, but are unrelated.


##  History

Candlestick charts are thought[by whom?] to have been developed in the 16th century by Japanese rice traders.[citation needed] They were introduced to the Western world by Steve Nison in his book, Japanese Candlestick Charting Techniques.[1]

## The basic candlestick

Candlesticks are usually composed of the body (black or white), and an upper and a lower shadow (wick): the area between the open and the close is called the real body, price excursions above and below the real body are called shadows. The wick illustrates the highest and lowest traded prices of a security during the time interval represented. The body illustrates the opening and closing trades. If the security closed higher than it opened, the body is white or unfilled, with the opening price at the bottom of the body and the closing price at the top. If the security closed lower than it opened, the body is black, with the opening price at the top and the closing price at the bottom. A candlestick need not have either a body or a wick. To better highlight price movements, modern candlestick charts (especially those displayed digitally) often replace the black or white of the candlestick body with colors such as red (for a lower closing) and blue or green (for a higher closing).
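The anatomy just described can be expressed as a small helper; the function name and the returned labels are illustrative, not standard terminology beyond "body" and "shadow":

```python
def candle_anatomy(open_, high, low, close):
    """Decompose one candlestick into its body color and shadows, following
    the white (close above open) / black (close below open) convention."""
    if close > open_:
        body = "white"   # up candle: close above open
    elif close < open_:
        body = "black"   # down candle: close below open
    else:
        body = "doji"    # open equals close: no real body
    upper_shadow = high - max(open_, close)  # wick above the body
    lower_shadow = min(open_, close) - low   # wick below the body
    return body, upper_shadow, lower_shadow
```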
 Candlestick patterns

Further information: Candlestick pattern

In addition to the rather simple patterns depicted in the section above, there are more complex and difficult patterns which have been identified since the charting method's inception. Complex patterns can be colored or highlighted for better visualization.

Candlestick charts also convey more information than other forms of charts, such as open-high-low-close charts. Just as with bar charts, they display the absolute values of the open, high, low, and closing price for a given period. But they also show how those prices relate to the prior periods' prices, so one can tell by looking at one bar whether the price action is higher or lower than the prior one. They are also visually easier to look at[citation needed], and can be coloured for even better definition. Rather than using the open-high-low-close for a given time period (for example, 5 minute, 1 hour, 1 day, 1 month), candlesticks can also be constructed using the open-high-low-close of a specified volume range (for example, 1,000; 100,000; 1 million shares per candlestick).
 Use of candlestick charts

Candlestick charts are a visual aid for decision making in stock, foreign exchange, commodity, and option trading. For example, when the bar is white and high relative to other time periods, it means buyers are very bullish. The opposite is true for a black bar.

##  Heikin-Ashi candlesticks

Heikin-Ashi (Japanese for 'average bar') candlesticks are a weighted version of candlesticks calculated with the following formula:

Open = (open of previous bar + close of previous bar) / 2
Close = (open + high + low + close) / 4
High = maximum of high, open, or close (whichever is highest)
Low = minimum of low, open, or close (whichever is lowest)

Heikin-Ashi candlesticks must be used with caution with regard to price, as the body does not necessarily sync up with the actual open/close. Unlike regular candlesticks, a long wick shows more strength, whereas the same period on a standard chart might show a long body with little or no wick. Depending on the software or user preference, Heikin-Ashi may be used to chart the price (instead of line, bar, or candlestick), as an indicator overlaid on a regular chart, or as an indicator plotted on a separate window.
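A minimal sketch of the formulas above. The text is ambiguous about whether "previous bar" means the previous raw bar or the previous Heikin-Ashi bar; the usual convention, assumed here, is the previous Heikin-Ashi bar, with the first bar seeded from its own raw open and close:

```python
def heikin_ashi(bars):
    """Convert regular OHLC bars into Heikin-Ashi bars.
    `bars` is a list of (open, high, low, close) tuples."""
    ha = []
    for i, (o, h, l, c) in enumerate(bars):
        ha_close = (o + h + l + c) / 4                 # average of the raw bar
        if i == 0:
            ha_open = (o + c) / 2                      # seed: no previous HA bar
        else:
            prev_open, _, _, prev_close = ha[-1]
            ha_open = (prev_open + prev_close) / 2     # midpoint of previous HA bar
        ha_high = max(h, ha_open, ha_close)
        ha_low = min(l, ha_open, ha_close)
        ha.append((ha_open, ha_high, ha_low, ha_close))
    return ha
```

Because each open is anchored to the previous bar's midpoint, runs of same-colored Heikin-Ashi candles persist longer than on a raw chart, which is the smoothing effect the section describes.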

##  See also

- Spinning top (candlestick pattern)
- Kagi chart
- Pivot point calculations
- Chart pattern
- Open-high-low-close chart
- Hikkake Pattern

# Elliott Wave Principle

The Elliott Wave Principle is a form of technical analysis that traders use to analyze financial market cycles and forecast market trends by identifying extremes in investor psychology, highs and lows in prices, and other collective factors. Ralph Nelson Elliott (1871–1948), a professional accountant, discovered the underlying social principles and developed the analytical tools in the 1930s. He proposed that market prices unfold in specific patterns, which practitioners today call Elliott waves, or simply waves. Elliott published his theory of market behavior in the book The Wave Principle in 1938, summarized it in a series of articles in Financial World magazine in 1939, and covered it most comprehensively in his final major work, Nature's Laws: The Secret of the Universe in 1946. Elliott stated that "because man is subject to rhythmical procedure, calculations having to do with his activities can be projected far into the future with a justification and certainty heretofore unattainable."[1]

Economic Waves series (see Business cycles):

| Cycle/Wave Name | Years |
| --- | --- |
| Kitchin inventory | 3–5 |
| Juglar fixed investment | 7–11 |
| Kuznets infrastructural investment | 15–25 |
| Kondratiev wave | 45–60 |


##  Overall design

From R.N. Elliott's essay, "The Basis of the Wave Principle," October 1940.

The Elliott Wave Principle posits that collective investor psychology, or crowd psychology, moves between optimism and pessimism in natural sequences. These mood swings create patterns evidenced in the price movements of markets at every degree of trend or time scale. In Elliott's model, market prices alternate between an impulsive, or motive, phase and a corrective phase on all time scales of trend, as the illustration shows. Impulses are always subdivided into a set of five lower-degree waves, alternating again between motive and corrective character, so that waves 1, 3, and 5 are impulses, and waves 2 and 4 are smaller retraces of waves 1 and 3. Corrective waves subdivide into three smaller-degree waves, starting with a five-wave counter-trend impulse, a retrace, and another impulse. In a bear market the dominant trend is downward, so the pattern is reversed: five waves down and three up. Motive waves always move with the trend, while corrective waves move against it.

##  Degree
The patterns link to form five- and three-wave structures which themselves underlie self-similar wave structures of increasing size or higher degree. Note the lowermost of the three idealized cycles. In the first small five-wave sequence, waves 1, 3 and 5 are motive, while waves 2 and 4 are corrective. This signals that the movement of the wave one degree higher is upward. It also signals the start of the first small three-wave corrective sequence. After the initial five waves up and three waves down, the sequence begins again and the self-similar fractal geometry begins to unfold according to the five- and three-wave structure which it underlies one degree higher. The completed motive pattern includes 89 waves, followed by a completed corrective pattern of 55 waves.[2]

Each degree of a pattern in a financial market has a name. Practitioners use symbols for each wave to indicate both function and degree: numbers for motive waves, letters for corrective waves (shown in the highest of the three idealized series of wave structures or degrees). Degrees are relative; they are defined by form, not by absolute size or duration. Waves of the same degree may be of very different size and/or duration.[2]

The classification of a wave at any particular degree can vary, though practitioners generally agree on the standard order of degrees (approximate durations given):

- Grand supercycle: multi-century
- Supercycle: multi-decade (about 40–70 years)
- Cycle: one year to several years (or even several decades under an Elliott Extension)
- Primary: a few months to a couple of years
- Intermediate: weeks to months
- Minor: weeks
- Minute: days
- Minuette: hours
- Subminuette: minutes

##  Elliott Wave personality and characteristics

Elliott wave analysts (or Elliotticians) hold that each individual wave has its own signature or characteristic, which typically reflects the psychology of the moment.[2][3] Understanding those personalities is key to the application of the Wave Principle; they are defined below. (Definitions assume a bull market in equities; the characteristics apply in reverse in bear markets.)
Five wave pattern (dominant trend)

Wave 1: Wave one is rarely obvious at its inception. When the first wave of a new bull market begins, the fundamental news is almost universally negative. The previous trend is considered still strongly in force. Fundamental analysts continue to revise their earnings estimates lower; the economy probably does not look strong. Sentiment surveys are decidedly bearish, put options are in vogue, and implied volatility in the options market is high. Volume might increase a bit as prices rise, but not by enough to alert many technical analysts.

Wave 2: Wave two corrects wave one, but can never extend beyond the starting point of wave one. Typically, the news is still bad. As prices retest the prior low, bearish sentiment quickly builds, and "the crowd" haughtily reminds all that the bear market is still deeply ensconced. Still, some positive signs appear for those who are looking: volume should be lower during wave two than during wave one, prices usually do not retrace more than 61.8% (see Fibonacci section below) of the wave one gains, and prices should fall in a three wave pattern.

Wave 3: Wave three is usually the largest and most powerful wave in a trend (although some research suggests that in commodity markets, wave five is the largest). The news is now positive and fundamental analysts start to raise earnings estimates. Prices rise quickly, corrections are short-lived and shallow. Anyone looking to "get in on a pullback" will likely miss the boat. As wave three starts, the news is probably still bearish, and most market players remain negative; but by wave three's midpoint, "the crowd" will often join the new bullish trend. Wave three often extends wave one by a ratio of 1.618:1.

Wave 4: Wave four is typically clearly corrective. Prices may meander sideways for an extended period, and wave four typically retraces less than 38.2% of wave three (see Fibonacci relationships below). Volume is well below that of wave three. This is a good place to buy a pullback if you understand the potential ahead for wave 5. Still, fourth waves are often frustrating because of their lack of progress in the larger trend.

Wave 5: Wave five is the final leg in the direction of the dominant trend. The news is almost universally positive and everyone is bullish. Unfortunately, this is when many average investors finally buy in, right before the top. Volume is often lower in wave five than in wave three, and many momentum indicators start to show divergences (prices reach a new high but the indicators do not reach a new peak). At the end of a major bull market, bears may very well be ridiculed (recall how forecasts for a top in the stock market during 2000 were received).

Three wave pattern (corrective trend)

Wave A: Corrections are typically harder to identify than impulse moves. In wave A of a bear market, the fundamental news is usually still positive. Most analysts see the drop as a correction in a still-active bull market. Some technical indicators that accompany wave A include increased volume, rising implied volatility in the options markets and possibly a turn higher in open interest in related futures markets.

Wave B: Prices reverse higher, which many see as a resumption of the now long-gone bull market. Those familiar with classical technical analysis may see the peak as the right shoulder of a head and shoulders reversal pattern. The volume during wave B should be lower than in wave A. By this point, fundamentals are probably no longer improving, but they most likely have not yet turned negative.

Wave C: Prices move impulsively lower in five waves. Volume picks up, and by the third leg of wave C, almost everyone realizes that a bear market is firmly entrenched. Wave C is typically at least as large as wave A and often extends to 1.618 times wave A or beyond.

##  Pattern recognition and fractals

Elliott's market model relies heavily on looking at price charts. Practitioners study developing trends to distinguish the waves and wave structures, and discern what prices may do next; thus the application of the wave principle is a form of pattern recognition. The structures Elliott described also meet the common definition of a fractal (self-similar patterns appearing at every degree of trend). Elliott wave practitioners say that just as naturally occurring fractals often expand and grow more complex over time, the model shows that collective human psychology develops in natural patterns, via buying and selling decisions reflected in market prices: "It's as though we are somehow programmed by mathematics. Seashell, galaxy, snowflake or human: we're all bound by the same order."[4]

##  Elliott wave rules and guidelines

A correct Elliott wave "count" must observe three rules:

1. Wave 2 always retraces less than 100% of wave 1.
2. Wave 3 cannot be the shortest of the three impulse waves, namely waves 1, 3 and 5.
3. Wave 4 does not overlap with the price territory of wave 1, except in the rare case of a diagonal triangle.
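The three rules can be checked mechanically for an idealized rising impulse. The six pivot prices and the helper name below are illustrative, and the rare diagonal-triangle exception to rule 3 is ignored in this sketch:

```python
def check_impulse_rules(pivots):
    """Check the three Elliott rules against an idealized rising impulse.
    `pivots` are the six prices p0..p5 bounding waves 1 through 5."""
    p0, p1, p2, p3, p4, p5 = pivots
    w1, w3, w5 = p1 - p0, p3 - p2, p5 - p4   # lengths of the impulse waves
    rule1 = p2 > p0                    # wave 2 retraces less than 100% of wave 1
    rule2 = not (w3 < w1 and w3 < w5)  # wave 3 is never the shortest impulse wave
    rule3 = p4 > p1                    # wave 4 stays out of wave 1's territory
    return rule1 and rule2 and rule3
```

For a falling impulse the comparisons would be mirrored; real wave counting also involves the guidelines below, which this check does not model.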

A common guideline observes that in a five-wave pattern, waves 2 and 4 will often take alternate forms; a sharp move in wave 2, for example, will suggest a mild move in wave 4. Corrective wave patterns unfold in forms known as zigzags, flats, or triangles. In turn these corrective patterns can come together to form more complex corrections.[3]

##  Fibonacci relationships

R. N. Elliott's analysis of the mathematical properties of waves and patterns eventually led him to conclude that "The Fibonacci Summation Series is the basis of The Wave Principle".[1] Numbers from the Fibonacci sequence surface repeatedly in Elliott wave structures, including motive waves (1, 3, 5), a single full cycle (8 waves), and the completed motive (89 waves) and corrective (55 waves) patterns. Elliott developed his market model before he realized that it reflects the Fibonacci sequence. "When I discovered The Wave Principle action of market trends, I had never heard of either the Fibonacci Series or the Pythagorean Diagram".[1] The Fibonacci sequence is also closely connected to the Golden ratio (1.618). Practitioners commonly use this ratio and related ratios to establish support and resistance levels for market waves, namely the price points which help define the parameters of a trend.[5] See Fibonacci retracement.

Finance professor Roy Batchelor and researcher Richard Ramyar, a former Director of the United Kingdom Society of Technical Analysts and Head of UK Asset Management Research at Reuters Lipper, studied whether Fibonacci ratios appear non-randomly in the stock market, as Elliott's model predicts. The researchers said the "idea that prices retrace to a Fibonacci ratio or round fraction of the previous trend clearly lacks any scientific rationale". They also said "there is no significant difference between the frequencies with which price and time ratios occur in cycles in the Dow Jones Industrial Average, and frequencies which we would expect to occur at random in such a time series".[6] Robert Prechter replied to the Batchelor-Ramyar study, saying that it "does not challenge the validity of any aspect of the Wave Principle...it supports wave theorists' observations," and that because the authors had examined ratios between prices achieved in filtered trends rather than Elliott waves, "their method does not address actual claims by wave theorists".[7] The Socionomics Institute also reviewed data in the Batchelor-Ramyar study, and said these data show "Fibonacci ratios do occur more often in the stock market than would be expected in a random environment".[8]
Example of the Elliott Wave Principle and the Fibonacci relationship

From sakuragi_indofx, "Trading never been so easy eh," December 2007.

The GBP/JPY currency chart gives an example of a fourth wave retracement apparently halting between the 38.2% and 50.0% Fibonacci retracements of a completed third wave. The chart also highlights how the Elliott Wave Principle works well with other technical analysis tendencies as prior support (the bottom of wave-1) acts as resistance to wave-4. The wave count depicted in the chart would be invalidated if GBP/JPY moves above the wave-1 low.
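Retracement levels like the 38.2% and 50.0% marks mentioned above are computed from a swing's extremes. A quick illustrative sketch (the price values and function name are invented for the example):

```python
# Common Fibonacci retracement ratios used by practitioners.
RATIOS = (0.236, 0.382, 0.500, 0.618)

def retracement_levels(swing_start, swing_end):
    """Price levels at which a correction may stall, measured back
    from swing_end toward swing_start."""
    move = swing_end - swing_start
    return {r: swing_end - move * r for r in RATIOS}

# A hypothetical third wave rising from 150.00 to 250.00:
levels = retracement_levels(150.0, 250.0)
print(round(levels[0.382], 1), round(levels[0.5], 1))  # 211.8 200.0
```

A fourth-wave pullback halting between those two printed levels would match the behaviour described for the GBP/JPY chart.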

##  After Elliott

Following Elliott's death in 1948, other market technicians and financial professionals continued to use the wave principle and provide forecasts to investors. Charles Collins, who had published Elliott's "Wave Principle" and helped introduce Elliott's theory to Wall Street, ranked Elliott's contributions to technical analysis on a level with Charles Dow. Hamilton Bolton, founder of The Bank Credit Analyst, provided wave analysis to a wide readership in the 1950s and 1960s. Bolton introduced Elliott's wave principle to A.J. Frost, who provided weekly financial commentary on the Financial News Network in the 1980s. Frost co-authored Elliott Wave Principle with Robert Prechter in 1978.

##  Rediscovery and current use

Robert Prechter came across Elliott's works while working as a market technician at Merrill Lynch. His prominence as a forecaster during the bull market of the 1980s brought the greatest exposure to date to Elliott's work, and today Prechter remains the most widely known Elliott analyst.[9] Among market technicians, wave analysis is widely accepted as a component of their trade. Elliott's Wave principle is among the methods included on the exam that analysts must pass to earn the Chartered Market Technician (CMT) designation, the professional accreditation developed by the Market Technicians Association (MTA). Robin Wilkin, ex-Global Head of FX and Commodity Technical Strategy at JPMorgan Chase, says "the Elliott Wave principle ... provides a probability framework as to when to enter a particular market and where to get out, whether for a profit or a loss."[10] Jordan Kotick, Global Head of Technical Strategy at Barclays Capital and past President of the Market Technicians Association, has said that R. N. Elliott's "discovery was well ahead of its time. In fact, over the last decade or two, many prominent academics have embraced Elliott's idea and have been aggressively advocating the existence of financial market fractals."[11] One such academic is the physicist Didier Sornette, visiting professor at the Department of Earth and Space Science and the Institute of Geophysics and Planetary Physics at UCLA. In a paper he co-authored in 1996 ("Stock Market Crashes, Precursors and Replicas") Sornette said,
It is intriguing that the log-periodic structures documented here bear some similarity with the "Elliott waves" of technical analysis ... A lot of effort has been developed in finance both by academic and trading institutions and more recently by physicists (using some of their statistical tools developed to deal with complex times series) to analyze past data to get information on the future. The 'Elliott wave' technique is probably the most famous in this field. We speculate that the "Elliott waves", so strongly rooted in the financial analysts folklore, could be a signature of an underlying critical structure of the stock market.[12]

Paul Tudor Jones, the billionaire commodity trader, calls Prechter and Frost's standard text on Elliott "a classic," and one of "the four Bibles of the business":
[Magee and Edwards'] Technical Analysis of Stock Trends and The Elliott Wave Theorist both give very specific and systematic ways to approach developing great reward/risk ratios for entering into a business contract with the marketplace, which is what every trade should be if properly and thoughtfully executed.[13]

##  Criticism
The premise that markets unfold in recognizable patterns contradicts the efficient market hypothesis, which states that prices cannot be predicted from market data such as moving averages and volume. By this reasoning, if successful market forecasts were possible, investors would buy (or sell) when the method predicted a price increase (or decrease), to the point that prices would rise (or fall) immediately, thus destroying the profitability and predictive power of the method. In efficient markets, knowledge of the Elliott Wave Principle among traders would lead to the disappearance of the very patterns they tried to anticipate, rendering the method, and all forms of technical analysis, useless. Benoit Mandelbrot has questioned whether Elliott waves can predict financial markets:
But Wave prediction is a very uncertain business. It is an art to which the subjective judgement of the chartists matters more than the objective, replicable verdict of the numbers. The record of this, as of most technical analysis, is at best mixed.[14]

Robert Prechter had previously stated that ideas in an article by Mandelbrot[15] "originated with Ralph Nelson Elliott, who put them forth more comprehensively and more accurately with respect to real-world markets in his 1938 book The Wave Principle."[16] Critics also warn the wave principle is too vague to be useful, since it cannot consistently identify when a wave begins or ends, and that Elliott wave forecasts are prone to subjective revision. Some who advocate technical analysis of markets have questioned the value of Elliott wave analysis. Technical analyst David Aronson wrote:[17]
The Elliott Wave Principle, as popularly practiced, is not a legitimate theory, but a story, and a compelling one that is eloquently told by Robert Prechter. The account is especially persuasive because EWP has the seemingly remarkable ability to fit any segment of market history down to its most minute fluctuations. I contend this is made possible by the method's loosely defined rules and the ability to postulate a large number of nested waves of varying magnitude. This gives the Elliott analyst the same freedom and flexibility that allowed pre-Copernican astronomers to explain all observed planet movements even though their underlying theory of an Earth-centered universe was wrong.

# Net present value

In finance, the net present value (NPV) or net present worth (NPW)[1] of a time series of cash flows, both incoming and outgoing, is defined as the sum of the present values (PVs) of the individual cash flows of the same entity. In the case when all future cash flows are incoming (such as coupons and principal of a bond) and the only outflow of cash is the purchase price, the NPV is simply the PV of future cash flows minus the purchase price (which is its own PV). NPV is a central tool in discounted cash flow (DCF) analysis, and is a standard method for using the time value of money to appraise long-term projects. Used for capital budgeting, and widely throughout economics, finance, and accounting, it measures the excess or shortfall of cash flows, in present value terms, once financing charges are met. The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a price; the converse process in DCF analysis, taking a sequence of cash flows and a price as input and inferring as output a discount rate (the discount rate which would yield the given price as NPV), is called the yield, and is more widely used in bond trading.

Contents

1. Formula
2. The discount rate
3. NPV in decision making
4. Example
5. Common pitfalls
6. History
7. Alternative capital budgeting methods
8. References

##  Formula

Each cash inflow/outflow is discounted back to its present value (PV), and these are then summed, so NPV is the sum of all terms:

    NPV = Σ_{t=0}^{N} R_t / (1 + i)^t

where

- t is the time of the cash flow
- i is the discount rate (the rate of return that could be earned on an investment in the financial markets with similar risk), i.e. the opportunity cost of capital
- R_t is the net cash flow (the amount of cash, inflow minus outflow) at time t. For educational purposes, R_0 is commonly placed to the left of the sum to emphasize its role as (minus) the investment.

When the periodic net cash inflows are equal, the present value can be found by multiplying the annual net cash inflow by an annuity factor and subtracting the initial cash outlay; where the cash flows are not equal in amount, the formula above is applied to each cash flow separately. Under some conventions, cash flows arising within the first 12 months are not discounted for NPV purposes.[2]
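The summation above translates directly into code. The sketch below is illustrative; the function and variable names are my own, not from any particular library:

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[t] is the
    net cash flow R_t at time t (t = 0 is the initial, undiscounted flow)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# An initial outlay of 1000 followed by three inflows of 400,
# discounted at 8% per period:
result = npv(0.08, [-1000, 400, 400, 400])
print(round(result, 2))  # 30.84
```

The positive result means the three discounted inflows more than repay the initial outlay at the chosen discount rate.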

##  The discount rate

Main article: Discount rate

The rate used to discount future cash flows to the present value is a key variable of this process. A firm's weighted average cost of capital (after tax) is often used, but many people believe that it is appropriate to use higher discount rates to adjust for risk or other factors. A variable discount rate, with higher rates applied to cash flows occurring further along the time span, might be used to reflect the yield curve premium for long-term debt.

Another approach to choosing the discount rate is to decide the rate that the capital needed for the project could return if invested in an alternative venture. If, for example, the capital required for Project A can earn five percent elsewhere, use this discount rate in the NPV calculation to allow a direct comparison to be made between Project A and the alternative. Related to this concept is the firm's reinvestment rate, which can be defined as the average rate of return on the firm's investments. When analyzing projects in a capital-constrained environment, it may be appropriate to use the reinvestment rate rather than the firm's weighted average cost of capital as the discount factor, since it reflects the opportunity cost of investment rather than the possibly lower cost of capital.

An NPV calculated using variable discount rates (if they are known for the duration of the investment) better reflects the real situation than one calculated from a constant discount rate for the entire investment duration. Refer to the tutorial article written by Samuel Baker[3] for a more detailed discussion of the relationship between the NPV and the discount rate.

For some professional investors, their investment funds are committed to target a specified rate of return. In such cases, that rate of return should be selected as the discount rate for the NPV calculation. In this way, a direct comparison can be made between the profitability of the project and the desired rate of return. To some extent, the selection of the discount rate is dependent on the use to which it will be put. If the intent is simply to determine whether a project will add value to the company, using the firm's weighted average cost of capital may be appropriate. If trying to decide between alternative investments in order to maximize the value of the firm, the corporate reinvestment rate would probably be a better choice. Using variable rates over time, or discounting "guaranteed" cash flows differently from "at risk" cash flows may be a superior methodology, but is seldom used in practice. Using the discount rate to adjust for risk is often difficult to do in practice (especially internationally), and is difficult to do well. An alternative to using discount factor to adjust for risk is to explicitly correct the cash flows for the risk elements using rNPV or a similar method, then discount at the firm's rate.

##  NPV in decision making

NPV is an indicator of how much value an investment or project adds to the firm. With a particular project, if Rt is positive, the project generates a discounted cash inflow at time t; if Rt is negative, it generates a discounted cash outflow at time t. Appropriately risked projects with a positive NPV could be accepted. This does not necessarily mean that they should be undertaken, since NPV at the cost of capital may not account for opportunity cost, i.e. comparison with other available investments. In financial theory, if there is a choice between two mutually exclusive alternatives, the one yielding the higher NPV should be selected.
| If...   | It means...                                                   | Then...                                                                                                                                                                              |
|---------|---------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| NPV > 0 | the investment would add value to the firm                    | the project may be accepted                                                                                                                                                            |
| NPV < 0 | the investment would subtract value from the firm             | the project should be rejected                                                                                                                                                         |
| NPV = 0 | the investment would neither gain nor lose value for the firm | we should be indifferent about accepting or rejecting the project; it adds no monetary value, so the decision should be based on other criteria, e.g. strategic positioning or other factors not explicitly included in the calculation |

##  Example

A corporation must decide whether to introduce a new product line. The new product will have startup costs, operational costs, and incoming cash flows over six years. This project will have an immediate (t = 0) cash outflow of \$100,000 (which might include machinery and employee training costs). Other cash outflows for years 1-6 are expected to be \$5,000 per year. Cash inflows are expected to be \$30,000 each for years 1-6. All cash flows are after-tax, and there are no cash flows expected after year 6. The required rate of return is 10%. The present value (PV) can be calculated for each year:
| Year | Net cash flow | Present value |
|------|---------------|---------------|
| T=0  | -\$100,000    | -\$100,000    |
| T=1  | \$25,000      | \$22,727      |
| T=2  | \$25,000      | \$20,661      |
| T=3  | \$25,000      | \$18,783      |
| T=4  | \$25,000      | \$17,075      |
| T=5  | \$25,000      | \$15,523      |
| T=6  | \$25,000      | \$14,112      |

The sum of all these present values is the net present value, which equals \$8,881.52. Since the NPV is greater than zero, it would be better to invest in the project than to do nothing, and the corporation should invest in this project if there is no mutually exclusive alternative with a higher NPV. The same example in Excel formulae:

NPV(rate, net_inflow) + initial_investment
PV(rate, year_number, yearly_net_inflow)

More realistic problems would need to consider other factors, generally including the calculation of taxes, uneven cash flows, and Terminal Value as well as the availability of alternate investment opportunities.
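The same calculation can be reproduced outside a spreadsheet; a minimal Python equivalent of the worked example above (variable names are illustrative):

```python
rate = 0.10
initial_outlay = -100_000
net_inflow = 30_000 - 5_000  # yearly cash inflow minus yearly cash outflow

# Discount each year's net inflow back to the present and add the outlay.
npv = initial_outlay + sum(net_inflow / (1 + rate) ** t for t in range(1, 7))
print(round(npv, 2))  # 8881.52
```

This matches the \$8,881.52 total obtained by summing the present values in the table.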

##  Common pitfalls

If, for example, the Rt are generally negative late in the project (e.g., an industrial or mining project might have clean-up and restoration costs), then at that stage the company owes money, so a high discount rate is not cautious but too optimistic. Some people see this as a problem with NPV. A way to avoid this problem is to include explicit provision for financing any losses after the initial investment, that is, to explicitly calculate the cost of financing such losses. Another common pitfall is to adjust for risk by adding a premium to the discount rate. Whilst a bank might charge a higher rate of interest for a risky project, that does not mean that this is a valid approach to adjusting a net present value for risk, although it can be a reasonable approximation in some specific cases. One reason such an approach may not work well can be seen from the following: if some risk is incurred resulting in some losses, then a discount rate in the NPV will reduce the impact of such losses below their true financial cost. A rigorous approach to risk requires identifying and valuing risks explicitly, e.g. by actuarial or Monte Carlo techniques, and explicitly calculating the cost of financing any losses incurred.

Yet another issue can result from the compounding of the risk premium. R is a composite of the risk-free rate and the risk premium. As a result, future cash flows are discounted by both the risk-free rate and the risk premium, and this effect is compounded by each subsequent cash flow. This compounding results in a much lower NPV than might be otherwise calculated. The certainty equivalent model can be used to account for the risk premium without compounding its effect on present value.[citation needed] Another issue with relying on NPV is that it does not provide an overall picture of the gain or loss of executing a certain project. To see a percentage gain relative to the investments for the project, the internal rate of return or other efficiency measures are usually used as a complement to NPV. Non-specialist users frequently make the error of computing NPV based on cash flows after interest. This is wrong because it double counts the time value of money. Free cash flow should be used as the basis for NPV computations.

##  History

Net present value as a valuation methodology dates at least to the 19th century. Karl Marx refers to NPV as fictitious capital, and the calculation as "capitalising", writing:[4]

The forming of a fictitious capital is called capitalising. Every periodically repeated income is capitalised by calculating it on the average rate of interest, as an income which would be realised by a capital at this rate of interest.

In mainstream neo-classical economics, NPV was formalized and popularized by Irving Fisher, in his 1907 The Rate of Interest, and became included in textbooks from the 1950s onwards, starting in finance texts.[5][6]

##  Alternative capital budgeting methods

- Adjusted present value (APV): the net present value of a project if financed solely by ownership equity, plus the present value of all the benefits of financing.
- Accounting rate of return (ARR): a ratio similar to IRR and MIRR.
- Cost-benefit analysis: includes issues other than cash, such as time savings.
- Internal rate of return (IRR): calculates the rate of return of a project while disregarding the absolute amount of money to be gained.
- Modified internal rate of return (MIRR): similar to IRR, but makes explicit assumptions about the reinvestment of the cash flows; sometimes called the growth rate of return.
- Payback period: measures the time required for the cash inflows to equal the original outlay; it measures risk, not return.
- Real option method: attempts to value managerial flexibility that is assumed away in NPV.

# Adjusted present value

Adjusted present value (APV) is a business valuation method. APV is the net present value of a project if financed solely by ownership equity, plus the present value of all the benefits of financing. It was first studied by Stewart Myers, a professor at the MIT Sloan School of Management, and later theorized by Lorenzo Peccati, professor at Bocconi University, in 1973. The method is to calculate the NPV of the project as if it is all-equity financed (the so-called base case). The base-case NPV is then adjusted for the benefits of financing. Usually, the main benefit is a tax shield resulting from the tax deductibility of interest payments. Another benefit can be subsidized borrowing at sub-market rates. The APV method is especially effective when a leveraged buyout case is considered, since the company is loaded with an extreme amount of debt, so the tax shield is substantial. Technically, an APV valuation model looks much the same as a standard DCF model. However, instead of WACC, cash flows are discounted at the unlevered cost of equity, and tax shields at either the cost of debt (Myers) or, following later academics, also at the unlevered cost of equity.[1] APV and the standard DCF approaches should give the identical result if the capital structure remains stable.
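As a rough numeric sketch of the method: all figures, rates, and names below are invented for illustration, and the tax shield is discounted at the cost of debt, per the Myers variant described above:

```python
def present_value(rate, cash_flows):
    """PV of cash_flows, where cash_flows[t] occurs at the end of period t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Base case: NPV of the project as if all-equity financed,
# discounted at an assumed unlevered cost of equity of 12%.
base_npv = present_value(0.12, [-1000, 500, 500, 500])

# Financing side effect: interest tax shield on 600 of debt at 6%
# with a 30% tax rate, discounted at the cost of debt.
debt, kd, tax = 600, 0.06, 0.30
shield = [0] + [debt * kd * tax] * 3   # 10.8 per year for three years
shield_pv = present_value(kd, shield)

apv = base_npv + shield_pv             # base-case NPV plus financing benefit
```

Under these assumed numbers, the financing benefit raises the project's value by roughly 29 above the all-equity base case.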

# Accounting rate of return

Accounting rate of return, also known as the average rate of return or ARR, is a financial ratio used in capital budgeting.[1] The ratio does not take into account the concept of time value of money. ARR calculates the return generated from the net income of the proposed capital investment. The ARR is a percentage return: if ARR = 7%, the project is expected to earn seven cents out of each dollar invested. If the ARR is equal to or greater than the required rate of return, the project is acceptable; if it is less than the desired rate, it should be rejected. When comparing investments, the higher the ARR, the more attractive the investment.[2] Over one-half of large firms calculate ARR when appraising projects.[3]
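One common formulation divides average annual net income by the initial investment (definitions vary; some versions use average book value as the denominator). A small sketch with invented figures:

```python
def accounting_rate_of_return(annual_profits, initial_investment):
    """ARR = average annual net income / initial investment.
    Note: ignores the time value of money, as described above."""
    average_profit = sum(annual_profits) / len(annual_profits)
    return average_profit / initial_investment

# A hypothetical $100,000 investment expected to earn net income of
# 6,000, 7,000 and 8,000 over three years:
arr = accounting_rate_of_return([6000, 7000, 8000], 100_000)
print(arr)  # 0.07, i.e. seven cents per dollar invested
```

Because the cash-flow timing is ignored, a project earning 8,000 early and 6,000 late would score the same ARR.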

# Internal rate of return

The internal rate of return (IRR) is a rate of return used in capital budgeting to measure and compare the profitability of investments. It is also called the discounted cash flow rate of return (DCFROR) or the rate of return (ROR).[1] In the context of savings and loans the IRR is also called the effective interest rate. The term internal refers to the fact that its calculation does not incorporate environmental factors (e.g., the interest rate or inflation).

##  Definition

Showing the position of the IRR on the graph of NPV(r) (r is labelled 'i' in the graph)

The internal rate of return on an investment or project is the "annualized effective compounded return rate" or "rate of return" that makes the net present value of all cash flows (both positive and negative) from a particular investment equal to zero. In more specific terms, the IRR of an investment is the discount rate at which the net present value of costs (negative cash flows) of the investment equals the net present value of the benefits (positive cash flows) of the investment. Internal rates of return are commonly used to evaluate the desirability of investments or projects. The higher a project's internal rate of return, the more desirable it is to undertake the project. Assuming all projects require the same amount of up-front investment, the project with the highest IRR would be considered the best and undertaken first. A firm (or individual) should, in theory, undertake all projects or investments available with IRRs that exceed the cost of capital. Investment may be limited by availability of funds to the firm and/or by the firm's capacity or ability to manage numerous projects.

##  Uses
Because the internal rate of return is a rate quantity, it is an indicator of the efficiency, quality, or yield of an investment. This is in contrast with the net present value, which is an indicator of the value or magnitude of an investment. An investment is considered acceptable if its internal rate of return is greater than an established minimum acceptable rate of return or cost of capital. In a scenario where an investment is considered by a firm that has equity holders, this minimum rate is the cost of capital of the investment (which may be determined by the risk-adjusted cost of capital of alternative investments). This ensures that the investment is supported by equity holders since, in general, an investment whose IRR exceeds its cost of capital adds value for the company (i.e., it is economically profitable).

##  Calculation

Given a collection of pairs (time, cash flow) involved in a project, the internal rate of return follows from the net present value as a function of the rate of return. A rate of return for which this function is zero is an internal rate of return. Given the (period, cash flow) pairs (n, Cn) where n is a non-negative integer, the total number of periods N, and the net present value NPV, the internal rate of return is given by r in:

    NPV = Σ_{n=0}^{N} C_n / (1 + r)^n = 0

The period is usually given in years, but the calculation may be made simpler if r is calculated using the period in which the majority of the problem is defined (e.g., using months if most of the cash flows occur at monthly intervals) and converted to a yearly period thereafter. Any fixed time can be used in place of the present (e.g., the end of one interval of an annuity); the value obtained is zero if and only if the NPV is zero.

In the case that the cash flows are random variables, such as in the case of a life annuity, the expected values are put into the above formula. Often, the value of r cannot be found analytically. In this case, numerical methods or graphical methods must be used.
##  Example

If an investment may be given by the sequence of cash flows

| Year (n) | Cash flow (Cn) |
|----------|----------------|
| 0        | -4000          |
| 1        | 1200           |
| 2        | 1410           |
| 3        | 1875           |
| 4        | 1050           |

then the IRR r is given by

    NPV = -4000 + 1200/(1+r) + 1410/(1+r)^2 + 1875/(1+r)^3 + 1050/(1+r)^4 = 0

In this case, the answer is 14.3%.

##  Numerical solution

Since the above is a manifestation of the general problem of finding the roots of the equation NPV(r) = 0, there are many numerical methods that can be used to estimate r. For example, using the secant method, r is given by

    r_{n+1} = r_n - NPV_n (r_n - r_{n-1}) / (NPV_n - NPV_{n-1})

where r_n is considered the nth approximation of the IRR and NPV_n is the net present value evaluated at r_n. This r can be found to an arbitrary degree of accuracy.

The convergence behaviour of the sequence is governed by the following:

- If the function NPV(i) has a single real root r, then the sequence will converge reproducibly towards r.
- If the function NPV(i) has several real roots, then the sequence will converge to one of the roots, and changing the values of the initial pairs may change the root to which it converges.
- If the function NPV(i) has no real roots, then the sequence will tend towards +∞.

An appropriate choice of the initial approximations when NPV_0 < 0 may speed up convergence of r_n to r.
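The secant iteration above can be sketched in a few lines. This is a minimal illustration; the stopping tolerance and the starting guesses are arbitrary choices:

```python
def npv(rate, cash_flows):
    """Net present value, with cash_flows[n] occurring at period n."""
    return sum(cf / (1 + rate) ** n for n, cf in enumerate(cash_flows))

def irr_secant(cash_flows, r0=0.1, r1=0.2, tol=1e-9, max_iter=100):
    """Estimate the IRR by secant iteration on NPV(r) = 0."""
    for _ in range(max_iter):
        f0, f1 = npv(r0, cash_flows), npv(r1, cash_flows)
        if f1 == f0:          # flat secant; cannot improve further
            break
        r0, r1 = r1, r1 - f1 * (r1 - r0) / (f1 - f0)
        if abs(r1 - r0) < tol:
            break
    return r1

# The worked example above: -4000 followed by 1200, 1410, 1875, 1050.
est = irr_secant([-4000, 1200, 1410, 1875, 1050])
print(round(est * 100, 1))  # 14.3 (percent)
```

Starting from the same guesses r1 = 0.1 and r2 = 0.2 discussed in the text, the iteration converges to the 14.3% root within a few steps.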

##  Numerical Solution for Single Outflow and Multiple Inflows

Of particular interest is the case where the stream of payments consists of a single outflow, followed by multiple inflows occurring at equal periods. In the above notation, this corresponds to C0 < 0 and Cn ≥ 0 for n ≥ 1. In this case the NPV of the payment stream is a convex, strictly decreasing function of the interest rate, so there is always a single unique solution for the IRR. Given two estimates r1 and r2 for the IRR, the secant method equation (see above) with n = 2 will always produce an improved estimate r3. This is sometimes referred to as the Hit and Trial (or Trial and Error) method. There is, however, a much more accurate estimation formula, given by:

where

In this equation, NPV_{n,in} and NPV_{n-1,in} refer to the NPVs of the inflows only (that is, set C0 = 0 and compute NPV). For example, using the stream of payments {-4000, 1200, 1410, 1875, 1050} and initial guesses r1 = 0.1 and r2 = 0.2 gives NPV_{1,in} = 4382.1 and NPV_{2,in} = 3570.6. The accurate formula estimates the IRR as 14.35% (0.3% error), compared with IRR = 14.7% (3% error) from the secant method. If applied iteratively, either the secant method or the improved formula will always converge to the correct solution. Both the secant method and the improved formula rely on initial guesses for the IRR. The following initial guesses may be used:

    r2 = (1 + r1)^p - 1

where

    A = sum of inflows = C1 + ... + CN
##  Decision Criterion

If the IRR is greater than the cost of capital, accept the project. If the IRR is less than the cost of capital, reject the project.

##  Problems with using internal rate of return

As an investment decision tool, the calculated IRR should not be used to rate mutually exclusive projects, but only to decide whether a single project is worth investing in.

NPV vs discount rate comparison for two mutually exclusive projects. Project 'A' has a higher NPV (for certain discount rates), even though its IRR (=x-axis intercept) is lower than for project 'B' (click to enlarge)

In cases where one project has a higher initial investment than a second mutually exclusive project, the first project may have a lower IRR (expected return) but a higher NPV (increase in shareholders' wealth) and should thus be accepted over the second project (assuming no capital constraints).

IRR assumes reinvestment of interim cash flows in projects with equal rates of return (the reinvestment can be the same project or a different project). Therefore, IRR overstates the annual equivalent rate of return for a project whose interim cash flows are reinvested at a rate lower than the calculated IRR. This presents a problem, especially for high-IRR projects, since there is frequently not another project available in the interim that can earn the same rate of return as the first project. When the calculated IRR is higher than the true reinvestment rate for interim cash flows, the measure will overestimate, sometimes very significantly, the annual equivalent return from the project. The formula assumes that the company has additional projects, with equally attractive prospects, in which to invest the interim cash flows.[2] This makes IRR a suitable (and popular) choice for analyzing venture capital and other private equity investments, as these strategies usually require several cash investments throughout the project, but only see one cash outflow at the end of the project (e.g., via IPO or M&A).

Since IRR does not consider cost of capital, it should not be used to compare projects of different duration. Modified internal rate of return (MIRR) does consider cost of capital and provides a better indication of a project's efficiency in contributing to the firm's discounted cash flow.

In the case of positive cash flows followed by negative ones and then by positive ones (for example, + + - - - +), the IRR may have multiple values. In this case a discount rate may be used for the borrowing cash flow and the IRR calculated for the investment cash flow.
This applies for example when a customer makes a deposit before a specific machine is built. In a series of cash flows like (-10, 21, -11), one initially invests money, so a high rate of return is best, but then receives more than one possesses, so then one owes money, so now a low rate of return is best. In this case it is not even clear whether a high or a low IRR is better. There may even be multiple IRRs for a single project, as in this example, where the NPV is zero at 0% as well as at 10%. Examples of this type of project are strip mines and nuclear power plants, where there is usually a large cash outflow at the end of the project. In general, the IRR can be calculated by solving a polynomial equation. Sturm's theorem can be used to determine if that equation has a unique real solution. In general the IRR equation cannot be solved analytically but only iteratively. When a project has multiple IRRs it may be more convenient to compute the IRR of the project with the benefits reinvested.[2] Accordingly, MIRR is used, which has an assumed reinvestment rate, usually equal to the project's cost of capital. It has been shown[3] that with multiple internal rates of return, the IRR approach can still be interpreted in a way that is consistent with the present value approach, provided that the underlying investment stream is correctly identified as net investment or net borrowing. See also [4] for a way of identifying the relevant value of the IRR from a set of multiple IRR solutions. Despite a strong academic preference for NPV, surveys indicate that executives prefer IRR over NPV.[5] Apparently, managers find it easier to compare investments of different sizes in terms of percentage rates of return than by dollars of NPV. However, NPV remains the "more accurate" reflection of value to the business. IRR, as a measure of investment efficiency, may give better insights in capital-constrained situations. However, when comparing mutually exclusive projects, NPV is the appropriate measure.
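The (-10, 21, -11) series can be checked directly; this small sketch confirms that both 0% and 10% make the NPV vanish, so the project genuinely has two internal rates of return:

```python
def npv(rate, cash_flows):
    """Net present value, with cash_flows[n] occurring at period n."""
    return sum(cf / (1 + rate) ** n for n, cf in enumerate(cash_flows))

flows = [-10, 21, -11]
# Both rates are internal rates of return: NPV is (numerically) zero at each.
for r in (0.0, 0.10):
    assert abs(npv(r, flows)) < 1e-9
```

At 0%, the flows sum to -10 + 21 - 11 = 0 directly; at 10%, 21/1.1 - 11/1.21 = 10 exactly offsets the initial -10.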

 Mathematics
Mathematically, the value of the investment is assumed to undergo exponential growth or decay according to some rate of return (any value greater than −100%), with discontinuities for cash flows, and the IRR of a series of cash flows is defined as any rate of return that results in a net present value of zero (or equivalently, a rate of return that results in the correct value of zero after the last cash flow). Thus, internal rate(s) of return follow from the net present value as a function of the rate of return. This function is continuous. Towards a rate of return of −100% the net present value approaches infinity with the sign of the last cash flow, and towards a rate of return of positive infinity the net present value approaches the first cash flow (the one at the present). Therefore, if the first and last cash flow have a different sign there exists an internal rate of return.

Examples of time series without an IRR:

- Only negative cash flows: the NPV is negative for every rate of return.
- (−1, 1, −1): a rather small positive cash flow between two negative cash flows; the NPV is a quadratic function of 1/(1 + r), where r is the rate of return, or put differently, a quadratic function of the discount rate r/(1 + r); the highest NPV is −0.75, for r = 100%.
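The claim that the highest NPV of the series (−1, 1, −1) is −0.75, at r = 100%, can be checked numerically; a small Python sketch (the `npv` helper is an illustrative name):

```python
def npv(rate, flows):
    """Net present value of cash flows at t = 0, 1, 2, ... discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

# Scan rates from -99% to +999% in 1% steps; the NPV never reaches zero.
values = {r: npv(r / 100, (-1, 1, -1)) for r in range(-99, 1000)}
peak_rate = max(values, key=values.get)
print(peak_rate, round(values[peak_rate], 4))  # 100 -0.75
```

Since the NPV stays strictly below zero for every rate, this series indeed has no internal rate of return.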

In the case of a series of exclusively negative cash flows followed by a series of exclusively positive ones, consider the total value of the cash flows converted to a time between the negative and the positive ones. The resulting function of the rate of return is continuous and monotonically decreasing from positive infinity to negative infinity, so there is a unique rate of return for which it is zero. Hence, the IRR is also unique (and equal to this rate). Although the NPV function itself is not necessarily monotonically decreasing on its whole domain, it is at the IRR. Similarly, in the case of a series of exclusively positive cash flows followed by a series of exclusively negative ones, the IRR is also unique.

 See also

- Accounting rate of return
- Capital budgeting
- Discounted cash flow
- Modified internal rate of return
- Modified Dietz method
- Net present value
- Simple Dietz method

 References
1. ^ *Project Economics and Decision Analysis, Volume I: Deterministic Models*, M.A. Main, p. 269.
2. ^ a b "Internal Rate of Return: A Cautionary Tale".
3. ^ Hazen, G. B., "A new perspective on multiple internal rates of return," *The Engineering Economist* 48(2), 2003, pp. 31–51.
4. ^ Hartman, J. C., and Schafrick, I. C., "The relevant internal rate of return," *The Engineering Economist* 49(2), 2004, pp. 139–158.
5. ^ Pogue, M. (2004). "Investment Appraisal: A New Approach." *Managerial Auditing Journal*, Vol. 19 No. 4, pp. 565–570.

Adjusted present value (APV)

## APV = Base-case NPV + PV of financing effect

 Example
 Given data

- Initial investment = 1 000 000
- Expected cash flow to equity = 95 000 in perpetuity
- Unlevered cost of equity = 10%
- Cost of debt = 5%
- Actual interest on debt = 5%
- Tax rate = 35%
- Project is financed with 500 000 of debt and 500 000 of equity; this capital structure is kept in perpetuity

 Calculation

- Base-case NPV @ 10% = −1 000 000 + (95 000 / 10%) = −50 000
- PV of tax shield @ 5% = (0.05 × 500 000 × 0.35) / 0.05 = 175 000
- APV = −50 000 + 175 000 = 125 000

Note how substantial the effect of the tax shield can be. The tax shield, like the cash flow to equity, is a perpetuity.
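The perpetuity arithmetic of the example can be written out directly; a short Python sketch (variable names are illustrative). Note that the base-case NPV is negative on its own; the tax shield is what makes the project worthwhile:

```python
# Figures from the worked example; both streams are perpetuities.
investment = 1_000_000
cash_flow_to_equity = 95_000
unlevered_cost_of_equity = 0.10
cost_of_debt = 0.05
debt = 500_000
tax_rate = 0.35

# Base-case NPV: perpetuity value of the cash flows at the unlevered rate,
# minus the initial outlay.
base_case_npv = -investment + cash_flow_to_equity / unlevered_cost_of_equity

# PV of the interest tax shield, itself a perpetuity discounted at the
# cost of debt: (interest rate x debt x tax rate) / cost of debt.
tax_shield_pv = (cost_of_debt * debt * tax_rate) / cost_of_debt

apv = base_case_npv + tax_shield_pv
print(round(base_case_npv), round(tax_shield_pv), round(apv))  # -50000 175000 125000
```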

Payback period

Payback period in capital budgeting refers to the period of time required for the return on an investment to "repay" the sum of the original investment. For example, a $1000 investment which returned $500 per year would have a two-year payback period. The time value of money is not taken into account. Payback period intuitively measures how long something takes to "pay for itself." All else being equal, shorter payback periods are preferable to longer payback periods.

Payback period is widely used because of its ease of use despite the recognized limitations described below. The term is also widely used in other types of investment areas, often with respect to energy efficiency technologies, maintenance, upgrades, or other changes. For example, a compact fluorescent light bulb may be described as having a payback period of a certain number of years or operating hours, assuming certain costs. Here, the return to the investment consists of reduced operating costs. Although primarily a financial term, the concept of a payback period is occasionally extended to other uses, such as energy payback period (the period of time over which the energy savings of a project equal the amount of energy expended since project inception); these other terms may not be standardized or widely used.

Payback period as a tool of analysis is often used because it is easy to apply and easy to understand for most individuals, regardless of academic training or field of endeavour. When used carefully or to compare similar investments, it can be quite useful. As a stand-alone tool to compare an investment to "doing nothing," payback period has no explicit criteria for decision-making (except, perhaps, that the payback period should be less than infinity). The payback period is considered a method of analysis with serious limitations and qualifications for its use, because it does not account for the time value of money, risk, financing or other important considerations, such as the opportunity cost. Whilst the time value of money can be rectified by applying a weighted average cost of capital discount, it is generally agreed that this tool for investment decisions should not be used in isolation. Alternative measures of "return" preferred by economists are net present value and internal rate of return. An implicit assumption in the use of payback period is that returns to the investment continue after the payback period. Payback period does not specify any required comparison to other investments or even to not making an investment.

There is no general formula for the payback period, except in the simple and unrealistic case of an initial cash outlay followed by constant or constantly growing cash inflows. To calculate the payback period, an algorithm that is easily applied in spreadsheets is used instead. The typical algorithm reduces to the calculation of cumulative cash flow and the moment at which it turns from negative to positive.

Additional complexity arises when the cash flow changes sign several times, i.e., it contains outflows in the midst or at the end of the project lifetime. The modified payback period algorithm may be applied then. First, the sum of all of the cash outflows is calculated. Then the cumulative positive cash flows are determined for each period. The modified payback period is calculated as the moment in which the cumulative positive cash flow exceeds the total cash outflow.
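The cumulative cash flow algorithm is straightforward to sketch in Python; the linear interpolation within the crossing period is one common convention, and the function name is illustrative:

```python
def payback_period(cash_flows):
    """Periods until cumulative cash flow first turns non-negative.

    cash_flows[0] is the initial outlay (negative); later entries are
    period inflows.  Returns a fractional period by linear interpolation
    within the crossing period, or None if the investment never pays back.
    """
    cumulative = cash_flows[0]
    if cumulative >= 0:
        return 0.0
    for period, cf in enumerate(cash_flows[1:], start=1):
        previous = cumulative
        cumulative += cf
        if cumulative >= 0:
            # Fraction of this period needed to bring the cumulative
            # cash flow from `previous` up to zero.
            return period - 1 + (-previous) / cf
    return None

print(payback_period([-1000, 500, 500, 500]))  # 2.0
```

For the $1000 investment returning $500 per year mentioned earlier, this returns the expected two-year payback.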

Six Sigma

Six Sigma is a business management strategy originally developed by Motorola, USA in 1986.[1][2] As of 2010, it is widely used in many sectors of industry.

Six Sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes.[3] It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization ("Black Belts", "Green Belts", etc.) who are experts in these methods.[3] Each Six Sigma project carried out within an organization follows a defined sequence of steps and has quantified financial targets (cost reduction and/or profit increase).[3]

The term Six Sigma originated from terminology associated with manufacturing, specifically terms associated with statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield, or the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of the products manufactured are statistically expected to be free of defects (3.4 defects per million). Motorola set a goal of "six sigma" for all of its manufacturing operations, and this goal became a byword for the management and engineering practices used to achieve it.


##  Historical overview

Six Sigma originated as a set of practices designed to improve manufacturing processes and eliminate defects, but its application was subsequently extended to other types of business processes as well.[4] In Six Sigma, a defect is defined as any process output that does not meet customer specifications, or that could lead to creating an output that does not meet customer specifications.[3]

The core of Six Sigma was born at Motorola in the 1970s out of senior executive Art Sundry's criticism of Motorola's bad quality.[5] As a result of this criticism, the company discovered a connection between increases in quality and decreases in costs of production. At that time, the prevailing view was that quality costs extra money. In fact, it reduced total costs by driving down the costs for repair or control.[6] Bill Smith subsequently formulated the particulars of the methodology at Motorola in 1986.[1]

Six Sigma was heavily inspired by the quality improvement methodologies of the six preceding decades, such as quality control, Total Quality Management (TQM), and Zero Defects,[7][8] based on the work of pioneers such as Shewhart, Deming, Juran, Ishikawa, Taguchi and others. Like its predecessors, Six Sigma doctrine asserts that:

- Continuous efforts to achieve stable and predictable process results (i.e., reduce process variation) are of vital importance to business success.
- Manufacturing and business processes have characteristics that can be measured, analyzed, improved and controlled.
- Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.

Features that set Six Sigma apart from previous quality improvement initiatives include:

- A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.[3]
- An increased emphasis on strong and passionate management leadership and support.[3]
- A special infrastructure of "Champions," "Master Black Belts," "Black Belts," "Green Belts", etc. to lead and implement the Six Sigma approach.[3]
- A clear commitment to making decisions on the basis of verifiable data, rather than assumptions and guesswork.[3]

The term "Six Sigma" comes from a field of statistics known as process capability studies. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO).[9][10] Six Sigma's implicit goal is to improve all processes to that level of quality or better. Six Sigma is a registered service mark and trademark of Motorola Inc.[11]

As of 2006 Motorola reported over US$17 billion in savings[12] from Six Sigma. Other early adopters of Six Sigma who achieved well-publicized success include Honeywell (previously known as AlliedSignal) and General Electric, where Jack Welch introduced the method.[13] By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.[14]

In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to create a methodology named Lean Six Sigma.[15] The Lean Six Sigma methodology views lean manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on variation and design, as complementary disciplines aimed at promoting "business and operational excellence".[15] Companies such as IBM use Lean Six Sigma to focus transformation efforts not just on efficiency but also on growth. It serves as a foundation for innovation throughout the organization, from manufacturing and software development to sales and service delivery functions.

 Methods
Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.[14]

- DMAIC is used for projects aimed at improving an existing business process.[14] DMAIC is pronounced "duh-may-ick".
- DMADV is used for projects aimed at creating new product or process designs.[14] DMADV is pronounced "duh-mad-vee".

 DMAIC

## The DMAIC project methodology has five phases:

- Define the problem, the voice of the customer, and the project goals, specifically.
- Measure key aspects of the current process and collect relevant data.
- Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out the root cause of the defect under investigation.
- Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.
- Control the future state process to ensure that any deviations from target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, visual workplaces, and continuously monitor the process.

The DMADV project methodology, also known as DFSS ("Design For Six Sigma"),[14] features five phases:

- Define design goals that are consistent with customer demands and the enterprise strategy.
- Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.
- Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
- Design details, optimize the design, and plan for design verification. This phase may require simulations.
- Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

##  Quality management tools and methods used in Six Sigma

Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside of Six Sigma. The following table shows an overview of the main methods used.

- 5 Whys
- Accelerated life testing
- Analysis of variance
- ANOVA Gauge R&R
- Axiomatic design
- Business Process Mapping
- Cause & effects diagram (also known as fishbone or Ishikawa diagram)
- Check sheet
- Chi-squared test of independence and fits
- Control chart
- Correlation
- Cost-benefit analysis
- CTQ tree
- Design of experiments
- Failure mode and effects analysis (FMEA)
- General linear model
- Histograms
- Pareto analysis
- Pareto chart
- Pick chart
- Process capability
- Quality Function Deployment (QFD)
- Quantitative marketing research through use of Enterprise Feedback Management (EFM) systems
- Regression analysis
- Root cause analysis
- Run charts
- Scatter diagram
- SIPOC analysis (Suppliers, Inputs, Process, Outputs, Customers)
- Stratification
- Taguchi methods
- Taguchi Loss Function
- TRIZ

##  Implementation roles

One key innovation of Six Sigma involves the "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt a ranking terminology (similar to some martial arts systems) to define a hierarchy (and career path) that cuts across all business functions. Six Sigma identifies several key roles for its successful implementation.[16]

- Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.
- Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.
- Master Black Belts, identified by Champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.
- Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.
- Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.

Some organizations use additional belt colours, such as Yellow Belts, for employees who have basic training in Six Sigma tools.
 Certification

Corporations such as early Six Sigma pioneers General Electric and Motorola developed certification programs as part of their Six Sigma implementation, verifying individuals' command of the Six Sigma methods at the relevant skill level (Green Belt, Black Belt etc.). Following this approach, many organizations in the 1990s started offering Six Sigma certifications to their employees.[14][17] Criteria for Green Belt and Black Belt certification vary; some companies simply require participation in a course and a Six Sigma project.[17] There is no standard certification body, and different certification services are offered by various quality associations and other providers for a fee.[18][19] The American Society for Quality, for example, requires Black Belt applicants to pass a written exam and to provide a signed affidavit stating that they have completed two projects, or one project combined with three years' practical experience in the body of knowledge.[17][20] The International Quality Federation offers an online certification exam that organizations can use for their internal certification programs; it is statistically more demanding than the ASQ certification.[17][19] Other providers offering certification services include the Juran Institute, Six Sigma Qualtec, Air Academy Associates and others.[18]

##  Origin and meaning of the term "six sigma process"

Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model. The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean, μ, and the curve's inflection point. The greater this distance, the greater is the spread of values encountered. For the curve shown above, μ = 0 and σ = 1. The upper and lower specification limits (USL, LSL) are at a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying that far away from the mean are extremely unlikely. Even if the mean were to move right or left by 1.5σ at some point in the future (1.5 sigma shift), there is still a good safety cushion. This is why Six Sigma aims to have processes where the mean is at least 6σ away from the nearest specification limit.

The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications.[10] This is based on the calculation method employed in process capability studies. Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units. As process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification.[10]
 Role of the 1.5 sigma shift

Experience has shown that processes usually do not perform as well in the long term as they do in the short term.[10] As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study.[10] To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation.[10][21] According to this idea, a process that fits 6 sigma between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigma, either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.[10]

Hence the widely accepted definition of a six sigma process is a process that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided capability study).[10] So the 3.4 DPMO of a six sigma process in fact corresponds to 4.5 sigma, namely 6 sigma minus the 1.5-sigma shift introduced to account for long-term variation.[10] This allows for the fact that special causes may result in a deterioration in process performance over time, and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.[10]
 Sigma levels

A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation. See also: Three sigma rule

The table[22][23] below gives long-term DPMO values corresponding to various short-term sigma levels. It must be understood that these figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = −0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate only defects exceeding the specification limit to which the process mean is nearest. Defects beyond the far specification limit are not included in the percentages.

| Sigma level | DPMO | Percent defective | Percentage yield | Short-term Cpk | Long-term Cpk |
|---|---|---|---|---|---|
| 1 | 691,462 | 69% | 31% | 0.33 | −0.17 |
| 2 | 308,538 | 31% | 69% | 0.67 | 0.17 |
| 3 | 66,807 | 6.7% | 93.3% | 1.00 | 0.5 |
| 4 | 6,210 | 0.62% | 99.38% | 1.33 | 0.83 |
| 5 | 233 | 0.023% | 99.977% | 1.67 | 1.17 |
| 6 | 3.4 | 0.00034% | 99.99966% | 2.00 | 1.5 |
| 7 | 0.019 | 0.0000019% | 99.9999981% | 2.33 | 1.83 |
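These DPMO figures are just one-sided normal tail probabilities with the 1.5 sigma shift applied; a Python sketch using only the standard library (the function name is an illustrative assumption):

```python
from statistics import NormalDist

def long_term_dpmo(short_term_sigma, shift=1.5):
    """Defects per million opportunities for a given short-term sigma level.

    Applies the conventional 1.5 sigma long-term shift and counts only
    defects beyond the nearest specification limit (one-sided).
    """
    return NormalDist().cdf(-(short_term_sigma - shift)) * 1_000_000

print(round(long_term_dpmo(6), 1))  # 3.4
print(round(long_term_dpmo(3)))     # 66807
```

A six sigma process thus corresponds to the tail beyond 4.5 standard deviations, which reproduces the 3.4 DPMO figure in the table.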

##  Software used for Six Sigma

Main article: List of Six Sigma software packages

 Application
Main article: List of Six Sigma companies

Six Sigma mostly finds application in large organizations.[24] An important factor in the spread of Six Sigma was GE's 1998 announcement of \$350 million in savings thanks to Six Sigma, a figure that later grew to more than \$1 billion.[24] According to industry consultants like Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the standard approach to make it work for them.[24] This is due both to the infrastructure of Black Belts that Six Sigma requires, and to the fact that large organizations present more opportunities for the kinds of improvements Six Sigma is suited to bringing about.[24]

 Criticism

##  Lack of originality

Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality improvement", stating that "there is nothing new there. It includes what we used to call facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a new idea. The American Society for Quality long ago established certificates, such as for reliability engineers."[25]
 Role of consultants

The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry of training and certification. Critics argue there is overselling of Six Sigma by too great a number of consulting firms, many of which claim expertise in Six Sigma when they have only a rudimentary understanding of the tools and techniques involved.[3]
 Potential negative effects

A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)."[26] The summary of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies." Advocates of Six Sigma have argued that many of these claims are in error or ill-informed.[27][28] A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the effect of stifling creativity and reports its removal from the research function. It cites two Wharton School professors who say that Six Sigma leads to incremental innovation at the expense of blue skies research.[29] This phenomenon is further explored in the book, Going Lean, which describes a related approach known as lean dynamics and provides data to show that Ford's "6 Sigma" program did little to change its fortunes.[30]
 Lack of Proof of evidence of its success

In articles, and especially on Internet sites and in textbooks, claims are made about the huge successes and millions of dollars that Six Sigma has saved; Six Sigma can appear to be a "silver bullet" method. Yet, somewhat ironically, there seems to be no trustworthy evidence for this: "probably more to the Six Sigma literature than concepts, relates to the evidence for Six Sigma's success. So far, documented case studies using the Six Sigma methods are presented as the strongest evidence for its success. However, looking at these documented cases, and apart from a few that are detailed from the experience of leading organizations like GE and Motorola, most cases are not documented in a systemic or academic manner. In fact, the majority are case studies illustrated on websites, and are, at best, sketchy. They provide no mention of any specific Six Sigma methods that were used to resolve the problems. It has been argued that by relying on the Six Sigma criteria, management is lulled into the idea that something is being done about quality, whereas any resulting improvement is accidental (Latzko 1995). Thus, when looking at the evidence put forward for Six Sigma success, mostly by consultants and people with vested interests, the question that begs to be asked is: are we making a true improvement with Six Sigma methods or just getting skilled at telling stories? Everyone seems to believe that we are making true improvements, but there is some way to go to document these empirically and clarify the causal relations."[31]

##  Based on arbitrary standards

While 3.4 defects per million opportunities might work well for certain products/processes, it might not operate optimally or cost-effectively for others. A pacemaker process might need higher standards, for example, whereas a direct mail advertising campaign might need lower standards. The basis and justification for choosing 6 (as opposed to 5 or 7, for example) as the number of standard deviations, together with the 1.5 sigma shift, is not clearly explained. In addition, the Six Sigma model assumes that the process data always conform to the normal distribution. The calculation of defect rates for situations where the normal distribution model does not apply is not properly addressed in the current Six Sigma literature. This applies especially to reliability-related defects and other problems that are not time-invariant. The IEC, ARP, EN-ISO, DIN and other (inter)national standardization organizations have not created standards for the Six Sigma process. This might be the reason that it became a dominant domain of consultants (see the criticism above).[3]
 Measurement errors and restrictions for time depending defects

In most Six Sigma analyses, performance is measured by sampling, which sometimes introduces relevant uncertainty into the estimated parameters; little can be found about these errors in the literature, even though decisions should be based on all available information. This applies especially to defects with very broad, non-normal failure distributions over time that are difficult to measure, e.g. the reliability of parts or systems. Such measurements are not time-invariant (unlike, for example, the manufactured diameter of a shaft, assuming it does not change over time). No easy measurements can be taken for reliability defects in time to make a "six sigma" type of program effective. Reliability engineering and quality engineering must not be treated as one and the same type of engineering;[3] there are important links, and input and output between them is required, but an indiscriminate mixture of the two disciplines might result in program failure.[32] An example of this confusion is the question: "99% functional reliability of this item over 10 years' time corresponds to how many sigma levels?" Six Sigma was originally developed as a manufacturing-based quality tool and might be best suited, on its own or as the main framework, to problems that are not reliability-related.[3]

##  Criticism of the 1.5 sigma shift

The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature.[33] Its universal applicability is seen as doubtful.[3] The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma process."[10][34] The accepted Six Sigma scoring system thus cannot be equated to actual normal distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention about how Six Sigma measures are defined.[34] The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to express the opinion that Six Sigma is a confidence trick.[10]

## SHORT & MEDIUM QUESTIONS

What Are Index Funds?
The Ins and Outs of Index Funds
From Lee McGowan, former About.com Guide
