The Accounting Process (The Accounting Cycle)

The accounting process is a series of activities that begins with a transaction and ends with the closing of the books. Because this process is repeated each reporting period, it is referred to as the accounting cycle and includes these major steps:

1. Identify the transaction or other recognizable event.
2. Prepare the transaction's source document, such as a purchase order or invoice.
3. Analyze and classify the transaction. This step involves quantifying the transaction in monetary terms (e.g. dollars and cents), identifying the accounts that are affected, and determining whether those accounts are to be debited or credited.
4. Record the transaction by making entries in the appropriate journal, such as the sales journal, purchase journal, cash receipt or disbursement journal, or the general journal. Such entries are made in chronological order.
5. Post general journal entries to the ledger accounts.

The above steps are performed throughout the accounting period as transactions occur or in periodic batch processes. The following steps are performed at the end of the accounting period:

6. Prepare the trial balance to make sure that debits equal credits. The trial balance is a listing of all of the ledger accounts, with debits in the left column and credits in the right column. At this point no adjusting entries have been made. The actual sum of each column is not meaningful; what is important is that the sums be equal. Note that while out-of-balance columns indicate a recording error, balanced columns do not guarantee that there are no errors. For example, not recording a transaction or recording it in the wrong account would not cause an imbalance.
7. Correct any discrepancies in the trial balance. If the columns are not in balance, look for math errors, posting errors, and recording errors. Posting errors include:
   o posting of the wrong amount,
   o omitting a posting,
   o posting in the wrong column, or
   o posting more than once.
8. Prepare adjusting entries to record accrued, deferred, and estimated amounts.
9. Post adjusting entries to the ledger accounts.
10. Prepare the adjusted trial balance. This step is similar to the preparation of the unadjusted trial balance, but this time the adjusting entries are included. Correct any errors that may be found.
11. Prepare the financial statements.
   o Income statement: prepared from the revenue, expenses, gains, and losses.
   o Balance sheet: prepared from the assets, liabilities, and equity accounts.
   o Statement of retained earnings: prepared from net income and dividend information.
   o Cash flow statement: derived from the other financial statements using either the direct or indirect method.
12. Prepare closing journal entries that close temporary accounts such as revenues, expenses, gains, and losses. These accounts are closed to a temporary income summary account, from which the balance is transferred to the retained earnings account (capital). Any dividend or withdrawal accounts also are closed to capital.
13. Post closing entries to the ledger accounts.
14. Prepare the after-closing trial balance to make sure that debits equal credits. At this point, only the permanent accounts appear, since the temporary ones have been closed. Correct any errors.
15. Prepare reversing journal entries (optional). Reversing journal entries often are used when there has been an accrual or deferral that was recorded as an adjusting entry on the last day of the accounting period. By reversing the adjusting entry, one avoids double counting the amount when the transaction occurs in the next period. A reversing journal entry is recorded on the first day of the new period.

Instead of preparing the financial statements before the closing journal entries, it is possible to prepare them afterwards, using a temporary income summary account to collect the balances of the temporary ledger accounts (revenues, expenses, gains, losses, etc.) when they are closed. The temporary income summary account then would be closed when preparing the financial statements.

Source Documents

The source document is the original record of a transaction. During an audit, source documents are used as evidence that a particular business transaction occurred. Examples of source documents include:

Cash receipts
Credit card receipts
Cash register tapes
Cancelled checks
Customer invoices
Supplier invoices
Purchase orders
Time cards
Deposit slips
Notes for loans
Payment stubs for interest

At a minimum, each source document should include the date, the amount, and a description of the transaction. When practical, source documents should also contain the name and address of the other party to the transaction. When a source document does not exist, for example, when a cash receipt is not provided by a vendor or is misplaced, a document should be generated as soon as possible after the transaction, using other documents such as bank statements to support the information on the generated source document. Once a transaction has been journalized, the source document should be filed and made retrievable so that transactions can be verified should the need arise at a later date.
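As a rough illustration, the minimum fields of a source document can be modeled as a small record type. The following Python sketch is only one possible layout; the field names and the sample rent receipt are invented for the example:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceDocument:
    # Minimum required fields: the date, the amount, and a description.
    doc_date: date
    amount: float
    description: str
    # When practical: the name (and address) of the other party.
    counterparty: str = ""

# Hypothetical receipt for a shop-rent payment.
receipt = SourceDocument(date(2024, 9, 15), 1000.00,
                         "First month's shop rent", "Shop landlord")
print(receipt.description)
```

Keeping such records in a structured form makes it easier to retrieve them later when a transaction must be verified.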

General Journal Entries

The journal is the point of entry of business transactions into the accounting system. It is a chronological record of the transactions, showing an explanation of each transaction, the accounts affected, whether those accounts are increased or decreased, and by what amount. A general journal entry takes the following form:

Date   Name of account being debited             Amount
           Name of account being credited                 Amount
       Optional: short description of transaction

Consider the following example that illustrates the basic concept of general journal entries. Mike Peddler opens a bicycle repair shop. He leases shop space, purchases an initial inventory of bike parts, and begins operations. Here are the general journal entries for the first month:

Date   Account Names & Explanation                                  Debit   Credit
9/1    Cash                                                          7500
           Capital                                                           7500
       Owner contributes $7500 in cash to capitalize the business.

9/8    Bike Parts                                                    2500
           Accounts Payable                                                  2500
       Purchased $2500 in bike parts on account, payable in 30 days.

9/15   Expenses                                                      1000
           Cash                                                              1000
       Paid first month's shop rent of $1000.

9/17   Cash                                                           400
       Accounts Receivable                                            700
           Revenue                                                           1100
       Repaired bikes for $1100; collected $400 cash; billed customers for the balance.

9/18   Expenses                                                       275
           Bike Parts                                                         275
       $275 in bike parts were used.

9/25   Cash                                                           425
           Accounts Receivable                                                425
       Collected $425 from customer accounts.

9/28   Accounts Payable                                               500
           Cash                                                               500
       Paid $500 to suppliers for parts purchased earlier in the month.

Most of the above transactions are entered as simple journal entries, each debiting one account and crediting another. The entry for 9/17 is a compound journal entry, composed of two lines for the debit and one line for the credit. The transaction could have been entered as two separate simple journal entries, but the compound form is more efficient. In this example, there are no account numbers. In practice, account numbers or codes may be included in the journal entries to allow each account to be positively identified with no confusion between similar accounts. The journal entry is the first entry of a transaction in the accounting system. Before the entry is made, the following decisions must be made:

which accounts are affected by the transaction, and which account will be debited and which will be credited.

Once entered in the journal, the transactions may be posted to the appropriate T-accounts of the general ledger. Unlike the journal entry, the posting to the general ledger is a purely mechanical process - the account and debit/credit decisions already have been made.
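Because posting is purely mechanical, it is easy to automate. The following Python sketch (a data layout of my own choosing, using the first few bike-shop entries above) checks that each journal entry balances and copies its lines to the ledger accounts:

```python
from collections import defaultdict

# Each journal entry: (date, [(account, debit, credit), ...]).
journal = [
    ("9/1",  [("Cash", 7500, 0), ("Capital", 0, 7500)]),
    ("9/8",  [("Bike Parts", 2500, 0), ("Accounts Payable", 0, 2500)]),
    ("9/15", [("Expenses", 1000, 0), ("Cash", 0, 1000)]),
    ("9/17", [("Cash", 400, 0), ("Accounts Receivable", 700, 0),
              ("Revenue", 0, 1100)]),  # compound entry
]

# Posting: copy each line into its account's running record.
ledger = defaultdict(list)
for entry_date, lines in journal:
    assert sum(d for _, d, _ in lines) == sum(c for _, _, c in lines), \
        "each journal entry must balance"
    for account, debit, credit in lines:
        ledger[account].append((entry_date, debit, credit))

print(ledger["Cash"])
```

Note that no decisions are made during the loop; all of the account and debit/credit decisions were made when the journal entries were written.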

The General Ledger

The general ledger is a collection of the firm's accounts. While the general journal is organized as a chronological record of transactions, the ledger is organized by account. In casual use the accounts of the general ledger often take the form of simple two-column T-accounts. In the formal records of the company they may contain a third or fourth column to display the account balance after each posting. To illustrate the posting of transactions in the general ledger, consider the following transactions taken from the example on general journal entries:

Date   Account Names                 Debit   Credit
9/1    Cash                           7500
           Capital                             7500
9/8    Bike Parts                     2500
           Accounts Payable                    2500
9/15   Expenses                       1000
           Cash                                1000
9/17   Cash                            400
       Accounts Receivable             700
           Revenue                             1100
9/18   Expenses                        275
           Bike Parts                           275
9/25   Cash                            425
           Accounts Receivable                  425
9/28   Accounts Payable                500
           Cash                                 500

The above journal entries affect a total of seven different accounts and would be posted to the T-accounts of the general ledger as follows:

General Ledger (T-Accounts)

Cash
Sep 1    7500 | Sep 15   1000
Sep 17    400 | Sep 28    500
Sep 25    425 |

Accounts Receivable
Sep 17    700 | Sep 25    425

Bike Parts
Sep 8    2500 | Sep 18    275

Accounts Payable
Sep 28    500 | Sep 8    2500

Capital
              | Sep 1    7500

Revenue
              | Sep 17   1100

Expenses
Sep 15   1000 |
Sep 18    275 |
Note the direct mapping between the journal entries and the ledger postings. While this posting of journalized transactions in the general ledger at first may appear to be redundant since the transactions already are recorded in the general journal, the general ledger serves an important function: it allows one to view the activity and balance of each account at a glance. Because the posting to the ledger is simply a rearrangement of information requiring no additional decisions, it easily is performed by accounting software, either when the journal entry is made or as a batch process, for example, at the end of the day or week. Finally, while such T-accounts are handy for informal use, in practice a three-column or four-column account may be used to show the running account balance, and in the case of a four column account, whether that balance is a net debit or credit. Additionally, reference numbers may be used so that each posting can be traced back to its original journal entry.

Trial Balance
If the journal entries are error-free and were posted properly to the general ledger, the total of all of the debit balances should equal the total of all of the credit balances. If the debits do not equal the credits, then an error has occurred somewhere in the process. The listing of the ledger account balances in debit and credit columns is referred to as the trial balance. To calculate the trial balance, first determine the balance of each general ledger account as shown in the following example:

General Ledger

Cash
Sep 1    7500 | Sep 15   1000
Sep 17    400 | Sep 28    500
Sep 25    425 |
Bal.     6825 |

Accounts Receivable
Sep 17    700 | Sep 25    425
Bal.      275 |

Parts Inventory
Sep 8    2500 | Sep 18    275
Bal.     2225 |

Accounts Payable
Sep 28    500 | Sep 8    2500
              | Bal.     2000

Capital
              | Sep 1    7500
              | Bal.     7500

Revenue
              | Sep 17   1100
              | Bal.     1100

Expenses
Sep 15   1000 |
Sep 18    275 |
Bal.     1275 |

Once the account balances are known, the trial balance can be calculated as shown:

Trial Balance
Account Title          Debits   Credits
Cash                     6825
Accounts Receivable       275
Parts Inventory          2225
Accounts Payable                   2000
Capital                            7500
Revenue                            1100
Expenses                 1275
                        10600    10600

In this example, the debits and credits balance. This result does not guarantee that there are no errors. For example, the trial balance would not catch the following types of errors:

Transactions that were not recorded in the journal
Transactions recorded in the wrong accounts
Transactions for which the debit and credit were transposed
Neglecting to post a journal entry to the ledger

If the trial balance is not in balance, then an error has been made somewhere in the accounting process. The following is a listing of common errors that would result in an unbalanced trial balance; this listing can be used to help isolate the cause of the imbalance.

Summation error for the debits and credits of the trial balance
Error transferring the ledger account balances to the trial balance columns
   o Error in numeric value
   o Error in transferring a debit or credit to the proper column
   o Omission of an account
Error in the calculation of a ledger account balance
Error in posting a journal entry to the ledger
   o Error in numeric value
   o Error in posting a debit or credit to the proper column
Error in the journal entry
   o Error in a numeric value
   o Omission of part of a compound journal entry

The more often that the trial balance is calculated during the accounting cycle, the easier it is to isolate any errors; more frequent trial balance calculations narrow the time frame in which an error might have occurred, resulting in fewer transactions through which to search.
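The trial balance calculation itself is simple arithmetic over the ledger: net each account, then sum the net debit balances and the net credit balances separately. A minimal Python sketch using the example's figures (the posting layout is my own representation):

```python
# Postings per account as (debit, credit) pairs, from the example ledger.
ledger = {
    "Cash":                [(7500, 0), (0, 1000), (400, 0), (425, 0), (0, 500)],
    "Accounts Receivable": [(700, 0), (0, 425)],
    "Parts Inventory":     [(2500, 0), (0, 275)],
    "Accounts Payable":    [(0, 2500), (500, 0)],
    "Capital":             [(0, 7500)],
    "Revenue":             [(0, 1100)],
    "Expenses":            [(1000, 0), (275, 0)],
}

debit_total = credit_total = 0
for account, postings in ledger.items():
    balance = sum(d - c for d, c in postings)
    if balance >= 0:
        debit_total += balance       # account carries a net debit balance
    else:
        credit_total += -balance     # account carries a net credit balance

print(debit_total, credit_total)  # 10600 10600
assert debit_total == credit_total, "trial balance is out of balance"
```

If the final assertion fails, one of the errors listed above has occurred somewhere between the journal and the ledger.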

Adjusting Entries

Adjusting entries are journal entries made at the end of the accounting period to allocate revenue and expenses to the period in which they actually are applicable. Adjusting entries are required because normal journal entries are based on actual transactions, and the date on which these transactions occur may not be the date required to fulfill the matching principle of accrual accounting. The two major types of adjusting entries are:

Accruals: for revenues and expenses that are matched to dates before the transaction has been recorded.
Deferrals: for revenues and expenses that are matched to dates after the transaction has been recorded.

Accruals
Accrued items are those for which the firm has been realizing revenue or expense without yet observing an actual transaction that would result in a journal entry. For example, consider the case of salaried employees who are paid on the first of the month for the salary they earned over the previous month. Each day of the month, the firm accrues an additional liability in the form of salaries to be paid on the first day of the next month, but the transaction does not actually occur until the paychecks are issued on the first of the month. In order to report the expense in the period in which it was incurred, an adjusting entry is made at the end of the month. For example, in the case of a small company accruing $80,000 in monthly salaries, the journal entry might look like the following:

Date   Account Titles & Explanation                        Debit    Credit
9/30   Salary expense                                      80,000
           Salaries payable                                          80,000
       Salaries accrued in September, to be paid on Oct 1.

In theory, the accrued salary could be recorded each day, but daily updates of such accruals on a large scale would be costly and would serve little purpose - the adjustment only is needed at the end of the period for which the financial statements are being prepared. Some accrued items for which adjusting entries may be made include:

Salaries
Past-due expenses
Income tax expense
Interest income
Unbilled revenue

Deferrals
Deferred items are those for which the firm has recorded the transaction as a journal entry, but has not yet realized the revenue or expense associated with that journal entry. In other words, the recognition of deferred items is postponed until a later accounting period. An example of a deferred item would be prepaid insurance. Suppose the firm prepays a 12-month insurance policy on Sep 1. Because the insurance is a prepaid expense, the journal entry on Sep 1 would look like the following:

Date   Account Titles & Explanation           Debit    Credit
9/1    Prepaid Expenses                       12,000
           Cash                                         12,000
       12-month prepaid insurance policy.

The result of this entry is that the insurance policy becomes an asset in the Prepaid Expenses account. At the end of September, this asset will be adjusted to reflect the amount "consumed" during the month. The adjusting entry would be:

Date   Account Titles & Explanation    Debit   Credit
9/30   Insurance Expense               1,000
           Prepaid Expenses                     1,000
       Insurance expense for Sep.

This adjusting entry transfers $1000 from the Prepaid Expenses asset account to the Insurance Expense expense account to properly record the insurance expense for the month of September. In this example, a similar adjusting entry would be made for each subsequent month until the insurance policy expires 11 months later. Some deferred items for which adjusting entries would be made include:

Prepaid insurance
Prepaid rent
Office supplies
Depreciation
Unearned revenue

In the case of unearned revenue, a liability account is credited when the cash is received. An adjusting entry is made once the service has been rendered or the product has been shipped, thus realizing the revenue.

Completing the Adjusting Entries
To prevent inadvertent omission of some adjusting entries, it is helpful to review the ones from the previous accounting period since such transactions often recur. It also helps to talk to various people in the company who might know about unbilled revenue or other items that might require adjustments.
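The prepaid-insurance deferral above amounts to spreading the $12,000 cost evenly over the 12 months of coverage, moving one month's share from the asset to the expense account at each month-end. A minimal arithmetic sketch in Python:

```python
# 12-month policy prepaid on Sep 1 for $12,000.
policy_cost = 12_000
policy_months = 12
monthly_expense = policy_cost / policy_months  # 1000.0 per month

prepaid_expenses = policy_cost   # asset balance after the Sep 1 entry
insurance_expense = 0
for month in range(3):           # e.g. the Sep, Oct, and Nov adjusting entries
    prepaid_expenses -= monthly_expense    # credit Prepaid Expenses
    insurance_expense += monthly_expense   # debit Insurance Expense

print(prepaid_expenses, insurance_expense)  # 9000.0 3000.0
```

After all 12 adjusting entries, the Prepaid Expenses asset would be fully consumed and the Insurance Expense account would show the full $12,000.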

Preparing the Financial Statements

Once the adjusting entries have been made or entered into a worksheet, the financial statements can be prepared using information from the ledger accounts. Because some of the financial statements use data from the other statements, the following is a logical order for their preparation:

Income statement
Statement of retained earnings
Balance sheet
Cash flow statement

Income Statement
The income statement reports revenues, expenses, and the resulting net income. It is prepared by transferring the following ledger account balances, taking into account any adjusting entries that have been or will be made:

Revenue
Expenses
Capital gains or losses

Statement of Retained Earnings
The retained earnings statement shows the retained earnings at the beginning and end of the accounting period. It is prepared using the following information:

Beginning retained earnings, obtained from the previous statement of retained earnings
Net income, obtained from the income statement
Dividends paid during the accounting period

Balance Sheet
The balance sheet reports the assets, liabilities, and shareholder equity of the company. It is constructed using the following information:

Balances of all asset accounts such as cash, accounts receivable, etc.
Balances of all liability accounts such as accounts payable, notes, etc.
Capital stock balance
Retained earnings, obtained from the statement of retained earnings

Cash Flow Statement
The cash flow statement explains the reasons for changes in the cash balance, showing sources and uses of cash in the operating, financing, and investing activities of the firm. Because the cash flow statement is a cash-basis report, it cannot be derived directly from the ledger account balances of an accrual accounting system. Rather, it is derived by converting the accrual information to a cash basis using one of the following two methods:

Direct method: cash flow information is derived by directly subtracting cash disbursements from cash receipts.
Indirect method: cash flow information is derived by adding or subtracting non-cash items from net income.
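As a rough illustration of the indirect method's add-back/subtract logic, consider the following sketch with entirely hypothetical figures (not taken from this article's examples):

```python
# Indirect method: start from accrual-basis net income and undo the
# non-cash and accrual items to arrive at cash from operations.
net_income = 50_000
depreciation = 8_000             # non-cash expense: add back
increase_in_receivables = 5_000  # revenue booked, cash not yet received: subtract
increase_in_payables = 3_000     # expense booked, cash not yet paid: add back

cash_from_operations = (net_income
                        + depreciation
                        - increase_in_receivables
                        + increase_in_payables)
print(cash_from_operations)  # 56000
```

A real statement classifies many more items (investing and financing activities, other working-capital changes); this only shows the basic accrual-to-cash conversion.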

Closing Entries
At the end of the accounting period, the balances in temporary accounts are transferred to an income summary account and a retained earnings account, thereby resetting the balance of the temporary accounts to zero to begin the next accounting period. First, the revenue accounts are closed by transferring their balances to the income summary account. Consider the following example for which September 30 is the end of the accounting period. If the revenue account balance is $1100, then the closing journal entry would be:

Date   Accounts                 Debit   Credit
9/30   Revenue                   1100
           Income Summary                 1100

Next, the expense accounts are closed by transferring their balances to the income summary account. If the expense account balance is $1275, then the closing entry would be:

Date   Accounts                 Debit   Credit
9/30   Income Summary            1275
           Expenses                       1275

At this point, the net balance of the income summary account is a $175 debit (loss). The income summary account then is closed to retained earnings:

Date   Accounts                 Debit   Credit
9/30   Retained Earnings          175
           Income Summary                  175

Finally, the dividends account is closed to retained earnings. For example, if $50 in dividends were paid during the period, the closing journal entry would be as follows:

Date   Accounts                 Debit   Credit
9/30   Retained Earnings           50
           Dividends                        50

Once posted to the ledger, these journal entries serve the purpose of setting the temporary revenue, expense, and dividend accounts back to zero in preparation for the start of the next accounting period. Note that the income summary account is not absolutely necessary - the revenue and expense accounts could be closed directly to retained earnings. The income summary account offers the benefit of indicating the net balance between revenue and expenses (i.e. net income) during the closing process.
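The closing sequence above reduces to simple arithmetic on the account balances. A Python sketch using the example's figures (the variable layout is my own; positive numbers represent each account's normal balance):

```python
# Period-end balances: revenue 1100 (credit), expenses 1275 (debit),
# dividends 50 (debit).
revenue, expenses, dividends = 1100, 1275, 50
retained_earnings = 0  # change in retained earnings for this period

# Close revenue and expenses into the income summary account.
income_summary = revenue - expenses   # -175: a net loss for the period
revenue = expenses = 0

# Close income summary, then dividends, into retained earnings.
retained_earnings += income_summary
income_summary = 0
retained_earnings -= dividends
dividends = 0

print(retained_earnings)  # -225: net loss plus dividends paid
```

All temporary accounts end at zero, ready for the next period, and retained earnings absorbs the period's net result and dividends.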

Reversing Entries
When an adjusting entry is made for an expense at the end of the accounting period, it is necessary to keep track of this expense so that the transaction will be allocated properly between the two periods. Reversing entries are a way to handle such transactions. Consider the case in which a note is issued on the 16th of September, with interest payable on the 15th of October. If the total interest to be paid at the end of the 30-day period is $100, then half of the amount would be allocated to the month of September using the following adjusting journal entry:

Period-End Adjusting Entry
Date   Account Title              Debit   Credit
9/30   Interest Expense              50
           Interest Payable                   50
       15 days of accrued interest.

On October 15, the 30 days of interest will be paid as a $100 lump sum. If the bookkeeper remembers that half of that interest already was recorded as an expense in September, then he or she can record only $50 as the interest expense for October. Alternatively, a reversing entry can be made at the beginning of October as follows:

Reversing Entry
Date   Account Title              Debit   Credit
10/1   Interest Payable              50
           Interest Expense                   50
       Reversing entry for 15 days of interest accrued in Sep.

Note that the above journal entry is exactly the reverse of the adjusting entry made on September 30. Once this reversing entry is posted, the affected ledger accounts will appear as follows:

Ledger Accounts After Reversing Entry

Interest Payable
Oct 1      50 | Sep 30     50
              | Bal.        0

Interest Expense
              | Oct 1      50
              | Bal.       50
The interest payable account carried a credit balance of $50 over to the new period, and this balance became zero when the October 1 reversing entry was posted. Because the interest expense ledger account was closed at the end of the reporting period on September 30 (as were all expense accounts), its balance was reset to zero at that time. After the posting of the reversing entry on October 1, the interest expense ledger account had a credit balance (i.e. a negative expense balance) of $50. On Oct 15, the note matures and the $100 interest is due. Because the reversing entry was made on Oct 1, the Oct 15 entry is for the full $100 that is due on the note, and is recorded as follows:

October 15 Journal Entry
Date    Account Title             Debit   Credit
10/15   Interest Expense            100
            Interest Payable                 100
        Interest for Sep 16 through Oct 15.

The ledger accounts will appear as follows once the journal entries through October 15 are posted:

Interest Payable
Oct 1      50 | Sep 30     50
              | Oct 15    100
              | Bal.      100

Interest Expense
Oct 15    100 | Oct 1      50
Bal.       50 |

The net interest expense for October then is $50, as it should be since the other $50 already was reported in September. As can be seen in the ledger accounts, the net effect is that a $50 interest expense will be realized in October, and the full $100 of interest will be paid to the holder of the note. Reversing entries are a useful tool for dealing with certain accruals and deferrals. Their use is optional and depends on the accounting practices of the particular firm and the specific responsibilities of the bookkeeping staff.
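The arithmetic of the $100 note example can be checked in a few lines. This Python sketch only traces the expense account's balance across the reversal and the payment:

```python
# Note issued Sep 16; $100 of interest due Oct 15 (a 30-day period,
# 15 days of which fall in September).
total_interest = 100
accrued_sep = total_interest * 15 // 30   # 50: September's share, accrued 9/30

interest_expense = 0              # expense account reset to zero at Sep 30 close
interest_expense -= accrued_sep   # Oct 1 reversing entry (credit)
interest_expense += total_interest  # Oct 15 payment recorded in full (debit)

print(interest_expense)  # 50: October bears only its own share
```

The reversing entry lets the October payment be recorded at its full face amount without double counting September's $50.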

Chart of Accounts
The chart of accounts is a listing of all the accounts in the general ledger, each account accompanied by a reference number. To set up a chart of accounts, one first needs to define the various accounts to be used by the business. Each account should have a number to identify it. For very small businesses, three digits may suffice for the account number, though more digits are highly desirable in order to allow for new accounts to be added as the business grows. With more digits, new accounts can be added while maintaining the logical order. Complex businesses may have thousands of accounts and require longer account reference numbers. It is worthwhile to put thought into assigning the account numbers in a logical way, and to follow any specific industry standards. An example of how the digits might be coded is shown in this list:

Account Numbering
1000 - 1999: asset accounts
2000 - 2999: liability accounts
3000 - 3999: equity accounts
4000 - 4999: revenue accounts
5000 - 5999: cost of goods sold
6000 - 6999: expense accounts
7000 - 7999: other revenue (for example, interest income)
8000 - 8999: other expense (for example, income taxes)

By separating each account by several numbers, many new accounts can be added between any two while maintaining the logical order.

Defining Accounts

Different types of businesses will have different accounts. For example, to report the cost of goods sold, a manufacturing business will have accounts for its various manufacturing costs, whereas a retailer will have accounts for the purchase of its stock merchandise. Many industry associations publish recommended charts of accounts for their respective industries in order to establish a consistent standard of comparison among firms in their industry. Accounting software packages often come with a selection of predefined account charts for various types of businesses.

There is a trade-off between simplicity and the ability to make historical comparisons. Initially keeping the number of accounts to a minimum has the advantage of making the accounting system simple. Starting with a small number of accounts, certain accounts would be split into smaller, more specific ones as they acquired significant balances. However, following this strategy makes it more difficult to generate consistent historical comparisons. For example, if the accounting system is set up with a miscellaneous expense account that later is broken into more detailed accounts, it then would be difficult to compare those detailed expenses with past expenses of the same type. In this respect, there is an advantage in organizing the chart of accounts with a higher initial level of detail.

Some accounts must be included due to tax reporting requirements. For example, in the U.S. the IRS requires that travel, entertainment, advertising, and several other expenses be tracked in individual accounts. One should check the appropriate tax regulations and generate a complete list of such required accounts.

Other accounts should be set up according to vendor. If the business has more than one checking account, for example, the chart of accounts might include an account for each of them.

Account Order
Balance sheet accounts tend to follow a standard that lists the most liquid assets first.
Revenue and expense accounts tend to follow the standard of first listing the items most closely related to the operations of the business. For example, sales would be listed before non-operating income. In some cases, part or all of the expense accounts simply are listed in alphabetical order.

Sample Chart of Accounts
The following is an example of some of the accounts that might be included in a chart of accounts.

Asset Accounts

Current Assets
1000  Petty Cash
1010  Cash on Hand (e.g. in cash registers)
1020  Regular Checking Account
1030  Payroll Checking Account
1040  Savings Account
1050  Special Account
1060  Investments - Money Market
1070  Investments - Certificates of Deposit
1100  Accounts Receivable
1140  Other Receivables
1150  Allowance for Doubtful Accounts
1200  Raw Materials Inventory
1205  Supplies Inventory
1210  Work in Progress Inventory
1215  Finished Goods Inventory - Product #1
1220  Finished Goods Inventory - Product #2
1230  Finished Goods Inventory - Product #3
1400  Prepaid Expenses
1410  Employee Advances
1420  Notes Receivable - Current
1430  Prepaid Interest
      Other Current Assets

Fixed Assets
1500  Furniture and Fixtures
1510  Equipment
1520  Vehicles
1530  Other Depreciable Property
1540  Leasehold Improvements
1550  Buildings
1560  Building Improvements
1690  Land
1700  Accumulated Depreciation, Furniture and Fixtures
1710  Accumulated Depreciation, Equipment
1720  Accumulated Depreciation, Vehicles
1730  Accumulated Depreciation, Other
1740  Accumulated Depreciation, Leasehold
1750  Accumulated Depreciation, Buildings
1760  Accumulated Depreciation, Building Improvements

Other Assets
1900  Deposits
1910  Organization Costs
1915  Accumulated Amortization, Organization Costs
1920  Notes Receivable, Non-current
1990  Other Non-current Assets

Liability Accounts

Current Liabilities
2000  Accounts Payable
2300  Accrued Expenses
2310  Sales Tax Payable
2320  Wages Payable
2330  401-K Deductions Payable
2335  Health Insurance Payable
2340  Federal Payroll Taxes Payable
2350  FUTA Tax Payable
2360  State Payroll Taxes Payable
2370  SUTA Payable
2380  Local Payroll Taxes Payable
2390  Income Taxes Payable
2400  Other Taxes Payable
2410  Employee Benefits Payable
2420  Current Portion of Long-term Debt
2440  Deposits from Customers
2480  Other Current Liabilities

Long-term Liabilities
2700  Notes Payable
2702  Land Payable
2704  Equipment Payable
2706  Vehicles Payable
2708  Bank Loans Payable
2710  Deferred Revenue
2740  Other Long-term Liabilities

Equity Accounts
3010  Stated Capital
3020  Capital Surplus
3030  Retained Earnings

Revenue Accounts
4000  Product #1 Sales
4020  Product #2 Sales
4040  Product #3 Sales
4060  Interest Income
4080  Other Income
4540  Finance Charge Income
4550  Shipping Charges Reimbursed
4800  Sales Returns and Allowances
4900  Sales Discounts

Cost of Goods Sold
5000  Product #1 Cost
5010  Product #2 Cost
5020  Product #3 Cost
5050  Raw Material Purchases
5100  Direct Labor Costs
5150  Indirect Labor Costs
5200  Heat and Power
5250  Commissions
5300  Miscellaneous Factory Costs
5700  Cost of Goods Sold, Salaries and Wages
5730  Cost of Goods Sold, Contract Labor
5750  Cost of Goods Sold, Freight
5800  Cost of Goods Sold, Other
5850  Inventory Adjustments
5900  Purchase Returns and Allowances
5950  Purchase Discounts

Expenses
6000  Default Purchase Expense
6010  Advertising Expense
6050  Amortization Expense
6100  Auto Expenses
6150  Bad Debt Expense
6200  Bank Fees
6250  Cash Over and Short
6300  Charitable Contributions Expense
6350  Commissions and Fees Expense
6400  Depreciation Expense
6450  Dues and Subscriptions Expense
6500  Employee Benefit Expense, Health Insurance
6510  Employee Benefit Expense, Pension Plans
6520  Employee Benefit Expense, Profit Sharing Plan
6530  Employee Benefit Expense, Other
6550  Freight Expense
6600  Gifts Expense
6650  Income Tax Expense, Federal
6660  Income Tax Expense, State
6670  Income Tax Expense, Local
6700  Insurance Expense, Product Liability
6710  Insurance Expense, Vehicle
6750  Interest Expense
6800  Laundry and Dry Cleaning Expense
6850  Legal and Professional Expense
6900  Licenses Expense
6950  Loss on NSF Checks
7000  Maintenance Expense
7050  Meals and Entertainment Expense
7100  Office Expense
7200  Payroll Tax Expense
7250  Penalties and Fines Expense
7300  Other Taxes
7350  Postage Expense
7400  Rent or Lease Expense
7450  Repair and Maintenance Expense, Office
7460  Repair and Maintenance Expense, Vehicle
7550  Supplies Expense, Office
7600  Telephone Expense
7620  Training Expense
7650  Travel Expense
7700  Salaries Expense, Officers
7750  Wages Expense
7800  Utilities Expense
8900  Other Expense
9000  Gain/Loss on Sale of Assets
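With a numbering scheme like the one in the Account Numbering list, an account's broad category can be derived from the leading digit of its number. A Python sketch (the dictionary layout is my own; note that a particular chart, like the sample above, may deviate from the strict ranges):

```python
# Category lookup keyed by the thousands digit, following the eight
# ranges in the Account Numbering list.
CATEGORIES = {
    1: "asset", 2: "liability", 3: "equity", 4: "revenue",
    5: "cost of goods sold", 6: "expense",
    7: "other revenue", 8: "other expense",
}

def account_category(number: int) -> str:
    """Return the broad category for an account number, e.g. 1020 -> asset."""
    return CATEGORIES.get(number // 1000, "unknown")

print(account_category(1020))  # asset
print(account_category(2300))  # liability
```

Encoding the category in the number is what allows reports to group accounts mechanically, and leaving gaps between numbers allows new accounts to slot in without renumbering.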

The Balanced Scorecard

Traditional financial performance metrics provide information about a firm's past results, but are not well-suited for predicting future performance or for implementing and controlling the firm's strategic plan. By analyzing perspectives other than the financial one, managers can better translate the organization's strategy into actionable objectives and better measure how well the strategic plan is executing. The Balanced Scorecard is a management system that maps an organization's strategic objectives into performance metrics in four perspectives: financial, internal processes, customers, and learning and growth. These perspectives provide relevant feedback as to how well the strategic plan is executing so that adjustments can be made as necessary. The Balanced Scorecard framework can be depicted as follows:

[Figure: The Balanced Scorecard Framework - the four perspectives (Financial Performance, Customer, Internal Processes, and Learning & Growth), each with its own Objectives, Measures, Targets, and Initiatives, arranged around the central Strategy.]

The Balanced Scorecard (BSC) was published in 1992 by Robert Kaplan and David Norton. In addition to measuring current performance in financial terms, the Balanced Scorecard evaluates the firm's efforts for future improvement using process, customer, and learning and growth metrics. The term "scorecard" signifies quantified performance measures and "balanced" signifies that the system is balanced between:

short-term objectives and long-term objectives
financial measures and non-financial measures
lagging indicators and leading indicators
internal performance and external performance perspectives

Financial Measures Are Insufficient

While financial accounting is suited to the tracking of physical assets such as manufacturing equipment and inventory, it is less capable of providing useful reports in environments with a large intangible asset base. As intangible assets constitute an ever-increasing proportion of a company's market value, there is an increase in the need for measures that better report such assets as loyal customers, proprietary processes, and highly skilled staff. Consider the case of a company that is not profitable but that has a very large customer base. Such a firm could be an attractive takeover target simply because the acquiring firm wants access to those customers. It is not uncommon for a company to take over a competitor with the plan to discontinue the competing product line and convert the customer base to its own products and services. The balance sheets of such takeover targets do not reflect the value of the customers, who nonetheless are worth something to the acquiring firm. Clearly, additional measures are needed for such intangibles.

Scorecard Measures Are Limited in Number

The Balanced Scorecard is more than a collection of measures used to identify problems. It is a system that integrates a firm's strategy with a purposely limited number of key metrics. Simply adding new metrics to the financial ones could result in hundreds of measures and would create information overload. To avoid this problem, the Balanced Scorecard focuses on four major areas of performance and a limited number of metrics within those areas. The objectives within the four perspectives are carefully selected and are firm specific. To avoid information overload, the total number of measures should be limited to somewhere between 15 and 20, or three to four measures for each of the four perspectives. These measures are selected as the ones deemed to be critical in achieving breakthrough competitive performance; they essentially define what is meant by "performance".

A Chain of Cause-and-Effect Relationships

Before the Balanced Scorecard, some companies already used a collection of both financial and non-financial measures of critical performance indicators. However, a well-designed Balanced Scorecard is different from such a system in that the four BSC perspectives form a chain of cause-and-effect relationships. For example, learning and growth lead to better business processes, which result in higher customer loyalty and thus a higher return on capital employed (ROCE). Effectively, the cause-and-effect relationships illustrate the hypothesis behind the organization's strategy. The measures reflect a chain of performance drivers that determine the effectiveness of the strategy implementation.

Objectives, Measures, Targets, and Initiatives

Within each of the Balanced Scorecard financial, customer, internal process, and learning perspectives, the firm must define the following:

Strategic objectives - what the strategy is to achieve in that perspective.
Measures - how progress for that particular objective will be measured.
Targets - the target value sought for each measure.
Initiatives - what will be done to facilitate the reaching of the target.

The following sections provide examples of some objectives and measures for the four perspectives. Financial Perspective The financial perspective addresses the question of how shareholders view the firm and which financial goals are desired from the shareholder's perspective. The specific goals depend on the company's stage in the business life cycle. For example:

Growth stage - goal is growth, such as revenue growth rate
Sustain stage - goal is profitability, such as ROE, ROCE, and EVA
Harvest stage - goal is cash flow and reduction in capital requirements

The following table outlines some examples of financial metrics:

Objective         Specific Measure
Growth            Revenue growth
Profitability     Return on equity
Cost leadership   Unit cost

Customer Perspective

The customer perspective addresses the question of how the firm is viewed by its customers and how well the firm is serving its targeted customers in order to meet the financial objectives. Generally, customers view the firm in terms of time, quality, performance, and cost. Most customer objectives fall into one of those four categories. The following table outlines some examples of specific customer objectives and measures:

Objective                  Specific Measure
New products               % of sales from new products
Responsive supply          On-time delivery
To be preferred supplier   Share of key accounts
Customer partnerships      Number of cooperative efforts

Internal Process Perspective

Internal business process objectives address the question of which processes are most critical for satisfying customers and shareholders. These are the processes in which the firm must concentrate its efforts to excel. The following table outlines some examples of process objectives and measures:

Objective                      Specific Measure
Manufacturing excellence       Cycle time, yield
Increase design productivity   Engineering efficiency
Reduce product launch delays   Actual launch date vs. plan

Learning and Growth Perspective

Learning and growth metrics address the question of how the firm must learn, improve, and innovate in order to meet its objectives. Much of this perspective is employee-centered. The following table outlines some examples of learning and growth measures:

Objective                Specific Measure
Manufacturing learning   Time to new process maturity
Product focus            % of products representing 80% of sales
Time to market           Time compared to that of competitors

Achieving Strategic Alignment throughout the Organization

Whereas strategy is articulated in terms meaningful to top management, to be implemented it must be translated into objectives and measures that are actionable at lower levels in the organization. The Balanced Scorecard can be cascaded to make this translation of strategy possible. While top-level objectives may be expressed in terms of growth and profitability, these goals get translated into more concrete terms as they progress down the organization, with each manager at the next lower level developing objectives and measures that support the next higher level. For example, increased profitability might get translated into lower unit cost, which then gets translated into better calibration of the equipment by the workers on the shop floor. Ultimately, achievement of scorecard objectives would be rewarded by the employee compensation system. The Balanced Scorecard can be cascaded in this manner to align the strategy throughout the organization.

The Process of Building a Balanced Scorecard

While there are many ways to develop a Balanced Scorecard, Kaplan and Norton defined a four-step process that has been used across a wide range of organizations.

1. Define the measurement architecture - When a company initially introduces the Balanced Scorecard, it is more manageable to apply it on the strategic business unit level rather than the corporate level. However, interactions must be considered in order to avoid optimizing the results of one business unit at the expense of others.
2. Specify strategic objectives - The top three or four objectives for each perspective are agreed upon. Potential measures are identified for each objective.
3. Choose strategic measures - Measures that are closely related to the actual performance drivers are selected for evaluating the progress made toward achieving the objectives.
4. Develop the implementation plan - Target values are assigned to the measures. An information system is developed to link the top-level metrics to lower-level operational measures. The scorecard is integrated into the management system.

Balanced Scorecard Benefits

Some of the benefits of the Balanced Scorecard system include:

Translation of strategy into measurable parameters.
Communication of the strategy to everybody in the firm.
Alignment of individual goals with the firm's strategic objectives - the BSC recognizes that the selected measures influence the behavior of employees.
Feedback of implementation results to the strategic planning process.

Since its beginnings as a performance measurement system, the Balanced Scorecard has evolved into a strategy implementation system that not only measures performance but also describes, communicates, and aligns the strategy throughout the organization.

Potential Pitfalls

The following are potential pitfalls that should be avoided when implementing the Balanced Scorecard:

Lack of a well-defined strategy: The Balanced Scorecard relies on a well-defined strategy and an understanding of the linkages between strategic objectives and the metrics. Without this foundation, the implementation of the Balanced Scorecard is unlikely to be successful.

Using only lagging measures: Many managers believe that they will reap the benefits of the Balanced Scorecard by using a wide range of non-financial measures. However, care should be taken to identify not only lagging measures that describe past performance, but also leading measures that can be used to plan for future performance.

Use of generic metrics: It usually is not sufficient simply to adopt the metrics used by other successful firms. Each firm should put forth the effort to identify the measures that are appropriate for its own strategy and competitive position.


The Demand Curve

The quantity demanded of a good usually is a strong function of its price. Suppose an experiment is run to determine the quantity demanded of a particular product at different price levels, holding everything else constant. Presenting the data in tabular form would result in a demand schedule, an example of which is shown below.

Demand Schedule

Price   Quantity Demanded
5       10
4       17
3       26
2       38
1       53

The demand curve for this example is obtained by plotting the data:

[Figure: Demand Curve]

By convention, the demand curve is drawn with quantity demanded on the x axis and price on the y axis, even though quantity usually is modeled as a function of price. The law of demand states that quantity demanded moves in the opposite direction of price (all other things held constant), and this effect is observed in the downward slope of the demand curve.

For basic analysis, the demand curve often is approximated as a straight line. A demand function can be written to describe the demand curve. Demand functions for a straight-line demand curve take the following form:

Quantity = a - (b x Price)

where a and b are constants that must be determined for each particular demand curve. When price changes, the result is a change in quantity demanded as one moves along the demand curve.

Shifts in the Demand Curve

When there is a change in an influencing factor other than price, there may be a shift in the demand curve to the left or to the right, as the quantity demanded increases or decreases at a given price. For example, if there is a positive news report about the product, the quantity demanded at each price may increase, as demonstrated by the demand curve shifting to the right:

[Figure: Demand Curve Shift]
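The straight-line demand function can be sketched in code. The constants a and b below are hypothetical values chosen only for illustration, not taken from the demand schedule:

```python
# A minimal sketch of the straight-line demand function Quantity = a - (b x Price).
# The constants a and b are hypothetical, assumed only for illustration.

def quantity_demanded(price, a=60.0, b=10.0):
    """Linear demand: quantity falls as price rises (law of demand)."""
    return a - b * price

# Moving along the demand curve: a higher price means a lower quantity demanded.
q_at_2 = quantity_demanded(2)   # 60 - 10*2 = 40
q_at_4 = quantity_demanded(4)   # 60 - 10*4 = 20

# A rightward shift of the curve (e.g., favorable news) raises the intercept a,
# increasing quantity demanded at every price.
q_shifted = quantity_demanded(2, a=70.0)  # 50
```

A change in price moves along the curve (same a and b); a change in any other factor changes the constants themselves, shifting the whole curve.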

A number of factors may influence the demand for a product, and changes in one or more of those factors may cause a shift in the demand curve. Some of these demand-shifting factors are:

Customer preference
Prices of related goods
o Complements - an increase in the price of a complement reduces demand, shifting the demand curve to the left.
o Substitutes - an increase in the price of a substitute product increases demand, shifting the demand curve to the right.
Income - an increase in income shifts the demand curve of normal goods to the right.
Number of potential buyers - an increase in population or market size shifts the demand curve to the right.
Expectations of a price change - a news report predicting higher prices in the future can increase the current demand as customers increase the quantity they purchase in anticipation of the price change.

Price Elasticity of Demand

An important aspect of a product's demand curve is how much the quantity demanded changes when the price changes. The economic measure of this response is the price elasticity of demand. Price elasticity of demand is calculated by dividing the proportionate change in quantity demanded by the proportionate change in price. Proportionate (or percentage) changes are used so that the elasticity is a unitless value and does not depend on the types of measures used (e.g. kilograms, pounds, etc.).

As an example, if a 2% increase in price resulted in a 1% decrease in quantity demanded, the price elasticity of demand would be equal to approximately 0.5. It is not exactly 0.5 because the definition of elasticity uses the average of the initial and final values when calculating percentage change. When the elasticity is calculated over a certain arc or section of the demand curve, it is referred to as the arc elasticity and is defined as the magnitude (absolute value) of the following:

[ (Q2 - Q1) / ((Q1 + Q2) / 2) ] / [ (P2 - P1) / ((P1 + P2) / 2) ]

where:

Q1 = initial quantity
Q2 = final quantity
P1 = initial price
P2 = final price

The average values for quantity and price are used so that the elasticity will be the same whether calculated going from lower price to higher price or from higher price to lower price. For example, going from $8 to $10 is a 25% increase in price, but going from $10 to $8 is only a 20% decrease in price. This asymmetry is eliminated by using the average price as the basis for the percentage change in both cases. For slightly easier calculations, the formula for arc elasticity can be rewritten as:

[ (Q2 - Q1)(P2 + P1) ] / [ (Q2 + Q1)(P2 - P1) ]

To better understand the price elasticity of demand, it is worthwhile to consider different ranges of values.

Elasticity > 1

In this case, the change in quantity demanded is proportionately larger than the change in price. This means that an increase in price would result in a decrease in revenue, and a decrease in price would result in an increase in revenue. In the extreme case of near infinite elasticity, the demand curve would be nearly horizontal, meaning that the quantity demanded is extremely sensitive to changes in price. The case of infinite elasticity is described as being perfectly elastic and is illustrated below:

[Figure: Perfectly Elastic Demand Curve]

From this demand curve it is easy to visualize how an extremely small change in price would result in an infinitely large shift in quantity demanded.
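The arc elasticity formula can be expressed as a small function. A minimal sketch, applied to the first step of the demand schedule above (price falls from 5 to 4, quantity rises from 10 to 17):

```python
# Arc elasticity: |(Q2 - Q1)(P2 + P1)| / |(Q2 + Q1)(P2 - P1)|,
# the rewritten form of the average-value (midpoint) formula.

def arc_elasticity(q1, q2, p1, p2):
    """Price elasticity of demand over an arc, using average-value bases."""
    return abs((q2 - q1) * (p2 + p1)) / abs((q2 + q1) * (p2 - p1))

# Demand schedule above: price 5 -> 4, quantity 10 -> 17.
e = arc_elasticity(10, 17, 5, 4)  # (7 * 9) / (27 * 1) = 7/3, so elastic

# Symmetry check: the same value results going in the other direction.
e_reverse = arc_elasticity(17, 10, 4, 5)
```

Because the averages of price and quantity serve as the bases, the result is identical whether the price rises or falls over the arc.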

Elasticity < 1

In this case, the change in quantity demanded is proportionately smaller than the change in price. An increase in price would result in an increase in revenue, and a decrease in price would result in a decrease in revenue. In the extreme case of elasticity near 0, the demand curve would be nearly vertical, and the quantity demanded would be almost independent of price. The case of zero elasticity is described as being perfectly inelastic.

[Figure: Perfectly Inelastic Demand Curve]

From this demand curve, it is easy to visualize how even a very large change in price would have no impact on quantity demanded.

Elasticity = 1

This case is referred to as unitary elasticity. The change in quantity demanded is in the same proportion as the change in price. A change in price in either direction therefore would result in no change in revenue.

Applications of Price Elasticity of Demand

The price elasticity of demand can be applied to a variety of problems in which one wants to know the expected change in quantity demanded or revenue given a contemplated change in price. For example, suppose a state automobile registration authority is considering a price hike for personalized "vanity" license plates. The current annual price is $35 per year, and the registration office is considering increasing the price to $40 per year in an effort to increase revenue. Suppose that the registration office knows that the price elasticity of demand from $35 to $40 is 1.3. Because the elasticity is greater than one over the price range of interest, we know that an increase in price actually would decrease the revenue collected by the automobile registration authority, so the price hike would be unwise.
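The license-plate example can be checked numerically. The base quantity of 1,000 plates below is a hypothetical figure assumed only for illustration (the text gives only the two prices and the elasticity):

```python
# A rough check of the vanity-plate example: with arc elasticity 1.3 between
# $35 and $40, the price hike should reduce revenue. The base quantity of
# 1000 plates is hypothetical, assumed only for illustration.

p1, p2 = 35.0, 40.0
q1 = 1000.0
elasticity = 1.3

# Invert the arc elasticity definition to estimate the new quantity q2:
# (q1 - q2) / ((q1 + q2)/2) = elasticity * (p2 - p1) / ((p1 + p2)/2)
pct_price = (p2 - p1) / ((p1 + p2) / 2)   # about 13.3% (average-value base)
pct_qty = elasticity * pct_price          # about 17.3% drop in quantity
# Solve q1 - q2 = (pct_qty / 2) * (q1 + q2) for q2:
k = pct_qty / 2
q2 = q1 * (1 - k) / (1 + k)

revenue_before = p1 * q1                  # $35,000
revenue_after = p2 * q2                   # smaller, despite the higher price
```

Since the proportionate drop in quantity exceeds the proportionate rise in price, the higher price is more than offset and total revenue falls.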

Factors Influencing the Price Elasticity of Demand The price elasticity of demand for a particular demand curve is influenced by the following factors:

Availability of substitutes: the greater the number of substitute products, the greater the elasticity.
Degree of necessity or luxury: luxury products tend to have greater elasticity than necessities. Some products that initially have a low degree of necessity are habit forming and can become "necessities" to some consumers.
Proportion of income required by the item: products requiring a larger portion of the consumer's income tend to have greater elasticity.
Time period considered: elasticity tends to be greater over the long run because consumers have more time to adjust their behavior to price changes.
Permanent or temporary price change: a one-day sale will result in a different response than a permanent price decrease of the same magnitude.
Price points: decreasing the price from $2.00 to $1.99 may result in a greater increase in quantity demanded than decreasing it from $1.99 to $1.98.

Point Elasticity

It sometimes is useful to calculate the price elasticity of demand at a specific point on the demand curve instead of over a range of it. This measure of elasticity is called the point elasticity. Because point elasticity is for an infinitesimally small change in price and quantity, it is defined using differentials, as follows:

(dQ / Q) / (dP / P)

and can be written as:

(dQ / dP) x (P / Q)

The point elasticity can be approximated by calculating the arc elasticity for a very short arc, for example, a 0.01% change in price.
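For a straight-line demand curve Quantity = a - (b x Price), dQ/dP = -b, so the magnitude of the point elasticity is b x P / Q. This can be compared against the short-arc approximation described above; the constants a and b are assumed only for illustration:

```python
# Point elasticity for a linear demand curve Q = a - b*P, compared with
# the short-arc approximation. Constants a and b are hypothetical.

a, b = 60.0, 10.0

def point_elasticity(price):
    """Magnitude of (dQ/dP) * (P/Q); for linear demand, dQ/dP = -b."""
    q = a - b * price
    return b * price / q

def short_arc_elasticity(price, dp_fraction=0.0001):
    """Arc elasticity over a 0.01% price change, as suggested in the text."""
    p1, p2 = price, price * (1 + dp_fraction)
    q1, q2 = a - b * p1, a - b * p2
    return abs((q2 - q1) * (p2 + p1)) / abs((q2 + q1) * (p2 - p1))

e_point = point_elasticity(3)       # 10*3 / 30 = 1.0 (unitary at this point)
e_approx = short_arc_elasticity(3)  # very close to 1.0
```

Note that even along a straight-line demand curve the point elasticity varies: it is large near the price intercept and approaches zero near the quantity intercept.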

The Supply Curve

Price usually is a major determinant in the quantity supplied. For a particular good with all other factors held constant, a table can be constructed of price and quantity supplied based on observed data. Such a table is called a supply schedule, as shown in the following example:

Supply Schedule

Price   Quantity Supplied
1       12
2       28
3       42
4       52
5       60

By graphing this data, one obtains the supply curve as shown below:

[Figure: Supply Curve]

As with the demand curve, the convention of the supply curve is to display quantity supplied on the x-axis and price on the y-axis. The law of supply states that the higher the price, the larger the quantity supplied, all other things constant. The law of supply is demonstrated by the upward slope of the supply curve. As with the demand curve, the supply curve often is approximated as a straight line to simplify analysis. A straight-line supply function would have the following structure:

Quantity = a + (b x Price)

where a and b are constants for each supply curve. A change in price results in a change in quantity supplied and represents movement along the supply curve.

Shifts in the Supply Curve

While changes in price result in movement along the supply curve, changes in other relevant factors cause a shift in supply, that is, a shift of the supply curve to the left or right. Such a shift results in a change in quantity supplied for a given price level. If the change causes an increase in the quantity supplied at each price, the supply curve would shift to the right:

[Figure: Supply Curve Shift]

There are several factors that may cause a shift in a good's supply curve. Some supply-shifting factors include:

Prices of other goods - the supply of one good may decrease if the price of another good increases, causing producers to reallocate resources to produce larger quantities of the more profitable good.
Number of sellers - more sellers result in more supply, shifting the supply curve to the right.
Prices of relevant inputs - if the cost of resources used to produce a good increases, sellers will be less inclined to supply the same quantity at a given price, and the supply curve will shift to the left.
Technology - technological advances that increase production efficiency shift the supply curve to the right.
Expectations - if sellers expect prices to increase, they may decrease the quantity currently supplied at a given price in order to be able to supply more when the price increases, resulting in a supply curve shift to the left.

Supply and Demand

The market price of a good is determined by both the supply and demand for it. In 1890, English economist Alfred Marshall published his work, Principles of Economics, which was one of the earlier writings on how both supply and demand interacted to determine price. Today, the supply-demand model is one of the fundamental concepts of economics. The price level of a good essentially is determined by the point at which quantity supplied equals quantity demanded. To illustrate, consider the following case in which the supply and demand curves are plotted on the same graph.

[Figure: Supply and Demand]

On this graph, there is only one price level at which quantity demanded is in balance with the quantity supplied, and that price is the point at which the supply and demand curves cross. The law of supply and demand predicts that the price level will move toward the point that equalizes quantities supplied and demanded.

To understand why this must be the equilibrium point, consider the situation in which the price is higher than the price at which the curves cross. In such a case, the quantity supplied would be greater than the quantity demanded and there would be a surplus of the good on the market. Specifically, from the graph we see that if the unit price is $3 (assuming relative pricing in dollars), the quantities supplied and demanded would be:

Quantity Supplied = 42 units
Quantity Demanded = 26 units

Therefore there would be a surplus of 42 - 26 = 16 units. The sellers then would lower their price in order to sell the surplus.

Suppose the sellers lowered their prices below the equilibrium point. In this case, the quantity demanded would increase beyond what was supplied, and there would be a shortage. If the price is held at $2, the quantities would be:

Quantity Supplied = 28 units
Quantity Demanded = 38 units

Therefore, there would be a shortage of 38 - 28 = 10 units. The sellers then would increase their prices to earn more money.

The equilibrium point must be the point at which quantity supplied and quantity demanded are in balance, which is where the supply and demand curves cross. From the graph above, one sees that this is at a price of approximately $2.40 and a quantity of 34 units.

To understand how the law of supply and demand functions when there is a shift in demand, consider the case in which there is a shift in demand:

[Figure: Shift in Demand]

In this example, the positive shift in demand results in a new supply-demand equilibrium point that is higher in both quantity and price. For each possible shift in the supply or demand curve, a similar graph can be constructed showing the effect on equilibrium price and quantity. The following table summarizes the results that would occur from shifts in supply, demand, and combinations of the two.

Result of Shifts in Supply and Demand

Demand   Supply   Equilibrium Price   Equilibrium Quantity
+                 +                   +
-                 -                   -
         +        -                   +
         -        +                   -
+        +        ?                   +
+        -        +                   ?
-        +        -                   ?
-        -        ?                   -

In the above table, "+" represents an increase, "-" represents a decrease, a blank represents no change, and a question mark indicates that the net change cannot be determined without knowing the magnitude of the shift in supply and demand. If these results are not immediately obvious, drawing a graph for each will facilitate the analysis.
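For straight-line supply and demand curves, the equilibrium can be solved directly. The constants below are hypothetical, chosen so the result lands near the example's equilibrium of roughly $2.40 and 34 units:

```python
# Solving for the supply-demand equilibrium of straight-line curves.
# Demand: Qd = a - b*P; Supply: Qs = c + d*P. All four constants are
# hypothetical, assumed only to roughly mimic the example above.

a, b = 60.0, 10.8   # demand intercept and slope (assumed)
c, d = 5.0, 12.0    # supply intercept and slope (assumed)

# At equilibrium Qd = Qs:  a - b*P = c + d*P  =>  P = (a - c) / (b + d)
p_eq = (a - c) / (b + d)
q_eq = a - b * p_eq

# Above the equilibrium price, quantity supplied exceeds quantity demanded
# (a surplus), which pushes the price back down.
surplus_at_3 = (c + d * 3) - (a - b * 3)
```

At any price other than p_eq the surplus or shortage creates pressure that moves the price back toward equilibrium, which is the mechanism the text describes.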

Opportunity Cost

Scarcity of resources is one of the more basic concepts of economics. Scarcity necessitates trade-offs, and trade-offs result in an opportunity cost. While the cost of a good or service often is thought of in monetary terms, the opportunity cost of a decision is based on what must be given up (the next best alternative) as a result of the decision. Any decision that involves a choice between two or more options has an opportunity cost. Opportunity cost contrasts with accounting cost in that accounting costs do not consider forgone opportunities.

Consider the case of an MBA student who pays $30,000 per year in tuition and fees at a private university. For a two-year MBA program, the cost of tuition and fees would be $60,000. This is the monetary cost of the education. However, when making the decision to go back to school, one should consider the opportunity cost, which includes the income that the student would have earned if the alternative decision of remaining in his or her job had been made. If the student had been earning $50,000 per year and was expecting a 10% salary increase in one year, $105,000 in salary would be forgone as a result of the decision to return to school. Adding this amount to the educational expenses results in a cost of $165,000 for the degree.

Opportunity cost is useful when evaluating the cost and benefit of choices. It often is expressed in non-monetary terms. For example, if one has time for only one elective course, taking a course in microeconomics might have the opportunity cost of a course in management. By expressing the cost of one option in terms of the forgone benefits of another, the marginal costs and marginal benefits of the options can be compared. As another example, if a shipwrecked sailor on a desert island is capable of catching 10 fish or harvesting 5 coconuts in one day, then the opportunity cost of producing one coconut is two fish (10 fish / 5 coconuts). Note that this simple example assumes that the production possibility frontier between fish and coconuts is linear.

Relative Price

Opportunity cost is expressed in relative price, that is, the price of one choice relative to the price of another. For example, if milk costs $4 per gallon and bread costs $2 per loaf, then the relative price of milk is 2 loaves of bread. If a consumer goes to the grocery store with only $4 and buys a gallon of milk with it, then one can say that the opportunity cost of that gallon of milk was 2 loaves of bread (assuming that bread was the next best alternative). In many cases, the relative price provides better insight into the real cost of a good than does the monetary price.

Applications of Opportunity Cost

The concept of opportunity cost has a wide range of applications including:

Consumer choice
Production possibilities
Cost of capital
Time management
Career choice
Analysis of comparative advantage
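The MBA example above reduces to a short calculation, combining the monetary cost with the forgone salary:

```python
# The MBA example as arithmetic: monetary cost plus forgone income.

tuition_per_year = 30_000
years = 2
monetary_cost = tuition_per_year * years            # $60,000 in tuition and fees

salary_year_1 = 50_000
salary_year_2 = salary_year_1 + salary_year_1 // 10 # expected 10% raise: $55,000
forgone_income = salary_year_1 + salary_year_2      # $105,000 given up

total_cost = monetary_cost + forgone_income         # $165,000 for the degree
```

The accounting cost of the degree is only the $60,000 of tuition and fees; the opportunity cost view adds the $105,000 of forgone salary.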

The Production Possibility Frontier

Consider the case of an island economy that produces only two goods: wine and grain. In a given period of time, the islanders may choose to produce only wine, only grain, or a combination of the two according to the following table:

Production Possibility Table

Wine                     Grain
(thousands of bottles)   (thousands of bushels)
0                        15
5                        14
9                        12
12                       9
14                       5
15                       0

The production possibility frontier (PPF) is the curve resulting when the above data is graphed, as shown below:

[Figure: Production Possibility Frontier]

The PPF shows all efficient combinations of output for this island economy when the factors of production are used to their full potential. The economy could choose to operate at less than capacity somewhere inside the curve, for example at point a, but such a combination of goods would be less than what the economy is capable of producing. A combination outside the curve such as point b is not possible since the output level would exceed the capacity of the economy.

The shape of this production possibility frontier illustrates the principle of increasing cost. As more of one product is produced, increasingly larger amounts of the other product must be given up. In this example, some factors of production are suited to producing both wine and grain, but as the production of one of these commodities increases, resources better suited to production of the other must be diverted. Experienced wine producers are not necessarily efficient grain producers, and grain producers are not necessarily efficient wine producers, so the opportunity cost increases as one moves toward either extreme on the curve of production possibilities.

Suppose a new technique was discovered that allowed the wine producers to double their output for a given level of resources. Further suppose that this technique could not be applied to grain production. The impact on the production possibilities is shown in the following diagram:

[Figure: Shifted Production Possibility Frontier]

In the above diagram, the new technique results in wine production that is double its previous level for any level of grain production.

Finally, if the two products are very similar to one another, the production possibility frontier may be shaped more like a straight line. Consider the situation in which only wine is produced. Let's assume that two brands of wine are produced, Brand A and Brand B, and that these two brands use the same grapes and production process, differing only in the name on the label. The same factors of production can produce either product (brand) equally efficiently. The production possibility frontier then would appear as follows:

[Figure: PPF for Very Similar Products]

Note that to increase production of Brand A from 0 to 3000 bottles, the production of Brand B must be decreased by 3000 bottles. This opportunity cost remains the same even at the other extreme, where increasing the production of Brand A from 12,000 to 15,000 bottles still requires that of Brand B to be decreased by 3000 bottles. Because the two products are almost identical in this case and can be produced equally efficiently using the same resources, the opportunity cost of producing one over the other remains constant between the two extremes of production possibilities.
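The principle of increasing cost can be computed directly from the island economy's production possibility table: the grain given up per additional unit of wine grows as wine output rises.

```python
# Marginal opportunity cost along the island economy's PPF table:
# each successive increase in wine output costs more grain per bottle.

ppf = [(0, 15), (5, 14), (9, 12), (12, 9), (14, 5), (15, 0)]  # (wine, grain)

costs = []
for (w1, g1), (w2, g2) in zip(ppf, ppf[1:]):
    # grain given up per additional unit of wine over this segment
    costs.append((g1 - g2) / (w2 - w1))

# costs rises steadily: 0.2, 0.5, 1.0, 2.0, 5.0 grain per unit of wine
```

For the two nearly identical wine brands, by contrast, the corresponding list would hold the same value (1 bottle per bottle) on every segment, which is exactly what the straight-line PPF expresses.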

David Ricardo and Comparative Advantage

In his 1817 book, On the Principles of Political Economy and Taxation, David Ricardo used the example of Portugal and England's trading of wine and cloth to illustrate the benefits of specialization and trade. His writing served as the basis for the principle of comparative advantage, under which total output will be increased if people and nations engage in those activities for which their advantages over others are the largest or their disadvantages are the smallest. Imagine two individuals, A and B, living on a remote island. Two goods are needed and produced: coconuts and fish. Person A has an absolute advantage in the production of both goods, able to produce more coconuts than B and more fish than B. Their production capabilities are summarized in the following table:

Output Alternatives
    Coconuts   Fish
A     10        10
B      4         8
Note that the numbers in the above table indicate the maximum amount of one commodity that could be produced assuming the individual produced none of the other commodity. For example, if A decided to harvest 10 coconuts, then A would not be able to catch any fish. Similarly, if B decided to catch 8 fish, then B would not be able to harvest any coconuts. The values represent the endpoints of each individual's production possibility frontier. For this discussion, we will assume that each production possibility frontier is linear as shown below.

[Figure: Production Possibilities for A and B]

If the individuals did not trade, then each would produce both coconuts and fish. For example, if each spent half of his or her time harvesting coconuts and the other half catching fish, the output from A would be 5 coconuts and 5 fish, and the output from B would be 2 coconuts and 4 fish. The total combined output then would be 7 coconuts and 9 fish. Since both A and B must make trade-offs in their production decisions, they each have an opportunity cost for each commodity they produce:

Opportunity cost of coconuts

A: 1 fish per coconut. (10 fish per 10 coconuts.) B: 2 fish per coconut. (8 fish per 4 coconuts.)

Opportunity cost of fish

A: 1 coconut per fish. (10 coconuts per 10 fish.) B: 0.5 coconut per fish. (4 coconuts per 8 fish.)

Since the opportunity cost of coconuts is lower for A than for B, one can say that A has a comparative advantage in producing coconuts, so A should produce coconuts to maximize the island's output. Since the opportunity cost of fish is lower for B than for A, one can say that B has a comparative advantage in producing fish, so B should produce fish to maximize the island's output. If A produces coconuts and B produces fish, then the total combined output would be 10 coconuts and 8 fish (versus 7 coconuts and 9 fish without specialization).

From this example, it might not be immediately obvious that the individuals are better off: while they have gained 3 coconuts, they have lost one fish. However, A easily can choose to produce 9 coconuts and one fish, so that the combined output becomes 9 coconuts and 9 fish. Compared to the case of no specialization, there is a net gain of 2 coconuts with no loss of fish. By trading with one another, the two individuals can distribute the goods according to their preferences, and both are better off as a result of their specialization and trading.

The effect of specialization and trade is an expansion of the production possibilities for the individuals. Even though A has an absolute advantage over B for both commodities, they both benefit by specializing and trading.
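The arithmetic above can be sketched in a few lines of Python. The capacities come from the example in the text; the helper function itself is purely illustrative:

```python
# Production capacities from the island example:
# A: 10 coconuts or 10 fish; B: 4 coconuts or 8 fish.

def opportunity_cost(max_this, max_other):
    """Units of the other good given up per unit of this good produced."""
    return max_other / max_this

# Opportunity cost of one coconut, measured in fish
oc_coconut_a = opportunity_cost(10, 10)  # 1.0 fish per coconut
oc_coconut_b = opportunity_cost(4, 8)    # 2.0 fish per coconut

# A has the lower opportunity cost for coconuts: comparative advantage
assert oc_coconut_a < oc_coconut_b

# Output without specialization (each splits time evenly) vs. with it
no_trade = (10 / 2 + 4 / 2, 10 / 2 + 8 / 2)  # (7.0 coconuts, 9.0 fish)
specialized = (10, 8)                        # A makes coconuts, B makes fish
print(no_trade, specialized)
```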


Future Value

The future value of a sum of money invested at interest rate i for one year is given by:

FV = PV ( 1 + i )

where

FV = future value
PV = present value
i = annual interest rate

If the resulting principal and interest are re-invested a second year at the same interest rate, the future value is given by:

FV = PV ( 1 + i ) ( 1 + i )

In general, the future value of a sum of money invested for t years with the interest credited and re-invested at the end of each year is:

FV = PV ( 1 + i )^t
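The compounding formula can be sketched in Python (the function name is ours, for illustration only):

```python
def future_value(pv, i, t):
    """Future value of pv invested at annual rate i for t years,
    with interest credited and re-invested once per year."""
    return pv * (1 + i) ** t

# $1,000 at 5% for 10 years
print(round(future_value(1000, 0.05, 10), 2))  # 1628.89
```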

Solving for Required Interest Rate or Time

Given a present sum of money and a desired future value, one can determine either the interest rate required to attain the future value given the time span, or the time required to reach the future value at a given interest rate. Because solving for the interest rate or time is slightly more difficult than solving for future value, there are a few methods for arriving at a solution:

1. Iteration - by calculating the future value for different values of interest rate or time, one gradually can converge on the solution.
2. Financial calculator or spreadsheet - use built-in functions to instantly calculate the solution.
3. Interest rate table - by using a table such as the one at the end of this page, one quickly can find a value of interest rate or time that is close to the solution.
4. Algebraic solution - mathematically calculating the exact solution.

Algebraic Solution

Beginning with the future value equation and given a fixed time period, one can solve for the required interest rate as follows.

FV = PV ( 1 + i )^t

Dividing each side by PV and raising each side to the power of 1/t:

( FV / PV )^(1/t) = 1 + i

The required interest rate then is given by:

i = ( FV / PV )^(1/t) - 1

To solve for the required time to reach a future value at a specified interest rate, again start with the equation for future value:

FV = PV ( 1 + i )^t

Taking the logarithm (natural log or common log) of each side:

log FV = log [ PV ( 1 + i )^t ]

Relying on the properties of logarithms, the expression can be rearranged as follows:

log FV = log PV + t log ( 1 + i )

Solving for t:

t = log ( FV / PV ) / log ( 1 + i )
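Both algebraic results can be checked numerically; the following sketch uses hypothetical helper names:

```python
import math

def required_rate(pv, fv, t):
    """i = (FV/PV)^(1/t) - 1"""
    return (fv / pv) ** (1 / t) - 1

def required_time(pv, fv, i):
    """t = log(FV/PV) / log(1+i); any logarithm base works."""
    return math.log(fv / pv) / math.log(1 + i)

# Rate needed to double money in 9 years, and years to double at 8%
print(round(required_rate(1, 2, 9), 4))     # 0.0801
print(round(required_time(1, 2, 0.08), 1))  # 9.0
```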

Interest Factor Table

The term ( 1 + i )^t is the future value interest factor and may be calculated for an array of time periods and interest rates to construct a table as shown below:

Table of Future Value Interest Factors

t\i    1%      2%      3%      4%      5%      6%      7%      8%      9%      10%
1     1.010   1.020   1.030   1.040   1.050   1.060   1.070   1.080   1.090   1.100
2     1.020   1.040   1.061   1.082   1.103   1.124   1.145   1.166   1.188   1.210
3     1.030   1.061   1.093   1.125   1.158   1.191   1.225   1.260   1.295   1.331
4     1.041   1.082   1.126   1.170   1.216   1.262   1.311   1.360   1.412   1.464
5     1.051   1.104   1.159   1.217   1.276   1.338   1.403   1.469   1.539   1.611
6     1.062   1.126   1.194   1.265   1.340   1.419   1.501   1.587   1.677   1.772
7     1.072   1.149   1.230   1.316   1.407   1.504   1.606   1.714   1.828   1.949
8     1.083   1.172   1.267   1.369   1.477   1.594   1.718   1.851   1.993   2.144
9     1.094   1.195   1.305   1.423   1.551   1.689   1.838   1.999   2.172   2.358
10    1.105   1.219   1.344   1.480   1.629   1.791   1.967   2.159   2.367   2.594
11    1.116   1.243   1.384   1.539   1.710   1.898   2.105   2.332   2.580   2.853
12    1.127   1.268   1.426   1.601   1.796   2.012   2.252   2.518   2.813   3.138
13    1.138   1.294   1.469   1.665   1.886   2.133   2.410   2.720   3.066   3.452
14    1.149   1.319   1.513   1.732   1.980   2.261   2.579   2.937   3.342   3.797
15    1.161   1.346   1.558   1.801   2.079   2.397   2.759   3.172   3.642   4.177
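The table entries can be regenerated directly from the factor formula; a short illustrative sketch:

```python
def fv_factor(i, t):
    """Future value interest factor (1 + i)^t."""
    return (1 + i) ** t

# Rebuild a few rows of the table above (rows = years, columns = 1%..10%)
for t in (1, 5, 10):
    row = [round(fv_factor(rate / 100, t), 3) for rate in range(1, 11)]
    print(t, row)
```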

Present Value

The present value of a sum of money to be received at a future date is determined by discounting the future value at the interest rate that the money could earn over the period. Starting with the future value equation:

FV = PV ( 1 + i )^t

where

FV = future value
PV = present value
i = annual interest rate

we see that the present value is given by:

PV = FV / ( 1 + i )^t

The term 1 / ( 1 + i )^t is known as the discount factor. If both the future value and present value are known, one can solve for the time or the interest rate using one of the techniques discussed in future value calculations.

Present Value of Multiple Future Cash Payments

When there is more than a single cash payment at a future date, the present value is calculated by taking the present values of the individual cash payments and summing them. It is helpful to draw a time line depicting the timing of the cash payments:

Time Line

Year:       0    1    2    3
Cash flow:  PV   C1   C2   C3

In this model, the cash payment at each date may be either an inflow or an outflow; the direction is designated by the sign. The present value of the above cash flow is:

PV = C1 / ( 1 + i ) + C2 / ( 1 + i )^2 + C3 / ( 1 + i )^3
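This summation can be sketched in Python (the function and the sample payments are illustrative):

```python
def present_value(cash_flows, i):
    """PV of payments C1..Cn received at the end of years 1..n,
    discounted at annual rate i. Negative values denote outflows."""
    return sum(c / (1 + i) ** t for t, c in enumerate(cash_flows, start=1))

# C1 = 100, C2 = 200, C3 = 300 discounted at 10%
print(round(present_value([100, 200, 300], 0.10), 2))  # 481.59
```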

Discount Factor Table

The discount factor 1 / ( 1 + i )^t may be calculated for a range of time periods and interest rates and tabulated for quick reference.

Table of Discount Factors

t\i    1%      2%      3%      4%      5%      6%      7%      8%      9%      10%
1     0.990   0.980   0.971   0.962   0.952   0.943   0.935   0.926   0.917   0.909
2     0.980   0.961   0.943   0.925   0.907   0.890   0.873   0.857   0.842   0.826
3     0.971   0.942   0.915   0.889   0.864   0.840   0.816   0.794   0.772   0.751
4     0.961   0.924   0.888   0.855   0.823   0.792   0.763   0.735   0.708   0.683
5     0.951   0.906   0.863   0.822   0.784   0.747   0.713   0.681   0.650   0.621
6     0.942   0.888   0.837   0.790   0.746   0.705   0.666   0.630   0.596   0.564
7     0.933   0.871   0.813   0.760   0.711   0.665   0.623   0.583   0.547   0.513
8     0.923   0.853   0.789   0.731   0.677   0.627   0.582   0.540   0.502   0.467
9     0.914   0.837   0.766   0.703   0.645   0.592   0.544   0.500   0.460   0.424
10    0.905   0.820   0.744   0.676   0.614   0.558   0.508   0.463   0.422   0.386
11    0.896   0.804   0.722   0.650   0.585   0.527   0.475   0.429   0.388   0.350
12    0.887   0.788   0.701   0.625   0.557   0.497   0.444   0.397   0.356   0.319
13    0.879   0.773   0.681   0.601   0.530   0.469   0.415   0.368   0.326   0.290
14    0.870   0.758   0.661   0.577   0.505   0.442   0.388   0.340   0.299   0.263
15    0.861   0.743   0.642   0.555   0.481   0.417   0.362   0.315   0.275   0.239


Annuities

An annuity is a series of equal payments over a specified time frame. For example, a cash payment of C made at the end of each year for four years at annual interest rate i is shown in the following time line:

4-Year Annuity Time Line

Year:       0    1    2    3    4
Cash flow:  PV   C    C    C    C

This time line is for an ordinary annuity, in which the cash payments are made at the end of each year. For example, the first payment is made exactly one year from the present. The present value of this cash flow is calculated by:

PV = C / ( 1 + i ) + C / ( 1 + i )^2 + C / ( 1 + i )^3 + C / ( 1 + i )^4

In general, for a t-year annuity:

PV = C / ( 1 + i ) + C / ( 1 + i )^2 + ... + C / ( 1 + i )^t

From this potentially long series, a present value formula can be derived. First, multiply each side by 1 / ( 1 + i ).

PV / ( 1 + i ) = C / ( 1 + i )^2 + C / ( 1 + i )^3 + ... + C / ( 1 + i )^(t+1)

In order to eliminate most of the terms in the series, subtract the second equation from the first equation:

PV - PV / ( 1 + i ) = C / ( 1 + i ) - C / ( 1 + i )^(t+1)

Solving for PV, the present value of an ordinary annuity is given by:

PV = ( C / i ) [ 1 - 1 / ( 1 + i )^t ]

This equation assumes that the first payment of the annuity is made at the end of the first time period. If instead the payments are made at the beginning of each time period, then the present value calculation would be similar to the above, except that all payments would be shifted forward by one year. This shift can be accomplished by multiplying the entire present value expression by ( 1 + i ). Such an annuity with the payments occurring at the beginning of each time period is called an annuity due.
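The ordinary annuity formula and the annuity-due adjustment can be sketched together (illustrative helper, not from the text):

```python
def annuity_pv(c, i, t, due=False):
    """PV of an annuity paying c per year for t years at rate i.
    Ordinary annuity by default; due=True shifts payments to the
    beginning of each period, which multiplies the PV by (1 + i)."""
    pv = (c / i) * (1 - 1 / (1 + i) ** t)
    return pv * (1 + i) if due else pv

# $1 per year for 10 years at 5%
print(round(annuity_pv(1, 0.05, 10), 3))            # 7.722 (ordinary)
print(round(annuity_pv(1, 0.05, 10, due=True), 3))  # 8.108 (annuity due)
```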

Annuity Factor Table

The factor for calculating the present value of an ordinary annuity may be calculated for a range of time periods and interest rates and tabulated for quick reference. The annuity factor is the value of the following expression:

( 1 / i ) [ 1 - 1 / ( 1 + i )^t ]

The following table shows the value of this factor for various interest rates and time periods.

Table of Present Value Annuity Factors


t\i    1%      2%      3%      4%      5%      6%      7%      8%      9%      10%
1     0.990   0.980   0.971   0.962   0.952   0.943   0.935   0.926   0.917   0.909
2     1.970   1.942   1.913   1.886   1.859   1.833   1.808   1.783   1.759   1.736
3     2.941   2.884   2.829   2.775   2.723   2.673   2.624   2.577   2.531   2.487
4     3.902   3.808   3.717   3.630   3.546   3.465   3.387   3.312   3.240   3.170
5     4.853   4.713   4.580   4.452   4.329   4.212   4.100   3.993   3.890   3.791
6     5.795   5.601   5.417   5.242   5.076   4.917   4.767   4.623   4.486   4.355
7     6.728   6.472   6.230   6.002   5.786   5.582   5.389   5.206   5.033   4.868
8     7.652   7.325   7.020   6.733   6.463   6.210   5.971   5.747   5.535   5.335
9     8.566   8.162   7.786   7.435   7.108   6.802   6.515   6.247   5.995   5.759
10    9.471   8.983   8.530   8.111   7.722   7.360   7.024   6.710   6.418   6.145
11   10.368   9.787   9.253   8.760   8.306   7.887   7.499   7.139   6.805   6.495
12   11.255  10.575   9.954   9.385   8.863   8.384   7.943   7.536   7.161   6.814
13   12.134  11.348  10.635   9.986   9.394   8.853   8.358   7.904   7.487   7.103
14   13.004  12.106  11.296  10.563   9.899   9.295   8.745   8.244   7.786   7.367
15   13.865  12.849  11.938  11.118  10.380   9.712   9.108   8.559   8.061   7.606


Perpetuities

A perpetuity is a series of equal payments over an infinite time period into the future. Consider the case of a cash payment C made at the end of each year at interest rate i, as shown in the following time line:

Perpetuity Time Line

Year:       0    1    2    3   ...
Cash flow:  PV   C    C    C   ...

Because this cash flow continues forever, the present value is given by an infinite series:

PV = C / ( 1 + i ) + C / ( 1 + i )^2 + C / ( 1 + i )^3 + . . .

From this infinite series, a usable present value formula can be derived by first dividing each side by ( 1 + i ).

PV / ( 1 + i ) = C / ( 1 + i )^2 + C / ( 1 + i )^3 + C / ( 1 + i )^4 + . . .

In order to eliminate most of the terms in the series, subtract the second equation from the first equation:

PV - PV / ( 1 + i ) = C / ( 1 + i )

Solving for PV, the present value of a perpetuity is given by:

PV = C / i

Growing Perpetuities

Sometimes the payments in a perpetuity are not constant but rather, increase at a certain growth rate g as depicted in the following time line:

Growing Perpetuity Time Line

Year:       0    1    2          3
Cash flow:  PV   C    C(1+g)     C(1+g)^2

The present value of a growing perpetuity can be written as the following infinite series:

PV = C / ( 1 + i ) + C ( 1 + g ) / ( 1 + i )^2 + C ( 1 + g )^2 / ( 1 + i )^3 + . . .

To simplify this expression, first multiply each side by ( 1 + g ) / ( 1 + i ):

PV ( 1 + g ) / ( 1 + i ) = C ( 1 + g ) / ( 1 + i )^2 + C ( 1 + g )^2 / ( 1 + i )^3 + . . .

Then subtract the second equation from the first:

PV - PV ( 1 + g ) / ( 1 + i ) = C / ( 1 + i )

Finally, solving for PV yields the expression for the present value of a growing perpetuity:

PV = C / ( i - g )

For this expression to be valid, the growth rate must be less than the interest rate, that is, g < i .
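The level and growing perpetuity formulas, including the g < i restriction, can be sketched together (illustrative helper):

```python
def perpetuity_pv(c, i, g=0.0):
    """PV of a perpetuity paying c at the end of year 1, with payments
    growing at rate g thereafter. Requires g < i so the series converges."""
    if g >= i:
        raise ValueError("growth rate must be less than the interest rate")
    return c / (i - g)

print(round(perpetuity_pv(100, 0.08), 2))        # level: 100 / 0.08 = 1250.0
print(round(perpetuity_pv(100, 0.08, 0.03), 2))  # growing: 100 / 0.05 = 2000.0
```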

Capital Budgeting

A capital expenditure is an outlay of cash for a project that is expected to produce a cash inflow over a period of time exceeding one year. Examples of projects include investments in property, plant, and equipment, research and development projects, large advertising campaigns, or any other project that requires a capital expenditure and generates a future cash flow. Because capital expenditures can be very large and have a significant impact on the financial performance of the firm, great importance is placed on project selection. This process is called capital budgeting.

Criteria for Capital Budgeting Decisions

Potentially, there is a wide array of criteria for selecting projects. Some shareholders may want the firm to select projects that will show immediate surges in cash inflow, while others may want to emphasize long-term growth with little importance placed on short-term performance. Viewed in this way, it would be quite difficult to satisfy the differing interests of all the shareholders. Fortunately, there is a solution. The goal of the firm is to maximize present shareholder value. This goal implies that projects should be undertaken that result in a positive net present value, that is, a positive present value of the expected cash inflow less the present value of the required capital expenditures. Using net present value (NPV) as a measure, capital budgeting involves selecting those projects that increase the value of the firm because they have a positive NPV. The timing and growth rate of the incoming cash flow is important only to the extent of its impact on NPV.

Using NPV as the criterion by which to select projects assumes efficient capital markets, so that the firm has access to whatever capital is needed to pursue the positive NPV projects. In situations where this is not the case, there may be capital rationing and the capital budgeting process becomes more complex.

Note that it is not the responsibility of the firm to decide whether to please particular groups of shareholders who prefer longer or shorter term results. Once the firm has selected the projects to maximize its net present value, it is up to the individual shareholders to use the capital markets to borrow or lend in order to move the exact timing of their own cash inflows forward or backward. This idea is crucial in the principal-agent relationship that exists between shareholders and corporate managers. Even though each may have their own individual preferences, the common goal is that of maximizing the present value of the corporation.

Alternative Rules for Capital Budgeting

While net present value is the rule that always maximizes shareholder value, some firms use other criteria for their capital budgeting decisions, such as:

Internal Rate of Return (IRR)
Profitability Index
Payback Period
Return on Book Value

In some cases, the investment decisions resulting from the IRR and profitability index methods agree with those of NPV. Decisions made using the payback period and return on book value methods usually are suboptimal from the standpoint of maximizing shareholder value.
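A minimal sketch of the NPV selection rule described above; the project numbers are made up purely for illustration:

```python
def npv(rate, cash_flows):
    """Net present value of a project. cash_flows[0] is the initial
    outlay at time 0 (typically negative); later entries are the
    end-of-year cash inflows."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# Hypothetical project: $1,000 outlay, then $400 per year for 4 years.
# Accept only if NPV > 0 at the firm's 10% discount rate.
project = [-1000, 400, 400, 400, 400]
result = npv(0.10, project)
print(round(result, 2), "accept" if result > 0 else "reject")  # 267.95 accept
```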

Common Size Financial Statements

Common size ratios are used to compare financial statements of different-size companies, or of the same company over different periods. By expressing the items in proportion to some size-related measure, standardized financial statements can be created, revealing trends and providing insight into how the different companies compare. The common size ratio for each line on the financial statement is calculated as follows:

Common Size Ratio = Item of Interest / Reference Item

For example, if the item of interest is inventory and it is referenced to total assets (as it normally would be), the common size ratio would be:

Common Size Ratio for Inventory = Inventory / Total Assets

The ratios often are expressed as percentages of the reference amount. Common size statements usually are prepared for the income statement and balance sheet, expressing information as follows:

Income statement items - expressed as a percentage of total revenue
Balance sheet items - expressed as a percentage of total assets

The following example income statement shows both the dollar amounts and the common size ratios:

Common Size Income Statement

                        Income      Common-Size
                        Statement   Income Statement
Revenue                 70,134      100%
Cost of Goods Sold      44,221      63.1%
Gross Profit            25,913      36.9%
SG&A Expense            13,531      19.3%
Operating Income        12,382      17.7%
Interest Expense         2,862      4.1%
Provision for Taxes      3,766      5.4%
Net Income               5,754      8.2%

For the balance sheet, the common size percentages are referenced to the total assets. The following sample balance sheet shows both the dollar amounts and the common size ratios:

Common Size Balance Sheet

                                      Balance     Common-Size
                                      Sheet       Balance Sheet
ASSETS
Cash & Marketable Securities           6,029      15.1%
Accounts Receivable                   14,378      36.0%
Inventory                             17,136      42.9%
Total Current Assets                  37,543      93.9%
Property, Plant, & Equipment           2,442       6.1%
Total Assets                          39,985      100%

LIABILITIES AND SHAREHOLDERS' EQUITY
Current Liabilities                   14,251      35.6%
Long-Term Debt                        12,624      31.6%
Total Liabilities                     26,875      67.2%
Shareholders' Equity                  13,110      32.8%
Total Liabilities & Equity            39,985      100%
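The common size calculation can be sketched using the sample income statement figures above (Python is used here purely for illustration):

```python
# Sample income statement amounts from the text
income_statement = {
    "Revenue": 70134,
    "Cost of Goods Sold": 44221,
    "Gross Profit": 25913,
    "SG&A Expense": 13531,
    "Operating Income": 12382,
    "Interest Expense": 2862,
    "Provision for Taxes": 3766,
    "Net Income": 5754,
}

# Each line is expressed as a percentage of total revenue
reference = income_statement["Revenue"]
common_size = {item: round(100 * amount / reference, 1)
               for item, amount in income_statement.items()}

print(common_size["Net Income"])  # 8.2
```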

The above common size statements are prepared in a vertical analysis, referencing each line on the financial statement to a total value on the statement in a given period. The ratios in common size statements tend to have less variation than the absolute values themselves, and trends in the ratios can reveal important changes in the business. Historical comparisons can be made in a time-series analysis to identify such trends. Common size statements also can be used to compare the firm to other firms.

Comparisons Between Companies (Cross-Sectional Analysis)

Common size financial statements can be used to compare multiple companies at the same point in time. A common-size analysis is especially useful when comparing companies of different sizes. It often is insightful to compare a firm to the best performing firm in its industry (benchmarking). A firm also can be compared to its industry as a whole. To compare to the industry, the ratios are calculated for each firm in the industry and an average for the industry is calculated. Comparative statements then may be constructed with the company of interest in one column and the industry averages in another. The result is a quick overview of where the firm stands in the industry with respect to key items on the financial statements.

Limitations

As with financial statements in general, the interpretation of common size statements is subject to many of the limitations in the accounting data used to construct them. For example:

Different accounting policies may be used by different firms or within the same firm at different points in time. Adjustments should be made for such differences.
Different firms may use different accounting calendars, so the accounting periods may not be directly comparable.

Financial Ratios

Financial ratios are useful indicators of a firm's performance and financial situation. Most ratios can be calculated from information provided by the financial statements. Financial ratios can be used to analyze trends and to compare the firm's financials to those of other firms. In some cases, ratio analysis can predict future bankruptcy. Financial ratios can be classified according to the information they provide. The following types of ratios frequently are used:

Liquidity ratios
Asset turnover ratios
Financial leverage ratios
Profitability ratios
Dividend policy ratios

Liquidity Ratios

Liquidity ratios provide information about a firm's ability to meet its short-term financial obligations. They are of particular interest to those extending short-term credit to the firm. Two frequently-used liquidity ratios are the current ratio (or working capital ratio) and the quick ratio. The current ratio is the ratio of current assets to current liabilities:

Current Ratio = Current Assets / Current Liabilities

Short-term creditors prefer a high current ratio since it reduces their risk. Shareholders may prefer a lower current ratio so that more of the firm's assets are working to grow the business. Typical values for the current ratio vary by firm and industry. For example, firms in cyclical industries may maintain a higher current ratio in order to remain solvent during downturns. One drawback of the current ratio is that inventory may include many items that are difficult to liquidate quickly and that have uncertain liquidation values. The quick ratio is an alternative measure of liquidity that does not include inventory in the current assets. The quick ratio is defined as follows:

Quick Ratio = ( Current Assets - Inventory ) / Current Liabilities

The current assets used in the quick ratio are cash, accounts receivable, and notes receivable. These assets essentially are current assets less inventory. The quick ratio often is referred to as the acid test. Finally, the cash ratio is the most conservative liquidity ratio. It excludes all current assets except the most liquid: cash and cash equivalents. The cash ratio is defined as follows:

Cash Ratio = ( Cash + Marketable Securities ) / Current Liabilities

The cash ratio is an indication of the firm's ability to pay off its current liabilities if for some reason immediate payment were demanded.

Asset Turnover Ratios

Asset turnover ratios indicate how efficiently the firm utilizes its assets. They sometimes are referred to as efficiency ratios, asset utilization ratios, or asset management ratios. Two commonly used asset turnover ratios are receivables turnover and inventory turnover. Receivables turnover is an indication of how quickly the firm collects its accounts receivables and is defined as follows:

Receivables Turnover = Annual Credit Sales / Accounts Receivable

The receivables turnover often is reported in terms of the number of days that credit sales remain in accounts receivable before they are collected. This number is known as the collection period. It is the accounts receivable balance divided by the average daily credit sales, calculated as follows:

Average Collection Period = Accounts Receivable / ( Annual Credit Sales / 365 )

The collection period also can be written as:

Average Collection Period = 365 / Receivables Turnover

Another major asset turnover ratio is inventory turnover. It is the cost of goods sold in a time period divided by the average inventory level during that period:

Inventory Turnover = Cost of Goods Sold / Average Inventory

The inventory turnover often is reported as the inventory period, which is the number of days worth of inventory on hand, calculated by dividing the inventory by the average daily cost of goods sold:

Inventory Period = Average Inventory / ( Annual Cost of Goods Sold / 365 )

The inventory period also can be written as:

Inventory Period = 365 / Inventory Turnover
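Using the sample financial statement figures from the common size section, the period ratios can be sketched as follows (assuming, for illustration only, that all revenue is credit sales and that the year-end balances approximate the averages):

```python
def collection_period(accounts_receivable, annual_credit_sales):
    """Days of credit sales sitting in accounts receivable."""
    return accounts_receivable / (annual_credit_sales / 365)

def inventory_period(avg_inventory, annual_cogs):
    """Days' worth of inventory on hand."""
    return avg_inventory / (annual_cogs / 365)

# Receivables 14,378 against revenue 70,134;
# inventory 17,136 against cost of goods sold 44,221.
print(round(collection_period(14378, 70134), 1))  # 74.8 days
print(round(inventory_period(17136, 44221), 1))   # 141.4 days
```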

Other asset turnover ratios include fixed asset turnover and total asset turnover.

Financial Leverage Ratios

Financial leverage ratios provide an indication of the long-term solvency of the firm. Unlike liquidity ratios that are concerned with short-term assets and liabilities, financial leverage ratios measure the extent to which the firm is using long term debt. The debt ratio is defined as total debt divided by total assets:

Debt Ratio = Total Debt / Total Assets

The debt-to-equity ratio is total debt divided by total equity:

Debt-to-Equity Ratio = Total Debt / Total Equity

Debt ratios depend on the classification of long-term leases and on the classification of some items as long-term debt or equity. The times interest earned ratio indicates how well the firm's earnings can cover the interest payments on its debt. This ratio also is known as the interest coverage and is calculated as follows:

Interest Coverage = EBIT / Interest Charges

where EBIT = Earnings Before Interest and Taxes

Profitability Ratios

Profitability ratios offer several different measures of the success of the firm at generating profits. The gross profit margin is a measure of the gross profit earned on sales. The gross profit margin considers the firm's cost of goods sold, but does not include other costs. It is defined as follows:

Gross Profit Margin = ( Sales - Cost of Goods Sold ) / Sales

Return on assets is a measure of how effectively the firm's assets are being used to generate profits. It is defined as:

Return on Assets = Net Income / Total Assets

Return on equity is the bottom line measure for the shareholders, measuring the profits earned for each dollar invested in the firm's stock. Return on equity is defined as follows:

Return on Equity = Net Income / Shareholder Equity

Dividend Policy Ratios

Dividend policy ratios provide insight into the dividend policy of the firm and the prospects for future growth. Two commonly used ratios are the dividend yield and payout ratio. The dividend yield is defined as follows:

Dividend Yield = Dividends Per Share / Share Price

A high dividend yield does not necessarily translate into a high future rate of return. It is important to consider the prospects for continuing and increasing the dividend in the future. The dividend payout ratio is helpful in this regard, and is defined as follows:

Payout Ratio = Dividends Per Share / Earnings Per Share

Use and Limitations of Financial Ratios

Attention should be given to the following issues when using financial ratios:

A reference point is needed. To be meaningful, most ratios must be compared to historical values of the same firm, the firm's forecasts, or ratios of similar firms.
Most ratios by themselves are not highly meaningful. They should be viewed as indicators, with several of them combined to paint a picture of the firm's situation.
Year-end values may not be representative. Certain account balances that are used to calculate ratios may increase or decrease at the end of the accounting period because of seasonal factors. Such changes may distort the value of the ratio. Average values should be used when they are available.
Ratios are subject to the limitations of accounting methods. Different accounting choices may result in significantly different ratio values.


Frederick Taylor and Scientific Management

In 1911, Frederick Winslow Taylor published his work, The Principles of Scientific Management, in which he described how the application of the scientific method to the management of workers greatly could improve productivity. Scientific management methods called for optimizing the way that tasks were performed and simplifying the jobs enough so that workers could be trained to perform their specialized sequence of motions in the one "best" way.

Prior to scientific management, work was performed by skilled craftsmen who had learned their jobs in lengthy apprenticeships. They made their own decisions about how their job was to be performed. Scientific management took away much of this autonomy and converted skilled crafts into a series of simplified jobs that could be performed by unskilled workers who easily could be trained for the tasks. Taylor became interested in improving worker productivity early in his career when he observed gross inefficiencies during his contact with steel workers.

Soldiering

Working in the steel industry, Taylor had observed the phenomenon of workers' purposely operating well below their capacity, that is, soldiering. He attributed soldiering to three causes:

1. The almost universally held belief among workers that if they became more productive, fewer of them would be needed and jobs would be eliminated.
2. Non-incentive wage systems encourage low productivity if the employee will receive the same pay regardless of how much is produced, assuming the employee can convince the employer that the slow pace really is a good pace for the job. Employees take great care never to work at a good pace for fear that this faster pace would become the new standard. If employees are paid by the quantity they produce, they fear that management will decrease their per-unit pay if the quantity increases.

3. Workers waste much of their effort by relying on rule-of-thumb methods rather than on optimal work methods that can be determined by scientific study of the task.

To counter soldiering and to improve efficiency, Taylor began to conduct experiments to determine the best level of performance for certain jobs, and what was necessary to achieve this performance.

Time Studies

Taylor argued that even the most basic, mindless tasks could be planned in a way that dramatically would increase productivity, and that scientific management of the work was more effective than the "initiative and incentive" method of motivating workers. The initiative and incentive method offered an incentive to increase productivity but placed the responsibility on the worker to figure out how to do it. To scientifically determine the optimal way to perform a job, Taylor performed experiments that he called time studies (also known as time and motion studies). These studies were characterized by the use of a stopwatch to time a worker's sequence of motions, with the goal of determining the one best way to perform a job. The following are examples of some of the time-and-motion studies that were performed by Taylor and others in the era of scientific management.

Pig Iron

If workers were moving 12 1/2 tons of pig iron per day and they could be incentivized to try to move 47 1/2 tons per day, left to their own wits they probably would become exhausted after a few hours and fail to reach their goal. However, by first conducting experiments to determine the amount of resting that was necessary, the worker's manager could determine the optimal timing of lifting and resting so that the worker could move the 47 1/2 tons per day without tiring. Not all workers were physically capable of moving 47 1/2 tons per day; perhaps only 1/8 of the pig iron handlers were capable of doing so. While these 1/8 were not extraordinary people who were highly prized by society, their physical capabilities were well-suited to moving pig iron. This example suggests that workers should be selected according to how well they are suited for a particular job.

The Science of Shoveling

In another study of the "science of shoveling", Taylor ran time studies to determine that the optimal weight that a worker should lift in a shovel was 21 pounds. Since there is a wide range of densities of materials, the shovel should be sized so that it would hold 21 pounds of the substance being shoveled. The firm provided the workers with optimal shovels. The result was a three to four fold increase in productivity, and workers were rewarded with pay increases. Prior to scientific management, workers used their own shovels and rarely had the optimal one for the job.

Bricklaying

Others performed experiments that focused on specific motions, such as Gilbreth's bricklaying experiments that resulted in a dramatic decrease in the number of motions required to lay bricks. The husband and wife Gilbreth team used motion picture technology to study the motions of the workers in some of their experiments.

Taylor's 4 Principles of Scientific Management

After years of various experiments to determine optimal work methods, Taylor proposed the following four principles of scientific management:

1. Replace rule-of-thumb work methods with methods based on a scientific study of the tasks.
2. Scientifically select, train, and develop each worker rather than passively leaving them to train themselves.
3. Cooperate with the workers to ensure that the scientifically developed methods are being followed.
4. Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks.

These principles were implemented in many factories, often increasing productivity by a factor of three or more. Henry Ford applied Taylor's principles in his automobile factories, and families even began to perform their household tasks based on the results of time and motion studies.

Drawbacks of Scientific Management

While scientific management principles improved productivity and had a substantial impact on industry, they also increased the monotony of work. The core job dimensions of skill variety, task identity, task significance, autonomy, and feedback all were missing from the picture of scientific management. While in many cases the new ways of working were accepted by the workers, in some cases they were not. The use of stopwatches often was protested and led to a strike at one factory where "Taylorism" was being tested. Complaints that Taylorism was dehumanizing led to an investigation by the United States Congress. Despite its controversy, scientific management changed the way that work was done, and forms of it continue to be used today.

Maslow's Hierarchy of Needs

If motivation is driven by the existence of unsatisfied needs, then it is worthwhile for a manager to understand which needs are the more important for individual employees. In this regard, Abraham Maslow developed a model in which basic, low-level needs such as physiological requirements and safety must be satisfied before higher-level needs such as self-fulfillment are pursued. In this hierarchical model, when a need is mostly satisfied it no longer motivates and the next higher need takes its place. Maslow's hierarchy of needs is shown in the following diagram:

[Pyramid diagram: Self-Actualization at the top, then Esteem Needs, Social Needs, Safety Needs, and Physiological Needs at the base.]

Physiological Needs

Physiological needs are those required to sustain life, such as:

air
water
nourishment
sleep

According to Maslow's theory, if such needs are not satisfied then one's motivation will arise from the quest to satisfy them. Higher needs such as social needs and esteem are not felt until one has met the needs basic to one's bodily functioning.

Safety Needs

Once physiological needs are met, one's attention turns to safety and security in order to be free from the threat of physical and emotional harm. Such needs might be fulfilled by:

Living in a safe area
Medical insurance
Job security
Financial reserves

According to Maslow's hierarchy, if a person feels that he or she is in harm's way, higher needs will not receive much attention.
Social Needs

Once a person has met the lower level physiological and safety needs, higher level needs become important, the first of which are social needs. Social needs are those related to interaction with other people and may include:

Need for friends
Need for belonging
Need to give and receive love


Esteem Needs

Once a person feels a sense of "belonging", the need to feel important arises. Esteem needs may be classified as internal or external. Internal esteem needs are those related to self-esteem, such as self-respect and achievement. External esteem needs are those such as social status and recognition. Some esteem needs are:

Self-respect
Achievement
Attention
Recognition
Reputation

Maslow later refined his model to include a level between esteem needs and self-actualization: the need for knowledge and aesthetics.

Self-Actualization

Self-actualization is the summit of Maslow's hierarchy of needs. It is the quest of reaching one's full potential as a person. Unlike lower-level needs, this need is never fully satisfied; as one grows psychologically there are always new opportunities to continue to grow. Self-actualized people tend to have needs such as:

Truth
Justice
Wisdom
Meaning

Self-actualized persons have frequent occurrences of peak experiences, which are energized moments of profound happiness and harmony. According to Maslow, only a small percentage of the population reaches the level of self-actualization.
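The hierarchical logic described above, in which the lowest need level that is not yet mostly satisfied is the one that currently motivates, can be sketched in a few lines of code. This is only an illustration: the level names follow the hierarchy above, while the numeric satisfaction scores and the 0.8 threshold for "mostly satisfied" are assumptions invented for the example.

```python
# Sketch of Maslow's rule: the lowest need level that is not yet
# mostly satisfied is the one that currently motivates.

HIERARCHY = [
    "physiological",
    "safety",
    "social",
    "esteem",
    "self-actualization",
]

def current_motivator(satisfaction, threshold=0.8):
    """Return the lowest need level whose satisfaction is below the threshold.

    `satisfaction` maps each level name to a value in [0, 1];
    missing levels are treated as completely unsatisfied.
    """
    for level in HIERARCHY:
        if satisfaction.get(level, 0.0) < threshold:
            return level
    return "self-actualization"  # never fully satisfied in Maslow's model

# A worker whose basic needs are met but who lacks a sense of belonging:
levels = {"physiological": 1.0, "safety": 0.9, "social": 0.3}
print(current_motivator(levels))  # prints: social
```

Because self-actualization is never completely satisfied, the function falls through to it when every lower level is mostly met, mirroring the open-ended summit of the model.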

Implications for Management

If Maslow's theory holds, there are some important implications for management. There are opportunities to motivate employees through management style, job design, company events, and compensation packages, some examples of which follow:

Physiological needs: Provide lunch breaks, rest breaks, and wages that are sufficient to purchase the essentials of life.
Safety needs: Provide a safe working environment, retirement benefits, and job security.
Social needs: Create a sense of community via team-based projects and social events.
Esteem needs: Recognize achievements to make employees feel appreciated and valued. Offer job titles that convey the importance of the position.
Self-actualization: Provide employees a challenge and the opportunity to reach their full career potential.

However, not all people are driven by the same needs - at any time different people may be motivated by entirely different factors. It is important to understand the needs being pursued by each employee. To motivate an employee, the manager must be able to recognize the needs level at which the employee is operating, and use those needs as levers of motivation.

Limitations of Maslow's Hierarchy

While Maslow's hierarchy makes sense from an intuitive standpoint, there is little evidence to support its hierarchical aspect. In fact, there is evidence that contradicts the order of needs specified by the model. For example, some cultures appear to place social needs before any others. Maslow's hierarchy also has difficulty explaining cases such as the "starving artist", in which a person neglects lower needs in pursuit of higher ones. Finally, there is little evidence to suggest that people are motivated to satisfy only one need level at a time, except in situations where there is a conflict between needs. Even though Maslow's hierarchy lacks scientific support, it is quite well-known and is the first theory of motivation to which many people are exposed. To address some of the issues of Maslow's theory, Clayton Alderfer developed the ERG theory, a needs-based model that is more consistent with empirical findings.

ERG Theory

To address some of the limitations of Maslow's hierarchy as a theory of motivation, Clayton Alderfer proposed the ERG theory, which, like Maslow's theory, describes needs as a hierarchy. The letters ERG stand for three levels of needs: Existence, Relatedness, and Growth. The ERG theory is based on the work of Maslow, so it has much in common with it but also differs in some important aspects.

Similarities to Maslow's Hierarchy

Studies had shown that the middle levels of Maslow's hierarchy have some overlap; Alderfer addressed this issue by reducing the number of levels to three. The ERG needs can be mapped to those of Maslow's theory as follows:

Existence: Physiological and safety needs
Relatedness: Social and external esteem needs
Growth: Self-actualization and internal esteem needs

Like Maslow's model, the ERG theory is hierarchical - existence needs have priority over relatedness needs, which have priority over growth needs.

Differences from Maslow's Hierarchy

In addition to the reduction in the number of levels, the ERG theory differs from Maslow's in the following three ways:

Unlike Maslow's hierarchy, the ERG theory allows different levels of needs to be pursued simultaneously.
The ERG theory allows the order of the needs to be different for different people.
The ERG theory acknowledges that if a higher-level need remains unfulfilled, the person may regress to lower-level needs that appear easier to satisfy. This is known as the frustration-regression principle.

Thus, while the ERG theory presents a model of progressive needs, the hierarchical aspect is not rigid. This flexibility allows the ERG theory to account for a wider range of observed behaviors. For example, it can explain the "starving artist" who may place growth needs above existence ones.

Implications for Management

If the ERG theory holds, then unlike with Maslow's theory, managers must recognize that an employee has multiple needs to satisfy simultaneously. Furthermore, if growth opportunities are not provided to employees, they may regress to relatedness needs. If the manager is able to recognize this situation, then steps can be taken to concentrate on relatedness needs until the subordinate is able to pursue growth again.
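The frustration-regression principle lends itself to a small sketch. The following is a hypothetical illustration rather than part of Alderfer's formulation: need levels are represented as strings, several levels can be active at once, and frustration at one level simply re-activates the level below it.

```python
# Sketch of ERG's frustration-regression principle: when a higher-level
# need stays frustrated, attention regresses to the level below it.

ERG_LEVELS = ["existence", "relatedness", "growth"]

def active_needs(unmet, frustrated):
    """Return the set of need levels a person may pursue at once.

    `unmet` is the set of levels not yet satisfied; `frustrated` is the
    set of levels the person has been unable to fulfill. Unlike Maslow's
    strict hierarchy, several levels can be active simultaneously.
    """
    active = set(unmet)
    for level in frustrated:
        idx = ERG_LEVELS.index(level)
        if idx > 0:
            active.add(ERG_LEVELS[idx - 1])  # regress to the lower level
    return active

# Growth needs are unmet and frustrated, so relatedness becomes active again:
print(sorted(active_needs({"growth"}, {"growth"})))  # prints: ['growth', 'relatedness']
```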

Herzberg's Motivation-Hygiene Theory (Two Factor Theory)

To better understand employee attitudes and motivation, Frederick Herzberg performed studies to determine which factors in an employee's work environment caused satisfaction or dissatisfaction. He published his findings in the 1959 book The Motivation to Work. The studies included interviews in which employees were asked what pleased and displeased them about their work. Herzberg found that the factors causing job satisfaction (and presumably motivation) were different from those causing job dissatisfaction. He developed the motivation-hygiene theory to explain these results. He called the satisfiers motivators and the dissatisfiers hygiene factors, using the term "hygiene" in the sense that they are considered maintenance factors that are necessary to avoid dissatisfaction but that by themselves do not provide satisfaction. The following table presents the top six factors causing dissatisfaction and the top six factors causing satisfaction, listed in order from higher to lower importance.

Factors Affecting Job Attitudes

Leading to Dissatisfaction    Leading to Satisfaction
Company policy                Achievement
Supervision                   Recognition
Relationship w/Boss           Work itself
Work conditions               Responsibility
Salary                        Advancement
Relationship w/Peers          Growth

Herzberg reasoned that because the factors causing satisfaction are different from those causing dissatisfaction, the two feelings cannot simply be treated as opposites of one another. The opposite of satisfaction is not dissatisfaction, but rather, no satisfaction. Similarly, the opposite of dissatisfaction is no dissatisfaction. While at first glance this distinction between the two opposites may sound like a play on words, Herzberg argued that two distinct human needs are portrayed. First, there are physiological needs that can be fulfilled by money, for example, to purchase food and shelter. Second, there is the psychological need to achieve and grow, and this need is fulfilled by activities that cause one to grow. From the above table of results, one observes that the factors that determine whether there is dissatisfaction or no dissatisfaction are not part of the work itself, but rather are external factors. Herzberg often referred to these hygiene factors as "KITA" factors, where KITA is an acronym for Kick In The A..., the process of providing incentives or a threat of punishment to cause someone to do something. Herzberg argues that these provide only short-run success, because the motivator factors that determine whether there is satisfaction or no satisfaction are intrinsic to the job itself and do not result from carrot-and-stick incentives.

Implications for Management

If the motivation-hygiene theory holds, management not only must provide hygiene factors to avoid employee dissatisfaction, but also must provide factors intrinsic to the work itself in order for employees to be satisfied with their jobs. Herzberg argued that job enrichment is required for intrinsic motivation, and that it is a continuous management process. According to Herzberg:

The job should have sufficient challenge to utilize the full ability of the employee.
Employees who demonstrate increasing levels of ability should be given increasing levels of responsibility.
If a job cannot be designed to use an employee's full abilities, then the firm should consider automating the task or replacing the employee with one who has a lower level of skill. If a person cannot be fully utilized, then there will be a motivation problem.

Critics of Herzberg's theory argue that the two-factor result is observed because it is natural for people to take credit for satisfaction and to blame dissatisfaction on external factors. Furthermore, job satisfaction does not necessarily imply a high level of motivation or productivity. Herzberg's theory has been broadly read and despite its weaknesses its enduring value is that it recognizes that true motivation comes from within a person and not from KITA factors.
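Herzberg's claim that satisfaction and dissatisfaction vary independently can be restated as a tiny sketch with two separate dimensions rather than opposite ends of one scale. The function name and boolean inputs here are assumptions made purely for illustration.

```python
# Sketch of Herzberg's two-factor idea: satisfaction (driven by motivators)
# and dissatisfaction (driven by hygiene factors) are independent scales.

def job_attitude(motivators_present, hygiene_present):
    """Classify a job along Herzberg's two independent dimensions."""
    satisfaction = "satisfaction" if motivators_present else "no satisfaction"
    dissatisfaction = "no dissatisfaction" if hygiene_present else "dissatisfaction"
    return satisfaction, dissatisfaction

# Good pay and conditions but dull work: not dissatisfied, yet not satisfied.
print(job_attitude(motivators_present=False, hygiene_present=True))
```

Note that no setting of the hygiene input alone can produce satisfaction, which is exactly the point of the theory: hygiene factors can only remove dissatisfaction.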

McClelland's Theory of Needs

In his acquired-needs theory, David McClelland proposed that an individual's specific needs are acquired over time and are shaped by one's life experiences. Most of these needs can be classed as either achievement, affiliation, or power. A person's motivation and effectiveness in certain job functions are influenced by these three needs. McClelland's theory sometimes is referred to as the three-need theory or as the learned-needs theory.

Achievement

People with a high need for achievement (nAch) seek to excel and thus tend to avoid both low-risk and high-risk situations. Achievers avoid low-risk situations because the easily attained success is not a genuine achievement. In high-risk projects, achievers see the outcome as one of chance rather than one's own effort. High nAch individuals prefer work that has a moderate probability of success, ideally a 50% chance. Achievers need regular feedback in order to monitor the progress of their achievements. They prefer either to work alone or with other high achievers.

Affiliation

Those with a high need for affiliation (nAff) need harmonious relationships with other people and need to feel accepted by other people. They tend to conform to the norms of their work group. High nAff individuals prefer work that provides significant personal interaction. They perform well in customer service and client interaction situations.

Power

A person's need for power (nPow) can be one of two types - personal and institutional. Those who need personal power want to direct others, and this need often is perceived as undesirable. Persons who need institutional power (also known as social power) want to organize the efforts of others to further the goals of the organization. Managers with a high need for institutional power tend to be more effective than those with a high need for personal power.

Thematic Apperception Test

McClelland used the Thematic Apperception Test (TAT) as a tool to measure the individual needs of different people. The TAT is a test of imagination that presents the subject with a series of ambiguous pictures, and the subject is asked to develop a spontaneous story for each picture. The assumption is that the subject will project his or her own needs into the story. Psychologists have developed fairly reliable scoring techniques for the Thematic Apperception Test. The test determines the individual's score for each of the needs of achievement, affiliation, and power.
This score can be used to suggest the types of jobs for which the person might be well suited.

Implications for Management

People with different needs are motivated differently.

High need for achievement - High achievers should be given challenging projects with reachable goals. They should be provided frequent feedback. While money is not an important motivator, it is an effective form of feedback.
High need for affiliation - Employees with a high affiliation need perform best in a cooperative environment.
High need for power - Management should provide power seekers the opportunity to manage others.

Note that McClelland's theory allows for the shaping of a person's needs; training programs can be used to modify one's need profile.

Theory X and Theory Y

In his 1960 book, The Human Side of Enterprise, Douglas McGregor proposed two theories by which to view employee motivation. He avoided descriptive labels and simply called the theories Theory X and Theory Y. Both of these theories begin with the premise that management's role is to assemble the factors of production, including people, for the economic benefit of the firm. Beyond this point, the two theories of management diverge.

Theory X

Theory X assumes that the average person:

Dislikes work and attempts to avoid it.
Has no ambition, wants no responsibility, and would rather follow than lead.
Is self-centered and therefore does not care about organizational goals.
Resists change.
Is gullible and not particularly intelligent.

Essentially, Theory X assumes that people work only for money and security.
Theory X - The Hard Approach and Soft Approach

Under Theory X, management approaches can range from a hard approach to a soft approach. The hard approach relies on coercion, implicit threats, close supervision, and tight controls, essentially an environment of command and control. The soft approach is to be permissive and seek harmony, with the hope that in return employees will cooperate when asked to do so. However, neither of these extremes is optimal. The hard approach results in hostility, purposely low output, and hard-line union demands. The soft approach results in ever-increasing requests for more rewards in exchange for ever-decreasing work output. The optimal management approach under Theory X probably would be somewhere between these extremes. However, McGregor asserts that neither approach is appropriate, because the assumptions of Theory X are not correct.
The Problem with Theory X

Drawing on Maslow's hierarchy, McGregor argues that a satisfied need no longer motivates. Under Theory X the firm relies on money and benefits to satisfy employees' lower needs, and once those needs are satisfied the source of motivation is lost. Theory X management styles in fact hinder the satisfaction of higher-level needs. Consequently, the only way that employees can attempt to satisfy their higher-level needs in their work is by seeking more compensation, so it is quite predictable that they will focus on monetary rewards. While money may not be the most effective way to self-fulfillment, in a Theory X environment it may be the only way. Under Theory X, people use work to satisfy their lower needs, and seek to satisfy their higher needs in their leisure time. But it is in satisfying their higher needs that employees can be most productive. McGregor makes the point that a command and control environment is not effective because it relies on lower needs as levers of motivation, but in modern society those needs already are satisfied and thus no longer are motivators. In this situation, one would expect employees to dislike their work, avoid responsibility, have no interest in organizational goals, resist change, etc., thus making Theory X a self-fulfilling prophecy. From this reasoning, McGregor proposed an alternative: Theory Y.

Theory Y

The higher-level needs of esteem and self-actualization are continuing needs in that they are never completely satisfied. As such, it is these higher-level needs through which employees can best be motivated. Theory Y makes the following general assumptions:

Work can be as natural as play and rest.
People will be self-directed to meet their work objectives if they are committed to them.
People will be committed to their objectives if rewards are in place that address higher needs such as self-fulfillment.
Under these conditions, people will seek responsibility.

Most people can handle responsibility because creativity and ingenuity are common in the population.

Under these assumptions, there is an opportunity to align personal goals with organizational goals by using the employee's own quest for fulfillment as the motivator. McGregor stressed that Theory Y management does not imply a soft approach. McGregor recognized that some people may not have reached the level of maturity assumed by Theory Y and therefore may need tighter controls that can be relaxed as the employee develops.
Theory Y Management Implications

If Theory Y holds, the firm can do many things to harness the motivational energy of its employees:

Decentralization and Delegation - If firms decentralize control and reduce the number of levels of management, each manager will have more subordinates and consequently will be forced to delegate some responsibility and decision making to them.
Job Enlargement - Broadening the scope of an employee's job adds variety and opportunities to satisfy ego needs.
Participative Management - Consulting employees in the decision-making process taps their creative capacity and provides them with some control over their work environment.
Performance Appraisals - Having the employee set objectives and participate in the process of evaluating how well they were met.

If properly implemented, such an environment would result in a high level of motivation as employees work to satisfy their higher level personal needs through their jobs.


The Marketing Concept

The marketing concept is the philosophy that firms should analyze the needs of their customers and then make decisions to satisfy those needs better than the competition. Today most firms have adopted the marketing concept, but this has not always been the case. In 1776 in The Wealth of Nations, Adam Smith wrote that the needs of producers should be considered only with regard to meeting the needs of consumers. While this philosophy is consistent with the marketing concept, it would not be adopted widely until nearly 200 years later. To better understand the marketing concept, it is worthwhile to put it in perspective by reviewing other philosophies that once were predominant. While these alternative concepts prevailed during different historical time frames, they are not restricted to those periods and are still practiced by some firms today.

The Production Concept

The production concept prevailed from the time of the industrial revolution until the early 1920's. The production concept was the idea that a firm should focus on those products that it could produce most efficiently and that the creation of a supply of low-cost products would in and of itself create the demand for the products. The key questions that a firm would ask before producing a product were:

Can we produce the product?
Can we produce enough of it?

At the time, the production concept worked fairly well because the goods that were produced were largely those of basic necessity and there was a relatively high level of unfulfilled demand. Virtually everything that could be produced was sold easily by a sales team whose job it was simply to execute transactions at a price determined by the cost of production. The production concept prevailed into the late 1920's.

The Sales Concept

By the early 1930's, however, mass production had become commonplace, competition had increased, and there was little unfulfilled demand. Around this time, firms began to practice the sales concept (or selling concept), under which companies not only would produce the products, but also would try to convince customers to buy them through advertising and personal selling. Before producing a product, the key questions were:

Can we sell the product?
Can we charge enough for it?

The sales concept paid little attention to whether the product actually was needed; the goal simply was to beat the competition to the sale, with little regard to customer satisfaction. Marketing was a function that was performed after the product was developed and produced, and many people came to associate marketing with hard selling. Even today, many people use the word "marketing" when they really mean sales.

The Marketing Concept

After World War II, the variety of products increased and hard selling no longer could be relied upon to generate sales. With increased discretionary income, customers could afford to be selective and buy only those products that precisely met their changing needs, and these needs were not immediately obvious. The key questions became:

What do customers want?
Can we develop it while they still want it?
How can we keep our customers satisfied?

In response to these discerning customers, firms began to adopt the marketing concept, which involves:

Focusing on customer needs before developing the product
Aligning all functions of the company to focus on those needs
Realizing a profit by successfully satisfying customer needs over the long-term

When firms first began to adopt the marketing concept, they typically set up separate marketing departments whose objective it was to satisfy customer needs. Often these departments were sales departments with expanded responsibilities. While this expanded sales department structure can be found in some companies today, many firms have structured themselves into marketing organizations having a company-wide customer focus. Since the entire organization exists to satisfy customer needs, nobody can neglect a customer issue by declaring it a "marketing problem" - everybody must be concerned with customer satisfaction. The marketing concept relies upon marketing research to define market segments, their size, and their needs. To satisfy those needs, the marketing team makes decisions about the controllable parameters of the marketing mix.

The Marketing Process

Under the marketing concept, the firm must find a way to discover unfulfilled customer needs and bring to market products that satisfy those needs. The process of doing so can be modeled in a sequence of steps: the situation is analyzed to identify opportunities, the strategy is formulated for a value proposition, tactical decisions are made, the plan is implemented and the results are monitored.

The Marketing Process

Situation Analysis
        |
        v
Marketing Strategy
        |
        v
Marketing Mix Decisions
        |
        v
Implementation & Control

I. Situation Analysis

A thorough analysis of the situation in which the firm finds itself serves as the basis for identifying opportunities to satisfy unfulfilled customer needs. In addition to identifying the customer needs, the firm must understand its own capabilities and the environment in which it is operating. The situation analysis thus can be viewed in terms of an analysis of the external environment and an internal analysis of the firm itself. The external environment can be described in terms of macro-environmental factors that broadly affect many firms and micro-environmental factors closely related to the specific situation of the firm. The situation analysis should include past, present, and future aspects. It should include a history outlining how the situation evolved to its present state, and an analysis of trends in order to forecast where it is going. Good forecasting can reduce the chance of spending a year bringing a product to market only to find that the need no longer exists. If the situation analysis reveals gaps between what consumers want and what currently is offered to them, then there may be opportunities to introduce products to better satisfy those consumers. Hence, the situation analysis should yield a summary of problems and opportunities. From this summary, the firm can match its own capabilities with the opportunities in order to satisfy customer needs better than the competition. There are several frameworks that can be used to add structure to the situation analysis:

5 C Analysis - company, customers, competitors, collaborators, climate. Company represents the internal situation; the other four cover aspects of the external situation.
PEST analysis - for macro-environmental political, economic, societal, and technological factors. A PEST analysis can be used as the "climate" portion of the 5 C framework.
SWOT analysis - strengths, weaknesses, opportunities, and threats - for the internal and external situation. A SWOT analysis can be used to condense the situation analysis into a listing of the most relevant problems and opportunities and to assess how well the firm is equipped to deal with them.

II. Marketing Strategy

Once the best opportunity to satisfy unfulfilled customer needs is identified, a strategic plan for pursuing the opportunity can be developed. Market research will provide specific market information that will permit the firm to select the target market segment and optimally position the offering within that segment. The result is a value proposition to the target market. The marketing strategy then involves:

Segmentation
Targeting (target market selection)
Positioning the product within the target market


III. Marketing Mix Decisions

Detailed tactical decisions then are made for the controllable parameters of the marketing mix. The action items include:

Product development - specifying, designing, and producing the first units of the product.
Pricing decisions
Distribution contracts
Promotional campaign development

IV. Implementation and Control

At this point in the process, the marketing plan has been developed and the product has been launched. Given that few environments are static, the results of the marketing effort should be monitored closely. As the market changes, the marketing mix can be adjusted to accommodate the changes. Often, small changes in consumer wants can be addressed by changing the advertising message. As the changes become more significant, a product redesign or an entirely new product may be needed. The marketing process does not end with implementation - continual monitoring and adaptation are needed to fulfill customer needs consistently over the long-term.

Situation Analysis

In order to profitably satisfy customer needs, the firm first must understand its external and internal situation, including the customer, the market environment, and the firm's own capabilities. Furthermore, it needs to forecast trends in the dynamic environment in which it operates. A useful framework for performing a situation analysis is the 5 C Analysis. The 5 C analysis is an environmental scan of five key areas especially applicable to marketing decisions. It covers the internal, the micro-environmental, and the macro-environmental situation. The 5 C analysis is an extension of the 3 C analysis (company, customers, and competitors), to which some marketers added the 4th C of collaborators. The further addition of a macro-environmental analysis (climate) results in a 5 C analysis, some aspects of which are outlined below.

Company

Product line
Image in the market
Technology and experience
Culture
Goals


Collaborators

Distributors
Suppliers
Alliances


Customers

Market size and growth
Market segments
Benefits that the consumer is seeking, tangible and intangible
Motivation behind purchase; value drivers, benefits vs. costs
Decision maker or decision-making unit

Retail channel - where does the consumer actually purchase the product?
Consumer information sources - where does the customer obtain information about the product?
Buying process; e.g. impulse or careful comparison
Frequency of purchase, seasonal factors
Quantity purchased at a time
Trends - how consumer needs and preferences change over time


Competitors

Actual or potential
Direct or indirect
Products
Positioning
Market shares
Strengths and weaknesses of competitors

Climate (or context)

The climate or macro-environmental factors are:

Political & regulatory environment - governmental policies and regulations that affect the market
Economic environment - business cycle, inflation rate, interest rates, and other macroeconomic issues
Social/Cultural environment - society's trends and fashions
Technological environment - new knowledge that makes possible new ways of satisfying needs; the impact of technology on the demand for existing products

The analysis of these four external "climate" factors often is referred to as a PEST analysis.

Information Sources

Customer and competitor information specifically oriented toward marketing decisions can be found in market research reports, which provide a market analysis for a particular industry. For foreign markets, country reports can be used as a general information source for the macro-environment. By combining the regional and market analysis with knowledge of the firm's own capabilities and partnerships, the firm can identify and select the more favorable opportunities to provide value to the customer.

Market Definition

In marketing, the term market refers to the group of consumers or organizations that is interested in the product, has the resources to purchase the product, and is permitted by law and other regulations to acquire the product. The market definition begins with the total population and progressively narrows as shown in the following diagram.

[Diagram: Market Definition Conceptual Diagram]

Beginning with the total population, various terms are used to describe the market based on the level of narrowing:

Total population
Potential market - those in the total population who have interest in acquiring the product.
Available market - those in the potential market who have enough money to buy the product.
Qualified available market - those in the available market who legally are permitted to buy the product.
Target market - the segment of the qualified available market that the firm has decided to serve (the served market).
Penetrated market - those in the target market who have purchased the product.

In the above listing, "product" refers to both physical products and services. The size of the market is not necessarily fixed. For example, the size of the available market for a product can be increased by decreasing the product's price, and the size of the qualified available market can be increased through changes in legislation that result in fewer restrictions on who can buy the product. Defining the market is the first step in analyzing it. Since the market is likely to be composed of consumers whose needs differ, market segmentation is useful in order to better understand those needs and to select the groups within the market that the firm will serve.
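To make the narrowing concrete, the funnel above can be sketched numerically. All figures below (the population size and the fraction retained at each level) are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical illustration of the market-definition funnel described above.
# All figures are invented; real values come from market research.

total_population = 1_000_000

# Each successive market is a fraction of the previous level.
potential_market = int(total_population * 0.40)     # 40% have interest in the product
available_market = int(potential_market * 0.50)     # 50% of those can afford it
qualified_available = int(available_market * 0.90)  # 90% of those may legally buy it
target_market = int(qualified_available * 0.30)     # the firm chooses to serve 30%
penetrated_market = int(target_market * 0.10)       # 10% have actually purchased

for name, size in [
    ("Total population", total_population),
    ("Potential market", potential_market),
    ("Available market", available_market),
    ("Qualified available market", qualified_available),
    ("Target market", target_market),
    ("Penetrated market", penetrated_market),
]:
    print(f"{name:<28}{size:>10,}")
```

Note how a change in one level propagates downward: lowering the price (enlarging the available market) enlarges every market below it in the funnel.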

Market Segmentation

Market segmentation is the identification of portions of the market that are different from one another. Segmentation allows the firm to better satisfy the needs of its potential customers.

The Need for Market Segmentation

The marketing concept calls for understanding customers and satisfying their needs better than the competition. But different customers have different needs, and it rarely is possible to satisfy all customers by treating them alike. Mass marketing refers to treatment of the market as a homogeneous group and offering the same marketing mix to all customers. Mass marketing allows economies of scale to be realized through mass production, mass distribution, and mass communication. The drawback of mass marketing is that customer needs and preferences differ and the same offering is unlikely to be viewed as optimal by all customers. If firms ignored the differing customer needs, another firm likely would enter the market with a product that serves a specific group, and the incumbent firms would lose those customers. Target marketing, on the other hand, recognizes the diversity of customers and does not try to please all of them with the same offering. The first step in target marketing is to identify different market segments and their needs.

Requirements of Market Segments

In addition to having different needs, for segments to be practical they should be evaluated against the following criteria:

Identifiable: the differentiating attributes of the segments must be measurable so that they can be identified.
Accessible: the segments must be reachable through communication and distribution channels.
Substantial: the segments should be sufficiently large to justify the resources required to target them.
Unique needs: to justify separate offerings, the segments must respond differently to the different marketing mixes.
Durable: the segments should be relatively stable to minimize the cost of frequent changes.

A good market segmentation will result in segment members that are internally homogeneous and externally heterogeneous; that is, as similar as possible within the segment, and as different as possible between segments.

Bases for Segmentation in Consumer Markets

Consumer markets can be segmented on the following customer characteristics.

Geographic
Demographic
Psychographic
Behavioralistic

Geographic Segmentation

The following are some examples of geographic variables often used in segmentation.

Region: by continent, country, state, or even neighborhood
Size of metropolitan area: segmented according to size of population
Population density: often classified as urban, suburban, or rural
Climate: according to weather patterns common to certain geographic regions

Demographic Segmentation

Some demographic segmentation variables include:

Age
Gender
Family size
Family lifecycle
Generation: baby-boomers, Generation X, etc.
Income
Occupation
Education
Ethnicity
Nationality
Religion
Social class

Many of these variables have standard categories for their values. For example, family lifecycle often is expressed as bachelor, married with no children (DINKS: Double Income, No Kids), full-nest, empty-nest, or solitary survivor. Some of these categories have several stages, for example, full-nest I, II, or III depending on the age of the children.

Psychographic Segmentation

Psychographic segmentation groups customers according to their lifestyle. Activities, interests, and opinions (AIO) surveys are one tool for measuring lifestyle. Some psychographic variables include:

Activities
Interests
Opinions
Attitudes
Values

Behavioralistic Segmentation

Behavioral segmentation is based on actual customer behavior toward products. Some behavioralistic variables include:

Benefits sought
Usage rate
Brand loyalty
User status: potential, first-time, regular, etc.
Readiness to buy
Occasions: holidays and events that stimulate purchases

Behavioral segmentation has the advantage of using variables that are closely related to the product itself. It is a fairly direct starting point for market segmentation.

Bases for Segmentation in Industrial Markets

In contrast to consumers, industrial customers tend to be fewer in number and purchase larger quantities. They evaluate offerings in more detail, and the decision process usually involves more than one person. These characteristics apply to organizations such as manufacturers and service providers, as well as resellers, governments, and institutions. Many of the consumer market segmentation variables can be applied to industrial markets. Industrial markets might be segmented on characteristics such as:

Location
Company type
Behavioral characteristics

Location

In industrial markets, customer location may be important in some cases. Shipping costs may be a purchase factor for vendor selection for products having a high bulk-to-value ratio, so distance from the vendor may be critical. In some industries firms tend to cluster together geographically and therefore may have similar needs within a region.

Company Type

Business customers can be classified according to type as follows:

Company size
Industry
Decision-making unit
Purchase criteria

Behavioral Characteristics

In industrial markets, patterns of purchase behavior can be a basis for segmentation. Such behavioral characteristics may include:

Usage rate
Buying status: potential, first-time, regular, etc.
Purchase procedure: sealed bids, negotiations, etc.

Market Analysis

The goal of a market analysis is to determine the attractiveness of a market and to understand its evolving opportunities and threats as they relate to the strengths and weaknesses of the firm. David A. Aaker outlined the following dimensions of a market analysis:

Market size (current and future)
Market growth rate
Market profitability
Industry cost structure
Distribution channels
Market trends
Key success factors

Market Size

The size of the market can be evaluated based on present sales and on potential sales if the use of the product were expanded. The following are some information sources for determining market size:

government data
trade associations
financial data from major players
customer surveys

Market Growth Rate

A simple means of forecasting the market growth rate is to extrapolate historical data into the future. While this method may provide a first-order estimate, it does not predict important turning points. A better method is to study growth drivers such as demographic information and sales growth in complementary products. Such drivers serve as leading indicators that are more accurate than simply extrapolating historical data. Important inflection points in the market growth rate sometimes can be predicted by constructing a product diffusion curve. The shape of the curve can be estimated by studying the characteristics of the adoption rate of a similar product in the past.
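The contrast between simple extrapolation and a diffusion curve can be sketched as follows. The sales history and the logistic-curve parameters (market ceiling, midpoint, steepness) are hypothetical values chosen purely for illustration:

```python
import math

# Hypothetical sales history (units per year); invented for illustration.
history = [120, 150, 188, 235]

# Naive first-order estimate: extrapolate the average historical growth rate.
growth_rates = [b / a for a, b in zip(history, history[1:])]
avg_growth = sum(growth_rates) / len(growth_rates)
naive_forecast = history[-1] * avg_growth
print(f"average historical growth: {avg_growth:.2f}x -> naive forecast {naive_forecast:.0f}")

# A logistic diffusion curve flattens as the market saturates, capturing the
# turning point that simple extrapolation misses. All parameters are assumed.
def logistic(t, ceiling=1000.0, midpoint=6.0, steepness=0.8):
    """Cumulative adopters at time t on an S-shaped adoption curve."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

for t in (2, 6, 10):
    print(f"t={t:>2}: cumulative adopters ~ {logistic(t):.0f}")
```

The extrapolation keeps compounding forever, whereas the logistic curve slows after the midpoint, which is where the inflection point in the growth rate occurs.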

Ultimately, the maturity and decline stages of the product life cycle will be reached. Some leading indicators of the decline phase include price pressure caused by competition, a decrease in brand loyalty, the emergence of substitute products, market saturation, and the lack of growth drivers.

Market Profitability

While different firms in a market will have different levels of profitability, the average profit potential for a market can be used as a guideline for knowing how difficult it is to make money in the market. Michael Porter devised a useful framework for evaluating the attractiveness of an industry or market. This framework, known as Porter's five forces, identifies five factors that influence the market profitability:

Buyer power
Supplier power
Barriers to entry
Threat of substitute products
Rivalry among firms in the industry

Industry Cost Structure

The cost structure is important for identifying key factors for success. To this end, Porter's value chain model is useful for determining where value is added and for isolating the costs. The cost structure also is helpful for formulating strategies to develop a competitive advantage. For example, in some environments the experience curve effect can be used to develop a cost advantage over competitors.

Distribution Channels

The following aspects of the distribution system are useful in a market analysis:

Existing distribution channels - can be described by how direct they are to the customer.
Trends and emerging channels - new channels can offer the opportunity to develop a competitive advantage.
Channel power structure - for example, in the case of a product having little brand equity, retailers have negotiating power over manufacturers and can capture more margin.

Market Trends

Changes in the market are important because they often are the source of new opportunities and threats. The relevant trends are industry-dependent, but some examples include changes in price sensitivity, demand for variety, and level of emphasis on service and support. Regional trends also may be relevant.

Key Success Factors

The key success factors are those elements that are necessary in order for the firm to achieve its marketing objectives. A few examples of such factors include:

Access to essential unique resources
Ability to achieve economies of scale
Access to distribution channels
Technological progress

It is important to consider that key success factors may change over time, especially as the product progresses through its life cycle.

Target Market Selection

Target marketing tailors a marketing mix for one or more segments identified by market segmentation. Target marketing contrasts with mass marketing, which offers a single product to the entire market. Two important factors to consider when selecting a target market segment are the attractiveness of the segment and the fit between the segment and the firm's objectives, resources, and capabilities.

Attractiveness of a Market Segment

The following are some examples of aspects that should be considered when evaluating the attractiveness of a market segment:

Size of the segment (number of customers and/or number of units)
Growth rate of the segment
Competition in the segment
Brand loyalty of existing customers in the segment
Attainable market share given promotional budget and competitors' expenditures
Required market share to break even
Sales potential for the firm in the segment
Expected profit margins in the segment

Market research and analysis is instrumental in obtaining this information. For example, buyer intentions, salesforce estimates, test marketing, and statistical demand analysis are useful for determining sales potential. The impact of applicable micro-environmental and macro-environmental variables on the market segment should be considered. Note that larger segments are not necessarily the most profitable to target since they likely will have more competition. It may be more profitable to serve one or more smaller segments that have little competition. On the other hand, if the firm can develop a competitive advantage, for example, via patent protection, it may find it profitable to pursue a larger market segment.

Suitability of Market Segments to the Firm

Market segments also should be evaluated according to how they fit the firm's objectives, resources, and capabilities. Some aspects of fit include:
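The required market share to break even, listed among the attractiveness criteria, follows from a standard contribution-margin calculation. All figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical break-even market share calculation for evaluating a segment.
# All figures are invented for illustration.

segment_size_units = 500_000      # annual unit sales in the whole segment
price_per_unit = 40.0
variable_cost_per_unit = 25.0
fixed_costs = 1_200_000.0         # annual fixed costs of serving the segment

# Each unit sold contributes price minus variable cost toward fixed costs.
contribution_per_unit = price_per_unit - variable_cost_per_unit
breakeven_units = fixed_costs / contribution_per_unit
breakeven_share = breakeven_units / segment_size_units

print(f"break-even volume: {breakeven_units:,.0f} units")
print(f"break-even market share: {breakeven_share:.1%}")
```

If the attainable share (given the promotional budget and competitors' expenditures) falls below this break-even share, the segment fails the attractiveness test regardless of its size.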

Whether the firm can offer superior value to the customers in the segment
The impact of serving the segment on the firm's image
Access to distribution channels required to serve the segment
The firm's resources vs. capital investment required to serve the segment

The better the firm's fit to a market segment, and the more attractive the market segment, the greater the profit potential to the firm.

Target Market Strategies

There are several different target-market strategies that may be followed. Targeting strategies usually can be categorized as one of the following:

Single-segment strategy - also known as a concentrated strategy. One market segment (not the entire market) is served with one marketing mix. A single-segment approach often is the strategy of choice for smaller companies with limited resources.
Selective specialization - a multiple-segment strategy, also known as a differentiated strategy. Different marketing mixes are offered to different segments. The product itself may or may not be different - in many cases only the promotional message or distribution channels vary.
Product specialization - the firm specializes in a particular product and tailors it to different market segments.
Market specialization - the firm specializes in serving a particular market segment and offers that segment an array of different products.
Full market coverage - the firm attempts to serve the entire market. This coverage can be achieved by means of either a mass market strategy in which a single undifferentiated marketing mix is offered to the entire market, or by a differentiated strategy in which a separate marketing mix is offered to each segment.

The following diagrams show examples of the five market selection patterns given three market segments S1, S2, and S3, and three products P1, P2, and P3.

[Diagrams: Single Segment, Selective Specialization, Product Specialization, Market Specialization, and Full Market Coverage, each shown as a grid of segments S1-S3 against products P1-P3]

A firm that is seeking to enter a market and grow should first target the most attractive segment that matches its capabilities. Once it gains a foothold, it can expand by pursuing a product specialization strategy, tailoring the product for different segments, or by pursuing a market specialization strategy and offering new products to its existing market segment. Another strategy whose use is increasing is individual marketing, in which the marketing mix is tailored on an individual consumer basis. While impractical in the past, individual marketing is becoming more viable thanks to advances in technology.

The Product Life Cycle

A product's life cycle (PLC) can be divided into several stages characterized by the revenue generated by the product. If a curve is drawn showing product revenue over time, it may take one of many different shapes, an example of which is shown below:

[Diagram: Product Life Cycle Curve]

The life cycle concept may apply to a brand or to a category of product. Its duration may be as short as a few months for a fad item or a century or more for product categories such as the gasoline-powered automobile.

Product development is the incubation stage of the product life cycle. There are no sales and the firm prepares to introduce the product. As the product progresses through its life cycle, changes in the marketing mix usually are required in order to adjust to the evolving challenges and opportunities.

Introduction Stage

When the product is introduced, sales will be low until customers become aware of the product and its benefits. Some firms may announce their product before it is introduced, but such announcements also alert competitors and remove the element of surprise. Advertising costs typically are high during this stage in order to rapidly increase customer awareness of the product and to target the early adopters. During the introductory stage the firm is likely to incur additional costs associated with the initial distribution of the product. These higher costs coupled with a low sales volume usually make the introduction stage a period of negative profits. During the introduction stage, the primary goal is to establish a market and build primary demand for the product class. The following are some of the marketing mix implications of the introduction stage:

Product - one or few products, relatively undifferentiated
Price - Generally high, assuming a skim pricing strategy for a high profit margin as the early adopters buy the product and the firm seeks to recoup development costs quickly. In some cases a penetration pricing strategy is used and introductory prices are set low to gain market share rapidly.
Distribution - Distribution is selective and scattered as the firm commences implementation of the distribution plan.
Promotion - Promotion is aimed at building brand awareness. Samples or trial incentives may be directed toward early adopters. The introductory promotion also is intended to convince potential resellers to carry the product.

Growth Stage

The growth stage is a period of rapid revenue growth. Sales increase as more customers become aware of the product and its benefits and additional market segments are targeted. Once the product has been proven a success and customers begin asking for it, sales will increase further as more retailers become interested in carrying it. The marketing team may expand the distribution at this point. When competitors enter the market, often during the later part of the growth stage, there may be price competition and/or increased promotional costs in order to convince consumers that the firm's product is better than that of the competition. During the growth stage, the goal is to gain consumer preference and increase sales. The marketing mix may be modified as follows:

Product - New product features and packaging options; improvement of product quality.
Price - Maintained at a high level if demand is high, or reduced to capture additional customers.
Distribution - Distribution becomes more intensive. Trade discounts are minimal if resellers show a strong interest in the product.
Promotion - Increased advertising to build brand preference.

Maturity Stage

The maturity stage is the most profitable. While sales continue to increase into this stage, they do so at a slower pace. Because brand awareness is strong, advertising expenditures will be reduced. Competition may result in decreased market share and/or prices. The competing products may be very similar at this point, increasing the difficulty of differentiating the product. The firm places effort into encouraging competitors' customers to switch, increasing usage per customer, and converting non-users into customers. Sales promotions may be offered to encourage retailers to give the product more shelf space over competing products. During the maturity stage, the primary goal is to maintain market share and extend the product life cycle. Marketing mix decisions may include:

Product - Modifications are made and features are added in order to differentiate the product from competing products that may have been introduced.
Price - Possible price reductions in response to competition while avoiding a price war.
Distribution - New distribution channels and incentives to resellers in order to avoid losing shelf space.
Promotion - Emphasis on differentiation and building of brand loyalty. Incentives to get competitors' customers to switch.

Decline Stage

Eventually sales begin to decline as the market becomes saturated, the product becomes technologically obsolete, or customer tastes change. If the product has developed brand loyalty, the profitability may be maintained longer. Unit costs may increase with the declining production volumes and eventually no more profit can be made. During the decline phase, the firm generally has three options:

Maintain the product in hopes that competitors will exit; reduce costs and find new uses for the product.
Harvest it, reducing marketing support and coasting along until no more profit can be made.
Discontinue the product when no more profit can be made or there is a successor product.

The marketing mix may be modified as follows:

Product - The number of products in the product line may be reduced. Rejuvenate surviving products to make them look new again.
Price - Prices may be lowered to liquidate inventory of discontinued products. Prices may be maintained for continued products serving a niche market.
Distribution - Distribution becomes more selective. Channels that no longer are profitable are phased out.
Promotion - Expenditures are lower and aimed at reinforcing the brand image for continued products.

Limitations of the Product Life Cycle Concept

The term "life cycle" implies a well-defined life cycle as observed in living organisms, but products do not have such a predictable life and the specific life cycle curves followed by different products vary substantially. Consequently, the life cycle concept is not well-suited for the forecasting of product sales. Furthermore, critics have argued that the product life cycle may become self-fulfilling. For example, if sales peak and then decline, managers may conclude that the product is in the decline phase and therefore cut the advertising budget, thus precipitating a further decline. Nonetheless, the product life cycle concept helps marketing managers to plan alternate marketing strategies to address the challenges that their products are likely to face. It also is useful for monitoring sales results over time and comparing them to those of products having a similar life cycle.

The Marketing Mix (The 4 P's of Marketing)

Marketing decisions generally fall into the following four controllable categories:

Product
Price
Place (distribution)
Promotion

The term "marketing mix" became popularized after Neil H. Borden published his 1964 article, The Concept of the Marketing Mix. Borden began using the term in his teaching in the late 1940's after James Culliton had described the marketing manager as a "mixer of ingredients". The ingredients in Borden's marketing mix included product planning, pricing, branding, distribution channels, personal selling, advertising, promotions, packaging, display, servicing, physical handling, and fact finding and analysis. E. Jerome McCarthy later grouped these ingredients into the four categories that today are known as the 4 P's of marketing, depicted below:

[Diagram: The Marketing Mix]

These four P's are the parameters that the marketing manager can control, subject to the internal and external constraints of the marketing environment. The goal is to make decisions that center the four P's on the customers in the target market in order to create perceived value and generate a positive response.

Product Decisions

The term "product" refers to tangible, physical products as well as services. Here are some examples of the product decisions to be made:

Brand name
Functionality
Styling
Quality
Safety
Packaging
Repairs and support
Warranty
Accessories and services

Price Decisions

Some examples of pricing decisions to be made include:

Pricing strategy (skim, penetration, etc.)
Suggested retail price
Volume discounts and wholesale pricing
Cash and early payment discounts
Seasonal pricing
Bundling
Price flexibility
Price discrimination

Distribution (Place) Decisions

Distribution is about getting the products to the customer. Some examples of distribution decisions include:

Distribution channels
Market coverage (inclusive, selective, or exclusive distribution)
Specific channel members
Inventory management
Warehousing
Distribution centers
Order processing
Transportation
Reverse logistics

Promotion Decisions

In the context of the marketing mix, promotion represents the various aspects of marketing communication, that is, the communication of information about the product with the goal of generating a positive customer response. Marketing communication decisions include:

Promotional strategy (push, pull, etc.)
Advertising
Personal selling & sales force
Sales promotions
Public relations & publicity
Marketing communications budget

Limitations of the Marketing Mix Framework

The marketing mix framework was particularly useful in the early days of the marketing concept when physical products represented a larger portion of the economy. Today, with marketing more integrated into organizations and with a wider variety of products and markets, some authors have attempted to extend its usefulness by proposing a fifth P, such as packaging, people, process, etc. Today however, the marketing mix most commonly remains based on the 4 P's. Despite its limitations and perhaps because of its simplicity, the use of this framework remains strong and many marketing textbooks have been organized around it.

Brand Equity

A brand is a name or symbol used to identify the source of a product. When developing a new product, branding is an important decision. The brand can add significant value when it is well recognized and has positive associations in the mind of the consumer. This concept is referred to as brand equity.

What is Brand Equity?

Brand equity is an intangible asset that depends on associations made by the consumer. There are at least three perspectives from which to view brand equity:

Financial - One way to measure brand equity is to determine the price premium that a brand commands over a generic product. For example, if consumers are willing to pay $100 more for a branded television over the same unbranded television, this premium provides important information about the value of the brand. However, expenses such as promotional costs must be taken into account when using this method to measure brand equity.

Brand extensions - A successful brand can be used as a platform to launch related products. The benefits of brand extensions are the leveraging of existing brand awareness, thus reducing advertising expenditures, and a lower risk from the perspective of the consumer. Furthermore, appropriate brand extensions can enhance the core brand. However, the value of brand extensions is more difficult to quantify than are direct financial measures of brand equity.

Consumer-based - A strong brand increases the consumer's attitude strength toward the product associated with the brand. Attitude strength is built by experience with a product. This importance of actual experience by the customer implies that trial samples are more effective than advertising in the early stages of building a strong brand. The consumer's awareness and associations lead to perceived quality, inferred attributes, and eventually, brand loyalty.
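The financial (price-premium) perspective can be illustrated with a small calculation. The prices, sales volume, and promotional spending below are invented for the example; a real analysis would use market data:

```python
# Hypothetical sketch of the price-premium view of brand equity.
# All figures are invented for illustration.

branded_price = 600.0
generic_price = 500.0
units_sold = 10_000

# The gross premium is the extra revenue the brand name commands.
gross_premium = (branded_price - generic_price) * units_sold

# Promotional spending required to sustain the premium must be netted out,
# as the text notes, before the premium says anything about brand value.
annual_promotion_cost = 350_000.0
net_premium = gross_premium - annual_promotion_cost

print(f"gross price premium: ${gross_premium:,.0f}")
print(f"premium net of promotion: ${net_premium:,.0f}")
```

If the promotion needed to sustain the premium exceeded the gross premium, the net figure would turn negative, which is one financial symptom of negative brand equity.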

Strong brand equity provides the following benefits:

Facilitates a more predictable income stream.
Increases cash flow by increasing market share, reducing promotional costs, and allowing premium pricing.
Brand equity is an asset that can be sold or leased.

However, brand equity is not always positive in value. Some brands acquire a bad reputation that results in negative brand equity. Negative brand equity can be measured by surveys in which consumers indicate that a discount is needed to purchase the brand over a generic product.

Building and Managing Brand Equity

In his 1989 paper, Managing Brand Equity, Peter H. Farquhar outlined the following three stages that are required in order to build a strong brand:
1. Introduction - introduce a quality product with the strategy of using the brand as a platform from which to launch future products. A positive evaluation by the consumer is important.
2. Elaboration - make the brand easy to remember and develop repeat usage. There should be accessible brand attitude, that is, the consumer should easily remember his or her positive evaluation of the brand.
3. Fortification - the brand should carry a consistent image over time to reinforce its place in the consumer's mind and develop a special relationship with the consumer. Brand extensions can further fortify the brand, but only with related products having a perceived fit in the mind of the consumer.

Alternative Means to Brand Equity

Building brand equity requires a significant effort, and some companies use alternative means of achieving the benefits of a strong brand. For example, brand equity can be borrowed by extending the brand name to a line of products in the same product category or even to other categories. In some cases, especially when there is a perceptual connection between the products, such extensions are successful. In other cases, the extensions are unsuccessful and can dilute the original brand equity. Brand equity also can be "bought" by licensing the use of a strong brand for a new product. As in line extensions by the same company, the success of brand licensing is not guaranteed and must be analyzed carefully for appropriateness.

Managing Multiple Brands

Different companies have opted for different brand strategies for multiple products. These strategies are:

Single brand identity - a separate brand for each product. For example, in laundry detergents Procter & Gamble offers uniquely positioned brands such as Tide, Cheer, Bold, etc.
Umbrella - all products under the same brand. For example, Sony offers many different product categories under its brand.
Multi-brand categories - different brands for different product categories. Campbell Soup Company uses Campbell's for soups, Pepperidge Farm for baked goods, and V8 for juices.
Family of names - different brands having a common name stem. Nestle uses Nescafe, Nesquik, and Nestea for beverages.

Brand equity is an important factor in multi-product branding strategies.

Protecting Brand Equity

The marketing mix should focus on building and protecting brand equity. For example, if the brand is positioned as a premium product, the product quality should be consistent with what consumers expect of the brand, low sale prices should not be used to compete, the distribution channels should be consistent with what is expected of a premium brand, and the promotional campaign should build consistent associations. Finally, potentially dilutive extensions that are inconsistent with the consumer's perception of the brand should be avoided. Extensions also should be avoided if the core brand is not yet sufficiently strong.

Pricing Strategy

One of the four major elements of the marketing mix is price. Pricing is an important strategic issue because it is related to product positioning. Furthermore, pricing affects other marketing mix elements such as product features, channel decisions, and promotion. While there is no single recipe to determine pricing, the following is a general sequence of steps that might be followed for developing the pricing of a new product:

1. Develop marketing strategy - perform marketing analysis, segmentation, targeting, and positioning.
2. Make marketing mix decisions - define the product, distribution, and promotional tactics.
3. Estimate the demand curve - understand how quantity demanded varies with price.
4. Calculate cost - include fixed and variable costs associated with the product.
5. Understand environmental factors - evaluate likely competitor actions, understand legal constraints, etc.
6. Set pricing objectives - for example, profit maximization, revenue maximization, or price stabilization (status quo).
7. Determine pricing - using information collected in the above steps, select a pricing method, develop the pricing structure, and define discounts.

These steps are interrelated and are not necessarily performed in the above order. Nonetheless, the above list serves to present a starting framework.

Marketing Strategy and the Marketing Mix

Before the product is developed, the marketing strategy is formulated, including target market selection and product positioning. There usually is a tradeoff between product quality and price, so price is an important variable in positioning. Because of inherent tradeoffs between marketing mix elements, pricing will depend on other product, distribution, and promotion decisions.

Estimate the Demand Curve

Because there is a relationship between price and quantity demanded, it is important to understand the impact of pricing on sales by estimating the demand curve for the product. For existing products, experiments can be performed at prices above and below the current price in order to determine the price elasticity of demand. Inelastic demand indicates that price increases might be feasible.
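The elasticity estimate from such a pricing experiment can be computed directly. The sketch below uses the arc (midpoint) elasticity formula; the prices and quantities are hypothetical, chosen only for illustration:

```python
def arc_elasticity(p1, q1, p2, q2):
    """Arc (midpoint) price elasticity of demand between two observations."""
    pct_change_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_change_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_change_q / pct_change_p

# Hypothetical experiment: raising price from $10 to $12 cuts demand from 1000 to 800 units.
e = arc_elasticity(10, 1000, 12, 800)
print(round(e, 2))  # -1.22
```

An absolute value greater than 1 indicates elastic demand over the tested price range; an absolute value less than 1 indicates inelastic demand, suggesting that a price increase might be feasible.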

Calculate Costs

If the firm has decided to launch the product, there likely is at least a basic understanding of the costs involved; otherwise, there might be no profit to be made. The unit cost of the product sets the lower limit of what the firm might charge, and determines the profit margin at higher prices. The total unit cost of producing a product is composed of the variable cost of producing each additional unit and fixed costs that are incurred regardless of the quantity produced. The pricing policy should consider both types of costs.

Environmental Factors

Pricing must take into account the competitive and legal environment in which the company operates. From a competitive standpoint, the firm must consider the implications of its pricing on the pricing decisions of competitors. For example, setting the price too low may risk a price war that may not be in the best interest of either side. Setting the price too high may attract a large number of competitors who want to share in the profits. From a legal standpoint, a firm is not free to price its products at any level it chooses. For example, there may be price controls that prohibit pricing a product too high. Pricing it too low may be considered predatory pricing or "dumping" in the case of international trade. Offering a different price to different consumers may violate laws against price discrimination. Finally, collusion with competitors to fix prices at an agreed level is illegal in many countries.

Pricing Objectives

The firm's pricing objectives must be identified in order to determine the optimal pricing. Common objectives include the following:

Current profit maximization - seeks to maximize current profit, taking into account revenue and costs. Current profit maximization may not be the best objective if it results in lower long-term profits.
Current revenue maximization - seeks to maximize current revenue with no regard to profit margins. The underlying objective often is to maximize long-term profits by increasing market share and lowering costs.
Maximize quantity - seeks to maximize the number of units sold or the number of customers served in order to decrease long-term costs as predicted by the experience curve.
Maximize profit margin - attempts to maximize the unit profit margin, recognizing that quantities will be low.
Quality leadership - use price to signal high quality in an attempt to position the product as the quality leader.
Partial cost recovery - an organization that has other revenue sources may seek only partial cost recovery.
Survival - in situations such as market decline and overcapacity, the goal may be to select a price that will cover costs and permit the firm to remain in the market. In this case, survival may take a priority over profits, so this objective is considered temporary.
Status quo - the firm may seek price stabilization in order to avoid price wars and maintain a moderate but stable level of profit.

For new products, the pricing objective often is either to maximize profit margin or to maximize quantity (market share). To meet these objectives, skim pricing and penetration pricing strategies often are employed. Joel Dean discussed these pricing policies in his classic HBR article entitled "Pricing Policies for New Products." Skim pricing attempts to "skim the cream" off the top of the market by setting a high price and selling to those customers who are less price sensitive. Skimming is a strategy used to pursue the objective of profit margin maximization.

Skimming is most appropriate when:

Demand is expected to be relatively inelastic; that is, the customers are not highly price sensitive.
Large cost savings are not expected at high volumes, or it is difficult to predict the cost savings that would be achieved at high volume.
The company does not have the resources to finance the large capital expenditures necessary for high volume production with initially low profit margins.

Penetration pricing pursues the objective of quantity maximization by means of a low price. It is most appropriate when:

Demand is expected to be highly elastic; that is, customers are price sensitive and the quantity demanded will increase significantly as price declines.
Large decreases in cost are expected as cumulative volume increases.
The product is of the nature of something that can gain mass appeal fairly quickly.
There is a threat of impending competition.

As the product lifecycle progresses, there likely will be changes in the demand curve and costs. As such, the pricing policy should be reevaluated over time. The pricing objective depends on many factors including production cost, existence of economies of scale, barriers to entry, product differentiation, rate of product diffusion, the firm's resources, and the product's anticipated price elasticity of demand.

Pricing Methods

To set the specific price level that achieves their pricing objectives, managers may make use of several pricing methods. These methods include:

Cost-plus pricing - set the price at the production cost plus a certain profit margin.
Target return pricing - set the price to achieve a target return-on-investment.
Value-based pricing - base the price on the effective value to the customer relative to alternative products.
Psychological pricing - base the price on factors such as signals of product quality, popular price points, and what the consumer perceives to be fair.
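The first two methods reduce to simple formulas. The sketch below shows cost-plus and target return pricing; the function names and the cost, markup, and ROI figures are hypothetical, used only for illustration:

```python
def cost_plus_price(unit_cost, markup):
    """Price = unit cost plus a percentage markup on cost."""
    return unit_cost * (1 + markup)

def target_return_price(unit_cost, invested_capital, target_roi, expected_unit_sales):
    """Price that recovers unit cost plus a target return on invested capital,
    spread over the expected sales volume."""
    return unit_cost + (target_roi * invested_capital) / expected_unit_sales

# Hypothetical product: $20 unit cost, 25% markup on cost
print(cost_plus_price(20, 0.25))  # 25.0

# Hypothetical: $1,000,000 invested, 20% target ROI, 50,000 units expected
print(target_return_price(20, 1_000_000, 0.20, 50_000))  # 24.0
```

Value-based and psychological pricing do not reduce to formulas in the same way, since they depend on research into customer perceptions rather than on the firm's own cost data.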

In addition to setting the price level, managers have the opportunity to design innovative pricing models that better meet the needs of both the firm and its customers. For example, software traditionally was purchased as a product in which customers made a one-time payment and then owned a perpetual license to the software. Many software suppliers have changed their pricing to a subscription model in which the customer subscribes for a set period of time, such as one year. Afterwards, the subscription must be renewed or the software no longer will function. This model offers stability to both the supplier and the customer since it reduces the large swings in software investment cycles.

Price Discounts

The normally quoted price to end users is known as the list price. This price usually is discounted for distribution channel members and some end users. There are several types of discounts, as outlined below.

Quantity discount - offered to customers who purchase in large quantities.
Cumulative quantity discount - a discount that increases as the cumulative quantity increases. Cumulative discounts may be offered to resellers who purchase large quantities over time but who do not wish to place large individual orders.
Seasonal discount - based on the time that the purchase is made and designed to reduce seasonal variation in sales. For example, the travel industry offers much lower off-season rates. Such discounts do not have to be based on time of the year; they also can be based on day of the week or time of the day, such as pricing offered by long distance and wireless service providers.
Cash discount - extended to customers who pay their bill before a specified date.

Trade discount - a functional discount offered to channel members for performing their roles. For example, a trade discount may be offered to a small retailer who may not purchase in quantity but nonetheless performs the important retail function.
Promotional discount - a short-term discounted price offered to stimulate sales.
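A quantity discount schedule can be modeled as a set of tiers, where an order receives the deepest discount whose minimum quantity it meets. A minimal sketch with a hypothetical schedule (the tier quantities and percentages are illustrative, not from the text):

```python
def discounted_price(list_price, quantity, tiers):
    """Apply the deepest quantity-discount tier the order qualifies for.
    tiers: list of (minimum_quantity, discount_fraction) pairs, any order."""
    discount = max((d for q, d in tiers if quantity >= q), default=0.0)
    return list_price * (1 - discount)

# Hypothetical schedule: 5% off at 100+ units, 10% off at 500+ units
tiers = [(100, 0.05), (500, 0.10)]
print(discounted_price(40.0, 50, tiers))   # 40.0  (below all tiers, list price)
print(discounted_price(40.0, 600, tiers))  # 36.0  (10% tier applies)
```

A cumulative quantity discount would differ only in tracking the running total purchased over time rather than the single order quantity.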

Process Flow Structures

The flow structure of the process used to make or deliver a product or service impacts facility layout, resources, technology decisions, and work methods. The process architecture may be an important component in the firm's strategy for building a competitive advantage. When characterized by its flow structure, a process broadly can be classified either as a job shop or a flow shop. A job shop process uses general purpose resources and is highly flexible. A flow shop process uses specialized resources and the work follows a fixed path. Consequently, a flow shop is less flexible than a job shop. Finer distinctions can be made in the process structure as follows:

Project - Example: building construction
Job shop - Example: print shop
Batch process - Example: bakery
Assembly line - Example: automobile production line
Continuous flow - Example: oil refinery

These process structures differ in several respects such as:

Flow - ranging from a large number of possible sequences of activities to only one possible sequence.
Flexibility - A process is flexible to the extent that the process performance and cost is independent of changes in the output. Changes may be changes in production volume or changes in the product mix.
Number of products - ranging from the capability of producing a multitude of different products to producing only one specific product.
Capital investment - ranging from using lower cost general purpose equipment to expensive specialized equipment.
Variable cost - ranging from a high unit cost to a low unit cost.
Labor content and skill - ranging from high labor content with high skill to low content and low skill.
Volume - ranging from a quantity of one to large scale mass production.

It is interesting to note that these aspects generally increase or decrease monotonically as one moves between the extremes of process structures. The following chart illustrates how the process characteristics vary with structure; the job shop, batch process, and assembly line fall between the two extremes shown.

Comparison of Process Structures and Characteristics

                      Project             Continuous Flow
Flow                  None       --->     Continuous
Flexibility           High       --->     Low
No. of Products       High       --->     Low
Capital Investment    Low        --->     High
Variable Cost         High       --->     Low
Labor Content         High       --->     Low
Labor Skill           High       --->     Low
Volume                Low        --->     High

The following sections describe each of the architectures, highlighting their differentiating characteristics.

Project

Flow - no flow
Flexibility - very high
Products - unique
Capital investment - very low
Variable cost - very high
Labor content and skill - very high
Volume - one

In a project, the inputs are brought to the project location as they are needed; there is no flow in the process. Technically, a project is not a process flow structure since there is no flow of product - the quantity produced usually is equal to one. It is worthwhile, however, to treat it as a process structure here since it represents one extreme of the spectrum. Projects are suitable for unique products that are different each time they are produced. The firm brings together the resources as needed, coordinating them using project management techniques.

Job Shop

Flow - jumbled flow
Flexibility - high
Products - many
Capital investment - low
Variable cost - high
Labor content and skill - high
Volume - low

A job shop is a flexible operation that has several activities through which work can pass. In a job shop, it is not necessary for all activities to be performed on all products, and their sequence may be different for different products. To illustrate the concept of a job shop, consider the case of a machine shop. In a machine shop, a variety of equipment such as drill presses, lathes, and milling machines is arranged in stations. Work is passed only to those machines required by it, and in the sequence required by it. This is a very flexible arrangement that can be used for a wide variety of products. A job shop uses general purpose equipment and relies on the knowledge of workers to produce a wide variety of products. Volume is adjusted by adding or removing labor as needed. Job shops are low in efficiency but high in flexibility. Rather than selling specific products, a job shop often sells its capabilities.

Batch Process

Flow - disconnected, with some dominant flows
Flexibility - moderate
Products - several
Capital investment - moderate
Variable cost - moderate
Labor content and skill - moderate
Volume - moderate

A batch process is similar to a job shop, except that the sequence of activities tends to be in a line and is less flexible. In a batch process, dominant flows can be identified. The activities, while in-line, are disconnected from one another. Products are produced in batches, for example, to fill specific customer orders. A batch process executes different production runs for different products. The disadvantage is the setup time required to change from one product to the other, but the advantage is that some flexibility in product mix can be achieved.

Assembly Line Process

Flow - connected line
Flexibility - low
Products - a few
Capital investment - high
Variable cost - low
Labor content and skill - low
Volume - high

Like a batch process, an assembly line processes work in fixed sequence. However, the assembly line connects the activities and paces them, for example, with a conveyor belt. A good example of an assembly line is an automobile plant.

Continuous Flow Process

Flow - continuous
Flexibility - very low
Products - one
Capital investment - very high
Variable cost - very low
Labor content and skill - very low, but with skilled overseers
Volume - very high

Like the assembly line, a continuous flow process has a fixed pace and fixed sequence of activities. Rather than being processed in discrete steps, the product is processed in a continuous flow; its quantity tends to be measured in weight or volume. The direct labor content and associated skill is low, but the skill level required to oversee the sophisticated equipment in the process may be high. Petroleum refineries and sugar processing facilities use a continuous flow process.

Process Selection

The primary determinants of the optimal process are the product variety and volume. The amount of capital that the firm is willing or able to invest also may be an important determinant, and there often is a trade-off between fixed and variable cost. The choice of process may depend on the firm's marketing plans and business strategy for developing a competitive advantage. From a marketing standpoint, a job shop allows the firm to sell its capabilities, whereas flow-shop production emphasizes the product itself. From a competitive advantage perspective, a job shop helps a firm to follow a differentiation strategy, whereas a flow shop is suited for a low cost strategy.

The process choice may depend on the stage of the product life cycle. In 1979 Robert H. Hayes and Steven C. Wheelwright put forth a product-process matrix relating process selection to the product life cycle stage. For example, early in a product's life cycle, a job shop may be the most appropriate structure to rapidly fill the early demand and adjust to changes in the design. When the product reaches maturity, the high volumes may justify an assembly line, and in the declining phase a batch process may be more appropriate as product volumes fall and a variety of spare parts is required. The optimal process also depends on the local economics. The cost of labor, energy, equipment, and transportation all can impact the process selection. A break-even analysis may be performed to assist in process selection. A break-even chart relates cost to levels of demand in various processes and the selection is made based on anticipated demand.
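Such a break-even comparison can be sketched in a few lines. The fixed and variable cost figures below are hypothetical, chosen only to show how the lowest-cost process shifts from job shop to batch to assembly line as anticipated volume grows:

```python
def total_cost(fixed, variable, volume):
    """Total cost = fixed cost + variable cost per unit x volume."""
    return fixed + variable * volume

def cheapest_process(processes, volume):
    """Name of the process with the lowest total cost at a given volume.
    processes: dict of name -> (fixed_cost, variable_cost_per_unit)."""
    return min(processes, key=lambda name: total_cost(*processes[name], volume))

# Hypothetical cost structures illustrating the fixed/variable trade-off
processes = {
    "job shop":      (10_000, 40.0),   # low fixed cost, high variable cost
    "batch":         (50_000, 20.0),
    "assembly line": (200_000, 5.0),   # high fixed cost, low variable cost
}
for v in (500, 3_000, 20_000):
    print(v, cheapest_process(processes, v))
# 500 job shop / 3000 batch / 20000 assembly line
```

Plotting the three total-cost lines against volume would reproduce the break-even chart described above; the crossover points of the lines are the break-even volumes between processes.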

Process Analysis

An operation is composed of processes designed to add value by transforming inputs into useful outputs. Inputs may be materials, labor, energy, and capital equipment. Outputs may be a physical product (possibly used as an input to another process) or a service. Processes can have a significant impact on the performance of a business, and process improvement can improve a firm's competitiveness. The first step to improving a process is to analyze it in order to understand the activities, their relationships, and the values of relevant metrics. Process analysis generally involves the following tasks:

Define the process boundaries that mark the entry points of the process inputs and the exit points of the process outputs.
Construct a process flow diagram that illustrates the various process activities and their interrelationships.
Determine the capacity of each step in the process. Calculate other measures of interest.
Identify the bottleneck, that is, the step having the lowest capacity.
Evaluate further limitations in order to quantify the impact of the bottleneck.
Use the analysis to make operating decisions and to improve the process.

Process Flow Diagram

The process boundaries are defined by the entry and exit points of inputs and outputs of the process. Once the boundaries are defined, the process flow diagram (or process flowchart) is a valuable tool for understanding the process using graphic elements to represent tasks, flows, and storage. The following is a flow diagram for a simple process having three sequential activities:

[Figure: Process Flow Diagram]

The symbols in a process flow diagram are defined as follows:

Rectangles: represent tasks.
Arrows: represent flows. Flows include the flow of material and the flow of information. The flow of information may include production orders and instructions. The information flow may take the form of a slip of paper that follows the material, or it may be routed separately, possibly ahead of the material in order to ready the equipment. Material flow usually is represented by a solid line and information flow by a dashed line.
Inverted triangles: represent storage (inventory). Storage bins commonly are used to represent raw material inventory, work in process inventory, and finished goods inventory.
Circles: represent storage of information (not shown in the above diagram).

In a process flow diagram, tasks drawn one after the other in series are performed sequentially. Tasks drawn in parallel are performed simultaneously. In the above diagram, raw material is held in a storage bin at the beginning of the process. After the last task, the output also is stored in a storage bin. When constructing a flow diagram, care should be taken to avoid pitfalls that might cause the flow diagram not to represent reality. For example, if the diagram is constructed using information obtained from employees, the employees may be reluctant to disclose rework loops and other potentially embarrassing aspects of the process. Similarly, if there are illogical aspects of the process flow, employees may tend to portray it as it should be and not as it is. Even if they portray the process as they perceive it, their perception may differ from the actual process. For example, they may leave out important activities that they deem to be insignificant.

Process Performance Measures

Operations managers are interested in process aspects such as cost, quality, flexibility, and speed. Some of the process performance measures that communicate these aspects include:

Process capacity - The capacity of the process is its maximum output rate, measured in units produced per unit of time. The capacity of a series of tasks is determined by the lowest capacity task in the string. The capacity of parallel strings of tasks is the sum of the capacities of the two strings, except for cases in which the two strings have different outputs that are combined. In such cases, the capacity of the two parallel strings of tasks is that of the lowest capacity parallel string.
Capacity utilization - the percentage of the process capacity that actually is being used.
Throughput rate (also known as flow rate) - the average rate at which units flow past a specific point in the process. The maximum throughput rate is the process capacity.
Flow time (also known as throughput time or lead time) - the average time that a unit requires to flow through the process from the entry point to the exit point. The flow time is the length of the longest path through the process. Flow time includes both processing time and any time the unit spends between steps.
Cycle time - the time between successive units as they are output from the process. Cycle time for the process is equal to the inverse of the throughput rate. Cycle time can be thought of as the time required for a task to repeat itself. Each series task in a process must have a cycle time less than or equal to the cycle time for the process. Put another way, the cycle time of the process is equal to the longest task cycle time. The process is said to be in balance if the cycle times are equal for each activity in the process. Such balance rarely is achieved.
Process time - the average time that a unit is worked on. Process time is flow time less idle time.
Idle time - time when no activity is being performed, for example, when an activity is waiting for work to arrive from the previous activity. The term can be used to describe both machine idle time and worker idle time.
Work in process - the amount of inventory in the process.
Set-up time - the time required to prepare the equipment to perform an activity on a batch of units. Set-up time usually does not depend strongly on the batch size and therefore can be reduced on a per unit basis by increasing the batch size.
Direct labor content - the amount of labor (in units of time) actually contained in the product. Excludes idle time when workers are not working directly on the product. Also excludes time spent maintaining machines, transporting materials, etc.
Direct labor utilization - the fraction of labor capacity that actually is utilized as direct labor.
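Several of these measures follow directly from the task capacities. The sketch below computes the capacity of three hypothetical series tasks, identifies the bottleneck, and derives the process cycle time as the inverse of the throughput rate:

```python
def process_capacity(task_capacities):
    """Capacity of tasks in series = the minimum task capacity."""
    return min(task_capacities)

def bottleneck_index(task_capacities):
    """Index of the lowest-capacity (bottleneck) task."""
    return task_capacities.index(min(task_capacities))

# Hypothetical three-task series process, capacities in units per hour
caps = [60, 45, 50]
cap = process_capacity(caps)
print(cap, "units/hour; bottleneck is task", bottleneck_index(caps) + 1)  # task 2

# Cycle time is the inverse of the throughput rate (here, at maximum throughput)
cycle_time_minutes = 60 / cap
print(round(cycle_time_minutes, 2), "minutes between successive units")  # 1.33
```

Note that this applies only to tasks in series; parallel strings producing the same output would have their capacities summed, per the definition above.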

Little's Law

The inventory in the process is related to the throughput rate and the flow time by the following equation:

W.I.P. Inventory = Throughput Rate x Flow Time

This relation is known as Little's Law, named after John D.C. Little who proved it mathematically in 1961. Since the throughput rate is equal to 1 / cycle time, Little's Law can be written as:

Flow Time = W.I.P. Inventory x Cycle Time

The Process Bottleneck

The process capacity is determined by the slowest series task in the process; that is, the task having the slowest throughput rate or longest cycle time. This slowest task is known as the bottleneck. Identification of the bottleneck is a critical aspect of process analysis since it not only determines the process capacity, but also provides the opportunity to increase that capacity. Saving time in the bottleneck activity saves time for the entire process. Saving time in a non-bottleneck activity does not help the process since the throughput rate is limited by the bottleneck. It is only when the bottleneck is eliminated that another activity will become the new bottleneck and present a new opportunity to improve the process. If the next slowest task is much faster than the bottleneck, then the bottleneck is having a major impact on the process capacity. If the next slowest task is only slightly faster than the bottleneck, then increasing the throughput of the bottleneck will have a limited impact on the process capacity.

Starvation and Blocking

Starvation occurs when a downstream activity is idle with no inputs to process because of upstream delays. Blocking occurs when an activity becomes idle because the next downstream activity is not ready to accept its output. Both starvation and blocking can be reduced by adding buffers that hold inventory between activities.

Process Improvement

Improvements in cost, quality, flexibility, and speed are commonly sought. The following lists some of the ways that processes can be improved.

Reduce work-in-process inventory - reduces lead time.
Add additional resources to increase capacity of the bottleneck. For example, an additional machine can be added in parallel to increase the capacity.
Improve the efficiency of the bottleneck activity - increases process capacity.
Move work away from bottleneck resources where possible - increases process capacity.
Increase availability of bottleneck resources, for example, by adding an additional shift - increases process capacity.
Minimize non-value adding activities - decreases cost, reduces lead time. Non-value adding activities include transport, rework, waiting, testing and inspecting, and support activities.
Redesign the product for better manufacturability - can improve several or all process performance measures.
Flexibility can be improved by outsourcing certain activities. Flexibility also can be enhanced by postponement, which shifts customizing activities to the end of the process.

In some cases, dramatic improvements can be made at minimal cost when the bottleneck activity is severely limiting the process capacity. On the other hand, in well-optimized processes, significant investment may be required to achieve a marginal operational improvement. Because of the large investment, the operational gain may not generate a sufficient rate of return. A cost-benefit analysis should be performed to determine if a process change is worth the investment. Ultimately, net present value will determine whether a process "improvement" really is an improvement.

Linear Programming

Operations management often presents complex problems that can be modeled by linear functions. The mathematical technique of linear programming is instrumental in solving a wide range of operations management problems.

Linear Program Structure

Linear programming models consist of an objective function and the constraints on that function. A linear programming model takes the following form:

Objective function:
    Z = a1X1 + a2X2 + a3X3 + . . . + anXn

Constraints:
    b11X1 + b12X2 + b13X3 + . . . + b1nXn ≤ c1
    b21X1 + b22X2 + b23X3 + . . . + b2nXn ≤ c2
    . . .
    bm1X1 + bm2X2 + bm3X3 + . . . + bmnXn ≤ cm

In this system of linear equations, Z is the objective function value that is being optimized, Xi are the decision variables whose optimal values are to be found, and ai, bij, and ci are constants derived from the specifics of the problem.

Linear Programming Assumptions

Linear programming requires linearity in the equations as shown in the above structure. In a linear equation, each decision variable is multiplied by a constant coefficient with no multiplying between decision variables and no nonlinear functions such as logarithms. Linearity requires the following assumptions:

Proportionality - a change in a variable results in a proportionate change in that variable's contribution to the value of the function.
Additivity - the function value is the sum of the contributions of each term.
Divisibility - the decision variables can be divided into non-integer values, taking on fractional values. Integer programming techniques can be used if the divisibility assumption does not hold.

In addition to these linearity assumptions, linear programming assumes certainty; that is, that the coefficients are known and constant.

Problem Formulation

With computers able to solve linear programming problems with ease, the challenge is in problem formulation - translating the problem statement into a system of linear equations to be solved by computer. The information required to write the objective function is derived from the problem statement. The problem is formulated from the problem statement as follows:

1. Identify the objective of the problem; that is, which quantity is to be optimized. For example, one may seek to maximize profit.
2. Identify the decision variables and the constraints on them. For example, production quantities and production limits may serve as decision variables and constraints.
3. Write the objective function and constraints in terms of the decision variables, using information from the problem statement to determine the proper coefficient for each term. Discard any unnecessary information.
4. Add any implicit constraints, such as non-negative restrictions.
5. Arrange the system of equations in a consistent form suitable for solving by computer. For example, place all variables on the left side of their equations and list them in the order of their subscripts.

The following guidelines help to reduce the risk of errors in problem formulation:

Be sure to consider any initial conditions.
Make sure that each variable in the objective function appears at least once in the constraints.
Consider constraints that might not be specified explicitly. For example, if there are physical quantities that must be non-negative, then these constraints must be included in the formulation.
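For a two-variable problem formulated this way, the optimum lies at a vertex of the feasible region, so it can be found by enumerating the intersections of constraint pairs. The sketch below is a brute-force illustration of this idea, not a production solver, and the product-mix coefficients are hypothetical:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b and x >= 0 for a two-variable LP
    by enumerating candidate vertices of the feasible region."""
    rows = [(a1, a2, bi) for (a1, a2), bi in zip(A, b)]
    # Non-negativity constraints x1 >= 0, x2 >= 0 written as -x <= 0
    rows += [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    best = None
    for (a1, a2, r1), (a3, a4, r2) in combinations(rows, 2):
        det = a1 * a4 - a2 * a3
        if abs(det) < 1e-12:
            continue  # parallel constraint lines, no unique intersection
        x = (r1 * a4 - a2 * r2) / det   # Cramer's rule
        y = (a1 * r2 - r1 * a3) / det
        # Keep the vertex only if it satisfies every constraint
        if all(u * x + v * y <= w + 1e-9 for u, v, w in rows):
            z = c[0] * x + c[1] * y
            if best is None or z > best[0]:
                best = (z, x, y)
    return best

# Hypothetical product-mix problem: maximize Z = 3X1 + 5X2
# subject to X1 <= 4, 2X2 <= 12, 3X1 + 2X2 <= 18, X1 and X2 >= 0
z, x1, x2 = solve_lp_2d([3, 5], [(1, 0), (0, 2), (3, 2)], [4, 12, 18])
print(z, x1, x2)  # 36.0 2.0 6.0
```

Real problems with many variables are solved with purpose-built algorithms such as the simplex method, which visit vertices far more efficiently than exhaustive enumeration.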

The Effect of Constraints

Constraints exist because certain limitations restrict the range of a variable's possible values. A constraint is considered to be binding if changing it also changes the optimal solution. Less severe constraints that do not affect the optimal solution are non-binding. Tightening a binding constraint can only worsen the objective function value, and loosening a binding constraint can only improve the objective function value. As such, once an optimal solution is found, managers can seek to improve that solution by finding ways to relax binding constraints.

Shadow Price

The shadow price for a constraint is the amount that the objective function value changes per unit change in the constraint. Since constraints often are determined by resources, a comparison of the shadow prices of each constraint provides valuable insight into the most effective place to apply additional resources in order to achieve the best improvement in the objective function value. The reported shadow price is valid up to the allowable increase or allowable decrease in the constraint.

Applications of Linear Programming

Linear programming is used to solve problems in many aspects of business administration including:

product mix planning
distribution networks
truck routing
staff scheduling
financial portfolios
corporate restructuring


Work Breakdown Structure

A complex project is made manageable by first breaking it down into individual components in a hierarchical structure, known as the work breakdown structure, or the WBS. Such a structure defines tasks that can be completed independently of other tasks, facilitating resource allocation, assignment of responsibilities, and measurement and control of the project. The work breakdown structure can be illustrated in a block diagram:

[Figure: Work Breakdown Structure Diagram]

Because the WBS is a hierarchical structure, it may be conveyed in outline form, with Level 1 items at the left margin and Level 2 and Level 3 items indented beneath them:

Work Breakdown Structure Outline

Task 1
    Subtask 1.1
        Work Package 1.1.1
        Work Package 1.1.2
        Work Package 1.1.3
    Subtask 1.2
        Work Package 1.2.1
        Work Package 1.2.2
        Work Package 1.2.3
Task 2
    Subtask 2.1
        Work Package 2.1.1
        Work Package 2.1.2
        Work Package 2.1.3

Terminology for Different Levels

Each organization uses its own terminology for classifying WBS components according to their level in the hierarchy. For example, some organizations refer to the different levels as tasks, subtasks, and work packages, as shown in the above outline. Others use the terms phases, entries, and activities.

Organization by Deliverables or Phases

The WBS may be organized around deliverables or phases of the project life cycle. Higher levels in the structure generally are performed by groups. The lowest level in the hierarchy often comprises activities performed by individuals, though a WBS that emphasizes deliverables does not necessarily specify activities.

Level of Detail

Breaking a project down into its component parts facilitates resource allocation and the assignment of individual responsibilities. Care should be taken to use a proper level of detail when creating the WBS. At one extreme, a very high level of detail is likely to result in micro-management. At the other extreme, the tasks may become too large to manage effectively. Defining tasks so that their duration is between several days and a few months works well for most projects.

WBS's Role in Project Planning

The work breakdown structure is the foundation of project planning. It is developed before dependencies are identified and activity durations are estimated. The WBS can be used to identify the tasks in the CPM and PERT project planning models.
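The hierarchy described above can be represented as a simple nested structure. The following is a minimal sketch in Python; the task names are hypothetical placeholders:

```python
# Hypothetical WBS: dict levels are tasks/subtasks, lists are work packages.
wbs = {
    "Task 1": {
        "Subtask 1.1": ["Work Package 1.1.1", "Work Package 1.1.2"],
        "Subtask 1.2": ["Work Package 1.2.1"],
    },
    "Task 2": {
        "Subtask 2.1": ["Work Package 2.1.1"],
    },
}

def wbs_outline(node, indent=0):
    """Recursively flatten the hierarchy into indented outline lines."""
    lines = []
    if isinstance(node, dict):
        for name, children in node.items():
            lines.append("  " * indent + name)
            lines.extend(wbs_outline(children, indent + 1))
    else:  # a list of work packages (the lowest level)
        for name in node:
            lines.append("  " * indent + name)
    return lines

print("\n".join(wbs_outline(wbs)))
```

The same nesting depth can be extended to however many levels a given organization uses.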

Gantt Chart

During the era of scientific management, Henry Gantt developed a tool for displaying the progression of a project in the form of a specialized chart. An early application was the tracking of the progress of ship building projects. Today, Gantt's scheduling tool takes the form of a horizontal bar graph and is known as a Gantt chart, a basic sample of which is shown below:

[Figure: Gantt chart with six sequential tasks of two months each, plotted against a January-December time scale]

The horizontal axis of the Gantt chart is a time scale, expressed either in absolute time or in relative time referenced to the beginning of the project. The time resolution depends on the project; the time unit typically is weeks or months. Rows of bars in the chart show the beginning and ending dates of the individual tasks in the project. In the above example, each task is shown to begin when the task above it completes. However, the bars may overlap in cases where a task can begin before the completion of another, and several tasks may be performed in parallel. For such cases, the Gantt chart is quite useful for communicating the timing of the various tasks. For larger projects, the tasks can be broken into subtasks having their own Gantt charts to maintain readability.

Gantt Chart Enhancements

This basic version of the Gantt chart often is enhanced to communicate more information.

- A vertical marker can be used to indicate the present point in time.
- The progression of each activity may be shown by shading the bar as progress is made, allowing the status of each activity to be known at a glance.
- Dependencies can be depicted using link lines or color codes.
- Resource allocation can be specified for each task.
- Milestones can be shown.

Gantt Chart Role in Project Planning

For larger projects, a work breakdown structure would be developed to identify the tasks before constructing a Gantt chart. For smaller projects, the Gantt chart itself may be used to identify the tasks. The strength of the Gantt chart is its ability to display the status of each activity at a glance. While often generated using project management software, it is easy to construct using a spreadsheet, and it often appears in simple ASCII formatting in e-mails among managers. For sequencing and critical path analysis, network models such as CPM or PERT are more powerful for dealing with dependencies and project completion time. Even when network models are used, the Gantt chart often is used as a reporting tool. Editorial note: the name of this tool frequently is misspelled as "Gannt".
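As noted above, a basic Gantt chart can be rendered in plain ASCII. The following sketch uses made-up tasks, with start times and durations in months from the project start:

```python
# Each task: (name, start, duration), in months from project start (illustrative values).
tasks = [("Task 1", 0, 2), ("Task 2", 2, 2), ("Task 3", 4, 3)]

def gantt_rows(tasks, horizon=12):
    """Render one bar row per task: '#' marks the months the task is active."""
    rows = []
    for name, start, dur in tasks:
        bar = "." * start + "#" * dur + "." * (horizon - start - dur)
        rows.append(f"{name:<8}|{bar}|")
    return rows

for row in gantt_rows(tasks):
    print(row)
```

Overlapping bars, for tasks performed in parallel, fall out naturally from overlapping start/duration values.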

CPM - Critical Path Method

In 1957, DuPont developed a project management method designed to address the challenge of shutting down chemical plants for maintenance and then restarting the plants once the maintenance had been completed. Given the complexity of the process, they developed the Critical Path Method (CPM) for managing such projects. CPM provides the following benefits:

- Provides a graphical view of the project.
- Predicts the time required to complete the project.
- Shows which activities are critical to maintaining the schedule and which are not.

CPM models the activities and events of a project as a network. Activities are depicted as nodes on the network, and events that signify the beginning or ending of activities are depicted as arcs or lines between the nodes. The following is an example of a CPM network diagram:

[Figure: CPM network diagram]

Steps in CPM Project Planning

1. Specify the individual activities.
2. Determine the sequence of those activities.
3. Draw a network diagram.
4. Estimate the completion time for each activity.
5. Identify the critical path (the longest path through the network).
6. Update the CPM diagram as the project progresses.

1. Specify the Individual Activities

From the work breakdown structure, a listing can be made of all the activities in the project. This listing can be used as the basis for adding sequence and duration information in later steps.

2. Determine the Sequence of the Activities

Some activities are dependent on the completion of others. A listing of the immediate predecessors of each activity is useful for constructing the CPM network diagram.

3. Draw the Network Diagram

Once the activities and their sequencing have been defined, the CPM diagram can be drawn. CPM originally was developed as an activity-on-node (AON) network, but some project planners prefer to specify the activities on the arcs.

4. Estimate Activity Completion Time

The time required to complete each activity can be estimated using past experience or the estimates of knowledgeable persons. CPM is a deterministic model that does not take into account variation in completion time, so only one number is used for an activity's time estimate.

5. Identify the Critical Path

The critical path is the longest-duration path through the network. The significance of the critical path is that the activities that lie on it cannot be delayed without delaying the project. Because of its impact on the entire project, critical path analysis is an important aspect of project planning. The critical path can be identified by determining the following four parameters for each activity:

- ES (earliest start time): the earliest time at which the activity can start, given that its precedent activities must be completed first.
- EF (earliest finish time): the earliest start time for the activity plus the time required to complete the activity.
- LF (latest finish time): the latest time at which the activity can be completed without delaying the project.
- LS (latest start time): the latest finish time minus the time required to complete the activity.

The slack time for an activity is the time between its earliest and latest start time, or between its earliest and latest finish time. Slack is the amount of time that an activity can be delayed past its earliest start or earliest finish without delaying the project. The critical path is the path through the project network in which none of the activities have slack; that is, the path for which ES = LS and EF = LF for all activities in the path. A delay in the critical path delays the project. Similarly, to accelerate the project it is necessary to reduce the total time required for the activities in the critical path.

6. Update the CPM Diagram

As the project progresses, the actual task completion times will be known and the network diagram can be updated to include this information. A new critical path may emerge, and structural changes may be made in the network if project requirements change.
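The forward and backward passes that yield ES/EF and LS/LF can be sketched in a few lines. This example uses a small hypothetical activity-on-node network, with activities listed in dependency order and durations chosen for illustration:

```python
# Hypothetical network: name -> (duration, list of immediate predecessors).
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

def cpm(activities):
    """Forward pass for ES/EF, backward pass for LS/LF; zero-slack
    activities (ES == LS) form the critical path."""
    es, ef = {}, {}
    for a, (dur, preds) in activities.items():  # keys assumed in dependency order
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    project_end = max(ef.values())
    ls, lf = {}, {}
    for a in reversed(list(activities)):
        dur, _ = activities[a]
        succs = [s for s, (_, ps) in activities.items() if a in ps]
        lf[a] = min((ls[s] for s in succs), default=project_end)
        ls[a] = lf[a] - dur
    critical = [a for a in activities if es[a] == ls[a]]
    return project_end, critical

print(cpm(activities))  # (12, ['A', 'B', 'D'])
```

Here A-B-D takes 3 + 4 + 5 = 12 time units, longer than A-C-D at 10, so C carries 2 units of slack and the rest of the network is critical.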

CPM Limitations

CPM was developed for complex but fairly routine projects with minimal uncertainty in the project completion times. For less routine projects there is more uncertainty in the completion times, and this uncertainty limits the usefulness of the deterministic CPM model. An alternative to CPM is the PERT project planning model, which allows a range of durations to be specified for each activity.


PERT - Program Evaluation and Review Technique

Complex projects require a series of activities, some of which must be performed sequentially and others that can be performed in parallel with other activities. This collection of series and parallel tasks can be modeled as a network. In 1957 the Critical Path Method (CPM) was developed as a network model for project management. CPM is a deterministic method that uses a fixed time estimate for each activity. While CPM is easy to understand and use, it does not consider the time variations that can have a great impact on the completion time of a complex project. The Program Evaluation and Review Technique (PERT) is a network model that allows for randomness in activity completion times. PERT was developed in the late 1950s for the U.S. Navy's Polaris project, which had thousands of contractors. It has the potential to reduce both the time and cost required to complete a project.

The Network Diagram

In a project, an activity is a task that must be performed and an event is a milestone marking the completion of one or more activities. Before an activity can begin, all of its predecessor activities must be completed. Project network models represent activities and milestones by arcs and nodes. PERT originally was an activity-on-arc network, in which the activities are represented on the lines and milestones on the nodes. Over time, some people began to use PERT as an activity-on-node network. For this discussion, we will use the original form of activity on arc. The PERT chart may have multiple pages with many sub-tasks. The following is a very simple example of a PERT diagram:

[Figure: PERT chart]

The milestones generally are numbered so that the ending node of an activity has a higher number than the beginning node. Incrementing the numbers by 10 allows new ones to be inserted without modifying the numbering of the entire diagram. The activities in the above diagram are labeled with letters along with the expected time required to complete the activity.

Steps in the PERT Planning Process

PERT planning involves the following steps:

1. Identify the specific activities and milestones.
2. Determine the proper sequence of the activities.
3. Construct a network diagram.
4. Estimate the time required for each activity.
5. Determine the critical path.
6. Update the PERT chart as the project progresses.

1. Identify Activities and Milestones

The activities are the tasks required to complete the project. The milestones are the events marking the beginning and end of one or more activities. It is helpful to list the tasks in a table that in later steps can be expanded to include information on sequence and duration.

2. Determine Activity Sequence

This step may be combined with the activity identification step, since the activity sequence is evident for some tasks. Other tasks may require more analysis to determine the exact order in which they must be performed.

3. Construct the Network Diagram

Using the activity sequence information, a network diagram can be drawn showing the sequence of the serial and parallel activities. For the original activity-on-arc model, the activities are depicted by arrowed lines and milestones are depicted by circles or "bubbles". If done manually, several drafts may be required to correctly portray the relationships among activities. Software packages simplify this step by automatically converting tabular activity information into a network diagram.

4. Estimate Activity Times

Weeks are a commonly used unit of time for activity completion, but any consistent unit of time can be used. A distinguishing feature of PERT is its ability to deal with uncertainty in activity completion times. For each activity, the model usually includes three time estimates:

- Optimistic time: generally the shortest time in which the activity can be completed. It is common practice to specify optimistic times to be three standard deviations from the mean, so that there is approximately a 1% chance that the activity will be completed within the optimistic time.
- Most likely time: the completion time having the highest probability. Note that this time is different from the expected time.
- Pessimistic time: the longest time that an activity might require. Three standard deviations from the mean is commonly used for the pessimistic time.

PERT assumes a beta probability distribution for the time estimates. For a beta distribution, the expected time for each activity can be approximated using the following weighted average:

Expected time = (Optimistic + 4 x Most likely + Pessimistic) / 6

This expected time may be displayed on the network diagram. If three-standard-deviation times were selected for the optimistic and pessimistic times, then there are six standard deviations between them, so the variance of each activity completion time is given by:

Variance = [(Pessimistic - Optimistic) / 6]²
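These two formulas translate directly into code. The following sketch applies them to illustrative estimates of 2, 4, and 8 weeks:

```python
def pert_estimates(optimistic, most_likely, pessimistic):
    """Expected time and variance under PERT's beta-distribution approximation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

expected, variance = pert_estimates(2, 4, 8)
print(expected, variance)  # about 4.33 weeks expected, variance 1.0
```

Note that the expected time (4.33) exceeds the most likely time (4) because the pessimistic estimate is farther from the mode than the optimistic one.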

5. Determine the Critical Path

The critical path is determined by adding the times for the activities in each sequence and determining the longest path in the project. The critical path determines the total calendar time required for the project. If activities outside the critical path speed up or slow down (within limits), the total project time does not change. The amount of time that a non-critical path activity can be delayed without delaying the project is referred to as slack time.

If the critical path is not immediately obvious, it may be helpful to determine the following four quantities for each activity:

- ES: earliest start time
- EF: earliest finish time
- LS: latest start time
- LF: latest finish time

These times are calculated using the expected time for the relevant activities. The earliest start and finish times of each activity are determined by working forward through the network and determining the earliest time at which an activity can start and finish, considering its predecessor activities. The latest start and finish times are the latest times that an activity can start and finish without delaying the project. LS and LF are found by working backward through the network. The difference between the latest and earliest finish of each activity is that activity's slack. The critical path then is the path through the network in which none of the activities have slack.

The variance in the project completion time can be calculated by summing the variances in the completion times of the activities in the critical path. Given this variance, one can calculate the probability that the project will be completed by a certain date, assuming a normal probability distribution for the critical path. The normal distribution assumption holds if the number of activities in the path is large enough for the central limit theorem to be applied. Since the critical path determines the completion date of the project, the project can be accelerated by adding the resources required to decrease the time for the activities in the critical path. Such a shortening of the project sometimes is referred to as project crashing.

6. Update as the Project Progresses

Make adjustments in the PERT chart as the project progresses. As the project unfolds, the estimated times can be replaced with actual times. In cases where there are delays, additional resources may be needed to stay on schedule, and the PERT chart may be modified to reflect the new situation.
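The completion-date probability described above reduces to a standard normal calculation once the critical path's expected time and summed variance are known. A minimal sketch, with illustrative numbers (expected completion of 20 weeks, variance 4):

```python
import math

def completion_probability(expected_time, variance, deadline):
    """P(project finishes by deadline), assuming the critical path's
    completion time is normally distributed (central limit theorem)."""
    z = (deadline - expected_time) / math.sqrt(variance)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Critical path expected at 20 weeks with variance 4 (standard deviation 2):
print(round(completion_probability(20, 4, 24), 4))  # z = 2, about 0.9772
```

A deadline equal to the expected time gives a probability of exactly 0.5, which is one way to see why quoting the expected completion date alone is optimistic half the time.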

Benefits of PERT

PERT is useful because it provides the following information:

- Expected project completion time.
- Probability of completion before a specified date.
- The critical path activities that directly impact the completion time.
- The activities that have slack time and that can lend resources to critical path activities.
- Activity start and end dates.

Limitations

The following are some of PERT's weaknesses:

- The activity time estimates are somewhat subjective and depend on judgment. In cases where there is little experience in performing an activity, the numbers may be only a guess. In other cases, if the person or group performing the activity estimates the time, there may be bias in the estimate.
- Even if the activity times are well-estimated, PERT assumes a beta distribution for these time estimates, but the actual distribution may be different.
- Even if the beta distribution assumption holds, PERT assumes that the probability distribution of the project completion time is the same as that of the critical path. Because other paths can become the critical path if their associated activities are delayed, PERT consistently underestimates the expected project completion time.

The underestimation of the project completion time due to alternate paths becoming critical is perhaps the most serious of these issues. To overcome this limitation, Monte Carlo simulations can be performed on the network to eliminate this optimistic bias in the expected project completion time.

Time-Cost Trade-offs

There is a relationship between a project's time to completion and its cost. For some types of costs, the relationship is in direct proportion; for other types, there is a direct trade-off. Because of these two types of costs, there is an optimal project pace for minimal cost. By understanding the time-cost relationship, one is better able to predict the impact of a schedule change on project cost.

Types of Costs

The costs associated with a project can be classified as direct costs or indirect costs.

- Direct costs are those directly associated with project activities, such as salaries, travel, and direct project materials and equipment. If the pace of activities is increased in order to decrease project completion time, the direct costs generally increase, since more resources must be allocated to accelerate the pace.
- Indirect costs are overhead costs that are not directly associated with specific project activities, such as office space, administrative staff, and taxes. Such costs tend to be relatively steady per unit of time over the life of the project. As such, the total indirect costs decrease as the project duration decreases.

The project cost is the sum of the direct and indirect costs.

Compressing the Project Schedule

Compressing or crashing the project schedule refers to the acceleration of the project activities in order to complete the project sooner. The time required to complete a project is determined by the critical path, so to compress a project schedule one must focus on critical path activities. A procedure for determining the optimal project time is to determine the normal completion time and a crash time for each critical path activity. The crash time is the shortest time in which an activity can be completed. The direct costs then are calculated for the normal and crash times of each activity. The slope of the cost versus time trade-off can be determined for each activity as follows:

Slope = (Crash cost - Normal cost) / (Normal time - Crash time)

The activities having the lowest cost per unit of time reduction should be shortened first. In this way, one can step through the critical path activities and create a graph of the total project cost versus the project time. The indirect, direct, and total project costs then can be calculated for different project durations. The optimal point is the duration resulting in the minimum project cost, as shown in the following graph:

[Figure: project cost versus duration, showing direct, indirect, and total cost curves]
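The slope formula lends itself to a short calculation. In this sketch, two hypothetical critical-path activities are compared and the one with the lowest cost per unit of time saved is selected for crashing first:

```python
# Hypothetical critical-path activities:
# name -> (normal_time, normal_cost, crash_time, crash_cost)
activities = {
    "A": (6, 1000, 4, 1600),
    "B": (5, 800, 3, 1100),
}

def crash_slope(normal_time, normal_cost, crash_time, crash_cost):
    """Extra direct cost per unit of time saved by crashing the activity."""
    return (crash_cost - normal_cost) / (normal_time - crash_time)

# Shorten the cheapest-to-crash activity first.
cheapest = min(activities, key=lambda a: crash_slope(*activities[a]))
print(cheapest, crash_slope(*activities[cheapest]))  # B costs 150 per time unit saved
```

Stepping through the remaining activities in slope order, while re-checking which path is critical after each reduction, traces out the total-cost-versus-duration curve described above.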

Attention should be given to the critical path to make sure that it remains the critical path after the activity time is reduced. If a new critical path emerges, it must be considered in subsequent time reductions. To minimize the cost, activities that are not on the critical path can be extended to minimize their costs without increasing the project completion time.

Time-Cost Model Assumptions

The time-cost model described above relies on the following assumptions:

- The normal cost for an activity is lower than the crash cost.
- There is a linear relationship between activity time and cost.
- The resources are available to shorten the activity.

The model would need to be adapted to cases in which the assumptions do not hold. For example, the schedule might need to take into account the need to level the load on a limited resource such as a specialized piece of equipment.

Additional Considerations

There are other considerations besides project cost. For example, when the project is part of the development of a new product, time-to-market may be extremely important and it may be beneficial to accelerate the project to a point where its cost is much greater than the minimum cost. In contract work, there may be incentive payments associated with early completion or penalties associated with late completion. A time-cost model can be adapted to take such incentives and penalties into account by modeling them as indirect costs. Because of the importance of the critical path in compressing a project schedule, a project planning technique such as the Critical Path Method or PERT should be used to identify the critical path before attempting to compress the schedule.

Box Plots

In 1977, John Tukey published an efficient method for displaying a five-number data summary. The graph is called a boxplot (also known as a box and whisker plot) and summarizes the following statistical measures:

- median
- upper and lower quartiles
- minimum and maximum data values

The following is an example of a boxplot:

[Figure: box plot]

The plot may be drawn either vertically, as in the above diagram, or horizontally.

Interpreting a Boxplot

The boxplot is interpreted as follows:

- The box itself contains the middle 50% of the data. The upper edge (hinge) of the box indicates the 75th percentile of the data set, and the lower hinge indicates the 25th percentile. The range of the middle two quartiles is known as the inter-quartile range.
- The line in the box indicates the median value of the data.
- If the median line within the box is not equidistant from the hinges, then the data is skewed.
- The ends of the vertical lines or "whiskers" indicate the minimum and maximum data values, unless outliers are present, in which case the whiskers extend at most 1.5 times the inter-quartile range beyond the hinges.
- The points outside the ends of the whiskers are outliers or suspected outliers.

Boxplot Enhancements

Beyond the basic information, boxplots sometimes are enhanced to convey additional information:

- The mean and its confidence interval can be shown using a diamond shape in the box.
- The expected range of the median can be shown using notches in the box.
- The width of the box can be varied in proportion to the log of the sample size.

Advantages of Boxplots

Boxplots have the following strengths:

- They graphically display a variable's location and spread at a glance.
- They provide some indication of the data's symmetry and skewness.
- Unlike many other methods of data display, boxplots show outliers.
- By placing a boxplot for each categorical variable side-by-side on the same graph, one quickly can compare data sets.

One drawback of boxplots is that they tend to emphasize the tails of a distribution, which are the least certain points in the data set. They also hide many of the details of the distribution. Displaying a histogram in conjunction with the boxplot helps in this regard, and both are important tools for exploratory data analysis.
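The five-number summary and the 1.5 x IQR outlier fences behind a boxplot can be computed with the standard library. One caveat: statistics.quantiles uses the "exclusive" method by default, so the quartile values may differ slightly from other quartile conventions. The data below is made up:

```python
import statistics

def five_number_summary(data):
    """Min, Q1, median, Q3, max: the values a boxplot displays."""
    q1, med, q3 = statistics.quantiles(data, n=4)  # 'exclusive' method by default
    return min(data), q1, med, q3, max(data)

data = [2, 4, 4, 5, 6, 7, 8, 9, 12, 30]
lo, q1, med, q3, hi = five_number_summary(data)
iqr = q3 - q1
fences = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)      # whisker limits
outliers = [x for x in data if x < fences[0] or x > fences[1]]
print(outliers)  # the value 30 lies beyond the upper fence
```

Points beyond the fences would be drawn individually on the plot, with the whiskers stopping at the most extreme data values inside the fences.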

The Histogram

The histogram is a summary graph showing a count of the data points falling in various ranges. The effect is a rough approximation of the frequency distribution of the data. The groups of data are called classes, and in the context of a histogram they are known as bins, because one can think of them as containers that accumulate data and "fill up" at a rate equal to the frequency of that data class. Consider the exam scores of a group of students. By defining data classes each spanning an interval of 10 points and counting the number of scores in each data class, a frequency table can be constructed as in the following example:

Frequency Table

    Group     Count
    0 - 9       1
    10 - 19     2
    20 - 29     3
    30 - 39     4
    40 - 49     5
    50 - 59     4
    60 - 69     3
    70 - 79     2
    80 - 89     2
    90 - 99     1

To construct the histogram, groups are plotted on the x axis and their frequencies on the y axis. The following is a histogram of the data in the above frequency table:

[Figure: histogram of the frequency table]

Information Conveyed by Histograms

Histograms are useful data summaries that convey the following information:

- The general shape of the frequency distribution (normal, chi-square, etc.)
- Symmetry of the distribution and whether it is skewed
- Modality: unimodal, bimodal, or multimodal

The histogram of the frequency distribution can be converted to a probability distribution by dividing the tally in each group by the total number of data points to give the relative frequency. The shape of the distribution conveys important information such as the probability distribution of the data. In cases in which the distribution is known, a histogram that does not fit the distribution may provide clues about a process or measurement problem. For example, a histogram that shows a higher than normal frequency in bins near one end and then a sharp drop-off may indicate that the observer is "helping" the results by classifying extreme data in the less extreme group.

Bin Width

The shape of the histogram sometimes is particularly sensitive to the number of bins. If the bins are too wide, important information might be omitted. For example, the data may be bimodal, but this characteristic may not be evident if the bins are too wide. On the other hand, if the bins are too narrow, what appears to be meaningful information really may be due to random variations that show up because of the small number of data points in a bin. To determine whether the bin width is set to an appropriate size, different bin widths should be used and the results compared to determine the sensitivity of the histogram shape to bin size. Bin widths typically are selected so that there are between 5 and 20 groups of data, but the appropriate number depends on the situation.

Histograms and Boxplots

The histogram provides a graphical summary of the shape of the data's distribution. It often is used in combination with other statistical summaries such as the boxplot, which conveys the median, quartiles, and range of the data.
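Binning data into a frequency table, as described above, takes only a few lines. This sketch uses a bin width of 10 and a made-up set of exam scores, printing the counts as a rough text histogram:

```python
def histogram(data, bin_width, low=0):
    """Count data points per bin; returns {bin_start: count}."""
    counts = {}
    for x in data:
        b = low + ((x - low) // bin_width) * bin_width
        counts[b] = counts.get(b, 0) + 1
    return counts

scores = [8, 13, 16, 25, 26, 29, 35, 42, 55, 61, 78, 93]  # illustrative scores
for start, count in sorted(histogram(scores, 10).items()):
    print(f"{start:3d}-{start + 9}: " + "#" * count)
```

Re-running with different bin_width values is the sensitivity check recommended above: the overall shape should be stable across reasonable widths.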

Stem and Leaf Plot

Using the data set's numbers themselves to form a diagram, the stem and leaf plot (or simply, stemplot) is a histogram-style tabulation of data developed by John Tukey. Consider the following data set, sorted in ascending order:

8, 13, 16, 25, 26, 29, 30, 32, 37, 38, 40, 41, 44, 47, 49, 51, 54, 55, 58, 61, 63, 67, 75, 78, 82, 86, 95

A stem and leaf plot of this data can be constructed by writing the first digits in the first column, then writing the second digits of all the numbers in that range to the right:

    Stem | Leaf
       0 | 8
       1 | 36
       2 | 569
       3 | 0278
       4 | 01479
       5 | 1458
       6 | 137
       7 | 58
       8 | 26
       9 | 5

The result is a histogram turned on its side, constructed from the digits of the data. The term "stem and leaf" is used to describe the diagram since it resembles the right half of a leaf, with the stem at the left and the outline of the edge of the leaf on the right. Alternatively, some people consider the rows to be stems and their digits to be leaves. If a larger number of bins is desired, the stem may be two digits for larger numbers, or there may be two stems for each first digit: one for second digits of 0 to 4 and the other for second digits of 5 to 9.

Stem and Leaf Plot Advantages

The stem and leaf plot essentially provides the same information as a histogram, with the following added benefits:

- The plot can be constructed quickly using pencil and paper.
- The value of each individual data point can be recovered from the plot.
- The data is arranged compactly, since the stem is not repeated for multiple data points.

The stem and leaf plot offers information similar to that conveyed by a histogram, and easily can be constructed without a computer.
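The construction described above is also easy to automate. The following sketch groups values by their tens digit and reproduces the first rows of the plot shown earlier:

```python
def stem_and_leaf(data):
    """Group sorted values by their tens digit; returns lines like '3|0278'."""
    stems = {}
    for x in sorted(data):
        stems.setdefault(x // 10, []).append(x % 10)
    return [f"{stem}|{''.join(str(leaf) for leaf in leaves)}"
            for stem, leaves in sorted(stems.items())]

data = [8, 13, 16, 25, 26, 29, 30, 32, 37, 38]
print("\n".join(stem_and_leaf(data)))
```

Because the leaves are kept in sorted order, each row doubles as a record of the individual data values in that bin.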

Scatter Plot

Scatter plots show the relationship between two variables by displaying data points on a two-dimensional graph. The variable that might be considered an explanatory variable is plotted on the x axis, and the response variable is plotted on the y axis. Scatter plots are especially useful when there is a large number of data points. They provide the following information about the relationship between two variables:

- Strength
- Shape: linear, curved, etc.
- Direction: positive or negative
- Presence of outliers

A correlation between the variables results in the clustering of data points along a line. The following is an example of a scatter plot suggestive of a positive linear relationship:

[Figure: example scatterplot]

Scatterplot Smoothing

Scatter plots may be "smoothed" by fitting a line to the data. This line attempts to show the non-random component of the association between the variables. Smoothing may be accomplished using:

- A straight line
- A quadratic or polynomial line
- Smoothing splines, which allow greater flexibility in nonlinear associations

The curve is fitted in a way that provides the best fit, often defined as the fit that results in the minimum sum of the squared errors (the least squares criterion). The use of smoothing to separate the non-random from the random variations allows one to make predictions of the response based on the value of the explanatory variable.

Cause and Effect

When a scatter plot shows an association between two variables, there is not necessarily a cause and effect relationship. Both variables could be related to some third variable that explains their variation, or there could be some other cause. Alternatively, an apparent association simply could be the result of chance.

Use of the Scatterplot

The scatter plot provides a graphical display of the relationship between two variables. It is useful in the early stages of analysis when exploring data before actually calculating a correlation coefficient or fitting a regression curve. For example, a scatter plot can help one to determine whether a linear regression model is appropriate.
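Fitting a straight line by the least squares criterion reduces to two closed-form expressions for the slope and intercept. A minimal sketch with illustrative, perfectly linear data:

```python
def least_squares(xs, ys):
    """Slope and intercept minimizing the sum of squared errors."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]         # perfectly linear, for illustration
print(least_squares(xs, ys))  # slope 2.0, intercept 0.0
```

With real, noisy data the fitted line would pass through the cloud of points rather than through every point, which is exactly the non-random component the smoothing is meant to expose.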

The Normal Distribution (Bell Curve)

In many natural processes, random variation conforms to a particular probability distribution known as the normal distribution, which is the most commonly observed probability distribution. Mathematicians de Moivre and Laplace used this distribution in the 1700s. In the early 1800s, the German mathematician and physicist Carl Gauss used it to analyze astronomical data, and it consequently became known as the Gaussian distribution among the scientific community. The shape of the normal distribution resembles that of a bell, so it sometimes is referred to as the "bell curve", an example of which follows:

[Figure: normal distribution curve centered at zero]

The above curve is for a data set having a mean of zero. In general, the normal distribution curve is described by the following probability density function:

f(x) = [1 / (σ√(2π))] e^( -(x - μ)² / (2σ²) )

where μ is the mean and σ is the standard deviation.

Bell Curve Characteristics

The bell curve has the following characteristics:

- Symmetric
- Unimodal
- Extends to +/- infinity
- Area under the curve = 1

Completely Described by Two Parameters

The normal distribution can be completely specified by two parameters:

- mean
- standard deviation

If the mean and standard deviation are known, then one essentially knows as much as if one had access to every point in the data set.

The Empirical Rule

The empirical rule is a handy quick estimate of the spread of the data, given the mean and standard deviation of a data set that follows the normal distribution. The empirical rule states that for a normal distribution:

- 68% of the data will fall within 1 standard deviation of the mean
- 95% of the data will fall within 2 standard deviations of the mean
- Almost all (99.7%) of the data will fall within 3 standard deviations of the mean

Note that these values are approximations. For example, according to the normal curve probability density function, 95% of the data will fall within 1.96 standard deviations of the mean; 2 standard deviations is a convenient approximation.

Normal Distribution and the Central Limit Theorem

The normal distribution is a widely observed distribution. Furthermore, it frequently can be applied to situations in which the data is distributed very differently. This extended applicability is possible because of the central limit theorem, which states that regardless of the distribution of the population, the distribution of the means of random samples approaches a normal distribution for a large sample size.

Applications to Business Administration

The normal distribution has applications in many areas of business administration. For example:

Modern portfolio theory commonly assumes that the returns of a diversified asset portfolio follow a normal distribution.
In operations management, process variations often are normally distributed.
In human resource management, employee performance sometimes is considered to be normally distributed.
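The central limit theorem can be illustrated with a decidedly non-normal population. In the following Python sketch (the sample size, number of trials, and seed are arbitrary), means of samples drawn from a flat uniform distribution behave as the theorem predicts:

```python
import random

random.seed(0)

# Population: Uniform(0,1), which is flat, not bell-shaped.
# Its mean is 0.5 and its variance is 1/12.
n = 48           # observations per sample
trials = 20_000  # number of sample means collected
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

# The sample means cluster around the population mean...
grand_mean = sum(means) / trials

# ...with the spread the theorem predicts: sigma / sqrt(n)
predicted_sd = (1.0 / 12.0) ** 0.5 / n ** 0.5

# ...and are approximately normal: about 95% fall within 1.96 predicted sd
within = sum(abs(m - 0.5) <= 1.96 * predicted_sd for m in means) / trials
```

Even though the underlying uniform distribution looks nothing like a bell curve, the distribution of the sample means is close to normal.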

The normal distribution often is used to describe random variables, especially those having symmetrical, unimodal distributions. In many cases however, the normal distribution is only a rough approximation of the actual distribution. For example, the physical length of a component cannot be negative, but the normal distribution extends indefinitely in both the positive and negative directions. Nonetheless, the resulting errors may be negligible or within acceptable limits, allowing one to solve problems with sufficient accuracy by assuming a normal distribution.


Covariance

The extent to which two random variables vary together (co-vary) can be measured by their covariance. Consider the two random variables x and y:

x1, x2, x3, . . . xn
y1, y2, y3, . . . yn

For two random variables x and y having means E{x} and E{y}, the covariance is defined as:

Cov(x,y) = E{[ x - E(x) ][ y - E(y) ]}

The covariance calculation begins with pairs of x and y, takes their differences from their mean values, and multiplies these differences together. For instance, if for x1 and y1 this product is positive, for that pair of data points the values of x and y have varied together in the same direction from their means. If the product is negative, they have varied in opposite directions. The larger the magnitude of the product, the stronger the relationship. The covariance is defined as the mean value of this product, calculated using each pair of data points xi and yi. If the covariance is zero, then the cases in which the product was positive were offset by those in which it was negative, and there is no linear relationship between the two random variables.

Computationally, it is more efficient to use the following equivalent formula to calculate the covariance:

Cov(x,y) = E{xy} - E{x}E{y}

The value of the covariance is interpreted as follows:

Positive covariance - indicates that higher than average values of one variable tend to be paired with higher than average values of the other variable.
Negative covariance - indicates that higher than average values of one variable tend to be paired with lower than average values of the other variable.
Zero covariance - if the two random variables are independent, the covariance will be zero. However, a covariance of zero does not necessarily mean that the variables are independent. A nonlinear relationship can exist that still would result in a covariance value of zero.
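A small numerical example may help. The following Python sketch (the data values are made up for illustration) computes the covariance both from the definition and from the computational shortcut, and shows that the two agree:

```python
def covariance(xs, ys):
    """Cov(x,y) = E{[x - E(x)][y - E(y)]}, computed directly from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Hypothetical paired observations
x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 2.0, 6.0]

cov = covariance(x, y)   # 3.5 -> positive: x and y tend to move together

# Computational shortcut: Cov(x,y) = E{xy} - E{x}E{y}
n = len(x)
shortcut = sum(a * b for a, b in zip(x, y)) / n - (sum(x) / n) * (sum(y) / n)
# shortcut equals cov
```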

Useful Properties

The variance of the sum of two random variables can be written as:

Var(x + y) = Var(x) + Var(y) + 2Cov(x,y)

When the random variables each are multiplied by constants a and b, the covariance can be written as follows:

Cov(ax,by) = abCov(x,y)

Limitations

Because the number representing covariance depends on the units of the data, it is difficult to compare covariances among data sets having different scales. A value that might represent a strong linear relationship for one data set might represent a very weak one in another. The correlation coefficient addresses this issue by normalizing the covariance to the product of the standard deviations of the variables, creating a dimensionless quantity that facilitates the comparison of different data sets.
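Both properties, and the normalization performed by the correlation coefficient, can be verified numerically. In the following Python sketch the data are randomly generated and the constants a and b are arbitrary:

```python
import random

random.seed(1)
n = 20_000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [random.gauss(0.0, 2.0) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((a - m) ** 2 for a in v) / len(v)

def cov(u, w):
    mu, mw = mean(u), mean(w)
    return sum((a - mu) * (b - mw) for a, b in zip(u, w)) / len(u)

# Property 1: Var(x + y) = Var(x) + Var(y) + 2Cov(x,y)
s = [a + b for a, b in zip(x, y)]
lhs = var(s)
rhs = var(x) + var(y) + 2 * cov(x, y)

# Property 2: Cov(ax, by) = abCov(x,y), with arbitrary constants
a, b = 3.0, -2.0
scaled = cov([a * u for u in x], [b * w for w in y])

# Correlation coefficient: covariance normalized to a dimensionless value
r = cov(x, y) / (var(x) ** 0.5 * var(y) ** 0.5)   # always between -1 and 1
```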

Strategic Management

The Strategic Planning Process

In the 1970's, many large firms adopted a formalized top-down strategic planning model. Under this model, strategic planning became a deliberate process in which top executives periodically would formulate the firm's strategy, then communicate it down the organization for implementation. The following is a flowchart model of this process:

The Strategic Planning Process

Mission
  |
  v
Objectives
  |
  v
Situation Analysis
  |
  v
Strategy Formulation
  |
  v
Implementation
  |
  v
Control

This process is most applicable to strategic management at the business unit level of the organization. For large corporations, strategy at the corporate level is more concerned with managing a portfolio of businesses. For example, corporate level strategy involves decisions about which business units to grow, resource allocation among the business units, taking advantage of synergies among the business units, and
mergers and acquisitions. In the process outlined here, "company" or "firm" will be used to denote a single-business firm or a single business unit of a diversified firm.

Mission

A company's mission is its reason for being. The mission often is expressed in the form of a mission statement, which conveys a sense of purpose to employees and projects a company image to customers. In the strategy formulation process, the mission statement sets the tone for where the company should go.

Objectives

Objectives are concrete goals that the organization seeks to reach, for example, an earnings growth target. The objectives should be challenging but achievable. They also should be measurable so that the company can monitor its progress and make corrections as needed.

Situation Analysis

Once the firm has specified its objectives, it begins with its current situation to devise a strategic plan to reach those objectives. Changes in the external environment often present new opportunities and new ways to reach the objectives. An environmental scan is performed to identify the available opportunities. The firm also must know its own capabilities and limitations in order to select the opportunities that it can pursue with a higher probability of success. The situation analysis therefore involves an analysis of both the external and internal environment.

The external environment has two aspects: the macro-environment that affects all firms and a micro-environment that affects only the firms in a particular industry. The macro-environmental analysis includes political, economic, social, and technological factors and sometimes is referred to as a PEST analysis. An important aspect of the micro-environmental analysis is the industry in which the firm operates or is considering operating. Michael Porter devised a five forces framework that is useful for industry analysis.
Porter's 5 forces include barriers to entry, customers, suppliers, substitute products, and rivalry among competing firms. The internal analysis considers the situation within the firm itself, such as:

Company culture
Company image
Organizational structure
Key staff
Access to natural resources
Position on the experience curve
Operational efficiency
Operational capacity
Brand awareness
Market share
Financial resources
Exclusive contracts
Patents and trade secrets

A situation analysis can generate a large amount of information, much of which is not particularly relevant to strategy formulation. To make the information more manageable, it sometimes is useful to categorize the internal factors of the firm as strengths and weaknesses, and the external environmental factors as opportunities and threats. Such an analysis often is referred to as a SWOT analysis.

Strategy Formulation

Once a clear picture of the firm and its environment is in hand, specific strategic alternatives can be developed. While different firms have different alternatives depending on their situation, there also exist generic strategies that can be applied across a wide range of firms. Michael Porter identified cost leadership, differentiation, and focus as three generic strategies that may be considered when defining strategic alternatives. Porter advised against implementing a combination of these strategies for a given product; rather, he argued that only one of the generic strategy alternatives should be pursued.

Implementation

The strategy likely will be expressed in high-level conceptual terms and priorities. For effective implementation, it needs to be translated into more detailed policies that can be understood at the functional level of the organization. The expression of the strategy in terms of functional policies also serves to highlight any practical issues that might not have been visible at a higher level. The strategy should be translated into specific policies for functional areas such as:

Marketing
Research and development
Procurement
Production
Human resources
Information systems

In addition to developing functional policies, the implementation phase involves identifying the required resources and putting into place the necessary organizational changes.

Control

Once implemented, the results of the strategy need to be measured and evaluated, with changes made as required to keep the plan on track. Control systems should be developed and implemented to facilitate this monitoring. Standards of performance are set, the actual performance measured, and appropriate action taken to ensure success.

Dynamic and Continuous Process

The strategic management process is dynamic and continuous. A change in one component can necessitate a change in the entire strategy. As such, the process must be repeated frequently in order to adapt the strategy to environmental changes. Throughout the process the firm may need to cycle back to a previous stage and make adjustments.

Drawbacks of this Process

The strategic planning process outlined above is only one approach to strategic management. It is best suited for stable environments. A drawback of this top-down approach is that it may not be responsive enough for rapidly changing competitive environments. In times of change, some of the more successful strategies emerge informally from lower levels of the organization, where managers are closer to customers on a day-to-day basis.

Another drawback is that this strategic planning model assumes fairly accurate forecasting and does not take into account unexpected events. In an uncertain world, long-term forecasts cannot be relied upon with a high level of confidence. In this respect, many firms have turned to scenario planning as a tool for dealing with multiple contingencies.

PEST Analysis

A PEST analysis is an analysis of the external macro-environment that affects all firms. P.E.S.T. is an acronym for the Political, Economic, Social, and Technological factors of the external macro-environment. Such external factors usually are beyond the firm's control and sometimes present themselves as threats. For this reason, some say that "pest" is an appropriate term for these factors. However, changes in the external environment also create new opportunities and the letters sometimes are rearranged to construct the more optimistic term of STEP analysis.

Many macro-environmental factors are country-specific and a PEST analysis will need to be performed for all countries of interest. The following are examples of some of the factors that might be considered in a PEST analysis.

Political Analysis

Political stability
Risk of military invasion
Legal framework for contract enforcement
Intellectual property protection
Trade regulations & tariffs
Favored trading partners
Anti-trust laws
Pricing regulations
Taxation - tax rates and incentives
Wage legislation - minimum wage and overtime
Work week
Mandatory employee benefits
Industrial safety regulations
Product labeling requirements

Economic Analysis

Type of economic system in countries of operation
Government intervention in the free market
Comparative advantages of host country
Exchange rates & stability of host country currency
Efficiency of financial markets
Infrastructure quality
Skill level of workforce
Labor costs
Business cycle stage (e.g. prosperity, recession, recovery)
Economic growth rate
Discretionary income
Unemployment rate
Inflation rate
Interest rates

Social Analysis

Demographics
Class structure
Education
Culture (gender roles, etc.)
Entrepreneurial spirit
Attitudes (health, environmental consciousness, etc.)
Leisure interests

Technological Analysis

Recent technological developments
Technology's impact on product offering
Impact on cost structure
Impact on value chain structure
Rate of technological diffusion

The number of macro-environmental factors is virtually unlimited. In practice, the firm must prioritize and monitor those factors that influence its industry. Even so, it may be difficult to forecast future trends with an acceptable level of accuracy. In this regard, the firm may turn to scenario planning techniques to deal with high levels of uncertainty in important macro-environmental variables.

SWOT Analysis

SWOT analysis is a simple framework for generating strategic alternatives from a situation analysis. It is applicable to either the corporate level or the business unit level and frequently appears in marketing plans. SWOT (sometimes referred to as TOWS) stands for Strengths, Weaknesses, Opportunities, and Threats. The SWOT framework was described in the late 1960's by Edmund P. Learned, C. Roland Christensen, Kenneth Andrews, and William D. Guth in Business Policy, Text and Cases (Homewood, IL: Irwin, 1969). The General Electric Growth Council used this form of analysis in the 1980's. Because it concentrates on the issues that potentially have the most impact, the SWOT analysis is useful when a very limited amount of time is available to address a complex strategic situation. The following diagram shows how a SWOT analysis fits into a strategic situation analysis.

                  Situation Analysis
                 /                  \
       Internal Analysis      External Analysis
        /          \            /          \
  Strengths   Weaknesses  Opportunities  Threats

                         |
                    SWOT Profile

The internal and external situation analysis can produce a large amount of information, much of which may not be highly relevant. The SWOT analysis can serve as an interpretative filter to reduce the information to a manageable quantity of key issues. The SWOT analysis classifies the internal aspects of the company as strengths or weaknesses and the external situational factors as opportunities or threats. Strengths can serve as a foundation for building a competitive advantage, and weaknesses may hinder it. By understanding these four aspects of its situation, a firm can better leverage its strengths, correct its weaknesses, capitalize on golden opportunities, and deter potentially devastating threats.

Internal Analysis

The internal analysis is a comprehensive evaluation of the internal environment's potential strengths and weaknesses. Factors should be evaluated across the organization in areas such as:

Company culture
Company image
Organizational structure
Key staff
Access to natural resources
Position on the experience curve
Operational efficiency
Operational capacity
Brand awareness
Market share
Financial resources
Exclusive contracts
Patents and trade secrets

The SWOT analysis summarizes the internal factors of the firm as a list of strengths and weaknesses.

External Analysis

An opportunity is the chance to introduce a new product or service that can generate superior returns. Opportunities can arise when changes occur in the external environment. Many of these changes can be perceived as threats to the market position of existing products and may necessitate a change in product specifications or the development of new products in order for the firm to remain competitive. Changes in the external environment may be related to:

Customers
Competitors
Market trends
Suppliers
Partners
Social changes
New technology
Economic environment
Political and regulatory environment

The last four items in the above list are macro-environmental variables, and are addressed in a PEST analysis. The SWOT analysis summarizes the external environmental factors as a list of opportunities and threats.

SWOT Profile

When the analysis has been completed, a SWOT profile can be generated and used as the basis of goal setting, strategy formulation, and implementation. The completed SWOT profile sometimes is arranged as follows:

Strengths          Weaknesses
1.                 1.
2.                 2.
3.                 3.

Opportunities      Threats
1.                 1.
2.                 2.
3.                 3.

When formulating strategy, the interaction of the quadrants in the SWOT profile becomes important. For example, the strengths can be leveraged to pursue opportunities and to avoid threats, and managers can be alerted to weaknesses that might need to be overcome in order to successfully pursue opportunities.

Multiple Perspectives Needed

The method used to acquire the inputs to the SWOT matrix will affect the quality of the analysis. If the information is obtained hastily during a quick interview with the CEO, even though this one person may have a broad view of the company and industry, the information would represent a single viewpoint. The quality of the analysis will be improved greatly if interviews are held with a spectrum of stakeholders such as employees, suppliers, customers, strategic partners, etc.

SWOT Analysis Limitations

While useful for reducing a large quantity of situational factors into a more manageable profile, the SWOT framework has a tendency to oversimplify the situation by classifying the firm's environmental factors into categories in which they may not always fit. The classification of some factors as strengths or weaknesses, or as opportunities or threats, is somewhat arbitrary. For example, a particular company culture can be either a strength or a weakness. A technological change can be either a threat or an opportunity. Perhaps what is more important than the superficial classification of these factors is the firm's awareness of them and its development of a strategic plan to use them to its advantage.

Competitor Analysis

In formulating business strategy, managers must consider the strategies of the firm's competitors. While in highly fragmented commodity industries the moves of any single competitor may be less important, in concentrated industries competitor analysis becomes a vital part of strategic planning. Competitor analysis has two primary activities: 1) obtaining information about important competitors, and 2) using that information to predict competitor behavior. The goal of competitor analysis is to understand:

with which competitors to compete,
competitors' strategies and planned actions,
how competitors might react to a firm's actions,
how to influence competitor behavior to the firm's own advantage.

Casual knowledge about competitors usually is insufficient in competitor analysis. Rather, competitors should be analyzed systematically, using organized competitor intelligence-gathering to compile a wide array of information so that well-informed strategy decisions can be made.

Competitor Analysis Framework

Michael Porter presented a framework for analyzing competitors. This framework is based on the following four key aspects of a competitor:

Competitor's objectives
Competitor's assumptions
Competitor's strategy
Competitor's capabilities

Objectives and assumptions are what drive the competitor, and strategy and capabilities are what the competitor is doing or is capable of doing. These components can be depicted as shown in the following diagram:

Competitor Analysis Components

What drives the competitor:
  Objectives
  Assumptions

What the competitor is doing or is capable of doing:
  Strategy
  Resources & Capabilities

These components combine to form the Competitor Response Profile.

Adapted from Michael E. Porter, Competitive Strategy, 1980, p. 49.

A competitor analysis should include the more important existing competitors as well as potential competitors such as those firms that might enter the industry, for example, by extending their present strategy or by vertically integrating.

Competitor's Current Strategy

The two main sources of information about a competitor's strategy are what the competitor says and what it does. What a competitor is saying about its strategy is revealed in:

annual shareholder reports
10K reports
interviews with analysts
statements by managers
press releases

However, this stated strategy often differs from what the competitor actually is doing. What the competitor is doing is evident in where its cash flow is directed, such as in the following tangible actions:

hiring activity
R & D projects
capital investments
promotional campaigns
strategic partnerships
mergers and acquisitions

Competitor's Objectives

Knowledge of a competitor's objectives facilitates a better prediction of the competitor's reaction to different competitive moves. For example, a competitor that is focused on reaching short-term financial goals might not be willing to spend much money responding to a competitive attack. Rather, such a competitor might favor focusing on the products that hold positions that better can be defended. On the other hand, a company that has no short term profitability objectives might be willing to participate in destructive price competition in which neither firm earns a profit.

Competitor objectives may be financial or other types. Some examples include growth rate, market share, and technology leadership. Goals may be associated with each hierarchical level of strategy - corporate, business unit, and functional level. The competitor's organizational structure provides clues as to which functions of the company are deemed to be the more important. For example, those functions that report directly to the chief executive officer are likely to be given priority over those that report to a senior vice president. Other aspects of the competitor that serve as indicators of its objectives include risk tolerance, management incentives, backgrounds of the executives, composition of the board of directors, legal or contractual restrictions, and any additional corporate-level goals that may influence the competing business unit. Whether the competitor is meeting its objectives provides an indication of how likely it is to change its strategy.

Competitor's Assumptions

The assumptions that a competitor's managers hold about their firm and their industry help to define the moves that they will consider. For example, if in the past the industry introduced a new type of product that failed, the industry executives may assume that there is no market for the product. Such assumptions are not always accurate and if incorrect may present opportunities. For example, new entrants may have the opportunity to introduce a product similar to a previously unsuccessful one without retaliation because incumbent firms may not take their threat seriously. Honda was able to enter the U.S. motorcycle market with a small motorbike because U.S. manufacturers had assumed that there was no market for small bikes based on their past experience. A competitor's assumptions may be based on a number of factors, including any of the following:

beliefs about its competitive position
past experience with a product
regional factors
industry trends
rules of thumb

A thorough competitor analysis also would include assumptions that a competitor makes about its own competitors, and whether that assessment is accurate.

Competitor's Resources and Capabilities

Knowledge of the competitor's assumptions, objectives, and current strategy is useful in understanding how the competitor might want to respond to a competitive attack. However, its resources and capabilities determine its ability to respond effectively. A competitor's capabilities can be analyzed according to its strengths and weaknesses in various functional areas, as is done in a SWOT analysis. The competitor's strengths define its capabilities. The analysis can be taken further to evaluate the competitor's ability to increase its capabilities in certain areas. A financial analysis can be performed to reveal its sustainable growth rate.

Finally, since the competitive environment is dynamic, the competitor's ability to react swiftly to change should be evaluated. Some firms have heavy momentum and may continue for many years in the same direction before adapting. Others are able to mobilize and adapt very quickly. Factors that slow a company down include low cash reserves, large investments in fixed assets, and an organizational structure that hinders quick action.

Competitor Response Profile

Information from an analysis of the competitor's objectives, assumptions, strategy, and capabilities can be compiled into a response profile of possible moves that might be made by the competitor. This profile includes both potential offensive and defensive moves. The specific moves and their expected strength can be estimated using information gleaned from the analysis. The result of the competitor analysis should be an improved ability to predict the competitor's behavior and even to influence that behavior to the firm's advantage.

The Experience Curve

In the 1960's, management consultants at The Boston Consulting Group observed a consistent relationship between the cost of production and the cumulative production quantity (total quantity produced from the first unit to the last). Data revealed that the real value-added production cost declined by 20 to 30 percent for each doubling of cumulative production quantity:

The Experience Curve

The vertical axis of this logarithmic graph is the real unit cost of adding value, adjusted for inflation. It includes the cost that the firm incurs to add value to the starting materials, but excludes the cost of those materials themselves, which are subject to the experience curves of their suppliers.

Note that the experience curve differs from the learning curve. The learning curve describes the observed reduction in the number of required direct labor hours as workers learn their jobs. The experience curve by contrast applies not only to labor intensive situations, but also to process oriented ones.

The experience curve relationship holds over a wide range of industries. In fact, its absence would be considered by some to be a sign of possible mismanagement. Cases in which the experience curve is not observed sometimes involve the withholding of capital investment, for example, to increase short-term ROI. The experience curve can be explained by a combination of learning (the learning curve), specialization, scale, and investment.

Implications for Strategy

The experience curve has important strategic implications. If a firm is able to gain market share over its competitors, it can develop a cost advantage. Penetration pricing strategies and a significant investment in advertising, sales personnel, production capacity, etc. can be justified to increase market share and gain a competitive advantage. When evaluating strategies based on the experience curve, a firm must consider the reaction of competitors who also understand the concept. Some potential pitfalls include:

If all other firms equally pursue the strategy, the fallacy of composition holds: none will increase market share, and all will suffer losses from over-capacity and low prices.
The more competitors that pursue the strategy, the higher the cost of gaining a given market share and the lower the return on investment.

Competing firms may be able to discover the leading firm's proprietary methods and replicate the cost reductions without having made the large investment to gain experience.
New technologies may create a new experience curve. Entrants building new plants may be able to take advantage of the latest technologies that offer a cost advantage over the older plants of the leading firm.
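The 20 to 30 percent decline per doubling described earlier corresponds to a simple power-law cost model. The following Python sketch (the function name, the first-unit cost of 100, and the 20 percent decline rate are illustrative assumptions, not figures from the original text) shows an 80% experience curve, in which each doubling of cumulative quantity multiplies real value-added unit cost by 0.8:

```python
import math

def unit_cost(cumulative_qty, first_unit_cost=100.0, decline_per_doubling=0.20):
    """Experience-curve cost model: each doubling of cumulative production
    reduces real value-added unit cost by decline_per_doubling."""
    b = math.log2(1.0 - decline_per_doubling)  # negative slope on log-log axes
    return first_unit_cost * cumulative_qty ** b

c1 = unit_cost(1)   # first unit: 100.0
c2 = unit_cost(2)   # one doubling:  ~80.0 (20% lower)
c4 = unit_cost(4)   # two doublings: ~64.0
```

Plotted on logarithmic axes, this power law appears as the straight line described in the text above.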

The Value Chain

To better understand the activities through which a firm develops a competitive advantage and creates shareholder value, it is useful to separate the business system into a series of value-generating activities referred to as the value chain. In his 1985 book Competitive Advantage, Michael Porter introduced a generic value chain model that comprises a sequence of activities found to be common to a wide range of firms. Porter identified primary and support activities as shown in the following diagram:

Porter's Generic Value Chain

Support Activities:
  Firm Infrastructure
  HR Management
  Technology Development
  Procurement

Primary Activities:
  Inbound Logistics -> Operations -> Outbound Logistics -> Marketing & Sales -> Service

                                                                        -> MARGIN

The goal of these activities is to offer the customer a level of value that exceeds the cost of the activities, thereby resulting in a profit margin. The primary value chain activities are:

Inbound Logistics: the receiving and warehousing of raw materials, and their distribution to manufacturing as they are required.
Operations: the processes of transforming inputs into finished products and services.
Outbound Logistics: the warehousing and distribution of finished goods.
Marketing & Sales: the identification of customer needs and the generation of sales.
Service: the support of customers after the products and services are sold to them.

These primary activities are supported by:

The infrastructure of the firm: organizational structure, control systems, company culture, etc.
Human resource management: employee recruiting, hiring, training, development, and compensation.
Technology development: technologies to support value-creating activities.
Procurement: purchasing inputs such as materials, supplies, and equipment.

The firm's margin or profit then depends on its effectiveness in performing these activities efficiently, so that the amount that the customer is willing to pay for the products exceeds the cost of the activities in the value chain. It is in these activities that a firm has the opportunity to generate superior value. A competitive advantage may be achieved by reconfiguring the value chain to provide lower cost or better differentiation.

The value chain model is a useful analysis tool for defining a firm's core competencies and the activities in which it can pursue a competitive advantage as follows:

Cost advantage: by better understanding costs and squeezing them out of the value-adding activities.
Differentiation: by focusing on those activities associated with core competencies and capabilities in order to perform them better than do competitors.

Cost Advantage and the Value Chain

A firm may create a cost advantage either by reducing the cost of individual value chain activities or by reconfiguring the value chain. Once the value chain is defined, a cost analysis can be performed by assigning costs to the value chain activities. The costs obtained from the accounting report may need to be modified in order to allocate them properly to the value creating activities. Porter identified 10 cost drivers related to value chain activities:

Economies of scale
Learning
Capacity utilization
Linkages among activities
Interrelationships among business units
Degree of vertical integration
Timing of market entry
Firm's policy of cost or differentiation
Geographic location
Institutional factors (regulation, union activity, taxes, etc.)

A firm develops a cost advantage by controlling these drivers better than do the competitors. A cost advantage also can be pursued by reconfiguring the value chain. Reconfiguration means structural changes such as a new production process, new distribution channels, or a different sales approach. For example, FedEx structurally redefined express freight service by acquiring its own planes and implementing a hub and spoke system.

Differentiation and the Value Chain

A differentiation advantage can arise from any part of the value chain. For example, procurement of inputs that are unique and not widely available to competitors can create differentiation, as can distribution channels that offer high service levels. Differentiation stems from uniqueness. A differentiation advantage may be achieved either by changing individual value chain activities to increase uniqueness in the final product or by reconfiguring the value chain. Porter identified several drivers of uniqueness:

Policies and decisions
Linkages among activities
Timing
Location
Interrelationships
Learning
Integration
Scale (e.g. better service as a result of large scale)
Institutional factors

Many of these also serve as cost drivers. Differentiation often results in greater costs, resulting in tradeoffs between cost and differentiation.

There are several ways in which a firm can reconfigure its value chain in order to create uniqueness. It can forward integrate in order to perform functions that once were performed by its customers. It can backward integrate in order to have more control over its inputs. It may implement new process technologies or utilize new distribution channels. Ultimately, the firm may need to be creative in order to develop a novel value chain configuration that increases product differentiation.

Technology and the Value Chain

Because technology is employed to some degree in every value creating activity, changes in technology can impact competitive advantage by incrementally changing the activities themselves or by making possible new configurations of the value chain. Various technologies are used in both primary value activities and support activities:

Inbound Logistics Technologies
o Transportation
o Material handling
o Material storage
o Communications
o Testing
o Information systems

Operations Technologies
o Process
o Materials
o Machine tools
o Material handling
o Packaging
o Maintenance
o Testing
o Building design & operation
o Information systems

Outbound Logistics Technologies
o Transportation
o Material handling
o Packaging
o Communications
o Information systems

Marketing & Sales Technologies
o Media
o Audio/video
o Communications
o Information systems

Service Technologies
o Testing
o Communications
o Information systems

Note that many of these technologies are used across the value chain. Information systems, for example, are seen in every activity. Support activities rely on similar technologies, along with technologies related to training, computer-aided design, and software development.

To the extent that these technologies affect cost drivers or uniqueness, they can lead to a competitive advantage.

Linkages Between Value Chain Activities

Value chain activities are not isolated from one another. Rather, one value chain activity often affects the cost or performance of other ones. Linkages may exist between primary activities and also between primary and support activities. Consider the case in which the design of a product is changed in order to reduce manufacturing costs. Suppose that inadvertently the new product design results in increased service costs; the cost reduction could be less than anticipated, and even worse, there could be a net cost increase. Sometimes, however, the firm may be able to reduce cost in one activity and consequently enjoy a cost reduction in another, such as when a design change simultaneously reduces manufacturing costs and improves reliability so that the service costs also are reduced. Through such improvements the firm has the potential to develop a competitive advantage.

Analyzing Business Unit Interrelationships

Interrelationships among business units form the basis for a horizontal strategy. Such business unit interrelationships can be identified by a value chain analysis. Tangible interrelationships offer direct opportunities to create synergy among business units. For example, if multiple business units require a particular raw material, the procurement of that material can be shared among them. Such sharing of the procurement activity can result in cost reduction. Interrelationships may exist simultaneously in multiple value chain activities. Unfortunately, attempts to achieve synergy from the interrelationships among different business units often fall short of expectations due to unanticipated drawbacks. The cost of coordination, the cost of reduced flexibility, and organizational practicalities should be analyzed when devising a strategy to reap the benefits of the synergies.
Outsourcing Value Chain Activities

A firm may specialize in one or more value chain activities and outsource the rest. The extent to which a firm performs upstream and downstream activities is described by its degree of vertical integration. A thorough value chain analysis can illuminate the business system to facilitate outsourcing decisions. To decide which activities to outsource, managers must understand the firm's strengths and weaknesses in each activity, both in terms of cost and ability to differentiate. Managers may consider the following when selecting activities to outsource:

- Whether the activity can be performed cheaper or better by suppliers.
- Whether the activity is one of the firm's core competencies from which stems a cost advantage or product differentiation.
- The risk of performing the activity in-house. If the activity relies on fast-changing technology or the product is sold in a rapidly changing market, it may be advantageous to outsource the activity in order to maintain flexibility and avoid the risk of investing in specialized assets.
- Whether the outsourcing of an activity can result in business process improvements such as reduced lead time, higher flexibility, reduced inventory, etc.
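One hypothetical way to make such a checklist operational is a simple weighted scoring model: rate a candidate activity against each criterion and take a weighted average. The criteria below mirror the bullets above, but the weights and scores are invented for illustration and are not part of the framework described here.

```python
# A hypothetical scoring sketch of the outsourcing checklist.
# Scores run 1 (keep in-house) to 5 (outsource); weights and
# scores are invented, not part of the framework.

def outsourcing_score(scores, weights):
    """Weighted-average outsourcing score for one activity."""
    if scores.keys() != weights.keys():
        raise ValueError("scores and weights must cover the same criteria")
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total

weights = {
    "supplier_cheaper_or_better": 0.3,  # can suppliers do it cheaper/better?
    "outside_core_competency":    0.3,  # not a source of advantage?
    "in_house_risk":              0.2,  # fast-changing tech or market?
    "process_improvement":        0.2,  # lead time, flexibility, inventory
}
activity = {
    "supplier_cheaper_or_better": 4,
    "outside_core_competency":    5,
    "in_house_risk":              3,
    "process_improvement":        4,
}
score = outsourcing_score(activity, weights)
print(round(score, 2))  # higher scores lean toward outsourcing
```

Such a score is only a starting point for discussion; the qualitative judgments behind each rating matter more than the arithmetic.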

The Value Chain System

A firm's value chain is part of a larger system that includes the value chains of upstream suppliers and downstream channels and customers. Porter calls this series of value chains the value system, shown conceptually below:

The Value System: Supplier Value Chain --> Firm Value Chain --> Channel Value Chain --> Buyer Value Chain

Linkages exist not only in a firm's value chain, but also between value chains. While a firm exhibiting a high degree of vertical integration is poised to better coordinate upstream and downstream activities, a firm having a lesser degree of vertical integration nonetheless can forge agreements with suppliers and channel partners to achieve better coordination. For example, an auto manufacturer may have its suppliers set up facilities in close proximity in order to minimize transport costs and reduce parts inventories. Clearly, a firm's success in developing and sustaining a competitive advantage depends not only on its own value chain, but on its ability to manage the value system of which it is a part.

The BCG Growth-Share Matrix

The BCG Growth-Share Matrix is a portfolio planning model developed by Bruce Henderson of the Boston Consulting Group in the early 1970's. It is based on the observation that a company's business units can be classified into four categories based on combinations of market growth and market share relative to the largest competitor, hence the name "growth-share". Market growth serves as a proxy for industry attractiveness, and relative market share serves as a proxy for competitive advantage. The growth-share matrix thus maps the business unit positions within these two important determinants of profitability.

BCG Growth-Share Matrix

This framework assumes that an increase in relative market share will result in an increase in the generation of cash. This assumption often is true because of the experience curve; increased relative market share implies that the firm is moving forward on the experience curve relative to its competitors, thus developing a cost advantage. A second assumption is that a growing market requires investment in assets to increase capacity and therefore results in the consumption of cash. Thus the position of a business on the growth-share matrix provides an indication of its cash generation and its cash consumption. Henderson reasoned that the cash required by rapidly growing business units could be obtained from the firm's other business units that were at a more mature stage and generating significant cash. By investing to become the market share leader in a rapidly growing market, the business unit could move along the experience curve and develop a cost advantage. From this reasoning, the BCG Growth-Share Matrix was born. The four categories are:

- Dogs - Dogs have low market share and a low growth rate and thus neither generate nor consume a large amount of cash. However, dogs are cash traps because of the money tied up in a business that has little potential. Such businesses are candidates for divestiture.
- Question marks - Question marks are growing rapidly and thus consume large amounts of cash, but because they have low market shares they do not generate much cash. The result is a large net cash consumption. A question mark (also known as a "problem child") has the potential to gain market share and become a star, and eventually a cash cow when the market growth slows. If the question mark does not succeed in becoming the market leader, then after perhaps years of cash consumption it will degenerate into a dog when the market growth declines. Question marks must be analyzed carefully in order to determine whether they are worth the investment required to grow market share.
- Stars - Stars generate large amounts of cash because of their strong relative market share, but also consume large amounts of cash because of their high growth rate; therefore the cash flows in each direction approximately net out. If a star can maintain its large market share, it will become a cash cow when the market growth rate declines. The portfolio of a diversified company always should have stars that will become the next cash cows and ensure future cash generation.
- Cash cows - As leaders in a mature market, cash cows exhibit a return on assets that is greater than the market growth rate, and thus generate more cash than they consume. Such business units should be "milked", extracting the profits and investing as little cash as possible. Cash cows provide the cash required to turn question marks into market leaders, to cover the administrative costs of the company, to fund research and development, to service the corporate debt, and to pay dividends to shareholders.
Because the cash cow generates a relatively stable cash flow, its value can be determined with reasonable accuracy by calculating the present value of its cash stream using a discounted cash flow analysis.
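As a sketch, the quadrant rules and the cash-cow valuation described above might be expressed as follows. The cutoffs used (10% market growth, relative share of 1.0) are common conventions rather than values specified here, the valuation treats the cash stream as a growing perpetuity, and all portfolio figures are invented.

```python
# A sketch of the BCG classification rules and a simple DCF valuation
# of a cash cow's stable cash stream. Cutoffs and figures are
# illustrative assumptions, not prescribed by the framework.

def bcg_category(market_growth, relative_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Place a business unit in one of the four growth-share quadrants."""
    high_growth = market_growth >= growth_cutoff
    high_share = relative_share >= share_cutoff
    if high_growth:
        return "Star" if high_share else "Question mark"
    return "Cash cow" if high_share else "Dog"

def cash_cow_value(annual_cash_flow, discount_rate, growth_rate=0.0):
    """Present value of a stable cash stream as a growing perpetuity."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return annual_cash_flow / (discount_rate - growth_rate)

# Hypothetical portfolio: (market growth, relative market share)
portfolio = {
    "Unit A": (0.15, 1.8),
    "Unit B": (0.03, 2.5),
    "Unit C": (0.20, 0.4),
    "Unit D": (0.02, 0.3),
}
for name, (growth, share) in portfolio.items():
    print(f"{name}: {bcg_category(growth, share)}")

# $10M/year cash cow, 10% discount rate, 2% growth: roughly $125M
print(round(cash_cow_value(10_000_000, 0.10, 0.02)))
```

In practice the market definition chosen for "relative share" can flip a unit between quadrants, a limitation discussed below.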

Under the growth-share matrix model, as an industry matures and its growth rate declines, a business unit will become either a cash cow or a dog, determined solely by whether it had become the market leader during the period of high growth. While originally developed as a model for resource allocation among the various business units in a corporation, the growth-share matrix also can be used for resource allocation among products within a single business unit. Its simplicity is its strength - the relative positions of the firm's entire business portfolio can be displayed in a single diagram.

Limitations

The growth-share matrix once was used widely, but has since faded from popularity as more comprehensive models have been developed. Some of its weaknesses are:

- Market growth rate is only one factor in industry attractiveness, and relative market share is only one factor in competitive advantage. The growth-share matrix overlooks many other factors in these two important determinants of profitability.
- The framework assumes that each business unit is independent of the others. In some cases, a business unit that is a "dog" may be helping other business units gain a competitive advantage.
- The matrix depends heavily upon the breadth of the definition of the market. A business unit may dominate its small niche, but have very low market share in the overall industry. In such a case, the definition of the market can make the difference between a dog and a cash cow.

While its importance has diminished, the BCG matrix still can serve as a simple tool for viewing a corporation's business portfolio at a glance, and may serve as a starting point for discussing resource allocation among strategic business units.

Scenario Planning

Traditional forecasting techniques often fail to predict significant changes in the firm's external environment, especially when the change is rapid and turbulent or when information is limited. Consequently, important opportunities and serious threats may be overlooked and the very survival of the firm may be at stake. Scenario planning is a tool specifically designed to deal with major, uncertain shifts in the firm's environment.

Scenario planning has its roots in military strategy studies. Herman Kahn was an early founder of scenario-based planning in his work related to the possible scenarios associated with thermonuclear war ("thinking the unthinkable"). Scenario planning was transformed into a business tool in the late 1960's and early 1970's, most notably by Pierre Wack, who developed the scenario planning system used by Royal Dutch/Shell. As a result of these efforts, Shell was prepared to deal with the oil shock that occurred in late 1973 and greatly improved its competitive position in the industry during the oil crisis and the oil glut that followed.

Scenario planning is not about predicting the future. Rather, it attempts to describe what is possible. The result of a scenario analysis is a group of distinct futures, all of which are plausible. The challenge then is how to deal with each of the possible scenarios.

Scenario planning often takes place in a workshop setting of high-level executives, technical experts, and industry leaders. The idea is to bring together a wide range of perspectives in order to consider scenarios other than the widely accepted forecasts. The scenario development process should include interviews with managers who later will formulate and implement strategies based on the scenario analysis - without their input the scenarios may leave out important details and fail to lead to action if they do not address issues important to those who will implement the strategy. Some of the benefits of scenario planning include:

- Managers are forced to break out of their standard world view, exposing blind spots that might otherwise be overlooked in the generally accepted forecast.
- Decision-makers are better able to recognize a scenario in its early stages, should it actually be the one that unfolds.
- Managers are better able to understand the source of disagreements that often occur when they are envisioning different scenarios without realizing it.

The Scenario Planning Process

The following outlines the sequence of actions that may constitute the process of scenario planning.

1. Specify the scope of the planning and its time frame.

2. Develop a clear understanding of the present situation that will serve as the common departure point for each of the scenarios.

3. Identify predetermined elements that are virtually certain to occur and that will be driving forces.

4. Identify the critical uncertainties in the environmental variables. If the scope of the analysis is wide, these may be in the macro-environment, for example, political, economic, social, and technological factors (as in PEST).

5. Identify the more important drivers. One technique for doing so is as follows. Assign each environmental variable two numerical ratings: one rating for its range of variation and another for the strength of its impact on the firm. Multiply these ratings together to arrive at a number that specifies the significance of each environmental factor. For example, consider the extreme case in which a variable had a very large range such that it might be rated a 10 on a scale of 1 to 10 for variation, but in which the variable had very little impact on the firm so that the strength of impact rating would be a 1. Multiplying the two together would yield 10 out of a possible 100, revealing that the variable is not highly critical. After performing this calculation for all of the variables, identify the two having the highest significance.

6. Consider a few possible values for each variable, ranging between extremes while avoiding highly improbable values.

7. To analyze the interaction between the variables, develop a matrix of scenarios using the two most important variables and their possible values. Each cell in the matrix then represents a single scenario. For easy reference in later discussion it is worthwhile to give each scenario a descriptive name.
If there are more than two critical factors, a multidimensional matrix can be created to handle them, but such a matrix would be difficult to visualize beyond two or three dimensions. Alternatively, factors can be taken in pairs to generate several two-dimensional matrices. A scenario matrix might look something like this:

Scenario Matrix

                            VARIABLE 1
                      Outcome 1A    Outcome 1B
VARIABLE 2
    Outcome 2A        Scenario 1    Scenario 2
    Outcome 2B        Scenario 3    Scenario 4
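The driver-significance arithmetic of step 5 and the matrix construction of step 7 can be sketched in a few lines of code. The environmental variables and their ratings below are invented for illustration; note that the "Consumer tastes" entry reproduces the step-5 example of a wide-ranging but low-impact variable.

```python
# A sketch of steps 5-7: score each environmental variable by
# variation x impact, keep the two most significant, and cross their
# outcomes into a four-cell scenario matrix. Names and ratings are
# invented for illustration.

variables = {
    # name: (range of variation 1-10, strength of impact 1-10)
    "Oil price":       (9, 8),
    "Regulation":      (7, 9),
    "Interest rates":  (6, 7),
    "Consumer tastes": (10, 1),  # wide range but little impact (step 5 example)
}

# Step 5: significance = variation rating x impact rating
significance = {name: var * imp for name, (var, imp) in variables.items()}
top_two = sorted(significance, key=significance.get, reverse=True)[:2]

# Step 6: a few possible values per variable (two extremes here)
outcomes = {name: (f"{name} low", f"{name} high") for name in top_two}

# Step 7: each combination of outcomes is one scenario cell
scenarios = [(o1, o2)
             for o2 in outcomes[top_two[1]]
             for o1 in outcomes[top_two[0]]]

for i, (o1, o2) in enumerate(scenarios, start=1):
    print(f"Scenario {i}: {o1} / {o2}")
```

With these ratings, "Oil price" (72) and "Regulation" (63) emerge as the two critical drivers, and "Consumer tastes" scores only 10 of a possible 100, matching the worked example in step 5.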

One of these scenarios most likely will reflect the mainstream views of the future. The other scenarios will shed light on what else is possible.

8. At this point there is not any detail associated with these "first-generation" scenarios. They are simply high-level descriptions of a combination of important environmental variables. Specifics can be generated by writing a story to develop each scenario starting from the present. The story should be internally consistent for the selected scenario so that it describes that particular future as realistically as possible. Experts in specific fields may be called upon to develop each story, possibly with the use of computer simulation models. Game theory may be used to gain an understanding of how each actor pursuing its own self-interest might respond in the scenario. The goal of the stories is to transform the analysis from a simple matrix of the obvious range of environmental factors into decision scenarios useful for strategic planning.

9. Quantify the impact of each scenario on the firm, and formulate appropriate strategies. An additional step might be to assign a probability to each scenario. Opinions differ on whether one should attempt to assign probabilities when there may be little basis for determining them.

Business unit managers may not take scenarios seriously if they deviate too much from their preconceived view of the world. Many will prefer to rely on forecasts and their judgement, even if they realize that they may miss important changes in the firm's environment. To overcome this reluctance to broaden their thinking, it is useful to create "phantom" scenarios that show the adverse results if the firm were to base its decisions on the mainstream view while the reality turned out to be one of the other scenarios.

Recommended Reading

Wack, Pierre. "Scenarios: Uncharted Waters Ahead." Harvard Business Review 63, no. 5 (1985).

Turnaround Management

Times of corporate distress present special strategic management challenges. In such situations, a firm may be in bankruptcy or nearing bankruptcy. Often turnaround consultants are brought into the company to devise and execute a plan of corporate renewal, assuming that the firm has enough potential to make it worth saving. Before a viable turnaround strategy can be formulated, one must identify the root cause or causes of the crisis. Frequently encountered causes include:

- Revenue downturn caused by a weak economy
- Overly optimistic sales projections
- Poor strategic choices
- Poor execution of a good strategy
- High operating costs
- High fixed costs that decrease flexibility
- Insufficient resources
- Unsuccessful R&D projects
- Highly successful competitor
- Excessive debt burden
- Inadequate financial controls

While each case is unique, the turnaround process frequently involves the following stages:
1. Management change - consultants may be called in to manage the turnaround of the firm.

2. Situation analysis - a situation analysis is performed to evaluate the prospects of survival. Assuming the firm is worth turning around, depending on the root causes of the distress one or more of the following turnaround strategies may be selected and presented to the board:
o Change of top management
o Divestment of certain assets
o Reformulation of strategy
o Revenue increase
o Cost reduction
o Strategic acquisitions

3. Emergency action plan - achieve positive cash flow as soon as possible by eliminating departments, reducing staff, etc.

4. Business restructuring - once positive cash flow is achieved, the strategic plan is implemented, improving continuing operations, adjusting the product mix and repositioning products if necessary. The management team begins to focus on achieving sustained profitability.

5. Return to normalcy - the company becomes profitable and the changes are internalized. Employees regain confidence in the firm and emphasis is placed on growing the restructured business while maintaining a strong balance sheet.

Abandonment Strategy

In some cases the prospects of the firm may be too bleak to continue as an ongoing operation and an exit strategy may be appropriate. Different strategies may be pursued that vary in their immediacy. An immediate abandonment strategy exits the market by immediately liquidating or selling to another firm. In other situations, a harvest strategy is appropriate, by which the firm plays the end-game, maximizing near-term cash flows at the expense of market position.