
Table of Contents

Balanced scorecard
Behavioural sciences
Competitive advantage
Competitiveness
Competitor analysis
Core competency
Critical Success Factors
Institutional memory
Job satisfaction
Key Performance Indicators (KPI)
Learning organization
Market concentration (Industry concentration, Seller concentration)
Market share
Marketing mix
Metrics
Organizational learning
Operations management
Operational performance management
Organizational climate
Organizational commitment
Organizational communication
Organizational culture, or corporate culture
Organization development
Organizational Ecology (also Organizational Demography and the Population Ecology of Organizations)
Organizational effectiveness
Organizational Engineering
Organizational Ethics
Organizational learning
Organizational structure
Performance management (Business performance management (BPM))
Performance problem
Price elasticity of demand (PED)
Reengineering
Strategic Synergy
Synergy
Trend analysis
Trend estimation
Vertical integration

Balanced scorecard
In 1992, Robert S. Kaplan and David P. Norton introduced the balanced scorecard, a concept for measuring whether the activities of a company are meeting its objectives in terms of vision and strategy. By focusing not only on financial outcomes but also on human issues, the balanced scorecard provides a more comprehensive view of a business, which in turn helps organizations act in their best long-term interests. This strategic management system helps managers focus on performance metrics while balancing financial objectives with customer, process and employee perspectives. Measures are often indicators of future performance. Since the original concept was introduced, balanced scorecards have become a fertile field of theory and research, and many practitioners have diverged from the original Kaplan & Norton articles. Kaplan & Norton themselves revisited the scorecard with the benefit of a decade's experience since the original article.

Implementing the scorecard typically includes four processes:
1. Translating the vision into operational goals;
2. Communicating the vision and linking it to individual performance;
3. Business planning;
4. Feedback and learning, and adjusting the strategy accordingly.

A comprehensive view of business performance


Balanced Scorecard is simply a concise report featuring a set of measures that relate to the performance of an organization. By associating each measure with one or more expected values (targets), managers of the organization can be alerted when organizational performance is failing to meet their expectations. The challenge with Balanced Scorecard is, and has been since it was popularized by an article published in the Harvard Business Review in 1992, deciding which measures to choose.

From the outset, the Balanced Scorecard has been promoted as a tool to help organizations monitor the implementation of organizational strategy.

The earliest Balanced Scorecards comprised simple tables broken into four sections - typically these 'perspectives' were labeled "Financial", "Customer", "Internal Business Processes", and "Learning & Growth". Designing the Balanced Scorecard simply required picking five or six good measures for each perspective. Many writers have since suggested alternative headings for these perspectives, and also suggested using either additional or fewer perspectives: these suggestions were triggered by a recognition that different but equivalent headings will yield alternative sets of measures.

The major design challenge faced with this type of Balanced Scorecard is justifying the choice of measures made - "of all the measures you could have chosen, why did you choose these...?" is a common question, and one that is hard to answer using this type of design process. If users are not confident that the measures within the Balanced Scorecard are well chosen, they will have less confidence in the information it provides. Although less common, these early-style Balanced Scorecards are still designed and used today. Because they are hard to design in a way that builds confidence that they are well designed, many are abandoned soon after completion.

In the mid-1990s an improved design method emerged, in which selection of measures was based on a set of 'strategic objectives' plotted on a 'strategic linkage model' or 'strategy map'. With this modified approach, the strategic objectives are typically distributed across a similar set of 'perspectives' as in the earlier designs, but the design question becomes slightly more abstract. Managers have to identify the five or six goals they have within each of the perspectives, and then demonstrate some inter-linking between them by plotting causal links on the diagram.
Having reached some consensus about the objectives and how they inter-relate, the Balanced Scorecard's measures are chosen by picking suitable measures for each objective. This type of approach provides greater contextual justification for the measures chosen, and is generally easier for managers to work through. This style of Balanced Scorecard has been the most common type for the last ten years or so. Several design issues still remain with this modified approach, but it has been much more successful than the design approach it supersedes.

Since the late 1990s, various improved Balanced Scorecard design methods have emerged, examples being the Performance Prism, Results Based Management and the Third Generation Balanced Scorecard. These more advanced design methods seek to solve some of the remaining design issues - in particular, issues relating to the design of sets of Balanced Scorecards for use across an organization, and to setting targets for the measures selected.

Many books and articles on Balanced Scorecard topics confuse the design process elements and the Balanced Scorecard itself; in particular, it is common for people to refer to a 'strategic linkage model' or 'strategy map' as being a Balanced Scorecard. The Balanced Scorecard is a performance management tool: although it helps focus managers' attention on strategic issues and on managing the implementation of strategy, it is important to remember that the Balanced Scorecard itself has no role in the formation of strategy. It can comfortably coexist with strategic planning systems and other tools.
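The strategy-map style of design described above can be sketched as a small data structure. This is only an illustration: the objectives, measures, and causal links below are invented, not taken from any published scorecard.

```python
# A 'strategy map' as a tiny directed graph: strategic objectives are nodes
# grouped by perspective, causal links are edges, and measures are chosen
# per objective. All names below are hypothetical.

class Objective:
    def __init__(self, name, perspective, measures=None):
        self.name = name
        self.perspective = perspective      # e.g. "Financial", "Customer"
        self.measures = measures or []      # measures picked for this objective
        self.drives = []                    # causal links to other objectives

def link(cause, effect):
    """Plot a causal link: 'cause' is believed to drive 'effect'."""
    cause.drives.append(effect)

# A minimal causal chain: Learning & Growth -> Processes -> Customer -> Financial
training  = Objective("Improve staff skills", "Learning & Growth", ["training hours"])
quality   = Objective("Reduce defects", "Internal Business Processes", ["defect rate"])
retention = Objective("Retain customers", "Customer", ["customer retention %"])
revenue   = Objective("Grow revenue", "Financial", ["revenue growth %"])

link(training, quality)
link(quality, retention)
link(retention, revenue)

def chain(obj, depth=0):
    """Walk the causal links, showing the assumed chain behind each measure."""
    lines = ["  " * depth + f"{obj.name} [{', '.join(obj.measures)}]"]
    for nxt in obj.drives:
        lines.extend(chain(nxt, depth + 1))
    return lines

print("\n".join(chain(training)))
```

Tracing the chain makes explicit why each measure was chosen, which is exactly the contextual justification the strategy-map approach adds over simply picking measures per perspective.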

Actual usage of the balanced scorecard


Kaplan and Norton found that companies are using the scorecard to:
- Clarify and update budgets
- Identify and align strategic initiatives
- Conduct periodic performance reviews to learn about and improve strategy.

In 1997, Kurtzman found that 64 percent of the companies questioned were measuring performance from a number of perspectives in a similar way to the balanced scorecard. Balanced scorecards have been implemented by government agencies, military units, corporate units and corporations as a whole, nonprofits, and schools; many sample scorecards can be found via Web searches, though adapting one organization's scorecard to another is generally not advised by theorists, who believe that much of the benefit of the scorecard comes from the implementation method.

Comparison to Applied Information Economics


A criticism of the balanced scorecard is that the scores are not based on any proven economic or financial theory and have no basis in the decision sciences. The process is entirely subjective and makes no provision to assess quantities like risk and economic value in a way that is actuarially or economically well-founded. The balanced scorecard does not provide a bottom-line score or a unified view with clear recommendations; it is simply a list of metrics [1]. Positive responses from users of the balanced scorecard may merely be a type of placebo effect. Furthermore, studies showed that many companies utilising the process were not convinced of its benefits after a period of ten years. The use of the process has been linked to the Enron disaster. There are no empirical studies linking the use of the balanced scorecard to better decision making or improved financial performance of companies.

Applied Information Economics (AIE) has been researched as an alternative to balanced scorecards. In 2000, the Federal CIO Council commissioned a study [2] to compare the two methods by funding side-by-side projects in two different agencies: the Dept. of Veterans Affairs used AIE and the US Dept. of Agriculture applied the balanced scorecard. The resulting report found that while AIE was much more sophisticated, it actually took slightly less time to use. AIE was also more likely to generate findings that were newsworthy to the organization, while the users of the balanced scorecard felt it simply documented their inputs and offered no other particular insight. However, the balanced scorecard is still much more widely used than AIE.

Key performance indicators


For each perspective of the balanced scorecard there are a number of KPIs, for example:

Financial
- Cash flow
- ROI
- Financial result
- Return on capital employed
- Return on equity

Customer
- Delivery performance to customer - by date
- Delivery performance to customer - by quantity
- Customer satisfaction rate
- Customer retention

Internal Business Processes
- Number of activities
- Opportunity success rate

Learning & Growth
- Investment rate
- Illness rate
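As a rough illustration of how such KPIs can drive the target-based alerting mentioned earlier, the sketch below groups measures by perspective and flags those that miss their targets. All measure names and figures are invented for the example.

```python
# A scorecard as measures grouped by perspective, each paired with a target.
# The (actual, target) figures below are hypothetical.

scorecard = {
    "Financial": {"cash flow": (1.2, 1.0),
                  "return on equity": (0.08, 0.10)},
    "Customer": {"customer retention": (0.91, 0.90),
                 "on-time delivery": (0.85, 0.95)},
    "Internal Business Processes": {"opportunity success rate": (0.30, 0.25)},
    "Learning & Growth": {"investment rate": (0.04, 0.05)},
}

def alerts(card):
    """Return (perspective, measure) pairs where performance misses the target."""
    return [(p, m) for p, measures in card.items()
            for m, (actual, target) in measures.items()
            if actual < target]

for perspective, measure in alerts(scorecard):
    print(f"ALERT: {perspective} / {measure} below target")
```

This mirrors the scorecard's core mechanic: associating each measure with an expected value so that managers are alerted when performance falls short.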

Behavioural sciences
Behavioural sciences (or behavioral sciences) is a term that encompasses all the disciplines that explore the activities of and interactions among organisms in the natural world. It involves the systematic analysis and investigation of human and animal behaviour through controlled and naturalistic experimental observations and rigorous formulations (E. D. Klemke, R. Hollinger, and A. D. Kline, eds., 1980). However, many academic departments of psychology have recently adopted the term to refer to groups of people who study behavioural questions scientifically, as distinguished from the study of more general psychology topics.

Difference between behavioural sciences and social sciences


The term behavioural sciences is often confused with the term social sciences. Though these two broad areas are interrelated and study systemic processes of behaviour, they differ in their level of scientific analysis of the various dimensions of behaviour. The behavioural sciences essentially investigate the decision processes and communication strategies within and between organisms in a social system. This involves fields like psychology and social neuroscience, among others. In contrast, the social sciences study the structural-level processes of a social system and their impact on social processes and social organization. They typically include fields like sociology, economics, history, public health, anthropology, and political science (as per E. D. Klemke, R. Hollinger, and A. D. Kline, eds., 1988). Some universities, such as SUNY-Stony Brook, consider political science and related fields to be behavioural sciences.

Categories of behavioural sciences


The behavioural sciences include two broad categories: neural-decision sciences and social-communication sciences. The decision sciences involve those disciplines primarily dealing with the decision processes and individual functioning used in the survival of an organism in a social environment. These include psychology, cognitive organization theory, psychobiology, management science, operations research (not to be confused with business administration) and social neuroscience. The communication sciences, on the other hand, include those fields which study the communication strategies used by organisms and their dynamics between organisms in an environment. These include anthropology, organizational behaviour, organization studies, sociology and social networks.

Behavioural sciences as integrative sciences


Neural-decision sciences form the bridge between the behavioural sciences and the cognitive and natural sciences by creating theories that account for the interaction of bio-physical systems and cognitive processes in the decision making of the organism. Social-communication sciences form the link between the behavioural sciences and the social sciences through the interaction of individual cognitive and communication strategies with social-structural processes. Thus the behavioural sciences lie at the crossroads between the natural sciences and the social sciences, linking broad areas of scientific exploration, as the following outline shows:

Science
- Natural sciences: physical sciences, chemical sciences, biological sciences, cognitive sciences
- Behavioural sciences
  - Neural (decision) sciences: psychology (including social psychology), cognitive organization theory and consumer psychology, management science, psychobiology, operations research, social neuroscience, ethology
  - Social (communication) sciences: anthropology, organizational behaviour, organization studies and social networks, economics, memetics, organizational ecology
- Social sciences: sociology, economics, political science, economic sociology

Competitive advantage
Competitive advantage (CA) is a position that a firm occupies in its competitive landscape. Michael Porter posits that a competitive advantage, sustainable or not, exists when a company makes economic rents, that is, its earnings exceed its costs (including cost of capital). That means that normal competitive pressures are not able to drive down the firm's earnings to the point where they cover all costs and just provide the minimum sufficient additional return to keep capital invested. Most forms of competitive advantage cannot be sustained for any length of time, because the promise of economic rents drives competitors to duplicate the competitive advantage held by any one firm.

A firm possesses a sustainable competitive advantage (SCA) when it has value-creating processes and positions that cannot be duplicated or imitated by other firms and that lead to the production of above-normal rents. An SCA differs from a CA in that it provides a long-term advantage that is not easily replicated. But these above-normal rents can attract new entrants who drive down economic rents. A CA is a position a firm attains that leads to above-normal rents or superior financial performance; the processes and positions that engender such a position are not necessarily non-duplicable or inimitable. Analysis of the factors of profitability is the subject of numerous theories of strategy, including the five forces model pioneered by Michael Porter of the Harvard Business School.

In marketing and strategic management, sustainable competitive advantage is an advantage that one firm has relative to competing firms. The source of the advantage can be something the company does that is distinctive and difficult to replicate, also known as a core competency -- for example, Procter & Gamble's ability to derive superior consumer insights and implement them in managing its brand portfolio. It can also be an asset such as a brand (e.g. Coca-Cola) or a patent (e.g. Viagra). It can also simply be a result of the industry's cost structure -- for example, the large fixed costs that tend to create natural monopolies in utility industries. To be sustainable, the advantage must be:
1. distinctive, and
2. proprietary.
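Porter's rent criterion can be shown with a small worked calculation. The figures below are invented for illustration: a firm earns economic rent only when its earnings exceed all costs, including a charge for the capital invested.

```python
# Hypothetical figures illustrating the economic-rent test for
# competitive advantage: earnings must exceed all costs,
# including the cost of capital.
revenue          = 10_000_000.0
operating_costs  =  8_500_000.0
capital_invested =  5_000_000.0
cost_of_capital  = 0.10          # minimum required return on capital

accounting_profit = revenue - operating_costs            # profit before capital charge
capital_charge    = capital_invested * cost_of_capital   # return needed to keep capital invested
economic_rent     = accounting_profit - capital_charge

# Positive economic rent signals a competitive advantage in Porter's sense;
# zero rent means earnings just cover all costs.
print(economic_rent)
```

On these numbers the firm earns 1,500,000 of accounting profit but only 1,000,000 of economic rent once the 500,000 capital charge is deducted; competitive pressure, on Porter's account, works to push that rent toward zero.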

In 2006, Jaynie L. Smith authored Creating Competitive Advantage (Doubleday). This book outlines how companies fail to understand their own existing competitive advantages and use them in sales and marketing. She provides a framework for how companies can evaluate their own operations and develop competitive advantage/competitive positioning statements to better hone their sales and marketing messages. Competitive advantage statements help distinguish companies by highlighting what they offer to the customer in tangible terms and concepts. The next step is to test those CA statements through independent market research. This allows a company to understand its customers' hierarchy of buying criteria in an objective, independent context. From there, companies can tailor their CA statements to speak directly to the buying interests of the customer.

Competitive advantage: a company is said to have a competitive advantage over its rivals when its profitability is greater than the average profitability of all other companies competing for the same set of customers.

Sustainable competitive advantage: a company has a sustained competitive advantage when its strategies enable it to maintain above-average profitability for a number of years.

Competitiveness
Competitiveness is a comparative concept of the ability and performance of a firm, sub-sector or country to sell and supply goods and/or services in a given market. The usefulness of the concept, particularly in the context of national competitiveness, is vigorously disputed by economists such as Paul Krugman [1]. The term may also be applied to markets, where it is used to refer to the extent to which the market structure may be regarded as perfectly competitive. This usage has nothing to do with the extent to which individual firms are "competitive".

Firm competitiveness
Within capitalist economic systems, the drive of enterprises is to maintain and improve their own competitiveness. In practice, this pertains to business sectors.

National Competitiveness
The term is also used to refer in a broader sense to the economic competitiveness of countries, regions or cities. Recently, countries have been increasingly looking at their competitiveness on global markets. Ireland (1997), Greece (2003), Croatia (2004), Bahrain (2005), the Philippines (2006), Guyana and the Dominican Republic are just some examples of countries that have advisory bodies or special government agencies that tackle competitiveness issues. Other nations, such as Dubai, are considering the establishment of such a body. National competitiveness is said to be particularly important for small open economies, which rely on trade, and typically foreign direct investment, to provide the scale necessary for productivity increases to drive increases in living standards. The Irish National Competitiveness Council uses a Competitiveness Pyramid structure to simplify the factors that affect national competitiveness. It distinguishes in particular between policy inputs in relation to the business environment, the physical infrastructure and the knowledge infrastructure, and the essential conditions of competitiveness that good policy inputs create, including business performance metrics, productivity, labour supply and prices/costs for business. International comparisons of national competitiveness are conducted by the World Economic Forum, in its Global Competitiveness Report, and the Institute for Management Development, in its World Competitiveness Yearbook.

Criticism
Krugman argues that "As a practical matter, however, the doctrine of 'competitiveness' is flatly wrong. The world's leading nations are not, to any important degree, in economic competition with each other." As Krugman notes, national economic welfare is determined primarily by productivity in both traded and non-traded sectors of the economy [2].

Competitor analysis
Competitor analysis in marketing and strategic management is an assessment of the strengths and weaknesses of current and potential competitors. Created by Michael Porter, competitor analysis focuses on four key aspects: the competitor's objectives, the competitor's assumptions, the competitor's strategy, and the competitor's resources and capabilities. In 1989, Garsombke created an international competitor analysis framework, adding components relating to the understanding of the international marketplace.

Core competency
A core competency is something that a firm can do well and that meets the following three conditions specified by Hamel and Prahalad (1990):
1. It provides customer benefits.
2. It is hard for competitors to imitate.
3. It can be leveraged widely to many products and markets.

A core competency can take various forms, including technical/subject-matter know-how, a reliable process, and/or close relationships with customers and suppliers (Mascarenhas et al. 1998). It may also include product development or culture, such as employee dedication. Modern business theories suggest that most activities that are not part of a company's core competency should be outsourced. If a core competency yields a long-term advantage to the company, it is said to be a sustainable competitive advantage.

Development of the Concept


The concept of core competencies was developed in the management field. C.K. Prahalad and Gary Hamel introduced the concept in a 1990 Harvard Business Review article. They wrote that a core competency is "an area of specialized expertise that is the result of harmonizing complex streams of technology and work activity." As an example they gave Honda's expertise in engines. Honda was able to exploit this core competency to develop a variety of quality products, from lawn mowers and snow blowers to trucks and automobiles.

To take another example from the automotive industry, it has been claimed that Volvo's core competency is safety. This, however, is perhaps the end result of their competency in terms of customer benefit; their core competency might be more about their ability to source and design high-protection components, or to research and respond to market demands concerning safety.

Ever since Prahalad and Hamel introduced the term, many researchers have tried to highlight and further illuminate the meaning of core competency. According to D. Leonard-Barton, "Capabilities are considered core if they differentiate a company strategically." On the other hand, Galunic and Rodan (1998) argue that "a core competency differentiates not only between firms but also inside a firm it differentiates amongst several competencies. In other words, a core competency guides a firm recombining its competencies in response to demands from the environment."

Individual versus Core Competencies

It is important to distinguish between individual competencies or capabilities and core competencies. Individual capabilities stand alone and are generally considered in isolation. Gallon, Stillman, and Coates (1995) made it explicit that core competencies are more than the traits of individuals. They defined core competencies as "aggregates of capabilities, where synergy is created that has sustainable value and broad applicability." That synergy needs to be sustained in the face of potential competition and, as in the case of engines, must not be specific to one product or market. So according to this definition, core competencies are harmonized, intentional constructions.

Coyne, Hall, and Clifford (1997) proposed that "a core competence is a combination of complementary skills and knowledge bases embedded in a group or team that results in the ability to execute one or more critical processes to a world class standard." Two ideas are especially important here: the skills or knowledge must be complementary, and taken together they should make it possible to provide a superior product.

For example, Black and Decker's core technological competencies pertain to 200-to-600-W electric motors, and this motor is their core product. All of their end products are modifications of this basic technology (with the exception of their work benches, flashlights, battery charging systems, toaster ovens, and coffee percolators). They produce products for three markets:
1. The home workshop market, where small electric motors are used to produce drills, circular saws, sanders, routers, rotary tools, polishers, and drivers.
2. The home cleaning and maintenance market, where small electric motors are used to produce dust busters, etc.
3. The kitchen appliance market, where small electric motors are used to produce can openers, food processors, blenders, bread makers, and fans.

Characteristics of Core Competencies


There are three tests for core competencies:
1. Potential access to a wide variety of markets - the core competency must be capable of developing new products and services.
2. A core competency must make a significant contribution to the perceived benefits of the end product.
3. Core competencies should be difficult for competitors to imitate. In many industries, such competencies are likely to be unique.

Critical Success Factors


Critical Success Factor (CSF) or Critical Success Factors is a business term for an element which is necessary for an organization or project to achieve its mission. For example, a CSF for a successful Information Technology (IT) project is user involvement [1][2].

The concept of "success factors" was developed by D. Ronald Daniel of McKinsey & Company in 1961. The process was refined by Jack F. Rockart in 1986 [3]. In 1995 James A. Johnson and Michael Friesen applied it to many sector settings, including health care [4].

Rockart and Bullen presented five key sources of CSFs: the industry, competitive strategy and industry position, environmental factors, temporal factors, and managerial position [5].

A plan should be implemented that considers a platform for growth and profits as well as takes into consideration the following critical success factors [6]:

- Money factors: positive cash flow, revenue growth, and profit margins.
- Acquiring new customers and/or distributors -- your future.
- Customer satisfaction -- how happy are they?
- Quality -- how good is your product and service?
- Product or service development -- what's new that will increase business with existing customers and attract new ones?
- Intellectual capital -- increasing what you know that's profitable.
- Strategic relationships -- new sources of business, products and outside revenue.
- Employee attraction and retention -- your ability to extend your reach.
- Sustainability -- your personal ability to keep it all going.

Key success factors generally include exceptional management of several of the following [7]:

- Product design
- Market segmentation
- Distribution and promotion
- Pricing
- Financing
- Securing of key personnel
- Research and development
- Production
- Servicing
- Maintenance of quality/value
- Securing key suppliers

A critical success factor is not a key performance indicator (KPI). Critical success factors are elements that are vital for a strategy to be successful. KPIs are measures that quantify objectives and enable the measurement of strategic performance. For example:
- KPI = number of new customers
- CSF = installation of a call centre for providing quotations

Institutional memory
Institutional memory is a collective of facts, concepts, experiences and know-how held by a group of people. As it transcends the individual, it requires the ongoing transmission of these memories between members of this group. Elements of institutional memory may be found in corporations, professional groups, government bodies, religious groups, academic collaborations and by extension in entire cultures. Institutional memory may be encouraged to preserve a group's ideology or way of work. Conversely, institutional memory may be ingrained to the point that it becomes hard to challenge if something is found to contradict that which was previously thought to have been correct.

Institutional knowledge
Institutional knowledge is gained by organizations translating historical data into useful knowledge and wisdom. Memory depends upon the preservation of data and also on the analytical skills necessary for its effective use within the organization. Religion is one of the significant institutional forces acting on humanity's collective memory. Alternatively, in Marxist theory the mechanism whereby knowledge and wisdom are passed down through the generations is subject to economic determinism. In all instances, social systems, cultures and organizations have an interest in controlling and using institutional memories.

Organizational structure determines the training requirements and the expectations of behaviour associated with various roles; this is part of the implicit institutional knowledge. Progress to higher echelons requires assimilation of this knowledge, and when outsiders enter at a high level without appreciating it, morale and effectiveness tend to deteriorate.

Literature and documents


Publishing has changed greatly in its organization, financing, distribution, and bottom-line emphasis. The dissemination of knowledge in printed media has been consolidated under the control of relatively few corporate publishers, many with ties to multinational mass-entertainment conglomerates.

Personal reminiscences
Memories were shared and sustained across generations before writing appeared. Some of the oral tradition can be traced, distantly, back to the dawn of civilization, but not all past societies have left any mark on the present.

Is institutional memory fading?


Various types of organizational education systems exist, many threatened in the information age by newer technologies. It is appropriate to maintain electronic access to significant historical archives, in repositories such as the Wikisource database or Project Gutenberg. Increasing archival activity in recent years, spurred by the growing use of electronic data-retrieval systems, has necessitated the enhancement of certain document repositories while actually improving accessibility. The teaching of mathematics, for instance, has been fundamentally altered by the algorithmic shortcuts enabled by calculators.

Job satisfaction
Job satisfaction describes how content an individual is with his or her job. It is a relatively recent term, since in previous centuries the jobs available to a particular person were often predetermined by the occupation of that person's parent. A variety of factors can influence a person's level of job satisfaction; these include the level of pay and benefits, the perceived fairness of the promotion system within a company, the quality of the working conditions, leadership and social relationships, and the job itself (the variety of tasks involved, the interest and challenge the job generates, and the clarity of the job description/requirements). The happier people are within their job, the more satisfied they are said to be.

Job satisfaction is not the same as motivation, although the two are clearly linked. Job design aims to enhance job satisfaction and performance; methods include job rotation, job enlargement and job enrichment. Other influences on satisfaction include the management style and culture, employee involvement, empowerment and autonomous work groups.

Job satisfaction is an important attribute which is frequently measured by organisations. The most common means of measurement is the use of rating scales on which employees report their reactions to their jobs. Questions relate to rate of pay, work responsibilities, variety of tasks, promotional opportunities, the work itself and co-workers. Some questionnaires ask yes-or-no questions while others ask respondents to rate satisfaction on a 1-5 scale (where 1 represents "not at all satisfied" and 5 represents "extremely satisfied").

Definitions

Job satisfaction has been defined as a pleasurable emotional state resulting from the appraisal of one's job;[1] an affective reaction to one's job;[2] and an attitude towards one's job.[3] Weiss (2002) has argued that job satisfaction is an attitude, but points out that researchers should clearly distinguish the objects of cognitive evaluation, which are affect (emotion), beliefs and behaviours.[4] This definition suggests that we form attitudes towards our jobs by taking into account our feelings, our beliefs, and our behaviours.

History
One of the biggest preludes to the study of job satisfaction was the Hawthorne studies. These studies (1924-1933), primarily credited to Elton Mayo of the Harvard Business School, sought to find the effects of various conditions (most notably illumination) on workers' productivity. These studies ultimately showed that novel changes in work conditions temporarily increase productivity (called the Hawthorne Effect). It was later found that this increase resulted not from the new conditions but from the knowledge of being observed. This finding provided strong evidence that people work for purposes other than pay, which paved the way for researchers to investigate other factors in job satisfaction.

Scientific management (also known as Taylorism) also had a significant impact on the study of job satisfaction. Frederick Winslow Taylor's 1911 book, Principles of Scientific Management, argued that there was a single best way to perform any given work task. This book contributed to a change in industrial production philosophies, causing a shift from skilled labor and piecework towards the more modern approach of assembly lines and hourly wages. The initial use of scientific management by industries greatly increased productivity, because workers were forced to work at a faster pace. However, workers became exhausted and dissatisfied, thus leaving researchers with new questions to answer regarding job satisfaction. It should also be noted that the work of W.L. Bryan, Walter Dill Scott, and Hugo Munsterberg set the tone for Taylor's work.

Some argue that Maslow's hierarchy of needs theory, a motivation theory, laid the foundation for job satisfaction theory. This theory explains that people seek to satisfy five specific needs in life: physiological needs, safety needs, social needs, self-esteem needs, and self-actualization. This model served as a good basis from which early researchers could develop job satisfaction theories.

Models of job satisfaction


Affect Theory
Edwin A. Locke's Range of Affect Theory (1976) is arguably the most famous job satisfaction model. The main premise of this theory is that satisfaction is determined by a discrepancy between what one wants in a job and what one has in a job. Further, the theory states that how much one values a given facet of work (e.g. the degree of autonomy in a position) moderates how satisfied or dissatisfied one becomes when expectations are or are not met. When a person values a particular facet of a job, his satisfaction is more greatly impacted both positively (when expectations are met) and negatively (when expectations are not met), compared to one who does not value that facet. To illustrate, if Employee A values autonomy in the workplace and Employee B is indifferent about autonomy, then Employee A would be more satisfied in a position that offers a high degree of autonomy, and less satisfied in a position with little or no autonomy, compared to Employee B. This theory also states that too much of a particular facet will produce stronger feelings of dissatisfaction the more a worker values that facet.

Dispositional Theory


Another well-known job satisfaction theory is the Dispositional Theory. It is a very general theory that suggests that people have innate dispositions that cause them to have tendencies toward a certain level of satisfaction, regardless of one's job. This approach became a notable explanation of job satisfaction in light of evidence that job satisfaction tends to be stable over time and across careers and jobs. Research also indicates that identical twins have similar levels of job satisfaction.

A significant model that narrowed the scope of the Dispositional Theory was the Core Self-evaluations Model, proposed by Timothy A. Judge in 1998. Judge argued that there are four core self-evaluations that determine one's disposition towards job satisfaction: self-esteem, general self-efficacy, locus of control, and neuroticism. This model states that higher levels of self-esteem (the value one places on oneself) and general self-efficacy (the belief in one's own competence) lead to higher work satisfaction. Having an internal locus of control (believing one has control over one's own life, as opposed to outside forces having control) leads to higher job satisfaction. Finally, lower levels of neuroticism lead to higher job satisfaction.

Two-Factor Theory (Motivator-Hygiene Theory)


Frederick Herzberg's Two-Factor Theory (also known as Motivator-Hygiene Theory) attempts to explain satisfaction and motivation in the workplace.[5] This theory states that satisfaction and dissatisfaction are driven by different factors: motivation and hygiene factors, respectively. Motivating factors are those aspects of the job that make people want to perform, and provide people with satisfaction; these motivating factors are considered to be intrinsic to the job, or the work carried out.[5] Hygiene factors, by contrast, include aspects of the working environment such as pay, company policies, supervisory practices, and other working conditions.[5]

While Herzberg's model has stimulated much research, researchers have been unable to reliably empirically prove the model, with Hackman and Oldham suggesting that Herzberg's original formulation of the model may have been a methodological artifact.[5] Furthermore, the theory does not consider individual differences, instead predicting that all employees will react in an identical manner to changes in motivating/hygiene factors.[5] Finally, the model has been criticised in that it does not specify how motivating/hygiene factors are to be measured.[5]

Job Characteristics Model


Hackman and Oldham proposed the Job Characteristics Model, which is widely used as a framework to study how particular job characteristics impact job outcomes, including job satisfaction. The model states that there are five core job characteristics (skill variety, task identity, task significance, autonomy, and feedback) which impact three critical psychological states (experienced meaningfulness, experienced responsibility for outcomes, and knowledge of the actual results), in turn influencing work outcomes (job satisfaction, absenteeism, work motivation, etc.).[6] The five core job characteristics can be combined to form a motivating potential score (MPS) for a job, which can be used as an index of how likely a job is to affect an employee's attitudes and behaviors. A meta-analysis of studies that assess the framework of the model provides some support for the validity of the JCM.[7]
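The combination Hackman and Oldham proposed is multiplicative: the three "meaningfulness" characteristics are averaged, then multiplied by autonomy and feedback. A minimal sketch of the MPS calculation follows; the ratings are hypothetical 1-7 values of the kind produced by the Job Diagnostic Survey:

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Hackman & Oldham's MPS: the three 'meaningfulness' characteristics
    are averaged, then multiplied by autonomy and feedback (all typically
    rated on a 1-7 scale)."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# A hypothetical job profile; a job rated 7 on everything scores the maximum, 343.
print(motivating_potential_score(6, 5, 7, 6, 5))  # (6+5+7)/3 * 6 * 5 = 180.0
```

Because autonomy and feedback enter multiplicatively, a near-zero rating on either one drags the whole score down regardless of how meaningful the work is, which is the model's central claim.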

Measuring job satisfaction


There are many methods for measuring job satisfaction. By far the most common method for collecting data regarding job satisfaction is the Likert scale (named after Rensis Likert). Other, less common methods for gauging job satisfaction include yes/no questions, true/false questions, point systems, checklists, and forced-choice answers.

The Job Descriptive Index (JDI), created by Smith, Kendall, and Hulin (1969), is a specific questionnaire of job satisfaction that has been widely used. It measures one's satisfaction in five facets: pay, promotions and promotion opportunities, coworkers, supervision, and the work itself. The scale is simple: participants answer either yes, no, or can't decide (indicated by "?") in response to whether given statements accurately describe one's job.

The Job in General Index is an overall measurement of job satisfaction. It was an improvement on the Job Descriptive Index, because the JDI focused too much on individual facets and not enough on work satisfaction in general.

Other job satisfaction questionnaires include the Minnesota Satisfaction Questionnaire (MSQ), the Job Satisfaction Survey (JSS), and the Faces Scale. The MSQ measures job satisfaction in 20 facets and has a long form with 100 questions (5 items from each facet) and a short form with 20 questions (1 item from each facet). The JSS is a 36-item questionnaire that measures nine facets of job satisfaction. Finally, the Faces Scale of job satisfaction, one of the first scales used widely, measured overall job satisfaction with just one item, which participants respond to by choosing a face.

Variables and measures: in one study, the overall job satisfaction levels of faculty members were measured with the help of five dimensions, namely job, supervisor, coworkers, pay, and promotion. Information regarding faculty members' age, education, job level, foreign qualification, number of years in the organization, other sources of income, gender, and marital status was also obtained (Shamail et al., 2004; Journal of Independent Studies & Research, Volume 2, Number 1, January 2004).
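As a sketch of how facet-based rating-scale data of this kind is typically summarized, the example below computes a mean score per facet and an overall mean of facet means. The responses are hypothetical 1-5 Likert values and the facet names follow the JDI's five facets; real instruments have their own scoring rules:

```python
# Hypothetical 1-5 Likert responses, grouped by JDI-style facet.
responses = {
    "pay":         [3, 4, 2, 3],
    "promotion":   [2, 2, 3, 2],
    "coworkers":   [5, 4, 5, 4],
    "supervision": [4, 4, 3, 4],
    "work_itself": [4, 5, 4, 4],
}

def facet_means(responses):
    """Mean satisfaction per facet, plus an unweighted overall mean of the facet means."""
    means = {facet: sum(vals) / len(vals) for facet, vals in responses.items()}
    means["overall"] = sum(means.values()) / len(means)
    return means

scores = facet_means(responses)
print(scores["coworkers"])  # 4.5
print(scores["overall"])    # 3.55
```

Reporting facet means alongside an overall score mirrors the JDI/Job-in-General contrast described above: the facets show where satisfaction is high or low, while the overall figure summarizes the job as a whole.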

Relationships and practical implications


Job satisfaction can be an important indicator of how employees feel about their jobs and a predictor of work behaviours such as organizational citizenship,[8] absenteeism,[9] and turnover.[10] Further, job satisfaction can partially mediate the relationship between personality variables and deviant work behaviors.[11]

One common research finding is that job satisfaction is correlated with life satisfaction.[12] This correlation is reciprocal, meaning people who are satisfied with life tend to be satisfied with their job, and people who are satisfied with their job tend to be satisfied with life. However, some research has found that job satisfaction is not significantly related to life satisfaction when other variables such as nonwork satisfaction and core self-evaluations are taken into account.[13]

An important finding for organizations to note is that job satisfaction has a rather tenuous correlation to productivity on the job. This is a vital piece of information to researchers and businesses, as the idea that satisfaction and job performance are directly related to one another is often cited in the media and in some non-academic management literature. A recent meta-analysis found an average uncorrected correlation between job satisfaction and productivity of r = .18; the average true correlation, corrected for research artifacts and unreliability, was r = .30.[14] Further, the meta-analysis found that the relationship between satisfaction and performance can be moderated by job complexity, such that for high-complexity jobs the correlation between satisfaction and performance is higher (ρ = .52) than for jobs of low to moderate complexity (ρ = .29). In short, the relationship of satisfaction to productivity is not necessarily straightforward and can be influenced by a number of other work-related constructs, and the notion that "a happy worker is a productive worker" should not be the foundation of organizational decision-making.

With regard to job performance, employee personality may be more important than job satisfaction. The link between job satisfaction and performance is thought to be a spurious relationship; instead, both satisfaction and performance are the result of personality.[15]
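The "corrected for unreliability" figure above is conventionally obtained with Spearman's correction for attenuation, which divides the observed correlation by the square root of the product of the two measures' reliabilities. The sketch below illustrates the arithmetic; the reliability values are illustrative assumptions, not figures taken from the meta-analysis:

```python
def disattenuate(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: estimates the correlation
    between two constructs if both were measured without error."""
    return r_observed / (reliability_x * reliability_y) ** 0.5

# With assumed reliabilities of 0.75 (satisfaction scale) and 0.48
# (performance ratings), an observed r of .18 corrects to .30.
print(round(disattenuate(0.18, 0.75, 0.48), 2))  # 0.3
```

The correction shows why uncorrected and "true" correlations can differ substantially: noisy measures systematically understate the underlying relationship.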


Key Performance Indicators (KPI)


Key Performance Indicators (KPIs) are financial and non-financial metrics used to quantify objectives that reflect the strategic performance of an organization. KPIs are used in business intelligence to assess the present state of the business and to prescribe a course of action. The act of monitoring KPIs in real time is known as business activity monitoring. KPIs are frequently used to "value" difficult-to-measure activities such as the benefits of leadership development, engagement, service, and satisfaction.

KPIs are typically tied to an organization's strategy (as exemplified through techniques such as the Balanced Scorecard). KPIs differ depending on the nature of the organization and the organization's strategy. They help an organization measure progress towards its organizational goals, especially toward difficult-to-quantify knowledge-based processes.

A KPI is a key part of a measurable objective, which is made up of a direction, KPI, benchmark, target and time frame. For example: "Increase Average Revenue per Customer from 10 to 15 by EOY 2008". In this case, 'Average Revenue per Customer' is the KPI.

KPIs should not be confused with critical success factors. For the example above, a critical success factor would be something that needs to be in place to achieve the objective, for example, a product launch.
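The anatomy of a measurable objective (direction, KPI, benchmark, target, time frame) can be sketched as a small data structure. The class and field names below are illustrative, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class MeasurableObjective:
    direction: str    # e.g. "Increase"
    kpi: str          # the metric being tracked
    benchmark: float  # current value
    target: float     # desired value
    deadline: str     # time frame

    def progress(self, current: float) -> float:
        """Fraction of the benchmark-to-target distance covered so far."""
        return (current - self.benchmark) / (self.target - self.benchmark)

# "Increase Average Revenue per Customer from 10 to 15 by EOY 2008"
arpc = MeasurableObjective("Increase", "Average Revenue per Customer",
                           benchmark=10, target=15, deadline="EOY 2008")
print(arpc.progress(12.5))  # 0.5 -- halfway to target
```

Separating the KPI (the metric) from the benchmark, target and deadline makes the distinction in the text concrete: the KPI alone says nothing about success until it is embedded in a full objective.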

Identifying indicators
Performance indicators differ with business drivers and aims (or goals). A school might consider the failure rate of its students a key performance indicator that helps the school understand its position in the educational community, whereas a business might consider the percentage of income from return customers a potential KPI. It is necessary for an organization to at least identify its KPIs. The key environments for identifying KPIs are:
- Having a pre-defined business process.
- Having clear goals/performance requirements for the business processes.
- Having a quantitative/qualitative measurement of the results and comparison with set goals.
- Investigating variances and tweaking processes or resources to achieve short-term goals.

When identifying KPIs, the acronym SMART is often applied. KPIs need to be:
- Specific
- Measurable
- Achievable
- Result-oriented
- Time-based

Areas to be analyzed
Among the areas top management analyzes are customer-related numbers:
1. New customers acquired.
2. Status of existing customers.
3. Customer attrition.
4. Turnover generated by segments of the customers (these could be demographic filters).
5. Outstanding balances held by segments of customers, and terms of payment (these could be demographic filters).
6. Collection of bad debts within customer relationships.
7. Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approvals, rejections and pending numbers.
8. Delinquency analysis of customers behind on payments.
9. Profitability of customers by demographic segments, and segmentation of customers by profitability.

Many of these customer KPIs are developed and improved with customer relationship management. This list is more inclusive than exclusive; the above more or less describes what a bank would do, but it could equally apply to a telephone company or a similar service-sector company. What is important is:
1. KPI-related data that is consistent and correct.
2. Timely availability of KPI-related data.

Faster availability of data is becoming a concern for more and more organizations. Delays of a month or two were once commonplace. Of late, several banks have tried to make data available at shorter intervals and with less delay. For example, in businesses that carry a higher operational/credit risk loading (such as credit cards and wealth management), Citibank has moved to weekly availability of KPI-related data, or sometimes a daily analysis of numbers. This means that data is usually available within 24 hours, as a result of automation and the use of IT.

Categorization of indicators
Key performance indicators define a set of values to measure against. These raw sets of values, fed to systems that summarize information, are called indicators. Indicators identifiable as possible candidates for KPIs can be grouped into the following sub-categories:
- Quantitative indicators, which can be presented as a number.
- Practical indicators, which interface with existing company processes.
- Directional indicators, which specify whether an organization is getting better or not.
- Actionable indicators, which are sufficiently within an organization's control to effect change.

In practical terms and for strategy development, key performance indicators are the objectives whose attainment will add the most value to the business; in that sense they are the key indicators of success.

Problems
In practice, organizations and businesses looking for key performance indicators discover that it is difficult or impossible to measure exactly the performance indicators required for a particular business or process objective (e.g. staff morale may be impossible to quantify with a number). Often a similar business metric is used as a proxy for that KPI. In practice this tends to work, but the analyst must be aware of the limitations of what is being measured, which is often a rough guide rather than an exact measurement. Another serious issue in practice is that once a KPI is created, it becomes difficult to change, as year-on-year comparisons with previous years can be lost. Furthermore, if a KPI is too specific to one organization, it may be extremely difficult to use it for comparisons with other, similar organizations.


Learning organization
The concept of the learning organization is that the successful organization must, and does, continually adapt and learn in order to respond to changes in its environment and to grow. This raises a range of scholarly and theoretical questions relating to what it means for an organization to learn, and practical questions around what organizations need to do in order to learn and adapt.

The idea of a learning organization suggests that there is some learning in organizations that takes place over and above the learning undertaken by individuals as part of their work and experience in organizations. This has been contested by different authors, but has proven an interesting idea. Is it possible, for example, for certain aspects of learning to remain in an organization even if the participants responsible for it leave? It has been proposed that certain organizational artifacts, such as stories, records, systems of doing work, tools, recipes, et cetera, function in a way that detaches the learning from individuals and makes it a property of organizations themselves.

Peter Senge and the learning organization


In his book The Fifth Discipline: The Art and Practice of the Learning Organization, Peter Senge defined a learning organization as human beings cooperating in dynamical systems that are in a state of continuous adaptation and improvement. According to Senge: Real learning gets to the heart of what it means to be human. Through learning we re-create ourselves. Through learning we become able to do something we never were able to do. Through learning we reperceive the world and our relationship to it. Through learning we extend our capacity to create, to be part of the generative process of life. There is within each of us a deep hunger for this type of learning.

The reality each of us sees and understands depends on what we believe is there. By learning the principles of the five disciplines, teams begin to understand how they can think about and inquire into that reality, so that they can collaborate in discussions and, in working together, create the results that matter to them. Practitioners have often seen the work as a vital means of developing a cadre of high-performance leaders able to mobilize people's commitment towards results and change in organizations.

- Feedback: organizations that are adapted for maximum organizational learning deliberately build feedback loops to maximize their own learning.
- Taxonomy: a learning organization may create a specific enterprise taxonomy, a common and agreed-upon understanding of the terms, concepts, categories and keywords that apply within that organization.
- Challenging assumptions: once it has established what they are, a learning organization must constantly challenge its processes, instructions, assumptions and even its basic structure. The true learning organization is constantly redesigning itself.

Market concentration (Industry concentration, Seller concentration)


In economics, market concentration is a function of the number of firms and their respective shares of the total production (alternatively, total capacity or total reserves) in a market. Alternative terms are industry concentration and seller concentration.[1] Market concentration is related to the concept of industrial concentration, which concerns the distribution of production within an industry, as opposed to a market.


Desirable properties
To be practically useful, a market concentration measure should be decreasing in the number of firms in the market. Additionally, it should also be decreasing (or at least nonincreasing) with the degree of symmetry between the firms' shares.

Examples
Commonly used market concentration measures are the Herfindahl index (HHI, or simply H) and the concentration ratio (CR). The Hannah-Kay (1971) index has the general form

    HK(α) = ( Σ_i s_i^α )^(1/(1−α)),   α > 0, α ≠ 1,

where s_i is firm i's market share. Note that the limit as α → 1 is

    exp( −Σ_i s_i ln s_i ),

which is the exponential index.
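These measures can be computed directly from a vector of market shares. A minimal sketch, assuming shares are expressed as fractions summing to 1:

```python
def hhi(shares):
    """Herfindahl index: sum of squared shares (shares as fractions of 1)."""
    return sum(s ** 2 for s in shares)

def concentration_ratio(shares, k=4):
    """CR_k: combined share of the k largest firms."""
    return sum(sorted(shares, reverse=True)[:k])

def hannah_kay(shares, alpha):
    """Hannah-Kay numbers-equivalent index; alpha > 0, alpha != 1.
    At alpha = 2 it reduces to 1/HHI, the 'number of effective competitors'."""
    return sum(s ** alpha for s in shares) ** (1 / (1 - alpha))

shares = [0.4, 0.3, 0.2, 0.1]
print(round(hhi(shares), 2))                # 0.3
print(round(concentration_ratio(shares, 2), 2))  # 0.7  (CR_2)
print(round(hannah_kay(shares, 2), 4))      # 3.3333 == 1 / HHI
```

The last line illustrates the "number of effective competitors" interpretation: a market with this share distribution behaves, by the HHI, like one with roughly 3.3 equal-sized firms.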

Uses
When antitrust agencies are evaluating a potential violation of competition laws, they will typically make a determination of the relevant market and attempt to measure market concentration within the relevant market.

Motivation
As an economic tool market concentration is useful because it reflects the degree of competition in the market. Tirole (1988, p. 247) notes that: Bain's (1956) original concern with market concentration was based on an intuitive relationship between high concentration and collusion. There are game theoretic models of market interaction (e.g. among oligopolists) that predict that an increase in market concentration will result in higher prices and lower consumer welfare even when collusion in the sense of cartelization (i.e. explicit collusion) is absent. Examples are Cournot oligopoly, and Bertrand oligopoly for differentiated products.

Empirical tests
Empirical studies designed to test the relationship between market concentration and prices are collectively known as price-concentration studies; see Weiss (1989). Typically, any study that claims to test the relationship between price and the level of market concentration is also (jointly, that is, simultaneously) testing whether the market definition (according to which market concentration is being calculated) is relevant; that is, whether the boundaries of each market are being drawn neither too narrowly nor too broadly, so as not to make the defined "market" meaningless from the point of view of the competitive interactions of the firms it includes.

Alternative definition
In economics, market concentration is a criterion that can be used to rank order various distributions of firms' shares of the total production (alternatively, total capacity or total reserves) in a market.

Further Examples


Section 1 of the Department of Justice and Federal Trade Commission's Horizontal Merger Guidelines is entitled "Market Definition, Measurement and Concentration." The Herfindahl index is the measure of concentration that these Guidelines state will be used.

A simple measure of market concentration is 1/N, where N is the number of firms in the market. This measure ignores the dispersion among the firms' shares. It is decreasing in the number of firms and nonincreasing in the degree of symmetry between them. It is practically useful only if a sample of firms' market shares is believed to be random, rather than determined by the firms' inherent characteristics.

Any criterion that can be used to compare or rank distributions (e.g. probability distributions, frequency distributions or size distributions) can be used as a market concentration criterion. Examples are stochastic dominance and the Gini coefficient.

Curry and George (1981) list the following "alternative" measures of concentration:

(a) The mean of the first moment distribution (Niehans, 1958); Hannah and Kay (1977) call this an "absolute concentration" index.

(b) The Rosenbluth (1961) index (also Hall and Tideman, 1967):

    R = 1 / ( 2 Σ_i i·s_i − 1 ),

where i indicates the firm's rank position (firms ordered from largest to smallest share).

(c) The comprehensive concentration index (Horvath, 1970):

    CCI = s_1 + Σ_{i≥2} s_i^2 (2 − s_i),

where s_1 is the share of the largest firm. The index is similar to the Herfindahl index, except that greater weight is assigned to the share of the largest firm.

(d) The Pareto slope (Ijiri and Simon, 1971). If the Pareto distribution is plotted on double logarithmic scales, the distribution function is linear, and its slope can be calculated when it is fitted to an observed size distribution.

(e) The Linda index (1976), where Q_i is the ratio between the average share of the first i firms and the average share of the remaining firms. This index is designed to measure the degree of inequality between values of the size variable accounted for by various sub-samples of firms. It is also intended to define the boundary between the oligopolists within an industry and other firms. It has been used by the European Union.

(f) The U index (Davies, 1980), defined in terms of an accepted measure of inequality (in practice the coefficient of variation is suggested), a constant or parameter to be estimated empirically, and the number of firms, N. Davies (1979) suggests that a concentration index should in general depend on both N and the inequality of firms' shares.


The "number of effective competitors" is the inverse of the Herfindahl index.

Market share
Market share, in strategic management and marketing, is the percentage or proportion of the total available market or market segment that is being serviced by a company. It can be expressed as a company's sales revenue (from that market) divided by the total sales revenue available in that market. It can also be expressed as a company's unit sales volume (in a market) divided by the total volume of units sold in that market.
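Both definitions are simple ratios. A sketch with hypothetical figures, covering the revenue-based and unit-based forms:

```python
def market_share(company_value, market_total):
    """Market share as a percentage, for either revenue or unit volume."""
    return 100 * company_value / market_total

# Hypothetical: $50M company revenue in a $400M market,
# or 20,000 units sold out of 125,000 units market-wide.
print(market_share(50, 400))          # 12.5  (% revenue share)
print(market_share(20_000, 125_000))  # 16.0  (% unit share)
```

Note that the two forms need not agree: a firm selling fewer, higher-priced units can hold a larger revenue share than unit share, which is why the definition in use should always be stated.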

Objectives
Increasing market share is one of the most important objectives used in business. The main advantage of using market share as a metric is that it abstracts from industry-wide macroenvironmental variables such as the state of the economy or changes in tax policy. The respective shares of different companies shift with the national environment, whether through political upheaval, disasters or other events. Other objectives include return on investment (ROI), return on assets (ROA), and a target rate of profit. Market share has the potential to increase profits, since a larger customer base generates higher demand for a particular product.

Marketing mix
The marketing mix is generally accepted as the use and specification of the 4 Ps describing the strategic position of a product in the marketplace. One version of the origins of the marketing mix starts in 1948 when Culliton said that a marketing decision should be a result of something similar to a recipe. This version continues in 1953 when Neil Borden, in his American Marketing Association presidential address, took the recipe idea one step further and coined the term 'Marketing-Mix'. A prominent person to take centre stage was E. Jerome McCarthy in 1960; he proposed a four-P classification which was popularised. Philip Kotler describes the concept well in his Marketing Management book (see references below)

Definition
Although some marketers have added other Ps, such as personnel and packaging, the fundamental dogma of marketing typically identifies the four Ps of the marketing mix as:

Product - An object or a service that is mass-produced or manufactured on a large scale with a specific volume of units. A typical example of a mass-produced service is the hotel industry. A less obvious but ubiquitous mass-produced service is a computer operating system. Typical examples of mass-produced objects are the motor car and the disposable razor.

Price - The price is the amount a customer pays for a product. It is determined by a number of factors, including market share, competition, material costs, product identity and the customer's perceived value of the product. The business may increase or decrease the price if other stores carry the same product.

Place - Place represents the location where a product can be purchased. It is often referred to as the distribution channel. It can include any physical store as well as virtual stores on the Internet.

Promotion - Promotion represents all of the communications that a marketer may use in the marketplace. Promotion has four distinct elements: advertising, public relations, word of mouth and point of sale. A certain amount of crossover occurs when promotion uses the four principal elements together, which is common in film promotion. Advertising covers any communication that is paid for, from television and cinema commercials, radio and Internet adverts through print media and billboards. One of the most notable means of promotion today is the promotional product: useful items distributed to targeted audiences with no obligation attached. This category has grown each year for the past decade while most other forms have suffered. It is the only form of advertising that targets all five senses and has the recipient thanking the giver. Public relations covers communication that is not directly paid for and includes press releases, sponsorship deals, exhibitions, conferences, seminars or trade fairs and events. Word of mouth is any apparently informal communication about the product by ordinary individuals, satisfied customers or people specifically engaged to create word-of-mouth momentum. Sales staff often play an important role in word of mouth and public relations (see Product above).

Broadly defined, optimizing the marketing mix is the primary responsibility of marketing. By offering the product with the right combination of the four Ps, marketers can improve their results and marketing effectiveness. Making small changes in the marketing mix is typically considered a tactical change, while making large changes in any of the four Ps can be considered strategic. For example, a large change in price, say from $129.00 to $39.00, would be considered a strategic change in the position of the product. However, a change from $129.00 to $131.00 would be considered a tactical change, potentially related to a promotional offer.
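The tactical-versus-strategic distinction above can be sketched in code by comparing the relative size of a price move. The 25% cutoff below is an assumption chosen for illustration, not a figure from the text.

```python
# Sketch of the tactical-vs-strategic distinction for price changes,
# using relative change as the yardstick. The 25% threshold is an
# illustrative assumption.

def relative_change(old_price: float, new_price: float) -> float:
    """Magnitude of the price move as a fraction of the old price."""
    return abs(new_price - old_price) / old_price

def classify_price_change(old_price: float, new_price: float,
                          strategic_threshold: float = 0.25) -> str:
    """Label a price move as 'strategic' (repositions the product)
    or 'tactical' (e.g. a promotional offer)."""
    if relative_change(old_price, new_price) >= strategic_threshold:
        return "strategic"
    return "tactical"

print(classify_price_change(129.00, 39.00))   # a ~70% cut: repositioning
print(classify_price_change(129.00, 131.00))  # a ~1.5% move: promotional
```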

Criticisms
Peter Doyle (Doyle, P. 2000) claims that the marketing mix approach leads to unprofitable decisions because it is not grounded in financial objectives such as increasing shareholder value. According to Doyle, it has never been clear what criteria to use in determining an optimum marketing mix. Objectives such as providing solutions for customers at low cost have not generated adequate profit margins. Doyle claims that developing marketing-based objectives while ignoring profitability has resulted in the dot-com crash and the Japanese economic collapse. He also claims that pursuing an ROI approach while ignoring marketing objectives is just as problematic. He argues that a net present value approach maximizing shareholder value provides a "rational framework" for managing the marketing mix. Critics of the four Ps claim that they are too strongly oriented towards consumer markets and do not offer an appropriate model for industrial product marketing. Others claim the framework has too strong a product-market perspective and is not appropriate for the marketing of services.

Metrics
Metrics are a system of parameters, or ways of quantitative and periodic assessment of a process to be measured, together with the procedures to carry out such measurement and the procedures for interpreting the assessment in the light of previous or comparable assessments. Metrics are usually specialized by subject area, in which case they are valid only within a certain domain and cannot be directly benchmarked or interpreted outside it. The following ISM3 table suggests the elements that must be known for a metric to be fully defined. This table is sometimes criticized for omitting controls against bias.

Element                  Description
Metric                   Name of the metric
Metric Description       Description of what is measured
Measurement Procedure    How the metric is measured
Measurement Frequency    How often the measurement is taken
Thresholds Estimation    How the thresholds are calculated
Current Thresholds       Current range of values considered normal for the metric
Target Value             Best possible value of the metric
Units                    Units of measurement

(Source of this table: ISM3) Metrics are used in business models, CMMI, ISM3, the Balanced Scorecard and knowledge management. These measurements or metrics can be used to track trends, productivity, resources and much more. Typically, the metrics tracked are key performance indicators, also known as KPIs. For example, you would use metrics to better understand how a company is performing compared to other companies within its industry. Most methodologies define hierarchies to guide organizations in achieving their strategic or tactical goals. An example is: Objectives → Goals → Critical Success Factors (CSFs) → Key Performance Indicators (KPIs or Metrics). The intention is to identify future-state objectives, relate them to specific goals that can be achieved through critical success factors or performance drivers, which are then monitored and measured by key performance indicators. Through this hierarchy, organizations can define and communicate relationships between metrics and how they contribute to the achievement of organizational goals and objectives. Metrics are important in IT Service Management, including ITIL; the intention is to measure the effectiveness of the various processes at delivering services to customers. Some suggest that data from different organizations can be gathered together, against an agreed set of metrics, to form a benchmark, which would allow organizations to evaluate their performance against industry sectors and establish, objectively, how well they are performing. There is strong disagreement with these views from other quarters. No agreed standard set of best-practice metrics exists,[1] and Kaner has raised serious objections about the purported validity of metrics used in software engineering.[2] Douglas Hubbard published his findings that unless the value of information is quantified, managers are unlikely to choose the highest-payoff metrics. Subsequent US Government studies further demonstrated this.[1]
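The ISM3-style metric elements listed in the table earlier can be sketched as a data structure with a simple threshold check. The field names mirror the table; the example KPI and its values are hypothetical, invented only for illustration.

```python
# A minimal sketch of an ISM3-style metric definition, expressed as a
# data structure. Field names follow the table of metric elements.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str                   # Metric: name of the metric
    description: str            # What is measured
    measurement_procedure: str  # How the metric is measured
    measurement_frequency: str  # How often the measurement is taken
    current_thresholds: tuple   # (low, high) range considered normal
    target_value: float         # Best possible value of the metric
    units: str                  # Units of measurement

    def is_normal(self, value: float) -> bool:
        """True if an observed value falls inside the normal range."""
        low, high = self.current_thresholds
        return low <= value <= high

# Hypothetical KPI: incident resolution time for an IT service desk.
resolution_time = Metric(
    name="Mean incident resolution time",
    description="Average time to close a reported incident",
    measurement_procedure="Average over closed tickets in the period",
    measurement_frequency="Monthly",
    current_thresholds=(0.0, 8.0),
    target_value=2.0,
    units="hours",
)

print(resolution_time.is_normal(5.5))   # within the normal range
print(resolution_time.is_normal(12.0))  # breaches the threshold
```

A definition like this makes the hierarchy of objectives, goals, CSFs and KPIs monitorable: each KPI carries its own thresholds and target, so breaches can be flagged automatically.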

Organizational learning
Organizational learning is an area of knowledge within organizational theory that studies models and theories about the way an organization learns and adapts. In organizational development (OD), learning is a characteristic of an adaptive organization, i.e., an organization that is able to sense changes in signals from its environment (both internal and external) and adapt accordingly (see adaptive system). OD specialists endeavor to assist their clients to learn from experience and incorporate the learning as feedback into the planning process.

How organizations learn


While Argyris and Schon were the first to propose models that facilitate organizational learning, the following literature has followed in the tradition of their work:

Argyris and Schon (1978) distinguish between single-loop and double-loop learning, related to Gregory Bateson's concepts of first- and second-order learning. In single-loop learning, individuals, groups, or organizations modify their actions according to the difference between expected and obtained outcomes. In double-loop learning, the entities (individuals, groups or organizations) question the values, assumptions and policies that led to the actions in the first place; if they are able to view and modify those, then second-order or double-loop learning has taken place. Double-loop learning is thus learning about single-loop learning.

March and Olsen (1975) attempt to link individual and organizational learning. In their model, individual beliefs lead to individual action, which in turn may lead to an organizational action and a response from the environment, which may induce improved individual beliefs; the cycle then repeats. Learning occurs as better beliefs produce better actions.

Kim (1993), in an article titled "The link between individual and organizational learning", integrates Argyris, March and Olsen and another model by Kofman into a single comprehensive model; further, he analyzes all the possible breakdowns in the information flows in the model, leading to failures in organizational learning; for instance, what happens if an individual action is rejected by the organization for political or other reasons and therefore no organizational action takes place?

Nonaka and Takeuchi (1995) developed a four-stage spiral model of organizational learning. They started by differentiating Polanyi's concept of "tacit knowledge" from "explicit knowledge" and describe a process of alternating between the two.
Tacit knowledge is personal, context-specific, subjective knowledge, whereas explicit knowledge is codified, systematic, formal, and easy to communicate. The tacit knowledge of key personnel within the organization can be made explicit, codified in manuals, and incorporated into new products and processes. This process they called "externalization". The reverse process (from explicit to tacit) they call "internalization" because it involves employees internalizing an organization's formal rules, procedures, and other forms of explicit knowledge. They also use the term "socialization" to denote the sharing of tacit knowledge, and the term "combination" to denote the dissemination of codified knowledge. According to this model, knowledge creation and organizational learning take a path of socialization, externalization, combination, internalization, socialization, externalization, combination, and so on, in an infinite spiral.

Nick Bontis et al. (2002) empirically tested a model of organizational learning that encompassed both stocks and flows of knowledge across three levels of analysis: individual, team and organization. Results showed a negative and statistically significant relationship between the misalignment of stocks and flows and organizational performance.

Flood (1999) discusses the concept of organizational learning from Peter Senge and the origins of the theory from Argyris and Schon. The author aims to "re-think" Senge's The Fifth Discipline through systems theory. The author develops the concepts by integrating them with key theorists such as Bertalanffy, Churchman, Beer, Checkland and Ackoff. Conceptualizing organizational learning in terms of structure, process, meaning, ideology and knowledge, the author provides insights into Senge within the context of the philosophy of science and the way in which systems theorists were influenced by twentieth-century advances from the classical assumptions of science.

Imants (2003) provides theory development for organizational learning in schools within the context of teachers' professional communities as learning communities, which are compared and contrasted with teaching communities of practice. Along with an analysis of the paradoxes of organizational learning in schools, two mechanisms for professional development and organizational learning, (1) steering information about teaching and learning and (2) encouraging interaction among teachers and workers, are defined as critical for effective organizational learning.

Common (2004) discusses the concept of organisational learning in a political environment to improve public policy-making. The author details the initially uncontroversial reception of organisational learning in the public sector and the development of the concept with the learning organization. Definitional problems in applying the concept to public policy are addressed, noting research in UK local government that identifies four obstacles to organizational learning in the public sector: (1) overemphasis on the individual, (2) resistance to change and politics, (3) social learning that is self-limiting, i.e. individualism, and (4) a political "blame culture." The concepts of policy learning and policy transfer are then defined, with detail on the conditions for realizing organizational learning in the public sector.

Organizational knowledge
What is the nature of knowledge created, traded and used in organizations? Some of this knowledge can be termed technical: knowing the meaning of technical words and phrases, being able to read and make sense of economic data, and being able to act on the basis of law-like generalizations. Scientific knowledge is propositional; it takes the form of causal generalizations: whenever A, then B. For example, whenever water reaches the temperature of 100 degrees, it boils; whenever it boils, it turns into steam; steam generates pressure when in an enclosed space; pressure drives engines. And so forth.

A large part of the knowledge used by managers, however, does not assume this form. The complexities of a manager's task are such that applying A may result in B, C, or Z. A recipe or an idea that solved a particular problem very well may, in slightly different circumstances, backfire and lead to ever more problems. More important for a manager than knowing a whole lot of theories, recipes and solutions is knowing which theory, recipe or solution to apply in a specific situation. Sometimes a manager may combine two different recipes, or adapt an existing recipe with some important modification, to meet the situation at hand. Managers often use knowledge the way a handyman uses his or her skills and the materials and tools at hand to meet the demands of a particular situation. Unlike an engineer, who plans every action carefully and scientifically to deliver a desired outcome, such as a steam engine, a handyman is flexible and opportunistic, often using materials in unorthodox or unusual ways, and relies a great deal on trial and error. This is what the French call bricolage: the resourceful and creative deployment of skills and materials to meet each challenge in an original way. Rule of thumb, far from being the enemy of management, is what managers throughout the world have relied upon to inform their action.
In contrast to the scientific knowledge that guides the engineer, the physician or the chemist, managers are often informed by a different type of know-how. This is sometimes referred to as narrative knowledge or experiential knowledge: the kind of knowledge that comes from experience and resides in stories and narratives of how real people in the real world dealt with real-life problems, successfully or unsuccessfully. Narrative knowledge is what we use in everyday life to deal with awkward situations, as parents, as consumers, as patients and so forth. We seek the stories of people in the same situation as ourselves and try to learn from them. As the Chinese proverb says, "A wise man learns from experience; a wiser man learns from the experience of others." Narrative knowledge usually takes the form of organization stories (see organization story and organizational storytelling). These stories enable participants to make sense of the difficulties and challenges they face; by listening to stories, members of organizations learn from each other's experiences and adapt the recipes used by others to address their own difficulties and problems. Narrative knowledge is not only the preserve of managers. Most professionals (including doctors, accountants, lawyers, business consultants and academics) rely on narrative knowledge, in addition to their specialist technical knowledge, when dealing with concrete situations as part of their work. More generally, narrative knowledge represents an endlessly mutating reservoir of ideas, recipes and stories that are traded mostly by word of mouth or on the internet. They are often apocryphal and may be inaccurate or untrue - yet they have the power to influence people's sensemaking and actions.

Individual versus organizational learning


Learning by individuals in an organizational context is a well-understood process. This is the traditional domain of human resources, including activities such as training, increasing skills, work experience, and formal education. Given that the success of any organization is founded on the knowledge of the people who work for it, these activities will, and indeed must, continue. However, individual learning is only a prerequisite to organizational learning.

Others take it further with continuous learning. The world is orders of magnitude more dynamic than that of our parents, or even than when we were young. Waves of change are crashing on us virtually one on top of another. Change has become the norm rather than the exception. Continuous learning throughout one's career has become essential to remain relevant in the workplace. Again, this is necessary but not sufficient to describe organizational learning.

What does it mean to say that an organization learns? Simply summing individual learning is inadequate to model organizational learning. The following definition outlines the essential difference between the two: a learning organization actively creates, captures, transfers, and mobilizes knowledge to enable it to adapt to a changing environment. Thus, the key aspect of organizational learning is the interaction that takes place among individuals. A learning organization does not rely on passive or ad hoc processes in the hope that organizational learning will take place through serendipity or as a by-product of normal work. A learning organization actively promotes, facilitates, and rewards collective learning.

Creating (or acquiring) knowledge can be an individual or group activity. However, this is normally a small-scale, isolated activity steeped in the jargon and methods of knowledge workers. As first stated by Lucilius in the 1st century BC, "Knowledge is not knowledge until someone else knows that one knows."
Capturing individual learning is the first step to making it useful to an organization. There are many methods for capturing knowledge and experience, such as publications, activity reports, lessons learned, interviews, and presentations. Capturing includes organizing knowledge in ways that people can find it; multiple structures facilitate searches regardless of the user's perspective (e.g., who, what, when, where, why, and how). Capturing also includes storage in repositories, databases, or libraries to ensure that the knowledge will be available when and as needed.

Transferring knowledge requires that it be accessible to everyone when and where they need it. In a digital world, this involves browser-activated search engines to find what one is looking for. A way to retrieve content is also needed, which requires a communication and network infrastructure. Tacit knowledge may be shared through communities of practice or by consulting experts. It is also important that knowledge is presented in a way that users can understand; it must suit the needs of the user to be accepted and internalized.

Mobilizing knowledge involves integrating and using relevant knowledge from many, often diverse, sources to solve a problem or address an issue. Integration requires interoperability standards among various repositories. Using knowledge may mean simple reuse of existing solutions that have worked previously. It may also come through adapting old solutions to new problems. Conversely, a learning organization learns from mistakes or recognizes when old solutions no longer apply. Use may also come through synthesis: creating a broader meaning or a deeper level of understanding. Clearly, the more rapidly knowledge can be mobilized and used, the more competitive an organization.

An organization must learn so that it can adapt to a changing environment. Historically, the life-cycle of organizations typically spanned stable environments between major socioeconomic changes. Blacksmiths who didn't become mechanics simply fell by the wayside. More recently, many Fortune 500 companies of two decades ago no longer exist. Given the ever-accelerating rate of global-scale change, learning and adaptation become ever more critical to organizational relevance, success, and ultimate survival. Organizational learning is a social process, involving interactions among many individuals and leading to well-informed decision making. Thus, a culture that learns and adapts as part of everyday working practices is essential.
Reuse must equal or exceed reinvention as a desirable behavior. Adapting an idea must be rewarded along with its initial creation. Sharing to empower the organization must supersede controlling to empower an individual. Clearly, shifting from individual to organizational learning involves a non-linear transformation. Once someone learns something, it is available for their immediate use. In contrast, organizations need to create, capture, transfer, and mobilize knowledge before it can be used. Although technology supports the latter, these are primarily social processes within a cultural environment, and cultural change, however necessary, is a particularly challenging undertaking.

Learning organization
The work in organizational learning can be distinguished from the work on a related concept, the learning organization. This latter body of work, in general, uses the theoretical findings of organizational learning (and other research in organizational development, systems theory, and cognitive science) in order to prescribe specific recommendations about how to create organizations that continuously and effectively learn. This practical approach was championed by Peter Senge in his book The Fifth Discipline.

Diffusion of innovations
Diffusion of innovations theory explores how and why people adopt new ideas, practices and products. It may be seen as a subset of the anthropological concept of diffusion and can help to explain how ideas are spread by individuals, social networks and organizations.

Operations management
Operations management is an area of business that is concerned with the production of goods and services, and involves the responsibility of ensuring that business operations are efficient and effective. It is also the management of resources, the distribution of goods and services to customers, and the analysis of queue systems. APICS, The Association for Operations Management, also defines operations management as "the field of study that focuses on the effective planning, scheduling, use, and control of a manufacturing or service organization through the study of concepts from design engineering, industrial engineering, management information systems, quality management, production management, inventory management, accounting, and other functions as they affect the organization" (APICS Dictionary, 11th edition). Operations also refers to the production of goods and services: the set of value-added activities that transform inputs into many outputs.[1] Fundamentally, these value-adding creative activities should be aligned with market opportunity (see Marketing) for optimal enterprise performance.

Origins
Historically, the body of knowledge stemming from industrial engineering formed the basis of the first MBA programs, and is central to operations management as used across diverse business sectors, industry, consulting and non-profit organizations.

Operations Management Planning Criteria


Control by creating and maintaining a positive flow of work, utilizing whatever resources and facilities are available
Lead by developing and cascading the organization's strategy/mission statement to all staff
Organize resources such as facilities and employees so as to ensure effective production of goods and services
Plan by prioritizing customer, employee and organizational requirements
Maintain and monitor staffing levels, Knowledge-Skill-Attitude (KSA), expectations and motivation to fulfill organizational requirements
Measure performance, with consideration of efficiency versus effectiveness[2]

Operational performance management


Operational performance management is a type of performance management that addresses the growing pressure to increase revenue while managing costs and meeting ever-evolving and expanding customer demands.

Organizational climate
The concept of organizational climate has been assessed by various authors, many of whom have published their own definition of it. Organizational climate, however, proves hard to define. There are two especially intractable and related difficulties: how to define climate, and how to measure it effectively on different levels of analysis. Furthermore, there are several approaches to the concept of climate, of which two in particular have received substantial patronage: the cognitive schema approach and the shared perception approach. The first approach regards climate as an individual perception and cognitive representation of the work environment; from this perspective, climate assessments should be conducted at the individual level. The second approach emphasizes the importance of shared perceptions as underpinning the notion of climate (Anderson & West, 1998; Mathisen & Einarsen, 2004). Reichers and Schneider (1990) define organizational climate as the shared perception of "the way things are around here" (p. 22). It is important to realize that neither approach is the best one, and that they actually have a great deal of overlap. For further information about the concept of organizational climate, see the work of Anderson and West (1998).

Organizational commitment
In the study of organizational behavior and Industrial/Organizational Psychology, organizational commitment is, in a general sense, the employee's psychological attachment to the organization. It can be contrasted with other work-related attitudes, such as Job Satisfaction (an employee's feelings about their job) and Organizational Identification (the degree to which an employee experiences a 'sense of oneness' with their organization). Organizational scientists have developed many definitions of organizational commitment, and numerous scales to measure it. Exemplary of this work is Meyer & Allen's model of commitment, which was developed to integrate the numerous definitions of commitment that had proliferated in the research literature. According to Meyer and Allen's (1991) three-component model of commitment, prior research indicated that there are three "mind-sets" which can characterize an employee's commitment to the organization:

Affective Commitment: AC is defined as the employee's positive emotional attachment to the organization. An employee who is affectively committed strongly identifies with the goals of the organization and desires to remain a part of the organization. This employee commits to the organization because he/she "wants to". In developing this concept, Meyer and Allen drew largely on Mowday, Porter, and Steers's (1982) concept of commitment, which in turn drew on earlier work by Kanter (1968).

Continuance Commitment: The individual commits to the organization because he/she perceives high costs of losing organizational membership (cf. Becker's 1960 "side bet theory"), including economic losses (such as pension accruals) and social costs (friendship ties with co-workers) that would have to be given up. The employee remains a member of the organization because he/she "has to".

Normative Commitment: The individual commits to and remains with an organization because of feelings of obligation. These feelings may derive from many sources.
For example, the organization may have invested resources in training an employee who then feels a 'moral' obligation to put forth effort on the job and stay with the organization to 'repay the debt.' It may also reflect an internalized norm, developed before the person joins the organization through family or other socialization processes, that one should be loyal to one's organization. The employee stays with the organization because he/she "ought to".

Note that according to Meyer and Allen, these components of commitment are not mutually exclusive: an employee can simultaneously be committed to the organization in an affective, normative, *and* continuance sense, at varying levels of intensity. This idea led Meyer and Herscovitch (2001) to argue that at any point in time, an employee has a "commitment profile" that reflects high or low levels of all three of these mind-sets, and that different profiles have different effects on workplace behavior such as job performance, absenteeism, and the chance that the organization member will quit. Meyer and Allen developed the Affective Commitment Scale (ACS), the Normative Commitment Scale (NCS) and the Continuance Commitment Scale (CCS) to measure these components of commitment. Many researchers have used them to determine what impact an employee's level of commitment has on outcomes such as quitting behavior, job performance, and absenteeism. However, some researchers have questioned how well they actually assess an employee's commitment, and efforts continue to improve the validity of these scales, and of similar commitment scales such as Mowday, Porter, and Steers' Organizational Commitment Questionnaire (OCQ).

In addition to methodological investigations of the validity and reliability of these scales, recent research has focused on determining the cross-cultural validity of Meyer and Allen's measures (do employees in other countries/cultures experience commitment the same way as employees in the USA?), and on expanding the three-component model to other foci (such as commitment to one's occupation, department, organization change initiatives, and work team).

Organizational communication
Organizational communication, broadly speaking, is: people working together to achieve individual or collective goals.[1]

Discipline History
The modern field traces its lineage through business information, business communication, and early mass communication studies published in the 1930s through the 1950s. Until then, organizational communication as a discipline consisted of a few professors within speech departments who had a particular interest in speaking and writing in business settings. The current field is well established, with its own theories and empirical concerns distinct from other communication subfields and other approaches to organizations. Several seminal publications stand out as works broadening the scope and recognizing the importance of communication in the organizing process, and in using the term "organizational communication". Nobel Laureate Herbert Simon wrote in 1947 about "organization communications systems", saying communication is "absolutely essential to organizations".[2] In 1951 Bavelas and Barrett wrote An Experimental Approach to Organizational Communication, in which they stated that communication "is the essence of organized activity". In 1953 the economist Kenneth Boulding wrote The Organizational Revolution: A Study in the Ethics of Economic Organization. While this work directly addressed the economic issues facing organizations, in it he questions the ethical and moral issues underlying their power, and maintains that an "organization consists of a system of communication." In 1954, a young Chris Argyris published Personality and Organization. This careful and research-based book attacked many things, but singled out "organizational communication" for special attention. Argyris made the case that what passed for organizational communication at the time was based on unstated and indefensible propositions such as "management knows best" and "workers are inherently stupid and lazy." He accused the emerging field of relying on untested gimmicks designed to trick employees into doing management's will.

Assumptions underlying early organizational communication


Some of the main assumptions underlying much of the early organizational communication research were:

Humans act rationally. Sane people behave in rational ways; they generally have access to all of the information needed to make rational decisions they could articulate, and therefore will make rational decisions, unless there is some breakdown in the communication process.

Formal logic and empirically verifiable data ought to be the foundation upon which any theory rests. All we really need to understand communication in organizations is (a) observable and replicable behaviors that can be transformed into variables by some form of measurement, and (b) formally replicable syllogisms that can extend theory from observed data to other groups and settings.


Communication is primarily a mechanical process, in which a message is constructed and encoded by a sender, transmitted through some channel, then received and decoded by a receiver. Distortion, represented as any differences between the original and the received messages, can and ought to be identified and reduced or eliminated.

Organizations are mechanical things, in which the parts (including employees functioning in defined roles) are interchangeable. What works in one organization will work in another similar organization. Individual differences can be minimized or even eliminated with careful management techniques.

Organizations function as a container within which communication takes place. Any differences in the form or function of communication between what occurs in an organization and what occurs in another setting can be identified and studied as factors affecting communicative activity.

Herbert Simon introduced the concept of bounded rationality, which challenged assumptions about the perfect rationality of communication participants. He maintained that people making decisions in organizations seldom had complete information, and that even if more information was available, they tended to pick the first acceptable option, rather than exploring further to pick the optimal solution. Through the 1960s, 1970s and 1980s the field expanded greatly in parallel with several other academic disciplines, looking at communication as more than an intentional act designed to transfer an idea. Research expanded beyond the issue of "how to make people understand what I am saying" to tackle questions such as "how does the act of communicating change, or even define, who I am?", "why do organizations that seem to be saying similar things achieve very different results?" and "to what extent are my relationships with others affected by our various organizational contexts?"

Types of Communication Flow


Downward communication
Upward communication
Horizontal communication

Research Methodologies
Historically, organizational communication was driven primarily by quantitative research methodologies. Included in functional organizational communication research are statistical analyses (such as surveys, text indexing, network mapping and behavior modeling). In the early 1980s, the interpretive revolution took place in organizational communication. In Putnam and Pacanowsky's 1983 text Communication and Organizations: An Interpretive Approach, they argued for opening up methodological space for qualitative approaches such as narrative analyses, participant-observation, interviewing, rhetorical and textual approaches, and philosophic inquiries. During the 1980s and 1990s critical organizational scholarship began to gain prominence with a focus on issues of gender, race, class, and power/knowledge. In its current state, the study of organizational communication is open methodologically, with research from post-positive, interpretive, critical, postmodern, and discursive paradigms being published regularly. Organizational communication scholarship appears in a number of communication journals including but not limited to Management Communication Quarterly, Journal of Applied Communication Research, Communication Monographs, Academy of Management Journal, Communication Studies, and Southern Communication Journal.

Current Organizational Communication Research


Organizational communication can include:

Flow of communication, e.g., formal and informal; internal and external; upward, downward, and horizontal; networks

Induction, e.g., new-hire orientation; policies and procedures; employee benefits

Channels, e.g., electronic media (e-mail, intranet, internet, teleconference); print media (memos, bulletin boards, newsletters); face-to-face

Meetings, e.g., briefings, staff meetings, project meetings, town hall meetings

Interviews, e.g., selection, performance, career

More recently, the field of organizational communication has moved from acceptance of mechanistic models (e.g., information moving from a sender to a receiver) to a study of the persistent, hegemonic and taken-for-granted ways in which we not only use communication to accomplish certain tasks within organizational settings (e.g., public speaking) but also how the organizations in which we participate affect us. These approaches include "postmodern", "critical", "participatory", "feminist", "power/political", "organic", etc. and draw from disciplines as wide-ranging as sociology, philosophy, theology, psychology (see, in particular, "industrial/organizational psychology"), business, business administration, institutional management, medicine (health communication), neurology (neural nets), semiotics, anthropology, international relations, and music. Thus the field has expanded or moved to study phenomena such as: Constitution, e.g., how communicative behaviors construct or modify organizing processes or products;


how the organizations within which we interact affect our communicative behaviors, and through these, our own identities; structures other than organizations which might be constituted through our communicative activity (e.g., markets, cooperatives, tribes, political parties, social movements). When does something "become" an organization? When does an organization become (an)other thing(s)? Can one organization "house" another? Is the organization still a useful entity/thing/concept, or has the social/political environment changed so much that what we now call "organization" is so different from the organization of even a few decades ago that it cannot be usefully tagged with the same word--"organization"?

Narrative, e.g., how do group members employ narrative to acculturate/initiate/indoctrinate new members? Do organizational stories act on different levels? Are different narratives purposively invoked to achieve specific outcomes, or are there specific roles of "organizational storyteller"? If so, are stories told by the storyteller received differently than those told by others in the organization? In what ways does the organization attempt to influence storytelling about the organization? Under what conditions does the organization appear to be more or less effective in obtaining a desired outcome? When these stories conflict with one another or with official rules/policies, how are the conflicts worked out? In situations in which alternative accounts are available, how and why are some accepted and others rejected?

Identity, e.g., who do we see ourselves to be, in terms of our organizational affiliations? Do communicative behaviors or occurrences in one or more of the organizations in which we participate effect changes in us? To what extent are we composed of the organizations to which we belong? Is it possible for individuals to successfully resist organizational identity? What would that look like? Do people who define themselves by their work-organizational membership communicate differently within the organizational setting than people who define themselves more by an avocational (nonvocational) set of relationships?

Interrelatedness of organizational experiences, e.g., how do our communicative interactions in one organizational setting affect our communicative actions in other organizational settings? How do the phenomenological experiences of participants in a particular organizational setting effect changes in other areas of their lives? When the organizational status of a member is significantly changed (e.g., by promotion or expulsion), how are their other organizational memberships affected?

Power, e.g., how does the use of particular communicative practices within an organizational setting reinforce or alter the various interrelated power relationships within the setting? Are the potential responses of those within or around these organizational settings constrained by factors or processes either within or outside of the organization (assuming there is an "outside")? Do taken-for-granted organizational practices work to fortify the dominant hegemonic narrative? Do individuals resist/confront these practices, through what actions/agencies, and to what effects? Do status changes in an organization (e.g., promotions, demotions, restructuring, financial/social strata changes) change communicative behavior? Are there criteria employed by organizational members to differentiate between "legitimate" (i.e., endorsed by the formal organizational structure) and "illegitimate" (i.e., opposed by or unknown to the formal power structure) behaviors? Are there "pretenders" or "usurpers" who employ these communicative behaviors? When are they successful, and what do we even mean by "successful"?

Organizational culture, or corporate culture


Organizational culture, or corporate culture, comprises the attitudes, experiences, beliefs and values of an organization. It has been defined as "the specific collection of values and norms that are shared by people and groups in an organization and that control the way they interact with each other and with stakeholders outside the organization. Organizational values are beliefs and ideas about what kinds of goals members of an organization should pursue and ideas about the appropriate kinds or standards of behavior organizational members should use to achieve these goals. From organizational values develop organizational norms, guidelines or expectations that prescribe appropriate kinds of behavior by employees in particular situations and control the behavior of organizational members towards one another."[1] Senior management may try to determine a corporate culture. They may wish to impose corporate values and standards of behavior that specifically reflect the objectives of the organization. In addition, there will also be an extant internal culture within the workforce. Work-groups within the organization have their own behavioral quirks and interactions which, to an extent, affect the whole system. Task culture can be imported. For example, computer technicians will have expertise, language and behaviors gained independently of the organization, but their presence can influence the culture of the organization as a whole.

Strong/weak cultures
Strong culture is said to exist where staff respond to a stimulus because of their alignment to organizational values. Conversely, there is weak culture where there is little alignment with organizational values, and control must be exercised through extensive procedures and bureaucracy. Where culture is strong (people do things because they believe it is the right thing to do) there is a risk of another phenomenon, groupthink. "Groupthink" was described by Irving L. Janis. He defined it as "...a quick and easy way to refer to a mode of thinking that people engage in when they are deeply involved in a cohesive ingroup, when members' strivings for unanimity override their motivation to realistically appraise alternatives of action." This is a state where people, even if they have different ideas, do not challenge organizational thinking, and therefore there is a reduced capacity for innovative thought. This could occur, for example, where there is heavy reliance on a central charismatic figure in the organization, where there is an evangelical belief in the organization's values, or in groups where a friendly climate is at the base of their identity (avoidance of conflict). In fact groupthink is very common; it happens all the time, in almost every group. Members who are defiant are often turned down or seen as a negative influence by the rest of the group because they bring conflict. Innovative organizations need individuals who are prepared to challenge the status quo, be it groupthink or bureaucracy, and also need procedures to implement new ideas effectively.

Classification schemes
Several methods have been used to classify organizational culture. Some are described below:

Geert Hofstede
Geert Hofstede demonstrated that there are national and regional cultural groupings that affect the behavior of organizations. Hofstede identified five dimensions of culture in his study of national influences:


Power distance - the degree to which a society expects there to be differences in the levels of power. A high score suggests that there is an expectation that some individuals wield larger amounts of power than others. A low score reflects the view that all people should have equal rights.

Uncertainty avoidance - reflects the extent to which a society accepts uncertainty and risk.

Individualism vs. collectivism - refers to the extent to which people are expected to stand up for themselves, or alternatively to act predominantly as a member of the group or organization.

Masculinity vs. femininity - refers to the value placed on traditionally male or female values. Male values, for example, include competitiveness, assertiveness, ambition, and the accumulation of wealth and material possessions.

Long vs. short term orientation - describes a society's "time horizon," or the importance attached to the future versus the past and present. In long term oriented societies, thrift and perseverance are valued more; in short term oriented societies, respect for tradition and reciprocation of gifts and favors are valued more. Eastern nations tend to score especially high here, with Western nations scoring low and the less developed nations very low; China scored highest and Pakistan lowest.
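Because Hofstede's dimensions are scored numerically, two cultural profiles can be compared quantitatively. The Python sketch below represents profiles as scores from 0 to 100 on the five dimensions and summarises their difference as a mean absolute gap; the country scores and the distance measure are illustrative assumptions of this sketch, not Hofstede's published figures or his methodology.

```python
# Hofstede's five dimensions, each scored 0-100 for a given country.
DIMENSIONS = ["power_distance", "uncertainty_avoidance",
              "individualism", "masculinity", "long_term_orientation"]

def cultural_distance(a: dict, b: dict) -> float:
    """Mean absolute difference across the five dimensions: a crude
    single-number summary of how far apart two cultural profiles are."""
    return sum(abs(a[d] - b[d]) for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical profiles (not real Hofstede data): country_x scores high on
# power distance and long-term orientation, country_y on individualism.
country_x = dict(power_distance=80, uncertainty_avoidance=60,
                 individualism=20, masculinity=50, long_term_orientation=90)
country_y = dict(power_distance=35, uncertainty_avoidance=65,
                 individualism=90, masculinity=60, long_term_orientation=25)

print(cultural_distance(country_x, country_y))  # → 39.0
```

A large gap on a single dimension (here, individualism) can dominate the summary, which is why published composites such as the Kogut-Singh index weight each dimension by its variance rather than averaging raw differences.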

Deal and Kennedy


Deal and Kennedy defined organizational culture as "the way things get done around here". They measured organizations in respect of: Feedback - quick feedback means an instant response. This could be in monetary terms, but could also be seen in other ways, such as the impact of a great save in a soccer match. Risk - represents the degree of uncertainty in the organization's activities.

Using these parameters, they were able to suggest four classifications of organizational culture:

The Tough-Guy Macho Culture. Feedback is quick and the rewards are high. This often applies to fast-moving financial activities such as brokerage, but could also apply to a police force, or athletes competing in team sports. This can be a very stressful culture in which to operate.

The Work Hard/Play Hard Culture is characterized by few risks being taken, all with rapid feedback. This is typical in large organizations which strive for high quality customer service. It is often characterized by team meetings, jargon and buzzwords.

The Bet Your Company Culture, where big-stakes decisions are taken, but it may be years before the results are known. Typically, these might involve development or exploration projects which take years to come to fruition, such as oil prospecting or military aviation.

The Process Culture occurs in organizations where there is little or no feedback. People become bogged down with how things are done, not with what is to be achieved. This is often associated with bureaucracies. While it is easy to criticize these cultures for being overly cautious or bogged down in red tape, they do produce consistent results, which is ideal in, for example, public services.
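Deal and Kennedy's scheme is effectively a two-by-two grid: feedback speed on one axis, risk on the other. A minimal Python sketch of that mapping follows; the boolean encoding of the two parameters is a simplification of this sketch, not part of Deal and Kennedy's model.

```python
def classify_culture(quick_feedback: bool, high_risk: bool) -> str:
    """Map Deal and Kennedy's two parameters onto their four culture types."""
    if quick_feedback and high_risk:
        return "Tough-Guy Macho"      # fast feedback, high stakes (e.g., brokerage)
    if quick_feedback:
        return "Work Hard/Play Hard"  # fast feedback, few risks (e.g., customer service)
    if high_risk:
        return "Bet Your Company"     # big stakes, results take years (e.g., oil prospecting)
    return "Process"                  # little feedback, low risk (e.g., bureaucracies)
```

Treating feedback and risk as binary flags makes the four quadrants explicit; real organizations, of course, sit on a continuum along both axes.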

Charles Handy
Charles Handy (1985) popularized a method of looking at culture which some scholars have used to link organizational structure to Organizational Culture. He describes: a Power Culture which concentrates power among a few. Control radiates from the center like a web. Power Cultures have few rules and little bureaucracy; swift decisions can ensue. In a Role Culture, people have clearly delegated authorities within a highly defined structure. Typically, these organizations form hierarchical bureaucracies. Power derives from a person's position and little scope exists for expert power. By contrast, in a Task Culture, teams are formed to solve particular problems. Power derives from expertise as long as a team requires expertise. These cultures often feature the multiple reporting lines of a matrix structure.


A Person Culture exists where all individuals believe themselves superior to the organization. Survival can become difficult for such organizations, since the concept of an organization suggests that a group of like-minded individuals pursue the organizational goals. Some professional partnerships can operate as person cultures, because each partner brings a particular expertise and clientele to the firm.

Edgar Schein
Edgar Schein, an MIT Sloan School of Management professor, defines organizational culture as "the residue of success" within an organization. According to Schein, culture is the most difficult organizational attribute to change, outlasting organizational products, services, founders and leadership and all other physical attributes of the organization. His organizational model illuminates culture from the standpoint of the observer, described by three cognitive levels of organizational culture. At the first and most cursory level of Schein's model are organizational attributes that can be seen, felt and heard by the uninitiated observer. Included are the facilities, offices, furnishings, visible awards and recognition, the way that members dress, and how members visibly interact with each other and with organizational outsiders. The next level deals with the professed culture of an organization's members. At this level, company slogans, mission statements and other operational creeds are often expressed, and local and personal values are widely expressed within the organization. Organizational behavior at this level usually can be studied by interviewing the organization's membership and using questionnaires to gather attitudes about organizational membership. At the third and deepest level, the organization's tacit assumptions are found. These are the elements of culture that are unseen and not cognitively identified in everyday interactions between organizational members. Additionally, these are the elements of culture which are often taboo to discuss inside the organization. Many of these 'unspoken rules' exist without the conscious knowledge of the membership. Those with sufficient experience to understand this deepest level of organizational culture usually become acclimatized to its attributes over time, thus reinforcing their invisibility.
Surveys and casual interviews with organizational members cannot draw out these attributes; rather, much more in-depth means are required to first identify and then understand organizational culture at this level. Notably, culture at this level is the underlying and driving element often missed by organizational behaviorists. Using Schein's model, understanding paradoxical organizational behaviors becomes easier. For instance, an organization can profess highly aesthetic and moral standards at the second level of Schein's model while simultaneously displaying curiously opposing behavior at the third and deepest level of culture. Superficially, organizational rewards can imply one organizational norm but at the deepest level imply something completely different. This insight offers an understanding of the difficulty that organizational newcomers have in assimilating organizational culture and why it takes time to become acclimatized. It also explains why organizational change agents usually fail to achieve their goals: underlying tacit cultural norms are generally not understood before would-be change agents begin their actions. Merely understanding culture at the deepest level may be insufficient to institute cultural change, because the dynamics of interpersonal relationships (often under threatening conditions) are added to the dynamics of organizational culture while attempts are made to institute desired change.

Organizational Culture Evolution


Arthur F. Carmazzi states that the dynamics of organisational culture are an evolutionary process that can change and evolve with the proper psychology of leadership.

Foundations of Culture Evolution



At each level of organisational evolution, people will be working, acting, thinking, and feeling at different levels of personal commitment. Carmazzi's Directive Communication psychology classifies these levels of commitment as:

The Level of Individual - People rely on personal skill and direction from leaders. When working on the plane of SKILL, people work at the level of Individual. They work because it is required, and use and develop their skill because it maintains the security related to their job.

The Level of Group - People have an emotional connection to their work. This has further developed their attitude for success. They thrive in an environment of personal growth among others who have the same attitude. When working on the plane of ATTITUDE, people work at the level of Group. They take on additional tasks and even apply more effort to their job. Unlike those working at the level of Individual, they do not need to be told what to do, only to be guided in a direction.

The Level of Organization - The pinnacle of greatness comes when individuals see their work as their purpose. People see a greater purpose to the work they do, something greater than the individual or the group. The organisation is the vehicle for doing and becoming something greater than themselves. When working on the plane of SELF-ACTUALIZATION, people work at the level of Organization. At this level of commitment, an individual will do for the organization the same as he would do for himself. The individual and the organisation (and all its components and people) are one.

Insights on Evolving Corporate Culture


According to Carmazzi, each culture affects the effectiveness and level of commitment of the people within that culture, and that in turn perpetuates the psychology that creates the culture in the first place. In order to break the cycle and evolve a culture and the commitment of those in it, leaders need to understand their role in the psychological dynamics behind the culture and make adjustments that will move it to the next level. Carmazzi describes five levels of organizational culture:

The Blame Culture - This culture cultivates distrust and fear; people blame each other to avoid being reprimanded or put down. This results in no new ideas or personal initiative, because people don't want to risk being wrong. The majority of commitment here is at the level of Individual.

Multi-Directional Culture - This culture cultivates minimized cross-department communication and cooperation. Loyalty is only to specific groups (departments). Each department becomes a clique and is often critical of other departments, which in turn creates lots of gossip. The lack of cooperation and the multi-direction are manifested in the organization's inefficiency. The majority of personal commitment in this culture borders on the level of Individual and the level of Group.

Live-and-Let-Live Culture - This culture is complacency; it manifests mental stagnation and low creativity. People here have little future vision and have given up their passion. There is average cooperation and communication, and things do work, but they do not grow. People have developed their personal relationships and decided whom to stay away from; there is not much left to learn. Personal commitment here is mixed between the level of Individual and the level of Group.

Brand Congruent Culture


People in this culture believe in the product or service of the organization; they feel good about what their company is trying to achieve and cooperate to achieve it. People here are passionate and seem to have similar goals in the organisation. They use personal resources to actively solve problems, and while they don't always accept the actions of management or others around them, they see their job as important. Almost everyone in this culture is operating at the level of Group.

Leadership Enriched Culture

People view the organisation as an extension of themselves; they feel good about what they personally achieve through the organisation and have exceptional cooperation. Individual goals are aligned with the goals of the organisation, and people will do what it takes to make things happen. As a group, the organisation is more like a family providing personal fulfillment which often transcends ego, so people are consistently bringing out the best in each other. In this culture, leaders do not develop followers, but develop other leaders. Almost everyone in this culture is operating at the level of Organisation.

Culture Maintenance
Once an organizational culture has evolved to a higher level, the challenge lies in maintaining it. To continuously develop an organization's people (as well as new staff), Carmazzi considers it essential to apply Directive Communication psychology.

Elements
G. Johnson described a cultural web, identifying a number of elements that can be used to describe or influence organizational culture:

The Paradigm: What the organization is about; what it does; its mission; its values.

Control Systems: The processes in place to monitor what is going on. Role cultures would have vast rulebooks. There would be more reliance on individualism in a power culture.

Organizational Structures: Reporting lines, hierarchies, and the way that work flows through the business.

Power Structures: Who makes the decisions, how widely spread is power, and on what is power based?

Symbols: These include organizational logos and designs, but also extend to symbols of power such as parking spaces and executive washrooms.

Rituals and Routines: Management meetings, board reports and so on may become more habitual than necessary.

Stories and Myths: These build up about people and events, and convey a message about what is valued within the organization.

These elements may overlap. Power structures may depend on control systems, which may exploit the very rituals that generate stories which may not be true.

Dimensions of organizational culture


Innovation and risk taking
Aggressiveness
Outcome orientation
Team orientation
People orientation
Attention to detail
Stability

Organizational culture and change


When one wants to change an aspect of the culture of an organization, one has to keep in mind that this is a long-term project. Corporate culture is very hard to change, and employees need time to get used to the new way of organizing. For companies with a very strong and specific culture it will be even harder to change. Cummings & Worley (2005, pp. 491-492) give the following six guidelines for cultural change; these changes are in line with the eight distinct stages mentioned by Kotter (1995, p. 2): 1. Formulate a clear strategic vision (stages 1, 2 & 3 of Kotter, 1995, p. 2). In order to make a cultural change effective, a clear vision of the firm's new strategy, shared values and behaviours is needed. This vision provides the intention and direction for the culture change (Cummings & Worley, 2005, p. 490). 2. Display top-management commitment (stage 4 of Kotter, 1995, p. 2). It is very important to keep in mind that culture change must be managed from the top of the organization, as the willingness of senior management to change is an important indicator (Cummings & Worley, 2005, p. 490). The top of the organization should be very much in favour of the change in order to actually implement the change in the rest of the organization. De Caluwé & Vermaak (2004, p. 9) provide a framework with five different ways of thinking about change. 3. Model culture change at the highest level (stage 5 of Kotter, 1995, p. 2). In order to show that the management team is in favour of the change, the change has to be notable at first at this level. The behaviour of the management needs to symbolize the kinds of values and behaviours that should be realized in the rest of the company. It is important that the management shows the strengths of the current culture as well; it must be made clear that the current organizational culture does not need radical changes, but just a few adjustments.
(See for more: Deal & Kennedy, 1982; Sathe, 1983; Schall, 1983; Weick, 1985; DiTomaso, 1987.) 4. Modify the organization to support organizational change. 5. Select and socialize newcomers and terminate deviants (stages 7 & 8 of Kotter, 1995, p. 2). A way to implement a culture is to connect it to organizational membership: people can be selected and terminated on the basis of their fit with the new culture (Cummings & Worley, 2005, p. 491). 6. Develop ethical and legal sensitivity. Changes in culture can lead to tensions between organizational and individual interests, which can result in ethical and legal problems for practitioners. This is particularly relevant for changes in employee integrity, control, equitable treatment and job security (Cummings & Worley, 2005, p. 491).

Entrepreneurial culture
Stephen McGuire defined and validated a model of organizational culture that predicts revenue from new sources. An Entrepreneurial Organizational Culture (EOC) is a system of shared values, beliefs and norms of members of an organization, including valuing creativity and tolerance of creative people, believing that innovating and seizing market opportunities are appropriate behaviors to deal with problems of survival and prosperity, environmental uncertainty, and competitors' threats, and expecting organizational members to behave accordingly.

Critical views

Writers from critical management studies have tended to express skepticism about the functionalist and unitarist views of culture put forward by mainstream management thinkers. Whilst not necessarily denying that organizations are cultural phenomena, they would stress the ways in which cultural assumptions can stifle dissent and reproduce management propaganda and ideology. After all, it would be naive to believe that a single culture exists in all organizations, or that cultural engineering will reflect the interests of all stakeholders within an organization.[8] In any case, Parker has suggested that many of the assumptions of those putting forward theories of organizational culture are not new. They reflect a long-standing tension between cultural and structural (or informal and formal) versions of what organizations are. Further, it is perfectly reasonable to suggest that complex organizations might have many cultures, and that such subcultures might overlap and contradict each other. The neat typologies of cultural forms found in textbooks rarely acknowledge such complexities, or the various economic contradictions that exist in capitalist organizations. One of the strongest and most widely recognised criticisms of theories that attempt to categorise or 'pigeonhole' organisational culture is that put forward by Linda Smircich. She uses the metaphor of a plant root to represent culture, arguing that it drives organisations rather than vice versa. Organisations are the product of organisational culture; we are unaware of how it shapes behaviour and interaction (a point also recognised through Schein's (2002) underlying assumptions), so how can we categorise it and define what it is?

Measurement
Despite the evidence suggesting their potential usefulness, organisational climate metrics have not been fully exploited as leading safety, health and environmental performance indicators, or as an aid to relative risk ranking.[9] Dodsworth et al. were the first researchers to successfully use partial least squares (PLS) modelling techniques to correlate organizational climate metrics with an organisation's safety performance. In the context of effectiveness, the repertory grid interview can be used to capture a representation of an organisation's culture or corporate culture - the organisation's construct system. The repertory grid interview process provides a structured way of comparing effective and less effective performance and capturing it in the interviewee's words, without imposing someone else's model or way of thinking.

Organization development
Organization development is the process through which an organization develops the internal capacity to be the most effective it can be in its mission work and to sustain itself over the long term. This definition highlights the explicit connection between organizational development work and the achievement of organizational mission; this connection is the rationale for doing OD work. Organization development, according to Richard Beckhard, is defined as: a planned effort, organization-wide, managed from the top, to increase organization effectiveness and health, through planned interventions in the organization's 'processes', using behavioural science knowledge.[1] There are also a number of methodologies specifically dedicated to organization development, such as Peter Senge's Fifth Discipline and Arthur F. Carmazzi's Directive Communication. These are among the more popular approaches that have been developed into systems for specific outcomes, such as the Fifth Discipline's learning organization or Directive Communication's organizational culture enhancement. According to Warren Bennis, organization development (OD) is a complex strategy intended to change the beliefs, attitudes, values, and structure of organizations so that they can better adapt to new technologies, markets, and challenges. Warner Burke emphasizes that OD is not just "anything done to better an organization"; it is a particular kind of change process designed to bring about a particular kind of end result. OD involves organizational reflection, system improvement, planning, and self-analysis. The term "Organization Development" is often used interchangeably with organizational effectiveness, especially when used as the name of a department or a part of the Human Resources function within an organization. Organization development is a growing field that is responsive to many new approaches, including Positive Adult Development.

Definition
At the core of OD is the concept of an organization, defined as two or more people working together toward one or more shared goals. Development in this context is the notion that an organization may become more effective over time at achieving its goals. "OD is a long-range effort to improve an organization's problem-solving and renewal processes, particularly through more effective and collaborative management of organization culture - with specific emphasis on the culture of formal work teams - with the assistance of a change agent or catalyst and the use of the theory and technology of applied behavioral science, including action research."

History
Kurt Lewin (1898-1947) is widely recognized as the founding father of OD, although he died before the concept became current in the mid-1950s. From Lewin came the ideas of group dynamics and action research, which underpin the basic OD process as well as providing its collaborative consultant/client ethos. Institutionally, Lewin founded the Research Center for Group Dynamics (RCGD) at MIT, which moved to Michigan after his death. RCGD colleagues were among those who founded the National Training Laboratories (NTL), from which the T-group and group-based OD emerged. In the UK, working as closely as was possible with Lewin and his colleagues, the Tavistock Institute of Human Relations was important in developing systems theories. Important too was the joint TIHR journal Human Relations, although nowadays the Journal of Applied Behavioral Sciences is seen as the leading OD journal. In recent years, serious questioning has emerged about the relevance of OD to managing change in modern organizations. The need for "reinventing" the field has become a topic that even some of its "founding fathers" are discussing critically.[2]

Organizational Ecology (also Organizational Demography and the Population Ecology of Organizations)
Organizational Ecology (also Organizational Demography and the Population Ecology of Organizations) is a theoretical and practical approach in the social sciences that is especially used in organizational studies. Organizational Ecology uses a biological analogy and statistical analysis to try to understand the conditions under which organizations emerge, grow, and die.

Introduction to Organizational Ecology


First fully developed by Michael Hannan and John Freeman in their 1989 book Organizational Ecology, organizational ecology examines an environment in which organizations compete and a process like natural selection occurs. The theory looks at the death of firms (firm mortality) and the founding of new firms (firm founding), as well as organizational growth. It holds that the organizations that survive are those that are reliable and accountable (favored by selection). A negative by-product of the need for reliability and accountability, however, is a high degree of inertia and a resistance to change. A key prediction of Organizational Ecology is that the process of change itself is so disruptive that it will result in an elevated rate of mortality. Organizational Ecology also predicts that the rates of founding and mortality depend on the number of organizations in the market. The two central mechanisms here are legitimation (the recognition of that group of organizations) and competition. Legitimation generally increases (at a decreasing rate) with the number of organizations, but so does competition (at an increasing rate). The result is that competitive processes prevail at high numbers of organizations, while legitimation prevails at low numbers. The founding rate will therefore first increase with the number of organizations (due to increasing legitimation) but will decrease at high numbers of organizations (due to competition). The reverse holds for mortality rates. The exact way in which these rates depend on the number of organizations in the market also depends on the 'carrying capacity' of a particular market niche. Other lines of research investigate how the rate of mortality depends on organizational age, size, competitive conditions at founding, and position in the market niche. Organizational Ecology has over the years become one of the central fields in organizational studies, and is known for its empirical, quantitative character. Ecological studies usually have a large-scale, longitudinal focus (datasets often span several decades, sometimes even centuries). The book The Demography of Corporations and Industries by Glenn Carroll and Michael Hannan (2000) currently provides the most comprehensive overview of the various theories and methods in Organizational Ecology. Prominent organizational ecology theorists currently active include Michael Hannan, John Freeman, Glenn Carroll, William Barnett and Terry Amburgey.
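The inverted-U relationship between density and the founding rate can be illustrated numerically. The functional forms and coefficients below are hypothetical choices for demonstration, not taken from Hannan and Freeman; they merely encode "legitimation rises at a decreasing rate, competition at an increasing rate".

```python
import numpy as np

def legitimation(n):
    # Grows with density at a decreasing rate (concave).
    return np.log1p(n)

def competition(n):
    # Grows with density at an increasing rate (convex).
    return 0.0005 * n ** 2

def founding_rate(n):
    # Founding rate rises with legitimation and falls with competition.
    return np.exp(0.5 * legitimation(n) - competition(n))

densities = np.arange(1, 201)
rates = founding_rate(densities)
peak_density = int(densities[np.argmax(rates)])
print(f"Founding rate peaks at a density of {peak_density} organizations")
```

Below the peak, each additional organization legitimates the organizational form faster than it intensifies competition; above it, competition dominates and the founding rate falls, while (symmetrically) the mortality rate rises.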

Organizational effectiveness
Organizational effectiveness is the concept of how effective an organization is in achieving the outcomes it intends to produce. The idea is especially important for non-profit organizations, as most people who donate money to non-profit organizations and charities want to know whether the organization is effective in accomplishing its goals. An organization's effectiveness also depends on its communicative competence and ethics; the relationship among these three is simultaneous. Ethics is a foundation of organizational effectiveness: an organization must exemplify respect, honesty, integrity and equity to allow communicative competence among its members. With ethics and communicative competence in place, the members of the group can achieve their intended goals. Foundations and other sources of grants and funds are interested in the organizational effectiveness of those who seek funding from them. Foundations always have more requests for funds than they can meet, and they treat funding as an investment, using the same care a venture capitalist would in picking a company in which to invest. Organizational effectiveness is an abstract concept and is essentially impossible to measure directly. Instead, the organization determines proxy measures to represent effectiveness, such as the number of people served, the types and sizes of population segments served, and the demand within those segments for the services the organization supplies.
For instance, a non-profit organization that supplies meals to housebound people may collect statistics such as the number of meals cooked and served, the number of volunteers delivering meals, the turnover and retention rates of volunteers, the demographics of the people served, the turnover and retention of consumers, the number of requests for meals turned down due to lack of capacity (amount of food, capacity of meal preparation facilities, and number of delivery volunteers), and the amount of wastage. Since the organization has as its goal the preparation and delivery of meals to housebound people, it measures its organizational effectiveness by trying to determine what actual activities the people in the organization do in order to generate the outcomes the organization wants to create. Activities such as fundraising or volunteer training are important because they provide the support the organization needs to deliver its services, but they are not the outcomes per se; they are overhead activities that assist the organization in achieving its desired outcomes. The term organizational effectiveness is often used interchangeably with organization development, especially when used as the name of a department or a part of the Human Resources function within an organization.
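Proxy measures like those in the meals example reduce to simple ratios once the raw counts are collected. The field names and figures below are invented purely for illustration:

```python
# Hypothetical quarterly counts for the meals-to-housebound-people example.
quarter = {
    "meals_served": 10_250,
    "requests_declined": 430,   # turned down for lack of capacity
    "volunteers_start": 120,
    "volunteers_end": 102,
    "clients_start": 380,
    "clients_end": 355,
}

# Demand coverage: share of requested meals the organization could supply.
coverage = quarter["meals_served"] / (
    quarter["meals_served"] + quarter["requests_declined"]
)

# Simple retention proxies for volunteers and clients.
volunteer_retention = quarter["volunteers_end"] / quarter["volunteers_start"]
client_retention = quarter["clients_end"] / quarter["clients_start"]

print(f"demand coverage:     {coverage:.1%}")
print(f"volunteer retention: {volunteer_retention:.1%}")
print(f"client retention:    {client_retention:.1%}")
```

None of these ratios is "the" effectiveness of the organization; each is one proxy, and the declined-requests figure in particular points to the capacity constraints the prose describes.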

Organizational Engineering
Organizational Engineering is a form of organization development created by Gary Salton of Professional Communications, Inc. While traditional organization development is based on theories from psychology and sociology, organizational engineering aims to take a formula-based approach in which people can be plugged into an organizational-environment equation and the outcome predicted - hence "engineering" organizational development. Like organization development, the focus is to increase efficiency, effectiveness, communication and coordination in groups of all kinds. The information derived from organizational engineering testing is often used to place people into groups based on their relationships, for optimal compatibility, without trying to change individuals. The range of Organizational Engineering (OE) is from the individual level (puberty and older) to culture (shared values, beliefs and behaviors). It provides a means to understand, measure, predict and guide human behavior both individually and in groups. The end objective of the discipline is to produce visible, positive results of significant consequence and magnitude within a time frame that is useful to the entity being addressed. OE uses human information processing at the individual level; sociology is the tool of choice at the group level. The methods, tools and processes employed have been documented in the books Organizational Engineering (Salton, 1996) and the Managers' Guide to Organizational Engineering (Salton, 2000), and the instrumentation has been validated across all eight validity dimensions in Validation of Organizational Engineering (Soltysik, 2000). Recent discoveries, additions and enhancements are published in the Journal of Organizational Engineering (JOE).

Tools
Organizational Engineering is considered to be a knowledge base of how people act and why. Developed to complement the knowledge obtained through organizational engineering research, "I Opt" measures the characteristics of an individual so that one can draw conclusions based on that organizational knowledge base.

Organizational Ethics
Organizational ethics is the ethics of an organization: how an organization responds ethically to an internal or external stimulus. Organizational ethics is interdependent with organizational culture. Although it is akin to both organizational behavior (OB) and business ethics on the micro and macro levels, organizational ethics is neither OB nor solely business ethics (which includes corporate governance and corporate ethics). Organizational ethics expresses the values of an organization to its employees and/or other entities, irrespective of governmental and/or regulatory laws.

Overview of the Field


The Foreign Corrupt Practices Act (FCPA) restricts U.S. firms from engaging in bribery and other illegal practices internationally, and there are laws that impose the same type of prohibition on European companies.[1] These laws create a competitive disadvantage for both European and U.S. firms, but they are not a restricting element for organizations that have highly elevated ethical behavior as part of their values. Organizations that do not make a positive ethical outlook part of their culture often bring about their own demise, as Enron and WorldCom did through their questionable accounting practices. The converse is generally true: organizations that make integrity and ethical practice part of their culture are viewed with respect by their employees, community, and corresponding industries.[2] The positive ethical outlook of an organization thereby results in a solid financial bottom line, through greater sales and the ability to retain and attract new and talented personnel. More importantly, an ethical organization will have the ability to retain experienced and knowledgeable employees (generally referred to as human capital). This human capital results in less employee turnover and less time spent training new employees, which in turn allows for greater output of services (or production of goods).

Basic Elements of an Ethical Organization

There are at least four elements that make ethical behavior conducive within an organization. The four elements necessary to quantify an organization's ethics are: 1) a written code of ethics and standards; 2) ethics training for executives, managers, and employees; 3) the availability of advice on ethical situations (i.e., advice lines or offices); and 4) systems for confidential reporting.[3]

Intrinsic and Extrinsic


The intrinsic and extrinsic rewards of an ethical organization are tethered to its organizational culture and business ethics. Where each of the four elements needed for ethical behavior is reliable and well supported, organizational ethics will be evident throughout the organization, and the organization, its employees, and other entities will receive intrinsic and extrinsic rewards. Actions of employees can range from whistle blowing (intrinsic) to the extraordinary action of an hourly employee who bought all the peanut butter produced by his employer because the labels were not centered, knowing that his employer (extrinsic) would reimburse him in full.[4]

Above and Beyond


Going above and beyond is a standard part of the operational and strategic plans of organizations with positive organizational ethics. Above and beyond the quarterly or yearly income statements, such an entity plans for its employees by offering wellness programs along with general health coverage, and/or a viable, stable retirement plan. Further, an organization may allow paid maternity leave, or even paid time off for new parents after an adoption. Other perks may include on-site childcare, flexible work hours, employee education reimbursement, and even telecommuting on various days of the week. These are just a few examples of the employee benefits that quality organizations offer. Such benefits are not mandated by law, and they represent only a few of the benefits that the best-known corporations and firms offer to their employees throughout the world.

Leadership and Theory for Ethics in an Organization


There are many theories and organizational studies that relate loosely to organizational ethics, for "organizations" and "ethics" are wide and varied in application and scope. These theories and studies can concern individuals, teams, stakeholders, management, leadership, human resources, and group interaction, as well as the psychological framework behind each area, including the distribution of job tasks within various types of organizations. Among these areas, it is the influence of leadership in any organization that cannot go unexamined, because leaders must have a clear understanding of the organization's vision, goals (including immediate and long-term strategic plans), and values. It is the leadership that sets the tone for organizational impression management (strategic actions taken by an organization to create a positive image for both internal and external publics). In turn, leadership directly influences the organizational symbolism, which reflects the culture, the language of the members, any meaningful objects and representations, and how someone may act or think within the organization. The values and ideals within an organization generally center upon values for business, the theoretical approach that most leaders select to present to their "co-members" (who in truth may be subordinates); indeed, an examination of business methodology reveals that most leaders approach ethical theory from the perspective of values for business.[5][6] Importantly, alongside presenting the vision, values, and goals of the organization, the leadership should infuse a spirit of empowerment in its members. Leadership using this style of empowerment is based upon the view that achieving organizational ownership of company values is a continuous process of communication, discussion, and debate throughout all areas of the organization.[7]

Stakeholder and Other Theories


Whether it is a team, a small group, or a large international entity, the ability of any organization to reason, act rationally, and respond ethically is paramount. Leadership must have the ability to recognize the needs of its members (called stakeholders in some theories or models), especially the very basic desire of a person to belong and fit into the organization. Stakeholder theory implies that all stakeholders (or individuals) must be treated equally, regardless of the fact that some people will obviously contribute more than others to an organization.[8] Leaders not only have to put aside their individual (or personal) ambitions, along with any prejudice, in order to present the goals of the organization; they also have to keep the stakeholders engaged for the benefit of the organization. Further, it is leadership that has to be able to influence the stakeholders by presenting the strong minority voice in order to move the organization's members toward ethical behavior. Importantly, the leadership (or stakeholder management) has to have the desire, the will, and the skills to ensure that the other stakeholders' voices are respected within the organization, while also ensuring that those other voices are not expressing views (or needs, with respect to Maslow's hierarchy of needs) that are not shared by the larger majority of the members (or stakeholders). Therefore, stakeholder management, like any other leadership of an organization, has to take upon itself the arduous task of ensuring an ethics system for its own management styles, personalities, systems, performance, plans, policies, strategies, productivity, openness, and even risk within its culture or industry.

Ethical System Implementation and Consideration


Developing and implementing an ethics system is difficult, because there is no single, decisive standard that can be applied across the board to every organization, given each organization's own culture. The implementation should also cover the entire range of operations within the organization. If it is not implemented pragmatically, and with empathic caution for the needs, desires, and personalities of the stakeholders (consider the Big Five personality traits) and for the culture, then stakeholders may take unethical views, and unethical behavior may spread throughout the organization. Therefore, although it may require a great deal of time, stakeholder management should consider the rational decision-making model for implementing the various aspects of an ethics system. If implementation is done successfully, then all stakeholders, not just the leadership, accept the task of benchmarking the ethics system, and each stakeholder feels empowered to make the moment-to-moment daily decisions that are ethically positive for the organization. When executed in a timely manner and with care, all stakeholders (including leadership) will have at the very least a positive and functional foundation for continuous improvement (or kaizen) to present as the norm for the organization's ethics.


Organizational learning
Organizational learning is an area of knowledge within organizational theory that studies models and theories about the way an organization learns and adapts. In Organizational development (OD), learning is a characteristic of an adaptive organization, i.e., an organization that is able to sense changes in signals from its environment (both internal and external) and adapt accordingly. (see adaptive system). OD specialists endeavor to assist their clients to learn from experience and incorporate the learning as feedback into the planning process.

How organizations learn


Argyris and Schon were the first to propose models that facilitate organizational learning; the following literature has followed in the tradition of their work.

Argyris and Schon (1978) distinguish between single-loop and double-loop learning, related to Gregory Bateson's concepts of first- and second-order learning. In single-loop learning, individuals, groups, or organizations modify their actions according to the difference between expected and obtained outcomes. In double-loop learning, the entities (individuals, groups or organizations) question the values, assumptions and policies that led to the actions in the first place; if they are able to view and modify those, then second-order or double-loop learning has taken place. Double-loop learning is learning about single-loop learning.

March and Olsen (1975) attempt to link individual and organizational learning. In their model, individual beliefs lead to individual action, which in turn may lead to organizational action and a response from the environment, which may induce improved individual beliefs; the cycle then repeats. Learning occurs as better beliefs produce better actions.

Kim (1993), in an article titled "The link between individual and organizational learning", integrates Argyris, March and Olsen, and another model by Kofman into a single comprehensive model. He further analyzes all the possible breakdowns in the information flows in the model, leading to failures in organizational learning; for instance, what happens if an individual action is rejected by the organization for political or other reasons, so that no organizational action takes place?

Nonaka and Takeuchi (1995) developed a four-stage spiral model of organizational learning. They started by differentiating Polanyi's concept of "tacit knowledge" from "explicit knowledge" and describe a process of alternating between the two. Tacit knowledge is personal, context-specific, subjective knowledge, whereas explicit knowledge is codified, systematic, formal, and easy to communicate. The tacit knowledge of key personnel within the organization can be made explicit, codified in manuals, and incorporated into new products and processes; this process they called "externalization". The reverse process (from explicit to tacit) they call "internalization", because it involves employees internalizing an organization's formal rules, procedures, and other forms of explicit knowledge. They also use the term "socialization" to denote the sharing of tacit knowledge, and the term "combination" to denote the dissemination of codified knowledge. According to this model, knowledge creation and organizational learning take the path of socialization, externalization, combination, internalization, socialization, externalization, combination, and so on, in an infinite spiral.

Nick Bontis et al. (2002) empirically tested a model of organizational learning that encompassed both stocks and flows of knowledge across three levels of analysis: individual, team and organization. Results showed a negative and statistically significant relationship between the misalignment of stocks and flows and organizational performance.

Flood (1999) discusses the concept of organizational learning from Peter Senge and the origins of the theory from Argyris and Schon. The author aims to "re-think" Senge's The Fifth Discipline through systems theory, developing the concepts by integrating them with key theorists such as Bertalanffy, Churchman, Beer, Checkland and Ackoff. Conceptualizing organizational learning in terms of structure, process, meaning, ideology and knowledge, the author provides insights into Senge within the context of the philosophy of science and the way in which systems theorists were influenced by twentieth-century advances beyond the classical assumptions of science.
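Among the models above, the single-loop/double-loop distinction can be sketched with the classic thermostat analogy. The example and its numbers are my own illustration, not from Argyris and Schon's text: single-loop learning adjusts the action to close the gap to a fixed goal, while double-loop learning also questions the goal itself.

```python
def single_loop(reading, setpoint):
    # Single loop: modify the action based on the gap between the obtained
    # outcome (temperature reading) and the expected outcome (setpoint).
    return reading < setpoint  # True -> turn the heater on

def double_loop(setpoint, comfort_complaints):
    # Double loop: question the governing value itself. If outcomes keep
    # failing despite correct single-loop corrections, revise the setpoint.
    if comfort_complaints > 3:
        return setpoint - 1  # the assumption changes, not just the action
    return setpoint

print(single_loop(18, 21))  # heater turns on
print(double_loop(21, 5))   # setpoint itself is revised downward
```

The single-loop function can only ever toggle the heater; only the double-loop function can change the value (the setpoint) that drives those actions, which is exactly the distinction the model draws.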
Imants (2003) provides theory development for organizational learning in schools within the context of teachers' professional communities as learning communities, which is compared and contrasted with teaching communities of practice. Along with an analysis of the paradoxes of organizational learning in schools, two mechanisms for professional development and organizational learning are defined as critical for effective organizational learning: (1) steering information about teaching and learning, and (2) encouraging interaction among teachers and workers.

Common (2004) discusses the concept of organisational learning in a political environment, to improve public policy-making. The author details the initial uncontroversial reception of organisational learning in the public sector and the development of the concept alongside the learning organization. Definitional problems in applying the concept to public policy are addressed, noting research in UK local government that identifies obstacles to organizational learning in the public sector: (1) overemphasis on the individual, (2) resistance to change and politics, (3) social learning that is self-limiting (i.e. individualism), and (4) a political "blame culture". The concepts of policy learning and policy transfer are then defined, with detail on the conditions for realizing organizational learning in the public sector.

Organizational knowledge
What is the nature of knowledge created, traded and used in organizations? Some of this knowledge can be termed technical: knowing the meaning of technical words and phrases, being able to read and make sense of economic data, and being able to act on the basis of law-like generalizations. Scientific knowledge is propositional; it takes the form of causal generalizations: whenever A, then B. For example, whenever water reaches a temperature of 100 degrees, it boils; whenever it boils, it turns into steam; steam generates pressure when in an enclosed space; pressure drives engines; and so forth. A large part of the knowledge used by managers, however, does not assume this form. The complexities of a manager's task are such that applying A may result in B, C, or Z. A recipe or an idea that solved a particular problem very well may, in slightly different circumstances, backfire and lead to ever more problems. More important for a manager than knowing a whole lot of theories, recipes and solutions is knowing which theory, recipe or solution to apply in a specific situation. Sometimes a manager may combine two different recipes, or adapt an existing recipe with some important modification, to meet the situation at hand. Managers often use knowledge in the way that a handyman uses his or her skills and the materials and tools at hand to meet the demands of a particular situation. Unlike an engineer, who plans every action carefully and scientifically to deliver the desired outcome (such as a steam engine), a handyman is flexible and opportunistic, often using materials in unorthodox or unusual ways, and relies a lot on trial and error. This is what the French call bricolage: the resourceful and creative deployment of skills and materials to meet each challenge in an original way. The rule of thumb, far from being the enemy of management, is what managers throughout the world have relied upon to inform their action.
In contrast to the scientific knowledge that guides the engineer, the physician or the chemist, managers are often informed by a different type of know-how, sometimes referred to as narrative knowledge or experiential knowledge: the kind of knowledge that comes from experience and resides in stories and narratives of how real people in the real world dealt with real-life problems, successfully or unsuccessfully. Narrative knowledge is what we use in everyday life to deal with awkward situations, as parents, as consumers, as patients and so forth. We seek the stories of people in the same situation as ourselves and try to learn from them. As the Chinese proverb says, "A wise man learns from experience; a wiser man learns from the experience of others." Narrative knowledge usually takes the form of organization stories (see organization story and organizational storytelling). These stories enable participants to make sense of the difficulties and challenges they face; by listening to stories, members of organizations learn from each other's experiences and adapt the recipes used by others to address their own difficulties and problems. Narrative knowledge is not only the preserve of managers. Most professionals (including doctors, accountants, lawyers, business consultants and academics) rely on narrative knowledge, in addition to their specialist technical knowledge, when dealing with concrete situations as part of their work. More generally, narrative knowledge represents an endlessly mutating reservoir of ideas, recipes and stories that are traded mostly by word of mouth or on the internet. They are often apocryphal and may be inaccurate or untrue - yet they have the power to influence people's sensemaking and actions.

Individual versus organizational learning


Learning by individuals in an organizational context is a well-understood process. This is the traditional domain of human resources, including activities such as training, increasing skills, work experience, and formal education. Given that the success of any organization is founded on the knowledge of the people who work for it, these activities will and, indeed, must continue. However, individual learning is only a prerequisite to organizational learning. Others take it further with continuous learning. The world is orders of magnitude more dynamic than that of our parents, or even than when we were young. Waves of change are crashing on us virtually one on top of another. Change has become the norm rather than the exception. Continuous learning throughout one's career has become essential to remain relevant in the workplace. Again, this is necessary but not sufficient to describe organizational learning. What does it mean to say that an organization learns? Simply summing individual learning is inadequate to model organizational learning. The following definition outlines the essential difference between the two: a learning organization actively creates, captures, transfers, and mobilizes knowledge to enable it to adapt to a changing environment. Thus, the key aspect of organizational learning is the interaction that takes place among individuals. A learning organization does not rely on passive or ad hoc processes in the hope that organizational learning will take place through serendipity or as a by-product of normal work. A learning organization actively promotes, facilitates, and rewards collective learning. Creating (or acquiring) knowledge can be an individual or group activity. However, this is normally a small-scale, isolated activity steeped in the jargon and methods of knowledge workers. As first stated by Lucilius in the 1st century BC, "Knowledge is not knowledge until someone else knows that one knows."
Capturing individual learning is the first step to making it useful to an organization. There are many methods for capturing knowledge and experience, such as publications, activity reports, lessons learned, interviews, and presentations. Capturing includes organizing knowledge in ways that people can find it; multiple structures facilitate searches regardless of the user's perspective (e.g., who, what, when, where, why, and how). Capturing also includes storage in repositories, databases, or libraries to ensure that the knowledge will be available when and as needed. Transferring knowledge requires that it be accessible to everyone when and where they need it. In a digital world, this involves browser-activated search engines to find what one is looking for. A way to retrieve content is also needed, which requires a communication and network infrastructure. Tacit knowledge may be shared through communities of practice or by consulting experts. It is also important that knowledge is presented in a way that users can understand it. It must suit the needs of the user to be accepted and internalized. Mobilizing knowledge involves integrating and using relevant knowledge from many, often diverse, sources to solve a problem or address an issue. Integration requires interoperability standards among various repositories. Using knowledge may be through simple reuse of existing solutions that have worked previously. It may also come through adapting old solutions to new problems. Conversely, a learning organization learns from mistakes or recognizes when old solutions no longer apply. Use may also be through synthesis; that is, creating a broader meaning or a deeper level of understanding. Clearly, the more rapidly knowledge can be mobilized and used, the more competitive an organization becomes.

An organization must learn so that it can adapt to a changing environment. Historically, the life-cycle of organizations typically spanned stable environments between major socioeconomic changes. Blacksmiths who didn't become mechanics simply fell by the wayside. More recently, many Fortune 500 companies of two decades ago no longer exist. Given the ever-accelerating rate of global-scale change, learning and adaptation become ever more critical to organizational relevance, success, and ultimate survival. Organizational learning is a social process, involving interactions among many individuals and leading to well-informed decision making. Thus, a culture that learns and adapts as part of everyday working practices is essential. Reuse must equal or exceed reinvention as a desirable behavior. Adapting an idea must be rewarded along with its initial creation. Sharing to empower the organization must supersede controlling to empower an individual. Clearly, shifting from individual to organizational learning involves a non-linear transformation. Once someone learns something, it is available for their immediate use. In contrast, organizations need to create, capture, transfer, and mobilize knowledge before it can be used. Although technology supports the latter, these are primarily social processes within a cultural environment, and cultural change, however necessary, is a particularly challenging undertaking.

Learning organization
The work in Organizational Learning can be distinguished from the work on a related concept, the learning organization. This latter body of work, in general, uses the theoretical findings of organizational learning (and other research in organizational development, system theory, and cognitive science) in order to prescribe specific recommendations about how to create organizations that continuously and effectively learn. This practical approach was championed by Peter Senge in his book The Fifth Discipline.

Diffusion of innovations
Diffusion of innovations theory explores how and why people adopt new ideas, practices and products. It may be seen as a subset of the anthropological concept of diffusion and can help to explain how ideas are spread by individuals, social networks and organizations.

Organizational structure
Pre-bureaucratic
Pre-bureaucratic (entrepreneurial) structures lack standardization of tasks. This structure is most common in smaller organizations and is best used to solve simple tasks. The structure is totally centralized: the strategic leader makes all key decisions and most communication is done by one-on-one conversations. It is particularly useful for new (entrepreneurial) businesses, as it enables the founder to control growth and development. Such structures are usually based on traditional domination or charismatic domination in the sense of Max Weber's tripartite classification of authority.

Bureaucratic


Bureaucratic structures have a certain degree of standardization. They are better suited for more complex or larger-scale organizations. The tension between bureaucratic and non-bureaucratic structures is echoed in Burns and Stalker's (1961) distinction between mechanistic and organic structures.

Functional Structure
The organization is structured according to functional areas instead of product lines. The functional structure groups specialists with similar skills into separate units. This structure is best used when creating specific, uniform products. A functional structure is well suited to organizations which have a single or dominant core product, because each subunit becomes extremely adept at performing its particular portion of the process. Functional structures are economically efficient but lack flexibility, and communication between functional areas can be difficult.

Matrix Structure
A matrix structure overlays two organizational forms in order to leverage the benefits of both. Some global corporations adopt a matrix structure that combines geographical with product divisions. The product-based structure allows the company to exploit global economies of scale, whereas the geographic structure keeps knowledge close to the needs of individual countries. Many organizations also have degrees of matrix structure, meaning that each divisional group has specific responsibilities, but some issues must be decided jointly across all of these groups. Instead of combining two divisional structures, some matrix structures overlap a functional structure with project teams. Employees are assigned to a cross-functional project team, yet they also belong to a permanent functional unit (e.g., engineering, marketing) to which they return when a project is completed. Matrix structures create the unusual situation where employees have two bosses: a project team member would report to the project leader on a daily basis, but also to the functional leader (engineering, marketing, etc.). Some companies give these managers equal power; more often, each has authority over different elements of the employee's or work unit's tasks. Matrix structures that combine two divisionalized forms also have a dual-boss reporting system, but only for some employees.

Divisional Structure
A divisional structure is formed when an organization is split up into a number of self-managed units, each of which operates as a profit center. Such a division may occur on the basis of product or market, or a combination of the two, with each unit tending to operate along functional or product lines, but with certain key functions (e.g., finance, personnel, corporate planning) provided centrally, usually at a company headquarters.

Post-Bureaucratic
The term post-bureaucratic is used in two senses in the organizational literature: one generic and one much more specific (see Grey & Garsten, 2001). In the generic sense, the term post-bureaucratic is often used to describe a range of ideas developed since the 1980s that specifically contrast themselves with Weber's ideal-type bureaucracy. This may include Total Quality Management, culture management and the matrix organization, amongst others. None of these, however, has left behind the core tenets of bureaucracy: hierarchies still exist, authority is still Weber's rational, legal type, and the organisation is still rule-bound. Heckscher, arguing along these lines, describes them as cleaned-up bureaucracies (Heckscher & Donnellon, 1994) rather than a fundamental shift away from bureaucracy. Gideon Kunda, in his classic study of culture management at 'Tech', argued that 'the essence of bureaucratic control - the formalisation, codification and enforcement of rules and regulations - does not change in principle... it shifts focus from organizational structure to the organization's culture'. Another, smaller group of theorists has developed the theory of the post-bureaucratic organization. Heckscher and Donnellon (1994) provide a detailed discussion which attempts to describe an organization that is fundamentally not bureaucratic. Heckscher has developed an ideal-type post-bureaucratic organization in which decisions are based on dialogue and consensus rather than authority and command; the organisation is a network rather than a hierarchy, open at the boundaries (in direct contrast to culture management); and there is an emphasis on meta-decision-making rules rather than decision-making rules. This sort of horizontal, consensus-based decision-making model is often used in housing cooperatives, other cooperatives, and when running a non-profit or community organization. It is used in order to encourage participation and to help empower people who normally experience oppression in groups. Still other theorists are developing a resurgence of interest in complexity theory and organizations, and have focused on how simple structures can be used to engender organizational adaptations. For instance, Miner and colleagues (2000) studied how simple structures could be used to generate improvisational outcomes in product development. Their study makes links between simple structures and improvisational learning. Other scholars such as Jan Rivkin, Kathleen Eisenhardt, Nicolaj Siggelkow, and Nelson Repenning revive an older interest in how structure and strategy relate in dynamic environments. See also: Management cybernetics

Five Phases of The Organizational Life Cycle


Organizations go through different phases of growth. The first challenge for executives who wish to grow their organizations is to understand which phase of the organizational life cycle the organization is in. Many organizations will enter the decline phase unless they have in place a rigorous program of transformational leadership development. Different experts will argue over how many phases there are, but there is elegance in using something easy to remember. We divide the organizational life cycle into the following phases:

Startup (or Birth).
Growth. This is sometimes divided into an early growth phase (fast growth) and a maturity phase (slow growth or no growth). However, maturity often leads to decline.
Decline. When in decline, an organization will either undergo Renewal or Death and bankruptcy.

Each of these phases presents different management and leadership challenges that one must deal with.

The Start-Up Phase: "Getting ready is the secret of success." In this phase, we see the entrepreneur thinking about the business, a management group formed, and a business plan written. For entrepreneurs needing money to kick-start the business, the company goes into the growth phase once the investor writes the check. For those that don't need outside funds, start-up ends when you declare yourself open for business.

The Growth Phase: "It was the best of times, it was the worst of times. It was the age of wisdom, it was the age of foolishness, it was the spring of hope, it was the winter of despair." In the growth phase, one expects to see revenues climb, new services and products developed, more employees hired and so on. The management textbooks love to assume that sales grow each year. The reality is much different, since a company can have both good and bad years depending on market conditions. In organizations that have been around for a few years, a very interesting thing happens: dry rot sets in.
Dry rot has many symptoms, which is why many companies have different types of programs relating to organizational development in place.

The Decline Phase: "Corporate insanity is doing the same thing, the same way, but expecting different results."


Using the above definition, one finds a tremendous amount of corporate insanity out there. Management expects next year to be better, but doesn't know how, or is unwilling to change, to get better results. If one can detect the symptoms of decline early, one can more easily deal with it. Some of the more obvious signs are declining sales relative to competitors, disappearing profit margins, and debt loads which continue to grow year after year. However, by the time the accountants figure out that the organization is in trouble, it's often too late.

The Renewal Phase: "It is not death that a man should fear, but he should fear never beginning to live." Decline doesn't have to continue, however. External experts have focused on the importance of organizational development as a way of preventing decline or reducing its effects. A story from Aesop's Fables might help here. A horse rider took the utmost pains with his charger. As long as the war lasted, he looked upon him as his fellow-helper in all emergencies, and fed him carefully with hay and corn. But when the war was over, he only allowed him chaff to eat and made him carry heavy loads of wood, subjecting him to much slavish drudgery and ill-treatment. War was again proclaimed, however, and when the trumpet summoned him to his standard, the soldier put on his charger its military trappings, and mounted, being clad in his heavy coat of mail. The horse fell down straightway under the weight, no longer equal to the burden, and said to his master, "You must now go to the war on foot, for you have transformed me from a Horse into an Ass; and how can you expect that I can again turn in a moment from an Ass to a Horse?" One way to reverse dry rot is through the use of training as a way of injecting new knowledge and skills. One can also put in place a rigorous program to change and transform the organization's culture. This assumes, though, that one has enough transformational leaders to challenge the status quo.
Without the right type of leadership, the organization will likely spiral down to bankruptcy. Failure: "Advice after injury is like medicine after death." As many as 80% of business failures occur due to factors within the executive's control. Even firms close to bankruptcy can overcome tremendous adversity to nurse themselves back to financial health. Lee Iacocca's turnaround of the Chrysler Corporation is one shining example. In some cases, failure means being acquired and merged into a larger organization. In other cases, it occurs when an organization elects or is forced into bankruptcy. This does not signify that the organization ceases to exist, since it can limp along for many years by going in and out of bankruptcy court.

Predictive analytics
Predictive analytics encompasses a variety of techniques from statistics and data mining that process current and historical data in order to make predictions about future events. Such predictions rarely take the form of absolute statements, and are more likely to be expressed as values that correspond to the odds of a particular event or behavior taking place in the future. In business, the models often process historical and transactional data to identify the risk or opportunity associated with a specific customer or transaction. These analyses weigh the relationship between many data elements to isolate each customer's risk or potential, which guides the action on that customer. Predictive analytics is widely used in making customer decisions. One of the most well-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer's credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time. Predictive analytics is also used in insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields.
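The rank-ordering step mentioned above is simple once a scoring model has produced probabilities. A minimal sketch, with invented applicant IDs and default probabilities standing in for a real model's output:

```python
# Hypothetical illustration: rank-ordering applicants by modeled default risk.
# The probabilities below are invented for the example; a real scoring model
# would derive them from credit history, application data, etc.

applicants = [
    {"id": "A", "p_default": 0.08},
    {"id": "B", "p_default": 0.31},
    {"id": "C", "p_default": 0.02},
    {"id": "D", "p_default": 0.17},
]

# Lowest predicted default probability first = best credit risk first.
ranked = sorted(applicants, key=lambda a: a["p_default"])
print([a["id"] for a in ranked])  # → ['C', 'A', 'D', 'B']
```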


Types of predictive analytics


Generally, predictive analytics is used to mean predictive modeling. However, people are increasingly using the term to describe related analytic disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.

Predictive models
Predictive models analyze past performance to assess how likely a customer is to exhibit a specific behavior in the future in order to improve marketing effectiveness. This category also encompasses models that seek out subtle data patterns to answer questions about customer performance, such as fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk or opportunity of a given customer or transaction, in order to guide a decision.

Descriptive models
Descriptive models describe relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. But descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Descriptive models are often used offline, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be used to develop agent-based models that simulate a large number of individualized agents to predict possible futures.

Decision models
Decision models describe the relationship between all the elements of a decision: the known data (including the results of predictive models), the decision, and the forecast results of the decision, in order to predict the results of decisions involving many variables. These models can be used in optimization, a data-driven approach to improving decision logic that involves maximizing certain outcomes while minimizing others. Decision models are generally used offline, to develop decision logic or a set of business rules that will produce the desired action for every customer or circumstance.
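A decision model of this kind can be sketched in a few lines: a predictive model supplies the probability that a customer responds, and the decision logic picks the action with the highest expected value. The action names and payoffs below are hypothetical.

```python
# Sketch of a decision model: combine a predictive model's output (probability
# of response) with the economics of each action, and choose the action with
# the highest expected value. All numbers are hypothetical.

def expected_value(p_respond, revenue_if_respond, cost_of_action):
    return p_respond * revenue_if_respond - cost_of_action

def best_action(p_respond, actions):
    """actions: list of (name, revenue_if_respond, cost_of_action) tuples."""
    return max(actions, key=lambda a: expected_value(p_respond, a[1], a[2]))[0]

actions = [
    ("do_nothing", 0.0, 0.0),
    ("send_mail", 120.0, 2.0),    # cheap action, modest payoff
    ("phone_call", 200.0, 15.0),  # expensive action, larger payoff
]

# A likely responder justifies the costly action; an unlikely one does not.
print(best_action(0.40, actions))  # phone_call: 0.4*200-15 = 65 beats mail's 46
print(best_action(0.01, actions))  # do_nothing: mail = -0.8, call = -13
```

This is the "business rules" output the text describes: the same rule can then be applied offline to every customer segment.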

Predictive analytics
Definition
Predictive analytics is an area of statistical analysis that deals with extracting information from data and using it to predict future trends and behavior patterns. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict future outcomes.
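The definition above can be illustrated with the simplest possible case: fitting the relationship between one explanatory variable and one predicted variable by ordinary least squares, then using it to predict a future outcome. The data are invented for the example, and only the standard library is used.

```python
# Minimal sketch: learn a relationship from past occurrences, then predict.
# Ordinary least squares for a single predictor (closed-form solution).

def fit_ols(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # slope = covariance(x, y) / variance(x); intercept from the means
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

# Past occurrences (hypothetical data): x might be ad spend, y revenue.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 7.9]

slope, intercept = fit_ols(xs, ys)
prediction = slope * 5.0 + intercept  # exploit the relationship for x = 5
```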

Current uses
Although predictive analytics can be put to use in many applications, we outline a few examples where predictive analytics has shown positive impact in recent years.

Analytical Customer Relationship Management (CRM)
Analytical Customer Relationship Management is a frequent commercial application of predictive analysis. Methods of predictive analysis are applied to customer data to pursue CRM objectives.

Direct marketing


Product marketing is constantly faced with the challenge of coping with the increasing number of competing products, different consumer preferences and the variety of methods (channels) available to interact with each consumer. Efficient marketing is a process of understanding the amount of variability and tailoring the marketing strategy for greater profitability. Predictive analytics can help identify consumers with a higher likelihood of responding to a particular marketing offer. Models can be built using data from consumers' past purchasing history and past response rates for each channel. Additional information about the consumers' demographic, geographic and other characteristics can be used to make more accurate predictions. Targeting only these consumers can lead to a substantial increase in response rate, which can lead to a significant reduction in cost per acquisition. Apart from identifying prospects, predictive analytics can also help to identify the most effective combination of products and marketing channels that should be used to target a given consumer.

Cross-sell
Corporate organizations often collect and maintain abundant data (e.g. customer records, sale transactions), and exploiting hidden relationships in the data can provide a competitive advantage to the organization. For an organization that offers multiple products, an analysis of existing customer behavior can lead to efficient cross-selling of products. This directly leads to higher profitability per customer and strengthening of the customer relationship. Predictive analytics can help analyze customers' spending, usage and other behavior, and help cross-sell the right product at the right time.

Customer retention
With the number of competing services available, businesses need to focus efforts on maintaining continuous consumer satisfaction. In such a competitive scenario, consumer loyalty needs to be rewarded and customer attrition needs to be minimized. Businesses tend to respond to customer attrition on a reactive basis, acting only after the customer has initiated the process to terminate service. At this stage, the chance of changing the customer's decision is very small. Proper application of predictive analytics can lead to a more proactive retention strategy. By frequent examination of a customer's past service usage, service performance, spending and other behavior patterns, predictive models can determine the likelihood of a customer wanting to terminate service sometime in the near future. An intervention with lucrative offers can increase the chance of retaining the customer. Silent attrition, the behavior of a customer to slowly but steadily reduce usage, is another problem faced by many companies. Predictive analytics can also predict this behavior before it occurs, so that the company can take proper actions to increase customer activity.

Underwriting
Many businesses have to account for risk exposure due to their different services and determine the cost needed to cover the risk. For example, auto insurance providers need to accurately determine the amount of premium to charge to cover each automobile and driver. A financial company needs to assess a borrower's potential and ability to pay before granting a loan. For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy and other records where available, to predict how expensive an enrollee is likely to be in the future. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.

Collection analytics
Every portfolio has a set of delinquent customers who do not make their payments on time. The financial institution has to undertake collection activities on these customers to recover the amounts due. A lot of collection resources are wasted on customers who are difficult or impossible to recover. Predictive analytics can help optimize the allocation of collection resources by identifying the most effective collection agencies, contact strategies, legal actions and other strategies for each customer, thus significantly increasing recovery while reducing collection costs.

Fraud detection
Fraud is a big problem for many businesses and can be of various types. Inaccurate credit applications, fraudulent transactions, identity theft and false insurance claims are some examples of this problem. These problems plague firms across the spectrum; some examples of likely victims are credit card issuers, insurance companies, retail merchants, manufacturers, business-to-business suppliers and even service providers. This is an area where a predictive model is often used to help weed out the "bads" and reduce a business's exposure to fraud.

Portfolio, product or economy-level prediction
Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below).

Statistical techniques
The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.

Regression Techniques
Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.

Linear Regression Model
The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions. The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters. Once the model has been estimated, we would like to know whether the predictor variables belong in the model, i.e. whether the estimate of each variable's contribution is reliable. To do this we can check the statistical significance of the model's coefficients, which can be measured using the t-statistic. This amounts to testing whether the coefficient is significantly different from zero. How well the model predicts the dependent variable based on the values of the independent variables can be assessed by using the R² statistic. It measures the predictive power of the model, i.e. the proportion of the total variation in the dependent variable that is explained (accounted for) by variation in the independent variables.

Discrete choice models
Multivariate regression (above) is generally used when the response variable is continuous with an unbounded range. Often, however, the response variable is not continuous but discrete. While mathematically it is feasible to apply multivariate regression to discrete ordered dependent variables, some of the assumptions behind the theory of multivariate linear regression no longer hold, and there are other techniques, such as discrete choice models, which are better suited for this type of analysis. If the dependent variable is discrete, some of those superior methods are logistic regression, multinomial logit and probit models. Logistic regression and probit models are used when the dependent variable is binary.

Logistic regression
In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which is basically a method that transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison's Logistic Regression for more information on the theory of logistic regression). The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). A test assessing the goodness-of-fit of a classification model is the Hosmer and Lemeshow test.

Multinomial logistic regression
An extension of the binary logit model to cases where the dependent variable has more than two categories is the multinomial logit model. In such cases collapsing the data into two categories might not make good sense or may lead to a loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for example, colors like red, blue, green). Some authors have extended multinomial regression to include feature selection/importance methods such as Random multinomial logit.

Probit regression
Probit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics. A good way to understand the key difference between probit and logit models is to assume that there is a latent variable z. We do not observe z but instead observe y, which takes the value 0 or 1. In the logit model we assume that the error term follows a logistic distribution. In the probit model we assume that it follows a standard normal distribution. Note that in social sciences (for example, economics), probit is often used to model situations where the observed variable y is continuous but takes values between 0 and 1.

Logit vs. Probit
The probit model has been around longer than the logit model. They look identical, except that the logistic distribution tends to be a little flatter tailed. In fact, one of the reasons the logit model was formulated was that the probit model was extremely hard to compute because it involved calculating difficult integrals. Modern computing, however, has made this computation fairly simple. The coefficients obtained from the logit and probit models are also fairly close. However, the odds ratio makes the logit model easier to interpret. For practical purposes, the only reasons for choosing the probit model over the logistic model would be:
- There is a strong belief that the underlying distribution is normal.
- The actual event is not a binary outcome (e.g. bankrupt/not bankrupt) but a proportion (e.g. proportion of the population at different debt levels).
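The contrast between the two link functions can be seen directly by evaluating both cumulative distribution functions side by side. This is a sketch of the distributions only, not a model-fitting routine; note that, because the standard logistic distribution has a larger variance than the standard normal, fitted logit coefficients are typically larger than probit coefficients by a roughly constant factor.

```python
# Compare the logistic CDF (logit link) with the standard normal CDF (probit
# link): both map an unbounded latent value z to a probability in (0, 1).
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def normal_cdf(z):
    # Standard normal CDF expressed via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"z={z:+.1f}  logit={logistic_cdf(z):.3f}  probit={normal_cdf(z):.3f}")
```

Both curves pass through 0.5 at z = 0 and approach 0 and 1 in the tails, which is why the two models usually give similar classifications.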


Time series models
Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for. As a result, standard regression techniques cannot be applied to time series data, and methodology has been developed to decompose the trend, seasonal and cyclical components of the series. Modeling the dynamic path of a variable can improve forecasts, since the predictable component of the series can be projected into the future. Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive (AR) models and moving average (MA) models. The Box-Jenkins methodology (1976), developed by George Box and G.M. Jenkins, combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series. Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Non-stationary time series have a pronounced trend and do not have a constant long-run mean or variance. Box and Jenkins proposed a three-stage methodology which includes: model identification, estimation and validation. The identification stage involves determining whether the series is stationary and whether seasonality is present, by examining plots of the series and its autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally, the validation stage involves diagnostic checking, such as plotting the residuals to detect outliers and evidence of model fit.
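As an illustration of the AR building block, the sketch below simulates an AR(1) process and recovers its coefficient by least squares; the series, the true coefficient of 0.8 and the sample size are all invented for the example (real work would use a dedicated time-series library):

```python
import random

def fit_ar1(series):
    # Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t
    # (the series is assumed de-meaned and stationary).
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Simulate an AR(1) process with phi = 0.8, then recover the coefficient.
random.seed(42)
phi_true = 0.8
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

phi_hat = fit_ar1(x)
forecast = phi_hat * x[-1]  # one-step-ahead forecast of the next value
```

The one-step-ahead forecast at the end shows the point of the exercise: once the dynamic structure is estimated, the predictable component can be projected forward.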
In recent years, time series models have become more sophisticated and attempt to model conditional heteroskedasticity, with models such as ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity) frequently used for financial time series. In addition, time series models are also used to understand inter-relationships among economic variables represented by systems of equations, using VAR (vector autoregression) and structural VAR models.

Survival or duration analysis
Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences, such as economics, as well as in engineering (reliability and failure time analysis). Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative, and therefore normality cannot be assumed when dealing with duration/survival data; the normality assumption of regression models is violated. A censored observation is defined as an observation with incomplete information. Censoring introduces distortions into traditional statistical methods and is essentially a defect of the sample data. The assumption is that if the data were not censored they would be representative of the population of interest. In survival analysis, censored observations arise whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time. An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t.
Another concept related to the hazard rate is the survival function, which can be defined as the probability of surviving to time t.
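A minimal sketch of the non-parametric Kaplan-Meier estimator mentioned later in this section can make concrete how censoring enters the survival function; the six subjects and their censoring pattern are hypothetical:

```python
def kaplan_meier(times, events):
    # Kaplan-Meier estimate of the survival function S(t).
    # times:  observed durations; events: 1 if the terminal event occurred,
    # 0 if the observation was right-censored at that time.
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1] == 1:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk  # conditional survival at t
            curve.append((t, surv))
        n_at_risk -= deaths + censored        # censored cases leave the risk set
    return curve

# Six hypothetical subjects; event=0 marks a censored observation.
times  = [2, 3, 3, 5, 8, 9]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Note how the censored subjects never trigger a drop in the curve themselves; they only shrink the risk set used for later event times.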


Most models try to model the hazard rate by choosing the underlying distribution depending on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence, a decreasing hazard shows negative duration dependence, whereas a constant hazard is a process with no memory, usually characterized by the exponential distribution. Some of the distributional choices in survival models are: F, gamma, Weibull, log-normal, inverse normal, exponential, etc. All these distributions are for a non-negative random variable. Duration models can be parametric, non-parametric or semi-parametric. Some commonly used models are the Kaplan-Meier estimator (non-parametric) and the Cox proportional hazards model (semi-parametric).

Classification and regression trees
Classification and regression trees (CART) is a non-parametric technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively. Trees are formed by a collection of rules based on values of certain variables in the modeling data set. Rules are selected based on how well splits based on variable values can differentiate observations with respect to the dependent variable. Once a rule is selected and splits a node into two, the same logic is applied to each child node (i.e. it is a recursive procedure). Splitting stops when CART detects that no further gain can be made, or some pre-set stopping rules are met.

Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node. Each terminal node is uniquely defined by a set of rules.
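A one-split sketch shows how CART-style rules are chosen: for every candidate threshold on a feature, the weighted Gini impurity of the two resulting child nodes is computed, and the best split is kept. The toy data below are invented; a full tree would apply the same search recursively to each child node:

```python
def gini(labels):
    # Gini impurity of a set of class labels (0 for a pure node).
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    # Exhaustive search for the threshold on a single numeric feature
    # that minimizes the weighted Gini impurity of the two child nodes.
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Toy data: class 0 for x at or below 3, class 1 above.
xs = [1, 2, 2.5, 3, 4, 5, 6]
ys = [0, 0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(xs, ys)
```

Here the search recovers the rule "x <= 3", which separates the two classes perfectly, so the weighted impurity of the children is zero and no further splitting is needed.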

A very popular method for predictive analytics is Leo Breiman's random forests, or derived versions of this technique such as Random multinomial logit.

Multivariate adaptive regression splines
Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions. An important concept associated with regression splines is that of a knot: a knot is where one local regression model gives way to another, and thus is the point of intersection between two splines. In MARS, basis functions are the tool used for generalizing the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables. The MARS model almost always creates the basis functions in pairs. The approach deliberately overfits the model and then prunes back to the optimal model. The algorithm is computationally very intensive, and in practice one is required to specify an upper limit on the number of basis functions.
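The paired basis functions can be illustrated directly: at each knot, MARS creates a mirrored pair of hinge functions, each zero on one side of the knot and linear on the other. A sketch, with an arbitrary knot at 5:

```python
def hinge_pair(knot):
    # The mirrored pair of hinge (basis) functions created at a knot:
    # each is zero on one side of the knot and linear on the other.
    return (lambda x: max(0.0, x - knot),
            lambda x: max(0.0, knot - x))

h_plus, h_minus = hinge_pair(5.0)
values = [(x, h_plus(x), h_minus(x)) for x in (2.0, 5.0, 8.0)]
```

A fitted MARS model is then a weighted sum of such hinges, which is what makes the overall fit piecewise linear with kinks at the knots.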

Machine learning techniques


Machine learning, a branch of artificial intelligence, was originally employed to develop techniques to enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and
classification, it finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition and analysis of the stock market. In certain applications it is sufficient to directly predict the dependent variable without focusing on the underlying relationships between variables. In other cases, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events. A brief discussion of some of the methods commonly used for predictive analytics is provided below. A detailed study of machine learning can be found in Tom Mitchell's Machine Learning.

Neural networks
Neural networks are sophisticated nonlinear modeling techniques that are able to model complex functions. They can be applied to problems of prediction, classification or control in a wide spectrum of fields such as finance, medicine, engineering and physics. Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn the relationship between inputs and output through training. There are two types of training used by different networks, supervised and unsupervised training, with supervised being the most common. Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator, Delta-Bar-Delta, etc. These are applied to network architectures such as multilayer perceptrons, Kohonen networks, Hopfield networks, etc.

Radial basis functions
A radial basis function (RBF) is a function which has built into it a distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data.
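The interpolation use of RBFs can be sketched in plain Python: place one Gaussian basis function at each data point and solve a small linear system so that the weighted sum passes exactly through the data. The centers, values and width parameter below are arbitrary choices for the example:

```python
import math

def gaussian_rbf(r, eps=1.0):
    # Gaussian radial basis function of the distance r to a center.
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    # Tiny Gaussian elimination; adequate here because the RBF system
    # for these points is well conditioned (no pivoting needed).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for j in range(n):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [row[-1] for row in M]

def rbf_interpolator(centers, vals):
    # Interpolant s(x) = sum_i w_i * phi(|x - c_i|) that passes exactly
    # through every (center, value) pair.
    A = [[gaussian_rbf(abs(c1 - c2)) for c2 in centers] for c1 in centers]
    w = solve(A, vals)
    return lambda x: sum(wi * gaussian_rbf(abs(x - c))
                         for wi, c in zip(w, centers))

centers = [0.0, 1.0, 2.0]
vals = [1.0, 3.0, 2.0]
s = rbf_interpolator(centers, vals)
```

Between the data points the interpolant blends the Gaussians smoothly, which is exactly the smoothing behavior the text describes.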
Radial basis functions have been applied in the area of neural networks, where they are used as a replacement for the sigmoidal transfer function. Such networks have three layers: the input layer, the hidden layer with the RBF non-linearity and a linear output layer. The most popular choice for the non-linearity is the Gaussian. RBF networks have the advantage of not being locked into local minima as feed-forward networks such as the multilayer perceptron are.

Support vector machines
Support vector machines (SVM) are used to detect and exploit complex patterns in data by clustering, classifying and ranking the data. They are learning machines that are used to perform binary classifications and regression estimations. They commonly use kernel-based methods to apply linear classification techniques to non-linear classification problems. There are a number of types of SVM, such as linear, polynomial, sigmoid, etc.

Naïve Bayes
Naïve Bayes, based on Bayes' conditional probability rule, is used for performing classification tasks. Naïve Bayes assumes the predictors are statistically independent, which makes it an effective classification tool that is easy to interpret. It is best employed when faced with the "curse of dimensionality", i.e. when the number of predictors is very high.

k-nearest neighbours
The nearest neighbour algorithm (kNN) belongs to the class of pattern recognition statistical methods. The method does not impose a priori any assumptions about the distribution from which the modeling sample is
drawn. It involves a training set with both positive and negative values. A new sample is classified by calculating the distance to the nearest neighbouring training case. The sign of that point will determine the classification of the sample. In the k-nearest neighbour classifier, the k nearest points are considered and the sign of the majority is used to classify the sample. The performance of the kNN algorithm is influenced by three main factors: (1) the distance measure used to locate the nearest neighbours; (2) the decision rule used to derive a classification from the k nearest neighbours; and (3) the number of neighbours used to classify the new sample. It can be proved that, unlike other methods, this method is universally asymptotically convergent, i.e. as the size of the training set increases, if the observations are i.i.d., regardless of the distribution from which the sample is drawn, the predicted class will converge to the class assignment that minimizes misclassification error. See Devroye et al.
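A compact sketch of the k-nearest-neighbour decision rule described above, with an invented two-dimensional training set:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    # train: list of ((features...), label) pairs. Classify `query` by
    # majority vote among the k nearest training points (Euclidean distance).
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D training set: negatives cluster near the origin,
# positives cluster near (5, 5).
train = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1),
         ((5, 5), +1), ((4, 5), +1), ((5, 4), +1)]
label_near_origin = knn_classify(train, (0.5, 0.5), k=3)
label_far = knn_classify(train, (4.5, 4.5), k=3)
```

Changing k or the distance measure changes the decision boundary, which is exactly the point made by the three performance factors listed above.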

Popular tools
There are numerous tools available in the marketplace which help with the execution of predictive analytics. These range from those which need very little user sophistication (e.g. KnowledgeSEEKER or KXEN) to those that are designed for the expert practitioner (e.g. SPSS, or SAS modules such as STAT, ETS, OR, etc.). The difference between these tools is often in the level of customization and heavy data lifting allowed. For traditional statistical modeling, some of the popular tools are SAS, S-PLUS, SPSS and Stata. For machine learning / data mining applications, Enterprise Miner, KnowledgeSTUDIO, Clementine, KXEN Analytic Framework, InforSense and Excel Miner are some of the popularly used options. Classification tree analysis can be performed using CART software. R is a very powerful tool that can be used to perform almost any kind of statistical analysis, and is freely downloadable. WEKA is a freely available open-source collection of machine learning methods for pattern classification, regression, clustering and some types of meta-learning, which can be used for predictive analytics. RapidMiner is another freely available integrated open-source software environment for predictive analytics, data mining and machine learning, fully integrating WEKA and providing an even larger number of methods for predictive analytics. The widespread use of predictive analytics in industry has led to the proliferation of numerous service firms. Some of them are highly specialized (focusing, for example, on fraud detection or response modeling). Others provide predictive analytics services in support of a wide range of business problems. Predictive analytics competitions are also fairly common and often pit academics against industry practitioners (see, for example, the KDD Cup).

Conclusion
Predictive analytics adds great value to a business's decision-making capabilities by allowing it to formulate smart policies on the basis of predictions of future outcomes. A broad range of tools and techniques is available for this type of analysis, and their selection is determined by the analytical maturity of the firm as well as the specific requirements of the problem being solved.

Performance management (Business performance management (BPM))


Business performance management (BPM) is a set of processes that help organizations optimize their business performance. It is a framework for organizing, automating and analyzing business methodologies, metrics, processes and systems that drive business performance. BPM is seen as the next generation of business intelligence (BI). BPM helps businesses make efficient use of their financial, human, material and other resources.

History
An early reference to non-business performance management occurs in Sun Tzu's The Art of War. Sun Tzu claims that to succeed in war, one should have full knowledge of one's own strengths and weaknesses and
full knowledge of one's enemy's strengths and weaknesses. Lack of either one might result in defeat. A certain school of thought draws parallels between the challenges in business and those of war, specifically:
collecting data - both internal and external
discerning patterns and meaning in the data (analyzing)
responding to the resultant information

Prior to the start of the Information Age in the late 20th century, businesses sometimes took the trouble to laboriously collect data from non-automated sources. As they lacked computing resources to properly analyze the data, they often made commercial decisions primarily on the basis of intuition. As businesses started automating more and more systems, more and more data became available. However, collection remained a challenge due to a lack of infrastructure for data exchange or due to incompatibilities between systems. Reports on the data gathered sometimes took months to generate. Such reports allowed informed long-term strategic decision-making. However, short-term tactical decision-making continued to rely on intuition. In modern businesses, increasing standards, automation and technologies have led to vast amounts of data becoming available. Data warehouse technologies have set up repositories to store this data. Improved ETL and, more recently, enterprise application integration tools have sped up the collection of data. OLAP reporting technologies have allowed faster generation of new reports which analyze the data. Business intelligence has now become the art of sifting through large amounts of data, extracting useful information and turning that information into actionable knowledge. In 1989 Howard Dresner, a research analyst at Gartner, popularized "business intelligence" as an umbrella term to describe a set of concepts and methods to improve business decision-making by using fact-based support systems. Performance management is built on a foundation of BI, but marries it to the planning and control cycle of the enterprise, with enterprise planning, consolidation and modeling capabilities. The term "BPM" is now becoming confused with "business process management", and many are converting to the terms "corporate performance management" or "enterprise performance management".

What is BPM?
BPM involves consolidation of data from various sources, querying and analysis of the data, and putting the results into practice. BPM enhances processes by creating better feedback loops. Continuous and real-time reviews help to identify and eliminate problems before they grow. BPM's forecasting abilities help the company take corrective action in time to meet earnings projections. Forecasting is characterized by a high degree of predictability, which is put to good use in answering what-if scenarios. BPM is useful in risk analysis, in predicting outcomes of merger and acquisition scenarios, and in coming up with a plan to overcome potential problems. BPM provides key performance indicators (KPIs) that help companies monitor the efficiency of projects and employees against operational targets.

Metrics / Key Performance Indicators


For business data analysis to become a useful tool, however, it is essential that an enterprise understand its goals and objectives; essentially, it must know the direction in which it wants to progress. To help with this analysis, key performance indicators (KPIs) are laid down to assess the present state of the
business and to prescribe a course of action. More and more organizations have started to speed up the availability of data. In the past, data only became available after a month or two, which did not help managers react swiftly enough. Recently, banks have tried to make data available at shorter intervals and have reduced delays. For example, for businesses which have a higher operational/credit risk loading (such as credit cards and "wealth management"), a large multinational bank may make KPI-related data available weekly, and sometimes offer a daily analysis of numbers; realtime dashboards are also provided. This means data usually becomes available within 24 hours, necessitating automation and the use of IT systems. Most of the time, BPM simply means the use of several financial/non-financial metrics/key performance indicators to assess the present state of the business and to prescribe a course of action. Some of the areas from which top management analysis could gain knowledge by using BPM:
Customer-related numbers:
New customers acquired
Status of existing customers
Attrition of customers (including breakup by reason for attrition)
Turnover generated by segments of the customers - these could be demographic filters
Outstanding balances held by segments of customers and terms of payment - these could be demographic filters
Collection of bad debts within customer relationships
Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approval, rejections and pending numbers
Delinquency analysis of customers behind on payments
Profitability of customers by demographic segments and segmentation of customers by profitability
Campaign management
Realtime dashboard on key operational metrics
Clickstream analysis on a website
Key product portfolio trackers
Marketing channel analysis
Sales data analysis by product segments
Callcenter metrics

This is more an inclusive list than an exclusive one. The above more or less describes what a bank would do, but it could also apply to a telephone company or similar service-sector company. What is important is:
1. KPI-related data which is consistent and correct, and which provides insight into operational aspects of a company.
2. Timely availability of KPI-related data.
3. KPIs designed to directly reflect the efficiency and effectiveness of a business.
4. Information presented in a format which aids decision making for top management and decision makers.
5. Ability to discern patterns or trends from organised information.

BPM integrates the company's processes with CRM or ERP. Companies become able to gauge customer satisfaction, control customer trends and influence shareholder value.


Application software types


People working in business intelligence have developed tools that ease the work, especially when the intelligence task involves gathering and analyzing large amounts of unstructured data. Tool categories commonly used for business performance management include:
OLAP - online analytical processing, sometimes simply called "analytics" (based on dimensional analysis and the so-called "hypercube" or "cube")
Scorecarding, dashboarding and data visualization
Data warehouses
Document warehouses
Text mining
DM - data mining
BPM - business performance management
EIS - executive information systems
DSS - decision support systems
MIS - management information systems
SEMS - strategic enterprise management software

Designing and implementing a business performance management programme


When implementing a BPM programme, one might like to pose a number of questions and take a number of resultant decisions, such as:

Goal alignment queries: The first step is determining what the short- and medium-term purpose of the programme will be. What strategic goal(s) of the organization will be addressed by the programme? What organizational mission/vision does it relate to? A hypothesis needs to be crafted that details how this initiative will eventually improve results/performance (i.e. a strategy map).

Baseline queries: Current information-gathering competency needs to be assessed. Do we have the capability to monitor important sources of information? What data is being collected and how is it being stored? What are the statistical parameters of this data, e.g. how much random variation does it contain? Is this being measured?

Cost and risk queries: The financial consequences of a new BI initiative should be estimated. It is necessary to assess the cost of the present operations and the increase in costs associated with the BPM initiative. What is the risk that the initiative will fail? This risk assessment should be converted into a financial metric and included in the planning.

Customer and stakeholder queries: Determine who will benefit from the initiative and who will pay. Who has a stake in the current procedure? What kinds of customers/stakeholders will benefit directly from this initiative? Who will benefit indirectly? What are the quantitative/qualitative benefits? Is the specified initiative the best way to increase satisfaction for all kinds of customers, or is there a better way? How will customer benefits be monitored? What about employees, shareholders, and distribution channel members?

Metrics-related queries: These information requirements must be operationalized into clearly defined metrics. One must decide what metrics to use for each piece of information being gathered. Are these the best metrics? How do we know that? How many metrics need to be tracked? If this is a large number (it usually is), what kind of system can be used to track them? Are the metrics standardized, so they can be benchmarked against performance in other organizations? What are the industry-standard metrics available?

Measurement methodology-related queries: One should establish a methodology or a procedure to determine the best (or acceptable) way of measuring the required metrics. What methods will be used, and how frequently will data be collected? Are there any industry standards for this? Is this the best way to do the measurements? How do we know that?

Results-related queries: The BPM programme should be monitored to ensure that objectives are being met. Adjustments in the programme may be necessary. The programme should be tested for
accuracy, reliability, and validity. How can it be demonstrated that the BI initiative, and not something else, contributed to a change in results? How much of the change was probably random?

Performance problem
In organizational development (OD), a performance problem is found any time there is a discrepancy between the sought-after results and the actual results. This can occur at various levels:
individual performance problems
team performance problems
unit (e.g. department or division) performance shortfalls
organizational performance problems

There are many causes of performance problems, including:
interference
attitude
skills

Price elasticity of demand (PED)


In economics and business studies, the price elasticity of demand (PED) is an elasticity that measures the nature and degree of the relationship between changes in quantity demanded of a good and changes in its price.

Introduction
When the price of a good falls, the quantity consumers demand of the good typically rises; if it costs less, consumers buy more. Price elasticity of demand measures the responsiveness of the quantity demanded of a good or service to a change in its price. Mathematically, the PED is the ratio of the relative (or percentage) change in quantity demanded to the relative change in price. For most goods this ratio is negative, but in practice the elasticity is represented as a positive number and the minus sign is understood. For example, if the price of some good decreases by 10% and the quantity demanded increases by 20%, the PED for that good is 2. When the PED of a good is greater than one in absolute value, the demand is said to be elastic; it is highly responsive to changes in price. Demands with an elasticity less than one in absolute value are inelastic; the demand is weakly responsive to price changes.

Interpretation of elasticity
Value      Meaning
n = 0      Perfectly inelastic.
0 < n < 1  Relatively inelastic.
n = 1      Unit elastic.
1 < n < ∞  Relatively elastic.
n = ∞      Perfectly elastic.

For all normal goods and most inferior goods, a price drop results in an increase in the quantity demanded by consumers. The demand for a good is relatively inelastic when the quantity demanded does not change much with the price change. Goods and services for which no substitutes exist are generally inelastic. Demand for an antibiotic, for example, becomes highly inelastic when it alone can kill an infection resistant to all other antibiotics. Rather than die of an infection, patients will generally be willing to pay whatever is necessary to acquire enough of the antibiotic to kill the infection.

Price elasticity of demand is rarely constant throughout the ranges of quantity demanded and price. A good or service can have relatively inelastic demand up to a certain price, above which demand becomes elastic. Even if automobiles, for example, were extremely inexpensive, parking or other related ownership issues would presumably keep most people from owning more than some "maximum" number of automobiles. For these and other reasons, elasticity of demand remains valid only over a specific (and small) range of prices. Demand for cars (as well as other goods and services) is not elastic or inelastic for all prices; elasticity of demand can change dramatically across a range of prices.

Inelastic demand is commonly associated with "necessities," although there are many more reasons a good or service may have inelastic demand other than the fact that consumers may "need" it. Demand for salt, for instance, at its modern levels of supply is highly inelastic not because it is a necessity but because it is such a small part of the household budget. (Technology has increased the supply of salt in modern times and reduced its historically high price.) Demand for water, another necessity, is highly inelastic for similar supply-side reasons. Demand for other goods, like chocolate, which is not a necessity, can be highly elastic.
Substitution serves as a much more reliable predictor of elasticity of demand than "necessity." For example, few substitutes for oil and gasoline exist, and as such, demand for these goods is relatively inelastic. However, products with a high elasticity usually have many substitutes. For example, potato chips are only one type of snack food out of many others, such as corn chips or crackers, and predictably, consumers have more room to turn to those substitutes if potato chips were to become more expensive. It may be possible that quantity demanded for a good rises as its price rises, even under conventional economic assumptions of consumer rationality. Two such classes of goods are known as Giffen goods and Veblen goods. Another case is price inflation during an economic bubble. Consumer perception plays an important role in explaining the demand for products in these categories. A starving musician who offers lessons at a bargain-basement rate of $5.00 per hour will continue to starve, but if the musician were to raise the price to $35.00 per hour, consumers may perceive the higher price as an indication of higher quality, thus increasing the quantity of lessons demanded.


Various research methods are used to calculate price elasticity:
Test markets
Analysis of historical sales data
Conjoint analysis

Mathematical definition
The formula used to calculate the coefficient of price elasticity of demand for a given product is

Ed = (% change in quantity demanded) / (% change in price) = (ΔQd / Qd) / (ΔP / P)

This simple formula has a problem, however: it yields different values for Ed depending on whether the original or the final values of quantity and price are used, so one must be consistent and choose only original values or only final values. A more elegant and reliable calculation uses a midpoint (arc) formula, which eliminates this ambiguity:

Ed = (ΔQ / Qav) / (ΔP / Pav)

where Qav is the average of the original and final values of quantity demanded, and Pav likewise for price. Another benefit of this formula is that when |Ed| = 1, there is no change in revenue when the price changes from P1 (the original price) to P2.
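The midpoint calculation can be sketched numerically; the prices and quantities below are invented for the example. Note how a move chosen so that |Ed| = 1 leaves total revenue unchanged:

```python
def arc_elasticity(q1, q2, p1, p2):
    # Midpoint (arc) price elasticity of demand: the relative change in
    # quantity over the relative change in price, each on an averaged base.
    q_av = (q1 + q2) / 2
    p_av = (p1 + p2) / 2
    return ((q2 - q1) / q_av) / ((p2 - p1) / p_av)

# Price falls from 10 to 9 while quantity demanded rises from 100 to 110:
ed = arc_elasticity(100, 110, 10.0, 9.0)      # about -0.90 (inelastic)

# A move chosen so that |Ed| = 1: revenue is unchanged (100*11 == 110*10).
ed_unit = arc_elasticity(100, 110, 11.0, 10.0)
```

Because the averaged bases are symmetric in the two points, the arc formula gives the same answer whether the price rises or falls between them.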

Or, using differential calculus:

Ed = (dQ / dP) · (P / Q)

Repeated use of the chain rule shows that this is equivalent to

Ed = d(ln Q) / d(ln P)

since d(ln Q) = dQ / Q and d(ln P) = dP / P; substituting these into the ratio recovers (dQ / dP)(P / Q).

Elasticity and revenue


When the price elasticity of demand for a good is inelastic (|Ed| < 1), the percentage change in quantity is smaller than that in price. Hence, when the price is raised, the total revenue of producers rises, and vice versa.

When the price elasticity of demand for a good is elastic (|Ed| > 1), the percentage change in quantity is greater than that in price. Hence, when the price is raised, the total revenue of producers falls, and vice versa.

When the price elasticity of demand for a good is unit elastic (or unitary elastic) (|Ed| = 1), the percentage change in quantity is equal to that in price. Hence, when the price is raised, total revenue remains unchanged. The demand curve is a rectangular hyperbola.

When the price elasticity of demand for a good is perfectly elastic (Ed is infinite), any increase in the price, no matter how small, will cause demand for the good to drop to zero. Hence, when the price is raised, the total revenue of producers falls to zero. The demand curve is a horizontal straight line. A banknote is the classic example of a perfectly elastic good: nobody would pay $10.01 for a $10 bill, yet everyone will pay $9.99 for it.

When the price elasticity of demand for a good is perfectly inelastic (Ed = 0), changes in the price do not affect the quantity demanded for the good. The demand curve is a vertical straight line; this violates the law of demand. An example of a perfectly inelastic good is a human heart for someone who needs a transplant: neither increases nor decreases in price affect the quantity demanded (no matter what the price, a person will pay for one heart, but only one; nobody would buy more than the exact number of hearts demanded, no matter how low the price is).

Point-price elasticity
Point Elasticity = (% change in Quantity) / (% change in Price)
Point Elasticity = (ΔQ/Q) / (ΔP/P)
Point Elasticity = (P · ΔQ) / (Q · ΔP)
Point Elasticity = (P/Q)(ΔQ/ΔP)

Note: In the limit (or "at the margin"), "(ΔQ/ΔP)" is the derivative of the demand function with respect to P. "Q" means 'Quantity' and "P" means 'Price'.


Example Demand curve: Q = 1,000 - 0.6P

a.) Given this demand curve, determine the point price elasticity of demand at P = 80 and P = 40 as follows:

i.) Obtain the derivative of the demand function when quantity is expressed as a function of price: dQ/dP = -0.6

ii.) Next apply the above equation to the sought ordered pairs (P, Q): (40, 976), (80, 952)

e = -0.6(40/976) = -0.02
e = -0.6(80/952) = -0.05
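The worked example can be reproduced directly; a minimal sketch:

```python
def point_elasticity(p):
    """Point price elasticity for the example demand curve Q = 1000 - 0.6P."""
    q = 1000 - 0.6 * p       # quantity demanded at price p
    return -0.6 * (p / q)    # e = (dQ/dP) * (P/Q), with dQ/dP = -0.6

print(round(point_elasticity(40), 2))  # -0.02
print(round(point_elasticity(80), 2))  # -0.05
```

Both values are far below 1 in magnitude, so demand is highly inelastic at these prices on this curve.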

Reengineering
Using information technology to improve performance and cut costs. Its main premise, as popularized by the book "Reengineering the Corporation" by Michael Hammer and James Champy, is to examine the goals of an organization and to redesign work and business processes from the ground up rather than simply automate existing tasks and functions.

Driven By Competition

According to the authors, reengineering is driven by open markets and competition. No longer can we enjoy the protection of our own country's borders as we could in the past. Today, in a global economy, worldwide customers are more sophisticated and demanding.

Less Management

Modern industrialization was based on theories of specialization, with millions of workers doing dreary, monotonous jobs. It created departments, functions and business units governed by multiple layers of management, the necessary glue to control the fragmented workplace. In order to be successful in the future, the organization will have fewer layers of management and fewer, but more highly skilled, workers who do more complex tasks. Information technology, used for the past 50 years to automate manual tasks, will be used to enable new work models. The successful organization will not be "technology driven"; rather it will be "technology enabled."

Customer Oriented and Radical Improvement

Although reengineering may wind up reducing a department of 200 employees down to 50, it is not just about eliminating jobs. Its goals are customer oriented: it is about processing a contract in 24 hours instead of two weeks, or performing a telecommunications service in one day instead of 30. It is about reducing the time it takes to get a drug to market from eight years to four years, or reducing the number of suppliers from 200,000 to 700. Reengineering is about radical improvement, not incremental changes.

Reengineering
The application of technology and management science to the modification of existing systems, organizations, processes, and products in order to make them more effective, efficient, and responsive. Responsiveness is a critical need for organizations in industry and elsewhere. It involves providing products and services of demonstrable value to customers, and thereby to those individuals who have a stake in the success of the organization. Reengineering can be carried out at the level of the organization, at the level of organizational processes, or at the level of the products and services that support an organization's activities. The entity to be reengineered can be systems management, process, product, or some combination. In each case, reengineering involves a basic three-phase systems-engineering life cycle comprising definition, development, and deployment of the entity to be reengineered.


Systems-management reengineering

At the level of systems management, reengineering is directed at potential change in all business or organizational processes, including the systems acquisition process life cycle itself. Systems-management reengineering may be defined as the examination, study, capture, and modification of the internal mechanisms or functionality of existing systems-management processes and practices in an organization, in order to reconstitute them in a new form and with new features, often to take advantage of newly emerged organizational competitiveness requirements, but without changing the inherent purpose of the organization itself.

Process reengineering

Reengineering can also be considered at the level of an organizational process. Process reengineering is the examination, study, capture, and modification of the internal mechanisms or functionality of an existing process or systems-engineering life cycle, in order to reconstitute it in a new form and with new functional and nonfunctional features, often to take advantage of newly emerged or desired organizational or technological capabilities, but without changing the inherent purpose of the process that is being reengineered.

Product reengineering

The term reengineering could mean some sort of reworking or retrofit of an already engineered product, and could be interpreted as maintenance or refurbishment. Reengineering could also be interpreted as reverse engineering, in which the characteristics of an already engineered product are identified, such that the product can perhaps be modified or reused. Inherent in these notions are two major facets of reengineering: it improves the product or system delivered to the user for enhanced reliability or maintainability, or to meet a newly evolving need of the system users; and it increases understanding of the system or product itself. This interpretation of reengineering is almost totally product-focused.
Thus, product reengineering may be redefined as the examination, study, capture, and modification of the internal mechanisms or functionality of an existing system or product in order to reconstitute it in a new form and with new features, often to take advantage of newly emerged technologies, but without major change to the inherent functionality and purpose of the system. This definition indicates that product reengineering is basically structural reengineering with, at most, minor changes in purpose and functionality of the product. The reengineered product could be integrated with other products having rather different functionality than was the case in the initial deployment. Thus, reengineered products could be used, together with this augmentation, to provide new functionality and serve new purposes. There are a number of synonyms for product reengineering, including renewal, refurbishing, rework, repair, maintenance, modernization, reuse, redevelopment, and retrofit. Much of product reengineering is very closely associated with reverse engineering to recover either design specifications or user requirements, followed by refinement of these requirements or specifications and forward engineering to achieve an improved product. Forward engineering is the original process of defining, developing, and deploying a product, or realizing a system concept as a product; whereas reverse engineering, sometimes called inverse engineering, is the process through which a given system or product is examined in order to identify or specify the definition of the product, either at the level of technological design specifications or at system- or user-level requirements.

Strategic Synergy
We are constantly faced with numerous money-making opportunities online every day; it's amazing how one always chances upon The Next Big Thing or the latest cutting-edge marketing method. What this does to the majority of us is that it throws us off track, derailing us from finishing up the current project that we are working on, or pushing its deadline back considerably. Face it! I'm willing to bet $100 that this is a bad habit that you have, and if you don't have it, you are probably wildly successful and don't need my $100 anyway.

What is Strategic Synergy all about? Strategic Synergy is something that probably nobody talks about, because it is simply common sense. And yet we see so many people not following it, while those adopting it simply get more out of their time and valuable resources. Synergy is defined as: the interaction of two or more agents or forces so that their combined effect is greater than the sum of their individual effects.

So how does this apply to your online business, or any business for that matter? Strategic Synergy is simply building your online businesses to complement each other. You can apply this through either vertical or horizontal growth of your online businesses. Let me present some examples to better illustrate this point.

Vertical Growth would mean growing your sales funnel and adding higher/lower (depending on where you started out) priced items to cater to your particular niche. So you simply dig deeper into your niche to offer them a range of products at multiple price points, which provide better and bigger value at the higher price points. Product creation in this instance would be much faster, as you should be well versed in your niche. And each additional product produced for your niche further entrenches you in the prospect's mind as the leading authority of this field. If you think about it hard, every niche business can actually be grown vertically to offer more value to your customers. There are so many ways that you can do this to milk the remaining change from your customers. The list below is not exhaustive, but here are some ideas of what you can offer:

eReports
eBooks
Physical books (try lulu.com or cafepress.com)
Audio packages
DVD/video packages
Membership sites
Seminars
Coaching

Example: Eben Pagan (David DeAngelo) started out doubleyourdating.com with his ebook, then the advanced course, DVDs and then seminars, and so on and so forth.

Besides vertical growth, you can expand further through horizontal growth. Horizontal Growth would mean expanding into related niches around your core niche. Besides the usual marketing methods you would need to adopt, this would allow you to draw strength from the equity/list/brand (whatever!) of your core niche. Hence there is synergy between your niches. Do this enough times and you will have a whole network of niches all supporting each other. Relevant backlinks and whatnot. You get the drift.

Example: Check out one of Bradley Thompson's websites at www.selfgrowthgiveaway.com/giveaway/ The websites providing gifts there all belong to the same network, and each of them can hold its own fort in its own specific niche. And of course they all come under the self-improvement umbrella.

Or David Tang with the products he has created:

ArticlePostRobot.com - software that semi-automates submission of unique articles to different article directories to prevent duplicate-content penalties.
ContentRewriterPro.com - software to assist rewriting of articles to create different unique content fast.
ContentAssistant.com - software to create unique content quickly and easily, even if you don't know much about the topic.
VideoUploadPro.com - software to upload your videos to multiple sites automatically.
ArticleAware.com/linksubmission/ - a service providing manual submissions of your websites to directories.

So what you can clearly see is that David Tang creates products/services which greatly help one in speedy submissions, be it for your articles, videos or websites. And he creates supporting products for these products. I would not be surprised if he later went on to create software to assist in creating unique videos by scraping other videos online, so as to supplement his VideoUploadPro software.
Hence, for whichever niche/market you have decided to move into, seek to grow it vertically or horizontally. Do not (TRY not to) go into other unrelated niches if:

Your hands got itchy and you just had to start something new.
It seemed like the next big thing that some guru talked about.
You got greedy and it seemed like a good profit opportunity. (Probably good for short-term cashflow, but not as a sustainable long-term business.)
You got bored of your niche.
Your current niche is still profitable.

While all this seems like pure common sense, you will find that some gurus or people out there jump from project to project: an article-writing membership site, another list-building software site, etc. Can they still make money? Yeah, but probably with more work than if they concentrated on expanding and growing the network within their core niche first. Of course, a minority of you have enough contacts/firepower to start up all these multiple projects and still make a ton of money. But for the newbies to average joes, this strategy will save you time and money and help you become successful faster. So remember to keep Strategic Synergy in mind whenever you start a new business project.

Synergy
Synergy (from the Greek synergos, meaning "working together", circa 1660) refers to the phenomenon in which two or more discrete influences or agents acting together create an effect greater than that predicted by knowing only the separate effects of the individual agents. It was originally a scientific term. Often (but not always; see Toxicologic synergy, below) the prediction is the sum of the effects each is able to create independently. The opposite of synergy is antagonism, the phenomenon in which two agents in combination have an overall effect that is less than that predicted from their individual effects. "Synergism" stems from the 1657 theological doctrine that humans cooperate with divine grace in regeneration.[1] The term began to be used in the broader, non-theological sense by 1925. Synergy can also mean:

A mutually advantageous conjunction where the whole is greater than the sum of the parts.
A dynamic state in which combined action is favored over the sum of individual component actions.
Behavior of whole systems unpredicted by the behavior of their parts taken separately, more accurately known as emergent behavior.[1]

Examples
Drug synergism
Drug synergism occurs when drugs interact in ways that enhance or magnify one or more effects, or side effects, of those drugs. This is sometimes exploited in combination preparations, such as codeine mixed with acetaminophen or ibuprofen to enhance the action of codeine as a pain reliever. It is also seen with recreational drugs: 5-HTP, a serotonin precursor sometimes used as an antidepressant, is often taken prior to, during, and shortly after recreational use of MDMA, as it allegedly increases the "high" and decreases the "comedown" stages of MDMA use (although most anecdotal evidence has pointed to 5-HTP moderately muting the effect of MDMA). Other examples include the use of cannabis with LSD, where the active chemicals in cannabis enhance the hallucinatory experience of LSD use. An example of the negative effects of synergy is the use of more than one depressant drug that affects the central nervous system (CNS), for example alcohol and Valium. The combination can cause a greater reaction than simply the sum of the individual effects of each drug if they were used separately. In this particular case, the most serious consequence of drug synergy is exaggerated respiratory depression, which can be fatal if left untreated.

Pest synergy
Pest synergy, for example, would occur in a biological host organism population where the introduction of parasite A may cause 10% fatalities among the individuals, and parasite B may also cause 10% loss. When both parasites are present, the losses are observed to be significantly greater than the expected 20%, and it is said that the parasites in combination have a synergistic effect. An example is beekeeping in North America, where three foreign parasites of the honeybee, the acarine (tracheal) mite, the Varroa mite and the small hive beetle, were all introduced within a short period of time.

Toxicologic synergy
Toxicologic synergy is of concern to the public and regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risk when exposure is to a combination. Articles in scientific and lay journals include many definitions of chemical or toxicologic synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction," a determination of synergy (or antagonism) depends on what is meant by "no interaction." The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment. In its guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make the mixture safe; each depends on the predicted risk under dose addition.

For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the US, exhaustive tests are performed to discern health effects on humans at various exposure levels. A regulatory upper limit of presence in foods is then placed on the pesticide. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume. However, in normal agricultural practice it is rare to use only a single pesticide. During the production of a crop, several different materials may be used. Each of them has a determined regulatory level at which it would be considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, and thus the safe levels actually represent levels of the mixture.
In contrast, combinations created by the end user, such as a farmer, are rarely tested as that combination. The potential for synergy is then unknown or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that the rising rates of cancer, asthma and other health problems may be caused by these combination exposures; others have other explanations. This question will likely be answered only after years of exposure by the population in general and research on chemical toxicity, usually performed on animals.

Human synergy
Human synergy relates to interacting humans. For example, say person A alone is too short to reach an apple on a tree and person B is too short as well. Once person B sits on the shoulders of person A, they are more than tall enough to reach the apple. In this example, the product of their synergy would be one apple. Another case would be two politicians: if each is able to gather one million votes on their own, but together they are able to appeal to 2.5 million voters, their synergy would have produced 500,000 more votes than had they each worked independently. A third form of human synergy is when one person is able to complete two separate tasks with one action; for example, if a person were asked by a teacher and by his boss at work to write an essay on how he could improve his work, that would be considered synergy. Synergy usually arises when two persons with different, complementary skills cooperate. The fundamental example is the cooperation of men and women in a couple. In business, cooperation of people with organizational and technical skills happens very often. In general, the most common reason why people cooperate is that it brings a synergy. On the other hand, people tend to specialize just to be able to form groups with high synergy (see also division of labor and teamwork).

Corporate synergy
Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are two distinct types of corporate synergies:

Revenue

A revenue synergy refers to the opportunity of a combined corporate entity to generate more revenue than its two predecessor standalone companies would be able to generate. For example, if company A sells product X through its sales force, company B sells product Y, and company A decides to buy company B, then the new company could use each salesperson to sell products X and Y, thereby increasing the revenue that each salesperson generates for the company.

Cost

A cost synergy refers to the opportunity of a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicates within the merged entity. Examples include the headquarters office of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.

In terms of leverage

"Synergy in terms of leverage" is a phrase that was used in the announcement of webMethods' merger with Software AG. Analysts and developers the world over have attempted to decode its meaning; currently, the best guess is that it is nonsensical corporate rhetoric used to confuse the listening audience.

Computers
Synergy can also be defined as the combination of human strengths and computer strengths. Computers can process data much faster than humans, but lack common sense. For a person using a computer, the person's thoughts are the input for the computer, where they are translated into efficient processing of large amounts of data. Other humans must first set up the methods for processing.

Synergy in the media


Synergy is the process by which a media institution tries to use various products to sell one another (e.g. film and soundtrack and video game). Walt Disney pioneered synergistic marketing techniques in the 1930s by granting dozens of firms the right to use his Mickey Mouse character in products and ads, and continued to market Disney media through licensing arrangements. These products can help advertise the film itself and thus help to increase the film's sales. For example, the Spider-Man films had toys of webshooters and figures of the characters made, as well as posters and games.

Trend analysis
The term "trend analysis" refers to the concept of collecting information and attempting to spot a pattern, or trend, in the information. In some fields of study, the term "trend analysis" has more formally defined meanings.[1][2][3] Although trend analysis is often used to predict future events, it can also be used to estimate uncertain events in the past, such as how many ancient kings probably ruled between two dates, based on data such as the average number of years that other known kings reigned.

History
The term "trend analysis" has a history spanning many years. Today, trend analysis often refers to the science of studying changes in social patterns, including fashion, technology and consumer behaviour.

Trend estimation
When a series of measurements of a process is treated as a time series, trend estimation is the application of statistical techniques to make and justify statements about trends in the data. Assuming the underlying process is a physical system that is incompletely understood, one may thereby construct a model, independent of anything known about the physics of the process, to explain the behaviour of the measurement. In particular, one may wish to know if the measurements exhibit an increasing or decreasing trend, that can be statistically distinguished from random behaviour. For example, take daily average temperatures at a given location, from winter to summer; or the global temperature series over the last 100 years. Particularly in that latter case, issues of homogeneity (is the series equally reliable throughout its length?) are important. For the moment we shall simplify the discussion and neglect those points. This article does not attempt a full mathematical treatment, merely an exposition.

Fitting a trend: least-squares


Given a set of data, and the desire to produce some kind of "model" of that data (model, in this case, meaning a function fitted through the data), there are a variety of functions that can be chosen for the fit. But if there is no prior understanding of the data, the simplest function to fit is a straight line, and thus this is the "default". Continuing, once it has been decided to fit a straight line, there are various ways to do so, but the overwhelming default is the least-squares fit, equivalent to minimisation of the L2 norm. See least squares.

Thus, given a set of data points t_i and data values y_i, one chooses a and b so that

S = Σ_i (y_i − (a·t_i + b))²

is minimised. This can always be done, in closed form [1]. For the rest of this article, "trend" will mean the least-squares trend. It's what it means in 99% of cases everywhere else. Now, we have a trend. But is it significant? And what do we mean by significant?
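The closed-form least-squares solution can be sketched directly; the four data points below are hypothetical, chosen to lie exactly on a line so the recovered trend is obvious:

```python
def least_squares_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b, minimising the sum
    of squared residuals (the L2 norm)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Exactly linear data recovers the underlying trend.
a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

With noisy data the same formula returns the slope that minimises S; the question of whether that slope is significant is addressed in the next section.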

Trends in random data


Before we can consider trends in real data, we need to understand trends in random data.


[Figure: distribution of trends fitted to random series. Red shaded values are greater than 99% of the rest; blue, 95%; green, 90%. In this case, the V value discussed in the text for (one-sided) 95% confidence is seen to be 0.2.]

If we take a series which is known to be random - fair dice falls, or computer-generated random numbers - and fit a trend line through the data, the chances of a truly zero trend are negligible. But we would probably expect the trend to be "small". If we take a series with a given degree of noise, and a given length (say, 100 points), and generate a large number of such series (say, 100,000 series), we can then calculate the trends from these 100,000 series, and empirically establish a distribution of trends that are to be expected from such random data - see diagram. Such a distribution will be normal (by the central limit theorem, except in pathological cases, since, in a slightly non-obvious way of thinking about it, the trend is a linear combination of the data values) and, if the series genuinely is random, centered on zero. We may now establish the desired level of statistical certainty, S - 95% confidence is typical; 99% would be stricter, 90% rather looser - and ask: what value, V, do we have to choose so that S% of trends lie within -V to +V? (Complication: we may be interested in positive and negative trends - 2-tailed - or may have prior knowledge that only positive, or only negative, trends are of interest.)

In the above discussion the distribution of trends was calculated empirically, from a large number of trials. In simple cases (normally distributed random noise being a classic) the distribution of trends can be calculated exactly. Suppose we then take another series with approximately the same variance properties as our random series. We do not know in advance whether it "really" has a trend in it, so we calculate the trend, a, and discover that it is less than V. Then we may say that, at degree of certainty S, any trend in the data cannot be distinguished from random noise. However, note that whatever value of V we choose, a given fraction, 1 − S, of truly random series will be declared (falsely, by construction) to have a significant trend. Conversely, a certain fraction of series that "really" have a trend will not be declared to have a trend.
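The empirical procedure described above - generate many random series, fit a trend to each, and read a percentile off the resulting distribution of trends - can be sketched as follows (the series length and number of trials are illustrative):

```python
import random

def trend(ys):
    """Least-squares slope of ys against the index 0..n-1."""
    n = len(ys)
    mx = (n - 1) / 2                      # mean of 0..n-1
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
            / sum((x - mx) ** 2 for x in range(n)))

random.seed(0)
# Fit trends to many series of pure Gaussian noise...
trends = sorted(trend([random.gauss(0, 1) for _ in range(100)])
                for _ in range(2000))
# ...and take the 95th percentile as the one-sided threshold V:
v = trends[int(0.95 * len(trends))]
print("one-sided 95% threshold V:", round(v, 4))
```

A measured trend smaller than V then cannot be distinguished from random noise at 95% confidence, while roughly 5% of truly random series will (by construction) exceed V.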

Data as trend plus noise


To analyse a (time) series of data, we assume that it may be represented as trend plus noise:

y_t = a·t + b + e_t

where a and b are (usually unknown) constants and the e's are independent randomly distributed "errors". Unless something special is known about the e's, they will be assumed to have a normal distribution. It is simplest if the e's all have the same distribution, but if not (if some have higher variance, meaning that those data points are effectively less certain) then this can be taken into account during the least-squares fitting, by weighting each point by the inverse of the variance of that point.


In most cases, where only a single time series exists to be analysed, the variance of the e's is estimated by fitting a trend, thus allowing a·t + b to be removed and leaving the e's as residuals, and calculating the variance of the e's from the residuals - this is often the only way of estimating the variance of the e's.

One particular special case of great interest, the (global) temperature time series, is known not to be homogeneous in time: apart from anything else, the number of weather observations has (generally) increased with time, and thus the error associated with estimating the global temperature from a limited set of observations has decreased with time. In fitting a trend to this data, this can be taken into account, as described above.

Once we know the "noise" of the series, we can then assess the significance of the trend by making the null hypothesis that the trend, a, is not significantly different from 0. From the above discussion of trends in random data with known variance, we know the distribution of trends to be expected from random (trendless) data. If the calculated trend, a, is larger than the value, V, then the trend is deemed significantly different from zero at significance level S.

Noisy time series, and an example


It is harder to see a trend in a noisy time series. For example, if the true series is 0, 1, 2, 3, all plus some independent normally distributed "noise" of standard deviation E, and we have a sample series of length 50, then if E is small relative to the trend, the trend will be obvious; if E is comparable, the trend will probably be visible; but if E is much larger, the trend will be buried in the noise. If we consider a concrete example, the global surface temperature record of the past 140 years as presented by the IPCC [2], then the interannual variation is about 0.2 °C and the trend about 0.6 °C over 140 years, with 95% confidence limits of 0.2 °C (by coincidence, about the same value as the interannual variation). Hence the trend is statistically different from 0. This alone, however, tells us nothing about the physical causes of the temperature change.

Goodness of fit (R-squared) and trend

[Figure: illustration of the variation of r² with filtering whilst the fit remains the same.]

The least-squares fitting process produces a value - r-squared (r²) - which says what fraction of the variance of the data is explained by the fitted trend line. It does not relate to the significance of the trend line - see graph. A noisy series can have a very low r² value but a very high significance of fit. Often, filtering a series increases r² whilst making little difference to the fitted trend or its significance.
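r² can be computed as one minus the ratio of the residual sum of squares to the total sum of squares; the data values and fitted trend-line values below are hypothetical:

```python
def r_squared(ys, fitted):
    """Fraction of the variance of ys explained by the fitted values:
    r^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fitted))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# A series scattered about the hypothetical trend line y = x:
ys = [0.5, 0.8, 2.6, 2.9, 4.5]
fitted = [0, 1, 2, 3, 4]  # trend-line values at each point (assumed, not refit)
print(round(r_squared(ys, fitted), 2))  # 0.92
```

Note that smoothing ys before computing r² would raise the value without changing the underlying trend, which is exactly the caveat made above.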

Real data is auto-correlated



Thus far the data have been assumed to consist of the trend plus noise, with the noise at each data point being independent (white Gaussian noise). The assumption that the noise is a stationary Gauss-Markov process arises from the principle of minimal information. This is important, as it makes an enormous difference to the ease with which the statistics can be analysed. Real data (for example, climate data) may not fulfil this criterion. Autocorrelated time series may be modelled using autoregressive moving average (ARMA) models.
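The difference between independent noise and autocorrelated noise can be illustrated with a simple AR(1) process; the coefficient, series length and seed below are arbitrary choices for illustration:

```python
import random

def ar1_series(n, phi, seed=1):
    """AR(1) process x[t] = phi * x[t-1] + e[t], with e ~ N(0, 1).
    With phi > 0, successive values are correlated, violating the
    independent-noise assumption used in the sections above."""
    random.seed(seed)
    xs = [random.gauss(0, 1)]
    for _ in range(n - 1):
        xs.append(phi * xs[-1] + random.gauss(0, 1))
    return xs

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation (zero in expectation for white noise)."""
    m = sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t + 1] - m) for t in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

series = ar1_series(5000, phi=0.6)
print(round(lag1_autocorr(series), 2))  # typically close to phi = 0.6
```

Significance thresholds derived under the independence assumption are too small for such a series, which is why autocorrelated data call for ARMA-style modelling instead.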

Vertical integration
In microeconomics and management, the term vertical integration describes a style of ownership and control. The degree to which a firm owns its upstream suppliers and its downstream buyers determines how vertically integrated it is. Vertically integrated companies are united through a hierarchy and share a common owner. Usually each member of the hierarchy produces a different product or service, and the products combine to satisfy a common need. It is contrasted with horizontal integration. Vertical integration is one method of avoiding the hold-up problem. A monopoly produced through vertical integration is called a vertical monopoly, although it might be more appropriate to speak of this as some form of cartel. One of the earliest, largest and most famous examples of vertical integration was the Carnegie Steel company. The company controlled not only the mills where the steel was manufactured, but also the mines where the iron ore was extracted, the coal mines that supplied the coal, the ships that transported the iron ore, the railroads that transported the coal to the factory, the coke ovens where the coal was coked, and so on. Later on, Carnegie even established an institute of higher learning to teach the steel processes to the next generation.

Three types
Vertical integration is the degree to which a firm owns its upstream suppliers and its downstream buyers. Contrary to horizontal integration, which is a consolidation of many firms that handle the same part of the production process, vertical integration is typified by one firm engaged in different aspects of production (e.g. growing raw materials, manufacturing, transporting, marketing, and/or retailing). There are three varieties: backward (upstream) vertical integration, forward (downstream) vertical integration, and balanced vertical integration (both upstream and downstream).

In backward vertical integration, the company sets up subsidiaries that produce some of the inputs used in the production of its products. For example, an automobile company may own a tire company, a glass company, and a metal company. Control of these three subsidiaries is intended to create a stable supply of inputs and ensure consistent quality in the final product. This was the main business approach of Ford and other car companies in the 1920s, which sought to minimize costs by centralizing the production of cars and car parts.

In forward vertical integration, the company sets up subsidiaries that distribute or market products to customers or use the products themselves. An example of this is a movie studio that also owns a chain of theaters.

In balanced vertical integration, the company sets up subsidiaries that both supply it with inputs and distribute its outputs.

If you view McDonald's, for example, as primarily a food manufacturer, backward vertical integration would mean that it owns the farms where the cows, chickens, potatoes and wheat are raised, as well as the factories that process everything and turn it all into food. Forward vertical integration would imply that it owns the distribution centers for every area and the fast-food retailers. Balanced vertical integration would mean that it owns all of the components mentioned.


Examples
Oil industry
One of the best examples of vertically integrated companies is the oil industry. Oil companies, both multinational (such as ExxonMobil, Royal Dutch Shell, or BP) and national (e.g. Petronas), often adopt a vertically integrated structure. This means that they are active all the way along the supply chain, from locating crude oil deposits, drilling and extracting crude, transporting it around the world, and refining it into petroleum products such as petrol/gasoline, to distributing the fuel to company-owned retail stations, where it is sold to consumers.

Problems and Benefits


There are internal and external (e.g. society-wide) gains and losses due to vertical integration. They will differ according to the state of technology in the industries involved, roughly corresponding to the stages of the industry lifecycle.

Static technology
This is the simplest case, where the gains and losses have been studied extensively.

Internal gains:
- Lower transaction costs
- Synchronization of supply and demand along the chain of products
- Lower uncertainty and higher investment
- Ability to monopolize markets throughout the chain by market foreclosure

Internal losses:
- Higher monetary and organizational costs of switching to other suppliers/buyers

Benefits to society:
- Better opportunities for investment growth through reduced uncertainty

Losses to society:
- Monopolization of markets
- Rigid organizational structure, having much the same shortcomings as a socialist economy (cf. John Kenneth Galbraith's works), etc.

Dynamic technology
Some argue that vertical integration will eventually hurt a company: when new technologies become available, the company is forced to reinvest in its infrastructure in order to keep up with competition. Because technologies now evolve very quickly, a company may invest in a new technology only to have to reinvest in an even newer one later, at considerable financial cost. However, a benefit of vertical integration is that all the components of a company's product work harmoniously together, which lowers downtime and repair costs.

Vertical expansion

Vertical expansion, in economics, is the growth of a business enterprise through the acquisition of companies that produce the intermediate goods needed by the business or help market and distribute its final goods. Such expansion is desired because it secures the supplies needed by the firm to produce its product and the market needed to sell the product. The result is a more efficient business with lower costs and more profits. Related is lateral expansion, which is the growth of a business enterprise through the acquisition of similar firms, in the hope of achieving economies of scale. Vertical expansion is also known as a vertical acquisition. Vertical expansion or acquisition can also be used to increase scale and to gain market power. The acquisition of DirecTV by News Corporation is an example of vertical expansion or acquisition: DirecTV is a satellite TV company through which News Corporation can distribute more of its media content, such as news, movies, and television shows.

