
A software development process, also known as a software development life cycle (SDLC), is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. It is often considered a subset of the systems development life cycle. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a life-cycle model the more general term and a software development process the more specific term. For example, there are many specific software development processes that "fit" the spiral life-cycle model. ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.
Contents

1 Overview
2 Software development activities
2.1 Planning
2.2 Implementation, testing and documenting
2.3 Deployment and maintenance
3 Software development models
3.1 Waterfall model
3.2 Spiral model
3.3 Iterative and incremental development
3.4 Agile development
3.5 Code and fix
4 Process improvement models
5 Formal methods
6 See also
6.1 Development methods
6.2 Related subjects
7 Bibliography
8 References
9 External links

Overview

A large and growing number of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on "process models" to obtain contracts.

The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207. A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some approaches try to systematize or formalize the seemingly unruly task of writing software; others apply project management techniques to it. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners with varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.

Software development activities

The activities of the software development process represented in the waterfall model. There are several other models to represent this process.

Planning

An important task in creating a software program is extracting the requirements, known as requirements analysis.[1] Customers typically have an abstract idea of what they want as an end result, but not of what the software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect. Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document, so that if there are ever disputes, any ambiguity about what was promised to the client can be clarified.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program the code for the project. Software testing is an integral and important phase of the software development process; it ensures that defects are recognized as soon as possible. Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the writing of an API, be it external or internal. The software engineering process chosen by the developing team will determine how much internal documentation (if any) is necessary. Plan-driven models (e.g., Waterfall) generally produce more documentation than Agile models.

Deployment and maintenance

Deployment starts after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment. This may involve installation, customization (such as setting parameters to the customer's values), testing, and possibly an extended period of evaluation. Software training and support is important, as software is only effective if it is used correctly.

Maintaining and enhancing software to cope with newly discovered faults or requirements can take substantial time and effort, as missed requirements may force redesign of the software.

Software development models

Several models exist to streamline the development process. Each one has its pros and cons, and it is up to the development team to adopt the most appropriate one for the project. Sometimes a combination of the models may be more suitable.

Waterfall model

Main article: Waterfall model

The waterfall model shows a process where developers are to follow these phases in order:

1. Requirements specification (requirements analysis)
2. Software design
3. Implementation and integration
4. Testing (or validation)
5. Deployment (or installation)
6. Maintenance

In a strict waterfall model, after each phase is finished, development proceeds to the next one. Reviews may occur before moving to the next phase, which allows for the possibility of changes (and may involve a formal change control process). Reviews may also be employed to ensure that the phase is indeed complete; the phase completion criteria are often referred to as a "gate" that the project must pass through to move to the next phase. Waterfall discourages revisiting and revising any prior phase once it is complete. This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other, more "flexible" models.

Spiral model

Main article: Spiral model

The key characteristic of a spiral model is risk management at regular stages in the development cycle. In 1988, Barry Boehm published a formal software system development "spiral model", which combines key aspects of the waterfall model and rapid prototyping methodologies, but places emphasis on an area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems. The spiral is visualized as a process passing through some number of iterations, with a four-quadrant diagram representing the following activities:

1. Formulate plans: identify software targets, select alternatives for implementation, and clarify the project's development constraints.
2. Risk analysis: analytically assess the selected approaches and consider how to identify and eliminate risk.
3. Implementation: carry out software development and verification.
4. Evaluation: assess the results and plan the next iteration.

By emphasizing the conditions of options and constraints, the risk-driven spiral model supports software reuse and helps make software quality a specific goal of product development. However, the spiral model has some restrictive conditions:

1. The spiral model emphasizes risk analysis, and thus requires customers to accept this analysis and act on it. This requires both trust in the developer and the willingness to spend more to fix the issues, which is why this model is often used for large-scale internal software development.
2. If carrying out the risk analysis will greatly affect the profits of the project, the spiral model should not be used.
3. Software developers have to actively look for possible risks and analyze them accurately for the spiral model to work.

The first stage is to formulate a plan to achieve the objectives within these constraints, and then to strive to find and remove all potential risks through careful analysis and, if necessary, by constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether to terminate the project or to ignore the risks and continue anyway. Finally, the results are evaluated and the design of the next phase begins.

Iterative and incremental development



Main article: Iterative and incremental development

Iterative development prescribes the construction of initially small but ever-larger portions of a software project to help all those involved to uncover important issues early, before problems or faulty assumptions can lead to disaster.

Agile development

Main article: Agile software development

Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software. There are many variations of agile processes. In Extreme Programming (XP), the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the waterfall model. First, one writes automated tests to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. The same people who do the coding do the design. (Only the last feature, merging design and code, is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of whom is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.[3] Other agile variations include Scrum and the dynamic systems development method (DSDM).
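The XP test-first cycle described above can be sketched in a few lines of Python. This is a minimal illustration under invented names (word_count and its tests are not from any real project): the automated tests are written first as concrete goals, and the implementation counts as "complete" only when they pass.

```python
# Step 1 (written first): automated tests that state the goal.
def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("pair  programming") == 2

# Step 2: the simplest implementation that makes the tests pass.
def word_count(text):
    # Split on whitespace and count the non-empty tokens.
    return len(text.split())

# Step 3: run the tests; coding is "complete" when they all pass
# and no further tests come to mind.
test_word_count()
print("all tests pass")
```

Refactoring then follows under the safety net of these same tests, which is how design "emerges" after coding in XP.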

Code and fix

"Code and fix" development is not so much a deliberate strategy as an artifact of naivet and schedule [4] pressure on software developers. Without much of a design in the way, programmers immediately begin producing code. At some point, testing begins (often late in the development cycle), and the inevitable bugs must then be fixed before the product can be shipped. See also:Continuous integration and Cowboy coding. [edit]Process

improvement models

Capability Maturity Model Integration

The Capability Maturity Model Integration (CMMI) is one of the leading models, based on best practice. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or of the software produced. CMMI has replaced CMM.

ISO 9000

ISO 9000 describes standards for a formally organized process to manufacture a product and the methods of managing and monitoring progress. Although the standard was originally created for the manufacturing sector, ISO 9000 standards have been applied to software development as well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result, only that formalized business processes have been followed.

ISO/IEC 15504

ISO/IEC 15504, "Information technology: Process assessment", also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard aims to set out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.

Formal methods

Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards, such as DO-178B, DO-178C, and the Common Criteria, demand formal methods at the highest levels of categorization. For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, VDM, and the Z notation. Formalization of software development is creeping in elsewhere, with the application of the Object Constraint Language (and specializations such as the Java Modeling Language) and especially with model-driven architecture, which allows execution of designs, if not specifications. For concurrent software and systems, Petri nets, process algebra, and finite state machines (which are based on automata theory; see also virtual finite state machine and event-driven finite state machine) allow executable software specification and can be used to build up and validate application behavior. Another emerging trend in software development is to write a specification in some form of logic (usually a variation of first-order logic), and then to directly execute the logic as though it were a program. The OWL language, based on description logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.
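The idea of executing a specification written in logic can be illustrated, in a very small way, by checking a propositional formula over every truth assignment. The formulas and variable names below are invented for the example; this is a toy in the spirit of the approaches named above, not an instance of any of them.

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q.
    return (not p) or q

def holds_for_all(formula, arity):
    # "Execute" the specification by evaluating it over all
    # 2**arity truth assignments (exhaustive checking).
    return all(formula(*vals) for vals in product([False, True], repeat=arity))

# A valid specification: the law of the excluded middle.
def excluded_middle(p):
    return p or (not p)

# An invalid one: "if the alarm rings, it was armed" fails
# for the assignment armed=False, ringing=True.
def alarm_spec(armed, ringing):
    return implies(ringing, armed)

print(holds_for_all(excluded_middle, 1))  # True
print(holds_for_all(alarm_spec, 2))       # False
```

Real formal-methods tools scale this idea with symbolic techniques rather than brute-force enumeration, but the principle of treating the logic itself as the executable artifact is the same.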

Software Development Overview

Sometimes off-the-shelf software isn't enough for your business; maybe you have a unique business model or just need to extend an existing software package. RAH Software is a professional software house, and we specialise in the full software development life cycle.

Our Approach

RAH Software uses a mix of approaches to suit a given project. Where possible, we develop using TDD (test-driven development), which means we write tests for the code before we even write the code. This is a fantastic way of developing high-quality code, and we make it essential for any software modules that deal with any kind of financial or sensitive data. Like most professional software companies, we follow the software development life cycle; the diagram below shows its multiple stages, and we have described how we handle the process.

Initial Planning

So you have decided that a software solution is what you need to help your business. In this stage we get an architect involved to help understand what you're trying to achieve with the software, and what else the software will give your business.

Specification Analysis

When the architect has taken your requirements, we start to transform them into a requirements document. The requirements document transfers all the things discussed in the meeting into a hard format which describes exactly what the software will do and how it will do it. This process usually gets a lot of users excited because they are made aware of just what the software can do for them. It can take various revisions, and it will never be 100% accurate because, after all, a business is not a static entity.

Design and Initial Development

During this stage the document goes to the user experience (UX) designers, who work together with the software developers to turn it into a software design. User interfaces (the face of the software) will be mocked up into a prototype, and the underlying software architecture will be designed. When prototypes get designed, we demonstrate them to you and run through multiple scenarios, ensuring the software is what you want.

Implementation

Once you're happy with the specification and the design, the developers start turning the mock-up into a real-life application. We implement the best pattern for the type of project and make regular releases to the client during this stage; this helps your users get familiar with the software and spot any defects before the final version.

Testing and Integration

During this period, our test team run through a list of scenarios to test the software; your staff may also choose to test the software to spot any defects. Software tests are also run automatically every time problems are fixed, to ensure there are no knock-on effects from our defect fixing.

Evaluation

When we are satisfied that the software is in a fit state, we give the customer the software package to evaluate. During this stage there will be constant communication between RAH Software and the customer. Depending on the project, this usually takes between 1 and 3 months; the bigger the project, the longer the evaluation period may be.

Signing Off

Once the software has been evaluated, the project is signed off by the customer. This means they are happy with the product, and the software goes live.

Recycle

Once the software has been released to our customers, we provide full support for the software; any bugs found in the software can be quickly identified and fixed.

Decision support system


From Wikipedia, the free encyclopedia

Example of a Decision Support System for John Day Reservoir.

A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help people make decisions about problems that may be rapidly changing and not easily specified in advance. DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, personal knowledge, and business models to identify and solve problems and make decisions. Typical information that a decision support application might gather and present includes:

inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data marts),
comparative sales figures between one period and the next,
projected revenue figures based on product sales assumptions.
Contents

1 History
2 Taxonomies
3 Components
3.1 Development Frameworks
4 Classification
5 Applications
6 Benefits
7 See also
8 References
9 Further reading

History

According to Keen (1978),[1] the concept of decision support has evolved from two main areas of research: the theoretical studies of organizational decision making done at the Carnegie Institute of Technology during the late 1950s and early 1960s, and the technical work on interactive computer systems, mainly carried out at the Massachusetts Institute of Technology in the 1960s. DSS is considered to have become an area of research in its own right in the middle of the 1970s, gaining in intensity during the 1980s. In the middle and late 1980s, executive information systems (EIS), group decision support systems (GDSS), and organizational decision support systems (ODSS) evolved from the single-user and model-oriented DSS. According to Sol (1987),[2] the definition and scope of DSS has been migrating over the years. In the 1970s DSS was described as "a computer based system to aid decision making". In the late 1970s the DSS movement started focusing on "interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured problems". In the 1980s DSS should provide systems "using suitable and available technology to improve effectiveness of managerial and professional activities", and in the late 1980s DSS faced a new challenge towards the design of intelligent workstations.[2]

In 1987, Texas Instruments completed development of the Gate Assignment Display System (GADS) for United Airlines. This decision support system is credited with significantly reducing travel delays by aiding the management of ground operations at various airports, beginning with O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado.[3][4] Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began broadening the realm of DSS. As the turn of the millennium approached, new Web-based analytical applications were introduced.

The advent of better and better reporting technologies has seen DSS start to emerge as a critical component of management design. Examples of this can be seen in the intense discussion of DSS in the education environment.

DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS system (for military and business decision making) were decision support systems which were also major breakthroughs in user interface research. Furthermore, although hypertext researchers have generally been concerned with information overload, certain researchers, notably Douglas Engelbart, have focused on decision makers in particular.

Taxonomies

As with the definition, there is no universally accepted taxonomy of DSS either; different authors propose different classifications. Using the relationship with the user as the criterion, Haettenschwiler[5] differentiates passive, active, and cooperative DSS. A passive DSS is a system that aids the process of decision making but cannot bring out explicit decision suggestions or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative DSS allows the decision maker (or their advisor) to modify, complete, or refine the decision suggestions provided by the system before sending them back to the system for validation. The system in turn improves, completes, and refines the suggestions of the decision maker and sends them back for validation. The whole process then starts again, until a consolidated solution is generated. Another taxonomy for DSS has been created by Daniel Power. Using the mode of assistance as the criterion, Power differentiates communication-driven DSS, data-driven DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.[6]

A communication-driven DSS supports more than one person working on a shared task; examples include integrated tools like Microsoft's NetMeeting or Groove.[7]

A data-driven DSS or data-oriented DSS emphasizes access to and manipulation of a time series of internal company data and, sometimes, external data.

A document-driven DSS manages, retrieves, and manipulates unstructured information in a variety of electronic formats.

A knowledge-driven DSS provides specialized problem-solving expertise stored as facts, rules, procedures, or in similar structures.[6]

A model-driven DSS emphasizes access to and manipulation of a statistical, financial, optimization, or simulation model. Model-driven DSS use data and parameters provided by users to assist decision makers in analyzing a situation; they are not necessarily data-intensive. Dicodess is an example of an open source model-driven DSS generator.[8]

Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop DSS. An enterprise-wide DSS is linked to large data warehouses and serves many managers in the company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.
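A model-driven DSS of the kind described above can be sketched minimally in Python: the user supplies parameters (assumptions), a simple model projects an outcome for each scenario, and the system supports, rather than makes, the decision by ranking the scenarios. All names and figures below are invented for illustration.

```python
def project_revenue(units, unit_price, growth_rate, periods):
    # Toy financial model: per-period revenue compounds at growth_rate.
    total, per_period = 0.0, units * unit_price
    for _ in range(periods):
        total += per_period
        per_period *= 1 + growth_rate
    return total

# User-provided assumptions, one dict per scenario.
scenarios = {
    "conservative": dict(units=1000, unit_price=20.0, growth_rate=0.00, periods=4),
    "expected":     dict(units=1000, unit_price=20.0, growth_rate=0.05, periods=4),
    "optimistic":   dict(units=1200, unit_price=20.0, growth_rate=0.10, periods=4),
}

# The DSS output: scenarios ranked by projected revenue, for a
# human decision maker to weigh (the system does not decide).
ranked = sorted(scenarios, key=lambda s: project_revenue(**scenarios[s]), reverse=True)
for name in ranked:
    print(f"{name}: {project_revenue(**scenarios[name]):.2f}")
```

Note that the model, not a large data store, carries the analysis here, which is exactly why model-driven DSS need not be data-intensive.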

Components

Design of a Drought Mitigation Decision Support System.

Three fundamental components of a DSS architecture are:[5][6][10][11][12]

1. the database (or knowledge base),
2. the model (i.e., the decision context and user criteria), and
3. the user interface.

The users themselves are also important components of the architecture.[5][12]

Development Frameworks

DSS systems are not entirely different from other systems and require a structured approach. One such framework includes people, technology, and the development approach.[10] DSS technology levels (of hardware and software) may include:

1. The specific application that will be used by the user. This is the part of the application that allows the decision maker to make decisions in a particular problem area.
2. The generator: a hardware/software environment that allows people to easily develop specific DSS applications. This level makes use of CASE tools or systems such as Crystal, AIMMS, Analytica and iThink.
3. Tools: lower-level hardware and software, including the special languages, function libraries and linking modules from which DSS generators are built.

An iterative developmental approach allows for the DSS to be changed and redesigned at various intervals. Once the system is designed, it will need to be tested and revised where necessary for the desired outcome.

Classification

There are several ways to classify DSS applications; not every DSS fits neatly into one category, and a given system may mix two or more architectures. Holsapple and Whinston[13] classify DSS into the following six frameworks: text-oriented DSS, database-oriented DSS, spreadsheet-oriented DSS, solver-oriented DSS, rule-oriented DSS, and compound DSS. A compound DSS is the most popular classification for a DSS; it is a hybrid system that includes two or more of the five basic structures described by Holsapple and Whinston.[13] The support given by DSS can be separated into three distinct, interrelated categories:[14] personal support, group support, and organizational support. DSS components may be classified as:

1. Inputs: factors, numbers, and characteristics to analyze
2. User knowledge and expertise: inputs requiring manual analysis by the user
3. Outputs: transformed data from which DSS "decisions" are generated
4. Decisions: results generated by the DSS based on user criteria

DSSs which perform selected cognitive decision-making functions and are based on artificial intelligence or intelligent agent technologies are called intelligent decision support systems (IDSS).[citation needed] The nascent field of decision engineering treats the decision itself as an engineered object, and applies engineering principles such as design and quality assurance to an explicit representation of the elements that make up a decision.

Applications

As mentioned above, there are theoretical possibilities of building such systems in any knowledge domain. One example is the clinical decision support system for medical diagnosis. Other examples include a bank loan officer verifying the credit of a loan applicant, or an engineering firm that has bids on several projects and wants to know if it can be competitive with its costs. DSS is extensively used in business and management. Executive dashboards and other business performance software allow faster decision making, identification of negative trends, and better allocation of business resources. With a DSS, information from across an organization can be presented in summarized form, such as charts and graphs, which helps management make strategic decisions.

A growing area of DSS application, concepts, principles, and techniques is in agricultural production and marketing for sustainable development. For example, the DSSAT4 package,[15][16] developed with the financial support of USAID during the 1980s and 1990s, has allowed rapid assessment of several agricultural production systems around the world to facilitate decision-making at the farm and policy levels. There are, however, many constraints to the successful adoption of DSS in agriculture.[17] DSS are also prevalent in forest management, where the long planning time frame imposes specific requirements. All aspects of forest management, from log transportation and harvest scheduling to sustainability and ecosystem protection, have been addressed by modern DSSs. A specific example concerns the Canadian National Railway system, which tests its equipment on a regular basis using a decision support system. A problem faced by any railroad is worn-out or defective rails, which can result in hundreds of derailments per year. Using a DSS, CN managed to decrease the incidence of derailments at the same time other companies were experiencing an increase.

Benefits

1. Improves personal efficiency
2. Speeds up the process of decision making
3. Increases organizational control
4. Encourages exploration and discovery on the part of the decision maker
5. Speeds up problem solving in an organization
6. Facilitates interpersonal communication
7. Promotes learning or training
8. Generates new evidence in support of a decision
9. Creates a competitive advantage over competitors
10. Reveals new approaches to thinking about the problem space
11. Helps automate managerial processes

Expert system
From Wikipedia, the free encyclopedia

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.[1] Expert systems are designed to solve complex problems by reasoning about knowledge, like an expert, and not by following the procedure of a developer, as is the case in conventional programming.[2][3][4] The first expert systems were created in the 1970s and then proliferated in the 1980s.[5] Expert systems were among the first truly successful forms of AI software.[6][7][8][9][10][11] An expert system has a unique structure, different from traditional programs. It is divided into two parts: one fixed and independent of the particular expert system, the inference engine, and one variable, the knowledge base. To run an expert system, the engine reasons about the knowledge base like a human.[12] In the 1980s a third part appeared: a dialog interface to communicate with users.[13] This ability to conduct a conversation with users was later called "conversational".[14][15]
Contents

1 History
2 Software architecture
2.1 The rule base or knowledge base
2.2 The inference engine
3 Advantages
3.1 Conversational
3.2 Quick availability and opportunity to program itself
3.3 Ability to exploit a considerable amount of knowledge
3.4 Reliability
3.5 Scalability
3.6 Pedagogy
3.7 Preservation and improvement of knowledge
3.8 New areas neglected by conventional computing
4 Disadvantages
5 Application field
6 Examples of applications
7 Knowledge engineering
8 See also
9 References
10 Bibliography
10.1 Textbooks
10.2 History of AI
10.3 Other
11 External links

[edit]History
Expert systems were introduced by researchers in the Stanford Heuristic Programming Project, including the "father of expert systems" with the Dendral and Mycin systems. Principal contributors to the technology were Bruce Buchanan, Edward Shortliffe, Randall Davis, William vanMelle, Carli Scott and others at Stanford. Expert systems were among the first truly successful forms of AI software.[6][7][8][9][10][11] Research is also very active in France, where researchers focus on the automation of reasoning and logic engines. The French Prologcomputer language, designed in 1972, marks a real advance over expert systems like Dendral or Mycin: it is a shell,[16] that is to say a software structure ready to receive any expert system and to run it. It integrates an engine using First-Order logic, with rules and facts. It's a tool for mass production of expert systems and was the first operational declarative language,[17] later becoming the best selling AI language in the world.[18] However Prolog is not particularly user friendly and is an order of logic away from human logic.[19][20][21] In the 1980s, expert systems proliferated as they were recognized as a practical tool for solving real-world problems. Universities offered expert system courses and two thirds of the Fortune 1000 companies applied the technology in daily business activities.[5][22] Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe. Growth in the field continued into the 1990s. The development of expert systems was aided by the development of the symbolic processing languages Lisp and Prolog. To avoid re-inventing the wheel, expert system shells were created that had more specialized features for building large expert systems.[23] In 1981 the first IBM PC was introduced, with MS-DOS operating system. Its low price started to multiply users and opened a new market for computing and expert systems. 
In the 1980s the image of AI was very good and people believed it would succeed within a short time.[15] Many companies began to market expert system shells from universities, renamed "generators" because they added to the shell a tool for writing rules in plain language, thus theoretically allowing expert systems to be written without a programming language or any other software.[16] The best known were Guru (USA), inspired by Mycin,[17][18] Personal Consultant Plus (USA),[19][20] Nexpert Object (developed by Neuron Data, a company founded in California by three French developers),[21][22] Genesia (developed by the French public company Électricité de France and marketed by Steria),[23] and VP Expert (USA).[24] But eventually the tools were only used in research projects. They did not penetrate the business market, showing that AI technology was not yet mature.

In 1986, a new expert system generator for PCs appeared on the market, derived from French academic research: Intelligence Service,[24][25] sold by the GSI-TECSI software company. This software introduced a radical innovation: it used propositional logic ("zeroth-order logic") to execute expert systems, reasoning on a knowledge base written with everyday-language rules, producing explanations and detecting logical contradictions between the facts. It was the first tool showing the AI defined by Edward Feigenbaum in his book about the Japanese Fifth Generation project, Artificial Intelligence and Japan's Computer Challenge to the World (1983): "The machines will have reasoning power: they will automatically engineer vast amounts of knowledge to serve whatever purpose humans propose, from medical diagnosis to product design, from management decisions to education", "The reasoning animal has, perhaps inevitably, fashioned the reasoning machine", "the reasoning power of these machines matches or exceeds the reasoning power of the humans who instructed them and, in some cases, the reasoning power of any human performing such tasks". Intelligence Service was in fact "Pandora" (1985),[26] software developed for their thesis by two students of Jean-Louis Laurière,[27] one of the most famous and prolific French AI researchers.[28] Unfortunately, as this software was not developed by its own IT developers, GSI-TECSI was unable to make it evolve. Sales became scarce and marketing stopped after a few years.

[edit]Software architecture

[edit]The rule base or knowledge base

In expert system technology, the knowledge base is expressed with natural-language rules of the form IF ... THEN .... For example:

"IF it is living THEN it is mortal"
"IF his age = known THEN his year of birth = date of today - his age in years"
"IF the identity of the germ is not known with certainty AND the germ is gram-positive AND the morphology of the organism is "rod" AND the germ is aerobic THEN there is a strong probability (0.8) that the germ is of type enterobacteriacae"[29]

This formulation has the advantage of speaking in everyday language, which is very rare in computer science (a classic program is expressed in code). Rules express the knowledge to be exploited by the expert system. Other rule formulations exist which are not in everyday language and are understandable only to computer scientists; each rule style is adapted to an engine style. The whole problem of expert systems is to collect this knowledge, usually unconscious, from the experts. Methods exist, but almost all are usable only by computer scientists.
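To make this concrete, such a rule base can be sketched as plain data interpreted by an engine. The following is a minimal illustration in Python; the rule encodings and function names are invented for this sketch and do not come from any particular expert system shell:

```python
# A tiny rule base in the IF ... THEN ... style described above.
# Each rule pairs a set of condition facts with a concluded fact.
# The rule texts are illustrative, not taken from a real shell.
RULES = [
    ({"it is living"}, "it is mortal"),
    ({"the germ is gram-positive", "the morphology is rod",
      "the germ is aerobic"},
     "the germ is probably enterobacteriaceae"),
]

def matching_rules(facts):
    """Return the conclusion of every rule whose conditions all hold."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]  # <= tests subset membership

print(matching_rules({"it is living"}))  # → ['it is mortal']
```

A real shell would add a parser for the everyday-language syntax and an inference engine on top; here the rules are assumed to be already structured data.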

[edit]The inference engine

The inference engine is a computer program designed to produce reasoning on rules. In order to produce reasoning, it must be based on logic. There are several kinds of logic: propositional logic, predicate logic of order 1 or higher, epistemic logic, modal logic, temporal logic, fuzzy logic, and so on. Except for propositional logic, all are complex and can only be understood by mathematicians, logicians or computer scientists. Propositional logic is the basic human logic, expressed in syllogisms; an expert system that uses it is called a zeroth-order expert system. With logic, the engine is able to generate new information from the knowledge contained in the rule base and the data to be processed.

The engine has two ways to run: batch or conversational. In batch mode, the expert system has all the necessary data from the beginning. For the user, the program works like a classical program: the user provides data and receives results immediately, and the reasoning is invisible. The conversational method becomes necessary when the developer knows he cannot ask the user for all the necessary data at the start, the problem being too complex. The software must "invent" the way to solve the problem, requesting missing data from the user and gradually approaching the goal as quickly as possible. The result gives the impression of a dialogue led by an expert.

To guide such a dialogue, the engine may have several levels of sophistication: forward chaining, backward chaining and mixed chaining. Forward chaining resembles the questioning of an expert who has no idea of the solution and investigates progressively (e.g. fault diagnosis). In backward chaining, the engine has an idea of the target (e.g. is it okay or not? or: there is danger, but what is the level?) and starts from the goal in the hope of finding the solution as soon as possible. In mixed chaining the engine has an idea of the goal, but that is not enough: it deduces in forward chaining, from previous user responses, all that it can before asking the next question; quite often it thereby deduces the answer to the next question before asking it.
A strong advantage of using logic is that this kind of software is able to give the user a clear explanation of what it is doing (the "Why?") and what it has deduced (the "How?"). Better yet, thanks to logic, the most sophisticated expert systems are able to detect contradictions[30] in user information or in the knowledge, and can explain them clearly, revealing at the same time the expert's knowledge and way of thinking.
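The forward-chaining behaviour and the "How?" explanation described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the rules, facts and function names are invented, and a real engine would also handle questioning the user, backward chaining and uncertainty:

```python
# A minimal forward-chaining engine over propositional rules, with a
# "How?" trace: each deduced fact records the conditions that produced it.
# Rules and facts are invented for illustration only.

def forward_chain(rules, facts):
    """rules: list of (conditions, conclusion) pairs; facts: known facts.
    Repeatedly fires every rule whose conditions are satisfied until no
    new fact can be deduced; returns the facts plus an explanation trace."""
    facts = set(facts)
    trace = []  # the "How?": (deduced fact, conditions that justified it)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((conclusion, conditions))
                changed = True
    return facts, trace

rules = [
    ({"battery is dead"}, "engine will not start"),
    ({"engine will not start"}, "car cannot be driven"),
]
facts, trace = forward_chain(rules, {"battery is dead"})
for fact, because in trace:
    print(f"{fact} BECAUSE {sorted(because)}")
```

Each entry in `trace` is exactly the material an explanation module needs to answer "How did you deduce that?" in plain language.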

[edit]Advantages

Expert systems offer many advantages for users when compared to traditional programs because they operate like a human brain.[31][32]

[edit]Quick availability and opportunity to program itself

As the rule base is in everyday language (the engine itself is not touched), an expert system can be written much faster than a conventional program, by users or experts, bypassing professional developers and avoiding the need to explain the subject to them.

[edit]Ability to exploit a considerable amount of knowledge

Unlike conventional programs, the expert system uses a rule base, which means that the volume of knowledge to program is not a major concern. Whether the rule base has 10 rules or 10,000, the engine operates in the same way.

[edit]Reliability
The reliability of an expert system is the same as the reliability of a database: good, and higher than that of a classical program. It also depends on the size of the knowledge base.

[edit]Scalability
Evolving an expert system means adding, modifying or deleting rules. Since the rules are written in plain language, it is easy to identify those to be removed or modified.

[edit]Pedagogy
Engines that are driven by a true logic are able to explain to the user in plain language why they ask a question and how they arrived at each deduction. In doing so, they reveal the expert's knowledge contained in the expert system, so the user can learn this knowledge in its context. Moreover, they can communicate their deductions step by step, so the user has information about the problem even before the expert system's final answer.

[edit]Preservation and improvement of knowledge

Valuable knowledge can disappear with the death, resignation or retirement of an expert. Recorded in an expert system, it becomes permanent. To develop an expert system is to interview an expert and capture the expert's knowledge in the system. In doing so, the system reflects and enhances that knowledge.

[edit]New areas neglected by conventional computing

When automating a vast body of knowledge, the developer may meet a classic problem: "combinatorial explosion", commonly known as "information overload", which greatly complicates his work and results in a complex and time-consuming program. A reasoning expert system does not encounter that problem, since the engine automatically handles the combinatorics between rules. This ability makes it possible to address areas where combinatorics are enormous: highly interactive or conversational applications, fault diagnosis, decision support in complex systems, educational software, logic simulation of machines or systems, and constantly changing software.

[edit]Disadvantages


The expert system has a major flaw, which explains its low success despite the principle having existed for 70 years: knowledge collection and its interpretation into rules, i.e. knowledge engineering. Most developers have no automated method to perform this task; instead they work manually, increasing the likelihood of errors. Expert knowledge is generally not well understood; rules may not exist, may be contradictory, or may be poorly written and unusable. Worse still, most expert systems use an engine incapable of reasoning. As a result, an expert system will often work poorly and the project will be abandoned.[33] Correct development methodology can mitigate these problems. There exists software capable of interviewing a true expert on a subject and automatically writing the rule base, or knowledge base, from the answers. The expert system can then be run simultaneously before the true expert's eyes, performing a consistency check on the rules.[34][35][36] Experts and users can check the quality of the software before it is finished.

Many expert systems are also penalized by the logic used. Most formal systems of logic operate on variable facts, i.e. facts whose value changes several times during one reasoning; this is considered a property of more powerful logics. This is the case with the Mycin and Dendral expert systems, and with, for example, fuzzy logic, predicate logic (Prolog), symbolic logic and mathematical logic. Propositional logic uses only invariant facts.[37] In the human mind, the facts used must remain invariable as long as the brain reasons with them. This makes possible two ways of controlling the consistency of the knowledge: detection of contradictions and production of explanations.[38][39] That is why expert systems using variable facts, though more understandable to the developers who create them and hence more common, are less easy to develop, less clear to users and less reliable, and why they do not produce explanations of their reasoning or detect contradictions.
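Contradiction detection over invariant facts, as described above, can be sketched very simply: since a fact never changes value during one reasoning, asserting a fact whose negation is already known can be flagged immediately. A hypothetical illustration — the "NOT " string prefix used to encode negation is an invention of this sketch, not a feature of any real shell:

```python
# Sketch: contradiction detection with invariant facts.
# A fact never changes value during one reasoning, so asserting a fact
# whose negation is already known is immediately a contradiction.

def negation_of(fact):
    """Return the negated form of a fact under the "NOT " convention."""
    return fact[4:] if fact.startswith("NOT ") else "NOT " + fact

def assert_fact(facts, fact):
    """Add a fact to the set, refusing it if its negation is known."""
    if negation_of(fact) in facts:
        raise ValueError(f"contradiction: {fact!r} vs {negation_of(fact)!r}")
    facts.add(fact)

facts = set()
assert_fact(facts, "the germ is aerobic")
# assert_fact(facts, "NOT the germ is aerobic")  # would raise ValueError
```

With variable facts no such check is possible, because a fact legitimately taking a new value mid-reasoning is indistinguishable from a contradiction.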

[edit]Application field

Expert systems address areas where combinatorics is enormous:

highly interactive or conversational applications, IVR, voice server, chatterbot
fault diagnosis, medical diagnosis
decision support in complex systems, process control, interactive user guide
educational and tutorial software
logic simulation of machines or systems
knowledge management
constantly changing software

They can also be used in software engineering for rapid prototyping of applications (RAD). Indeed, an expert system quickly developed in front of the expert shows whether the future application should be programmed. Any program contains expert knowledge, and classic programming always begins with an expert interview. A program written in the form of an expert system receives all the specific benefits of expert systems; among other things, it can be developed by anyone without computer training and without programming languages. But this solution has a defect: an expert system runs slower than a traditional program, because it constantly "reasons", whereas classic software simply follows the paths traced by the programmer.

[edit]Examples of applications

Expert systems are designed to facilitate tasks in fields such as accounting, law, medicine, process control, financial services, production and human resources. Typically, the problem area is complex enough that a simpler traditional algorithm cannot provide a proper solution. The foundation of a successful expert system is a series of technical procedures and development steps that may be designed by technicians and related experts. As such, expert systems do not typically provide a definitive answer, but probabilistic recommendations.

An example of the application of expert systems in the financial field is expert systems for mortgages. Loan departments are interested in expert systems for mortgages because of the growing cost of labour, which makes the handling and acceptance of relatively small loans less profitable. They also see a possibility for standardized, efficient handling of mortgage loans by applying expert systems, appreciating that for the acceptance of mortgages there are hard and fast rules which do not always exist with other types of loans. Another common application of expert systems in the financial area is in trading recommendations in various marketplaces. These markets involve numerous variables and human emotions which may be impossible to characterize deterministically, so expert systems based on experts' rules of thumb and on simulation data are used. Expert systems of this type can range from those providing regional retail recommendations, like Wishabi, to those used to assist monetary decisions by financial institutions and governments.

Another 1970s and 1980s application of expert systems, which today we would simply call AI, was in computer games. For example, the computer baseball games Earl Weaver Baseball and Tony La Russa Baseball each had highly detailed simulations of the game strategies of those two baseball managers. When a human played the game against the computer, the computer queried the Earl Weaver or Tony La Russa expert system for a decision on what strategy to follow. Even those choices where some randomness was part of the natural system (such as when to throw a surprise pitch-out to try to trick a runner trying to steal a base) were decided based on probabilities supplied by Weaver or La Russa. Today we would simply say that "the game's AI provided the opposing manager's strategy".

A newer application for expert systems is automated computer program generation. Funded by a US Air Force grant, an expert-system-based application (hprcARCHITECT) that generates computer programs for mixed processor technology (FPGA/GPU/multicore) systems without the need for technical specialists has recently been commercially introduced. There is also a large body of contemporary research and development directed toward using expert systems for human behavior modeling and for decision support systems. The former is especially important in the area of intercultural relations and the latter in improving management operations in small businesses.

[edit]Knowledge engineering

Main article: knowledge engineering

The building, maintaining and development of expert systems is known as knowledge engineering.[40] Knowledge engineering is a "discipline that involves integrating knowledge into computer systems in order to solve complex problems normally requiring a high level of human expertise".[41] There are generally three individuals interacting with an expert system. Primary among these is the end-user, the individual who uses the system for its problem-solving assistance. In the construction and maintenance of the system there are two other roles: the problem domain expert, who builds the system and supplies the knowledge base, and the knowledge engineer, who assists the experts in determining the representation of their knowledge, enters this knowledge into an explanation module, and defines the inference technique required to solve the problem. Usually the knowledge engineer will represent the problem-solving activity in the form of rules. When these rules are created from domain expertise, the knowledge base stores the rules of the expert system.

Executive information system


From Wikipedia, the free encyclopedia

An executive information system (EIS) is a type of management information system intended to facilitate and support the information and decision-making needs of senior executives by providing easy access to both internal and external information relevant to meeting the strategic goals of the organization. It is commonly considered a specialized form of decision support system (DSS).[1] The emphasis of EIS is on graphical displays and easy-to-use user interfaces, and they offer strong reporting and drill-down capabilities. In general, EIS are enterprise-wide DSS that help top-level executives analyze, compare, and highlight trends in important variables so that they can monitor performance and identify opportunities and problems. EIS and data warehousing technologies are converging in the marketplace.

In recent years, the term EIS has lost popularity in favor of business intelligence (with the sub areas of reporting, analytics, and digital dashboards).
Contents
[hide]

1 History
2 Components
o 2.1 Hardware
o 2.2 Software
o 2.3 User interface
o 2.4 Telecommunication
3 Applications
o 3.1 Manufacturing
o 3.2 Marketing
o 3.3 Financial
4 Advantages and disadvantages
o 4.1 Advantages of EIS
o 4.2 Disadvantages of EIS
5 Future trends
6 See also
7 References
8 External links

[edit]History
Traditionally, executive information systems were developed as mainframe computer-based programs. The purpose was to package a company's data and to provide sales performance or market research statistics for decision makers, such as financial officers, marketing directors, and chief executive officers, who were not necessarily well acquainted with computers. The objective was to develop computer applications that would highlight information to satisfy senior executives' needs. Typically, an EIS provides only the data needed to support executive-level decisions, rather than all of the company's data.

Today, EIS are applied not only in typical corporate hierarchies, but also on personal computers on local area networks. EIS now cross computer hardware platforms and integrate information stored on mainframes, personal computer systems, and minicomputers. As some client service companies adopt the latest enterprise information systems, employees can use their personal computers to access the company's data and decide which data are relevant for their decision making. This arrangement allows all users to customize their access to the proper company data and provides relevant information to both upper and lower levels in companies.

[edit]Components
The components of an EIS can typically be classified as:

[edit]Hardware
When discussing computer hardware for an EIS environment, we should focus on the hardware that meets the executive's needs. The executive must be put first, and the executive's needs must be defined before the hardware can be selected. The basic hardware needed for a typical EIS includes four components:

1. Input data-entry devices. These devices allow the executive to enter, verify, and update data immediately.
2. The central processing unit (CPU), which is the kernel because it controls the other computer system components.
3. Data storage files. The executive can use this part to save useful business information, and it also helps the executive to search historical business information easily.
4. Output devices, which provide a visual or permanent record for the executive to save or read. This refers to visual output devices such as a monitor or printer.

In addition, with the advent of local area networks (LAN), several EIS products for networked workstations became available. These systems require less support and less expensive computer hardware. They also increase access to the EIS information for many more users within a company.

[edit]Software
Choosing the appropriate software is vital to designing an effective EIS.[citation needed] Therefore, the software components and how they integrate the data into one system are very important. The basic software needed for a typical EIS includes four components:

1. Text base software. The most common form of text is probably documents.
2. Database. Heterogeneous databases residing on a range of vendor-specific and open computer platforms help executives access both internal and external data.
3. Graphic base. Graphics can turn volumes of text and statistics into visual information for executives. Typical graphic types are: time series charts, scatter diagrams, maps, motion graphics, sequence charts, and comparison-oriented graphs (i.e., bar charts).
4. Model base. The EIS models contain routine and special statistical, financial, and other quantitative analyses.

[edit]User interface

An EIS must retrieve relevant data for decision makers efficiently, so the user interface is very important. Several types of interfaces can be available to the EIS structure, such as scheduled reports, questions/answers, menu driven, command language, natural language, and input/output.

[edit]Telecommunication
As decentralization becomes the current trend in companies, telecommunications will play a pivotal role in networked information systems. Transmitting data from one place to another has become crucial for establishing a reliable network. In addition, telecommunications within an EIS can accelerate the need for access to distributed data.

[edit]Applications
An EIS enables executives to find data according to user-defined criteria and promotes information-based insight and understanding. Unlike a traditional management information system presentation, an EIS can distinguish between vital and seldom-used data, and track different key critical activities for executives, both of which are helpful in evaluating whether the company is meeting its corporate objectives. Having recognized these advantages, people have applied EIS in many areas, especially in manufacturing, marketing, and finance.

[edit]Manufacturing
Basically, manufacturing is the transformation of raw materials into finished goods for sale, or intermediate processes involving the production or finishing of semi-manufactures. It is a large branch of industry and of secondary production. Manufacturing operational control focuses on day-to-day operations, and the central idea of this process is effectiveness and efficiency.

[edit]Marketing
In an organization, the marketing executive's role is to create the future. The main duty is managing available marketing resources to create a more effective future. For this, executives need to make judgments about the risk and uncertainty of a project and its impact on the company in the short and long term. To assist marketing executives in making effective marketing decisions, an EIS can be applied. An EIS provides an approach to sales forecasting which allows the marketing executive to compare the sales forecast with past sales. An EIS also offers an approach to product pricing, found in venture analysis: the marketing executive can evaluate pricing as related to competition, along with the relationship of product quality to the price charged. In summary, an EIS software package enables marketing executives to manipulate the data by looking for trends, performing audits of the sales data, and calculating totals, averages, changes, variances, or ratios.

[edit]Financial
Financial analysis is one of the most important steps for companies today. Executives need to use financial ratios and cash flow analysis to estimate trends and make capital investment decisions. An EIS is a responsibility-oriented approach that integrates planning or budgeting with control of performance reporting, and it can be extremely helpful to finance executives. An EIS focuses on accountability of financial performance, and it recognizes the importance of cost standards and flexible budgeting in developing the quality of information provided for all executive levels.

[edit]Advantages and disadvantages

[edit]Advantages of EIS

Easy for upper-level executives to use; extensive computer experience is not required
Provides timely delivery of company summary information
Information that is provided is better understood
Provides timely delivery of information, so management can make decisions more promptly
Improves tracking of information
Offers efficiency to decision makers

[edit]Disadvantages of EIS

System dependent
Limited functionality, by design
Information overload for some managers
Benefits hard to quantify
High implementation costs
System may become slow, large, and hard to manage
Need good internal processes for data management
May lead to less reliable and less secure data

[edit]Future trends

The future of executive information systems will not be bound to mainframe computer systems. This trend frees executives from having to learn different computer operating systems and substantially decreases implementation costs for companies. Because this trend relies on existing software applications, executives will also not need to learn a new or special language for the EIS package.
