
SOFTWARE DESIGN & PROJECT MANAGEMENT
MODULE 1: SYSTEM ANALYSIS & DESIGN
Overview of System Analysis & Design

Module topics: Introduction to different methodologies & structured system analysis; details of the SDLC approach with mini cases; E-R diagrams; DFD concepts; data dictionary concepts; structure charts; modular programming; I/O & file design considerations; Entity Life Histories (ELH).

Introduction to different methodologies & structured system analysis

CHARACTERISTICS OF A SYSTEM

(a) Predetermined Objective: All systems have predetermined objectives. If an organization is working without predetermined objectives then it is not called a system. The objective is the goal of the system; it is the purpose for which the system is functioning. If the purpose of the system is not achieved then the system will fail. The system is said to be successful and in working condition only if it achieves the predetermined objectives.

System - Objective
Education system - to convert empty brains into developed minds
Automobile system - to transport goods or people from one place to another
Human body - to live and be lively
Business system - to achieve profit and maintain existence in the business world

(b) Set of Components: A system does not stand alone. It is a system because it contains a set of components inside it. A system is a group of components; if any component is missing then the system will fail. A system cannot work if the components inside it do not work. Thus a system's existence is a result of the work of the components of the system.

(c) Set of Processes: A system is not only the set of components taken together but also the processes applied to those components. If we gather the raw material, the product is not going to be formed unless some process is applied to the raw material. Likewise, collecting bricks, water, cement, workers and other components will not make the building (system) ready unless the process is applied to them.

System - Components - Processes
Education - Teacher, Student, Institute - Teaching process, Learning process, Admission process
Human body - Stomach, Lungs - Digestion process, Purification process

(d) Interrelated and Interdependent: All systems consist of interrelated and interdependent elements or components. For example, our biological system contains bones, organs and different biochemicals. Similarly, a business system consists of organizational structures, people and other equipment working together to earn profit. All the components are interrelated with each other and they are interdependent on each other. For example, the blood carries oxygen from the lungs to other parts of the body; that is, the blood, lungs and other body parts are interrelated and interdependent on each other.

(e) Subsystem: A system can be further divided into subsystems, each of which has its own components and processes. In the practical world, such components make up one system, which in turn is part of some supra system. This can be easily understood with the following table:

System - Subsystems
Human body - Digestive system, Circulatory system, etc.
Business system - Accounting system, Purchase and sale system, Production system, etc.
Automobile system - Internal combustion engine, Fuel system, Weight carrying system, etc.
Education system - Teaching subsystem, Admission system, Exam system, etc.

(f) Interaction: The way each component of a system functions with the other components of the system is called interaction. The different subsystems of a system interact with each other to achieve the objective of the system. In a business system, for example, the marketing subsystem must interact with the production subsystem, and the payroll subsystem may interact with the personnel subsystem.

(g) Integration: Interrelationship and interdependence must exist among the components; this is referred to as integration. It is said of a system that the whole is greater than the sum of the parts, i.e. the components of a system work together to produce an effect which is greater than the sum of the effects of its components taken separately. The work done by the individual subsystems is integrated to achieve the central goal of the system. The goal of an individual subsystem is of lower priority than the goal of the system as a whole.

Components of the System: A system contains the following components:
1. Input
2. Process
3. Output
4. Control
5. Feedback

System Environment, Subsystem and Supra System

System Environment: All systems function within some sort of environment. The environment, like a system, is a collection of elements. These elements surround the system and often interact with it. For any given problem, there are many types of systems and many types of environments. Thus it is important to be clear about what constitutes the system and the environment of interest. The features that define and delineate a system form its boundary. The system is inside the boundary; the environment is outside the boundary. In some cases it is fairly simple to define what is part of the system and what is not. In other cases, the person studying the system may arbitrarily define the boundaries. Some examples of boundaries are given in the following table.

System - Boundary
Human - Skin, hair, nails and all parts contained inside form the system. All things outside are the environment.
Automobile - The automobile body plus tires and all parts contained within form the system.

Production - Machines, inventory or work in process, production employees, procedures, etc. form the system. The rest of the company forms its environment.

A system and its environment can be described in many ways. A subsystem is a part of a larger system, each subsystem being delineated by its boundaries. The interconnections and interactions between the subsystems are termed interfaces. Interfaces occur at the boundary and take the form of inputs and outputs. A supra system refers to the entity formed by a system and other equivalent systems with which it interacts. For example, an organization may be subdivided into numerous functional areas such as marketing, finance, manufacturing, research and development and so on. Each of these functional areas can be viewed as a subsystem of a larger organizational system, because each could be considered to be a system in and of itself. For example, marketing may be viewed as a system that consists of elements such as market research, advertising, sales and so on. Collectively, these elements in the marketing area may be viewed as making up the marketing supra system. Similarly, the various functional areas of an organization are elements in the same supra system within the organization.

System Entropy and System Stress
Systems can run down and decay, or can become disordered or disorganized. Stated in system terminology, an increase in entropy takes place; maintenance input is needed to prevent or offset this and keep the system going. This maintenance input is termed negative entropy. Open systems require more negative entropy than relatively closed systems to keep a steady state of organization and operation.

System Stress and System Change: Systems, whether they are living or artificial systems, organizational systems, information systems, or systems of controls, change because they undergo stress. A stress is a force transmitted by a system's supra system that causes the system to change, so that the supra system can better achieve its goals. In trying to accommodate the stress, the system may impose stress on its subsystems, and so on.

Effects of stress: There are two effects of stress, which can be imposed on a system separately or concurrently: (i) a change in the goal set of the system - new goals may be created or old goals may be eliminated; (ii) a change in the achievement levels desired for existing goals - the level of desired achievement may be increased or decreased.

Types of System: There are the following types of system:
(a) Physical System
(b) Abstract System
(c) Open System
(d) Closed System
(e) Deterministic & Probabilistic System
(f) Information System

(a) Physical System: A physical system is a tangible or visible system; that is, a tangible system can be seen, touched and felt. A physical system may operate statically or dynamically. For example, a steel filing cabinet is a static physical system. An air conditioning unit is a dynamic physical system which responds to the environment and stops or starts operating depending on the temperature.

(b) Abstract System: Abstract systems are conceptual or non-physical entities. Such systems involve an abstract conceptualization of a physical system. For example, a model is an abstract system, as it is a conceptualization and a representation. Another example of an abstract system is an algorithm or an equation. Abstract models are often used to understand physical systems, their components, their interrelationships, etc.

(c) Open Systems: An open system is a system that interacts freely with its environment, taking input and returning output. When the environment changes, an open system must also change in order to adapt itself to the environment; otherwise the system will become outdated. For example, our body system, the educational system, business systems and all other systems which take input and return output come into the category of open systems. To achieve its goal, an open system has to interact with the environment that lies outside the boundaries of the system. Information systems are open systems, since they accept inputs from the environment and produce output.

Characteristics of an Open System: A good dynamic open system is a system that continues to function properly and usefully with the passage of time, even as the environment changes. It is such systems that the process of system analysis and design aims at. Characteristics of good dynamic open systems are:
- They reach a steady state of equilibrium when they function properly.
- They are able to adjust and regulate themselves. They resist entropy (growing disorder and loss of energy over time) by modifying their processes or by seeking new input to return to the steady state.
- They have a useful process cycle and produce useful output.
- They are clear about the goal and may achieve the goal using any of the alternative paths and courses of action.
- They use focused, specialized functions and greater differentiation between their components.

Examples:
Physical system - construction of a building, a dam, etc.; an automobile system.
Abstract system - teaching/training systems, learning systems, franchisee systems, etc.
Open system - almost all systems, but to name a few: recruiting systems, production systems, marketing systems, etc.

(d) Closed System: A closed system is a system that is cut off from its environment and does not interact with it. That is, a closed system is isolated from its environment and remains unaffected by changes in the environment. Such systems are very rare to find, but to name a few: teaching with a defined syllabus, a raw material ordering system, the payroll system of a firm.

(e) Deterministic and Probabilistic Systems: A deterministic system operates in a predictable manner; if one knows the state of the system at a given point of time, its next state can be predicted without error. Examples: production systems, cost accounting systems, health and hygiene systems, etc.

A probabilistic system, in contrast, is one whose next state cannot be predicted without error; its behaviour can only be described in terms of probability. For example, in an inventory system, the average demand, lead time, etc. are probabilistic. Examples: marketing and sales systems, economic forecasting systems, weather forecasting systems, etc.

(f) Information System: examples include banking systems, placement agency systems, university admission systems, etc.

Closed systems are very rare. In our daily life we will mostly be dealing with open systems.

An information system is a system which provides information for decision making and/or control of the organization. Information is a material or non-material entity which reduces uncertainty about a situation or about an event. For example, information that the weather will be fine tomorrow reduces our uncertainty about whether the cricket match will be played or not. Organizations use information systems to process transactions, reduce costs and generate revenue. For example, banks use information systems to process customer cheques and produce statements.

INFORMATION SYSTEM (IS) CONCEPTS

(a) Understanding Data and Information

Data: Data consists of raw facts, such as an employee's name and number of hours worked in a week, inventory part numbers, or sales orders. Data represents real-world things. As we have stated, data - simply, raw facts - has little value beyond its existence. For example, consider data as pieces of railroad track in a model railroad kit. In this state, each piece of track has little value beyond its inherent value as a single object. However, if some relationship is defined among the pieces of the track, they will gain value. By arranging the pieces of track in a certain way, a railroad layout begins to emerge. Data is represented by: alphanumeric data (numbers, letters, and other characters), image data (graphic images and pictures), audio data (sound, noise, or tones) and video data (moving images or pictures).

Information: A collection of facts organized in such a way that they have value beyond the individual facts themselves. Information is much the same as the railroad layout: rules and relationships can be set up to organize data into useful, valuable information. The type of information created depends on the relationships defined among existing data. For example, the pieces of track could be arranged in a different way to form different layouts; adding new or different data means relationships can be redefined and new information can be created. For instance, adding new pieces to the track can greatly increase the value of the final product: we can now create a more elaborate railroad layout.

Process: Turning data into information is a process - a set of logically related tasks performed to achieve a defined outcome.

Knowledge: Awareness and understanding of a set of information and the ways that information can be made useful to support a specific task or reach a decision.

The Process of Transforming Data into Information

The Value of Information
The value of information is directly linked to how it helps decision makers achieve their organization's goals. For example, the value of information might be measured in the time required to make a decision or in the increased profits to the company.

The Characteristics of Valuable Information

An information system is a set of interrelated elements or components that collect (input), manipulate (process), and disseminate (output) data and information, and provide a feedback mechanism to meet an objective.

Components of an Information System
Feedback, a component of an information system, is critical to the successful operation of a system.

Processing Input to Output

INPUT, PROCESSING, OUTPUT, AND FEEDBACK

(i) Input
In information systems, input is the activity of gathering and capturing raw data. In producing pay checks, for example, the number of hours worked by every employee must be collected before pay checks can be calculated or printed. In a university grading system, student grades must be obtained from instructors before a total summary of grades for the semester or quarter can be compiled and sent to the appropriate students. Input can take many forms. In an information system designed to produce pay checks, for example, employee timecards might be the initial input. In a 911 emergency telephone system, an incoming call would be considered an input. Input to a marketing system might include customer survey responses. Notice that, regardless of the system involved, the type of input is determined by the desired output. Input can be a manual process, or it may be automated. A scanner at a grocery store that reads bar codes and enters the grocery item and price into a computerized cash register is a type of automated input process. Regardless of the input method, accurate input is critical to achieve the desired output.

(ii) Processing
In information systems, processing involves converting or transforming data into useful outputs. Processing can involve making calculations, making comparisons, taking alternative actions, and storing data for future use. Processing can be done manually or with the assistance of computers. In the payroll application, the number of hours worked for each employee must be converted into net pay. The required processing can first involve multiplying the number of hours worked by the employee's hourly pay rate to get gross pay. If more than 40 weekly hours are worked, overtime pay may also be determined. Then deductions are subtracted from gross pay to get net pay. For instance, federal and state taxes can be withheld or subtracted from gross pay; many employees have health and life insurance, savings plans, and other deductions that must also be subtracted from gross pay to arrive at net pay.
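As a rough sketch only (not part of the original text), the payroll processing just described can be illustrated in Python; the 40-hour threshold comes from the passage above, while the time-and-a-half overtime rate and the sample figures are assumptions:

    # Payroll processing sketch: convert hours worked into net pay.
    # Overtime above 40 hours is paid at an assumed rate of 1.5 times the hourly rate.
    def net_pay(hours_worked, hourly_rate, deductions):
        regular_hours = min(hours_worked, 40)
        overtime_hours = max(hours_worked - 40, 0)
        gross_pay = regular_hours * hourly_rate + overtime_hours * hourly_rate * 1.5
        # Taxes, insurance, savings plans and other deductions are subtracted from gross pay.
        return gross_pay - sum(deductions)

    # Example: 45 hours at 10.00 per hour with 50.00 of total deductions.
    print(net_pay(45, 10.00, [30.00, 20.00]))   # 425.0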

(iii) Output
In information systems, output involves producing useful information, usually in the form of documents and reports. Output can include pay checks for employees, reports for managers, and information supplied to stockholders, banks, government agencies, and other groups. In some cases, output from one system can become input for another. For example, output from a system that processes sales orders can be used as input to a customer billing system. Often the output of one system can be used as input to control other systems or devices. For instance, manufacturing office furniture is complicated, with many variables; thus, the salesperson, customer, and furniture designer go through several iterations of designing furniture to meet the customer's needs. Special computer software and hardware are used to create the original design and rapidly revise it. Once the last design mock-up is approved, the design workstation software creates a bill of materials that goes to manufacturing to produce the order. Output can be produced in a variety of ways. For a computer, printers and display screens are common output devices. Output can also be a manual process involving handwritten reports and documents.

(iv) Feedback
In information systems, feedback is output that is used to make changes to input or to processing activities. For example, errors or problems might make it necessary to correct input data or change a process. Consider a payroll example. Perhaps the number of hours an employee worked was entered into the computer as 400 instead of 40 hours. Fortunately, most information systems check to make sure that data falls within certain predetermined ranges. For the number of hours worked, the range might be from 0 to 100 hours; it is unlikely that an employee would work more than 100 hours in any given week. In this case, the information system would determine that 400 hours is out of range and provide feedback, such as an error report. The feedback is used to check and correct the input on the number of hours worked to 40. If undetected, this error would result in a very high net pay printed on the pay check! Feedback is also important for managers and decision makers. For example, output from an information system might indicate that inventory levels for a few items are getting low. A manager could use this feedback to decide to order more inventory. The new inventory orders then become input to the system. In this case, the feedback system reacts to an existing problem and alerts a manager that there are too few inventory items on hand. In addition to this reactive approach, a computer system can also be proactive, predicting future events to avoid problems. This concept, often called forecasting, can be used to estimate future sales and order more inventory before a shortage occurs.
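A minimal sketch of the range check described above (an addition to the original text; the 0-100 limits come from the example, everything else is assumed):

    # Validate hours worked and produce feedback when the value is out of range.
    def check_hours_worked(hours):
        if 0 <= hours <= 100:
            return None                     # input accepted, no feedback needed
        return f"Error: {hours} hours is outside the expected range 0-100; please re-enter."

    print(check_hours_worked(400))   # produces an error report as feedback
    print(check_hours_worked(40))    # None: the corrected value passes the check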

CATEGORIES OF INFORMATION SYSTEM

Transaction Processing Systems: The most fundamental computer-based systems in an organization pertain to the processing of business transactions. Transaction processing systems are aimed at expediting and improving the routine business activities in which all organizations engage. Standard operating procedures which facilitate the handling of transactions are often embedded in computer programs that control the entry of data, the processing of details and the search and presentation of data and information. The high volume of well-understood transactions associated with the operating level of an organization, as well as the ability of managers to develop specific procedures for handling them, often triggers the need for computer assistance. A computerized transaction processing system provides speed and accuracy and can be programmed to follow routine procedures without any variance. The systems analyst designs the systems and processes to handle the various activities.

Management Information Systems: Transaction processing systems are operations oriented. In contrast, management information systems (MIS) assist managers in decision making and problem solving. They use results produced by the transaction processing system, but they may also use other information. In any organization, decisions must be made on many issues that recur regularly and require a certain amount of information. Because the decision-making process is well understood, the managers can identify the information that will be needed for the purpose. In turn, the information system can be developed so that reports are prepared regularly to support these recurring decisions.

Decision Support Systems: Not all decisions are of a recurring nature. Some occur only once or recur infrequently. Decision support systems (DSS) are aimed at assisting managers who are faced with unique decision problems. Often an important aspect of such a decision is determining what information is needed. In well-structured situations it is possible to identify information needs in advance, but in an unstructured environment it is difficult to do so. As information is acquired, the manager may realize that additional information is required. In such cases it is impossible to predesign system report formats and contents. A decision support system must, therefore, have greater flexibility than other information systems. A decision support system is much more useful when the decisions are of an unstructured or semi-structured nature.

Executive Information Systems (EIS): These are designed primarily for the strategic level of management. They enable executives to extract summary data from the databases and model complex problems without the need to learn complex query languages, enter formulae, use complex statistics, or have high computing skills. The systems are easy to use, incorporating touch screens in some instances, and are graphically based. High-level summary data and trend analysis are provided at the touch of a button, using graphics as a way of presenting the information. There are standard templates for these, so the executive does not need to construct the query or model. From the executive's PC, telecommunication links are often made to public databases and the information superhighway, so that external data can be browsed and incorporated into the models.

Expert Systems: These are designed to replace the need for a human expert. They are particularly important where expertise is scarce and therefore expensive. This is not number-crunching software, but software that holds knowledge in terms of facts and rules. This knowledge will be in a specific area, and therefore expert systems are not general, unlike most decision support systems, which can be applied to most scenarios. An expert system for oil drilling is not of much use in solving company taxation problems. Expert systems have arisen largely from academic research into artificial intelligence.

The expert system should be able to learn, i.e. change or add new rules. Expert systems are developed using very different programming languages, such as PROLOG, which are referred to as fifth generation languages, or using expert system shells, which can make the process quicker and easier. It has been suggested that expert systems would be of greater use at the tactical and strategic levels. This has been the case in banking, where expert systems scrutinize applications for loans, and lower-level staff accept the system's decision. This has replaced the somewhat subjective decision making of more senior managers.

Structured systems analysis and design methodology (SSADM) is a set of standards for systems analysis and application design. It uses a formal, methodical approach to the analysis and design of information systems. SSADM follows the waterfall life cycle model, starting from the feasibility study and continuing to the physical design stage of development. One of the main features of SSADM is the intensive user involvement in the requirements analysis stage. The users are made to sign off each stage as it is completed, assuring that requirements are met. The users are provided with clear, easily understandable documentation consisting of various diagrammatic representations of the system. SSADM breaks up a development project into stages, modules, steps and tasks. The first and foremost model developed in SSADM is the data model. It is a part of requirements gathering and consists of well-defined stages, steps and products. The techniques used in SSADM are logical data modelling, data flow modelling and entity behaviour modelling.

Logical Data Modelling: This involves the process of identifying, modelling and documenting data as a part of system requirements gathering. The data are classified further into entities and relationships.

Data Flow Modelling: This involves tracking the data flow in an information system. It clearly analyzes the processes, data stores, external entities and data movement.

Entity Behaviour Modelling: This involves identifying and documenting the events influencing each entity and the sequence in which these events happen.

Some of the important characteristics of SSADM are:


- Dividing a project into small modules with well-defined objectives
- Useful during the requirements specification and system design stages
- Diagrammatic representation and other useful modelling techniques
- Simple and easily understood by clients and developers
- Performing activities in a sequence

The stages of SSADM include:


- Determining feasibility
- Investigating the current environment
- Determining business system options
- Defining requirements
- Determining technical system options
- Creating the logical design
- Creating the physical design

The term software development methodology is used to describe a framework for the development of information systems. A particular methodology is usually associated with a specific set of tools, models and methods that are used for the analysis, design and implementation of information systems, and each tends to favour a particular lifecycle model. Often, a methodology has its own philosophy of system development that practitioners are encouraged to adopt, as well as its own system of recording and documenting the development process. Many methodologies have emerged in the past few decades in response to the perceived need to manage different types of project using different tools and methods. Each methodology has its own strengths and weaknesses, and the choice of which approach to use for a given project will depend on the scale of the project, the nature of the business environment, and the type of system being developed. Structured methodologies use a formal process of eliciting system requirements, both to reduce the possibility of the requirements being misunderstood and to ensure that all of the requirements are known before the system is developed. They also introduce rigorous techniques to the analysis and design process. SSADM is perhaps the most widely used of these methodologies, and is used in the analysis and design stages of system development. It does not deal with the implementation or testing stages. SSADM is an open standard, and as such is freely available for use by companies or individuals. It has been used for all government information systems development since 1981, when it was first released, and has also been used by many companies in the expectation that its use will result in robust, high-quality information systems. SSADM is still widely used for large scale information systems projects, and many proprietary CASE tools are available that support SSADM techniques. The SSADM standard specifies a number of modules and stages that should be undertaken sequentially. It also specifies the deliverables to be produced by each stage, and the techniques to be used to produce those deliverables. The system development life cycle model adopted by SSADM is essentially the waterfall model, in which each stage must be completed and signed off before the next stage can begin.

SSADM techniques
SSADM revolves around the use of three key techniques that derive three different but complementary views of the system being investigated. The three different views of the

system are cross-referenced and checked against each other to ensure that an accurate and complete overview of the system is obtained. The three techniques used are:

Logical Data Modelling (LDM) - this technique is used to identify, model and document the data requirements of the system. The data held by an organisation is concerned with entities (things about which information is held, such as customer orders or product details) and the relationships (or associations) between those entities. A logical data model consists of a Logical Data Structure (LDS) and its associated documentation. The LDS is sometimes referred to as an Entity Relationship Model (ERM). Relational data analysis (or normalisation) is one of the primary techniques used to derive the system's data entities, their attributes (or properties), and the relationships between them.

Data Flow Modelling - this technique is used to identify, model and document the way in which data flows into, out of, and around an information system. It models processes (activities that act on the data in some way), data stores (the storage areas where data is held), external entities (an external entity is either a source of data flowing into the system, or a destination for data flowing out of the system), and data flows (the paths taken by the data as it moves between processes and data stores, or between the system and its external entities). A data flow model consists of a set of integrated Data Flow Diagrams (DFDs), together with appropriate supporting documentation.

Entity Behaviour Modelling - this technique is used to identify, model and document the events that affect each entity, and the sequence in which these events may occur. An entity behaviour model consists of a set of Entity Life History (ELH) diagrams (one for each entity), together with appropriate supporting documentation.
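Purely as an illustrative sketch (not part of SSADM itself), the kind of entities, attributes and relationships captured by a logical data model can be mirrored in code; the Customer and Order entities and all of their attributes below are invented for the example:

    from dataclasses import dataclass, field
    from typing import List

    # Two illustrative entities from a hypothetical logical data structure.
    @dataclass
    class Order:
        order_number: str        # key attribute
        order_date: str
        total_value: float

    @dataclass
    class Customer:
        customer_id: str         # key attribute
        name: str
        # One-to-many relationship: a customer places many orders.
        orders: List[Order] = field(default_factory=list)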

Activities within the SSADM framework are grouped into five main modules. Each module is sub-divided into one or more stages, each of which contains a set of rigorously defined tasks. SSADM's modules and stages are briefly described in the table below.

Module 1 - Feasibility Study

Stage 0 - Feasibility: The high-level analysis of a business area to determine whether a proposed system can cost-effectively support the business requirements identified. A Business Activity Model (BAM) is produced that describes the business activities and events, and the business rules in operation. Problems associated with the current system, and the additional services required, are identified. A high-level data flow diagram is produced that describes the current system in terms of its existing processes, data stores and data flows. The structure of the system data is also investigated, and an initial LDM is created.

Module 2 - Requirements Analysis

Stage 1 - Investigation of Current Environment: The system's requirements are identified and the current business environment is modelled using data flow diagrams and logical data modelling.

Stage 2 - Business System Options: Up to six business system options are presented, of which one will be adopted. Data flow diagrams and logical data models are produced to support each option. The option selected defines the boundary of the system to be developed.

Module 3 - Requirements Specification

Stage 3 - Definition of Requirements: Detailed functional and non-functional requirements (for example, the levels of service required) are identified and the required processing and system data structures are defined. The data flow diagrams and logical data model are refined, and validated against the chosen business system option. The data flow diagrams and logical data model are then validated against the entity life histories, which are also produced during this stage. Parts of the system may be produced as prototypes and demonstrated to the customer to confirm correct interpretation of requirements and obtain agreement on aspects of the user interface.

Module 4 - Logical System Specification

Stage 4 - Technical System Options: Up to six technical options for the development and implementation of the system are proposed, and one is selected.

Stage 5 - Logical Design: In this stage the logical design of the system, including user dialogues and database enquiry and update processing, is undertaken.

Module 5 - Physical Design

Stage 6 - Physical Design: The logical design and the selected technical system option provide the basis for the physical database design and a set of program specifications.

SSADM is well suited to large and complex projects where the requirements are unlikely to change significantly during the project's life cycle. Its documentation-oriented approach and relatively rigid structure make it inappropriate for smaller projects, or those for which the requirements are uncertain, or are likely to change because of a volatile business environment.

Entity Life Histories
The purpose of an information system is to provide up-to-date and accurate information. The information held on the system is constantly changing - the number and names of the patients on a hospital ward, for example, or the price of electronic components. The system must be able to keep track of such changes. An entity life history (ELH) is a diagrammatic method of recording how information may change over time. It models the complete catalogue of events that can affect a data entity from its creation to its deletion, the context in which each event might occur, and the order in which events may occur. The ELH represents every possible sequence of events that can occur during the life of the entity. Remember that, although data is changed by a system process, the occurrence of that process is triggered by some event. It would obviously be an overwhelming task to model all of the events that could affect the system data at the same time, so instead we examine just one entity within the logical data structure at a time. An entity life history will be produced for each entity in the logical data structure. Information from the individual life histories can be collated at a later time to produce an entity/event matrix. The diagram below shows how we might model the life history of a bank account entity.

An entity life history for the "Bank Account" entity

The entity life history for "Bank Account" should accommodate any possible occurrence of that entity. All bank accounts must be opened, and money is either paid in or withdrawn. The diagram itself is read from left to right. If the structure branches downward, the branch must be followed down before moving on towards the right-hand side of the diagram. The first event to affect any occurrence of "Bank Account" will be the opening of the account. The account will have a life, which will consist of a series of transactions. Transactions can include the deposit or withdrawal of funds, direct payments, or the cashing of cheques. After an unspecified number of transactions have occurred, the account will be closed, and eventually deleted. The entity life history elements featured in the above example are:

Sequence - activities are undertaken in strict sequence, from left to right (for example, an account must be opened before any other event that will affect it can occur, and account closure must occur before account deletion).

Iteration - the asterisk in the top right-hand corner of the Transaction box signifies that a transaction is an event that can occur repeatedly.

Selection - boxes with small circles in their top right-hand corner represent alternative forms of transaction. A single transaction may be a deposit or a withdrawal of funds, a direct payment, or the cashing of a cheque.
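As a rough, non-SSADM illustration (an addition to the original text, not part of the standard notation), the sequence, iteration and selection rules of the Bank Account life history can be checked programmatically; the event names are taken from the example above:

    # A life history is valid if it starts with "Account Opened" (sequence),
    # ends with "Account Closed" then "Account Deleted" (sequence), and every
    # event in between is one of the alternative transaction types (selection),
    # repeated zero or more times (iteration).
    TRANSACTIONS = {"Deposit", "Withdrawal", "Direct Payment", "Cheque Cashed"}

    def valid_life_history(events):
        if len(events) < 3 or events[0] != "Account Opened":
            return False
        if events[-2:] != ["Account Closed", "Account Deleted"]:
            return False
        return all(e in TRANSACTIONS for e in events[1:-2])

    print(valid_life_history(["Account Opened", "Deposit", "Withdrawal",
                              "Account Closed", "Account Deleted"]))            # True
    print(valid_life_history(["Deposit", "Account Closed", "Account Deleted"]))  # False: never opened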

In the same way that an entity may be affected by several different events, a single event may affect more than one entity. When an instance of "Bank Account" is

created, for example, an instance of "Customer" must also be created. The interaction between an event and an entity is called an effect. Notice that elements that have an effect on an entity have no other elements below them in the entity life history diagram. Elements that do have other elements below them are called nodes. They have no significance other than specifying the sequence in which events may occur within the context of the entity's life history. The name of each element (shown as a label inside the box representing the entity) reflects the event affecting the entity (if the element is an effect) or a particular stage within the life history (if the element is a node). Although an entity life history can be constructed using only the elements sequence, iteration and selection, the representation of certain complex scenarios can be greatly simplified using two additional element types:

- parallel structures
- quit and resume

Sequence
A sequence consists of a series of nodes and/or effects reading from left to right, as shown below.

Boxes A, B, C, and D represent a sequence

In the above example, effect A will always occur first, followed by B, then C, then D. This is the only possible sequence. Although these sequential events will take place over a period of time, the time intervals involved are unspecified, and could range from a few seconds to many years.

Selection
A selection defines a number of nodes or effects that are alternatives to one another at a particular point in the entity's life history. A circle in the top right-hand corner of the box representing the element indicates that it is one of several elements that could be chosen.

Boxes E, F and G represent the available options

Because node A is the first element in the entity life history of "Entity X", an occurrence of the entity can only be created by event E, F or G. If we want to represent the fact that none of the available options have to be selected, we can include a null box, as shown below.

A null box indicates that an option does not need to be selected

Iteration
If an event or node can occur repeatedly at the same point within an entity's life history, the fact is signified by an asterisk in the top right-hand corner of the box representing the event or node. The only restriction on iterations is that a single occurrence of the iteration must be complete before the next one starts.

Event H may occur repeatedly

Once "Entity X" has been created by event E, F or G under node A, event H can affect the entity zero or more times. The iteration symbol must not be used for events or nodes that occur only once, or not at all (use the null box instead).

Parallel structures
A parallel structure can be used if the sequence in which two nodes or events can occur is unpredictable, or where they may occur concurrently. Such a structure is shown as two nodes or events connected by parallel horizontal lines, as illustrated below.

Nodes I and J form a parallel structure

In the entity life history above, the events K, L and M may occur, in that order, under node I. At the same time event N, under node J, may occur zero or more times. Nodes I and J (representing the sequence and the iteration respectively) are connected by a parallel bar to signify possible concurrency. The same situation could be modelled using only sequence, iteration and selection elements, but the resulting diagram would be far more complex and consequently more difficult to interpret.

Quit and resume
Occasionally, a situation can arise that cannot easily be modelled using the entity life history elements already described. In order to accommodate such situations without making the entity life history diagram unduly large or complex, the quit and resume facility allows the sequential progress of nodes or events to quit at one point in the entity life history and resume at another point. This concept is illustrated below.

Following event F, activity will continue at node C

In the above example, event F has the label "Q1" immediately to its right, and node C has the label "R1" immediately to its right. Using this notation, we can signify that the event or node that will follow event F is whichever element has the label R1 to its right, which in this case is node C. As with parallel structures, the same situation could be modelled using only sequence, iteration and selection elements, but the resulting diagram would be more complex and difficult to interpret. The example below shows how we can model a situation in which a bank account has been closed (but not deleted), and is then re-opened. The event Account Reopened (labelled Q1) causes a quit back to the node Account Life (labelled R1).

Two possible uses of quit and resume

The quit and resume facility also allows us to quit from the main structure altogether, and resume at a point in a stand-alone structure. This can be used in situations where an event that can occur at any time will alter the normal sequence of the life history. Since it is impossible to predict exactly where such an event might occur within the entity's life history, an appropriate instruction should be added to the diagram indicating the circumstances under which the quit might occur, and from where. In the above example, the death of a customer may occur at any time after an account is opened, triggering an immediate quit, followed by a resume at R2 (Death Structure). In circumstances such as the death of a customer, the normal sequence will no longer apply. Note that it is also possible to quit from a stand-alone structure back to the main structure in an entity life history. To avoid ambiguity, while there may be more than one quit point with the same identifier, there cannot be more than one resume point with that identifier.

Data Dictionary
A data dictionary is a collection of data about the data. Its purpose is to rigorously define each and every data element, data structure, and data transform.

Strengths, weaknesses, and limitations
A data dictionary helps to improve communication between analysts and users and between technical personnel by establishing a set of consistent data definitions. If programmers develop data descriptions from a common data dictionary, several potentially serious module interface problems can be avoided.

At a higher level, different systems must often be linked or interfaced, and a common set of data definitions helps to minimize misunderstandings. By highlighting already existing data elements, a data dictionary helps the analyst avoid data redundancy. If all programs using a given data element are cross-referenced in the data dictionary, assessing the ripple effects of a change in the data is simplified.

Inputs and related ideas
The first step in creating a data dictionary is to identify the system's data elements and composites, a key objective of the information gathering phase of the system development life cycle. The data dictionary is an important adjunct to several analysis tools, such as data flow diagrams, entity-relationship diagrams, and data normalization. Creating a data dictionary is an important step in designing and developing traditional files or a database. The data dictionary often serves as a foundation for the requirements specification. Data structures are described below; inverted-L charts and Warnier-Orr diagrams are useful for visualizing a data structure.

Concepts
A data dictionary is a collection of data about the data in which each and every data element, data structure, and data transform is rigorously defined.

Data elements
The data dictionary defines each data element, assigns it a meaningful name, specifies both its logical and physical characteristics, and records information concerning how it is used.

a. Data names
It is important to follow a consistent standard when assigning data names. For example, an organization might use the rules imposed by its primary programming language, database management system, data dictionary software, or CASE product. Some data elements are known by two or more names. This often happens when different groups use the same data for different purposes or when several analysts work concurrently on the system. Rather than creating redundant data dictionary entries, resolve any differences in the definitions of the equivalent data elements, merge them, and record the alias name on the primary description.

Information that might be recorded for each data element in a data dictionary:

General: data element name; aliases or synonyms; definition; format; data type; length; picture; units (meters, pounds, etc.); composite description.
Control information: source; change authorizations; access authorizations; security information; authorized users; date of origin.
Usage characteristics: range of values; frequency of use; input/output/local; conditional values; limits.
Relationships: parent structures; child structures; file or database; key; data flows; processes; reports; forms; screens.

If two clearly different data elements have similar names, change at least one of them, because similar names can be confusing.

b. Definitions
A good definition precisely indicates the data element's purpose and clearly distinguishes it from the system's other data elements. Examples are useful, particularly for identifying exceptions to a general rule.

Data structures or composites
Data structures, also called group or composite data items, are defined by showing the data elements and substructures that comprise them. The symbols shown below (or their equivalents) are sometimes used to document (or partition) composite items, and the figure that follows shows how the data on a sales receipt might be defined using the symbols. Inverted-L charts and Warnier-Orr diagrams are other tools for visualizing a data structure.

Note that a data structure can contain both composite items and data elements. In the data dictionary, composite items are decomposed or partitioned down to the data element level, and each data element is fully defined.

c. Keys and relationships
In a database, an entity is a thing about which data are stored, and an occurrence is a single instance of an entity composed of data elements (or attributes).

Some typical data dictionary entries.

Symbols that can be used to document a data structure:
=    contains, or is composed of
+    and
[ ]  selection
|    separator
( )  optional
{ }  repetition

Fig. Documenting a data structure.
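The original figure is not reproduced here; as a purely hypothetical sketch of the idea, a sales receipt composite might be written with the symbols above, mirrored below in Python for concreteness (all field names and values are invented):

    # Hypothetical composite definition using the notation above:
    #   sales-receipt = date + store-number + {item-description + quantity + unit-price} + (coupon-code) + total
    # The same structure expressed as Python data:
    sales_receipt = {
        "date": "2024-01-15",
        "store_number": "017",
        "items": [                         # { } repetition: one or more line items
            {"item_description": "notebook", "quantity": 2, "unit_price": 45.00},
            {"item_description": "pen", "quantity": 1, "unit_price": 10.00},
        ],
        "coupon_code": None,               # ( ) optional element
        "total": 100.00,
    }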

Physically, entities map to files, occurrences map to records, and attributes map to fields. Occurrences (records) are composite data structures. In addition to the attributes that make up the composite, the key (the attribute or group of attributes that uniquely distinguishes one occurrence of the entity) is documented in the data dictionary. A database is composed of a set of related files (or entities). Typically, the files are linked (or related) by storing an entity's key in the related entity. These relationships are also documented in the data dictionary.

Transforms
A transform is a process or operation that modifies data. Many data dictionary systems allow the analyst to name, define, and record data about the transforms in the data dictionary.

Key terms
Alias - An alternate name for a data element.
Attribute - A property of an entity.
Composite - A set of related data elements.
Data dictionary - A collection of data about the data.
Data element - An attribute that cannot be logically decomposed.
Data structure - A set of related data elements.
Database - A set of related files.
Entity - An object (a person, group, place, thing, or activity) about which data are stored.
Field - A data element physically stored on some medium.
File - A set of related records.
Foreign key - A key to some other entity stored with the target entity.
Key - The attribute or group of attributes that uniquely distinguishes one occurrence of an entity.
Meta-data - The contents of the data dictionary.
Occurrence - A single instance of an entity.
Record - The set of fields associated with an occurrence of an entity.
Relationship - A link between two data structures.

Transform - A process or operation that modifies data.

Structured English
Structured English is a very limited, highly restricted subset of the English language used to plan, design, or document program routines, modules, and manual procedures.

Strengths, weaknesses, and limitations
Structured English is useful for planning or designing program routines, modules, and manual procedures. It resembles a programming language, so programmers find it easy to understand. The base for structured English is, of course, English, so users find it easy to follow, too. Structured English is excellent for describing an algorithm, particularly when user communication is essential. If the main concern is communication with the programmers, however, pseudo code may be a better choice. Structured English is not a good choice for describing a high-level control structure or an algorithm in which numerous decisions must be made; logic flowcharts, decision tables, and decision trees are better for such tasks.

Inputs and related ideas
Before writing structured English, the designer must understand the algorithm or procedure. The necessary information might be compiled from direct observation, extracted from existing documentation, or derived from the problem definition and/or analysis stages of the system development life cycle. Other tools for documenting or planning routines or processes include logic flowcharts, Nassi-Schneiderman charts, decision trees, decision tables, pseudo code, and input/process/output (IPO) charts. A pseudo code routine usually exists in the context of a larger program. Tools for documenting or planning program structure include structure charts and HIPO.

There are several variations of structured English, none of which can be considered a standard. Consequently, view this as a guideline. A good structured English statement reads like a short imperative sentence. By convention, only key words such as IF, THEN, SO, REPEAT, UNTIL, DO, and so on are capitalized; data names and the general English needed to complete a sentence or a phrase are lower case. Many sources recommend that a data name defined in a data dictionary be underlined, and that convention will be followed in the examples shown below.

Sequence
Sequence statements begin with commands such as MOVE, GET, WRITE, READ, or COMPUTE, followed by the name or names of the associated data elements or data structures. For example:
COMPUTE gross pay.
ADD 1 to counter.
MULTIPLY hours worked by pay rate to get gross pay.
GET inventory record.
MOVE customer name to invoice.
WRITE invoice.

Blocks of logic
It is often convenient to group several structured English statements into a block, assign a name to the block, and reference the block by coding a single sequence statement.

For example, all the instructions required to compute gross pay might be grouped in a block under the name compute gross pay. Subsequently, the statement DO compute gross pay references the entire block. Note that a block can contain any combination of code, including decisions, repetitive logic, and even other blocks. Indentation should always be used to show the relationship between the parts of a block.

Decision or selection
Decision (or selection) logic follows an IF-THEN-ELSE structure:
IF condition
THEN block-1
ELSE (not condition) SO block-2.
The key word IF is followed by a condition. If the condition is true, the block following THEN is executed. ELSE identifies the negative of the condition. SO precedes the block to be executed if the initial condition is false. For example:
IF stock-on-hand is less than reorder-point
THEN turn on reorder-flag
ELSE (stock-on-hand not less than reorder-point)
SO turn off reorder-flag.
Indenting makes the IF-THEN-ELSE logic easier to read. (Note: The negative condition following ELSE is often assumed and not explicitly coded.) Nested decisions are also supported:
IF condition-1
THEN IF condition-2
THEN block-a
ELSE (not condition-2) SO block-b
ELSE (not condition-1) SO block-c.
Note that any or all of block-a, block-b, or block-c could contain yet another decision.

Repetition or iteration
Repetitive (or iterative) logic defines a block of structured English that is executed repetitively until a terminal condition is reached. For example, such instructions as:
REPEAT UNTIL condition-1
block-1
or
FOR EACH TRANSACTION
block-a
imply both repetitive logic and the condition used to terminate that logic.

Key terms
Module - A portion of a larger program that performs a specific task.
Procedure - A set of guidelines, rules, and instructions for performing a task; often, a manual procedure.
Routine - A set of instructions that performs a specific, limited task.
Structured English - A very limited, highly restricted subset of the English language used to plan, design, or document program routines, modules, and manual procedures.

Software
Few software tools are designed to produce structured English. Word processors and text editors are sometimes used.

Structured English is an additional method used to overcome the problems of ambiguous language when stating the actions and conditions involved in making decisions and formulating procedures. The procedure is described in narrative format using structured English. It does not display the decisions and rules in tabular or tree form; it simply states the rules. Structured English specifications require the analyst to identify the conditions which occur in a process and the decisions which must be made when these conditions occur. It also forces the analyst to find alternative actions to be taken. In this method the steps are listed in the specific order in which they are to be taken. No special signs, symbols or other formats are used for displaying the steps involved, unlike decision trees or decision tables. Since only structured English statements are used, it becomes easy for the analyst to state the entire procedure without wasting much time. The terminology used in structured English consists mostly of the data names of the elements, which are stored in the data dictionary.

Developing Structure Statements
The process is defined by using three types of statements: sequence structures, decision structures and iteration structures.
Sequence structure: a single step or action included in the process. It does not depend on the existence of any other condition, but if it does encounter a condition, that condition is taken into consideration.
Decision structure: occurs when two or more actions can take place depending on the value of a condition. The condition is evaluated and the necessary decisions are taken.
Iteration structure: it is commonly found that certain steps are repeated, or are carried out while certain conditions hold. Iterative instructions help the analyst to describe these cases.
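As a rough, illustrative mapping only (an addition to the original material), the reorder-point decision written in structured English earlier could be expressed in Python; the function and variable names are invented:

    # IF stock-on-hand is less than reorder-point THEN turn on reorder-flag
    # ELSE SO turn off reorder-flag.
    def set_reorder_flag(stock_on_hand, reorder_point):
        if stock_on_hand < reorder_point:
            return True       # turn on reorder-flag
        return False          # turn off reorder-flag

    print(set_reorder_flag(12, 50))   # True: stock has fallen below the reorder point
    print(set_reorder_flag(80, 50))   # False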

I/O & file design considerations

SYSTEM DESIGN
After the completion of requirements analysis for a system, the system design activity takes place for the alternative selected by management. The system design phase usually consists of the following three activities:
- Reviewing the system information and functional requirements.
- Developing a model of the new system, including logical and physical specification of outputs, inputs, processing, storage, procedures and personnel.
- Reporting results to management.

Distinguish between logical and physical design: The logical design of an information system is like an engineering blueprint: it shows the major features of the system and how they are related to one another. The reports and outputs of the system are like the engineer's design components. Data and procedures are linked together to produce a working system. Physical construction, the activity following logical design, produces program software, files and a working system. Design specifications instruct programmers about what the system should do. The programmers in turn write the programs that accept input from users, process data, produce the reports, and store data in the files.

OBJECTIVES OF INPUT DESIGN
Input design consists of developing specifications and procedures for data preparation - those steps necessary to put transaction data into a usable form for processing. There are five main objectives, as follows:

Controlling the amount of input: For many reasons, the design should control the quantity of data for input. Reducing the data requirement can lower costs by reducing labour expense. By reducing input requirements, the analyst can speed the entire process from data capture to providing results to the user.

Avoiding delay: A processing delay resulting from data preparation or data entry operations is called a bottleneck. Avoiding bottlenecks should always be one objective of the analyst in designing input.

Avoiding errors in data: The third objective deals with errors. In one sense, the rate at which errors occur depends on the quantity of data, since the smaller the amount of data to input, the fewer the opportunities for error.

Avoiding extra steps: Sometimes the volume of transactions and the amount of data preparation cannot be controlled. When the volume of transactions cannot be reduced, the analyst must be sure the process is as efficient as possible. For example, the effect of saving or adding a single step when feeding checks into the banking process is multiplied many times over in the course of a working day.

Keeping the process simple:

The best advice to the analyst is to achieve all of the objectives mentioned above in the simplest manner possible. Simplicity works, and users accept it. Complexity should be avoided when there are simple alternatives. Data capture guidelines: The analyst should capture only those items that must actually be input. There are two types of data that must be input when processing transactions, as follows: Variable data: those data items that change for each transaction handled or decision made. Identification data: the element of data that uniquely identifies the item being processed. The identifying data in each transaction record is called the record key. The analyst also determines what data the user should not enter at the time of input. Some of these are as follows: Constant data: data that is the same for every record or entry. Details that the system can retrieve: stored data that can be quickly retrieved from system files. Details that the system can calculate: results that can be produced by the system using a combination of stored data and entered data. Design of source document: The source document is the form on which the data are initially captured. To design the source document, the analyst must first decide what data must be captured. The analyst can then develop the layout of the document, showing what items should be included and where they should be placed. The document includes not only places for data but also information telling the user how to complete the form and what information to provide. Layout: The layout organizes the document by placing important information where it will be noticed and establishing an appropriate sequence of items. Most people fill in documents from left to right and from top to bottom; the source document should be designed in the same way. It should be possible for the user to provide information by following a logical sequence rather than by having to skip to different locations on the document. A well-designed form will ask for each item of data only once; there should be very few occasions where the user has to supply the same information more than once. Users will not complete forms that do not allow enough space to provide the information requested correctly, so the analyst must also consider how the form will be completed when judging the amount of space between lines. The actual document layout shows the position of each item of data and all headings and instructions to users. Captions: Captions on the source document tell the user what data to provide and where they should be entered. Captions should be brief but easily understood, using standard terms that all persons understand; abbreviations generally should be avoided. Including a simple example will help the user understand how to enter the data; in other words, supplying an example is a small price to pay for correctly supplied details. A well-designed document is easily completed and allows the process of actually recording the data to be rapid. If checkmarks and boxes are sufficient for capturing the data, respondents should not be asked to write longer responses.

VARIOUS CODING TECHNIQUES Since information systems projects are designed with space, time, and cost savings in mind, coding methods, in which words, ideas, or relationships are expressed by a code, are developed to reduce input volume, control errors and speed the entire process. A code is a brief number, title or symbol used instead of a more lengthy or ambiguous description. When an event occurs, the details of the event are often summarized by a code. The following are some coding techniques: Classification code: A classification code places separate entities, such as events, people or objects, into distinct groups called classes. A code is used to distinguish one class from another. The user records the code on the source document; in an online system it can be keyed directly into the system through a terminal. The user classifies the event into one of several possible categories and records the code. For example, in an admission system the category might be coded as SC - 1, ST - 2, SE - 3 and GE - 4. Classification codes vastly simplify the input process because only a single-digit code is required; the need to write lengthy descriptions or make judgments is eliminated. Function code: A function code states the activity or work to be performed without spelling out all the details in narrative statements. Analysts use this type of code frequently in transaction data to tell the system how to process the data. For example, the design for file processing may specify the addition of records in a transaction by means of A, modification of records by M, deletion of records by D, and sorting of all transactions by S; the particular function code may determine the contents of the input record, whether the data in the code are keyed or scanned. Sequence code: Sequence codes are numbers or letters assigned in series. They tell the order in which events have occurred. For example, a banking system must keep track of transactions so that it is clear which transaction to process first, which second and so on; therefore a sequence number is assigned to every transaction. Carelessly assigned sequence numbers do not allow the insertion of new members between existing ones in the sequence, so analysts often specify the assignment of sequence numbers in intervals of 10 or 20 or some other range to allow later expansion. Significant digit subset codes: A well-conceived coding scheme, using significant digit subset codes, can provide a wealth of information to users and management. The code can be divided into subsets or sub-codes: characters that are part of the identification number and that have special meaning. The sub-code gives the user additional information. Consider the following example: Year | Stream | Student Number. In this layout the first four characters are for the year, the next two characters for the stream (i.e. science, commerce, etc.) and the last four characters are for the student number. Using digits in an identification number to convey additional information does not add to the length of the data or to processing time.
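As a minimal sketch of how such a subset code can be unpacked by a program (the field layout and the sample value are hypothetical, chosen only to match the year / stream / student-number example above):

```python
# Hypothetical significant digit subset code: 4-char year + 2-char stream + 4-char student number.
def parse_student_code(code: str) -> dict:
    if len(code) != 10:
        raise ValueError("expected a 10-character code")
    return {
        "year": code[0:4],             # e.g. "2023"
        "stream": code[4:6],           # e.g. "SC" for science
        "student_number": code[6:10],  # e.g. "0042"
    }

print(parse_student_code("2023SC0042"))
# {'year': '2023', 'stream': 'SC', 'student_number': '0042'}
```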

Mnemonic codes: A mnemonic code uses letters and symbols from the product to describe it in a way that communicates visually. For example, to describe a 21-inch black-and-white television set, a suitable code is TV-BW-21; it is difficult to confuse the mnemonic TV with that of other products. Universities frequently use mnemonics to code information, such as MCA for Master of Computer Applications. Data and transaction coding reduce the volume of data for input and simplify the process, thus lessening the likelihood of errors. Code selection depends on the nature of the data and the objectives of the analyst. Source data capture with key-to-storage: It includes the following steps: 1. Write the data on a source document. 2. If necessary, code the data from the source document into a form acceptable for computer processing. 3. Process the tape or disk of data directly; no extra steps are needed to enter the data into the computer. 4. Validate the data as they are read into the computer for processing. 5. Process the data. Source data capture with scanner: Optical character processing of input data enables organizations to speed their input activities. The source document is used directly as an input document for scanning by the optical character reader. When transactions occur, the data are written in scannable form. Validation of the data is accomplished as they are entered into the computer. Direct entry through intelligent terminals: Intelligent terminals are similar to cathode ray tube terminals but with built-in processing capabilities. The use of intelligent terminals for data capture can even eliminate the need for source documents, unless a paper record of the transaction is required. As a transaction takes place, the operator keys the data directly through the intelligent terminal. The processor validates the data as they are entered. Unlike the other methods, intelligent terminals can interact directly with the computer. INPUT VALIDATION Input designs are aimed at reducing the chance of mistakes or errors during data entry. However, the analyst must assume that errors will occur during data entry. The general term given to methods aimed at detecting errors in input is input validation. There are three main methods of validating transactions, as follows: Checking the transaction: It is essential to identify any transactions that are not valid, that is, not acceptable. Transactions can be invalid because they are incomplete, unauthorized or even out of order. Batch control: In batch control, all similar transactions are accumulated; here the size of the batch is fixed. Transactions are accumulated in batches of, say, 50, 100, etc. During the course of the business period more than one batch will be accumulated, so management wants to know that each batch is processed and that none is lost or overlooked. In batch control the batch size indicates whether all transactions are in the batch, the batch count indicates whether any batch has been lost or overlooked, and the batch total indicates whether all transactions in the batch have been processed properly. A brief sketch of these batch checks follows.
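A minimal sketch of batch-control checking (the field names, the batch size and the control total are hypothetical; a real system would take these from a batch header record, and the batch size would typically be 50 or 100 rather than the small value used here):

```python
# Hypothetical batch control: verify batch count and batch total before processing.
EXPECTED_BATCH_SIZE = 2              # small value for illustration only
expected_total = 195.50              # control total computed when the batch was prepared

transactions = [{"account": "A17", "amount": 120.00},
                {"account": "B03", "amount": 75.50}]

batch_count = len(transactions)                        # are all transactions present?
batch_total = sum(t["amount"] for t in transactions)   # were they all captured correctly?

if batch_count != EXPECTED_BATCH_SIZE:
    print(f"Batch rejected: {batch_count} records found, {EXPECTED_BATCH_SIZE} expected")
elif round(batch_total, 2) != expected_total:
    print("Batch rejected: control total does not match")
else:
    print("Batch accepted for processing")
```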

Sequence test: Sequence tests use codes in the data - serial numbers - to test for either of two different conditions, depending on the characteristics of the application. In some systems, the order of the transactions is important. When processing bank deposits and withdrawals, it is important to ensure that each is processed in the order in which it arrived. Sequence tests also point out missing items; without the numbers it is very difficult to tell whether all the transactions are present. Checking transaction data: Even valid transactions can contain invalid data. Therefore the analyst should be sure to specify methods for validating the data when developing the input procedures. There are four validation methods: Existence test: Some data fields in transactions are designed never to be left empty or blank. Existence tests examine those essential fields to determine that they contain data. For example, in any system the name field must never be empty when it is entered. It is the task of the analyst to learn when data items must be present and when their absence is acceptable. This information belongs in the design specifications and should be passed along to the programmers. Limit and range test: These tests verify the reasonableness of the transaction data. Limit tests validate either the minimum or the maximum amount acceptable for an item. Range tests validate both the minimum and maximum values of an item. Combination test: This validates that several data items jointly have acceptable values, that is, the value of one element of data determines whether other data values are correct. Duplicate processing: In especially sensitive areas, it may be necessary to process data more than once, either on different equipment or in different ways. The results are then compared for agreement and accuracy. Duplicate processing ensures the utmost accuracy. Changing the transaction data: A third way to validate the data involves modifying the incorrect data themselves. Two methods are described as follows: Automatic correction: Sometimes the analyst specifies that programs be written to correct errors in the data. This input validation method is used to minimize the number of separate error correction steps or rejections of transactions during processing; it simply requires detecting an error and correcting it automatically. Check digit: A very common error in handling data occurs with data that are captured correctly but entered incorrectly into processing. Transcription errors occur when the data entry person inadvertently copies data incorrectly; the number 24589 is transcribed incorrectly if it is entered as 24587. Since the chance of these errors occurring is high, a special method has been devised to help detect them during computer processing. This method is called the check digit. It adds an additional digit to the original number being used for identification purposes. The check digit is added before the number is put into use. There are many different methods of adding a check digit to a number; modulus 11 is a widely used one, sketched below.
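A minimal sketch of one common modulus-11 variant (weights 2, 3, 4, ... applied from the rightmost digit of the base number; the handling of a remainder that yields 10 varies between schemes, so treat the details as an assumption rather than a universal rule):

```python
# Hypothetical modulus-11 check digit (one common variant).
def mod11_check_digit(base_number: str) -> str:
    # Weights 2, 3, 4, ... are applied starting from the rightmost digit.
    total = sum(int(d) * w for w, d in enumerate(reversed(base_number), start=2))
    check = 11 - (total % 11)
    if check == 11:
        return "0"
    if check == 10:
        return "X"        # some schemes reject such base numbers instead
    return str(check)

def is_valid(full_number: str) -> bool:
    return mod11_check_digit(full_number[:-1]) == full_number[-1]

code = "24589" + mod11_check_digit("24589")
print(code, is_valid(code))                    # the number as captured passes the check
print(is_valid("24587" + code[-1]))            # the transcription error 24589 -> 24587 is detected
```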

The system analyst should always assume that invalid data will be submitted for processing and develop methods for detecting them so that corrections can be made. SYSTEM OUTPUT: The term output applies to any information produced by an information system, whether printed or displayed. System output may be a report, a document or a message. When analysts design computer output they identify the specific output that is needed to meet the information requirements, select methods for presenting the information, and create documents, reports or other formats that contain information produced by the system. Important factors in output design: There are six important factors which should be considered by the system analyst while designing user outputs; these are content, form, volume, timeliness, media and format. Content: Content refers to the actual pieces of data included among the outputs provided to users. For example, the contents of a weekly report to a sales manager might consist of the salesperson's name, the sales calls made by each salesperson during the week and the amount of each product sold by each salesperson to each major client category. Form: Form refers to the way content is presented to users. Content can be presented in various forms: quantitative, non-quantitative, text, graphics, video and audio. For example, information on distribution channels may be more understandable to the concerned manager if it is presented in the form of a map with dots representing individual outlets or stores. Output volume: The amount of data output required at any one time is known as the output volume. It is better to use a high-speed printer or a rapid-retrieval display unit, which are fast and frequently used output devices, in case the volume is heavy. Heavy output volume normally causes concern about paper cost; in such a case, alternative methods of output display such as COM (computer output microfilm) may be considered. Timeliness: Timeliness refers to when users need outputs. Some outputs are required on a regular, periodic basis, perhaps daily, weekly, monthly, at the end of a quarter or annually; other types of outputs are generated on request. A sales manager, for example, may require a weekly sales report. Other users, such as airline agents, require both real-time information and rapid response times in order to render better client service; hence, the system designer might require that display information be provided to the user very quickly. Media: The input/output medium refers to the physical device used for input, storage or output. A variety of output media are available in the market these days, including paper, video display, microfilm, magnetic tape and voice output. Many of these media are available in different forms. The system designer can select the medium which is best suited to the user requirements. Format:

The manner in which data are physically arranged is referred to as format. This arrangement is called the output format when referring to data output on a printed report or on a display screen. Traditionally, when formatting a printed report for managers or users, a design tool called a printer spacing chart is used. On the chart, the title, headings, columns of data and other report elements are set up in the manner described by users. TYPES OF OUTPUT Whether the output is a formatted report or a simple listing of the contents of a file, a computer process will produce the output. System output may be: a report, a document or a message. Depending upon the circumstances and the contents, the output may be displayed or printed. Output contents originate from the following sources: retrieval from a data store; transmission from a process or system activity; directly from an input source. What are the different formats in which information can be presented? One of the most important features of an information system for users is the output it produces. Without quality output, the entire system may appear to be so unnecessary that users will avoid using it, possibly causing it to fail. The term output applies to any information produced by an information system, whether printed or displayed. When analysts design computer output they: identify the specific output that is needed to meet the information requirements; select methods for presenting the information; and create the documents, reports or other formats that contain information produced by the system. Objectives of output: The output from an information system should accomplish one or more of the following objectives: convey information about past activities, current status, or projections of the future; signal important events, opportunities, problems or warnings; trigger an action; confirm an action. Key output questions: System analysts should answer these questions for every output requirement: What is its planned use? How much detail is needed? When and how often is the output needed? By what method?

TYPES OF OUTPUT PRESENTATION How to present information? How the information is presented will determine whether the output is clear and readable, the details convincing, and the decision making fast and accurate. Information can be presented in two main manners: tabular format and graphic format. Tabular format: Users are normally accustomed to receiving information in a table format. Generally, the tabular format can be used under the following circumstances: details dominate and few narrative comments or explanations are needed; details are presented in discrete categories; each category must be labelled; totals must be drawn or comparisons made between components; certain information is more important and should be more visible than other information. This varies for each application, but in general we can be sure that the following items should be there:

Major categories or groups of activities or entities. Summaries of major categories or activities. Unique identification information. Time-dependent entities. The system analyst must design the tabular output to achieve these objectives. The first concern in designing output should be to ensure that unnecessary details are avoided. The information should be presented in such a way that the details are listed in some meaningful order. To differentiate headings from details, a dotted line can be used. When necessary, add subtotals, put a page total on each page, and put additional space between lines. Graphic Format: Graphics systems are available across a wide range of prices and capabilities, from personal computers up to mainframes. Management presentations have been enhanced by graphics. The analyst has to determine when to use graphics and when to avoid them. The success of business graphics depends on the task for which they will be used and the nature of the information shown. Business graphics use various types of charts, i.e. pie charts, area charts, curve charts, step and bar charts, and map charts. The chart should be annotated to indicate the scale used, the meaning of each line or shape, and what the chart represents. Business graphics may not save decision-making time or reduce the volume of information produced by the system. However, if properly designed, graphics are excellent supplements to tables and narrative reports. When to use graphics: Graphics are used for several reasons: 1. To improve the effectiveness of output reporting for the targeted recipients: Graphs are superior to tabular and narrative forms of information for detecting trends in business performance. Comparisons are also easier through graphics than through tabular data. Graphics also make it easier to remember large amounts of data throughout a series of reports. 2. To manage information volume:

Compressing a large amount of data into graphic form does not reduce the amount of information. The real benefit of compression is that breaking information into smaller chunks allows it to be more easily understood and remembered. Fragmenting information isolates individual elements and eases their comparison. 3. To suit personal preference: Often people like to see information in graphic formats rather than in rows and columns. Well-prepared computer graphics have great eye appeal, but they do not automatically improve the effectiveness of the presentation. Standards in design of graphics: Each graphics report, whether printed or displayed, should include a title and the date of preparation. For a series, page numbers are also important. Because text takes longer to read than graphics, the placement and legibility of text are important. Labels for vertical data should be positioned horizontally, taking care to avoid excessive detail in the grid scale. Consistent spacing in all labels, a common family of fonts, and clear labels and data values increase readability. All vertical and horizontal axes should be proportional, as should their labels. Bold and italic type or underlining can be used to emphasize key words or phrases. The use of all capital letters in long words, titles or footnotes impedes readability and thus should be avoided. Abbreviations should not be used. Using a common type of font assures consistency of design across graphics pages and displays. Icons: Icons are pictorial representations of entities described by the data. Icons are now commonly used in computer interfaces to represent documents, in-baskets, file drawers, and printers. Properly selected icons communicate information immediately. Icons eliminate the necessity for users to learn abbreviations and notation. In addition, the images themselves do not add to the complexity of other tasks; in contrast, taking time to read labels or footnotes usually contributes to complexity. Of course, most of us will remember a picture more easily than words and phrases. Icons should be used following these suggestions: Select icons that will be immediately recognized and understood by the anticipated users. If there is no familiar icon for the situation, use narrative labels; avoid requiring users to learn and remember unfamiliar symbols or images. Use the same icons consistently; the image itself should communicate its meaning clearly without requiring a label. Use a layout that maintains space and avoids overcrowding between icons. Maintain a common size among different types of symbols. Colour presentation: With graphics, improper use of colour can hinder rather than help user and management productivity. Colour should enhance, not replace, good output design. In general, using four or fewer colours on a screen or report is recommended. The analyst must take care to maintain consistent colour usage throughout the output and reports for a system. Neither colour nor graphics can compensate for a poor design; however, their effective use can improve the result. Graphics, icons and the

use of colour may appear in computer output of all types, whether printed or displayed. Design of printed output: System analysts specify printed output when they need a printed record of data or a report of information. As an analyst, you should seek to use only those printed outputs that are absolutely needed. Special forms: Computer-based information systems can also produce special forms. A pre-printed form is a customized form that is designed to include special symbols and trademarks of the organization and that has colour printing, depending on the requirements established by the analyst or user. The special custom-printed work is placed on blank paper stock by a forms maker using a regular printing press. When the forms are delivered, all details that will not be added by the information system's printer are already on them. The cost of pre-printed forms is very high, so the analyst has to determine when and where to use these forms. Multiple copies of output: Organizations often need more than one copy of a report or document prepared by computer, and there are several ways to produce them. Carbonless copies: In this method a special chemical coating is applied to the back side of each copy except the last. Writing or printing on the original carries through to the copies; the coating causes the image to appear on the copies underneath. Interleaved carbon copies: Carbon paper suitable for one-time use is interleaved between each sheet of paper. Carbonless forms are more expensive than those with interleaved carbon, but have the advantage of not requiring extra time or equipment to remove the carbon paper from continuous forms. Guidelines for preparing printed output: An output layout is the arrangement of items on the output medium. There are several guidelines, as follows: Reports and documents should be designed to be read from left to right and top to bottom. The most important item should be the easiest to find. All pages should have a title and page number and show the date the output was printed. All columns should be labelled. Abbreviations should be avoided. CHARACTERISTICS OF A GOOD USER INTERFACE DESIGN Speed of learning: A good user interface should be simple to learn. Most users are put off by the complex syntax and semantics of command issue procedures. Also, a good user interface should not require its users to memorize commands. Another important characteristic of a user interface that affects the speed of learning is consistency.

Once the user learns a command, he should be able to use the same command in different circumstances for carrying out similar actions. Users can learn an interface faster if the interface is based either on some day-to-day real-life examples (also called metaphors) or on some concepts with which the users are already familiar. For example, if the user interface of a text editor uses concepts similar to the tools used by a writer for text editing, such as cutting lines and paragraphs and pasting them at other places, the users can immediately relate their experience to this concept. Also, learning is facilitated by intuitive command names and symbolic command issue procedures. Speed of use: The speed of use of a user interface is determined by the amount of time and effort required on the part of the user to initiate and execute different commands. The time and effort required to initiate and execute different commands should be minimal. For example, an interface that requires users to type in lengthy commands, or that involves mouse movements to areas that are wide apart for issuing commands, can slow down the operating speed of users. Speed of recall: Once users learn how to use an interface, their speed of recall about how to use the software should be maximized. The speed of recall is improved if the interface is based on metaphors, symbolic command issue procedures, and intuitive command names. Error rate: A good user interface should minimize the scope for committing errors while initiating different commands. This characteristic of an interface can be determined by monitoring the errors committed by different users while using the interface. This monitoring can be automated by instrumenting the user interface code with monitoring code to record the frequency and types of user errors and to display the statistics of the various kinds of errors committed by different users. Consistency of names, issue procedures, and the behaviour of similar commands, and the simplicity of the command issue procedures, minimize error possibilities. Attractiveness: A good user interface should be attractive to use. An attractive user interface catches the user's attention and fancy. In this respect, graphics-based user interfaces have a definite advantage over text-based interfaces. Consistency: The commands supported by a user interface should be consistent. The basic purpose of consistency is to allow users to generalize knowledge about one aspect of the interface to another. Thus, consistency facilitates speed of learning and speed of recall, and also helps reduce the error rate. Feedback: A good user interface must provide feedback to various user actions. For example, if any user request takes more than a few seconds to process, the user must be informed that his/her request is being processed. If possible, the user should be periodically informed about the progress made in processing his command. In the absence of any response from the computer for a long time, a novice user might even start recovery/shutdown procedures in panic. Support for multiple skill levels:

A good user interface must support multiple levels of sophistication of the command issue procedure for different categories of users. This is necessary because users with different experience levels prefer different types of user interfaces. Experienced users are more concerned about the efficiency of the command issue procedures, whereas relatively novice users give prime importance to usability aspects. Very cryptic and complex commands will discourage a novice, whereas elaborate command sequences will make the command issue procedure very slow and therefore put off experienced users. As users become more familiar with an interface, they should have options for selecting faster command issue procedures such as hot keys, macros, etc. Error recovery (undo facility): All categories of users are prone to committing errors. Therefore, a good user interface should allow a user to undo a mistake committed while using the interface. Users are inconvenienced if they cannot recover from simple errors made while using the software. User guidance and on-line help: Whenever users need guidance or seek help from the system, they should be provided with appropriate guidance and help. Providing user guidance and on-line help is a very important aspect of good user interface design. USER GUIDANCE Users can be given guidance about interacting with the user interaction mechanism through the following two broad methods: an on-line help system, and guidance and error messages produced by the system in response to user actions. ON-LINE HELP SYSTEM Users do not like generic help messages. Instead, they expect the on-line help messages to be tailored to the context in which they invoke the help system. Therefore, a good on-line help system should keep track of what the user is doing when invoking the help system and provide the output message in a context-dependent way. Also, the help messages should be tailored to the user's experience level. Further, a good on-line help system should take advantage of any graphics capabilities of the screen and should not just be a copy of the user's manual. GUIDANCE MESSAGES The guidance messages should be carefully designed to prompt the user about the next actions he might pursue, the current status of the system, the progress made so far in processing his last command, etc. A good guidance system should have different levels of sophistication for different categories of users. Also, the users should have an option to turn off the detailed messages. ERROR MESSAGES Error messages should be polite. Error messages should not have associated noise which might embarrass the user. The message should suggest how a given error can be rectified. If appropriate, the user should be given the option of invoking the on-line help system to find out more about the error situation.

MODE-BASED Vs. MODELESS INTERFACE A mode is a state or collection of states in which just a subset of all user interaction tasks can be performed. In a modeless interface, the same set of commands can be invoked at any time during the running of the software. However, in a mode-based interface different sets of commands can be invoked depending on the mode in which the system is, i.e. based on the past sequence of user commands. A mode-based interface can be represented using a state transition diagram, where each node of the state transition diagram represents a mode. The different states of the state transition diagram can be annotated with the commands that are meaningful in that state. TYPES OF USER INTERFACES Broadly speaking, user interfaces can be classified into the following three categories: 1) command language-based interface; 2) menu-based interface; 3) direct manipulation interface. (1) COMMAND LANGUAGE-BASED INTERFACE A command language-based interface, as the name itself suggests, is based on designing a command language to define the set of commands. The user is expected to frame appropriate commands in the language and type them in whenever required. A simple command language-based interface might just assign unique names to the different commands. However, a more sophisticated command language-based interface allows users to compose complex commands by using a set of primitive commands. Such a facility to compose commands dramatically reduces the number of command names one would have to remember. Thus a command language-based interface can be made concise, requiring minimal typing by the user. Therefore, command language-based interfaces allow faster interaction with the computer and simplify the input of complex commands. A command language-based interface can be implemented even on cheap alphanumeric terminals. Also, a command language-based interface is easier to develop than a menu-based or a direct manipulation interface because compiler-writing techniques are well developed. However, command language-based interfaces suffer from several drawbacks. Usually, command language-based interfaces are difficult to learn and require the user to memorize the set of primitive commands. Most users make errors while formulating commands in the command language and also while typing them in. Also, in a command language-based interface all interactions with the system are through the keyboard; therefore, a command language-based interface cannot take advantage of effective interaction devices such as a mouse. Obviously, for casual and inexperienced users, command language-based interfaces are not suitable. DESIGNING A COMMAND LANGUAGE-BASED INTERFACE While designing a command language-based interface, several design decisions have to be made. The designer has to select the mnemonics to be used for the different commands. The designer should try to develop meaningful mnemonics that are nevertheless concise, to minimize the amount of typing required; a small sketch of such a mnemonic command set appears below.
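As a minimal sketch of a command language-based interface with mnemonic command names (the command set and handler functions are hypothetical, chosen only for illustration and echoing the A/M/D function codes discussed earlier):

```python
# Hypothetical mnemonic command set for a small file-processing dialogue.
def add_record(args):    print("adding record:", *args)
def modify_record(args): print("modifying record:", *args)
def delete_record(args): print("deleting record:", *args)

COMMANDS = {"A": add_record, "M": modify_record, "D": delete_record}

def dispatch(line: str) -> None:
    mnemonic, *args = line.split()
    handler = COMMANDS.get(mnemonic.upper())
    if handler is None:
        print(f"Unknown command '{mnemonic}' - valid commands are {sorted(COMMANDS)}")
    else:
        handler(args)

dispatch("A CUST-1042 Smith")   # adding record: CUST-1042 Smith
dispatch("X CUST-1042")         # an unknown mnemonic is reported, not silently ignored
```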

The designer also has to decide whether the users will be allowed to redefine the command names to suit their own preferences. Letting a user define his own mnemonics for various commands is a useful feature, but it increases the complexity of development of the user interface. The designer has to decide whether it should be possible to compose primitive commands to create new complex commands. A sophisticated command composition facility would require the syntax and semantics of the various command composition options to be clearly and unambiguously specified. The ability to combine commands is a powerful facility in the hands of experienced users, but quite unnecessary for inexperienced users. (2) MENU-BASED INTERFACE An important advantage of a menu-based interface over a command language-based interface lies in the fact that in a menu-based interface the users are not required to remember precise command names. Also, in a menu-based interface the typing effort is minimal, as most interactions are carried out through menu selections using a pointing device. This factor becomes very important for occasional users who cannot type fast. However, for an experienced user a menu-based user interface turns out to be slower than a command language-based interface, because an experienced user can type fast and can gain a speed advantage by composing different primitive commands to express complex commands. Composing commands in a menu-based interface is not possible. This is because actions involving logical connectives (and, or, etc.) are awkward to specify in a menu-based system. Also, if the number of choices is large, it is difficult to design a menu-based interface. Moderately sized software might need 100 or 1000 different menu choices. In fact, a major problem concerning menu-based interfaces is the structuring of a large number of menu choices into manageable forms. The following are some of the techniques available to structure a large number of menu items. Scrolling menu: When the full choice list cannot be displayed within the menu area, scrolling of the menu items can be permitted to enable the users to view and select the menu items that cannot be accommodated on the screen. Walking menu: A walking menu is a very commonly used way of structuring large sets of menu items. In this technique, when a menu item is selected, it causes further menu items to be displayed adjacent to it in a submenu. Again, a walking menu can successfully structure commands only if there are tens rather than hundreds of choices, since each adjacently displayed menu takes up screen space and the total screen area is, after all, limited. Hierarchical menu: In this technique, the menu items are organized in a hierarchy or tree structure. Selecting a menu item causes the current menu display to be replaced by an appropriate submenu. Thus in this case, we can consider the menu and its various submenus to form a hierarchical tree-like structure. A walking menu is also a form of hierarchical menu, which is possible when the tree is shallow. A hierarchical menu can be used to manage a large number of choices, but presents the user with navigational problems in the sense that users tend to lose track of where they are in the menu tree. A small sketch of such a hierarchy appears below.
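A minimal sketch of how a hierarchical menu can be represented as a tree of nested choices (the menu labels and action names are hypothetical):

```python
# Hypothetical hierarchical menu: each node is either a submenu (dict) or an action name (str).
MENU = {
    "File": {"New": "create_file", "Open": "open_file", "Exit": "quit"},
    "Reports": {"Sales": {"Weekly": "weekly_sales", "Monthly": "monthly_sales"},
                "Inventory": "inventory_report"},
}

def show(menu: dict, path: str = "") -> None:
    """Print every leaf action together with the menu path that reaches it."""
    for label, entry in menu.items():
        full_path = f"{path}/{label}"
        if isinstance(entry, dict):
            show(entry, full_path)      # descend into the submenu
        else:
            print(f"{full_path} -> {entry}")

show(MENU)
# /File/New -> create_file
# /Reports/Sales/Weekly -> weekly_sales   (and so on)
```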

(3) DIRECT MANIPULATION INTERFACES Direct manipulation interfaces present information to the user as visual models or objects. Actions are performed on the visual representations of the objects, e.g. pulling an icon representing a file onto an icon representing a trash box to delete the file. Direct manipulation interfaces are sometimes called iconic interfaces. Important advantages of iconic interfaces include the fact that the icons can be organized by the users very easily, and that icons are language-independent. However, direct manipulation interfaces are considered slow for experienced users. Also, it is difficult to give complex commands using direct manipulation interfaces. For example, if we have to drag an icon representing a file to a trash box to delete the file, then in order to delete all the files in a directory we have to perform this operation individually for each file, which can otherwise be done very easily in a command language-based interface by using a command like delete *.*. FILE DESIGN BASIC TERMINOLOGY Data item: Individual items of data are called data items, or fields, or simply items. Data items can comprise sub-items or sub-fields. Record: The complete set of related data pertaining to an entity. When the number and sizes of the data items in a record are constant, such a record is called a fixed-length record. If the number of data items varies from one record to another, such a record is called a variable-length record. Record key: To distinguish one specific record from another, systems analysts select one data item in the record that is likely to be unique in all records of a file and use it for identification purposes. This item is called the record key, or key attribute, or simply the key. Basically it is a part of the record. Entity: An entity is any person, place, thing or event of interest to the organization and about which data are captured, stored or processed. File: A file is a collection of related records. Each record in a file is included because it pertains to the same entity. Database: A database is an integrated collection of data stored in different types of records, and in a way that makes them accessible for multiple applications. TYPES OF FILES Master file: A master file is a collection of records about an important aspect of an organization's activities. It may contain data that describe the current status of specific events or business indicators. For example, the master file in an accounts payable system shows the balance owed to every vendor or supplier. A second type of master file reflects the history of events affecting a particular entity. Suppose an employee leaves the organization; at that time all the details of the employee must

be stored. In the computer, such data or information are stored as a history file. Master files are useful only so long as they are kept accurate and up to date. Transaction file: A transaction file is a temporary file with two purposes: accumulating data about events as they occur, and updating master files to reflect the results of current transactions. The term transaction refers to any business event that affects the organization and about which data are captured. Each transaction file contains only the records that pertain to the particular entities that are the subject of the file. The details are accumulated one record at a time in the transaction file. The transaction and master files are read at the same time by a program, and the master file is revised depending upon the transactions. Master files are permanent and exist as long as the system exists; however, the contents of the files change as a result of processing and updating. Transaction files, on the other hand, are temporary. At some time they are no longer needed and are erased or destroyed, depending upon the retention method used. Table file: A special type of master file is included in many systems to meet special processing requirements involving data that must be referenced repeatedly. Table files contain reference data used in processing transactions, updating master files, or producing output. Analysts often specify the use of table files to store data that would otherwise be included in master files or embedded in computer programs. Table files conserve storage space and ease program maintenance by storing in a file data that would otherwise be included in programs or master files. Report file: The central processing unit of a computer very often produces data for output at a faster rate than the printer can print them. Following the normal sequence of events, processing would have to be delayed while the results are printed. To prevent such inefficient use of the CPU, the analyst generates the output into a file called a report file. Report files are temporary files used when printing time is not available for all the reports produced, a situation that frequently arises in overlapped processing. The computer writes the output to a file on secondary storage. This file is used whenever it is required and is deleted when the use of the file is over. Other files: Other kinds of files, as well as special uses of the file types previously discussed, play a role in information systems. For example, a backup file is a copy of a master file, transaction file or table file made to ensure that a duplicate is available if anything happens to the original file. A small sketch of how a transaction file updates a master file appears below.
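A minimal sketch of the classic update, in which the records accumulated in a transaction file are applied to a master file keyed on an account number (the field names, the A/M/D function codes described earlier, and the sample data are hypothetical):

```python
# Hypothetical master file keyed on account number, and a batch of transactions.
master = {"C101": {"name": "Patel", "balance": 500.0},
          "C102": {"name": "Shah",  "balance": 120.0}}

transactions = [                      # A = add, M = modify, D = delete
    {"code": "A", "key": "C103", "name": "Mehta", "balance": 0.0},
    {"code": "M", "key": "C101", "balance": 650.0},
    {"code": "D", "key": "C102"},
]

for t in transactions:                # the master file is revised according to each transaction
    if t["code"] == "A":
        master[t["key"]] = {"name": t["name"], "balance": t["balance"]}
    elif t["code"] == "M":
        master[t["key"]]["balance"] = t["balance"]
    elif t["code"] == "D":
        master.pop(t["key"], None)

print(master)
# {'C101': {'name': 'Patel', 'balance': 650.0}, 'C103': {'name': 'Mehta', 'balance': 0.0}}
```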

METHODS OF FILE ORGANIZATION Records are stored in files using a file organization that determines how the records will be stored, located and retrieved. There are basically three methods of organizing records, as follows:

Sequential organization: This is the simplest way to store and retrieve records in a file. In a sequential file, records are stored one after the other without concern for the actual values of the data in the records. The first record is stored at the beginning of the file, the second record is stored right after the first record, and so on. This order never changes in a sequential file organization. A characteristic of a sequential file is that all records are stored by position: first record, second record and so on; there are no addresses or location assignments in the file. Direct access organization: This method requires the program to tell the system where a record is stored before it can access the record. In contrast to sequential organization, processing a direct access file does not require the system to start at the first record in the file. Using the record key as the storage address is called direct addressing. Direct addressing requires a data set with the following characteristics: the key set is in a dense ascending order with few unused values, since few open gaps in key values are wanted; the record keys correspond to the numbers of the storage addresses; there is a storage address for each actual or possible key value in the file; and there are no duplicate key values. Indexed organization: A third way of accessing records is through an index. The basic form of an index includes a record key and the storage address for a record. To find a record when the storage address is unknown, it would otherwise be necessary to scan the records; the search will be faster if an index is used, since it takes less time to search an index than an entire file. An index is a separate file from the master file to which it pertains. Each record in the index contains only two items of data: a record key and a storage address. To find a specific record when the file is stored under an indexed organization, the index is first searched to find the key of the record wanted; when it is found, the corresponding storage address is noted and then the program accesses the record directly. This method uses a sequential scan of the index, followed by direct access to the appropriate record. The index helps speed the search compared with a sequential file, but it is slower than direct addressing.
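A minimal sketch of indexed organization: the index maps each record key to a storage address (here, a byte offset in a data file), so a lookup scans only the small index and then reads one record directly. The file name and the fixed-length record layout are hypothetical assumptions.

```python
# Hypothetical fixed-length records: 4-char key + 16-char data = 20 bytes per record.
RECORD_LEN = 20

def build_index(path: str) -> dict:
    """Scan the data file once and map each record key to its byte offset."""
    index = {}
    with open(path, "rb") as f:
        offset = 0
        while (record := f.read(RECORD_LEN)):
            key = record[:4].decode().strip()
            index[key] = offset
            offset += RECORD_LEN
    return index

def read_record(path: str, index: dict, key: str) -> str:
    with open(path, "rb") as f:
        f.seek(index[key])            # direct access using the stored address
        return f.read(RECORD_LEN).decode().rstrip()

# Usage (assuming "customers.dat" holds fixed-length records as described):
# idx = build_index("customers.dat")
# print(read_record("customers.dat", idx, "C101"))
```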

SYSTEM IMPLEMENTATION: System implementation & maintenance: implementation strategies, SW/HW selection & procurement, control & security issues of designing & implementing on-line systems, data communication requirements, system conversion approaches & selection issues. PREPARE THE SYSTEMS PROPOSAL In order to prepare the systems proposal, analysts must use a systematic approach to: Ascertain hardware and software needs. Identify and forecast costs and benefits. Compare costs and benefits. Choose the most appropriate alternative. Ascertaining Hardware and Software Needs Steps used to determine hardware and software needs: Inventory the computer hardware currently available. Estimate the current and projected workload for the system. Evaluate the performance of hardware and software using some predetermined criteria. Choose the vendor according to the evaluation. Obtain the hardware and software from the vendor. When inventorying hardware, check: Type of equipment. Status of equipment operation. Estimated age of equipment. Projected life of equipment. Physical location of equipment. Department or person responsible for equipment. Financial arrangement for equipment. Criteria for evaluating hardware: Time required for average transactions (including time for input and output). Total volume capacity of the system. Idle time of the central processing unit. Size of memory provided. The people involved: Management. Users. Systems analysts. There are three options for obtaining computer equipment: Buying. Leasing. Rental.

When evaluating hardware vendors, the selection committee needs to consider: Hardware support. Software support. Installation and training support. Maintenance support. Performance of the hardware. Software may be: Custom created in-house. Purchased as COTS (commercial off-the-shelf) software. Provided by an application service provider (ASP). Software Evaluation Use the following criteria when evaluating software packages: Performance effectiveness. Performance efficiency. Ease of use. Flexibility. Quality of documentation. Manufacturer support.

CONVERSION METHODS: Conversion is the process of changing from the old system, which is currently running in the organization, to the newly built system. There are four methods of handling the system conversion. Parallel System Method: In the parallel systems method, the new system is set to work along with the old system. Data are input to both simultaneously, and the functioning of the new system is tested and judged. This is continued till the new system proves that it can function effectively. This method is safer because we need not convert to the new system unless the new system performs satisfactorily. However, this method is costly, since the cost of both the old and the new system has to be borne by the organization, and the work and data are duplicated. But the switch-over will be natural and the user can see the benefits of the new system in due course of time. Dual System Method or Phase-in Method: In the dual system method or phase-in method, the old system is gradually phased out while the new one is being phased in. That is, the old system is replaced by the new system function by function, till finally the whole of the new system is functional. This method has the advantage that the conversion cost is low and there is little

duplication of work or data. However, long phase-in periods may create difficulties for the analyst. Also, if there are problems in the early phases of implementation, rumours about the difficulties may create problems in the implementation of the remaining system. Direct Cut-over Method: In this method, conversion from the old system to the new system takes place abruptly. The old system works as usual until one day it is replaced by the new system. There are no parallel activities and there is no falling back to the old system. This method is more suitable in the case of hotel reservation systems, airline reservation systems, etc. It requires careful planning and properly scheduled and maintained training sessions. Pilot Approach Method: In the pilot approach method, a working version of the system is implemented in just one part of the organization, for example in a particular department. The users are told that it is a pilot test and that they can experiment to improve the system. When the system is considered beneficial, complete and fully functional, it can be implemented throughout the organization either by the direct cut-over method or by the phase-in method.

MODULE 3. PROJECT DEVELOPMENT & DATABASE DESIGN: Introduction to database technologies & CASE tools with specific packages overview of the relational model database creation SQL commands normalization designing forms & reports using CASE tools for system analysis & design case studies cost/benefit analysis project & resource planning design & development testing & documentation.

CASE Tools A software development project is a time-consuming and complex activity that requires the resources of a number of people and a significant financial outlay. A great deal of effort is required to establish the objectives of the project, which are then formalised in a requirements specification. Further effort must be expended in designing a solution that meets all of the requirements within the specified schedule, and within budgetary constraints. Once the design has been finalised, perhaps the most work-intensive part of the project is the implementation phase, during which all of the program code is written and tested. All of these activities must be documented, and the documentation maintained in an up-to-date state, to facilitate future system maintenance or enhancement. Any means of automating these activities, in whole or in part, can help to minimise the time required to complete the project and thus significantly reduce overall project costs. Computer-Aided Software Engineering (CASE) is the application of a range of software tools to the development of information systems. Indeed, software tools can be applied to the entire range of activities involved in the systems development lifecycle, including the analysis, design, implementation, testing, documentation, and maintenance of information systems. Since all stages of the software development life-cycle can be directly supported by software of one kind or another, a broad definition of the term "computer-aided software engineering" might indicate the inclusion of project management software, compilers, assemblers, and linkers in the list of CASE tools. Usually, however, only those tools that are directly involved in the analysis, design and coding of information systems are considered to be included. The first CASE tools were often developed to help software developers carry out a specific task, such as the production of flowcharts or the automated generation or refactoring of program code. Later, integrated families of CASE tools were developed that offered support for a range of activities related to the development of information systems. Some of these integrated CASE tool suites were designed to support a particular development methodology. There are a number of proprietary applications, for example, that provide support for SSADM. Such applications facilitate the creation of common SSADM documents such as data flow diagrams, process descriptions, entity relationship diagrams, and entity life history diagrams.

CASE tools are automated, microcomputer-based software packages for systems analysis and design. Four reasons for using CASE tools are: To increase analyst productivity. To facilitate communication among analysts and users. To provide continuity between life cycle phases. To assess the impact of maintenance.

Upper CASE tools: Create and modify the system design. Store data in a project repository. The repository is a collection of records, elements, diagrams, screens, reports, and other project information. These CASE tools model organizational requirements and define system boundaries.

CASE tools may be divided into several categories: Upper CASE (also called front-end CASE) tools, used to perform analysis and design. Lower CASE (also called back-end CASE) tools, which generate computer language source code from the CASE design. Integrated CASE tools, performing both upper and lower CASE functions.

Lower CASE tools generate computer source code from the CASE design. Source code may usually be generated in several languages.

Projects are initiated for two broad reasons: Problems that lend themselves to systems solutions. Opportunities for improvement through upgrading systems, altering systems, or installing new systems.

PROJECT SELECTION Five specific criteria for project selection: Backed by management. Timed appropriately for commitment of resources. Moves the business toward attainment of its goals. Practicable. Important enough to be considered over other projects.

Possibilities for improvement Many possible objectives exist, including: Speeding up a process. Streamlining a process. Combining processes. Reducing errors in input.

Reducing redundant storage. Reducing redundant output. Improving system and subsystem integration.

A feasibility impact grid (FIG) is used to assess the impact of any improvements to the existing system. It can increase awareness of the impacts made on the achievement of corporate objectives. Cost-Benefit Analysis Cost-benefit analysis is used to determine the economic feasibility of a project. The total expected costs are weighed against the total expected benefits. If the benefits outweigh the costs over a given period of time, the project may be considered to be financially viable. The costs involved with a software development project will consist of the initial development cost (the costs incurred up to the point where the new system becomes operational), and the operating costs of the system throughout its expected useful lifetime (usually a period of five years). The expectation is that at some point in the system's lifetime, the accumulated financial benefits of the system will exceed the cost of development and the ongoing operating costs. This point in time is usually referred to as the break-even point. The benefits of the new system are usually considered to be the tangible financial benefits engendered by the system. These could be manifested as reduced operating costs, increased revenue, or a combination of the two. In some cases there may be one or more less tangible benefits (i.e. benefits that cannot be measured in financial terms), but such benefits are difficult to assess. Indeed the accuracy of a cost benefit analysis is dependent on the accuracy with which the development costs, operational costs and future benefits of the system can be estimated, and its outcome should always be treated with caution. Because money will devalue over time, it is misleading to simply directly compare future operating costs and tangible benefits with the initial cost of developing the system. A discount rate is therefore selected so that future costs and benefits can be represented in terms of their present-day value. The discount rate used is often the current interest rate used by financial markets. The future value (FV) of a sum of money invested today (the present value, or PV) at a fixed interest rate (i) for a known number of time periods (n) can be calculated as follows: FV = PV x (1 + i)^n Conversely, we can re-arrange this equation to get the present value of a future sum of money (e.g. the estimated future costs or benefits of the proposed system) as follows:

PV = FV/(1 + i)^n
Note that we have assumed here that n represents the time period in years. If the value of i represents an annual discount rate and the time period n is measured in months, the value of i must be divided by twelve.
System development costs (C) are assumed to be incurred when the system is commissioned. Taking an expected system life of five years, the yearly benefits (B1, B2, B3, B4 and B5) are assumed to occur at the end of year one, year two, year three, year four, and year five respectively. To compute the net present value (NPV), those benefits are discounted back to their present values and added to the cost of development, C (a negative value), as follows:
NPV = C + B1/(1+i)^1 + B2/(1+i)^2 + B3/(1+i)^3 + B4/(1+i)^4 + B5/(1+i)^5
When the net present value is positive, the system has passed the break-even point (i.e. it has paid for itself and justified the cost of the project). Many organisations use the internal rate of return (IRR) to gauge the economic viability of a project. This figure is calculated by dividing the net present value (net present value of benefits + net present value of costs) by the present value of the total cost of the system, including the development costs and all operating costs. The following simple example demonstrates some of these principles:
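Before turning to that example, the present-value formulas above can be expressed in a few lines of code. The sketch below is illustrative only: the function names are our own, the figures in the last line are invented rather than taken from the text, and annual periods with an annual discount rate are assumed.

    # Present-value helpers for the cost-benefit formulas above (illustrative sketch).

    def present_value(future_value, rate, periods):
        """PV = FV / (1 + i)^n, with an annual rate and periods measured in years."""
        return future_value / (1 + rate) ** periods

    def net_present_value(development_cost, yearly_benefits, rate):
        """NPV = C + B1/(1+i)^1 + ... + Bn/(1+i)^n.
        The development cost C is passed as a negative value, as in the text."""
        npv = development_cost
        for year, benefit in enumerate(yearly_benefits, start=1):
            npv += present_value(benefit, rate, year)
        return npv

    # Hypothetical figures: 50,000 spent now, net benefits of 20,000 a year
    # for five years, discounted at 15% per annum.
    print(round(net_present_value(-50_000, [20_000] * 5, 0.15), 2))

A positive result means the system has passed its break-even point under these assumptions.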

Cost-benefit analysis example
A new automated customer invoicing system has been recommended for our organisation by a firm of consultants. The requirements are already known, and we now want to carry out a cost-benefit analysis. The system will cost 50,000 to develop, and will have a projected useful life of five years from the time it is installed, one year from now. After five years, the system database can be transferred to the replacement system, a saving of approximately 10,000. The current system has operating costs estimated at 100,000 per annum, whereas the annual operating costs for the new system are estimated at only 75,000. In addition, the new system has intangible benefits estimated to be worth 10,000 per annum. It is assumed that all estimates for costs and benefits will increase at a rate of 10% annually for both the current system and the new system. The organisation has set a present value discount rate of 15% per annum. The cost-benefit analysis calculations are carried out using a spreadsheet, and are shown below:

A spreadsheet can be used to perform the cost-benefit analysis.
The system starts to show a positive return during its second year of operation. The internal rate of return is calculated as follows:
(Cumulative PV Benefits + Cumulative PV Costs) / Cumulative PV Costs = 133,686 / 348,938 = 0.383
Costs and Benefits
Systems analysts should take tangible costs, intangible costs, tangible benefits, and intangible benefits into consideration to identify the costs and benefits of a prospective system.
Tangible benefits
Tangible benefits are advantages measurable in dollars that accrue to the organization through use of the information system.

Examples: Increase in the speed of processing. Access to information on a more timely basis.

Intangible benefits
Intangible benefits are advantages from use of the information system that are difficult to measure. Examples: Improved effectiveness of decision-making processes. Maintaining a good business image.
Tangible costs
Tangible costs are those that can be accurately projected by systems analysts and the business accounting personnel. Examples: Cost of equipment. Cost of resources. Cost of systems analysts' time.

Intangible costs
Intangible costs are those that are difficult to estimate and may not be known. Examples: Cost of losing a competitive edge. Declining company image.

To select the best alternative, analysts should compare the costs and benefits of the prospective alternatives using:
Break-even analysis: the point at which the cost of the current system and the cost of the proposed system intersect. Break-even analysis is useful when a business is growing and volume is a key variable in costs.
Payback: determines the number of years of operation that the system needs to pay back the cost of investing in it (a short payback sketch follows this list).
Cash-flow analysis: examines the direction, size, and pattern of cash flow associated with the proposed information system, determining when cash outlays and revenues will occur, both for the initial purchase and over the life of the information system.
Present value method: a way to assess all the economic outlays and revenues of the information system over its economic life, and to compare today's costs with future costs and today's benefits with future benefits. Use present value when the payback period is long, or when the cost of borrowing money is high.
Guidelines for selecting the method of comparing alternatives:
Use break-even analysis if the project needs to be justified in terms of cost, not benefits.
Use payback when the improved tangible benefits form a convincing argument for the proposed system.

Use cash-flow analysis when the project is expensive, relative to the size of the company. Use present value when the payback period is long or when the cost of borrowing money is high.
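As a minimal illustration of the payback method, the sketch below counts the years of operation needed for cumulative net benefits to repay the initial investment. The function name and figures are hypothetical, not taken from the text.

    # Payback period: years of operation needed for cumulative net benefits
    # to cover the initial investment (hypothetical figures).

    def payback_years(initial_cost, yearly_net_benefits):
        cumulative = 0.0
        for year, benefit in enumerate(yearly_net_benefits, start=1):
            cumulative += benefit
            if cumulative >= initial_cost:
                return year
        return None  # the investment is not recovered within the period examined

    print(payback_years(50_000, [15_000, 18_000, 20_000, 22_000, 24_000]))  # prints 3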

When preparing a systems proposal, systems analysts should arrange the following ten items in order:
Cover letter.
Title page of project.
Table of contents.
Executive summary (including recommendation).
Outline of the systems study with appropriate documentation.
Detailed results of the systems study.
Systems alternatives (three or four possible solutions).
Systems analysts' recommendations.
Summary.
Appendices (assorted documentation, summary of phases, correspondence, and other material as needed).

Report Design Considerations
Constant information does not change when the report is printed; variable information changes each time the report is printed. Paper quality, type, and size should be specified.
Design guidelines for printed reports are: Design reports using software. Include functional attributes, such as headings, page numbers, and control breaks (a short control-break sketch follows). Incorporate stylistic and aesthetic attributes, such as extra blank space and grouping data.
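As an illustration of the control breaks mentioned above, the short sketch below prints a grouped report with a subtotal each time the value of the control field changes. The department names and amounts are invented for the example.

    # A control-break report: records are sorted on the control field (department),
    # and a subtotal line is printed whenever that field changes.
    records = [
        ("Accounts", 120.00), ("Accounts", 80.50),
        ("Sales", 200.00), ("Sales", 99.99), ("Sales", 10.00),
    ]

    def print_report(rows):
        current_group, subtotal = None, 0.0
        for department, amount in rows:
            if department != current_group:  # control break: the group has changed
                if current_group is not None:
                    print(f"  Subtotal for {current_group}: {subtotal:.2f}")
                current_group, subtotal = department, 0.0
                print(department)  # group heading
            print(f"    {amount:.2f}")
            subtotal += amount
        if current_group is not None:
            print(f"  Subtotal for {current_group}: {subtotal:.2f}")

    print_report(records)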


Guidelines for display design are: Keep the display simple. Keep the display presentation consistent. Facilitate user movement among displayed output. Create an attractive display.

Graphical Output
Primary considerations for designing graphical output: Output must be accurate, easy to understand and use.

The analyst must determine: The purpose of the graph. The kind of data to be displayed. The audience. The effects on the audience of different kinds of graphical output.
Design principles must be used when designing Web sites. These include: Using professional tools. Studying other sites. Using Web resources. Examining the sites of professional Web site designers. Using tools that you are familiar with. Consulting books. Examining poorly designed pages. Creating Web templates. Using style sheets to format all Web pages in a site consistently. Using plug-ins, audio, and video sparingly.

Plan ahead for: Structure. Content. Text. Graphics. Presentation style. Navigation. Promotion.

WEB GRAPHICS
Guidelines for using graphics when designing Web sites are: Use either JPEG or GIF formats. Keep the background simple and readable. Create a few professional-looking graphics for use on your page. Reuse bullet or navigational buttons. Examine your Web site on a variety of monitors and graphics resolutions.
PRESENTATION STYLE
Guidelines for entry displays for Web sites: Provide an entry screen or home page. Keep the number of graphics to a reasonable minimum. Use large and colorful fonts for headings. Use interesting images and buttons for links. Use tables to enhance the layout. Use the same graphics image on several Web pages. Avoid overusing animation, sound, and other busy elements.
Navigation guidelines: Use the three-clicks rule. Promote the Web site. Encourage your viewers to bookmark your site.

System Documentation
One of the requirements for total quality assurance is preparation of an effective set of system documentation. This serves as: A guideline for users. A communication tool. A maintenance reference as well as a development reference.

Documentation can be one of the following:
Pseudocode: English-like code used to represent the outline or logic of a program. It is not a particular type of programming code, but it can be used as an intermediate step for developing program code (a small example follows this list of documentation types).

Procedure manuals: common English-language documentation. They contain background comments, steps required to accomplish different transactions, and instructions on how to recover from problems; online help may be available, and Read Me files are included with COTS software. Complaints regarding this method include: they are poorly organized, it is difficult to find needed information, the specific case in question does not appear in the manual, and the manual is not written in plain English.
A Web site can help maintain and document the system by providing: FAQs (Frequently Asked Questions). Help desks. Technical support. Fax-back services.

The FOLKLORE method: this documentation method collects information in the categories of: Customs. Tales. Sayings. Art forms.
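To illustrate the pseudocode form of documentation mentioned above, here is a small hypothetical fragment (an overtime-pay rule invented for this example), first as English-like pseudocode in comments and then as the program code it could be refined into.

    # Pseudocode (English-like outline of the logic):
    #   read hours worked and hourly rate
    #   if hours worked exceed 40
    #       pay = 40 * rate + overtime hours * rate * 1.5
    #   otherwise
    #       pay = hours worked * rate
    #   print the pay

    def gross_pay(hours_worked, hourly_rate):
        if hours_worked > 40:
            return 40 * hourly_rate + (hours_worked - 40) * hourly_rate * 1.5
        return hours_worked * hourly_rate

    print(gross_pay(45, 10.0))  # 400 regular pay + 75 overtime = 475.0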

Guidelines for choosing a documentation technique: Is it compatible with existing documentation? Is it understood by others in the organization? Does it allow you to return to working on the system after you have been away from it for a period of time? Is it suitable for the size of the system you are working on? Does it allow for a structured design approach if it is considered to be more important than other factors? Does it allow for easy modification?
TESTING
The new or modified application programs, procedural manuals, new hardware, and all system interfaces must be tested thoroughly. The following testing process is recommended:
Program testing with test data.
Link testing with test data.
Full system testing with test data.
Full system testing with live data.

Program Testing with Test Data
Desk check programs. Test with valid and invalid data. Check for errors and modify programs.
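A small sketch of what testing with valid and invalid data can look like in practice, using Python's built-in unittest module against a hypothetical input-validation routine (the function and the rule it checks are invented for illustration).

    import unittest

    def valid_quantity(value):
        """Hypothetical input rule: quantity must be a whole number from 1 to 999."""
        return isinstance(value, int) and 1 <= value <= 999

    class QuantityFieldTests(unittest.TestCase):
        def test_valid_data_is_accepted(self):
            for good in (1, 50, 999):
                self.assertTrue(valid_quantity(good))

        def test_invalid_data_is_rejected(self):
            for bad in (0, -5, 1000, "ten", None):
                self.assertFalse(valid_quantity(bad))

    if __name__ == "__main__":
        unittest.main()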

Link Testing with Test Data: Also called string testing. See if programs can work together within a system. Test for normal transactions. Test with invalid data.
Full System Testing with Test Data: Operators and end users test the system. Factors to consider: Is adequate documentation available? Are procedure manuals clear? Do work flows actually flow? Is output correct, and do the users understand the output?
Full System Testing with Live Data: Compare the new system output with the existing system output. Only a small amount of live data are used.
Maintenance is performed to: Repair errors or flaws in the system. Enhance the system. Ensure feedback procedures are in place to communicate suggestions.

Auditing: There are internal and external auditors. Internal auditors study the controls used in the system to make sure that they are adequate, and they check security controls. External auditors are used when the system influences a company's financial statements.
Implementation is the process of assuring that the information system is operational and that well-trained users are involved in its operation.
1. Distributed systems use telecommunications technology and database management to interconnect people. A distributed system includes workstations that can communicate with each other and with data processors. The distributed system may have different configurations of data processors.
2. The client/server (C/S) model consists of the client's request and the server's fulfillment of that request. The client is a networked computer running a GUI interface. A file server stores programs and data; a print server receives and stores files to be printed. The advantages of a client/server system are greater computer power and greater opportunity to customize applications.

The disadvantages of a client/server system are greater expense, and the fact that applications must be written as two separate software components running on separate machines.
Standard types of networks include the wide-area network (WAN), the local area network (LAN), and the wireless local area network (WLAN).
Wireless Local Area Network (WLAN): Also called Wi-Fi (wireless fidelity). Can include wired equivalent privacy (WEP) encryption for security. Cheap to set up. Flexible.

Concerns: Security. Signal integrity. Wi-Fi networks are prone to interference from systems operating nearby in the same frequency spectrum. Bluetooth is suitable for personal networks and can include computers, printers, handheld devices, phones, keyboards, mice and household appliances.
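To make the client/server request and response pattern described above concrete, here is a minimal sketch using Python's standard socket library: one thread plays the server fulfilling a request, while the main program plays the client issuing it. The port number and message text are arbitrary choices for the example.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5050  # arbitrary local address for the example

    def server():
        # The server waits for a client request and fulfils it with a reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(f"server processed: {request}".encode())

    def client():
        # The client sends a request and prints the server's response.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"print invoice 42")
            print(cli.recv(1024).decode())

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)  # give the server a moment to start listening
    client()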
