
Name

__________Answer Key____________________________________

CS101 Systems Analysis and Design
Homework Assignment #6
Wednesday, August 18, 2010

Read through chapter 10, chapter 11, and chapter 12 before completing the remaining portions of this homework assignment. (This assignment is worth a total of 120 points and is due by the beginning of class on Wednesday, September 1, 2010 at 5:30 pm. You may handwrite or type your responses to these questions.) Study the key terms that were reviewed at the end of the chapter, as they will be helpful in further discussions and may be testable questions. Remember to review the PowerPoints, notes, and any other information that we discussed in class. All material covered in class is testable, and future material may build on these past discussions.

1. Complete the Review Questions, #1-10 on page 488 (chapter 10) of your textbook. (You may handwrite or type your responses to these questions.) [40]

Review Questions

1. Define the term system architecture. Define the term scalability, and explain why it is important to consider scalability in system design.

System architecture translates the logical design of an information system into a physical structure that includes hardware, software, network support, and processing methods. Scalability is the measure of a system's ability to expand, change, or downsize easily to meet the changing needs of a business enterprise. Scalability is especially important in implementing systems that are volume-related, such as transaction processing systems. (Pages 446, 450)

2. When selecting an architecture, what items should a systems analyst consider as part of the overall design checklist?

Before selecting a system architecture, the analyst must consider the following issues:
Enterprise resource planning (ERP)
Initial cost and TCO
Scalability
Web integration
Legacy system interface requirements
Processing options
Security issues
(Page 448)

3. What is enterprise resource planning (ERP) and why is it important? What is supply chain management?

Enterprise resource planning (ERP) defines a specific architecture, including standards for data, processing, network, and user interface design. It is important because it describes a specific hardware and software environment that ensures hardware connectivity and easy integration of future applications, including in-house software and commercial packages. ERP also can extend to suppliers and customers in a process called supply chain management. In a totally integrated supply chain, a customer order could cause a production planning system to schedule a work order, which in turn triggers a call for certain parts from one or more suppliers. (Page 448)

4. Explain the term server and provide an example of server-based processing; explain the term client and provide an example of client-based processing.

A server is a computer that supplies data, processing services, or other support to one or more computers, called clients. Server-based processing allows users at remote locations to enter and access data from anywhere within the organization, regardless of where the centralized computer is located. Server-based processing is used in industries that require large amounts of data processing that can be done in batches at a central location. For example, a credit card company might run monthly statements in a batch, or a bank might use mainframe servers to update customer balances each night. A client is a stand-alone computer that allows the user to run application software locally. In client-based processing, an individual LAN client has a copy of the application program, but not the data, which is stored on a server. The client requests a copy of the data file from the server, and the server responds by transmitting the entire file to the client. After performing the processing, the client returns the data file to the server, where it is stored. (Pages 453-455)

5. Describe client/server architecture, including fat and thin clients, client/server tiers, and middleware.

Client/server architecture refers to systems that divide processing between one or more networked clients and a central server. In a typical client/server system, the client handles the entire user interface, including data entry, data query, and screen presentation logic. The server stores the data and provides data access and database management functions. Application logic is divided in some manner between the server and the clients. A fat client design locates all or most of the application processing logic at the client. A thin client design locates all or most of the processing logic at the server. Typically, thin client designs provide better performance, because program code resides on the server, near the data. In contrast, a fat client handles more of the processing and must access and update the data more often. Fat client design is simpler and less expensive to develop, because the architecture resembles traditional file server designs where all processing is performed at the client. Client/server designs can be two-tier or three-tier. In a two-tier design, the user interface resides on the client, all data resides on the server, and the application logic can run either on the server, on the client, or be divided between the client and the server. In a three-tier design, a middle layer between the client and server processes the client requests and translates them into data access commands that can be understood and carried out by the server. Middleware is software that connects dissimilar applications and enables them to communicate and exchange data. (Pages 455-456, 458-459)
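The following is a minimal, hypothetical Python sketch of the two-tier split described in questions 4 and 5: the server holds the data and the data-access logic, while a thin client only sends a request and presents the result. The account numbers, port, and function names are invented for illustration and are not from the textbook.

```python
# Hypothetical two-tier client/server sketch (all names and data are made up).
import socket
import threading
import time

ACCOUNTS = {"1001": 250.00, "1002": 75.50}   # server-side data store

def run_server(host="127.0.0.1", port=5050):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        account = conn.recv(1024).decode().strip()      # request arrives from the client
        balance = ACCOUNTS.get(account)                  # data access happens on the server
        reply = f"{balance:.2f}" if balance is not None else "NOT FOUND"
        conn.sendall(reply.encode())
    srv.close()

def thin_client(account, host="127.0.0.1", port=5050):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    cli.sendall(account.encode())                        # client sends only the request
    print("Balance:", cli.recv(1024).decode())           # client handles presentation only
    cli.close()

if __name__ == "__main__":
    threading.Thread(target=run_server, daemon=True).start()
    time.sleep(0.2)                                      # give the server a moment to start
    thin_client("1001")
```

In a fat client variant, the balance lookup and formatting logic would move into the client, with the server shipping the raw data instead.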

6. Describe the impact of the Internet on system architecture. Include examples.

The Internet has had an enormous impact on system architecture. E-business trends are reshaping the corporate landscape as firms, large and small, learn how to harness the power of the Web and build efficient, reliable, and cost-effective business solutions. In proposing an e-commerce strategy, an IT group must consider in-house development of e-business systems, the availability of packaged solutions, and the design of corporate portals. The guidelines in Figure 10-20 on page 462 describe some key issues that a company must consider in planning an effective e-commerce strategy, and examples of Web-based systems development are shown in Figures 10-21 and 10-22. (Pages 461-467)

7. Explain the difference between online processing and batch processing and provide an example of each type.

Online processing handles transactions when and where they occur and provides output directly to users. An example of online processing is an airline reservations system or customer interaction with an ATM. Batch processing occurs when data is collected and processed in groups, or batches. An example of batch processing occurs when a firm produces customer statements at the end of the month. A batch application might process thousands of records in one run of the program. Batch programs can be scheduled to run at a predetermined time without user involvement. A short sketch of the batch idea appears after question 8's answer. (Pages 468-469)

8. Explain the difference between a LAN and a WAN, define the term topology, and draw a sketch of each wired and wireless network model. Also describe four IEEE 802.11 amendments.

The difference between a LAN and a WAN is in the area they cover. WANs cover great distances, whereas LANs are local. Topology means model; the topology of a network is a model of how the network is configured and arranged. Sketches should resemble their textbook versions: hierarchical, bus, ring, and star. Wireless sketches should resemble their textbook versions: BSS, ESS, and ISS. Four IEEE 802.11 amendments are: 802.11b, midrange speed (11 Mbps), using the 2.4 GHz spectrum; 802.11a, fast speed (54 Mbps), using the higher 5 GHz spectrum; 802.11g, like a hybrid between a and b, fast speed (54 Mbps) but on the 2.4 GHz spectrum (currently the most popular amendment); and 802.11n, not yet released, which proposes speeds of 248 Mbps using MIMO technology. The 802.11n amendment could revolutionize wireless networking.

Student sketches should resemble Figure 10-34 on page 472 (hierarchical network); Figure 10-35 on page 473 (bus network); Figure 10-36 on page 474 (ring network); and Figure 10-37 on page 474 (star network). (Pages 454; 471-478)
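As promised in question 7, here is a minimal, hypothetical Python sketch of batch processing: transactions are collected over the month and then processed together in one scheduled run. The customer IDs and amounts are invented for illustration and are not from the textbook.

```python
# Hypothetical batch run that produces monthly customer statements.
from collections import defaultdict

transactions = [                       # charges collected over the month (made-up data)
    ("C001", 19.99), ("C002", 5.00), ("C001", 42.50), ("C003", 7.25),
]

def run_monthly_statement_batch(txns):
    totals = defaultdict(float)
    for customer, amount in txns:      # the whole group is processed in one run
        totals[customer] += amount
    for customer, total in sorted(totals.items()):
        print(f"Statement for {customer}: {total:.2f}")

run_monthly_statement_batch(transactions)
```

An online-processing counterpart would instead handle each transaction the moment it arrives and return output directly to the user.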

9. Explain the differences between each wireless topology. Which is rarely used in business? What kind of network do the 802.16 standards define?

The differences between each wireless topology are as follows:

BSS (Basic Service Set) consists of a single, central access point and wireless clients. BSS is also called infrastructure mode, and it connects wireless clients to a wired infrastructure.

ESS (Extended Service Set) consists of multiple BSS networks connected together. ESS can cover a wide area and allows for roaming between access points.

ISS (Independent Service Set) is also called peer-to-peer mode. No access point is used in ISS; instead, wireless clients connect to each other directly. ISS is not commonly found in business networks.

The 802.16 standards define a MAN, or Metropolitan Area Network. MANs are larger than LANs but smaller than WANs. The 802.16 specifications are called WirelessMAN, or WiMAX, and are expected to enable wireless multimedia applications with a range of up to 30 miles. (Pages 477-479)

10. List the sections of a system design specification, and describe the contents.

The system design specification consists of the following sections:

Executive (or Management) Summary: Provides a brief overview of the project for company managers and executives. It outlines the development efforts to date, gives a current status report, summarizes project costs to date and remaining costs, reviews the overall benefits of the new system, presents the systems development phase schedule, and highlights any issues that management will need to address.

System Components: Contains the complete design for the new system, including the user interface, outputs, inputs, files, databases, and network specifications. It also should include source documents, report and screen layouts, DFDs, O-O diagrams, and all other relevant documentation.

System Environment: Describes the constraints, or conditions, affecting the system, including any requirements that involve operations, hardware, systems software, or security.

Implementation Requirements: Specifies start-up processing, initial data entry or acquisition, user training requirements, and software test plans.

Time and Cost Estimates: Provides detailed schedules, cost estimates, and staffing requirements for the systems development phase, and revised projections for the remainder of the SDLC.

Appendices: Supplemental material, such as copies of documents from the first three phases, can be included if it would provide easy reference for readers. (Page 480)
2. Complete the Review Questions, #1-10 on page 543 (chapter 11) of your textbook. (You may handwrite or type your responses to these questions.) [40]

ReviewQuestions

1. Where does systems implementation fit in the SDLC, what tasks typically are performed during this phase, and why is quality assurance so important?

Systems implementation is the fourth of five phases in the systems development life cycle (SDLC). In the previous phase, systems design, you developed a physical model that included data design, user interface, input and output design, and system architecture. Now you are ready to begin systems implementation, which includes application development, testing, installation, and evaluation. At the conclusion of the systems implementation phase, users will be working with the system on a day-to-day basis, and you will focus on system operation and support, which is the final phase in the SDLC. The main objective of quality assurance is to avoid problems or to detect them as soon as possible. Poor quality can result from inaccurate requirements, design problems, coding errors, faulty documentation, and ineffective testing. Rigorous testing can catch errors in the implementation stage, but it is much less expensive to correct mistakes earlier in the development process. (Pages 497-498, 500)

2. How are structured, object-oriented, and agile methods similar? How are they different?

Structured analysis regards processes and data as separate components and uses DFDs to show how processes transform data into useful information. Object-oriented (O-O) analysis combines data and the processes that act on the data into things called objects. O-O analysis uses various diagrams and object models to represent data, behavior, and by what means objects affect other objects. By describing the objects (data) and methods (processes) needed to support a business operation, a system developer can design reusable components for faster system implementation and decreased development cost. Many analysts believe that, compared with structured analysis, O-O methods are more flexible, efficient, and realistic in today's dynamic business environment. Agile development methods have attracted a wide following and an entire community of users. Agile methods typically use a spiral model, which represents a series of iterations, or revisions, based on user feedback. Proponents of the spiral model believe that this approach reduces risks and speeds up software development. Analysts should recognize that agile methods have advantages and disadvantages. By their nature, agile methods allow developers to be much more flexible and responsive, but they can be riskier than more traditional methods. For example, without a detailed set of system requirements, certain features requested by some users might not be consistent with the company's larger game plan. Other potential disadvantages of adaptive methods can include weak documentation, blurred lines of accountability, and too little emphasis on the larger business picture. Also, unless properly implemented, a long series of iterations might actually add to project cost and development time. (Pages 503-504)

3. Describe structure charts and symbols, and define cohesion and coupling.

Structure charts show the relationships among program modules. A structure chart consists of rectangles that represent the program modules, with arrows and other symbols that provide additional information. Symbols represent various actions or conditions; structure chart symbols represent modules, data couples, control couples, conditions, and loops. A module is represented by a rectangle. A library module has vertical lines at the edges, is reusable, and can be invoked from more than one point in the chart. A data couple, indicated by an arrow with an empty circle, shows data that one module passes to another. A control couple, represented by an arrow with a filled circle, shows a message or a flag that one module sends to another. A condition, represented by a line with a diamond on one end, indicates that a control module determines which subordinate modules will be invoked, depending on a specific condition. A loop, shown as a curved arrow, indicates that one or more modules are repeated. Cohesion measures a module's scope and processing characteristics. A module that performs a single function or task has a high degree of cohesion, which is desirable. Because it focuses on a single task, a cohesive module is much easier to code and reuse. Coupling describes the relationships and interdependence among modules. Modules that are independent are loosely coupled, which is desirable. Loosely coupled modules are easier to maintain and modify, because the logic in one module does not affect other modules. If a programmer needs to update a loosely coupled module, he or she can accomplish the task in a single location. If modules are tightly coupled, one module refers to internal logic contained in another module. (Pages 506-508)

4. Define unit testing, integration testing, and system testing.

Unit testing is the testing of individual programs or modules. The objective of unit testing is to identify and eliminate execution errors that could cause problems in the program and to identify logic errors that could have been missed during desk checking. The test data used should contain both correct data and erroneous data, and all possible situations should be tested. During unit testing, programmers must test programs that interact with other programs and files individually, before they are integrated into the system. Integration testing is testing two or more programs that depend on each other; it also is known as link testing. Test data should be used that covers both normal and unusual situations. By implementing this type of testing, systems analysts ensure that programs work properly together. System testing is completed after integration testing and involves the entire information system. A system test includes all typical processing situations; during such testing, the user enters data, including live or actual data, performs queries, and produces reports in order to simulate actual operating procedures. Major objectives of system testing include performing the final test of all programs, ensuring that documentation and instructions are available and clear, demonstrating user interactivity, verifying that the system is fully functional, and confirming that the system can handle predicted volumes in an efficient manner. A brief sketch after question 5's answer below illustrates a cohesive module and a unit test for it. (Pages 515-517)

5. What types of documentation does a systems analyst prepare, and what would be included in each type?

Systems analysts prepare program documentation, system documentation, operations documentation, and user documentation. Program documentation starts in the systems analysis phase and continues during systems implementation. It includes process descriptions and report layouts, as well as documentation by the programmers. System documentation describes the system's functions and how they are implemented. It is prepared during the analysis and design phases and includes data dictionary entries, data flow diagrams, object models, screen layouts, source documents, and the systems request. Operations documentation includes all the information necessary for processing and distributing printed output and will most likely also be available online. User documentation includes instructions and information for users who will interact with the system, along with manuals. Online documentation that provides immediate help is useful. User documentation covers the major features, capabilities, and limitations of the system, along with menu and data entry screen options, source document content, security features, procedural information, examples, and frequently asked questions. (Pages 518-521)
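Looking back at questions 3 and 4, the following hypothetical Python sketch shows a small, cohesive, loosely coupled module together with a unit test that exercises it with both correct and erroneous data. The function name, tax rate, and test values are invented for illustration and are not taken from the textbook.

```python
# Hypothetical sketch of a cohesive, loosely coupled module and its unit test.
import unittest

def compute_sales_tax(amount, rate=0.07):
    """Cohesive module: it performs exactly one task and depends only on
    the arguments passed to it (a data couple rather than shared logic)."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

class TestComputeSalesTax(unittest.TestCase):
    """Unit test: exercises the single module in isolation with both
    correct data and erroneous data, as described in question 4."""

    def test_typical_amount(self):
        self.assertEqual(compute_sales_tax(100.00), 7.00)

    def test_zero_amount(self):
        self.assertEqual(compute_sales_tax(0), 0.00)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            compute_sales_tax(-5)

if __name__ == "__main__":
    unittest.main()
```

Because the module depends only on the data passed to it, the test can run it in isolation before any integration or system testing takes place.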

6. What is the purpose of an operational environment and a test environment?


The day-to-day environment for the actual system operation is called the operational environment or production environment. It provides a secure framework for the system's programs, procedures, and data files. Its access should be limited to users and programmers, and controlled strictly because it involves the use of programs, procedures, and live data. Only if a system problem occurs should systems analysts or programmers be involved in the operational environment. The test environment for the system also contains copies of all programs and procedures. Its working files, however, are test data files, where changes to the operational system are verified and approved prior to conversion to the operational environment. (Page 524)
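One hypothetical way to keep the two environments separate in practice is to drive them from configuration rather than from edits to the programs themselves. The environment names, paths, and the APP_ENV variable below are invented for illustration and are not from the textbook.

```python
# Hypothetical sketch of separating the test and operational environments via configuration.
import os

ENVIRONMENTS = {
    "production": {"data_dir": "/srv/app/live_data", "allow_code_changes": False},
    "test":       {"data_dir": "/srv/app/test_data", "allow_code_changes": True},
}

def load_environment():
    name = os.environ.get("APP_ENV", "test")   # default to the safer test environment
    return name, ENVIRONMENTS[name]

name, cfg = load_environment()
print(f"Running in the {name} environment; data files come from {cfg['data_dir']}")
```

Restricting who may set APP_ENV to production is then part of the strict access control the answer describes.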

7. Who must receive training before a new information system is implemented?


Users, managers, and IT staff all must be trained properly in order for a system to operate successfully.

Users require a much more intense training scenario. In addition to having a clear understanding of the overall system, users must have concentrated training in the following areas:
System overview
Key terms
Startup and shutdown procedures
Main menu and submenus
Icons and shortcut keys
Major system functions
Online and external Help features
FAQs
Troubleshooting guide
Handling emergencies

Managers must receive sufficient training to be supportive of the system and users. This includes the project origin, cost-benefit analysis, support of business goals, key IT contact people, major reports and displays of the system, and how to request enhancements or system changes. They also must have an overview of user training.

The IT staff must have a solid comprehension of the project history and justification. They must understand the system's architecture and documentation, be trained to answer typical user questions, and understand how to use vendor support should it be needed. They also must understand how to log problems for quick and efficient resolution. Most importantly, they must have appropriate technical training so that they can provide user and management training when necessary, as the future of the company dictates. (Pages 525-526)

8. List and describe the four system changeover methods. Which one generally is the most expensive? Which is the riskiest? Explain your answers.
System changeover is the process of putting the new information system online and retiring the old system. It can be accomplished by any of the following four methods. Students can refer to Figure 11-40 on page 534 for an overview. The methods are described as follows:

Direct cutover. This method causes the changeover to occur immediately. It usually is the least expensive method because it involves the operation and maintenance of only one system at a time. It is the most risky of the changeover methods because old data will not be available should the system develop problems.

Parallel operation. This method requires that both the old and new systems be run in parallel for a specific period of time. Users must work fully in both systems to reap the benefits of a parallel conversion. This method is more expensive but it is the least risky.

Pilot operation. This method involves implementation of a new system at a beta or pilot test site. The majority of users continue to use the old system while a select group of users perform daily operations on the new system.

Phased operation. This method allows the implementation of a new system to be done in stages or modules, rather than all at once. Related groups of users begin using the new system, while other users continue to use the old system until it is their turn to be phased into the operations.

The parallel operation changeover method generally tends to be the most expensive because a company is running two systems at the same time to accomplish the same tasks. This creates a doubling of time and materials for the users. Also, running both systems might place an extra burden on the hardware, which can cause processing delays. The direct cutover method of changeover generally is the most risky because this method cuts off the use of the old system when the new system is implemented. Although any method of changeover involves a certain amount of risk, the direct cutover method is the riskiest because you cannot revert back to the old system if the new one fails. (Pages 531-534)
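As a hypothetical illustration of the phased operation described in question 8, the short Python sketch below routes user groups that have already been converted to the new system, while everyone else stays on the old one. The group names are invented and are not from the textbook.

```python
# Hypothetical phased-operation routing sketch; group names are made up.
PHASED_IN_GROUPS = {"accounting", "shipping"}   # groups already converted to the new system

def system_for(user_group):
    """Return which system a group should use during the phased changeover."""
    return "new system" if user_group in PHASED_IN_GROUPS else "old system"

for group in ("accounting", "sales", "shipping"):
    print(f"{group} -> {system_for(group)}")
```

A pilot operation would look similar, except that the set would hold a single pilot site rather than a growing list of converted groups.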

9. Who should be responsible for performing a post-implementation evaluation?


Ideally, people who were not directly involved in developing the system should perform a post-implementation evaluation. Usually, IT staff and users perform the evaluation. Sometimes firms prefer to use either an internal audit group or even an independent auditor to ensure the accuracy and completeness of the evaluation. (Page 535)

10. List the information usually included in the final report to management.
Five main topics should be included in the final report to management:
Final versions of all system documentation
Planned modifications and enhancements to the system that have been identified through system evaluations
Recap of all systems development costs and schedules
Comparison of actual costs and schedules to the original estimates
Results of the post-implementation evaluation, if it has been performed
(Page 537)

3. Complete the Review Questions, #1-10 on page 603 (chapter 12) of your textbook. (You may handwrite or type your responses to these questions.) [40]

Review Questions

1. Describe the four classifications of maintenance and provide an example of each type.

Students can refer to examples shown in Figure 12-6 on page 561. Instructors might want to try a different approach to this question and ask students to cite an automotive example for each type of maintenance. Sample answers might include the following:

a. Corrective maintenance diagnoses and corrects errors in an operational system. (An automotive example would be replacing a burned-out headlight.)

b. Adaptive maintenance involves adding new capability and enhancements to the existing system. (An automotive example would be adding a trailer hitch to your SUV so you can tow your boat.)

c. Perfective maintenance is designed to improve efficiency. (An automotive example would be having a tune-up performed in order to improve gas mileage.)

d. Preventive maintenance is performed to reduce the possibility of future system failure. (An automotive example would be changing your oil every 3,000 miles to avoid engine problems.) (Page 561)

2. Why are newly hired systems analysts often assigned to maintenance projects?

Newly hired and recently promoted IT staff members sometimes are assigned to maintenance projects because most IT managers believe that maintenance work offers the best learning experience. (Page 566)

3. What is configuration management and why is it important?

Configuration management (CM) is a process for controlling changes in system requirements during the development phases of the SDLC. It also is an important management tool for managing system changes and costs after a system becomes operational. (Page 568)

4. What is the purpose of capacity planning? How is what-if analysis used in capacity planning?

Capacity planning is a process that monitors current activity and performance levels, anticipates future activity, and forecasts the resources needed to provide the desired level of service. What-if analysis allows you to vary one or more elements in a capacity planning model to measure the effect on the other elements; a short sketch appears after question 6's answer below. (Page 574)

5. What is a release methodology and what are the pros and cons of this approach? What is the purpose of version control?

Under a release methodology, all noncritical changes are held until they can be implemented at the same time. Each change is documented and installed as a new version of the system called a maintenance release. When a release method is used, a numbering pattern distinguishes the different releases. In a typical system, the initial version of the system is 1.0, and the release that includes the first set of maintenance changes is version 1.1. A change, for example, from version 1.4 to 1.5 indicates relatively minor enhancements, while whole number changes, such as from version 1.0 to 2.0, or from version 3.4 to 4.0, indicate a significant upgrade. A release methodology offers several advantages, especially if two teams perform maintenance work on the same system. When a release methodology is used, all changes are tested together before a new system version is released. The release methodology also reduces costs, because only one set of system tests is needed for all maintenance changes. This approach results in fewer versions, less expense, and less interruption for users. Using a release methodology also reduces the documentation burden. Version control is the process of tracking system releases. Typically, when a new version is released, it is archived by a systems librarian who is responsible for archiving current and previously released versions of the system. Using version control, in the event of a major system failure, the company can reinstate the prior version for system recovery. Version control also allows one individual to track version changes. (Page 569)

6. Define the following terms: response time, bandwidth, throughput, and turnaround time. How are the terms related?

Response time measures the overall time between a request for system activity and the delivery of the response to the user. Bandwidth describes the amount of data that the system can handle in a fixed time period. Throughput expresses a data transfer rate that measures actual system performance under specific circumstances. Turnaround time applies to centralized batch processing operations and measures the time between submitting a request and the fulfillment of the request. Each term represents a different way of measuring system performance. Taken together, response time, bandwidth, throughput, and turnaround time provide a comprehensive view of system operations and performance. (Pages 573-574)
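Here is the promised what-if sketch for question 4: a hypothetical Python model that varies the number of users and shows the effect on an estimated server utilization figure. Every number (requests per user, seconds per request, server capacity) is an invented assumption, not a figure from the textbook.

```python
# Hypothetical what-if capacity model; all figures are made-up assumptions.
def utilization(users, requests_per_user_per_hour=40, seconds_per_request=0.5,
                server_capacity_seconds_per_hour=3600):
    demand = users * requests_per_user_per_hour * seconds_per_request
    return demand / server_capacity_seconds_per_hour

for users in (100, 150, 200):          # the "what-if" element being varied
    print(f"{users} users -> estimated utilization {utilization(users):.0%}")
```

Varying a different element of the model, such as requests per user, works the same way and shows its effect on the other elements.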
7. What are some key issues that you must address when considering data backup and recovery?

The cornerstone of business data protection is a backup policy, which contains detailed instructions and procedures for all backups. The backup policy should specify backup media, schedules, and retention periods. An effective backup policy can help assure continued business operations and, in some cases, be the key to a firm's survival. In addition to backing up critical business data, some companies have taken a more dramatic step by establishing a hot site. A hot site is a separate IT location, which might be in another state or even another country, that can support critical business systems in the event of a power outage, system crash, or physical catastrophe. (Pages 592-594)
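The backup policy's key decisions (media, schedule, and retention period) can be captured as data that a backup job reads. The sketch below is a hypothetical illustration only; the media names, schedules, and retention periods are invented, not taken from the textbook.

```python
# Hypothetical backup policy expressed as data a backup script could read.
BACKUP_POLICY = {
    "full":        {"media": "offsite tape", "schedule": "weekly, Sunday 02:00", "retain_days": 90},
    "incremental": {"media": "disk array",   "schedule": "daily, 02:00",         "retain_days": 14},
}

for kind, rules in BACKUP_POLICY.items():
    print(f"{kind} backup -> {rules['media']}, {rules['schedule']}, keep {rules['retain_days']} days")
```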

8. Explain the concept of risk management, including risk identification, assessment, and control.
Risk management involves constant attention to three interactive tasks: risk identification, risk assessment, and risk control. Risk identification analyzes the organization's assets, threats, and vulnerabilities. Risk assessment measures risk likelihood and impact. Risk control develops safeguards that reduce risks and their impact. (Pages 576-577)
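A minimal, hypothetical sketch of the risk-assessment step follows: each identified risk gets a likelihood and an impact score, and the product ranks which risks need controls first. The risks, scales, and scores are invented for illustration and are not from the textbook.

```python
# Hypothetical risk-assessment scoring; risks and numbers are made up.
risks = [
    ("Power outage at data center", 0.30, 8),   # (description, likelihood, impact on a 1-10 scale)
    ("Lost backup tape",            0.05, 9),
    ("Password sharing by users",   0.60, 5),
]

for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: risk score {likelihood * impact:.2f}")
```

Risk control would then attach a safeguard to each high-scoring item.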

9. What are the six security levels? Name at least three specific security issues that apply to each level. Also provide three examples of threat categories, attacker profiles, and types of attacks.

The six security levels are physical security, network security, application security, file security, user security, and procedural security. The following is a list of issues that pertain to each security level:

Physical Security Issues
Computer room security
Biometric scanning systems
Motion sensors
Servers and desktop computers
Keystroke loggers
Tamper-evident cases
BIOS-level, boot-level, and power-on passwords
Notebook computers
Universal Security Slot (USS)
Tracking software
Stringent password requirements
Account lockout thresholds

Network Security Issues
Encrypting network traffic
Encryption vs. plain text
Public key encryption
Wi-Fi Protected Access (WPA and WPA2)
Wired Equivalent Privacy (WEP)
Private networks
Tunnels
Virtual private networks
Ports and services
Destination ports
Services
Port scans
Denial of service attacks
Firewalls
Protocols that control traffic

Application Security Issues
Services
Security holes
Permissions
Input validation
Patches and updates

File Security Issues
Permissions
User groups

User Security Issues
Identity management
Password protection
Social engineering
User resistance

Procedural Security Issues
Managerial policies and controls
A corporate culture that stresses security
Defining how particular tasks are to be performed
Employee responsibility for security
Dumpster diving
Use of paper shredders
Classification levels
(Pages 580-592)

10. List six indications that an information system is approaching obsolescence.

Six indications that an information system is approaching obsolescence are:
1) The system's maintenance history indicates that adaptive and corrective maintenance are increasing steadily.
2) Operational costs or execution times are increasing rapidly, and routine perfective maintenance does not reverse or slow the trend.
3) A software package is available that provides the same or additional services faster, better, and less expensively than the current system.
4) New technology offers a way to perform the same or additional functions more efficiently.
5) Maintenance changes or additions are difficult and expensive to perform.
6) Users request significant new features to support business requirements.
(Page 595)
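As a small, hypothetical illustration of one issue from question 9's list (an account lockout threshold), the Python sketch below locks an account after too many failed logins. The threshold, user name, and behavior are invented examples, not the textbook's.

```python
# Hypothetical account lockout threshold; values and names are made up.
MAX_FAILED_ATTEMPTS = 3
failed_attempts = {}

def record_failed_login(user):
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    if failed_attempts[user] >= MAX_FAILED_ATTEMPTS:
        return f"{user}: account locked after {failed_attempts[user]} failed attempts"
    return f"{user}: failed attempt {failed_attempts[user]}"

for _ in range(4):
    print(record_failed_login("jsmith"))
```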
