
MC0071 Set 2

Answer 1:

The Information Age, also commonly known as the Computer Age or the Information Era, is the idea that the current age will be characterized by the ability of individuals to transfer information freely and to have instant access to knowledge that would have been difficult or impossible to find previously. The idea is linked to the concept of a Digital Age or Digital Revolution, and carries the ramifications of a shift from the traditional industry that the Industrial Revolution brought through industrialization to an economy based around the manipulation of information. The period is generally said to have begun in the latter half of the 20th century, though the particular date varies. Since the invention of social media in the early 21st century, some have claimed that the Information Age has evolved into the Attention Age. The term has been widely used since the late 1980s and into the 21st century.
The Internet: The Internet was originally conceived as a distributed, fail-proof network that could connect computers together and be resistant to any single point of failure; the Internet cannot be totally destroyed in one event, and if large areas are disabled, the information is easily re-routed. It was created mainly by ARPA; its initial software applications were email and computer file transfer. It was with the invention of the World Wide Web in 1989 that the Internet truly became a global network. Today the Internet has become the ultimate platform for accelerating the flow of information and is the fastest-growing form of media.
Progression: In 1956 in the United States, researchers noticed that the number of people holding "white collar" jobs had just exceeded the number of people holding "blue collar" jobs. These researchers realized that this was an important change, as it was clear that the Industrial Age was coming to an end. As the Industrial Age ended, the newer times adopted the title of "the Information Era". At that time, relatively few jobs had much to do with computers and computer-related technology. There was a steady trend away from people holding Industrial Age manufacturing jobs; an increasing number of people held jobs as clerks in stores, office workers, teachers, nurses, etc. The Western world was shifting into a service economy. Eventually, Information and Communication Technology (ICT) tools such as computers, computerized machinery, fiber optics, communication satellites, the Internet, and others became a significant part of the economy. Microcomputers were developed, and many businesses and industries were greatly changed by ICT. Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital. His book discusses similarities and differences between products made of atoms and products made of bits. In essence, one can very cheaply and quickly make a copy of a product made of bits and ship it across the country or around the world both quickly and at very low cost. Thus, the term "Information Era" is often applied in relation to the use of cell phones, digital music, high-definition television, digital cameras, the Internet, computer games, and other relatively new products and services that have come into widespread use.

2. Suggest six reasons why software reliability is important. Using an example, explain the difficulties of describing what software reliability means.
Ans: Reliable software must include extra, often redundant, code to perform the necessary checking for exceptional conditions. This reduces program execution speed and increases the amount of store required by the program. Reliability should always take precedence over efficiency for the following reasons:
a) Computers are now cheap and fast: There is little need to maximize equipment usage. Paradoxically, however, faster equipment leads to increasing expectations on the part of the user, so efficiency considerations cannot be completely ignored.
b) Unreliable software is liable to be discarded by users: If a company attains a reputation for unreliability because of a single unreliable product, it is likely to affect future sales of all of that company's products.
c) System failure costs may be enormous: For some applications, such as a reactor control system or an aircraft navigation system, the cost of system failure is orders of magnitude greater than the cost of the control system.
d) Unreliable systems are difficult to improve: It is usually possible to tune an inefficient system because most execution time is spent in small program sections. An unreliable system is more difficult to improve, as unreliability tends to be distributed throughout the system.
e) Inefficiency is predictable: Programs take a long time to execute and users can adjust their work to take this into account. Unreliability, by contrast, usually surprises the user. Software that is unreliable can have hidden errors which can violate system and user data without warning and whose consequences are not immediately obvious. For example, a fault in a CAD program used to design aircraft might not be discovered until several plane crashes occur.
f) Unreliable systems may cause information loss: Information is very expensive to collect and maintain; it may sometimes be worth more than the computer system on which it is processed. A great deal of effort and money is spent duplicating valuable data to guard against data corruption caused by unreliable software.
The reliability of a software system is a measure of how well users think it provides the services that they require. Say it is claimed that software installed on an aircraft will be 99.99% reliable during an average flight of five hours. This means that a software failure of some kind will probably occur in one flight out of 10,000 (a small numerical sketch of this claim is given at the end of this answer). A system might be thought of as unreliable if it ever failed to provide some critical service. For example, say a system was used to control braking on an aircraft but failed to work under a single set of very rare conditions. If an aircraft crashed because of these failure conditions, pilots of similar aircraft would regard the software as unreliable.
Software reliability: Software reliability is a function of the number of failures experienced by a particular user of that software. A failure is a situation in which the software does not deliver the service expected by the user. Formal specifications and proof do not guarantee that the software will be reliable in practical use. The reasons for this are:
a) The specifications may not reflect the real requirements of system users; many failures experienced by users are a consequence of specification errors and omissions, which cannot be detected by formal system specification.
b) Program proofs are large and complex, so, like large and complex programs, they usually contain errors.
c) The proof may assume a usage pattern which is incorrect.
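The aircraft reliability figure quoted above can be turned into simple numbers. The following Python sketch is illustrative only and assumes the quoted 99.99% is interpreted as the probability of a failure-free five-hour flight; the function name and the number of flights are hypothetical, not taken from the original text.

```python
# Minimal sketch: interpreting "99.99% reliable during an average 5-hour flight".
# Assumption: reliability here means the probability of a failure-free flight.

def failures_expected(reliability_per_flight: float, flights: int) -> float:
    """Expected number of flights with at least one software failure."""
    prob_failure = 1.0 - reliability_per_flight   # 1 - 0.9999 = 0.0001
    return prob_failure * flights

if __name__ == "__main__":
    reliability = 0.9999        # 99.99% reliable per 5-hour flight
    fleet_flights = 10_000      # hypothetical number of flights

    print(f"Probability of failure per flight: {1.0 - reliability:.4f}")
    print(f"Expected failing flights in {fleet_flights} flights: "
          f"{failures_expected(reliability, fleet_flights):.1f}")
    # Roughly one flight in 10,000 is expected to experience a failure,
    # which is exactly the claim made in the answer above.
```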

3. Discuss the difference between object-oriented and function-oriented design strategies.
Ans: Object-oriented design is the process of planning a system of interacting objects for the purpose of solving a software problem. It is one approach to software design. An object contains encapsulated data and procedures grouped together to represent an entity. The 'object interface', i.e. how the object can be interacted with, is also defined. An object-oriented program is described by the interaction of these objects. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis. From a business perspective, object-oriented design refers to the objects that make up that business. For example, in a certain company, a business object can consist of people, data files and database tables, artifacts, equipment, vehicles, etc. What follows is a description of the class-based subset of object-oriented design, which does not include object prototype-based approaches where objects are not typically obtained by instancing classes but by cloning other (prototype) objects.
Input (sources) for object-oriented design: The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input to object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Some typical input artifacts for object-oriented design are:
Conceptual model: The conceptual model is the result of object-oriented analysis; it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.
Use case: A use case is a description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users, called actors, to achieve a specific business goal or function. Use case actors may be end users or other systems.
System Sequence Diagram: A System Sequence Diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.
User interface documentation (if applicable): A document that shows and describes the look and feel of the end product's user interface. It is not mandatory, but it helps to visualize the end product and therefore helps the designer.
Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design, since the strategy chosen for object-relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of one artifact can stimulate the refinement of the others.
Object-oriented concepts: The five basic concepts of object-oriented design are the implementation-level features that are built into the programming language. These features are often referred to by these common names:
Object/Class: A tight coupling or association of data structures with the methods or functions that act on the data.
This is called a class, or object (an object is created based on a class). Each object serves a separate function. It is defined by its properties: what it is and what it can do. An object can be part of a class, which is a set of objects that are similar.
Information hiding: The ability to protect some components of the object from external entities. This is realized by language keywords that enable a variable to be declared as private or protected to the owning class.
Inheritance: The ability of a class to extend or override functionality of another class. The so-called subclass has a whole section that is the superclass, and then it has its own set of functions and data.
Interface: The ability to defer the implementation of a method; the ability to define the signatures of functions or methods without implementing them.
Polymorphism: The ability to replace an object with its sub-objects; the ability of an object variable to contain not only that object but also all of its sub-objects.
Designing concepts:
Defining objects, creating a class diagram from the conceptual diagram: usually map each entity to a class and identify its attributes.
Use design patterns (if applicable): A design pattern is not a finished design; it is a description of a solution to a common problem in a context. The main advantage of using a design pattern is that it can be reused in multiple applications. It can also be thought of as a template for how to solve a problem that can be used in many different situations and/or applications. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved.
Define an application framework (if applicable): Application framework is a term usually used to refer to a set of libraries or classes that are used to implement the standard structure of an application for a specific operating system. By bundling a large amount of reusable code into a framework, much time is saved for the developer, who is spared the task of rewriting large amounts of standard code for each new application that is developed.
Identify persistent objects/data (if applicable): Identify objects that have to last longer than a single runtime of the application. If a relational database is used, design the object-relational mapping.
Identify and define remote objects (if applicable).
Output (deliverables) of object-oriented design:
Sequence diagrams: Extend the System Sequence Diagram to add specific objects that handle the system events. A sequence diagram shows, as parallel vertical lines, different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur.
Class diagram: A class diagram is a type of static structure UML diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes. The messages and classes identified through the development of the sequence diagrams can serve as input to the automatic generation of the global class diagram of the system.
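The five basic concepts listed above can be made concrete with a short example. The following Python sketch is purely illustrative; the class names (Describable, Vehicle, Car) are hypothetical and not taken from the original text.

```python
# Illustrative sketch of the five object-oriented concepts described above.
from abc import ABC, abstractmethod

class Describable(ABC):                 # Interface: a method signature without an implementation
    @abstractmethod
    def describe(self) -> str: ...

class Vehicle(Describable):             # Object/Class: data plus the methods that act on it
    def __init__(self, owner: str):
        self.__owner = owner            # Information hiding: name-mangled "private" attribute

    @property
    def owner(self) -> str:             # Controlled access to the hidden data
        return self.__owner

    def describe(self) -> str:
        return f"vehicle owned by {self.owner}"

class Car(Vehicle):                     # Inheritance: Car extends Vehicle
    def describe(self) -> str:          # Overriding the inherited behaviour
        return f"car owned by {self.owner}"

def print_description(item: Describable) -> None:
    # Polymorphism: any subtype of Describable may be substituted here
    print(item.describe())

if __name__ == "__main__":
    for obj in (Vehicle("Alice"), Car("Bob")):
        print_description(obj)
```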

4. Write a note on Software Testing Strategy.
Ans: Software testing is an investigation conducted to provide users with information about the quality of the product, with respect to the context in which it is intended to operate. Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of implementing the software. Test techniques include the process of executing a program or application with the intent of finding software bugs. Software testing can also be stated as the process of validating and verifying that a software program/application/product: meets the business and technical requirements that guided its design and development; works as expected; and can be implemented with the same characteristics. Software testing, depending on the testing method employed, can be implemented at any time in the development process. Different software development models will focus the test effort at different points in the development process. In a more traditional model, most of the test effort occurs after the requirements have been defined and the coding process has been completed.
History: The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. It illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:
Until 1956 - Debugging oriented
1957-1978 - Demonstration oriented
1979-1982 - Destruction oriented
1983-1987 - Evaluation oriented
1988-2000 - Prevention oriented
Software testing topics
Scope: A primary purpose of testing is to detect software failures so that defects may be uncovered and corrected. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, and examining the aspects of the code: does it do what it is supposed to do, and does it do what it needs to do.
Functional vs non-functional testing: Functional testing refers to tests that verify a specific action or function of the code. Functional tests tend to answer the question "can the user do this" or "does this particular feature work". Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security. Non-functional testing tends to answer such questions as "how many people can log in at once" or "how easy is it to hack this software".
Defects and failures: One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.
Finding faults early: It is commonly believed that the earlier a defect is found, the cheaper it is to fix.
Compatibility: A frequent cause of software failure is incompatibility with another application, a new operating system, or, increasingly, a new web browser version. In the case of lack of backward compatibility, this can occur (for example) because the programmers have only considered coding their programs for, or testing the software upon, "the latest version of" this-or-that operating system.
Static vs dynamic testing: Reviews, walkthroughs, or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted. Dynamic testing takes place when the program itself is used for the first time; it may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). A minimal example of a dynamic, functional test is sketched below.
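The following Python sketch shows what such a dynamic, functional test might look like. It is a minimal illustration assuming a trivial function under test; the names used here (apply_discount, TestApplyDiscount) are hypothetical and not drawn from the original text.

```python
# Minimal sketch of a dynamic, functional test using Python's standard unittest module.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Functional check: "does this particular feature work?"
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percentage_rejected(self):
        # Exceptional condition: the code is actually executed, so this is dynamic testing.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```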

Software verification and validation: Software testing is used in association with verification and validation. Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

5. Give the difference between restructuring and forward engineering.
Ans: Code restructuring: The most common type of reengineering is code restructuring. Some legacy systems have a relatively solid program architecture, but individual modules were coded in a way that makes them difficult to understand, test, and maintain. In such cases, the code within the suspect modules can be restructured. To accomplish this activity, the source code is analyzed using a restructuring tool, violations of structured programming constructs are noted, and the code is then restructured. The resultant restructured code is reviewed and tested to ensure that no anomalies have been introduced, and the internal code documentation is updated.
"Restructuring is the transformation from one representation form to another at the same relative abstraction level, while preserving the subject system's external behavior (functionality and semantics)." Therefore, restructuring is something that you do while remaining at the same phase of the software engineering life cycle. An example is the normalization of the database at the design phase, i.e. taking the database design from 1st Normal Form to 2nd Normal Form to 3rd Normal Form and then from there to BCNF (if necessary). This is what you are doing while remaining at the same phase of software engineering, i.e. the design phase: you have transformed a crude design into a more structured design. You are just restructuring the information that you have during the design phase into another form that is more structured, clearer and less ambiguous, so you are achieving some benefit with the help of restructuring. Similarly, during the implementation phase, if you are replacing your "bubble sort" algorithm with a "quick sort" algorithm in order to enhance the performance of the system, then this is also called restructuring, but at a different level of abstraction.
Forward engineering: In an ideal world, applications would be rebuilt using an automated 'reengineering engine'. The old program would be fed into the engine, analyzed, restructured, and then regenerated in a form that exhibited the best aspects of software quality. In the short term, it is unlikely that such an 'engine' will appear, but CASE vendors have introduced tools that provide a limited subset of these capabilities and address specific application domains. More importantly, these reengineering tools are becoming increasingly sophisticated. Forward engineering, also called renovation or reclamation, not only recovers design information from existing software, but uses this information to alter or reconstitute the existing system in an effort to improve its overall quality. In most cases, reengineered software reimplements the function of the existing system and also adds new functions and/or improves overall performance.
Forward engineering is what you call the normal execution of the software life cycle in the normal, i.e. forward, direction. It is given this name to contrast it with "reverse engineering" (refer to my article titled "Reverse Engineering" dated March 7, 1998), which is executed by reversing the normal software engineering procedure that runs from requirements to design to implementation to testing to delivery to maintenance. So in this regard, these two terms are quite different in meaning and interpretation. A small code-level sketch of restructuring is given below.
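The bubble-sort-to-quick-sort replacement mentioned above can be shown in a few lines. The Python sketch below is only illustrative: it assumes the module exposes a single hypothetical sort_scores function whose external behavior (a sorted list) must be preserved, and instead of a hand-written quick sort the restructured version simply delegates to Python's built-in sort, which makes the same point: behavior is preserved while the internal representation improves.

```python
# Restructuring at the implementation level: the external behavior (a sorted list)
# is preserved while the internal algorithm is replaced with a faster one.

# Before restructuring: a hand-written bubble sort, O(n^2).
def sort_scores_old(scores):
    result = list(scores)
    for i in range(len(result)):
        for j in range(0, len(result) - i - 1):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# After restructuring: delegate to Python's built-in sort (an O(n log n) Timsort).
def sort_scores(scores):
    return sorted(scores)

if __name__ == "__main__":
    data = [42, 7, 19, 3, 88]
    # The two versions are externally indistinguishable, which is the defining
    # property of restructuring quoted above.
    assert sort_scores_old(data) == sort_scores(data) == [3, 7, 19, 42, 88]
    print(sort_scores(data))
```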

6. What are the various types of prototyping models?
Ans: Rapid prototyping is an intensive process of collecting information on requirements and on the adequacy and functionality of fairly new product designs. Model prototyping is an important data resource during the different stages of product development. Model prototyping makes use of various kinds of rapid prototyping techniques to provide the right model types for different testing procedures. Such techniques may involve requirements animation, incremental prototyping, and evolutionary prototyping.
1. Requirements animation: This is the method used mainly to demonstrate functionality in test cases that can be easily assessed by users. Software tools are used to produce representational prototype models that are built using animation packages as well as screen painters.
2. Incremental prototyping: This enables the development of large prototype system models installed in phases in order to avoid delays between product specification and its final delivery into the consumer market. Once the customer and supplier have agreed on certain core features, a skeleton system is installed as soon as possible. Important requirements can be checked out as the model is being used, enabling changes to core features while in operation; extra and optional features can be added later.
3. Evolutionary prototyping: This is considered the most integral form of prototyping. It acts as a compromise between conventional production methods and model prototyping. Employing this technique, a model prototype is initially constructed and then evaluated as it evolves continually and becomes a highly improved end product. Many designers believe that more acceptable systems would result if evolutionary prototyping were interconnected with periods of requirements animation or rapid prototyping. Here the tools are the actual facilities and resources on which the final system will be implemented.
The use of different model prototyping techniques can help do away with the uncertainties about how well a final design may fit the end user's needs. The different model prototyping techniques help product designers make the necessary decisions by obtaining information from initial test users about the effective functionality of a certain product prototype. Model prototyping can also be an effective aid in determining how effective and ideal the features and the overall design can be for proposed users. Model prototyping and the subsequent testing procedures can provide useful representational information that can reveal possible design flaws, effective features, and useless functions that can be corrected, enhanced or removed in the next stages of product development.

7. Write a note on the Capability Maturity Model.
Ans: The CMM (also known as Humphrey's Capability Maturity Model) was originally described in the book Managing the Software Process (Addison-Wesley Professional, Massachusetts, 1989) by Watts Humphrey. The CMM was conceived by Watts Humphrey, who based it on the earlier work of Phil Crosby. Active development of the model by the SEI (the US Department of Defense-sponsored Software Engineering Institute) began in 1986. The SEI was based at Carnegie Mellon University in Pittsburgh. The CMM was originally intended as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project.
Though it comes from the area of software development, it can be (and has been, and still is being) applied as a generally applicable model to assist in understanding the process capability maturity of organisations in diverse areas: for example, software engineering, systems engineering, project management, risk management, system acquisition, information technology (IT), and personnel management. It has been used extensively for avionics software and government projects around the world. Though still widely used as a general tool, for software development purposes the CMM has been superseded by CMMI (Capability Maturity Model Integration). The old CMM was renamed the Software Engineering CMM (SE-CMM), and accreditations based on SE-CMM expired on 31 December 2007.

Other variants of the CMM include the Software Security Engineering CMM (SSE-CMM) and the People CMM. Other maturity models, such as ISM3, have also emerged.
A maturity model is a structured collection of elements that describe certain aspects of maturity in an organization. A maturity model may provide, for example: a place to start; the benefit of a community's prior experiences; a common language and a shared vision; a framework for prioritizing actions; and a way to define what improvement means for your organization. A maturity model can be used as a benchmark for assessing different organizations for equivalent comparison. The model describes the maturity of the company based upon the projects the company is handling and the related clients. The CMM involves the following aspects:
Maturity Levels: A layered framework providing a progression to the discipline needed to engage in continuous improvement. (It is important to state here that an organization develops the ability to assess the impact of a new practice, technology, or tool on its activity. Hence it is not a matter of adopting these; rather, it is a matter of determining how innovative efforts influence existing practices. This really empowers projects, teams, and organizations by giving them the foundation to support reasoned choice.)
Key Process Areas: A Key Process Area (KPA) identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important.
Goals: The goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
Common Features: Common features include practices that implement and institutionalize a key process area. The five types of common features are: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the key process areas.
The Capability Maturity Model was initially funded by military research. The United States Air Force funded a study at the Carnegie Mellon Software Engineering Institute to create an abstract model for the military to use as an objective evaluation of software subcontractors. The result was the Capability Maturity Model, published as Managing the Software Process in 1989. The CMM is no longer supported by the SEI and has been superseded by the more comprehensive Capability Maturity Model Integration (CMMI), of which version 1.2 has now been released.

8. Explain: "Time is closely correlated with money and cost, tools, and the characteristics of development methodologies." What do you make of this statement?
Ans: The software engineering process depends on time as a critical asset as well as a constraint or restriction on the process. Time can be a hurdle for organizational goals, effective problem solving, and quality assurance.
Managed effectively, time can support the competitive advantage of an organization, but time is also a limitation, restricting or stressing quality and imposing an obstacle to efficient problem solving.

Time is the major concern of various stakeholders in the software engineering process, from users, customers, and business managers to software developers and project managers. Time is closely correlated with money and cost, tools, and the characteristics of development methodologies, like Rapid Application Development, that aim primarily at reducing time and accelerating the software engineering process. These methodologies exhibit characteristics such as reusability, which emphasizes avoiding reinventing the wheel, and object-oriented analysis, design, and implementation. Examples include assembly from reusable components and component-based development; business objects; distributed objects; object-oriented software engineering and object-oriented business process reengineering; use of the Unified Modeling Language (UML); and commercial off-the-shelf software. Other characteristics are automation (via CASE tools), prototyping, outsourcing, extreme programming, and parallel processing.
A redefined software engineering process must integrate the critical activities; the major interdisciplinary resources (people, money, data, tools, and methodologies); organizational goals; and time in an ongoing round-trip approach to business-driven problem solving. This redefinition must address limitations identified in the literature related to business metrics, the process environment and external drivers, and process continuation, as fundamentals of process definition. A conceptual framework should emphasize the following characteristics for interdisciplinary software engineering. It must address exploring resources, external drivers, and diversity in the process environment to optimize the development process. It must overcome knowledge barriers in order to establish interdisciplinary skills in software-driven problem-solving processes. It must recognize that organizational goals determine the desired business values, which in turn guide, test, and qualify the software engineering process. The process activities are interrelated and not strictly sequential. Irrelevant activities that are not related to or do not add value to other activities should be excluded. The optimized software engineering process must be iterative in nature, with the degree of iteration ranging from internal feedback control to continual process improvement. The software engineering process is driven by time, which is a critical factor for goals, competition, stakeholder requirements, change, project management, money, the evolution of tools, and problem-solving strategies and methodologies.
