
The correct definition of Software Quality Assurance goes something like: "The function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented." This definition is taken from Software Definitions at NASA.

The problems with this, and similar, definitions for commercial SQA practitioners are:
- It tells us little about what SQA is other than repeating the definition. That is, it uses the defined terms "assures" and "software".
- It doesn't provide a scope for someone responsible for Software Quality Assurance.
- It doesn't address the role of, or relationship with, Software Testing.
- In its pure form, under which a separate audit-style group needs to be established, it is difficult to apply to a small development environment.

What is QA?

The so-called Quality Movement, first established in Japan in 1946 by the U.S. Occupation Forces, was based on W. Edwards Deming's research and papers on Statistical Quality Control (SQC). The various definitions and approaches to Quality Assurance come from Deming and other so-called Quality Gurus. The important point, for this discussion, is that these methods were applied to industrial production, that is, the production of something tangible. Fujitsu's slogan "Quality built-in, with cost and performance as prime consideration" typifies this approach.

Under this approach the production process was broken down and specified. Each process would have an output (or component) with a required specification (measurement, weight etc.), and these would be verified as the main product was being built. A separate group, Quality Control (QC), would measure the components at various manufacturing stages. QC would make sure the components were within acceptable tolerances, i.e. that they did not vary from agreed specifications. It is worth noting here that in manufacturing QC (or test and inspection) is easy to distinguish from Quality Assurance (QA), which is process compliance. In software development, however, Quality Control itself presents significant challenges due to the intangible nature of software. In software development the distinction between QC (SQC) and QA (SQA) is not as clear, and these terms are often mixed. A further definition of these terms, by way of example, can be found here.

These QA methods, in manufacturing, proved themselves to work (in sales, customer satisfaction and the right cost of production, i.e. PROFIT) and were adopted all over the world. QA groups (for manufacturing) became the norm. These QA groups would not take any part in the manufacturing process itself but would measure and audit the process to make sure the established guidelines and standards were being followed. The QA group would then give input (metrics or measures) into a process of continuous improvement.

In this way we could have a QC department that measures certain components at various manufacturing stages, and a QA department that makes sure that every process (including QC) follows the agreed and documented procedures. In short, we had someone (QA) assuring that processes conformed to documented standards and procedures, hence these terms showing up in SQA and QA definitions, and this group was distinct from QC (i.e. testing). So far we are in great shape in the QA and QC world of manufacturing; then someone came up with the bright idea: "Why don't we apply these proven Quality Management processes to Software?"

The move to SQA and SQC

Hence SQA (and SQC) were born, and with them came problems of definition and implementation. The definition still refers back to the traditional manufacturing QA world. There are, however, some notable differences between software and a manufactured product. These differences all stem from the fact that the manufactured product is physical and can be seen, whereas the software product is not visible; therefore its function, benefits and costs are not as easily measured. The following points highlight some of the issues in taking the manufacturing QA\QC model and applying it to software development:
- The manufactured product is a physical realization of the customer requirements.
- The function of the product can be verified against this physical realization.
- The costs of manufacture, including rework, repairs, recalls etc., are readily categorized and visible.
- The benefit of the product to its user\customer is readily categorized and visible.

In order to overcome these types of issues, and reap the benefit of QA\QC applied to software, other terms, models and paradigms needed to be (and were) developed. In order to identify the software costs and benefits, remembering Fujitsu's phrase "with cost and performance as prime consideration", a number of software characteristics were defined. These characteristics are sometimes referred to as Quality Attributes, Software Metrics, or Functional and Non-Functional Requirements. The intention here is to break down the software product into attributes that can be measured (in terms of cost and benefit). Examples of these attributes are Supportability, Adaptability, Usability and Functionality. There are many definitions of these Software Quality Attributes, but a common one is the FURPS+ model, which was developed by Robert Grady at Hewlett-Packard. Under FURPS+, the following characteristics are identified:

Functionality
The F in the FURPS+ acronym represents all the system-wide functional requirements that we would expect to see described. These usually represent the main product features that are familiar within the business domain of the solution being developed. For example, order processing is very natural for someone to describe if you are developing an order processing system. The functional requirements can also be very technically oriented. Architecturally significant system-wide functional requirements may include auditing, licensing, localization, mail, online help, printing, reporting, security, system management, or workflow. Each of these may represent functionality of the system being developed, and each is a system-wide functional requirement.

Usability
Usability includes looking at, capturing, and stating requirements based around user interface issues: things such as accessibility, interface aesthetics, and consistency within the user interface.

Reliability
Reliability includes aspects such as availability, accuracy, and recoverability, for example recoverability of computations, or recoverability of the system from shutdown failure.

Performance
Performance involves things such as throughput of information through the system, system response time (which also relates to usability), recovery time, and startup time.

Supportability
Finally, we tend to include a section called Supportability, where we specify a number of other requirements such as testability, adaptability, maintainability, compatibility, configurability, installability, scalability, localizability, and so on.

+
The "+" of the FURPS+ acronym allows us to specify constraints, including design, implementation, interface, and physical constraints.

The specification of the FURPS+ characteristics needs to go into the System Requirements. The testing of these characteristics should be done by the SQC (testing) team. Some of the FURPS+ characteristics, i.e. Functionality and Usability, can be tested by executing the actual software. Some, however, like Supportability and Adaptability, can only be verified by code inspection or by dry-running "What if?" scenarios. It is important to note that neither the SQA nor the SQC group should have the responsibility of putting the desired FURPS+ characteristics into the product. They (SQC) should only test the presence or absence of the FURPS+ characteristics. With an established practice of defining and measuring the FURPS+ (or similar) characteristics it is possible to implement Software QA\QC along similar lines to manufacturing QA\QC. This should, in theory, overcome the difficulties caused by the intangible nature of software, allowing each characteristic of the software to be measured (by SQC) and subjecting all the processes of software production to SQA and continuous improvement.
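By way of a rough sketch only (the class names and the testable_by_execution flag are our own illustration, not part of the FURPS+ model itself), requirements can be tagged with their FURPS+ category and with the verification technique they need, so that SQC can plan execution-based tests for some and inspections for others:

```python
from dataclasses import dataclass
from enum import Enum

class Furps(Enum):
    FUNCTIONALITY = "F"
    USABILITY = "U"
    RELIABILITY = "R"
    PERFORMANCE = "P"
    SUPPORTABILITY = "S"

@dataclass
class Requirement:
    req_id: str
    text: str
    category: Furps
    testable_by_execution: bool  # False => needs inspection or a "What if?" walkthrough

# Invented example requirements
reqs = [
    Requirement("R-001", "Order totals match line items", Furps.FUNCTIONALITY, True),
    Requirement("R-017", "Median defect fix time under 2 days", Furps.SUPPORTABILITY, False),
]

# Split the verification work: SQC executes tests for some, inspects code for others.
to_execute = [r.req_id for r in reqs if r.testable_by_execution]
to_inspect = [r.req_id for r in reqs if not r.testable_by_execution]
print("Test by execution:", to_execute)
print("Verify by inspection:", to_inspect)
```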

By way of example, consider the Supportability FURPS+ characteristic. This can be measured by the length of time it takes to fix a defect. In order to improve this measure, coding standards could be implemented. In this scenario the SQC department would inspect the code to make sure that the coding standard was being followed, and the SQA department would make sure the SQC and development groups followed the process. The SQA department would also collect and analyze the time needed to repair defects (the Supportability measure) in order to give input on the usefulness of the standards as well as to the continuous process improvement initiative.

Conclusion
The definitions for SQA and SQC originate from their manufacturing QA and QC counterparts. SQA and SQC can be implemented in software production as their QA and QC counterparts are in manufacturing. In order to implement SQA and SQC (as their manufacturing counterparts) a method of characterizing the intangible software product is needed. The FURPS+ model defines a system of characterization under which software attributes can be used by SQC to test and measure. Anything that is tested and measured (by SQC) needs to be defined as a requirement; this includes the FURPS+ characteristics. SQA in Theory provides further pure SQA concepts, including the SQA role within the CMMI framework. SQA in Practice provides a jump-start practical approach to establishing Software Quality Assurance (including SQC) within a small development team.
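Before moving on, here is a minimal sketch of the Supportability measure from the example above; the defect log, its field layout and the timestamps are all invented for illustration:

```python
from datetime import datetime

# Hypothetical defect log: (defect id, reported, fixed)
defects = [
    ("D-101", datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 2, 17, 0)),
    ("D-102", datetime(2024, 3, 3, 10, 0), datetime(2024, 3, 3, 15, 30)),
]

# Mean time to repair, in hours: the Supportability measure SQA would track
# before and after a coding standard is introduced.
hours = [(fixed - reported).total_seconds() / 3600 for _, reported, fixed in defects]
mttr = sum(hours) / len(hours)
print(f"MTTR: {mttr:.1f} hours")
```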

IN THEORY

SQA, SQC and CMMI Definitions
Having positioned Software Quality Assurance (SQA) and Software Quality Control (SQC) (see SQA Definition) within their historical context, this paper outlines an example implementation of SQA and SQC, within a CMMI context, that matches the formal definitions of these terms. The formal definitions used within this paper are:

Software Quality Assurance: The function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented.

Software Quality Control: The function of software quality that checks that the project follows its standards, processes, and procedures, and that the project produces the required internal and external (deliverable) products.

A further definition of SQA and SQC, by way of example, can be found here.

CMMI: A process improvement approach to software development. CMMI identifies a core set of Software Engineering process areas:
- Requirements Development
- Requirements Management
- Technical Solution
- Product Integration
- Verification
- Validation

CMMI also covers other process areas, such as Process Management, Project Management and Support, but only the core Software Engineering development processes are used here by way of example. It is also interesting to note that SQA and SQC are processes defined within CMMI; they fall under the Support process area, where SQA\SQC is defined as Process and Product Quality Assurance. CMMI is an approach to process improvement in which SQA\SQC play a major, but not exclusive, role. Everyone in a software development organization takes part in both the CMMI processes and any improvement initiatives for those processes. Each of the main Engineering process areas is now described, together with the role that SQA\SQC plays within those areas.

SQA and SQC roles in CMMI Requirements Development
The CMMI Requirements Development process area describes three types of requirements: customer requirements, product requirements, and product-component requirements.

SQA role
To observe (audit) that documented standards, processes, and procedures are followed. SQA would also establish software metrics in order to measure the effectiveness of this process. A common metric for measuring the Requirements process would be the number of errors (found during system testing) that can be traced to inaccurate or ambiguous requirements (note: SQC would perform the actual system testing, but SQA would collect the metrics for monitoring and continuous improvement).

SQC role
SQC takes an active role in Verification (this is a process itself, described later). Verification of the requirements would involve inspection (reading), looking for clarity and completeness. SQC would also verify that any documented requirement standards are followed.
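A minimal sketch of how such a metric might be tallied, assuming defect records carry a root cause assigned at triage (the record layout and cause names are invented):

```python
from collections import Counter

# Hypothetical defect records: (defect id, root cause assigned at triage)
defects = [
    ("D-201", "ambiguous requirement"),
    ("D-202", "coding error"),
    ("D-203", "ambiguous requirement"),
    ("D-204", "design gap"),
]

# Tally system-test defects by root cause; the share traced back to the
# requirements is the metric SQA would report for this process area.
by_cause = Counter(cause for _, cause in defects)
total = sum(by_cause.values())
for cause, n in by_cause.most_common():
    print(f"{cause}: {n} ({100 * n / total:.0f}% of defects)")
```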

Note there is a subtle difference between SQA and SQC with regard to standards: SQC's role is in verifying the output of this process (that is, the Requirements document itself), while SQA's role is to make sure the process is followed correctly. SQA has more of an audit role here, and may sample actual Requirements, whereas SQC is involved in the Verification of all Requirements. The type of requirement need not be just the functional aspect (or customer\user-facing requirements); they could also include product and\or component requirements. The product requirements, e.g. Supportability, Adaptability and Reliability, are characteristics discussed here (as part of the FURPS+ model). The respective roles of SQC and SQA are the same for all types of requirement (customer and product), with SQC focusing on the internal deliverable and SQA focusing on the process by which the internal deliverable is produced, as per the formal definitions.

SQA and SQC roles in CMMI Requirements Management

The purpose of (CMMI) Requirements Management is to manage the requirements of the project's products and product components and to identify inconsistencies between those requirements and the project's plans and work products. This process involves version control of the Requirements and the relationship between the Requirements and other work products. One tool used in Requirements Management is a Traceability Matrix. The Traceability Matrix maps where in the software a given requirement is implemented; it is a kind of cross-reference table. The Traceability Matrix also maps which test cases verify a given requirement. There are other processes within Requirements Management, and CMMI should be referenced for further information.

SQA role
To observe (audit) that documented standards, processes, and procedures are followed. SQA would also establish metrics in order to measure the effectiveness of this process. A common metric for measuring Requirements Management would be how many times the wrong version was referenced. Another measure (for the Traceability Matrix) would be lack of test coverage, that is, defects detected in the shipped product in functionality that was not tested because it was not referenced in the Traceability Matrix.

SQC role
As with the actual Requirements Development, SQC would be involved in inspecting the actual deliverables (e.g. the Traceability Matrix) from this process. SQC may also get involved at this stage because they will be the people doing the actual testing (Verification and Validation), so for their test coverage a complete Traceability Matrix is essential.
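A minimal sketch of a Traceability Matrix and the test coverage gap measure just described, assuming a simple mapping from requirement IDs to the test cases that verify them (all identifiers are invented):

```python
# Requirement -> test cases that verify it (an empty list is a coverage gap)
traceability = {
    "R-001": ["TC-01", "TC-02"],
    "R-002": ["TC-03"],
    "R-003": [],  # no test case references this requirement
}

# Requirements with no test coverage would be flagged during the SQC review.
uncovered = [req for req, tests in traceability.items() if not tests]
print("Requirements with no test coverage:", uncovered)
```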

SQA and SQC roles in CMMI Technical Solution
The purpose of (CMMI) Technical Solution is to design, develop, and implement solutions to requirements. Solutions, designs, and implementations encompass products, product components, and product-related life-cycle processes, either singly or in combination as appropriate. This covers the main Design and Coding processes; CMMI puts design and build together. Other important processes, e.g. Configuration Management, are listed in other process areas within CMMI.

SQA role
To observe (audit) that documented standards, processes, and procedures are followed. SQA would also establish metrics in order to measure the effectiveness of this process. Clearly, testing the end product against the requirements (which is itself an SQC activity) will reveal any defects introduced during this (the Technical Solution) process. The number of defects is a common measure for the Design\Build phase. This metric is usually further qualified by some form of scope, for example defects per 100 lines of code, or per function (sketched below). It is important to note that a defect may not always be functional (or customer-facing); it could be that a required adaptability scenario is absent from the design and\or coded solution. The FURPS+ model references typical software metrics that are used for specifying the total (both functional and non-functional) software requirements.

SQC role
The major SQC role during this process will be testing (see Verification and Validation). The finished product does not have to be present before testing can begin; Unit and Component testing can both take place before the product is complete. Design and Code reviews are also something that SQC could get involved with. The purpose of the review has to be clearly stated, i.e. to verify that standards are followed, or to look for potential Supportability (part of the Product Requirements) issues. The Supportability metric is the time it takes for a defect in a system to be fixed; this metric is influenced by the complexity of the code, which impacts the developer's ability to find the defect.
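A minimal sketch of the defect density measure mentioned under the SQA role above; the module names, defect counts and line counts are invented:

```python
# Hypothetical per-module data: defects found in test vs. size of the module.
modules = {
    "order_entry": {"defects": 12, "loc": 4800},
    "pricing":     {"defects": 3,  "loc": 900},
}

# Defect density normalizes the raw defect count by scope,
# so modules of different sizes can be compared.
for name, m in modules.items():
    density = m["defects"] / m["loc"] * 100  # defects per 100 lines of code
    print(f"{name}: {density:.2f} defects per 100 LOC")
```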

SQA and SQC roles in CMMI Product Integration
The purpose of Product Integration is to assemble the product from the product components, ensure that the product, as integrated, functions properly, and deliver the product. Note that this is the final integration and the move to production or product delivery. For large software packages (consider SAP, Oracle Financials etc.) the assembly process is huge and the potential for errors is high. This process does not involve any coding, but pure integration and\or assembly.

SQA role
To observe (audit) that documented standards, processes, and procedures are followed. SQA would also establish metrics in order to measure the effectiveness of this process. One measurement would be the defects found that resulted from the interface specifications (part of the Product Requirements); potential process improvements could be to find other, perhaps less ambiguous, ways of specifying interfaces. For example, a development team may move to XML or Web Services for all interfaces; SQA could then measure the defects and report back to management and development on the effectiveness of this change.

SQC role
Again, testing would be a large part of the role played by SQC. Systems Integration Testing (SIT) would be carried out by SQC. Installability testing would also be done during this process.

SQA and SQC roles in CMMI Verification
The purpose of Verification is to ensure that selected work products meet their specified requirements. These activities are carried out only by SQC; the role of SQA would be to make sure, by audit, that SQC had documented procedures, plans etc. SQA would also measure the effectiveness of the Verification processes by tracking defects that were missed by SQC during Verification. Note the term Verification, as opposed to Validation (see below). In essence, Verification answers the question "Are we building the product correctly?" while Validation answers the question "Are we building the correct product?". Validation demonstrates that the product satisfies its intended purpose when placed in the correct environment, while Verification refers to building to specification. The FURPS+ model identifies both Customer and Product requirements; Verification applies to both these types of requirements and can be applied to the intermediary work products. Design or Code reviews are examples of Verification. The terms Verification and Validation are often mixed; CMMI makes this comment about the distinction: "Although verification and validation at first seem quite similar in CMMI models, on closer inspection you can see that each addresses different issues. Verification confirms that work products properly reflect the requirements specified for them. In other words, verification ensures that you built it right." While SQC carries out all the Verification activities, the Verification process itself is still subject to SQA and process improvement.

SQA and SQC roles in CMMI Validation
Validation confirms that the product, as provided, will fulfill its intended use. In other words, validation ensures that you built the right thing. As with Verification, Validation is mainly the domain of SQC. The term Acceptance Test could also apply to Validation; in most cases the Acceptance Test is carried out by a different group of people from the SQC team that performed Verification as the product was being built. In the case where an application is going to be used internally, the end user or a business representative would perform the Acceptance Testing. Wherever this is done, it is in essence an SQC activity. As with Verification, SQA makes sure that these processes conform to standards and documented procedures. The Validation process itself is subject to continuous improvement and measurement.

Conclusion
Although only a high-level snapshot has been given of how some SDLC processes are subjected to SQA and SQC, a clear pattern can be seen. In all cases SQA and SQC do not get involved in building any of the products. SQC is only involved in Verification and Validation. The role of SQA is even more removed from development; it is mainly that of an auditor. In addition, SQA will collect measurements of the effectiveness (and cost) of the processes in order to implement continuous process improvement. This separation of SQC from development, and of SQA from SQC, ensures objectivity and impartiality. In an ideal environment these (Development, SQC and SQA) would be three separate organizational units reporting to different managers. Some of the benefits of SQA\SQC can be achieved in a less formal environment. This hybrid approach is typically used by small development groups. An example of this hybrid approach is documented here at SQA in Practice.

IN PRACTICE

SQA, SQC in the Commercial Workplace
Having discussed a basic definition of Software Quality Assurance (SQA) and Software Quality Control (SQC), as well as a formal implementation of these defined terms, this paper documents a typical (or hybrid) framework which achieves the main goals of SQA\SQC without the organizational structure of the pure implementation.

First a caveat: the danger in environments where SQA\SQC is not formally implemented is that the Quality Group becomes a kind of Engineering Services group and does whatever the main development team does not currently do (or does not want to do). In many environments the SQA\SQC team reports to an application development manager, and the Engineering Services approach is very attractive to this manager. For example, let's say performance is an afterthought that is not specified as a non-functional requirement, not designed into the system, but noticed when things go wrong. Under the Engineering Services scenario it is tempting for the manager to send the QA person on a LoadRunner (or other load testing tool) course, install the tool, and declare the QA person "our performance expert". This type of approach will not work. The QA team member should be able to perform load testing (as part of SQC), but the performance requirements (that are tested for) should be part of the requirements, and the Designer\Programmer should own the performance objective. So, accepting that SQA\SQC have to be separate from development activities and that certain (mainly Requirements and Design) activities always need to take place, let's look at a hybrid, or generalized, approach to SQA. SQA and SQC will be referred to together as Software QA, and the activities of this person (or persons) represent a good ROI for achieving the goals of SQA\SQC with limited resources. Note: this is only a sample of SDLC activities, but the role of Software QA against these activities should illustrate the kind of roles that a general Software QA person should undertake.

SQA and SQC roles against a generic SDLC

Requirements
This is the starting point; everything flows from the requirements. Templates should be established, which can be referenced during reviews, with sections for all Functional and Non-Functional requirements (see FURPS+). For example, the performance requirements should be stated in terms of user population and transaction rates; in this way performance will not be an afterthought. A Traceability Matrix should also be started to assist Requirements Management. The Traceability Matrix not only helps with test coverage but also encourages the analyst to reference individual requirements, so that they can be cross-referenced. The role of Software QA here is to verify that the Requirements conform to the basic standard (the templates) and that the requirements are free of any ambiguities. In terms of completeness, Software QA would also review the risks of not completing the Non-Functional section. Software QA would also review the Traceability Matrix in order to make sure all requirements were referenced.

Test Cases
At this stage, some test cases that refer to the requirements could be written by Software QA and cross-referenced in the Traceability Matrix. In many cases this exercise will further verify the Requirements, as test cases (with data) are being built.

Interface Specification
If the system is component-based, then an Interface Specification should be produced. Again, templates should exist for this. This document should be subjected to Verification by Software QA.
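Returning to the Requirements template mentioned above, here is a minimal sketch of a completeness check that Software QA might run during a review; the mandated section names are an assumption based on FURPS+:

```python
# Sections a requirements document template is assumed to mandate (FURPS+).
REQUIRED_SECTIONS = {"Functionality", "Usability", "Reliability",
                     "Performance", "Supportability", "Constraints"}

def review_document(sections_present: set[str]) -> set[str]:
    """Return the template sections missing from a requirements document."""
    return REQUIRED_SECTIONS - sections_present

# Usage: a document missing its Non-Functional sections is flagged for review.
missing = review_document({"Functionality", "Usability", "Performance"})
print("Missing sections:", sorted(missing))
```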

Other Specifications
Lower-level specifications (for commercial applications) should only be subjected to Software QA if they are critical modules (e.g. main loan rate calculations). In general, paying attention to the Interfaces and the Requirements (Functional and Non-Functional) will provide a good ROI for limited Software QA resources.

Unit Testing
This should be done by the developers themselves.

Component Testing
If a harness (Web Services or HTTP) is available, then Software QA should get involved with this grey-box testing. The test cases can be written ahead of time by the Software QA team. For this reason the Interface Specification is an important deliverable.

Integration Testing
The actual staging and verification of the basic handshake should be done by the technical development team. If the systems don't talk, the Software QA team should not be involved. Once a component is integrated, Software QA can execute the Integration Testing.

SQA of Interface Specifications
Both Component and Integration Testing will provide good measures of how effective the Interface Specification was. These measures will provide good feedback for continuous improvement of the Interface Specification process and documentation.

System Test against the Requirements
Once a complete system has been built, Software QA should execute the main System Test cases that validate the system against the requirements. This phase of the SDLC is what most people associate with the term software testing.

Acceptance Testing
This should be owned and executed by representatives of the sign-off team. For small changes this could be Software QA.

Defect Tracking and Resolution
Software QA should track all defects and retest until testing is complete. During this process Software QA should determine the origin of each defect. If the defect was introduced as a result of a misunderstanding of the Requirements, then this information should feed into the Requirements Development process improvement initiative.

Regression Testing
Software QA should have a documented set of test cases that can verify whether any component or function not already tested as part of a release has been negatively impacted by the new change.

Performance Testing
Software QA should run performance tests to verify that performance parameters (response times etc.) are within the requirements (a minimal sketch follows this list). If performance issues are found, then the development team needs to be involved in diagnosis and resolution.

Move to Production
This should be verified by Software QA, but the actual process should be carried out by someone else.

Support
Software QA should analyze the origin of defects, examine the SDLC to determine where each defect was introduced (Requirements\Design or Coding), and review these with the process owners for possible improvements.
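As a minimal sketch of the performance check described under Performance Testing above (the URL and the two-second threshold are invented, and a real SQC team would normally use a load testing tool such as LoadRunner rather than a hand-rolled script):

```python
import time
import urllib.request

REQUIRED_RESPONSE_SECONDS = 2.0       # assumed non-functional requirement
URL = "http://localhost:8080/orders"  # hypothetical system under test

# Time a single request and compare it against the stated requirement.
start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as resp:
    resp.read()
elapsed = time.perf_counter() - start

status = "PASS" if elapsed <= REQUIRED_RESPONSE_SECONDS else "FAIL"
print(f"{status}: response took {elapsed:.2f}s (limit {REQUIRED_RESPONSE_SECONDS}s)")
```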

Conclusion
Software QA, as defined, can mix the formal SQA\SQC activities, but the overriding principle that they remain independent from the actual software production activities should still hold. As soon as Software QA owns the build process, does Usability Design, is the performance engineer, or has some other non-QA role, then you have moved to an Engineering Services model and real SQA\SQC is compromised.

SOME ARTICLES

Software Quality Attributes

Article Purpose
The purpose of this article is to define the term software quality attributes and to place that term in the context of SQA and software process improvement (SPI). To consider the function of software quality attributes (also known as software quality factors), let's revisit the overall goal of any quality management, namely Fujitsu's slogan "Quality built-in, with cost and performance as prime consideration". Given the intangible and abstract nature of software, researchers and practitioners have been looking for ways to characterize software in order to make its benefits and costs more visible (for measurement). This quest continues today, but there have been two notable models of software quality attributes:
- McCall (1977)
- Boehm (1978)

There are others (see FURPS), but these two illustrate the general purpose and issues of these quality factor models. The two quality models are summarized below, and similarities can be seen. To begin with, the models share some common objectives, namely:
- The benefits and costs of software are represented in their totality, with no overlap between the attributes.
- The presence, or absence, of these attributes can be measured objectively.
- The degree to which each of these attributes is present reflects the overall quality of the software product.
- These attributes facilitate continuous improvement, allowing cause and effect analysis that maps to the attributes, or to measures of the attributes.

Both the measurement (software metrics) of these attributes and the use of the software metrics in software process improvement (SPI) are discussed in other articles.

McCall's Quality Model - 1977

Jim McCall produced this model for the US Air Force, with the intention of bridging the gap between users and developers. He tried to map the user view to the developer's priorities. McCall identified three main perspectives for characterizing the quality attributes of a software product. These perspectives are:
- Product revision (ability to change).
- Product transition (adaptability to new environments).
- Product operations (basic operational characteristics).

Product revision
The product revision perspective identifies quality factors that influence the ability to change the software product. These factors are:
- Maintainability, the ability to find and fix a defect.
- Flexibility, the ability to make changes required as dictated by the business.
- Testability, the ability to validate the software against the requirements.

Product transition
The product transition perspective identifies quality factors that influence the ability to adapt the software to new environments:
- Portability, the ability to transfer the software from one environment to another.
- Reusability, the ease of using existing software components in a different context.
- Interoperability, the extent, or ease, to which software components work together.

Product operations
The product operations perspective identifies quality factors that influence the extent to which the software fulfils its specification:
- Correctness, the extent to which the functionality matches the specification.
- Reliability, the extent to which the system performs without failure.
- Efficiency, system resource usage (including CPU, disk, memory, network).
- Integrity, protection from unauthorized access.
- Usability, ease of use.

In total McCall identified 11 quality factors, broken down across the 3 perspectives listed above. For each quality factor McCall defined one or more quality criteria (a way of measurement); in this way an overall quality assessment can be made of a given software product by evaluating the criteria for each factor. For example, the Maintainability quality factor would have the criteria of simplicity, conciseness and modularity.
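McCall did not prescribe a single scoring formula, so the following is an illustrative sketch only, with invented criterion scores, of how an assessment might roll criterion scores up into a factor score:

```python
# Criterion scores (0-10) for one factor, as a reviewer might assign them.
maintainability_criteria = {"simplicity": 7, "conciseness": 6, "modularity": 8}

def factor_score(criteria: dict[str, int]) -> float:
    """Average the criterion scores to rate one McCall quality factor."""
    return sum(criteria.values()) / len(criteria)

print(f"Maintainability: {factor_score(maintainability_criteria):.1f}/10")
```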

Boehm's Quality Model - 1978

Barry W. Boehm also defined a hierarchical model of software quality characteristics, trying to qualitatively define software quality as a set of attributes and metrics (measurements). At the highest level of his model, Boehm defined three primary uses (or basic software requirements). These three primary uses are:
- As-is utility, the extent to which the as-is software can be used (i.e. ease of use, reliability and efficiency).
- Maintainability, the ease of identifying what needs to be changed, as well as ease of modification and retesting.
- Portability, the ease of changing the software to accommodate a new environment.

These three primary uses had quality factors associated with them, representing the next level of Boehm's hierarchical model. Boehm identified seven quality factors, namely:
- Portability, the extent to which the software will work under different computer configurations (i.e. operating systems, databases etc.).
- Reliability, the extent to which the software performs as required, i.e. the absence of defects.
- Efficiency, optimum use of system resources during correct execution.
- Usability, ease of use.
- Testability, ease of validation that the software meets the requirements.
- Understandability, the extent to which the software is easily comprehended with regard to purpose and structure.
- Flexibility, the ease of changing the software to meet revised requirements.

These quality factors are further broken down into primitive constructs that can be measured; for example, Testability is broken down into accessibility, communicativeness, structure and self-descriptiveness. As with McCall's Quality Model, the intention is to be able to measure the lowest level of the model.

Summary of the two models

Although only a summary of the two example software quality factor models has been given, some comparisons and observations can be made that generalize the overall quest to characterize software. Both the McCall and Boehm models follow a similar structure, with a similar purpose. They both attempt to break down the software artifact into constructs that can be measured. Some quality factors are repeated, for example usability, portability, efficiency and reliability. The presence of more or fewer factors is not, however, indicative of a better or worse model. The value of these, and other, models is purely a pragmatic one, and lies not in the semantic or structural differences. The extent to which a model allows for an accurate measurement (cost and benefit) of the software will determine its value. It is unlikely that one model will emerge as the best, and there are always likely to be other models proposed; this reflects the intangible nature of software and the essential difficulties that this brings. ISO 9126 represents the ISO's attempt to define a set of useful quality characteristics. A more in-depth discussion of the McCall (1977) and Boehm (1978) models can be found here.
