
What is the traditional system life cycle?

Describe each of its steps and its advantages and disadvantages for system building.

In its early days, data processing was an art rather than a science, with no two systems being developed in the same way. As a result, it was difficult to predict the length of a project, its cost, and the degree to which it would solve the problem that had initiated it. Two separate departments frequently performed systems analysis and programming, with only minimal communication between them. The systems analysts gathered customer requirements and handed them to the programming staff, who then worked to translate those requirements into computer systems. Because there were few common processes, and because programmers were frequently unwilling to share the secrets of their success, individual programmers were viewed as creative individuals who were essential to the continued running of the systems they had created.

As business applications expanded beyond the Finance department, it became obvious that there must be a better way to manage data processing and, in particular, a better way to develop systems. The result was the creation of formal methodologies centred on what was commonly referred to as a system development life cycle (SDLC). The objective of these methodologies was to document and institutionalize the best practices of system development.

An SDLC divides the software development process into a number of clearly defined phases, each of which is further divided into steps. Progress through the steps is measured by the completion of forms and checklists. Because the phases were viewed as sequential steps, with the output from one phase becoming the input to the next, a traditional SDLC was often called a waterfall. And, like water flowing over a precipice, the underlying premise of the waterfall approach to system development was that all motion was forward: once a phase was completed, there was no returning to it.

Use of an SDLC, its proponents claimed, would ensure that system development followed a common, sequential process with all critical information being properly documented. Although there were shortcomings, with the development of a life cycle the data processing industry took a major step in transforming itself from programming to software engineering. The SDLC can be viewed as the foundation of the modern IT department.
[Waterfall diagram: Project Begins → System Analysis → System Design → Testing and Quality Assurance]
Steps Involved in the Traditional SDLC

Project Initiation: The customer identifies a problem and requests a software solution. IT management forms a team, and the system analysts identify preliminary requirements, then meet with the customer to validate them. Once validation is complete, the system analysts finish the feasibility study and submit it to IT management for approval, after which the customer is contacted for final sign-off.

System Analysis and Design: The customer specifies the requirements; the system analysts complete the requirement specification and submit it back to the customer for approval. Once the customer approves it, the analysts complete the functional design and then the technical design. After the program unit design is complete, the programmer analysts write the code, the quality assurance team designs a test plan, the programmers perform unit tests, and the system analysts perform a system test.

Testing and Quality Assurance: The quality assurance team performs an integration test, the system analysts perform a stress test, and the system is then given to the customer for an acceptance test.

Implementation: The customer performs the acceptance test. If the system is approved, the system analysts train the customers and complete the customer documentation. The programmers then convert the data, and the project is submitted to IT management for evaluation.

Advantages of the Traditional SDLC

Although sometimes criticized for its rigidity, a traditional SDLC provided, and continues to provide, benefits for many organizations. In addition to the reason it was initiated, namely adding structure to a previously unstructured process, the waterfall approach to system development has two primary advantages:

1. The explicit guidelines allow the use of less-experienced staff for system development, as all steps are clearly outlined. Even junior staff members who have never managed a project can follow the steps in the SDLC to produce adequate systems.

2. Reliance on individual expertise is reduced. Use of an SDLC can have the added benefit of providing training for junior staff, again because the sequence of steps and the tasks to be performed in each step are clearly defined.

SDLC Disadvantages

While there is no doubt that the waterfall approach to system development is superior to a totally unstructured environment, there are some known disadvantages:

1. If followed slavishly, it can result in the generation of unnecessary documents. Many methodologies have forms for every possible scenario. Inexperienced staff may believe that all are required and may, for example, insist on three levels of customer sign-off when only one is needed. This can complicate the process and extend the project schedule unnecessarily. To prevent this from occurring, most organizations view an SDLC as a set of guidelines and use only the steps and forms that apply to the specific size and type of project under development. Many even provide templates for their staff, outlining which steps are required for specific types and sizes of projects.

2. It is difficult for the customer to identify all requirements early in the project; however, the sequential "river of no return" approach demands exactly that. The philosophy of the SDLC means that there is no easy way to mitigate this problem and still remain true to the methodology.

3. The customer is involved only periodically, rather than being an active participant throughout the project. This can result in misunderstandings on both sides, IT's and the customer's.

4. The combination of incomplete specifications, infrequent communication, and long elapsed time increases the probability that the system will be off track when it is finally delivered. The long development cycle also increases the possibility that, by the time the system is delivered, business changes may have invalidated the initial design, or that the project champions may have left the company or been reassigned, taking with them the impetus for the project.
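The forward-only premise of the waterfall can be sketched as a toy pipeline. The phase names and the sign_off callback below are illustrative only, not part of any real methodology or tool:

```python
# Toy sketch of the waterfall's forward-only premise: each phase must be
# signed off before the next begins, and there is no returning to it.
# The phase names and sign_off callback are illustrative, not a real API.
PHASES = ["Project Initiation", "System Analysis", "System Design",
          "Coding", "Testing and QA", "Implementation"]

def run_waterfall(sign_off):
    """Run phases strictly in order; a failed sign-off halts the project
    rather than looping back, mirroring the SDLC's one-way flow."""
    completed = []
    for phase in PHASES:
        if not sign_off(phase):
            return completed  # no mechanism to revisit earlier phases
        completed.append(phase)
    return completed

# If every phase is approved, all six complete in sequence.
print(run_waterfall(lambda phase: True))
```

Note that the loop has no way to return to a completed phase; an iterative methodology would add exactly that feedback path.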


Describe the capabilities of online analytical processing (OLAP) and data mining.

Online analytical processing (OLAP) is a methodology used to give end users rapid access to large amounts of data in support of exploratory, deductive analysis. OLAP allows users to analyze information from multiple database systems at one time. While relational databases are considered two-dimensional, OLAP data is multidimensional, meaning the information can be compared in many different ways. For example, a company might compare its computer sales in June with sales in July, then compare those results with the sales from another location, which might be stored in a different database.

OLAP tools meet the need for interactive multidimensional reporting and analysis. They allow operational managers to perform trend, comparative, and time-based analysis by enabling exploration of pre-calculated and summarized data along multiple dimensions. Operational managers can explore data first at a summary level, and then drill down through the data hierarchy to examine increasingly granular levels of detail.

In order to process database information using OLAP, an OLAP server is required to organize and compare the information. Clients can analyze different sets of data using functions built into the OLAP server. Popular OLAP server products include Oracle Express Server and Hyperion Solutions Essbase. Because of its powerful data analysis capabilities, OLAP is often used for data mining, which aims to discover new relationships between different sets of data.
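The summary-then-drill-down pattern described above can be sketched in plain Python. The fact rows and dimension names (month, location) are made up to mirror the June/July sales comparison; a real OLAP server would answer these queries against pre-aggregated cubes rather than scanning rows:

```python
# Minimal sketch of OLAP-style summary and drill-down in plain Python.
# The fact rows and dimension names are illustrative, not real data.
from collections import defaultdict

facts = [
    {"month": "June", "location": "Boston", "sales": 120},
    {"month": "June", "location": "Austin", "sales": 95},
    {"month": "July", "location": "Boston", "sales": 140},
    {"month": "July", "location": "Austin", "sales": 110},
]

def rollup(rows, *dims):
    """Aggregate the sales measure along the requested dimensions."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row["sales"]
    return dict(totals)

# Summary level: total sales per month across all locations.
print(rollup(facts, "month"))               # {('June',): 215, ('July',): 250}
# Drill down: the same measure by month and location.
print(rollup(facts, "month", "location"))
```

Each extra dimension passed to rollup is one step down the data hierarchy, from summary totals toward increasingly granular detail.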


OLAP makes business intelligence happen, broadly by enabling the following:

- Transforming the data into multidimensional cubes
- Summarized, pre-aggregated, and derived data
- Strong query management
- A multitude of calculation and modelling functions

OLAP uses multidimensional data representations, known as cubes, to provide rapid access to data stored in data warehouses. In a data warehouse, cubes model the data in the dimension and fact tables in order to provide sophisticated query and analysis capabilities to client applications. OLAP software offers real-time analysis of data stored in a data warehouse. Generally, the OLAP server is a separate component that contains specialized algorithms and indexing tools that enable the processing of data mining tasks with minimal impact on database performance.

Online analytical processing is an integral part of many businesses: it supports the analysis and decision-making of an organization. For example, IT organizations often face the challenge of delivering systems that allow knowledge workers to make strategic and tactical decisions based on corporate information. These decision support systems are the OLAP systems that allow knowledge workers to manipulate operational data intuitively, quickly, and flexibly to gain analytical insight. Usually, OLAP systems are designed to:

- Support the complex analysis requirements of decision-makers.
- Analyze the data from a number of different perspectives.
- Support complex analysis against large input (atomic-level) data sets.

OLAP systems are generally designed around one of two architectures: multidimensional OLAP (MOLAP) and relational OLAP (ROLAP). The MOLAP architecture utilizes a multidimensional database to provide analysis, while the ROLAP architecture accesses data directly from data warehouses. MOLAP proponents hold that OLAP is best implemented by storing data multidimensionally, whereas ROLAP proponents argue that OLAP capabilities are best provided directly against the relational database.

OLAP provides these benefits to analytical users:

- Pre-aggregation of frequently queried data, enabling very fast response times to ad hoc queries.
- An intuitive multidimensional data model that makes it easy to select, navigate, and explore the data.
- A powerful tool for creating new views of data based upon a rich array of ad hoc calculation functions.
- Technology to manage security, client/server query management, and data caching, and facilities to optimize system performance based upon user needs.
- An intuitive user interface for browsing data.
- Excellent query performance, primarily owing to the intelligent navigation of aggregates and partitions.
- Parent-child dimension structures that are easy and intuitive to implement.
- Server-defined rules for handling semi-additive and non-additive measures.

An OLAP system lets you have server-defined calculations of great complexity. SQL's limitations as an analytic language were outlined in a previous column. SQL is not an analytic or report-writing language: you need an analysis server to support statistics, data mining algorithms, or even simple rule-based business calculations such as allocations and distributions. The OLAP server acts as a friendly interface to the data cube, letting users consume server-defined analytics without worrying about how and where they are defined and computed. Server-defined, high-performance queries and calculations can be performed over multiple fact tables or cubes. Combining data from multiple fact tables is a difficult problem in the pure relational world, but it can be made easy and intuitive in certain OLAP servers. Calculations can be defined once and used many times: the more calculations you can define on a central server, the more flexibility users have in accessing the data. Even a simple slice-and-dice tool can use complex analytics previously defined on the OLAP server. This capability is not generally found in relational environments. And, of course, power users can define complex calculations on the server so that all users benefit.
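The "define once, use many times" idea can be illustrated with a toy server object. This is not a real OLAP server API; the class, method names, and data below are invented for the sketch:

```python
# Illustrative sketch (not a real OLAP server API) of server-defined
# calculations: they are registered centrally once, and any client can
# run them by name without knowing how or where they are computed.
class OlapServer:
    def __init__(self, facts):
        self.facts = facts
        self.calculations = {}

    def define(self, name, fn):
        """Register a named, server-defined calculation once."""
        self.calculations[name] = fn

    def query(self, name):
        """Clients consume the shared calculation without knowing its logic."""
        return self.calculations[name](self.facts)

server = OlapServer([
    {"region": "East", "sales": 100, "cost": 60},
    {"region": "West", "sales": 80, "cost": 50},
])
# A power user defines the calculation on the server once...
server.define("total_margin",
              lambda rows: sum(r["sales"] - r["cost"] for r in rows))
# ...and every client, even a simple slice-and-dice tool, can reuse it.
print(server.query("total_margin"))   # 70
```

Because the calculation lives on the server, changing the margin rule in one place changes it for every client at once, which is the flexibility the paragraph above describes.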