
Introduction

1.1 An Overview
An organization or company offers a number of products and services to its customers. Depending on the type of business or service offered, customers may have various doubts, complaints, or problems. The Enterprise Business Service Engine helps accomplish the task of solving and clarifying customers' queries. The product allows users to submit complaints online, independent of the system they use. A business organization can use the Enterprise Business Service Engine to provide online support to its customers, covering questions about its services as well as any complaints the customers may have. Although this could be handled manually, time plays an important role in customer satisfaction.

A customer always expects services to be delivered as soon as possible, and the organization is responsible for keeping its customers satisfied. As most organizations are moving online, or are already online, the Online Business Service Engine will prove an added advantage to them in the internet world.

1.2 Objectives of the Project


The manual submission of complaints and manual customer servicing have been the most tedious parts of the business. The time taken to respond and the accuracy of complaint resolution play a vital role in customer service. These scenarios and objectives make the development of an application that serves the customer satisfactorily a necessity. The system allows the business process to run efficiently and assists the customer by giving a brief solution to the complaint within a minimum time. The customer receives the solution online within a day.

Customers may raise many problems that cannot all be solved at the same time with full support from the business, yet the main aim of the business is customer satisfaction. This system lets customers submit their problems online whenever they need to, without any constraints imposed on them, and to explain their complaints briefly and without hesitation. Through this system the customers are also offered predefined complaints and immediate solutions, which saves their precious time. The customers receive the solutions to their complaints within a day and can view the status of a complaint at any time using the complaint number. Through this system the customers receive a brief solution to their complaint.

1.3 Background Study

1.3.2 Existing System


The current system does not extend its functionality to overseas business process outsourcing. The cost and human resource expenses of building customer services for business prosperity within the locality are undesirable. The system cannot be enhanced to provide distribution of data for different services. Complaints are dropped in the complaint box, collected by the respective members for scanning, and served on a day-to-day basis. This process model and business approach is inefficient and not dynamic enough for the future growth of the organization.

Analyzing the employees' performance and generating reports are done manually, which is time consuming. Keeping the data accurate also becomes a tedious task. Drafting required for trainees is also done manually. Searching for information through multiple files and analyzing data are tedious tasks as well.

Drawbacks

Time-consuming
A tedious task
Causes damage to the brand image of the organization
Slow updating and retrieval of information

System Analysis

2.1 Proposed System

2.1.1 Problem Definition


The aim is to develop a system that assists customers by providing solutions for their complaints. The system focuses on receiving complaints from customers, distributing complaints to the customer support representatives, solving complaints based on defined constraints, having team leaders verify the solved complaints, assigning an accuracy percentage to the customer support representatives based on failure constraints, and running a validation engine that supplies the solution to a repeated complaint.

The distribution engine distributes the complaints category wise, and the customer support representatives are classified under different categories. The distribution engine never distributes the same complaint to more than one customer support representative. It maintains, per category, details of how many complaints have been distributed to each customer support representative, how many have been solved, and how many have been verified. Complaints are tracked from the time they are distributed to a customer support representative until the representative solves them, and the time taken to solve each complaint is also recorded.

The solution given by a customer support representative is forwarded to the team leaders, who verify it. The solution must follow the constraints that are already deployed in the system. The customer support representative can consult the customer records that are retrieved when the complaint is distributed. The team leaders verify whether the given solution has followed the deployed constraints and assign an accuracy percentage to the customer support representative based on the failure methodologies. The failure methodologies determine the career of the customer support representative: if a representative exceeds a specified failure percentage, the representative will be removed from the company. The team leaders are notified of the time each customer support representative spends on each solved complaint, and a complaint on which too much time is spent is highlighted by its complaint number. The customers can view the status of a complaint using the complaint number generated at the time of submission; the complaint number can be submitted by the customer only after login.

2.1.2 Developing Solution Strategies


2.1.2.1 Complaint Process Analysis

Clients will submit their complaints online. A complaint number will be generated and returned to the customer. The number of complaints submitted in a day may range from 8,000 to 10,000, so storage will be required to maintain such a large mass of complaints. The number of complaints submitted in a single second will also be large, and this has to be handled properly. Predefined solutions will be readily available to the customer. To decrease the response time of retrieving complaints, three separate tables (CT, PT, AT) will be maintained, and these tables have to handle the incoming complaints. The records should move from the centralized database to the respective tables in batches of 500 complaints at a time, or on the change of date.
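The following sketch illustrates the batching rule described above. It is only an illustration of the analysis (the actual system is built on ASP.NET and SQL Server); the function names, the in-memory lists standing in for database tables, and the "target_table" flag are assumptions.

```python
# Illustrative sketch (not the project's ASP.NET code): moving complaints from
# the centralized store into the CT/PT/AT working tables in batches of 500.
from datetime import date

BATCH_SIZE = 500

def move_complaints(central_db, ct, pt, at, last_move_date):
    """Move a batch of complaints out of the centralized store into CT/PT/AT."""
    if len(central_db) < BATCH_SIZE and last_move_date == date.today():
        return 0  # move only when a full batch is ready or the date has changed
    tables = {"CT": ct, "PT": pt, "AT": at}
    moved = 0
    while central_db and moved < BATCH_SIZE:
        complaint = central_db.pop(0)
        # route each complaint to its working table (assumed flag, default CT)
        tables.get(complaint.get("target_table", "CT"), ct).append(complaint)
        moved += 1
    return moved
```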

2.1.2.2 Distribution Engine Process Analysis

The Distribution Engine has to distribute a complaint to a CSR on request. Complaints present in AT have to be served first, then those in PT, and then those in CT, and only one complaint should move at a time. Complaints that have been distributed also have to be distributed to the QA, and no complaint should be received by more than one CSR. This process also updates the complaint and solution tables and makes the corresponding tracking information entries.
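A minimal sketch of these distribution rules is shown below: AT is drained before PT, PT before CT, a complaint goes to exactly one CSR, and a tracking entry is written for each assignment. The data structures and field names are assumptions made for the example only.

```python
# Illustrative sketch of the distribution engine rules described above
# (assumed in-memory structures, not the actual ASP.NET implementation).
from datetime import datetime

def distribute(csr_id, at, pt, ct, assigned, tracking):
    """Give the requesting CSR the next complaint, AT first, then PT, then CT."""
    for table in (at, pt, ct):
        while table:
            complaint = table.pop(0)
            if complaint["complaint_no"] in assigned:
                continue  # already handed to another CSR; never distribute twice
            assigned[complaint["complaint_no"]] = csr_id
            tracking.append({
                "complaint_no": complaint["complaint_no"],
                "csr_id": csr_id,
                "distributed_at": datetime.now(),
            })
            return complaint
    return None  # nothing left to distribute
```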

2.1.2.3 Solution Process

Received complaints have to be solved by the CSR, with all necessary information displayed to the CSR on a separate screen. Solved complaints are sent to the solution table. Each CSR has to solve at least 100 complaints in a day with 98% accuracy. If the CSR cannot solve a complaint immediately, it can be pushed into the personal queue (PQ), but by the end of the working day all complaints in the queue should have been solved.
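The personal-queue behaviour could look roughly like the sketch below; the class and method names are assumptions used only to make the rule concrete.

```python
# Hypothetical sketch of the CSR's personal queue (PQ) described above.
from collections import deque

class PersonalQueue:
    def __init__(self):
        self._pending = deque()

    def defer(self, complaint):
        """Park a complaint the CSR cannot solve immediately."""
        self._pending.append(complaint)

    def next(self):
        """Return the oldest deferred complaint, or None if the queue is empty."""
        return self._pending.popleft() if self._pending else None

    def end_of_day_check(self):
        """At the end of the working day the queue must be empty."""
        if self._pending:
            raise RuntimeError(f"{len(self._pending)} deferred complaints remain unsolved")
```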

2.1.2.4 Evaluation Process

Each complaint solved is evaluated by the QA. Once the complaints are evaluated the accuracy percentage is awarded to the customer support representative.

The failure methodologies are:

Resolution (25%): incomplete work results in a resolution failure.

Procedure / Process Knowledge (25%): if the required steps are not followed or any major mistake is made, it is a procedure failure.

Notes (25%): mistakes in the notes result in a notes failure.

Follow Through (25%): if a wrong letter is sent to the customer, it is a follow-through failure.

This accuracy percentage is used to find the efficiency and caliber of the CSRs.
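One plausible reading of the figures above is that each failed category deducts its 25% weight from a complaint's accuracy score. That interpretation is an assumption; the sketch below only illustrates the arithmetic it implies.

```python
# A minimal sketch of deriving an accuracy percentage from the four failure
# categories above, assuming each failed category deducts its 25% weight.
FAILURE_WEIGHTS = {
    "resolution": 25,
    "procedure": 25,
    "notes": 25,
    "follow_through": 25,
}

def accuracy_percentage(failed_categories):
    """Return the accuracy awarded for one evaluated complaint."""
    deduction = sum(FAILURE_WEIGHTS[c] for c in failed_categories)
    return max(0, 100 - deduction)

# Example: a complaint with a correct resolution but faulty notes scores 75%.
assert accuracy_percentage(["notes"]) == 75
```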

2.2 System Specification

2.2.1 Application Specification


Architecture : Three-Tier Architecture
Middleware : ASP.NET
Web Server : IIS
Database : SQL Server

2.2.1.1 Client-Server Architecture

Not so long ago, computer applications were made with a fixed set of proprietary tools for design, development, testing and deployment. This usually involved a special breed of top IT talent who knew nothing but just that. They specialized in a programming language that worked only with a proprietary operating system and a proprietary database in a proprietary network. This posed a problem to most HR managers, who had to keep their IT staff pampered owing to the difficulty of sourcing these rare birds (of prey). It also posed a problem to the IT staff themselves, as they were bound to an allegiance to a single vendor. But not for long; those days now fizzle out of the horizon.

As networking technology matured significantly in the PC arena, the role of networks evolved from simple file and printer/device sharing to complex back-end server applications. So came the introduction of a 2-tiered architecture, in which programs are developed using front-end tools (like PB) that can hook up to a host of databases (back-end). Soon, databases could also communicate and share data with proprietary databases (a middle tier). Subsequently, with the specialties fragmented, end users have a wider variety of flavors to choose from and need not be bound to a single vendor or a whiz kid of specific talent. More choices led to greater flexibility in system design.

A 2-tiered architecture divides an application into two pieces: the GUI (client) and the database (server). The client sends a request to the server. The server, being a more powerful machine, does all the fetching and processing and returns only the desired result set back to the client for the finishing touches. In effect, weaker machines have virtual shared access to the strength of the server at the back end. Faster execution at the server side results in less network traffic and better response time for the program to fetch and process the data. A 3-tiered architecture, on the other hand, extends the fragmentation of an application's components further: the GUI (client), the business rules (application server), and the database (database server). This type of design is not necessarily better than the 2-tiered architecture; factors such as the development time frame, IT resources, application complexity, and future scalability have to be considered when choosing between the two.

Introducing Dynamic Web Pages

The client-to-server-to-client process just described is important because it happens each time your client contacts the server to get some data. That's distinctly different from the stand-alone or client-server model you may be familiar with already. Because the server and the client don't really know anything about one another, for each interaction you must send, initialize, or restore the appropriate values to maintain the continuity of your application. As a simple example, suppose you have a secured site with a login form. In a standard application, after the user has logged in successfully, that's the only authentication you need to perform; the fact that the user logged in successfully means that they're authenticated for the duration of the application. In contrast, when you log in to a Web site secured by only a login and password, the server must reauthenticate you for each subsequent request. That may be a simple task, but it must be performed for every request in the application. In fact, that's one of the reasons dynamic applications became popular. In a site that allows anonymous connections (like most public Web sites), you can only authenticate users if you can compare the login/password values entered by the user with the real copies stored on the server. While HTML is an adequate layout language for most purposes, it isn't a programming language; it takes code to authenticate users.

Another reason that dynamic pages became popular is the ever-changing nature of information. Static pages are all very well for articles, scholarly papers, books, and images; in general, for information that rarely changes. But static pages are simply inadequate to capture employee and contact lists, calendar information, news feeds, sports scores; in general, the type of data you interact with every day. The data changes far too often to maintain successfully in static pages. Besides, you don't always want to look at that data the same way. But it's useful to note that even dynamic data usually has a predictable rate of change, something we shall discuss later in the context of caching.

How Does the Server Separate Code from Content?

In classic ASP pages, you could mix code and content by placing special code tags (<% %>) around the code or by writing script blocks, where the code appeared between <script> and </script> tags. Classic ASP pages use an .asp file name extension. When the server receives a request for an ASP file, it recognizes, via the extension associations, that responding to the request requires the ASP processor. Therefore, the server passes the request to the ASP engine, which parses the file to differentiate the code-tag content from the markup content. The ASP engine processes the code, merges the results with any HTML in the page, and sends the result to the client. ASP.NET goes through a similar process, but the file extension for ASP.NET files is .aspx rather than .asp. You can still mix code and content in exactly the same way, although now you can (and usually should) place code in a separate file, called a code-behind module, because doing so provides a cleaner separation between display code and application code and makes it easier to reuse both. In ASP.NET, you can write code in all three places: in code-behind modules and also within code tags and script blocks in your HTML files. Nevertheless, the ASP.NET engine must still parse the HTML file for code tags. The ASP.NET engine itself is an Internet Server Application Programming Interface (ISAPI) application. ISAPI applications are DLLs that load into the server's address space, so they're very fast. Different ISAPI applications handle different types of requests. You can create ISAPI applications for special file extensions, like .aspx, or ones that perform special operations on standard file types like HTML and XML. There are two types of ISAPI applications: extensions and filters. The ASP.NET engine is an ISAPI extension. An ISAPI extension replaces or augments the standard IIS response. Extensions load on demand when the server receives a request with a file extension associated with the ISAPI extension DLL. ASP.NET pages bypass the standard IIS response procedure if they contain code tags or are associated with a code-behind module. If your ASPX file contains no code, the ASP.NET engine recognizes this when it finishes parsing the page; for pages that contain no code, the ASP.NET engine short-circuits its own response and the standard server process resumes. Classic ASP pages began short-circuiting for pages that contained no code with IIS 5 (ASP version 3.0). Therefore, ASP and ASPX pages that contain no code are only slightly slower than standard HTML.

.NET Framework


The Microsoft .NET Framework is a new platform for building integrated, service-oriented applications to meet the needs of today's Internet businesses; apps that gather information from, and interact with, a wide variety of sources, regardless of the platforms or languages in use. This article, the first of a two part series, illustrates how the .NET Framework enables you to quickly build and deploy Web services and applications in any programming language.

At the heart of the .NET platform is a common language runtime engine and a base framework. All programmers are familiar with these concepts. I'm sure many of you have at least dabbled with the C runtime library, the standard template library, the MFC library, the Active Template Library, the Visual Basic runtime library, or the Java virtual machine. In fact, the Windows operating system itself can be thought of as a runtime engine and library. Runtime engines and libraries offer services to applications, and programmers love them because they save time and facilitate code reuse.

The .NET base framework will allow developers to access the features of the common language runtime and also will offer many high-level services so that developers don't have to code the same services repeatedly. But more importantly, the .NET common language runtime engine sitting under this library will provide the technologies to support rapid software development. The following lists a small sampling of features to be provided by the .NET common language runtime engine.

Consistent programming model
All application services are offered via a common object-oriented programming model, unlike today where some OS facilities are accessed via DLL functions and other facilities are accessed via COM objects.


Simplified programming model
.NET seeks to greatly simplify the plumbing and arcane constructs required by Win32 and COM. Specifically, developers no longer need to gain an understanding of the registry, GUIDs, IUnknown, AddRef, Release, HRESULTS, and so on. It is important to note that .NET doesn't just abstract these concepts away from the developer; in the new .NET platform, these concepts simply do not exist at all.

Run once; run always
All developers are familiar with DLL Hell. Since installing components for a new application can overwrite components of an old application, the old app can exhibit strange behavior or stop functioning altogether. The .NET architecture now separates application components so that an app always loads the components with which it was built and tested. If the application runs after installation, then the application should always run. This marks the end of DLL Hell.

Execute on many platforms
Today, there are many different flavors of Windows: Windows 95, Windows 98, Windows 98 SE, Windows Me, Windows NT 4.0, Windows 2000 (with various service packs), Windows CE, and soon a 64-bit version of Windows 2000. Most of these systems run on x86 CPUs, but Windows CE and 64-bit Windows run on non-x86 CPUs. Once written and built, a managed .NET application (that consists entirely of managed code, as I'll explain shortly) can execute on any platform that supports the .NET common language runtime. It is even possible that a version of the common language runtime could be built for platforms other than Windows in the future. Users will immediately appreciate the value of this broad execution model when they need to support multiple computing hardware configurations or operating systems.


Language integration
COM allows different programming languages to interoperate with one another. .NET allows languages to be integrated with one another. For example, it is possible to create a class in C++ that derives from a class implemented in Visual Basic. .NET can enable this because it defines and provides a type system common to all .NET languages. The Microsoft Common Language Specification (discussed in Part 2 of this article) describes what compiler implementers must do in order for their languages to integrate well with other languages. Microsoft is providing several compilers that produce code targeting the .NET common language runtime: C++ with managed extensions, C# (pronounced "C sharp"), Visual Basic (which now subsumes VBScript and Visual Basic for Applications), and JScript. In addition, companies other than Microsoft are producing compilers for languages that also target the .NET common language runtime.

Code reuse
Using the mechanisms I just described, you can create your own classes that offer services to third-party applications. This, of course, makes it extremely simple to reuse code and broadens the market for component vendors.

Automatic resource management
Programming requires great skill and discipline. This is especially true when it comes to managing resources such as files, memory, screen space, network connections, database resources, and so on. One of the most common bugs occurs when an application neglects to free one of these resources, causing that application or others to perform improperly at some unpredictable time. The .NET common language runtime automatically tracks resource usage, guaranteeing that your application never leaks resources. In fact, there is no way to explicitly free a resource. In a future article, I'll explain exactly how this works.


Type safety
The .NET common language runtime can verify that all your code is type safe. Type safety ensures that allocated objects are always accessed in compatible ways. Hence, if a method input parameter is declared as accepting a 4-byte value, the common language runtime will detect and trap attempts to access the parameter as an 8-byte value. Similarly, if an object occupies 10 bytes in memory, the application can't coerce this into a form that will allow more than 10 bytes to be read. Type safety also means that execution flow will only transfer to well-known locations (namely, method entry points). There is no way to construct an arbitrary reference to a memory location and cause code at that location to begin execution. Together, these eliminate many common programming errors and classic system attacks such as the exploitation of buffer overruns.

Rich debugging support

Because the .NET common language runtime is used for many languages, it is now much easier to implement portions of your application using the language that's best suited for it. The .NET common language runtime fully supports debugging applications that cross language boundaries. The runtime also provides built-in stack-walking facilities, making it much easier to locate bugs and errors.

Consistent error handling
One of the most aggravating aspects of programming in Windows is the inconsistent ways errors are reported. Some functions return Win32 error codes, some return HRESULTs, and some raise exceptions. In .NET, all errors are reported via exceptions, period. Exceptions allow the developer to isolate the error-handling code from the code required to get the work done. This greatly simplifies writing, reading, and maintaining code. In addition, exceptions work across module and language boundaries as well.

Deployment
Today, Windows-based applications can be incredibly difficult to install and deploy. There are usually several files, registry settings, and shortcuts that need to be created. In addition, completely uninstalling an application is nearly impossible. With Windows 2000, Microsoft introduced a new installation engine that helps with all of these issues, but it is still possible that a company authoring a Microsoft Installer package may fail to do everything correctly. .NET seeks to make these issues ancient history. .NET components are not referenced in the registry. In fact, installing most .NET-based applications will require no more than copying the files to a directory, and uninstalling an application will be as easy as deleting those files.

Security
Traditional OS security provides isolation and access control based on user accounts. This has proven to be a useful model, but at its core it assumes that all code is equally trustworthy. This assumption was justified when all code was installed from physical media (such as CD-ROM) or trusted corporate servers. But with the increasing reliance on mobile code such as Web scripts, Internet application downloads, and e-mail attachments, there is a need for more granular control of application behavior.

Microsoft also recognizes that many apps need to enforce behaviors based on a concept of roles as opposed to individual accounts. Microsoft initially delivered support for this concept with Microsoft Transaction Server (MTS), and provided further enhancements with COM+ 1.0. In .NET, Microsoft supports the deployment of application-defined roles and access control based on these roles.


These mechanisms are extended in ways that are appropriate for the Internet and heterogeneous environments.

Session Tracking
Session tracking is the capability of a server to maintain the current state of a single client's sequential requests. HTTP is a stateless protocol, which means that each request is independent of the previous one. However, in some applications it is necessary to save state information so that information can be collected from several interactions between a browser and a server. For example, an online video store must be able to determine each visitor's sequence of actions. Suppose a customer goes to your site to order a movie. The first thing he does is look at the available titles. When he has found the title he is interested in, he makes his selection. The problem now is determining who made the selection. Because each one of the client's requests is independent of the previous requests, we have no idea who actually made the final selection. We can solve this problem using session tracking.

SQL Server Enterprise Manager
SQL Server Enterprise Manager is a graphical tool that allows easy, enterprise-wide configuration and management of SQL Server and SQL Server objects. SQL Server Enterprise Manager provides:
A scheduling engine.
Administrator alert capability.
Drag-and-drop control operations across multiple servers.
A built-in replication management interface.

SQL Server Enterprise Manager is also used to:
Manage logins, permissions, and users.
Create scripts.
Manage devices and databases.
Back up databases and transaction logs.
Manage tables, views, stored procedures, triggers, indexes, rules, defaults, and user-defined data types.

Creating and Maintaining Databases
Designing your Microsoft SQL Server database structure involves creating and maintaining a number of interrelated components.

Database component | Description
Databases | Contain the objects used to represent, manage, and access data.
Tables | Store rows of data and define the relationships between multiple tables.
Database Diagrams | Represent database objects graphically and enable you to interact with the database without using Transact-SQL.
Indexes | Optimize the speed of accessing the data in the table.
Views | Provide an alternate way of looking at the data in one or more tables.
Stored Procedures | Centralize business rules, tasks, and processes within the server using Transact-SQL programs.
Triggers | Centralize business rules, tasks, and processes within the server using special types of stored procedures that are only executed when data in a table is modified.

Accessing and Changing Data
SQL Server Enterprise Manager includes a tool for designing queries interactively using a graphical user interface (GUI). These queries are used:
In views.
In Data Transformation Services (DTS) packages.
To display the data in Microsoft SQL Server tables.

Replication
Replication is an important and powerful technology for distributing data and stored procedures across an enterprise. The replication technology in SQL Server allows you to make copies of your data, move those copies to different locations, and synchronize the data automatically so that all copies have the same data values. Replication can be implemented between databases on the same server or on different servers connected by LANs, WANs, or the Internet. The procedures in this section help you configure and maintain replication using SQL Server Enterprise Manager.

Data Transformation Services
Data Transformation Services (DTS) provides the functionality to import, export, and transform data using COM, OLE DB, and Microsoft ActiveX scripts. DTS enables you to build and manage data marts and data warehouses by providing:
An extensible, transaction-oriented workflow engine that allows execution of a complex series of operations.
Powerful integrated heterogeneous data movement and scrubbing. DTS can copy, validate, and transform data from many popular desktop and server-based data sources, including Microsoft Access, dBase, Microsoft Excel, Microsoft Visual FoxPro, Paradox, SQL Server, Oracle, and DB2.
An industry-standard method of sharing metadata and data lineage information through Microsoft Repository. Leading data warehousing and database design vendors have adopted this information model.
Package storage in Microsoft Repository, SQL Server, or COM-structured storage files. After a package has been saved, it can be scheduled for execution using SQL Server Agent.
Extensibility that allows advanced users to meet their unique needs while continuing to leverage DTS functionality.
Integration with Microsoft SQL Server OLAP Services.

Managing Security
To ensure that data and objects stored in Microsoft SQL Server are accessed only by authorized users, security must be set up correctly. Understanding how to set up security correctly can help simplify ongoing management. Security elements that may have to be set up include authentication modes, logins, users, roles, granting, revoking, and denying permissions on Transact-SQL statements and objects, and data encryption.

Databases
A database in Microsoft SQL Server consists of a collection of tables with data, and other objects, such as views, indexes, stored procedures, and triggers, that are defined to support the activities performed with the data. Before objects within the database can be created, you must create the database and understand how to change its settings and configuration. This includes tasks such as expanding or shrinking the database, or specifying the files used to create the database.


Tables
Tables are database objects that contain all the data in a database. A table definition is a collection of columns, in the same way that a database is a collection of tables. Before data can be stored in a database, you must understand how to create, modify, and maintain the tables within your database. This includes tasks such as defining keys and adding or deleting columns from a table.

Database Diagrams
Database diagrams enable you to create, manage, and view database objects in a graphical format. Before objects within the database can be manipulated using database diagrams, you must understand how to create a database diagram, add objects to it, work within a database diagram, and save a database diagram.

Indexes
To create efficient indexes that improve the performance of your database application by increasing the speed of your queries, you need an understanding of how to create and maintain the indexes on the tables in your database.

Views
By creating, modifying, and maintaining views, you can customize each user's perception of the database.

Stored Procedures
By creating, modifying, and using stored procedures, you can simplify your business applications and improve application and database performance.


Triggers
By understanding how to create, modify, and maintain triggers, you can use triggers to:
Cascade changes through related tables in the database.
Disallow or roll back changes that violate referential integrity, thereby canceling the attempted data modification transaction.
Enforce restrictions that are more complex than those defined with CHECK constraints.
Find the difference between the state of a table before and after a data modification and take action(s) based on that difference.

2.2.2 Network Specification


Internet Specifications

There are four basic building blocks of the Internet: hosts, routers, clients, and connections. Hosts and clients are explained later in the chapter, but for now, be content to know that unless you have very special circumstances, in most cases your computer falls under the "client" category. Data is sent from your computer in the form of a "packet". You can liken a packet to an envelope; it surrounds your data and contains both a return and a destination address. Your computer handles the packets for you; it is all done in the background, without your knowledge. A router is a special device. Basically, routers sit at key points on the Internet and act like traffic cops at an intersection of hundreds of streets. The router reads the destination address on the packets being sent by your computer and then forwards each packet to the appropriate destination. In some cases your data will travel through several routers before reaching its ultimate destination.


Connections. This is a catch-all term describing how you can connect from one point to another. As an end user, your only concern is that the connection is good, but for a network engineer this can mean several different types of technologies, including:
Dial-up phone lines
Fiber optics
ISDN
Frame relay
Satellite links

There are two classes of computers on the Internet, hosts and clients. Unless you have a permanent link to the Internet and your machine is always connected and on-line, then you are probably a client and not a host. As a client to the Internet, you should have the following abilities.

Databases
For long-term storage, Integer Software Solutions uses the SQL Server 8 relational database system.

Server Specification
The following requirements apply to the server system environment:
Microsoft Windows 2000 Server operating system supported by SQL Server
a minimum of 512 MB RAM
a backup system with larger capacity (recommended)

Client Specification
The following requirements apply to the client system environment:
Microsoft Windows 2000 Server
256 MB RAM
SQL Server license

Open Database Connectivity
Open Database Connectivity (ODBC) is a Windows technology that lets a database client application connect to an external database. The database vendor must provide an ODBC driver for data access, and once this driver is available the client machine has to be configured with it. The location of the database and the login ID and password also have to be configured on every client machine; this is called a data source. The user can configure multiple data sources, with the same or different drivers, on the same machine. Thus, using ODBC it is possible to access heterogeneous data from any client. The Open Database methods create a connection between the application and the ODBC database and assign it to a database-type object. ODBC connectivity is used in this application between the processes and their databases, which are developed using SQL Server. This gives the application more flexibility, easy updating, ease of handling, and accuracy, and it provides good connectivity between the client and the server: the client sends a request to the server, and the server responds to the client's request. This application is used over LAN connectivity.

At the same time, one or more users can access the same file. This makes it more convenient for users to handle the records that hold the information about the processes. If a user has the authority to handle those records, the user can update the database.
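To make the data-source idea concrete, the sketch below opens an ODBC connection through a configured DSN and reads a complaint's solution. pyodbc is used here only as an example ODBC client library; the DSN name, credentials, table and complaint number are assumptions, and the project itself accesses the database from ASP.NET.

```python
# Illustration of connecting through an ODBC data source (assumed DSN "EBSE").
import pyodbc

# The "EBSE" data source would be configured on the client machine, pointing
# at the SQL Server database, as described above.
conn = pyodbc.connect("DSN=EBSE;UID=ebse_user;PWD=secret")
cursor = conn.cursor()

# Fetch a complaint's solution by its complaint number (hypothetical value).
cursor.execute("SELECT Solution FROM Solution WHERE Complain_ID = ?", "C1001")
row = cursor.fetchone()
print(row.Solution if row else "Complaint not yet solved")
conn.close()
```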


Network : Internet
Operating System : Windows 2000 Server, Windows XP

The development and availability of internet technology has resulted in an upsurge of intranets within organizations. It is now relatively easy for someone with an understanding of the technology and HTML to create web pages and implement a server to host them. As their experience (and software resource) increases, they may also become able to produce graphics to enhance the site, and more complicated and functional navigation. The main drawback is that every page must be individually created and linked or added to a menu structure, and often only the author or the ICT department has the skills to do it. Creation of new pages requires input from these same people. Also, if features such as discussion groups are required, another product would have to be acquired and 'tacked on', or written in-house if the skills exist. The management systems mentioned earlier are often extensive, often with many processes and procedures, work instructions, specifications, forms and external references, so the number of pages can be quite large. Managing and extending such a site is a time-consuming task.

2.2.3 Hardware Specification

Processor : Pentium IV
Hard Disk : 40 GB
RAM : 256 MB
Peripherals : Printer

2.2.4 Software Specification

Database : SQL Server
Scripting Language : ASP.NET
Web Server : IIS

2.3 Cost Estimation and Scheduling


Software cost is related to many variables: human, technical, environmental, political, and the effort applied to develop it. However, software project estimation can be transformed from a black art into a series of systematic steps that provide estimates with acceptable risk. The estimates of cost depend, in turn, on our ability to estimate and evaluate several factors, given below:
Experience and ability of the project personnel.
The quality of the software development environment.
The degree to which our understanding of the problem and its acceptable solutions is likely to change.
The complexity of the eventual code.
The length of the eventual code.
The degree to which software components can be reused.
The degree of market readiness for the product.
The amount of evaluation the product will eventually undergo.

The cost estimation of the proposed system is based purely on the manpower required and the time consumed in finishing the proposed system. Cost estimation also includes the hardware and software used for developing the proposed system. The cost estimation is done by the manufacturer after finishing the development of the proposed system and identifies the value of the proposed system. Cost estimation and scheduling are done before and after the development of the proposed system.

2.4 Final Outline of the Proposed System

The proposed system is designed based on the objectives prepared in the analysis phase of the existing system. The motive of the system is to distribute the data globally from one location to another with high security and integrity, under business constraints and processes that reduce the cost of implementing customer services.

The system enables the customers to submit complaints online and to receive responses to those complaints within the stipulated period and with accuracy. The system also enables the CSRs to access complaints on request and makes the customer information available for updating and verification. The system automates the process through which thousands of complaints are dynamically and simultaneously distributed to the hundreds of CSRs logged in, as each complaint is requested.

The system also evaluates the complaint-solving process and the life cycle of each complaint through the tracking application. The system provides quality assurance for every customer request. The application evaluates CSR performance through quality parameters and ensures that the business's standing in business process outsourcing never declines.

The system provides the end user with analytical and graphical reports which forecast and analyze the status of the business. The Online Business Service Engine is dynamic, secure, highly integrated application software designed with the latest technologies to reach the limits of business process outsourcing.


Design and Development Process


Design is a creative process; a good design is the key to an effective system. The term design is defined as "the process of applying various techniques and principles for the purpose of defining a process or a system in sufficient detail to permit its physical realization". Various design features are followed to develop the system. The design specification describes the features of the system, the components or elements of the system, and their appearance to end users. In system design, high-level decisions are taken regarding the basic system architecture and the platforms and tools to be used. The system design transforms a logical representation of what a given system is required to do into the physical specification. Design starts with the system requirement specification and converts it into a physical reality during development. Important design factors such as reliability, response time, throughput of the system, maintainability, expandability, etc. should be taken into account.

3.1 Fundamental Design Concepts


Fundamental design concepts provide the software designer with a foundation from which more sophisticated design methods can be applied. Fundamental design concepts provide the necessary framework for getting it right.

3.1.1 Abstraction
Abstraction permits one to concentrate on a problem at some level of generalization without regard to irrelevant low-level details; the use of abstraction also permits one to work with concepts and terms that are familiar in the problem environment without having to transform them into an unfamiliar structure. There are two types of abstraction: procedural abstraction and data abstraction. A procedural abstraction is a named sequence of instructions that has a specific and limited function. A data abstraction is a named collection of data that describes a data object.


3.1.2 Modularity
Modularity is the single attribute of software that allows a program to be intellectually manageable. Software architecture embodies modularity; that is, software is divided into named and addressable components, called modules, which are integrated to satisfy the problem requirements.

3.1.3 Software Architecture


Software architecture alludes to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. The control hierarchy, also called the program structure, represents the organization of control; a tree structure is used to represent the control hierarchy.

3.1.4 Data Structure


A data structure is a representation of the logical relationships among individual elements of data. Because the structure of information invariably affects the final procedural design, data structure is as important as program structure to the representation of the software architecture. Data structure dictates the organization, methods of access, degree of associativity, and processing alternatives for information. The organization and complexity of a data structure are limited only by the ingenuity of the designer. Scalar items, arrays and linked lists are some representations of data structures.

3.1.5 Software Procedure


Program structure defines the control hierarchy without regard to the sequence of processing and decisions. Software procedure focuses on the processing details of each module individually. Procedure must provide a precise specification of processing, including the sequence of events, exact decision points, repetitive operations, and even data organization and structure. Information hiding suggests that modules be characterized by design decisions that each module hides from all the others.


3.2 Design Notations


Design begins by defining a model of the new system and continues by converting this model to the new system. The method is used to convert the model of the proposed system into a computer specification: data models are converted to a database, and processes and flows to user procedures and computer programs. Design proposes the new system that meets these requirements. This new system may be built afresh or by changing the existing system. The detailed design starts with three activities: database design, user procedure design and program design. Database design uses the conceptual data model to produce a database design. User procedure design uses those parts of the DFD outside the automation boundary to design user procedures.

3.2.1 Data Flow Diagram


The data flow diagram (DFD) is one of the most important tools used by system analysts. Data flow diagrams are made up of a number of symbols, which represent system components. Most data flow modeling methods use four kinds of symbols, representing four kinds of system components: processes, data stores, data flows and external entities. Processes are represented by circles in the DFD, data flows by thin lines, data stores by open rectangles (each data store has a unique name), and external entities by squares or rectangles. Unlike a detailed flowchart, a data flow diagram does not supply a detailed description of the modules but graphically describes a system's data and how the data interact with the system. To construct a data flow diagram, we use:
Arrows
Circles
Open-ended boxes
Squares

Context Analysis Diagram:

[Context diagram: the Customer, Admin, CSR and QA entities all interact with the central EBSE process.]

Data Flow Diagrams:

1) Customer Module
Level 0: [The Customer exchanges Login, Query, Feedback, FAQ and Solution flows with EBSE.]
Level 1: [After a valid login (or registration for a new customer), the customer can consult the FAQ and its solutions, submit a query that is assigned a complaint_id, and retrieve the solution; an invalid login returns to the login step.]

2) Admin Module
Level 0: [The Admin exchanges Login, Feedback, Query, Emp_detail, Dept_detail, Fetch Query, Distribution and Solution flows with EBSE.]
Level 1: [After login, the admin fetches queries, distributes them to CSRs (Dist_of_Query), passes solutions on for QA, maintains the FAQ, and keeps employee details (CSR_Detail, QA_Detail).]
Level 1.1: [The admin adds new registrations and updates employee details in the CSR_Detail, QA_Detail and Department stores.]

3) CSR Module
Level 0: [The CSR exchanges Login, Solution, Pending and Escalation flows with EBSE.]
Level 1: [After login, the CSR arranges queries in the Personal Queue, solves them into CSR_Solution, marks queries that cannot be solved as Pending, and escalates unsolvable queries.]

4) QA Module
Level 0: [The QA exchanges Login, Verification and Modification flows with EBSE.]
Level 1: [After login, the QA verifies each CSR_Solution; a correct solution becomes the QA_Solution, otherwise the solution is modified, and a grade is given to the CSR (Grading).]

3.2.2 Structure Chart


A structure chart is made up of program modules and the interconnections between them. A program module is represented by a rectangular box in the structure chart; modules at the top level of the structure chart call the modules at the lower levels. Lines between the rectangular boxes represent the connections between modules and describe the data flow between the called and calling modules. As well as a DFD, it is also useful to develop a structural system model. This structural model shows how a function is realized by a number of other functions which it calls. Structure charts are a graphical way to represent this decomposition hierarchy. Like DFDs, they are dynamic rather than static system models; they show how one function calls others, not the static block structure of a function or procedure. A function is represented on a structure chart as a rectangle. The hierarchy is displayed by linking rectangles with lines. Inputs and outputs are indicated with annotated arrows: an arrow entering a box implies input, and one leaving a box implies output. Data stores are shown as rounded rectangles and user inputs as circles.

Rules to be applied: Many systems can be considered in three stages: input, validation and output. If data validation is required, the functions that implement it should be subordinate to an input function. The role of the functions near the top of the structural hierarchy may be to control and coordinate a set of lower-level functions. The objective of the design process is to have loosely coupled, highly cohesive components.


OBSE Structure Chart

[Structure chart: the OBSE root module decomposes into CLIENT, CSR, QA and ADMIN subsystems, with lower-level modules for Complaint, Solution, FAQ, Verification, Evaluation, Reporting, Authorization, Escalation & Unworkable, Suppression and Personal Queue.]

3.2.3 ER Diagram
A conceptual model describes the essential features of the system data. This conceptual model is described by a modeling method known as entity relationship analysis. Entity relationship analysis uses three major abstractions to describe data: entities, which are distinct things in the enterprise; relationships, which are meaningful interactions between the objects; and attributes, which are properties of entities and relationships.

[ER diagram: entities Customer (Cust-id, Pwd, Address, City, DOB, A/C No, Phone No), Complaint (Comp-No, Co-Desc, Time, status), CSR (CSR-ID, Pwd, Group-id, Phone-no), QA (QA-ID, pwd, Rights) and Admin (Admin id, Admin name, pwd, Rights), connected by the Posts Complaint, Gets Solution, Provides, Solving, Verifies and Monitors relationships.]

3.3 Design Process

Design begins when management approves the feasibility study produced during detailed analysis and authorizes the necessary funds and personnel to continue. It concludes when management approves the design and authorizes development of the actual system.

3.3.1 Database Design


A database is a collection of interrelated data stored with minimum redundancy to serve many users quickly and efficiently. The general objective of database design is to make the data access easy, inexpensive and flexible to the user.
TABLES

1. Customer Module

a) Registration
Field Name | Size | Constraint | Description
Customer_Name | Varchar(20) | | Name of the customer
Firstname | Varchar(20) | | First name of the customer
Lastname | Varchar(10) | | Last name of the customer
Customer_ID | Varchar(10) | Primary Key | ID of the customer
Address | Varchar(25) | | Address of the customer
City | Varchar(15) | | City of the customer
Country | Varchar(10) | | Country name
Contact_No. | Numeric | | Contact number of the customer
E_Mail | Varchar(20) | | E_Mail of the customer
Registration Date | Date | | To show the registration date

b) Login
Field Name | Size | Constraint | Description
Customer_ID | Varchar(10) | Foreign Key | ID of the customer
Customer_Password | Varchar(15) | | Password of the customer

c) FAQ
Field Name | Size | Constraint | Description
FAQ_Query | Varchar(100) | |
FAQ_Solution | Varchar(100) | | Solution about the FAQ

d) Query
Field Name | Size | Constraint | Description
Customer_ID | Varchar(10) | Foreign Key | ID of the customer
Query | Varchar(100) | | To write the queries
Date_of_Query | Date | |
Complain_ID | Varchar(10) | Primary Key | To get the Complain_ID


e) Solution
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | ID of the complaint
Solution | Varchar(20) | | Solution of the complaint

f) Feedback
Field Name | Size | Constraint | Description
Customer_ID | Varchar(10) | Foreign Key | ID of the customer
Feedback | Varchar(200) | | To get the feedback
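As an illustration of how these definitions map onto the SQL Server database, the sketch below creates the customer Query table from the fields listed above. The connection string, the exact T-SQL types and the foreign-key target are assumptions; the real schema may differ.

```python
# Hypothetical sketch: creating the customer Query table described above on
# SQL Server via ODBC. The DSN and credentials are placeholders.
import pyodbc

DDL = """
CREATE TABLE Query (
    Customer_ID   VARCHAR(10) REFERENCES Registration(Customer_ID), -- foreign key
    Query         VARCHAR(100),                                     -- the customer's query text
    Date_of_Query DATE,
    Complain_ID   VARCHAR(10) PRIMARY KEY                           -- generated complaint number
)
"""

conn = pyodbc.connect("DSN=EBSE;UID=ebse_admin;PWD=secret")
conn.cursor().execute(DDL)
conn.commit()
conn.close()
```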

2. Administrator Module

a) Admin Detail
Field Name | Size | Constraint | Description
Admin_Name | Varchar(15) | | Name of the administrator
Admin_ID | Varchar(10) | Primary Key | ID of the administrator
Address | Varchar(25) | | To show the administrator address
City | Varchar(15) | |
Country | Varchar(10) | | To show the administrator country
CSC_ID | Varchar(10) | | ID of Customer Service Centre

b) Admin Login
Field Name | Size | Constraint | Description
Admin_ID | Varchar(10) | Foreign Key | ID of the administrator
Admin_Password | Varchar(10) | | Password of the administrator

c) Department
Field Name | Size | Constraint | Description
Dept_No. | Numeric | | Department number
Dept_Name | Varchar(10) | | Name of the department

d) Distribution of Query
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | To show the Complain_ID
Admin_ID | Varchar(10) | Foreign Key | ID of the administrator
Date/Time | Date | | Complain date

e) CSR Login
Field Name | Size | Constraint | Description
CSR_ID | Varchar(10) | Primary Key | ID of the CSR
Password | Varchar(10) | | Password of the CSR

f) QA Login
Field Name | Size | Constraint | Description
QA_ID | Varchar(10) | Primary Key | ID of the QA
Password | Varchar(10) | | Password of the QA

g) CSR Detail
Field Name | Size | Constraint | Description
CSR_Name | Varchar(15) | | Name of the CSR
CSR_ID | Varchar(10) | Primary Key | ID of the CSR
Address | Varchar(25) | | To show the CSR address
City | Varchar(15) | |
Country | Varchar(10) | | To show the CSR country
CSC_ID | Varchar(10) | | ID of Customer Service Centre

h) QA Detail
Field Name | Size | Constraint | Description
QA_Name | Varchar(15) | | Name of the QA
QA_ID | Varchar(10) | Primary Key | ID of the QA
Address | Varchar(25) | | To show the QA address
City | Varchar(15) | |
Country | Varchar(10) | | To show the QA country
CSC_ID | Varchar(10) | | ID of Customer Service Centre

3. CSR Module

a) Escalation
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | ID of the Complain
CSR_ID | Varchar(10) | Foreign Key | ID of the CSR
Reason | Varchar(25) | | To show the reason

b) CSR_Solution
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | ID of the Complain
CSR_ID | Varchar(10) | Foreign Key | ID of the CSR
CSR_Solution | Varchar(200) | | About the CSR solution

c) Personal Queue
Field Name | Size | Constraint | Description
Serial_No. | Numeric(5) | | To show the serial number
Complain_ID | Varchar(10) | Foreign Key | ID of the Complain

4. Quality Assurance Module

a) QA_Solution Detail
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | To show the Complain ID
QA_ID | Varchar(10) | Foreign Key | ID of the QA
QA_Solution | Varchar(200) | | To get the QA solution


b) Pending
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | To show the Complain_ID
Complain_Date | Date | | Complain date

c) Verification
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | To show the Complain ID
QA_ID | Varchar(10) | Foreign Key | ID of the QA
Remark | Varchar(25) | | Remarks of the solution

d) Unworkable
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | To show the Complain ID
QA_ID | Varchar(10) | Foreign Key | ID of the QA

e) Grading_Table
Field Name | Size | Constraint | Description
Complain_ID | Varchar(10) | Foreign Key | To show the Complain_ID
CSR_ID | Varchar(10) | Foreign Key | ID of the CSR
Grade | Numeric(5) | |
QA_ID | Varchar(10) | Foreign Key | ID of the QA


3.3.2 Input Design

Input design is the link between the information system and the users, and covers the steps that are necessary to put transaction data into a usable form for processing data entry. The activity of putting data into the computer for processing can be achieved by instructing the computer to read data from a written or printed document, or by keying data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay and extra steps, and keeping the process simple. The system analyst decides the following input design details:

What data to input?
What medium to use?
How the data is arranged and coded?
The dialogue to guide the users in providing input.
Data items and transactions needing validation to detect errors.
Methods for performing input validation and steps to follow when an error occurs.

OBSE is built up of user-friendly, interactive forms which enable the customer, CSR, QA and administrator to operate the application with ease. The input forms are designed with data validation, data integration and consistency with the databases and the application logic.


The users are guided by standard messages and alerts which enable them to feed in the data accurately. It also provides shortcut keys which make the feeding of data much simpler and easier. OBSE provides a login input form which enables the user to access the application based on the assigned access rights. The application accepts user input and generates the respective action object, which is passed to the respective input forms. The input enables the CSR to access a single complaint's details at a time; the CSR provides the solution to the complaint, which is fed into the respective tables.
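The sketch below suggests the kind of server-side checks such an input form might perform before a complaint reaches the Query table. The field names and length limits follow the table definitions in the database design; the rules themselves, and the function name, are assumptions made for illustration.

```python
# Hypothetical sketch of the validation an input form could apply before a
# complaint is written to the Query table.
def validate_complaint(form):
    errors = []
    customer_id = form.get("Customer_ID", "").strip()
    query_text = form.get("Query", "").strip()

    if not customer_id:
        errors.append("Customer_ID is required (log in first).")
    elif len(customer_id) > 10:
        errors.append("Customer_ID must be at most 10 characters.")

    if not query_text:
        errors.append("The complaint text cannot be empty.")
    elif len(query_text) > 100:
        errors.append("The complaint text must be at most 100 characters.")

    return errors  # an empty list means the form can be submitted

# Example: a missing Customer_ID is reported back to the user as an alert.
print(validate_complaint({"Query": "My broadband link is down"}))
```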

3.3.3 Output Design


Designing computer output should proceed in a well-thought-out manner. The term output means any information produced by the information system, whether printed or displayed. When analysts design computer output, they identify the specific outputs that are needed to meet the requirements. Computer output is the most important source of information for the users. Output design is a process that involves designing the necessary outputs that have to be used by various users according to their requirements. Efficient, intelligent output design should improve the system's relationship with the user and help in decision making. Since the reports are directly required by the management for taking decisions and drawing conclusions, they must be simple, descriptive and clear to the user. Options for outputs and forms are given in the system menus. When designing the output, the system analyst must accomplish the following:
Determine the information to present.
Decide whether to display, print, or speak the information, and select the output medium.
Arrange the information in an acceptable format.
Decide how to distribute the output to the intended recipients.


3.4 Development Approach


The Online Business Service Engine was designed and developed based on the Waterfall Model. This model particularly expresses the interaction between subsequent phases. Testing software is not an activity which strictly follows the implementation phase; in each phase of the software development process, we have to compare the results obtained against what is required. In all phases quality has to be assessed and controlled.

Requirements Analysis: Verification and Validation
Design: Verification and Validation
Implementation: Verification and Validation
Testing: Verification and Validation
Maintenance: Verification and Validation


Testing and Implementation


4.1 System Testing


System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that all system elements have been properly integrated and perform their allocated functions. Testing is the final verification and validation activity within the organization itself. Testing is done to achieve the following goals: to affirm the quality of the product, to find and eliminate any residual errors from previous stages, to validate the software as a solution to the original problem, to demonstrate the presence of all specified functionality in the product, and to estimate the operational reliability of the system. During testing the major activities are concentrated on the examination and modification of the source code.

4.1.1 Testing Methodologies


Testing is generally done at two levels: testing of individual modules and testing of the entire system (system testing). During system testing, the system is used experimentally to ensure that the software does not fail, i.e., that it will run according to its specifications and in the way users expect. Special test data are input for processing, and the results examined. A limited number of users may be allowed to use the system so analysts can see whether they use it in unforeseen ways. It is preferable to discover any surprises before the organization implements the system and depends on it.

Testing is done throughout systems development at various stages, not just at the end. It is always good practice to test the system at many different levels at various intervals, that is, sub systems and program modules as work progresses, and finally the system as a whole. If this is not done, then the poorly tested system can fail after installation. As you may already have gathered, testing is a very tedious and time-consuming job. For a test to be successful the tester should try to make the program fail. The tester may be an analyst, a programmer, or a specialist trained in software testing. One should try to find areas in which the program can fail. Each test case is designed with the intent of finding errors in the way the system will process it. Thorough testing of programs does not by itself guarantee the reliability of the system, but it gives assurance that the system runs error free.

4.1.1.1 Unit Testing

This involves the tests carried out on the modules and programs which make up the system. This is also called Program Testing. The units in a system are the modules and routines that are assembled and integrated to perform a specific function. In a large system, many modules at different levels are needed. Unit testing focuses on the modules, independently of one another, to locate errors. The programs should be tested for correctness of the logic applied and should detect errors in coding.

For example, in the OBSE system, feeding the system with all combinations of data should test all the calculations. Valid and invalid data should be created and the programs should be made to process the data to catch errors. In the OBSE system, the complaint number consists of 16 digits, so during testing one should ensure that the programs do not accept anything other than a 16-digit code for the complaint number. Another example of a valid and invalid data check is that, if a 16-digit number is entered during the entry of a transaction and that number does not exist in the master file, or if the number refers to a complaint that has already been closed, then the programs should not allow the entry of such cases. All dates that are entered should be validated; no program should accept invalid dates. The checks that need to be incorporated are: in the month of February the date cannot be more than 29, and for months having only 30 days one should not be allowed to enter 31. All conditions present in the program should be tested. Before proceeding, one must make sure that all the programs are working independently.
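A minimal sketch of these unit-level checks is shown below, assuming a dd/MM/yyyy date format; the class and method names are illustrative and not part of the actual application. The main method drives the validator with valid and invalid sample data, as the paragraph above suggests.

import java.text.ParseException;
import java.text.SimpleDateFormat;

public class ComplaintInputValidator {

    // A complaint number must be exactly 16 digits.
    public static boolean isValidComplaintNo(String no) {
        return no != null && no.matches("\\d{16}");
    }

    // A date string must be a real calendar date (no 30 February, no 31 April).
    public static boolean isValidDate(String date) {
        SimpleDateFormat fmt = new SimpleDateFormat("dd/MM/yyyy");
        fmt.setLenient(false);              // reject impossible dates
        try {
            fmt.parse(date);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    // Drive the validator with valid and invalid sample data.
    public static void main(String[] args) {
        System.out.println(isValidComplaintNo("1234567890123456")); // true
        System.out.println(isValidComplaintNo("12345"));            // false: too short
        System.out.println(isValidComplaintNo("12345678901234AB")); // false: not all digits
        System.out.println(isValidDate("28/02/2023"));              // true
        System.out.println(isValidDate("30/02/2023"));              // false: February has at most 29 days
        System.out.println(isValidDate("31/04/2023"));              // false: April has only 30 days
    }
}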


4.1.1.2 System Testing

When unit tests are satisfactorily concluded, the system, as a complete entity, must be tested. At this stage, end users and operators become actively involved in testing. While testing, one should also look for discrepancies between the system and its original objectives, current specifications and system documentation.

For example, one module may expect the data item for the complaint number to be a numeric field, while other modules expect it to be a character data item. The system itself may not report this error, but the output may show unexpected results. A record may be created and stored in one module using the complaint number as a numeric field. If this record is later sought on retrieval with the expectation that it will be a character field, the record will not be found and the message "requested record not found" will be displayed.
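A minimal sketch of the mismatch described above is given here, assuming one module stores the record under a character (String) key while another looks it up with a numeric key; the map-based store is only illustrative and stands in for the actual persistence layer.

import java.util.HashMap;
import java.util.Map;

public class KeyMismatchDemo {
    public static void main(String[] args) {
        // Module A stores the record with the complaint number as a character key.
        Map<Object, String> complaintStore = new HashMap<Object, String>();
        complaintStore.put("1234567890123456", "Billing complaint record");

        // Module B retrieves it as a numeric key: the lookup fails and the
        // application would report "requested record not found".
        Object retrieved = complaintStore.get(1234567890123456L);
        System.out.println(retrieved);                               // null

        // Agreeing on a single data type for the complaint number fixes it.
        System.out.println(complaintStore.get("1234567890123456")); // found
    }
}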

System testing must also verify that file sizes are adequate and their indexes have been built properly. Sorting and rendering procedures assumed to be present in lower level modules must be tested at the systems level to see that they in fact exist and achieve the results modules expect.

4.1.1.3 Testing for Output

After performing the validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output in the specified format. The outputs generated or displayed by the system under consideration are tested by asking the users about the format required by them. Hence the output format is considered in two ways: one on screen and the other in printed format.


4.1.1.4 Validation Checking

Validation checks are performed on the following fields.

Text Field: The text field can contain only a number of characters less than or equal to its size. The text fields are alphanumeric in some tables and alphabetic in other tables. An incorrect entry always flashes an error message.

Numeric Field: The numeric field can contain only numbers from 0 to 9. An entry of any other character flashes an error message.

The individual modules are checked for accuracy and for what they have to perform. Each module is subjected to a test run along with sample data. The individually tested modules are then integrated into a single system. Testing involves executing the program with real data; the existence of any program defect is inferred from the output. The testing should be planned so that all the requirements are individually tested.
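The sketch below illustrates the two field-level rules just described. The class and method names are illustrative; the allowed character sets and the size limit are taken from the text, and the main method exercises the checks with sample values.

public class FieldValidator {

    // Text field: at most 'size' characters, alphanumeric or purely alphabetic.
    public static boolean isValidTextField(String value, int size, boolean alphabeticOnly) {
        if (value == null || value.length() > size) {
            return false;
        }
        String pattern = alphabeticOnly ? "[A-Za-z ]*" : "[A-Za-z0-9 ]*";
        return value.matches(pattern);
    }

    // Numeric field: digits 0 to 9 only; any other character is an error.
    public static boolean isValidNumericField(String value) {
        return value != null && value.matches("[0-9]+");
    }

    public static void main(String[] args) {
        System.out.println(isValidTextField("Madurai", 25, true));  // true
        System.out.println(isValidTextField("Zone9", 25, true));    // false: digit in an alphabetic field
        System.out.println(isValidNumericField("10450"));           // true
        System.out.println(isValidNumericField("10A50"));           // false: flashes an error message
    }
}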

4.2 Quality Assurance


Quality assurance consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. This is an umbrella activity that is applied throughout the engineering process. Software quality assurance encompasses:

Analysis, design, coding and testing methods and tools.
Formal technical reviews that are applied during each software engineering step.
A multitiered testing strategy.
Control of software documentation and the changes made to it.
A procedure to ensure compliance with software development standards.
Measurement and reporting mechanisms.


Quality Factors

The factors that affect quality can be categorized into two broad groups:

Factors that can be directly measured.
Factors that can be indirectly measured.

These factors focus on three important aspects of a software product:

Its operational characteristics.
Its ability to undergo change.
Its adaptability to new environments.

4.2.1 Generic Risks


The Online Business Service Engine distributes customer data across locales separated by many kilometres. The application transfers data belonging to customers of telecom organizations. The customer data transferred contains the personal information and account details of the customers. As the complaints are transferred over long distances, network problems might affect the business process. The data can be manipulated by the CSR based on the requests of customers, and the CSRs are authorized to provide credit back to the customers. Inaccurate calculations might therefore lead the organization into serious problems.

4.2.2 Security Technologies & Policies


OBSE facilitates providing customer information for solving the complaints of the respective customers across different geographical locations. The application transfers data belonging to customers of telecom organizations. The customer data transferred contains the personal information and account details of the customers. This information is transferred through SSL (Secure Sockets Layer), which provides data encryption.


The enterprise application in the J2EE environment provides implicit security services based on the user policies. The J2EE environment uses security realms and user policies to keep track of the users and limit their accessibility.
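As a sketch of how these two policies, encrypted transport and role-limited access, might be enforced at the web tier, the hypothetical servlet filter below redirects insecure requests to HTTPS and admits only the realm roles assumed here (CSR, QA, ADMIN). It is an illustration under those assumptions, not the application's actual security configuration, which in a J2EE deployment would normally be declared through the deployment descriptors.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: complaint data must travel over SSL, and only users
// in the assumed CSR, QA or ADMIN realm roles may reach the protected pages.
public class SecureAccessFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;

        // Enforce encrypted transport for customer data.
        if (!req.isSecure()) {
            resp.sendRedirect("https://" + req.getServerName() + req.getRequestURI());
            return;
        }

        // Limit accessibility based on the roles defined in the security realm.
        if (req.isUserInRole("CSR") || req.isUserInRole("QA") || req.isUserInRole("ADMIN")) {
            chain.doFilter(request, response);
        } else {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Access denied");
        }
    }

    public void destroy() { }
}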

4.3 System Implementation


Implementation is the stage of the project where the theoretical design is turned into a working system. At this stage the main work load, the greatest upheaval and the major impact on the existing system shifts to the user department. If the implementation is not carefully planned and controlled, it can cause chaos and confusion.

Implementation includes all those activities that take place to convert from the old system to the new one. The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification to an existing system. Proper implementation is essential to provide a reliable system that meets the organization's requirements. Successful implementation may not guarantee improvement in the organization using the new system, but improper installation will prevent it.

The process of putting the developed system in actual use is called system implementation. This includes all those activities that take place to convert from the old system to the new system. The system can be implemented only after thorough testing is done and if it is found to be working according to the specifications. The system personnel check the feasibility of the system.

The most crucial stage is achieving a successful new system and giving the user confidence that the new system will work efficiently and effectively. It involves careful planning, investigation of the current system and its constraints on implementation, and design of methods to achieve the changeover. The more complex the system being implemented, the more involved will be the system analysis and design effort required just for implementation. System implementation has three main aspects: education and training, system testing, and changeover.


The implementation stage involves the following tasks:

Careful planning.
Investigation of the system and constraints.
Design of methods to achieve the changeover.
Training of the staff in the changeover phase.
Evaluation of the changeover method.

The method of implementation and the time scale to be adopted are decided initially. Next the system is tested properly, and at the same time users are trained in the new procedures.

4.3.1 Implementation Procedures


After proper testing and validation, the question arises whether the system can be implemented or not. Implementation includes all those activities that take place to convert from the old system to the new one. The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification to an existing system. In either case, proper implementation is essential to provide a reliable system that meets the organization's requirements.

4.3.2 User Training


To achieve the objectives and benefits expected from a computer-based system, it is essential for the people who will be involved to be confident of their role in the new system. As systems become more complex, the need for education and training becomes more and more important.

Education is complementary to training. It brings life to formal training by explaining the background of the system and its resources to the users. Education involves creating the right atmosphere and motivating the user staff. Education sessions should encourage participation from all staff, with protection for individuals from group criticism. Education should start well before any development work, to enable users to maintain or regain the ability to participate in the development of their system.


Education information can make training more interesting and more understandable. The aim should always be to make individuals feel that they can still make important contributions, to explain how they participate in making system changes, and to show that the computer and computer staff do not operate in isolation but are part of the same organization.

Training on the Application Software

After providing the necessary basic training on computer awareness, the users have to be trained on the new application software. This training covers the underlying philosophy of the use of the new system, such as the screen flow, screen design, the type of help available on the screen, the types of errors that may occur while entering data, the corresponding validation checks at each entry, and the ways to correct the data entered. It should then cover the information needed by the specific users or groups to use the system or part of the system. This training may differ across different user groups and across different levels of hierarchy.

4.3.3 Operational Documentation


Once the implementation plan is decided, it is essential that the user of the system is made familiar and comfortable with the environment. Education involves creating the right atmosphere and motivating the user. Documentation describing the complete operation of the system is developed. The system is developed in such a way that the user can work with it in a consistent way. The system is developed to be user friendly, so that the user can operate it from the tips given in the application itself. Useful tips and guidance are given inside the application to help the user. Users have to be made aware of what can be achieved with the new system and how it improves their performance. The user should be given a general idea of the system before using it.


4.4 System Maintenance


A system should be created whose design is comprehensive and farsighted enough to serve current and projected user needs for several years to come. Part of the analyst's expertise should be in projecting what those needs might be and in building flexibility and adaptability into the system.

The better the system design, the easier it will be to maintain; this is a major concern, since software maintenance can prove to be very expensive. It is important to detect software design errors early on, as this is less costly than letting errors remain unnoticed until maintenance is necessary. Maintenance is performed most often to improve the existing software rather than to respond to a crisis or system failure. As user requirements change, software and documentation should be changed as part of the maintenance work.

Maintenance is also done to update software in response to changes made in the organization. This work is not as substantial as enhancing the software, but it must be done. The system could fail if it is not maintained properly.


Conclusion


5.1 Scope for Further Enhancement


The application developed is designed in such a way that any further enhancements can be done with ease. The system has the capability for easy integration with other systems, and new modules can be added to the existing system with little effort. Currently, the only way for the CSR to learn more about the complaints in the personal queue is from the team leader or the QA in a manual way. The Complaint Solving module can be enhanced by providing extra facilities. One of them is a provision for communication with the QA. Currently, while solving a complaint in the personal queue, the CSR gets help from the team leader manually. A consistent communication facility can be provided between the CSR and the team leader. This will help the CSRs a lot in knowing whether the complaints are solvable or not at the time of complaint retrieval itself.
