
INTRODUCTION

Steganography is the practice of hiding private or sensitive information within something that appears to be nothing out of the ordinary. The word steganography comes from the Greek steganos, meaning covered or secret, and graphein, meaning writing or drawing. Steganography therefore literally means covered writing. It is the art and science of hiding information in such a way that its presence cannot be detected and no one suspects that a communication is taking place. The secret information is encoded in a manner such that its very existence is concealed. Paired with existing communication methods, steganography can be used to carry out hidden exchanges.

Steganography is often confused with cryptography, because the two are similar in that both are used to protect important information. The difference between the two is that steganography hides information so that it appears as if no information is hidden at all. A person who views the object in which the information is hidden will have no idea that anything is concealed, and therefore will not attempt to decrypt it. What steganography essentially does is exploit human perception: human senses are not trained to look for files that have information hidden inside them, although software does exist that can detect such hidden content. The most common use of steganography is to hide a file inside another file.

OBJECTIVES
This project is developed for hiding information in any image file. Its scope is the implementation of steganography tools for hiding information, covering any type of information file and image file, along with the path where the user wants to save the image and the extracted file. The goal of steganography is covert communication, so a fundamental requirement of this steganography system is that the hidden message carried by the stego-media should not be perceptible to human beings. The other goal of steganography is to avoid drawing suspicion to the existence of a hidden message. This approach to information hiding has recently become important in a number of application areas. This project has the following objectives:
To produce a security tool based on steganography techniques.
To explore techniques of hiding data using the encryption module of this project.
To extract hidden data using the decryption module.

SYSTEM ANALYSIS

SYSTEM ANALYSIS
The Steganography system requires any type of image file and the information or message that is to be hidden. It has two modules: encrypt and decrypt. The Microsoft .Net framework provides a large set of tools and options that simplify programming. One of these .Net tools for pictures and images is automatic conversion of most picture types to BMP format. This software, called Steganography, is written in the C#.Net language and uses that tool, so you can hide your information in any type of picture without converting its format to BMP yourself (the software performs the conversion internally). Cryptography was created as a technique for securing the secrecy of communication, and many different methods have been developed to encrypt and decrypt data in order to keep a message secret. Unfortunately, it is sometimes not enough to keep the contents of a message secret; it may also be necessary to keep the existence of the message secret. The technique used to achieve this is called steganography. Steganography differs from cryptography in the sense that where cryptography focuses on keeping the contents of a message secret, steganography focuses on keeping the existence of a message secret.

Different kinds of steganography

Steganography can be applied to several kinds of carrier:

Text
Images
Audio/video
Protocol
Almost all digital file formats can be used for steganography, but the formats that are more suitable are those with a high degree of redundancy. Redundancy can be defined as the bits of an object that provide accuracy far greater than necessary for the object's use and display. The redundant bits of an object are those bits that can be altered without the alteration being detected easily. Image and audio files especially comply with this requirement, while research has also uncovered other file formats that can be used for information hiding.

Text Steganography
Hiding information in text is historically the most important method of steganography. An obvious method was to hide a secret message in every nth letter of every word of a text message. Only since the beginning of the Internet and all the different digital file formats has it decreased in importance. Text steganography using digital files is not used very often, since text files have a very small amount of redundant data.

Image Steganography
Given the proliferation of digital images, especially on the Internet, and given the large amount of redundant bits present in the digital representation of an image, images are the most popular cover objects for steganography. This report focuses on hiding information in images in the sections that follow.

Audio/Video Steganography
To hide information in audio files, techniques similar to those for image files are used. One technique unique to audio steganography is masking, which exploits the properties of the human ear to hide information unnoticeably: a faint but audible sound becomes inaudible in the presence of another, louder audible sound. This property creates a channel in which to hide information. Although nearly equal to images in steganographic potential, the larger size of meaningful audio files makes them less popular to use than images.

Protocol Steganography
The term protocol steganography refers to the technique of embedding information within messages and network control protocols used in network transmission. In the layers of the OSI network model there exist covert channels where steganography can be used. An example of where information can be hidden is the header of a TCP/IP packet, in fields that are either optional or never used.

Image steganography
As stated earlier, images are the most popular cover objects used for steganography. In the domain of digital images many different image file formats exist, most of them for specific applications. For these different image file formats, different steganographic algorithms exist.

Image definition
To a computer, an image is a collection of numbers that constitute different light intensities in different areas of the image. This numeric representation forms a grid, and the individual points are referred to as pixels. Most images on the Internet consist of a rectangular map of the image's pixels (represented as bits), recording where each pixel is located and what its colour is. These pixels are displayed horizontally, row by row.

The number of bits in a colour scheme, called the bit depth, refers to the number of bits used for each pixel. The smallest bit depth in current colour schemes is 8, meaning that 8 bits are used to describe the colour of each pixel. Monochrome and greyscale images use 8 bits for each pixel and are able to display 256 different colours or shades of grey. Digital colour images are typically stored in 24-bit files and use the RGB colour model, also known as true colour. All colour variations for the pixels of a 24-bit image are derived from three primary colours: red, green and blue, and each primary colour is represented by 8 bits. Thus, in one given pixel, there can be 256 different quantities of red, green and blue, adding up to more than 16 million combinations, resulting in more than 16 million colours. Not surprisingly, the more colours that can be displayed, the larger the file size.
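These colour and size figures can be checked with a few lines of arithmetic (the 800 × 600 image dimensions below are just an example):

```python
bits_per_channel = 8
values_per_channel = 2 ** bits_per_channel   # 256 intensities each of red, green, blue
total_colours = values_per_channel ** 3      # combinations across the three channels
print(values_per_channel)                    # 256
print(total_colours)                         # 16777216, i.e. more than 16 million

# Uncompressed size of an example 800 x 600 true-colour (24-bit) image:
width, height, bytes_per_pixel = 800, 600, 3
print(width * height * bytes_per_pixel)      # 1440000 bytes, roughly 1.4 MB
```

The last figure shows why compression, discussed next, matters: even a modest true-colour image occupies well over a megabyte uncompressed.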

Image Compression
When working with larger images of greater bit depth, the images tend to become too large to transmit over a standard Internet connection. In order to display an image in a reasonable amount of time, techniques must be incorporated to reduce the image's file size. These techniques use mathematical formulas to analyse and condense image data, resulting in smaller file sizes. This process is called compression. In images there are two types of compression: lossy and lossless. Both methods save storage space, but the procedures they implement differ. Lossy compression creates smaller files by discarding excess image data from the original image. It removes details that are too small for the human eye to differentiate, resulting in close approximations of the original image, although not an exact duplicate. An example of an image format that uses this compression technique is JPEG (Joint Photographic Experts Group). Lossless compression, on the other hand, never removes any information from the original image, but instead represents the data in mathematical formulas. The original image's integrity is maintained, and the decompressed image output is bit-by-bit identical to the original image input. The most popular image formats that use lossless compression are GIF (Graphical Interchange Format) and 8-bit BMP (a Microsoft Windows bitmap file). Compression plays a very important role in choosing which steganographic algorithm to use. Lossy compression techniques result in smaller image file sizes, but they increase the possibility that the embedded message may be partly lost, because excess image data will be removed. Lossless compression keeps the original digital image intact without any chance of loss, although it does not compress the image to as small a file size.

JPEG steganography
Originally it was thought that steganography could not be used with JPEG images, since they use lossy compression, which results in parts of the image data being altered. One of the major characteristics of steganography is that information is hidden in the redundant bits of an object, and since redundant bits are left out when using JPEG, it was feared that any hidden message would be destroyed. Even if one could somehow keep the message intact, it would be difficult to embed the message without the changes being noticeable, because of the harsh compression applied. However, properties of the compression algorithm have been exploited in order to develop a steganographic algorithm for JPEGs. One of these properties is exploited to make the changes to the image invisible to the human eye. During the DCT transformation phase of the compression algorithm, rounding errors occur in the coefficient data that are not noticeable. Although this property is what classifies the algorithm as lossy, it can also be used to hide messages. It is not feasible to embed information during the lossy stage itself, since the compression would destroy it in the process. Thus it is important to recognize that the JPEG compression algorithm is actually divided into lossy and lossless stages. The DCT and the quantization phase form part of the lossy stage, while the Huffman encoding used to further compress the data is lossless. Steganography can take place between these two stages. Using the same principle as LSB insertion, the message can be embedded into the least significant bits of the coefficients before applying the Huffman encoding. By embedding the information at this stage, in the transform domain, it is extremely difficult to detect, since it is not in the visual domain.
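The idea of embedding between the lossy and lossless stages can be sketched on a single block of quantised coefficients. The coefficient values below are made up for the example, the skip rule for small coefficients is an illustrative simplification, and the sketch is in Python rather than the project's C#.Net; a real implementation would take the block from an actual JPEG encoder:

```python
# A flattened 8x8 block of quantised DCT coefficients, mostly zeros,
# as is typical after quantisation. The first value is the DC coefficient.
quantised_block = [
    -26, -3, 4, 2, 0, 0, 0, 0,
     -4,  6, 2, 0, 0, 0, 0, 0,
      3, -2, 0, 0, 0, 0, 0, 0,
      2,  0, 0, 0, 0, 0, 0, 0,
] + [0] * 32

message_bits = [1, 0, 1, 1, 0, 1, 0, 0]

def embed(block, bits):
    out, it = list(block), iter(bits)
    for i, c in enumerate(out):
        if i == 0 or abs(c) < 2:      # skip the DC coefficient and small values;
            continue                  # embedding can never turn |c| >= 2 into < 2
        try:
            bit = next(it)
        except StopIteration:
            break
        if c > 0:
            out[i] = (c & ~1) | bit   # overwrite the magnitude's least significant bit
        else:
            out[i] = -(((-c) & ~1) | bit)
    return out

def extract(block, n):
    bits = []
    for i, c in enumerate(block):
        if i == 0 or abs(c) < 2:      # same skip rule keeps both sides in step
            continue
        bits.append(abs(c) & 1)
        if len(bits) == n:
            break
    return bits

stego = embed(quantised_block, message_bits)
print(extract(stego, 8))              # [1, 0, 1, 1, 0, 1, 0, 0]
```

Skipping zero and ±1 coefficients is what keeps embedding and extraction synchronised: an overwritten LSB can never turn a usable coefficient into a skipped one.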

Different File Formats


There are a large number of file formats (hundreds) used to represent an image, some more common than others. Among the most popular are:

GIF (Graphics Interchange Format) The most common image format on the Web. Stores 1 to 8-bit color or grayscale images.

TIFF (Tagged Image File Format) The standard image format found in most paint, imaging, and desktop publishing programs. Supports 1- to 24- bit images and several different compression schemes.

SGI Image Silicon Graphics' native image file format. Stores data in 24-bit RGB color.

Sun Raster Sun's native image file format; produced by many programs that run on Sun workstations.

PICT Macintosh's native image file format; produced by many programs that run on Macs. Stores up to 24-bit color.

BMP (Microsoft Windows Bitmap) Main format supported by Microsoft Windows. Stores 1-, 4-, 8-, and 24-bit images.

XBM (X Bitmap) A format for monochrome (1-bit) images common in the X Windows system.

JPEG File Interchange Format Developed by the Joint Photographic Experts Group, sometimes simply called the JPEG file format. It can store up to 24 bits of colour. Some Web browsers can display JPEG images inline (in particular, Netscape can), but this feature is not part of the HTML standard.

Steganography Methods

The different types of steganographic techniques available are:


1. Pure Steganography
2. Public key Steganography
3. Secret key Steganography

Pure Steganography: Pure Steganography is the process of embedding the data into the object without using any private keys. This type of Steganography depends entirely upon secrecy. It uses a cover image in which the data is to be embedded, the personal information to be transmitted, and encryption/decryption algorithms to embed the message into the image. This type of Steganography cannot provide strong security, because it is easy to extract the message if an unauthorised person knows the embedding method. It has one advantage: it avoids the difficulty of key sharing.

Figure 7. Pure Steganography process

Secret key Steganography: Secret key Steganography follows the same procedure as pure Steganography, except that it uses secret keys. It uses an individual key for embedding the data into the object, similar to a symmetric key cipher, and for decryption it uses the same key that was used for encryption. This type of Steganography provides better security than pure Steganography. The main problem with this type of steganographic system is sharing the secret key: if an attacker knows the key, it becomes easy to decrypt the message and access the original information.

Figure 8. Secret key Steganography Process

Public key Steganography: Public key Steganography uses two types of keys: one for embedding and another for extraction. The embedding key is a public key, which may be stored in a public database, and the corresponding private key is used to recover the message.

Figure 9. Public key Steganography Process

We have implemented the Secret Key Steganography technique in our project. The password shall be provided by the person who does the encryption and it has to be provided to decrypt the message from the image.
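The secret-key behaviour described above, with a single password used for both directions, can be sketched with a password-derived keystream. The SHA-256 counter construction, the example password, and the function names below are illustrative assumptions, not the cipher actually used in the project:

```python
import hashlib

def keystream(password: str, length: int) -> bytes:
    # Derive a byte stream from the password by hashing password + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, password: str) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    ks = keystream(password, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

secret = b"meet at dawn"
cipher = xor_crypt(secret, "hunter2")          # this is what would be embedded
assert xor_crypt(cipher, "hunter2") == secret  # same password recovers the message
print(xor_crypt(cipher, "wrong"))              # a wrong password yields gibberish
```

Because the transformation is symmetric, only the password needs to be shared between the two parties, which is exactly the key-distribution problem noted above.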

Image Encoding Techniques


Information can be hidden in many different ways in images. Straight message insertion can be done, which simply encodes every bit of information in the image. More complex encoding can be done to embed the message only in "noisy" areas of the image that will attract less attention. The message may also be scattered randomly throughout the cover-image. The most common approaches to information hiding in images are:

Least significant bit (LSB) insertion
Masking and filtering techniques
Algorithms and transformations

Least significant bit insertion

One of the most common techniques used in steganography today is called least significant bit (LSB) insertion. This method is exactly what it sounds like: the least significant bits of the cover-image are altered so that they form the embedded information.

Masking and filtering

Masking and filtering techniques hide information by marking an image in a manner similar to paper watermarks. Because watermarking techniques are more integrated into the image, they may be applied without fear of image destruction from lossy compression. By covering, or masking, a faint but perceptible signal with another to make the first imperceptible, we exploit the fact that the human visual system cannot detect slight changes in certain parts of the image. Technically, watermarking is not a steganographic form: strictly, steganography conceals data in the image, whereas watermarking extends the image information and becomes an attribute of the cover image, providing licence, ownership or copyright details. Masking techniques are more suitable for use in lossy JPEG images than LSB insertion, because of their relative immunity to image operations such as compression and cropping.

Algorithms and transformations

Because JPEG images are high quality colour images with good compression, it is desirable to use them across networks such as the Internet. Indeed, JPEG images are abundant on the Internet. JPEG images use the discrete cosine transform (DCT) to achieve compression. DCT is a lossy compression transform, because the cosine values cannot be calculated precisely and rounding errors may be introduced. Variances between the original data and the recovered data depend on the values and methods used to calculate the DCT.

Existing System
The existing system hides an image inside another image. In both methods there are a number of loopholes through which hackers can attack the message; the hackers may change or damage the entire message, so there is no safety in transferring the data through an image alone. It is possible to combine the techniques by encrypting the message using cryptography and then hiding the encrypted message using steganography. The resulting stego-image can be transmitted without revealing that secret information is being exchanged. Steganography pays attention to the degree of invisibility, while watermarking pays most of its attention to the robustness of the message and its ability to withstand attacks of removal, such as image operations in the case of images being watermarked.

Drawbacks
It provides less security, because the secret messages can be intercepted by hackers and competitive companies.
It does not have proper reliability.
There is no proper acknowledgement.
The authority is not properly maintained.

Proposed System
To overcome the limitations of the existing system, a new system has been proposed using C#.Net. In this system we use two methods to add security to the file. The first method, cryptography, is a technique of hiding the message in a text file so that unauthorized users cannot get the original information; on the receiving end, only by knowing the key (password) can the user decrypt the message. The second method, steganography, is a computer technique similar to text encryption, but the message is hidden in the picture at the pixel level. To provide a high degree of correctness and effectiveness, and to reduce the workload, it is very important to computerize the system. A computerized system is easy to handle and provides high accuracy in its output. Since the software is developed for a multi-user environment, password protection is provided to protect it from unauthorized users.

Basic Idea of Proposed System
Least Significant Bit

Least significant bit (LSB) insertion is a common, simple approach to embedding information in a cover image. The least significant bit (in other words, the 8th bit) of some or all of the bytes inside an image is changed to a bit of the secret message. When using a 24-bit image, a bit of each of the red, green and blue colour components can be used, since they are each represented by a byte. In other words, one can store 3 bits in each pixel. An 800 × 600 pixel image can thus store a total of 1,440,000 bits, or 180,000 bytes, of embedded data. For example, a grid for 3 pixels of a 24-bit image can be as follows:

(00101101 00011100 11011100) (10100110 11000100 00001100) (11010010 10101101 01100011)

When the number 200, whose binary representation is 11001000, is embedded into the least significant bits of this part of the image, the resulting grid is as follows:

(00101101 00011101 11011100) (10100110 11000101 00001100) (11010010 10101100 01100011)

Although the number was embedded into the first 8 bytes of the grid, only 3 of those bits (in the second, fifth and eighth bytes) actually needed to be changed to match the embedded message. On average, only half of the bits in an image need to be modified to hide a secret message using the maximum cover size. Since there are 256 possible intensities of each primary colour, changing the LSB of a pixel results in only a small change in the intensity of the colours. These changes cannot be perceived by the human eye, and thus the message is successfully hidden. With a well-chosen image, one can even hide the message in the least as well as the second-to-least significant bit and still not see the difference.

In the above example, consecutive bytes of the image data, from the first byte to the end of the message, are used to embed the information. This approach is very easy to detect. A slightly more secure system is for the sender and receiver to share a secret key that specifies which pixels to change. Should an adversary suspect that LSB steganography has been used, he has no way of knowing which pixels to target without the secret key.

In its simplest form, LSB makes use of BMP images, since they use lossless compression. Unfortunately, to be able to hide a secret message inside a BMP file, one would require a very large cover image. Nowadays, BMP images of 800 × 600 pixels are not often used on the Internet and might arouse suspicion. For this reason, LSB steganography has also been developed for use with other image file formats.
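The worked example above can be turned into runnable code. The project itself is written in C#.Net; the Python sketch below only illustrates the same bit operations, hiding the value 200 in the first 8 cover bytes:

```python
def embed_lsb(cover: list[int], value: int, nbits: int = 8) -> list[int]:
    stego = list(cover)
    for i in range(nbits):
        bit = (value >> (nbits - 1 - i)) & 1   # take bits MSB-first, as in the example
        stego[i] = (stego[i] & ~1) | bit       # overwrite the least significant bit
    return stego

def extract_lsb(stego: list[int], nbits: int = 8) -> int:
    value = 0
    for i in range(nbits):
        value = (value << 1) | (stego[i] & 1)  # reassemble the hidden bits
    return value

# The 3 example pixels (9 bytes) from the grid above:
cover = [0b00101101, 0b00011100, 0b11011100,
         0b10100110, 0b11000100, 0b00001100,
         0b11010010, 0b10101101, 0b01100011]
stego = embed_lsb(cover, 200)
print(extract_lsb(stego))                      # 200
changed = sum(a != b for a, b in zip(cover, stego))
print(changed)                                 # 3 bytes actually changed
```

Counting the changed bytes confirms the claim in the text: embedding 8 bits alters only 3 of the 8 cover bytes here, since the other LSBs already matched the message.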

Encryption
Encryption is done to provide an extra security level to our application. Even if the secret is compromised and someone comes to know that there is secret data in the image, he still cannot view it because of the encryption.

Steganography
The steganography part is done using the algorithms described above. The main requirements are that the picture should not be visibly distorted and that the size of the modified image should remain the same as that of the original.

Decryption
The decryption part is the exact opposite of the encryption part described above. It requires the user to provide the correct password, and the data in the image will then be decrypted.

Justifications of the proposed system
It provides high security and reliability.
Authorization is properly enforced.
It gives assurance of the security of the data.
There is little chance of hacking, because the data is in the form of encrypted text.

Benefits of the proposed system
It increases the efficiency of the system and reduces manual work time.
It is a more effective and efficient way to transfer a file to the receiving end.
It is an easy method of managing the information so that hackers cannot understand the message.

FEASIBILITY STUDY

FEASIBILITY STUDY
Feasibility analysis is the procedure for identifying the candidate systems, evaluating them and selecting the most feasible one. This is done by investigating the existing system in the area under investigation and generating ideas about a new system. It is a test of a system proposal according to its workability, impact on the organization, ability to meet user needs and effective use of resources. The objective of a feasibility study is not to solve the problem but to acquire a sense of its scope. Feasibility analysis involves 8 steps:
1. Form a project team and appoint a project leader.
2. Prepare system flow charts.
3. Enumerate potential candidate systems.
4. Describe and identify the characteristics of each candidate system.
5. Determine and evaluate the performance and cost effectiveness of each candidate system.
6. Weigh system performance and cost data.
7. Select the best candidate system.
8. Prepare and report the final project directive to management.

Three key considerations are involved in the feasibility analysis: technical, economic and behavioral.

Economic Feasibility
Economic feasibility is the most frequently used method for evaluating the effectiveness of a proposed system. The procedure is to determine the benefits and savings that are expected from the proposed system and compare them with the costs. If the expected benefits equal or exceed the costs, the proposed system is judged economically feasible. Our package is economically feasible; it can be run on any system meeting the basic requirements without any additional costs. The cost of Steganography is the cost of developing and implementing the software, which is very low compared to the existing system, and the benefit is far greater. So Steganography is economically feasible.

Technical Feasibility
Technical feasibility is assessed by comparing the hardware, software, etc. of the existing system, to determine to what extent the existing system can support the proposed system. This also deals with financial considerations. If the proposed system can be implemented within the existing system's specification, then it is technically feasible. Considering all these aspects, Steganography is technically feasible.

Behavioral Feasibility
Behavioral feasibility refers to the manner in which the users who are going to use the system will accept it. If it is user friendly, then people can interact with the system more efficiently. If a proposed system demanded a huge amount of change from its users, it would not be behaviorally feasible. But here Steganography is technically, economically and behaviorally feasible, so the proposed system is efficient to implement.

ANALYSIS MODELING

UNIFIED MODELING LANGUAGE (UML) DIAGRAMS


Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created, by the Object Management Group. It was first added to the list of OMG adopted technologies in 1997, and has since become the industry standard for modeling software-intensive systems. Since UML is not a methodology, it does not require any formal work products. Yet it does provide several types of diagrams that, when used within a given methodology, increase the ease of understanding an application under development. There is more to UML than these diagrams, but for our purposes here the diagrams offer a good introduction to the language and the principles behind its use. By placing standard UML diagrams in a methodology's work products, we make it easier for UML-proficient people to join the project and quickly become productive. The most useful standard UML diagrams are: use case diagram, class diagram, sequence diagram, state chart diagram, activity diagram, component diagram, and deployment diagram.

Use-case diagram
A use case illustrates a unit of functionality provided by the system. The main purpose of the use-case diagram is to help development teams visualize the functional requirements of a system, including the relationship of "actors" (human beings who will interact with the system) to essential processes, as well as the relationships among different use cases. Use-case diagrams generally show groups of use cases: either all use cases for the complete system, or a breakout of a particular group of use cases with related functionality. To show a use case on a use-case diagram, you draw an oval in the middle of the diagram and put the name of the use case in the center of, or below, the oval. To draw an actor (indicating a system user) on a use-case diagram, you draw a stick person to the left or right of your diagram (and just in case you're wondering, some people draw prettier stick people than others). Use simple lines to depict relationships between actors and use cases. A use-case diagram is typically used to communicate the high-level functions of the system and the system's scope. With clear and simple use-case descriptions provided on a diagram, a project sponsor can easily see whether needed functionality is present in the system.

Use-case diagram: the general user interacts with the "Hide your secret" system through the following use cases:
Import a file for encryption
Import a file for decryption
Enter text to be embedded
Perform encryption/decryption
Save decrypted text in a text file
Exit the application

Class diagram
The class diagram shows how the different entities (people, things, and data) relate to each other; in other words, it shows the static structures of the system. A class diagram can be used to display logical classes, which are typically the kinds of things the business people in an organization talk about: rock bands, CDs and radio play, or loans, home mortgages, car loans and interest rates. Class diagrams can also be used to show implementation classes, which are the things that programmers typically deal with. An implementation class diagram will probably show some of the same classes as the logical class diagram. The implementation class diagram won't be drawn with the same attributes, however, because it will most likely have references to things like Vectors and HashMaps. A class is depicted on the class diagram as a rectangle with three horizontal sections. The upper section shows the class's name, the middle section contains the class's attributes, and the lower section contains the class's operations or methods.

DATAFLOW DIAGRAMS (DFD)

The data flow diagram is used for classifying system requirements into the major transformations that will become programs in system design. This is the starting point of the design phase, which functionally decomposes the required specifications down to lower levels of detail. It consists of a series of bubbles joined together by lines: bubbles represent the data transformations, and lines represent the logical flow of data. Data can trigger events and can be processed into useful information. System analysis recognizes the central role of data in organizations. Dataflow analysis tells a great deal about how organizational objectives are accomplished: it studies the use of data in each activity and documents these findings in DFDs, showing the activities of a system from the viewpoint of the data, where it originates, how it is used or changed, and where it goes, including the stops along the way to its destination. The components of the dataflow strategy span both requirements determination and systems design. The first part is called dataflow analysis. As the name suggests, we did not use the dataflow analysis tools exclusively for the analysis stage, but also in the design phase with documentation.

Notations used in data flow diagrams

A logical dataflow diagram can be drawn using only four simple notations, i.e. special symbols or icons and the annotation that associates them with a specific system:

Process: describes how input data is converted to output data.
Data Store: describes the repositories of data in a system.
Data Flow: describes the data flowing between processes, data stores and external entities.
Source: an external entity causing the origin of data.
Sink: an external entity which consumes the data.

Context Diagram: The top-level diagram is often called a context diagram. It contains a single process, but it plays a very important role in studying the current system. The context diagram defines the system that will be studied in the sense that it determines the boundaries: anything that is not inside the process identified in the context diagram will not be part of the system study. It represents the entire software element as a single bubble, with input and output data indicated by incoming and outgoing arrows respectively.

Data flow diagram

Level 0

User → Steganography → User

Level 1

User → (1) Encrypt / (2) Decrypt → User

Types of data flow diagrams DFDs are two types 1. Physical DFD Structured analysis states that the current system should be first understand correctly. The physical DFD is the model of the current system and is used to ensure that the current system has been clearly understood. Physical DFDs shows actual devices, departments, people etc., involved in the current system

2. Logical DFD
Logical DFDs are the model of the proposed system. They should clearly show the requirements on which the new system is to be built. Later, during the design activity, they are taken as the basis for drawing the system's structure charts.

Rules for constructing a data flow diagram
1. Arrows should not cross each other.
2. Squares, circles and files must bear names.
3. Decomposed data flow squares and circles can have the same names.
4. Choose meaningful names for data flows.
5. Draw all data flows around the outside of the diagram.

SYSTEM SPECIFICATION
The system specification specifies the hardware, software and database used to develop the system.

Software specification

Platform : Windows 7 Professional
Front End : ASP.NET with C# coding
Browser : Internet Explorer 6.0

Tools/Platform

Windows 7
This project works on Microsoft Windows 7 Professional. Windows 7 provides a suitable environment for the smooth functioning of the project and makes personal computing easy. Power, performance, a bright new look and plenty of help are provided when you need them. Windows 7 has it all, along with unmatched dependability and security.

Fast Wake Up and Fast Boot enable a Windows 7 machine to wake up faster when it has been put in hibernate or standby mode; the fast boot feature allows Windows 7 to boot up faster when it is powered on from a cold boot.

Wake on LAN for Wireless brings the well-known wired Ethernet feature to wireless networks. Think about it: an administrator can wake up thousands of sleeping computers, not even wired to the network, using Wake on LAN for wireless.

Virtualization Enhancements. With the Windows 7 virtualization enhancements, when you run Windows 7 in a VDI (virtual desktop infrastructure) mode, the end user enjoys a higher-quality experience. To visualize how this works, suppose you have a Hyper-V server running Windows 7 as a guest virtual machine, and end users running thin-client devices connect to the Windows 7 guest VMs on that server. Previously, with Windows XP or Vista, there were limitations to the user's experience compared to a traditional desktop; with Windows 7 many of these limitations are removed.

Fix a Network Problem. One of my favorite changes to Windows 7 networking is the update to Vista's diagnose and repair. In Windows 7, if you want assistance fixing a network issue, you just click Fix a network problem. Sounds simple and clear, right? That's what I like about it.

QoS Enhancements. While Quality of Service (QoS) is not something that end users think about, they do see the results if QoS is not working. Windows 7 offers a number of QoS enhancements. URL-based QoS is one of them: since many mission-critical enterprise applications have been moved into hosted Web environments, URL-based QoS gives IT administrators the ability to prioritize those mission-critical Web applications over, say, general Web surfing.

HomeGroup. The best new Windows 7 networking feature for home and small-office users is the HomeGroup feature. Essentially, a homegroup is a simple way to link the computers on your home network together so that they can share pictures, music, videos, documents, and printers. A single password is used to access the homegroup, making it easy to create and to connect to.

Language: ASP.NET
ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications, and it offers several important advantages over previous Web development models. ASP.NET is not just a simple upgrade or the latest version of ASP: it combines unprecedented developer productivity with performance, reliability, and ease of deployment, and it redesigns the whole development process. It is still easy to grasp for newcomers, but it provides many new ways of managing projects. Below are the features of ASP.NET.

Enhanced Performance. ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box. This amounts to dramatically better performance before you ever write a line of code.

World-Class Tool Support. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.

Power and Flexibility. Because ASP.NET is based on the common language runtime, the power and flexibility of that entire platform are available to Web application developers. The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose the language that best applies to your application, or partition your application across many languages.

Simplicity. ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration. For example, the ASP.NET page framework allows you to build user interfaces that cleanly separate application logic from presentation code, and to handle events in a simple, Visual Basic-like forms-processing model. Additionally, the common language runtime simplifies development with managed-code services such as automatic reference counting and garbage collection.

Manageability. ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because configuration information is stored as plain text, new settings may be applied without the aid of local administration tools. This "zero local administration" philosophy extends to deploying ASP.NET Framework applications as well.

Scalability and Availability. ASP.NET has been designed with scalability in mind, with features specifically tailored to improve performance in clustered and multiprocessor environments. Further, processes are closely monitored and managed by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which helps keep your applications constantly available to handle requests.

Easy Deployment. ASP.NET takes the pain out of deploying server applications: "no touch" application deployment.

ASP.NET dramatically simplifies installation of your application. With ASP.NET, you can deploy an entire application as easily as an HTML page: just copy it to the server. There is no need to run regsvr32 to register any components, and configuration settings are stored in an XML file within the application.

Dynamic Update of Running Applications. ASP.NET lets you update compiled components without restarting the web server. In the past, with classic COM components, the developer would have to restart the web server each time an update was deployed. With ASP.NET, you simply copy the component over the existing DLL; ASP.NET automatically detects the change and starts using the new code.

Easy Migration Path. You don't have to migrate your existing applications to start using ASP.NET. ASP.NET runs on IIS side by side with classic ASP on Windows 2000 and Windows XP platforms. Your existing ASP applications continue to be processed by ASP.DLL, while new ASP.NET pages are processed by the new ASP.NET engine. You can migrate application by application, or even single pages, and ASP.NET lets you continue to use your existing classic COM business components.

XML Web Services. XML Web services allow applications to communicate and share data over the Internet, regardless of operating system or programming language. ASP.NET makes exposing and calling XML Web services simple: any class can be converted into an XML Web service with just a few lines of code, and can be called by any SOAP client. Likewise, ASP.NET makes it incredibly easy to call XML Web services from your application; no knowledge of networking, XML, or SOAP is required.

Mobile Web Device Support. ASP.NET Mobile Controls let you easily target cell phones, PDAs and over 80 mobile Web devices. You write your application just once, and the mobile controls automatically generate WAP/WML, HTML, or iMode.
Customizability and Extensibility. ASP.NET delivers a well-factored architecture that allows developers to "plug in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime

with your own custom-written component. Implementing custom authentication or state services has never been easier.

Security. With built-in Windows authentication and per-application configuration, you can be assured that your applications are secure.

Language Support The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual Basic, and JScript. The exercises and code samples in this tutorial demonstrate how to use C#, Visual Basic, and JScript to build .NET applications.

Hardware Specification
Selection of the hardware configuration is a very important task in software development. Insufficient random-access memory may adversely affect the speed and efficiency of the entire project; the processor should be powerful enough to handle all the operations, and the hard disk should have sufficient capacity to store the project's database.
C.P.U : Intel Pentium II processor
Primary memory : 128 MB RAM
Clock speed : 800 MHz
System bus : 32 bit
HDD : 5 GB
Keyboard : standard 108-key enhanced keyboard
Mouse : Logitech
Monitor : SVGA color

SYSTEM DESIGN

System design is the second phase of the software life cycle. The system goes through logical and physical states of development. The user-oriented performance specification is expanded into a design specification while designing the needed system. The design phase begins when the Requirements Specification document for the software to be developed is available. Whereas the requirements specification activity is entirely in the problem domain, design is the first step in moving from the problem domain to the solution domain. Design is essentially the bridge between the requirements specification and the final solution satisfying those requirements.

INPUT DESIGN
Input design is the process of converting a user-oriented description of the inputs to a computer-based business system into a programmer-oriented specification. The design decisions for handling input specify how data are accepted for computer processing. Input design is a part of the overall design that needs careful attention. The collection of input data is considered to be the most expensive part of the system design: since the inputs have to be planned so as to get the relevant information, extreme care is taken to obtain the pertinent information, because if the data going into the system is incorrect, the processing and outputs will magnify these errors. The goal of designing input data is to make data entry as easy, logical and error-free as possible. The objectives of input design are:
To produce a cost-effective method of input.
To ensure validation: effort has been made to ensure that input data remains accurate from the stage at which it is recorded and documented to the stage at which it is accepted by the computer. Validation procedures are also present to detect errors in data input that are beyond control procedures. Validation procedures are designed to check each record, data item or field against certain criteria.
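The validation objective above can be sketched as a small helper that screens a candidate cover-image path before any processing. The InputValidator class, its method name and the allowed extensions are assumptions for illustration, not taken from the project source.

```java
import java.util.Set;

// Hypothetical input-validation helper: checks a candidate cover-image
// path against simple criteria before the hiding process begins.
class InputValidator {
    private static final Set<String> ALLOWED = Set.of("bmp", "png", "jpg", "jpeg");

    static boolean isValidCoverImage(String path) {
        if (path == null || path.trim().isEmpty()) {
            return false;                        // reject empty input
        }
        int dot = path.lastIndexOf('.');
        if (dot < 0 || dot == path.length() - 1) {
            return false;                        // no file extension at all
        }
        String ext = path.substring(dot + 1).toLowerCase();
        return ALLOWED.contains(ext);            // accept only known image types
    }

    public static void main(String[] args) {
        System.out.println(isValidCoverImage("cover.png"));  // true
        System.out.println(isValidCoverImage("notes.txt"));  // false
    }
}
```

Rejecting bad input at this stage is exactly the "validation before acceptance" the objectives describe: an error caught here never reaches the processing stage where it would be magnified.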

OUTPUT DESIGN
The output design phase of the system design is concerned with the conveyance of information to the end users in a user-friendly manner. The output design should be efficient and intelligible, so that the system's relationship with the end user is improved and the process of decision making is thereby enhanced. Output design is an ongoing activity almost from the beginning of the project; an efficient and well-defined output design improves the relation between the system and the user. The primary considerations in the design of the output are the information requirements and the objectives of the end user. Computer output is the most important and direct source of information to the user. Efficient, intelligible output design improves the system's relations with the user and helps in decision making. Printouts should be designed around the output requirements of the user. Outputs also provide a means of storage, by copying the results for later reference in consultations. There is a chance that some of the end users will not actually operate the input data through workstations, but will only use the output from the system.

The two phases of output design are:
Output definition
Output specification
Output definition takes into account the type of output contents, its frequency and its volume, and the appropriate output media are determined. Once the output media are chosen, the detailed specifications of the output documents are determined during the logical design stage itself; the physical design stage takes the outline of the output from the logical design and produces the output as specified during the logical phase.

The outputs that the proposed system generates are of the following types:
External outputs
Internal outputs
External outputs are those that are meant for use outside the organization itself. All outputs are in the form of documents or reports; the different reports identified during the analysis are designed using a proper design methodology. Output is the most important and direct source of information to the users.

The system output may be any of the following:
A report
A document
A message
DATA DESIGN
The data design transforms the information domain model created during analysis into the data structures that will be required to implement the software. The data objects and relationships defined in the entity-relationship diagram and the detailed data content depicted in the data dictionary provide the basis for the data design activity. Part of data design may occur in conjunction with the design of the software architecture; more detailed data design occurs as each software component is designed. The structure of data has always been an important part of software design. At the program component level, the design of data structures and the associated algorithms required to manipulate them is essential to the creation of high-quality applications. At the application level, the translation of data models into a database is essential to achieving the business objectives of a system. At the business level, the collection of information stored in disparate databases and reorganized into a data warehouse enables the mining or knowledge discovery that can have an impact on the success of the business itself. In every case data design plays an important role. There are five major steps in the design process; the first four are usually done on paper, and finally the design is implemented:
Identify the entities and relationships.
Identify the data that is needed for each entity and relationship.
Resolve the relationships.
Verify the design.
Implement the design.
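As a sketch of the data that might be identified for one entity in this project, the secret file to be hidden can be modelled as a payload that carries the metadata the extractor will later need. The class name, field names and the header format below are assumptions for illustration, not taken from the project source.

```java
import java.nio.charset.StandardCharsets;

// Illustrative result of the data-design step: the secret file is wrapped
// with the metadata (name, length) the extraction module would need.
class HiddenPayload {
    final String fileName;   // original name of the hidden file
    final byte[] data;       // raw bytes of the hidden file

    HiddenPayload(String fileName, byte[] data) {
        this.fileName = fileName;
        this.data = data;
    }

    // Serialise as a "name|length|" text header followed by the raw bytes,
    // so a decoder can recover both the file name and its size.
    byte[] toBytes() {
        byte[] header = (fileName + "|" + data.length + "|")
                .getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[header.length + data.length];
        System.arraycopy(header, 0, out, 0, header.length);
        System.arraycopy(data, 0, out, header.length, data.length);
        return out;
    }

    public static void main(String[] args) {
        HiddenPayload p = new HiddenPayload("secret.txt", new byte[] {1, 2, 3});
        System.out.println(p.toBytes().length);  // 13-byte header + 3 data bytes = 16
    }
}
```

Designing this structure on paper first (step 2 above) fixes what the embedding and extraction modules must agree on before any code is written.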

INTERFACE DESIGN
The user interface design creates an effective communication medium between a human and a computer. Following a set of interface design principles, the design identifies interface objects and actions and then creates a screen layout that forms the basis for a user-interface prototype. A software engineer designs the user interface by applying an iterative process that draws on predefined design principles. User interface design begins with the identification of user, task and environmental requirements. Once user tasks have been identified, user scenarios are created and analyzed to define a set of interface objects and actions. These form the basis for the creation of a screen layout that depicts the graphical design and placement of icons, the definition of descriptive screen text, the specification and titling of windows, and the specification of major and minor menu items. Tools are used to prototype and ultimately implement the design model, and the result is evaluated for quality.

ARCHITECTURAL DESIGN
Architectural design represents the structure of data and program components that are required to build a computer-based system. It considers the architectural style that the system will take, the structure and properties of the components that constitute the system, and the interrelationships that occur among all architectural components of a system. Although a software engineer can design both data and architecture, the job is often allocated to specialists when large, complex systems are to be built. A database or data-warehouse designer creates the data architecture for a system, and the system architect selects an appropriate architectural style for the requirements derived during system engineering and software requirements analysis. Architectural design begins with data design and then proceeds to the derivation of one or more representations of the architectural structure of the system. An architectural model encompassing data architecture and program structure is created during architectural design, and in addition, component properties and relationships are described.

SYSTEM TESTING
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Testing presents an interesting anomaly for the software engineer.

Testing Objectives include:


1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an undiscovered error.

Testing Principles:
All tests should be traceable to end-user requirements.
Tests should be planned long before testing begins.
Testing should begin on a small scale and progress towards testing in the large.
Exhaustive testing is not possible.
To be most effective, testing should be conducted by an independent third party.

Types of testing
Unit Testing

Unit testing focuses verification effort on the smallest unit of software design, the module, to check whether each module works properly and gives the desired outputs for the given inputs. All validations and conditions are tested at the module level in the unit test. Control paths are tested to ensure that information flows properly into and out of the program unit under test. Boundary conditions are tested to ensure that the module operates correctly at its boundaries. Exercising all independent paths through the control structure ensures that every statement in a module has been executed at least once.

Integration Testing
The major concerns of integration testing are developing an incremental strategy that will limit the complexity of interactions among components as they are added to the system, developing an implementation and integration schedule that will make the modules available when needed, and designing test cases that will demonstrate the viability of the evolving system. Though each program works individually, the programs should also work after being linked together. This is also referred to as interfacing. Data may be lost across an interface, and one module can have an adverse effect on another. Subroutines, after linking, may not perform the function expected by the main routine. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces. In this testing, the programs are constructed and tested in small segments. Here our objective is to edit, compile and execute the programs within a single platform. Using the integration test plan prepared in the design phase, the integration test is carried out and all the errors found in the system are corrected before the next testing step.

System Testing
When a system is developed, it is hoped that it performs properly. In practice, however, some errors always occur. The main purpose of testing an information system is to find the errors and correct them; a successful test is one which finds an error. The main objectives of system testing are:
To ensure that during operation the system will perform as per specifications.
To make sure that the system meets users' requirements during operation.
To verify that the controls incorporated in the system function as intended.
To see that when correct inputs are fed to the system the outputs are correct.
To make sure that during operation incorrect inputs and outputs will be detected.
The scope of a system test should include both manual and computerized operations. System testing is a comprehensive evaluation of the programs, manual procedures, computer operations and controls: the process of checking whether the developed system works according to the original objectives and requirements. All testing needs to be conducted in accordance with the test conditions specified earlier.
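As an illustration of the module-level (unit) tests described above, the sketch below exercises the kind of smallest unit a steganography tool contains, including its boundary values. The LsbCodec class and its methods are hypothetical, not taken from the project source.

```java
// Hypothetical smallest unit under test: embed one secret bit into the
// least-significant bit (LSB) of a pixel byte, and read it back.
class LsbCodec {
    static int embedBit(int pixelByte, int bit) {
        return (pixelByte & 0xFE) | (bit & 1);   // clear the LSB, set the secret bit
    }

    static int extractBit(int pixelByte) {
        return pixelByte & 1;                    // read the LSB back
    }

    // Unit-test style checks: known inputs, boundary values included.
    public static void main(String[] args) {
        assert embedBit(0xFF, 0) == 0xFE;           // bit cleared at upper boundary
        assert embedBit(0x00, 1) == 0x01;           // bit set at lower boundary
        assert extractBit(embedBit(0x7C, 1)) == 1;  // embed/extract round trip
        System.out.println("all unit checks passed");
    }
}
```

Each check feeds a known input to the module and compares the output against the expected value, which is exactly the "desired outputs for the given inputs" criterion of unit testing.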

Password testing
When the user enters the user name and the password, they are validated by checking them against the registered user name and password in the database. Only if they match is the user allowed to access the page; otherwise access is denied, thereby providing strong security.
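The password check described above can be sketched as follows. The LoginCheck class, the in-memory user table and the plain-text comparison are simplifying assumptions for illustration; a real system would compare salted password hashes stored in the database, never plain text.

```java
import java.util.Map;

// Illustrative sketch of the password-testing check described above.
class LoginCheck {
    // Stand-in for the registered-users table in the database.
    private static final Map<String, String> USERS = Map.of("alice", "s3cret");

    static boolean isAuthorised(String user, String password) {
        String stored = USERS.get(user);
        return stored != null && stored.equals(password);  // match grants access
    }

    public static void main(String[] args) {
        System.out.println(isAuthorised("alice", "s3cret")); // true
        System.out.println(isAuthorised("alice", "wrong"));  // false
    }
}
```

Password testing then consists of feeding matching and non-matching credentials and confirming that access is granted only in the first case.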

Data validation testing


Data validation checking is done to see whether the corresponding entries made in the different tables are correct. Proper validation checks are done on insertion and updating of tables, in order to see that no duplication of data has occurred; if any such case arises, a proper warning message is displayed. Double confirmation is requested before the administrator deletes data, in order to get positive results and to see that no data is deleted by accident.

White-box testing is a test-case design method that uses the control structure of the procedural design to derive test cases. Test cases are derived that:
Guarantee that all independent paths within a module have been exercised at least once.
Exercise all logical decisions on their true and false sides.
Exercise all loops at their boundaries and within their operational bounds.
Exercise internal data structures to ensure their validity.

After all this, each module was tested, the tested modules were linked, and integration testing was carried out.

DEBUGGING
Debugging is the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them. Effective testing early in the process translates into long-term cost savings from a reduced number of errors. Backups are needed in case the system fails or goes down. The usability test verifies the user-friendly nature of the system. System testing is designed to uncover weaknesses that were not found in the earlier tests. This includes forced system failure and validation of the total system as its users will operate it in the operational environment. Generally it begins with a low volume of transactions based on live data, and the volume is increased until the maximum level for each transaction type is reached. The total system is also tested for recovery and fallback after major failures, to ensure that no data are lost during an emergency. All this is done with the old system still in operation; after the candidate system passes the test, the old system is discontinued. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. Logic errors deal with incorrect data fields, out-of-range items, and invalid combinations.
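The distinction between the two error types can be illustrated with a small example of our own. The off-by-one loop below compiles and runs, which is exactly why logic errors, unlike syntax errors, must be found by testing rather than by the compiler.

```java
// A syntax error is caught at compile time, e.g. a missing semicolon:
//   int x = 5        <- will not compile
// A logic error compiles and runs but gives a wrong result, such as this
// off-by-one loop that silently skips the last array element.
class LogicErrorDemo {
    static int sumAllButLastByMistake(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length - 1; i++) {  // bug: should be i < a.length
            sum += a[i];
        }
        return sum;
    }

    static int sumCorrect(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {      // visits every element
            sum += a[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(sumAllButLastByMistake(data)); // 3, not the expected 6
        System.out.println(sumCorrect(data));             // 6
    }
}
```

Running the program on a sample data set with a known expected total is precisely the debugging procedure described above: the faulty result (3 instead of 6) exposes the logic error.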

System documentation is necessary for future reference. The quality-assurance goal of the testing phase is to ensure the completeness and accuracy of the system and to minimize the retesting process.

SYSTEM IMPLEMENTATION

The implementation plan includes a description of all the activities that must occur to implement the new system and to put it into operation. It identifies the personnel responsible for the activities and prepares a time chart for implementing the system. The implementation plan consists of the following steps:
List all files required for implementation.
Identify all data required to build new files during the implementation.
List all new documents and procedures that go into the new system.

The implementation plan should anticipate possible problems and must be able to deal with them. The usual problems may be missing documents, mixed data formats between current files, errors in data translation, and missing data.
System implementation is the conversion of the new system into an operating one, which involves creating compatible files, training staff and installing hardware. A critical factor in conversion is not disrupting the functioning of the organization. User training is crucial for minimizing resistance to change and for giving the system a chance to prove its worth. Training aids such as user-friendly manuals and helpful screens give the user a good start. Software maintenance follows conversion, to the extent that changes are necessary to maintain satisfactory operation relative to changes in the user's environment. Maintenance often includes minor enhancements or corrections to problems that surface late in the system's operation. In the implementation phase, the team builds the components either from scratch or by composition. Given the architecture document from the design phase and the requirements document from the analysis phase, the team should build exactly what has been requested, though there is still room for innovation and flexibility. For example, a component may be narrowly designed for this particular system, or it may be made more general to satisfy a reusability guideline; the architecture document should give guidance, and sometimes this guidance is found in the requirements document. The implementation phase deals with issues of quality, performance, baselines, libraries, and debugging, and its end deliverable is the product itself. During the implementation phase, the system is built according to the specifications from the previous phases. This includes writing code, performing code reviews, performing tests, selecting components for integration, configuration, and integration.

The implementation includes the following:
Careful planning.
Investigation of the system and constraints.
Design of the methods to achieve the changeover.
Training of the staff in the changeover phase.
Evaluation of the changeover method.
The method of implementation and the time scale to be adopted are found out initially.

SYSTEM MAINTENANCE

When the system is in the maintenance phase, some people within the system are responsible for collecting maintenance requests from users and other interested parties. The process of maintaining a system is the process of returning to the beginning of the system development phases until the changes are implemented. System maintenance is the activity that occurs following the delivery of the software product: enhancing the product, adapting it to new environments and correcting errors. Software product enhancement may involve providing new functional capabilities, improving user displays and modes of interaction, or upgrading the performance characteristics of the system. Problem correction involves modification and revalidation of the software to correct errors; the process that includes the diagnosis and correction of one or more errors is known as corrective maintenance. As the software is used, recommendations for new capabilities, modifications and general enhancements may be obtained, and this leads to perfective maintenance. When software is changed to improve future maintainability or reliability, this is preventive maintenance. For maintaining this system, the following rules have to be observed strictly: only the executables generated from the forms and reports are given to the end users, and backups should be taken in order to safeguard the system against any loss of data due to system malfunction.
