ACKNOWLEDGEMENT
It is with great pleasure and a spirit of learning that we bring out this project report. We use
this opportunity to express our heartfelt gratitude for the support and guidance offered to us from
various sources during the course and completion of our project.
We are extremely grateful to the head of the institute, Dr. M.K. Jana, Sarabhai Institute of Science
and Technology, Vellanad, for providing the necessary facilities. We are grateful to our head of
department, Dr. C.G. Sukumaran Nair, for his valuable suggestions.
It is our duty to acknowledge Miss Sudha S.K., Assistant Professor, and Miss Sheena for sharing
their wealth of knowledge.
We also wish to thank all faculty members of the Computer Science Department for their proper guidance
and help. We also extend our gratitude to all laboratory staff of the computer lab.
Above all, we owe our gratitude to the Almighty for showering abundant blessings upon us. Last
but not least, we wish to thank our parents and our friends for helping us to complete our
mini project work successfully.
ARUN K.R
DEEPAK.K.P
MANOJ.R
ARUN K.R
DEEPAK.K.P
MANOJ.R
INTRODUCTION
Image compression aims to reduce the irrelevance and redundancy of image data so that the data
can be stored or transmitted in an efficient form.
Compressing an image is significantly different from compressing raw binary data. General-purpose
compression programs can be used to compress images, but the result is less than optimal,
because images have certain statistical properties which can be exploited by
encoders specifically designed for them. There are two types of compression:
• Lossy compression
• Lossless compression
Lossy methods are especially suitable for natural images such as photographs, in
applications where a minor (sometimes imperceptible) loss of fidelity is acceptable in exchange for a
substantial reduction in bit rate. Lossy compression that produces imperceptible differences
may be called visually lossless. Common methods of lossy image compression include:
• Reducing the color space to the most common colors in the image. The selected colors
are specified in the color palette in the header of the compressed image, and each pixel simply
references the index of a color in that palette. This method can be combined with
dithering to avoid posterization.
• Chroma subsampling: This takes advantage of the fact that the human eye perceives
spatial changes of brightness more sharply than those of color, by averaging or dropping
some of the chrominance information in the image.
• Transform coding: This is the most commonly used method. A Fourier-related
transform such as the DCT, or the wavelet transform, is applied, followed by quantization
and entropy coding.
• Fractal compression
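The chroma subsampling idea above can be sketched as follows. This is an illustrative 4:2:0-style averaging over 2x2 blocks of a chroma channel (the function name and the plain-list image representation are our own for illustration, not any particular codec's implementation):

```python
def subsample_420(chroma):
    """Average each 2x2 block of a chroma channel into one value (4:2:0-style).

    `chroma` is a list of equal-length rows with even dimensions.
    """
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (chroma[y][x] + chroma[y][x + 1] +
                     chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(total / 4.0)
        out.append(row)
    return out

# A 2x4 chroma plane shrinks to 1x2: each output value is the mean of a 2x2 block.
plane = [[10, 20, 30, 40],
         [10, 20, 30, 40]]
print(subsample_420(plane))  # [[15.0, 35.0]]
```

Only the chrominance channels are reduced this way; the brightness (luma) channel is kept at full resolution, which is why the loss is hard to perceive.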
Lossless compression is preferred for archival purposes, and often for medical
imaging, technical drawings, clip art, or comics, because lossy compression methods,
especially when used at low bit rates, introduce compression artifacts. Common methods of
lossless image compression include:
• Run-length encoding
• DPCM and Predictive Coding
• Entropy encoding
• Deflate (deflation)
• Chain codes
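Run-length encoding, the first lossless method listed above, can be sketched in a few lines; this is a minimal illustrative encoder/decoder pair (the function names are our own), not a production codec:

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

pixels = [255, 255, 255, 0, 0, 7]
codes = rle_encode(pixels)
print(codes)                         # [(255, 3), (0, 2), (7, 1)]
assert rle_decode(codes) == pixels   # lossless round trip
```

The round trip reproduces the input exactly, which is the defining property of every method in the lossless list.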
Fractal compression is a lossy image compression method using fractals. The method is best
suited for textures and natural images, relying on the fact that parts of an image often resemble
other parts of the same image. Fractal algorithms convert these parts into mathematical data
called "fractal codes" which are used to recreate the encoded image. Fractal compression differs
from pixel-based compression schemes such as JPEG, GIF and MPEG since no pixels are saved.
Once an image has been converted into fractal code, the image can be recreated to fill any screen
size without the loss of sharpness that occurs in conventional compression schemes.
A fractal is a structure that is made up of similar forms and patterns that occur in many
different sizes. The term fractal was first used by Benoit Mandelbrot to describe repeating
patterns that he observed occurring in many different structures. These patterns appeared nearly
identical in form at any size and occurred naturally in all things. Mandelbrot also discovered that
these fractals could be described in mathematical terms and could be created using very small
and finite algorithms and data.
Fractal encoding is largely used to convert bitmap images to fractal codes. Fractal decoding is
just the reverse, in which a set of fractal codes are converted to a bitmap.
Decoding a fractal image is a much simpler process. The hard work was performed in finding all
the fractals during the encoding process. All the decoding process needs to do is interpret the
fractal codes and translate them into a bitmap image.
Two tremendous benefits are immediately realized by converting conventional bitmap images to
fractal data. The first is the ability to scale any fractal image up or down in size without the
introduction of image artifacts or a loss in detail that occurs in bitmap images. This process of
"fractal zooming" is independent of the resolution of the original bitmap image, and the zooming
is limited only by the amount of available memory in the computer.
The second benefit is the fact that the size of the physical data used to store fractal codes is much
smaller than the size of the original bitmap data. In fact, it is not uncommon for fractal images to
be more than 100 times smaller than their bitmap sources. It is this aspect of fractal technology,
called fractal compression, that has generated the greatest interest within the computer imaging
industry.
The process of matching fractals does not involve looking for exact matches, but instead looking
for "best fit" matches based on the compression parameters (encoding time, image quality, and
size of output). But the encoding process can be controlled to the point where the image is
"visually lossless." That is, you shouldn't be able to notice where the data was lost.
Fractal compression differs from other lossy compression methods, such as JPEG, in a number of
ways. JPEG achieves compression by discarding image data that is not required for the human
eye to perceive the image. The resulting data is then further compressed using a lossless method
of compression. To achieve greater compression ratios, more image data must be discarded,
resulting in a poorer quality image with a pixelized (blocky) appearance.
Encoding
Select an image and divide it into small, non-overlapping, square blocks, typically called "parent
blocks". Divide each parent block into four individual blocks, or "child blocks". Compare each
child block against a subset of all possible overlapping blocks of parent-block size; the
parent-sized block must first be reduced in size to allow the comparison to work.
Determine which larger block has the lowest difference, according to some measure, between it
and the child block (for example, an upper-left child block may turn out to be very similar to an
upper-right parent block). Compute the affine transform, then store the location of the parent
block (or transform block), the affine transform components, and the related child block into a file.
Repeat for each child block.
This involves a large number of comparisons and calculations. For example, with a 256x256
original image and 16x16 parent blocks, there are (256 - 16 + 1) = 241 candidate positions in
each dimension, giving 241 x 241 = 58,081 block comparisons per child block.
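The exhaustive search described above can be sketched as follows. This is a heavily simplified illustration (grayscale images as lists of lists, a plain mean-squared-error match, and only a brightness offset standing in for the full affine transform); the function names are our own, and a real fractal encoder would also try rotations, reflections, and a contrast scaling:

```python
def downsample(img, y, x, size):
    """Average 2x2 pixels of the size x size block at (y, x) down to half size."""
    half = size // 2
    return [[(img[y + 2*r][x + 2*c] + img[y + 2*r][x + 2*c + 1] +
              img[y + 2*r + 1][x + 2*c] + img[y + 2*r + 1][x + 2*c + 1]) / 4.0
             for c in range(half)] for r in range(half)]

def best_match(img, child, child_size):
    """Exhaustively search every parent-sized block position for the lowest
    mean-squared-error match against one child block."""
    h, w = len(img), len(img[0])
    dom = 2 * child_size                    # parent blocks are twice the child size
    best = None
    for y in range(h - dom + 1):
        for x in range(w - dom + 1):
            cand = downsample(img, y, x, dom)
            # a brightness offset stands in for the full affine transform
            offset = (sum(map(sum, child)) - sum(map(sum, cand))) / child_size ** 2
            err = sum((cand[r][c] + offset - child[r][c]) ** 2
                      for r in range(child_size) for c in range(child_size))
            if best is None or err < best[0]:
                best = (err, y, x, offset)
    return best  # (error, parent y, parent x, brightness offset)

img = [[5.0] * 8 for _ in range(8)]         # flat 8x8 test image
child = [[5.0, 5.0], [5.0, 5.0]]
print(best_match(img, child, 2))            # (0.0, 0, 0, 0.0) - a perfect match
```

Even this toy search visits every candidate position for every child block, which is why encoding is the expensive half of the method.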
Decoding
Read in the child block and transform block positions, transforms, and size information. Start
from any blank image of the same size as the original image. For each child block, apply the
stored transform to the specified transform block and overwrite the child block's pixel values
with the transformed block's pixel values.
Repeat until acceptable image quality is reached.
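The iteration above can be sketched as follows, continuing the simplified codes of the encoding sketch (child position, parent position, brightness offset; the function names and the list-of-lists image format are our own). Because the mappings are contractive, the result converges regardless of the starting image:

```python
def downsample(img, y, x, size):
    """Average 2x2 pixels of the size x size block at (y, x) down to half size."""
    half = size // 2
    return [[(img[y + 2*r][x + 2*c] + img[y + 2*r][x + 2*c + 1] +
              img[y + 2*r + 1][x + 2*c] + img[y + 2*r + 1][x + 2*c + 1]) / 4.0
             for c in range(half)] for r in range(half)]

def decode_step(img, codes, child_size):
    """Apply every stored mapping once: overwrite each child block with the
    shrunken parent block plus its brightness offset."""
    out = [row[:] for row in img]
    for cy, cx, py, px, offset in codes:
        shrunk = downsample(img, py, px, 2 * child_size)
        for r in range(child_size):
            for c in range(child_size):
                out[cy + r][cx + c] = shrunk[r][c] + offset
    return out

def decode(codes, child_size, h, w, iterations=10):
    """Iterate from a blank image until the mappings converge."""
    img = [[0.0] * w for _ in range(h)]
    for _ in range(iterations):
        img = decode_step(img, codes, child_size)
    return img

# One code: child block at (0,0) drawn from the parent block at (4,4), +5 brightness
codes = [(0, 0, 4, 4, 5.0)]
img = decode(codes, child_size=2, h=8, w=8, iterations=5)
print(img[0][:2], img[1][:2])  # [5.0, 5.0] [5.0, 5.0]
```

Note that only a handful of iterations are needed in practice; each pass pulls the image closer to the encoded fixed point, which is what "repeat until acceptable image quality is reached" amounts to.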
Advantages
• Reduces the image size with little or no perceptible change to the image
• Fractal image compression is well suited for applications requiring fast access to high-quality
images
• Compressed images can be sent out with minimum delay
• Fast decompression speed
PROCESSING ENVIRONMENT
HARDWARE REQUIREMENTS
RAM : 256 MB
Hard Disk : 40 GB
SOFTWARE REQUIREMENTS
Front end : Microsoft Visual Studio 2005 (.NET Framework, C#)
Back end : Microsoft SQL Server 2005
Web server : IIS
SOFTWARE DESCRIPTIONS
The Microsoft .NET Framework is a software component that can
be added to or is included with the Microsoft Windows operating system. It provides a large
body of pre-coded solutions to common program requirements, and manages the execution of
programs written specifically for the framework. The pre-coded solutions form the framework's
class library and cover a large range of programming needs in areas including user interface,
data access, and network communications. The functions of the class library are used by
programmers who combine them with their own code to produce applications. Programs written
for the framework execute in a software environment that manages the program's runtime
requirements. This runtime environment, which is also a part of the .NET Framework, is known
as the Common Language Runtime (CLR). The CLR provides the appearance of an application
virtual machine, so that programmers need not consider the capabilities of the specific CPU
that will execute the program. The CLR also provides other important services such as security
mechanisms, memory management, and exception handling. The class library and the CLR together
compose the .NET Framework. The framework is intended to make it easier to develop computer
applications and to reduce the vulnerability of applications and computers to security threats.
The framework is included with Windows Server 2003 and Windows Vista, and can be installed on
most older versions of Windows.
Because interaction between newer and older applications is commonly required, the .NET
Framework provides means to access functionality that is implemented in programs that execute
outside the .NET environment. Access to COM components is provided through the framework's
interop services. Code written for the framework is compiled to an intermediate language, which
at run time is compiled, in a manner known as just-in-time compilation (JIT), into native code
by the Common Language Runtime (CLR). Key design features of the .NET Framework include:
• Language Independence - The .NET Framework introduces a Common Type System, or CTS.
The CTS specification defines all possible data types and programming constructs supported
by the CLR and how they may or may not interact with each other. Because of this feature,
the .NET Framework supports the exchange of instances of types between programs written in
any of the .NET programming languages. This is discussed in more detail in the .NET languages
section below.
• Base Class Library - The Base Class Library (BCL), sometimes referred to as the
Framework Class Library (FCL), is a library of types available to all languages using
the .NET Framework. The BCL provides classes which encapsulate a number of common
functions, including file reading and writing, graphic rendering, database interaction and
XML document manipulation.
• Simplified Deployment - Installation of software must be carefully
managed to ensure that it does not interfere with previously installed software, and that it
conforms to security requirements. The .NET Framework includes design features and tools
that help address these requirements.
• Security - .NET allows for code to be run with different trust levels without the use of a
separate sandbox.
The most important component of the .NET Framework lies within the
Common Language Infrastructure, or CLI. The purpose of the CLI is to provide a language-
agnostic platform for application development and execution, including, but not limited to,
functions for exception handling, garbage collection, security, and interoperability.
Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR.
Assemblies
The compiled intermediate-language code is stored in assemblies, which for
the Windows implementation means a Portable Executable (PE) file (EXE or DLL). Assemblies
are the .NET unit of deployment, versioning and security. An assembly consists of one or more
files, but one of these must contain the manifest, which has the metadata for the assembly. The
complete name of an assembly contains its simple text name, version number, culture and public
key token; it must contain the name, but the others are optional. The public key token is
generated when the assembly is created, and is a value that uniquely represents the name and
contents of all the assembly files, and a private key known only to the creator of the assembly.
Two assemblies with the same public key token are guaranteed to be identical. If an assembly is
tampered with (for example, by hackers), the public key can be used to detect the tampering.
Metadata
All intermediate-language code is self-describing through .NET metadata. The CLR checks the
metadata to ensure that the correct method is called. Metadata is usually generated by
language compilers, but developers can create their own metadata through custom
attributes.
Base Class Library
The Base Class Library (BCL), sometimes referred to as
the Framework Class Library (FCL) (which is a superset including the Microsoft.* namespaces),
is a library of classes available to all languages using the .NET Framework. The BCL provides
classes which encapsulate a number of common functions such as file reading and writing,
graphic rendering, database interaction, XML document manipulation, and so forth. The BCL is
much larger than other libraries, but has much more functionality in one package.
Security
.NET has its own security mechanism, with two general features: Code
Access Security (CAS), and validation and verification. Code Access Security is based on
evidence that is associated with a specific assembly. Code Access Security uses evidence to
determine the permissions granted to the code. Other code can demand that calling code is
granted a specified permission. The demand causes the CLR to perform a call stack walk: every
assembly of each method in the call stack is checked for the required permission, and if any
assembly is not granted the demanded permission a security exception is thrown.
When an assembly is loaded, the CLR performs various tests. Two such tests are validation and
verification. During validation the CLR checks that the
assembly contains valid metadata and CIL, and it checks that the internal tables are correct.
Verification is not so exact. The verification mechanism checks to see if the code does anything
that is 'unsafe'. The algorithm used is quite conservative and hence sometimes code that is 'safe'
is not verified. Unsafe code will only be executed if the assembly has the 'skip verification'
permission, which generally means code that is installed on the local machine.
[Figure: .NET architecture - .NET applications (Visual Basic .NET, Visual C# .NET, Visual J#,
and other languages) are built on the .NET Framework class library (Windows Forms classes,
ASP.NET classes) and the Common Language Runtime (managed applications, CTS), which sit above
the operating system and hardware.]
Microsoft Visual Studio
Microsoft Visual Studio is Microsoft's integrated development
environment, which lets programmers create standalone applications, web sites, web applications,
and web services that run on any platform supported by Microsoft's .NET Framework (for all
versions after 6). Supported platforms include Microsoft Windows servers and workstations.
Visual Studio includes the following languages and technologies:
• Visual C++
• Visual C#
• Visual J#
• ASP.NET
Express editions of Visual Studio have been released by Microsoft for lightweight, streamlined
development. Visual Studio 2005, codenamed Whidbey (a reference to Whidbey
Island in Puget Sound), was released online in October 2005 and hit the stores a couple of weeks
later. Microsoft removed the ".NET" moniker from Visual Studio 2005 (as well as every other
product with .NET in its name), but it still primarily targets the .NET Framework, which was
upgraded to version 2.0. Visual Studio 2005's internal version number is 8.0 while the file format
version is 9.0.
Although the development environment itself is only available as a 32-bit application, Visual
C++ 2005 supports compiling for x86-64 (AMD64 and Intel 64) as well as IA-64 (Itanium). The
Platform SDK includes 64-bit compilers and 64-bit versions of the libraries. Visual Studio
2005's product range is significantly different from that of previous versions: Express,
Standard, Professional, Tools for Office, and a set of five Visual Studio Team System Editions.
The latter are provided in conjunction with MSDN Premium subscriptions, covering four major
roles of software development: architects, developers, testers, and database professionals. The
combined functionality of the four Team System Editions is provided in a Team Suite Edition.
The Express Editions are aimed at hobbyists and
small businesses, and are available as a free download from Microsoft's web site. The Express
Editions lack many of the more advanced development tools and extensibility of the other
editions.
C#
C# is an object-oriented programming language developed by Microsoft as part of the .NET
initiative, with a syntax based on C++ and a close tie to the
underlying Common Language Infrastructure (CLI). Most of C#'s intrinsic types correspond to
value types implemented by the CLI framework. C# differs from C and C++ in several ways:
• There are no global variables or functions. All methods and members must be declared
within classes.
• Local variables cannot shadow variables of the enclosing block, unlike C and C++. Variable
shadowing is often considered confusing by C++ texts.
• C# supports a strict boolean type, bool. Statements that take conditions, such as while and
if, require an expression of a boolean type. While C and C++ also have a boolean type, it
can be freely converted to and from integers, and expressions such as if(a) require only
that a is convertible to bool. C# disallows this 'integer
meaning true or false' approach on the grounds that forcing programmers to use expressions
that return exactly bool can prevent certain classes of programming mistakes.
• In C#, pointers can only be used within blocks specifically marked as unsafe, and programs
with unsafe code need appropriate permissions to run. Most object access is done through
safe references, which cannot be made invalid. An unsafe pointer can point to an instance of
a value type, an array, a string, or a block of memory allocated on the stack. Code that is not
marked as unsafe can still store and manipulate pointers through the System.IntPtr type,
but it cannot dereference them.
• Managed memory cannot be explicitly freed, but is automatically garbage collected. Garbage
collection addresses memory leaks. C# also provides direct support for deterministic
finalization with the using statement (supporting the Resource Acquisition Is Initialization
idiom).
• Multiple inheritance is not supported, although a class can implement any number of
interfaces. This was a design decision by the language's lead architect to avoid complication
and to simplify architectural requirements throughout the CLI.
• C# is more type safe than C++. The only implicit conversions by default are safe
conversions, such as widening of integers and conversion from a derived type to a base type.
This is enforced at compile-time, during JIT, and, in some cases, at runtime. There are no
implicit conversions between booleans and integers and between enumeration members and
integers (except 0, which can be implicitly converted to an enumerated type), and any user-
defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy
constructors (which are implicit by default) and conversion operators (which are always
implicit).
• Accessors called properties can be used to modify an object with syntax that resembles C++
member field access. In C++, declaring a member public enables both reading and writing to
that member, and accessor methods must be used if more fine-grained control is needed. In
C#, properties allow control over member access and data validation.
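The accessor-with-validation idea described above has a close analogue in Python's property descriptors; the following hedged Python sketch (our own illustrative class, not C# itself) shows the same pattern a C# property would express with get/set accessors:

```python
class Account:
    """Accessor-style control over a field, analogous to a C# property."""

    def __init__(self, balance=0):
        self._balance = balance

    @property
    def balance(self):
        # the "get accessor": reads look like plain field access
        return self._balance

    @balance.setter
    def balance(self, value):
        # the "set accessor": data validation runs on every assignment
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

acct = Account()
acct.balance = 100        # looks like field access, but runs the setter
print(acct.balance)       # 100
```

As in C#, the caller writes what looks like a public field access while the class retains control over member access and data validation.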
C# has a unified type system. This means that all types, including
primitives such as integers, are subclasses of the System.Object class. For example, every type
inherits a ToString() method. For performance reasons, primitive types (and value types in
general) are internally allocated on the stack. Boxing and unboxing allow one to translate
primitive data to and from their object form. Effectively, this makes the primitive types a subtype
of the Object type. C# also allows the programmer to create user-defined value types
using the struct keyword. From the programmer's perspective, they can be seen as lightweight
classes. Unlike regular classes, and like the standard primitives, such value types are allocated on
the stack rather than on the heap. They can also be part of an object (either as a field or boxed),
or stored in an array, without the memory indirection that normally exists for class types. Structs
also come with a number of limitations. Because structs have no notion of a null value and can
be used in arrays without initialization, they are implicitly initialized to default values (normally
by filling the struct memory space with zeroes, but the programmer can specify explicit default
values to override this). The programmer can define additional constructors with one or more
arguments. This also means that structs lack a virtual method table, and because of that (and the
fixed memory footprint), they cannot allow inheritance (but can implement interfaces).
Features of C#:
• C# is simple.
• C# is modern.
• C# is object-oriented.
• C# is modular.
• Partial classes allow class implementation across more than one file. This permits
breaking down very large classes, or is useful if some parts of a class are automatically
generated.
• Generics, or parameterized types: This is a .NET 2.0 feature supported by C#. Unlike C++
templates, .NET parameterized types are instantiated at runtime rather than by the
compiler; hence they can be cross-language, whereas C++ templates cannot. They support
some features not supported directly by C++ templates, such as type constraints on
generic parameters by use of interfaces. On the other hand, C# does not support non-type
generic parameters. Unlike generics in Java, .NET generics use reification to make
parameterized types first-class objects in the CLI virtual machine, which allows for
optimizations and preservation of the type information.
• Static classes, which cannot be instantiated and only allow static members. This is
similar to the module concept found in many procedural languages.
• A new form of iterator that provides generator functionality, using a yield return
construct similar to yield in Python.
• Nullable value types (denoted by a question mark, e.g. int? i = null;), which add
null to the set of allowed values for any value type.
• Coalesce operator: (??) returns the first of its operands which is not null. The primary
use of this operator is to assign a nullable type to a non-nullable type with an easy syntax.
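The nullable-type and coalesce behaviour above can be mimicked in Python, where None plays the role of null. The helper below is our own illustrative analogue of C#'s ?? operator (e.g. `int? i = null; int j = i ?? -1;`), not C# itself:

```python
def coalesce(value, fallback):
    """Return `value` unless it is None, else `fallback` - the behaviour of
    C#'s ?? operator, with Python's None standing in for null."""
    return value if value is not None else fallback

i = None                 # like C#'s nullable `int? i = null;`
print(coalesce(i, -1))   # -1
i = 42
print(coalesce(i, -1))   # 42
print(coalesce(0, -1))   # 0  (zero is a real value, not null)
```

The last line shows why the check must be against None specifically: a falsy value such as 0 is still a real value, just as 0 is a real int inside a C# int?.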
C# versus Java
C# and Java are both new-generation languages descended from a
line including C and C++. Each includes advanced features, like garbage collection, which
remove some of the low-level maintenance tasks from the programmer. In a lot of areas they are
syntactically similar.
Both languages compile to an intermediate form: C#
to Microsoft Intermediate Language (MSIL), and Java to byte code. In each case the intermediate
code can be run, by interpretation or just-in-time compilation, on an appropriate 'virtual
machine'. In C#, however, more support is given for the further compilation of the intermediate
code into native code.
C# contains more primitive data types than Java, and also allows
more extension to the value types. For example, C# supports 'enumerations', types which are
limited to a defined set of named constants, and 'structs', which are user-defined value types.
Unlike Java, C# has the useful feature that we can overload various operators. Like Java, C#
handles inheritance in a controlled fashion, with derived class methods either 'overriding' or
'hiding' super class methods. Like Java, C# supports 'jagged' arrays, single-dimensional arrays
whose elements can themselves be arrays. In addition to these, C# supports multidimensional
rectangular arrays.
Microsoft SQL Server 2005 is a relational database
management system (RDBMS) that offers a variety of administrative tools to ease the burdens of
database development, maintenance and administration. SQL Server 2005 is a powerful tool for
turning information into opportunity. The following are the more common tools provided by SQL
Server.
• Enterprise Manager is the main administrative console for SQL Server installations. It
provides a graphical "bird's-eye" view of all of the SQL Server installations on
the network.
• Query Analyzer offers a quick method for performing queries against any of our SQL
Server databases. It’s a great way to quickly pull information out of a database.
• SQL Profiler provides a window into the inner workings of your database.
• Service Manager is used to control the MS SQL Server service (the main SQL Server
process), the Distributed Transaction Coordinator, and the SQL Server Agent processes.
• Data Transformation Services (DTS) provide an extremely flexible method for importing
and exporting data between a Microsoft SQL Server installation and a large variety of
other formats.
• Security - Ensure your applications are secure in any networked environment, with role-
based security, and scale out across servers for additional scalability.
• Data transformation services - Automate routines that extract, transform, and load data
from heterogeneous sources. Stored procedures and the integrated Transact-SQL debugger
allow you to reuse code to simplify the development process.
• Application hosting - With multi-instance support, SQL Server enables you to take full
advantage of your hardware investments, so that multiple applications can be run on a
single server.
SQL is the set of statements that all programs and users must use to access data within a
database. Application programs in turn must use SQL when executing the user's request. SQL
processes sets of records rather than one record at a time.
• It provides statements for a variety of tasks, which concern all activities regarding a
database.
SQL is a simple language to learn: its statements resemble English, and while it has rules for
grammar and syntax, they are basically normal rules and can be readily understood. SQL stands
for Structured Query Language. SQL statements fall into the following types:
1. Queries: A query always begins with the keyword SELECT and is used to retrieve
data from the database in any combination or in any order. Query-type statements cannot
change the contents of the database.
2. Data Definition Language (DDL): The main purpose of DDL is to create, modify and drop
database objects such as tables, views and indexes.
3. Data Control Language (DCL): The DCL statements allow the user to give and take
privileges on database objects, thereby controlling data sharing.
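The query type described above can be illustrated with Python's built-in sqlite3 module. This is an illustrative stand-in with a made-up table (the project itself targets Microsoft SQL Server 2005, whose Transact-SQL syntax is similar for simple SELECT statements):

```python
import sqlite3

# In-memory example table of image files and their sizes (hypothetical data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT, size_kb INTEGER)")
conn.executemany("INSERT INTO images VALUES (?, ?)",
                 [("lena.bmp", 768), ("lena.fif", 6)])

# A query always begins with SELECT and can return rows in any requested order;
# it reads data but cannot change the contents of the database.
rows = list(conn.execute("SELECT name, size_kb FROM images ORDER BY size_kb"))
print(rows)  # [('lena.fif', 6), ('lena.bmp', 768)]
conn.close()
```

The ORDER BY clause shows the "any order" aspect of queries: the same data can be retrieved sorted by any column without altering what is stored.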
SYSTEM ANALYSIS
INTRODUCTION
System analysis is the study of a system with a specific
objective. System analysis is an important activity that takes place when we attempt to build a
new system or when modifying an existing one. Analysis comprises a detailed study of the various
operations performed by a system and their relationships within and outside the system. It is the
process of gathering and interpreting facts, diagnosing problems and improving the system using
this information. It involves studying existing
systems, deciding on what changes and new features are required, and defining exactly what the
proposed system must be. This process of system analysis is largely concerned with determining
the requirements of the system. The analyst must
communicate well with the user and conceive a joint understanding of what a system should be
doing, together with a view of the relative importance of the system facilities, using interactive
techniques. The objectives of system analysis are to:
• Learn the details of the existing system as well as procedures currently taking
place.
• Document the details of the current system and procedures for discussion and
review by others.
• Evaluate the efficiency and effectiveness of the current system and procedures.
• Document the new system features at a level of detail that allows others to
understand the proposed system.
• Involve directors and employees in the entire process, both to draw on their
expertise and knowledge of the current system as well as to learn their ideas,
feelings and opinions about requirements for the new changed system.
FEASIBILITY STUDY
A feasibility study is carried out to assess the system's
feasibility, the likelihood the system will be useful to the organization. This is done by
investigating the existing system in the area under investigation and generating ideas about the
new system. A feasibility study is conducted to identify the best system that meets all the
requirements. It concentrates on the following areas:
• Operational Feasibility
• Technical Feasibility
• Economic Feasibility
Operational Feasibility
Proposed projects are beneficial only if they can be turned into information
systems that will meet the organization’s operating requirements. This test of feasibility asks if
the system will work when it is developed and installed. The project ‘Frenz4Ever’ is aimed to be
used as a general-purpose software. One of the main problems faced during the development of a
new system is getting acceptance from the users. Being a general-purpose software, it faces no
resistance from the users, as it is extremely beneficial to them.
Technical Feasibility
This is the study of resource availability that may affect the ability to achieve
an acceptable system. The system must be evaluated from the technical viewpoint first. The
assessment of this feasibility must be based on an outline design of the system requirements in
terms of input, output, procedures. Having identified the outline of the system, the investigation
must go on to suggest the type of equipment, required method of developing the system, and the
method of running the system. The system which is being developed is used by the users as a
means to communicate with each other. It should be able to store large amounts of data and
should provide an attractive graphical interface. In order to attain these requirements, the
technologies used in this project are Microsoft .Net framework, Microsoft SQL Server 2005, IIS
Server.
Economic Feasibility
Proposed systems are evaluated against cost and benefit
criteria to ensure that effort is concentrated on projects which will give the best return at the
earliest. One of the factors that affect the development of a new system is the cost it would
require. Since the system is developed as a part of our study, there is no manual cost to be spent
for the proposed system.
DATAFLOW DIAGRAMS
A data flow diagram (DFD) shows the
movement of data through a system, manual or automated, including the processes, the storing of
data, and delays in the system. Data flow diagrams are the central tool and the basis from which
other components are developed. The transformation of data from input to output, through
processes, may be described logically and independently of the physical components associated
with the system; such diagrams are termed logical data flow diagrams. Physical data flow
diagrams show the actual implementation and the movement of data between people, departments
and workstations. Data Flow Diagrams are one of the most important modeling tools used in
system design. A DFD shows the flow of data through the different processes in the system. Data
flow diagrams can be used to provide a clear representation of any business function. The
technique starts with an overall picture of the business and continues by analyzing each of the
functional areas of interest. This analysis can be carried out to precisely the level of detail
required. Data flow diagrams are sequence-independent and do not reflect decision points.
Rather, they demonstrate the information and how it flows between specific processes in a
system. They provide one kind of documentation for reports.
Data flow diagrams show the flow of data between system components. The data flow modeling
method uses four kinds of symbols:
• Process
• Data stores
• Data flows
• External entities
Process
Processes show the work of the system. Each process has one or
more data inputs and produces one or more data outputs. Processes are represented by rounded
rectangles in a data flow diagram. Each process has a unique name and number, which appear
inside the rectangle that represents the process. The process name should be unambiguous and
should convey as much meaning as possible without being too long.
Data Stores
A data store is a repository of data. Processes can enter data into a
store or retrieve data from it. Each data store has a unique name.
Data Flows
Data flows show the passage of data in the system and are
represented by lines joining system components. An arrow indicates the direction of flow, and the
line is labeled with the name of the data flow.
External Entities
External entities are outside the system, but they either supply data
into the system or use the system's output. They are entities over which the designer has no
control. They may be an organization's customers or other bodies with which the system interacts.
External entities that supply data into the system are sometimes called sources; external entities
that use the system's data are sometimes called sinks. They are represented by rectangles in the
data flow diagram.
An arrow identifies a data flow; it is a pipeline through which information flows.
A circle or a bubble represents a process that transforms incoming data flow(s) into outgoing
data flow(s).
UML DIAGRAMS
Each UML diagram is designed to let developers and customers view a software
system from a different perspective and in varying degrees of abstraction. The UML diagrams
used include:
• Use Case Diagram
• Class Diagram
• Activity Diagram
• Interaction Diagrams
• State Diagram
Use Case Diagrams
The central elements of a use case diagram are use cases and actors. An actor represents a user
or another system that will interact with the system you are modeling. A use case is an external
view of the system that represents some action the user might perform in order to complete a
task.
Class Diagrams
Class diagrams are widely used to describe the types of objects in a system and their
relationships. Class diagrams model class structure and contents using design elements such as
classes, packages and objects. Class diagrams describe three different perspectives when
designing a system: conceptual, specification, and implementation.
Activity Diagrams
Activity diagrams describe the workflow behavior of a system. Activity diagrams are
similar to state diagrams because activities are the states of doing something. The diagrams
describe the states of activities by showing the sequence of activities performed. Activity
diagrams can show activities that are conditional or parallel.
DFD
[Level 0 DFD: SOURCE -> IMAGE COMPRESSION -> DESTINATION]
[Login DFD: USER -> LOGIN VERIFICATION -> LOGIN; NEW USER -> REGISTER]
TESTING
SYSTEM TESTING
System testing is aimed at ensuring that the system works accurately and efficiently as expected
before live operation commences. It certifies that the whole set of programs hangs together.
System testing requires a test plan that consists of several key activities and steps to run
program, string, system and user acceptance testing. The system test in the implementation stage
should confirm that all is correct and provide an opportunity to show the users that the system
works as expected. Testing is a set of activities that can be planned in advance and conducted
systematically, aimed at ensuring that the system works accurately and efficiently.
Testing Objectives
Testing is the process of executing a program with the intent of finding errors.
• A good test is one that has a high probability of finding a yet undiscovered error.
Unit Testing
Unit testing focuses verification efforts on the smallest unit of software design, the module,
tested independently of the overall system. This is also known as 'module' testing. The modules
of the system are tested separately. The testing is carried out during the programming stage
itself. In this testing step, each module is found to work satisfactorily with regard to the
expected output from the module. There are validation checks for verifying the data input given
by the user. It is very easy to find and fix errors at this stage.
Integration Testing
Data can be lost across an interface; one module can have an adverse
effect on another, and sub-functions, when combined, may not produce the desired major
function. Integration testing is the systematic testing for uncovering errors within
the interfaces. This testing was done with sample data. The need for integration testing is to
find errors in the following categories: incorrect or missing functions, interface errors, errors
in data structures, and external database access errors.
Validation Testing
After the software is completely assembled as a package, interface errors have been uncovered and corrected, and the final series of software
tests, validation tests, begins. Validation testing can be defined in many ways, but a simple
definition is that validation succeeds when the software functions in a manner that can be
reasonably expected by the customer. After the validation test has been conducted, one of two possible conditions exists:
the function or performance characteristics conform to specification and are accepted, or a deviation from specification is uncovered.
Output Testing
After performing the validation testing, the next step is output testing of the
proposed system, since no system can be useful if it does not produce the required data in the
specified format. The output displayed or generated by the system under consideration is tested
by asking the user about the format displayed. The output format on the screen was found to be
correct, as the format was designed in the system according to the user's needs. Hence the output
testing did not result in any correction to the system.
User Acceptance Testing
User acceptance of the system is the key factor for the success of the system.
The system under consideration is tested for user acceptance by constantly keeping in touch with
prospective users of the system at the time of development and making changes wherever required. This is done
with regard to the input and output screen designs.
White Box Testing
White box testing is a test case design method that uses the control
structure of the procedural design to derive test cases. Every independent path in a
module is exercised at least once. All logical decisions are exercised at least once. All loops
are executed at their boundaries and within their operational bounds, and internal data structures are
exercised to ensure their validity.
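The white-box idea of exercising every independent path at least once can be shown on a tiny example. This Python sketch is illustrative only; the `clamp` function is a made-up unit, not part of the project.

```python
def clamp(value, low, high):
    """Small unit with three independent paths through its control structure."""
    if value < low:
        return low      # path 1: below the lower bound
    if value > high:
        return high     # path 2: above the upper bound
    return value        # path 3: within bounds

# White-box test cases chosen from the control structure so that
# every independent path and every decision outcome is exercised:
assert clamp(-5, 0, 100) == 0     # takes path 1
assert clamp(250, 0, 100) == 100  # takes path 2
assert clamp(42, 0, 100) == 42    # takes path 3
```

Three cases suffice here because the function has exactly three independent paths; a black-box approach would not reveal how many cases are needed.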
SYSTEM IMPLEMENTATION
System implementation is the stage of the project when the theoretical design
is turned into a working system. If the implementation stage is not carefully planned and
controlled, it can cause chaos. The implementation stage is a system project in its own right.
Thus, it can be considered the most crucial stage in achieving a successful new
system and in giving the users confidence that the new system will work efficiently and
accurately. It is less creative than system design; it is primarily concerned with user training and
site preparation.
Implementation simply means converting the new system design into operation. An
important aspect of the system analyst's job is to make sure that the new design is implemented to
established standards. The implemented system should be:
• Easy to use
• Controlled flow
Implementation Planning
The implementation of a system involves people from different departments, and system analysts are confronted with the practical problems of controlling the
activities of people outside their own data processing department. Prior to this point in the project, while studying the existing
system, the system analyst has interviewed department staff with the permission of their respective
managers.
An implementation coordinating committee, based on the policies of the individual organization, is
responsible for a successful implementation. There should be at least one representative of each
department affected by the changes, and other members should be co-opted for discussion of
specific topics.
Implementation calls for having the right people in the right place at the right time, so it requires staff selection and training for that part of the system
for which the staff will be responsible. That is, training must begin before the implementation
activities begin.
Training sessions must aim to give user staff the specific skills
required in their new jobs. The training will be most successful if conducted by the supervisor,
with the system analyst in attendance to sort out any queries. New methods gain acceptance more readily with the
creation of the right atmosphere and the motivation of user staff. Education sessions should encourage
participation from all staff, with protection for individuals from group criticism.
SYSTEM MAINTENANCE
Maintenance corresponds to restoring something to its original
condition. It covers a wide range of activities, including correcting coding and design errors, updating
documentation and test data, and upgrading user support. The better the system design, the easier it is to maintain the system. Maintenance is performed
most often to improve the existing software rather than to respond to a crisis or system failure.
As user needs and the operational environment change, maintenance should be done accordingly. Changes that
affect either the computer or other parts of a computer-based system are
normally called maintenance. Maintenance includes both the improvement of system functions and the
correction of faults that arise during the operation of the system. Maintenance activity may be required:
• As part of the normal running of the system, when errors are found, when users ask for
improvements, or when the operational environment changes.
Maintenance is also done by fixing reported problems, changing the interface with other
software or hardware, and guarding against possible hazards. Security measures are provided to prevent unauthorized access to the database
at various levels. An uninterrupted power supply should be provided so that a power failure or
fluctuation does not cause loss of data. A login name and password are provided to the users, and unauthorized access is restricted. The software allows
the user to enter the system only through the login utility with a valid login name and password.
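One common way to harden the login utility described above is to store a salted hash of the password rather than the password itself. The project is a C# application; this Python sketch is a hypothetical illustration, and the salt, function names, and sample password are assumptions, not taken from the project.

```python
import hashlib
import hmac

def hash_password(password, salt="fixed-demo-salt"):
    """Return a salted SHA-256 digest of the password.
    (A real system would use a random per-user salt.)"""
    return hashlib.sha256((salt + password).encode()).hexdigest()

# The credential store keeps only the hash, never the raw password.
stored = hash_password("letmein")

def check_login(password):
    """Verify a login attempt against the stored hash.
    compare_digest performs a constant-time comparison."""
    return hmac.compare_digest(stored, hash_password(password))

assert check_login("letmein")     # correct password is accepted
assert not check_login("wrong")   # wrong password is rejected
```

Because only the digest is stored, even a reader of the database cannot recover the password directly.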
CONCLUSION
The system was developed with Microsoft SQL Server 2005 as the back end on the Windows operating system. This Windows-based
software is developed for compressing various images. Weaving through the system developed, several problems were
encountered. Checking different tables and listing out the errors created many problems. More
checks were performed on the test data. The results obtained were fully satisfactory from the user's
point of view. Thus, the project was completed successfully and found to be:
• User-friendly.
SCREENSHOTS
LOGIN PAGE
REGISTRATION FORM
IMAGE PROCESSING
IMAGE INFORMATION
IMAGE RESIZE
REFERENCE
Websites
1. www.msdn.microsoft.com
2. www.csharpcorner.com
3. www.csharpcenter.com
Bibliography
1. Professional C#
2. Software Engineering