
PRECISION GUIDED PROJECTILES

ABSTRACT
The aim of the project is to predefine the trajectory of an unguided projectile accurately, in order to achieve a higher kill ratio. Both the Indian Army and the Indian Navy hold considerably large stockpiles of unguided projectiles/munitions. These munitions are relatively ineffective in the field because of their large miss distances, which result from a combination of gun-pointing error, shot-to-shot velocity variation, and aerodynamic disturbances such as wind turbulence that cause the projectile to deviate from the expected flight path. The resulting loss of accuracy means that the hit probability for a single projectile is very low. To work around these defined and undefined field problems, a measure called Circular Error Probability (CEP) is used; even with CEP taken into account, the chances of hitting the target remain highly variable. Our project is to write an algorithm with a user interface in which the user provides a set of inputs: target latitude, longitude, and height; cannon latitude, longitude, and height; the type of cannon being deployed; and the wind turbulence at the time of operation. Based on these inputs, the system gives the accurate angle of fire and the firing power needed to travel the distance, i.e. the charge to be used to propel the projectile towards the target. The idea and the objective are to improve the hit ratio (kill ratio) to 1:3 or 1:4.

OBJECTIVE
The main objective of the proposed solution is to land the projectile accurately on the target. A fire direction center (FDC) is a command post, consisting of communication personnel and equipment, by means of which the commander exercises fire direction and/or fire control. The FDC receives target inputs and requests for fire and translates them into appropriate fire directions. Specifically, the FDC receives the inputs of the target (latitude, longitude, and height) from the forward observation officer. After receiving the inputs, the FDC calculates the distance between the target and the gunner and the angle at which the projectile should be fired, checks for obstacles along the flight path, accounts for the direction of the wind, and then calculates the angle and speed with which the projectile should be fired before sending the request for fire to the gunner. The purpose of the fire direction center is to provide accurate, timely information on which command decisions are based.
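As an illustration of the calculation the fire direction center performs, the following sketch in C# (the implementation language used later in this report) computes the gun-to-target ground distance with the haversine formula and a firing elevation from the simplified vacuum range equation R = v^2 * sin(2*theta) / g. It deliberately ignores drag, wind, and height difference, and all names (FireDirectionCenter, muzzleVelocity) are illustrative assumptions, not the project's actual code.

    using System;

    class FireDirectionCenter
    {
        const double EarthRadius = 6371000.0; // mean Earth radius in metres
        const double Gravity = 9.81;          // m/s^2

        // Great-circle (haversine) distance between gun and target, in metres.
        static double GroundDistance(double lat1, double lon1, double lat2, double lon2)
        {
            double dLat = ToRadians(lat2 - lat1);
            double dLon = ToRadians(lon2 - lon1);
            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                       Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                       Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            return 2 * EarthRadius * Math.Asin(Math.Sqrt(a));
        }

        // Low-trajectory elevation angle in degrees, from R = v^2 * sin(2*theta) / g.
        static double ElevationAngle(double rangeMetres, double muzzleVelocity)
        {
            double s = Gravity * rangeMetres / (muzzleVelocity * muzzleVelocity);
            if (s > 1.0)
                throw new ArgumentException("Target is out of range for this charge.");
            return 0.5 * Math.Asin(s) * 180.0 / Math.PI;
        }

        static double ToRadians(double degrees)
        {
            return degrees * Math.PI / 180.0;
        }

        static void Main()
        {
            double range = GroundDistance(17.385, 78.486, 17.400, 78.500);
            Console.WriteLine("Range: {0:F0} m, elevation: {1:F2} degrees",
                              range, ElevationAngle(range, 800.0));
        }
    }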

CIRCULAR ERROR PROBABILITY


In the military science of ballistics, circular error probability (CEP), also called circle of equal probability, is an intuitive measure of a weapon system's precision. It is defined as the radius of a circle, centered about the mean point of impact, whose boundary is expected to contain 50% of the impacts. In practice, artillery shells are fired in a circular pattern around the estimated target position in the expectation that one of them will hit. The forward observation officer gives approximate inputs for the target to the gunner, who then bombards the target area around that approximate location. The hit ratio under circular error probability is about 1:10, i.e. roughly one shell in ten hits the target. Moreover, every projectile that lands away from the target alerts the enemy that an attack is under way, allowing him to take cover in bunkers and safeguard himself.
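The 50% definition can be made concrete with a short calculation: given recorded impact points, the CEP is approximately the median radial miss distance from the mean point of impact. The C# sketch below uses invented sample impacts purely for illustration.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class CepEstimate
    {
        static void Main()
        {
            // (x, y) impact coordinates relative to the aim point, in metres (sample data).
            List<double[]> impacts = new List<double[]>
            {
                new[] {  12.0,  -5.0 }, new[] { -30.0, 18.0 }, new[] {  4.0, 44.0 },
                new[] { -15.0, -22.0 }, new[] {  25.0,  9.0 }, new[] { -8.0,  3.0 }
            };

            // Mean point of impact.
            double meanX = impacts.Average(p => p[0]);
            double meanY = impacts.Average(p => p[1]);

            // Sort the radial distances from the mean; the median radius is the CEP,
            // the circle expected to contain half of the impacts.
            List<double> radii = impacts
                .Select(p => Math.Sqrt(Math.Pow(p[0] - meanX, 2) + Math.Pow(p[1] - meanY, 2)))
                .OrderBy(r => r)
                .ToList();
            Console.WriteLine("Estimated CEP: {0:F1} m", radii[radii.Count / 2]);
        }
    }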

FORWARD OBSERVATION OFFICER


A military artillery observer or spotter is responsible for directing artillery and mortar fire. Because artillery is an indirect-fire weapon system, the guns are rarely in line of sight of their target, which is often located miles away. The observer serves as the eyes of the gunners, calling in target locations and adjustments to the Fire Direction Center (FDC) via radio. The FDC then translates the observer's orders into firing solutions for the cannon batteries. Artillery observers are often deployed with combat-arms maneuver units, typically infantry companies or armored units. On land, artillery observers are considered high-priority targets by enemy forces, as they control a great amount of firepower, are within visual range of the enemy, and are often located deep within enemy territory. The artillery observer must be skilled not only in fire direction but also in stealth and, if necessary, combat in self-defence.

2. SYSTEM ANALYSIS
2.1. USER REQUIREMENTS:
User Friendly: The project is designed and organized in a simplified manner to suit the current requirements of the forward observation officer.
Accurate Results: The forward observation officer needs to determine the accurate location of the target with reference to his own position.

2.2 HARDWARE REQUIREMENTS


Processor : 32-bit
RAM : 256 MB
Hard Disk : 8 GB

2.3 SOFTWARE REQUIREMENTS


Operating System : Windows XP Professional
Language : .NET
Back End : Database
Documentation : MS Word

2.4 USER REQUIREMENTS


The User Requirements Document (URD) is a seminal document in the systems engineering and requirements engineering processes. It contains the requirements from the stakeholders, particularly the users (operators, maintainers, trainers, marketers, etc.). All stakeholders must be involved, or at least considered, during the development of the user requirements document. As a minimum, the URD should cover: applications and missions; operational characteristics; operational constraints; external systems and interfaces; the operational and support environment; and support concepts and responsibilities. The document typically passes through four stages:
Draft: The first version, or draft version, is compiled after requirements have been discovered, recorded, classified, and prioritized.
Proposed: The draft document is then proposed as a potential requirements specification for the project. It should be reviewed by several parties, who may comment on any requirements and priorities, either to agree, to disagree, or to identify missing requirements. Readers include end users, developers, project managers, and other stakeholders. The document may be amended and re-proposed several times before moving to the next stage.
Validated: Once the various stakeholders have agreed to the requirements in the document, it is considered validated.
Approved: The validated document is accepted by representatives of each group of stakeholders as an appropriate statement of requirements for the project. The developers then use the requirements document as a guide to implementation and to check the progress of the project as it develops.

3. SYSTEM DESIGN
SYSTEM ARCHITECTURE

DATA FLOW DIAGRAMS

3.1 INTRODUCTION TO THE UNIFIED MODELING LANGUAGE


Introducing the UML: The Unified Modeling Language (UML) is a standard language for writing software blueprints. The UML may be used to visualize, specify, construct, and document the artifacts of a software-intensive system. It is appropriate for modeling systems ranging from enterprise information systems to distributed Web-based applications and even hard real-time embedded systems. It is a very expressive language, addressing all the views needed to develop and deploy such systems. Learning to apply the UML effectively starts with forming a conceptual model of the language, which requires learning three major elements: the UML's basic building blocks, the rules that dictate how these building blocks may be put together, and some common mechanisms that apply throughout the language. The UML is only a language and so is just one part of a software development method. The UML is process independent, although optimally it should be used in a process that is use-case driven, architecture-centric, iterative, and incremental.
Building Blocks of UML
The vocabulary of the UML encompasses three kinds of building blocks: things, relationships, and diagrams.

Things are the abstractions that are first-class citizens in a model; relationships tie these things together; diagrams group interesting collections of things.


Things in the UML


1) Structural things: These are the nouns of UML models, the mostly static parts of a model, representing elements that are either conceptual or physical. Examples: Class, Interface, Collaboration, Use Case, Component, and Node.
2) Behavioral things: These are the dynamic parts of UML models, the verbs of a model, representing behavior over time and space. Examples: Interaction and State Machine.
3) Grouping things: These are the organizational parts of UML models, the boxes into which a model can be decomposed. Example: Package.
4) Annotational things: These are the explanatory parts of UML models, the comments you may apply to describe, illuminate, and remark on any element in a model. There is one primary kind of annotational thing, called a Note.
Relationships in the UML
There are four kinds of relationships in the UML.
1) Dependency: A semantic relationship between two things in which a change to one thing (the independent thing) may affect the semantics of the other thing (the dependent thing).
2) Association: A structural relationship that describes a set of links, a link being a connection among objects. Aggregation is a special kind of association, representing a structural relationship between a whole and its parts.


3) Generalization: A relationship in which objects of the specialized element (the child) are substitutable for objects of the generalized element (the parent).
4) Realization: A semantic relationship between classifiers, wherein one classifier specifies a contract that another classifier guarantees to carry out.
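Since this project is implemented in C#, the four relationships can be illustrated directly in that language; the class names below are invented purely for the example.

    // Realization: a contract that one classifier promises to carry out.
    interface IWeapon
    {
        void Fire();
    }

    class Projectile { }

    // Cannon realizes IWeapon and has an association with Projectile.
    class Cannon : IWeapon
    {
        private Projectile loaded;                    // association (a structural link)
        public void Load(Projectile p) { loaded = p; }
        public virtual void Fire() { loaded = null; }
    }

    // Generalization: a HowitzerCannon is substitutable wherever a Cannon is expected.
    class HowitzerCannon : Cannon { }

    class FireDirectionCenter
    {
        // Dependency: a change to Cannon's interface may affect this method.
        public void IssueFireOrder(Cannon gun) { gun.Fire(); }
    }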

Diagrams in the UML


A diagram is a graphical presentation of a set of elements, most often rendered as a connected graph of vertices (things) and arcs (relationships). For all but the most trivial systems, a diagram represents an elided view of the elements that make up a system.
1) Class Diagram: Class diagrams address the static design view of a system. A class diagram shows a set of classes, interfaces, and collaborations and their relationships.
2) Object Diagram: Object diagrams represent static snapshots of instances of the things found in class diagrams. An object diagram shows a set of objects and their relationships.
3) Use Case Diagram: Use case diagrams address the static use case view of a system. These diagrams are especially important in organizing and modeling the behaviors of a system. A use case diagram shows a set of use cases and actors (a special kind of class) and their relationships.
4) Interaction Diagram: An interaction diagram shows an interaction, consisting of a set of objects and their relationships, including the messages that may be dispatched among them. It addresses the dynamic view of a system.


Sequence Diagram: A sequence diagram is an interaction diagram that emphasizes the time-ordering of messages.
Collaboration Diagram: A collaboration diagram is an interaction diagram that emphasizes the structural organization of the objects that send and receive messages.
5) Activity Diagram: An activity diagram is a special kind of statechart diagram that shows the flow from activity to activity within a system. Activity diagrams are important in modeling the functions of a system and emphasize the flow of control among objects.
6) Component Diagram: A component diagram shows the organization of and dependencies among a set of components. Component diagrams address the static implementation view of a system. They are related to class diagrams in that a component typically maps to one or more classes, interfaces, or collaborations.
7) Deployment Diagram: A deployment diagram shows the configuration of run-time processing nodes and the components that live on them. Deployment diagrams are related to component diagrams in that a node typically encloses one or more components.


3.2 UML DIAGRAMS


CLASS DIAGRAM:


SEQUENCE DIAGRAM:


USE CASE DIAGRAM:


ACTIVITY DIAGRAM:


4. SOFTWARE DEVELOPMENT METHODOLOGIES

.NET:
Introduction: Microsoft is releasing .NET versions of its existing products as part of its .NET Initiative. Of course, a next version of these products would have existed with or without the .NET Initiative. First, an important part of Microsoft's .NET Initiative is a new and exciting platform called the .NET Framework, which we will discuss shortly. Second, Microsoft has re-focused across its various product groups to address software development in an interconnected world. Let us talk about this for a moment. Microsoft creates software; that is what the company is all about. For over a decade now, the Internet has been changing the software industry right out from under it (and many of the rest of us). This is an unstoppable paradigm shift that has the power to make or break even a goliath software company, and Microsoft has decided to address the Internet head-on, from every nook and cranny of the company. So began the .NET Initiative. Before we jump headlong into the .NET Initiative, let us review how the software industry is changing. Here is my take on the current thinking.

The .NET Framework: At the heart of the .NET Framework is a component called the Common Language Runtime, or CLR, which is a lot like an operating system that runs within the context of another operating system (such as Windows ME or Windows 2000). This is not a new idea: it shares traits with the Java Virtual Machine, as well as with the environments of many interpreted languages such as BASIC and LISP, which have been around for decades.

The purpose of a middleware platform like the CLR is simply that a common OS like Windows is often too close to the hardware of a machine to retain the flexibility or agility required by software targeted for business on the Internet. Software running on the CLR (referred to as managed code) is exceptionally agile. Another component of the .NET Framework is a massive library of reusable object types called the Framework Class Library, or FCL. The FCL contains hundreds of classes to perform tasks ranging from the mundane, such as file reads and writes, to the exotic, such as advanced cryptography and web services. Using the FCL, you get software as a service with trivial development costs. Finally, the .NET Framework contains a collection of tools and compilers that help make programming for this new environment productive and enjoyable. Up until now I have made little mention of C# (pronounced "see sharp") or Visual Basic .NET. The reason is that the real guts of this new environment are in the CLR. However, over twenty language compilers are currently being designed for the .NET Framework, including five offerings from Microsoft: Visual Basic, C#, C++, JScript, and CIL. The CLR, and the .NET Framework in general, are designed in such a way that code written in one language can not only be used seamlessly by another language, but can also be naturally extended by code written in another programming language. This means that (depending on the needs of a project's workforce) developers will be able to write code in the language with which they are most comfortable, and still reap all the rewards of the .NET environment as well as the efforts of their coworkers. Now it is time to share some real-life specifics that translate to unparalleled productivity for developers. Developers are comfortable with calling functions. Developers are also comfortable (or can quickly become comfortable) with using "objects" in their software. These things take little time and effort compared to difficult tasks such as memory management and network programming. So the .NET Framework is designed from the ground up to make the difficult tasks either automatic (such as memory management) or exposed as objects and function calls.
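As a small taste of the FCL's "mundane" services mentioned above, the following C# sketch writes and reads a text file in a few lines; the file name and contents are illustrative only.

    using System;
    using System.IO;

    class FclDemo
    {
        static void Main()
        {
            // File I/O through the Framework Class Library: no handles, no buffers.
            File.WriteAllText("target.txt", "17.385,78.486,545");
            string contents = File.ReadAllText("target.txt");
            Console.WriteLine(contents);
        }
    }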


The .NET Framework makes very clever use of general-purpose standards like SOAP, XML, and WSDL to make advanced functionality simple. It is worth noting that none of these standards has anything to do with, say, the real-estate industry, and yet they can be used to transmit real-estate information (as well as any other kind of digital data) easily. Again, the network communication using the Simple Object Access Protocol (SOAP) is handled by the FCL. Meanwhile, the .NET Framework will automatically create the WSDL necessary to describe your new web service. So the hard stuff is automatic or exposed as objects and function calls. To further this goal, the .NET Framework can be used to write traditional client applications (like word processors and spreadsheets) as well as cutting-edge software as a service. Either way, the concepts and tools are the same, the languages are the same, and the classes and types that your developers use are the same. This homogeneous view of development increases productivity. Whether you use the .NET Framework to write the client or the web service, you will interoperate seamlessly with clients and servers running on other platforms so long as they adhere to web-service standards like SOAP. Remember, the .NET Framework is the centerpiece of Microsoft's .NET Initiative. This new product will enable your company to produce software that exposes itself as a service and/or uses services in an interconnected world. This is the next step. Meanwhile, Microsoft will be releasing .NET versions of its enterprise servers and other products, such as Visual Studio .NET, which take full advantage of and complete the abilities of the .NET Framework. Microsoft will also be marketing its own web services, such as authentication and personalization services. This is all part of the .NET Initiative. However, Microsoft does not corner this initiative. In fact, significant portions of the .NET Framework have been submitted for ECMA standardization, meaning Microsoft would not retain control of the technology. Meanwhile, third parties will release server products that tightly integrate with the .NET Framework.
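A minimal sketch of this web-service idea, using the classic ASMX model of the .NET Framework in which the runtime emits the WSDL and handles the SOAP plumbing automatically. The service name, namespace, and method are assumptions for illustration; the class would be hosted in IIS behind an .asmx endpoint.

    using System.Web.Services;

    [WebService(Namespace = "http://example.org/fdc")]
    public class RangeService : WebService
    {
        // Exposed as a SOAP-callable operation; the WSDL is generated for us.
        [WebMethod]
        public double MetersBetween(double lat1, double lon1,
                                    double lat2, double lon2)
        {
            // Great-circle distance, as sketched in the Objective section.
            const double earthRadius = 6371000.0;
            double dLat = Rad(lat2 - lat1), dLon = Rad(lon2 - lon1);
            double a = System.Math.Sin(dLat / 2) * System.Math.Sin(dLat / 2) +
                       System.Math.Cos(Rad(lat1)) * System.Math.Cos(Rad(lat2)) *
                       System.Math.Sin(dLon / 2) * System.Math.Sin(dLon / 2);
            return 2 * earthRadius * System.Math.Asin(System.Math.Sqrt(a));
        }

        private static double Rad(double deg) { return deg * System.Math.PI / 180.0; }
    }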


Third parties will expose web services using the .NET Framework. Some of these products will compete with Microsoft's products; others will be new, innovative products that were never before feasible. Software as a service is here with or without the .NET Initiative. The .NET Initiative brings new tools, a new platform, and a cohesive plan designed from the ground up to exploit software as a service. What we all get is an Internet that begins to meet its own potential.
Types of .NET: VB.NET, ASP.NET, C#.NET, VC++, VJ++

C#.NET:
C# (pronounced "see sharp") is a multi-paradigm programming language encompassing strong typing and imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within its .NET Initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure. C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Its development team is led by Anders Hejlsberg. The most recent version is C# 4.0, which was released on April 12, 2010.

Design goals
The ECMA standard lists these design goals for C#:
The C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.


The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
The language is intended for use in developing software components suitable for deployment in distributed environments.
Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
Support for internationalization is very important.
C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems down to the very small having dedicated functions.
Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.
The name "C sharp" was inspired by musical notation, where a sharp indicates that the written note should be made a semitone higher in pitch. The "sharp" suffix has been used by a number of other .NET languages that are variants of existing languages, including J# (a .NET language also designed by Microsoft that is derived from Java 1.1), A# (from Ada), and the functional programming language F#. The original implementation of Eiffel for .NET was called Eiffel#, a name since retired since the full Eiffel language is now supported. The suffix has also been used for libraries, such as Gtk# (a .NET wrapper for GTK+ and other GNOME libraries), Cocoa# (a wrapper for Cocoa), and Qt# (a .NET language binding for the Qt toolkit).
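Two of the goals listed above, detection of attempts to use uninitialized variables and array bounds checking, show up directly when compiling and running C# code. A minimal sketch (the commented line would be rejected by the compiler with error CS0165):

    using System;

    class SafetyDemo
    {
        static void Main()
        {
            int unset;
            // Console.WriteLine(unset);    // compile-time error CS0165:
                                            // use of unassigned local variable 'unset'

            int[] charges = { 1, 2, 3 };
            try
            {
                Console.WriteLine(charges[5]);   // bounds are checked at run time
            }
            catch (IndexOutOfRangeException)
            {
                Console.WriteLine("Out-of-range access raised an exception, not corruption.");
            }
        }
    }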


History
During the development of the .NET Framework, the class libraries were originally written using a managed code compiler system called Simple Managed C (SMC). In January 1999, Anders Hejlsberg formed a team to build a new language, at the time called Cool, which stood for "C-like Object Oriented Language". Microsoft had considered keeping the name "Cool" as the final name of the language, but chose not to do so for trademark reasons. By the time the .NET project was publicly announced at the July 2000 Professional Developers Conference, the language had been renamed C#, and the class libraries and ASP.NET runtime had been ported to C#. C#'s principal designer and lead architect at Microsoft is Anders Hejlsberg, who was previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly CodeGear Delphi and Borland Delphi), and Visual J++. In interviews and technical papers he has stated that flaws in most major programming languages (e.g. C++, Java, Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime (CLR), which, in turn, drove the design of the C# language itself. James Gosling, who created the Java programming language in 1994, and Bill Joy, a co-founder of Sun Microsystems, the originator of Java, called C# an "imitation" of Java; Gosling further claimed that "[C# is] sort of Java with reliability, productivity and security deleted." Klaus Kreft and Angelika Langer (authors of a C++ streams book) stated in a blog post that "Java and C# are almost identical programming languages. Boring repetition that lacks innovation," "Hardly anybody will claim that Java or C# are revolutionary programming languages that changed the way we write programs," and "C# borrowed a lot from Java - and vice versa. Now that C# supports boxing and unboxing, we'll have a very similar feature in Java." Anders Hejlsberg has argued that C# is "not a Java clone" and is "much closer to C++" in its design. Since the release of C# 2.0 in November 2005, the C# and Java languages have evolved on increasingly divergent trajectories, becoming somewhat less similar. C# makes use of reification to provide "first-class" generic objects that can be used like any other class, with code generation performed at class-load time. By contrast, Java's generics are essentially a language syntax feature, and they do not affect the generated byte code, because the compiler performs type erasure on the generic type information after it has verified its correctness.
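The reification point can be seen in a few lines of C#: the type argument of a generic class is available at run time through typeof(T), whereas in Java the equivalent information is erased during compilation. A minimal sketch:

    using System;

    class Holder<T>
    {
        public void Describe()
        {
            // The runtime knows the concrete T of each instantiation.
            Console.WriteLine("Holding values of type {0}", typeof(T));
        }
    }

    class ReificationDemo
    {
        static void Main()
        {
            new Holder<int>().Describe();      // prints System.Int32
            new Holder<string>().Describe();   // prints System.String
        }
    }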

Distinguishing Features
By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, generate Common Intermediate Language (CIL), or generate any other specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of C++ or FORTRAN. Some notable features of C# that distinguish it from C and C++ (and Java, where noted) are:
There are no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.
Local variables cannot shadow variables of the enclosing block, unlike in C and C++. Variable shadowing is often considered confusing by C++ texts.
C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the Boolean type. While C++ also has a Boolean type, it can be freely converted to and from integers, and expressions such as if(a) require only that a is convertible to bool, allowing a to be an int or a pointer. C# disallows this "integer meaning true or false" approach on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain common programming mistakes in C or C++ such as if (a = b) (use of the assignment = instead of the equality ==).
In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run.


Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one that has been garbage collected) or to a random block of memory. An unsafe pointer can point to an instance of a value type, array, string, or a block of memory allocated on the stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them.
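A minimal sketch of two of these features together, the strict bool condition and the unsafe block; compiling it requires the /unsafe compiler switch, and the commented line shows the classic C mistake that C# rejects outright:

    using System;

    class FeatureDemo
    {
        static void Main()
        {
            int a = 1, b = 2;
            // if (a = b) { }    // compile-time error: an int is not a bool,
                                 // so '=' cannot be mistaken for '=='
            if (a == b)
            {
                Console.WriteLine("equal");
            }

            unsafe   // raw pointers are confined to blocks marked unsafe
            {
                int value = 42;
                int* p = &value;
                Console.WriteLine(*p);   // prints 42
            }
        }
    }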

Common Type System


C# has a unified type system, called the Common Type System (CTS). A unified type system implies that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method.
Categories of data types: The CTS separates data types into two categories:
1. Value types
2. Reference types

Instances of value types have neither referential identity nor referential comparison semantics: equality and inequality comparisons for value types compare the actual data values within the instances, unless the corresponding operators are overloaded. Value types are derived from System.ValueType, always have a default value, and can always be created and copied. Some other limitations on value types are that they cannot derive from each other (but can implement interfaces) and cannot have an explicit default (parameterless) constructor. Examples of value types are all primitive types, such as int (a signed 32-bit integer), float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode code unit), and System.DateTime (which identifies a specific point in time with nanosecond precision). Other examples are enum (enumerations) and struct (user-defined structures).


In contrast, reference types have the notion of referential identity: each instance of a reference type is inherently distinct from every other instance, even if the data within both instances is the same. This is reflected in the default equality and inequality comparisons for reference types, which test for referential rather than structural equality, unless the corresponding operators are overloaded (as in the case of System.String). In general, it is not always possible to create an instance of a reference type, copy an existing instance, or perform a value comparison on two existing instances, though specific reference types can provide such services by exposing a public constructor or implementing a corresponding interface (such as ICloneable or IComparable). Examples of reference types are object (the ultimate base class for all other C# classes), System.String (a string of Unicode characters), and System.Array (the base class for all C# arrays).
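The practical difference is easiest to see in code: assigning a struct copies the data, while assigning a class copies only the reference. A minimal sketch:

    using System;

    struct GridValue { public int X; }   // value type (derives from System.ValueType)
    class  GridRef   { public int X; }   // reference type

    class TypeSystemDemo
    {
        static void Main()
        {
            GridValue v1 = new GridValue(); v1.X = 1;
            GridValue v2 = v1;               // independent copy of the data
            v2.X = 99;
            Console.WriteLine(v1.X);         // prints 1: v1 is unaffected

            GridRef r1 = new GridRef(); r1.X = 1;
            GridRef r2 = r1;                 // second reference to the same object
            r2.X = 99;
            Console.WriteLine(r1.X);         // prints 99: same instance
        }
    }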

Standardization and Licensing


In August 2000, Microsoft, Hewlett-Packard, and Intel co-sponsored the submission of specifications for C# as well as the Common Language Infrastructure (CLI) to the standards organization Ecma International. In December 2001, ECMA released ECMA-334, the C# Language Specification. C# became an ISO standard in 2003; ECMA had previously adopted an equivalent specification as the 2nd edition of C# in December 2002. In June 2005, ECMA approved edition 3 of the C# specification and updated ECMA-334. Additions included partial classes, anonymous methods, nullable types, and generics (similar to C++ templates). In July 2005, ECMA submitted the standards and related TRs to ISO/IEC JTC 1 via the latter's Fast-Track process; this process usually takes 6-9 months.

The C# language definition and the CLI are standardized under ISO and Ecma standards that provide reasonable and non-discriminatory licensing protection from patent claims. However, Microsoft uses C# and the CLI in its Base Class Library (BCL), which is the foundation of its proprietary .NET Framework and which provides a variety of non-standardized classes (extended I/O, GUI, web services, etc.). Microsoft has agreed not to sue open-source developers for violating patents in non-profit projects for the part of the framework that is covered by the Open Specification Promise (OSP). Microsoft has also agreed not to enforce patents relating to Novell products against Novell's paying customers, with the exception of a list of products that do not explicitly mention C#, .NET, or Novell's implementation of .NET (the Mono Project). In a note posted on the Free Software Foundation's news website in June 2009, Richard Stallman warned that he believes "Microsoft is probably planning to force all free C# implementations underground using software patents", and recommended that developers avoid taking what he described as the "gratuitous risk" associated with "depend[ing] on the free C# implementations". The Free Software Foundation later reiterated its warnings, claiming that the extension of the Microsoft Community Promise to the C# and CLI ECMA specifications would not prevent Microsoft from harming open-source implementations of C#, because many specific Windows libraries included with .NET or Mono were not covered by this promise.

Implementations
The reference C# compiler is Microsoft Visual C#. Other C# compilers exist, often including an implementation of the Common Language Infrastructure and the .NET class libraries up to .NET 2.0. The Mono project provides an open-source C# compiler, a nearly complete implementation of the Common Language Infrastructure including the required framework libraries as they appear in the ECMA specification, and a subset of some of the remaining Microsoft proprietary .NET class libraries up to .NET 2.0 (those not documented or included in the ECMA specification, but included in Microsoft's standard .NET Framework distribution). Microsoft's Rotor project (currently called the Shared Source Common Language Infrastructure, licensed for educational and research use only) provides a shared-source implementation of the CLR runtime, a C# compiler, and a subset of the required Common Language Infrastructure framework libraries in the ECMA specification (up to C# 2.0, and supported on Windows XP only).


5. OUTPUT SCREENS


6. TESTING AND VALIDATION


6.1 INTRODUCTION
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic. As a rule, system testing takes as its input all of the integrated software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more encompassing type of testing: it seeks to detect defects both within the inter-assemblages and within the system as a whole.

6.2 TESTING METHODOLOGIES


Unit-level testing: This type of testing involves checking the specifications sent by the client against the product design. The process involves quality control of the various intrinsic modules of the software.
Integration testing: Testing related to the integration of different modules in a logical manner, providing a seamless hand-off between modules.
System testing: This testing involves testing the whole product, mainly with respect to its overall navigation and browsing features. It also includes GUI testing, which covers different colour combinations and graphics. Functionality testing is also a part of this and takes into account the business functions mapped within the software and their logical flow.
Regression testing: This process re-checks the software after bugs have been found and fixed, to ensure the fixes have not disturbed the existing flow of the program.


User acceptance testing: This testing is done by domain experts for the relevant software. They are required to analyse its quality with respect to the domain logic.
Alpha testing and beta testing: The program is tested by third-party users, but only within a limited segment, to remove remaining kinks and errors.
Compatibility testing: This test assesses the compatibility of the software on various systems across different domains, platforms, and interfaces. This is often handled through virtualisation, which helps in optimum utilisation of resources.
Performance testing: This test applies particularly to web applications such as content management systems and interactive software, to gauge the performance of the different applications. Simulation tools are used to check behaviour under stress (stress testing) and the load that can be handled (load testing).
API testing: APIs are basically used as a communication medium between drivers and systems and sit below the kernel level. This test involves testing of APIs (application programming interfaces).
Security testing: Security testing involves security measures and penetration testing of software to find traces of SQL and HTML injection as well as cross-site scripting, and to get rid of them.
Black-box and white-box testing: Black-box testing involves testing the output against a preset input to an application, judging on the basis of results only. White-box testing tests the overall input-process-output pipeline and pays attention to internal detail. A unit-level check for this project's distance computation is sketched below.
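A minimal sketch of what a unit-level check for this project might look like, assuming a hypothetical FiringSolution.GroundDistance method and NUnit as the test framework (both are illustrative choices, not taken from the actual code):

    using NUnit.Framework;

    public static class FiringSolution
    {
        // Stub standing in for the real great-circle computation.
        public static double GroundDistance(double lat1, double lon1,
                                            double lat2, double lon2)
        {
            return 0.0;
        }
    }

    [TestFixture]
    public class FiringSolutionTests
    {
        [Test]
        public void GroundDistance_SamePoint_IsZero()
        {
            double d = FiringSolution.GroundDistance(17.385, 78.486, 17.385, 78.486);
            Assert.AreEqual(0.0, d, 0.001);   // expected, actual, tolerance
        }
    }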

6.3 VALIDATION:


Validation is the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. In other words, validation ensures that the product actually meets the user's needs and that the specifications were correct in the first place, while verification ensures that the product has been built according to the requirements and design specifications. Validation ensures that you built the right thing; verification ensures that you built it right. Validation confirms that the product, as provided, will fulfil its intended use. From a testing perspective:
Fault: a wrong or missing function in the code.
Failure: the manifestation of a fault during execution.
Malfunction: the system does not meet its specified functionality.
Within the modeling and simulation community, the definitions of validation, verification, and accreditation are similar. Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data, are accurate representations of the real world from the perspective of the intended use.

7. CONCLUSION


The idea and the objective were to improve the hit probability (kill ratio) to 1:3 or 1:3.5, from the existing system's 1:10, and to destroy targets precisely. An improvement in bombing accuracy occurred with the appearance of the precision guided projectile.

8. FUTURE ENHANCEMENT


Laser guidance is a technique of guiding a missile or other projectile or vehicle to a target by means of a laser beam. Some laser-guided systems utilise beam-riding guidance, but most operate more like Semi-Active Radar Homing (SARH); this technique is accordingly called SALH, for Semi-Active Laser Homing. With this technique, a laser is kept pointed at the target, and the laser radiation bounces off the target and is scattered in all directions (this is known as painting the target, or laser painting). The missile, bomb, etc. is launched or dropped somewhere near the target. When it is close enough for some of the reflected laser energy from the target to reach it, a laser seeker detects which direction this energy is coming from and adjusts the projectile's trajectory towards the source. While the projectile is in the general area and the laser is kept aimed at the target, the projectile should be guided accurately to the target.
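The homing behaviour described above reduces to a simple loop: measure the bearing to the reflected laser energy, then steer along it. The following C# sketch simulates that pursuit logic in two dimensions; the positions, speed, and step count are invented, and real seekers use proportional navigation rather than the pure pursuit shown here.

    using System;

    class SalhSketch
    {
        static void Main()
        {
            double px = 0, py = 0;          // projectile position (m)
            double tx = 1000, ty = 300;     // painted target position (m)
            double speed = 50;              // distance covered per time step (m)

            for (int step = 0; step < 100; step++)
            {
                double dx = tx - px, dy = ty - py;
                double dist = Math.Sqrt(dx * dx + dy * dy);
                if (dist <= speed)
                {
                    Console.WriteLine("Impact at step {0}", step);
                    return;
                }
                // Steer directly along the measured bearing to the laser return.
                px += speed * dx / dist;
                py += speed * dy / dist;
            }
            Console.WriteLine("No impact within the simulated flight time.");
        }
    }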



