
Documentation is important in testing:

As I mentioned in my earlier post, the general perception of software
testing documentation is that "it can be done only by the person
who has free time". We need to change this mindset; only then can
we leverage the power of documentation on our projects.

It’s not that we don’t know how to do the documentation right. We just
don’t think it’s important.

Every team should have standard templates for all kinds of
documentation, from the test strategy, test plan, test cases, and
test data to the bug report. Often these exist only to satisfy
standards (CMMI, ISO, etc.), but when it comes to actual
implementation, how many of these documents do we really use? We
need to synchronize our quality process with the documentation
standards and other processes in the organization.

SMOKE TESTING:
* Smoke testing originated in the hardware practice of powering on
a new piece of hardware for the first time and considering it a
success if it does not catch fire and smoke. In the software industry,
smoke testing is a shallow and wide approach in which all areas of
the application are tested without going too deep into any of them.
* A smoke test is scripted, either as a written set of tests or as an
automated test (a minimal sketch follows this list).
* A smoke test is designed to touch every part of the application in
a cursory way. It is shallow and wide.
* Smoke testing is conducted to ensure that the most crucial
functions of a program work, without bothering with finer details
(build verification is an example).
* Smoke testing is a routine health check-up for a build of an
application before it is taken into in-depth testing.
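
For illustration, here is a minimal automated smoke test in Python.
It is a sketch only: the base URL and the paths are invented, and in a
real project the list would cover every major area of your application.

    import requests

    BASE_URL = "http://localhost:8080"  # hypothetical application under test

    # One surface-level check per major area: shallow and wide.
    SMOKE_PATHS = ["/login", "/users", "/reports", "/search"]

    def test_smoke():
        for path in SMOKE_PATHS:
            response = requests.get(BASE_URL + path, timeout=5)
            # Assert only that the area answers at all; deeper behavioral
            # checks belong to later test levels.
            assert response.status_code < 500, f"{path} failed: {response.status_code}"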

SANITY TESTING:
* A sanity test is a narrow regression test that focuses on one or a
few areas of functionality. Sanity testing is usually narrow and deep.
* A sanity test is usually unscripted.
* A sanity test is used to determine that a small section of the
application is still working after a minor change (see the sketch
after this list).
* Sanity testing is cursory testing; it is performed whenever cursory
testing is sufficient to prove that the application is functioning
according to specifications. This level of testing is a subset of
regression testing.
* Sanity testing verifies whether the requirements are met or not,
checking all the features breadth-first.
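
As referenced above, here is a minimal sanity-test sketch. The
endpoints, fields, and response shape are invented for illustration;
the point is the narrow-and-deep shape: one area, checked thoroughly.

    import requests

    BASE_URL = "http://localhost:8080"  # hypothetical application under test

    def test_user_creation_sanity():
        # Narrow: only the user-creation area, right after a minor change there.
        payload = {"first_name": "Ada", "last_name": "Lovelace", "age": 36}
        response = requests.post(BASE_URL + "/users", json=payload, timeout=5)
        assert response.status_code == 201

        # Deep: verify the created record actually round-trips.
        user_id = response.json()["id"]  # hypothetical response shape
        fetched = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
        assert fetched.status_code == 200
        assert fetched.json()["first_name"] == "Ada"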

What is the difference between client-server testing and web-based
testing, and what do we need to test in such applications?

Ans:
Projects are broadly divided into two types:

* 2-tier applications
* 3-tier applications

CLIENT/SERVER TESTING

This type of testing is usually done for 2-tier applications (usually
developed for a LAN).
Here we have a front end and a back end.

The application launched on the front end will have forms and
reports that monitor and manipulate data.

E.g., applications developed in VB, VC++, Core Java, C, C++, D2K,
PowerBuilder, etc.
The back end for these applications would be MS Access, SQL
Server, Oracle, Sybase, MySQL, or Quadbase.

The tests performed on these types of applications would be:
- User interface testing
- Manual support testing
- Functionality testing
- Compatibility testing & configuration testing
- Intersystem testing
WEB TESTING
This is done for 3-tier applications (developed for the Internet, an
intranet, or an extranet).
Here we have a browser, a web server, and a DB server.

The applications accessible in the browser would be developed in
HTML, DHTML, XML, JavaScript, etc. (We monitor data through these
applications.)

Applications for the web server would be developed in Java, ASP,
JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (All the
manipulations are done on the web server with the help of these
programs.)

The DB server would run Oracle, SQL Server, Sybase, MySQL, etc.
(All data is stored in the database available on the DB server.)

The tests performed on these types of applications would be:
- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load / stress testing
- Interoperability testing/intersystem testing
- Storage and data volume testing

A web application is a three-tier application.

It has a browser (monitors data) [monitoring is done using HTML,
DHTML, XML, JavaScript] -> a web server (manipulates data)
[manipulations are done using programming languages or scripts such
as advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion,
PHP] -> a database server (stores data) [data storage and retrieval
is done using databases such as Oracle, SQL Server, Sybase, MySQL].

The types of tests that can be applied to this type of application
are:
1. User interface testing for validation & user-friendliness
2. Functionality testing to validate behavior, inputs, error handling,
outputs, manipulations, service levels, order of functionality, links,
web-page content & back-end coverage
3. Security testing
4. Browser compatibility
5. Load / stress testing (a minimal sketch follows this list)
6. Interoperability testing
7. Storage & data volume testing
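
As referenced in item 5, here is a minimal load/stress sketch in
Python. It is hypothetical: the URL, user count, and request count are
invented, and a real project would normally use a dedicated tool
(JMeter, LoadRunner, etc.); the sketch only shows the idea of many
concurrent users and measured latencies.

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://localhost:8080/login"  # hypothetical page under load
    CONCURRENT_USERS = 50
    REQUESTS_PER_USER = 10

    def simulate_user(_):
        # One simulated user hitting the page repeatedly, recording latencies.
        latencies = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            requests.get(URL, timeout=10)
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(simulate_user, range(CONCURRENT_USERS))
        all_latencies = [t for user in results for t in user]

    print(f"requests: {len(all_latencies)}")
    print(f"average latency: {sum(all_latencies) / len(all_latencies):.3f}s")
    print(f"worst latency:   {max(all_latencies):.3f}s")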

A client-server application is a two-tier application.

It has forms & reporting at the front end (monitoring & manipulation
are done here) [using VB, VC++, Core Java, C, C++, D2K, PowerBuilder,
etc.] -> a database server at the back end [data storage & retrieval]
[using MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.]

The tests performed on these applications would be:
1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystem testing
Some more points to clarify the difference between client-server,
web, and desktop applications:

Desktop application:
1. The application runs in a single machine's memory (front end and
back end in one place)
2. Single user only

Client/Server application:
1. The application runs on two or more machines
2. The application is menu-driven
3. Connected mode (a connection exists until logout)
4. Limited number of users
5. Fewer network issues compared to a web application

Web application:
1. The application runs on two or more machines
2. URL-driven
3. Disconnected mode (stateless)
4. Unlimited number of users
5. Many issues, such as hardware compatibility, browser compatibility,
version compatibility, security, and performance

Here is an example scenario that caused a bug:

Let's assume that in your application under test you want to create a
new user. To do that, you need to log on to the application and
navigate to USERS menu > New User, then enter all the details in the
'User form': first name, last name, age, address, phone, etc. Once
you have entered all this information, you click the 'SAVE' button to
save the user, and you should see a success message saying, "New User
has been created successfully".

But when you logged into the application, navigated to USERS menu >
New User, entered all the required information, and clicked the SAVE
button: BANG! The application crashed and an error page appeared on
the screen. (Capture this error message window and save it, e.g., as
a Microsoft Paint file.)

Now this is the bug scenario, and you would like to report it as a
BUG in your bug-tracking tool.

How will you report this bug effectively?

Here is a sample bug report for the above-mentioned example:
(Note that some bug-report fields might differ depending on your
bug-tracking system.)

SAMPLE BUG REPORT:

Bug Name: Application crash on clicking the SAVE button while
creating a new user.
Bug ID: (It will be automatically created by the BUG Tracking tool
once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (Depends on the Tool you are using)
Environment: Windows 2003/SQL Server 2005

Description:
The application crashes on clicking the SAVE button while creating a
new user; hence a new user cannot be created in the application.

Steps To Reproduce:
1) Log on to the application
2) Navigate to the USERS menu > New User
3) Fill in all the user-information fields
4) Click the 'Save' button
5) Observe the error page "ORA1090 Exception: Insert values Error…"
6) See the attached logs for more information (attach any further
logs related to the bug, if any)
7) Also see the attached screenshot of the error page.

Expected result: On clicking the SAVE button, you should see the
success message "New User has been created successfully".

(Attach an 'application crash' screenshot, if any.)
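
The same steps can also be captured as an automated UI check, so the
crash is caught again on the next build. Below is a hypothetical
sketch using Selenium WebDriver in Python; the URL and the element
locators are invented and would need to match your application.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # 1) Log on to the application (URL and locators are invented)
        driver.get("http://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("tester")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login").click()

        # 2) Navigate to USERS menu > New User
        driver.find_element(By.LINK_TEXT, "USERS").click()
        driver.find_element(By.LINK_TEXT, "New User").click()

        # 3) Fill in the user-information fields
        driver.find_element(By.ID, "first_name").send_keys("Ada")
        driver.find_element(By.ID, "last_name").send_keys("Lovelace")

        # 4) Click 'Save', then assert the success message, not the error page
        driver.find_element(By.ID, "save").click()
        body = driver.find_element(By.TAG_NAME, "body").text
        assert "New User has been created successfully" in body, body
    finally:
        driver.quit()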

Save the defect/bug in the bug-tracking tool. You will get a bug ID,
which you can use for further reference.
A default 'New bug' mail will go to the respective developer and the
default module owner (team leader or manager) for further action.
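
Many bug-tracking tools also expose a REST API, so the same report
can be filed programmatically. The sketch below is an
assumption-heavy illustration: the endpoint, token, and field names
are invented and will differ from tool to tool.

    import requests

    TRACKER_URL = "http://bugtracker.example.com/api/bugs"  # hypothetical endpoint
    API_TOKEN = "replace-with-your-token"                   # hypothetical credential

    bug_report = {
        "name": "Application crash on clicking the SAVE button while creating a new user",
        "area_path": "USERS menu > New Users",
        "build": "5.0.1",
        "severity": "High",
        "priority": "High",
        "assigned_to": "Developer-X",
        "environment": "Windows 2003 / SQL Server 2005",
        "description": "Application crashes on clicking the SAVE button "
                       "while creating a new user.",
        "steps_to_reproduce": [
            "Log on to the application",
            "Navigate to the USERS menu > New User",
            "Fill in all the user-information fields",
            "Click the 'Save' button",
            "Observe the error page 'ORA1090 Exception: Insert values Error...'",
        ],
    }

    response = requests.post(
        TRACKER_URL,
        json=bug_report,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    print("Bug filed with ID:", response.json()["id"])  # ID assigned by the tool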

Related: If you need more information about writing a good bug
report, read our previous post "How to write a good bug report".

Types of risks in software projects:

Are you developing a test plan or test strategy for your project?
Have you addressed all the risks properly in your test plan or test
strategy?

As testing is the last part of the project, it is always under
pressure and time constraints. To save time and money, you should be
able to prioritize your testing work. How do you prioritize testing
work? You need to judge which testing work is more important and
which is less. How do you decide? This is where risk-based testing
comes in.

What is Risk?
"A risk is a future uncertain event with a probability of occurrence
and a potential for loss."
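
That definition suggests a simple way to prioritize testing work:
risk exposure = probability of failure × loss if it fails, and the
highest-exposure items are tested first. A minimal sketch, with
features and numbers invented purely for illustration:

    # Risk exposure = probability of failure x cost of that failure.
    # Features and numbers are invented purely for illustration.
    risks = [
        # (feature, probability of failure, loss if it fails, in cost units)
        ("payment processing", 0.2, 100_000),
        ("report export",      0.5,   5_000),
        ("user profile page",  0.1,   1_000),
    ]

    # Test the highest-exposure features first.
    for feature, probability, loss in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{feature:20s} exposure = {probability * loss:>10,.0f}")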

Risk identification and management are main concerns in every
software project. Effective analysis of software risks helps in
effective planning and assignment of work.

In this article I will cover the types of risks. In subsequent
articles I will focus on risk identification, risk management, and
mitigation.

Risks are identified, classified, and managed before the actual
execution of the program. These risks are classified into different
categories.

Categories of risks:

Schedule Risk:
The project schedule slips when project tasks and schedule-release
risks are not addressed properly.
Schedule risks mainly affect the project and ultimately the company's
finances, and may lead to project failure.
Schedules often slip for the following reasons:

* Wrong time estimation
* Resources not tracked properly (all resources: staff, systems,
individual skills, etc.)
* Failure to identify complex functionalities and the time required
to develop them
* Unexpected expansion of project scope

Budget Risk:

* Wrong budget estimation
* Cost overruns
* Project scope expansion

Operational Risks:
Risks of loss due to improper process implementation, failed systems,
or external events.
Causes of operational risks:

* Failure to address priority conflicts
* Failure to resolve responsibilities
* Insufficient resources
* No proper subject training
* No resource planning
* No communication within the team

Technical Risks:
Technical risks generally lead to failures of functionality and
performance.
Causes of technical risks:

* Continuously changing requirements
* No advanced technology available, or the existing technology is in
its initial stages
* The product is complex to implement
* Difficult integration of project modules

Programmatic Risks:
These are external risks beyond the operational limits; they are
uncertain and outside the control of the program.
Such external events can include:

* Running out of funds
* Market developments
* Changes in customer product strategy and priorities
* Changes in government rules

These are the common categories into which software project risks can
be classified. I will cover how to identify and manage risks in
detail in the next article.
