Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure
that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
15.What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user
is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions
should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to
determine if things happen when they shouldn't or things don't happen when they should.
16.What are some recent major computer system failures caused by software bugs?
· In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000
erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
· A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been
used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another
country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the
problem, and shared the information, that U.S. officials became aware of the problems.
· According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems
with a large retirement plan management system. According to the reports, the client claimed that system deliveries were
late, the software had excessive defects, and it caused other systems to crash.
· In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The
company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the
trains were started by altering the control system's date settings.
· News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor
had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and
didn't work.
· In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district
with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class
registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for
at least a year until the bugs were worked out of the new system by the software vendors.
· In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a
simple data conversion error. It was determined that spacecraft software used certain data in English units that should
have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar
Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to
determine the process failures that allowed the error to go undetected.
· Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a
period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures
exchange, which was shut down for most of a week as a result of the outages.
· In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned
accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a
complete military and industry review of U.S. space launch programs, including software integration and testing
processes. Congressional oversight hearings were requested.
· A small town in Illinois received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700
times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local
power company to deal with Y2K software issues.
· In early 1999 a major computer game company recalled all copies of a popular new product due to software problems.
The company made a public apology for releasing a product before it was ready.
· The computer system of a major online U.S. stock trading service failed during trading hours several times over a period
of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software
upgrade intended to speed online trade confirmations.
· In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit
card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause
was eventually traced to a software bug.
· January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no
charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called
up with questions about their bills.
· In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer
billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more
than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by
government agencies.
· A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card
company) due to the software's inability to handle credit cards with year 2000 expiration dates.
· In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web
site after less than two days of operation due to software problems. The new site allowed web site visitors instant access,
for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each others' reports
instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly
high demand from consumers and faulty software that routed the files to the wrong computers."
· In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of
the U.S. RBOCs to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using
their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had
been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It
had nothing to do with the integrity of the software. It was human error.'
· On June 4 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching,
resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling of a
floating-point error in a conversion from a 64-bit floating-point number to a 16-bit signed integer.
· Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32
each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such
error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
· Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news
reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking
up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what
he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering
software code was rewritten.
17.Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician
to a great lord. The physician was asked which of his family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets
out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and
neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown
outside our home."
18.Why does software have bugs?
·miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's
requirements).
·software complexity - the complexity of current software applications can be difficult to comprehend for anyone without
experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data
communications, enormous relational databases, and sheer size of applications have all contributed to the exponential
growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a
project unless it is well-engineered.
·programming errors - programmers, like anyone else, can make mistakes.
· changing requirements - the customer may not understand the effects of changes, or may understand and request them
anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be
redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major
changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the
complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some
fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management
must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to
keep the inevitable bugs from running out of control.
· time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines
loom and the crunch comes, mistakes will be made.
· egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take until I take a close look at it'
'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
· poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is
bugs. In many organizations management provides no incentive for programmers to document their code or write clear,
understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job
security if nobody else can understand it ('if it was hard to write, it should be hard to read').
·software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or
are poorly documented, resulting in added bugs.
19.How can new Software QA processes be introduced in an existing organization?
·A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of
lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
·Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time
process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
·For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and
projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications
among customers, managers, developers, and testers.
·In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete,
testable requirement specifications or expectations.
20.What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual
testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and
Validation.
Manual Testing Q & A Part 3
21.What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
22.What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a
recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan,
and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of
meeting by reading through the document; most problems will be found during this preparation. The result of the inspection
meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the
most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother'
in the parable above. Their skill may have low visibility, but they are extremely valuable to any software development
organization, since bug prevention is far more cost-effective than bug detection.
23.What kinds of testing should be considered?
·Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and
functionality.
·White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of
code statements, branches, paths, conditions.
·unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the
programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always
easily done unless the application has a well-designed architecture with tight code; may require developing test driver
modules or test harnesses.
·incremental integration testing - continuous testing of an application as new functionality is added; requires that various
aspects of an application's functionality be independent enough to work separately before all parts of the program are
completed, or that test drivers be developed as needed; done by programmers or by testers.
·integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts'
can be code modules, individual applications, client and server applications on a network, etc. This type of testing is
especially relevant to client/server and distributed systems.
·functional testing - black-box type testing geared to functional requirements of an application; this type of testing should
be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it
(which of course applies to any stage of testing.)
·system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of
a system.
·end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application
environment in a situation that mimics real-world use, such as interacting with a database, using network communications,
or interacting with other hardware, applications, or systems if appropriate.
·sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it
for a major testing effort. For example, if the new software is crashing systems every 5 minutes,
bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to
warrant further testing in its current state.
·regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine
how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be
especially useful for this type of testing.
·acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-
users/customers over some limited period of time.
·load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine
at what point the system's response time degrades or fails.
·stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as
system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large
numerical values, large complex queries to a database system, etc.
·performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and
any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
·usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or
customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers
and testers are usually not appropriate as usability testers.
·install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
·recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
·security testing - testing how well the system protects against unauthorized internal or external access, willful damage,
etc; may require sophisticated testing techniques.
·compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc.
environment.
·exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test
cases; testers may be learning the software as they test it.
·ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of
the software before testing it.
·user acceptance testing - determining if software is satisfactory to an end-user or customer.
·comparison testing - comparing software weaknesses and strengths to competing products.
·alpha testing - testing of an application when development is nearing completion; minor design changes may still be
made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
·beta testing - testing when development and testing are essentially completed and final bugs and problems need to be
found before final release. Typically done by end-users or others, not by programmers or testers.
·mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various
code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.
24.What are 5 common problems in the software development process?
·poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
·unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
·inadequate testing - no one will know whether or not the program is any good until the customer complains or systems
crash.
·featuritis - requests to pile on new features after development is underway; extremely common.
·miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are
guaranteed.
25.What are 5 common solutions to software development problems?
·solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all
players. Use prototypes to help nail down requirements.
·realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation;
personnel should be able to complete the project without burning out.
·adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing.
·stick to initial requirements as much as possible - be prepared to defend against changes and additions once
development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately
reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can
see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes
later on.
·communication - require walkthroughs and inspections when appropriate; make extensive use of group communication
tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure
that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use
prototypes early on so that customers' expectations are clarified.
26.What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations,
and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their
overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might
include end-users, customer acceptance testers, customer contract officers, customer management, the development
organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders,
magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might
define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
27.What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding
'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is
too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be
kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks',
code analysis tools, etc. can be used to check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a
particular situation:
·minimize or eliminate use of global variables.
·use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many
characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in
naming conventions.
·use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as
necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming
conventions.
·function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
·function descriptions should be clearly spelled out in comments preceding a function's code.
·organize code for readability.
·use whitespace generously - vertically and horizontally
·each line of code should contain 70 characters max.
·one code statement per line.
·coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.)
·in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there
should be at least as many lines of comments (including header blocks) as lines of code.
·no matter how small, an application should include documentation of the overall program function and flow (even a few
paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.
·make extensive use of error handling procedures and status and error logging.
·for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies
(relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator
overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.)
· for C++, keep class methods small, less than 50 lines of code per method is preferable.
· for C++, make liberal use of exception handlers
28.What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is
indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust
with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design
is indicated by an application whose functionality can be traced back to customer and end-user requirements. For
programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge
and may not read a user manual or even the on-line help; some common rules-of-thumb include:
· the program should act in a way that least surprises the user
· it should always be evident to the user what can be done next and how to exit
· the program shouldn't let the users do something stupid without warning them.
29.What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
·SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help
improve software development processes.
·CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that
determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense
Department contractors. However, many of the QA processes involved are appropriate to any organization, and if
reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified
auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete
projects. Few if any processes in place; successes may not be repeatable.
44. When do you feel you need to modify the logical name?
a) Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too
long.
45. When is it appropriate to change the physical description?
a) Changing the physical description is necessary when the property value of an object changes.
46. How does WinRunner handle varying window labels?
a) We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to
use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.
i.The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a
window’s label description.
ii.The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of
windows and for the object class object.
47. What is the purpose of regexp_label property and regexp_MSW_class property?
a) The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into
a window’s label description.
b) The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types
of windows and for the object class object.
48. How do you suppress a regular expression?
a) We can suppress the regular expression of a window by replacing the regexp_label property with the label property.
49. How do you copy and move objects between different GUI map files?
a) We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed
are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects.
To select all objects in a GUI map file, choose Edit > Select All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.
50. How do you select multiple objects when merging GUI map files?
a) Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit >
Select All.
Win Runner Q & A Part 6
51. How do you clear a GUI map file?
a) We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.
52. How do you filter the objects in the GUI map?
a) The GUI Map Editor has a Filter option, which provides three types of filters:
i. Logical name - displays only objects with the specified logical name.
ii. Physical description - displays only objects matching the specified physical description. Use any substring belonging to
the physical description.
iii. Class - displays only objects of the specified class, such as all the push buttons.
53. How do you configure the GUI map?
a) When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the
minimum number of properties to provide a unique identification of the object.
b) Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard
classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records
an operation on a custom object, it generates obj_mouse_ statements in the test script.
c) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure
the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the
configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration
permanent, you must add configuration statements to your startup test script.
54. What is the purpose of GUI map configuration?
a) GUI Map configuration is used to map a custom object to a standard object.
55. How do you make the configuration and mappings permanent?
a) The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and
the configuration permanent, you must add configuration statements to your startup test script.
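As a sketch of such a startup script (the custom class name “Borbtn” and its property lists are invented for illustration; set_class_map and set_record_attr are the standard TSL configuration functions):

```tsl
# Map the custom "Borbtn" class to the standard push_button class,
# then declare which properties identify it (obligatory, optional, selector).
set_class_map ("Borbtn", "push_button");
set_record_attr ("Borbtn", "class label", "MSW_id", "location");
```

Placing these statements in the startup test makes the mapping and configuration take effect in every WinRunner session.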
56. What is the purpose of GUI spy?
a) Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to
an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all
the properties of an object, or only the selected set of properties that WinRunner learns.
57. What is the purpose of obligatory and optional properties of the objects?
a) For each class, WinRunner learns a set of default properties. Each default property is classified as “obligatory” or
“optional”.
i. An obligatory property is always learned (if it exists).
ii. An optional property is used only if the obligatory properties do not provide unique identification of an object. These
optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are
necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to
the description until it obtains unique identification for the object.
58. When are optional properties learned?
a) An optional property is used only if the obligatory properties do not provide unique identification of an object.
59. What is the purpose of location indicator and index indicator in GUI map configuration?
a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to
differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to
differentiate among objects with the same description.
ii. An index selector uses a unique number to identify the object in a window.
1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this
selector if the location of objects with the same description may change within a window.
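For illustration, here is how two otherwise identical objects might appear in the GUI map once a location selector is added (the logical names and button label are invented):

```tsl
# Two "Delete" buttons with the same description, told apart by
# their spatial order in the window (top-left-most match first).
Delete_0: { class: push_button, label: "Delete", location: 0 }
Delete_1: { class: push_button, label: "Delete", location: 1 }
```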
60. How do you handle custom objects?
a) A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns
such objects under the generic “object” class. WinRunner records operations on custom objects using obj_mouse_
statements.
b) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure
the properties WinRunner uses to identify a custom object during Context Sensitive testing.
Win Runner Q & A Part 7
61. What is the name of the custom class in WinRunner, and what methods does it apply to custom
objects?
a) WinRunner learns custom class objects under the generic “object” class. WinRunner records
operations on custom objects using obj_ statements.
62. When both the obligatory and optional properties cannot uniquely identify an object, what
method does WinRunner apply?
a) In cases where the obligatory and optional properties do not uniquely identify an object,
WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.
63. What is the purpose of the different record methods: 1) Record 2) Pass Up 3) As Object 4) Ignore?
a) Record instructs WinRunner to record all operations performed on a GUI object. This is the default
record method for all classes. (The only exception is the static class (static text), for which the default
is Pass Up.)
b) Pass Up instructs WinRunner to record an operation performed on this class as an operation
performed on the element containing the object. Usually this element is a window, and the operation
is recorded as win_mouse_click.
c) As Object instructs WinRunner to record all operations performed on a GUI object as though its
class were “object” class.
d) Ignore instructs WinRunner to disregard all operations performed on the class.
64. How do you find out which is the start up file in WinRunner?
a) The test script name in the Startup Test box in the Environment tab in the General Options dialog
box is the start up file in WinRunner.
65. What are virtual objects and how do you learn them?
a) Applications may contain bitmaps that look and behave like GUI objects. WinRunner records
operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual
object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you
record and run tests.
b) Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the
coordinates of that object, and assign it a logical name.
To define a virtual object using the Virtual Object wizard:
i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
ii. In the Class list, select a class for the new virtual object. If you select the list class, specify the number of visible
rows displayed in the window. For a table class, select the number of visible rows and columns. Click Next.
iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use
the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or
click the right mouse button to display the virtual object’s coordinates in the wizard. If the object
marked is visible on the screen, you can click the Highlight button to view it. Click Next.
iv. Assign a logical name to the virtual object. This is the name that appears in the test script when you
record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests
using this text for the logical name. Otherwise, WinRunner suggests virtual_object,
virtual_push_button, virtual_list, etc.
v. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there
are no other objects in the GUI map with the same name before confirming your choice. Click Next.
66. How did you create your test scripts: 1) by recording or 2) by programming?
a) Programming. I have done complete programming only, absolutely no recording.
67. What are the two modes of recording?
a) There are 2 modes of recording in WinRunner
i. Context Sensitive recording records the operations you perform on your application by identifying
Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates
traveled by the mouse pointer across the screen.
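The difference shows up directly in the recorded TSL. A rough sketch (the window, field, and input values are invented):

```tsl
# Context Sensitive mode: operations recorded against named GUI objects.
set_window ("Login", 5);
edit_set ("Name:", "tom");
button_press ("OK");

# Analog mode: raw mouse tracks and keystrokes tied to screen coordinates.
move_locator_track (1);
mtype ("<kReturn>");
```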
68. What is a checkpoint and what are different types of checkpoints?
a) Checkpoints allow you to compare the current behavior of the application being tested to its
behavior in an earlier version.
You can add four types of checkpoints to your test scripts:
i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is
enabled or see which item is selected in a list.
ii. Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to
an image captured in an earlier version.
iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
iv. Database checkpoints check the contents and the number of rows and columns of a result set,
which is based on a query you create on your database.
69. What are data driven tests?
Win Runner Q & A Part 8
71. What is parameterizing?
a) In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is
called parameterizing your test. The data is stored in a data table.
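A parameterized test typically iterates over the data table with the ddt_* functions. A minimal sketch (the table name and the “Name” column are assumptions):

```tsl
# Drive the same steps once per row of the data table.
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                       # make row i current
    edit_set ("Name:", ddt_val (table, "Name"));  # read the "Name" column
    button_press ("OK");
}
ddt_close (table);
```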
72. How do you maintain the document information of the test scripts?
a) Before creating a test, you can document information about the test in the General and Description tabs of the Test
Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of
the test, and a reference to the relevant functional specifications document.
73. What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?
a) You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled
or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog
box to add one of the following functions to the test script:
i. button_check_info
ii. scroll_check_info
iii. edit_check_info
iv. static_check_info
v. list_check_info
vi. win_check_info
vii. obj_check_info
Syntax: button_check_info (button, property, property_value );
edit_check_info ( edit, property, property_value );
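For example (the object names “OK” and “Name:” are illustrative), to check that a button is enabled and that an edit field holds a given value:

```tsl
# Check a single property of each object against an expected value.
button_check_info ("OK", "enabled", 1);
edit_check_info ("Name:", "value", "tom");
```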
74. What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?
a) You can create a GUI checkpoint to check a single object in the application being tested. You can either check the
object with its default properties or you can specify which properties to check.
b) Creating a GUI Checkpoint using the Default Checks
i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For
example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is
enabled.
ii. To create a GUI checkpoint using default checks:
1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the
User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to
avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in
Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a
help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s
expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an
obj_check_gui statement.
Syntax: obj_check_gui ( object, checklist, expected_results_file, time );
c) Creating a GUI Checkpoint by Specifying which Properties to Check
d) You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push
button, you can choose to verify that it is in focus, instead of enabled.
e) To create a GUI checkpoint by specifying which properties to check:
i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the
User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to
avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in
Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a
help window opens on the screen.
ii. Double-click the object or window. The Check GUI dialog box opens.
iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
iv. Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click
the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next,
either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots)
appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to
specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for
certain properties of edit and static text objects. You also specify arguments for checks on certain properties of
nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s
expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an
obj_check_gui or a win_check_gui statement.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );
75. What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?
a) To create a GUI checkpoint for two or more objects:
i. Choose Create > GUI Checkpoint > For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the
User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to
avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.
ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the
objects in the window.
iv. The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you
want to check.
v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The
Create GUI Checkpoint dialog box reopens.
vi. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which
objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The
default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click
the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next,
either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the
Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments
if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of
edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current
property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is
inserted in the test script.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );
76. What information is contained in the checklist file and in which file expected results are stored?
a) The checklist file contains information about the objects and the properties of the object we are verifying.
b) The gui*.chk file, stored in the exp folder, contains the expected results.
77. What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?
a) You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you
indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of
the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently
displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner
captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual,
and difference), you can identify the nature of the discrepancy.
b) When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a
screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap
statement.
c) Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the
CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording
extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to
check a bitmap.
d) To capture a window or object as a bitmap:
i. Choose Create > Bitmap Checkpoint > For Object/Window or click the Bitmap Checkpoint for Object/Window button on
the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW
softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.
ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or
obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax:
win_check_bitmap ( window, bitmap, time );
iii. For an object bitmap, the syntax is:
obj_check_bitmap ( object, bitmap, time );
iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting
statement might be:
win_check_bitmap ("Flight Reservation", "Img2", 1);
v. However, if you click the Date of Flight box in the same window, the statement might be:
obj_check_bitmap ("Date of Flight:", "Img1", 1);
Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );
78. What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?
a) You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size:
it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its
upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area
intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative
to the entire screen (the root window).
b) To capture an area of the screen as a bitmap:
i. Choose Create > Bitmap Checkpoint > For Screen Area or click the Bitmap Checkpoint for Screen Area button.
Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The
WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.
ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the
area; then release the mouse button.
iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a
win_check_bitmap statement in your script.
iv. The win_check_bitmap statement for an area of the screen has the following syntax:
win_check_bitmap ( window, bitmap, time, x, y, width, height );
79. What do you verify with the database checkpoint default and what command it generates, explain syntax?
a) By adding runtime database record checkpoints you can compare the information in your application during a test run
with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can
check the contents of databases in different versions of your application.
b) When you create database checkpoints, you define a query on your database, and your database checkpoint checks
the values contained in the result set. The result set is a set of values retrieved from the results of the query.
c) You can create runtime database record checkpoints in order to compare the values displayed in your application
during the test run with the corresponding values in the database.
d) If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define
a successful runtime database record checkpoint as one where one or more matching records were found, exactly one
matching record was found, or no matching records were found.
e) You can create standard database checkpoints to compare the current values of the properties of the result set during
the test run to the expected values captured during recording or otherwise set before the test run. If the expected results
and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the
expected results can be established before the test run.
Syntax: db_check(<checklist_file>, <expected_results_file>);
f) You can add a runtime database record checkpoint to your test in order to compare information that appears in your
application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime
database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard
inserts the appropriate db_record_check statement into your script.
Syntax:
db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );
ChecklistFileName A file created by WinRunner and saved in the test's checklist folder. The file contains information about
the data to be captured during the test run and its corresponding field in the database. The file is created based on the
information entered in the Runtime Record Verification wizard.
SuccessConditions Contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber An out parameter returning the number of records in the database.
80. How do you handle dynamically changing area of the window in the bitmap checkpoints?
a) The “Difference between bitmaps” option on the Run tab of the General Options dialog box defines the minimum
number of pixels that constitutes a bitmap mismatch.
Win Runner Q & A Part 9
81. What do you verify with the database check point custom and what command it generates, explain syntax?
a) When you create a custom check on a database, you create a standard database checkpoint in which you can specify
which properties to check on a result set.
b) You can create a custom check on a database in order to:
i. check the contents of part or the entire result set
ii. edit the expected results of the contents of the result set
iii. count the rows in the result set
iv. count the columns in the result set
c) You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.
82. What do you verify with the sync point for object/window property and what command it generates, explain syntax?
a) Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting
a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before
continuing the test.
b) You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For
example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you
may want WinRunner to wait for an object to appear in order to perform an operation on that object.
c) You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create
a window synchronization point. These functions have the following syntax:
Syntax:
obj_exists ( object [, time ] );
win_exists ( window [, time ] );
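For instance (the window and button names are invented), waiting up to 10 seconds for a window before operating in it:

```tsl
# Suspend the run until the window appears, then act on it.
if (win_exists ("Flight Reservation", 10) == E_OK)
{
    set_window ("Flight Reservation", 1);
    button_press ("Insert Order");
}
else
    report_msg ("Flight Reservation window did not appear.");
```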
83. What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?
a) You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the
application being tested.
b) During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the
current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );
84. What do you verify with the sync point for screen area and what command it generates, explain syntax?
a) For screen-area verification, WinRunner captures the specified screen area as a bitmap and, during execution,
compares the application's screen area against that bitmap file.
Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);
85. How do you edit checklist file and when do you need to edit the checklist file?
a) WinRunner has an edit checklist file option under the Create menu. Select “Edit GUI Checklist” to modify a GUI
checklist file and “Edit Database Checklist” to edit a database checklist file. This brings up a dialog box that gives you the
option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is
test-specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects.
86. How do you edit the expected value of an object?
a) We can modify the expected value of the object by executing the script in the Update mode. We can also manually edit
the gui*.chk file under the exp folder, which contains the expected values.
87. How do you modify the expected results of a GUI checkpoint?
a) We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in the Update
mode.
88. How do you handle ActiveX and Visual basic objects?
a) WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins;
they provide a set of functions for working with ActiveX and VB objects.
89. How do you create ODBC query?
a) We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that
uses an ODBC DSN to connect to the database. The SQL file contains the connection string and the SQL statement.
90. How do you record a data driven test?
a) We can create a data-driven testing using data from a flat file, data table or a database.
i. Using a flat file: we store the data in the required format in the file, access the file using the file-manipulation
commands, read data from it, and assign the data to variables.
ii. Data table: this is an Excel file. We can store test data in these files and manipulate them using the ‘ddt_*’ functions.
iii. Database: we store test data in a database and access it using ‘db_*’ functions.
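The flat-file variant from item (i) can be sketched as follows (the file path and one-value-per-line layout are assumptions):

```tsl
# Read one value per line from a text file and feed it to the AUT.
file = "c:\\data\\names.txt";
if (file_open (file, FO_MODE_READ) != E_OK)
    pause ("Cannot open data file.");
while (file_getline (file, line) == E_OK)
{
    edit_set ("Name:", line);   # each line supplies one input value
    button_press ("OK");
}
file_close (file);
```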
Win Runner Q & A Part 10
91. How do you convert a database file to a text file?
a) You can use Data Junction to create a conversion file which converts a database to a target text file.
92. How do you parameterize database check points?
a) When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL
statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which
the SQL statement defining your query changes.
93. How do you create parameterized SQL commands?
a) A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the
value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a
query on the database in the sample Flight Reservation application:
expression is evaluated. If expression is true, statement1 is executed. If expression is false, statement2 is executed.
b) A switch statement enables WinRunner to make a decision based on an expression that can have more than two
values.
It has the following syntax:
switch (expression )
{
case case_1: statements
case case_2: statements
case case_n: statements
default: statement(s)
}
The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If
no case is equal to the expression, then the default statements are executed. The default statements are optional.
114. Write and explain switch command?
a) A switch statement enables WinRunner to make a decision based on an expression that can have more than two
values.
It has the following syntax:
switch (expression )
{
case case_1: statements
case case_2: statements
case case_n: statements
default: statement(s)
}
b) The switch statement consecutively evaluates each case expression until one is found that equals the initial
expression. If no case is equal to the expression, then the default statements are executed. The default statements are
optional.
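A short sketch (the variable and messages are invented; as in C, break is needed to prevent fall-through to the next case):

```tsl
# Branch on the number of matching records.
switch (record_count)
{
    case 0:
        report_msg ("no matches");
        break;
    case 1:
        report_msg ("exactly one match");
        break;
    default:
        report_msg ("multiple matches");
}
```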
115. How do you write messages to the report?
a) To write a message to the test report, we use the report_msg statement.
Syntax: report_msg (message);
116. What is the command to invoke an application?
a) invoke_application is the function used to invoke an application.
Syntax: invoke_application(file, command_option, working_dir, SHOW);
117. What is the purpose of tl_step command?
a) The tl_step function is used to determine whether sections of a test pass or fail.
Syntax: tl_step(step_name, status, description);
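For example (assuming the common tl_step convention that status 0 reports a pass and any other value a failure; the window name is invented):

```tsl
# Report a named step as passed or failed in the test results.
if (win_exists ("Main", 5) == E_OK)
    tl_step ("open_main", 0, "Main window opened.");
else
    tl_step ("open_main", 1, "Main window did not open.");
```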
118. Which TSL function you will use to compare two files?
a) We can compare 2 files in WinRunner using the file_compare function.
Syntax: file_compare (file1, file2 [, save file]);
119. What is the use of function generator?
a) The Function Generator provides a quick, error-free way to program scripts. You can:
i. Add Context Sensitive functions that perform operations on a GUI object or get information from the application being
tested.
ii. Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or
sending user-defined messages to a report.
iii. Add Customization functions that enable you to modify WinRunner to suit your testing environment.
120. What is the use of putting call and call_close statements in the test script?
a) You can use two types of call statements to invoke one test from another:
i. A call statement invokes a test from within another test.
ii. A call_close statement invokes a test from within a script and closes the test when the test is completed.
iii. The call statement has the following syntax:
1. call test_name ( [ parameter1, parameter2, ...parametern ] );
iv. The call_close statement has the following syntax:
1. call_close test_name ( [ parameter1, parameter2, ... parametern ] );
v. The test_name is the name of the test to invoke. The parameters are the parameters defined for the called test.
vi. The parameters are optional. However, when one test calls another, the call statement should designate a value for
each parameter defined for the called test. If no parameters are defined for the called test, the call statement must contain
an empty set of parentheses.
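For example (the test paths and parameters are invented):

```tsl
# Invoke a called test with two parameters, then one with none.
call "c:\\tests\\login" ("tom", "secret");
call_close "c:\\tests\\cleanup" ();
```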
Win Runner Q & A Part 13
121. What is the use of treturn and texit statements in the test script?
a) The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is
returned to the main batch test.
b) Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the
return value of the call statement is 0.
treturn
c) The treturn statement terminates execution of the called test and returns control to the calling test.
The syntax is:
treturn [( expression )];
d) The optional expression is the value returned to the call statement used to invoke the test.
texit
e) When tests are run interactively, the texit statement discontinues test execution. However, when tests are called from a
batch test, texit ends execution of the current test only; control is then returned to the calling batch test.
The syntax is:
texit [( expression )];
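A sketch of both statements (the test path and return values are hypothetical), assuming the usual pattern of assigning the call statement's return value:

```
# Inside the called test: stop and hand a result back to the caller.
if (rc != E_OK)
    treturn ("failed");   # control returns to the calling test

# In the calling test: capture the value returned by treturn.
result = call "c:\\tests\\check_order" (order_number);
if (result == "failed")
    texit;                # stop test execution entirely
```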
122. Where do you set up the search path for a called test?
a) The search path determines the directories that WinRunner will search for a called test.
b) To set the search path, choose Settings > General Options. The General Options dialog box opens. Click the Folders
tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in
which they are listed in the box. Note that the search paths you define remain active in future testing sessions.
123. How do you create user-defined functions, and what is the syntax?
a) A user-defined function has the following structure:
[class] function name ([mode] parameter...)
{
declarations;
statements;
}
b) The class of a function can be either static or public. A static function is available only to the test or module within which
the function was defined.
c) Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the
default mode is in. For array parameters, the default is inout. The significance of each of these parameter types is as
follows:
in: A parameter that is assigned a value from outside the function.
out: A parameter that is assigned a value from inside the function.
inout: A parameter that can be assigned a value from outside or inside the function.
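Putting the structure above together, a minimal sketch of a user-defined function (the function name and the field it reads are hypothetical):

```
# A public function with an in parameter and an out parameter.
public function get_field_length (in field_name, out len)
{
    auto text;                        # local variable, exists only while the function runs
    edit_get_text(field_name, text);  # read the named edit field's contents
    len = length(text);               # length() is a standard TSL string function
    return E_OK;
}
```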
124. What do the static and public classes of a function mean?
a) The class of a function can be either static or public.
b) A static function is available only to the test or module within which the function was defined.
c) Once you execute a public function, it is available to all tests, for as long as the test containing the function remains
open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a
function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module
are available for the duration of the testing session.
d) If no class is explicitly declared, the function is assigned the default class, public.
125. What do the in, out and inout parameter modes mean?
a) in: A parameter that is assigned a value from outside the function.
b) out: A parameter that is assigned a value from inside the function.
c) inout: A parameter that can be assigned a value from outside or inside the function.
126. What is the purpose of return statement?
a) This statement passes control back to the calling function or test. It also returns the value of the evaluated expression
to the calling function or test. If no expression is assigned to the return statement, an empty string is returned.
Syntax: return [( expression )];
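A minimal sketch (the function name is hypothetical) showing a value passed back to the caller via return:

```
# Return the larger of two numbers to the calling test or function.
public function max2 (in a, in b)
{
    if (a > b)
        return (a);
    return (b);
}
```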
127. What does auto, static, public and extern variables means?
a) auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the
function is running. A new copy of the variable is created each time the function is called.
b) static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its
value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is
executed.
c) public: A public variable can be declared only within a test or module, and is available for all functions, tests, and
compiled modules.
d) extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.
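The four classes can be sketched as follows (the variable names are hypothetical; extern assumes the public variable was declared in another test or module):

```
# In a compiled module or test:
public app_path = "c:\\myapp";   # available to all functions, tests, and modules

function count_calls ()
{
    static n = 0;                # retains its value between calls
    auto msg;                    # recreated each time the function is called
    n++;
    msg = "called " & n & " times";  # & is the TSL string-concatenation operator
    report_msg (msg);
    return n;
}

# In another test, referencing the public variable declared elsewhere:
extern app_path;
```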
128. How do you declare constants?
a) The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public
or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it
remains in existence until you exit WinRunner.
b) The syntax of this declaration is:
[class] const name [= expression];
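For example (the names and values are hypothetical):

```
const TIMEOUT = 10;                   # public by default
static const MODULE_NAME = "orders";  # visible only within this test or module
```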
129. How do you declare arrays?
a) The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in
TSL.
b) class array_name [ ] [=init_expression]
c) The array class may be any of the classes used for variable declarations (auto, static, public, extern).
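A short sketch (array names and values are hypothetical), assuming the documented brace-list form of init_expression:

```
# No size is declared; elements may be initialized with a list of values.
public colors [] = {"red", "green", "blue"};
static totals [];      # empty until elements are assigned
totals[0] = 100;
```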
130. How do you load and unload a compile module?
a) In order to access the functions in a compiled module, you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access its functions until you quit WinRunner or unload the compiled module.
b) You can load a module either as a system module or as a user module. A system module is generally a closed module
that is “invisible” to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause
command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).
load (module_name [,1|0] [,1|0] );
The module_name is the name of an existing compiled module.
Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function
module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close
automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain
open.
(Default = 0)
c) The unload function removes a loaded module or selected functions from memory.
d) It has the following syntax:
unload ( [ module_name | test_name [ , "function_name" ] ] );
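A sketch of the load/unload cycle (the module path and the function it defines are hypothetical):

```
# Load a compiled module as a user module (0) that stays open (0).
load ("c:\\modules\\string_utils", 0, 0);

get_field_length ("Total", len);        # call a function defined in the module

unload ("c:\\modules\\string_utils");   # remove the whole module from memory
```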