
Testing code with Actel SDE

Jean Porcherot

Testing code with Actel SDE Version 1.7

Revision History
Date              | Version | Description                             | Author
07 July 2008      | 1.0     | First draft version                     | Jean Porcherot
26 September 2008 | 1.1     | Added 4.9, 1.2.1.3 and 4.14.6           | Jean Porcherot
10 October 2008   | 1.2     | Reviewed by Gary                        | Jean Porcherot
22 October 2008   | 1.3     | Update with recent mktest changes       | Jean Porcherot
18 November 2008  | 1.4     | Include feedbacks from software seminar | Jean Porcherot
15 January 2009   | 1.5     | Added 'Error Checking Method' section   | Jean Porcherot
27 January 2009   | 1.6     | Added CppUnit::Exception                | Jean Porcherot
09 April 2009     | 1.7     | Added cppunit.dll staging step          | Jean Porcherot

Contributors:
Jean Porcherot
Edward Reusser
Nabil Tewolde

Location on Livelink: http://sv-livelink-02/actel813/livelink.exe?func=ll&objid=2230203&objAction=browse&sort=name

Confidential

Actel Corporation, 2012



Table of Contents
1. Introduction
2. Quick start: How to create a new test
3. Maintaining your test
4. Tips for writing unit and integration tests
5. Using and taking advantage of your tests
6. Test Driven Development
7. Future enhancements/Roadmap



1. Introduction

The purpose of this document is to present how to easily create, run, maintain and take advantage of tests within Actel's Software Development Environment (SDE). Any test you write should be usable within Actel's SDE. This document presents how to integrate your test in the vobs and have it be executed and validated by the testing framework tools. It will guide you in writing, running and maintaining tests using Actel SDE. We will not explain in detail how tests should be written, nor what technique you should use to test a specific component (white box, black box, etc.).

1.1 About software testing

Software testing is the process of checking software to verify that it satisfies its requirements and to detect errors.

1.1.1 Software testing goals

Writing tests is necessary to validate code.

Goal number 1: Validate a new feature/piece of code you write. You write some new code; a new test will validate that code. This test may or may not be written by the same developer writing the new code. The test should cover all use cases: it validates that the new code works when it should and returns errors when it needs to.

Goal number 2: Validate a code change you make. You modify existing code (bug fix, enhancement, refactoring, etc.); the existing test will help you verify that your change does not affect previous functionality. The test may be broken by the change if you modified the behaviour of the tested component. Then, and only then, you need to fix the test to make it test and validate the new behaviour. Note that the test may also be broken because your change introduced a bug ;-) Check the root cause of the failure before updating the test itself.

1.1.2 What sort of code can be tested?

Low-level C code can be tested, as can C++ code and MFC code. In general, any piece of code from any language should be testable.

1.2 Definitions

1.2.1 A test

We will separate tests into two categories: unit tests and integration tests. In most cases, a unit test validates a module from a library. It's a low-level test. In most cases, an integration test validates a functionality of a program (this may involve many modules and libraries, already tested by unit tests). It's a higher-level test.



1.2.1.1 Unit Test

A unit test is a program that verifies that the individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming the smallest unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method. To avoid having too many different tests, we most commonly apply unit testing to a full class, or to a module that may include a few classes. In Actel's SDE, a unit test is a folder including a set of files (header and source files), exporting at least a main entry point. To run the test, the set of files is compiled to produce an executable. This executable can link with the library defining the module to be tested (it could also link only with some .o files, see 4.14.2). Then, the executable can be run and its exit code will be used for test validation. If it returns 0, the test passed; otherwise, it failed. The program will provide, through its output, information explaining why it failed.

Example: You have a 'should' module providing a function ShouldReturnTrue, supposed to always return true. If you want to write a small test program that validates this function, you can create the test folder manually and it may contain a single file, TestAdd.cpp:
#include "sys.h"
#include "should.h"

int main(int argc, char* argv[])
{
    if ( !ShouldReturnTrue() )
    {
        SysPrintf("ShouldReturnTrue does not work");
        return 1;
    }
    SysPrintf("Test succeeded!");
    return 0;
}

Example 1

1.2.1.2 Integration Test

An integration test tests a set of modules that have been combined as a group (library/program). In Actel's SDE, an integration test is a folder including a script to be executed by an existing program (libero.exe, designer.exe, etc.). This script must be in a format supported by the tested program (Tcl in most cases). It does not necessarily test a single unit; it can test the integration of several units within the application.

Example: The test can include a single file, mytest.tcl, containing:
new_design -family PA
save_design -name foo.adb
close_design

Example 2



At Actel, we used to call the level1 tests regression tests, but note that integration tests and regression tests are not the same concept. So we will call Tcl-script-based tests integration tests rather than regression tests in this document.

1.2.1.3 Regression test

A regression test is a test that exercises pre-existing feature behavior and can be used to detect any differences in behavior. This is very useful for detecting unexpected effects (a behavior regression) of a development change. Regression testing applies to both unit and integration testing.

1.2.1.4 WinRunner Test

WinRunner is functional testing software for IT applications. It captures, verifies and replays user interactions automatically, so you can identify defects and determine whether a graphical user interface works as designed. WinRunner tests are currently not part of the testing framework and won't be covered in this document.

1.2.2 Test validation

When a (unit or integration) test is run, we need an easy way to know if the test passed or failed. For unit tests, validation is fully based on the exit code. If the test program returns 0, the test succeeded. Any test-specific validation (checking that some files have been generated, testing some outputs, comparing dumped and golden files) can be performed by the program itself, which guarantees that the exit code is the only value to be checked for validation. For integration tests, we may want the system to do some more checking automatically. It's harder to check for outputs in Tcl than in C++. So an integration test may specify a method that should be used for validation. For instance, the test below could be validated only if TEST_OK is found in the log file:
new_design -family PA
save_design -name foo.adb
close_design
puts TEST_OK

Example 3

1.2.3 Test suite

A test suite is a collection of tests. For instance, the well-known level1 is an integration test suite. In most cases a test suite always tests the same program/library. A unit test suite will group together unit tests for a specific library. Each test will validate a module of this library, and then the whole test suite will validate the whole library's behavior.


An integration test suite will group together integration tests for a specific program. Each test will validate a functionality of this program, and then the whole test suite will validate the whole program's behavior. This is the general rule, but you may want to do this differently. For instance, the picasso integration test level includes two tests, both related to power, but not testing the same tool:
- smartpower_tcl: tests SmartPower commands using designer.exe
- vcd_flow: tests the VCD flow using libero.exe
Adding your test to a test suite will make it possible to run your test from a test runner and/or from build_top. This will make it easy for you and other developers to run the suite (which will include your specific test) and have any failure be reported. If the test is not in a test suite, most likely it will never (or very rarely) be run after you submitted it to the vobs. Most of us have already run level 1 tests to verify that we did not break anything when modifying some code related to Designer, but who ever searched for other Tcl scripts that should be run manually? A CPPUNIT test suite is not at the same level as the test suite described above. See 2.3.1 for more details.

1.2.4 Test runner

A test runner is a tool that will execute test suite(s) for you and report which tests failed or passed. Actel's SDE has two test runners, one for unit tests (run_tests.rb, see 5.1.1) and one for integration tests (top_regs, see 5.1.2). They both support threading and can be invoked directly from build_top with specific recipe flags (see 2.3.8 and 2.4.6).

1.3 Success story

Many programs/libraries are already using unit and integration tests as presented in this document: sgcore, sdcmds, idebase, picasso, ide.

1.4 What should I read in this 50-page document!?

1.4.1 You are about to write your first tests?

Read "1.2 Definitions" and "2 Quick start: How to create a new test" When you wrote your first test, come back and move to the next steps. 1.4.2 You already wrote a test and want to get some advices

Read "4 Tips for writing unit and integration tests" And also "3 Maintaining your test" 1.4.3 A test is broken in your c-set and you want to fix it

Read "3.2 A test is broken!?". And then refer to 2.3.9 to debug a unit test and 2.4.7 for integration tests.



2. Quick start: How to create a new test

Creating a new test should be very easy. The purpose of this section is to guide you through the creation of a new test within Actel's SDE. We will first tell you how to decide what sort of test you should create. Then we will explain how to create, run and debug a new test.

2.1 Unit or integration test?

First of all, you need to decide if you will use unit or integration testing to validate your feature/functionality.

2.1.1 Unit test

- The code you want to test is in a library
- The code does not have much dependency on other modules/libraries
- You want to validate a single module

2.1.2 Integration test

- The code you want to test is located in an executable (program)
- The program must support scripting
- You don't want to validate a single module but want to test a global functionality involving several modules
- The code you want to test has strong dependencies on other modules or tools for which you can't easily create mockups
- You are not interested in model content validation (model content is not easily accessible through scripts)

2.1.3 Examples

2.1.3.1 Component import in Project Manager

In Project Manager (Libero IDE), a component (SmartGen, IP Core) is imported through a CXF file.
- We have a unit test that validates CXF parsing; this unit test parses a CXF and checks that the model is updated correctly. As it's a unit test, it can directly access the model to validate its content. This one imports CXF files created specially to guarantee good coverage of the code.
- Then, we also have integration tests that focus on the flow and the display, for example to validate that a component can be generated in a new project, displayed in the GUI, and that simulation and synthesis pass successfully.

2.1.3.2 SmartPower Tcl support testing

We wanted to test SmartPower command support. Command management is in a library (picbase), so we could have a unit test for it. But there was no easy way to create a mockup model (power engine): the easiest way to create such a model is to open an adb. As the commands are all scriptable, we decided to write an integration test (Tcl script) to be executed by Designer rather than a unit test. Then, the only thing we need to do to create a power engine is to open an adb.

2.2 Where to create tests

Tests you create must be added to the vobs so that they can be run by other people (see 4.5). All afi tests should be added to the /vobs/afi/tst folder; nsrc tests should be located under the /vobs/nsrc/tst folder. If your test is located in the right place, you can then use tcd (by analogy with acd) to find the test folder: acd idebase goes to /vobs/afi/lib/idebase, tcd idebase goes to /vobs/afi/tst/idebase. Firstly, if you work from a snapshot view, make sure the test folder is loaded. By default, test folders are not loaded in snapshot views. Use sv_sync -p /afi/tst/<module> to load the folder. If you are adding the first test for a library, congratulations! Just create the tst folder!

2.3 New unit test

2.3.1 About CPPUNIT

CPPUNIT is a C++ unit testing framework. This toolkit will help you write unit tests. It helps you:
- Organize your test in different sub-tests
- Validate expressions
- Provide explicit information (file/line) on failure
This requires your test program to define some CPPUNIT-derived classes. CPPUNIT is not required to write tests within Actel's SDE (Example 1 shows a test that works within Actel's SDE without using CPPUNIT). But we strongly recommend using CPPUNIT as it makes tests more structured and easier to read. Advantages:
- It's easier to add assertions in the code
- No need to print where the assertion is; CPPUNIT does it for you on failure
- If one test fails (one method registered through CPPUNIT_TEST), the other ones are executed anyway, and the log may then report failures in several CPPUNIT_TESTs. One test stops when the first failing assertion is reached, but the other ones are executed anyway.
Here's an interesting presentation of CPPUNIT: http://www.slideshare.net/iurii.kiyan/cppunit-using-introduction
CPPUNIT has some objects called test suites and unit tests that are different from the ones we have in Actel's SDE.



[Diagram: an SDE unit test suite contains SDE unit tests (test1, test2, ...); each SDE unit test defines a CppUnit::TestFixture derived class whose CPPUNIT_TEST_SUITE macro registers the individual test functions.]

One SDE unit test suite lists a set of SDE unit tests (programs to be compiled and executed). One SDE unit test, if using CPPUNIT, defines a TestFixture derived class; this one is instantiated by the main entry point. The TestFixture declares a CPPUNIT_TEST_SUITE via a macro. The CPPUNIT_TEST_SUITE references a set of functions; each one is a CPPUNIT test. Running the full SDE unit test program will end up calling all those CPPUNIT test functions one by one.
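To make that structure concrete, here is a minimal sketch of such a test program using the standard CPPUNIT API (the class and function names are placeholders; mktest generates an equivalent skeleton for you, see 2.3.2):

#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

class MySuite : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE( MySuite );
    CPPUNIT_TEST( MyTestFunction );  // each CPPUNIT_TEST entry is one CPPUNIT test
    CPPUNIT_TEST_SUITE_END();
public:
    void setUp()    {}               // runs before each test function
    void tearDown() {}               // runs after each test function
    void MyTestFunction()
    {
        CPPUNIT_ASSERT( 1 + 1 == 2 );
    }
};
CPPUNIT_TEST_SUITE_REGISTRATION( MySuite );

int main(int argc, char* argv[])
{
    CppUnit::TextUi::TestRunner runner;
    runner.addTest( CppUnit::TestFactoryRegistry::getRegistry().makeTest() );
    // The exit code is what the SDE uses for validation (see 1.2.2).
    return runner.run() ? 0 : 1;
}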

2.3.2 Create a new unit test using Visual Studio 2005

There is a one-click tool that will create a new unit test for you. mktest is a utility that generates the skeleton C++ source used for unit testing. mktest will:
- Add a new folder in the vobs (under /afi/tst/<module>, where <module> is the name of the library containing the code you'll test)
- Add some common compilation files in it (linkfile.txt, keyinfo.txt, etc.)
- Add some source and header files
- Define a main entry point
Then, you can start writing your test code (validating expected behaviours). We recommend that you use the CPPUNIT library for this purpose.

2.3.2.1 Installing mktest

Refer to this document to install mktest on your machine: http://sv-livelink-02/actel813/livelink.exe?func=ll&objId=2230203&objAction=browse&sort=name&viewType=1
If ruby is already installed on your machine, getting mktest to work should be very easy.

2.3.2.2 Setting your view

To use mktest you must be in a snapshot:
$ setview <snapshot path>
If your snapshot does not contain the /afi/tst directory, re-sync using the -p flag:
$ sv_sync -p /afi/tst/

2.3.2.3 Running mktest

One new menu item is added to the project menu in Visual Studio 2005: Add CppUnit Test. Clicking it should open the dialog below.

Figure 1

This dialog allows you to specify new or existing tests. Each test has a single test runner, whose name is used as the project label. For example, assume the above test runner name is Test1.



The path can usually be defaulted to . for new tests. This defines the test path as vobs/afi/tst/amfc, where amfc is the module. Notice of course that for snapshots, vobs has the snapshot root dynamically substituted. The groups are selectors. All tests are defined to belong to the group called all or all_tests. However, tests can also be specified as belonging to other groups for use in the test runner script. Groups can be entered as a comma-separated list. If the test already exists, the current groups will be expanded to include the new groups. You cannot delete associated groups from tests at this time, except by editing the underlying data files. The CPPUNIT test suite (MySuite) is a collection of single CPPUNIT unit tests (MyTestFunction is one of them), each of which is a separate function that is called by the test runner. So for example, one could specify a test suite as AmfcTooltip, which defines the suite as containing a list of tooltip tests. Then you enter a comma-separated list of new tests to be added to the suite. If the test suite already exists, the new tests are simply added. See 2.3.1 for more details on the CPPUNIT test structure. All files are created in the path directory, the project is created (or replaced if it already exists), and the project is loaded into the solution. Path and Module Name are pre-populated. The only thing you need to do is enter a Test Runner Name, Test Suite and New Test name. Make sure you check Add new files to ClearCase, otherwise the test will remain local to your machine and won't be added to the vobs. Once validated, this will create a new test folder, /afi/tst/<module name>/<module name>_<test name>; for the example above, it will be /afi/tst/sgcore/sgcore_mytest. mktest will place the generated files below in the test folder:
- main.cpp: entry point for the unit test
- src directory: contains the class source files
- inc directory: contains the class header file
- keyinfo.txt: contains library dependencies
- linkfile.txt: contains the module dependencies needed by mkmf to create the make file
There are three functions in the class MySuite located in the src directory:
- setUp(): define all variables common to the set of tests here
- tearDown(): last thing executed; free up any resources allocated in setUp here
- MyTestFunction(): this is where the test will be executed
The file /afi/tst/sgcore/sgcore_mytest/src/MySuite.cpp will include the implementation of the MyTestFunction test, part of the MySuite unit test:
void MySuite::MyTestFunction()
{
    CPPUNIT_FAIL( "not implemented" );
}

Example 4

You now simply need to add your testing code in this function. You can use CPPUNIT macros to validate the functionalities. Note: you can also specify a test group in Figure 1. Using a test group makes it possible to run a sub-set of tests from a test suite. See 2.3.8.

2.3.3 Example

If we extend the test from Example 1 with a ShouldReturnFalse function (supposed to always return false), you would have to write your test file manually:
#include "sys.h"
#include "should.h"

int main(int argc, char* argv[])
{
    if ( !ShouldReturnTrue() )
    {
        SysPrintf("ShouldReturnTrue() does not work");
        return 1;
    }
    if ( ShouldReturnFalse() )
    {
        SysPrintf("ShouldReturnFalse() does not work");
        return 2;
    }
    return 0;
}

Example 5

Or you can use the mktest tool and then take advantage of the CPPUNIT testing framework (and we strongly recommend that). Then, the only piece of code you need to write is:
void MySuite::MyTestFunction()
{
    CPPUNIT_ASSERT( ShouldReturnTrue() == true );
    CPPUNIT_ASSERT( ShouldReturnFalse() == false );
}

Example 6

2.3.4 Creating new tests without Visual Studio

2.3.4.1 Using mktest

mktest uses command line parameters to configure the generated source files; it can be run from a shell, without Visual Studio. For a complete list of options run:
$ mktest -h
Here is an example:
$ mktest -scc -ag <test group name> -at <test name> -af <function name> -ac <class name> <module>
$ mktest -scc -ag my_group -at my_unit_test -af add -ac test_adder sgcore
- -scc: specifies that new files must be added to the vobs
- -ag: adds the test to a test group
- -at: specifies the test name
- -af: specifies the name of the member function that will be called to run the test
- -ac: specifies the name of the class that will contain the test
- <module>: specifies the name of the module to be tested

2.3.4.2 Adding test classes and functions

mktest makes it easy to create a new test. It can also be useful for extending an existing test. Refer to the tool's help to see how to add test functions and classes to an existing unit test.

2.3.4.3 By hand

Even if we don't recommend it, you can still create your CPPUNIT test by hand, or even write a non-CPPUNIT-based test. A unit test is just a test program with a main entry point returning 0 on success and 1 on failure. We recommend prefixing the test name with the module name. Then, the generated executable, once compiled, will not collide with another executable from another module. Put the content of Example 5 in a main.cpp file, add linkfile.txt and keyinfo.txt, and make it compile. You have now created a unit test by hand. Then, you need to add the test to a test suite (mktest does it for you automatically). A unit test suite is a testinfo.txt file, located in a library folder: /vobs/afi/idebase/testinfo.txt lists all the tests from the idebase test suite. You need to update testinfo.txt by hand if you want to add your test to a test suite. testinfo.txt is a 3-column file. The first column is the test name, the second is the test location, and the third (optional) is the test group. For example, for the library "base", /afi/lib/base/testinfo.txt could be:
systest            ${SDE_SRC_ROOTDIR}/afi/tst/base/systest            sys
sys_recurse_copy   ${SDE_SRC_ROOTDIR}/afi/tst/base/sys_recurse_copy   sys
sys_recurse_rmdir  ${SDE_SRC_ROOTDIR}/afi/tst/base/sys_recurse_rmdir  sys
sys_setreadonly    ${SDE_SRC_ROOTDIR}/afi/tst/base/sys_setreadonly    sys
syscopy_test       ${SDE_SRC_ROOTDIR}/afi/tst/base/syscopy_test       sys
defget             ${SDE_SRC_ROOTDIR}/afi/tst/base/defget             def
defstringtest      ${SDE_SRC_ROOTDIR}/afi/tst/base/defstringtest      def
deftabtest         ${SDE_SRC_ROOTDIR}/afi/tst/base/deftabtest         def
deftest            ${SDE_SRC_ROOTDIR}/afi/tst/base/deftest            def
filtest            ${SDE_SRC_ROOTDIR}/afi/tst/base/filtest

Example 7

Then, the base test suite has 10 tests (all lines from the file). The sys test group from the base test suite has 5 tests. Groups are handled by mktest; see the group field in Figure 1.

2.3.5 Validating the test

The test program (main entry point) generated by mktest will return 0 on success and 1 on failure. Whatever needs to be done to validate your test must be part of the test itself. If your test validation is done by comparing dumped files to golden files, write a C++ function doing the comparison and make your test fail through a CPPUNIT assertion.
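For illustration, here is a hedged sketch of such a comparison helper (ReadFile and AssertFilesEqual are hypothetical names; only standard C++ and CPPUNIT macros are used):

#include <fstream>
#include <sstream>
#include <string>
#include <cppunit/extensions/HelperMacros.h>

// Hypothetical helper: read a whole file into a string, failing the test
// if the file cannot be opened.
static std::string ReadFile( const std::string& path )
{
    std::ifstream in( path.c_str() );
    CPPUNIT_ASSERT_MESSAGE( "cannot open " + path, in.good() );
    std::ostringstream buf;
    buf << in.rdbuf();
    return buf.str();
}

// Fail the test through a CPPUNIT assertion if the dumped file differs
// from the golden file.
static void AssertFilesEqual( const std::string& golden, const std::string& dumped )
{
    CPPUNIT_ASSERT_EQUAL_MESSAGE( "dumped file differs from golden file: " + golden,
                                  ReadFile( golden ), ReadFile( dumped ) );
}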

2.3.6 Running the test

To run your unit test, you first need to compile and stage it. A unit test is just a program and can be compiled like any other Actel program (sumatra, flashpro, ide). Once it's compiled, you need to stage your executable (use 'mk mod' to build and stage); you can then run the executable directly and it will output messages to the console. With the examples above, 'mk mod' should generate sgcore_mytest.exe in your staging area. Running it will output:

Figure 2

If the test fails, for instance if you wrote your test function as below:
void MySuite::MyTestFunction()
{
    CPPUNIT_ASSERT( ShouldReturnTrue() == true );
    CPPUNIT_ASSERT( ShouldReturnFalse() == true ); // this fails
}

Example 8

Running it will output:

Figure 3

Note that CPPUNIT tells you in which file and at which line the error occurred. No need for the SysPrintf calls and return codes you would have written in Example 5. CPPUNIT requires the cppunit_dll.dll file (cppunitd_dll.dll with M_DEBUG) to be accessible. Those files are in alien/ms/bin. You may need to stage them in case this folder is not in your path when running the unit test.

2.3.7 Adding the test to a test suite

When you asked to create the new test mytest from Visual Studio, it was automatically added to the sgcore test suite. This makes it possible to run your test through a test runner. A recommendation is that a test suite from a library should only need that library to be compiled and staged. Note: it's not the case for idebase tests ;-) That one needs @libero_hedwig.txt, as some tests call libero.exe internally to run some Tcl script on it (in a way, we have integration tests embedded in unit tests).

2.3.8 Running a unit test suite with build_top

To have build_top compile, stage, run and validate your test, just use these recipe flags:
DO_UNIT_TEST=true
UNIT_TEST_LEVELS=all@sgcore
UNIT_TEST_THREADS=3
This will run all tests from sgcore, including the new one you recently added. The system will use 3 threads and will run one unit test in each one.

Figure 4

If your test suite has groups, you can specify a group name to be run: mygroup@sgcore rather than all@sgcore. Instead of running all unit tests from sgcore, only the ones from the group mygroup will be run.

2.3.9 Debugging the test

When creating your Visual Studio solution, you can add a -i parameter to include all tests: "mksln -i sgcore" will create a sgcore solution with all the sgcore unit tests loaded in it. Then, you can use the test program like any other program; you can build it and run it from Visual Studio.



Figure 5

If mktest is integrated in Visual Studio on the machine, tests can be loaded later from Visual Studio (no need to use the -i parameter when creating the solution).

Figure 6

CPPUNIT_ASSERT macros raise exceptions. The program will not break on failure unless you ask Visual Studio to do so. Go to Debug > Exceptions. From the dialog, add a new C++ exception: "CppUnit::Exception". Check the "Thrown" column. Now, the program will break on failure.

2.4 New integration test

Scripts for integration tests can be written in any language supported by the program you want to test. At Actel, we mainly support Tcl, so we'll assume in this section that we are writing our integration tests in Tcl. If the script needs to be executed by a new program you are creating, this new program must support script parameters on its command line. Common script parameters for Actel software are:
script:<script name>
logfile:<logfile name>
script_args:<script args>
CONSOLE_MODE:show|hide
SCRIPT_MODE:query|batch|startup
Any program using PrgbaseApp as its CWinApp-based class will support those parameters automatically (it's the case for libero and flashpro). Designer supports them too. In the examples below, we will consider that your test needs to be executed by the Designer software.

2.4.1 Creating a new integration test

You simply need to create a new folder with a top-level Tcl file to be executed. Your Tcl script can execute any Tcl command (see http://www.tcl.tk/man/tcl8.4); those can help with accessing/deleting/modifying/comparing files, calling system functions, and other operations. Your Tcl script will also call some Tcl commands from your program to validate its behavior.
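For instance, plain Tcl alone already lets the script stage and check its own files (a minimal sketch; input.vhd and work_dir are placeholder names):

file mkdir work_dir                      ;# create a scratch folder
file copy -force input.vhd work_dir      ;# stage an input file
if { [file size work_dir/input.vhd] == 0 } {
    puts "Error: copied file is empty"
    exit 1
}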



2.4.2 Example

Here is a very simple integration test that could be run by Designer. This script can be located under /afi/tst/designer/mytest:
# Tcl command, remove any remaining adb from a previous run:
file delete -force foo.adb
# Starting test:
new_design -family PA
save_design -name foo.adb
close_design
puts TEST_SUCCEEDED

Example 9

2.4.3 Validating the test

Note that, for Actel tools at least, a Tcl script execution will stop on failure. This means that you don't need to write much code to validate command execution. The test from Example 9 will guarantee that new_design, save_design and close_design work. When running the test, if the last line of the output is TEST_SUCCEEDED, you can consider that the test passed.

2.4.4 Running the test

An integration test can be run either from the GUI or in console mode.

2.4.4.1 Run the test from the console

To run the test from the console, just specify the script on the command line:
$ALSDIR/bin/designer.exe script:mytest.tcl logfile:mylog.txt
Then, check mylog.txt to see whether the test passed or not. You may also pass arguments to the script (4.15.5 explains why you may want to do this):
$ALSDIR/bin/designer.exe script:mytest.tcl logfile:mylog.txt script_args:"test1"
On PC, when the script is executed, the output goes to the log file you specified but your console is frozen. You don't get the output in real time, and you have to wait for the test to complete and the log file to be written before you can get any information on the test execution status. You can specify CONSOLE_MODE:show on your command line. Then, a new console will be opened and will display all the outputs in real time (as the Designer GUI would show them in the log window). SCRIPT_MODE is used on the command line to specify how the script should be executed:
- SCRIPT_MODE:batch (default): no GUI is opened, the script is executed in batch mode
- SCRIPT_MODE:startup: the Designer GUI opens and then executes the script. You can then see the output in the log window and you don't need the CONSOLE_MODE parameter.
- SCRIPT_MODE:query: the Designer frame is created but not displayed; this is useful if you want to test its content (see 4.15.6)



2.4.4.2 Run the test from the GUI

You can also start the Designer GUI, and then ask it to run the script directly from there (this is equivalent to using SCRIPT_MODE:startup).

Figure 7

Script arguments can be specified from this dialog.

2.4.5 Adding the test to a test suite

An integration test suite is what we commonly call a regression level at Actel. Integration test suites are *.lst files; they are commonly stored in /vobs/test/reg_test/lists. A lst file is just a list of the folders where integration tests are located. For example, reglev1_ms.lst is:
/vobs/test/reg_test/testcases/lev1/AGLP030V5_FullyBonded_miro_commands
/vobs/test/reg_test/testcases/lev1/A3P030_100_VQFP_USLICC_UJTAG
/vobs/test/reg_test/testcases/lev1/RTAX1000S_top_edac1
...

Example 10

In each folder, the test runner will look for a "readme" file. This file gives information on how the test needs to be executed and validated. Let's come back to Example 9. We wanted to have a very simple script, executing 4 commands and then displaying "TEST_SUCCEEDED". This script should be executed by Designer. The test is validated by the presence of "TEST_SUCCEEDED" in the log file. Here is how to integrate Example 9 in a test suite:
- We already created the script file /vobs/afi/tst/designer/mytest/mytest.tcl
- In this folder, you will add a 'readme' file explaining how to run and validate the test:
Error Checking Method:exp
EXECUTABLE=designer
EXEC_PARAMS=script:mytest.tcl mdbtestlog:mylog.log

Example 11

- 'Error Checking Method:exp' means that your test is validated by expression checking in the program output (mylog.log). Then, in the same test folder, you will add a mytest.exp file telling which expression should be searched for. This file will contain a single line:



TEST_SUCCEEDED

Example 12

See 4.15.8 for more information on the Error Checking Method flag.
- Then, you can add the test folder to an existing lst file, or create a new one. We will add it to a new test suite: we will create the /vobs/test/reg_test/lists/reglevexample.lst file, containing:
/vobs/afi/tst/designer/mytest

Example 13

You have added your test to a suite. It can now be executed and validated by a test runner. Note that the integration test runner supports many kinds of validations (errLog, heflow, mvn_exp, exp, rio_exp, mult_exp, fus, diff_fus, etc.). Refer to the /vobs/rtools/scripts/top_regs script to see what is supported (is there any documentation on Livelink for this? If someone knows, please advise).

2.4.6 Running an integration test suite with build_top

To have build_top run and validate your test, just use these recipe flags:
DO_TEST=true
TEST_LEVELS=example
TEST_THREADS=3
This will run all tests from /vobs/test/reg_test/lists/reglevexample.lst. The system will use 3 threads and will run one integration test in each one. The output directory for integration tests can be specified in the recipe with TEST_DIR. If not specified, it will be $SDE_LOCAL_ROOTDIR/tst (if the SDE variable is set). Refer to the build_top documentation for more information on those options.

Figure 8

2.4.7 Debugging the test

To debug your test, you may simply need to repeat 2.4.4.2 from Visual Studio. If you want to run the test directly from the debugger, follow the steps below. You first need to know how the test is supposed to be launched (simply look at the 'readme' file in the test folder, if any; see 2.4.5). It will tell you which program is used and which arguments are passed to it. You then need to:
- Create a solution for the program used by the test
- Configure the project to use the right arguments for debugging
- Copy the test folder somewhere
- Run the program from this copy of the test folder

Figure 9

Figure 9 shows how to create a copy of the components test folder and how to run it.

Figure 10

Then, pressing F5 will start the test in the debugger. If the test runs sub-tests, see 4.15.5 to make it easy to debug only one sub-test. If one Tcl script runs the same action several times and one run is failing, you may have a hard time debugging. Take for instance this script:


open_design foo.adb
compile
compile
compile
compile
compile
compile

Example 14

If the fourth call to compile fails, you will need to put a breakpoint in the code executing compile to debug it, and you will have to ignore the first 3 breaks. In this case, it can be nice to add "breakpoint" Tcl support in your program. This breakpoint Tcl command will do nothing, but will just be supported so that you can put a debugger breakpoint in it. Then, you can update the script as below:
open_design foo.adb
compile
compile
compile
breakpoint
compile
compile
compile

Example 15

Then, you put a Visual Studio breakpoint in the "breakpoint" command's execute method (see Figure 11). Once it is reached, you can set the Visual Studio breakpoint in the compile function and hit F5 to reach it, as this is the call that will fail. Note that if your program is based on the Acmd system, there is a built-in "breakpoint" command available. See the Acmd documentation on Livelink: http://sv-livelink-02/actel813/livelink.exe?func=ll&objId=1886405&objAction=browse&sort=name&viewType=1
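If your program is not based on Acmd and registers its Tcl commands directly through the Tcl C API, such a no-op command is only a few lines (a sketch; where RegisterBreakpointCommand gets called depends on how your program initializes its interpreter):

#include <tcl.h>

// A no-op Tcl command whose only purpose is to give the debugger a
// place to break.
static int BreakpointCmd( ClientData, Tcl_Interp*, int, Tcl_Obj* const[] )
{
    return TCL_OK; // <-- set the Visual Studio breakpoint here
}

void RegisterBreakpointCommand( Tcl_Interp* interp )
{
    Tcl_CreateObjCommand( interp, "breakpoint", BreakpointCmd, NULL, NULL );
}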



Figure 11

2.5 Debugging a test under Linux

No real tricks here... sorry, but you'll definitely need to use gdb ;-) You just need to compile the program and execute it from gdb with the correct parameters (from the readme or arglist.txt). Tip: under Linux, when you run "designer" from your staging area, the "designer" file is actually a script that sets up your environment and then invokes the real binary, "designer_bin". In most cases, running designer_bin directly won't work, and gdb cannot be run on "designer" as it requires a binary file. To fix that, edit the "designer" file and change its last line so that it invokes gdb instead of designer_bin. Replace:
"$exedir/$exename "$@""
with:
"\gdb $exedir/$exename "$@""



3. Maintaining your test

You implemented a new feature and you wrote a new test to validate this feature. The test passes, that's perfect. Now, what's next? You must make sure that this test is maintained. Make sure that, two years from now, when you modify some code, you are still able to run the test and validate that what you did does not break the feature you implemented today. This means that you must force yourself to run this test from time to time and update it if needed, so that it continues working in the future.

3.1 Tests must be maintained across all releases

The only reason a test can start failing and you don't fix it is if the feature it tests was abandoned and is not supported by the software anymore. In that case, you can even consider removing the test for good from the vobs (or at least de-referencing it from any test suite).

3.1.1 New functionality

When adding a new functionality to a module/class, you should add the corresponding validation test code in the module's test, if any (if there's no test for this module... maybe it's a good opportunity to create one!). The developer who wrote the test originally probably tried to get good coverage of the module: he made sure there's no way to make this module misbehave without breaking the test. If you add new code but don't update the test, you add some untested code to the module and make it more open to new bugs.

3.1.2 Bug fix

When fixing a bug, there is a real chance of introducing a new problem. By following the process below, you can take advantage of a test to reduce this risk. You have a module implementing some functionality. You've been assigned a reported bug in that code module, and you're about to fix it. There is a test for this module, so you would really appreciate it if this test validated the piece of code you are about to change. If it does, then you can expect it to detect any new bug your change may introduce (or any existing behaviour you may break). How do you know if the specific piece of code you are about to modify is actually validated by this test? Maybe the test has bad coverage (see 3.1.1) and does not even go through your code. Here's a simple tip to check this:
- Before doing anything, run the test and verify that it passes
- Comment out the piece of code you are about to modify
- Run the test again
(1) If it passes, this specific piece of code is not tested. You may want to extend the test so that it covers and validates this piece of code before you change anything.
(2) If it fails, the test validates what this specific piece of code is doing; you are lucky. Don't forget to restore the old code you commented out. The fact that the test does validate this code does not guarantee that any bug you may introduce will be detected, but there is at least a chance it will be. You can now modify the code to fix the bug and run the test again to validate your change.
Use Test Driven Development: if the test passed in (1), you may ask yourself: "why does this test pass when there's actually a bug in the code?" The answer is: "because the test does not validate the functionality correctly". If the test failed in (2), you will still have to write a test for this piece of code. In both cases, it may be interesting to do some test-driven programming: you can add a new test case in the test program or script that detects the bug you are about to fix. This will make the test fail temporarily... then you can safely fix the bug; if the test starts passing again, it means you fixed the bug correctly.

3.2 A test is broken!?

At some point, a test may be broken. There may be several reasons:
- Your environment is not set correctly, so the test is not executed properly
- Someone (other than you) broke the test
- The changes you made in your change-set broke the test
The end result is that one test "seems" to be broken in your c-set. Here are some tips to get out of this situation.

3.2.1 Did I break it?

Firstly, you must determine if the changes you made in your c-set are the root cause of the failure. This is by far the most important thing to do before even trying to debug the test. Was the test broken by another developer in another c-set? Try to run the test from another c-set than the one you experienced the failure with. If it does fail from another c-set, most likely someone else broke the test. That does not mean for sure that your changes do not affect the test. One test may fail due to two different bugs (one introduced by another developer and one introduced by your c-set). As a test stops on failure, a failure means the test cases are not all fully executed, so maybe the part that would identify a bug in your code is not executed anymore. Ideally, you should wait until the other developer fixes his bug, so that you can see the test passing in your c-set, and then submit. But in most cases you won't do that and you'll submit. Then, the other developer will have to deal with the bug you introduced to make the test pass again... If it does not fail from another c-set, most likely your changes are causing the failure; then you can, and need to, debug the test. A recommendation is to always run tests after you create or update a c-set. Then, you know if the tests are supposed to pass, and, if they start failing, you know whether it's due to changes you made in your c-set.

3.2.2 What should I do then?

When you identified that "a test is broken" by your changes (ie: "you broke the test"), there may be two reasons: - You introduced a bug in the code - The test itself has a bug Confidential Actel Corporation, 2012 Page 24 of 50

Testing code with Actel SDE

Jean Porcherot

Of course the test may have a bug (reason 2). The test is a piece of code (C++ or Tcl), so it may contain bugs, it can possibly report a failure when it should report a successful run..but that's probably in less than 5% of the cases. So don't spend too much time on debugging the test code itself.focus on the code changes you did and this will most likely be the easiest way to solve the problems. In some cases, if you did not write the test and if the changes you did in your c-set are easy to undo, it may be easy to comment all your changes, check that the test passes again, and then uncomment your changes one by one to find out which one makes the test fail. By experience, I can tell you that this will probably take you less that 30mn to find what change introduced the bug against one day to debug the test if you are not familiar with it. 3.2.3 Conclusion

Our recommendation is to run tests as often as possible. You're about to change something that may affect a test: run the test before and after your change... then it's easy to identify if a change breaks the test or not. Testing should be integrated in your development process; you need to plan time for it and expect to spend some time writing or fixing tests. We should never say "I've been working on this c-set for 2 weeks now, I need to submit today and I just noticed the tests are failing... I don't have time to fix them". If testing is part of your development process and schedule, you should not end up in such a situation.



4. Tips for writing unit and integration tests

This section lists some tips for writing efficient tests that are easy to debug and maintain. It first gives tips that apply to both unit and integration tests, and then covers tips more specific to each one (4.14 and 4.15).

4.1 Negative testing

Negative testing is testing the tool/module with improper inputs (through a unit or integration test). It's a way to validate that a module/program behaves properly both within and beyond its limits. It tests the robustness of the code. The test done in Example 3 only tests the good flow. It checks that a design can be created, saved and then closed. The program should not crash if we save before we create a design, or if we forget the file name in the save_design command. Those corner cases should be tested too. Here is an example of negative testing, using a Tcl integration test:
if { [catch {save_design}] } {
    # Command failed -> that's expected
    puts "save_design failed because no design is opened, that's OK"
} else {
    puts "Error: save_design succeeded...it shouldn't."
    exit
}
new_design -family PA
if { [catch {save_design}] } {
    # Command failed -> that's expected
    puts "save_design failed due to missing arguments, that's OK"
} else {
    puts "Error: save_design succeeded...it shouldn't."
    exit
}
save_design -name foo.adb
close_design
puts TEST_OK

Example 16

If executed, this will be the output of the program (red highlights messages coming from the program being tested, blue highlights messages coming from the Tcl script itself):

Error: No design loaded
save_design failed because no design is opened, that's OK
New design created
Error in save_design command: Missing design name
save_design failed due to missing arguments, that's OK
Saved design foo.adb
Closed design
TEST_OK

When debugging a test because it does not end correctly (i.e. if it fails with a non-expected failure), you must read the log from bottom to top in order to find the latest failure, which will most likely be the significant one. For instance, if the last call to close_design does not work anymore, the output will be:

Error: No design loaded
save_design failed because no design is opened, that's OK
New design created
Error in save_design command: Missing design name
save_design failed due to missing arguments, that's OK
Saved design foo.adb
Error: Unable to close design

This is why the script fails. If reading from top to bottom, you will see the "No design loaded" error first, but this one is expected and must be ignored.

4.2 Test sizes, dependencies and runtime

Try to make tests as short as possible. You must cover as much source code as possible with a test program or script that is as small as possible. There is no need to cover and validate the same functionality 10 times in 10 different manners. This slows down testing, makes tests hard to debug, and makes the test hard to maintain when one behavior changes in the tested module. But it's also acceptable to have very long tests in some cases. There are two kinds of tests:
- Short tests that should run very quickly, because you want them to be part of the development process (like integration tests performed by 24_7, or unit tests you want to run on every change-set you submit)
- Long tests that do more sanity testing and that will probably only be run once before each code freeze to check that a functionality is not broken. For instance, Project Manager has an integration test for Precision: for each package of each die of each family, it runs Synthesis and verifies that it works fine (to detect problems with die/package mapping between Actel software and the OEM software). This test runs for more than 4 hours... but we only run it when we get a new version of Precision.

4.3 Test dependencies

Try not to have dependencies between tests. If one test runs several test cases (or sub-tests), make sure that they are all independent and that one test does not reuse data or results from the others. If one test case fails, you want to be able to comment out all the other ones temporarily so that you can easily focus on the failing one during debug. If the failing one uses results from another one, you won't be able to do this.

4.4 Follow coding guidelines

When we write code for production, we try to follow guidelines. Those guidelines also apply to the code we write for tests... it must be commented, clear, and easy to maintain and understand. This also applies to scripts: you can use Tcl procedures, variables and includes to split your script code into different files.



4.5 Tests must be executable by anyone

Firstly, any test MUST be added to the vobs... don't keep tests local to your machine. Any input file must be accessible (see 4.7) and the test must be runnable by the test runners. This means that the arguments to be used are listed either in the 'readme' file for integration tests (see 2.4.5) or in the 'arglist.txt' file for unit tests (see 5.2.1.1). Remember that if your test cannot be run easily, developers won't run it and will then most likely break it. Before you submit a new test to the vobs, make sure it works on its own and does not use any private file. From your snapshot view, check in all your files, remove (or rename, that's safer) the test folder, then re-load it from scratch and try to run the test again using the appropriate test runner... it must work.

4.6 About tests working directory

For the examples below, we consider that my SDE environment asks to isolate configurations for builds, SDE_LOCAL_ROOTDIR is "g:/user/myview" and $SDE_CONFIG is "DEBUG".

4.6.1 Unit test

If we run build_top, by default, unit test outputs (log files) will be written in g:/user/myview/dbg/utst. This folder will only contain log files. The test executable will be compiled and staged in g:/user/myview/dbg/stg/bin. It will then be launched from the unit test folder itself, i.e. for TestIcmdBase@idebase, /vobs/afi/tst/idebase/TestIcmdBase will be the current working directory during the test execution.

4.6.2 Integration test

If we run build_top, by default, integration tests will be copied to and executed from g:/user/myview/dbg/tst. Then, when running the RTAX1000S_top_edac1 test from the lev1 suite, the following folder will be created: g:/user/myview/dbg/tst/levide/RTAX1000S_top_edac1_XXXXXXXX (XXXXXXXX is a number generated by top_regs). This folder will contain a copy of the original script folder, /vobs/test/reg_test/testcases/lev1/RTAX1000S_top_edac1, and will be the current working directory from which designer.exe is launched.

4.7 Test input files

If your test needs to open or import some files (HDL files, EDIF netlists, etc.), you must make sure that those files will still be available when the test is run by a test runner.
- Your test must not use any file local to your machine (C:/...)
- If your test uses files from a folder different from the test folder itself, make sure this one is always loaded in snapshots
If you need the files to be accessible for writing (some tools will require write access to load a design/project), then you'll need to copy the files before your test starts using them. You must then consider removing the copies once the test is complete, to avoid the creation of view-private files. For unit tests, you can use SysCopyFolderRecursively. For Tcl integration tests, you can use "file copy".
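For the Tcl case, the copy-use-clean pattern can look like this (a minimal sketch; golden_project and work_project are placeholder names):

# Make a writable scratch copy of the inputs, use it, then clean up.
file delete -force work_project           ;# drop leftovers from a previous run
file copy -force golden_project work_project
# ... run the test against work_project ...
file delete -force work_project           ;# avoid leaving view-private files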

4.7.1 Unit tests

For unit tests, the files can be located in the test folder itself (at the same level as the source code for your test), or in a common folder shared by all the tests from your suite, so that files used by different tests are not duplicated, e.g. /afi/tst/<mymodule>/common_files. Then, as the working directory is the test folder itself (see 4.6.1), you can access the files using a relative path (./myfile.vhd).

4.7.2 Integration tests

For integration tests, the files should also be located in the test folder (at the same level as the script for your test), and you can then access them with a relative path. As they are part of the test folder, they will be copied by the test runner (see 4.6.2) before it executes the test, so this will work fine. If your files are located elsewhere in the vobs, use the same trick as Example 17 to access them.

4.8 Share common code/scripts

For one test suite, you may have some common code. If your test suite validates a model, you may have functions to set up mock-up data, validate model states, etc. As we do for production code, we should avoid copy/paste when writing tests (script or code). If one function/piece of code is used by several tests, let's put it in a shared file. For unit tests, you can create a shared .h file under the test folder, for instance /afi/tst/<mymodule>/shared/common_tools.h. For integration tests, you can create a shared .tcl as well: /afi/tst/<mymodule>/shared/common.tcl. Note that, as the integration test folder is copied to your test area when the test is being executed (see 4.6.2), you should not source the common.tcl file using a relative path: if /afi/tst/<mymodule>/MyTest/test.tcl does a "source ../shared/common.tcl", this will only work when you run the test manually, but won't work when it is run by the test runner top_regs. To fix that, use environment variables to access files from the vobs; MyTest/test.tcl will do:
set view_dir "$env(VIEW_ROOTDIR)"
set vobs_dir "$env(VOBS_ROOTDIR)"
source "$view_dir$vobs_dir/afi/tst/<mymodule>/shared/common.tcl"

Example 17
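As an illustration of what such a shared file might contain, here is a hypothetical helper procedure for shared/common.tcl:

# Hypothetical helper in shared/common.tcl: fail the test if a file is missing.
proc assert_file_exists { path } {
    if { ![file exists $path] } {
        puts "Error: $path is missing"
        exit 1
    }
}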

4.9 Test code protected by a query box

Consider the piece of code below:


if ( mylibASK_QUESTION_QRY( NULL, MdbNO ) == MdbYES )
{
    DoSomething();
}

Example 18

If executed from the GUI, the user will see a Yes/No popup asking a question. The default answer is "No", and if the user answers "Yes", DoSomething() will be executed. If executed from an integration test (Tcl) or a unit test, where no query box can be displayed, the default answer is picked automatically by the message system and "No" is taken as the answer. If you want to test the call to DoSomething() from a test, here's how to do it. Update the code to be:
MdbQuery_result default_res = MdbNO;
int bTestingQuery = 0;
DefGetBool( "TESTING_DO_SOMETHING", &bTestingQuery );
if ( bTestingQuery == 1 )
    default_res = MdbYES; // for testing
if ( mylibASK_QUESTION_QRY( NULL, default_res ) == MdbYES )
{
    DoSomething();
}

Example 19

Then, set the TESTING_DO_SOMETHING def variable to 1 from your unit test (using DefSet) or your integration test (using a Tcl command your program supports to change a def variable, or by setting it directly on the command line).
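On the unit test side, the set-up could then look like the sketch below (DefSet is the internal call named above, but its exact signature, and the function under test, are assumptions):

void MySuite::TestDoSomething()
{
    // Assumption: DefSet takes the def variable name and its new value;
    // the exact signature may differ in the real Def API.
    DefSet( "TESTING_DO_SOMETHING", "1" ); // make the query default to Yes
    // CallTheFunctionContainingTheQuery() is a hypothetical placeholder
    // for the production code path that shows the query box.
    CPPUNIT_ASSERT( CallTheFunctionContainingTheQuery() );
}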

When you will have to test 100 functionalities of a new module, you will have to take this decision: should you write 100 different testsor should you write a single test with 100 test cases in it (100 sub-tests, but we will have only one unit or integration test). It simply depends what's the cost of setting up the model for your tests. - If you dont need to load or create any design for your test, if the test can be performed without any big engine initialization, then, you can create a 100 different tests. - If you need a design to be loaded, models to be initialized, then repeating this test set-up 100 times will have a real runtime cost for your tests. So then, you may prefer to write a single test with 100 sub-tests working on the same model you set-up once only. We will take an integration test as an example: /vobs/test/reg_test/testcases/levpicasso/smartpower_tcl This test was written to test all the power Tcl commands from SmartPower. We support more than 50 different commands. SmartPower needs place and route to be completed, so, we must either open, either create a new design before we can execute a single command, this would probably take several seconds Confidential Actel Corporation, 2012 Page 30 of 50


If we created 50 tests, one per command, running them all would spend several minutes just setting up the design. So we decided to create a single test: this one sets up the design and then runs 50 sub-tests on it, one per command. We then only spend a few seconds setting up the design (create, compile, run place and route). To keep all sub-tests independent (see 4.3), we restore SmartPower (rather than commit) between two sub-tests. Debugging such a big test with 50 sub-tests can be a pain; 4.15.4 and 4.15.5 show how to isolate one sub-test and run only that one rather than all.

4.11 Check coverage

Once you have written the test for a module, running Pure Coverage (or any other coverage tool) with your test (unit or integration) is a good way to see whether your test covers all the code of your module.

4.12 OEM tools dependency

Your test may need some OEM tools to be installed (Synplify, ModelSim...). In that case, make sure you add in the test folder a script file that developers can run to set up their environment so that the tools can be found and run (set LM_LICENSE_FILE and possibly PATH environment variables). Note that such scripts already exist for all OEM tools integrated in Project Manager (ie: Libero IDE). They are located here:
Workstation (Linux) version: /vobs/afi/prg/ide/reg_tests/common/setenv.unix.scr
Windows version: /vobs/afi/prg/ide/reg_tests/common/setenv.pc.ksh
Those will update your PATH so that it finds the latest available version of each tool (Precision, Leonardo, Synplify, ModelSim, WFL).

4.13 Disabling a test temporarily

When one test is broken and you "don't have time to fix it", you may try to find some time to do it anyway ;-) Now, if you really can't fix it because you know your c-set breaks something and you plan to make it work again in a later iteration, you can possibly remove the test temporarily. But then you must understand that the test needs to be restored as soon as possible; this should become a very high priority, because other functionalities validated by the test may be unintentionally broken too (by you or other persons), and then you may have a very hard time enabling the test back later.

For instance, one test validates 3 functionalities of the code (from the same module, so they have been put in the same test): FuncA, FuncB and FuncC. You know your c-set breaks FuncA and plan to make it work again later, so you disable the full test. Unfortunately, your c-set also breaks FuncB, but you did not know (you expected the test to fail anyway... how would you know whether it failed because of FuncA, FuncB, or both?). Moreover, in parallel, another developer breaks FuncC; as you disabled the test, he did not notice he introduced a bug, and he submitted. One week later, you fix FuncA in a c-set and try to enable the test back... we wish you good luck figuring out why it keeps failing and fixing both FuncB and FuncC.
Conclusion:
- Having three tests (one per functionality) is recommended
- If you have one single test, see if you can disable FuncA testing and maintain FuncB and FuncC testing rather than disabling the full test.

4.14 Unit testing tips

4.14.1 Testing non-exported code

If you want to test a class, but this one is only exported through an interface: mylib/ifc/MylibClassIf.h is:
class MylibClass
{
public:
    static MylibClass& GetInstance();
    virtual void DoSomething() = 0;
};

Example 20

mylib/inc/MylibClassImpl.h is:


class mylibClassImpl : public MylibClass
{
public:
    mylibClassImpl() { m_bDidSomething = false; }
    virtual void DoSomething() { m_bDidSomething = true; }
    virtual bool SomethingWasDone() { return m_bDidSomething; }
private:
    bool m_bDidSomething;
};

Example 21

mylib/src/MylibClassImpl.cpp is:


MylibClass& MylibClass::GetInstance()
{
    static mylibClassImpl impl;
    return impl;
}

Example 22

From your test, you'd like to call DoSomething() and then verify that something was done (by calling and checking SomethingWasDone()). But SomethingWasDone() is not part of the interface and therefore can't be accessed from your executable. One solution to this problem is to extend the interface, but you don't want to do that. You can easily include MylibClassImpl.h, using #include "../inc/MylibClassImpl.h", but using mylibClassImpl still won't work, as this object is not exported by mylib (because its name starts with a lower-case letter; if it were named MylibClassImpl, it would work). So the only solution is to make your executable link directly with MylibClassImpl.o.


This will make it possible for your test to use the mylibClassImpl class. You will add a testlibinfo.txt file in your test folder. This one will contain:
MODULE = mylib
OBJECTS = MylibClassImpl
Then, your executable will link with MylibClassImpl.o, and any object from MylibClassImpl.h becomes available to your test. You can then write:

#include "../inc/MylibClassImpl.h"

void TestRunner::MyTest()
{
    mylibClassImpl* impl = (mylibClassImpl*) &( MylibClass::GetInstance() );
    CPPUNIT_ASSERT( !impl->SomethingWasDone() );
    MylibClass::GetInstance().DoSomething();
    CPPUNIT_ASSERT( impl->SomethingWasDone() );
}

Example 23

To avoid the cast, you can also make it possible to retrieve the implementation from outside: mylib/ifc/MylibClassIf.h is:
class mylibClassImpl;

class MylibClass
{
public:
    static MylibClass& GetInstance();
    static mylibClassImpl& GetInstanceImpl();
    virtual void DoSomething() = 0;
};

Example 24

mylib/src/MylibClassImpl.cpp is:


mylibClassImpl& MylibClass::GetInstanceImpl()
{
    static mylibClassImpl impl;
    return impl;
}

MylibClass& MylibClass::GetInstance()
{
    return GetInstanceImpl();
}

Example 25

This makes mylibClassImpl visible only as a forward declaration in the interface, so it does not expose the implementation for real, nor does it add any compilation dependency.


4.14.2 Minimize dependencies

If the module you are testing does not have dependencies on other modules of your library, you may skip listing the library itself in linkfile.txt and only list the .o files you need in testlibinfo.txt (see 4.14.1). Then, your test executable does not need the library to be fully compiled and up to date. It will only link with the .o files, and this can save compilation time for your test program.

4.14.3 Create mock-up objects/models/data

If the module you want to test needs some data, you may need to create mock-up data within your test program. When doing this, you should minimize the amount of code you duplicate from the library that creates the real data when modules are integrated together. For instance, when trying to test the power or timing engine from a unit test, you may need to create a gdev instance using afl/adl files. This makes it possible to initialize models without needing to load an adb for real.

4.14.4 Create mock-up listeners/views

If a model sends events/notifications to other systems, it may be interesting for you to test those. You may consider creating a mock-up event listener that will check that events are propagated correctly and that the listener is in sync with the model. This is the code you want to test and validate:


class IntView
{
public:
    virtual void AddObjectDisplay( int iObject ) = 0;
    virtual void RemoveObjectDisplay( int iObject ) = 0;
};

class IntContainer
{
public:
    void Init()
    {
        for ( int i = 0 ; i < 100 ; i++ )
        {
            m_vObjects.insert( i );
            m_pView->AddObjectDisplay( i );
        }
    }
    void Clear()
    {
        while ( !m_vObjects.empty() )
        {
            m_pView->RemoveObjectDisplay( *m_vObjects.begin() );
            m_vObjects.erase( m_vObjects.begin() );
        }
    }
    void SetView( IntView* pView ) { m_pView = pView; }
    const std::set<int>& GetObjects() const { return m_vObjects; }
private:
    std::set<int> m_vObjects;
    IntView*      m_pView;
};

Example 26

Here is what your test program could look like:


class MyView : public IntView
{
public:
    MyView( IntContainer& cont ) : m_container( cont ) {}
    virtual void AddObjectDisplay( int iObject )
    {
        CPPUNIT_ASSERT( m_vDisplayed.find( iObject ) == m_vDisplayed.end() );
        m_vDisplayed.insert( iObject );
        Validate();
    }
    virtual void RemoveObjectDisplay( int iObject )
    {
        CPPUNIT_ASSERT( m_vDisplayed.find( iObject ) != m_vDisplayed.end() );
        m_vDisplayed.erase( iObject );
        Validate();
    }
    void Validate()
    {
        CPPUNIT_ASSERT( m_vDisplayed == m_container.GetObjects() );
    }
private:
    IntContainer& m_container;
    std::set<int> m_vDisplayed;
};

void TestRunner::MyTest()
{
    IntContainer container;
    MyView view( container );
    container.SetView( &view );
    container.Init();
    view.Validate();
    container.Clear();
    view.Validate();
}

Example 27

4.14.5 Create static members to test calls

4.14.4 shows how to set up your own listener to check that events are propagated correctly. If the module you want to test notifies another module and does not let you change the listener itself, you may need the trick below to validate that the notification is done. Here is the code you want to test:


#include "MylibClassIf.h"

class IntContainer
{
public:
    void Init()
    {
        for ( int i = 0 ; i < 100 ; i++ )
        {
            m_vObjects.insert( i );
            m_pView->AddObjectDisplay( i );
            if ( i % 2 == 0 )
                MylibClass::GetInstance().EvenNumberFound( i );
        }
    }
    // ... other members as in Example 26
};

Example 28

The Init method now notifies MylibClass when an even number is loaded. We want to validate that EvenNumberFound is called when i is an even number. Unfortunately, we can't easily have IntContainer call a mock-up MylibClass instance. Instead, we can add a static member in MylibClass to track accesses to EvenNumberFound. This has low impact on code complexity, memory usage and runtime, and allows us to verify the behavior we expect. We then need to modify mylibClassImpl: mylib/inc/MylibClassImpl.h is:
class mylibClassImpl : public MylibClass
{
public:
    ...
    static std::set<int>*& GetEvenFoundDebug()
    {
        static std::set<int>* debug = NULL;
        return debug;
    }
    void EvenNumberFound( int i )
    {
        SysPrintf( "FoundEvenNumber" );
        DoSomething();
        if ( GetEvenFoundDebug() )
            GetEvenFoundDebug()->insert( i );
    }
    ...
};

Example 29

Then, we can test that mylibClassImpl has been notified as expected:


void TestRunner::MyTest()
{
    IntContainer container;
    std::set<int> my_debug;
    MylibClass::GetInstanceImpl().GetEvenFoundDebug() = &my_debug;
    CPPUNIT_ASSERT( my_debug.size() == 0 );
    container.Init();
    CPPUNIT_ASSERT( my_debug.size() == 50 );
    MylibClass::GetInstanceImpl().GetEvenFoundDebug() = NULL;
}

Example 30

4.14.6 Implement a GUI unit test

A unit test is a program linking to some libraries to be validated. The unit tests presented above are supposed to run automatically and return 0 or 1 depending on whether they pass or fail. If you want to validate that a GUI component works fine (a grid, tooltip, dialog or control base class you implemented), you can also consider having a unit test program that opens a GUI and lets you do the component validation by hand. We have this sort of test for amfc; for instance, /afi/tst/amfc/cpptooltip is a GUI program: if you compile and run it, it opens the GUI below so that you can play with the amfc tooltip class.

Figure 12

Such tests must not appear in your unit test suite, as they cannot be automatically run and validated by a test runner.


4.15 Integration testing tips

4.15.1 Negative testing: use quit rather than exit

As script execution stops on failure, you need to use the catch statement if you want to run a command that is supposed to fail.
if { [catch {save_design}] } {
    # Command failed -> that's expected
    puts "save_design failed because no design is opened, that's OK"
} else {
    puts "Error: save_design succeeded...it shouldn't."
    quit
}
new_design -family PA
if { [catch {save_design}] } {
    # Command failed -> that's expected
    puts "save_design failed due to missing arguments, that's OK"
} else {
    puts "Error: save_design succeeded...it shouldn't."
    quit
}
save_design -name foo.adb
close_design
puts TEST_OK

Example 31

On failure, we prefer to do a quit rather than an exit. quit is not a valid Tcl command, so calling it will fail and end up exiting cleanly. exit asks the software to exit instantly, and this may not write the log file in some cases.

4.15.2 Don't catch all commands

Many old integration tests are catching every command:


if { [catch {import_source -format "edif" top_edac1_ax1000s.edn}] } {
    puts "IMPORT FAILED"
} else {
    puts "IMPORT_EDIF_SUCCESSFULL"
}

Example 32

Because script execution exits on failure, the script below is probably easier to read, will give the same output on success and will exit on failure.
import_source -format "edif" top_edac1_ax1000s.edn
puts "IMPORT_EDIF_SUCCESSFULL"

Example 33


4.15.3 Organize your code

Use Tcl procedures (proc) to structure your code and avoid having a big, unorganized test with lots of duplicated statements in it; see the sketch below.
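For instance (a minimal sketch with hypothetical names), a value check repeated across a test can be factored into a procedure:

# Hypothetical shared procedure: compare a value to the expected result
# and stop the test with a clear message on mismatch.
proc check_value { actual expected what } {
    if { $actual != $expected } {
        puts "Error: $what is $actual, expected $expected"
        quit
    }
}
check_value [smartpower_get_tetaja -style custom] 26 "Teta JA"
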
4.15.4 Organize your test in sub-tests
When you want to test several aspects of a functional unit, you can either write one integration test for each, or write a single integration test with sub-tests for each; see 4.10. Here's how you can create a test with several sub-tests (example picked up from the Picasso test suite):
set all_tests "test1 test2 test3"
set DESIGN "smartpower_tcl.adb"
new_design -name "foo" -family "ProASIC3" -path {.} -block "off"
import_source -format "edif" -edif_flavor "GENERIC" {netlist.edn}
set_device -die "A3P125" -package "132 QFN"
save_design $DESIGN
foreach the_test $all_tests {
    puts "Starting test: $the_test"
    source tests/${the_test}.tcl
    puts "${the_test}_SUCCESSFUL"
    close_design
    open_design $DESIGN
}
close_design
puts "ALL_TESTS_SUCCESSFUL"

Example 34

Then, your test folder will contain a sub-folder tests, containing test1.tcl, test2.tcl and test3.tcl, each one testing a different functionality.

4.15.5 Use script arguments

Arguments can be passed to a Tcl script. They can then be accessed in the script using $argc and $argv. Example 34 shows a test that runs three sub-tests (test1, test2 and test3). If you know one test is failing and want to run only this one, you can use script arguments to modify the $all_tests variable and then have only a single test run:
set all_tests "test1 test2 test3"
if { $argc != 0 } {
    set all_tests $argv
}
...


Example 35

Refer to 2.4.4 to see how to pass arguments to your script when running it.

4.15.6 Test GUI content

If the main frame of the program you are testing is loaded, then you can test the content of GUI components through a Tcl script. Your main frame will be created (OnCreate will be called), and all its control bars will also be created but not displayed. Some MFC functions may crash in this situation (PostMessage, SendMessage, SetWindowPos...), and you should then protect them with a test on GetSafeHwnd(), which returns NULL when the frame is created from a Tcl script (see the sketch below). This may require some minimal code change, but it then makes it possible for a script to check that:
- A view (tree, list...) contains what it should contain
- Controls are greyed out when they should be
For instance, your program can provide a Tcl command to dump a tree control's content. Your script can then ask the tree control to dump its content to a file and compare it to a golden file. To have the main frame loaded, you need to add SCRIPT_MODE:query on the command line when executing the script (see 2.4.4.1). Under Linux, this requires the DISPLAY environment variable to be set; such tests therefore cannot be added to the tests executed by 24_7 for c-set processing, as it does not set DISPLAY. You don't need that when running the script directly from the program's GUI (using File > Execute Script).
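A guard like the sketch below keeps such code safe in script mode; MyView (a CWnd-derived view) and the message ID are hypothetical:

// Sketch: GetSafeHwnd() returns NULL when the frame was created from a
// Tcl script, so we skip the window operation in that case.
void MyView::RefreshDisplay()
{
    if ( GetSafeHwnd() != NULL )
        PostMessage( WM_APP + 1 ); // hypothetical refresh message ID
}
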
4.15.7 Test model content
Just as you can add Tcl commands in your program to dump GUI control content, you can add commands to dump model content or to return a specific model value (Tcl commands do not only pass or fail, they can also return a double, integer, Boolean or string value). Then, you can test your model from the Tcl script. Example from the smartpower_tcl integration test:
smartpower_set_cooling -style {custom} -teta {26}
if { [smartpower_get_tetaja -style custom] != 26 } {
    puts "Invalid Teta JA value"
    quit
}
puts TEST_OK

Example 36


4.15.8 Error Checking Methods

4.15.8.1 More on Error Checking Methods

These can only be specified in the readme file. Each readme file must specify at least one error checking method. The error checking method determines how the results of the test are evaluated to determine success or failure. It is important to keep the concept of executing the test separate from the concept of evaluating the outcome of the test. The string is:
Error Checking Methods: <mthd1>:<mthd2>
Where <mthd1>:<mthd2>, etc., is a colon-separated list of error checking methods. A method can be specified in one of 3 ways (listed in order of highest to lowest precedence):
- the fully qualified path to a script
- the name of a predefined error checking method (these are methods hard-coded into the regression script; examples are exp, qtf, mvn_exp)
- the name of an error checking script located in /vobs/test/reg_test/scripts (an example is the new script retval, which was created for use by the actgen tests)
Examples:
Error Checking Methods: retval
Error Checking Methods: /export/home/err_check.ksh
Error Checking Methods: exp:qtf

4.15.8.2 Error Checking Method Parameters

This is where you specify parameters that are passed to the error checking script. For all non-predefined error checking methods, there is an API that is always used (i.e. these parameters are always passed to the error checking method):
-e <executable> -l <local test dir> -r <return value>
Where <executable> is the full path to the executable that was called, <local test dir> is the full path to the directory where the test was run, and <return value> is the return value that the application returned. Additional parameters specified in the readme are appended to the end of this predefined API. These parameters can only be specified in the readme file. The string is:
ERROR_CHECKING_PARAMS=<params>
Where <params> are the additional parameters to be passed to the error checking methods in addition to the above-mentioned API. Note that this does not apply to predefined error checking methods; their parameters are hard-coded into the regression testing script.
Example:
Error Checking Method: retval
ERROR_CHECKING_PARAMS=200
Results in the following call:
/vobs/test/reg_test/scripts/retval -e <executable> -l <test dir> -r <return val> 200
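As an illustration, a custom error checking script implementing this API could look like the sketch below; this is a hypothetical script, not the actual retval implementation:

#!/bin/ksh
# Hypothetical error checking script: parse the standard API parameters,
# then compare the application's return value to the expected value passed
# as an additional readme parameter.
while getopts "e:l:r:" opt; do
    case $opt in
        e) executable=$OPTARG ;;
        l) testdir=$OPTARG ;;
        r) retval=$OPTARG ;;
    esac
done
shift $((OPTIND - 1))
expected=$1
if [ "$retval" -eq "$expected" ]; then
    exit 0  # test passed
else
    exit 1  # test failed
fi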


5. Using and taking advantage of your tests

By following the steps above, you can easily create a new unit or integration test. This section provides some more advanced information about how to use your tests.

5.1 More about test runners

5.1.1 Unit test runner

The script run_tests.rb (/vobs/dtools/ruby/run_tests.rb) allows you to run and validate unit test suites. This script takes an output directory and a list of unit tests to be performed. For each unit test, it will:
- Step into the unit test folder
- Build and stage the test
- Execute the generated executable
- Verify the test's exit code
Then, it provides a status reporting how many tests passed and which ones failed. The output directory will contain the log of each test executed. Call run_tests.rb -h for more information on this tool. This test runner is the one used when specifying the build_top recipe flags:
DO_UNIT_TEST=true
UNIT_TEST_LEVELS=all@idebase,sys@base,deftest@base
This will run all the tests below:
- All tests from /afi/idebase/testinfo.txt
- All the sys group from /afi/base/testinfo.txt
- deftest from base (/afi/tst/base/deftest)
The output directory for unit tests can be specified in the recipe with UNIT_TEST_DIR. If not specified, it will be $SDE_LOCAL_ROOTDIR/utst (if the SDE variable is set). Refer to the build_top documentation for more information on those options. This test runner is fully compatible with sdcmds, idebase and sgcore tests. It could easily support other test suites (base, tfc... any library that has tests); we would just need to write the testinfo.txt files and verify that the tests can be validated by their exit code.
Pre-requisites:
- This Ruby script needs the following components to be installed: optparse, ostruct, pp, platform. They can be installed using gem install; refer to the Ruby help.
- You must tell the script where to build and stage the tests. Use run_tests.rb -h to see where they are picked up from by default. If using the runner from build_top, make sure SDE_LOCAL_ROOTDIR, ACTEL_SW_DIR or ALSDIR are set so that the runner knows where to stage the executables once compiled.


Recommendations: always use the latest SDE recommendation for your environment; then the script should work with minimal parameters specified:
run_tests.rb --tests all@idebase

5.1.2 Integration test runner

The script top_regs (/vobs/rtools/scripts/top_regs) allows you to run and validate integration test suites. This script takes an output directory and an "lst" file as parameters. For each test of the suite (lst file specified), it will:
- Copy the whole integration test folder into the output directory specified
- Run the test based on the information from the 'readme' file
- Use the 'Error Checking Method' to validate the test
Then, it provides a status reporting how many tests passed and which ones failed. Call top_regs -h for more information on this tool. This test runner is the one used when specifying the build_top recipe flags:
DO_TEST=true
TEST_LEVELS=ide
This will run all tests from /vobs/test/reg_test/lists/reglevide.lst. The output directory for integration tests can be specified in the recipe with TEST_DIR. If not specified, it will be $SDE_LOCAL_ROOTDIR/tst (if the SDE variable is set). Refer to the build_top documentation for more information on those options.

5.2 Customizing test validation/execution

5.2.1 Unit test

By default:
- To run a test, we simply need to call the executable. No arguments are passed.
- To validate the test, we simply need to check the exit code.
This can be customized per test. Those customizations are fully supported by the unit test runner, but we would not recommend using them: it may be hard for a person not familiar with the test to understand how to run and execute it (worse, if he does not understand how to determine whether the test passes or fails, he may consider it passed when it's actually broken). Basically, any argument that needs to be passed to the test can be hard-coded in the test's main program, and any specific validation can also be done in C++ by the main program or the CPPUNIT test class, guaranteeing that only the exit code needs to be checked for validation. Anyway, here is how to customize this if you really need to:

5.2.1.1 Passing arguments to the test

You can ask to pass some arguments to the test executable. You simply need to create an arglist.txt file in the test folder.


The line it contains will be used as the list of arguments to be passed to the executable. Note that if your test has an arglist.txt file, it is not handled by Visual Studio: you will need to specify the arguments to be used by the debugger manually. That's one of the reasons why we don't recommend doing this; developers may miss this file and try to run the test from the debugger without arguments.

5.2.1.2 Multiple runs of the same test

You may want one test (ie: executable) to be executed several times by the test runner, with different arguments. Then, you simply need to add one new line to arglist.txt per execution you want. For instance, if your test is family specific, your arglist may contain:
ACT1
ACT2
PA
...

Example 37

Then, the program will be run several times by the test runner: with ACT1 as single argument, then ACT2, then PA... This is not handled by Visual Studio when you try to debug the test; moreover, you hard-coded family names that should be accessed through the def system. That's one more reason why we would recommend looping through all families directly in the main program itself.

5.2.1.3 Customize test validation

run_tests.rb will create a test runner class for each unit test to be performed. This class is the one that builds, runs and validates the test. By default, it only checks the executable's exit code. There is a mechanism to customize the test runner class used to validate one test suite or one specific test. You just need to add a sub_test_runner.rb file under the module test folder (/afi/tst/<module>) to customize the validation for all the tests it contains, or under the test itself (/afi/tst/<module>/<test>) to customize a single test's validation. You will need to be familiar with Ruby; here's the file you'll need to add:


#!/usr/bin/env ruby

class MyDoRunTest < DoRunTest
  def doValidate( exit_code )
    result = super( exit_code )
    if @cur_options.verbose
      puts "#{@folder}: Parsing #{@logfile} for result"
    end
    if result == 0
      # Do some extra validation from here
    end
    result
  end
end

class MyPrototype < TestPrototype
  def DoCreateConcreteObject(folder,options,num,thread)
    MyDoRunTest.new(folder,options,num,thread)
  end
end

# add the new test runner to the factory
TestFactory.register( MyPrototype.new() )

Example 38

Once again, this is really not recommended. This extra validation mechanism will be used by the runner, but not by a person running the test by hand or debugging it.

5.2.2 Integration test

The 'readme' file from the test folder is the main entry point to customize test execution (change the argument list) and validation (change the 'Error Checking Method').

5.3 Overnight runs

You can use Windows' "Scheduled Tasks" to plan a build to be done and tests to be performed every night. Note that you need administrator privilege to be able to add new scheduled tasks.
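If you prefer the command line, the same task can be registered with the schtasks tool; the command below is a sketch, with placeholder task name, script path and start time:

schtasks /create /tn "NightlyBuild" /tr "e:\user\nightly.ksh" /sc daily /st 02:00:00
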

Figure 13

Define the task so that it runs a ksh script, for instance:


Figure 14

Then, make the script enter the view, synchronize it and then call build_top with a recipe. This recipe can run either unit or integration tests and can send you an email when done.
echo "echo \"Starting nightly build\"" >> e:/user/script.ksh
echo "sv_sync" >> e:/user/script.ksh
echo "build_top myrecipe" >> e:/user/script.ksh
setview -e e:/user/script.ksh f:/user/sn/myview_sn

Example 39


6. Test Driven Development

Test-Driven Development (TDD) is a software development technique consisting of short iterations where new test cases covering the desired improvement or new functionality are written first, then the production code necessary to pass the tests is implemented, and finally the software is refactored to accommodate the changes. The availability of tests before actual development ensures rapid feedback after any change. Practitioners emphasize that test-driven development is a method of designing software, not merely a method of testing. [Wikipedia]

See how to take advantage of TDD when fixing bugs at the end of 3.1.2. As a unit test can be loaded in Visual Studio (see 2.3.9), it's possible to do TDD from Visual Studio. In this case, you write the test before the code itself, so you will have a failing test. This one should not be added to the test suite for your module; it may be added to a specific test suite temporarily, and you will move it to the real test suite once it passes (so that other people trying to validate changes they made in your module won't see the failing test and won't start investigating why a test is not passing).
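As a minimal sketch (with a hypothetical function name), the test written first simply asserts the behavior you are about to implement:

// TDD sketch: written before ComputeChecksum() exists, so it fails (or
// does not even compile) until the production code is implemented.
void TestRunner::TestComputeChecksum()
{
    // 'a' + 'b' + 'c' = 97 + 98 + 99 = 294
    CPPUNIT_ASSERT( ComputeChecksum( "abc" ) == 294 );
}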


7. Future enhancements/Roadmap

7.1 Improve test runners

Unit tests:
- Make it possible to flag some tests that should not be executed in a test suite, to make TDD and GUI unit testing usage easier.
- Make it possible to specify "all" as a unit test suite to build_top. It would then run all test suites from all the libraries compiled.
- Make all existing tests (base, ipmgr...) compatible with the test runner, so that build_top supports any library's test level.
- Make it possible to have multiple executables compiled for interprocess communication unit tests.
Integration tests:
- Make it possible to place lst files under module paths, not only under /vobs/reg_tests
- Make it possible to run a single integration test using top_regs
- Support nested lists
- Have a better documentation of top_regs error checking methods
- Replace the readme file by one with a better format
- Use the top_regs integration script for SQA regressions.
Misc:
- Enhance the build_top result display. Today, if you run unit tests "all@idebase,all@nspirit", the result mail you receive says "UnitTst: 34/34". It should say "UnitTst: all@idebase:33/33, logexp@nspirit:1/1"
- Have the 24/7 system run all unit tests every night
- We may merge integration and unit test description files (*.lst and testinfo.txt) into a common top-level file, so that you can run a test suite that would include both unit and integration tests.
- Support parallel processing of unit and integration tests on different machines and platforms to improve runtime.
- Have a GUI opened by the test runner so that you get a real-time status of your tests. This GUI could also launch the debugger for you when a test fails, and set up the right environment for debugging automatically (executable path, arguments...)
- Save in a database the last processing time of each test and give the user an estimate of how long the testing will take.

7.2 Make tests runnable from captures

- Make it possible to run unit tests from captures (where you don't have write permission, so you cannot stage the test program)
- Make it possible to easily run integration tests from captures
- Make it possible to run tests from a machine that does not have ClearCase, on a capture provided by a developer.

7.3 Improve debugging

- We should load the argument list (arglist.txt) in Visual Studio when you create a solution including unit tests. Then you can run the test without setting the argument list by hand.


- We should also load integration tests in solutions created by mksln, setting up the project properties with the correct working directory and argument list (based on the readme file) to run an integration test.
