How to Test Apama Applications

Since this blog post was originally written in 2016, the default PySys test structure has been simplified in several ways – for example there is now a single pysystest.py file instead of separate pysystest.xml and run.py files. The sample code below can still be used, but if you’re using PySys v2.0+ (included with Apama 10.11+) it’s recommended to use more modern patterns for new testcases. See PySys changes in 10.11.0 and the product samples in APAMA_HOME/samples/pysys for details.


Testing an application is obviously a very important part of the development cycle, and to this end Apama includes an open-source testing framework called PySys (see the PySys v2.1 documentation for an introduction).

This is a Python-based system testing framework that can be used to orchestrate the different components of a system test and then verify the results to produce a pass or fail outcome. It also allows the same test to be run across multiple platforms.

We have added Apama extensions to PySys to make it easier to orchestrate Apama components. For example, the CorrelatorHelper class can start a correlator, inject an EPL file, redirect output to a file and send events. There are also example PySys tests included with the Apama installation, which can be found in ‘Apama\samples\pysys’.

Anatomy of a PySys test

A basic PySys test is made up of two files: ‘pysystest.xml’, an XML file which describes the test, and ‘run.py’, which contains the Python code for the test. Any ‘pysystest.xml’ can be copied and updated from the samples, but we shall look more closely at ‘run.py’.
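As a rough sketch, a minimal descriptor looks something like this (the title and purpose text here are illustrative; copy a real one from the samples to see the full set of elements):

<?xml version="1.0" encoding="utf-8"?>
<pysystest type="auto">
    <description>
        <title>Correlator echoes simple events</title>
        <purpose>Checks that the injected EPL echoes simple events back on a test channel</purpose>
    </description>
</pysystest>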

The Python code for a PySys test is contained in a class which inherits from BaseTest and overrides two methods, execute() and validate(). Here is an example stub of a PySys test:

from pysys.basetest import BaseTest

class PySysTest(BaseTest):
    def execute(self):
        pass

    def validate(self):
        pass

Code in execute() is used to run the test: it may start up systems, connect them, send data and then wait for a signal that the test has completed. Code in validate() is then used to assert that the test ran as expected and that unexpected things didn’t happen. This could mean checking for messages (or for the absence of messages) in log files, or checking for differences between a reference file and files created by the test.

Here is an example of an execute() method which starts an Apama correlator, registers to receive output events and log them to a file, injects some EPL, sends in some events, and then waits for the expected number of output events (CorrelatorHelper is imported from the apama.correlator module provided by the Apama PySys extensions):

# create the correlator helper, start the correlator and attach an
# engine_receive process listening to a test channel. The helper will
# automatically get an available port that will be used for all
# operations against it
correlator = CorrelatorHelper(self, name='testcorrelator')
correlator.start(logfile='testcorrelator.log')
receiveProcess = correlator.receive(filename='receive.evt', channels=['EchoChannel'], logChannels=True)

# inject the simple.mon monitor (directory defaults to the testcase input)
correlator.injectEPL(filenames=['simple.mon'])

# send in the events contained in the simple.evt file (directory defaults
# to the testcase input)
correlator.send(filenames=['simple.evt'])

# wait until the receiver writes the expected events to disk
self.waitForSignal('receive.evt', expr="SimpleEvent", condition=">=2")

This example uses two more files, ‘simple.mon’ and ‘simple.evt’, which are stored with the rest of the test code in a directory called ‘Input’; the injectEPL() and send() functions look in this directory by default. The test also creates some output files, ‘receive.evt’ and ‘testcorrelator.log’, which are written to the ‘Output’ directory. It is good practice to use waitForSignal() (with a timeout, which defaults to 30 seconds) on an output file to ensure the test has completed.
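If the default is too short for a slow test, the timeout can be passed explicitly, for example (the 60 second value here is purely illustrative):

# the same wait as above, but allowing up to 60 seconds
self.waitForSignal('receive.evt', expr="SimpleEvent", condition=">=2", timeout=60)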

It is considered bad practice just to wait or sleep for a fixed number of seconds, as this is unreliable and can easily waste testing time (for instance, if you have 600 tests each waiting an extra second they don’t need, that adds 10 minutes to a test run!). The output files can then be used in the validate() function to determine whether the test passed or failed (other test outcomes are available, such as ‘blocked’, ‘timeout’ or even ‘core dump’). Here is an example of a validate() function:

# check there are no ERROR or FATAL log statements in the correlator log file
self.assertGrep('testcorrelator.log', expr=' (ERROR|FATAL) ', contains=False)

# create a list of messages and assert they appear in the correct order
exprList = []
exprList.append('Received simple event with message - This is the first simple event')
exprList.append('Received simple event with message - This is the second simple event')
self.assertOrderedGrep('testcorrelator.log', exprList=exprList)

# check the received events against the reference
self.assertDiff('receive.evt', 'ref_receive.evt')

This code first asserts that the given regular expression is not contained in the correlator log file. It then creates an ordered list of messages and asserts that they appear in the correlator log in the supplied order. Finally, it diffs the file of received events against a reference file and expects them to be identical. The reference file ‘ref_receive.evt’ is stored with the test in the ‘Reference’ directory. If any of these assertions fail, the test fails.
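Assertions are not the only way to determine the result; the outcomes mentioned above can also be recorded explicitly via addOutcome(), for instance when a failed precondition makes the rest of the test meaningless (the reason string below is illustrative):

# record an explicit outcome, e.g. when a precondition fails
from pysys.constants import BLOCKED
self.addOutcome(BLOCKED, 'correlator failed to start')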

Running a PySys test project

You would normally have multiple PySys tests for an application, and you will want to run them all as a test set (expecting them all to pass). One more file is needed to define a PySys project: ‘pysysproject.xml’. This XML file defines properties for PySys and should be located in the root test directory; there is an example of it in the PySys samples. PySys can be run from any child directory within your test directory: it will search up the directory tree to find ‘pysysproject.xml’, but by default it will run all tests found beneath the directory it is run from.
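At its most minimal, the project file can be little more than the root element (the property shown in the comment is purely illustrative; see the sample project file for the properties relevant to your setup):

<?xml version="1.0" encoding="utf-8"?>
<pysysproject>
    <!-- project-wide properties can be declared here, e.g.
         <property name="myAppHome" value="..."/> (illustrative name and value) -->
</pysysproject>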

To run all PySys tests, simply run this command from an Apama Command Prompt: pysys run
You can run a single test by supplying its directory name: pysys run Apama_cor_001
Or you can run a subset by supplying a name range (this will run all tests from 004 onwards): pysys run Apama_cor_004:

After the test set is run, PySys will give you a breakdown of any failures, which you should then investigate.

PySys extras

PySys also offers some other commands, such as printing all test descriptions or making a stub test. The possible options are:

Usage: pysys.py [mode] [option]* { [tests]* | [testId] }
    where [mode] can be;
       run    - run a set of tests rooted from the current working directory
       make   - make a new testcase directory structure in the current working directory
       print  - print details of a set of tests rooted from the current working directory
       clean  - clean the output subdirectories of tests rooted from the current working directory
    For more information on the options available to each mode, use the -h | --help option, e.g.
       pysys.py run --help

We encourage you to use these when appropriate.

PySys is a very powerful testing framework and this blog just scratches the surface. Some other things to investigate:

BaseTests – If you have common code in multiple tests, you should create your own BaseTest subclass that contains the common code, and have your tests inherit from it (see the sketch after this list)
Asserts – there are many assert*() methods in PySys that should be used during validation
Code Coverage – PySys can be run with the option ‘-X eplcoverage=true’ and it will generate a code coverage report for the lines tested.
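As a rough sketch of the shared-BaseTest idea (MyAppBaseTest, startAppCorrelator and MyApp.mon are illustrative names, not part of the product):

from apama.correlator import CorrelatorHelper
from pysys.basetest import BaseTest

class MyAppBaseTest(BaseTest):
    def startAppCorrelator(self, name='testcorrelator'):
        # start a correlator and inject the application's EPL,
        # exactly as in the execute() example above
        correlator = CorrelatorHelper(self, name=name)
        correlator.start(logfile=name + '.log')
        correlator.injectEPL(filenames=['MyApp.mon'])  # hypothetical application monitor
        return correlator

Each test’s run.py can then declare class PySysTest(MyAppBaseTest) and call self.startAppCorrelator() from its execute().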

Happy testing!