Internship and Enhancing EPL Testing (Part 1)

Introduction

Any software engineer who’s had to deal with unit testing will know what a frustrating experience it can be. Whether it’s the steep learning curve of a new framework or language, unhelpful log messages, or a plethora of new CLI arguments to memorize, there’s something in unit testing for everyone to dislike. Here in the Apama team, we have looked into ways to smooth out the experience for our customers and provide them with powerful tools for building reliable EPL code. During my internship this summer, I was responsible for two projects that support that aim: a VSCode extension for PySys and an EPL Testing Framework. This blog post outlines the engineering process behind these projects and gives a glimpse of my internship experience at Software AG.

To start off, I would like to introduce myself. My name is Yoav, and I’m about to enter my third year of undergraduate studies in Information Engineering. I joined the IoT & Analytics team here in Cambridge at the end of June 2020 and finished at the start of September 2020. Obviously, with a global pandemic going on this year, I expected this internship to be a bit special. It was very clear even before starting that I would not be able to come and work at an office. People were very anxious at the time, me included; many of my friends were having their internships and holidays canceled, and I wasn’t sure whether my internship would still be able to go ahead. Fortunately, Software AG was very accommodating and transparent from the start and set up a remote internship experience. I feel very grateful for having had this experience; not many employers would give people my age that much freedom and autonomy. The additional independence of working from home allowed me to gain more confidence in my software engineering skills, which will help me massively in my future career.

But enough blabbing on about myself; let’s talk about what we’re really here for: EPL testing!

The first project I had the chance to work on (which will be covered in part 2 of this blog!) is a VSCode extension for the PySys testing framework. Currently, PySys is the chosen tool for uploading EPL apps and tests to Cumulocity IoT and for running them locally. The goal of this extension is to provide users with a simple UI for running any tasks that may be relevant to EPL testing.

The second project is an EPL testing framework, which aims to make it easier to write EPL tests and to lower the barrier to entry for developers who are not yet proficient in EPL.

EPL testing framework

Although users have a simple pathway for uploading .mon files to Cumulocity IoT and running tests using PySys, there isn’t a streamlined methodology for writing EPL tests. The example below shows a test that checks that an alarm is raised when a measurement on a device exceeds a threshold value.
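A minimal sketch of what such a hand-written test might look like, reusing the helper actions and event types that appear later in this post (the 10-second timeout and the log messages are illustrative assumptions):

action onload() {
    monitor.subscribe("TEST_CHANNEL");

    on DeviceCreated(reqId=helper.createNewDevice("AlarmOnMeasurementThresholdTestDevice")) as device {
        // Send a measurement above the threshold, then listen for the resulting alarm
        monitor.subscribe(Alarm.SUBSCRIBE_CHANNEL);
        integer measurementReqId := helper.sendMeasurement(device.deviceId, MEASUREMENT_THRESHOLD + 10.0);

        // Pass case: a matching alarm arrives before the timeout fires
        on Alarm(source=device.deviceId, type=ALARM_TYPE) and not wait(10.0) {
            log "PASS: alarm raised for measurement above threshold" at INFO;
        }
        // Fail case: the timeout fires without a matching alarm
        on wait(10.0) and not Alarm(source=device.deviceId, type=ALARM_TYPE) {
            log "FAIL: no alarm raised within 10 seconds" at ERROR;
        }
    }
}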

It involves setting up multiple listeners for devices, alarms and timeouts, as well as handling both the pass and the failure cases. Writing tests this way gives you a lot of flexibility, but it does not scale well and is not easy for new EPL users to replicate. We wanted to build a framework that removes all of this complexity whilst still giving users flexibility in testing their applications.

Moving toward a clearer test structure

We started by researching existing testing frameworks from other languages to see what we might want to borrow and incorporate into EPL testing. JUnit was a heavy source of inspiration for the signature of our assertion actions, with desirable features including:

  • A generic assertion that handles type checking and many use cases.
  • A custom message when setting up an assertion, giving users good flexibility with error logging.

To decide on a logging style, we again looked at how popular frameworks display their output. Here too we took inspiration from JUnit, adding diff highlighting to it for improved readability. The image below is an example of a failing EPL test log displayed in PySys.

Using the asserts

After deciding on the testing style we wanted to emulate, we started working on some assertion actions that may be useful for testing common EPL code. These included actions such as assertEquals, which asserts equality between two values, and assertThrows, which, given an action and a set of input parameters, asserts that the action throws an error. The ability to set the log level of the assertions and to enable or disable them easily was also requested; at that stage this was handled by a dictionary passed to the asserter’s constructor. The code block below illustrates some of the common assertions that could be performed at that stage:

action onload() {
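        // Assertion options: enable the assertions and log failures at ERROR level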
        dictionary<string, any> options := {
            "isEnabled": true,
            "logLevel": "ERROR"
        };
 
        Assert asserter := Assert.createCustom(options);
 
        device device1 := device.create("CZ-3", "droid", "127.0.0.1");
        device device2 := device.create("CZ-3", "droid", "127.2.0.1");
 
        asserter.assertEquals("sensors should be the same", device2, device1);
        asserter.assertEquals("log messages should be the same ", "Executed engine_inject <correlator> [test.mon]", "Executed engine_deploy <correlator> [test.mon]");
 
        asserter.assertNotEquals("values should not equal", 3.56, 3.57);
 
        asserter.assertInRange("value falls in range", 3, 2, 4);
 
        asserter.assertTrue("two plus two is four", (2+2)=4);
}
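assertThrows does not appear in the block above; a call could look something like the sketch below, where the divide action and the parameter-list form of passing arguments are illustrative assumptions based on the description rather than the framework’s confirmed signature:

// Illustrative sketch only: an action that can throw at runtime...
action divide(integer a, integer b) returns integer {
    return a / b;   // integer division by zero throws at runtime
}

action onload() {
    dictionary<string, any> options := {
        "isEnabled": true,
        "logLevel": "ERROR"
    };
    Assert asserter := Assert.createCustom(options);

    // Pass the action plus its arguments; assertThrows is expected to fail the test if no error is thrown
    asserter.assertThrows("dividing by zero should throw", divide, [<any> 10, <any> 0]);
}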

With the assertions handling the logging and the failure cases, the test above was simplified as follows:

action onload()
    {
        monitor.subscribe("TEST_CHANNEL");
         
        on DeviceCreated(reqId=helper.createNewDevice("AlarmOnMeasurementThresholdTestDevice")) as device
        {  
            // Send measurement and check to see whether an alarm is raised
            monitor.subscribe(Alarm.SUBSCRIBE_CHANNEL);
            integer measurementReqId := helper.sendMeasurement(device.deviceId, MEASUREMENT_THRESHOLD + 10.0);
             
            helper.assertAlarmRaised("aboveThresholdTest", device.deviceId, ALARM_TYPE);
            on Alarm(source=device.deviceId, type=ALARM_TYPE) as alarm {
                asserter.assertEquals("alarmSeverityTest", alarm.severity, "MINOR");
            }
        }
 
        on DeviceCreated(reqId=helper.createNewDevice("AlarmOnMeasurementThresholdTestDevice")) as device
        {  
            // Send measurement and check to see whether an alarm is raised
            monitor.subscribe(Alarm.SUBSCRIBE_CHANNEL);
            integer measurementReqId := helper.sendMeasurement(device.deviceId, MEASUREMENT_THRESHOLD - 10.0);
             
            asserter.assertAlarmNotRaised("belowThresholdTest", device.deviceId, ALARM_TYPE);
        }
 
    }

It is still very clunky and difficult to follow, with all of the listeners being set up in the same place. Moving these away changed a lot of things from a technical point of view. For instance, this test involves a nested listener that relies on data being passed down from a parent listener (device.deviceId); in order to un-nest this assertion we would need to generate, assign and store identifiers at a higher level, where they can be accessed by any listeners that depend on different events occurring. The solution we came up with was to nest a set of common actions users may want to perform and test within an event that handles the setting up of the listeners, in a builder pattern:

action onload() {
        float MEASUREMENT_THRESHOLD := 100.0;
        string ALARM_TYPE := "ThresholdExceededAlarm";
 
        ManagedObject mo := new ManagedObject;
        mo.type := "testObject";
        mo.name := "r2d2";
 
        //Test 1
        TestValidator verify := TestExecuter
            .create("test1")
            .enable(true)
            .createDevice("myDeviceId")
            .createDevice("myDeviceId2")
            .sendMeasurementOn("myDeviceId", MEASUREMENT_THRESHOLD + 10.0)
            .sendMeasurementOn("myDeviceId2", MEASUREMENT_THRESHOLD - 10.0)
            .sendManagedObject(mo)
            .findManagedObject({"type": "testObject"})
            .run();
         
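        // Once the setup above completes, the executor sends a TestValidator event carrying the test data; the assertions run on it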
        on TestValidator(validateId=verify.validateId) as verify {
            verify
                .assertAlarmRaised("should raise an alarm", "myDeviceId", ALARM_TYPE)
                .assertAlarmNotRaised("should not raise an alarm", "myDeviceId2", ALARM_TYPE)
                .assertEquals(verify.managedObjects[0].name, "r2d2")
                .end();
        }
}

In this design, we expect users to write their EPL tests in a two-step process. First, they run an instance of the TestExecuter event and set up any events / data they may want to test later. This includes things such as creating devices, sending measurements, sending/loading managed objects, etc. Then, they set up a listener for a TestValidator event sent from the TestExecuter. Once the TestValidator event is received, the user can run any assertions they want on the response object of that listener, with full access to all the data handled during test setup.

Trying to take it further

We attempted to take this concept further by getting rid of the listener on TestValidator and processing all of the setup and execution under one event builder. The challenge with that is that the objects generated during test preparation are not accessible at the monitor level, as they haven’t been instantiated yet (see the verify object above). This required us to think of alternative ways in which these objects could be accessed; the solution we came up with was custom object comparators that take a set of arguments to navigate the internal objects. The code block below showcases this iteration:

TestExecuter
    .create("test1")
    .enable(true)
 
    .createDevice("myDeviceId")
    .sendMeasurementOn("myDeviceId", MEASUREMENT_THRESHOLD + 10.0)
    .assertAlarmRaised("should raise an alarm", "myDeviceId", ALARM_TYPE)
 
    .createDevice("myDeviceId2")
    .sendMeasurementOn("myDeviceId2", MEASUREMENT_THRESHOLD - 10.0)
    .assertAlarmNotRaised("should not raise an alarm", "myDeviceId2", ALARM_TYPE)
 
    .sendManagedObject(mo)
    .findManagedObject({"type": "testObject"})
    .assertFn("strings should not be equal", stringComparator("hello", "helllo"))
    .assertFn("object name should equal", c8yObjectComparator("0", "name", "c2po"))
    .run();

These assertions use a generic action, assertFn, which runs an assertion using a custom comparator and has the following signature:

.assertFn("message", Comparator(args...))
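For illustration, one way such a comparator could be structured is as a small event whose comparison action the framework evaluates when the assertion runs. The sketch below is purely an assumption; the event name, fields and factory action are not the framework’s actual types:

// Purely illustrative: the framework's real comparator types are not shown in this post.
event StringComparator {
    string left;
    string right;

    // Hypothetically called by assertFn when the assertion is evaluated.
    action compare() returns boolean {
        return left = right;
    }
}

// Factory action in the style of the stringComparator(...) call used above.
action stringComparator(string left, string right) returns StringComparator {
    return StringComparator(left, right);
}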

We figured this was not the way forward, as accessing data this way feels less natural than using standard EPL syntax, as in the previous iteration. In the future, it would be ideal to get rid of all listeners on the monitor for optimal readability, but this change would require further work.

Conclusion

The goal of the project was to come up with a framework for creating tests within EPL apps, and EPL generally, that is easier to follow and use. Using the builder pattern to make the asynchronous nature of EPL less dominant achieves this. It is a move toward writing EPL in a style similar to promises in JavaScript or TypeScript. There are also plans to enhance EPL with async/await-like functionality that would make this structure even better to work with.

Try it for yourself!

All the projects discussed above are open source; you can check them out at: