Level: INTERMEDIATE
Apama EPL is powerful, and like other languages its real potential comes from the existing plug-ins, extensions, and libraries that save you from re-inventing the wheel over and over again. We have already seen (Day #8) some of the Connectivity Plug-ins and Adapters, but your Apama installation also includes other helpful bundles.
One of them is the MemoryStore bundle, described in the 10.11 documentation topic “Using the Memory Store”. Put simply, the MemoryStore provides a way to store data in a structured way in memory (RAM). It closely resembles a database: it has tables with rows and columns, and a schema that describes the column names and types. Every row in a MemoryStore table is uniquely identified by a single key that you provide, so you can use it like a modern key-value store. Note that the MemoryStore is not part of the EPL language as such, but an in-the-box plug-in developed against the public foreign-function interface for EPL (see “Introduction to EPL Plug-ins” on softwareag.com).
But why would you use this when you could do the same with a few simple variables in your EPL code?
Here are some typical use cases for the Memory Store:
- Caching. Do you have a file, database, or other data source that is expensive to query, so you want to cache the result? Just populate the MemoryStore with the results and query it wherever you need access to the data.
- Persistence of in-memory data. For correlator state that should not be lost across restarts, consider the optional persistence capability of the MemoryStore, which is backed by a file on disk (see footnote *).
- Sharing data between monitors. Apama monitors usually share information only by sending events to each other (recall from Day #6 that EPL is a shared-nothing language). Where that is not possible or practical, just write the data to a MemoryStore table and read it from the other monitor.
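As a sketch of that last use case, a second monitor could read what a writer monitor has stored. The monitor name TableReader is made up for illustration; it assumes that the store "mystore" and table "mytable" have already been fully prepared by another monitor (as shown in the examples below), since Storage.open() only retrieves a handle to an already-prepared store:

```
using com.apama.memorystore.Storage;
using com.apama.memorystore.Store;
using com.apama.memorystore.Table;
using com.apama.memorystore.Row;

monitor TableReader {
	action onload() {
		// assumes another monitor has already finished preparing
		// "mystore" and "mytable" before this monitor is injected
		Store store := Storage.open("mystore");
		Table tbl := store.open("mytable");
		Row row := tbl.get("saltedcaramel");
		log "Shared counter is " + row.getInteger("counter").toString();
	}
}
```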
But be careful not to confuse the MemoryStore with a full-blown relational database. As with a traditional key-value store, you can only look up your data by its key; more complex queries, e.g. joins across multiple tables, are not possible.
Every table in a MemoryStore follows a fixed schema. You have to set up the store and describe its tables before you can use them.
- Prepare the store:
using com.apama.memorystore.Storage;
using com.apama.memorystore.Store;
using com.apama.memorystore.Table;
using com.apama.memorystore.Row;
using com.apama.memorystore.Schema;
using com.apama.memorystore.Finished;

monitor TableWriter {
	Store store;

	action onload() {
		// request an in-memory store; completion is signalled by a Finished event
		integer id := Storage.prepareInMemory("mystore");
		on Finished(id, *, *) as f {
			if not f.success { log "Whoops" at ERROR; die; }
			store := Storage.open("mystore");
			initTable(store);
		}
	}
}
- Initialize the table(s) with schema(s):
Table tbl;

action initTable(Store store) {
	// define the column names and types for the table
	Schema schema := new Schema;
	schema.fields := ["counter", "type"];
	schema.types := ["integer", "string"];
	integer id := store.prepare("mytable", schema);
	on Finished(id, *, *) as f {
		if not f.success { log "Whoops" at ERROR; die; }
		tbl := store.open("mytable");
		writeAndRead(tbl);
	}
}
Tip: You can use the getFieldNames() and getFieldTypes() methods on an event (see ApamaDoc) to create a schema that mirrors an Apama event.
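A minimal sketch of that tip, using a hypothetical Cookie event type and the generated event methods mentioned above:

```
event Cookie {
	integer counter;
	string type;
}

// build a schema whose columns mirror the fields of the Cookie event
action schemaForCookie() returns Schema {
	Schema schema := new Schema;
	Cookie c := new Cookie;
	schema.fields := c.getFieldNames();
	schema.types := c.getFieldTypes();
	return schema;
}
```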
- Start to write and read rows:
action writeAndRead(Table tbl) {
	// get the row for the key, fill in its values, and commit it;
	// the loop retries in case another monitor modified the row in the meantime
	Row rowWrite := tbl.get("saltedcaramel");
	boolean done := false;
	while not done {
		rowWrite.setInteger("counter", 123);
		rowWrite.setString("type", "chocolate");
		done := rowWrite.tryCommitOrUpdate();
	}
	// read it back later in your code
	Row rowRead := tbl.get("saltedcaramel");
	integer n := rowRead.getInteger("counter");
	string t := rowRead.getString("type");
	log "I have eaten " + n.toString() + " " + t + " cookies";
}
The MemoryStore API also offers convenience methods for storing and reading rows based on your own event types, which make handling data even easier.
Have fun exploring the Memory Store Plugin!
(Footnote: *Not available in Cumulocity IoT as persistent local file storage is currently not permitted for microservices.)
Today’s article was kindly provided by Mario Heidenreich from the Global Competency Centre for IoT team.
This is Day #22 of a short series of brief tips, tricks, hints, and reminders of information relating to the Apama Streaming Analytics platform, from both Software AG and the community.
This series of articles will be published Monday-Friday only, with the occasional weekend bonus.