Level: INTERMEDIATE
Events in, Events out. But where from, and where to? And how?
Apama has a long tradition of great connectivity options, whether “in the box”, via additional products, or DIY via one of the many supported APIs.
We already talked about engine_send and engine_receive, which communicate directly with the correlator over our own high-performance network protocol. Those tools are built against the same publicly supported APIs that anyone can use in their own remote clients, and those APIs are available for both native C/C++ and Java. You can also connect multiple correlators to each other - more on that later.
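To give a flavour of the Java client API, here is a minimal sketch of sending a single event into a correlator. The class and method names (EngineClientFactory, EngineClientInterface, sendEvents) are as I recall them from the public Javadoc, and 15903 is the usual default correlator port - treat the details as assumptions and check against the client API samples shipped with your installation.

```java
// Minimal sketch: send one event into a correlator from a remote Java
// client, over the same protocol that engine_send uses.
// Names/signatures assumed from the com.apama.engine.beans Javadoc.
import com.apama.EngineException;
import com.apama.engine.beans.EngineClientFactory;
import com.apama.engine.beans.interfaces.EngineClientInterface;

public class SendOneEvent {
    public static void main(String[] args) throws EngineException {
        // 15903 is the default correlator port; adjust for your deployment.
        EngineClientInterface engineClient =
                EngineClientFactory.createEngineClient("localhost", 15903, "SendOneEvent");
        try {
            // Events use the same string form that engine_send accepts.
            engineClient.sendEvents("MyEvent(\"hello\", 42)");
        } finally {
            engineClient.dispose();
        }
    }
}
```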
Historically, we’ve also had a dedicated Integration Adapter Framework (we’re good at names - it’s called the “IAF”): a separate process that embeds the aforementioned client API and provides a host process in which a user can place their own 3rd-party client libraries. We call that type of integration simply “adapters”. Indeed, quite aside from in-the-box adapters such as the one for JDBC, we sell a wide range of Capital Markets integration adapters for consuming market data or managing orders. The beauty of adapters hosted separately in the IAF is that if a hosted 3rd-party component is misconfigured or has stability issues, that adapter can be restarted without restarting the correlator that contains your solution logic. However, it does mean more processes to manage and monitor. Again, the APIs for building components hosted within the IAF are available for C and Java.
How about “messaging” integration? Well, as you might expect from an enterprise software company, we have specialist in-process connectivity for one of the most popular messaging paradigms: JMS. Taking that a step further, we also have native (non-JMS) in-process integration with Software AG’s own high-performance message bus, Universal Messaging.
So, now that we have introduced the concept of connecting directly from the correlator to external sources/sinks of data, what else can we do? Here is where we recognize that we should not attempt to provide in-the-box solutions for every kind of connectivity we can find. That’s simply not the core business value of a streaming analytics product. Instead, we provide some common ones and make it as easy and as flexible/configurable as possible for a customer to build additional connectivity when needed. For modern deployments, we do this through what we call “Connectivity Plugins”, linked together into “connectivity chains”. Rather than hosting “adapters” in a separate IAF process, connectivity plugins host similar capabilities directly inside the correlator, removing the need to monitor/manage a separate process and removing a process-hop at the same time. Here, for example, we provide plugins for HTTP server, HTTP client, MQTT client, and Kafka client, as well as the UM client and the Software AG Digital Event Services client (another abstraction over UM) for talking to other portfolio products. The plugins I have mentioned are what are known as “transports”, dealing with the actual connectivity; however, we also need to translate the data being transported, and that is where “codecs” come into the picture - e.g. the JSON codec.
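To make the transport/codec distinction concrete, here is a minimal sketch of a Java codec that upper-cases string payloads on their way from the transport towards the correlator. The package, base class, and method names (com.softwareag.connectivity, AbstractSimpleCodec, the transformMessageTowards* pair) are as I recall them from the connectivity plug-in Javadoc - treat this as a sketch and compare against the samples in your installation.

```java
// Minimal sketch of a "codec" connectivity plug-in in Java: it sits in a
// chain between a transport and the correlator (the "host") and transforms
// payloads in each direction. Names assumed from com.softwareag.connectivity.
import org.slf4j.Logger;

import com.softwareag.connectivity.AbstractSimpleCodec;
import com.softwareag.connectivity.Message;
import com.softwareag.connectivity.PluginConstructorParameters.CodecConstructorParameters;

public class UpperCaseCodec extends AbstractSimpleCodec {

    public UpperCaseCodec(Logger logger, CodecConstructorParameters params) throws Exception {
        super(logger, params);
    }

    // Transport -> host: upper-case any string payload heading into EPL.
    @Override
    public Message transformMessageTowardsHost(Message message) {
        Object payload = message.getPayload();
        if (payload instanceof String) {
            message.setPayload(((String) payload).toUpperCase());
        }
        return message;
    }

    // Host -> transport: pass messages through unchanged.
    @Override
    public Message transformMessageTowardsTransport(Message message) {
        return message;
    }
}
```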
I’d strongly recommend the following existing article: Apama connectivity plug-ins - Knowledge base - Software AG Tech Community & Forums
Connectivity Plugins are the primary mechanism for anyone building modern new integrations directly with Apama. Yet again, the APIs are available in C++ and Java. Briefly referring back to an earlier point: if you do want to separate your connectivity from your solution logic, but still want to use modern connectivity plugins, remember that you can host them in separate correlators and connect those correlators together, just as you might historically have done with IAF adapters.
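And on the transport side, a similarly minimal Java sketch, again with names assumed from the connectivity plug-in API rather than taken from this article: an “echo” transport that simply reflects every message from the host straight back towards EPL. A real transport would manage an external connection in its lifecycle methods and push inbound data towards the host by the same route.

```java
// Minimal sketch of a "transport" connectivity plug-in in Java: an echo
// transport that reflects every message from the correlator (host) straight
// back towards EPL. Names assumed from com.softwareag.connectivity.
import java.util.Collections;

import org.slf4j.Logger;

import com.softwareag.connectivity.AbstractSimpleTransport;
import com.softwareag.connectivity.Message;
import com.softwareag.connectivity.PluginConstructorParameters.TransportConstructorParameters;

public class EchoTransport extends AbstractSimpleTransport {

    public EchoTransport(Logger logger, TransportConstructorParameters params) throws Exception {
        super(logger, params);
    }

    // Called for each message travelling host -> transport.
    @Override
    public void deliverMessageTowardsTransport(Message message) {
        // Bounce it straight back, transport -> host.
        hostSide.sendBatchTowardsHost(Collections.singletonList(message));
    }
}
```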
Of course, we are Software AG, and we do have world-leading Integration products in our commercial portfolio, so in many circumstances the more appropriate architecture may be to use pre-existing mature connectivity from the likes of webMethods Integration, and have Apama talk to that via any of the previously mentioned options - JMS, UM, DES, or REST over HTTP via a managed API. It really is about giving you the flexibility you need in your enterprise scenarios.
When we’re looking at devices outside the enterprise, again we link seamlessly with the leading specialist product on the market, Cumulocity IoT. Apama is already available as a prebuilt Streaming Analytics microservice for Cumulocity IoT, so if your devices are connected to Cumulocity IoT then your data can stream directly through into Apama for real-time analytics. As mentioned on previous days, move one step closer to the edge and Apama is also a standard component of Cumulocity IoT Edge for those higher-volume on-prem IoT deployments. And now, going even smaller, even closer to the source of real-time data: with thin-edge.io you can embed Apama Streaming Analytics directly into industrial gateways.
As you can see, you have many options to get Events In and Events Out!
Finally, as you may have come to expect by now, links are provided to more detailed information. In particular, a collection of blog posts for Connectivity Plugins over recent releases:
- Creating your own Apama Connectivity Plugins - Knowledge base
- Batch Codecs - Knowledge base
- New in 10.1 – HTTP Server Connectivity Plugin - Knowledge base
- HTTP Server EPL responses - Knowledge base
- Chain Managers – Dynamic Connectivity - Knowledge base
- Kafka and Apama - Knowledge base
Not forgetting the product documentation (v10.11): Connecting Apama Applications to External Components (softwareag.com)
As the technology landscape changes over time, so do our approach and recommendations for connectivity. Apama has been around since before REST was popular. SOAP came and, mostly, went. Different message buses and protocols rise and fall in popularity. Servers used to be so small that we needed separate physical hosts for adapters vs. correlators; then they got large enough to host many processes, or even many VMs. Then came the new world of microservices and HTTP, and even microservices with messaging, giving us architectures where Apama is containerized as a single process acting as an agent in a larger mesh. And then we come back, nearly full circle, to the smaller resource-constrained thin-edge gateways, where we recommend lightweight native (rather than Java) in-process connectivity plugins such as the MQTT transport. It’s funny, though, to think back and compare the compute resource available now on these thin-edge gateways to the server hardware in the early days of Apama.
Keeping Apama focused, fast, and lean over the years - never becoming bloated or needing large clusters to achieve streaming analytics - while keeping the connectivity options open means that the Apama technology is readily applicable in many different domains.
Well… that one turned out to be a bit longer than I expected! We’ll try to make the next few a bit shorter.
This is Day #8 of a short series of brief tips, tricks, hints, and reminders relating to the Apama Streaming Analytics platform, from both Software AG and the community.
This series of articles will be published Monday-Friday only, with the occasional weekend bonus.