The Coffee Factory

Introduction

Have you ever wondered how instant coffee is made? In this blog, I'll walk you through my Coffee Factory project, which I created using Apama EPL with no previous knowledge of the language. The project consists of two sub-projects: one simulates the data within a Coffee Factory, and the other takes the role of Apama, analyzing the live data for any issues. If something goes wrong in our Coffee Factory, Apama takes care of the issue immediately and fixes it.

You can find my project on GitHub: https://github.com/SpacePirato/Coffee-Factory. It is written in EPL and simulates a Coffee Factory in which Apama tracks any issues in the production and reacts immediately.

First, I would like to introduce myself. My name is Peter, and I'm an undergraduate student about to enter the fourth and final year of my Computer Science degree. I was a summer intern in Software AG's Cambridge office from the 29th of July to the 27th of September 2019, and the Coffee Factory is one of the things I worked on during my internship. The goal of the project was to come up with an awesome demo showing how Apama can be used for industrial IoT. Since coffee is said to be the second most popular drink in the world, I thought a coffee theme would be an interesting way for people to get more familiar with how instant coffee is made, while also showing off how Apama can tackle problems within the factory. After researching how coffee factories function, I realized that most tasks in a coffee factory are done by heavy machinery and lots of sensors, which made the Coffee Factory idea perfect for the demo I had to come up with!

How does it work

Making instant coffee is a complex process that consists of many steps, which can vary depending on the type of coffee. In my project I decided to show the stages the coffee beans go through, namely: Roasting, Extracting, Freezing, and Freeze-drying.

Roasting

Firstly, the newly harvested green beans go into a roasting drum that contains sensors (attached to the outside of the drum because of the high temperature) that track the color of the beans. The sensor is a ColorTrack RT (see the Color Tracker graph below), a laser-based, shade-sensing technology. The output of the laser module is a 0-10 Volt DC low-voltage signal that correlates with the color of the roasting coffee beans. I decided to multiply the reading by ten because larger numbers are easier to read on our Grafana graph, so the displayed values range from 0 to 100.

It works by detecting the shade of the beans: a very dark sample might read, for example, 12, while a lighter sample reads 73, with everything else falling between those two values. In the graph, the readings fluctuate slightly because the beans in the drum are rotating and there are small differences in their color. To the right of the Color Tracker graph is the Current Bean Color bar gauge, which shows the people in the factory which stage the beans are in, ranging from green/early yellow to extremely dark black. All of this also depends on the temperature inside the roasting drum, which is displayed in the Temperatures graph. There are also rough estimates (depending on the type of coffee bean) of what color to expect at a given temperature; for instance, the yellow stage should begin at about 173 degrees Celsius.
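To give a taste of what this looks like in EPL, here is a minimal sketch of modeling and scaling such a reading. The event and monitor names are illustrative, not the exact ones from my project:

```
// A minimal, illustrative sketch; not the exact events from my project.
event BeanColorReading {
    float volts; // raw 0-10 Volt DC signal from the laser module
}

monitor ColorTracker {
    action onload() {
        on all BeanColorReading() as reading {
            // Scale 0-10 V up to 0-100 so the Grafana graph is easier to read
            float scaled := reading.volts * 10.0;
            // Darker beans read low (around 12), lighter beans higher (around 73)
            log "Scaled bean color reading: " + scaled.toString() at INFO;
        }
    }
}
```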

Extracting

Secondly, extraction happens once the beans are roasted, ground into powder-like pieces, and put into an extractor drum. Once the extraction begins, the Extraction graph shows how the caffeine, volatile oils, and organic acids are extracted. The caffeine is extracted fastest, followed by the volatile oils, which give coffee its aroma and flavors, and finally the organic acids, which are responsible for the coffee's bitterness. As before, the whole extraction process depends on the temperature of the extraction drum, which can be seen in the Temperatures graph. Notably, Apama tracks the temperature in the extraction drum, and if a problem occurs during the extraction process, Apama instantly solves the issue by, for instance, stopping the heating process.
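As a simplified sketch of that safety check (the event names and the threshold here are assumptions for illustration, not values from my project):

```
// Illustrative sketch; event names and the threshold are assumptions.
event ExtractorTemperature {
    float celsius;
}
event StopHeating {
    string reason;
}

monitor ExtractionGuard {
    constant float CRITICAL_TEMP := 110.0; // assumed value, for illustration only

    action onload() {
        // Fires for every reading that exceeds the critical threshold
        on all ExtractorTemperature(celsius > CRITICAL_TEMP) as t {
            log "Critical extractor temperature: " + t.celsius.toString() at ERROR;
            // Tell the simulation to stop the heating process
            route StopHeating("extractor over temperature");
        }
    }
}
```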

In the Temperatures graph, we can see that at a certain point the extraction drum reached a critical temperature and then plummeted instantly, which is an example of how Apama handled that urgent situation. Afterward, the temperature starts climbing the way it used to, which means a new simulation has started as new coffee beans went into the extraction drum.

Freezing

Thirdly, once the extraction is over, the condensed extract goes immediately into a freezer at a low temperature of -50 degrees Celsius, which prevents the coffee's aroma from escaping. The temperature in the freezer has to be kept within a certain range, and Apama takes care of that by continuously checking it and taking the necessary actions if something goes wrong. The Freezer Temperature meter displays the current temperature of the freezer, making it easy for the people in the factory to see the current temperature and any alerts related to it.
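A sketch of how that range check might look (the bounds and names here are assumptions, not the project's exact values):

```
// Illustrative sketch; the bounds and names are assumptions.
event FreezerTemperature {
    float celsius;
}
event FreezerAlert {
    string message;
}

monitor FreezerGuard {
    constant float LOW_BOUND := -55.0;  // assumed lower bound
    constant float HIGH_BOUND := -45.0; // assumed upper bound

    action onload() {
        on all FreezerTemperature() as t {
            if t.celsius < LOW_BOUND or t.celsius > HIGH_BOUND then {
                // Raise an alert so corrective action can be taken immediately
                route FreezerAlert("Freezer out of range: " + t.celsius.toString());
            }
        }
    }
}
```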

Freeze-drying

Finally, after the coffee has left the freezer and been broken up into granules, some granules still contain water, which we need to get rid of. Stacked on trays, the granules go straight into a large low-pressure tube for a couple of hours, and using sublimation (the process in which water transitions directly from the solid to the gas state) the remaining water is taken out of the coffee powder. In the graph, we can see the moisture, vacuum pressure, and tube temperature, all of which are tracked and checked for issues by Apama. The temperature in the tube has to be maintained at about 60 degrees Celsius, the vacuum pressure at about 150 mTorr, and the moisture drops over time to about 2%, at which point the coffee is ready to come out.
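As a sketch, the readiness check on the moisture level could be expressed like this (the event and its fields are illustrative assumptions):

```
// Illustrative sketch; the event and its fields are assumptions.
event FreezeDryerReading {
    float moisturePercent; // residual moisture in the granules
    float vacuumMTorr;     // vacuum pressure, maintained at about 150 mTorr
    float tubeCelsius;     // tube temperature, maintained at about 60 degrees
}

monitor FreezeDryerMonitor {
    action onload() {
        // The coffee is ready once residual moisture drops to about 2%
        on all FreezeDryerReading(moisturePercent <= 2.0) as r {
            log "Batch ready: moisture at " + r.moisturePercent.toString() + "%" at INFO;
        }
    }
}
```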

In the Temperatures graph, an increase in temperature was handled by Apama when a problem occurred, which is another good example of Apama managing a temperature spike. The spike is deliberately exaggerated to make the example more visible: with smaller numbers it is quite hard to see the difference in the Temperatures graph.

Coffee Factory dashboard

This dashboard was built in Grafana, displaying the data produced by the EPL application.

Architecture of the project

The Producer is the part of the project responsible for simulating the data, which is sent to the Receiver, who checks the live data for inaccuracies and issues. If everything is fine, the data is picked up by the monitoring server, Prometheus, and Grafana then polls Prometheus for new data and displays it.
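For illustration, here is a minimal sketch of the Producer side in EPL, with hypothetical event and channel names (not the project's exact ones):

```
// Illustrative sketch of the Producer side; names are assumptions.
event SensorReading {
    string stage; // e.g. "roasting", "extraction", "freezing"
    float value;
}

monitor Producer {
    action onload() {
        // Emit a simulated reading every second
        on all wait(1.0) {
            send SensorReading("extraction", 92.5) to "factory.readings";
        }
    }
}
```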

When the Receiver receives the data (image below: black arrows) and an issue is encountered, it returns an alert with information about what went wrong back to the Producer (image below: red arrows). Within the Producer are small pieces of code which, if triggered by an alert, take corrective action instantaneously.
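A minimal sketch of that round trip could look like the following (the channel names, events, and threshold are mine for illustration, not the project's exact ones):

```
// Illustrative sketch of the alert round trip; names are assumptions.
event SensorReading {
    string stage;
    float value;
}
event Alert {
    string stage;
    string message;
}

monitor Receiver {
    action onload() {
        monitor.subscribe("factory.readings");
        // Data flowing in (black arrows); alerts flowing back out (red arrows)
        on all SensorReading(stage="extraction", value > 110.0) as r {
            send Alert("extraction", "temperature too high") to "factory.alerts";
        }
    }
}

monitor ProducerAlertHandler {
    action onload() {
        monitor.subscribe("factory.alerts");
        on all Alert() as a {
            // The corrective action happens immediately, e.g. stop the heater
            log "Handling alert for " + a.stage + ": " + a.message at WARN;
        }
    }
}
```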

Learning EPL

When it comes to learning a new programming language, it can be easy and straightforward to pick up, or it can be hard and time-consuming. In this section, I will walk you through how I learned EPL while building my Coffee Factory project.

Since I feel comfortable with Java, I used my knowledge of it to learn EPL faster. How did I do that? Firstly, I imagined what the coffee project would look like in terms of code and architecture if I were using Java, and wrote down the main components. Secondly, I kept asking myself: if I can do X in Java by creating Y, how can I do X in EPL to get what I want? That question helped me a lot in figuring out which EPL library/components I should use. When I didn't know the answer, I kept checking the Apama Community documentation (which can be found here: ApamaDoc) and looked carefully for what I needed. In addition, there are some EPL demo projects available, for example ATM Fraud and Simple Backtesting, which already work and were a good example of what my code/project should look like. At first it took a bit of time to get used to EPL, but once I did, my progress sped up dramatically. In less than a week I started feeling much more comfortable with EPL and its syntax.

I encountered an interesting behavior while learning to work with listeners in EPL. To process specific events you have to create a listener, for example `on all Alert(...) { ... }`: whenever an Alert event comes into the system, the code within the braces is invoked. My mistake (reconstructed in the snippet below) was calling the action `listenForAlerts` from within a listener that executes every 3 seconds, so the code created a new listener for Alert events every 3 seconds. All these listeners then got triggered for every Alert received, resulting in duplicate processing and some strange behavior!

All of that can be fixed by removing that call and putting it, for instance, in the `onload()` action so it executes only once.
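Here is a reconstruction of the essence of that mistake (the names are illustrative and this is not my exact original code):

```
// Reconstruction of the essence of the mistake; not my exact original code.
event Alert {
    string message;
}

monitor AlertProcessor {
    action onload() {
        // BUG: this listener fires every 3 seconds...
        on all wait(3.0) {
            // ...and each time it fires, listenForAlerts() registers ANOTHER
            // "on all Alert()" listener, so every incoming Alert is handled
            // once per accumulated listener: duplicate processing.
            listenForAlerts();
        }
        // FIX: call listenForAlerts() here instead, directly in onload(),
        // so that exactly one Alert listener is ever created.
    }

    action listenForAlerts() {
        on all Alert() as a {
            log "Processing alert: " + a.message at INFO;
        }
    }
}
```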


Furthermore, since multiple types of data have to be analyzed live, I had to use asynchronous programming (executing several things at the same time without having to finish the first before moving on to the next) to make sure the program runs fast and without delays, which can get quite complex. However, managing concurrency in EPL turned out to be much easier than in Java: EPL takes care of thread safety for you, since code within a context runs single-threaded and contexts communicate by passing events, which saves whoever is developing with it a lot of trouble and time.
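As a sketch of what that looks like (the contexts, channels, and event here are my own illustrative names), each stage can be given its own context so its readings are processed in parallel:

```
// Illustrative sketch; contexts, channels, and the event are assumptions.
event SensorReading {
    string stage;
    float value;
}

monitor ParallelPipeline {
    action onload() {
        // Each context is an isolated unit of concurrency: the code inside a
        // context runs single-threaded, so no explicit locking is needed.
        spawn trackStage("roasting") to context("roasting");
        spawn trackStage("extraction") to context("extraction");
    }

    action trackStage(string stage) {
        // Subscribing inside the spawned action subscribes this context
        monitor.subscribe("factory." + stage);
        on all SensorReading() as r {
            log stage + " reading: " + r.value.toString() at INFO;
        }
    }
}
```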

Conclusion

My Coffee Factory consists of four stages to make instant coffee. Roasting is first and is responsible for roasting the green beans at a certain temperature until they reach the desired color. Extraction is the second stage and is responsible for extracting the caffeine, volatile oils, and organic acids from the already powdered beans. This is followed by the Freezer stage, which freezes the coffee extract immediately after extraction. Finally, the last stage is Freeze-drying: once the frozen coffee extract leaves the freezer and is broken down into granules, sublimation removes the remaining water and the coffee ends up in the form we all drink.

In terms of architecture, the Producer sends data to the Receiver, and the Receiver checks for issues which, if found, are sent back to the Producer along with actions to be taken. If the Receiver is happy with the live data, it passes it on to Prometheus, from which Grafana selects what it needs to display.

Lastly, working with EPL turned out to be straightforward. To get the project done I used my knowledge of Java, the Apama documentation, and the demo projects (ATM Fraud, Simple Backtesting, etc.), which were good examples of how to structure my code.
