Operational excellence - Adabas Review

Optimize database performance with meaningful data analysis

How do you know if your applications access an Adabas database efficiently? Are you achieving optimal performance from the database? Even if you gather data and statistics, how easy is it to figure out what to do next? A powerful, easy-to-use presentation layer lets you view operational data and combine it with other information to make meaningful decisions and improve the operational effectiveness of your database.

Issue 3, 2017

What is involved in achieving operational excellence for a database?

First, what does operational excellence for a database mean? The most obvious answer is the database must provide enough capacity to fulfill the Service Level Agreement (SLA) your business needs to be successful. So what is required to achieve this goal? To answer this question, let’s first review what’s involved. At the highest level, a database runs on an operating system that uses underlying hardware. In addition, business applications or services process the data contained in the database. The application can run local to the database or remotely on an application server.

This simple scenario demonstrates that many components need to work very well together to achieve operational excellence.

Improving just one element is typically not sufficient for achieving operational excellence.

Operational data collection

Let’s review what data should be collected. The most important information comes from the application that runs your business. If you change or extend an existing application, its behavior changes and may therefore negatively impact the operational performance of the database. Information about the application, the statement used to access the database, and the user issuing it helps identify unexpected throughput. In the database kernel, it is important to know what type of request was executed (e.g., a direct access or a search request). Another crucial piece of information is the time it takes to process the request within the kernel.
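To make this application-centric data concrete, here is a minimal Python sketch of capturing such a record around a database call. Everything in it (the `RequestRecord` fields, the `timed_request` helper, the stand-in `execute` callable) is illustrative and not part of the Adabas API.

```python
import time
from dataclasses import dataclass

@dataclass
class RequestRecord:
    """One application-centric operational data point (illustrative fields)."""
    application: str   # which application issued the call
    user: str          # who ran it
    statement: str     # the access statement sent to the database
    request_type: str  # e.g., "direct-access" or "search"
    duration_ms: float # time spent processing the request

def timed_request(application, user, statement, request_type, execute):
    """Run a database request and capture its timing alongside its context."""
    start = time.perf_counter()
    result = execute()  # stand-in for the real database call
    duration_ms = (time.perf_counter() - start) * 1000
    record = RequestRecord(application, user, statement, request_type, duration_ms)
    return result, record

# Example with a dummy callable in place of a real database request:
result, rec = timed_request("billing", "jdoe", "FIND INVOICE", "search",
                            lambda: ["row1", "row2"])
```

A record like this, tagged with application, statement, and user, is what makes a slow or unexpectedly frequent request traceable back to its source.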

In addition to this application-centric operational data, it is also important to get details about the resource usage of the database kernel itself. Knowing whether the I/O system performs as intended in the context of the database is very important. Figures that help identify bottlenecks include the fill rate of internal queues, locking situations, and the type of load (e.g., update-intensive applications running at the same time and competing for the same resources).
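As a tiny illustration of one such figure, a queue fill rate is simply the current queue length over its capacity; values near 1.0 suggest the queue is becoming a bottleneck. The function name and interpretation below are assumptions for illustration, not Adabas internals.

```python
def queue_fill_rate(current_length: int, capacity: int) -> float:
    """Fill rate of an internal queue as a bottleneck indicator.

    Returns a value between 0.0 (empty) and 1.0 (full); sustained values
    close to 1.0 indicate requests are piling up faster than they drain.
    """
    return current_length / capacity

# A queue holding 45 of 50 slots is 90% full:
rate = queue_fill_rate(45, 50)
```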

Fig 1: Operational Data Flow

The Adabas database server collects the data and sends it to another server, the Adabas Analytics Server. Before you start collecting data, you must first determine whether your environment is powerful enough to collect all the raw data or whether you need to restrict the amount collected. While you may think “the more data I have the better, because today I do not know the question I have to answer tomorrow,” collecting everything is not always prudent. Fortunately, if it turns out your system can’t handle collecting all the data, you can change the parameter that controls data collection on the fly.
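The on-the-fly adjustment can be pictured with a small sketch: a collector whose sampling parameter can be changed while it runs, without restarting anything. The class, parameter name, and sampling mechanism below are hypothetical; Adabas controls this through its own configuration parameter.

```python
import random

class Collector:
    """Sketch of a data collector whose collection rate is adjustable at runtime."""

    def __init__(self, sample_rate: float = 1.0):
        self.sample_rate = sample_rate  # 1.0 = collect everything
        self.records = []

    def set_sample_rate(self, rate: float) -> None:
        # Change collection behavior on the fly, no restart required.
        self.sample_rate = rate

    def collect(self, record: dict) -> None:
        # random.random() returns a value in [0.0, 1.0), so a rate of 1.0
        # keeps every record and a rate of 0.0 keeps none.
        if random.random() < self.sample_rate:
            self.records.append(record)

c = Collector(sample_rate=1.0)
for i in range(100):
    c.collect({"id": i})          # everything is collected at rate 1.0
c.set_sample_rate(0.0)            # system overloaded: stop collecting
c.collect({"id": 100})            # this record is dropped
```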

While collecting and processing operational data is valuable, it should not come at the expense of the overall throughput of the database and IT infrastructure. Data collection needs to occur where the data exists: in the database. Processing needs to happen outside the database, which creates another problem—increased load on the network if the processing runs on a different server than the database. On the other hand, the throughput of the operational data should be high enough to detect upcoming bottlenecks as soon as possible, not hours later. An effective way to resolve this tension is to collect relevant data in the database kernel, send it to another process on the same host that does the required data processing, and then send the results to a component that stores the data for use by a presentation layer.
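The collect-locally, process-separately pattern described above can be sketched with a queue between a cheap emitter and a worker thread (standing in here for a separate process on the same host). All names are illustrative; this is not how Adabas Review is implemented, only the shape of the pipeline.

```python
import queue
import threading

raw = queue.Queue()  # hands records from the "kernel" side to the processor
store = []           # stands in for the data store read by the presentation layer

def kernel_emit(record: dict) -> None:
    """Database side: just enqueue the raw record; no processing here."""
    raw.put(record)

def processor() -> None:
    """Separate worker: drains the queue, processes, and forwards to the store."""
    while True:
        record = raw.get()
        if record is None:       # sentinel: shut down
            break
        record["processed"] = True  # placeholder for real aggregation/filtering
        store.append(record)

worker = threading.Thread(target=processor)
worker.start()
for i in range(3):
    kernel_emit({"request": i})
raw.put(None)   # signal shutdown
worker.join()   # all three records are now in the store
```

Keeping the emit step to a single enqueue is what keeps the cost inside the database negligible; all the expensive work happens on the other side of the queue.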

Analyzing data can be time-consuming; therefore, it makes sense to run these types of processes on a different computer. This is exactly how Adabas Review LUW works.

Visualization of operational data

Figure 2 illustrates how operational data can be perfectly visualized with Adabas Review LUW.

Fig 2: Operational Dashboard

While it is important to visualize information well, it is even more crucial to be able to create such a visual easily and to know what is needed to use the data. As soon as the data is available in the data store, its metadata is also available. When creating a new visualization, you simply select the item to be shown and define how it should be presented, and it becomes visible. Using the refresh option, new data is shown automatically. You can add new visualizations, change existing ones, or delete visualizations that are no longer needed.


Adabas Review LUW is a next-generation diagnosis tool. It collects operational data based on your definitions without significantly affecting database performance (nothing is free). A dedicated server filters, processes, and sends the data—based on the current configuration, which can be changed at any time—to a data store. An easy-to-use presentation layer lets you create visualizations based on your needs, on the fly.

To learn more, visit the Tech Community to read the documentation or talk to your Software AG representative.