Understanding Event Replicator

I am working for a company where we need to make our data available on a real-time basis to downstream systems, which will be used to generate reports for clients.

I wanted to understand how the Event Replicator works and what the prerequisites are for setting it up. Does the Event Replicator use PLOGs to provide real-time updates to other databases? If not, how does it capture the real-time updates? And what time frame might be required to set up the Event Replicator for a single Adabas file?

Is there any documentation available for the Event Replicator that explains how it functions, so that one can gauge its feasibility and applicability based on one's requirements?

Hi Ashish,

It is the Adabas nucleus that pushes updated data out to the target; that is why it can be real-time. You define what you want to be replicated. You can also replay from the PLOG.

You can find the documentation here: http://techcommunity.softwareag.com/ecosystem/documentation/adabas/ark311/arf/overview.htm

Mogens is of course right.

The Adabas nucleus knows about replication and which files are to be replicated. At least one additional address space (the Reptor) is started to activate replication.
When one or more updates are committed (ET processing), the nucleus sends the modified data not only to PLOG/WORK but also to the Reptor, which sends the data to be replicated on to the various target databases. The Reptor then informs the nucleus that the data has been replicated.
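
If it helps, here is a toy sketch of that flow, purely illustrative and not Software AG code; every class and name below is invented for the sketch:

```python
# Toy model of the ET-time flow described above -- illustration only.

class Reptor:
    def __init__(self, destinations):
        self.destinations = destinations   # target DBs, MQ queues, etc.

    def replicate(self, committed_images):
        for dest in self.destinations:     # fan out to each defined target
            dest.send(committed_images)
        return "ack"                       # confirm replication to the nucleus

class Nucleus:
    def __init__(self, reptor, replicated_files):
        self.reptor = reptor
        self.replicated_files = replicated_files  # files flagged for replication

    def end_transaction(self, updates):
        self.write_plog_and_work(updates)  # normal protection logging first
        images = [u for u in updates if u["file"] in self.replicated_files]
        if images:                         # ship only the replicated files' data
            assert self.reptor.replicate(images) == "ack"

    def write_plog_and_work(self, updates):
        pass                               # stands in for PLOG/WORK I/O
```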

Rainer Herrmann

If you are not replicating in real time to ADABAS downstream, then you must connect to a messaging system. The supported messaging systems are WebSphere MQ and EntireX Communicator.

Thanks, everybody. The documentation seems to be quite helpful.

I wanted to understand whether the Event Replicator has any limitations around the volumes it can handle. Also, if we have a larger number of files to be replicated, how efficiently will the Reptor be able to handle the data and replicate it to the target DB?

There is usually no limitation on the replicator side. The replicator can handle many files going to different destinations, and files from many different databases.
It is always possible to have more than one replicator, subject only to the limitation that a single file can go to only one replicator.

However, there can be bottlenecks: the messaging system, the network, or the target database may not be able to handle the same update load as the Adabas nucleus. Then there may be a delay until the data is available on the target.
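
Stated as a simple mapping (database, file, and replicator names below are invented):

```python
# Many-to-one: each replicated (database, file) pair is assigned to exactly
# one replicator, while one replicator may serve many files and destinations.
file_to_reptor = {
    ("DB177", "FILE10"): "REPTOR1",
    ("DB177", "FILE11"): "REPTOR1",
    ("DB200", "FILE55"): "REPTOR2",   # a second replicator is fine...
}
# ...but the same (database, file) key cannot map to two replicators --
# exactly the "a single file can go to only one replicator" limitation.
```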

It has been my experience that under a heavy load the bottleneck is usually the interface to the target RDBMS, whether that be an application program or the Event Replicator Target Adapter. I am not saying that it is the RDBMS itself. Using EntireX, the data flows from ADABAS through the replication server to EntireX very quickly. You may need to run multiple instances of your RDBMS interface to keep up under heavy load.
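
As a generic illustration of running multiple writer instances (plain Python threading, nothing Event Replicator specific; `apply_to_rdbms` is a placeholder):

```python
import queue
import threading

# Several parallel writers draining one feed of replicated records, so the
# slow per-record RDBMS work overlaps instead of serializing.
work = queue.Queue(maxsize=1000)

def apply_to_rdbms(record):
    pass                                # placeholder for your INSERT/UPDATE logic

def rdbms_writer():
    while True:
        record = work.get()
        if record is None:              # shutdown signal
            break
        apply_to_rdbms(record)

writers = [threading.Thread(target=rdbms_writer, daemon=True) for _ in range(4)]
for w in writers:
    w.start()
```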

As long as there are no communication problems between ADABAS and the replication server, the replication pool (RPL) size should not be a limiting factor even under a heavy load.

I have a decent sized RPL pool (300MB for replication server and 100MB to 200MB per database) but most of the time my high water mark is less than 3%.

My replication server subscription log (SLOG) file is 60,000 3390 cylinders. You never want to fill up the SLOG under any circumstance.

If you are going to use EntireX persistence, I recommend that you have a large persistent store (PSTORE) file as well. My EntireX PSTORE file is 6,000 3390 cylinders.

My experience is also that populating data into the target RDBMS can be the time-consuming part. Note that one update to an Adabas record (with PEs and MUs) can result in many updates in the RDBMS, because SQL does not support repeating fields.
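
A schematic example of that fan-out (field and table names are made up):

```python
# One Adabas record with an MU (multiple-value field) has no direct
# relational equivalent, so the MU occurrences become rows in a child table.
adabas_record = {
    "isn": 1001,
    "name": "SMITH",
    "phone_mu": ["555-0101", "555-0102", "555-0103"],  # MU, 3 occurrences
}

# Parent table: one row per Adabas record.
parent_row = ("EMPLOYEE", {"isn": 1001, "name": "SMITH"})

# Child table: one row per MU occurrence, keyed back to the parent.
child_rows = [
    ("EMPLOYEE_PHONE", {"isn": adabas_record["isn"], "seq": i + 1, "phone": v})
    for i, v in enumerate(adabas_record["phone_mu"])
]

# A single Adabas update here touches 1 + 3 = 4 SQL rows; PE groups
# multiply this further (typically one child table per PE or MU).
```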

Replication should not even suffer from a slow line between the mainframe and the target where the RDBMS resides, because the Target Adapter (which populates data into the RDBMS) uses prefetch in its communication with the mainframe, meaning it asynchronously reads data ahead from the mainframe.
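
The prefetch idea, as a generic bounded-buffer sketch (not the Target Adapter's actual internals; the fetch and apply functions are hypothetical placeholders):

```python
import queue
import threading

PREFETCH = 20                          # think messageprefetchcount
buffer = queue.Queue(maxsize=PREFETCH)

def fetch_next_from_mainframe():
    ...                                # hypothetical blocking network read

def apply_to_rdbms(msg):
    ...                                # placeholder for the SQL work

def reader():
    # Keeps up to PREFETCH messages in flight from the mainframe, so the
    # line's round-trip latency is paid in the background, not per record.
    while True:
        buffer.put(fetch_next_from_mainframe())

def writer():
    while True:
        apply_to_rdbms(buffer.get())   # usually satisfied without waiting

threading.Thread(target=reader, daemon=True).start()
```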

Hi,

We are testing replication from Adabas on the mainframe to SQL Server. We just switched from a VSAM to an Adabas persistent store to speed up the Broker process.

We are blowing up our NISNHQ of 800 because the replicator is writing so many records to the PSTORE. What value for NISNHQ do you have in your shop? Is it required to use a PSTORE in the Broker when running the replicator?

Thank you.

NISNHQ is set to 1000 for the database we use for the EntireX PSTORE. It is not required to use a PSTORE, but you will lose data if something crashes.

Thanks, Wayne.
Do you use an Adabas PSTORE? We switched from VSAM for speed, but my issue now is that we are hitting the PSTORE file so much that it is increasing our cost. We do chargeback.

Yes, we use ADABAS for our PStore. The PStore can get very busy. We have our PStore in an ADABAS instance all by itself.

We run with a NISNHQ of 5000 in both the Reptor and the Adabas nucleus that holds the PSTORE, and we have no difficulties doing an initial state of a file with over one million records.

We’ve performed Initial States with over 5 million rows with our NISNHQ. NISNHQ should not be a limiting factor.

We get only about 15 records/sec when we do an initial state, so 5M records would take more than 90 hours (5,000,000 / 15 ≈ 333,000 seconds, roughly 93 hours). :(

When we switched from the VSAM to the Adabas PSTORE, we noticed further performance degradation.

Can you share with us how you optimized your performance? We use Apache Tomcat for our Target Adapter, which runs on a virtual machine. The performance of that machine looks okay, as I don't see sustained peaks in the performance monitor. Our target database is SQL Server, and I don't see any sustained peaks on the server where the database is running either. It seems that our bottleneck is in the Broker.

Thanks.

Check the performance between your mainframe and the Windows server. Try a “ping -t -l 30000” (ping continuously with a 30,000-byte payload) and see the response times.

Do you use prefetch between ART and Broker?

Thanks, Mogens.

I did a ping and got an average round-trip time of 4 ms. We do use messageprefetchcount=20 in the Target Adapter. Is this the prefetch that you’re talking about?
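
As a rough sanity check (my own back-of-envelope arithmetic, nothing measured beyond the ping): at a 4 ms round trip, a strictly one-message-per-round-trip exchange tops out near 250 messages/second no matter how fast the machines on either end are, which is why read-ahead matters on this link:

```python
rtt = 0.004                           # measured ~4 ms round trip
sync_ceiling = 1 / rtt                # one message per round trip: ~250 msg/s

# With messageprefetchcount=20, up to 20 messages can be in flight, so the
# idealized ceiling rises to prefetch / rtt -- latency is overlapped rather
# than paid per message. Real throughput will be lower, but the ratio shows
# why a synchronous exchange over even a fast line is slow.
prefetch = 20
overlap_ceiling = prefetch / rtt      # ~5000 msg/s, idealized

print(sync_ceiling, overlap_ceiling)  # 250.0 5000.0
```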

Thanks again.

Yes, I meant messageprefetchcount.

Thanks, Mogens.

Is 20 a good number for prefetch? Can I make it higher, say 40?

The particular file we’re replicating has 121 fields, 2 MUs, 2 small PEs.

I suspected the Broker to be the bottleneck because I monitored the CPU and disk performance of the Target Adapter and the SQL Server and did not see prolonged 100% utilization. Also, throughput was reduced by a factor of 6 after we switched from the VSAM PSTORE to the Adabas PSTORE. The reason we switched to Adabas was that the Broker kept hanging when using VSAM.

Thanks.