Destination/Subscription Processing Sequence

Destination = Dest1
Subscription = Sub1
Destination = Dest2
Subscription = Sub2

Dest1 is used by Sub1 and Dest2 is used by Sub2.

Add a record to MYDATA using an N1 command followed by an ET command.

Which destination will the Event Replicator send data to first? Is it random? Is it in alphabetical order of the destination name? Something else?

Is there a way to control the sequence (other than, for example, changing the destination names if the destination names are used to determine the sequence)?

Why do you care?
Subscriptions and destinations are supposed to be independent of each other and as far as I know the replicator treats them as such.
Why would it matter if one subscription or destination were served, say, 100 microseconds before the other?
Anyway, my guess is that the order is determined by the sequence in which subscriptions are activated and, within a subscription, the sequence in which destinations have been activated.

Rainer Herrmann

I apologize for not replying quicker. I have been out of the office for a few days.

I have an ADABAS file, CUSTOMER, that contains a PE of addresses. The replication target contains 2 tables, CUSTOMER and CUSTOMER_ADDRESS. There is a referential integrity constraint between the 2 tables.

We have an EntireX server program that registers as the handler for both the CUSTOMER and the CUSTOMER_ADDRESS destinations. The destination names are DFCUST2 and DFCUADDR. Whenever we add a record to ADABAS the CUSTOMER_ADDRESS records are sent before the CUSTOMER records are sent. When we try to INSERT a CUSTOMER_ADDRESS record we get a referential integrity error because the CUSTOMER record has not been added yet.
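The failure mode described here can be reproduced in miniature with any database that enforces foreign keys. Below is a minimal sketch using SQLite; the table and column names are simplified stand-ins, since the real CUSTOMER / CUSTOMER_ADDRESS layouts are not shown in this thread:

```python
import sqlite3

# Simplified, hypothetical versions of the two target tables.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default
conn.execute("CREATE TABLE CUSTOMER (CUST_ID INTEGER PRIMARY KEY, NAME TEXT)")
conn.execute("""CREATE TABLE CUSTOMER_ADDRESS (
    CUST_ID INTEGER REFERENCES CUSTOMER(CUST_ID),
    ADDRESS TEXT)""")

# Inserting the child row first fails, just as described above.
err = None
try:
    conn.execute("INSERT INTO CUSTOMER_ADDRESS VALUES (1, '1 Main St')")
except sqlite3.IntegrityError as e:
    err = e
    print("child first:", e)

# Parent row first, then the child row, succeeds.
conn.execute("INSERT INTO CUSTOMER VALUES (1, 'Smith')")
conn.execute("INSERT INTO CUSTOMER_ADDRESS VALUES (1, '1 Main St')")
print("parent first: ok")
```

So whichever component controls the delivery order (replicator, EntireX handler, or the target program itself) has to guarantee that the CUSTOMER row is applied before its CUSTOMER_ADDRESS rows.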

I was wondering if the Destination names had anything to do with the order in which the records were transmitted. Since DFCUADDR collates before DFCUST2 I was suspicious that the names were controlling the sequence.

Kind regards,

Hi Wayne,

Why do you have two destinations (and two subscriptions)? If this replicates into the same SQL database (why else would you get referential integrity constraint violations?), why not just have one subscription that replicates both CUSTOMER and its PE (CUSTOMER_ADDRESS)? This would still replicate into two SQL tables, and the changes would then be handled within the same transaction.

There are a couple of reasons why this was done (I don’t know if they were good reasons).

  1. Simpler logic is required for the programs that update the SQL target. If all of the fields were in one subscription, then on an ADABAS update the program would have to determine whether a CUSTOMER column or a CUSTOMER_ADDRESS column was changed. This would require processing the Before Image as well as the After Image.

  2. The developers wanted to modularize the code so that it could be reused as part of a web service. They didn’t want to combine the CUSTOMER logic with the CUSTOMER_ADDRESS logic.
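The extra work described in point 1 could be sketched like this. It is a hypothetical simplification (real replicator payloads are compressed Adabas records, not Python dictionaries, and the two-character field names are illustrative only):

```python
# Hypothetical mapping from Adabas field names to target tables.
CUSTOMER_FIELDS = {"AA", "AB"}          # e.g. customer number, name
CUSTOMER_ADDRESS_FIELDS = {"AC", "AD"}  # e.g. street, city (PE fields)

def tables_affected(before_image: dict, after_image: dict) -> set:
    """Compare the Before Image and After Image field by field to
    decide which SQL target tables an update actually touches."""
    changed = {f for f in after_image
               if before_image.get(f) != after_image.get(f)}
    tables = set()
    if changed & CUSTOMER_FIELDS:
        tables.add("CUSTOMER")
    if changed & CUSTOMER_ADDRESS_FIELDS:
        tables.add("CUSTOMER_ADDRESS")
    return tables

# An update that only changes an address field should only touch
# the CUSTOMER_ADDRESS table.
before = {"AA": "1", "AB": "Smith", "AC": "1 Main St", "AD": "Boston"}
after  = {"AA": "1", "AB": "Smith", "AC": "2 Oak Ave", "AD": "Boston"}
print(tables_affected(before, after))
```

With separate subscriptions per table, none of this comparison logic is needed in the target programs, which is the simplification point 1 is making.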

I will discuss this with the developers to see if we can architect this in a slightly different way.


Regardless, I would still like to know how the Reptor internals work regarding my original question.

You are not using ART (Target Adapter)?

Sorry, but I cannot answer your “original question” as I do not know the internal logic.

We are replicating within a z/OS LPAR.

I did not see any reason to go from z/OS over the wire to ART and then over the wire again to z/OS. If we implement Java on z/OS we may try ART there even though it is not officially supported (as far as I know).

Also, when we started (1) ART was rough around the edges, (2) you could only run a single instance of ART per machine, and (3) I was not comfortable with ART’s ability to handle the potential volume of data.

I studied the ARF Application Programmer’s Reference manual and I traced all of the calls that were sent between ARF and EntireX. I then decided to write my own version of ART for z/OS. My version does not convert the data to XML.


Hi Wayne,

Sorry, it was not clear to me that you are replicating within z/OS.
ART runs under Windows and UNIX (see details in the manual).

We did decide to combine all of the fields from the one ADABAS file into a single destination (and GFB), with a proxy program acting as an aggregate service that calls the original two services.
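That aggregate-proxy arrangement could look roughly like this. It is a hypothetical sketch only; the real services are EntireX server programs, not Python functions, and the record layout shown is invented for illustration:

```python
applied = []  # records the order in which rows are applied to the target

def handle_customer(record: dict) -> None:
    # The original CUSTOMER service logic would run here.
    applied.append(("CUSTOMER", record["name"]))

def handle_customer_address(record: dict) -> None:
    # The original CUSTOMER_ADDRESS service logic would run here.
    applied.append(("CUSTOMER_ADDRESS", record["address"]))

def handle_combined(record: dict) -> None:
    """Proxy for the single combined destination: apply the parent
    CUSTOMER row first, then each PE occurrence, so the referential
    integrity constraint on the target is always satisfied."""
    handle_customer(record)
    for addr in record.get("addresses", []):
        handle_customer_address(addr)

handle_combined({"name": "Smith",
                 "addresses": [{"address": "1 Main St"},
                               {"address": "2 Oak Ave"}]})
```

Because a single destination delivers the whole record, the proxy itself controls the insert order, and the original two services can still be reused unchanged elsewhere.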