Processing for Unique Descriptors


We have a situation where multiple, identical RPC requests can be submitted by users. Due to the nature of the front-end (client) application, it appears the developers are unable to stop this from happening. This could possibly be solved by a unique key on an ADABAS file, so that only one RPC is actually processed and the others fail with a non-unique descriptor value response code.

The following scenario is of concern to us, as we need to be absolutely certain that we cannot end up with duplicate values on the unique key, or the solution is useless.

  1. RPC calls are held up by a TTSYN (120 seconds) for an online backup, so two or more identical RPCs can be submitted by a frustrated user (the WAIT timeout is 30 seconds at the ESB, which issues the RPC call on behalf of the client app).
  2. RPC A inserts a unique numeric descriptor with value 1234, i.e. here we have an >after< value for the descriptor.
  3. The preceding transaction remains ‘open’; RPC A has not ET’d.
  4. RPC B attempts to add a record to the database with descriptor value 1234…
  5. RPC A then ETs.
  6. RPC B ETs.
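To make the question concrete, here is a minimal sketch of the serialization behaviour the list above asks about. This is emphatically *not* Adabas code, just an illustrative model under one assumption (which Ursula confirms later in the thread): a value stored by a still-open transaction is already visible, and a second store of the same value is rejected with rsp-198 until the first transaction ends. All class and method names are ours.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model only: committed and "dirty" (uncommitted) descriptor
// values both live in the index; values added by a still-open transaction
// are tracked so a BT can remove them again.
class UniqueDescriptorModel {
    private final Set<Long> index = new HashSet<>();          // committed + dirty values
    private final Map<Long, String> owner = new HashMap<>();  // value -> open transaction

    /** Attempt N1 (store). Returns 0 on success, 198 on a duplicate value. */
    public int store(String txn, long value) {
        if (index.contains(value)) {
            return 198;                 // rsp-198: value already visible in the index
        }
        index.add(value);               // visible to all users immediately
        owner.put(value, txn);
        return 0;
    }

    /** ET: commit; the transaction's values stay in the index. */
    public void et(String txn) {
        owner.values().removeIf(t -> t.equals(txn));
    }

    /** BT: back out; the transaction's uncommitted values leave the index. */
    public void bt(String txn) {
        owner.entrySet().removeIf(e -> {
            if (e.getValue().equals(txn)) {
                index.remove(e.getKey());
                return true;
            }
            return false;
        });
    }
}
```

In this model, step 4 of the scenario (RPC B storing 1234 while RPC A's transaction is open) returns 198 rather than creating a duplicate, which is the behaviour we are hoping Adabas guarantees.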

To the best of my knowledge (I am happy to be corrected), the LDEUQP does not help here, as it holds unique descriptor >before< images from deletes and updates in case the delete or update is backed out. Hence, while the delete/update is in flight, it is not possible to add a value to the unique descriptor which, in essence, has not yet been committed as deleted/changed.

Does ADABAS manage the scenario described in the numbered list above? Is this managed by searching DB+LBP for descriptor values which exist there? I am afraid I have never used UQ DEs before. However, this would seem to be the only way to stop our duplicate RPC transaction problem, given the developers’ professed inability to prevent this situation arising.

An authoritative answer would be appreciated.


Would user-assigned ISNs not work?

This assumes that you can dictate the nature of the unique numeric descriptor, and there is no reason (other than the existence of such a unique value) you actually require a descriptor.

Hi Brent,

you wrote:

I am happy to correct you: the unique descriptor pool is also used for N1 (store) commands. The value is stored in this pool, and until the transaction is finished (ET’d or BT’d), any other user who tries to store the same value will receive rsp-198. If the first user BT’s the transaction, the second user can add the value, but the same then applies to this user as to the first: the value is blocked for other users for the duration of the transaction.

Does that help?


Steve, yes it would, and thank you very much for your reply. The issue, however, is that the key value (in this case the ISN) is generated in the client of the application with no reference to ADABAS and is then forwarded as part of the RPC call. It is possible to make a precursor RPC call to request a “next number” for a new ISN, but this would increase our RPC call rate, and we want to keep our overheads down if possible.

Thank you again for your response - much appreciated.


Ursula thank you.

It didn’t make sense to me that only before images were stored there for deletes and updates, as that would leave stores as a gaping hole in the functionality. We are actually happy to take any performance hit of a unique key in ADABAS (this will be our first and likely last), as it will save a long call sequence from open systems.

May I draw your attention to the following hyperlink, where I found out about delete/update processing for UQ DEs:

This link, by its title and content, seems to be the whole story for LDEUQP, when in fact it is only part of the story. Would it be possible to correct it, or at least add a hyperlink which explains the rest of the story?

Thank you for your response.


Hi Brent;

Okay, I understand the problem a bit better now. You are trying to have the server clean up a mess caused by the client, yet it seems the client should “know” about the problem.

In your description:

How does the client “know” to give RPC B a value of 1234 rather than, say, 1235?

I do understand that the client might be external software rather than inhouse developed.

It would seem, however, that there should be a way for the client to recognize it has a duplicate and not send it, or, to lock a terminal so that the duplicate transaction cannot even be entered.

You mentioned “controlling” the RPC call rate as a rationale to not consider user assigned ISNs.

However, if frustrated users are typically sending two or three instances of the same transaction, you are already experiencing RPC rates way beyond what is really necessary.

I would really make an effort to discover why the front end could not solve the problem. If it is a bit of “canned” software, perhaps homegrown software could serve as an interface between the client and the server and do the necessary checking to eliminate redundant transactions being sent to the server.

Hi Brent,

now that you pointed me to the knowledge base article I gave some additional thoughts on this topic.
It is correct, as I said, that whenever user_1 inserts a unique value (via N1, or as an after image of A1/A4), a second user_2 will receive rsp-198 when inserting the same value during user_1’s open transaction; only when user_1 ends the transaction (with ET, BT, CL, or an automatic backout) can another user insert this value.

But this is not because the new value is inserted in the unique descriptor pool, as I said ( :o ) - that is not necessary, because of the dirty read of Adabas. Whenever an update is made, the new value is immediately inserted in the index, and every other user will see it. If the transaction is later backed out, the value is deleted from the index again.

By the way, rsp-198 does have a subcode:

1 - the value is in the index, but you cannot see whether it is an old value or a brand-new one whose transaction is not yet committed.
2 - the value is in the unique descriptor pool (it must come from an A1/A4 or E1 command not yet committed).
3 - the same as 2, but in a cluster environment on a different nucleus.
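For anyone logging these in an RPC server, the subcodes above could be mapped to readable text along these lines. The subcode meanings are taken directly from this post; the class and method names are our own invention, not any real Adabas API.

```java
// Sketch: translate the rsp-198 subcodes listed above into log-friendly text.
class Rsp198 {
    public static String describe(int subcode) {
        switch (subcode) {
            case 1:
                return "value found in the index (committed, or an uncommitted store)";
            case 2:
                return "value held in the unique descriptor pool (uncommitted A1/A4 or E1)";
            case 3:
                return "value held in the unique descriptor pool on another cluster nucleus";
            default:
                return "unknown rsp-198 subcode " + subcode;
        }
    }
}
```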

What we do store in the unique descriptor pool are indeed only the before images, because these are immediately removed from the index; if they were not held in the extra pool, a different user could not find them, and an N1 or A1 could insert the value.

I will ask my support colleagues to update the knowledge base article to be more detailed there.

Sorry for the confusion :frowning:

And I also agree with Steve that the best performance is achieved when the call is not made at all. A unique DE certainly costs more than a non-unique descriptor, although we have no measurements for this.


Hi Steve

The client is using home grown software.

The client app receives the data typed by a user. The client sends a call to our ESB and then starts a “SPIN” process, which activates a spinning icon on the user’s screen for a minute (apparently this process cannot be elongated or customised). The ESB fires an RPC with TIMEOUT=30s. The problem occurs when we hit our end-of-day backups (online, TTSYN=120) and the RPCs are held up. The ESB receives a timeout from the EntireX stub… The SPIN process ends, the user becomes frustrated and activates the process again (click). Meanwhile the original RPC is still waiting, and a new one is fired by the ESB.
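One place the duplicate could in principle be suppressed is the ESB itself, before the second RPC is ever fired. This is a hypothetical sketch only (the ESB is not ours to change, and all names here are invented): register the request key before firing the RPC, and refuse to fire a second identical RPC while the original is still in flight.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical in-flight guard at the ESB: a re-click arriving while the
// original call is still outstanding is suppressed rather than producing
// a second identical RPC toward the mainframe.
class InFlightGuard {
    private final ConcurrentMap<String, Long> inFlight = new ConcurrentHashMap<>();

    /** True if the caller may fire the RPC; false if an identical one is in flight. */
    public boolean tryBegin(String requestKey) {
        return inFlight.putIfAbsent(requestKey, System.currentTimeMillis()) == null;
    }

    /** Call when the RPC completes - success, failure, or late reply after timeout. */
    public void end(String requestKey) {
        inFlight.remove(requestKey);
    }
}
```

The catch, of course, is that the ESB would have to hold the entry past its own 30-second timeout, since the original RPC is still queued behind the TTSYN on the mainframe side.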

I have met with the developers, and they have assured me that the client-end suggestion I pseudo-coded for them cannot be implemented, and that they must send the call to the mainframe and allow the mainframe to reject it - definitely not preferable.

I am quite aware that UQ DEs are bad news performance-wise; however, in the opinion of the developers there is no client-based solution which can be implemented (I still believe that my code would have solved the problem).

I do not wish to attempt to control the RPC rate to avoid user-assigned ISNs, but I cannot trust the developers to provide a unique ISN. Certainly a precursor call to request an ISN will not solve the problem, as the backend will return separate ISNs if the precursor call does not time out. The only other way would be to require the developers to come up with a random number between 1 and 4 billion, say, which would always have to be unique… Their solution is to provide a 60-byte alphanumeric string, as they say they cannot guarantee uniqueness otherwise. Hence they are unable to generate a unique ISN to be used to store the data.

Yes, the rates are likely up during the TTSYN period due to user retries.

As mentioned above I have made the effort to understand and solve the problem in the client but the developers have not accepted my solution nor have they offered the possibility of a client based solution.

I am also wondering why the TTSYN is set to 120. I have only been here for three years and have no historical perspective. I am thinking that if the ESB were to set TIMEOUT to 60s and I set TTSYN to 55, this might also be a solution. Admittedly this means the mountain is coming to Mohammed, but nonetheless it is a solution. However, we also experience this problem occasionally during the day under extremely high load, so it is only a partial solution.

Just for extra info, I have already discussed with the ESB team the availability of a further attempt at a receive against the timed-out RPC, and this is being considered…

Cheers Brent

Hi Ursula

Thanks for the correction. I appreciate the information.

I also agree with Steve, and my wish is to solve the issue in the client; however the developers, while not refusing outright, have stated that it cannot be done. I agree that if the call is not made, the CPU is not utilised.

Cheers and thanks


It would still be interesting to see an answer to Steve’s question.

This may trigger additional suggestions.


Apologies for not having answered all questions…

The client data is entered by the user, and the unique key value is constructed from the entered data; upon clicking the Send button, the RPC data, including the unique key value, is sent. The second RPC occurs when the user simply clicks the Send button again once the “Spin” process (the whirling icon initiated by the original click on Send) has terminated; the key is reconstructed from the data already present in the client dialogue, and the RPC data is sent again.

Cheers and thanks for your interest.



Please allow me to make the context of an earlier message crystal clear:

When I responded to Steve’s original post, I said:

The issue however is that the key value (in this case ISN)

This was in the context of his response. The unique value we are receiving from the client is actually an A60 value.

I want to make sure that this does not lead anyone down the wrong path.



Hi Brent;

Some questions:

How many “users” are there? Is there such a thing as a “user-id” which uniquely identifies the user who initiated a transaction? What programming language and platform is the client?

Reasons for questions:

IF (okay, big if) there is a unique user-id that identifies every transaction, the problem would appear to be quite simple. There would need to be a table that holds the last A60 string for every user-id. I am assuming that the problem of successive enters only applies to the same info - you would never get set1, then set2, then another set1 from the same user.

The client would then use the user-id to locate the last A60 string for the user. If it differs, ship the RPC and update the table with the new A60 string. If it is the same, ignore the repeat and do not touch the table.

I do not, of course, know the application. There may not be a unique user-id, or, there may be tens of thousands of unique IDs (even that should not be a problem, even if the table were too large to make it an in-core array, it could be a small external file).

If there is no unique user-id, there is at least one other (albeit less efficient) solution. Based on the possible number of transactions to be processed in an “end of backups” interval, you could build a table with two entries for every transaction: a time stamp and an A60 string.

A prospective transaction’s A60 string would be used as a search key against the table to see if it exists. Table handling would be similar to the previous idea (above).
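The first idea above could be sketched in a few lines. This is only an illustration under Steve’s stated assumptions (a unique user-id exists, and a user never re-sends an older string after a newer one); the names are ours, not from any existing codebase.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-user "last A60 string" table: a repeat of the user's
// most recent transaction is dropped before the RPC is fired.
class LastStringTable {
    private final Map<String, String> lastA60 = new HashMap<>();

    /** True if the transaction is new and should be shipped; false if a repeat. */
    public boolean shouldShip(String userId, String a60) {
        String previous = lastA60.get(userId);
        if (a60.equals(previous)) {
            return false;             // identical to the user's last transaction: drop it
        }
        lastA60.put(userId, a60);     // new string: remember it and ship the RPC
        return true;
    }
}
```

With a couple of hundred active users, an in-memory map like this is trivially small; the second (timestamp-keyed) variant would simply use the A60 string itself as the map key and expire entries older than the backup window.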

If you can pseudo code an idea, it can be implemented. Sorry, but I do not believe the claim from the client team that solutions are impossible.


I have tried, without success, to edit my previous post. Hence, I have added this post.

(added comment/question after initial post: Is there a “user-id” that is part of the A60 string?)

Hi Steve


Concurrent users (but not necessarily active) would be in the region of hundreds, with possibly a couple of hundred active at any time.

Userid - yes

Java Server, browser client

Our mainframe programmers have implemented code which does:
Check key value
Perform insert

Yet we still have two inserts occurring, so the gap between commands is quite small. I can only guess that the tran1 and tran2 check commands run close together, and the inserts then happen at a subsequent point.
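That guess describes a classic check-then-act race: the check and the insert are two separate steps, so two transactions can both pass the check before either inserts. The generic fix is a single atomic insert that itself reports the duplicate - which is exactly what a unique descriptor provides, since the N1 fails with rsp-198. A minimal sketch of the difference (illustrative names only, using an in-memory map rather than a database):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the race: "check key value, then insert" leaves a window in
// which a second transaction can slip through; a single atomic operation
// does not.
class CheckThenInsert {
    private final ConcurrentMap<String, String> table = new ConcurrentHashMap<>();

    /** Racy: both callers can observe "absent" before either one inserts. */
    public boolean checkThenInsert(String key, String data) {
        if (table.containsKey(key)) {
            return false;             // the race window is between this check...
        }
        table.put(key, data);         // ...and this insert
        return true;
    }

    /** Atomic: exactly one caller wins; the loser learns it immediately. */
    public boolean insertOrReject(String key, String data) {
        return table.putIfAbsent(key, data) == null;
    }
}
```

In Adabas terms: drop the separate check, attempt the store directly, and treat rsp-198 as the "duplicate" signal.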

To your last statement I agree and I am glad you picked up on that.

I will check on your question about the uid being part of the unique string; however, the uid is not necessary to identify the transaction - it can be identified from some of the transaction data. The developers have also raised the possibility of frustrated users moving to another instance of the client and entering the transaction there, but this should make no difference to the overall issue: it is still a matter of how to stop duplicates when the client does not.

Cheers, and thanks for your expenditure of brain calories.


Correction: Java client with a Java server backend holding most data and some code, as well as an M/F NATADA backend.