Adabas 8 PE & MU occurrences of circa 65,000 - how to implement

Hi, I have created an Adabas file containing an MU field of A250, and I need to be able to use the full 65,000 occurrences advertised by Adabas 8.
My DBA tells me this is not possible, because the true extent of the occurrences is governed by the amount of space allocated for the record, which is 5x the block size. Is this a valid restriction? If so, how can I overcome it? As it stands this is simply unacceptable and will result in a major redesign affecting project deadlines.

1 AA MSG-ID A 50 N D
1 AB STATUS A 1 N D
1 AC SRC-SYS A 8 N
1 AD TARG-SYS A 8 N
1 AE MSG-TYPE A 10 N
1 AF COMPILED-BY A 8 N
M 1 AG MSG-DATA A 250 N
G 1 AH TRANSMISSION-REQ-GRP
2 AI REQ-TRANS-DATE N 8.0 N
2 AJ REQ-TRANS-TIME N 7.0 N
2 AK REQ-TRANS-MECH A 8 N
G 1 AL LAST-CHG-DATA
2 AM LAST-CHG-DATE N 8.0 N
2 AN LAST-CHG-TIME N 7.0 N
2 AO LAST-CHG-PGM A 8 N
1 S0 REQ-DATE-TIME-MECH A 23 N S

-------- SOURCE FIELD(S) --------
REQ-TRANS-DATE(1-8)
REQ-TRANS-TIME(1-7)
REQ-TRANS-MECH(1-8)

In my opinion (you have not explained why you need them all in one record), the simplest solution would be to create another Adabas file with the same prime key as the record, together with an occurrence number (1-65,000). It may be a bit more expensive in terms of I/O, having to read the additional records to assemble what you state you want. I have a problem with you wanting all 65,000 occurrences of the MU in the record for processing at the same time; it may in fact be more efficient to get only those elements you want to process. If you are trying to do an EXAMINE to check whether a value already exists, having a separate file may indeed be much more efficient. A rough sketch of such a companion file follows below.
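Just to make that concrete, here is a minimal Natural sketch of feeding such a companion file, one occurrence per record. The DDM name MSG-OVERFLOW, the field names and the key value are all invented for illustration, and it assumes the 250-byte chunks arrive one at a time from whatever builds them:

DEFINE DATA LOCAL
1 OVFLW VIEW OF MSG-OVERFLOW       /* hypothetical DDM for the companion file
  2 MSG-ID   (A50)                 /* same prime key as the parent record
  2 OCC-NO   (N5)                  /* occurrence number, 1-65000
  2 MSG-DATA (A250)                /* one 250-byte message chunk per record
1 #OCC   (N5)
1 #CHUNK (A250)
END-DEFINE
*
* for each 250-byte chunk the application produces:
ADD 1 TO #OCC
OVFLW.MSG-ID   := 'PARENT-KEY-VALUE'   /* prime key of the base record
OVFLW.OCC-NO   := #OCC
OVFLW.MSG-DATA := #CHUNK
STORE OVFLW
*
* ... after the last chunk ...
END TRANSACTION
END

Retrieval would then be a READ or FIND on a superdescriptor built from MSG-ID and OCC-NO, fetching only the occurrences you actually need.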

Yes Peter, creating a separate Adabas file is an option that was considered, but as efficiency is paramount it does not make sense to incur the unnecessary I/Os. This data is purely for support investigation / audit purposes, and I would assume that in the normal course of events the data will never even be accessed beyond the initial store.

Debbie, did you consider using LOBs?

In my opinion, a LOB capable of holding 65,000 x 250 bytes would have the same problem with the physical Adabas buffer size that the MU has, and would in fact take a lot more space (or work), since not all messages to be stored in the MU would be the full 250 characters (I assume).

Debbie, while I cannot comment on the truth of the physical Adabas record limitation (it is a Looong time since I was a Black Hat), I would suggest that you compromise: fill up the MU to the max for the Adabas record size (or slightly smaller) and then create a new record (or records) for the remaining entries, along the lines of the sketch below. Not as efficient as a single record, but as someone at SAGD once advised me, “If it doesn’t work… don’t do it”.
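For what it is worth, a rough Natural sketch of that compromise. All names are invented, and the cap of 500 MU occurrences per record is purely illustrative; the real cap depends on your block size and on how well the data compresses:

DEFINE DATA LOCAL
1 MSG VIEW OF MESSAGE-FILE         /* hypothetical DDM for the base file
  2 MSG-ID     (A50)
  2 RECORD-SEQ (N3)                /* 1 = first record, 2,3,... = continuation records
  2 MSG-DATA   (A250/1:500)        /* MU capped at what fits in one record
1 #CHUNK (A250)
1 #OCC   (I4)
1 #SEQ   (N3) INIT <1>
END-DEFINE
*
* for each 250-byte chunk: fill the next MU occurrence, and once the
* cap is reached store the record and start a continuation record
ADD 1 TO #OCC
MSG.MSG-DATA(#OCC) := #CHUNK
IF #OCC = 500
  MSG.MSG-ID     := 'PARENT-KEY-VALUE'
  MSG.RECORD-SEQ := #SEQ
  STORE MSG
  RESET MSG.MSG-DATA(*) #OCC
  ADD 1 TO #SEQ
END-IF
*
* after the last chunk, store the final (partially filled) record
* and issue END TRANSACTION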

I can’t reconcile the statements that efficiency is paramount and that the data will never even be accessed beyond the initial store:

If the data will rarely be accessed beyond the initial store, then efficiency should be a modest consideration, aside from the store itself.

Regardless of whether you are able to store 250 x 65,000 bytes in a single “record” (check out spanned records) or you store them in chained records, separate files, etc., the bulk of the I/O will be required to put those bytes somewhere. The extra I/O to split the data into arbitrary chunks will likely be much smaller than the I/O needed to store the 16 MB of data.

Since you don’t expect much retrieval of it, perhaps the focus should be more on making it easier rather than “efficient”.

While a LOB capable of holding 65,000 x 250 bytes would (of course) require the same buffer sizes as a record with 65,000 MU occurrences of 250 bytes, you don’t suffer from the record size limitation, so from a design perspective a LOB is definitely easier to handle. Something like the sketch below.
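To illustrate, a minimal sketch assuming the MU is replaced by an Adabas 8 LOB field (LB option) and that your Natural version supports dynamic variables in views for LOB access; all names are again invented:

DEFINE DATA LOCAL
1 MSG VIEW OF MESSAGE-FILE         /* hypothetical DDM with the message data as a LOB field
  2 MSG-ID  (A50)
  2 MSG-LOB (A) DYNAMIC            /* one large value instead of 65,000 MU occurrences
1 #PAYLOAD (A) DYNAMIC
END-DEFINE
*
* build the complete payload in a dynamic variable, then store it once
COMPRESS 'first message' 'second message' INTO #PAYLOAD
MSG.MSG-ID  := 'PARENT-KEY-VALUE'
MSG.MSG-LOB := #PAYLOAD
STORE MSG
END TRANSACTION
END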