Data Archival decisions: EXTRACT/TRANSFER vs EXTRACT/ARCHIVE

After we presented all the possible configurations of the product to them, our IT management set a requirement to archive data to another mainframe Adabas database. It appears this is possible by defining source and target Adabas files (the target on a database dedicated to archive data) and defining actions like the following:

FIN10 = Database: 100 File: 368
FIN100 = Database: 300 File: 368

EXTRACT FIN10()
{
TRANSFER FIN10[
] TO FIN100;
}

Note: real archival rules will be a little more complex, but this is just an example.

The archiving product, though, really seems to offer the most benefit by making use of vaults, where you can search the archive. Such rules would look like this:

Accumulator Setting:
Vault: DEMO

FIN6 = Database: 100 File: 365

EXTRACT FIN6()
{
ARCHIVE FIN6[
];
}

Is there any downside to mainly or exclusively using the product at first to do only EXTRACT/TRANSFER logic, until the organization has enough confidence to take it a step further? And at that point, could we just create additional rules (maybe somewhat different) to extract from the transfer target and archive to a vault (or delete)?
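To make the idea concrete, I would expect those later rules to look much like the vault example above, just with the transfer target (FIN100) as the source. This is only my own sketch, not something I have tested:

Accumulator Setting:
Vault: DEMO

EXTRACT FIN100()
{
ARCHIVE FIN100[
];
}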

Thanks in advance!

-Brian

Actually, no. One use case is to “transfer” data to another Adabas database.
Later you can “archive” that data to a vault.
There is also a “Delete” function that allows you to delete, for example, transferred data.

Transfer also offers the possibility to create a consistent subset of data in a separate database. This data could be used for testing or training purposes (after obfuscation, of course :slight_smile:).

Regards,
Wolfgang

Hi Wolfgang,

Many thanks for your response. That gives me confidence to proceed with the plan to initially use EXTRACT/TRANSFER logic to move eligible data to the archive ADABAS db. Of course, any time the FDT changes on the source file I will have to make the same FDT change on the target file.

Conceptually, if we are required to retain data for 10 years, we could transfer data from the source file to the target file once it is 5 years old, so each would hold 5 years of data. If, at a later point, we decide users really only need access to the archive db data for years 6 and 7, we could extract records older than 7 years from the archive db file and archive them to a vault.

As data in the vault ages beyond 10 years, we should then be able to delete it from the vault.

That is a great point about using this product to extract a subset of production data for QA testing purposes. I can see how that would work and how this tool could make it easier than other methods.

-Brian

Ok this is odd for several reasons. Hopefully someone can help educate me. :slight_smile:

I set up a test where I created a file on another database with the same FDT as an existing one. The intention was to copy everything over from the main file to the new one.

The existing file has 352,532 records in it.

The source file is db100 fnr118, and the target file is db300 fnr118. The extraction syntax looks like this:

EXTRACT Dev_fin_trans_xref()
{
TRANSFER Dev_fin_trans_xref[
] TO Archive_fin_trans_xref;
}

Upon completion, the action reports that 352,532 records were extracted, the same number accumulated, and the overall number agrees as well (action type = transfer). However, the number of records shown in SYSAOS for the target file was just 338,323 when I first checked following completion. Several minutes later it did eventually get up to 352,532. It seems the data could still be loading even though the action shows as completed.

The other odd thing is that, while it seems one cannot create an action without referencing a vault, I don’t actually make use of a vault in the extraction syntax. However, the vault did get used:

$ pwd
/opt/softwareag/testvault/data
$ ls -l
total 33120
-rw-rw-rw- 1 SAGUSER SYS1 16921804 Mar 24 15:24 1603241853550700.0000
-rw-rw-rw- 1 SAGUSER SYS1 1582 Mar 24 15:01 1603241853550701.0000

Why does it write to the vault even though I am not archiving to a vault? Is there a way to code this so it doesn’t actually write to a vault, too?

Thanks,

Brian

Hi Brian,
Once an activity has completed, all processing has stopped.
My belief would be that the lag in the SYSAOS record count is due to the fact that not all updated blocks in the Adabas buffer pool (e.g. the FCB) had yet been buffer-flushed out to disk.

TRANSFER activities do write information to a vault, as a means of providing an audit trail.
What about defining two Vaults? One Vault dedicated to your ARCHIVE activities and a different Vault dedicated solely to your TRANSFER activities.
If you don’t want to keep the TRANSFER audit trail, then periodically delete the ‘Transfer Vault’ off the disk.
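As a rough sketch of that cleanup (the path is just the test vault path shown earlier in this thread, the 30-day cut-off is an arbitrary example, and whether plain rm is the right way to empty a Transfer Vault is worth confirming for your release):

$ # list transfer-vault files older than 30 days before removing anything
$ find /opt/softwareag/testvault/data -type f -mtime +30 -print
$ # then remove them
$ find /opt/softwareag/testvault/data -type f -mtime +30 -exec rm {} \;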

Hi Geoff,

Thanks for the insight. In checking now, the vault appears to be void of data, so it must have only been used as intermediary storage and its contents were temporary. I will take your advice of having the transfer vault be just that and regularly cleaning it out. If someday we do use a vault to hold archived data, it will be a separate OMVS file system mounted on cheaper storage.

-Brian

Since we’re only doing transfers (no data is actually kept in the vault), it is important to monitor this file system and keep it clean. Even though the vault is only storing audit details, it stores a LOT of them. Not keeping track of this risks having a process die with Response:248 Subcode:0x0085700B, meaning the file system is full.
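A simple way to keep an eye on it (a sketch assuming a POSIX-style df on the OMVS side; the exact flags and column layout may differ on your system, and the 80% threshold is just an example):

$ # report usage of the file system holding the vault (path from the earlier test)
$ df -kP /opt/softwareag/testvault
$ # flag it once usage passes 80% (capacity is the 5th column of POSIX df output)
$ df -kP /opt/softwareag/testvault | awk 'NR==2 && $5+0 > 80 { print "vault file system over 80% full" }'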

I cleaned it out and started the archive action again. Fortunately, with v1.6.1.4 we did not lose a record like we did on v1.6.1.3. I assume this fix is, or soon will be, released at a v1.7.1 fix level, and I highly urge everyone to apply the appropriate one (as in “Early Warning”).

-Brian