Browsing around I came across your post. You’re probably long since through whatever hurdles you faced, but just in case anyone else finds this useful in future…
The archiving product will run anywhere (that it is licensed for, of course). The ‘extractor’ pulls data out of your Adabas wherever your Adabas lives (mainframe, Unix, Windows…) and the ‘accumulator’ transmits the extracted data into your choice of destination. Your destination could be another Adabas (on the same or a different computer, running a different operating system if you wish). Or your destination could be the ‘vault’ (a flat-file directory structure on the same or a different platform to where it was extracted).
The basis of ‘extracting from’ and ‘accumulating to’ demands that the software be ‘distributed’ in concept and in practice. In a mega-infrastructure you might be running any number (unlimited) of extract/accumulate pairs across any number of system pairs. And yes, that means you are not restricted to a single vault; you might decide to have different vaults for different divisions, for example.
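To make the ‘vault’ idea a bit more concrete, here is a purely illustrative sketch of what a flat-file vault might look like on disk. Every name below is invented for illustration; the actual layout and naming are defined by the product, not by you or me:

```
/archive-vault/                  # one vault; a site may run several
  division-A/                    # hypothetical per-division separation
    PAYROLL/                     # data archived from one Adabas file
      run-2010-06-01/            # one archive (extract/accumulate) run
        segment.001              # flat-file data written by the accumulator
        segment.002
  division-B/
    …
```

The point is simply that a vault is ordinary flat-file storage, so it can live on whichever platform suits you, independent of where the data was extracted.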
Given the distributed and platform-neutral nature of the software, the mechanism for administering it needs to be neutral and easy too. You don’t want to be handling system-specific horrors such as JCL. This is where the ‘administration’ (browser) component comes into play. Here you describe the things you want extracted, where you want them stored, when you want things to run, how often, under what restrictions, and so on. You can also watch (monitor) all the concurrent actions running across all computers from a single browser seat.
Having lightly described the distributed basics (administration, extraction, accumulation, vault), there finally needs to be some glue to hold all this together. This is the ‘(Adabas) System Coordinator’. The System Coordinator understands the network of computers where this software is installed (this knowledge is acquired automatically during installation across the various machines). You will find your administration browser magically acquires this knowledge too, making your life easier when picking the from/to pairs for the extract/accumulate actions you define.
The System Coordinator ‘network’ manages itself according to the actions you define, so that extractors and accumulators run when you want them to, automatically. You can also set the ‘pace’ at which these things run, for example so as not to overload your production systems.
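Conceptually, an action you define in the administration browser captures something like the following. This is a purely hypothetical sketch, not the product’s actual syntax; the database numbers, host names, schedule, and ‘pace’ values are all invented for illustration:

```
action: archive-payroll
  extract-from:   Adabas db=12, file=50   (running on MAINFRAME-A)
  accumulate-to:  vault /archive-vault/division-A   (on UNIX-B)
  schedule:       monthly, 1st, 02:00
  pace:           low        # throttle so production work is not overloaded
```

The System Coordinator then sees to it that the right extractor and accumulator start on the right machines at the right time, without you touching JCL or cron on either end.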
Communication between all these components is by TCP/IP, without the need for the Net-Work product.
There is one thing that confuses people. Well, two actually…
First, the administration browser cannot run on the mainframe. It can only run on Unix or Windows (LUW), or both. This is a fairly obvious limitation, since the mainframe finds browser displays a bit difficult.
Second, the installation of all the components is performed from non-mainframe (LUW). This means anyone wishing to install components onto the mainframe must do so from ‘off-host’ by getting the Installer to ‘push’ the mainframe components up onto the mainframe. For a single-mainframe implementation a site would:
a) Drop the Installer onto your choice of LUW stations.
b) Use the Installer to drop the Archiver onto your choice of LUW browser.
c) Use the Installer to drop (push) the Archiver up to your mainframe.
The prerequisites (other than Adabas), such as System Management Hub, get dropped as needed during the installation process.
A brief (but still long) tale; hopefully it makes sense to whoever needs it in future.