Is it possible to read and write TSO datasets from mainframe Natural 3.1.6 using Broker? If so, does anyone have code examples they’d be willing to share? We are trying to replace our mainframe file transfer process where users upload/download data between pc and Adabas.
Users currently do this through a 3270 emulator (IND$FILE) to CICS temp storage, which is then read and processed by a Natural program. We are replacing CICS with Com-plete and must use TSO datasets in place of temp storage. The user side of this process must remain unchanged.
Appreciate any ideas or examples.
Thanks!
Chris Walsh
Can you elaborate on what the whole application is? I assume the temp storage process was an interim solution to a data interchange between the two platforms? What is the existing “user process”? What happens to the data once it is in a TSO dataset or CICS temp storage?
If your process is to move data from your PC application to the mainframe, you could establish a Natural RPC Server that writes the data to whatever the final target is. The client application would issue RPC Wrapper calls to send the data to the Natural server subprograms.
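On the server side, the RPC route is just an ordinary Natural subprogram that the Broker dispatches to. A minimal sketch (the record layout, the METER-READINGS view, and all field names are invented here for illustration; your actual DDM and parsing will differ):

```natural
* Hypothetical RPC server subprogram: the client sends one meter
* record per call; the subprogram applies it to Adabas.
DEFINE DATA
PARAMETER
1 #METER-RECORD (A80)              /* record sent by the PC client
1 REDEFINE #METER-RECORD
  2 #METER-ID (A10)                /* assumed layout
  2 #READING  (N9)
1 #RC (N2)                         /* 0 = stored ok
LOCAL
1 READINGS VIEW OF METER-READINGS  /* invented DDM/view name
  2 METER-ID (A10)
  2 READING  (N9)
END-DEFINE
READINGS.METER-ID := #METER-ID
READINGS.READING  := #READING
STORE READINGS
END TRANSACTION
#RC := 0
END
```

The subprogram runs under a Natural RPC server; the PC client calls it through the generated RPC wrapper (or EntireX stub) rather than through a file transfer.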
If the process is to move data from the mainframe to the PC, and it needs to know which PC it is going to (i.e. it will send the data to the same PC the terminal emulator running Natural is on), life gets more interesting (it can be done, but it is NOT as simple).
In any case, check out USR2021N - you can use this to dynamically allocate files and read/write to those files from Natural. This SYSEXT API can be called from batch Natural RPC, and I’m guessing, from COM-PLETE also (haven’t tested it, so can’t swear to it).
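To give the flavor of the allocate-then-read pattern: the sketch below is ILLUSTRATIVE ONLY. The parameter layout shown is a stand-in, not the real USR2021N interface; copy the actual parameter data area and example program from library SYSEXT before coding against it.

```natural
* Allocate a TSO dataset to a Natural work file, then read it.
* The #DYNALLOC layout below is hypothetical - use the real
* parameter data area shipped in SYSEXT.
DEFINE DATA LOCAL
1 #DYNALLOC                  /* hypothetical parameter area
  2 #FUNCTION  (A4)          /* e.g. allocate / deallocate
  2 #WORK-FILE (N2)          /* work file number to bind
  2 #DSNAME    (A44)         /* fully qualified dataset name
  2 #ERROR     (N4)          /* non-zero = allocation failed
1 #REC (A80)
END-DEFINE
#DYNALLOC.#FUNCTION  := 'ALLO'
#DYNALLOC.#WORK-FILE := 1
#DYNALLOC.#DSNAME    := 'USERID.METER.UPLOAD'   /* assumed name
CALLNAT 'USR2021N' #DYNALLOC
IF #DYNALLOC.#ERROR NE 0
  ESCAPE ROUTINE
END-IF
READ WORK FILE 1 #REC
  /* process each record */
END-WORK
END
```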
Our users transfer (upload and download) water meter readings between PC and mainframe. This is, and needs to remain, a 2-step process. We have no access to our users' networks or PCs, so we cannot install or distribute "client" pieces to them.
Plus, the change from CICS using temp storage (as interim file storage) to Com-plete using TSO datasets must be transparent to them. Their existing procedures need to remain unchanged. Based on this, I will not be able to "streamline" the process to transfer directly between PC and mainframe.
Here are the 2 steps they perform for upload:
- Under CICS, transfer pc data to temp storage while outside of Natural. Under Com-Plete, the data target will be a TSO PDS member (or seq. dataset).
- They start online Natural application, which reads CICS temp storage using “read work file x” logic. Under Com-Plete, I need to be able to read the TSO dataset. As each record is read, it is manipulated and used to update Adabas.
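The read-and-update step above would look roughly like this (a sketch only: it assumes the TSO dataset is already bound to work file 1, and the record layout and METER-READINGS view are invented for illustration):

```natural
* Upload step: read each transferred record from work file 1,
* manipulate it, and apply it to Adabas.
DEFINE DATA LOCAL
1 #REC (A80)
1 REDEFINE #REC
  2 #METER-ID (A10)                /* assumed record layout
  2 #READING  (N9)
1 READINGS VIEW OF METER-READINGS  /* invented DDM/view name
  2 METER-ID (A10)
  2 READING  (N9)
END-DEFINE
READ WORK FILE 1 #REC
  /* manipulate the record as needed, then update Adabas */
  READINGS.METER-ID := #METER-ID
  READINGS.READING  := #READING
  STORE READINGS
END-WORK
END TRANSACTION
END
```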
Downloads are basically reversed:
- The online Natural application reads multiple Adabas files and writes data to CICS temp storage using "write work file x" logic. Under Com-plete, the target will be a TSO dataset.
- They exit Natural and initiate transfer from temp stor (CICS) or TSO dataset (Com-Plete) to pc.
Once the transfer is finished, the interim data storage is no longer needed.
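The download step is the mirror image (again a sketch, assuming work file 1 is the interim target and using the same invented view and record layout):

```natural
* Download step: read Adabas and write each record to work file 1
* (temp storage under CICS, a TSO dataset under Com-plete).
DEFINE DATA LOCAL
1 READINGS VIEW OF METER-READINGS  /* invented DDM/view name
  2 METER-ID (A10)
  2 READING  (N9)
1 #REC (A80)
1 REDEFINE #REC
  2 #METER-ID (A10)                /* assumed record layout
  2 #READING  (N9)
END-DEFINE
READ READINGS BY METER-ID
  #METER-ID := READINGS.METER-ID
  #READING  := READINGS.READING
  WRITE WORK FILE 1 #REC
END-READ
END
```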
I’m familiar with USR2021N. I’ve never used RPC before, so any “tips” that will help me expedite this would surely be appreciated. If I can access the TSO datasets via RPC from within our Natural application, this would solve the problem. I will investigate.
Thanks
Chris Walsh
I think I’m unclear about the parts that are external to Natural - what is writing data to the CICS temporary storage (that you will move to PDS or sequential datasets)? If that process was to call Broker (via ACI or RPC), it could send the data to Natural and Adabas without going through intermediate datasets.
For the downloads: are these a response (reply) to each upload? or is the user initiating some other download? Where do the datasets go when they exit Natural and transfer to the PC - to an application program? Excel?? Could that program fetch the data from Natural / Adabas without requiring the user to initiate a file transfer?
Are your constraints on maintaining the 2-step process due to an outside package with limited input/output facilities?
All terminal emulation software (Extra!, Minisoft, TNHost, etc.) that our users connect to us with has file transfer capability using an IBM program called “IND$FILE”. When the target system is specified as CICS, all data is transferred via temporary storage queues. For IND$FILE to work with Com-plete, the target system must be specified as TSO.
Unfortunately, there is no way to configure the emulation software to call Broker directly. The software is definitely limited in this area. We are an application service provider - utility companies contract with us for service. As such, we have no control over, or access to, their networks. Many of these customers connect to us using dedicated phone lines (not IP based). Since we are not in a position to have our customers change how they connect to our application (and transfer data), we must continue the existing 2-step process.
Downloads and uploads are done at separate times. Typical procedure is user will download information from our system to load into their local meter reading system - then they go into the field to read utility meters - then upon return, they upload the meter reading data back into our system so we can produce utility bills.
This would be a no-brainer if I could directly read the TSO datasets from online Natural. This whole process becomes moot when we upgrade to NAT4 later this year. Sounds like my interim choices are to try to learn RPC or get ESS in-house.