I have a few queries regarding large EDI document processing that I was not able to find answers to in the SAG documentation or in previously posted questions. Assumption: the large-document configuration in B2B Settings > Config (tn.BigDocThreshold) and in IS Admin (watt.server.tspace.location, watt.server.tspace.max, watt.server.tspace.timeToLive) has already been performed.
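For reference, the prerequisite settings above would look roughly like the following properties fragment. The values shown are illustrative assumptions only, not recommendations; use thresholds and paths appropriate for your environment and check the units against your IS version's documentation:

```properties
# TN property (B2B Settings > Config): documents above this size
# are treated as large and routed to tspace
tn.BigDocThreshold=10000000

# IS extended settings for tspace (hard-disk storage of large documents)
watt.server.tspace.location=/opt/softwareag/tspace
watt.server.tspace.max=52428800
watt.server.tspace.timeToLive=120
```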
1) Suppose there is a custom TPA set up for this document (i.e. Sender, Receiver and DocumentStandard) with splitOption set to Transaction (which, for a regular/small submitted document, would generate three documents and submit them back to TN, where they are then handled by the respective processing rules). What happens when a large document is sent by the partner and is also identified by TN as large: will it be processed through the Trading Partner Agreement in the same way (after the document is placed on the hard drive)?
2) For large document processing, I would like to understand whether we can triage the processing flow at the Tier 2 service level based on whether the document is large or not. That is, for the bizdoc received in the Tier 2 service (as handed over by the processing rule after TN's initial processing marked the document as large) holding the interchange EDI document: can we add a branch condition based on bizdoc/ContentParts/StorageType and/or the bizdoc LargeDocument flag? If their values are tspace and true respectively, execute the large-document processing workflow; otherwise, perform the regular processing workflow (for regular-sized documents).
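The branch condition I have in mind could be sketched like this in plain Java (outside Integration Server, so it can be unit-tested). The field names mirror the bizdoc pipeline values mentioned above (ContentParts/StorageType and the LargeDocument flag); the class itself and its shape are hypothetical stand-ins, not a webMethods API:

```java
// Hypothetical stand-in for the Tier 2 triage branch: route to the
// large-document workflow when TN has marked the bizdoc as large.
public class LargeDocTriage {

    /** Minimal stand-in for the two bizdoc fields the branch would inspect. */
    public static final class BizDocInfo {
        final String storageType;    // e.g. "tspace" when content is on disk
        final boolean largeDocument; // TN's large-document flag

        public BizDocInfo(String storageType, boolean largeDocument) {
            this.storageType = storageType;
            this.largeDocument = largeDocument;
        }
    }

    /** True when the large-document processing workflow should run. */
    public static boolean isLargeDocument(BizDocInfo doc) {
        return doc.largeDocument || "tspace".equals(doc.storageType);
    }
}
```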
3) For large EDI documents, what is the recommended approach for processing at the service level (this is the part I was confused about):
a) Should we use the service wm.tn.doc:getContentPartData, which takes as input the bizdoc sent to the Tier 2 service from TN (by the processing rule)?
If so, I have two questions about this:
i) How do you define the partName input for this in the case of EDI (is it something like group or transaction)? This part really confuses me. Can we not just set getAs = stream, so that we get a pointer to the bizdoc with the large section set as UndefData, which we can then process iteratively?
ii) Do we also have to call the service wm.b2b.editn:getTspace after the first step (I am assuming not), or just continue with executing wm.b2b.edi:envelopeProcess and wm.b2b.edi:convertToValues as the case may be?
b) Or should we instead process the document iteratively, segment by segment (i.e. node iteration)? If so, how do we obtain a reference to the document stored on the hard disk in our service? Do we have to use the wm.b2b.editn:getTspace service (which doesn't provide an option to read as a stream or to read only part of the document)? If so, how will that work out: is a stream read the default, with just the large section set as UndefData, which we would then have to process using the iterative, segment-by-segment technique?
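To make the segment-by-segment idea in (b) concrete, here is a minimal sketch of reading an interchange one EDI segment at a time from a stream, rather than loading the whole document. In IS the stream would come from the bizdoc content part; here a plain InputStream is used so the loop is self-contained and testable. The '~' segment terminator is an assumption for illustration only; real code should take the terminator from the ISA envelope:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch: iterate over an EDI interchange segment by segment without
// materializing the whole document in memory at once.
public class SegmentReader {

    /** Reads segments delimited by '~' (assumed terminator) from a stream. */
    public static List<String> readSegments(InputStream in) throws IOException {
        List<String> segments = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            if (c == '~') {                       // assumed segment terminator
                String seg = current.toString().trim();
                if (!seg.isEmpty()) segments.add(seg);
                current.setLength(0);             // start the next segment
            } else {
                current.append((char) c);
            }
        }
        return segments;
    }
}
```

In a real service the per-segment work (mapping, validation, writing out) would happen inside the loop instead of collecting segments into a list, which is what keeps the memory footprint flat for large documents.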