Triaging large EDI document processing in TN and the Tier 2 service

Hi wmUsers,
I have a few queries regarding large EDI document processing that I was not able to find answers to in the SAG documentation or in previously posted questions. Assumption: the large-doc configuration in B2B Settings > Configure (tn.BigDocThreshold) and in IS Admin (watt.server.tspace.location, watt.server.tspace.max, watt.server.tspace.timeToLive) has already been performed.

1) Suppose there is a custom TPA set up for this document (i.e. Sender, Receiver, and DocumentStandard) with splitOption set to Transaction (which, for a regular/small submitted doc, will generate three documents and submit them back to TN, where they are then handled by the respective processing rules). What happens when a large document is sent by the partner and is also identified by TN as large: will it be processed through the Trading Partner Agreement in the same way (after the doc is placed on the hard drive)?

2) For large document processing, I would like to understand whether we can triage the processing flow at the Tier 2 service level based on whether the document is large or not. That is, for the bizdoc received in the Tier 2 service (handed over by the processing rule after initial TN processing marked the doc as large) holding the interchange EDI document: can we add a branch condition on bizdoc/ContentParts/StorageType and/or bizdoc/LargeDocument? If their values are "tspace" and true respectively, execute the large-document processing workflow; otherwise perform the regular processing workflow (for regular-sized documents).
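To make the triage idea concrete, here is a minimal sketch in plain Python of the branch condition I have in mind, with an ordinary dict standing in for the TN bizdoc pipeline. The field names (ContentParts/StorageType, LargeDocument) follow my reading of the bizdoc structure and are assumptions, not a verified API:

```python
# Hypothetical sketch of the Tier 2 triage branch. A plain dict stands in
# for the TN bizdoc; field names are assumptions based on the question.

def choose_workflow(bizdoc):
    """Return which processing path the Tier 2 service would take."""
    parts = bizdoc.get("ContentParts", [])
    stored_in_tspace = any(p.get("StorageType") == "tspace" for p in parts)
    is_large = bizdoc.get("LargeDocument") is True
    if stored_in_tspace or is_large:
        return "large-document workflow"   # stream and iterate the content
    return "regular workflow"              # safe to load content in memory

# Example: a doc TN marked as large, with its content part kept in tspace
doc = {"LargeDocument": True,
       "ContentParts": [{"PartName": "edidata", "StorageType": "tspace"}]}
print(choose_workflow(doc))  # large-document workflow
```

In a flow service this would simply be a BRANCH step on those two bizdoc fields; the sketch only illustrates the decision logic.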

3) For large EDI documents, what is recommended for their processing at the service level (I was confused on this one):

a) Should we use the service - , which will take the bizdoc as input, as sent to the Tier 2 service from TN (processing rule)?

If so, I have two questions on this:
i) How do you define the partName input in the EDI case (is it something like group or transaction)? This part really confuses me. Can we not just set getAs = stream, so that we get a pointer to the bizdoc with the large section set as UndefData, which we can then process iteratively?
ii) Do we also have to call the service wm.b2b.editn:getTspace after the first step (I am assuming not), or just continue with executing wm.b2b.edi:envelopeProcess and wm.b2b.edi:convertToValues as the case may be?

b) Or should we instead use the technique of processing the document iteratively, segment by segment (i.e. node iteration)? If so, how do we obtain a reference in our service to the document stored on the hard disk? Do we have to use the wm.b2b.editn:getTspace service (which doesn't provide an option to read as a stream or to read part of the document)? If so, how does that work out: is the stream read the default, with just the large section set as UndefData, which we then have to process via the iterative, segment-by-segment technique?

Hi Team,

If any of you could respond to my query (even in part), it would be really appreciated.

I think I was able to figure out answers to part of my question, so I am posting them here for others' reference:
1) Still an open question; can someone respond?
2) I believe yes, we can triage based on bizdoc/LargeDocument, branching to either regular processing or large-doc processing.
3) For the large-document processing part, we can just call - with the inputs below:
bizdoc - the bizdoc as received from TN in the Tier 2 service
partName - edidata
getAs - stream
This way we have access to the EDI document in the hard-disk tspace location as a stream (up to this point, the purpose of keeping the doc out of TN's memory is preserved). Next, we can navigate this document (partContent) node by node using the iterate flag in wm.b2b.edi:convertToValues and process the document.
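The stream-then-iterate idea can be illustrated outside webMethods with a minimal Python sketch: read from a stream in chunks and yield one segment at a time, never holding the whole interchange in memory. The '~' segment terminator and the sample X12 content are assumptions for illustration; inside IS this role is played by feeding the partContent stream to wm.b2b.edi:convertToValues with iterate set to true:

```python
import io

# Minimal stand-in for iterating a large EDI interchange segment by
# segment from a stream. Assumes '~' as the segment terminator; real
# interchanges declare their delimiters in the ISA segment.

def iter_segments(stream, terminator="~", chunk_size=1024):
    """Yield one EDI segment at a time without loading the full document."""
    buf = ""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf += chunk
        while terminator in buf:
            seg, buf = buf.split(terminator, 1)
            if seg.strip():
                yield seg.strip()
    if buf.strip():
        yield buf.strip()       # trailing segment with no terminator

# Sample (abbreviated, hypothetical) X12 interchange as a stream
edi = io.StringIO("ISA*00*...~GS*IN*...~ST*810*0001~SE*2*0001~GE*1*1~IEA*1*000000001~")
for segment in iter_segments(edi):
    print(segment.split("*")[0])  # segment IDs: ISA, GS, ST, SE, GE, IEA
```

The memory profile stays flat regardless of document size, which is the same property the tspace stream plus the iterate flag is meant to give inside TN.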

Clarification: per my understanding, after the above step (where we get access to partContent as a stream), we don't need to invoke wm.b2b.editn:getTspace (which is normally called to extract the ediString from the bizdoc). Is that correct?