Large File Validation

Hi,
I have an EDI file to validate. Here is my flow:
wm.tn.doc:getContentPartData
getfile
wm.b2b.edi.util:streamToString
wm.b2b.edi:envelopeProcess
It works fine with small files like 10 MB or 20 MB, but I get an OutOfMemory error when the file is 30 MB or larger.
I am not sure whether streamToString can handle large files.
I appreciate your suggestions.
Thanks,
Raj

The streamToString service will load all the bytes of the content part into memory and then convert them to a String. 10MB is not a small file.
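
For illustration only (this is not the actual WmEDI source), here is a rough Java sketch of what any load-it-all service has to do, compared with reading the same stream in fixed-size chunks:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class StreamToStringSketch {

        // Roughly what a streamToString-style service must do: every byte of the
        // content part is buffered, then copied again into a String. For a 30 MB
        // part that is ~30 MB of bytes plus ~60 MB of char data (Java chars are
        // 2 bytes each) before any EDI parsing even starts.
        static String loadAll(InputStream in) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return buf.toString("ISO-8859-1"); // second full copy of the data
        }

        // Streaming alternative: only one 8 KB chunk is in memory at any moment,
        // no matter how big the file is. Validation/splitting logic would inspect
        // each chunk here instead of keeping everything.
        static long scanOnly(InputStream in) throws IOException {
            byte[] chunk = new byte[8192];
            long total = 0;
            int n;
            while ((n = in.read(chunk)) != -1) {
                total += n;
            }
            return total;
        }
    }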

Review the TN docs for information about large document processing.
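
If memory serves (the property names are from my recollection of the TN docs, so verify them for your version), large document handling is driven by a few server settings; the values below are only examples:

    tn.BigDocThreshold=10000000               # size in bytes above which TN keeps content on disk (tspace) rather than in memory
    watt.server.tspace.location=/opt/tspace   # directory used for the temporary storage
    watt.server.tspace.max=5000000000         # upper limit on total tspace usage, in bytes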

Are you aware that the EDI for TN module will do envelope processing for you? I ask because you list wm.b2b.edi:envelopeProcess as one of the services you call. This is unnecessary when using the EDI for TN module.

Hi,

I have never worked with TN large documents. Will the services below work for me?

wm.b2b.editn:getTspace
wm.tn.doc:getContentPartData
getfile
wm.b2b.editn:validateEnvelope

Thanks,

Raj

I assume you are receiving one big interchange that contains one or more groups each with multiple transaction sets. Let me know if that assumption is wrong.

It appears that you are trying to do the de-enveloping yourself. You don’t need to do so. Let TN do most of the work for you.

EDI for TN can do the splitting for you. It can quite easily split out each group and transaction set. When the interchange is received, it will split the interchange according to the applicable EDI TPA. EDI for TN can also automatically validate the interchange and send the appropriate FA/997.

My recommendation is to always split to the transaction set level. It is rare that a group of transaction sets MUST be processed together.

Then you can set up rules to 1) ignore the group document since you really don’t need it; 2) process each transaction set individually.
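
As a very rough sketch (the TPA variable and document type names here are from memory, so treat them as assumptions and check your EDI for TN setup):

    EDITPA: splitOption = Transaction, so TN creates one document per transaction set
    Rule 1: document type = X12 Group -> action: Ignore
    Rule 2: document type = X12 4010 850 (and so on, per transaction set type) -> action: invoke your processing service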

You don’t need to use both getTspace and getContentPartData. You don’t need to use getfile at all. And you likely don’t need to use validateEnvelope.

Read the EDIModuleUsersGuide.pdf (Chapter 19 for large doc handling) and the EDIModuleConceptsGuide.pdf. Lots of good info on how to process EDI. It’s likely that your transaction sets themselves are not that large; it’s just the interchange that holds a group of them. If you let TN do the work for you, you won’t have to worry about large doc handling. But if your transaction sets are indeed large (ship notices can get big, for example), then you’ll need to implement the handling described in the docs to avoid loading things completely into memory.

Here is my existing process. It is an outbound process which picks up the flat file and converts it into EDI, then validates it against HIPAA standards (validate large ST), then dispatches the file to the partner.

After the EDI file is created:
wm.tn.doc:getContentPartData
getfile
wm.b2b.edi.util:streamToString ----- having a problem here; sometimes getting an OutOfMemory error for large files.
wm.b2b.edi:envelopeProcess

Then after this: EDIConcat, validate largeST (HIPAA module).

Raj

I must be missing something. Why are you using getfile?

As mentioned earlier, the streamToString is the problem. You can’t do that.

Are you loading the entire flat file into memory? That’s probably an issue too, if it is large. You’ll have the bytes, the string representation, and the IS document representation; then as you convert to EDI, you’ll have the EDI as an IS document and as a String. Your 10 MB file is now taking up much, much more than that in memory.
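
Very roughly, and ignoring JVM object overhead, a 10 MB flat file can end up costing something like:

    flat file as bytes           ~10 MB
    flat file as a Java String   ~20 MB  (chars are 2 bytes each)
    flat file as an IS document  ~20 MB+ (each field becomes its own String, plus structure overhead)
    EDI as an IS document        ~20 MB+
    EDI as a String              ~20 MB
    ------------------------------------
    easily 90 MB or more of heap for a single 10 MB file, before TN or validation touch it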

You might consider doing validation earlier in the process. If you’re creating the EDI document, why wait until after giving it to TN to validate it? You can have TN validate it too. Why are you “reprocessing” the EDI data in a service?

Perhaps an overview of the entire process is in order. Something like:

  1. Flat file picked up using file polling port.
  2. Flat file services used to convert to an IS document.
  3. Translate to an EDI IS document.
  4. Convert EDI IS document to string and add envelopes.
  5. Submit the EDI to TN for delivery to the partner.

Does that capture the high level process? If so, you can do validation in step 3 or in step 5.

I have one scheduled service which picks up the flat files and converts them into smaller files, then submits a node to TN.
In this process I have two processing rules:

  1. For converting to EDI. (synchronous)

    Get the transport document, convert to EDI (addGroupEnvelope to set the GS header information, addICEnvelope to set the ISA header information), and submit the doc to TN.

  2. For validating and dispatching (asynchronous)
    Retrieve the content part from the bizdoc, getfile (EDI file), streamToString, then envelopeProcess (validate the EDI doc), validate the doc against HIPAA (EDIConcat, validate LargeST), then dispatch it to the partner.

My first service works fine for any file size, but the second process fails for large files above 20 MB. I appreciate your reply and any other suggestions.
Thanks,
Raj

In my previous reply, when I mentioned submitting to TN I meant submitting a transport doc to TN which has the sender ID, receiver ID, document ID, and filename(s).
In our process, after creating the EDI we move the file to a directory, then get the same file using getfile, then validate.

Oh I see. To make sure I understand:

  1. Scheduled task service picks up a file and splits into several smaller files.

  2. Each file, with content, is submitted to TN as a flat file. Each file is now a TN document. Or is the content NOT provided to TN?

  3. Each TN document is processed by a service as configured by a sync rule. This service converts that to EDI and writes the contents to a file.

  4. Then the service creates a “management” document holding the data elements you mentioned. The EDI content is not given to TN, only the control info.

  5. For each “management” document in TN you have another rule that invokes a custom service (not using TN delivery services). That service reads the contents of the TN document. From that document you determine the EDI filename.

  6. Read the EDI file, convert it all to an IS document for validation. Then validate.

Is this accurate?

In step 3, do you append each document to a single file? Or does each EDI document get written to its own file? In other words, are you doing some sort of EDI batching/grouping?

I’m not sure why you’re using EDIConcat in the structure you’ve shown. Since you already loaded the complete file into memory (via streamToString), EDIConcat doesn’t help.

Hi Rob,
Sorry for the confusion.

  1. The scheduled task picks up a file and splits it into several smaller files; the smaller files are moved to a processing directory.

  2. The content is not provided to TN.
    3 & 4) After converting to EDI I just move the file to the processing directory (the content is never provided to TN); I just provide the transport doc (“management”).

  5. The second rule invokes the service; this service reads the content (EDI file) from the processing directory (getfile). Here I am using streamToString (to convert the stream into a string variable in the IData), then validate the envelope using wm.b2b.edi:envelopeProcess.

  6. Loop through the segments (ISA/GS/ST), get the EDI concat data (EDIConcat), then validate the ST (LargeST).

  7. Then encrypt the file in PGP format and dispatch it to the partner.

I really appreciate your suggestions and help.

Strange approach, not storing the content in TN, but that’s another discussion…

Again, the streamToString is the problem. You MUST eliminate the usage of that for large file handling.
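
For illustration only, here is a Java service sketch of the direction to head in: ask TN for the content part as a stream and pass that stream downstream, never materialising the whole document as a String. The input/output names ("partName", "getAs", "partContent") and the part name "ffdata" are my recollection of wm.tn.doc:getContentPartData; treat them as assumptions and check the TN built-in services reference for your release.

    import java.io.InputStream;
    import com.wm.data.IData;
    import com.wm.data.IDataCursor;
    import com.wm.data.IDataFactory;
    import com.wm.data.IDataUtil;
    import com.wm.app.b2b.server.Service;

    public final class LargeEdiHelper {

        // Sketch: fetch the EDI content part as a stream and hand it on.
        public static InputStream getEdiStream(IData bizdoc) throws Exception {
            IData input = IDataFactory.create();
            IDataCursor ic = input.getCursor();
            IDataUtil.put(ic, "bizdoc", bizdoc);
            IDataUtil.put(ic, "partName", "ffdata"); // assumed content part name
            IDataUtil.put(ic, "getAs", "stream");    // ask TN for a stream, not bytes
            ic.destroy();

            IData output = Service.doInvoke("wm.tn.doc", "getContentPartData", input);
            IDataCursor oc = output.getCursor();
            Object content = IDataUtil.get(oc, "partContent"); // assumed output name
            oc.destroy();

            // Feed this stream to stream-aware services; never wrap it in a String.
            return (InputStream) content;
        }
    }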

You may need to review the use of the PGP encryption as well. It may be loading the complete document into memory too.

Hi Rob ,

I know it is strange, but is there any way to avoid this? This is a running process and the client doesn’t want to change it.

Please let me know the best way to reduce memory usage. I know that using large doc handling with TN would avoid this.

Thanks

Raj

For 6.5, refer to Chapter 7, Handling Large Documents. It describes an approach for processing and validating the document.
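
From memory (so please verify against the chapter itself), the approach boils down to:

  1. Set tn.BigDocThreshold so interchanges above that size are kept on disk in tspace rather than held in memory.
  2. In the processing service, retrieve the content part as a stream rather than as bytes or a String.
  3. Parse with the iterator option (as I recall, convertToValues has an iterate flag for this) so only one transaction set is held in memory at a time.
  4. Validate/map that transaction set, drop it from the pipeline, and move on to the next one.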

Thanks, Rob, for your time and suggestions. By the way, we are on 6.0.1.

Thanks,
Raj

Ouch. That’s quite an old version and is no longer supported by wM tech support. Hopefully the client will be updating soon.

I don’t have 6.0.1 materials any more, but the 6.1 docs still have Chapter 7, Handling Large Documents. I imagine the 6.0.1 docs do as well.