Email Adapter multiple record parsing problem

Hi,

I am facing a problem parsing an email message. I am receiving a message in a looped-structure record format which I need to parse.
Can I use the Email Adapter receive operation in combination with a File I/O Adapter format operation to define the record type and layout and get the content of the message? Please advise.
The message format is:
header
record
child record
endheader

Here the message contains one or more records, each of which is a structure, and each record contains one or more child records, which are again small structures.

Please suggest a procedure for handling the parsing options.

thanks

murali

Murali,

I had the same requirement. I created a separate flow which I named convertErrorRecordToString. It takes the Errors records from the pub.record:validation flow and converts them to a string.

The input is Errors Record - output is stringOut.

With the record as input, create a loop and loop through each record. During the inner loop, you will need to create a series of maps that concatenate each value.

What I typically do is write a value, then add a carriage return, which I use as my line delimiter. When the email is written, it comes out perfectly.

Loop:

Maps:

Map - temp = item 1 + item 2
Map - delimiter
Map - temp = temp + next item
Map - delimiter
Map - temp = temp + next item
Map - delimiter

The map of the delimiter can be cut and pasted. I use a temp variable in the inner loop.
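Outside the tool, the same loop-and-concatenate logic looks roughly like this in plain Java (the field values here are made up for illustration):

```java
public class ErrorRecordFormatter {
    static final String DELIMITER = "\r"; // carriage return as the line delimiter

    // Concatenates each value of each record, appending the delimiter
    // after every value, just like the chain of maps above.
    public static String toString(String[][] errorRecords) {
        StringBuilder out = new StringBuilder();
        for (String[] record : errorRecords) {          // outer loop: each record
            StringBuilder temp = new StringBuilder();   // temp variable per record
            for (String item : record) {                // inner loop: each value
                temp.append(item).append(DELIMITER);
            }
            out.append(temp);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String[][] errors = { {"E100", "missing field"}, {"E200", "bad type"} };
        System.out.println(toString(errors));
    }
}
```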
Hope this helps.

Ray – Murali is working with Enterprise, not IS.

Murali – You should be able to write the e-mail contents to disk. Then use the I/O Adapter from there.

Hi Rob,

You are absolutely correct. My worry is that if I write to disk and use the I/O Adapter, this may lead to a performance issue. What I am looking for is a way to extend the Email Adapter so that I can provide one more operation template, like the stream operation templates in File I/O. Please suggest how to extend the existing adapters with additional operation templates.

The other solution I am considering is to write a common operation which takes a string as input (containing the email body) and gives a sequence of records as output. Here I need to simulate the file stream operations as common operations. I would appreciate hearing from anyone who has done it this way.

Where in the adapter can I look at the code of an operation template?

thanks & regards

murali

How fast does this integration need to be? I assume that since the data is coming via e-mail, speed isn’t much of a concern as long as the integration doesn’t take all day. Usually, a couple of minutes is okay and is driven not by the need to respond to a single integration but by the overall load, e.g. the need to process 10,000 e-mails a day or something. Is this the case?

AFAIK, there is no way to add to the available operation templates of an adapter. These operations are part of the core of each adapter. Adding a new operation would require modifying the adapter itself.

Writing a common operation is certainly doable. But wouldn’t you be duplicating the capability of the I/O adapter in terms of parsing?

I may be wrong on the performance thing, but my feeling is that you’re making more work for yourself trying to avoid a performance issue which may not even exist. I would suggest creating a solution using the e-mail and I/O adapters and then measuring performance to see if it is acceptable. Configuring these adapters and doing the test ought to be less than a day's worth of effort.

Hi Rob,

Thank you very much for your feedback .

If I write the email message to a file using a Unix or NT script, I can't use the filter option at the ES level. Because of this, if the filter option changes, I need to change the conversion script.

Please suggest a way to simulate the File I/O stream operation as a common operation which takes a string as input and produces record structures as output.

thanks & regards

murali

I don’t understand. What would the unix/nt script be doing? What specific filtering do you need? As I understood it, here would be the steps:

  1. E-mail adapter retrieves the mail message.
  2. Custom step in this adapter would write the contents to disk.
  3. I/O adapter detects the new file.
  4. I/O adapter publishes one or more events using data from each row/record in the file.
  5. The events flow to the subscribers, which can use filtering if desired.

What am I missing? Where do you need the filtering to be?

Hi Rob ,

A small question: how can you implement step 2?

I mean, without using the File I/O write operation, do we need to use Java I/O classes to create the files in the custom step?

regards

murali

Using java.io classes would be one way.

Another would be to publish an event with the e-mail contents/string and have the I/O adapter write the file, which another configured I/O adapter operation would of course then pick up and process.

If you don’t want to wait for a file poll operation (15 seconds or so, depending on configuration) then you could have the e-mail adapter write the file and then publish a notification event containing the filename. This event would go to the I/O adapter which would then process the file right away instead of waiting for the next poll.
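For reference, the java.io route in a custom step is only a few lines. A minimal sketch, assuming the target path is a directory the I/O adapter is polling (the class and method names are illustrative, not part of any adapter API):

```java
import java.io.FileWriter;
import java.io.IOException;

public class MailToDisk {
    // Writes the e-mail body to a file so a file-polling operation
    // can pick it up. The caller supplies the full path to write to.
    public static void writeMessage(String path, String emailBody) throws IOException {
        try (FileWriter writer = new FileWriter(path)) {
            writer.write(emailBody);
        }
    }
}
```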

Hi ,

Can anyone suggest how to simulate the File I/O format options as common operations, so that I can pass a string as input (containing hierarchical records) and get a set of records as output?

thanks & regards

murali

Murali, if Rob’s suggestions are not helping you, maybe you should try writing this code by hand using Java.

Your input will be a string containing your text. Your output will be a Struct object.

Assign your inputs and outputs for this common operation and then create a Custom Code step. Your Custom Code step inputs and outputs must be defined. Enterprise Integrator will generate the framework for your code, and it is your responsibility to do the actual parsing/mapping.

To edit the code generated by EI, click on your Custom Code step and then select the “Script” tab.
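As a rough idea of what the hand-written parsing could look like for the header/record/child layout described at the top of the thread: this sketch assumes child records are indented under their parent between "header" and "endheader" lines, which is a guess at the format, and the class names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class MessageParser {
    // Hypothetical nested structure mirroring record -> child records.
    public static class Record {
        public String value;
        public List<String> children = new ArrayList<>();
        Record(String value) { this.value = value; }
    }

    // Assumes top-level records are flush-left and child records are
    // indented, all between "header" and "endheader" lines.
    public static List<Record> parse(String body) {
        List<Record> records = new ArrayList<>();
        Record current = null;
        boolean inBody = false;
        for (String line : body.split("\r?\n")) {
            if (line.equals("header")) { inBody = true; continue; }
            if (line.equals("endheader")) { break; }
            if (!inBody || line.isEmpty()) continue;
            if (!line.startsWith(" ")) {          // top-level record
                current = new Record(line.trim());
                records.add(current);
            } else if (current != null) {         // indented child record
                current.children.add(line.trim());
            }
        }
        return records;
    }
}
```

The generated Custom Code framework would then map this list into the Struct fields you defined as outputs.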

Good luck.

Thank you, Dan.

Does anybody have code for standard parsing/mapping techniques, such as fixed-length records, delimited file formats, etc.?

Please let me know exactly how webMethods implemented the File I/O format operation templates. Is there any way to look at the webMethods adapter code to go through the template logic?

thanks & regards

murali

It would be a violation of your license agreement with webMethods to look at the underlying code, so I would avoid that path.

In re-reading the thread, I am just not sure why you are trying to write this component by hand. Why not just use the I/O Adapter? I think you will find it as fast as, or faster than, any component you write by hand. But that is just my opinion.

Remember that with an IO adapter, you can subscribe to any document published to your Broker. The fastest way to implement your solution would be to publish your String to the broker and let the IO adapter subscribe to it. Have the IO adapter do your mappings into the specified template and publish out the results. Any adapter can then subscribe to this output from the IO adapter.

It sounds like you are trying to do something by hand that webMethods’ products can do for you. Whenever possible, you should take advantage of the product features because they are unit-tested, certified, and supported by webMethods.

Hi,

I have a scenario: I want to read 10,000 records from a flat file and then send them to a target SQL Server DB.

I configured an operation with Create Multiple Outputs at the source end and an Insert operation at the target end. It works, but it takes a long time.

Can you guys suggest the best way to do this?

Thanks,
Sreeni
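One common cause of slowness here is a round trip per record. Whatever the adapter does internally, the usual remedy is grouping rows into batches so each round trip carries many rows. A plain-Java sketch of the chunking step (the batch size of 500 is an assumption; in JDBC each batch would then be bound via `PreparedStatement.addBatch()` and flushed with `executeBatch()`):

```java
import java.util.ArrayList;
import java.util.List;

public class Batcher {
    // Splits a record list into fixed-size batches so each database
    // round trip carries many rows instead of one.
    public static <T> List<List<T>> partition(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }
}
```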
