Hi,
I came across an issue where sometimes the actual data contains
the delimiter (in this case '*') as part of the data. Is there any way to
detect this delimiter in any of the data elements before
calling convertToString on the document?
You can retrieve the delimiter info from the EDITPA in TN. Define a partner-specific TPA; getEDITPA will return the delimiters, which you can then map into your convertToString call.
One approach is to identify the fields that might contain delimiter characters and then do a replace on each of them, either changing each delimiter to a non-delimiter character (such as '*' to '_' or a space) or deleting the character from the data.
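As a rough sketch of that field-level replace (the class and method names below are illustrative, not wM built-ins), a small Java helper might look like:

```java
// Illustrative helper, not a wM built-in service: replace any character
// found in `delims` (the partner's delimiter set) with a safe substitute.
public class DelimiterScrub {
    public static String replaceDelims(String value, String delims, char safe) {
        if (value == null) return null;
        StringBuilder sb = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            // String.indexOf(int) finds the char anywhere in the delimiter set
            sb.append(delims.indexOf(c) >= 0 ? safe : c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // '*' element separator, '~' segment terminator, ':' sub-element separator
        System.out.println(replaceDelims("PART*123~A:B", "*~:", '_')); // PART_123_A_B
    }
}
```

You would call this once per at-risk field in the mapping service, passing the delimiter set retrieved from the TPA rather than hard-coding it.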
As rmg points out, you can retrieve the delimiters for a partner first and then base your find/replace on those delimiters instead of hard-coding.
Just so I understand the question: are you creating outbound X12 from a flat file or some other type of data that may contain symbols such as |, *, :?
OR
Is some knucklehead sending delimiters embedded within the elements?
In the former case, if you have a way of determining the outbound TP from the flat file, you could look up the delimiters in the agreement as RMG mentions. That way you don't send out bogus X12.
In the latter case, however, if a "*" were included within an REF03 as part of a description, where that partner is using "*" as the element separator, I think you need to reject the document and ask the TP to do a compliance check before sending.
This may just be my FED/DOD background talking, but you should never try to code for poorly formed X12. There is no limit to how badly a partner can mangle it, and it can lead to misinterpretation.
I don't see how you could distinguish a delimiter from a wildcard in received X12.
I agree with your comments on the "*" delimiter being present in segments/fields such as MSG or REF03. As we know, it is a commonly used delimiter in EDI transaction sets, typically as the field separator, and its presence in data can lead to X12 parsing issues.
I agree with Jim that X12 data that contains delimiters within the elements should be rejected, especially since X12 does not have allowances for a release character. I have faced the second scenario many times, as our back-end system uses part numbers that begin with a ‘*’ character. I usually do as Rob suggests and pass the offending fields through the replace service in the mapping. In certain circumstances it may be possible to scrub the entire string at once before mapping it to an internal format IS document, if it arrives in a format that makes this possible.
Hi All,
Once again thanks for your suggestions and comments.
The webMethods server gets an XML document generated from one of
our backend systems, which we map to x12_4010_850 and then
use convertToString to generate an EDI document.
What is happening is that, due to some user error, some of the fields
randomly get an EDI delimiter, and this completely messes up the
generated EDI document.
I was considering the option Rob suggested, but doing
a replace on a field-by-field basis is a tedious job. I was wondering
if there is any way I can build a Java service to traverse
the 850 record and replace the delimiter if it exists in any
field.
Please let me know if I'm moving in the wrong direction.
"What is happening is that, due to some user error, some of the fields
randomly get an EDI delimiter, and this completely messes up the
generated EDI document."
Can you elaborate on the stage at which these delimiters are messing things up? After convertToString, or before you map to the 4010 850 IData document?
One possibility is to convert the document to an XML string, pass the string to the replace service, replacing the delimiters with nothing, and convert the resulting string back to the original document type.
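A minimal sketch of that round trip, assuming the string comes from pub.xml:documentToXMLString and goes back through pub.xml:xmlStringToXMLNode / xmlNodeToDocument (the XML fragment and element names below are illustrative):

```java
public class XmlScrub {
    public static void main(String[] args) {
        // Hypothetical XML string, as documentToXMLString might produce it
        String xml = "<PO><REF03>ACME *WIDGET* DESC</REF03></PO>";
        // Replacing '*' is safe here because '*' never appears in XML markup;
        // this shortcut breaks for delimiters such as '>' that do.
        String cleaned = xml.replace("*", "");
        System.out.println(cleaned); // <PO><REF03>ACME WIDGET DESC</REF03></PO>
    }
}
```

Note the caveat in the comment: the approach only works when none of the partner's delimiters are characters that also appear in XML markup.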
In my experience, when data from a back-end system contains delimiters, it is restricted to certain fields. For example, data from SAP may contain '*' in the Name and Address fields because that is valid data from an SAP perspective. So it is a data issue, and it is most likely possible to narrow the search/replace down to a reasonable number of fields and resolve it in the wM mapping service prior to convertToString.
If the delimiter in the XML document is truly random, it seems that it would be a programming bug related to how the XML document is being generated, and it should be corrected there.
It can seem tedious to replace on a field-by-field basis but, as Mary points out, the number of fields that need this treatment is usually in the single digits. There just aren't that many to worry about. The usual suspects are address fields and free-form comment/description fields.
Be careful if you use a technique where you convert the entire doc to a string of some format and back. That approach can be full of special cases that are hard to account for.
I think there might be a "scrub record" service of some sort floating around on this forum or on Advantage that traverses an entire IData structure to do work similar to this. The advantage of this approach is that it is relatively easy to replace all offending chars with a service call or two. The downside is that it is relatively expensive in terms of memory and processing time, especially since the vast majority of the data will never contain a char to be replaced; the cases where one actually occurs are usually relatively rare.
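For illustration, here is the traversal idea in plain Java over a nested Map/List structure; a real wM Java service would walk an IData with an IDataCursor instead, but the recursion is the same shape (all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecordWalker {
    // Recursively scrub every String value in a nested Map/List structure.
    // Sketch only: an IS Java service would use IDataCursor rather than Map.
    @SuppressWarnings("unchecked")
    public static Object scrub(Object node, String delims, char safe) {
        if (node instanceof String) {
            String s = (String) node;
            StringBuilder sb = new StringBuilder(s.length());
            for (char c : s.toCharArray()) {
                sb.append(delims.indexOf(c) >= 0 ? safe : c);
            }
            return sb.toString();
        } else if (node instanceof Map) {
            Map<String, Object> m = (Map<String, Object>) node;
            for (Map.Entry<String, Object> e : m.entrySet()) {
                e.setValue(scrub(e.getValue(), delims, safe));
            }
            return m;
        } else if (node instanceof List) {
            List<Object> l = (List<Object>) node;
            for (int i = 0; i < l.size(); i++) {
                l.set(i, scrub(l.get(i), delims, safe));
            }
            return l;
        }
        return node; // numbers, nulls, etc. pass through untouched
    }

    public static void main(String[] args) {
        Map<String, Object> rec = new HashMap<>();
        rec.put("REF03", "DESC*WITH*STARS");
        scrub(rec, "*~:", '_');
        System.out.println(rec.get("REF03")); // DESC_WITH_STARS
    }
}
```

This illustrates the trade-off described above: one call scrubs everything, but every field is visited whether or not it contains a delimiter.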
Hi All,
Once again thanks for your suggestions and comments
I have actually tried Rob's approach, and once we do it for a limited
number of fields the delimiter pops up in some other field.
I'm thinking Tim's approach (convert to XML, replace the delimiter, and convert it back to a record) would make the
process much easier and simpler to do.
But I'm open to other suggestions too.
EDI guys, what is the recommended approach when a partner can’t or won’t send you valid EDI documents?
Why try to accommodate and appease a partner that is unwilling or incapable of sending valid EDI documents? You're not working on the source of the problem, IMHO. No amount of technology band-aids will ever be enough to ensure the quality of service that the business leaders of your company and your partner's expect.
I don’t think the situation is this cut-and-dried. An XML file is created that contains data which is problematic only when transformed into an EDI document. It’s not that a partner is submitting invalid EDI documents. This is as much an issue with X12 as it is with the data. IMO, a system creating an XML doc shouldn’t have to know the detailed EDI specifications of the ultimate receiver of that document so as to avoid the use of certain characters.
I strongly advise against this approach. What will you do when one of your partners wants to use > as a delimiter (which is not completely out of the question)?
Although I suggested the approach, I agree with Rob in general on this. IMO, this is only (possibly) feasible in a very limited application. It would definitely be hazardous to incorporate this into a general processing method for EDI. As others have suggested, it is definitely advisable to look into fixing the problem at the source, if at all possible.
Agreed. There are only two workable approaches to this issue, IMO:
1. Replace the characters in question on a field-by-field basis in the transformation service. Requires knowledge of the fields at design time.
2. Scrub the document in its entirety by traversing the document and performing a replace on each field. This approach doesn’t care what the fields are until run-time but simply traverses all fields that exist.
The chars to replace should be read from the TPAs and never hard-coded.
The issue becomes more fun if you do both X12 and EDIFACT; you will need to account for the different approaches those standards take.
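For what it's worth, the EDIFACT side is a bit friendlier: EDIFACT defines a release character (by default '?', advertised in the UNA segment), so delimiters appearing in data can be escaped rather than replaced, whereas X12 has no release character at all. A hedged sketch of EDIFACT-style escaping:

```java
public class EdifactEscape {
    // Sketch: prefix each delimiter (and the release character itself)
    // with the EDIFACT release character instead of altering the data.
    public static String escape(String value, String delims, char release) {
        StringBuilder sb = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            if (delims.indexOf(c) >= 0 || c == release) {
                sb.append(release);
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // '+' element separator, ':' component separator, '\'' segment
        // terminator; '?' is the default release character
        System.out.println(escape("A+B:C", "+:'", '?')); // A?+B?:C
    }
}
```

So a shared scrubbing service would need a per-standard strategy: escape for EDIFACT, replace (or reject) for X12.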
If you’re interested in the second approach, there is a service in WmSamples called walkAnIData that gives an example of traversing the fields of a document at run-time.