I need to validate about 10 input elements for certain lengths, and if a length exceeds the limit specified in the schema, I need to truncate the value to that length. Whenever truncation happens, I need to write all the orders whose fields were truncated to a file.
What is a good design for this?
I have schema validation implemented in the FS from the xsd. Before that runs, I want to add the above length validation in Java. I don’t want to hard-code the lengths of these fields in the Java code; I want to read them from the xsd.
How can I read the lengths of the fields from the XSD file?
I think schema validation may not be the right approach for this. You’re not validating–you’re applying data mapping rules.
You may be able to get the field limitations from the IS schema but I’m not quite sure how. If you can’t, then you can put the field limits in a config file somewhere. With this info, your mapping would check field lengths and write the order number to the log file you mentioned as needed.
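If you do end up reading the limits straight from the xsd file instead of a config file, a plain DOM parse is enough to pick up xs:maxLength facets. A rough sketch — the layout it assumes (elements with inline simpleType restrictions) is an assumption about your xsd, and you’d adapt the lookup for named types:

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XsdLengthReader {
    static final String XS = "http://www.w3.org/2001/XMLSchema";

    // Returns element name -> maxLength for elements declared with an
    // inline simpleType restriction carrying an xs:maxLength facet.
    public static Map<String, Integer> readMaxLengths(InputSource xsd) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(xsd);

        Map<String, Integer> limits = new HashMap<>();
        NodeList elements = doc.getElementsByTagNameNS(XS, "element");
        for (int i = 0; i < elements.getLength(); i++) {
            Element el = (Element) elements.item(i);
            String name = el.getAttribute("name");
            NodeList facets = el.getElementsByTagNameNS(XS, "maxLength");
            if (!name.isEmpty() && facets.getLength() > 0) {
                String value = ((Element) facets.item(0)).getAttribute("value");
                limits.put(name, Integer.parseInt(value));
            }
        }
        return limits;
    }

    public static void main(String[] args) throws Exception {
        String xsd =
            "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
          + "  <xs:element name='orderNumber'>"
          + "    <xs:simpleType><xs:restriction base='xs:string'>"
          + "      <xs:maxLength value='20'/>"
          + "    </xs:restriction></xs:simpleType>"
          + "  </xs:element>"
          + "</xs:schema>";
        // prints {orderNumber=20}
        System.out.println(readMaxLengths(new InputSource(new StringReader(xsd))));
    }
}
```

You could run this once at startup and cache the map, so the per-document work is just a lookup.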
Why do you think schema validation is not a good fit? There are almost 50 fields in the xml.
I need to truncate only 10 of the fields. For all the other fields, I need to follow the schema validation.
I want to use a Java service so that performance might be better.
I created an input document containing the 10 fields and passed it as input to the Java service. I also created an output document from the Java service with all of these fields at the correct lengths. All of the validation occurs in the Java service.
Is this the right approach?
You can use schema validation, but you’ll probably need to define the 10 fields to not have any length limits–otherwise the validation will fail and you’ll have to have code that determines which failures are real and which can be ignored.
Here is what I’d do:
1. Define the xsd that has all the field definitions the way they should be, with length limits and such.
2. Provide that xsd (or wsdl) to partners and tell them to conform.
3. From the xsd, create the corresponding IS schema and doc type.
4. When an incoming document fails validation, reply to the partner with an error and reject the document. Don’t truncate fields and don’t try to fix the invalid document–it’s the partner’s job to conform to the xsd.
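For the validate-and-reject step, inside IS you’d use the built-in schema validation service, but the equivalent in plain Java with standard JAXP looks something like this (the schema and document contents here are illustrative, not your actual xsd):

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class StrictValidation {
    // Returns null if the document conforms to the xsd, otherwise the
    // parser's error message (suitable for the reply to the partner).
    public static String validate(String xsd, String xml) {
        try {
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new StreamSource(new StringReader(xsd)));
            Validator v = schema.newValidator();
            v.validate(new StreamSource(new StringReader(xml)));
            return null; // document is valid
        } catch (Exception e) {
            return e.getMessage(); // validation (or parse) failure
        }
    }

    public static void main(String[] args) {
        String xsd =
            "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
          + "  <xs:element name='id'>"
          + "    <xs:simpleType><xs:restriction base='xs:string'>"
          + "      <xs:maxLength value='3'/>"
          + "    </xs:restriction></xs:simpleType>"
          + "  </xs:element>"
          + "</xs:schema>";
        System.out.println(validate(xsd, "<id>ab</id>"));     // valid
        System.out.println(validate(xsd, "<id>abcdef</id>")); // length error
    }
}
```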
If you want to allow documents from partners to have some fields that are too long for one or more of the target systems that will receive that data:
1. Define the xsd that has all the field definitions the way they should be, with length limits and such. For the fields where you will truncate, don’t specify a max length.
2. Provide that xsd (or wsdl) to partners and tell them to conform.
3. From the xsd, create the corresponding IS schema and doc type.
4. When an incoming document fails validation, reply to the partner with an error and reject the document. The fields that are defined as unlimited length will pass the validation.
5. During mapping to the target document(s), truncate the fields that need to be truncated. Log the orders that have fields that are too long if you want to, but the value of doing so is probably limited–you can’t tell your partner that they’re doing things wrong if they are conforming to your xsd.
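The truncate-during-mapping step above can be sketched as a small helper. The field/limit map and the idea of returning a flag so the caller knows to log the order number are assumptions about how you’d wire it in, not IS specifics:

```java
import java.util.Map;

public class TruncationMapper {
    // Truncates each listed field in place to its limit; returns true if
    // any field was shortened, so the caller can log the order if desired.
    public static boolean truncateFields(Map<String, String> order,
                                         Map<String, Integer> limits) {
        boolean truncated = false;
        for (Map.Entry<String, Integer> e : limits.entrySet()) {
            String value = order.get(e.getKey());
            if (value != null && value.length() > e.getValue()) {
                order.put(e.getKey(), value.substring(0, e.getValue()));
                truncated = true;
            }
        }
        return truncated;
    }
}
```

In the mapping, you’d call this with the 10 field limits (read from the xsd or a config file) and, when it returns true, append the order number to your log file.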
In other words, use validation the way it was intended. Don’t try to implement “validate this but not that”–it introduces unnecessary complexity.
As for “…performance might be much faster”: you’re probably optimizing too soon. Chances are, a FLOW service will be more than sufficient. There are many threads on the forums about this. If performance were truly the only consideration, you’d be writing all this in C/C++, not in Java, and certainly not in an integration server where you don’t have much control over how your Java methods are managed and executed.