Hi, I am new to webMethods. Is it possible to store certain parameters within a flow when the flow is invoked, so that they can be reused when the flow is re-invoked?
The problem is that the same file gets sent and processed one run after another. I want to avoid this by checking in the flow whether the file was just processed through the flow.
Hi,
Re-invoked in the same transaction or at a later time, after a day or week?
You can add a flag or a temp variable, set it to ‘Y’ after processing the first time, and check for this flag before processing a second time.
HTH.
S
Thanks smathangi for your reply.
How and where do I store this flag, and how do I retrieve it back? I guess once the flow service finishes execution, all the variables are dropped (garbage collected).
I figured I could use clearPipeline with a preserve list to keep my variables, but when I re-invoke the same flow, the variables in the preserve list somehow turn out not to have been stored.
Any more ideas, or steps that I need to follow?
Hi,
What’s your exact requirement? What kind of file are you trying to process? Does this flow have any Trading Networks link, i.e. could TN duplicate detection be used?
Please elaborate, then we can suggest a solution. Unfortunately I haven’t understood much from the problem description.
-nD
Unless you’re storing that variable somewhere persistent (disk, a static memory variable, a pub.storage store, etc.), this will not work. Simply setting a pipeline variable will not carry over to subsequent runs of a service.
If you need to track files to know whether you’ve already processed them, then you’ll need to store that info somewhere, preferably a DB table.
Can you describe your process? Are you picking up files from a directory? If so, what do you do with them after you’ve processed them? The easiest thing may be to simply move or delete the file so that it doesn’t get picked up in the next run.
Thanks for the reply.
No, this is not coming through TN; this is A2A. The sending system cannot stop sending duplicates/repeats (something that will take longer than expected to fix), so in the meantime we are trying to detect the duplicate file, or a second run of the same file, in the flow service.
I don’t think I can preserve the pipeline beyond the current session, so I am looking for alternate options.
Files are received from one system (picked up from a directory), processed in webMethods, and delivered to SAP as an IDoc. A repeat IDoc with the same PO number creates an incident in the production landscape, unnecessarily increasing the number of incidents.
IS provides facilities for detecting duplicates. You might need to modify your integration a bit to let it do so. Refer to the docs for details.
Probably the easiest approach is to simply record the PO numbers that you process in a DB table. At the start of the service, look up the PO number: if it is present, stop; if not, proceed and then record the PO number in the table.
Depending upon volume and how long this solution stays in place, you may need a scheme for purging old/expired PO numbers from the table to reclaim space.
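To make that concrete, here is a minimal sketch of the DB check in plain Java (the PROCESSED_PO table and direct JDBC usage are my assumptions; inside IS you would more likely wire this up with JDBC adapter services). Inserting first and letting a primary-key constraint reject duplicates avoids the race where two concurrent runs both pass a select-then-insert check:

```java
import java.sql.*;

public class PoDedup {
    // Assumed table (hypothetical name/columns):
    //   CREATE TABLE PROCESSED_PO (PO_NUMBER VARCHAR(32) PRIMARY KEY,
    //                              PROCESSED_AT TIMESTAMP)
    // Returns true if the PO number was newly recorded (safe to process),
    // false if it was already present (duplicate).
    public static boolean recordIfNew(Connection con, String poNumber) throws SQLException {
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO PROCESSED_PO (PO_NUMBER, PROCESSED_AT) VALUES (?, CURRENT_TIMESTAMP)")) {
            ins.setString(1, poNumber);
            ins.executeUpdate();
            return true;   // first time this PO has been seen
        } catch (SQLIntegrityConstraintViolationException dup) {
            // Some JDBC drivers report this as a plain SQLException with
            // SQLState "23xxx" instead; adjust the catch accordingly.
            return false;  // primary-key violation: already processed
        }
    }
}
```

The purge mentioned above then becomes a scheduled DELETE against PROCESSED_AT.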
Instead of storing in a DB, I am thinking of some temporary storage using utility services like the PSUtilities services storeDataToMemory whenever a file is processed, getDataFromMemory to verify whether the file was processed previously, and removeDataFromMemory (need to think about when to call this).
What say?
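Roughly, this is the idea, sketched as a Java helper (the class and method names below are mine for illustration, not the actual PSUtilities services; note the entries live only as long as the JVM):

```java
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryPoCache {
    // A static map survives across service invocations for the life of the JVM only:
    // everything is lost on IS restart and is NOT shared across a cluster.
    private static final ConcurrentHashMap<String, Long> seen = new ConcurrentHashMap<>();

    // Returns true if the PO number is new, recording it with the current time.
    public static boolean markIfNew(String poNumber) {
        return seen.putIfAbsent(poNumber, System.currentTimeMillis()) == null;
    }

    // One answer to the "when to remove?" question: a scheduled service calls
    // this to drop entries older than maxAgeMillis.
    public static void purgeOlderThan(long maxAgeMillis) {
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        seen.values().removeIf(ts -> ts < cutoff);
    }
}
```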
How many PO numbers do you need to keep track of and for how long? What happens if IS goes down for a brief time?
I think you need to get away from trying to store the PO numbers in memory. Using pub.storage services isn’t advisable either as that facility isn’t intended for data such as this.
What are the issues/concerns with storing the numbers in a DB table?
If you are using a file port, one of the configuration parameters is the number of invocation threads.
If you set 1 thread, the same file won’t be processed twice.
Or are you getting the file again after a period of time?
Regards.
My understanding is that it’s not the same file. It is a new file but contains a PO number that has been seen and processed before. The source system is spitting out duplicates.
…it’s not the same file name; the content repeats the same PO number that was just processed in the previous attempt, resulting in a duplicate-processing error.
You can use a HashMap/Hashtable, but this is not recommended in production, since it is not shared among clustered IS instances, and if IS goes down you cannot retrieve the values from the HashMap/Hashtable after restart. What Rob suggested is best. Alternatively, instead of using a DB, you can store the PO numbers in a file and refer to this file before processing the request.
Regards,
Sam
Please disregard my previous post.
You can go for file storage if you are sure that duplicates always come from just the previous file; if not, it is better to take Rob’s suggestion and go for a DB table.
Please be advised that if you save this information in memory (using any hash table or data store), it will be gone on server restart, so your vital information will be lost.
Saving in a DB looks like a good option to me as well…
-nD
To clarify for those coming upon this thread in the future…
When a file polling port is used properly, there is no risk of a single file being processed multiple times, even when “Maximum Number of Invocation Threads” is greater than 1. The file management facilities of the polling port ensure this.
Please have a look at the suggestions below:
Create a file in the file system and keep appending new PO numbers to it.
On every run, match the incoming PO number against the PO numbers stored in that temp file.
On a successful match, do nothing or throw an error;
else keep processing (see the sketch after this post).
If the duplicate PO only ever comes in the very next file,
then you can use the pub.storage services instead, but use them carefully in a clustered environment or you will see lock issues on the storage and will not be able to release the lock.
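For completeness, a minimal sketch of the file-based check described above (the file path is an assumption; point it at a directory IS can write to). The synchronized guard only covers a single IS instance, so the clustering caveats raised earlier in the thread still apply:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.HashSet;
import java.util.Set;

public class FilePoTracker {
    // Hypothetical location of the temp file holding one PO number per line.
    private static final Path STORE = Paths.get("/opt/webmethods/data/processed_po.txt");

    // Returns true if poNumber is new (and appends it to the file);
    // false if it was already recorded, i.e. a duplicate.
    public static synchronized boolean markIfNew(String poNumber) throws IOException {
        Set<String> seen = new HashSet<>();
        if (Files.exists(STORE)) {
            seen.addAll(Files.readAllLines(STORE, StandardCharsets.UTF_8));
        }
        if (seen.contains(poNumber)) {
            return false;  // successful match: duplicate, do nothing or throw
        }
        Files.write(STORE, (poNumber + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return true;       // new PO: recorded, keep processing
    }
}
```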