1) One document is published to the Broker, and there are 10 subscribers. 5 of the subscribers must process the published document regardless of time, but the other 5 must process it only at 8 AM every day. How do I handle this scenario?
2) Suppose a flat file schema has both a record with a record ID and a record with no ID…
ex…
Assuming webM is receiving data thru FTP.
One way is to add a timestamp field to your publishable document and populate it in your publishing service. Then, for the subscriptions that are to be executed at 8 AM, have their trigger filters evaluate the timestamp field: if it is 8 AM, execute the service; otherwise the document is discarded. But this may not be a good solution. What if a decision is made to change the schedule from 8 AM to 10 AM, and so on? Each time, you would need to go back and change your trigger filter.
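The filter logic described above can be sketched in plain Python (webMethods trigger filters are configured, not coded, so this is only an illustration; the `timestamp` field name and the hour constant are assumptions):

```python
from datetime import datetime

# Hard-coded scheduled hour -- the weakness noted above: if the schedule
# moves from 8 AM to 10 AM, this value (i.e. the trigger filter) must change.
PROCESS_HOUR = 8

def should_process(doc: dict) -> bool:
    """Mimic a trigger filter that only passes the document when its
    timestamp (set by the publishing service) falls in the scheduled hour."""
    hour = datetime.fromisoformat(doc["timestamp"]).hour
    return hour == PROCESS_HOUR

print(should_process({"timestamp": "2024-05-01T08:15:00"}))  # True
print(should_process({"timestamp": "2024-05-01T11:00:00"}))  # False
```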
So the other approach (and I guess the better one) is to separate your integrations so that the source system sends two files with different names, and you handle them through separate publishing services and separate publishable documents (or the same publishable document, if you can set a flag, say the filename, and have the subscription triggers evaluate that flag). This is a more complex approach, but this way the source system (outside webM) has full control of the data it generates.
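The filename-flag variant can be sketched the same way (again plain Python; the filenames and group labels below are hypothetical):

```python
# Hypothetical filename flags the source system would use for its two files.
ANYTIME_FILE = "orders_anytime.txt"
SCHEDULED_FILE = "orders_8am.txt"

def route(doc: dict) -> str:
    """Decide which subscriber group's trigger filter matches the published
    document, based on a filename flag carried in the document."""
    flag = doc.get("filename", "")
    if flag == ANYTIME_FILE:
        return "anytime-subscribers"
    if flag == SCHEDULED_FILE:
        return "8am-subscribers"
    return "discard"

print(route({"filename": "orders_8am.txt"}))      # 8am-subscribers
print(route({"filename": "orders_anytime.txt"}))  # anytime-subscribers
```

Note that the scheduling knowledge now lives in the source system's file naming, not in the trigger filter, which is the control point the post argues for.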
I guess you can create a schema with 'recordWithNoID', assuming that there is no record ID.
Option 1
5 subscribers process the published doc irrespective of time: no changes.
5 subscribers must process that doc only at 8 AM:
Schedule a flow service to suspend processing of the 5 subscriber triggers (let's say after 10 AM) every day.
Schedule a flow service to resume processing of the 5 subscriber triggers at 8 AM every day.
Drawback: threads and memory will be occupied while documents wait on the suspended triggers.
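The resume/suspend window in option 1 amounts to the following check (a plain-Python sketch of the schedule, not the actual scheduler or trigger-management calls):

```python
def trigger_active(hour: int, resume_hour: int = 8, suspend_hour: int = 10) -> bool:
    """True while the scheduled resume job (8 AM) has fired and the
    scheduled suspend job (10 AM) has not yet fired -- the window in
    which the 5 time-restricted triggers process documents."""
    return resume_hour <= hour < suspend_hour

print(trigger_active(9))   # True  (inside the 8-10 AM window)
print(trigger_active(11))  # False (trigger suspended again)
```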
Option 2
5 subscribers process the published doc irrespective of time: no changes.
5 subscribers must process that doc only at 8 AM:
Subscribe to the document and invoke a flow service that saves it to storage (a file or database).
Schedule a flow service to process the messages in that storage (file or database) at 8 AM every day.
Manually handle the status of the processing and clean up the records…
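The store-and-forward steps of option 2, including the manual status handling and cleanup, can be sketched like this (plain Python with an in-memory SQLite table; the table layout and status values are assumptions, not a webMethods API):

```python
import sqlite3

class DocStore:
    """Storage the subscribing flow service writes to, drained by the
    scheduled 8 AM service."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE docs (id INTEGER PRIMARY KEY, payload TEXT, status TEXT)"
        )

    def save(self, payload: str) -> None:
        # Subscriber's flow service: persist the document instead of
        # processing it immediately.
        self.conn.execute(
            "INSERT INTO docs (payload, status) VALUES (?, 'PENDING')", (payload,)
        )

    def drain(self, process) -> int:
        # Scheduled 8 AM flow service: process pending rows, mark them
        # DONE, then clean up -- the manual status handling noted above.
        rows = self.conn.execute(
            "SELECT id, payload FROM docs WHERE status='PENDING' ORDER BY id"
        ).fetchall()
        for doc_id, payload in rows:
            process(payload)
            self.conn.execute("UPDATE docs SET status='DONE' WHERE id=?", (doc_id,))
        self.conn.execute("DELETE FROM docs WHERE status='DONE'")
        return len(rows)

store = DocStore()
store.save("order-1")
store.save("order-2")
print(store.drain(print))  # processes both pending docs, prints 2
```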
There may be many ways to do this. But as I said in my previous note, you need to watch out for things like schedule changes, etc. Your integration needs to be as transparent as possible to the source and target, and should be doing mapping, formatting, etc., but not data control.
Anyway, that's my view. Act according to your constraints; any of the above solutions should work.
There’s no ‘sample flow services package involving above two scenarios’ .But you can download some sample packages from ‘advantage’ website. Please go thru them and post specific questions.