We have a requirement where the subscriber should FTP in such a way that it can append to an existing file. How can I achieve this? What is the best method in wM?
Secondly, if a publisher has published N canonicals, each identified by a different ID, can a single subscriber read all N canonicals and create a single flat file?
Does wM allow scheduling a subscriber instead of using the triggering mechanism?
Can you elaborate on what you mean by “subscriber to FTP?” Do you mean an IS service invoked via FTP? If so, the service can look for and open an existing file and append to it.
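If it helps, here is a minimal plain-Java sketch of the append step (ordinary java.io, not a particular wM built-in service; the targetFile path and record parameter are just placeholders for illustration):

    import java.io.FileWriter;
    import java.io.IOException;

    public class AppendExample {
        // Appends one record to an existing file, creating it if absent.
        // The second FileWriter argument 'true' selects append mode.
        public static void appendRecord(String targetFile, String record) throws IOException {
            try (FileWriter writer = new FileWriter(targetFile, true)) {
                writer.write(record);
                writer.write(System.lineSeparator());
            }
        }
    }

The same open-in-append-mode idea applies whether the service is invoked via FTP or any other entry point.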
Yes. You’ll need a way to determine when all published documents have been received. One approach is to have each document track its sequence number and the total count. For example, if there are 10 documents, the first would indicate 1 of 10, the second 2 of 10, and so on. When all have been received, the file is complete for further processing.
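A minimal Java sketch of that counting idea, assuming each canonical carries a sequence number and a total count (the field names and method signatures below are assumptions for illustration, not part of any canonical definition):

    import java.util.HashSet;
    import java.util.Set;

    public class BatchTracker {
        private final Set<Integer> received = new HashSet<>();
        private int totalCount = -1;

        // Records one document; returns true once every document in the batch has arrived.
        public synchronized boolean record(int sequenceNumber, int total) {
            totalCount = total;
            received.add(sequenceNumber);
            return received.size() == totalCount;
        }
    }

When record(...) returns true, the subscriber knows the batch is complete and can build the single flat file. In practice you would keep one tracker per batch/correlation ID so that concurrent batches do not mix.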
A “last one” flag might work, but that depends on document order and timing if multiple processing threads are possible. If everything is serialized throughout the process, then this can work.
Lastly, a timing approach can work. Write all received documents to a file. At specific times of the day, or after a set time has passed since the first document was received, process the file on the assumption that everything received so far is everything that will be processed for now.
Yes, services can be scheduled instead of fired by a Broker/Local/JMS trigger.
Just a quick scenario question: what will happen to the rest of the docs in the queue if, by chance, the second scheduler kicks in and disables the trigger while the actual trigger service is in the middle of processing? Will the document feed to the trigger service stop if this happens?
Is there any remote possibility that the queued-up docs get lost if the Broker restarts before the trigger is re-enabled?
Let the trigger service process the docs as they come in, but write the data to temp files on the local drive with a unique ID (doc ID/timestamp) in the file name.
Schedule the append service either at a lower frequency or once a day at EOD; it picks up all the temp files and appends them into a single master file. If data continuity is important, sort on the unique ID in the file name while picking them up; a rough sketch of that step follows below.
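A plain-Java sketch of that scheduled consolidation step, assuming the temp files end in .tmp and that sorting on the file name (the unique ID) gives the right order (the directory, suffix, and method names below are placeholders):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class AppendScheduledTask {
        // Picks up all temp files, sorts them by file name (the unique id),
        // appends their contents to the master file, then deletes them.
        public static void consolidate(Path tempDir, Path masterFile) throws IOException {
            List<Path> tempFiles;
            try (Stream<Path> files = Files.list(tempDir)) {
                tempFiles = files.filter(p -> p.toString().endsWith(".tmp"))
                                 .sorted(Comparator.comparing(p -> p.getFileName().toString()))
                                 .collect(Collectors.toList());
            }
            for (Path temp : tempFiles) {
                byte[] data = Files.readAllBytes(temp);
                Files.write(masterFile, data,
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                Files.delete(temp);   // remove the temp file once its data is in the master
            }
        }
    }

If the unique ID is a timestamp, zero-padding it keeps the lexicographic sort consistent with chronological order.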
Any threads in progress will complete. New threads will not be started.
Recall that there are two parts to a trigger: document retrieval and document processing. Suspending the first stops retrieval of docs from the Broker; suspending the second stops processing from being kicked off for documents already in the local queue. The common approach is to suspend retrieval and leave document processing enabled, which allows the local queue to drain.
Yes.
Volatile documents will be lost on Broker restart. Guaranteed documents will not.