How do we add a transaction bracket around the messages in a Broker queue? How do we group a set of messages? Is there a concept of a groupId and message sequence similar to an MQ server implementation? Also, is there a concept of a backout queue?
My Broker client subscribes to documents on the Broker and processes them further (data transformations and insertion into a database).
When the Broker client encounters a failure while processing a particular document in a group, I want to roll back the entire group of messages and send all of them back to the original queue or to a backout queue.
pubId - Client ID of the event’s publisher, i.e., the identifier of the client that published the message. If the publishing client is connected to a different Broker than the recipient, the ID will be fully qualified (prefixed with the name of the publisher’s Broker).
pubSeqn - A 64-bit value representing the event’s publish sequence number. The use of publish sequence numbers is described in Chapter 11.
Neither of these will be useful for what you’re describing.
Review the aforementioned PDF. Chapter 8, Transaction Semantics, should be useful, as would the reference material for BrokerTransactionalClient.
A tidbit that may be helpful is that there is no notion of “requeuing” an event. Once acknowledged (explicitly or implicitly), the event is removed from the queue. To leave it on the queue for later, you basically return an error/failure status. Broker has no notion of a rollback/backout queue, at least via the Java API.
Another approach might be to interact with the Broker via its JMS provider interface. I think there may be more facilities there for doing what you describe.
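To make the JMS idea concrete, here is a minimal sketch of a transacted JMS consumer (plain javax.jms, nothing Broker-specific). The JNDI lookup names and the batch size are made up for illustration; the point is simply that a transacted session defers acknowledgements until commit(), so rollback() leaves the whole batch redeliverable, which is the closest thing to the group rollback you described.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class TransactedConsumer {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        // JNDI names are placeholders; use whatever your Broker JMS provider exposes.
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("BrokerConnectionFactory");
        Queue queue = (Queue) jndi.lookup("orders.queue");

        Connection connection = factory.createConnection();
        // 'true' requests a transacted session; acknowledgements are deferred until commit().
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();

        try {
            // Receive and process a group of related messages as one unit of work.
            for (int i = 0; i < 10; i++) {
                Message msg = consumer.receive(5000);
                if (msg == null) {
                    break;
                }
                process(msg); // your transformation / database insert
            }
            // Commit acknowledges every message received in this session.
            session.commit();
        } catch (Exception e) {
            // Nothing is acknowledged; all messages in the batch become redeliverable.
            session.rollback();
        } finally {
            connection.close();
        }
    }

    private static void process(Message msg) throws Exception {
        // Placeholder for the real transformation and database work.
    }
}
```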
Thank you for the response. I will go through the Java API docs for Broker as you mentioned. Also, what techniques are available if the publishing and subscribing clients are flow services on the Integration Server?
Using IS as the publisher and subscriber makes things a bit more interesting. There’s less flexibility in terms of the interaction with Broker: IS manages a lot of the details and there isn’t any way to override what it does.
Managing a group of published documents introduces complexity. If possible, try to avoid publishing individual documents for a group of items that need to be processed atomically. Publish them in a single document. For example, I’ve seen cases where line items in a PO are published individually. Putting them back together on the receiving end(s) in a fail-safe manner in the face of outages and failures is not trivial.
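To illustrate the single-document approach, here is a rough sketch of a Java service helper that folds all the line items of a PO into one publishable document using the IData API. The field names (poNumber, sku, lineItems, and so on) are invented for the example; you would still publish the resulting document with pub.publish:publish or the equivalent.

```java
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public class PoDocumentBuilder {

    // Assemble all line items into one publishable document so the group
    // is inherently atomic. Field names here are illustrative only.
    public static IData buildPurchaseOrder(String poNumber, String[] skus, int[] quantities) {
        IData[] lineItems = new IData[skus.length];
        for (int i = 0; i < skus.length; i++) {
            IData item = IDataFactory.create();
            IDataCursor ic = item.getCursor();
            IDataUtil.put(ic, "sku", skus[i]);
            IDataUtil.put(ic, "quantity", Integer.toString(quantities[i]));
            ic.destroy();
            lineItems[i] = item;
        }

        IData po = IDataFactory.create();
        IDataCursor pc = po.getCursor();
        IDataUtil.put(pc, "poNumber", poNumber);
        IDataUtil.put(pc, "lineItems", lineItems);
        pc.destroy();
        return po;
    }
}
```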
If the data is large and thus doesn’t lend itself to publishing through Broker, then you might consider other techniques. For example, replicating product catalog information usually involves large amounts of data. Publishing the catalog as a single document isn’t reasonable. But publishing all the items in the catalog individually and then ensuring that all items get processed by all subscribers reliably as a single transaction isn’t reasonable either.
Another approach is to place the large item in a location that can be accessed by all subscribers. Then publish a document that indicates “Catalog update available at \\server\share\filename0010.dat”. The subscribers then retrieve the file and process it in a manner most suitable for what the subscriber needs to do.
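A quick sketch of that pattern from the subscriber’s side, assuming the notification document carries nothing but the file location (the path and field name are placeholders):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class CatalogUpdateSubscriber {

    // Handle a small notification document such as:
    //   { "updateType": "catalog", "location": "\\\\server\\share\\filename0010.dat" }
    // The heavy data never travels through Broker.
    public static void onCatalogUpdate(String location) throws IOException {
        Path file = Paths.get(location);
        try (Stream<String> lines = Files.lines(file)) {
            // Process the catalog in whatever way suits this subscriber,
            // e.g. batched inserts into its own database.
            lines.forEach(CatalogUpdateSubscriber::applyCatalogLine);
        }
    }

    private static void applyCatalogLine(String line) {
        // Placeholder for the subscriber-specific processing.
    }
}
```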
If you cannot put all the documents together in a single doc, but still need to treat them as an atomic group, you’ll need to put some control information into the document and/or the envelope. You can use the activation (essentially a groupId), appSeqn and appLastSeqn envelope fields to control things. I don’t know for sure how you can use Broker as your “rollback” for all the events. The thing to overcome is that IS manages the retrieval and acknowledgement of documents from Broker. The only control you have on the subscriber side is to throwExceptionForRetry, but this only “naks” the document being processed, not an entire group of docs.
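For a feel of the custom work involved, here is a sketch of buffering incoming documents by activation until the group is complete, assuming appSeqn runs from 1 to appLastSeqn for the group and no duplicates arrive. Persistence of the buffer, timeouts for incomplete groups, and the multi-IS-instance issue mentioned below are all left out.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GroupAssembler {

    // Buffered documents, keyed by the activation id that marks the group.
    private final Map<String, List<Object>> groups = new ConcurrentHashMap<>();

    /**
     * Called once per received document with its envelope fields.
     * Returns the complete group when the last member has arrived,
     * or null if the group is still incomplete.
     */
    public List<Object> add(String activation, long appSeqn, long appLastSeqn, Object doc) {
        List<Object> group = groups.computeIfAbsent(activation, k -> new ArrayList<>());
        synchronized (group) {
            group.add(doc);
            // Assumes sequence numbers 1..appLastSeqn with no duplicates.
            if (group.size() == appLastSeqn) {
                groups.remove(activation);
                return group; // caller processes all docs in one local DB transaction
            }
        }
        return null;
    }
}
```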
There isn’t a way for an IS subscriber to get a group of docs and process them atomically without some custom work, AFAIK. A join in a trigger provides a way to group multiple docs but 1) the joined docs must be of different types; 2) only the first instance of a given doc type will be joined (can’t join multiple instances of the same doc type).
Add in the possibility that a production environment will have multiple IS instances connected to a single Broker, processing documents in parallel, and the solution becomes even more complex.