Review the PublishSubscribeDevGuide.pdf as well.
Using IS as both the publisher and the subscriber makes things a bit more interesting. There's less flexibility in the interaction with Broker: IS manages a lot of the details, and there isn't any way to override what it does.
Managing a group of published documents introduces complexity. If possible, avoid publishing individual documents for a group of items that must be processed atomically; publish them as a single document instead. For example, I've seen cases where the line items of a PO are published individually. Reassembling them on the receiving end(s) in a fail-safe manner, in the face of outages and failures, is not trivial.
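The "single document" approach can be sketched like this: the PO and all of its line items travel together as one publishable unit, so subscribers never have to reassemble a group. These record types and names are illustrative only, not a canonical document schema.

```java
import java.util.List;

// Hypothetical sketch: one PO document carrying all of its line items,
// published as a single unit instead of one document per line item.
public class PurchaseOrderDoc {
    public record LineItem(String sku, int qty) {}
    public record PurchaseOrder(String poNumber, List<LineItem> items) {}

    // Build the whole order as one publishable, atomically-processed document
    public static PurchaseOrder build(String poNumber, List<LineItem> items) {
        return new PurchaseOrder(poNumber, List.copyOf(items));
    }
}
```

Because everything a subscriber needs arrives in one document, a single ack or retry covers the whole group and no cross-document correlation is required.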
If the data is large and thus doesn't lend itself to publishing through Broker, consider other techniques. For example, replicating product catalog information usually involves large amounts of data. Publishing the catalog as a single document isn't reasonable, but publishing all the items in the catalog individually and then ensuring that every subscriber reliably processes all of them as a single transaction isn't reasonable either.
Another approach is to place the large item in a location that can be accessed by all subscribers, then publish a small document that indicates "Catalog update available at \\server\share\filename0010.dat". Each subscriber retrieves the file and processes it in the manner most suitable for what that subscriber needs to do.
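This is essentially a claim-check pattern, and a minimal sketch might look like the following. The class and method names (publishNotification, retrieveCatalog) are made up for illustration; they are not IS built-in services, and in practice the "publish" step would hand the small pointer document to Broker rather than return it.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

// Hypothetical claim-check sketch: the large payload is written to a shared
// location and only a small pointer document goes through Broker.
public class CatalogClaimCheck {

    // Publisher side: write the payload, then build the small notification
    // document that would actually be published.
    public static Map<String, String> publishNotification(
            Path shareDir, String name, byte[] payload) throws java.io.IOException {
        Path file = shareDir.resolve(name);
        Files.write(file, payload);
        return Map.of("event", "catalogUpdate", "location", file.toString());
    }

    // Subscriber side: follow the pointer in the document and fetch the payload.
    public static byte[] retrieveCatalog(Map<String, String> doc)
            throws java.io.IOException {
        return Files.readAllBytes(Path.of(doc.get("location")));
    }
}
```

The trade-off is that the shared location becomes part of the contract: it must be reachable from every subscriber, and you need a cleanup/retention policy for the files.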
If you cannot put all the documents together in a single doc but still need to treat them as an atomic group, you'll need to put some control information into the document and/or the envelope. You can use the activation (essentially a group ID), appSeqn, and appLastSeqn envelope fields to control things. I don't know for certain how you could use Broker as your "rollback" for all the events. The hurdle to overcome is that IS manages the retrieval and acknowledgement of documents from Broker. The only control you have on the subscriber side is throwExceptionForRetry, but that only "naks" the document currently being processed, not the entire group of docs.
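The subscriber-side bookkeeping that the envelope fields enable can be sketched roughly as below. This is a plain-Java illustration of the idea, not a webMethods API: documents sharing an activation (group ID) are buffered, and the group is released in sequence order only once every appSeqn up to appLastSeqn has arrived. Real code would also have to persist the buffer to survive restarts, which is exactly where the "not trivial" part lives.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch: reassembling a group of published documents using the
// activation (group id), appSeqn, and appLastSeqn envelope fields.
public class GroupTracker {
    // buffered documents per activation id, keyed by sequence number
    private final Map<String, SortedMap<Integer, String>> groups = new HashMap<>();

    /**
     * Record one document of a group. Returns the complete, ordered group once
     * the last expected document has arrived; otherwise returns null (the
     * caller might retry, wait, or persist state at that point).
     */
    public List<String> add(String activation, int appSeqn, int appLastSeqn, String doc) {
        SortedMap<Integer, String> group =
            groups.computeIfAbsent(activation, k -> new TreeMap<>());
        group.put(appSeqn, doc);
        // complete only when all sequence numbers 1..appLastSeqn are present
        if (group.size() == appLastSeqn) {
            groups.remove(activation);
            return new ArrayList<>(group.values());
        }
        return null;
    }
}
```

Note that this buffers state in memory on one subscriber, which is precisely what breaks down when multiple IS instances share the subscription, as discussed below.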
There isn't a way for an IS subscriber to get a group of docs and process them atomically without some custom work, AFAIK. A join in a trigger provides a way to group multiple docs, but 1) the joined docs must be of different types, and 2) only the first instance of a given doc type will be joined (you can't join multiple instances of the same doc type).
Add in the possibility that a production environment will have multiple IS instances connected to a single Broker, processing documents in parallel, and the solution becomes even more complex.