Queuing engine for IS

Hello all,

Do you guys have any experience creating a ‘queuing engine’ in IS, so that the process of sending iDOCs from R/3 to IS and then on to a database/external server is split into two parts?

The steps would be as follows:

(a) part 1 – send iDOCs to IS and store on BC

(b) part 2 – process on IS (e.g. convert to XML) and send to database/external server

Regards,

Steve.

Have you looked at using Broker for this? In this case, the docs from R/3 would go to IS, be transformed as necessary, and then be published to the Broker. You could then have one or more subscriptions, each an independent process, that would do whatever is needed, such as write to a DB, send via HTTP, etc.

When you say “send iDOCs to IS and store on BC”, what do you mean specifically? Is “BC” in this case Business Connector? Do you have a BC installation (which is an SAP-branded version of IS) that is separate from a wM IS? Just trying to understand your “part 1” completely.

The broker would not be an option for us due to architecture limitations.

In terms of the first step, this would be sending the iDOCs from an SAP R/3 system to an IS server.

This step would end when the iDOCs are received by the IS, which would then release the threads from the associated RFC connection.

How does BC fit into this? Do you have separate BC and IS installations?

We have IS with an SAP adapter for iDOC processing from R/3.

Hope this helps to clarify.

I see. So no BC in the picture. Just one IS and you want to release the SAP->IS connection as soon as possible, correct?

You could use TN to help with transformation and delivery, but I imagine that’s not something you’d like to pursue.

Probably the simplest approach is to invoke services in new threads. The IS Java API has facilities to do that (Service.doThreadInvoke). That said, something is nagging me that says I’m forgetting something obvious. Perhaps someone else can chime in?

Anyway, what you’d do is store the doc as you mentioned in “part 1”, however you need to do that. Then invoke a service via a thread to kick off transformation and writing to the DB.
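That two-part hand-off can be sketched in plain Python (the names `receive_idoc`, `worker`, and `deliver` are made up for illustration; in IS the hand-off would be `Service.doThreadInvoke` from the Java API rather than a Python queue):

```python
import queue
import threading

work_queue = queue.Queue()

delivered = []
def deliver(payload):
    # Stand-in for the DB insert / external-server send of "part 2".
    delivered.append(payload)

def receive_idoc(doc):
    """Part 1: store/enqueue the doc and return immediately,
    releasing the caller's (SAP -> IS) connection."""
    work_queue.put(doc)   # in IS, this is where doThreadInvoke would fire

def worker():
    """Part 2: transform and deliver in the background."""
    while True:
        doc = work_queue.get()
        transformed = doc.upper()   # stand-in for the XML conversion
        deliver(transformed)
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

receive_idoc("idoc-0001")   # returns at once; caller's thread is free
work_queue.join()           # demo only: wait for background processing
```

The point of the sketch is just the decoupling: the receiving call returns as soon as the doc is queued, and transformation/delivery happens on a separate thread.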

Steve,

I am just curious as to what the “architecture limitations” are that would keep you from being able to use the Broker. Based on the problem description, the Broker would fit perfectly.

Assuming that you truly cannot use the Broker (and based on that, I’m assuming that 3rd party “queuing engines” are also out of the question), all that is left is for you to build your own. Now, queueing is a concept that can be implemented in many ways, using many different technologies, so it will depend on how much time you want to put into it.

One very simple way would be to use a file as your queue. Get the data, append it to a file, and close the connection. Then have a scheduled service read and process the data, then delete the file. You need to be careful here and account for the reader and writer accessing the file at the same time. This can be easily addressed by renaming or moving the file before doing anything with it.
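A minimal sketch of that file-based queue in Python (filenames and helper names are illustrative; an actual IS implementation would do the same steps in flow/Java services):

```python
import os
import tempfile

# Hypothetical queue-file location (temp dir keeps the demo self-contained)
QUEUE_FILE = os.path.join(tempfile.mkdtemp(), "queue.txt")

def enqueue(record):
    """Writer: append a record and close immediately."""
    with open(QUEUE_FILE, "a") as f:
        f.write(record + "\n")

def drain():
    """Scheduled reader: rename first, so the writer can keep appending
    to a fresh file while we process, then delete the processed file."""
    if not os.path.exists(QUEUE_FILE):
        return []
    work_file = QUEUE_FILE + ".processing"
    os.rename(QUEUE_FILE, work_file)   # atomic on the same filesystem
    with open(work_file) as f:
        records = [line.rstrip("\n") for line in f]
    os.remove(work_file)
    return records
```

The rename is what handles the reader/writer overlap mentioned above: once the file is renamed, new appends go to a brand-new queue file, and the reader works on a snapshot nothing else is touching.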

If you have a database table available, you can use a similar concept to the one above, but insert rows into the table (instead of appending to a file), and read and delete rows from the table (instead of deleting the file).
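The same pattern against a table, sketched here with SQLite purely for illustration (the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the real database
conn.execute("CREATE TABLE doc_queue (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue(payload):
    """Producer: insert a row, commit, and the connection is released."""
    conn.execute("INSERT INTO doc_queue (payload) VALUES (?)", (payload,))
    conn.commit()

def drain():
    """Scheduled consumer: read the queued rows, then delete exactly the
    rows that were read (not anything inserted in the meantime)."""
    rows = conn.execute(
        "SELECT id, payload FROM doc_queue ORDER BY id"
    ).fetchall()
    if rows:
        last_id = rows[-1][0]
        conn.execute("DELETE FROM doc_queue WHERE id <= ?", (last_id,))
        conn.commit()
    return [payload for _, payload in rows]
```

Deleting by the last-read id (rather than `DELETE` with no condition) is the table analogue of renaming the file first: rows enqueued while the consumer is running survive for the next pass.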

If you need other suggestions, I could tell you a couple more. None will be as good as using the Broker though. :-)

  • Percio

What about using IS “local publish”? That is a queueing mechanism that is better than writing your own, but not nearly so good as having the Broker.

Mark