Need inputs for solving my problem

Hi,

My requirement is to watch a DB table: whenever an insert operation takes place, send the newly inserted data in an XML document by calling the web service provided by the target client and pushing the data to them.

My thought is to create an insert DB notification, poll the DB table and, whenever an insert operation takes place, read the inserted data, prepare an XML document and push it to the target client over a SOAP web service (I am the consumer of the web service). Now imagine that, for some reason, I am unable to reach the target server: how can I preserve the data I got from the DB notification, and how can I disable the notification so it stops polling the DB for a few minutes until the target server becomes available again? (My thought is to use a transient error handling technique to check the target server and, if it is not reachable, disable the notification.)

Kindly shed some light so I can step ahead further.

This seems like a typical use case for a JDBC Adapter notification.
In IS, try to create an Adapter Notification: select the JDBC Adapter, select Insert Notification, and select your connection name (if you haven’t created one yet, create it first).

For more details, check the JDBC Adapter Installation and User’s Guide.

Thanks wang for your reply.

I will take care of creating the notification and mapping those fields to the XML document. After this, when I hit the target server to push the data and the client (target server) isn’t able to accept it, how can I store/preserve this XML data? I need an efficient way to handle this situation. Kindly share any thoughts on this.

Hi,

My 2 cents:
Keep a unique ID for each transaction with the help of a UUID. When you are having issues with the target, convert the XML document to an xmlString and store it in the DB, then disable the DB notification to stop polling until you are able to make a handshake with the target server. Once things are back to normal, first read the data from the DB that was stored while the target was having issues and push it; once that is done, enable the notification and keep going.
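A minimal sketch of that staging step in plain JDBC (Java); the FAILED_PAYLOADS table name and its columns are assumptions you would create yourself, not anything IS or the adapter provides:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.UUID;

    public class FailedPayloadStore {

        // Stores one XML payload under a generated UUID so it can be replayed
        // later, once the target server is reachable again.
        public static String store(Connection con, String xmlString) throws SQLException {
            String id = UUID.randomUUID().toString();
            String sql = "INSERT INTO FAILED_PAYLOADS (ID, PAYLOAD, CREATED_AT) "
                       + "VALUES (?, ?, CURRENT_TIMESTAMP)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, id);
                ps.setString(2, xmlString);
                ps.executeUpdate();
            }
            return id;
        }
    }

The replay step would then SELECT from the same table ordered by CREATED_AT, push each row to the target, and delete the row once the push succeeds.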

Thanks,

Did you explore a basic notification instead of an insert DB notification? There is a difference that will suit your requirement.

Thanks a lot to everyone.

Mahesh, can you give more details on what you mean? If possible, can you share a rough design …

Hi All,

Now I have n number of clients whose details I am getting as part of one of the fields of the XML message. Let’s say
client1 is up,
client2 is down,
client3 is up,
client4 is up,
client6 is down,
…
client n is up.

If I disable the scheduler just because one client is down, then the other clients get impacted. I want to stop the flow towards the client which is down while the flow to the other clients which are up continues. Kindly help me handle this situation.

I am really thankful to all for your time.

This is a classic pub/sub scenario. Simply publish your message to the bus (i.e. Broker or UM) and have a subscriber/trigger per client. By default, your notification messages should already be published.

Percio

Thanks Percio for your inputs.

But if any new client gets added, then we have to add a new trigger and a new service to process the document. Is there a more optimal solution that doesn’t make me write any additional code when I have to support a new client?

Thanks,

Sure. There are multiple ways to skin a cat though, so here’s one possible solution:

First, create a publishable document that contains the data elements to be sent to the client + some client specific fields (e.g. URL, username, and password – or perhaps just a client ID that can be used to look up the other info from a config file)

Then implement something like this:

  1. Receive the data from the insert notification

  2. Retrieve the list of clients from a config file

  3. For each client: map the data from the notification + the client info into the publishable document above and publish it

  4. Use a concurrent trigger (with at least as many threads as potential clients) to subscribe to the message

  5. The trigger service then sends the request to the appropriate client using the information in the document

This approach gives you one trigger for all the clients. It assumes though that the requests can be received out of sequence (I don’t know enough about the data or how often it’s generated to know whether that’s a problem).
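To make steps 2 and 3 a bit more concrete, here is a rough Java sketch of the fan-out loop; the clients.properties layout, the field names, and the publish(...) hook are illustrative assumptions only (in Flow this would be a LOOP plus a MAP and pub.publish:publish):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    public class ClientFanOut {

        // clients.properties is assumed to look like:
        //   clients=client1,client2,client3
        //   client1.url=https://client1.example.com/receive
        //   client1.user=...
        //   client1.password=...
        public static void fanOut(String configPath, Map<String, Object> notificationData)
                throws IOException {
            Properties cfg = new Properties();
            try (FileInputStream in = new FileInputStream(configPath)) {
                cfg.load(in);
            }
            for (String client : cfg.getProperty("clients", "").split(",")) {
                client = client.trim();
                if (client.isEmpty()) continue;

                // One publishable document per client: notification data + client info.
                Map<String, Object> doc = new HashMap<>(notificationData);
                doc.put("clientId", client);
                doc.put("url", cfg.getProperty(client + ".url"));
                doc.put("user", cfg.getProperty(client + ".user"));
                doc.put("password", cfg.getProperty(client + ".password"));

                publish(doc); // stands in for the actual publish to Broker/UM
            }
        }

        private static void publish(Map<String, Object> doc) {
            // Hypothetical hook: in IS this is where the publishable document goes out.
            System.out.println("Publishing for client " + doc.get("clientId"));
        }
    }

On-boarding a new client then becomes a config change rather than a code change.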

Please note that you could achieve parallelism without using pub/sub, but with pub/sub you can take advantage of the trigger retry mechanism in case the client is offline.

If it’s an option, TN could also fit nicely into the picture.

Percio

By the way, I didn’t mention it before because it may have seemed obvious from my original comment, but for the sake of completeness:

Another approach is for you to publish the messages to a bus to which your clients have access. Your clients could then subscribe to the messages directly from there, for example using JMS. This would give you the most flexibility in terms of on-boarding/off-boarding new clients with very little effort, since your application would always send the data to one place regardless (i.e. the bus).
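As a rough illustration of what the client side could look like with plain JMS; the JNDI names ("connectionFactory", "insertNotifications") are assumptions and depend entirely on how the Broker/UM JNDI provider is configured:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import javax.naming.InitialContext;

    public class ClientSubscriber {

        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext(); // assumes a jndi.properties on the classpath
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("connectionFactory");
            Topic topic = (Topic) jndi.lookup("insertNotifications");

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // For guaranteed delivery while the client is offline, a durable subscriber
            // would be used instead of a plain consumer.
            MessageConsumer consumer = session.createConsumer(topic);

            consumer.setMessageListener((Message m) -> {
                try {
                    if (m instanceof TextMessage) {
                        String xml = ((TextMessage) m).getText();
                        System.out.println("Received: " + xml); // client-specific processing goes here
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            connection.start();
            Thread.currentThread().join(); // keep the subscriber alive
        }
    }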

This may require a shift in mindset though because, although many customers tend to use pub/sub for internal communication, it is not as commonly used for external integrations.

Percio

Thanks, Percio, for your inputs. Let me try to implement it.

Hi,

I tried to implement the transient error handling mechanism for a guaranteed publishable document.

I have configured the trigger’s transient error handling with the retry failure behaviour set to the “Suspend and retry later” option.
I have also configured the retry interval and max retry attempts.

The throwExceptionForRetry step is also present in the catch block of the service that gets invoked by the trigger, but I can see the trigger is not getting disabled.

In the subscribing service I am trying to invoke the web service; as part of it, I call the monitoring service, which uses pub.client:http (http://servername:port) and sets the isAvailable flag based on the output. But I can see the trigger is not getting suspended. PFA snaps.

Please help …
trigger.JPG
Monitoring Service.JPG

It’s not intuitive, especially for developers coming from the Java world, but the throwExceptionForRetry step needs to be outside of the catch block. Typically, in the catch block, you will determine whether you want to retry the message based on the error. If so, then you set a flag that gets evaluated by a BRANCH statement outside the catch block. I’m attaching a screenshot with a sample.
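Roughly, the pattern is: decide in the catch block whether the error is transient, and only signal the retry outside of it. Sketched below as plain Java rather than Flow; callTargetWebService and isTransient are hypothetical helpers, and RetryableException merely stands in for what pub.flow:throwExceptionForRetry does in the Flow version:

    public class ProcessingServiceSketch {

        // Stand-in for the retry signal; in Flow this role is played by throwExceptionForRetry.
        static class RetryableException extends RuntimeException {
            RetryableException(Throwable cause) { super(cause); }
        }

        public static void process(String xmlPayload) {
            boolean retry = false;
            Exception failure = null;
            try {
                callTargetWebService(xmlPayload);   // the call that may fail
            } catch (Exception e) {                 // "catch block": only classify the error here
                failure = e;
                retry = isTransient(e);
            }
            if (retry) {                            // "BRANCH" outside the catch block
                throw new RetryableException(failure);
            }
        }

        private static boolean isTransient(Exception e) {
            // Hypothetical classification: treat connectivity problems as transient.
            return e instanceof java.net.ConnectException
                || e instanceof java.net.SocketTimeoutException;
        }

        private static void callTargetWebService(String xmlPayload) throws Exception {
            // Placeholder for the real SOAP call to the client.
        }
    }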

Percio

Thanks, Percio, for your help; let me try.

I tried, but it is not suspending the trigger. I am on wM version 8.0.1.0. Are any IS extended settings needed? Kindly suggest any inputs.

Anil,

First thing, you’ve used the terms “disabled” and “suspended” interchangeably so I just want to make sure: how are you checking to see if the trigger is getting suspended?

Disabling a trigger and suspending it are two different things. It’s not uncommon for people to accidentally switch these terms but it’s important to understand the difference. After the retries get exhausted, the trigger’s document processing will be suspended but the trigger remains enabled. To see whether the trigger has been suspended, please go to the Trigger Management page on the IS Admin Console.

Assuming for now that the trigger is indeed not getting suspended, here are two general thoughts based on your code:

  1. You may be mixing up a couple of ideas. You do NOT have to call the monitoring service from your trigger/processing service. I suppose you could but that wasn’t the intention. The IS will start calling the monitoring service once your trigger gets suspended. As soon as the monitoring service returns isAvailable = true, the IS will automatically resume your trigger (see the sketch after this list).

  2. Your BRANCH statement is currently problematic. According to your BRANCH statement, your service will always end in an exception because you’re using a $default label on the EXIT $flow and signal FAILURE step.
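
For what it’s worth, the availability check inside a monitoring service can be very small. A hedged plain-Java sketch is below (the Flow version would use pub.client:http the way you already do); the target URL and the mapping of the boolean onto the service’s isAvailable output are left to you:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class TargetAvailabilityCheck {

        // Returns true when the target server answers at all; the trigger's resource
        // monitoring service would map this onto its isAvailable output.
        public static boolean isAvailable(String targetUrl) {
            try {
                HttpURLConnection con = (HttpURLConnection) new URL(targetUrl).openConnection();
                con.setRequestMethod("HEAD");
                con.setConnectTimeout(5000);
                con.setReadTimeout(5000);
                int code = con.getResponseCode();
                con.disconnect();
                return code < 500;   // reachable and not failing server-side
            } catch (Exception e) {
                return false;        // connection refused / timeout: still down
            }
        }
    }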

Now… just to make sure we’re not getting ahead of ourselves, let’s do this:

  1. Change your trigger so that it retries 0 times
  2. Remove the monitoring service from your trigger settings

This will ensure that once you call throwExceptionForRetry, your trigger will get suspended right away and it will remain suspended.

With this done, go ahead and change your processing service so that all it does is call throwExceptionForRetry. If you want, you can add a call to debugLog or tracePipeline in the beginning so you can easily check from the log whether your service was executed.

With this in place, your trigger should get suspended no matter what. So, go ahead and publish a document. Did it get suspended? If so, then you can now start adding your logic back in and testing it as you move along. If the trigger stops behaving as you would expect it to, it should then be easier for you to pinpoint what code or configuration change caused the issue.

Good luck,
Percio

Sorry for my wording mix-up between suspended and disabled. I meant to say suspended.

I can see the trigger is getting suspended and automatically resumed when the target resource is available.

My next question is: when the target resource is down, the trigger is in suspended status and the publisher keeps publishing documents (it published 50 docs). Until the resource becomes available, where do the published documents get stored if the publishable document is guaranteed and we are using the native Broker? I checked in MWS → Administration → Messaging → Broker Servers → Doc Types and found my doc type, but I wasn’t able to figure out how many documents are queued up in the Broker because of the target server unavailability. I am using the suspend-and-retry mechanism of the trigger which I stated earlier.

Please help me on this part.

Eventually I found those docs under Clients.

Thanks, Percio and others, for your help and time.

Hi,

But if I don’t use the suspend-and-retry-later mechanism, how does the Broker make sure that failed transactions get processed until they are successful?

Any thoughts on this …