Storage of token in webMethods.io

Hello,
I have one API which generates a token. The token expires every 5 minutes. I want it to be always available, so I am trying to put the token into the storage of flow services by scheduling the call every 4 minutes, but I am unable to do so. My Storage PUT service is not working correctly; the value in storage does not get updated each time. I am also unable to configure the lock mechanism. I tried to add lock and unlock steps around the storage PUT, but the flow service fails repeatedly at the lock step. What is the right way to do this?
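
To make the intent clearer, here is the sequence each scheduled run is supposed to perform. This is only a rough sketch written as an Integration Server Java service for illustration (webMethods.io does not allow Java services, so the real implementation uses the equivalent flow steps); the store name, key, and lock handling shown here are my assumptions, not working code from my project.

```java
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public final class TokenStorageRefresh {

    // Example store and key names; placeholders only.
    private static final String STORE = "tokenStore";
    private static final String KEY   = "apiToken";

    /** Intended sequence for each 4-minute run: lock -> put -> unlock. */
    public static void storeToken(String newToken) throws Exception {
        // 1. Lock the entry so concurrent runs cannot overwrite each other.
        Service.doInvoke("pub.storage", "lock", storageArgs(null));
        try {
            // 2. Insert or overwrite the token value.
            Service.doInvoke("pub.storage", "put", storageArgs(newToken));
        } finally {
            // 3. Release the lock even if the put fails.
            // Assumption: an explicit unlock is still needed after put; verify
            // against the Built-In Services Reference for your IS version.
            Service.doInvoke("pub.storage", "unlock", storageArgs(null));
        }
    }

    /** Builds the pipeline for the pub.storage calls (storeName, key, optional value). */
    private static IData storageArgs(String value) {
        IData in = IDataFactory.create();
        IDataCursor c = in.getCursor();
        IDataUtil.put(c, "storeName", STORE);
        IDataUtil.put(c, "key", KEY);
        if (value != null) {
            IDataUtil.put(c, "value", value);
        }
        c.destroy();
        return in;
    }
}
```

The point is only the ordering: acquire the lock, write the value, and always release the lock, even when the put step fails.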

Hi @aaditya_chandras,
Can you please check the implementation? This is quite a simple use case, and it has worked for me in the past.
I want to highlight one more point: this implementation will not work on a clustered IS, because the Integration Servers are clustered in a stateless manner.

Regards
Vikash Sharma

@aaditya_chandras Using the built-in storage in a flow service could be risky, as your data will be lost in case of any server failure, restart, or crash… so it is better to go with some public storage.

Regarding your implementation, is it OK to share the sample with us? We need to have a look before proposing anything.

Regards,
Bharath

I have tried a simple put there. When we call it the first time it works normally, but the second time it starts crashing; it just keeps loading.

I have also tried putting a shared lock there, but still there is no result.


If I am making any mistake, please let me know.

I haven’t tried this in webMethods.io, but token APIs are a good fit for service caching:

https://docs.webmethods.io/integration/flow_services/flowservices/#co-caching_flowservices

Wrap the call to the token API in a Flow service and set the cache expiration to a time less than the token expiration.

In theory, the cached service should always return a valid token.
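
To illustrate the behaviour such a cached wrapper would give you, here is a plain-Java sketch (not flow; the class, the 4-minute window, and the fetchToken placeholder are all illustrative assumptions):

```java
import java.time.Duration;
import java.time.Instant;

/** Plain-Java sketch of what a cached token wrapper effectively does. */
public class CachedTokenWrapper {

    // Cache for slightly less than the 5-minute token lifetime.
    private static final Duration CACHE_TTL = Duration.ofMinutes(4);

    private String cachedToken;
    private Instant fetchedAt = Instant.MIN;

    /** Returns the cached token, refreshing it once the TTL has elapsed. */
    public synchronized String getToken() {
        boolean expired = Duration.between(fetchedAt, Instant.now()).compareTo(CACHE_TTL) >= 0;
        if (cachedToken == null || expired) {
            cachedToken = fetchToken();   // placeholder for the real token API call
            fetchedAt = Instant.now();
        }
        return cachedToken;
    }

    private String fetchToken() {
        // Placeholder: the real wrapper would invoke the token REST API here.
        return "token-" + Instant.now().toEpochMilli();
    }
}
```

Because the cache window is shorter than the token lifetime, a cache hit can never hand out an expired token.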

Correct me if I am wrong: a caching flow service is suitable when we have a fixed input, right? But in this case the request payload varies for every call. So we are looking for a storage service we can use to store the response; later, as you suggested, we can wrap that in a caching flow service.

Does the token REST API really vary in its inputs? It seems like it doesn’t, since you are essentially making a manual pre-cache process using a scheduled service that uses the storage services.

We receive a response from a third-party API containing an API key, ID, and secret key.

The secret key is used by our consumers, and multiple consumers with different consumer IDs may request it. To ensure constant availability, we plan to store this response in storage. I know we could use a cache here, but the token is valid for only 5 minutes. To maintain availability, we will refresh it every 4 minutes, and to refresh the token the above response needs to be passed back as input to the third-party API.
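
So the intent is roughly this (a plain-Java sketch of the schedule only; callTokenApi, the field names, and the response type are placeholders, not our actual implementation):

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of the intended 4-minute refresh cycle. */
public class ScheduledTokenRefresh {

    // Latest third-party response (API key, ID, secret key) shared with consumers.
    private volatile Map<String, String> latestResponse;

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Map<String, String> initialResponse) {
        latestResponse = initialResponse;
        // Refresh every 4 minutes so the stored token never reaches its 5-minute expiry.
        scheduler.scheduleAtFixedRate(this::refresh, 4, 4, TimeUnit.MINUTES);
    }

    private void refresh() {
        // The previous response is required as input to the refresh call.
        latestResponse = callTokenApi(latestResponse);
    }

    /** Consumers always read the most recently refreshed response. */
    public Map<String, String> getLatestResponse() {
        return latestResponse;
    }

    private Map<String, String> callTokenApi(Map<String, String> previousResponse) {
        // Placeholder for the real third-party refresh call.
        return previousResponse;
    }
}
```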

An option to consider, which we use by default, is to never keep the token. Whenever an interaction is needed, we always request the token first and never cache it. This simplifies the code, and there is never a need to worry about flushing a cache in any scenario.

It is not a suitable approach in all scenarios, but given the token expires in just 5 minutes, it may fit in this case too.

For caching, if you do go that route, explicitly set the scope to include ONLY the pipeline elements needed to make the login call.

For data that has a life-span of just 5 minutes, this should not be a concern. The token is transient data and it does not matter if it is “lost.” Next call will just get a new one.

[Edit]
Regarding the “data will be lost” items, if you’re referring to the pub.storage services then this is not accurate. From the docs:

...short-term store for information that needs to persist across server restarts.

The docs also note that the storage facilities are “…provided to support shared storage of application resources and transient data”

Each node in the cluster would have its own cache/token value. That shouldn’t be a problem as long as a subsequent request for a token does not invalidate previously obtained tokens (I’ve never seen a system do that, but there might be some out there that do).

If we call the third-party API for every client request, the number of calls towards the third-party API will be too high, which will affect the cost. One secret key can cater to multiple requests, so we want to store it and make it available.

That is indeed one of the considerations if authentication calls are part of the call limits and the functional calls are relatively high/frequent.

We used the storage services originally for similar needs, but then for token retention (when we do that; as noted above, by default we do not) we dropped to Java to use a lock-less approach with java.util.concurrent.atomic.AtomicReference. I can provide more info if desired.

Yes, you can share the logic you are using, but we are working on wM.io.

It uses Java, so it won’t be usable on wM.io.
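
For anyone running on-premises Integration Server, though, the approach is roughly the following. This is a sketch only; the 4-minute window and the fetchToken placeholder are illustrative, not our production code.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.atomic.AtomicReference;

/** Sketch of a lock-less token holder built on AtomicReference. */
public class AtomicTokenHolder {

    /** Immutable snapshot of a token and the time it was obtained. */
    private static final class TimedToken {
        final String token;
        final Instant fetchedAt;
        TimedToken(String token, Instant fetchedAt) {
            this.token = token;
            this.fetchedAt = fetchedAt;
        }
    }

    // Treat anything older than 4 minutes as stale (the token expires at 5).
    private static final Duration MAX_AGE = Duration.ofMinutes(4);

    private final AtomicReference<TimedToken> ref = new AtomicReference<>();

    /** Returns a token younger than MAX_AGE, fetching a new one when needed. */
    public String getToken() {
        while (true) {
            TimedToken current = ref.get();
            if (current != null
                    && Duration.between(current.fetchedAt, Instant.now()).compareTo(MAX_AGE) < 0) {
                return current.token;
            }
            TimedToken fresh = new TimedToken(fetchToken(), Instant.now());
            // compareAndSet keeps this lock-free: if another thread swapped in a
            // newer value first, loop and re-check whichever value is current now.
            if (ref.compareAndSet(current, fresh)) {
                return fresh.token;
            }
        }
    }

    private String fetchToken() {
        // Placeholder for the real token API call.
        return "token-" + Instant.now().toEpochMilli();
    }
}
```

No lock or unlock step is involved; the worst case under contention is an occasional extra token fetch.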

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.