We just finished migrating all of our webMethods integrations (a total of 16 MB of ADLs) to a new server over the past weekend, and it took us 13 hours to load all the components! We used the broker_load command-line utility to import everything, and we had to split up the components because of the 8 MB import limit in Enterprise Server version 4.1.1.
The server we migrated to already had half of the integrations on it, so we only had to import just over 8 MB of ADLs. In the beginning the importing was pretty quick (2-3 minutes for 500-800 kB ADLs); by the end of the process it was taking 30-45 minutes for a 200 kB import. We have a fairly substantial box: dual P3/555 MHz with 1 GB of RAM.
I am just wondering if others out there have felt the same pain we have. I am looking for suggestions for a more time-efficient way to accomplish a full broker refresh. We want a refresh process that updates our development and staging servers so that they mimic production, but we don't want to spend 13 hours refreshing each environment every time.
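For reference, the import pass was basically a loop over the split ADL files, something like the sketch below. The broker_load arguments shown are placeholders from memory, not the exact syntax for your version, and timing each call is just how we spotted the slow-down:

    #!/bin/sh
    # Rough sketch of the import pass (host, broker name and option names
    # are placeholders -- check the broker_load docs for your version).
    BROKER_HOST=newserver:6849
    BROKER_NAME=Broker1

    for adl in split_adls/*.adl; do
        echo "$(date) - importing $adl"
        # time each import so you can see where the slow-down starts
        time broker_load "$adl" -host "$BROKER_HOST" -broker "$BROKER_NAME" || exit 1
    done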
13 hours is not acceptable; I've never seen it take that long before. We had over 40 MB of ADLs split into component files, and refreshing a broker usually took about an hour to load roughly 25 ADL files. What is the virtual memory setting on the server? If you are paging out to virtual memory, that will slow things down quite a bit. Also, make sure that the tools and brokers are patched; I remember a 4.1.1 patch that was supposed to speed up imports. Again, my experience is on UNIX, not x86.
The best refresh process we found was to employ a "shadow" broker with the same configuration as production, which could be shut down so that the guaranteed/persistent storage files could be copied to a test/QA box and the broker restarted there. This way you don't have to load the production ADL image; it is already pre-loaded. The shadow broker was updated in step with the production broker and also served as a quick fail-over in case the primary broker crashed or became corrupted.
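Roughly, the shadow refresh looked like the steps below. The host names, paths, and start/stop commands are placeholders for whatever your install uses; the key point is that the broker has to be down while the storage files are copied:

    #!/bin/sh
    # Shadow-broker refresh sketch (paths, hosts and start/stop scripts are
    # placeholders -- substitute the ones from your own install).
    SHADOW_DATA=/opt/wm/broker/data/shadow      # shadow broker storage dir
    QA_HOST=qa-box
    QA_DATA=/opt/wm/broker/data/default         # target broker storage dir

    # 1. Stop the shadow broker so the Guar/Pers files are consistent.
    /opt/wm/broker/bin/stop_broker.sh shadow

    # 2. Copy the guaranteed/persistent storage files to the QA box.
    scp $SHADOW_DATA/* $QA_HOST:$QA_DATA/

    # 3. Restart the shadow broker, then start the QA broker; it comes up
    #    with the production image already loaded -- no long ADL import.
    /opt/wm/broker/bin/start_broker.sh shadow
    ssh $QA_HOST /opt/wm/broker/bin/start_broker.sh default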
Thanks for the quick response, Steve. We currently have our swap file set at 1535 MB.
We have also downgraded the server to a single processor for the moment because of the Dr. Watson errors we were experiencing with JDK 1.2.2-001. Perhaps this contributes to the slow import process.
I will look into the patches for the tools. Thanks also for the suggestion regarding the shadow broker idea.
Another detail that is sometimes overlooked is running the import/export tools on a different machine from the one where the broker resides. This forces the entire data set to go over the network, along with whatever client-server communication is needed.
In the past, importing on the same machine as the broker has significantly improved performance.
Hi,
I have the same problem as Kev.
A 200 kB ADL takes over two hours to import into a broker with a 31 MB guaranteed-storage file.
When I try to import an ADL of about 500 kB, I get this error:
[1993] Maximum transaction size exceeded.
But the wM documentation says this error should only appear for imports greater than 8 MB, so why am I seeing it in my case?
Thanks in advance to any guru who can help me.
Bye, Nello
P.S. Both ADLs import in a few seconds into an empty broker, and I have applied the latest patch level advised by wM.
We are using an unsupported tool from wM that allows an ADL of any size to be imported into the 4.1.1 wM Broker without any problem. We are using it until we move to 5 or 6. It works. You can talk to wM support about it.
You most likely get the "Maximum transaction size exceeded" error because the tool used to import the ADL uses guaranteed events when communicating the information from the ADL file to the broker, and so it triggers the guaranteed-event limitation itself. This is the reason wM recommends keeping imports to less than 8 MB. The 5.0 version of the broker supposedly removes the 8 MB restriction, which might be your best option.
You're right that 256 MB is the limit on the broker's guaranteed storage. But I think the import tools, broker_load or the Manager, make use of guaranteed events/documents when communicating with the broker during the import, which is why the 8 MB restriction applies to importing ADL files. It is the guaranteed documents that are limited to 8 MB; persistent and volatile documents do not have this restriction.
I am guessing that the unsupported tool either avoids creating guaranteed events larger than 8 MB or changes the import to use persistent or volatile documents instead of guaranteed ones.
The 8 MB limit is related to the transaction size, not to the ADL. For instance, if you try to add a new small integration (100 kB, let's say) to a very big existing integration with lots of custom code, the Integrator will need to build the whole integration again and save it back to the broker. The transaction may actually be greater than 8 MB because of these underlying activities; the 8 MB is the maximum size of the temporary file for the guaranteed-storage file.
The wM unsupported tool modifies how the import is done, so you avoid the 8 MB limit. No modification is made to your integration; its integrity remains the same when using the unsupported tool. wM calls it the large load utility; talk to wM.
LS, I am just wondering about this unsupported tool that you are using. Did it speed up the importing process for you? Do you use it when importing into your production environment?
Ken, if you run the tool on the same host where the broker server is running, you will see a faster import. Yes, we use the tool to load ADLs into our production broker server, because we tend to have huge and highly dependent integrations, and the tool works. It is actually a modified Enterprise Integrator. But remember it is unsupported, so at the beginning we were reluctant to use it as well. We will move to 5/6 soon, so the issue will go away. FYI, v5 has a much faster upload speed.
Hi all,
I have now found the solution to my problem.
The problem was that all of my integration components were publishing an "AppErrors" event, through custom code, to trace application errors.
So all of these components had to be rebuilt during the import, and I got the 1193 error.
To prove this, I imported the ADL after changing the name of the "AppErrors" event to "AppErrorsFoo"; the import was fine and there was no 1193 problem.
Now I have an architecture with multiple Enterprise Servers, including several brokers in territories connected by gateways. So I reduced the guaranteed-storage file, and each Enterprise System now publishes its own AppErrorsxxx event.
I want to thank LS very much for the passage that opened my eyes: "if you try to add a new small integration (100 kB, let's say) to a very big existing integration with lots of custom code, the Integrator will need to build the whole integration again and save it back to the broker. The transaction may actually be greater than 8 MB because of these underlying activities; the 8 MB is the maximum size of the temporary file for the guaranteed-storage file."
Bye,
Nello
P.S. The name of the large load utility is something like "EI46-largeload-DOM-WNT.zip". wM support can provide it.
The large load utility is unsupported. They can give it to you, but it doesn't guarantee anything; if the broker instance gets corrupted, you won't find the golden path to the Emerald City. LS's theory is correct: EI does rebuild the whole chunk of code, and that can grow bigger than 8 MB.
But there is a much safer way to split up an ADL whose export is bigger than the transaction limit (say, 9 MB or above): use a recommended export plan. It's a bit confusing because you then have to import a little differently (but hey, isn't that the reason shell scripting exists?). If people prefer an unsupported tool, fine; if someone wants to do it the other way, let me know.
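The import side of that approach is just a loop that pushes the pieces in the order the export plan dictates and stops at the first failure. A minimal sketch, assuming numbered part files; the broker_load options shown are illustrative placeholders, not the exact syntax:

    #!/bin/sh
    # Import the split ADL pieces in plan order; stop on the first failure.
    # The broker_load options shown are illustrative placeholders.
    BROKER=localhost:6849

    for part in export_plan/part_*.adl; do
        echo "importing $part"
        broker_load "$part" -host "$BROKER" || { echo "failed on $part"; exit 1; }
    done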
FKATSD Shyam
~Formerly Known As Technical Support Dude~