Integration Server and Java

Flow services are not converted to Java code. The engine reads the flow service and executes the steps within it, but at no time is the flow “code” converted to real Java.
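To make the distinction concrete, here is a toy sketch of what “interpreted” means here. This is an analogy only, emphatically not webMethods’ actual internals: the engine holds the flow as a tree of step objects and walks it at run time, dispatching each step to Java code that already exists. No Java source or bytecode is ever generated from the flow.

import java.util.List;
import java.util.Map;

// Toy illustration only -- NOT webMethods' internals.
interface FlowStep {
    // Each step reads and writes the pipeline when executed.
    void execute(Map<String, Object> pipeline);
}

class Sequence implements FlowStep {
    private final List<FlowStep> children;

    Sequence(List<FlowStep> children) { this.children = children; }

    public void execute(Map<String, Object> pipeline) {
        for (FlowStep step : children) {
            step.execute(pipeline); // interpretation: walk the tree and dispatch
        }
    }
}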

The argument for Java versus Flow should perhaps be conducted at two levels: developer and project manager.

The developer will likely always argue for Java - there are third-party tools such as Eclipse that make it easy to code and debug, there is excellent documentation and a supportive community, it’s fun to program in, the runtime performance is “better” within the webMethods suite, and so on.

The project manager who is concerned with time to market, return on investment, and mid-term maintainability will likely argue for Flow. It is easier (and less costly - in dollars and project time) to train someone on Flow than Java - so development and maintenance can be performed with a larger pool of “programmers”.

But the bottom line is this:
I have worked with Java since .9beta and Flow since 1999. Like any good developer, I choose the appropriate tool for the job, looking at both the immediate task and its impact within the lifecycle of the greater application/initiative. Sometimes that is Java, sometimes that is Flow.

No, they are not. As Phil Leary points out in his reply lower in the thread, FLOW is more or less interpreted on the fly.

I certainly respect your experience and opinion. I have no doubt about your effectiveness within IS. However, whether something can be done faster in Java or FLOW will almost certainly depend on the task at hand. Time and time again I’ve seen solutions developed in FLOW an order of magnitude faster than the equivalent in Java. I’ve also seen the reverse, but that’s much more rare.

This is indeed the entire point of the thread. If you’re a Java programmer, and not comfortable with FLOW, you’ll certainly be faster coding Java services. But you’ll be doing so at the expense of debugging, maintenance, etc. The primary programming language of IS is FLOW, not Java. Put a capable FLOW programmer next to a Java programmer, and they will compare favorably in terms of productivity. For some tasks, the FLOW programmer will easily finish first. For some other tasks, the Java programmer will finish first.

That’s certainly doable. I assume you make heavy use of the Java API and the Service.invoke method? I assume that a good part of the code is simply building pipelines and retrieving pipeline values. This raises the question: Why are you using IS? Why not WebSphere or WebLogic, environments that are geared for Java programming?
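For illustration, heavy Service.invoke usage tends to look something like the sketch below, which assumes the classic public IS Java API (com.wm.app.b2b.server.Service and com.wm.data.*); pub.string:toUpper is just a stand-in example.

import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public class InvokeExample {
    public static String toUpper(String text) throws Exception {
        IData input = IDataFactory.create();      // build a pipeline
        IDataCursor c = input.getCursor();
        IDataUtil.put(c, "inString", text);
        c.destroy();

        // Invoke the built-in service with that pipeline...
        IData output = Service.doInvoke("pub.string", "toUpper", input);

        // ...and dig the result back out of the output pipeline.
        IDataCursor oc = output.getCursor();
        String result = IDataUtil.getString(oc, "value");
        oc.destroy();
        return result;
    }
}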

This is arguable. I have never encountered a case where I thought debugging was hard, or slow. I’ve done a fair amount of raw Java programming and I’d argue that the amount of effort and the speed are about the same.

Speed of code execution is but one measure of a solution. Most of the time, this is immaterial to the overall solution and other factors are more important.

Agreed.

Agreed. Exception management within FLOW is quite awful.

That’s one of the primary reasons for the success of IS–you don’t need to be a Java programmer. It’s definitely a plus if you are but if you’re not, you can still be quite effective.

Do your solutions use FLOW services at all? Or is everything mostly Java services? Do you use the built-in services extensively? How much FLOW background do you have?

This thread was prompted by posts such as “how can I do a Java service that will parse XML” and other posts that plainly show a lack of understanding of the “sweet spot” of IS and the basics of FLOW and the built-in services. It wasn’t to say never use Java. The key point was “know the tools you’re working with” and which to use when. My opinion is too many default to using Java services when using FLOW services would be a smarter choice.

The other extreme you seem to follow (Java services first, FLOW services rarely) seems odd to me, but I can see how such an approach has a certain appeal.

It would be interesting to see the two approaches compared head-to-head. Pit a developer proficient in FLOW (primary) and Java (secondary) against a developer proficient in Java (primary) and FLOW (secondary) and see who is able to build a “better” solution based on several criteria.

Each flow step is interpreted and, as a consequence, invokes a Java method that performs the given step’s action. The most evident point here is that the underlying JVM can only understand Java, and knows nothing about the proprietary wM Flow language and commands. Arguing that each step is not converted to a Java invocation just makes no sense.

I agree that a good IS Flow developer will be very fast in terms of coding a specific feature. I coded flow exclusively for about 3 years before discovering how easy it was to code in Java within Eclipse. I was considered pretty knowledgeable and competent in writing efficient and reusable flow services.

However, the time to build an if-then-else or a switch-case sequence in FLOW will always be far longer than coding it in Java, even if you are not a Java master. Writing (or actually typing) a few lines of Java code is, for me, way faster than dragging and dropping the different steps and populating their properties to specify the if/case conditions. Now think about regex in Flow vs. in Java, and you will probably agree that it is much easier to refer to regex groups in Java than in Flow… Or think about the CPU time to concatenate Strings in Flow, compared to using StringBuffer in Java, and you will remain perplexed by the execution time of Flow vs. Java (about a 400% factor; we actually measured it). Or think about adding 2 ints in Flow rather than typing a + b in Java, and you will get more and more frustrated. Maybe it’s just a matter of a badly designed IDE, but I am still convinced of the benefits, for me, of coding in Java rather than Flow (as long as you do not code Java directly in the Developer IDE, which is almost useless).
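For instance, pulling regex groups out of a string takes a couple of lines of plain Java. A minimal sketch (the pattern and input are purely illustrative):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexExample {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})"); // yyyy-MM-dd
        Matcher m = p.matcher("Invoice date: 2007-03-15");
        if (m.find()) {
            String year  = m.group(1); // "2007"
            String month = m.group(2); // "03"
            String day   = m.group(3); // "15"
            System.out.println(day + "." + month + "." + year);
        }
    }
}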

That is however a good point. The risk in using Java is getting away from the pipeline mentality, except where absolutely required. Our code is now almost exclusively Java, using standard (mostly open-source) libraries, as webMethods itself does internally. Because some features are wM-only, we still have to make some Service.invoke calls, but they are wrapped in a framework that keeps our code “pipeline”-free (actually IData-free). We keep these invocations as rare as possible, and rarely need them. Our strategy is indeed to be as decoupled as possible from wM code, in order not to be bound to it eternally. IS was implanted before I joined the company, and there was a lot of debate about the ROI it brings in terms of maintainability, etc. Our in-house consultants used to code FLOW only, mostly for lack of Java knowledge, and the biggest issues in production are caused by the Flow services in their packages, because they handle memory, performance, and reusability badly.
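To give an idea of what such a wrapper can look like, here is a sketch; the class and its Map-based signature are our own convention, not a webMethods API. Callers pass and receive plain Maps, and only this one class touches IData:

import java.util.HashMap;
import java.util.Map;
import com.wm.app.b2b.server.Service;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;

public final class PipelineFreeInvoker {
    public static Map<String, Object> invoke(String folder, String service,
                                             Map<String, Object> in) throws Exception {
        // Convert the Map into a pipeline...
        IData pipeline = IDataFactory.create();
        IDataCursor c = pipeline.getCursor();
        for (Map.Entry<String, Object> e : in.entrySet()) {
            IDataUtil.put(c, e.getKey(), e.getValue());
        }
        c.destroy();

        IData out = Service.doInvoke(folder, service, pipeline);

        // ...and copy every output entry back into a plain Map.
        Map<String, Object> result = new HashMap<String, Object>();
        IDataCursor oc = out.getCursor();
        boolean more = oc.first();
        while (more) {
            result.put(oc.getKey(), oc.getValue());
            more = oc.next();
        }
        oc.destroy();
        return result;
    }
}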

The Developer IDE does not provide the same amount of debugging features as Eclipse does in terms of watching expressions, evaluating/modifying variables, catching exceptions, etc. The Developer IDE is also very slow compared to Eclipse.

Agreed. But memory (large file handling), IO and CPU usage are fairly important factors of stability in a production environment, and all of these are handled pretty badly when using flow services.

Agreed. Being able to code Java in IS definitely brings more than just coding Flow. Because not everyone is a Java developer, IS made the right choice in targeting a wider pool of potential users than just Java developers.

Mostly Java, because you lose the performance boost when you mix flow with Java code. We either use the built-in services provided by the Java APIs of Broker and IS, or we use open-source mechanisms to achieve the same features that webMethods provides only at the flow level. Most of the built-in services are in fact coded in Java, and therefore rely on classes and methods available in the JRE. I have about 4 years of experience with FLOW, and have used almost exclusively Eclipse and Java for a little over a year.

Bottom line, as we already agreed upon, there are good and bad developers.

To take it yet a little further, I tend to think that if you take a developer who codes badly in both Java and Flow, it may be easier to troubleshoot their bad flow service. If you take a developer who masters both Flow and Java, I tend to think they will find it easier to troubleshoot and read Java services than Flow services.

Nevertheless, do not take me wrong. I feel IS is a good product that covers most of the integration needs a company may have, and it is based on Java, which is very extensible. That is a big plus. We use a lot of webMethods products other than IS, and we are not about to trade this product for another one - yet.

Certainly executing FLOW services causes methods within various Java classes to be executed. But your statement that “FLOW is converted to Java” makes it sound like FLOW is translated to Java source/byte code in-place and then executed. You may not have meant to imply that but that’s what came across.

Taking the analogy to the next level, FLOW is “converted” to Java byte code, which in turn is “converted” to machine code. So ultimately we should all be writing in assembler? :slight_smile:

The run-time performance of any environment is only as good as its administrators and developers. FLOW and IS, in and of themselves, handle large files, IO and CPU just fine. Certainly it’s easy to abuse these things though. Not managing memory appropriately, not closing files, or not profiling to find the bottlenecks are killers in any run-time environment, not just IS. As you mentioned at the outset, sloppy programming will hurt any environment. I assume you’ve seen good and bad examples of both IS and Java app server environments.

All of your points are well taken. I won’t counter any more of your points though because ultimately, based on what you’ve stated, I don’t think you should be using IS in any way. You (when I say you, I mean your group/company) clearly do not want to do things the “IS way” and should therefore move on to tools that help you do things the way you want–which you have at least partly done already. Coding most everything in Java and using IS as a run-time environment for that code seems to me to be a serious mismatch.

It’s not clear, from the exchange so far, the value IS actually brings to your environment. If you want to do development in Java, WS and WL (or your favorite open-source run-time) are far better environment choices.

Amen brother.

Thanks for sharing your thoughts. There is always something good in challenging an architecture: I truly appreciate that.

We currently use Monitor and Modeler to model BPM for our integration needs, and we built a kind of layer on top of IS that lets our consultants implement customer-specific needs by merely configuring this (framework) layer using descriptive XML and XSL files.

In contrast to earlier developments, where each customer’s needs were coded in a separate package, using more RAM, introducing bugs more often than not, and preventing efficient re-usability, they now only need to focus on mapping and transformations; no need to worry about how to code the request file retrieval or the result file delivery (e.g. to a dropbox, email, http, etc.), no need to worry about how to make sure large files are properly handled, no need to worry about how to log errors in a consistent way to enable monitoring and alerting of integration support or production teams on errors or failed transactions, etc. All of this is taken care of by our framework.

Adding a customer is almost as simple as adding some configuration XML files, routing XML configuration files, and adding/coding some XSL files for the various required transformations. Moreover, the framework already comes with standard integration processes (e.g. CSV import or export of our business application entities), so for some customers no custom XSL file is even required: the customer gets the CSV files we sell as part of our offering directly from our framework layer. When a customer-specific need happens not to be supported by the layer yet, there are two ways:

  • either we refuse it to the customer,
  • or the consultant codes a separate package and immediately requests DEV to implement the missing feature, so that it becomes available to any potential new customer in the future. A migration is then planned to remove the custom package and use the feature from the framework, once available.

Because all of this is coded in Java, and is indeed reusable in another EAI, we are covered for a potential switch to another product, as long as it offers the same kinds of features we use from webMethods: web server (Tomcat), JVM (IS), MoM (Broker), BPM (Modeler/Monitor), and file repository/DB for logging purposes. Note that we completely removed the usage of TN, because it was not suited to our needs and we needed some additional protocols that TN did not yet cover (e.g. SFTP is not supported on 6.1, authentication other than BASIC, or SAML in SOAP documents). We also use the SOAP interface of IS so that our internal applications can use IS as a web services provider.

Hope this helps you see what kind of usage we make of IS: Monitor alone would be a very challenging piece to recode as a standalone Java application if we were getting away from IS. That’s partly why we stick with it. Using a more Java-centric approach for any custom code we create just covers us if we intend to switch to another EAI/SOA system in the future. It’s not clear to me what’s wrong with this strategy.

Your argument now seems to be that Java is your approach because of the potential reusability in another EAI suite. And you go further to state that as long as another EAI offered the same product categories (web server, JVM, MoM, BPM), that you could easily move to that other suite.

I understand from your email you’ve done some reasonably complex things in webMethods, but I’m confused by your argument that you could easily move those things to another EAI by virtue of having used Java.

That statement implies that none of your Java utilizes the webMethods API, all of your Models are pure BPEL, and your Broker messages are JMS (rather than the proprietary Broker format).

If this is not true, then your efforts are not reusable in another EAI.

If it is true, then I’m with Rob - it’s not clear what value the IS brings to your environment.

The discussions are indeed always fun and a good chance to learn and appreciate other approaches and other points of view. Thanks for taking the time to discuss all this.

Question: Is your framework code hosted on IS? Or is it hosted elsewhere?

If on IS, much of what you describe about your framework sounds exactly like the frameworks I’ve either helped create or use at virtually every client I’ve worked at. All have the same goal–isolate the things that differ in each integration (mapping, etc.) and share the things that are common (routing, large file handling, partner management, logging, monitoring, alerts, etc.) The main difference between what you’ve described and what I’m describing is the implementation–you’ve chosen to use Java primarily while my past projects have primarily used FLOW services with a smattering of Java mixed in as needed.

There is nothing wrong at all with the custom Java strategy. It’s one adopted by many. My initial reaction, though, is that you should go all the way with it. No need to stay tied to IS in any way.

Carve the Modeler/Monitor stuff off to an isolated service area–host just the models on IS and move everything else to WS, WLS or whatever. Modeler/Monitor, after all, used to be a separate entity entirely and was only recently folded into the IS environment. It doesn’t seem unreasonable to create a “BPM server” that only runs models and only connects to Broker. If you do that, then your Java-based integration framework could be free of any and all IS ties. I think IS support of Tomcat has gone away, but I may be mistaken.

A custom integration environment, where you assemble components such as you listed, is definitely a reasonable approach. What gives me concern are hybrid approaches, such as running IS but then having most of the work be done by custom Java code. IS is not a Java app server. You get virtually none of the benefits that IS provides but you’re paying for the license. The value in a tool like IS is not having to invent/develop much of the pieces. If you develop those pieces anyway, then why license the tool? My concern lies with your use of IS primarily–as a Java app server, it is not well suited since it is an integration server which has a much different focus (as you well know). If you’re building most of the integration stuff yourself, the use of IS as a run-time (BPM is a separate issue) is questionable.

I understand that IS was inherited and so a transition period where there is a mix of both approaches is inevitable.

The path your environment has taken isn’t surprising. I reiterate that a custom Java approach is absolutely reasonable. But I think you got there for the very reason that this thread was created–you ran into a road-block (couldn’t do something in TN, or some set of stuff in IS didn’t do quite what you wanted) and because of the Java background, you knew how to do it there. So that’s what you did, instead of figuring out how to extend the tools you already had. None of the items you listed are insurmountable issues (custom TN delivery services, certificates, etc.; I think Mark Carlson has created a SAML solution and has a thread on the forums about it) but because of a few issues, you threw everything out.

You threw away the investment you already had in the integrations that had been done. I don’t know if this applies in your case but I’ve seen where this is typically justified by an environment that is unstable (from sloppy development practices) and the claim that the only way to recover is to completely rewrite it. When in fact, it’s highly likely that the existing integrations could’ve been salvaged and reworked to provide the stability items you mentioned.

You could have implemented a framework with the wM tools you had and moved integrations to that. But the “do it in Java” mentality took over and all that was lost. You decided to use other tools, not wM tools, but then stayed with IS anyway. Again, nothing wrong with a Java approach, but my point is just go the rest of the way. IS seems to provide very little value in your environment. Move the BPM to its own IS server and move the rest to a Java app server environment. You’ll be much happier, IMO.

Anyone else have thoughts on these topics? Come join the debate!

What was probably not clear in my recent posts is that we did not throw anything away from the legacy code. The custom packages coded in the past are still in production, even though they do not make adequate use of OS resources and badly affect overall system performance.

What we did was build up a framework, like a second Java layer on top of the IS Java layer itself, to support future customers’ needs. This increased the scalability of our systems by allowing us to run more customers’ integrations on a single integration line.

Legacy is one very crucial aspect. In our (SaaS) space, where several hundred different customers are implemented by our professional services, you cannot throw away existing code or rebuild it from scratch. Migration strategies are evaluated, but someone has to pay the price for them. To contain the implementation and migration cost, we intend to use a single code line (leveraging the available IS features) which can be configured by descriptive files. Because we have to support the legacy environment and packages, the decision was taken not to switch to alternate or hybrid integration systems (one hosting “pure” Java apps and the other hosting legacy WM code).

We encapsulated the wM APIs we need, for as long as the system is hosted on IS, in custom Java libraries, so that only these libraries will have to be recoded if one day we change to another “pure” Java app server environment, if ever. However, IS indeed comes with a lot of features that would need to be recoded if you were starting all over again: thread management, JDBC pools, the service invocation framework, scheduled tasks, and monitoring (including resubmission) are a few of the features we inherit by using IS as a Java container. So no, we would not be able to switch to another system without a single line of code modification, but it would be feasible by reimplementing the API wrappers. The added value the framework has proven to provide (in terms of quality, performance, etc.) in itself justifies the choice of this architecture, and we strongly believe that had we coded it in Flow, it would never be as efficient and resource-saving as it is in Java. Are there drawbacks in terms of readability, coder environment setup, etc., in this solution? Yes. Do they outweigh the added performance that Java provides over Flow? Definitely not.
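To illustrate the wrapping idea (all names here are hypothetical): application code depends only on a small interface; the IS-backed implementation is the single place the wM API appears, and a plain-Java implementation can be swapped in later.

// Application code sees only this interface.
public interface SchedulerFacade {
    void schedule(String taskName, Runnable task, long intervalMillis);
}

// One implementation would wrap the IS scheduler; this plain-Java one
// shows that the contract can be satisfied without IS at all.
final class TimerScheduler implements SchedulerFacade {
    private final java.util.Timer timer = new java.util.Timer(true); // daemon thread

    public void schedule(String taskName, final Runnable task, long intervalMillis) {
        timer.scheduleAtFixedRate(new java.util.TimerTask() {
            public void run() { task.run(); }
        }, 0L, intervalMillis);
    }
}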

BTW, I might have implied that we stopped using TN because of the missing protocols. That is not the case. We stopped using it after involving webMethods PS, who themselves recognized the tool was mainly designed for B2B purposes and was not suited to our needs. IS is mainly used as a (functional) broker between the customer systems and our internally hosted customer applications (one-to-many applications for one-to-many customers). So a MoM approach seemed better adapted to this kind of system than the P2P approach TN uses. The fact that TN did not support GnuPG or SAML out of the box was just another factor. And initially, we did implement custom TN delivery services to work around these issues.

Note that the statements in my messages reflect only my opinion, based solely on the experience I have had in our company’s environment. No doubt experiencing other companies’ environments and architectures running webMethods would help me better understand the value Flow provides over Java.

Ah, I see. Excellent points. Bummer that you can’t at least go fix the old ones.

I think that may be a bit optimistic, but you’re the one in the trenches and fully understand the environment so I’ll trust that a cutover to use other tools is doable by your team.

I wonder how many of those you’d really need to code yourself. Most of those things are provided out of the box by app servers. They provide bean and thread management, JDBC pooling, services access, etc. Scheduled tasks could be externalized using a job manager like AutoSys.

This is the root of our difference of opinion. A processing framework is definitely a plus–I’m a big believer in them. I don’t claim that Java isn’t more efficient in terms of performance but I contend that for the vast majority of integrations and installations, the performance difference is meaningless. And where speed is important (determined through profiling) then one optimizes the key pieces rather than do everything in Java.

In the majority of places I’ve worked, the reverse holds true. Performance is important, but not the most important. Generally speaking, “just enough” performance is sufficient and one doesn’t need to resort to dropping to Java for everything.

I’ve had debates with many people, including some from PS (and on these forums), about the suitability of TN as a general purpose “broker.” I think people lock themselves into thinking that TN is only good for talking with the outside world. I think it applies more broadly than that. Generally speaking, communicating with an outside app can be almost exactly the same as talking with an internal app. You need to configure communication. You need to track things. You need to retry. With message queuing, you end up building such facilities. (Don’t get me started on pub/sub. :slight_smile: )

IMO, TN is applicable in far more scenarios than people usually think. I’ve seen home-grown solutions for “internal” interactions that conceptually look an awful lot like TN.

I don’t know if there is much more to it than “one doesn’t need to be a Java programmer.” As you pointed out, there are some things that are easier to code by typing Java source than by clicking and dragging. There are some things that are easier to do using Developer and FLOW.

I like the higher level of abstraction that a services approach offers. In Java, one tends to get bogged down in classes and methods, which are generally lower-level constructs. Certainly, higher-level abstractions are doable in Java; coding in Developer just makes you think of things as services more naturally.

This surely depends on the type of integration you are doing. Our integrations are mostly batched and have to handle huge files (we are often talking about several hundreds of megs), typically NetChange business processes driven by customer needs. For these integration types, which represent most of our integrations, large-file and adequate memory handling is crucial. And even if IS provides some built-in services to handle large XML files, flat file handling is much more challenging. For these types of integration we first used Flow (maybe lacking the appropriate IS education and not using the appropriate services) and faced lots of memory and performance issues. We then used Java, mostly with streaming algorithms, and this boosted our performance and improved our memory usage dramatically. If this kind of integration were only a small part of our business integration offering, I would probably agree that the other types could be coded in Flow. But that is not our reality. So, bottom line, Java MAY be suited and preferred over Flow depending on what kind of integration you are running. Were our business to manage POs or other financial integration needs, the conclusion would probably be the opposite.
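A minimal sketch of the streaming style that made the difference for us; the per-line transformation is a stand-in for the real mapping:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class StreamingTransform {
    // Processes a multi-hundred-MB flat file line by line, so memory use
    // stays constant instead of loading the whole file into the pipeline.
    public static void transform(String inPath, String outPath) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(inPath));
        BufferedWriter out = new BufferedWriter(new FileWriter(outPath));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line.toUpperCase()); // stand-in for the real mapping
                out.newLine();
            }
        } finally {
            in.close();
            out.close();
        }
    }
}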

I think TN indeed covers a lot of the features required in a P2P context, and it can do the job (it did for many years for us); my biggest gripe is with the TN Console: it proved almost unusable when you handle several hundreds of profiles (almost thousands), especially when a customer is assigned more than one profile (because in our case, each customer has one-to-many internal endpoints). The Console was slow and, unfortunately, the web interface does not cover the same set of features the Console does. So I am not criticizing the architecture of TN so much as the way they implemented it. That may be very subjective, though.

Good points again. One could argue that IS (and certainly Broker) are not quite the right tools for batch-style integrations. Indeed, the whole idea behind integration tools, when the term “EAI” was first in vogue, was to provide “near real-time” data sharing and avoid batch processes. Thus, the tools were geared to do lots of small transactions, not a few big ones.

Alas, many tools are inappropriately used for large batch processing. Broker was quite often abused: “Hey, I’m trying to publish a bunch of 750MB events and Broker keeps having trouble.”

I’m not saying this is necessarily the case in your situation, but it does raise questions.

Large file handling in IS, particularly node iterating, isn’t an appealing approach. It works but seems odd to many. Handling streams within IS also takes a bit more effort than the good ol’ load-the-entire-file-into-memory-and-go approach. The documentation and samples tend to lead one not to consider large files, and thus people run into issues quite late in the development cycle.

On the other hand, large file handling and using streams within IS also improve memory usage and boost performance.

TN overall is fine, but the UI of TN Console and TN Web are indeed a bit lacking in several ways. Lots of partners, like you mentioned, can be troublesome in the UI. The combining of design-time setup and run-time monitoring in the same UI was something of a mistake IMO. The access controls aren’t granular enough and the TN Web UI is definitely hampered. It would be awesome if wM spent some time and effort on the UI and administration.

What we did was not to publish the data itself in the event to the Broker, but a unique document identifier instead (the data is stored in the central repo before the event is published). So no large data transits through the Broker itself.
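In outline, the pattern looks like this (a sketch; the repository and publisher interfaces are hypothetical stand-ins, not actual Broker API calls):

import java.util.UUID;

public class ClaimCheckPublisher {
    interface DocumentRepository { void store(String id, byte[] payload); }
    interface EventPublisher { void publish(String eventType, String documentId); }

    private final DocumentRepository repo;
    private final EventPublisher broker;

    public ClaimCheckPublisher(DocumentRepository repo, EventPublisher broker) {
        this.repo = repo;
        this.broker = broker;
    }

    public void submit(byte[] largePayload) {
        String id = UUID.randomUUID().toString();
        repo.store(id, largePayload);        // payload goes to the central repo
        broker.publish("DocumentReady", id); // only the small id crosses the Broker
    }
}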

Agreed.

Agreed.

Again agreed.

Isn’t that great to see we share the same opinion on all these points? :slight_smile:

In the “be careful what you wish for department”, I’m sure this effort would be part of the migration of the TN UI to My webMethods Server.

I’m a fan of Portal, but wary of administration portlets developed by other product teams. Case in point is the new Messaging portlet that replaces WmBrokerAdmin in Broker 6.5.2. Functional, yes. Usable, yes. Intuitive and powerful? Not even close.

Mark

Yeah, I figured if they do anything with the TN UI it will portal based. Blech.

Hi all,
I would like to add a few comments to the topic “Java or Flow?”. Up to now the discussion has been at a fairly “general” level, so I would like to fill in a few practical examples from my recent experience. I should mention that I’m quite familiar with both Java and Flow, and that I think both have their merits in their particular fields. In general, Flow is the tool of choice for simple mapping tasks, while Java should be chosen once the mapping logic becomes a bit more involved or you need to make numeric calculations. Here are my examples:

  1. The price for one item is $19.99. The customer ordered n items. If n is
    10 or more, we subtract a 2% discount; if he ordered 100 or more, we
    give a 5% discount. Some special customers get an extra discount of 1%…
    For these customers the customer number starts with an 'S'.

    In Java:

     double total, discount = 0.0;
     int n = …;                  // taken from the pipeline
     String customerNumber = …;  // taken from the pipeline

     if (n >= 10 && n < 100) discount = 0.02;
     else if (n >= 100) discount = 0.05;
     if (customerNumber.charAt(0) == 'S') discount += 0.01;

     total = 19.99 * n * (1.0 - discount);

    Four simple lines that took me less than a minute to write down. Try to
    do this in Flow with the pub.math Services and you are going to bump your
    head against the wall…

  2. I had to do a mapping of a legacy database table into an SAP IDoc.
    The value for the IDoc field “X” was stored sometimes in DB table field
    “A” and sometimes in “B”. In addition the value needed to be prepended
    with a prefix based on the “account type”.

    In Java:

     String value, prefix;
     Values tabLine = …;   // the DB table line, taken from the pipeline
     String type = …;      // the account type, taken from the pipeline

     value = tabLine.getString("A");
     if (value == null || value.length() == 0) value = tabLine.getString("B");

     if (type.equals("SB") || type.equals("PX"))
         prefix = "T";
     else if (type.equals("PV") || type.equals("PB") || type.equals("HQ"))
         prefix = "H";
     else prefix = "";

     idocSegment.setField("X", prefix + value);

In Flow: (we need to call our variable "fieldValue", as "value" is already used by pub.string:length…)

-MAP (/tabLine/A to /fieldValue)
-BRANCH on /fieldValue
---$null: MAP (/tabLine/B to /fieldValue)
---$default: SEQUENCE
---------------INVOKE pub.string:length (inString = fieldValue)
---------------BRANCH on /value
------------------0: MAP (/tabLine/B to /fieldValue)
-BRANCH on /type
---"SB": MAP (set prefix to "T")
---"PX": MAP (set prefix to "T")
---"PV": MAP (set prefix to "H")
---"PB": MAP (set prefix to "H")
---"HQ": MAP (set prefix to "H")
---$default: MAP (set prefix to "")
-INVOKE pub.string:concat (prefix and fieldValue)
-MAP (/value to /idocSegment/X)

In both these cases the Java solution is much more elegant and easier to understand. (And it performs much better: just consider that each INVOKE in a Flow makes a clone of the current pipeline, invokes the Service with that clone and then merges the results back into the current pipeline! I don’t know why webMethods did it this way, as I can’t see an advantage over passing the original pipeline, but this is the way it’s done, and it results in massive memory consumption, especially if you are processing high data volumes. So if a simple float calculation, which would take a couple dozen machine code instructions, is instead done via the invocation of 5 pub.math Services, this leads to a lot of overhead: 5 pipeline clones, 10 audit.log events, 5 lookups of the service name to find the corresponding Java method for the Service, 5 JNI calls into the JVM’s C/C++ code to execute that method via Java reflection, etc. Plus additional work for the garbage collector, which eventually will need to clean up the pipeline clones and other objects that got created during the process. So I’m pretty sure that the Flow solution here is 100 or even 1000 times slower than the simple Java code.)
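One could put a number on that claim with a rough harness inside an IS Java service. A sketch, assuming Service.doInvoke and the IData classes are available there; pub.string:concat serves as a conveniently trivial service:

// Build a pipeline once for the invoked variant.
IData input = IDataFactory.create();
IDataCursor c = input.getCursor();
IDataUtil.put(c, "inString1", "foo");
IDataUtil.put(c, "inString2", "bar");
c.destroy();

String a = "foo", b = "bar";
long t0 = System.currentTimeMillis();
for (int i = 0; i < 10000; i++) {
    String s = a + b;                        // direct: no pipeline, no dispatch
}
long directMillis = System.currentTimeMillis() - t0;

long t1 = System.currentTimeMillis();
for (int i = 0; i < 10000; i++) {
    Service.doInvoke("pub.string", "concat", input); // clone + lookup + dispatch
}
long invokedMillis = System.currentTimeMillis() - t1;
// Comparing directMillis and invokedMillis shows the per-INVOKE overhead.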

However, in those cases where there is a straightforward mapping, like field “A” goes to field “X”, field “B” to “Y”, etc., which can be done with a few MAPs, LOOPs and BRANCHes (but not too many INVOKEs…), doing it in Flow will get the job done more quickly.

Regards, Lanzelot

This is the popular conventional wisdom but I almost always disagree. There are indeed times when Java is the way to go–but I disagree with the generalization that FLOW is for simple mapping while Java is for complex mapping. I think Java use should be isolated and focus on specific, generally applicable operations. Java services are rarely needed.

Your discount calculation is a reasonable example of when using Java may be appropriate (ignoring for the moment the ill-advised use of the double data type for money values).

double total, discount = 0.0;
int n = …;                  // taken from the pipeline
String customerNumber = …;  // taken from the pipeline

if (n >= 10 && n < 100) discount = 0.02;
else if (n >= 100) discount = 0.05;
if (customerNumber.charAt(0) == 'S') discount += 0.01;

total = 19.99 * n * (1.0 - discount);

Here is the same in FLOW:

1   BRANCH
    1.1   %/itemQuantity% >= 10 && %/itemQuantity% < 100: MAP (set discountPercent to 0.02)
    1.2   %/itemQuantity% >= 100: MAP (set discountPercent to 0.05)
    1.3   $default: MAP (set discountPercent to 0.0)
2   BRANCH on '/customerID'
    2.1   /^S/: MAP (customerID starts with S--add 0.01 to discountPercent)
3   MAP
    (subtract transformer set discountPercent to 1.0-discountPercent)
    (multiply transformer set extendedAmount to itemPrice * itemQuantity)
4   MAP
    (multiply transformer set totalAmount to discountPercent * extendedAmount)

No head bumping. Very straightforward. And I can step through it to debug.

“Elegant” is a loaded term that means different things to different people. IMO, they are equally easy to understand.
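As an aside on the money-as-double point above: when one does drop to Java for such a calculation, BigDecimal avoids the binary rounding surprises of double. A sketch of the same computation:

import java.math.BigDecimal;

public class DiscountCalc {
    public static BigDecimal total(int n, String customerNumber) {
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal discount = BigDecimal.ZERO;

        if (n >= 10 && n < 100) discount = new BigDecimal("0.02");
        else if (n >= 100)      discount = new BigDecimal("0.05");
        if (customerNumber.charAt(0) == 'S') discount = discount.add(new BigDecimal("0.01"));

        // round the final amount to cents
        return price.multiply(new BigDecimal(n))
                    .multiply(BigDecimal.ONE.subtract(discount))
                    .setScale(2, BigDecimal.ROUND_HALF_UP);
    }
}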

Did you measure to see if this is the case?

Assumptions are fine to start with. And I’ve little doubt that testing will prove out your hypothesis. However, it’s been my contention throughout this thread that starting off with Java based on the assumption that it is faster is wrong-headed.

My contention is to use Java only when absolutely necessary–when something cannot be done in FLOW (e.g. managing a hashtable) or when profiling shows that a particular service needs to be faster. Starting with a Java service right away based solely on performance without considering the other “-ilities” (maintainability, debuggability, readability, etc.) is short-sighted.
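A shared hashtable is a classic example of such a “must be Java” service. A minimal sketch (the service names and pipeline variables are illustrative):

import java.util.Hashtable;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public final class CacheServices {
    // Shared across all invocations of these services.
    private static final Hashtable<String, String> cache = new Hashtable<String, String>();

    public static final void put(IData pipeline) {
        IDataCursor c = pipeline.getCursor();
        cache.put(IDataUtil.getString(c, "key"), IDataUtil.getString(c, "value"));
        c.destroy();
    }

    public static final void get(IData pipeline) {
        IDataCursor c = pipeline.getCursor();
        String key = IDataUtil.getString(c, "key");
        IDataUtil.put(c, "value", cache.get(key)); // null if absent
        c.destroy();
    }
}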

Dear All,

I need to save an Excel file, generated by a service, to a Windows file system location such as c:\. Please help me with how to write the code.
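A minimal sketch of an IS Java service that would do this, assuming the generated spreadsheet is already in the pipeline as a byte array named fileBytes (that variable name is an assumption; adjust it to whatever your generating service produces):

// Body of an IS Java service; IData, IDataCursor, IDataUtil and
// ServiceException come from the usual com.wm.* imports.
public static final void saveFile(IData pipeline) throws ServiceException {
    IDataCursor c = pipeline.getCursor();
    byte[] fileBytes = (byte[]) IDataUtil.get(c, "fileBytes");
    c.destroy();
    try {
        java.io.FileOutputStream out = new java.io.FileOutputStream("c:\\report.xls");
        try {
            out.write(fileBytes);   // write the spreadsheet bytes to disk
        } finally {
            out.close();
        }
    } catch (java.io.IOException e) {
        throw new ServiceException(e.toString());
    }
}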

Flow:

  • standardizes the layout of services,
  • can be refactored within Developer,
  • can be written by a business person (not a programmer),
  • each step is another object in IS memory, and any flow is a huge tree of objects - there is no compilation to bytecode; all flow processing executes against this declarative structure of step objects - slow and memory-consuming,
  • if you have about 4500 flow services (we have more), IS will need 1 hour to stand up … if the JVM’s 2 GB of memory is not depleted earlier,
  • it is very hard to write complex logic in flow,
  • it is very hard to maintain and refactor flow sources (we have a huge canonical model of documents),
  • flow code has a tendency to disintegrate or stop working properly (bugs in WM),
  • flow is not as intuitive as many other languages,
  • WM flow is not an open standard (like W3C XProc is),
  • some (many) things can’t be done in flow,
  • physical adapter (connector) code can’t be written in flow.

Java:

  • it is hard to write anything using IData and IDataCursor directly (simple wrappers are needed),
  • it is hard to write anything using Developer,
  • it is better to use a real IDE like Eclipse or NetBeans - compile the code, pack it into a jar and deploy it to /code/jars,
  • a real tool for generating service skeletons, stub classes and document wrapper classes is needed (I wrote one myself),
  • all the others …

Flow:

  • can be used by business people,
  • can be refactored within Developer,
  • you see what you have (WYSIWYG?),
  • flow is based on a declarative approach: each step is at least one object, and each flow is a huge tree of objects in IS memory
    – result: incredible memory consumption
    – in practice: with 4500 flow services (we have more), after a 1-hour fight to load those flows, IS cannot stand up at all (memory depletion)
    – performance suffers (especially with loops)
  • WM Flow is not an open standard (as W3C XProc is),
  • it is hard to write complex logic in flow,
  • flow is not intuitive (Java is),
  • flow “code” is very hard to maintain and refactor,
  • flow services are sometimes disintegrated by bugs in the WM tools.