Automated code deployment in webMethods

Interesting discussion :slight_smile: :+1:

Allow me to add one thought: In my opinion we need to be careful to distinguish application design (in the sense of DDD, Domain-Driven Design) from deployment. The two are of course closely interrelated, but looking at them at least somewhat independently makes things a lot easier for me.

My approach to the application design is roughly like this:

  • I have several types of packages
    • Application level: The part the business side cares about. An example in my case was syncing opportunities from the CRM system with cases in the export control system. So services would be e.g.:
      • Scan the CRM for new opportunities and create a new case in the export control system. The crucial point is that this package has no idea about the specifics of the CRM system; those are dealt with in a "system" package. All this package knows is that, for the context of the task at hand, an opportunity is considered "new" if it does not have a case ID assigned from the export control system. So it invokes a service with the input caseIdExportControl == null. All that is pseudo-code, but I hope you get the idea.
      • Scan the export control system for cases that are completed and sync the decision back to the opportunity.
    • System connection level: Handle connections and specifics of external systems.
      • E.g. provide a generic search capability for opportunities in the CRM system. So this package does not know what an empty field caseIdExportControl means for the business. But it does know how to execute a query (via the aforementioned service) with a given set of criteria.
      • Usually, this package would also contain the connection to the external system. If a different specific connection is needed, that can be done with a run-time override.
  • It is vital to have proper dependencies between packages. In my case this mattered not only for connections, but also for things like the Java class loader.
  • There are also additional helper/util/common packages for things like business-level auditing, configuration management, etc.
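The layering above can be sketched in plain Java. This is purely illustrative: the class names (`CrmSystemConnector`, `OpportunityCaseSync`) and the map-based records are my own stand-ins for the actual Flow services and IS documents, not real webMethods APIs. The point is only that the business rule ("new" means `caseIdExportControl == null`) lives in the application-level package, while the system-level package exposes a generic query and knows nothing about that rule:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Stand-in for the system-connection package: it knows how to query the
// CRM system, but has no idea what any field means for the business.
class CrmSystemConnector {
    private final List<Map<String, Object>> opportunities;

    CrmSystemConnector(List<Map<String, Object>> opportunities) {
        this.opportunities = opportunities;
    }

    // Generic search: return opportunities whose given field matches the
    // given value; a null value means "field absent or null".
    List<Map<String, Object>> searchOpportunities(String field, Object value) {
        return opportunities.stream()
                .filter(o -> value == null ? o.get(field) == null
                                           : value.equals(o.get(field)))
                .collect(Collectors.toList());
    }
}

// Stand-in for the application-level package: it owns the business
// definition of "new" and delegates all CRM specifics to the connector.
class OpportunityCaseSync {
    private final CrmSystemConnector crm;

    OpportunityCaseSync(CrmSystemConnector crm) {
        this.crm = crm;
    }

    List<Map<String, Object>> findNewOpportunities() {
        // Business rule: "new" means no export-control case ID assigned yet.
        return crm.searchOpportunities("caseIdExportControl", null);
    }
}

public class LayeredPackagesDemo {
    public static void main(String[] args) {
        // One opportunity already has a case ID, one does not.
        Map<String, Object> withCase = new HashMap<>();
        withCase.put("id", "OPP-1");
        withCase.put("caseIdExportControl", "EC-42");
        Map<String, Object> withoutCase = new HashMap<>();
        withoutCase.put("id", "OPP-2");

        CrmSystemConnector crm =
                new CrmSystemConnector(List.of(withCase, withoutCase));
        OpportunityCaseSync sync = new OpportunityCaseSync(crm);

        // Only OPP-2 qualifies as "new".
        System.out.println(sync.findNewOpportunities().size()); // prints 1
    }
}
```

The design payoff is that swapping the CRM system only touches the connector, and changing the business definition of "new" only touches the sync package.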

I found it made things considerably easier. Yes, I needed to come up with some custom tooling. But as you probably guessed from other statements in this forum, that is not something I shy away from. :wink:

Not sure if this makes things clearer or rather muddied the waters, to be honest. If there is sufficient interest, I can probably come up with something more concise in a blog post.

Closing on a more practical note: The ART Connection Updater can disable a connection before IS starts up, as well as adjust the connection pool settings.

My only concern with using a custom tool is tool maturity. When we implement a new project, there are always some edge cases that we forget to address or implement at the beginning. These kinds of issues can only be resolved over time. Separating connections out and deploying them in a different package is usually safer, imo. But containerization can unfortunately make this overcomplicated.

Having been in the same shoes, I can fully understand your concern about tool maturity. That is in my opinion one of the advantages of having them as open source. You can look at the implementation and build an informed opinion yourself.