Versioning with SVN in Local Service Development

Hi All,

I am looking into setting up the environment configuration and working process for using SVN in webMethods 9.7 with Local Service Development.
For internal reasons I decided on using the Polarion Subversive Plugin and connectors, and I am now testing out multiple scenarios.

In the meantime, can anyone recommend best practices for both configuration and process? Otherwise, any common pitfalls or tips and tricks would also be nice!

I know that there are limitations when going with the Eclipse plugins, and CrossVista would probably be the recommended approach (I have used it and, without any marketing involved, I can say it covers these features best, comparable to what you get when working with Java, for example). This is why I’d like to hear your feedback based on your experience.

Many thanks in advance,

Hi Ana -

I have seen documents that describe why Local Development is a preferred practice. I have not, however, seen a document that describes the preferred practices of Local Development itself. Perhaps, through this forum, we can put something together collaboratively.

As a starting point, I’d like to ask: are there more specific questions you’re seeking answers for?


Hi Percio,

Great, thanks!


  1. Branching or not? (meaning use merge or not?)
    As we all know, SVN is very good at version control for many types of assets, including binary files, but it handles them generically. I have read in the documentation and on forums that merging is not recommended in webMethods: webMethods Flow is a graphical language, the packages/assets have an XML representation, and so on.
    On the other hand, starting with 9.6, webMethods supports Git. This could mean merging extensively, unless the packages are very granular…
    With this in mind, I am looking into adding a step to the process to lock an asset in the repository before development and unlock it afterwards. There are pros and cons: if someone forgets to unlock an element, another developer might wait for no reason; if someone goes on vacation, someone else needs the access rights to unlock their assets. At the same time, it should eliminate the risk of erroneous merges. (I have not yet tried to find examples where a merge fails.)
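
To sketch what that locking step could look like: `svn lock` and `svn unlock` are standard SVN commands, and a small helper could wrap them so the process always uses a consistent message. The wrapper below only builds and prints the commands by default; the paths and the lock message are illustrative.

```python
import subprocess

def svn_lock_cmd(asset_path, message="Editing in Designer", unlock=False):
    """Build the `svn lock` / `svn unlock` command for a working-copy asset.

    `svn lock` is held server-side, so another developer's commit of the
    same file is rejected while the lock exists.
    """
    if unlock:
        return ["svn", "unlock", asset_path]
    return ["svn", "lock", "-m", message, asset_path]

def run(cmd, dry_run=True):
    # Dry-run by default: print the command instead of executing it,
    # so the sketch is safe to try outside a real working copy.
    if dry_run:
        print(" ".join(cmd))
        return 0
    return subprocess.run(cmd).returncode

# Illustrative paths: lock a Flow service's flow.xml before editing,
# unlock it once the work is committed.
run(svn_lock_cmd("MyPackage/ns/myFolder/myService/flow.xml"))
run(svn_lock_cmd("MyPackage/ns/myFolder/myService/flow.xml", unlock=True))
```

For the vacation scenario: a forgotten lock can be broken with `svn unlock --force` against the repository URL, provided the user has the rights for it.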

  2. Use Eclipse based plugin in Designer or directly Tortoise SVN only?
    The Eclipse-based SVN plugin has several limitations in handling the assets, which is why the Tortoise SVN client could be used to overcome them. At the same time, why not use Tortoise SVN only, even if this means that the communication with the SVN server will be done from Tortoise rather than from Designer?

  3. Create the Projects in the SVN repository per package or not?
    I have read about this in an older post on the tech community, but I still have difficulty finding a good reason to do it. I know that each revision in SVN corresponds to a new file tree of the entire project (if one asset is modified, the new revision will actually contain the modified asset plus all the other assets from the project), but I do not see any issue with that.
    Any idea on this?

  4. How to commit configuration assets to SVN?
    Here I mean: TN docs, TN processing rules, TN partners, TPAs, Active Transfer events. The issue is that these are not files present in the working copy; one option is probably to build a script that extracts the needed configuration and commits it as a file to the repository, and then have a Jenkins job that handles these files in the expected manner during the build.

  5. How to initially upload the packages in the repository?
    Do the packages have to be converted to local projects before the upload to SVN, or can all the packages be uploaded to SVN first, with developers converting them to Local Projects once they need to make modifications?

II. Local Service Development
The purpose of the local service development setup is to offer developers their own environment to develop, execute, test and debug locally. It solves issues faced by large, geographically dispersed development teams: following the network proximity principle, issues related to network latency, connections, etc. are avoided.

The default installation of Designer workstation includes the standard IS setup: no JDBC adapters, no TN. It seems focused on covering the basic need to test your flows in isolation (not end to end).
However, this also means that in most cases developers will not be able to completely develop and test their solution in the local environment, nor will they be able to commit to SVN all the assets required to make the build. Examples: B2B solutions, Active Transfer solutions for MFT, integrations with SAP systems, etc.

I would go for setting up a local service development environment that enables developers to completely develop and test their solution. This is why I’d like to know if my opinion is still in line with the purpose of this SAG product.
Several reasons for this:

  • developers work in one environment only; no dependency on a second environment is created in order to complete the implementation and initial testing
  • the end vision is that the project in the SVN repository will contain everything required to deploy and test the solution in all upper environments (to achieve CI and CD); commits will be done from the Local Service Development environments only.

If you go for building a complete Local Development environment, it means a solution should be found for connectivity with 3rd-party applications, TN installation (in case of a B2B gateway solution), and Active Transfer setup (in case of an MFT solution). Apart from the technical items, there could also be license limitations: without a TN license you can configure only 2 TN partners (if I remember correctly), and Active Transfer requires its own dedicated license; on top of that come the extra hardware resources, etc.

I have seen Local Service Development used for IS only, but I do not see that as a good start for setting up Continuous Delivery, for the reasons I mentioned above. I am inclined to go for building a complete local DEV environment for the developers. What do you think? What is your experience with this?

There is another thread I opened for some specific questions I had, most of these are clear now (Assets change tracking with SVN and Polarion Subversive Plugin and Connector - webMethods - Software AG Tech Community & Forums).



I would not recommend using merge extensively without specific tools; merging flow.xml files will probably result in corrupt files. Addressing conflicts in this environment will not be easy.
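
One cheap safety net against corrupt merges, regardless of tooling, is to scan the working copy for leftover SVN conflict markers in flow.xml files before committing. A minimal sketch in Python (the marker strings are the standard SVN ones):

```python
from pathlib import Path

# Standard SVN conflict-marker prefixes left behind by an unresolved merge.
CONFLICT_MARKERS = ("<<<<<<<", "=======", ">>>>>>>")

def find_conflicted_flows(root):
    """Return flow.xml files under `root` that still contain conflict markers.

    A Flow service with unresolved markers will not load in Designer, so
    catching these before a commit keeps the repository copy intact.
    """
    bad = []
    for path in Path(root).rglob("flow.xml"):
        lines = path.read_text(errors="ignore").splitlines()
        if any(line.startswith(m) for line in lines for m in CONFLICT_MARKERS):
            bad.append(path)
    return bad
```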

The best I have found so far involves CrossVista’s TEAM product which has a visual diff and merge plugin for Designer (disclaimer: my company is a CrossVista partner).

Are there perhaps other similar tools on the market?

On the other hand, depending on the type of licensing for development you have, you may not be allowed to install all the products in your workstation (including the databases). Please check with your software licensing manager.

Best Regards,


Hi Gerardo,

Indeed, I used CrossVista on a different project: it offers pretty much everything you need for handling assets, deployment, etc. However, there are customers that, for one reason or another, decide not to invest in this tool and go for one of the free solutions supported by webMethods.

Thanks, I will check the details of the license we have for Local Service Development. Can you please elaborate on the license type? Do you mean that it could also include support for TN and Active Transfer? In a non-local-service-development environment, the license for Active Transfer is separate from the IS/TN ones.


Whoa! That’s a lot of stuff! :slight_smile: It may take me a while to get through all these, so I’ll reply to each, one at a time. Please note that these are just my opinions. You’ll probably find that answers to many of these come down to personal preference, which is true for many other topics as well and it’s one reason why I boycotted the term “best practices.” :slight_smile:


Whether coding in C#, Java, or webMethods Flow, whether using Subversion, Git, or Mercurial, branching and merging is a pain, so if you can avoid it, I recommend you do so. Avoiding branches/merges and doing all development in “trunk” is an idea that has actually been promoted by several experts, including the authors of the book Continuous Delivery, which I highly recommend. In that book, the author suggests using things like “feature hiding” and “branching by abstraction” to avoid branches/merges. If needed, I can elaborate further on these topics.

When it comes to webMethods, merging can be especially difficult because, as you’ve pointed out, many assets (like Flow services) are stored as complex XML files. That’s the bad news. Now, here’s the good news that some people overlook. Similar to how Java services are methods in a Java class, Flow services should also be small, modular pieces of code that execute a very specific operation, and because of their specificity and size, as long as your developers perform frequent commits, the need for simultaneous changes by two or more developers to a single Flow service should be virtually zero. Of course, this depends on the size of the team, the size of the code base, the number of simultaneous projects, etc. But in my experience, it’s typically true in a well-disciplined team.

So… my suggestion is to keep it simple, at least in the beginning, and do all your development in trunk. You’ll find that this may also greatly simplify your automated build and CI efforts.



Eclipse Plug-in vs. Tortoise SVN

I typically find the need to use both. For assets that exist within the Designer/Eclipse workspace, I find that using an Eclipse plug-in (I use Subclipse, mostly) greatly increases my productivity because I can do everything from a single location. Unfortunately, however, that is not true for all assets. Optimize, TN, and Broker assets, for example, typically have to be exported to the file system and then committed. For those scenarios, I tend to favor Tortoise SVN.



Create the Projects in the SVN repository per package or not?

I’ve heard similar suggestions before. I have seen people create project-specific folders in VCS and then add the packages under those folders. I can see how this may possibly make building and deployment easier (perhaps), especially since Deployer has a “projects” concept, but I personally do not follow this approach because some packages will undoubtedly be shared across projects. Therefore, my VCS structure has absolutely no reference to or dependency on specific projects.

Attached is a sample Subversion structure from a customer project.



How to commit configuration assets to SVN

Although I have plenty of experience with TN, none of my TN projects in the past involved versioning TN assets. However, from what I know, TN works in a very similar manner to Broker and Optimize. In other words, you have to export the assets (typically from MWS) to your local disk and then commit them from there.

A few gotchas to be aware of based on my experience with Broker, Optimize, etc:

  • Sometimes the assets are exported as a zipped file, so I recommend that you commit the unzipped files

  • Also, sometimes the export process combines many assets into a single definition file (e.g. Broker export). Doing version control on a massive file that represents multiple assets is almost as bad as versioning a zipped file. Therefore, as much as it pains me, I recommend exporting each asset individually (in these scenarios) so they each have their own file and you can version them individually.

  • Finally, the export process for some products (e.g. Optimize) also generates an ACDL file that is later used by downstream components like the Asset Build Environment and Deployer. The name and contents of the ACDL file are directly tied to the exported assets and you’ll find that building/deploying one without the other is nearly impossible. Just beware because if you want to have a generic VCS structure that can be leveraged by multiple projects, you may have to get painfully familiar with these files.
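
To illustrate the first two gotchas, here is a rough sketch of unpacking a zipped export into individual files before committing, so each asset keeps its own history. The archive layout is hypothetical; real product exports differ per product and version.

```python
import zipfile
from pathlib import Path

def unpack_export(zip_path, dest_dir):
    """Expand a zipped product export into individual files.

    Each asset then gets its own file and its own history in SVN,
    instead of one opaque, undiffable zip being versioned.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.endswith("/"):
                continue  # skip directory entries
            zf.extract(name, dest)
            extracted.append(dest / name)
    return extracted
```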

Having a script that automates the exporting and committing of these assets is not a bad idea because it will simplify the steps, increasing productivity and ensuring consistency. Just be careful not to automate too much, too soon. I typically recommend that my customers get comfortable enough with the manual steps before automating them so they know exactly what’s happening behind the scenes.
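
As a sketch of such an export-and-commit script: the `svn` commands below are standard, but the export step (`tn_export.sh` and its `-o` flag) is purely hypothetical and stands in for whatever mechanism extracts the configuration in your landscape. The helper is dry-run by default, so it only prints the commands.

```python
import subprocess
from pathlib import Path

def export_and_commit(export_cmd, target_file, message, dry_run=True):
    """Run an export command, place its output in the working copy,
    then `svn add` and `svn commit` the result for Jenkins to pick up.

    The export command and its `-o` output flag are hypothetical;
    substitute whatever extracts the configuration in your landscape.
    """
    steps = [
        export_cmd + ["-o", str(target_file)],
        ["svn", "add", "--force", str(target_file)],
        ["svn", "commit", "-m", message, str(target_file)],
    ]
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))  # dry-run: show, don't execute
        else:
            subprocess.run(cmd, check=True)
    return steps

# Hypothetical example: "tn_export.sh" stands in for the TN extraction step.
export_and_commit(["./tn_export.sh", "--rules"],
                  Path("config/tn/processing_rules.xml"),
                  "Export TN processing rules")
```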



How to initially upload the packages in the repository

In my experience with the GCS Local Development plug-in, which works hand-in-hand with Designer Workstation but is slightly different, you do not have to convert the packages to a local project first. You can commit all your packages to Subversion; then, when the first developer pulls a package down, he/she can convert it to a local project. In fact, Eclipse will typically pop up a wizard asking what type of project to use.

Just remember to ignore files that do not need to be versioned when committing your packages, such as java.frag, flow.xml.bak, and the classes folder.
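
A small sketch of that filtering rule, useful either for building an ignore list or for sanity-checking a commit: the pattern list matches the generated files mentioned above; extend it as needed.

```python
import fnmatch

# Generated files that should not be versioned: java.frag, *.bak files,
# and the compiled classes folder are re-created at build time.
IGNORE_PATTERNS = ["java.frag", "*.bak", "classes"]

def versionable(paths, patterns=IGNORE_PATTERNS):
    """Filter a package's file list down to what belongs in SVN.

    A path is dropped if any of its components matches an ignore pattern.
    """
    def ignored(path):
        return any(fnmatch.fnmatch(part, pat)
                   for part in path.split("/") for pat in patterns)
    return [p for p in paths if not ignored(p)]
```

In practice you would set this once via `svn propset svn:ignore` on the relevant folders, so the ignored files never show up in the commit dialog at all.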



Local Service Development

Your views on Local Service Development are 100% correct. It is very difficult to unlock the full potential of local development without being able to fully replicate the target environments locally. The technical limitations are minor. I, for one, have been running multiple suites on my laptop for years. This leaves us with the licensing limitations.

I can assure you that Software AG’s Product Management is aware of this issue and they have plans to address it. I know this because I worked for Software AG and I was part of this conversation more than once.

Now, until it is officially addressed, what can you do?

The best advice I can give you is to work closely with your sales rep. Years ago, the benefits of local development and continuous integration/delivery/deployment weren’t well known by non-techies. However, this has changed. Today, the sales force and other business units are more aware of their benefits in helping IT deliver solutions to production more quickly and more reliably, and there’s almost nothing more important to your rep than seeing you, and other customers like you, deliver successful projects to production.

So, especially if you have already made an investment in Designer Workstation, sit down with your rep, have a frank conversation about your local development needs, and he/she should be able to work with you to get you what you need.



Hi Percio,

Well… :slight_smile: I wanted to make sure we kick off this topic with some nice and useful discussion items :smiley: and your answers are extremely informative and extremely useful!
You’ve answered all my questions in record time and with an impressive level of accuracy, many thanks for that! This is what I call leveraging the community’s power! :smiley:

I agree with you: some design/implementation decisions are closely linked to both personal preference and the customer’s preferences/legacy/setup, etc. At the same time, after reading your answers, I am glad to see that we are on the same page when it comes to taking several golden rules into consideration. In the end, the term “best practice” can be seen as a set of principles/rules created after delivering similar solutions at different customers and/or on different projects.

If the book you are referring to is “Continuous Delivery” by Jez Humble and David Farley, I’m on it (started reading it a couple of days ago). It is indeed very good, surprisingly not too technical. In my opinion it can be read and understood by a broad audience, not only by techies :slight_smile:
You’ve pointed out a very important detail here: keeping the Flow services as granular as possible. Whenever the merge topic comes up, because of the very small chance that a merge is actually needed, people decide to avoid it and implement a lock mechanism instead. In these situations, the keep-it-simple principle wins 99% of the time!

Eclipse Plug-in vs. Tortoise SVN
Nice and clear, no further questions.

Create the Projects in the SVN repository per package or not?
Many thanks for attaching the sample: the naming convention is nice and clean, and via the folder structure (broker: docTypes, groups, etc.) the project in the repository is not just an abstract location but also has the webMethods “touch”.

How to commit configuration assets to SVN
The conclusion here is that it can be done, but it requires thorough investigation and testing.
Indeed, one common pitfall is to start automating everything before the actual process is reliable and crystal clear. I will keep your recommendation in mind!

How to initially upload the packages in the repository
“Commit all packages to Subversion and convert these to local projects one by one when needed.” After reading your reply, it makes sense and I expect it to work. I admit I have not looked into this much yet.
Also, SVN should be clean and contain only the needed files - duly noted.

Local Service Development
Thanks for the great advice; I will do my best to make sure that we get the support needed to use Local Development as originally intended.
It is sad to see that some Software AG customers currently use Local Service Development for only a fraction of its actual capabilities. My overall impression is that it is still not clear to them what the role of the Local Service Development product is.

I’ve just remembered another question I wanted to bring up: working with SVN from webMethods via Eclipse-based plugins will not identify any dependencies between services. In other words: if you modify servA and servB but commit only the change to servA, the service execution will fail. Having a solution in place to identify dependencies during the commit operation would help in this regard. I’m just wondering how difficult it would be to build this. Have you seen a standalone tool offering this functionality?
There are indeed a couple of steps prior to the actual commit where the developer can still notice that he/she forgot an asset, and the pop-up showing the assets modified in the working copy will show everything that is not in sync with the repository. Furthermore, the regression tests help in identifying any new bugs.
I am a fan of the magic pop-up windows that make things easier and reduce the human error risk, hence my question. :slight_smile:
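
For what it’s worth, a first cut at such a dependency scan does not look too difficult: exported flow.xml files reference invoked services in `SERVICE="..."` attributes (the exact attribute layout may vary by webMethods version, so treat this as an assumption to validate against your own exports). A sketch:

```python
import re

# Assumption: invoked services appear as SERVICE="folder.sub:name"
# attributes in flow.xml; verify against your own exported files.
SERVICE_REF = re.compile(r'SERVICE="([^"]+)"')

def invoked_services(flow_xml_text):
    """Return the set of services invoked by a Flow service's flow.xml."""
    return set(SERVICE_REF.findall(flow_xml_text))
```

Cross-checking each invoked service against the set of services included in the commit would then flag the “servA without servB” case.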

Many thanks again for your answers! You’ve helped in clarifying my initial questions and in the same time raised awareness on near future items to take into consideration!



Regarding your last question, one of the simplest solutions I can offer is not to do the commits at a service level. In other words, instead of right-clicking on the service itself and choosing “Commit”, you should right-click on the package. You will see that the commit dialog box will automatically include everything that has changed within that package. Even if you need to commit services across multiple packages, you can highlight all the packages at once in the Project Explorer view and then choose commit. This should not only ensure that you don’t miss anything, but it will also bundle your changes under a single commit, which saves time and makes sense in most cases.
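
The package-level commit described above can also be approximated in a script: group the changed working-copy paths by their top-level package folder and issue one commit per group. A sketch of the grouping step (assuming packages are top-level folders in the working copy):

```python
def commits_by_package(changed_paths):
    """Group changed working-copy paths by their top-level package folder.

    Assumes packages are top-level folders in the working copy; one
    commit per group then carries every change in that package.
    """
    groups = {}
    for path in changed_paths:
        package = path.split("/", 1)[0]
        groups.setdefault(package, []).append(path)
    return groups
```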

Now, this of course relies on the developer committing at the right level. If he/she fails to do this, then you must rely on your Continuous Integration process to catch the mistake. This can get very elaborate, including scanning the code for this type of mistake prior to building and testing. You could even incorporate this type of logic into a pre-commit hook to give you the magic pop-up feel. It’s all a matter of time and money. :slight_smile:
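
A skeleton of such a pre-commit hook, for illustration: `svnlook changed` is the standard way to list a transaction’s paths, and the `check` function here implements only a simple example policy (blocking generated files); a real dependency check would plug in at the same point. Anything written to stderr is shown in the developer’s client - the “magic pop-up” effect.

```python
#!/usr/bin/env python
import subprocess
import sys

# Example policy: generated files that should never reach the repository.
FORBIDDEN_SUFFIXES = ("java.frag", ".bak")

def parse_changed(svnlook_output):
    """Parse `svnlook changed` output ('U   path', 'A   path', ...)."""
    return [line[4:].strip() for line in svnlook_output.splitlines()
            if line.strip()]

def check(paths):
    """Return an error message per violating path; empty list means OK."""
    return ["generated file should not be committed: " + p
            for p in paths if p.endswith(FORBIDDEN_SUFFIXES)]

def main(repo, txn):
    out = subprocess.check_output(
        ["svnlook", "changed", "-t", txn, repo], text=True)
    errors = check(parse_changed(out))
    for e in errors:
        sys.stderr.write(e + "\n")  # shown in the developer's SVN client
    return 1 if errors else 0

if __name__ == "__main__" and len(sys.argv) == 3:
    sys.exit(main(sys.argv[1], sys.argv[2]))
```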

Hope this helps,

Hi Ana,

Another solution could be to create a CI environment (with Jenkins or other tools) where deployment consistency validation is done for the developer (it picks up the name of the committed package, runs static validation tools, performs the deployments to each environment, etc.).

It is another tool/server to care for, but you can better separate the roles of developer and deployment manager.

Regarding license issues: the standard package usually contains enough development licenses (which are, obviously, much cheaper than production licenses). Verify with your software licensing manager.

Finally, merging/comparing code, especially Flow services, is still hard. You may want to support Vlad Stan’s effort on creating an open source tool: – he has already created RichAudit

Best Regards,


I agree, there should be a balance between the time and money invested and the benefits this brings. Committing all the changes in your workspace might also lead to uploading more assets than you actually want; it depends on how you work and how many features/bugs/etc. you implement/fix in parallel.
At the same time, it’s very important to have a good procedure in place that is clear to all the developers. In case of mistakes, there’s always the extra quality-check layer: the unit tests/regression tests.



Thanks for your reply.

I will go for a simple setup first that works, is reliable and transparent, and solves the basic needs, but I’ll keep your suggestion in mind for later on.

The license question I have is related to Local Service Development; it reminds me of quite similar discussions around the Broker license compatibility with the UMS one, but that’s for a different thread. (The main point is that it depends a lot on how you handle the communication with the Software AG sales representative.)

I’ll have a look in detail into Vlad’s open source tools, thanks.

Best Regards,

TN Assets Versioning and Automatic Deployment

I’d like to get back to the question related to versioning of the TN assets. Let’s take as an example a medium-sized webMethods platform that includes a considerable amount of B2B integrations but also classic ESB implementations.
Versioning should be implemented for this, keeping in mind that the end goal is to set up a nice and clean Continuous Delivery process.

The challenge I see is the following:

  1. From the CI/CD perspective, it is advisable to have a common way of handling all the assets:
  • developers implement/test their changes on the workstation designer;
  • once done, they commit their changes to the SVN;
  • the CI process is triggered as soon as changes since the last build are detected (I will not go into detail on CI now)
    With this in mind, it means that the same approach applies to TN-related assets as well.
  2. From the B2B perspective, there are several things to take into account:
  • webMethods now supports batch TN partner onboarding - there is a nice webinar where this is described. The webinar mentions that once the B2B solution has grown (supporting all the required business standards, delivery methods, etc.), you will probably not have to make major changes on the Flow side, since the functionality needed for document exchange with the partners is already implemented. In most cases you will only have to configure/import the new partners (let’s say, for example’s sake, 50 new partners need to be imported).
    This is why you could already start the onboarding process from QA or even from PROD. The idea behind this is quite nice: you would slowly move partner onboarding from the developers to support or even your business staff.

The question that comes to my mind now is: if you start onboarding partners from QA/PROD, it means you are skipping the first steps: the assets will not be in SVN, and the DEV environment will not be in sync with QA or PROD. Regression testing will be skipped (that is, in a lower environment → the CI one).

This could be seen as a catch-22 situation… :slight_smile: either option is good, but whichever one you go with, you will deviate from the other.

I guess it all boils down to a simple question (Percio had a remark on this in his previous posts here): do we really need to store the TN assets in SVN?
Should these be deployed separately via Deployer, from top (PROD) to bottom (WSD/DEV) or from bottom to top, depending on the situation?

Any suggestions on how to address this?

Many thanks!

FYI - I changed the topic name to “Versioning with SVN in Local Service Development”, since what is discussed here is not specific to the SVN plugin type used to implement version control via Workstation Designer.



I definitely think there’s value in committing TN assets to VCS and ensuring that changes to those assets follow the same standard process as everything else. In my past experience we did not do this simply because the Asset Build Environment did not exist yet, and without it, there was no way (out-of-the-box) to deploy these assets from VCS. In other words, the benefit of committing these assets to VCS wasn’t as great as perhaps it is today.

In the book Continuous Delivery, you’re going to see that the author promotes the idea that nothing at all should make its way to production without going through VCS. One of the main reasons is that you should be able to fully re-create your environments from VCS if needed. I haven’t found a webMethods shop capable of doing this yet, but it’s a noble goal.

The partner on-boarding idea is neat, but I haven’t seen it done in practice. In my experience, most customers don’t feel comfortable on-boarding partners directly into QA or Production. Most partners are not interested in that either. Even if you do take this approach, however, you would probably want to at least build a feedback mechanism so that profiles that are introduced in higher systems get propagated down and into the VCS.

Now, if one is unsure whether committing TN profiles is worth the trouble, one idea is to begin with things like processing rules and doc types, which should be exactly the same across all environments, and therefore, are an easy and natural fit into a Continuous Integration/Delivery/Deployment solution. Once a certain level of comfort is reached with these assets, then introduce assets like profiles, which are naturally different from environment to environment. In other words, gradually increase the complexity and capability of your system as you go.

Hope this helps,



Indeed, I have seen that from the first chapter of the Continuous Delivery book, the author clearly highlights several key ideas:

  1. make sure there is transparency in everything going on in the CD process → have good auditing in place
  2. keep the workflow of handling the assets, from the development phase up to the release to the target environments, as generic as possible (there should be very little, preferably no, customization in the process)
    All this in order to be able to completely recreate a stable version of the environment, if needed, with almost no pain, time or effort, while being confident that it will work.

I have seen customers that are still in an early phase of setting up WSD and CI: there are many dependencies between the environments, WSD is not used to its full capability, and the initial feeling is that there is no control and no real working procedure in place. This could lead to an unpleasant result without the right guidance and people.
At the same time, there are also customers that are now looking into running the onboarding process directly from QA or PROD - this is one of the reasons why I raised the question here.

Your suggestion on the TN approach fits with what I had in mind. Furthermore, I see it as the logical and wise approach to follow.

Many thanks again, as always, informative and helpful!