The current architecture has Broker and SQL Server on a Windows cluster + SAN, and then IS running on Solaris 10.
The driver for this architecture is "it's been implemented successfully by another team" rather than a fit with our specific requirements (not that we really have any either!).
As a developer I would like to know what the real pros/cons of Unix vs. Windows are for hosting IS.
I would personally like it hosted on Windows so we can use the .NET integration (we are not doing any yet, but most of us are .NET developers, so it would be nice to have the option!), but without a real value proposition it won't happen. Maybe Solaris is the best host, but it seems to me it won't do anything magical that can't be done on Windows. (To my understanding, specific performance targets are not a current requirement of ours anyway.)
The debate of Unix vs. Windows has little to do with IS. IS will run fine on either, assuming competent sys admin skills on the platform. Sizing can also differ but that’s the same for any multi-platform app and as you mentioned, isn’t usually an issue in any case.
Go with the platform that has the best internal support from your sys admin team. IMO, you should forget the .NET support and focus on standards-based interfaces. A wayward .dll can pooch your entire JVM and crash IS.
What's weird is Broker on Windows and IS on Solaris. Usually I see the reverse. There is no reason Broker shouldn't be on Solaris; that's where it was born and grew up! Windows was the "other" platform.
See, I don't agree with that sentiment. If wM is offering a product/feature, it should work and be reliable.
A .NET .dll cannot crash the CLR. So long as wM is hosting the CLR correctly, there is NO WAY that some .NET code can crash IS (assuming it is 'safe', i.e. managed, code)! I have no idea how their architecture interacts with the CLR, but it should not be able to take down a JVM.
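To illustrate the safe-code point in JVM terms (since IS itself is Java-hosted), here is a minimal sketch. The method name and failure are invented for illustration; the point is that a failure in managed/"safe" code surfaces as a catchable exception rather than killing the process, whereas only native code (JNI on the JVM side, unsafe/unmanaged code on the CLR side) can segfault the host:

```java
public class ManagedSafety {
    // Hypothetical helper: fails on bad input, but as a catchable
    // exception, not a process crash.
    static int parseQuantity(String raw) {
        return Integer.parseInt(raw); // throws NumberFormatException on bad input
    }

    public static void main(String[] args) {
        try {
            parseQuantity("not-a-number");
        } catch (NumberFormatException e) {
            // The runtime (JVM here, CLR in the .NET case) contains the
            // failure and keeps running.
            System.out.println("caught: " + e.getMessage());
        }
        System.out.println("runtime still alive");
    }
}
```

The caveat is the word "safe": the moment a .dll drops into unmanaged code, the runtime's guarantees no longer apply, and a fault there can indeed take the whole process down.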
I assume that was a joke given the platform it runs on unless of course you are skipping the weekly patches and reboots.
I think Rob was trying to point out a fairly accepted design principle when dealing with integrations.
When hooking up diverse applications, if you can abstract and hide the underlying proprietary implementation (whether it be .NET, webMethods IS, or anything else), then your interfaces can, if designed correctly, become less brittle. The implementation can be moved and/or changed to other infrastructure without having to change the public interface.
In practice this has proven beneficial and more resilient to change.
The Windows piece was just ribbing. :p: I've run IS servers on Unix, Linux, and yes, even Windows. There's no reason it can't be stable on any of these platforms. Decent admins and good architecture combined with solid development are always the keys to making any platform reliable. Without those, no platform is going to save you.
The real point of the post was about how you wire things together. You can certainly use the various platforms' native interfaces if you want to, but I would call that questionable architecture. Integration is about tying things together without tying things together.
In terms of using .NET, I don't anticipate using it for anything more than where we currently use Java. (And the ability to use Java is obviously something that is broadly required, or else it wouldn't have been such an integral part of the IS product.)
But given I am a .NET developer, where possible I would rather do non-Flow stuff in .NET rather than Java. Nothing against Java, but there are only so many technologies one can master!
I agree it's probably not the best idea to use .NET to step outside the IS environment, in other words, to integrate with something directly via .NET. (But then I would never rule it out!)