Can we mask the IS directory ns/packagename/foldername/servicename from the user, so that when the user logs in he goes directly to that directory? I know we can do this with Windows- or UNIX-level users, but can we do it using webMethods IS FTP users? That is my question.
IS is not a general-purpose FTP server. If you need one of those there are plenty of commercial and open source products.
IS FTP is designed to invoke a service in response to transferring a file to IS. By default, the transferred file is never written to disk, but rather is stored in memory and its contents passed to the service that corresponds to the file path. You should never do this with very large files as you will consume all available memory very quickly.
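To illustrate that behaviour, here is a minimal sketch using Python’s standard ftplib. The host, credentials, package, folder, and service names are all hypothetical; the point is simply that on the IS FTP port, the ns/… directory you cd to names the service that receives the file’s content.

```python
from ftplib import FTP


def service_ftp_path(package: str, folder: str, service: str) -> str:
    """Build the ns/ directory path that maps to an IS service."""
    return "/".join(["ns", package, folder, service])


def put_to_service(host: str, user: str, password: str, local_file: str) -> None:
    """Sketch: upload a file so that IS invokes the target service.

    All names here are hypothetical examples, not real endpoints.
    """
    ftp = FTP(host)                    # connect to the IS FTP listener port
    ftp.login(user, password)
    # cd'ing to ns/<package>/<folder>/<service> selects the service to invoke
    ftp.cwd(service_ftp_path("myPackage", "myFolder", "processOrder"))
    with open(local_file, "rb") as f:
        # the transferred bytes are handed to the service, not written to disk
        ftp.storbinary("STOR order.xml", f)
    ftp.quit()


if __name__ == "__main__":
    print(service_ftp_path("myPackage", "myFolder", "processOrder"))
```

Running the script just prints the service path (`ns/myPackage/myFolder/processOrder`); the upload function is only a sketch you would point at a real IS host.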
Are you sure about this? (Or perhaps I’m not understanding this correctly)
It’s just that we have a fresh install of webMethods 6.5 on both Windows and UNIX.
On both environments if you set up a user as a TNPartner then log in via FTP, this creates a physical directory under the root FTP directory.
i.e. IntegrationServer/userFtpRoot/ by default.
Then if the user changes to their account directory and pushes a file, IS writes the file directly to disk with the filename provided by the client (as per a normal FTP server). We’ve proved this with large file transfers, as you can actually watch the file growing in size in the directory.
Once the transfer is complete, this activates the ‘pub.client.ftp:putCompletedNotification’ notification process, which you pick up with a trigger; the trigger runs your flow service, to which you add the ftpPutNotification document as an input.
This document contains the username and the filename (including full path) plus other stuff.
The files themselves are left in the directory, and at no point is the file loaded into memory; it resides only on the drive.
If you want to do something with the file, you presumably have to add some code to your flow service to pick up the file. And at that point, of course, you can stream it if it’s a big file.
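Since the file is left on disk, the key to handling big files is to read it in chunks rather than pulling the whole thing into memory. Here is a minimal Python sketch of that chunked-read pattern; in practice the path would come from the ftpPutNotification document, and the per-chunk step would be your real processing.

```python
import os
import tempfile

CHUNK_SIZE = 64 * 1024  # process 64 KB at a time; memory use stays bounded


def process_in_chunks(path: str) -> int:
    """Stream a file from disk one chunk at a time.

    Returns the total number of bytes processed. A real service would
    do something useful with each chunk instead of just counting bytes.
    """
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:       # empty read means end of file
                break
            total += len(chunk)  # replace with real per-chunk processing
    return total


if __name__ == "__main__":
    # Demonstrate on a temporary file; in practice the path comes from
    # the ftpPutNotification document.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"x" * 200_000)
        name = tmp.name
    try:
        print(process_in_chunks(name))  # prints 200000
    finally:
        os.remove(name)
```

The same idea applies whatever language your service logic is in: bound memory by the chunk size, not the file size.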
The question to which I replied did not involve TN and my reply was correct. IS still is not a general purpose FTP server and files sent to the IS FTP port are never written to disk unless you develop a custom content handler to do so (which I have written about here a few years ago).
This capability was introduced in version 6.5 (and it’s not a TN thing–the user account does not need to be a TNPartner).
This addition looks very useful to me. Only wish I had known about it sooner! I guess I should start reading the enhancements and fixes docs more closely.
Mark’s comment is still correct–when doing FTP to IS the “old-fashioned” way, the file is never written to disk. It is loaded into memory and the service, as represented by the directory that the file is put to, is invoked. I’m thinking that I’ll probably never use this old way ever again!
I wasn’t aware that there was another version of FTP other than the standard FTP as accessed from the Ports screen, which uses the file system (in 6.5+).
We’ve only used wm 6.5, and this IS FTP you’re talking about was never mentioned during training, so I wasn’t aware of it. I guess that perhaps wm are trying to play down the older IS FTP in preference for the new disk-based version.
“I guess that perhaps wm are trying to play down the older IS FTP”
I’m not sure this is the case. Any service can be invoked via FTP or HTTP. I don’t see that changing, especially since it isn’t really “old IS FTP” vs. “new IS FTP”. It’s still the same FTP listener, I imagine, simply with facilities to know the difference between being cd’d to a service vs. a file directory. A good addition for sure, though.
[begin rant] (not directed at you Boothy)
I think they are finally relenting to the tendency of IS users to “abuse” IS. Lots of people assume that since one can connect to IS using the FTP protocol, it is an FTP server that behaves like all the usual FTP servers they are familiar with. With this new facility it is closer to that, but I’m sure it still lacks the features found in most “real” FTP servers.
Too many people want to do the same ol’ thing they’ve always done by processing things in batch files and passing around multi-megabyte files. IS wasn’t really designed for that sort of processing, though it can be made to do so. Over the years it keeps getting better but the docs, samples and the way things are described tend to lead those who are unfamiliar with IS (and streams and such) to load big files completely into memory.
One of the points of integration tools was that we weren’t supposed to batch things anymore. Instead, things should be processed as they happen, which tends to mean lots of little transactions instead of a few monster transactions.
I think that’s why the “traditional” approach of IS was to simply load the complete content into memory, since the idea behind near real-time integrations was lots of little transactions. But it seems lots of people want to throw 100MB files at it but then wonder why it falls over when they unknowingly load multiples of those complete files into memory.
I know what you mean regarding batch processing. Most products now tend to be designed for real-time processing of small files sent often, rather than for batch processing of large files sent periodically.
Unfortunately many businesses still don’t work that way. I’ve been doing integration work for quite a few years now (I’m new to wm as a tool set, but not to the business), and in my experience many businesses, or at least many specific systems within a business, just don’t suit the real-time model, which means most integration tools just don’t fit properly. But because other parts of the business do, they buy something like wm, then have to make wm ‘fit’ the rest of the way they do business, as they don’t want to buy a second tool set better suited to batch processing.
We integrate between about 80+ internal systems, some new, some legacy. A mixture of old mainframes built 20 years ago to UNIX (HP and AIX), Linux and Windows systems running customised and off-the-shelf packages. Plus then we have external 3rd parties running systems for us, about 30+, and then client connections, about 200+.
And unfortunately most of these interfaces, even the new ones, use batch processing and files rather than messages.
It doesn’t seem to matter what we say to the customer, or what advantages we tell them real-time and message based routing has, they still want batched files run on a rigid schedule that are then FTP’d around the place! So even the latest systems that are implemented, still use batches.
I would guess that we are not the only business doing this. Maybe wm is noticing this, and so starting to accommodate what the customer wants, rather than what wm (or the rest of the industry) think the customer needs.
It is true that many processes are still best handled in batches.
The product has definitely evolved over the years, and the large file handling has steadily improved. The main problem I’ve seen is developers who make assumptions about what IS does (e.g. assume that it is an FTP server) and don’t dig into the details. Then they blame the platform when the solution they’ve developed doesn’t do well under load. There are lots of things that could be improved about IS, don’t get me wrong. But we developers need to be responsible and knowledgeable users.