FTP vs HTTP for large file transfer

Hi everybody,

Is FTP better or worse than HTTP?

I read and hear that FTP is favoured over HTTP for large file transmissions. There are layers built around FTP, like SFTP and FTP of S/MIME data (AS3), to address the security issues posed by FTP.

When we scratch a little deeper, both HTTP and FTP are built on the same TCP/IP stack, which is the actual mechanism of data/byte transfer. I am trying to figure out, technically, what makes FTP faster and more reliable than HTTP for large file transfers. Or is it just a myth?

Suggestions/Comments are welcome


I believe both FTP and HTTP transports are equally fast for small to large file transfers; it all depends on the available bandwidth and the performance of the server processing the request.

just some thoughts.

DG -

As RMG mentioned, both HTTP and FTP are roughly equivalent in speed and reliability. In a TCP/IP network, each network layer has its own integrity mechanisms (e.g., checksumming). If you are using HTTPS, I believe the SSL protocol adds its own integrity checks as well.

Some drawbacks of FTP in webMethods: IS only supports receiving FTP files via its FTP port (and does that very well; as soon as the FTP receive finishes, it invokes a service automatically). However, IS does not support spooling FTP files for pickup by remote partners; for that you have to run a separate, non-webMethods FTP server. Also, FTP uses two connections (control and data, with the server connecting back to the client in active mode, as opposed to passive mode), compared to one port for HTTP. This creates possible timing issues which must be carefully addressed (e.g., a partner logs into the FTP server and tries to pick up a file that is still being written to disk).
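The pickup timing issue above is often handled by polling the file's size until it stops changing before downloading it. A minimal sketch of that heuristic (the class and parameter names here are illustrative, not part of any webMethods API):

```java
import java.io.File;

public class StableFilePoller {
    // Returns true once the file's size is non-zero and stops changing
    // between polls, a common heuristic for "the upload has finished".
    public static boolean waitUntilStable(File f, int maxPolls, long pollMillis)
            throws InterruptedException {
        long lastSize = -1;
        for (int i = 0; i < maxPolls; i++) {
            long size = f.length();
            if (size > 0 && size == lastSize) {
                return true; // two consecutive polls saw the same size
            }
            lastSize = size;
            Thread.sleep(pollMillis);
        }
        return false; // size was still changing after maxPolls attempts
    }
}
```

Size stability is only a heuristic; where the sender cooperates, a rename-after-upload convention (upload as `file.tmp`, rename to `file.dat` when done) is more robust.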

I hardly know anything about AS2 or AS3, but they probably have well-defined fields for digital signatures, non-repudiation functionality, etc. This is probably doable over HTTPS, but I don't know much about digitally signing documents going over an HTTPS transport.

If there aren't specialized protocol requirements (e.g., you don't need to interface with AS3 customers), I'd go with HTTP.


We use both FTP and HTTP for flat file processing.

My only concern with HTTP for large flat file processing is that we sometimes encounter an HTTP timeout before webMethods finishes processing the inbound files.

In one scenario, we receive a large EDI flat file via HTTP; while webMethods is processing the file (de-enveloping it into separate X12 transactions), the HTTP connection times out before webMethods responds to the trading partner (HTTP response code 200?), and the trading partner re-sends the EDI again.

Thanks for all your valuable comments.

I am inclined to suggest HTTP/S for larger file transmissions as opposed to FTP. FTP seems relatively complicated in terms of security, firewalls, etc.

In your case, I believe the HTTP client timeout is controlled by your partner's HTTP program. Your partner's HTTP client times out because your wM server does not return an HTTP response code within the wait time set by your partner. There are a couple of options to deal with this, such as requesting an increased wait time, or, if AS2, requesting an asynchronous MDN.
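On the client side, the "increased wait time" option usually means setting a generous read timeout so the sender keeps waiting while the receiver de-envelopes a large document. A sketch using the standard `HttpURLConnection` (the endpoint and timeout values are assumptions, not anything from the thread):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LongPost {
    // POSTs a payload and waits a long time for the response code, so slow
    // server-side processing does not trigger a client timeout and a re-send.
    public static int send(String endpoint, byte[] payload) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setConnectTimeout(30_000);   // fail fast if the server is unreachable
        conn.setReadTimeout(15 * 60_000); // but allow up to 15 min for the 200 OK
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        // Stream the body with a known length instead of buffering it in memory.
        conn.setFixedLengthStreamingMode(payload.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);
        }
        return conn.getResponseCode(); // blocks until the server finally replies
    }
}
```

The asynchronous MDN route avoids the problem entirely: the server acknowledges receipt immediately and delivers the processing result later on a separate connection.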

Now, I am contemplating writing a large-file (over 50 MB) Content Handler (LFCH). This LFCH would inspect all MIME-type inbound transmissions for the Content-Length HTTP header and make a decision based on its size. For example, if Content-Length (CL) is over 50 MB, read the stream and write it to disk as you read, freeing up memory; do NOT build up 50 MB in IS memory as a pipeline variable. If CL is under 50 MB, allow the default content handlers to take control.

I am finding it hard to register my large-file content handler based on Content-Length. It looks like registration is tightly coupled with Content-Type.

Please advise/suggest/comment.



I have a query: how can I find the processing time to generate canonicals in the document tracker once the files are FTPed? Our system picks the files up via FTP and subscribes them to the database.

If anyone has an idea other than using debugLog, please send it across.