I have built a very simple service to navigate a directory structure and view text files from a browser. To display the file content, I read the file using pub.file:getFile and convert the bytes (or the stream) to a String. I then output the String using a simple output template:
%value content%
Everything runs fine when the file is small. However, I get an OutOfMemoryError when I try to read large files.
I am implementing some sort of pagination, which works for text-based files (i.e. I output, say, 3000 lines at a time), but I would also like the option (especially for large binary files) of flushing the whole stream to the browser, which would let the user, for example, save the file locally by right-clicking and choosing “Save target as…”.
However, I could not find a way to get at the server output stream so that I can flush the output to the browser as I read the file. I wonder if this is possible at all…
This is an interesting use of the Integration Server as a file server. Be careful with security. If you want to send the entire file as a monolithic HTTP response, it would have to be loaded into memory anyway, and the only way to handle that is to increase the memory on the IS (which is presumably where your OutOfMemoryError is coming from). However, you might be able to rig something up so that the HTTP response contains the stream (from getFile), which would then be handled by the client; what I'm proposing would behave like an HTTP download with a file chooser, etc. I haven't done this, so it's theoretical, but it should be possible.
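The streaming idea above amounts to copying the file to the response stream in fixed-size chunks, so the whole file is never held in memory. A minimal generic java.io sketch of that pattern (the class name, method name, and buffer size are mine; how you actually obtain the response OutputStream inside the IS is a separate question and depends on your version):

```java
import java.io.*;

public class StreamCopy {

    // Copy an input stream to an output stream in fixed-size chunks,
    // so memory use stays bounded by the buffer size, not the file size.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Demo with in-memory streams; in the real service the source would be
        // the stream from getFile and the sink the HTTP response stream.
        byte[] data = new byte[100000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println(copied); // prints 100000
    }
}
```

The key point is that only the 8 KB buffer is ever resident, regardless of the file size.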
What you are looking at is a solution with sockets and streams, which will involve a lot of Java programming. And I am not even sure you should do this with webMethods at all… there are very good programs for these purposes, such as Apache. You might want to think about a hybrid solution.
In the meantime, I’ve worked around the problem by outputting the file in chunks. If the file is longer than 10000 lines, I let the user select on screen the beginning and end lines (with a maximum of 10000 lines: large enough to be useful, but not so large as to cause an OutOfMemoryError).
Internally, I did this with a small Java service that reads the file using the standard java.io.* classes.
It works for text files and not for large binary files, but it does the job I needed.
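That line-range workaround can be sketched roughly as follows (the class and method names and the StringReader demo are mine; the actual service would presumably wrap a FileReader over the log file):

```java
import java.io.*;

public class LogSlice {

    // Return only lines [from, to] (1-based, inclusive) of a text source,
    // skipping earlier lines and stopping early, so memory use is bounded
    // by the size of the requested slice rather than the whole file.
    public static String readLines(Reader source, int from, int to) throws IOException {
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(source)) {
            String line;
            int lineNo = 0;
            while ((line = r.readLine()) != null) {
                lineNo++;
                if (lineNo < from) continue; // skip lines before the window
                if (lineNo > to) break;      // stop once past the window
                out.append(line).append('\n');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        // Demo with an in-memory "file" of five lines; ask for lines 2-4.
        String fake = "a\nb\nc\nd\ne\n";
        System.out.print(readLines(new StringReader(fake), 2, 4)); // prints b, c, d
    }
}
```

Because readLine streams the file, only the requested window is ever accumulated in the StringBuilder.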
Giorgio, good to have it working. But since this is the Architecture forum, there is a point I’d like to make: if you need a solution to a specific problem, don’t waste too much time thinking of ways to solve it with wM. For many issues there are simple solutions in other applications. As they say, “to each its own”: for file serving you need a file server; for FTP you need an FTP server. Most of these functionalities are mimicked in wM, but if you need the real thing, use the real thing.
Again, many thanks to all those who have taken the time to reply.
Just to give a bit of background: the files I was interested in are the wM server/error/audit logs, plus some custom logs we create in the same logs directory. And to pre-empt your further comments:
why the custom logs are placed in the standard logs directory -> so that they are easier to manage and benefit from the archiving scripts we already have in place for files inside the logs directory
why I tried to use wM instead of an FTP server -> because we did not like FTP running on that box
why I didn’t try to use a web server -> because none was installed on that box
why wM -> because I liked the idea of restricting access via ACLs
why not the standard wM console to view the logs -> because it does not work for the custom logs