Deleting files after polling the folder is not working correctly

The scheduler runs and polls the directory every half-hour. The basic functions performed by the flow are to list the files in the directory and FTP those IDoc XML files. Once the FTP is successful, the file info is passed to the deleteFile service along with the targetDirectory. If the FTP is not successful after several retries, the file remains in the directory to be picked up in the next run. But for some reason a few files are invariably left in the directory in spite of a successful FTP. Following is the basic structure of my service:

listFilesInDirectory
Loop over filenameList
    getFile
    other services
    FTP
    If successful
        deleteFile
    Else
        next record (file)
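
In plain-Java terms the loop does roughly this (just to illustrate the logic, not my actual flow service; the FtpHelper interface is a made-up stand-in for the built-in FTP services):

    import java.io.File;

    public class PollAndShip {

        // Made-up stand-in for the flow's FTP put logic.
        public interface FtpHelper {
            boolean put(File f);
        }

        public static void run(File sourceDir, FtpHelper ftp) {
            File[] files = sourceDir.listFiles();      // listFilesInDirectory
            if (files == null) return;                 // directory missing or unreadable
            for (File f : files) {                     // Loop over filenameList
                boolean sent = ftp.put(f);             // getFile, other services, FTP
                if (sent) {                            // If successful
                    if (!f.delete()) {                 // deleteFile
                        // This is the suspect case: the put succeeded but the
                        // delete quietly failed (lock, permissions, wrong path).
                        System.err.println("Could not delete " + f.getAbsolutePath());
                    }
                }
                // Else: the file stays for the next scheduled run.
            }
        }
    }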

Can someone tell me what I am doing wrong? How do I fix this?

Hi Sanjaysuman,

The flow looks OK to me. Why don’t you trace through the steps? When the FTP succeeds for one of the files that is not getting deleted, check that you have the correct file and directory information at the deleteFile step.

Regards,
Sandesh

Hi Sanjaysuman,
I believe you have a Java service called deleteFile. Does the deleteFile service check whether a file is deletable or not? If a file is not deletable, does it output a status code that you could check in your flow service?
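
Something along these lines would do it (a minimal plain-Java sketch only, not your actual service; the status strings are made up), so the flow can branch on the outcome instead of assuming the delete worked:

    import java.io.File;

    public class DeleteWithStatus {

        // Returns a status string instead of failing silently, so the
        // calling flow service can branch on the result.
        public static String deleteFile(String targetDirectory, String fileName) {
            File f = new File(targetDirectory, fileName);
            if (!f.exists()) {
                return "not found";        // wrong directory, or already removed
            }
            if (!f.canWrite()) {
                return "not deletable";    // e.g. a permission problem
            }
            return f.delete() ? "deleted" : "delete failed";
        }
    }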

Also, explore using a filePolling port. Then you don’t have to rely on the listFilesInDirectory and deleteFile services.

Regards,
Bhawesh

Hi Bhawesh,

Can you please elaborate on how to set up a filePolling port, or point me to some notes on setting one up?

Thanks,
sanjay

Hi Bhawesh,

I think I got the answer. The filePolling port capability is not available in SAP Business Connector 4.7. So I think I am left with the only option of listing the files in the directory.

thanks,
sanjay

Sanjay,

You should also check for file existence before deleteFile is invoked.
There is a file:checkFileExistence service in the PSUtilities package; try it.

HTH,
RMG

This won’t work over an FTP connection. It is for local file systems only.

You don’t show any try/catch blocks. An error in any of the steps can leave your FTP session in a bad state for subsequent work if you don’t catch these errors to close the session.
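
To illustrate the shape of that error handling in plain Java (using Apache Commons Net here instead of the built-in ftp services; host, credentials and paths are placeholders):

    import java.io.FileInputStream;
    import java.io.IOException;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class FtpWithCleanup {

        // Whatever goes wrong during the transfer, the session is always
        // logged out and disconnected, so later iterations start clean.
        public static boolean put(String host, String user, String pass,
                                  String localPath, String remoteName) {
            FTPClient ftp = new FTPClient();
            try {
                ftp.connect(host);
                if (!ftp.login(user, pass)) {
                    return false;
                }
                ftp.setFileType(FTP.BINARY_FILE_TYPE);
                try (FileInputStream in = new FileInputStream(localPath)) {
                    return ftp.storeFile(remoteName, in);   // false = put failed
                }
            } catch (IOException e) {
                return false;                                // treat as unsuccessful
            } finally {
                try {
                    if (ftp.isConnected()) {
                        ftp.logout();
                        ftp.disconnect();
                    }
                } catch (IOException ignore) {
                    // nothing more to do; at least the next run starts fresh
                }
            }
        }
    }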

How long does a single scheduled batch run take? I assume it is not longer than 30 minutes, but if it is, you’ll have overlapping batch runs which may cause trouble (depending on the patches loaded into your system).

What prevents you from processing files twice? Doing so may not be a problem for your particular integration but if it is, you may want to consider this approach (which is more or less what the file poller does for files on a local file system):

get a list of the files using a pattern (*.dat maybe)
for each file in the list
–rename the file (add .tmp maybe) to mark it as “reserved”
–get the file
–process the file
–delete the file

This will prevent processing a file twice and will prevent a “bad” file from getting in the way and preventing processing other “good” files.
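
In plain Java the idea looks something like this (a sketch only; the *.dat pattern, the .tmp suffix and the process step are placeholders for whatever your integration really does):

    import java.io.File;
    import java.io.FilenameFilter;

    public class ReserveThenProcess {

        public static void run(File dir) {
            // get a list of the files using a pattern
            File[] candidates = dir.listFiles(new FilenameFilter() {
                public boolean accept(File d, String name) {
                    return name.endsWith(".dat");
                }
            });
            if (candidates == null) return;

            for (File f : candidates) {
                // rename the file to mark it as "reserved"
                File reserved = new File(dir, f.getName() + ".tmp");
                if (!f.renameTo(reserved)) {
                    continue;                        // another run got it first
                }
                try {
                    process(reserved);               // get + FTP + other services
                    if (!reserved.delete()) {
                        System.err.println("Processed but could not delete " + reserved);
                    }
                } catch (Exception e) {
                    // Leave the .tmp file in place so a "bad" file can be
                    // inspected without blocking the other "good" files.
                    System.err.println("Failed on " + reserved + ": " + e);
                }
            }
        }

        private static void process(File f) throws Exception {
            // placeholder for the real work
        }
    }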

Hi Sanjay,

I’m assuming what you’re doing is “put”-ting the file to a remote server.
If you’re using IS version 6 onwards, I agree with Bhawesh about exploring the filePolling port.

If you can’t use this, then by adding the try/catch blocks as reamon suggested, you can probably see what errors happened in your flow logic and whether or not it reaches the “successful” state when it doesn’t do the deletion.

Try to see if you can output the status value to a file every time it’s unsuccessful, so you know exactly whether a file that doesn’t get deleted went down the “unsuccessful” branch, or really reached the “successful” branch but then did not get deleted.
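
For example (a minimal sketch; the log file name is made up):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Date;

    public class FailureLog {

        // Appends one line per file so you can see afterwards which branch
        // ("successful" / "unsuccessful") each file actually took.
        public static void record(String fileName, String status) {
            try (PrintWriter out = new PrintWriter(new FileWriter("ftp-poll.log", true))) {
                out.println(new Date() + "  " + fileName + "  " + status);
            } catch (IOException e) {
                System.err.println("Could not write log: " + e);
            }
        }
    }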

Also, check whether you can make a more reliable “successful” check by verifying the remote file’s existence with “ftp:client:ftp:dir”, as suggested by rmg.
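
As a plain-Java analogy (using Apache Commons Net here; in the flow you would call the built-in ftp dir service instead, and the client is assumed to be already connected and logged in):

    import java.io.IOException;
    import org.apache.commons.net.ftp.FTPClient;

    public class RemoteCheck {

        // After the put, list the remote directory and confirm the file name
        // really shows up before treating the transfer as successful.
        public static boolean uploaded(FTPClient ftp, String remoteDir, String fileName)
                throws IOException {
            String[] names = ftp.listNames(remoteDir);   // null if the listing failed
            if (names == null) return false;
            for (String name : names) {
                if (name.equals(fileName) || name.endsWith("/" + fileName)) {
                    return true;
                }
            }
            return false;
        }
    }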

A similar principle can be used if you do a “get” as well.

You’ll get there …

Kurt