I have a table with alphanumeric data in Russian.
The character data displays normally in the application, but after exporting it to a text file with the WRITE WORK FILE command, I found that the Russian characters in the file had been replaced with similar-looking English (Latin) characters. I think the problem is that the data is written out in the ANSI character set.
This makes it impossible to match the character data against a search request.
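For context, the export in question probably looks roughly like the sketch below; the work file path, the DDM name FIRM, and the record layout are assumptions, not the poster's actual code.

  * Hypothetical sketch of the kind of export described above.
  DEFINE DATA LOCAL
  1 FIRM-VIEW VIEW OF FIRM
    2 FIRM-NAME (A60)
  1 #REC (A60)
  END-DEFINE
  *
  DEFINE WORK FILE 1 'export.txt'
  READ FIRM-VIEW BY FIRM-NAME
    #REC := FIRM-NAME
    WRITE WORK FILE 1 #REC  /* the Cyrillic bytes come out as ANSI look-alikes here
  END-READ
  END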
The data already stored in the database is incorrect; this is a mistake made by the previous programmer.
I could use EXAMINE TRANSLATE, but the data also contains English firm names, and translating those as well would produce wrong data.
The only option seems to be to manually fix the fields that contain English names.
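To illustrate the problem (a minimal sketch; the field name and the two-pair translation table are hypothetical and heavily abbreviated), a straight back-translation maps every Latin letter, including the ones that legitimately belong to an English firm name:

  * Hypothetical sketch: back-translating Latin look-alikes to Cyrillic.
  DEFINE DATA LOCAL
  1 #FIELD (A60)
  1 #TAB   (A4)  /* pairs of from/to characters; only two pairs shown
  END-DEFINE
  *
  #TAB   := 'AАBВ'               /* Latin A -> Cyrillic А, Latin B -> Cyrillic В
  #FIELD := 'SIEMENS / BEKTOP'   /* 'BEKTOP' is the mangled form of 'ВЕКТОР'
  EXAMINE #FIELD TRANSLATE USING #TAB
  * With a full table, 'SIEMENS' gets mangled too: the table
  * cannot tell an English name from a transliterated Russian one.
  END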
First, the long-term goal is clearly to end up with only Russian characters and no English characters.
If the existing file is relatively small, it may be best to simply remap the bad data.
However, if you are dealing with a large file, you might want to consider something along the following lines. I presume the search criteria you mention are in Russian. Can you simply run the search criteria through the same process the data goes through? This would yield English characters as your new search criteria.
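In Natural terms that suggestion amounts to something like the following sketch (the field name and the abbreviated translation table are assumptions): mangle the Cyrillic search term exactly the way the data was mangled, then compare.

  * Hypothetical sketch: run the search term through the same mapping as the data.
  DEFINE DATA LOCAL
  1 #SEARCH (A30)
  1 #TAB    (A4)  /* Cyrillic->Latin pairs, abbreviated; same direction as the export
  END-DEFINE
  *
  #TAB    := 'АAВB'              /* Cyrillic А -> Latin A, Cyrillic В -> Latin B
  #SEARCH := 'ВЕКТОР'
  EXAMINE #SEARCH TRANSLATE USING #TAB
  * #SEARCH now holds the Latin look-alikes and can be compared
  * directly against the data already written to the file.
  END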
I have no problems exporting tables that contain only Russian data; in that case I use EXAMINE TRANSLATE.
But there is a table whose fields contain Russian and English data at the same time. EXAMINE TRANSLATE does not fit there, because the English data would be converted too.
The task is to unload the data to a text file and then import the data from that file into another database.
I have studied the loading of data into the database thoroughly. Maybe you are right, and the translation applied during unloading can simply be inverted during loading.
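If that is the route, one option (again a sketch under the same assumptions; the work file and table contents are hypothetical) is to reuse the export's translation table in the opposite direction while reading the records back, which Natural supports via the INVERTED keyword:

  * Hypothetical sketch: invert the mapping while loading the records.
  DEFINE DATA LOCAL
  1 #REC (A60)
  1 #TAB (A4)  /* the same Cyrillic->Latin pairs used on export, abbreviated
  END-DEFINE
  *
  #TAB := 'АAВB'
  DEFINE WORK FILE 1 'export.txt'
  READ WORK FILE 1 #REC
    /* INVERTED applies each pair right-to-left: Latin back to Cyrillic
    EXAMINE #REC TRANSLATE USING INVERTED #TAB
    /* ... store #REC into the target database here
  END-WORK
  END

Note that this has the same blind spot as before: genuine English names in mixed fields would be converted as well, which is why those fields may still need manual cleanup.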