I undefined a schema, thereby also removing all instances. The data under this schema amounted to several GBs, so I expected a comparable amount of disk space to be released. But the disk usage did not change at all. Why? I also tried removing the whole collection; still the same situation. Again, why? Doesn't Tamino ever release disk space? If not, is there any way to detect unused data files and remove them?
Best regards,
Gopal
PS. Environment: Tamino v4.2.1.8 on Red Hat AS v2.1.
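One way to see what the OS is actually counting is to list the sizes of the database container files on disk. This is a small sketch, not a Tamino tool; the path in the commented-out call is hypothetical, so point it at wherever your database spaces actually live.

```python
import os

def report_allocated(db_dir):
    """Sum the on-disk sizes of the files in db_dir.

    This is what OS tools like du see: the allocated container
    sizes, not the amount of live data inside the containers.
    """
    total = 0
    for name in sorted(os.listdir(db_dir)):
        path = os.path.join(db_dir, name)
        if os.path.isfile(path):
            size = os.path.getsize(path)
            total += size
            print(f"{size:>12,d}  {name}")
    print(f"{total:>12,d}  total bytes allocated")
    return total

# Hypothetical path; substitute your own database directory:
# report_allocated("/opt/tamino/databases/mydb")
```

Running this before and after deleting documents should show identical numbers, which is exactly the symptom described above: the containers stay the same size even though the data inside them is gone.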
I could not find the answer yet, but here is my guess. I think the OS disk-space statistics reflect the sum of the Tamino spaces (i.e., data space + index space + journal space, …), not the total size of the data + indices + … In other words, the OS is not aware of how much data is actually inside a Tamino database. If, for instance, I create a database with a 10 GB data space and a 3 GB index space (ignoring the other spaces for simplicity), the OS reports this database as occupying 13 GB. As long as I add or remove data without exceeding these spaces, the OS statistics will keep showing 13 GB. If I then load data beyond this, say 7 GB in excess (assuming auto-expansion is on), the OS statistics will show about 20 GB, because the Tamino spaces were auto-expanded to fit the additional data. If I now remove some data, the OS will still show 20 GB, because the Tamino spaces never shrink; they only auto-expand when necessary.
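The guess above can be sketched as a toy model (my own illustration of the accounting, not Tamino internals): allocated space grows when data exceeds it, but never shrinks when data is removed.

```python
class ToySpace:
    """Toy model of a database space that auto-expands but never shrinks."""

    def __init__(self, allocated_gb):
        self.allocated = allocated_gb  # what the OS statistics show
        self.used = 0                  # live data inside the space

    def load(self, gb):
        self.used += gb
        if self.used > self.allocated:
            # Auto-expansion: allocation grows to fit the data.
            self.allocated = self.used

    def remove(self, gb):
        # Removing data frees space inside the containers,
        # but the allocation (what the OS sees) stays put.
        self.used = max(0, self.used - gb)

space = ToySpace(allocated_gb=13)  # 10 GB data space + 3 GB index space
space.load(20)                     # 7 GB beyond the initial allocation
print(space.allocated)             # 20 -- OS statistics grew
space.remove(15)
print(space.allocated)             # still 20 -- removal frees nothing on disk
```

This matches the observed behaviour: loading can make the database files bigger, but deleting documents or collections never makes them smaller.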
The way I get rid of over-allocated data and/or index spaces is to make a backup and then use "Create from backup" to build a new database with smaller space sizes.
If the new DB is created successfully, you can delete the old DB and rename the new DB to the "old" name.
Unload and reload would of course achieve the same, but loading may take some time…