I can’t update more than ~1000 small records without causing this. Is there somewhere internally where I can adjust how much memory Tamino allocates for itself? 16GB should be enough for anything, the DB itself is only 10GB.
some DB info:
Index space - 1482.08 MB of total 3258.53 MB
Data space - 9336.66 MB of total 27000.00 MB
Journal space - 2.93 GB
Is this a “one-time thing”, or do you plan to run mass updates like this in general?
If it’s only a one-time task, perhaps you could limit the number of updates per run with a [position() < 1000] filter?
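As a rough sketch of what I mean (the collection, element names, and the exact update syntax here are from memory and may need adjusting for your schema and Tamino version), a batched update with such a positional filter could look something like:

```
update
  for $n in input()/notification[status = "pending"][position() < 1000]
  do replace $n/status with <status>sent</status>
```

Each run touches at most 999 documents, so the per-request document cache stays within bounds; you just repeat the query until it matches nothing.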
It looks to me like you’ve hit the XQuery document cache limit … see the documentation below, including the warning regarding parallel updates: the more that is cached, the more that is locked!
XQuery document cache size
This parameter defines the capacity of the document cache that is used for XQuery processing, per request. Document caching improves the performance of certain queries, such as join queries or sort-by queries where the sorting is not done via an index. Certain XQuery update statements can also benefit from a large document cache. The capacity of the XQuery document cache is specified in MB; the default is 20 MB. For applications with a high parallel query load it may be necessary to reduce this parameter.
It’s a constant thing. Tamino is the DB for an enterprise system we run. One of the things stored in it is email notifications, and I need to mark them all as “sent” en masse to ensure they don’t get sent out. I also need to disable all of our users en masse in the testing environments, since every time we restore a testing DB from our production environment we must change this information.
It is frustrating to have to break the work up and run it incrementally instead of being able to kick it off and walk away to do something else until it finishes. It kills my ability to automate much of anything. And I can’t find Software AG’s PHP API on this site anymore, which isn’t helping either.
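For what it’s worth, the batching can still be automated: because the positional filter always selects from whatever still matches the predicate, re-running the same update until it modifies zero documents converges on its own. A sketch for the user-disabling case (element names and the predicate are hypothetical, and the update syntax should be checked against your Tamino version’s XQuery documentation):

```
update
  for $u in input()/user[status = "enabled"][position() < 1000]
  do replace $u/status with <status>disabled</status>
```

A driver script can issue this repeatedly through Tamino’s HTTP interface (the `_xquery` request parameter, if I remember it correctly) and stop once a run reports no updated documents, so you still get the “set it and walk away” behavior without blowing the per-request document cache.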