I am loading documents into Tamino in batches of 250; each document is about 3 KB. My schema has three indexes (two standard-and-text, one text-only), and the indexed text can be quite long (over 370 characters). Average load time grows from 126 ms for the 1st batch to 1416 ms for the 64th batch. My code is below. Can I gain some speed by using longer transactions and committing only after every 10 or 20 documents? Any other ideas?
String uri = "http://localhost/tamino/dbname";
TConnection conn = TConnectionFactory.getInstance().newConnection(uri);
conn.setLockwaitMode(TLockwaitMode.YES);
TAccessLocation tal = TAccessLocation.newInstance("collection");
TXMLObjectAccessor xmlAccessor = conn.newXMLObjectAccessor(tal, TDOMObjectModel.getInstance());
// load newDoc with xml document here
TXMLObject obj = TXMLObject.newInstance(newDoc.toString());
try {
    xmlAccessor.insert(obj);
} catch (TInsertException e) {
    e.printStackTrace();   // TODO: real error handling
}
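
For reference, this is roughly the batched-commit variant I have in mind. It is only a sketch: I am assuming TConnection.useLocalTransactionMode() returns a TLocalTransaction whose commit()/rollback() end the current transaction (with the next insert starting a new one), and docs / COMMIT_EVERY are placeholders for my parsed batch and the commit interval.

final int COMMIT_EVERY = 20;                        // placeholder: commit after every 20 documents
TLocalTransaction tx = conn.useLocalTransactionMode();
int count = 0;
try {
    for (Document doc : docs) {                     // docs: the 250 parsed DOM documents of one batch
        TXMLObject xmlObj = TXMLObject.newInstance(doc.toString());
        xmlAccessor.insert(xmlObj);
        if (++count % COMMIT_EVERY == 0) {
            tx.commit();                            // flush this group of inserts
        }
    }
    tx.commit();                                    // commit any remaining documents
} catch (TException e) {
    tx.rollback();                                  // undo the uncommitted tail of the batch
    e.printStackTrace();
}

The idea is to avoid the per-document auto-commit overhead while keeping each transaction small enough that locks are not held for the whole 250-document batch.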