(I only just noticed that you added the requested additional info, hence this rather late reply)
The problem is, as I suspected, that you are not using transactions to batch your update operations together. Effectively, each add operation you do becomes a single transaction (a Sesame repository connection by default runs in autocommit mode), and this is slow and inefficient.
To change this, start a transaction (using RepositoryConnection.begin()), then add your data, and finally call RepositoryConnection.commit() to finalize the transaction.
Here's how you should modify your first code example:
Repository myRepository = new HTTPRepository(serverURL, repositoryId);
myRepository.initialize();
RepositoryConnection con = myRepository.getConnection();
ValueFactory f = myRepository.getValueFactory();
int i = 0;
int j = 1000000;

try {
    con.begin(); // start the transaction
    while (i < j) {
        URI event = f.createURI(ontologyIRI + "event" + i);
        URI hasTimeStamp = f.createURI(ontologyIRI + "hasTimeStamp");
        Literal timestamp = f.createLiteral(fields.get(0));
        con.add(event, hasTimeStamp, timestamp);
        i++;
    }
    con.commit(); // finish the transaction: commit all our adds in one go
}
finally {
    // always close the connection when you're done with it
    con.close();
}
The same applies to your code with the SPARQL update. For more information on how to work with transactions, have a look at the Sesame manual, particularly the chapter about using the Repository API.
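For the SPARQL update variant, the pattern is the same: wrap the update execution in begin()/commit(). Here is a minimal sketch of that idea. The IRI prefix and timestamp value are hypothetical placeholders, and the actual Sesame calls are shown as comments since they need a live repository connection; the helper just builds a batched INSERT DATA string so that the whole batch goes to the server as one update.

```java
public class SparqlUpdateSketch {

    // Build one SPARQL INSERT DATA block covering a whole batch of events.
    // The IRI prefix and timestamp are illustrative placeholders.
    static String buildInsert(String ontologyIRI, int count, String timestamp) {
        StringBuilder sb = new StringBuilder("INSERT DATA {\n");
        for (int i = 0; i < count; i++) {
            sb.append("  <").append(ontologyIRI).append("event").append(i).append("> ")
              .append("<").append(ontologyIRI).append("hasTimeStamp").append("> ")
              .append("\"").append(timestamp).append("\" .\n");
        }
        sb.append("}");
        return sb.toString();
    }

    public static void main(String[] args) {
        String update = buildInsert("http://example.org/onto#", 2, "2013-01-01T00:00:00");
        // In real code, execute the update inside a single transaction:
        // con.begin();
        // con.prepareUpdate(QueryLanguage.SPARQL, update).execute();
        // con.commit();
        System.out.println(update);
    }
}
```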
As an aside: since you're working over HTTP, there is a risk that if your transaction becomes too large, it will start consuming a lot of memory in your client. If this starts happening you may want to break up your update into several transactions. But with an update consisting of a million triples you should still be alright, I think.