My solution to this problem was to replace successive save operations with successive modifications to a Python dictionary of dictionaries: one sub-dictionary for each row of the intended contents of the datastore. Using a dictionary of dictionaries rather than a list of dictionaries makes it easier to write to the relevant sub-dictionary, albeit with two minor annoyances:
- the unique key is duplicated: it appears both as the key into the outer dictionary and again as a value inside the corresponding sub-dictionary;
- the need to convert the dictionary of dictionaries into a list of dictionaries before saving, because (as I understand it) the ScraperWiki datastore accepts the latter structure but not the former.
NB. For a significant number of data rows, saving a list of dictionaries to the datastore in a single operation is much faster than iterating over those dictionaries and saving them one at a time (a sketch of that slower row-by-row approach follows the expected output below).
Code example:
import scraperwiki

# Build the rows as a dictionary of dictionaries, keyed by unique key,
# so that later operations can write straight to the relevant row.
superdictionary = {}
superdictionary['1'] = {"a": 1, "b": "Foo"}
superdictionary['1'].update({"c": "Bar"})  # add a further column to row '1'
superdictionary['2'] = {"a": 2, "b": "Grue", "c": "Gnu"}

# The datastore wants a list of dictionaries, so convert before saving.
superlist = []
for key in superdictionary:
    superlist.append(superdictionary[key])

# Save every row in a single operation, with column "a" as the unique key.
scraperwiki.sqlite.save(["a"], superlist)
should produce:
a  b     c
1  Foo   Bar
2  Grue  Gnu
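For contrast with the NB above, here is a minimal sketch of the slower row-by-row alternative. It produces the same table, but each row incurs its own save operation rather than being written in one batch; the data values are just the same illustrative rows as in the example above.

import scraperwiki

superdictionary = {
    '1': {"a": 1, "b": "Foo", "c": "Bar"},
    '2': {"a": 2, "b": "Grue", "c": "Gnu"},
}

# One save operation per row: same result, but each call is a
# separate write to the datastore, which is what makes this slow
# once the number of rows becomes significant.
for row in superdictionary.values():
    scraperwiki.sqlite.save(["a"], row)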