About byte[] aka BLOB in greenDAO:
Looking at de.greenrobot.dao.query.WhereCondition.PropertyCondition.checkValueForType, conditions on byte[] are not supported at the moment, because the following lines always throw an exception if value is of type byte[]:
if (value != null && value.getClass().isArray()) {
    throw new DaoException("Illegal value: found array, but simple object required");
}
Solution 1 - modify and contribute to greenDAO:
You could modify the above lines so that the exception is only thrown if the type of value and the type of the Property don't match:
if (value != null) {
    if (value.getClass().isArray() && !property.type.isArray()) {
        throw new DaoException("Illegal value: found array, but simple object required");
    }
    if (!value.getClass().isArray() && property.type.isArray()) {
        throw new DaoException("Illegal value: found simple object, but array required");
    }
}
Maybe this already solves the problem, but other parts of greenDAO may stop working with this edit or may break the query. For example, binding parameters to queries may not work with arrays.
Solution 2 - use queryRaw(String where, String... selectionArg)
This is pretty straightforward and shouldn't be a problem with some knowledge of SQLite.
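As a sketch of this approach: since a byte[] cannot be bound as a selection argument, one option is to embed the blob as a SQLite X'…' literal directly in the raw WHERE clause. The helper class below and the Orig/origDao names in the usage comment are hypothetical:

```java
public class BlobWhere {
    // Builds a SQLite blob literal like X'0AFF' from raw bytes,
    // so it can be embedded in a raw WHERE clause.
    public static String toBlobLiteral(byte[] data) {
        StringBuilder sb = new StringBuilder("X'");
        for (byte b : data) {
            sb.append(String.format("%02X", b));
        }
        return sb.append("'").toString();
    }
}

// Usage (hypothetical entity/DAO names):
// List<Orig> hits = origDao.queryRaw("WHERE UUID = " + BlobWhere.toBlobLiteral(uuid));
```

Note that string-concatenating the literal is acceptable here because the bytes are hex-encoded, so no escaping issues arise.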
Solution 3 - Using a lookup-table
Assume the original table:
ORIG
-------------------------------
UUID BLOB
...
You can modify ORIG and add an autoincrement primary key:
db.execSQL("ALTER TABLE 'ORIG' " +
        "ADD COLUMN 'REF_ID' INTEGER PRIMARY KEY AUTOINCREMENT;");
(Note: SQLite's ALTER TABLE ... ADD COLUMN cannot add a PRIMARY KEY or UNIQUE column, so in practice you may have to recreate the table with the new column instead.)
The sync service should already take care of the uniqueness of ORIG.UUID and ignore the new ORIG.REF_ID column. For inserting new UUIDs the sync service will probably use INSERT, causing a new autoincremented value in ORIG.REF_ID. For updating an existing UUID it will probably use UPDATE ... WHERE UUID=?, so no new ORIG.REF_ID value is created and the old value is kept. Summarized: the ORIG table now has a bijection between the columns REF_ID and UUID.
Now you can create another table:
ORIG_IDX
------------------------------
UUID    TEXT PRIMARY KEY
REF_ID  INTEGER UNIQUE
(If your data is no larger than 8 bytes it would also fit into an INTEGER instead of TEXT, but I don't know if there is a built-in cast/conversion from BLOB to INTEGER.)
ORIG_IDX.UUID will be the string representation of ORIG.UUID. ORIG_IDX.REF_ID is a foreign key for ORIG.REF_ID. ORIG_IDX is filled and updated by triggers:
db.execSQL("CREATE TRIGGER T_ORIG_AI AFTER INSERT ON ORIG BEGIN " +
        "INSERT INTO ORIG_IDX (REF_ID, UUID) VALUES (NEW.REF_ID, hex(NEW.UUID)); " +
        "END;");
(hex() is SQLite's built-in BLOB-to-hex-text conversion; use whatever string representation fits your data.)
Create corresponding triggers for UPDATE and DELETE.
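A sketch of what those triggers could look like, again using SQLite's built-in hex() for the string representation (the trigger names and the REF_ID join column follow the hypothetical schema above):

```java
public class OrigIdxTriggers {
    // AFTER UPDATE: keep the string UUID in the lookup table in sync
    // with the blob UUID in the original table.
    static final String UPDATE_TRIGGER =
            "CREATE TRIGGER T_ORIG_AU AFTER UPDATE ON ORIG BEGIN "
            + "UPDATE ORIG_IDX SET UUID = hex(NEW.UUID) WHERE REF_ID = NEW.REF_ID; "
            + "END;";

    // AFTER DELETE: drop the lookup row together with the original row.
    static final String DELETE_TRIGGER =
            "CREATE TRIGGER T_ORIG_AD AFTER DELETE ON ORIG BEGIN "
            + "DELETE FROM ORIG_IDX WHERE REF_ID = OLD.REF_ID; "
            + "END;";
}

// Usage:
// db.execSQL(OrigIdxTriggers.UPDATE_TRIGGER);
// db.execSQL(OrigIdxTriggers.DELETE_TRIGGER);
```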
You can create the tables ORIG and ORIG_IDX using greenDAO and then query a requested UUID with:
public Orig getOrig(String uuid) {
    OrigIdx origIdx = origIdxDao.queryBuilder()
            .where(OrigIdxDao.Properties.UUID.eq(uuid))
            .unique();
    if (origIdx != null) {
        return origIdx.getOrig();
    }
    return null;
}
I think String primary keys are not supported yet, so dao.load(uuid) won't be available.
CONCERNING AN EXTENDED TABLE:
You could use a String primary-key column and provide conversion methods in the keep-sections of your entity. You will have to compute the primary-key column before you do an insert.
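A minimal sketch of such conversion methods for the keep-section, assuming the TEXT primary key is the hex representation of the UUID bytes (class and method names are hypothetical):

```java
public class UuidCodec {
    // Converts the blob UUID to the String used as primary key,
    // e.g. {0x0a, 0xff} -> "0aff". Call this before every insert.
    public static String toKey(byte[] uuid) {
        StringBuilder sb = new StringBuilder(uuid.length * 2);
        for (byte b : uuid) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Converts the String primary key back to the original bytes.
    public static byte[] fromKey(String key) {
        byte[] out = new byte[key.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(key.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}
```

The hex form matches what SQLite's hex() produces (apart from case), which keeps the Java-side key consistent with trigger-computed values if you combine this with Solution 3.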
If other tools insert data (for example your sync service), you would have to use a trigger to compute the primary key before the insert happens. This doesn't seem to be possible in SQLite, because a BEFORE INSERT trigger cannot modify the NEW row. Thus the primary-key constraint will fail on inserts by the sync service, so this solution will not work with a primary key!