Even though you asked not to, I have to give a big warning: There be dragons down this path!
Think of it this way: to write data that always looks the way the SQL Layer expects, you will have to re-implement the SQL Layer.
Academic demonstration follows :)
Starting table and row:
CREATE TABLE test.t(id INT NOT NULL PRIMARY KEY, str VARCHAR(32)) STORAGE_FORMAT tuple;
INSERT INTO test.t VALUES (1, 'one');
Python to read the current and add a new row:
import fdb
import fdb.tuple
fdb.api_version(200)
db = fdb.open()
# Directory for SQL Layer table 'test'.'t'
tdir = fdb.directory.open(db, ('sql', 'data', 'table', 'test', 't'))
# Read all current rows
for k, v in db[tdir.range()]:
    print fdb.tuple.unpack(k), '=>', fdb.tuple.unpack(v)
# Write (2, 'two') row
db[tdir.pack((1, 2))] = fdb.tuple.pack((2, u'two'))
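To see what those pack() calls actually produce, here is a minimal, pure-Python (3) sketch of the tuple layer encoding, covering only the two types used above; the real fdb.tuple module supports many more types and edge cases (multi-byte and negative integers, embedded NUL escaping, etc.), so treat this as illustration, not a reference implementation.

```python
# Minimal sketch of the FDB tuple layer encoding, restricted to the
# small positive integers and short unicode strings used above.
# Illustration only -- the real fdb.tuple module handles many more
# types, multi-byte/negative integers, and embedded NUL escaping.

def pack_tuple(items):
    out = b''
    for item in items:
        if isinstance(item, int) and item == 0:
            out += b'\x14'                      # zero has its own type code
        elif isinstance(item, int) and 0 < item < 256:
            out += b'\x15' + bytes([item])      # one-byte positive integer
        elif isinstance(item, str) and '\x00' not in item:
            # Unicode string: 0x02, UTF-8 bytes, 0x00 terminator
            out += b'\x02' + item.encode('utf-8') + b'\x00'
        else:
            raise TypeError('not supported in this sketch: %r' % (item,))
    return out

# Key suffix written above, minus the directory prefix: (ordinal, pk)
print(pack_tuple((1, 2)))      # b'\x15\x01\x15\x02'
# Value: the columns in declared order
print(pack_tuple((2, 'two')))  # b'\x15\x02\x02two\x00'
```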
And finally, read the data back from SQL:
test=> SELECT * FROM t;
id | str
----+-----
1 | one
2 | two
(2 rows)
What is happening here:
- Create a table with keys and values as Tuples using the STORAGE_FORMAT option
- Insert a row
- Import and open FDB
- Open the Directory of the table
- Scan all the rows and unpack for printing
- Add a new row by creating Tuples containing the expected values
The key contains three components (something like (230, 1, 1)):
- The directory prefix
- The ordinal of the table, an identifier within the SQL Layer Table Group
- The value of the PRIMARY KEY
The value contains the columns in the table, in the order they were declared.
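Putting those pieces together, the raw bytes for the new row look roughly like the sketch below. The prefix bytes assume the example value 230 from above, encoded as a one-byte tuple integer; real prefixes are allocated by the directory layer and will differ per database.

```python
# Raw layout of the (2, 'two') row. The directory prefix here is a
# hypothetical stand-in for the example value 230 from the text;
# real prefixes come from the directory layer and will differ.
dir_prefix = b'\x15\xe6'             # hypothetical prefix for 230
ordinal_pk = b'\x15\x01\x15\x02'     # fdb.tuple.pack((1, 2))
full_key   = dir_prefix + ordinal_pk
row_value  = b'\x15\x02\x02two\x00'  # fdb.tuple.pack((2, u'two'))

print(full_key)   # b'\x15\xe6\x15\x01\x15\x02'
print(row_value)  # b'\x15\x02\x02two\x00'
```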
Now that we have a simple proof of concept, here are a handful of reasons why it is challenging to keep your data correct:
- Schema generation, metadata and data format versions weren't checked
- PRIMARY KEY wasn't maintained and is still in the "internal" format
- No secondary indexes to maintain
- No other tables in the Table Group to maintain (i.e. test table is a single table group)
- Online DDL was ignored, which (basically) doubles the amount of work to do during DML
It's also important to note that these cautions only apply to writing data you want to access through the SQL Layer. The inverse, reading data the SQL Layer wrote, is much easier, as it doesn't have to worry about these problems.
Hopefully that gives you a sense of the scope!