This is based on mik's answer, but I found a hole in it: when you append more than one chunk of hex and call HEXTORAW in each append line, an extra 0 hex digit gets prepended to every odd-length chunk (HEXTORAW pads odd-length input with a leading zero to make whole bytes). When you pull that hex back out of the database and compare it to what you thought you were putting in, you see this. If the hex was an image and you bind those image bytes to an Image.Source, the zero is ignored when there is only one appended chunk, but with multiple chunks it introduces an extra byte per chunk, corrupts your data, and you can't display the image. I imagine the same happens with regular files and other data you want to upload.
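To see why per-chunk conversion corrupts the data, here is a small C# sketch (values and class name are mine, purely for illustration) that mimics HEXTORAW's left-padding of an odd-length hex string and shows how padding each chunk separately produces different bytes than converting the whole string at once:

```csharp
using System;

class HexPadDemo
{
    // Mimics HEXTORAW's handling of odd-length input: left-pad with '0'
    public static string PadLikeHexToRaw(string hex) =>
        hex.Length % 2 == 0 ? hex : "0" + hex;

    static void Main()
    {
        string fullHex = "ABCDEF";              // 3 bytes: AB CD EF
        // Split into two odd-length chunks and convert each separately
        string chunk1 = PadLikeHexToRaw("ABC"); // becomes "0ABC"
        string chunk2 = PadLikeHexToRaw("DEF"); // becomes "0DEF"
        string perChunk = chunk1 + chunk2;      // "0ABC0DEF": 4 bytes, corrupted

        Console.WriteLine(fullHex);             // ABCDEF
        Console.WriteLine(perChunk);            // 0ABC0DEF
    }
}
```

The two results decode to different byte sequences, which is exactly the corruption described above.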
Instead, I appended all my hex to a CLOB, which keeps it as one string of hex and has the same 4 GB limit that a BLOB field does. That way only this single, uncorrupted string gets written to the BLOB as RAW, even when the hex string is greater than the 32,767-character/byte limit:
DECLARE
  buf  BLOB;
  cBuf CLOB;
BEGIN
  dbms_lob.createtemporary(buf, FALSE);
  dbms_lob.createtemporary(cBuf, FALSE);
  dbms_lob.append(cBuf, '0EC1D7FA6B411DA5814');
  -- ...lots of hex data...
  dbms_lob.append(cBuf, '0EC1D7FA6B411DA5814');
  -- now we append the CLOB of hex to the BLOB as RAW
  dbms_lob.append(buf, HEXTORAW(cBuf));
  UPDATE MyTable
     SET blobData = buf
   WHERE ID = 123;
END;
My scenario: I was using SQLite as essentially a backup database, but I still needed a way to keep Oracle (my main database) in sync with uploaded documents once a connection to it could be re-established.
As a more complete answer, here is how to build this SQL programmatically, since I did so in my app. My C# application converts the bytes of a file to hex, builds the above SQL in a string variable, and writes it to a file; a service later uses that file to update Oracle when the connection returns. This is how I broke up my hex and fed it into the SQL string and file (and later, Oracle):
// This is all staged so someone can see how you might go from file
// to bytes to hex
string filePath = txtFilePath.Text; // example of getting file path after
// OpenFileDialog places ofd.FileName in a textbox called txtFilePath
byte[] byteArray = File.ReadAllBytes(filePath);
string hexString = getHexFromBytes(byteArray); // Google: bytes to hex

// Here is the meat...
if (hexString.Length > 0)
{
    string sqlForOracle = "DECLARE buf BLOB; " +
        "cBuf CLOB; " +
        "BEGIN " +
        "dbms_lob.createtemporary(buf, FALSE); " +
        "dbms_lob.createtemporary(cBuf, FALSE); " +
        "dbms_lob.open(buf, dbms_lob.lob_readwrite); ";
    int chunkSize = 32766; // even, so no chunk gets zero-padded by HEXTORAW
    if (hexString.Length > chunkSize)
    {
        sqlForOracle += "dbms_lob.open(cBuf, dbms_lob.lob_readwrite); ";
        int startIdx = 0;
        decimal hexChunks = decimal.Divide(hexString.Length, chunkSize);
        for (int i = 0; i < hexChunks; i++)
        {
            int remainingHex = hexString.Length - (i * chunkSize);
            if (remainingHex > chunkSize)
                sqlForOracle += "dbms_lob.append(cBuf, '" + hexString.Substring(startIdx, chunkSize) + "'); ";
            else
                sqlForOracle += "dbms_lob.append(cBuf, '" + hexString.Substring(startIdx, remainingHex) + "'); ";
            startIdx = startIdx + chunkSize;
        }
        sqlForOracle += "dbms_lob.close(cBuf); ";
        // Now we append the CLOB to the BLOB
        sqlForOracle += "dbms_lob.append(buf, HEXTORAW(cBuf)); ";
    }
    else // write it straight to the BLOB, as we are below our chunk limit
    {
        sqlForOracle += "dbms_lob.append(buf, HEXTORAW('" + hexString + "')); ";
    }
    sqlForOracle += "dbms_lob.close(buf); ";
    sqlForOracle += "UPDATE MyTable SET blobData = buf WHERE ID = 123; END;";
}
sqlForOracle is later written to a file using FileStream and StreamWriter, and the service sees if the file exists, reads it in, and updates Oracle with it.
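The getHexFromBytes helper referenced above isn't shown; here is a minimal sketch of what it could look like (the method name is mine, chosen only to match the call in the snippet):

```csharp
using System;
using System.Text;

class HexHelper
{
    // Two upper-case hex characters per byte, matching the chunks above
    public static string getHexFromBytes(byte[] bytes)
    {
        var sb = new StringBuilder(bytes.Length * 2);
        foreach (byte b in bytes)
            sb.Append(b.ToString("X2"));
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(getHexFromBytes(new byte[] { 0x0E, 0xC1, 0xD7 })); // 0EC1D7
    }
}
```

On .NET 5 and later you could also just use the built-in Convert.ToHexString(byteArray) instead.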
UPDATES
Mik's answer is actually fine as-is if you use an even number of hex characters per chunk, so mine introduces an unnecessary extra step when you don't need odd-sized chunks. A larger file (it would have to rival your RAM, though) would also take an unnecessary performance hit, since it gets written to memory twice (CLOB, then BLOB) before conversion, so take heed. I still wanted the C# to show how the chunks get broken up and how the SQL actually gets written programmatically. If you want to use only buf, simply replace all the cBuf variables with buf, except you only need one dbms_lob.createtemporary() statement, and obviously only one pair of .open() and .close() calls.
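Applying that substitution, the buf-only version of the anonymous block would look something like this (a sketch using the same hypothetical table and shortened, even-length hex chunks):

```sql
DECLARE
  buf BLOB;
BEGIN
  dbms_lob.createtemporary(buf, FALSE);
  dbms_lob.open(buf, dbms_lob.lob_readwrite);
  -- each chunk must be an even number of hex characters
  dbms_lob.append(buf, HEXTORAW('0EC1D7FA'));
  -- ...more even-length chunks...
  dbms_lob.append(buf, HEXTORAW('6B411DA5'));
  dbms_lob.close(buf);
  UPDATE MyTable
     SET blobData = buf
   WHERE ID = 123;
END;
```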
And about those open/close calls: I also read an "Ask Tom" thread on Oracle.com saying that wrapping your LOB appends in dbms_lob.open() and .close() is optional, but it becomes beneficial to performance once the number of appends exceeds 2000 (2000 * 32766 = 65.532 MB), where leaving them out takes almost double the time (178.19%) to complete, and it only gets worse from there. Of course, whether this is actually useful to you depends on the file sizes being dealt with. I added them in, above.