First of all, it would be good general advice to double-check that you've correctly understood how LabVIEW stores data in memory, and whether any of your VIs are using more memory than they need to.
If you still need to squeeze this data into the minimum space, you could do something like:
- Instead of a 1D array of n values, use a 2D array of ceiling(n/16) x 17 U16s. Each U16 holds one bit from each of 16 of your data values, so the 17 U16s in a row together store all 17 bits of a group of 16 values.
- To read value m from the array, get the 17 U16s from row floor(m/16), extract bit (m MOD 16) from each U16, then combine those 17 bits to reconstruct the value you need.
- To write a value, get the relevant 17 U16s, replace bit (m MOD 16) of each with the corresponding bit of the new value, and replace the changed U16s in the array.
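The steps above can be sketched in Python (not LabVIEW) to make the bit-plane layout concrete. This is a hypothetical illustration, assuming the values are 17-bit unsigned integers; `pack`, `read` and `write` are names I've made up, and in LabVIEW you'd build the equivalent with array and bitwise primitives.

```python
BITS = 17    # bits per value (assumption: values fit in 17 bits)
GROUP = 16   # values per row; each U16 word holds one bit from each value

def pack(values):
    """Pack 17-bit values into rows of 17 16-bit words (one row per 16 values)."""
    rows = []
    for g in range(0, len(values), GROUP):
        group = values[g:g + GROUP]
        row = []
        for k in range(BITS):
            word = 0
            for i, v in enumerate(group):
                word |= ((v >> k) & 1) << i   # bit i of word k = bit k of value i
            row.append(word)
        rows.append(row)
    return rows

def read(rows, m):
    """Recover value m: take bit (m mod 16) from each of the row's 17 words."""
    row = rows[m // GROUP]
    bit = m % GROUP
    return sum(((row[k] >> bit) & 1) << k for k in range(BITS))

def write(rows, m, v):
    """Replace value m: overwrite bit (m mod 16) of each word with v's bits."""
    row = rows[m // GROUP]
    bit = m % GROUP
    for k in range(BITS):
        row[k] = (row[k] & ~(1 << bit)) | (((v >> k) & 1) << bit)
```

Note that each read or write touches all 17 words of a row, which is why this layout trades speed for space.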
I guess this won't be fast, but maybe you can optimise it for the particular operations you need to perform on this data.
Alternatively, could you perhaps use some sort of data compression? That would probably work best if you can organise the data into 'pages', each containing a set number of values. For example, you could take a 1D array of SGL, flatten it to a string, apply the compression to that string, and store the compressed string in a string array. I believe OpenG includes zip tools, for example.
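To illustrate the 'pages' idea outside LabVIEW, here is a hedged Python sketch: each page of single-precision (float32) values is flattened to bytes and compressed with zlib, playing the role that flatten-to-string plus the OpenG zip tools would play in a VI. The function names are my own invention.

```python
import struct
import zlib

def compress_page(values):
    """Flatten a page of SGL (float32) values to bytes, then compress."""
    raw = struct.pack('<%df' % len(values), *values)  # little-endian float32
    return zlib.compress(raw)

def decompress_page(blob):
    """Inverse: decompress, then unflatten back to a list of floats."""
    raw = zlib.decompress(blob)
    return list(struct.unpack('<%df' % (len(raw) // 4), raw))
```

How much you save depends entirely on the data: repetitive or smooth signals compress well, while noisy measurements may barely shrink at all.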