I'd start with std::vector<std::vector<double>>
for storage, unless the structure was highly static.
To produce my array-of-arrays, I'd build a std::vector<double*>
by transforming the above storage, with syntax like transform_to_vector( storage, []( std::vector<double>& v ) { return v.data(); } )
(transform_to_vector
left as an exercise to the reader).
Keeping the two in sync would then be a simple matter of wrapping them both in a small class.
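A minimal sketch of what that might look like — the transform_to_vector helper and the jagged_array class name are my own inventions here, not anything standard:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of the transform_to_vector helper mentioned above: apply f to
// each element of c and collect the results in a new std::vector.
template <typename Container, typename F>
auto transform_to_vector(Container& c, F f)
    -> std::vector<decltype(f(*c.begin()))> {
    std::vector<decltype(f(*c.begin()))> out;
    out.reserve(c.size());
    for (auto& e : c) out.push_back(f(e));
    return out;
}

// A small class keeping the storage and the pointer view in sync by
// rebuilding the view whenever the storage changes.
class jagged_array {
    std::vector<std::vector<double>> storage_;
    std::vector<double*> view_;

    void rebuild() {
        view_ = transform_to_vector(
            storage_, [](std::vector<double>& v) { return v.data(); });
    }

public:
    explicit jagged_array(std::vector<std::vector<double>> rows)
        : storage_(std::move(rows)) { rebuild(); }

    // C-compatible array-of-arrays for legacy interfaces:
    double** data() { return view_.data(); }
    std::size_t size() const { return storage_.size(); }
};
```

Any operation that can reallocate a row (resize, push_back) must be routed through the class so rebuild() runs afterwards, since the cached double* values are invalidated by reallocation.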
If the jagged array is relatively fixed in size, I'd take a std::vector<std::size_t>
to create my buffer (or maybe a std::initializer_list<std::size_t>
-- actually, a template<typename Container>
, and I'd just for( : )
over it twice, letting the caller pick whatever container it provides), then create a single std::vector<double>
sized to the sum of the sizes, then build a std::vector<double*>
at the dictated offsets.
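A sketch of that single-buffer construction, under the assumptions above (the flat_jagged name is hypothetical; the caller supplies any container of row sizes, which we walk twice — once to total the sizes, once to lay down the offsets):

```cpp
#include <cstddef>
#include <vector>

// One contiguous buffer plus a pointer-per-row view at fixed offsets.
struct flat_jagged {
    std::vector<double> buffer;  // single allocation for all rows
    std::vector<double*> rows;   // row starts, at the dictated offsets

    template <typename Container>
    explicit flat_jagged(const Container& sizes) {
        std::size_t total = 0;
        for (std::size_t n : sizes) total += n;  // first pass: sum sizes
        buffer.resize(total);
        rows.reserve(buffer.empty() ? 0 : sizes.size());
        double* p = buffer.data();
        for (std::size_t n : sizes) {            // second pass: offsets
            rows.push_back(p);
            p += n;
        }
    }
};
```

Usage would be flat_jagged f(std::vector<std::size_t>{2, 3, 1}); after which f.rows.data() is a double** suitable for a C interface.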
Resizing this gets expensive, which is a disadvantage.
A nice property of using std::vector
is that newer APIs have full access to the pretty begin
and end
values. If you have a single large buffer, you can expose a range view of the sub-arrays to new code (a structure containing a double* begin()
and double* end()
, and while we are at it a double& operator[]
and a std::size_t size() const { return end()-begin(); }
), so they can bask in the glory of full-on C++ container-style views while keeping C compatibility for legacy interfaces.
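That view structure might be sketched like so (row_view is a name I'm making up; it is non-owning, so the underlying buffer must outlive it):

```cpp
#include <cstddef>

// A non-owning range over one sub-array of the big buffer: usable in
// range-for and standard algorithms, while begin() still yields a plain
// double* for C interfaces.
struct row_view {
    double* first;
    double* last;

    double* begin() const { return first; }
    double* end() const { return last; }
    double& operator[](std::size_t i) const { return first[i]; }
    std::size_t size() const {
        return static_cast<std::size_t>(last - first);
    }
};
```

Given the std::vector<double*> of row starts, the view for row i spans rows[i] to rows[i] plus that row's size; since begin() and end() are raw pointers, the same object serves both range-for loops and legacy pointer-based APIs.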