On one hand, it is true that with gsl_vector you can use the GSL BLAS routines, which is a big advantage. On the other hand, it is also true that the GSL interface is quite cumbersome for a C++ programmer, so neither solution is truly satisfactory. That said, I strongly prefer gsl_matrix, because:
(i) with some effort you can write a small wrapper class that ameliorates the cumbersome C interface of gsl_matrix (it is much harder to work around the lack of BLAS support for std::vector).
(ii) gsl_matrix is just a wrapper over a one-dimensional contiguous array, where m(i, j) = array[i*N + j] for an N×N square matrix (even when the matrix is not square, gsl_matrix still stores the data in a single one-dimensional array, using a row stride). With std::vector<gsl_vector*>, you need to allocate each gsl_vector individually, which means the memory will not be contiguous. This hurts performance, because the lack of spatial locality in the memory layout usually increases cache misses substantially.
If you are free to use a completely different solution, I would implement the tensor calculation using the StaticMatrix or DynamicMatrix classes from the Blaze library.
Why Blaze?
(i) The StaticMatrix and DynamicMatrix interfaces are much better than std::vector<gsl_vector*> or gsl_matrix.
(ii) Blaze is among the fastest BLAS-style libraries available in C++. It is faster than GSL when Intel MKL is available (remember that Intel MKL's BLAS is faster than GSL's BLAS). Why? Because Blaze uses a technique called "Smart Expression Templates". Researchers in Germany showed in a series of articles (paper 1, paper 2) that the classic "Expression Templates" technique, the standard approach in many C++ linear-algebra libraries, performs poorly for dense matrix operations (BLAS level-3 operations), because the compiler cannot outperform hand-tuned low-level kernels. However, expression templates can still serve as a smart wrapper around low-level BLAS libraries such as Intel MKL. So they created the "Smart Expression Templates" technique, which is essentially an expression-template front end over your choice of low-level BLAS backend. Their benchmarks are impressive.