After having had some time to think about this, I have decided to report the best achieved speedup multiplied by the Pearson linear correlation coefficient.
Such a plot looks as follows:
The best achieved speedup per instance of (r,b) is weighted by how "close to linear" it is, information contained in the Pearson linear correlation coefficient. Since the latter is a value defined in [-1,1], speedups far from linear will be weighted towards 0, while negative values will indicate a slowdown, where that is expected. In the attached plot, we can see that the parallel solver does indeed show proper scalability for small values of the bandwidth, and that it gets worse as the bandwidth increases.
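For concreteness, the metric could be computed along these lines (a minimal sketch; the core counts and timings below are made-up placeholder numbers, not results from the actual runs):

```python
# Hypothetical scaling data: core counts and measured wall-clock times (seconds).
cores = [1, 2, 4, 8, 16]
times = [100.0, 52.0, 27.0, 15.0, 9.0]

# Speedup of each run relative to the single-core baseline.
speedups = [times[0] / t for t in times]

def pearson(xs, ys):
    """Pearson linear correlation coefficient between xs and ys."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlation of speedup against core count: 1 means perfectly linear
# scaling, values near 0 mean far from linear, negative means slowdown.
r = pearson(cores, speedups)

# Reported metric: best achieved speedup weighted by its linearity.
metric = max(speedups) * r
print(f"r = {r:.3f}, metric = {metric:.3f}")
```

This would be evaluated once per (r,b) instance, so each point in the plot is one such weighted best speedup.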
If you guys have any hints or corrections, please let me know ;)