Question

What is a best practice for working around the SSAS 2 billion distinct value limitation on a column? My data set grows by 2 billion rows every 10 months, and one of the measures in the cube is a row count that runs on the PK. Since adding partitions does not resolve the problem, would creating new cubes with identical info be the right approach?


Solution

Do you have to run the count on a column? That should only be necessary if you have duplicates in the table. Could you not just count rows instead? Then you can omit the primary key column entirely.
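A minimal sketch of the reasoning, with assumed table and column names (pandas and "order_id" are illustrative stand-ins, not from the question): when the PK is unique, a plain row count always equals a distinct count on the PK column, so the expensive distinct-count measure can be replaced by a simple row count that never touches the column.

```python
# Minimal sketch (assumed table/column names) showing why a plain row
# count can replace a distinct count on a unique primary key.
import pandas as pd

# Hypothetical fact table: "order_id" stands in for the PK column.
fact = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5],        # unique PK values
    "amount":   [10.0, 25.5, 3.2, 8.9, 14.0],
})

distinct_count = fact["order_id"].nunique()  # what the current measure computes
row_count = len(fact)                        # the cheaper alternative

# With a unique PK the two are always equal, so the cube measure can
# count rows instead of distinct PK values and the PK column can be
# dropped from the measure group entirely.
assert distinct_count == row_count
print(distinct_count, row_count)
```

In SSAS terms this roughly corresponds to defining the measure with a row-count aggregation on the measure group instead of a distinct count bound to the PK column, so the engine never has to materialize billions of distinct key values.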
