Question

What is a best practice to work around the SSAS 2 billion distinct value limitation on a column? My data set grows by 2 billion rows every 10 months, and one of the measures in the cube is a row count that runs on the PK. Since adding partitions does not resolve the problem, would creating new cubes with identical info be the right approach?


Solution

Do you have to run the count on a column? That should only be necessary if your table contains duplicates. Could you not just count rows instead? Then you can omit the primary key column from the cube entirely.
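
To illustrate why a plain row count is enough: when the key is unique and non-null, counting rows gives the same number as counting (or distinct-counting) the key column, so the key never needs to be brought into the cube as a measure source. Below is a minimal sketch in Python with SQLite; the table name `fact_sales` and its columns are made up for illustration and stand in for your fact table.

```python
import sqlite3

# Hypothetical fact table with a unique, non-null primary key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO fact_sales (id, amount) VALUES (?, ?)",
    [(i, i * 1.5) for i in range(1, 1001)],
)

# COUNT(*) counts rows; COUNT(DISTINCT id) counts distinct key values.
row_count = conn.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0]
distinct_pk_count = conn.execute(
    "SELECT COUNT(DISTINCT id) FROM fact_sales"
).fetchone()[0]

# Because the primary key is unique, both counts agree, so a
# row-count measure can replace the count on the key column.
assert row_count == distinct_pk_count == 1000
print(row_count, distinct_pk_count)
```

In SSAS terms, this is the difference between a measure bound to the PK column and a "count of rows" measure on the fact table: the latter returns the same value without the distinct-value bookkeeping on the key attribute.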
