Question

What is the optimal scalability of an algorithm when I implement it in a distributed manner?

Intuitively, it seems to me that any algorithm can scale at most linearly with the number of computing nodes. I.e., if algorithm A takes T units of time with one computing node on input I, it cannot run faster than T/n units of time with n computing nodes on the same input I.
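To state the intuition a bit more formally (introducing $T_n$ here for the running time on $n$ nodes, so $T_1 = T$), the claim is that the speedup is at most linear, or equivalently that the parallel efficiency never exceeds 1:

$$
S(n) \;=\; \frac{T_1}{T_n} \;\le\; n
\qquad\Longleftrightarrow\qquad
E(n) \;=\; \frac{S(n)}{n} \;\le\; 1 .
$$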

Is my intuition correct, or are there some weird counterexamples to it?

No correct solution

Licensed under: CC-BY-SA with attribution