So what exactly do you need to measure? Since big O describes algorithm performance, the reasonable thing to measure is the time complexity of each scheduling decision, i.e. how many operations are required to match one task to one node.
Round robin
Both the list of ready nodes and the list of waiting tasks can be implemented as queues, and queue operations can be implemented in amortized constant time. So except for the occasional reallocation when a queue needs to grow, each decision runs in O(1).
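A minimal sketch of that decision step (node and task names are made up for illustration): both collections are queues, and one scheduling decision is a couple of O(1) queue operations.

```python
from collections import deque

# Cyclic queue of ready nodes and FIFO queue of waiting tasks
# (hypothetical names, just for illustration).
nodes = deque(["node-a", "node-b", "node-c"])
tasks = deque(["task-1", "task-2", "task-3", "task-4"])

def schedule_one():
    """Match one task to one node; every operation here is O(1) amortized."""
    task = tasks.popleft()   # next waiting task
    node = nodes.popleft()   # next ready node
    nodes.append(node)       # rotate the node back to the tail of the cycle
    return task, node

assignments = [schedule_one() for _ in range(4)]
# → [('task-1', 'node-a'), ('task-2', 'node-b'),
#    ('task-3', 'node-c'), ('task-4', 'node-a')]
```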
Weighted round robin
If your weights are fixed and commensurable, you can achieve the same complexity here as well: simply place each node into the cyclic queue multiple times, in proportion to its weight. You can formulate an algorithm to space the copies evenly within the queue. For most practical applications, some moderate quantization will yield weights that can be represented as (i.e. scaled to) reasonably small integers.
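One way to sketch that expansion (weights here are invented small integers): assign each copy of a node a fractional position within the cycle, then sort by position, so copies of the same node end up roughly evenly spaced. Each scheduling decision afterwards is still a plain O(1) rotation of the resulting queue.

```python
from collections import deque

# Hypothetical integer weights, already scaled as described above.
weights = {"node-a": 3, "node-b": 2, "node-c": 1}

def build_wrr_queue(weights):
    """Expand each node into `weight` copies, spaced evenly over the cycle.

    Copy k of a node with weight w is placed at fraction (k + 0.5) / w,
    so its copies are spread out instead of clustered together.
    """
    slots = []
    for node, w in weights.items():
        for k in range(w):
            slots.append(((k + 0.5) / w, node))
    slots.sort()
    return deque(node for _, node in slots)

queue = build_wrr_queue(weights)
# The queue now holds 6 entries: 3x node-a, 2x node-b, 1x node-c,
# interleaved; rotating it with popleft/append stays O(1) per decision.
```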
Other considerations
Do you get any feedback at all about whether your nodes are busy or idle? If you do, plain RR should work well enough, since you didn't state any fairness requirements. If you don't get idle feedback, then WRR might make sense. In that case, though, the number z
in your formulas seems confusing: how can tasks be waiting if you don't know whether nodes are ready to accept them? If they are waiting because the balancer can't process them fast enough, that still shouldn't matter for your analysis: you can picture the task queue as sitting in front of the scheduling algorithm, and the length of that queue has no impact on what happens inside the scheduler. Again, this assumes all tasks look the same to you, with no priorities, QoS guarantees, or similar meta information.