Question

Let us suppose we have a sequence of counters $C(i)$, one for each $i \in \lbrace 1, \cdots, n \rbrace$. Let $U$ be the uniform distribution over these integers, so that selecting any integer between $1$ and $n$ has equal probability. A simple process using this setup is to loop for some fixed number of iterations $M$ that, for ease, is a multiple of $n$, and in each iteration do the following:

  1. Randomly draw a constant number $k \leq n$ of integers from $U$ and place them into a set $V$
  2. Let $i^* = \min V$ and update $C(i^*) \leftarrow C(i^*) + 1$
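
For concreteness, the two steps above can be sketched as a short simulation. This is only an illustration of my reading of the process; in particular I assume the $k$ draws are made with replacement, which the description leaves open.

```python
import random

def run_process(n, k, M, seed=0):
    """Simulate the counter process: each of M iterations draws k integers
    uniformly from {1, ..., n} (with replacement -- an assumption) and
    increments the counter of the minimum draw."""
    rng = random.Random(seed)
    C = [0] * n  # C[0] holds C(1), ..., C[n-1] holds C(n)
    for _ in range(M):
        i_star = min(rng.randint(1, n) for _ in range(k))
        C[i_star - 1] += 1
    return C

def within_eps_of_mean(C, i, eps):
    """Check whether |C(i) - (1/n) sum_j C(j)| <= eps for this run."""
    mean = sum(C) / len(C)
    return abs(C[i - 1] - mean) <= eps
```

Note that exactly one counter is incremented per iteration, so $\sum_j C(j) = M$ always, and the empirical average $\frac{1}{n}\sum_j C(j) = \frac{M}{n}$ is deterministic.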

Ultimately, I want to investigate the quantity $P\left( | C(i) - \frac{1}{n}\sum_{j=1}^n C(j) | \leq \epsilon \right)$ for any $i$. The question I have is whether this simple algorithm allows, in probability, the values of $C(i)$ to remain close to their average regardless of how many iterations we do. This probability is obviously of a form where one could take advantage of, say, Hoeffding's inequality, but I am not quite sure how to use the structure of the algorithm to get to that point.
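
One possible starting point, assuming the $k$ draws are made with replacement so that the iterations are i.i.d.:

```latex
% Each iteration selects index i with probability
% P(i^* = i) = P(\min V \ge i) - P(\min V \ge i+1):
p_i \;=\; \left(\frac{n-i+1}{n}\right)^{k} - \left(\frac{n-i}{n}\right)^{k}.

% The M iterations are independent, so C(i) \sim \mathrm{Binomial}(M, p_i),
% and Hoeffding's inequality bounds the deviation from its mean M p_i:
P\bigl( \lvert C(i) - M p_i \rvert \ge t \bigr) \;\le\; 2 e^{-2t^{2}/M}.
```

Since exactly one counter is incremented per iteration, $\sum_j C(j) = M$, so the empirical average is exactly $M/n$; the bound above concentrates $C(i)$ around $M p_i$, which coincides with $M/n$ only when $p_i = 1/n$ (e.g. $k = 1$).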

I am not very well versed in randomized algorithms, so any insight into how to approach the analysis of such a simple algorithm would be appreciated.

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange