Question

Let's say that I want to create a pseudorandom number generator, and I'd like to make sure that it's truly close to random. For simplicity, let's assume that I want to output a 0 or a 1 randomly.

I have heard of the monobit, runs, and poker tests, etc., but are there "machine learning" ways to evaluate a pseudorandom number generator?

As in, one could try to predict the next output given the previous k outputs, and the performance of that model would tell you how well the pseudorandom generator is performing.
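A minimal sketch of that prediction-based test, assuming Python's `random` module as the generator under test and scikit-learn's `MLPClassifier` as the predictor; both choices (and the values of `K` and `N`) are illustrative, not prescriptive:

```python
import random
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

K = 16          # number of previous bits fed to the predictor
N = 100_000     # total bits drawn from the generator under test

# Draw a bit stream from the generator we want to evaluate.
bits = np.array([random.getrandbits(1) for _ in range(N)], dtype=np.float32)

# Build (previous K bits -> next bit) training pairs with a sliding window.
X = np.array([bits[i:i + K] for i in range(N - K)])
y = bits[K:]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50)
clf.fit(X_train, y_train)

# Accuracy close to 0.5 means the predictor cannot beat coin-flipping,
# i.e. it finds no structure it can exploit in the bit stream.
print("next-bit prediction accuracy:", clf.score(X_test, y_test))
```

Note that this only shows the absence of structure that this particular model class can express; a weak generator may still be broken by a different predictor.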

It might be way over my head, but could a generative adversarial network learn a truly pseudorandom generator that way?
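A very rough sketch of the adversarial idea, assuming PyTorch: the generator maps a seed vector to a block of "soft bits" in (0, 1), and the discriminator tries to tell those blocks apart from blocks drawn from a reference RNG (NumPy here). The block size, layer widths, and optimiser settings are all illustrative choices.

```python
import numpy as np
import torch
import torch.nn as nn

BLOCK, SEED_DIM, BATCH = 64, 16, 128

gen = nn.Sequential(nn.Linear(SEED_DIM, 128), nn.ReLU(),
                    nn.Linear(128, BLOCK), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(BLOCK, 128), nn.ReLU(),
                     nn.Linear(128, 1))

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" blocks come from the reference generator we treat as random.
    real = torch.tensor(np.random.randint(0, 2, (BATCH, BLOCK)),
                        dtype=torch.float32)
    fake = gen(torch.randn(BATCH, SEED_DIM))

    # Discriminator step: separate reference blocks from generated ones.
    d_loss = (loss_fn(disc(real), torch.ones(BATCH, 1)) +
              loss_fn(disc(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: make generated blocks indistinguishable from reference ones.
    g_loss = loss_fn(disc(gen(torch.randn(BATCH, SEED_DIM))),
                     torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Fooling one trained discriminator is much weaker evidence of randomness than passing a battery of statistical tests or a cryptographic analysis; the discriminator acts as a single learned test, not a proof.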

No correct solution
