Using machine learning to evaluate a random number generator
31-10-2019
Question
Let's say that I want to create a pseudorandom number generator, and I'd like to verify that its output is close to truly random. For simplicity, let's assume that I want to output a 0 or a 1 randomly.
I have heard of the monobit, runs, and poker tests, etc., but are there "machine learning" ways to evaluate a pseudorandom number generator?
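For reference, a minimal sketch of the classical monobit (frequency) test mentioned above, following the NIST SP 800-22 formulation: map the bits to ±1, sum them, and compute a p-value from the complementary error function. A p-value near 0 means the 0/1 balance is unlikely for a fair coin.

```python
import math

def monobit_pvalue(bits):
    """NIST-style monobit test: small p-value => 0/1 counts are
    too unbalanced to have come from a fair coin."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)      # map 0/1 -> -1/+1 and sum
    s_obs = abs(s) / math.sqrt(n)         # normalized deviation
    return math.erfc(s_obs / math.sqrt(2))

print(monobit_pvalue([0, 1] * 5000))      # perfectly balanced: p = 1.0
print(monobit_pvalue([1] * 1000))         # all ones: p essentially 0
```

Note that this only checks the overall 0/1 balance; a sequence like `0101…` passes it perfectly, which is why the runs and poker tests exist.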
As in, one could try to predict the next number given the previous k numbers that were output, and the accuracy of that model would indicate how well the pseudorandom generator is performing.
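The prediction idea can be sketched with a very simple learner: count how often each k-bit context is followed by a 0 or a 1 on a training half of the stream, then predict the majority continuation on a held-out half. This is a minimal illustration (a k-gram table rather than a neural model, and the weak "generator" below is a made-up example); accuracy meaningfully above 50% means the generator is predictable.

```python
import random
from collections import Counter

def predictor_accuracy(bits, k=3):
    """Learn k-gram -> next-bit counts on the first half of `bits`,
    then report prediction accuracy on the second half."""
    half = len(bits) // 2
    train, test = bits[:half], bits[half:]
    counts = Counter()
    for i in range(k, len(train)):
        counts[(tuple(train[i - k:i]), train[i])] += 1
    correct = total = 0
    for i in range(k, len(test)):
        ctx = tuple(test[i - k:i])
        # predict the bit seen most often after this context (ties -> 0)
        pred = 1 if counts[(ctx, 1)] > counts[(ctx, 0)] else 0
        correct += (pred == test[i])
        total += 1
    return correct / total

random.seed(0)
good = [random.getrandbits(1) for _ in range(20000)]
# hypothetical weak generator: each bit repeats the previous one 80% of the time
bad = [0]
for _ in range(19999):
    bad.append(bad[-1] if random.random() < 0.8 else 1 - bad[-1])

print(predictor_accuracy(good))   # close to 0.5: unpredictable
print(predictor_accuracy(bad))    # close to 0.8: exploitable structure
```

The same experiment works with any classifier in place of the count table; the relevant metric is always held-out accuracy relative to the 50% baseline.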
It might be way over my head, but could a generative adversarial network learn to imitate a pseudorandom generator that way, with the discriminator's success measuring how distinguishable the output is from random?
No correct solution has been posted.
Not affiliated with datascience.stackexchange