Many operating systems allow processes to request more virtual address space (nominally available memory) than there is physical memory and swap to back it, on the assumption that processes may not actually touch all the pages they've asked for. Famously, this "overcommit" is what makes sparse arrays practical on such systems. But as you first access each page, the CPU generates a page fault and the OS must find physical memory to back it (swapping other pages out to a swap partition or file, if one is configured). When all options are exhausted, or sometimes earlier, when the OS is dangerously close to the limit and a protective mechanism (such as Linux's OOM killer) decides it's better to kill some processes than let known-critical ones start failing, you may get an error like the one you've observed. Ultimately, there's no control over this at the C++ level. You can reserve and write all the pages up front so you'll likely fail fast rather than part-way through your processing, but even then your process may be terminated in a desperately low-memory situation.
Separately, you may be able to fit a lot more circles into memory if you store them by value (std::vector<Circle> rather than std::vector<Circle*>), avoiding a pointer plus a separate heap allocation per object. That said, you may not if sizeof(Circle) > sizeof(Circle*) and heap fragmentation is limiting you, in which case you might try a std::deque, which allocates in fixed-size chunks rather than one contiguous block. Anyway:
#include <iostream>
#include <new>      // std::bad_alloc
#include <vector>

try
{
    std::vector<Circle> array;
    array.reserve(80000000);        // one up-front allocation
    for (long long i = 0; i < 80000000; i++)
        array.emplace_back(1, i);   // construct in place; no per-object new
}
catch (const std::bad_alloc& ba)    // bad_alloc lives in namespace std
{
    std::cerr << "Memory Exhaustion\n";
}