If you are trying to solve a problem on a single computer, I do not think Spark is the practical choice. The point of Spark is to distribute computation across multiple machines, especially when the data does not fit on a single machine.
That said, you can set `spark.executor.memory` to `20g` to give the JVM a 20 GB heap of virtual memory. Once physical memory is exhausted, the operating system will start swapping. If you have enough swap configured, you will be able to use the full 20 GB, but the process will most likely slow to a crawl once it starts swapping.