Problem

Working on what could often be called "medium data" projects, I've been able to parallelize my code (mostly for modeling and prediction in Python) on a single system across anywhere from 4 to 32 cores. Now I'm looking at scaling up to clusters on EC2 (probably with StarCluster/IPython, but open to other suggestions as well), and have been puzzled by how to reconcile distributing work across cores on an instance vs. instances on a cluster.

Is it even practical to parallelize across instances as well as across cores on each instance? If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?

Bandwidth and RAM are non-trivial concerns in my projects, but it's easy to spot when those are the bottlenecks and readjust. It's much harder, I'd imagine, to benchmark the right mix of cores to instances without repeated testing, and my projects vary too much for any single test to apply to all circumstances. Thanks in advance, and if I've just failed to google this one properly, feel free to point me to the right answer somewhere else!

Solution

When using IPython, you very nearly don't have to worry about it (at the expense of some loss of efficiency/greater communication overhead). The parallel IPython plugin in StarCluster will by default start one engine per physical core on each node (I believe this is configurable but not sure where). You just run whatever you want across all engines by using the DirectView api (map_sync, apply_sync, ...) or the %px magic commands. If you are already using IPython in parallel on one machine, using it on a cluster is no different.
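
As a concrete illustration, here is a minimal sketch of the DirectView workflow, assuming the StarCluster plugin has already started the controller and engines, and with fit_and_score as a hypothetical stand-in for your own per-task work:

```python
# Minimal sketch: run the same function across every engine in the cluster.
# Assumes the IPython controller/engines were started by the StarCluster plugin
# and that a valid connection profile is available on this machine.
from IPython.parallel import Client   # `from ipyparallel import Client` in newer IPython

rc = Client()      # connect to the controller
dview = rc[:]      # DirectView over all engines (one per core, on every instance)

def fit_and_score(seed):
    # hypothetical stand-in for your modeling/prediction work
    import random
    random.seed(seed)
    return sum(random.random() for _ in range(100000))

# map_sync blocks until every engine returns; you never specify which
# core or which instance a given task lands on.
results = dview.map_sync(fit_and_score, range(len(rc.ids)))
print(results)
```

The %px magics give you the same reach interactively (e.g. %px import numpy runs on every engine at once).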

Addressing some of your specific questions:

"how to reconcile distributing work across cores on an instance vs. instances on a cluster" - You get one engine per core (at least); work is automatically distributed across all cores and across all instances.

"Is it even practical to parallelize across instances as well as across cores on each instance?" - Yes :) If the code you are running is embarrassingly parallel (exact same algo on multiple data sets) then you can mostly ignore where a particular engine is running. If the core requires a lot of communication between engines, then of course you need to structure it so that engines primarily communicate with other engines on the same physical machine; but that kind of problem is not ideally suited for IPython, I think.

"If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?" - Use the largest c3 instances for compute-bound, and the smallest for memory-bandwidth-bound problems; for message-passing-bound problems, also use the largest instances but try to partition the problem so that each partition runs on one physical machine and most message passing is within the same partition. Problems which would run significantly slower on N quadruple c3 instances than on 2N double c3 are rare (an artificial example may be running multiple simple filters on a large number of images, where you go through all images for each filter rather than all filters for the same image). Using largest instances is a good rule of thumb.

Other Tips

A general rule of thumb is to not distribute until you have to. It's usually more efficient to have N servers of a certain capacity than 2N servers of half that capacity. More of the data access will be local, and therefore fast in memory versus slow across the network.

At a certain point, scaling up one machine becomes uneconomical because the cost of additional resource scales more than linearly. However this point is still amazingly high.

On Amazon in particular, though, the economics of each instance type can vary a lot if you are using spot market instances. The default on-demand pricing more or less means that the same amount of resources costs about the same regardless of instance type; spot prices, however, can vary a lot: large instances can be cheaper than small ones, or N small instances can be much cheaper than one large machine with equivalent resources.

One massive consideration here is that the computation paradigm can change quite a lot when you move from one machine to multiple machines. The tradeoffs that communication overhead induces may force you to, for example, adopt a data-parallel paradigm to scale. That means a different choice of tools and algorithms. For example, SGD looks quite different in-memory and in Python than on MapReduce. So you would have to consider this before parallelizing.
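
As a toy, single-process illustration of that shift, here is a sketch of the parameter-averaging flavour of data-parallel SGD; the shards stand in for workers, and this is just one possible scheme rather than the way distributed SGD must look:

```python
# Toy sketch: each "worker" runs SGD on its own shard, then the models are averaged.
# This is one simple data-parallel scheme, not the only way to distribute SGD.
import numpy as np

def local_sgd(X, y, lr=0.01, epochs=5):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * (xi.dot(w) - yi) * xi   # squared-loss gradient step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

shards = np.array_split(np.arange(1000), 4)                  # pretend: one shard per worker
local_models = [local_sgd(X[idx], y[idx]) for idx in shards]  # "map" step on each worker
w_avg = np.mean(local_models, axis=0)                         # "reduce" step: average the models
```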

You may choose to distribute work across a cluster, even if a single node and non-distributed paradigms work for you, for reliability. If a single node fails, you lose all of the computation; a distributed computation can potentially recover and complete just the part of the computation that was lost.

All other things being equal (cost, CPU performance, etc.), you could choose the smallest instance that can hold your entire dataset in memory and scale out. That way

  • you make sure not to induce unnecessary latencies due to network communications, and
  • you tend to maximize the overall available memory bandwidth for your processes.

Assuming you are running some sort of cross-validation scheme to optimize some meta-parameter of your model, assign each core a value to test and choose as many instances as needed to cover the whole parameter space in as few rounds as you see fit.
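
A sketch of that pattern, with evaluate_param as a hypothetical stand-in for your own cross-validated scoring code:

```python
# Sketch: one hyperparameter value per engine per round.
from IPython.parallel import Client

rc = Client()
dview = rc[:]

def evaluate_param(alpha):
    # hypothetical placeholder: fit your model with regularization strength
    # `alpha` and return its cross-validated score
    return {"alpha": alpha, "score": -alpha ** 2}

alphas = [10 ** e for e in range(-4, 3)]            # the grid to cover
scores = dview.map_sync(evaluate_param, alphas)     # spread over all cores and instances
best = max(scores, key=lambda r: r["score"])
```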

If your data does not fit in the memory of one system, of course you'll need to distribute across instances. Then it is a matter of balancing memory latency (better with many instances) against network latency (better with fewer instances), but given the nature of EC2 I'd bet you'll often prefer to work with a few fat instances.

License: CC-BY-SA with attribution