
This notebook shows how to use Dask to parallelize embarrassingly parallel workloads, where you want to apply one function to many pieces of data independently. It shows three different ways of doing this with Dask: dask.delayed, the Futures API, and Dask Bags.

This example focuses on using Dask to build large embarrassingly parallel computations, as often seen in scientific communities and on High Performance Computing facilities, for example with Monte Carlo methods. This kind of simulation assumes the following:

• We have a function that runs a heavy computation given some parameters.

• We need to compute this function on many different input parameters, each function call being independent.

• We want to gather all the results in one place for further analysis.

## Start Dask Client for Dashboard

Starting the Dask Client will provide a dashboard, which is useful for gaining insight into the computation. We will also need it for the Futures API part of this example. Moreover, as this kind of computation is often launched on a supercomputer or in the cloud, you will probably end up having to start a cluster and connect a client to scale out. See dask-jobqueue, dask-kubernetes or dask-yarn for easy ways to achieve this on HPC, cloud or Big Data infrastructure, respectively.

The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. This can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.

[1]:

from dask.distributed import Client, progress

client = Client(threads_per_worker=4, n_workers=1)
client

[1]:


### Cluster

• Workers: 1
• Cores: 4
• Memory: 8.36 GB

## Define your computation calling function

This function performs a simple operation: it adds all the numbers of a list/array together, but it also sleeps for a random amount of time to simulate real work. In real use cases, this could call another Python module, or even run an executable using the subprocess module.

[2]:

import time
import random

def costly_simulation(list_param):
    time.sleep(random.random())
    return sum(list_param)


Let's try it locally below:

[3]:

%time costly_simulation([1, 2, 3, 4])

CPU times: user 11.2 ms, sys: 431 µs, total: 11.6 ms
Wall time: 629 ms

[3]:

10


## Define the set of input parameters to call the function

We will generate a set of inputs on which to run our simulation function. Here we use a pandas DataFrame, but we could also use a simple list. Let's say that our simulation is run with four parameters called param_[a-d].

[4]:

import pandas as pd
import numpy as np

input_params = pd.DataFrame(np.random.random(size=(500, 4)),
                            columns=['param_a', 'param_b', 'param_c', 'param_d'])
input_params.head()

[4]:

param_a param_b param_c param_d
0 0.573847 0.245311 0.801500 0.694358
1 0.733293 0.581941 0.195200 0.774918
2 0.418747 0.612614 0.603784 0.396713
3 0.458972 0.575935 0.286522 0.922770
4 0.350449 0.524918 0.716147 0.200630

Without using Dask, we could call our simulation on all of these parameters using normal Python for loops.

Let’s only do this on a sample of our parameters, as it would take quite a while otherwise.

[5]:

results = []

[6]:

%%time
for parameters in input_params.values[:10]:
    result = costly_simulation(parameters)
    results.append(result)

CPU times: user 73.5 ms, sys: 15.4 ms, total: 89 ms
Wall time: 4.39 s

[7]:

results

[7]:

[2.3150154931094784,
2.285352387829805,
2.0318580649088878,
2.2441991405649806,
1.7921440341124673,
2.472443893667735,
0.6422815890187654,
1.1625093270873839,
2.2953738086235167,
1.0709457806706006]


Note that this is not very clever, as we can easily parallelize this code.

There are many ways to parallelize this function in Python with libraries like multiprocessing, concurrent.futures, joblib or others. These are good first steps. Dask is a good second step, especially when you want to scale across many machines.
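As a point of comparison, here is a minimal stdlib-only sketch of the same pattern using concurrent.futures, one of the alternatives mentioned above. The function body here is a simplified stand-in for costly_simulation with the sleep dropped so the example runs instantly:

```python
# A minimal stdlib sketch of the "apply one function to many inputs"
# pattern using concurrent.futures, for comparison with Dask below.
from concurrent.futures import ThreadPoolExecutor

def costly_simulation(list_param):
    # simplified stand-in: no random sleep
    return sum(list_param)

params = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(costly_simulation, params))
print(results)  # [10, 26, 42]
```

This works well on one machine; Dask becomes interesting when you want the same pattern across many machines, plus a dashboard and resilience.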

## Use Dask Delayed to make our function lazy

We can call dask.delayed on our function to make it lazy. Rather than computing its result immediately, it records what we want to compute as a task in a graph that we’ll run later on parallel hardware. Using dask.delayed is a relatively straightforward way to parallelize an existing code base, even if the computation isn’t embarrassingly parallel like this one.

Calling these lazy functions is now almost free. In the cell below we only construct a simple graph.

[8]:

import dask
lazy_results = []

[9]:

%%time

for parameters in input_params.values[:10]:
    lazy_result = dask.delayed(costly_simulation)(parameters)
    lazy_results.append(lazy_result)

CPU times: user 1.17 ms, sys: 0 ns, total: 1.17 ms
Wall time: 885 µs

[10]:

lazy_results[0]

[10]:

Delayed('costly_simulation-f07608a7-8a6d-42a0-b66d-f932a8770964')
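To make "lazy" concrete, here is a toy, pure-Python stand-in for what dask.delayed does conceptually. This is an illustration only, not Dask's actual implementation, which builds a full task graph with keys and dependencies:

```python
# Toy illustration of laziness: wrapping a call records it as a task
# instead of running it; a separate compute step runs the recorded call.
def toy_delayed(func):
    def wrapper(*args):
        return (func, args)          # record the work, don't run it
    return wrapper

def toy_compute(task):
    func, args = task
    return func(*args)               # actually execute now

lazy = toy_delayed(sum)([1, 2, 3, 4])   # almost free: nothing computed yet
result = toy_compute(lazy)              # 10
```

Because the recorded tasks are independent, a scheduler like Dask's is free to run them on whatever parallel hardware is available.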


## Run in parallel

The lazy_results list contains information about ten calls to costly_simulation that have not yet been run. Call .compute() when you want your result as normal Python objects.

If you started Client() above then you may want to watch the status page during computation.

[11]:

%time dask.compute(*lazy_results)

CPU times: user 45.5 ms, sys: 3.73 ms, total: 49.2 ms
Wall time: 1.47 s

[11]:

(2.3150154931094784,
2.285352387829805,
2.0318580649088878,
2.2441991405649806,
1.7921440341124673,
2.472443893667735,
0.6422815890187654,
1.1625093270873839,
2.2953738086235167,
1.0709457806706006)


Notice that this was faster than running these same computations sequentially with a for loop.

We can now run this on all of our input parameters:

[12]:

import dask
lazy_results = []

for parameters in input_params.values:
    lazy_result = dask.delayed(costly_simulation)(parameters)
    lazy_results.append(lazy_result)

futures = dask.persist(*lazy_results)  # trigger computation in the background


(Although we’re still only working on our local machine, this is more practical when using an actual cluster.)

[13]:

client.cluster.scale(10)  # ask for ten 4-thread workers


Then get the result:

[14]:

results = dask.compute(*futures)
results[:5]

[14]:

(2.3150154931094784,
2.285352387829805,
2.0318580649088878,
2.2441991405649806,
1.7921440341124673)


## Using the Futures API

The same example can be implemented using Dask’s Futures API by using the client object itself. For our use case of applying a function across many inputs both Dask delayed and Dask Futures are equally useful. The Futures API is a little bit different because it starts work immediately rather than being completely lazy.

For example, notice that work starts immediately in the cell below as we submit work to the cluster:

[15]:

futures = []
for parameters in input_params.values:
    future = client.submit(costly_simulation, parameters)
    futures.append(future)


We can explicitly wait until this work is done and gather the results to our local process by calling client.gather:

[16]:

results = client.gather(futures)
results[:5]

[16]:

[2.3150154931094784,
2.285352387829805,
2.0318580649088878,
2.2441991405649806,
1.7921440341124673]


But the code above can be written in fewer lines with the client.map() function, which calls a given function on a list of parameters.

As with delayed, we can start the computation without waiting for results simply by not calling client.gather() right away.

Note that since the Dask cluster has already run costly_simulation on these input parameters via the Futures API, the call to client.map() won’t actually trigger any new computation; it will just retrieve the already-computed results.
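The idea behind this reuse is that tasks are identified by a key derived from the function and its inputs, so identical submissions map to the same result. A toy stand-in for that key-based deduplication (an illustration of the concept, not Dask's actual mechanism) looks like:

```python
# Toy sketch of key-based result reuse: identical (function, input) pairs
# share a key, so the second submission returns the cached result
# instead of recomputing.
cache = {}
call_count = 0

def toy_submit(func, arg):
    global call_count
    key = (func.__name__, tuple(arg))   # toy task key
    if key not in cache:
        call_count += 1                 # computed only on first submission
        cache[key] = func(arg)
    return cache[key]

first = toy_submit(sum, [1, 2, 3])    # computes: 6
second = toy_submit(sum, [1, 2, 3])   # reuses cached result: 6
```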

[17]:

futures = client.map(costly_simulation, input_params.values)


Then just get the results later:

[18]:

results = client.gather(futures)
len(results)

[18]:

500

[19]:

print(results[0])

2.3150154931094784


We encourage you to watch the dashboard’s status page to follow ongoing computations.

## Doing some analysis on the results

One advantage of Dask here, apart from API simplicity, is that you can gather the results of all your simulations in a single call. There is no need to implement a complex mechanism or to write individual results to a shared file system or object store.

Just get your result, and do some computation.

Here, we will just get the results and expand our initial dataframe to get a nice view of parameters vs. results for our computation:

[20]:

output = input_params.copy()
output['result'] = pd.Series(results, index=output.index)
output.sample(5)

[20]:

param_a param_b param_c param_d result
234 0.037117 0.652786 0.201661 0.321972 1.213537
127 0.993528 0.935445 0.687208 0.671724 3.287906
7 0.193457 0.028784 0.650875 0.289394 1.162509
9 0.585997 0.162184 0.263253 0.059511 1.070946
386 0.786096 0.753567 0.618727 0.033695 2.192085

Then we can make some nice statistical plots or save the results locally using the pandas interface:

[21]:

%matplotlib inline
output['result'].plot()

[21]:

<matplotlib.axes._subplots.AxesSubplot at 0x7fbaadaa2850>

[22]:

output['result'].mean()

[22]:

2.0046525146626593

[23]:

filtered_output = output[output['result'] > 2]
print(len(filtered_output))
filtered_output.to_csv('/tmp/simulation_result.csv')

257


## Handling very large simulations with Bags

The methods above work well up to about 100,000 input parameters. Beyond that, the Dask scheduler has trouble handling the number of tasks to schedule to workers. The solution to this problem is to bundle many parameters into a single task. You could do this either by making a new function that operates on a batch of parameters and using the delayed or futures APIs on that function, or by using the Dask Bag API. This is described in more detail in the documentation about avoiding too many tasks.
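The "new function that operates on a batch" idea can be sketched in plain Python (the names here are illustrative): chunk the parameter list, then run one batch call per chunk instead of one call per parameter set.

```python
# Hypothetical sketch of batching: each batch becomes one task, turning
# 10 would-be tasks into 2. With Dask you would wrap batch_simulation in
# dask.delayed or client.submit instead of calling it directly.
def costly_simulation(list_param):
    # simplified stand-in: no random sleep
    return sum(list_param)

def batch_simulation(batch):
    # one "task" that runs the simulation on a whole batch of inputs
    return [costly_simulation(p) for p in batch]

params = [[i, i + 1] for i in range(10)]                       # 10 inputs
batches = [params[i:i + 5] for i in range(0, len(params), 5)]  # 2 batches
results = [r for batch in batches for r in batch_simulation(batch)]
```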

Dask Bags hold large sequences in a few partitions. We can convert our input_params sequence into a dask.bag collection, ask for fewer partitions (so at most ~100,000, which is already huge), and apply our function to every item of the bag.

[24]:

import dask.bag as db
b = db.from_sequence(list(input_params.values), npartitions=100)
b = b.map(costly_simulation)

[25]:

%time results_bag = b.compute()

CPU times: user 813 ms, sys: 111 ms, total: 924 ms
Wall time: 8.1 s


Looking at the dashboard here, you should see only 100 tasks to run instead of 500, each taking 5x more time on average, because each one actually calls our function 5 times.

[26]:

np.allclose(results, results_bag)

[26]:

True