You can run this notebook in a live session on Binder or view it on GitHub.

Dask Arrays

Dask arrays coordinate many NumPy arrays, arranged into chunks along a grid. They support a large subset of the NumPy API.
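
For example (a small sketch, assuming only that NumPy and Dask are installed; the variable name is just for illustration), you can wrap an existing NumPy array and choose a chunk shape:

In [ ]:
import numpy as np
import dask.array as da

# Wrap a small NumPy array as a Dask array split into 2x2 chunks
a = da.from_array(np.arange(16).reshape(4, 4), chunks=(2, 2))
a.chunks      # ((2, 2), (2, 2)) -- chunk sizes along each axis
a.numblocks   # (2, 2) -- a 2x2 grid of underlying NumPy blocks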

Start Dask Client for Dashboard

Starting the Dask Client is optional. It provides a dashboard, which is useful for gaining insight into the computation.

The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. It can take some effort to arrange your windows, but seeing both at the same time is very useful while learning.

In [1]:
from dask.distributed import Client, progress
client = Client(processes=False, threads_per_worker=4,
                n_workers=1, memory_limit='2GB')
client
Out[1]:

Client

Cluster

  • Workers: 1
  • Cores: 4
  • Memory: 2.00 GB
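
If the dashboard link does not render in your environment, you can ask the client for it directly (a quick check, assuming a reasonably recent version of distributed; the exact address depends on your setup):

In [ ]:
# Print the dashboard URL; the host and port will vary by machine
print(client.dashboard_link)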

Create Random array

This creates a 10000x10000 array of random numbers, represented as many NumPy arrays of size 1000x1000 (or smaller if the array cannot be divided evenly). In this case there are 100 (10x10) NumPy arrays of size 1000x1000.

In [2]:
import dask.array as da
x = da.random.random((10000, 10000), chunks=(1000, 1000))
x
Out[2]:
dask.array<random_sample, shape=(10000, 10000), dtype=float64, chunksize=(1000, 1000)>
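
You can inspect the chunk layout to confirm the 10x10 grid of blocks (a quick check; the byte count assumes the default float64 dtype):

In [ ]:
x.numblocks   # (10, 10) -- the grid of 1000x1000 NumPy blocks
x.nbytes      # 800000000 -- 10000 * 10000 * 8 bytes per float64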

Use NumPy syntax as usual

In [3]:
y = x + x.T
z = y[::2, 5000:].mean(axis=1)
z
Out[3]:
dask.array<mean_agg-aggregate, shape=(5000,), dtype=float64, chunksize=(500,)>
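
Nothing has been computed yet; these expressions only build up a lazy task graph. A quick check (the attribute values follow from the output above):

In [ ]:
type(z)            # dask.array.core.Array, not numpy.ndarray
z.shape, z.dtype   # ((5000,), dtype('float64'))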

Call .compute() when you want your result as a NumPy array.

If you started the Client above, you may want to watch the status page during computation.

In [4]:
z.compute()
Out[4]:
array([1.00069139, 0.99517692, 1.00081863, ..., 1.01089258, 0.98694301,
       0.99065289])
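
The result is an ordinary in-memory NumPy array, so any follow-up work can use plain NumPy (a small sketch; the result variable name is just for illustration):

In [ ]:
import numpy as np

result = z.compute()
isinstance(result, np.ndarray)   # True -- a concrete array, no longer lazy
result.shape                     # (5000,)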

Persist data in memory

If you have enough available RAM for your dataset, then you can persist the data in memory.

This allows future computations to be much faster.

In [5]:
y = y.persist()
In [6]:
%time y[0, 0].compute()
CPU times: user 2.92 s, sys: 528 ms, total: 3.44 s
Wall time: 1.81 s
Out[6]:
1.4125380317672358
In [7]:
%time y.sum().compute()
CPU times: user 452 ms, sys: 24 ms, total: 476 ms
Wall time: 325 ms
Out[7]:
99990900.7338699
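
Note that persist returns immediately while the work continues in the background; the progress function imported at the top can track it (a small sketch, assuming the distributed client started above):

In [ ]:
# persist() returns right away; progress() shows a bar while chunks are computed
y = y.persist()
progress(y)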