Dask DataFrames¶
Dask DataFrames coordinate many Pandas DataFrames, partitioned along an index. They support a large subset of the Pandas API.
Start Dask Client for Dashboard¶
Starting the Dask Client is optional. It will provide a dashboard which is useful to gain insight on the computation.
The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. This can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.
[1]:
from dask.distributed import Client, progress
client = Client(n_workers=2, threads_per_worker=2, memory_limit='1GB')
client
[1]:
[HTML summary of the Client and its Cluster]
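The dashboard address is also available programmatically; if the HTML summary above is inconvenient, this one-liner (using the client's standard dashboard_link attribute) prints it:
[ ]:
print(client.dashboard_link)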
Create Random Dataframe¶
We create a random timeseries of data with the following attributes:

- It stores a record for every second of January 2000
- It splits that month by day, keeping each day as a separate Pandas DataFrame
- Along with a datetime index it has columns for names, ids, and numeric values

This is a small dataset of about 240 MB. Increase the number of days or use a finer frequency to practice with a larger dataset.
[2]:
import dask
import dask.dataframe as dd
df = dask.datasets.timeseries()
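To practice with a larger dataset, as suggested above, dask.datasets.timeseries also accepts explicit bounds and frequencies. A sketch, with illustrative parameter values, that would generate a record every ten seconds for all of 2000, partitioned by month (df above keeps the default dataset):
[ ]:
df_big = dask.datasets.timeseries(
    start='2000-01-01',
    end='2000-12-31',
    freq='10s',           # one record every ten seconds
    partition_freq='1M',  # one Pandas DataFrame per month
)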
Unlike Pandas, Dask DataFrames are lazy and so no data is printed here.
[3]:
df
[3]:
                    id    name        x        y
npartitions=30
2000-01-01       int64  object  float64  float64
2000-01-02         ...     ...      ...      ...
...                ...     ...      ...      ...
2000-01-30         ...     ...      ...      ...
2000-01-31         ...     ...      ...      ...
But the column names and dtypes are known.
[4]:
df.dtypes
[4]:
id int64
name object
x float64
y float64
dtype: object
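Other metadata is just as cheap to inspect, since none of it requires loading data; for example, these standard attributes:
[ ]:
df.columns      # column names, known without computing
df.npartitions  # how many Pandas DataFrames back this Dask DataFrame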
Some operations will automatically display the data.
[5]:
import pandas as pd
pd.options.display.precision = 2
pd.options.display.max_rows = 10
[6]:
df.head(3)
[6]:
                       id     name     x     y
timestamp
2000-01-01 00:00:00  1058  Norbert  0.14  0.21
2000-01-01 00:00:01  1012   Ingrid  0.96 -0.60
2000-01-01 00:00:02   987  Norbert -0.97 -0.39
Use Standard Pandas Operations¶
Most common Pandas operations work identically on Dask DataFrames.
[7]:
df2 = df[df.y > 0]
df3 = df2.groupby('name').x.std()
df3
[7]:
Dask Series Structure:
npartitions=1
float64
...
Name: x, dtype: float64
Dask Name: sqrt, 157 tasks
Call .compute() when you want your result as a concrete Pandas object (here a Series). If you started Client() above then you may want to watch the status page during computation.
[8]:
computed_df = df3.compute()
type(computed_df)
[8]:
pandas.core.series.Series
[9]:
computed_df
[9]:
name
Alice 0.58
Bob 0.58
Charlie 0.58
Dan 0.58
Edith 0.58
...
Victor 0.58
Wendy 0.58
Xavier 0.58
Yvonne 0.58
Zelda 0.58
Name: x, Length: 26, dtype: float64
Persist data in memory¶
If you have the available RAM for your dataset then you can persist data in memory.
This allows future computations to be much faster.
[10]:
df = df.persist()
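Since progress was imported alongside the client at the top, an equivalent variant of the cell above can also display a progress bar while the partitions load in the background:
[ ]:
df = df.persist()
progress(df)  # progress bar tracking the background loading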
Time Series Operations¶
Because we have a datetime index, time-series operations work efficiently.
[11]:
%matplotlib inline
[12]:
df[['x', 'y']].resample('1h').mean().head()
[12]:
                            x         y
timestamp
2000-01-01 00:00:00  1.59e-03 -7.11e-03
2000-01-01 01:00:00 -6.15e-03  4.16e-03
2000-01-01 02:00:00 -3.55e-03 -1.47e-02
2000-01-01 03:00:00  1.10e-02 -3.52e-03
2000-01-01 04:00:00 -1.51e-02  5.70e-03
[13]:
df[['x', 'y']].resample('24h').mean().compute().plot()
[13]:
<AxesSubplot:xlabel='timestamp'>

[14]:
df[['x', 'y']].rolling(window='24h').mean().head()
[14]:
                        x     y
timestamp
2000-01-01 00:00:00  0.14  0.21
2000-01-01 00:00:01  0.55 -0.19
2000-01-01 00:00:02  0.05 -0.26
2000-01-01 00:00:03 -0.01 -0.21
2000-01-01 00:00:04 -0.04 -0.11
Random access is cheap along the index, but must still be computed.
[15]:
df.loc['2000-01-05']
[15]:
                                  id    name        x        y
npartitions=1
2000-01-05 00:00:00.000000000  int64  object  float64  float64
2000-01-05 23:59:59.999999999    ...     ...      ...      ...
[16]:
%time df.loc['2000-01-05'].compute()
CPU times: user 13.5 ms, sys: 11.9 ms, total: 25.3 ms
Wall time: 42.2 ms
[16]:
                       id     name     x     y
timestamp
2000-01-05 00:00:00   990  Charlie -0.39  0.87
2000-01-05 00:00:01  1034    Wendy -0.61  0.25
2000-01-05 00:00:02   990   George -0.62  0.57
2000-01-05 00:00:03  1096    Zelda  0.22  0.58
2000-01-05 00:00:04  1011    Jerry -0.33  0.16
...                   ...      ...   ...   ...
2000-01-05 23:59:55   945      Ray -0.67  0.26
2000-01-05 23:59:56  1000  Michael  0.17  0.46
2000-01-05 23:59:57   981   George  0.45 -0.26
2000-01-05 23:59:58   956   Oliver  0.30  0.69
2000-01-05 23:59:59  1030    Edith  0.61  0.39

86400 rows × 4 columns
Set Index¶
Data is sorted by the index column. This allows for faster access, joins, groupby-apply operations, and so on. However, sorting data can be costly to do in parallel, so setting the index is important to do, but worth doing only infrequently.
[17]:
df = df.set_index('name')
df
[17]:
                   id        x        y
npartitions=30
Alice           int64  float64  float64
Alice             ...      ...      ...
...               ...      ...      ...
Zelda             ...      ...      ...
Zelda             ...      ...      ...
Because computing this dataset is expensive and we can fit it in our available RAM, we persist the dataset to memory.
[18]:
df = df.persist()
Dask now knows where all data lives, indexed cleanly by name. As a result, operations like random access are cheap and efficient.
[19]:
%time df.loc['Alice'].compute()
CPU times: user 331 ms, sys: 19.6 ms, total: 350 ms
Wall time: 2.42 s
[19]:
         id     x         y
name
Alice  1026 -0.99  8.07e-01
Alice  1003 -0.98  8.48e-01
Alice   986 -0.86 -1.51e-01
Alice   969  0.64  1.27e-01
Alice  1007  0.78  9.35e-01
...     ...   ...       ...
Alice   974  0.33 -5.91e-01
Alice  1002  0.45  8.60e-01
Alice  1070  0.40  6.67e-03
Alice   972 -0.69 -8.54e-01
Alice   990  0.89  7.59e-01

99913 rows × 3 columns
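This lookup is cheap because set_index recorded the sorted boundary values between partitions, so Dask can jump straight to the single partition containing 'Alice'. These boundaries are exposed as a standard attribute:
[ ]:
df.divisions  # tuple of sorted partition boundaries along the name index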
Groupby Apply with Scikit-Learn¶
Now that our data is sorted by name we can easily do operations like random access on name, or groupby-apply with custom functions.
Here we train a different Scikit-Learn linear regression model on each name.
[20]:
from sklearn.linear_model import LinearRegression
def train(partition):
    # Fit one linear model per group, predicting y from x
    est = LinearRegression()
    est.fit(partition[['x']].values, partition.y.values)
    return est
[21]:
df.groupby('name').apply(train, meta=object).compute()
[21]:
name
Alice LinearRegression()
Bob LinearRegression()
Charlie LinearRegression()
Dan LinearRegression()
Edith LinearRegression()
...
Victor LinearRegression()
Wendy LinearRegression()
Xavier LinearRegression()
Yvonne LinearRegression()
Zelda LinearRegression()
Length: 26, dtype: object
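The result is an ordinary Pandas Series of fitted estimators, so any individual model can be used directly. A small follow-on sketch (the variable name models is our own):
[ ]:
models = df.groupby('name').apply(train, meta=object).compute()
models['Alice'].predict([[0.5]])  # predicted y for x = 0.5 using Alice's model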
Further Reading¶
For a more in-depth introduction to Dask DataFrames, see the Dask tutorial, notebooks 04 and 07.