You can run this notebook in a live session on Binder or view it on GitHub.

DataFrames: Read and Write Data

Dask DataFrames can read and store data in many of the same formats as Pandas DataFrames. In this example we read and write data with the popular CSV and Parquet formats, and discuss best practices when using these formats.

[1]:
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/0eEsIA0O1iE?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')
[1]:

Start Dask Client for Dashboard

Starting the Dask Client is optional. It will provide a dashboard which is useful to gain insight on the computation.

The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. It can take some effort to arrange your windows, but seeing both at the same time is very useful when learning.

[2]:
from dask.distributed import Client
client = Client(n_workers=1, threads_per_worker=4, processes=False, memory_limit='2GB')
client
[2]:

Client

Cluster

  • Workers: 1
  • Cores: 4
  • Memory: 2.00 GB

Create artificial dataset

First we create an artificial dataset and write it to many CSV files.

You don’t need to understand this section; we’re just creating a dataset for the rest of the notebook.

[3]:
import dask
df = dask.datasets.timeseries()
df
[3]:
Dask DataFrame Structure:
id name x y
npartitions=30
2000-01-01 int64 object float64 float64
2000-01-02 ... ... ... ...
... ... ... ... ...
2000-01-30 ... ... ... ...
2000-01-31 ... ... ... ...
Dask Name: make-timeseries, 30 tasks
[4]:
import os
import datetime

if not os.path.exists('data'):
    os.mkdir('data')

def name(i):
    """ Provide date for filename given index

    Examples
    --------
    >>> name(0)
    '2000-01-01'
    >>> name(10)
    '2000-01-11'
    """
    return str(datetime.date(2000, 1, 1) + i * datetime.timedelta(days=1))

df.to_csv('data/*.csv', name_function=name);

Read CSV files

We now have many CSV files in our data directory, one for each day in the month of January 2000. Each CSV file holds timeseries data for that day. We can read all of them as one logical dataframe using the dd.read_csv function with a glob string.

[5]:
!ls data/*.csv | head
data/2000-01-01.csv
data/2000-01-02.csv
data/2000-01-03.csv
data/2000-01-04.csv
data/2000-01-05.csv
data/2000-01-06.csv
data/2000-01-07.csv
data/2000-01-08.csv
data/2000-01-09.csv
data/2000-01-10.csv
[6]:
!head data/2000-01-01.csv
timestamp,id,name,x,y
2000-01-01 00:00:00,1035,Frank,-0.9083150345556057,-0.4344375690852609
2000-01-01 00:00:01,961,Edith,-0.41102303875525914,0.2920878187539573
2000-01-01 00:00:02,1035,Frank,0.18935165013918942,-0.8990062173948214
2000-01-01 00:00:03,987,Patricia,0.22824954566590794,0.6702174071087221
2000-01-01 00:00:04,950,George,0.5591237896655585,0.17110790026503597
2000-01-01 00:00:05,1033,Xavier,-0.4357442459686234,0.9723347410907133
2000-01-01 00:00:06,980,Kevin,-0.5406182896099077,0.9848371809170315
2000-01-01 00:00:07,959,Michael,-0.13334744731968873,0.8130501482053696
2000-01-01 00:00:08,1030,Charlie,-0.9758021626366122,0.03828606860007766
[7]:
!head data/2000-01-30.csv
timestamp,id,name,x,y
2000-01-30 00:00:00,983,Wendy,0.6860596828886865,-0.5263252741103555
2000-01-30 00:00:01,950,Ingrid,-0.4613806338403199,0.7337789410536006
2000-01-30 00:00:02,1028,Oliver,0.2615764892384007,-0.17238598564609675
2000-01-30 00:00:03,984,Kevin,0.22344170251333595,-0.2682144653966012
2000-01-30 00:00:04,1001,Zelda,-0.8477491358306266,-0.5372708933175052
2000-01-30 00:00:05,1023,Norbert,-0.9497111547581893,-0.8374842725089127
2000-01-30 00:00:06,960,Oliver,0.7138147769667484,0.7961583603249944
2000-01-30 00:00:07,998,Quinn,-0.4314697800862548,0.37235952158738317
2000-01-30 00:00:08,1015,Victor,-0.4657085669061303,0.8712963902529196

We can read one file with pandas.read_csv or many files with dask.dataframe.read_csv.

[8]:
import pandas as pd

df = pd.read_csv('data/2000-01-01.csv')
df.head()
[8]:
timestamp id name x y
0 2000-01-01 00:00:00 1035 Frank -0.908315 -0.434438
1 2000-01-01 00:00:01 961 Edith -0.411023 0.292088
2 2000-01-01 00:00:02 1035 Frank 0.189352 -0.899006
3 2000-01-01 00:00:03 987 Patricia 0.228250 0.670217
4 2000-01-01 00:00:04 950 George 0.559124 0.171108
[9]:
import dask.dataframe as dd

df = dd.read_csv('data/2000-*-*.csv')
df
[9]:
Dask DataFrame Structure:
timestamp id name x y
npartitions=30
object int64 object float64 float64
... ... ... ... ...
... ... ... ... ...
... ... ... ... ...
... ... ... ... ...
Dask Name: from-delayed, 90 tasks
[10]:
df.head()
[10]:
timestamp id name x y
0 2000-01-01 00:00:00 1035 Frank -0.908315 -0.434438
1 2000-01-01 00:00:01 961 Edith -0.411023 0.292088
2 2000-01-01 00:00:02 1035 Frank 0.189352 -0.899006
3 2000-01-01 00:00:03 987 Patricia 0.228250 0.670217
4 2000-01-01 00:00:04 950 George 0.559124 0.171108

Tuning read_csv

The Pandas read_csv function has many options to help you parse files. The Dask version uses the Pandas function internally, and so supports many of the same options. You can use the ? operator to see the full documentation string.

[11]:
pd.read_csv?
[12]:
dd.read_csv?
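
For example, Pandas keywords such as usecols and dtype pass straight through to the underlying pandas.read_csv calls, while a few keywords (like blocksize) are Dask-specific. Here is a minimal sketch; the particular column names, dtypes, and blocksize value are illustrative, not tuned recommendations:

# Pandas keywords (usecols, dtype, ...) are forwarded to pandas.read_csv;
# blocksize is Dask-specific and controls how files split into partitions.
df = dd.read_csv(
    'data/2000-*-*.csv',
    usecols=['timestamp', 'name', 'x'],  # read only the columns we need
    dtype={'x': 'float64'},              # declare dtypes up front to avoid
                                         # inference mismatches across partitions
    blocksize='16MB',                    # smaller blocks -> more, smaller partitions
)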

In this case we use the parse_dates keyword to parse the timestamp column as datetimes. This makes time-based operations more efficient later on. Notice that the dtype of the timestamp column has changed from object to datetime64[ns].

[13]:
df = dd.read_csv('data/2000-*-*.csv', parse_dates=['timestamp'])
df
[13]:
Dask DataFrame Structure:
timestamp id name x y
npartitions=30
datetime64[ns] int64 object float64 float64
... ... ... ... ...
... ... ... ... ...
... ... ... ... ...
... ... ... ... ...
Dask Name: from-delayed, 90 tasks

Do a simple computation

Whenever we operate on our dataframe, Dask reads through all of our CSV data so that we don’t fill up RAM. This is very memory-efficient, but re-reading and re-parsing every CSV file for each computation can be slow.

[14]:
%time df.groupby('name').x.mean().compute()
CPU times: user 7.19 s, sys: 516 ms, total: 7.7 s
Wall time: 5.2 s
[14]:
name
Alice       0.001719
Bob        -0.000799
Charlie    -0.002834
Dan        -0.001684
Edith      -0.002680
Frank       0.000284
George     -0.001190
Hannah      0.001327
Ingrid     -0.001632
Jerry      -0.000538
Kevin       0.001736
Laura      -0.002611
Michael    -0.001191
Norbert     0.000124
Oliver      0.001567
Patricia    0.000946
Quinn      -0.000287
Ray         0.001596
Sarah       0.002605
Tim         0.000939
Ursula      0.000196
Victor     -0.001830
Wendy       0.002670
Xavier     -0.000809
Yvonne     -0.002480
Zelda      -0.001230
Name: x, dtype: float64
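
As an aside, if the parsed dataframe fits comfortably in memory, you can avoid re-reading the CSV files on every computation by persisting it. A minimal sketch, assuming the 2 GB of worker memory configured above is enough to hold the data:

# Load and parse the CSV data once, keeping the partitions in memory.
# Subsequent computations reuse the in-memory data instead of re-reading CSVs.
df = df.persist()
df.groupby('name').x.mean().compute()  # no CSV parsing the second time around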

Write to Parquet

Instead of paying that parsing cost on every computation, we’ll store our data in Parquet, a format that is more efficient for computers to read and write.

[15]:
df.to_parquet('data/2000-01.parquet', engine='pyarrow')
[16]:
!ls data/2000-01.parquet/
_common_metadata  part.16.parquet  part.23.parquet  part.3.parquet
part.0.parquet    part.17.parquet  part.24.parquet  part.4.parquet
part.10.parquet   part.18.parquet  part.25.parquet  part.5.parquet
part.11.parquet   part.19.parquet  part.26.parquet  part.6.parquet
part.12.parquet   part.1.parquet   part.27.parquet  part.7.parquet
part.13.parquet   part.20.parquet  part.28.parquet  part.8.parquet
part.14.parquet   part.21.parquet  part.29.parquet  part.9.parquet
part.15.parquet   part.22.parquet  part.2.parquet
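
Parquet also supports compression. The compression keyword of to_parquet lets you trade a little CPU time for smaller files; snappy is a common choice with the pyarrow engine. The output path below is just illustrative:

# Write snappy-compressed Parquet files (illustrative output path).
df.to_parquet('data/2000-01-snappy.parquet', engine='pyarrow',
              compression='snappy')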

Read from Parquet

[17]:
df = dd.read_parquet('data/2000-01.parquet', engine='pyarrow')
df
[17]:
Dask DataFrame Structure:
timestamp id name x y
npartitions=30
datetime64[ns] int64 object float64 float64
... ... ... ... ...
... ... ... ... ...
... ... ... ... ...
... ... ... ... ...
Dask Name: read-parquet, 30 tasks
[18]:
%time df.groupby('name').x.mean().compute()
CPU times: user 1.8 s, sys: 312 ms, total: 2.11 s
Wall time: 1.53 s
[18]:
name
Alice       0.001719
Bob        -0.000799
Charlie    -0.002834
Dan        -0.001684
Edith      -0.002680
Frank       0.000284
George     -0.001190
Hannah      0.001327
Ingrid     -0.001632
Jerry      -0.000538
Kevin       0.001736
Laura      -0.002611
Michael    -0.001191
Norbert     0.000124
Oliver      0.001567
Patricia    0.000946
Quinn      -0.000287
Ray         0.001596
Sarah       0.002605
Tim         0.000939
Ursula      0.000196
Victor     -0.001830
Wendy       0.002670
Xavier     -0.000809
Yvonne     -0.002480
Zelda      -0.001230
Name: x, dtype: float64

Select only the columns that you plan to use

Parquet is a column store, which means that it can efficiently pull out only a few columns from your dataset. This is good because it avoids loading data you don’t need.

[19]:
%%time
df = dd.read_parquet('data/2000-01.parquet', columns=['name', 'x'], engine='pyarrow')
df.groupby('name').x.mean().compute()
CPU times: user 1.5 s, sys: 220 ms, total: 1.72 s
Wall time: 1.27 s

Here the difference is not that large, but with larger datasets this can save a great deal of time.
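
One further best practice worth mentioning: if you often select ranges of a column such as timestamp, setting it as a sorted index before writing lets Dask record the partition boundaries and read back only the partitions a query touches. A sketch, with an illustrative output path:

# Sort by timestamp and make it the index; writing the result to Parquet
# records the partition boundaries (divisions), so time-range selections
# like df.loc['2000-01-05'] only read the overlapping partitions.
df = dd.read_parquet('data/2000-01.parquet', engine='pyarrow')
df = df.set_index('timestamp')
df.to_parquet('data/2000-01-indexed.parquet', engine='pyarrow')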