DataFrames: Read and Write Data¶
Dask DataFrames can read and store data in many of the same formats as pandas DataFrames. In this example we read and write data with the popular CSV and Parquet formats, and discuss best practices when using these formats.
[1]:
from IPython.display import YouTubeVideo
YouTubeVideo("0eEsIA0O1iE")
[1]:
Start Dask Client for Dashboard¶
Starting the Dask Client is optional. It will provide a dashboard, which is useful to gain insight into the computation.
The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. It can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.
[2]:
from dask.distributed import Client
client = Client(n_workers=1, threads_per_worker=4, processes=True, memory_limit='2GB')
client
[2]:
Client: Client-e20d2897-0de0-11ed-a12a-000d3a8f7959
Connection method: Cluster object | Cluster type: distributed.LocalCluster
Dashboard: http://127.0.0.1:8787/status
Cluster: LocalCluster with 1 worker, 4 threads, 1.86 GiB memory
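If the rendered summary above is hard to read in your environment, the client also exposes the dashboard address programmatically; dashboard_link is a standard attribute on recent versions of distributed.Client:

# Print the dashboard URL directly from the client object.
print(client.dashboard_link)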
Create artificial dataset¶
First we create an artificial dataset and write it to many CSV files.
You don’t need to understand this section; we’re just creating a dataset for the rest of the notebook.
[3]:
import dask
df = dask.datasets.timeseries()
df
[3]:
|                | id    | name   | x       | y       |
|----------------|-------|--------|---------|---------|
| npartitions=30 |       |        |         |         |
| 2000-01-01     | int64 | object | float64 | float64 |
| 2000-01-02     | ...   | ...    | ...     | ...     |
| ...            | ...   | ...    | ...     | ...     |
| 2000-01-30     | ...   | ...    | ...     | ...     |
| 2000-01-31     | ...   | ...    | ...     | ...     |
[4]:
import os
import datetime
if not os.path.exists('data'):
os.mkdir('data')
def name(i):
""" Provide date for filename given index
Examples
--------
>>> name(0)
'2000-01-01'
>>> name(10)
'2000-01-11'
"""
return str(datetime.date(2000, 1, 1) + i * datetime.timedelta(days=1))
df.to_csv('data/*.csv', name_function=name);
Read CSV files¶
We now have many CSV files in our data directory, one for each day in the month of January 2000. Each CSV file holds timeseries data for that day. We can read all of them as one logical dataframe using the dd.read_csv function with a glob string.
[5]:
!ls data/*.csv | head
data/2000-01-01.csv
data/2000-01-02.csv
data/2000-01-03.csv
data/2000-01-04.csv
data/2000-01-05.csv
data/2000-01-06.csv
data/2000-01-07.csv
data/2000-01-08.csv
data/2000-01-09.csv
data/2000-01-10.csv
[6]:
!head data/2000-01-01.csv
timestamp,id,name,x,y
2000-01-01 00:00:00,1009,Jerry,0.9005427499558429,0.3212344670325944
2000-01-01 00:00:01,940,Quinn,0.46795036754868247,-0.01884571513893385
2000-01-01 00:00:02,1017,Ingrid,0.9442706585905265,-0.9229268785155369
2000-01-01 00:00:03,1034,Tim,0.010273653581192255,-0.2850042344432575
2000-01-01 00:00:04,963,Bob,-0.9556052127604173,-0.409805293606079
2000-01-01 00:00:05,992,Ray,0.49090905386189876,-0.8364030355424359
2000-01-01 00:00:06,999,Ray,-0.1791414361782142,0.9108295350480047
2000-01-01 00:00:07,1017,Tim,-0.6121437272121055,0.5585754365941122
2000-01-01 00:00:08,1037,Dan,-0.6931099564135064,-0.6357258139372404
[7]:
!head data/2000-01-30.csv
timestamp,id,name,x,y
2000-01-30 00:00:00,1067,Quinn,-0.9275010814781244,0.7051035850972305
2000-01-30 00:00:01,1011,Quinn,-0.8288674460103511,-0.3018417020358921
2000-01-30 00:00:02,933,Laura,-0.5165326137868189,0.9195088929096915
2000-01-30 00:00:03,1040,Ray,0.8073954879070395,0.9243639047927026
2000-01-30 00:00:04,963,Wendy,0.791167365074305,0.2941664104084778
2000-01-30 00:00:05,1008,Bob,0.38959445411393334,-0.32793662786416844
2000-01-30 00:00:06,1008,Ray,-0.2127878456673038,0.040117377007003796
2000-01-30 00:00:07,1038,Ingrid,0.3092567914432629,0.11665005655447458
2000-01-30 00:00:08,985,Hannah,-0.42749597352375934,-0.3888014211219375
We can read one file with pandas.read_csv or many files with dask.dataframe.read_csv.
[8]:
import pandas as pd
df = pd.read_csv('data/2000-01-01.csv')
df.head()
[8]:
|   | timestamp           | id   | name   | x         | y         |
|---|---------------------|------|--------|-----------|-----------|
| 0 | 2000-01-01 00:00:00 | 1009 | Jerry  | 0.900543  | 0.321234  |
| 1 | 2000-01-01 00:00:01 | 940  | Quinn  | 0.467950  | -0.018846 |
| 2 | 2000-01-01 00:00:02 | 1017 | Ingrid | 0.944271  | -0.922927 |
| 3 | 2000-01-01 00:00:03 | 1034 | Tim    | 0.010274  | -0.285004 |
| 4 | 2000-01-01 00:00:04 | 963  | Bob    | -0.955605 | -0.409805 |
[9]:
import dask.dataframe as dd
df = dd.read_csv('data/2000-*-*.csv')
df
[9]:
|                | timestamp | id    | name   | x       | y       |
|----------------|-----------|-------|--------|---------|---------|
| npartitions=30 |           |       |        |         |         |
|                | object    | int64 | object | float64 | float64 |
|                | ...       | ...   | ...    | ...     | ...     |
| ...            | ...       | ...   | ...    | ...     | ...     |
|                | ...       | ...   | ...    | ...     | ...     |
|                | ...       | ...   | ...    | ...     | ...     |
[10]:
df.head()
[10]:
|   | timestamp           | id   | name   | x         | y         |
|---|---------------------|------|--------|-----------|-----------|
| 0 | 2000-01-01 00:00:00 | 1009 | Jerry  | 0.900543  | 0.321234  |
| 1 | 2000-01-01 00:00:01 | 940  | Quinn  | 0.467950  | -0.018846 |
| 2 | 2000-01-01 00:00:02 | 1017 | Ingrid | 0.944271  | -0.922927 |
| 3 | 2000-01-01 00:00:03 | 1034 | Tim    | 0.010274  | -0.285004 |
| 4 | 2000-01-01 00:00:04 | 963  | Bob    | -0.955605 | -0.409805 |
Tuning read_csv¶
The Pandas read_csv function has many options to help you parse files. The Dask version uses the Pandas function internally, and so supports many of the same options. You can use the ? operator to see the full documentation string.
[11]:
pd.read_csv?
[12]:
dd.read_csv?
In this case we use the parse_dates keyword to parse the timestamp column into datetime values. This will make later operations that rely on the timestamp more efficient. Notice that the dtype of the timestamp column has changed from object to datetime64[ns].
[13]:
df = dd.read_csv('data/2000-*-*.csv', parse_dates=['timestamp'])
df
[13]:
|                | timestamp      | id    | name   | x       | y       |
|----------------|----------------|-------|--------|---------|---------|
| npartitions=30 |                |       |        |         |         |
|                | datetime64[ns] | int64 | object | float64 | float64 |
|                | ...            | ...   | ...    | ...     | ...     |
| ...            | ...            | ...   | ...    | ...     | ...     |
|                | ...            | ...   | ...    | ...     | ...     |
|                | ...            | ...   | ...    | ...     | ...     |
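Beyond parse_dates, a couple of other knobs are often worth knowing about: dtype pins column types and skips Dask's sampling-based dtype inference, and blocksize controls how many bytes of CSV text go into each partition. The snippet below is a sketch with illustrative values (df_tuned is a throwaway name), not a recommendation for this dataset:

# Sketch: pinning dtypes and choosing a partition size explicitly.
# The values here are illustrative, not tuned for this dataset.
df_tuned = dd.read_csv(
    'data/2000-*-*.csv',
    parse_dates=['timestamp'],
    dtype={'id': 'int64', 'name': 'object'},  # skip dtype inference for these columns
    blocksize='16MB',                         # bytes of CSV text per partition
)
df_tuned.npartitions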
Do a simple computation¶
Whenever we operate on our dataframe we read through all of our CSV data so that we don’t fill up RAM. This is very efficient for memory use, but reading through all of the CSV files every time can be slow.
[14]:
%time df.groupby('name').x.mean().compute()
CPU times: user 211 ms, sys: 20.6 ms, total: 232 ms
Wall time: 3.26 s
[14]:
name
Alice 0.004810
Bob -0.000236
Charlie -0.003038
Dan 0.002005
Edith -0.001287
Frank 0.000691
George -0.002461
Hannah -0.004205
Ingrid 0.001781
Jerry -0.000149
Kevin 0.000707
Laura 0.002090
Michael -0.004071
Norbert -0.001131
Oliver -0.002930
Patricia 0.000120
Quinn 0.000870
Ray 0.000424
Sarah -0.000817
Tim 0.003061
Ursula 0.002109
Victor -0.001035
Wendy -0.002654
Xavier 0.000702
Yvonne 0.000308
Zelda -0.001066
Name: x, dtype: float64
Write to Parquet¶
Instead, we’ll store our data in Parquet, a format that is more efficient for computers to read and write.
[15]:
df.to_parquet('data/2000-01.parquet', engine='pyarrow')
[16]:
!ls data/2000-01.parquet/
part.0.parquet part.16.parquet part.23.parquet part.4.parquet
part.1.parquet part.17.parquet part.24.parquet part.5.parquet
part.10.parquet part.18.parquet part.25.parquet part.6.parquet
part.11.parquet part.19.parquet part.26.parquet part.7.parquet
part.12.parquet part.2.parquet part.27.parquet part.8.parquet
part.13.parquet part.20.parquet part.28.parquet part.9.parquet
part.14.parquet part.21.parquet part.29.parquet
part.15.parquet part.22.parquet part.3.parquet
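to_parquet also forwards engine-specific options. As a sketch (the output path here is hypothetical, and snappy compression is commonly the default already), you could be explicit about compression and whether to write the index:

# Sketch: explicit compression and index handling when writing Parquet.
# 'data/2000-01-snappy.parquet' is a hypothetical output path.
df.to_parquet(
    'data/2000-01-snappy.parquet',
    engine='pyarrow',
    compression='snappy',   # commonly the default codec
    write_index=True,       # keep the dataframe index in the files
)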
Read from Parquet¶
[17]:
df = dd.read_parquet('data/2000-01.parquet', engine='pyarrow')
df
[17]:
|                | timestamp      | id    | name   | x       | y       |
|----------------|----------------|-------|--------|---------|---------|
| npartitions=30 |                |       |        |         |         |
|                | datetime64[ns] | int64 | object | float64 | float64 |
|                | ...            | ...   | ...    | ...     | ...     |
| ...            | ...            | ...   | ...    | ...     | ...     |
|                | ...            | ...   | ...    | ...     | ...     |
|                | ...            | ...   | ...    | ...     | ...     |
[18]:
%time df.groupby('name').x.mean().compute()
CPU times: user 132 ms, sys: 13.1 ms, total: 145 ms
Wall time: 942 ms
[18]:
name
Alice 0.004810
Bob -0.000236
Charlie -0.003038
Dan 0.002005
Edith -0.001287
Frank 0.000691
George -0.002461
Hannah -0.004205
Ingrid 0.001781
Jerry -0.000149
Kevin 0.000707
Laura 0.002090
Michael -0.004071
Norbert -0.001131
Oliver -0.002930
Patricia 0.000120
Quinn 0.000870
Ray 0.000424
Sarah -0.000817
Tim 0.003061
Ursula 0.002109
Victor -0.001035
Wendy -0.002654
Xavier 0.000702
Yvonne 0.000308
Zelda -0.001066
Name: x, dtype: float64
Select only the columns that you plan to use¶
Parquet is a column-store, which means that it can efficiently pull out only a few columns from your dataset. This helps you avoid loading data that you don't need.
[19]:
%%time
df = dd.read_parquet('data/2000-01.parquet', columns=['name', 'x'], engine='pyarrow')
df.groupby('name').x.mean().compute()
CPU times: user 130 ms, sys: 6.46 ms, total: 136 ms
Wall time: 851 ms
[19]:
name
Alice 0.004810
Bob -0.000236
Charlie -0.003038
Dan 0.002005
Edith -0.001287
Frank 0.000691
George -0.002461
Hannah -0.004205
Ingrid 0.001781
Jerry -0.000149
Kevin 0.000707
Laura 0.002090
Michael -0.004071
Norbert -0.001131
Oliver -0.002930
Patricia 0.000120
Quinn 0.000870
Ray 0.000424
Sarah -0.000817
Tim 0.003061
Ursula 0.002109
Victor -0.001035
Wendy -0.002654
Xavier 0.000702
Yvonne 0.000308
Zelda -0.001066
Name: x, dtype: float64
Here the difference is not that large, but with larger datasets this can save a great deal of time.
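If you are not sure which columns a Parquet dataset contains, opening it lazily and checking the dtypes is cheap, since only metadata is read. A small sketch using the dataset written above:

# Only Parquet metadata is read here, so this is fast even for large datasets.
dd.read_parquet('data/2000-01.parquet', engine='pyarrow').dtypes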