{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Analyze web-hosted JSON data\n", "============================\n", "\n", "This notebook reads and processes JSON-encoded data hosted on the web using a combination of [Dask Bag](https://docs.dask.org/en/latest/bag.html) and [Dask Dataframe](https://docs.dask.org/en/latest/dataframe.html).\n", "\n", "This data comes from [mybinder.org](https://mybinder.org) a web service to run Jupyter notebooks live on the web (you may be running this notebook there now). My Binder publishes records for every time someone launches a live notebook like this one, and stores that record in a publicly accessible JSON file, one file per day. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction to the dataset\n", "\n", "This data is stored as JSON-encoded text files on the public web. Here are some example lines." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask.bag as db\n", "db.read_text('https://archive.analytics.mybinder.org/events-2018-11-03.jsonl').take(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that it includes one line for every time someone started a live notebook on the site. It includes the time that the notebook was started, as well as the repository from which it was served.\n", "\n", "In this notebook we'll look at many such files, parse them from JSON to Python dictionaries, and then from there to Pandas dataframes. We'll then do some simple analyses on this data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start Dask Client for Dashboard\n", "\n", "Starting the Dask Client is optional. It will start the dashboard which \n", "is useful to gain insight on the computation. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dask.distributed import Client, progress\n", "client = Client(threads_per_worker=1, \n", " n_workers=4,\n", " memory_limit='2GB')\n", "client" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Get a list of files on the web\n", "\n", "The mybinder.org team maintains an index file that points to all other available JSON files of data. Lets convert this to a list of URLs that we'll read in the next section." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask.bag as db\n", "import json" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "db.read_text('https://archive.analytics.mybinder.org/index.jsonl').map(json.loads).compute()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filenames = (db.read_text('https://archive.analytics.mybinder.org/index.jsonl')\n", " .map(json.loads)\n", " .pluck('name')\n", " .compute())\n", "\n", "filenames = ['https://archive.analytics.mybinder.org/' + fn for fn in filenames]\n", "filenames[:5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create Bag of all events\n", "\n", "We now create a [Dask Bag](https://docs.dask.org/en/latest/bag.html) around that list of URLs, and then call the `json.loads` function on every line to turn those lines of JSON-encoded text into Python dictionaries that can be more easily manipulated." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "events = db.read_text(filenames).map(json.loads)\n", "events.take(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Most Popular Binders\n", "\n", "Let's do a simple frequency count to find the binders that are run most often." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "events.pluck('spec').frequencies(sort=True).take(20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convert to Dask DataFrame\n", "\n", "Finally, we can convert our bag of Python dictionaries into a [Dask DataFrame](https://docs.dask.org/en/latest/dataframe.html) and follow up with more Pandas-like computations.\n", "\n", "We'll do the same computation as above, now with Pandas syntax." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = events.to_dataframe()\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.spec.value_counts().nlargest(20).to_frame().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Persist in memory\n", "\n", "This dataset fits nicely into memory. Let's avoid downloading the data every time we do an operation and instead keep it local in memory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df.persist()  # keep the data in distributed memory rather than re-reading it from the web" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Honestly, at this point it makes more sense to just switch to Pandas, but this is a Dask example, so we'll continue with Dask DataFrame." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Investigate providers other than GitHub\n", "\n", "Most binders are specified as git repositories on GitHub, but not all. Let's investigate the other providers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import urllib.parse" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.provider.value_counts().compute()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(df[df.provider == 'GitLab']\n", " .spec\n", " .map(urllib.parse.unquote, meta=('spec', object))\n", " .value_counts()\n", " .to_frame()\n", " .compute())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(df[df.provider == 'Git']\n", " .spec\n", " .map(urllib.parse.unquote, meta=('spec', object))\n", " .value_counts()\n", " .to_frame()\n", " .compute())" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 4 }