Data Science Project on Time Series

As an example of working with some time series data, let’s take a look at bicycle counts on Seattle’s Fremont Bridge. This data comes from an automated bicycle counter, installed in late 2012, which has inductive sensors on the east and west sidewalks of the bridge. The hourly bicycle counts can be downloaded from the Seattle Open Data portal.
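
If you would rather fetch the file programmatically, here is a minimal sketch; the export URL is an assumption based on how the Seattle Open Data portal has served this dataset in the past and may change, so verify it on the portal first:

# Sketch: download the hourly counts to a local CSV file.
# The dataset identifier in this URL is assumed from earlier versions of the portal.
from urllib.request import urlretrieve

url = "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"
urlretrieve(url, "fremont-bridge.csv")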

Once this dataset is downloaded, we can use Pandas to read the CSV output into a DataFrame. We will specify that we want the Date as an index, and we want these dates to be automatically parsed:

import pandas as pd
data = pd.read_csv("fremont-bridge.csv", index_col='Date', parse_dates=True)
data.head()

For convenience, we’ll further process this dataset by shortening the column names and adding a “Total” column:

data.columns = ["West", "East"]
data["Total"] = data["West"] + data["East"] 
data.head()

Now let’s take a look at the summary statistics for this data:

data.dropna().describe()

Visualizing the data

We can gain some insight into the dataset by visualizing it. Let’s start by plotting the raw data:

import matplotlib.pyplot as plt
import seaborn
seaborn.set()
data.plot()
plt.ylabel("Hourly Bicycle count")
plt.show()

The ~25,000 hourly samples are far too dense for us to make much sense of. We can gain more insight by resampling the data to a coarser grid. Let’s resample by week:

weekly = data.resample("W").sum()
weekly.plot(style=[':', '--', '-'])
plt.ylabel('Weekly bicycle count')
plt.show()

This shows us some interesting seasonal trends: as you might expect, people bicycle more in the summer than in the winter, and even within a particular season the bicycle use varies from week to week.
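
To put a rough number on that seasonal pattern, one quick sketch (not part of the original walkthrough, but using only the weekly data computed above) is to average the weekly totals by calendar month:

# Average weekly ridership for each calendar month (1 = January, ..., 12 = December).
monthly_avg = weekly.groupby(weekly.index.month).mean()
print(monthly_avg["Total"])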

Another way that comes in handy for aggregating the data is a rolling mean, computed with the DataFrame’s rolling() method (the older pd.rolling_mean() function has been removed from pandas). Here we’ll do a 30-day rolling mean of the daily totals, making sure to center the window:

daily = data.resample('D').sum()
daily.rolling(30, center=True).mean().plot(style=[':', '--', '-'])
plt.ylabel('Mean daily count')
plt.show()

The jaggedness of the result is due to the hard cutoff of the window. We can get a smoother version of a rolling mean using a window function—for example, a Gaussian window.

daily.rolling(50, center=True,
              win_type='gaussian').mean(std=10).plot(style=[':', '--', '-'])
plt.ylabel('Mean daily count')
plt.show()

Digging into the data

While the smoothed data views are useful to get an idea of the general trend in the data, they hide much of the interesting structure. For example, we might want to look at the average traffic as a function of the time of day. We can do this using the GroupBy functionality:

import numpy as np
by_time = data.groupby(data.index.time).mean()
hourly_ticks = 4 * 60 * 60 * np.arange(6)
by_time.plot(xticks=hourly_ticks, style=[':', '--', '-'])
plt.ylabel("Average hourly count")
plt.show()
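
The same GroupBy approach extends naturally to other groupings. As a quick sketch beyond the steps above (using only the data already loaded), we could also average the counts by day of the week:

# Group the hourly counts by weekday and average them (0 = Monday, ..., 6 = Sunday).
by_weekday = data.groupby(data.index.dayofweek).mean()
by_weekday.index = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
by_weekday.plot(style=[':', '--', '-'])
plt.ylabel("Average hourly count")
plt.show()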