There is a big difference between the datasets you use to practice data science skills and the data you get in the real world. Real-world datasets almost always come with some amount of missing data, and it gets trickier still when different data sources indicate missing data in different ways.
In this article, I will focus on handling missing data and discuss how to do it with the pandas library in Python. My focus will be purely on the tools that pandas provides for handling missing data.
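Before diving in, it is worth noting that real-world sources often mark missing entries with their own placeholder strings, and pandas can translate those markers into proper NA values as the data is loaded. Here is a minimal sketch using the na_values parameter of pd.read_csv; the file name and placeholder strings are hypothetical examples, not from any particular dataset:

import pandas as pd

# Treat the source's placeholder strings as missing values on load;
# "survey.csv" and the markers below are hypothetical.
df = pd.read_csv("survey.csv", na_values=["N/A", "missing", "-999"])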
Also, read – Training and Test Sets in Machine Learning.
Handling Missing Data with Python
The first sentinel value used by Pandas is None, a Python singleton object that is often used for missing data in Python code. Because it is a Python object, None cannot be used in arbitrary NumPy/Pandas arrays, but only in arrays with data type 'object' (i.e., arrays of Python objects):
import numpy as np
import pandas as pd
vals1 = np.array([1, None, 3, 4])
vals1
array([1, None, 3, 4], dtype=object)
This dtype=object means that the best common type representation NumPy could infer for the contents of the array is that they are Python objects. While this kind of object array is useful for some purposes, any operations on the data will be done at the Python level, with much more overhead than the typically fast operations seen for arrays with native types:
for dtype in ['object', 'int']:
    print("dtype =", dtype)
    %timeit np.arange(1E6, dtype=dtype).sum()
    print()
dtype = object
10 loops, best of 3: 78.2 ms per loop

dtype = int
100 loops, best of 3: 3.06 ms per loop
The use of Python objects in an array also means that if you perform aggregations like sum() or min() across an array with a None value, you will generally get an error:
vals1.sum()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-749fd8ae6030> in <module>()
----> 1 vals1.sum()

/Users/jakevdp/anaconda/lib/python3.5/site-packages/numpy/core/_methods.py in _sum(a, axis, dtype, out, keepdims)
     30
     31 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):
---> 32     return umr_sum(a, axis, dtype, out, keepdims)
     33
     34 def _prod(a, axis=None, dtype=None, out=None, keepdims=False):

TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
This reflects the fact that addition between an integer and None is undefined.
Missing Numerical Data
The other missing data representation, NaN (an acronym for Not a Number), is different; it is a particular floating-point value recognized by all systems that use the standard IEEE floating-point representation:
vals2 = np.array([1, np.nan, 3, 4])
vals2.dtype
dtype('float64')
Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array from before, this array supports fast operations pushed into compiled code. You should be aware that NaN is a bit like a data virus: it infects any other object it touches. Regardless of the operation, the result of arithmetic with NaN will be another NaN:
1 + np.nan
nan
0 * np.nan
nan
Note that this means that aggregates over the values are well defined (i.e., they don’t result in an error) but not always useful:
vals2.sum(), vals2.min(), vals2.max()
(nan, nan, nan)
NumPy does provide some special aggregations that will ignore these missing values:
np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2)
(8.0, 1.0, 4.0)
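As a side note, you rarely need to reach for these special functions when working in pandas itself: pandas' own aggregation methods skip NaN by default (via their skipna parameter), so wrapping the same array in a Series gives the useful answers directly. A quick sketch:

# pandas aggregations ignore NaN by default (skipna=True)
pd.Series(vals2).sum(), pd.Series(vals2).min(), pd.Series(vals2).max()
# expected: (8.0, 1.0, 4.0)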
Keep in mind that NaN is specifically a floating-point value; there is no equivalent NaN value for integers, strings, or other types.
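To see this concretely, here is a quick sketch of what happens if you try to place NaN into an integer NumPy array; because NaN exists only as a floating-point value, the assignment fails:

vals3 = np.array([1, 2, 3])  # integer dtype
vals3[0] = np.nan            # raises ValueError: cannot convert float NaN to integer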
NaN and None in Pandas
NaN and None both have their place, and Pandas is built to handle the two of them nearly interchangeably, converting between them where appropriate:
pd.Series([1, np.nan, 2, None])
0    1.0
1    NaN
2    2.0
3    NaN
dtype: float64
For types that don't have an available sentinel value, Pandas automatically type-casts when NA values are present. For example, if we set a value in an integer array to np.nan, it will automatically be upcast to a floating-point type to accommodate the NA:
x = pd.Series(range(2), dtype=int)
x
0    0
1    1
dtype: int64
x[0] = None
x
0    NaN
1    1.0
dtype: float64
Notice that in addition to casting the integer array to floating-point, Pandas automatically converts the None to a NaN value.
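As an aside, and depending on your pandas version, pandas 1.0 and later also offer nullable extension dtypes that can hold missing values without this upcasting; a minimal sketch:

# Nullable integer dtype (note the capital "I" in "Int64");
# missing entries display as <NA> and the integer dtype is preserved.
s = pd.Series([1, 2, None], dtype="Int64")
s
# expected:
# 0       1
# 1       2
# 2    <NA>
# dtype: Int64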
Operating on Null Values
As we have seen, Pandas treats None and NaN as essentially interchangeable for indicating missing or null values. To facilitate this convention, there are several useful methods for detecting, removing, and replacing null values in Pandas data structures. They are:
- isnull(): Generate a boolean mask indicating missing values
- notnull(): Opposite of isnull()
- dropna(): Return a filtered version of the data
- fillna(): Return a copy of the data with missing values filled or imputed
We will conclude this section with a brief exploration and demonstration of these routines.
Detecting Null Values
Pandas data structures have two useful methods for detecting null data: isnull() and notnull(). Either one will return a Boolean mask over the data. For example:
data = pd.Series([1, np.nan, 'hello', None])
data.isnull()
0    False
1     True
2    False
3     True
dtype: bool
data[data.notnull()]
0        1
2    hello
dtype: object
The isnull() and notnull() methods produce similar Boolean results for DataFrames.
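For instance, here is a quick sketch on a small DataFrame, constructed here purely for illustration:

df_demo = pd.DataFrame({'a': [1, np.nan], 'b': [None, 'hello']})
df_demo.isnull()
# expected:
#        a      b
# 0  False   True
# 1   True  False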
Dropping Null Values
In addition to the masking used before, there are convenience methods, dropna() (which removes NA values) and fillna() (which fills in NA values). For a Series, the result is straightforward:
data.dropna()
0        1
2    hello
dtype: object
For a DataFrame, there are more options. Consider the following DataFrame:
df = pd.DataFrame([[1, np.nan, 2],
                   [2, 3, 5],
                   [np.nan, 4, 6]])
df
|   | 0   | 1   | 2 |
|---|-----|-----|---|
| 0 | 1.0 | NaN | 2 |
| 1 | 2.0 | 3.0 | 5 |
| 2 | NaN | 4.0 | 6 |
We cannot drop single values from a DataFrame; we can only drop full rows or full columns. Depending on the application, you might want one or the other, so dropna() gives a number of options for a DataFrame.
By default, dropna() will drop all rows in which any null value is present:
df.dropna()
|   | 0   | 1   | 2 |
|---|-----|-----|---|
| 1 | 2.0 | 3.0 | 5 |
Alternatively, you can drop NA values along a different axis; axis=1 drops all columns containing a null value:
df.dropna(axis='columns')
|   | 2 |
|---|---|
| 0 | 2 |
| 1 | 5 |
| 2 | 6 |
But this drops some useful data as well; you might instead be interested in dropping rows or columns with all NA values, or a majority of NA values. This can be specified through the how or thresh parameters, which allow fine control of the number of nulls to allow through.
The default is how='any', such that any row or column (depending on the axis keyword) containing a null value will be dropped. You can also specify how='all', which will only drop rows/columns that are all null values:
df[3] = np.nan
df
|   | 0   | 1   | 2 | 3   |
|---|-----|-----|---|-----|
| 0 | 1.0 | NaN | 2 | NaN |
| 1 | 2.0 | 3.0 | 5 | NaN |
| 2 | NaN | 4.0 | 6 | NaN |
df.dropna(axis='columns', how='all')
|   | 0   | 1   | 2 |
|---|-----|-----|---|
| 0 | 1.0 | NaN | 2 |
| 1 | 2.0 | 3.0 | 5 |
| 2 | NaN | 4.0 | 6 |
For finer-grained control, the thresh parameter lets you specify a minimum number of non-null values for the row/column to be kept:
df.dropna(axis='rows', thresh=3)
|   | 0   | 1   | 2 | 3   |
|---|-----|-----|---|-----|
| 1 | 2.0 | 3.0 | 5 | NaN |
Here the first and last rows have been dropped because each contains only two non-null values.
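The same thresholding works column-wise; as a quick sketch using the df above, keeping only columns with at least three non-null values leaves just column 2:

df.dropna(axis='columns', thresh=3)
# expected: only column 2 survives, since columns 0, 1, and 3
# contain at most two non-null values each.
#    2
# 0  2
# 1  5
# 2  6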
Handling Missing Data by Filling Null Values
Sometimes rather than dropping NA values, you would prefer to replace them with a valid value. This value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values.
You could do this in-place using the isnull() method as a mask, but because it is such a common operation Pandas provides the fillna() method, which returns a copy of the array with the null values replaced.
Consider the following Series:
data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
data
a    1.0
b    NaN
c    2.0
d    NaN
e    3.0
dtype: float64
We can fill NA entries with a single value, such as zero:
data.fillna(0)
a    1.0
b    0.0
c    2.0
d    0.0
e    3.0
dtype: float64
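The fill value need not be a constant; a simple form of the imputation mentioned above is to fill with a statistic computed from the observed values, for example the mean:

data.fillna(data.mean())  # mean of the non-null values is 2.0
# expected:
# a    1.0
# b    2.0
# c    2.0
# d    2.0
# e    3.0
# dtype: float64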
We can specify a forward-fill to propagate the previous value forward:
# forward-fill
data.fillna(method='ffill')
a    1.0
b    1.0
c    2.0
d    2.0
e    3.0
dtype: float64
Or we can specify a back-fill to propagate the next values backward:
# back-fill
data.fillna(method='bfill')
a    1.0
b    2.0
c    2.0
d    3.0
e    3.0
dtype: float64
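Along the same lines, the interpolation mentioned earlier is available as a Series method; a quick sketch of linear interpolation, which fills each gap from the surrounding valid values:

data.interpolate()  # linear interpolation between neighbors
# expected:
# a    1.0
# b    1.5
# c    2.0
# d    2.5
# e    3.0
# dtype: float64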
For DataFrames, the options are similar, but we can also specify an axis along which the fills take place:
df
|   | 0   | 1   | 2 | 3   |
|---|-----|-----|---|-----|
| 0 | 1.0 | NaN | 2 | NaN |
| 1 | 2.0 | 3.0 | 5 | NaN |
| 2 | NaN | 4.0 | 6 | NaN |
df.fillna(method='ffill', axis=1)
|   | 0   | 1   | 2   | 3   |
|---|-----|-----|-----|-----|
| 0 | 1.0 | 1.0 | 2.0 | 2.0 |
| 1 | 2.0 | 3.0 | 5.0 | 5.0 |
| 2 | NaN | 4.0 | 6.0 | 6.0 |

Notice that if a previous value is not available during a forward fill, the NA value remains, as in the first entry of the last row.
Also, read – 10 Machine Learning Projects to Boost your Portfolio.
I hope you liked this article on handling missing data. Feel free to ask questions on this topic, or any topic you want, in the comments section below.