ROC Curve in Machine Learning

The Receiver Operating Characteristic (ROC) curve is a popular tool used with binary classifiers. It is very similar to the precision/recall curve, but instead of plotting precision versus recall, the ROC curve plots the true positive rate (TPR, another name for recall) against the false positive rate (FPR).

The FPR is the ratio of negative instances that are incorrectly classified as positive. It is equal to 1 minus the true negative rate (TNR), which is the ratio of negative instances that are correctly classified as negative. The TNR is also called specificity.
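These definitions can be checked directly from a confusion matrix. The sketch below uses small made-up labels and predictions (not the data from the article) just to show that FPR = FP / (FP + TN) and that it equals 1 minus the TNR:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels and predictions, purely to illustrate the definitions
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)   # false positive rate
tnr = tn / (tn + fp)   # true negative rate (specificity)
print(fpr, tnr)        # note that fpr == 1 - tnr
```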

Plotting The ROC Curve

I will explain this using the classifier I trained in my article on Binary Classification, continuing from where that article left off. To plot the ROC curve, you first use the roc_curve() function to compute the TPR and FPR for various threshold values:

from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)

Then you can plot the FPR against the TPR using Matplotlib:

import matplotlib.pyplot as plt

def plot_roc_curve(fpr, tpr, label=None):
    plt.plot(fpr, tpr, linewidth=2, label=label)
    plt.plot([0, 1], [0, 1], 'k--')  # dashed diagonal = random classifier
    plt.axis([0, 1, 0, 1])
    plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16)
    plt.ylabel('True Positive Rate (Recall)', fontsize=16)
    plt.grid(True)

plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], "r:")  # highlight the chosen threshold
plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], "r:")
plt.plot([4.837e-3], [0.4368], "ro")
plt.show()  # the original notebook saves the figure with a save_fig() helper instead

So there is a trade-off: the higher the recall (TPR), the more false positives (FPR) the classifier produces. The dotted line represents the ROC curve of a purely random classifier; a good classifier stays as far away from that line as possible (toward the top-left corner).
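One way to navigate this trade-off is to pick the lowest threshold that reaches a target recall, then check the FPR it costs you. A minimal sketch of that idea, using synthetic data and a LogisticRegression scorer as stand-ins for the article's MNIST data and SGD classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Synthetic imbalanced data stands in for the scores used in the article
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=42)
scores = LogisticRegression(max_iter=1000).fit(X, y).decision_function(X)
fpr, tpr, thresholds = roc_curve(y, scores)

# First index where recall (TPR) reaches at least 90%
idx = np.argmax(tpr >= 0.90)
print(f"threshold={thresholds[idx]:.3f}  TPR={tpr[idx]:.3f}  FPR={fpr[idx]:.3f}")
```

Raising the recall target moves idx further along the curve, and the printed FPR grows accordingly.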

In the output above, the ROC curve plots the false positive rate against the true positive rate for all possible thresholds; the red circle highlights the chosen threshold (at 43.68% recall).

One way to compare classifiers is to measure the area under the curve (AUC). A perfect classifier will have a ROC AUC equal to 1, whereas a purely random classifier will have a ROC AUC equal to 0.5. Scikit-Learn provides a function to compute the ROC AUC: 

from sklearn.metrics import roc_auc_score

roc_auc_score(y_train_5, y_scores)


Since the ROC curve is so similar to the precision/recall (PR) curve, you may wonder how to decide which one to use. As a rule of thumb, you should prefer the PR curve whenever the positive class is rare or when you care more about the false positives than the false negatives. Otherwise, use the ROC curve.
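To see why this rule of thumb matters, compare the two summary metrics on data with a rare positive class. The sketch below uses synthetic scores (an assumption, not the article's data); with only about 1% positives, the ROC AUC looks flattering while the PR-based average precision is noticeably lower:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(42)
# Heavily imbalanced labels: roughly 1% positives
y = (rng.random(10_000) < 0.01).astype(int)
# A mediocre scorer: positives get a boost, but overlap with negatives
scores = y * rng.random(10_000) + 0.9 * rng.random(10_000)

roc = roc_auc_score(y, scores)
ap = average_precision_score(y, scores)  # area under the PR curve
print("ROC AUC:", roc)
print("Average precision (PR):", ap)  # typically much lower when positives are rare
```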

Using The ROC Curve in Classification

Let’s now train a RandomForestClassifier and compare its ROC curve and ROC AUC score to those of the SGDClassifier. First, you need to get scores for each instance in the training set:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
                                    method="predict_proba")

The roc_curve() function expects labels and scores, but instead of scores, you can give it class probabilities. Let’s use the positive class’s probability as the score:

y_scores_forest = y_probas_forest[:, 1]  # score = probability of the positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)

Now you are ready to plot the ROC curve. It is useful to plot the first ROC curve as well, so you can see how the two compare:

plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, "Random Forest")
plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], "r:")
plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], "r:")
plt.plot([4.837e-3], [0.4368], "ro")
plt.plot([4.837e-3, 4.837e-3], [0., 0.9487], "r:")
plt.plot([4.837e-3], [0.9487], "ro")
plt.grid(True)
plt.legend(loc="lower right", fontsize=16)
plt.show()  # the original notebook saves the figure with a save_fig() helper instead

As you can see in the output above, the RandomForestClassifier’s curve looks much better than the SGDClassifier’s: it comes much closer to the top-left corner. As a result, its ROC AUC score is also significantly better:

roc_auc_score(y_train_5, y_scores_forest)


I hope you liked this article. Feel free to ask your questions in the comments section below. You can also follow me on Medium to read more articles like this one.

Aman Kharwal

I am a programmer from India, and I am here to guide you with Data Science, Machine Learning, Python, and C++ for free. I hope you will learn a lot in your journey towards Coding, Machine Learning and Artificial Intelligence with me.
