Classification with Neural Networks using Python

In machine learning, classification means assigning data points to known classes based on their features. For example, separating the most profitable customers from uninterested customers in a dataset when advertising a particular product. You may have already trained a classification model with a classical machine learning algorithm; here is an example. But if you want to learn how to train a classification model with neural networks, this article is for you. In this article, I will take you through the task of classification with neural networks using Python.

Classification with Neural Networks using Python

Classification is the task of assigning data points to known classes based on their features. In many classification problems, classical machine learning algorithms will do the job, but for classifying a large dataset of images, you will typically need a neural network. If you have never trained a neural network and want to learn how neural networks work, you can learn everything about neural networks from here.

Now let’s come back to classification with neural networks. In this section, I will take you through the task of image classification with neural networks using Python. Here, I will be using the famous Fashion MNIST dataset, which contains 70,000 images of clothing items. Our task is to train an image classification model on this data with a neural network.

I will start this task by importing the necessary Python libraries and the dataset:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
fashion = keras.datasets.fashion_mnist
(xtrain, ytrain), (xtest, ytest) = fashion.load_data()
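Before plotting anything, it can be useful to confirm what the raw data looks like. Fashion MNIST stores 8-bit grayscale pixels and ten integer labels, which you can verify with a quick check like this (a small sketch using the arrays loaded above; you should see labels 0 to 9 and pixel values from 0 to 255):

print("Classes :", np.unique(ytrain))
print("Pixel range :", xtrain.min(), "to", xtrain.max())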

Before moving forward, let’s have a quick look at one of the samples of the images from the dataset:

imgIndex = 9
image = xtrain[imgIndex]
print("Image Label :", ytrain[imgIndex])
plt.imshow(image)
plt.show()
Image Label : 5
Figure: a sample image from the Fashion MNIST dataset
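The label printed above is just an integer from 0 to 9. If you prefer a human-readable name, you can map it with the standard Fashion MNIST class names (a minimal sketch; the class_names list is not part of the dataset object, so we define it by hand in the usual order):

class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print("Image Label :", ytrain[imgIndex], "->", class_names[ytrain[imgIndex]])

For the sample shown above, label 5 corresponds to "Sandal".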

Now let’s have a look at the shape of both the training and test data:

print(xtrain.shape)
print(xtest.shape)
(60000, 28, 28)
(10000, 28, 28)

Building a Neural Network Architecture

Now I will build a neural network architecture with two hidden layers:

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
print(model.summary())
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 flatten_3 (Flatten)         (None, 784)               0         
                                                                 
 dense_6 (Dense)             (None, 300)               235500    
                                                                 
 dense_7 (Dense)             (None, 100)               30100     
                                                                 
 dense_8 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 266,610
Trainable params: 266,610
Non-trainable params: 0
_________________________________________________________________
None
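The parameter counts in this summary are simply the weights plus the biases of each Dense layer, which you can verify with a little arithmetic:

# weights (inputs x units) plus biases (units) for each Dense layer
print(784 * 300 + 300)  # 235500 parameters in the first hidden layer
print(300 * 100 + 100)  # 30100 parameters in the second hidden layer
print(100 * 10 + 10)    # 1010 parameters in the output layer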

Before training the model, I will split the training data into training and validation sets and scale the pixel values from the 0–255 range down to the 0–1 range:

xvalid, xtrain = xtrain[:5000]/255.0, xtrain[5000:]/255.0
yvalid, ytrain = ytrain[:5000], ytrain[5000:]

Training a Classification Model with Neural Networks

Now here’s how we can train a neural network for the task of image classification:

model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
history = model.fit(xtrain, ytrain, epochs=30, 
                    validation_data=(xvalid, yvalid))
Epoch 1/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0979 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 2/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0985 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 3/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0992 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 4/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0996 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 5/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0990 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 6/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1009 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 7/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0978 - val_loss: 2.3028 - val_accuracy: 0.0936
Epoch 8/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0993 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 9/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0975 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 10/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.1004 - val_loss: 2.3029 - val_accuracy: 0.1006
Epoch 11/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0984 - val_loss: 2.3029 - val_accuracy: 0.1006
Epoch 12/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0976 - val_loss: 2.3026 - val_accuracy: 0.1006
Epoch 13/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0996 - val_loss: 2.3027 - val_accuracy: 0.1018
Epoch 14/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.1003 - val_loss: 2.3026 - val_accuracy: 0.1006
Epoch 15/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0993 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 16/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0984 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 17/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0993 - val_loss: 2.3029 - val_accuracy: 0.1006
Epoch 18/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0991 - val_loss: 2.3029 - val_accuracy: 0.1006
Epoch 19/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0969 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 20/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0990 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 21/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0985 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 22/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0994 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 23/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.1000 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 24/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0986 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 25/30
1407/1407 [==============================] - 4s 3ms/step - loss: 2.3027 - accuracy: 0.0995 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 26/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0987 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 27/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0990 - val_loss: 2.3026 - val_accuracy: 0.1006
Epoch 28/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.0982 - val_loss: 2.3028 - val_accuracy: 0.1006
Epoch 29/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.1001 - val_loss: 2.3027 - val_accuracy: 0.1006
Epoch 30/30
1407/1407 [==============================] - 5s 3ms/step - loss: 2.3027 - accuracy: 0.1001 - val_loss: 2.3027 - val_accuracy: 0.1006
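Since fit() returns the training history, you can also plot the learning curves to see how the loss and accuracy evolve over the epochs (a minimal sketch reusing the matplotlib import from above):

plt.plot(history.history["loss"], label="loss")
plt.plot(history.history["val_loss"], label="val_loss")
plt.plot(history.history["accuracy"], label="accuracy")
plt.plot(history.history["val_accuracy"], label="val_accuracy")
plt.xlabel("Epoch")
plt.legend()
plt.show()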

Now let’s have a look at the predictions:

new = xtest[:5] / 255.0  # scale the test samples the same way as the training data
predictions = model.predict(new)
print(predictions)
[[9.9999964e-01 2.4463721e-32 0.0000000e+00 3.4137151e-09 0.0000000e+00
  0.0000000e+00 3.3486748e-07 1.0429964e-13 1.8272030e-38 1.7518649e-36]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
  0.0000000e+00 1.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 1.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
  0.0000000e+00 9.3842902e-14 8.4208933e-12 7.5564799e-34 0.0000000e+00]
 [0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
  0.0000000e+00 9.9307209e-01 6.9278893e-03 2.5936122e-36 0.0000000e+00]
 [2.3715086e-03 0.0000000e+00 2.3619191e-19 3.4704832e-26 0.0000000e+00
  0.0000000e+00 1.2520461e-01 8.7242389e-01 1.0916292e-16 0.0000000e+00]]

Here is how we can look at the predicted classes:

classes = np.argmax(predictions, axis=1)
print(classes)
[0 6 1 6 7]
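To sanity-check these predictions, you can compare them with the true labels of the same five test images (a quick sketch; ytest holds the ground-truth labels loaded earlier):

print("Predicted classes :", classes)
print("True labels :", ytest[:5])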

So this is how you can train a classification model with neural networks using Python.

Summary

Classification is the task of assigning data points to known classes based on their features. In many classification problems, classical machine learning algorithms will do the job, but for classifying a large dataset of images, you will typically need a neural network. I hope you liked this article on classification with neural networks using Python. Feel free to ask valuable questions in the comments section below.
