Data Augmentation is a technique in Deep Learning that adds value to a base dataset by generating modified copies of the samples it already contains, giving a model more varied training data without any new data collection. It is one of the most effective ways to make a limited dataset more informative.
Enriching data matters for every business, because data is often called the oil of the modern economy. Data Augmentation can be applied to almost any form of dataset, most commonly text, images, and audio.
Richer data supports deeper analysis of customers, product sales, and other areas of a business. In this article, I will walk through the most widely used form of Data Augmentation: Image Augmentation.
Data Augmentation
Here I will show you some manual image augmentation and manipulation using TensorFlow. In Deep Learning, Data Augmentation is a very common technique to improve results and reduce overfitting. To follow along, you need to install the tensorflow_docs package, which is distributed from GitHub rather than PyPI: pip install git+https://github.com/tensorflow/docs.
Now, let’s import all the libraries we need for this task:
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_docs as tfdocs
import tensorflow_docs.plots
from tensorflow.keras import layers

import PIL.Image
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl

mpl.rcParams['figure.figsize'] = (12, 5)
AUTOTUNE = tf.data.experimental.AUTOTUNE
Now, let’s go through the main data augmentation operations using a single image, and later I will apply them to a whole dataset to train a Deep Learning model. Any JPEG will work for this part; below I fetch a sample cat image from TensorFlow’s example-image bucket (swap in your own URL if you prefer). Let’s read the image and have a quick look at it.
# tf.keras.utils.get_file needs an origin URL as well as a file name;
# this one points to a TensorFlow example image.
image_path = tf.keras.utils.get_file(
    "cat.jpg",
    "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)

Now I will read and decode the image into a tensor:
image_string = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)
Now, I will create a function to visualize the comparison between the original image and the augmented image:
def visualize(original, augmented):
    fig = plt.figure()
    plt.subplot(1, 2, 1)
    plt.title('Original image')
    plt.imshow(original)
    plt.subplot(1, 2, 2)
    plt.title('Augmented image')
    plt.imshow(augmented)
Data Augmentation on a Single Image
Let’s start with flipping the image. tf.image.flip_left_right mirrors it horizontally (a vertical flip follows below):
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
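The vertical counterpart is tf.image.flip_up_down, and inside a training pipeline the random_ variants, which flip with 50 percent probability on each call, are usually what you want. A small sketch with the same image tensor:
flipped_up_down = tf.image.flip_up_down(image)
visualize(image, flipped_up_down)

# Random flips draw a new coin flip on every call:
random_flipped = tf.image.random_flip_left_right(image)
visualize(image, random_flipped)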

Now, let’s move further by converting the image to grayscale:
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()

Next, I will increase the saturation of the image, here by a factor of 3:
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)

Now, let’s change the brightness of the image by adding a delta of 0.4:
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
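For training, the random counterparts of these colour operations are usually preferable, since each call draws a fresh adjustment. A minimal sketch; the ranges below are illustrative choices, not fixed values:
random_bright = tf.image.random_brightness(image, max_delta=0.4)         # delta drawn from [-0.4, 0.4]
random_contrast = tf.image.random_contrast(image, lower=0.5, upper=1.5)  # contrast factor in [0.5, 1.5]
random_saturated = tf.image.random_saturation(image, lower=1.0, upper=3.0)
visualize(image, random_saturated)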

Now, I will rotate the image by 90 degrees:
rotated = tf.image.rot90(image)
visualize(image, rotated)
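tf.image.rot90 also takes a k argument counting quarter-turns, so the other right-angle rotations are one call away:
rotated_180 = tf.image.rot90(image, k=2)  # k=2 quarter-turns = 180 degrees
visualize(image, rotated_180)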

Now, before applying all the above operations to a dataset, let’s look at one more: cropping. central_crop keeps the given central fraction of the image:
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image,cropped)
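The central crop is deterministic and handy for inspection, but as an augmentation a random crop is more common because it samples a different region each time. A quick sketch; the 150x150 size is an arbitrary example and must not exceed the image dimensions:
random_cropped = tf.image.random_crop(image, size=[150, 150, 3])
visualize(image, random_cropped)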

Data Augmentation on a Dataset and Training a Model
Now, I will apply these data augmentation operations to a dataset, and then use the augmented dataset to train a model. I will load MNIST from TensorFlow Datasets:
dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
num_train_examples = info.splits['train'].num_examples
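As an optional sanity check, you can peek at one example to confirm the shapes and dtypes that tfds returns:
for img, lbl in train_dataset.take(1):
    print(img.shape, img.dtype, lbl.numpy())  # expected: (28, 28, 1) uint8 and an integer label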
Now, I will create a function to augment the images, and then map it over the dataset. The result is a dataset that augments the data on the fly:
def convert(image, label):
    # Cast the image to float32 and normalize it to [0, 1]
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image, label

def augment(image, label):
    image, label = convert(image, label)
    image = tf.image.resize_with_crop_or_pad(image, 34, 34)   # Pad to 34x34, i.e. 6 extra pixels
    image = tf.image.random_crop(image, size=[28, 28, 1])     # Random crop back to 28x28
    image = tf.image.random_brightness(image, max_delta=0.5)  # Random brightness
    return image, label

BATCH_SIZE = 64
# Only use a subset of the data so it's easier to overfit, for this tutorial
NUM_EXAMPLES = 2048
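Before mapping augment over the whole dataset, it helps to run it on a single example and compare against the original; a minimal sketch reusing the visualize function from earlier:
for img, lbl in train_dataset.take(1):
    aug_img, _ = augment(img, lbl)
    visualize(tf.squeeze(img), tf.squeeze(aug_img))  # squeeze drops the channel axis for imshow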
Now, I will create an augmented dataset. Note that augment is mapped after cache(), so a fresh random augmentation is drawn every time an example is revisited:
augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples // 4)
    # The augmentation is added here.
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
)
And now I will create a non-augmented dataset to compare against:
non_augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples // 4)
    # No augmentation.
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
)
Now, I will set up a validation set. The validation set stays exactly the same whether or not you use augmentation, so the comparison between the two models is fair:
validation_batches = (
    test_dataset
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(2 * BATCH_SIZE)
)
Now I will create and compile a fully connected neural network with two hidden layers:
def make_model():
    model = tf.keras.Sequential([
        layers.Flatten(input_shape=(28, 28, 1)),
        layers.Dense(4096, activation='relu'),
        layers.Dense(4096, activation='relu'),
        layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model
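To check the size of the network (optional), you can instantiate it once and print a summary:
make_model().summary()  # two Dense(4096) hidden layers, roughly 20 million parameters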
Training the Model
Now, I will first train the model without data augmentation:
model_without_aug = make_model()
no_aug_history = model_without_aug.fit(non_augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 3s 102ms/step - loss: 0.7278 - accuracy: 0.7769 - val_loss: 0.3283 - val_accuracy: 0.9029
Epoch 2/50
32/32 [==============================] - 3s 99ms/step - loss: 0.1549 - accuracy: 0.9526 - val_loss: 0.3250 - val_accuracy: 0.9104
...
Epoch 49/50
32/32 [==============================] - 3s 98ms/step - loss: 7.6395e-06 - accuracy: 1.0000 - val_loss: 0.4288 - val_accuracy: 0.9412
Epoch 50/50
32/32 [==============================] - 3s 95ms/step - loss: 7.1791e-06 - accuracy: 1.0000 - val_loss: 0.4297 - val_accuracy: 0.9412
And now I will train the model with augmentation:
model_with_aug = make_model()
aug_history = model_with_aug.fit(augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 3s 97ms/step - loss: 2.3180 - accuracy: 0.2847 - val_loss: 1.1624 - val_accuracy: 0.6775
Epoch 2/50
32/32 [==============================] - 3s 94ms/step - loss: 1.3399 - accuracy: 0.5381 - val_loss: 0.8038 - val_accuracy: 0.7815
...
Epoch 49/50
32/32 [==============================] - 3s 97ms/step - loss: 0.1680 - accuracy: 0.9443 - val_loss: 0.1384 - val_accuracy: 0.9593
Epoch 50/50
32/32 [==============================] - 3s 100ms/step - loss: 0.1736 - accuracy: 0.9414 - val_loss: 0.1370 - val_accuracy: 0.9596
So the augmented model ends up with about 96 percent accuracy on the validation set, noticeably higher than the roughly 94 percent of the non-augmented model. Let’s plot the accuracy of both models over training:
plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "accuracy")
plt.title("Accuracy")
plt.ylim([0.75,1])
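You can plot the loss the same way, which makes the overfitting discussed below easy to see:
plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric="loss")
plt.title("Loss")
plt.ylim([0, 1])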

Looking at the loss, the non-augmented model clearly overfits: its training loss collapses towards zero while its validation loss keeps climbing. The augmented model keeps the two losses close together and generalizes much better. I hope you liked this article on Data Augmentation in Deep Learning. Feel free to ask your valuable questions in the comments section below.