Scalars, vectors, matrices and tensors are the building blocks of linear algebra, the branch of mathematics behind most of the computations in machine learning. In this article, I will take you through what scalars, vectors, matrices and tensors are in machine learning.

## What is Linear Algebra?

Linear algebra is the study of vectors and linear functions. In broad terms, vectors are things you can add and linear functions are functions of vectors that respect vector addition. The goal of Linear Algebra is to teach you to organize information about vector spaces in a way that makes problems involving linear functions of many variables easy.


A solid grasp of linear algebra is important for understanding and working with many machine learning algorithms, especially when building neural networks.

## What are Scalars, Vectors, Matrices and Tensors?

Scalars, Vectors, Matrices and Tensors are the most important mathematical concepts of Linear Algebra.

**Scalars**: A scalar is just a single number, unlike most other objects studied in linear algebra, which are usually arrays of numbers. We write scalars in italics and usually give them lowercase variable names. When we introduce a scalar, we specify what kind of number it is, for example a real number or an integer.
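As a quick sketch (assuming NumPy is installed), a scalar corresponds to a 0-dimensional array: a single number with no axes at all.

```python
import numpy as np

# A scalar: a single number. NumPy represents it as a 0-dimensional array.
s = np.array(3.5)
print(s.ndim)   # number of axes: 0
print(s.shape)  # empty shape: ()
```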

**Vectors**: A vector is an array of numbers listed in order, so we can identify each individual number by its position (index) in that order. Typically, we give vectors lowercase names written in bold type, such as **x**.

The elements of a vector are identified by writing the vector's name in italics with a subscript: the first element of **x** is x1, the second element is x2, and so on. We also need to state what type of numbers the vector stores.

Simply put, we can think of vectors as identifying points in space, each element giving the coordinate along a different axis.
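To make this concrete, here is a minimal sketch (assuming NumPy): a 1D array whose elements are the coordinates of a point in 3-dimensional space. Note that NumPy indexes from 0, so the element written x1 in math notation is `x[0]` in code.

```python
import numpy as np

# A vector: a 1D array; each element is a coordinate along one axis.
x = np.array([2.0, -1.0, 3.0])
print(x[0])     # first element (x1 in math notation)
print(x.shape)  # (3,): one axis with 3 elements
```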

**Matrices**: A matrix is a 2D array of numbers, so each element is identified by two subscripts instead of just one. We usually give matrices uppercase variable names with bold characters, such as **A**.

We usually identify the elements of a matrix by writing its name in italics (not bold), with the two subscripts separated by a comma: A1,2 is the element in the first row and second column of **A**.
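The same idea in code, as a brief NumPy sketch: a matrix is a 2D array, and each element is picked out by a pair of indices (row, column), again counting from 0 rather than 1.

```python
import numpy as np

# A matrix: a 2D array with 2 rows and 3 columns.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A.shape)   # (2, 3): 2 rows, 3 columns
print(A[0, 1])   # row 0, column 1 -- the element A1,2 in math notation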

**Tensors**: In some cases, we'll need an array with more than two axes. In the general case, an array of numbers arranged on a regular grid with a varying number of axes is called a tensor. We write a tensor named "A" in a distinct typeface to set it apart from a matrix, and identify its elements with three or more subscripts, one per axis.
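A short NumPy sketch of a rank-3 tensor: a 2 × 3 × 4 grid of numbers, where each element needs three indices, one for each axis. (An RGB image is a familiar example: height × width × 3 color channels.)

```python
import numpy as np

# A rank-3 tensor: numbers arranged on a 2 x 3 x 4 grid (3 axes).
T = np.arange(24).reshape(2, 3, 4)
print(T.ndim)      # 3 axes
print(T.shape)     # (2, 3, 4)
print(T[1, 2, 3])  # one element, identified by three indices
```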

I hope you liked this article on understanding scalars, vectors, matrices and tensors in machine learning. Feel free to ask your valuable questions in the comments section below.
