# Linear Algebra for Machine Learning

Linear algebra is one of the most important topics in machine learning. In this article, I will introduce you to the basic concepts of linear algebra for machine learning using NumPy.

## Why Linear Algebra for Machine Learning?

Machine learning and deep learning models depend on data, and their performance is highly dependent on how much of it they have. So we tend to collect as much data as possible to build a robust and accurate model. Data comes in various formats, such as numbers, images, text, and sound waves, but we need to convert it to numbers before we can analyze and model it.

It is not enough to convert the data to scalars (single numbers). As the amount of data grows, operations performed scalar by scalar become inefficient. We need vectorized and matrix operations to perform calculations efficiently. This is where linear algebra comes in.
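As a sketch of the difference, here is the same dot product computed with a pure-Python loop and with NumPy's vectorized `np.dot` (the toy vectors are made up for illustration):

```python
import numpy as np

# Dot product of two vectors: a pure-Python loop vs. a vectorized NumPy call.
a = [1.0, 2.0, 3.0, 4.0]
b = [5.0, 6.0, 7.0, 8.0]

# Scalar-by-scalar loop: one multiply-add per element, all in interpreted Python.
loop_result = sum(x * y for x, y in zip(a, b))

# Vectorized: the whole computation runs in optimized compiled code inside NumPy.
vec_result = np.dot(np.array(a), np.array(b))

print(loop_result, vec_result)  # 70.0 70.0
```

Both give the same answer; the vectorized form simply pushes the loop below the Python level, which is what makes it scale to large arrays.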

## NumPy for Linear Algebra in Machine Learning

Linear algebra covers matrix multiplication, decompositions, determinants, and other square-matrix computations. Unlike some languages such as MATLAB, multiplying two two-dimensional arrays with `*` in Python is an element-by-element product, not a matrix dot product. For matrix multiplication, NumPy provides `dot`, available both as an array method and as a function in the NumPy namespace:

```python
import numpy as np

x = np.array([[1., 2., 3.], [4., 5., 6.]])
y = np.array([[6., 23.], [-1, 7], [8, 9]])
print("X: ", x)
print("Y: ", y)
```
```
X:  [[1. 2. 3.]
 [4. 5. 6.]]
Y:  [[ 6. 23.]
 [-1.  7.]
 [ 8.  9.]]
```

`x.dot(y)` computes the matrix product of `x` and `y`:

```python
x.dot(y)  # equivalently np.dot(x, y)
```
```
array([[ 28.,  64.],
       [ 67., 181.]])
```

A matrix product between a 2D array and a 1D array of appropriate size results in a 1D array:

```python
np.dot(x, np.ones(3))
```
```
array([ 6., 15.])
```

`numpy.linalg` is the standard NumPy module for linear algebra, with functions for matrix inversion, decompositions, determinants, and more:

```python
from numpy.linalg import inv, qr

X = np.random.randn(5, 5)
mat = X.T.dot(X)
inv(mat)
```
```
array([[16.2579557 , 22.47067503, -6.17716328, -1.01907543, -2.60056365],
       [22.47067503, 31.53016374, -8.60042272, -1.39021859, -3.5552882 ],
       [-6.17716328, -8.60042272,  2.8295435 ,  0.26466478,  1.0559826 ],
       [-1.01907543, -1.39021859,  0.26466478,  0.22326941,  0.16491872],
       [-2.60056365, -3.5552882 ,  1.0559826 ,  0.16491872,  0.77391684]])
```

Multiplying a matrix by its inverse gives the identity matrix, up to floating-point error:

```python
mat.dot(inv(mat))
```
```
array([[ 1.00000000e+00,  4.44696288e-15,  4.58598904e-15,
        -1.68766639e-16, -6.69620996e-16],
       [ 6.23747293e-15,  1.00000000e+00, -1.86299369e-15,
         4.13611149e-16,  1.09809792e-15],
       [ 6.61119672e-16,  2.12440627e-15,  1.00000000e+00,
        -1.92386167e-16, -1.46115511e-16],
       [-7.15526768e-15, -1.35769396e-15,  5.27292750e-16,
         1.00000000e+00, -5.20736322e-17],
       [-1.40706387e-15, -1.48669390e-15, -2.88782497e-16,
        -1.03133706e-16,  1.00000000e+00]])
```

The QR decomposition factors a matrix into an orthogonal matrix Q and an upper-triangular matrix R:

```python
q, r = qr(mat)
r
```
```
array([[-5.61233918,  3.52133006, -1.00287921, -2.82220284, -0.86079658],
       [ 0.        , -1.10385831, -3.16692525, -6.58249649,  0.53146287],
       [ 0.        ,  0.        , -1.17576023,  0.79144719,  2.50675231],
       [ 0.        ,  0.        ,  0.        , -3.96436144,  1.50681719],
       [ 0.        ,  0.        ,  0.        ,  0.        ,  0.21747225]])
```
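As a quick sanity check of the factorization, the sketch below (using a seeded random generator, an assumption added for reproducibility) verifies that Q is orthogonal and that Q times R reconstructs the input:

```python
import numpy as np
from numpy.linalg import qr

rng = np.random.default_rng(0)  # seeded generator, assumed for reproducibility
X = rng.standard_normal((5, 5))
mat = X.T @ X
q, r = qr(mat)

# Q is orthogonal: Q^T Q is the identity matrix
print(np.allclose(q.T @ q, np.eye(5)))  # True
# Q @ R reconstructs the original matrix
print(np.allclose(q @ r, mat))          # True
```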

For a long time, the scientific Python community hoped for a matrix multiplication infix operator as a syntactically nicer alternative to np.dot. Since Python 3.5 (PEP 465), the `@` operator fills that role, and NumPy supports it.
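For example, with the arrays defined earlier, `@` gives the same result as `dot` (a minimal sketch):

```python
import numpy as np

x = np.array([[1., 2., 3.], [4., 5., 6.]])
y = np.array([[6., 23.], [-1., 7.], [8., 9.]])

# The @ infix operator (Python 3.5+) is equivalent to np.dot for 2D arrays
print(np.allclose(x @ y, x.dot(y)))  # True
```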

## Other NumPy linalg Functions for Practice

- `diag` – Return the diagonal elements of a square matrix as a 1D array, or convert a 1D array into a square matrix with zeros off the diagonal
- `dot` – Matrix multiplication
- `trace` – Compute the sum of the diagonal elements
- `det` – Compute the matrix determinant
- `eig` – Compute the eigenvalues and eigenvectors of a square matrix
- `inv` – Compute the inverse of a square matrix
- `pinv` – Compute the Moore-Penrose pseudo-inverse of a matrix
- `qr` – Compute the QR decomposition
- `svd` – Compute the singular value decomposition (SVD)
- `solve` – Solve the linear system Ax = b for x, where A is a square matrix
- `lstsq` – Compute the least-squares solution to y = Xb
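Several of these can be exercised together on a small matrix; the matrix and vector below are made up purely for illustration:

```python
import numpy as np
from numpy.linalg import det, eig, solve

A = np.array([[4., 2.], [1., 3.]])  # illustrative 2x2 matrix
b = np.array([1., 2.])              # illustrative right-hand side

# det: scalar determinant of a square matrix (4*3 - 2*1 = 10)
print(np.isclose(det(A), 10.0))  # True

# solve: find x such that A @ x = b
x = solve(A, b)
print(np.allclose(A @ x, b))     # True

# eig: A @ v = lambda * v holds for each eigenpair
vals, vecs = eig(A)
print(np.allclose(A @ vecs, vecs * vals))  # True

# trace: sum of the diagonal elements (4 + 3 = 7)
print(np.trace(A))  # 7.0
```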


I hope you liked this article on Linear Algebra for Machine Learning. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to learn every topic of Machine Learning.