# Tag Performance Evaluation

## Performance Evaluation Metrics in Machine Learning

In machine learning, performance evaluation metrics are used to measure the performance of your trained machine learning models. This helps you understand how well your model can perform on a dataset that it has never seen before.…

## F-Beta Score in Machine Learning

The F-beta score is the weighted harmonic mean of precision and recall. It is used as a performance evaluation measure for classification-based machine learning models. If you’ve never used this performance measurement metric before to evaluate your classification models, this article…
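As a quick illustration of the definition above, the weighted harmonic mean can be sketched in plain Python (a minimal sketch; the function name `fbeta` is mine, not from the article):

```python
def fbeta(precision, recall, beta):
    # Weighted harmonic mean of precision and recall.
    # beta > 1 weights recall more heavily, beta < 1 weights precision more;
    # beta = 1 recovers the ordinary F1 score.
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, with precision 0.5 and recall 1.0, `fbeta(0.5, 1.0, 1)` gives the F1 score of about 0.667, while `beta = 2` yields a higher value because it favours the (better) recall.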

## Classification Report in Machine Learning

A classification report is a performance evaluation metric in machine learning. It is used to show the precision, recall, F1 Score, and support of your trained classification model. If you have never used it before to evaluate the performance of…
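The per-class numbers a classification report shows can be computed by hand; here is a minimal sketch in plain Python (the function name `classification_report_dict` is mine, and a library such as scikit-learn provides a ready-made version of this report):

```python
def classification_report_dict(y_true, y_pred):
    # For each class: precision, recall, F1, and support
    # (support = number of true samples of that class).
    labels = sorted(set(y_true) | set(y_pred))
    report = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        report[c] = {"precision": precision, "recall": recall,
                     "f1": f1, "support": sum(1 for t in y_true if t == c)}
    return report
```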

## Calculation of Accuracy using Python

In machine learning, accuracy is one of the most important performance evaluation metrics for a classification model. The mathematical formula for calculating the accuracy of a machine learning model is 1 – (Number of misclassified samples / Total number of…
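The formula above translates directly into a few lines of Python (a minimal sketch; the function name is mine):

```python
def accuracy(y_true, y_pred):
    # Accuracy = 1 - (misclassified samples / total samples),
    # which is the same as (correct samples / total samples).
    misclassified = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return 1 - misclassified / len(y_true)
```

For instance, with one wrong prediction out of four, `accuracy([1, 0, 1, 1], [1, 0, 0, 1])` is 0.75.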

## Explained Variance in Machine Learning

In machine learning, the residuals are the differences between the actual samples of the dataset and the predictions made by the model, and explained variance measures how much of the target's variance the model accounts for. When working on a regression-based machine learning problem, it is very useful to know how much of the variance…
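One common definition is `1 - Var(residuals) / Var(y_true)`; a minimal sketch in plain Python (function names are mine):

```python
def explained_variance(y_true, y_pred):
    # Explained variance = 1 - Var(y_true - y_pred) / Var(y_true).
    residuals = [t - p for t, p in zip(y_true, y_pred)]

    def var(xs):
        # Population variance.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return 1 - var(residuals) / var(y_true)
```

Note that, unlike the R2 score, this measure ignores a constant bias: predictions that are all off by the same amount still score 1.0, because their residuals have zero variance.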

## R2 Score in Machine Learning

The R2 score is one of the performance evaluation measures for regression-based machine learning models. It is also known as the coefficient of determination. If you want to learn how to evaluate the performance of a machine learning model using…
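The coefficient of determination is defined as `1 - SS_res / SS_tot`; here is a minimal sketch in plain Python (the function name mirrors the metric, not any particular library):

```python
def r2_score(y_true, y_pred):
    # R2 = 1 - (residual sum of squares / total sum of squares).
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

A perfect model scores 1.0; a model that always predicts the mean of the targets scores 0.0, and worse models can score below zero.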

## Mean Squared Error in Machine Learning

In machine learning, the mean squared error (MSE) is used to evaluate the performance of a regression model. Its square root, the RMSE, expresses the same error on the scale of the target variable, and both scores are widely used to evaluate…
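Both scores are straightforward to compute; a minimal sketch in plain Python (function names are mine):

```python
import math

def mse(y_true, y_pred):
    # Mean of the squared differences between targets and predictions.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Square root of the MSE, on the same scale as the target variable.
    return math.sqrt(mse(y_true, y_pred))
```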

## Confusion Matrix in Machine Learning

Introduction to Confusion Matrix in Machine Learning

## ROC and AUC in Machine Learning

The ROC curve and its AUC (area under the curve) are used to measure the performance of a binary classification model. In this article, I will explain to you what the ROC curve and AUC are in machine learning. ROC and AUC…
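One helpful way to think about AUC: it equals the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. A minimal sketch in plain Python (the function name is mine; this pairwise method is O(n²) and meant for illustration, not large datasets):

```python
def auc(y_true, scores):
    # AUC = fraction of (positive, negative) pairs in which the positive
    # sample is scored higher than the negative one (ties count as half).
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0, and random scores give about 0.5.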

## Bias and Variance in Machine Learning

Introduction to Bias and Variance in Machine Learning

## Cross-Validation in Machine Learning

In cross-validation, we run our modeling process on different subsets of the data to get several measures of model quality. For example, we could start by dividing the data into 5 parts, each 20% of the full…
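The 5-part split described above can be sketched in plain Python (a minimal sketch; the function name is mine, and libraries such as scikit-learn provide ready-made fold generators):

```python
def kfold_indices(n, k=5):
    # Split indices 0..n-1 into k contiguous folds; each fold serves once
    # as the validation set while the remaining indices form the training set.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        folds.append((train, val))
        start += size
    return folds
```

With `n = 10` and `k = 5`, each validation fold holds 2 samples (20% of the data), and together the folds cover every sample exactly once.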

## Evaluate a Machine Learning Model

After training a Machine Learning model, the next step is often to measure its performance. Data Scientists and other Machine Learning experts spend a large part of their time evaluating a Machine…

## ROC Curve in Machine Learning

The Receiver Operating Characteristic (ROC) curve is a popular tool used with binary classifiers. It is very similar to the precision/recall curve, but instead of plotting precision versus recall, the ROC curve plots the true positive rate (another name for…
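The points of a ROC curve come from sweeping the decision threshold over the model's scores; a minimal sketch in plain Python (the function name is mine):

```python
def roc_points(y_true, scores):
    # Sweep the decision threshold over the observed scores (highest first)
    # and record (false positive rate, true positive rate) at each threshold.
    P = sum(y_true)
    N = len(y_true) - P
    points = []
    for thresh in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= thresh)
        fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= thresh)
        points.append((fp / N, tp / P))
    return points
```

Plotting these (FPR, TPR) pairs, plus the origin, traces the ROC curve from the strictest threshold to the most permissive one.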

## Precision and Recall

In Machine Learning, Precision and Recall are two of the most important metrics for Model Evaluation. Precision represents the percentage of your model's positive predictions that are actually relevant. Recall represents the percentage of the total pertinent…
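For a binary classifier, both metrics follow directly from the counts of true positives, false positives, and false negatives; a minimal sketch in plain Python (the function name is mine):

```python
def precision_recall(y_true, y_pred):
    # Precision = TP / (TP + FP): fraction of positive predictions that are right.
    # Recall    = TP / (TP + FN): fraction of actual positives that are found.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall
```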