After we have performed data cleaning, pre-processing, and wrangling, we typically feed the data into a model and obtain output probabilities or class predictions. But how do we know how effective the model actually is? The better we can measure performance, the better we can improve it, which is our ultimate goal. This is where the confusion matrix becomes relevant: it serves as a performance measurement for machine learning classification.

This blog aims to address the following questions:

- What is a confusion matrix?
- When to use precision and when to use recall?

**1. What is a Confusion Matrix? Why do you need it?**

A confusion matrix is a table used to evaluate the performance of a machine learning model on binary or multiclass classification problems. The name stems from the fact that **it makes it easy to see whether the system is confusing two classes.** It summarises the actual and predicted classifications made by the model and presents them in a matrix format. The matrix is made up of four components: true positives, false positives, true negatives, and false negatives.

- True positives (TP) are instances where the model correctly predicts the positive class.
- False positives (FP) are instances where the model incorrectly predicts the positive class. FP is also known as a Type I error.
- True negatives (TN) are instances where the model correctly predicts the negative class.
- False negatives (FN) are instances where the model incorrectly predicts the negative class. FN is also known as a Type II error.
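The four components above can be tallied directly from a list of actual and predicted labels. Here is a minimal sketch in plain Python, using made-up example labels (`y_true`, `y_pred` are my own illustrative data, with 1 as the positive class):

```python
def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# Hypothetical labels: 1 = positive class, 0 = negative class
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

print(confusion_counts(y_true, y_pred))  # (3, 1, 3, 1)
```

In practice you would typically use a library helper (e.g. scikit-learn's `confusion_matrix`), but counting by hand makes the four definitions concrete.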

Let me spice things up a bit by using a meme generator to help you grasp this concept better.

In this case, we want to detect the memes that are actually dogs. A **Type I error (FP)** is a cat meme that gets classified as a dog, whereas a **Type II error (FN)** is a dog meme that the model fails to recognise.

**2. When to use Precision and when to use Recall**

Before answering this question, we need to know what precision and recall are. Precision and recall are both metrics used to evaluate the performance of a machine learning model, most commonly for binary (i.e. two-class) classification problems.

**Precision**: the proportion of true positives (correctly predicted positives) out of all positive predictions made by the model: Precision = TP / (TP + FP). In other words, precision measures the accuracy of positive predictions.

**Recall**: the proportion of true positives out of all actual positive instances in the dataset: Recall = TP / (TP + FN). Recall measures the completeness, or sensitivity, of the model in detecting positive instances.

Now let’s assign some numbers to the confusion matrix and do some math:
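As a worked example, here is a short sketch using hypothetical counts of my own choosing (TP = 80, FP = 20, FN = 40, TN = 60), plugged into the formulas above:

```python
# Hypothetical confusion-matrix counts (illustrative only)
tp, fp, fn, tn = 80, 20, 40, 60

# Precision: of the 100 positive predictions, how many were right?
precision = tp / (tp + fp)   # 80 / 100 = 0.8

# Recall: of the 120 actual positives, how many did we catch?
recall = tp / (tp + fn)      # 80 / 120 ≈ 0.667

print(f"precision = {precision:.3f}")  # precision = 0.800
print(f"recall    = {recall:.3f}")     # recall    = 0.667
```

Note that the two denominators differ: precision divides by the model's predictions, recall divides by the ground truth.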

In general, high precision and low recall indicate that the model makes very few positive predictions, but those that it makes are highly accurate. Conversely, high recall and low precision indicate that the model makes many positive predictions, but only a subset of them are accurate.

Which metric to use depends on the specific problem and the relative costs of the two types of errors. In medical diagnosis or fraud detection, high recall is often more important than high precision, as missing a positive case (a false negative) can have severe consequences. Conversely, when false positives are costly, such as a spam filter that flags legitimate email, high precision matters more.

In future blogs I will explain these concepts further with more examples, so stay tuned!