True/False Positives and Negatives

A binary classifier can be viewed as classifying instances as positive or negative: a positive instance is one classified as a member of the class the classifier is trying to identify (for example, a photo with a cat in it), and a negative instance is one classified as not being a member of that class.

The basis of precision, recall, and F1-Score comes from the concepts of True Positive, True Negative, False Positive, and False Negative. The following table illustrates these (consider value 1 to be a positive prediction):

Examples of True/False Positive and Negative
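
To make the 1/0 encoding concrete, here is a minimal sketch in Python (the outcome function is a hypothetical helper written for this illustration, not part of any library) that names the outcome of a single prediction/actual pair:

    def outcome(prediction: int, actual: int) -> str:
        """Name the outcome of one binary prediction (1 = Positive, 0 = Negative)."""
        if prediction == 1 and actual == 1:
            return "True Positive (TP)"   # predicted Positive, and it was Positive
        if prediction == 0 and actual == 0:
            return "True Negative (TN)"   # predicted Negative, and it was Negative
        if prediction == 1 and actual == 0:
            return "False Positive (FP)"  # predicted Positive, but it was Negative
        return "False Negative (FN)"      # predicted Negative, but it was Positive

    print(outcome(1, 1))  # True Positive (TP)
    print(outcome(1, 0))  # False Positive (FP)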

True Positive (TP)

The following table shows 3 examples of a True Positive (TP). The first row is a generic example, where 1 represents the Positive prediction. The following two rows are examples with labels. Internally, the algorithms would use the 1/0 representation, but I used labels here for a more intuitive understanding.

Examples of True Positive (TP) relations.

False Positive (FP)

These False Positive (FP) examples illustrate wrong predictions: predicting Positive for samples that are actually Negative. Such a failed prediction is called a False Positive.

True Negative (TN)

For the True Negative (TN) examples, the cat classifier correctly identifies a photo as not having a cat in it, and the medical classifier correctly identifies an image as showing a patient with no cancer. In both cases the prediction is Negative and correct (True).

False Negative (FN)

In the False Negative (FN) case, the classifier predicts a Negative result while the actual result is Positive, for example predicting no cat when there is a cat in the photo. The prediction is Negative and wrong (False), and thus a False Negative.
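
With all four outcomes covered, a short sketch shows how they are counted over a batch of predictions. The cat/no-cat labels below are invented in the spirit of the examples above, with "cat" as the Positive class:

    # Hypothetical predictions and actual labels; "cat" is the Positive class.
    predictions = ["cat", "cat", "no cat", "no cat", "cat", "no cat"]
    actuals     = ["cat", "no cat", "no cat", "cat", "cat", "no cat"]

    tp = fp = tn = fn = 0
    for pred, act in zip(predictions, actuals):
        if pred == "cat" and act == "cat":
            tp += 1  # predicted cat, and the photo has a cat
        elif pred == "cat":
            fp += 1  # predicted cat, but the photo has no cat
        elif act == "no cat":
            tn += 1  # predicted no cat, and the photo has no cat
        else:
            fn += 1  # predicted no cat, but the photo has a cat

    print(f"TP={tp} FP={fp} TN={tn} FN={fn}")  # TP=2 FP=1 TN=2 FN=1

These four counts are exactly the values that precision, recall, and F1-Score are built from.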

Confusion Matrix

A confusion matrix is sometimes used to illustrate classifier performance based on the above four values (TP, FP, TN, FN). The actual classes are plotted against the predicted classes, with each cell counting one of the four outcomes:

Using the cancer prediction example, a confusion matrix for 100 patients might look something like this:
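
As a sketch of how such a matrix can be computed, scikit-learn's confusion_matrix accepts the actual and predicted label vectors directly. The vectors below are invented purely for illustration (10 actual cancer cases out of 100 patients is an assumption, not a figure from this article):

    from sklearn.metrics import confusion_matrix

    # 0 = no cancer (Negative), 1 = cancer (Positive); hypothetical data.
    y_true = [1] * 10 + [0] * 90                   # 10 patients actually have cancer
    y_pred = [1] * 8 + [0] * 2 + [1] * 5 + [0] * 85

    # Rows are actual classes, columns are predicted classes:
    # [[TN, FP],
    #  [FN, TP]]
    print(confusion_matrix(y_true, y_pred))
    # [[85  5]
    #  [ 2  8]]

The diagonal holds the correct predictions (TN and TP), while the off-diagonal cells hold the two kinds of errors.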