# Evaluation metrics

## Classification

### AUC (Area Under the Curve)

Summarises the ROC curve with a single number, equal to the integral of the curve.

Sometimes referred to as AUROC (Area Under the Receiver Operating Characteristic curve).

### F1-score

The F1-score is the harmonic mean of the precision and the recall:

$$F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$

Using the harmonic mean has the effect that a good F1-score requires both a good precision and a good recall.

### Precision

The probability that an example is in fact a positive, given that it was classified as one:

$$\text{precision} = \frac{TP}{TP + FP}$$

where TP is the number of true positives and FP is the number of false positives.

### Recall

The probability of classifying an example as a positive given that it is in fact a positive:

$$\text{recall} = \frac{TP}{TP + FN}$$

where TP is the number of true positives and FN is the number of false negatives.
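
Precision, recall and the F1-score can be computed directly from the confusion-matrix counts. A minimal sketch, with made-up counts:

```python
# Precision, recall and F1 from confusion-matrix counts.
# The counts below are toy values, not from a real classifier.

def precision(tp, fp):
    # P(actually positive | classified positive)
    return tp / (tp + fp)

def recall(tp, fn):
    # P(classified positive | actually positive)
    return tp / (tp + fn)

def f1(p, r):
    # Harmonic mean: only high when both precision and recall are high.
    return 2 * p * r / (p + r)

tp, fp, fn = 8, 2, 4
p, r = precision(tp, fp), recall(tp, fn)
print(p, r, f1(p, r))  # 0.8, ~0.667, ~0.727
```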

### ROC curve

Plots the true positive rate against the false positive rate for different values of the threshold in a binary classifier.

ROC stands for Receiver Operating Characteristic.

The ROC curve can be summarised with one number using the AUC.
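
A minimal sketch of tracing the curve by sweeping the decision threshold and integrating it with the trapezoidal rule; the scores and labels are made up:

```python
import numpy as np

# Sweep the decision threshold over the observed scores to trace
# the ROC curve, then integrate TPR over FPR to get the AUC.
scores = np.array([0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # toy ground truth

fpr, tpr = [], []
for t in np.r_[np.inf, np.sort(scores)[::-1]]:
    pred = scores >= t
    tpr.append(np.sum(pred & (labels == 1)) / np.sum(labels == 1))
    fpr.append(np.sum(pred & (labels == 0)) / np.sum(labels == 0))

# Trapezoidal integration of the (FPR, TPR) points.
auc = sum((fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
          for i in range(1, len(fpr)))
print(auc)  # 0.75 for this toy data
```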

## Language modelling

### Bits per character (BPC)

Used for assessing character-level language models.

Identical to the cross-entropy loss, but uses base 2 for the logarithm:

$$\text{BPC} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c \in C} y_{i,c}\log_2 \hat{y}_{i,c}$$

where $C$ is the set of character classes, $y_{i,c}$ equals 1 if example $i$ is character $c$ and 0 otherwise, and $\hat{y}_{i,c}$ is the predicted probability that example $i$ is character $c$.
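
Since $y_{i,c}$ is 1 only for the true character, the double sum collapses to a single sum over positions. A small sketch with made-up probabilities:

```python
import math

# BPC from the probabilities the model assigned to the TRUE character
# at each position (toy numbers, not from a real model).
probs = [0.5, 0.25, 0.125]

bpc = -sum(math.log2(p) for p in probs) / len(probs)
print(bpc)  # (1 + 2 + 3) / 3 = 2.0 bits per character
```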

### Perplexity

Used to measure how well a probabilistic model predicts a sample. It is equivalent to the exponential of the cross-entropy loss.
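
A minimal sketch of that equivalence, using made-up token probabilities:

```python
import math

# Perplexity as the exponential of the cross-entropy loss.
# probs[i] is the predicted probability of the true token i (toy data).
probs = [0.25, 0.25, 0.25, 0.25]

cross_entropy = -sum(math.log(p) for p in probs) / len(probs)
perplexity = math.exp(cross_entropy)
print(round(perplexity, 6))  # 4.0: as uncertain as a uniform 4-way choice
```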

## Object detection

### Intersection over Union (IoU)

An accuracy score for two bounding boxes, where one is the prediction and the other is the target. It is equal to the area of their intersection divided by the area of their union.
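
A minimal sketch for axis-aligned boxes, assuming the `(x1, y1, x2, y2)` corner convention:

```python
# IoU for two axis-aligned boxes given as (x1, y1, x2, y2),
# with (x1, y1) the top-left and (x2, y2) the bottom-right corner.

def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to 0 so disjoint boxes get zero intersection.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.143
```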

### Mean Average Precision

The main evaluation metric for object detection.

To calculate it, first define the overlap criterion, for example that the IoU between a predicted and a ground-truth bounding box be greater than 0.5. Since the ground truth is always that the class is present, each predicted box is either a true positive or a false positive, so the precision can be calculated as TP/(TP+FP).

## Ranking

### Cumulative Gain

A simple metric for ranking that does not take position into account:

$$CG_p = \sum_{i=1}^{p} rel_i$$

where $rel_i$ is the relevance of the document in position $i$.

### Discounted Cumulative Gain (DCG)

Used for ranking. Takes the position of the documents in the ranking into account:

$$DCG_p = \sum_{i=1}^{p} \frac{rel_i}{\log_2(i+1)}$$

where $rel_i$ is the relevance of the document in position $i$.

### Mean Reciprocal Rank (MRR)

$$\text{MRR} = \frac{1}{|Q|}\sum_{q \in Q} \frac{1}{\text{rank}_q}$$

where $q$ is a query taken from a set of queries $Q$ and $\text{rank}_q$ is the rank of the first document that is relevant for query $q$.

### Normalized Discounted Cumulative Gain (NDCG)

Used for ranking. Normalizes the DCG by dividing by the score that would be achieved by a perfect ranking. NDCG is always between 0 and 1:

$$NDCG_p = \frac{DCG_p}{IDCG_p}$$

where IDCG is the Ideal Discounted Cumulative Gain, the DCG that would be produced by a perfect ranking:

$$IDCG_p = \sum_{i=1}^{p} \frac{rel_i^{*}}{\log_2(i+1)}$$

where $rel_i^{*}$ denotes the relevances sorted in decreasing order.

### Precision @ k

The proportion of documents returned in the top k results which are relevant, i.e. the number of relevant documents in the top k divided by k.
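
The DCG and NDCG definitions above can be sketched in a few lines; the graded relevances below are made up:

```python
import math

# DCG for a list of graded relevances; NDCG normalizes by the
# DCG of the ideal (relevance-sorted) ordering.

def dcg(relevances):
    # The document in position i (1-indexed) is discounted by log2(i + 1).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

ranking = [3, 2, 3, 0, 1]              # relevances in returned order (toy)
ideal = sorted(ranking, reverse=True)  # perfect ordering: [3, 3, 2, 1, 0]

ndcg = dcg(ranking) / dcg(ideal)
print(round(ndcg, 3))  # ≈ 0.972: close to the ideal ordering
```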

## Regression

### RMSE

Root Mean Squared Error:

$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$$

### R-squared

A common metric for evaluating regression algorithms that is easier to interpret than the RMSE but only valid for linear models.

Intuitively, it is the proportion of the variance in the y variable that has been explained by the model. As long as the model contains an intercept term, the R-squared should be between 0 and 1:

$$R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$$

where $SS_{res} = \sum_i (y_i - \hat{y}_i)^2$, $SS_{tot} = \sum_i (y_i - \bar{y})^2$ and $\bar{y}$ is the mean of $y$.
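
A minimal sketch computing both regression metrics on a toy set of targets and predictions:

```python
# RMSE and R-squared for a toy set of targets and predictions.

y     = [3.0, 5.0, 7.0, 9.0]  # true values (made up)
y_hat = [2.5, 5.0, 7.5, 9.0]  # predictions (made up)
n = len(y)

ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_hat))
rmse = (ss_res / n) ** 0.5

y_mean = sum(y) / n
ss_tot = sum((yi - y_mean) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot  # share of variance explained

print(round(rmse, 4), r2)  # 0.3536 0.975
```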

## Translation

### BLEU

Score for assessing translation tasks. Also used for image captioning. Stands for BiLingual Evaluation Understudy.

Ranges from 0 to 1, where 1 corresponds to being identical to the reference translation. Often uses multiple reference translations.

BLEU: a Method for Automatic Evaluation of Machine Translation, Papineni et al. (2002)