# Normalization

## Batch normalization

Normalizes the input vector to a layer to have zero mean and unit variance, making training more efficient. Training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization. This phenomenon is referred to as internal covariate shift.

Adding $\beta$ to the normalized input and scaling it by $\gamma$ ensures the model does not lose representational power as a result of the normalization.

Batch Normalization is often found to improve generalization performance (Zhang et al., 2016).

**Proposed in:** Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Ioffe and Szegedy (2015)

### Training

The batch-normalized version of the inputs, $x$, to a layer is:

$$\text{BN}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$$

Where $\gamma$ and $\beta$ are learned and $\epsilon$ is a small hyperparameter that prevents division by zero. If there are multiple batch normalization layers, a separate $\gamma$ and $\beta$ will be learned for each of them.

$\mu$ and $\sigma^2$ are moving averages of the mean and variance of $x$. They do not need to be learned. The moving averages are calculated independently for each feature in $x$.
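As a concrete illustration, here is a minimal NumPy sketch of one training-mode step, including the moving-average updates tracked for inference. The function name, the 2-D `(batch, features)` input layout and the momentum value are illustrative assumptions, not from the source:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, running_mean, running_var,
                     eps=1e-5, momentum=0.9):
    # Normalize with the statistics of the current mini-batch.
    mu = x.mean(axis=0)         # per-feature batch mean
    var = x.var(axis=0)         # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta  # scale and shift restore representational power

    # The moving averages are tracked, not learned; they are used at inference.
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var
```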

Batch normalization does not work well with small batch sizes (Wu and He, 2018). Small batches cause the statistics to become inaccurate. This can cause problems when training models with large images where large batches will not fit in memory.

### Inference

Batch normalization’s stabilizing effect is helpful during training but unnecessary at inference time. Therefore, once the network is trained, the population mean and variance are used for normalization, rather than the batch mean and variance. This means the network’s output depends only on the input, not also on the other examples in the batch.
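Continuing the hypothetical sketch above, inference simply swaps the batch statistics for the stored moving averages, making each output a deterministic function of its own input:

```python
import numpy as np

def batch_norm_infer(x, gamma, beta, running_mean, running_var, eps=1e-5):
    # Population statistics replace the batch statistics, so the output
    # for one example does not depend on the rest of the batch.
    x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    return gamma * x_hat + beta
```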

### Application to RNNs

Batch normalization is difficult to apply to RNNs since it requires storing the batch statistics for every time step in the sequence. This can be problematic if a sequence input during inference is longer than those seen during training.

Cooijmans et al. (2016) propose a variant of the LSTM that applies batch normalization to the hidden-to-hidden transitions.

### Conditional batch normalization

The formula is exactly the same as normal batch normalization except $\gamma$ and $\beta$ are not learned parameters, but rather the outputs of functions of a conditioning input.

Was used to achieve state-of-the-art results on the CLEVR visual reasoning benchmark.

Learning Visual Reasoning Without Strong Priors, Perez et al. (2017)
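A minimal sketch of the idea, assuming the functions producing $\gamma$ and $\beta$ are linear maps of a conditioning vector `z` (the names and the linear form are illustrative assumptions):

```python
import numpy as np

def conditional_batch_norm(x, z, W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    # gamma and beta are outputs of functions of the conditioning input z,
    # not free parameters. Shapes: x is (batch, features), z is (batch, d).
    gamma = z @ W_gamma + b_gamma   # (batch, features)
    beta = z @ W_beta + b_beta      # (batch, features)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```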

## Group normalization

Group normalization implements the same formula as batch normalization but takes the average over the feature dimension(s) rather than the batch dimension. This means it can be used with small batch sizes, unlike batch normalization. This is useful for many computer vision applications, where memory-consuming, high-resolution images naturally restrict the batch size.

$$\text{GN}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$$

Where $\gamma$ and $\beta$ are learned and $\epsilon$ is a small hyperparameter that prevents division by zero. Here $\mu$ and $\sigma^2$ are computed over a group of channels within each example, rather than over the batch. A separate $\gamma$ and $\beta$ are learned for each group normalization layer; they make sure the model does not lose any representational power from the normalization.
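A minimal NumPy sketch over an `(N, C, H, W)` image tensor (the layout, helper name and per-channel parameter shapes are illustrative assumptions):

```python
import numpy as np

def group_norm(x, gamma, beta, num_groups, eps=1e-5):
    # Statistics are computed per example within each group of channels,
    # so they are independent of the batch size.
    n, c, h, w = x.shape
    xg = x.reshape(n, num_groups, c // num_groups, h, w)
    mu = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    x_hat = ((xg - mu) / np.sqrt(var + eps)).reshape(n, c, h, w)
    return gamma * x_hat + beta   # gamma, beta have shape (1, c, 1, 1)
```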

**Proposed in:** Group Normalization, Wu and He (2018)

## Layer normalization

Can be easily applied to RNNs, unlike batch normalization.

If the hidden state at time $t$ of an RNN is given by:

$$h_t = f(a_t + b), \qquad a_t = W_{hh} h_{t-1} + W_{xh} x_t$$

Then the layer normalized version is:

$$h_t = f\left(\frac{g}{\sigma_t} \odot (a_t - \mu_t) + b\right)$$

where $\mu_t$ and $\sigma_t$ are the mean and standard deviation of $a_t$, computed over the hidden units, and $g$ and $b$ are learned gain and bias parameters.
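A minimal sketch of one layer-normalized recurrent step, assuming $f = \tanh$ and unbatched vectors (the names and the choice of $\tanh$ are illustrative assumptions):

```python
import numpy as np

def layer_norm_rnn_step(x_t, h_prev, W_xh, W_hh, g, b, eps=1e-5):
    # Normalize the pre-activations over the hidden units at this time
    # step; no per-time-step batch statistics need to be stored.
    a_t = x_t @ W_xh + h_prev @ W_hh    # pre-activations, shape (hidden,)
    mu_t = a_t.mean()
    sigma_t = a_t.std()
    return np.tanh(g / (sigma_t + eps) * (a_t - mu_t) + b)
```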

## Weight normalization

The weights of the network are reparameterized as:

$$w = \frac{g}{\|v\|} v$$

where $g$ is a learnt scalar and $v$ is a learnt vector.

This guarantees that $\|w\| = g$ without the need for explicit normalization.
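A minimal sketch of the reparameterization, with a quick check of the norm guarantee (names are illustrative):

```python
import numpy as np

def weight_norm(v, g):
    # The direction comes from v; the norm of w equals g by construction.
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])
w = weight_norm(v, g=2.0)
assert np.isclose(np.linalg.norm(w), 2.0)   # ||w|| == g
```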

Simple to use in RNNs, unlike batch normalization.

Unlike batch normalization, weight normalization only affects the weights - it does not normalize the activations of the network.