NumPyNet

Neural Networks library in pure numpy

L1 normalization Layer

The L1 normalization layer normalizes the data along the selected axis, using the L1 norm, computed as:

![](https://latex.codecogs.com/gif.latex?\|x\|_1&space;=&space;\sum_{i=0}^{N}&space;|x_i|)

Where N is the dimension of the selected axis. The normalization is computed as:

![](https://latex.codecogs.com/gif.latex?\hat{x}&space;=&space;\frac{x}{\|x\|_1&space;+&space;\epsilon})

Where ε is a small constant (of order 10⁻⁸) used to avoid division by zero.
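The two formulas above can be checked on a toy vector with plain numpy (a minimal sketch, independent of the library):

```python
import numpy as np

# toy input: a single vector of 4 values
x = np.array([1., -2., 3., -4.])

eps = 1e-8  # small constant to avoid division by zero

# L1 norm: sum of the absolute values along the chosen axis
# (here the whole vector, i.e. axis=None)
l1_norm = np.abs(x).sum()  # 1 + 2 + 3 + 4 = 10

# normalized output
x_hat = x / (l1_norm + eps)
```

With this input, `x_hat` is (up to the ε correction) `[0.1, -0.2, 0.3, -0.4]`, and the sum of its absolute values is ~1.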

The backward, in this case, is computed as:

![](https://latex.codecogs.com/gif.latex?\delta^{l-1}&space;=&space;\delta^{l-1}&space;+&space;\delta^{l}&space;\quad\text{with}\quad&space;\delta^{l}&space;=&space;-\text{sign}(\hat{x}))

Where δ^l is this layer's delta and δ^{l-1} is the previous layer's delta, updated in place during backpropagation.
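This backward step can be sketched in plain numpy, mirroring the layer code shown further down (which accumulates the negative sign of the normalized output into the delta to be backpropagated):

```python
import numpy as np

# normalized output from a forward pass (toy values)
x_hat = np.array([0.1, -0.2, 0.3, -0.4])

# this layer's delta: -sign of the normalized output
layer_delta = -np.sign(x_hat)

# accumulate it into the previous layer's delta (here starting from zeros)
delta = np.zeros_like(x_hat)
delta += layer_delta
```

Starting from a zero `delta`, the result is simply `-sign(x_hat)`, i.e. `[-1., 1., -1., 1.]` for this toy input.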

The code below is an example of how to use the single layer:

```python
import os

import numpy as np
from PIL import Image

from NumPyNet.layers.l1norm_layer import L1Norm_layer

# these functions rescale the pixel values [0,255]->[0,1] and [0,1]->[0,255]
img_2_float = lambda im : ((im - im.min()) * (1. / (im.max() - im.min()))).astype(float)
float_2_img = lambda im : ((im - im.min()) * (255. / (im.max() - im.min()))).astype(np.uint8)

filename = os.path.join(os.path.dirname(__file__), '..', '..', 'data', 'dog.jpg')
inpt = np.asarray(Image.open(filename), dtype=float)
inpt.setflags(write=1)
inpt = img_2_float(inpt) # preparation of the image

# add batch = 1
inpt = np.expand_dims(inpt, axis=0)

# instantiate the layer
layer = L1Norm_layer(axis=None) # axis=None sums over all the values

# FORWARD

layer.forward(inpt)
forward_out = layer.output # the output has the same shape as the input

# BACKWARD

delta = np.zeros(shape=inpt.shape, dtype=float)
layer.backward(delta, copy=False)
```

For a more detailed look at what is happening, here are the definitions of the forward and backward functions:

```python
def forward(self, inpt):
  '''
  Forward of the l1norm layer: applies the L1 normalization to
  the input along the given axis
  Parameters:
    inpt: the input to be normalized
  '''
  self._out_shape = inpt.shape

  norm = np.abs(inpt).sum(axis=self.axis, keepdims=True)
  norm = 1. / (norm + 1e-8)
  self.output = inpt * norm
  self.scales = -np.sign(self.output)
  self.delta  = np.zeros(shape=self.out_shape, dtype=float)
```

The forward function is an implementation of what was stated before. The backward is:

```python
def backward(self, delta, copy=False):
  '''
  Compute the backward of the l1norm layer
  Parameters:
    delta : global error to be backpropagated
  '''

  self.delta += self.scales
  delta[:]   += self.delta
```

As for the forward, the backward is just a simple implementation of what is described above.
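As a quick self-contained sanity check (without needing the library or an image file), the forward step above can be re-implemented in a few lines of numpy and verified on a random batch; the function name `l1norm_forward` is just a label for this sketch:

```python
import numpy as np

def l1norm_forward(inpt, axis=None, eps=1e-8):
  '''Pure-numpy re-implementation of the forward step, for checking.'''
  norm = np.abs(inpt).sum(axis=axis, keepdims=True)
  return inpt / (norm + eps)

# random batch shaped like a tiny image: (batch, height, width, channels)
rng = np.random.default_rng(42)
batch = rng.standard_normal(size=(1, 4, 4, 3))

out = l1norm_forward(batch, axis=None)

# the output keeps the input shape, and with axis=None the sum of
# its absolute values is ~1 (up to the eps correction)
print(out.shape)          # (1, 4, 4, 3)
print(np.abs(out).sum())  # ~1.0
```

Choosing a specific `axis` instead of `None` would give one unit L1 norm per slice along that axis, thanks to `keepdims=True` broadcasting.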