rFBP

Replicated Focusing Belief Propagation algorithm

Replicated Focusing Belief Propagation

The Replicated Focusing Belief Propagation (rFBP) is an entropy-maximization based algorithm operating as the underlying learning rule of feed-forward binary neural networks. Here, the entropy is defined as:

S(\vec{w}, \beta, \gamma) = \frac{1}{N} \log{ \sum_{\vec{w}'} e^{-\beta E(\vec{w}')} \, e^{\gamma \, \vec{w}' \cdot \vec{w}} } ,

where \vec{w} is the whole weights set, N is its size and e^{-\beta E(\vec{w}')} is the standard Boltzmann term. Further, E(\vec{w}') is the energy of such weights set, which is equal to the number of wrong predictions on the training set produced by \vec{w}'. When \beta \to \infty, only those \vec{w}' with null energy, i.e. perfect solutions, contribute to the entropy. At present, the rFBP only works with \beta \to \infty.
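As a concrete (if brute-force) illustration of this definition, the snippet below enumerates every binary weights set of a toy perceptron and sums the corresponding Boltzmann terms. The dataset, the sign-activation energy and all names here are illustrative assumptions, not part of the rFBP library:

```python
import itertools
import math

def energy(w, patterns, labels):
    # E(w): number of wrong predictions of a sign-activation perceptron
    return sum(1 for x, y in zip(patterns, labels)
               if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) != y)

def local_entropy(w_ref, patterns, labels, beta, gamma):
    # S(w, beta, gamma) = (1/N) log sum_{w'} exp(-beta * E(w') + gamma * w'.w)
    n = len(w_ref)
    z = 0.0
    for w in itertools.product((-1, 1), repeat=n):
        overlap = sum(a * b for a, b in zip(w, w_ref))
        z += math.exp(-beta * energy(w, patterns, labels) + gamma * overlap)
    return math.log(z) / n

# tiny illustrative training set: 3 patterns of size N = 5 with binary labels
patterns = [(1, -1, 1, 1, -1), (-1, -1, 1, -1, 1), (1, 1, -1, 1, 1)]
labels = [1, -1, 1]
w_ref = (1, -1, 1, 1, -1)

print(local_entropy(w_ref, patterns, labels, beta=10.0, gamma=0.5))
```

With a large \beta (numerically standing in for \beta \to \infty) only the zero-energy configurations survive in the sum, so S essentially measures how many perfect solutions lie close to \vec{w}.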

The realization of the rFBP is equivalent to evolving a spin-glass system composed of several interacting replicas with the Belief Propagation algorithm. The density distribution of the system is modelled by:

P(\vec{w} \,;\, \beta, y, \gamma) = \frac{e^{\, y N S(\vec{w}, \beta, \gamma)}}{Z(\beta, y, \gamma)} ,

where Z is the partition function.

Such a spin-glass model necessarily depends on two parameters: y and \gamma. The former is a temperature-like variable, similar to the one usually exploited by Gradient Descent approaches, but it can also be interpreted as the number of interacting replicas of the system. The latter is the penalization term associated with the distance between two weights sets. Indeed, the term e^{\gamma \, \vec{w}_a \cdot \vec{w}_b} in the entropy grows larger as \vec{w}_a and \vec{w}_b get closer.

The Belief Propagation algorithm needs to be adjusted by adding an extra incoming message for each weight, in order to involve the interacting replicas of the system. This extra term is represented by:

\hat{m}^{\, t+1}_{w^\star_i \to w_i} = \tanh\left[ (y - 1) \,\mathrm{arctanh}\left( m^{\, t}_{w_i \to w^\star_i} \tanh{\gamma} \right) \right] \tanh{\gamma} ,

where w_i and w^\star_i stand respectively for the i-th weight and a representation of all the i-th replicas.
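Written out as code, this update is a one-liner. The helper below is only a sketch of the formula above; its name and signature are assumptions, not the library's API:

```python
import math

def replica_message(m_to_star, y, gamma):
    """Extra message sent back to weight w_i by its y interacting replicas.

    Implements m_hat = tanh[(y - 1) * arctanh(m * tanh(gamma))] * tanh(gamma),
    where m is the message from w_i toward the replica representation w*_i.
    """
    t = math.tanh(gamma)
    return math.tanh((y - 1) * math.atanh(m_to_star * t)) * t

print(replica_message(0.3, y=7, gamma=0.8))
```

Note that \gamma = 0 switches the interaction off (the message vanishes), while for \gamma \to \infty we have \tanh{\gamma} \to 1 and the replicas are pushed toward full agreement.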

The rFBP is therefore an adjusted Belief Propagation algorithm, whose general procedure can be summarized as follows:
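A hedged pseudocode sketch of that procedure, following the focusing protocol of [Baldassi101103] (all names below are placeholders, not the actual rFBP API):

```
initialize all BP messages at random
for gamma, y in focusing_protocol:          # progressively strengthen the interaction
    repeat until convergence (or a maximum number of sweeps):
        for each weight w_i:
            compute the extra replica message m_hat(w*_i -> w_i)
        run one sweep of standard BP message passing, including m_hat
    w <- sign of the current weight magnetizations
    if E(w) == 0:
        return w                            # a perfect solution has been focused
return w                                    # best configuration found
```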

The rFBP algorithm focuses the replicated system to fall, step by step, into weights sets extremely close to many perfect solutions (i.e. \vec{w}' such that E(\vec{w}') = 0), which enables them to generalize well outside the training set [Baldassi101103].