Multi-Layer Perceptron
An MLP is a feed-forward artificial neural network composed of a
number of separate (hidden) layers. Information is passed from one
layer to the next, with each neuron applying a given activation
function to its weighted inputs. The MLP is trained by
back-propagation.
More information is available on Wikipedia.
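The forward pass described above can be sketched in plain Python. This is a hypothetical minimal implementation, not the tool's actual code; the layer representation (per-neuron weight rows plus biases) and the alpha/beta defaults are assumptions:

```python
import math

def sigmoid(x, alpha=1.0, beta=1.0):
    # Symmetric sigmoid: beta*(1 - exp(-alpha*x)) / (1 + exp(-alpha*x))
    e = math.exp(-alpha * x)
    return beta * (1.0 - e) / (1.0 + e)

def forward(x, layers):
    # layers: list of (weights, biases) pairs, one per layer;
    # weights is a list of per-neuron weight rows.
    for weights, biases in layers:
        x = [sigmoid(sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# One hidden neuron, one output neuron, toy weights:
out = forward([0.5], [([[2.0]], [0.0]), ([[1.0]], [0.0])])
```

Back-propagation would then adjust each weight by the gradient of the output error, propagated layer by layer in the reverse direction; that step is omitted here for brevity.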
Parameters:
- # Neurons: number of neurons per hidden layer; input and output nodes are not counted
- # Layers: number of hidden layers
- Activation function: output function for all neurons in the network
  - sigmoid: beta*(1 - exp(-alpha*x)) / (1 + exp(-alpha*x))
  - gaussian: beta*exp(-alpha*x*x)
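The two activation functions can be written directly from the formulas above. A minimal sketch; the default alpha and beta values are assumptions for illustration, not defaults of this tool:

```python
import math

def sigmoid(x, alpha=1.0, beta=1.0):
    # beta*(1 - exp(-alpha*x)) / (1 + exp(-alpha*x)): odd function,
    # saturating at -beta and +beta.
    e = math.exp(-alpha * x)
    return beta * (1.0 - e) / (1.0 + e)

def gaussian(x, alpha=1.0, beta=1.0):
    # beta*exp(-alpha*x*x): bell curve peaking at beta when x = 0.
    return beta * math.exp(-alpha * x * x)
```

Note the sigmoid here is the symmetric variant, ranging over (-beta, beta) rather than (0, 1); alpha controls the steepness of both curves.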