OpenANN
1.1.0
An open source library for artificial neural networks.
OpenANN::SigmaPi Class Reference

Fully connected higher-order layer. More...

#include <SigmaPi.h>
Classes
struct Constraint
    A helper class for specifying weight constraints in a higher-order neural network. Derive a new class from this interface and simply reimplement the function call operator for the corresponding higher-order term. More...
struct HigherOrderUnit
Public Member Functions
SigmaPi (OutputInfo info, bool bias, ActivationFunction act, double stdDev)
    Construct a SigmaPi layer that can be extended with different higher-order nodes. More...
virtual OutputInfo initialize (std::vector< double * > &parameterPointers, std::vector< double * > &parameterDerivativePointers)
    See OpenANN::Layer::initialize(std::vector<double*>& pParameter, std::vector<double*>& pDerivative). More...
virtual SigmaPi & secondOrderNodes (int numbers)
    Add a specific number of second-order nodes to this layer. More...
virtual SigmaPi & thirdOrderNodes (int numbers)
    Add a specific number of third-order nodes to this layer. More...
virtual SigmaPi & fourthOrderNodes (int numbers)
    Add a specific number of fourth-order nodes to this layer. More...
virtual SigmaPi & secondOrderNodes (int numbers, const Constraint &constrain)
    Add a specific number of second-order nodes that use the same weight-sharing topology. More...
virtual SigmaPi & thirdOrderNodes (int numbers, const Constraint &constrain)
    Add a specific number of third-order nodes that use the same weight-sharing topology. More...
virtual SigmaPi & fourthOrderNodes (int numbers, const Constraint &constrain)
    Add a specific number of fourth-order nodes that use the same weight-sharing topology. More...
virtual size_t nodenumber () const
virtual size_t parameter () const
virtual void initializeParameters ()
    Initialize the parameters. More...
virtual void updatedParameters ()
    Generate internal parameters from externally visible parameters. More...
virtual void forwardPropagate (Eigen::MatrixXd *x, Eigen::MatrixXd *&y, bool dropout=false, double *error=0)
    Forward propagation in this layer. More...
virtual void backpropagate (Eigen::MatrixXd *ein, Eigen::MatrixXd *&eout, bool backpropToPrevious)
    Backpropagation in this layer. More...
virtual Eigen::MatrixXd & getOutput ()
    Output after last forward propagation. More...
virtual Eigen::VectorXd getParameters ()
    Get the current values of parameters (weights, biases, ...). More...

Public Member Functions inherited from OpenANN::Layer
virtual ~Layer ()
Protected Types
typedef std::vector< HigherOrderUnit > HigherOrderNeuron
Protected Attributes
OutputInfo info
bool bias
ActivationFunction act
double stdDev
Eigen::MatrixXd x
Eigen::MatrixXd a
Eigen::MatrixXd y
Eigen::MatrixXd yd
Eigen::MatrixXd deltas
Eigen::MatrixXd e
std::vector< double > w
std::vector< double > wd
std::vector< HigherOrderNeuron > nodes
Detailed Description

Fully connected higher-order layer.

For encoding invariances into the topology of the neural network, you can specify a weight constraint for a given higher-order node.

[1] Max B. Reid, Lilly Spirkovska and Ellen Ochoa: Rapid training of higher-order neural networks for invariant pattern recognition. Proc. IJCNN Int. Conf. on Neural Networks, Vol. 1, pp. 689-692, 1989.
[2] C. L. Giles and T. Maxwell: Learning, invariance, and generalization in high-order neural networks. Appl. Opt., Vol. 26, pp. 4972-4978, 1987.
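To encode such an invariance, derive from the Constraint helper class listed above and reimplement its function call operator. The following is a minimal sketch: a constraint that puts two second-order terms into the same shared-weight group whenever their input pixels are equally far apart, which makes the resulting node invariant under translation of the input pattern. The two-index signature of the call operator and the nesting of Constraint inside SigmaPi are assumptions here; this page only states that the call operator must be reimplemented for the corresponding higher-order term.

    #include <SigmaPi.h>
    #include <cmath>

    // Hypothetical constraint: second-order terms whose two input pixels
    // are equally far apart fall into the same shared-weight group.
    struct DistanceConstraint : public OpenANN::SigmaPi::Constraint
    {
      int width; // width of the (assumed) two-dimensional input

      DistanceConstraint(int width) : width(width) {}

      // Assumed interface: maps the input indices of a second-order term
      // to a group identifier; equal return values share one weight.
      virtual double operator()(int p1, int p2) const
      {
        double x1 = p1 % width, y1 = p1 / width; // integer division gives the row
        double x2 = p2 % width, y2 = p2 / width;
        return std::sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
      }
    };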
Member Typedef Documentation

typedef std::vector< HigherOrderUnit > OpenANN::SigmaPi::HigherOrderNeuron [protected]

Constructor & Destructor Documentation
OpenANN::SigmaPi::SigmaPi (OutputInfo info, bool bias, ActivationFunction act, double stdDev)
Construct a SigmaPi layer that can be extended with different higher-order nodes.
Parameters
    info    OutputInfo of the previous, connected layer
    bias    whether this layer provides a bias term for the next, connected layer
    act     activation function used by all higher-order nodes
    stdDev  standard deviation for the random weight initialization
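A minimal construction sketch follows. The field layout of OutputInfo and the LOGISTIC enumerator are not documented on this page and are assumptions based on the rest of the library:

    #include <SigmaPi.h>

    // Previous layer emits a (hypothetical) 8x8 = 64-dimensional output.
    OpenANN::OutputInfo info;
    info.dimensions.push_back(8);
    info.dimensions.push_back(8);

    // Bias enabled, logistic activation, weights drawn with stddev 0.05.
    OpenANN::SigmaPi layer(info, true, OpenANN::LOGISTIC, 0.05);
    layer.secondOrderNodes(10); // extend the layer with ten second-order nodes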
Member Function Documentation

virtual void OpenANN::SigmaPi::backpropagate (Eigen::MatrixXd *ein, Eigen::MatrixXd *&eout, bool backpropToPrevious) [virtual]
Backpropagation in this layer.
Parameters
    ein     pointer to the error signal of the higher layer
    eout    returns a pointer to the error signal of this layer (derivative of the error with respect to the input)
    backpropToPrevious  backpropagate errors to previous layers
Implements OpenANN::Layer.
virtual void OpenANN::SigmaPi::forwardPropagate (Eigen::MatrixXd *x, Eigen::MatrixXd *&y, bool dropout = false, double *error = 0) [virtual]
Forward propagation in this layer.
Parameters
    x        pointer to the input of the layer (with bias)
    y        returns a pointer to the output of the layer
    dropout  enable dropout for regularization
    error    error value, will be updated with regularization terms
Implements OpenANN::Layer.
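Continuing the construction sketch above, a single forward and backward pass can be driven by hand as follows (in normal use the enclosing network does this); the matrix sizes follow the assumed 8x8 input:

    std::vector<double*> params, derivs;
    OpenANN::OutputInfo out = layer.initialize(params, derivs);

    Eigen::MatrixXd x(1, 64);              // one instance, 64 inputs
    x.setRandom();
    Eigen::MatrixXd* y = 0;
    layer.forwardPropagate(&x, y, false);  // y now points at the layer's output

    Eigen::MatrixXd ein = *y;              // stand-in error signal from above
    Eigen::MatrixXd* eout = 0;
    layer.backpropagate(&ein, eout, true); // eout: derivative w.r.t. the input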
virtual SigmaPi & OpenANN::SigmaPi::fourthOrderNodes (int numbers) [virtual]
Add a specific number of fourth-order nodes to this layer.

Parameters
    numbers  number of nodes to add
virtual SigmaPi & OpenANN::SigmaPi::fourthOrderNodes (int numbers, const Constraint &constrain) [virtual]
Add a specific number of fourth-order nodes that use the same weight-sharing topology.

Parameters
    numbers    number of nodes to add
    constrain  specifies shared weight groups for signal correlations from higher-order terms
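Since each of these methods returns a reference to the layer, constrained and unconstrained nodes can be mixed in one fluent chain. Continuing the sketch, with the hypothetical DistanceConstraint from the detailed description above:

    DistanceConstraint constraint(8);
    layer.secondOrderNodes(4, constraint)
         .thirdOrderNodes(2)
         .fourthOrderNodes(1);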
virtual Eigen::MatrixXd & OpenANN::SigmaPi::getOutput () [virtual]

Output after last forward propagation.

Implements OpenANN::Layer.
virtual Eigen::VectorXd OpenANN::SigmaPi::getParameters () [virtual]
Get the current values of parameters (weights, biases, ...).
Implements OpenANN::Layer.
virtual OutputInfo OpenANN::SigmaPi::initialize (std::vector< double * > &parameterPointers, std::vector< double * > &parameterDerivativePointers) [virtual]

See OpenANN::Layer::initialize().

Implements OpenANN::Layer.
virtual void OpenANN::SigmaPi::initializeParameters () [virtual]
Initialize the parameters.
This is usually called before each optimization.
Implements OpenANN::Layer.
virtual size_t OpenANN::SigmaPi::nodenumber () const [inline, virtual]

virtual size_t OpenANN::SigmaPi::parameter () const [inline, virtual]
virtual SigmaPi & OpenANN::SigmaPi::secondOrderNodes (int numbers) [virtual]
Add a specific number of second-order nodes to this layer.

Parameters
    numbers  number of nodes to add
virtual SigmaPi & OpenANN::SigmaPi::secondOrderNodes (int numbers, const Constraint &constrain) [virtual]
Add a specific number of second-order nodes that use the same weight-sharing topology.

Parameters
    numbers    number of nodes to add
    constrain  specifies shared weight groups for signal correlations from higher-order terms
virtual SigmaPi & OpenANN::SigmaPi::thirdOrderNodes (int numbers) [virtual]
Add a specific number of third-order nodes to this layer.

Parameters
    numbers  number of nodes to add
virtual SigmaPi & OpenANN::SigmaPi::thirdOrderNodes (int numbers, const Constraint &constrain) [virtual]
Add a specific number of third-order nodes that use the same weight-sharing topology.

Parameters
    numbers    number of nodes to add
    constrain  specifies shared weight groups for signal correlations from higher-order terms
virtual void OpenANN::SigmaPi::updatedParameters () [virtual]
Generate internal parameters from externally visible parameters.
This is usually called after each parameter update.
Implements OpenANN::Layer.
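Continuing the sketch, an optimizer would write new values through the pointers collected by initialize() and then let the layer rebuild its internal weights. The plain gradient step below is for illustration only; real training goes through the library's optimizers:

    for(size_t i = 0; i < params.size(); i++)
      *params[i] -= 0.01 * (*derivs[i]); // naive gradient step (illustration)
    layer.updatedParameters();           // regenerate internal weights from the new values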