OpenANN  1.1.0
An open source library for artificial neural networks.
Getting Started

We will solve a very simple problem here to demonstrate the API of OpenANN.

XOR Data Set

The XOR problem cannot be solved by a perceptron (a neural network consisting of a single neuron, i.e. without a hidden layer). This limitation contributed to the decline of neural network research in the 1970s, until backpropagation made training multi-layer networks practical.

The data set is simple:

$ x_1 $   $ x_2 $   $ y_1 $
   0         1         1
   0         0         0
   1         1         0
   1         0         1

That means $ y_1 $ is on whenever $ x_1 \neq x_2 $. The problem is that you cannot draw a single straight line that separates the two classes 0 and 1: they are not linearly separable, as you can see in the following picture. Therefore, we need at least one hidden layer to solve the problem. In the next sections you will find C++ code and Python code that solve this problem.

[Figure xor.png: the four XOR instances are not linearly separable]
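To see concretely why one hidden layer suffices, here is a small numpy sketch (independent of OpenANN; the weights are hand-picked for illustration, not learned) of a 2-2-1 network with logistic activations that computes XOR. A single logistic neuron can only realize a linear decision boundary, so no choice of its weights separates these four points, but two hidden units already can:

```python
import numpy

def logistic(a):
    return 1.0 / (1.0 + numpy.exp(-a))

# The four XOR instances, in the same row order as the table above
X = numpy.array([[0, 1], [0, 0], [1, 1], [1, 0]], dtype=float)

# Hidden unit 1 approximates OR, hidden unit 2 approximates AND;
# the output unit computes roughly OR(x) AND NOT AND(x), i.e. XOR.
W1 = numpy.array([[20.0, 20.0],    # weights of hidden unit 1
                  [20.0, 20.0]])   # weights of hidden unit 2
b1 = numpy.array([-10.0, -30.0])   # hidden biases
w2 = numpy.array([20.0, -20.0])    # output weights
b2 = -10.0                         # output bias

H = logistic(X.dot(W1.T) + b1)     # hidden activations, shape (4, 2)
y = logistic(H.dot(w2) + b2)       # network outputs, shape (4,)

print(numpy.round(y))  # [1. 0. 0. 1.] -- matches the targets above
```

OpenANN's training procedure finds weights of this kind automatically by minimizing the error on the training set.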

C++

#include <OpenANN/OpenANN>
#include <Eigen/Core>
#include <iostream>

using namespace OpenANN;

int main()
{
  // Create dataset
  const int D = 2; // number of inputs
  const int F = 1; // number of outputs
  const int N = 4; // size of training set
  Eigen::MatrixXd X(N, D); // inputs
  Eigen::MatrixXd T(N, F); // desired outputs (targets)
  // Each row represents an instance
  X.row(0) << 0.0, 1.0;
  T.row(0) << 1.0;
  X.row(1) << 0.0, 0.0;
  T.row(1) << 0.0;
  X.row(2) << 1.0, 1.0;
  T.row(2) << 0.0;
  X.row(3) << 1.0, 0.0;
  T.row(3) << 1.0;
  DirectStorageDataSet dataSet(&X, &T);

  // Make the result repeatable
  RandomNumberGenerator rng;
  rng.seed(0);

  // Create network
  Net net;
  // Add an input layer with D inputs, 1 hidden layer with 2 nodes and an
  // output layer with F outputs. Use logistic activation function in hidden
  // layer and output layer.
  makeMLNN(net, LOGISTIC, LOGISTIC, D, F, 1, 2);
  // Add training set
  net.trainingSet(dataSet);

  // Set stopping conditions
  StoppingCriteria stop;
  stop.minimalValueDifferences = 1e-10;

  // Train network, i.e. minimize sum of squared errors (SSE) with the
  // Levenberg-Marquardt optimization algorithm until the stopping criteria
  // are satisfied.
  train(net, "LMA", MSE, stop);

  // Use network to predict labels of the training data
  for(int n = 0; n < N; n++)
  {
    Eigen::VectorXd y = net(dataSet.getInstance(n));
    std::cout << y << std::endl;
  }
  return 0;
}

Compile it with pkg-config and g++ (make sure that pkg-config is installed, otherwise you might get misleading errors):

g++ main.cpp -o openann `pkg-config --cflags --libs openann`

Python

from openann import *
import numpy

if __name__ == "__main__":
    # Create dataset
    X = numpy.array([[0, 1], [0, 0], [1, 1], [1, 0]])
    Y = numpy.array([[1], [0], [0], [1]])
    D = X.shape[1]
    F = Y.shape[1]
    N = X.shape[0]
    dataset = DataSet(X, Y)

    # Make the result repeatable
    RandomNumberGenerator().seed(0)

    # Create network
    net = Net()
    net.input_layer(D)
    net.fully_connected_layer(3, Activation.LOGISTIC)
    net.output_layer(F, Activation.LOGISTIC)

    # Train network
    stop_dict = {"minimal_value_differences": 1e-10}
    lma = LMA(stop_dict)
    lma.optimize(net, dataset)

    # Use network
    for n in range(N):
        y = net.predict(X[n])
        print(y)
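Note that the logistic output unit produces a value in (0, 1) rather than a hard class label. If you need binary predictions, you can threshold the output at 0.5, as in this small numpy sketch (the values of `y` here are illustrative; the exact numbers depend on training):

```python
import numpy

# Example raw network outputs for the four XOR instances
y = numpy.array([0.98, 0.01, 0.02, 0.97])
labels = (y > 0.5).astype(int)
print(labels)  # [1 0 0 1]
```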

More Examples

Classification

Regression

Reinforcement Learning

We also have some Benchmarks that show how you can use ANNs and compare different architectures.