OpenANN  1.1.0
An open source library for artificial neural networks.
SARCOS Inverse Dynamics Problem

The SARCOS dataset is taken from this website.

This is an inverse dynamics problem, i.e. we have to predict the 7 joint torques given the joint positions, velocities and accelerations. Hence, we have to solve a regression problem with 21 inputs, 7 outputs, and a highly nonlinear target function.

The optimization problem is hard: underfitting is a much bigger problem than overfitting. For this reason, we use a large network with four hidden layers of 200 nodes each. The deep architecture makes optimization harder, but it represents the target function more efficiently than a shallow network. However, two measures speed up optimization drastically: a non-saturating activation function (rectified linear units) and mini-batch stochastic gradient descent.
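The architecture and training scheme described above can be sketched in plain NumPy. This is an illustrative stand-in, not OpenANN's actual API; the synthetic data, learning rate, and batch size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture from the text: 21 inputs, four ReLU hidden layers
# with 200 nodes each, and 7 linear outputs.
layer_sizes = [21, 200, 200, 200, 200, 7]
weights = [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n))  # He init for ReLU
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Forward pass; returns the activations of every layer."""
    acts = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = acts[-1] @ W + b
        # Rectified linear units on hidden layers, identity on the output.
        acts.append(np.maximum(z, 0.0) if i < len(weights) - 1 else z)
    return acts

def sgd_step(x, y, lr=1e-3):
    """One mini-batch stochastic gradient descent step on the MSE."""
    acts = forward(x)
    delta = 2.0 * (acts[-1] - y) / len(x)  # gradient of MSE w.r.t. output
    for i in reversed(range(len(weights))):
        grad_W, grad_b = acts[i].T @ delta, delta.sum(axis=0)
        if i > 0:  # backpropagate through the ReLU
            delta = (delta @ weights[i].T) * (acts[i] > 0.0)
        weights[i] -= lr * grad_W
        biases[i] -= lr * grad_b

# Synthetic stand-in for the SARCOS data (random values, illustration only).
X, Y = rng.normal(size=(256, 21)), rng.normal(size=(256, 7))
loss_before = np.mean((forward(X)[-1] - Y) ** 2)
for epoch in range(20):
    for s in range(0, len(X), 32):  # mini-batches of 32 samples
        sgd_step(X[s:s + 32], Y[s:s + 32])
loss_after = np.mean((forward(X)[-1] - Y) ** 2)
print(loss_before, loss_after)
```

The ReLU avoids the vanishing gradients of saturating activations in a four-layer stack, and the mini-batches give many cheap, noisy gradient steps per pass over the data.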

You can start the benchmark with the script:

  python benchmark.py [download] [run]

Note that you need SciPy to load the dataset and matplotlib to display some of the results.
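Loading the data relies on SciPy's MAT-file reader. A minimal sketch follows; the column layout (21 input columns, then 7 torque columns) and the variable name `sarcos_inv` inside the file are assumptions, and a small stand-in file is written here instead of downloading the real one:

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Write a small stand-in .mat file with the assumed layout:
# 21 input columns followed by 7 torque columns.
rng = np.random.default_rng(0)
data = np.hstack([rng.random((10, 21)), rng.random((10, 7))])
path = os.path.join(tempfile.mkdtemp(), "sarcos_inv.mat")
savemat(path, {"sarcos_inv": data})

# Load it back and split inputs from targets.
mat = loadmat(path)["sarcos_inv"]
X, Y = mat[:, :21], mat[:, 21:]
print(X.shape, Y.shape)
```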

The output will look like this:

  Dimension 1: nMSE = 0.938668% (training) / 0.903342% (validation)
  Dimension 2: nMSE = 0.679012% (training) / 0.647091% (validation)
  Dimension 3: nMSE = 0.453497% (training) / 0.442720% (validation)
  Dimension 4: nMSE = 0.242476% (training) / 0.240360% (validation)
  Dimension 5: nMSE = 1.010049% (training) / 1.044068% (validation)
  Dimension 6: nMSE = 0.851110% (training) / 0.796895% (validation)
  Dimension 7: nMSE = 0.474232% (training) / 0.465929% (validation)

You see the normalized mean squared error (nMSE) for each output dimension on the training set and the validation set. The nMSE is the mean squared error divided by the variance of the corresponding output dimension. In addition, a plot comparing the actual and the predicted output of one dimension will be shown.
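The nMSE described above can be computed per output dimension as follows (a small sketch with made-up data; the helper name `nmse` is not part of the benchmark):

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized MSE: mean squared error divided by the variance
    of the true outputs, computed per output dimension."""
    mse = np.mean((y_true - y_pred) ** 2, axis=0)
    return mse / np.var(y_true, axis=0)

# Predicting the per-dimension mean gives nMSE = 1 (i.e. 100%), so
# values around 1% as in the benchmark output indicate a good fit.
y = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
mean_pred = np.broadcast_to(y.mean(axis=0), y.shape)
result = nmse(y, mean_pred)
print(result)  # → [1. 1.]
```

Dividing by the variance makes the error comparable across the 7 torque dimensions, which have very different scales.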