OpenANN  1.1.0
An open source library for artificial neural networks.
P300 Speller

This program demonstrates how neural networks can be used to classify electroencephalography (EEG) data.

In this example we train a single layer perceptron (SLP) to recognize P300 potentials. This is needed in order to spell characters with brain-computer interfaces (BCI).
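A single layer perceptron for this task is a logistic unit: a weighted sum of the input features followed by a sigmoid, trained to output 1 for epochs containing a P300 potential and 0 otherwise. The following is a minimal pure-Python sketch of that idea on toy data, not OpenANN's C++ implementation (function names and the toy feature vectors are illustrative assumptions):

```python
import math

def train_slp(X, y, lr=0.5, epochs=200):
    """Train a single logistic unit (SLP) with stochastic gradient descent.
    X: list of feature vectors, y: list of 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - yi                      # gradient of the log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify a feature vector: 1 = P300 present, 0 = absent."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical toy data: target epochs have a higher mean amplitude.
X = [[1.2, 0.9], [1.0, 1.1], [-0.8, -1.0], [-1.1, -0.7]]
y = [1, 1, 0, 0]
w, b = train_slp(X, y)
```

In the real benchmark each input vector holds preprocessed EEG samples from all channels of one stimulus epoch instead of two toy features.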

The benchmark can be accelerated with CUDA. At least 6 GB of RAM are required.

Here we use data set II from BCI competition III. You can download the data set from http://www.bbci.de/competition/iii. Note that you have to register first. You need the files in ASCII format. The downloaded files will be:

To test the performance on the test set, we have to download the target characters of the test set separately and generate the expected targets for the classifier. This is done by a script.
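Generating the expected targets works as follows: in the P300 speller paradigm, rows and columns of a 6x6 character matrix are flashed, and a flash is a target exactly when it contains the character the subject attends to. A sketch of this mapping, assuming the standard matrix layout and the stimulus-code convention of this data set (codes 1-6 for columns, 7-12 for rows) - treat both as assumptions and check them against the competition description:

```python
# Assumed 6x6 speller matrix of BCI competition III, data set II.
MATRIX = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "56789_"]

def target_flashes(char):
    """Return the two stimulus codes (column: 1-6, row: 7-12)
    whose flashes contain the target character."""
    for r, row in enumerate(MATRIX):
        c = row.find(char)
        if c >= 0:
            return c + 1, r + 7
    raise ValueError("character not in matrix: %r" % char)

def labels_for(char, codes):
    """Map a sequence of stimulus codes to 0/1 classifier targets."""
    targets = set(target_flashes(char))
    return [1 if code in targets else 0 for code in codes]
```

For example, for target character "B" only the flashes of column 2 and row 7 are labelled positive; all other flashes of that character epoch get label 0.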

To execute the benchmark you can run the Python script:

python benchmark.py directory [download] [run]

download will download the targets for the test sets and run will start the benchmark. The directory argument should point to the location where the data sets are stored.

The output could look like this:

$ P300Speller /path/to/dataset-directory/
Loaded data set A in 33 s.
Loaded data set B in 33 s.
Iter.   Time    5 trials        15 trials       (average of 10 runs, 2 data sets)
decimation, 1344 parameters
..........
..........
16.60   145.45  64.10           93.50
decimation, compression, 800 parameters
..........
..........
12.35   59.60   63.75           93.90
lowpass filter, compression, 800 parameters
..........
..........
16.10   85.30   64.15           94.05
compression, 1200 parameters
..........
..........
16.10   143.75  55.30           87.80

Here we tested 4 configurations with different preprocessing and compression methods. We performed 10 runs for each configuration and report the average number of iterations (Iter.), the training time in seconds (Time), and the accuracy in percent with 5 trials and with 15 trials per character.
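Decimation and lowpass filtering both reduce the dimensionality of the raw EEG epochs before classification: a lowpass filter removes high-frequency content, and decimation then keeps only every k-th sample. A minimal pure-Python sketch of the idea, using a simple moving-average filter (OpenANN's actual preprocessing may use a different filter; this only illustrates the principle):

```python
def lowpass(signal, k=4):
    """Moving-average lowpass filter of window width k (illustration only)."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - k + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def decimate(signal, factor):
    """Lowpass-filter first to avoid aliasing, then keep every
    `factor`-th sample, shrinking the feature vector by `factor`."""
    return lowpass(signal, factor)[::factor]
```

Decimating each channel by a factor of k shrinks the input dimension, and therefore the number of SLP parameters, by roughly the same factor; the parameter counts in the table above (1344 vs. 800 vs. 1200) reflect such reductions.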