METHODS article

Front. Neurosci., 02 December 2010
Sec. Neurogenomics

Pyff – A Pythonic Framework for Feedback Applications and Stimulus Presentation in Neuroscience

  • 1 Machine Learning Laboratory, Berlin Institute of Technology, Berlin, Germany
  • 2 Bernstein Center for Computational Neuroscience, Berlin, Germany
  • 3 Department of Computing Science, University of Glasgow, Glasgow, Scotland
  • 4 Bernstein Focus: Neurotechnology, Berlin, Germany
  • 5 Fraunhofer FIRST (IDA), Berlin, Germany

This paper introduces Pyff, the Pythonic feedback framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e., feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli as well as useful libraries for external hardware such as eyetrackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain–computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable for use with any system that can be adapted to send its data in the specified format. Having such a general, open-source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need for reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.

1 Introduction

During the past years, the neuroscience community has been moving toward increasingly complex stimulation paradigms (Pfurtscheller et al., 2006; Brouwer and van Erp, 2010; Schreuder et al., 2010) that aim to investigate human function in a more natural setting. A technical bottleneck in this process is programming these complex stimulations. In particular, the rapidly growing field of brain–computer interfacing (BCI, Dornhege et al., 2007) requires stimulus presentation programs that can be used within a closed loop, i.e., feedback applications that are driven by a control signal that is derived from ongoing brain activity.

The present paper suggests a Python-based framework for experimental paradigms that combines ease of programming with all the functionality necessary for flawless stimulus presentation. This is well in line with the growing interest in using Python in the neuroscience community (Jurica and VanLeeuwen, 2009; Spacek et al., 2008; Brüderle et al., 2009; Drewes et al., 2009; Ince et al., 2009; Pecevski et al., 2009; Strangman et al., 2009). Pyff provides a powerful yet simple and highly accessible framework for the development of complex experimental paradigms containing multimedia. To this end, it offers a standardized interface for implementing experimental paradigms, support for special hardware such as eye trackers and EEG equipment, as well as a large library of ready-to-use experiments. Experience from our classes shows that non-expert programmers typically learn the use of our framework within 2 days. Note that a C++ implementation can easily take one order of magnitude more time to write than the corresponding Python implementation, and even for an experienced programmer a factor of two still remains (Prechelt, 2000). The primary aim of Pyff was to provide a convenient basis for programming paradigms in the context of brain–computer interfacing. To that end, Pyff can easily be linked to BCI systems like BCI2000 (Schalk et al., 2004) and the Berlin BCI via a standard communication protocol, see Section 3. Furthermore, Pyff can be used for general stimulus presentation. This allows a seamless transition from experiments in the fields of cognitive psychology, neuroscience, or psychophysics to BCI studies.

Since Pyff is open source, it makes an ideal basis for a vivid exchange of experimental paradigms between research groups and it releases the user from needing to reprogram standard paradigms. Furthermore, providing paradigm implementations as supplementary material within the Pyff framework will ease the reproducibility of published results. The following sections of this paper introduce the software concept, detail a number of typical paradigms and conclude. Several appendices expand on the software engineering side and the code of a sample paradigm is discussed in detail.

Throughout this paper, we use the terms stimulus application and feedback application synonymously with experimental paradigm. The former terms are common in the BCI field, whereas the latter is better known in neuroscience. The difference between a feedback application and a stimulus application is defined by the setup of the experiment. If the experimental setup forms a closed loop, such as in a neurofeedback paradigm, we call the application a feedback application. If the loop is not closed, we call it stimulus presentation. When referring to the actual software implementation of a paradigm, we use the term Feedback (with a capital F) for both stimulus and feedback applications.

2 Related Work

Pyff is a general, high-level framework for the development of experimental paradigms within the programming language Python. In particular, Pyff can receive control signals from a BCI system to drive a feedback application in a closed-loop mode. Other software related to Pyff can be grouped into the following three categories:

(1) General Python modules for visual or auditory presentation

(2) Packages for stimulus presentation and experimental control

(3) Feedback applications for use with BCI systems

Software of categories (1) and (2) can be used within Pyff, while software of category (3) is an alternative to Pyff. In the following, we briefly discuss prominent examples of the three groups.

(1) These modules are usually used to write games and other software applications and can be used within Pyff to control stimulus presentation. Pygame1 is a generic platform for gaming applications. It can be readily used to implement visual and auditory stimulus presentation. Similar to Pygame, pyglet2 is a framework for developing games and visually rich applications and therefore suited for visual stimulus applications. PyOpenGL3 provides bindings to OpenGL and related APIs, but requires the programmer to be familiar with OpenGL.

Pyff provides a Pygame base class (see Section 13) that facilitates the development of experimental paradigms based on this module.

(2) There are some comprehensive Python libraries that provide means for creating and running experimental paradigms. Their advanced functionality for stimulus presentation can be used within Pyff. Vision Egg (Straw, 2008) is a high-level interface to OpenGL. It was specifically designed to produce stimuli for vision research experiments. PsychoPy (Peirce, 2007) is a platform-independent experimental control system written in Python. It provides means for stimulus presentation and response collection as well as simple data analysis. PyEPL (Geller et al., 2007) is another Python library for object-oriented coding of psychology experiments; it supports the presentation of visual and auditory stimuli as well as manual and sound input as responses.

Pyff provides a VisionEggFeedback base class which allows for easily writing paradigms that use Vision Egg for stimulus presentation. The other two modules have not yet been used within Pyff.

The Psychophysics Toolbox (Brainard, 1997) is a free set of Matlab and GNU/Octave functions for vision research. Being available since the 1990s, it is now a mature research tool and particularly popular among psychologists and neuroscientists. Currently, there is no principled framework for coupling the Psychophysics Toolbox to a BCI system.

In addition, there are commercial solutions such as E-Prime (Psychology Software Tools, Inc.) and Presentation (Neurobehavioral Systems), which provide software for experiment design, data collection, and analysis.

(3) BCI2000 (Schalk et al., 2004) is a general-purpose system for BCI research that is free for academic and educational purposes. It is written in C++ and runs under Microsoft Windows. BCPy2000 (Schreiner, 2008) is an extension that allows developers to implement BCI2000 modules in Python, which is less complex and less error-prone than C++. BCPy2000 is firmly coupled to BCI2000, resulting in more constrained usage compared to Pyff.

3 Overview of Pyff

This section gives an overview of our framework. A more complete and technical description is available in the Appendix.

The Pythonic feedback framework (Pyff) is a framework for developing experimental paradigms. The foremost design goal was to make the development of such applications fast and easy, even for users with little programming experience. For this reason we decided to use the Python programming language, as it is easier to learn than low-level languages like C++. The code is shorter and clearer and thus leads to faster and less error-prone results. Python is slower than a low-level language like C++, but usually fast enough for multimedia applications. In the rare cases where Python is too slow for complex calculations, it is easy to port the computationally intensive parts to C and then call them from within Python.

The framework consists of four parts: the Feedback Controller, a graphical user interface (GUI), a set of Feedbacks and a collection of Feedback base classes (see Figure 1).

Figure 1. Overview of the Pyff framework. The framework consists of the Feedback Controller, the GUI, a collection of Feedbacks and Feedback base classes.

The Feedback Controller controls the execution of the stimulus and feedback applications and forwards incoming signals from an arbitrary data source, such as a BCI system, to the applications. To enable as many existing systems as possible to communicate with Pyff, we developed a simple communication protocol based on the user datagram protocol (UDP). The protocol allows for transportation of data over a network, using extensible markup language (XML) to encode the signal. This protocol enables virtually any software that can output its data in some form to send it to Pyff with only minor modifications.
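
To illustrate the idea, the following is a minimal sketch of a client sending one control signal to Pyff over UDP. The XML element and attribute names as well as the port number are hypothetical placeholders, not the actual Pyff schema, which is documented in the Appendix.

    # Minimal sketch: send one XML-encoded control signal to Pyff via UDP.
    # Element/attribute names and the port are hypothetical placeholders;
    # the actual schema is documented in the Appendix.
    import socket

    FEEDBACK_CONTROLLER = ("127.0.0.1", 12345)  # assumed host and port

    xml = ('<?xml version="1.0"?>'
           '<bci-signal>'
           '<control-signal name="cl_output" value="0.42"/>'
           '</bci-signal>')

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(xml.encode("utf-8"), FEEDBACK_CONTROLLER)
    sock.close()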

The GUI controls the Feedback Controller remotely. Within the GUI, the experimenter can select and start Feedbacks as well as inspect and manipulate their variables (e.g., number of trials, position and color of visual objects). The ability to inspect the Feedback application’s internal variables in real time while the application is running makes the GUI an invaluable debugging tool. Being able to modify these variables on the fly also provides a great way to explore different settings in a pilot experiment. The GUI also uses the aforementioned communication protocol and thus does not need to run on the same machine as the Feedback Controller. This can be convenient for experiments where the subject is in a different room than the experimenter. Note that the use of the GUI is optional, as everything can also be controlled remotely via UDP/XML, see Section 11.

Pyff not only provides a platform for developing Feedback applications and stimuli easily, but is also equipped with a variety of paradigms (see Section 4 for examples). The BBCI group will continue to publish feedback and stimulus applications under a Free Software License, making them available to other research groups, and we hope that others will join our effort.

The collection of Feedback base classes provides a convenient set of standard methods for paradigms which can be used in derived Feedback classes to speed up the development of new Feedback applications. This standard functionality reduces the overhead of developing a new Feedback as well as minimizing code duplication. To give an example, Pygame is frequently used to provide the graphical output of a Feedback. Since some things need to be done in every Feedback using Pygame (i.e., initializing the graphics or regularly polling Pygame’s event queue), we created the PygameFeedback base class. It contains methods required by all Pygame-based Feedbacks and some convenient helper methods we find useful. Using this base class in a Pygame-based Feedback can drastically reduce the amount of new code required. It helps to concentrate on the code needed for the actual paradigm instead of dealing with the quirks of the library used. Pyff already provides some useful base classes like VisionEggFeedback, EventDrivenFeedback, and VisualP300. Our long-term goal is to provide a rich set of base classes for standard experimental paradigms to ease the effort of programming new Feedbacks even more.
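
To make this concrete, a Feedback derived from PygameFeedback might look like the following condensed sketch. The import path and the exact hook names are assumptions based on the life cycle described here; the Appendix documents the authoritative interface.

    # Condensed sketch of a Feedback built on the PygameFeedback base class.
    # Import path and hook names are assumptions; see the Appendix for the
    # authoritative interface.
    import pygame
    from FeedbackBase.PygameFeedback import PygameFeedback

    class MinimalFeedback(PygameFeedback):
        def init(self):
            PygameFeedback.init(self)
            self.caption = "Minimal Feedback"  # window setup is handled by the base class
            self.value = 0.0                   # last control signal received

        def on_control_event(self, data):
            # Invoked when the Feedback Controller forwards a control signal.
            self.value = data.get("cl_output", self.value)

        def play_tick(self):
            # Called once per frame; the base class runs the main loop and
            # polls Pygame's event queue.
            self.screen.fill((0, 0, 0))
            pygame.display.flip()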

4 Selected Feedbacks

Pyff allows for the rapid implementation of one’s own paradigms, but it also comes equipped with a variety of ready-to-use paradigms. In the following sections, we will present a few examples.

4.1 Hex-o-Spell for Continuous Input Signals

The Hex-o-Spell is a text-entry device that is operated via timing-based changes of a continuous control signal (Müller and Blankertz, 2006; Williamson et al., 2009). These properties make it a suitable paradigm for BCI experiments that employ brain-state discrimination strategies with fine temporal resolution.

The structure of the Hex-o-Spell Feedback with all its visual components is shown in Figure 2A. The main visual elements are an arrow that is surrounded by an array of six hexagons in the center of the screen, a large text board that displays the spelled text, and a bar of varying height that indicates the current value of the control signal. The hexagons surrounding the central arrow contain the symbols that are used for spelling.

Figure 2. The Hex-o-Spell Feedback. (A) All components of the Hex-o-Spell are visible. The Feedback is in selection stage one and the currently spelled text consists of the letter “B” only. (B) The symbol layout after spelling of “BERL.” (C) The second hexagon (clockwise, from the top) has been selected in stage one and the Feedback is now in stage two.

The actual selection of a symbol is a two-stage process and involves the subject controlling the orientation and length of the arrow. How exactly the parameters of the arrow are manipulated by the subject is explained in the next paragraph, while the remainder of this paragraph outlines the symbol selection process. In the first stage each hexagon contains five symbols. The subject has to select the hexagon that contains the desired symbol by making the arrow point to it and then confirming the choice by making the arrow grow until it reaches the hexagon. After this confirmation, the symbol content of the selected hexagon is distributed over the entire hexagon array, such that each hexagon now contains at most one symbol. There is always one hexagon that contains no symbol, which, in conjunction with the delete symbol “<,” represents a practically unlimited undo option. The location of individual symbols in stage two reflects the positions they had in the single hexagon in stage one (compare Figures 2B,C). Now the subject has to position the arrow such that it points to the hexagon that contains the desired symbol and, again, confirm the selection by making the arrow grow until the respective hexagon is reached. After the symbol has been selected, the spelled text in the text board is updated accordingly. The Hex-o-Spell Feedback then returns to stage one, i.e., the hexagons again show their original symbol content and the symbol selection process can begin anew. All major events, including, for example, the onsets and offsets of transition animations between stages, symbol selection, and GUI interaction (play, pause, stop), are accompanied by sending an event-specific integer code to the parallel port of the machine that runs the Hex-o-Spell Feedback. These codes can be incorporated in the marker structure of the EEG recording software and used for later analysis.

In order to operate the Hex-o-Spell symbol selection mechanism, the subject has to control the behavior of the arrow. The arrow is always in one of three distinct states: (1) clockwise rotation, (2) no rotation, and (3) no rotation and growth, i.e., increase in length up to a certain maximum length. Upon returning from state (3) to state (2), the arrow shrinks back to its default length. The states of the arrow are directly linked to the control signal from the Feedback Controller, which is required to be in the range between −1 and 1. Two thresholds, t1 and t2 with −1 < t1 < t2 < 1, partition the control signal range into three disjoint regions and thereby allow switching of arrow states by altering the control signal strength. The two thresholds are visualized as part of the control signal bar and therefore show the subject how much they have to increase or decrease the signal strength in order to achieve a certain arrow state. The thresholds, the time constants that determine rotation and scaling speed, as well as other parameters of the Feedback can be adjusted during the experiment to allow for further accommodation to the subject.
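
The mapping from control signal to arrow state thus reduces to a simple threshold rule. In the sketch below, the threshold values and the assignment of regions to states are illustrative assumptions; in the actual Feedback both are adjustable.

    # Threshold rule mapping a control signal in [-1, 1] to an arrow state.
    # The values of t1/t2 and the region-to-state assignment are illustrative
    # assumptions; in the Feedback they are adjustable parameters.
    def arrow_state(signal, t1=-0.3, t2=0.3):
        if signal < t1:
            return "rotate"  # state (1): clockwise rotation
        elif signal < t2:
            return "hold"    # state (2): no rotation
        else:
            return "grow"    # state (3): arrow grows toward a hexagon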

So far all ingredients that are essential for operating the Hex-o-Spell Feedback have been explained. Additionally, our implementation includes mechanisms that speed up the spelling of words considerably by exploiting certain statistical properties of natural language. We achieve this by making the symbols that are more likely to be selected next easier to access. In stage one, the arrow always starts out pointing to the hexagon containing the most probable next symbol. Additionally, the positions of letters within each hexagon in stage one are arranged so that in stage two the arrow starts out pointing to the most probable symbol, followed by the second most probable, etc. The selection probability distribution model is updated after each new symbol. With these text-entry aids, the Hex-o-Spell Feedback allows for spelling rates of up to 7.6 symbols/min (Blankertz et al., 2007).
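
In code, this text-entry aid amounts to ordering the candidate symbols by their probability under a language model. The model interface below (model.prob) is a hypothetical placeholder for whatever language model is actually used.

    # Order candidate symbols from most to least probable given the spelled
    # context, so that likely symbols are reached fastest by the arrow.
    # The language-model interface (model.prob) is a hypothetical placeholder.
    def order_by_probability(symbols, model, context):
        return sorted(symbols, key=lambda s: model.prob(s, context), reverse=True)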

This Feedback requires the following packages to be installed: NumPy4 and Panda3D5. NumPy is essential for the underlying geometrical computations (angles, positions, etc.) and the handling of the language model data. Panda3D provides the necessary subroutines for rendering the visual feedback. Both packages are freely available for all major platforms from their respective websites.

4.2 ERP-Based Hex-o-Spell

The ERP-based Hex-o-Spell (Treder and Blankertz, 2010) is an adaptation of the standard Hex-o-Spell (see Section 4.1) that utilizes event-related potentials (ERPs) to select the symbols. The ERP is the brain response following an external or internal event. It involves early positive and negative components usually associated with sensory processing and later components reflecting cognitive processes. Symbol selection proceeds in the same two-stage process described in Section 4.1, but in the ERP-based variant the symbols are selected not by a rotating arrow but via the ERPs elicited by intensification of elements on the screen: discs containing the symbols are intensified in random order, and whenever the attended disc is intensified (Figure 3), a characteristic ERP is elicited. After 10 rounds of intensifications, the BCI system has enough confidence to decide which of the presented discs the subject wanted to select.

Figure 3. Stage one and two of the ERP-based Hex-o-Spell. Instead of an arrow, intensified discs are used to present the current selection.

Intensification is realized by up-sizing the disc containing the symbol(s) by 62.5%. Each intensification is accompanied by a trigger which is sent to the EEG recording system using the send_parallel() method of the Feedback base class. The triggers mark the exact points in time of the intensifications and are essential for the ERP-based classification performed by the BCI system.
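
Inside the Feedback, marking an intensification onset reduces to a single call. In the sketch below, the marker value and the disc’s upscale() helper are hypothetical placeholders; send_parallel() is the base-class method named above.

    # Mark each intensification onset in the EEG via the parallel port.
    # TRIGGER_INTENSIFY and disc.upscale() are hypothetical placeholders;
    # feedback.send_parallel() is the Feedback base-class method named above.
    TRIGGER_INTENSIFY = 33

    def intensify_disc(feedback, disc):
        disc.upscale(1.625)                        # up-size the disc by 62.5%
        feedback.send_parallel(TRIGGER_INTENSIFY)  # timestamp the onset in the EEG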

The ERP Hex-o-Spell Feedback class is derived from the VisualP300 base class provided by Pyff. This base class provides many useful methods for quickly writing visual ERP-based feedbacks or stimuli. For the actual drawing on the screen, Pygame1 is used.

4.3 SSVEP-Based Hex-o-Spell

The brain responds to flickering visual stimuli by generating steady state visual evoked potentials (SSVEP, Herrmann, 2001) of the corresponding frequency and its harmonics. Various studies have used this phenomenon to determine which stimulus, out of a set flickering at different frequencies, a subject is attending to. Cheng et al. (2002) demonstrated a multi-class SSVEP-based BCI in which subjects selected among 10 numbers and two control buttons, with a mean information transfer rate of 27.15 bits/min. Müller-Putz and Pfurtscheller (2008) demonstrated that subjects were able to control a hand prosthesis using SSVEP.

In our SSVEP-based variant of the Hex-o-Spell, the selection of symbols is again a two-stage process as described in Section 4.1. In the SSVEP-based variant there is no rotating arrow; instead, all hexagons are blinking. The blinking frequencies are pairwise different and fixed for each hexagon. Two different approaches can be used to select the hexagon containing the desired letter: overt and covert attention. In the overt attention case, the subject is required to look at the hexagon with the letter, whereas in the covert attention case, the subject must look at the dot in the middle and only concentrate on the desired hexagon. In both cases, an eye tracker is used to verify that the subject is looking at the required spot. If the subject’s gaze is directed somewhere else, the trial is stopped and, after an error message accompanied by a sound, restarted.
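
The gaze check boils down to testing whether the current fixation falls within a tolerance radius of the required spot. The eye-tracker call below is a hypothetical placeholder; Pyff ships a module for the IntelliGaze tracker (see Section 4.7), but its interface is not reproduced here.

    # Verify that the current fixation lies within `radius` pixels of the
    # required spot; tracker.current_fixation() is a hypothetical API.
    def gaze_on_target(tracker, target_xy, radius):
        x, y = tracker.current_fixation()
        tx, ty = target_xy
        return (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2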

Before the experiment, one may want to determine the optimal blinking frequencies for the subject. The SSVEP-based Hex-o-Spell therefore supports a training mode in which a single centered hexagon blinks at different frequencies. After testing different frequencies in random order, the experimenter can choose the six frequencies with the highest signal-to-noise ratio in the power spectrum of the recorded SSVEP at the flicker frequency of the stimulus or one of its harmonics. Since harmonics of different flicker frequencies can overlap, care must be taken that the chosen frequencies can be well discriminated (Krusienski and Allison, 2008).
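
Such a frequency ranking can be computed along the following lines. Defining the SNR as the spectral peak power relative to the mean power of neighboring frequency bins is an assumption; the exact estimator is not specified here.

    # Rank candidate flicker frequencies by the SNR of the recorded SSVEP.
    # The SNR definition (peak power over mean power of neighboring bins)
    # is an assumption; the exact estimator is not specified in the text.
    import numpy as np

    def snr_at(eeg, fs, target_hz, neighbors=2):
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        idx = int(np.argmin(np.abs(bins - target_hz)))
        noise = np.concatenate((spectrum[idx - neighbors:idx],
                                spectrum[idx + 1:idx + 1 + neighbors]))
        return spectrum[idx] / noise.mean()

    # Example: pick the six best candidates.
    # best = sorted(candidates, key=lambda f: snr_at(eeg, fs, f), reverse=True)[:6]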

Since appropriate frequencies of the flashing stimuli are very important in SSVEP experiments, special attention was given to this matter. The Feedback was therefore programmed using Vision Egg (Straw, 2008), which allows very accurate presentation of time-critical stimuli. We verified that the specified frequencies were produced exactly on the monitor by measuring them with an oscilloscope (Handyscope HS3 by Bitzer Digitaltechnik).

4.4 ERP-Based Photo Browser

The photobrowser Feedback enables users to select images from a collection, e.g., to pick images from an album for future presentation, or simply for enjoyment. Images are selected by detecting a specific ERP response in the EEG to attended target images as compared to non-attended non-target images, similar to the ERP-based Hex-o-Spell, see Section 4.2. The photobrowser uses the images themselves as stimuli, tilting and flashing some subgroup of the displayed images at regular intervals (see Figure 4). Users simply focus attention on a particular image while the system randomly stimulates the whole collection. After several stimulation cycles, there is sufficient evidence from the ERP signals to detect the image that the user is responding to.

Figure 4. (A) The ERP-based photobrowser in operation. (B) Close up of images in non-intensified state. (C) Same images during the intensification cycle, with two images intensified.

Using the Pyff framework for communication with a BCI system, the photobrowser implementation displays a 5 × 6 grid of photographs. The subject watches the whole grid but attends to the image to be selected for further use. This can be done by direct gaze, but covert attention without direct gaze can also be utilized by the subject (for an investigation of the effect of target fixation, see Treder and Blankertz, 2010). During this time, the implementation stimulates subsets of images in a series of discrete steps, simultaneously sending a synchronization trigger to the EEG system, so that the timing of the stimulation can be matched to the onset of the visual stimulation. The photobrowser goes through a series of stimulation cycles in which subsets of the photographs are highlighted. After a sufficient number of these cycles, enough evidence is built up to determine the photograph the user is interested in selecting.

Stimulated images flash white, with a subtle white grid superimposed, and are rotated and scaled at the same time. This gives the visual impression of the images suddenly glowing and popping. The specific set of visual parameters used (e.g., scale factor, tilt factor, flash duration, flash color) can all be configured to maximize the ERP response. The timing of these stimulations is also completely adjustable via the Pyff interface. The browser uses an immediate onset followed by an exponential decay for all of the stimulation parameters (for example, at the instant of stimulation the image turns completely white and then returns to its normal luminosity according to an exponential schedule).
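
The onset-plus-decay schedule can be written compactly. The time constant below is a hypothetical value, since the actual parameters are configurable through the Pyff interface as described above.

    # Immediate onset followed by exponential decay: full stimulation at
    # t = 0, smoothly returning to the normal appearance. tau is a
    # hypothetical value; the real parameters are configurable via Pyff.
    import math

    def stimulation_level(t, tau=0.15):
        """Stimulation intensity at t seconds after onset, in [0, 1]."""
        return math.exp(-t / tau) if t >= 0.0 else 0.0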

The Feedback uses PyOpenGL3 and Pygame1 for rapid display updates and high-quality image transformation. OpenGL provides the hardware transformation and alpha blending required for acceptable performance. NumPy4 is used in the routines that optimize the group of images stimulated on each cycle. Pyff provides access to the parallel ports for hard synchronization between the visual stimuli and the EEG recordings. Even in this time-critical application, Python with these libraries provides timing that is good enough for the ERP paradigm.

4.5 Goalkeeper

The Goalkeeper Feedback is intended to be used for rapid-response BCIs (e.g., Ramsey et al., 2009). In a rapid-response BCI, subjects are forced to alter their brain states in response to a cue as fast as possible in order to achieve a given task. The major aim of such experiments is to increase the bit-rates of BCI systems.

The main components of this Feedback are a ball and a keeper bar (see Figure 5). The ball starts at the top of the screen and then descends automatically with a predefined velocity while the keeper is controlled by the subject. The task of the subject is to alter the keeper position via the BCI such that the keeper catches the ball. The powerbar at the bottom of the screen visualizes the classifier output and thus gives immediate feedback to the subject, thereby helping them to perform the task more successfully.

Figure 5. The Goalkeeper Feedback: (A) Feedback during the trial start animation. (B) Feedback during the trial.

Each trial starts with an animation (two hemispheres approaching each other) that is intended to make the beginning of the actual trial (i.e., the descent of the ball) predictable for the subject. After the animation, the ball chooses one of two directions at random (i.e., left or right) and starts to descend. Since subjects normally need some time to adapt their brain states according to the direction of ball movement, the keeper is not controllable (i.e., the classifier output is not used) during the first period of the descent (e.g., the first 300 ms). There are three main positions for the keeper: middle (initial position), left, and right. If the classifier threshold on either side is reached, the keeper changes its position to the respective side. This is typically realized as a non-revertible jump, in keeping with the goalkeeper metaphor, but optionally a different behavior can be chosen. The velocity of the ball can be gradually increased during the experiment in order to force the subjects to speed up their response.

In addition to a number of general variables which can be used to define the time course (e.g., the duration of a trial or of the start animation) or the layout (e.g., the size of the visual components of the Feedback), there are several main settings governing the workflow of the Feedback: (1) The powerbar can either show the direct classifier output or integrate the classifier output over time in order to smooth rapid fluctuations. (2) The keeper can either be set to perform exactly one move, or the Feedback can allow the keeper to return to other positions. (3) The position change of the keeper can either be realized as a rapid jump or as a smooth movement with a fixed duration. (4) Two additional control signals can optionally be visualized in the start animation by color changes between green and red of the two hemispheres. This option is inspired by the observation that a high pre-stimulus amplitude of sensorimotor rhythms (SMR) promotes better feedback performance in the subsequent trial (Maeder et al., 2010). (5) The trial can be prolonged if the keeper is still in the initial position (i.e., the subject has not yet reached the threshold for either side).
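
The integration scheme for option (1) is not specified here; a simple exponential moving average is one plausible choice, sketched below.

    # One plausible smoothing for option (1): an exponential moving average
    # of the classifier output. The integration scheme and alpha value are
    # assumptions; the Feedback's actual method may differ.
    def integrate(prev, sample, alpha=0.1):
        return (1.0 - alpha) * prev + alpha * sample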

The Feedback is implemented as a subclass of the MainLoop_Feedback base class provided by Pyff, using Pygame1 and the Python Imaging Library (PIL)6 for presentation.

4.6 Other Feedbacks and Stimuli

While the previous selection of paradigms focused on brain–computer interface research, Pyff also ships with a growing number of paradigms that are widely used in neuroscience, neuroergonomics, and psychophysics. The following list gives a few examples.

d2 test A computerized version of the d2 test (Brickenkamp, 1972), a psychological pen-and-paper test to assess a subject’s concentration and performance ability. The complete listing of the application is in Section 13.

n-back A parametric paradigm to induce workload. Symbols are presented in a chronological sequence, and upon each presentation participants are required to match the current symbol with the nth preceding symbol (Gevins and Smith, 2000); the matching rule is sketched after this list.

boring clock A computerized version of the Mackworth clock test, a task to investigate long-term vigilance and sustained attention. Participants monitor a virtual clock whose pointer makes rare jumps of two steps, and they press a button upon detecting such an event (Mackworth, 1948).

oddball A versatile implementation of the oddball paradigm using visual, auditory, or tactile stimuli.
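
As a minimal sketch of the n-back matching rule referenced above (the sequence and n below are arbitrary examples):

    # n-back matching rule: a symbol is a target iff it equals the symbol
    # presented n steps earlier. The sequence and n are arbitrary examples.
    from collections import deque

    def nback_targets(symbols, n=2):
        """Yield (symbol, is_target) pairs for an n-back sequence."""
        history = deque(maxlen=n)
        for s in symbols:
            yield s, len(history) == n and history[0] == s
            history.append(s)

    # list(nback_targets("ABABC")) ->
    # [('A', False), ('B', False), ('A', True), ('B', True), ('C', False)]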

4.7 Support for Special Hardware

Since Python can utilize existing libraries (e.g., C libraries, DLLs), it is easy to use special hardware within Pyff. Pyff already provides a Python module for the IntelliGaze eye tracker by Alea Technologies and the g.STIMbox by g.tec. Other modules will follow.

5 Using Pyff

So far, we have stressed the point that implementing an experiment using Pyff is fast and easy, but what exactly needs to be done? Facing the task of implementing a paradigm, one has three options. First, check whether a similar solution already exists in Pyff that can be used for the given experiment with minor modifications. If it is a standard experiment in BCI or psychology, chances are high that it is already included in Pyff. Second, if no available application matches the given requirements, a new Feedback has to be written. In this case one should check whether one of the provided base classes matches the task or functionality of the paradigm (e.g., is it a P300 task? is it written using Pygame?). If so, the base class can be used to develop the feedback or stimulus application, which will drastically reduce the code and thus the time required to create the Feedback. Third, if everything has to be written from scratch and no base class fits, one should write the code in a way that is well structured and reusable. A base class may then be distilled from the application to reduce the amount of code to be written the next time a similar task appears. In the second and third case, the authors would ideally send their code to us so that it can be included in the Pyff framework. Thus the collection of stimuli and base classes will grow, making the first case more probable over time.

5.1 Ease of Use

We have successfully used Pyff for teaching and experiments in our group since 2008. Our experience shows that researchers and students from various backgrounds quickly learn how to utilize Pyff to get their experiments done. We discuss three exemplary cases.

In the early stages of Pyff, we asked a student to re-implement an existing Matlab Feedback in Pyff, to test whether our new framework was capable of substituting the Matlab solutions we used for our experiments at the time. The paradigm was a cursor-arrow task, a standard BCI experiment. Given only the framework and its documentation, the student completed the task within a few days without any questions. The resulting Feedback looked identical to the Matlab version but ran much more smoothly.

Shortly thereafter, we wanted to compare the effort needed to implement a Feedback in Matlab versus our framework. We asked a student who was proficient in both Matlab and Python to implement the same Feedback in each. The paradigm was quite simple: a ball falls from the top of the screen, and the user’s task is to “catch” it with a bar at the bottom of the screen that can only be moved left or right. The student implemented both Feedbacks within days without any questions regarding Pyff or Matlab. Afterward he told us that he had no problems with either implementation. Since the task was relatively simple, he could use Matlab’s plotting primitives to draw the Feedback, which was easier than with Pygame, whose documentation he first had to read. He reported, however, that a more complex paradigm would have required much more effort in Matlab and only a little more in Pyff.

In a third test we chose a rather complex paradigm to see how well our framework copes with demanding requirements. The idea was to simulate a liquid floating on a plane that can tilt in any direction. The plane has three or more corners, and the user’s task is to tilt the plane so that the liquid flows to a designated corner. The simulation included a realistic physical model of liquid motion. A Ph.D. student implemented the Feedback in Pyff; the physical model was developed in C and wrapped in Python using SWIG (Beazley, 1996). He implemented the Feedback without any questions regarding the framework. The result is an impressive simulation that runs very smoothly within our framework.

These three examples indicate that students and researchers without prior experience with Pyff have no difficulties implementing paradigms in a short time. The tasks ranged from simple Matlab-versus-Python comparisons to significantly more complex applications. This is consistent with our more than 2 years of experience using Pyff in our lab, an environment where co-workers and students use it regularly for teaching and experiments.

6 Conclusion

The Pythonic Feedback Framework provides a platform for writing high-quality stimulus and feedback applications with minimal effort, even for non-computer scientists.

Pyff’s concept of Feedback base classes allows for rapid feedback and stimulus application development, e.g., oddball paradigms, ERP-based typewriters, Pygame-based applications, etc. Moreover, Pyff already includes a variety of stimulus presentations and feedback applications which are ready to be used instantly or with minimal modifications. This list is ever-growing, as we constantly develop new ones, and other groups will hopefully join the effort.

By providing an interface utilizing well-known standard protocols and formats, this framework should be adaptable to most existing neuroscience systems. Such a unified framework creates the unique opportunity of exchanging neurofeedback applications and stimuli between different groups, even if individual systems are used for signal acquisition, processing, and classification.

At the time of writing Pyff has been used in four labs and several publications (Ramsey et al., 2009; Acqualagna et al., 2010; Höhne et al., 2010; Maeder et al., 2010; Schmidt et al., 2010; Treder and Blankertz, 2010; Venthur et al., 2010).

We consider Pyff stable software which is actively maintained. To date, new versions are released every few months, including bug fixes and new features. We plan to continue development of Pyff, as our group uses it to conduct many experiments and other groups are starting to adopt it. As such, we realize that backward compatibility of the API is very important, and we work hard to avoid breaking existing experiments when making changes.

Pyff has a homepage7, where users can download current as well as older versions of Pyff. The homepage also provides online documentation for each Pyff version and a link to Pyff’s mailing list for developers and users and to Pyff’s repository.

Pyff is free software and available under the terms of the GNU general public license (GPL)8. Pyff currently requires Python 2.69 and PyQt version 410 or later. Some Feedbacks may require additional Python modules. Pyff runs under all major operating systems including Linux, Mac, and Windows.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This work was partly supported by grants of the Bundesministerium für Bildung und Forschung (BMBF) (FKZ 01IB001A, 01GQ0850) and by the FP7-ICT Programme of the European Community, under the PASCAL2 Network of Excellence, ICT-216886. This publication only reflects the authors’ views. Funding agencies are not liable for any use that may be made of the information contained herein. We would also like to thank the reviewers, who helped to substantially improve the manuscript.

Footnotes

  1. ^Pygame homepage. http://pygame.org/
  2. ^pyglet homepage. http://pyglet.org/
  3. ^PyOpenGL homepage. http://pyopengl.sourceforge.net/
  4. ^NumPy homepage. http://numpy.scipy.org/
  5. ^Panda3D homepage. http://panda3d.org/
  6. ^PIL homepage. http://www.pythonware.com/products/pil/
  7. ^Pyff homepage. http://bbci.de/pyff/
  8. ^GNU general public license. http://www.gnu.org/copyleft/gpl.html
  9. ^Python homepage. http://python.org/
  10. ^PyQt homepage. http://www.riverbankcomputing.co.uk/software/pyqt/

References

Acqualagna, L., Treder, M. S., Schreuder, M., and Blankertz, B. (2010). A novel brain–computer interface based on the rapid serial visual presentation paradigm. Proc. 32nd Ann. Int. IEEE EMBS Conf., 2686–2689.

Beazley, D. M. (1996). SWIG: an easy to use tool for integrating scripting languages with C and C++. In TCLTK'96: Proceedings of the 4th Conference on USENIX Tcl/Tk Workshop. Berkeley, CA: USENIX Association, 15. URL http://www.swig.org/

Blankertz, B., Krauledat, M., Dornhege, G., Williamson, J., Murray-Smith, R., and Müller, K.-R. (2007). A note on brain actuated spelling with the Berlin brain-computer interface. Lect. Notes Comput. Sci. 4555, 759.

Brainard, D. (1997). The psychophysics toolbox. Spat. Vis. 10, 433–436.

Brickenkamp, R. (1972). Test d2. Göttingen, Germany: Hogrefe Verlag für Psychologie.

Brickenkamp, R., and Zillmer, E. (1998). D2 Test of Attention. Göttingen, Germany: Hogrefe and Huber.

Brouwer, A.-M., and van Erp, J. B. F. (2010). A tactile P300 brain-computer interface. Front. Neurosci. 4:19. doi: 10.3389/fnins.2010.00019.

Brüderle, D., Müller, E., Davison, A., Muller, E., Schemmel, J., and Meier, K. (2009). Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system. Front. Neuroinformatics 3:17. doi: 10.3389/neuro.11.017.2009.

Cheng, M., Gao, X., Gao, S., and Xu, D. (2002). Design and implementation of a brain-computer interface with high transfer rates. IEEE Trans. Biomed. Eng. 49, 1181–1186.

Dornhege, G., Millán, J. del R., Hinterberger, T., McFarland, D., and Müller, K.-R. (eds). (2007). Toward Brain-Computer Interfacing. Cambridge, MA: MIT Press.

Drewes, R., Zou, Q., and Goodman, P. (2009). Brainlab: a Python toolkit to aid in the design, simulation, and analysis of spiking neural networks with the NeoCortical simulator. Front. Neuroinformatics 3:16. doi: 10.3389/neuro.11.016.2009.

Geller, A., Schleifer, I., Sederberg, P., Jacobs, J., Kahana, M. (2007). PyEPL: a cross-platform experiment-programming library. Behav. Res. Methods 39, 950–958.

Gevins, A., and Smith, M. (2000). Neurophysiological measures of working memory and individual differences in cognitive ability and cognitive style. Cereb. Cortex 10, 829.

Herrmann, C. S. (2001). Human EEG responses to 1-100 hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Exp. Brain Res. 137, 346–353.

Höhne, J., Schreuder, M., Blankertz, B., and Tangermann, M. (2010). Two-dimensional auditory P300 speller with predictive text system, Proc. 32nd Ann. Int. IEEE EMBS Conf. 4185–4188.

Ince, R., Petersen, R., Swan, D., and Panzeri, S. (2009). Python for information theoretic analysis of neural data. Front. Neuroinformatics 3:4. doi: 10.3389/neuro.11.004.2009.

Jurica, P., and Van Leeuwen, C. (2009). OMPC: an open-source MATLAB®-to-Python compiler. Front. Neuroinformatics 3:5. doi: 10.3389/neuro.11.005.2009.

Krusienski, D. J., and Allison, B. Z. (2008). Harmonic coupling of steady-state visual evoked potentials. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2008, 5037–5040.

Lutz, M. (2006). Programming Python. Sebastopol, CA, USA: O’Reilly Media, Inc.

Mackworth, N. (1948). The breakdown of vigilance during prolonged visual search. Q. J. Exp. Psychol. 1, 6–21.

Maeder, C., Sannelli, C., Haufe, S., Lemm, S., and Blankertz, B. (2010). Effect of prestimulus SMR amplitude on BCI performance. Poster at the TOBI Workshop ‘Integrating Brain-Computer Interfaces with Conventional Assistive Technology’ in Graz.

Müller-Putz, G., and Pfurtscheller, G. (2008). Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans. Biomed. Eng. 55, 361–364.

Müller, K.-R., and Blankertz, B. (2006). Toward noninvasive brain-computer interfaces. IEEE Signal Process Mag. 23, 125–128.

Pecevski, D., Natschläger, T., and Schuch, K. (2009). PCSIM: a parallel simulation environment for neural circuits fully integrated with python. Front. Neuroinformatics 3:11. doi: 10.3389/neuro.11.011.2009.

Peirce, J. W. (2007). PsychoPy: psychophysics software in Python. J. Neurosci. Methods 162, 8–13.

Pfurtscheller, G., Leeb, R., Keinrath, C., Friedman, D., Neuper, C., Guger, C., and Slater, M. (2006). Walking from thought. Brain Res. 1071, 145–152.

Prechelt, L. (2000). An empirical comparison of C, C++, Java, Perl, Python, Rexx and Tcl. IEEE Comput. 33, 23–29.

Ramsey, L., Tangermann, M., Haufe, S., and Blankertz, B. (2009). Practicing fast-decision BCI using a “goalkeeper” paradigm. BMC Neurosci. 10 (Suppl. 1), P69.

Schalk, G., McFarland, D., Hinterberger, T., Birbaumer, N., and Wolpaw, J. (2004). BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51, 1034–1043.

Schmidt, N. M., Blankertz, B., and Treder, M. S. (2010). Alpha-modulation induced by covert attention shifts as a new input modality for EEG-based BCIs. Proc. 2010 IEEE Conf. Syst. Man Cybernet., (in press).

Schreiner, T. (2008). Development and Application of a Python Scripting Framework for BCI2000. Master's thesis, Universität Tübingen, Tübingen.

Schreuder, M., Blankertz, B., and Tangermann, M. (2010). A new auditory multi-class brain-computer interface paradigm: spatial hearing as an informative cue. PLoS One 5, e9813. doi: 10.1371/journal.pone.0009813.

Spacek, M., Blanche, T., and Swindale, N. (2008). Python for large-scale electrophysiology. Front. Neuroinformatics 2:9. doi: 10.3389/neuro.11.009.2008.

Strangman, G., Zhang, Q., and Zeffiro, T. (2009). Near-infrared neuroimaging with NinPy. Front. Neuroinformatics 3:12. doi: 10.3389/neuro.11.012.2009.

Straw, A. D. (2008). Vision Egg: an open-source library for realtime visual stimulus generation. Front. Neuroinformatics 2:4. doi: 10.3389/neuro.11.004.2008.

Tanenbaum, A. S. (2001). Modern Operating Systems. Upper Saddle River, NJ, USA: Prentice Hall PTR.

Treder, M. S., and Blankertz, B. (2010). (C)overt attention and visual speller design in an ERP-based brain-computer interface. Behav. Brain Funct. 6, 28.

Venthur, B., Blankertz, B., Gugler, M. F., and Curio, G. (2010). Novel applications of BCI technology: psychophysiological optimization of working conditions in industry. Proc. 2010 IEEE Conf. Syst. Man Cybernet., (in press).

Williamson, J., Murray-Smith, R., Blankertz, B., Krauledat, M., and Müller, K.-R. (2009). Designing for uncertain, asymmetric control: interaction design for brain-computer interfaces. Int. J. Hum. Comput. Stud. 67, 827–841.

Keywords: neuroscience, BCI, Python, framework, feedback, stimulus presentation

Citation: Venthur B, Scholler S, Williamson J, Dähne S, Treder MS, Kramarek MT, Müller K and Blankertz B (2010) Pyff – a Pythonic framework for feedback applications and stimulus presentation in neuroscience. Front. Neurosci. 4:179. doi: 10.3389/fnins.2010.00179

Received: 14 June 2010; Accepted: 02 October 2010;
Published online: 02 December 2010.

Edited by:

David N. Kennedy, University of Massachusetts Medical School, USA

Reviewed by:

Jonathan Peirce, University of Nottingham, UK
K. Jarrod Millman, University of California at Berkeley, USA
Satrajit S. Ghosh, Massachusetts Institute of Technology, USA

Copyright: © 2010 Venthur, Scholler, Williamson, Dähne, Treder, Kramarek, Müller and Blankertz. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.

*Correspondence: Bastian Venthur, Machine Learning Laboratory, Berlin Institute of Technology, Franklinstraße 28/29 10587 Berlin, Germany. e-mail: bastian.venthur@tu-berlin.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.