ORIGINAL RESEARCH ARTICLE

Front. Neuroinform., 24 August 2012 | doi: 10.3389/fninf.2012.00024

Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study

  • 1Akama Laboratory, Graduate School of Decision Science and Technology, Tokyo Institute of Technology, Tokyo, Japan
  • 2Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
  • 3Centre for Mind/Brain Sciences, University of Trento, Rovereto, Italy
  • 4Department of E&I S, Tokyo City University, Yokohama, Japan
  • 5Department of Computer Science and Electronic Engineering, University of Essex, Colchester, UK

Both embodied and symbolic accounts of conceptual organization would predict partial sharing and partial differentiation between the neural activations seen for concepts activated via different stimulus modalities. But cross-participant and cross-session variability in BOLD activity patterns makes analyses of such patterns with MVPA methods challenging. Here, we examine the effect of cross-modal and individual variation on the machine learning analysis of fMRI data recorded during a word property generation task. We presented the same set of living and non-living concepts (land mammals or work tools) to a cohort of Japanese participants in two sessions: the first using auditory presentation of spoken words; the second using visual presentation of words written in Japanese characters. Classification accuracies confirmed that these semantic categories could be detected in single trials, with within-session predictive accuracies of 80–90%. However, cross-session prediction (learning from auditory-task data to classify data from the written-word task, or vice versa) suffered a performance penalty, achieving 65–75% (still individually significant at p ≪ 0.05). We carried out several follow-on analyses to investigate the reason for this shortfall, concluding that neither temporal nor spatial distributional differences alone could account for it. Rather, combined spatio-temporal patterns of activity need to be identified for successful cross-session learning, which suggests that feature selection strategies could be modified to take advantage of such patterns.
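As a rough illustration of the within-session versus cross-session decoding scheme summarized above (not the authors' actual analysis pipeline), the Python sketch below trains a linear classifier on single-trial activation patterns from one session and tests it on the other. The array names, the placeholder data, and the use of scikit-learn's logistic regression are assumptions made purely for illustration.

```python
# Minimal sketch of within- vs. cross-session MVPA decoding (illustrative only).
# Assumes X_audio, X_visual are (n_trials, n_voxels) arrays of single-trial
# activation patterns and y_audio, y_visual are binary category labels
# (e.g., 0 = land mammal, 1 = work tool). Not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
X_audio = rng.normal(size=(n_trials, n_voxels))   # placeholder auditory-session data
X_visual = rng.normal(size=(n_trials, n_voxels))  # placeholder visual-session data
y_audio = rng.integers(0, 2, size=n_trials)
y_visual = rng.integers(0, 2, size=n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Within-session accuracy: cross-validation inside the auditory session.
within = cross_val_score(clf, X_audio, y_audio, cv=5).mean()

# Cross-session accuracy: train on the auditory session, test on the visual one.
clf.fit(X_audio, y_audio)
cross = clf.score(X_visual, y_visual)

print(f"within-session accuracy: {within:.2f}")
print(f"cross-session accuracy:  {cross:.2f}")
```

With real data, the gap between the two scores would correspond to the cross-session performance penalty discussed in the article; with the random placeholder data above, both scores hover around chance.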

Keywords: fMRI, MVPA, GLM, machine learning, computational neurolinguistics, individual variability, embodiment

Citation: Akama H, Murphy B, Na L, Shimizu Y and Poesio M (2012) Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study. Front. Neuroinform. 6:24. doi: 10.3389/fninf.2012.00024

Received: 29 March 2012; Paper pending published: 10 May 2012;
Accepted: 30 July 2012; Published online: 24 August 2012.

Edited by:

Ulla Ruotsalainen, Tampere University of Technology, Finland

Reviewed by:

Graham J. Galloway, The University of Queensland, Australia
Satrajit S. Ghosh, Massachusetts Institute of Technology, USA

Copyright © 2012 Akama, Murphy, Na, Shimizu and Poesio. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.

*Correspondence: Hiroyuki Akama, Graduate School of Decision Science and Technology, Tokyo Institute of Technology, W9-10, 2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan. e-mail: akama@dp.hum.titech.ac.jp
