Machine Learning for Decoding Human Brain States from Functional MR Images

 

By Tom M. Mitchell, Carnegie Mellon University

 

Over the past decade, functional Magnetic Resonance Imaging (fMRI) has emerged as a powerful new instrument for observing activity in the human brain. A typical fMRI experiment produces a three-dimensional image characterizing the human subject's brain activity every half second, at a spatial resolution of a few millimeters.  fMRI is already causing a revolution in the fields of Psychology and Cognitive Neuroscience.  We consider the role of Machine Learning algorithms in analyzing fMRI data.  In particular, we focus on training classifiers to decode the cognitive state of a human subject from their observed fMRI brain activation.  We present several case studies in which we have trained classifiers to distinguish cognitive states such as whether the subject is looking at a picture or a sentence, and whether the word the subject is viewing describes food, people, or buildings.  We describe the results of these fMRI studies, and examine the machine learning methods needed to successfully train classifiers on such extremely high-dimensional (roughly $10^5$ features), extremely sparse (tens of training examples), and noisy data sets.
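To make the high-dimensional, few-example regime described above concrete, the following is a minimal sketch in Python, not the pipeline used in the studies themselves. It assumes scikit-learn, synthetic data standing in for real fMRI activation, and an arbitrary choice of univariate feature selection followed by a regularized linear classifier; the classifiers and feature-selection methods actually used in the case studies are discussed later in the article.

```python
# Sketch: decoding a binary "cognitive state" from ~10^5 noisy features
# given only tens of labeled trials (synthetic data, hypothetical setup).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_trials, n_voxels = 40, 100_000        # tens of examples, ~10^5 voxel features
y = rng.integers(0, 2, size=n_trials)   # e.g., picture (0) vs. sentence (1)

# Synthetic "activation" data: mostly noise, with a small subset of voxels
# whose mean activation differs between the two cognitive states.
X = rng.normal(size=(n_trials, n_voxels))
informative = rng.choice(n_voxels, size=200, replace=False)
X[:, informative] += 0.5 * y[:, None]

# Select the most discriminative voxels, then fit an L2-regularized linear
# classifier.  Feature selection sits inside the pipeline so it is re-fit
# within each cross-validation fold, avoiding information leakage.
clf = make_pipeline(
    SelectKBest(f_classif, k=500),
    LogisticRegression(C=1.0, max_iter=1000),
)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2f}")
```

With only tens of trials, leave-one-out cross-validation and aggressive dimensionality reduction of this kind are typical ways to obtain a usable accuracy estimate; the specific parameter values here (number of selected voxels, regularization strength) are illustrative assumptions only.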