Decoding neural activity with machine learning methods has been an emerging area for the past few years. The neural data is usually obtained by presenting the word and/or image of a concept to an experiment participant and recording their brain activity (e.g., with fMRI or EEG). The concepts tested so far are very simple (e.g., concrete nouns, and more recently adjective-noun compositions), but I believe experiments on more complex and abstract concepts are to be expected in the near future (or are already in progress!). Given the neural imaging data, one natural task is to learn the mapping between concepts and images. An intermediate layer of semantic features can be added between concepts and images, which is intuitive and also makes things more tractable. So now the problems are what the right semantic features are, and how to learn the mappings between these layers.
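To make the concept → semantic features → image pipeline concrete, here is a minimal sketch of one plausible form for the features-to-image mapping: a linear map from a concept's feature vector to its voxel activations, fit by ridge-regularized least squares. All the array shapes and data here are made up for illustration; real studies would plug in actual feature vectors and scans.

```python
import numpy as np

# Hypothetical data, one row per stimulus concept:
# F[i] = semantic feature vector of concept i (e.g., 25 verb co-occurrences),
# Y[i] = observed brain image of concept i, flattened to a voxel vector.
rng = np.random.default_rng(0)
n_concepts, n_features, n_voxels = 60, 25, 500
F = rng.random((n_concepts, n_features))
Y = rng.random((n_concepts, n_voxels))

# Learn the features -> image mapping as one linear model per voxel,
# solved for all voxels at once with a small ridge penalty.
lam = 1.0
W = np.linalg.solve(F.T @ F + lam * np.eye(n_features), F.T @ Y)

# Predict the brain image of an unseen concept from its feature vector.
f_new = rng.random(n_features)
y_pred = f_new @ W
print(y_pred.shape)  # (500,) -- one predicted activation per voxel
```

The appeal of the intermediate layer shows up here: the model never has to see every concept, only enough concepts to pin down the feature-to-voxel weights, after which any concept with a known feature vector can be mapped to a predicted image.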
For the first problem, the features in earlier work were constructed more or less manually. In "Predicting Human Brain Activity Associated with the Meanings of Nouns" (Science, May 2008), the semantic features are the co-occurrence frequencies of the stimulus noun with 25 manually selected verbs in a large corpus. In a more recent paper, "A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes" (PLoS ONE, Jan 2010), the features are instead discovered from the fMRI data itself by factor analysis (and the result is very interesting: the three main factors relate to manipulation, shelter, and eating, all of which were crucial to the survival of our primitive ancestors). Once the semantic features are specified, the second problem can be tackled by applying common machine learning predictors such as Naive Bayes.
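Here is one way the two steps might be wired together, sketched with scikit-learn: factor analysis to discover a few latent features from the imaging data (in the spirit of the PLoS ONE paper), then a Naive Bayes classifier to decode concepts from those features. Again, the shapes and random data are stand-ins, and this is only an illustrative pipeline, not the papers' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.naive_bayes import GaussianNB

# Hypothetical data: one flattened fMRI image per trial, plus the index
# of the concept shown on that trial.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_concepts = 120, 500, 12
X = rng.random((n_trials, n_voxels))
y = rng.integers(0, n_concepts, size=n_trials)

# Step 1: discover a small set of latent semantic features directly
# from the imaging data via factor analysis.
fa = FactorAnalysis(n_components=3, random_state=0)
Z = fa.fit_transform(X)  # (n_trials, 3) latent feature scores

# Step 2: decode concepts from the latent features with Gaussian Naive
# Bayes, which treats each feature as independent given the concept.
clf = GaussianNB().fit(Z[:100], y[:100])
print(clf.predict(Z[100:]))  # decoded labels for the held-out trials
```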