Monday, January 5, 2009
Modeling online sentence parsing by particle filter
This is an interesting paper from NIPS 2008: modeling the effects of memory on human online sentence processing with particle filters. When we read or listen to a sentence, we receive the words incrementally (i.e., one after another) and construct an interpretation of the sentence as we go. Dynamic programming can be used to parse a sentence incrementally, but since it tracks every possible analysis and always recovers the correct parse, it can't explain why people may fail to comprehend "garden-path sentences" on their first attempt. For example, when reading the sentence "the horse raced past the barn fell", we may fail at the last word because we are likely to have taken "raced" as the main verb before reading "fell".

Previous work modeled this effect with pruning: only a set of high-probability partial parses is kept after each word, so the correct parse may be dropped partway through. This paper adopts a different idea: estimating, in a resource-bounded way, the posterior over partial parses given the words received so far. A particle filter is thus a natural choice, with each word as an observation and the partial parses as the hidden states. Again, the correct parse may be dropped partway through because only a finite set of particles is maintained, which explains the garden-path effect.

In addition, this method can explain the "digging-in effect": the more material between the onset of the ambiguity and the disambiguating word, the harder the sentence is to comprehend. For example, compare this sentence with the previous one: "the horse raced past the barn that is big and old fell". The explanation is that with more words before the disambiguation point, the particles carrying the correct parse are more likely to be dropped during resampling.
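To make the mechanism concrete, here is a minimal toy sketch in Python (my own illustration, not the paper's code). In the actual model each particle is a full partial parse sampled from an incremental probabilistic parser; this sketch collapses a particle down to a binary choice between the main-verb and reduced-relative analyses, with made-up prior and likelihood numbers, and the drift comes purely from resampling noise at every word. Running it shows the longer sentence failing more often than the shorter one, which is the digging-in effect in miniature:

```
import random

# Toy particle filter over two competing analyses of
# "the horse raced past the barn ... fell":
#   MV = "raced" is the main verb (the garden path)
#   RC = "raced" opens a reduced relative clause (the correct parse)
# The prior and likelihoods are made-up illustrative numbers.
PRIOR = {"MV": 0.8, "RC": 0.2}  # the main-verb reading is a priori preferred

def likelihood(word, analysis):
    # Only the disambiguating word "fell" distinguishes the analyses:
    # the main-verb reading cannot accommodate a second verb.
    if word == "fell":
        return 0.0 if analysis == "MV" else 1.0
    return 1.0  # all other words are (in this toy) equally likely under both

def comprehend(words, n_particles=20, rng=random):
    # Initialize particles by sampling analyses from the prior.
    particles = rng.choices(list(PRIOR), weights=list(PRIOR.values()),
                            k=n_particles)
    for word in words:
        weights = [likelihood(word, p) for p in particles]
        if sum(weights) == 0:
            return False  # every particle is inconsistent: comprehension fails
        # Resample: this is where the correct parse can be lost by chance,
        # and the more resampling steps before disambiguation, the likelier.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return True

def failure_rate(words, trials=5000, **kwargs):
    return sum(not comprehend(words, **kwargs) for _ in range(trials)) / trials

short = "the horse raced past the barn fell".split()
long_ = "the horse raced past the barn that is big and old fell".split()
print("short garden path, failure rate:", failure_rate(short))
print("long  garden path, failure rate:", failure_rate(long_))
```

With only 20 particles, the minority reduced-relative particles often drift to extinction before "fell" arrives, and the extra words in the longer sentence give them more resampling steps in which to die out.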
Labels: cognitive science