Monday, December 21, 2009
NIPS 2009 My List
Here is a list of NIPS 2009 papers that are interesting to me.

- Rethinking LDA: Why Priors Matter. Instead of fixed symmetric hyperparameters, a second Dirichlet prior is added on top of the Dirichlet prior over the topic distributions in LDA. This makes the model much more robust. For efficiency, point estimation can be used for the parameters of the first Dirichlet prior. (A rough sketch of the hierarchy appears after the list.)
- Decoupling Sparsity and Smoothness in the Discrete Hierarchical Dirichlet Process. A Dirichlet prior with small parameters has two effects on the posterior of a discrete distribution (e.g., a topic's word distribution in LDA): sparsity and smoothness, and both are controlled by the same concentration parameter. To decouple the two, this paper adds a set of Bernoulli variables to the model (sketched after the list).
- Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations. An application of the beta process (the prior underlying the Indian buffet process) to computer vision for image dictionary learning, which can be extended to classification, denoising, inpainting, etc.
- Differential Use of Implicit Negative Evidence in Generative and Discriminative Language Learning. Discriminative and generative language learning differ in their use of "implicit negative evidence", i.e., the absence of certain sentences. Psychological experiments show that humans are capable of both, depending on how the task is presented.
- Randomized Pruning: Efficiently Calculating Expectations in Large Dynamic Programs. In problems like grammar induction, dynamic programming is usually used to compute expectations, which can be very slow on large data sets. Pruning speeds up the computation but introduces bias. The paper proposes randomized pruning, a new MCMC-based technique that addresses this.
- Reading Tea Leaves: How Humans Interpret Topic Models. A human evaluation of the quality of the topics learned by topic models. They actually used Amazon Mechanical Turk to conduct the evaluation. One interesting result is that the held-out likelihood (or perplexity) may have a negative correlation with the measured quality of the topics.
- Posterior vs Parameter Sparsity in Latent Variable Models. A novel kind of sparsity bias: the objective function augments the likelihood with a regularization term on the posterior, and it is optimized by an EM-like algorithm. The regularization term can express sparsity biases that cannot easily be encoded as priors on the model parameters (e.g., a Dirichlet prior). (The general shape of such an objective is sketched after the list.)
- An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism. An extension of the Indian buffet process that adds a second (or further) layer of features connected to the first layer by a noisy-or mechanism, i.e., a low-level feature is the noisy-or of a few high-level features (see the sketch after the list). The new model learns more compact features and performs better than the IBP on various tasks.
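
A rough sketch of the hierarchy in "Rethinking LDA: Why Priors Matter", as I read it. The notation is mine, not the paper's: u is the uniform distribution over topics, m is the learned asymmetric base measure, and alpha, alpha' are concentration parameters; the pair (alpha, m) is what can be point-estimated instead of being fixed.

\[
m \sim \mathrm{Dirichlet}(\alpha'\, u), \qquad
\theta_d \mid m \sim \mathrm{Dirichlet}(\alpha\, m), \qquad
z_{dn} \mid \theta_d \sim \mathrm{Multinomial}(\theta_d)
\]

Standard LDA instead fixes \theta_d \sim \mathrm{Dirichlet}(\alpha u) with a symmetric, hand-set \alpha.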
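
For the sparsity/smoothness decoupling paper, my understanding of the construction, again in my own notation: a Bernoulli selector per topic-word pair decides which words topic k may use, and the Dirichlet smoothing then acts only over the selected words.

\[
b_{kv} \sim \mathrm{Bernoulli}(\pi_k), \qquad
\beta_k \mid b_k \sim \mathrm{Dirichlet}(\gamma\, b_k)
\]

The binary vector b_k controls sparsity (which entries of \beta_k are allowed to be non-zero), while \gamma alone controls smoothness over those entries, so the two effects no longer share one parameter.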
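
For the posterior vs parameter sparsity paper, here is the general shape of a posterior-regularized objective, as I understand this family of methods (q is an auxiliary distribution over the latent variables and R is the regularizer; this is a schematic form, not necessarily the paper's exact objective):

\[
\max_{\theta,\, q} \;\; \log p_\theta(x) \;-\; \mathrm{KL}\!\left( q(z) \,\|\, p_\theta(z \mid x) \right) \;-\; R(q)
\]

The EM-like algorithm alternates between updating q with \theta fixed (a KL projection penalized by R) and updating \theta with q fixed (an ordinary M-step).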
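
For the infinite factor model hierarchy, the noisy-or link between layers can be written schematically as follows (my notation: z^{(2)}_n are the higher-layer binary features of data point n, z^{(1)}_n the lower-layer ones, and w_{jk} the probability that active high-level feature j turns on low-level feature k):

\[
p\!\left( z^{(1)}_{nk} = 1 \,\middle|\, z^{(2)}_{n}, W \right) \;=\; 1 - \prod_{j} \left( 1 - w_{jk} \right)^{z^{(2)}_{nj}}
\]

In words, a low-level feature is on unless every active high-level feature connected to it fails to activate it.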