Just listened to an interesting talk by Dimitris Margaritis on his work (with Facundo Bromberg) "Improving the Reliability of Causal Discovery from Small Data Sets using Argumentation", which will be published in JMLR. It's a new constraint-based method for learning graphical model structures in the presence of unreliable independence tests.
Learning graphical model structures is an important but difficult problem in machine learning: given a set of data generated by an unknown model, one has to recover the structure of that model. There are two families of methods for this, score-based and constraint-based. The former searches for a structure (usually greedily) that maximizes some score, e.g. likelihood or posterior. The latter runs independence tests among variables, takes the results as constraints, and solves a constraint satisfaction problem.
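To make the constraint-based side concrete, here's a minimal sketch of how a single independence test yields a constraint, assuming discrete data and a chi-squared test. The code is my own illustration (function names and all), not from the paper:

```python
import numpy as np
from scipy.stats import chi2_contingency

def marginal_independence_test(data, x, y, alpha=0.05):
    """Chi-squared test of independence between discrete columns x and y.

    Returns (is_independent, p_value); a low p-value is evidence of
    dependence.
    """
    # Build the contingency table of joint counts.
    table = np.zeros((data[:, x].max() + 1, data[:, y].max() + 1))
    for row in data:
        table[row[x], row[y]] += 1
    _, p_value, _, _ = chi2_contingency(table)
    return p_value > alpha, p_value

# Tiny demo: column 1 is a noisy copy of column 0, column 2 is pure noise.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=500)
data = np.column_stack([a, (a + (rng.random(500) < 0.1)) % 2,
                        rng.integers(0, 2, size=500)])
print(marginal_independence_test(data, 0, 1))  # dependent: low p-value
print(marginal_independence_test(data, 0, 2))  # independent: high p-value
```

Each verdict then acts as a constraint on candidate structures: for example, an "independent" result rules out every graph that implies a dependence between the two variables.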
Traditionally, in constraint-based methods, each independence test removes the candidate structures inconsistent with its result (according to a set of axioms of conditional independence). This implicitly assumes the tests are reliable, which is mostly untrue in practice, especially on small data sets. So what can be done when facing unreliable independence tests? The work presented by Margaritis proposes to run multiple tests and find all the contradictions among the results. Each test also comes with a reliability score (e.g., based on its p-value). One can then accept or reject test results so that the accepted set is consistent while the overall reliability is maximized. Margaritis actually uses a simple, somewhat greedy method which, in my understanding, may not truly maximize the reliability, but the experimental results are still better than those of traditional methods that ignore the reliability issue.
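To illustrate the accept/reject idea, here's a toy sketch in Python. Everything in it, the names and the reliability heuristic included, is my own illustration rather than the paper's algorithm: the actual method resolves contradictions through an argumentation framework over the independence axioms, while this placeholder only catches direct clashes between opposite verdicts on the same test.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    statement: tuple     # e.g. ('X', 'Y', ('Z',)) identifying the test
    independent: bool    # outcome of the test
    p_value: float

def reliability(t):
    # Heuristic score (my assumption, not the paper's measure): a high
    # p-value makes an "independent" verdict credible, a low p-value
    # makes a "dependent" verdict credible.
    return t.p_value if t.independent else 1.0 - t.p_value

def contradicts(t, accepted):
    # Placeholder check that only catches direct clashes: the same
    # statement already accepted with the opposite outcome. The actual
    # method also derives indirect contradictions via the axioms.
    return any(a.statement == t.statement and a.independent != t.independent
               for a in accepted)

def greedy_resolve(tests):
    """Accept tests from most to least reliable, skipping contradictions."""
    accepted = []
    for t in sorted(tests, key=reliability, reverse=True):
        if not contradicts(t, accepted):
            accepted.append(t)
    return accepted

# Two conflicting verdicts on the same test: the confident "dependent"
# result (p = 0.04) beats the lukewarm "independent" one (p = 0.40).
tests = [TestResult(('X', 'Y', ()), True, 0.40),
         TestResult(('X', 'Y', ()), False, 0.04)]
print(greedy_resolve(tests))
```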
Thursday, October 2, 2008