Ph.D. Yale University, 1979
Explanation-based learning is a branch of machine learning in which prior knowledge of the world is integrated into the process of automatically forming new concepts. Many interesting machine learning problems with real-world applications are hard. Without additional restrictions, learning to select an appropriate behavior or course of action to carry out in the real world is, in the worst case, computationally intractable. Furthermore, experience with real-world systems indicates that many tasks that seem to require intelligence fall at the difficult end of this spectrum.
In explanation-based learning, the interactions among background-knowledge axioms impose an ordering on the space of conceivable hypotheses. Current work studies plausible, rather than strictly logical, inference as the definition of which knowledge interactions are relevant; this points toward a new semantics for declarative knowledge representations. Explanation-based learning has been applied to robotics, planning, experimentation, and the control of dynamical systems.
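The core mechanism described above, using a domain theory to explain a single example and then compiling that explanation into a new concept, can be sketched in a few lines. This is a toy propositional illustration of explanation-based generalization, not any particular system's implementation; the domain theory, predicate names, and the notion of "operational" predicates are all illustrative assumptions.

```python
# Toy sketch of explanation-based generalization (EBG).
# The domain theory and predicate names below are illustrative only.

# Domain theory: each concept is explained by a conjunction of sub-concepts.
DOMAIN_THEORY = {
    "safe_to_stack(x,y)": ["lighter(x,y)", "flat_top(y)"],
    "lighter(x,y)": ["weight_less(x,y)"],
}

# "Operational" predicates are those directly testable on an example.
OPERATIONAL = {"weight_less(x,y)", "flat_top(y)"}

def explain(goal):
    """Expand a goal through the domain theory down to operational leaves.

    The resulting leaves constitute the explanation of why the goal holds.
    """
    if goal in OPERATIONAL:
        return [goal]
    leaves = []
    for sub in DOMAIN_THEORY[goal]:
        leaves.extend(explain(sub))
    return leaves

def generalize(goal):
    """Compile the explanation into one macro-rule: goal <- operational leaves.

    This is the learned concept; future cases match it directly, without
    re-deriving the intermediate steps of the explanation.
    """
    return goal, explain(goal)

head, body = generalize("safe_to_stack(x,y)")
print(head, "<-", " & ".join(body))
# -> safe_to_stack(x,y) <- weight_less(x,y) & flat_top(y)
```

The point of the sketch is that the ordering over hypotheses comes from the domain theory itself: only conjunctions derivable as explanations are ever considered, rather than the full space of conceivable concept definitions.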