Caesar, Hockenmaier, Hoiem Win NSF CAREER Awards
Three University of Illinois computer science faculty members have received Faculty Early Career Development (CAREER) Awards from the National Science Foundation. Matthew Caesar, Julia Hockenmaier, and Derek Hoiem were each honored with the agency's most prestigious award for young faculty.
With his project "Getting RID of Bugs: Realizing Interactive Debugging of Networked Systems," Matthew Caesar proposes to design the first interactive debugging system for modern networked systems. Caesar takes the position that manual labor is a necessary evil of debugging networked systems, but believes the process can be made vastly simpler with in-network support for debugging.
“The Internet is the most complex distributed software infrastructure ever created,” says Caesar. It’s this complexity that makes it particularly prone to bugs introduced by human error. Research to date on debugging modern networked systems has focused on automation. According to Caesar, however, the enormous complexity of such systems and their fundamental need for domain-specific knowledge have rendered such approaches limited in practice, leaving debugging a painstakingly manual process.
Caesar will build on his previous work in network architecture and network failure diagnosis to develop new techniques and tools for interactive debugging on WAN systems. His work aims to make significant contributions to network architecture and protocol design by creating a new network layer substrate that allows for tight controls on network execution and extensions to support debugging in untrusted environments.
Caesar explains the need for his systems by pointing out that network and service providers today spend billions hiring armies of skilled developers and troubleshooters. “Networks that can be rapidly repaired after exceptions are an essential component of disaster survival and recovery for business and communication systems, and they can accelerate deployment of networks in underdeveloped regions lacking experienced technicians.”
With her project "Bayesian Models for Lexicalized Grammars," Julia Hockenmaier is focusing on the task of statistical parsing: finding the most likely grammatical analysis of a sentence. Hockenmaier’s approach is based on a combination of computational and linguistic expertise that she sees as key to long-term success in the field of Natural Language Processing.
“Although NLP applications are a part of our everyday lives, with grammar checkers in word processing applications, dialog and speech recognition systems in customer service hotlines, and translation systems provided by search engines, we are far from having accurate NLP systems for a wide range of domains,” says Hockenmaier.
Hockenmaier’s work promises to overcome some of the fundamental limitations of current approaches by advancing NLP through the application of machine learning to linguistically motivated representations.
“In particular, we argue that the preceding linguistic context in which an utterance occurs provides information that can help us better determine which interpretation is correct,” explains Hockenmaier. “Models such as mine, which take this context into account, promise to be less dependent on training data, and thus significantly better at adapting to new domains and instances.”
In the long term, Hockenmaier’s work could provide the foundation for language models that prefer grammatical sentences that form a coherent text, enabling better speech recognition and machine translation.
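To illustrate what "finding the most likely grammatical analysis" means in practice, the sketch below parses a sentence with a toy probabilistic context-free grammar using the classic CKY/Viterbi dynamic program. The grammar, words, and probabilities are all invented for this example; this is a minimal illustration of statistical parsing in general, not Hockenmaier's Bayesian models for lexicalized grammars.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form (all rules and probabilities are
# invented for illustration). Binary rules: lhs -> (left, right) @ p.
BINARY_RULES = {
    "S":  [(("NP", "VP"), 1.0)],
    "VP": [(("V", "NP"), 0.7), (("VP", "PP"), 0.3)],
    "NP": [(("Det", "N"), 0.6), (("NP", "PP"), 0.4)],
    "PP": [(("P", "NP"), 1.0)],
}
# Lexical rules: word -> list of (category, probability).
LEXICON = {
    "the": [("Det", 1.0)],
    "dog": [("N", 0.5)],
    "telescope": [("N", 0.5)],
    "saw": [("V", 1.0)],
    "with": [("P", 1.0)],
}

def viterbi_parse(words):
    """CKY chart parsing: best[(i, j)][A] is the highest probability
    that nonterminal A derives words[i:j]; back-pointers give the tree."""
    n = len(words)
    best = defaultdict(dict)
    back = {}
    # Fill in single-word spans from the lexicon.
    for i, w in enumerate(words):
        for cat, p in LEXICON[w]:
            best[(i, i + 1)][cat] = p
            back[(i, i + 1, cat)] = w
    # Combine adjacent spans bottom-up, keeping only the best split.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, rules in BINARY_RULES.items():
                    for (b, c), p in rules:
                        prob = (p * best[(i, k)].get(b, 0.0)
                                  * best[(k, j)].get(c, 0.0))
                        if prob > best[(i, j)].get(lhs, 0.0):
                            best[(i, j)][lhs] = prob
                            back[(i, j, lhs)] = (k, b, c)

    def build(i, j, a):
        bp = back[(i, j, a)]
        if isinstance(bp, str):          # leaf: (category, word)
            return (a, bp)
        k, b, c = bp
        return (a, build(i, k, b), build(k, j, c))

    prob = best[(0, n)].get("S", 0.0)
    tree = build(0, n, "S") if "S" in best[(0, n)] else None
    return prob, tree

prob, tree = viterbi_parse("the dog saw the telescope".split())
```

Here the parser returns both the probability of the best analysis and the tree itself, e.g. `("S", ("NP", ...), ("VP", ...))`; a context-aware model in Hockenmaier's sense would additionally condition these probabilities on the surrounding discourse.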
Derek Hoiem’s work in “Large-Scale Recognition Using Shared Structures, Flexible Learning, and Efficient Search” aims to enable computers to interpret objects in images.
By developing algorithms to recognize parts, materials, pose, and other properties of objects, Hoiem aims to give computers the ability to make predictions about new objects that they encounter.
“Humans have an amazing ability to look at an object and identify its parts, materials, pose, and other attributes, even if the object is not familiar,” explains Hoiem. Such abilities enable us to respond appropriately to unexpected events, such as when a child with a tricycle rolls out onto the road. Computers have the potential to prevent automobile accidents or relieve us of mundane household chores, but these applications require algorithms that can respond appropriately to new objects.
Due to recent advances in pattern recognition, computers can assign objects to one of a pre-defined set of categories. But each new object must be learned separately, and, Hoiem adds, “Computers are at a complete loss when faced with an unfamiliar object. In our view, large-scale recognition is a problem of designing object representations that enable new objects to be understood in terms of existing ones.”
To do this, Hoiem will design structured representations that share appearance and spatial layout models across related categories. These models will be learned from detailed annotations in the CORE image dataset that Hoiem and his colleagues created. For example, when learning from examples of cats, the computer would learn to predict the locations of the head, legs, and tail, to identify the pose, and to find cats in new images. Then, upon seeing a dog for the first time, the computer could identify the dog as some kind of animal and find its head and legs, even if the name “dog” is not yet known. According to Hoiem, major technical challenges include developing efficient methods for identifying objects, creating methods to incrementally learn about new objects, and finding effective ways of learning from a mixture of loose and detailed annotations.
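The cat-and-dog example above can be sketched in a few lines: if known categories share a vocabulary of parts, a novel object can be described by matching its detected parts against that vocabulary. All categories, parts, and the similarity measure below are invented for illustration; this is not Hoiem's actual model or the CORE dataset.

```python
# Known categories, each with a shared part vocabulary and a superclass.
# (All entries are illustrative, not real learned models.)
KNOWN = {
    "cat":   {"parts": {"head", "legs", "tail", "fur"},  "super": "animal"},
    "horse": {"parts": {"head", "legs", "tail", "mane"}, "super": "animal"},
    "car":   {"parts": {"wheels", "doors", "windshield"}, "super": "vehicle"},
}

def jaccard(a, b):
    """Overlap between two part sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def describe_novel(detected_parts):
    """Describe an unfamiliar object in terms of the most similar known
    category: return its superclass, the nearest category, and the
    parts the novel object shares with it."""
    nearest = max(KNOWN, key=lambda c: jaccard(detected_parts, KNOWN[c]["parts"]))
    shared = detected_parts & KNOWN[nearest]["parts"]
    return KNOWN[nearest]["super"], nearest, shared

# A "dog" seen for the first time: the system has no "dog" category,
# but its detected parts overlap heavily with known animals.
superclass, nearest, shared = describe_novel({"head", "legs", "tail", "snout"})
```

Because the part vocabulary is shared across categories, the system can report "some kind of animal, with a head, legs, and tail" without ever having seen the label "dog", which is the essence of understanding new objects in terms of existing ones.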
Ultimately, Hoiem hopes that his work will lead to visual object recognition algorithms that are more detailed, flexible, and accurate, with applications in vehicle safety, security, assistive technologies, household robotics, and multimedia search and organization.