Ph.D. University of Illinois, 1980
My research focuses on program analysis, transformation, and optimization. The main objective is to develop methodologies that ease the programmer's task of creating reliable, easy-to-maintain programs that achieve excellent performance. Ongoing projects include the study of program optimization strategies and the design of compiler techniques for new parallel programming constructs.
Program optimization can be viewed as a search for the best form of a computation within the set generated by semantics-preserving transformations. Even for simple computations, this set is large enough that simplistic solutions are either impractical or ineffective. For example, modern compilers rely on simple heuristics whose effects are so poorly understood that raising the optimization level can have an unpredictable impact on performance. We are studying several approaches to the optimization problem that have proven effective in a number of realistic situations. Our solutions include the use of accurate analytical models of performance and the application of machine learning techniques combined with empirical search.
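The search view described above can be illustrated with a toy sketch (my own illustration, not our group's actual system): expressions are rewritten by semantics-preserving rules, and a simple analytical cost model ranks the forms reached by exhaustive search. The rules, costs, and expression encoding here are all hypothetical choices made for the example.

```python
# Toy illustration: optimization as search over the closure of
# semantics-preserving rewrites, guided by an analytical cost model.
# Expressions are nested tuples ('op', lhs, rhs) or leaves (ints / names).

def rewrites(e):
    """Yield expressions reachable by one semantics-preserving rewrite."""
    if not isinstance(e, tuple):
        return
    op, a, b = e
    # Constant folding: evaluate when both operands are literals.
    if isinstance(a, int) and isinstance(b, int):
        yield {'+': a + b, '*': a * b}[op]
    # Strength reduction: x * 2 -> x + x (addition assumed cheaper).
    if op == '*' and b == 2:
        yield ('+', a, a)
    # Recurse into subexpressions.
    for a2 in rewrites(a):
        yield (op, a2, b)
    for b2 in rewrites(b):
        yield (op, a, b2)

def cost(e):
    """Analytical model: a multiply costs 3, an add costs 1, leaves cost 0."""
    if not isinstance(e, tuple):
        return 0
    op, a, b = e
    return (3 if op == '*' else 1) + cost(a) + cost(b)

def optimize(e):
    """Exhaustively enumerate the rewrite closure; return the cheapest form."""
    seen, frontier = {e}, [e]
    while frontier:
        nxt = []
        for f in frontier:
            for g in rewrites(f):
                if g not in seen:
                    seen.add(g)
                    nxt.append(g)
        frontier = nxt
    return min(seen, key=cost)
```

Even this tiny rule set shows why exhaustive search does not scale: the closure grows combinatorially with expression size, which is what motivates model-guided and learned search strategies in place of enumeration.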
Parallel programming is another area with interesting open problems. Today's message-passing paradigm is considered by many to be the assembly language of parallel programming. Despite many attempts, no alternative parallel programming paradigm has successfully replaced message passing. We are studying a new class of structured parallel programming constructs based on hierarchically tiled (or partitioned) arrays (HTAs). Programs based on HTAs tend to be highly readable and easy to develop, making this a promising strategy for simplifying parallel programming. We are in the process of designing the constructs and compiler techniques necessary to make the HTA approach effective in practice.
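The tiling idea can be sketched as follows. This is a minimal illustration under my own assumptions, not the actual HTA library or its API: the outer level of tiles stands in for distribution across processors, the inner level for locality tiling, and elementwise operations proceed tile by tile.

```python
# Hypothetical sketch of a hierarchically tiled array: a one-dimensional
# array partitioned into tiles, each of which is itself partitioned.

class HTA:
    def __init__(self, tiles):
        # Each tile is either another HTA (a deeper level of tiling)
        # or a flat list of scalar elements.
        self.tiles = tiles

    @classmethod
    def tile(cls, data, outer, inner):
        """Partition a flat list into `outer` tiles of `inner` sub-tiles each."""
        def chunk(xs, k):
            m = len(xs) // k
            return [xs[i * m:(i + 1) * m] for i in range(k)]
        return cls([cls(chunk(t, inner)) for t in chunk(data, outer)])

    def map(self, f):
        """Apply f elementwise, tile by tile; each outer tile could be
        handled by a different processor."""
        return HTA([t.map(f) if isinstance(t, HTA) else [f(x) for x in t]
                    for t in self.tiles])

    def flatten(self):
        """Recover the underlying flat array."""
        out = []
        for t in self.tiles:
            out.extend(t.flatten() if isinstance(t, HTA) else t)
        return out
```

For example, `HTA.tile(list(range(8)), outer=2, inner=2)` builds two outer tiles of two sub-tiles each, and `map` expresses a data-parallel operation without any explicit message passing, which is the readability advantage the HTA constructs aim for.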