Architecture, Compilers, and Parallel Computing

As we approach the end of Moore’s Law, and as mobile devices and cloud computing become pervasive, all aspects of system design—circuits, processors, memory, compilers, programming environments—must become more energy efficient, resilient, and programmable.

Our research groups explore energy efficiency via low-voltage design techniques, specialized hardware accelerators, adaptive runtime techniques in high-performance computing, efficient memory architectures for heterogeneous mobile systems, novel architectures for exascale systems, and other projects. We examine resilience through tolerating variation during chip fabrication, failure-tolerant processor architectures, scalable resilience protocols, and automated software debugging and recovery techniques. We explore programmability through architectural support for synchronization, automatic parallelization and vectorization, performance portability for heterogeneous mobile systems, high-performance implementations of scripting languages, and highly scalable parallel runtime systems.

In addition to collaborating with major companies, we develop software artifacts such as LLVM and Charm++ that are widely used in industry, government labs, and academic research.

CS Faculty and Their Research Interests

Sarita Adve parallel computing, memory architecture, power- and reliability-aware architectures 
Vikram Adve compiler infrastructures and techniques, secure architectures, heterogeneous systems 
Christopher Fletcher architectures for security and machine learning
William Gropp programming models and systems for parallel computing 
Laxmikant Kale large-scale parallel systems; runtime systems, tools, and frameworks for high-performance computing 
David Padua compiler techniques for parallel computing 
Lawrence Rauchwerger joining fall 2019; parallel and distributed programming environments
Marc Snir large-scale parallel systems, algorithms, and libraries 
Edgar Solomonik communication complexity
Josep Torrellas parallel architectures, power- and reliability-aware hardware/software architectures 
Craig Zilles compilers, dynamic optimization, computer science education 

Affiliate Faculty

Deming Chen, Electrical & Computer Engineering hardware/software co-design for system-on-chip; reconfigurable computing; GPU computing and optimization
Jian Huang, Electrical & Computer Engineering computer systems, systems architecture, systems security, memory and storage systems
Wen-mei Hwu, Electrical & Computer Engineering HPC and parallel systems, compilers, GPU programming
Nam Sung Kim, Electrical & Computer Engineering non-conventional computer architecture: bio-inspired, molecular, cellular, and analog-digital hybrid computing 
Rakesh Kumar, Electrical & Computer Engineering power- and reliability-aware architectures, approximate computing 
Steve Lumetta, Electrical & Computer Engineering parallel computing, architecture, reliability, architectures for genomic applications 
Sanjay Patel, Electrical & Computer Engineering high-performance and parallel systems
Shobha Vasudevan, Electrical & Computer Engineering system verification and security; analog and digital hardware validation 
Martin Wong, Electrical & Computer Engineering computer-aided design of integrated circuits

Adjunct Faculty

Maria J. Garzaran, Intel compilers, hardware-software interaction, software frameworks for high-performance computing
Rob A. Rutenbar, University of Pittsburgh accelerator architecture, approximate computing, FPGA, VLSI, CAD 

Architecture, Compilers, and Parallel Computing Research Efforts and Groups

Architecture, Compilers, and Parallel Computing News

Google Faculty Research Awards

CS Faculty Receive Google Faculty Research Awards

April 11, 2019   Google Faculty Research Awards are given to a select group of researchers conducting computer science and engineering research. This year, two Illinois CS faculty were chosen, as well as three ECE faculty who are CS affiliates.

Gul Agha Joins Illinois Innovators Podcast to Discuss Concurrent Computing and the Actor Model

March 27, 2019   Agha joined the College of Engineering's podcast this month. His Actor model has provided the basis for a number of research projects in concurrent programming.

What Having the World's Fastest Supercomputer Means for Chicago

March 22, 2019   Crain's Chicago Business -- Argonne National Laboratory will spend more than $500 million to build the world's fastest supercomputer. The new machine, Aurora, will be the first to operate at exascale. Supercomputers are innovation engines and talent magnets, says Bill Kramer, an Illinois CS professor and senior associate director of the Blue Waters Project at NCSA. NCSA Director Bill Gropp added that "We are confident that Illinois will continue to be a leader in applying advanced computing."


European Universities Honor Snir, Padua for Contributions to Parallel Computing, HPC

January 11, 2019   The École Normale Supérieure de Lyon honored Snir for decades of achievements; the Universidad de Valladolid recognized Padua as a parallel computing pioneer.

Hutter's Work on Communication Costs Leads to DOE Computational Science Graduate Fellowship

December 4, 2018   PhD student Edward Hutter's work on communication-cost analysis and reduction earns him a Department of Energy Computational Science Graduate Fellowship.