Architecture, Compilers, and Parallel Computing

As we approach the end of Moore’s Law, and as mobile devices and cloud computing become pervasive, all aspects of system design—circuits, processors, memory, compilers, programming environments—must become more energy efficient, resilient, and programmable.

Our research groups explore energy efficiency via low-voltage design techniques, specialized hardware accelerators, adaptive runtime techniques in high-performance computing, efficient memory architectures for heterogeneous mobile systems, novel architectures for exascale systems, and other projects. We examine resilience through tolerance of variation introduced during chip fabrication, failure-tolerant processor architectures, scalable resilience protocols, and automated software debugging and recovery techniques. We explore programmability through architectural support for synchronization, automatic parallelization and vectorization, performance portability for heterogeneous mobile systems, high-performance implementations of scripting languages, and highly scalable parallel runtime systems.

In addition to collaborating with major companies on a wide range of research projects, we develop software artifacts, such as LLVM and Charm++, that are widely used in industry, government labs, and academic research.

Strengths and Impact

Recent highlights include many national honors and awards to our faculty: election to the National Academy of Engineering for Bill Gropp; NSF CAREER Awards to Christopher Fletcher, Sasa Misailovic, and Edgar Solomonik; the ACM/IEEE-CS Ken Kennedy Award and election to the American Academy of Arts and Sciences for Sarita Adve; the IEEE Computer Society Harry H. Goode Memorial Award to David Padua; the IEEE Computer Society Harry H. Goode Memorial Award and Technical Achievement Award to Josep Torrellas; the IEEE Computer Society Seymour Cray Computer Engineering Award to Marc Snir (2013); the IEEE Computer Society Sidney Fernbach Award to Laxmikant Kale (2012); the election of Bill Gropp as IEEE Computer Society President; and the award to Illinois of a national AI Research Institute (AIFARMS), led by Vikram Adve.

The faculty in this area are very active in the professional community. They serve on many program committees, organize workshops, and give invited lectures. They publish in the most competitive venues in the area, often making the University of Illinois the most represented institution at the top conferences. They also receive many best paper awards and selections every year (e.g., IEEE Micro Top Picks in computer architecture). The faculty also develop software that they distribute to the community.

The faculty lead and participate in large grants from NSF, DARPA, and industry. For example, large ongoing efforts include the Center for Cognitive Computing Systems Research (C3SR) with IBM and the Strategic Research Alliance Center on Computer Security with Intel, along with participation in the SRC/DARPA Joint University Microelectronics Program (JUMP) centers and DARPA Electronics Resurgence Initiative programs. There are also many collaborations with computer companies and research laboratories.

The area is also known for placing its PhD students in top academic and research positions in the United States. Recent successes include placing students in tenure-track Assistant Professor positions at MIT, CMU, and Wisconsin. A large number of graduates from this area serve on the faculty of top departments in the nation, including MIT, CMU, Cornell, Princeton, U of Washington, Georgia Tech, U of Michigan, USC, NCSU, and U of Wisconsin, to name a few. In addition, many of the area's graduates hold leading positions at IBM, Intel, Microsoft, and other companies.

Faculty & Affiliate Faculty

Computer Architecture; Parallel Computing; Memory Systems; Domain-Specific and Heterogeneous Systems; Resiliency; Approximate Computing; Augmented, Virtual, Mixed, and Extended Reality

Compilers, Parallel Computing, Heterogeneous Parallel Systems, Hardware-Software Codesign, Edge Computing

Parallel Algorithms and Libraries, Parallel Graph Algorithms, Performance Modeling

Hardware/Software Co-Design for System-On-Chip; Reconfigurable Computing; GPU Computing and Optimization

Architectures for Security and Machine Learning

Data-Oriented Architectures, Processing-in-Memory, Memory/Storage Systems, Hardware/Software Co-Design, Architectures for Emerging Domains

Programming Models and Systems for Parallel Computing, Parallel I/O

Computer Systems, Systems Architecture, Systems Security, Memory and Storage Systems

HPC and Parallel Systems, Compilers, GPU Programming

Large-Scale Parallel Systems, Adaptive Runtime Systems, CSE Applications, Tools, Frameworks for High-Performance Computing

Non-Conventional Computer Architecture: Bio-Inspired, Molecular, Cellular, and Analog-Digital Hybrid Computing 

HPC, Reconfigurable Computing, GPU Computing and Optimization

Power- and Reliability-Aware Architectures, Approximate Computing 

Parallel Computing, Architecture, Reliability, Architectures for Genomic Applications 

Compilers and Code Generation, Machine Learning-Based Compiler Optimizations, Autotuning, Neural Network Optimizations, Program Analysis, Domain-Specific Languages

Program Optimization Systems, Probabilistic Programming, Approximate Computing Techniques

Compilers, Parallelism, Machine Learning, Automatic Differentiation, HPC, Scientific Computing

Quality of Experience, Tele-Immersion, Multi-View Visualization, Embedded Sensors, Distributed and Parallel Systems

Parallel Numerical Algorithms, Performance Modeling

Compiler Techniques for Parallel Computing, Compiler Evaluation and Testing, Autotuning Strategies and Systems

High-Performance and Parallel Systems

Parallel Computing, Compilers for Parallel Computing, Parallel Generic and Graph Libraries, Parallel Architecture, Exascale Computing

Large-Scale Parallel Systems, Algorithms, Libraries 

High-Performance Computing, Communication Cost Analysis, Tensor Computations, Quantum Simulation

Computer Architecture, Parallel Computing, Energy-Efficient Architectures, Hardware/Software Co-Design, Programmability, Graph Architectures, Secure Architectures, Distributed Computing, Cloud Computing, Memory Systems

Adjunct Faculty

Compilers, Hardware-Software Interaction, Software Frameworks for High-Performance Computing, Message Passing Software

Accelerator Architecture, Approximate Computing, FPGA, VLSI, CAD
