As we approach the end of Moore’s Law, and as mobile devices and cloud computing become pervasive, all aspects of system design—circuits, processors, memory, compilers, programming environments—must become more energy efficient, resilient, and programmable.
Our research groups explore energy efficiency via low-voltage design techniques, specialized hardware accelerators, adaptive runtime techniques in high-performance computing, efficient memory architectures for heterogeneous mobile systems, novel architectures for exascale systems, and other projects. We examine resilience through techniques that tolerate variation in chip fabrication, failure-tolerant processor architectures, scalable resilience protocols, and automated software debugging and recovery techniques. We explore programmability through architectural support for synchronization, automatic parallelization and vectorization, performance portability for heterogeneous mobile systems, high-performance implementations of scripting languages, and highly scalable parallel runtime systems.
We collaborate with major companies, and our software artifacts, such as LLVM and Charm++, are widely used in industry, government labs, and academic research.
CS Faculty and Their Research Interests
parallel computing, memory architecture, power- and reliability-aware architectures
compiler infrastructures and techniques, secure architectures, heterogeneous systems
Maria J. Garzaran: compilers, hardware-software interaction, software frameworks for high-performance computing
programming models and systems for parallel computing
large-scale parallel systems; runtime systems, tools, and frameworks for high-performance computing
compiler techniques for parallel computing
large-scale parallel systems, algorithms, and libraries
parallel architectures, power- and reliability-aware hardware/software architectures
compilers, dynamic optimization, computer science education
Deming Chen (Electrical & Computer Engineering): hardware/software co-design for system-on-chip; reconfigurable computing; GPU computing and optimization
Wen-mei Hwu (Electrical & Computer Engineering): HPC and parallel systems, compilers, GPU programming
Nam Sung Kim (Electrical & Computer Engineering): non-conventional computer architecture (bio-inspired, molecular, cellular, and analog-digital hybrid computing)
Rakesh Kumar (Electrical & Computer Engineering): power- and reliability-aware architectures, approximate computing
Steve Lumetta (Electrical & Computer Engineering): parallel computing, architecture, reliability, architectures for genomic applications
Sanjay Patel (Electrical & Computer Engineering): high-performance and parallel systems
Shobha Vasudevan (Electrical & Computer Engineering): system verification and security; analog and digital hardware validation
Martin Wong (Electrical & Computer Engineering): computer-aided design of integrated circuits
Rob A. Rutenbar (University of Pittsburgh): accelerator architecture, approximate computing, FPGA, VLSI, CAD
Architecture, Compilers, and Parallel Computing Research Efforts and Groups
Architecture, Compilers, and Parallel Computing News
December 1, 2017
The Spurlock Museum exhibit, "Knowledge at Work," which includes CS @ ILLINOIS artifacts, runs through Dec. 21.
November 15, 2017
Rob A. Rutenbar is being honored for his pioneering contributions to algorithms and tools for analog and mixed-signal designs.
November 1, 2017
On November 6, Dr. Alex Aiken will present his research on parallel programming.
September 21, 2017
Eight outstanding new faculty members are joining CS @ ILLINOIS during the next academic year. In total, thirty-six talented teachers and researchers have joined the department since 2013.
September 20, 2017
Banerjee's work had a profound impact on parallel computing. His fast, effective data-dependence test has been widely used in parallelizing and vectorizing compilers.
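The idea behind the data-dependence test mentioned above can be sketched as follows. This is a simplified, illustrative version for one-dimensional array subscripts in a single loop, combining the GCD test with Banerjee-style interval bounds; the function and variable names are ours, not taken from any particular compiler.

```python
from math import gcd

def banerjee_dependence_possible(a1, a0, b1, b0, L, U):
    """Simplified dependence test for a single loop index.

    May a write to A[a1*i + a0] and a read of A[b1*j + b0],
    with i, j in [L, U], touch the same array element?
    The dependence equation is: a1*x - b1*y = b0 - a0.
    """
    rhs = b0 - a0
    # GCD test: the linear Diophantine equation a1*x - b1*y = rhs
    # has integer solutions only if gcd(a1, b1) divides rhs.
    g = gcd(a1, b1)
    if g != 0 and rhs % g != 0:
        return False
    # Banerjee-style bounds: min/max of a1*x - b1*y over the
    # iteration box [L, U] x [L, U], using positive/negative parts
    # t+ = max(t, 0) and t- = max(-t, 0).
    def pos(t):
        return max(t, 0)
    def neg(t):
        return max(-t, 0)
    c = -b1  # coefficient of y in a1*x + c*y
    hi = pos(a1) * U - neg(a1) * L + pos(c) * U - neg(c) * L
    lo = pos(a1) * L - neg(a1) * U + pos(c) * L - neg(c) * U
    # Dependence is possible only if rhs lies within [lo, hi].
    return lo <= rhs <= hi

# A[2*i + 1] written, A[2*i] read, i in [0, 10]: odd vs. even
# elements never overlap; the GCD test rules this out.
print(banerjee_dependence_possible(2, 1, 2, 0, 0, 10))   # False
# A[i] written, A[i - 1] read, i in [0, 10]: a genuine
# loop-carried dependence, so the test must report "possible".
print(banerjee_dependence_possible(1, 0, 1, -1, 0, 10))  # True
```

A production compiler refines this with multi-dimensional subscripts, direction vectors, and tighter bounds, but the core structure is the same: prove independence cheaply when possible, and conservatively assume dependence otherwise.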