NSF CAREER Award Positions Bo Li to Provide a More Secure Framework for Advances in Machine Learning

6/11/2021 Aaron Seidlitz, Illinois CS

An Illinois CS professor focused on the intersection of AI with Security and Privacy, Li has built a body of work that positions her well to research new ways to make machine learning more reliable.


In recent years, Illinois CS professor Bo Li has closely tracked advances in machine learning (ML) that continue to push the boundaries of computational reasoning, along with the questions those advances raise about trustworthy ML.


Still, as these advances stir excitement, Li's research focus in Security and Privacy positions her to make sure the progress is as safe as possible. That is the point of emphasis for her new five-year, $500,000 NSF CAREER Award, which began on June 3.

“Data analytics and machine learning have fundamentally altered energy, healthcare and the finance industry,” Li said. “For instance, autonomous vehicles are on the verge of transforming transportation, and virtual assistants are now common actors in everyday lives. Despite the wide application of ML, recent studies have shown that adversarial attacks can fool, evade, and mislead ML models in ways that would have profound security implications.”

The professor pointed to previous examples of this in her research work.

In one project, Li showed that placing printed stickers on a stop sign can cause an autonomous vehicle's camera perception system to mistake it for a speed limit sign.

Moreover, Li’s follow-up work showed that a “generated adversarial 3D object can easily fool the LiDAR or even sensor-fusion based ML systems of autonomous vehicles.”

Such attacks, she said, can lead to severe consequences, like a car accident, raising security concerns within the research community and beyond.
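
To make the idea concrete, here is a minimal sketch of the kind of gradient-based adversarial attack described above, applied to a toy linear classifier rather than a real perception system. The weights, input, and class labels are invented for illustration; this is not the attack used in Li's work.

```python
import numpy as np

# Toy linear "perception model": score > 0 means class 1 ("stop sign"),
# score <= 0 means class 0 ("speed limit"). Weights are made up.
rng = np.random.default_rng(0)
w = rng.normal(size=100)           # stand-in for learned classifier weights
x = w / np.linalg.norm(w)          # a clean input the model classifies confidently

def predict(x):
    return 1 if w @ x > 0 else 0   # 1 = "stop sign", 0 = "speed limit"

# FGSM-style step: for a linear score w.x the gradient w.r.t. x is w, so a
# bounded perturbation of -eps * sign(w) maximally lowers the score.
eps = 0.2
x_adv = x - eps * np.sign(w)       # small per-coordinate perturbation

print(predict(x))                  # clean input is classified as class 1
print(predict(x_adv))              # the perturbed input flips the prediction
```

Here a per-coordinate budget of `eps = 0.2` is enough to flip this toy model; on real image classifiers, far smaller perturbations, imperceptible to humans, are known to suffice.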

“Facing these adversarial attacks against ML, we can see that humans are less vulnerable against them,” Li said. “This makes me believe that there should be a way to incorporate exogenous information, such as intrinsic information (e.g., properties of models and data) and extrinsic information (e.g., domain knowledge) to help improve the robustness of ML.”

To address this need, her NSF abstract outlines three primary aims centered on "designing learning algorithms to improve the ML robustness and providing rigorous certification for ML robustness." The three aims include:

  1. Efficient algorithms to automatically extract and integrate intrinsic information from ML models and datasets to enhance the robustness of given ML models
  2. An efficient and novel end-to-end machine learning pipeline integrating human knowledge and inference, which will also provide explanation for ML predictions as a byproduct
  3. Security improvement and guarantees, demonstrated in real-world applications, such as autonomous driving
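
The second aim, integrating human knowledge into an ML pipeline, can be illustrated with a deliberately simplified sketch: a learned model's confidence is tempered by a hand-written domain rule, so an input that violates the rule (such as a spoofed sign) is down-weighted. All names and numbers here are invented; the project's actual pipeline is more sophisticated.

```python
import numpy as np

def model_score(features):
    # stand-in for a trained classifier's confidence that the input is a stop sign
    w = np.array([0.8, 0.6])
    return 1.0 / (1.0 + np.exp(-(w @ features)))

def knowledge_check(num_sides):
    # domain knowledge: real stop signs are octagons
    return 1.0 if num_sides == 8 else 0.2

def combined_score(features, num_sides):
    # a rule-violating input keeps only a fraction of the model's confidence
    return model_score(features) * knowledge_check(num_sides)
```

For the same visual features, an eight-sided candidate keeps the model's full confidence, while a four-sided one is sharply discounted, which is one simple way extrinsic knowledge can harden a purely statistical prediction.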

To pursue all goals outlined in the abstract, Li said the NSF award will help her fill out the research team with two more PhD students. Current PhD students include Linyi Li, whose work focuses on the certified robustness of ML, and Zhuolin Yang, who works on designing the knowledge enabled ML pipeline.
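
One widely used approach to the certified robustness mentioned above (not necessarily the specific method in this project) is randomized smoothing: a smoothed classifier predicts the majority vote of a base classifier over Gaussian-noised copies of the input, and the vote margin yields a certified L2 radius. A minimal sketch, assuming a toy linear base classifier:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
w = np.array([1.0, -2.0, 0.5])     # illustrative linear base classifier

def base_predict(x):
    return int(w @ x > 0)

def smoothed_predict(x, sigma=0.25, n=1000):
    # vote over n Gaussian-noised copies of the input
    noise = rng.normal(scale=sigma, size=(n, x.size))
    votes = np.array([base_predict(x + d) for d in noise])
    p_top = max(votes.mean(), 1.0 - votes.mean())   # empirical top-class probability
    # certified radius in the randomized-smoothing style: sigma * Phi^{-1}(p_top),
    # clipped below 1.0 to avoid an infinite radius from a unanimous vote
    radius = sigma * NormalDist().inv_cdf(min(p_top, 0.999))
    return int(votes.mean() > 0.5), radius
```

The guarantee is probabilistic: no perturbation with L2 norm below the returned radius can change the smoothed prediction, which is the kind of rigorous certification the award's first aim targets.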

Li also said the CAREER Award will help the group purchase necessary computation resources for completing the project.


