Illinois Team, KingFisher, One of Ten Teams to Advance in the Alexa Prize SimBot Challenge

2/22/2023 Amazon Science

KingFisher is a team of several graduate and undergraduate students from the University of Illinois Urbana-Champaign led by faculty advisor and Illinois CS professor Julia Hockenmaier.

Written by Amazon Science

Ten university teams have been selected to participate in the live interactions phase of the Alexa Prize SimBot Challenge. As of February, those teams have advanced to the semifinals based on their performance during the initial customer feedback period. Alexa customers can interact with the teams' SimBots by saying "Alexa, play with robot" on Echo Show or Fire TV devices; their ratings and feedback help the student teams improve their bots ahead of the competition finals.

Team KingFisher, with Illinois CS professor Julia Hockenmaier (top row, center) as advisor, photographed in the Thomas M. Siebel Center for Computer Science.

A team from the University of Illinois Urbana-Champaign, KingFisher, includes several graduate and undergraduate students and is advised by Illinois Computer Science professor Julia Hockenmaier.

"It is really exciting to see the enthusiasm and creativity of all ten university teams participating in this interactive embodied AI challenge," said Govind Thattai, Alexa AI principal applied scientist. "The teams have developed very impressive bots to solve the vision and language challenges for robotic task completion. We look forward to the tougher competition during this semifinals phase and how the teams will rise to the challenge."

The Alexa Prize is a unique industry-academia partnership program which provides an agile real-world experimentation framework for scientific discovery. University students have the opportunity to launch innovations online and rapidly adapt based on feedback from Alexa customers.

“The SimBot Challenge is focused on helping advance development of next-generation assistants that will assist humans in completing real-world tasks by harnessing generalizable AI methodologies such as continuous learning, teachable AI, multimodal understanding, and common-sense reasoning,” said Reza Ghanadan, a senior principal scientist in Alexa AI and head of Alexa Prize. “We have created a number of advanced software tools, as well as robust conversational and embodied AI models and data, to help lower the barriers to innovation for our university teams while accelerating research leveraging Alexa to invent and validate AI assistants capable of natural, reliable human-AI interactions.”

The 10 university teams selected to participate in the live interactions phase of the challenge are:

Team name | University | Student team leader | Faculty advisor
Symbiote | Carnegie Mellon University | Nikolaos G. | Katerina Fragkiadaki
GauchoAI | University of California, Santa Barbara | Jiachen L. | Xifeng Yan
KingFisher | University of Illinois | Abhinav A. | Julia Hockenmaier
KnowledgeBot | Virginia Tech | Minqian L. | Lifu Huang
SalsaBot | Ohio State University | Chan Hee S. | Yu Su
SEAGULL | University of Michigan | Yichi Z. | Joyce Chai
SlugJARVIS | University of California, Santa Cruz | Jing G. | Xin Wang
ScottyBot | Carnegie Mellon University | Jonathan F. | Yonatan Bisk
EMMA | Heriot-Watt University | Amit P. | Alessandro Suglia
UMD-PRG | University of Maryland | David S. | Yiannis Aloimonos

The SimBot Challenge began with a public benchmark phase (January through April 2022) and is now in its live interactions phase (December 2022 through April 2023). Participants in both phases build machine-learning models for natural language understanding, human-robot interaction, and robotic task completion.

During the live interactions phase, university teams are competing to develop the bot that best responds to commands and multimodal sensor inputs within a virtual world. As in previous Alexa Prize challenges, Alexa customers participate in this phase as well.

In this case, customers interact with virtual robots powered by universities’ AI models on their Echo Show or Fire TV devices, seeking to solve progressively harder tasks within the virtual environment. After the interaction, they may provide feedback and ratings for the university bots. That feedback is shared with university teams to help advance their research.

TEACh dataset

In conjunction with the SimBot Challenge, Amazon publicly released TEACh, a new dataset of more than 3,000 human-human dialogues in which one participant plays a simulated user and the other a simulated robot, communicating with each other to complete household tasks.

In TEACh, the simulated user cannot interact with objects in the environment, and the simulated robot does not know the task to be completed, so the two must communicate and collaborate to complete tasks successfully. The public benchmark phase of the SimBot Challenge, which ended in June 2022, was based on the TEACh Execution from Dialog History (EDH) benchmark, which evaluates a model's ability to predict subsequent robot actions given the dialogue history between the user and the robot and the robot's past actions and observations.
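
To make the shape of the EDH task concrete, the sketch below shows a toy, Python-style rendering of an instance: a model receives the dialogue and action history and is asked to predict the robot's next actions. The class and field names (EDHInstance, dialogue_history, action_history, future_actions), the placeholder predict_actions model, and the exact-match scoring are illustrative assumptions, not the actual TEACh schema or official metric; the real benchmark evaluates predicted actions in simulation.

```python
# Illustrative sketch of an EDH-style evaluation loop.
# All names and the metric here are hypothetical, not the TEACh API.
from dataclasses import dataclass
from typing import List


@dataclass
class EDHInstance:
    """One toy EDH-style instance: context in, future actions out."""
    dialogue_history: List[str]   # utterances exchanged so far
    action_history: List[str]     # robot actions already executed
    future_actions: List[str]     # reference actions the model should predict


def predict_actions(instance: EDHInstance) -> List[str]:
    """Placeholder for a learned model; here it just repeats the last action."""
    return instance.action_history[-1:] if instance.action_history else []


def exact_match_rate(instances: List[EDHInstance]) -> float:
    """Toy metric: fraction of instances whose predictions match the reference
    exactly. The real benchmark scores success by executing predicted actions
    in the simulator."""
    hits = sum(predict_actions(inst) == inst.future_actions for inst in instances)
    return hits / len(instances) if instances else 0.0


if __name__ == "__main__":
    demo = [
        EDHInstance(
            dialogue_history=["User: Please make coffee.", "Robot: Where is the mug?"],
            action_history=["GotoKitchen", "PickupMug"],
            future_actions=["PlaceMugInCoffeeMaker"],
        )
    ]
    print(f"Exact-match rate: {exact_match_rate(demo):.2f}")
```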


Read the original article from Amazon Science.


This story was published February 22, 2023.