Hockenmaier Helps Explain ChatGPT's Impact on the Academic Community During Panel Discussion


The panel, organized by the College of Education, included Illinois Computer Science professor Julia Hockenmaier for her expertise in natural language processing, alongside five other UIUC faculty members.

Written by Aaron Seidlitz, Illinois CS

Illinois Computer Science professor Julia Hockenmaier, an expert in computational linguistics, also known as natural language processing (NLP), recently took an opportunity to shed light on ChatGPT, a development in artificial intelligence that has sparked new debate about the role of technology in education and research.

Illinois CS professor Julia Hockenmaier standing in a hallway of the Thomas M. Siebel Center for Computer Science.

Hockenmaier joined a panel discussion on March 8 entitled “The Impact of ChatGPT on the Research of Teaching and Learning,” organized by the College of Education at the University of Illinois Urbana-Champaign. The panel featured six Illinois faculty members with a wide range of teaching and research experience.

In just one hour, the panelists covered a series of questions directed first toward their specific academic experience and its intersection with the emerging technology, then answered general questions from the audience.

An introduction from moderator Jessica Li, Associate Dean for Research in the College of Education and Director of the Bureau of Educational Research, focused on the fast rise of ChatGPT – from its launch on November 30, 2022, to reaching one million users by December 4 and then 100 million users within two months.

Hockenmaier first answered a question about the current state of development of ChatGPT and similar tools, as well as their risk factors in the context of research.

She explained that language models are trained to predict which words appear in which contexts: either in a fill-in-the-blank fashion, or, given only part of a sentence or text, by predicting the next word.

This is an old idea, going back at least to a 1948 article by Claude Shannon that also laid the foundation for information theory. Contemporary large language models (LLMs) like ChatGPT use neural networks ("deep learning") for this purpose and are trained on vast amounts of text. Importantly, neural nets do not use symbols as internal representations, but rely entirely on numerical representations such as vectors, matrices, or tensors.
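
To make the idea of next-word prediction concrete, here is a minimal sketch in Python. It assumes the Hugging Face transformers library and PyTorch, and uses the small, publicly available "gpt2" checkpoint purely as a stand-in; ChatGPT's own model is far larger and not openly available, and the prompt is only illustrative.

```python
# Minimal sketch of next-word prediction with a small causal language model.
# Assumes the Hugging Face `transformers` library and PyTorch are installed;
# "gpt2" is a publicly available stand-in, not the model behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I like my coffee with cream and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

# The distribution over the next word is read off the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id]):>10}  p={prob:.3f}")
```

The model returns a probability for every word in its vocabulary; repeatedly picking or sampling high-probability next words is what produces fluent continuations of a prompt.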

To understand how neural models of language represent words as vectors, a linguistic phenomenon described by the so-called "distributional hypothesis" (going back to a 1954 article by the linguist Zellig Harris) is important: words that appear in similar contexts (e.g. 'coffee' and 'tea', or 'coffee' and 'cup', 'coffee' and 'hot', but also 'hot' and 'cold') have similar meanings.

As a consequence, such semantically related words end up being represented in neural models by vectors that are very similar to each other.
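
As a rough illustration of that point, the short Python sketch below compares word vectors with cosine similarity. The three-dimensional vectors and word choices are invented for illustration only; real models learn vectors with hundreds or thousands of dimensions from large text collections.

```python
# Toy illustration of the distributional idea: semantically related words end up
# with similar vectors. These 3-dimensional vectors are invented for illustration.
import numpy as np

vectors = {
    "coffee": np.array([0.9, 0.8, 0.1]),
    "tea":    np.array([0.85, 0.75, 0.15]),
    "cup":    np.array([0.7, 0.6, 0.3]),
    "car":    np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 for similar directions, near 0 for unrelated ones.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["coffee"], vectors["tea"]))   # high: related words
print(cosine(vectors["coffee"], vectors["car"]))   # lower: unrelated words
```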

“This allows these models to generalize much better than classical models, and the kind of neural networks used in current LLMs are also exceptionally good at capturing very large contexts. This means that they generate text that’s very fluent. It sounds like it’s written by native speakers and is topically coherent,” Hockenmaier said.

There is also a phenomenon called the "Eliza effect" that goes back to how users interacted with, and trusted, the first chatbot, Eliza, created in the mid-1960s. Eliza emulated a psychotherapist, and although Eliza was based on much simpler technology, users completely trusted it, and started "pouring their hearts out" to it, much to the surprise of its creator, Joseph Weizenbaum.

Although the public now has much greater experience with systems like chatbots, which might mitigate the Eliza effect, the increased capabilities of the current technology make it even more convincing, and often lead users to overestimate the "intelligence" or reasoning capabilities of these systems.  

The danger in interacting with purely LLM-based systems like ChatGPT, Hockenmaier explained, comes from the fact that the models don’t yet have any countermeasures that ensure their output is factually correct.

“If you ask about a scientific topic, the results might regurgitate parts of Wikipedia because that's what it memorized,” she said. “The results might be restricted to terms that are all very on-topic because they all have similar vectors. But the models don’t understand the text they’ve trained on, and since there is nothing that allows them to perform any kind of reasoning, you shouldn’t ever rely on their output.

“How to detect these fictitious mistakes and how to make these models more factually accurate is an active topic of research. We don’t really know yet how to do that, or at least not well enough.”

The other panelists included Nigel Bosch of the College of Education, Cynthia D’Angelo of the College of Education, John Gallagher of the Department of English, Roxana Girju of the Department of Linguistics, and Mike Twidale of the iSchool.

As Li said during the moderator’s introduction, it is hard for academicians and the public alike to digest and form a reliable understanding of what kind of tool ChatGPT is and what it can or can’t do.

By bringing together panelists from these different but connected areas across UIUC, Li and these experts attempted to clarify the issue for the academic community.

“There are a lot of questions that need to be answered, and many of us are trying to answer a few today,” Li said.



This story was published March 22, 2023.