UCLA Information Studies doctoral candidate will speak on her work on bias in AI algorithms and interfaces.
Seul Lee, a doctoral candidate in the UCLA Department of Information Studies, will present a workshop on “Fairness and Bias in AI Algorithms and Interfaces” at the annual conference of the Association for Library and Information Science Education (ALISE), which will take place Oct. 14-17 in Portland, Oregon.
Lee’s workshop will explore biases in AI algorithms and interfaces used on online platforms such as digital archives, libraries, social media, search engines, and other AI-powered services, encouraging critical thinking about the credibility of information, the transparency of curation processes, the accuracy of representations, and the complexities surrounding information authority, formats, and editorial supervision. Lee will also share strategies for recognizing and evaluating the biases inherent in each medium.
Lee holds a bachelor’s degree in management information systems, a master’s degree in data science from King’s College, and a graduate certificate in digital humanities from UCLA. Her research investigates information biases, algorithmic subjectivity, and the pivotal role of digital literacy education. Her recent work in digital literacy education focuses on evaluating the credibility of information in AI-generated content, such as ChatGPT’s responses, to identify and address potential biases and errors that may arise in the process.
Lee’s dissertation is centered on user-generated content, examining the factors that lead to biases and mis/disinformation, and their impact on information behaviors. She is involved in the SMASH Project, a partnership between the UCLA Department of Information Studies and the UCLA Department of Education that studies the impacts of online hate speech on middle and high school students in the U.S. The SMASH Project is funded by the UCLA Initiative to Study Hate.
What are some of the greatest challenges that AI poses to the information fields?
Seul Lee: I believe one of the biggest challenges is that many important elements are missing from current digital literacy education for younger generations, who are exposed to these technologies earlier and more often than any other generation, while the teachers and parents responsible for educating them are not being provided with adequate literacy education and training.
Current AI literacy education tends to emphasize effective and efficient use of these technologies or their regulation. However, I believe it is more crucial to teach younger generations to critically assess what might be missing from the information presented to them and why it is framed in a certain way. They need to think about who is involved in shaping this process, how information is curated, processed, presented and maintained, and how it can be biased. This is particularly important with AI tools like ChatGPT, which offer straightforward question-and-answer formats that may give the impression of absolute truth. Thus, it is important to educate students to question why a particular answer is given and to engage with these tools with a critical mindset.
Are you mainly focused on that age group, or are you looking at society at large as well?
Lee: For ALISE, I’m targeting information practitioners, professionals, and educators. The SMASH Project and other projects… are targeting K-12 students, and what we found is that they’re really good at using these technologies, but when we talk to teachers or parents, they’re not as good at it as their children. They are not necessarily adept at identifying, nor do they naturally consider, the potential biases or errors that such information might contain.
Also, we focus on college students who use this technology, because they [carry out] everyday information-seeking behaviors like searching for information using AI services, but when they see the information presented to them, they may not necessarily consider what is missing from that information and why it is presented in a certain way.
What are some of the benefits of AI, and how can they be leveraged?
Lee: When I talk to K-12 or college students, a lot of them use ChatGPT or other AI services well. These are beneficial learning tools for them. This rapid popularity reflects the fact that these AI-powered tools are accessible and intuitive to younger generations. Nevertheless, when I study ChatGPT and other recent generative AI services to figure out what kinds of biases can be involved in the process of producing their answers, and I search for scholarly articles on the topic, there are not many; a lot of them are from Google or OpenAI. This also reflects that there is still a lack of sufficient information about how these tools should be taught and integrated into our current education, and that neither education nor legal action has kept up with the pace of these services.
How can we prepare students, educators, and the public to practice greater levels of critical thinking when it comes to these tools?
Lee: Of course, it’s important to educate everyone, including educators, parents, and students. Let’s say you scroll down your social media feed and find an interesting article or news story. I’m not sure whether people really evaluate the credibility of that information. But sometimes we look at who posted the article, who wrote it, or what sources were cited.
Those are our everyday information evaluation practices. I’m going to apply those practices to [viewing] misinformation [and] fake news, and I will ask participants how they evaluate information in their everyday information-seeking behaviors. I’m planning to delve further into this by utilizing free open-source software to systematically identify the source of an article, the elements that can be analyzed, whether it qualifies as misinformation, and who authored or shared the article.
For scholars, especially in the information fields, it’s quite common to think about our own positionality and what kinds of biases can stem from our educational or social background, gender, race, or things like that. For instance, the way ChatGPT responds starts from our prompts, from how we ask questions. And it actually depends on our interpretation: even if everyone gets the same answer, they think about and interpret it differently. A lot of people in my field do fact checking when they encounter a news outlet: they Google those websites and check whether they lean right or left, and how far. However, it gets really difficult when [information is] written in a different language or when we lack sufficient background or context about it.
What is your involvement in the SMASH Project?
Lee: I’ve been working on it for two years. It’s under the UCLA Initiative to Study Hate. I’ve been working with Professor Anne Gilliland to write a scholarly article about our [SMASH] findings. My job also includes writing articles and data analysis. The main topic is online hate speech and cyberbullying and their impact on K-12 students in the L.A. area. My interest is in literacy education, and also in methodological problems. For instance, when we try to gather data, there are really sensitive questions, like when we ask, “Have you ever been cyberbullied?” or, “Have you engaged in any online hate speech?” How to ask these questions in a thoughtful way is one thing, and how to get accurate data is another.
If we think about this in terms of research, it’s difficult if we don’t get this kind of information. Writing about the kinds of methodological or procedural problems we face when we do this kind of research is part of my work.
We also found that when we try to do research on this project, there isn’t a great definition of online hate speech or cyberbullying. It really depends on the recipient. If someone says, “It was not my intention for that to be cyberbullying or online hate speech,” can we consider that [it is] not cyberbullying or online hate speech?
I’ve also been working for the East Asian Library at UCLA on the early Korean American immigrants’ oral history collection. We encounter a lot of difficulties, like [not having] informed consent, because it was recorded in 1901. Back then, they didn’t have informed consent forms [or consider] privacy issues. We have to be careful [with] sensitive information about the figures who participated in independence movements.
As a scholar in information studies, I really enjoy examining the methodological issues, biases, and errors that can arise in the creation, curation, representation, and sharing of information. It is particularly rewarding to see these questions being raised and thoughtfully reconsidered across various sectors, leading to meaningful changes. Although we often fail to do so, it is also important to question the vast amount of information we encounter every day. Before we share anything online, we can reflect on the weight of the information we post by considering whether it is accurate, how it may be interpreted or disseminated by different audiences, and how it might be distorted or misunderstood without appropriate context.
With younger generations, we have to remember that they will have long and extensive digital lives, experiencing a world that differs significantly from our own. By reevaluating our current teaching methods and assessment practices, and by encouraging students to ask questions about the information they encounter, we can nurture their capacity for independent judgment and critical thinking so they can effectively manage and safely navigate the overwhelming amounts of information in the digital world they will live in.
Self-portrait by Seul Lee