Leveraging Artificial Intelligence for Assigning ILR Ratings to Authentic Content

Authors

  • Jordan Eason, Doctoral Student at the University of Coimbra, Portugal

Keywords:

Artificial Intelligence, AI Chatbots, ChatGPT, Leveling Content, ILR Scale

Abstract

When teaching Brazilian Portuguese and other languages for the Department of Defense (DoD), deciding which authentic texts to use can be difficult. One complicating factor is choosing an authentic text at the correct proficiency level for the students: neither too challenging nor too easy. The DoD uses the Interagency Language Roundtable (ILR) levels to indicate the complexity of any text or listening passage. Learning how to “level” a passage (i.e., assign it a rating on the ILR scale) is a skill that must be developed and takes a significant amount of time. This paper draws on my experience working for the DoD as a member of a team assigned to level audio, video, and text materials; during this process, team members sometimes disagreed about whether a passage was ratable at all, or about which ILR level it fit. I argue that machine learning processes available in artificial intelligence (AI), specifically natural language processing (NLP) platforms such as OpenAI’s ChatGPT or Google’s Bard, among other AI chatbots, offer human raters a tool that can increase their efficiency while removing potential subjectivity from the leveling process.

Published

2025-07-10

Section

Commentary