Faculty Panel Explores Artificial Intelligence’s Role in the Classroom


Honors Student Government and the Artificial Intelligence Society hosted a faculty panel on artificial intelligence (AI) and academia on Oct. 14, bringing together six faculty members from the College of Arts and Sciences and the Quinlan School of Business.

The discussion began with a definition of artificial intelligence: computer systems capable of performing tasks that normally require human intelligence. The panelists also discussed Loyola's AI policy.

Marco Alvarado, a fourth-year political science and psychology student and chair of Honors Student Government, presented the idea for the panel to the administration.

“AI is evolving rapidly, and I’ve noticed that academia is still trying to catch up,” Alvarado said. “That’s what inspired me to create this panel, to get a broad range of perspectives on AI.”

Avery Boland, a fourth-year neuroscience and mathematics student, said conversations around AI can be difficult, but that doesn’t mean people are incapable of having them.

To keep disagreements from turning into harmful conversations, Boland said, people should rely on respect, good communication skills and knowing how to approach disagreement.

Associate writing program director and English professor Julie Chamberlin said she opposes an "abstinence-only" approach to the AI debate.

“When you tell people not to use it, they end up using it the wrong way,” Chamberlin said. “I prefer to give students the knowledge to understand where AI falls short of human creativity and intelligence.”

Jillian Rossman, a fourth-year information systems and analytics major, is president of the AI Society. She said that within the student organization, she and her fellow officers emphasize nuance when talking about AI.

“It’s much more important to have nuanced discussions than just saying ‘AI sucks,’” Rossman said. “We obviously have a bias towards AI, but we also talk a lot about bias, environmental impact and ethics.”

Alvarado focused his questions during the panel on how different fields can improve the use of AI and how students should use it in the future.

Computer science professor Leo Irakliotis said universities need to recognize that they are very conservative when it comes to technology.

“Institutions are typically one, two, even three generations behind their students in adopting new technologies,” Irakliotis said. “Sometimes teachers can be even further behind than the school itself.”

Boland said professors often fail to see the benefits of AI and tend to take an overly cautious approach.

“It can be an extremely valuable educational tool, especially beyond academia,” Boland said. “Many public school students have very large classes and do not receive individual support.”

Boland said that when tutoring costs money that most families don’t have, ChatGPT is a powerful tool.

If students need to hear something explained differently than their teachers can provide, ChatGPT can rephrase the material in terms that make more sense to them, Boland said.

Rossman said that when people talk about AI, it’s important to think about communities outside the United States.

“Studies have shown students in some African countries jumping several grade levels because of AI,” Rossman said. “For people who lack basic access to education, AI can make a huge difference.”

Tess Tchorbadjiev, a first-year political science student, said that while she sometimes uses AI, she believes students should continue to rely on their own thinking.

“If we let AI think for us, we lose that independence and those critical thinking skills that are already starting to fade,” Tchorbadjiev said. “The more we rely on AI rather than our own brains, the less capable we become.”

Irakliotis said his biggest fear about AI is that people will ignore the problem until it renders them irrelevant.

He compared the moment to Gutenberg's invention of the printing press, which rendered monasteries obsolete despite their monopoly on knowledge only months before. Irakliotis said there is still a window before that irrelevance arrives in which to prepare future generations.

Data centers use water for cooling and can rely on more than five million gallons of water per day, The Associated Press reported.

“These models can’t do everything people claim they can, and that has environmental, ethical and practical consequences,” Chamberlin said. “The energy they consume is not sustainable.”

Tchorbadjiev said solving the problem of AI's environmental damage requires a focus on sustainability.

While it is possible to reduce AI's negative environmental impact, doing so is expensive and requires major investment, meaning the potential benefits are sometimes not worth the cost, according to Tchorbadjiev.

Chamberlin said there needs to be more collaboration between the tech industry and educators.

“If there had been more collaboration from the beginning, we could have created tools that supported learning rather than disrupting it,” Chamberlin said. “The ship has sailed. AI is here and we need to deal with it.”
