Academics with Impact

Can AI and Higher Education Coexist?

A faculty-development program made possible by the support of an anonymous donor takes a crucial first step toward a harmonious relationship between AI and higher education.


Can artificial intelligence and higher education coexist? A gift from an anonymous donor has made it possible to seek the answer to that urgent question.

This past summer, 75 faculty members were selected to participate in the first phase of a donor-supported, two-year faculty-development program. The undertaking is co-led by Aubrey H. Wang, PhD, the interim director of Saint Joseph’s University’s Office of Teaching and Learning, and Peter C. Norberg, PhD, senior associate provost for academic and faculty support.

“The goal [of phase one] was to develop a large group of discipline-specific faculty experts who are educated in the use and have an understanding of ethical implications of artificial intelligence in the classroom,” says Wang. “I would say that goal was met.”

The key first step, Wang explains, was for faculty members to be taught the basics of how AI functions: What are its strengths and limitations? From there, they explored ways to engage with their students and gather feedback on how students are already using AI systems. Creating this dialogue allows faculty to assess their curriculum and decide whether their exams, projects and assignments need to be reimagined with AI in mind.

“Our faculty are excited,” says Vana M. Zervanos, EdD, MBA, associate dean in the Haub School of Business. “They have more agency because they are informed. It doesn't mean all the answers are there, but they see that AI and intellectualism can coexist.”

Zervanos provides an example. Students taking decisions and system sciences courses may be asked to design an algorithm that could be used in supply chain management. To avoid students passing off an AI’s work as their own, the professor could instead ask students to use AI to design a version of the algorithm and then ask students to design their own version. What are the differences? Why might they exist? How could the algorithm be improved?

“This is a way to get students to think critically,” Zervanos concludes. “AI is not meant to be a substitute for critical thinking. In fact, our hope is that AI can be additive thinking.”

Both Wang and Zervanos agree that this approach hinges on AI literacy.


“The true fear,” Wang explains, “is that without AI literacy, faculty and students could very easily inadvertently consume AI in a way that increases inequity, compromises academic integrity, perpetuates bias, and violates privacy and data security.”

A common example of this arises when students are assigned papers on controversial topics. Data and content found on the internet are the lifeblood of AI systems, and they are often tainted with misinformation and bias. Relying on an AI chatbot to provide background information for a paper can result in false or misleading information making its way into the final project, perpetuating those biases and errors.

AI literacy is a rising tide that lifts all boats, but it especially benefits students. 

“We are very much interested in student impact,” Zervanos says. “This starts with faculty. Do they have a conceptual understanding of how AI has impacted our society and education? Let’s make sure that in the classroom and outside the classroom, faculty will do their due diligence and not take this lightly, because there is potential for abuse and misunderstanding.”

AI provides opportunities in education, and it presents dangers. That is why a strong foundational understanding is the jumping-off point. Phase two of the program, which will take place in the summer of 2025, will take the next step: Faculty members who participated in phase one will be invited to deeply investigate the impact and use of AI within their specific academic disciplines.

The constant drumbeat of learn, apply, analyze is, according to Wang and Zervanos, non-negotiable. At the end of the day, AI is not going away, but its use can be safeguarded through evaluation and vigilance.

“The future of AI is not inevitable,” Wang says, echoing a popular refrain from AI researcher Melanie Mitchell. “But it is ours to create.”