AI: For Better or Worse?
Saint Joseph’s faculty see opportunities for artificial intelligence to benefit their respective fields, despite the ethical and existential challenges.

Artificial intelligence has been touted as a society-altering force on par with electricity and the internet. Machines and applications, supplemented by artificial intelligence algorithms, can now perform tasks and operations at speeds wholly unattainable for human beings. With this mind-bending potential comes uncertainty and its close relative: fear.
No field or industry has been left untouched by this boom in AI progress. In higher education, questions have arisen about the relevance of homework and tests, and about how best to prepare students for the world outside the classroom. What good are these staples of education if AI chatbots erase the need for manual learning? Saint Joseph’s University is taking steps to educate its faculty on the impact AI can have on the classroom, as well as on the ethical implications of AI.
Four professors were asked: How are AI tools impacting the disciplines within the liberal arts grounded in human inquiry? What are the pros and cons of AI from your disciplinary perspective?
Though each echoed that sentiment of uncertainty around AI, all four also saw opportunities for positive impact.
Jeffrey Hyson, PhD
Assistant Professor, History
The computational and analytical powers of artificial intelligence certainly have great potential for historical research projects involving massive quantities of text or data, though human scholars still need to do the essential work of identifying significance and providing context. When it comes to generative AI, however, most historians are much more wary of and uneasy about this new technology. As the CUNY professor Angus Johnston, PhD, has put it, a tool like ChatGPT “isn’t a thing-knowing machine, it’s a thing-saying machine.” When AI apps “answer” historical queries, they’re just generating a series of words that follow patterns from the texts they were trained on, rather than producing a “right answer.” Despite repeated updates, these apps routinely commit factual errors or cite nonexistent publications, making them highly unreliable for research purposes.
Many historians also question the ethics of generative AI, since these tools have been trained on copyrighted material without permission or compensation of the original authors. The massive environmental costs of generative AI also raise serious questions about its sustainability in an age of climate crisis. But perhaps most fundamentally, generative AI offers an image of “knowledge production” that mistakes the product for the process. When our students rely on ChatGPT to write their papers, they are skipping over the whole purpose of the assignment: thinking critically, solving problems, finding connections, turning thoughts into prose. History, like all of the liberal arts, is hard work — but the work is how we learn. Letting generative AI do that work isn’t really doing history.
Madhu Mahalingam, PhD
Professor of Practice, Chemistry and Biochemistry
As exemplified by the Nobel Prize in Chemistry for 2024 — awarded for the development of AlphaFold, an AI model that predicts complex protein structures and enables the building of new proteins — generative artificial intelligence (GAI) tools are already playing a big role in pushing the boundaries of knowledge in chemistry. GAI refers to any artificial intelligence application that uses algorithms to analyze large amounts of data and predict outputs based on user prompts. To achieve such scientific breakthroughs, students in the discipline must develop critical thinking, problem-solving, scientific inquiry and communication skills. These outcomes are developed with the help of a laboratory component in each area of the discipline. GAI could be beneficial if integrated as a “scaffold” for these learning outcomes through effective prompt engineering.
The promise of AI for education lies in its potential as a personalized tutor, enabling all students to achieve better outcomes. AI is already being integrated into the workplace, so AI literacy is essential for both faculty and students in all disciplines. That means understanding the full scope of potential benefits and problems, including AI hallucinations (instances of AI producing incorrect or inaccurate responses to a prompt), as well as academic integrity and accountability for student learning. Further, if faculty are limited to free versions of these tools to ensure equal access for all students, the result could be suboptimal AI interactions that compromise the learning experience.
Steve Rossi, MFA
Assistant Professor, Art

AI definitely provides potential benefits in the realm of the fine arts. Artists use AI as one tool among many for art-making. The determining factor for anything to be considered art is the artist’s intent; whether or not it is engaging work is for the audience to decide. By that measure, AI-generated outputs can certainly be considered art. Human-crafted art will always stay relevant, since the experience of viewing art is essentially about human connection. Even in AI-generated art practices, a person is still providing the prompts that shape the output, and it is the artist's intentions, background experiences and sensibilities that determine those prompts.
That said, in the commercial arts — in the realms of graphic design, illustration and animation, for instance — AI-generated content poses a much different question. In the commercial arts, the information and message are the focus, rather than the artist’s intent. AI tools have the capacity to radically alter entire commercial art industries and have already begun to do so.
Jenny Spinner, PhD
Professor, English, Writing & Journalism

AI is a tool. It’s not a replacement for critical thinking, for curiosity, for in-depth research and reporting. It doesn’t have emotional intelligence, which is such an important part of being a journalist, of developing deep listening skills when talking to sources.
Journalism professors have to give careful thought to their learning objectives and decide which AI tools might help, or interfere, with those objectives and outcomes. That determination might differ from class to class. It will certainly differ from professor to professor.
Last semester I showed a class how AI can be used to help do an AP style check. But we were just using the free version of ChatGPT. It did okay. It wasn’t perfect. To be most useful, you still need a rudimentary knowledge of AP style so you can determine when ChatGPT is wrong. We also had to feed it the Hawk Style Manual (our internal style guide) for it to get closer.
For now, in the classroom, I focus on AI for what I have determined to be efficiency tasks, helping to write headlines and cutlines, helping to clean up style, transcribing interviews, etc. [My students and I] are learning together.