Editors’ Newsroom: Polishing our words or replacing our voices? Generative artificial intelligence in academic writing
Like many of you, I first encountered ChatGPT when it was introduced to the public in late 2022, and it sparked my immediate interest (Stojanov, 2023). The potential I saw was enormous: students could use it for conceptual understanding (Stojanov et al., 2024), content creators could use it to boost creativity (Doshi & Hauser, 2023), and authors with English as a second or other language (ESOL) could use it for translation (Jiao et al., 2023). As an ESOL writer myself, I have faced the harsh reality of reviewers commenting on my “bad English.” Yet, to be fair, writing with clarity is important; otherwise the message is lost. Enter large language models (LLMs) such as ChatGPT. Now, suddenly, the prospect of another reviewer commenting on “poor language” seems rather distant—ChatGPT can perfect my writing. However, this newfound ease presents an unexpected concern: the homogenization of writing.
Many of the authors published in academic journals, including Social Behavior and Personality: an international journal (SBP), are not native English speakers. Over the past year I have noticed fewer language issues in the papers I have evaluated as a peer reviewer or associate editor. Presumably, just like me, authors are using generative artificial intelligence (AI) to edit their writing. At the same time, I have noticed a peculiar trend. I have been an associate editor of SBP since January 2020, and a peer reviewer since 2019, yet I did not encounter words such as “elucidate,” “unravel,” or “intricate” in submissions prior to 2023/2024. Indeed, they were not frequently used in any academic journals. A Scopus search for these terms reveals a notable increase in their use between 2023 and 2024, which seems to reflect a broader shift in word choice. In my experience, these words are the hallmark of ChatGPT writing (and possibly that of other LLMs), and others have noted similar trends (Comas-Forgas et al., 2024). It is important to note, however, that this is not necessarily an indication that authors using these words are relying on AI, or—even if they are—that AI is writing for them. For example, when I input a paragraph in “bad” English, the LLM returns an improved version, often filled with these hallmark words. It is very likely that these authors are simply using LLMs to polish their writing, just as I do.
I am fully supportive of such use. If anything, it lowers geographical and linguistic barriers, increases equity, and speeds up the progress of science. However, such formulaic writing does raise a concern for me: in the not-so-distant future, a generation may grow up exposed to homogenized writing devoid of linguistic richness (Agarwal et al., 2024; Vanmassenhove et al., 2021) and form unhealthy ideas about what an “intelligent” text should sound like. Is it possible that linguistic biases in LLMs will reshape our understanding of what is “smart” and “intelligent”? Just as Hollywood has influenced beauty standards, could LLMs influence our perception of what constitutes a well-written text? Might writers feel their work is inadequate if it does not align with these patterns?
Moreover, studies suggest that we are becoming increasingly reliant on AI for decision making, leading us to trust AI-generated suggestions over our own judgment (Klingbeil et al., 2024; Suresh et al., 2020). In a writing context, when faced with a choice between an original sentence and an AI-polished version, we may reason that AI knows better. This reliance could drive an increasing uniformity in writing, as more individuals adopt AI tools to enhance their work, resulting in fewer original texts overall (Doshi & Hauser, 2023; Vanmassenhove et al., 2021).
To avoid such scenarios, we as writers need to take responsibility not to blindly accept the paraphrases or translations that LLMs offer, but to use these as inspiration for ways to amend our writing ourselves. By prioritizing thoughtful engagement over simple copy-and-paste, we can ensure that our writing remains authentic to our individual voices. It makes little sense to have a tool that can cut through the noise of unclear language and sharpen our ideas, only to use it to obscure them with generic phrasing. The true potential of generative AI lies in enabling us to express our voices more clearly and effectively. Let’s do that.
References
Agarwal, D., Naaman, M., & Vashistha, A. (2024). AI suggestions homogenize writing toward western styles and diminish cultural nuances. ArXiv. https://arxiv.org/abs/2409.11360
Comas-Forgas, R., Koulouris, A., & Kouis, D. (2024). ‘AI-navigating’ or ‘AI-sinking’? An analysis of verbs in research articles titles suspicious of containing AI-generated/assisted content. Learned Publishing, 38(1), Article e1647. https://doi.org/10.1002/leap.1647
Doshi, A., & Hauser, O. (2023). Generative artificial intelligence enhances creativity but reduces the diversity of novel content. Science Advances, 10(28), Article 5290. https://doi.org/10.2139/ssrn.4535536
Jiao, W., Wang, W., Huang, J., Wang, X., Shi, S., & Tu, Z. (2023). Is ChatGPT a good translator? Yes with GPT-4 as the engine. ArXiv. https://arxiv.org/abs/2301.08745
Klingbeil, A., Grützner, C., & Schreck, P. (2024). Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI. Computers in Human Behavior, 160, Article 108352. https://doi.org/10.1016/j.chb.2024.108352
Stojanov, A. (2023). Learning with ChatGPT 3.5 as a more knowledgeable other: An autoethnographic study. International Journal of Educational Technology in Higher Education, 20, Article 35. https://doi.org/10.1186/s41239-023-00404-7
Stojanov, A., Liu, Q., & Koh, J. H. L. (2024). University students’ self-reported reliance on ChatGPT for learning: A latent profile analysis. Computers and Education: Artificial Intelligence, 6, Article 100243. https://doi.org/10.1016/j.caeai.2024.100243
Suresh, H., Lao, N., & Liccardi, I. (2020). Misplaced trust: Measuring the interference of machine learning in human decision-making. WebSci ’20: Proceedings of the 12th ACM Conference on Web Science, 2020, 315–324. https://doi.org/10.1145/3394231.3397922
Vanmassenhove, E., Shterionov, D., & Gwilliam, M. (2021). Machine translationese: Effects of algorithmic bias on linguistic complexity in machine translation. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, 2021, 2203–2213. https://doi.org/10.18653/v1/2021.eacl-main.188
Ana Stojanov, Associate Editor, Scientific Journal Publishers, New Zealand. Email: [email protected]