Generative AI and the Shaping of Academic Voice: Language, Normalization, and Epistemic Justice in AI-mediated Assessment

Linda Bradley, Kyria Rebeca Finardi

Abstract


Despite the potential of Generative Artificial Intelligence (GenAI) to support research and academic production, attention must be paid to its implications for human learning and for the development of voices and epistemologies in multilingual contexts. A central concern in critiques of GenAI is that language is frequently treated as a transparent medium rather than as a site where biases and hierarchies are produced and reproduced by AI systems. This paper argues that GenAI intersects with existing hierarchies of academic knowledge production, assessment, and validation, especially in relation to the voices of multilingual authors. The study reports findings from an empirical analysis of academic writing produced by multilingual Master’s students under contrasting assessment conditions: a restricted digital examination environment and an unrestricted setting with full access to online tools. Rather than seeking to detect AI use, the analysis examines how different assessment regimes shape register consistency, source integration, authorial voice, and linguistic diversity. Findings suggest that unrestricted environments are associated with higher surface-level academic quality, but also with increased pressures toward normalization, potentially erasing linguistic diversity, weakening authorial voice, and marginalizing alternative rhetorical and epistemic traditions. The paper concludes by advocating language-aware approaches to GenAI that support epistemic justice and linguistic diversity.



DOI: https://doi.org/10.22158/selt.v14n1p10


This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © SCHOLINK INC.  ISSN 2372-9740 (Print)  ISSN 2329-311X (Online)