How AI Constructs Stance: A Corpus-Based Comparison of Interactional Metadiscourse in Student and AI Essays
Abstract
The growing use of Large Language Models (LLMs) in writing has prompted a need to scrutinize how these models establish authorship and stance. This study adopts a corpus-based approach, grounded in Hyland's framework, to investigate interactional metadiscourse in AI-generated essays versus human writing. Drawing on a parallel corpus of approximately 1.3 million words from the DAIGT V2 dataset, the research combines quantitative frequency analysis with qualitative concordance investigation. The findings reveal that while AI models demonstrate structural proficiency, they exhibit a deficit in rhetorical conviction and reader engagement, characterized by a detached stance. Compared with the dialogic nature of human writing, the AI-generated essays underuse boosters and rhetorical questions and rely on vague attitude markers, and thus fail to build a writer-reader relationship. These results highlight the deficiencies in AI writing's stance-taking and offer corresponding pedagogical implications.
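To illustrate the kind of quantitative frequency analysis the abstract describes, the following is a minimal Python sketch that computes normalized frequencies (per 1,000 words) of a few interactional metadiscourse markers in two text collections. The marker lists, file names, and normalization rate are illustrative assumptions for demonstration only, not the authors' actual instruments or taxonomy counts.

```python
import re

# Illustrative subsets of Hyland's interactional categories (assumed, not the study's full lists).
MARKERS = {
    "hedges": ["might", "perhaps", "possibly", "suggest"],
    "boosters": ["clearly", "certainly", "definitely", "in fact"],
    "attitude_markers": ["important", "interesting", "unfortunately"],
    "engagement_markers": ["you", "consider", "note that"],
}

def tokenize(text: str) -> list[str]:
    """Simple lowercase word tokenization; a real study would use a proper tokenizer."""
    return re.findall(r"[a-z']+", text.lower())

def marker_frequencies(text: str, per: int = 1000) -> dict[str, float]:
    """Return the frequency of each marker category, normalized per `per` words."""
    tokens = tokenize(text)
    total = len(tokens)
    joined = " ".join(tokens)  # allows matching multi-word markers such as "in fact"
    freqs = {}
    for category, markers in MARKERS.items():
        count = sum(len(re.findall(rf"\b{re.escape(m)}\b", joined)) for m in markers)
        freqs[category] = count / total * per if total else 0.0
    return freqs

if __name__ == "__main__":
    # Hypothetical file names standing in for the human and AI sub-corpora.
    human_text = open("human_essays.txt", encoding="utf-8").read()
    ai_text = open("ai_essays.txt", encoding="utf-8").read()
    print("human:", marker_frequencies(human_text))
    print("ai:   ", marker_frequencies(ai_text))
```

Such raw counts would then feed the comparative analysis, with concordance lines consulted to check how each marker actually functions in context.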
Full Text: PDF
DOI: https://doi.org/10.22158/eltls.v7n6p103

This work is licensed under a Creative Commons Attribution 4.0 International License.