Quality Assessment of Large Language Models Empowering Traditional Chinese Medicine English Translation: A Comparative Study with Online Translation Tools
Abstract
Objective: To compare the quality of English translations of modern Traditional Chinese Medicine (TCM) texts produced by artificial intelligence (AI)-driven large language models (LLMs) with those produced by mainstream online translation platforms, providing empirical evidence for TCM translation practice and research in the AI era.
Methods: A representative passage from the TCM textbook Fundamentals of Chinese Medicine was selected as the source text and submitted to each LLM and online translation service to generate English translations. The Multidimensional Quality Metrics (MQM) framework was adopted to construct a scorecard covering dimensions such as terminology accuracy, omission, mistranslation, and grammar. Two TCM translation experts independently scored the translations, and the average total penalty points were calculated as the measure of translation quality.
Results: Advanced international LLMs (Claude 3.5 Sonnet, ChatGPT-4o mini, Gemini 1.5 Flash) produced the highest-quality translations, with a total penalty score of only 2, significantly outperforming traditional online translation tools. The domestic LLM (ERNIE 4.0) performed next best and was still markedly superior to all conventional online platforms. Translations from traditional online tools commonly suffered from terminological inaccuracies, mistranslations, and grammatical awkwardness.
Conclusion: Current mainstream LLMs demonstrate a significant advantage in translating modern TCM texts with respect to terminological accuracy and linguistic fluency, reaching a basically usable level. However, LLMs still exhibit limitations when processing TCM-specific terms with abstract meanings. Future research should further explore their potential for translating ancient TCM classics and develop optimized prompt strategies that integrate expert knowledge.
DOI: https://doi.org/10.22158/elsr.v7n1p110
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © SCHOLINK INC. ISSN 2690-3644 (Print) ISSN 2690-3652 (Online)