Challenges and Verification: The Reproduction of the “Sapir-Whorf Hypothesis” by Large Language Models
Abstract
The Sapir-Whorf Hypothesis posits that language structures significantly influence cognitive processes, yet its application and validation within artificial intelligence, particularly large language models (LLMs), remain largely unexplored. While LLMs such as GPT-4 have shown promise in natural language processing tasks, their ability to engage with complex cognitive theories such as the Sapir-Whorf Hypothesis is uncertain. This study investigates the extent to which LLMs can reproduce, challenge, and expand upon the Sapir-Whorf Hypothesis through three case studies: language and cognitive development, the impact of language on social behavior, and the role of language in memory and cognition. A mixed-methods approach combining qualitative and quantitative analyses was employed to evaluate the depth and relevance of LLM-generated responses. The results reveal that LLMs can generate relevant responses aligned with theoretical principles but struggle to provide nuanced, context-specific insights, especially in real-world applications such as bilingualism and trauma-related memory. This study contributes to the understanding of LLMs' potential and limitations in theoretical reasoning, offering valuable insights for future improvements in AI models for cognitive science and theoretical research.
Full Text: PDF
DOI: https://doi.org/10.22158/eltls.v7n6p130

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © SCHOLINK INC. ISSN 2640-9836 (Print) ISSN 2640-9844 (Online)