DynaLLM-C3N: Dynamic Fusion of Cross-Modal Correlation and Large Language Model Reasoning for Multimodal Fake News Detection
Abstract
The proliferation of fake news on social media calls for more reliable detection methods. Existing approaches, however, struggle to capture inconsistencies between text and images, and their shallow semantic modelling limits further progress. To address these issues, we propose DynaLLM-C3N, a novel framework that combines a cross-modal correlation network (C3N), which models the links between images and text, with a large language model that verifies information and generates complementary reasoning features. A dynamic fusion mechanism then adaptively weights the two branches, balancing the strengths of data-driven correlation modelling and symbolic reasoning. Experiments on the Twitter and Weibo datasets show that DynaLLM-C3N outperforms existing methods in detection accuracy while offering improved interpretability, suggesting the framework's broader applicability across detection contexts. The source code will be made publicly available upon publication.
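The dynamic fusion step described above can be illustrated with a minimal sketch. The gating scheme, function names, and confidence scores below are hypothetical stand-ins for the paper's learned mechanism: each branch's feature vector is weighted by a softmax over per-branch scores before combination.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dynamic_fusion(corr_feat, reason_feat, corr_score, reason_score):
    """Fuse the correlation-branch and reasoning-branch feature vectors.

    The weights come from a softmax over per-branch confidence scores,
    a simple stand-in for the learned gate in the actual framework.
    """
    w_corr, w_reason = softmax([corr_score, reason_score])
    return [w_corr * c + w_reason * r
            for c, r in zip(corr_feat, reason_feat)]
```

With equal scores the branches contribute equally; raising one score shifts the fused representation toward that branch, which is the adaptive behaviour the abstract attributes to the fusion module.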
Full Text: PDF
DOI: https://doi.org/10.22158/jetss.v8n1p166
Copyright (c) 2026 Yuxin Zhao, Jize Li

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © SCHOLINK INC. ISSN 2642-2336 (Print) ISSN 2642-2328 (Online)