DXVNet-ViT-Huge (JFT) Multimodal Classification Network Based on Vision Transformer

Haoran Li, Daiwei Li, Haiqing Zhang, Xincheng Luo, Lang Xu, LuLu Qu

Abstract


Traditional CNN networks are not good at extracting the global features of images. To address this problem, this paper extends the DXVNet network with a Conditional Random Field (CRF) component and a pre-trained ViT-Huge (Vision Transformer) model, building a new DXVNet-ViT-Huge (JFT) network. The CRF component helps the network learn the constraints among the labels predicted for each word, corrects the word-label prediction errors of the D-GRU based method, and improves the accuracy of sequence annotation. The Transformer architecture of the ViT-Huge model can extract global image feature information, whereas a CNN is better at extracting local image features. The ViT-Huge and CNN pre-trained models are therefore combined through multimodal feature fusion: the two complementary streams of image features are fused by a Bi-GRU to improve the network's classification performance. The experimental results show that the newly constructed DXVNet-ViT-Huge (JFT) model achieves good performance, with F1 scores on two real public datasets that are 6.03% and 7.11% higher, respectively, than those of the original DXVNet model.
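To make the fusion idea concrete, the following is a minimal PyTorch sketch of the scheme the abstract describes: global features from a ViT encoder and local features from a CNN are projected into a common space, treated as a two-step sequence, and fused by a Bi-GRU before classification. All module names, feature dimensions (1280 for ViT-Huge, 2048 for a typical CNN backbone), and the pooling choice are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of ViT + CNN feature fusion via a Bi-GRU,
# following the architecture described in the abstract.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, vit_dim=1280, cnn_dim=2048, hidden=512, num_classes=10):
        super().__init__()
        # Project both feature streams into a shared hidden space.
        self.vit_proj = nn.Linear(vit_dim, hidden)
        self.cnn_proj = nn.Linear(cnn_dim, hidden)
        # Bi-GRU fuses the two complementary feature vectors.
        self.bigru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, vit_feat, cnn_feat):
        # vit_feat: (B, vit_dim) global features; cnn_feat: (B, cnn_dim) local features.
        seq = torch.stack(
            [self.vit_proj(vit_feat), self.cnn_proj(cnn_feat)], dim=1
        )                                     # (B, 2, hidden)
        fused, _ = self.bigru(seq)            # (B, 2, 2*hidden)
        return self.head(fused.mean(dim=1))   # pool both steps, then classify

model = FusionClassifier()
logits = model(torch.randn(4, 1280), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 10])
```

Treating the two projected feature vectors as a length-2 sequence is one simple way to let a Bi-GRU mix the global and local streams in both directions; the paper's exact fusion details may differ.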



DOI: https://doi.org/10.22158/jetr.v4n1p59



Copyright (c) 2023 Haoran Li, Daiwei Li, Haiqing Zhang, Xincheng Luo, Lang Xu, LuLu Qu

This work is licensed under a Creative Commons Attribution 4.0 International License.
