GCRIS

Browsing by Author "al-Hamadani, S.A.S."

Now showing 1 - 1 of 1
    Conference Object
    Sentiment Analysis for Arabic Using Deep Learning
    (Springer Science and Business Media Deutschland GmbH, 2026) al-Hamadani, S.A.S.; Sever, H.
    With the explosive growth of digital communication, understanding sentiment in online content has become increasingly critical for a wide range of applications, from customer feedback analysis to social media monitoring. However, sentiment analysis for Arabic presents unique challenges due to the language's rich morphology, diverse dialects, and complex syntactic structures. These challenges are further amplified in multimodal settings, where the fusion of textual, visual, and auditory cues is required to capture the full spectrum of human emotion. To address these issues, this paper introduces a new framework for Arabic Multimodal Sentiment Analysis (AMSA), combining multi-level deep learning approaches across text, audio, and visual modalities. Our approach utilizes state-of-the-art transformer-based architectures, including the Multimodal Transformer (MulT) and Early Fusion models, to tackle both linguistic complexity and multimodal alignment. Specifically, we leverage DeBERTa for extracting rich textual features, ViT (Vision Transformer) for visual cues, and Whisper for capturing nuanced audio signals, creating robust and contextualized representations. Experimental results on a curated Arabic multimodal dataset demonstrate the effectiveness of this approach, with our proposed MulT model achieving an F1 score of 72.73%, reflecting a substantial improvement of 13.98% in F1 score and 14.6% in accuracy over existing baselines. These findings highlight the power of cross-modal attention mechanisms and early fusion strategies in accurately capturing subtle sentiments across multiple modalities. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
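The abstract's early-fusion strategy — concatenating per-modality feature vectors (DeBERTa for text, Whisper for audio, ViT for vision) into one joint representation before classification — can be illustrated with a minimal sketch. The embedding dimensions, weights, and three-way sentiment labels below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Hypothetical per-modality embeddings; the dimensions are placeholders,
# not values reported in the paper.
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(768)    # e.g. a pooled DeBERTa text vector
audio_emb = rng.standard_normal(512)   # e.g. pooled Whisper encoder states
visual_emb = rng.standard_normal(768)  # e.g. a ViT [CLS] vector

# Early fusion: concatenate modality features into a single joint vector
# before any classifier sees them (as opposed to fusing per-modality
# predictions later).
fused = np.concatenate([text_emb, audio_emb, visual_emb])  # shape (2048,)

# A linear sentiment head over the fused representation; the weights here
# are random stand-ins for what a trained model would learn.
num_classes = 3  # e.g. negative / neutral / positive
W = 0.01 * rng.standard_normal((num_classes, fused.shape[0]))
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax over sentiment classes

print(fused.shape)
print(probs)
```

Cross-modal attention (as in MulT) instead lets each modality's sequence attend to the others before pooling, which is what the abstract credits for capturing subtle sentiment; the concatenation above shows only the simpler early-fusion baseline.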