📢 Announcement

Welcome to the fourth edition of LC4. This year's workshop emphasizes agentic AI systems for low‑resource, cross‑domain, cross‑lingual, and cross‑modal content analysis. Invited speakers and final dates will be announced soon. Official site: lc4workshop.github.io.

Mode: In‑person workshop with a remote presentation option for participants who cannot travel.

🎯 Overview

LC4 invites work on novel computational methods to detect, classify, and model multimedia content across domains, languages, and modalities, with a special focus on agentic content monitoring, adaptive reasoning, and trust‑aware decision‑making in supervision‑sparse settings. We especially welcome contributions that leverage foundational models (LLMs and multimodal agents) for robust, interpretable, and socially impactful analysis in low‑resource contexts.

🧩 Topics of Interest

  • Multimodal & agentic foundational models for cross‑domain multimedia analysis (e.g., sentiment, finance, hate speech, cyberbullying, threats, disaster response, education).
  • LLM pipelines for social media in low‑resource & code‑mixed languages (prompting, chain‑of‑thought, retrieval‑augmentation).
  • Hybrid NLP–Vision–Speech approaches with LLMs/multimodal agents for low‑resource content detection and analysis.
  • Handling missing modalities; generative imputation & LLM reasoning for modality recovery.
  • Corpora, annotation guidelines, and agent‑assisted annotation frameworks for cross‑domain/cross‑lingual/cross‑modal analysis.
  • Robustness, interpretability, and adaptability evaluations of LLM/agent systems in low‑resource settings.
  • Bias/fairness studies under cross‑domain low‑resource settings; amplification/mitigation by LLMs & agents.
  • Design of cross‑domain/cross‑lingual/cross‑modal metrics, including hallucination detection and incompleteness identification.
  • Annotation quality & reliability; agent‑in‑the‑loop methods, inter‑rater calibration with LLMs, bias detection.
  • Agentic AI frameworks for continuous monitoring and adaptation to evolving domains, genres, and languages.

Invited Speakers: TBA

❓ Guiding Research Questions

  • Domain Shift: How do domain changes affect the generation and analysis of multimedia, especially in agentic, LLM‑driven systems?
  • Cross‑Lingual Transfer: How can knowledge be transferred from high‑resource to low‑resource languages (adaptation, prompting, RAG)?
  • Cross‑Modal Alignment: How can alignment be improved using statistical similarity and LLM‑based semantic reasoning?
  • Sparse Supervision: How can agentic workflows best utilize multilingual data in supervision‑sparse settings?
  • Foundational Models: When do LLMs/multimodal agents alleviate or exacerbate cross‑domain/cross‑lingual/cross‑modal challenges?
  • Accountability: How do we ensure accountability, fairness, and explainability of LLM/agent systems in low‑resource multimedia contexts?

📅 Key Dates (Tentative)

  • First Call for Papers: Sep 10, 2025
  • Paper Submission Due: Sep 30, 2025
  • Notification: Oct 20, 2025
  • Camera‑ready Due: Nov 12, 2025
  • Workshop Dates: Dec 18–20, 2025

🧑‍⚖️ Program Committee

  • Bhuvana J, SSN College of Engineering, Chennai
  • Dhanalakshmi V, Subramania Bharathi School of Tamil Language & Literature, Pondicherry University, India
  • Dhivya Chinnappa, JP Morgan Chase & Co., Texas
  • Krishna Madgula, Zopsmart Technologies
  • Onkar Krishna, Hitachi Ltd., Japan
  • Ramesh Kannan R, Vellore Institute of Technology, Chennai
  • Saima Mohan, Hitachi India R&D
  • Sathyaraj T, Sri Krishna Adithya College of Arts and Science, Coimbatore
  • Sharath Kumar, Hitachi India R&D
  • Soubraylu Sivakumar, SRM Institute of Science and Technology, Chennai
  • Yuta Koreeda, Hitachi Ltd., Japan

📚 Selected References

  1. Gkotsis et al. (2016). The language of mental health problems in social media. CLPsych@HLT‑NAACL.
  2. Engonopoulos et al. (2013). Language and cognitive load in a dual task environment. Cognitive Science, 35.
  3. Zampieri et al. (2019). Predicting the Type and Target of Offensive Posts in Social Media. NAACL.
  4. Chakravarthi et al. (2022). DravidianCodeMix: sentiment & offensive language identification dataset for Dravidian code‑mixed text. LRE.
  5. Ravikiran et al. (2022). Findings of the Shared Task on Offensive Span Identification from Code‑Mixed Tamil‑English Comments. arXiv:2205.06118.