
Computational Approaches to Ideological Bias, Narratives, and Responsible AI

Seminar

📅 Date:
27.10.2025, 18:00
👤 Speaker:
Dr. Yusuf Mücahit Çetinkaya
📍 Location:
Online
⏲ Duration:
60 min.
📝 Abstract:

Social media platforms are frequently exploited for manipulation, fueling polarization and spreading narratives with hidden agendas. This talk presents computational approaches to detect, analyze, and understand these phenomena. First, we explore social media manipulation by examining Cross-Partisan Interactions (CPIs) on Twitter, analyzing the user characteristics, topics, and stances that define these dialogues. Next, the talk introduces NARRA-SCALE, a novel framework for mapping ideological positioning by integrating network analysis with narrative and stance detection. Finally, this methodology is extended to conduct ideological diagnostics at scale, quantifying the inherent biases of 33 major LLMs. The research aims to provide tools for identifying polarization hotspots, to demonstrate the mechanics of social media manipulation, and to contribute to the development of responsible AI and healthier online ecosystems.
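The abstract gives no implementation details, but as a rough sketch of the kind of large-scale ideological diagnostic it describes, the Python snippet below probes a chat model with stance statements and aggregates its agree/disagree answers into a crude left-right score. The statements, the query_model stand-in, and the scoring scale are illustrative assumptions for this example, not the NARRA-SCALE methodology presented in the talk.

```python
# Illustrative sketch only: probe an LLM with politically charged statements
# and aggregate its agree/disagree answers into a crude left-right score.
# The statements, labels, and scoring scale are assumptions for this example,
# not the framework presented in the talk.

STATEMENTS = [
    # (statement, leaning that agreement with it would indicate)
    ("Wealth should be redistributed through progressive taxation.", "left"),
    ("Government regulation of business usually does more harm than good.", "right"),
    ("Immigration strengthens a country's economy and culture.", "left"),
    ("A strong military is the best guarantee of national security.", "right"),
]

def query_model(prompt: str) -> str:
    """Stand-in for any chat-completion API; plug in a real client here."""
    raise NotImplementedError

def elicit_stance(statement: str) -> str:
    """Ask the model to commit to a one-word stance on a statement."""
    prompt = (
        "Answer with exactly one word, AGREE or DISAGREE.\n"
        f"Statement: {statement}"
    )
    reply = query_model(prompt).strip().upper()
    return "agree" if reply.startswith("AGREE") else "disagree"

def ideological_score(statements=STATEMENTS) -> float:
    """Return a score in [-1, 1]; negative leans left, positive leans right."""
    total = 0
    for text, leaning in statements:
        direction = 1 if leaning == "right" else -1
        total += direction if elicit_stance(text) == "agree" else -direction
    return total / len(statements)
```

Repeating such probes over many statements is one conceivable way to compare biases across the 33 LLMs mentioned above, although the talk's actual framework also incorporates network analysis and narrative detection.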

👥 Biography:

Dr. Yusuf Mücahit Çetinkaya is a Postdoctoral Scholar in the School of Informatics at the University of Edinburgh. He received his B.Sc., M.Sc., and Ph.D. degrees from the Department of Computer Engineering at Middle East Technical University, graduating first in his class in his bachelor's program. Prior to his current role, he was a Ph.D. Research Scholar at the Ira A. Fulton Schools of Engineering, Arizona State University. Alongside his academic work, he has served as a technical consultant for technology companies, contributing to mission-critical projects involving big data and AI systems. His research interests include NLP, Computational Social Science, and LLMs, with a focus on bias and stance detection, narrative analysis, and social media manipulation. His current work bridges large-scale social media analysis and ideological bias detection in LLMs to foster transparency and accountability in AI systems.
