Spread Evaluation and Engagement Dynamics of AI-Generated Content on Extremist and Disinformation Telegram Channels

Challenge

Telegram has become a central hub for the spread of disinformation, conspiracy theories, and extremist narratives. At the same time, generative AI (GenAI) is transforming the production of online media, from low-quality synthetic “slop” to highly realistic deepfakes. Despite growing public concern, there is still a lack of systematic research on the prevalence of AI-generated content (AIGC) on Telegram, its role in shaping engagement dynamics, and the purposes for which it is employed in disinformation and extremist ecosystems.

Objective

The thesis aims to explore the role of GenAI within Telegram’s information environment with a distinct focus on extremist and disinformation channels. Specifically, it should:

  • measure the prevalence of AI-generated media (images and videos),
  • analyze whether such content attracts different engagement than non-AI media,
  • examine how quality, style, and topical framing affect spread and interaction.

Methodology & Expected Results

The work can be methodologically grounded in a comparative data analysis of Telegram channels and groups. Possible approaches include:

  • Data collection from Telegram (via open datasets or independent scraping, subject to ethical and legal standards),
  • Detection of AI-generated media using machine learning tools or heuristic classification,
  • Content classification by topic (e.g., politics, health, monetization) and quality (deepfake-like vs. low-effort), possibly with the use of ML,
  • Engagement analysis through statistical comparison of reactions, forwards, and comments.
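
The engagement-analysis step above could be sketched as follows. This is a minimal illustration, not a prescribed method: it applies a non-parametric Mann-Whitney U test to per-post forward counts, a reasonable first choice since engagement distributions on Telegram tend to be heavy-tailed. The counts shown are illustrative placeholders, not real measurements.

```python
# Sketch of a statistical engagement comparison between posts flagged
# as AI-generated and posts with non-AI media. A Mann-Whitney U test
# avoids normality assumptions that heavy-tailed engagement counts
# would violate. All numbers below are hypothetical examples.
from scipy.stats import mannwhitneyu

# Hypothetical per-post forward counts for the two groups.
aigc_forwards = [12, 45, 3, 88, 7, 150, 22, 9, 31, 64]
non_ai_forwards = [5, 17, 2, 40, 8, 60, 11, 4, 19, 25]

stat, p_value = mannwhitneyu(aigc_forwards, non_ai_forwards,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```

The same comparison would be run separately for reactions and comment counts, ideally controlling for channel size and posting time.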

A mixed-methods design is encouraged, combining computational detection with qualitative content analysis. Expected artifacts may include:

  • a dataset annotated for AI-generated vs. non-AI content,
  • a classification model or heuristic framework for AIGC detection,
  • a statistical assessment of engagement differences,
  • optional visualization of results in dashboards or reports.
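
One possible record layout for the annotated dataset artifact is sketched below. The field names and category values are assumptions chosen to match the dimensions named in this proposal (media type, AIGC label, topic, quality, engagement); the final schema would be defined during the thesis.

```python
# Illustrative annotation record for one Telegram post. Field names
# and category values are hypothetical, mirroring the dimensions the
# proposal mentions; they are not a fixed schema.
from dataclasses import dataclass, asdict

@dataclass
class AnnotatedPost:
    channel_id: str
    message_id: int
    media_type: str        # "image" or "video"
    is_ai_generated: bool  # manual or model-based label
    topic: str             # e.g. "politics", "health", "monetization"
    quality: str           # e.g. "deepfake-like" or "low-effort"
    forwards: int
    reactions: int
    comments: int

# Example record with placeholder values.
example = AnnotatedPost("chan_123", 42, "image", True,
                        "politics", "deepfake-like", 150, 320, 45)
print(asdict(example))
```

Storing one such record per post keeps the detection labels and the engagement metrics in a single table, which simplifies the statistical comparison later.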

Impact

The results can provide valuable insights into the interplay between synthetic media and online extremism. The study contributes to academic debates on disinformation and polarization, informs public discussions on the risks of GenAI, and offers practical knowledge for journalists, policymakers, and civil society organizations seeking to understand and mitigate the spread of manipulated content.

Requirements for the Candidate

  • Familiarity with data collection and analysis
  • Basic knowledge of machine learning (e.g., image classification, deepfake detection) is an advantage
  • Interest in disinformation, extremist communication, and digital democracy
  • Ability to design and execute an independent research strategy within the scope of a Master’s thesis
  • Interest in publishing the results is a plus