
AI applications in supporting online vulnerable communities: leveraging LLMs to responsibly support People Who Use Drugs (PWUD)

  • Kaixuan Wang

Student thesis: Doctoral Thesis (PhD)

Abstract

Vulnerable communities face health inequalities that constrain their ability to manage risks. For People Who Use Drugs (PWUD), stigma and criminalisation drive them from formal care toward online communities for life-saving support. However, these online spaces suffer from inconsistent coverage where delays can be fatal. Large Language Models (LLMs) offer responsive information delivery and show promise in healthcare, yet their effectiveness and safety when applied to PWUD's vulnerabilities remain unknown. This thesis conducts a three-phase investigation into design requirements and safety implications of employing LLMs for supporting PWUD.

The research begins with an exploratory workshop involving harm reduction practitioners, researchers, and a moderator from online PWUD communities to identify opportunities for LLM-based interventions. To ground these opportunities in the lived realities of online peer support, the second phase involves interviews with moderators examining the socio-technical challenges faced by online PWUD communities. Informed by these studies and underpinned by responsible computing principles, the final phase designs and develops a domain-specific LLM-based system. The prototype provides timely harm reduction information to underserved PWUD and is evaluated for capabilities and safety risks through technical benchmarks and expert reviews.

The findings reveal that LLMs can help address PWUD's need for responsive, personalised, and non-judgemental harm reduction information. Expert moderators reported that the prototype offered more effective support than existing automated tools on Reddit, helping to reduce severe health risks. However, the evaluation highlights specific risks in current LLMs that necessitate clear operational boundaries and structured human oversight. This thesis provides an empirical foundation for responsibly designing AI-driven tools that facilitate online harm reduction support, informing stakeholders of LLM capabilities and limitations in this high-stakes context. Two paths emerge for future research: developing human-AI collaboration frameworks grounded in PWUD perspectives and establishing safety protocols to govern deployment.
Date of Award: 3 Jul 2026
Original language: English
Awarding Institution
  • University of St Andrews
Supervisor: Loraine Clarke

Keywords

  • Human-computer interaction
  • Harm reduction
  • People Who Use Drugs (PWUD)
  • Large Language Models (LLM)
  • Responsible AI
  • Ethical AI
  • Content moderation
  • LLM benchmark

Access Status

  • Full text embargoed until 04 Apr 2027
