🔮 mī lyte: Alignment, feasibility, and optimization of an LLM emotion regulation intervention to deliver just-in-time mindfulness skills and PrEP education
Illustrative example: an evidence-based mī lyte recommendation delivered via the web app UI.
A prototype developed by Dr. Simone Skeen for mHEAL: the Mindfulness for Health Equity Lab at Brown University, grounded in Mindfulness-Based Queer Resilience © Dr. Shufang Sun. To our knowledge, mī lyte is the first to combine a mindfulness-based stress resilience (MBSR; System 1) intervention with PrEP education (System 2), leveraging the respective strengths of Transformer-based LLMs and traditional (pre-scripted) text message banks in an Information-Motivation-Behavioral skills–grounded manner.1
Conceptual model: mindfulness decoupling stress reactivity from syndemic HIV risk while complementing PrEP education.
Tapping a retrieval-augmented generation (RAG) pipeline, the stressors users describe to mī lyte elicit MBSR skills recommendations to practice in real time. "Retrieval-augmented generation" refers to a system in which specialized knowledge is "retrieved" from a knowledge base – in mī lyte, Dr. Sun's internet mindfulness-based intervention manual (NCCIH #K23AT011173) – to "augment" a response "generated" by an LLM.2 This knowledge base can be personalized by the end user to incorporate individual resilience assets: inspiring song lyrics, aspects of faith and spirituality. Prioritizing safety, mī lyte System 1 will firmly but politely refuse queries that fall outside the scope of its MBSR knowledge base.
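The System 1 flow described above – retrieve matching skills from the knowledge base, augment the LLM's prompt with them, and refuse queries with no match – can be sketched as follows. This is a minimal illustration only: the knowledge-base entries, the overlap-based scoring, and the `generate_llm_reply` stub are hypothetical placeholders, not the actual intervention manual, retriever, or production LLM call.

```python
# Illustrative sketch of a RAG pipeline with out-of-scope refusal.
# All entries and function names here are placeholders for illustration.

STOPWORDS = {"a", "an", "and", "at", "is", "of", "on", "the", "to", "when", "without"}

# Stand-in for the MBSR knowledge base (e.g., excerpts of an intervention manual).
KNOWLEDGE_BASE = [
    "Body scan: slowly move attention through the body, noticing sensations without judgment.",
    "Mindful breathing: anchor attention on the breath when stress arises.",
    "Loving-kindness: silently offer phrases of goodwill toward yourself and others.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase, strip simple punctuation, and drop stopwords."""
    return {w.strip(".,:;").lower() for w in text.split()} - STOPWORDS

def retrieve(query: str, threshold: int = 1) -> list[str]:
    """Return knowledge-base entries whose word overlap with the query meets the threshold."""
    query_words = tokenize(query)
    return [e for e in KNOWLEDGE_BASE if len(query_words & tokenize(e)) >= threshold]

def generate_llm_reply(prompt: str) -> str:
    """Stand-in for the real LLM call; a deployed system would query the model here."""
    return "Suggested practice:\n" + prompt

def respond(query: str) -> str:
    """Augment the LLM prompt with retrieved skills, or refuse off-topic queries."""
    matches = retrieve(query)
    if not matches:
        # Safety behavior: firmly but politely refuse queries outside the knowledge base.
        return "I can only help with mindfulness skills practice. Let's refocus there."
    prompt = "Recommend one of these MBSR skills for the user's stressor:\n" + "\n".join(matches)
    return generate_llm_reply(prompt)
```

A production retriever would use embedding similarity rather than word overlap, but the control flow – ground the generation in retrieved manual content, and refuse when nothing relevant is retrieved – is the same.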
PrEP education, encoded in a pre-scripted text message bank, will be navigable by users via a deterministic dialog tree. By circumventing the probabilistic process through which LLMs generate responses, System 2 eliminates the possibility of "factuality hallucinations," i.e., inaccurate medical information, addressing an urgent challenge in healthcare AI.3
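The deterministic guarantee of System 2 can be made concrete with a small sketch: every message the user sees is drawn verbatim from the pre-scripted bank, and navigation is a fixed lookup, so no text is ever generated probabilistically. The tree structure and message texts below are illustrative placeholders, not the actual PrEP message bank.

```python
# Illustrative sketch of a deterministic dialog tree over a pre-scripted
# message bank. Node IDs, menu options, and texts are placeholders.

DIALOG_TREE = {
    "root": {
        "message": "What would you like to learn about PrEP?",
        "options": {"1": "basics", "2": "side_effects"},
    },
    "basics": {
        "message": "PrEP is a medication that greatly reduces the risk of acquiring HIV.",
        "options": {"1": "root"},
    },
    "side_effects": {
        "message": "Some people experience mild, short-term side effects; a provider can advise.",
        "options": {"1": "root"},
    },
}

def step(node_id: str, choice: str) -> str:
    """Follow the user's menu choice; unrecognized input deterministically re-prompts."""
    return DIALOG_TREE[node_id]["options"].get(choice, node_id)

def message_for(node_id: str) -> str:
    """Every displayed message is retrieved verbatim from the bank, never generated."""
    return DIALOG_TREE[node_id]["message"]
```

Because `message_for` only indexes into a fixed table, identical inputs always yield identical, human-authored outputs, which is what rules out factuality hallucinations in this arm of the intervention.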
Footnotes
1. Dubov A, Altice FL, Fraenkel L. An Information–Motivation–Behavioral skills model of PrEP uptake. AIDS Behav. 2018;22(11):3603-3616. doi:10.1007/s10461-018-2095-4
2. Lewis P, Perez E, Piktus A, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv. Preprint posted online April 12, 2021. doi:10.48550/arXiv.2005.11401
3. Obradovich N, Khalsa SS, Khan WU, et al. Opportunities and risks of large language models in psychiatry. NPP—Digit Psychiatry Neurosci. 2024;2(1). doi:10.1038/s44277-024-00010-z