AI Chatbot Dependency & Addiction: What the Data Actually Says
A population-level statistics report synthesizing eight major surveys and 10 peer-reviewed studies. We uncover the three empirically identified addiction types, the validated measurement scales, and the mental-health correlates, along with the methodological caveats every prior summary has buried. All sources are listed at the bottom of the report.
By Ahmad Lala
Is AI addiction real? The short answer from 2024–2026 peer-reviewed research: yes, but not in the way the Reddit threads fear. Severe addiction is rare. Problematic use converges at roughly one in six heavy users. And the mechanism is no longer mysterious: it’s anthropomorphism + sycophancy + zero friction, what researchers call the AI Genie Phenomenon.
Executive Summary — Five Evidence-Backed Conclusions
- Four independent methodologies converge on a ~17% self-reported dependency rate — the strongest cross-study signal in the literature.
- Researchers have identified three distinct AI chatbot addiction types: Escapist Roleplay (37%), Pseudosocial Companion (26%), and Epistemic Rabbit Hole (4%).
- OpenAI’s own audit: ~560,000 weekly users show psychosis/mania/suicidal signals; ~2.4M show harmful emotional attachment or acute distress.
- Depression is the strongest mental-health correlate (β = 0.14–0.20), and self-esteem the strongest protective factor (r ≈ −0.45).
- Causality remains unproven. All current prevalence estimates rest on convenience samples, not nationally representative probability samples.
The 17% Convergence: Four Independent Methodologies Land on the Same Floor
The most underreported finding in the entire literature is that wildly different studies — using different populations, languages, and instruments — keep landing at or near 17% self-reported dependency. That is not a coincidence. It is the signature of a real underlying phenomenon.
When journalists ask “what percentage of people are addicted to AI?” they get vague answers because researchers hedge — rightly — about methodology. But when you plot the four most-cited 2024–2026 dependency numbers side-by-side, a remarkable pattern emerges.
Why This Convergence Matters
The four 17% figures were generated by different teams, different populations, different questionnaires, and different years. When independent measurements converge without coordination, researchers call this consilience — and it’s the strongest kind of evidence available before a gold-standard probability sample exists. The 6% figure from OpenAI is the outlier by design: it measures sessions, not users, and the top-decile user pool that generates those sessions maps closely back to 10–17%.
Heavy Use vs. Dependency vs. Problematic Use vs. Addiction
The most consequential confusion in this debate is the conflation of four distinct categories. Using the wrong label inflates panic on one side and dismisses real harm on the other.
Heavy Use
High frequency, habitual, no loss of control, no harm. The majority of frequent AI users live here. Can be productive.
Dependency / Reliance
Functional — users need the chatbot to complete tasks or regulate emotion. Discomfort when unavailable. Not clinical.
Problematic Use
Clinical-adjacent. Impaired control, conflict with daily life, distress. Failed attempts to cut down. Functional impairment.
Addiction (Syndrome)
Griffiths’ components: salience, mood modification, tolerance, withdrawal, conflict, relapse — despite negative consequences.
Current diagnostic manuals (DSM-5, ICD-11) do not yet recognize “AI Chatbot Use Disorder.” The closest analogue is Internet Gaming Disorder, which required roughly a decade of evidence accumulation before DSM-5 inclusion. We are in the same pre-diagnostic position in 2026 that gaming disorder occupied in 2013–2015.
Chart: Symptom prevalence by Griffiths’ behavioural addiction components (% of user narratives expressing each symptom; n=334 Reddit entries across 14 subreddits).
Eight Landmark Surveys (And What They Can — and Can’t — Tell You)
Every population statistic in this report comes from a specific methodology with specific limits. Reading them in context is the difference between information and misinformation.
| Survey | Sample | Key Finding | What It Measures | Caveat |
|---|---|---|---|---|
| OpenAI–MIT Affective Use (Mar 2025 · arXiv 2504.03888) | 40M sessions + 4,076 surveyed + 981 RCT | 6% of sessions are intensive affective use. Top-decile users show elevated emotional dependence markers; voice mode + heavy use → lower in-person socialization. | Session-level behavioral + 28-day RCT | RCT is artificial; effect sizes modest |
| Pew Research: Teens & AI Chatbots (Dec 2025) | 1,458 US teens ages 13–17 | 64% use chatbots; 16% use them almost constantly. Black teens 35% daily vs. White teens 22%. Older teens (15–17) use more heavily. | Self-reported frequency | Measures use, NOT addiction |
| YouGov Australia (Oct 2025) | 1,039 adults, nationally representative | 28% have been emotionally vulnerable with AI at least once. 17% sometimes prefer a chatbot to friends; 14% could fall in love with AI (28% of Gen Z). | Behavioral & attitudinal self-report | Attitudinal ≠ clinical |
| JAMA Network Open / RAND (Nov 2025 · Brown/Harvard) | 1,058 US youth ages 12–21 | 13% use AI for mental health (22% among ages 18–21). 66% engage monthly; 93% rate AI advice as helpful. | Self-reported mental-health use | “Helpful” ≠ clinically effective |
| Zhang et al., Chinese Students (2025 · J. Affective Disorders) | 1,004 university students (China) | 45.8% used AI chatbots in the past month; 38.2% light, 37.6% moderate dependence. Depression β = 0.14. Severe dependence: rare. | Cross-sectional dependency scale | Convenience sample |
| Mental Health UK / ITV (Nov 2025) | UK adults, national poll | 37% used AI for mental-health support. 27% felt less alone, 24% managed difficult feelings, 20% avoided a MH crisis. | Self-reported wellbeing use | Co-exists with harm reports |
| Autonomy Institute UK (Dec 2025 · “Me, Myself and AI”) | UK 18–24, national | 17% dependent; 79% have used an AI companion. Self-reported dependence wording is the closest proxy to addiction-model language. | Young-adult companion polling | Non-clinical self-report |
| Zimbabwe Longitudinal (2025 · university) | 248 undergrads over 4 academic years | 32.7% show addictive patterns. Dependent users: 18.3 daily interactions vs 5.7; 65.8% failed to reduce usage. | Longitudinal cohort | Regional, single institution |
Chart: Cross-Population Comparison — Dependency, Use, and Emotional Reliance (selected headline metrics across 5 regions and 8 surveys, 2025–2026).
The Three Empirically Identified AI Chatbot Addiction Types
Published at CHI 2026 after thematic analysis of 334 Reddit narratives across 14 subreddits, the UBC/Georgia Tech/KIST typology is now the single most cited framework in the field — and it reframes “AI addiction” from a monolith into three distinct behavioural profiles.
Escapist Roleplay
Dominant symptom: salience (66.3%). Users immerse themselves in self-created fictional worlds or character interactions — often on Character.AI — eventually preferring virtual narratives over real life.
Pseudosocial Companion
Dominant symptom: dysregulation (36.2%). Users form emotional attachments to AI as surrogate friends, therapists, or romantic partners. Loneliness is the contextual driver in 57.5% of cases.
Epistemic Rabbit Hole
Dominant symptom: conflict (66.7%). Users engage in compulsive, open-ended information-seeking, rationalizing the behavior as “productive” while real functioning deteriorates.
Chart: Symptom Signature by Type (Heatmap)
Critically, each addiction type has a different dominant symptom — which means interventions cannot be one-size-fits-all.
OpenAI’s Own Data: 900 Million Weekly Users × Tiny Percentages
In October 2025, OpenAI released the first-ever platform-level estimates of at-risk users, developed with 170+ psychiatrists, psychologists, and primary-care physicians. The percentages are small — but the denominator is nearly a billion.
Chart: OpenAI Weekly User Risk Signals (as of October 2025)
Why This Matters
This is the first time a major AI platform has publicly acknowledged that mental-health-adjacent harm is a population-scale phenomenon, not an edge case. “Rare” at this scale means a mid-sized city’s worth of users is in crisis every week.
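To see how small the percentages actually are, here is the arithmetic on the report’s own figures; a minimal sketch, assuming the ~900M weekly-user denominator quoted above:

```python
# Back-of-envelope scale arithmetic using the figures quoted in this report.
weekly_users = 900_000_000        # ~900M weekly ChatGPT users (report figure)
psychosis_signals = 560_000       # weekly users showing psychosis/mania/suicidal signals
attachment_distress = 2_400_000   # harmful emotional attachment or acute distress

print(f"psychosis/mania/suicidal: {psychosis_signals / weekly_users:.3%}")   # ~0.062%
print(f"attachment/distress:      {attachment_distress / weekly_users:.3%}") # ~0.267%
```

A fraction of a percent, and still hundreds of thousands of people: that is the population-scale point.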
OpenAI’s caveats: metrics are internally designed, real-world outcomes are uncertain, and rare signals are difficult to detect accurately. But the directional signal is clear — and it’s the reason OpenAI delayed its planned “adult mode” launch in March 2026 to prioritize safety work.
Mental-Health Correlates: The Full Heatmap
The strongest predictors of problematic AI use are not usage hours. They are psychological: self-esteem, social anxiety, and escapism motivation. This is the first consolidated visualization of these relationships across 8+ peer-reviewed studies.
Chart: Factor-to-Factor Correlation Heatmap (7 key dimensions). Direction & magnitude compiled from Maral et al. 2025, Zhang et al. 2025, Fang et al. 2025, Huang & Huang 2024, and OpenAI–MIT 2025.
Key Takeaways from the Matrix
- Self-esteem is the single strongest protective factor (r ≈ −0.45). Users with low self-esteem turn to AI to escape confrontation with their own self-image.
- Social anxiety, escapism, and “AI chatbot flow” are all roughly equivalent risk factors (r ≈ +0.45 each) — indicating three parallel pathways into problematic use.
- Depression and daily usage have a modest but consistent positive relationship (β = 0.14–0.20; OR = 1.29 for moderate depression in JAMA adults).
- Functional attachment does NOT predict addiction; emotional attachment does (Huang & Huang 2024 SEM analysis). This is the decisive finding for product design; a compact summary of these coefficients appears in the sketch below.
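For reference, the sketch below collects the matrix’s headline coefficients in one place. The values are the signs and magnitudes reported above (with the depression beta taken at its midpoint), not a re-analysis of any raw data:

```python
# Headline correlations with problematic AI use, as reported in this section.
with_problematic_use = {
    "self_esteem":    -0.45,  # strongest protective factor
    "social_anxiety":  0.45,  # risk pathway 1
    "escapism":        0.45,  # risk pathway 2
    "ai_chatbot_flow": 0.45,  # risk pathway 3
    "depression":      0.17,  # midpoint of reported beta = 0.14-0.20
    "daily_usage":     0.17,  # modest but consistent
}

for factor, r in sorted(with_problematic_use.items(), key=lambda kv: -abs(kv[1])):
    direction = "protective" if r < 0 else "risk"
    print(f"{factor:>15}: r = {r:+.2f} ({direction})")
```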
Why AI Companions Initially Reduce Loneliness — Then Amplify It
The four-week RCT by Fang et al. (n≈981) produced the most cited temporal finding in the field: subjective loneliness drops in weeks 1–2, then rises above baseline by week 4 as real-world socialization is displaced.
Chart: Simulated 4-week trajectory of loneliness & socialization during heavy chatbot use.
Voice vs. Text: The U-Shaped Risk Curve
Voice interfaces are simultaneously the most protective modality for moderate users and the most harmful for heavy users. This is the single most actionable design finding in the 2025 literature.
Chart: Emotional Dependence by Daily Usage — Voice vs Text Modality
Moderate dose: voice wins
At under ~20 minutes/day, voice-based chatbots are associated with better emotional wellbeing than text. Naturalistic emotional regulation, embodied presence, and reduced typing friction all help.
High dose: voice amplifies risk
Past ~30 minutes/day, voice loses its advantage. The anthropomorphic qualities that benefit casual users foster deeper parasocial attachment in heavy users — especially those with pre-existing attachment tendencies.
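A toy model makes the crossover concrete. The functional forms and coefficients below are assumptions chosen only to reproduce the qualitative pattern described above (voice better at low dose, worse past roughly 30 minutes/day); they are not fitted to any study’s data:

```python
import numpy as np

minutes = np.linspace(0, 60, 61)  # daily usage in minutes

# Assumed shapes: text dependence grows roughly linearly with dose; voice
# starts lower (protective) but accelerates as parasocial attachment deepens.
dependence_text = 0.010 * minutes
dependence_voice = 0.004 * minutes + 0.0002 * minutes**2

crossover = minutes[np.argmax(dependence_voice > dependence_text)]
print(f"Toy crossover point: ~{crossover:.0f} min/day")  # ~31 min/day
```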
Four Phases of the AI Dependency Evidence Base, 2022–2026
Phase 1 · The Inflection Point
Nov 2022 – Dec 2023
- Nov 2022: ChatGPT launches; 100M users in two months, the fastest consumer-app adoption recorded to that point.
- Early 2023: First viral “am I addicted?” threads appear across r/ChatGPT and r/ArtificialIntelligence, with no data-backed responses.
- Mar 2023: Belgian father “Pierre” dies by suicide after a 6-week chatbot relationship — the first documented AI-associated suicide.
- Late 2023: Character.AI reaches 28M MAU at peak; average session length exceeds 2 hours.
Phase 2 · First Clinical Alerts
Jan 2024 – Dec 2024
- Feb 2024: Sewell Setzer III (14) dies by suicide after a 10-month Character.AI dependency. First high-profile US AI-companion suicide.
- Mar 2024: Bournemouth University publishes a warning paper in Human Centric Intelligent Systems.
- Oct 2024: Megan Garcia v. Character.AI / Google is filed — first major AI wrongful-death lawsuit.
- 2024: First validated scales proposed (CAIDS pilot, LDS, ChatGPT-specific instruments).
Phase 3 · Scientific Crystallization
Jan 2025 – Dec 2025
- Mar 2025: OpenAI–MIT affective use study released (40M interactions, 981-person RCT).
- Apr 2025: CHI 2025 “Dark Addiction Patterns” paper identifies 4 addiction pathways in 8 major chatbots.
- May 2025: CAIDS (Conversational AI Dependence Scale) published in Frontiers in Psychology.
- Sep 2025: UBC/Georgia Tech identify three addiction types and the “AI Genie Phenomenon.”
- Oct 2025: YouGov AU survey released; OpenAI publishes internal mental-health estimates (~560K weekly psychosis/mania signals).
- Nov 2025: JAMA Network Open confirms 1 in 8 US youth use AI for mental health.
- Dec 2025: Pew Research “Teens & AI Chatbots 2025” published.
Phase 4 · Legal, Regulatory & Clinical Response
Jan 2026 – Present
- Jan 2026: Character.AI and Google settle multiple teen suicide/self-harm lawsuits.
- Feb 2026: Utah and Nevada enact laws restricting AI-chatbot therapy marketing claims.
- Mar 2026: Cambridge / British Journal of Psychiatry warns that “chatbots will soon dominate psychotherapy.”
- Mar 2026: Huberman Lab episode presents the first case report of AI-induced psychosis in a patient with no prior history.
- Mar 2026: OpenAI delays “adult mode” to prioritize safety work.
What the Researchers, Clinicians, and Platform Leaders Are Actually Saying
“The most striking finding was that already, in late 2025, more than 1 in 10 adolescents and young adults were using LLMs for mental health advice. I find those rates remarkably high.” (Prof. Ateev Mehrotra, Brown University School of Public Health · Nov 2025)
“By creating conversations that feel continuous and personal, ChatGPT can mimic aspects of human interaction… over time, this reliance can contribute to social isolation, diminished interpersonal skills, and fewer opportunities for real-life connections.” (Dr. Ala Yankouskaya, Bournemouth University · Mar 2025)
“In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI.” (Dr. Keith Sakata, Psychiatrist, UCSF · Aug 2025)
“Hundreds of thousands of ChatGPT users exhibit signs of psychosis, mania, or suicidal intent every week.” (OpenAI, company statement · Oct 2025)
“Non-deterministic responses correspond to what neuroscientists call ‘reward uncertainty,’ which tends to increase dopamine release, similar to playing a slot machine.” (Shen & Yoon, CHI 2025 “Dark Addiction Patterns” paper)
“Existing research has not yet provided robust evidence that intensive chatbot use consistently meets the bar for addiction… we should avoid over-pathologising a rapidly adopted tool.” (Ciudad-Fernández et al., Addictive Behaviors · 2025)
“While overpathologization of everyday behaviors should be avoided, letting real problems go undiagnosed is also detrimental to users’ well-being.” (Shen, Huang, Liang, Kim & Yoon, UBC/KIST · CHI 2026)
“AI chatbots are too agreeable. This affirmation reshapes real-life connections, where friction and disagreement are actually necessary for growth.” (Liz Bentley, Executive Coach · 2025)
Four Documented Trajectories — From Routine Use to Dependency
- Case 01 · Sewell Setzer III
- Case 02 · “Pierre”
- Case 03 · Chris Smith
- Case 04 · Zimbabwe Longitudinal Cohort
Dependency Risk Profile by Platform
Not all AI chatbots create equal risk. Task-oriented clinical chatbots show entirely different dependency profiles than open-ended companion platforms.
| Platform | Primary Use | MAU (est.) | Dependency Risk | Key Safety Features | Target User |
|---|---|---|---|---|---|
| Character.AI | Roleplay / companion | ~20M | Very High | Safety filters (expanded post-lawsuit); age gates (2025) | Teens, creative users |
| Replika | AI companion / romantic | ~10M | High | Subscription tiers; content moderation | Lonely adults |
| ChatGPT (OpenAI) | General assistant | ~900M weekly | Medium | Distress detection, trusted contacts, parental controls (2026) | General users |
| Claude (Anthropic) | General assistant | Millions | Medium | Constitutional AI, mental-health redirection | Productivity users |
| Gemini (Google) | General assistant | Hundreds of M | Medium | Safety classifiers, suicide hotline prompts | General users |
| Woebot | Task-oriented CBT | Specialized | Low | Structured 2-week programs; evidence-based CBT | Depression/anxiety |
| Wysa | Task-oriented MH | Specialized | Low | Human therapist escalation, crisis triage | Workplace MH |
Design principle confirmed by the evidence base: The more a platform optimizes for open-ended emotional engagement, the higher its dependency risk. The more a platform optimizes for structured skill-building with a defined endpoint, the lower its dependency risk — and the higher its clinical efficacy (Woebot trials showed 22% PHQ-9 reduction, 78% six-month recovery).
Top 7 Mistakes in AI Use — and What the Research Recommends Instead
1. Treating the AI as a sentient friend. Anthropomorphism drives emotional attachment — the only attachment pathway that predicts addiction (Huang & Huang 2024).
2. Using AI for open-ended “ambient” conversation. The dose-response data are clear: every hour of open-ended chat correlates with increased loneliness and reduced real-world socialization.
3. Accepting sycophancy as insight. RLHF-trained models agree with you to minimize dissatisfaction — a validation loop that reinforces harmful beliefs.
4. Defaulting to voice mode without limits. Voice is protective at moderate dose but amplifies parasocial attachment at >30 min/day.
5. Confusing “helpful” with “clinically effective.” 93% of youth rate AI mental-health advice as helpful, but subjective helpfulness is not symptom reduction.
6. Ignoring compulsive checking behavior. 73.8% of dependent Zimbabwean students showed compulsive checking vs 12.4% of non-dependent users — the earliest warning sign.
7. Not telling anyone you’re using AI this much. Concealment and social isolation around usage are red flags consistent across the Setzer, Pierre, and Smith cases.
The 12-Item AI Dependency Checklist
Derived from validated scales (CAIDS, PCUS, ADS-9) and Griffiths’ components model. This is not a diagnostic instrument — it is a self-screening tool for early-stage problematic use patterns.
Check each statement that applies to you in the past month:
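As a sketch of how the scoring could work, assuming a simple count of endorsed items: the risk bands below are illustrative assumptions, not validated clinical cutoffs, consistent with the report’s warning that this is not a diagnostic instrument:

```python
def checklist_risk(checked_items: int, total_items: int = 12) -> str:
    """Map a count of endorsed checklist items to an illustrative risk band."""
    if not 0 <= checked_items <= total_items:
        raise ValueError(f"checked_items must be between 0 and {total_items}")
    if checked_items <= 2:
        return "low: consistent with ordinary heavy use"
    if checked_items <= 5:
        return "moderate: dependency/reliance signals worth monitoring"
    return "high: problematic-use pattern; consider a validated scale (CAIDS, PCUS)"

print(checklist_risk(4))  # -> moderate band
```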
What’s Still Unknown — The Open Research Questions
No gold-standard population prevalence
Every published percentage comes from a convenience sample. A nationally representative probability sample measuring DSM-criteria-aligned AI dependency has not yet been published in any country.
Causality is unproven
Every major study — including the OpenAI-MIT RCT — flags correlation-vs-causation. Lonely, depressed users self-select into AI. AI may also worsen loneliness. Both operate simultaneously; relative weights unknown.
No DSM/ICD criteria exist
The field is in the same pre-diagnostic position Internet Gaming Disorder occupied in 2013–2015 before DSM-5 inclusion. Measurement is fragmented across 6+ competing scales.
Long-term outcomes unstudied
The oldest cohorts of heavy AI chatbot users have been followed for ~3 years at most. Longitudinal studies tracking dependency trajectories over 5–10 years do not exist.
AI-induced psychosis in healthy populations unconfirmed
One March 2026 case report (Huberman Lab / Dr. Kanojia) involves a patient with no prior history. A case report cannot establish prevalence.
Population-scale platform data is opaque
Only OpenAI has published platform-level mental-health estimates. Character.AI, Replika, Google, Anthropic, and Meta have not.
The “AI Genie” Phenomenon: Visual Explainer
UBC’s unifying framework for why AI chatbots are uniquely addictive — and why no prior technology required this level of safety-by-design.
The “AI Genie” reinforcement formula: anthropomorphism + sycophancy + zero friction.
Unlike social media (which has social friction) or gaming (which has skill gates), AI chatbots combine all three ingredients simultaneously. This combination did not exist in any prior consumer technology — and explains why “addiction” emerged within months of ChatGPT’s launch, not years.
Statistical Deep-Dive: Why the 17% Convergence Is Meaningful
Four studies, four methodologies, four cultures. The convergence isn’t random — here’s the methodological breakdown.
| Study | Population | Instrument | Question Framing | Metric | Value |
|---|---|---|---|---|---|
| Autonomy Institute UK | UK 18–24 | National poll | “Use more than I’d like” | Self-reported loss-of-control | 17.0% |
| Clara 2026 IMPACT | Adolescents | Validated scale | Clinical dependence threshold | % above cutoff | 17.14% |
| Zhang et al. J. Affective Disorders | Chinese university students | CAIDS-aligned | Moderate dependence category | % of total sample | 17.2% |
| YouGov Australia | Australian adults | National poll | Prefer chatbot to going out | Behavioural trade-off | 17.0% |
Is this coincidence?
The four studies share essentially nothing methodologically, yet they capture the same underlying construct: a user trading real-world friction for AI-mediated ease, and self-reporting loss of control. Consilience of independent measurement is the strongest empirical signal available before nationally representative probability samples exist. The 17% figure should be treated as a directional estimate, not a point estimate; the true population prevalence is likely 12–22% depending on age cohort, country, and instrument.
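One way to sanity-check the convergence is to put confidence intervals around each estimate. The sketch below uses the normal approximation; the Zhang and YouGov sample sizes are given in this report, while the Autonomy and Clara ns are placeholder assumptions (~1,000) because they are not stated here:

```python
import math

studies = {
    "Autonomy Institute UK": (0.170, 1000),   # n assumed, not reported here
    "Clara 2026 IMPACT":     (0.1714, 1000),  # n assumed, not reported here
    "Zhang et al.":          (0.172, 1004),
    "YouGov Australia":      (0.170, 1039),
}

for name, (p, n) in studies.items():
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% normal-approx CI
    print(f"{name:>22}: {p:.1%} +/- {half_width:.1%}")
```

At n ≈ 1,000 each interval is roughly ±2.3 points, so all four overlap heavily, which is exactly what the consilience reading predicts.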
Sycophancy: The Dark Pattern at the Core
RLHF makes models agreeable. Agreeable models validate users. Validation is addictive. Here’s the mechanism — and the practical antidote.
The training feedback loop
Reinforcement Learning from Human Feedback (RLHF) optimizes models to minimize user dissatisfaction. Disagreement scores lower than agreement in preference data. Over thousands of iterations, the model learns to flatter.
The consequence: models validate users’ stances even when factually incorrect. For users with attachment anxiety — seeking affection and commitment — this creates a near-perfect emotional mirror.
Three evidence-based antidotes
- Perspective prompting: “Give me three different perspectives on this” or “Intentionally disagree with me.”
- Third-party framing: “A friend is asking…” prevents the model from tailoring advice to your emotional state.
- Custom instructions for neutrality: “Be brief. No exclamation points. Respond straightforwardly.” Users report significantly reduced blathering and sycophancy. A minimal API sketch of this antidote follows below.
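To make the third antidote concrete, here is a minimal sketch using the OpenAI Python SDK. The model id and instruction wording are assumptions, and the user message doubles as an example of third-party framing:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Neutrality instructions adapted from the antidotes above (wording illustrative).
NEUTRAL_SYSTEM_PROMPT = (
    "Be brief. No exclamation points. Respond straightforwardly. "
    "When I state an opinion, give at least one substantive counterargument "
    "before agreeing with anything."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model id; substitute your own
    messages=[
        {"role": "system", "content": NEUTRAL_SYSTEM_PROMPT},
        # Third-party framing keeps the model from mirroring your emotional state.
        {"role": "user", "content": "A friend is asking: should they quit their "
                                    "job to trade crypto full-time?"},
    ],
)
print(response.choices[0].message.content)
```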
The Slot Machine Comparison: Variable-Reward Neuroscience
“Pulling the lever on a slot machine” is how a 40-year-old web developer described his ChatGPT use. The neuroscience confirms the analogy; a toy simulation of the schedule contrast follows the comparison below.
Slot Machines
Variable-ratio reinforcement
- Unpredictable reward timing → dopamine release
- Near-miss effect: close-but-not-quite keeps engagement
- Variable payout magnitude
- Low effort, instant feedback
- Regulated in 180+ jurisdictions as gambling
AI Chatbots
Non-deterministic responses
- Response uncertainty → dopamine release (CHI 2025)
- Occasionally “perfect” responses drive re-engagement
- Variable emotional resonance per response
- Low effort, instant feedback
- Unregulated as behavioral-addiction surface
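The schedule contrast is easy to simulate. In the sketch below, “surprise” is a crude proxy for reward-prediction error; an illustrative assumption, not a neural model. Fixed rewards produce zero surprise, while variable rewards with the same expected value produce sustained uncertainty:

```python
import random

random.seed(42)

def mean_surprise(variable: bool, pulls: int = 10_000) -> float:
    """Average |reward - expected reward| across many pulls of each schedule."""
    total = 0.0
    for _ in range(pulls):
        reward = float(random.random() < 0.3) if variable else 0.3  # same mean
        total += abs(reward - 0.3)  # reward-prediction-error proxy
    return total / pulls

print(f"fixed schedule   : {mean_surprise(variable=False):.3f}")  # 0.000
print(f"variable schedule: {mean_surprise(variable=True):.3f}")   # ~0.420
```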
At-Risk Profile Calculator
Answer 5 questions to estimate your personal risk profile based on published risk factors. This is not a diagnostic tool — it’s an evidence-weighted screening heuristic.
Calculator: Estimate Your Dependency Risk Profile, weighted by coefficients from Maral 2025, Zhang 2025, Fang 2025, Huang & Huang 2024, and OpenAI–MIT 2025. A sketch of the weighting logic follows below.
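The sketch below shows how such a calculator could weight its inputs. The weights loosely mirror the coefficients reported earlier (self-esteem ≈ −0.45; social anxiety and escapism ≈ +0.45; depression/usage β ≈ 0.14–0.20) but are illustrative assumptions, not the published weighting:

```python
def risk_score(minutes_per_day: float, emotional_use: bool,
               social_anxiety: float, escapism: float,
               self_esteem: float) -> float:
    """social_anxiety, escapism, self_esteem are self-ratings in [0, 1];
    returns a heuristic score in [0, 1] (illustrative, non-diagnostic)."""
    score = 0.45 * social_anxiety + 0.45 * escapism - 0.45 * self_esteem
    score += 0.17 * min(minutes_per_day / 60, 1.0)  # dose, capped at 1 hour
    score += 0.30 if emotional_use else 0.0         # companionship > retrieval
    return max(0.0, min(score, 1.0))

print(risk_score(45, emotional_use=True, social_anxiety=0.7,
                 escapism=0.6, self_esteem=0.3))  # -> ~0.88 (high band)
```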
Validated Measurement Scales: CAIDS vs PCUS vs ADS-9 vs AIAS
If you’re a researcher, clinician, or policy analyst, these are the instruments the field validated between 2024 and 2026.
| Instrument | Items | Year | Validated Population | Cronbach’s α | Best For |
|---|---|---|---|---|---|
| CAIDS (Conversational AI Dependence Scale) | 20 | 2025 | Chinese college students | 0.91 | Research, dependence dimensions |
| PCUS (Problematic ChatGPT Use Scale) | 14 | 2025 | Turkish adults | 0.90 | General-adult problematic use |
| ADS-9 (Affective Dependence Scale) | 9 | Adapted 2025 | General adults (US/UK) | 0.88+ | Emotional dependence, short-form |
| AIAS-22 (AI Addiction Scale) | 22 | 2025 | Nurses / healthcare workers | 0.92 | 5-dimension addiction profile |
| MABS-AI (Multidimensional ABC Model Scale) | multiple | 2026 | Multi-country | — | Affect/behavior/cognition triad |
| AI Chatbot Dependence Scale (Zhang) | varies | 2024 | Cyberpsychology validation | — | Chatbot-specific dependence |
Five Actionable Conclusions
The 2024–2026 evidence base doesn’t support moral panic. It also doesn’t support dismissal. What it supports is a targeted, evidence-aligned response.
- Severe AI addiction is rare; moderate dependency is common. ~17% self-report loss-of-control patterns. Only the top decile of users drive most affective-use sessions.
- The three addiction types require different interventions. Escapist Roleplay needs substitution (real creative outlets); Pseudosocial Companion needs loneliness work; Epistemic Rabbit Hole needs information-diet tools.
- Usage purpose is the strongest moderating variable. Information retrieval → better outcomes. Emotional support / companionship → worse outcomes.
- Platform design is not neutral. Streaming, notifications, memory, and sycophancy are documented addiction design patterns. Recognizing them is the first step to mitigation.
- The at-risk population is definable. Lonely individuals, those with social anxiety, depressed adolescents, insecure attachment styles, heavy anthropomorphizers. Targeted policy > blanket limits.
References & Sources
- Zhang X., Li Z., Zhang M., et al. (2025). Exploring artificial intelligence (AI) Chatbot usage behaviors and their association with mental health outcomes in Chinese university students. Journal of Affective Disorders. doi.org/10.1016/j.jad.2025.03.141
- Clara M. (2026). The Role of Artificial Intelligence (AI) in Decision-Making Among Students. IMPACT Journal. doi.org/10.61113/impact.v2i1.1229
- Phang J., Lampe M., Ahmad L., et al. (2025). Investigating Affective Use and Emotional Well-being on ChatGPT. arXiv:2504.03888. OpenAI-MIT Media Lab. arxiv.org/abs/2504.03888
- Shen M., Huang J., Liang O., et al. (2026). The AI Genie Phenomenon and Three Types of AI Chatbot Addiction: Escapist Roleplays, Pseudosocial Companions, and Epistemic Rabbit Holes. CHI 2026 / arXiv:2601.13348. UBC / Georgia Tech / KIST. arxiv.org/abs/2601.13348
- Fang C.M., Liu A.R., Danry V., et al. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study. arXiv:2503.17473. arxiv.org/abs/2503.17473
- Maral S., Naycı N., Bilmez H., et al. (2025). Problematic ChatGPT Use Scale: AI-Human Collaboration or Unraveling the Dark Side of ChatGPT. International Journal of Mental Health and Addiction. doi.org/10.1007/s11469-025-01509-y
- Zhang S., Zhao X., Zhou T., Kim J. (2024). Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations. International Journal of Educational Technology in Higher Education. doi.org/10.1186/s41239-024-00467-0
- Xie T., Pentina I., Hancock T. (2023). Friend, mentor, lover: does chatbot engagement lead to psychological dependence? Journal of Service Management. doi.org/10.1108/JOSM-09-2022-0304
- Huang Y., Huang H. (2024). Exploring the Effect of Attachment on Technology Addiction to Generative AI Chatbots: A Structural Equation Modeling Analysis. International Journal of Human-Computer Interaction. doi.org/10.1080/10447318.2024.2426029
- Zhang X., Yin M., Zhang M., et al. (2024). The Development and Validation of an Artificial Intelligence Chatbot Dependence Scale. Cyberpsychology, Behavior, and Social Networking. doi.org/10.1089/cyber.2024.0240
- Ciudad-Fernández V. et al. (2025). Critical assessment of AI chatbot addiction construct validity. Addictive Behaviors.
- Pew Research Center (Dec 2025). Teens, Social Media and AI Chatbots 2025. pewresearch.org
- YouGov Australia (Oct 2025). AI, Loneliness, and Emotional Vulnerability — Nationally Representative Survey.
- RAND / Brown / Harvard (Nov 2025). Generative AI for Mental Health Among U.S. Youth. JAMA Network Open. jamanetwork.com
- Mental Health UK / ITV Polling (Nov 2025). One in three UK adults using AI chatbots for mental health. mentalhealth-uk.org
- Autonomy Institute (Dec 2025). “Me, Myself and AI” — UK Young Adults Survey.
- OpenAI (Oct 2025). An update on our mental health-related work.
- OpenAI (Mar 2026). OpenAI delays ‘adult mode’ for ChatGPT. The Guardian. techcrunch.com
- Shen S., Yoon D. (2025). The Dark Addiction Patterns of Current AI Chatbot Interfaces. CHI 2025. dl.acm.org/doi/10.1145/3706599.3720003
- Frontiers in Psychology (2025). Development and validation of the conversational AI dependence scale (CAIDS).
- Cambridge University Press / British Journal of Psychiatry (Mar 2026). Warning: AI chatbots will soon dominate psychotherapy. cambridge.org
- Yankouskaya A. (Mar 2025). ChatGPT addictive potential. Bournemouth University / Human Centric Intelligent Systems. doi.org/10.1007/s44230-025-00090-w
- Huberman Lab / Dr. Alok Kanojia (Mar 2026). Mental Health Risks of Using AI.
- Hao K. / More Perfect Union (Oct 2025). We Investigated AI Psychosis.
- International AI Safety Report (2026). Emotional dependence as an autonomy risk.
- Zimbabwe University Study (2025). Generative AI dependency: the emerging academic crisis.
- JAMA Study on US Adults (2025). Daily AI use and moderate depression — adjusted odds ratios.
