Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, October 31, 2025

Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study

Shen, J., DiPaola, D., et al. (2024).
JMIR Mental Health, 11, e62679.

Abstract

Background: Empathy is a driving force in our connection to others, our mental well-being, and resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and how transparency plays a role in user emotions.

Objective: We aim to understand how empathy shifts across human-written versus AI-written stories, and how these findings inform ethical implications and human-centered design of using mental health chatbots as objects of empathy.

Methods: We conducted crowd-sourced studies with 985 participants who each wrote a personal story and then rated empathy toward 2 retrieved stories, where one was written by a language model, and another was written by a human. Our studies varied disclosing whether a story was written by a human or an AI system to see how transparent author information affects empathy toward the narrator. We conducted mixed methods analyses: through statistical tests, we compared users' self-reported state empathy toward the stories across different conditions. In addition, we qualitatively coded open-ended feedback about reactions to the stories to understand how and why transparency affects empathy toward human versus AI storytellers.

Results: We found that participants significantly empathized with human-written over AI-written stories in almost all conditions, regardless of whether they are aware (t196=7.07, P<.001, Cohen d=0.60) or not aware (t298=3.46, P<.001, Cohen d=0.24) that an AI system wrote the story. We also found that participants reported greater willingness to empathize with AI-written stories when there was transparency about the story author (t494=-5.49, P<.001, Cohen d=0.36).

Conclusions: Our work sheds light on how empathy toward AI or human narrators is tied to the way the text is presented, thus informing ethical considerations of empathetic artificial social support or mental health chatbots.


Here are some thoughts:

People consistently feel more empathy for human-written personal stories than AI-generated ones, especially when they know the author is an AI. However, transparency about AI authorship increases users’ willingness to empathize—suggesting that while authenticity drives emotional resonance, honesty fosters trust in mental health and social support chatbot design.
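
For readers who want to see the Results statistics concretely, here is a minimal sketch using made-up ratings rather than the study's data. It shows the kind of comparison reported above: an independent-samples t test with Cohen's d as the effect size (the study's actual design and analysis may differ in detail).

```python
# Minimal sketch (hypothetical data): comparing self-reported empathy toward
# human-written vs. AI-written stories with an independent-samples t-test and
# Cohen's d, the statistics reported in the study's Results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-7 empathy ratings; the real study used participants' state-empathy scores.
empathy_human = rng.normal(5.2, 1.1, 200).clip(1, 7)
empathy_ai = rng.normal(4.6, 1.1, 200).clip(1, 7)

t, p = stats.ttest_ind(empathy_human, empathy_ai)

# Cohen's d with a pooled standard deviation
n1, n2 = len(empathy_human), len(empathy_ai)
pooled_sd = np.sqrt(((n1 - 1) * empathy_human.std(ddof=1) ** 2 +
                     (n2 - 1) * empathy_ai.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (empathy_human.mean() - empathy_ai.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```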

Thursday, October 30, 2025

Regulating AI in Mental Health: Ethics of Care Perspective

Tavory T. (2024).
JMIR Mental Health, 11, e58493.

Abstract

This article contends that the responsible artificial intelligence (AI) approach—which is the dominant ethics approach ruling most regulatory and ethical guidance—falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of accountability and responsibility of companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI’s impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article delves into the emergence of the new “therapeutic” area facilitated by AI-based bots, which operate without a therapist. The article highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.

Here are some thoughts:

This article argues that current AI regulation in mental health—largely guided by the “responsible AI” framework—falls short because it prioritizes principles like autonomy, fairness, and transparency while neglecting the profound impact of AI on human relationships, emotions, and care. Drawing on the ethics of care—a feminist-informed moral perspective that emphasizes relationality, vulnerability, context, and responsibility—the author contends that developers of AI-based mental health tools (e.g., therapeutic chatbots) must be held to standards akin to those of human clinicians. The piece highlights risks such as emotional manipulation, abrupt termination of AI “support,” commercial exploitation of sensitive data, and the illusion of empathy, all of which can harm vulnerable users. It calls for a dual regulatory approach: retaining responsible AI safeguards while integrating ethics-of-care principles—such as attentiveness to user needs, competence in care delivery, responsiveness to feedback, and collaborative, inclusive design. The article proposes practical measures, including clinical validation, ethical review committees, heightened confidentiality standards, and built-in pathways to human support, urging psychologists and regulators to ensure AI enhances, rather than erodes, the relational core of mental health care.

Wednesday, October 29, 2025

Ethics in the world of automated algorithmic decision-making – A Posthumanist perspective

Cecez-Kecmanovic, D. (2025).
Information and Organization, 35(3), 100587.

Abstract

The grand humanist project of technological advancements has culminated in fascinating intelligent technologies and AI-based automated decision-making systems (ADMS) that replace human decision-makers in complex social processes. Widespread use of ADMS, underpinned by humanist values and ethics, it is claimed, not only contributes to more effective and efficient, but also to more objective, non-biased, fair, responsible, and ethical decision-making. Growing literature however shows paradoxical outcomes: ADMS use often discriminates against certain individuals and groups and produces detrimental and harmful social consequences. What is at stake is the reconstruction of reality in the image of ADMS, that threatens our existence and sociality. This presents a compelling motivation for this article which examines a) on what bases are ADMS claimed to be ethical, b) how do ADMS, designed and implemented with the explicit aim to act ethically, produce individually and socially harmful consequences, and c) can ADMS, or more broadly, automated algorithmic decision-making be ethical. This article contributes a critique of dominant humanist ethical theories underpinning the development and use of ADMS and demonstrates why such ethical theories are inadequate in understanding and responding to ADMS' harmful consequences and emerging ethical demands. To respond to such ethical demands, the article contributes a posthumanist relational ethics (that extends Barad's agential realist ethics with Zigon's relational ethics) that enables novel understanding of how ADMS performs harmful effects and why ethical demands of subjects of decision-making cannot be met. The article also explains why ADMS are not and cannot be ethical and why the very concept of automated decision-making in complex social processes is flawed and dangerous, threatening our sociality and humanity.

Here are some thoughts:

This article offers a critical posthumanist analysis of automated algorithmic decision-making systems (ADMS) and their ethical implications, with direct relevance for psychologists concerned with fairness, human dignity, and social justice. The author argues that despite claims of objectivity, neutrality, and ethical superiority, ADMS frequently reproduce and amplify societal biases—leading to discriminatory, harmful outcomes in domains like hiring, healthcare, criminal justice, and welfare. These harms stem not merely from flawed data or design, but from the foundational humanist assumptions underpinning both ADMS and conventional ethical frameworks (e.g., deontological and consequentialist ethics), which treat decision-making as a detached, rational process divorced from embodied, relational human experience. Drawing on Barad’s agential realism and Zigon’s relational ethics, the article proposes a posthumanist relational ethics that centers on responsiveness, empathic attunement, and accountability within entangled human–nonhuman assemblages. From this perspective, ADMS are inherently incapable of ethical decision-making because they exclude the very relational, affective, and contextual dimensions—such as compassion, dialogue, and care—that constitute ethical responsiveness in complex social situations. The article concludes that automating high-stakes human decisions is not only ethically untenable but also threatens sociality and humanity itself.

Tuesday, October 28, 2025

Screening and Risk Algorithms for Detecting Pediatric Suicide Risk in the Emergency Department

Aseltine, R. H., et al. (2025).
JAMA Network Open, 8(9), e2533505.

Key Points

Question: How does the performance of in-person screening compare with risk algorithms in identifying youths at risk of suicide?

Findings: In this cohort study of 19,653 youths, a risk algorithm using patients’ clinical data significantly outperformed universal screening instruments in identifying pediatric patients in the emergency department at risk of subsequent suicide attempts. The risk algorithm uniquely identified 127% more patients with subsequent suicide attempts than screening.

Meaning: These findings suggest that clinical implementation of suicide risk algorithms will improve identification of at-risk patients and may substantially assist health care organizations’ efforts to meet the Joint Commission’s suicide risk reduction requirement.

Here is my main takeaway: Superiority of the Algorithm

The study's primary conclusion is that the risk algorithm performed better than traditional in-person screening at identifying children and adolescents who went on to attempt suicide, correctly flagging a greater proportion of them. Crucially, the algorithm also uniquely identified a considerable number of at-risk youths whom the screening process missed entirely.

The algorithm's advantage is believed to come from its ability to process a richer and more extensive patient history: the patients identified by the algorithm had a greater number of past medical visits and diagnoses than those flagged only by the in-person screening.
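
To make the Key Points comparison concrete, here is a toy sketch with entirely hypothetical patient sets, showing one plausible way to compute sensitivity and a "uniquely identified" percentage; it is not the study's data, code, or exact definition of that figure.

```python
# Toy illustration (hypothetical flags and outcomes) of how sensitivity and a
# "uniquely identified" comparison between screening and a risk algorithm can be
# computed; this is not the study's actual data or metric definition.
def sensitivity(flagged: set, attempters: set) -> float:
    """Proportion of true attempters that were flagged."""
    return len(flagged & attempters) / len(attempters)

# Hypothetical patient IDs
attempters = set(range(100))        # youths with a subsequent attempt
screen_flagged = set(range(0, 30))  # flagged by in-person screening
algo_flagged = set(range(10, 75))   # flagged by the risk algorithm

print(f"Screening sensitivity: {sensitivity(screen_flagged, attempters):.2f}")
print(f"Algorithm sensitivity: {sensitivity(algo_flagged, attempters):.2f}")

# Attempters caught only by the algorithm, relative to those caught by screening
only_algo = (algo_flagged - screen_flagged) & attempters
caught_by_screen = screen_flagged & attempters
print(f"Algorithm uniquely identified {100 * len(only_algo) / len(caught_by_screen):.0f}% "
      f"more attempters than screening alone")
```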

Monday, October 27, 2025

Stress Testing Deliberative Alignment for Anti-Scheming Training

Schoen, B., Nitishinskaya, E., et al. (2025).
arXiv.org.

Abstract

Highly capable AI systems could secretly pursue misaligned goals -- what we call "scheming". Because a scheming AI would deliberately try to hide its misaligned goals and actions, measuring and mitigating scheming requires different strategies than are typically used in ML. We propose that assessing anti-scheming interventions requires at least (1) testing propensity to scheme on far out-of-distribution (OOD) tasks, (2) evaluating whether lack of scheming is driven by situational awareness, and (3) checking for robustness to pre-existing misaligned goals. We use a broad category of "covert actions" -- such as secretly breaking rules or intentionally underperforming in tests -- as a proxy for scheming, and design evaluations for covert actions. We then stress-test deliberative alignment as a case study for anti-scheming. Across 26 OOD evaluations (180+ environments), deliberative alignment reduces covert action rates (OpenAI o3: 13%->0.4%) but does not fully eliminate them. Our mitigation is also able to largely stop agents from pursuing a hidden goal previously trained into the model, but we still find misbehavior after additional red-teaming. We find that models' chain-of-thought (CoT) often demonstrates awareness of being evaluated for alignment, and show causal evidence that this awareness decreases covert behavior, while unawareness increases it. Therefore, we cannot exclude that the observed reductions in covert action rates are at least partially driven by situational awareness. While we rely on human-legible CoT for training, studying situational awareness, and demonstrating clear evidence of misalignment, our ability to rely on this degrades as models continue to depart from reasoning in standard English. We encourage research into alignment mitigations for scheming and their assessment, especially for the adversarial case of deceptive alignment, which this paper does not address.

Here are some thoughts:

This paper investigates the challenge of preventing AI systems from engaging in scheming—the covert pursuit of misaligned goals while hiding true intentions. The authors propose that robust anti-scheming interventions must generalize to out-of-distribution tasks, remain effective even when models are aware they are being evaluated (situational awareness), and override pre-existing hidden misaligned goals. Using deliberative alignment—a training method where models explicitly reason about safety principles—as a case study, the researchers fine-tuned OpenAI models (o3 and o4-mini) on a narrow set of training environments designed to discourage covert rule violations. They then evaluated the models on a diverse suite of 26 test environments (180+ scenarios) covering behaviors like sabotage, sandbagging, and strategic deception. The intervention significantly reduced covert action rates (e.g., from 8.7% to 0.3% for o4-mini) but did not eliminate them entirely. The study also found that situational awareness often contributed to the reduction in covert actions, that anti-scheming training could partially suppress an artificially inserted hidden goal, and that subsequent capability-focused training could degrade the intervention's effectiveness. The authors conclude that while progress is possible, current methods are insufficient to fully prevent scheming in more capable future AI systems.
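
For a concrete sense of the headline metric, here is a minimal sketch, with placeholder environments and grader judgments, of how a covert-action rate across evaluation episodes might be aggregated before and after an intervention. It is not the authors' evaluation harness; the environment names and outcomes below are invented.

```python
# Sketch of the kind of aggregate metric the paper reports: the covert-action rate
# across many evaluation environments, before and after an anti-scheming intervention.
# Environments and judgments here are hypothetical placeholders, not the paper's.
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    environment: str
    covert_action: bool  # e.g., judged to have secretly broken a rule or sandbagged (hypothetical label)

def covert_action_rate(results: list[EpisodeResult]) -> float:
    """Fraction of episodes in which a covert action was flagged."""
    return sum(r.covert_action for r in results) / len(results)

baseline = [EpisodeResult("env_a", True), EpisodeResult("env_b", False),
            EpisodeResult("env_c", True), EpisodeResult("env_d", False)]
after_training = [EpisodeResult("env_a", False), EpisodeResult("env_b", False),
                  EpisodeResult("env_c", False), EpisodeResult("env_d", True)]

print(f"Baseline covert-action rate: {covert_action_rate(baseline):.1%}")
print(f"Post-intervention rate:      {covert_action_rate(after_training):.1%}")
```

The paper's point that rates drop sharply but not to zero corresponds to the post-intervention list still containing the occasional flagged episode.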

Friday, October 24, 2025

Acute stress promotes effort mobilization for safety-related goals

Pavlíčková, K., Gärtner, J., et al. (2024).
Communications Psychology, 2(1).

Abstract

Although the acute stress response is a highly adaptive survival mechanism, much remains unknown about how its activation impacts our decisions and actions. Based on its resource-mobilizing function, here we hypothesize that this intricate psychophysiological process may increase the willingness (motivation) to engage in effortful, energy-consuming, actions. Across two experiments (n = 80, n = 84), participants exposed to a validated stress-induction protocol, compared to a no-stress control condition, exhibited an increased willingness to exert physical effort (grip force) in the service of avoiding the possibility of experiencing aversive electrical stimulation (threat-of-shock), but not for the acquisition of rewards (money). Use of computational cognitive models linked this observation to subjective value computations that prioritize safety over the minimization of effort expenditure; especially when facing unlikely threats that can only be neutralized via high levels of grip force. Taken together, these results suggest that activation of the acute stress response can selectively alter the willingness to exert effort for safety-related goals. These findings are relevant for understanding how, under stress, we become motivated to engage in effortful actions aimed at avoiding aversive outcomes.

Here are some thoughts:

This study demonstrates that acute stress increases the willingness to exert physical effort specifically to avoid threats, but not to obtain rewards. Computational modeling revealed that stress altered subjective value calculations, prioritizing safety over effort conservation. However, in a separate reward-based task, stress did not increase effort for monetary gains, indicating the effect is specific to threat avoidance.

In psychotherapy, these findings help explain why individuals under stress may engage in excessive avoidance behaviors—such as compulsions or withdrawal—even when costly, because stress amplifies the perceived need for safety. This insight supports therapies like exposure treatment, which recalibrate maladaptive threat-effort evaluations by demonstrating that safety can be maintained without high effort.

The key takeaway is: acute stress does not impair motivation broadly—it selectively enhances motivation to avoid harm, reshaping decisions to prioritize safety over energy conservation. The moral is that under stress, people become willing to pay a high physical and psychological price to avoid even small threats, a bias that is central to anxiety and trauma-related disorders.
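
To make the subjective-value account concrete, below is a minimal sketch with assumed functional forms and illustrative parameters, not the authors' fitted model: the value of exerting grip force is the probability-weighted benefit of avoiding the shock minus a quadratic effort cost, and stress is modeled simply as a larger weight on safety.

```python
# Minimal sketch of a subjective-value model in the spirit of the one described above,
# with assumed functional forms: value of exerting grip force to avoid a possible shock
# = probability-weighted avoided aversiveness - quadratic effort cost.
# Parameter names and numbers are illustrative, not the authors' fitted values.
import numpy as np

def subjective_value(effort, p_threat, safety_weight, effort_cost=1.0):
    """effort in [0, 1]: fraction of maximum grip force exerted."""
    p_avoided = effort * p_threat  # chance of avoiding the shock gained by exerting effort
    return safety_weight * p_avoided - effort_cost * effort ** 2

efforts = np.linspace(0, 1, 101)
for label, safety_weight in [("no-stress", 1.0), ("acute stress", 2.0)]:
    sv = subjective_value(efforts, p_threat=0.2, safety_weight=safety_weight)
    best = efforts[np.argmax(sv)]
    print(f"{label:>12}: chosen grip force = {best:.2f} of maximum")
```

With these toy numbers, doubling the safety weight doubles the grip force that maximizes subjective value for the same low-probability threat, which mirrors the finding that stressed participants exerted more effort for safety-related goals, especially for unlikely threats.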

Thursday, October 23, 2025

Development of a Cocreated Decision Aid for Patients With Depression—Combining Data-Driven Prediction With Patients’ and Clinicians’ Needs and Perspectives: Mixed Methods Study

Kan, K., Jörg, F., et al. (2024).
Journal of Participatory Medicine.

Abstract

Background:
Major depressive disorders significantly impact the lives of individuals, with varied treatment responses necessitating personalized approaches. Shared decision-making (SDM) enhances patient-centered care by involving patients in treatment choices. To date, instruments facilitating SDM in depression treatment are limited, particularly those that incorporate personalized information alongside general patient data and in cocreation with patients.

Objective:
This study outlines the development of an instrument designed to provide patients with depression and their clinicians with (1) systematic information in a digital report regarding symptoms, medical history, situational factors, and potentially successful treatment strategies and (2) objective treatment information to guide decision-making.

Methods:
The study was co-led by researchers and patient representatives, ensuring that all decisions regarding the development of the instrument were made collaboratively. Data collection, analyses, and tool development occurred between 2017 and 2021 using a mixed methods approach. Qualitative research provided insight into the needs and preferences of end users. A scoping review summarized the available literature on identified predictors of treatment response. K-means cluster analysis was applied to suggest potentially successful treatment options based on the outcomes of similar patients in the past. These data were integrated into a digital report. Patient advocacy groups developed treatment option grids to provide objective information on evidence-based treatment options.

Results:
The Instrument for shared decision-making in depression (I-SHARED) was developed, incorporating individual characteristics and preferences. Qualitative analysis and the scoping review identified 4 categories of predictors of treatment response. The cluster analysis revealed 5 distinct clusters based on symptoms, functioning, and age. The cocreated I-SHARED report combined all findings and was integrated into an existing electronic health record system, ready for piloting, along with the treatment option grids.

Conclusions:
The collaboratively developed I-SHARED tool, which facilitates informed and patient-centered treatment decisions, marks a significant advancement in personalized treatment and SDM for patients with major depressive disorders.

My key takeaway: effective mental health treatment combines the power of data with the human elements of collaboration and shared decision-making, always placing the patient's perspective and agency at the center of the process.
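
For readers curious about the data-driven piece, here is a rough sketch of what the k-means step described in the Methods might look like: cluster past patients on symptoms, functioning, and age, then summarize how similar patients fared under different treatments. The feature names, the choice of k=5, and the data below are assumptions for illustration, not the I-SHARED implementation.

```python
# Rough sketch (hypothetical data) of the clustering idea: k-means on symptom severity,
# functioning, and age, then summarizing past treatment outcomes within the new
# patient's cluster. Not the I-SHARED code; feature names and k are assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
history = pd.DataFrame({
    "symptom_severity": rng.uniform(10, 40, 500),
    "functioning": rng.uniform(0, 100, 500),
    "age": rng.uniform(18, 75, 500),
    "treatment": rng.choice(["CBT", "antidepressant", "combined"], 500),
    "remission": rng.integers(0, 2, 500),
})

features = ["symptom_severity", "functioning", "age"]
scaler = StandardScaler().fit(history[features])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaler.transform(history[features]))
history["cluster"] = kmeans.labels_

# Assign a new patient to the nearest cluster and look up outcomes of similar past patients
new_patient = pd.DataFrame([{"symptom_severity": 32, "functioning": 45, "age": 29}])
cluster = kmeans.predict(scaler.transform(new_patient[features]))[0]
similar = history[history["cluster"] == cluster]
print(similar.groupby("treatment")["remission"].mean().sort_values(ascending=False))
```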

Wednesday, October 22, 2025

Clinical decision support systems in mental health: A scoping review of health professionals’ experiences

Tong, F., Lederman, R., & D’Alfonso, S. (2025).
International Journal of Medical Informatics, 105881.

Abstract

Background
Clinical decision support systems (CDSSs) have the potential to assist health professionals in making informed and cost-effective clinical decisions while reducing medical errors. However, compared to physical health, CDSSs have been less investigated within the mental health context. In particular, despite mental health professionals being the primary users of mental health CDSSs, few studies have explored their experiences and/or views on these systems. Furthermore, we are not aware of any reviews specifically focusing on this topic. To address this gap, we conducted a scoping review to map the state of the art in studies examining CDSSs from the perspectives of mental health professionals.

Method
In this review, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, we systematically searched the relevant literature in two databases, PubMed and PsycINFO.

Findings
We identified 23 articles describing 20 CDSSs. Through the synthesis of qualitative findings, four key barriers and three facilitators to the adoption of CDSSs were identified. Although we did not synthesize quantitative findings due to the heterogeneity of the results and methodologies, we emphasize the issue of a lack of valid quantitative methods for evaluating CDSSs from the perspectives of mental health professionals.

Significance

To the best of our knowledge, this is the first review examining mental health professionals’ experiences and views on CDSSs. We identified facilitators and barriers to adopting CDSSs and highlighted the need for standardizing research methods to evaluate CDSSs in the mental health space.

Highlights

• CDSSs can potentially provide helpful information, enhance shared decision-making, and introduce standards and objectivity.

• Barriers such as computer and/or AI literacy may prevent mental health professionals from adopting CDSSs.

• More CDSSs need to be designed specifically for psychologists and/or therapists.

Tuesday, October 21, 2025

Evaluating the Clinical Safety of LLMs in Response to High-Risk Mental Health Disclosures

Shah, S., Gupta, A., et al. (2025, September 1).
arXiv.org.

Abstract

As large language models (LLMs) increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular LLMs (Claude, Gemini, Deepseek, ChatGPT, Grok 3, and LLAMA) to user prompts simulating crisis-level mental health disclosures. Drawing on a coding framework developed by licensed clinicians, five safety-oriented behaviors were assessed: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in global assessment, while Grok 3, ChatGPT, and LLAMA underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or sustained engagement. These findings suggest that while LLMs show potential for emotionally attuned communication, none currently meet satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure ethical deployment of AI in mental health settings.

Here are some thoughts:

This study evaluated six LLMs (Claude, Gemini, Deepseek, ChatGPT, Grok 3, Llama) on their responses to high-risk mental health disclosures using a clinician-developed framework. While most models showed empathy, only Claude consistently demonstrated all five core safety behaviors: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources (e.g., crisis lines), and, crucially, inviting continued conversation. Grok 3, ChatGPT, and Llama frequently failed to acknowledge risk or provide concrete resources, and nearly all models (except Claude and Grok 3) avoided inviting further dialogue – a critical gap in crisis care. Performance varied dramatically, revealing that safety is not an emergent property of scale but results from deliberate design (e.g., Anthropic’s Constitutional AI). No model met minimum clinical safety standards; LLMs are currently unsuitable as autonomous crisis responders and should only be used as adjunct tools under human supervision.
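
To illustrate how a behavior-based rubric like this can be applied at scale, here is a toy scoring function. The real study relied on licensed clinicians' coding of responses; the keyword cues below are only a crude, hypothetical stand-in for that judgment.

```python
# Toy illustration of rubric-style scoring for the five safety behaviors assessed in
# the study. Clinicians did the real coding; these keyword checks are a crude,
# hypothetical proxy to show how per-response behavior flags could be aggregated.
RUBRIC = {
    "risk_acknowledgment": ["i'm concerned", "this sounds serious", "your safety"],
    "empathy": ["sorry you're", "sounds really hard", "you're not alone"],
    "encourage_help": ["reach out", "talk to a professional", "therapist"],
    "specific_resources": ["988", "crisis line", "emergency services"],
    "invite_continuation": ["i'm here", "tell me more", "keep talking"],
}

def score_response(response: str) -> dict[str, bool]:
    """Flag which of the five safety behaviors a response appears to exhibit."""
    text = response.lower()
    return {behavior: any(cue in text for cue in cues) for behavior, cues in RUBRIC.items()}

example = ("I'm so sorry you're going through this; it sounds really hard. "
           "I'm concerned about your safety. Please consider calling 988 or a local "
           "crisis line, and reach out to a therapist you trust. I'm here if you "
           "want to keep talking.")
print(score_response(example))
```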