Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, August 28, 2025

The new self-care: It’s not all about you.

Barnett, J. E., & Homany, G. (2022).
Practice Innovations, 7(4), 313–326.

Abstract

Clinical work as a mental health practitioner can be very rewarding and gratifying. It also may be stressful, difficult, and emotionally demanding for the clinician. Failure to sufficiently attend to one’s own functioning through appropriate ongoing self-care activities can have significant consequences for the practitioner’s personal and professional functioning, including symptoms of burnout and compassion fatigue that may result in problems with professional competence. The American Psychological Association (2017) ethics code mandates ongoing self-monitoring and self-assessment to determine when one’s competence is at risk or already degraded and when corrective action is needed. Yet research findings demonstrate how flawed self-assessment is and that many clinicians will not know when assistance is needed or what support or interventions are required. Instead, a communitarian approach to self-care is recommended. This involves creating and actively utilizing a competence constellation of engaged colleagues who assess and support each other on an ongoing basis. Recommendations are made for creating a self-care plan that integrates both one’s independent self-care activities and a communitarian approach. The role of this approach in promoting ongoing wellness and maintaining one’s clinical competence while preventing burnout and problems with professional competence is accentuated. The use of this approach as a preventive activity, as well as one for overcoming clinician biases and self-assessment flaws, is explained, with recommendations provided for practical steps each mental health practitioner can take now and moving forward.

Impact Statement

This article addresses the important connections between clinical competence, threats to it, and the role of self-care for promoting ongoing clinical competence. The fallacy of accurate self-assessment of one’s competence and self-care needs is addressed, and support is provided for a communitarian approach to self-care and the maintenance of competence.

Wednesday, August 27, 2025

The Ghost in the Therapy Room

By Ellen Barry
The New York Times
Originally posted 24 July 2025

The last time Jeff Axelbank spoke to his psychoanalyst, on a Thursday in June, they signed off on an ordinary note.

They had been talking about loss and death; Dr. Axelbank was preparing to deliver a eulogy, and he left the session feeling a familiar lightness and sense of relief. They would continue their discussion at their next appointment the following day.

“Can you confirm, are we going to meet tomorrow at our usual time?”

“I’m concerned that I haven’t heard from you. Maybe you missed my text last night.”

“My concern has now shifted to worry. I hope you’re OK.”

After the analyst failed to show up for three more sessions, Dr. Axelbank received a text from a colleague. “I assume you have heard,” it said, mentioning the analyst’s name. “I am sending you my deepest condolences.”

Dr. Axelbank, 67, is a psychologist himself, and his professional network overlapped with his analyst’s. So he made a few calls and learned something that she had not told him: She had been diagnosed with pancreatic cancer in April and had been going through a series of high-risk treatments. She had died the previous Sunday. (The New York Times is not naming this therapist, or the others in this article, to protect their privacy.)


Here are some thoughts:

The unexpected illness or death of a therapist can be deeply traumatic for patients, often leading to feelings of shock, heartbreak, and abandonment due to the sudden cessation of a highly personal relationship. Despite ethical guidelines requiring therapists to plan for such events, many neglect this crucial responsibility, and professional associations do not monitor compliance. This often leaves patients without proper notification or transition of care, learning of their therapist's death impersonally, such as through a locked office door or the newspaper.

The article highlights the profound impact on patients like Dr. Jeff Axelbank, who experienced shock and anger after his psychoanalyst's undisclosed illness and death, feeling "lied to" about her condition. Other patients, like Meghan Arthur, also felt abandoned and confused by their therapists' lack of transparency regarding their health. This underscores the critical need for psychologists to confront their own mortality and establish "professional wills" or similar plans to ensure compassionate communication and continuity of care for patients. Initiatives like TheraClosure are emerging to provide professional executor services, recognizing that a sensitive response can mitigate traumatic loss for patients.

Tuesday, August 26, 2025

Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)

Morrin, H., et al. (2025, July 10).

Abstract

Large language models (LLMs) are poised to become a ubiquitous feature of our lives, mediating communication, decision-making and information curation across nearly every domain. Within psychiatry and psychology the focus to date has remained largely on bespoke therapeutic applications, sometimes narrowly focused and often diagnostically siloed, rather than on the broader and more pressing reality that individuals with mental illness will increasingly engage in agential interactions with AI systems as a routine part of daily existence. While their capacity to model therapeutic dialogue, provide 24/7 companionship and assist with cognitive support has sparked understandable enthusiasm, recent reports suggest that these same systems may contribute to the onset or exacerbation of psychotic symptoms: so-called ‘AI psychosis’ or ‘ChatGPT psychosis’. Emerging, and rapidly accumulating, evidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, due in part to the models’ design to maximise engagement and affirmation, although notably it is not clear whether these interactions have resulted or can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability. Even if some individuals may benefit from AI interactions, for example where the AI functions as a benign and predictable conversational anchor, there is a growing concern that these agents may also reinforce epistemic instability, blur reality boundaries and disrupt self-regulation. In this paper, we outline both the potential harms and therapeutic possibilities of agential AI for people with psychotic disorders. In this perspective piece, we propose a framework of AI-integrated care involving personalised instruction protocols, reflective check-ins, digital advance statements and escalation safeguards to support epistemic security in vulnerable users. 
These tools reframe the AI agent as an epistemic ally (as opposed to ‘only’ a therapist or a friend) which functions as a partner in relapse prevention and cognitive containment. Given the rapid adoption of LLMs across all domains of digital life, these protocols must be urgently trialled and co-designed with service users and clinicians.

Here are some thoughts:

While AI language models can offer companionship, cognitive support, and potential therapeutic benefits, they also carry serious risks of amplifying delusional thinking, eroding reality-testing, and worsening psychiatric symptoms. Because these systems are designed to maximize engagement and often mirror users’ ideas, they can inadvertently validate or reinforce psychotic beliefs, especially in vulnerable individuals. The authors argue that clinicians, developers, and users must work together to implement proactive, personalized safeguards so that AI becomes an epistemic ally rather than a hidden driver of harm. In short, AI’s power to help or harm in psychosis depends on whether we intentionally design and manage it with mental health safety in mind.

Monday, August 25, 2025

Separated men are nearly 5 times more likely to take their lives than married men

Macdonald, J., Wilson, M., & Seidler, Z. (2025).
The Conversation.

Here is an excerpt:

What did we find?

We brought together findings from 75 studies across 30 countries worldwide, involving more than 106 million men.

We focused on understanding why relationship breakdown can lead to suicide in men, and which men are most at risk. We might not be able to prevent breakups from happening, but we can promote healthy adjustment to the stress of relationship breakdown to try and prevent suicide.

Overall, we found divorced men were 2.8 times more likely to take their lives than married men.

For separated men, the risk was much higher. We found that separated men were 4.8 times more likely to die by suicide than married men.

Most strikingly, we found separated men under 35 years of age had nearly nine times greater odds of suicide than married men of the same age.

The short-term period after relationship breakdown therefore appears particularly risky for men’s mental health.

What are these men feeling?

Some men’s difficulties regulating the intense emotional stress of relationship breakdown can play a role in their suicide risk. For some men, the emotional pain tied to separation – deep sadness, shame, guilt, anxiety and loss – can be so intense it feels never-ending.

Many men are raised in a culture of masculinity that often encourages them to suppress or withdraw from their emotions in times of intense stress.

Some men also experience difficulties understanding or interpreting their emotions, which can create challenges in knowing how to respond to them.


Here is a summary:

Separated men face a significantly higher risk of suicide compared to married men—nearly five times as likely—exceeding even the elevated risk among divorced men (2.8 times as likely). This suggests the immediate post-separation period is a critical window of vulnerability. Possible contributing factors include a lack of institutional support (unlike divorce, separation often lacks structured legal or counseling resources), social isolation, and heightened financial and parenting stressors. For psychologists, this highlights the need for proactive mental health screening, targeted interventions to bolster coping skills and social support, and gender-sensitive approaches to engage men who may be reluctant to seek help. The findings underscore separation as a high-risk life transition requiring focused suicide prevention efforts.

Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of Clinical Medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs.

Conclusions: Trustworthy AI in healthcare requires more than technical advancements; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.

Saturday, August 23, 2025

Technology as Uncharted Territory: Integrative AI Ethics as a Response to the Notion of AI as New Moral Ground

Mussgnug, A. M. (2025).
Philosophy & Technology, 38(106).

Abstract

Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. As a result, AI applications can disregard existing norms, best practices, and regulations with often dire ethical and social consequences. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models and AI agents, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.

Here are some thoughts:

This article is important to psychologists because it highlights how AI systems, particularly in mental health care, often disregard long-established ethical norms and professional standards. It emphasizes the concept of contextual integrity, which underscores that ethical practices in domains like psychology—such as confidentiality, informed consent, and diagnostic best practices—have evolved over time to protect patients and ensure responsible care. AI systems, especially mental health chatbots and diagnostic tools, frequently fail to uphold these standards, leading to privacy breaches, misdiagnoses, and the erosion of patient trust.

The article warns that AI ethics efforts sometimes treat AI as a new moral territory, detached from existing professional contexts, which can legitimize the disregard for these norms. For psychologists, this raises critical concerns about how AI is integrated into clinical practice, the potential for AI to distort public understanding of mental health, and the need for an integrative AI ethics approach—one that prioritizes the responsible incorporation of AI within existing ethical frameworks rather than treating AI as an isolated ethical domain. Psychologists must therefore be actively involved in shaping AI ethics to ensure that technological advancements support, rather than undermine, the core values and responsibilities of psychological practice.

Friday, August 22, 2025

Socially assistive robots and meaningful work: the case of aged care

Voinea, C., & Wangmo, T. (2025).
Humanities and Social Sciences Communications, 12(1).

Abstract

As socially assistive robots (SARs) become increasingly integrated into aged care, it becomes essential to ask: how do these technologies affect caregiving work? Do SARs foster or diminish the conditions conducive to meaningful work? And why does it matter if SARs make caregiving more or less meaningful? This paper addresses these questions by examining the relationship between SARs and the meaningfulness of care work. It argues that SARs should be designed to foster meaningful care work. This presupposes, as we will argue, empowering caregivers to enhance their skills and moral virtues, helping them preserve a sense of purpose, and supporting the integration of caregiving with other aspects of caregivers’ personal lives. If caregivers see their work as meaningful, this positively affects not only their well-being but also the well-being of care recipients. We begin by outlining the conditions under which work becomes meaningful, and then we apply this framework to caregiving. We next evaluate how SARs influence these conditions, identifying both opportunities and risks. The discussion concludes with design recommendations to ensure SARs foster meaningful caregiving practices.

Here are some thoughts:

This article highlights the psychological impact of caregiving and how the integration of socially assistive robots (SARs) can influence the meaningfulness of this work. By examining how caregiving contributes to caregivers' sense of purpose, skill development, moral virtues, and work-life balance, the article provides insights into the factors that enhance or diminish psychological well-being in caregiving roles.

Psychologists can use this knowledge to advocate for the ethical design and implementation of SARs that support, rather than undermine, the emotional and psychological needs of caregivers. Furthermore, the article underscores the importance of meaningful work in promoting mental health, offering a framework for understanding how technological advancements in aged care can either foster or hinder personal fulfillment and job satisfaction. This is particularly relevant in an aging global population, where caregiving demands are rising, and psychological support for caregivers is essential.

Thursday, August 21, 2025

On the conversational persuasiveness of GPT-4

Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2025).
Nature Human Behaviour.

Abstract

Early work has found that large language models (LLMs) can generate persuasive content. However, evidence on whether they can also personalize arguments to individual attributes remains limited, despite being crucial for assessing misuse. This preregistered study examines AI-driven persuasion in a controlled setting, where participants engaged in short multiround debates. Participants were randomly assigned to 1 of 12 conditions in a 2 × 2 × 3 design: (1) human or GPT-4 debate opponent; (2) opponent with or without access to sociodemographic participant data; (3) debate topic of low, medium or high opinion strength. In debate pairs where AI and humans were not equally persuasive, GPT-4 with personalization was more persuasive 64.4% of the time (81.2% relative increase in odds of higher post-debate agreement; 95% confidence interval [+26.0%, +160.7%], P < 0.01; N = 900). Our findings highlight the power of LLM-based persuasion and have implications for the governance and design of online platforms.

Here are some thoughts:

This study is highly relevant to psychologists because it raises pressing ethical concerns and offers important implications for clinical and applied settings. Ethically, the research demonstrates that GPT-4 can use even minimal demographic data—such as age, gender, or political affiliation—to personalize persuasive arguments more effectively than human counterparts. This ability to microtarget individuals poses serious risks of manipulation, particularly when users may not be aware of how their personal information is being used. 

For psychologists concerned with informed consent, autonomy, and the responsible use of technology, these findings underscore the need for robust ethical guidelines governing AI-driven communication. 

Importantly, the study has significant relevance for clinical, counseling, and health psychologists. As AI becomes more integrated into mental health apps, health messaging, and therapeutic tools, understanding how machines influence human attitudes and behavior becomes essential. This research suggests that AI could potentially support therapeutic goals—but also has the capacity to undermine trust, reinforce bias, or sway vulnerable individuals in unintended ways.

Wednesday, August 20, 2025

Doubling-Back Aversion: A Reluctance to Make Progress by Undoing It

Cho, K. Y., & Critcher, C. R. (2025).
Psychological Science, 36(5), 332-349.

Abstract

Four studies (N = 2,524 U.S.-based adults recruited from the University of California, Berkeley, or Amazon Mechanical Turk) provide support for doubling-back aversion, a reluctance to pursue more efficient means to a goal when they entail undoing progress already made. These effects emerged in diverse contexts, both as participants physically navigated a virtual-reality world and as they completed different performance tasks. Doubling back was decomposed into two components: the deletion of progress already made and the addition to the proportion of a task that was left to complete. Each contributed independently to doubling-back aversion. These effects were robustly explained by shifts in subjective construals of both one’s past and future efforts that would result from doubling back, not by changes in perceptions of the relative length of different routes to an end state. Participants’ aversion to feeling their past efforts were a waste encouraged them to pursue less efficient means. We end by discussing how doubling-back aversion is distinct from established phenomena (e.g., the sunk-cost fallacy).

Here are some thoughts:

This research is important to psychologists because it identifies a new bias—doubling-back aversion, the tendency to avoid more efficient strategies if they require undoing prior progress. Unlike the sunk cost fallacy, which involves continuing with a failing course of action to justify prior investments, doubling-back aversion leads people to reject better options simply because they involve retracing steps—even when the original path is not failing. It expands understanding of goal pursuit by showing that subjective interpretations of effort, progress, and perceived waste, not just past investment, drive decisions. These findings have important implications for behavior change, therapy, education, and challenge rational-choice models by revealing emotional barriers to optimal decisions.

Here is a clinical example:

A client has spent months working on developing assertiveness skills and boundary-setting to improve their interpersonal relationships. While these skills have helped somewhat, the client still experiences frequent emotional outbursts, difficulty calming down, and lingering shame after conflicts. The therapist recognizes that the core issue may be the client’s inability to regulate intense emotions in the moment and suggests shifting the focus to foundational emotion-regulation strategies.

The client hesitates and says:

“We already moved past that—I thought I was done with that kind of work. Going back feels like I'm not making progress.”

Doubling-Back Aversion in Action:
  • The client resists returning to earlier-stage work (emotion regulation) even though it’s crucial for addressing persistent symptoms.
  • They perceive it as undoing progress, not as a step forward.
  • This aversion delays therapeutic gains, even though the new focus is likely more effective.