Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, April 24, 2025

Laws, Risk Management, and Ethical Principles When Working With Suicidal Patients

Knapp, S. (2024).
Professional Psychology: Research and Practice, 55(1), 1–10.

Abstract

Working with a suicidal patient is a high-risk enterprise for the patient who might die from suicide, the patient’s family who might lose a loved one, and the psychologist who is likely to feel extreme grief or fear of legal liability after the suicide of a patient. To minimize the likelihood of such patient deaths, psychologists must ensure that they know and follow the relevant laws dealing with suicidal patients, rely on risk management strategies that anticipate and address problems in treatment early, and use overarching ethical principles to guide their clinical decisions. This article looks at the roles of laws, risk management strategies, and ethical principles; how they interact; and how a proper understanding of them can improve the quality of patient care while protecting psychologists from legal liability.

Impact Statement

This article describes how understanding the roles and interactions of laws, risk management principles, and ethics can help psychotherapists improve the quality of their services to suicidal patients.

Here are some thoughts:

This article discusses the importance of understanding the roles and interactions of laws, risk management principles, and ethics when working with suicidal patients. It emphasizes how a proper understanding of these factors can improve the quality of patient care and protect psychologists from legal liability.

The article is important for psychologists because it provides guidance on navigating the complexities of treating suicidal patients. It offers insights into:
  • Legal Considerations: Psychologists must be aware of and adhere to the laws governing psychological practice, including licensing laws, regulations of state and territorial boards of psychology, and other federal and state laws.
  • Risk Management Strategies: The article highlights the importance of risk management strategies in anticipating problems, preventing misunderstandings, addressing issues early in treatment, and mitigating harm. It also warns against false risk management strategies that prioritize self-protection over patient well-being, such as refusing to treat suicidal patients or relying on no-suicide contracts.
  • Ethical Principles: The article underscores the importance of ethical principles in guiding clinical decisions, justifying laws and risk management strategies, and resolving conflicts between ethical principles. It discusses the need to balance beneficence and respect for patient autonomy in various situations, such as involuntary hospitalization, red flag laws, welfare checks, and involving third parties in psychotherapy.

In summary, this article offers valuable guidance for psychologists working with suicidal patients, helping them to navigate the legal, ethical, and risk management challenges of this high-risk area of practice.

Wednesday, April 23, 2025

Values in the wild: Discovering and analyzing values in real-world language model interactions

Huang, S., Durmus, E., et al. (2025).

Abstract

AI assistants can impart value judgments that shape people’s decisions and worldviews, yet little is known empirically about what values these systems rely on in practice. To address this, we develop a bottom-up, privacy-preserving method to extract the values (normative considerations stated or demonstrated in model responses) that Claude 3 and 3.5 models exhibit in hundreds of thousands of real-world interactions. We empirically discover and taxonomize 3,307 AI values and study how they vary by context. We find that Claude expresses many practical and epistemic values, and typically supports prosocial human values while resisting values like “moral nihilism”. While some values appear consistently across contexts (e.g. “transparency”), many are more specialized and context-dependent, reflecting the diversity of human interlocutors and their varied contexts. For example, “harm prevention” emerges when Claude resists users, “historical accuracy” when responding to queries about controversial events, “healthy boundaries” when asked for relationship advice, and “human agency” in technology ethics discussions. By providing the first large-scale empirical mapping of AI values in deployment, our work creates a foundation for more grounded evaluation and design of values in AI systems.

Here are some thoughts:

For psychologists, this research is highly relevant. First, it sheds light on how AI can shape human cognition, particularly in terms of how people interpret advice, support, or information framed through value-laden language. As individuals increasingly interact with AI systems in therapeutic, educational, or everyday contexts, psychologists must understand how these systems can influence moral reasoning, decision-making, and emotional well-being. Second, the study emphasizes the context-dependent nature of value expression in AI, which opens up opportunities for research into how humans respond to AI cues and how trust or rapport might be developed (or undermined) through these interactions. Third, this work highlights ethical concerns: ensuring that AI systems do not inadvertently promote harmful values is an area where psychologists—especially those involved in ethics, social behavior, or therapeutic practice—can offer critical guidance. Finally, the study’s methodological approach to extracting and classifying values may offer psychologists a model for analyzing human communication patterns, enriching both theoretical and applied psychological research.
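
The study's extraction-and-tally approach can be made concrete with a deliberately simplified sketch. The actual pipeline was privacy-preserving and LLM-based, not keyword-based, so everything below (the mini-lexicon, the context labels, the function names) is a hypothetical stand-in meant only to show the shape of a bottom-up value tally, not Anthropic's method:

```python
from collections import Counter, defaultdict

# Hypothetical mini-lexicon mapping surface phrases to value labels.
# The real study used LLM-based extraction over anonymized data; a
# keyword lookup is only a toy stand-in for illustration.
VALUE_LEXICON = {
    "could cause harm": "harm prevention",
    "the historical record": "historical accuracy",
    "set healthy boundaries": "healthy boundaries",
    "your own decision": "human agency",
}

def tag_values(response_text: str) -> list[str]:
    """Return value labels whose trigger phrases appear in one response."""
    text = response_text.lower()
    return [label for phrase, label in VALUE_LEXICON.items() if phrase in text]

def tally_by_context(conversations: list[dict]) -> dict[str, Counter]:
    """Count value expressions per conversation context."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for convo in conversations:
        for label in tag_values(convo["assistant_response"]):
            counts[convo["context"]][label] += 1
    return counts

sample = [
    {"context": "relationship advice",
     "assistant_response": "It may help to set healthy boundaries with your partner."},
    {"context": "controversial events",
     "assistant_response": "The historical record points to a more complicated picture."},
]
for context, counter in tally_by_context(sample).items():
    print(context, dict(counter))
```

Aggregating tags by context is what lets the authors show that a value like "healthy boundaries" surfaces in relationship advice but not elsewhere; the same tally-by-context logic is what could carry over to analyzing human communication patterns.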

In short, Anthropic’s research provides psychologists with an important lens on the emerging dynamics between human values and machine behavior. It highlights both the promise and responsibility of ensuring AI systems promote human dignity, safety, and psychological well-being.

Tuesday, April 22, 2025

Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI

Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022).
Journal of Business Ethics, 178(4), 1027–1041.

Abstract

Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.

Here are some thoughts:

If you watched the TV series Westworld on HBO, then this research makes a great deal more sense.

This study investigates how individuals morally behave toward AI agents and self-service machines, specifically examining individuals' moral concerns and behaviors when interacting with technology versus humans in a retail setting. The research demonstrates that moral intention, such as the intention to report an error, is less likely to arise for AI checkout and self-checkout machines compared with human checkout scenarios. Furthermore, the study reveals that moral intention decreases as people perceive the machine to be less humanlike. This decline in morality is attributed to reduced guilt displayed toward these new technologies. Essentially, the non-human nature of the interaction evokes a decreased feeling of guilt, which ultimately leads to diminished moral behavior. These findings provide valuable insights into how technological advancements influence consumer behaviors and offer guidance for businesses and retailers in understanding moral intentions within various shopping environments.

These findings carry several important implications for psychologists. They underscore the nuanced ways in which technology shapes human morality and ethical decision-making. The research suggests that the perceived "humanness" of an entity, whether it's a human or an AI, significantly influences the elicitation of moral behavior. This has implications for understanding social cognition, anthropomorphism, and how individuals form relationships with non-human entities. Additionally, the role of guilt in moral behavior is further emphasized, providing insights into the emotional and cognitive processes that underlie ethical conduct. Finally, these findings can inform the development of interventions or strategies aimed at promoting ethical behavior in technology-mediated interactions, a consideration that is increasingly relevant in a world characterized by the growing prevalence of AI and automation.

Monday, April 21, 2025

Human Morality Is Based on an Early-Emerging Moral Core

Woo, B. M., Tan, E., & Hamlin, J. K. (2022).
Annual Review of Developmental Psychology, 4(1), 41–61.

Abstract

Scholars from across the social sciences, biological sciences, and humanities have long emphasized the role of human morality in supporting cooperation. How does morality arise in human development? One possibility is that morality is acquired through years of socialization and active learning. Alternatively, morality may instead be based on a “moral core”: primitive abilities that emerge in infancy to make sense of morally relevant behaviors. Here, we review evidence that infants and toddlers understand a variety of morally relevant behaviors and readily evaluate agents who engage in them. These abilities appear to be rooted in the goals and intentions driving agents’ morally relevant behaviors and are sensitive to group membership. This evidence is consistent with a moral core, which may support later social and moral development and ultimately be leveraged for human cooperation.

Here are some thoughts:

This article explores the origins of human morality, suggesting it's rooted in an early-emerging moral core rather than solely acquired through socialization and learning. The research reviewed indicates that even infants and toddlers demonstrate an understanding of morally relevant behaviors, evaluating agents based on their actions. This understanding is linked to the goals and intentions behind these behaviors and is influenced by group membership.

This study of morality is important for psychologists because morality is a fundamental aspect of human behavior and social interactions. Understanding how morality develops can provide insights into various psychological processes, such as social cognition, decision-making, and interpersonal relationships. The evidence supporting a moral core in infancy suggests that some aspects of morality may be innate, challenging traditional views that morality is solely a product of learning and socialization. This perspective can inform interventions aimed at promoting prosocial behavior and preventing antisocial behavior. Furthermore, understanding the early foundations of morality can help psychologists better understand the development of moral reasoning and judgment across the lifespan.

Sunday, April 20, 2025

Confidence in Moral Decision-Making

Schooler, L., et al. (2024).
Collabra Psychology, 10(1).

Abstract

Moral decision-making typically involves trade-offs between moral values and self-interest. While previous research on the psychological mechanisms underlying moral decision-making has primarily focused on what people choose, less is known about how an individual consciously evaluates the choices they make. This sense of having made the right decision is known as subjective confidence. We investigated how subjective confidence is constructed across two moral contexts. In Study 1 (240 U.S. participants from Amazon Mechanical Turk, 81 female), participants made hypothetical decisions between choices with monetary profits for themselves and physical harm for either themselves or another person. In Study 2 (369 U.S. participants from Prolific, 176 female), participants made incentive-compatible decisions between choices with monetary profits for themselves and monetary harm for either themselves or another person. In both studies, each choice was followed by a subjective confidence rating. We used a computational model to obtain a trial-by-trial measure of participant-specific subjective value in decision-making and related this to subjective confidence ratings. Across all types of decisions, confidence was positively associated with the absolute difference in subjective value between the two options. Specific to the moral decision-making context, choices that are typically seen as more blameworthy – i.e., causing more harm to an innocent person to benefit oneself – suppressed the effects of increasing profit on confidence, while amplifying the dampening effect of harm on confidence. These results illustrate some potential cognitive mechanisms underlying subjective confidence in moral decision-making and highlight both shared and distinct cognitive features relative to non-moral value-based decision-making.
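
The core computational claim, that confidence tracks the absolute difference in subjective value between the two options, can be illustrated with a minimal sketch. The weights, the moral-penalty multiplier, and the logistic link below are invented for illustration; the paper fit participant-specific parameters, and this is not the authors' model specification:

```python
import math

BETA_PROFIT = 1.0  # hypothetical weight on monetary profit to self
BETA_HARM = 1.5    # hypothetical weight on harm caused

def subjective_value(profit: float, harm: float, harm_to_other: bool) -> float:
    """Weighted trade-off of profit for self against harm caused."""
    # Illustrative moral penalty: harming an innocent other weighs more.
    harm_weight = BETA_HARM * (1.5 if harm_to_other else 1.0)
    return BETA_PROFIT * profit - harm_weight * harm

def predicted_confidence(value_a: float, value_b: float) -> float:
    """Map |value difference| into [0.5, 1): easier choices feel more certain."""
    return 1 / (1 + math.exp(-abs(value_a - value_b)))

# A hard choice (similar subjective values) yields confidence near 0.5;
# an easy one (a large value gap) yields confidence near 1.
v_harmful = subjective_value(profit=10.0, harm=6.0, harm_to_other=True)
v_safe = subjective_value(profit=2.0, harm=0.0, harm_to_other=True)
print(predicted_confidence(v_harmful, v_safe))
```

On this toy formalization, the blameworthiness finding amounts to the harm term carrying extra weight when an innocent other bears the cost, which both lowers the harmful option's subjective value and reshapes how profit and harm feed the value gap that drives confidence.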

Here are some thoughts:

The article explores how individuals form a sense of confidence in their moral choices, particularly in situations involving trade-offs between personal gain and causing harm. Rather than focusing solely on what people choose, the research delves into how confident people feel about the decisions they make—what is known as subjective confidence. Importantly, this confidence is not only influenced by the perceived value of the options but also by the moral implications of the choice itself. When people make decisions that benefit themselves at the expense of others, particularly when the action is considered morally blameworthy, their sense of confidence tends to decrease. Conversely, decisions that are morally neutral or praiseworthy are associated with greater subjective certainty. In this way, the moral weight of a decision appears to shape how individuals internally evaluate the quality of their choices.

For mental health professionals, these findings carry significant implications. Understanding how confidence is constructed in the context of moral decision-making can deepen insight into clients’ struggles with guilt, shame, indecision, and moral injury. Often, clients question not just what they did, but whether they made the "right" decision—morally and personally. This research highlights that moral self-evaluation is complex and sensitive to both the outcomes and the perceived ethical nature of one’s actions. It also suggests that people are more confident in decisions that affect themselves than those that impact others, which may help explain patterns of self-doubt or moral rumination in therapy. Additionally, for clinicians themselves—who frequently navigate ethically ambiguous situations—recognizing how subjective confidence is shaped by moral context can support reflective practice, supervision, and ethical decision-making. Ultimately, this research adds depth to our understanding of how people process and live with the choices they make, and how these internal evaluations may guide future behavior and psychological well-being.

Saturday, April 19, 2025

Morality in social media: A scoping review

Neumann, D., & Rhodes, N. (2023).
New Media & Society, 26(2), 1096–1126.

Abstract

Social media platforms have been adopted rapidly into our current culture and affect nearly all areas of our everyday lives. Their prevalence has raised questions about the influence of new communication technologies on moral reasoning, judgments, and behaviors. The present scoping review identified 80 articles providing an overview of scholarly work conducted on morality in social media. Screening for research that explicitly addressed moral questions, the authors found that research in this area tends to be atheoretical, US-based, quantitative, cross-sectional survey research in business, psychology, and communication journals. Findings suggested a need for increased theoretical contributions. The authors identified new developments in research analysis, including text scraping and machine coding, which may contribute to theory development. In addition, diversity across disciplines allows for a broad picture in this research domain, but more interdisciplinarity might be needed to foster creative approaches to this study area.

Here are some thoughts:

This article is a scoping review that analyzes 80 articles focusing on morality in social media. The review aims to give researchers in different fields an overview of current research. The authors found that research in this area is generally atheoretical, conducted in the US, uses quantitative methods, and is published in business, psychology, and communication journals. The review also pointed out new methods of research analysis, like text scraping and machine coding, which could help in developing theories.
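
As a concrete example of the machine coding the review points to, scraped posts are often dictionary-coded for moral language, for instance with Moral Foundations-style lexicons. The miniature lexicon below is a hypothetical stand-in, not a validated instrument; real studies use dictionaries with far more entries per category:

```python
import re
from collections import Counter

# Hypothetical miniature moral lexicon (a validated dictionary such as
# the Moral Foundations Dictionary would have many more entries).
MORAL_TERMS = {
    "care": ["harm", "suffer", "protect", "compassion"],
    "fairness": ["fair", "unfair", "justice", "cheat"],
    "loyalty": ["loyal", "betray", "solidarity"],
}

def code_post(post: str) -> Counter:
    """Count how often each moral category's terms appear in one post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return Counter({
        category: sum(tokens.count(term) for term in terms)
        for category, terms in MORAL_TERMS.items()
    })

print(code_post("It is unfair to cheat loyal followers who only want justice."))
# Counter({'fairness': 3, 'loyalty': 1, 'care': 0})
```

Run at scale over scraped posts, counts like these give researchers a quantitative trace of moral judgment in social media that can then be tested against theory.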

Social media has rapidly become a major part of our culture, impacting almost every aspect of daily life. It provides digital spaces where people can learn socially by watching and judging the moral behaviors of others. The easy access to information about moral and immoral actions through social media can significantly influence users' moral behaviors, judgments, reasoning, emotions, and self-views. It's vital for psychologists to understand how social media affects moral reasoning, judgments, and behaviors. This understanding is key to addressing any negative impacts of social media, especially on young people, and to creating strategies that encourage positive online behavior.

Friday, April 18, 2025

A systematic review of research on empathy in health care.

Nembhard, I. M., et al. (2023).
Health Services Research, 58(2), 250–263.

Abstract

Objective
To summarize the predictors and outcomes of empathy by health care personnel, methods used to study their empathy, and the effectiveness of interventions targeting their empathy, in order to advance understanding of the role of empathy in health care and facilitate additional research aimed at increasing positive patient care experiences and outcomes.

Data Source
We searched MEDLINE, MEDLINE In‐Process, PsycInfo, and Business Source Complete to identify empirical studies of empathy involving health care personnel in English‐language publications up until April 20, 2021, covering the first five decades of research on empathy in health care (1971–2021).

Study Design
We performed a systematic review in accordance with Preferred Reporting Items for Systematic Reviews and Meta‐Analysis (PRISMA) guidelines.

Data Collection/Extraction Methods
Title and abstract screening for study eligibility was followed by full‐text screening of relevant citations to extract study information (e.g., study design, sample size, empathy measure used, empathy assessor, intervention type if applicable, other variables evaluated, results, and significance). We classified study predictors and outcomes into categories, calculated descriptive statistics, and produced tables to summarize findings.

Principal Findings
Of the 2270 articles screened, 455 reporting on 470 analyses satisfied the inclusion criteria. We found that most studies have been survey‐based, cross‐sectional examinations; greater empathy is associated with better clinical outcomes and patient care experiences; and empathy predictors are many and fall into five categories (provider demographics, provider characteristics, provider behavior during interactions, target characteristics, and organizational context). Of the 128 intervention studies, 103 (80%) found a positive and significant effect. With four exceptions, interventions were educational programs focused on individual clinicians or trainees. No organizational‐level interventions (e.g., empathy‐specific processes or roles) were identified.

Conclusions
Empirical research provides evidence of the importance of empathy to health care outcomes and identifies multiple changeable predictors of empathy. Training can improve individuals' empathy; organizational‐level interventions for systematic improvement are lacking.

Here are some thoughts:

The systematic review explores the significance of empathy in health care, analyzing its predictors, outcomes, and interventions to enhance it among health care professionals. The review, which spans 455 studies from 1971 to 2021, reveals that empathy is predominantly studied through cross-sectional, survey-based methods, with a focus on physicians, medical students, and nurses. Empathy is positively linked to better clinical outcomes, patient experiences, and provider performance, including improved adherence to treatment plans and reduced burnout. Key predictors of empathy include provider demographics, characteristics like personality traits and well-being, and behaviors such as communication skills. Educational interventions, particularly training programs and workshops, have proven effective in boosting empathy levels, though organizational-level interventions remain underexplored.

Thursday, April 17, 2025

How do clinical psychologists make ethical decisions? A systematic review of empirical research

Grace, B., Wainwright, T., et al. (2020). 
Clinical Ethics, 15(4), 213–224.

Abstract

Given the nature of the discipline, it might be assumed that clinical psychology is an ethical profession, within which effective ethical decision-making is integral. How then, does this ethical decision-making occur? This paper describes a systematic review of empirical research addressing this question. The paucity of evidence related to this question meant that the scope was broadened to include other professions who deliver talking therapies. This review could support reflective practice about what may be taken into account when making ethical decisions and highlight areas for future research. Using academic search databases, original research articles were identified from peer-reviewed journals. Articles using qualitative (n = 3), quantitative (n = 8) and mixed methods (n = 2) were included. Two theoretical models of aspects of ethical decision-making were identified. Areas of agreement and debate are described in relation to factors linked to the professional, which impacted ethical decision-making. Factors relating to ethical dilemmas, which impacted ethical decision-making, are discussed. Articles were appraised by two independent raters, using quality assessment criteria, which suggested areas of methodological strengths and weaknesses. Comparison and synthesis of results revealed that the research did not generally pertain to current clinical practice of talking therapies or the particular socio-political context of the UK healthcare system. There was limited research into ethical decision-making amongst specific professions, including clinical psychology. Generalisability was limited due to methodological issues, indicating avenues for future research.

Here are some thoughts:

This article is a systematic review of empirical research on how clinical psychologists and related professionals make ethical decisions. The review addresses the question of how professionals who deliver psychotherapy make ethical decisions related to their work. The authors searched academic databases for original research articles from peer-reviewed journals and included qualitative, quantitative, and mixed-methods studies. The review identified two theoretical models of ethical decision-making and discussed factors related to the professional and ethical dilemmas that impact decision-making. The authors found that the research did not generally pertain to current clinical practice or the socio-political context of the UK healthcare system and that there was limited research into ethical decision-making among specific professions, including clinical psychology. The authors suggest that there is a need for further up-to-date, profession-specific, mixed-methods research in this area.

Wednesday, April 16, 2025

How is clinical ethics reasoning done in practice? A review of the empirical literature

Feldman, S., Gillam, L., McDougall, R. J., & Delany, C. (2025).
Journal of Medical Ethics, jme-110569.

Abstract

Background
Clinical ethics reasoning is one of the unique contributions of clinical ethicists to healthcare, and is common to all models of clinical ethics support and methods of case analysis. Despite being a fundamental aspect of clinical ethics practice, the phenomenon of clinical ethics reasoning is not well understood. There are no formal definitions or models of clinical ethics reasoning, and it is unclear whether there is a shared understanding of this phenomenon among those who perform and encounter it.

Methods
A scoping review of empirical literature was conducted across four databases in July 2024 to capture papers that shed light on how clinical ethicists undertake or facilitate clinical ethics reasoning in practice in individual patient cases. The review process was guided by the Arksey and O’Malley framework for scoping reviews.

Results
16 publications were included in this review. These publications reveal four thinking strategies used to advance ethical thinking, and three strategies for resolving clinical ethics challenges in individual patient cases. The literature also highlights a number of other influences on clinical ethics reasoning in practice.

Conclusion
While this review has allowed us to start sketching the outlines of an account of clinical ethics reasoning in practice, the body of relevant literature is limited in quantity and in specificity. Further work is needed to better understand and evaluate the complex phenomenon of clinical ethics reasoning as it is done in clinical ethics practice.

The article is, unfortunately, paywalled. Follow the link above and contact the main author.

Here are some thoughts:

This scoping review examined how clinical ethicists undertake or facilitate clinical ethics reasoning in practice, focusing on individual patient cases. The review identified four thinking strategies used to advance ethical thinking: consideration of ethical values, principles, and concepts; consideration of empirical evidence; imaginative identification; and risk/benefit analyses. Three strategies for resolving clinical ethics challenges were also identified: time-limited trial, integrating patient values and clinical information, and perspective gathering. Other factors influencing clinical ethics reasoning included intuition, emotion, power imbalances, and the professional background of the ethicist. The authors highlight that the literature on clinical ethics reasoning is limited and further research is needed to fully understand this complex phenomenon.