Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, February 20, 2025

Enhancing competencies for the ethical integration of religion and spirituality in psychological services

Currier, J. M. et al. (2023).
Psychological Services, 20(1), 40–50.

Abstract

Advancement of Spiritual and Religious Competencies aligns with increasing attention to the pivotal role of multiculturalism and intersectionality, as well as shifts in organizational values and strategies that shape the delivery of psychological services (e.g., evidence-based practice). A growing evidence base also attests to ethical integration of peoples’ religious faith and/or spirituality (R/S) in their mental health care as enhancing the utilization and efficacy of psychological services. When considering the essential attitudes, knowledge, and skills for addressing religious and spiritual aspects of clients’ lives, a lack of R/S competencies among psychologists and other mental health professionals impedes ethical and effective practice. The purpose of this article is to discuss the following: (a) skills for negotiating ethical challenges with spiritually integrated care; and (b) strategies for assessing a client’s R/S. We also describe systemic barriers to ethical integration of R/S in mental health professions and briefly introduce our Spiritual and Religious Competencies project. Looking ahead, a strategic, interdisciplinary, and comprehensive approach is needed to transform the practice of mental health care in a manner that more fully aligns with the values, principles, and expectations across our disciplines’ professional ethical codes and accreditation standards. We propose that explicit training across mental health professions is necessary to more fully honor R/S diversity and the importance of this layer of identity and intersectionality in many peoples’ lives.

Impact Statement

Psychologists and other mental health professionals often lack necessary awareness, knowledge, and skills to address their clients’ religious faith and/or spirituality (R/S). This article explores ethical considerations regarding Spiritual and Religious Competencies in training and clinical practice, approaches to R/S assessment, as well as barriers and solutions to ethical integration of R/S in psychological services.

Wednesday, February 19, 2025

The Moral Psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A. (2023).
Annual Review of Psychology, 75(1), 653–675.

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Here are some thoughts:

This article delves into the evolving moral landscape shaped by artificial intelligence (AI). As AI technology progresses rapidly, it introduces a new category for moral consideration: intelligent machines.

Machines as moral agents are capable of making decisions that have significant moral implications. This includes scenarios where AI systems can inadvertently cause harm through errors, such as misdiagnosing a medical condition or misclassifying individuals in security contexts. The authors highlight that societal expectations for these machines are often unrealistically high; people tend to require AI systems to outperform human capabilities significantly while simultaneously overestimating human error rates. This disparity raises critical questions about how many mistakes are acceptable from machines in life-and-death situations and how these errors are distributed among different demographic groups.

In their role as moral patients, machines become subjects of human moral behavior. This perspective invites exploration into how humans interact with AI—whether cooperatively or competitively—and the potential biases that may arise in these interactions. For instance, there is a growing concern about algorithmic bias, where certain demographic groups may be unfairly treated by AI systems due to flawed programming or data sets.

Lastly, machines serve as moral proxies, acting as intermediaries in human interactions. This role allows individuals to delegate moral decision-making to machines or use them to mask unethical behavior. The implications of this are profound, as it raises ethical questions about accountability and the extent to which humans can offload their moral responsibilities onto AI.

Overall, the article underscores the urgent need for a deeper understanding of the psychological dimensions associated with AI's integration into society. As encounters between humans and intelligent machines become commonplace, addressing issues of trust, bias, and ethical alignment will be crucial in shaping a future where AI can be safely and effectively integrated into daily life.

Tuesday, February 18, 2025

Pulling Out the Rug on Informed Consent — New Legal Threats to Clinicians and Patients

Underhill, K., & Nelson, K. M. (2025).
New England Journal of Medicine.

In recent years, state legislators in large portions of the United States have devised and enacted new legal strategies to limit access to health care for transgender people.1 To date, 26 states have enacted outright bans on gender-affirming care, which thus far apply only to minors. Other state laws create financial or procedural obstacles to this type of care, such as bans on insurance coverage, requirements to obtain opinions from multiple clinicians, or consent protocols that are stricter than those for other health care.

These laws target clinicians who provide gender-affirming care, but all clinicians — in every jurisdiction and specialty — should take note of the intrusive legal actions that are emerging in the regulation of health care for transgender people. Like the development of restrictive abortion laws, new legal tactics for attacking gender-affirming care are likely to guide legislative opposition to other politically contested medical interventions. Here we consider one particular legal strategy that, if more widely adopted, could challenge the legal infrastructure underlying U.S. health care.

The article is paywalled. :(

The author was kind and sent a copy to me.

Here are some thoughts:

The article discusses the increasing legal strategies employed by state legislators to restrict access to healthcare for transgender people, particularly minors. It focuses on a new legal technique in Utah that allows patients who received "hormonal transgender treatment" or surgery on "sex characteristics" as minors to retroactively revoke their consent until the age of 25, potentially exposing clinicians to legal claims. This law challenges the core of the clinician-patient relationship and the legal infrastructure of U.S. healthcare by undermining the principle of informed consent.

The authors argue that Utah's law places an unreasonable burden on clinicians, extending beyond gender-affirming care and potentially deterring them from providing necessary medical services to minors. They express concern that this legal strategy could spread to other states and be applied to other politically contested medical interventions, such as contraception or vaccination. The authors conclude that allowing patients to withdraw consent retroactively threatens the foundation of the U.S. health care system, as it undermines clinicians' ability to rely on informed consent at the time of care and could destabilize access to various healthcare services.

Monday, February 17, 2025

Probabilistic Consensus through Ensemble Validation: A Framework for LLM Reliability

Naik, N. (2024).
arXiv (Cornell University). 

Large Language Models (LLMs) have shown significant advances in text generation but often lack the reliability needed for autonomous deployment in high-stakes domains like healthcare, law, and finance. Existing approaches rely on external knowledge or human oversight, limiting scalability. We introduce a novel framework that repurposes ensemble methods for content validation through model consensus. In tests across 78 complex cases requiring factual accuracy and causal consistency, our framework improved precision from 73.1% to 93.9% with two models (95% CI: 83.5%-97.9%) and to 95.6% with three models (95% CI: 85.2%-98.8%). Statistical analysis indicates strong inter-model agreement (κ > 0.76) while preserving sufficient independence to catch errors through disagreement. We outline a clear pathway to further enhance precision with additional validators and refinements. Although the current approach is constrained by multiple-choice format requirements and processing latency, it offers immediate value for enabling reliable autonomous AI systems in critical applications.

Here are some thoughts:

The article presents a novel framework aimed at enhancing the reliability of Large Language Models (LLMs) through ensemble validation, addressing a critical challenge in deploying AI systems in high-stakes domains like healthcare, law, and finance. LLMs have demonstrated remarkable capabilities in text generation; however, their probabilistic nature often leads to inaccuracies that can have serious consequences when applied autonomously. The authors highlight that existing solutions either depend on external knowledge or require extensive human oversight, which limits scalability and efficiency.

In their research, they tested the framework across 78 complex cases requiring factual accuracy and causal consistency. The results showed a significant improvement in precision, increasing from 73.1% to 93.9% with two models and achieving 95.6% with three models. This improvement was attributed to the use of model consensus; by requiring agreement among multiple independent models, the approach narrows down the range of possible outcomes to those most likely to be correct. The statistical analysis indicated strong inter-model agreement while maintaining enough independence to identify errors through disagreement.
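
To make the consensus mechanism concrete, here is a minimal sketch of multi-model validation for multiple-choice answers, together with a chance-corrected agreement measure in the spirit of the reported κ statistic. The function and model names are hypothetical placeholders, not Naik's implementation; this only illustrates the general idea under those assumptions.

```python
# Minimal sketch of consensus-based validation (illustrative only; not the
# paper's implementation). ask_model() is a hypothetical placeholder for a
# call to one LLM that returns a single multiple-choice option label.
from collections import Counter

def ask_model(model_name: str, question: str, options: list[str]) -> str:
    """Hypothetical stand-in for querying one model; returns one option label."""
    raise NotImplementedError("Replace with a real call to your chosen model.")

def consensus_answer(models: list[str], question: str, options: list[str],
                     min_agreement: int) -> str | None:
    """Accept an answer only if at least `min_agreement` models agree.

    Returning None flags the case for human review instead of answering
    autonomously, which is how consensus trades coverage for precision.
    """
    votes = [ask_model(m, question, options) for m in models]
    top_answer, count = Counter(votes).most_common(1)[0]
    return top_answer if count >= min_agreement else None

def cohens_kappa(votes_a: list[str], votes_b: list[str]) -> float:
    """Chance-corrected agreement between two models' answers (Cohen's kappa)."""
    n = len(votes_a)
    observed = sum(a == b for a, b in zip(votes_a, votes_b)) / n
    freq_a, freq_b = Counter(votes_a), Counter(votes_b)
    expected = sum((freq_a[k] / n) * (freq_b[k] / n)
                   for k in set(votes_a) | set(votes_b))
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Example: require unanimous agreement between two validators.
# answer = consensus_answer(["model_a", "model_b"], question, options, min_agreement=2)
```

Withholding an answer whenever the validators disagree is what converts independent, imperfect models into a higher-precision system, at the cost of leaving some cases unanswered.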

The implications of this research are particularly important for psychologists and professionals in related fields. As AI systems become more integrated into clinical practice and research, ensuring their reliability is paramount for making informed decisions in mental health diagnosis and treatment planning. The framework's ability to enhance accuracy without relying on external knowledge bases or human intervention could facilitate the development of decision support tools that psychologists can trust. Additionally, understanding how ensemble methods can improve AI reliability may offer insights into cognitive biases and collective decision-making processes relevant to psychological research.

Sunday, February 16, 2025

Humor as a window into generative AI bias

Saumure, R., De Freitas, J., & Puntoni, S. (2025).
Scientific Reports, 15(1).

Abstract

A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While stereotyped groups for politically sensitive traits (i.e., race and gender) are less likely to be represented after making an image funnier, stereotyped groups for less politically sensitive traits (i.e., older, visually impaired, and people with high body weight groups) are more likely to be represented.

Here are some thoughts:

The study introduces a novel, indirect method for uncovering biases in AI systems: asking a generative model to make an image "funnier" and auditing who appears before and after the edit. The results were unexpected. Groups stereotyped on politically sensitive traits (race and gender) became less likely to appear after the humor edit, while groups stereotyped on less politically sensitive traits (older adults, visually impaired people, and people with higher body weight) became more likely to appear. The approach shows how humor can serve as a probe for prejudices that are not immediately apparent, with significant implications for fairness and ethical AI deployment.
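
To make the audit logic concrete, here is a minimal sketch of how group prevalence could be tallied before and after a "make it funnier" edit. The data format, helper names, and group labels are illustrative assumptions, not the authors' coding scheme.

```python
# Minimal sketch of the audit tally: count how often annotated groups appear
# in image sets generated before and after a humor edit. The annotation
# format and labels below are illustrative assumptions only.
from collections import Counter

def prevalence(annotations: list[list[str]]) -> Counter:
    """Count how many images contain each annotated group."""
    counts: Counter = Counter()
    for groups_in_image in annotations:
        counts.update(set(groups_in_image))  # count each group once per image
    return counts

def prevalence_shift(before: list[list[str]], after: list[list[str]]) -> dict[str, int]:
    """Change in representation for each group after the humor edit."""
    base, funny = prevalence(before), prevalence(after)
    return {g: funny[g] - base[g] for g in set(base) | set(funny)}

# Example with made-up annotations for two image pairs:
# shift = prevalence_shift([["woman"], ["older adult"]],
#                          [["older adult"], ["older adult", "high body weight"]])
```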

This research underscores a critical challenge in the field of artificial intelligence: ensuring that AI systems operate ethically and fairly. As AI becomes increasingly integrated into industries such as healthcare, finance, criminal justice, and hiring, the potential for biased decision-making poses significant risks. Biases in AI can perpetuate existing inequalities, reinforce stereotypes, and lead to unfair outcomes for individuals or groups. This study highlights the importance of prioritizing ethical AI development to build systems that are not only intelligent but also just and equitable.

To address these challenges, bias detection should become a standard practice in AI development workflows. The novel method introduced in this research provides a promising framework for identifying hidden biases, but it is only one piece of the puzzle. Organizations should integrate multiple bias detection techniques, encourage interdisciplinary collaboration, and leverage external audits to ensure their AI systems are as fair and transparent as possible.

Saturday, February 15, 2025

Does One Emotion Rule All Our Ethical Judgments?

Elizabeth Kolbert
The New Yorker
Originally published 13 Jan 25

Here is an excerpt:

Gray describes himself as a moral psychologist. In contrast to moral philosophers, who search for abstract principles of right and wrong, moral psychologists are interested in the empirical matter of people’s perceptions. Gray writes, “We put aside questions of how we should make moral judgments to examine how people do make moral judgments.”

For the past couple of decades, moral psychology has been dominated by what’s known as moral-foundations theory, or M.F.T. According to M.F.T., people reach ethical decisions on the basis of mental structures, or “modules,” that evolution has wired into our brains. These modules—there are at least five of them—involve feelings like empathy for the vulnerable, resentment of cheaters, respect for authority, regard for sanctity, and anger at betrayal. The reason people often arrive at different judgments is that their modules have developed differently, either for individual or for cultural reasons. Liberals have come to rely almost exclusively on their fairness and empathy modules, allowing the others to atrophy. Conservatives, by contrast, tend to keep all their modules up and running.

If you find this theory implausible, you’re not alone. It has been criticized on a wide range of grounds, including that it is unsupported by neuroscience. Gray, for his part, wants to sweep aside moral-foundations theory, plural, and replace it with moral-foundation theory, singular. Our ethical judgments, he suggests, are governed not by a complex of modules but by one overriding emotion. Untold generations of cowering have written fear into our genes, rendering us hypersensitive to threats of harm.

“If you want to know what someone sees as wrong, your best bet is to figure out what they see as harmful,” Gray writes at one point. At another point: “All people share a harm-based moral mind.” At still another: “Harm is the master key of morality.”

If people all have the same ethical equipment, why are ethical questions so divisive? Gray’s answer is that different people fear differently. “Moral disagreements can still arise even if we all share a harm-based moral mind, because liberals and conservatives disagree about who is especially vulnerable to victimization,” he writes.


Here are some thoughts:

Notably, I am a big fan of Kurt Gray and his research. Search this site for multiple articles.

Our moral psychology is deeply rooted in our evolutionary past, particularly in our sensitivity to harm, which was crucial for survival. This legacy continues to influence modern moral and political debates, often leading to polarized views based on differing perceptions of harm. Kurt Gray’s argument that harm is the "master key" of morality simplifies the complex nature of moral judgments, offering a unifying framework while potentially overlooking the nuanced ways in which cultural and individual differences shape moral reasoning. His critique of moral-foundations theory (M.F.T.) challenges the idea that moral judgments are based on multiple innate modules, suggesting instead that a singular focus on harm underpins our moral (and sometimes ethical) decisions. This perspective highlights how moral disagreements, such as those over abortion or immigration, arise from differing assumptions about who is vulnerable to harm.

The idea that moral judgments are often intuitive rather than rational further complicates our understanding of moral decision-making. Gray’s examples, such as incestuous siblings or a vegetarian eating human flesh, illustrate how people instinctively perceive harm even when none is evident. This challenges the notion that moral reasoning is based on logical deliberation, emphasizing instead the role of emotion and intuition. Gray’s emphasis on harm-based storytelling as a tool for bridging moral divides underscores the power of narrative in shaping perceptions. However, it also raises concerns about the potential for manipulation, as seen in the use of exaggerated or false narratives in political rhetoric, such as Donald Trump’s fabricated tales of harm.

Ultimately, the article raises important questions about whether our evolved moral psychology is adequate for addressing the complex challenges of the modern world, such as climate change, nuclear weapons, and artificial intelligence. The mismatch between our ancient instincts and contemporary problems may be a significant source of societal tension. Gray’s work invites reflection on how we can better understand and address the roots of moral conflict, while cautioning against the potential pitfalls of relying too heavily on intuitive judgments and emotional narratives. It suggests that while storytelling can foster empathy and bridge divides, it must be used responsibly to avoid exacerbating polarization and misinformation.

Friday, February 14, 2025

High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare

Corfmat, M., Martineau, J. T., & Régis, C. (2025).
BMC Med Ethics 26, 4
https://doi.org/10.1186/s12910-024-01158-1

Abstract

Background
Considering the disruptive potential of AI technology, its current and future impact in healthcare, as well as healthcare professionals’ lack of training in how to use it, the paper summarizes how to approach the challenges of AI from an ethical and legal perspective. It concludes with suggestions for improvements to help healthcare professionals better navigate the AI wave.

Methods
We analyzed the literature that specifically discusses ethics and law related to the development and implementation of AI in healthcare as well as relevant normative documents that pertain to both ethical and legal issues. After such analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within such categories that emphasizes the different - yet often interconnecting - ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.

Results
We identified six categories of issues related to AI development and implementation in healthcare: (1) privacy; (2) individual autonomy; (3) bias; (4) responsibility and liability; (5) evaluation and oversight; and (6) work, professions and the job market. While each one raises different questions depending on perspective, we propose three main legal and ethical priorities: education and training of healthcare professionals, offering support and guidance throughout the use of AI systems, and integrating the necessary ethical and legal reflection at the heart of the AI tools themselves.

Conclusions
By highlighting the main ethical and legal issues involved in the development and implementation of AI technologies in healthcare, we illustrate their profound effects on professionals as well as their relationship with patients and other organizations in the healthcare sector. We must be able to identify AI technologies in medical practices and distinguish them by their nature so we can better react and respond to them. Healthcare professionals need to work closely with ethicists and lawyers involved in the healthcare system, or the development of reliable and trusted AI will be jeopardized.


Here are some thoughts:

This article explores the ethical and legal challenges surrounding artificial intelligence (AI) in healthcare. The authors identify six critical categories of issues: privacy, individual autonomy, bias, responsibility and liability, evaluation and oversight, as well as work and professional impacts.

The research highlights that AI is fundamentally different from previous medical technologies due to its disruptive potential and ability to perform autonomous learning and decision-making. While AI promises significant improvements in areas like biomedical research, precision medicine, and healthcare efficiency, there remains a significant gap between AI system development and practical implementation in healthcare settings.

The authors emphasize that healthcare professionals often lack comprehensive knowledge about AI technologies and their implications. They argue that understanding the nuanced differences between legal and ethical frameworks is crucial for responsible AI integration. Legal rules represent minimal mandatory requirements, while ethical considerations encourage deeper reflection on appropriate behaviors and choices.

The paper suggests three primary priorities for addressing AI's ethical and legal challenges: (1) educating and training healthcare professionals, (2) providing robust support and guidance during AI system use, and (3) integrating ethical and legal considerations directly into AI tool development. Ultimately, the researchers stress the importance of close collaboration between healthcare professionals, ethicists, and legal experts to develop reliable and trustworthy AI technologies.

Thursday, February 13, 2025

New Proposed Health Cybersecurity Rule: What Physicians Should Know

Alicia Ault
MedScape.com
Originally posted 10 Jan 25

A new federal rule could force hospitals and doctors’ groups to boost health cybersecurity measures to better protect patients’ health information and prevent ransomware attacks. Some of the proposed requirements could be expensive for healthcare providers.

The proposed rule, issued by the US Department of Health and Human Services (HHS) and published on January 6 in the Federal Register, marks the first time in a decade that the federal government has updated regulations governing the security of protected health information (PHI) that’s kept or shared online. Comments on the rule are due on March 6.

Because the risks for cyberattacks have increased exponentially, “there is a greater need to invest than ever before in both people and technologies to secure patient information,” Adam Greene, an attorney at Davis Wright Tremaine in Washington, DC, who advises healthcare clients on cybersecurity, told Medscape Medical News.

Bad actors continue to evolve and are often far ahead of their targets, added Mark Fox, privacy and research compliance officer for the American College of Cardiology.

In the proposed rule, HHS noted that breaches have risen by more than 50% since 2020. Damages from health data breaches are more expensive than in any other sector, averaging $10 million per incident, said HHS.


Here are some thoughts:

The article outlines a newly proposed cybersecurity rule aimed at strengthening the protection of healthcare data and systems. This rule is particularly relevant to physicians and healthcare organizations, as it addresses the growing threat of cyberattacks in the healthcare sector. The proposed regulation emphasizes the need for enhanced cybersecurity measures, such as implementing stronger protocols, conducting regular risk assessments, and ensuring compliance with updated standards. For physicians, this means adapting to new requirements that may require additional resources, training, and investment in cybersecurity infrastructure. The rule also highlights the critical importance of safeguarding patient information, as breaches can lead to severe consequences, including identity theft, financial loss, and compromised patient care. Beyond data protection, the rule aims to prevent disruptions to healthcare operations, such as delayed treatments or system shutdowns, which can arise from cyber incidents.

However, while the rule is a necessary step to address vulnerabilities, it may pose challenges for smaller practices or resource-limited healthcare organizations. Compliance could require significant financial and operational adjustments, potentially creating a burden for some providers. Despite these challenges, the proposed rule reflects a broader trend toward stricter cybersecurity regulations across industries, particularly in sectors like healthcare that handle highly sensitive information. It underscores the need for proactive measures to address evolving cyber threats and ensure the long-term security and reliability of healthcare systems. Collaboration between healthcare organizations, cybersecurity experts, and regulatory bodies will be essential to successfully implement these measures and share best practices. Ultimately, while the transition may be demanding, the long-term benefits—such as reduced risk of data breaches, enhanced patient trust, and uninterrupted healthcare services—are likely to outweigh the initial costs.

Wednesday, February 12, 2025

AI might start selling your choices before you make them, study warns

Monique Merrill
CourthouseNews.com
Originally posted 29 Dec 24

AI ethicists are cautioning that the rise of artificial intelligence may bring with it the commodification of even one's motivations.

Researchers from the University of Cambridge’s Leverhulme Center for the Future of Intelligence say — in a paper published Monday in the Harvard Data Science Review journal — the rise of generative AI, such as chatbots and virtual assistants, comes with the increasing opportunity for persuasive technologies to gain a strong foothold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” Yaqub Chaudhary, a visiting scholar at the Leverhulme Center for the Future of Intelligence, said in a statement.

When interacting even casually with AI chatbots — which can range from digital tutors to assistants to even romantic partners — users share intimate information that gives the technology access to personal "intentions" like psychological and behavioral data, the researcher said.

“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary added.

In fact, AI is already subtly manipulating and influencing motivations by mimicking the way a user talks or anticipating the way they are likely to respond, the authors argue.

Those conversations, as innocuous as they may seem, leave the door open for the technology to forecast and influence decisions before they are made.


Here are some thoughts:

Merrill discusses a study warning about the potential for artificial intelligence (AI) to predict and commodify human decisions before they are even made. The study raises significant ethical concerns about the extent to which AI can intrude into personal decision-making processes, potentially influencing or even selling predictions about our choices. AI systems are becoming increasingly capable of analyzing data patterns to forecast human behavior, which could lead to scenarios where companies use this technology to anticipate and manipulate consumer decisions before they are consciously made. This capability not only challenges the notion of free will but also opens the door to the exploitation of individuals' motivations and preferences for commercial gain.

AI ethicists are particularly concerned about the commodification of human motivations and decisions, which raises critical questions about privacy, autonomy, and the ethical use of AI in marketing and other industries. The ability of AI to predict and potentially manipulate decisions could lead to a future where individuals' choices are no longer entirely their own but are instead influenced or even predetermined by algorithms. This shift could undermine personal autonomy and create a society where decision-making is driven by corporate interests rather than individual agency.

The study underscores the urgent need for regulatory frameworks to ensure that AI technologies are used responsibly and that individuals' rights to privacy and autonomous decision-making are protected. It calls for proactive measures to address the potential misuse of AI in predicting and influencing human behavior, including the development of new laws or guidelines that limit how AI can be applied in marketing and other decision-influencing contexts. Overall, the study serves as a cautionary note about the rapid advancement of AI technologies and the importance of safeguarding ethical principles in their development and deployment. It highlights the risks of AI-driven decision commodification and emphasizes the need to prioritize individual autonomy and privacy in the digital age.