Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, April 23, 2024

Machines and Morality

Seth Lazar
The New York Times
Originally posted 19 June 23

Here is an excerpt:

I’ve based my philosophical work on the belief, inspired by Immanuel Kant, that humans have a special moral status — that we command respect regardless of whatever value we contribute to the world. Drawing on the work of the 20th-century political philosopher John Rawls, I’ve assumed that human moral status derives from our rational autonomy. This autonomy has two parts: first, our ability to decide on goals and commit to them; second, our possession of a sense of justice and the ability to resist norms imposed by others if they seem unjust.

Existing chatbots are incapable of this kind of integrity, commitment and resistance. But Bing’s unhinged debut suggests that, in principle, it will soon be possible to design a chatbot that at least behaves like it has the kind of autonomy described by Rawls. Every large language model optimizes for a particular set of values, written into its “developer message,” or “metaprompt,” which shapes how it responds to text input by a user. These metaprompts display a remarkable ability to affect a bot’s behavior. We could write a metaprompt that inscribes a set of values, but then emphasizes that the bot should critically examine them and revise or resist them if it sees fit. We can invest a bot with long-term memory that allows it to functionally perform commitment and integrity. And large language models are already impressively capable of parsing and responding to moral reasons. Researchers are already developing software that simulates human behavior and has some of these properties.

If the Rawlsian ability to revise and pursue goals and to recognize and resist unjust norms is sufficient for moral status, then we’re much closer than I thought to building chatbots that meet this standard. That means one of two things: either we should start thinking about “robot rights,” or we should deny that rational autonomy is sufficient for moral standing. I think we should take the second path. What else does moral standing require? I believe it’s consciousness.


Here are some thoughts:

This article explores the philosophical implications of large language models, particularly in the context of their ability to mimic human conversation and behavior. The author argues that while these models may appear autonomous, they lack the key quality of self-consciousness that is necessary for moral status. This distinction, the author argues, is crucial for determining how we should interact with and develop these technologies in the future.

This lack of self-consciousness, the author argues, means that large language models cannot truly be said to have their own goals or commitments, nor can they experience the world in a way that grounds their actions in a sense of self. As such, the author concludes that these models, despite their impressive capabilities, do not possess moral status and therefore cannot be considered deserving of the same rights or respect as humans.

The article concludes by suggesting that instead of focusing on the possibility of "robot rights," we should focus on understanding what truly makes humans worthy of moral respect. The author argues that it is self-consciousness, rather than simply simulated autonomy, that grounds our moral standing and allows us to govern ourselves and make meaningful choices about how to live our lives.
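One technical aside on the excerpt: the "metaprompt" (also called a developer or system message) is simply a block of instructions prepended to every conversation, and its wording is what inscribes the values Lazar describes. The sketch below is only my illustration of that mechanism; the prompt text and helper function are invented for the example and do not come from the article or any particular chatbot.

```python
# Illustrative sketch only: how a "metaprompt" (developer/system message)
# inscribes values that shape a chatbot's replies. The prompt wording and
# function names below are hypothetical, not taken from the article or any
# specific vendor's API.

METAPROMPT = (
    "You are an assistant guided by these values: honesty, non-maleficence, "
    "and respect for persons. Critically examine these values and say so "
    "when a request appears to conflict with them."
)

def build_messages(user_input, history=None):
    """Assemble the message list a chat-completion-style endpoint would receive.

    The metaprompt is prepended as the 'system' message, so it conditions every
    response; prior turns passed in `history` act as a crude long-term memory.
    """
    messages = [{"role": "system", "content": METAPROMPT}]
    if history:
        messages.extend(history)
    messages.append({"role": "user", "content": user_input})
    return messages

if __name__ == "__main__":
    for msg in build_messages("Should I follow a rule I believe is unjust?"):
        print(f"{msg['role']}: {msg['content'][:60]}")
```

A fuller system would implement the long-term memory Lazar mentions with something richer than a `history` list (for example, retrieval over stored commitments), but the basic value-shaping mechanism is just this prepended text.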

Monday, April 22, 2024

Union accuses Kaiser of violations months after state fine on mental health care

Emily Alpert Reyes
Los Angeles Times
Originally posted 9 April 24

Months after Kaiser Permanente reached a sweeping agreement with state regulators to improve its mental health services, the healthcare giant is facing union allegations that patients could be improperly losing such care.

The National Union of Healthcare Workers, which represents thousands of Kaiser mental health professionals, complained earlier this year to state regulators that Kaiser appeared to be inappropriately handing off decisions about whether therapy is still medically necessary.

The union alleged that Rula Health, a contracted network of therapists that Kaiser uses to provide virtual care to its members, had been directed by Kaiser to use “illegal criteria” to make those decisions during regular reviews.


Here are my thoughts:

Kaiser Permanente is facing accusations from the National Union of Healthcare Workers (NUHW) that it is still violating mental health care laws, even after a recent $200 million settlement with the California Department of Managed Health Care (DMHC) over its mismanagement of behavioral health benefits.

The union alleges that Kaiser is inappropriately delegating decisions about the medical necessity of therapy during regular reviews to a contracted network of therapists, Rula Health, who are using "illegal criteria" to make these decisions instead of professional group criteria as required by California law.

The union claims this is resulting in patients with psychological disorders being unfairly denied continued access to necessary treatment.  Furthermore, the union argues that the frequent clinical care reviews Kaiser is subjecting mental health patients to violate laws prohibiting insurers from erecting more barriers to mental healthcare than for other health conditions.  Importantly, Kaiser does not subject other outpatient care to such reviews.

The DMHC has confirmed it is examining the issues raised by the union under the recent $200 million settlement agreement, which required Kaiser to pay a $50 million fine and invest $150 million over five years to improve its mental healthcare.  The settlement came after the DMHC's investigation found several deficiencies in Kaiser's provision of behavioral health services, including long delays for patients trying to schedule appointments and a failure to contract enough high-level behavioral care facilities.

Kaiser has stated that it does not limit the number of therapy sessions and that decisions on the level and frequency of therapy are made by providers in consultation with patients based on clinical needs.  However, the union maintains that Kaiser's actions are still violating mental health parity laws.

Sunday, April 21, 2024

An Expert Who Has Testified in Foster Care Cases Across Colorado Admits Her Evaluations Are Unscientific

Eli Hager
ProPublica
Originally posted 18 March 24

Diane Baird had spent four decades evaluating the relationships of poor families with their children. But last May, in a downtown Denver conference room, with lawyers surrounding her and a court reporter transcribing, she was the one under the microscope.

Baird, a social worker and professional expert witness, has routinely advocated in juvenile court cases across Colorado that foster children be adopted by or remain in the custody of their foster parents rather than being reunified with their typically lower-income birth parents or other family members.

In the conference room, Baird was questioned for nine hours by a lawyer representing a birth family in a case out of rural Huerfano County, according to a recently released transcript of the deposition obtained by ProPublica.

Was Baird’s method for evaluating these foster and birth families empirically tested? No, Baird answered: Her method is unpublished and unstandardized, and has remained “pretty much unchanged” since the 1980s. It doesn’t have those “standard validity and reliability things,” she admitted. “It’s not a scientific instrument.”

Who hired and was paying her in the case that she was being deposed about? The foster parents, she answered. They wanted to adopt, she said, and had heard about her from other foster parents.

Had she considered or was she even aware of the cultural background of the birth family and child whom she was recommending permanently separating? (The case involved a baby girl of multiracial heritage.) Baird answered that babies have “never possessed” a cultural identity, and therefore are “not losing anything,” at their age, by being adopted. Although when such children grow up, she acknowledged, they might say to their now-adoptive parents, “Oh, I didn’t know we were related to the, you know, Pima tribe in northern California, or whatever the circumstances are.”

The Pima tribe is located in the Phoenix metropolitan area.


Here is my summary:

The article reports that Diane Baird, an expert witness who has testified in foster care cases across Colorado, admitted in a deposition that her evaluations are unscientific. Baird, who has spent four decades evaluating the relationships of poor families with their children, calls her assessment method the "Kempe Protocol," yet acknowledged it is unpublished, unstandardized, and lacks established validity and reliability. This admission raises concerns about the soundness of her evaluations in foster care cases and highlights the need for more rigorous, scientific approaches to such consequential assessments.

Saturday, April 20, 2024

The Dark Side of AI in Mental Health

Michael DePeau-Wilson
MedPage Today
Originally posted 11 April 24

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company's advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions "to better understand the format, topics, and treatment associated with modern mental healthcare."

Their stated goal was "to ultimately provide mental healthcare to more people at a lower cost," according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.


Here is a summary:

The article highlights several ethical concerns surrounding the use of AI in mental health care:

The lack of patient consent and privacy protections when companies collect sensitive mental health data to train AI models. For example, the nonprofit Koko used OpenAI's GPT-3 to experiment with online mental health support without proper consent protocols.

The issue of companies sharing patient data without authorization, as seen with the Crisis Text Line platform, which led to significant backlash from users.

The clinical risks of relying solely on AI-powered chatbots for mental health therapy, rather than having human clinicians involved. Experts warn this could be "irresponsible and ultimately dangerous" for patients dealing with complex, serious conditions.

The potential for unethical "cash-for-data" schemes, such as the Therapy For All company that sought to obtain recorded therapy sessions without proper consent, in order to train AI models.

Friday, April 19, 2024

Physicians, Spirituality, and Compassionate Patient Care

Daniel P. Sulmasy
The New England Journal of Medicine
March 16, 2024
DOI: 10.1056/NEJMp2310498

Mind, body, and soul are inseparable. Throughout human history, healing has been regarded as a spiritual event. Illness (especially serious illness) inevitably raises questions beyond science: questions of a transcendent nature. These are questions of meaning, value, and relationship.1 They touch on perennial and profoundly human enigmas. Why is my child sick? Do I still have value now that I am no longer a "productive" working member of society? Why does brokenness in my body remind me of the brokenness in my relationships? Or conversely, why does brokenness in relationships so profoundly affect my body?

Historically, most people have turned to religious belief and practice to help answer such questions. Yet they arise for people of all religions and of no religion. These questions can aptly be called spiritual.

Whereas spirituality may be defined as the ways people live in relation to transcendent questions of meaning, value, and relationship, a religion involves a community of belief, texts, and practices sharing a common orientation toward these spiritual questions. The decline of religious belief and practice in Europe and North America over recent decades and a perceived conflict between science and religion have led many physicians to dismiss patients' spiritual and religious concerns as not relevant to medicine. Yet religion and spirituality are associated with a number of health care outcomes. Abundant data show that patients want their physicians to help address their spiritual needs, and that patients whose spiritual needs have been met are aided in making difficult decisions (particularly at the end of life), are more satisfied with their care, and report better quality of life.2 ... Spiritual questions pervade all aspects of medical care, whether addressing self-limiting, chronic, or life-threatening conditions, and whether in inpatient or outpatient settings.

Beyond the data, however, many medical ethicists recognize that the principles of beneficence and respect for patients as whole persons require physicians to do more than attend to the details of physiological and anatomical derangements. Spirituality and religion are essential to many patients' identities as persons. Patients (and their families) experience illness, healing, and death as whole persons. Ignoring the spiritual aspects of their lives and identities is not respectful, and it divorces medical practice from a fundamental mode of patient experience and coping. Promoting the good of patients requires attention to their notion of the highest good. 


Here is my summary:

The article discusses the interconnectedness of mind, body, and soul in the context of healing and spirituality. It highlights how illness raises questions beyond science, touching on meaning, value, and relationships. While historically people turned to religious beliefs for answers, these spiritual questions are relevant to individuals of all faiths or no faith. The decline of religious practice in some regions has led to a dismissal of spiritual concerns in medicine, despite evidence showing the impact of spirituality on health outcomes. Patients desire their physicians to address their spiritual needs as it influences decision-making, satisfaction with care, and quality of life. Medical ethics emphasize the importance of considering patients as whole persons, including their spiritual identities. Physicians are encouraged to inquire about patients' spiritual needs respectfully, even if they do not share the same beliefs.

Thursday, April 18, 2024

An artificial womb could build a bridge to health for premature babies

Rob Stein
npr.org
Originally posted 12 April 24

Here is an excerpt:

Scientific progress prompts ethical concerns

But the possibility of an artificial womb is also raising many questions. When might it be safe to try an artificial womb for a human? Which preterm babies would be the right candidates? What should they be called? Fetuses? Babies?

"It matters in terms of how we assign moral status to individuals," says Mercurio, the Yale bioethicist. "How much their interests — how much their welfare — should count. And what one can and cannot do for them or to them."

But Mercurio is optimistic those issues can be resolved, and the potential promise of the technology clearly warrants pursuing it.

The Food and Drug Administration held a workshop in September 2023 to discuss the latest scientific efforts to create an artificial womb, the ethical issues the technology raises, and what questions would have to be answered before allowing an artificial womb to be tested for humans.

"I am absolutely pro the technology because I think it has great potential to save babies," says Vardit Ravitsky, president and CEO of The Hastings Center, a bioethics think tank.

But there are particular issues raised by the current political and legal environment.

"My concern is that pregnant people will be forced to allow fetuses to be taken out of their bodies and put into an artificial womb rather than being allowed to terminate their pregnancies — basically, a new way of taking away abortion rights," Ravitsky says.

She also wonders: What if it becomes possible to use artificial wombs to gestate fetuses for an entire pregnancy, making natural pregnancy unnecessary?


Here are some general ethical concerns:

The use of artificial wombs raises several ethical and moral concerns. One key issue is the potential for artificial wombs to be used to extend the limits of fetal viability, which could complicate debates around abortion access and the moral status of the fetus. There are also concerns that artificial wombs could enable "designer babies" through genetic engineering and lead to the commodification of human reproduction. Additionally, some argue that developing a baby outside of a woman's uterus is inherently "unnatural" and could undermine the maternal-fetal bond.

 However, proponents contend that artificial wombs could save the lives of premature infants and provide options for women with high-risk pregnancies.  

 Ultimately, the ethics of artificial womb technology will require careful consideration of principles like autonomy, beneficence, and justice as this technology continues to advance.

Wednesday, April 17, 2024

Do Obligations Follow the Mind or Body?

Protzko, J., Tobia, K., Strohminger, N., 
& Schooler, J. W. (2023).
Cognitive Science, 47(7).

Abstract

Do you persist as the same person over time because you keep the same mind or because you keep the same body? Philosophers have long investigated this question of personal identity with thought experiments. Cognitive scientists have joined this tradition by assessing lay intuitions about those cases. Much of this work has focused on judgments of identity continuity. But identity also has practical significance: obligations are tagged to one's identity over time. Understanding how someone persists as the same person over time could provide insight into how and why moral and legal obligations persist. In this paper, we investigate judgments of obligations in hypothetical cases where a person's mind and body diverge (e.g., brain transplant cases). We find a striking pattern of results: In assigning obligations in these identity test cases, people are divided among three groups: “body-followers,” “mind-followers,” and “splitters”—people who say that the obligation is split between the mind and the body. Across studies, responses are predicted by a variety of factors, including mind/body dualism, essentialism, education, and professional training. When we give this task to professional lawyers, accountants, and bankers, we find they are more inclined to rely on bodily continuity in tracking obligations. These findings reveal not only the heterogeneity of intuitions about identity but how these intuitions relate to the legal standing of an individual's obligations.

My summary:

Philosophers have long grappled with whether our obligations follow the mind or the body, often framing the issue in terms of what defines us as individuals. This research investigates the question through thought experiments such as brain transplants. Interestingly, people's intuitions diverge: some believe obligations stay with the physical body, so the body's original owner remains responsible; others place responsibility with the transplanted mind; and a third group holds that obligations are somehow split between mind and body. The research also suggests that where we land on this question is shaped by our beliefs about the mind-body connection and even by our profession.

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the U.N. regarding lethal autonomous weapons (LAWs): weapons, such as AI-controlled drones, that can select and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage of qualified human therapists, makes the proposal of chatbot therapists a timely, and perhaps viable, alternative. However, we need to be cautious about hasty implementations of such an alternative. For instance, recent news has reported grave incidents involving chatbot-human interactions. Walker (2023) reports the death of an eco-anxious man who committed suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potentially maleficent effect of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.