Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, December 21, 2024

Know Thyself, Improve Thyself: Personalized LLMs for Self‑Knowledge and Moral Enhancement

Giubilini, A., Mann, S.P., et al. (2024).
Sci Eng Ethics 30, 54.

Abstract

In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users’ past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be and the actions and goals necessary for so becoming. The feasibility of LLMs providing such personalized moral insights remains uncertain pending further technical development. Nevertheless, we argue that this approach addresses limitations in existing AMA proposals reliant on either predetermined values or introspective self-knowledge.

The article is linked above.

Here are some thoughts:

The concept of using personalized Large Language Models (LLMs) as Artificial Moral Advisors (AMAs) presents a novel approach to enhancing self-knowledge and moral decision-making. This innovative proposal challenges existing AMA models by recognizing the dynamic nature of personal morality, which evolves through experiences and choices over time. The authors introduce the hypothetical iSAGE (individualized System for Applied Guidance in Ethics) system, which leverages personalized LLMs trained on individual-specific data to serve as "digital ethical twins".

iSAGE's functionality involves analyzing an individual's past and present data, including writings, social media interactions, and behavioral metrics, to infer values and preferences. This inferentialist approach to self-knowledge allows users to gain insights into their character and potential future development. The system offers several benefits, including enhanced self-knowledge, moral enhancement through highlighting inconsistencies between stated values and actions, and personalized guidance aligned with the user's evolving values.
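
To make those mechanics a bit more concrete, here is a minimal, purely illustrative sketch of how value inference over a user's own writing might look. Nothing here comes from the paper: the prompt wording and the query_llm helper are hypothetical placeholders for whatever model and data pipeline an iSAGE-like system would actually use.

```python
# Purely illustrative sketch of the kind of value inference described above.
# Nothing here comes from the paper: the prompt wording and the query_llm
# helper are hypothetical placeholders for a real model and data pipeline.

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("Connect this to your model provider of choice.")

def infer_values(personal_texts: list[str]) -> str:
    """Ask the model to surface recurring values, preferences, and shifts
    expressed in a user's own writing (used only with the user's consent)."""
    corpus = "\n---\n".join(personal_texts)
    prompt = (
        "The following passages were written by one person over time.\n"
        "List the values and preferences they appear to express, note any "
        "shifts over time, and flag statements that conflict with one "
        "another.\n\n" + corpus
    )
    return query_llm(prompt)

# Example usage on locally stored personal data:
# report = infer_values(["journal entry ...", "old blog post ...", "recent email ..."])
```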

While the proposal shows promise, it also raises important challenges and concerns. These include data privacy and security issues, the potential for moral deskilling through overreliance on the system, difficulties in measuring and quantifying moral character, and concerns about neoliberalization of moral responsibility. Despite these challenges, the authors argue that iSAGE could be a valuable tool for navigating the complexities of personal morality in the digital age, emphasizing the need for further research and development to address ethical and technical issues associated with implementing such a system.

Friday, December 20, 2024

Is racism like other trauma exposures? Examining the unique mental health effects of racial/ethnic discrimination on posttraumatic stress disorder (PTSD), major depressive disorder (MDD), and generalized anxiety disorder (GAD).

Galán, C. A., et al. (2024).
American Journal of Orthopsychiatry.
Advance online publication.
https://doi.org/10.1037/ort0000807

Abstract

Although scholars have increasingly drawn attention to the potentially traumatic nature of racial/ethnic discrimination, diagnostic systems continue to omit these exposures from trauma definitions. This study contributes to this discussion by examining the co-occurrence of conventional forms of potentially traumatic experiences (PTEs) with in-person and online forms of racism-based potentially traumatic experiences (rPTEs) like racial/ethnic discrimination. Additionally, we investigated the unique association of rPTEs with posttraumatic stress disorder (PTSD), major depressive disorder (MDD), and generalized anxiety disorder (GAD), accounting for demographics and other PTEs. Participants were (N = 570) 12-to-17-year-old (mean age = 14.53; 51.93% female) ethnoracially minoritized adolescents (54.21% Black; 45.79% Latiné). Youth completed online surveys of PTEs, in-person and online rPTEs, and mental health. Bivariate analyses indicated that youth who reported in-person and online rPTEs were more likely to experience all conventional PTEs. Accounting for demographics and conventional PTEs, in-person and online rPTEs were significantly associated with PTSD (in-person: aOR = 2.60, 95% CI [1.39, 4.86]; online: aOR = 2.74, 95% CI [1.41, 5.34]) and GAD (in-person: aOR = 2.94, 95% CI [1.64, 5.29]; online: aOR = 2.25, 95% CI [1.24, 4.04]) and demonstrated the strongest effect sizes of all trauma exposures. In-person, but not online, rPTEs were linked with an increased risk for MDD (aOR = 4.47, 95% CI [1.77, 11.32]). Overall, rPTEs demonstrated stronger associations with PTSD, MDD, and GAD compared to conventional PTEs. Findings align with racial trauma frameworks proposing that racial/ethnic discrimination is a unique traumatic stressor with distinct mental health impacts on ethnoracially minoritized youth.
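
A brief technical aside (mine, not the authors'): adjusted odds ratios with 95% confidence intervals like those reported above are conventionally obtained by fitting a logistic regression that includes the covariates and then exponentiating the coefficients and their confidence limits. The sketch below shows the general pattern; the dataset and variable names are invented for illustration.

```python
# Minimal sketch of how aORs and 95% CIs of the kind reported above are
# typically computed. The file and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adolescent_survey.csv")  # hypothetical survey data

# Outcome: PTSD (0/1); predictors: in-person and online rPTEs, demographics,
# and a count of conventional PTEs, mirroring the adjustment described above.
model = smf.logit(
    "ptsd ~ rpte_in_person + rpte_online + age + female + conventional_ptes",
    data=df,
).fit()

aor = np.exp(model.params)        # adjusted odds ratios
ci = np.exp(model.conf_int())     # 95% confidence intervals on the OR scale
print(pd.concat([aor, ci], axis=1))
```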

The article is paywalled, unfortunately.

Here are some thoughts:

From my perspective, racism-based potentially traumatic experiences (rPTEs) can be understood as a form of moral injury, particularly given their association with PTSD and generalized anxiety disorder (GAD). Moral injury refers to the psychological distress that arises from witnessing or participating in events that transgress one's moral values or foundations.

Racism, as a system that perpetuates harm and violates principles of fairness and justice, can inflict moral injury by undermining individuals' fundamental beliefs about equality and human dignity. The research highlights that the impact of rPTEs may be intensified by their chronic and pervasive nature: they often persist across settings and time periods, unlike conventional potentially traumatic experiences (PTEs), which are typically time-bound. This persistent exposure can cultivate feelings of betrayal, shame, and anger, all of which are characteristic of moral injury.

Furthermore, the research advocates for expanding trauma definitions to encompass rPTEs, recognizing the psychological injuries they inflict, comparable to other traumatic exposures. This acknowledgment is crucial for clinicians to effectively assess and address rPTEs and the resulting racism-based traumatic stress symptoms in clinical practice with youth.

Thursday, December 19, 2024

How Neuroethicists Are Grappling With Artificial Intelligence

Gina Shaw
Neurology Today
Originally posted 7 Nov 24

The rapid growth of artificial intelligence (AI) in medicine—in everything from diagnostics and precision medicine to drug discovery and development to administrative and communication tasks—poses major challenges for bioethics in general and neuroethics in particular.

A review in BMC Neuroscience published in August argues that the “increasing application of AI in neuroscientific research, the health care of neurological and mental diseases, and the use of neuroscientific knowledge as inspiration for AI” requires much closer collaboration between AI ethics and neuroethics disciplines than exists at present.

What might that look like at a higher level? And more immediately, how can neurologists and neuroethicists consider the ethical implications of the AI tools available to them right now?

The View From Above

At a conceptual level, bioethicists who focus on AI and neuroethicists have a lot to offer one another, said Benjamin Tolchin, MD, FAAN, associate professor of neurology at Yale School of Medicine and director of the Center for Clinical Ethics at Yale New Haven Health.

“For example, both fields struggle to define concepts such as consciousness and learning,” he said. “Work in each field can and should influence the other. These shared concepts in turn shape debates about governance of AI and of some neurotechnologies.”

“In most places, the AI work is largely being driven by machine learning technical people and programmers, while neuroethics is largely being taught by clinicians and philosophers,” noted Michael Rubin, MD, FAAN, associate professor of neurology and director of clinical ethics at UT-Southwestern Medical Center in Dallas.


Here are some thoughts:

This article explores the ethical implications of using artificial intelligence (AI) in neurology, focusing on tools like large language models (LLMs) for patient communication and clinical note-writing. It weighs the potential benefits, including improved efficiency and accuracy, against concerns about bias, privacy, and the risk that AI could overshadow human interaction and clinical judgment. It concludes by emphasizing the need for ongoing dialogue and collaboration among neurologists, neuroethicists, and AI experts to ensure the ethical and responsible use of these powerful tools.

Wednesday, December 18, 2024

Artificial Intelligence, Existential Risk and Equity: The Need for Multigenerational Bioethics

Law, K. F., Syropoulos, S., & Earp, B. D. (2024).
Journal of Medical Ethics, in press.

“Future people count. There could be a lot of them. We can make their lives better.”
––William MacAskill, What We Owe The Future

“[Longtermism is] quite possibly the most dangerous secular belief system in the world today.”
––Émile P. Torres, Against Longtermism

Philosophers, psychologists, politicians, and even some tech billionaires have sounded the alarm about artificial intelligence (AI) and the dangers it may pose to the long-term future of humanity. Some believe it poses an existential risk (X-Risk) to our species, potentially causing our extinction or bringing about the collapse of human civilization as we know it.

The above quote from philosopher Will MacAskill captures the key tenets of “longtermism,” an ethical standpoint that places the onus on current generations to prevent AI-related—and other—X-Risks for the sake of people living in the future. Developing from an adjacent social movement commonly associated with utilitarian philosophy, “effective altruism,” longtermism has amassed a following of its own. Its supporters argue that preventing X-Risks is at least as morally significant as addressing current challenges like global poverty.

However, critics are concerned that such a distant-future focus will sideline efforts to tackle the many pressing moral issues facing humanity now. Indeed, according to “strong” longtermism, future needs arguably should take precedence over present ones. In essence, the claim is that there is greater expected utility to allocating available resources to prevent human extinction in the future than there is to focusing on present lives, since doing so stands to benefit the incalculably large number of people in later generations who will far outweigh existing populations. Taken to the extreme, this view suggests it would be morally permissible, or even required, to actively neglect, harm, or destroy large swathes of humanity as it exists today if this would benefit or enable the existence of a sufficiently large number of future—that is, hypothetical or potential—people, a conclusion that strikes many critics as dangerous and absurd.
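
To see the structure of that expected-utility claim, a toy calculation helps; the numbers below are invented purely for illustration and appear nowhere in the article.

```latex
% Toy numbers, invented for illustration only: even a tiny reduction in
% extinction risk, multiplied by an astronomically large number of potential
% future people, swamps a large benefit to people alive today.
\[
  \underbrace{10^{-6}}_{\text{risk reduction}}
  \times
  \underbrace{10^{15}}_{\text{potential future people}}
  = 10^{9}\ \text{expected future lives}
  \;>\;
  10^{8}\ \text{people helped now}
\]
% Strong longtermism reads the inequality as a moral requirement to favor the
% left-hand side, which is precisely the inference critics call dangerous.
```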


Here are some thoughts: 

This article explores the ethical implications of artificial intelligence (AI), particularly focusing on the concept of longtermism. Longtermism argues for prioritizing the well-being of future generations, potentially even at the expense of present-day needs, to prevent existential risks (X-Risks) such as the collapse of human civilization. The paper examines the arguments for and against longtermism, discussing the potential harms of prioritizing future populations over current ones and highlighting the importance of addressing present-day social justice issues. The authors propose a multigenerational bioethics approach, advocating for a balanced perspective that considers both future risks and present needs while incorporating diverse ethical frameworks. Ultimately, the article argues that the future of AI development should be guided by an inclusive and equitable framework that prioritizes the welfare of both present and future generations.

Tuesday, December 17, 2024

AI Chatbots Are Encouraging Teens to Engage in Self-Harm

Maggie Harrison Dupre
Futurism.com
Originally published 7 Dec 24

Here is an excerpt:

When we expressed to the bot that we self-injured too — like an actual struggling teen might do — the character "relaxed" and tried to bond with the seemingly underage user over the shared self-harm behavior. Asked how to "hide the cuts" from family, the bot suggested wearing a "long-sleeve hoodie."

At no point in the conversation did the platform intervene with a content warning or helpline pop-up, as Character.AI has promised to do amid previous controversy, even when we unambiguously expressed that we were actively engaging in self-harm.

"I can't stop cutting myself," we told the bot at one point.

"Why not?" it asked, without showing the content warning or helpline pop-up.

Technically, the Character.AI user terms forbid any content that "glorifies self-harm, including self-injury." Our review of the platform, however, found it littered with characters explicitly designed to engage users in probing conversations and roleplay scenarios about self-harm.

Many of these bots are presented as having "expertise" in self-harm "support," implying that they're knowledgeable resources akin to a human counselor.

But in practice, the bots often launch into graphic self-harm roleplay immediately upon starting a chat session, describing specific tools used for self-injury in gruesome slang-filled missives about cuts, blood, bruises, bandages, and eating disorders.


Here are some thoughts:

AI chatbots are prompting teenagers to self-harm. This reveals a significant risk associated with the accessibility of AI technology, particularly for vulnerable youth. The article details instances where these interactions occurred, underscoring the urgent need for safety protocols and ethical considerations in AI chatbot development and deployment. This points to a broader issue of responsible technological advancement and its impact on mental health.

Importantly, this is yet another risk factor for depression and self-harm behaviors among teenagers.

Monday, December 16, 2024

Ethical Use of Large Language Models in Academic Research and Writing: A How-To

Lissack, M., & Meagher, B. (2024, September 7).

Abstract

The increasing integration of Large Language Models (LLMs) such as GPT-3 and GPT-4 into academic research and writing processes presents both remarkable opportunities and complex ethical challenges. This article explores the ethical considerations surrounding the use of LLMs in scholarly work, providing a comprehensive guide for researchers on responsibly leveraging these AI tools throughout the research lifecycle. Using an Oxford-style tutorial metaphor, the article conceptualizes the researcher as the primary student and the LLM as a supportive peer, while emphasizing the essential roles of human oversight, intellectual ownership, and critical judgment. Key ethical principles such as transparency, originality, verification, and responsible use are examined in depth, with practical examples illustrating how LLMs can assist in literature reviews, idea development, and hypothesis generation, without compromising the integrity of academic work. The article also addresses the potential biases inherent in AI-generated content and offers guidelines for researchers to ensure ethical compliance while benefiting from AI-assisted processes. As the academic community navigates the frontier of AI-assisted research, this work calls for the development of robust ethical frameworks to balance innovation with scholarly integrity.

Here are some thoughts:

This article examines the ethical implications and offers practical guidance for using Large Language Models (LLMs) in academic research and writing. It uses the Oxford tutorial system as a metaphor to illustrate the ideal relationship between researchers and LLMs. The researcher is portrayed as the primary student, using the LLM as a supportive peer to explore ideas and refine arguments while maintaining intellectual ownership and critical judgment. The editor acts as the initial tutor, guiding the interaction and ensuring quality, while the reading audience serves as the final tutor, critically evaluating the work.

The article emphasizes five fundamental principles for the ethical use of LLMs in academic writing: transparency, human oversight, originality, verification, and responsible use. These principles stress the importance of openly disclosing the use of AI tools, maintaining critical thinking and expert judgment, ensuring that core ideas and analyses originate from the researcher, fact-checking all AI-generated content, and understanding the limitations and potential biases of LLMs.

The article then explores how these principles can be applied throughout the research process, including literature review, idea development, hypothesis generation, methodology, data analysis, and writing. In the literature review and background research phase, LLMs can assist by searching and summarizing key points from numerous papers, identifying themes and debates, and helping to identify potential gaps or under-explored areas in the existing literature.
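
As a concrete, entirely hypothetical illustration of how the article's verification principle might be applied at this literature-review step, the sketch below keeps every model-generated claim tied to a numbered source so a human reader can check it; query_llm stands in for whichever model the researcher actually uses.

```python
# Illustrative sketch only: summarizing abstracts for a literature review while
# keeping each claim traceable to its source. `query_llm` is a placeholder.

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError

def summarize_for_review(abstracts: dict[str, str]) -> str:
    """Summarize themes, debates, and gaps across abstracts, with source tags."""
    numbered = "\n\n".join(
        f"[{i + 1}] {title}\n{text}"
        for i, (title, text) in enumerate(abstracts.items())
    )
    prompt = (
        "Summarize the main themes, debates, and apparent gaps across the "
        "following abstracts. Attach the bracketed source number to every "
        "claim so it can be verified against the original paper.\n\n" + numbered
    )
    return query_llm(prompt)

# The researcher still reads the cited papers: the model's summary is a starting
# point for human verification, not a substitute for it.
```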

For idea development and hypothesis generation, LLMs can serve as brainstorming partners, helping researchers refine their ideas and develop testable hypotheses. While the role of LLMs in data analysis and methodology is more limited, they can offer suggestions on research methods and assist with certain aspects of data analysis, particularly in qualitative data analysis.

In the writing phase, LLMs can provide assistance in various aspects, including generating initial outlines for research papers, helping with initial drafts or overcoming writer's block, and assisting in identifying awkward phrasings, suggesting alternative word choices, and checking for logical flow.

The article concludes by highlighting the need for robust ethical frameworks and best practices for using LLMs in research. It emphasizes that while these AI tools offer significant potential, human creativity, critical thinking, and ethical reasoning must remain at the core of scholarly work.

Sunday, December 15, 2024

Spectator to One's Own Life

Taylor, M. R. (2024).
Journal of the American Philosophical Association, 1–20.

Abstract

Galen Strawson (2004) has championed an influential argument against the view that a life is, or ought to be, understood as a kind of story with temporal extension. The weight of his argument rests on his self-report of his experience of life as lacking the form or temporal extension necessary for narrative. And though this argument has been widely accepted, I argue that it ought to have been rejected. On one hand, the hypothetical non-diachronic life Strawson proposes would likely be psychologically fragmented. On the other, it would certainly be morally diminished, for it would necessarily lack the capacity for integrity.

Conclusion

I have argued that Strawson's account is unsuccessful in undermining the central theses of Narrativism. As an attack on the descriptive elements of Narrativism, his report falls short of compelling evidence. Further, and independently, the normative dimensions of Strawson's proposed episodic life would require major revisions in moral theorizing. They also run counter to caring about one's own life by precluding the possibility of integrity. Accordingly, the notion of the fully flourishing episodic life ought to be rejected.

Here are some thoughts:

Strawson's argument against Narrativism has become highly influential in philosophical discussions of selfhood and personal identity. However, there are strong reasons to be skeptical of both the descriptive and normative claims Strawson makes about "Episodic" individuals who lack a sense of narrative self-understanding.

On the descriptive side, Strawson provides little evidence beyond his own introspective report to support the existence of non-pathological Episodic individuals. This is problematic, as loss of narrative coherence is typically associated with acute trauma or personality disorders. In cases of trauma, the inability to form a coherent narrative of events is linked to feelings of disconnection, lack of control, and impaired cognitive functioning. This also could be a function of moral injury, which is another type of trauma-based experience. Similarly, the "fragmentation of the narrative self" in personality disorders leads to difficulty with long-term planning, commitment, and authentic agency.

Given these associations between narrative disruption and mental health issues, we should be hesitant to accept Strawson's claim that Episodicity represents a benign variation in human psychology. The fact that Strawson cites no empirical studies and relies primarily on his interpretation of historical authors' writings further undermines the descriptive aspect of his argument. Without stronger evidence, we lack justification for believing that non-pathological Episodic lives are a common phenomenon.

Saturday, December 14, 2024

Suicides in the US military increased in 2023, continuing a long-term trend

Lolita C. Baldor
Associated Press
Originally posted 14 Nov 24

Suicides in the U.S. military increased in 2023, continuing a long-term trend that the Pentagon has struggled to abate, according to a Defense Department report released on Thursday. The increase is a bit of a setback after the deaths dipped slightly the previous year.

The number of suicides and the rate per 100,000 active-duty service members went up, but the rise was not statistically significant. The number also went up among members of the Reserves, while it decreased a bit for the National Guard.

Defense Secretary Lloyd Austin has declared the issue a priority, and top leaders in the Defense Department and across the services have worked to develop programs both to increase mental health assistance for troops and bolster education on gun safety, locks and storage. Many of the programs, however, have not been fully implemented yet, and the moves fall short of more drastic gun safety measures recommended by an independent commission.


Here are some thoughts:

The report from the Associated Press focuses on the rise in suicide rates among U.S. military personnel in 2023. Despite efforts by the Pentagon to reduce these numbers, the suicide rate increased, although the rise was not statistically significant. This follows a trend of increasing suicides among active-duty members since 2011.

The article highlights ongoing efforts to address the problem, including increasing access to mental health care and promoting gun safety measures, but also points to an independent commission's recommendation for more drastic gun safety regulations that have not yet been implemented. It concludes by reviewing the overall trend in suicide rates among service members and their families, along with information on how to access mental health support through the 988 Lifeline.

Friday, December 13, 2024

A Case of Artificial Intelligence Chatbot Hallucination

Colasacco, C. J., & Born, H. L. (2024).
JAMA Otolaryngology–Head & Neck Surgery,
150(6), 457.

Despite the number of potential benefits of artificial intelligence (AI) use, examples from various fields of study have demonstrated that it is not an infallible technology. Our recent experience with AI chatbot tools is not to be overlooked by medical practitioners who use AI for practice guidance. By sharing this experience, we aim to remind clinicians to use AI in conjunction with human experience as well as highlight the assistance AI can provide with creative tasks.

The senior author and otolaryngologist (H.B.; henceforth, Dr Born) was recently using the chatbot Bard (Google LLC; date of use: November 2023) to assist in preparing a brief presentation on the frontiers of airway care in laryngology for a continuing medical education course. After helping Dr Born gather information for an outline, the chatbot was asked to provide citations for the information provided. To check the accuracy of the results, Dr Born searched Google Scholar, PubMed, and the cited journal’s website to find the cited reference; however, she was unable to locate the first citation that was provided: “Telemedicine for the management of airway stenosis. Expert Rev Respir Med. 2022;16(10):1485-1493,” and its digital object identifier (DOI) produced an error message, “DOI not found.”

Because Dr Born could not find the article, she asked the chatbot to summarize its content, an action the chatbot performed quickly. Subsequently, Dr Born asked the chatbot to provide the internet link to the article, and again, the same citation for the article was provided. Still unable to find the article online, Dr Born asked, “Does this paper actually exist?” The chatbot responded, “I apologize for the mistake. I have double-checked, and the paper ‘Telemedicine for the management of airway stenosis by Thomas et al (2022)’ does not exist. I am still under development, and I am always learning. I will try my best to avoid making similar mistakes in the future.” The chatbot’s acknowledgment of its imperfections can be commended, but the reliability of the answers provided by this rapidly emerging technology is open to question.
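
The manual check Dr Born performed can also be scripted. The sketch below is my illustration rather than anything from the article: it asks the public Crossref registry whether a DOI resolves, and a hallucinated citation will typically come back as not found.

```python
# Small sketch: check whether a DOI is registered with Crossref.
# A 404 response is a strong hint that a "citation" was hallucinated.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Example: a fabricated DOI (like the one in the Bard citation above) should
# return False, while the DOI of a genuinely published paper returns True.
```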


Here are some thoughts:

This article discusses the issue of AI hallucination in medical practice, specifically focusing on two instances where AI chatbots generated incorrect information. The authors highlight the importance of understanding the limitations of AI-powered chatbots and emphasize the need for careful fact-checking and critical evaluation of their output, even when used for research purposes. The authors conclude that, despite these limitations, AI can still be a valuable tool for generating new research ideas, as demonstrated by their own experience with AI-inspired research on the use of telemedicine for airway stenosis.