Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, January 17, 2025

Men's suicidal thoughts and behaviors and conformity to masculine norms: A person-centered, latent profile approach

Eggenberger, L., et al. (2024).
Heliyon, 10(20), e39094.

Abstract

Background

Men are up to four times more likely to die by suicide than women. At the same time, men are less likely to disclose suicidal ideation and transition more rapidly from ideation to attempt. Recently, socialized gender norms and particularly conformity to masculine norms (CMN) have been discussed as driving factors for men's increased risk for suicidal thoughts and behaviors (STBs). This study aims to examine the individual interplay between CMN dimensions and their association with depression symptoms, help-seeking, and STBs.

Methods

Using data from an anonymous online survey of 488 cisgender men, latent profile analysis was performed to identify CMN subgroups. Multigroup comparisons and hierarchical regression analyses were used to estimate differences in sociodemographic characteristics, depression symptoms, psychotherapy use, and STBs.

Results

Three latent CMN subgroups were identified: Egalitarians (58.6 %; characterized by overall low CMN), Players (16.0 %; characterized by patriarchal beliefs, endorsement of sexual promiscuity, and heterosexual self-presentation), and Stoics (25.4 %; characterized by restrictive emotionality, self-reliance, and engagement in risky behavior). Stoics showed a 2.32 times higher risk for a lifetime suicide attempt, younger age, stronger somatization of depression symptoms, and stronger unbearability beliefs.

Conclusion

The interplay between the CMN dimensions restrictive emotionality, self-reliance, and willingness to engage in risky behavior, paired with suicidal beliefs about the unbearability of emotional pain, may create a suicidogenic psychosocial system. Acknowledging this high-risk subgroup of men conforming to restrictive masculine norms may aid the development of tailored intervention programs, ultimately mitigating the risk for a suicide attempt.

Here are some thoughts:

Overall, the study underscores the critical role of social norms in shaping men's mental health and suicide risk. It provides valuable insights for developing targeted interventions and promoting healthier expressions of masculinity to prevent suicide in men.

This research article investigates the link between conformity to masculine norms (CMN) and suicidal thoughts and behaviors (STBs) in cisgender men. Using data from an online survey, the study employs latent profile analysis to identify distinct CMN subgroups, revealing three profiles: Egalitarians (low CMN), Players (patriarchal beliefs and promiscuity), and Stoics (restrictive emotionality, self-reliance, and risk-taking). Stoics demonstrated a significantly higher risk of lifetime suicide attempts, attributable to their CMN profile combined with beliefs about the unbearability of emotional pain. The study concludes that understanding CMN dimensions is crucial for developing targeted suicide prevention strategies for men.
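
For readers who want to see how a profile analysis like this works mechanically, below is a minimal sketch that uses a Gaussian mixture model as a stand-in for latent profile analysis. It is not the authors' code; the subscale names, the simulated 488-row dataset, and the use of scikit-learn are my own assumptions.

```python
# Minimal latent-profile-style analysis sketch using a Gaussian mixture model
# as a stand-in for LPA. The data and column names are simulated/hypothetical;
# this is not the authors' analysis code.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cmn = pd.DataFrame(
    rng.normal(size=(488, 3)),
    columns=["restrictive_emotionality", "self_reliance", "risk_taking"],
)

# Fit mixtures with 1-5 profiles and keep the solution with the lowest BIC.
fits = {
    k: GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(cmn)
    for k in range(1, 6)
}
best_k, best_fit = min(fits.items(), key=lambda kv: kv[1].bic(cmn))

# Assign each respondent to their most likely profile, analogous to the
# Egalitarian / Player / Stoic subgroups reported in the paper.
profiles = best_fit.predict(cmn)
print(best_k, np.bincount(profiles) / len(profiles))
```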

Thursday, January 16, 2025

Faculty Must Protect Their Labor from AI Replacement

John Warner
Inside Higher Ed
Originally posted 11 Dec 24

Here is an excerpt:

A PR release from the UCLA Newsroom about a comparative lit class that is using a “UCLA-developed AI system” to substitute for labor that was previously done by faculty or teaching assistants lays out the whole deal. The course textbook has been generated from the professor’s previous course materials. Students will interact with the AI-driven courseware. A professor and teaching assistants will remain, for now, but for how long?

The professor argues—I would say rationalizes—that this is good for students because “Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically.”

(Note: Whenever I see someone touting the benefit of an AI-driven practice as good pedagogy, I wonder what is stopping them from doing it without the AI component, and the answer is usually nothing.)

An additional apparent benefit is “that the platform can help professors ensure consistent delivery of course material. Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching—and offer students a very similar experience.”


This article argues that the survival of college faculty in an AI-driven world depends on recognizing themselves as laborers and resisting trends that devalue their work. The rise of adjunctification—prioritizing cheaper, non-tenured faculty over tenured ones—offers a cautionary tale. Similarly, the adoption of generative AI in teaching risks diminishing the human role in education. Examples like UCLA’s AI-powered courseware illustrate how faculty labor becomes interchangeable, paving the way for automation and eroding the value of teaching. Faculty must push back against policies, such as shifts in copyright, that enable these trends, emphasizing the irreplaceable value of their labor and resisting practices that jeopardize the future of academic teaching and learning.

Wednesday, January 15, 2025

AI Licensing for Authors: Who Owns the Rights and What’s a Fair Split?

The Authors Guild
Originally published 12 Dec 24

The Authors Guild believes it is crucial that authors, not publishers or tech companies, have control over the licensing of AI rights. Authors must be able to choose whether they want to allow their works to be used by AI and under what terms.

AI Training Is Not Covered Under Standard Publishing Agreements

A trade publishing agreement grants just that: a license to publish. AI training is not publishing, and a publishing contract does not in any way grant that right. AI training is not a new book format, it is not a new market, it is not a new distribution mechanism. Licensing for AI training is a right entirely unrelated to publishing, and is not a right that can simply be tacked onto a subsidiary-rights clause. It is a right reserved by authors, a right that must be negotiated individually for each publishing contract, and only if the author chooses to license that right at all.

Subsidiary Rights Do Not Include AI Rights

The contractual rights that authors do grant to publishers include the right to publish the book in print, electronic, and often audio formats (though many older contracts do not provide for electronic or audio rights). They also grant the publisher “subsidiary rights” authorizing it to license the book or excerpts to third parties in readable formats, such as foreign language editions, serials, abridgements or condensations, and readable digital or electronic editions. AI training rights to date have not been included as a subsidiary right in any contract we have been made aware of. Subsidiary rights have a range of “splits”—percentages of revenues that the publisher keeps and pays to the author. For certain subsidiary rights, such as “other digital” or “other electronic” rights (which some publishers have, we believe erroneously, argued gives them AI training rights), the publisher is typically required to consult with the author or get their approval before granting any subsidiary licenses.


Here are some thoughts:

The Authors Guild emphasizes that authors, not publishers or tech companies, should control AI licensing for their works. Standard publishing contracts don’t cover AI training, as it’s unrelated to traditional publishing rights. Authors retain copyright for AI uses and must negotiate these rights separately, ensuring they can approve or reject licensing deals. Publishers, if involved, should be fairly compensated based on their role, but authors should receive the majority—75-85%—of AI licensing revenues. The Guild also continues legal action against companies for past AI-related copyright violations, advocating for fair practices and author autonomy in this emerging market.

Tuesday, January 14, 2025

Agentic LLMs for Patient-Friendly Medical Reports

Sudarshan, M., Shih, S., et al. (2024).
arXiv.org

Abstract

The application of Large Language Models (LLMs) in healthcare is expanding rapidly, with one potential use case being the translation of formal medical reports into patient-legible equivalents. Currently, LLM outputs often need to be edited and evaluated by a human to ensure both factual accuracy and comprehensibility, and this is true for the above use case. We aim to minimize this step by proposing an agentic workflow with the Reflexion framework, which uses iterative self-reflection to correct outputs from an LLM. This pipeline was tested and compared to zero-shot prompting on 16 randomized radiology reports. In our multi-agent approach, reports had an accuracy rate of 94.94% when looking at verification of ICD-10 codes, compared to zero-shot prompted reports, which had an accuracy rate of 68.23%. Additionally, 81.25% of the final reflected reports required no corrections for accuracy or readability, while only 25% of zero-shot prompted reports met these criteria without needing modifications. These results indicate that our approach presents a feasible method for communicating clinical findings to patients in a quick, efficient and coherent manner whilst also retaining medical accuracy. The codebase is available for viewing at http://github.com/malavikhasudarshan/Multi-Agent-Patient-Letter-Generation.


Here are some thoughts:

The article focuses on using Large Language Models (LLMs) in healthcare to create patient-friendly versions of medical reports, specifically in the field of radiology. The authors present a new multi-agent workflow that aims to improve the accuracy and readability of these reports compared to traditional methods like zero-shot prompting. This workflow involves multiple steps: extracting ICD-10 codes from the original report, generating multiple patient-friendly reports, and using a reflection model to select the optimal version.

The study highlights the success of this multi-agent approach, demonstrating that it leads to higher accuracy in terms of including correct ICD-10 codes and produces reports that are more concise, structured, and formal compared to zero-shot prompting. The authors acknowledge that while their system significantly reduces the need for human review and editing, it doesn't completely eliminate it. The article emphasizes the importance of clear and accessible medical information for patients, especially as they increasingly gain access to their own records. The goal is to reduce patient anxiety and confusion, ultimately enhancing their understanding of their health conditions.
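
To make the described workflow concrete, here is a minimal sketch of a generate-reflect-select loop of the kind the paper reports. It is not the authors' pipeline (their code is at the GitHub link in the abstract); the model name, the prompts, and the chat helper are illustrative assumptions.

```python
# Sketch of a generate-reflect-select workflow for patient-friendly radiology
# reports, loosely following the paper's description. Not the authors' code;
# model name, prompts, and helper names are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def patient_friendly_report(radiology_report: str, n_candidates: int = 3) -> str:
    # 1. Extract ICD-10 codes so accuracy can later be checked against the source.
    codes = chat(f"List the ICD-10 codes supported by this report:\n{radiology_report}")

    # 2. Generate several candidate lay-language versions of the report.
    candidates = [
        chat(
            "Rewrite this radiology report in plain language a patient can "
            f"understand, preserving every finding:\n{radiology_report}"
        )
        for _ in range(n_candidates)
    ]

    # 3. Reflection step: have the model critique the candidates against the
    #    original report and the extracted codes, then return the best one.
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    verdict = chat(
        f"Original report:\n{radiology_report}\n\nICD-10 codes:\n{codes}\n\n"
        f"Candidates:\n{numbered}\n\n"
        "Check each candidate for factual errors or missing findings and reply "
        "with only the index number of the most accurate, most readable one."
    )
    return candidates[int(verdict.strip().strip("[]."))]
```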

Monday, January 13, 2025

Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence

Altay, S., Lyons, B. A., & Modirrousta-Galian, A. (2024).
Mass Communication & Society, 1–25.
https://doi.org/10.1080/15205436.2024.2382776

Abstract

In two online experiments (N = 2,735), we investigated whether forced exposure to high proportions of false news could have deleterious effects by sowing confusion and fueling distrust in news. In a between-subjects design where U.S. participants rated the accuracy of true and false news, we manipulated the proportions of false news headlines participants were exposed to (17%, 33%, 50%, 66%, and 83%). We found that exposure to higher proportions of false news decreased trust in the news but did not affect participants’ perceived accuracy of news headlines. While higher proportions of false news had no effect on participants’ overall ability to discern between true and false news, they made participants more overconfident in their discernment ability. Therefore, exposure to false news may have deleterious effects not by increasing belief in falsehoods, but by fueling overconfidence and eroding trust in the news. Although we are only able to shed light on one causal pathway, from news environment to attitudes, this can help us better understand the effects of external or supply-side changes in news quality.


Here are some thoughts:

The study investigates the impact of increased exposure to false news on individuals' trust in media, their ability to discern truth from falsehood, and their confidence in their evaluation skills. The research involved two online experiments with a total of 2,735 participants, who rated the accuracy of news headlines after being exposed to varying proportions of false content. The findings reveal that higher rates of misinformation significantly decrease general media trust, independent of individual factors such as ideology or cognitive reflectiveness. This decline in trust may lead individuals to turn away from credible news sources in favor of less reliable alternatives, even when their ability to evaluate individual news items remains intact.

Interestingly, while participants displayed overconfidence in their evaluations after exposure to predominantly false content, their actual accuracy judgments did not significantly vary with the proportion of true and false news. This suggests that personal traits like discernment skills play a more substantial role than environmental cues in determining how individuals assess news accuracy. The study also highlights a disconnection between changes in media trust and evaluations of specific news items, indicating that attitudes toward media are often more malleable than actual behavior.

The research underscores the importance of understanding the psychological mechanisms at play when individuals encounter misinformation. It points out that interventions aimed at improving news discernment should consider the potential for increased skepticism rather than enhanced accuracy. Moreover, the findings suggest that exposure to high levels of false news can lead to overconfidence in one's ability to judge news quality, which may result in the rejection of accurate information.

Overall, the study provides credible evidence that exposure to predominantly false news can have harmful effects by eroding trust in media institutions and fostering overconfidence in personal judgment abilities. These insights are crucial for developing effective strategies to combat misinformation and promote healthy media consumption habits among the public.
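
As a toy illustration of the study's two outcome measures, the sketch below computes discernment (the gap between accuracy ratings for true and false headlines) and a simple overconfidence index (confidence that is high relative to actual discernment). This is not the authors' analysis code, and the simulated data and scoring rules are simplified assumptions.

```python
# Toy illustration of discernment and overconfidence measures; the data are
# simulated and the scoring rules are simplified assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_headlines = 200, 16

# Half of each participant's headlines are true, half false (simulated design).
is_true = np.tile(np.arange(n_headlines) < n_headlines // 2, (n_participants, 1))
ratings = rng.integers(1, 8, size=(n_participants, n_headlines))  # 1-7 "how accurate is this headline?"
confidence = rng.integers(1, 8, size=n_participants)              # self-rated ability to spot false news

# Discernment: average rating given to true headlines minus false headlines.
discernment = np.array([r[t].mean() - r[~t].mean() for r, t in zip(ratings, is_true)])

def z(x):
    return (x - x.mean()) / x.std()

# Overconfidence: confidence that is high relative to actual discernment.
overconfidence = z(confidence) - z(discernment)
print(overconfidence.round(2)[:5])
```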

Sunday, January 12, 2025

Large language models can outperform humans in social situational judgments

Mittelstädt, J. M., et al. (2024).
Scientific Reports, 14(1).

Abstract

Large language models (LLM) have been a catalyst for the public interest in artificial intelligence (AI). These technologies perform some knowledge-based tasks better and faster than human beings. However, whether AIs can correctly assess social situations and devise socially appropriate behavior, is still unclear. We conducted an established Situational Judgment Test (SJT) with five different chatbots and compared their results with responses of human participants (N = 276). Claude, Copilot and you.com’s smart assistant performed significantly better than humans in proposing suitable behaviors in social situations. Moreover, their effectiveness rating of different behavior options aligned well with expert ratings. These results indicate that LLMs are capable of producing adept social judgments. While this constitutes an important requirement for the use as virtual social assistants, challenges and risks are still associated with their wide-spread use in social contexts.

Here are some thoughts:

This research assesses the social judgment capabilities of large language models (LLMs) by administering a Situational Judgment Test (SJT) to five popular chatbots and comparing their performance with that of a human control group; SJTs are standardized assessments that ask respondents to judge appropriate ways of handling difficult workplace and social situations. The study found that several LLMs significantly outperformed humans in identifying appropriate behaviors in complex social scenarios. While LLMs demonstrated high consistency in their responses and agreement with expert ratings, the study notes limitations, including potential biases and the need for further investigation into real-world application and the underlying mechanisms of their social judgment. The results suggest LLMs possess considerable potential as social assistants, but also highlight ethical considerations surrounding their use.

Saturday, January 11, 2025

LLM-based agentic systems in medicine and healthcare

Qiu, J., Lam, K., Li, G. et al.
Nat Mach Intell (2024).

Large language model-based agentic systems can process input information, plan and decide, recall and reflect, interact and collaborate, leverage various tools and act. This opens up a wealth of opportunities within medicine and healthcare, ranging from clinical workflow automation to multi-agent-aided diagnosis.

Large language models (LLMs) exhibit generalist intelligence in following instructions and providing information. In medicine, they have been employed in tasks from writing discharge summaries to clinical note-taking. LLMs are typically created via a three-stage process: first, pre-training using vast web-scale data to obtain a base model; second, fine-tuning the base model using high-quality question-and-answer data to generate a conversational assistant model; and third, reinforcement learning from human feedback to align the assistant model with human values and improve responses. LLMs are essentially text-completion models that provide responses by predicting words following the prompt. Although this next-word prediction mechanism allows LLMs to respond rapidly, it does not guarantee depth or accuracy of their outputs. LLMs are currently limited by the recency, validity and breadth of their training data, and their outputs are dependent on prompt quality. They also lack persistent memory, owing to their intrinsically limited context window, which leads to difficulties in maintaining continuity across longer interactions or across sessions; this, in turn, leads to challenges in providing personalized responses based on past interactions. Furthermore, LLMs are inherently unimodal. These limitations restrict their applications in medicine and healthcare, which often require problem-solving skills beyond linguistic proficiency alone.


Here are some thoughts:

Large language model (LLM)-based agentic systems are emerging as powerful tools in medicine and healthcare, offering capabilities that go beyond simple text generation. These systems can process information, make decisions, and interact with various tools, leading to advancements in clinical workflows and diagnostics. LLM agents are created through a three-stage process involving pre-training, fine-tuning, and reinforcement learning. They overcome limitations of standalone LLMs by incorporating external modules for perception, memory, and action, enabling them to handle complex tasks and collaborate with other agents. Four key opportunities for LLM agents in healthcare include clinical workflow automation, trustworthy medical AI, multi-agent-aided diagnosis, and health digital twins. Despite their potential, these systems also pose challenges such as safety concerns, bias amplification, and the need for new regulatory frameworks.
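
As a rough sketch of what "agentic" means in practice, the loop below wraps a language-model call with a persistent memory and a small tool registry, which is the general pattern the article describes. Nothing here comes from the paper: the tool names, the plan format, and the placeholder call_llm function are illustrative assumptions.

```python
# Rough sketch of a perceive-plan-act loop behind an "agentic" LLM system.
# Not from the paper; tool names and the plan format are illustrative.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; swap in an actual client."""
    # Canned response so the sketch runs end to end without an API key.
    return "FINAL: summary of the requested patient information"

# A small registry of tools the agent may invoke by name (illustrative only).
TOOLS: dict[str, Callable[[str], str]] = {
    "search_guidelines": lambda query: f"(guideline snippets for: {query})",
    "fetch_lab_results": lambda patient_id: f"(labs for patient {patient_id})",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    memory: list[str] = [f"TASK: {task}"]  # persistent scratchpad across steps
    for _ in range(max_steps):
        # Plan: ask the model for its next action given everything seen so far.
        plan = call_llm(
            "Answer directly with 'FINAL: <answer>' or call a tool with "
            f"'TOOL: <name> | <input>'. Available tools: {list(TOOLS)}.\n"
            + "\n".join(memory)
        )
        memory.append(plan)
        if plan.startswith("FINAL:"):
            return plan.removeprefix("FINAL:").strip()
        # Act: run the requested tool and feed its output back as an observation.
        if plan.startswith("TOOL:"):
            name, _, arg = plan.removeprefix("TOOL:").partition("|")
            result = TOOLS.get(name.strip(), lambda _: "unknown tool")(arg.strip())
            memory.append(f"OBSERVATION: {result}")
    return "Stopped after reaching the step limit."

print(run_agent("Draft a plain-language summary of today's imaging findings."))
```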

This development is important to psychologists for several reasons. First, LLM agents could revolutionize mental health care by providing personalized, round-the-clock support to patients, potentially improving treatment outcomes and accessibility. Second, these systems could assist psychologists in analyzing complex patient data, leading to more accurate diagnoses and tailored treatment plans. Third, LLM agents could automate administrative tasks, allowing psychologists to focus more on direct patient care. Fourth, the multi-agent collaboration feature could facilitate interdisciplinary approaches in mental health, bringing together insights from various specialties. Finally, the ethical implications and potential biases of these systems present new areas of study for psychologists, particularly in understanding how AI-human interactions may impact mental health and therapeutic relationships.

Friday, January 10, 2025

The Danger Of Superhuman AI Is Not What You Think

Shannon Vallor
Noema Magazine
Originally posted 23 May 24

Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence. Far from a harmless bit of marketing spin, the headlines and quotes trumpeting our triumph or doom in an era of superhuman AI are the refrain of a fast-growing, dangerous and powerful ideology. Whether used to get us to embrace AI with unquestioning enthusiasm or to paint a picture of AI as a terrifying specter before which we must tremble, the underlying ideology of “superhuman” AI fosters the growing devaluation of human agency and autonomy and collapses the distinction between our conscious minds and the mechanical tools we’ve built to mirror them.

Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love. Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside.


Here are some thoughts:

This essay critiques the prevalent notion of superhuman AI, arguing that this rhetoric diminishes the unique qualities of human intelligence. The author challenges the idea that surpassing humans in task completion equates to superior intelligence, emphasizing the irreplaceable aspects of human consciousness, emotion, and creativity. The essay contrasts the narrow definition of intelligence used by some AI researchers with a broader understanding that encompasses human experience and values. Ultimately, the author proposes a future where AI complements rather than replaces human capabilities, fostering a more humane and sustainable society.

Thursday, January 9, 2025

Moral resilience and moral injury of nurse leaders during crisis situations: A qualitative descriptive analysis

Bergman, A., Nelson, K., et al. (2024).
Nursing Management, 55(12), 16–26.

Nurse leaders are a heterogeneous group encompassing a variety of roles, settings, and specialties. What ties these diverse professionals together is a common code of ethics. Nurse leaders apply the provisions of their code of ethics not only to patient scenarios, but also to their interactions with nursing colleagues who rely on their leaders as advocates for ethical nursing practice. Successful nurse leaders embody principles of professionalism, utilize effective communication and interpersonal skills, have a broad familiarity with the healthcare system and its nuances and complexity, and demonstrate skillful business acumen.

Despite their extensive training, nurse leaders have long been an underappreciated and largely unseen force maintaining the health of the healthcare systems and functioning as a safety net for both patients and nursing staff. However, nurse leaders are under more scrutiny and subject to extraordinary stressors related to the COVID-19 pandemic. Some of these stressors occurred due to the ethical challenges placed on leaders navigating an unprecedented pandemic. Others reflect long-standing patterns within healthcare and nursing that were exacerbated during the pandemic.

Unresolved ethical issues combined with unrelenting stress can lead to escalating degrees of moral suffering that undermines integrity and well-being. Moral injury (MI) occurs when an individual compromises personal or professional values, violating the individual's sense of right and wrong and causing this person to question their ability to navigate ethical concerns with integrity. Conversely, moral resilience (MR) is the capacity to restore or sustain integrity in response to ethical or moral adversity. MR includes six pillars: personal integrity, relational integrity, buoyancy, self-regulation/self-awareness, moral efficacy, and self-stewardship.

This research substudy aimed to explore the experiences and scenarios that exposed nurse leaders to MI during the COVID-19 pandemic and the strategies and solutions that nurse leaders employ to bolster their MR and integrity. This research strives to amplify their stories in the hopes of developing practical solutions to organizational, professional, and individual concerns rooted in their ethical values as nurse leaders.

The article is linked above.

This qualitative study examines the moral injury (MI) and moral resilience (MR) of nurse leaders during the COVID-19 pandemic. Researchers surveyed US nurse leaders, analyzing both quantitative MI and MR scores and qualitative responses exploring their experiences. Five key themes emerged: absent nursing voice, unsustainable workload, lack of leadership support, need for leadership capacity building, and prioritization of finances over patient care. The Reina Trust & Betrayal Model framed the analysis, revealing widespread broken trust impacting all three dimensions of trust: communication, character, and capability. The study concludes with recommendations to rebuild trust and address nurse leader well-being.