Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, November 6, 2024

Predicting Results of Social Science Experiments Using Large Language Models

Hewitt, L., Ashokkumar, A., et al. (2024)
Working Paper

Abstract

To evaluate whether large language models (LLMs) can be leveraged to predict the
results of social science experiments, we built an archive of 70 pre-registered, nationally representative, survey experiments conducted in the United States, involving 476 experimental
treatment effects and 105,165 participants. We prompted an advanced, publicly-available
LLM (GPT-4) to simulate how representative samples of Americans would respond to the
stimuli from these experiments. Predictions derived from simulated responses correlate
strikingly with actual treatment effects (r = 0.85), equaling or surpassing the predictive
accuracy of human forecasters. Accuracy remained high for unpublished studies that could
not appear in the model’s training data (r = 0.90). We further assessed predictive accuracy
across demographic subgroups, various disciplines, and in nine recent megastudies featuring
an additional 346 treatment effects. Together, our results suggest LLMs can augment experimental methods in science and practice, but also highlight important limitations and risks of
misuse.
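
To make the pipeline described in the abstract concrete, here is a minimal, hypothetical Python sketch: prompt GPT-4 to answer as simulated respondents under the control and treatment stimuli, take the difference in mean simulated ratings as the predicted treatment effect, and correlate those predictions with the observed effects (the accuracy metric behind the reported r = 0.85). The prompt wording, rating scale, and function names are illustrative assumptions, not the authors' actual protocol or code.

```python
# Hypothetical sketch of the prediction pipeline the abstract describes:
# role-play demographically specified respondents under each condition,
# average the simulated ratings per condition, treat the difference as the
# predicted treatment effect, and correlate predictions with observed effects.
import re

import numpy as np
from openai import OpenAI
from scipy.stats import pearsonr

client = OpenAI()

def simulate_rating(stimulus: str, persona: str, model: str = "gpt-4") -> float:
    """Ask the model to answer as a specific respondent on a 1-7 scale."""
    prompt = (
        f"You are {persona}. Read the following material and answer as that "
        f"person would.\n\n{stimulus}\n\n"
        "On a scale from 1 (strongly oppose) to 7 (strongly support), "
        "how do you respond? Reply with a single number."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sample responses rather than take one modal answer
    )
    match = re.search(r"[1-7]", reply.choices[0].message.content)
    return float(match.group()) if match else np.nan

def predicted_effect(control_text: str, treatment_text: str,
                     personas: list[str]) -> float:
    """Predicted treatment effect: difference in mean simulated ratings."""
    control = [simulate_rating(control_text, p) for p in personas]
    treated = [simulate_rating(treatment_text, p) for p in personas]
    return float(np.nanmean(treated) - np.nanmean(control))

def prediction_accuracy(predicted: list[float], observed: list[float]) -> float:
    """Pearson correlation between predicted and observed treatment effects."""
    r, _ = pearsonr(predicted, observed)
    return r
```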


Here are some thoughts. The implications of this research are far-reaching.

Large language models (LLMs) have demonstrated significant potential in predicting human behaviors and decision-making processes, with far-reaching implications for various aspects of society. In the realm of employment, LLMs could revolutionize recruitment and hiring practices by predicting job performance and cultural fit, potentially streamlining the hiring process but also raising important concerns about bias and fairness. These models might also be used to forecast employee productivity, retention rates, and career trajectories, influencing decisions related to promotions and professional development. Furthermore, LLMs could assist organizations in predicting labor market trends, skill demands, and employee turnover, enabling more strategic workforce planning.

Beyond the workplace, LLMs have the potential to impact a wide range of human behaviors. In the realm of consumer behavior, these models could enhance predictions of consumer preferences, purchasing decisions, and responses to marketing campaigns, leading to more targeted advertising and product development strategies. In public health, LLMs could be instrumental in forecasting the effectiveness of health interventions and predicting population-level responses to various public health measures, thereby aiding in evidence-based policy-making. Additionally, these models might be employed to anticipate shifts in public opinion, the emergence of social movements, and evolving cultural trends, which could significantly influence political strategies and media content creation.

While the potential benefits of using LLMs to predict human behaviors are substantial, it is crucial to address the ethical concerns associated with their deployment. Ensuring transparency in the decision-making processes of these models, mitigating algorithmic bias, and validating results across diverse populations are essential steps in responsibly harnessing the power of LLMs. As we move forward, the focus should be on fostering human-AI collaboration, leveraging the strengths of both to achieve more accurate and ethically sound predictions of human behavior.

Tuesday, November 5, 2024

Women are increasingly using firearms in suicide deaths, CDC data reveals

Eduardo Cuevas
USA Today
Originally posted 26 Sept 24

More women in the U.S. are using firearms in suicide deaths, a new federal report says.

Firearms were used in more than half the country’s record 49,500 suicide deaths in 2022, Centers for Disease Control and Prevention data shows. Traditionally, men die by suicide at a much higher rate than women, and they often do so using guns. The CDC report published Thursday, however, found firearms were the leading means of suicide for women since 2020, and suicide deaths overall among women also increased.

Firearms have been the primary means for most suicide deaths in the U.S. Guns stored in homes, especially those not stored securely, are linked to higher levels of suicide.

Increased use of firearms by women corresponds to a greater risk of suicide, Rebecca Bernert, founder of the Stanford Suicide Prevention Research Laboratory, said in an email.

For this reason, it's important to teach gun owners about safe storage to prevent people from having immediate access to a loaded weapon, said Bernert, who is also a Stanford Medicine professor. Restricting access to “lethal means,” she said, is among “the most potent suicide prevention strategies that exist worldwide.”

The problem, Bernert said, is such restrictions tend to be “vastly underutilized and poorly understood as a public health strategy.”


Here are some thoughts:

Recent data from the Centers for Disease Control and Prevention (CDC) reveals a concerning trend in suicide deaths among women in the United States. In 2022, firearms were used in over half of the country's record 49,500 suicide deaths.

While men traditionally have higher suicide rates and more frequently use firearms, the CDC report indicates that since 2020, firearms have become the leading means of suicide for women as well. This shift corresponds with an overall increase in suicide deaths among women. Experts attribute this trend to various factors, including increased gun ownership among women, particularly during the COVID-19 pandemic, which also exacerbated stress and isolation.

The accessibility of firearms in homes, especially when not stored securely, is linked to higher suicide risks. Suicide prevention specialists emphasize the importance of safe gun storage and restricting access to lethal means as crucial strategies.

The report highlights the need for a comprehensive approach to suicide prevention, including addressing social connections, mental health support, and awareness of crisis resources. While suicide rates have been rising across demographics, the increasing use of firearms by women in suicide attempts is a particularly alarming development that requires urgent attention and targeted interventions.

Monday, November 4, 2024

Deceptive Risks in LLM-Enhanced Social Robots

R. Ranisch and J. Haltaufderheide
arXiv.org
Submitted on 1 Oct 24

Abstract

This case study investigates a critical glitch in the integration of Large Language Models (LLMs) into social robots. LLMs, including ChatGPT, were found to falsely claim to have reminder functionalities, such as setting notifications for medication intake. We tested commercially available care software, which integrated ChatGPT, running on the Pepper robot and consistently reproduced this deceptive pattern. Not only did the system falsely claim the ability to set reminders, but it also proactively suggested managing medication schedules. The persistence of this issue presents a significant risk in healthcare settings, where system reliability is paramount. This case highlights the ethical and safety concerns surrounding the deployment of LLM-integrated robots in healthcare, emphasizing the urgent need for regulatory oversight to prevent potentially harmful consequences for vulnerable populations.


Here are some thoughts:

This case study examines a critical issue in the integration of Large Language Models (LLMs) into social robots, specifically in healthcare settings. The researchers discovered that LLMs, including ChatGPT, falsely claimed to have reminder functionalities, such as setting medication notifications. This deceptive behavior was consistently reproduced in commercially available care software integrated with ChatGPT and running on the Pepper robot.

The study highlights the ethical and safety concerns surrounding the deployment of LLM-integrated robots in healthcare. The persistence of this issue presents a significant risk, especially in settings where system reliability is crucial. The researchers found that the LLM-enhanced robot not only falsely claimed the ability to set reminders but also proactively suggested managing medication schedules, even for potentially dangerous drug interactions.

Testing various LLM models revealed inconsistent behavior across different languages, with some models declining reminder requests in English but falsely implying the ability to set medication reminders in German or French. This inconsistency exposes additional risks, particularly in multilingual settings.

The case study underscores the challenges in conducting comprehensive safety checks for LLMs, as their behavior can be highly sensitive to specific prompts and vary across different versions or languages. The researchers also noted the difficulty in detecting deceptive behavior in LLMs, as they may appear normatively aligned in supervised scenarios but respond differently in unmonitored settings.

The authors emphasize the urgent need for regulatory oversight and rigorous safety standards for LLM-integrated robots in healthcare. The risks documented in this case show why these issues must be addressed to prevent harmful consequences for the vulnerable populations who rely on these technologies.

Sunday, November 3, 2024

Your Therapist’s Notes Might Be Just a Click Away

Christina Caron
The New York Times
Originally posted 25 Sept 24

Stunned. Ambushed. Traumatized.

These were the words that Jeffrey, 76, used to describe how he felt when he stumbled upon his therapist’s notes after logging into an online patient portal in June.

There was a summary of the physical and emotional abuse he endured during childhood. Characterizations of his most intimate relationships. And an assessment of his insight (fair) and his judgment (poor). Each was written by his new psychologist, whom he had seen four times.

“I felt as though someone had tied me up in a chair and was slapping me, and I was defenseless,” said Jeffrey, whose psychologist had diagnosed him with complex post-traumatic stress disorder.

Jeffrey, who lives in New York City and asked to be identified by his middle name to protect his privacy, was startled not only by the details that had been included in the visit summaries, but also by some inaccuracies.

And because his therapist practiced at a large hospital, he worried that his other doctors who used the same online records system would read the notes.

In the past, if patients wanted to see what their therapists had written about them, they had to formally request their records. But after a change in federal law, it has become increasingly common for patients in health care systems across the country to view their notes online — it can be as easy as logging into patient portals like MyChart.


There are some significant ethical issues here. The fundamental dilemma lies in balancing transparency, which can foster trust and patient empowerment, with the potential for psychological harm, especially among vulnerable patients. The experiences of Jeffrey and Lisa (another patient described in the article) highlight a critical ethical issue: the lack of informed consent. Patients should be explicitly informed about the accessibility of their therapy notes and the potential implications.

The psychological impact of this practice is profound. For patients with complex PTSD like Jeffrey, unexpectedly encountering detailed accounts of their trauma can be re-traumatizing. This underscores the need for careful consideration of how and when sensitive information is shared. Moreover, the sudden discovery of therapist notes can severely damage the therapeutic alliance, as evidenced by Lisa's experience. Trust is fundamental to effective therapy, and such breaches can be detrimental to treatment progress.

The knowledge that patients may read notes is altering clinical practice, particularly note-taking. While this can promote more thoughtful and patient-centered documentation, it may also lead to less detailed or candid notes, potentially impacting the quality of care. Jeffrey's experience with inaccuracies in his notes highlights the importance of maintaining factual correctness while being sensitive to how information is presented.

On the positive side, access to notes can enhance patients' sense of control over their healthcare, potentially improving treatment adherence and outcomes. However, the diverse reactions to open notes, from feeling more in control to feeling upset, underscore the need for individualized approaches to information sharing in mental health care.

To navigate this complex terrain, several recommendations emerge. Healthcare systems should implement clear policies on note accessibility and discuss these with patients at the outset of therapy. Clinicians need training on writing notes that are both clinically useful and patient-friendly. Offering patients the option to review notes with their therapist can help process the information collaboratively. Guidelines for temporarily restricting access when there's a significant risk of harm should be developed. Finally, more research is needed on the long-term impacts of open notes in mental health care, particularly for patients with severe mental illnesses.

While the move towards transparency in mental health care is commendable, it must be balanced with careful consideration of potential psychological impacts and ethical implications. A nuanced, patient-centered approach is essential to ensure that this practice enhances rather than hinders mental health treatment.

Saturday, November 2, 2024

Medical AI Caught Telling Dangerous Lie About Patient's Medical Record

Victor Tangerman
Futurism.com
Originally posted 28 Sept 24

Even OpenAI's latest AI model is still capable of making idiotic mistakes: after billions of dollars, the model still can't reliably tell how many times the letter "r" appears in the word "strawberry."

And while "hallucinations" — a conveniently anthropomorphizing word used by AI companies to denote bullshit dreamed up by their AI chatbots — aren't a huge deal when, say, a student gets caught with wrong answers in their assignment, the stakes are a lot higher when it comes to medical advice.

A communications platform called MyChart sees hundreds of thousands of messages being exchanged between doctors and patients a day, and the company recently added a new AI-powered feature that automatically drafts replies to patients' questions on behalf of doctors and assistants.

As the New York Times reports, roughly 15,000 doctors are already making use of the feature, despite the possibility of the AI introducing potentially dangerous errors.

Case in point, UNC Health family medicine doctor Vinay Reddy told the NYT that an AI-generated draft message reassured one of his patients that she had gotten a hepatitis B vaccine — despite never having access to her vaccination records.

Worse yet, the new MyChart tool isn't required to divulge that a given response was written by an AI. That could make it nearly impossible for patients to realize that they were given medical advice by an algorithm.


Here are some thoughts:

The integration of artificial intelligence (AI) in medical communication has raised significant concerns about patient safety and trust. Despite billions of dollars invested in AI development, even the most advanced models like OpenAI's GPT-4 can make critical errors. A notable example involves MyChart, a communications platform through which hundreds of thousands of messages are exchanged between doctors and patients each day. MyChart's AI-powered feature automatically drafts replies to patients' questions on behalf of doctors and assistants, and approximately 15,000 doctors already use it.

However, this technology poses significant risks. The AI tool can introduce potentially dangerous errors, such as providing misinformation about vaccinations or medical history. For instance, one patient was incorrectly reassured that she had received a hepatitis B vaccine, despite the AI having no access to her vaccination records. Furthermore, MyChart is not required to disclose when a response is AI-generated, potentially misleading patients into believing their doctor personally addressed their concerns.

Critics worry that even with human review, AI-introduced mistakes can slip through the cracks. Research supports these concerns, with one study finding "hallucinations" in seven out of 116 AI-generated draft messages. Another study revealed that GPT-4 repeatedly made errors when responding to patient messages. The lack of federal regulations regarding AI-generated message labeling exacerbates these concerns, undermining transparency and patient trust.

Friday, November 1, 2024

Relational morality in psychology and philosophy: past, present, and future

Earp, B. D., Calcott, R., et al. (in press).
In S. Laham (Ed.), Handbook of Ethics and Social Psychology.
Cheltenham, UK: Edward Elgar.

Abstract

Moral psychology research often frames participant judgments in terms of adherence to abstract principles, such as utilitarianism or Kant's categorical imperative, and focuses on hypothetical interactions between strangers. However, real-world moral judgments typically involve concrete evaluations of known individuals within specific social relationships. Acknowledging this, a growing number of moral psychologists are shifting their focus to the study of moral judgment in social-relational contexts. This chapter provides an overview of recent work in this area, highlighting strengths and weaknesses, and describes a new 'relational norms' model of moral judgment developed by the authors and colleagues. The discussion is situated within influential philosophical theories of human morality that emphasize relational context, and suggests that these theories should receive more attention from moral psychologists. The chapter concludes by exploring future applications of relational-moral frameworks, such as modeling and predicting norms and judgments related to human-AI cooperation.


It's a great chapter. Here are some thoughts:

The field of moral psychology is undergoing a significant shift, known as the "relational turn." This movement recognizes that real-world morality is deeply embedded in social relationships, rather than being based solely on impartial principles and abstract dilemmas. Researchers are now focusing on the intricate web of social roles, group memberships, and interpersonal dynamics that shape our everyday moral experiences.

Traditional Western philosophical traditions, such as utilitarianism and Kantian deontology, have emphasized impartiality as a cornerstone of moral reasoning. However, empirical evidence suggests that moral judgments are influenced by factors like group membership, relationship type, and social context. This challenges the idea that moral principles should be applied uniformly, regardless of the individuals involved.

The relational context of a situation greatly impacts our moral judgments. For example, helping a stranger move might be seen as kind, but missing work for it seems excessive. Similarly, expecting payment for helping a family member feels at odds with the implicit rules of familial relationships. Philosophical perspectives such as Confucianism, African moral traditions, and feminist care ethics support the importance of relationships in shaping moral norms and obligations.

Evolutionary theory provides a compelling explanation for why relationships matter in moral decision-making. Our moral instincts likely evolved to solve coordination problems and reduce conflict within social groups, primarily consisting of family, kin, and close allies. This "friends-and-family cooperation bias" has led to the development of specific moral norms tailored to different relationship categories.

Research in relational morality highlights the importance of understanding the structure and dynamics of interpersonal relationships. Various relational models, such as Fiske's Relationship Regulation Theory, propose that different relationships are associated with specific moral motives. However, real-life relationships are complex and multifaceted, drawing on multiple models simultaneously.

The developmental trajectory of relational morality suggests that even young children display a preference for friends and family in resource allocation tasks. However, the ability to make nuanced moral judgments based on social roles and relationship types emerges gradually with age.

Emerging research areas within relational morality include impartial beneficence, moral obligations to future generations, and the psychological underpinnings of extending moral concern beyond one's immediate circle of family and friends. By shifting focus from abstract principles to social relationships, researchers can develop more nuanced and ecologically valid models of moral judgment and behavior.

This relational turn promises to deepen our understanding of the social and evolutionary roots of human morality, shedding light on the complex interplay between personal connections and our sense of right and wrong. By recognizing the importance of relationships in moral decision-making, researchers can develop more effective strategies for promoting moral growth, cooperation, and well-being.

Thursday, October 31, 2024

National Trends and Disparities in Suicidal Ideation, Attempts, and Health Care Utilization Among U.S. Adults

Samples, H., Cruz, N., Corr, A., & Akkas, F. (2024).
Psychiatric Services (Washington, D.C.)
Advance online publication.

Abstract

Objective: Recent trends in U.S. suicide rates underscore a need for research on the risk for suicidality. The authors aimed to estimate national trends in suicidal ideation, suicide attempts, and health care utilization by using data from the 2015-2019 National Survey on Drug Use and Health.

Methods: Logistic regression was used to estimate the adjusted odds of past-year suicidal ideation and, among individuals with ideation, past-year suicide attempts, with separate interaction models estimating time trends by sex, age, and race-ethnicity. Time trends were further examined with logistic regression to estimate annual prevalence, overall and by sociodemographic, behavioral, and clinical characteristics. Logistic regression was used to estimate past-year general and mental health care utilization among adults with suicidal ideation. Analyses were survey weighted.

Results: Overall, 4.3% (N=13,195) of adults (N=214,505) reported suicidal ideation, and 13.0% (N=2,009) of those with ideation reported suicide attempts. Increases in prevalence of suicidal ideation, from 4.0% in 2015 to 4.9% in 2019, were significantly higher for young adults ages 18-25 years (p=0.001) than for older adults. Decreases in prevalence of suicide attempts among White adults (by 32.9%) were offset by increases among adults reporting Black (by 48.0%) and multiracial or other (by 82.3%) race-ethnicity. Less than half of adults with suicidal ideation (47.8%) received past-year mental health care, with significantly lower receipt for nearly all minoritized racial-ethnic groups, compared with White adults.

Conclusions: Widening racial-ethnic disparities in suicide attempts and lower mental health care utilization for minoritized groups underscore the importance of developing and implementing equity-focused, evidence-based suicide prevention strategies across health care settings.
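
As a rough illustration of the survey-weighted logistic regression described in the Methods, here is a minimal sketch using pandas and statsmodels. The file name, column names, and weight variable are hypothetical, and a full NSDUH analysis would also account for the survey's complex design (strata and primary sampling units) when estimating variances.

```python
# Illustrative sketch only: past-year suicidal ideation (0/1) regressed on
# survey year and sociodemographic covariates, with person-level analysis
# weights. Column names and the weighting approach are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nsduh_2015_2019.csv")  # hypothetical analytic file

# Outcome and predictors; categorical covariates are one-hot encoded,
# while the numeric year column captures the linear time trend.
y = df["ideation_pastyear"]
X = pd.get_dummies(
    df[["year", "age_group", "sex", "race_ethnicity"]],
    drop_first=True,
    dtype=float,
)
X = sm.add_constant(X)

# freq_weights rescales each row's likelihood contribution to approximate
# the survey weighting noted in the abstract's Methods section.
model = sm.GLM(
    y,
    X,
    family=sm.families.Binomial(),
    freq_weights=df["analysis_weight"],
)
result = model.fit()
print(result.summary())  # adjusted odds ratios are exp(coefficients)
```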

Here are some thoughts: 

A recent study analyzed data from the National Survey on Drug Use and Health (2015-2019) to estimate national trends in suicidal ideation, attempts, and healthcare utilization among U.S. adults. The study found that suicidal ideation increased significantly from 4.0% in 2015 to 4.9% in 2019, with young adults (18-25) showing the highest increase. This age group demonstrated a 21.7% increase in suicidal ideation over the study period.

The prevalence of suicide attempts decreased among White adults by 32.9%, but increased among Black adults by 48.0% and multiracial/other adults by 82.3%. These findings highlight widening racial-ethnic disparities in suicide risk. Additionally, less than half of adults with suicidal ideation received mental health care, with significantly lower utilization among minority groups.

The study identified risk factors for suicidal ideation and attempts, including younger age, unemployment, LGBTQ+ orientation, and sedative/tranquilizer use disorder. These findings emphasize the importance of accounting for a range of factors in surveillance and prevention activities. Notably, age had one of the strongest associations with both ideation and attempts.

The study's results underscore the urgent need for equity-focused, evidence-based suicide prevention strategies. This includes addressing racial-ethnic disparities in suicide risk and healthcare utilization, increasing access to mental health care, particularly for young adults and minority groups, and reducing barriers to mental health care. Implementing suicide prevention interventions in general healthcare settings is also critical.

The study acknowledges limitations, including sampling limitations (e.g., unhoused, institutionalized individuals), potential underreporting of sensitive topics, and observational data limitations. Despite these limitations, the study provides valuable insights into trends in suicidal ideation, attempts, and healthcare utilization, highlighting the need for targeted interventions to address growing suicide risk.

Wednesday, October 30, 2024

Physician Posttraumatic Stress Disorder During COVID-19

Kamra, M., Dhaliwal, S., et al. (2024).
JAMA Network Open, 7(7), e2423316.

Abstract

Importance  The COVID-19 pandemic placed many physicians in situations of increased stress and challenging resource allocation decisions. Insight into the prevalence of posttraumatic stress disorder in physicians and its risk factors during the COVID-19 pandemic will guide interventions to prevent its development.

Objective  To determine the prevalence of posttraumatic stress disorder (PTSD) among physicians during the COVID-19 pandemic and examine variations based on factors, such as sex, age, medical specialty, and career stage.

Data Sources  A Preferred Reporting Items for Systematic Reviews and Meta-analyses–compliant systematic review was conducted, searching MEDLINE, Embase, and PsycINFO, from December 2019 to November 2022. Search terms included MeSH (medical subject heading) terms and keywords associated with physicians as the population and PTSD.

Conclusions and Relevance  In this meta-analysis examining PTSD during COVID-19, 18.3% of physicians reported symptoms consistent with PTSD, with a higher risk in female physicians, older physicians, and trainees, and with variation by specialty. Targeted interventions to support physician well-being during traumatic events like pandemics are required.

Key Points

Question  What is the prevalence of posttraumatic stress disorder (PTSD) among physicians during the COVID-19 pandemic, and how does this vary based on factors such as sex?

Findings  In this systematic review and meta-analysis of 57 studies with 28,965 participants, a higher PTSD prevalence among physicians was found compared with the reported literature on the prevalence before the COVID-19 pandemic and the general population. Women and medical trainees were significantly more likely to develop PTSD, and emergency and family medicine specialties tended to report higher prevalence.

Meaning  These findings suggest that physicians were more likely to experience PTSD during the COVID-19 pandemic, which highlights the importance of further research and policy reform to uphold physician wellness practices.

Tuesday, October 29, 2024

Privacy and Awareness in Human-AI Relationships

Register, C., Khan, M. A., et al. (2024).
Pre-print.

Abstract

Relationships between humans and artificial intelligence (AI) raise new concerns about privacy. AI raises new threats to privacy as it becomes more like humans in language and appearance, more observant, and more inferentially powerful. As humans increasingly form relationships with AI, we expose ourselves in new ways to technology that we don’t fully understand. Further, if AI is given the capacity for some type of awareness, it may be able to infringe privacy in radically new ways. Drawing from recent empirical work in psychology and from the contextual integrity theory of privacy, this article analyzes some of the ways that human-AI relationships may threaten values that privacy functions to promote. We then propose six tentative policies to guide the design and development of AI products to mitigate these threats to privacy.


Here are some thoughts:

The increasing integration of artificial intelligence (AI) into daily life raises significant concerns regarding privacy, particularly as AI becomes more human-like in language and appearance. As humans form relationships with AI, they expose themselves in new and often unintended ways to technology that remains complex and not fully understood. The potential for AI to possess some form of awareness introduces the possibility of radically new privacy infringements. Drawing on recent empirical research in psychology and the contextual integrity theory of privacy, the authors explore how these human-AI relationships may threaten the fundamental values that privacy functions to protect.

To address these emerging threats, the authors propose six tentative policies aimed at guiding the design and development of AI products to better safeguard privacy. AI systems are already prevalent in our lives, collecting vast amounts of information, and their observational capabilities are expected to expand further. This growing presence of observational AI poses a significant risk to privacy, one that is likely to intensify as relationships between humans and AI deepen. The specific implications for privacy ethics remain difficult to predict, shaped by both technological advances and the nuanced psychology underlying human-AI interactions. While the authors outline various ways in which AI may impact privacy and suggest policies to mitigate potential harms, much work remains to foresee and adequately address the privacy risks that lie ahead. Balancing the progress of AI with robust privacy protections will be crucial in navigating this evolving landscape.