Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, November 17, 2025

When being flexible matters: Ecological underpinnings for the evolution of collective flexibility and task allocation

Staps, M., & Tarnita, C. E. (2022).
PNAS, 119(18).

Abstract

Task allocation is a central feature of collective organization. Living collective systems, such as multicellular organisms or social insect colonies, have evolved diverse ways to allocate individuals to different tasks, ranging from rigid, inflexible task allocation that is not adjusted to changing circumstances to more fluid, flexible task allocation that is rapidly adjusted to the external environment. While the mechanisms underlying task allocation have been intensely studied, it remains poorly understood whether differences in the flexibility of task allocation can be viewed as adaptive responses to different ecological contexts—for example, different degrees of temporal variability. Motivated by this question, we develop an analytically tractable mathematical framework to explore the evolution of task allocation in dynamic environments. We find that collective flexibility is not necessarily always adaptive, and fails to evolve in environments that change too slowly (relative to how long tasks can be left unattended) or too quickly (relative to how rapidly task allocation can be adjusted). We further employ the framework to investigate how environmental variability impacts the internal organization of task allocation, which allows us to propose adaptive explanations for some puzzling empirical observations, such as seemingly unnecessary task switching under constant environmental conditions, apparent task specialization without efficiency benefits, and high levels of individual inactivity. Altogether, this work provides a general framework for probing the evolved diversity of task allocation strategies in nature and reinforces the idea that considering a system’s ecology is crucial to explaining its collective organization.

Significance

A central problem in evolutionary biology is explaining variation in the organization of task allocation across collective systems. Why do human cells irreversibly adopt a task during development (e.g., kidney vs. liver cell), while sponge cells switch between different cell types? And why have only some ant species evolved specialized castes of workers for particular tasks? Although it seems reasonable to suppose that such differences reflect, at least partially, the different ecological pressures that systems face, there is no general understanding of how a system’s dynamic environment shapes its task allocation. To this end, we develop a general mathematical framework that reveals how simple ecological considerations could potentially explain cross-system variation in task allocation—including in flexibility, specialization, and (in)activity.

Here are some thoughts:

Of interest to psychologists, this paper by Staps and Tarnita provides a formal ecological and evolutionary framework for understanding the adaptive value of behavioral flexibility, specialization, and inactivity, both in individuals and in groups. 

The model demonstrates that collective flexibility in task allocation—akin to cognitive and behavioral flexibility in humans—is not always advantageous and instead depends critically on the dynamics of the environment. This offers a principled explanation for why some systems, from neural networks to human teams, might exhibit rigid specialization while others maintain fluid, generalist roles. 

Furthermore, the work gives functional explanations for puzzling behaviors that seem suboptimal from a productivity standpoint, such as frequent task-switching even in stable conditions and high levels of inactivity. These insights can inform psychological research on motivation, team dynamics, and organizational behavior by suggesting that such "inefficiencies" may be evolutionary adaptations for enhancing responsiveness to future change. 

The framework bridges the gap between ultimate, evolutionary causes and proximate, mechanistic explanations of how individuals and groups allocate cognitive and behavioral resources.

Friday, November 14, 2025

Guilt drives prosociality across 20 countries

Molho, C., et al. (2025).
Nature Human Behaviour.

Abstract

Impersonal prosociality is considered a cornerstone of thriving civic societies and well-functioning institutions. Previous research has documented cross-societal variation in prosociality using monetary allocation tasks such as dictator games. Here we examined whether different societies may rely on distinct mechanisms—guilt and internalized norms versus shame and external reputation—to promote prosociality. We conducted a preregistered experiment with 7,978 participants across 20 culturally diverse countries. In dictator games, we manipulated guilt by varying information about the consequences of participants’ decisions, and shame by varying observability. We also used individual- and country-level measures of the importance of guilt over shame. We found robust evidence for guilt-driven prosociality and wilful ignorance across countries. Prosociality was higher when individuals received information than when they could avoid it. Furthermore, more guilt-prone individuals (but not countries) were more responsive to information. In contrast, observability by strangers had negligible effects on prosociality. Our findings highlight the importance of providing information about the negative consequences of individuals’ choices to encourage prosocial behaviour across cultural contexts.

Here is a summary of sorts:

A new international study spanning 20 countries suggests that guilt, rather than shame, is the key emotion motivating people to be generous toward anonymous strangers. The research, which used an economic decision-making task known as the dictator game, found that participants consistently acted more generously when they were given full information about how their actions would negatively affect the recipient, an effect linked to avoiding guilt.

Specifically, 60% of participants made the generous choice when they had full information, compared to only 41% when they could opt for willful ignorance. In contrast, making the participants' decisions public to activate reputational concerns and potential shame had a negligible effect on generosity across all cultures. 

In short: Knowing you might cause harm and feeling responsible (guilt) is what drives people to be generous, even when dealing with strangers, not the fear of being judged by others (shame).

Thursday, November 13, 2025

Moral decision-making in AI: A comprehensive review and recommendations

Ram, J. (2025).
Technological Forecasting and Social Change, 217, 124150.

Abstract

The increased reliance on artificial intelligence (AI) systems for decision-making has raised corresponding concerns about the morality of such decisions. However, knowledge on the subject remains fragmentary, and cogent understanding is lacking. This study addresses the gap by using Templier and Paré's (2015) six-step framework to perform a systematic literature review on moral decision-making by AI systems. A data sample of 494 articles was analysed to filter 280 articles for content analysis. Key findings are as follows: (1) Building moral decision-making capabilities in AI systems faces a variety of challenges relating to human decision-making, technology, ethics and values. The absence of consensus on what constitutes moral decision-making and the absence of a general theory of ethics are at the core of such challenges. (2) The literature is focused on narrative building; modelling or experiments/empirical studies are less illuminating, which causes a shortage of evidence-based knowledge. (3) Knowledge development is skewed towards a few domains, such as healthcare and transport. Academically, the study developed a four-pronged classification of challenges and a four-dimensional set of recommendations covering 18 investigation strands, to steer research that could resolve conflict between different moral principles and build a unified framework for moral decision-making in AI systems.


Highlights

• Moral decision-making in AI faces a variety of challenges relating to human decision-making complexity, technology, ethics, and use/legal issues
• Lack of consensus about 'what moral decision-making is' is one of the biggest challenges in imbuing AI with morality
• A focus on narrative building, with relatively little modeling or experimental/empirical work, hampers evidence-based knowledge development
• Knowledge development is skewed towards a few domains (e.g., healthcare) limiting a well-rounded systematic understanding
• Extensive work is needed on resolving technological complexities, and understanding human decision-making processes

Here is my concern:

We are trying to automate a human capability we don't fully understand, using tools we are still learning to utilize, to achieve a goal we can't universally define. The study brilliantly captures the profound complexity of this endeavor, showing that the path to a "moral machine" is as much about understanding ourselves as it is about advancing technology.

Wednesday, November 12, 2025

Self-Improvement in Multimodal Large Language Models: A Survey

Deng, S., Wang, K., et al. (2025, October 3).
arXiv.org.

Abstract

Recent advancements in self-improvement for Large Language Models (LLMs) have efficiently enhanced model capabilities without significantly increasing costs, particularly in terms of human effort. While this area is still relatively young, its extension to the multimodal domain holds immense potential for leveraging diverse data sources and developing more general self-improving models. This survey is the first to provide a comprehensive overview of self-improvement in Multimodal LLMs (MLLMs). We provide a structured overview of the current literature and discuss methods from three perspectives: 1) data collection, 2) data organization, and 3) model optimization, to facilitate the further development of self-improvement in MLLMs. We also include commonly used evaluations and downstream applications. Finally, we conclude by outlining open challenges and future research directions.

Here are some thoughts that summarize this paper: MLLMs are learning to improve without human oversight.

This survey presents the first comprehensive overview of self-improvement in Multimodal Large Language Models (MLLMs), a rapidly emerging paradigm that enables models to autonomously generate, curate, and learn from their own multimodal data to enhance performance without heavy reliance on human annotation. The authors structure the self-improvement pipeline into three core stages: data collection (e.g., via random sampling, guided generation, or negative sample synthesis), data organization (including verification through rules, external or self-based evaluators, and dataset refinement), and model optimization (using techniques like supervised fine-tuning, reinforcement learning, or Direct Preference Optimization). The paper reviews representative methods, benchmarks, and real-world applications in domains such as math reasoning, healthcare, and embodied AI, while also outlining key challenges—including modality alignment, hallucination, limited seed model capabilities, verification reliability, and scalability. The goal is to establish a clear taxonomy and roadmap to guide future research toward more autonomous, general, and robust self-improving MLLMs.

Tuesday, November 11, 2025

The AI Frontier in Humanitarian Aid — Embracing Possibilities and Addressing Risks

Barry, M., Hansen, J., & Darmstadt, G. L. (2025).
New England Journal of Medicine.

Here is how it opens:

During disasters, timely response is critical. For example, after an earthquake — such as the 7.7-magnitude earthquake that devastated Myanmar in March 2025 — people who are trapped under collapsed buildings face a steep decline in their chance of survival after 48 hours. Yet the scope of devastation, combined with limited resources for disaster response and uncertainty about on-the-ground conditions, can constrain rescue efforts. Responders have recently had a new tool at their disposal, however: artificial intelligence (AI).

Shortly after the Myanmar earthquake, a satellite captured images of the affected area, which were sent to Microsoft’s AI for Good Lab. Machine-learning tools were used to analyze the images and assess the location, extent, nature, and severity of the damage.1 Such information, which was gained without the grave risks inherent to entering an unstable disaster zone and much more rapidly than would have been possible with traditional data-gathering and analysis methods, can help organizations quickly and safely prioritize relief efforts in areas that are both highly damaged and densely populated.2 This example reflects one of several ways in which AI is being used to support humanitarian efforts in disaster and conflict zones.

Global conflicts, infectious diseases, natural disasters driven by climate change, and increases in the number of refugees worldwide are magnifying the need for humanitarian services. Regions facing these challenges commonly contend with diminished health care systems, damage to other infrastructure, and shortages of health care workers. The dismantling of the U.S. Agency for International Development and the weakening of the U.S. Centers for Disease Control and Prevention and the U.S. State Department further jeopardize access to vital funding, constrain supply chains, and weaken the capacity for humanitarian response.

The article is linked above.

Here are some thoughts:

This article outlines the transformative potential of AI as a novel and powerful tool in the realm of humanitarian aid and crisis response. It moves beyond theory to present concrete applications where AI is being deployed to save lives and increase efficiency in some of the world's most challenging environments. Key innovative uses include leveraging AI with satellite imagery to perform rapid damage assessments after disasters, enabling responders to quickly and safely identify the most critically affected areas. Furthermore, AI is being used to predict disasters through early-warning systems, support refugees with AI-powered chatbots that provide vital information in multiple languages, optimize the delivery of supplies via drones, and enhance remote healthcare by interpreting diagnostic images like radiographs. However, the article strongly cautions that this promising frontier is accompanied by significant challenges, including technical and financial barriers, the risk of algorithmic bias, and serious ethical concerns regarding privacy and human rights, necessitating a responsible and collaborative approach to its development and deployment.


Monday, November 10, 2025

Moral injury is independently associated with suicidal ideation and suicide attempt in high-stress, service-oriented occupations

Griffin, B. J., et al. (2025).
npj Mental Health Research, 4(1).

Abstract

This study explores the link between moral injury and suicidal thoughts and behaviors among US military veterans, healthcare workers, and first responders (N = 1232). Specifically, it investigates the risk associated with moral injury that is not attributable to common mental health issues. Among the participants, 12.1% reported experiencing suicidal ideation in the past two weeks, and 7.4% had attempted suicide in their lifetime. Individuals who screened positive for probable moral injury (6.0% of the sample) had significantly higher odds of current suicidal ideation (AOR = 3.38, 95% CI = 1.65, 6.96) and lifetime attempt (AOR = 6.20, 95% CI = 2.87, 13.40), even after accounting for demographic, occupational, and mental health factors. The findings highlight the need to address moral injury alongside other mental health issues in comprehensive suicide prevention programs for high-stress, service-oriented professions.

Here are some thoughts:

This study found that moral injury—psychological distress resulting from events that violate one's moral beliefs—is independently associated with a significantly higher risk of suicidal ideation and suicide attempts among high-stress, service-oriented professionals, including military veterans, healthcare workers, and first responders. Even after accounting for factors like PTSD and depression, those screening positive for probable moral injury had approximately three times higher odds of recent suicidal ideation and six times higher odds of a lifetime suicide attempt. The findings highlight the need to address moral injury specifically within suicide prevention efforts for these populations.

Sunday, November 9, 2025

The Cruelty is the Point: Harming the Most Vulnerable in America

This administration has weaponized bureaucracy, embarking on a chilling campaign of calculated cruelty. While many children, disabled people, the poor, and the working poor grapple with profound food insecurity, its response is not to strengthen the social safety net but to actively shred it.

They are zealously fighting all the way to the Supreme Court for the right to let families go hungry, stripping SNAP benefits from the most vulnerable. 

Yet the most deafening sound is the silence from the GOP—a complicit chorus where not a single supposed fiscal hawk or moral conservative dares to stand against this raw, unadulterated malice. 

Their collective inaction reveals a party that has abandoned any pretense of compassion, proving that for them, the poor and struggling are not a priority to protect, but a problem to be punished.

Saturday, November 8, 2025

Beyond right and wrong: A new theoretical model for understanding moral injury

Vaknin, O., & Ne’eman-Haviv, V. (2025).
European Journal of Trauma & Dissociation, 9(3), 100569.

Abstract

Recent research has increasingly focused on the role of moral frameworks in understanding trauma and traumatic events, leading to the recognition of "moral injury" as a clinical syndrome. Although various definitions exist, there is still a lack of consensus on the nature and consequences of moral injury. This article proposes a new theoretical model that broadens the study of moral injury to include diverse populations, suggesting it arises not only from traumatic experiences but also from conflicts between moral ideals and reality. By integrating concepts such as prescriptive cognitions, post hoc thinking, and cognitive flexibility, the model portrays moral injury as existing on a continuum, affecting a wide range of individuals. The article explores implications for treatment and emphasizes the need for follow-up empirical studies to validate the proposed model. It also suggests the possibility that moral injury is on a continuum, in addition to the possibility of explaining this process. This approach offers new insights into prevention and intervention strategies, highlighting the broader applicability of moral injury beyond military contexts.

Here are some thoughts:

This article proposes a new model suggesting that moral injury is not just a result of clear-cut moral violations (like in combat), but can also arise from everyday moral dilemmas where a person is forced to choose between competing "rights" or is unable to act according to their moral ideals due to external constraints.

Key points of the new model:

Core Cause: Injury stems from the internal conflict and tension between one's moral ideals ("prescriptive cognitions") and the reality of a situation, not necessarily from a traumatic betrayal or act.

The Process: It happens when a person faces a moral dilemma, makes a necessary but imperfect decision, experiences moral failure, and then gets stuck in negative "post-hoc" thinking without the cognitive flexibility to adapt their moral framework.

Broader Impact: This expands moral injury beyond soldiers to include civilians and professionals like healthcare workers, teachers, and social workers who face systemic ethical challenges.

New Treatment Approach: Healing should focus less on forgiveness for a specific wrong and more on building cognitive flexibility and helping people integrate moral suffering into a more adaptable moral identity.

In short, the article argues that moral injury exists on a spectrum and is a broader disturbance of one's moral worldview, not just a clinical syndrome from a single, overtly traumatic event.

Friday, November 7, 2025

High Self-Control Individuals Prefer Meaning over Pleasure

Bernecker, K., Becker, D., & Guobyte, A. (2025).
Social Psychological and Personality Science.

Abstract

The link between self-control and success in various life domains is often explained by people avoiding hedonic pleasures, such as through inhibition, making the right choices, or using adaptive strategies. We propose an additional explanation: High self-control individuals prefer spending time on meaningful activities rather than pleasurable ones, whereas the opposite is true for individuals with high trait hedonic capacity. In Studies 1a and 1b, participants either imagined (N = 449) or actually engaged in activities (N = 231, pre-registered) during unexpected free time. They then rated their experience. In both studies, trait self-control was positively related to the eudaimonic experience (e.g., meaning) of activities and unrelated to their hedonic experience (e.g., pleasure). The opposite was true for trait hedonic capacity. Study 2 (N = 248) confirmed these findings using a repeated-choice paradigm. The preference for eudaimonic over hedonic experiences may be a key aspect of successful long-term goal pursuit.


Here are some thoughts:

This research proposes a new explanation for why people with high self-control are successful. Rather than just being good at resisting temptation, they have a fundamental preference for activities that feel meaningful and valuable, known as eudaimonic experiences.

Across three studies, individuals with high trait self-control consistently chose to spend their free time on activities they found meaningful, both in hypothetical scenarios and in real-life situations. Conversely, individuals with a high "trait hedonic capacity"—a natural skill for enjoying simple pleasures—showed a clear preference for activities that were pleasurable and fun. The studies found that these traits predict not just what people choose to do, but also how they experience the same activities; a person with high self-control will find more meaning in an activity than their peers, while a person with high hedonic capacity will find more pleasure in it.

This inherent preference for meaning over pleasure may be a key reason why those with high self-control find it easier to pursue long-term goals, as they are naturally drawn to the sense of purpose that such goal-directed actions provide.