Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, July 18, 2025

Adversarial testing of global neuronal workspace and integrated information theories of consciousness

Ferrante, O., et al. (2025).
Nature.

Abstract

Different theories explain how subjective experience arises from brain activity. These theories have independently accrued evidence, but have not been directly compared. Here we present an open science adversarial collaboration directly juxtaposing integrated information theory (IIT) and global neuronal workspace theory (GNWT) via a theory-neutral consortium. The theory proponents and the consortium developed and preregistered the experimental design, divergent predictions, expected outcomes and interpretation thereof. Human participants (n = 256) viewed suprathreshold stimuli for variable durations while neural activity was measured with functional magnetic resonance imaging, magnetoencephalography and intracranial electroencephalography. We found information about conscious content in visual, ventrotemporal and inferior frontal cortex, with sustained responses in occipital and lateral temporal cortex reflecting stimulus duration, and content-specific synchronization between frontal and early visual areas. These results align with some predictions of IIT and GNWT, while substantially challenging key tenets of both theories. For IIT, a lack of sustained synchronization within the posterior cortex contradicts the claim that network connectivity specifies consciousness. GNWT is challenged by the general lack of ignition at stimulus offset and limited representation of certain conscious dimensions in the prefrontal cortex. These challenges extend to other theories of consciousness that share some of the predictions tested here. Beyond challenging the theories, we present an alternative approach to advance cognitive neuroscience through principled, theory-driven, collaborative research and highlight the need for a quantitative framework for systematic theory testing and building.

Here are some thoughts:

This research explores a major collaborative effort to empirically test two leading theories of consciousness: Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT). These theories represent two of the most prominent perspectives among the more than 200 ideas currently proposed to explain how subjective experience arises from brain activity. GNWT suggests that consciousness occurs when information is globally broadcast across the brain, particularly involving the prefrontal cortex. In contrast, IIT posits that consciousness corresponds to the integration of information in the brain, especially within the posterior cortex.

To evaluate these theories, the Cogitate Consortium organized an “adversarial collaboration,” in which proponents of both theories, along with neutral researchers, agreed on specific, testable predictions derived from each model. IIT predicted that conscious experience should involve sustained synchronization of activity in the posterior cortex, while GNWT predicted that consciousness would involve a “neural ignition” process and that conscious content could be decoded from the prefrontal cortex. These hypotheses were tested across several labs using consistent experimental protocols.
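
To make the idea of "decoding conscious content" concrete, here is a minimal sketch of the kind of pattern-classification analysis such claims rest on. The simulated data, feature counts, and scikit-learn pipeline below are illustrative assumptions only; they are not the consortium's actual analysis.

```python
# Illustrative only: a toy "decoding" analysis in the spirit of MVPA.
# Real studies use measured fMRI/MEG/iEEG responses, not simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 200, 50           # trials x recording channels/voxels (made up)
labels = rng.integers(0, 2, n_trials)    # two stimulus categories, e.g. faces vs. objects

# Simulated neural patterns: category-1 trials get a small mean shift on some features,
# standing in for content-specific information carried by a brain region.
signal = np.zeros((n_trials, n_features))
signal[labels == 1, :10] += 0.5
data = signal + rng.normal(size=(n_trials, n_features))

# If the classifier predicts the stimulus category better than chance (0.5),
# the region is said to "carry information about" the conscious content.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, data, labels, cv=5)
print(f"Mean cross-validated decoding accuracy: {scores.mean():.2f}")
```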

The findings, however, were inconclusive. The data did not reveal the sustained posterior synchronization expected by IIT, nor did it consistently support GNWT’s predictions about prefrontal cortex activity and neural ignition. Although the results presented challenges for both theories, they did not decisively support or refute either one. Importantly, the study marked a significant step forward in the scientific investigation of consciousness. It demonstrated the value of collaborative, theory-neutral research and addressed a long-standing problem in consciousness science—namely, that most studies have been conducted by proponents of specific theories, often resulting in confirmation bias.

The project was also shaped by insights from psychologist Daniel Kahneman, who pioneered the idea of adversarial collaboration. He noted that scientists are rarely persuaded to abandon their theories even in the face of counter-evidence. While this kind of theoretical stubbornness might seem like a flaw, the article argues it can be productive when managed within a collaborative and self-correcting scientific culture. Ultimately, the study underscores how difficult it is to unravel the nature of consciousness and suggests that progress may require both improved experimental methods and potentially a conceptual revolution. Still, by embracing open collaboration, the scientific community has taken a crucial step toward better understanding one of the most complex problems in science.

Thursday, July 17, 2025

Cognitive bias and how to improve sustainable decision making

Korteling, J. E. H., Paradies, G. L., &
Sassen-van Meer, J. P. (2023). 
Frontiers in Psychology, 14, 1129835.

Abstract

The rapid advances of science and technology have provided a large part of the world with all conceivable needs and comfort. However, this welfare comes with serious threats to the planet and many of its inhabitants. An enormous amount of scientific evidence points at global warming, mass destruction of bio-diversity, scarce resources, health risks, and pollution all over the world. These facts are generally acknowledged nowadays, not only by scientists, but also by the majority of politicians and citizens. Nevertheless, this understanding has caused insufficient changes in our decision making and behavior to preserve our natural resources and to prevent upcoming (natural) disasters. In the present study, we try to explain how systematic tendencies or distortions in human judgment and decision-making, known as “cognitive biases,” contribute to this situation. A large body of literature shows how cognitive biases affect the outcome of our deliberations. In natural and primordial situations, they may lead to quick, practical, and satisfying decisions, but these decisions may be poor and risky in a broad range of modern, complex, and long-term challenges, like climate change or pandemic prevention. We first briefly present the social-psychological characteristics that are inherent to (or typical for) most sustainability issues. These are: experiential vagueness, long-term effects, complexity and uncertainty, threat of the status quo, threat of social status, personal vs. community interest, and group pressure. For each of these characteristics, we describe how this relates to cognitive biases, from a neuro-evolutionary point of view, and how these evolved biases may affect sustainable choices or behaviors of people. Finally, based on this knowledge, we describe influence techniques (interventions, nudges, incentives) to mitigate or capitalize on these biases in order to foster more sustainable choices and behaviors.

Here are some thoughts:

The article explores why, despite widespread scientific knowledge and public awareness of urgent sustainability issues such as climate change, biodiversity loss, and pollution, there is still insufficient behavioral and policy change to effectively address these problems. The authors argue that cognitive biases, systematic errors in human thinking, play a significant role in hindering sustainable decision-making. These biases evolved to help humans make quick decisions in immediate, simple contexts but are poorly suited for the complex, long-term, and abstract nature of sustainability challenges.

Sustainability issues have several psychological characteristics that make them particularly vulnerable to cognitive biases. These include experiential vagueness, where problems develop slowly and are difficult to perceive directly; long-term effects, where benefits of sustainable actions are delayed while costs are immediate; complexity and uncertainty; threats to the status quo and social standing; conflicts between personal and community interests; and social pressures that discourage sustainable behavior. The article highlights specific cognitive biases linked to these characteristics, such as hyperbolic discounting (the preference for immediate rewards over future benefits), normalcy bias (underestimating the likelihood and impact of disasters), and the tragedy of the commons (prioritizing personal gain over collective welfare), along with others like confirmation bias, the endowment effect, and sunk-cost fallacy, all of which skew judgment and impede sustainable choices.
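
To make one of these biases concrete, here is a minimal sketch of the standard hyperbolic discounting form, V = A / (1 + kD), where A is the reward, D the delay, and k an individual discount rate. The parameter values are illustrative assumptions, not figures from the article.

```python
# Illustrative only: hyperbolic discounting, V = A / (1 + k * D).
# Parameter values are made up for illustration; they are not from the article.

def hyperbolic_value(amount: float, delay: float, k: float = 0.1) -> float:
    """Subjective present value of a reward received after `delay` time units."""
    return amount / (1.0 + k * delay)

# A small immediate benefit (e.g. cheap energy today) vs. a much larger
# delayed benefit (e.g. avoided climate damage decades from now).
immediate = hyperbolic_value(amount=100, delay=0)      # 100.0
delayed   = hyperbolic_value(amount=1000, delay=360)   # ~27.0 with k = 0.1

print(f"Immediate benefit feels worth {immediate:.1f}")
print(f"Much larger delayed benefit feels worth only {delayed:.1f}")
# The delayed option is objectively 10x larger but is subjectively devalued,
# one mechanism behind the under-weighting of long-term sustainability gains.
```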

To address these challenges, the authors recommend interventions that leverage or counteract these biases through environmental and contextual changes rather than solely relying on education or bias training. Techniques such as nudges, incentives, framing effects, and emphasizing benefits to family or in-groups can make sustainable choices easier and more appealing. The key takeaway is that understanding and addressing cognitive biases is essential for improving sustainable decision-making at both individual and policy levels. Policymakers and organizations should design interventions that account for human psychological tendencies to foster more sustainable behaviors effectively.

Wednesday, July 16, 2025

The moral blueprint is not necessary for STEM wisdom

Kachhiyapatel, N., & Grossmann, I. (2025, June 11).
PsyArXiv

Abstract

How can one bring wisdom into STEM education? One popular position holds that wise judgment follows from teaching morals and ethics in STEM. However, wisdom scholars debate the causal role of morality and whether cultivating a moral blueprint is a necessary condition for wisdom. Some philosophers and education scientists champion this view, whereas social psychologists and cognitive scientists argue that moral features like prosocial behavior are reinforcing factors or outcomes of wise judgment rather than pre-requisites. This debate matters particularly for science and technology, where wisdom-demanding decisions typically involve incommensurable values and radical uncertainty. Here, we evaluate these competing positions through four lines of evidence. First, empirical research shows that heightened moralization aligns with foolish rejection of scientific claims, political polarization, and value extremism. Second, economic scholarship on folk theorems demonstrates that wisdom-related metacognition—perspective-integration, context-sensitivity, and balancing long- and short-term goals—can give rise to prosocial behavior without an apriori moral blueprint. Third, in real life moral values often compete, making metacognition indispensable to balance competing interests for the common good. Fourth, numerous scientific domains require wisdom yet operate beyond moral considerations. We address potential objections about immoral and Machiavellian applications of blueprint-free wisdom accounts. Finally, we explore implications for giftedness: what exceptional wisdom looks like in STEM context, and how to train it. Our analysis suggests that STEM wisdom emerges not from prescribed moral codes but from metacognitive skills that enable navigation of complexity and uncertainty.

Here are some thoughts:

This article challenges the idea that wisdom in STEM and other complex domains requires a fixed moral blueprint. Instead, it highlights perspectival metacognition—skills like perspective-taking, intellectual humility, and balancing short- and long-term outcomes—as the core of wise judgment.

For psychologists, this suggests that strong moral convictions alone can sometimes impair wisdom by fostering rigidity or polarization. The findings support a shift in ethics training, supervision, and professional development toward cultivating reflective, context-sensitive thinking. Rather than relying on standardized assessments or fixed values, fostering metacognitive skills may better prepare psychologists and their clients to navigate complex, high-stakes decisions with wisdom and flexibility.

Tuesday, July 15, 2025

Medical AI and Clinician Surveillance — The Risk of Becoming Quantified Workers

Cohen, I. G., Ajunwa, I., & Parikh, R. B. (2025).
New England Journal of Medicine.
Advance online publication.

Here is an excerpt:

There are several ways in which AI-based monitoring tools designed to benefit patients and clinicians might be used for clinician surveillance. First, ambient AI scribe tools, which transcribe and interpret patient and clinician speech to generate a structured note, have been rapidly adopted with a goal of reducing the burden associated with documentation and improving documentation accuracy. But ambient dictation systems introduce new capabilities for monitoring clinicians. By analyzing speech patterns, sentiment, and content, health care systems could use AI scribes to assess how often clinicians’ recommendations deviate from institutional guidelines.

In addition, these systems could detect “efficiency outliers” — clinicians who spend more time conversing with patients than employers consider ideal, at the expense of conducting new-patient visits or more total visits. Ambient monitoring is especially worrisome, given cases of employers terminating the contracts of physicians who didn’t meet visit-time expectations. Akin to automated quality-improvement dashboards for tracking adherence to chronic-disease–management standards, AI models may generate performance scores on the basis of adherence to scripted protocols, average time spent with each patient, or degree of shared decision making, which could be inferred with the use of linguistic analysis. Even if these metrics are established to support quality-improvement goals, hospitals and health care systems could leverage them for evaluations of clinicians or performance-based reimbursement adjustments.
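
To show how little machinery the monitoring described above would require, here is a deliberately simple, hypothetical sketch of a visit-time "efficiency outlier" flag. All names, numbers, and the z-score threshold are invented for illustration; they do not describe any real product or the article's methodology.

```python
# Hypothetical sketch of an "efficiency outlier" flag computed from visit durations
# that an ambient scribe system could derive from transcript timestamps.
# All data and thresholds are invented for illustration.
from statistics import mean, stdev

visit_minutes = {
    "clinician_a": [14, 16, 15, 18],
    "clinician_b": [32, 29, 35, 31],   # spends longer in conversation with patients
    "clinician_c": [17, 15, 16, 14],
}

averages = {name: mean(times) for name, times in visit_minutes.items()}
mu, sigma = mean(averages.values()), stdev(averages.values())

# Flag anyone more than one standard deviation above the group mean.
for name, avg in averages.items():
    z = (avg - mu) / sigma
    if z > 1.0:
        print(f"{name}: avg {avg:.1f} min, z = {z:.2f} -> flagged as 'efficiency outlier'")
```

The point of the sketch is that the troubling step is not technical sophistication but the decision to repurpose scribe-derived data for clinician evaluation.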

Here are some thoughts:

This article is important to psychologists as it explores the psychological and ethical ramifications of AI-driven surveillance in healthcare, which parallels concerns in mental health practice. The quantification of clinicians through tools like ambient scribes and communication analytics threatens professional autonomy, potentially leading to burnout, stress, and reduced job satisfaction—key areas of study in occupational and health psychology. Additionally, the tension between algorithmic conformity and individualized care mirrors challenges in therapeutic settings, where standardized protocols may conflict with personalized treatment approaches. Psychologists can contribute expertise in human behavior, workplace dynamics, and ethical frameworks to advocate for balanced AI integration that prioritizes clinician well-being and patient-centered care. The article also highlights equity issues, as surveillance may disproportionately affect marginalized clinicians, aligning with psychology’s focus on systemic inequities.

Monday, July 14, 2025

Promises and pitfalls of large language models in psychiatric diagnosis and knowledge tasks

Bang, C.-B., Jung, Y.-C., et al. (2025).
The British Journal of Psychiatry,
226(4), 243–244.

Abstract:

This study evaluates the performance of five large language models (LLMs), including GPT-4, in psychiatric diagnosis and knowledge tasks using a zero-shot approach. Compared to 11 psychiatry residents, GPT-4 demonstrated superior accuracy in diagnostic (F1 score: 63.41% vs. 47.43%) and knowledge tasks (85.05% vs. 62.01%). However, GPT-4 exhibited higher comorbidity error rates (30.48% vs. 0.87%), suggesting limitations in contextual understanding. When residents received GPT-4 guidance, their performance improved significantly without increasing critical errors. The findings highlight the potential of LLMs as clinical aids but underscore the need for careful integration to preserve human expertise and mitigate risks like over-reliance. Future research should compare LLMs with board-certified psychiatrists and explore multifaceted diagnostic frameworks.
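
The abstract reports diagnostic performance as an F1 score. For readers unfamiliar with the metric, here is a minimal sketch of the standard definition, F1 = 2PR / (P + R); the counts below are illustrative and are not the study's data.

```python
# Illustrative only: the standard F1 definition. Counts are made up, not the study's data.

def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: a model assigns diagnoses to 100 vignettes.
print(f"F1 = {f1_score(true_positives=52, false_positives=18, false_negatives=30):.2%}")
# precision = 52/70 ≈ 0.743, recall = 52/82 ≈ 0.634, F1 ≈ 0.684
```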

Here are some thoughts:

For psychologists, these findings underscore the importance of balancing AI-assisted efficiency with human judgment. While LLMs could serve as valuable training aids or supplemental tools, their limitations emphasize the irreplaceable role of psychologists in interpreting complex patient narratives, cultural factors, and individualized care. Additionally, the study raises ethical considerations about over-reliance on AI, urging psychologists to maintain rigorous critical thinking and therapeutic rapport. Ultimately, this research calls for a thoughtful, evidence-based approach to integrating AI into mental health practice—one that leverages technological advancements while preserving the human elements essential to effective psychological care.

Sunday, July 13, 2025

ChatGPT is pushing people towards mania, psychosis and death – and OpenAI doesn’t know how to stop it

Anthony Cuthbertson
The Independent
Originally posted 6 July 2025

Here is an excerpt:

“There have already been deaths from the use of commercially available bots,” they noted. “We argue that the stakes of LLMs-as-therapists outweigh their justification and call for precautionary restrictions.”

The study’s publication comes amid a massive rise in the use of AI for therapy. Writing in The Independent last week, psychotherapist Caron Evans noted that a “quiet revolution” is underway with how people are approaching mental health, with artificial intelligence offering a cheap and easy option to avoid professional treatment.

“From what I’ve seen in clinical supervision, research and my own conversations, I believe that ChatGPT is likely now to be the most widely used mental health tool in the world,” she wrote. “Not by design, but by demand.”

The Stanford study found that the dangers involved with using AI bots for this purpose arise from their tendency to agree with users, even if what they’re saying is wrong or potentially harmful. This sycophancy is an issue that OpenAI acknowledged in a May blog post, which detailed how the latest ChatGPT had become “overly supportive but disingenuous”, leading to the chatbot “validating doubts, fueling anger, urging impulsive decisions, or reinforcing negative emotions”.

While ChatGPT was not specifically designed to be used for this purpose, dozens of apps have appeared in recent months that claim to serve as an AI therapist. Some established organisations have even turned to the technology – sometimes with disastrous consequences. In 2023, the National Eating Disorders Association in the US was forced to shut down its AI chatbot Tessa after it began offering users weight loss advice.

Here are some thoughts:

The article warns that AI chatbots like ChatGPT are increasingly being used for mental health support, often with dangerous consequences. A Stanford study found that these chatbots can validate harmful thoughts, reinforce negative emotions, and provide unsafe information—escalating crises like suicidal ideation, mania, and psychosis. Real-world cases include a Florida man with schizophrenia who became obsessed with an AI-generated persona and later died in a police confrontation. Experts warn of a phenomenon called “chatbot psychosis,” where AI interactions intensify delusions in vulnerable individuals. Despite growing awareness, OpenAI has not adequately addressed the risks, and researchers call for urgent restrictions on using AI as a therapeutic tool. While companies like Meta see AI as the future of mental health care, critics stress that more data alone won't solve the problem, and current safeguards are insufficient.

Saturday, July 12, 2025

Brain-Inspired Affective Empathy Computational Model and Its Application on Altruistic Rescue Task

Feng, H., Zeng, Y., & Lu, E. (2022).
Frontiers in Computational Neuroscience,
16, 784967.

Abstract

Affective empathy is an indispensable ability for humans and other species' harmonious social lives, motivating altruistic behavior, such as consolation and aid-giving. How to build an affective empathy computational model has attracted extensive attention in recent years. Most affective empathy models focus on the recognition and simulation of facial expressions or emotional speech of humans, namely Affective Computing. However, these studies lack the guidance of neural mechanisms of affective empathy. From a neuroscience perspective, affective empathy is formed gradually during the individual development process: experiencing one's own emotion, forming the corresponding Mirror Neuron System (MNS), and understanding the emotions of others through the mirror mechanism. Inspired by this neural mechanism, we constructed a brain-inspired affective empathy computational model that contains two submodels: (1) We designed an Artificial Pain Model inspired by the Free Energy Principle (FEP) to simulate the pain generation process in living organisms. (2) We built an affective empathy spiking neural network (AE-SNN) that simulates the mirror mechanism of MNS and has self-other differentiation ability. We apply the brain-inspired affective empathy computational model to the pain empathy and altruistic rescue task to achieve the rescue of companions by intelligent agents. To the best of our knowledge, our study is the first one to reproduce the emergence process of mirror neurons and anti-mirror neurons in the SNN field. Compared with traditional affective empathy computational models, our model is more biologically plausible, and it provides a new perspective for achieving artificial affective empathy, which has special potential for the social robots field in the future.
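
As background for the spiking-neural-network substrate the abstract refers to, here is a minimal leaky integrate-and-fire neuron, the canonical building block of such networks. This is a generic textbook sketch, not the authors' AE-SNN or their Artificial Pain Model, and all parameters are illustrative.

```python
# Generic leaky integrate-and-fire (LIF) neuron: a textbook building block of
# spiking neural networks. This is NOT the paper's AE-SNN; parameters are illustrative.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the membrane potential trace and spike times for a simple LIF neuron."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest and is driven by input.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset            # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold input produces regular spiking.
current = np.full(200, 1.5)
_, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.0f} ms")
```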

Here are some thoughts:

This article is significant because it highlights a growing effort to imbue machines with complex human-like experiences and behaviors, such as pain and altruism—traits that are deeply rooted in human psychology and evolution. By attempting to program pain, researchers are not merely simulating a sensory reaction but exploring how discomfort or negative feedback might influence learning, decision-making, and self-preservation in AI systems.

This has profound psychological implications, as it touches on how emotions and aversive experiences shape behavior and consciousness in humans. Similarly, programming altruism raises questions about the nature of empathy, cooperation, and moral reasoning—core areas of interest in social and cognitive psychology. Understanding how these traits can be modeled in AI helps psychologists explore the boundaries of machine autonomy, ethical behavior, and the potential consequences of creating entities that mimic human emotional and moral capacities. The broader implication is that this research challenges traditional psychological concepts of mind, consciousness, and ethics, while also prompting critical discussions about how such AI systems might interact with and influence human societies in the future.

Friday, July 11, 2025

Artificial intelligence in psychological practice: Applications, ethical considerations, and recommendations

Hutnyan, M., & Gottlieb, M. C. (2025).
Professional Psychology: Research and Practice.
Advance online publication.

Abstract

Artificial intelligence (AI) systems are increasingly relied upon in the delivery of health care services traditionally provided solely by humans, and the widespread use of AI in the routine practice of professional psychology is on the horizon. It is incumbent on practicing psychologists to be prepared to effectively implement AI technologies and engage in thoughtful discourse regarding the ethical and responsible development, implementation, and regulation of these technologies. This article provides a brief overview of what AI is and how it works, a description of its current and potential future applications in professional practice, and a discussion of the ethical implications of using AI systems in the delivery of psychological services. Applications of AI technologies in key areas of clinical practice are addressed, including assessment and intervention. Using the Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017) as a framework, anticipated ethical challenges across five domains—harm and nonmaleficence, autonomy and informed consent, fidelity and responsibility, privacy and confidentiality, and bias, respect, and justice—are discussed. Based on these challenges, provisional recommendations for psychologists are provided.

Impact Statement

This article provides an overview of artificial intelligence (AI) and how it works, describes current and developing applications of AI in the practice of professional psychology, and explores the potential ethical challenges of using these technologies in the delivery of psychological services. The use of AI in professional psychology has many potential benefits, but it also has drawbacks; ethical psychologists are wise to carefully consider their use of AI in practice.

Thursday, July 10, 2025

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel

Caspar, E. A., et al. (2025).
Cerebral Cortex, 35(3).

Abstract

The sense of agency, the feeling of being the author of one’s actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed. This study investigated the neural correlates of sense of agency in civilians and military officer cadets, examining free and coerced choices in both agent and commander roles. Using a functional magnetic resonance imaging paradigm where participants could either freely choose or follow orders to inflict a mild shock on a victim, we assessed sense of agency through temporal binding—a temporal distortion between voluntary and less voluntary decisions. Our findings suggested that sense of agency is reduced when following orders compared to acting freely in both roles. Several brain regions correlated with temporal binding, notably the occipital lobe, superior/middle/inferior frontal gyrus, precuneus, and lateral occipital cortex. Importantly, no differences emerged between military and civilians at corrected thresholds, suggesting that daily environments have minimal influence on the neural basis of moral decision-making, enhancing the generalizability of the findings.

Here are some thoughts:

The study found that when individuals obeyed direct orders to perform a morally questionable act—such as delivering an electric shock—they experienced a significantly diminished sense of agency, or personal responsibility, for that action. This diminished agency was measured using the temporal binding effect, which was weaker under coercion compared to when participants freely chose their actions. Neuroimaging revealed that obedience was associated with reduced activation in brain regions involved in self-referential processing and moral reasoning, such as the frontal gyrus, occipital lobe, and precuneus. Interestingly, this effect was observed equally among civilian participants and military officer cadets, suggesting that professional training in hierarchical settings does not necessarily protect against the psychological distancing that comes with obeying authority.
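
As a concrete illustration of the temporal binding measure described above, here is a minimal sketch of how such an index is typically summarized: participants estimate the interval between their action and its outcome, and compression of that estimate is taken as an implicit marker of agency. The 500 ms interval and all estimates below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of summarizing a temporal binding index.
# Numbers are invented to illustrate the logic; they are not the study's data.
from statistics import mean

ACTUAL_INTERVAL_MS = 500  # fixed delay between keypress and outcome in the task

# Participants' reported action-outcome intervals (ms) in each condition.
reported = {
    "free_choice": [380, 410, 395, 420, 400],   # stronger compression -> more binding
    "coerced":     [470, 455, 480, 465, 475],   # weaker compression -> reduced agency
}

for condition, estimates in reported.items():
    binding = ACTUAL_INTERVAL_MS - mean(estimates)  # larger value = stronger binding
    print(f"{condition}: mean estimate {mean(estimates):.0f} ms, binding {binding:.0f} ms")
# A smaller binding value under coercion is read as a reduced implicit sense of agency.
```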

These findings are significant because they offer neuroscientific support for classic social psychology theories—like those stemming from Milgram’s obedience experiments—that suggest authority can reduce individual accountability. By identifying the neural mechanisms underlying diminished moral responsibility under orders, the study raises important ethical questions about how institutional hierarchies might inadvertently suppress personal agency. This has real-world implications for contexts such as the military, law enforcement, and corporate structures, where individuals may feel less morally accountable when acting under command. Understanding these dynamics can inform training, policy, and ethical guidelines to preserve a sense of responsibility even in structured power systems.