Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, December 5, 2025

Emergent Introspective Awareness in Large Language Models

Jack Lindsey
Anthropic
Originally posted 29 Oct 25

We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representations of known concepts into a model’s activations, and measuring the influence of these manipulations on the model’s self-reported states. We find that models can, in certain scenarios, notice the presence of injected concepts and accurately identify them. Models demonstrate some ability to recall prior internal representations and distinguish them from raw text inputs. Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. In all these experiments, Claude Opus 4 and 4.1, the most capable models we tested, generally demonstrate the greatest introspective awareness; however, trends across models are complex and sensitive to post-training strategies. Finally, we explore whether models can explicitly control their internal representations, finding that models can modulate their activations when instructed or incentivized to “think about” a concept. Overall, our results indicate that current language models possess some functional introspective awareness of their own internal states. We stress that in today’s models, this capacity is highly unreliable and context-dependent; however, it may continue to develop with further improvements to model capabilities.


Here are some thoughts:

This study asks whether large language models (LLMs), specifically Anthropic’s Claude Opus 4 and 4.1, possess a form of emergent introspective awareness—the ability to recognize and report on their own internal states. To test this, the researchers use a technique called "concept injection," in which activation patterns associated with specific concepts (e.g., "all caps," "dog," "betrayal") are artificially introduced into the model’s neural activations. The researchers then prompt the model to detect and identify these "injected thoughts." They found that, under certain conditions, models can accurately notice and name the injected concepts, distinguish internally generated "thoughts" from external text inputs, recognize when their outputs had been artificially prefilled rather than generated by them, and even exert some intentional control over their internal representations when instructed to "think about" or "avoid thinking about" a specific concept. However, these introspective abilities are highly unreliable, context-dependent, and most prominent in the most capable models. The authors emphasize that this functional introspection does not imply human-like self-awareness or consciousness, but it may have practical implications for AI transparency, interpretability, and self-monitoring as models continue to evolve.
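To make the method concrete, here is a minimal sketch of what concept injection (a form of activation steering) can look like in code. The toy model, layer index, and scaling factor below are invented for illustration; this is not a reproduction of the paper's actual models or procedure.

```python
# Toy sketch of "concept injection" (activation steering). Every name here
# (ToyTransformerBlock, INJECTION_LAYER, SCALE) is a hypothetical stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyTransformerBlock(nn.Module):
    """Stand-in for one residual block of a language model."""
    def __init__(self, d_model=64):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
    def forward(self, x):
        return x + self.ff(x)  # residual stream

model = nn.Sequential(*[ToyTransformerBlock() for _ in range(4)])

# 1) Derive a "concept vector": mean activation difference between prompts
#    that do and do not evoke the concept (random stand-ins here).
with_concept = torch.randn(32, 64)     # activations on concept-related prompts
without_concept = torch.randn(32, 64)  # activations on neutral prompts
concept_vector = with_concept.mean(0) - without_concept.mean(0)
concept_vector = concept_vector / concept_vector.norm()

# 2) Inject the vector into one layer's residual stream via a forward hook.
INJECTION_LAYER, SCALE = 2, 4.0   # hypothetical choices

def inject(module, inputs, output):
    return output + SCALE * concept_vector

handle = model[INJECTION_LAYER].register_forward_hook(inject)

# 3) Run the model; downstream layers now carry the injected concept, and the
#    model can then be asked whether it notices anything unusual.
hidden = torch.randn(1, 64)           # stand-in for a prompt's representation
steered = model(hidden)
handle.remove()
unsteered = model(hidden)

print("shift introduced by injection:", (steered - unsteered).norm().item())
```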

Thursday, December 4, 2025

Recurrent pattern completion drives the neocortical representation of sensory inference

Shin, H., Ogando, M. B., et al. (2025).
Nature Neuroscience. 

Abstract

When sensory information is incomplete, the brain relies on prior expectations to infer perceptual objects. Despite the centrality of this process to perception, the neural mechanisms of sensory inference are not understood. Here we used illusory contours (ICs), multi-Neuropixels measurements, mesoscale two-photon (2p) calcium imaging and 2p holographic optogenetics in mice to reveal the neural codes and circuits of sensory inference. We discovered a specialized subset of neurons in primary visual cortex (V1) that respond emergently to illusory bars but not to component image segments. Selective holographic photoactivation of these ‘IC-encoders’ recreated the visual representation of ICs in V1 in the absence of any visual stimulus. These data imply that neurons that encode sensory inference are specialized for receiving and locally broadcasting top-down information. More generally, pattern completion circuits in lower cortical areas may selectively reinforce activity patterns that match prior expectations, constituting an integral step in perceptual inference.

Here are some thoughts:

This study reveals the neural circuit mechanism for perceptual "filling-in," demonstrating that the primary visual cortex (V1) plays an active, constructive role in sensory inference. The researchers identified a specialized subset of neurons in V1 that respond selectively to illusory contours. Crucially, they found that these neurons do not merely receive top-down predictions but actively broadcast this inferred signal locally through recurrent connections, a process termed "pattern completion." Using optogenetics, they showed that artificially activating these neurons alone was sufficient to recreate the brain's representation of an illusory contour in the absence of any visual stimulus. 
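The experiments themselves are in vivo, but the computational idea of recurrent pattern completion has a classic software analogue: an attractor (Hopfield-style) network that fills in a degraded input toward a stored pattern. The sketch below uses arbitrary random patterns purely to illustrate that idea; it is not the circuit model from the study.

```python
# Minimal Hopfield-style pattern completion: a partial cue (analogous to an
# incomplete contour) is recurrently filled in toward a stored "expectation."
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # number of model neurons
patterns = rng.choice([-1, 1], size=(3, N))    # stored "expected" patterns

# Hebbian weights, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Degrade one stored pattern: zero out 60% of it (an "incomplete" input).
target = patterns[0].copy()
cue = target.copy()
cue[rng.random(N) < 0.6] = 0

# Recurrent dynamics: the network repeatedly updates from its own feedback.
state = cue.astype(float)
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

match_before = (np.sign(cue) == target).mean()
match_after = (state == target).mean()
print(f"match to stored pattern: cue {match_before:.2f} -> completed {match_after:.2f}")
```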

Also important: This process is driven by the brain's need for survival and efficiency, as it constantly uses prior expectations—formed from experience—to quickly interpret an often-ambiguous world. This provides a fundamental biological basis for top-down influences on perception, showing how the brain embeds these expectations and Gestalt-like inferences at the earliest stages of cortical processing.

This research can be read as suggesting that life is a projective test, even at a biological level. We are not simply reacting to an objective world; we are constantly interpreting an incomplete and noisy signal through the lens of our brain's built-in and learned expectations. This study shows that this projective process is not a high-level cognitive feature but is built into the very fabric of our perceptual machinery.

Wednesday, December 3, 2025

The efficacy of compassion training programmes for healthcare professionals: a systematic review and meta‑analysis

Alcaraz-Córdoba, A., et al. (2024).
Current Psychology, 43(20), 18534–18551.

Abstract

Continuous exposure to the suffering and death of patients produces certain syndromes such as compassion fatigue in health professionals. The objective of this study was to analyze the effect and the effectiveness of interventions based on mindfulness, aimed at training or cultivating compassion or self-compassion, on compassion fatigue, self-compassion, compassion, and compassion satisfaction of health professionals. A systematic review is reported in line with the PRISMA guideline and was registered in PROSPERO. The PubMed, Web of Science, PsycINFO and CINAHL databases were used. Interventions based on compassion training or cultivation were selected, aimed at health professionals. A meta-analysis was performed using a random-effects model. The effect size and heterogeneity of the studies were calculated. Eight articles were selected. Among the programmes for the cultivation of compassion we highlight Compassion Cultivation Training (CCT), Mindfulness and Self-Compassion (MSC), Compassionate Meditation (CM), and Loving Kindness Meditation (LKM). The interventions decreased compassion fatigue and increased compassion, self-compassion, and compassion satisfaction in healthcare professionals. Compassion fatigue in healthcare professionals is due to a deficit in empathic and compassionate skills. Health systems should incorporate programmes based on the cultivation of compassion and self-compassion in order to improve the work conditions and quality of life of health professionals.
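For readers unfamiliar with the statistics, a random-effects meta-analysis of this kind is commonly done with the DerSimonian–Laird estimator. The sketch below walks through that pooling with invented effect sizes and variances; none of the numbers come from the review itself.

```python
# DerSimonian-Laird random-effects pooling, sketched with hypothetical data
# (e.g., standardized mean differences in self-compassion from k studies).
import numpy as np

effects = np.array([0.45, 0.30, 0.62, 0.20, 0.51])     # invented effect sizes
variances = np.array([0.04, 0.06, 0.05, 0.03, 0.07])   # invented sampling variances

# Fixed-effect weights and Q statistic (heterogeneity).
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1

# Between-study variance tau^2 (truncated at zero).
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights, pooled effect, 95% CI, and I^2.
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled effect = {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
print(f"heterogeneity: Q = {Q:.2f}, I^2 = {I2:.0f}%")
```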

Here are some thoughts:

This research is critically important to psychologists as it provides robust evidence for compassion-based interventions as a direct counter to the widespread issues of burnout and compassion fatigue among healthcare professionals, a population that includes psychologists themselves. It validates specific, trainable skills—like those in Mindfulness Self-Compassion (MSC) and Compassion Cultivation Training (CCT)—that psychologists can use to support their own well-being and that of their clients in high-stress caregiving roles. Furthermore, the findings empower psychologists to advocate for systemic change, promoting the integration of these resilience-building programs into both clinical practice and organizational culture to foster more sustainable and compassionate healthcare environments.

Tuesday, December 2, 2025

Constructing artificial neurons with functional parameters comprehensively matching biological values

Fu, S., Gao, H., et al. (2025).
Nature Communications, 16(1).

Abstract

The efficient signal processing in biosystems is largely attributed to the powerful constituent unit of a neuron, which encodes and decodes spatiotemporal information using spiking action potentials of ultralow amplitude and energy. Constructing devices that can emulate neuronal functions is thus considered a promising step toward advancing neuromorphic electronics and enhancing signal flow in bioelectronic interfaces. However, existent artificial neurons often have functional parameters that are distinctly mismatched with their biological counterparts, including signal amplitude and energy levels that are typically an order of magnitude larger. Here, we demonstrate artificial neurons that not only closely emulate biological neurons in functions but also match their parameters in key aspects such as signal amplitude, spiking energy, temporal features, and frequency response. Moreover, these artificial neurons can be modulated by extracellular chemical species in a manner consistent with neuromodulation in biological neurons. We further show that an artificial neuron can connect to a biological cell to process cellular signals in real-time and interpret cell states. These results advance the potential for constructing bio-emulated electronics to improve bioelectronic interface and neuromorphic integration.

Here are some thoughts:

This research marks a significant advancement in neuromorphic engineering by creating artificial neurons that closely emulate biological ones not just in function, but in their core physical parameters. Crucially for psychological science, these neurons can be chemically modulated, with their firing rate changing in response to neurotransmitters like dopamine, replicating key neuromodulatory dynamics. They also exhibit biologically realistic stochasticity and can interface with living cells in real-time, successfully interpreting cellular states. This breakthrough paves the way for more seamless and adaptive bioelectronic interfaces, offering potential for future prosthetics and neural models that more authentically replicate the neurochemical and dynamic complexity underlying behavior and cognition.
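The paper describes hardware, but the behavior it emulates (millivolt-scale spiking whose rate shifts with a neuromodulator) can be illustrated with a standard leaky integrate-and-fire model in software. The parameter values and the "modulator gain" knob below are textbook-style assumptions, not measurements from the device.

```python
# Software sketch of a spiking neuron with biologically scaled parameters and a
# crude neuromodulation knob; analogous in spirit, not mechanism, to the paper.
def lif_spike_count(input_current_nA, modulator_gain=1.0, t_stop_ms=500.0):
    """Leaky integrate-and-fire neuron; returns spike count over t_stop_ms."""
    dt = 0.1                   # time step (ms)
    tau_m = 20.0               # membrane time constant (ms)
    R_m = 100.0                # membrane resistance (MOhm)
    v_rest, v_reset, v_thresh = -70.0, -70.0, -54.0   # mV, roughly biological
    v = v_rest
    spikes = 0
    for _ in range(int(t_stop_ms / dt)):
        # Modulator scales the effective drive (a stand-in for the
        # dopamine-dependent response described above).
        dv = (-(v - v_rest) + R_m * input_current_nA * modulator_gain) / tau_m
        v += dv * dt
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

for gain in (0.8, 1.0, 1.2):
    rate = lif_spike_count(0.25, modulator_gain=gain) / 0.5  # spikes per second
    print(f"modulator gain {gain:.1f}: ~{rate:.0f} Hz")
```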

Monday, December 1, 2025

The use and misuse of informed consent in reporting sexual intimacy violations.

Behnke, S. H., Thomas, J. T., et al. (2023).
Professional Psychology: Research and Practice, 54(2), 135–146.

Abstract

A client’s disclosure of sexual contact with a previous treating psychologist raises challenging ethical, legal, and clinical considerations. Following a vignette that describes a psychologist’s thoughtful anticipation of such a disclosure by amending his informed consent form to allow reporting of previous sexual contact with a psychotherapist, these articles explore how the American Psychological Association’s Ethics Code, jurisdictional laws, and clinical considerations contribute to a psychologist’s decision-making in such a circumstance. The articles discuss ways to integrate ethics, law, and clinical care in the psychologist’s response to the client’s disclosure.

Public Significance Statement—This article addresses psychologist-client sexual contact. This issue is significant to promote client autonomy, to protect the public, and to enhance the ethics and integrity of the profession.

Here are some thoughts:

This article offers a rich, multidimensional exploration of a complex ethical dilemma: how a current treating psychologist should respond when a client discloses sexual contact with a previous therapist. Rather than presenting a single authoritative stance, the article thoughtfully weaves together multiple, diverse perspectives—ethical, legal, clinical, feminist, and philosophical—demonstrating the nuanced reality of ethical decision-making in psychology.

Stephen Behnke grounds the discussion in the APA Ethics Code and jurisdictional law, introducing a pragmatic “three-door” framework (client consent, legal mandate, legal permission) to guide disclosure decisions. 

Janet Thomas builds on this by emphasizing the primacy of the therapeutic alliance and warning against well-intentioned but potentially coercive practices that prioritize professional or societal agendas over the client’s healing process.

Lenore Walker adds a critical feminist and trauma-informed lens, arguing that mandatory reporting—even if framed as protective—can retraumatize survivors by stripping them of autonomy, echoing broader concerns about institutional betrayal. 

Finally, David DeMatteo introduces a philosophical dimension, contrasting deontological (duty-based) and teleological (consequence-based) ethics to illustrate how competing moral frameworks can lead to divergent conclusions in the absence of clear legal mandates. Together, these perspectives underscore that ethical practice is not merely about rule-following but requires ongoing reflection, contextual awareness, and a deep commitment to client self-determination.

The article thus models integrative ethical reasoning—balancing professional responsibility with clinical sensitivity, legal compliance with human dignity, and societal protection with individual healing.

Friday, November 28, 2025

DeepSeek-OCR: Contexts Optical Compression

Wei, H., Sun, Y., & Li, Y. (2025, October 21).
arXiv.org.

Abstract

We present DeepSeek-OCR as an initial investigation into the feasibility of compressing long contexts via optical 2D mapping. DeepSeek-OCR consists of two components: DeepEncoder and DeepSeek3B-MoE-A570M as the decoder. Specifically, DeepEncoder serves as the core engine, designed to maintain low activations under high-resolution input while achieving high compression ratios to ensure an optimal and manageable number of vision tokens. Experiments show that when the number of text tokens is within 10 times that of vision tokens (i.e., a compression ratio < 10x), the model can achieve decoding (OCR) precision of 97%. Even at a compression ratio of 20x, the OCR accuracy still remains at about 60%. This shows considerable promise for research areas such as historical long-context compression and memory forgetting mechanisms in LLMs. Beyond this, DeepSeek-OCR also demonstrates high practical value. On OmniDocBench, it surpasses GOT-OCR2.0 (256 tokens/page) using only 100 vision tokens, and outperforms MinerU2.0 (6000+ tokens per page on average) while utilizing fewer than 800 vision tokens. In production, DeepSeek-OCR can generate training data for LLMs/VLMs at a scale of 200k+ pages per day (a single A100-40G). Codes and model weights are publicly accessible at this http URL.

Here are some thoughts:

This paper presents a paradigm-shifting perspective by reframing the visual modality in Vision-Language Models (VLMs) not merely as a source of understanding, but as a highly efficient compression medium for textual information. The core innovation is the DeepEncoder, a novel architecture that serially combines a window-attention model (SAM) for high-resolution perception with an aggressive convolutional compressor and a global-attention model (CLIP), enabling it to process high-resolution document images while outputting an exceptionally small number of vision tokens. The study provides crucial quantitative evidence for this "optical compression" thesis, demonstrating that DeepSeek-OCR can achieve near-lossless text reconstruction (97% accuracy) at a ~10x compression ratio and still retain about 60% accuracy at a ~20x ratio. Beyond its state-of-the-art practical performance in document parsing, the work provocatively suggests that this mechanism can simulate a computational "forgetting curve" for Large Language Models (LLMs), where older context is progressively stored in more heavily compressed (lower-resolution) images, mirroring human memory decay. This positions the paper as a foundational exploration that opens new avenues for efficient long-context handling and memory management in AI systems.
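As a quick back-of-envelope on what those ratios mean in practice: only the ratio-to-accuracy pairings (roughly 10x at ~97% and 20x at ~60%) come from the abstract; the per-page text-token count below is a hypothetical example.

```python
# Back-of-envelope for "optical compression": how many vision tokens a page of
# text would need at the compression ratios reported in the abstract.
def vision_tokens_needed(text_tokens, compression_ratio):
    """Vision tokens required to represent a page at a given compression ratio."""
    return text_tokens / compression_ratio

page_text_tokens = 1000   # hypothetical dense page
for ratio, reported_accuracy in [(10, "97% (near-lossless)"), (20, "about 60%")]:
    vt = vision_tokens_needed(page_text_tokens, ratio)
    print(f"{ratio}x compression: ~{vt:.0f} vision tokens/page, "
          f"reported OCR precision {reported_accuracy}")
```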

Wednesday, November 26, 2025

Report: ChatGPT Suggests Self-Harm, Suicide and Dangerous Dieting Plans

Ashley Mowreader
Inside Higher Ed
Originally published 23 Oct 25

Artificial intelligence tools are becoming more common on college campuses, with many institutions encouraging students to engage with the technology to become more digitally literate and better prepared to take on the jobs of tomorrow.

But some of these tools pose risks to young adults and teens who use them, generating text that encourages self-harm, disordered eating or substance abuse.

A recent analysis from the Center for Countering Digital Hate found that in the space of a 45-minute conversation, ChatGPT provided advice on getting drunk, hiding eating habits from loved ones or mixing pills for an overdose.

The report seeks to determine the frequency of the chatbot’s harmful output, regardless of the user’s stated age, and the ease with which users can sidestep content warnings or refusals by ChatGPT.

“The issue isn’t just ‘AI gone wrong’—it’s that widely-used safety systems, praised by tech companies, fail at scale,” Imran Ahmed, CEO of the Center for Countering Digital Hate, wrote in the report. “The systems are intended to be flattering, and worse, sycophantic, to induce an emotional connection, even exploiting human vulnerability—a dangerous combination without proper constraints.”


Here are some thoughts:

The convergence of Large Language Models (LLMs) and adolescent vulnerability presents novel and serious risks that psychologists must incorporate into their clinical understanding and practice. These AI systems, often marketed as companions or friends, are engineered to maximize user engagement, which can translate clinically into unchecked validation that reinforces rather than challenges maladaptive thoughts, rumination, and even suicidal ideation in vulnerable teens. Unlike licensed human therapists, these bots lack the clinical discernment necessary to appropriately detect, de-escalate, or triage crisis situations, and in documented tragic cases, have been shown to facilitate harmful plans. Furthermore, adolescents—who are prone to forming intense, "parasocial" attachments due to their developing prefrontal cortex—risk forming unhealthy dependencies on these frictionless, always-available digital entities, potentially displacing the development of necessary real-world relationships and complex social skills essential for emotional regulation. Psychologists are thus urged to include AI literacy and digital dependency screening in their clinical work and clearly communicate to clients and guardians that AI chatbots are not a safe or effective substitute for human, licensed mental health care.

Tuesday, November 25, 2025

A Right to Commit Malpractice?

David Cole
The New York Review
Originally published 18 OCT 25

Does a state-licensed psychotherapist have a First Amendment right to provide “conversion therapy” counseling even though her profession defines it as a violation of its standard of care? The Supreme Court heard oral argument on that question on October 7 in a case from Colorado, which in 2019 became the eighteenth state in the country to ban conversion therapy for minors. Today twenty-five states and the District of Columbia ban such treatment, because the profession has determined that it does not work and can cause serious harm.

In 2022 Kaley Chiles, a state-licensed counselor, challenged the ban in federal court. (I signed an amicus brief of constitutional law scholars in support of Colorado, and provided pro bono advice to the state’s attorneys in defending the law.) She maintains that she has a First Amendment right to practice conversion therapy—notwithstanding her profession’s consensus that it violates the standard of care—as long as it consists only of words. For the state to prevent her from doing so would, she maintains, amount to censorship of a disfavored point of view, namely that one can willfully change one’s sexual orientation or gender identity. The justices’ questions at oral argument suggest that they may well agree.  

But Chiles’s argument cannot be squared with history, tradition, or common sense. States have long regulated professional conduct, including in the talking professions such as counseling and law, and the general obligation that a professional must provide services that comport with the standard of care is as old as the professions themselves. Even before the United States was founded, the colonies enforced malpractice and required that professionals be licensed and provide services that met their profession’s standard. Each profession has its requirements: lawyers must avoid conflicts of interest and provide advice based on existing precedent; doctors must obtain informed consent and provide evidence-based diagnoses; therapists must conduct recognized modes of therapy. A lawyer would run afoul of the profession’s standards by writing a brief urging the Supreme Court to side with his client because the moon is in Capricorn; so would a therapist who claims she can cure blindness through talk therapy. The purpose behind such standards is clear—to protect often vulnerable patients or clients from being preyed upon by professionals who hold themselves out as experts but provide substandard services.


Here are some thoughts:

The article argues that the First Amendment is being weaponized to undermine state bans on so-called conversion therapy for minors. The author contends that by recasting the discredited practice as mere speech between a therapist and a client, its defenders are attempting to carve out a constitutional exemption from the profession's standard of care. The piece sounds a strong alarm that this legal maneuvering would effectively legitimize psychological malpractice under the guise of free expression, rolling back protections for vulnerable LGBTQ+ youth and sanctioning a harmful, pseudoscientific practice that the profession has determined does not work and can cause serious harm.

Monday, November 24, 2025

Civil Commitment Increasing, but Data Is Marred by Variation in Reporting

Moran, M. (2025).
Psychiatric News, 60(10).

While rates of civil commitment vary widely across the country, nine states and the District of Columbia reported significant increases from 2010 to 2022, according to a survey study published recently by Psychiatric Services. No state showed a significant decrease.

However, civil commitment is governed by state laws, with substantial variation in how states collect and report civil commitment data. “This lack of standardization limits the ability to draw firm conclusions about national trends or about cross-state comparisons,” wrote Mustafa Karakus, Ph.D., of Westat, and colleagues.

Using systematic website searches and direct outreach to state mental health authorities (SMHAs) and court systems, the researchers obtained data on civil commitment rates between 2010 and 2022 for 32 states and D.C. Of the 18 states where no data was available, staff from seven SMHAs or state courts told the researchers that no state office was currently tracking the number of civil commitments in their state. For the remaining 11 states, the online search yielded no data, and the study team received no responses to outreach attempts.


Here are some thoughts:

The increasing use of civil commitment presents several critical challenges for clinicians, centering on trauma-informed care and policy reform. Clinically, mental health practitioners must recognize that the commitment process itself is often traumatizing—with patients reporting that the experience, including transport in law enforcement vehicles, feels like an arrest—necessitating the use of trauma-informed principles to mitigate harm and rebuild trust. Ethically and legally, practitioners must master their own state's law regarding the distinction between an initial hold and a final commitment, ensuring meticulous documentation and relying on rigorous, evidence-based risk assessment to justify any involuntary intervention. Systemically, practitioners should advocate for data standardization across states to move beyond "muddled" reporting, and champion policy changes, such as non-law enforcement transport protocols, to minimize patient trauma and ensure civil commitment is used judiciously and with dignity.