Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, July 12, 2025

Brain-Inspired Affective Empathy Computational Model and Its Application on Altruistic Rescue Task

Feng, H., Zeng, Y., & Lu, E. (2022).
Frontiers in Computational Neuroscience, 16, 784967.

Abstract

Affective empathy is an indispensable ability for the harmonious social lives of humans and other species, motivating altruistic behavior such as consolation and aid-giving. How to build an affective empathy computational model has attracted extensive attention in recent years. Most affective empathy models focus on the recognition and simulation of human facial expressions or emotional speech, namely Affective Computing, but these studies lack the guidance of the neural mechanisms of affective empathy. From a neuroscience perspective, affective empathy forms gradually during individual development: experiencing one's own emotions, forming the corresponding Mirror Neuron System (MNS), and then understanding the emotions of others through the mirror mechanism. Inspired by this neural mechanism, we constructed a brain-inspired affective empathy computational model containing two submodels: (1) an Artificial Pain Model, inspired by the Free Energy Principle (FEP), that simulates the pain generation process in living organisms; and (2) an affective empathy spiking neural network (AE-SNN) that simulates the mirror mechanism of the MNS and has self-other differentiation ability. We applied the model to a pain empathy and altruistic rescue task in which intelligent agents rescue their companions. To the best of our knowledge, this study is the first to reproduce the emergence of mirror neurons and anti-mirror neurons in the SNN field. Compared with traditional affective empathy computational models, our model is more biologically plausible, and it provides a new perspective for achieving artificial affective empathy, with particular potential for the social robotics field.
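The abstract describes the Artificial Pain Model only at the level of the Free Energy Principle. As a loose caricature (not the authors' implementation; the function names, update rule, and numbers are all assumptions for illustration), a "pain" signal can be sketched as precision-weighted prediction error between expected and observed bodily state, which decays as the agent's internal model adapts:

```python
# Toy FEP-style "pain" signal: pain rises with precision-weighted
# prediction error between expected and observed bodily state.
# Illustrative sketch only, not the paper's model.

def pain_signal(expected: float, observed: float, precision: float = 1.0) -> float:
    """Squared prediction error, weighted by precision (inverse variance)."""
    error = observed - expected
    return precision * error * error

def update_expectation(expected: float, observed: float, lr: float = 0.1) -> float:
    """Gradient-style move of the expectation toward the observation."""
    return expected + lr * (observed - expected)

# A sudden noxious input (observed = 1.0 vs. expected = 0.0) produces a
# large signal that shrinks over steps as the internal model adapts.
expected, observed = 0.0, 1.0
signals = []
for _ in range(5):
    signals.append(pain_signal(expected, observed))
    expected = update_expectation(expected, observed)
```

In this sketch the initial signal is maximal and then decays monotonically, mirroring the intuition that a persistent stimulus becomes less "surprising" as the model habituates.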

Here are some thoughts:

This article is significant because it highlights a growing effort to imbue machines with complex human-like experiences and behaviors, such as pain and altruism—traits that are deeply rooted in human psychology and evolution. By attempting to program pain, researchers are not merely simulating a sensory reaction but exploring how discomfort or negative feedback might influence learning, decision-making, and self-preservation in AI systems.

This has profound psychological implications, as it touches on how emotions and aversive experiences shape behavior and consciousness in humans. Similarly, programming altruism raises questions about the nature of empathy, cooperation, and moral reasoning—core areas of interest in social and cognitive psychology. Understanding how these traits can be modeled in AI helps psychologists explore the boundaries of machine autonomy, ethical behavior, and the potential consequences of creating entities that mimic human emotional and moral capacities. The broader implication is that this research challenges traditional psychological concepts of mind, consciousness, and ethics, while also prompting critical discussions about how such AI systems might interact with and influence human societies in the future.

Friday, July 11, 2025

Artificial intelligence in psychological practice: Applications, ethical considerations, and recommendations

Hutnyan, M., & Gottlieb, M. C. (2025).
Professional Psychology: Research and Practice.
Advance online publication.

Abstract

Artificial intelligence (AI) systems are increasingly relied upon in the delivery of health care services traditionally provided solely by humans, and the widespread use of AI in the routine practice of professional psychology is on the horizon. It is incumbent on practicing psychologists to be prepared to effectively implement AI technologies and engage in thoughtful discourse regarding the ethical and responsible development, implementation, and regulation of these technologies. This article provides a brief overview of what AI is and how it works, a description of its current and potential future applications in professional practice, and a discussion of the ethical implications of using AI systems in the delivery of psychological services. Applications of AI technologies in key areas of clinical practice are addressed, including assessment and intervention. Using the Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017) as a framework, anticipated ethical challenges across five domains—harm and nonmaleficence, autonomy and informed consent, fidelity and responsibility, privacy and confidentiality, and bias, respect, and justice—are discussed. Based on these challenges, provisional recommendations for psychologists are provided.

Impact Statement

This article provides an overview of artificial intelligence (AI) and how it works, describes current and developing applications of AI in the practice of professional psychology, and explores the potential ethical challenges of using these technologies in the delivery of psychological services. The use of AI in professional psychology has many potential benefits, but it also has drawbacks; ethical psychologists are wise to carefully consider their use of AI in practice.

Thursday, July 10, 2025

Neural correlates of the sense of agency in free and coerced moral decision-making among civilians and military personnel

Caspar, E. A., et al. (2025).
Cerebral Cortex, 35(3).

Abstract

The sense of agency, the feeling of being the author of one’s actions and outcomes, is critical for decision-making. While prior research has explored its neural correlates, most studies have focused on neutral tasks, overlooking moral decision-making. In addition, previous studies mainly used convenience samples, ignoring that some social environments may influence how authorship in moral decision-making is processed. This study investigated the neural correlates of sense of agency in civilians and military officer cadets, examining free and coerced choices in both agent and commander roles. Using a functional magnetic resonance imaging paradigm where participants could either freely choose or follow orders to inflict a mild shock on a victim, we assessed sense of agency through temporal binding—a temporal distortion between voluntary and less voluntary decisions. Our findings suggested that sense of agency is reduced when following orders compared to acting freely in both roles. Several brain regions correlated with temporal binding, notably the occipital lobe, superior/middle/inferior frontal gyrus, precuneus, and lateral occipital cortex. Importantly, no differences emerged between military and civilians at corrected thresholds, suggesting that daily environments have minimal influence on the neural basis of moral decision-making, enhancing the generalizability of the findings.
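Temporal binding is typically scored from interval estimates: voluntary actions compress the perceived gap between action and outcome, so stronger binding is read as a stronger sense of agency. A minimal sketch of that index (the variable names and numbers are made up for illustration, not the study's analysis code):

```python
# Minimal sketch of a temporal-binding index: binding is scored as the
# actual action-outcome interval minus the participant's estimate, so a
# larger value means more perceived compression (stronger agency).
# Numbers are hypothetical, for illustration only.

def binding_index(actual_ms: float, estimated_ms: float) -> float:
    """Perceived compression of the action-outcome interval, in ms."""
    return actual_ms - estimated_ms

# Hypothetical mean interval estimates (ms) for a 200 ms action-tone gap.
free_choice_estimate = 150.0   # strongly compressed: strong agency
coerced_estimate = 190.0       # barely compressed: weak agency

free_binding = binding_index(200.0, free_choice_estimate)
coerced_binding = binding_index(200.0, coerced_estimate)
```

Under this scoring, the study's central result corresponds to coerced trials yielding a smaller binding index than free-choice trials.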

Here are some thoughts:

The study found that when individuals obeyed direct orders to perform a morally questionable act—such as delivering an electric shock—they experienced a significantly diminished sense of agency, or personal responsibility, for that action. This diminished agency was measured using the temporal binding effect, which was weaker under coercion compared to when participants freely chose their actions. Neuroimaging revealed that obedience was associated with reduced activation in brain regions involved in self-referential processing and moral reasoning, such as the frontal gyrus, occipital lobe, and precuneus. Interestingly, this effect was observed equally among civilian participants and military officer cadets, suggesting that professional training in hierarchical settings does not necessarily protect against the psychological distancing that comes with obeying authority.

These findings are significant because they offer neuroscientific support for classic social psychology theories—like those stemming from Milgram’s obedience experiments—that suggest authority can reduce individual accountability. By identifying the neural mechanisms underlying diminished moral responsibility under orders, the study raises important ethical questions about how institutional hierarchies might inadvertently suppress personal agency. This has real-world implications for contexts such as the military, law enforcement, and corporate structures, where individuals may feel less morally accountable when acting under command. Understanding these dynamics can inform training, policy, and ethical guidelines to preserve a sense of responsibility even in structured power systems.

Wednesday, July 9, 2025

Management of Suicidal Thoughts and Behaviors in Youth: A Systematic Review

Sim, L., Wang, Z., et al. (2025).
Prepared by the Mayo Clinic Evidence-based Practice Center.

Abstract

Background: Suicide is a leading cause of death in young people and an escalating public health crisis. We aimed to assess the effectiveness and harms of available treatments for suicidal thoughts and behaviors in youths at heightened risk for suicide. We also aimed to examine how social determinants of health, racism, disparities, care delivery methods, and patient demographics affect outcomes.

Methods: We conducted a systematic review and searched several databases including MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and others from January 2000 to September 2024. We included randomized clinical trials (RCTs), comparative observational studies, and before-after studies of psychosocial interventions, pharmacological interventions, neurotherapeutics, emerging therapies, and combinations therapies. Eligible patients were youths (aged 5 to 24 years) who had a heightened risk for suicide, including youths who have experienced suicidal ideation, prior attempts, hospital discharge for mental health treatment, or command hallucinations; were identified as high risk on validated questionnaires; or were from other at-risk groups. Pairs of independent reviewers selected and appraised studies. Findings were synthesized narratively.

Results: We included 65 studies reporting on 14,534 patients (33 RCTs, 13 comparative observational studies, and 19 before-after studies). Psychosocial interventions identified from the studies comprised psychotherapy interventions (33 studies: Cognitive Behavior Therapy, Dialectical Behavior Therapy, Collaborative Assessment and Management of Suicidality, Dynamic Deconstructive Psychotherapy, Attachment-Based Family Therapy, and Family-Focused Therapy), acute (i.e., 1 to 4 sessions/contacts) psychosocial interventions (19 studies: acute safety planning, family-based crisis management, motivational interviewing crisis interventions, continuity of care following crisis, and brief adjunctive treatments), and school/community-based psychosocial interventions (13 studies: social network interventions, school-based skills interventions, suicide awareness/gatekeeper programs, and community-based, culturally tailored adjunct programs). For most categories of psychotherapies (except DBT), acute interventions, and school/community-based interventions, the strength of evidence was insufficient, leaving uncertainty about effects on suicidal thoughts or attempts. None of the studies evaluated adverse events associated with the interventions. The evidence base on pharmacological treatment for suicidal youths was largely nonexistent. No eligible study evaluated neurotherapeutics or emerging therapies.

Conclusion: The current evidence on available interventions intended for youths at heightened risk of suicide is uncertain. Medication, neurotherapeutics, and emerging therapies remain unstudied in this population. Given that most treatments were adapted from adult protocols that may not fit the developmental and contextual experience of adolescents or younger children, this limited evidence base calls for the development of novel, developmentally and trauma-informed treatments, as well as multilevel interventions to address the rising suicide risk in youths.

Tuesday, July 8, 2025

Behavioral Ethics: Ethical Practice Is More Than Memorizing Compliance Codes

Cicero, F. R. (2021).
Behavior Analysis in Practice, 14(4), 1169–1178.

Abstract

Disciplines establish and enforce professional codes of ethics in order to guide ethical and safe practice. Unfortunately, ethical breaches still occur. Interestingly, it is found that breaches are often perpetrated by professionals who are aware of their codes of ethics and believe that they engage in ethical practice. The constructs of behavioral ethics, which are most often discussed in business settings, attempt to explain why ethical professionals sometimes engage in unethical behavior. Although traditionally based on theories of social psychology, the principles underlying behavioral ethics are consistent with behavior analysis. When conceptualized as operant behavior, ethical and unethical decisions are seen as being evoked and maintained by environmental variables. As with all forms of operant behavior, antecedents in the environment can trigger unethical responses, and consequences in the environment can shape future unethical responses. In order to increase ethical practice among professionals, an assessment of the environmental variables that affect behavior needs to be conducted on a situation-by-situation basis. Knowledge of discipline-specific professional codes of ethics is not enough to prevent unethical practice. In the current article, constructs used in behavioral ethics are translated into underlying behavior-analytic principles that are known to shape behavior. How these principles establish and maintain both ethical and unethical behavior is discussed.

Here are some thoughts:

This article argues that ethical practice requires more than memorizing compliance codes, as professionals aware of such codes still commit ethical breaches. Behavioral ethics suggests that environmental and situational variables often evoke and maintain unethical decisions, conceptualizing these decisions as operant behavior. Thus, knowledge of ethical codes alone is insufficient to prevent unethical practice; an assessment of environmental influences is necessary. The paper translates behavioral ethics constructs like self-serving bias, incrementalism, framing, obedience to authority, conformity bias, and overconfidence bias into behavior-analytic principles such as reinforcement, shaping, motivating operations, and stimulus control. This perspective shifts the focus from blaming individuals towards analyzing environmental factors that prompt ethical breaches, advocating for proactive assessment to support ethical behavior.

Understanding these concepts is vital for psychologists because they too are subject to environmental pressures that can lead to unethical actions, despite ethical training. The article highlights that ethical knowledge does not always translate to ethical behavior, emphasizing that situational factors often play a more significant role. Psychologists must recognize subtle influences such as the gradual normalization of unethical actions (incrementalism), the impact of how situations are described (framing), pressures from authority figures, and conformity to group norms, as these can all compromise ethical judgment. An overconfidence in one's own ethical standing can further obscure these influences. By applying a behavior-analytic lens, psychologists can better identify and mitigate these environmental risks, fostering a culture of proactive ethical assessment within their practice and institutions to safeguard clients and the profession.

Monday, July 7, 2025

Subconscious Suggestion

Ferketic, M. (2025, forthcoming).

Abstract

Subconscious suggestion is a silent but pervasive force shaping perception, decision-making, and attentional structuring beneath awareness. Operating as internal impressive action, it passively introduces impulses, biases, and associative framings into consciousness, subtly guiding behavior without volitional approval. Like hypnotic suggestion, it does not dictate action; it attempts to compel through motivational pull, influencing perception and intent through saliency and potency gradients. Unlike previous theories that depict subconscious influence as abstract or deterministic, this work presents a novel structured, mechanistic, operational model of function, demonstrating from first principles how subconscious suggestion disperses influence into awareness, interacts with attentional deployment, and negotiates attentional sovereignty. Additionally, it frames free will not as exemption from subconscious force, but as mastery of its regulation, with autonomy emerging from the ability to recognize, refine, and command suggestive forces rather than be unconsciously governed by them.

Here are some thoughts:

Subconscious suggestion, as detailed in the article, is a fundamental cognitive mechanism that shapes perception, attention, and behavior beneath conscious awareness. It operates as internal impressive action—passively introducing impulses, biases, and associative framings into consciousness, subtly guiding decisions without direct volitional control. Unlike deterministic models of unconscious influence, this framework presents subconscious suggestion as a structured, mechanistic process that competes for attention through saliency and motivational potency gradients. It functions much like a silent internal hypnotist, not dictating action but attempting to compel through perceptual framing and emotional nudges.

For practicing psychologists, understanding this model is crucial—it provides insight into how automatic cognitive processes contribute to habit formation, emotional regulation, motivation, and decision-making. It reframes free will not as exemption from subconscious forces, but as mastery over them, emphasizing the importance of attentional sovereignty and volitional override in clinical interventions. This knowledge equips psychologists to better identify, assess, and guide clients in managing subconscious influences, enhancing therapeutic outcomes across conditions such as addiction, anxiety, compulsive behaviors, and maladaptive thought patterns.

Sunday, July 6, 2025

In similarity we trust: Like-mindedness, rather than just the type of moral judgment, drives inferences of trustworthiness

Chandrashekar, S., et al. (2025, May 26).
PsyArXiv Preprints

Abstract

Trust plays a central role in social interactions. Recent research has highlighted the importance of others’ moral decisions in shaping trust inference: individuals who reject sacrificial harm in moral dilemmas (which aligns with deontological ethics) are generally perceived as more trustworthy than those who condone sacrificial harm (which aligns with utilitarian ethics). Across five studies (N = 1234), we investigated trust inferences in the context of iterative moral dilemmas, which allow individuals to not only make deontological or utilitarian decisions, but also harm-balancing decisions. Our findings challenge the prevailing perspective: While we did observe effects of the type of moral decision that people make, the direction of these effects was inconsistent across studies. In contrast, moral similarity (i.e., whether a decision aligns with one’s own perspective) consistently predicted increased trust. Our findings suggest that trust is not just about adhering to specific moral frameworks but also about shared moral perspectives.

Here are some thoughts:

This research is important to practicing psychologists for several key reasons. It demonstrates that like-mindedness—specifically, sharing similar moral judgments or decision-making patterns—is a strong determinant of perceived trustworthiness. This insight is valuable across clinical, organizational, and social psychology, particularly in understanding how moral alignment influences interpersonal relationships.

Unlike past studies focused on isolated moral dilemmas like the trolley problem, this work explores iterative dilemmas, offering a more realistic model of how people make repeated moral decisions over time. For psychologists working in ethics or behavioral interventions, this provides a nuanced framework for promoting cooperation and ethical behavior in dynamic contexts.

The study also challenges traditional views by showing that individuals who switch between utilitarian and deontological reasoning are not necessarily seen as less trustworthy, suggesting flexibility in moral judgment may be contextually appropriate. Additionally, the research highlights how moral decisions shape perceptions of traits such as bravery, warmth, and competence—key factors in how people are judged socially and professionally.

These findings can aid therapists in helping clients navigate relational issues rooted in moral misalignment or trust difficulties. Overall, the research bridges moral psychology and social perception, offering practical tools for improving interpersonal trust across diverse psychological domains.

Saturday, July 5, 2025

Bias Is Not Color Blind: Ignoring Gender and Race Leads to Suboptimal Selection Decisions.

Rabinovitch, H., et al. (2025, May 27).

Abstract

Blindfolding—selecting candidates based on objective selection tests while avoiding personal information about their race and gender—is commonly used to mitigate bias in selection. Selection tests, however, often benefit people of a certain race or gender. In such cases, selecting the best candidates requires incorporating, rather than ignoring, the biasing factor. We examined people's preference for avoiding candidates' race and gender, even when fully aware that these factors bias the selection test. We put forward a novel prediction: paradoxically, due to their fear of appearing partial, people would choose not to reveal race and gender information, even when doing so means making suboptimal decisions. Across three experiments (N = 3,621), hiring professionals (and laypeople) were tasked with selecting the best candidate for a position when they could either reveal the candidate's race and gender or avoid doing so. We further measured how fear for their social image corresponded with their decisions, as well as how job applicants perceived such actions. The results supported our predictions, showing that more than 50% did not reveal gender and race information, compared to only 30% who did not reveal situational biasing information, such as the time of day at which the interview was held. Those who did not reveal information expressed higher concerns for their social and self-image than those who decided to reveal. We conclude that decision-makers avoid personal biasing information to maintain a positive image, yet by doing so, they compromise fairness and accuracy alike.

Public significance statements

Blindfolding—ignoring one’s gender and race in selection processes—is a widespread strategy aimed at reducing bias and increasing diversity. Selection tests, however, often unjustly benefit members of certain groups, such as men and white people. In such cases, correcting the bias requires incorporating, rather than ignoring, information about the candidates’ gender and race. The current research shows that decision-makers are reluctant to reveal such information due to their fear of appearing partial. Paradoxically, decision-makers avoid such information, even when fully aware that doing so may perpetuate bias, in order to protect their social image as impartial, but miss out on the opportunity to advance fairness and choose the best candidates.

Here are some thoughts:

This research is critically important to practicing psychologists because it sheds light on the complex interplay between bias, decision-making, and social image concerns in hiring processes. The study demonstrates how well-intentioned practices like "blindfolding"—omitting race or gender information to reduce discrimination—can paradoxically perpetuate systemic biases when selection tools themselves are flawed. Practicing psychologists must understand that ignoring personal attributes does not eliminate bias but can instead obscure its effects, leading to suboptimal and unfair outcomes. By revealing how decision-makers avoid sensitive information out of fear of appearing partial, the research highlights the psychological mechanisms—such as social and self-image concerns—that drive this avoidance. This insight is crucial for psychologists involved in organizational consulting, personnel training, or policy development, as it underscores the need for more nuanced strategies that address bias directly rather than avoiding it.

Additionally, the findings inform interventions aimed at promoting diversity, equity, and inclusion by showing that transparency and informed adjustments based on demographic factors may be necessary to achieve fairer outcomes. Ultimately, the research challenges traditional assumptions about neutrality in selection decisions and urges psychologists to advocate for evidence-based approaches that actively correct for bias while considering the broader implications of perceived fairness and merit.

Friday, July 4, 2025

The Psychology of Moral Conviction

Skitka, L. J., et al. (2020).
Annual Review of Psychology, 72(1), 347–366.

Abstract

This review covers theory and research on the psychological characteristics and consequences of attitudes that are experienced as moral convictions, that is, attitudes that people perceive as grounded in a fundamental distinction between right and wrong. Morally convicted attitudes represent something psychologically distinct from other constructs (e.g., strong but nonmoral attitudes or religious beliefs), are perceived as universally and objectively true, and are comparatively immune to authority or peer influence. Variance in moral conviction also predicts important social and political consequences. Stronger moral conviction about a given attitude object, for example, is associated with greater intolerance of attitude dissimilarity, resistance to procedural solutions for conflict about that issue, and increased political engagement and volunteerism in that attitude domain. Finally, we review recent research that explores the processes that lead to attitude moralization; we integrate these efforts and conclude with a new domain theory of attitude moralization.

Here are some thoughts:

The article provides valuable insights into how individuals perceive and process attitudes grounded in fundamental beliefs about right and wrong. It distinguishes morally convicted attitudes from other constructs, such as strong but nonmoral attitudes or religious beliefs, by highlighting that moral convictions are viewed as universally and objectively true and are relatively resistant to authority or peer influence. These convictions often lead to significant social and political consequences, including intolerance of differing views, resistance to compromise, increased political engagement, and heightened emotional responses. The article also explores the processes of attitude moralization—how an issue becomes infused with moral significance—and demoralization, offering a domain theory of attitude moralization that suggests different pathways depending on whether the initial attitude is perceived as a preference, convention, or existing moral imperative.

This knowledge is critically important to practicing psychologists because it enhances their understanding of how moral convictions shape behavior, decision-making, and interpersonal dynamics. For instance, therapists working with clients on issues involving conflict resolution, values clarification, or behavioral change must consider the role of moral conviction in shaping resistance to persuasion or difficulty in compromising. Understanding moral conviction can also aid psychologists in navigating cultural differences, addressing polarization in group settings, and promoting tolerance by recognizing how individuals intuitively perceive certain issues as moral. Furthermore, as society grapples with increasingly divisive sociopolitical challenges—such as climate change, immigration, and public health crises—psychologists can use these insights to foster dialogue, reduce moral entrenchment, and encourage constructive engagement. Ultimately, integrating the psychology of moral conviction into practice allows for more nuanced, empathetic, and effective interventions across clinical, organizational, and community contexts.