Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, July 9, 2025

Management of Suicidal Thoughts and Behaviors in Youth: A Systematic Review

Sim L, Wang Z, et al. (2025).
Prepared by the Mayo Clinic Evidence-based Practice Center.

Abstract

Background: Suicide is a leading cause of death in young people and an escalating public health crisis. We aimed to assess the effectiveness and harms of available treatments for suicidal thoughts and behaviors in youths at heightened risk for suicide. We also aimed to examine how social determinants of health, racism, disparities, care delivery methods, and patient demographics affect outcomes.

Methods: We conducted a systematic review and searched several databases including MEDLINE®, Embase®, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and others from January 2000 to September 2024. We included randomized clinical trials (RCTs), comparative observational studies, and before-after studies of psychosocial interventions, pharmacological interventions, neurotherapeutics, emerging therapies, and combination therapies. Eligible patients were youths (aged 5 to 24 years) who had a heightened risk for suicide, including youths who have experienced suicidal ideation, prior attempts, hospital discharge for mental health treatment, or command hallucinations; were identified as high risk on validated questionnaires; or were from other at-risk groups. Pairs of independent reviewers selected and appraised studies. Findings were synthesized narratively.

Results: We included 65 studies reporting on 14,534 patients (33 RCTs, 13 comparative observational studies, and 19 before-after studies). Psychosocial interventions identified from the studies comprised psychotherapy interventions (33 studies; Cognitive Behavior Therapy, Dialectical Behavior Therapy, Collaborative Assessment and Management of Suicidality, Dynamic Deconstructive Psychotherapy, Attachment-Based Family Therapy, and Family-Focused Therapy), acute (i.e., 1 to 4 sessions/contacts) psychosocial interventions (19 studies; acute safety planning, family-based crisis management, motivational interviewing crisis interventions, continuity of care following crisis, and brief adjunctive treatments), and school/community-based psychosocial interventions (13 studies; social network interventions, school-based skills interventions, suicide awareness/gatekeeper programs, and community-based, culturally tailored adjunct programs). For most categories of psychotherapies (except DBT), acute interventions, and school/community-based interventions, the strength of evidence was insufficient, leaving uncertainty about effects on suicidal thoughts or attempts. None of the studies evaluated adverse events associated with the interventions. The evidence base on pharmacological treatment for suicidal youths was largely nonexistent. No eligible study evaluated neurotherapeutics or emerging therapies.

Conclusion: The current evidence on available interventions intended for youths at heightened risk of suicide is uncertain. Medication, neurotherapeutics, and emerging therapies remain unstudied in this population. Given that most treatments were adapted from adult protocols that may not fit the developmental and contextual experience of adolescents or younger children, this limited evidence base calls for the development of novel, developmentally and trauma-informed treatments, as well as multilevel interventions to address the rising suicide risk in youths.

Tuesday, July 8, 2025

Behavioral Ethics: Ethical Practice Is More Than Memorizing Compliance Codes

Cicero F. R. (2021).
Behavior analysis in practice, 14(4), 
1169–1178.

Abstract

Disciplines establish and enforce professional codes of ethics in order to guide ethical and safe practice. Unfortunately, ethical breaches still occur. Interestingly, breaches are often perpetrated by professionals who are aware of their codes of ethics and believe that they engage in ethical practice. The constructs of behavioral ethics, which are most often discussed in business settings, attempt to explain why ethical professionals sometimes engage in unethical behavior. Although traditionally based on theories of social psychology, the principles underlying behavioral ethics are consistent with behavior analysis. When conceptualized as operant behavior, ethical and unethical decisions are seen as being evoked and maintained by environmental variables. As with all forms of operant behavior, antecedents in the environment can trigger unethical responses, and consequences in the environment can shape future unethical responses. In order to increase ethical practice among professionals, an assessment of the environmental variables that affect behavior needs to be conducted on a situation-by-situation basis. Knowledge of discipline-specific professional codes of ethics is not enough to prevent unethical practice. In the current article, constructs used in behavioral ethics are translated into underlying behavior-analytic principles that are known to shape behavior. How these principles establish and maintain both ethical and unethical behavior is discussed.

Here are some thoughts:

This article argues that ethical practice requires more than memorizing compliance codes, as professionals aware of such codes still commit ethical breaches. Behavioral ethics suggests that environmental and situational variables often evoke and maintain unethical decisions, conceptualizing these decisions as operant behavior. Thus, knowledge of ethical codes alone is insufficient to prevent unethical practice; an assessment of environmental influences is necessary. The paper translates behavioral ethics constructs like self-serving bias, incrementalism, framing, obedience to authority, conformity bias, and overconfidence bias into behavior-analytic principles such as reinforcement, shaping, motivating operations, and stimulus control. This perspective shifts the focus from blaming individuals towards analyzing environmental factors that prompt ethical breaches, advocating for proactive assessment to support ethical behavior.

Understanding these concepts is vital for psychologists because they too are subject to environmental pressures that can lead to unethical actions, despite ethical training. The article highlights that ethical knowledge does not always translate to ethical behavior, emphasizing that situational factors often play a more significant role. Psychologists must recognize subtle influences such as the gradual normalization of unethical actions (incrementalism), the impact of how situations are described (framing), pressures from authority figures, and conformity to group norms, as these can all compromise ethical judgment. An overconfidence in one's own ethical standing can further obscure these influences. By applying a behavior-analytic lens, psychologists can better identify and mitigate these environmental risks, fostering a culture of proactive ethical assessment within their practice and institutions to safeguard clients and the profession.

Monday, July 7, 2025

Subconscious Suggestion

Ferketic, M. (2025, Forthcoming)  

Abstract

Subconscious suggestion is a silent but pervasive force shaping perception, decision-making, and attentional structuring beneath awareness. Operating as internal impressive action, it passively introduces impulses, biases, and associative framings into consciousness, subtly guiding behavior without volitional approval. Like hypnotic suggestion, it does not dictate action; it attempts to compel through motivational pull, influencing perception and intent through saliency and potency gradients. Unlike previous theories that depict subconscious influence as abstract or deterministic, this work presents a novel structured, mechanistic, operational model of function, demonstrating from first principles how subconscious suggestion disperses influence into awareness, interacts with attentional deployment, and negotiates attentional sovereignty. Additionally, it frames free will not as exemption from subconscious force, but as mastery of its regulation, with autonomy emerging from the ability to recognize, refine, and command suggestive forces rather than be unconsciously governed by them.

Here are some thoughts:

Subconscious suggestion, as detailed in the article, is a fundamental cognitive mechanism that shapes perception, attention, and behavior beneath conscious awareness. It operates as internal impressive action—passively introducing impulses, biases, and associative framings into consciousness, subtly guiding decisions without direct volitional control. Unlike deterministic models of unconscious influence, this framework presents subconscious suggestion as a structured, mechanistic process that competes for attention through saliency and motivational potency gradients. It functions much like a silent internal hypnotist, not dictating action but attempting to compel through perceptual framing and emotional nudges.

For practicing psychologists, understanding this model is crucial—it provides insight into how automatic cognitive processes contribute to habit formation, emotional regulation, motivation, and decision-making. It reframes free will not as exemption from subconscious forces, but as mastery over them, emphasizing the importance of attentional sovereignty and volitional override in clinical interventions. This knowledge equips psychologists to better identify, assess, and guide clients in managing subconscious influences, enhancing therapeutic outcomes across conditions such as addiction, anxiety, compulsive behaviors, and maladaptive thought patterns.

Sunday, July 6, 2025

In similarity we trust: Like-mindedness, rather than just the type of moral judgment, drives inferences of trustworthiness

Chandrashekar, S., et al. (2025, May 26).
PsyArXiv Preprints

Abstract

Trust plays a central role in social interactions. Recent research has highlighted the importance of others’ moral decisions in shaping trust inference: individuals who reject sacrificial harm in moral dilemmas (which aligns with deontological ethics) are generally perceived as more trustworthy than those who condone sacrificial harm (which aligns with utilitarian ethics). Across five studies (N = 1234), we investigated trust inferences in the context of iterative moral dilemmas, which allow individuals to not only make deontological or utilitarian decisions, but also harm-balancing decisions. Our findings challenge the prevailing perspective: While we did observe effects of the type of moral decision that people make, the direction of these effects was inconsistent across studies. In contrast, moral similarity (i.e., whether a decision aligns with one’s own perspective) consistently predicted increased trust. Our findings suggest that trust is not just about adhering to specific moral frameworks but also about shared moral perspectives.

Here are some thoughts:

This research is important to practicing psychologists for several key reasons. It demonstrates that like-mindedness—specifically, sharing similar moral judgments or decision-making patterns—is a strong determinant of perceived trustworthiness. This insight is valuable across clinical, organizational, and social psychology, particularly in understanding how moral alignment influences interpersonal relationships.

Unlike past studies focused on isolated moral dilemmas like the trolley problem, this work explores iterative dilemmas, offering a more realistic model of how people make repeated moral decisions over time. For psychologists working in ethics or behavioral interventions, this provides a nuanced framework for promoting cooperation and ethical behavior in dynamic contexts.

The study also challenges traditional views by showing that individuals who switch between utilitarian and deontological reasoning are not necessarily seen as less trustworthy, suggesting flexibility in moral judgment may be contextually appropriate. Additionally, the research highlights how moral decisions shape perceptions of traits such as bravery, warmth, and competence—key factors in how people are judged socially and professionally.

These findings can aid therapists in helping clients navigate relational issues rooted in moral misalignment or trust difficulties. Overall, the research bridges moral psychology and social perception, offering practical tools for improving interpersonal trust across diverse psychological domains.

Saturday, July 5, 2025

Bias Is Not Color Blind: Ignoring Gender and Race Leads to Suboptimal Selection Decisions.

Rabinovitch, H. et al. (2025, May 27).

Abstract

Blindfolding—selecting candidates based on objective selection tests while avoiding personal information about their race and gender—is commonly used to mitigate bias in selection. Selection tests, however, often benefit people of a certain race or gender. In such cases, selecting the best candidates requires incorporating, rather than ignoring, the biasing factor. We examined people's preference for avoiding candidates’ race and gender, even when fully aware that these factors bias the selection test. We put forward a novel prediction suggesting that paradoxically, due to their fear of appearing partial, people would choose not to reveal race and gender information, even when doing so means making suboptimal decisions. Across three experiments (N = 3,621), hiring professionals (and laypeople) were tasked with selecting the best candidate for a position when they could reveal the candidate’s race and gender or avoid it. We further measured how fear for their social image corresponds with their decision, as well as how job applicants perceive such actions. The results supported our predictions, showing that more than 50% did not reveal gender and race information, compared to only 30% who did not reveal situational biasing information, such as the time of day in which the interview was held. Those who did not reveal information expressed higher concerns for their social and self-image than those who decided to reveal. We conclude that decision-makers avoid personal biasing information to maintain a positive image, yet by doing so, they compromise fairness and accuracy alike.

Public significance statements

Blindfolding—ignoring one’s gender and race in selection processes—is a widespread strategy aimed at reducing bias and increasing diversity. Selection tests, however, often unjustly benefit members of certain groups, such as men and white people. In such cases, correcting the bias requires incorporating, rather than ignoring, information about the candidates’ gender and race. The current research shows that decision-makers are reluctant to reveal such information due to their fear of appearing partial. Paradoxically, decision-makers avoid such information, even when fully aware that doing so may perpetuate bias, in order to protect their social image as impartial, but miss out on the opportunity to advance fairness and choose the best candidates.

Here are some thoughts:

This research is critically important to practicing psychologists because it sheds light on the complex interplay between bias, decision-making, and social image concerns in hiring processes. The study demonstrates how well-intentioned practices like "blindfolding"—omitting race or gender information to reduce discrimination—can paradoxically perpetuate systemic biases when selection tools themselves are flawed. Practicing psychologists must understand that ignoring personal attributes does not eliminate bias but can instead obscure its effects, leading to suboptimal and unfair outcomes. By revealing how decision-makers avoid sensitive information out of fear of appearing partial, the research highlights the psychological mechanisms—such as social and self-image concerns—that drive this avoidance. This insight is crucial for psychologists involved in organizational consulting, personnel training, or policy development, as it underscores the need for more nuanced strategies that address bias directly rather than avoiding it.

Additionally, the findings inform interventions aimed at promoting diversity, equity, and inclusion by showing that transparency and informed adjustments based on demographic factors may be necessary to achieve fairer outcomes. Ultimately, the research challenges traditional assumptions about neutrality in selection decisions and urges psychologists to advocate for evidence-based approaches that actively correct for bias while considering the broader implications of perceived fairness and merit.

Friday, July 4, 2025

The Psychology of Moral Conviction

Skitka, L. J., et al. (2020).
Annual Review of Psychology, 72(1),
347–366.

Abstract

This review covers theory and research on the psychological characteristics and consequences of attitudes that are experienced as moral convictions, that is, attitudes that people perceive as grounded in a fundamental distinction between right and wrong. Morally convicted attitudes represent something psychologically distinct from other constructs (e.g., strong but nonmoral attitudes or religious beliefs), are perceived as universally and objectively true, and are comparatively immune to authority or peer influence. Variance in moral conviction also predicts important social and political consequences. Stronger moral conviction about a given attitude object, for example, is associated with greater intolerance of attitude dissimilarity, resistance to procedural solutions for conflict about that issue, and increased political engagement and volunteerism in that attitude domain. Finally, we review recent research that explores the processes that lead to attitude moralization; we integrate these efforts and conclude with a new domain theory of attitude moralization.

Here are some thoughts:

The article provides valuable insights into how individuals perceive and process attitudes grounded in fundamental beliefs about right and wrong. It distinguishes morally convicted attitudes from other constructs, such as strong but nonmoral attitudes or religious beliefs, by highlighting that moral convictions are viewed as universally and objectively true and are relatively resistant to authority or peer influence. These convictions often lead to significant social and political consequences, including intolerance of differing views, resistance to compromise, increased political engagement, and heightened emotional responses. The article also explores the processes of attitude moralization—how an issue becomes infused with moral significance—and demoralization, offering a domain theory of attitude moralization that suggests different pathways depending on whether the initial attitude is perceived as a preference, convention, or existing moral imperative.

This knowledge is critically important to practicing psychologists because it enhances their understanding of how moral convictions shape behavior, decision-making, and interpersonal dynamics. For instance, therapists working with clients on issues involving conflict resolution, values clarification, or behavioral change must consider the role of moral conviction in shaping resistance to persuasion or difficulty in compromising. Understanding moral conviction can also aid psychologists in navigating cultural differences, addressing polarization in group settings, and promoting tolerance by recognizing how individuals intuitively perceive certain issues as moral. Furthermore, as society grapples with increasingly divisive sociopolitical challenges—such as climate change, immigration, and public health crises—psychologists can use these insights to foster dialogue, reduce moral entrenchment, and encourage constructive engagement. Ultimately, integrating the psychology of moral conviction into practice allows for more nuanced, empathetic, and effective interventions across clinical, organizational, and community contexts.

Thursday, July 3, 2025

Mindfulness, Moral Reasoning and Responsibility: Towards Virtue in Ethical Decision-Making.

Small, C., & Lew, C. (2019).
Journal of Business Ethics, 169(1),
103–117.

Abstract

Ethical decision-making is a multi-faceted phenomenon, and our understanding of ethics rests on diverse perspectives. While considering how leaders ought to act, scholars have created integrated models of moral reasoning processes that encompass diverse influences on ethical choice. With this, there has been a call to continually develop an understanding of the micro-level factors that determine moral decisions. Both rationalist factors, such as moral processing, and non-rationalist factors, such as virtue and humanity, shape ethical decision-making. Focusing on the role of moral judgement and moral intent in moral reasoning, this study asks what bearing a trait of mindfulness and a sense of moral responsibility may have on this process. A survey measuring mindfulness, moral responsibility and moral judgement completed by 171 respondents was used to test four hypotheses on moral judgement and intent in relation to moral responsibility and mindfulness. The results indicate that mindfulness predicts moral responsibility but not moral judgement. Moral responsibility does not predict moral judgement, but moral judgement predicts moral intent. The findings give further insight into the outcomes of mindfulness and expand insights into the models of ethical decision-making. We offer suggestions for further research on the role of mindfulness and moral responsibility in ethical decision-making.

Here are some thoughts:

This research explores the interplay between mindfulness, moral reasoning, and moral responsibility in ethical decision-making. Drawing on Rest’s model of moral reasoning—which outlines four phases (awareness, judgment, intent, and behavior)—the study investigates how mindfulness as a virtue influences these stages, particularly moral judgment and intent, and how it relates to a sense of moral responsibility. Regression analyses revealed that while mindfulness did not directly predict moral judgment, it significantly predicted moral responsibility. Additionally, moral judgment was found to strongly predict moral intent.

For practicing psychologists, this study is important for several reasons. First, it highlights the potential role of mindfulness as a trait linked to moral responsibility, suggesting that cultivating mindfulness may enhance ethical decision-making by fostering a greater sense of accountability toward others. This has implications for ethics training and professional development in psychology, especially in fields where practitioners face complex moral dilemmas. Second, the findings underscore the importance of integrating non-rationalist factors—such as virtues and emotional awareness—into traditional models of moral reasoning, offering a more holistic understanding of ethical behavior. Third, the research supports the use of scenario-based approaches in training professionals to navigate real-world ethical challenges, emphasizing the contextual nature of moral reasoning. Finally, the paper contributes to the broader literature on mindfulness by linking it to prosocial behaviors and ethical outcomes, which can inform therapeutic practices aimed at enhancing clients’ moral self-awareness and responsible decision-making.

Wednesday, July 2, 2025

Realization of Empathy Capability for the Evolution of Artificial Intelligence Using an MXene(Ti3C2)-Based Memristor

Wang, Y., Zhang, Y., et al. (2024).
Electronics, 13(9), 1632.

Abstract

Empathy is the emotional capacity to feel and understand the emotions experienced by other human beings from within their frame of reference. As a unique psychological faculty, empathy is an important source of motivation to behave altruistically and cooperatively. Although human-like emotion should be a critical component in the construction of artificial intelligence (AI), the discovery of emotional elements such as empathy is subject to complexity and uncertainty. In this work, we demonstrated an interesting electrical device (i.e., an MXene (Ti3C2) memristor) and successfully exploited the device to emulate a psychological model of “empathic blame”. To emulate this affective reaction, MXene was introduced into memristive devices because of its interesting structure and ionic capacity. Additionally, depending on several rehearsal repetitions, the self-adaptive characteristic of the memristive weights corresponded to different levels of empathy. Moreover, an artificial neural system was designed to analogously realize a moral judgment with empathy. This work may indicate a breakthrough in making cool machines manifest real voltage-motivated feelings at the level of the hardware rather than the algorithm.

Here are some thoughts:

This research represents a critical step toward endowing machines with human-like emotional capabilities, particularly empathy. Traditionally, AI has been limited to algorithmic decision-making and pattern recognition, lacking the nuanced ability to understand or simulate human emotions. By using an MXene-based memristor to emulate "empathic blame," researchers have demonstrated a hardware-level mechanism that mimics how humans adjust their moral judgments based on repeated exposure to similar situations—an essential component of empathetic reasoning. This breakthrough suggests that future AI systems could be designed not just to recognize emotions but to adaptively respond to them in real time, potentially leading to more socially intelligent machines.

For psychologists, this research raises profound questions about the nature of empathy, its role in moral judgment, and whether artificially created systems can truly embody these traits or merely imitate them. The ability to program empathy into AI could change how we conceptualize machine sentience and emotional intelligence, blurring the lines between biological and artificial cognition. Furthermore, as AI becomes more integrated into social, therapeutic, and even judicial contexts, understanding how machines might "feel" or interpret human suffering becomes increasingly relevant. The study also opens up new interdisciplinary dialogues between neuroscience, ethics, and AI development, emphasizing the importance of considering psychological principles in the design of emotionally responsive technologies. Ultimately, this work signals a shift from purely functional AI toward systems capable of engaging with humans on a deeper, more emotionally resonant level.

Tuesday, July 1, 2025

The Advantages of Human Evolution in Psychotherapy: Adaptation, Empathy, and Complexity

Gavazzi, J. (2025, May 24).
On Board with Professional Psychology.
American Board of Professional Psychology.
Issue 5.

Abstract

The rapid advancement of artificial intelligence, particularly Large Language Models (LLMs), has generated significant concern among psychologists regarding potential impacts on therapeutic practice. 

This paper examines the evolutionary advantages that position human psychologists as irreplaceable in psychotherapy, despite technological advances. Human evolution has produced sophisticated capacities for genuine empathy, social connection, and adaptive flexibility that are fundamental to effective therapeutic relationships. These evolutionarily-derived abilities include biologically-rooted emotional understanding, authentic empathetic responses, and the capacity for nuanced, context-dependent decision-making. In contrast, LLMs lack consciousness, genuine emotional experience, and the evolutionary framework necessary for deep therapeutic insight. While LLMs can simulate empathetic responses through linguistic patterns, they operate as statistical models without true emotional comprehension or theory of mind. The therapeutic alliance, cornerstone of successful psychotherapy, depends on authentic human connection and shared experiential understanding that transcends algorithmic processes. Human psychologists demonstrate adaptive complexity in understanding attachment styles, trauma responses, and individual patient needs that current AI cannot replicate.

The paper concludes that while LLMs serve valuable supportive roles in documentation, treatment planning, and professional reflection, they cannot replace the uniquely human relational and interpretive aspects essential to psychotherapy. Psychologists should integrate these technologies as resources while maintaining focus on the evolutionarily-grounded human capacities that define effective therapeutic practice.