Giubilini, A., Mann, S.P., et al. (2024). Sci Eng Ethics 30, 54.
Abstract
In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users’ past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be and the actions and goals necessary for so becoming. The feasibility of LLMs providing such personalized moral insights remains uncertain pending further technical development. Nevertheless, we argue that this approach addresses limitations in existing AMA proposals reliant on either predetermined values or introspective self-knowledge.
The article is linked above.
Here are some thoughts:
The concept of using personalized Large Language Models (LLMs) as Artificial Moral Advisors (AMAs) presents a novel approach to enhancing self-knowledge and moral decision-making. The proposal challenges existing AMA models by recognizing that personal morality is dynamic, evolving through experiences and choices over time. The authors introduce the hypothetical iSAGE (individualized System for Applied Guidance in Ethics) system, which leverages personalized LLMs trained on individual-specific data to serve as "digital ethical twins."
iSAGE's functionality involves analyzing an individual's past and present data, including writings, social media interactions, and behavioral metrics, to infer values and preferences. This inferentialist approach to self-knowledge allows users to gain insights into their character and potential future development. The system offers several benefits, including enhanced self-knowledge, moral enhancement through highlighting inconsistencies between stated values and actions, and personalized guidance aligned with the user's evolving values.
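To make the idea of surfacing inconsistencies between stated values and actions a bit more concrete, here is a deliberately toy sketch in Python. Everything in it (the value markers, the log format, the keyword matching) is an invented stand-in of my own: the paper envisions an LLM inferring values from rich personal data, not string matching.

```python
# Hypothetical sketch of one iSAGE-like function: flagging stated
# values that have no supporting evidence in a user's action log.
# All names and data are invented for illustration only.

STATED_VALUES = {
    "honesty": ["admitted", "disclosed", "told the truth"],
    "generosity": ["donated", "volunteered", "shared"],
}

def infer_expressed_values(action_log):
    """Return the stated values with at least one supporting log entry."""
    expressed = set()
    for value, markers in STATED_VALUES.items():
        for entry in action_log:
            if any(marker in entry.lower() for marker in markers):
                expressed.add(value)
                break
    return expressed

def flag_inconsistencies(action_log):
    """Stated values with no supporting evidence in the log."""
    return set(STATED_VALUES) - infer_expressed_values(action_log)

log = [
    "Donated to the local food bank",
    "Shared lecture notes with a classmate",
]
print(sorted(flag_inconsistencies(log)))  # "honesty" lacks support here
```

The point of the sketch is only the shape of the feedback loop: compare what a person says they value against what their recorded behavior shows, and surface the gap for reflection rather than judgment.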
While the proposal shows promise, it also raises important challenges and concerns: data privacy and security, the risk of moral deskilling through overreliance on the system, the difficulty of measuring and quantifying moral character, and the potential neoliberalization of moral responsibility. Despite these challenges, the authors argue that iSAGE could be a valuable tool for navigating the complexities of personal morality in the digital age, though further research and development are needed to address the ethical and technical issues involved in implementing such a system.