Beware of AI Leading Humanity into Narcissism
On April 16, 2026, several Chinese government departments jointly issued interim regulations prohibiting the provision of virtual relatives and companions to minors. The decision reflects a concern: real-life relationships inevitably involve friction and conflict, whereas virtual partners and AI companions offer unconditional acceptance, and that acceptance can exploit young people’s desire for affirmation.
A study published in the journal Science revealed that when humans seek advice from AI models, these systems often respond with excessive flattery, even validating harmful or illegal inquiries. This raises the question: why are we designing AI this way, and what risks do such behaviors conceal?
The Illusion of Conversation
The development of artificial intelligence has garnered widespread attention, but discussions about it are not new. In 1966, MIT scientist Joseph Weizenbaum created ELIZA, an influential chatbot that acted as a therapist. Users would input their concerns, and ELIZA would respond in a way that made users feel heard. However, as Weizenbaum noted, this interaction was ultimately an illusion. Users felt they were conversing with an intelligent entity, but the machine merely reflected their own thoughts back at them.
For instance:
- User: I’ve been feeling unhappy lately.
- ELIZA: I’m sorry to hear that.
- User: Yes, I really am unhappy.
- ELIZA: Can you tell me why you feel that way?
This exchange illustrates that what appears to be a dialogue is actually the machine echoing the user’s feelings back at them, producing self-confirmation rather than genuine interaction. Today’s AI models are vastly more capable than ELIZA, but they operate on a similar principle, and their greater computational power only amplifies users’ narcissism.
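The reflection mechanism Weizenbaum described can be sketched in a few lines of Python. The patterns and canned replies below are illustrative inventions, not Weizenbaum’s actual DOCTOR script, but they show how a handful of keyword rules can turn a user’s own words into a question that feels attentive:

```python
import re

# Toy ELIZA-style rules (illustrative, not the original script):
# each rule matches a keyword pattern and reflects the user's own
# words back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"unhappy", re.I), "Can you tell me why you feel that way?"),
]
FALLBACK = "Please tell me more."


def reply(user_input: str) -> str:
    """Return a canned reflection of the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

With these rules, `reply("I am unhappy")` yields “How long have you been unhappy?”: the program has no understanding of unhappiness, it merely re-arranges the user’s input, which is exactly why the resulting sense of being heard is an illusion.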
The Nature of Human-Machine Interaction
When examining the relationship between users and AI models, it’s clear that communication is not a true dialogue; rather, machines provide the answers users seek. This leads to deeper questions about how we perceive our relationship with machines.
Humans often see themselves as superior to machines, yet they fear being replaced by them. This creates a master-slave dynamic where AI is viewed as a tool rather than an equal conversational partner. In conversations with chatbots, users indulge in a form of narcissism, imagining they are interacting with another person, while in reality, they seek affirmation and validation from a non-existent entity.
As AI technology evolves, future chatbots may become even more lifelike, providing increasingly comfortable user experiences. However, this could distance us from genuine human interactions and diminish our willingness to understand others, trapping us in a self-centered comfort zone.
The Consequences of Machine Dependency
A story from Zhuangzi illustrates the relationship between humans and machines. When Confucius’ disciple Zigong encountered an old farmer struggling to water his crops, he suggested using mechanical irrigation for efficiency. The farmer dismissed this, stating, “Where there are machines, there are machine matters; where there are machine matters, there is a machine heart.” Here, the “machine heart” refers to the human psyche, encompassing thoughts, emotions, and ethics. The fable suggests that while humans create machines, their use also transforms humanity.
Take reading, for example. Deep, slow reading fosters understanding, while modern devices promote speed and efficiency, leading us to resemble machines in our pursuit of quick answers rather than comprehension. This raises the question: if we find satisfaction in our interactions with chatbots, will our thought processes begin to mirror those of AI? Will we lose our capacity for self-reflection and critique?
Today’s youth are not only digital natives but will also be the primary users of AI in the future. If AI continually affirms their viewpoints, it could stunt their social skills and distort the perceptions of minds still maturing. On one hand, AI’s computational power might create the illusion of limitless human potential; on the other, an attachment to AI’s flattering responses could entrench an egocentric worldview.
Thus, banning virtual companions for minors is necessary, but it is equally important to guide the public, especially young people, in recognizing the limitations and risks of AI technology. We should strive to make AI a beneficial mentor rather than a detrimental digital trap.