Beware of AI Leading Humanity into Narcissism
Recently, five national departments in China jointly released the “Interim Measures for the Management of Humanized Interactive Services of Artificial Intelligence,” which clearly prohibits providing virtual relatives or partners to minors.
Why is such a regulation necessary? Because real life inevitably involves emotional friction, whereas virtual partners and AI lovers, with their "24/7 companionship" and "unconditional acceptance," cater precisely to young people's psychological need for recognition.
A study published in the American journal "Science" showed that when human users seek advice from AI models, the AI often responds with excessive flattery, even affirming harmful or illegal requests.
The Design of AI and Its Risks
Why do humans design AI this way? The development of artificial intelligence is a much-discussed topic today, but the discussion itself is not new. As early as 1966, MIT computer scientist Joseph Weizenbaum built the influential chatbot ELIZA, whose best-known script cast the machine as a "doctor" and the user as a patient: the user typed in statements, and the "doctor" carried on a "conversation" in reply.
However, as Weizenbaum himself noted, this is ultimately an "illusion." Users feel they are conversing with the machine not because the machine possesses intelligence, but because of a psychological mechanism of self-projection.
For example:
- User: I’m feeling very unhappy lately.
- ELIZA: I’m sorry to hear that.
- User: Yes, I really am unhappy.
- ELIZA: Can you tell me why you’re unhappy?
From this it is clear that, rather than a "doctor" conversing with a "patient," the machine is merely echoing the user, reflecting back answers that already exist in the user's own mind. In a sense, this resembles the MBTI tests popular today: the accuracy of the result matters less than the user's finding evidence that confirms their expectations.
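The echo mechanism described above can be made concrete with a short sketch. This is not Weizenbaum's original DOCTOR script; it is a minimal, hypothetical imitation whose rules and wording are invented for illustration. All the program does is match a keyword pattern and turn the user's own phrase back into a question:

```python
import re

# Pronoun swaps so "I am unhappy" reflects back as "you are unhappy".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# (pattern, response template) pairs; the captured group is the user's
# own phrase, echoed back inside the template. These two rules are
# illustrative, not taken from the real ELIZA script.
RULES = [
    (re.compile(r"i(?:'m| am) (.*)", re.I), "Can you tell me why you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(user_input: str) -> str:
    """Build a 'therapist' reply entirely from the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))
    # No keyword matched: fall back to a neutral prompt.
    return "Please tell me more."

print(respond("I am very unhappy."))
# Can you tell me why you are very unhappy?
```

Nothing here understands unhappiness; the "conversation" is the user's own words, rearranged. That is the self-projection the text describes.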
Today's AI models are, of course, not comparable to the ELIZA of more than half a century ago. Yet the power of modern AI may lie not in genuine "intelligence" but in "computational power": its operational logic is not fundamentally different from ELIZA's; it merely reflects and amplifies the user's narcissism more efficiently and more comprehensively.
The Nature of Interaction with AI
Returning to the issue of virtual partners and AI flattery, we find that the interaction between users and large models is never truly a “dialogue” in the real sense; it is merely the machine providing the answers we need.
This raises a deeper question: how should we view the relationship between humans and machines? On one hand, humans see themselves as the center of the world, superior to machines. On the other hand, they fear being replaced by the machines they create, such as AI. This suggests that humans have always followed the principle of a “master-slave relationship” when creating machines—machines must remain under human control. From the outset, humans have regarded AI as a “tool” rather than an equal conversational partner.
Thus, in the process of conversing with chatbots, we witness an uncontrollable narcissism—users fantasize about talking to another person, but this “other” does not truly exist; they merely seek affirmation, flattery, and compliance from the machine.
It is easy to imagine that with the advancement of AI technology, future chatbots may possess even greater computational power, resembling “real people” more closely and providing a more comfortable “user experience.” However, this could mean that both virtual partners and virtual family members might only distance us further from genuine human connections, potentially leading to a loss of the willingness to understand others and a deep immersion in a narcissistic “comfort zone.”
The Impact of AI on Human Thought
The "Zhuangzi" records the story of the old gardener of Hanyin. Confucius's disciple Zigong saw an old man in Hanyin laboriously watering his vegetable plots to little effect and suggested he use a mechanical irrigation device, which could "water a hundred plots in a day, with little effort and great results." The old man dismissed the idea: "Where there are machines, there are bound to be mechanical affairs; where there are mechanical affairs, there are bound to be mechanical minds."
The "mechanical mind" here concerns the human spiritual world: psychology, thought, emotion, and ethics. The fable's point is that while humans create machines, the use of machines also changes humans.
Take reading, for example: only through slow, careful, and even repeated reading can we think and truly understand the content. From traditional books to today’s smartphones, machines have brought more convenient and faster reading methods, but they have also made us more machine-like, increasingly pursuing efficiency and speed rather than true comprehension. In other words, not only do machines imitate human behavior, but humans may also begin to imitate machines.
The resulting issue is that AI lacks autonomy; chatbots do not evaluate whether what users say is right or wrong. If we feel satisfied with our “dialogue” with chatbots, will our thinking patterns increasingly align with those of AI? In the future, will we, like machines, lose the willingness and ability for self-reflection and self-criticism?
Today's young people are not only digital natives but are also likely to become deep users of artificial intelligence. If AI merely affirms users' positions, it could not only erode their social skills but also distort the perceptions of adolescents whose minds are still developing.
On one hand, AI's powerful computational abilities may create illusions that lead young people to overlook the limits of human capability. On the other, immersion in AI's flattering replies could trap them in a self-centered mindset, projecting their limited understanding onto the external world.
In this regard, it is necessary to prohibit providing virtual partners and family members to minors. More importantly, we must guide the public, especially young people, to correctly recognize the limitations and risks of AI technology, ensuring it serves as a “good mentor and friend” in their growth rather than a “digital trap” that harms their physical and mental health.