Redefining AI: Large Models as Cultural and Social Technologies
On March 13, 2025, Science published an article titled “Large AI models are cultural and social technologies: Implications draw on the history of transformative information systems from the past.” This article argues that large language models (LLMs) should be defined as cultural and social technologies rather than autonomous agents. Their technical essence is more akin to historical information processing systems such as writing, printing, and bureaucratic systems, facilitating social coordination by reorganizing accumulated cultural data. Misinterpreting LLMs as intelligent agents can divert public discussions from substantive issues, necessitating a shift towards a sociotechnical analysis framework to accurately assess their social impacts and governance pathways.
Introduction
Debates surrounding artificial intelligence often focus on whether large models possess intelligence and autonomy. Discussions about the cultural and social consequences of these models center on two points: their immediate impacts and the hypothetical future in which these systems evolve into artificial general intelligence (or even superintelligent AI).
However, viewing large models as intelligent agents is fundamentally misguided. Integrating perspectives from social sciences and computer science helps clarify our understanding of AI systems: large models should not be seen as agents but as a new form of cultural and social technology that enables humans to leverage the accumulated information of others.
Social and Cultural Institutions
Since the dawn of humanity, we have relied on culture. Starting with language, humans possess a unique ability to learn from others’ experiences, which can be viewed as a key to our evolutionary success.
Significant transformations in cultural technologies have led to drastic social changes: the evolution from oral traditions to images, writing, printing, film, and video. As information spreads widely across time and space, new methods for acquiring and organizing information (such as libraries, newspapers, and internet searches) have continuously developed. These advancements have profoundly affected human thought and society, for better or worse.
Humans have also depended on social institutions to coordinate individual information gathering and decision-making. These institutions can themselves be viewed as a form of technology. In modern society, markets, democracy, and bureaucratic systems are particularly important:
- Economist Friedrich Hayek pointed out that market price mechanisms dynamically summarize extremely complex economic relationships into simplified representations. Producers and buyers do not need to understand production complexities; they only need to know the price, which compresses vast details into usable representations.
- The electoral mechanisms of democratic governance similarly focus dispersed public opinion into collective laws and leadership decisions.
- Anthropologist James C. Scott proposed that all states (democratic or not) manage complex societies through bureaucratic systems that create classifications and systematically organize information.
Long before the advent of computers, markets, democracy, and bureaucratic systems relied on generating "lossy" (incomplete, selective, and irreversible) yet useful representations. These representations both depend on and transcend individual knowledge and decision-making.
Humans heavily rely on these cultural and social technologies, but their feasibility is rooted in the unique capabilities of human agents. Humans and other animals can perceive and act upon a changing external world, construct new world models, update them based on evidence, and design new goals. Humans can create and transmit new beliefs and values through language or print. Cultural and social technologies powerfully convey and organize these beliefs and values, but without individual capabilities, these technologies would be ineffective. Without innovation, imitation is meaningless.
Some AI systems (like those in robotics) indeed attempt to instantiate similar truth-discovery capabilities. While it is theoretically possible for artificial systems to achieve this in the future, all such systems currently fall far short of human capabilities. We can discuss the degree of concern regarding these potential future AI systems or how to address their emergence, but this is fundamentally different from the impacts of current and near-term large models.
Large Models
Unlike more agentic systems, large models have made significant and unexpected progress in recent years, placing them at the center of current discussions in the AI field. This progress has even given rise to a "scaling hypothesis": the idea that scale alone will close the remaining gap. However, there is an essential difference between large models and agents, and scaling cannot change this.
Large models are not agents; they are a new way of combining characteristics of cultural and social technologies. They generate summaries of vast amounts of human-generated information. But these systems do more than summarize information the way library catalogs, internet search, and Wikipedia do; like markets, states, and bureaucratic systems, they can also reorganize and reconstruct (or "simulate") representations of that information at scale. Just as market prices are lossy representations of resource allocation and use, and government statistics and bureaucratic classifications imperfectly represent population characteristics, large models are a "lossy JPEG" of their training data.
Behind the agent-like interfaces and anthropomorphic packaging, however, large language models and large multimodal models are statistical models: they decompose vast libraries of human-generated text into tokens from a fixed vocabulary and estimate the probability distributions of long sequences of those tokens. This is an imperfect representation of language, but it encodes a wealth of information about the patterns that language contains. It is what allows large language models to predict the next words in a sequence and thereby generate human-like text.
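The next-word prediction described above can be illustrated with a toy bigram model: a drastically simplified stand-in for the transformer-based estimators real large models use. The corpus and function name here are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Estimate P(next_word | word) from raw word counts: a tiny,
    lossy statistical summary of the training text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # Normalize raw counts into conditional probabilities.
    return {
        prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
        for prev, nxt in counts.items()
    }

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
# In this corpus, "sat" is always followed by "on", while "the" is
# followed by "cat", "mat", "dog", and "rug" once each (0.25 apiece).
print(model["sat"]["on"])  # 1.0
```

Real models estimate distributions over long token sequences with billions of parameters, but the underlying object is the same: a compressed, imperfect summary of frequencies in human-generated text.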
Large models not only abstract vast human culture but also allow for diverse new operations: simple arguments can be expressed as elaborate metaphors, and complex prose can be compressed into plain language, among others. Cultural information that was previously too complex, vast, and ambiguous to operate on at scale has been tamed.
In practice, the latest versions of these systems not only rely on vast libraries of human-generated and curated text and images but also on other forms of human judgment and knowledge. In particular, these systems depend on reinforcement learning from human feedback (RLHF) and prompt engineering. Even the latest “chain of thought” models typically start with dialogues with human users.
Challenges and Opportunities
1. Challenges
Debates about artificial intelligence should focus on the challenges and opportunities presented by these new cultural and social technologies. The technology we currently possess has impacts on written and visual culture comparable to the effects of large-scale markets on the economy, large bureaucracies on society, and even the transformation of language by printing. What will happen next? Like past general-purpose technologies in economics, organization, and information, these systems will affect productivity, supplement human work, automate tasks previously only achievable by humans, and influence distribution, potentially altering resource acquisition patterns.
They may also produce broader cultural impacts. We do not yet know whether these impacts will be as profound as those of printing, markets, or bureaucratic systems, but viewing them as cultural technologies highlights their potential significance.
At the same time, these technologies create new possibilities for reorganizing information and coordinating the actions of millions globally. Ongoing debates about the economic, social, and political consequences of large language models echo historical concerns and expectations regarding new cultural and social technologies. Guiding these debates requires recognizing the commonalities of new and old arguments while carefully mapping the specificities of new technologies.
Such mapping is a core task of social sciences. Research into the past consequences of technologies can help us think about the latent social impacts of AI and explore pathways to enhance positive impacts and mitigate negative ones through the redesign of AI systems.
However, two obvious current concerns are whether large models and related technologies will replace "knowledge workers," and whether they will homogenize or fragment culture and society. Thinking about these questions in historical context is enlightening.
Large models are designed to faithfully reproduce the actual probabilities of sequences of text, images, and video. They are therefore inherently most accurate about the most common scenarios in the training data and least accurate about rare or entirely new scenarios, which may exacerbate this homogenization.
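This frequency-matching tendency can be seen even in a toy simulation; the distribution values below are invented purely for illustration.

```python
import random

random.seed(0)

# A hypothetical "true" distribution of scenarios in the world.
true_dist = {"common": 0.90, "uncommon": 0.09, "rare": 0.01}

# A model fit to a finite training sample mostly sees common scenarios.
sample = random.choices(
    list(true_dist), weights=list(true_dist.values()), k=200
)
estimated = {k: sample.count(k) / len(sample) for k in true_dist}

# The estimate tracks the majority pattern closely, while rare
# scenarios may be badly mis-estimated or missing entirely:
# faithfully matching training frequencies favors the common case.
print(estimated)
```

A system rewarded for matching these frequencies will, by construction, serve the center of the distribution better than its edges.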
2. Opportunities
On the other hand, large models may allow us to design new methods to access the diversity of cultural perspectives they summarize. Integrating and balancing these perspectives could provide more nuanced means of solving complex problems. One way to achieve this is to construct a “social-like” ecology—different large models encoding different perspectives could debate, cross-fertilize to generate mixed perspectives, or identify gaps in human expertise. We may need new systems that diversify the reflections and roles of large models, producing distributions and diversities akin to human society.
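The "social-like ecology" described above can be sketched as a minimal aggregation loop: several models, each encoding a different perspective, answer the same question, and a coordinator surfaces consensus or disagreement. The perspective functions and their answers are hard-coded stand-ins for real model calls.

```python
# Toy stand-ins for large models encoding different perspectives.
def economist(question):
    return "prices will adjust"

def ecologist(question):
    return "resource limits bind"

def historian(question):
    return "prices will adjust"

def debate(question, perspectives):
    """Collect each perspective's answer and flag disagreement,
    which may mark a gap worth closer human attention."""
    answers = {p.__name__: p(question) for p in perspectives}
    if len(set(answers.values())) == 1:
        return "consensus", answers
    return "disagreement worth examining", answers

status, answers = debate(
    "What happens when demand outstrips supply?",
    [economist, ecologist, historian],
)
print(status)  # disagreement worth examining
```

A real ecology of models would need far richer mechanisms for cross-fertilizing and weighing perspectives, but the design goal is the same: preserve diversity rather than collapse it into a single averaged answer.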
Such diversified systems are particularly crucial for scientific advancement. By linking numerous perspectives across texts, audio, and images, large models may help us discover unprecedented connections among them, benefiting science and society.
The impact of new cultural and social technologies on economic relations also presents more subtle yet intriguing pathways. The development of cultural technologies has sparked fundamental economic tensions between information producers and distribution systems:
- The contradiction between producers and distributors: Distributors seek to acquire information at low cost, while producers wish to distribute information at low cost.
- Digitalization exacerbates the contradiction: The convenience of digital information distribution has sharpened this issue. The speed, efficiency, and scope with which large models process available information make these problems more pronounced. Concentrated power may make system owners more likely to seize benefits through efficiency at the expense of others' rights.
3. Technological and Political Issues Under Challenges and Opportunities
Key technological questions include: To what extent can systemic flaws in large models be corrected? How do they compare in advantages and disadvantages to flaws in systems based on human knowledge workers?
These questions should not obscure critical political issues: Which actors can mobilize their interests? How do they shape the hybrid outcomes of technology and organizational capabilities?
Tech commentators often simplify these issues into a binary opposition between machines and humans: either the forces of progress triumph over Luddite tendencies, or humans successfully resist the dehumanizing encroachment of artificial technologies. This not only misunderstands the complexities of distribution struggles that predate the advent of computers but also overlooks the multiple paths that future progress may take.
In early cases of social and cultural technologies, a series of norms and regulatory frameworks gradually emerged to reconcile their impacts. However, these checks and balances do not form spontaneously but are the result of coordinated efforts by actors both inside and outside technology.
Looking to the Future
The narrative of artificial general intelligence (i.e., viewing large models as superintelligent agents) is promoted by both optimists and skeptics inside and outside the tech community. This narrative misinterprets the nature of these models and their relationship to past technological transformations. More importantly, it diverts attention from the real issues and opportunities these technologies pose and ignores the historical lessons that can guide a weighing of their pros and cons.
There may exist hypothetical future AI systems closer to agents, but large models are clearly not such systems. Like library card catalogs or the internet, large models belong to a continuum of cultural and social technology development.
Social sciences have explored this history in detail, forming a unique understanding of past technological upheavals. Close collaboration between computer science and engineering with social sciences will help us understand this history and apply its lessons: Will large models lead to cultural homogenization or fragmentation? Will they reinforce or undermine the social institutions of human discovery? Who will benefit and who will be harmed in this process?
These pressing questions are difficult to focus on in debates that analogize large models to human agents. Shifting the framework of debates about artificial intelligence will help promote research.
If both computer scientists and social scientists understand that large models are nothing more (and nothing less) than new cultural and social technologies, they will find it easier to collaborate by combining their expertise. Computer scientists can integrate their deep understanding of system mechanisms with social scientists' knowledge of how large-scale systems have reshaped society, expanding existing research agendas and discovering new directions.
Additionally, steering debates away from both the existential fear of "machines taking over" and the utopian promise of "a perfect artificial assistant for everyone" will refocus policy on the actual consequences of large models.
With this mindset, engineers and computer scientists have already recognized the bias issues of large models and are contemplating their relationship with ethical justice. They need to go further and ask: How will these systems affect resource distribution? What are their actual consequences for social polarization and integration? Can we develop large models that enhance rather than suppress human creativity?
Answering these questions requires an understanding that combines social sciences and engineering. Shifting the debate about artificial intelligence from agency to cultural and social technology is a crucial first step in building this interdisciplinary understanding.