Current Development of China’s AI Industry
In recent years, artificial intelligence (AI) has become the core engine of a new wave of technological revolution, penetrating deeply into production and daily life and profoundly reshaping the global economic structure, innovation paradigms, and the logic of social governance. China has entered the first tier of global AI development and stands at a critical window for moving from follower to leader. In the face of increasingly fierce international competition and the domestic demand for high-quality development, we conducted field research to assess the current state of China’s AI industry.
Current Trends in China’s AI Industry Development
General Secretary Xi Jinping pointed out that AI is a strategic technology leading this round of technological revolution and industrial transformation, with a strong “leading goose” effect. AI is not merely a linear iteration of a single technology or a partial upgrade of a certain industry; it represents a comprehensive and disruptive reconstruction of the underlying logic of economic and social operations. To evaluate its development level and trends, we must break away from traditional technical assessment and industrial analysis frameworks, and conduct a comprehensive assessment from dimensions such as technical capability, industrial scale, resource support, and integrated applications.
From a technical capability perspective, AI technologies led by open-source initiatives have achieved breakthroughs, setting new benchmarks across the global developer network. During our research at a laboratory, we observed a research team introduce a self-criticism mechanism that lets a model significantly improve its problem-solving accuracy over multiple rounds of self-play without human intervention. AI has evolved from being able to “hear and see” to “thinking, reasoning, and planning,” and ultimately to “mastering how to learn.” Overall, the gap between China and the international frontier in model capability, training efficiency, and multimodal integration continues to narrow, with some fields achieving parity or even leading. In 2025, Chinese models are expected to account for 17.1% of global open-source model downloads, and recent statistics show that eight of the world’s top ten open-source models are from China. The DeepSeek-V4 model rivals the best international models in performance, with API prices below 1% of those for the GPT-5.5 model. This breakthrough challenges the technical monopoly of a few tech giants, enabling millions of developers worldwide to build on Chinese open-source models. Open source not only shares benefits but also draws on collective strength, allowing China’s AI technology to continuously hone its capacity for self-evolution in an open ecosystem.
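The self-play improvement loop described above can be sketched in generic form. Everything below is an illustrative toy, not the laboratory’s actual system: the function names, the numeric “problem,” and the scoring rule are all assumptions made purely for demonstration.

```python
import random

def self_improvement_loop(solve, critique, revise, problem, rounds=50):
    """Generic generate -> critique -> revise loop: the model scores its
    own answers and keeps only revisions that its critic rates higher,
    with no human in the loop."""
    best = solve(problem)
    best_score = critique(problem, best)
    for _ in range(rounds):
        candidate = revise(problem, best)
        score = critique(problem, candidate)
        if score > best_score:  # accept only self-assessed improvements
            best, best_score = candidate, score
    return best, best_score

# Toy stand-ins: "solving" means guessing a number near a hidden target.
target = 42
solve = lambda p: random.randint(0, 100)
critique = lambda p, a: -abs(a - target)            # higher is better
revise = lambda p, a: a + random.choice([-5, -1, 1, 5])

answer, score = self_improvement_loop(solve, critique, revise, "toy")
```

Because the loop only ever accepts self-assessed improvements, the score is monotone in the number of rounds; real systems replace the toy critic with a learned reward or verifier model.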
From an industrial scale perspective, the AI industry has experienced nonlinear, explosive growth, with significant value spillover effects behind the trillion-yuan blue ocean. In 2025, the global AI market is projected to reach $757.58 billion, and the scale of China’s core AI industry is expected to exceed 1.2 trillion yuan. The value of this 1.2 trillion yuan lies not only in the number itself but in the growth logic behind it. Traditional industries follow a pattern of linear input and diminishing marginal returns; AI breaks this pattern, with technological breakthroughs and application diffusion reinforcing each other in a positive feedback loop of “the more it is used, the stronger it gets.” Research shows that Beijing, as an innovation source, will see its core AI industry reach 450 billion yuan in 2025, with a cohort of mature algorithm models acting as “digital technology pumps” that continuously supply intellectual energy to factories in Hebei, ports in Tianjin, and pastures in Inner Mongolia. Shanghai is building ecological attraction through its “Molded Shanghai” initiative, while Shenzhen aims to create a highly concentrated enterprise ecosystem that precisely serves the real economy. The AI industry thus exhibits a clear multiplier effect, in which every yuan invested can leverage several yuan of related output, and the trillion-yuan-scale industry is supported by a complete industrial chain, from underlying computing power to upper-level applications and from core algorithms to intelligent terminals, giving rise to new services, new divisions of labor, and new markets.
From a resource support perspective, China’s core AI resources have achieved a strategic leap, with institutional innovation accelerating the release of their vitality. Competition in AI is not only about how fast models run; it also depends on how solid the computing power base is and how smoothly data flows. China has established significant scale advantages in both of these core resources. In terms of computing power, 42 intelligent computing clusters have been built, and as of the first quarter of this year, intelligent computing capacity reached 188.2 EFLOPS (quintillion floating-point operations per second), ranking among the top globally. In terms of data, there are over 100,000 high-quality datasets nationwide totaling more than 890 petabytes, equivalent to 310 times the total digital resources of the National Library of China. Institutional advantages are also becoming apparent. The data foundation pilot area in Beijing has established a “regulatory sandbox” mechanism that effectively breaks the deadlock of companies’ reluctance to open up their data, allowing them to enter a protected “experimental field” for integrated training without transferring data ownership. As a technical leader at one company put it, “Previously, training on our own limited data made the model increasingly biased; now the sandbox has gathered real data from more than ten industries, significantly improving accuracy. The more the data is used, the more valuable it becomes.”
From the perspective of integrated applications, China’s AI is accelerating its penetration into various industries, with breadth of application and depth of integration building new global competitive advantages. By the end of 2025, the numerical-control (CNC) rate of key processes in major Chinese industries is expected to reach 68.6%, with AI integration moving from “isolated bright spots” to “full-chain intelligence.” First, the areas of penetration continue to expand, covering most major categories of the national economy and producing a number of benchmark applications in manufacturing, healthcare, transportation, finance, and energy. Second, the depth of empowerment has significantly improved, advancing from auxiliary roles into core areas such as research and development, design, production, and operational management. At a heavy equipment manufacturer in Shandong, we observed an industrial large-model system take over the entire process from blueprint analysis and process planning to quality inspection, compressing new process design from several weeks to under 72 hours and raising yield by 5%. Third, new business formats and models are emerging rapidly, with intelligent connected vehicles, AI-driven pharmaceuticals, and embodied intelligent robots flourishing and continuously forming new trillion-yuan industry tracks. Our research made clear that in this global contest of intelligence, the party with the most diverse application scenarios, the closest integration, and the most concentrated industrial feedback will define the standards and paradigms for how AI is used, where it is applied, and how deeply it penetrates, thereby gaining the initiative in the intelligent era.
Challenges Facing China’s AI Industry Development
Currently, the global competition in AI technology is intensifying, and China’s AI industry is at a critical juncture of application leadership, foundational catch-up, and ecological breakthrough. Facing external pressures such as computing power blockades and talent competition, we still have many “bottleneck” links and points of blockage, from high-end chips to basic algorithms, from original innovation to industrial transformation.
International competition is squeezing the development space of the AI industry. Our research found that some Western countries have escalated their China policies from restrictions on individual technologies to systematic ecological blockades. First, the “hard” blockade continues to intensify. The United States has tightened restrictions on sales of AI chips to China, forcing many domestic innovation teams to slow their large model development due to “computing power hunger.” Second, “soft” ecological barriers persist. Nvidia’s graphics processing units (GPUs) hold over 90% of the global market, and its Compute Unified Device Architecture (CUDA) ecosystem has, over more than a decade of accumulation, formed a closed loop of hardware, software, and developer community. We learned from a domestic chip company in Shanghai that although its hardware computing power metrics approach international mainstream levels, clients’ first question is whether it is compatible with CUDA. The crux is that chip replacement is not a simple hardware swap; it entails migrating an entire technical stack of development frameworks, operator libraries, debugging tools, and developer habits. Millions of developers are deeply bound to the CUDA ecosystem, and even when domestic alternatives meet performance standards, high migration costs and lengthy adaptation cycles obstruct large-scale adoption. Third, the contest over rule-making power is fierce. Global AI technology standards, governance norms, and cross-border data rules remain predominantly led by Western countries. In early 2025, the DeepSeek large model caused a stir in the global market with its technological breakthrough, prompting several Western countries to issue bans or launch strict reviews. The lesson is stark: technological leadership does not guarantee market access, and lacking discourse power can stall industrial expansion.
Large models face reliability crises in specialized scenarios. While large models perform impressively in general dialogue, their capability gaps become apparent in fields that demand high precision and reliability, such as industrial inspection, medical diagnosis, and financial risk control. A manufacturing company reported that slight changes in lighting caused its AI visual inspection system to misclassify products, both flagging good units as defective and letting defective units slip through, forcing manual re-inspection. “Stunning in the demo, failing on the production line” has become the reality for many enterprises implementing AI. The issue is that the generalization large models exhibit in open-domain tasks does not naturally transfer to specialized scenarios where fault tolerance approaches zero, leaving a significant engineering gap between “being able to talk” and “being reliably usable.” The “hallucination” problem cannot be overlooked either. In general scenarios such errors may be mere flaws, but in contexts like medical dosing, legal judgment, and financial risk control, each instance of confidently delivered nonsense could trigger irreparable harm. This exposes a fundamental limitation of today’s large models: they are essentially pattern matchers rather than logical reasoners. Bridging the gap from “being able to speak” to “speaking the truth,” and from “guessing answers” to “understanding causality,” is a threshold the industry must cross for deeper development.
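A common engineering response to this reliability gap is to wrap model output in deterministic validation, so that a hallucinated value is rejected and escalated rather than acted on. The sketch below is a minimal illustration under assumed names: `toy_model`, the dosage window, and the escalation policy are all invented for demonstration, not drawn from any real clinical system.

```python
def validate_dosage(suggestion_mg, low_mg=5.0, high_mg=50.0):
    """Deterministic guardrail: reject any suggested dose outside an
    explicitly allowed window instead of passing it downstream."""
    if not isinstance(suggestion_mg, (int, float)):
        raise ValueError("non-numeric model output")
    if not low_mg <= suggestion_mg <= high_mg:
        raise ValueError(f"dose {suggestion_mg} mg outside [{low_mg}, {high_mg}] mg")
    return suggestion_mg

def safe_dose(model_suggest, patient):
    """Wrap a possibly hallucinating model call; invalid output falls
    back to human review rather than an automatic action."""
    try:
        return ("auto", validate_dosage(model_suggest(patient)))
    except ValueError:
        return ("manual", "escalate to human review")

# Toy model that hallucinates an absurd dose for one input.
toy_model = lambda patient: 5000.0 if patient == "edge-case" else 20.0

routine = safe_dose(toy_model, "routine")    # ("auto", 20.0)
edge = safe_dose(toy_model, "edge-case")     # ("manual", "escalate to human review")
```

The point is architectural rather than clinical: in zero-fault-tolerance settings, the model proposes but a deterministic layer disposes, turning an open-ended generator into a bounded component.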
High-quality datasets still struggle to meet model development needs. Research indicates a common problem: an abundance of data “crude oil,” but insufficient “refining” capacity. The scale of globally available private data far exceeds that of public data, but institutional barriers, including inconsistent data standards, inadequate authorization mechanisms, and unclear compliance boundaries, leave a large amount of high-value data trapped in “islands.” Although China is rich in data resources, the data actually usable for large model training is severely lacking: among the roughly 5 billion items in commonly used global training datasets, Chinese-language text accounts for only 1.3%. Bottlenecks in data circulation further prevent China’s data scale advantage from being converted into core competitiveness. Copyright and legal risks are also rising. A company expanding overseas told us that its video generation model was accused of scraping overseas platform videos without authorization for training, triggering class-action lawsuits abroad. If data sovereignty and copyright barriers evolve into new trade weapons, they could cut off domestic enterprises’ legal access to high-quality international data resources.
The commercial application loop of the AI industry has yet to be closed. The AI industry is at a crossroads of transitioning from policy-driven to market-driven development, and sustainable business models are still being explored. First, there is a “gear misalignment” in the industrial chain. The computing power layer is expensive and insufficiently compatible with the model layer, which is general but lacks industry customization capabilities, while the application layer consists mostly of single-point tool products that do not communicate with each other, resulting in a lack of effective engagement mechanisms among the three links of computing power, models, and applications. Second, the profitability model for enterprises is unclear. Domestic users have not yet formed a habit of paying, and many application companies can only maintain themselves through project-based contracts or rely on government subsidies for support. The transition from “policy support” to “market-driven growth” is crucial for the industry to move out of the nurturing phase. Third, scaling product replication is challenging. An industrial AI founder admitted, “Three factory pilot projects were successful, but when the client asked for a different production line, the plan became useless. Without standardization, scaling is impossible; without scaling, we will always be burning money.” The gap between a “showroom” and a “commercial property” is not just a matter of individual technology, but rather a standardized product system that is configurable, replicable, and maintainable, with the prerequisite being standardized interfaces across all links of the industrial chain.
Accelerating the Development of China’s AI Industry Requires a Systematic Collaborative Effort
AI is a unique general-purpose technology that differs significantly from any frontier technology in historical technological revolutions. First, it has a strong path dependence and ecological lock-in effect. The underlying chips define the form of computing power, the intermediate framework determines the development paradigm, and the upper-level applications are deeply reliant on the interface standards of the former two—this highly coupled technical architecture means that once a first mover achieves dominance at any level, it can penetrate up and down, locking the entire industrial chain into its ecological system. Second, competition has evolved into a systematic game where every link is interconnected. Traditional technological competition focuses on single technologies, where overcoming one can lead to breakthroughs; however, AI competition covers all dimensions, including chips, frameworks, data, applications, and rules. Any shortcoming in one dimension could become the “Achilles’ heel” of the entire system. Third, the diffusion cycle has been drastically compressed. The electrical revolution took a century to fully penetrate society, and information technology took half a century to reshape commercial forms; AI is rewriting the underlying logic of industries at an unprecedented speed, with the conversion of first-mover advantages into lock-in advantages occurring much faster, leaving chasing parties with dramatically reduced response times. In this global competition that will determine the future, we are not facing a “bottleneck” at a specific technical point but rather a full-stack competition from underlying hardware to upper-level ecosystems, from technical standards to governance rules. To break the deadlock and seize the initiative, isolated breakthroughs are insufficient; we must engage in a “whole-element + whole-ecosystem” systematic collaborative effort. 
This requires the full flow of various elements such as computing power, data, algorithms, and scenarios, while also stimulating the innovative vitality of diverse entities such as enterprises, universities, research institutions, and developer communities. Moreover, national strategy must lead the way, uniting all forces into a cohesive effort.
In recent years, Hebei has become a key node in the national computing power industry layout, accelerating the construction of a nationally leading computing power industry ecosystem with policies as guidance, infrastructure as a foundation, integrated development as a goal, and regional collaboration as a path. The “2025 Comprehensive Computing Power Index” shows that Hebei maintains the top position nationwide.
Strengthening Core Technology Development to Build a Self-Controlled Development Foundation
Core technology development must shift its goal from merely catching up on single indicators to a systematic battle driven by ecosystem building. First, it must root itself in basic principles. If source innovation only focuses on the application and engineering levels, it will always be limited to patching up others’ theoretical frameworks. More resources must be directed towards foundational research “no man’s land” areas such as algorithm interpretability, causal reasoning, and brain-like computing, mastering the underlying logic that defines the technological route, thereby fundamentally breaking free from path dependence. Second, targeted breakthroughs and large-scale iterations must be emphasized. Focusing on core links in the AI chip, development framework, and basic software industrial chain, implement mechanisms like “ranking and challenging” and “horse racing” to concentrate efforts on overcoming key bottlenecks. More importantly, technological breakthroughs must form a closed loop with market applications; only by investing domestic software and hardware on a large scale in real training scenarios and continuously iterating and optimizing through large-scale trial and error can we use market feedback to enhance technological maturity, gradually forming an ecological attractiveness that can compete with first movers.
Optimizing Data Element Supply to Unblock High-Quality Supply Bottlenecks
China has a clear advantage in data resources, but it must address the two major bottlenecks of “refinability” and “circulation.” First, establish high-quality “data oilfields.” Relying on national-level data annotation bases, prioritize establishing standardized dataset systems in mature fields such as industry, healthcare, and finance, while increasing investment in data synthesis and intelligent enhancement technology. Only by processing raw data into high-quality data that can be directly used for model training can data elements truly enter the production function. Second, use institutional innovation to unblock circulation bottlenecks. Accelerate the supply of foundational systems around property rights definition, profit distribution, and safety compliance, and promote innovative models such as “data sandboxes” and “regulatory sandboxes” to achieve multi-source data integration training while ensuring ownership remains unchanged and safety is controllable, allowing data to truly realize value multiplication through circulation.
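The “ownership unchanged, data stays put” principle behind data and regulatory sandboxes is often realized with federated-style training, in which only model updates, never raw records, cross organizational boundaries. The sketch below is deliberately simplified (a one-parameter model and plain gradient averaging, with made-up party data); real sandboxes add secure aggregation, auditing, and access control.

```python
def local_gradient(theta, records):
    """Each party computes a gradient on its own data; the raw records
    never leave the party's environment."""
    # Squared-error loss for a one-parameter mean estimator.
    return sum(2 * (theta - x) for x in records) / len(records)

def federated_step(theta, parties, lr=0.1):
    """A coordinator averages the parties' gradients -- the only thing
    shared -- and applies one update step to the global model."""
    avg_grad = sum(local_gradient(theta, p) for p in parties) / len(parties)
    return theta - lr * avg_grad

# Three parties' private datasets stay in place throughout training.
parties = [[1.0, 2.0], [3.0], [4.0, 6.0]]
theta = 0.0
for _ in range(200):
    theta = federated_step(theta, parties)
# theta converges to the average of the per-party means: (1.5 + 3 + 5) / 3
```

The global model ends up trained on all three datasets even though no record was ever transferred, which is exactly the bargain a sandbox offers data holders.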
Accelerating the Promotion of Scalable Applications to Construct a Sustainable Commercial Loop
Application scenarios are the ultimate battlefield for testing the quality of AI. Currently, the core challenge facing the development of the AI industry is not the lack of good pilot projects, but rather the inability to replicate successful pilots on a large scale. It is essential to implement the “AI +” action deeply. First, deeply embed AI into core business functions, pushing it from auxiliary scenarios into high-value areas such as research and development, production scheduling, and risk management, thereby significantly reducing costs and increasing efficiency to stimulate enterprises’ willingness to pay. Second, construct an industrial chain collaborative engagement mechanism. Promote deep coupling between computing power providers, model vendors, and industry users, forming a collaborative network where computing power is supplied on demand, models are adapted to needs, and scenarios are quickly implemented, breaking the status quo of “each managing their own” through standardized interfaces. Third, resolutely advance productization transformation. Transition from customized project-based solutions to standardized solutions that are configurable, replicable, and maintainable, allowing the dilution of research and computing costs through scaling, driving the industry from a money-burning cycle into a profit-generating cycle.
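The “configurable, replicable, maintainable” requirement above is, at bottom, a software architecture discipline: deployment-specific variation moves into configuration while the code path stays identical across clients. The following is a minimal sketch with invented names (`LineConfig`, `inspect`) used purely for illustration, not any vendor’s actual product design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LineConfig:
    """Per-deployment configuration: what varies between client
    production lines lives here, not in the code."""
    name: str
    defect_threshold: float
    preprocess: Callable[[float], float]

def inspect(measurements, cfg: LineConfig):
    """The standardized product: one code path for every line, with
    behavior adapted purely through configuration."""
    flagged = [m for m in measurements if cfg.preprocess(m) > cfg.defect_threshold]
    return {"line": cfg.name, "inspected": len(measurements), "flagged": len(flagged)}

# Two different client lines reuse the identical product.
line_a = LineConfig("stamping", 0.8, preprocess=lambda m: m)
line_b = LineConfig("welding", 0.5, preprocess=lambda m: m / 2)

report_a = inspect([0.2, 0.9, 0.7], line_a)   # only 0.9 exceeds 0.8
report_b = inspect([0.2, 0.9, 1.4], line_b)   # only 1.4/2 = 0.7 exceeds 0.5
```

Shipping the same `inspect` to every client and varying only `LineConfig` is what allows research and computing costs to be amortized across deployments, instead of rebuilding the plan for each new production line.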
Enhancing Safety Governance Capabilities to Establish a Secure Development Bottom Line
The black-box nature of AI, its self-evolving capabilities, and generalization abilities extend the risk sources from external attacks to the “genetic defects” of the models themselves. Safety governance must evolve from static compliance checks to dynamic protection throughout the entire lifecycle. First, establish a layered and categorized agile governance framework. Emphasize transparency and traceability for general foundational models, while implementing differentiated regulations based on risk levels for vertical application scenarios, such as strict certification and robustness assessments for high-risk fields like healthcare and finance, while adopting lighter regulations for lower-risk scenarios, achieving a precise balance between safety and development. Second, strengthen internal safety barriers for technology. Increase R&D investment in safety technologies such as algorithm interpretability, privacy computing, and adversarial training, and establish a regular model safety inspection mechanism, using “technical firewalls” to preemptively address risks, making safety capabilities a “factory setting” of the models rather than a post-facto patch. Third, proactively lead the construction of global rules. Promote the transformation of China’s practical experiences in data classification, algorithm registration, and safety assessment into international governance solutions, seizing the initiative in rule-making within multilateral frameworks to avoid being locked in from behind.
Strengthening Multi-Party Collaborative Support to Build a Comprehensive Element and Ecosystem Support System
Systematic breakthroughs require matching institutional supplies and element support. In terms of funding, it is crucial to cultivate truly patient capital that is adaptable to innovation. Leverage the leading role of national funds and collaborate with localities to form a tiered layout of patient capital matrices, ensuring long-term investments in foundational breakthroughs and infrastructure development. Simultaneously, promote inclusive tools like “computing power vouchers” to lower the barriers for small and medium enterprises to participate in innovation. In terms of talent, it is essential to cultivate “dual-skilled talents” who understand both algorithmic logic and industry pain points. Such composite talents cannot be produced in bulk in classrooms; they must be nurtured through collaboration between leading enterprises and universities to build integrated education platforms, immersing them in real industrial scenarios over the long term. Accelerate the establishment of a composite talent training system with scale effects, forming a tiered supply from top scientists to large-scale application talents. In terms of open cooperation, it is vital to root in China while connecting with the world. Relying on mechanisms like the “Belt and Road Initiative,” support enterprises in deeply embedding themselves in the global innovation network through open-source collaboration and joint R&D, breaking non-commercial barriers under compliance, and enhancing competitiveness in open competition, thereby mastering strategic initiatives in the new round of technological revolution and industrial transformation.
Research Notes
From the perspective of the grand historical coordinates of human civilization evolution, the profound significance of AI may far exceed our current cognitive boundaries. It is not merely a technological iteration or industrial upgrade but a systemic reshaping of human cognitive methods and social organizational forms. As machines begin to learn, reason, and create, we are faced with not just a technological competition but a re-examination of humanity’s own position. Throughout our research journey, from the computing arteries woven by “East Data, West Computing” to the data vitality activated by “regulatory sandboxes,” from the ecological wave triggered by open-source large models among global developers to humanoid robots working alongside humans on production lines, we have deeply felt a vigorous upward force. This indicates that in this wave of technological innovation, we are no longer latecomers, followers, or catch-up players, but competitors on the same stage, and in some fields, we are even leaders. As global AI development and governance remain in a state of chaos, our path choices are opening up a new possibility—replacing closure with openness, replacing monopoly with collaboration, and replacing control with empowerment in the new paradigm of intelligent civilization. Years from now, when people look back at the starting point of this AI transformation, they may evaluate: at the historical juncture of the new era, China did not hesitate or miss the opportunity but stepped forward decisively.