The Six AI Giants on Stage: AGI Is No Longer a "Future" Thing
Positive Commentary: AGI's Gradual Breakthrough from Theory to Reality, and the New Era of Intelligence Opened by Technological Collaboration and Application Deployment
The 2025 round-table dialogue among the six AI giants in London can be read as an authoritative annotation on "the current state and future of AGI". Six top figures spanning algorithms, data, computing power, and engineering drew on their respective technical experience and insights to outline a clear picture of AGI's transformation from a "future concept" into a "present reality". The most positive signal from this dialogue is that AGI's development does not rest on a "singular explosion" of any one technology; it is the inevitable result of forty years of technological accumulation and coordinated breakthroughs across multiple fields. Moreover, it has begun to penetrate practical applications, laying a solid foundation for the full arrival of the intelligent era.
First, AGI's "gradual breakthrough" confirms the objective laws of technological development. From Hinton's 1984 "proto-experiment" of predicting the next word with a small model, to Bengio's entry into neural-network research after reading Hinton's early papers, to LeCun's insistence on "letting machines learn in a self-organized way", the seeds of the technology were nurtured continuously over forty years. Fei-Fei Li's ImageNet relieved the "data famine", Dally's GPU architecture work broke the "computing-power bottleneck", and Jensen Huang carried AI from the laboratory into industry. The connection of these key nodes shows that AGI did not "emerge out of nowhere"; it is the product of the co-evolution of four pillars: algorithms, data, computing power, and engineering. This path of "accumulating strength over a long period before breaking through" not only matches the long-cycle character of technological development but also offers replicable experience for later breakthroughs: any disruptive innovation requires long-term investment in basic research and cross-field collaboration.
Second, AGI's "real existence" has been initially verified in application scenarios. Jensen Huang's concept of the "AI factory" (AI shifting from a "tool for answering questions" to a "production system for continuous intelligent output") is precisely an epitome of AGI's current deployment. Scenarios such as "AI writing code, diagnosing diseases, and doing finance" mentioned in the news have moved beyond the "dialogue function" of early language models and begun to participate deeply in real work processes. Hinton's prediction that "machines will defeat all humans in debate within 20 years" essentially reflects confidence in AI's logical reasoning and knowledge-integration abilities; Fei-Fei Li's observation that "machines have surpassed humans in some fields" (such as recognizing 22,000 object categories and translating 100 languages) demonstrates, with concrete capabilities, AGI's "partial realization". These advances not only turn AGI from a "theoretical concept" into a "perceptible presence" but also build industry trust in intelligent technologies. When enterprises begin using AI to complete core business processes, AGI's value is truly woven into the economic system.
Finally, the formation of technological consensus accelerates AGI's evolution. Although the six experts differ on AGI's "degree of completion" and "timeline", their consensus on the "paradigm shift" (from language ability to action ability, and from supervised to self-supervised learning) points the way for the industry. LeCun emphasizes "learning actively from the environment like a baby", and Fei-Fei Li calls for "valuing spatial intelligence and hands-on ability". These directions mark a shift in AI research from "how to make machines more human-like" to "how to make machines more useful". Such consensus will keep resources from being wasted on re-verifying old paths and instead concentrate them on key bottlenecks (such as multi-modal interaction and embodied intelligence), thereby accelerating AGI's full deployment.
Negative Commentary: Vague Definitions, Technological Bottlenecks, and Ethical Risks Mean AGI Development Must Still Guard Against Hype and Hidden Worries
Although the six experts conveyed the positive signal that AGI "is happening", the differences and challenges exposed in the dialogue also deserve vigilance. The vague definition of AGI, unsolved key technological bottlenecks, and latent ethical and safety risks may become "hidden reefs" that constrain its healthy development. If these issues are ignored, the industry may fall into "concept hype" or "blind expansion", which would actually delay AGI's true maturation.
First, the vagueness of the AGI definition creates confused expectations. In the dialogue, LeCun stated bluntly that "current large models do not equal true intelligence; there is not even a machine as smart as a cat", while Jensen Huang insisted that "we are using AGI-level intelligence for practical work today", and Bengio even predicted that "AI will reach the level of an engineer within five years". This difference in understanding "AGI" is, at bottom, a divergence over the definition of "intelligence". If the industry cannot form a basic consensus on AGI (for instance, "whether human-like consciousness is required" or "whether it can set goals autonomously"), the result may be scattered R&D directions, blind capital investment, and even a proliferation of "pseudo-AGI" products. For example, systems strong in only a single domain may be packaged as "AGI", misleading the market's judgment of the technology's maturity.
Second, key technological bottlenecks remain unbroken, and AGI's "comprehensive intelligence" still has wide gaps. Fei-Fei Li pointed out that "language models perform poorly on spatial-judgment tasks", and LeCun stressed the "lack of the ability to learn actively from the environment like a baby". Both reveal the "uneven development" of current AI: it has made striking progress in language, computation, and related fields, yet still lags far behind humans in spatial reasoning, embodied interaction, and common-sense understanding. For example, even though large models can converse fluently, they may not grasp the common-sense fact that "a cup placed on the edge of a table is likely to fall"; they can generate code, yet may struggle to operate a robotic arm through a task in a complex physical environment. This "capability gap" means AGI's "generality" is currently realized only in certain cognitive domains, still far from "handling many types of tasks the way humans do". If R&D relies excessively on "parameter stacking" while neglecting the underlying mechanisms of intelligence (such as causal reasoning and autonomous learning), AGI's development may stall in mere "incremental improvement".
Third, warnings about ethical and safety risks have not been taken seriously enough. Bengio's question of "what happens if the goals of machines are inconsistent with those of humans", which led him to turn toward AI-safety research, is an especially crucial warning in this dialogue. AGI applications have already penetrated sensitive fields such as healthcare and finance. If AGI's decision-making logic is opaque (such as "black-box" reasoning in medical diagnosis) and goal-alignment mechanisms are lacking (such as an automated trading system ignoring compliance in pursuit of profit), systemic risks may follow. Even more worrying is Bengio's mention of "AI designing the next generation of AI systems", which could lead to runaway technological evolution: once AI can iterate on itself, whether humans can effectively constrain its direction remains an open problem. If the industry neglects building an ethical framework while chasing deployment, AGI could turn from a "tool" into a "threat".
Advice for Entrepreneurs: Seize the Opportunities of Technological Collaboration, Focus on Scenario Implementation, and Balance Innovation and Safety
This dialogue among the six giants offers multi-dimensional inspiration for entrepreneurs. In light of AGI's current development stage and challenges, the following suggestions may help entrepreneurs find their positions and avoid risks in the intelligent wave:
- Value technological accumulation rather than blindly chasing the "AGI concept": The six experts' technical histories show that AGI's breakthrough depends on long-term collaboration among algorithms, data, computing power, and engineering. Entrepreneurs should avoid the trap of "doing things for the sake of AGI". They should focus on their own technical strengths (such as data accumulation in vertical fields and algorithm optimization for specific scenarios) and cooperate actively across fields (with computing-power providers and data-annotation platforms, for example) to form a "small but refined" technical closed loop. In medical AI, for instance, an entrepreneur can cultivate a vertical dataset of "medical imaging + clinical data" combined with lightweight algorithm optimization, rather than pursuing "general diagnostic capability".
- Pay attention to the technological shift from “language to action” and find breakthroughs in scenarios: The dialogue repeatedly emphasized that the next stage of AI is “from talking to doing”, that is, from information processing to actual task execution. Entrepreneurs can focus on exploring scenarios related to “embodied intelligence” and “spatial intelligence”, such as intelligent robots (warehouse handling, home services) and “AI + physical operation” in industrial scenarios (equipment inspection, fault repair). For example, the “AI + robotic arm” solution for the manufacturing industry can combine visual recognition and motion control algorithms to solve the problem of “lack of flexibility” in traditional automated equipment.
- Build an ethical framework of "explainability + goal alignment" to reduce application risks: As AGI penetrates sensitive fields such as healthcare and finance, the risks of "black-box decision-making" will be magnified. Entrepreneurs need to build ethical considerations into the earliest stage of technical design: for example, make AI's decision logic traceable through explainable methods (such as locally interpretable surrogate models), and write "human value constraints" explicitly into training objectives (such as medical AI prioritizing patient safety over efficiency). This not only averts legal and public-opinion risks but also builds user trust in the product.
- Be vigilant against "technological hype" and focus on real needs: Jensen Huang noted that "AI factories need to serve industries worth trillions of dollars", but not all industries need "AGI-level" intelligence. Entrepreneurs need to return to users' real needs and judge whether bringing in AI solves actual pain points (improving efficiency, reducing costs), rather than deploying technology for its own sake. In education, for example, AI's core value may be "personalized learning-path recommendation" rather than "replacing teachers"; in customer service, the focus should be "accurately understanding user intent" rather than "simulating human emotion".
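To make the "locally interpretable model" idea in the explainability point concrete: one common pattern (in the spirit of LIME) is to probe an opaque model around a single prediction and fit a weighted linear surrogate whose coefficients approximate each feature's local influence. The sketch below is purely illustrative, using scikit-learn and synthetic data; the "black-box" model, feature setup, and `explain_locally` helper are all hypothetical stand-ins, not any specific product's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical opaque "risk model": trained on synthetic data where
# feature 0 dominates the outcome. Stands in for any black-box model.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=500, scale=0.5):
    """Fit a proximity-weighted linear surrogate around instance x
    (a LIME-style sketch, not the LIME library itself)."""
    # Perturb the instance to probe the model's local behaviour.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    preds = model.predict(Z)
    # Weight perturbed points by closeness to x (Gaussian kernel).
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local influence on this prediction

x0 = np.array([0.2, -0.1, 0.3])
coefs = explain_locally(black_box, x0)
print(coefs)  # feature 0 should carry the largest local weight
```

The surrogate's coefficients give a human-auditable explanation of one decision, which is the kind of traceability a "prioritize patient safety" review process could act on; in production one would use a maintained library and validate the surrogate's fidelity rather than trust this minimal version.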
Conclusion: The fact that AGI “is happening” is an inevitable result of technological accumulation and the starting point of industrial transformation. Entrepreneurs need to embrace the intelligent wave with a “pragmatic” attitude. They should not only see the opportunities brought by technological collaboration, but also be vigilant against the vague definition, technological bottlenecks, and ethical risks. They should not only focus on the “small goals” of scenario implementation, but also pay attention to the “big direction” of intelligent evolution. Only in this way can they truly seize their own opportunities in the evolution of AGI.

