A dialogue on the development and governance of AI technology is held at the 12th Beijing Xiangshan Forum in Beijing, Sept. 17, 2025. [Photo courtesy of the 12th Beijing Xiangshan Forum]
The 12th Beijing Xiangshan Forum convened in Beijing on Wednesday, drawing around 1,800 participants from over 100 countries to discuss international security issues. Among the many panel events held from Sept. 17-19, one session focused on the development and governance of artificial intelligence (AI) technology, where experts explored how to balance innovation with risk management while advancing a sound AI governance system.
Chen Zhimin, vice president of Fudan University and executive director of the Center for Global AI Innovation Governance, said AI is a disruptive technology with both opportunities and risks. "AI governance concerns the shared destiny of humankind," he said, warning of potential misuse such as AI-powered autonomous weapons in conflict zones or deepfake-enabled crimes.
Chen highlighted three challenges facing global AI governance: inherent risks embedded in AI models and data, the widening global gap in AI capabilities, and fragmented international AI governance. While many organizations and platforms are working to establish regulatory frameworks, he said, these efforts remain scattered and poorly coordinated.
Linking these challenges with global responsibilities, Chen highlighted China's efforts to promote AI for good, citing its Global AI Governance Initiative launched in 2023, and an action plan for global AI governance unveiled this July at the 2025 World Artificial Intelligence Conference. He added that China's promotion of open-source innovation, such as the DeepSeek R1 model, lowers barriers to access and helps bridge the intelligence gap between developed and developing nations.
From a military application perspective, Song Haitao, dean of the Shanghai Artificial Intelligence Research Institute, described how AI is reshaping modern warfare.
"Since 2000, most major countries have entered the information age, reshaping battlefield dynamics toward integrated joint operations, precision strikes and multidimensional combat," he said. Now in 2025, with the rise of intelligent technologies, unmanned systems and AI-driven coordination are transforming warfare across domains.
"We're seeing improvements not only in operational efficiency but also in troop safety and strike precision," Song explained. However, he cautioned that AI's integration also brings risks, such as algorithmic biases and command failures.
Calling for global cooperation, Song stressed the need for consensus and shared standards. "Military AI governance must become a foundational consensus for maintaining international order and managing security," he said, adding that China advocates for comprehensive global collaboration under the U.N. framework.
A similar view was voiced by retired Vice Admiral Shekhar Kumar Sinha, chairman of the Trustee Board of the India Foundation and former chief of India's Integrated Defence Staff, who warned that "regulation has to run the race equally rapidly with the technology."
Outlining guiding principles, Sinha noted that AI should enhance human capabilities rather than replace them, with governments ensuring workers are upskilled for the jobs of the future. He emphasized the importance of inclusivity, urging that AI solutions address the needs of rural and marginalized populations to help close the digital divide. Ethical governance, transparency and accountability, he said, must also remain central as algorithms increasingly influence sensitive areas such as justice, health care and finance.
Sinha further called for stronger international cooperation to create interoperable standards, build cross-border trust and prevent misuse of emerging technologies. "Governments should avoid over-regulating in ways that stifle innovation, but equally they must not allow unchecked development that could harm society," he cautioned.
Professor Lampros Stergioulas, UNESCO chair in AI and data science for society at The Hague University of Applied Sciences, highlighted the ethical challenges posed by rapid advances in AI and the need for stronger global governance frameworks.
He underlined that while AI has transformed health care, education, business and productivity, it also carries risks linked to bias, privacy, data protection, human rights, environmental impacts and social inequality. Ethical ground rules, he said, are essential to prevent discrimination and protect marginalized groups.
Citing the recent UNESCO report, "Steering AI and Advanced ICTs for Knowledge Societies," he stressed that policy initiatives for AI governance need to be reinforced. He welcomed China's engagement in U.N.-level initiatives and urged broader participation by other countries.
Moving forward, Stergioulas emphasized priorities such as building capacity and fostering openness as well as shared standards. "Key priorities are equity, equitable benefits and human-centric, societal concerns," he said, adding that global collaboration under the U.N. framework must be ramped up to shape the ethical and governance landscape of AI.