Cryptopolitan
2025-09-22 20:30:14

Over 200 leaders and Nobel Prize winners urge binding international limits on dangerous AI uses by 2026

The campaign to push governments to agree on binding international limits to curtail the abuse of AI technology has been escalated to the UN level, as more than 200 leading politicians, scientists, and thought leaders, including 10 Nobel Prize winners, have issued a warning about the risks of the technology. The statement, released Monday at the opening of the United Nations General Assembly’s High-Level Week, is being called the Global Call for AI Red Lines. It argues that AI’s “current trajectory presents unprecedented dangers” and demands that countries work toward an international agreement on clear, verifiable restrictions by the end of 2026.

Nobel Prize winners lead plea at the U.N.

The plea was revealed by Nobel Peace Prize laureate and journalist Maria Ressa, who used her opening address to urge governments to “prevent universally unacceptable risks” and define what AI should never be allowed to do.

Signatories of the statement include Nobel Prize recipients in chemistry, economics, peace, and physics, alongside celebrated authors such as Stephen Fry and Yuval Noah Harari. Former Irish president Mary Robinson and former Colombian president Juan Manuel Santos, who is also a Nobel Peace Prize winner, lent their names as well. Geoffrey Hinton and Yoshua Bengio, popularly known as the “godfathers of AI” and winners of the Turing Award, widely considered the Nobel Prize of computer science, also added their signatures.

“This is a turning point,” said Harari. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”

Past efforts to raise the alarm about AI have often focused on voluntary commitments by companies and governments. In March 2023, more than 1,000 technology leaders, including Elon Musk, called for a pause on developing powerful AI systems.
A few months later, AI executives such as OpenAI’s Sam Altman and Google DeepMind’s Demis Hassabis signed a brief statement equating the existential risks of AI to those of nuclear war and pandemics.

AI stokes fears of existential and societal risks

Just last week, AI was implicated in cases ranging from a teenager’s suicide to reports of its use in manipulating public debate. The signatories of the call argue that these immediate risks may soon be eclipsed by larger threats. Commentators have warned that advanced AI systems could lead to mass unemployment, engineered pandemics, or systematic human-rights violations if left unchecked.

Proposed red lines include banning lethal autonomous weapons, prohibiting self-replicating AI systems, and ensuring AI is never deployed in nuclear warfare.

“It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly,” said Ahmet Üzümcü, the former director general of the Organization for the Prohibition of Chemical Weapons, which won the 2013 Nobel Peace Prize under his leadership.

More than 60 civil society organizations have signed the letter, including the UK-based think tank Demos and the Beijing Institute of AI Safety and Governance. The effort is being coordinated by three nonprofits: the Center for Human-Compatible AI at the University of California, Berkeley; The Future Society; and the French Center for AI Safety.

Despite recent safety pledges from companies such as OpenAI and Anthropic, which have agreed to government testing of models before release, research suggests that firms are fulfilling only about half of their commitments. “We cannot afford to wait,” Ressa said. “We must act before AI advances beyond our ability to control it.”
