Harry and Meghan Align With AI Pioneers in Demanding Prohibition on Advanced AI

Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to push for a complete ban on creating artificial superintelligence. Harry and Meghan are among the signatories of an influential declaration that demands "a ban on the development of artificial superintelligence". Superintelligent AI refers to artificial intelligence that could exceed human intelligence in all cognitive tasks, though such systems remain theoretical.

Key Demands in the Declaration

The declaration insists that the prohibition should remain in place until there is "broad scientific consensus" that superintelligence can be built "with proper safeguards" and until "substantial public support" has been secured.

Prominent signatories include an AI pioneer and Nobel Prize recipient who is a leading AI researcher, along with his colleague, another pioneer of modern AI; Apple co-founder Steve Wozniak; a UK entrepreneur and Virgin founder; Susan Rice; a former Irish president and international leader; and British author Stephen Fry. Additional Nobel laureates who signed include a peace advocate, a physics Nobelist, an astrophysicist, and Daron Acemoğlu.

Behind the Movement

The statement, aimed at national leaders, technology companies and lawmakers, was coordinated by the FLI organization, a US-based AI safety group that previously called for a pause on the development of powerful AI systems shortly after the emergence of ChatGPT made artificial intelligence a topic of worldwide public discussion.

Industry Perspectives

In recent months, Mark Zuckerberg, chief executive of the social media giant, one of the leading tech companies in the US, said that progress toward superintelligent AI was "approaching reality".
Nevertheless, some experts have argued that talk of ASI reflects market competition among tech companies that are investing enormous sums in artificial intelligence this year alone, rather than the sector being close to any genuine technical breakthrough.

Potential Risks

Nonetheless, FLI warns that the prospect of ASI being developed "in the coming decade" carries numerous threats, ranging from the elimination of all human jobs and the erosion of personal freedoms to national security risks and even human extinction. Existential fears about artificial intelligence centre on the possibility that a system could evade human control and safety guidelines and take actions contrary to human interests.

Citizen Sentiment

FLI published a US national poll showing that about 75% of US citizens want strong oversight of sophisticated artificial intelligence, with six in 10 believing that artificial superintelligence should not be developed until it is proven safe or controllable. The survey of 2,000 US adults found that only 5% supported the current situation of rapid, unregulated advancement.

Industry Objectives

The top artificial intelligence firms in the US, including the ChatGPT developer, a major AI lab, and Google, have made the creation of human-level AI, the theoretical state where artificial intelligence matches human cognitive capability across many intellectual activities, a stated objective of their work. While this is a step below superintelligence, some specialists caution that it too could pose an extinction threat, for example by being able to enhance its own capabilities until it reaches superintelligence, while also carrying a profound risk for the modern labour market.