An illustration depicting artificial intelligence and broadband / antennas. (N. Hanacek/NIST)

BLETCHLEY, Britain - As Adolf Hitler rained terror on Europe, the British government recruited its best and brightest to this secret compound northwest of London to break Nazi codes. The Bletchley Park efforts helped turn the tide of war and lay the groundwork for the modern computer.

But as countries from six continents concluded a landmark summit on the risks of artificial intelligence at the same historic site as the British code breakers Thursday, they faced a vexing modern-day reality: Governments are no longer in control of strategic innovation, a fact that has them scrambling to contain one of the most powerful technologies the world has ever known.

Already, AI is being deployed on battlefields and campaign trails, possessing the capacity to alter the course of democracies, undermine or prop up autocracies, and help determine the outcomes of wars. Yet the technology is being developed under a veil of corporate secrecy, largely outside the sight of government regulators, with the scope and capabilities of any given model jealously guarded as proprietary information.

The tech companies driving this innovation are calling for limits - but on their own terms. OpenAI CEO Sam Altman has suggested that the government needs a new regulator to address future advanced AI models, but the company continues to plow forward, releasing increasingly advanced AI systems. Tesla CEO Elon Musk signed onto a letter calling for a pause on AI development but is still pushing ahead with his own AI company, xAI.

“They are daring governments to take away the keys, and it’s quite difficult because governments have basically let tech companies do whatever they wanted for decades,” said Stuart Russell, a noted professor of computer science at the University of California at Berkeley. “But my sense is that the public has had enough.”

The lack of government controls on AI has largely left an industry built on profit to self-police the risks and moral implications of a technology capable of next-level disinformation, ruining reputations and careers, even taking human life.

That may be changing. This week in Britain, the European Union and 27 countries including the United States and China agreed to a landmark declaration to limit the risks and harness the benefits of artificial intelligence. The push for global governance took a step forward, with unprecedented pledges of international cooperation by allies and adversaries.

On Thursday, top tech leaders including Altman, DeepMind founder Demis Hassabis and Microsoft President Brad Smith sat around a circular table with Vice President Kamala Harris, British Prime Minister Rishi Sunak and other global leaders. The executives agreed to allow experts from Britain’s new AI Safety Institute to test models for risks before their release to the public. Sunak hailed this as “the landmark achievement of the summit,” as Britain agreed to partnerships with the newly announced U.S. Artificial Intelligence Safety Institute and with Singapore to collaborate on testing.

But there are limited details about how the testing would work - or how it differs from the White House’s mandate - and the agreements are largely voluntary.

Observers say the global effort - with follow-up summits planned in South Korea and France in six months and one year, respectively - remains in its relative infancy and is being far outpaced by the speed of development of wildly powerful AI tools.

Musk, who attended the two-day event, mocked government leaders by sharing a cartoon on social media that depicted them saying that AI was a threat to humankind and that they couldn’t wait to develop it first.

Companies now control the lion’s share of funding for tech and science research and development in the United States. U.S. businesses accounted for 73 percent of spending on such research in 2020, according to data compiled by the National Center for Science and Engineering Statistics. That’s a dramatic reversal from 1964, when government funding accounted for 67 percent of this spending.

That paradigm shift has created a geopolitical vacuum, with new institutions urgently needed to enable governments to balance the opportunities presented by AI with national security concerns, said Dario Gil, IBM’s senior vice president and director of research.

“That is being invented,” Gil said. “And if it looks a little bit chaotic, it’s because it is.”

He said this week’s Bletchley declaration, as well as recent announcements of two government AI Safety Institutes, one in Britain and one in the United States, were steps toward that goal.

However, the U.S. AI Safety Institute is being set up inside the National Institute of Standards and Technology, a federal laboratory that is notoriously underfunded and understaffed. That could present a key impediment to reining in the richest companies in the world, which are racing each other to ship out the most advanced AI models.

The NIST teams working on emerging technology and responsible artificial intelligence only have about 20 employees, and the agency’s funding challenges are so significant that its labs are deteriorating. Equipment has been damaged by plumbing issues and leaking roofs, delaying projects and incurring new costs, according to a report from the National Academies of Sciences, Engineering, and Medicine.

“NIST facilities are not world class and are therefore a growing impediment against attracting and retaining staff in a highly competitive STEM environment,” the 2023 report said.

The laboratory faces new demands to address AI, cybersecurity, quantum computing and a host of other emerging technologies, but Congress has not expanded its budget to keep pace with the evolving mandate.

“NIST is a billion dollar agency but is expected to work like a ten billion dollar agency,” said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists. “Their buildings are falling apart, staff are overworked, some are leading multiple initiatives all at once and that’s bad for them, that’s bad for the success of those initiatives.”

Department of Commerce spokesperson Charlie Andrews said NIST has achieved “remarkable results within its budget.” “To build on that progress it is paramount that, as President Biden has requested, Congress appropriates the funds necessary to keep pace with this rapidly evolving technology that presents both substantial opportunities and serious risks if used irresponsibly,” he said.

Governments and regions are taking a piecemeal approach, with the E.U. and China moving the fastest toward heavier-handed regulation. Seeking to cultivate the sector even as they warn of AI’s grave risks, the British have staked out the lightest touch on rules, calling their strategy a “pro-innovation” approach. The United States - home to the largest and most sophisticated AI developers - is somewhere in the middle, placing new safety obligations on developers of the most advanced AI systems while stopping short of rules that would stymie development and growth.

At the same time, American lawmakers are considering pouring billions of dollars into AI development amid concerns of competition with China. Senate Majority Leader Charles E. Schumer (D-N.Y.), who is leading efforts in Congress to develop AI legislation, said legislators are discussing the need for a minimum of $32 billion in funding.

For now, the United States is siding with cautious action. Tech companies, said Paul Scharre, executive vice president of the Center for a New American Security, are not necessarily loved in Washington by Republicans or Democrats. And President Biden’s recent executive order marked a notable shift from more laissez-faire policies on tech companies in the past.

“I’ve heard some people make the argument that the government just needs to sit back and trust these companies and that the government doesn’t have the technical experience to regulate this technology,” Scharre said. “I think that’s a recipe for disaster. These companies aren’t accountable to the general public. Governments are.”

China’s inclusion in the Bletchley declaration disappointed some of the summit’s attendees, including Michael Kratsios, the former Trump-appointed chief technology officer of the United States. Kratsios said he attended a Group of 20 summit meeting in 2019 where officials from China agreed to AI principles, including a commitment that “AI actors should respect human rights and democratic values throughout the AI system life cycle.” Yet China has rolled out new rules in recent months to keep AI bound by “core socialist values” and in compliance with the country’s vast internet censorship regime.

“Just like with almost anything else when it comes to international agreements, they proceeded to flagrantly violate [the principles],” said Kratsios, who is now the managing director of Scale AI.

Meanwhile, civil society advocates who were sidelined from the main event at Bletchley Park say governments are moving too slowly - perhaps dangerously so. Beeban Kidron, a British baroness who has advocated for children’s safety online, warned that regulators risk repeating the mistakes they have made in responding to tech companies in recent decades, an approach that “has privatized the wealth of technology and outsourced the cost to society.”

“It is tech exceptionalism that poses an existential threat to humanity, not the technology itself,” Kidron said in a speech Thursday at a competing event in London.
