Artificial intelligence companies that were previously reluctant to allow military use of their technology are shifting policies and striking deals to offer it to spy agencies and the Pentagon.
On Thursday, Anthropic, a leading AI start-up that has raised billions of dollars in funding and competes with ChatGPT developer OpenAI, announced it would sell its AI to U.S. military and intelligence customers through a deal with Amazon’s cloud business and government software maker Palantir.
Earlier this week, Meta changed its policies to allow military use of Llama, its free, open-source AI technology that competes with offerings from OpenAI and Anthropic. And OpenAI has a deal to sell ChatGPT to the Air Force, after changing its policies earlier this year to allow some military uses of its software.
The deals and policy changes add to a broad shift that has seen tech companies work more closely with the Pentagon, despite protests from some employees over contributing to military applications.
Anthropic changed its policies in June to allow some intelligence-agency uses of its technology but still bans customers from using it for weapons or domestic surveillance. OpenAI also prohibits its technology from being used to develop weapons. Anthropic and OpenAI spokespeople did not comment beyond referring to the policies.
Arms control advocates have long called for an international ban on using AI in weapons. The U.S. military has a policy that humans must maintain meaningful control over weapons technology but has resisted an outright ban, saying such a ban would allow potential enemies to gain a technological edge.
Tech leaders and politicians from both parties have increasingly argued that U.S. tech companies must ramp up the development of military tech to maintain the nation’s military and technological competitiveness with China.
In an October blog post, Anthropic CEO Dario Amodei argued that democratic nations should aim to develop the best AI technology to give them a military and commercial edge over authoritarian countries, which he said would probably use AI to abuse human rights.
“If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies,” Amodei wrote in the blog post.
Anthropic’s backers include Google and Amazon, which has invested $4 billion in the start-up. Amazon founder Jeff Bezos owns The Post.
The U.S. military uses AI for a broad range of purposes, from predicting when to replace parts on aircraft to recognizing potential targets on the battlefield. Palantir, which Anthropic is partnering with to get its technology to government customers, sells AI technology that can automatically detect potential targets from satellite and aerial imagery.
The war in Ukraine has triggered new interest in adapting cheap, commercially available technology such as small drones and satellite internet dishes to military use. A wave of Silicon Valley start-ups has sprung up to try to disrupt the U.S. defense industry and sell new tools to the military.
Military leaders in the United States and around the world expect future battlefield technology to be increasingly independent of human oversight. Though humans still generally make the final decisions about choosing targets and firing weapons, arms control advocates and AI researchers worry that the growing use of AI could lead to poor decision-making or lethal errors and violate international laws.
Google, Microsoft and Amazon compete fiercely for military cloud computing contracts, but some tech employees have pushed back on such work.
In 2018, Google said it would not renew a Pentagon contract to analyze drone imagery after employees protested the work. The company has since continued to expand its military contracts. This year, workers at Amazon and Google protested the companies’ Israeli government contracts, which they said could assist the country’s military forces.
OpenAI and Anthropic, part of a newer generation of AI developers, have embraced military and intelligence work relatively early in their corporate development. Some other companies in the current AI boom, such as data provider Scale AI, have made willingness to work with the military a major focus of their business.