With technology, chasing access over understanding rarely bodes well for the individuals directly impacted by its use. In the defence industry, this choice could be fatal.
Since the 2010s, Big Tech firms have amassed billions in government contracts across the globe. As public focus has shifted to AI and its potential to transform almost every sector, more divisions and task forces have begun to welcome these companies inside their digital walls. And for those at this frontier, when you’re being courted by Berlin and Paris, Washington D.C., or Westminster, there are only two options: yes, we can do this right now, or yes, we’ll build that for you. Either way, you’re promised a few more millions for the trouble.
On the evening of Thursday 26th February, Anthropic, developer of the Claude large language model, took a third way. They claimed they could not in “good conscience” fulfil the US Department of War’s request to “remove safeguards” preventing its tools’ incorporation into domestic mass surveillance and autonomous weapons systems. In response, the administration flagged Anthropic as a “supply chain risk” – a label never previously assigned to a domestic organisation – and instructed all government contractors and partners to stop using Anthropic products. The directive did not, however, prevent the Department of War from using Claude’s capabilities to plan and execute the joint Israeli-US air strikes on Iran later that night.
Mere hours after the ban, OpenAI announced an agreement with the Department of War to take over the vacated contracts. They too stated that their tools, such as ChatGPT, would not be used to support domestic mass surveillance, autonomous weapons, or high-stakes decision-making. The breaking point between the Pentagon and Anthropic, then, remains unclear.
OpenAI’s press release from Saturday reads as an appeal to rule-consequentialists, its logic echoing the administration’s sentiments and standing in direct contrast to Anthropic’s statement on Thursday, which pinned the decision on the company’s inherent morals and ethics. Given the current administration’s asserted right to seize AI contractors’ tools for “any lawful use”, and its laissez-faire approach to operating within the constraints of international law, it tracks that Anthropic, flag-bearers amongst frontier companies for the AI safety cause, felt their current models were not ready for such sensitive requests. Social media reactions to OpenAI’s announcement were overwhelmingly negative.
Attempting to ameliorate the backlash, CEO Sam Altman posted on X on Tuesday about an OpenAI internal memo describing an upcoming amendment to the deal to “protect the civil liberties of Americans” and a refusal of any “unconstitutional” orders. The memo also described an intention to explore the safety trade-offs “slowly”, a topic the company had been quiet on since its restructure into a for-profit (and the accompanying mission statement update).
Anthropic was founded by luminaries who left OpenAI over disagreements with its direction, such as the push towards commercialisation. Their choice to walk away from the Pentagon, then, is consistent with a hitherto immutable ethos; it is exactly what one would expect of them. The next natural question is whether any leading AI developer should be entertaining the current administration’s requests. xAI, Google, and Meta are waiting in the wings, and have all previously agreed deals with US government departments, but vague federal requests offer them no insurance or impunity in the event of a serious misalignment. One could further ask whether the existing vetoes are even harsh enough; OpenAI and Anthropic made no mention of their technologies’ future involvement in foreign surveillance, attacks on non-combatants, or safeguards against the use of disproportionate force.
Unfortunately, last week’s very public spat demonstrates exactly why, if the contract offer arrives, the tech frontrunners will at least give it serious thought. The AI race is still constrained by the rooms your company officers have a pass to, and the computers your models are chosen to run on. Time will tell what effect this attempted “corporate murder” has on Anthropic’s business and research. Governments need to stay ahead of the game, that much is true. Threatening private companies into indulging the state’s more pernicious development interests, however, is probably not the way to do so. As defence matters rise up the global agenda, the push to integrate AI, so frequently described as a “black box”, into national security is a chilling and potentially dangerous ask. If it’s access they’re after, we can only hope they understand what they really need.