Anthropic on Thursday said it is teaming up with data analytics firm Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to Anthropic’s Claude family of AI models.

The news comes as a growing number of AI vendors look to ink deals with U.S. defense customers for strategic and fiscal reasons. Meta recently revealed that it is making its Llama models available to defense partners, while OpenAI is seeking to establish a closer relationship with the U.S. Defense Department.

Anthropic’s head of sales, Kate Earle Jensen, said the company’s collaboration with Palantir and AWS will “operationalize the use of Claude” within Palantir’s platform by leveraging AWS hosting. Claude became available on Palantir’s platform earlier this month and can now be used in Palantir’s defense-accredited environment, Palantir Impact Level 6 (IL6).

The Defense Department’s IL6 is reserved for systems containing data that’s deemed critical to national security and requiring “maximum protection” against unauthorized access and tampering. Information in IL6 systems can be up to “secret” level — one step below top secret.

“We’re proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations,” Jensen said. “Access to Claude within Palantir on AWS will equip U.S. defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments.”

This summer, Anthropic brought select Claude models to AWS’ GovCloud, signaling its ambition to expand its public-sector client base. GovCloud is AWS’ service designed for U.S. government cloud workloads.

Anthropic has positioned itself as a more safety-conscious vendor than OpenAI. But the company’s terms of service allow its products to be used for tasks like “legally authorized foreign intelligence analysis,” “identifying covert influence or sabotage campaigns,” and “providing warning in advance of potential military activities.”

“[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.

Government agencies are certainly interested in AI. A March 2024 analysis by the Brookings Institution found a 1,200% jump in AI-related government contracts. Still, certain arms of the government, like the U.S. military, have been slow to adopt the technology and remain skeptical of its ROI.

Anthropic, which recently expanded to Europe, is said to be in talks to raise a new round of funding at a valuation of up to $40 billion. The company has raised about $7.6 billion to date, including forward commitments. Amazon is by far its largest investor.

Comments
Military applications of AI are not an idle concern. AI systems are already being used to increase military capacity by generating and analyzing targets faster than humans can (and in this case, seemingly without much oversight). Palantir's own technology likely also allows police organizations to defer responsibility for racist policing to AI systems.

Sure, for the most part, Claude will probably just be used for routine requests, but Anthropic has no way of guaranteeing this. Policy alone cannot enforce it, especially when the models run on Amazon hardware that Anthropic does not control and cannot inspect. Ranking agencies by "cooperativeness" should likewise be regarded as lip service until there is a proven mechanism for assessing it.

So Anthropic is revealing that, to the company, AI safety doesn't mean trying to prevent AI from doing harm, only trying to prevent it from doing unintended harm. This is a significant moment, and I fear what it portends for the whole industry.
