PERSPECTA

News from every angle


Pentagon Continues Conflict with Anthropic Amid AI Ethics Concerns

The Pentagon is reportedly continuing its dispute with AI firm Anthropic, whose CEO has voiced concerns about military use of AI. The standoff follows an earlier clash with the Trump administration over the company's refusal to remove guardrails from its technology, which led to Anthropic being designated a 'supply chain risk'.

18 Mar, 02:09 — 18 Mar, 15:36

The Story


Source Diversity

High (71/100), 6 sources

Spectrum spread (3/5 buckets covered):
- Far Left (1): CNN
- Left (4): Público, NDTV, Al Jazeera, The Hindu
- Center (1): Dawn
- Right (0)
- Far Right (0)

Geographic diversity (5 regions): India 2, Portugal 1, Pakistan 1, Qatar 1, US 1

Sources

Showing 5 of 6 sources
Al Jazeera · Mostly Factual · 17h ago

Trump administration defends Anthropic blacklisting in US court

US defence secretary designated Anthropic a 'supply chain risk' after it refused to remove guardrails on its technology.

Read full article →
CNN · Mostly Factual · 19h ago

Former judges side with Anthropic and raise concerns about Pentagon’s use of supply chain risk label

Read full article →
The Hindu · Mostly Factual · 17h ago

Trump administration defends Anthropic blacklisting in US court

U.S. Defense Secretary Pete Hegseth designated Anthropic, the maker of popular AI assistant Claude, a national security supply chain risk on March 3.

Read full article →
NDTV · Mostly Factual · 5h ago

'Anthropic Poses Unacceptable Risk To Military Supply Chains': Pentagon

Artificial intelligence company Anthropic posed an "unacceptable risk" to military supply chains, the US government insisted Tuesday, as it defended against the tech firm's challenge to its designation as dangerous.

Read full article →
Dawn · Mostly Factual · 8h ago

Anthropic AI an ‘unacceptable risk’ to military, US govt says

Artificial intelligence company Anthropic posed an “unacceptable risk” to military supply chains, the US government insisted on Tuesday, as it defended against the tech firm’s challenge to its designation as dangerous. Anthropic’s Claude AI model has been in the spotlight in recent weeks, both for its alleged use in identifying targets for US bombing in Iran and for the company’s refusal to allow its systems to be used to power mass surveillance in the United States or lethal fully autonomous weapons system...

By AFP

Read full article →