PERSPECTA

News from every angle

Results for "Claude AI"

64 stories found

Judge Mistakenly Identifies AI Chatbot as Human
Opinion · WSJ · 4d ago

An opinion piece discusses an incident where a judge reportedly confused the Claude AI chatbot with a human, raising questions about the interaction between artificial intelligence and legal proceedings.

Anthropic Confirms Claude AI Source Code Leak Due to Human Error, No Customer Data Compromised
Technology · BBC, Bloomberg, CNBC, TVN24, Business Insider, Times of India, Hindustan Times, Indian Express, Straits Times, Dawn · 9d ago · 10 sources

AI firm Anthropic has confirmed an accidental leak of its Claude AI assistant's source code, attributing the incident to human error and assuring that no sensitive customer data or passwords were compromised. Multiple outlets reported the leak, the latest in a string of incidents for the company.

Anthropic Refuses Pentagon Demand to Remove AI Safety Guardrails
Politics · Washington Post, The Guardian, France 24, Channel News Asia · 1mo ago · 4 sources

Anthropic has stated it cannot in good conscience comply with a Pentagon demand to remove safety precautions from its Claude AI model, despite threats to cancel a $200 million contract. Critics warn against the dangers of relaxing AI guardrails for military use.

Pentagon Deploys AI in Venezuela Raid
Politics · NYT, WSJ, The Guardian · 1mo ago · 3 sources

An exclusive report reveals the Pentagon utilized Anthropic’s Claude AI during a raid in Venezuela, marking a notable application of artificial intelligence in military operations.

Anthropic Withholds Claude Mythos AI Model, Partners with Tech Giants After Security Flaw Discovery
Technology · Reuters, The Economist, The Guardian, CNBC, TVN24, MarketWatch, Público, Business Insider, Yahoo, HotNews, Times of India +5 more · 2d ago · 16 sources

Anthropic's Claude Mythos AI model, designed to identify complex software security flaws, is being withheld from public access due to its potential misuse. The company is instead partnering with major tech firms like Apple and Google to leverage its capabilities.

Anthropic Blames Human Error for Claude AI Source Code Leak
Science · Times of India · 9d ago

AI company Anthropic has attributed the leak of its Claude Code source code to human error, specifically a packaging issue, rather than a security breach. The mistake unintentionally exposed thousands of files and has raised competitive concerns.

Sen. Warren Questions DOD on Anthropic Blacklist Amid OpenAI Military AI Clash
Technology · CNBC, HVG · 18d ago · 2 sources

Senator Elizabeth Warren has questioned the Department of Defense about a reported 'blacklist' of AI firm Anthropic, which appears to be retaliation for its refusal to provide its Claude AI model to the military without restrictions, in contrast to OpenAI's more cooperative stance. The dispute, which also involves the Trump administration, could significantly influence the future direction of artificial intelligence development and its applications.

AI Firm Anthropic Designated US National Security Risk, Vows Court Fight
Technology · Reuters, Bloomberg, WSJ, FT, The Guardian, CNBC, Der Standard, Telex, TVN24, Business Insider, Delfi +11 more · 1mo ago · 22 sources

AI firm Anthropic plans to fight in court after the Pentagon designated it as a US national security risk, marking the first time a US company has received such a label, typically reserved for foreign adversaries.

Israel Dominates Claude AI Usage Globally
Technology · ZeroHedge · 25d ago

New data from Anthropic indicates that Israel leads the world in usage of its Claude AI chatbot, while the United States tops the ranking among countries with larger user bases.

China sounds alarm over US military use of AI
World · Dawn · 1mo ago

BEIJING: China warned the United States on Wednesday that the excessive use of artificial intelligence in its military could plunge the world into a “Terminator-like” dystopian future. US President Donald Trump’s administration has sought the unconditional military use of AI startups' models. The Pentagon has confirmed Elon Musk’s Grok system is cleared for use in a classified setting, and blacklisted Anthropic after it refused to allow its Claude AI model to be used for mass surveillance and ...

Claude outages lay bare software developers' growing reliance on AI: 'I guess I'll write code like a caveman'
Technology · Business Insider · 1mo ago

Anthropic's Claude AI models suffered several outages this week. Some software developers said the disruptions laid bare how reliant they had become on AI tools. A Meta software engineer said these tools act like "a single button" to get tasks done. When Anthropic's Claude AI tools stopped working this week, some software developers faced the unthinkable: coding manually. On Monday, users began reporting outages across...

How Pentagon's 'Friday deadline' may have come hours early for Anthropic
Technology · NYT, FT, CNBC, Le Figaro, Business Insider, Yahoo, Times of India, Channel News Asia, 20 Minuten · 1mo ago · 9 sources

The US Department of War is scrutinizing AI firm Anthropic. Major defense contractors like Boeing and Lockheed Martin have been asked about their use of Anthropic's Claude AI. This comes as the Pentagon issued an ultimatum to Anthropic regarding its AI model's military applications. Anthropic's refusal to remove safeguards has led to this escalation.