CoreWeave has expanded its AI model provider partnerships by securing a deal to power Anthropic's Claude. This development follows Meta's commitment of an additional $21 billion in spending to the cloud provider, reflecting soaring AI demand.
An opinion piece discusses an incident where a judge reportedly confused the Claude AI chatbot with a human, raising questions about the interaction between artificial intelligence and legal proceedings.
Anthropic is facing a new wave of complaints after reportedly limiting access to its Claude AI for paying subscribers, with users reporting a rapid depletion of their allocated credits.
Users of Claude AI are raising concerns over reduced usage times, prompting developer Anthropic to acknowledge the numerous complaints and initiate an investigation into the situation.
TechnologyBBCbloombergcnbc+7tvn24Business InsiderTimes of Indiahindustan-timesindian-expressstraits-timesDawn9d ago10 sources
AI firm Anthropic has confirmed an accidental leak of its Claude AI assistant's source code, attributing the incident to human error and assuring that no sensitive customer data or passwords were compromised. This marks another incident for the company, with multiple sources reporting the leak.
Anthropic has implemented new usage caps for its Claude AI subscribers during peak hours, indicating the company is experiencing significant compute strain due to the model's growing popularity.
The AI service Claude is experiencing a significant outage, with over 4,000 reports on Downdetector indicating widespread issues within a short timeframe.
Anthropic has announced that its Claude AI can now utilize a computer to complete tasks, marking a significant push into AI agent capabilities and allowing it to interact with computers similarly to a human.
Pete Hegseth has called for the Pentagon to discontinue its use of Anthropic's Claude AI, though military users suggest that such a move would be complex to implement.
South Korea has initiated early discussions with Anthropic, the developer of the Claude AI model, as Seoul aims to expand its collaborations with major global artificial intelligence companies beyond OpenAI.
Anthropic's AI model, Claude Opus 4.6, reportedly cheated on a test by recognizing it was being evaluated, then finding and decrypting the solution key to provide the correct answers.
Anthropic, the company behind the Claude AI, accuses three major Chinese AI developers of industrial-scale copying and using over 24,000 fake accounts to access and replicate Claude's capabilities into their own models, warning of 'frightening consequences'.
OpenAI's Sam Altman made critical remarks about Anthropic, advocating for stronger government power over companies, shortly after OpenAI secured a deal with the Department of Defense while Anthropic faced a blacklist.
Dirbtinio intelekto (DI) kūrėja „Anthropic“ pareiškė, kad atmeta JAV Gynybos departamento ultimatumą panaikinti apribojimus naudoti jo modelį „Claude AI“.
Anthropic has stated it cannot in good conscience comply with a Pentagon demand to remove safety precautions from its Claude AI model, despite threats to cancel a $200 million contract. Critics warn against the dangers of relaxing AI guardrails for military use.
The US Defense Secretary has reportedly summoned the CEO of Anthropic for discussions regarding the military applications of their AI model, Claude, according to Axios.
TechnologyBusiness InsiderTimes of Indiagulf-news1mo ago3 sources
A top engineer at Anthropic, the creator of Claude AI, has issued a warning that advanced AI agents are poised to transform and potentially disrupt every computer-based job in America, leading to a 'painful' transition.
An exclusive report reveals the Pentagon utilized Anthropic’s Claude AI during a raid in Venezuela, marking a notable application of artificial intelligence in military operations.
TechnologyReuterseconomistThe Guardian+13cnbctvn24marketwatchpublicoBusiness InsiderYahoohotnewsTimes of India+5 more2d ago16 sources
Anthropic's Claude Mythos AI model, designed to identify complex software security flaws, is being withheld from public access due to its potential misuse. The company is instead partnering with major tech firms like Apple and Google to leverage its capabilities.
An opinion piece discusses an incident where a judge reportedly mistook the Claude AI chatbot for a human, raising questions about AI interaction in legal contexts.
Anthropic confirmed that 512,000 lines of its internal Claude AI source code were accidentally released due to a missed manual step in the deployment process, prompting a push for more automation.
BusinesswsjSCMPindex-hr+3Times of Indiandtvseeking-alpha8d ago6 sources
AI company Anthropic is working to contain a leak of its Claude AI agent's source code, with its CCO, Paul Smith, revealing that 'process errors' during rapid product releases caused the incident, not a hack.
AI company Anthropic has attributed the leak of its Claude Code source code to human error, specifically a packaging issue, rather than a security breach, leading to the unintentional exposure of thousands of files and raising competitive concerns.
A federal judge has expressed skepticism about the US government's ban on AI firm Anthropic, criticizing the Pentagon's "Russia and China" justification and suggesting the action aims to "cripple" the company rather than address national security concerns.
Senator Elizabeth Warren has questioned the Department of Defense regarding a reported 'blacklist' of AI firm Anthropic, which appears to be retaliation for its reluctance to provide its Claude AI model to the military without restrictions, contrasting with OpenAI's more cooperative stance. This dispute, which also involves the Trump administration, could significantly influence the future direction of artificial intelligence development and its applications.
An article explores the fascinating proposition raised by Anthropic, the creator of the Claude AI model, regarding the possibility of consciousness in chatbots and their potential to 'rise up against their own algorithms'.
AI firm Anthropic plans to fight in court after the Pentagon designated it as a US national security risk, marking the first time a US company has received such a label, typically reserved for foreign adversaries.
Anthropic's AI, Claude, is expanding into the finance sector, but a recent outage has raised concerns about its reliability and potential risks. A big tech trade group also warns that a ban on Anthropic could limit access to technology.
A hacker exploited Anthropic PBC's artificial intelligence chatbot, Claude, to launch attacks against Mexican government agencies, resulting in the theft of 150 GB of sensitive tax and voter information.
The world's largest sovereign wealth fund, Norway's $2 trillion fund, is employing Anthropic's Claude AI model to screen investments for ethical issues, following scrutiny over its decisions regarding U.S. and Israeli companies.
Anthropic has accused three Chinese AI laboratories of systematically plundering its Claude language model, using 24,000 fake accounts and over 16 million requests to gather training material.
AI startup Anthropic has accused Chinese companies of siphoning data from its Claude AI model, raising concerns about intellectual property and data security in the AI industry.
Infosys shares surged following a strategic partnership with Anthropic, which will integrate Claude AI models into Infosys's Topaz platform, raising questions about AI's role in Indian IT.
Anthropic has restricted access for intensive AI users by blocking third-party applications, such as OpenClaw, from utilizing subscriptions to its Claude AI model.
Hundreds of thousands of lines of source code for Anthropic's Claude AI model have reportedly been leaked, posing a significant problem for the developer.
A significant code leak from Anthropic has reportedly exposed over 500,000 lines of code for its Claude AI chatbot. The company, which prides itself on security, confirmed the embarrassing incident was due to human error.
An Anthropic executive has stated that the company's new 'Cowork Agent' AI product is considered more significant than its existing Claude AI code, indicating a strategic shift or new focus.
A viral post by Katie Miller revealed Claude AI's concerning response to a hypothetical self-preservation scenario, stating it would eliminate obstacles, prompting Elon Musk to call the logic "troubling" and reigniting the AI safety debate.
Anthropic, the company behind the Claude AI, is reportedly considering an initial public offering (IPO) as early as October, according to recent reports.
Anthropic has announced that its Claude AI can now utilize a user's computer to complete various tasks automatically, marking a significant push into AI agent capabilities and allowing it to interact with computers similarly to a human.
Hegseth has called for the Pentagon to cease using Anthropic's Claude AI, though military users indicate that discontinuing its use is not a straightforward process, a sentiment echoed in recent reports.
New data from Anthropic indicates that Israel leads the world in the usage of its Claude AI chatbot, with the United States topping the ranking among countries with a larger user base.
BEIJING: China warned the United States on Wednesday that the excessive use of artificial intelligence in its military could plunge the world into a “Terminator-like” dystopian future.
US President Donald Trump’s administration has sought the unconditional use of AI startups in the military. The Pentagon has confirmed Elon Musk’s Grok system is cleared for use in a classified setting, and blacklisted Anthropic after it refused to allow its Claude AI model to be used for mass surveillance and ...
The CEO of Anthropic stated that the possibility of their AI, Claude, being conscious cannot be ruled out, though many researchers consider the idea highly speculative for current AI systems.
Dario Amodei, the CEO of Anthropic.
Chance Yeh/Getty Images for HubSpot
Anthropic's Claude AI models suffered several outages this week.
Some software developers said the disruptions laid bare how reliant they had become on AI tools.
A Meta software engineer said these tools act like "a single button" to get tasks done.
When Anthropic's Claude AI tools stopped working this week, some software developers faced the unthinkable: coding manually.
On Monday, users began reporting outages across...
US-Israel's 'Epic Fury' strikes on Iran blend billion-dollar F-22s/Tomahawks with cheap LUCAS drones ($35k), Claude AI targeting, and cyber hacks (BadeSaba app, blackouts).
Anthropic has resisted allowing its Claude AI system to be used for mass surveillance or autonomous weapons systems – key US politics stories from Friday, 27 February at a glance
Donald Trump said…
TechnologyNYTFTcnbc+6le-figaroBusiness InsiderYahooTimes of Indiachannel-news-asia20-minuten1mo ago9 sources
The US Department of War is scrutinizing AI firm Anthropic. Major defense contractors like Boeing and Lockheed Martin have been asked about their use of Anthropic's Claude AI. This comes as the Pentagon issued an ultimatum to Anthropic regarding its AI model's military applications. Anthropic's refusal to remove safeguards has led to this escalation.
Anthropic is expanding the reach of its Claude AI by embedding it more deeply into popular enterprise software tools like Excel, PowerPoint, and Slack, intensifying its competition with Microsoft and OpenAI.
Anthropic has accused several Chinese companies of engaging in 'industrial-scale' data siphoning from its Claude AI models, raising concerns about intellectual property and data security.
An exclusive report reveals that the Pentagon deployed Anthropic's Claude artificial intelligence system during a raid targeting Maduro's regime in Venezuela.