
Anthropic Refuses Pentagon Demand to Remove AI Safety Guardrails
Anthropic has stated it cannot in good conscience comply with a Pentagon demand to remove safety precautions from its Claude AI model, despite the Pentagon's threat to cancel a $200 million contract. Critics warn of the dangers of relaxing AI guardrails for military use.

