PERSPECTA

News from every angle


Figma and Anthropic Collaborate on AI Code-to-Design Conversion

Figma has partnered with Anthropic to develop a tool that converts AI-generated code into design elements, streamlining the design process.

17 Feb, 00:03 — 17 Feb, 19:30

The Story


Source Diversity

High (59/100)

- Sources: 5 (33/33)
- Spectrum spread: 3/5 buckets covered (17/33)
  - Left (2): CNBC, BBC
  - Right (2): Seeking Alpha, WSJ
  - Far Right (1): ZeroHedge
- Geographic diversity: 2 regions (9/34): US (4), UK (1)

Sources

Showing 5 of 5 sources
BBC · High · 51d ago

Reddit's human content wins amid the AI flood

Reddit says its human contributors are valued amid an internet awash with AI-generated content.

WSJ · High · 50d ago

Infosys, Anthropic Partner on AI for Telecom, Finance, Manufacturing
CNBC · Mostly Factual · 50d ago

Anthropic releases Claude Sonnet 4.6, continuing breakneck pace of AI model releases

Claude Sonnet 4.6 is more consistent with coding and is better at following coding instructions, Anthropic said.

Seeking Alpha · Mixed · 50d ago

Anthropic continued to push model boundaries with latest Claude Sonnet 4.6 unveiling

ZeroHedge · Low · 50d ago

Anthropic–Pentagon Talks Stall Over AI Guardrails

Contract renewal talks between Anthropic and the Pentagon have stalled over how its Claude system can be used. The AI firm is seeking stricter limits before extending its agreement, according to a person familiar with the private negotiations and Bloomberg.

At the heart of the dispute is control. Anthropic wants firm guardrails to prevent Claude from being used for mass surveillance of Americans or to build weapons that operate without human oversight. The Defense Department’s position is broader: it wants flexibility to deploy the model so long as its use complies with the law. The tension reflects a larger debate over how far advanced AI should go in military settings.

Bloomberg writes that Anthropic has tried to distinguish itself as a safety-first AI developer. It created a specialized version, Claude Gov, tailored to U.S. national security work and designed to analyze classified information, interpret intelligence, and process cybersecurity data. The company says it aims to serve government clients while staying within its own ethical red lines.

“Anthropic is committed to using frontier AI in support of US national security,” a spokesperson said, describing ongoing discussions with the Defense Department as “productive conversations, in good faith.” The Pentagon, however, struck a firmer tone. “Our nation requires that our partners be willing to help our warfighters win in any fight,” spokesman Sean Parnell said, adding that the relationship is under review and emphasizing troop safety.

Some defense officials have grown wary, viewing reliance on Anthropic as a potential supply-chain vulnerability. The department could ask contractors to certify they are not using Anthropic’s models, according to a senior official, an indication that the disagreement could ripple beyond a single contract.

Rival AI developers are watching closely. Tools from OpenAI, Google and xAI are also being discussed for Pentagon use, with companies working to ensure their systems can operate within legal boundaries. Anthropic secured a two-year Pentagon deal last year involving Claude Gov and enterprise products, and the outcome of its current negotiations could influence how future agreements with other AI providers are structured.

By Tyler Durden
