News

Bloomberg was allowed, and the New York Times wasn't. Anthropic said it had no knowledge of the list and that its contractor, ...
In the so-called “constitution” for its chatbot Claude, AI company Anthropic claims that it's committed to principles based ...
Chain-of-thought monitorability could improve generative AI safety by assessing how models come to their conclusions and ...
Researchers are urging developers to prioritize research into “chain-of-thought” processes, which provide a window into how ...
Anthropic released a guide to get the most out of your chatbot prompts. It says you should think of its own chatbot, Claude, ...
Monitoring AI's train of thought is critical for improving AI safety and catching deception. But we're at risk of losing this ...
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.
Anthropic released one of the most unsettling findings I have seen so far: AI models can learn things they were never ...
The Silicon Valley start-up says it is ‘concerning’ that the US added only one-tenth of the power capacity that China added last year.
Let’s be honest. AI has already taken a seat in the classroom. Google, Microsoft, OpenAI, and Anthropic have all been pushing ...
The chatbot can now be prompted to pull user data from a range of external apps and web services with a single click.
Amazon Web Services (AWS) is launching an AI agent marketplace next week and Anthropic is one of its partners, TechCrunch has ...