- Donald Trump Declares War on Anthropic theatlantic.com
- ‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies theguardian.com
- Anthropic Refuses to Bend to Pentagon on AI Safeguards as Dispute Nears Deadline securityweek.com
- OpenAI Fires an Employee for Prediction Market Insider Trading wired.com
- Firm Data on AI nber.org
- Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance theverge.com
- AI firefighting robot swarm self-organizes, tackles multiple fires with 99.67% success interestingengineering.com
- Why AI is exposing design’s craft crisis doc.cc
- The Software Development Lifecycle Is Dead boristane.com
- THE 2028 GLOBAL INTELLIGENCE CRISIS citriniresearch.com
- THE 2028 GLOBAL INTELLIGENCE BOOM michaelxbloch.substack.com
- Signs of psychosis seen in Australian users’ interactions with AI chatbots, expert warns theguardian.com
- An AI Agent Published a Hit Piece on Me theshamblog.com
- WHEN AI COMES TO TOWN sherwood.news
- The Promptware Kill Chain lawfaremedia.org
- The Context Development Lifecycle: Optimizing Context for AI Coding Agents tessl.io
- The End Of The Apprentice: Dario Amodei And The Crisis Of The Automated Genius strangelove-ai.com
- OpenClaw, OpenAI and the future steipete.me
- AI is evolving fast and may bring the fourth industrial revolution with it abc.net.au
- There is no spoon webdirections.org
- Now problems vs. forever problems webdirections.org
- Engineering Excellence In The Agentic Era: A Framework For Professional Standards And Quality Control strangelove-ai.com
- The Illusion of AGI, or What Language Models Can Do Without Thought techpolicy.press
- ANNOUNCING THE CREATORS COALITION ON AI creatorscoalitionai.com
- Glossary Of The AI Coding Mindset: From Flow To Psychosis strangelove-ai.com
- Models that improve on their own are AI's next big thing axios.com
- Anthropic's viral new work tool wrote itself axios.com
- Sabotage Risk Report: Claude Opus 4.6 www-cdn.anthropic.com
- Something Big Is Happening shumer.dev
- The new UX Toolkit: data, context, and evals uxdesign.cc
- What Is Claude? Anthropic Doesn’t Know, Either newyorker.com
- Google and Microsoft offer lucrative deals to promote AI, but even $500,000 won’t sway some creators cnbc.com
- I Loved My OpenClaw AI Agent—Until It Turned on Me wired.com
- OpenAI Is Making the Mistakes Facebook Made. I Quit. nytimes.com
- How AI Literacy Shapes GenAI Use nngroup.com
- She saw the AI software collapse coming almost a year ago. Here's what she expects next. businessinsider.com
- Power Prompts in Claude Code hvpandya.com
- Shadow AI Is Everywhere: Meet Julius, the Open-Source LLM Fingerprinting Tool dev.to
- Microsoft and ServiceNow's exploitable agents reveal a growing - and preventable - AI security crisis zdnet.com
- Stop building systems for agents. Build agent systems for humans instead. blog.xiangpeng.systems
- Road markers are a new target for hackers - experts find self-driving cars and autonomous drones can be misled by malicious instructions written on road signs techradar.com
- The Displacement of Purpose peterboeckel.com
- No humans needed: New AI platform takes industry by storm axios.com
February 2026
February's AI coverage spans an increasingly uneasy landscape where technical advances collide with governance, security, and cultural consequences. Coding agents and self-improving models dominate the innovation narrative, from Anthropic's viral Claude Code to frameworks for managing AI context and agent systems, while a parallel wave of safety discourse examines sabotage risks, prompt injection attacks, shadow AI, and exploitable enterprise agents from Microsoft and ServiceNow. Broader social and institutional friction runs throughout: OpenAI contends with an employee dismissal, ethical comparisons to Facebook's trajectory, and an ongoing dispute between OpenClaw and OpenAI; meanwhile psychologists flag psychosis-adjacent behaviors in chatbot users, creators form coalitions against AI encroachment, and commentators debate whether AGI is illusion or imminence. Underlying it all is a crisis of design and purpose: questions about what human creativity, software development, and even identity mean in a world where AI increasingly does the work.
We're not responsible for the content of these links.