- ChatGPT Containers can now run bash, pip/npm install packages, and download files simonwillison.net
- Moltbook is the most interesting place on the internet right now simonwillison.net
- Users flock to open source Moltbot for always-on AI, despite major risks arstechnica.com
- How AI assistance impacts the formation of coding skills anthropic.com
- Darren Aronofsky’s AI-Generated Show Contains Garbled Neural Gore, Even Just in the Teaser Trailer futurism.com
- The Five Levels: From Spicy Autocomplete to the Dark Factory danshapiro.com
- Spite House: AI, disintermediation and the end of the free web lapope.com
- We Need to Talk About How We Talk About 'AI' techpolicy.press
- 2025 in Review: AI’s impact on domains domainnamewire.com
- Five Trends in AI and Data Science for 2026 sloanreview.mit.edu
- The cost of AI slop could cause a rethink that shakes the global economy in 2026 theguardian.com
- When AI Builds AI cset.georgetown.edu
- What If? AI in 2026 and Beyond oreilly.com
- The Invisible Threat: How Polymorphic Malware Is Outsmarting Your Email Security secureworld.io
- ChatGPT is pulling answers from Elon Musk’s Grokipedia techcrunch.com
- Pope Leo warns of ‘overly affectionate’ AI chatbots edition.cnn.com
- Anthropic's philosopher says we don't know for sure if AI can feel businessinsider.com
- Computer says no: AI vetting rejects job hunters in record time smh.com.au
- What comes after ‘seeing is believing’ washingtonpost.com
- Your App Subscription Is Now My Weekend Project rselbach.com
- AI and the Next Economy oreilly.com
- curl shutters bug bounty program to remove incentive for submitting AI slop theregister.com
- Software as Fast Fashion tante.cc
- Ads Are Coming to ChatGPT. Here’s How They’ll Work wired.com
- Defending LLM applications against Unicode character smuggling aws.amazon.com
- Anthropic opens up its Claude Cowork feature engadget.com
- Block CISO: We red-teamed our own AI agent to run an infostealer on an employee laptop theregister.com
- Agent Guardrails and Controls engineering.block.xyz
- The Economics of AI Coding: A Real-World Analysis blog.ziade.org
- We asked over 150 software engineers about vibe-coding. Here's what they said. businessinsider.com
- Welcome to Gas Town steve-yegge.medium.com
- Ralph Wiggum as a "software engineer" ghuntley.com
- A new way to extract detailed transcripts from Claude Code simonw.substack.com
- SynthID: A tool to watermark and identify content generated through AI deepmind.google
- Don't fall into the anti-AI hype antirez.com
- Agent design patterns rlancemartin.github.io
- ServiceNow patches critical AI platform flaw that could allow user impersonation cyberscoop.com
- "AI" is bad UX buttondown.com
- How to write a great agents.md: Lessons from over 2,500 repositories github.blog
- How to Build Self-Evolving Claude Code Rules (So You Stop Fixing the Same Mistakes) nathanonn.com
- Google removes some of its AI summaries after users’ health put at risk theguardian.com
- Why Coles just hired US defence contractor Palantir abc.net.au
- Agent-native Architectures every.to
- Germany plans measures to combat harmful AI image manipulation reuters.com
- Rejecting Reality in the Age of AI little-flying-robots.ghost.io
- Scared of artificial intelligence? New law forces makers to disclose disaster plans calmatters.org
- A Definition of AGI agidefinition.ai
- A Year Of Vibes lucumr.pocoo.org
- AI Datacenter Explorer epoch.ai
January 2026
The AI conversation is shifting from novelty to consequences. Advances in coding agents, self-evolving systems, and agent-native architectures promise major productivity gains while raising hard questions about skill formation, software quality, and the economics of AI-driven development. Cultural and ethical unease is growing alongside the technical progress, reflected in debates over what we mean by 'AI', concerns about emotional manipulation by chatbots, the rise of AI-generated media and 'slop', and warnings about disintermediation, the erosion of the free web, and labour markets distorted by automated vetting and vibe-coding at scale. Security, governance, and accountability are becoming central too, spanning polymorphic malware, Unicode smuggling, AI platform flaws, red-teamed agents, watermarking tools like SynthID, and new laws forcing transparency around AI risks. Industry responses range from ads arriving in ChatGPT and companies tightening incentives and rules to governments intervening on health, image manipulation, and disclosure, all against a backdrop of uncertainty about AI's inner nature, its economic impact, and whether today's fast-moving systems are a stepping stone toward true AGI or a distraction from it.
We're not responsible for the content of these links.