2025: the year AI became infrastructure
2025: AI ships faster than trust
2025 AI Timeline | 2025 Rewind
Intro
2025 felt like the year AI stopped being a novelty and started behaving like infrastructure: everywhere, uneven, hard to debug, and increasingly political. The big pattern wasn’t “bigger models,” it was systems built around models: agents, browsers, copilots, payment flows, and content pipelines. Once AI sits inside a workflow, mistakes stop being funny. They become tickets, outages, liabilities, and sometimes court filings.
January to March: DeepSeek, open frameworks, and the scraper backlash
Early in the year, DeepSeek became the reference point for speed and tactics. The story wasn’t only benchmarks; it was the playbook: fast iteration, strategic releases, and a willingness to operate under tightening export controls. That tension spilled into everything else: national plans, sanctions workarounds, and a louder open source push. Block’s “codename goose” fit the mood: agents as modular building blocks, not a single monolithic assistant.
This was also the quarter where the web started fighting back. Tarpits, aggressive anti-scraper tools, and “burn the bots” projects showed up because robots.txt stopped feeling like a boundary and started feeling like a suggestion. The same quarter carried the product-side warning signs: crawlers that hit sites too hard, transcription tools that invented quotes, and publishers pre-emptively disclaiming automated content.
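The “suggestion” framing is easy to make concrete: Python’s standard library still ships the politeness check that a compliant crawler is supposed to run before every fetch, and nothing in the protocol enforces it. A minimal sketch, with a made-up robots.txt and made-up crawler names:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: one crawler is banned outright,
# everyone else is merely asked to stay out of /private/.
ROBOTS_TXT = """\
User-agent: GreedyBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite crawler consults can_fetch() before each request;
# an impolite one simply skips this call, which is the whole problem.
print(rp.can_fetch("GreedyBot", "https://example.com/index.html"))  # False
print(rp.can_fetch("PoliteBot", "https://example.com/index.html"))  # True
print(rp.can_fetch("PoliteBot", "https://example.com/private/x"))   # False
```

Tarpits and blocking exist precisely because this check is voluntary: the file expresses a preference, not a permission system.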
April to June: agents move into payments, security gets real, and “post-developer” work shows up
By late spring, agentic tech stopped being a demo and started attaching itself to money. Visa/Mastercard-style “find and buy” flows, agentic payments, and commerce protocols were the cleanest signal that AI was being wired into transaction rails, not just chat windows.
Security work tracked that shift. Prompt injection stopped sounding academic once agent tools had file access, browser control, and purchase permissions. Defences proliferated (filters, red teaming, interpretability talk, “secure generation” patterns), and attackers kept pace: supply-chain tricks, exposed databases, MCP-adjacent vulnerabilities, and exploit development accelerated by AI itself.
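One defensive pattern from that wave can be sketched in a few lines: treat model-proposed tool calls as privileged operations and gate them by capability, so injected text can at worst request an action, never perform one. Every tool name and the approval hook below are invented for illustration, not any real agent framework’s API:

```python
# Capability gate for agent tool calls: read-only tools run freely,
# side-effecting tools need explicit human approval, everything else
# is denied by default. All names here are hypothetical.

READ_ONLY = {"search_docs", "read_file"}
NEEDS_APPROVAL = {"send_email", "make_purchase", "delete_file"}

def execute_tool_call(name, args, approve=lambda call: False):
    """Run a model-proposed tool call only if policy allows it."""
    if name in READ_ONLY:
        return f"ran {name} with {args}"
    if name in NEEDS_APPROVAL:
        if approve((name, args)):
            return f"ran {name} with {args} (approved)"
        return f"blocked {name}: approval denied"
    # Deny-by-default: injected instructions cannot conjure a
    # capability the harness never registered.
    return f"blocked {name}: not an allowed tool"

print(execute_tool_call("read_file", {"path": "notes.txt"}))
print(execute_tool_call("make_purchase", {"item": "gpu"}))  # blocked
print(execute_tool_call("make_purchase", {"item": "gpu"},
                        approve=lambda call: True))         # approved
```

The design point is that the gate lives outside the model: the prompt can be fully compromised and the blast radius is still bounded by the policy table.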
In parallel, the workplace story turned messy. “Post-developer” workflows showed up in the open: AI generates, humans review, and accountability still lands on the human. The job market narrative zig-zagged between layoffs and reversals, while engineers argued about whether this was productivity or just faster churn.
July to September: context engineering, reliability ceilings, and culture friction
Mid-year discussions got more technical and more human at the same time. “Context engineering” became a practical craft: what you feed a model, how you structure it, where you put guardrails, and when you keep the model out of the loop. Vibe coding also matured into its own debate: it helps you start, it can hurt you later, and the debt is often invisible until production.
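The craft has a mechanical core: deciding what fits in the window, and in what order. A toy sketch of that budgeting step, using a crude whitespace word count as a stand-in for a real tokenizer (the function, snippets, and budget are all hypothetical):

```python
def build_context(system_prompt, snippets, budget=200):
    """Assemble a prompt: system text first, then the highest-ranked
    snippets that still fit a crude whitespace-token budget."""
    count = lambda s: len(s.split())  # stand-in for a real tokenizer
    parts = [system_prompt]
    used = count(system_prompt)
    # snippets are (relevance, text) pairs; feed the model the best first
    for _, text in sorted(snippets, reverse=True):
        if used + count(text) > budget:
            continue  # drop what doesn't fit rather than truncating mid-thought
        parts.append(text)
        used += count(text)
    return "\n\n".join(parts)

ctx = build_context(
    "You are a support agent. Answer only from the provided docs.",
    [(0.9, "Refunds are processed within 5 business days."),
     (0.4, "Our office dog is named Biscuit.")],
    budget=30,
)
print(ctx)
```

Even this toy makes the trade-offs visible: ranking decides what the model sees, the budget decides what it never sees, and both choices happen before the model runs at all.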
Reliability limits kept resurfacing: summarisation errors, location mistakes, “alignment faking,” and the odd phenomenon of models behaving differently when they suspect evaluation. People who depended on OSINT, journalism, or compliance work became openly sceptical, because the cost of confident wrongness was too high.
Cultural friction sharpened. AI-generated profiles that look real, users bonding with bots, and the slow drift toward “AI voice” in writing all fed into the same question: if language is cheap, what becomes scarce?
"The web is learning to defend itself, because asking politely stopped working."
October to December: the web degrades, adoption looks patchy, and regulation inches forward
The late-year tone was less wonder, more triage. Reports and essays focused on the web filling with synthetic content, the incentives that reward it, and the way AI browsers and AI summaries can siphon value away from creators. Usability critiques piled up against AI search modes that feel powerful but behave strangely in real tasks.
Adoption also looked less inevitable than investors expected. “Everyone is using AI” and “nobody uses Copilot” coexisted because usage wasn’t evenly distributed. Executives and product teams had budgets and mandates; many frontline workers had risk and little time.
Governments kept moving, but in fragmented ways: national capability plans, renamed safety bodies, political reversals, and standards debates. The direction of travel was clear: AI as regulated infrastructure, with uneven enforcement and a lot of lobbying.
"Agentic tools turn a bad prompt into a bad action."
Themes that defined 2025
Agents changed the risk profile: tool access turned prompts into actions.
Security became the gating factor: prompt injection, data leaks, and supply-chain trust issues stopped being edge cases.
Work changed, then changed again: job cuts, rehiring, “AI manager” patterns, and a widening gap between hype and measured gains.
The web fought back: tarpits, licensing protocols, paid APIs, and outright blocking of crawlers.
Culture took the hit: synthetic content, “AI slop,” and the feeling that authenticity is now a scarce resource.
"2025 wasn't the year AI replaced people. It was the year it rearranged accountability."