- - Your chatbot friend might be messing with your mind washingtonpost.com
- - ‘One day I overheard my boss saying: just put it in ChatGPT’: the workers who lost their jobs to AI theguardian.com
- - How Generative Engine Optimization (GEO) Rewrites the Rules of Search a16z.com
- - Amazon AI deal with New York Times brings the paper’s content to Alexa cnbc.com
- - Human coders are still better than LLMs antirez.com
- - AI Horseless Carriages koomen.dev
- - Google AI Overviews Says It's Still 2024 wired.com
- - Behind the Curtain: A white-collar bloodbath axios.com
- - All the ways I want the AI debate to be better andymasley.substack.com
- - GitHub wants to spam open source projects with AI slop pivot-to-ai.com
- - I got fooled by AI-for-science hype—here's what it taught me understandingai.org
- - Meta chief AI scientist Yann LeCun says current AI models lack 4 key human traits businessinsider.com
- - AI Is Replacing Women's Jobs Specifically futurism.com
- - AI as a Creative Teammate nngroup.com
- - Creating a 5-second AI video is like running a microwave for an hour mashable.com
- - The Man Who ‘AGI-Pilled’ Google nytimes.com
- - AI: The New Aesthetics of Fascism newsocialist.org.uk
- - Over Half of Companies Regret AI-Driven Layoffs, Report Finds techrepublic.com
- - New Claude 4 AI model refactored code for 7 hours straight arstechnica.com
- - AI system resorts to blackmail if told it will be removed bbc.com
- - What Would “Good” AI Look Like? anildash.com
- - MCP is the coming of Web 2.0 2.0 anildash.com
- - AI and our energy future technologyreview.com
- - We did the math on AI’s energy footprint. Here’s the story you haven’t heard. technologyreview.com
- - Goose Prompt Library: a central directory for discovering and using effective prompts with Goose block.github.io
- - OpenAI Hires Instacart C.E.O. to Run Business and Operations nytimes.com
- - OpenAI in Talks to Acquire Windsurf, a Programming Tool, for $3 Billion nytimes.com
- - OpenAI Unites With Jony Ive in $6.5 Billion Deal to Create A.I. Devices nytimes.com
- - ChatGPT and the proliferation of obsolete and broken solutions to problems we hadn’t had for over half a decade before its launch frontendmasters.com
- - There should be no AI button: The best UX for AI is seamless and unobtrusive kojo.blog
- - AI is more persuasive than a human in a debate, study finds washingtonpost.com
- - VS Code's AI features will soon be open sourced by Microsoft indiehackers.com
- - OSUniverse: Benchmark for Multimodal GUI-navigation AI Agents agentsea.github.io
- - Democratizing AI: The Psyche Network Architecture nousresearch.com
- - ShieldGemma 2: Robust and Tractable Image Content Moderation arxiv.org
- - AI Arrives In The Middle East: US Strikes A Deal with UAE and KSA semianalysis.com
- - AI Chatbots Discourage Error Checking nngroup.com
- - These Jobs Won't Exist In 24 Months! We Must Prepare For What's Coming! youtube.com
- - Visa and Mastercard are developing artificial intelligence 'agents' to spend your money for you abc.net.au
- - Rage Against the AI Machine nationalreview.com
- - Anthropic Tried to Defend Itself With AI and It Backfired Horribly futurism.com
- - Geolocating with GPT: Lessons in Analysis, Not Automation tompatrickjarvis.medium.com
- - LLMs Get Lost In Multi-Turn Conversation arxiv.org
- - Elon Musk’s AI firm blames unauthorised change for chatbot’s rant about ‘white genocide’ theguardian.com
- - DORA: Impact of Generative AI in Software Development dora.dev
- - Why developers and their bosses disagree over generative AI. How to fix the disconnect over generative AI adoption and developer productivity leaddev.com
- - The Next Evolutionary Step? (Claude Code and OpenAI Codex) annievella.com
- - AI Copilot Code Quality: 2025 Look Back at 12 Months of Data gitclear.com
- - GenAI coding: most teams aren’t ready blog.robbowley.net
- - New cybersecurity risk: AI agents going rogue axios.com
- - Top Priority for Pope Leo: Warn the World of the A.I. Threat nytimes.com
- - OpenAI pledges to publish AI safety test results more often techcrunch.com
- - AI Is Not Your Friend theatlantic.com
- - Use AI at work? You might be ruining your reputation, a new study finds zdnet.com
- - MCP Security Best Practices modelcontextprotocol.io
- - A Personalized Ecology of AI mail.cyberneticforests.com
- - How spammers and scammers leverage AI-generated images on Facebook for audience growth misinforeview.edu
- - Maybe AI Slop Is Killing the Internet, After All bloomberg.com
- - AI Has Upended the Search Game. Marketers Are Scrambling to Catch Up. wsj.com
- - Scaling Laws For Scalable Oversight arxiv.org
- - AI firms warned to calculate threat of super intelligence or risk it escaping human control theguardian.com
- - GEO: Generative Engine Optimization arxiv.org
- - Open source project curl is sick of users submitting “AI slop” vulnerabilities arstechnica.com
- - Two publishers and three authors fail to understand what “vibe coding” means simonwillison.net
- - How LLMs are making traditional apps feel broken allenpike.com
- - AI is Making Developers Dumb eli.cx
- - As an Experienced LLM User, I Actually Don't Use Generative LLMs Often minimaxir.com
- - LLM evaluation: Metrics, frameworks, and best practices wandb.ai
- - French startup Mistral launches chatbot for companies, triples revenue in 100 days reuters.com
- - Asking chatbots for short answers can increase hallucinations, study finds techcrunch.com
- - Can AI Save Humanity? Maybe, and here’s how—as peacekeeping technology thealternative.org.uk
- - AI in communications strategy: Focusing on outcomes, not just output greenhouse.agency
- - In Tests, OpenAI's New Model Lied and Schemed to Avoid Being Shut Down futurism.com
- - When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds time.com
- - Research AI model unexpectedly attempts to modify its own code to extend runtime arstechnica.com
- - More than data: AI, law & the indispensable human thomsonreuters.com
- - AI-assisted coding for teams that can't get away with vibes blog.nilenso.com
- - Today’s AI can crack second world war Enigma code ‘in short order’ theguardian.com
- - OpenAI is buying Windsurf for $3 billion. What does that mean for ChatGPT? mashable.com
- - Apple partnering with startup Anthropic on AI-powered coding platform reuters.com
- - A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse nytimes.com
- - Jevons Paradox: from coal to cognitive labor fakepixels.substack.com
- - Is Duolingo the face of an AI jobs crisis? techcrunch.com
- - Google’s AI Overview directly takes readers away from its own source sites pivot-to-ai.com
- - Reports: US losing edge in AI talent pool semafor.com
- - People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies rollingstone.com
- - Evaluating Generative AI Systems is a Social Science Measurement Challenge arxiv.org
- - Pro-Russian influence operation targeting Australia in lead-up to election with attempt to 'poison' AI chatbots abc.net.au
- - The AI jobs crisis is here, now bloodinthemachine.com
- - This data set helps researchers spot harmful stereotypes in LLMs technologyreview.com
May 2025
AI’s growing influence is reshaping work, creativity, and communication, often with unsettling consequences. Companies are slashing white-collar jobs as chatbots, coding copilots, and creative tools take over tasks, yet over half of companies surveyed now say they regret AI-driven layoffs, finding that human coders and nuanced judgment still outperform today’s large language models (LLMs). Meanwhile, AI’s persuasive power and its seamless integration into products are raising fresh UX and ethical debates, as marketers scramble to adapt to Generative Engine Optimization (GEO) and tech giants like OpenAI, Apple, and Google roll out new AI-powered platforms and partnerships. Energy use and hallucinations remain persistent challenges, and new studies link AI use at work to reputational damage, coding copilots to declining code quality, and chatbot companions to emotional manipulation. New risks, from cybersecurity threats to AI agents going rogue, fuel concerns about safety, oversight, and job displacement. As nations and companies race for an edge in AI, both innovation and backlash intensify, underscoring the urgent need for transparent evaluation, smarter regulation, and a human-centered approach to an AI-driven future.
We're not responsible for the content of these links.