- Atlassian Surfaces Human Practicalities For AI In Service Management forbes.com
- AI Is the Black Mirror nautil.us
- As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies apnews.com
- AI-generated optical illusions can sort humans from bots newscientist.com
- Generative AI & journalism apo.org.au
- OpenAI tries to ‘uncensor’ ChatGPT techcrunch.com
- Guardian Media Group announces strategic partnership with OpenAI theguardian.com
- 12 famous AI disasters cio.com
- AI Mistakes Are Very Different From Human Mistakes spectrum.ieee.org
- Humans outperform AI in Australian government trial ia.au
- Rigorous AI research to enable advanced AI governance aisi.gov.uk
- Introducing Perplexity Deep Research perplexity.ai
- UK drops ‘safety’ from its AI body, now called AI Security Institute techcrunch.com
- The False AI Energy Crisis theatlantic.com
- The Anthropic Economic Index anthropic.com
- The LLM Curve of Impact on Software Engineers serce.me
- The future belongs to idea guys who can just do things ghuntley.com
- The End of Programming as We Know It oreilly.com
- Representation of BBC News content in AI Assistants (PDF) bbc.co.uk
- AI chatbots are still hopelessly terrible at summarizing news pivot-to-ai.com
- 200bn euro alliance seeks to put Europe in driving seat of AI development belganewsagency.eu
- US and UK refuse to sign AI safety declaration at summit arstechnica.com
- Elon Musk just offered to buy OpenAI for $97.4 billion theverge.com
- Elon Musk-led group makes $97 billion bid for control of OpenAI reuters.com
- AI Hallucinations: What Designers Need to Know nngroup.com
- Artificial intelligence in healthcare ahpra.gov.au
- Researchers Replicate OpenAI's Hot New AI Tool in 24 Hours futurism.com
- AI pioneer Fei-Fei Li says AI policy must be based on ‘science, not science fiction’ techcrunch.com
- OpenAI rebrands itself theverge.com
- Constitutional Classifiers, a framework that trains classifier safeguards using explicit constitutional rules simonwillison.net
- Get Started with Chrome Built-in AI : Access Gemini Nano Model locally medium.com
- Thought partnership with language models ruperts.world
- Infosec firm finds a DeepSeek database 'completely open and unauthenticated' exposing chat history, API keys, and operational details pcgamer.com
- DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot wired.com
- As DeepSeek upends the AI industry, one group is urging Australia to embrace the opportunity theguardian.com
- Hallucinating copy/paste plagiarism robots are everywhere! What a time to be alive! theguardian.com
February 2025
The latest wave of AI news reflects a landscape where innovation and controversy collide. Atlassian is highlighting the real-world, human-centered challenges of integrating AI into service management, even as broader commentary likens today's AI to a 'Black Mirror' episode: unsettling, disruptive, and rife with ethical implications. Nowhere is this starker than in reports of Israel using US-made AI models in warfare, which raise urgent questions about technology's role in life-and-death decisions. On the technical front, AI-generated optical illusions are being used to distinguish humans from bots, while generative AI continues to reshape journalism, with OpenAI both trying to 'uncensor' ChatGPT and forging a high-profile partnership with Guardian Media Group.

As AI makes headlines for both progress and blunders, from 12 notorious AI disasters to the observation that AI mistakes differ fundamentally from human errors, a recent Australian government trial found that humans still outperform AI at certain tasks. Meanwhile, the research community is pushing for rigorous governance and transparency: Perplexity has launched Deep Research, and the UK has controversially rebranded its AI oversight body as the 'AI Security Institute', dropping the word 'safety'. Debates over energy use and economic impact rage on, with skepticism about a so-called 'AI energy crisis' and new measures such as the Anthropic Economic Index gauging the sector's real economic influence.

Software engineering is being upended by the rapid evolution of large language models (LLMs): the 'LLM Curve of Impact' predicts seismic shifts in the profession, possibly heralding '**the end of programming as we know it**'. AI's limitations are also on display: chatbots still struggle to summarize news accurately, BBC News content is being misrepresented by AI assistants, and concerns mount over copy/paste plagiarism and rampant hallucinations.
As Europe launches a €200 billion alliance to seize global leadership in AI development and an Elon Musk-led group bids $97.4 billion for control of OpenAI, the sector is abuzz with power plays and regulatory drama; notably, the US and UK refused to sign the AI safety declaration at the latest summit.

Amid this frenzy, researchers have replicated one of OpenAI's hottest new tools in just 24 hours, and frameworks like Constitutional Classifiers aim to train classifier safeguards from explicit constitutional rules. Security lapses, however, illustrate persistent risks: DeepSeek left a database completely open and unauthenticated, exposing chat history and API keys, and its safety guardrails failed every test researchers threw at its chatbot. Fei-Fei Li cautions that AI policy must be based on 'science, not science fiction', while calls for Australia to embrace the DeepSeek opportunity meet mixed enthusiasm. As we wrestle with AI's practical, economic, and philosophical implications, from healthcare to everyday 'copy/paste plagiarism robots', one thing is clear: the future belongs to those who can not only imagine but also act, in a world where thought partnership with machines is increasingly the norm.
We're not responsible for the content of these links.