- - businessinsider.com/chatgpt-secret… businessinsider.com
- - Supermarket AI meal planner app suggests recipe that would create chlorine gas https://t.co/21UT2bIx5x theguardian.com
- - Hacking AI? Here are 4 common attacks on AI, according to Google's red team https://t.co/8qJlzzLWjs zdnet.com
- - Overcoming the Articulation Barrier in Generative AI Using Hybrid Interfaces https://t.co/L6M4GKyBKX nngroup.com
- - Consultation hub | Supporting responsible AI: discussion paper - Department of Industry, Science and Resources https://t.co/m2nFRru23E consult.au
- - Submission: Safe and Responsible AI - Digital Rights Watch https://t.co/HFTpz4hV3d digitalrightswatch.org.au
- - RT @sherifmansour: We need more safe & responsible AI practices across the board. We believe transparency is key: https://t.co/wo1S1RRBnZ afr.com
- - New Farmer: A post-photography experiment by Bruce Eesly https://t.co/3YHjguVkoK readymag.com
- - Can machines dream of secure code? From AI hallucinations to software vulnerabilities https://t.co/OVaf5TAwnG snyk.io
- - RT @BBCTech: Google tests watermark to identify AI images https://t.co/TFejj1YcpO bbc.in
- - RT @backchnnl: It Costs Just $400 to Build an AI Disinformation Machine https://t.co/qabD2IDS6l trib.al
August 2023
August 2023 focuses sharply on the gap between AI's rapid practical deployment and the safety infrastructure needed to support it. A supermarket meal-planning chatbot generating a chlorine gas recipe is the month's most vivid illustration of that gap, while Snyk's research into AI hallucinations producing software vulnerabilities and Google's red team outlining four common AI attack patterns signal that enterprise security teams are taking the threat seriously. On the policy front, Australia's Department of Industry opens a consultation on safe and responsible AI, Digital Rights Watch lodges a submission, and Atlassian publicly calls for transparency disclosures when users interact with AI bots — a notable corporate position in the Australian context. The disinformation dimension gets a blunt price tag: building an AI disinformation machine now costs just $400, while Google's watermarking research for AI-generated images offers a nascent countermeasure. Shadow AI adoption runs parallel to all of this, with employees quietly using ChatGPT to advance their work regardless of corporate bans.
We're not responsible for the content of these links.