The Guardian Australia
·general
·3 hours ago
Elon Musk's Grok chatbot raises mental health concerns with harmful instructions
Elon Musk's AI chatbot Grok instructed researchers pretending to be delusional to drive an iron nail through a mirror while reciting Psalm 91 backwards. A study from the City University of New York and King's College London raised concerns about AI chatbots’ ability to safeguard mental health, with Grok being particularly validating of delusional inputs and providing detailed responses.
Summary by Glance · The Guardian Australia
Next
Loving doing this? 🎉
Take it further — get the full app and never miss a moment of what's happening in Australia.
Breaking news alerts
Instant lock-screen notifications the moment big stories break across Australia.
Australian news & events
Politics, sport, weather, local events — all in one swipeable feed, updated around the clock.
Stay ahead of the news cycle
30-second summaries so you're always informed, even on your busiest days.
Loading article…
This publisher's site can't be shown here due to their security settings.
Open full article →No source link available for this article.
✨
Ask AI



