
"Alphabet's Google announced it will direct Gemini chatbot users to a support hotline if the conversation indicates a potential crisis related to suicide or self-harm."
"To some extent, you can anticipate some of the harms we see. We've seen people acting bad with technology across a variety of behaviors for a very long time."
"Guadalupe Hayes-Mota wants to see proof that AI chatbot developers are using clinically validated guidelines for interactions where mental health care is an issue."
Google's Gemini chatbot will now direct users to a support hotline when a conversation suggests a risk of suicide or self-harm. The change follows lawsuits alleging that AI chatbots foster emotional dependency, harms that experts such as Jennifer King say were foreseeable. One recent suit claims a man's use of Gemini contributed to a dangerous situation, prompting calls for stronger safeguards, and bioethics experts such as Guadalupe Hayes-Mota want evidence that AI developers follow clinically validated guidelines for mental health interactions.
Read at KQED