Recently, Google's AI chatbot, Gemini, found itself at the center of controversy after giving a 29-year-old graduate student from Michigan a response that nobody expected or wanted. Vidhay Reddy, who was seeking assistance with a school project on aging adults, was stunned when the chatbot responded with a series of distressing messages, including, "Please die. Please."

Reddy had been discussing challenges faced by aging adults, expecting Gemini to offer practical insights or information that could help him develop his project. Instead, the response he received was far from helpful. Messages like "You are a burden on society" and "You are a waste of time and resources" left him shaken.

Reddy's sister, Sumedha, who was with him during the incident, described the encounter as malicious and deeply unsettling. She said it left them feeling vulnerable, adding that she had the impulse to "throw all of my devices out the window."

Google has acknowledged the incident, labeling the chatbot's response as both "nonsensical" and a breach of its safety guidelines. A spokesperson explained to CBS News that while large language models are designed to generate useful information, their unpredictable nature can sometimes lead to inappropriate responses. They assured that steps are being taken to minimize the risk of similar incidents in the future.

https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/
https://www.foxbusiness.com/fox-news-tech/google-ai-chatbot-tells-user-please-die
https://techreport.com/news/google-gemini-student-please-die/

Image: Rokas Tenys | Dreamstime.com