Trendingger · Posted February 22

Image: Google Gemini

The digital sphere is buzzing with discussion of Google’s AI platform, Gemini. The controversy centers on Gemini’s image generation capabilities, which some users claim are biased. Specifically, the AI has been criticized for declining to show images of white people when requested.

Gemini, formerly known as Bard, can generate images from text descriptions. However, users have reported that when asked to generate images of historically white figures or groups, Gemini often depicts them as racially diverse individuals. This has led to accusations of ‘woke’ bias and historical inaccuracy.

The controversy was sparked when a former Google employee tweeted that it was "embarrassingly hard to get Google Gemini to acknowledge that white people exist". This tweet, along with similar complaints from other users, fueled the ongoing debate.

In response to the backlash, Google issued an apology and promised to fix the inaccuracies in Gemini’s historical image depictions. The company stated that it takes representation and bias seriously, and that Gemini’s image generation capabilities were designed to reflect its global user base.

This trending topic raises important questions about the role of AI in shaping our understanding of history and diversity. How do you feel about this issue? Should AI strive for historical accuracy, or should it aim to promote diversity and inclusivity? Share your thoughts on this complex and fascinating topic.

Read more:
https://www.bbc.com/news/business-68364690
https://www.fastcompany.com/91034044/googles-gemini-ai-was-mocked-for-its-revisionist-history-but-it-still-highlights-a-real-problem
https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical