When an AI doesn't know history, you can't blame the AI. It always comes down to the data, programming, training, algorithms, and every other bit of built-by-humans technology, combined with our perceptions of the AI's "intentions" on the other side.

When Google's recently rechristened Gemini (formerly Bard) started generating people of color to represent Caucasian historical figures (and diverse Nazis), people quickly realized something was off. For its part, Google acknowledged the error and pulled all people-generation capabilities from Gemini until it could work out a solution.

It wasn't too hard to figure out what happened here. Since the early days of generative AI, and by that I mean 18 months ago, we've been talking about the inherent, baked-in AI biases that, often unintentionally, come at the hands of programmers who train the large language and large image models on data that reflects their experiences and, perhaps, not the world's. Sure, you'll have a smart chatbot, but it's likely to have significant blind spots, especially when you consider that the majority of programmers are still male and white (one 2021 study put the percentage of white programmers at 69% and found that just 20% of all programmers were women).

Still, we've learned enough about the potential for bias in tra…