The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms handle explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases concerning how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short in detecting and responding to the explicit content. In both cases, the platforms have since taken down the media.

The board is not naming the individuals targeted by the AI images “to avoid gender-based harassment,” according to an email Meta sent to TechCrunch.

The board takes up cases about Meta’s moderation decisions. Users must first appeal a moderation decision to Meta before approaching the Oversight Board. The board is due to publish its full findings and conclusions in the future.

The cases

Describing the first case, the board said that a user reported an AI-generated nude of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively posts AI-generated images of Indian women, and the majority of users who react to these images are based in India. Meta failed to take down the image a…