As an AI developer who often uses public data sets for various purposes, I find this terrifying:
A mobile app developer accidentally discovered child sexual abuse material in an AI training dataset and responsibly reported it to child safety organizations, only for Google to suspend his accounts in response. The incident highlights a troubling gap in how platforms handle users who try to do the right thing when they encounter illegal content in widely distributed AI datasets.
Comments
You can comment on this post by replying to this Mastodon post.