AI on X is more than just the possible first name of one of Elon Musk's future children. It's also a technological marvel that's become more of a terror lately because of the way it's being used.
Most recently, some creeps used AI to create fake pornographic images of Taylor Swift and spread them online — largely on X, the social media giant formerly known as Twitter. Her fans fought back against the trending topic, flooding it with positive photos. But that didn't stop the pics from being seen tens of millions of times.
We've heard Taylor is considering legal action over the images, which makes sense. But would she sue X? The tech company did eventually take measures to stop the spread of the images, including removing the posts, suspending accounts that shared them, and making it impossible to search "Taylor Swift." Was it enough?
OK, first off, we're guessing that last one is NOT something Tay and her team want as a solution! She has albums to sell, ffs! She wants her fans to be able to talk about her! Secondly, well, they took quite a long time to do anything. We saw dozens and dozens of Swifties post complaints for several hours before X took any action. Why?
We know that when he first bought Twitter, Musk fired a LOT of the folks responsible for keeping the platform safe. After all, he was a self-professed champion of free speech. He wasn't going to stand for some super woke liberals struggling to clamp down on… neo-Nazi hate speech…
Well, it looks like he's finally learning that Twitter's previous owners weren't being the PC police — they were just trying to run a business. And in order to run X, he needs to keep it safe, if only to avoid getting himself into more legal trouble and to stop the exodus of advertisers!
On Friday, in the wake of the Taylor Swift AI debacle, the company announced it was building a "trust and safety center" in Austin, Texas, for which it would hire 100 full-time content moderators. Human beings, not AI, notably.
The main focus will actually be child sexual exploitation, a problem we didn't even realize had been spreading on X. But with the lack of regulation, it makes sense. The darkest, most unaccountable parts of the internet always seem to fill up with such heinous content. And as we understand it, this kind of fake AI NSFW material is being used more often against underage teens in bullying situations than against the famous (and famously litigious). In a statement, head of X business operations Joe Benarroch explained:
“X does not have a line of business focused on children, but it’s important that we make these investments to keep stopping offenders from using our platform for any distribution or engagement with CSE content.”
He did, however, say this may just be "a temporary action" being taken "with an abundance of caution as we prioritize safety on this issue." Hmm. We have to think these content moderators — being, presumably, decent human beings — would also be able to spot things like the Taylor Swift AI trend much earlier and put a stop to it. We just hope Elon keeps the new team around long enough to make a real change on X.
[Image via MEGA/WENN.]