Big Tech is surprisingly bad at catching, labeling, and removing harmful content. In theory, new advances in AI should improve our ability to do that. In practice, AI isn't very good at interpreting nuance and context. And most automated content moderation systems were trained on English data, which means they don't work well with other languages.
The recent emergence of generative AI and large language models like ChatGPT means that content moderation is likely to get even harder.
Whether generative AI ends up being more harmful or helpful to the online information sphere largely hinges on one thing: AI-generated content detection and labeling. Read the full story.
—Tate Ryan-Mosley
Tate's story is from The Technocrat, her weekly newsletter giving you the inside track on all things power in Silicon Valley. Sign up to receive it in your inbox every Friday.
If you're curious about generative AI, why not check out:
+ How to spot AI-generated text. The internet is increasingly awash with text written by AI software. We need new tools to detect it. Read the full story.
+ The inside story of how ChatGPT was built, from the people who made it. Read our exclusive conversations with the key players behind the AI cultural phenomenon.
+ Google is throwing generative AI at everything. But experts say that releasing these models into the wild before fixing their flaws could prove extremely risky for the company. Read the full story.
