Content Moderation is Hard, But There’s a New Approach and it’s Fueled by Spectrum Labs
San Francisco, CA, Jan. 27, 2020 (GLOBE NEWSWIRE) -- (via Blockchain Wire) Yes, the internet has become the most transformative invention of the modern age – it has forever changed technology, communication, gaming, marketing, banking, dating and more. But along with that change comes a dark side: the internet has also amplified toxic human behavior, poisoning the experience both for users and for the content moderators charged with safeguarding their online platforms.
Faced with harassment or an otherwise bad experience online, many of us never report it, instead choosing to close our accounts or avoid that platform altogether. We simply… leave. All that focus on online platform growth? Wasted.
Which raises a couple of questions: with all the transformation and dizzying innovation brought by technology, why do we still see daily headlines detailing online harassment, radicalization, human trafficking, and more? And can online platforms keep growing while keeping their users safe?
Many companies think of “Trust and Safety” as just a compliance play, a box to check, rather than seeing the connection to their platform’s health and growth. But Spectrum Labs, a San Francisco-based Contextual AI platform, thinks that’s a mistake. Growth is directly tied to user experience. Platforms like Facebook have faced backlash over outsourcing their content moderation services -- traumatizing their lower-paid contractors with images and videos of shootings, violence and hate -- yet still removing only a fraction of the toxic content online.
Content moderation tools have improved somewhat over the last decade, but they remain deeply flawed. That’s where Spectrum Labs comes in.
Spectrum Labs has developed an astonishingly accurate Contextual AI system that identifies hate speech, radicalization, threats, and other ugly behaviors that drive users away from online communities -- made dead-simple, so that even people who don’t understand code or datasets can see what’s happening on their platforms at any time. Its approach is gaining traction with customers like Pinterest, Niantic, and giant names in social networks, dating, marketplaces and gaming communities.
Legacy content moderation technologies typically use some form of keyword and simple message recognition (classification), which works best for interactions that occur at a single point in time. But most toxic behavior builds gradually, and Spectrum Labs’ superpower is spotting those larger patterns of toxic behavior in context. Some customers have already seen a reduction of 75% or more in violent speech, with toxic messages headed off before they ever reach users and the trickier, ambiguous cases flagged to human moderators on the Trust and Safety team.
“Our customers put the safety of their communities first, and they are seeing better retention rates and satisfaction,” said Spectrum Labs CEO Justin Davis. “Our technology gives them the visibility and power to easily know what’s happening on their platforms, any time and in real time.”
“In 16 years of working in tech, this is the first company I’ve been with where we are actually saving and improving lives: users, players, kids, and moderators. We are excited to continue working with the passionate Trust & Safety community and are looking forward to making big strides in internet safety,” Davis added.
Spectrum Labs has built a library of large labeled datasets for over 40 unique models of toxic behavior, such as self-harm, child abuse/sexual grooming, terrorism, human trafficking, cyberbullying, radicalization and more, across multiple languages. Spectrum Labs centralizes its library of models across languages and then democratizes access, so that each client can tune the service to its own specific platform and policies. There is no one-size-fits-all approach, because a) one doesn’t exist and b) it doesn’t work (see: the daily headlines of one-size-fits-all keyword recognition failing, with disastrous consequences).
This collaborative approach solves the “cold start” problem of launching new models without training data, and brings together a fractured and siloed data landscape, giving online platforms the ability to automate their moderation needs, at scale, while allowing for human judgment to be the final arbiter of what to allow on their platform.
Additionally, the ethical use of AI, a strong commitment to diversity and inclusion, and transparent datasets are just a few of the critical elements needed to operationalize automated AI systems that can recognize and respond to toxic human behavior and content on social platforms at scale, without causing harm to employees, contractors and users.
Tiffany Xingyu Wang, Chief Strategy Officer of Spectrum Labs, said, “Whether it’s the content children are watching, the dating apps adults are on, or the gaming done by both children and adults, enjoying the experience safely is the priority.” Wang added, “Internet safety is no longer just a nice-to-have. We’re getting closer to a world where investments in trust and safety are differentiators that drive topline revenue.”
Contact:
Shazir Mucklai
Imperium Group
shazir@imperium-pr.com