Gcore Debuts AI Solution for Real-Time Content Moderation & Compliance

June 27, 2024

Gcore's new AI Content Moderation solution enables organisations to improve user safety and ensure compliance with regulations such as the EU's Digital Services Act (DSA) and the UK's Online Safety Bill (OSB).

Any platform that hosts user-generated content (UGC), from comments to long-form video, or content that can be accessed by children, must moderate UGC to ensure its viewers are protected from offensive, violent, illegal, or age-inappropriate content. Social media, gaming, ecommerce, education, and digital advertising are just some of the industries where organisations have legal obligations. Exposing users to harmful content can lead to reputational damage, legal investigation, service suspension, an operating ban, and significant fines (up to 6% of global revenue under the DSA).

Real-time AI content moderation

The dramatic growth in UGC means human moderation cannot keep pace with the challenge of identifying harmful and illegal content. High volumes overwhelm moderators, costs become prohibitive, or operational inefficiencies result in missed violations and delayed publishing of legitimate content.

Gcore AI Content Moderation automates the review of video content streams, flagging inappropriate content and alerting human moderators where necessary. The solution integrates cutting-edge technologies to begin reviewing videos or live streams within seconds of their publication:

  • State-of-the-art computer vision: Advanced computer vision models are used for object detection, segmentation, and classification to identify and flag inappropriate visual content accurately.
  • Optical character recognition (OCR): Text visible in videos is converted into machine-readable format, enabling the moderation of inappropriate or sensitive textual information displayed within video content.
  • Speech recognition: Sophisticated algorithms analyse audio tracks to detect and flag foul language or hateful speech, thereby moderating audio as effectively as visual content.
  • Multiple-model output aggregation: Complex decisions, such as identifying content involving child exploitation, require the integration of multiple data points and outputs from different models. Gcore AI Content Moderation aggregates these outputs to make precise and reliable moderation decisions.
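To make the aggregation idea concrete, the sketch below combines confidence scores from several models into a single moderation decision. It is a minimal illustration, not Gcore's actual logic: the model names, labels, and thresholds are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelSignal:
    model: str         # hypothetical source, e.g. "vision", "ocr", "speech"
    label: str         # hypothetical category, e.g. "violence", "profanity"
    confidence: float  # score in [0.0, 1.0]

def aggregate(signals: list[ModelSignal],
              flag_threshold: float = 0.8,
              review_threshold: float = 0.5) -> str:
    """Combine outputs from multiple models into one decision.

    One high-confidence signal flags the content outright; two or more
    mid-confidence signals from *different* models escalate the content
    to a human moderator, mirroring the human-in-the-loop design.
    """
    if any(s.confidence >= flag_threshold for s in signals):
        return "flagged"
    corroborating = {s.model for s in signals if s.confidence >= review_threshold}
    if len(corroborating) >= 2:
        return "human_review"
    return "approved"

# Two mid-confidence signals from independent models -> human review.
signals = [
    ModelSignal("vision", "violence", 0.62),
    ModelSignal("speech", "hate_speech", 0.57),
]
print(aggregate(signals))  # -> human_review
```

The design choice here is that corroboration across independent modalities (visual plus audio, say) carries more weight than a single borderline score, which keeps both false flags and missed violations down.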

Organisations can quickly integrate Gcore AI Content Moderation into their existing infrastructure through an API without the need for prior AI or ML experience. AI Content Moderation runs on Gcore's global network of 180+ edge points of presence with a capacity exceeding 200 Tbps, ensuring low latency, speed, and resilience for customers.
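An API integration of this kind might look like the following sketch, which builds an HTTP request asking the service to review a video stream. The endpoint path, payload fields, and header format are hypothetical placeholders, not Gcore's documented API; consult Gcore's API reference for the real contract.

```python
import json
import urllib.request

# Hypothetical endpoint -- not taken from Gcore's documentation.
API_URL = "https://api.gcore.com/streaming/ai/tasks"

def build_moderation_request(stream_url: str, api_key: str,
                             categories=("nsfw", "violence")) -> urllib.request.Request:
    """Construct a POST request submitting a stream URL for moderation.

    The payload schema and auth header are illustrative assumptions.
    """
    payload = {"url": stream_url, "categories": list(categories)}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"APIKey {api_key}",  # placeholder auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_moderation_request("https://example.com/live.m3u8", "MY_API_KEY")
print(req.get_method(), req.full_url)  # -> POST https://api.gcore.com/streaming/ai/tasks
```

Because the request is built separately from being sent, a client can log, sign, or retry it before calling `urllib.request.urlopen(req)` against the live service.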

“Human-only content moderation has become an unmanageable task: it is simply impossible to assign enough human resources and keep costs in check,” commented Alexey Petrovskikh, Head of Video Streaming at Gcore. “Gcore AI Content Moderation gives organisations the power to moderate content at a global scale, while keeping humans in the loop to review flagged content. This new service plays an essential role in enabling companies to protect their users, communities, and their own reputation, all while ensuring regulatory compliance.”

Learn more about the features and pricing of Gcore AI Content Moderation.

About Gcore

Gcore is a global edge AI, cloud, network, and security solutions provider. Headquartered in Luxembourg, with a staff of 600 operating from ten offices worldwide, Gcore provides solutions to global leaders in numerous industries. Gcore manages its global IT infrastructure across six continents, delivering some of the best network performance in Europe, Africa, and LATAM with an average response time of 30 ms worldwide. Gcore's network consists of 180 points of presence worldwide in reliable Tier IV and Tier III data centers, with a total capacity exceeding 200 Tbps.
