Effectively moderate user-generated content on your platform by developing content moderation tools and policies tailored to your needs.
If your platform accepts user-generated content in any form, you know that people will upload anything and everything. For video, the stakes can be particularly high: users may upload anything from copyrighted media to inappropriate footage.
While large platforms may staff big teams of Trust & Safety specialists, you don't need an army to implement content moderation strategies of your own. Below we've rounded up a number of technical and operational strategies that Mux customers can use to keep their content libraries healthy.
Mux's secure video playback tools can help make it more difficult for bad actors to use your videos for their own purposes.
When first testing out Mux, it's common to set a video's playback policy to `public` so you can easily view the video via its public URL. Once testing is done, we recommend that UGC platforms switch to using signed playback policies to help curb abuse. These allow you to use a JWT to time-limit requests for your content and to set playback restrictions specifying which referring domains can serve your content.
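As a rough sketch, here's what minting one of those tokens can look like in Node using the `jsonwebtoken` package. The environment variable names are placeholders; the signing key ID and base64-encoded private key come from your Mux account, and Mux's signed URL docs cover the full set of claims (for example, thumbnails and GIFs use different audiences).

```typescript
import jwt from "jsonwebtoken";

// Placeholders: substitute your own signing key ID and base64-encoded
// RSA private key, both generated in the Mux dashboard or via the API.
const MUX_SIGNING_KEY_ID = process.env.MUX_SIGNING_KEY_ID!;
const MUX_PRIVATE_KEY = Buffer.from(
  process.env.MUX_PRIVATE_KEY!, // base64-encoded private key from Mux
  "base64"
).toString("ascii");

function signPlaybackToken(playbackId: string): string {
  return jwt.sign(
    {
      sub: playbackId, // the signed playback ID this token authorizes
      aud: "v",        // "v" = video playback; other audiences cover thumbnails, GIFs, etc.
      exp: Math.floor(Date.now() / 1000) + 60 * 60, // token expires in one hour
    },
    MUX_PRIVATE_KEY,
    { algorithm: "RS256", keyid: MUX_SIGNING_KEY_ID }
  );
}

// Append the token when building the playback URL:
const playbackId = "YOUR_PLAYBACK_ID";
const url = `https://stream.mux.com/${playbackId}.m3u8?token=${signPlaybackToken(playbackId)}`;
```

A short expiry is a useful default here: tokens that leak (e.g., in a shared link) stop working on their own, without you having to rotate keys.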
For certain platforms, we currently offer an internal feature that sends notifications via webhook when we detect high delivery traffic on an asset. This can be helpful to catch unauthorized content quickly, before it results in increased spend or risk to your platform. To get this feature enabled for your account, contact Support.
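Once enabled, handling the notification looks like any other Mux webhook. Below is a hedged sketch of a receiver: the signature check follows Mux's documented `Mux-Signature` scheme (HMAC-SHA256 over `timestamp.body`), but the event type string shown is hypothetical; check the actual payload you receive once Support turns the feature on.

```typescript
import crypto from "crypto";
import express from "express";

const app = express();
const WEBHOOK_SECRET = process.env.MUX_WEBHOOK_SECRET!; // from the Mux dashboard

// Verify the Mux-Signature header, formatted as "t=<timestamp>,v1=<hmac>",
// where the HMAC-SHA256 is computed over "<timestamp>.<raw body>".
function verifyMuxSignature(rawBody: string, header: string): boolean {
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string])
  );
  if (!parts.t || !parts.v1) return false;
  const expected = crypto
    .createHmac("sha256", WEBHOOK_SECRET)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(parts.v1);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

app.post("/mux/webhooks", express.raw({ type: "application/json" }), (req, res) => {
  const rawBody = req.body.toString("utf8");
  if (!verifyMuxSignature(rawBody, req.header("Mux-Signature") ?? "")) {
    return res.status(400).send("invalid signature");
  }
  const event = JSON.parse(rawBody);
  // Hypothetical event type: confirm the real one from your payloads.
  if (event.type === "video.asset.high_delivery") {
    // e.g., page your on-call rotation or open a review ticket here
    console.log(`High delivery detected on asset ${event.data?.id}`);
  }
  res.sendStatus(200);
});

app.listen(3000);
```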
Our Trust & Safety team contacts all administrators on your account in the event of account usage or content that violates our Terms of Service. Our team may take actions that include the deletion of assets, disabling of live streams, and in rare cases disabling of environments. Because bad actors will often repeatedly upload the same unauthorized content, we recommend making sure these messages reach you right away so you can take appropriate actions to address the source (e.g., closing the user's account).
To ensure emails from our team get escalated, add an email group or paging service email as an admin on your Mux account. (For example, see PagerDuty's docs on email routing.)
As our own engineers have blogged about, you either die an MVP or live long enough to build content moderation. A basic content moderation flow should take some information about the video asset (a sample of still frames, the transcript of its audio track, a copy of its metadata) and evaluate it based on algorithmic rules to escalate potentially troublesome content. For a peek at how Mux has iterated on our own approach, check out this talk that one of our experts gave at Demuxed 2023.
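As a rough sketch of that shape (every name, field, and threshold below is illustrative, not a Mux API):

```typescript
// Illustrative skeleton only: the service calls, field names, and
// thresholds are placeholders for whatever you use.
interface AssetSample {
  assetId: string;
  frameUrls: string[];          // sampled still frames from the video
  transcript?: string;          // audio transcript, if you generate one
  metadata: Record<string, string>;
}

type Verdict = "allow" | "review" | "block";

async function moderate(sample: AssetSample): Promise<Verdict> {
  // Combine whatever signals you have into a single risk score.
  const frameRisk = await classifyFrames(sample.frameUrls);      // 0..1
  const textRisk = sample.transcript
    ? scoreTranscript(sample.transcript)                         // 0..1
    : 0;
  const risk = Math.max(frameRisk, textRisk);

  if (risk > 0.9) return "block";  // confident enough to act automatically
  if (risk > 0.5) return "review"; // ambiguous: escalate to a human
  return "allow";
}

// Placeholders for your classification vendor and keyword rules.
async function classifyFrames(urls: string[]): Promise<number> { return 0; }
function scoreTranscript(text: string): number { return 0; }
```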
For info on the tools Mux offers to help you retrieve relevant data, check out these docs & blogs:
Many customers grab images from their content via Mux APIs to feed into third-party services that provide object detection and specialized content classification. While we recommend relying primarily on thumbnails, we also support MP4 downloads for services that prefer a full video. The results from these services can trigger automated workflows that end up in your own Slack channel or on platforms like PagerDuty or Opsgenie. Through those flows, you can action simple cases automatically and escalate edge cases to a human reviewer. A tool like n8n lets you build these workflows with no-code blocks.
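Here's a hedged sketch of that wiring: the thumbnail URLs follow Mux's image API format, while `classifyImage()` and `SLACK_WEBHOOK_URL` stand in for your classification vendor and a Slack incoming webhook.

```typescript
// Sketch: sample a few stills via Mux's image API and forward flagged
// results to Slack for triage.
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL!;

function thumbnailUrl(playbackId: string, seconds: number): string {
  // Public playback IDs work directly; signed ones also need a token.
  return `https://image.mux.com/${playbackId}/thumbnail.jpg?time=${seconds}`;
}

async function screenAsset(playbackId: string, durationSeconds: number) {
  // Sample stills at 10%, 50%, and 90% of the runtime.
  const times = [0.1, 0.5, 0.9].map((f) => Math.floor(f * durationSeconds));
  for (const t of times) {
    const url = thumbnailUrl(playbackId, t);
    const { flagged, label } = await classifyImage(url); // your vendor call
    if (flagged) {
      await fetch(SLACK_WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          text: `:warning: Asset ${playbackId} flagged as "${label}" at ${t}s: ${url}`,
        }),
      });
      return; // one alert per asset is enough for triage
    }
  }
}

// Placeholder for whichever content-classification vendor you use.
async function classifyImage(url: string): Promise<{ flagged: boolean; label?: string }> {
  /* ... */
  return { flagged: false };
}
```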
If high-risk content is ending up on a page where you control the player, you can integrate Mux Data to gain visibility into viewing sessions and track engagement (including the unwanted kind).
Aggregating these Views will help you gain insight into the types of platforms & devices being used to stream your content.
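If you use Mux Player, Data monitoring is built in. For a bare `<video>` element, wiring up the `mux-embed` SDK looks roughly like this; the env key and metadata values below are placeholders:

```typescript
import mux from "mux-embed";

// Attach Mux Data monitoring to an HTML5 <video> element.
const video = document.querySelector<HTMLVideoElement>("#player")!;

mux.monitor(video, {
  debug: false,
  data: {
    env_key: "YOUR_MUX_DATA_ENV_KEY",   // from your Mux Data environment
    video_id: "asset-or-internal-id",   // lets you query Views per asset
    video_title: "User upload 123",
    viewer_user_id: "user-456",         // ties suspicious views back to accounts
  },
});
```

Passing a stable `video_id` and `viewer_user_id` is what makes the aggregation useful for moderation: you can pivot from a suspicious asset to the accounts watching it, and vice versa.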
If you're using Mux Data's Media tier, you can also take advantage of two additional features:
One thing worth noting: the High Delivery Webhook's "delivery rate" is different from the "Views" tracked by Mux Data. Both can be used for telemetry, but they measure different parts of the video pipeline.
One simple way to address issues with user-provided content is to make sure your content policies are clear. These rules can be in your Terms of Service, Acceptable Use Policy, Community Guidelines, or a separate content policy.
Consider covering the following common topics (you're even welcome to copy this and make it your own):
You can also share details of how you'll enforce the policy, such as a strikes-based system.
For some examples of artful content policies, check out Patreon, Strava, and Crowdcast. If you have a legal advisor, make sure to discuss any obligations that may apply to your company (e.g., under the DMCA) and include coverage of those.
While your platform will need its own active measures for content moderation, you can also incorporate third-party reporting into your approach. At minimum, you should have an email address specifically for complaints, such as copyright@yourdomain.com or abuse@yourdomain.com, but you can also build a simple intake form that creates a support ticket in your system. Make sure incoming messages are routed to someone trained to handle them appropriately. Evaluate whether your contact info should be listed in the US DMCA Agent Directory.
You can also implement in-product reporting capabilities that allow other users to report a video that may violate your content policies.
This is another good place to consult your legal advisor, as some copyright safe harbor laws include specific requirements around contact details and response turnaround times to keep yourself free from liability.
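As a sketch of what an in-product report intake might look like (the route, field names, and `createTicket()` helper are all hypothetical; note the stored timestamp, which helps if you're subject to response-time requirements):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical in-product report intake: records the report and opens
// a ticket for a trained reviewer. createTicket() stands in for your
// help desk or ticketing integration.
app.post("/api/videos/:videoId/report", async (req, res) => {
  const { reason, details } = req.body as { reason: string; details?: string };
  const report = {
    videoId: req.params.videoId,
    reporterId: req.get("X-User-Id") ?? "anonymous", // however you identify users
    reason,
    details,
    receivedAt: new Date().toISOString(), // useful for turnaround-time obligations
  };
  await createTicket(report); // route to your moderation queue
  res.status(202).json({ ok: true });
});

async function createTicket(report: unknown): Promise<void> {
  /* e.g., Zendesk, Linear, or a Slack message to your moderators */
}

app.listen(3000);
```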
When users sign up for an account on your platform, you likely collect a short list of details (e.g., email) while keeping things as simple/frictionless as possible. If your platform is seeing patterns of abuse, consider altering this flow to disincentivize signups/posting by bad actors: