User-generated content is simply anything posted by visitors to a website, blog, application, or any other online platform. If you think about it, a huge part of digital content is created this way. Many publishers decide to permit it, not to mention social media giants like Facebook, which rely mainly on their users’ activity. But is it a good idea? How should you approach such content? And who is responsible for it?
Many publishers choose to let their audience actively participate in content creation. It’s a great way to bond with users and benefit from their feedback. The most common example is the comment section. But there are also platforms where users generate the vast majority of the content, or even all of it: online forums such as the well-known Quora and Reddit or the programming-focused Stack Overflow, as well as image- and video-sharing platforms like Pinterest, Flickr, and Behance.
We all know that user-generated content sometimes gets messy. How messy? It ranges from an inappropriate choice of words, through threats and vulgarity, all the way to explicit and illegal content. If you monetize your website, you need to be aware that not every advertiser will want to place their ads near such things. That makes both advertisers and ad systems sensitive to improper user behavior. Can you predict who will visit your app or website and how this person will behave? No. But are you responsible for it? Well, yes.
Are publishers responsible for what’s generated by users?
Imagine you have a beautiful house, visited and admired by many people. Then, all of a sudden, one of your guests points out that there is vulgar writing on one of the walls. Will you: A) fix the wall so that your house goes back to being enchanting, or B) say that you didn’t write it and leave it there? You have to admit option B sounds ridiculous. So why would you apply it to your digital content?
Carrying on with the house analogy, anyone will undoubtedly notice the writing and associate it with you and your residence. And that’s precisely what happens when visitors to your online creation come across unfavorable user-generated content. It just leaves a bitter aftertaste. And if that didn’t convince you, many providers of monetization solutions hold the publisher responsible for everything located on the website, blog, app, platform, or whatever they create. It means that if inappropriate content is detected, it’s the publisher who bears the consequences. What those consequences are depends on your solution provider’s specific policy standards. For those using Google ad code, due to its prevalence, we’ve prepared articles on Google’s content policies and how you can verify whether your content violates them.
How can publishers control user-generated content?
Naturally, it’s impossible to supervise each user individually, especially on forums, apps, and websites that rely on user activity and can grow rapidly. However, there are some proven methods to maintain order over what’s posted within your content. Let’s delve into them:
1. Moderate comments and other users’ activity
If you include third-party content within yours, you should always take the time to go through it. Make it a fixed element of your schedule. Pay particular attention to sections where such activity is high and to pages where you place ads provided by external monetization partners.
2. Publish a Content Policy
A Content Policy is a set of rules that your audience has to obey (for instance, take a look at how Reddit handled it). This way, you’ll make your visitors aware of which types of content you permit and which you forbid. Make sure the policy is visible and easily accessible.
3. Add a “Report a violation” button
This will enable your audience to help you monitor your content. Anytime they notice a violation of your Content Policy in other users’ activity, they can report it right away.
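As a sketch of how such a button’s data might flow, here is a hypothetical report payload builder in TypeScript. The endpoint, interface, and field names are illustrative assumptions, not part of any specific platform’s API:

```typescript
// Hypothetical shape of a violation report sent from the "Report" button.
interface ViolationReport {
  contentId: string;   // ID of the reported comment or post
  reason: string;      // what the reporting user entered
  reportedAt: string;  // ISO 8601 timestamp
}

// Build a normalized report object; trimming avoids storing stray whitespace.
function buildReport(
  contentId: string,
  reason: string,
  now: Date = new Date()
): ViolationReport {
  return { contentId, reason: reason.trim(), reportedAt: now.toISOString() };
}

// A client-side button handler would then POST the payload to your backend,
// e.g. fetch("/api/report", { method: "POST", body: JSON.stringify(report) }),
// where "/api/report" is a route you would define yourself.
```

On the backend, reports like these typically land in a moderation queue so a human can review the flagged item before deciding to remove it.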
4. Don’t always monetize users’ activity
You can consider excluding user-generated content from what you monetize. Simply create separate landing pages where your audience can be active, and don’t place ads there. If you use solutions like optAd360 AI Engine, you can choose on which pages of your website the ads should be turned off.
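One simple way to implement this split is to decide, based on the page path, whether the ad script should load at all. The path prefixes below are placeholders for wherever your user-generated pages live:

```typescript
// Hypothetical list of path prefixes that host user-generated content.
// Adjust these to your own site structure.
const UGC_PATH_PREFIXES = ["/forum/", "/comments/"];

// Returns true when the current page is safe to monetize,
// i.e. it is not one of the user-generated sections.
function shouldLoadAds(pathname: string): boolean {
  return !UGC_PATH_PREFIXES.some((prefix) => pathname.startsWith(prefix));
}

// In the browser you would gate your ad-tag injection with it, e.g.:
//   if (shouldLoadAds(window.location.pathname)) { /* inject ad script */ }
```

Ad platforms usually also offer page-level controls in their own dashboards; a client-side check like this is just one complementary option.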
5. Use helpful plugins
Some of the existing plugins for adding posts or comments can help you separate harmless content from potentially inappropriate content. If you decide to use one, you’ll avoid the time-consuming work of analyzing everything your visitors do. Many of the dedicated plugins feature moderating or filtering systems, enabling ongoing screening of the content created by your audience.
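Under the hood, the simplest of these filters amount to a word-list check. Here is a toy sketch of that idea; the blocklist terms are placeholders, and real plugins use far more sophisticated techniques (regexes, ML classifiers, user reputation):

```typescript
// Placeholder blocklist — a real filter would maintain a much larger,
// curated list or use a classification service.
const BLOCKLIST = ["badword1", "badword2"];

// Flags a comment for human moderation if it contains a blocked term.
// Lowercasing makes the match case-insensitive.
function needsModeration(text: string): boolean {
  const lower = text.toLowerCase();
  return BLOCKLIST.some((word) => lower.includes(word));
}
```

A flagged comment would typically be held in a pending state rather than deleted outright, so a moderator can make the final call.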
6. Utilize reCAPTCHA
CAPTCHA is an acronym that stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” reCAPTCHA, in turn, is a system that lets you separate real users from potentially harmful, spamming bots. You undoubtedly know what it looks like: we’ve all had to check the “I’m not a robot” box at some point.
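Checking the box is only half the story: your server should verify the token the widget produces. A minimal sketch, assuming Google’s documented siteverify endpoint (a POST with `secret` and `response` form fields that returns JSON containing a `success` boolean):

```typescript
// Builds the form body the siteverify endpoint expects.
function buildVerifyBody(secret: string, token: string): URLSearchParams {
  return new URLSearchParams({ secret, response: token });
}

// Server-side verification sketch (requires a runtime with a global fetch,
// such as Node 18+). `secret` is your reCAPTCHA secret key; `token` is the
// value the widget placed in the submitted form.
async function verifyRecaptcha(secret: string, token: string): Promise<boolean> {
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body: buildVerifyBody(secret, token),
  });
  const data = (await res.json()) as { success: boolean };
  return data.success === true;
}
```

Only accept the comment or post when `verifyRecaptcha` resolves to `true`; otherwise reject the submission as likely automated.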
Verifying what content appears within your digital property is a must. For one thing, it protects your web income, since it lets you prevent possible consequences of violating your monetization partners’ policies (such as having your ads blocked). But, just as significantly, it helps provide a great user experience. Remember that whatever visitors see on your pages will be associated with your website. And as a publisher, your role is to make your audience’s visit as pleasant as possible.