It’s time we had a conversation about censorship.
Recently, a mass exodus of major advertisers from YouTube threw the platform’s ecosystem into disarray. As YouTubers and mainstream media outlets alike have noted, the precipitating event seems to have been a small number of government and corporate ads appearing alongside racist hate videos on a very small number of channels. The issue was raised with the affected governments and corporations in a high-profile manner, and from there, industry brass decided to pull all advertising off the YouTube platform, citing a desire not to be associated with harmful content.
As various media outlets have reported, it’s an odd narrative to follow given that this problem has existed for many, many years. Until the middle of 2016, it was an issue that rarely made the news. Furthermore, despite the long-standing efforts of media companies (especially Google) to stamp out racist and other extremist content, the issue remains difficult to address owing to the sheer volume of data being uploaded at any given time.
In YouTube’s case, at least 300 hours of video is uploaded each minute (though some put that number as high as 400 hrs/min). Even at the lowest estimate, that’s 18,000 hours of video per hour, 432,000 hours per day, or 12.96 million hours in a 30-day month. These numbers are definitely not in Google’s favour, and despite valiant efforts to screen user-generated content, Internet media companies tend to face a never-ending uphill battle when it comes to managing volumes like these.
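For anyone who wants to sanity-check those figures, here’s the arithmetic as a quick Python sketch. The 300 hours-per-minute input is the conservative estimate cited above; everything else just follows from it:

```python
# Rough sketch of YouTube's upload volume, using the conservative
# 300 hours-per-minute figure cited above.
UPLOAD_HOURS_PER_MINUTE = 300

per_hour = UPLOAD_HOURS_PER_MINUTE * 60   # 18,000 hours of video per hour
per_day = per_hour * 24                   # 432,000 hours per day
per_month = per_day * 30                  # 12,960,000 hours per 30-day month

print(f"{per_hour:,} h/hour | {per_day:,} h/day | {per_month:,} h/month")
```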
As with the ongoing situation at Facebook (and its implications for that network’s 1.2 billion daily users), a purely human intervention is a logistical impossibility as a solution to harmful content. There’s no practical way for Google, or any other ultra-high-volume media company, to retain enough staff to individually review each piece of user-generated content that comes in the door. As a result, standard industry practice is to use software algorithms as gatekeepers and to automate most policy enforcement and content management.
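To illustrate just how impossible, here’s a hypothetical staffing estimate built on the daily figure above. The assumption that one reviewer can screen eight hours of footage per working day is mine, not an official number from Google or anyone else:

```python
# Hypothetical illustration of why purely human review doesn't scale.
# Assumes one reviewer can screen 8 hours of footage per working day,
# watching in real time with no breaks, re-watches, or appeals handling.
DAILY_UPLOAD_HOURS = 432_000              # from the 300 h/min estimate above
HOURS_SCREENED_PER_REVIEWER_DAY = 8       # assumed reviewing capacity

reviewers_needed = DAILY_UPLOAD_HOURS / HOURS_SCREENED_PER_REVIEWER_DAY
print(f"{reviewers_needed:,.0f} full-time reviewers just to watch every upload once")
# -> 54,000 reviewers, before accounting for context checks, appeals, or 24/7 coverage
```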
For the most part, this works quite well. That people don’t encounter more offensive material on a given platform than they presently do is an impressive testament to this technological achievement. Most people who haven’t worked for an ISP or a user-generated media company aren’t aware of the vast complex of systems running 24/7 behind the scenes, responsible for most of the user experience, never mind the innumerable roles and responsibilities filled by humans in company InfoSec, LEO Liaison, Member Safety, and other departments. When things are working normally, 99% of back-end operations are invisible to end-users. Generally speaking, it’s only when something breaks or goes horribly wrong that people sit up and take notice.
At the time of this writing, the world stands in the shadow of the 2016 US election and its news cycle, during which major controversies over fake news and racism whipped corporations, governments, and the public alike into a massive frenzy, changing the online rules of engagement practically overnight. As a result, people are not only taking aim at legitimately harmful material like propaganda and racism; the excessively sudden and sloppy deployment of censorship has also produced a backlash, thanks to a phenomenal amount of unintended, innocent by-catch.
Five years ago, this shit wouldn’t fly. If the current political anxiety didn’t exist today, we’d see corporations and governments exercising more restraint, which would take the form of finding an integrated technological solution to address the actual issue, rather than scapegoating and uprooting the vast sanctuaries of free expression we’ve worked so hard to cultivate.
I’m talking specifically about the YouTube advertiser boycott here. Don’t get me wrong, boycotts can achieve admirable results when done for the right reasons and when organized and executed in good faith, but I don’t think we’re looking at noble motivations or good faith in this situation. Everything about it, quite frankly, stinks.
Millions of people rely on YouTube as a communications medium to network with others, and thousands more rely on it for employment; neither group is using it for illicit purposes. Why should even one of these individuals bear any part of the punishment levied by the ad industry and media companies over the actions of a small number of bad actors? This is collateral damage, pure and simple, and the clumsy approach taken in this case didn’t help either. The innately complicated and sensitive nature of operating a media company demands nuance and caution to reach outcomes that a majority of its creators and consumers can live with. That’s why the ‘ad-pocalypse,’ its adoption as policy, and its handling ever since have all come across as crass and irresponsible.
Massive shocks to the revenue system also aren’t good for a media company that might otherwise re-circulate some of that ad money into staffing and R&D, which in turn pays dividends toward more effective content management. See where I’m going with this? Over months or years, a sufficiently large income loss could actually make it harder to advance long-term, integrated solutions to the problem of racist extremism. What happens in that case? We end up seeing much less sophisticated (and less effective) solutions deployed.
Case in point: the latest changes to YouTube. Google has been augmenting its TOS with harsher measures and enforcement, and tightening its filtering algorithms, in order to save face with advertisers. Judging by the response from the ad industry, that effort came far too late to change any minds; meanwhile, everyone is now stuck with a new problem: the stricter “advertiser friendly” rule has become financially disastrous for many high-income content creators. Most of these problems seem to have arisen from the staggering amount of accidental by-catch produced by the beefed-up filtering algorithms, though some blame has also been placed on overzealous enforcement of various site policies in the name of staying ‘advertiser friendly.’ There’s been no shortage of channel operators and content creators willing to share their personal experiences with the mass demonetization, censorship, and accidental deletion of videos that has followed.
The LGBTQ community bore the brunt of this by-catch when the issue first became known in the second half of 2016. Following a protracted stream of responses from YouTubers and coverage by mainstream media outlets, Google apologized and attempted to correct the error, but the controversy only spread as the algorithm and policy issues worsened and began to affect an ever-wider user base: LGBTQ channels, gun culture, makeup reviews, political commentators of every stripe, and many other people and organizations from all walks of life.
The appeals process has drawn further fire as well: a recent bug left users unable to lodge an appeal against wrongful demonetization and censorship at all. Despite another sheepish Google apology, the damage has been done, and the YouTube user base is clearly upset.
Many unintended consequences have flowed from the recent disruptions, but perhaps none is as visible and telling as the number of high-profile YouTubers who have begun focusing their monetization efforts off-site. Have you noticed your favourite YouTubers asking for support on Patreon? What about directing viewers to off-site vanity domains or channel-related consumer products sites? Again, it’s not hard to read between the lines here: the writing is on the wall. If you haven’t noticed it yet, you will before 2017 is over. These changes are bellwethers for the way creators intend to structure their incomes over the next five to ten years. It’s their way of voting with their wallets.
While it remains to be seen how serious (or how not-serious) an effect the migration of income exchange systems could have on YouTube and its advertisers, the message it communicates is clear: people are getting scared, and they don’t want all their eggs in one basket. This trend actually began a couple of years ago, but the Patreon phenomenon in particular exploded when YouTube clamped down and began mass demonetization of videos in 2016.
What happens to platform loyalty if the revenue streams fall into decline? For that matter, what happens to user loyalty when a network’s terms of service become less than tolerable?
With YouTube reporting 50% year-over-year growth in channels earning six-figure incomes, it seems an especially unfortunate moment to lose so many advertisers, not to mention all the antipathy and bad press its overreaction to the ad-pocalypse has managed to rack up. Can such growth and creator incentives continue? Will the YouTube ecosystem be sustainable once the current controversy dies down?
While I hope Google finds a way to smooth things over, it will not be acceptable if that comes at the cost of the user base, or of basic freedom of expression. YouTube was founded on openness, sharing, free speech and expression, and unreasonable demands that content now be ‘advertiser friendly’ or else face loss of revenue or censorship are exactly that: unreasonable. It’s not all that different from a dictatorial government suddenly moving in and shaking things up, telling people what they can and can’t do.
Because of their place in the online revenue cycle, advertisers share a significant portion of the moral and fiscal responsibility to support free expression and to play the long game in sustaining a high-quality media product such as YouTube. That includes being willing to do the technical consultation, reading, and negotiation needed to understand the more unpleasant aspects of Internet life and to deal with them in ways that yield productive results.
If advertisers would rather follow transient political winds and permit themselves to be endlessly distracted by the superficiality of the news cycle, things might eventually take an uglier turn. In that scenario, they might find they have no one but themselves to blame once subscribers start looking elsewhere and set up a democratized, free form system of collaboration that is less ordered and less easily monetized. Social media demographics are indeed in flux, and should the worst happen, it wouldn’t be the first time a major network has collapsed. It generally takes a number of years, though.
Either way, imposing arbitrary sanctions from afar without the consent of the majority is a guaranteed loser — both in terms of loyalty and stability. It’s brand suicide because it alienates everyone. It’s fuel for the cycle of uncertainty that’s caused people to start looking for other platforms and revenue streams.
To save the platform from the current controversy, the best approach would be for all sides to calm down and resume working within the system to modify and improve upon what already exists.
The small amount of racist material currently running the news cycle at a fever pitch is best handled by an integrated approach that combines highly nuanced human intervention with machine algorithms. It will take effort, and it will certainly take time, but it’s crucial that innocent subscribers and creators not be caught in the crosshairs of over-eager enforcement while we sort this out.