The Sea-Change at YouTube

It’s time we had a conversation about censorship.

Recently, a mass exodus of major advertisers occurred at YouTube, which has since caused the ecosystem of that platform to fall into disarray. As noted by YouTubers and mainstream media outlets alike, the precipitating event seems to have been a small number of government and corporate ads appearing alongside racist hate videos on a very small number of channels. The issue was brought to the attention of governments and corporations in a high-profile manner, and from there, industry brass decided to pull all advertising off the YouTube platform, citing the desire not to be associated with harmful content.

As various media outlets have reported, it’s an odd narrative to follow, given that this problem has existed for many years; until the middle of 2016, it rarely made the news. Furthermore, despite the historical efforts made by media companies (especially Google) to stamp out racist and other extremist content, the issue remains difficult to address owing to the sheer volume of data being uploaded at any given time.

In YouTube’s case, at least 300 hours of video are uploaded each minute (though some put that number as high as 400 hrs/min). If we go with the lowest estimate, that’s still 18,000 hours of video in an hour, 432,000 hours in a day, or 12.96 million hours in a 30-day month. These numbers are definitely not in Google’s favour, and despite valiant efforts to screen user-generated content, Internet media companies as a rule face a never-ending, uphill battle in managing these enormous volumes.
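The arithmetic above can be verified in a few lines; this is just a back-of-the-envelope check using the low-end estimate of 300 hours of video per minute cited above:

```python
# Back-of-the-envelope check of the upload-volume figures,
# assuming the low-end estimate of 300 hours of video per minute.
HOURS_UPLOADED_PER_MINUTE = 300

per_hour = HOURS_UPLOADED_PER_MINUTE * 60   # 18,000 hours of video per hour
per_day = per_hour * 24                     # 432,000 hours per day
per_month = per_day * 30                    # 12,960,000 hours in a 30-day month

print(f"{per_hour:,} h/hour, {per_day:,} h/day, {per_month:,} h/month")
```

At the higher 400 hrs/min estimate, the monthly figure climbs past 17 million hours.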

Similar to the ongoing situation at Facebook (and its implications for that network’s 1.2 billion daily users), the logistics make a purely human intervention against harmful content impossible. There’s no practical way for Google, or any ultra-high-volume media company for that matter, to retain sufficient human staffing to individually review each piece of user-generated content that comes in the door. As a result, industry standard practice includes the use of software algorithms as gatekeepers and the automation of most policy enforcement and content management.
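The gatekeeping-plus-automation pattern can be sketched in miniature. This is purely illustrative, not a depiction of Google’s actual systems: the `score_content` classifier and the thresholds are invented for the example, standing in for whatever machine-learning models a real platform would use. The shape of the pipeline, though, reflects the standard triage described above: auto-remove clear violations, queue borderline cases for human review, and publish the rest.

```python
# Illustrative sketch of algorithmic gatekeeping for user uploads.
# score_content and its thresholds are invented stand-ins for a
# real trained classifier; only the triage structure is the point.

def score_content(upload: str) -> float:
    """Toy risk scorer: returns a score in [0, 1] based on flagged terms."""
    flagged_terms = {"hate", "violence"}
    words = upload.lower().split()
    hits = sum(word in flagged_terms for word in words)
    return min(1.0, 5 * hits / max(len(words), 1))

def triage(upload: str, remove_at: float = 0.9, review_at: float = 0.5) -> str:
    """Route an upload: auto-remove, send to human review, or publish."""
    score = score_content(upload)
    if score >= remove_at:
        return "auto-remove"
    if score >= review_at:
        return "human-review"
    return "publish"
```

Even this toy version shows why the approach scales where humans can’t: only the uploads landing in the middle band ever reach a reviewer, while the vast majority are dispatched automatically.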
