As part of its ongoing effort to combat harmful content on the platform, YouTube has been updating its policies and investing in resources to protect the community. The Google-owned company has once again updated its policies to keep a check on hateful and supremacist content on YouTube. This builds on rules introduced in 2017 that limited recommendations, comments and the ability to share controversial videos, a step Google says reduced views of these videos by about 80 percent.
“Today, we're taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place,” YouTube said in a statement.
YouTube says it is enforcing the updated policy starting today, but clarifies that its systems will take some time to fully ramp up and that it will gradually “expand the coverage over the next several months.” In addition, the company notes that some of this content has value to researchers and NGOs working on strategies to combat hate, so it is exploring options to make it available to them in the future. It also clarifies that the context of a video matters, meaning some videos containing hateful content may be allowed to remain on the platform “because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events.”
YouTube also gave an update on the policies it piloted in the US in January this year, under which it limits recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness or claiming the earth is flat. The company says that thanks to these policies, the number of views this type of content gets from recommendations has dropped by over 50 percent in the US. “We’re looking to bring this updated system to more countries by the end of 2019,” it added.
For those who are unaware, YouTube relies on a combination of people and technology to flag inappropriate content such as pornography, incitement to violence, harassment or hate speech, and to enforce its guidelines. “Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward. As we do this, we’ll also start raising up more authoritative content in recommendations, building on the changes we made to news last year,” YouTube noted.
When it comes to monetization on the platform, YouTube rewards trusted creators who add value to the company. Its advertiser-friendly guidelines already prohibit ads from running on videos that include hateful content, and for videos containing hate speech, YouTube is strengthening enforcement of its existing YouTube Partner Program policies. Channels that repeatedly flout the company’s hate speech policies will be suspended from the YouTube Partner Program, which means they won’t be able to run ads on their channels or use other monetization features.