Platform Politics and Accountability

With midterm elections just around the corner, campaigning has ramped up to full speed, and so have concerns about Facebook, Twitter, and other social networks facilitating and fostering election misinformation. Although Facebook usually bears the brunt of the criticism, it has recently taken some very public measures to minimize ‘fake news’ and to surface information about content sources so users can make their own judgments about what they read. And fake news doesn’t just come in the form of questionable journalism; it can also arrive through digital ads and marketing campaigns, since platforms offer brands and political campaigns very nuanced targeting opportunities. Meanwhile, other platforms, including YouTube, which touts the strongest recommendation engine in the world, have come under scrutiny for how easily they can be used by election interferers.

While YouTube has taken measures to moderate its content through terms of use and community guidelines, fake news and political ads can be extremely difficult to spot. As we saw in the 2016 election, many people could not tell the difference between real stories and fake ones designed specifically to sow mistrust of the U.S. government and of culturally significant institutions. For an algorithm, the difference can be even harder to catch. Misinformation is then further proliferated by machine learning systems that are informed by user decisions, leaving the program itself effectively unaccountable. The user, the platform, and the content provider all play a role in the spread of misinformation, so the real issue lies in assigning accountability for that misinformation and for how it is propagated on the platform. Signs of election interference have already cropped up, with Russia seen as the primary instigator, and social networks and their users must stay wary of the election information and ads they consume. Here are three areas exacerbating the problem and how platforms and their users can respond.

Section 230
One of the key areas of concern here is curation and liability for websites. As consumption habits move from content sources with built-in human curation (such as limited, carefully selected television programming or print news outlets with journalistic standards) to platforms with no barrier to entry and AI-driven distribution, misinformation becomes a greater danger without a clear solution for removing it.

Further, these platforms rely on the protections offered by Section 230 of the Communications Decency Act, which shields them from being held responsible for content on their sites so long as they are acting as a platform rather than a publisher. This can make it tricky for social platforms such as Facebook and YouTube to manually curate content. Curation can serve as an editorial voice, making them directly responsible for the content and labeling them as “publishers,” thereby forfeiting the protections of Section 230. Losing that shield would expose these companies to significant legal liability for the content on their platforms, clearly a risk without much incentive, and given the massive scale of content involved, the risk is huge.

Need for Greater Transparency
Following the Cambridge Analytica scandal, data…
