
Directions to a Point

The technology behind Eli Pariser’s Filter Bubble theory has grown significantly more sophisticated in recent years. I’d like to cover both the technological advances and the ethical debates that have developed since Pariser first published his book.

Recent innovations in machine learning, algorithms, and artificial intelligence have given news media platforms and digital publishers the data and tools they need to build robust recommendation and search algorithms, ensuring that content is created and served as efficiently as possible for today’s media landscape.

However, as the years have progressed – and certainly as the geopolitical landscape of 2016 has come to a head this fall – it has become increasingly apparent that any discussion of the mechanics of how content is shared must go hand in hand with an exploration of the complex system of logic and ethics behind these processes. The very definitions of “efficient” content sharing and audience reach that have steered the course of technological priorities in recent years bear scrutiny.

When Eli Pariser first published his book, The Filter Bubble, in 2011, media outlets had already been developing and employing recommendation algorithms throughout their platforms for several years. By the early 2000s, print sales had begun to wane, and publishers were shifting more of their focus to the platforms through which they could distribute their content to readers digitally: their websites and mobile apps. This shift from print to digital content consumption also precipitated a shift in the way publishing companies earned revenue. Print subscriptions were dropping, and with them so did advertising sales for print media, pushing publishers toward digital advertising.

Distilled to its essence, the digital advertising industry places a high premium on the sheer quantity of users viewing its ads. To earn significant ad revenue, digital publishers must ensure that their content platforms are designed to serve users as much advertising content as possible. One of many ways of accomplishing this is to ensure that users click on and view as many pages of a publisher’s content as possible. Therein lies a major driving force behind recommendation algorithms: digital publishers devised content recommendation algorithms that suggest related material to users based on the types of content they have already interacted with, thereby enticing them to continue consuming content within the publisher’s digital ecosystem. As the algorithmic technology has grown more sophisticated, media outlets and digital publishers are gleaning more and more data about their users’ activity and preferences. Astute media companies found that users grew ever more inclined to continue interacting with their content when the articles their algorithms recommended were similar in topic and tone to those they had already read on the platform.

In its essence, an algorithm is simply a set of step-by-step instructions for how a computer should execute a task: if a user clicks on x, likes y, or shares z, then recommend article xyz for them to read next. However, because computers do not understand vagueness and are largely incapable of inference, the recipes behind algorithms must be rigorous enough to address every minute detail of a set of instructions. This means that the best recommendation algorithms are those working from an incredibly detailed and nuanced set of instructions. To create a truly intuitive recommendation algorithm for content consumers, the logic and instructions powering it must, then, be informed by a robust database of user information.
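To make that “if x, then recommend y” recipe concrete, here is a minimal sketch of a rule-based recommender in Python. Everything in it – the article tags, the signal weights, the scoring – is hypothetical and invented for illustration, but the shape is the point: explicit rules mapping observed actions to a ranked list of suggestions.

```python
from collections import Counter

# Hypothetical catalog: each article is tagged with one or more topics.
ARTICLES = {
    "a1": {"title": "Senate passes budget",     "tags": {"politics"}},
    "a2": {"title": "New phone reviewed",       "tags": {"tech"}},
    "a3": {"title": "Election explainer",       "tags": {"politics", "explainer"}},
    "a4": {"title": "Chip shortage deepens",    "tags": {"tech", "business"}},
    "a5": {"title": "Governor's race tightens", "tags": {"politics"}},
}

# Assumed weights: a share signals stronger interest than a click.
SIGNAL_WEIGHTS = {"click": 1.0, "like": 2.0, "share": 3.0}

def recommend(history, k=2):
    """history: list of (article_id, signal) pairs the user has generated."""
    interest = Counter()
    seen = set()
    for article_id, signal in history:
        seen.add(article_id)
        for tag in ARTICLES[article_id]["tags"]:
            interest[tag] += SIGNAL_WEIGHTS[signal]
    # Score unseen articles by the user's accumulated interest in their tags.
    scores = {
        aid: sum(interest[tag] for tag in art["tags"])
        for aid, art in ARTICLES.items()
        if aid not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A user who clicked and shared politics stories is steered toward more politics.
print(recommend([("a1", "click"), ("a3", "share")]))  # -> ['a5', 'a2']
```

Note what the sketch makes plain: nothing here knows what the articles are actually about, only that the user keeps touching the same tags. That is the filter bubble mechanism in miniature.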

This is where data mining and machine learning come into play. Once user data reaches a critical mass, machine learning techniques can be employed to enhance predictive recommendation algorithms by leveraging big data. It logically follows that a comprehensive look at how users have previously interacted with content on a platform can show digital content creators, as well as platform and product designers, how best to serve content to users moving forward.
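One common family of techniques behind this – and again, this is a sketch of the general idea, not any particular outlet’s system – is collaborative filtering: rather than following hand-written rules, the system learns which articles go together from the aggregate reading history of many users. A toy version with a made-up interaction matrix:

```python
import numpy as np

# Rows are users, columns are articles; 1 means the user read that article.
# This tiny matrix is fabricated for illustration.
interactions = np.array([
    [1, 1, 0, 0],  # user 0 read articles 0 and 1
    [1, 1, 1, 0],  # user 1 read articles 0, 1, and 2
    [0, 0, 1, 1],  # user 2 read articles 2 and 3
])

def item_similarity(m):
    """Cosine similarity between article columns of the interaction matrix."""
    norms = np.linalg.norm(m, axis=0)
    sim = (m.T @ m) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0)  # an article should not recommend itself
    return sim

def recommend_for(user, m, k=1):
    sim = item_similarity(m)
    scores = sim @ m[user]           # weight similarities by reading history
    scores[m[user] > 0] = -np.inf    # exclude articles already read
    return np.argsort(scores)[::-1][:k]

# User 0 looks like user 1, so user 1's extra article bubbles up.
print(recommend_for(0, interactions))  # -> [2]
```

The more interaction data accumulates, the sharper those learned similarities become – which is exactly the “critical mass” dynamic described above.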

The conversation around how “best” to serve content is where questions about the ethics – and even legalities – of recommendation algorithms come into play. In his May 2011 TED Talk about filter bubbles, Eli Pariser discusses the distinction between the choices our impulsive, present selves make when consuming digital content and the more thoughtful choices we make for our aspirational selves. All users are prone to falling for sensationalist, clickbait-style content. If news media recommendation algorithms only make recommendations based on a user’s impulsive content consumption choices and diminish or entirely disregard users’ aspirational content preferences, then not only do users lose significant agency over the news content they consume, but they also miss out on important but challenging topics that are necessary for responsible news consumption.
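One way I find it useful to picture Pariser’s tension is as a single weight in the ranking function. In this hypothetical sketch, a blend parameter decides how much the feed honors stated (aspirational) preferences versus observed (impulsive) clicks; the scores are invented, and nothing here describes any real platform.

```python
# Hypothetical scores on a 0-1 scale. "impulsive" stands in for predicted
# click-through; "aspirational" stands in for the user's stated interests.
articles = {
    "celebrity scandal":  {"impulsive": 0.9, "aspirational": 0.1},
    "city budget report": {"impulsive": 0.2, "aspirational": 0.8},
    "election explainer": {"impulsive": 0.4, "aspirational": 0.9},
}

def rank(blend):
    """blend=0 chases clicks only; blend=1 honors stated preferences only."""
    def score(a):
        return (1 - blend) * a["impulsive"] + blend * a["aspirational"]
    return sorted(articles, key=lambda name: score(articles[name]), reverse=True)

print(rank(0.0))  # pure engagement optimization: the clickbait wins
print(rank(0.7))  # weighting aspirational preferences reorders the feed
```

Pariser’s complaint, in these terms, is that engagement-driven platforms effectively pin the blend near zero and give users no dial to turn.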

Amid all of the discussion around social media’s fake news problem and the filter bubbles that clouded news media coverage during the elections, I felt particularly compelled to explore solutions to these issues. No single source or idea is enough to change the way our society consumes digital news media, but I wanted to highlight a few ideas and proposals that are worth exploring:

To start, I went to the source: Eli Pariser himself. In the immediate aftermath of the election, Pariser reacted with what seems, at first glance, an almost comically simple initiative. He created a Google Document titled “Design Solutions for Fake News” in an effort to start a public forum to crowd-source ideas on how to combat fake news and filter bubbles on social media platforms and in the wider news media community. Contributors to what Forbes has described as “Eli Pariser’s Brain Trust” have started discussion topics that range from ideas for propagating media literacy, to options for legal recourse against fake news sites, to innovative design ideas for improving algorithmic filtering of fake news and technological opportunities for fact-checking. The most striking quality of this project, to me, is its collaborative nature. The simplicity of Pariser’s effort shows how easy it can be to engage human networks in fruitful discussion for change.

I set out specifically to research opportunities for tweaking algorithmic models for news recommendations and news media aggregators – of which there are many. However, I was also repeatedly struck by the conversations I encountered around socially conscious product design. Organizations such as Time Well Spent (http://www.timewellspent.io/) work to challenge digital product designers and content creators to empower users with more autonomy in the settings of their media devices. They advocate rethinking the incentives that drive media revenue and competition to ensure that higher quality content is distributed as broadly as possible.

In particular, I was intrigued by unique proposals for redesigning user interfaces – both frontend and backend – to provide more engaging and substantive content consumption experiences for online audiences. The team behind the popular music annotation startup Genius has begun to openly discuss and explore ways to apply the same annotation technology it employs for music lyrics to news media sites. The intent is to let readers provide crowd-sourced annotations and fact-checking for news articles and important speech transcripts.

Similarly, MIT’s Media Lab has sponsored the FOLD Project (https://fold.cm/). FOLD was the brainchild of media and communications students at MIT, with the aim of creating multimedia news content by serving the core of an article’s factual information first and then allowing the author to provide context by layering in videos, maps, infographics, and social posts.

https://fold.cm/read/sofiabarrett/hip-hop-a-hipstory-LidwhXs9

Specifically with regard to recommendation algorithms: in a piece for the Nieman Lab, Sara M. Watson of the Tow Center for Digital Journalism suggests that “As a response to the increasing awareness of the filter bubble problem, we may start to see third-party services allowing consumers to subscribe to a third-party filter proxy, reflecting their preference and intended optimization strategy, rather than a generic optimization algorithm.” Though this suggestion raises some red flags for me in terms of access to such third-party services, I do feel that algorithmic transparency, manual human intervention, and greater user autonomy and personalization are underrepresented topics in discussions around combating the Filter Bubble.
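Watson’s filter-proxy idea can be pictured as a pluggable ranking strategy: the platform exposes the candidate articles, and the reader (or a service they subscribe to) supplies the ordering logic. A speculative sketch – every name in it is invented, and no such API exists today as far as I know:

```python
from typing import Callable

Article = dict  # a real feed item would carry far more metadata
FilterProxy = Callable[[list[Article]], list[Article]]

def diversity_proxy(articles: list[Article]) -> list[Article]:
    """A reader-chosen strategy: interleave topics instead of ranking by clicks."""
    by_topic: dict[str, list[Article]] = {}
    for a in articles:
        by_topic.setdefault(a["topic"], []).append(a)
    feed, queues = [], list(by_topic.values())
    while any(queues):
        for q in queues:
            if q:
                feed.append(q.pop(0))
    return feed

def build_feed(candidates: list[Article], proxy: FilterProxy) -> list[Article]:
    """The platform ranks with whatever proxy the reader has subscribed to."""
    return proxy(candidates)

candidates = [
    {"title": "Poll roundup",     "topic": "politics"},
    {"title": "Debate recap",     "topic": "politics"},
    {"title": "New battery tech", "topic": "science"},
]
for article in build_feed(candidates, diversity_proxy):
    print(article["title"])  # politics and science stories interleaved
```

The appeal of this design is the separation of concerns: the platform’s job is candidate selection, while the ranking philosophy becomes the reader’s to choose and swap.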

Ultimately, it is abundantly clear that innovations in the way that users find and consume news content are on the horizon. Despite the very serious concerns around the ways in which algorithms create detrimental filter bubbles, I am optimistic that knowledge-sharing and collaborative design can work toward democratizing news media consumption.



