How Platforms Can Make Algorithms More Transparent
Algorithms play a significant part in our daily lives. From Facebook to Twitter to YouTube, every social media site we frequent uses an algorithm to keep us occupied and engaged. The big problem is that how those algorithms work isn’t at all transparent. Social media companies have a vested interest in protecting their algorithms; it’s how they make money, after all. If Facebook’s algorithm for assembling its news feed became public information, the company would lose its competitive edge. This tension presents a significant challenge to lawmakers and researchers. Is there a way to make algorithms more transparent without giving away the vital facets that make them unique?
Judiciary Looks Into Platforms’ Impact
In late April, the Senate Judiciary Committee convened a hearing with representatives of the major social media platforms to discuss the harmful viral content plaguing their sites. As anyone who has watched the (primarily sensationalist) documentaries about social media should know, viral content can shape people’s opinions and fuel polarization. That content typically reaches users through autoplayed videos and recommended posts in their feeds, and which posts appear there is decided by the site’s algorithm. Engagement keeps users on the site longer, which increases marketing opportunities for advertisers, and those advertisers supply the bulk of the company’s income.
What Happens When Users Turn Off the Algorithm?
A study published in the journal Proceedings of the ACM on Human-Computer Interaction notes that most social media sites give users the option to turn the algorithm off. However, what happens after a user does so may surprise some. The study examined Twitter’s feed and found that with the algorithm engaged, roughly 55% of a user’s timeline consisted of content injected from accounts the user did not follow; once the algorithm was turned off, the amount of content from non-followed accounts declined sharply. The algorithm also reduced exposure to news regarding COVID-19 and produced a more ideologically homogeneous feed than the user would see with the algorithm turned off. Perhaps the most telling part of the study, though, is its methodology: the researchers had to use “sock puppet” accounts to gain any insight into the platform’s algorithm. If the platforms were genuinely transparent, why would anyone have to jump through those hoops?
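To make that methodology concrete, here is a rough sketch of the kind of comparison such an audit boils down to once the two feeds have been captured. The record fields, account names, and numbers are illustrative assumptions, not the study’s actual data:

```python
# A minimal sketch of a sock-puppet comparison: the same puppet account's
# algorithmic and chronological timelines are captured, then we measure how
# much of each feed comes from accounts the puppet does not follow.
# All records below are made up for illustration.

from typing import Iterable

def share_from_non_followed(tweets: Iterable[dict], followed: set[str]) -> float:
    """Fraction of timeline posts whose author the puppet account does not follow."""
    tweets = list(tweets)
    if not tweets:
        return 0.0
    injected = sum(1 for t in tweets if t["author_id"] not in followed)
    return injected / len(tweets)

followed_accounts = {"news_outlet_a", "friend_b", "league_c"}

algorithmic_feed = [
    {"author_id": "news_outlet_a"},
    {"author_id": "promoted_brand_x"},   # injected / suggested content
    {"author_id": "viral_account_y"},    # injected / suggested content
    {"author_id": "friend_b"},
]
chronological_feed = [
    {"author_id": "news_outlet_a"},
    {"author_id": "friend_b"},
    {"author_id": "league_c"},
]

print("algorithmic:", share_from_non_followed(algorithmic_feed, followed_accounts))
print("chronological:", share_from_non_followed(chronological_feed, followed_accounts))
```

The point of the sketch is that none of this measurement requires access to the algorithm itself, only painstaking observation of what it serves up.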
Some Information Is Not Enough
The balancing act of “what does the public (and the government) need to know?” is one that social media sites need to pay attention to. How are their algorithms shaping public discourse? In some circles, people complain about censorship on Facebook and Twitter, and the lack of transparency about the data and how the algorithms function feeds the crackpot theories that get made into documentaries. This opaque approach may hurt the platforms’ business in the long run. How could they be more open and honest about the data their algorithms use?
Some Suggestions for Better Transparency
While platforms still need to keep parts of their algorithms secret, there are steps they can take to make them more transparent. Social media sites could share information about the posts they ban and why those posts were banned. In some cases on Twitter and Facebook, bans have been levied against users without enough information to judge why the ban was applied or whether it could have been avoided. Additionally, share counts from banned accounts could be published to show how far the viral content spread, while still protecting the user’s private information.
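As a sketch of what that kind of disclosure might look like, the record below shows one possible shape for an anonymized moderation report. The field names and categories are assumptions for illustration; no platform publishes exactly this schema today:

```python
# Hypothetical anonymized moderation record: explains why a post was removed
# and how far it spread, without identifying the account behind it.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModerationRecord:
    post_id_hash: str      # hashed identifier, not traceable to the account
    rule_violated: str     # e.g. "medical misinformation"
    action: str            # "removed", "account_suspended", ...
    action_date: date
    times_shared: int      # how far the content spread before removal
    estimated_reach: int   # impressions, rounded to protect small accounts

record = ModerationRecord(
    post_id_hash="a3f9c1",
    rule_violated="medical misinformation",
    action="removed",
    action_date=date(2021, 4, 30),
    times_shared=12_400,
    estimated_reach=2_000_000,
)
print(asdict(record))
```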
Data regarding user engagement could also be helpful. Sure, many companies already comply with the GDPR stipulations on collecting user data, but that’s not enough. How long do users spend on the platform? How many users turn the algorithm off? How does the content they see differ from that of users who leave the algorithm on to do its thing? YouTube’s recommendation algorithm is a good example: it drives much of the engagement on the site and is likely responsible for many smaller channels picking up views and likes through its suggestion criteria. Would those smaller creators ever get the same exposure without the algorithm?
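The kind of engagement statistics a platform could publish are easy to compute once the data is there. The sketch below assumes hypothetical per-user session logs, with all numbers made up:

```python
# Illustrative aggregation over hypothetical session logs: what share of users
# turn the algorithm off, and how session length differs between the two groups.

from statistics import mean

sessions = [
    {"user": "u1", "algorithm_on": True,  "minutes": 42},
    {"user": "u2", "algorithm_on": True,  "minutes": 31},
    {"user": "u3", "algorithm_on": False, "minutes": 12},
    {"user": "u4", "algorithm_on": True,  "minutes": 55},
    {"user": "u5", "algorithm_on": False, "minutes": 18},
]

opted_out = [s for s in sessions if not s["algorithm_on"]]
opted_in = [s for s in sessions if s["algorithm_on"]]

print(f"share with algorithm off: {len(opted_out) / len(sessions):.0%}")
print(f"avg minutes (algorithm on):  {mean(s['minutes'] for s in opted_in):.1f}")
print(f"avg minutes (algorithm off): {mean(s['minutes'] for s in opted_out):.1f}")
```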
Finally, on-platform optimization is also a crucial consideration, especially for businesses that rely on these platforms to reach customers. One of the more telling facts about on-platform optimization is the guiding principle behind these sites’ feeds. The research study mentioned above found that Twitter suppressed link-based posts that would drive users off the platform. As mentioned before, user engagement (especially time spent on the site) is what advertisers pay for, so posts that send users elsewhere run counter to that incentive. The algorithm therefore prioritizes posts without links. Unfortunately, it took outside research to uncover this. Twitter could have been more transparent about the suppression of link-based posts, but that would likely have driven away many of the users who rely on the platform for exposure.
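Purely to illustrate the effect described above, here is one way a feed ranker could downweight link posts. This is not Twitter’s actual scoring function; the weights and features are assumptions chosen to show the mechanism:

```python
# Illustrative link penalty: a post that carries an external link is scored
# lower than its predicted engagement alone would suggest, so it sinks in the feed.

def score(post: dict) -> float:
    base = post["predicted_engagement"]          # e.g. model-estimated likes/replies
    link_penalty = 0.5 if post["has_external_link"] else 1.0
    return base * link_penalty

posts = [
    {"id": "news_article", "predicted_engagement": 0.80, "has_external_link": True},
    {"id": "hot_take",     "predicted_engagement": 0.70, "has_external_link": False},
    {"id": "meme",         "predicted_engagement": 0.60, "has_external_link": False},
]

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])   # the link post drops to last despite the highest engagement
```

A single multiplier is the simplest possible version of such a rule; the broader point is that without disclosure, users posting links have no way of knowing a penalty like this exists.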
Can Social Media Algorithms Be Harmful?
At this point, it’s safe to assume that some harm hides in viral social media posts, yet determining which posts cause it requires jumping through a lot of hoops. Legislation requiring social media algorithm transparency needs to be developed, but it can’t be an effort undertaken by the government alone. Lawmakers should also seek input from grassroots organizations and from the researchers who spend their time sifting through the data. Most importantly, the impact on the end user should be measured. Keeping a social media business profitable without stripping away its competitive advantage will remain a precarious balancing act.