The recent revelation by Instagram about how its algorithms work to create the Feed its users see has led me to reflect on the connection between AI and unconscious bias.
Don’t get me wrong. I do see the platform’s decision to offer insight into its internal processes in a series of explainers as a positive step. Why? Because, as Instagram puts it, the platform recognises that it can “…do more to help people understand” what it does, “…how Instagram’s technology works and how it impacts the experiences that people have across the app”. Knowledge is power, and educating people about how technology works - and what that means for them as users, and for their role as good digital citizens - is something I consider vital.
In the explainer post, Instagram states:
"By 2016, people were missing 70% of all their posts in Feed, including almost half of posts from their close connections. So we developed and introduced a Feed that ranked posts based on what you care about most."
What types of things do Instagram’s multiple algorithms look for? For your main Feed and Stories, as Social Media Today notes, the types of posts you engage with and your relationship to each post’s creator are key, along with elements like a post’s popularity and how likely you are to take an action on, or engage with, it.
When it comes to the Explore algorithm, Instagram looks at the people you follow and your level of engagement. For Reels, content and creator popularity are key. As Instagram puts it, the platform will “survey people and ask whether they find a particular reel entertaining or funny, and learn from the feedback to get better at working out what will entertain people.”
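To make signals like these concrete, here is a minimal sketch of how a feed might combine them into a single ranking score. The signal names, weights, and scoring formula are my own illustrative assumptions - Instagram has not published its model, which reportedly relies on machine-learned predictions rather than fixed weights like these.

```python
# Hypothetical sketch of combining ranking signals into one score.
# Signal names and weights are illustrative assumptions, not Instagram's model.
from dataclasses import dataclass

@dataclass
class PostSignals:
    predicted_engagement: float  # estimated chance you'll like, comment, or share (0-1)
    relationship_score: float    # how often you interact with this creator (0-1)
    popularity: float            # normalised engagement the post attracts overall (0-1)
    recency: float               # newer posts score closer to 1

# Illustrative weights - a real system would learn something like these from data.
WEIGHTS = {
    "predicted_engagement": 0.4,
    "relationship_score": 0.3,
    "popularity": 0.2,
    "recency": 0.1,
}

def rank_score(s: PostSignals) -> float:
    """Combine per-post signals into one score used to order the feed."""
    return (WEIGHTS["predicted_engagement"] * s.predicted_engagement
            + WEIGHTS["relationship_score"] * s.relationship_score
            + WEIGHTS["popularity"] * s.popularity
            + WEIGHTS["recency"] * s.recency)

# With these weights, a post from a close friend can outrank a more
# popular post from a stranger - mirroring how relationship signals work.
posts = {
    "close_friend_photo": PostSignals(0.8, 0.9, 0.3, 0.7),
    "viral_reel_from_stranger": PostSignals(0.6, 0.1, 0.95, 0.9),
}
for name, signals in sorted(posts.items(), key=lambda kv: rank_score(kv[1]), reverse=True):
    print(f"{name}: {rank_score(signals):.2f}")
```

Note that every number in that sketch encodes a judgment about what matters - which is exactly where questions about whose judgment, and whose bias, come in.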
This is what got me thinking. Which people are surveyed? How often? Who creates the surveys? And how do we know that, when people report what they find entertaining, they aren’t affected by unconscious bias?
Algorithms are everywhere, harnessing data to influence everything from what we binge-watch on Netflix to how much we can borrow from a bank.
However, as the Brookings Institution reports:
…Research is starting to reveal some troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Given this, some algorithms run the risk of replicating and even amplifying human biases…
One example? In the U.S., a judge may determine bail and sentencing limits using automated risk assessments. If those assessments reach the wrong conclusions, the cumulative effect may be that certain groups - like people of colour - are discriminated against with longer prison sentences and higher bail amounts.
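To illustrate how that cumulative effect can arise, here is a toy sketch - with entirely made-up numbers, not drawn from any real risk-assessment tool - showing how a small, systematic inflation in one group’s risk scores becomes a large gap in detention rates once a single fixed threshold is applied:

```python
# Toy illustration (fabricated numbers): a small, systematic bias in risk
# scores compounds into a large outcome gap when one fixed threshold
# decides who is detained. Not based on any real risk-assessment tool.
import random

random.seed(42)
THRESHOLD = 0.5  # detain anyone whose risk score exceeds this

def risk_score(bias: float) -> float:
    """True underlying risk is identical for both groups; one group's
    scores are systematically inflated by `bias`."""
    return min(1.0, random.random() * 0.6 + bias)

def detention_rate(bias: float, n: int = 100_000) -> float:
    return sum(risk_score(bias) > THRESHOLD for _ in range(n)) / n

print(f"unbiased group: {detention_rate(0.00):.1%} detained")
print(f"biased group:   {detention_rate(0.08):.1%} detained")  # scores inflated by 0.08
```

In this toy setup both groups carry identical underlying risk, yet inflating one group’s scores by just 0.08 nearly doubles its detention rate - which is what “cumulative, disparate impact” looks like in miniature.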
As Brookings continues:
Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions which can have a collective, disparate impact on certain groups of people even without the programmer’s intention to discriminate.
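As a toy illustration of that first point - unrepresentative or historically skewed training data - here is a sketch of a “model” that does nothing more than memorise past approval rates. The records below are fabricated for illustration; the point is that replaying biased history requires no discriminatory intent from the programmer:

```python
# Minimal sketch of the Brookings point: a model that simply learns the
# base rates in historical data reproduces the inequalities baked into it.
# The "historical" records below are fabricated for illustration only.
from collections import Counter

# Past decisions reflecting a historical inequality, not true risk:
# group B was denied far more often than otherwise similar applicants in A.
historical_decisions = (
    [("A", "approved")] * 80 + [("A", "denied")] * 20 +
    [("B", "approved")] * 40 + [("B", "denied")] * 60
)

def train(records):
    """'Training' here is just memorising per-group approval rates."""
    counts = Counter(records)
    model = {}
    for group in {g for g, _ in records}:
        total = counts[(group, "approved")] + counts[(group, "denied")]
        model[group] = counts[(group, "approved")] / total
    return model

model = train(historical_decisions)
for group, rate in sorted(model.items()):
    # The model "predicts" by replaying history - no one had to intend
    # to discriminate for the disparity to carry forward.
    print(f"group {group}: predicted approval probability {rate:.0%}")
```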
I can’t help but wonder: who is programming Instagram’s algorithms? How do we know that their world views - which impact what we see on the app and our user journey - aren’t biased in some way, conscious or unconscious?
Ultimately, I hope Instagram continues to explain how its technology works so that we can better understand the processes - machine or human - at play, and I hope other apps follow suit.