The advent of advanced technology introduced a world that could not be governed by laws written for the physical space. A space of interactions within an invisible but ever-growing world demanded a whole new body of rules and regulations, as certain technologies, algorithms among them, grew into powerful tools. What began as a concept on paper now shapes the digital space, and algorithms are increasingly capable of shaping human lives.
Algorithms are defined by Pedro Domingos as “a sequence of instructions telling a computer what to do”. They allow technology to deduce patterns, personalise feeds based on users’ actions and behaviours, and produce outputs from a predictive model. Hence, algorithms are a tool used across many sectors and by nearly all platforms. The success of major corporations such as Google and Facebook, and of social media platforms generally, relies on highly sophisticated algorithms that recommend content based on users’ tastes and preferences.
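To make Domingos’ definition concrete, here is a minimal, hypothetical sketch of a content-recommendation algorithm: a sequence of instructions that scores items against a user’s past clicks. All names and data below are invented for illustration; real platform recommenders are vastly more sophisticated.

```python
# Hypothetical sketch: rank catalogue items by how often their topic
# appears in the user's click history (a crude predictive model).
from collections import Counter

def recommend(click_history, catalogue, k=2):
    """Return the top-k catalogue titles whose topics the user has
    clicked most often. Items are (title, topic) pairs."""
    topic_counts = Counter(topic for _, topic in click_history)
    ranked = sorted(catalogue,
                    key=lambda item: topic_counts[item[1]],
                    reverse=True)
    return [title for title, _ in ranked[:k]]

history = [("article A", "sports"), ("article B", "sports"),
           ("article C", "politics")]
catalogue = [("new piece 1", "sports"), ("new piece 2", "cooking"),
             ("new piece 3", "politics")]
print(recommend(history, catalogue))  # ['new piece 1', 'new piece 3']
```

Even this toy version shows the double edge discussed below: it saves the user from sifting through irrelevant content, but it also never surfaces the “cooking” piece, quietly narrowing what the user sees.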
Whilst an algorithm-based user experience brings many benefits, such as the efficiency of saving users hours of sifting through irrelevant content, it also brings many harms: indirect discrimination, unfair rankings and designs, and data privacy concerns that can manipulate users and restrict their autonomy. The Competition and Markets Authority notes that “as algorithmic systems become more sophisticated, they are often less transparent, and it is more challenging to identify when they cause harm”, as it becomes increasingly difficult to pinpoint and articulate problems within the system. However, many have come to recognise the pervasive effects of algorithms on their users and have taken measures to raise awareness of the problem. This article focuses mainly on discrimination, and how to mitigate it.
Bias and discrimination within the algorithm
Though algorithms operate largely without human intervention, bias and discrimination are still inherent in them. For instance, algorithms used to recommend sentence lengths may factor in a defendant’s socioeconomic background, race, or gender, despite there being no concrete evidence that any of those factors increases the likelihood of committing a crime. In 2017, a team led by Aylin Caliskan published a study finding that machine-learning models trained on text crawled from the internet reproduced human biases against Black Americans and women. Though the mechanics by which an algorithm produces such bias can be hard to explain, its consequences speak for themselves. There are three ways such consequences could be mitigated: reviewing and updating non-discrimination laws, introducing bias impact statements, and promoting diversity and equality in the engineering of algorithms.
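Even when the internal mechanics of a model are opaque, its discriminatory consequences can be measured from the outside. A common yardstick, sketched below on invented data, is the “four-fifths” disparate-impact ratio used in US employment law: if one group’s rate of favourable outcomes falls below 80% of another’s, the system is conventionally flagged for review. The decisions here are fabricated for illustration and are not the output of any real sentencing or hiring system.

```python
# Hypothetical sketch of a disparate-impact audit on binary decisions,
# where 1 = favourable outcome and 0 = unfavourable.

def selection_rate(decisions):
    """Fraction of favourable outcomes in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is a conventional red flag for bias."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # selection rate 0.4
ratio = disparate_impact(group_a, group_b)
print(f"{ratio:.2f}")  # 0.50, well below the 0.8 threshold
```

Outcome audits of this kind underpin two of the remedies discussed next: regulators can write such thresholds into non-discrimination law, and operators can report them in bias impact statements.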
How can we combat the issue?
The first solution is, according to Nicol Turner Lee, Paul Resnick, and Genie Barton, “updating non-discrimination law to apply to the digital space”. This would involve scrutiny of current non-discrimination law and recognition of the role it could play in regulating algorithmic discrimination. For instance, laws could limit bias in the design of algorithms by requiring more transparency, or by requiring that users be offered an algorithm-free version of the platform. The United States has seen recent attempts to address the issue, as evidenced by the introduction of a bipartisan bill, the Filter Bubble Transparency Act, which would essentially force tech companies to offer an alternative version of their platforms that does not run on an opaque algorithm. In a recent CNN Opinion piece, Republican Sen. John Thune described the legislation as “a bill that would essentially create a light switch for big tech’s secret algorithms — artificial intelligence (AI) that’s designed to shape and manipulate users’ experiences — and give consumers the choice to flip it on or off.” Users would thus have the option of an algorithm-free platform, giving them a say in whether or not their experience is shaped by one.
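The “light switch” idea can be sketched in code: the same feed served either through an opaque engagement-ranking algorithm or in plain chronological order, at the user’s choice. This is a hypothetical illustration of the bill’s concept, not its actual legal requirements; all post names and scores are invented.

```python
# Hypothetical sketch of a user-controlled toggle between a
# personalised, engagement-ranked feed and an algorithm-free
# chronological feed.

def build_feed(posts, personalised):
    """Order posts by engagement score when personalised,
    otherwise newest-first with no profiling input."""
    if personalised:
        return sorted(posts, key=lambda p: p["engagement_score"],
                      reverse=True)
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": "p1", "timestamp": 300, "engagement_score": 0.2},
    {"id": "p2", "timestamp": 100, "engagement_score": 0.9},
    {"id": "p3", "timestamp": 200, "engagement_score": 0.5},
]
print([p["id"] for p in build_feed(posts, personalised=True)])   # ['p2', 'p3', 'p1']
print([p["id"] for p in build_feed(posts, personalised=False)])  # ['p1', 'p3', 'p2']
```

The two orderings differ because the ranked feed promotes whatever the model predicts will engage the user, whereas the chronological feed depends only on when posts were made.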
The second solution is introducing bias impact statements, run alongside assessment programmes that regularly scrutinise algorithms for their “environmental, privacy, data, or human rights” effects. Such statements could be completed by the operators of the algorithm, who in theory know it best, and should consider aspects including the algorithm’s purpose, scope, and process. Alongside this assessment scheme, an “advisory council of civil society organisations” working with companies could help steer the growth of algorithms towards a healthier route. Designers and operators could then re-evaluate their designs and identify shortcomings, especially biases rooted within them.
Finally, the issue of discrimination is best addressed at the root of the problem: the design itself. It has been suggested that greater representation in the AI field could significantly reduce bias in algorithms. With more equal representation in the design process, the minds behind the algorithm are diversified, and the product ultimately reflects that diversity. For instance, AI is currently a highly male-dominated field, and its algorithms may consequently be biased against women; including women in the design process would bring a female voice to the design of algorithms and could reduce the inequalities they create.
The growth of algorithms is inevitable, yet it risks infringing a multitude of rights. The development of this technology should therefore anticipate rapid and dramatic developments in legal and social regulation, which may mitigate some of its detriments.