The allegations spread like wildfire, with Hansson pointing out that artificial intelligence – now widely used to make lending decisions – was to blame. “It doesn’t matter what the intent of individual Apple representatives is, what matters is the ALGORITHM they have put their absolute faith in. And what it does is discriminate. This is fucked up.”

While Apple and its underwriter Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the episode reignited a wider debate about the use of artificial intelligence in public and private industries.

Policymakers in the European Union are now planning to introduce the first comprehensive global standard for regulating artificial intelligence, as institutions increasingly automate everyday tasks in a bid to boost efficiency and ultimately cut costs.

The legislation, known as the AI Act, will have ramifications beyond the EU’s borders and, like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the law, once passed, cannot be overstated,” said Alexandru Circiumaru, head of European public policy at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, strict rules could be introduced on how AI is used to filter job, university or welfare applications or – in the case of lenders – to assess the creditworthiness of potential borrowers.

EU officials hope that, with additional oversight and restrictions on the types of AI models that can be used, the rules will limit the kind of machine-based discrimination that could affect life-changing decisions such as whether you can afford a home or a student loan.

“AI can be used to analyze your entire financial health, including spending, savings and other debts, to come up with a more holistic picture,” said Sarah Kocianski, an independent financial technology consultant. “If designed properly, such systems can provide wider access to affordable credit.”

But one of the biggest risks is unintentional bias, in which algorithms end up denying loans or accounts to certain groups, including women, immigrants or people of color.

Part of the problem is that most AI models can only learn from the historical data they have been fed, which means they will learn which kinds of customers have previously been lent to and which customers have been flagged as untrustworthy. “There’s a risk that they’re biased about what a ‘good’ borrower looks like,” Kocianski said. “Specifically, gender and ethnicity often play a role in an AI’s decision-making processes based on the data it has been trained on: factors that are in no way related to a person’s ability to repay a loan.”

In addition, some models are designed to be blind to so-called protected characteristics, meaning they are not intended to take into account the influence of gender, race, ethnicity or disability. But these AI models can still discriminate by analyzing other data points, such as zip codes, which may correlate with historically disadvantaged groups that have never previously applied for, secured or repaid loans or mortgages.
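To see how such proxy discrimination can arise, consider a minimal sketch (entirely synthetic data and an off-the-shelf model, not any lender’s actual system): a credit model trained without ever seeing the protected attribute can still produce different approval rates across groups, because zip code stands in for it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic, hypothetical data: 'group' is a protected characteristic.
group = rng.integers(0, 2, size=n)
# Zip code correlates strongly with group (e.g. residential segregation).
zip_segment = np.where(rng.random(n) < 0.9, group, 1 - group)
# Income is independent of group in this toy world.
income = rng.normal(50, 10, size=n)
# Historical labels encode past bias: at the same income, group 1
# applicants were approved less often.
approved = income - 8 * group + rng.normal(0, 5, size=n) > 48

# Train a model that is "blind" to the protected attribute:
# it sees only income and zip code, never 'group'.
X = np.column_stack([income, zip_segment])
model = LogisticRegression().fit(X, approved)

# Approval rates still differ by group, because zip code is a proxy.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {preds[group == g].mean():.2f}")
```

Dropping the zip-code column would not fully fix this either: any feature that correlates with the protected attribute can leak the same signal into the model.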
And in most cases, when an algorithm makes a decision, it is hard for anyone to understand how it reached that conclusion, resulting in what is commonly referred to as “black box” syndrome. This means banks, for example, may struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might have led to a different outcome.

Circiumaru said the AI Act, which could come into force at the end of 2024, would benefit tech companies that manage to develop “reliable AI” models compliant with the new EU rules.

Darko Matovski, CEO and co-founder of the London-based AI startup causaLens, believes his company is among them. The startup, which launched in January 2021, has already licensed its technology to the likes of the asset manager Aviva and the quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the company before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of artificial intelligence that avoids potential bias by taking into account, and controlling for, biased correlations in the data. “Association-based models learn the injustices of the past and just repeat them in the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his will lead to better outcomes for marginalized groups who may have missed out on educational and economic opportunities.

“It’s really hard to understand the extent of the damage that has already been done, because we can’t really inspect these models,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people were unable to get their mortgage because of algorithmic biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input, but to guarantee that, regardless of those specific inputs, the decision did not change (a sketch of how such a check might look appears at the end of this article).

He said it was a matter of ensuring AI models reflect our current societal values and avoid perpetuating any racist, ableist or misogynist decision-making from the past. “Society believes that we should treat everyone equally, regardless of their gender, zip code or race. So the algorithms not only have to try to do it, they have to guarantee it,” he said.

While the new EU rules are likely to be a big step towards curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they believe they have been put at a disadvantage.

“The risks posed by artificial intelligence, especially when applied in certain specific circumstances, are real, significant and already present,” said Circiumaru. “Artificial intelligence regulation should ensure that individuals are adequately protected from harm by authorizing or not authorizing uses of artificial intelligence, and that remedies are available when approved artificial intelligence systems malfunction or lead to harm. We can’t pretend that approved AI systems will always work perfectly and fail to prepare for when they don’t.”
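As an illustration of the kind of guarantee Matovski describes (a hypothetical sketch, not causaLens’s actual system), an auditor could pass the protected attribute to the model explicitly, then check that changing it, with everything else held fixed, never changes the outcome:

```python
def decision(income: float, debts: float, gender: str) -> bool:
    # Stand-in for a trained model's approve/deny call; a real audit
    # would invoke the production model here. This toy scoring rule
    # ignores 'gender' entirely, so it passes the check below.
    return 0.002 * income - 0.004 * debts > 50


def counterfactual_invariant(applicant: dict, attr: str, values) -> bool:
    """True if the decision is identical for every value of the
    protected attribute, with all other inputs held fixed."""
    outcomes = {decision(**{**applicant, attr: v}) for v in values}
    return len(outcomes) == 1


applicant = {"income": 38_000, "debts": 6_000, "gender": "female"}
print(counterfactual_invariant(applicant, "gender",
                               ["female", "male", "nonbinary"]))  # True
```

Note that this sketch only flips the attribute itself; a production-grade check would also have to account for proxy features that shift along with it, which is where causal modeling comes in.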