UK Regulators Warn Banks to Safeguard Against Lending Discrimination if Using AI, ML


Banks using artificial intelligence (AI) to approve loan applications have been warned by U.K. financial regulators that they can only deploy that technology if they can show it won’t be discriminatory towards minorities, the Financial Times reported Sunday (Feb. 13).

Minorities already face difficulty borrowing, and watchdogs have been pressing the biggest British banks harder on how they will keep AI from making those problems worse.

The digital shift has high street banks exploring different ways to automate lending, including AI and more advanced algorithms that decide whom to lend to based on postcodes, employment profiles and other data.

Banks have been using machine learning (ML) techniques to make lending decisions, which they believe could cut down on racial discrimination. AI, in their view, would not make “subjective and unfair” judgments, per the report.

That said, regulators and campaign groups disagree, saying the use of AI in credit models could actually do more harm.

“If somebody is in a group which is already discriminated against, they will tend to often live in a postcode where there are other (similar) people … but living in that postcode doesn’t actually make you any more or less likely to default on your loan,” said Sara Williams of Debt Camel, a personal finance blog. “The more you spread the big data around, the more you’re going after data which is not directly relevant to the person. There’s a real risk of perpetuating stereotypes here.”
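To make Williams’ point concrete, here is a hypothetical illustration: a toy simulation, written for this article rather than drawn from any bank’s actual model, in which the group labels, postcode names, incomes and approval rule are all invented. It shows how a postcode feature can track which group an applicant belongs to without saying anything about whether that individual will repay.

```python
# Hypothetical illustration only: invented groups, postcodes and numbers.
# The point is that an area-level feature can act as a proxy for group
# membership even when repayment behaviour is identical by construction.
import random
from collections import Counter

random.seed(0)

# Assumption: the minority group is concentrated in postcode "E1" for
# historical reasons, and E1 has a lower average income on record.
POSTCODE_AVG_INCOME = {"E1": 24_000, "W2": 41_000}

applicants = []
for _ in range(10_000):
    group = random.choices(["majority", "minority"], weights=[4, 1])[0]
    postcode = random.choices(
        ["E1", "W2"], weights=[7, 3] if group == "minority" else [2, 8]
    )[0]
    repays = random.random() > 0.05  # repayment odds identical for everyone
    applicants.append((group, postcode, repays))

# A naive "big data" rule: approve anyone whose postcode's average income
# clears a threshold -- a fact about the area, not about the applicant.
approved, total = Counter(), Counter()
for group, postcode, _ in applicants:
    total[group] += 1
    if POSTCODE_AVG_INCOME[postcode] >= 30_000:
        approved[group] += 1

for group in total:
    print(f"{group}: {approved[group] / total[group]:.0%} approved")
# Typical output: roughly 80% of majority applicants approved versus roughly
# 30% of minority applicants, even though everyone was equally likely to repay.
```

In this sketch the disparity comes entirely from where people live, which is exactly the risk Williams describes: data that is not directly relevant to the person ends up perpetuating an existing pattern.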

PYMNTS wrote recently that new digital technologies, for all their improvements, have also opened up a host of privacy issues, particularly for Americans.

See also: US Must Learn From Europe in Privacy, AI Regulation, Says Policy Expert

Per the report, only three U.S. states — California, Virginia and Colorado — have adopted comprehensive consumer data privacy laws.

Marc Rotenberg, president and founder of the Center for AI and Digital Policy, has also said AI’s issues could be costing businesses.

“We have a lot of work to do,” he said. “Unlike almost every modern nation in the world, the U.S. does not have a comprehensive federal privacy law. We don’t even have a privacy agency.”