MWC21: Companies Using Biased AI Stand to Lose Profits

This year's Mobile World Congress took place in Barcelona in a hybrid format. Russia was represented by Beeline, Kaspersky, and developers of artificial intelligence services such as oneFactor. Little new was discussed: the usual 5G, cloud, IoT, and OpenRAN. Still, there were some interesting topics, in particular new trends in the development of artificial intelligence.

At the "Ethics of Artificial Intelligence" panel session, for example, the CEO of oneFactor, together with top managers from IBM and Telefonica (one of the world's largest telecom operators), discussed discrimination against people by artificial intelligence algorithms. Different countries pursue different AI strategies.

What they have in common is the ambition, if not to lead in artificial intelligence technologies, then at least not to fall behind. At the same time, the ethics and regulation of AI remain the most acute and controversial topics.

One of the issues is algorithmic bias, when AI-based decision-making systems begin to discriminate against certain groups of people based on characteristics those people cannot influence: gender, age, skin color, religion, place of residence, and so on. Some countries are introducing special legislation designed to combat bias, and large global corporations are deploying expensive processes and software solutions to fight the same phenomenon.
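What such software checks typically do is measure how a model's decisions differ across protected groups. Below is a minimal, hypothetical sketch in Python (the data, column names, and the 0.2 threshold are invented for illustration and are not taken from any system mentioned at the panel) that computes the demographic parity difference of loan-approval decisions across regions.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  decision_col: str) -> float:
    """Largest gap in approval rate between any two groups (0.0 = no gap)."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical scoring results: 1 = loan approved, 0 = declined.
scores = pd.DataFrame({
    "region":   ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1,   1,   1 ],
})

gap = demographic_parity_difference(scores, "region", "approved")
print(f"Demographic parity difference by region: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a regulatory figure
    print("Warning: approval rates differ strongly across regions.")
```

A value of 0.0 would mean identical approval rates in every region; larger gaps flag groups the model treats differently.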

Roman Postnikov, CEO of oneFactor and head of the Technology Committee of Russia's Big Data Association, presented a different take on the problem, based on Russian experience. Its essence is that artificial intelligence systems and AI developers must compete with one another.

Companies that use biased algorithms that discriminate against certain groups of the population begin to lose profit and market share. "If we eliminate artificial intelligence bias from our systems, no regulatory action is required. A company that succeeds in making AI unbiased will gain a competitive advantage," says Postnikov.

oneFactor has been building AI applications for over ten years and has repeatedly seen companies whose algorithms discriminated lose out to competitors. The most striking example is discrimination based on geography. Many banks, insurance companies, and firms in other industries in Russia use artificial intelligence that discriminates against residents of certain territories.

It is no secret that Russian banks rarely lend to residents of the North Caucasus and the Chita Region, and that insurers set prohibitively high car insurance rates for residents of, for example, the Krasnodar Territory. This happens because residents of these territories fail to repay loans and get into accidents more often than average.

Artificial intelligence and machine learning pipelines are "lazy": they find a feature such as region with high predictive power and apply a blanket decision, discriminating against all residents of that region by refusing them loans or setting extremely high car insurance rates. If, however, the territory-of-residence variable is excluded from the model's inputs, the algorithm is forced to look for other features that separate the population.

Trustworthy people living in these territories then have a chance to get the service at the best price, and companies gain additional profit by working with clients in those territories.
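As a rough illustration of that mechanism, here is a hedged sketch (assuming numpy, pandas, and scikit-learn are installed; the applicant data, feature names, and coefficients are all made up and have nothing to do with oneFactor's actual models): it trains a simple credit scorer once with the region feature and once without it, then compares approval rates per region.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicants: income and payment history actually drive repayment,
# while region 2 merely happens to be poorer on average, so a "lazy" model
# can use region as a shortcut.
df = pd.DataFrame({"region": rng.integers(0, 3, n)})
df["income"] = rng.normal(50, 15, n) - 8 * (df["region"] == 2)
df["missed_payments"] = rng.poisson(1.0 + 0.8 * (df["region"] == 2))
p_default = 1 / (1 + np.exp(0.08 * (df["income"] - 50) - 0.9 * df["missed_payments"] + 1))
df["repaid"] = (rng.random(n) > p_default).astype(int)

def approval_rate_by_region(features):
    """Fit a simple scorer on the given features; return approval rate per region."""
    model = LogisticRegression(max_iter=1000).fit(df[features], df["repaid"])
    approved = model.predict_proba(df[features])[:, 1] > 0.5
    return pd.Series(approved, index=df.index).groupby(df["region"]).mean().round(2)

print("With region:   ", approval_rate_by_region(["region", "income", "missed_payments"]).to_dict())
print("Without region:", approval_rate_by_region(["income", "missed_payments"]).to_dict())
```

Comparing the two printed tables shows whether dropping the protected feature changes who gets approved in each region: without it, the scorer has to rely on the applicant's own income and payment history rather than on where they live.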

As oneFactor's experience shows, companies whose algorithms do not discriminate on grounds a person cannot influence, such as place of birth, gender, or age, are winning the competitive battle.

Postnikov believes that the regulator's task is to create a competitive environment for artificial intelligence systems rather than to impose strict restrictions on their development, as is now being discussed in Europe. Regulators should instead fight the secrecy of algorithms in advertising and other ecosystems, which cannot be improved precisely because of the monopoly positions of those market participants.

They should also open up the possibility of improving these algorithms, both through external data and by letting consumers apply their own AI algorithms. For example, Facebook recently discovered that an internal targeting algorithm discriminated against women when displaying job ads for a range of positions, since historically more men had been employed in those roles.

"As elsewhere, for the successful development of the artificial intelligence industry it is more effective to maintain competition between systems and AI developers. It is important not to allow individual companies to monopolize the solution of artificial intelligence problems. Competition makes the economy grow, and in the case of artificial intelligence it also eliminates all forms of discrimination and bias," sums up Roman Postnikov.
