We don’t see the essence of things; we see them in our own way. That observation succinctly describes the kinds of unfortunate biases that come built into our brains.
In a business setting, affinity bias, confirmation bias, attribution bias, and the halo effect are among the better-known reasoning errors, and they are only the surface. Together, they leave behind a litany of offenses and mistakes.
Of course, the most harmful prejudices are those that set us against our fellow human beings based on age, race, gender, religion, or appearance. Despite our efforts to purge these distortions from ourselves, our workplaces, and our society, they still permeate our thinking and behavior, and they surface even in modern technologies such as artificial intelligence.
Critics say AI is making bias worse
Since AI was first deployed in recruiting, loan approval, insurance premium modeling, facial recognition, law enforcement, and a host of other applications, critics (with good reason) have pointed to the technology’s propensity for bias.
For example, Google’s BERT (Bidirectional Encoder Representations from Transformers) is a leading natural language processing (NLP) model that developers can use to build their own AI applications. BERT was originally trained using Wikipedia text as a primary source. Is there anything wrong with that? Wikipedia’s contributors are overwhelmingly white men from Europe and North America. As a result, one of the most important foundations of language-based AI carried a skewed view from its inception.
Similar problems are found in computer vision, another key area of AI development. Facial recognition datasets contain hundreds of thousands of annotated faces, which are critical for developing facial recognition applications for cybersecurity, law enforcement, and even customer service. However, it turned out that these systems were unintentionally most accurate for people who resembled their developers, who are predominantly middle-aged white men. Error rates for women, children, older adults, and people of color were much higher than for middle-aged white men. As a result, IBM, Amazon, and Microsoft stopped selling their facial recognition technology to law enforcement in 2020 over concerns that these biases could lead to the misidentification of suspects.
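The accuracy gap described above is exactly the kind of thing a simple per-group error audit can surface. Below is a minimal sketch of such an audit in Python; the group labels and records are hypothetical illustrations, not real benchmark data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records: (group, model prediction, ground truth)
audit = [
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "no match", "no match"),
    ("darker-skinned women", "match", "no match"),  # a misidentification
    ("darker-skinned women", "no match", "no match"),
]
rates = error_rates_by_group(audit)
```

A large spread between the per-group rates is the red flag: the model is not equally reliable for everyone it will be used on.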
To learn more, watch the important and sometimes chilling documentary Coded Bias.
What if AI was actually part of the bias solution?
However, a closer look at the phenomenon of AI bias suggests that AI is simply exposing and amplifying implicit biases that already exist but are ignored or misunderstood. AI itself has no innate prejudice about color, gender, age, or anything else, and it is far less susceptible to the logical fallacies and cognitive biases that plague humans. The only reason we see bias in AI is that humans sometimes train it on heuristically flawed, biased data.
Since the biases described above were discovered, all the major tech companies have worked hard to improve their datasets and eliminate bias. And one way to get rid of AI bias? Use AI. If that sounds unlikely, read on.
Using AI to Eliminate Bias in Hiring
A classic example is hiring. Women and people of color are notoriously underrepresented in the most coveted jobs. The phenomenon is self-perpetuating: new employees become senior leaders, and senior leaders take charge of recruiting. Affinity bias ensures that “people like me” keep getting hired, while attribution bias justifies those choices based on past employee performance.
But that could change as AI takes a bigger role in recruiting. Tools like Textio, Gender Decoder, and Ongig use AI to flag hidden gender and other biases in job postings. Knockri, Ceridian, and Gapjumpers use AI to remove or ignore features that reveal gender, nationality, skin color, and age, so hiring managers can focus solely on a candidate’s qualifications and experience. Some of these solutions also reduce recency bias, affinity bias, and gender bias in the interview process by objectively assessing candidates’ soft skills or altering their phone voice to mask their gender.
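The “remove or ignore identifying features” approach these tools take can be illustrated with a minimal blind-screening sketch. The field names and candidate record below are hypothetical, not the schema of any of the products mentioned:

```python
# Fields that can reveal gender, age, nationality, or ethnicity.
# This list is illustrative; a real system would be far more thorough
# (e.g., also scrubbing names of schools or gendered pronouns in free text).
IDENTIFYING_FIELDS = {"name", "photo_url", "date_of_birth", "nationality", "gender"}

def redact_candidate(candidate: dict) -> dict:
    """Return a copy of the candidate record with identifying fields removed,
    leaving only qualification- and experience-related data."""
    return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "female",
    "date_of_birth": "1985-04-02",
    "skills": ["Python", "SQL"],
    "years_experience": 12,
}
blind = redact_candidate(candidate)  # what the hiring manager actually sees
```

The design point is that the redaction happens before a human ever sees the record, so affinity bias has nothing to latch onto.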
Using artificial intelligence to remove bias in venture capital decisions
A similar approach can be taken in venture capital, where men make up 80% of partners and women receive only 2.2% of investment, despite founding 40% of new startups. UK startup accelerator Founders Factory, for example, has written software that screens candidate ventures against identifiable characteristics of entrepreneurial success. Similarly, F4capital, a women-run nonprofit, developed a “FICO score for startups” that assesses startup maturity, opportunity, and risk to remove bias from investment decisions. This approach deserves wide adoption, not only because it is the ethical thing to do, but also because it reportedly leads to better returns: 184% higher than investments made without the help of AI.
Using AI to Reduce Cognitive Biases in Medicine
AI can also help make better decisions in healthcare. Medical diagnostics company Flow Health, for example, is working to use AI to overcome the cognitive biases that often distort doctors’ diagnoses. The “availability heuristic” nudges doctors toward common but sometimes incorrect diagnoses, while the “anchoring heuristic” causes them to stick with an incorrect initial diagnosis even when new information contradicts it. I believe AI will be an essential part of the rapidly evolving world of data-driven personalized medicine.
Other areas where AI can reduce common biases
AI can even help reduce the less vicious, but still very powerful, biases that often cloud our business judgment. Think of the bias (in English-speaking countries) toward information published in English; the bias of startups against older people, despite their greater knowledge and experience; the tendency of manufacturers to stick with familiar suppliers and methods rather than trying new, possibly better ones; and the short-term, emotion-driven decisions that supply chain executives and Wall Street investors make during tough economic times.
Putting AI to work in all of these areas can provide an effective check on unrecognized biases in the decision-making process.
AI can even be used to reduce AI bias
If it is human nature to make mistakes, AI may be the solution we need to avoid the costly and unethical consequences of our hidden biases. But what about these biases interfering with AI itself? If AI misreads biased data and amplifies biased human heuristics, how can it possibly be a useful solution?
There are now tools designed to remove the implicit human and data biases that creep into AI. Developed by Google’s People and AI Research (PAIR) team, the What-If Tool allows developers to explore the behavior of AI models using an extensive library of “fairness metrics”, while PwC’s Bias Analyzer, IBM Research’s AI Fairness 360 toolkit, and O’Reilly’s LIME tool can each help identify whether bias is present in an AI system.
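The “fairness metrics” such toolkits compute can be as simple as the disparate impact ratio: the rate of favorable outcomes for an unprivileged group relative to a privileged one (AI Fairness 360, for instance, exposes a metric of this name). Below is a minimal sketch using hypothetical loan-approval decisions:

```python
def selection_rate(decisions):
    """Fraction of favorable (True) decisions in a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact(unprivileged, privileged):
    """Disparate impact ratio: unprivileged selection rate divided by
    privileged selection rate. A value near 1.0 indicates parity;
    below 0.8 trips the common 'four-fifths rule' red flag."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical loan-approval decisions (True = approved)
privileged_group = [True, True, True, True, False]      # 80% approved
unprivileged_group = [True, True, False, False, False]  # 40% approved
di = disparate_impact(unprivileged_group, privileged_group)
```

A ratio of 0.5, as in this toy data, would signal that the model (or the historical data it learned from) treats the two groups very differently and needs a closer look.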
If you’re a senior executive or board member thinking about ways AI might reduce bias in your organization, I urge you to think of AI as a promising new weapon in your arsenal, not as a panacea that solves the problem completely. From a holistic, practical perspective, you still need to establish benchmarks for reducing bias, train your staff to identify and avoid hidden biases, and gather outside feedback from customers, suppliers, or consultants. Bias reviews are not only a good idea; in some cases, they are required by law.