
The 3 Principles of Building Anti-Bias AI

Why your company needs best practices for eliminating discriminatory bias in its artificial-intelligence systems, and the key principles for applying them.

By Salil Pande


Opinions expressed by Entrepreneur contributors are their own.

In April of 2021, the U.S. Federal Trade Commission — in its "Aiming for truth, fairness, and equity in your company's use of AI" report — issued a clear warning to tech industry players employing artificial intelligence: "Hold yourself accountable, or be ready for the FTC to do it for you." Likewise, the European Commission has proposed new AI rules to protect citizens from AI-based discrimination. These warnings, and impending regulations, are warranted.

Machine learning (ML), a common type of AI, mimics patterns, attitudes and behaviors that exist in our imperfect world, and as a result, it often codifies inherent biases and systemic racism. Unconscious biases are particularly difficult to overcome, because they, by definition, exist without human awareness. However, AI also has the power to do precisely the opposite: remove inherent human bias and introduce greater fairness, equity and economic opportunity to individuals on a global scale. Put simply, AI has the potential to truly democratize the world.

AI's reputation for reflecting human bias

Just as a child observes surroundings, sees patterns and behaviors and mimics them, AI is susceptible to mirroring human biases. So, tech companies, like parents, carry the weighty responsibility of ensuring that racist, sexist and otherwise prejudiced thinking isn't perpetuated through AI applications.

Unfortunately, AI's unsavory reputation in that respect has been rightly earned. For example, in January of 2021, the entire Dutch government resigned after it was revealed that the tax authority had used a biased algorithm to predict which citizens were most likely to wrongly claim child benefits. Roughly 26,000 parents, many flagged because of their dual nationality, were forced to repay benefits without the right to appeal.

Other research, conducted by the Gender Shades Project (a research initiative examining bias in facial-analysis algorithms), found that facial-recognition machine learning is notoriously bad at accurately identifying people of color. Some of the impacts of associated applications have been catastrophic, such as identifying an innocent person as a criminal in a virtual line-up. In the summer of 2020, Detroit's police chief acknowledged that the facial-recognition technology his department used misidentified suspects roughly 96% of the time, leading to innocent people being flagged as potential criminals. More recently, Amazon extended its moratorium on police use of its facial-recognition software due to concerns around racial misidentification.

Related: These Entrepreneurs Are Taking on Bias in Artificial Intelligence

Principles of building anti-bias AI

But it doesn't have to be this way. AI can be built not only to be unbiased, but to actively counter inequities related to race, gender and other characteristics, and, ironically, the only true solution to bias in AI is human intervention. To inform that work, a constant feedback loop of critiques and comments from users with diverse backgrounds, experiences and demographics is key. That feedback creates a network effect, allowing developers to continually update their algorithms and practices. Developers who apply diligent principles and practices can ensure their technology is impartial and can be applied to a diverse range of scenarios.

Some guiding values towards that end:

1. Design processes for developing AI/ML systems with the removal of bias in mind

Companies and teams developing AI/ML systems need to consider bias throughout the process of developing and testing algorithms to ensure that it is minimized or eliminated. There are multiple stages in developing AI/ML systems, and bias should not be an afterthought; teams must start by thinking through all possible biases that could exist in what they are building, then work out how each will be addressed at every stage of the process. Critical to this is ensuring that the teams themselves are diverse, so the resulting AI reflects a collaboration among people of different backgrounds.

Related: The Case for Transparent AI

2. Ensure data sets used to teach algorithms reflect true (global) diversity and don't unintentionally introduce bias

Just as a diverse team matters, ensuring that the data and inputs for the AI truly reflect our diverse world quells potential biases against individual groups. AI is designed to follow the rules laid out for it, so it must be trained with unbiased data. Without proper consideration during the data-collection and preparation stages, unintentional biases can creep into algorithms, which later become expensive to remove in both time and cost. Pressure-testing data and reviewing the patterns within it helps teams see both the apparent and the unintended consequences of their data sets.
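As a minimal illustration of what pressure-testing a data set can look like, the Python sketch below compares each demographic group's share of the training data against a benchmark share (for example, census or market data) and flags large gaps. The column name, the benchmark figures and the tolerance are hypothetical assumptions for demonstration, not a prescribed standard.

```python
# Hypothetical sketch: auditing a training set for demographic representation.
# Column names and benchmark shares are illustrative assumptions.
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str,
                         benchmark: dict, tolerance: float = 0.05) -> dict:
    """Compare each group's share of the data against a benchmark share
    and flag groups whose share falls outside the tolerance."""
    observed = df[column].value_counts(normalize=True)
    flags = {}
    for group, expected in benchmark.items():
        share = float(observed.get(group, 0.0))
        if abs(share - expected) > tolerance:
            flags[group] = {"observed": round(share, 3), "expected": expected}
    return flags

# Example usage with made-up numbers: a sample far from a 50/50 benchmark
data = pd.DataFrame({"gender": ["female"] * 300 + ["male"] * 700})
print(audit_representation(data, "gender", {"female": 0.5, "male": 0.5}))
```

A check like this only surfaces imbalances; deciding what a fair benchmark is, and how to rebalance or re-collect data, still requires human judgment from a diverse team.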

3. Ensure a rigorous approach to eliminating biases across the development lifecycle

No matter how careful or diverse your team or data is, human biases can still slip through the cracks, so the next critical task is using anti-bias principles to prevent human biases from entering the technology. This includes actively monitoring the ML cycle (pre-training, training and post-training) to spot them. This can be done by raising alarms when sensitive parameters drive outcomes, and by repeatedly excluding and re-including those parameters to compare the results. Another important step in minimizing bias through such processes is defining relevant fairness metrics; there is no single universal definition of fairness, and each of the many available definitions offers a different tradeoff between fairness and other objectives.
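To make the idea of a fairness metric concrete, here is a minimal Python sketch of one common measure, the demographic parity gap (the difference in positive-prediction rates between groups), with an illustrative alarm threshold. The data, the column roles and the 0.1 threshold are assumptions for demonstration only, not a recommended setting.

```python
# Hypothetical sketch: monitoring one fairness metric and raising an alarm.
import pandas as pd

def demographic_parity_gap(predictions: pd.Series, sensitive: pd.Series) -> float:
    """Return the largest difference in positive-outcome rates across groups."""
    rates = predictions.groupby(sensitive).mean()
    return float(rates.max() - rates.min())

# Example usage: model predictions (1 = favorable outcome) and group labels
preds = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
groups = pd.Series(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # illustrative threshold, chosen per use case
    print(f"Fairness alarm: demographic parity gap = {gap:.2f}")
```

Other metrics, such as equalized odds or predictive parity, trade off differently against accuracy and one another, which is why teams need to choose and document the definitions of fairness that fit their application.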

Related: Learn How Machine Learning Can Help Your Business

Final frontier

Finally, ongoing research into the proxy problem of bias, and into explainability in AI systems, may ultimately lead the AI/ML community to build bias-free systems, or at least systems that can be probed and held accountable for the decisions they make. Creating a more equitable world isn't just about AI or technological innovation. While they play a role, true change can and must start with each one of us.

Salil Pande

CEO and Founder of VMock

Salil Pande, CEO and founder of VMock, strives to empower students and alumni to own their career development. Pande has global experience in marketing, sales and management consulting. He is an alumnus of Chicago Booth and IIT Kanpur.

