
August 7, 2023


The innovative power of artificial intelligence (AI) is impossible to ignore. From healthcare to transportation and every industry in between, companies leverage AI to drive positive change and stay ahead of the competition. Some have used the technology for years, while others are only beginning to tap into its vast potential.

It's no surprise that AI attracts substantial investment. Forrester projects AI software markets will land north of $64 billion by 2025. Realizing those numbers, however, will require the industry to better solve problems we've grappled with for decades. AI's effectiveness depends on the quality of the data it's fed: feed it bad input data (ungoverned, mismanaged or low-quality), and issues surface. To the extent that we treat modern AI as a proxy for human consciousness, one of the most insidious of these problems is unconscious bias.

Barriers To Widespread Adoption

AI is rapidly gaining ground in the enterprise world, with benefits ranging from increased efficiency and productivity to better customer service, improved quality and fewer human errors. Despite these benefits, several significant hurdles impede its widespread adoption:

Rudimentary models. AI is still a developing field. Many models are unreliable and require fine-tuning, and there's a shortage of skilled professionals who can address these issues. The resulting biased outcomes erode trust, slowing adoption and development within organizations.

Lack of good and/or well-understood data. An inability to find, understand and trust data poses a significant hurdle for AI adoption. Users may struggle to search for and discover governed data, directly impacting the accuracy and effectiveness of their AI models.

A "speculative" investment. Despite being available for several years, AI is still young. The success of its applications remains uneven, and some companies need convincing before they invest.

Evolving Challenge Of AI Bias

Although AI has numerous benefits, recent advancements in generative AI have brought renewed attention to long-standing concerns about bias and ethics. The humans who choose the data that AI algorithms use can unintentionally introduce bias into AI systems. I could select a training dataset for predicting the next best customer to target, but that dataset might be based on demographic data that skews toward a certain population group.

Much of the data that's initially captured is based on the finite knowledge of the engineer who produced the software in the first place. I might not know that capturing data about certain preferences may be more highly predictive in one population than another. As a result, this data can ultimately lead to biased output.

A case in point is ChatGPT, which gained widespread popularity for its ability to generate credible-sounding prose. When a company relies on ChatGPT to help make business decisions, inaccuracies and biases may be introduced, including confident but fabricated output known as "hallucinations." To avoid this and ensure trust in generative models, diverse teams must test them to identify known biases and correct them through fine-tuning.

It's even more alarming that we're dealing with a moving target. AI models drift as they transform based on new data, which can slowly degrade accuracy and unintentionally invite bias. As a model learns from new data, it can further entrench damaging biases such as gender and racial inequality. Human error in designing the algorithms is natural, and even expected, which is why checks and balances must be stringently implemented.
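As an illustration, here is a minimal sketch of one such guardrail: tracking accuracy over consecutive windows of recent predictions and flagging windows where it falls below a baseline. The window size, tolerance, and the load_labeled_predictions loader are hypothetical choices for this sketch, not anything prescribed in the article.

```python
import numpy as np

def windowed_accuracy(y_true, y_pred, window=500):
    """Accuracy over consecutive, non-overlapping windows, oldest first."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for start in range(0, len(y_true) - window + 1, window):
        scores.append(float(np.mean(y_true[start:start + window] ==
                                    y_pred[start:start + window])))
    return scores

def drift_windows(scores, baseline, tolerance=0.05):
    """Indices of windows whose accuracy fell more than `tolerance` below baseline."""
    return [i for i, s in enumerate(scores) if s < baseline - tolerance]

# Hypothetical usage, assuming predictions are logged and true labels arrive later:
# y_true, y_pred = load_labeled_predictions()  # placeholder loader
# print(drift_windows(windowed_accuracy(y_true, y_pred), baseline=0.92))
```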

More nefarious, however, is what happens when AI models are fed and trained on bad data. While AI is designed to learn, it cannot by itself differentiate "good" data from "bad." Regardless of quality, it learns from the data it's fed and generates new data and insights from it. As a result, any biases present in the original data will only be strengthened, exacerbating the problem.

For instance, while loan officers ultimately decide mortgage approvals, software underpins the information they act on. An article published by Nasdaq cited a study by The Markup that found AI bias resulted in significant disparities in loan approvals among people of color. According to the study, "AI-based mortgage lenders are 80% more likely to reject Black applicants, 40% more likely to reject Latino applicants and 70% more likely to deny Native American applicants."
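To make the arithmetic behind such disparities concrete, one common check is to compute per-group approval rates and the ratio of each to a reference group; fairness reviews often apply the "four-fifths rule," which flags ratios below 0.8. The sketch below runs on hypothetical (group, approved) decision logs; it is not the methodology of The Markup's study.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> per-group approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def impact_ratios(rates, reference):
    """Each group's approval rate relative to the reference group's."""
    ref = rates[reference]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical example: 80% vs. 60% approval yields a 0.75 ratio, below the 0.8 bar.
rates = approval_rates([("a", True)] * 8 + [("a", False)] * 2 +
                       [("b", True)] * 6 + [("b", False)] * 4)
print({g: round(r, 2) for g, r in impact_ratios(rates, "a").items()})  # {'a': 1.0, 'b': 0.75}
```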

AI is also increasingly used to help HR professionals automate recruitment and hiring processes. However, the technology is often unintentionally trained on hidden biases in the data. Candidates belonging to protected groups may be less likely to be hired, or differences in hiring offers may ultimately inflate compensation disparities across different groups regardless of the users' intention.

AI’s limitations temper its benefits. What can companies do to curb AI bias?

Preventing Hidden AI Bias

To ensure the fairest and most unbiased outcomes, companies must first understand where the model's training data (its inputs) is sourced from and whether it can be trusted. Then they must provide tools that humans (data scientists) can use to monitor AI outputs effectively. This is called functional monitoring.
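A minimal sketch of what such output monitoring might look like, assuming a scoring pipeline that logs each prediction alongside a group attribute: track the spread in positive-prediction rates across groups per batch and alert when it widens. The 0.10 threshold and the score_stream/alert hooks are assumptions for illustration.

```python
import numpy as np

def parity_gap(groups, preds):
    """Demographic-parity gap: spread in positive-prediction rates across groups."""
    groups, preds = np.asarray(groups), np.asarray(preds)
    rates = {g: float(preds[groups == g].mean()) for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical monitoring loop over scoring batches:
# for batch in score_stream():                 # placeholder batch source
#     gap, rates = parity_gap(batch.groups, batch.preds)
#     if gap > 0.10:                           # assumed alert threshold
#         alert("parity gap exceeded", rates)  # placeholder alerting hook
```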

Companies must also publish their AI algorithms and, where possible, reveal the datasets used to train the model. Each outcome should be tested consistently and systematically for bias, as in the sketch below. There also must be greater awareness of AI bias; data scientists must be trained to recognize it. Emphasizing bias as an area of study in university data science programs and in executive education can play a crucial role in achieving this goal.
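One systematic way to test outcomes is a standard independence test between group membership and outcome. Here is a sketch using SciPy's chi-square test on a hypothetical contingency table; the counts are illustrative, not data from the article.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table of outcomes by group:
#            approved  denied
# group_a        820     180
# group_b        640     360
table = [[820, 180], [640, 360]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3g}")  # a small p-value suggests outcome depends on group
```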

Finally, companies need to train anyone who touches data, whether they're managing it, producing it or consuming it, to be aware of the ways they can introduce bias. This is no different from training scientists to avoid procedural bias when designing an experiment. Of course, the challenge with most AI is that the real world is rife with unpredictable and confounding variables.

AI innovation has reached a crucial juncture. However, the technology still has imperfections that can have disastrous consequences if not effectively managed. Only companies that put checks and balances in place, monitoring both inputs (trusted data) and outputs, will reap the benefits of AI without falling prey to hidden bias.
