Eliminating AI bias: Industry experts weigh in

Developers and data scientists are human, of course, but the systems they create are not; they are merely code-based reflections of the human reasoning that goes into them. Getting artificial intelligence systems to deliver unbiased results that support sound business decisions requires a holistic approach involving much of the enterprise.

IT staff and data scientists cannot — and should not — be expected to be solo acts when it comes to AI. 

There is a growing push to extend responsibility for AI beyond the confines of systems development and into the business suite. At a recent AI Summit panel, for example, panelists agreed that business leaders and managers need not only to question the quality of decisions delivered through AI, but also to get more actively involved in their formulation. (I co-chaired the conference and moderated the panel.)

There need to be systemized ways to open up the AI development and modeling process, insists Rod Butters, chief technology officer for Aible. “When we tell data scientists to go out and create a model, we’re asking them to be a mind reader and a fortune teller. The data scientist is trying to do the right thing, creating a responsible and solid model, but based on what?” he says. “Just creating a great model does not necessarily solve all problems.”

So how do we rectify the situation Butters describes and address potential bias or inaccuracies? Clearly, this is a challenge that needs to be addressed across the enterprise leadership spectrum. IT, which has been carrying most of the AI weight, can’t do it alone. Experts across the industry urge opening up AI development to more human engagement. 

“Placing the burden on IT leaders and staff is to mistake a set of substantial, organization-wide ethical, reputational, and legal issues for a technical issue,” says Reid Blackman, CEO of Virtue and advisor to Bizconnect. “Bias in AI is not solely a technical problem; it is interwoven across departments.”

To date, not enough has been done to combat AI bias, Blackman continues. “Despite the attention to biased algorithms, efforts to solve for this have been fairly minimal. The standard approach (apart from doing nothing, of course) is to use a variety of tools that see how assorted goods and services are distributed across several subpopulations, most notably including groups relating to race and gender; or to utilize a range of quantitative metrics to determine whether the distribution is fair or biased.”
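
As a rough illustration of the distribution checks Blackman describes, the sketch below computes per-group selection rates and the ratio between them (sometimes called the disparate impact ratio). The data, the column names, and the four-fifths threshold mentioned in the comments are assumptions for illustration, not a reference to any particular tool:

```python
# Minimal sketch of a subpopulation-distribution check (illustrative only).
# Assumes a DataFrame with hypothetical columns "group" (a protected
# attribute) and "approved" (the model's binary decision).
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes per subpopulation."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate.
    Values below ~0.8 are often flagged for review (the 'four-fifths rule')."""
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(decisions, "group", "approved")
print(rates)                          # A: 0.67, B: 0.25
print(disparate_impact_ratio(rates))  # ~0.38 -- a gap worth investigating
```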

Eliminating bias and inaccuracies in AI takes time. “Most organizations understand that the success of AI depends on establishing trust with the end-users of these systems, which ultimately requires fair and unbiased AI algorithms,” says Peter Oggel, CTO and senior vice president of technology operations at Irdeto. “However, delivering on this is much more complicated than simply acknowledging the problem exists and talking about it.”

More action is required beyond the confines of data centers or analyst sites. “Data scientists lack the training, experience, and business knowledge to determine which of the incompatible metrics for fairness are appropriate,” says Blackman. “Furthermore, they often lack the clout to elevate their concerns to knowledgeable senior executives or relevant subject matter experts.”

It’s time to do more “to review those results not only when a product is live, but during testing and after any significant project,” says Patrick Finn, president and general manager of Americas at Blue Prism. “They must also train both technical and business-side staff on how to alleviate bias within AI, and within their human teams, to empower them to participate in improving their organization’s AI use. It’s both a top-down and bottom-up effort powered by human ingenuity: remove obvious bias so that the AI doesn’t incorporate it and, therefore, doesn’t slow down work or worsen someone’s outcomes.”

Finn adds, “Those who aren’t thinking equitably about AI aren’t using it in the right way.”


Solving this challenge “requires more than validating AI systems against a couple of metrics,” Oggel says. “If you think about it, how does one even define the notion of fairness? Any given problem can have multiple viewpoints, each with a different definition of what is considered fair. Technically, it is possible to calculate metrics for data sets and algorithms that say something about fairness, but what should it be measured against?”
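
Oggel’s question is more than rhetorical: common fairness metrics can genuinely disagree. The toy example below (all numbers synthetic, invented for illustration) shows a model that satisfies demographic parity, equal selection rates across groups, while violating equal opportunity, equal true positive rates:

```python
# Synthetic example: 10 people in each of two groups, same base rate of
# positive outcomes, but different prediction patterns per group.
import numpy as np

g = np.array([0] * 10 + [1] * 10)                   # group membership
y = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0] * 2)    # true outcomes
p = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0]         # predictions, group 0
           + [1, 1, 0, 0, 1, 1, 0, 0, 0, 0])        # predictions, group 1

for grp in (0, 1):
    sel = p[g == grp].mean()               # demographic parity: P(pred=1 | group)
    tpr = p[(g == grp) & (y == 1)].mean()  # equal opportunity: P(pred=1 | y=1, group)
    print(f"group {grp}: selection rate {sel:.2f}, true positive rate {tpr:.2f}")

# group 0: selection rate 0.40, true positive rate 1.00
# group 1: selection rate 0.40, true positive rate 0.50
# Demographic parity says this model is fair; equal opportunity says it is not.
```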

Oggel says more investment is required “into researching bias and understanding how to eliminate it from AI systems. The outcome of this research needs to be incorporated into a framework of standards, policies, guidelines and best practices that organizations can follow. Without clear answers to these and many more questions, corporate efforts to eliminate bias will struggle.”

AI bias is often “unintentional and subconscious,” he adds. “Making staff aware of the issue will go some way to addressing bias, but equally important is ensuring you have diversity in your data science and engineering teams, providing clear policies, and ensuring proper oversight.”

While opening up projects and priorities to the enterprise takes time, there are short-term measures that can be taken at the development and implementation level.

Harish Doddi, CEO of Datatron, advises asking the following questions as AI models are developed:   

  • What were the previous versions like?   
  • What input variables are coming into the model?   
  • What are the output variables?   
  • Who has access to the model?   
  • Has there been any unauthorized access?   
  • How is the model behaving when it comes to certain metrics?   

During development, “machine learning models are bound by certain assumptions, rules and expectations,” which may produce different results once the models are put into production, Doddi explains. “This is where governance is critical.” Part of this governance is a catalog to keep track of all versions of models. “The catalog needs to be able to keep track and document the framework where the models are developed and their lineage.”
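
Doddi doesn’t specify an implementation, but a minimal sketch of the kind of record such a catalog might keep, mapped loosely to his checklist above, could look like the following. All field names and the alerting behavior are assumptions for illustration, not a description of Datatron’s product:

```python
# Illustrative sketch of a model-governance catalog entry (hypothetical fields).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelCatalogEntry:
    name: str
    version: str                      # ties into "What were the previous versions like?"
    framework: str                    # e.g., "scikit-learn 1.4" -- where the model was developed
    input_variables: list[str]        # "What input variables are coming into the model?"
    output_variables: list[str]       # "What are the output variables?"
    lineage: list[str]                # prior versions and training-data sources
    authorized_users: set[str]        # "Who has access to the model?"
    access_log: list[tuple[str, datetime]] = field(default_factory=list)
    metrics: dict[str, float] = field(default_factory=dict)  # behavior on key metrics

    def record_access(self, user: str) -> None:
        """Log every access so unauthorized use can be audited later."""
        self.access_log.append((user, datetime.now()))
        if user not in self.authorized_users:
            print(f"ALERT: unauthorized access by {user!r}")

entry = ModelCatalogEntry(
    name="credit-risk", version="2.1.0", framework="scikit-learn 1.4",
    input_variables=["income", "debt_ratio"], output_variables=["default_prob"],
    lineage=["credit-risk:2.0.0"], authorized_users={"alice"},
    metrics={"auc": 0.91},
)
entry.record_access("bob")  # flagged: "bob" is not in authorized_users
```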

Enterprises “need to better ensure that commercial considerations don’t outweigh ethical considerations. This is not an easy balancing act,” Oggel says. “Some approaches include automatically monitoring how model behavior changes over time on a fixed set of prototypical data points. This helps in checking that models are behaving in an expected manner and adhering to some constraints around common sense and known risks of bias. In addition, regularly conducting manual checks of data examples to see how a model’s predictions align with what we expect or hope to achieve can help to spot emergent and unexpected issues.”
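
As a hedged sketch of the first approach Oggel mentions, the snippet below scores a fixed set of prototypical inputs and flags any prediction that has moved beyond a tolerance since the last release. The model interface (a scikit-learn-style predict_proba), the probe inputs, and the tolerance value are all assumptions:

```python
# Sketch: monitor model behavior over time on fixed prototypical data points.
import numpy as np

PROTOTYPES = np.array([[30_000, 0.1], [60_000, 0.4], [120_000, 0.8]])  # fixed probe inputs
TOLERANCE = 0.05  # max acceptable change in predicted probability (assumed)

def check_drift(baseline_scores: np.ndarray, model) -> list[int]:
    """Return indices of prototypes whose predictions moved more than TOLERANCE."""
    new_scores = model.predict_proba(PROTOTYPES)[:, 1]
    drift = np.abs(new_scores - baseline_scores)
    return [i for i, d in enumerate(drift) if d > TOLERANCE]

# Usage: after retraining, compare against scores saved at the last release.
#   flagged = check_drift(saved_baseline_scores, retrained_model)
# Any flagged prototypes go to a human reviewer before the model is promoted.
```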
