AI technology is showing cultural biases: here’s why and what can be done

AI is the fastest-growing technology in the world, but there is growing concern about its ability to consider and represent diverse communities. Prominent AI applications are showing racial biases and a lack of diversity and cultural sensitivity. AI expert Professor Kevin Wong from Murdoch University’s School of Information Technology says that to deal with the problem of cultural biases in AI, it’s important to understand the fundamentals of different AI techniques.

Organisation/s: Murdoch University

Media release

From: Murdoch University

“Machine Learning techniques, including Generative AI, require a huge amount of ‘representative’ data to train the complex system,” Professor Wong said.

“Data-driven machine learning techniques rely on the data to establish the intelligence of the system – which means bias can occur when the data used is not comprehensive enough, or there is an imbalanced distribution.”
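The point about imbalanced data can be illustrated with a deliberately simplified sketch (all labels and counts here are hypothetical, not drawn from any real system): a model trained on skewed data simply inherits that skew.

```python
from collections import Counter

# Hypothetical, heavily imbalanced "training data": labels where one
# group dominates 95 to 5.
training_labels = ["group_a"] * 95 + ["group_b"] * 5

# A naive majority-vote model: it always predicts the label it saw
# most often during training, regardless of the input.
def majority_model(labels):
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda _input: most_common_label

model = majority_model(training_labels)

# The under-represented group is never predicted, no matter the input:
predictions = [model(x) for x in range(10)]
print(predictions.count("group_b"))  # 0 -- group_b is invisible to the model
```

Real machine-learning models are far more sophisticated than a majority vote, but the underlying failure mode is the same: whatever distribution the training data has, the system learns to reproduce.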

He said while many big tech companies are trying to ensure that equity, diversity and ethical issues are addressed in the data that’s used to train Generative AI, the technology’s behaviour can still be unpredictable without proper handling.

Some publicly accessible AI systems are being called out for an inability to generate images of interracial couples, which is symptomatic of a much bigger problem.

Professor Wong said a “comprehensive evaluation and testing strategy” was required.

System-wide change will be driven by long-term, comprehensive evaluation to build larger databases and improve AI architectures, but Professor Wong said there were strategies for dealing with such problems in the meantime.

These include incorporating other AI techniques over which humans have better control and understanding, such as Explainable AI and Interpretable AI.

These are systems that ensure humans retain intellectual oversight, making the decisions and answers given by the AI predictable.

This differs from other forms of AI, where even the designers can’t explain some of their results.
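The contrast can be sketched with a toy, hypothetical example of a model that is interpretable by construction: every decision is an explicit, human-readable rule, so a reviewer can trace exactly why each answer was given (the rules and thresholds below are invented for illustration only).

```python
# A hypothetical rule-based classifier: each prediction comes paired
# with the rule that produced it, so the reasoning is auditable.
def interpretable_classifier(age: int, income: int):
    if income > 50_000:
        return "approve", "rule 1: income above 50,000"
    if age < 25:
        return "reject", "rule 2: income <= 50,000 and applicant under 25"
    return "review", "rule 3: no decisive rule matched, flag for a human"

decision, explanation = interpretable_classifier(age=30, income=40_000)
print(decision, "-", explanation)
```

A black-box model might make the same decisions more accurately, but it cannot hand back an explanation like this, which is the trade-off Professor Wong is pointing at.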

Professor Wong said Responsible AI, a ‘rule book’ of principles for guiding AI development, was another important emerging area for developing these systems.

“There is no one simple solution that can be used to solve this overnight; multi-dimensional and hierarchical approaches may need to be used to tackle such complex issues.

“The question is how best to adjust an AI system to handle the sensitive issues of culture, diversity, equity, privacy and ethics, important areas that will shape user acceptance,” Professor Wong said.

“If some parameters or datasets are adjusted to include the handling of those broad issues, is there a systematic way to fully test the AI system before it can be rolled out without hurting anyone?”

While there are current issues with diversity and AI, Professor Wong said AI could be a powerful way to “help close equity and diversity gaps” if used correctly.

“It is important for a general system to be developed following some rules and ethical considerations that can then be adapted to different cultures and personal needs,” he said.

“However, thorough testing and evaluation are essential before wide deployment, as some outcomes could provoke sensitive emotional responses in some populations around the world.”

