AI is becoming an integral part of modern life, powering everything from self-driving cars to speech recognition. But it has also become increasingly evident that some of its most sophisticated systems may be causing serious harm. These systems are vulnerable to bias, and it's time to address the problem head-on.

According to a report published by the AI Now Institute at New York University, the field of artificial intelligence is overwhelmingly white and male. Those building these algorithms don't reflect the diverse communities their systems affect, and this lack of representation can lead to biased systems.

One prominent example is Amazon's Rekognition facial recognition software. The technology, which has been marketed to law enforcement for identifying suspects, has been criticized for misidentifying women and people of color at far higher rates than white men.

Google’s image recognition software has also produced racist results. In 2015, Google Photos was found to be auto-tagging photos of Black people as “gorillas.”
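Independent audits of systems like these typically work by comparing error rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that kind of check; the group names and records are invented, not data from any real audit. It computes a false-positive rate per group, and a large gap between groups is the sort of disparity these studies report.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, ground_truth, model_prediction),
# where 1 means the system flagged the face as a match and 0 means it did not.
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]

# Count false positives (flagged despite no true match) and all true negatives per group.
false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, truth, prediction in records:
    if truth == 0:
        negatives[group] += 1
        if prediction == 1:
            false_positives[group] += 1

# A wide gap in false-positive rates between groups is one common signal of the
# kind of disparate impact reported in audits of commercial face recognition.
for group, total in negatives.items():
    rate = false_positives[group] / total
    print(f"{group}: false-positive rate = {rate:.2f}")
```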

These issues aren’t new, and critics have traced them in part to Google’s failure to diversify its engineering teams. Google’s language model LaMDA has likewise been found to reproduce racist and sexist stereotypes from its training data.

The reality is that these issues are spreading across more industries than previously anticipated. It is therefore essential to acknowledge that the lack of diversity in tech is more than a recruitment problem.

Researchers have also linked hiring and retention problems to companies’ internal practices. Subjective assessments such as résumé screenings tend to take priority over more objective processes, narrowing the pipeline to candidates from similar, less diverse backgrounds.

Though addressing these issues may take time, business leaders must understand how they can contribute to resolving this problem. The first step in doing so is creating policies that promote diversity, openness, and accountability within an organization.

This can include encouraging inclusivity in the hiring process, offering training on gender and race equality, and creating safe spaces that foster research from underrepresented groups. These policies can be implemented at both corporate and academic levels to make AI development more representative of society at large.

Developing these policies takes time, but the results are worth the effort. They help ensure that AI isn’t used as a vehicle for racial or gender discrimination and can keep harmful biases from being built into systems in the first place.

As such, the AI industry is facing a critical moment. It is essential for business and political leaders to acknowledge these problems, collaborate on solutions, and take responsibility.

For starters, the AI Now Institute has recommended that companies publish their diversity data to demonstrate progress, and that the industry release transparency reports on harassment and discrimination.

There are numerous resources available to organizations looking to increase their diversity, such as programs offered by colleges and universities that teach students how to develop AI products. The report suggested supplementing these efforts with inclusive internships, mentorship from diverse AI professionals, and support from industry associations.