According to the 2016 census, women hold only around 20% of tech and IT positions in Australia – meaning less female input into the development of AI, and the risk of implicit gender biases being encoded into AI algorithms by one over-represented group.
As Australian businesses begin experimenting with AI to drive sales and increase customer engagement, these unconscious biases could create issues in the way future customer interactions pan out – and reinforce subtle sexist cues for future generations.
How discriminatory is AI?
CEOs may be wondering why this matters – and how it factors into the success of their businesses. Among their many roles in today’s business world, AI systems are most commonly deployed as customer-facing chatbots that handle basic, day-to-day customer queries. This means they will be exposed to every facet of customer service – from frustrated customers to outright rude ones.
A notable trend in the development of AI is home assistants whose voices are female by default. These assistants were built to perform subservient tasks, and the combination of servile roles with default female personas risks reinforcing legacy gender biases.
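The difference between a bias that is baked in and one that is consciously chosen often comes down to defaults. As a minimal sketch – the `BotPersona` class and its fields are hypothetical, not from any particular chatbot platform – a persona can be configured explicitly rather than shipping with an implied gender:

```python
# Hypothetical sketch: making an assistant's persona an explicit design
# decision instead of a hard-coded gendered default.
from dataclasses import dataclass


@dataclass
class BotPersona:
    name: str
    pronouns: str = "it/its"   # a neutral default, overridden only deliberately
    voice: str = "neutral"     # voice profile is a reviewed choice, not an assumption


def greet(persona: BotPersona) -> str:
    """Build the bot's opening line from its configured persona."""
    return f"Hi, I'm {persona.name}. How can I help you today?"


# The default persona carries no implied gender; any gendered traits
# must be set consciously by the team designing the bot.
assistant = BotPersona(name="Aria")
print(greet(assistant))
```

The point of the sketch is simply that a gendered voice should be the output of a design review, not the silent default of a constructor.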
This can have a variety of negative effects on businesses. The way these AI assistants portray female stereotypes – many of which are unacceptable today – has sparked public outrage. Businesses that fail to address the concerns of their customers may see irreversible brand damage and lost sales from female and male customers alike.
Coding empathy into AI
This doesn’t mean that Australian businesses should shun AI. On the contrary, AI will transform our lives. It will make technology more efficient, reliable, and capable, opening up tremendous human potential while overcoming some core governance issues that are causing concern today (think Banking Royal Commission).
Companies have the opportunity to extend their persona and values – gender equality among them – into the AI they deploy.
A simple fix would be to make AI gender-neutral, but that would squander the enormous opportunity to extend a company’s values beyond the four walls of its offices.
Instead, as business leaders continue to invest in AI for their companies, the path to eliminating gender-biased coding is relatively simple: involve more diversity. As an example, LivePerson has found that a great way to do this is to involve a diverse set of contact centre agents in the design of the bots.
Contact centre agents are on the front lines of your company and know the nuances in consumer conversations, which is invaluable in designing bot conversations and flows.
Throughout development, diverse voices – from boardrooms to on-ground teams – should be actively involved in the programming and refining of the AI system’s machine learning, language, and processing algorithms. Businesses will need to enforce and champion these efforts to act as drivers and safeguards for future AI-related endeavours.
Nobody intentionally codes AI to be biased against gender or any other factor – we hope. But there’s a bit of every programmer in the program, and unconscious preconceptions or beliefs can easily seep into – and potentially compromise – the function of AI if we’re not careful.
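How bias seeps in without anyone writing a biased rule can be shown with a toy example. This is not any production system – the "corpus" and the `complete` function are invented for illustration – but it mirrors the mechanism: a naive autocomplete that simply picks the pronoun most often paired with a word in its training data.

```python
# Toy illustration: no one codes a sexist rule, yet skewed training data
# produces skewed behaviour in the resulting model.
from collections import Counter, defaultdict

# A tiny, deliberately imbalanced training corpus (hypothetical data).
training_corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

# Count how often each pronoun co-occurs with each word.
cooccurrence = defaultdict(Counter)
for word, pronoun in training_corpus:
    cooccurrence[word][pronoun] += 1


def complete(word: str) -> str:
    """Return the pronoun most frequently paired with `word` in training."""
    return cooccurrence[word].most_common(1)[0][0]


print(complete("nurse"))     # "she" – learned from the data, not from any rule
print(complete("engineer"))  # "he"
```

The "algorithm" here is entirely neutral; the bias lives in the data it was fed – which is exactly why the diversity of the people selecting, reviewing, and refining that data matters.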
By involving a wide variety of perspectives in an AI system’s learning process, business leaders not only avoid the risks of confirmation bias but also ensure that their core values are being upheld at every level of decision-making – even those advised on by machines.