IAG-backed research reveals need for investment in Responsible AI

Ethical AI Advisory and Gradient Institute have launched the inaugural Australian Responsible AI Index, sponsored by IAG and Telstra.

The index findings, which reveal that fewer than one in 10 Australia-based organisations have a mature approach to deploying responsible and ethical artificial intelligence (AI), signal the urgent need for Australian organisations to increase investment in responsible AI strategies.

Responsible AI is AI that is designed and developed with a focus on the ethical, safe, transparent and accountable use of the technology, in line with human, societal and environmental values. It is critical to ensuring the ethical and appropriate application of AI, the fastest growing technology sector in the world, currently valued at US$327.5bn (IDC 2021).

The Responsible AI Index 2021, developed in partnership with Fifth Quadrant, studied 416 organisations operating in Australia and found that only 8% are in the ‘Maturing’ stage of Responsible AI, while 38% are ‘Developing’, 34% are ‘Initiating’, and 20% are ‘Planning’. The mean score was 62 out of 100, placing the overall result in the ‘Initiating’ category.

To help organisations accelerate Responsible AI adoption, a Responsible AI Self-Assessment Tool has been created to measure an organisation’s maturity when developing and deploying AI.

The tool will help companies in the earlier planning and initiating phases of Responsible AI adoption develop the right guardrails, at a time of rapid growth of AI in Australia and fast-paced consumer adoption of digital technology that uses AI.


Dr Catriona Wallace, CEO of Ethical AI Advisory, said, “The implications of organisations not developing AI responsibly are that unintended harms are likely to occur, to people, society and the environment, potentially at scale. As only three in ten organisations stated they had a high level of capability to deploy AI responsibly, there is significant work for Australian business leaders to do.”

Bill Simpson-Young, CEO of Gradient Institute, noted, “The Index found that just over half of the organisations have an AI strategy in place, highlighting the opportunity for business leaders to act on critical AI initiatives such as reviewing algorithms and underlying databases, monitoring outcomes for customers, sourcing legal advice around potential areas of liability and reviewing global best practice.” 

“Putting training in place, to upskill data scientists and engineers, as well as board and executive teams can also help close the gap by enabling a far greater level of understanding and education in Responsible AI,” Mr Simpson-Young said.

IAG Group Executive Direct Insurance Australia, Julie Batch said, “AI is playing a central role in enhancing customer experience and improving business processes, but to ensure the right outcome for our customers, we embed considered thinking about fairness and equality before implementing an AI solution.”

“At IAG we think of fair and ethical AI as a societal challenge and we see the Responsible AI tool as a great way for organisations to understand where they sit on the index and what they need to do to help ensure they’re applying AI in an ethical, responsible way,” Ms Batch said.

Responsible AI enhances customer experience at IAG

At IAG, we use artificial intelligence to predict whether a motor vehicle is a total loss after a car accident, reducing customer claims processing times from over three weeks to just a few days. The AI tool predicts whether a car is a ‘write-off’ within days, and in some cases hours, helping reduce the emotional impact of a car accident by giving customers more clarity and certainty sooner in the claims experience.

We use our established AI ethics framework and the Australian Government’s voluntary AI ethics principles to identify potential issues or risks prior to launch, including:

  • Human, social and environmental wellbeing: making sure the objective of the project is to benefit IAG’s customers, with no other conflicting objectives, and clearly documenting this to assist with ongoing monitoring.
  • Reliability and safety: experimentation to verify that customers had a positive experience and setting conservative thresholds for modelling to help reduce the likelihood of wrongly predicted total losses.
  • Fairness: careful consideration of the potential benefits and harms of the system, including the distribution of benefits and harms across the population.
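The conservative thresholds mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not IAG's actual model or data: the function names, threshold value and labels are assumptions chosen for illustration. The idea is that the classifier's predicted probability must clear a bar well above 0.5 before a vehicle is automatically flagged as a total loss; borderline cases are referred to a person instead.

```python
# Illustrative sketch only: a conservative decision threshold for a
# hypothetical total-loss classifier. Names and values are assumed,
# not taken from IAG's system.

def classify_total_loss(p_total_loss: float, threshold: float = 0.9) -> str:
    """Flag a claim as a predicted total loss only when the model is
    highly confident; otherwise route the claim to a human assessor.

    Setting the threshold well above 0.5 trades some automation
    coverage for a lower rate of wrongly predicted total losses.
    """
    if p_total_loss >= threshold:
        return "predicted total loss"
    return "refer to human assessor"

# Borderline predictions are escalated rather than auto-decided.
print(classify_total_loss(0.95))  # predicted total loss
print(classify_total_loss(0.60))  # refer to human assessor
```

In this framing, the threshold is a safety lever: raising it reduces the chance of telling a customer their car is a write-off when it is not, at the cost of sending more claims to a human for review.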

IAG is now looking at how responsible and ethical AI can be used to help detect motor claim fraud using advanced analytical techniques. These techniques have been used to help claims consultants settle genuine customer claims sooner.