Trusting the ‘Black Box’ in an Era of AI Revolution
Written by Ava Emdadian
For decades, Artificial Intelligence (AI) has been an engine of disruption: computers assess our health better than doctors, algorithms outsmart financiers, and human leaders and researchers are out-invented. Whilst on the surface AI appears to deliver tremendous innovation, cost and efficiency benefits, it would be naive to trust the ‘black box’ blindly. Black box AI produces outputs without revealing the logic behind them, giving rise to ethical and legal questions and a transparency dilemma around privacy and surveillance, bias and discrimination.
The black boxes of AI and big data operate with deliberate secrecy and complexity. Even if they could be deconstructed, would powerful Internet and finance corporations be willing to expose their methods publicly? Sitting at the heart of the information economy, these companies accumulate vast stores of consumer data, posing threats to customers who cannot understand exactly how their data is used. Companies use this data not only to make important decisions about customers, but also to influence the decisions consumers make for themselves. Insurance and open banking are two sticking points.
Should insurance underwriters, for instance, be legally entitled to use AI-interpreted data to determine life insurance premiums? Insurers can already mine social media content, deeming an individual ‘high-risk’ on the basis of a single Instagram post about rock climbing. Likewise, open banking has raised the possibility of compromised data privacy, as data-sharing is used to offer more creative and personalised banking products. Research, however, shows consumer unease about trusting companies to handle private information with AI: 83% of consumers were hesitant to purchase insurance online without real human interaction, and 59% felt uncomfortable transferring data to a third party, regardless of brand reputation and trustworthiness.
Moreover, increasing AI use has raised new ethical dilemmas around bias and discrimination. Outsourcing decisions to AI may in fact produce unfair or biased decision-making based on socially sensitive attributes such as race, gender or sexual orientation. Research shows, for example, that employers using AI for recruitment were 50% more likely to shortlist applicants with Caucasian-sounding names than those with African American-sounding names.
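This kind of disparity can be surfaced with a very simple audit of a model’s outputs. The sketch below is purely illustrative: the shortlisting data and group labels are hypothetical, and the 80% ratio used as a threshold is only a common rule of thumb (the “four-fifths rule”), not a feature of any particular recruitment system.

```python
# Minimal, hypothetical sketch: auditing shortlisting decisions for
# name-based disparity. The data below is invented for illustration.

shortlist_decisions = [
    # (perceived_name_group, was_shortlisted)
    ("caucasian_sounding", True), ("caucasian_sounding", True),
    ("caucasian_sounding", False), ("caucasian_sounding", True),
    ("african_american_sounding", True), ("african_american_sounding", False),
    ("african_american_sounding", False), ("african_american_sounding", False),
]

def shortlist_rate(group):
    """Share of applicants in the group who were shortlisted."""
    decisions = [hired for g, hired in shortlist_decisions if g == group]
    return sum(decisions) / len(decisions)

rate_a = shortlist_rate("caucasian_sounding")
rate_b = shortlist_rate("african_american_sounding")

# Four-fifths rule of thumb: a selection rate below 80% of the highest
# group's rate is commonly treated as a sign of possible adverse impact.
ratio = rate_b / rate_a
print(f"Shortlist rates: {rate_a:.0%} vs {rate_b:.0%} (ratio {ratio:.2f})")
if ratio < 0.8:
    print("Possible adverse impact: review the model and its training data.")
```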
As AI implementation continues to surge, so does society’s anxiety about losing control to an AI revolution. Striking a balance between the desire for innovation and the need to regulate new technology adequately therefore remains a challenge for both companies and lawmakers. Existing laws, including Australia’s Privacy Act 1988 and Privacy (Credit Reporting) Code 2014, the United Kingdom’s Data Protection Act 1998, and Europe’s General Data Protection Regulation (GDPR) and Payment Services Directive 2 (PSD2), have been put in place to protect consumer data and prevent bias and discrimination. However, a more rigorous response from organisations and lawmakers is needed to improve transparency.
One way to achieve this is to shift from a ‘black box’ approach to AI to a ‘glass box’ approach, whereby algorithms and their use of data are interpretable. With opacity at the heart of the black box problem, glass boxing offers a path to rebuilding the trust of consumers who would otherwise be left wondering how their personal data is used.
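What a ‘glass box’ can look like in practice is a model whose decision rules can be read directly. The sketch below is a minimal illustration, assuming the scikit-learn library is available; the feature names, data and ‘high premium’ label are hypothetical, not drawn from any real insurer.

```python
# Minimal 'glass box' sketch: a shallow decision tree whose rules can be
# printed and inspected, unlike an opaque black-box model.
# Features, data and labels are hypothetical; scikit-learn is assumed.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["age", "smoker", "annual_claims"]
X = [
    [25, 0, 0],
    [40, 1, 2],
    [35, 0, 1],
    [60, 1, 3],
    [50, 0, 0],
    [30, 1, 1],
]
y = [0, 1, 0, 1, 0, 1]  # hypothetical 'high premium' flag

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire model renders as human-readable rules, so a customer or a
# regulator can see exactly which inputs drove a given decision.
print(export_text(model, feature_names=features))
```

A transparent model of this kind may sacrifice some predictive power compared with a deep black-box system, but it makes the trade-off between accuracy and accountability explicit rather than hidden.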