In this episode of our podcast series, Transforming business with MinterEllison: ideas and challenges that are shaping our future, we discuss artificial intelligence (AI): what the technology looks like in practice today, and how it can be used ethically and responsibly.
While living, breathing, feeling AI-powered robots aren't a reality yet, narrower forms of artificial intelligence are already part of our daily lives – and sometimes we don't even notice. Every day, we interact with AI through games, chatbots, mobile devices and other tools.
The use of this technology raises some important questions about ethics. For example, how can businesses, government and individuals ensure that this technology is being used to make a positive impact on society? How do you regulate artificial intelligence? And who is accountable if something goes wrong?
To explore these issues, we spoke with technology partner Paul Kallenbach and Chief Digital Officer Gary Adler.
AI is redefining human interaction. A raft of consumer products is already in wide use – in voice, for example, with Alexa, Google Assistant and Siri. We're just at the beginning of that journey in terms of AI, voice and intelligent assistants, and we're expecting them to take on many of the day-to-day tasks that human beings currently manage.
However, at least for the foreseeable future, machines don't have empathy or judgement. By taking care of the more menial tasks, AI frees humans to focus on the more complex elements of work; the relationship between the two is complementary.
One of the legal and ethical issues AI raises concerns bias and discrimination. Bias may be inherent in the data used to train the machines. There are many examples of this, such as an online recruitment tool that developed a bias against women because it was trained on the historical hiring data of a company that had predominantly hired men. The ramifications are clearly problematic, both ethically and legally.
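To make the mechanism concrete, here is a minimal Python sketch of how a model trained on skewed historical hiring data simply reproduces that skew. All records, field names and numbers below are invented for illustration; real recruitment models are far more complex, but the failure mode is the same.

```python
from collections import Counter

# Invented historical hiring decisions, skewed towards male hires in the
# way the recruitment-tool example above describes.
history = (
    [{"gender": "M", "hired": True}] * 80
    + [{"gender": "M", "hired": False}] * 20
    + [{"gender": "F", "hired": True}] * 10
    + [{"gender": "F", "hired": False}] * 40
)

# A naive "model" that learns nothing but the historical hire rate per group.
hires = Counter(r["gender"] for r in history if r["hired"])
totals = Counter(r["gender"] for r in history)
hire_rate = {g: hires[g] / totals[g] for g in totals}

print(hire_rate)  # {'M': 0.8, 'F': 0.2} -- the model has inherited the bias

# A new candidate is now scored on group membership alone: past
# discrimination becomes future discrimination, with no malice in the code.
print("predicted hire probability:", hire_rate["F"])  # 0.2
```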
It's predicted that, by 2023, 30% of all algorithms will have another algorithm sitting over the top of them to supervise them and check for bias. It has become a real issue that data scientists are trying to address.
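What such a supervising algorithm might look like is still an open question; one simple form is an automated fairness audit. The sketch below is a hypothetical illustration, assuming an audited model that exposes a `predict(candidate)` method returning a boolean (an invented interface, not any particular library's API). It computes a demographic parity gap – the spread in positive-prediction rates across groups – and rejects the model when that gap exceeds a threshold.

```python
def demographic_parity_gap(model, candidates, group_key="gender"):
    """Spread between the highest and lowest positive-prediction rates
    across groups -- one simple (and imperfect) fairness metric."""
    counts = {}  # group -> (positive predictions, total seen)
    for c in candidates:
        pos, total = counts.get(c[group_key], (0, 0))
        counts[c[group_key]] = (pos + int(model.predict(c)), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

def audit(model, candidates, threshold=0.1):
    """The 'algorithm that sits over the top': refuse models whose
    predictions diverge too much between groups."""
    gap = demographic_parity_gap(model, candidates)
    if gap > threshold:
        raise RuntimeError(f"bias audit failed: parity gap {gap:.2f}")
    return gap

class SkewedModel:
    """Stand-in model that recommends men more often, for demonstration."""
    def predict(self, candidate):
        return candidate["gender"] == "M"

pool = [{"gender": "M"}] * 50 + [{"gender": "F"}] * 50
audit(SkewedModel(), pool)  # raises RuntimeError: parity gap 1.00
```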
AI raises new questions around intellectual property (IP) and copyright. Australian and overseas copyright legislation was not born in the age of AI. Australian copyright law requires that there be a human author behind the work. However, with the advent of technology that creates art, writing and other content, there is not always a human author. Rather, the author may be computer code.
Likewise, privacy law does not always account for machine learning and natural language processing in artificial intelligence. Privacy law governs identifiable information, or information that is reasonably identifiable. In a world of AI and machine learning, anonymous information will become increasingly rare as techniques for re-identifying information using massive datasets improve. This raises important questions about how a person's privacy can be protected.
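The re-identification techniques referred to here are often as simple as a linkage attack: joining an "anonymised" dataset with a public auxiliary dataset on shared quasi-identifiers such as postcode, birth year and gender. A minimal sketch, with all records and field names invented:

```python
# "Anonymised" records: names removed, quasi-identifiers retained.
anonymised = [
    {"postcode": "3000", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "3121", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

# Public auxiliary data (e.g. scraped profiles) with names attached.
public = [
    {"name": "Alice Example", "postcode": "3000", "birth_year": 1985, "gender": "F"},
    {"name": "Bob Example", "postcode": "3121", "birth_year": 1990, "gender": "M"},
]

QUASI = ("postcode", "birth_year", "gender")

def reidentify(anon_rows, aux_rows):
    # Index the auxiliary data by its quasi-identifier tuple, then join.
    index = {tuple(r[k] for k in QUASI): r["name"] for r in aux_rows}
    matches = []
    for row in anon_rows:
        key = tuple(row[k] for k in QUASI)
        if key in index:
            matches.append({"name": index[key], **row})
    return matches

for match in reidentify(anonymised, public):
    print(match["name"], "->", match["diagnosis"])
```

The larger the auxiliary dataset, the more of these quasi-identifier combinations become unique – which is exactly why truly anonymous information is becoming rare.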
“Technology in general is moving at such a rapid rate that regulation is constantly playing catch-up.”
Listen to the full discussion, including examples of AI in action, in our podcast.