The robots are coming: artificial intelligence, ethics and the law

3 minute read  03.02.2020 Paul Kallenbach, Gary Adler

How is artificial intelligence changing our daily lives? How do we ensure that change is for the better?

Search for the podcast in your favourite podcast app or listen below. 


In this episode of our podcast series, Transforming business with MinterEllison: ideas and challenges that are shaping our future, we discuss artificial intelligence (AI) technology, what that looks like in today's reality and how it can be used ethically and responsibly.

While living, breathing, feeling AI-powered robots aren't a reality yet, narrower, task-specific forms of artificial intelligence are already part of our daily lives – and sometimes, we don't even notice. Every day, we interact with AI through games, chatbots, mobile devices and other tools.

The use of this technology raises some important questions about ethics. For example, how can businesses, government and individuals ensure that this technology is being used to make a positive impact on society? How do you regulate artificial intelligence? And who is accountable if something goes wrong?

To explore these issues, we spoke with technology partner Paul Kallenbach and Chief Digital Officer Gary Adler.

Human interaction with AI technology

AI is redefining human interaction. A raft of consumer products is already in wide use, most visibly voice assistants such as Alexa, Google Assistant and Siri. We're just at the beginning of that journey in terms of AI, voice and intelligent assistants, and we're expecting them to handle many of the day-to-day tasks that human beings currently manage.

However, at least for the foreseeable future, machines don't have empathy or judgement. By taking care of the more menial tasks, AI technology empowers humans to focus on the more complex elements of work, and the relationship between the two is complementary.

Countering bias and discrimination in AI

One of the legal and ethical issues AI raises concerns bias and discrimination. Bias may be inherent in the data used to train the machines. There are many examples of this, such as an online recruitment tool that developed a bias against women because the company using it had historically hired more men – the model learned that pattern from its training data. The ramifications are clearly problematic, both ethically and legally.

It's been predicted that, by 2023, 30% of algorithms will have another algorithm sitting over the top of them to supervise their outputs and check for bias. It has become a real issue that data scientists are trying to address.
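To make the idea of a supervising check concrete, here is a minimal, hypothetical sketch (in Python) of the kind of fairness test such an oversight layer might run over a model's decisions. It is illustrative only: the group labels, numbers and the 'four-fifths' guideline threshold are assumptions for the example, not details from the podcast.

```python
# Hypothetical sketch: compare selection rates across groups in a screening
# model's decisions and compute a simple disparate impact ratio.

def selection_rates(decisions):
    """decisions: iterable of (group, shortlisted) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, shortlisted in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if shortlisted else 0)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest (1.0 means parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Invented shortlisting outcomes from a CV-screening model.
outcomes = ([("men", True)] * 60 + [("men", False)] * 40
            + [("women", True)] * 30 + [("women", False)] * 70)

ratio, rates = disparate_impact(outcomes)
print("Selection rates:", rates)           # {'men': 0.6, 'women': 0.3}
print("Disparate impact ratio:", ratio)    # 0.5, well below the 0.8 'four-fifths' guideline
```

In practice, a supervising layer might run a check like this continuously and flag, or pause, the underlying model when the ratio drifts below an agreed threshold.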

Intellectual property and privacy in AI

AI raises new questions around intellectual property (IP) and copyright. Australian and overseas copyright legislation was not born in the age of AI. Australian copyright law requires that there be a human author behind the work. However, with the introduction of technology that creates art, writing and other content, there is not always a human author. Rather, the 'author' may be computer code.

Likewise, privacy law does not always account for machine learning and natural language processing in artificial intelligence. Privacy law governs personal information – information about an individual who is identified or reasonably identifiable. In this world of AI and machine learning, anonymous information will become increasingly rare, as the techniques to re-identify information using massive datasets improve. This raises important questions about how a person's privacy can be protected.
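As a rough illustration of why 'anonymised' data can become re-identifiable, the sketch below links a de-identified dataset back to named individuals using a handful of shared attributes (so-called quasi-identifiers). Every record, name and field in it is invented for the example.

```python
# Hypothetical sketch: re-identification by linking an 'anonymised' dataset
# to an auxiliary public register on shared quasi-identifiers.

anonymised_records = [
    {"postcode": "3000", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "3141", "birth_year": 1972, "gender": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Jane Citizen", "postcode": "3000", "birth_year": 1985, "gender": "F"},
    {"name": "John Smith", "postcode": "3141", "birth_year": 1972, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def link(records, register):
    """Match 'anonymous' records to named people who share the same quasi-identifiers."""
    matches = []
    for record in records:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        candidates = [p for p in register
                      if tuple(p[k] for k in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:   # a unique match re-identifies the person
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(link(anonymised_records, public_register))
# [('Jane Citizen', 'asthma'), ('John Smith', 'diabetes')]
```

The richer the auxiliary datasets available, the more often a combination as mundane as postcode, birth year and gender points to exactly one person, which is why removing names alone is rarely enough.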

Technology in general is moving at such a rapid rate that regulation is constantly playing catch-up.

How organisations can ethically and responsibly introduce AI technology 

  1. Start small when introducing new AI technology. Seek use cases that are low in complexity and have high impact. The more complex the technology, the higher the investment and the risk – meaning that it could be difficult to predict the return on investment.
  2. Genuinely adopt a mindset of experimentation and curiosity, and to some extent, be open to failure. There is a lot of technology being pushed through many industries at a rapid rate – much of it untested.
  3. Don't run AI tools as back-end IT projects. Rather, build out an agile, multidisciplinary team that includes key members of the business and, where it makes sense, your clients and customers. It's all about creating a joint outcome, which will ultimately ensure shared passion and ownership.
  4. Think ethically about the purpose for undertaking the project. Don't just ask, 'can we?', but ask, 'should we?'. 

Listen to the full discussion, including examples of AI in action, in our podcast

Search for the podcast in your favourite podcast app or listen below. 
