AI-pocalypse now? Exploring the broader impact of AI on society

10 minute read  26.07.2023  Paul Kallenbach, Glen Ward, Maria Rychkova and Aaron Wood

In just a few short months, generative AI has moved from concept to concrete reality, capturing global consciousness in the process.


Key takeouts


  • The growing excitement around generative AI brings both potential benefits and risks. Proponents claim it could boost global GDP by 7%, while critics warn of 'existential risks' and imminent harm.
  • This technology is already changing the way we work and live, with its impact set to increase as development advances. To adapt to this evolving landscape, it's crucial to upskill and leverage AI as a helpful tool.
  • Stay informed on AI's implications for society and its future regulation. Educate yourself, embrace the change, and harness the power of AI responsibly.

The current sound and fury attending generative artificial intelligence (AI) is inescapable: from predictions of a 7% uplift in global GDP, to dire warnings of 'existential risks', loss of life and livelihood, and other potential societal harms. In the midst of the early hype, we break down some of the claims surrounding generative AI, and consider how it might augment the way we work, the broader societal issues it raises, and how regulation might look in the not-too-distant future.

What is generative AI?

Generative AI, popularised by machine learning models such as ChatGPT and Bard, is an application of AI in which computer models are trained on (or 'learn' from) datasets without direct human input. The tech experts might describe it as algorithms powered by neural networks that use large language models to create content by repeatedly predicting what comes next, like the predictive text function on your phone, though far more sophisticated.
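To make the 'predictive text' analogy concrete, here is a minimal, purely illustrative sketch in Python (not how any production model is actually built): a toy model that records which word tends to follow which in a tiny sample text, then generates a sentence by repeatedly sampling a likely next word. Real large language models perform the same next-word prediction, but with neural networks trained on vast datasets.

```python
import random
from collections import defaultdict

# Toy 'language model': record which words follow which in a sample text.
corpus = ("the model learns patterns from data and "
          "the model predicts the next word from patterns").split()

following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
generated = [word]
for _ in range(8):
    candidates = following.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    generated.append(word)

print(" ".join(generated))
```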

Generative AI is currently subject to significant limitations. Many of the models (including ChatGPT) rely on static datasets, without access to the internet or any external information. Generative AI also suffers from frequent 'hallucinations', where gaps in the dataset are filled with predicted information (delivered with the same confidence as actual facts) that doesn't reflect reality. The extent of this limitation was recently highlighted when a US lawyer with 30 years' experience infamously admitted to using ChatGPT to conduct legal research. Startlingly, the generative AI model confabulated six entirely fictional cases, including fabricated quotes and citations.

In addition, the probabilistic basis of generative AI means that wildly different answers can be generated from the same question, and prompts can be self-fulfilling, exacerbating and entrenching human biases (such as 'confirmation bias').
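The variability is easy to demonstrate. The sketch below (again illustrative only, with made-up word scores) samples a 'next word' from the same distribution at different 'temperature' settings, a parameter real models expose to control how random the output is: at a low temperature the answer is nearly always the same, while at a higher temperature the same prompt yields noticeably different answers.

```python
import math
import random

# Hypothetical model scores for the word following "The verdict was ..."
scores = {"guilty": 2.0, "unanimous": 1.5, "overturned": 1.0, "surprising": 0.5}

def sample_next_word(scores, temperature):
    # Softmax with temperature: higher values flatten the distribution,
    # so lower-scoring words are picked more often.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

for temperature in (0.2, 1.0, 2.0):
    samples = [sample_next_word(scores, temperature) for _ in range(5)]
    print(f"temperature={temperature}: {samples}")
```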

Finally, the datasets used for training are often a publicly accessible scrape of the internet (or a subset of it), composed largely of unverified data that is replete with errors, biases and misinformation. These may be reflected in confident-sounding responses that are entirely inaccurate, misleading or inappropriate.

Given the 'black box' nature of generative AI and its reliance on web data as a primary source of information, regulators have raised concerns about the potential for personal information to be used in training datasets. In March 2023, the Italian data protection authority, Garante, temporarily banned ChatGPT whilst it investigated concerns over the protection of personal data. We have also previously discussed concerns raised in relation to copyright protection and generative AI.

The medium term – impact on workers and industries

These limitations and concerns have not dampened the hype. One analysis estimates that two-thirds of occupations are currently at risk of at least partial automation by AI. Significant impacts are predicted across almost every industry, with the banking, high tech and life sciences sectors forecast to experience the greatest revenue uplift. McKinsey has forecast annual global economic gains from generative AI of between US$2.6 trillion and US$4.4 trillion, with 75% of this projected value falling within four areas: software engineering, research and development, customer operations, and sales and marketing.

Similarly, recent figures published by the World Economic Forum predict that AI and machine learning specialists will be the fastest growing job category over the next four years, whilst clerical and secretarial roles account for eight of the ten fastest declining job categories.

[Chart: fastest growing vs. fastest declining jobs]

Technology as a driver of employment growth – or employment decline – is not new: 60% of people currently work in occupations that did not exist in 1940. Technological revolutions have generally displaced the human workforce from manual labour into cognitive occupations. Generative AI differs from its technological predecessors in its capacity to disrupt industries which previously had the lowest potential for automation. Fields that until now were largely untouched by technology, including the creative professions and education, are already feeling the impacts of AI-led disruption.

Some commentators have expressed concern at the risk of humans losing economic value as AI decouples intelligence from consciousness. Until very recently, predominantly cognitive roles could only be performed by humans. However, in roles where intelligence is mandatory but consciousness is optional, unconscious artificial intelligence offers an opportunity to save time and cost and to perform tasks to a higher standard, whilst mitigating the risks that stem from human limitations. These benefits are predicted to be realised in driverless vehicles, algorithmic stock trading, and digital teachers, lawyers and physicians, as well as in AI's creative capacity to generate art, music and literature.

The PwC annual global workforce survey found that almost one-third of respondents were concerned about the prospect of their role being replaced by technology within three years, whilst a Goldman Sachs report has estimated that generative AI could expose the equivalent of 300 million full-time jobs to automation.

Conversely, a research brief published by the MIT Work of the Future task force, entitled Artificial Intelligence and the Future of Work, found that fears of AI leading to mass unemployment are most likely unfounded. Rather, the displacement of tasks by AI can drive innovation and productivity, thereby creating new industries and sectors of growth. These advances will augment how we work and what workplace efficiency looks like. Some predict that this will lead to a divide between individuals who embrace AI and its productivity enhancements and those who avoid incorporating it into their work practices – and that it will be the AI adopters who replace the technology-hesitant.

The longer-term impacts of AI – human rights and existential threats

Generative AI technology is still very much in its infancy, with its full potential yet to be realised. With the broader societal consequences of generative AI not yet known, there have been calls to halt its adoption until a better understanding of the risks is achieved, coupled with appropriate regulatory safeguards.

AI has the potential to infringe on fundamental human rights. The rise of AI-driven surveillance technologies may threaten individuals’ right to privacy, whilst excessive or inappropriate reliance on AI in decision-making could reduce the transparency of, or the ability to interpose human judgment in, consequential decisions.

Bias and discrimination embedded in AI datasets can perpetuate existing inequalities or biases, or marginalise minorities. In June, a US Senate Judiciary Committee panel considered the impact of AI on human rights. Particular concerns were raised regarding generative AI's role in surveillance, including law enforcement's use of facial recognition technology. The potential to incorrectly identify individuals has led to people – disproportionately women and people of colour – being wrongly accused of crimes. Concerns were also raised during the hearing about how generative AI can exacerbate online misinformation by making it faster and cheaper to produce more convincing (though entirely misleading) text, images and video, and how this threat may undermine democratic governance.

Probability of AI-caused doom – P(doom)

As the capabilities of generative AI continue to increase, so too have concerns about the potential for an AI-caused catastrophic or apocalyptic disaster. This probability of doom is sometimes referred to as 'P(doom)'.

Recently, leading AI scientists and notable figures issued a statement calling for the mitigation of the risk of extinction from AI to be treated as a global priority alongside other societal-scale risks, such as pandemics and nuclear war. Similar concerns were raised in an open letter published by the Future of Life Institute, signed by Elon Musk amongst a host of other technologists, which called on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 on the basis, amongst other things, that we run the risk of losing control of our civilisation.

In May 2023, Paul Christiano, a former OpenAI researcher who now runs the AI research non-profit Alignment Research Center, claimed that once AI systems reach or surpass the cognitive capacity of a human, there is a '50/50 chance of doom'. At around the same time, Dr Geoffrey Hinton, dubbed the 'Godfather of AI', quit his role at Google, citing concerns over the flood of misinformation, the possibility of AI upending the job market, and the existential risk posed by the creation of a true digital intelligence. In particular, Hinton pointed to the exploitation of generative AI by bad actors, citing a scenario in which an authoritarian leader gives generative AI the ability to create its own sub-goals, such as 'I need to get more power', and the consequences that could follow from that instruction.

P(doom) scenarios generally refer to unintended consequences stemming from AI systems employed in military conflicts, or to malicious use by rogue actors; however, the veil of uncertainty that shrouds AI capability makes it difficult to assess how compelling these risks are.

Looking to the future of AI regulation

'AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any "pause button". On the contrary, it is about acting fast and taking responsibility.' – Thierry Breton, European Commissioner for the Internal Market

The long-term implications of generative AI centre on privacy, freedom of expression, discrimination, and labour displacement. Companies and governments are alive to these risks and have the ability to mitigate them through effective upskilling of the workforce and the responsible development, transparency and accountability of AI systems.

Recently, the European Parliament approved its negotiating position on the AI Act, the world's first comprehensive AI law. The AI Act will apply to anyone who develops or deploys AI systems in the EU, including companies located outside the bloc. It classifies AI applications by risk, from 'minimal' to 'unacceptable', with the latter banned outright. Banned applications include real-time facial recognition systems in public spaces, predictive policing tools, and social scoring systems.

Tight restrictions are also placed on AI applications that threaten 'significant harm to people's health, safety, fundamental rights or the environment'. These can include systems used to influence voters in elections and on social media platforms. Additionally, the AI Act includes transparency requirements for generative AI, such as requiring providers to disclose that content is AI-generated, distinguish deep-fake images from authentic ones, provide safeguards against the generation of illegal content, and publish detailed summaries of any copyrighted data used for training. Member states are required to establish at least one regulatory 'sandbox' to test AI systems before they are deployed publicly, as well as designating national supervisory authorities for AI regulation.

Engaging in a prohibited practice under the AI Act could lead to a fine of up to $43 million or up to 7% of a company's worldwide annual turnover, whichever is higher. These penalties are even higher than those imposed under Europe's General Data Protection Regulation (GDPR), which fined Meta $1.3 billion in May 2023 (for comparison, the GDPR's lower tier of fines is capped at $10.8 million or 2% of global turnover).
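For illustration only, the headline penalty figures cited above combine as a simple 'whichever is higher' calculation (the dollar amounts are the draft-stage figures quoted in this article, not final legislated values):

```python
def max_ai_act_fine(worldwide_annual_turnover: float) -> float:
    """Upper bound on a fine for a prohibited practice, using the
    draft figures cited above: $43 million or 7% of worldwide
    annual turnover, whichever is higher."""
    return max(43_000_000, 0.07 * worldwide_annual_turnover)

# A company with $2 billion turnover faces up to $140 million;
# a smaller company is still exposed to the $43 million floor.
print(f"${max_ai_act_fine(2_000_000_000):,.0f}")  # $140,000,000
print(f"${max_ai_act_fine(100_000_000):,.0f}")    # $43,000,000
```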

As an EU Regulation, the AI Act will apply automatically and uniformly across all EU countries as soon as it enters into force, without needing to be transposed into national law. However, it is important to note that, under the current draft of the Regulation, the AI Act will not apply until 36 months after it enters into force, with many predictions citing 2026 as the likely date. Accordingly, any meaningful regulation in the interim will require EU lawmakers to work actively with technology companies to establish a voluntary interim pact.

In Australia, the Department of Industry, Science and Resources launched a public consultation on Safe and Responsible AI in Australia on 1 June 2023, with submissions open until 26 July 2023. The House of Representatives Standing Committee on Employment, Education and Training has also recently launched an inquiry into the use of generative AI in the Australian education system, taking submissions on the benefits, future risks and impacts of generative AI tools in Australia's education system.

‘The question is who is going to benefit? And who will be left behind?’

Generative AI is already upending administrative and creative processes across multiple industries. It carries the promise of increased efficiency and productivity, but poses untold threats to the livelihoods of millions, whilst its darker implications could affect the lives of billions.

This situation is not new. Every technological advancement has brought with it a balance of opportunity and risk, and the advent of generative AI is no different.

To adapt to this evolving landscape, it is crucial for organisations and their personnel to upskill; to stay informed of AI regulation and its societal implications; to embrace change; and to harness the power of these new technologies lawfully, thoughtfully and responsibly.


The team at MinterEllison can assist you in understanding the legal issues and risks associated with AI for your organisation. Contact us to find out more.
