Increased AI regulation? Final Senate Report key takeaways

04.12.2024 | Sam Burrett, Chelsea Gordon, Shane Evans and Vanessa Mellis

The Senate Select Committee on Adopting Artificial Intelligence has delivered its final report, signalling a clear direction for regulation in Australia.


Key takeaways


  • The Report recommends comprehensive AI-specific legislation, moving beyond the current voluntary framework.
  • Enhanced focus on workplace impacts of AI, with new obligations proposed for employers.
  • Organisations should begin preparing for more structured AI governance requirements.

The Senate Select Committee on Adopting Artificial Intelligence (AI) (Committee) delivered its final report on 26 November 2024. The Committee's recommendations represent a material shift from the current voluntary, principles-based approach to AI towards a mandatory regulatory framework with specific obligations for high-risk AI applications. Australian organisations developing or deploying AI technologies should understand these recommendations to inform future planning and compliance preparations.

Background on the Committee formation

On 26 March 2024, the Committee was established to consider the uptake and implications of AI technologies across Australia. The Committee received written submissions from the public. Its final report (Report) was tabled on 26 November 2024. The Committee considered various evidence, including in relation to AI uptake trends, risks and harms of AI adoption, international approaches to AI regulation, and environmental impacts of AI technologies.

Three Key Implications

The Report contains 13 recommendations spanning AI regulation, workplace safety, and intellectual property rights. While all recommendations warrant attention, we have identified three key implications that will have an immediate and significant impact on Australian organisations.

Takeaway 1: Whole-of-Economy AI Act

The Committee has recommended introducing a comprehensive AI Act to regulate high-risk AI uses across the Australian economy. This proposes a model similar to the EU AI Act and the approach taken by Canada, rather than the United States' sector-specific approach. The proposed legislation would establish mandatory guardrails for high-risk AI and clear accountability mechanisms across the AI supply chain. The Committee advocates for a principles-based definition of high-risk AI, supported by a non-exhaustive list of applications, while allowing for targeted reforms to existing legislation where specifically warranted.

Our Analysis:

The Committee's stance on a whole-of-economy AI Act is a significant moment in the debate about regulation of AI in Australia. Although the Australian Government is not bound to accept this recommendation, we expect it will be highly influential. There are three key implications of this approach that Australian leaders should contemplate:

  • First, an AI Act would establish clear guardrails for Australian organisations. While this may increase the costs associated with AI development and deployment, a unified regulatory framework could also streamline compliance compared to the current patchwork of regulations.
  • Second, there is a risk the whole-of-economy approach could result in duplication of concepts and obligations. The Committee is alive to this and states the specific implementation of the approach should ‘seek to minimise this risk’. Careful attention will be required to harmonise new requirements with existing obligations.
  • Finally, we have observed varied perspectives in submissions to the Committee, particularly from large technology companies, expressing concern that regulation should enable, rather than stifle, innovation. Further industry engagement could shape the final regulatory regime, and Australian organisations should develop governance frameworks that are flexible enough to enable compliance while acknowledging that the landscape is still evolving.

Takeaway 2: Workplace Health and Safety

The Committee has taken a strong position on AI in the workplace. It has recommended that the definition of 'high-risk' AI should include AI that impacts the rights of people at work - a position that goes beyond the EU's regulatory approach. If the Committee's recommendations are implemented, businesses may be required to undertake risk assessments and employee consultation processes before implementing AI systems that affect workers. The Committee also recommends that existing WHS frameworks be extended to cover AI-related risks.

Our analysis:

  • The classification of workplace AI as 'high-risk' would significantly expand compliance obligations for several common use cases of AI in the workplace, such as AI resume scanning services, automated rostering systems, or automated performance evaluations. Australian organisations should closely examine existing workplace AI applications and consider what guardrails and controls would need to be implemented if these systems were classified as 'high-risk'.
  • The Committee expressed strong concern about the impacts of AI on workers' rights and working conditions, particularly around workforce planning, management and surveillance. The Committee said there was 'considerable risk these invasive and dehumanising uses of AI in the workplace undermine workplace consultation and workers' rights more generally.' This suggests organisations should, for example, develop clear protocols around automated management systems.
  • We're already seeing the impact of AI across customer service operations and employee productivity tools. In our view, organisations that embrace responsible AI implementation - with robust worker protections and clear governance - are likely to see enhanced productivity and reduced operational risks in the long term.
  • We recommend organisations begin reviewing their AI implementations through a WHS lens now. We also recommend employers put in place clear directions to workers about how AI should and should not be used in connection with their work.

Takeaway 3: Copyright and Intellectual Property (IP)

Three of the Committee's recommendations relate specifically to copyright and IP. The Committee has taken a firm stance on the use of copyrighted material in AI training, recommending mandatory transparency requirements and compensation mechanisms for rightsholders.

The Committee recognised that AI has significantly impacted creative industries, notably describing use of copyrighted work by some ‘multinational technology companies’ as ‘theft’. The Committee has urged the Australian Government to urgently undertake consultation with the creative industry on various issues, including to consider an appropriate mechanism to ensure fair remuneration is paid to creators ‘for commercial AI-generated outputs based on copyrighted material used to train AI’.

Our perspective

There are already copyright and IP protections in law. It is therefore unsurprising the focus of the Committee’s recommendations was to enhance transparency and consider the mechanisms that should be implemented to ensure fair remuneration for use of copyright material.

Other recommendations

Other key recommendations of the Committee include:

  • Environmental impact management: The Committee recommended a ‘coordinated holistic approach’ to managing AI infrastructure growth in Australia, particularly given projections that data centres could consume up to 15% of Australia’s total energy use by 2030. Recommendations include comprehensive environmental reporting and planning requirements for AI facilities;
  • Automated decisions: That the Government should introduce a right for individuals to request meaningful information about how automated decisions with significant effects are made, and implement a consistent legal framework to specifically monitor automated decisions in government services;
  • General purpose AI: That the definition of 'high-risk' AI should include general-purpose AI, such as large language models. This position would directly impact major tech companies operating in Australia;
  • Infrastructure: That the Government should take a comprehensive approach to managing AI infrastructure in Australia; and
  • Investment: That the Government should continue to increase the financial and non-financial support it provides for sovereign AI capability in Australia, focussing on Australia's existing areas of comparative advantage and unique First Nations perspectives.

Implications for Business

The Report indicates that increased AI regulation is on the horizon. Exactly how close or far away is unclear. What we do know is that organisations should take steps now to ensure AI use within their organisation (especially in high-risk contexts) complies with law and is appropriately governed.

We recommend organisations take a staged approach to preparing for the expected regulatory changes ahead. Below we have outlined a list of key actions organisations should take in the immediate future (e.g. over the next 3 months) to prepare for Australia's pending regulatory AI framework.

Immediate Actions (next 3 months)

  • Conduct an audit of current AI systems and use cases across the organisation, including automated decision-making systems and workplace AI tools, to develop a current state analysis.
  • Review existing governance frameworks against the Voluntary AI Safety Standard and the Committee's recommendations, particularly focusing on risk assessment and accountability measures. If AI is being used by your organisation without appropriate governance settings, consider developing a Governance Framework or seeking external advice.
  • Develop a risk assessment framework that specifically addresses high-risk applications (including in the workplace), while also considering broader impacts of AI use such as environmental impacts.
  • Establish cross-functional oversight committees with clear documentation requirements for AI decision-making processes.
  • Map current workplace consultation processes for AI implementation against proposed WHS requirements.
  • Include regular stakeholder consultations and clear protocols for reviewing and updating AI governance practices as regulatory requirements evolve.

These actions should be viewed as preparatory steps rather than final solutions, as the specific requirements will depend on the Government's response to the Committee's recommendations and any resulting legislation.

Organisations should also review the Voluntary AI Safety Standard and Proposed Mandatory Guardrails, which the Committee indicated should inform the Government’s approach to AI moving forward.

Looking Ahead

While these recommendations are not yet law, they signal a clear direction for AI regulation in Australia. The Committee's emphasis on transparency, accountability, and worker protection suggests a more robust regulatory environment is likely, and preparations should commence now to establish or uplift AI Governance practices.

The proposed reforms would require significant changes to current business practices, particularly in relation to workplace AI applications and automated decision-making systems. However, the Committee's risk-based approach also suggests that low-risk AI applications may face minimal regulatory burden.


For today and tomorrow, whether you need protection against AI risks (including in relation to compliance with changing legal frameworks), or are shaping your organisation with responsible and ethical AI to enhance and elevate capability, our nationwide AI Client Advisory team will guide you through your AI adoption journey – from insight, to strategy and implementation.

Our AI expertise includes legal and policy, risk, workforce, privacy, data protection and cyber, procurement, strategy, and a co-creation model to develop tailored solutions for your organisation (ME AI). Operating with the highest standards of independence and trust as a firm for almost 200 years, our nationwide AI experts have the know-how and experience to help you make the best decisions, faster.
