As AI continues to embed itself into every facet of a company's operations, it is becoming increasingly important for management and boards to ensure this happens in a measured way, with sufficient governance and control systems in place. This article examines how, by responsibly integrating AI into ESG reporting processes, companies can unlock new opportunities to assess and improve their ESG performance while mitigating the risks associated with AI misuse or greenwashing.
Responsible AI (RAI) is an ESG issue
Responsible business: Aligning responsible AI and ESG
Responsible businesses see ESG not as a compliance discipline, but as a mechanism to proactively manage risk, capture opportunities and create strategic value. Responsible AI (RAI) is the practice of developing and using AI systems in a way that benefits individuals, groups, and broader society - while minimising the risk of negative consequences. RAI is increasingly recognised as a critical foundation for enhancing trust in AI systems - and it is a key element of the Australian Government's evolving AI policy framework. As such, overlaying RAI onto existing ESG frameworks is a useful way for Australian organisations and executives to effectively analyse AI-related threats and opportunities.
A defining characteristic of ESG is that it covers issues which are somewhere on the journey from 'societal expectation' to 'stakeholder expectation' to 'soft law' and eventually to 'legal obligation'. A recent development in Australia highlights how RAI expectations may be starting this journey. In June 2024, the Australian eSafety Commissioner was empowered to seek details from online service providers about their consideration of user safety when designing and operating generative AI services, as part of the updated Ministerial Basic Online Safety Expectations (BOSE). These set standards and expectations for online service providers and may signify a step towards more binding obligations in the future.
RAI implementation and governance practices will continue to evolve, and companies that are well advanced in their ESG operations and in meeting ESG commitments will be better positioned to manage RAI in a meaningful, considered way before they are required to by law. Considering RAI in the context of existing ESG frameworks can help business leaders identify and mitigate AI-related risks – and better understand the AI-related opportunities available to all stakeholders.
Safeguarding trust with RAI and ESG
One example of the crossover is data privacy and cybersecurity, which are key ESG issues and are also material to both AI innovation and risk management. The vast amounts of data that are collected and processed by AI systems can pose significant risks if not properly protected and managed. This is particularly relevant in industries such as healthcare, finance, and government - where the consequences of data breaches and the loss of trust may be severe. Companies that prioritise data privacy and cybersecurity as part of their ESG commitments are better positioned to manage the risks associated with AI – including data breaches, unauthorised access, or misuse of personal information, while also fostering responsible innovation.
In addition, ESG considerations should factor into AI use case analysis. Doing so can help illuminate the potential benefits of a particular use case - such as AI's capacity to enhance sustainability efforts, improve social outcomes, or contribute to more effective governance. However, it is equally important to evaluate the potential risks, such as the perpetuation of biases, the displacement of workers, or the erosion of privacy. Applying an ESG lens to use case assessments thus provides a fuller perspective on their potential value and risks.
AI washing and ESG
The rise of AI washing
In the race to capitalise on the promise of AI, mentions of AI in public statements and investor calls have skyrocketed. This trend has also given rise to "AI washing" - a practice similar to greenwashing in the context of ESG. AI washing is the practice of downplaying, exaggerating, or even fabricating a company's AI capabilities to influence public or stakeholder opinion.
A cautionary tale of AI washing
The consequences of AI washing are not hypothetical – they are already materialising. In March 2024, the US corporate regulator, the Securities and Exchange Commission (SEC), fined two investment advisory firms for allegedly making false statements about their use of AI technology in forecasts, and signalled that it is closely scrutinising other investment firms for so-called AI washing. The SEC found that Toronto-based Delphia misled the public about its AI use, falsely claiming to use client data to enhance its AI models, while in reality, no such data integration existed. Similarly, San Francisco-based Global Predictions made claims about providing "AI-driven forecasts" and being the "first regulated AI financial adviser," which were found to be untrue.
Transparency as a pillar of responsible AI
The SEC's actions against Delphia and Global Predictions serve as a warning to other firms about the consequences of misleading stakeholders, and the importance of transparency (a key AI ethics principle). It is essential for companies to back their AI claims with concrete evidence and transparency to maintain investor and stakeholder trust.
There is also a flipside issue to AI washing: a lack of transparency around where, when and how AI is being used by organisations. The EU's AI Act emphasises the need for transparency in AI systems, requiring companies to provide clear information about the capabilities and limitations of their AI technologies. This legislation underscores the growing demand for transparency in AI use across industries.
Responsible AI also enhances transparency. By activating RAI principles and practices (such as public disclosure) alongside existing ESG frameworks, Australian organisations can ensure that their AI efforts are ethical, transparent – and aligned with broader sustainability goals. One novel opportunity to do this lies in leveraging AI itself.
The use of AI to support ESG reporting requirements
A key challenge in addressing ESG is managing the vast amount of data required to meaningfully and comprehensively report on relevant activities and obligations – and AI applications have emerged as valuable tools to help Australian organisations.
AI applications including Natural Language Processing (NLP) and Machine Learning (ML) are used by technology companies to analyse data relating to ESG obligations, including extensive media, stakeholder, and third-party information. These AI-powered tools can generate summaries, extract valuable insights, and even identify controversies or incidents that are relevant to a company's ESG policies and positions. In fact, some AI systems can help combat greenwashing and AI washing by detecting potentially misleading claims about a company's environmental or technological performance.
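To give a flavour of how such claim screening can work, the minimal sketch below uses an off-the-shelf zero-shot classifier (Hugging Face's transformers pipeline with the publicly available facebook/bart-large-mnli model) to flag statements that may warrant further review. The labels and threshold are illustrative assumptions only - not a validated greenwashing or AI-washing detector.

```python
# A minimal sketch of screening disclosure statements with a zero-shot
# classifier. The candidate labels and 0.6 threshold are illustrative
# assumptions, not a production greenwashing/AI-washing detector.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["specific, verifiable claim", "vague or unsubstantiated claim"]

def screen_claims(statements, threshold=0.6):
    """Return statements the model scores as vague or unsubstantiated."""
    flagged = []
    for text in statements:
        result = classifier(text, candidate_labels=LABELS)
        # Results are sorted by score, so index 0 is the top label.
        top_label, top_score = result["labels"][0], result["scores"][0]
        if top_label == LABELS[1] and top_score >= threshold:
            flagged.append((text, round(top_score, 2)))
    return flagged

claims = [
    "Scope 1 emissions fell 12% in FY23, verified by an external auditor.",
    "Our revolutionary AI makes all of our products sustainable.",
]
for claim, score in screen_claims(claims):
    print(f"Flagged for review: {claim!r} (score={score})")
```

In practice, a flagged statement would be routed to a human reviewer rather than treated as a finding in itself - the model only prioritises which claims deserve scrutiny.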
AI is also playing a role in ESG compliance and reporting processes. ESG requirements and reporting often rely on large volumes of data – and emerging AI-powered tools can assist in tracking information and organising it into reports.
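As a simple illustration of that reporting workflow, the sketch below rolls raw activity data up into a summary table using pandas. The column names, business units and emission factor are hypothetical placeholders, not a prescribed reporting schema.

```python
# A minimal sketch of aggregating raw ESG activity data into a report
# table. Columns, business units and the emission factor are hypothetical
# placeholders, not a prescribed reporting schema.
import pandas as pd

records = pd.DataFrame({
    "business_unit": ["Retail", "Retail", "Logistics", "Logistics"],
    "quarter":       ["Q1", "Q2", "Q1", "Q2"],
    "energy_mwh":    [1200.0, 1150.0, 3400.0, 3550.0],
})

# Assumed grid emission factor (tCO2e per MWh) - illustrative only.
GRID_FACTOR_T_PER_MWH = 0.68

records["emissions_tco2e"] = records["energy_mwh"] * GRID_FACTOR_T_PER_MWH

# Summarise energy use and emissions by business unit for the period.
report = (records
          .groupby("business_unit", as_index=False)
          [["energy_mwh", "emissions_tco2e"]]
          .sum()
          .round(1))
print(report.to_string(index=False))
```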
For Australian organisations managing a swelling volume of ESG-related information, AI-driven tools are a promising option.
Building trust in the age of AI-driven sustainability
As the business landscape evolves, the successful integration of AI and ESG will be a key differentiator for companies seeking to build resilience, transparency, and trust in the age of AI-driven sustainability. To find out more, please reach out at any time.