As artificial intelligence (AI) and automated decision-making (ADM) technologies rapidly transform the world of work, Australian employers face a complex and shifting regulatory environment. On one side, employers argue that these technologies create opportunities for increased productivity, efficiency and innovation. On the other, unions argue that they carry risks ranging from potential discrimination in hiring to concerns over workplace surveillance and job displacement.
AI was a hot topic leading into the Productivity Roundtables in August this year, where union representatives pushed for a strong regulatory framework and greater worker voice in the adoption of AI in the workplace. If adopted, the union proposals have the potential to impede employers’ ability to implement productivity measures in their workplaces.
While the union movement continues to push these reforms, the Australian Government has been careful not to commit to a position yet, instead announcing a regulatory “gap analysis” to determine whether legislative change is necessary, including a review of workplace laws. This work forms part of the Australian Government’s National AI Plan, which is currently open for consultation.
While the review and the National AI Plan are not expected to be completed until the end of the year, key figures in the Australian Government have already endorsed union calls for a greater voice in the adoption of AI in the workplace. This means the issue remains one to watch in what could be the next major industrial relations contest for employers.
In this article we consider employers’ existing obligations with respect to the use of AI and ADM technologies in the workplace and how they should navigate the shifting regulatory environment into the future.
Existing regulatory framework
Despite claims from the union movement to the contrary, Australia’s existing workplace laws already provide a foundation of protections that are relevant to the use of AI and ADM.
Unfair dismissal laws, anti-discrimination statutes, adverse action provisions and work health and safety (WHS) legislation all play a role in safeguarding employees. For example, even if an algorithm makes a decision to terminate employment without human oversight, the employer remains liable under unfair dismissal laws. The Fair Work Commission (FWC) would still require a valid reason for dismissal and would assess whether the process was fair and reasonable.
Discrimination law presents a more nuanced challenge. While intent is not required for a finding of discrimination, the law typically contemplates actions taken by a person. This raises questions about liability when a decision is made solely by an algorithm, such as where an employer may rely on ADM technology to vet prospective employees, in circumstances where candidates may inadvertently be rejected for discriminatory reasons.
While some may point to this as a “regulatory gap”, the general protections provisions under the Fair Work Act 2009 (Cth) (FW Act) arguably capture circumstances where a prospective employee has been rejected for discriminatory reasons, regardless of whether a human or an algorithm made the decision. In practice, an employer would find it difficult to overcome the hurdle of a reverse onus of proof if relying exclusively on ADM technologies for recruitment purposes.
Workplace surveillance is governed by a patchwork of state and territory laws, as well as WHS obligations. While these laws may be in need of modernisation to keep pace with technological change, they do provide a level of protection against unreasonable monitoring and data collection.
Consultation requirements are another area of focus. Most employees are covered by modern awards or enterprise agreements that mandate consultation when major changes, such as the introduction of new technology, are likely to have a significant effect on employees. These obligations are broad enough to encompass AI and ADM, ensuring that employees and their representatives are involved in discussions about technological change. Recent committee reports and union submissions argue that consultation duties are sometimes “obviated by employers” and may lack transparency in practice, creating uncertainty over whether AI deployment constitutes a major change triggering formal consultation. While there is little evidence that this is the case, the argument is quickly gaining support within the Federal Cabinet.
Federal Minister for Industry, Innovation and Science, Senator Tim Ayres, who is the Government’s lead on all things AI, has publicly endorsed a greater union voice in the adoption of AI in workplaces. Similarly, Assistant Treasury Minister, Dr Andrew Leigh MP, recently said that the union movement made the case at the Productivity Roundtable that “workers must be partners in shaping how AI is deployed, not passive recipients of decisions made in corporate boardrooms”.
Where to from here?
Recent developments indicate that specific regulation of AI in the workplace is not just a possibility but is already here. Notably, the introduction of a statutory Digital Labour Platform Deactivation Code for gig economy platforms and proposed amendments to the Workers Compensation Act 1987 (NSW) signal a move towards greater oversight of automated systems.
The New South Wales workers compensation changes, currently before Parliament, are particularly novel and may provide a blueprint for similar laws in other jurisdictions. They seek to link work health and safety risks with workplace surveillance and “discriminatory” decision-making, providing union officials with specific entry rights to inspect “digital work systems” to investigate suspected breaches of the law.
These reforms purportedly aim to ensure human oversight in key decisions, prevent unreasonable performance metrics and surveillance, and grant unions increased powers.
At the Federal level, unions, led by the Australian Council of Trade Unions (ACTU), are advocating for mandatory “AI Implementation Agreements” that would require employers to consult with staff before introducing new AI technologies. These agreements would guarantee job security, skills development, retraining, and transparency over technology use. Additional union proposals include a right for workers to refuse to use AI in certain circumstances, mandated training, reforms to surveillance laws, and expanded bargaining rights related to AI adoption.
In a public critique of the Productivity Commission’s interim report on AI, the ACTU argued that the Productivity Commission had adopted a “let it rip” stance and reiterated its call for a “dedicated AI Act and a well-resourced regulator”. It also opposed any copyright changes that would diminish workers’ rights or allow their work to train AI systems without meaningful consent.
While the Australian Government appears to be moving away from a dedicated AI Act, recent supportive comments from key members of the Australian Government indicate that employers should be prepared for more targeted legislative changes which give workers and unions greater voice in the adoption of AI in the workplace.
Practical steps for employers
In this evolving landscape, employers should take proactive steps to manage both legal compliance and workforce relations. In practice, organisations should:
- Ensure human oversight: maintain human involvement in significant employment decisions made using AI or ADM, particularly in hiring, firing, promotion and performance management.
- Conduct AI risk assessments: evaluate the potential bias, privacy, WHS and discrimination risks before implementation.
- Consult with employees: engage in timely and meaningful consultation with employees and their representatives when introducing new technologies, in line with existing modern award or agreement obligations.
- Develop clear policies: establish and communicate clear policies on the use of AI, workplace surveillance and data handling to ensure transparency and build trust.
- Invest in skills development: provide upskilling and retraining opportunities to help employees use AI safely and effectively, adapt to technological change and maintain workforce capability.
- Monitor legal developments: track reforms at federal and state levels and emerging best practices to ensure ongoing compliance and readiness for future reforms.
Preparing for the future of work
With government, unions and business groups all weighing in on the future of AI regulation, it is crucial for employers to understand both the current legal framework and the likely direction of future reforms. Employers who act now will be better prepared to meet upcoming reforms and maintain the trust needed for successful AI adoption.
The views expressed in this article are general in nature only and do not constitute legal advice. Please contact us if you require specific advice tailored to the needs of your organisation’s circumstances.