Artificial intelligence and the Fair Work Commission: speed meets scrutiny
Published on 15 May 2026
Read time 6 min
If you’re thinking that artificial intelligence is all anyone is talking about at the moment, you wouldn’t be alone.
It is therefore unsurprising that even the Fair Work Commission (Commission) is talking about AI, especially given the significant uptick in claims being made, which is at least partly attributable to an increased use of artificial intelligence by applicants.
In the past three years, the Commission has seen an unprecedented surge in its workload, which is estimated to have increased by over 70% in that time. Consequently, employers are bearing that burden too – through increased time, cost and complexity in responding to proceedings.
So, how is the Commission responding, and what does this mean for employers?
Draft guidance for use of AI in the Commission
In February 2026, Commission President Justice Hatcher identified generative artificial intelligence (GenAI) as a significant driver in the increased lodgements.
Looking at the Commission’s statistics, he noted that the ‘normal’ case load until about 2023 was around 30,000 matters each year, with significant increases year on year since. He estimated up to 55,000 matters in the 2025-26 financial year.
The President identified GenAI as a culprit, noting the widespread use of AI-generated language in applications. Whilst he acknowledged the benefits of GenAI, he also noted significant detriments, including the common tendency of AI tools to make a claim with no reasonable prospects of success appear somewhat plausible.
In response to these concerns, in March 2026, the President published a draft guidance note on the use of GenAI in Commission cases (Guidance Note), setting out three proposed requirements where GenAI is used to prepare applications or other documents in the Commission:
Disclosure – state that GenAI was used to prepare the document.
Verification – check documents to ensure that all factual statements, evidence, legal references and quotations are accurate, relevant, genuinely exist, correctly support the propositions made and are properly attributed. The document must confirm the verification process has been completed.
Witness statements – where a witness statement or declaration is prepared using GenAI, the witness must independently review and amend it to ensure it reflects their own knowledge and include an express declaration in the document.
The guidance also addresses handling personal and confidential information when using GenAI tools.
Top tips for employers
So, how should employers respond and use the Guidance Note to their benefit?
Respond to claims strategically
Many AI-generated claims in the Commission lack merit, often being lodged outside the statutory timeframe or raising jurisdictional issues (e.g. where the applicant was not actually dismissed).
Employers can use procedural tools to challenge these claims early. Jurisdictional objections should be raised at the first opportunity so that such claims can be disposed of swiftly.
Indeed, the Commission is taking a stringent approach to dismissing applications that do not meet the statutory requirements, and has, in some cases, taken the approach of criticising reliance on GenAI.
In one case, where the application was made an astounding two and a half years after the employee’s resignation (meaning the claim faced objections both that there was no dismissal and that the application was out of time), the Commission noted that artificial intelligence had clearly been used in drafting the application, stating “so much was clear from the deficiencies in the application which failed to address the matters required to make good a claim that Part 3-1 of the Fair Work Act had been contravened.” The Commission highlighted the “obvious dangers” of relying on suggestions made by artificial intelligence.
If no objection is available, employers should resist responding to each assertion and instead:
- Focus on the statutory elements relevant to the matter (e.g. a valid reason and procedural fairness for unfair dismissal claims);
- Provide a relatively high-level and concise response;
- Avoid a lengthy tit-for-tat response.
Given the Commission’s increasing case load, if there is an objection available, you should take it.
Test factual assertions and chronology
GenAI has a tendency to hallucinate and ‘fill in the blanks’ with its own narrative. The President himself demonstrated this tendency by telling ChatGPT he had been dismissed, providing only minimal facts and asking what he could do. The outcome was a ready-to-file general protections application, a witness statement with invented facts, and an estimated $15,000 - $40,000 settlement. The reality was that the ‘claim’ had no reasonable prospects of success.
Whilst the Guidance Note proposes to require applicants who use GenAI to verify their materials and make any necessary changes so that all details are correct and relevant to the case, it should not be assumed that they will. As a result, in responding to claims, employers should verify their own records, and call out inconsistencies and inaccuracies.
Relevantly, a failure to verify AI-generated material may have cost consequences for parties, and knowingly giving false or misleading evidence to the Commission is a criminal offence punishable by imprisonment for up to 12 months. Fact checking and calling out any discrepancies is therefore an important step in responding to claims.
In a recent decision, Hoverd v M & JD Pty Ltd [1], Deputy President Lake dismissed the unfair dismissal application on the basis that what was involved was a resignation, and invited the employer to consider making a costs application. The Deputy President noted that the applicant had acknowledged using AI for his application and materials, whilst nevertheless asserting that all the information, evidence and factual statements provided were based on his own knowledge and documentation, when they clearly were not.
Learn to recognise the hallmarks of AI-generated material
Trends in the structure, formatting and language of AI-generated claims in the Commission have become apparent. An overly formulaic structure, generic references to statutory criteria with little factual application, and common phrases such as “denial of procedural fairness” are red flags.
In addition, AI‑generated claims often rely on case law that initially appears persuasive but, on closer inspection, either does not exist or does not support the propositions advanced. Employers should therefore take the time to check the citations relied on by the applicant.
Given that the Guidance Note will require parties to disclose the use of GenAI and verify AI-generated material, if the hallmarks of AI-generated material are present but no disclosure has been made, this may be raised with the Commission. As noted above, there may be cost and criminal implications for a failure to disclose and verify.
Key Takeaways
GenAI is having a growing impact on Commission proceedings, and this trend shows no signs of slowing.
Employers who remain alert to the features of AI‑generated material, raise jurisdictional objections early, and adopt disciplined, evidence‑based responses will be better placed to manage increasing claim volumes and protect their interests.
This article was co-authored by Jo Cowen, an Associate (admitted in England, not admitted in Australia) in our Perth office.
The views expressed in this article are general in nature only and do not constitute legal advice. Please contact us if you require specific advice tailored to the needs of your organisation.
For more insights from the Kingston Reid team on the workplace law issues facing organisations in 2026, head to our Publications page to access our 2026 Workplace Insights report.