AI is everywhere these days, and it's changing the way we work, live, and interact. With its rapid growth, it's hard to ignore how AI boosts efficiency, broadens access to knowledge, and is becoming a normal part of many industries. At our law firm, we use legal AI tools to assist with research and routine tasks, and we've seen firsthand just how much they can improve our work.
But as much as AI has its perks, it's important to understand the risks too. The case of Dayal is a perfect example of why we can't rely on AI uncritically. It shows that while AI offers significant benefits, ethical and professional standards must remain front of mind when using it.
Dayal [2024] FedCFamC2F 1166
Facts
Heard in the Federal Circuit and Family Court of Australia (Division 2), the case involved a Victorian solicitor, anonymised as Mr. Dayal in the published judgment, who submitted to the court a list and summary of legal authorities. The document was later revealed to contain citations to non-existent cases.
It had been generated using an AI tool within legal practice software and tendered without verification. Upon discovery, Mr. Dayal acknowledged his error, offered an unconditional apology, and took remedial steps.
Issues
- Professional Standards Breach: Submitting inaccurate and unverified legal research violates solicitors’ duties of competence, diligence, and honesty to the court.
- AI Usage in Legal Practice: The case highlighted risks associated with relying on generative AI for legal research without independent verification.
- Public Confidence: Unchecked reliance on AI undermines public trust in the legal profession and judicial process.
Findings
- Mr. Dayal breached ethical and professional standards by tendering an unverified AI-generated document.
- He did not intentionally mislead the court but failed to understand the AI tool’s limitations.
- Steps taken by Mr. Dayal, including an apology, cost settlement with the opposing party, and disclosure to the Legal Practitioners Liability Committee, demonstrated accountability.
- The United States District Court case of Mata v Avianca Inc is an earlier example of the same issue: attorneys who used generative AI to prepare legal submissions cited non-existent cases and initially defended the filings when questioned by the court. They were found to have neglected their professional responsibilities and were sanctioned. The judgment in that case observed:
“Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavours. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”
Outcome
The court accepted the solicitor's genuine apology and acknowledged his efforts to mitigate the impact of his conduct, as confirmed by opposing counsel. While noting the stress the solicitor had endured and considering a repeat of such conduct unlikely, the court deemed it appropriate for the Victorian Legal Services Board and Commissioner to assess whether further investigation or action was needed. The court emphasised the public interest in raising awareness of professional conduct issues, especially given the growing use of AI tools in legal practice. The referral was not punitive but was intended to address the broader implications for professional conduct in the age of AI.
This outcome is a lesson for all of us: it underscores the need for diligence and transparency when integrating AI into legal workflows, while continuing to uphold our ethical obligations.
If you have any enquiries, please get in touch with Warlows Legal today using the contact information below.