A senior lawyer in Australia has issued a formal apology to a judge for submitting documents in a murder case that contained fabricated quotes and non-existent case judgments generated by artificial intelligence.
The blunder in the Supreme Court of Victoria is another in a litany of mishaps AI has caused in justice systems around the world. Defense lawyer Rishi Nathwani, who holds the prestigious legal title of King’s Counsel, took “full responsibility” for filing incorrect information in submissions in the case of a teenager charged with murder, according to court documents seen by The Associated Press on Friday.
The error delayed the case's resolution by 24 hours. Justice James Elliott, who had hoped to conclude the matter on Wednesday, was forced to postpone his ruling. On Thursday, he found the defendant not guilty of murder because of mental impairment.

The AI-generated errors included fabricated quotes from a speech to the state legislature and citations to non-existent Supreme Court judgments. Justice Elliott's associates uncovered the errors when they were unable to locate the cited cases.
The Supreme Court of Victoria had previously released guidelines on the use of AI by lawyers. Justice Elliott emphasized that artificial intelligence should not be employed unless its output is independently and thoroughly verified.

This development follows earlier reports of similar incidents in the United States. In 2023, a federal judge imposed fines on two lawyers and a law firm after ChatGPT was blamed for the submission of fictitious legal research. Later that year, AI-generated fake court rulings were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump.
The integration of AI into legal proceedings is raising pressing questions about the reliability of court submissions and the professional responsibility of the lawyers who file them. As the technology continues to evolve, its impact on the justice system will require ongoing scrutiny.
