Photo Credit: Noam Revkin Fenton/Flash90
Supreme Court Justice Gila Canfy-Steinitz, March 6, 2022.

On Sunday, Supreme Court Justice Gila Canfy-Steinitz dismissed a petition challenging decisions by the Sharia Court in Acre and the Sharia Court of Appeals in Jerusalem concerning the validity of a ruling that waived rights outlined in a divorce agreement. She wrote:

“The petitioner’s attorney, who was appointed to represent the petitioner through legal aid, based her claims on no fewer than 36 judgments, supposedly from this court; some are merely referenced, while others include quotes purportedly taken from these judgments. Upon an initial review of the petition, it appears that some of the citations may not originate from judgments of this court.”

She added: “A thorough review of the references revealed that five of them pointed to rulings that do not exist in the legal databases at all; in 14 references, the case number, the type of proceeding, the identity of the parties, or the alleged content did not match the cited ruling; and no fewer than 24 references included quotes or claims that bore no connection, not even an incidental one, to the ruling. It should also be noted that some of the claims and quotes in the petition were factually incorrect, as they presented legislation or case law that did not align with the actual law.”

The lawyer, who was caught red-handed, initially attempted to bluff the judge, claiming it was a “writing error” and downplaying the incident. However, she eventually confessed that the source of the error was her reliance on a website recommended to her by colleagues whose advice she trusted.

The judge used her investigative skills (as an attorney she specialized in medical malpractice – DI) to fill in the gaps in the lawyer’s confession, stating, “Although the petitioner’s attorney did not specify the unknown ‘site’ she relied on blindly, the references to non-existent judgments, the inconsistencies within the citations, and the non-existent sources strongly suggest, with a high degree of probability, that the petitioner’s attorney used a website based on artificial intelligence. Given the striking similarities between this case and other recent instances, as I will outline below, I will address the growing phenomenon of using artificial intelligence tools to draft court documents.”

Canfy-Steinitz detailed the risks in several areas of legal practice: the leakage of personal information; the admissibility of evidence suspected of being generated by AI tools; and, most relevant to this case, “AI hallucinations”—situations in which an AI produces fictitious information that closely resembles real data, making the two difficult to distinguish.

She explained, “Another challenge, which seems relevant to our case, is the phenomenon known as artificial intelligence hallucinations. This term, borrowed from psychology, describes a situation in which an AI system ‘hallucinates’—that is, it generates content containing false or misleading information that is presented as fact… This issue is compounded by the fact that the information the algorithm relies on can often be misleading, outdated, or even intentionally biased… These outputs are perceived as so relevant and logical that some people rely on them without verifying their accuracy.”

She added, “It is worth noting that this is not the first case in which court documents containing false legal references—the product of artificial intelligence hallucinations—were submitted to the courts. This has occurred in the recent past, both in our jurisdiction and in other countries.”

Robot lawyer / AI Golem

AI HAD SOMETHING TO SAY

I asked ChatGPT to explain the problems inherent in relying on artificial intelligence when writing legal appeals. It elucidated:

Relying on artificial intelligence (AI) in writing legal appeals presents several challenges that can impact the quality, fairness, and effectiveness of legal proceedings:

1. Lack of Human Judgment: AI lacks the nuanced understanding of legal principles that comes with human experience. In law, decisions often depend on interpreting complex facts, emotional considerations, and the broader social context, which AI may not fully grasp. This can result in appeals that overlook subtleties in the case or fail to adequately address human rights or ethical considerations.

2. Inaccuracy in Understanding Legal Precedents: AI systems can struggle to interpret or apply legal precedents correctly. While AI can process vast amounts of data, it may miss crucial distinctions between cases or misinterpret the application of precedent, leading to flawed legal arguments in appeals.

3. Data Bias: AI systems are trained on large datasets, and if those datasets contain biases—whether in the form of biased court rulings, demographic imbalances, or historical prejudices—AI-generated appeals may perpetuate or amplify those biases, leading to unfair outcomes, especially for marginalized groups.

4. Overreliance on Templates: AI tools might use templates or automated structures that could create generic, formulaic arguments. Legal appeals require originality, creative thinking, and tailored solutions. AI might produce responses that lack the unique, case-specific insights an experienced lawyer would provide.

5. Ethical Concerns: There are ethical concerns around the use of AI in legal practice. The role of lawyers is not just to advocate for their clients, but to ensure justice is served. If an AI-generated appeal overlooks critical arguments or does not prioritize justice in the same way a human advocate would, it could undermine the fairness of the legal process.

6. Transparency Issues: AI systems, especially those based on machine learning, can be opaque, meaning their reasoning processes may not be easily understood. In legal matters, it’s essential for the reasoning behind a decision to be transparent, so that all parties can challenge it if necessary. If AI-generated appeals cannot be easily explained or justified, this could lead to a lack of accountability.

7. Legal Ethics and Responsibility: Legal professionals are bound by codes of ethics and are responsible for the arguments presented in court. If an AI tool generates a flawed appeal, it could potentially be seen as the responsibility of the lawyer who used it. This could lead to liability issues and undermine public trust in the legal system.

These challenges underscore the importance of keeping AI as a tool that assists lawyers rather than a replacement for human judgment in the crafting of legal arguments.

Couldn’t have said it better myself.


David writes news at JewishPress.com.