ARTICLES & RESOURCES
July 2025
New Alaska Ethics Opinion on Artificial Intelligence
WSBA NW Sidebar
CLE - July 16, 2025: Ethical Issues Related to the Use of Artificial Intelligence
June 2025
LegalTech Tools: A Quick and Proper Ethical Technology Assessment
Regulating AI in Legal Practice: Challenges and Opportunities
April 2025
AI is not going to take over lawyers’ jobs – yet
Using generative artificial intelligence? Ethics guidance is available
February 2025
Illinois Supreme Court Adopts Policy on Artificial Intelligence Use in Judiciary
November 2024
The Implications of ChatGPT for Legal Services and Society
October 2024
September 2024
The Future is Now: Artificial Intelligence and the Legal Profession
Ethics guidance for generative AI use
August 2024
Transformative technologies (AI): challenges and principles of regulation
July 2024
Embracing Artificial Intelligence: The Future of Legal Practice
The Future Of Legal Tech: How AI And Automation Enhance Client Service
Generative AI in the modern lawyer’s toolbox
AI and the practice of law: Major impacts to be aware of in 2024
Task Force on Law and Artificial Intelligence: Addressing the Legal Challenges of AI
NEWS & OPINIONS
ROCHON-EIDSVIG v. JGB COLLATERAL, LLC, Tex: Court of Appeals, 5th Dist.
June 12, 2025
“This order addresses an attorney's use of technology to prepare a legal brief that included citations to non-existent cases. While the attorney did not act with intent to deceive, she failed to verify the information before filing it with the court and failed to explain or correct the citations even after the appellee, in its opening brief, pointed out the citations were of non-existent cases. The panel finds that this conduct violated basic duties of competence and candor as contemplated by the rules governing professional conduct. In light of the circumstances, the panel imposes a sanction designed to educate the attorney and uphold the standards of the legal profession.”
Willis v. US Bank National Association, Dist. Court, ND Texas
May 15, 2025
This standing order from U.S. Magistrate Judge David L. Horan in the Northern District of Texas establishes guidelines for using artificial intelligence in legal proceedings. While the court acknowledges AI's potential benefits for legal research and for assisting pro se litigants, it addresses the serious problem of "AI hallucinations": AI-generated legal cases, citations, and authorities that appear legitimate but do not actually exist. The issue has become increasingly common as more attorneys and self-represented parties use AI without proper verification.
The order requires compliance with Local Civil Rule 7.2(f), which mandates that any brief prepared using generative AI must include a disclosure on the first page. The court warns that submitting unverified AI-generated content violates Federal Rule of Civil Procedure 11, which requires legal contentions to be supported by existing law after reasonable inquiry. Fake cases don't constitute "existing law," and relying on fabricated authorities constitutes an abuse of the legal system, potentially resulting in sanctions.
The judge's core message is that while AI can be useful for initial research, all AI-generated content must be independently verified for accuracy and actual existence before court submission. The order emphasizes that "the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution," establishing a "trust but verify" approach to AI use in legal proceedings.
IN RE INTERIM POLICY ON THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE, SC: Supreme Court
March 25, 2025
This interim policy from South Carolina Supreme Court Chief Justice John W. Kittredge addresses the use of generative artificial intelligence within the state's judicial branch. The policy applies to all judicial officers and employees, including justices, judges, attorneys, law clerks, administrative staff, and other personnel, regardless of their employment status or funding source. It defines AI broadly as technology enabling computers to perform tasks requiring human intelligence, while generative AI specifically refers to tools that create new content like text, images, or code based on user prompts, including platforms such as ChatGPT, Microsoft 365 Copilot, and Westlaw's AI tools.
The policy establishes strict guidelines for AI use by judicial personnel, requiring that only Supreme Court or Court Administration-approved generative AI tools may be used for official duties, and only on approved devices rather than personal systems. Critically, the policy prohibits using generative AI to draft legal documents like memoranda, orders, or opinions without direct human oversight and approval, emphasizing that AI content cannot be used verbatim, assumed to be accurate, or relied upon as the sole basis for decisions. The policy also restricts AI use with confidential court records or privileged information unless specifically authorized and compliant with existing security policies.
While the policy doesn't directly regulate lawyers and litigants appearing before the courts, it reminds them of their responsibility to ensure accuracy in their work product and exercise caution when using generative AI output. Attorneys are specifically warned that AI use must not compromise client confidentiality or violate professional conduct rules. The policy establishes that the judicial branch will develop training programs for proper AI use and that violations may result in disciplinary action. This interim policy remains effective until modified by further order from the Chief Justice or Supreme Court, reflecting the evolving nature of AI technology and its integration into legal practice.
AI for Legal Professionals
The Law Society of New South Wales
A SOLICITOR’S GUIDE TO RESPONSIBLE USE OF ARTIFICIAL INTELLIGENCE