The primary and most comprehensive guidance is Formal Opinion 512: Generative Artificial Intelligence Tools, issued by the ABA Standing Committee on Ethics and Professional Responsibility on July 29, 2024. This is the ABA's first formal ethics opinion specifically addressing GAI in law practice. It treats GAI as a tool (similar to paralegals, software, or other non-human assistance) that lawyers must understand, supervise, and verify, while upholding core ethical duties.
Key Ethical Principles from ABA Formal Opinion 512
The opinion organizes guidance around six main areas tied to the ABA Model Rules of Professional Conduct (which many states adopt or adapt):
Competence (Model Rule 1.1): Lawyers must provide competent representation, which includes having a reasonable understanding of the benefits and risks of relevant technologies such as GAI.
This includes knowing the tool's capabilities, limitations (e.g., "hallucinations" or fabricating facts), and potential for inaccuracy.
Lawyers need not become AI experts, but they should stay informed through training, testing outputs, and periodic updates as the technology evolves.
Technological competence is explicitly part of Rule 1.1 (Comment 8), requiring lawyers to keep abreast of changes in practice, including technology benefits and risks.
Confidentiality (Model Rule 1.6): Lawyers must protect all client information, regardless of source, from unauthorized disclosure.
Avoid inputting confidential data into GAI tools without adequate security, privacy protections, and terms of service review (e.g., check if the provider uses inputs for training).
Informed client consent may be required if disclosure risks exist; otherwise, use tools with strong safeguards or avoid sensitive inputs.
Communication with Clients (Model Rule 1.4): Lawyers may need to disclose GAI use if:
The client asks about methods or tools.
Disclosure is material to the representation (e.g., if GAI significantly affects strategy or outcomes).
Informed consent is needed for certain risks.
Candor Toward the Tribunal / Avoiding Frivolous Claims (Model Rules 3.1, 3.3, 8.4(c)): Lawyers must not mislead courts with false statements, fabricated evidence, or unverified AI-generated content.
Independently verify all GAI outputs (e.g., citations, facts, legal analysis) before submission.
Courts have sanctioned lawyers for uncorrected "hallucinations" in filings; the ABA stresses personal accountability.
Supervisory Responsibilities (Model Rules 5.1 and 5.3): Partners and supervisors must ensure firm policies, training, and oversight for GAI use by lawyers and nonlawyers (e.g., staff, contractors).
Establish clear guidelines, provide education on risks, and monitor compliance, much as when supervising paralegals.
Fees and Expenses (Model Rule 1.5): Fees must be reasonable; bill only for time actually spent, not the hours the work would have taken without GAI (e.g., don't bill full hours for AI-drafted work completed in minutes).
Disclose the basis for fees and expenses; client consent may be needed before passing on tool costs.
The opinion references earlier ABA guidance (e.g., Formal Opinion 93-379) on reasonable billing practices.
Overall Takeaways from ABA Guidance
GAI is permissible, but only if lawyers can reasonably ensure compliance with their ethical obligations; human judgment and oversight are non-negotiable.
No blanket prohibition: GAI enhances efficiency (e.g., research, drafting, client intake) but carries risks such as bias, inaccuracy, and data exposure.
Ongoing evolution: the opinion notes GAI is a "rapidly moving target," so the ABA (and state bars) will likely issue updates. Many states (e.g., California, Florida, New York) have issued their own opinions that build on or preceded this one.
Broader ABA efforts: Resolution 112 (2019) urges addressing AI bias, transparency, and oversight, and the ABA Task Force on Law and Artificial Intelligence (formed in 2023) monitors AI's impact on the profession.
For the full text, see the official ABA Formal Opinion 512 (available on the ABA website). If you're a lawyer in a specific state, check your state bar's ethics opinions, as they may adapt or expand on ABA guidance (many states require similar technological competence).
This framework aligns well with governance-first AI tools (like those for ethical intake/follow-up in regulated practices), as it prioritizes human oversight, verification, and client protection—key to avoiding ethical pitfalls in the AI wave. If you'd like a deeper dive into a specific rule, state variations, or how this applies to law firm intake automation, let me know!