Navigating State Bar rules on AI-generated ads (+ a 50-state overview)

AI-generated content is fast, affordable, and often surprisingly persuasive. In a competitive legal market, that’s a tempting combo. Whether you’re writing PPC headlines, spinning up testimonial graphics, or letting a GPT tool draft your next billboard, generative AI can help you move fast.

But moving fast can be risky, especially in a profession where ethical obligations still apply, no matter how slick your tools are.

While generative AI transforms how we create and deliver ads, many state bar associations are still figuring out how to respond. This means the risk often lands squarely on your firm. If you’re marketing legal services using AI (or hiring someone who is), here’s what you need to know (and do) to stay compliant.

Why AI advertising poses ethical concerns

AI-generated ads aren’t inherently unethical. But they can be deceptively slick. Tools trained to mimic human speech and style can produce content that looks polished, persuasive, and plausible. That’s what makes them so appealing. It’s also what makes them dangerous.

The trouble starts when that polish masks a problem. A tool might describe a lawyer’s win rate in absolute terms, suggest results that no firm could guarantee, or generate language that feels like a testimonial, without ever pulling from real cases or clients. No one told it to lie. But it doesn’t know how not to.

Then there’s the visual layer. AI can create avatars, synthesize speech, and produce seemingly “authentic” video without a human behind the message. If that content isn’t clearly labeled or if it implies a real person’s involvement, it opens the door to both bar discipline and consumer deception claims.

And on the backend, the risks get even quieter. AI-driven retargeting systems may capture sensitive user behavior (visits to a legal landing page, clicks on certain keywords, time spent on intake forms) and repackage that data into follow-up ads. If those systems aren’t properly configured, or if disclosures are missing, you may end up inadvertently breaching client confidentiality.

These aren’t edge cases. They’re real consequences that can emerge not from a bad actor, but from a lack of attention. When marketing moves fast, risk slips through.

AI doesn’t just change how you produce ads. It changes what kind of oversight those ads demand. And it requires that lawyers be as careful with a headline as they would be with a piece of advice—because in the eyes of the public, both represent the profession.

The Rules of Professional Conduct didn’t go away

When the ABA released Formal Opinion 512 in July 2024, it didn’t invent new responsibilities. It simply reframed existing ones through the lens of AI. Competence, confidentiality, supervision, and candor all still apply. Now, they apply in new ways.

ABA Model Rule 7.1 still prohibits misleading ads. Rule 1.6 still protects client confidentiality. Rule 5.3 still requires lawyers to supervise nonlawyers—including AI vendors, tools, or platforms. What’s changed is the complexity of applying those standards in a space where outputs are faster, fuzzier, and sometimes hard to trace.

AI doesn’t invent responsibility. It just makes it harder to track and easier to overlook. Think of the current regulatory framework in three layers.

ABA Model Rules (The bedrock)

The American Bar Association doesn’t regulate lawyer advertising directly, but its Model Rules of Professional Conduct form the basis for most state ethics rules. And when the ABA issues a formal opinion, it often signals where things are headed.

That’s exactly what happened in Formal Opinion 512. The ABA didn’t announce new duties. It clarified that your existing ones still apply, even when AI is involved. The opinion also made something else clear: you don’t get to outsource ethical responsibility to a tool. If an AI model generates content that crosses a line, whether it’s misleading, improperly trained, or based on sensitive inputs, the lawyer is still the one accountable.

State guidance may differ in timing and tone, but almost all of it starts from this foundation. If you’re using AI in your legal marketing, you’re expected to apply the same ethical lens you always have—just to a faster, less predictable process.

Explicit AI guidance (Early-mover states)

While most state bars are still figuring out how AI fits into their advertising rules, a few have moved early. Florida, California, and New York have all issued guidance that directly addresses AI in legal practice, including how it’s used in client communications and marketing.

Each has taken a slightly different path, but the message is consistent: lawyers are still responsible for anything that goes out under their name, no matter how it was created.

In Florida, Ethics Advisory Opinion 24-1 sets a clear tone. Lawyers can use generative AI in advertising, but only if they supervise it carefully and ensure it complies with existing rules. That includes the state’s notoriously strict requirements around filings, disclaimers, and format. If an AI tool generates misleading language or omits a required disclaimer, the responsibility and the risk still belong to the lawyer.

California has taken a practical guidance approach, urging lawyers to be transparent when using AI tools and to maintain full oversight of any content those tools help produce. The focus here isn’t just on truthfulness. It’s on the need for active, informed supervision at every step of the process.

New York’s 2024 task force report took a broader view, but still landed on a clear expectation: firms using AI in their marketing must disclose it when appropriate, avoid synthetic claims or endorsements, and treat AI content as attorney speech. If you wouldn’t say it yourself—or couldn’t back it up under ethics review—it shouldn’t be in your ad, even if AI wrote it.

What these early-mover states make clear is that the rules haven’t relaxed to accommodate new tools. If anything, they’ve tightened. The use of AI doesn’t eliminate your ethical duties; it expands them. And in states like these, there’s already a paper trail to prove it.

Silent but implied (Everywhere else)

In most of the country, there’s no official ethics opinion on AI-generated legal advertising. No bright-line guidance. No formal vote. Just silence or news that a committee is working on AI ethics.

But silence doesn’t mean permission. You’re expected to apply the existing rules to a new and rapidly shifting set of tools—and do so with the same care and judgment you’d use with any other part of your practice.

If you’re in a “silent” state, the standard rules still apply: your ads must be truthful, your client information must remain confidential, and you’re responsible for supervising anyone (or anything) involved in creating your marketing. That includes internal teams, vendors, and AI platforms.

Some state bars are likely watching and waiting. Others are drafting guidance now. But all of them are increasingly aware of what AI is doing to the speed, volume, and nature of legal advertising. If your jurisdiction hasn’t issued an opinion yet, assume the baseline expectations are still in force—and act accordingly.

The safest position is also the most practical: treat AI-assisted content like any other professional communication. If it wouldn’t pass review under your state’s traditional rules, it won’t pass just because it came from a tool.

Five-step compliance checklist for AI-powered legal ads

Use this checklist to build guardrails around your firm’s AI-assisted marketing. It’s not about stopping progress. It’s about making sure your team is moving fast without breaking trust, rules, or platforms.

1. Define boundaries for AI use

Before any tools are turned on, everyone involved (attorneys, marketers, vendors) needs to know where AI fits into your workflow. The biggest risks happen when expectations are vague and oversight is assumed.

☐ Identify which ad elements AI is allowed to generate (e.g., headlines, visual concepts)
☐ Prohibit AI use for client-specific content, testimonials, or legal advice
☐ Document this in a one-pager that marketing and compliance both sign off on

2. Keep prompts generic

Ethical advertising starts with clean inputs. Even the best AI tools can expose or embed sensitive information if you give them too much. Treat AI prompts like public records. They should be scrubbed, bland, and never traceable to real clients. (A minimal pre-flight check is sketched after this checklist.)

☐ Never input client names, facts, or confidential data into AI tools
☐ Use only publicly available or anonymized content in prompts
☐ Treat prompts like discovery: assume they’ll be reviewable someday
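To make that concrete, here’s a minimal Python sketch of a pre-flight prompt check a marketing team might run before anything goes to an AI tool. The patterns and the flag_sensitive helper are hypothetical illustrations, not drawn from any bar guidance; a real firm would tune the rules to its own matters and still rely on human judgment.

```python
import re

# Hypothetical patterns for obvious client-identifying material.
# Illustrative only; tune to your own matters and jurisdictions.
SENSITIVE_PATTERNS = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "possible SSN"),
    (r"\b(?:case|docket)\s*(?:no\.?|number)\s*[\w:-]+", "case/docket number"),
    (r"\bv\.\s+[A-Za-z]+", "party caption (Smith v. Jones style)"),
    (r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "email address"),
]

def flag_sensitive(prompt: str) -> list[str]:
    """Return reasons this prompt should not be sent as written."""
    findings = []
    for pattern, reason in SENSITIVE_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            findings.append(reason)
    return findings

if __name__ == "__main__":
    draft = "Write a headline about our win in Smith v. Jones, docket no. 24-1138."
    problems = flag_sensitive(draft)
    if problems:
        print("Do not send. Flags:", ", ".join(problems))
    else:
        print("No obvious identifiers found; human review still required.")
```

A check like this catches only the obvious leaks. Treat it as a seatbelt, not a substitute for the “never input client data” rule above.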

3. Require lawyer-level review

AI can generate polished content, but it can’t spot an ethical red flag. A licensed attorney should review every ad with the same lens they’d use for any public communication—because that’s exactly what it is. Automated pre-screens (like the sketch after this checklist) can help route drafts to that review, but they never replace it.

☐ Review every ad for false or misleading claims (Rule 7.1)
☐ Flag any language that implies guaranteed outcomes
☐ Apply the same scrutiny to tone, not just facts
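As a small, hedged illustration of the guaranteed-outcomes item above, the sketch below pre-screens ad copy for phrasing that often implies promised results. The phrase list is a hypothetical starting point; a hit means “escalate to the attorney,” and an empty result never means the ad is approved.

```python
# Hypothetical Rule 7.1 pre-screen: flags phrasing that tends to imply
# guaranteed outcomes. A hit means "attorney must look closely";
# no hits never means "approved."
RED_FLAG_PHRASES = [
    "guarantee", "we always win", "never lose",
    "100% success", "best lawyer", "win rate",
]

def phrases_needing_scrutiny(ad_copy: str) -> list[str]:
    text = ad_copy.lower()
    return [phrase for phrase in RED_FLAG_PHRASES if phrase in text]

draft = "We guarantee results you can count on."
print(phrases_needing_scrutiny(draft))  # ['guarantee']
```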

4. Lock down version control

When questions come up later, internally or from a regulator, you’ll want a clear paper trail. That means keeping the original prompt, the AI output, and the final version, all tied to a reviewer and a date. If you can’t trace how a message was made, you can’t defend it. One way to structure that record is sketched after this checklist.

☐ Save the original prompt, AI output, and final approved version
☐ Store materials in a secure, uneditable (WORM) system
☐ Tag files by campaign, date, and reviewer
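As one way to implement that paper trail, here’s a hedged Python sketch that appends each approval as a JSON record with a content hash, so later tampering with a stored line is detectable. The field names and the audit_log.jsonl path are assumptions for this sketch; true write-once (WORM) behavior has to come from the storage layer itself, such as an object store with a retention lock, not from application code.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative audit record for one approved ad. The schema and log
# path are assumptions for this sketch, not a standard.
AUDIT_LOG = "audit_log.jsonl"

def record_approval(campaign: str, prompt: str, ai_output: str,
                    final_copy: str, reviewer: str) -> dict:
    entry = {
        "campaign": campaign,
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "ai_output": ai_output,
        "final_copy": final_copy,
    }
    # Hash the record so any later edit to the stored line is detectable.
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_approval(
    campaign="2025-q3-ppc",
    prompt="Draft five headlines for a family law landing page.",
    ai_output="Compassionate counsel for life's hardest moments.",
    final_copy="Compassionate family law counsel in your corner.",
    reviewer="j.smith",
)
```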

5. Monitor regulatory changes

This is a moving target. State bars, the FTC, and ad platforms are still figuring out their stances on AI. What’s fine today might raise red flags next quarter. Someone on your team needs to own the job of keeping tabs.

☐ Assign a single person or small team to track ethics updates
☐ Set calendar reminders for quarterly reviews
☐ Subscribe to alerts from your state bar, the ABA, the FTC, and key platforms

A 50-state review of bar associations’ AI and advertising rules

This table is a living resource. You’ll want to revisit it regularly.

State | AI-Specific Guidance | Advertising Content Rules
Alabama | Not Yet Available | Rule 7.2(b); ad copy must be filed with Office of General Counsel
Alaska | Ethics Opinion 2025-1 | Refer to Ethics Opinions 69-4 and 94-2
Arizona | Not Yet Available | Must comply with Arizona RPC; no AI-specific guidance
Arkansas | Not Yet Available | Standard Rule 7.1 compliance; AI not yet addressed
California | Practical Guidance (2023) | Requires transparency and oversight; prevents undue influence
Colorado | Informal bar journal article only | Must comply with Rule 7.2; AI not formally addressed
Connecticut | Not Yet Available | Advertising considered legal practice; filing rules apply
Delaware | Not Yet Available | Rules include recordkeeping and accuracy obligations
Florida | Ethics Opinion 24-1 | Strict pre-approval, disclaimers, lawyer oversight
Georgia | AI Committee formed | Governed by Rules 7.1–7.3; AI treated as third-party assistance
Hawaii | Not Yet Available | Ads governed by general RPC; no AI-specific rules
Idaho | Not Yet Available | Attorney assumes liability; standard RPC compliance
Illinois | Standing Committee created | Governed by Rule 7.1; AI use not yet regulated
Indiana | Not Yet Available | Standard Rules 7.1–7.3 apply
Iowa | AI resource guides | Standard RPCs apply to AI-generated ads
Kansas | Not Yet Available | Advertising must be accurate; AI not yet addressed
Kentucky | KBA E-457 (2023) | AI must be supervised; content must be truthful
Louisiana | Not Yet Available | Governed by RPC 7.2; AI oversight implied
Maine | Not Yet Available | Governed by Rules 7.1–7.3; AI not mentioned
Maryland | Not Yet Available | Standard rules apply; AI not yet included
Massachusetts | Not Yet Available | Rule 7.1 applies; AI content treated as attorney speech
Michigan | Not Yet Available | Advertising follows RPC; CLEs address AI, no formal rule
Minnesota | Not Yet Available | No formal AI guidance; standard compliance applies
Mississippi | Not Yet Available | Governed by ABA-like rules; AI not addressed
Missouri | Not Yet Available | Rules 4-7.1 to 4-7.5 govern; AI must be supervised
Montana | Not Yet Available | Standard advertising rules apply
Nebraska | Not Yet Available | No AI-specific ethics guidance
Nevada | Not Yet Available | Advertising governed by RPC 7.2
New Hampshire | Not Yet Available | Governed by general RPCs
New Jersey | Not Yet Available | Rule 7.1 requires substantiation; strict ad standards
New Mexico | Not Yet Available | RPC Rule 16-701 applies; AI oversight implied
New York | Task Force Report (2024) | Requires disclosure of AI use and attorney review
North Carolina | Not Yet Available | Rules 7.1–7.3 apply; AI not yet addressed
North Dakota | Not Yet Available | AI-generated ads must meet RPC accuracy standards
Ohio | Not Yet Available | Governed by Rule 7.1; AI not yet addressed
Oklahoma | Not Yet Available | RPCs require ad supervision and accuracy
Oregon | Not Yet Available | Advertising and confidentiality rules apply
Pennsylvania | Not Yet Available | Rules 7.1–7.3 apply; AI supervision required
Rhode Island | Not Yet Available | Governed by Supreme Court advertising rules
South Carolina | Not Yet Available | Advertising must be filed in some cases
South Dakota | Not Yet Available | RPC Rules 7.1–7.3 apply
Tennessee | Not Yet Available | Strict filing and retention standards
Texas | Not Yet Available | Strong ad oversight; AI must be accurate, disclaimers may apply
Utah | Not Yet Available | Informal AI discussion; no specific rules for ads
Vermont | Not Yet Available | Governed by RPCs; AI rules not published
Virginia | Not Yet Available | Rules 7.1–7.3 apply; ad content must be supervised
Washington | Not Yet Available | Protect client data; general ad oversight applies
West Virginia | Not Yet Available | Standard truthfulness and transparency rules apply
Wisconsin | Not Yet Available | Governed by ABA-like RPCs
Wyoming | Not Yet Available | No AI-specific guidance; Rule 7.1 governs accuracy

Building an AI advertising playbook your team will use

Avoiding ethics violations isn’t about being afraid of technology. It’s about building processes that let you move fast and stay accountable. It means making compliance part of how you work, not just a last-minute approval step.

The most effective firms aren’t putting AI on pause. They’re putting structure around it.

Create a policy that sets real boundaries

Even if your state hasn’t weighed in yet, you don’t have to wait for permission to be proactive. A short internal policy, based on early guidance from states like Florida or California, can go a long way. Define what types of content AI can help produce (headlines, visuals, concept drafts), where it’s off-limits (client stories, disclaimers, outcome statements), and when human review is required. Keep it simple. Put it in writing. Share it widely.

Vet your vendors and tools like they’re part of the team

If you’re working with outside partners who use AI (media buyers, web vendors, content providers), you need to know what they’re doing and how they’re doing it. Ask how they handle prompt data, and whether they allow for final human review. A vendor’s contract should offer more than just deliverables; it should provide protection.

Train legal and marketing together

AI-driven ad creation can’t live in a silo. It needs collaboration. Schedule short working sessions between your legal and marketing teams to review workflows, flag ethical triggers, and build shared checklists. The goal isn’t to make everyone an expert. It’s to make sure no one assumes someone else is handling compliance.

Build in regular monitoring

Ethics guidance, FTC regulations, and ad platform policies are all evolving fast. Assign a specific person or small team to stay on top of changes—then give them the authority to recommend process updates. Set quarterly checkpoints to review your tools, outputs, and risk management practices. A one-time audit won’t cut it. AI compliance has to be continuous.

What’s next: Deepfakes, disclosure, and platform policy shifts

The next major compliance flashpoint? AI-generated video and synthetic media.

We’re rapidly moving beyond static images and chatbot copy. Law firms are beginning to experiment with tools that generate entire video scripts, create synthetic voiceovers, or render testimonial-style videos using avatars. While the creative possibilities are compelling, the risk is about to skyrocket—especially as regulators start to weigh in.

Expect to see formal ethics opinions around AI-generated video content within the next 12–18 months. Several state bars are already evaluating how deepfake-style ads align (or don’t) with core duties like truthfulness, transparency, and non-deception. Even if the footage “looks real,” it still needs to pass the Rule 7.1 sniff test: Is it truthful? Is it potentially misleading? Would a viewer reasonably believe it represents an actual client or attorney?

But here’s the part most firms miss:
It’s not just the bar you have to answer to—it’s the platforms, too.

Platforms like YouTube, Meta (Facebook/Instagram), TikTok, and X (formerly Twitter) are rolling out AI transparency policies that require clear disclosures when content has been synthetically generated or altered. For example:

  • YouTube now mandates disclosures for any realistic AI-altered visuals or voices in both organic and paid content.
  • Meta flags AI use in ads and requires disclosures when deepfake elements are present—especially in regulated verticals like law.
  • TikTok enforces “AI” tagging when content uses generative elements, and failure to comply can tank distribution.
  • X is evolving its policies in real time, with increased scrutiny on impersonation and misleading synthetic content.

Bottom line: Even if your state hasn’t explicitly banned AI-generated avatars, YouTube might pull your video anyway.

That means a compliant AI-powered ad strategy must now navigate two parallel systems. You should assign a compliance lead to monitor both evolving bar guidance and advertising platform policies.

LaFleur: Stay compliant, stay competitive

AI speeds up content creation, iteration, and deployment—but it also speeds up your exposure to risk. The more you automate, the more intentional your systems need to be.

That doesn’t mean hitting pause. It means getting ahead of the risk before it becomes a problem. Because if something goes wrong, whether a regulator flags your language, a platform suspends your campaign, or a competitor files a complaint, it won’t matter who wrote the copy. It’ll matter who approved it.

If you’re using AI tools to support your legal advertising—or if your vendors are—now’s the time to put the right review process in place. Know what you’re publishing. Own how it’s made. And make sure your compliance standards are keeping up with your marketing ambitions.

Schedule an AI advertising compliance audit with LaFleur.

We’ll help you review your workflows, flag areas of concern, and build guardrails that protect your practice while letting your team keep moving fast.