AI systems like ChatGPT are increasingly used in legal contexts — from contract suggestions to quick legal explanations. But this raises a crucial legal question: Are the answers from large language models (LLMs) considered legal advice? And if so, who is allowed to give it?
Legal Advice: A Protected Activity
In many jurisdictions — especially in Germany and across the EU — legal advice is a regulated profession. The German Legal Services Act (Rechtsdienstleistungsgesetz, RDG) defines legal advice as:
“Any activity in concrete third-party matters that requires a legal examination of the individual case.”
(§ 2 Abs. 1 RDG)
This means: whenever someone (or something) offers guidance on how a specific law applies to a specific situation, it’s considered legal advice — and requires authorization, typically through qualification as a lawyer.
What About AI?
LLMs, such as GPT-based systems, can generate responses that look like legal advice. But does that qualify as “legal advice” under the law? It depends on several factors (a rough screening sketch follows this list):
- Concrete reference: Is the answer tied to a specific, individual case?
- Legal analysis: Does the output contain an evaluation of legal rules?
- Target audience: Is the system aimed at laypersons or professionals?
- Context: Is the AI answer used in a legal advisory setting or as general information?
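For teams building LLM-based products, these factors do not translate directly into code, but they can at least inform a rough intake screen. The following Python sketch is purely illustrative: the keyword lists and the name `screen_prompt` are invented for this example, the heuristic is in no way a legal determination, and a real system would need far more robust detection plus legal sign-off. It merely flags prompts that look like requests for case-specific legal advice so they can be routed to general information or to a human.

```python
from dataclasses import dataclass

# Illustrative only: crude keyword cues for the factors listed above.
CONCRETE_CUES = ("my contract", "my landlord", "my employer", "in my case")
ANALYSIS_CUES = ("am i allowed", "is it legal", "can i sue", "do i have to")

@dataclass
class ScreeningResult:
    concrete_reference: bool   # tied to a specific, individual case?
    legal_analysis: bool       # asks for an evaluation of legal rules?
    looks_like_legal_advice: bool

def screen_prompt(prompt: str) -> ScreeningResult:
    """Heuristic screen: does the prompt resemble a case-specific legal question?"""
    text = prompt.lower()
    concrete = any(cue in text for cue in CONCRETE_CUES)
    analysis = any(cue in text for cue in ANALYSIS_CUES)
    return ScreeningResult(
        concrete_reference=concrete,
        legal_analysis=analysis,
        looks_like_legal_advice=concrete and analysis,
    )

if __name__ == "__main__":
    result = screen_prompt("Is it legal for my landlord to raise the rent twice a year?")
    print(result.looks_like_legal_advice)  # True: route to general info or human review
```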
Legal Evaluation in Germany
According to legal scholars and recent debates in Germany, AI-generated output can qualify as legal advice under § 2 RDG — especially if it:
- Provides personalized guidance (“You should…”),
- Interprets legal provisions for a specific factual scenario,
- Is used without human legal review.
If a company or individual offers such AI-based legal services to others without being authorized under the RDG, this may constitute an unauthorized legal service, which is impermissible under § 3 RDG and can potentially result in fines.
AI as a Tool vs. AI as a Service
The key legal distinction is:
- AI as a tool: Used by lawyers or legally authorized professionals internally. ✅ Legal.
- AI as a service: Provided to the public without legal review. ❌ Possibly illegal.
This means: a lawyer who uses GPT to draft a first version of a contract and then reviews it is acting perfectly acceptably. But an AI app that gives legal recommendations to consumers without human oversight is potentially violating the RDG.
International Perspectives
In the United States, the situation varies by state, but Unauthorized Practice of Law (UPL) statutes raise a similar concern: AI systems giving legal advice may violate UPL rules unless they are operated under a lawyer’s supervision. The ABA (American Bar Association) and several state bars have issued cautionary opinions.
Best Practice: Use with Transparency and Caution
Anyone using or offering AI in legal contexts should:
- Ensure disclaimers make clear that output is not legal advice.
- Limit AI output to general legal information, not case-specific advice.
- Have qualified legal review before clients act on AI-generated answers.
- Clarify responsibilities: AI is not liable, but you might be (a minimal sketch of these safeguards follows this list).
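Several of these safeguards can also be enforced at the application layer. The sketch below is again purely illustrative and rests on stated assumptions: `generate_answer` stands in for the actual LLM call, `is_case_specific` stands in for the screening heuristic sketched earlier, and the disclaimer wording is an example, not vetted legal language. The point of the design is that the disclaimer and the human-review gate sit outside the model, so no prompt can bypass them.

```python
from dataclasses import dataclass

DISCLAIMER = (
    "This output is general legal information, not legal advice. "
    "Consult a qualified lawyer before acting on it."
)

@dataclass
class Answer:
    text: str
    needs_human_review: bool

def generate_answer(prompt: str) -> str:
    """Placeholder for the actual LLM call (e.g. an API request)."""
    return f"[model output for: {prompt}]"

def is_case_specific(prompt: str) -> bool:
    """Stand-in for the screening heuristic sketched earlier."""
    return " my " in f" {prompt.lower()} "

def answer_with_safeguards(prompt: str) -> Answer:
    """Attach a disclaimer and hold back case-specific answers for lawyer review."""
    if is_case_specific(prompt):
        # Case-specific question: do not return the model's answer directly;
        # queue it for review by a qualified lawyer instead.
        return Answer(text=DISCLAIMER, needs_human_review=True)
    raw = generate_answer(prompt)
    return Answer(text=f"{raw}\n\n{DISCLAIMER}", needs_human_review=False)

if __name__ == "__main__":
    print(answer_with_safeguards("What does the GDPR regulate in general?").text)
    print(answer_with_safeguards("Can my employer read my private emails?").needs_human_review)  # True
```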
Conclusion
Yes, LLMs can produce content that qualifies as legal advice under the law. But they cannot be held accountable, and they are not authorized to give it. That puts responsibility squarely on the humans who use them, and on those who profit from them.
The safest legal position: AI can support legal work — but it must never replace a licensed professional when giving binding guidance. When in doubt, treat AI output as informational — not advisory.
As the line between information and advice continues to blur, law firms and providers face a new strategic choice: regulate themselves — or wait to be regulated.
Further Reading & Sources
- IBA – Artificial Intelligence and legal services in Germany (PDF)
- ABA Formal Opinion 512 – Ethics guidance on lawyers’ use of Generative AI
- Reuters: Lawyers using AI must heed ethics rules – summary of ABA guidance
- Reuters: Seven Cs of ethical AI use by lawyers – including competence & confidentiality