What remains for human lawyers as AI systems grow more powerful and more widely used in law practice, and what are the implications for legal education? Two complementary answers come from thoughtful observers of the legal sector as a whole.
Jordan Furlong, “The new legal intelligence”:
“Legal intelligence” … is a capability — a power to do certain things and achieve certain outcomes. If you’re a lawyer, you acquired the “legal intelligence” capability in law school, using your human intellect. You probably refer to it as your ability to “think like a lawyer,” and that’s accurate enough for present purposes. … Artificial intelligence programs don’t “think” — they generate output in response to external prompts using statistical modelling and probabilistic prediction based on their data and training. So it would not be correct to say that LLMs can “think like a lawyer.”
But LLMs do possess the capability of “legal intelligence” — in functional terms, they can learn, understand, and reason with legal rules and principles, and they can use that ability to assess, analyze, and solve legal problems. This is particularly true of law-specific programs like Vincent, CoCounsel, Harvey and others; but … it can also be said of general-purpose LLMs like ChatGPT, Claude, and Gemini in many cases. These programs can generate legal output strikingly similar to the content lawyers produce using human legal intelligence.
Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.
Think of it this way: Picture your town or city from above, as if you’re looking down at a Google Maps display. Drop a red pushpin on the map for each lawyer below you, each human source of legal intelligence. Lots of little red pins would pop up, or maybe only a few, if (like many people) you’re in a region that’s short on lawyers.
Now drop a blue pin for every desktop, tablet, and smartphone that can run an LLM, either a general-purpose model like ChatGPT or a legally grounded model like Vincent. The entire map would flood blue with countless sources of artificial legal intelligence, vastly outnumbering the human sources in red.
Even if you were to count only the law-specific AIs, which conservatively are in regular use by a hundred thousand lawyers every month, the blue pins would at least rival the red — and they’re growing fast. And each blue pin on the map represents a standalone source of artificial legal intelligence, one that can mimic the results of a human lawyer’s legal reasoning much faster, much less expensively, 24/7/365.
That’s what’s happening to the legal market right now: the supply of legal intelligence is exploding. A valuable capability that used to be scarce is on its way to becoming ubiquitous. It’s not just that we’ve developed artificial legal intelligence, it’s that this capacity is scalable, portable, and accessible — three things that most human lawyers are not.
Sarah Glassmeyer of Legaltech Hub, in “The Human Advantage: Professional Growth for Lawyers in the Age of Generative AI,” reaches a conclusion closely aligned with Furlong’s:
For generations, young lawyers were trained to master information: memorize doctrine, cite authority, produce flawless work. Progress was measured in output: the memo, the brief, the research note.
Generative AI flips that script. When a well-worded prompt can produce a decent first draft or a clean case summary, knowledge accumulation alone matters less. What matters more is discernment: knowing what to ask, what to trust, and what to change.
That is where reverse benchmarking comes in again. Instead of asking, “How can I do what AI does, only faster?” the better question is, “What can AI not do that I can learn to do better?”
The shift from knowledge accumulation to skill differentiation is the real story of professional growth in the AI era. It means investing in what cannot be automated: judgment, ethics, creativity, and human communication.