Just before my son turned one, he started saying simple words like “ball” and “Alexa” (because we would frequently ask Amazon’s Alexa questions). To his mom’s chagrin, he said these words even before saying “mama”. Without being explicitly taught, he began to say “dog”. He would see a dog while we were out on a walk, point excitedly from his stroller, and shout “dog!!!” I began to believe my son was unusually intelligent. To further confirm my belief, I wanted to see what other animals he could identify, so I brought out the book of “first one hundred animals” to “read” to him. When he saw the picture of a bear, he pointed excitedly and said… “dog!!!” Picture of a goat… “dog!!!” Worst of all, the picture of a chicken… “dog!!!” It turns out I had overestimated his intelligence.
This story illustrates the human fascination with emergent intelligence. You are more likely to be impressed by a dog that can accurately follow simple instructions than by a Ph.D. who can explain Quantum Computing (j.k. no one can really explain Quantum Computing)—unless, of course, you’re a nerd interested in Quantum Computing. Beyond our curiosity about the source of intelligence, we’re also more likely to exaggerate intelligence when it emerges from an unexpected source. The Ph.D. is expected to understand complex things. My toddler could only communicate through cries and unintelligible sounds. So when he began—apparently out of the blue—to say “dog”, I naturally thought he was far more intelligent than he truly was.
This human fascination with emergent intelligence, I believe, is why AI has so consumed our society and why we’ve been so susceptible to the AI hype. When ChatGPT launched in November 2022, most people thought of computers as sophisticated calculators—machines that processed inputs and produced predictable outputs. To anyone outside of the world of machine learning, it seemed like a miracle that you could ask a computer to write a “new” poem in the voice of Shakespeare, and it would oblige. The technology appeared to cross a threshold, from a tool that executed commands to something that seemed to understand them.
Even the AI companies seemed surprised by how quickly ChatGPT captured the public’s attention, but they realized they needed to capitalize on the moment. To sustain interest and feed the hype cycle, they began to amplify claims about AI’s current and future capabilities. That led to breathless predictions about AGI (Artificial General Intelligence) and assertions that AI would soon possess superhuman intelligence. When the AGI hype started to plateau, the focus shifted to “agentic AI” and other concepts. This is the pattern: each new wave of hype serves to maintain momentum and investment.
Understanding these incentive structures is instructive for lawyers navigating AI’s impact on the profession. Recognizing that some claims about AI are significantly overstated will help you avoid spending hundreds of thousands—or millions—of dollars on tools you may not need. Distinguishing hype from the reality of a genuinely powerful and transformative technology, including its significant limitations and pitfalls, will enable a more pragmatic approach to AI adoption in your practice—one that leverages real capabilities without falling for exaggerated promises. The lawyers who will thrive aren’t those who dismiss AI or those who swallow every claim uncritically. They’re the ones who resist the very human impulse to mistake a few impressive tricks for genius—and who remember that even the most enthusiastic “dog!!!” might just be a chicken.