Last March, McDonald’s acquired the startup Dynamic Yield for $300 million, in the hope of employing machine learning to personalize the customer experience.
In the age of artificial intelligence, this was a no-brainer for McDonald’s, since Dynamic Yield is widely recognized for its AI-powered technology and recently even landed a spot in a prestigious list of top AI startups. Neural McNetworks are upon us.
Trouble is, Dynamic Yield’s platform has nothing to do with AI, according to an article posted on Medium last month by the company’s former head of content, Mike Mallazzo. It was a heartfelt takedown of phony AI, which was itself taken down by the author but remains engraved in the collective memory of the internet. Mr. Mallazzo made the case that marketers, investors, pundits, journalists and technologists are all in on an AI scam. The definition of AI, he writes, is so “jumbled that any application of the term becomes defensible.”
Mr. Mallazzo’s critique, however, conflates two different issues. The first is the deliberately misleading marketing that is common to many hyped technologies, and is arguably epitomized by some blockchain companies. I am reminded of the infamous Long Island Iced Tea, which saw its stock soar 289% in 2017 after it rebranded itself as Long Blockchain, citing hazy plans to explore blockchain technology.
The second issue is that, unlike blockchain, the term AI is indeed both broad and vague — which opens the door to its widespread use as an idiom for “something that solves hard problems.” But this issue far predates the current period of hype, and is best understood by examining the field’s history and intellectual underpinnings.
AI was born as a scientific field in 1956, in a summer workshop at Dartmouth College. According to the workshop’s mission statement, in two months the 11 attendees would “make a significant advance” in their task of finding “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
The scale of the founders’ vision is staggering, so much so that, six decades later, it continues to be a source of inspiration. Admittedly (much) more than two months have gone by and we’re still far from realizing that vision, but it has given rise to a sprawling field of research. Even AI pioneer Marvin Minsky’s sweeping definition of AI — the “science of making machines capable of performing tasks that would require intelligence” if done by humans — no longer captures everything the field actually does.
Take the area of AI known as heuristic search, for example. It started in the 1960s with a team of researchers at the Stanford Research Institute, who were building a robot with the then-revolutionary capability of autonomously moving around and avoiding obstacles. Continuing a trend of imposing nomenclature — evident in their creation’s dignified name, Shakey the robot — the researchers called their first pathfinding algorithm A1. Its successor, the equally illustrious A2, was later renamed A*.
As it turns out, moving from one point to another is similar to getting from an initial configuration of a puzzle to its solution. That makes A* an amazingly versatile algorithm; academics consider it one of the most fundamental and important tools in the AI arsenal. Yet the algorithm is so simple — it decides which action to take next by adding up two numbers, something that monkeys can do — that it can hardly be seen as a proxy for human intelligence. (The two numbers are simply the cost of the path taken so far and an estimate of the cost remaining.)
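To see just how little is going on under the hood, here is a minimal sketch of A* finding a path on a small grid, written in Python. The grid, the Manhattan-distance heuristic and all of the names are illustrative assumptions of mine, not details of the original SRI implementation.

    import heapq

    def a_star(grid, start, goal):
        # Return a shortest path from start to goal on a grid where 0 marks a
        # free cell and 1 an obstacle; return None if the goal is unreachable.
        def h(cell):
            # Estimate of the cost remaining: Manhattan distance to the goal.
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        rows, cols = len(grid), len(grid[0])
        frontier = [(h(start), 0, start, [start])]  # (g + h, g, cell, path)
        best_g = {start: 0}
        while frontier:
            _, g, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dr, cell[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    if g + 1 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g + 1
                        # The priority is the sum of two numbers: the cost paid
                        # so far plus the estimated cost remaining.
                        heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))  # walks around the obstacles

Everything else in the loop is bookkeeping; the “intelligence,” such as it is, lives entirely in that one addition.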
A similar tale can be told of each of AI’s dozen diverse areas. One is the area of multi-agent systems, which focuses on designing the interaction between autonomous software agents such as self-driving cars. Another is automated planning. Yet another is machine learning, which many mistakenly view as synonymous with AI. The staples of each area don’t quite jibe even with Minsky’s loose definition.
Still, these ostensibly disparate areas have much more in common than just history and excessive optimism. As with other mature scientific disciplines, AI has a shared vocabulary, which allows the most compelling ideas and the most powerful techniques to propagate across areas.
There’s also the periodic emergence of ambitious, cross-cutting enterprises that build on the synergies between AI’s areas. The 2000s brought us the DARPA Grand Challenge and the DARPA Urban Challenge, which supercharged the development of self-driving cars. In 2011, IBM’s Watson crushed two legendary Jeopardy! champions and fired the public imagination. And in recent years a variety of long-standing research threads have coalesced into a new agenda known as “AI for social good,” which aspires to make tangible progress on some of the biggest problems facing humanity.
The moral is that AI is a bit of a misnomer, but it’s an intellectually meaningful term that has always been inclusive. For that reason, it would behoove investors and journalists to demand that startups billed as “AI-powered” explain how their technology fits into the broader AI landscape, instead of jumping to conclusions based on the label itself. It’s a cliché that you shouldn’t judge a book by its cover, but it’s doubly true in the age of AI — and triply true if the book was generated by AI.