A recent article in Bloomberg shed light on the humans behind the recent rash of “chatbot”-based businesses like GoButler and x.ai. For those of you blissfully unfamiliar with the term, it refers to a supposedly artificially intelligent computerized agent that you can converse with via a chat client (SMS, FB Messenger, etc.). In the future envisioned by Microsoft CEO Satya Nadella, these chatbots will form the backbone of crucial new services such as...ordering pizza. Well, it turns out that many of these services aren’t much more than a fancy call center in which boring old humans sit in cubicles, waiting for you to ask for pizza so they can order it for you. The real innovation seems to have been in passing this off as artificial intelligence.
Now, interfacing with computers through conversation is a great idea. Whenever possible, interfaces should be invisible. We’ve spent the past four decades demanding that people think like computers; it’s about time the tables were turned. The problem, as anyone passingly familiar with the actual technology behind natural language processing (NLP) and (shudder) artificial intelligence will tell you, is that these things are extremely hard to do. While very narrow NLP tasks can reach an accuracy of (gasp!) 78%, these techniques are far from generalizable, regardless of what Wired might think.
The chatbot phenomenon is something banal passed off as something futuristic, which is really nothing remarkable in and of itself (cf. the 21st century). Please don’t mistake this for judgment. Banal things are often useful and very often make good businesses. These chatbots represent the application of scheduling optimization to a service: How do you route highly variable inputs to workers in such a way as to maximize utilization? This is an operations problem as old as industrialization and a challenge worthy of addressing. The question is: Why can’t we just say that? Why does founder after founder have to paint their milquetoast contribution to operational efficiency as a mixture of the Manhattan Project, the Moon Landing and the March to Selma?
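To make the operations problem concrete, here is a minimal sketch of one classic approach to it: greedily routing each incoming request to the currently least-loaded worker. Everything here (the function name, the order data, the fixed handling-time estimates) is hypothetical and purely illustrative; real dispatch systems are far more elaborate.

```python
import heapq

def route_requests(requests, num_workers):
    """Greedily assign each request (name, estimated handling time)
    to the least-loaded worker, to keep utilization roughly even."""
    # Min-heap of (total assigned time, worker id)
    heap = [(0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignments = {w: [] for w in range(num_workers)}
    for name, duration in requests:
        load, worker = heapq.heappop(heap)   # least-loaded worker
        assignments[worker].append(name)
        heapq.heappush(heap, (load + duration, worker))
    return assignments

# Hypothetical example: four pizza orders of varying effort, three agents
orders = [("order-a", 5), ("order-b", 2), ("order-c", 7), ("order-d", 3)]
print(route_requests(orders, 3))
```

This greedy rule is the same load-balancing idea behind classic scheduling heuristics; it doesn’t guarantee an optimal schedule, but it is simple, fast, and often good enough, which is rather the point: useful, banal, and not remotely AI.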
“Technology” journalism is an odd beast, one that seems to bear very little relationship to engineering or the fundamentals of business. Like all journalism, narrative is king and all facts must conform to its implacable logic. One of the amazing constants of the technology media is that the narrative is essentially always the same: “X is coming and X will revolutionize the world. Resistance is futile.” The only thing that changes is the value of X. Last year it was “Big Data”; this year it is “AI.” What unites these terms is that they are so vague as to be meaningless. What also unites them is that they become catnip for VCs. Founders then use these terms to show the VCs that they are “with it.” Pando covers the resulting round of funding to prove the reality of the narrative, and so on until the buzzword has been stretched and warped to encompass all things and is discarded in favor of a new one.
So goes all marketing, one might say. The reason this problem is particularly insidious in technology is that these buzzwords, at one point, meant something. They were once concepts, at least nascent ones, that defined an active area of scientific and engineering inquiry. “Big data” used to describe data sets whose size made previously trivial operations (loading them into RAM, for example) impossible. Now it is applied to all data sets; no technocapitalist worth his salt will admit that he has boring old “medium data.” AI is merely the new big data. The twist with artificial intelligence is that it was a squishy term to begin with: people whose job is producing jeremiads about AI can’t even seem to decide whether intelligence is a subset of human mental activity or just another word for it.
In the present environment, basic research is increasingly outsourced to startups. That means the goals of research are being defined by the startup funding system, which is in turn defined by BuzzFeed lists about the sexiest robots in movie history. This feedback loop between the media, the funders and the engineers is not only annoying but possibly fatal to actual technological development. How will we ever achieve useful advances in AI if the goalposts are constantly being moved for the purposes of hype? As Yogi Berra said, “If you don’t know where you are going, any road will take you there.”
To return to our chatbot businesses: x.ai is not AI and likely never will be, since that term doesn’t mean anything anymore. It may, however, be a useful service and a novel contribution to operations optimization. And that’s not such a bad thing. In fact, it’s the kind of thing real businesses are based on.