"Artificial Intelligence" is dumb
I mean the term of art, not the concept or field of study. And what’s dumb is how it’s applied. “Machine learning” is also dumb in a similar way.
Some definitions for AI
You can go back to the beginning of the field if you want to, but modern definitions tend to be vague. There are good definitions out there, but they sound esoteric, and unless you’re really interested in defining AI precisely, you’ll probably just stick with Merriam-Webster or Wikipedia, which gives you, literally:
The capability of a machine to imitate intelligent human behavior.
And this sounds perfectly reasonable for people who don’t spend all day thinking about this stuff, but it’s generic to the point of encompassing nearly anything.
What people think of when they think of AI
Here’s the crux. Most folks don’t even bother looking for a definition for this term of art. They know what the words “Artificial” and “Intelligence” mean, and they’ve seen movies. So they think of HAL 9000 or J.A.R.V.I.S. from Iron Man or GLaDOS or Data or Cortana (but the one from Halo, not the off-brand Alexa). All of these are person-like AIs with specific behaviors we recognize as human. They sound and act in ways we recognize as human. They’re spoken to as equals or near-equals. Humans develop emotional connections to these AIs.
But most relevantly, they’re consistent with the colloquial meanings of the words “artificial” and “intelligence”. These examples are all human-like but obviously not human; their physical representations vary, but none of them could be mistaken for a person. So they’re obviously “artificial” (i.e., non-organic), but they’re also “intelligent” in ways that resemble human intelligence to varying degrees. (Often, exploring the differences between the way these AIs “think” and the way humans think is a main purpose of including AI characters at all.)
So what’s the problem?
The issue is that the technical definitions for AI are so broad that they encompass everything from what people think of when they think of AI (HAL and JARVIS and all that other stuff) to… literally almost any man-made device that reacts to external stimuli. By most definitions, those bathroom paper towel dispensers that detect hand-waves and spit out inadequate paper towels qualify. It’s a system that processes external data to produce an action in a way that a human might. Sure, it’s a specific type of intelligence unique to a certain task, but most definitions of AI don’t rule that out.
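To make that concrete, here’s a minimal sketch of the dispenser’s entire “intelligence” in Python, with hypothetical read_sensor and dispense_towel functions standing in for the hardware:

```python
# A hypothetical paper towel dispenser controller. Under the broad
# definitions above, this arguably qualifies as "AI": it processes
# external data (a proximity reading) and produces an action.

def should_dispense(proximity_cm: float, threshold_cm: float = 10.0) -> bool:
    """Decide whether a hand-wave warrants a towel."""
    return proximity_cm < threshold_cm

def control_loop(read_sensor, dispense_towel):
    """Poll the (hypothetical) sensor and react. That's the whole device."""
    while True:
        if should_dispense(read_sensor()):
            dispense_towel()
```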
So the end result is that anyone who builds any sort of predictive model is able to technically claim that they’re doing “AI”.
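For instance, here’s a sketch of the kind of garden-variety predictive model that can be billed that way; it uses scikit-learn, and the customers.csv file and its column names are made up for illustration:

```python
# A plain logistic regression on tabular data. Nothing here is wrong or
# useless, but calling it "AI" in a briefing is technically defensible
# under the broad definitions, and that's exactly the problem.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")            # hypothetical data file
X = df[["age", "tenure_months", "spend"]]    # hypothetical features
y = df["churned"]                            # hypothetical label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```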
“Machine Learning” has the same issue
We could go through the same drill with the also-awful term of art, “Machine Learning”, but I’ll keep it brief here. When regular people hear the phrase “machine learning”, they think of machines that can learn.
Unfortunately, this is pretty far from how most ML works in practice. At least in the contexts where I see it used, there’s no physical machine. You have a computer (or cluster or virtual computer or whatever) that produces some output file or result. This most typically takes the form of a visual display or a vector of classifications or predictions or something like that. The point is, there are no gears cranking, and the outputs aren’t mechanical.
And the “learning” is usually pretty static. I’m sure some companies have dynamic algorithms that update themselves constantly, but the only way I’ve seen it done in practice is in batches. So you might run an update monthly or weekly or something like that. Not nearly as dynamic as the term of art implies. That’s the best-case scenario, too. Most of the applications I see don’t work with a data pipeline at all, and most textbooks that discuss ML don’t get there. (I’m working off Elements of Statistical Learning here, which may be dated.)
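In practice, the “learning” I’m describing looks less like a machine teaching itself and more like a scheduled batch job, something along the lines of this sketch (scikit-learn again, with hypothetical file and column names, not anyone’s actual pipeline):

```python
# A monthly/weekly "learning" job: refit on the latest labeled data, score
# the new records, and write a vector of predictions to a file. The model
# is frozen until the next scheduled run; that's the extent of the learning.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["age", "tenure_months", "spend"]     # hypothetical feature columns

def batch_update():
    train = pd.read_csv("labeled_history.csv")   # hypothetical training data
    new = pd.read_csv("new_records.csv")         # hypothetical records to score

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(train[FEATURES], train["label"])
    joblib.dump(model, "model_latest.pkl")       # static until the next run

    # The output: no gears cranking, just predictions written to a file.
    new["prediction"] = model.predict(new[FEATURES])
    new.to_csv("predictions.csv", index=False)

if __name__ == "__main__":
    batch_update()
```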
Terms of art should be meaningful
I hear all of these terms of art used when basic analyses get briefed to decision-makers, and all it does is muddy the waters. The analysts/contractors use the fanciest term of art they can get away with (“AI!”), and because extant definitions are squishy enough, it isn’t really wrong. But it waters down the truly interesting analyses. If you use the same term to describe AlphaGo and the paper towel dispenser, then your term isn’t very useful.
And this is bad because there are some amazing things that people are doing with, e.g., deep neural nets and reinforcement learning and the like! When people fitting a regression tree or doing a bag-of-words analysis adopt the language used by people building ImageNet or the new Gmail “Autocompose” feature, it muddies the water and makes it tougher for decision-makers to quickly differentiate between the mundane and the amazing. It leads to term-of-art proliferation (“computer vision”, etc.), which just makes it harder for all of us to communicate.
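To be concrete about the gap: a complete bag-of-words analysis can be a handful of lines of off-the-shelf code, as in this sketch (scikit-learn, with a placeholder three-document corpus), which is exactly why it shouldn’t borrow the vocabulary of ImageNet-scale work:

```python
# A complete bag-of-words text classifier: legitimate and often useful,
# but a few lines of off-the-shelf code, a world apart from training
# ImageNet-scale networks.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["great product", "terrible service", "great service"]  # placeholder corpus
labels = [1, 0, 1]                                              # placeholder sentiment labels

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["terrible service today"]))  # classify a new snippet
```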
My resolution is to try to always be specific and use the proper name for whatever algorithms I use. There’s a balance to be struck, but the simple rule is to not over-sell your analysis. If it was simple, that’s fine! Decision-makers are usually more concerned with how useful the results are than with how fancy your method sounds. (You could also view choosing not to inflate the complexity of your analysis as a counter-signal of how high-quality your results are!) Part of this is making sure that when I do use terms of art, I use them in ways that are helpful and not confusing. Terms of art should help convey meaning, and if the colloquial meanings of the words distract from the meaning you’re trying to convey, simpler language is probably the better choice.