We certainly live in interesting times. There are “intelligent” devices everywhere. They help us to make smarter decisions (finding the fastest way home), remember things (picking up milk on the way home), and deal with the unexpected (where is the nearest gas station?). The term “intelligent” is sometimes used to describe such conveniences, although most of us would agree that we have a long way to go before these applications behave in a way that we would call truly intelligent. There are promising advances taking place in the field of Artificial Intelligence (AI). We are re-thinking everything from medical diagnosis to counter-terrorism in the context of amazing possibilities. At the same time, we have a long way to go before there is any real “intelligence” in AI. Sometimes, I wonder if we will forever face the reality that our machines can be no smarter than we make them (more on this topic later). I imagine a nerdy joke where a robot walks into a bar (where else would it go?) and tells the bartender to shut down his business because the finest predictive algorithms have concluded that all businesses eventually terminate; therefore, the closing is inevitable. This scenario is funny if you are a nerd…really.
The Terminology: New words for new things
One of the problems with discussions about AI is that the term is so broad. We have the same issue with other terms, but they have been around long enough that there is common understanding. For example, the term “home entertainment” seems to be well understood. For me, this term conjures up images of flat-screen TVs, stereo equipment, and maybe a Bluetooth device or two to tie it all together. For others, the term may refer to gaming systems. I know an orchestra conductor who built a performance space inside his home. For him, “home entertainment” means inviting a few friends over to perform live chamber music. So, we see that even a widely accepted term can have both general and specific meaning.
The problem with the term AI is that the underlying technologies have not really been around long enough for everyone to understand the boundaries of what is, and is not, part of the conversation. This piece is not intended to be a “definitive guide” to all things AI (you can find plenty of those articles on the Internet, and they don’t even begin to agree with one another). However, it is helpful to review some of the context in order to set the stage for understanding the field as it is emerging.
A simple way to start is to consider “artificial” to suggest mimicking or emulating. Many AI approaches are essentially designed to proceed from an underlying premise. For example, I recently went to an amazing raptor sanctuary. The trip started with an education session where they taught us to recognize different kinds of birds that we might see. We then journeyed up the mountain, helping the ranger to count different species. This experience is a good example of a goal-based training exercise. (This is a hawk. Not all hawks are the same. This is a sparrow. It’s not a hawk.) Many AI agents do something very similar, whereby they observe a “training set” (for example, a series of photos of famous people) and then can be tasked with looking at millions of pictures to find other instances of the same visage.
Sometimes, technologies have the ability to refine their focus based on input about how well they are doing. If the objective is a discrete result (e.g. a mathematical correlation such as a stock going up or a temperature going down), the “learning” can be unsupervised; essentially, the algorithm looks at thousands of potential parameters and decides which ones most readily predict the outcome, irrespective of causation. If the objective is more subjective (e.g. that is NOT Lady Gaga), then human agents can help to train the algorithm. This approach is sometimes called supervised learning.
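The “which parameter most readily predicts the outcome?” idea can be sketched in a few lines of code. This is only an illustration, not any particular product’s algorithm; the data (a stock-style outcome and three candidate signals) is invented for the example, and real systems would scan thousands of parameters rather than three.

```python
# A minimal sketch: score each candidate parameter by how strongly it
# correlates with the outcome, and pick the strongest. Note that a high
# correlation says nothing about causation.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_predictor(params, outcome):
    """Return the name of the parameter whose values correlate most
    strongly (positively or negatively) with the outcome."""
    return max(params, key=lambda name: abs(pearson(params[name], outcome)))

# Invented example: a daily price change vs. three candidate signals.
outcome = [1.0, 2.1, 2.9, 4.2, 5.1]
params = {
    "trading_volume": [10, 21, 30, 41, 50],   # tracks the outcome closely
    "temperature":    [5, 3, 8, 2, 7],        # unrelated noise
    "news_mentions":  [50, 40, 31, 22, 9],    # inversely related
}

print(best_predictor(params, outcome))        # trading_volume
```

In a supervised setting, the only structural change is that the “outcome” column is a set of human-supplied labels (Lady Gaga / not Lady Gaga) rather than a measured quantity.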
We can see right away that the use of the word “learning” is really anthropomorphizing what the algorithm is doing. Essentially, the algorithm has an objective, and it is trying to converge on that objective. This realization brings us to the second part of AI – the “Intelligence” part.
The famous computer scientist, mathematician, critical thinker, and code breaker extraordinaire Alan Turing once proposed a test for machine intelligence. This test, called the Turing Test, essentially says that a machine is intelligent when an intelligent human cannot distinguish another intelligent human from a machine by asking questions. If you want to have a bit of fun, the next time you are dealing with a “chat bot” (one of those helpful human avatars on the Internet that offer customer assistance without a human agent) ask them if they are a robot. Some of these agents have become very good at dealing with these types of questions.
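At the simplest end, a chat bot is just a set of pattern-and-response rules. The toy sketch below shows how even a trivially rule-based agent can deflect the “are you a robot?” question; the patterns and canned answers are invented for illustration, and real customer-service bots are vastly more sophisticated.

```python
# A toy rule-based chat agent. Each rule is a regular expression paired
# with a canned reply; the first matching rule wins.

import re

RULES = [
    # Deflect the classic "are you a robot?" probe.
    (re.compile(r"\bare you (a )?(robot|bot|machine)\b", re.I),
     "I'm here to help you with your order. What can I do for you?"),
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How can I help you today?"),
]

def reply(message):
    for pattern, canned_answer in RULES:
        if pattern.search(message):
            return canned_answer
    # Fall back to an open-ended prompt when nothing matches.
    return "Could you tell me a little more about that?"

print(reply("Hey, are you a robot?"))
```

Whether such an agent would pass anyone’s Turing Test is another matter; the point is only that “dealing with these types of questions” can start as nothing more than pattern matching.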
Most definitions of AI include reference to human behavior or thinking. In some cases, the algorithms are specifically designed to model the behavior of people doing a task. However, not all AI is centered on humans. Sometimes, the way machines approach a problem is decidedly different. One of the best examples is the game of chess. In the 18th Century, one of the first examples of a thinking “machine” was a device that seemingly played an excellent game of chess against human opponents. This device was eventually debunked because it consisted of a very small human chess player sitting inside a box with levers attached to a second hidden chess board. Nevertheless, the inspiration was brilliant.
What if we asked a computer to find a way to play chess that exploited the power of pursuing computations that exceed human capacity? Essentially: build a computer that plays chess like a computer, achieving the same outcome a human would, but by an entirely different route. That is exactly what has been done, not only with chess, but with many other objectives (for example, autonomous self-driving cars). These algorithms “learn” in ways that are different than humans performing the same task. There is great promise for such approaches, and of course, reason for great concern if the algorithms begin to modify their goals in the process.
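The brute-force style of play described above can be sketched with a game far smaller than chess. The example below uses Nim (take 1–3 stones; whoever takes the last stone wins) as a stand-in: the machine has no intuition at all, it simply searches every future position, which is exactly the kind of computation that exceeds human patience if not human capacity.

```python
# Exhaustive game-tree search on the toy game of Nim. A position is just
# the number of stones left; the player to move takes 1, 2, or 3.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position.
    With 0 stones left, the previous player took the last stone and won."""
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # every move loses; take the minimum and hope

print(best_move(10))  # take 2, leaving 8 stones (a losing position)
```

Search plus memoization solves this game completely; chess engines apply the same idea with far deeper trees, pruning, and learned evaluation functions, but the spirit — compute where a human would intuit — is the same.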
The Tech: Our things are talking and learning
The field of AI is not only broad, but it is also deep. Like a vast ocean, it can be explored across its breadth or down into its depths. Let’s go deep first.
One of the most important things to watch in AI technology is the evolution of devices. From the Internet of Things, where sensors and previously disconnected autonomous objects are sending signals over common platforms (and increasingly to one another), to the world of autonomous smart devices, such as self-driving cars and unmanned flying vehicles (e.g. “drones”), we are seeing the applications of AI technology blossom before our eyes. Of course, like any invention, such devices can be used for good or for not-so-good. We are currently in a period where the rush to bring new devices to market is punctuated with stories of these devices being used to produce alarming, unintended consequences.
Now, let’s look broadly at the field of AI technology. At the same time that physical devices are becoming smarter, so are the techniques of AI itself. Approaches such as Deep Learning and neuromorphic computing — software and hardware inspired by our current understanding of how human thought works — are enormously exciting and certainly worth watching. Other technologies that have been around for quite some time are finding intriguing new applications, such as Natural Language Processing (NLP) and heuristic agents: approaches designed to watch the way people use language and behave, and to derive computational insight from how these things change over time.
The Philosophies: Can’t we all just get along?
Like any emerging field, not everyone agrees on the best way forward. There are established schools of thought, such as the Bayesian theorists, who center on the strength of hypotheses as more information becomes available, or the strict NLP proponents, who hold that language can be fully understood if we have a sufficiently large corpus of examples to interrogate. Machine learning is based on the belief that data from the past can be interrogated to make meaningful predictions about how to move forward, while the cognitive computing approach is based on the machine curating a vast and growing corpus of knowledge and working alongside a human agent to get to the best result.
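The Bayesian idea — that the strength of a hypothesis is updated as each new piece of information arrives — fits in a few lines of code. The numbers below (a spam-filtering hypothesis and per-word likelihoods) are invented for illustration only.

```python
# One application of Bayes' rule: revise the probability of a hypothesis
# after observing a piece of evidence.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of the hypothesis after one observation."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Hypothesis: "this email is spam", starting from a 50/50 prior.
belief = 0.5
# Each observed word is more common in spam (first rate) than in
# legitimate mail (second rate), so each observation strengthens the belief.
for p_word_if_spam, p_word_if_ham in [(0.6, 0.2), (0.7, 0.3), (0.8, 0.1)]:
    belief = bayes_update(belief, p_word_if_spam, p_word_if_ham)

print(round(belief, 3))  # 0.982
```

The epistemological commitment is visible right in the code: you must believe that the prior and the likelihoods are appropriate for the situation at hand, which is exactly the “what do you have to believe?” question raised below.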
It is extremely important, when dealing with AI, to understand the epistemological frame of the approach. What do you have to believe? Why is it likely that that belief is appropriate in the situation that you are facing?
As AI technologies continue to advance, these belief systems will be at the forefront of new regulations, new inventions, new risks, and new opportunities. AI Ethics, the emerging field of study about the implications of having machines make real-world decisions (such as in self-driving cars and autonomous drones), is just beginning to grapple with some of these real-world issues. In some cases, existing laws must be reconsidered in light of new technology (e.g. cyber-voyeurism via drones, or AI agents making real-world statements that may be racist or defamatory).
Artificial Intelligence is, at its core, artificial. There is no substitute for human reasoning to make sure that we are focusing this amazing capability in the best way to serve our customers and ultimately to improve the human condition. Getting it right can lead to possibilities never before imagined.