The History of Artificial Intelligence

By John Paul Mueller, Luca Massaron

The desire to create intelligent machines (or, in ancient times, idols) is as old as humans. The desire not to be alone in the universe, to have something with which to communicate without the inconsistencies of other humans, is a strong one. The following discussion provides a brief, pertinent overview of the history of modern AI attempts.

Starting with symbolic logic at Dartmouth

The earliest computers were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition. Logical reasoning later added the capability to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another value). However, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result. During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus to do something more. They predicted that machines that could reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now do we have machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must still master at least six more intelligences before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College effort and other endeavors of the time relates to hardware: the processing capability to perform calculations quickly enough to create a simulation. However, that's not really the whole problem. Yes, hardware figures into the picture, but you can't simulate processes that you don't understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations.

The biggest problem with these early attempts (and still a considerable problem today) is that we don't understand how humans reason well enough to create a simulation of any sort, assuming that a direct simulation is even possible. Consider again the issues surrounding manned flight described earlier in the chapter. The Wright brothers succeeded not by simulating birds but rather by understanding the processes that birds use, thereby creating the field of aerodynamics. Consequently, when someone says that the next big AI innovation is right around the corner and yet no concrete description of the processes involved exists, the innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including rule-based (which use if…then statements to base decisions on rules of thumb), frame-based (which use databases organized into related hierarchies of generic information called frames), and logic-based (which rely on set theory to establish relationships). The advent of expert systems is important because they presented the first truly useful and successful implementations of AI.
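To make the rule-based representation concrete, here is a minimal sketch in Python of a toy checker in the spirit of a grammar checker. Everything in it (the rules, the function names, the sample sentence) is invented for illustration; real expert systems of the era were written in languages such as LISP or Prolog and held far larger knowledge bases.

```python
# A toy rule-based "expert system": each rule pairs an if...then
# condition with a piece of advice. The rules are illustrative only.

def rule_double_space(sentence):
    if "  " in sentence:
        return "Remove the doubled space."

def rule_their_there(sentence):
    if "their is" in sentence.lower():
        return 'Did you mean "there is"?'

RULES = [rule_double_space, rule_their_there]

def check(sentence):
    # Fire every rule; keep the advice from each rule whose condition matches.
    results = []
    for rule in RULES:
        advice = rule(sentence)
        if advice:
            results.append(advice)
    return results

print(check("Their is  a problem."))
# -> ['Remove the doubled space.', 'Did you mean "there is"?']
```

The key point is that every bit of "expertise" sits in hand-written rules: the system only ever knows what a human expert explicitly encoded.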

You still see expert systems in use today (even though they aren’t called that any longer). For example, the spelling and grammar checkers in your application are kinds of expert systems. The grammar checker, especially, is strongly rule based. It pays to look around to see other places where expert systems may still see practical use in everyday applications.

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages such as List Processing (LISP) or Prolog. Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers by using products such as VP-Expert, which rely on the rule-based approach. However, these products generally provided extremely limited functionality and relied on smallish knowledge bases.

In the 1990s, the phrase expert system began to disappear. The idea took hold that expert systems had failed, but the reality is that they were simply so successful that they became ingrained in the applications they were designed to support. Using the example of a word processor, at one time you needed to buy a separate grammar-checking application such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments. A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years — all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave. It’s like educating a baby by showing it how to behave through example. Machine learning has pitfalls because the computer can learn how to do things incorrectly through careless teaching.
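To show what deriving operations from examples looks like in practice, here is a minimal sketch that assumes the widely used scikit-learn Python library; the tiny dataset and the pass/fail task are hypothetical and exist only to illustrate the fit-then-predict pattern.

```python
# Learning behavior from labeled examples instead of hand-coded rules.
# Assumes scikit-learn is installed; the dataset is purely illustrative.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours studied, hours slept] -> passed (1) or failed (0)
X = [[8, 7], [7, 8], [1, 4], [2, 3]]
y = [1, 1, 0, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                  # the "teaching by example" step

print(model.predict([[6, 7]]))   # the model generalizes to an unseen case
```

Notice that no one wrote an if…then rule about studying or sleeping; the model derived its own decision boundary from the examples, which is also why careless or biased examples teach it the wrong behavior.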

Five tribes of scientists are working on machine learning algorithms, each one from a different point of view (see the “Avoiding AI Hype” section, later in this chapter, for details). At this time, the most successful solution is deep learning, which is a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitalization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses.

People are saying that the AI winter is over because of deep learning, and that’s true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone the rhetoric down.