Artificial Intelligence For Dummies
You can hardly avoid hearing about artificial intelligence (AI) today. You see AI in the movies, in the news, in books, and online. It has been especially prominent lately, with all the frenzy surrounding ChatGPT (more about that below).

AI is part of robots, self-driving cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in so many ways. Some people have come to trust AI so much that they fall asleep while their self-driving cars take them to their destinations, which is illegal, of course.

Many pundits are burying you in information (and disinformation) about AI, too. Some see AI as cute and fuzzy; others see it as a potential mass murderer of the human race. The problem with being so loaded down with information in so many ways is that you struggle to separate what’s real from what is simply the product of an overactive imagination.

Just how far can you trust your AI, anyway? Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspeople. This article helps you understand some of the history and evolution of artificial intelligence.

The ChatGPT controversy

The latest media storm around AI came in January 2023, when OpenAI's free preview of its ChatGPT chatbot (released in November 2022) reached 100 million users. OpenAI then released a subscription service called ChatGPT Plus in February 2023 and an upgraded model, GPT-4, in March 2023.

A chatbot is a computer program designed to simulate human conversation. ChatGPT (GPT stands for generative pretrained transformer) is a particularly powerful chatbot, able to produce natural, human-like writing because it was trained on roughly 570GB of text data from the Internet.

Representing one of the latest achievements in the development of artificial intelligence, ChatGPT can answer questions and write articles, poems, emails, and research papers; it can also write programming code, translate languages, and perform other tasks related to language.
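
The paragraphs above describe ChatGPT's web interface, but developers can also reach OpenAI's chat models programmatically. The following is a minimal sketch, assuming the openai Python package (version 1.x) is installed and an API key is stored in the OPENAI_API_KEY environment variable; model names and library details change over time, so treat it as illustrative rather than definitive.

    # Minimal sketch of calling a chat model through the OpenAI API (openai>=1.0).
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model names change; check current documentation
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain what a chatbot is in one sentence."},
        ],
    )

    print(response.choices[0].message.content)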

ChatGPT's possible real-world uses include:

  • Customer service
  • Ecommerce
  • Research
  • Education and training
  • Computer code writing and debugging
  • Scheduling and booking
  • Entertainment
  • Health care information and assistance
However, while many people are excited about the possibilities for ChatGPT and other similar technologies being developed, there are plenty of concerns about how it can be used in bad ways, too — for example, to cheat in school by having it write essays and research papers.

It’s difficult to discern whether a piece of writing has been generated by ChatGPT or a human. In addition, the technology is far from perfect; the text it produces is often inaccurate and biased, and therefore, can spread false and even harmful information.

AI can serve us well in many ways, and it already does, but it's important to understand its limitations. AI will never be able to engage in certain essential activities and tasks, and won't be able to do others until far into the future. For example, while it can produce a piece of music from the data you've entered, in the style of a particular composer, say Beethoven, it cannot actually create anything. AI doesn't have an imagination or original ideas.

The history of AI, starting with Dartmouth

The history of artificial intelligence begins with the earliest computers, which were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition. The addition of logical reasoning later made it possible to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another).

However, for artificial intelligence to evolve, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result.

During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus in Hanover, New Hampshire, to do something more. They predicted that machines able to reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now do we have machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must master at least six more kinds of intelligence before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College effort and other endeavors of the time relates to hardware: the processing capability to perform calculations quickly enough to create a simulation.

However, that's not really the whole problem. Yes, hardware does figure into the picture, but you can't simulate processes that you don't understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations.

The biggest problem with these early attempts (and still a considerable problem today) is that we don't understand how humans reason well enough to create a simulation of any sort, assuming that a direct simulation is even possible.

Consider the Wright brothers' accomplishment of manned flight. They succeeded not by simulating birds, but rather by understanding the processes that birds use, thereby creating the field of aerodynamics.

Consequently, when someone says that the next big AI innovation is right around the corner, yet no concrete description of the processes involved exists, that innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including:
  • Rule based: These use "if … then" statements to base decisions on rules of thumb (see the sketch after this list).
  • Frame based: These use databases organized into related hierarchies of generic information called frames.
  • Logic based: These rely on set theory to establish relationships.
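
As a taste of the rule-based approach, here's a minimal sketch in Python. The facts and rules are invented purely for illustration; a real expert system encodes hundreds or thousands of rules gathered from human experts.

    # A tiny rule-based expert system: each rule is an "if ... then" pair.
    facts = {"fever": True, "cough": True, "rash": False}

    # Conditions that must all hold -> conclusion to draw.
    rules = [
        ({"fever": True, "cough": True}, "Possible flu: recommend rest and fluids"),
        ({"rash": True}, "Possible allergy: recommend an antihistamine"),
    ]

    def infer(facts, rules):
        """Fire every rule whose conditions all match the known facts."""
        conclusions = []
        for conditions, conclusion in rules:
            if all(facts.get(key) == value for key, value in conditions.items()):
                conclusions.append(conclusion)
        return conclusions

    print(infer(facts, rules))  # ['Possible flu: recommend rest and fluids']
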
The advent of expert systems is important in the history of AI because they represent the first truly useful and successful implementations of AI.

You still see expert systems in use today, although they aren’t called that any longer. For example, the spelling and grammar checkers in your application are kinds of expert systems. The grammar checker, especially, is strongly rule based. It pays to look around to see other places where expert systems may still see practical use in everyday applications.

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages, such as Lisp (short for list processing) or Prolog.

Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers. However, these products generally provided extremely limited functionality and worked only with small knowledge bases.

In the 1990s, the phrase expert system began to disappear. Some people concluded that expert systems had failed, but the reality is that expert systems were simply so successful that they became ingrained in the applications they were designed to support.

Consider the example of a word processor: at one time, you needed to buy a separate grammar-checking application, such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments.

A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years — all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave.

Machine learning is like educating a baby by showing it how to behave through example. This technology has pitfalls because the computer can learn how to do things incorrectly through careless teaching.
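
For instance, here's a minimal sketch of learning from examples, assuming the scikit-learn library is installed; the tiny fruit dataset is made up purely for illustration.

    # Teach a model by example instead of writing sorting rules by hand.
    from sklearn.tree import DecisionTreeClassifier

    # Each example pairs measurements [weight in grams, skin smoothness 0-1]
    # with the answer we want the computer to learn.
    examples = [[150, 0.9], [170, 0.8], [140, 0.3], [130, 0.2]]
    labels = ["apple", "apple", "orange", "orange"]

    model = DecisionTreeClassifier()
    model.fit(examples, labels)  # the model derives its own rules from the data

    print(model.predict([[160, 0.85]]))  # most likely ['apple']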

At this time, the most successful solution is deep learning, which is a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitalization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses.
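
As a toy-sized illustration of the idea, here's a minimal sketch of a small neural network, assuming the TensorFlow/Keras library is installed; the tiny XOR-style dataset is invented for demonstration, and results vary from run to run.

    # Minimal deep learning sketch: a two-layer network learns the XOR pattern.
    import numpy as np
    import tensorflow as tf

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    # Stack layers of simple units (loosely inspired by neurons) into a network.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Training repeatedly adjusts the weights to reduce the prediction error.
    model.fit(X, y, epochs=1000, verbose=0)

    # Predictions should drift toward 0, 1, 1, 0 as training succeeds.
    print(model.predict(X, verbose=0).round(2))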

People are saying that the AI winter is over because of deep learning, and that’s true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone the rhetoric down.

A brief artificial intelligence timeline

1942: First electronic digital computer built by John Vincent Atanasoff and Clifford Berry at Iowa State University

1950: Alan Turing publishes “Computing Machinery and Intelligence”; the test he proposes in it later becomes known as the Turing Test, a measure of machine intelligence

1958: Perceptron computer, built by Cornell University Professor Frank Rosenblatt, regarded as first artificial neural network

1966: ELIZA, the first “chatterbot” (later shortened to chatbot), created by Joseph Weizenbaum, a German-American computer scientist, uses natural language processing to converse with humans

1971: First commercial microprocessor by Intel

1988: Jabberwacky, a chatbot created by British computer scientist Rollo Carpenter, provides interesting and entertaining conversation to humans

1990s: Early days of the Internet

1992: TD-Gammon, developed by Gerald Tesauro, of IBM; an artificial neural network trained by temporal-difference learning to play high-level backgammon

1997: IBM's Deep Blue chess computer defeats Russian chess grandmaster Garry Kasparov; Dragon Systems releases speech recognition software for Windows

2012: AlexNet, a convolutional neural network architecture primarily designed by Alex Krizhevsky, a Ukrainian-born Canadian computer scientist, wins the ImageNet image-recognition competition

2020: OpenAI beta tests GPT-3, which uses deep learning to write code, poetry, and other kinds of text; it's the first such model that can create content almost indistinguishable from human-created content

2023: OpenAI's free preview of ChatGPT (released in November 2022) reaches 100 million users in January; in March, OpenAI releases the upgraded GPT-4 model

AI in our everyday lives

You’re using AI in some way today; in fact, you probably rely on AI in many different ways — you just don’t notice it because it’s so mundane. A smart thermostat for your home may not sound very exciting, but it’s an incredibly practical use for a technology that has some people running for the hills in terror.

As the development of AI has continued, some really cool uses for it have emerged. For example, you may not know it, but medical monitoring devices now exist that can actually predict when you might have a heart problem.

AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used today in all sorts of space applications, and the evolution of artificial intelligence figures prominently in all the space adventures humans will have tomorrow.

The potential uses for AI number in the millions — all safely out of sight even when they’re quite dramatic in nature. Here are some of the ways in which you might see AI used:

  • Fraud detection: You get a call from your credit card company asking whether you made a particular purchase. The credit card company isn’t being nosy; it’s simply alerting you to the fact that someone else could be making a purchase using your card. The AI embedded within the credit card company’s code detected an unfamiliar spending pattern and alerted someone to it (a simple sketch of this kind of anomaly detection appears after this list).
  • Resource scheduling: Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine where to put a patient based on the patient’s needs, availability of skilled experts, and the amount of time the doctor expects the patient to be in the hospital.
  • Complex analysis: Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms could indicate more than one problem. A doctor or other expert might need help making a diagnosis in a timely manner to save a patient’s life.
  • Automation: Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation today is that an unexpected event, such as an object in the wrong place, can actually cause the automation to stop. Adding AI to the automation can allow the automation to handle unexpected events and continue as if nothing happened.
  • Customer service: The customer service line you call today may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. With good voice inflection (provided by AI as well), you may not even be able to tell that you’re talking with a computer.
  • Safety systems: Many of the safety systems found in machines of various sorts today rely on AI to take over the vehicle in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs that a vehicle can provide, such as the direction of a skid. Computerized ABS is actually relatively old, at about 40 years, from a technology perspective.
  • Machine efficiency: AI can help control a machine in such a manner as to obtain maximum efficiency. The AI controls the use of resources so that the system doesn’t overshoot speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
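
To make the fraud-detection idea concrete, here's a minimal sketch of flagging an unfamiliar spending pattern, assuming the scikit-learn library is installed; the transaction data is invented for illustration, and real systems use far richer features than amount and time of day.

    # Learn "typical" spending from past transactions, then flag outliers.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row is one past transaction: [amount in dollars, hour of day].
    normal_history = np.array([
        [12.50, 8], [45.00, 12], [30.25, 18], [22.10, 19],
        [15.75, 9], [60.00, 13], [27.80, 17], [33.40, 20],
    ])

    detector = IsolationForest(contamination=0.1, random_state=42)
    detector.fit(normal_history)

    # predict() returns 1 for transactions that look normal, -1 for outliers.
    new_transactions = np.array([[25.00, 18], [2500.00, 3]])
    for txn, flag in zip(new_transactions, detector.predict(new_transactions)):
        status = "looks unusual, flag for review" if flag == -1 else "looks normal"
        print(f"${txn[0]:.2f} at hour {int(txn[1])}: {status}")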

About This Article

About the book authors:

John Mueller has published more than 100 books on technology, data, and programming. John has a website and blog where he writes articles on technology and offers assistance alongside his published books.

Luca Massaron is a data scientist specializing in insurance and finance. A Google Developer Expert in machine learning, he has been involved in quantitative analysis and algorithms since 2000.
