
Artificial Intelligence For Dummies

Overview

Dive into the intelligence that powers artificial intelligence

Artificial intelligence is swiftly moving from a sci-fi future to a modern reality. This edition of Artificial Intelligence For Dummies keeps pace with the lightning-fast expansion of AI tools that are overhauling every corner of reality. This book demystifies how artificial intelligence systems operate, giving you a look at the inner workings of AI and explaining the important role of data in creating intelligence. You'll get a primer on using AI in everyday life, and you'll also get a glimpse into possible AI-driven futures. What's next for humanity in the age of AI? How will your job and your life change as AI continues to evolve? How can you take advantage of AI today to make your life easier? This jargon-free Dummies guide answers all your most pressing questions about the world of artificial intelligence.

  • Learn the basics of AI hardware and software, and how intelligence is created from code
  • Get up to date with the latest AI trends and disruptions across industries
  • Wrap your mind around what the AI revolution means for humanity, and for you
  • Discover tips on using generative AI ethically and effectively

Artificial Intelligence For Dummies is the ideal starting point for anyone seeking a deeper technological understanding of how artificial intelligence works and what promise it holds for the future.


Artificial Intelligence For Dummies Cheat Sheet

Artificial intelligence (AI) is a technology that has grabbed a lot of attention in movies, books, products, and in a slew of other places. Often, vendors equate AI with smartness: You buy a smart device to obtain a device with an AI, even though smart devices sometimes are smart only in that they offer connectivity, not AI. Many products are hyped to contain AI that sometimes doesn’t even work. Some people, of course, want to grab headlines by telling mistruths or offering misconceptions about AI. This Cheat Sheet offers you some interesting insights into why the mundane is actually where you see AI most often. Yes, AI is being put to some amazing uses as well, but vendors often misrepresent these applications to the point that no one really knows how much is real and how much is the result of someone’s vivid imagination.

Articles From The Book


Generative AI Articles

Envision the World as a Graph with Bayes' Theorem

Bayes’ theorem can help you deduce how likely something is to happen in a certain context, based on the general probabilities of the fact itself and the evidence you examine, combined with the probability of the evidence given the fact. Seldom will a single piece of evidence diminish doubts and provide enough certainty in a prediction to ensure that it will happen. As a true detective, to reach certainty you have to collect more evidence and make the individual pieces work together in your investigation. Noticing that a person has long hair isn’t enough to determine whether that person is female or male. Adding data about height and weight could help increase confidence.

The Naïve Bayes algorithm helps you arrange all the evidence you gather and reach a more solid prediction with a higher likelihood of being correct. Any single piece of evidence considered on its own couldn’t save you from the risk of predicting incorrectly, but all the evidence summed together can reach a more definitive resolution.

The following example shows how things work in a Naïve Bayes classification. This is an old, renowned problem, but it represents the kind of capability that you can expect from an AI. The dataset is from the paper “Induction of Decision Trees,” by John Ross Quinlan. Quinlan is a computer scientist who contributed in a fundamental way to the development of another machine learning algorithm, decision trees, but his example works well with any kind of learning algorithm. The problem requires that the AI guess the best conditions to play tennis given the weather conditions. The set of features described by Quinlan is as follows:

  • Outlook: Sunny, overcast, or rainy
  • Temperature: Cool, mild, or hot
  • Humidity: High or normal
  • Windy: True or false
Quinlan’s dataset consists of 14 recorded days, each described by these four features and by whether tennis was played that day (the code sketch at the end of this article reproduces the standard version of this dataset). The decision to play tennis depends on the four arguments shown here. The result of this AI learning example is a decision as to whether to play tennis, given the weather conditions (the evidence). Using just the outlook (sunny, overcast, or rainy) won’t be enough, because the temperature and humidity could be too high or the wind might be strong. These arguments represent real conditions that have multiple causes, or causes that are interconnected. The Naïve Bayes algorithm is skilled at guessing correctly when multiple causes exist.

The algorithm computes a score based on the probability of making a particular decision, multiplied by the probabilities of the evidence connected to that decision. For instance, to determine whether to play tennis when the outlook is sunny but the wind is strong, the algorithm computes the score for a positive answer by multiplying the general probability of playing (9 played games out of 14 occurrences) by the probability of the day’s being sunny (2 out of 9 played games) and of having windy conditions when playing tennis (3 out of 9 played games). The same rules apply for the negative case (which has different probabilities for not playing given certain conditions):

likelihood of playing: 9/14 * 2/9 * 3/9 = 0.05
likelihood of not playing: 5/14 * 3/5 * 3/5 = 0.13

Because the score for not playing is higher, the algorithm decides that it’s safer not to play under such conditions. It converts the scores into probabilities by dividing each score by their sum:

probability of playing: 0.05 / (0.05 + 0.13) = 0.278
probability of not playing: 0.13 / (0.05 + 0.13) = 0.722

You can further extend Naïve Bayes to represent relationships that are more complex than a series of factors hinting at the likelihood of an outcome by using a Bayesian network, which consists of a graph showing how events affect one another. Bayesian graphs have nodes that represent the events and arcs showing which events affect others, accompanied by tables of conditional probabilities that show how each relationship works in terms of probability. A famous example of a Bayesian network, called Asia, comes from the 1988 academic paper “Local computations with probabilities on graphical structures and their application to expert systems,” by Steffen L. Lauritzen and David J. Spiegelhalter, published in the Journal of the Royal Statistical Society. The network shows possible patient conditions and what causes what. For instance, if a patient has dyspnea, it could be an effect of tuberculosis, lung cancer, or bronchitis. Knowing whether the patient smokes, has been to Asia, or has anomalous x-ray results (thus giving certainty to certain pieces of evidence, a priori in Bayesian language) helps infer the real (posterior) probabilities of having any of the pathologies in the graph.

Bayesian networks, though intuitive, have complex math behind them, and they’re more powerful than a simple Naïve Bayes algorithm because they mimic the world as a sequence of causes and effects based on probability. Bayesian networks are so effective that you can use them to represent any situation. They have varied applications, such as medical diagnoses, the fusing of uncertain data arriving from multiple sensors, economic modeling, and the monitoring of complex systems such as a car.
For instance, because driving in highway traffic may involve complex situations with many vehicles, the Analysis of MassIve Data STreams (AMIDST) consortium, in collaboration with the automaker Daimler, devised a Bayesian network that can recognize maneuvers by other vehicles and increase driving safety.
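To make the arithmetic concrete, here is a minimal Python sketch of the tennis example. The dataset embedded in the code is the standard version of Quinlan's play-tennis data, chosen because it reproduces the counts quoted above (9 played games out of 14, with 2 sunny and 3 windy days among them); the function and variable names are illustrative only, not taken from the book.

# Naive Bayes on Quinlan's play-tennis data: should we play when it's sunny and windy?
# Each row: (outlook, temperature, humidity, windy, play)
DATA = [
    ("sunny",    "hot",  "high",   False, "no"),
    ("sunny",    "hot",  "high",   True,  "no"),
    ("overcast", "hot",  "high",   False, "yes"),
    ("rainy",    "mild", "high",   False, "yes"),
    ("rainy",    "cool", "normal", False, "yes"),
    ("rainy",    "cool", "normal", True,  "no"),
    ("overcast", "cool", "normal", True,  "yes"),
    ("sunny",    "mild", "high",   False, "no"),
    ("sunny",    "cool", "normal", False, "yes"),
    ("rainy",    "mild", "normal", False, "yes"),
    ("sunny",    "mild", "normal", True,  "yes"),
    ("overcast", "mild", "high",   True,  "yes"),
    ("overcast", "hot",  "normal", False, "yes"),
    ("rainy",    "mild", "high",   True,  "no"),
]

def score(play, outlook, windy):
    """Prior probability of the decision times the likelihood of each observed feature."""
    rows = [r for r in DATA if r[4] == play]
    prior = len(rows) / len(DATA)                               # e.g. 9/14 for "yes"
    p_outlook = sum(r[0] == outlook for r in rows) / len(rows)  # e.g. 2/9 sunny given "yes"
    p_windy = sum(r[3] == windy for r in rows) / len(rows)      # e.g. 3/9 windy given "yes"
    return prior * p_outlook * p_windy

yes_score = score("yes", "sunny", True)   # 9/14 * 2/9 * 3/9, roughly 0.048
no_score = score("no", "sunny", True)     # 5/14 * 3/5 * 3/5, roughly 0.129
total = yes_score + no_score
print(f"probability of playing:     {yes_score / total:.3f}")
print(f"probability of not playing: {no_score / total:.3f}")

Run as written, the script prints roughly 0.270 and 0.730; the 0.278 and 0.722 shown above come from rounding the two scores to 0.05 and 0.13 before dividing.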

General AI Articles

What Is AI Technology?

The first concept that’s important to understand is that artificial intelligence (AI) doesn’t really have anything to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that’s what it is: a simulation. When asking “What is artificial intelligence?” you’ll notice an interplay among goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal. AI technology relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:

  • Acting like a human: When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when differentiation between the computer and a human isn’t possible. This category also reflects what the media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning (all four of which must be present to pass the test).

    The original Turing Test didn’t include any physical contact. The newer Total Turing Test does include physical contact in the form of interrogating perceptual abilities, which means that the computer must also employ both computer vision and robotics to succeed.

    Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright Brothers didn’t succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to aerodynamics that eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.

  • Thinking like a human: When a computer thinks as a human, it performs tasks that require intelligence (as contrasted with rote procedures) from a human to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This model relies on three techniques:
    • Introspection: Detecting and documenting the techniques used to achieve goals by monitoring one’s own thought processes.
    • Psychological testing: Observing a person’s behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other things).
    • Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Magnetoencephalography (MEG).

After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.

  • Thinking rationally: Studying how humans think using some standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation. A computer that thinks rationally relies on the recorded behaviors to create a guide as to how to interact with an environment based on the data at hand.

    The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, the solving of a problem in principle is often different from solving it in practice, but you still need a starting point.

  • Acting rationally: Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data. As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline upon which a computer can begin negotiating the successful completion of a goal.

Hintze's AI classifications

The categories used to define AI offer a way to consider various uses for or ways to apply AI. Some of the systems used to classify AI by type are arbitrary and not distinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well). The problem with strong AI is that it doesn’t perform any task well, while weak AI is too specific to perform tasks independently. Even so, just two type classifications won’t do the job even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI:
  • Reactive machines: The machines you see beating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience upon which to base a decision. Instead, it relies on pure computational power and smart algorithms to recreate every decision every time. This is an example of a weak AI used for a specific purpose.
  • Limited memory: A self-driving car or autonomous robot can’t afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation, it can rely on experience to reduce reaction time and to provide more resources for making new decisions that haven’t yet been made. This is an example of the current level of strong AI.
  • Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly.
  • Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren’t even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.

Problems defining AI

Artificial intelligence has had several false starts and stops over the years, partly because people don’t really understand what AI is all about, or even what it should accomplish. A major part of the problem is that movies, television shows, and books have all conspired to give false hopes about what AI could accomplish. In addition, the human tendency to anthropomorphize (give human characteristics to) technology makes it seem as if AI must do more than it can hope to accomplish. Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently.

Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it’s just a collection of characters. Defining the idiom (a term whose meaning isn’t clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways. The term artificial intelligence doesn’t really tell you anything meaningful, which is why there are so many discussions and disagreements about it. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous.

Discerning intelligence

People define intelligence in many different ways. However, you can say that intelligence involves certain mental activities composed of the following:
  • Learning: Having the ability to obtain and process new information
  • Reasoning: Being able to manipulate information in various ways
  • Understanding: Considering the result of information manipulation
  • Grasping truths: Determining the validity of the manipulated information
  • Seeing relationships: Divining how validated data interacts with other data
  • Considering meanings: Applying truths to particular situations in a manner consistent with their relationship
  • Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid

How does AI work?

The list above could easily get quite long, and even this list is open to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation (a toy sketch of this loop appears after the list):
  1. Set a goal based on needs or wants.
  2. Assess the value of any currently known information in support of the goal.
  3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal, rather than information that you know will support the goal.
  4. Manipulate the data such that it achieves a form consistent with existing information.
  5. Define the relationships and truth values between existing and new information.
  6. Determine whether the goal is achieved.
  7. Modify the goal in light of the new data and its effect on the probability of success.
  8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).
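Here is a toy, hypothetical illustration of that loop (the scenario and names are mine, not from the book): the goal is to identify a hidden number between 1 and 100, and each pass through the loop gathers one new piece of evidence about whether the current guess is too high or too low.

def find_hidden_number(oracle, low=1, high=100):
    """Repeat Steps 2 through 7 until the goal is achieved or the possibilities run out."""
    while low <= high:                      # Step 8: stop when possibilities are exhausted
        guess = (low + high) // 2           # Steps 2 and 4: use what is currently known
        answer = oracle(guess)              # Step 3: gather new information
        if answer == "correct":             # Step 6: goal achieved
            return guess
        elif answer == "too low":           # Steps 5 and 7: relate the new evidence
            low = guess + 1                 #   to the goal and narrow it accordingly
        else:
            high = guess - 1
    return None                             # Goal found false

hidden = 73
oracle = lambda g: "correct" if g == hidden else ("too low" if g < hidden else "too high")
print(find_hidden_number(oracle))           # prints 73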
Even though you can create algorithms and provide access to data in support of this process within a computer, a computer’s capability to achieve intelligence is severely limited. For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can’t easily separate truth from mistruth. In fact, no computer can fully implement any of the mental activities described in the list that describes intelligence.

As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don’t use just one type of intelligence, but rather rely on multiple intelligences to perform tasks. Howard Gardner of Harvard has defined a number of these types of intelligence, and knowing them helps you to relate them to the kinds of tasks that a computer can simulate as intelligence (see the table below for a modified version of these intelligences with additional description).

The Kinds of Human Intelligence and How AIs Simulate Them

The reality vs. hype

There is a lot of hype about AI out there. If you watch movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application like those shown in the movies is the creative output of an overactive imagination. However, the importance of artificial intelligence to the future of technology cannot be overstated. It is already helping people in everyday technologies and has great potential in everything from customer service to health care to outer space exploration.

The five tribes and the master algorithm

You may have heard of something called the singularity, which is behind many of the claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would — as specified by the seven kinds of intelligence discussed earlier. Here are the five tribes of learning:
  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: This tribe’s origin is in neuroscience, and the group relies on backpropagation to solve problems.
  • Evolutionaries: The evolutionaries tribe originates in evolutionary biology, relying on genetic programming to solve problems.
  • Bayesians: This tribe’s origin is in statistics and relies on probabilistic inference to solve problems.
  • Analogizers: The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.
The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are currently working toward that goal. To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity. At this point, you should be amazed at just how much people don’t know about how they think or why they think in a certain manner. Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.

Considering sources of hype

There are many sources of AI hype. Quite a bit of the hype comes from the media and is presented by people who have no idea of what AI is all about, except perhaps from a sci-fi novel they read once. So, it’s not just movies or television that cause problems with AI hype; it’s all sorts of other media sources as well. You can often find news reports presenting AI as being able to do something that it can’t possibly do because the reporter doesn’t understand the technology. Oddly enough, many news services now use AI to at least start articles for reporters.

Some products should be tested a lot more before being placed on the market. The “2020 in Review: 10 AI Failures” article at SyncedReview.com discusses ten products hyped by their developers but which fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole. However, something to consider with a few of these failures is that people may have interfered with the device running the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, the AI will fail to perform as expected because people will continue to fiddle with the software in an attempt to cause it to fail in a humorous manner.

Another cause of problems comes from asking the wrong person about AI. Not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery — it simply isn’t a good idea. Yet, many stories appear with people like these as the information source. To discover the future direction of AI, it’s best to ask a computer scientist or data scientist with a strong background in AI research.

Understanding user overestimation

Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph. However, even with the user significantly overestimating the ability of the technology to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure. However, you need not be speeding down a highway at 90 mph to encounter user overestimation. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner. The article “How to Solve the Most Annoying Robot Vacuum Cleaner Problems” at RobotsInMyHome.com discusses troubleshooting techniques for various robotic vacuums for a good reason — the robots still need human intervention. The point is that most robots need human intervention at some point because they simply lack the knowledge to go it alone.

What is AI technology?

Artificial intelligence is a sub-discipline of computer science that combines large amounts of data with fast, iterative algorithms to enable computers to solve complex problems and complete complex tasks. To see AI at work, you need to have some sort of computing system, an application that contains the required software, and a knowledge base.

For artificial intelligence, the computer could be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for some applications. Of course, if you’re Amazon and you want to provide advice on a particular person’s next buying decision, the smartphone won’t do — you need a really big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.

The application can also vary in size, complexity, and even location. For example, if you’re a business and want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task. On the other hand, if you’re a customer and want to find products on Amazon to go with your current purchase items, the application doesn’t even reside on your computer; you access it through a web-based application located on Amazon’s servers.

The knowledge base varies in location and size as well. The more complex the data, the more you can obtain from it, but the more you need to manipulate it as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important. A network connection affords you access to a large knowledge base online but costs you time because of the latency of network connections. However, localized databases, while fast, tend to lack details in many cases.

General AI Articles

History of AI: How It All Started

You can hardly avoid hearing about artificial intelligence (AI) today. You see AI in the movies, in the news, in books, and online. It's been in the news a lot lately, with all of the frenzy surrounding ChatGPT (see more about that below). AI is part of robots, self-driving (SD) cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in so many ways. Some people have come to trust AIs so much that they fall asleep while their self-driving cars take them to their destination — illegally, of course. Many pundits are burying you in information (and disinformation) about AI, too. Some see AI as cute and fuzzy; others see it as a potential mass murderer of the human race. The problem with being so loaded down with information in so many ways is that you struggle to separate what’s real from what is simply the product of an overactive imagination. Just how far can you trust your AI, anyway? Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspersons. This article helps you understand some of the history and evolution of artificial intelligence.

The ChatGPT controversy

The latest media storm around AI came in early January 2023, when OpenAI's free preview of its ChatGPT chatbot (released in November 2022) reached 100 million users. OpenAI then released a subscription service called ChatGPT Plus, and an upgraded version of the underlying model, GPT-4, in March 2023. A chatbot is a computer program designed to simulate human conversation. ChatGPT (GPT stands for generative pretrained transformer) is a particularly powerful chatbot able to produce natural, human-like writing through its use of 570GB of data from the Internet. Representing one of the latest achievements in the development of artificial intelligence, ChatGPT can answer questions and write articles, poems, emails, and research papers; it can also write programming code, translate languages, and perform other tasks related to language. ChatGPT's possible real-world uses include:
  • Customer service
  • Ecommerce
  • Research
  • Education and training
  • Computer code writing and debugging
  • Scheduling and booking
  • Entertainment
  • Health care information and assistance
However, while many people are excited about the possibilities for ChatGPT and other similar technologies being developed, there are plenty of concerns about how it can be used in bad ways, too — for example, to cheat in school by having it write essays and research papers. It’s difficult to discern whether a piece of writing has been generated by ChatGPT or a human. In addition, the technology is far from perfect; the text it produces is often inaccurate and biased, and therefore, can spread false and even harmful information. AI can, and is, serving us well in many ways, but it’s important to understand its limitations. AI will never be able to engage in certain essential activities and tasks, and won’t be able to do other ones until far into the future. For example, while it can produce a piece of music with the data you’ve entered and in the style of a particular musician, say Beethoven, it cannot actually create anything. AI doesn’t have an imagination or original ideas.

The history of AI, starting with Dartmouth

Looking at artificial intelligence history begins with the earliest computers, which were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition. Logical reasoning later added the capability to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another value). However, for artificial intelligence to evolve, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result.

During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus in Hanover, New Hampshire, to do something more. They predicted that machines that could reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now do we have machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must master at least six more intelligences before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College effort and other endeavors of the time relates to hardware — the processing capability to perform calculations quickly enough to create a simulation. However, that’s not really the whole problem. Yes, hardware does figure in to the picture, but you can’t simulate processes that you don’t understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations. The biggest problem with these early attempts (and still a considerable problem today) is that we don’t understand how humans reason well enough to create a simulation of any sort — assuming that a direct simulation is even possible. Consider the issues surrounding the accomplishment of manned flight by the Wright brothers. They succeeded not by simulating birds, but rather by understanding the processes that birds use, thereby creating the field of aerodynamics. Consequently, when someone says that the next big AI innovation is right around the corner and yet no concrete dissertation exists of the processes involved, the innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including:
  • Rule based: These use "if … then" statements to base decisions on rules of thumb.
  • Frame based: These use databases organized into related hierarchies of generic information called frames.
  • Logic based: These rely on set theory to establish relationships.
The advent of expert systems is important in the history of artificial intelligence because they represent the first truly useful and successful implementations of AI.

You still see expert systems in use today, although they aren’t called that any longer. For example, the spelling and grammar checkers in your application are kinds of expert systems. The grammar checker, especially, is strongly rule based. It pays to look around to see other places where expert systems may still see practical use in everyday applications.
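As a toy illustration of how a rule-based expert system works, here is a minimal sketch in the spirit of a grammar checker; the rules, patterns, and messages are invented for this example and don't come from any real product.

import re

# Each rule of thumb is an "if ... then" pair: a condition on the text and the advice to give.
RULES = [
    (lambda s: re.search(r"\b(\w+)\s+\1\b", s, re.IGNORECASE),
     "Repeated word detected."),
    (lambda s: re.search(r"\bvery unique\b", s, re.IGNORECASE),
     "'Unique' is absolute; drop 'very'."),
    (lambda s: s and s[0].islower(),
     "Sentence should start with a capital letter."),
    (lambda s: s and not s.rstrip().endswith((".", "!", "?")),
     "Sentence is missing end punctuation."),
]

def check(sentence):
    """Fire every rule whose 'if' part matches and collect the 'then' advice."""
    return [advice for condition, advice in RULES if condition(sentence)]

print(check("this is is a very unique sentence"))
# Flags the repeated word, the 'very unique' usage, the missing capital, and the missing period.

A real expert system has hundreds or thousands of such rules maintained by domain experts, which is exactly why these systems proved hard to build and maintain, as described next.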

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages, such as Lisp or Prolog. Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers. However, the products they used generally provided extremely limited functionality and relied on small knowledge bases.

In the 1990s, the phrase expert system began to disappear. The idea that expert systems were a failure did appear, but the reality is that expert systems were simply so successful that they became ingrained in the applications that they were designed to support. Using the example of a word processor, at one time you needed to buy a separate grammar checking application, such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments. A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years — all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave. Machine learning is like educating a baby by showing it how to behave through example. This technology has pitfalls because the computer can learn how to do things incorrectly through careless teaching. At this time, the most successful solution is deep learning, which is a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitalization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses.

People are saying that the AI winter is over because of deep learning, and that’s true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone the rhetoric down.
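As a minimal, hypothetical illustration of what deriving behavior from examples means (the scenario and numbers are invented here, not taken from the book), the sketch below estimates the rule y = 2x + 1 from sample data instead of having a programmer hard-code it.

# "Training" data: (input, desired output) pairs that follow the hidden rule y = 2x + 1.
examples = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]

# Ordinary least-squares fit of y = a*x + b, computed by hand.
n = len(examples)
sum_x = sum(x for x, _ in examples)
sum_y = sum(y for _, y in examples)
sum_xy = sum(x * y for x, y in examples)
sum_xx = sum(x * x for x, _ in examples)

a = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
b = (sum_y - a * sum_x) / n

print(f"learned rule: y = {a:.1f}*x + {b:.1f}")    # y = 2.0*x + 1.0
print(f"prediction for x = 10: {a * 10 + b:.1f}")  # 21.0, derived from the examples alone

Careless teaching shows up here, too: feed the fit mislabeled examples and it will confidently learn the wrong rule, which is the pitfall described above.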

A brief artificial intelligence timeline

  • 1942: First electronic digital computer built by John Vincent Atanasoff and Clifford Berry at Iowa State University
  • 1950: Alan Turing publishes the paper “Computing Machinery and Intelligence”; his proposal later becomes “The Turing Test,” which measures machine AI
  • 1958: Perceptron computer, built by Cornell University Professor Frank Rosenblatt, regarded as the first artificial neural network
  • 1966: First “chatterbox” (later shortened to chatbot), created by Joseph Weizenbaum, a German-American computer scientist, uses natural language processing to converse with humans
  • 1971: First commercial microprocessor released by Intel
  • 1988: Jabberwacky, a chatbot created by British computer scientist Rollo Carpenter, provides interesting and entertaining conversation to humans
  • 1990s: Early days of the Internet
  • 1992: TD-Gammon, developed by Gerald Tesauro of IBM, an artificial neural network trained by temporal-difference learning to play high-level backgammon
  • 1997: IBM's Deep Blue chess computer defeats Russian chess grandmaster Garry Kasparov; Dragon Systems releases speech recognition software for Windows
  • 2012: AlexNet, a convolutional neural network architecture, primarily designed by Alex Krizhevsky, a Ukrainian-born Canadian computer scientist
  • 2020: OpenAI beta tests GPT-3, which uses deep learning to create code, poetry, and other language and writing tasks; it's the first such model that can create content almost indistinguishable from human-created content
  • 2023: ChatGPT, released as a free preview in November 2022, reaches 100 million users in January; OpenAI releases the upgraded GPT-4 in March

AI in our everyday lives

You’re using AI in some way today; in fact, you probably rely on AI in many different ways — you just don’t notice it because it’s so mundane. A smart thermostat for your home may not sound very exciting, but it’s an incredibly practical use for a technology that has some people running for the hills in terror. As the development of AI has continued, there are now really cool uses for AI. For example, you may not know there is a medical monitoring device that can actually predict when you might have a heart problem, but such a device exists. AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used today in all sorts of space applications, and the evolution of artificial intelligence figures prominently in all the space adventures humans will have tomorrow. The potential uses for AI number in the millions — all safely out of sight even when they’re quite dramatic in nature. Here are some of the ways in which you might see AI used:
  • Fraud detection: You get a call from your credit card company asking whether you made a particular purchase. The credit card company isn’t being nosy; it’s simply alerting you to the fact that someone else could be making a purchase using your card. The AI embedded within the credit card company’s code detected an unfamiliar spending pattern and alerted someone to it (a toy sketch of this idea appears after this list).
  • Resource scheduling: Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine where to put a patient based on the patient’s needs, availability of skilled experts, and the amount of time the doctor expects the patient to be in the hospital.
  • Complex analysis: Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms could indicate more than one problem. A doctor or other expert might need help making a diagnosis in a timely manner to save a patient’s life.
  • Automation: Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation today is that an unexpected event, such as an object in the wrong place, can actually cause the automation to stop. Adding AI to the automation can allow the automation to handle unexpected events and continue as if nothing happened.
  • Customer service: The customer service line you call today may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. With good voice inflection (provided by AI as well), you may not even be able to tell that you’re talking with a computer.
  • Safety systems: Many of the safety systems found in machines of various sorts today rely on AI to take over the vehicle in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs that a vehicle can provide, such as the direction of a skid. Computerized ABS is actually relatively old at 40 years from a technology perspective.
  • Machine efficiency: AI can help control a machine in such a manner as to obtain maximum efficiency. The AI controls the use of resources so that the system doesn’t overshoot speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
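As a toy illustration of the fraud-detection idea mentioned in the first bullet, the hypothetical sketch below flags a purchase that falls far outside a card's usual spending pattern; the amounts and the three-standard-deviation rule are invented for this example and are far simpler than anything a real credit card company uses.

from statistics import mean, stdev

def is_unfamiliar(purchase, history, threshold=3.0):
    """Flag a purchase that lies more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(purchase - mu) > threshold * sigma

usual_purchases = [12.50, 40.00, 23.75, 18.20, 35.10, 27.60, 22.40, 31.90]
print(is_unfamiliar(29.99, usual_purchases))    # False: looks like normal spending
print(is_unfamiliar(1450.00, usual_purchases))  # True: triggers a fraud alert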