Making Suggestions and Artificial Intelligence
A suggestion is different from a command. Even though some humans seem to miss the point entirely, a suggestion is simply an idea put forth as a potential solution to a problem. Making a suggestion implies that other solutions could exist and that accepting a suggestion doesn't mean automatically implementing it. In fact, a suggestion is only an idea; it may not even work. Of course, in a perfect world, all suggestions would be good ones, or at least plausible paths to a correct outcome, which is seldom the case in the real world.
Getting suggestions based on past actions
The most common way for an AI to create a suggestion is to record past actions as events and then use those events as a dataset for making new suggestions. For example, if someone purchases a Half-Baked Widget every month for three months, it makes sense to suggest buying another one at the beginning of the fourth month. In fact, a truly smart AI times the suggestion as well: if the user made each of the first three purchases between the third and the fifth day of the month, it pays to start making the suggestion on the third day of the month and then move on to something else after the fifth day.
Humans output an enormous number of clues while performing tasks. Unlike humans, an AI actually pays attention to every one of these clues and can record them in a consistent manner. This consistent collection of action data enables an AI to provide suggestions based on past actions with a high degree of accuracy in many cases.
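The timing idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production recommender; the function names and the example dates are hypothetical, and the only logic is the one described in the text: suggest a repeat purchase only inside the day-of-month window observed in past purchases.

```python
from datetime import date

def suggestion_window(purchase_dates):
    """Return the (earliest, latest) day of month seen in past purchases."""
    days = [d.day for d in purchase_dates]
    return min(days), max(days)

def should_suggest(today, purchase_dates):
    """Suggest only while today falls inside the observed window."""
    earliest, latest = suggestion_window(purchase_dates)
    return earliest <= today.day <= latest

# Three monthly purchases, each between the 3rd and the 5th.
history = [date(2025, 1, 4), date(2025, 2, 3), date(2025, 3, 5)]

print(should_suggest(date(2025, 4, 4), history))   # inside the 3rd-5th window: True
print(should_suggest(date(2025, 4, 10), history))  # past the window, move on: False
```

A real system would also weigh how regular the pattern is before acting on it, but the window test captures the core of the idea.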
Getting suggestions based on groups
Another common way to make suggestions relies on group membership. In this case, group membership need not be formal. A group could consist of a loose association of people who have some minor need or activity in common. For example, a lumberjack, a store owner, and a dietician could all buy mystery books. Even though they have nothing else in common, not even location, the fact that all three like mysteries makes them part of a group. An AI can easily spot patterns like this that might elude humans, so it can make good buying suggestions based on these rather loose group affiliations.
Groups can include ethereal connections that are temporary at best. For example, all the people who flew on flight 1982 out of Houston on a certain day could form a group. Again, no connection whatsoever exists between these people except that they appeared on a specific flight. However, by knowing this information, an AI could perform additional filtering to locate people within the flight who like mysteries. The point is that an AI can provide good suggestions based on group affiliation even when the group is difficult (if not impossible) to identify from a human perspective.
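The kind of filtering described above amounts to intersecting two loose groups. Here is a minimal sketch; every name in it is hypothetical, and Python's set intersection stands in for whatever the real matching logic would be.

```python
# An interest group: people who buy mystery books, regardless of
# occupation or location.
mystery_buyers = {"lumberjack_ann", "store_owner_bo", "dietician_cy", "pilot_dee"}

# A temporary group: everyone on one specific flight.
flight_1982_houston = {"pilot_dee", "tourist_ed", "store_owner_bo"}

# People on the flight who also like mysteries get the suggestion.
targets = flight_1982_houston & mystery_buyers
print(sorted(targets))  # ['pilot_dee', 'store_owner_bo']
```

The two groups share no formal definition; the overlap alone is what makes the suggestion possible.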
Obtaining the wrong suggestions
Anyone who has spent time shopping online knows that websites often provide suggestions based on various criteria, such as previous purchases. Unfortunately, these suggestions are often wrong because the underlying AI lacks understanding. When someone makes a once-in-a-lifetime purchase of a Super-Wide Widget, a human would likely know that the purchase is indeed once in a lifetime because it's extremely unlikely that anyone will need two. However, the AI doesn't understand this fact. So, unless a programmer specifically creates a rule specifying that Super-Wide Widgets are a once-in-a-lifetime purchase, the AI may keep recommending the product precisely because its sales are low. In following a secondary rule about promoting slow-selling products, the AI behaves according to the characteristics that the developer provided for it, but the suggestions it makes are outright wrong.
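The fix the text describes, a programmer-supplied rule overriding the slow-sales promotion, can be sketched as follows. The product names and rule set are hypothetical; the point is only that the exclusion has to be written in explicitly, because the AI won't infer it.

```python
# Human-supplied rule: products nobody needs twice.
ONCE_IN_A_LIFETIME = {"Super-Wide Widget"}

def recommend(past_purchases, slow_sellers):
    """Promote slow-selling products, except once-in-a-lifetime
    items the customer already owns."""
    suggestions = []
    for product in slow_sellers:
        if product in ONCE_IN_A_LIFETIME and product in past_purchases:
            continue  # the explicit rule: never re-suggest these
        suggestions.append(product)
    return suggestions

print(recommend({"Super-Wide Widget"},
                ["Super-Wide Widget", "Half-Baked Widget"]))
# ['Half-Baked Widget']
```

Without the `continue` branch, the slow-sales rule alone would keep pushing the Super-Wide Widget at the one customer guaranteed not to want another.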
Besides rule-based or logic errors in AIs, suggestions can become corrupted through data issues. For example, a GPS could make a suggestion based on the best possible data for a particular trip. However, road construction might make the suggested path untenable because the road is closed. Of course, many GPS applications do consider road construction, but they sometimes don't consider other issues, such as a sudden change in the speed limit or weather conditions that make a particular path treacherous. Humans can overcome gaps in data through innovation, such as by taking a less-traveled road or understanding the meaning of detour signs.
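A suggestion is only as good as the data behind it, and the GPS example can be made concrete with a toy route finder. This is a sketch, not how any real GPS works: the road network, the closure set, and the travel times are all made up, and Dijkstra's algorithm (via `heapq`) stands in for real routing. The point is that the suggestion changes only when the closure actually reaches the dataset.

```python
import heapq

def shortest_route(edges, closed, start, goal):
    """Dijkstra over undirected (a, b, minutes) edges,
    skipping any (a, b) pair flagged as closed."""
    graph = {}
    for a, b, cost in edges:
        if (a, b) in closed or (b, a) in closed:
            continue  # stale data would simply omit this check's input
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return None

roads = [("home", "main_st", 5), ("main_st", "office", 5),
         ("home", "detour", 8), ("detour", "office", 8)]

print(shortest_route(roads, set(), "home", "office"))
# (10, ['home', 'main_st', 'office'])
print(shortest_route(roads, {("home", "main_st")}, "home", "office"))
# (16, ['home', 'detour', 'office'])
```

If the closure never makes it into `closed`, the AI confidently suggests the 10-minute route straight into the construction zone.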
When an AI manages to get past the logic, rule, and data issues, it sometimes still makes bad suggestions because it doesn’t understand the correlation between certain datasets in the same way a human does. For example, the AI may not know to suggest paint after a human purchases a combination of pipe and drywall when making a plumbing repair. The need to paint the drywall and the surrounding area after the repair is obvious to a human because a human has a sense of aesthetics that the AI lacks. The human makes a correlation between various products that isn’t obvious to the AI.
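The missing-paint problem comes down to how co-occurrence works: an AI built on purchase data can only suggest items that actually appear together in that data. The sketch below (hypothetical baskets, simple pair counting via `itertools.combinations`) shows why paint never surfaces, no matter how obvious it is to a human.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; nobody happened to buy paint
# in the same basket as plumbing supplies.
baskets = [
    {"pipe", "drywall"},
    {"pipe", "drywall", "tape"},
    {"drywall", "tape"},
]

# Count every pair of items bought together.
pairs = Counter()
for basket in baskets:
    pairs.update(combinations(sorted(basket), 2))

def co_purchased_with(item):
    """Items the data says go with `item` -- and nothing else."""
    return sorted({other for (a, b) in pairs
                   for other in (a, b)
                   if item in (a, b) and other != item})

print(co_purchased_with("pipe"))  # ['drywall', 'tape'] -- no paint in sight
```

The aesthetic link between drywall repair and paint exists only in the human's head, so a human-authored association rule would have to supply it, just as with the once-in-a-lifetime widget.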