General AI Articles
AI shows up in everything from the operating room to your home entertainment system. Check out these articles for a heads-up on the latest developments in artificial intelligence.
Article / Updated 05-26-2023
The first concept that's important to understand is that artificial intelligence (AI) doesn't really have anything to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that's what it is: a simulation. When asking "What is artificial intelligence?" notice an interplay among goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal. AI technology relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:

Acting like a human: When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when differentiation between the computer and a human isn't possible. This category also reflects what the media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning (all four of which must be present to pass the test). The original Turing Test didn't include any physical contact. The newer Total Turing Test does include physical contact in the form of perceptual ability interrogation, which means that the computer must also employ both computer vision and robotics to succeed. Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright Brothers didn't succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to aerodynamics that eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.

Thinking like a human: When a computer thinks as a human, it performs tasks that require intelligence (as contrasted with rote procedures) from a human to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This model relies on three techniques:

Introspection: Detecting and documenting the techniques used to achieve goals by monitoring one's own thought processes.
Psychological testing: Observing a person's behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other things).
Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Magnetoencephalography (MEG).

After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.

Thinking rationally: Studying how humans think using some standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation.
A computer that thinks rationally relies on the recorded behaviors to create a guide as to how to interact with an environment based on the data at hand. The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, the solving of a problem in principle is often different from solving it in practice, but you still need a starting point.

Acting rationally: Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data. As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline upon which a computer can begin negotiating the successful completion of a goal.

Hintze's AI classifications

The categories used to define AI offer a way to consider various uses for or ways to apply AI. Some of the systems used to classify AI by type are arbitrary and not distinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well). The problem with strong AI is that it doesn't perform any task well, while weak AI is too specific to perform tasks independently. Even so, just two type classifications won't do the job even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI:

Reactive machines: The machines you see beating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience upon which to base a decision. Instead, it relies on pure computational power and smart algorithms to recreate every decision every time. This is an example of a weak AI used for a specific purpose.

Limited memory: A self-driving car or autonomous robot can't afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation, it can rely on experience to reduce reaction time and to provide more resources for making new decisions that haven't yet been made. This is an example of the current level of strong AI.

Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly.

Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren't even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.
Problems defining AI

Artificial intelligence has had several false starts and stops over the years, partly because people don't really understand what AI is all about, or even what it should accomplish. A major part of the problem is that movies, television shows, and books have all conspired to give false hopes about what AI could accomplish. In addition, the human tendency to anthropomorphize (give human characteristics to) technology makes it seem as if AI must do more than it can hope to accomplish. Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently.

Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it's just a collection of characters. Defining the idiom (a term whose meaning isn't clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways. The term artificial intelligence doesn't really tell you anything meaningful, which is why there are so many discussions and disagreements about it. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous.

Discerning intelligence

People define intelligence in many different ways. However, you can say that intelligence involves certain mental activities composed of the following:

Learning: Having the ability to obtain and process new information
Reasoning: Being able to manipulate information in various ways
Understanding: Considering the result of information manipulation
Grasping truths: Determining the validity of the manipulated information
Seeing relationships: Divining how validated data interacts with other data
Considering meanings: Applying truths to particular situations in a manner consistent with their relationship
Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid

How does AI work?

The list above could easily get quite long, but even this list is relatively prone to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation:

1. Set a goal based on needs or wants.
2. Assess the value of any currently known information in support of the goal.
3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal, rather than information that you know will support the goal.
4. Manipulate the data such that it achieves a form consistent with existing information.
5. Define the relationships and truth values between existing and new information.
6. Determine whether the goal is achieved.
7. Modify the goal in light of the new data and its effect on the probability of success.
8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).

Even though you can create algorithms and provide access to data in support of this process within a computer, a computer's capability to achieve intelligence is severely limited.
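To make the steps concrete, here is a minimal, purely illustrative Python sketch of that goal-seeking loop. All of the names involved (goal_seeking, evaluate, gather_information, refine) are hypothetical placeholders invented for this example, not part of any real AI library; the point is only to show how goal, data gathering, and evaluation interact.

```python
# Illustrative goal-seeking loop (hypothetical helpers, not a real AI library).

def goal_seeking(goal, known_facts, evaluate, gather_information, max_rounds=10):
    """Repeat the assess/gather/evaluate cycle until the goal is met or options run out."""
    facts = list(known_facts)                        # Step 2: start from what is already known
    for _ in range(max_rounds):
        new_facts = gather_information(goal, facts)  # Step 3: collect data that *could* help
        if not new_facts:                            # possibilities exhausted: goal "found false"
            return False, facts
        facts.extend(new_facts)                      # Steps 4-5: fold new data into existing data
        if evaluate(goal, facts):                    # Step 6: is the goal achieved?
            return True, facts                       # goal "found true"
        goal = refine(goal, facts)                   # Step 7: adjust the goal in light of new data
    return False, facts


def refine(goal, facts):
    # Placeholder: a real system would reshape the goal based on what it has learned.
    return goal


# Example: "find a number greater than 100" with a trivial data source.
reached, facts = goal_seeking(
    goal=100,
    known_facts=[12, 47],
    evaluate=lambda goal, facts: any(f > goal for f in facts),
    gather_information=lambda goal, facts: [max(facts) * 2],
)
print(reached, facts)
```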
For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can't easily separate truth from mistruth. In fact, no computer can fully implement any of the mental activities described in the list that describes intelligence. As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don't use just one type of intelligence, but rather rely on multiple intelligences to perform tasks. Howard Gardner of Harvard has defined a number of these types of intelligence, and knowing them helps you to relate them to the kinds of tasks that a computer can simulate as intelligence (see the list below for a modified version of these intelligences with additional description).

The Kinds of Human Intelligence and How AIs Simulate Them

Visual-spatial. Simulation potential: Moderate. Human tools: Models, graphics, charts, photographs, drawings, 3-D modeling, video, television, and multimedia. Description: Physical-environment intelligence used by people like sailors and architects (among many others). To move at all, humans need to understand their physical environment — that is, its dimensions and characteristics. Every robot or portable computer intelligence requires this capability, but the capability is often difficult to simulate (as with self-driving cars) or less than accurate (as with vacuums that rely as much on bumping as they do on moving intelligently).

Bodily-kinesthetic. Simulation potential: Moderate to High. Human tools: Specialized equipment and real objects. Description: Body movements, such as those used by a surgeon or a dancer, require precision and body awareness. Robots commonly use this kind of intelligence to perform repetitive tasks, often with higher precision than humans, but sometimes with less grace. It's essential to differentiate between human augmentation, such as a surgical device that provides a surgeon with enhanced physical ability, and true independent movement. The former is simply a demonstration of mathematical ability in that it depends on the surgeon for input.

Creative. Simulation potential: None. Human tools: Artistic output, new patterns of thought, inventions, new kinds of musical composition. Description: Creativity is the act of developing a new pattern of thought that results in unique output in the form of art, music, and writing. A truly new kind of product is the result of creativity. An AI can simulate existing patterns of thought and even combine them to create what appears to be a unique presentation but is really just a mathematically based version of an existing pattern. In order to create, an AI would need to possess self-awareness, which would require intrapersonal intelligence.

Interpersonal. Simulation potential: Low to Moderate. Human tools: Telephone, audio conferencing, video conferencing, writing, computer conferencing, email. Description: Interacting with others occurs at several levels. The goal of this form of intelligence is to obtain, exchange, give, and manipulate information based on the experiences of others. Computers can answer basic questions because of keyword input, not because they understand the question. The intelligence occurs while obtaining information, locating suitable keywords, and then giving information based on those keywords. Cross-referencing terms in a lookup table and then acting on the instructions provided by the table demonstrates logical intelligence, not interpersonal intelligence.
Intrapersonal. Simulation potential: None. Human tools: Books, creative materials, diaries, privacy, and time. Description: Looking inward to understand one's own interests and then setting goals based on those interests is currently a human-only kind of intelligence. As machines, computers have no desires, interests, wants, or creative abilities. An AI processes numeric input using a set of algorithms and provides an output; it isn't aware of anything that it does, nor does it understand anything that it does.

Linguistic (often divided into oral, aural, and written). Simulation potential: Low for oral and aural; None for written. Human tools: Games, multimedia, books, voice recorders, and spoken words. Description: Working with words is an essential tool for communication because spoken and written information exchange is far faster than any other form. This form of intelligence includes understanding oral, aural, and written input, managing the input to develop an answer, and providing an understandable answer as output. In many cases, computers can barely parse input into keywords, can't actually understand the request at all, and output responses that may not be understandable at all. In humans, oral, aural, and written linguistic intelligence come from different areas of the brain, which means that even with humans, someone who has high written linguistic intelligence may not have similarly high oral linguistic intelligence. Computers don't currently separate aural and oral linguistic ability — one is simply input and the other output. A computer can't simulate written linguistic capability because this ability requires creativity.

Logical-mathematical. Simulation potential: High (potentially higher than humans). Human tools: Logic games, investigations, mysteries, and brain teasers. Description: Calculating a result, performing comparisons, exploring patterns, and considering relationships are all areas in which computers currently excel. When you see a computer beat a human on a game show, this is the only form of intelligence that you're actually seeing, out of seven kinds of intelligence. Yes, you might see small bits of other kinds of intelligence, but this is the focus. Basing an assessment of human-versus-computer intelligence on just one area isn't a good idea.

The reality vs. hype

There is a lot of hype about AI out there. If you watch movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application like those shown in the movies is the creative output of an overactive imagination. However, the importance of artificial intelligence to the future of technology cannot be overstated. It is already helping people in everyday technologies, and has great potential in everything from customer service to health care, to outer space exploration.

The five tribes and the master algorithm

You may have heard of something called the singularity, which is responsible for the potential claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would — as specified by the seven kinds of intelligence discussed earlier. Here are the five tribes of learning:

Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
Connectionists: This tribe's origin is in neuroscience, and the group relies on backpropagation to solve problems.
Evolutionaries: The evolutionaries tribe originates in evolutionary biology, relying on genetic programming to solve problems.
Bayesians: This tribe's origin is in statistics and relies on probabilistic inference to solve problems.
Analogizers: The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.

The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are currently working toward that goal. To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity. At this point, you should be amazed at just how much people don't know about how they think or why they think in a certain manner. Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.

Considering sources of hype

There are many sources of AI hype. Quite a bit of the hype comes from the media and is presented by people who have no idea of what AI is all about, except perhaps from a sci-fi novel they read once. So, it's not just movies or television that cause problems with AI hype; it's all sorts of other media sources as well. You can often find news reports presenting AI as being able to do something that it can't possibly do because the reporter doesn't understand the technology. Oddly enough, many news services now use AI to at least start articles for reporters.

Some products should be tested a lot more before being placed on the market. The "2020 in Review: 10 AI Failures" article at SyncedReview.com discusses ten products hyped by their developers but which fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole. However, something to consider with a few of these failures is that people may have interfered with the device using the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, the AI will fail to perform as expected because people will continue to fiddle with the software in an attempt to cause it to fail in a humorous manner.

Another cause of problems comes from asking the wrong person about AI. Not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery — it simply isn't a good idea. Yet, many stories appear with people like these as the information source. To discover the future direction of AI, it's best to ask a computer scientist or data scientist with a strong background in AI research.

Understanding user overestimation

Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph.
However, even with the user significantly overestimating the ability of the technology to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure. You need not be speeding down a highway at 90 mph to encounter user overestimation, though. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner. The article "How to Solve the Most Annoying Robot Vacuum Cleaner Problems" at RobotsInMyHome.com discusses troubleshooting techniques for various robotic vacuums for a good reason — the robots still need human intervention. The point is that most robots need human intervention at some point because they simply lack the knowledge to go it alone.

What is AI technology?

Artificial intelligence is a sub-discipline of computer science that works by combining large amounts of data with fast, iterative algorithms with the goal of enabling computers to solve complex problems and complete complex tasks. To see AI at work, you need to have some sort of computing system, an application that contains the required software, and a knowledge base.

For artificial intelligence, the computers could be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for some applications. Of course, if you're Amazon and you want to provide advice on a particular person's next buying decision, the smartphone won't do — you need a really big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.

The application can also vary in size, complexity, and even location. For example, if you're a business and want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task. On the other hand, if you're a customer and want to find products on Amazon to go with your current purchase items, the application doesn't even reside on your computer; you access it through a web-based application located on Amazon's servers.

The knowledge base varies in location and size as well. The more complex the data, the more you can obtain from it, but the more you need to manipulate it as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important. A network connection affords you access to a large knowledge base online but costs you in time because of the latency of network connections. However, localized databases, while fast, tend to lack details in many cases.
Article / Updated 05-25-2023
ChatGPT is a huge phenomenon and a major paradigm shift in the accelerating march of technological progression. Artificial intelligence (AI) research company OpenAI released a free preview of the chatbot in November 2022, and by January 2023, it had more than a million users.

So, what is ChatGPT? It's a large language model (LLM) that belongs to a category of AI called generative AI, which can generate new content rather than simply analyze existing data. Additionally, anyone can interact with ChatGPT (GPT stands for generative pre-trained transformer) in their own words. A natural, humanlike dialog ensues.

ChatGPT is often directly accessed online by users, but it is also being integrated with several existing applications, such as Microsoft Office apps (Word, Excel, and PowerPoint) and the Bing search engine. The number of app integrations seems to grow every day as existing software providers hurry to capitalize on ChatGPT's popularity.

What is ChatGPT used for?

The ways to use ChatGPT are as varied as its users. Most people lean towards more basic requests, such as creating a poem, an essay, or short marketing content. Students often turn to it to do their homework. Heads up, kids: ChatGPT stinks at answering riddles and sometimes word problems in math. Other times, it just makes things up.

In general, people tend to use ChatGPT to guide or explain something, as if the bot were a fancier version of a search engine. Nothing is wrong with that use, but ChatGPT can do so much more. How much more depends on how well you write the prompt. If you write a basic prompt, you'll get a bare-bones answer that you could have found using a search engine such as Google or Bing. That's the most common reason why people abandon ChatGPT after a few uses. They erroneously believe it has nothing new to offer. But this particular failing is the user's fault, not ChatGPT's.

What can ChatGPT do?

This list covers just some of the more unique uses of this technology. Users have asked ChatGPT to:

Conduct an interview with a long-dead legendary figure regarding their views of contemporary topics.
Recommend colors and color combinations for logos, fashion designs, and interior decorating designs.
Generate original works such as articles, e-books, and ad copy.
Predict the outcome of a business scenario.
Develop an investment strategy based on stock market history and current economic conditions.
Make a diagnosis based on a patient's real-world test results.
Write computer code to make a new computer game from scratch.
Leverage sales leads.
Inspire ideas for a variety of things from A/B testing to podcasts, webinars, and full-feature films.
Check computer code for errors.
Summarize legalese in software agreements, contracts, and other forms into simple layman's terms.
Calculate the terms of an agreement into total costs.
Teach a skill or get instructions for a complex task.
Find an error in their logic before implementing their decision in the real world.

Much ado has been made of ChatGPT's creativity. But that creativity is a reflection and result of the human doing the prompting. If you can think it, you can probably get ChatGPT to play along. Unfortunately, that's true for bad guys too. For example, they can prompt ChatGPT to find vulnerabilities in computer code or a computer system; steal your identity by writing a document in your style, tone, and word choices; or edit an audio clip or a video clip to fool your biometric security measures or make it say something you didn't actually say.
Only their imagination limits the possibilities for harm and chaos.

Unwrapping ChatGPT fears

Perhaps no other technology is as intriguing and disturbing as generative artificial intelligence. Emotions were raised to a fever pitch when 100 million monthly active users snatched up the free, research preview version of ChatGPT within two months after its launch. You can thank science fiction writers and your own imagination for both the tantalizing and terrifying triggers that ChatGPT is now activating in your head, making you wonder: Is ChatGPT safe?

There are definitely legitimate reasons for caution and concern. Lawsuits have been launched against generative AI programs for copyright and other intellectual property infringements. OpenAI and other AI companies and partners stand accused of illegally using copyrighted photos, text, and other intellectual property without permission or payment to train their AI models. These charges generally spring from copyrighted content getting caught up in the scraping of the internet to create massive training datasets. In general, legal defense teams are arguing the inevitability and unsustainability of such charges in the age of AI and requesting that charges be dropped.

The lawsuits regarding who owns the content generated by ChatGPT and its ilk lurk somewhere in the future. However, the U.S. Copyright Office has already ruled that AI-generated content, be it writing, images, or music, is not protected by copyright law. In the U.S., at least for now, the government will not protect anything generated by AI in terms of rights, licensing, or payment.

Meanwhile, realistic concerns exist over other types of potential liabilities. ChatGPT and ChatGPT alternatives are known to sometimes deliver incorrect information to users and other machines. Who is liable when things go wrong, particularly in a life-threatening scenario? Even if a business's bottom line is at stake and not someone's life, risks can run high and the outcome can be disastrous. Inevitably, someone will suffer and likely some person or organization will eventually be held accountable for it.

Then, there are the magnifications of earlier concerns, such as data privacy, biases, unfair treatment of individuals and groups through AI actions, identity theft, deep fakes, security issues, and reality apathy, which is when the public can no longer tell what is true and what isn't and thinks the effort to sort it all out is too difficult to pursue.

In short, all of this probably has you wondering: Is ChatGPT safe? The potential to misuse it accelerates and intensifies the need for the rules and standards currently being studied, pursued, and developed by organizations and governments seeking to establish guardrails aimed at ensuring responsible AI. The big question is whether they'll succeed in time, given ChatGPT's incredibly fast adoption rate worldwide.

Examples of groups working on guidelines, ethics, standards, and responsible AI frameworks include the following:

ACM US Technology Committee's Subcommittee on AI & Algorithms
World Economic Forum
UK's Centre for Data Ethics
Government agencies and efforts such as the US AI Bill of Rights and the European Council of the European Union's Artificial Intelligence Act
IEEE and its 7000 series of standards
Universities such as New York University's Stern School of Business
The private sector, wherein companies make their own responsible AI policies and foundations

How does ChatGPT work?

ChatGPT works differently than a search engine.
A search engine, such as Google or Bing, or an AI assistant, such as Siri, Alexa, or Google Assistant, works by searching the internet for matches to the keywords you enter in the search bar. Algorithms refine the results based on any number of factors, but your browser history, topic interests, purchase data, and location data usually figure into the equation. You're then presented with a list of search results ranked in order of relevance as determined by the search engine's algorithm. From there, the user is free to consider the sources of each option and click a selection to do a deeper dive for more details from that source.

By comparison, ChatGPT generates its own unified answer to your prompt. It doesn't offer citations or note its sources. You ask; it answers. Easy-peasy, right? No. That task is incredibly hard for AI to do, which is why generative AI is so impressive.

Generating an original result in response to a prompt is achieved by using either the GPT-3 (Generative Pre-trained Transformer 3) or GPT-4 model to analyze the prompt with context and predict the words that are likely to follow. Both GPT models are extremely powerful large language models capable of processing billions of words per second. In short, transformers enable ChatGPT to generate coherent, humanlike text as a response to a prompt. ChatGPT creates a response by considering context and assigning weight (values) to words that are likely to follow the words in the prompt to predict which words would be an appropriate response.

Some ChatGPT basics here: User input is called a prompt rather than a command or a query, although it can take either form. You are, in effect, prompting AI to predict and complete a pattern that you initiated by entering the prompt. If you'd like a comprehensive ChatGPT guide, including more detail on how it works and how to use it, check out my book ChatGPT For Dummies.

Peeking at the ChatGPT architecture

As its name implies, ChatGPT is a chatbot running on a GPT model. GPT-3, GPT-3.5, and GPT-4 are large language models (LLMs) developed by OpenAI. When GPT-3 was introduced, it was the largest LLM at 175 billion parameters. An upgraded version called GPT-3.5 turbo is a highly optimized and more stable version of GPT-3 that's ten times cheaper for developers to use. ChatGPT is now also available on GPT-4, which is a multimodal model, meaning it accepts both image and text inputs although its outputs are text only. It's now the largest LLM to date, although GPT-4's exact number of parameters has yet to be disclosed.

Parameters are numerical values that weigh and define connections between nodes and layers in the neural network architecture. The more parameters a model has, the more complex its internal representations and weighting. In general, more parameters lead to better performance on specific tasks.

ChatGPT for beginners

Here, you'll learn the basics of how to use ChatGPT and why it relies on your skills to optimize its performance. But the real treasure here is the tips and insights on how to write prompts so that ChatGPT can perform its true magic. You can learn even more about writing prompts in my book ChatGPT For Dummies.

Writing effective ChatGPT prompts

ChatGPT appears deceptively simplistic. The user interface is elegantly minimalistic and intuitive, as shown in the figure below. The first part of the page offers information to users regarding ChatGPT's capabilities and limitations plus a few examples of prompts.
The prompt bar, which resembles a search bar, runs across the bottom of the page. Just enter a question or a command to prompt ChatGPT to produce results immediately. If you enter a basic prompt, you'll get a bare-bones, encyclopedia-like answer, as shown in the figure below. Do that enough times and you'll convince yourself that this is just a toy and you can get better results from an internet search engine. This is a typical novice's mistake and a primary reason why beginners give up before they fully grasp what ChatGPT is and can do.

Understand that your previous experience with keywords and search engines does not apply here. You must think of and use ChatGPT in a different way. Think hard about how you're going to word your prompt. You have many options to consider. You can assign ChatGPT a role or a persona, or several personas and roles if you decide it should respond as a team, as illustrated in the figure below. You can assign yourself a new role or persona as well. Or tell it to address any type of audience — such as a high school graduating class, a surgical team, or attendees at a concert or a technology conference. You can set the stage or situation in great or minimal detail. You can ask a question, give it a command, or require specific behaviors.

A prompt, as you can see now, is much more than a question or a command. Your success with ChatGPT hinges on your ability to master crafting a prompt in such a way as to trigger the precise response you seek. Ask yourself these questions as you are writing or evaluating your prompt:

Who do you want ChatGPT to be?
Where, when, and what is the situation or circumstances you want ChatGPT's response framed within?
Is the question you're entering in the prompt the real question you want it to answer, or were you trying to ask something else?
Is the command you're prompting complete enough for ChatGPT to draw from sufficient context to give you a fuller, more complete, and richly nuanced response?

And the ultimate question for you to consider: Is your prompt specific and detailed, or vague and meandering? Whichever is the case, that's what ChatGPT will mirror in its response. ChatGPT's responses are only as good as your prompt. That's because the prompt starts a pattern that ChatGPT must then complete. Be intentional and concise about how you present that pattern starter — the prompt.

Starting a chat

To start a chat, just type a question or command in the prompt bar, shown at the bottom of the figure below. ChatGPT responds instantly. You can continue the chat by using the prompt bar again. Usually, you do this to gain further insights or to get ChatGPT to further refine its response.

Following are some things you can do in a prompt that may not be readily evident:

Add data in the prompt along with your question or command regarding what to do with this data. Adding data directly in the prompt enables you to add more current info as well as make ChatGPT responses more customizable and on point. You can use the Browsing plug-in to connect ChatGPT to the live internet, which will give it access to current information. However, you may want to add data to the prompt anyway to better focus its attention on the problem or task at hand. Note that there are limits on prompt and response sizes, so make your prompt as concise as possible.

Direct the style, tone, vocabulary level, and other factors to shape ChatGPT's response.

Command ChatGPT to assume a specific persona, job role, or authority level in its response.
If you’re using ChatGPT-4, you'll soon be able to use images in the prompt too. ChatGPT can extract information from the image to use in its analysis. When you’ve finished chatting on a particular topic or task, it’s wise to start a new chat (by clicking or tapping the New Chat button in the upper left). Starting a new dialogue prevents confusing ChatGPT, which would otherwise treat subsequent prompts as part of a single conversational thread. On the other hand, starting too many new chats on the same topic or related topics can lead the AI to use repetitious phrasing and outputs, whether or not they apply to the new chat’s prompt. To recap: Don't confuse ChatGPT by chatting in one long continuous thread with a lot of topic changes or by opening too many new chats on the same topic. Otherwise, ChatGPT will probably say something offensive or make up random and wrong answers. When writing prompts, think of the topic or task in narrow terms. For example, don't have a long chat on car racing, repairs, and maintenance. To keep ChatGPT more intently focused, narrow your prompt to a single topic, such as determining when the vehicle will be at top trade-in value so you can best offset a new car price. Your responses will be of much higher quality. ChatGPT may call you offensive names and make up stuff if the chat goes on too long. Shorter conversations tend to minimize these odd occurrences, or so most industry watchers think. For example, after ChatGPT responses to Bing users became unhinged and argumentative, Microsoft limited conversations with it to 5 prompts in a row, for a total of 50 conversations a day per user. But a few days later, it increased the limit to 6 prompts per conversation and a total of 60 conversations per day per user. The limits will probably increase when AI researchers can figure out how to tame the machine to an acceptable — or at least a less offensive — level.
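If you prefer to reach the same GPT models from code rather than through the web interface, the prompting advice above still applies. The following is a minimal sketch using OpenAI's official Python library; it assumes you have installed the openai package and set the OPENAI_API_KEY environment variable, and the persona text and model name shown are illustrative choices only, not requirements.

```python
# Minimal sketch: a persona-driven prompt sent to an OpenAI chat model.
# Assumes: `pip install openai` and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        # The "system" message assigns the persona and audience, as discussed above.
        {"role": "system",
         "content": "You are a patient driving instructor explaining things to a new driver."},
        # The "user" message is the prompt itself: specific, with context, not just keywords.
        {"role": "user",
         "content": "Explain in three short steps how to judge when a used car "
                    "is at its best trade-in value."},
    ],
)

print(response.choices[0].message.content)  # the model's reply
```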
Article / Updated 05-25-2023
You can hardly avoid hearing about artificial intelligence (AI) today. You see AI in the movies, in the news, in books, and online. It's been in the news a lot lately, with all of the frenzy surrounding ChatGPT (see more about that below). AI is part of robots, self-driving (SD) cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in so many ways.

Some people have come to trust AIs so much that they fall asleep while their self-driving cars take them to their destination — illegally, of course. Many pundits are burying you in information (and disinformation) about AI, too. Some see AI as cute and fuzzy; others see it as a potential mass murderer of the human race. The problem with being so loaded down with information in so many ways is that you struggle to separate what's real from what is simply the product of an overactive imagination. Just how far can you trust your AI, anyway? Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspersons. This article helps you understand some of the history and evolution of artificial intelligence.

The ChatGPT controversy

The latest media storm around AI came in early January 2023, when OpenAI's free preview of its ChatGPT chatbot (released in November 2022) reached 100 million users. OpenAI then released a subscription service called ChatGPT Plus, and an upgraded version of its underlying model, GPT-4, in March 2023.

A chatbot is a computer program designed to simulate human conversation. ChatGPT (GPT stands for generative pre-trained transformer) is a particularly powerful chatbot able to produce natural, human-like writing through its use of 570GB of data from the Internet. Representing one of the latest achievements in the development of artificial intelligence, ChatGPT can answer questions and write articles, poems, emails, and research papers; it can also write programming code, translate languages, and perform other tasks related to language.

ChatGPT's possible real-world uses include:

Customer service
Ecommerce
Research
Education and training
Computer code writing and debugging
Scheduling and booking
Entertainment
Health care information and assistance

However, while many people are excited about the possibilities for ChatGPT and other similar technologies being developed, there are plenty of concerns about how it can be used in bad ways, too — for example, to cheat in school by having it write essays and research papers. It's difficult to discern whether a piece of writing has been generated by ChatGPT or a human. In addition, the technology is far from perfect; the text it produces is often inaccurate and biased, and therefore, can spread false and even harmful information.

AI can, and is, serving us well in many ways, but it's important to understand its limitations. AI will never be able to engage in certain essential activities and tasks, and won't be able to do other ones until far into the future. For example, while it can produce a piece of music with the data you've entered and in the style of a particular musician, say Beethoven, it cannot actually create anything. AI doesn't have an imagination or original ideas.

The history of AI, starting with Dartmouth

Looking at artificial intelligence history begins with the earliest computers, which were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition.
Logical reasoning later added the capability to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another value). However, for artificial intelligence evolution, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result.

During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus in Hanover, New Hampshire, to do something more. They predicted that machines that could reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now have we realized machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must master at least six more intelligences before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College workshop and other endeavors of the time relates to hardware — the processing capability to perform calculations quickly enough to create a simulation. However, that's not really the whole problem. Yes, hardware does figure into the picture, but you can't simulate processes that you don't understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations.

The biggest problem with these early attempts (and still a considerable problem today) is that we don't understand how humans reason well enough to create a simulation of any sort — assuming that a direct simulation is even possible. Consider the issues surrounding the accomplishment of manned flight by the Wright brothers. They succeeded not by simulating birds, but rather by understanding the processes that birds use, thereby creating the field of aerodynamics. Consequently, when someone says that the next big AI innovation is right around the corner and yet no concrete dissertation exists of the processes involved, the innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including:

Rule based: These use "if … then" statements to base decisions on rules of thumb.
Frame based: These use databases organized into related hierarchies of generic information called frames.
Logic based: These rely on set theory to establish relationships.

The advent of expert systems is important in the history of artificial intelligence because they represent the first truly useful and successful implementations of AI. You still see expert systems in use today, although they aren't called that any longer. For example, the spelling and grammar checkers in your application are kinds of expert systems. The grammar checker, especially, is strongly rule based, as sketched in the example below. It pays to look around to see other places where expert systems may still see practical use in everyday applications.

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages, such as List Processing (LISP) or Prolog. Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers. However, the products they used generally provided extremely limited functionality in using small knowledge bases.
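To show what "rule based" means in practice, here is a tiny, purely illustrative Python sketch of a grammar-checker-style rule engine. The rules and messages are invented for this example; real expert systems of the era encoded hundreds or thousands of such rules, often in languages like LISP or Prolog.

```python
# Tiny illustrative rule-based checker: each rule is an "if ... then" pair of
# a condition (a function over the text) and the advice to give when it fires.

RULES = [
    (lambda text: "  " in text,
     "Remove the double space."),
    (lambda text: text and not text[0].isupper(),
     "Start the sentence with a capital letter."),
    (lambda text: " very very " in text,
     "Avoid repeating 'very'."),
]

def check(text):
    """Apply every rule of thumb to the text and collect the advice that fires."""
    return [advice for condition, advice in RULES if condition(text)]

print(check("this sentence has  a double space and is very very wordy"))
# -> all three rules fire for this sample sentence
```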
In the 1990s, the phrase expert system began to disappear. The idea that expert systems were a failure did appear, but the reality is that expert systems were simply so successful that they became ingrained in the applications that they were designed to support. Using the example of a word processor, at one time you needed to buy a separate grammar checking application, such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments. A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years — all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave. Machine learning is like educating a baby by showing it how to behave through example. This technology has pitfalls because the computer can learn how to do things incorrectly through careless teaching. A brief sketch of this learn-from-examples idea appears below.

At this time, the most successful solution is deep learning, which is a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses.

People are saying that the AI winter is over because of deep learning, and that's true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone the rhetoric down.
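As a purely illustrative example of deriving behavior from examples rather than from hand-written rules, here is a minimal perceptron (the same kind of model Frank Rosenblatt built in 1958, mentioned in the timeline below) trained in plain Python. The dataset and numbers are made up for the example and only show the mechanics of learning from data.

```python
# Minimal perceptron: the program is not told the rule; it adjusts weights
# from labeled examples until its predictions match them.

# Made-up training data: inputs (x1, x2) and the desired label (here, logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for _ in range(20):  # a few passes over the examples are enough here
    for x, target in examples:
        error = target - predict(x)      # how wrong was the current guess?
        bias += learning_rate * error    # nudge the parameters toward the answer
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]
```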
A brief artificial intelligence timeline

1942: First electronic digital computer built by John Vincent Atanasoff and Clifford Berry at Iowa State University

1950: Alan Turing publishes the paper "Computing Machinery and Intelligence"; his proposal later became "The Turing Test," which measured machine AI

1958: Perceptron computer, built by Cornell University Professor Frank Rosenblatt, regarded as the first artificial neural network

1966: ELIZA, the first "chatterbot" (later shortened to chatbot), created by Joseph Weizenbaum, a German-American computer scientist, uses natural language processing to converse with humans

1971: First commercial microprocessor by Intel

1988: Jabberwacky, a chatbot created by British computer scientist Rollo Carpenter, provides interesting and entertaining conversation to humans

1990s: Early days of the Internet

1992: TD-Gammon, developed by Gerald Tesauro of IBM; an artificial neural network trained by temporal-difference learning to play high-level backgammon

1997: IBM's Deep Blue chess computer defeats Russian chess grandmaster Garry Kasparov; Dragon Systems releases speech recognition software for Windows

2012: AlexNet, a convolutional neural network architecture, primarily designed by Alex Krizhevsky, a Ukrainian-born, Canadian computer scientist

2020: OpenAI beta tests GPT-3, which uses deep learning to create code, poetry, and other language and writing tasks; it's the first such model that can create content almost indistinguishable from human-created content

2022: In November, OpenAI releases a free preview of its ChatGPT chatbot to the public; by early 2023 it reaches 100 million users, and in March 2023 OpenAI releases the upgraded GPT-4 model

AI in our everyday lives

You're using AI in some way today; in fact, you probably rely on AI in many different ways — you just don't notice it because it's so mundane. A smart thermostat for your home may not sound very exciting, but it's an incredibly practical use for a technology that has some people running for the hills in terror. As the development of AI has continued, there are now really cool uses for AI. For example, you may not know there is a medical monitoring device that can actually predict when you might have a heart problem, but such a device exists. AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used today in all sorts of space applications, and the evolution of artificial intelligence figures prominently in all the space adventures humans will have tomorrow. The potential uses for AI number in the millions — all safely out of sight even when they're quite dramatic in nature. Here are some of the ways in which you might see AI used:

Fraud detection: You get a call from your credit card company asking whether you made a particular purchase. The credit card company isn't being nosy; it's simply alerting you to the fact that someone else could be making a purchase using your card. The AI embedded within the credit card company's code detected an unfamiliar spending pattern and alerted someone to it. (A toy sketch of this kind of check appears after this list.)

Resource scheduling: Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine where to put a patient based on the patient's needs, availability of skilled experts, and the amount of time the doctor expects the patient to be in the hospital.

Complex analysis: Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms could indicate more than one problem.
A doctor or other expert might need help making a diagnosis in a timely manner to save a patient's life.

Automation: Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation today is that an unexpected event, such as an object in the wrong place, can actually cause the automation to stop. Adding AI to the automation can allow the automation to handle unexpected events and continue as if nothing happened.

Customer service: The customer service line you call today may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. With good voice inflection (provided by AI as well), you may not even be able to tell that you're talking with a computer.

Safety systems: Many of the safety systems found in machines of various sorts today rely on AI to take over the vehicle in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs that a vehicle can provide, such as the direction of a skid. Computerized ABS is actually relatively old at 40 years from a technology perspective.

Machine efficiency: AI can help control a machine in such a manner as to obtain maximum efficiency. The AI controls the use of resources so that the system doesn't overshoot speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
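The following is a toy illustration of the unfamiliar-spending-pattern idea mentioned under fraud detection above. It simply flags purchases that sit far from a customer's typical amounts; the amounts and the threshold are invented for the example, and real fraud-detection systems use far richer models and features.

```python
# Toy "unfamiliar spending pattern" check: flag purchases far from the usual range.
from statistics import mean, stdev

past_purchases = [22.50, 18.00, 31.75, 25.20, 19.99, 27.40]  # invented history
new_purchase = 480.00                                         # invented new charge

avg = mean(past_purchases)
spread = stdev(past_purchases)

# Flag anything more than three standard deviations above the customer's average.
if new_purchase > avg + 3 * spread:
    print(f"Alert: ${new_purchase:.2f} looks unusual (typical is about ${avg:.2f}).")
else:
    print("Purchase looks consistent with past spending.")
```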
Article / Updated 05-09-2023
Artificial intelligence (AI) is great at automation, which can make it ideal for tasks in health care. It never deviates from the procedure, never gets tired, and never makes mistakes as long as the initial procedure is correct. Unlike humans, AI never needs a vacation or a break or even an eight-hour day (not that many in the medical profession have that, either). Consequently, the same AI that interacts with a patient for breakfast will do so for lunch and dinner as well. So, at the outset, AI has some significant advantages if viewed solely on the basis of consistency, accuracy, and longevity.

Working with medical records

The major way in which an AI helps in medicine is medical records. In the past, everyone used paper records to store patient data. Each patient might also have had a blackboard that medical personnel used to record information daily during a hospital stay. Various charts contained patient data, and the doctor might also have notes. Having all these sources of information in so many different places made it hard to keep track of the patient in any significant way. Using an AI, along with a computer database, helps make information accessible, consistent, and reliable. Products such as Google DeepMind Health enable personnel to mine the patient information to see patterns in data that aren't obvious.

Doctors don't necessarily interact with records in the same way that everyone else does. The use of products such as IBM's WatsonPaths helps doctors interact with patient data of all sorts in new ways to make better diagnostic decisions about patient health. You can see a video on how this product works.

Medicine is about a team approach, with many people of varying specialties working together. However, anyone who watches the process for a while soon realizes that these people don't communicate among themselves sufficiently because they're all quite busy treating patients. Products such as CloudMedX take all the input from all the parties involved and perform risk analysis on it. The result is that the software can help locate potentially problematic areas that could reduce the likelihood of a good patient outcome. In other words, this product does some of the talking that the various stakeholders would likely do if they weren't submerged in patient care.

Predicting the future

Some truly amazing predictive software based on medical records includes CareSkore, which actually uses algorithms to determine the likelihood of a patient's requiring readmission into the hospital after a stay. By performing this task, hospital staff can review reasons for potential readmission and address them before the patient leaves the hospital, making readmission less likely. (A simplified sketch of what such a readmission-risk model might look like appears at the end of this section.) Along with this strategy, Zephyr Health helps doctors evaluate various therapies and choose those most likely to result in a positive outcome — again reducing the risk that a patient will require readmission to the hospital. This video tells you more about Zephyr Health.

In some respects, your genetics form a map of what will happen to you in the future. Consequently, knowing about your genetics can increase your understanding of your strengths and weaknesses, helping you to live a better life. Deep Genomics is discovering how mutations in your genetics affect you as a person. Mutations need not always produce a negative result; some mutations actually make people better, so knowing about mutations can be a positive experience, too. Check out this video for more details.
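As mentioned above, here is a simplified, purely illustrative sketch of a readmission-risk model of the general kind CareSkore is described as using. The features, the handful of made-up records, and the use of scikit-learn's logistic regression are assumptions made for this example, not details of any real product.

```python
# Illustrative readmission-risk model: fit on made-up past cases, then score a new patient.
from sklearn.linear_model import LogisticRegression

# Made-up historical records: [age, length_of_stay_days, prior_admissions]
X = [
    [34, 2, 0], [67, 8, 3], [51, 4, 1], [78, 10, 4],
    [45, 3, 0], [62, 7, 2], [29, 1, 0], [71, 9, 3],
]
# 1 = was readmitted within 30 days, 0 = was not (invented labels)
y = [0, 1, 0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

new_patient = [[66, 6, 2]]
risk = model.predict_proba(new_patient)[0][1]  # probability of the "readmitted" class
print(f"Estimated readmission risk: {risk:.0%}")
```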
Making procedures safer
Doctors need lots of data to make good decisions. However, with data spread out all over the place, doctors who lack the ability to analyze that disparate data quickly often make imperfect decisions. To make procedures safer, a doctor needs not only access to the data but also some means of organizing and analyzing it in a manner reflecting the doctor's specialty. One such product is Oncora Medical, which collects and organizes medical records for radiation oncologists. As a result, these doctors can deliver the right amount of radiation to just the right locations to obtain a better result with a lower potential for unanticipated side effects.
Doctors also have trouble obtaining necessary information because the machines they use tend to be expensive and huge. An innovator named Jonathan Rothberg has decided to change all that through the Butterfly Network. Imagine an iPhone-sized device that can perform both an MRI and an ultrasound. The picture on the website is nothing short of amazing.
Creating better medications
Everyone complains about the price of medications today. Yes, medications can do amazing things for people, but they cost so much that some people end up mortgaging homes to obtain them. Part of the problem is that testing takes a lot of time. Performing a tissue analysis to observe the effects of a new drug can take up to a year. Fortunately, products such as 3Scan can reduce the time required for the same tissue analysis to as little as one day.
Of course, better still would be the drug company having a better idea of which drugs are likely to work and which aren't before investing any money in research. Atomwise uses a huge database of molecular structures to perform analyses of which molecules will answer a particular need. In 2015, researchers used Atomwise to look for medications that would make Ebola less likely to infect others. The analysis that would have taken human researchers months or possibly years to perform took Atomwise just one day to complete. Imagine this scenario in the midst of a potentially global epidemic. If Atomwise can perform, in one day, the analysis needed to make a virus or bacterium less likely to spread, a potential epidemic could be curtailed before it becomes widespread.
Drug companies also produce a huge number of drugs. The reason for this impressive productivity, besides profitability, is that every person is just a little different. A drug that performs well and produces no side effects in one person might not perform well at all, and could even cause harm, in a different person. Turbine enables drug companies to perform drug simulations so that they can locate the drugs most likely to work with a particular person's body. Turbine's current emphasis is on cancer treatments, but it's easy to see how this same approach could work in many other areas.
Medications can take many forms. Some people think they come only in pill or shot form, yet your body produces a wide range of medications in the form of microbiomes. Your body actually contains ten times as many microbes as it does human cells, and many of these microbes are essential for life; you'd quickly die without them. Whole Biome is using a variety of methods to make these microbiomes work better for you so that you don't necessarily need a pill or a shot to cure something. Check out this video for additional information.
Some companies have yet to realize their potential, but they're likely to do so eventually.
One such company is Recursion Pharmaceuticals, which employs automation to explore ways to use known drugs, bioactive drugs, and pharmaceuticals that didn't previously make the grade to solve new problems. The company has had some success in helping to treat rare genetic diseases, and it has a goal of curing 100 diseases in the next ten years (obviously, an ambitious goal).
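To give a feel for what a molecular screen does at its simplest, here is a toy sketch that ranks candidate molecules by how closely their structural fingerprints match a known active compound. The fingerprints, names, and the simple Jaccard score are invented; real screening systems, Atomwise's included, use far more sophisticated learned models. This only illustrates the ranking idea.

```python
# Toy illustration of similarity-based screening: rank candidate molecules by how
# closely their (invented) structural fingerprints match a known active compound.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of structural features (a Tanimoto-style score)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

known_active = {"aromatic_ring", "hydroxyl", "amide", "halogen"}

candidates = {
    "compound_A": {"aromatic_ring", "hydroxyl", "amide"},
    "compound_B": {"aliphatic_chain", "ester"},
    "compound_C": {"aromatic_ring", "halogen", "amide", "nitro"},
}

ranked = sorted(candidates.items(),
                key=lambda item: jaccard(item[1], known_active),
                reverse=True)

for name, features in ranked:
    print(f"{name}: similarity {jaccard(features, known_active):.2f}")
```

The payoff described in the article comes from running this kind of ranking over millions of structures at machine speed instead of testing each one by hand.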
Article / Updated 04-14-2023
Many of the current techniques for extending the healthy range of human life (the segment of life that contains no significant sickness), rather than just increasing the number of years of life, depend on making humans more capable of improving their own health in various ways. You can find any number of articles that tell you 30, 40, or even 50 ways to extend this healthy range, but often it comes down to a combination of eating right, exercising enough and in the right way, and sleeping well. Of course, figuring out just which food, exercise, and sleep technique works best for you is nearly impossible. The following sections discuss ways in which an AI-enabled device might make the difference between having 60 good years and 80 or more good years. (In fact, it's no longer hard to find articles that discuss human life spans of 1,000 or more years in the future because of technological changes.)
Using games for therapy
A gaming console can make a powerful and fun physical therapy tool. Both the Nintendo Wii and the Xbox 360 see use in many different physical therapy venues. The goal of these games is to get people moving in certain ways. Just as it does for any other player, the game automatically rewards proper movements, so the patient receives therapy in a fun way. Because the therapy becomes fun, the patient is more likely to actually do it and get better faster. Of course, movement alone, even when working with the proper game, doesn't ensure success. In fact, someone could develop a new injury when playing these games. The Jintronix add-on for the Xbox Kinect hardware standardizes the use of this game console for therapy, increasing the probability of a great outcome.
Considering the use of exoskeletons
One of the most complex undertakings for an AI is to provide support for an entire human body. That's what happens when someone wears an exoskeleton (essentially a wearable robot). An AI senses movements (or the need to move) and provides a powered response to that need. The military has excelled in the use of exoskeletons. Imagine being able to run faster and carry significantly heavier loads as a result of wearing an exoskeleton. This video gives you just a glimpse of what's possible. Of course, the military continues to experiment, which feeds into civilian uses. The exoskeleton you eventually see (and you're almost guaranteed to see one at some point) will likely have its origins in the military.
Industry has also gotten in on exoskeleton technology. Factory workers currently face a host of ailments because of repetitive stress, and factory work is incredibly tiring. Wearing an exoskeleton not only reduces fatigue but also reduces errors and makes workers more efficient. People who maintain their energy levels throughout the day can do more with far less chance of being injured, damaging products, or hurting someone else. The exoskeletons in use in industry today reflect their military beginnings. Look for the capabilities and appearance of these devices to change in the future to look more like the exoskeletons shown in movies such as Aliens. The real-world examples of this technology are a little less impressive but will continue to gain in functionality.
As interesting as it is to use exoskeletons to make able-bodied people even more capable, what they can enable people to do that they can't do now is downright amazing.
For example, scientists at the National Institutes of Health Clinical Center in Bethesda, Maryland, have helped children with cerebral palsy learn how to walk more effectively by using an exoskeleton. Not all exoskeletons used in medical applications provide lifetime use, however. For example, an exoskeleton can help a stroke victim walk normally again. As the person becomes more able, the exoskeleton provides less support until the wearer no longer needs it. Some users have even coupled their exoskeleton to other products, such as Amazon's Alexa. The overall purpose of wearing an exoskeleton isn't to make you into Iron Man. Rather, it's to cut down on repetitive stress injuries and to help humans excel at tasks that currently prove too tiring or just beyond the limits of their bodies. From a medical perspective, using an exoskeleton is a win because it keeps people mobile longer, and mobility is essential to good health.
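The sense-then-assist loop mentioned above can be illustrated with a very small control sketch. The sensor readings, gains, and proportional-assist rule here are invented for illustration; a real exoskeleton controller involves far more sophisticated intent estimation and safety logic.

```python
# Toy sketch of a proportional-assist loop for one exoskeleton joint.
# All numbers and the simple control rule are illustrative assumptions.

ASSIST_GAIN = 0.6    # fraction of the estimated effort the motor supplies
TORQUE_LIMIT = 20.0  # never command more than this many newton-meters

def estimate_intended_torque(joint_angle_deg: float, muscle_signal: float) -> float:
    """Crude stand-in for estimating the wearer's intent from sensor data."""
    return muscle_signal * (1.0 + abs(joint_angle_deg) / 90.0)

def assist_torque(joint_angle_deg: float, muscle_signal: float) -> float:
    """Command a motor torque proportional to the wearer's estimated effort."""
    intended = estimate_intended_torque(joint_angle_deg, muscle_signal)
    return max(-TORQUE_LIMIT, min(TORQUE_LIMIT, ASSIST_GAIN * intended))

# Simulated sensor samples: (knee angle in degrees, normalized muscle activity)
samples = [(5.0, 2.0), (30.0, 8.0), (60.0, 15.0), (45.0, 30.0)]
for angle, effort in samples:
    print(f"angle={angle:>5.1f} deg  commanded assist={assist_torque(angle, effort):.1f} N·m")
```

Note how the last sample hits the torque limit: clamping the assist is the kind of safety constraint that matters far more in a real device than the control rule itself.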
Cheat Sheet / Updated 01-19-2023
Artificial intelligence (AI) is a technology that has grabbed a lot of attention in movies, books, products, and a slew of other places. Often, vendors equate AI with smartness: You buy a smart device to obtain a device with an AI, even though smart devices sometimes are smart only in that they offer connectivity, not AI. Many products are hyped as containing AI, and sometimes that AI doesn't even work. Some people, of course, want to grab headlines by telling mistruths or offering misconceptions about AI. This Cheat Sheet offers you some interesting insights into why the mundane is actually where you see AI most often. Yes, AI is being put to some amazing uses as well, but vendors often misrepresent these applications to the point that no one really knows how much is real and how much is the result of someone's vivid imagination.
Article / Updated 07-20-2022
A medical professional isn't always able to tell what is happening with a patient's health simply by listening to their heart, checking vitals, or performing a blood test. The body doesn't always send out useful signals that let a medical professional learn anything at all. In addition, some body functions, such as blood sugar, change over time, so constant monitoring becomes necessary. Going to the doctor's office every time you need one of these vitals checked would prove time consuming and possibly not all that useful. Older methods of determining some body characteristics required manual, external intervention on the part of the patient — an error-prone process in the best of times. For these reasons, and many more, an AI can help monitor a patient's statistics in a manner that is efficient, less error prone, and more consistent, as described in the following sections.
Wearing helpful monitors
All sorts of monitors fall into the helpful category. In fact, many of these monitors have nothing to do with the medical profession, yet produce positive results for your health. Consider the Moov monitor, which tracks both heart rate and 3-D movement. The AI for this device uses these statistics to provide advice on how to create a better workout. You actually get advice on, for example, how your feet are hitting the pavement during running and whether you need to lengthen your stride. The point of devices like these is to ensure that you get the sort of workout that improves health without risking injury.
If a watch-type monitoring device is too large for you, Motiv produces a ring that monitors about the same things that Moov does, but in a smaller package. This ring even tracks how you sleep to help you get a good night's rest. Rings do tend to come with an assortment of pros and cons, and this article tells you more about those issues. Interestingly enough, many of the pictures on the site don't look anything like a fitness monitor, so you can have fashion and health all in one package. Of course, if your only goal is to monitor your heart rate, you can get devices such as the Apple Watch that also provide some level of AI-based analysis. All these devices interact with your smartphone, so you can link the data to still other applications or send it to your doctor as needed.
Relying on critical wearable monitors
A problem with some human conditions is that they change constantly, so checking intermittently doesn't really get the job done. Glucose, which people with diabetes must track, is one statistic that falls into this category. The more you monitor the rise and fall of glucose each day, the easier it becomes to adjust medications and lifestyle to keep diabetes under control. Devices such as the K'Watch provide such constant monitoring, along with an app that a person can use to obtain helpful information on managing their diabetes. Of course, people have used intermittent monitoring for years; this device simply provides that extra level of monitoring that can make the difference between diabetes being a life-altering issue or a minor nuisance. Constantly monitoring someone's blood sugar or another chronic-disease statistic might seem like overkill, but it has practical uses as well. Products such as Sentrian let people use the remote data to predict that a patient will become ill before the event actually occurs.
By making changes in patient medications and behavior before an event occurs, Sentrian reduces the number of avoidable hospitalizations — making the patient's life a lot better and reducing medical costs. Some devices are truly critical, such as the Wearable Defibrillator Vest (WDV), which senses your heart condition continuously and provides a shock should your heart stop working properly. This short-term solution can help a doctor decide whether you need the implanted version of the same device. There are pros and cons to wearing one, but then again, it's hard to place a value on having a shock available when needed to save a life. The biggest value of this device is the monitoring it provides. Some people don't actually need an implantable device, so monitoring is essential to prevent unnecessary surgery.
Using movable monitors
The number and variety of AI-enabled health monitors on the market today is staggering. For example, you can actually buy an AI-enabled toothbrush that monitors your brushing habits and provides advice on better brushing technique. When you think about it, creating a device like this presents a number of hurdles, not the least of which is keeping the monitoring circuitry happy inside the human mouth. Of course, some people may feel that the act of brushing their teeth doesn't have much to do with good health, but it does.
Creating movable monitors generally means making them both smaller and less intrusive. Simplicity is also a requirement for devices designed for use by people with little or no medical knowledge. One device in this category is a wearable electrocardiogram (ECG). Having an ECG performed in a doctor's office means connecting wires from the patient to a semiportable device that does the required monitoring. The QardioCore provides an ECG without wires, and someone with limited medical knowledge can easily use it. As with many devices, this one relies on your smartphone to provide the needed analysis and to make connections to outside sources as required.
Current medical devices work just fine, but they aren't portable. The point of creating AI-enabled apps and specialized devices is to obtain much-needed data when a doctor actually needs it, rather than having to wait for that data. Even if you don't buy a toothbrush to monitor your technique or an ECG to monitor your heart, the fact that these devices are small, capable, and easy to use means that you may still benefit from them at some point.
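Here is a tiny, hypothetical sketch of the kind of continuous-monitoring logic such a device might apply: watch a stream of glucose readings and flag a downward trend before it crosses a dangerous threshold. The thresholds and the simple slope rule are invented for illustration and are not how any particular product, K'Watch or Sentrian included, actually works.

```python
# Toy continuous-glucose-monitoring alert: flag readings that are already low,
# or that are trending down fast enough to become low soon.
# Thresholds and the linear-trend rule are illustrative assumptions only.

LOW_MG_DL = 70        # alert immediately below this level
LOOKAHEAD_MIN = 30    # warn if the trend would cross LOW_MG_DL within 30 minutes

def check(readings):
    """readings: list of (minute, glucose mg/dL) samples, oldest first."""
    minute, value = readings[-1]
    if value < LOW_MG_DL:
        return "ALERT: glucose is low now"
    if len(readings) >= 2:
        prev_minute, prev_value = readings[-2]
        slope = (value - prev_value) / (minute - prev_minute)  # mg/dL per minute
        if slope < 0 and value + slope * LOOKAHEAD_MIN < LOW_MG_DL:
            return "WARNING: glucose trending toward a low"
    return "ok"

stream = [(0, 110), (5, 104), (10, 96), (15, 86)]
for i in range(1, len(stream) + 1):
    print(check(stream[:i]))
```

The difference between this and intermittent testing is the warning line: with a reading every few minutes, the trend can be caught while there is still time to act, which is exactly the value the article attributes to constant monitoring.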
Cheat Sheet / Updated 03-14-2021
Here’s a quick reference to the major bullets and tables from Part 1 of Enterprise AI For Dummies, which is about what artificial intelligence can do for you, right now, in your business. It’s about well-established, tried-and-true technology and processes that are currently being used in businesses and organizations all over the world to help humans become more productive, more accurate, more efficient, and more understanding.
Article / Updated 09-01-2020
Business organizations look to professional services firms to offload existing processes such as payroll, claims processing, and other clerical tasks. Consequently, rather than push the innovation curve as early adopters of emerging technology, professional services firms have traditionally followed well-established procedures and used conventional tools. However, much of the work they take on involves processes that are well suited for optimization through AI, and many corporations are investigating the benefits of AI for streamlining workflows and cutting operational expenses. A KPMG report predicts that enterprises will increase their spending on intelligent automation from $12.4 billion in 2019 to $232 billion in 2025, almost 19 times as much in just six years. A McKinsey report estimates that 20 percent of the cyclical tasks of a typical finance unit can be fully automated and almost 50 percent can be mostly automated.
Exploring the AI Pyramid
From all appearances, the industries typically served by professional services firms are in the early stages of a tectonic shift that will reverberate throughout the professional services industry. The initial shock will involve adopting new ways of organizing and delivering professional services, but the aftershocks could very well challenge the essence of what professional services firms deliver. The following figure shows the hierarchy of business complexity. AI projects usually start at the base of the pyramid, where the goal is to save costs by optimizing manual processes via human-machine collaboration. As the projects move up the pyramid, they move away from saving costs and focus on increasing revenue by making more informed decisions regarding existing lines of business or launching new lines of business.
In a real-life example, one cookware company uses home demonstrations to sell high-end pots and pans using internal financing. The company brought in AI to replace a manual workflow based on rules and decision trees with a semi-automated process that streamlines the underwriting decision and reduces acquisition cost. This project was a tactical move to save money by making the process more efficient. On the strength of that success, the company moved up the pyramid. It used AI to analyze the historical behavior of accounts that underwriting had declined and passed on to a third-tier lender. The model looked for common characteristics of borrowers that had been declined yet obtained financing from the third-tier lender and, rather than defaulting, paid the loan in full. The company then applied the model to new loan applications to identify candidates who might not meet the traditional requirements for financing but were still a good risk (a minimal sketch of this kind of model appears below). This project was a strategic move to expand the market and increase revenue.
To begin with, if AI doesn't eventually replace the most fundamental tiers of service delivery, such as paper handling and data entry, it will at the very least optimize them to the point that they can be delivered by a significantly reduced staff through human-machine collaboration. Or it could lead to an increase in staff by freeing up funds through increased efficiency: an Accenture report indicated that AI could boost employment levels by 10 percent if the rest of the world invested in AI and human-machine collaboration at the same level as the top-performing 20 percent. Professional services firms touch many industries, and as technology matures and affects those industries, by necessity it also affects how professional services firms engage their clients.
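Returning to the cookware company's underwriting example, here is a minimal, hypothetical sketch of the second project's idea: learn from loans that were declined in-house but repaid elsewhere, then score new applicants. The features, the data, and the choice of a scikit-learn decision tree are illustrative assumptions, not the company's actual model.

```python
# Hypothetical sketch: learn which declined applicants were actually good risks.
# Training rows are borrowers the in-house underwriter declined; the label says
# whether they later repaid a third-tier lender in full. All data is invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [credit_score, monthly_income, debt_to_income_ratio, years_at_job]
declined_applicants = np.array([
    [580, 3200, 0.42, 4],
    [545, 2100, 0.55, 1],
    [610, 4000, 0.35, 6],
    [530, 1800, 0.60, 0],
    [600, 3600, 0.38, 3],
    [555, 2400, 0.50, 1],
])
repaid_in_full = np.array([1, 0, 1, 0, 1, 0])

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(declined_applicants, repaid_in_full)

# Score a new application that would fail the traditional underwriting rules.
new_application = np.array([[590, 3400, 0.40, 5]])
print("Good-risk probability:", model.predict_proba(new_application)[0, 1])
```

The strategic point is in the labels: instead of predicting who passes the existing rules, the model predicts who repays, which is what lets the company safely approve applicants the old workflow would have turned away.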
AI won't replace core professional expertise, but it will make you more efficient and thus enable you to increase the value proposition for your clients. However, professionals who do embrace AI will replace those who don't.
Climbing the AI Pyramid
The research tells us that enterprises across the board will increasingly turn to AI and big data to reduce costs and errors while improving efficiency and strategic planning. With a history of anticipating the needs of the market and then providing the services, you can use that knowledge to automate your own back-office processes and build on that experience to offer expanded services that relieve your clients of the heavy lifting of creating the architecture for an in-house or outsourced AI initiative.
Many firms focus on helping their clients automate routine tasks as a low-friction entry point with obvious time and cost savings. This simple application also serves as a platform for educating the client on the principles of AI and evaluating use cases for the best fit and results. With a proven win under their belt, clients are more receptive to expanding the role of AI and machine learning in their organizations, allowing them to introduce innovation and differentiation in their product and service offerings and to use the data to tackle tasks at higher levels of the pyramid.
By applying AI to your clients' environments, you can also increase your value to your clients by weaning them away from reactively correcting unexpected issues, toward forestalling common issues with preventive practices, and ultimately toward anticipating outcomes with predictive management. As the application of AI to routine processes relieves employees from attending to mundane tasks, it also frees them to tackle more valuable and interesting tasks, thus enhancing their own career paths and adding more value for the clients.
Another byproduct of the cycle of expanding automation upward through the tiers of the complexity pyramid is that, as the capabilities of artificial intelligence grow, the practices of your firm become more specialized until they are distilled to services that are beyond the reach of AI. Or the singularity happens, whichever comes first. Until that time, those who lean into innovation will gain the competitive advantage, but only if they incorporate continuous learning for their employees as a part of the business model.
Unearthing the Algorithmic Treasures
The uses for AI are as varied as the industries served by professional services firms.
Healthcare
AI can quickly and economically acquire, classify, process, and route unstructured text to everyone in the information pipeline, increasing accessibility while lowering costs. Natural-language processing can extract targeted information from unstructured text such as faxes, clinical notes, intake forms, and medical histories to improve end-to-end workflow. The process starts with data capture and classification and then routes data and documents to the appropriate back-end systems, spotting exceptions, validating edge cases, and creating action items.
Content management
AI uses machine learning, text mining, and natural-language processing to process content, extracting concepts and entities, such as names, places, dates, and customized elements relevant to the business. AI then uses that information to create metadata and import it into a structured database, accelerating searches and data analysis.
At the same time, the system automatically classifies the document based on its type and content and either assigns it to the next step in an automated workflow or flags it for review.
Compliance
AI uses unstructured data mining, robotic-process automation, statistical data aggregation, and natural-language processing to read and interpret compliance documents, interpret metadata, and identify roles and relationships, and then uses cognitive-process automation to deliver concise, actionable insights.
AI uses supervised and unsupervised learning, natural-language processing, and intelligent segmentation to capture, analyze, and filter possible compliance violations, discarding false positives that waste the time of compliance officers.
AI uses structured and unstructured data mining and natural-language processing to monitor internal and external records, documents, and social media to detect errors, violations, and trends, allowing the compliance department to be proactive and avoid costly penalties.
AI uses robotic-process automation, natural-language processing, and machine learning to identify potential violations of Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations.
Law
AI uses text mining to process large pools of unstructured data, such as legal documents, emails, texts, and social media, to identify key concepts, categorize content, detect subjectivity, isolate behavior patterns, discern the sentiment expressed in the content, and extract phrases and entities such as people, places, and things.
AI uses supervised and unsupervised learning based on native or custom taxonomies to classify or characterize large volumes of documents and cull irrelevant documents as required in support of pre- and post-production activities, such as early case assessment and privilege detection.
AI uses machine learning and natural-language processing to analyze large amounts of textual content and distill it into short summaries and chronologies, which can display entity and concept trends over time, as well as behavioral patterns of persons of interest. The results can be integrated with data visualization to display the outcome in a consumable and intuitively understandable structure using interactive reports and dashboards.
Manufacturing
AI uses decision trees and neural networks to establish baseline requirements and then uses real-time data to reveal patterns and relationships to determine demand behavior, which drives optimized inventory levels and replenishment plans.
AI uses text mining, data mining, and optimization-planning techniques to integrate suppliers and automate transactions to help clients understand their current business, address issues, and formulate strategies for improved performance. Clients can use supply-chain analytics to compare the performance of trading partners against operational and business metrics to make better decisions about their partnerships.
AI uses reinforcement learning to automate repetitive human processes. Robotic-process automation (RPA) combines analytics, machine learning, and rules-based software to capture and interpret existing data-input streams to process a transaction, manipulate data, trigger responses, and communicate with other enterprise applications.
Oil and gas
AI uses predictive-maintenance algorithms to achieve optimum uptime.
AI uses IoT sensors and machine-learning algorithms to support data-driven decision-making and enable operational excellence for midstream processes, such as storing and transporting oil and gas.
AI uses text mining, natural-language processing, and machine learning to read legacy exploration and production data to optimize new construction and development projects.
AI uses text mining and machine learning to collect, combine, and assess data to improve operational performance, reduce cost, minimize risk, and accelerate time-to-production in well-site development. It also uses those techniques to boost health and safety and to improve environmental performance.
Utilities
AI uses machine-learning algorithms and data from IoT devices to help energy and utility companies predict energy demand, assisting them in meeting short- or long-term needs, pinpointing areas of the plant or grid that need maintenance, and reducing waste by uncovering inefficiencies.
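Across these industries the same first step recurs: pull entities and concepts out of unstructured text and turn them into metadata that a downstream system can classify and route on. Here is a minimal sketch of that step using the open-source spaCy library and its small English model; the sample sentence and the way the metadata dictionary is built are illustrative assumptions, not any vendor's pipeline.

```python
# Minimal sketch: extract entities from unstructured text and build metadata
# that a downstream system could index, classify, or route on.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("Acme Industrial signed a supply agreement with Globex GmbH in Houston "
        "on March 3, 2020, covering pipeline inspection services.")

doc = nlp(text)

# Group recognized entities by label (ORG, GPE, DATE, and so on).
metadata = {}
for ent in doc.ents:
    metadata.setdefault(ent.label_, []).append(ent.text)

print(metadata)
# A production system would store this metadata in a structured database and use
# it to classify the document and route it to the appropriate workflow.
```

The heavy lifting in commercial products lies in what happens after this step: custom taxonomies, domain-specific entity types, and the classification and routing rules layered on top of the extracted metadata.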
Article / Updated 08-20-2020
Chances are good that you've sent an email to a customer service department at one point or another. Perhaps your order was late, items were damaged in shipping, or you needed to know how to initiate the return process. You may have found that while some companies are prompt in sending a reply and resolving your issue, you may not hear back from others for days. Although the timeliness of their response may have something to do with the nature of your issue, it's also likely to be influenced by whether the company is still using manual processes to sort through incoming emails. Retailers that offer a prompt resolution are likely using AI-enhanced advanced capture technology. These solutions offer the ability to quickly process incoming data, but they don't stop at email. Advanced capture technology can process handwritten notes, snail mail, and even social media. Several technologies come together to make enhanced content capture possible.
Capture
The workhorse of the capture technology is, of course, its ability to capture data from any source, including handwritten forms, emails, PDF files, Word documents, and more. Advanced OCR technology recognizes both machine- and hand-printed characters in any major language. AI-enhanced capture can also recognize specific forms and can manage complex capture workflows across different departments quickly. Most systems also capture mobile information, such as forms submitted via smartphone.
Digitize where needed
Based on predetermined configurations, capture technology can convert the information it captures into editable text or a searchable PDF file, depending on your needs. For example, some paperwork-heavy industries, such as medical offices, have begun scanning documents primarily for archival purposes, while others are transforming their entire business processes to become digitized.
Process, classify, and extract
AI uses machine learning, including natural-language processing and sentiment analysis, to gain a contextual understanding of the data. After it reads and understands the content, it applies advanced recognition and auto-classifies the content based on these findings. AI-enhanced capture uses two types of technology to drive speed and accuracy (a minimal sketch contrasting the two appears at the end of this section):
Zonal extraction: This approach uses a template that identifies fields to capture and their locations. It's most effective for recurring documents, such as claims forms or vendor invoices.
Freeform extraction: Using keywords and text analysis, freeform extraction is a flexible solution for retrieving data from documents that come from different sources. For instance, vendors may send your company invoices in multiple formats. AI-enhanced capture uses this technology to apply freeform rules that enable the extraction of key data from the invoices.
Together, these technologies automate data extraction to save time and reduce the risk of human error. AI delivers clear, actionable insights and even predictive analytics. It also prioritizes content based on any additional established or learned criteria to trigger a machine-initiated workflow. For example, in the customer service scenario mentioned earlier, AI can quickly determine whether emails from customers have a positive, neutral, or negative tone. This ability to read and analyze sentiment allows the system to prioritize appropriately, so customer support personnel can deliver answers in a timely manner to the customers who need them most. Similarly, it can detect important differences between internal documents.
For example, it can appropriately process invoices sent to customers versus invoices received from vendors requiring payment.
Validate edge cases
Another standout quality of AI-enhanced capture is its ability to help humans focus on challenging tasks. Not only does it reduce the tedious processing of data without the need for manual intervention, but it also brings edge cases to the attention of the right person for validation. For example, an admissions department at a community college may be able to process most transcripts rapidly using capture technology: the system extracts the information and sends the files to the appropriate repository. Yet, in some cases, the system might flag missing information or errors that exceed value thresholds. When that happens, the flagged transcripts can be brought to the attention of the appropriate admissions officers so they can follow up with students or take other actions as needed.
AI-enhanced content capture becomes more intelligent over time. It learns from historical data to determine which cases can be considered normal and which require human intervention. It can also make decisions based on pre-established thresholds to deliver value to your organization right away.
Manage
AI-enhanced content capture also simplifies document management. With its ability to read and make meaning of data, it routes and indexes information to the appropriate place within the content suite repository. Because it also can extract keywords, it makes your data and content easier to search. You can use AI-enhanced capture to automatically assign metadata from keywords to each piece of content that enters the enterprise, effectively acting as a comprehensive translator between functions. Although functions like HR, finance, and sales all have their own unique document types and language, these systems are sufficiently intelligent to understand their specific nuances. They can therefore manage content across the entire organization and link various functions seamlessly through simplified sharing and connections to line-of-business applications.
Visualize
Finally, AI-enhanced capture offers key analytics via dashboards and reports. It can deliver key performance indicators to help you spot inefficiencies in your business processes.
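As promised above, here is a minimal sketch contrasting zonal and freeform extraction. The invoice text, the "zone" coordinates, and the regular-expression rules are invented purely to show the difference between the two approaches; commercial capture products implement both far more robustly.

```python
# Toy contrast of zonal vs. freeform extraction. All sample data is invented.
import re

# Zonal extraction: a template says exactly where each field lives.
# Here a "page" is a list of text lines, and a zone is a (line, start, end) slice.
ZONE_TEMPLATE = {
    "invoice_number": (0, 9, 17),
    "total_due":      (2, 11, 20),
}

def extract_zonal(page_lines, template):
    return {field: page_lines[line][start:end].strip()
            for field, (line, start, end) in template.items()}

# Freeform extraction: keyword-driven rules that work regardless of layout.
FREEFORM_RULES = {
    "invoice_number": re.compile(r"invoice\s+(?:no\.?|number)[:\s]+([A-Z0-9-]+)", re.I),
    "total_due":      re.compile(r"total[:\s]+\$?([\d,]+\.\d{2})", re.I),
}

def extract_freeform(text, rules):
    results = {}
    for field, rule in rules.items():
        match = rule.search(text)
        results[field] = match.group(1) if match else None
    return results

fixed_layout = ["Invoice: INV-4412      ",
                "Acme Industrial Supply ",
                "Total due: $1,480.00   "]
print(extract_zonal(fixed_layout, ZONE_TEMPLATE))

free_text = "Invoice number INV-4412 is now due. Total: $1,480.00. Please remit by June 1."
print(extract_freeform(free_text, FREEFORM_RULES))
```

The trade-off the article describes shows up directly here: the zonal template is precise but breaks the moment a vendor changes its layout, while the freeform rules tolerate layout changes at the cost of more careful rule (or model) design.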