AI Articles
AI is officially not science-fiction anymore. It's as real as it gets. And our articles will give you the skinny on everything from machine learning to neural networks.
Articles From AI
Cheat Sheet / Updated 06-01-2023
ChatGPT took the world by storm shortly after its debut. If you’re feeling at a loss as to what to do, too excited to decide where to start, or uncertain about how to react to artificial intelligence (AI) entering your space, this cheat sheet will help you make some quick and meaningful headway that you can build on as you go. Read a brief explanation of what ChatGPT is before checking it out either directly online or by connecting with it in one of many software applications. Then learn the keys to mastering ChatGPT: perfecting the prompt and using the right ChatGPT plug-ins. Using ChatGPT will significantly increase your productivity, but you must always fact-check its responses before relying on them. Don’t worry if you make a mistake along the way or find yourself stumped over what to do next. You're not alone. ChatGPT is completely new and mysterious for most people. Keep this cheat sheet handy and dive right in!
Article / Updated 05-26-2023
The first concept that’s important to understand is that artificial intelligence (AI) doesn’t really have anything to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that’s what it is: a simulation. When asking "what is artificial intelligence?" notice an interplay between goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal. AI technology relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:

Acting like a human: When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when differentiation between the computer and a human isn’t possible. This category also reflects what the media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning (all four of which must be present to pass the test). The original Turing test didn’t include any physical contact. The newer Total Turing Test does include physical contact in the form of perceptual ability interrogation, which means that the computer must also employ both computer vision and robotics to succeed. Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright brothers didn’t succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to aerodynamics that eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.

Thinking like a human: When a computer thinks as a human, it performs tasks that require intelligence (as contrasted with rote procedures) from a human to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This model relies on three techniques:

Introspection: Detecting and documenting the techniques used to achieve goals by monitoring one’s own thought processes.

Psychological testing: Observing a person’s behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other things).

Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Magnetoencephalography (MEG).

After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.

Thinking rationally: Studying how humans think using some standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation.
A computer that thinks rationally relies on the recorded behaviors to create a guide as to how to interact with an environment based on the data at hand. The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, solving a problem in principle is often different from solving it in practice, but you still need a starting point.

Acting rationally: Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data. As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline upon which a computer can begin negotiating the successful completion of a goal.

Hintze's AI classifications

The categories used to define AI offer a way to consider various uses for or ways to apply AI. Some of the systems used to classify AI by type are arbitrary and not distinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well). The problem with strong AI is that it doesn’t perform any task well, while weak AI is too specific to perform tasks independently. Even so, just two type classifications won’t do the job even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI (a toy sketch contrasting the first two types follows this list):

Reactive machines: The machines you see beating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience upon which to base a decision. Instead, it relies on pure computational power and smart algorithms to recreate every decision every time. This is an example of a weak AI used for a specific purpose.

Limited memory: A self-driving car or autonomous robot can’t afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation, it can rely on experience to reduce reaction time and to provide more resources for making new decisions that haven’t yet been made. This is an example of the current level of strong AI.

Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly.

Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren’t even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.
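To make the difference between the first two Hintze categories concrete, here is a minimal, purely illustrative Python sketch. The situations, rules, and actions are hypothetical toys (nothing like a real chess engine or self-driving car): the reactive agent recomputes every decision from the current input alone, while the limited-memory agent also consults a short history of what it has recently seen.

```python
from collections import deque

def reactive_agent(situation: str) -> str:
    """Reactive machine: decides from the current input only, every time."""
    # Recompute the decision from scratch; no memory of past situations.
    rules = {"obstacle ahead": "brake", "clear road": "accelerate"}
    return rules.get(situation, "stop and wait")

class LimitedMemoryAgent:
    """Limited memory: keeps a short history to speed up repeat decisions."""

    def __init__(self, history_size: int = 5):
        self.history = deque(maxlen=history_size)  # small, rolling memory

    def decide(self, situation: str) -> str:
        # Reuse a recent decision if the same situation was seen before.
        for past_situation, past_action in self.history:
            if past_situation == situation:
                return past_action
        action = reactive_agent(situation)        # fall back to fresh reasoning
        self.history.append((situation, action))  # remember it for next time
        return action

agent = LimitedMemoryAgent()
print(agent.decide("obstacle ahead"))  # computed fresh -> "brake"
print(agent.decide("obstacle ahead"))  # answered from memory -> "brake"
```

The point of the contrast is that the reactive version has nowhere to store experience, while the limited-memory version trades a little storage for faster repeat decisions.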
Problems defining AI

Artificial intelligence has had several false starts and stops over the years, partly because people don’t really understand what AI is all about, or even what it should accomplish. A major part of the problem is that movies, television shows, and books have all conspired to give false hopes about what AI could accomplish. In addition, the human tendency to anthropomorphize (give human characteristics to) technology makes it seem as if AI must do more than it can hope to accomplish. Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently.

Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it’s just a collection of characters. Defining the idiom (a term whose meaning isn’t clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways. The term artificial intelligence doesn’t really tell you anything meaningful, which is why there are so many discussions and disagreements about it. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous.

Discerning intelligence

People define intelligence in many different ways. However, you can say that intelligence involves certain mental activities composed of the following:

Learning: Having the ability to obtain and process new information

Reasoning: Being able to manipulate information in various ways

Understanding: Considering the result of information manipulation

Grasping truths: Determining the validity of the manipulated information

Seeing relationships: Divining how validated data interacts with other data

Considering meanings: Applying truths to particular situations in a manner consistent with their relationship

Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid

How does AI work?

The list above could easily get quite long, but even this list is relatively prone to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation:

1. Set a goal based on needs or wants.
2. Assess the value of any currently known information in support of the goal.
3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal, rather than information that you know will support the goal.
4. Manipulate the data such that it achieves a form consistent with existing information.
5. Define the relationships and truth values between existing and new information.
6. Determine whether the goal is achieved.
7. Modify the goal in light of the new data and its effect on the probability of success.
8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).

Even though you can create algorithms and provide access to data in support of this process within a computer (the sketch below shows the bare shape of such a loop), a computer’s capability to achieve intelligence is severely limited.
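Here is a bare-bones Python sketch of that goal-seeking loop. Everything in it is a hypothetical placeholder (the goal, the data source, and the scoring function are invented for illustration); it shows only the shape of the process, not how any real AI system evaluates goals.

```python
def goal_seeking_loop(goal, known_info, gather, evaluate, max_rounds=10):
    """Illustrative shell of the eight-step process described above.

    gather(goal)         -> new information that *could* support the goal (Step 3)
    evaluate(goal, info) -> a score from 0.0 to 1.0 for progress toward the goal
    """
    info = list(known_info)               # Step 2: assess what is already known
    for _ in range(max_rounds):           # Step 8: repeat as needed
        info.extend(gather(goal))         # Step 3: gather more information
        info = sorted(set(info))          # Step 4: put the data in a consistent form
        score = evaluate(goal, info)      # Steps 5-6: relate the data, test the goal
        if score >= 0.9:
            return True, info             # goal achieved (found true)
        goal = refine(goal, score)        # Step 7: modify the goal
    return False, info                    # possibilities exhausted (found false)

def refine(goal, score):
    # Hypothetical placeholder: a real system would adjust the goal here.
    return goal

done, facts = goal_seeking_loop(
    goal="find a cheaper supplier",
    known_info=["current price: 10"],
    gather=lambda g: ["quote: 9", "quote: 8"],                # toy data source
    evaluate=lambda g, info: 0.95 if len(info) > 2 else 0.5,  # toy scoring rule
)
print(done)  # True
```

The loop itself is trivial to write; the hard part is the understanding that the placeholder gather and evaluate functions merely pretend to supply, which is where computers fall short, as described next.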
For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can’t easily separate truth from mistruth. In fact, no computer can fully implement any of the mental activities described in the list that describes intelligence.

As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don’t use just one type of intelligence, but rather rely on multiple intelligences to perform tasks. Howard Gardner of Harvard has defined a number of these types of intelligence, and knowing them helps you to relate them to the kinds of tasks that a computer can simulate as intelligence (see the list below for a modified version of these intelligences with additional description).

The Kinds of Human Intelligence and How AIs Simulate Them

Visual-spatial
Simulation potential: Moderate
Human tools: Models, graphics, charts, photographs, drawings, 3-D modeling, video, television, and multimedia
Description: Physical-environment intelligence used by people like sailors and architects (among many others). To move at all, humans need to understand their physical environment — that is, its dimensions and characteristics. Every robot or portable computer intelligence requires this capability, but the capability is often difficult to simulate (as with self-driving cars) or less than accurate (as with vacuums that rely as much on bumping as they do on moving intelligently).

Bodily-kinesthetic
Simulation potential: Moderate to High
Human tools: Specialized equipment and real objects
Description: Body movements, such as those used by a surgeon or a dancer, require precision and body awareness. Robots commonly use this kind of intelligence to perform repetitive tasks, often with higher precision than humans, but sometimes with less grace. It’s essential to differentiate between human augmentation, such as a surgical device that provides a surgeon with enhanced physical ability, and true independent movement. The former is simply a demonstration of mathematical ability in that it depends on the surgeon for input.

Creative
Simulation potential: None
Human tools: Artistic output, new patterns of thought, inventions, new kinds of musical composition
Description: Creativity is the act of developing a new pattern of thought that results in unique output in the form of art, music, and writing. A truly new kind of product is the result of creativity. An AI can simulate existing patterns of thought and even combine them to create what appears to be a unique presentation but is really just a mathematically based version of an existing pattern. In order to create, an AI would need to possess self-awareness, which would require intrapersonal intelligence.

Interpersonal
Simulation potential: Low to Moderate
Human tools: Telephone, audio conferencing, video conferencing, writing, computer conferencing, email
Description: Interacting with others occurs at several levels. The goal of this form of intelligence is to obtain, exchange, give, and manipulate information based on the experiences of others. Computers can answer basic questions because of keyword input, not because they understand the question. The intelligence occurs while obtaining information, locating suitable keywords, and then giving information based on those keywords. Cross-referencing terms in a lookup table and then acting on the instructions provided by the table demonstrates logical intelligence, not interpersonal intelligence.

Intrapersonal
Simulation potential: None
Human tools: Books, creative materials, diaries, privacy, and time
Description: Looking inward to understand one’s own interests and then setting goals based on those interests is currently a human-only kind of intelligence. As machines, computers have no desires, interests, wants, or creative abilities. An AI processes numeric input using a set of algorithms and provides an output; it isn’t aware of anything that it does, nor does it understand anything that it does.

Linguistic (often divided into oral, aural, and written)
Simulation potential: Low for oral and aural; none for written
Human tools: Games, multimedia, books, voice recorders, and spoken words
Description: Working with words is an essential tool for communication because spoken and written information exchange is far faster than any other form. This form of intelligence includes understanding oral, aural, and written input, managing the input to develop an answer, and providing an understandable answer as output. In many cases, computers can barely parse input into keywords, can’t actually understand the request at all, and output responses that may not be understandable at all. In humans, oral, aural, and written linguistic intelligence come from different areas of the brain, which means that even with humans, someone who has high written linguistic intelligence may not have similarly high oral linguistic intelligence. Computers don’t currently separate aural and oral linguistic ability — one is simply input and the other output. A computer can’t simulate written linguistic capability because this ability requires creativity.

Logical-mathematical
Simulation potential: High (potentially higher than humans)
Human tools: Logic games, investigations, mysteries, and brain teasers
Description: Calculating a result, performing comparisons, exploring patterns, and considering relationships are all areas in which computers currently excel. When you see a computer beat a human on a game show, this is the only form of intelligence that you’re actually seeing, out of seven kinds of intelligence. Yes, you might see small bits of other kinds of intelligence, but this is the focus. Basing an assessment of human-versus-computer intelligence on just one area isn’t a good idea.

The reality vs. hype

There is a lot of hype about AI out there. If you watch movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application like those shown in the movies is the creative output of an overactive imagination. However, the importance of artificial intelligence to the future of technology cannot be overstated. It is already helping people in everyday technologies, and has great potential in everything from customer service to health care, to outer space exploration.

The five tribes and the master algorithm

You may have heard of something called the singularity, which is responsible for the potential claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would — as specified by the seven kinds of intelligence discussed earlier. Here are the five tribes of learning:

Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.

Connectionists: This tribe’s origin is in neuroscience, and the group relies on backpropagation to solve problems.
Evolutionaries: The evolutionaries tribe originates in evolutionary biology, relying on genetic programming to solve problems.

Bayesians: This tribe’s origin is in statistics and relies on probabilistic inference to solve problems.

Analogizers: The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.

The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are currently working toward that goal. To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity. At this point, you should be amazed at just how much people don’t know about how they think or why they think in a certain manner. Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.

Considering sources of hype

There are many sources of AI hype. Quite a bit of the hype comes from the media and is presented by people who have no idea of what AI is all about, except perhaps from a sci-fi novel they read once. So, it’s not just movies or television that cause problems with AI hype; it’s all sorts of other media sources as well. You can often find news reports presenting AI as being able to do something that it can’t possibly do because the reporter doesn’t understand the technology. Oddly enough, many news services now use AI to at least start articles for reporters.

Some products should be tested a lot more before being placed on the market. The “2020 in Review: 10 AI Failures” article at SyncedReview.com discusses ten products hyped by their developers but which fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole. However, something to consider with a few of these failures is that people may have interfered with the device using the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, the AI will fail to perform as expected because people will continue to fiddle with the software in an attempt to cause it to fail in a humorous manner.

Another cause of problems comes from asking the wrong person about AI. Not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery — it simply isn’t a good idea. Yet, many stories appear with people like these as the information source. To discover the future direction of AI, it’s best to ask a computer scientist or data scientist with a strong background in AI research.

Understanding user overestimation

Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph. However, even with the user significantly overestimating the ability of the technology to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure.

You need not be speeding down a highway at 90 mph to encounter user overestimation, though. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner. The article “How to Solve the Most Annoying Robot Vacuum Cleaner Problems” at RobotsInMyHome.com discusses troubleshooting techniques for various robotic vacuums for a good reason — the robots still need human intervention. The point is that most robots need human intervention at some point because they simply lack the knowledge to go it alone.

What is AI technology?

Artificial intelligence is a sub-discipline of computer science that works by combining large amounts of data with fast, iterative algorithms with the goal of enabling computers to solve complex problems and complete complex tasks. To see AI at work, you need to have some sort of computing system, an application that contains the required software, and a knowledge base.

For artificial intelligence, the computers could be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for some applications. Of course, if you’re Amazon and you want to provide advice on a particular person’s next buying decision, the smartphone won’t do — you need a really big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.

The application can also vary in size, complexity, and even location. For example, if you’re a business and want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task. On the other hand, if you’re a customer and want to find products on Amazon to go with your current purchase items, the application doesn’t even reside on your computer; you access it through a web-based application located on Amazon’s servers.

The knowledge base varies in location and size as well. The more complex the data, the more you can obtain from it, but the more you need to manipulate it as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important. A network connection affords you access to a large knowledge base online but costs you in time because of the latency of network connections. However, localized databases, while fast, tend to lack details in many cases.
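As a purely illustrative example of that location-versus-time trade-off, the following Python sketch fronts a slow "remote" knowledge base with a fast local cache. The knowledge base and its one-second delay are made up; the point is only that local lookups avoid network latency, while the remote store holds more detail than you would ever keep locally.

```python
import time

REMOTE_KNOWLEDGE_BASE = {  # stand-in for a large online knowledge base
    "capital of France": "Paris",
    "boiling point of water": "100 degrees Celsius at sea level",
}

local_cache = {}  # small and fast, but holds only what has been fetched already

def remote_lookup(question: str) -> str:
    time.sleep(1.0)  # simulate network latency
    return REMOTE_KNOWLEDGE_BASE.get(question, "unknown")

def answer(question: str) -> str:
    if question in local_cache:       # fast path: localized data
        return local_cache[question]
    result = remote_lookup(question)  # slow path: large remote knowledge base
    local_cache[question] = result
    return result

start = time.time()
answer("capital of France")           # slow the first time
print(f"first lookup: {time.time() - start:.1f}s")

start = time.time()
answer("capital of France")           # nearly instant from the local cache
print(f"second lookup: {time.time() - start:.1f}s")
```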
Article / Updated 05-25-2023
ChatGPT is a huge phenomenon and a major paradigm shift in the accelerating march of technological progression. Artificial intelligence (AI) research company OpenAI released a free preview of the chatbot in November 2022, and by January 2023, it had more than a million users.

So, what is ChatGPT? It's a large language model (LLM) that belongs to a category of AI called generative AI, which can generate new content rather than simply analyze existing data. Additionally, anyone can interact with ChatGPT (GPT stands for generative pre-trained transformer) in their own words. A natural, humanlike dialog ensues.

ChatGPT is often directly accessed online by users, but it is also being integrated with several existing applications, such as Microsoft Office apps (Word, Excel, and PowerPoint) and the Bing search engine. The number of app integrations seems to grow every day as existing software providers hurry to capitalize on ChatGPT’s popularity.

What is ChatGPT used for?

The ways to use ChatGPT are as varied as its users. Most people lean toward more basic requests, such as creating a poem, an essay, or short marketing content. Students often turn to it to do their homework. Heads up, kids: ChatGPT stinks at answering riddles and sometimes word problems in math. Other times, it just makes things up.

In general, people tend to use ChatGPT to guide or explain something, as if the bot were a fancier version of a search engine. Nothing is wrong with that use, but ChatGPT can do so much more. How much more depends on how well you write the prompt. If you write a basic prompt, you’ll get a bare-bones answer that you could have found using a search engine such as Google or Bing. That’s the most common reason why people abandon ChatGPT after a few uses. They erroneously believe it has nothing new to offer. But this particular failing is the user’s fault, not ChatGPT’s.

What can ChatGPT do?

This list covers just some of the more unique uses of this technology. Users have asked ChatGPT to:

Conduct an interview with a long-dead legendary figure regarding their views of contemporary topics.
Recommend colors and color combinations for logos, fashion designs, and interior decorating designs.
Generate original works such as articles, e-books, and ad copy.
Predict the outcome of a business scenario.
Develop an investment strategy based on stock market history and current economic conditions.
Make a diagnosis based on a patient’s real-world test results.
Write computer code to make a new computer game from scratch.
Leverage sales leads.
Inspire ideas for a variety of things, from A/B testing to podcasts, webinars, and full-feature films.
Check computer code for errors.
Summarize legalese in software agreements, contracts, and other forms into simple layman’s language.
Calculate the terms of an agreement into total costs.
Teach a skill or get instructions for a complex task.
Find an error in their logic before implementing their decision in the real world.

Much ado has been made of ChatGPT’s creativity. But that creativity is a reflection and result of the human doing the prompting. If you can think it, you can probably get ChatGPT to play along. Unfortunately, that’s true for bad guys too. For example, they can prompt ChatGPT to find vulnerabilities in computer code or a computer system; steal your identity by writing a document in your style, tone, and word choices; or edit an audio clip or a video clip to fool your biometric security measures or make it say something you didn’t actually say.
Only their imagination limits the possibilities for harm and chaos.

Unwrapping ChatGPT fears

Perhaps no other technology is as intriguing and disturbing as generative artificial intelligence. Emotions were raised to a fever pitch when 100 million monthly active users snatched up the free, research preview version of ChatGPT within two months after its launch. You can thank science fiction writers and your own imagination for both the tantalizing and terrifying triggers that ChatGPT is now activating in your head, making you wonder: Is ChatGPT safe?

There are definitely legitimate reasons for caution and concern. Lawsuits have been launched against generative AI programs for copyright and other intellectual property infringements. OpenAI and other AI companies and partners stand accused of illegally using copyrighted photos, text, and other intellectual property without permission or payment to train their AI models. These charges generally spring from copyrighted content getting caught up in the scraping of the internet to create massive training datasets. In general, legal defense teams are arguing the inevitability and unsustainability of such charges in the age of AI and requesting that charges be dropped.

The lawsuits regarding who owns the content generated by ChatGPT and its ilk lurk somewhere in the future. However, the U.S. Copyright Office has already ruled that AI-generated content, be it writing, images, or music, is not protected by copyright law. In the U.S., at least for now, the government will not protect anything generated by AI in terms of rights, licensing, or payment.

Meanwhile, realistic concerns exist over other types of potential liabilities. ChatGPT and ChatGPT alternatives are known to sometimes deliver incorrect information to users and other machines. Who is liable when things go wrong, particularly in a life-threatening scenario? Even if a business’s bottom line is at stake and not someone's life, risks can run high and the outcome can be disastrous. Inevitably, someone will suffer, and likely some person or organization will eventually be held accountable for it.

Then, there are the magnifications of earlier concerns, such as data privacy, biases, unfair treatment of individuals and groups through AI actions, identity theft, deep fakes, security issues, and reality apathy, which is when the public can no longer tell what is true and what isn’t and thinks the effort to sort it all out is too difficult to pursue.

In short, all of this probably has you wondering: Is ChatGPT safe? The potential to misuse it accelerates and intensifies the need for the rules and standards currently being studied, pursued, and developed by organizations and governments seeking to establish guardrails aimed at ensuring responsible AI. The big question is whether they’ll succeed in time, given ChatGPT’s incredibly fast adoption rate worldwide.

Examples of groups working on guidelines, ethics, standards, and responsible AI frameworks include the following:

ACM US Technology Committee’s Subcommittee on AI & Algorithms
World Economic Forum
UK’s Centre for Data Ethics
Government agencies and efforts such as the US AI Bill of Rights and the European Council of the European Union’s Artificial Intelligence Act
IEEE and its 7000 series of standards
Universities such as New York University’s Stern School of Business
The private sector, wherein companies make their own responsible AI policies and foundations

How does ChatGPT work?

ChatGPT works differently than a search engine.
A search engine, such as Google or Bing, or an AI assistant, such as Siri, Alexa, or Google Assistant, works by searching the internet for matches to the keywords you enter in the search bar. Algorithms refine the results based on any number of factors, but your browser history, topic interests, purchase data, and location data usually figure into the equation. You’re then presented with a list of search results ranked in order of relevance as determined by the search engine’s algorithm. From there, the user is free to consider the sources of each option and click a selection to do a deeper dive for more details from that source.

By comparison, ChatGPT generates its own unified answer to your prompt. It doesn't offer citations or note its sources. You ask; it answers. Easy-peasy, right? No. That task is incredibly hard for AI to do, which is why generative AI is so impressive.

Generating an original result in response to a prompt is achieved by using either the GPT-3 (Generative Pre-trained Transformer 3) or GPT-4 model to analyze the prompt with context and predict the words that are likely to follow. Both GPT models are extremely powerful large language models capable of processing billions of words per second. In short, transformers enable ChatGPT to generate coherent, humanlike text as a response to a prompt. ChatGPT creates a response by considering context and assigning weight (values) to words that are likely to follow the words in the prompt to predict which words would be an appropriate response.

Some ChatGPT basics here: User input is called a prompt rather than a command or a query, although it can take either form. You are, in effect, prompting AI to predict and complete a pattern that you initiated by entering the prompt. If you'd like a comprehensive ChatGPT guide, including more detail on how it works and how to use it, check out my book ChatGPT For Dummies.

Peeking at the ChatGPT architecture

As its name implies, ChatGPT is a chatbot running on a GPT model. GPT-3, GPT-3.5, and GPT-4 are large language models (LLMs) developed by OpenAI. When GPT-3 was introduced, it was the largest LLM at 175 billion parameters. An upgraded version called GPT-3.5 turbo is a highly optimized and more stable version of GPT-3 that's ten times cheaper for developers to use. ChatGPT is now also available on GPT-4, which is a multimodal model, meaning it accepts both image and text inputs although its outputs are text only. It's now the largest LLM to date, although GPT-4’s exact number of parameters has yet to be disclosed.

Parameters are numerical values that weigh and define connections between nodes and layers in the neural network architecture. The more parameters a model has, the more complex its internal representations and weighting. In general, more parameters lead to better performance on specific tasks.

ChatGPT for beginners

Here, you'll learn the basics of how to use ChatGPT and why it relies on your skills to optimize its performance. But the real treasure here is the tips and insights on how to write prompts so that ChatGPT can perform its true magic. You can learn even more about writing prompts in my book ChatGPT For Dummies.

Writing effective ChatGPT prompts

ChatGPT appears deceptively simplistic. The user interface is elegantly minimalistic and intuitive, as shown in the figure below. The first part of the page offers information to users regarding ChatGPT’s capabilities and limitations plus a few examples of prompts.
The prompt bar, which resembles a search bar, runs across the bottom of the page. Just enter a question or a command to prompt ChatGPT to produce results immediately. If you enter a basic prompt, you’ll get a bare-bones, encyclopedic-like answer, as shown in the figure below. Do that enough times and you’ll convince yourself that this is just a toy and you can get better results from an internet search engine. This is a typical novice’s mistake and a primary reason why beginners give up before they fully grasp what ChatGPT is and can do.

Understand that your previous experience with keywords and search engines does not apply here. You must think of and use ChatGPT in a different way. Think hard about how you’re going to word your prompt. You have many options to consider. You can assign ChatGPT a role or a persona, or several personas and roles if you decide it should respond as a team, as illustrated in the figure below. You can assign yourself a new role or persona as well. Or tell it to address any type of audience — such as a high school graduating class, a surgical team, or attendees at a concert or a technology conference. You can set the stage or situation in great or minimal detail. You can ask a question, give it a command, or require specific behaviors.

A prompt, as you can see now, is much more than a question or a command. Your success with ChatGPT hinges on your ability to master crafting a prompt in such a way as to trigger the precise response you seek. Ask yourself these questions as you are writing or evaluating your prompt:

Who do you want ChatGPT to be?
Where, when, and what is the situation or circumstances you want ChatGPT’s response framed within?
Is the question you're entering in the prompt the real question you want it to answer, or were you trying to ask something else?
Is the command you're prompting complete enough for ChatGPT to draw from sufficient context to give you a fuller, more complete, and richly nuanced response?

And the ultimate question for you to consider: Is your prompt specific and detailed, or vague and meandering? Whichever is the case, that’s what ChatGPT will mirror in its response. ChatGPT’s responses are only as good as your prompt. That’s because the prompt starts a pattern that ChatGPT must then complete. Be intentional and concise about how you present that pattern starter — the prompt.

Starting a chat

To start a chat, just type a question or command in the prompt bar, shown at the bottom of the figure below. ChatGPT responds instantly. You can continue the chat by using the prompt bar again. Usually, you do this to gain further insights or to get ChatGPT to further refine its response. Following are some things you can do in a prompt that may not be readily evident:

Add data in the prompt along with your question or command regarding what to do with that data. Adding data directly in the prompt enables you to add more current info as well as make ChatGPT responses more customizable and on point. You can use the Browsing plug-in to connect ChatGPT to the live internet, which will give it access to current information. However, you may want to add data to the prompt anyway to better focus its attention on the problem or task at hand. There are limits on prompt and response sizes, though, so make your prompt as concise as possible.

Direct the style, tone, vocabulary level, and other factors to shape ChatGPT's response.

Command ChatGPT to assume a specific persona, job role, or authority level in its response.
If you’re using ChatGPT-4, you'll soon be able to use images in the prompt too. ChatGPT can extract information from the image to use in its analysis.

When you’ve finished chatting on a particular topic or task, it’s wise to start a new chat (by clicking or tapping the New Chat button in the upper left). Starting a new dialogue prevents confusing ChatGPT, which would otherwise treat subsequent prompts as part of a single conversational thread. On the other hand, starting too many new chats on the same topic or related topics can lead the AI to use repetitious phrasing and outputs, whether or not they apply to the new chat’s prompt. To recap: Don't confuse ChatGPT by chatting in one long continuous thread with a lot of topic changes or by opening too many new chats on the same topic. Otherwise, ChatGPT will probably say something offensive or make up random and wrong answers.

When writing prompts, think of the topic or task in narrow terms. For example, don't have a long chat on car racing, repairs, and maintenance. To keep ChatGPT more intently focused, narrow your prompt to a single topic, such as determining when the vehicle will be at top trade-in value so you can best offset a new car price. Your responses will be of much higher quality.

ChatGPT may call you offensive names and make up stuff if the chat goes on too long. Shorter conversations tend to minimize these odd occurrences, or so most industry watchers think. For example, after ChatGPT responses to Bing users became unhinged and argumentative, Microsoft limited conversations with it to 5 prompts in a row, for a total of 50 conversations a day per user. But a few days later, it increased the limit to 6 prompts per conversation and a total of 60 conversations per day per user. The limits will probably increase when AI researchers can figure out how to tame the machine to an acceptable — or at least a less offensive — level.
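If you reach the models behind ChatGPT programmatically rather than through the web page, the same prompting advice applies. The sketch below uses the openai Python package (its 0.27-era ChatCompletion interface, which may have changed since) and assumes you supply your own API key; the persona, audience, and prompt are simply examples of the role-and-context techniques described above, not anything prescribed by OpenAI.

```python
# Minimal sketch of role/persona prompting through OpenAI's API.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")  # your own key goes here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message assigns a persona and an audience.
        {"role": "system",
         "content": "You are a patient driving instructor explaining things to a nervous new driver."},
        # The user message is the prompt itself: narrow, with context.
        {"role": "user",
         "content": "Explain when a car reaches its best trade-in value, in three short bullet points."},
    ],
    temperature=0.7,  # lower values give more focused, less creative answers
)

print(response["choices"][0]["message"]["content"])
```

Whether you type into the prompt bar or send a request like this from code, the same ideas hold: give ChatGPT a role, an audience, and a narrow, well-framed task.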
Article / Updated 05-25-2023
You can hardly avoid hearing about artificial intelligence (AI) today. You see AI in the movies, in the news, in books, and online. It's been in the news a lot lately, with all of the frenzy surrounding ChatGPT (see more about that below). AI is part of robots, self-driving (SD) cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in so many ways.

Some people have come to trust AIs so much that they fall asleep while their self-driving cars take them to their destination — illegally, of course. Many pundits are burying you in information (and disinformation) about AI, too. Some see AI as cute and fuzzy; others see it as a potential mass murderer of the human race. The problem with being so loaded down with information in so many ways is that you struggle to separate what’s real from what is simply the product of an overactive imagination. Just how far can you trust your AI, anyway? Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspersons. This article helps you understand some of the history and evolution of artificial intelligence.

The ChatGPT controversy

The latest media storm around AI came in early January 2023, when OpenAI's free preview of its ChatGPT chatbot (released in November 2022) reached 100 million users. OpenAI then released a subscription service called ChatGPT Plus, and an upgraded version of its product, ChatGPT-4, in March 2023.

A chatbot is a computer program designed to simulate human conversation. ChatGPT (GPT stands for generative pretrained transformer) is a particularly powerful chatbot able to produce natural, humanlike writing through its use of 570GB of data from the Internet. Representing one of the latest achievements in the development of artificial intelligence, ChatGPT can answer questions and write articles, poems, emails, and research papers; it can also write programming code, translate languages, and perform other tasks related to language. ChatGPT's possible real-world uses include:

Customer service
Ecommerce
Research
Education and training
Computer code writing and debugging
Scheduling and booking
Entertainment
Health care information and assistance

However, while many people are excited about the possibilities for ChatGPT and other similar technologies being developed, there are plenty of concerns about how it can be used in bad ways, too — for example, to cheat in school by having it write essays and research papers. It’s difficult to discern whether a piece of writing has been generated by ChatGPT or a human. In addition, the technology is far from perfect; the text it produces is often inaccurate and biased, and therefore can spread false and even harmful information.

AI can, and does, serve us well in many ways, but it’s important to understand its limitations. AI will never be able to engage in certain essential activities and tasks, and won’t be able to do other ones until far into the future. For example, while it can produce a piece of music with the data you’ve entered and in the style of a particular musician, say Beethoven, it cannot actually create anything. AI doesn’t have an imagination or original ideas.

The history of AI, starting with Dartmouth

Looking at artificial intelligence history begins with the earliest computers, which were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition.
Logical reasoning later added the capability to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another value). However, for artificial intelligence to evolve, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result.

During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus in Hanover, New Hampshire, to do something more. They predicted that machines that could reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now have we realized machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must master at least six more intelligences before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College effort and other endeavors of the time relates to hardware — the processing capability to perform calculations quickly enough to create a simulation. However, that’s not really the whole problem. Yes, hardware does figure into the picture, but you can’t simulate processes that you don’t understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations.

The biggest problem with these early attempts (and still a considerable problem today) is that we don’t understand how humans reason well enough to create a simulation of any sort — assuming that a direct simulation is even possible. Consider the issues surrounding the accomplishment of manned flight by the Wright brothers. They succeeded not by simulating birds, but rather by understanding the processes that birds use, thereby creating the field of aerodynamics. Consequently, when someone says that the next big AI innovation is right around the corner and yet no concrete dissertation exists of the processes involved, the innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including:

Rule based: These use "if … then" statements to base decisions on rules of thumb (a toy example appears at the end of this section).
Frame based: These use databases organized into related hierarchies of generic information called frames.
Logic based: These rely on set theory to establish relationships.

The advent of expert systems is important in the history of artificial intelligence because they represent the first truly useful and successful implementations of AI. You still see expert systems in use today, although they aren’t called that any longer. For example, the spelling and grammar checkers in your application are kinds of expert systems. The grammar checker, especially, is strongly rule based. It pays to look around to see other places where expert systems may still see practical use in everyday applications.

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages, such as List Processing (LISP) or Prolog. Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers. However, the products they used generally provided extremely limited functionality in using small knowledge bases.
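To show what the rule-based style looks like in practice, here is a tiny, hypothetical Python sketch in the spirit of those early systems: a handful of if ... then rules of thumb applied repeatedly to a set of known facts (forward chaining). Real expert systems were written in languages such as LISP or Prolog and held far larger knowledge bases.

```python
# Toy rule-based "expert system": if ... then rules of thumb over known facts.
RULES = [
    # (conditions that must all be true, conclusion to add)
    ({"engine cranks", "engine will not start"}, "suspect fuel or spark problem"),
    ({"suspect fuel or spark problem", "fuel gauge reads empty"}, "add fuel"),
    ({"engine does not crank"}, "check the battery"),
]

def infer(facts):
    """Apply the rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine cranks", "engine will not start", "fuel gauge reads empty"}))
# The output includes "suspect fuel or spark problem" and "add fuel".
```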
In the 1990s, the phrase expert system began to disappear. The idea that expert systems were a failure did appear, but the reality is that expert systems were simply so successful that they became ingrained in the applications that they were designed to support. Using the example of a word processor, at one time you needed to buy a separate grammar checking application, such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments. A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years — all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave (a toy example of learning from examples appears at the end of this section). Machine learning is like educating a baby by showing it how to behave through example. This technology has pitfalls because the computer can learn how to do things incorrectly through careless teaching.

At this time, the most successful solution is deep learning, which is a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitalization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses. People are saying that the AI winter is over because of deep learning, and that’s true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone the rhetoric down.
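As a minimal illustration of deriving behavior from examples rather than programming it directly, the following Python sketch trains a tiny perceptron (the same kind of model as Rosenblatt's 1958 Perceptron in the timeline that follows) to reproduce the logical AND function from labeled examples. The data, learning rate, and epoch count are toy values chosen for illustration.

```python
# Toy "learning from examples": a perceptron learns the logical AND function
# from labeled examples instead of being programmed with an if statement.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

for _ in range(20):                      # show the machine the examples repeatedly
    for x, target in examples:
        error = target - predict(x)      # how wrong is the current behavior?
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error    # nudge the behavior toward the examples

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] once training converges
```

Nothing in the loop spells out the AND rule; the weights drift toward it because the examples repeatedly correct the model's behavior, which is also why careless teaching (bad examples) produces bad behavior.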
A brief artificial intelligence timeline

1942: First electronic digital computer built by John Vincent Atanasoff and Clifford Berry at Iowa State University.

1950: Alan Turing publishes the paper “Computing Machinery and Intelligence;” his proposal later became “The Turing Test,” which measured machine AI.

1958: Perceptron computer, built by Cornell University Professor Frank Rosenblatt, regarded as the first artificial neural network.

1966: First "chatterbot" (later shortened to chatbot) — created by Joseph Weizenbaum, a German-American computer scientist — uses natural language processing to converse with humans.

1971: First commercial microprocessor by Intel.

1988: Jabberwacky, a chatbot created by British computer scientist Rollo Carpenter, provides interesting and entertaining conversation to humans.

1990s: Early days of the Internet.

1992: TD-Gammon, developed by Gerald Tesauro of IBM; an artificial neural network trained by temporal-difference learning to play high-level backgammon.

1997: IBM's Deep Blue chess computer defeats Russian chess grandmaster Garry Kasparov; Dragon Systems releases speech recognition software for Windows.

2012: AlexNet, a convolutional neural network architecture, primarily designed by Alex Krizhevsky, a Ukrainian-born, Canadian computer scientist.

2020: OpenAI beta tests GPT-3, which uses deep learning to create code, poetry, and other language and writing tasks; it's the first such chatbot that can create content almost indistinguishable from human-created content.

2023: ChatGPT, released as a free preview in November 2022, reaches more than 100 million users in January; in March, OpenAI releases the upgraded GPT-4.

AI in our everyday lives

You’re using AI in some way today; in fact, you probably rely on AI in many different ways — you just don’t notice it because it’s so mundane. A smart thermostat for your home may not sound very exciting, but it’s an incredibly practical use for a technology that has some people running for the hills in terror. As the development of AI has continued, there are now really cool uses for AI. For example, you may not know there is a medical monitoring device that can actually predict when you might have a heart problem, but such a device exists. AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used today in all sorts of space applications, and the evolution of artificial intelligence figures prominently in all the space adventures humans will have tomorrow.

The potential uses for AI number in the millions — all safely out of sight even when they’re quite dramatic in nature. Here are some of the ways in which you might see AI used:

Fraud detection: You get a call from your credit card company asking whether you made a particular purchase. The credit card company isn’t being nosy; it’s simply alerting you to the fact that someone else could be making a purchase using your card. The AI embedded within the credit card company’s code detected an unfamiliar spending pattern and alerted someone to it (a toy sketch of this kind of pattern check appears after this list).

Resource scheduling: Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine where to put a patient based on the patient’s needs, availability of skilled experts, and the amount of time the doctor expects the patient to be in the hospital.

Complex analysis: Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms could indicate more than one problem. A doctor or other expert might need help making a diagnosis in a timely manner to save a patient’s life.

Automation: Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation today is that an unexpected event, such as an object in the wrong place, can actually cause the automation to stop. Adding AI to the automation can allow the automation to handle unexpected events and continue as if nothing happened.

Customer service: The customer service line you call today may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. With good voice inflection (provided by AI as well), you may not even be able to tell that you’re talking with a computer.

Safety systems: Many of the safety systems found in machines of various sorts today rely on AI to take over the vehicle in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs that a vehicle can provide, such as the direction of a skid. Computerized ABS is actually relatively old at 40 years from a technology perspective.

Machine efficiency: AI can help control a machine in such a manner as to obtain maximum efficiency. The AI controls the use of resources so that the system doesn’t overshoot speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
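As a toy illustration of the fraud-detection idea mentioned in the list above (and nothing like a real credit card company's system), this Python sketch flags a purchase that falls far outside a cardholder's usual spending pattern using a simple average-and-spread check on invented numbers.

```python
import statistics

# Hypothetical history of a cardholder's recent purchase amounts, in dollars.
recent_purchases = [14.50, 9.99, 42.00, 27.35, 18.20, 33.10, 25.00, 12.75]

def looks_unfamiliar(amount, history, tolerance=3.0):
    """Flag amounts far outside the usual pattern (a crude z-score check)."""
    mean = statistics.mean(history)
    spread = statistics.stdev(history)
    return abs(amount - mean) > tolerance * spread

print(looks_unfamiliar(21.40, recent_purchases))    # ordinary purchase -> False
print(looks_unfamiliar(2500.00, recent_purchases))  # unusual purchase -> True
```

A real system weighs many more signals, such as merchant, location, and timing, but the underlying idea of flagging an unfamiliar pattern is the same.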
Article / Updated 05-09-2023
Artificial intelligence (AI) is great at automation, which can make it ideal for tasks in health care. It never deviates from the procedure, never gets tired, and never makes mistakes as long as the initial procedure is correct. Unlike humans, AI never needs a vacation or a break or even an eight-hour day (not that many in the medical profession have that, either). Consequently, the same AI that interacts with a patient for breakfast will do so for lunch and dinner as well. So, at the outset, AI has some significant advantages if viewed solely on the basis of consistency, accuracy, and longevity.

Working with medical records

The major way in which an AI helps in medicine is with medical records. In the past, everyone used paper records to store patient data. Each patient might also have a blackboard that medical personnel use to record information daily during a hospital stay. Various charts contain patient data, and the doctor might also have notes. Having all these sources of information in so many different places made it hard to keep track of the patient in any significant way. Using an AI, along with a computer database, helps make information accessible, consistent, and reliable. Products such as Google DeepMind Health enable personnel to mine the patient information to see patterns in data that aren’t obvious.

Doctors don’t necessarily interact with records in the same way that everyone else does. The use of products such as IBM’s WatsonPaths helps doctors interact with patient data of all sorts in new ways to make better diagnostic decisions about patient health. You can see a video on how this product works.

Medicine is about a team approach, with many people of varying specialties working together. However, anyone who watches the process for a while soon realizes that these people don’t communicate among themselves sufficiently because they’re all quite busy treating patients. Products such as CloudMedX take all the input from all the parties involved and perform risk analysis on it. The result is that the software can help locate potentially problematic areas that could reduce the likelihood of a good patient outcome. In other words, this product does some of the talking that the various stakeholders would likely do if they weren’t submerged in patient care.

Predicting the future

Some truly amazing predictive software based on medical records includes CareSkore, which actually uses algorithms to determine the likelihood of a patient’s requiring readmission into the hospital after a stay (a toy sketch of this kind of risk model appears at the end of this section). By performing this task, hospital staff can review reasons for potential readmission and address them before the patient leaves the hospital, making readmission less likely. Along with this strategy, Zephyr Health helps doctors evaluate various therapies and choose those most likely to result in a positive outcome — again reducing the risk that a patient will require readmission to the hospital. This video tells you more about Zephyr Health.

In some respects, your genetics form a map of what will happen to you in the future. Consequently, knowing about your genetics can increase your understanding of your strengths and weaknesses, helping you to live a better life. Deep Genomics is discovering how mutations in your genetics affect you as a person. Mutations need not always produce a negative result; some mutations actually make people better, so knowing about mutations can be a positive experience, too. Check out this video for more details.
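The following is a purely illustrative sketch of the readmission-risk idea using scikit-learn and invented numbers. It is not how CareSkore or any clinical product actually works; real models are built from far more data, far more features, and careful validation.

```python
# Toy readmission-risk model on invented data; an illustration of the idea only.
from sklearn.linear_model import LogisticRegression

# Features per past patient: [age, days_in_hospital, prior_admissions]
X = [
    [45, 2, 0], [60, 5, 1], [72, 8, 3], [38, 1, 0],
    [81, 10, 4], [55, 3, 1], [67, 7, 2], [29, 1, 0],
]
y = [0, 0, 1, 0, 1, 0, 1, 0]  # 1 = was readmitted within 30 days

model = LogisticRegression().fit(X, y)

# Estimate readmission risk for a new (made-up) patient before discharge.
new_patient = [[70, 9, 2]]
risk = model.predict_proba(new_patient)[0][1]
print(f"estimated readmission risk: {risk:.0%}")
```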
Making procedures safer

Doctors need lots of data to make good decisions. However, with data spread out all over the place, doctors who lack the ability to analyze that disparate data quickly often make imperfect decisions. To make procedures safer, a doctor needs not only access to the data but also some means of organizing and analyzing it in a manner reflecting the doctor's specialty. One such product is Oncora Medical, which collects and organizes medical records for radiation oncologists. As a result, these doctors can deliver the right amount of radiation to just the right locations to obtain a better result with a lower potential for unanticipated side effects.

Doctors also have trouble obtaining necessary information because the machines they use tend to be expensive and huge. An innovator named Jonathan Rothberg has decided to change all that through the Butterfly Network. Imagine an iPhone-sized device that can perform both an MRI and an ultrasound. The picture on the website is nothing short of amazing.

Creating better medications

Everyone complains about the price of medications today. Yes, medications can do amazing things for people, but they cost so much that some people end up mortgaging homes to obtain them. Part of the problem is that testing takes a lot of time. Performing a tissue analysis to observe the effects of a new drug can take up to a year. Fortunately, products such as 3Scan can reduce the time required to obtain the same tissue analysis to as little as one day.

Of course, better still would be for drug companies to have a good idea of which drugs are likely to work and which aren't before investing any money in research. Atomwise uses a huge database of molecular structures to perform analyses of which molecules will answer a particular need. In 2015, researchers used Atomwise to create medications that would make Ebola less likely to infect others. The analysis that would have taken human researchers months or possibly years to perform took Atomwise just one day to complete. Imagine this scenario in the midst of a potential global epidemic. If Atomwise can perform the analysis required to render the virus or bacteria noncontagious in one day, the potential epidemic could be curtailed before becoming widespread.

Drug companies also produce a huge number of drugs. The reason for this impressive productivity, besides profitability, is that every person is just a little different. A drug that performs well and produces no side effects in one person might not perform well at all, and could even cause harm, in a different person. Turbine enables drug companies to run drug simulations so that they can locate the drugs most likely to work with a particular person's body. Turbine's current emphasis is on cancer treatments, but it's easy to see how this same approach could work in many other areas.

Medications can take many forms. Some people think they come only in pill or shot form, yet your body produces a wide range of medications in the form of microbiomes. Your body actually contains ten times as many microbes as it does human cells, and many of these microbes are essential for life; you'd quickly die without them. Whole Biome is using a variety of methods to make these microbiomes work better for you so that you don't necessarily need a pill or a shot to cure something. Check out this video for additional information.

Some companies have yet to realize their potential, but they're likely to do so eventually.
One such company is Recursion Pharmaceuticals, which employs automation to explore ways to use known drugs, bioactive drugs, and pharmaceuticals that didn't previously make the grade to solve new problems. The company has had some success in helping to solve rare genetic diseases, and it has a goal of curing 100 diseases in the next ten years (obviously, an extremely ambitious goal).
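The deep learning screening that a company such as Atomwise performs is well beyond a short example, but the basic idea of scoring a database of molecular structures in software can be sketched with the open-source RDKit toolkit. The rule-of-five filter below is only a crude stand-in for a learned scoring model, and the candidate molecules are ordinary compounds chosen for illustration.

```python
# Sketch of screening a small "database" of molecular structures in software.
# A crude drug-likeness filter (Lipinski's rule of five) stands in for the
# far more sophisticated learned scoring that real screening systems use.

from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = {
    "aspirin":     "CC(=O)OC1=CC=CC=C1C(=O)O",
    "caffeine":    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    "ibuprofen":   "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O",
    "long alkane": "C" * 40,  # deliberately un-drug-like
}

def passes_rule_of_five(mol):
    """Very rough drug-likeness check."""
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    verdict = "keep" if passes_rule_of_five(mol) else "drop"
    print(f"{name}: {verdict}")
```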
View ArticleArticle / Updated 04-14-2023
Many of the current techniques for extending the healthy range of human life (the segment of life that contains no significant sickness), rather than just increasing the number of years of life, depend on making humans more capable of improving their own health in various ways. You can find any number of articles that tell you 30, 40, or even 50 ways to extend this healthy range, but often it comes down to a combination of eating right, exercising enough and in the right way, and sleeping well. Of course, figuring out just which food, exercise, and sleep technique works best for you is nearly impossible. The following sections discuss ways in which an AI-enabled device might make the difference between having 60 good years and 80 or more good years. (In fact, it's no longer hard to find articles that discuss human life spans of 1,000 or more years in the future because of technological changes.)

Using games for therapy

A gaming console can make a powerful and fun physical therapy tool. Both the Nintendo Wii and the Xbox 360 see use in many different physical therapy venues. The goal of these games is to get people moving in certain ways. Just as with any other player, the game automatically rewards proper movements, so the patient receives therapy in a fun way. Because the therapy becomes fun, the patient is more likely to actually do it and get better faster. Of course, movement alone, even when working with the proper game, doesn't ensure success. In fact, someone could develop a new injury when playing these games. The Jintronix add-on for the Xbox Kinect hardware standardizes the use of this game console for therapy, increasing the probability of a good outcome.

Considering the use of exoskeletons

One of the most complex undertakings for an AI is to provide support for an entire human body. That's what happens when someone wears an exoskeleton (essentially a wearable robot). An AI senses movements (or the need to move) and provides a powered response to that need. The military has excelled in the use of exoskeletons. Imagine being able to run faster and carry significantly heavier loads as a result of wearing an exoskeleton. This video gives you just a glimpse of what's possible. Of course, the military continues to experiment, which feeds into civilian uses. The exoskeleton you eventually see (and you're almost guaranteed to see one at some point) will likely have its origins in the military.

Industry has also gotten in on exoskeleton technology. Factory workers currently face a host of illnesses because of repetitive stress injuries. In addition, factory work is incredibly tiring. Wearing an exoskeleton not only reduces fatigue but also reduces errors and makes the workers more efficient. People who maintain their energy levels throughout the day can do more with far less chance of being injured, damaging products, or hurting someone else. The exoskeletons in use in industry today reflect their military beginnings. Look for the capabilities and appearance of these devices to change in the future to look more like the exoskeletons shown in movies such as Aliens. The real-world examples of this technology are a little less impressive but will continue to gain in functionality.

As interesting as it is to use exoskeletons to make able-bodied people even more capable, what they can enable people to do that they couldn't do otherwise is downright amazing.
For example, scientists at the National Institutes of Health Clinical Center in Bethesda, Maryland, have helped children with cerebral palsy learn how to walk more effectively by using an exoskeleton. Not all exoskeletons used in medical applications provide lifetime use, however. For example, an exoskeleton can help a stroke victim walk normally again. As the person becomes more able, the exoskeleton provides less support until the wearer no longer needs it. Some users of the device have even coupled their exoskeleton to other products, such as Amazon's Alexa. The overall purpose of wearing an exoskeleton isn't to make you into Iron Man. Rather, it's to cut down on repetitive stress injuries and help humans excel at tasks that currently prove too tiring or just beyond the limits of their bodies. From a medical perspective, using an exoskeleton is a win because it keeps people mobile longer, and mobility is essential to good health.
View ArticleCheat Sheet / Updated 01-19-2023
Artificial intelligence (AI) is a technology that has grabbed a lot of attention in movies, books, products, and in a slew of other places. Often, vendors equate AI with smartness: You buy a smart device to obtain a device with an AI, even though smart devices sometimes are smart only in that they offer connectivity, not AI. Many products are hyped to contain AI that sometimes doesn’t even work. Some people, of course, want to grab headlines by telling mistruths or offering misconceptions about AI. This Cheat Sheet offers you some interesting insights into why the mundane is actually where you see AI most often. Yes, AI is being put to some amazing uses as well, but vendors often misrepresent these applications to the point that no one really knows how much is real and how much is the result of someone’s vivid imagination.
View Cheat SheetArticle / Updated 08-16-2022
This article is too short. It can't even begin to describe the ways in which deep learning will affect you in the future. Consider this article to be a tantalizing tidbit — an appetizer that can whet your appetite for exploring the world of deep learning further. These deep learning applications are already common in some cases. You probably used at least one of them today, and quite likely more than just one. Although the technology has begun to see widespread use, it's really just the beginning; AI is actually quite immature at this point. This article doesn't discuss killer robots, dystopian futures, AI run amok, or any of the sensational scenarios that you might see in the movies. The information you find here is about real-life, existing AI applications that you can interact with today.

Deep learning can be used to restore color to black-and-white videos and pictures

You probably have some black-and-white videos or pictures of family members or special events that you'd love to see in color. Color consists of three elements: hue (the actual color), value (the darkness or lightness of the color), and saturation (the intensity of the color). Oddly enough, some artists are color-blind and make strong use of color value in their creations. So having hue missing (the element that black-and-white art lacks) isn't the end of the world. Quite the contrary, some artists view it as an advantage.

When viewing something in black and white, you see value and saturation but not hue. Colorization is the process of adding the hue back in. Artists generally perform this process using a painstaking selection of individual colors. However, AI has automated this process using Convolutional Neural Networks (CNNs). The easiest way to use a CNN for colorization is to find a library to help you. The Algorithmia site offers such a library and shows some example code. You can also try the application by pasting a URL into the supplied field. This Petapixel.com article describes just how well this application works. It's absolutely amazing!

Deep learning can approximate person poses in real time

Person poses don't tell you who is in a video stream, but rather what elements of a person are in the video stream. For example, using a person pose could tell you whether the person's elbow appears in the video and where it appears. This article tells you more about how this whole visualization technique works. In fact, you can see how the system works through a short animation of one person in the first case and three people in the second case.

Person poses can have all sorts of useful purposes. For example, you could use a person pose to help people improve their form for various kinds of sports — everything from golf to bowling. A person pose could also make new sorts of video games possible. Imagine being able to track a person's position for a game without the usual assortment of cumbersome gear. Theoretically, you could use person poses to perform crime-scene analysis or to determine the possibility of a person committing a crime.

Another interesting application of pose detection is for medical and rehabilitation purposes. Software powered by deep learning could tell you whether you're doing your exercises correctly and track your improvements. An application of this sort could support the work of a professional rehabilitator by taking care of you when you aren't in a medical facility (an activity called telerehabilitation). Fortunately, you can at least start working with person poses today using the tfjs-models (PoseNet) library.
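PoseNet runs in JavaScript, so if you would rather experiment from Python, a pretrained pose model such as the one in Google's MediaPipe package offers a similar starting point. The sketch below is that substitute approach, not the article's own example, and the image file name is a placeholder.

```python
# Rough sketch of single-image pose estimation with MediaPipe Pose
# (a Python alternative to the JavaScript PoseNet library).

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

image = cv2.imread("exercise.jpg")  # placeholder: any photo of a person
if image is None:
    raise SystemExit("Put a photo named exercise.jpg next to this script.")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input

with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(rgb)

if results.pose_landmarks:
    # Where does the left elbow appear, as a fraction of image width/height?
    elbow = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_ELBOW]
    print(f"Left elbow at x={elbow.x:.2f}, y={elbow.y:.2f} "
          f"(visibility {elbow.visibility:.2f})")
else:
    print("No person detected in the image.")
```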
Deep learning can perform real-time behavior analysis

Behavior analysis goes a step beyond what person pose analysis does. When you perform behavior analysis, the question still isn't who, but how. This particular AI application affects how vendors design products and websites. Articles such as this one from Amplitude go to great lengths to fully define and characterize the use of behavior analysis. In most cases, behavior analysis helps you see how the process the product designer expected you to follow doesn't match the process you actually use.

Behavior analysis has a role to play in other areas of life as well. For example, behavior analysis can help people in the medical profession identify potential issues with people who have specific medical conditions, such as autism, and help the patient overcome those issues. Behavior analysis may also help teachers of physical arts show students how to hone their skills. You might also see it used in the legal profession to help ascertain motive. (The guilt is obvious, but why a person does something is essential to fair remediation of an unwanted behavior.) Fortunately, you can already start performing behavior analysis with Python.

Deep learning can be used to translate languages

The Internet has created an environment that can keep you from knowing whom you're really talking to, where that person is, or sometimes even when the person is talking to you. One thing hasn't changed, however: the need to translate one language to another when the two parties don't speak a common language. In a few cases, mistranslation can be humorous, assuming that both parties have a sense of humor. However, mistranslation has also led to all sorts of serious consequences, including war. Consequently, even though translation software is extremely accessible on the Internet, careful selection of which product to use is important. One of the most popular of these applications is Google Translate, but many other applications are available, such as DeepL. According to Forbes, machine translation is one area in which AI excels.

Translation applications generally rely on Bidirectional Recurrent Neural Networks (BRNNs). You don't have to create your own BRNN because you have many existing APIs to choose from. For example, you can get Python access to the Google Translate API through a client library. The point is that translation is possibly one of the more popular deep learning applications and one that many people use without even thinking about it.

Deep learning can be used to estimate solar savings potential

Trying to determine whether solar energy will actually work in your location is difficult unless a lot of other people are also using it. In addition, it's even harder to know what level of savings you might enjoy. Of course, you don't want to install solar energy if it won't satisfy your goals for using it, which may not actually include long-term cost savings (although generally it does). Some deep reinforcement learning projects now help you take the guesswork out of solar energy, including Project Sunroof. Fortunately, you can also get support for this kind of prediction in your Python application.

AI can beat people at computer games

The AI-versus-people competition continues to attract interest. From winning at chess to winning at Go, AI seems to have become unbeatable — at least, unbeatable at one game. Unlike humans, AI specializes, and an AI that can win at Go is unlikely to do well at chess. Even so, 2017 is often hailed as the beginning of the end of human dominance in games. Of course, the competition has been going on for some time, and you can likely find competitions that the AI won far earlier than 2017. Indeed, some sources place the date for a Go win as early as October 2015. The article at Interesting Engineering describes 11 other times that the AI won.

The trick is to custom-create an AI that can win a particular game while accepting that, in specializing at that game, the AI may not do well at other games. The process of building an AI for just one game can look difficult. This article describes how to build a simple chess AI, which won't defeat a chess master but could do well against an intermediate player.
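As a taste of what such a simple chess AI involves, here is a hedged sketch built on the python-chess library (the library and the bare-bones material-plus-minimax approach are my assumptions; the article doesn't prescribe either). It won't trouble a strong player, but it shows the basic recipe: score positions, look a couple of moves ahead, and pick the move with the best outcome.

```python
# Minimal chess "AI": count material and search two plies ahead with minimax.

import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_score(board):
    """Positive values favor White, negative values favor Black."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board, depth, maximizing):
    if depth == 0 or board.is_game_over():
        return material_score(board)
    best = -float("inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        value = minimax(board, depth - 1, not maximizing)
        board.pop()
        best = max(best, value) if maximizing else min(best, value)
    return best

def choose_move(board, depth=2):
    maximizing = board.turn == chess.WHITE
    best_move = None
    best_value = -float("inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        value = minimax(board, depth - 1, not maximizing)
        board.pop()
        if (value > best_value) if maximizing else (value < best_value):
            best_move, best_value = move, value
    return best_move

board = chess.Board()
print("Suggested opening move:", choose_move(board))
```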
However, it's actually a bit soon to say that people are out of the game. In the future, people may compete against the AI with more than one game. Examples of this sort of competition already abound, such as people who perform in a triathlon of games, which consists of three sporting events rather than one. The competition would then become one of flexibility: the AI couldn't simply hunker down and learn only one game, so the human would have a flexibility edge. This sort of AI use demonstrates that humans and AI may have to cooperate in the future, with the AI specializing in specific tasks and the human providing the flexibility needed to perform all required tasks.

Deep learning can be used to generate voices

Your car may already speak to you; many cars speak regularly to people now. Oddly, the voice generation is often so good that it's hard to tell the generated voice from a real one. Some articles talk about how the experience of encountering computer voices that sound quite real is becoming more common. The issue attracts enough attention now that many call centers tell you that you're speaking to a computer rather than a person.

Although call output relies on scripted responses, making it possible to generate responses with an extremely high level of confidence, voice recognition is a little harder to perform (but it has greatly improved). To work with voice recognition successfully, you often need to limit your input to specific key terms. By using keywords that the voice recognition is designed to understand, you avoid the need for a user to repeat a request. This need for specific terms is what gives away that you're talking to a computer — simply ask for something unexpected, and the computer won't know what to do with it.

The easy way to implement your own voice system is to rely on an existing API, such as Cloud Speech-to-Text. Of course, you might need something that you can customize, and in that case an API will still prove helpful. This article tells how to build your own voice-based application using Python.

Deep learning can be used to predict demographics

Demographics, those vital or social statistics that group people by certain characteristics, have always been part art and part science. You can find any number of articles about getting your computer to generate demographics for clients (or potential clients). The use of demographics is wide ranging, but you see them used for things like predicting which product a particular group will buy (versus that of the competition). Demographics are an important means of categorizing people and then predicting some action on their part based on their group associations.
Here are the methods that you often see cited for AIs when gathering demographics:

Historical: Based on previous actions, an AI generalizes which actions you might perform in the future.

Current activity: Based on the action you perform now and perhaps other characteristics, such as gender, a computer predicts your next action.

Characteristics: Based on the properties that define you, such as gender, age, and the area where you live, a computer predicts the choices you are likely to make. (The sketch that follows this discussion shows a toy version of this approach.)

You can find articles about AI's predictive capabilities that seem almost too good to be true. For example, this Medium article says that AI can now predict your demographics based solely on your name. The company in that article, Demografy, claims to provide gender, age, and cultural affinity based solely on name. Even though the site claims that it's 100 percent accurate, this statistic is highly unlikely because some names are gender-ambiguous, such as Renee, and others are assigned to one gender in some countries and another gender in others. Yes, demographic prediction can work, but exercise care before believing everything that these sites tell you.

If you want to experiment with demographic prediction, you can find a number of APIs online. For example, the DeepAI API promises to help you predict age, gender, and cultural background based on a person's appearance in a video. The online APIs each specialize, so you need to choose the API with an eye toward the kind of input data you can provide.
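Here is a toy version of the characteristics approach from the list above, using scikit-learn's decision tree. Every attribute, customer, and product in it is invented; a real demographic model would rely on far richer data and far more careful validation.

```python
# Toy "characteristics" demographics model: predict which product a customer
# is likely to buy from a few made-up attributes.

from sklearn.tree import DecisionTreeClassifier

# Columns: age, lives_in_city (1/0), owns_car (1/0)
customers = [
    [22, 1, 0],
    [35, 1, 1],
    [58, 0, 1],
    [41, 0, 1],
    [19, 1, 0],
    [63, 0, 0],
]
bought = ["scooter", "sedan", "suv", "suv", "scooter", "sedan"]

model = DecisionTreeClassifier().fit(customers, bought)

new_customer = [[29, 1, 0]]
print("Predicted purchase:", model.predict(new_customer)[0])
```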
AI can create art from real-world pictures

Deep learning can use the content of a real-world picture and the style of an existing master to create a combination of the two. In fact, some pieces of art generated using this approach are commanding high prices on the auction block. You can find all sorts of articles on this particular kind of art generation, such as this Wired article.

However, even though pictures are nice for hanging on the wall, you might want to produce other kinds of art. For example, you can create a 3-D version of your picture using products like Smoothie 3-D. It's not the same as creating a sculpture; rather, a 3-D printer builds a physical rendition of the picture. Check out an experiment that you can perform to see how the process works.

The output of an AI doesn't need to be something visual, either. For example, deep learning enables you to create music based on the content of a picture. This form of art makes the method used by the AI clearer. The AI transforms content that it doesn't understand from one form to another. As humans, we see and understand the transformation, but all the computer sees are numbers to process using clever algorithms created by other humans.

Deep learning can be used to forecast natural catastrophes

People have been trying to predict natural disasters for as long as there have been people and natural disasters. No one wants to be part of an earthquake, tornado, volcanic eruption, or any other natural disaster. Being able to get away quickly is the prime consideration in this case, given that humans can't yet control their environment well enough to prevent any natural disaster. Deep learning provides the means to look for extremely subtle patterns that boggle the minds of humans. These patterns can help predict a natural catastrophe. The fact that software can predict any disaster at all is simply amazing. However, this article warns that relying on such software exclusively would be a mistake. Overreliance on technology is a constant theme, so don't be surprised that deep learning is less than perfect at predicting natural catastrophes as well.
View ArticleArticle / Updated 07-20-2022
A medical professional isn't always able to tell what is happening with a patient's health simply by listening to their heart, checking vitals, or performing a blood test. The body doesn't always send out useful signals that let a medical professional learn anything at all. In addition, some body functions, such as blood sugar, change over time, so constant monitoring becomes necessary. Going to the doctor's office every time you need one of these vitals checked would prove time-consuming and possibly not all that useful. Older methods of determining some body characteristics required manual, external intervention on the part of the patient — an error-prone process in the best of times. For these reasons, and many more, an AI can help monitor a patient's statistics in a manner that is efficient, less error prone, and more consistent, as described in the following sections.

Wearing helpful monitors

All sorts of monitors fall into the helpful category. In fact, many of these monitors have nothing to do with the medical profession, yet produce positive results for your health. Consider the Moov monitor, which tracks both heart rate and 3-D movement. The AI for this device analyzes these statistics and provides advice on how to create a better workout. You get advice on, for example, how your feet are hitting the pavement during running and whether you need to lengthen your stride. The point of devices like these is to ensure that you get the sort of workout that will improve your health without risking injury.

Mind you, if a watch-type monitoring device is too large, Motiv produces a ring that monitors about the same number of things that Moov does, but in a smaller package. This ring even tracks how you sleep to help you get a good night's rest. Rings do tend to come with an assortment of pros and cons, and this article tells you more about those issues. Interestingly enough, many of the pictures on the site don't look anything like a fitness monitor, so you can have fashion and health all in one package. Of course, if your only goal is to monitor your heart rate, you can get devices such as the Apple Watch that also provide some level of analysis using an AI. All these devices interact with your smartphone, so you can possibly link the data to still other applications or send it to your doctor as needed.

Relying on critical wearable monitors

A problem with some human conditions is that they change constantly, so checking intermittently doesn't really get the job done. Glucose, which diabetics must measure, is one statistic that falls into this category. The more you monitor the rise and fall of glucose each day, the easier it becomes to change medications and lifestyle to keep diabetes under control. Devices such as the K'Watch provide such constant monitoring, along with an app that a person can use to obtain helpful information on managing their diabetes. Of course, people have used intermittent monitoring for years; this device simply provides the extra level of monitoring that can make the difference between diabetes being a life-altering issue or a minor nuisance.

The act of constantly monitoring someone's blood sugar or another chronic disease statistic might seem like overkill, but it has practical uses as well. Products such as Sentrian let people use the remote data to predict that a patient will become ill before the event actually occurs.
By making changes in patient medications and behavior before an event can occur, Sentrian reduces the number of avoidable hospitalizations — making the patient's life a lot better and reducing medical costs. Some devices are truly critical, such as the Wearable Defibrillator Vest (WDV), which senses your heart condition continuously and provides a shock should your heart stop working properly. This short-term solution can help a doctor decide whether you need the implanted version of the same device. There are pros and cons to wearing one, but then again, it's hard to place a value on having a shock available when needed to save a life. The biggest value of this device is the monitoring it provides. Some people don't actually need an implantable device, so monitoring is essential to prevent unnecessary surgery.

Using movable monitors

The number and variety of AI-enabled health monitors on the market today is staggering. For example, you can actually buy an AI-enabled toothbrush that will monitor your brushing habits and provide you with advice on better brushing technique. When you think about it, creating a device like this presents a number of hurdles, not the least of which is keeping the monitoring circuitry happy inside the human mouth. Of course, some people may feel that the act of brushing their teeth doesn't have much to do with good health, but it does.

Creating movable monitors generally means making them both smaller and less intrusive. Simplicity is also a requirement for devices designed for use by people with little or no medical knowledge. One device in this category is a wearable electrocardiogram (ECG). Having an ECG in a doctor's office means connecting wires from the patient to a semiportable device that performs the required monitoring. The QardioCore provides the ECG without using wires, and someone with limited medical knowledge can easily use it. As with many devices, this one relies on your smartphone to provide needed analysis and make connections to outside sources as needed.

Current medical devices work just fine, but they aren't portable. The point of creating AI-enabled apps and specialized devices is to obtain much-needed data when a doctor actually needs it, rather than having to wait for that data. Even if you don't buy a toothbrush to monitor your technique or an ECG to monitor your heart, the fact that these devices are small, capable, and easy to use means that you may still benefit from them at some point.
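A greatly simplified sketch shows the kind of continuous-monitoring logic described above: compare each new reading with the patient's own recent baseline and flag anything that drifts well outside it. Real products such as Sentrian use far more sophisticated models; the numbers and the two-standard-deviation rule here are invented for illustration.

```python
# Flag glucose readings that drift well outside a patient's recent normal
# range, so someone can intervene before a crisis develops.

from statistics import mean, stdev

readings = [92, 95, 101, 97, 88, 94, 99, 96, 131, 158, 180]  # mg/dL, every 15 minutes
WINDOW = 6  # how many recent readings define "normal for this patient"

for i in range(WINDOW, len(readings)):
    recent = readings[i - WINDOW:i]
    baseline, spread = mean(recent), stdev(recent)
    if abs(readings[i] - baseline) > 2 * spread:
        print(f"Reading {i}: {readings[i]} mg/dL is unusual "
              f"(recent average {baseline:.0f}, spread {spread:.0f})")
```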
View ArticleCheat Sheet / Updated 04-12-2022
Deep learning affects every area of your life — everything from smartphone use to diagnostics received from your doctor. Python is an incredible programming language that you can use to perform deep learning tasks with a minimum of effort. By combining the huge number of available libraries with Python-friendly frameworks, you can avoid writing the low-level code normally needed to create deep learning applications. All you need to focus on is getting the job done. This cheat sheet presents the most commonly needed reminders for making your programming experience fast and easy.
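As one hedged illustration of how much low-level work those frameworks hide, defining and compiling a small neural network with Keras (which ships with TensorFlow) takes only a few lines. The following generic sketch is not taken from the cheat sheet itself.

```python
# A tiny binary classifier defined at a high level; Keras supplies the
# weights, gradient computations, and training loop behind the scenes.

from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),                      # ten numeric features per example
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```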
View Cheat Sheet