Technology Articles
Technology. It makes the world go 'round. And whether you're a self-confessed techie or a total newbie, you'll find something to love among our hundreds of technology articles and books.
Article / Updated 05-19-2023
ChatGPT is a huge phenomenon and a major paradigm shift in the accelerating march of technological progression. So, what is ChatGPT? It's a large language model (LLM) that belongs to a category of AI (artificial intelligence) called generative AI (GPT stands for generative pre-trained transformer), which can generate new content rather than simply analyze existing data. Additionally, anyone can interact with ChatGPT in their own words. A natural, humanlike dialog ensues.

ChatGPT is often accessed directly online by users, but it is also being integrated with several existing applications, such as Microsoft Office apps (Word, Excel, and PowerPoint) and the Bing search engine. The number of app integrations seems to grow every day as existing software providers hurry to capitalize on ChatGPT's popularity.

What is ChatGPT used for?

The ways to use ChatGPT are as varied as its users. Most people lean toward basic requests, such as creating a poem, an essay, or short marketing content. Students often turn to it to do their homework. Heads up, kids: ChatGPT stinks at answering riddles and sometimes at word problems in math. Other times, it just makes things up.

In general, people tend to use ChatGPT to guide or explain something, as if the bot were a fancier version of a search engine. Nothing is wrong with that use, but ChatGPT can do so much more. How much more depends on how well you write the prompt. If you write a basic prompt, you'll get a bare-bones answer that you could have found using a search engine such as Google or Bing. That's the most common reason people abandon ChatGPT after a few uses: They erroneously believe it has nothing new to offer. But this particular failing is the user's fault, not ChatGPT's.

What can ChatGPT do?

This list covers just some of the more unique uses of this technology. Users have asked ChatGPT to:

- Conduct an interview with a long-dead legendary figure regarding their views of contemporary topics.
- Recommend colors and color combinations for logos, fashion designs, and interior decorating designs.
- Generate original works such as articles, e-books, and ad copy.
- Predict the outcome of a business scenario.
- Develop an investment strategy based on stock market history and current economic conditions.
- Make a diagnosis based on a patient's real-world test results.
- Write computer code to make a new computer game from scratch.
- Leverage sales leads.
- Inspire ideas for a variety of things, from A/B testing to podcasts, webinars, and full-feature films.
- Check computer code for errors.
- Summarize legalese in software agreements, contracts, and other forms in simple layman's language.
- Calculate the terms of an agreement into total costs.
- Teach a skill or provide instructions for a complex task.
- Find an error in their logic before implementing their decision in the real world.

Much ado has been made of ChatGPT's creativity. But that creativity is a reflection and result of the human doing the prompting. If you can think it, you can probably get ChatGPT to play along. Unfortunately, that's true for bad guys, too. For example, they can prompt ChatGPT to find vulnerabilities in computer code or a computer system; steal your identity by writing a document in your style, tone, and word choices; or edit an audio or video clip to fool your biometric security measures or make it say something you didn't actually say. Only their imagination limits the possibilities for harm and chaos.
Unwrapping ChatGPT fears

Perhaps no other technology is as intriguing and disturbing as generative artificial intelligence. Emotions were raised to a fever pitch when the free research-preview version of ChatGPT reached 100 million monthly active users within two months of its launch. You can thank science fiction writers and your own imagination for both the tantalizing and terrifying triggers that ChatGPT is now activating in your head, making you wonder: Is ChatGPT safe?

There are definitely legitimate reasons for caution and concern. Lawsuits have been launched against generative AI programs for copyright and other intellectual property infringements. OpenAI and other AI companies and partners stand accused of illegally using copyrighted photos, text, and other intellectual property without permission or payment to train their AI models. These charges generally spring from copyrighted content getting caught up in the scraping of the internet to create massive training datasets. In general, legal defense teams are arguing the inevitability and unsustainability of such charges in the age of AI and requesting that the charges be dropped.

The lawsuits regarding who owns the content generated by ChatGPT and its ilk lurk somewhere in the future. However, the U.S. Copyright Office has already ruled that AI-generated content, be it writing, images, or music, is not protected by copyright law. In the U.S., at least for now, the government will not protect anything generated by AI in terms of rights, licensing, or payment.

Meanwhile, realistic concerns exist over other types of potential liabilities. ChatGPT and ChatGPT alternatives are known to sometimes deliver incorrect information to users and other machines. Who is liable when things go wrong, particularly in a life-threatening scenario? Even if a business's bottom line is at stake and not someone's life, risks can run high and the outcome can be disastrous. Inevitably, someone will suffer, and some person or organization will likely be held accountable for it.

Then there are the magnifications of earlier concerns, such as data privacy, biases, unfair treatment of individuals and groups through AI actions, identity theft, deepfakes, security issues, and reality apathy, which is when the public can no longer tell what is true and what isn't and thinks the effort to sort it all out is too difficult to pursue.

In short, all of this probably has you wondering: Is ChatGPT safe? The potential to misuse it accelerates and intensifies the need for the rules and standards currently being studied, pursued, and developed by organizations and governments seeking to establish guardrails aimed at ensuring responsible AI. The big question is whether they'll succeed in time, given ChatGPT's incredibly fast adoption rate worldwide. Examples of groups working on guidelines, ethics, standards, and responsible AI frameworks include the following:

- ACM US Technology Committee's Subcommittee on AI & Algorithms
- World Economic Forum
- UK's Centre for Data Ethics
- Government efforts such as the US AI Bill of Rights and the European Union's Artificial Intelligence Act
- IEEE and its 7000 series of standards
- Universities such as New York University's Stern School of Business
- The private sector, wherein companies make their own responsible AI policies and foundations

How does ChatGPT work?

ChatGPT works differently than a search engine.
A search engine such as Google or Bing or an AI assistant such as Siri, Alexa, or Google Assistant works by searching the internet for matches to the keywords you enter in the search bar. Algorithms refine the results based on any number of factors, but your browser history, topic interests, purchase data, and location data usually figure into the equation. You're then presented with a list of search results ranked in order of relevance as determined by the search engine's algorithm. From there, you are free to consider the sources of each option and click a selection to take a deeper dive for more details from that source.

By comparison, ChatGPT generates its own unified answer to your prompt. It doesn't offer citations or note its sources. You ask; it answers. Easy-peasy, right? No. That task is incredibly hard for AI to do, which is why generative AI is so impressive.

Generating an original result in response to a prompt is achieved by using either the GPT-3 (Generative Pre-trained Transformer 3) or GPT-4 model to analyze the prompt in context and predict the words that are likely to follow. Both GPT models are extremely powerful large language models capable of processing billions of words per second. In short, transformers enable ChatGPT to generate coherent, humanlike text as a response to a prompt. ChatGPT creates a response by considering context and assigning weights (values) to words that are likely to follow the words in the prompt, to predict which words would be an appropriate response.

Some ChatGPT basics here: User input is called a prompt rather than a command or a query, although it can take either form. You are, in effect, prompting AI to predict and complete a pattern that you initiated by entering the prompt. If you'd like a comprehensive ChatGPT guide, including more detail on how it works and how to use it, check out my book ChatGPT For Dummies.

Peeking at the ChatGPT architecture

As its name implies, ChatGPT is a chatbot running on a GPT model. GPT-3, GPT-3.5, and GPT-4 are large language models (LLMs) developed by OpenAI. When GPT-3 was introduced, it was the largest LLM at 175 billion parameters. An upgraded version called GPT-3.5 turbo is a highly optimized and more stable version of GPT-3 that's ten times cheaper for developers to use. ChatGPT is now also available on GPT-4, which is a multimodal model, meaning it accepts both image and text inputs, although its outputs are text only. It's the largest LLM to date, although GPT-4's exact number of parameters has yet to be disclosed.

Parameters are numerical values that weigh and define connections between nodes and layers in the neural network architecture. The more parameters a model has, the more complex its internal representations and weighting. In general, more parameters lead to better performance on specific tasks.
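The weighting-and-prediction idea can be illustrated with a toy sketch. To be clear, this is not how GPT models are actually implemented (a real transformer computes these scores with billions of learned parameters, one token at a time); the candidate words and scores below are invented purely to show how raw scores become a probability distribution over the next word:

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate next words
# for the prompt "The cat sat on the". In a real LLM, these scores come from
# a transformer network with billions of learned parameters.
candidate_scores = {"mat": 4.2, "sofa": 3.1, "roof": 2.5, "piano": 0.3}

# Softmax: convert raw scores into probabilities that sum to 1.
max_score = max(candidate_scores.values())
exp_scores = {w: math.exp(s - max_score) for w, s in candidate_scores.items()}
total = sum(exp_scores.values())
probs = {w: e / total for w, e in exp_scores.items()}

# Sample the next word in proportion to its probability. Generation repeats
# this step word by word, feeding each choice back into the context.
next_word = random.choices(list(probs), weights=probs.values(), k=1)[0]
print(probs)      # e.g., {'mat': 0.65, 'sofa': 0.22, 'roof': 0.12, 'piano': 0.01}
print(next_word)  # usually "mat", occasionally one of the others
```

Higher-scoring words win most of the time, but not always, which is one reason the same prompt can produce different responses on different runs.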
ChatGPT for beginners

Here, you'll learn the basics of how to use ChatGPT and why it relies on your skills to optimize its performance. But the real treasures here are the tips and insights on how to write prompts so that ChatGPT can perform its true magic. You can learn even more about writing prompts in my book ChatGPT For Dummies.

Writing effective ChatGPT prompts

ChatGPT appears deceptively simplistic. The user interface is elegantly minimalistic and intuitive, as shown in the figure below. The first part of the page offers information to users regarding ChatGPT's capabilities and limitations, plus a few examples of prompts.

The prompt bar, which resembles a search bar, runs across the bottom of the page. Just enter a question or a command to prompt ChatGPT to produce results immediately. If you enter a basic prompt, you'll get a bare-bones, encyclopedia-like answer, as shown in the figure below. Do that enough times and you'll convince yourself that this is just a toy and that you can get better results from an internet search engine. This is a typical novice's mistake and a primary reason why beginners give up before they fully grasp what ChatGPT is and can do.

Understand that your previous experience with keywords and search engines does not apply here. You must think of and use ChatGPT in a different way. Think hard about how you're going to word your prompt. You have many options to consider. You can assign ChatGPT a role or a persona, or several personas and roles if you decide it should respond as a team, as illustrated in the figure below. You can assign yourself a new role or persona as well. Or tell it to address any type of audience, such as a high school graduating class, a surgical team, or attendees at a concert or a technology conference. You can set the stage or situation in great or minimal detail. You can ask a question, give it a command, or require specific behaviors.

A prompt, as you can see now, is much more than a question or a command. Your success with ChatGPT hinges on your ability to master crafting a prompt in such a way as to trigger the precise response you seek. Ask yourself these questions as you are writing or evaluating your prompt:

- Who do you want ChatGPT to be?
- Where, when, and what is the situation or circumstance you want ChatGPT's response framed within?
- Is the question you're entering in the prompt the real question you want it to answer, or were you trying to ask something else?
- Is the command you're prompting complete enough for ChatGPT to draw from sufficient context to give you a fuller, more complete, and richly nuanced response?

And the ultimate question for you to consider: Is your prompt specific and detailed, or vague and meandering? Whichever is the case, that's what ChatGPT will mirror in its response. ChatGPT's responses are only as good as your prompt. That's because the prompt starts a pattern that ChatGPT must then complete. Be intentional and concise about how you present that pattern starter: the prompt.

Starting a chat

To start a chat, just type a question or command in the prompt bar, shown at the bottom of the figure below. ChatGPT responds instantly. You can continue the chat by using the prompt bar again. Usually, you do this to gain further insights or to get ChatGPT to refine its response further.

Following are some things you can do in a prompt that may not be readily evident:

- Add data in the prompt along with your question or command regarding what to do with that data. Adding data directly in the prompt enables you to supply more current info as well as make ChatGPT's responses more customized and on point. You can use the Browsing plug-in to connect ChatGPT to the live internet, which will give it access to current information, but you may want to add data to the prompt anyway to better focus its attention on the problem or task at hand. However, there are limits on prompt and response sizes, so make your prompt as concise as possible.
- Direct the style, tone, vocabulary level, and other factors to shape ChatGPT's response.
- Command ChatGPT to assume a specific persona, job role, or authority level in its response, as sketched in the example below.
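If you reach ChatGPT's models through OpenAI's API rather than the web page, the persona technique works the same way. Here's a minimal sketch using the openai Python package's ChatCompletion interface (the v0.x library; newer versions of the library use a different interface). The persona, audience, and prompt text are invented for illustration, and the sketch assumes you have your own API key:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI API key

# The "system" message assigns ChatGPT a persona and an audience;
# the "user" message carries the actual request.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a veteran travel writer addressing first-time "
                    "visitors to Japan. Keep answers under 150 words."},
        {"role": "user",
         "content": "Suggest a three-day itinerary for Kyoto."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Changing only the system message (say, to a budget backpacker or a luxury concierge) produces noticeably different answers to the identical user question, which is the whole point of assigning a persona.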
If you're using ChatGPT-4, you'll soon be able to use images in the prompt, too. ChatGPT can extract information from the image to use in its analysis.

When you've finished chatting on a particular topic or task, it's wise to start a new chat (by clicking or tapping the New Chat button in the upper left). Starting a new dialogue prevents confusing ChatGPT, which would otherwise treat subsequent prompts as part of a single conversational thread. On the other hand, starting too many new chats on the same topic or related topics can lead the AI to use repetitious phrasing and outputs, whether or not they apply to the new chat's prompt.

To recap: Don't confuse ChatGPT by chatting in one long, continuous thread with a lot of topic changes or by opening too many new chats on the same topic. Otherwise, ChatGPT may say something offensive or make up random and wrong answers.

When writing prompts, think of the topic or task in narrow terms. For example, don't have one long chat on car racing, repairs, and maintenance. To keep ChatGPT more intently focused, narrow your prompt to a single topic, such as determining when your vehicle will be at top trade-in value so you can best offset a new car's price. Your responses will be of much higher quality.

ChatGPT may call you offensive names and make up stuff if the chat goes on too long. Shorter conversations tend to minimize these odd occurrences, or so most industry watchers think. For example, after ChatGPT responses to Bing users became unhinged and argumentative, Microsoft limited conversations with it to 5 prompts in a row, for a total of 50 conversations a day per user. A few days later, it increased the limit to 6 prompts per conversation and a total of 60 conversations per day per user. The limits will probably increase when AI researchers can figure out how to tame the machine to an acceptable, or at least less offensive, level.
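The new-chat advice maps directly onto how the underlying API works: a "conversation" is just the list of messages resent with each request. Here's a minimal, hypothetical sketch (same assumed openai v0.x interface as above) showing that continuing a thread means resending its history, and that starting a new chat is simply starting a new, empty list:

```python
import openai  # assumes openai.api_key is already set

def ask(messages, prompt):
    """Append a user prompt, get a reply, and keep both in the history."""
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

# One focused chat: every follow-up resends the whole history,
# so the model treats the prompts as a single conversational thread.
car_chat = []
ask(car_chat, "When does a 2019 pickup truck hit peak trade-in value?")
ask(car_chat, "How should I time a sale around that?")  # follow-up in context

# New topic? Start a new chat (a fresh, empty history) instead of
# piling unrelated questions onto the same thread.
recipe_chat = []
ask(recipe_chat, "Give me a 30-minute weeknight pasta recipe.")
```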
Article / Updated 05-18-2023
You can hardly avoid hearing about artificial intelligence (AI) today. You see AI in the movies, in the news, in books, and online. It's been in the news a lot lately with all of the frenzy surrounding ChatGPT (see more about that below). AI is part of robots, self-driving (SD) cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in so many ways. Some people have come to trust AIs so much that they fall asleep while their self-driving cars take them to their destination (illegally, of course).

Many pundits are burying you in information (and disinformation) about AI, too. Some see AI as cute and fuzzy; others see it as a potential mass murderer of the human race. The problem with being so loaded down with information in so many ways is that you struggle to separate what's real from what is simply the product of an overactive imagination. Just how far can you trust your AI, anyway? Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspersons. This article helps you understand some of the history and evolution of artificial intelligence.

The ChatGPT controversy

The latest media storm around AI came in early 2023, shortly after OpenAI launched a free preview of its ChatGPT chatbot in late November 2022. OpenAI then released an upgrade based on GPT-4 in March 2023. A chatbot is a computer program designed to simulate human conversation. ChatGPT (GPT stands for generative pre-trained transformer) is a particularly powerful chatbot able to produce natural, human-like writing through its use of 570GB of data from the Internet. Representing one of the latest achievements in the development of artificial intelligence, ChatGPT can answer questions and write articles, poems, emails, and research papers; it can also write programming code, translate languages, and perform other tasks related to language. ChatGPT's possible real-world uses include:

- Customer service
- Ecommerce
- Research
- Education and training
- Computer code writing and debugging
- Scheduling and booking
- Entertainment
- Health care information and assistance

However, while many people are excited about the possibilities for ChatGPT and other similar technologies being developed, there are plenty of concerns about how they can be used in bad ways, too; for example, to cheat in school by having a chatbot write essays and research papers. It's difficult to discern whether a piece of writing has been generated by ChatGPT or by a human. In addition, the technology is far from perfect; the text it produces is often inaccurate or biased and, therefore, can spread false and even harmful information.

AI can, and does, serve us well in many ways, but it's important to understand its limitations. AI will never be able to engage in certain essential activities and tasks, and it won't be able to do others until far into the future. For example, while it can produce a piece of music in the style of a particular musician, say Beethoven, based on the data you've entered, it cannot actually create anything. AI doesn't have an imagination or original ideas.

The history of AI, starting with Dartmouth

Looking at artificial intelligence history begins with the earliest computers, which were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition.
Logical reasoning later added the capability to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another value). However, for artificial intelligence to evolve, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result.

During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus in Hanover, New Hampshire, to do something more. They predicted that machines that could reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now have we realized machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must master at least six more intelligences before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College effort and other endeavors of the time relates to hardware: the processing capability to perform calculations quickly enough to create a simulation. However, that's not really the whole problem. Yes, hardware does figure into the picture, but you can't simulate processes that you don't understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations.

The biggest problem with these early attempts (and still a considerable problem today) is that we don't understand how humans reason well enough to create a simulation of any sort, assuming that a direct simulation is even possible. Consider the issues surrounding the accomplishment of manned flight by the Wright brothers. They succeeded not by simulating birds, but rather by understanding the processes that birds use, thereby creating the field of aerodynamics. Consequently, when someone says that the next big AI innovation is right around the corner and yet no concrete description of the processes involved exists, the innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including:

- Rule based: These use "if ... then" statements to base decisions on rules of thumb (see the sketch at the end of this section).
- Frame based: These use databases organized into related hierarchies of generic information called frames.
- Logic based: These rely on set theory to establish relationships.

The advent of expert systems is important in the story of AI because they represent the first truly useful and successful implementations of AI. You still see expert systems in use today, although they aren't called that any longer. For example, the spelling and grammar checkers in your applications are kinds of expert systems. The grammar checker, especially, is strongly rule based. It pays to look around to see where else expert systems may still see practical use in everyday applications.

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages, such as LISP (List Processing) or Prolog. Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers. However, the products they used generally provided extremely limited functionality built around small knowledge bases.
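To make the rule-based approach concrete, here's a toy sketch in the spirit of those early systems. The rules and facts are invented for illustration; real expert systems held thousands of such rules and were written in languages like LISP or Prolog rather than Python:

```python
# A toy rule-based "expert system": each rule is an if...then pair
# mapping a condition over known facts to a conclusion.
rules = [
    (lambda f: f["temperature_f"] > 100.4, "Patient has a fever"),
    (lambda f: f["temperature_f"] > 100.4 and f["has_rash"],
     "Consider testing for measles"),
    (lambda f: not f["has_rash"] and f["coughing"],
     "Consider testing for flu"),
]

facts = {"temperature_f": 101.2, "has_rash": False, "coughing": True}

# The inference step: fire every rule whose condition holds for the facts.
for condition, conclusion in rules:
    if condition(facts):
        print(conclusion)
# Prints:
# Patient has a fever
# Consider testing for flu
```

All the "expertise" lives in the hand-written rules, which is exactly why such systems were hard to create and maintain: every new case meant another rule from a human expert.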
In the 1990s, the phrase expert system began to disappear. The idea that expert systems were a failure did appear, but the reality is that expert systems were simply so successful that they became ingrained in the applications they were designed to support. Using the example of a word processor, at one time you needed to buy a separate grammar-checking application, such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments. A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years, all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave. Machine learning is like educating a baby by showing it how to behave through example. This technology has pitfalls because the computer can learn how to do things incorrectly through careless teaching.

At this time, the most successful solution is deep learning, a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitalization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses.

People are saying that the AI winter is over because of deep learning, and that's true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone down the rhetoric.
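Here's a minimal sketch of the learning-from-examples idea described above, using the scikit-learn library. The tiny fruit dataset is invented for illustration; the point is that nobody writes an explicit decision rule, the classifier derives one from labeled examples:

```python
from sklearn.tree import DecisionTreeClassifier

# Training examples: [weight in grams, surface texture (0=smooth, 1=bumpy)]
# Labels: 0 = apple, 1 = orange. No human writes the decision rule;
# the model derives it from these examples.
features = [[140, 0], [130, 0], [150, 1], [170, 1]]
labels = [0, 0, 1, 1]

model = DecisionTreeClassifier()
model.fit(features, labels)

# The fitted model now behaves according to the patterns in the examples.
print(model.predict([[160, 1]]))  # likely [1]: bumpy and heavy, so orange
print(model.predict([[135, 0]]))  # likely [0]: smooth and light, so apple
```

Contrast this with the expert-system sketch earlier: there, a human wrote every rule; here, the rule is induced from data, along with the risk that careless or unrepresentative examples teach the wrong rule.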
A brief artificial intelligence timeline

- 1942: First electronic digital computer, built by John Vincent Atanasoff and Clifford Berry at Iowa State University
- 1950: Alan Turing publishes the paper "Computing Machinery and Intelligence"; his proposal later becomes "the Turing test," which measures machine AI
- 1958: Perceptron computer, built by Cornell University Professor Frank Rosenblatt, regarded as the first artificial neural network
- 1966: First "chatterbot" (later shortened to chatbot), created by Joseph Weizenbaum, a German-American computer scientist, uses natural language processing to converse with humans
- 1971: First commercial microprocessor, by Intel
- 1988: Jabberwacky, a chatbot created by British computer scientist Rollo Carpenter, provides interesting and entertaining conversation to humans
- 1990s: Early days of the Internet
- 1992: TD-Gammon, developed by Gerald Tesauro of IBM, an artificial neural network trained by temporal-difference learning to play high-level backgammon
- 1997: IBM's Deep Blue chess computer defeats Russian chess grandmaster Garry Kasparov; Dragon Systems releases speech recognition software for Windows
- 2012: AlexNet, a convolutional neural network architecture, primarily designed by Alex Krizhevsky, a Ukrainian-born Canadian computer scientist
- 2020: OpenAI beta tests GPT-3, which uses deep learning to create code, poetry, and other language and writing tasks; it's the first such model that can create content almost indistinguishable from human-created content
- 2022: In November, OpenAI releases a free preview of ChatGPT to the public; in March 2023, it releases the GPT-4 upgrade

AI in our everyday lives

You're using AI in some way today; in fact, you probably rely on AI in many different ways. You just don't notice it because it's so mundane. A smart thermostat for your home may not sound very exciting, but it's an incredibly practical use for a technology that has some people running for the hills in terror. As the development of AI has continued, there are now some really cool uses for it. For example, you may not know that there is a medical monitoring device that can actually predict when you might have a heart problem, but such a device exists. AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used today in all sorts of space applications, and the evolution of artificial intelligence figures prominently in all the space adventures humans will have tomorrow.

The potential uses for AI number in the millions, all safely out of sight even when they're quite dramatic in nature. Here are some of the ways in which you might see AI used:

- Fraud detection: You get a call from your credit card company asking whether you made a particular purchase. The credit card company isn't being nosy; it's simply alerting you to the fact that someone else could be making a purchase using your card. The AI embedded within the credit card company's code detected an unfamiliar spending pattern and alerted someone to it (see the sketch at the end of this list).
- Resource scheduling: Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine where to put a patient based on the patient's needs, the availability of skilled experts, and the amount of time the doctor expects the patient to be in the hospital.
- Complex analysis: Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms could indicate more than one problem.
  A doctor or other expert might need help making a diagnosis in a timely manner to save a patient's life.
- Automation: Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation today is that an unexpected event, such as an object in the wrong place, can actually cause the automation to stop. Adding AI to the automation can allow it to handle unexpected events and continue as if nothing happened.
- Customer service: The customer service line you call today may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. With good voice inflection (provided by AI as well), you may not even be able to tell that you're talking with a computer.
- Safety systems: Many of the safety systems found in machines of various sorts today rely on AI to take over the vehicle in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs that a vehicle can provide, such as the direction of a skid. Computerized ABS is actually relatively old, at 40 years, from a technology perspective.
- Machine efficiency: AI can help control a machine in such a manner as to obtain maximum efficiency. The AI controls the use of resources so that the system doesn't overshoot speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
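As promised in the fraud-detection item above, here's a minimal sketch of the kind of unfamiliar-pattern spotting such systems rely on, using scikit-learn's IsolationForest anomaly detector. The transaction amounts are invented, and a real fraud system uses far richer features (merchant, location, timing) and vastly more data:

```python
from sklearn.ensemble import IsolationForest

# Invented history of one cardholder's purchase amounts (in dollars).
normal_purchases = [[12.50], [43.10], [7.99], [55.00], [23.75],
                    [31.40], [18.20], [60.25], [9.95], [27.80]]

# Train on typical behavior; contamination is the assumed outlier share.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_purchases)

# Score new transactions: predict() returns 1 for "looks familiar"
# and -1 for an unfamiliar pattern that merits a call to the cardholder.
new_transactions = [[25.00], [4800.00]]
for amount, flag in zip(new_transactions, detector.predict(new_transactions)):
    status = "OK" if flag == 1 else "FLAGGED for review"
    print(f"${amount[0]:.2f}: {status}")
```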
Article / Updated 05-09-2023
The first concept that's important to understand is that artificial intelligence (AI) doesn't really have anything to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that's what it is: a simulation. When asking "What is artificial intelligence?", notice an interplay between goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal. AI relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:

Acting like a human: When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when differentiation between the computer and a human isn't possible. This category also reflects what the media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning (all four of which must be present to pass the test). The original Turing test didn't include any physical contact. The newer Total Turing Test does include physical contact, in the form of perceptual-ability interrogation, which means that the computer must also employ both computer vision and robotics to succeed. Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright brothers didn't succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to aerodynamics, which eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.

Thinking like a human: When a computer thinks as a human does, it performs tasks that require intelligence (as contrasted with rote procedures) from a human to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This model relies on three techniques:

- Introspection: Detecting and documenting the techniques used to achieve goals by monitoring one's own thought processes.
- Psychological testing: Observing a person's behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other things).
- Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Magnetoencephalography (MEG).

After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.

Thinking rationally: Studying how humans think using some standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation.
A computer that thinks rationally relies on the recorded behaviors to create a guide as to how to interact with an environment based on the data at hand. The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, solving a problem in principle is often different from solving it in practice, but you still need a starting point.

Acting rationally: Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data. As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline upon which a computer can begin negotiating the successful completion of a goal.

Hintze's AI classifications

The categories used to define AI offer a way to consider various uses for, or ways to apply, AI. Some of the systems used to classify AI by type are arbitrary and indistinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well). The problem with strong AI is that it doesn't perform any task well, while weak AI is too specific to perform tasks independently. Even so, just two type classifications won't do the job, even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI:

- Reactive machines: The machines you see beating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience upon which to base a decision. Instead, it relies on pure computational power and smart algorithms to recreate every decision every time. This is an example of a weak AI used for a specific purpose.
- Limited memory: A self-driving car or autonomous robot can't afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation, it can rely on experience to reduce reaction time and to provide more resources for making new decisions that haven't yet been made. This is an example of the current level of strong AI.
- Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly.
- Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren't even remotely possible now, because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.
Problems defining AI

Artificial intelligence has had several false starts and stops over the years, partly because people don't really understand what AI is all about, or even what it should accomplish. A major part of the problem is that movies, television shows, and books have all conspired to give false hopes about what AI could accomplish. In addition, the human tendency to anthropomorphize (give human characteristics to) technology makes it seem as if AI must do more than it can hope to accomplish. Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently.

Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it's just a collection of characters. Defining the idiom (a term whose meaning isn't clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways. The term artificial intelligence doesn't really tell you anything meaningful, which is why there are so many discussions and disagreements about it. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous.

Discerning intelligence

People define intelligence in many different ways. However, you can say that intelligence involves certain mental activities composed of the following:

- Learning: Having the ability to obtain and process new information
- Reasoning: Being able to manipulate information in various ways
- Understanding: Considering the result of information manipulation
- Grasping truths: Determining the validity of the manipulated information
- Seeing relationships: Divining how validated data interacts with other data
- Considering meanings: Applying truths to particular situations in a manner consistent with their relationship
- Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid

How does AI work?

The list above could easily get quite long, but even this list is relatively prone to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation:

1. Set a goal based on needs or wants.
2. Assess the value of any currently known information in support of the goal.
3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal, rather than information that you know will support the goal.
4. Manipulate the data such that it achieves a form consistent with existing information.
5. Define the relationships and truth values between existing and new information.
6. Determine whether the goal is achieved.
7. Modify the goal in light of the new data and its effect on the probability of success.
8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).
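Here's a minimal sketch of that goal-seeking loop. Everything in it (the goal, the confidence scoring, the fake information source) is invented purely to show the shape of the process, not how any real AI system is built:

```python
import random

def gather_information():
    """Step 3: fetch new data that *could* support the goal (faked here)."""
    return random.uniform(0.0, 0.3)

def goal_seek(target_confidence=0.9, max_attempts=10):
    confidence = 0.1          # Step 2: value of currently known information
    for attempt in range(max_attempts):
        evidence = gather_information()               # Step 3
        confidence = min(1.0, confidence + evidence)  # Steps 4-5: fold new
                                                      # data into what's known
        if confidence >= target_confidence:           # Step 6: goal achieved?
            return True, attempt + 1
        target_confidence *= 0.99                     # Step 7: revise the goal
    return False, max_attempts                        # Possibilities exhausted

achieved, tries = goal_seek()
print(f"Goal achieved: {achieved} after {tries} attempts")
```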
Even though you can create algorithms and provide access to data in support of this process within a computer, a computer's capability to achieve intelligence is severely limited. For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can't easily separate truth from mistruth. In fact, no computer can fully implement any of the mental activities described in the list that describes intelligence.

As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don't use just one type of intelligence, but rather rely on multiple intelligences to perform tasks. Howard Gardner of Harvard has defined a number of these types of intelligence, and knowing them helps you to relate them to the kinds of tasks that a computer can simulate as intelligence (see the list below for a modified version of these intelligences with additional description).

The Kinds of Human Intelligence and How AIs Simulate Them

Each entry below names a type of intelligence, its simulation potential for an AI, the human tools associated with it, and a description.

- Visual-spatial (simulation potential: moderate; human tools: models, graphics, charts, photographs, drawings, 3-D modeling, video, television, and multimedia): Physical-environment intelligence used by people like sailors and architects (among many others). To move at all, humans need to understand their physical environment, that is, its dimensions and characteristics. Every robot or portable computer intelligence requires this capability, but it is often difficult to simulate (as with self-driving cars) or less than accurate (as with vacuums that rely as much on bumping as they do on moving intelligently).

- Bodily-kinesthetic (simulation potential: moderate to high; human tools: specialized equipment and real objects): Body movements, such as those used by a surgeon or a dancer, require precision and body awareness. Robots commonly use this kind of intelligence to perform repetitive tasks, often with higher precision than humans, but sometimes with less grace. It's essential to differentiate between human augmentation, such as a surgical device that provides a surgeon with enhanced physical ability, and true independent movement. The former is simply a demonstration of mathematical ability in that it depends on the surgeon for input.

- Creative (simulation potential: none; human tools: artistic output, new patterns of thought, inventions, new kinds of musical composition): Creativity is the act of developing a new pattern of thought that results in unique output in the form of art, music, or writing. A truly new kind of product is the result of creativity. An AI can simulate existing patterns of thought and even combine them to create what appears to be a unique presentation but is really just a mathematically based version of an existing pattern. In order to create, an AI would need to possess self-awareness, which would require intrapersonal intelligence.

- Interpersonal (simulation potential: low to moderate; human tools: telephone, audio conferencing, video conferencing, writing, computer conferencing, email): Interacting with others occurs at several levels. The goal of this form of intelligence is to obtain, exchange, give, and manipulate information based on the experiences of others. Computers can answer basic questions because of keyword input, not because they understand the question. The intelligence occurs while obtaining information, locating suitable keywords, and then giving information based on those keywords. Cross-referencing terms in a lookup table and then acting on the instructions provided by the table demonstrates logical intelligence, not interpersonal intelligence.

- Intrapersonal (simulation potential: none; human tools: books, creative materials, diaries, privacy, and time): Looking inward to understand one's own interests and then setting goals based on those interests is currently a human-only kind of intelligence. As machines, computers have no desires, interests, wants, or creative abilities. An AI processes numeric input using a set of algorithms and provides an output; it isn't aware of anything that it does, nor does it understand anything that it does.

- Linguistic, often divided into oral, aural, and written (simulation potential: low for oral and aural, none for written; human tools: games, multimedia, books, voice recorders, and spoken words): Working with words is an essential tool for communication because spoken and written information exchange is far faster than any other form. This form of intelligence includes understanding oral, aural, and written input, managing the input to develop an answer, and providing an understandable answer as output. In many cases, computers can barely parse input into keywords, can't actually understand the request at all, and output responses that may not be understandable at all. In humans, oral, aural, and written linguistic intelligence come from different areas of the brain, which means that even in humans, someone who has high written linguistic intelligence may not have similarly high oral linguistic intelligence. Computers don't currently separate aural and oral linguistic ability; one is simply input and the other output. A computer can't simulate written linguistic capability because this ability requires creativity.

- Logical-mathematical (simulation potential: high, potentially higher than humans; human tools: logic games, investigations, mysteries, and brain teasers): Calculating a result, performing comparisons, exploring patterns, and considering relationships are all areas in which computers currently excel. When you see a computer beat a human on a game show, this is the only form of intelligence you're actually seeing, out of the seven kinds of intelligence. Yes, you might see small bits of other kinds of intelligence, but this is the focus. Basing an assessment of human-versus-computer intelligence on just one area isn't a good idea.

The reality vs. hype

There is a lot of hype about AI out there. If you watch movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application like those shown in the movies is the creative output of an overactive imagination. However, the importance of artificial intelligence to the future of technology cannot be overstated. It is already helping people in everyday technologies and has great potential in everything from customer service to health care to outer space exploration.

The five tribes and the master algorithm

You may have heard of something called the singularity, which is responsible for the potential claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would, as specified by the seven kinds of intelligence discussed earlier. Here are the five tribes of learning:

- Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
- Connectionists: This tribe's origin is in neuroscience, and the group relies on backpropagation to solve problems.
- Evolutionaries: This tribe originates in evolutionary biology, relying on genetic programming to solve problems.
- Bayesians: This tribe's origin is in statistics, and the group relies on probabilistic inference to solve problems.
- Analogizers: The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.

The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are currently working toward that goal. To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity. At this point, you should be amazed at just how much people don't know about how they think or why they think in a certain manner. Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.

Considering sources of hype

There are many sources of AI hype. Quite a bit of the hype comes from the media and is presented by people who have no idea what AI is all about, except perhaps from a sci-fi novel they once read. So it's not just movies or television that cause problems with AI hype; it's all sorts of other media sources as well. You can often find news reports presenting AI as being able to do something that it can't possibly do because the reporter doesn't understand the technology. Oddly enough, many news services now use AI to at least start articles for reporters.

Some products should be tested a lot more before being placed on the market. The "2020 in Review: 10 AI Failures" article at SyncedReview.com discusses ten products hyped by their developers but which fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole. However, something to consider with a few of these failures is that people may have interfered with the device using the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, the AI will fail to perform as expected because people will continue to fiddle with the software in an attempt to cause it to fail in a humorous manner.

Another cause of problems comes from asking the wrong person about AI. Not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery; it simply isn't a good idea. Yet many stories appear with people like these as the information source. To discover the future direction of AI, it's best to ask a computer scientist or data scientist with a strong background in AI research.

Understanding user overestimation

Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph.
However, even with the user significantly overestimating the technology's ability to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure. But you need not be speeding down a highway at 90 mph to encounter user overestimation. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner. The article "How to Solve the Most Annoying Robot Vacuum Cleaner Problems" at RobotsInMyHome.com discusses troubleshooting techniques for various robotic vacuums for a good reason: the robots still need human intervention. The point is that most robots need human intervention at some point because they simply lack the knowledge to go it alone.

What is AI technology?

Artificial intelligence is a sub-discipline of computer science that works by combining large amounts of data with fast, iterative algorithms, with the goal of enabling computers to solve complex problems and complete complex tasks. To see AI at work, you need some sort of computing system, an application that contains the required software, and a knowledge base.

The computer could be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for some applications. Of course, if you're Amazon and you want to provide advice on a particular person's next buying decision, the smartphone won't do; you need a really big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.

The application can also vary in size, complexity, and even location. For example, if you're a business and want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task. On the other hand, if you're a customer and want to find products on Amazon to go with your current purchase items, the application doesn't even reside on your computer; you access it through a web-based application located on Amazon's servers.

The knowledge base varies in location and size as well. The more complex the data, the more you can obtain from it, but the more you need to manipulate it as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important: a network connection affords you access to a large knowledge base online but costs you time because of the latency of network connections, whereas localized databases, while fast, tend to lack details in many cases.
Article / Updated 05-09-2023
Artificial intelligence (AI) is great at automation, which can make it ideal for tasks in health care. It never deviates from the procedure, never gets tired, and never makes mistakes as long as the initial procedure is correct. Unlike humans, AI never needs a vacation or a break or even an eight-hour day (not that many in the medical profession have that, either). Consequently, the same AI that interacts with a patient at breakfast will do so at lunch and dinner as well. So, at the outset, AI has some significant advantages if viewed solely on the basis of consistency, accuracy, and longevity.

Working with medical records

The major way in which AI helps in medicine is with medical records. In the past, everyone used paper records to store patient data. Each patient might also have a blackboard that medical personnel use to record information daily during a hospital stay. Various charts contain patient data, and the doctor might also have notes. Having all these sources of information in so many different places made it hard to keep track of the patient in any significant way.

Using an AI, along with a computer database, helps make information accessible, consistent, and reliable. Products such as Google DeepMind Health enable personnel to mine patient information to see patterns in data that aren't obvious. Doctors don't necessarily interact with records in the same way that everyone else does. The use of products such as IBM's WatsonPaths helps doctors interact with patient data of all sorts in new ways to make better diagnostic decisions about patient health. You can see a video on how this product works.

Medicine is about a team approach, with many people of varying specialties working together. However, anyone who watches the process for a while soon realizes that these people don't communicate among themselves sufficiently because they're all quite busy treating patients. Products such as CloudMedX take the input from all parties involved and perform risk analysis on it. The result is that the software can help locate potentially problematic areas that could reduce the likelihood of a good patient outcome. In other words, this product does some of the talking that the various stakeholders would likely do if they weren't submerged in patient care.

Predicting the future

Some truly amazing predictive software based on medical records includes CareSkore, which uses algorithms to determine the likelihood of a patient's requiring readmission into the hospital after a stay. By performing this task, hospital staff can review reasons for potential readmission and address them before the patient leaves the hospital, making readmission less likely. Along with this strategy, Zephyr Health helps doctors evaluate various therapies and choose those most likely to result in a positive outcome, again reducing the risk that a patient will require readmission to the hospital. This video tells you more about Zephyr Health.

In some respects, your genetics form a map of what will happen to you in the future. Consequently, knowing about your genetics can increase your understanding of your strengths and weaknesses, helping you to live a better life. Deep Genomics is discovering how mutations in your genetics affect you as a person. Mutations need not always produce a negative result; some mutations actually make people better, so knowing about mutations can be a positive experience, too. Check out this video for more details.
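Readmission-risk products of the kind described above are, at their core, classifiers trained on historical records. Here's a minimal, hypothetical sketch of the idea using scikit-learn's logistic regression. The features, data, and threshold are all invented, and a real clinical model would involve far more variables, far more data, and rigorous validation:

```python
from sklearn.linear_model import LogisticRegression

# Invented historical records: [age, length of stay (days), prior admissions]
# Label: 1 = patient was readmitted within 30 days, 0 = was not.
X = [[65, 10, 3], [45, 2, 0], [78, 14, 5], [30, 1, 0],
     [70, 7, 2], [55, 3, 1], [82, 12, 4], [40, 2, 0]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Estimate readmission risk for a patient about to be discharged,
# so staff can intervene before the patient leaves the hospital.
new_patient = [[72, 9, 2]]
risk = model.predict_proba(new_patient)[0][1]
print(f"Estimated readmission risk: {risk:.0%}")
if risk > 0.5:  # invented threshold, purely for illustration
    print("Flag for discharge-planning review")
```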
Making procedures safer
Doctors need lots of data to make good decisions. However, with data spread out all over the place, doctors who lack the ability to analyze that disparate data quickly often make imperfect decisions. To make procedures safer, a doctor needs not only access to the data but also some means of organizing and analyzing it in a manner reflecting the doctor's specialty. One such product is Oncora Medical, which collects and organizes medical records for radiation oncologists. As a result, these doctors can deliver the right amount of radiation to just the right locations to obtain a better result with a lower potential for unanticipated side effects.

Doctors also have trouble obtaining necessary information because the machines they use tend to be expensive and huge. An innovator named Jonathan Rothberg has decided to change all that with the Butterfly Network. Imagine an iPhone-sized device that can perform both an MRI and an ultrasound. The picture on the website is nothing short of amazing.

Creating better medications
Everyone complains about the price of medications today. Yes, medications can do amazing things for people, but they cost so much that some people end up mortgaging their homes to obtain them. Part of the problem is that testing takes a lot of time. Performing a tissue analysis to observe the effects of a new drug can take up to a year. Fortunately, products such as 3Scan can reduce the time required for the same tissue analysis to as little as one day.

Of course, better still would be for the drug company to have a good idea of which drugs are likely to work and which aren't before investing any money in research. Atomwise uses a huge database of molecular structures to analyze which molecules will answer a particular need. In 2015, researchers used Atomwise to create medications that would make Ebola less likely to infect others. The analysis that would have taken human researchers months or possibly years to perform took Atomwise just one day to complete. Imagine this scenario in the midst of a potential global epidemic. If Atomwise can perform the analysis required to render the virus or bacteria noncontagious in one day, the epidemic could be curtailed before becoming widespread.

Drug companies also produce a huge number of drugs. The reason for this impressive productivity, besides profitability, is that every person is just a little different. A drug that performs well and produces no side effects in one person might not perform well at all, and could even cause harm, in a different person. Turbine enables drug companies to run drug simulations so that they can locate the drugs most likely to work with a particular person's body. Turbine's current emphasis is on cancer treatments, but it's easy to see how this same approach could work in many other areas.

Medications can take many forms. Some people think they come only in pill or shot form, yet your body produces a wide range of medications in the form of microbiomes. Your body actually contains ten times as many microbes as it does human cells, and many of these microbes are essential for life; you'd quickly die without them. Whole Biome is using a variety of methods to make these microbiomes work better for you so that you don't necessarily need a pill or a shot to cure something. Check out this video for additional information.

Some companies have yet to realize their potential, but they're likely to do so eventually.
One such company is Recursion Pharmaceuticals, which employs automation to explore ways to use known drugs, bioactive compounds, and pharmaceuticals that didn't previously make the grade to solve new problems. The company has had some success in helping to treat rare genetic diseases, and it has a goal of curing 100 diseases in the next ten years (obviously, an extremely ambitious goal).
Cheat Sheet / Updated 05-08-2023
Whether you’ve purchased a new Mac with macOS Ventura pre-installed or you’ve upgraded from a previous version of macOS, you’ll find that Ventura makes your computer easier to use and offers myriad improvements to make you more productive. This Cheat Sheet includes information on things you should never do to your Mac; a compendium of useful and timesaving keyboard shortcuts; recommendations for backing up data; and website recommendations for smart Ventura users.
Step by Step / Updated 05-03-2023
Setting up a firewall is an effective way to protect your computer from outside cyber attackers and malicious software. Keep in mind, though, that by setting up a firewall you are changing the way your computer communicates with other computers on the Internet. The firewall blocks all incoming communications unless you set up a specific inbound exception in the Windows Firewall to let a program in. Some of your programs depend on receiving unsolicited traffic from the Internet and won't respond until that traffic can reach them. If you have a program that doesn't poke its own hole through the Windows Firewall, you can tell the firewall to allow packets destined for that specific program, and only that program, in through the firewall.
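You normally add such an exception through the Windows Firewall control panel, but if you script your setup, Windows also exposes this through the built-in netsh tool. Here's a minimal Python sketch that shells out to netsh to allow one specific program through. The rule name and program path are placeholders, and the script must run from an elevated (administrator) prompt.

# A minimal sketch of adding a program-specific inbound exception
# to Windows Firewall by calling the built-in netsh tool.
# The rule name and program path below are placeholders; run this
# from an elevated (administrator) prompt.
import subprocess

rule_name = "MyAppInbound"                      # any label you like
program = r"C:\Program Files\MyApp\myapp.exe"   # hypothetical path

subprocess.run(
    [
        "netsh", "advfirewall", "firewall", "add", "rule",
        f"name={rule_name}",
        "dir=in",               # inbound traffic
        "action=allow",         # let it through
        f"program={program}",   # only for this executable
        "enable=yes",
    ],
    check=True,  # raise an error if netsh reports failure
)

The rule applies only to the named executable, which is exactly the "that program, and only that program" behavior described above.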
Article / Updated 05-03-2023
You may want to consider establishing automatic investment programs to save for your retirement. Several automatic savings programs may be available to you, and you need to determine how much you can direct to each of these automatic plans. Here's how you do it:

1. Make sure that you're taking full advantage of any employer matching contribution for which you may be eligible with your company's retirement plan. Contribute at least the maximum amount that the employer will match.

2. If eligible, make the maximum contributions to your and your spouse's (if applicable) Roth IRA accounts each year; have your contributions taken automatically out of your checking account each month. A Roth IRA is the best retirement funding vehicle — from a tax standpoint — ever! Although you don't get a deduction when you contribute to a Roth IRA, all the earnings and withdrawals on the account are tax-free forever. You can establish a Roth IRA account at most banks, through investment advisors, or directly with a low-cost, no-load mutual fund company like Vanguard or a deep-discount broker like Scottrade or ShareBuilder. Making monthly contributions is much easier than coming up with the whole year's contribution at once, and you can set up direct automatic investments from your checking account into your Roth IRA account.

3. Build your personal portfolio with low-cost, tax-advantaged, passive investment vehicles, such as exchange-traded funds (ETFs) and index funds. You need to have investments that you can tap into if needed prior to retirement. Also, when you retire and pull money out of your retirement account, 100 percent of that withdrawal is taxable to you as ordinary income, whereas capital gains tax rates are much lower. You may be much better off from a tax standpoint to pay minimal capital gains tax now rather than ordinary income tax in the future, as the sketch after these steps illustrates.

Index funds are a way individual investors can own the stock market indexes that you hear about on the news, such as the Standard & Poor's 500 Composite Index (S&P 500, for short). Index funds have been available through no-load mutual fund powerhouses like Vanguard for decades, but the range of options has exploded in the last few years. You can now buy an exchange-traded fund (ETF) that invests exclusively in United States Treasury Inflation-Protected Securities. Rather than buying one bond for $10,000, you can literally buy one share of an ETF, which trades like a stock, incurring a transaction fee to buy or sell shares. And with the advent of deep-discount online brokerage firms, you can now afford to make monthly purchases of exchange-traded funds.

Which automatic savings programs are available to you, and how much can you direct to each of these automatic plans? Download and print the Making Your Investments Automatic worksheet to put these steps in action.
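To see why the ordinary-income versus capital-gains distinction matters, here's a small Python sketch comparing the after-tax value of the same $10,000 gain under each treatment. The tax rates are hypothetical round numbers chosen for illustration, not tax advice; your actual brackets will differ.

# Illustrative comparison of after-tax proceeds when the same
# $10,000 gain is taxed as ordinary income versus long-term
# capital gains. The rates below are hypothetical, not tax advice.
gain = 10_000.00
ordinary_rate = 0.32        # assumed ordinary-income bracket
capital_gains_rate = 0.15   # assumed long-term capital-gains rate

after_ordinary = gain * (1 - ordinary_rate)
after_capital = gain * (1 - capital_gains_rate)

print(f"Taxed as ordinary income: ${after_ordinary:,.2f}")
print(f"Taxed as capital gains:   ${after_capital:,.2f}")
print(f"Difference:               ${after_capital - after_ordinary:,.2f}")

Under these assumed rates, the same gain leaves you $1,700 better off when taxed as capital gains, which is the reasoning behind holding some taxable, passively managed investments alongside your retirement accounts.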
Article / Updated 05-03-2023
To assign a font family to part of your page, use some new CSS. As an example, this page has the heading set to Comic Sans MS. If this page is viewed on a Windows machine, it generally displays the font correctly because Comic Sans MS is installed with most versions of Windows. If you're on another type of machine, you may get something else. Look at the simple case. Here's the code:

<!DOCTYPE html>
<html lang = "en-US">
  <head>
    <meta charset = "UTF-8">
    <title>comicHead.html</title>
    <style type = "text/css">
      h1 {
        font-family: "Comic Sans MS";
      }
    </style>
  </head>
  <body>
    <h1>This is a heading</h1>
    <p>
      This is ordinary text.
    </p>
  </body>
</html>

The secret to this page is the font-family CSS attribute. Like most CSS rules, this one can be applied to any HTML tag on your page. In this particular case, it was applied to the level-one heading:

h1 {
  font-family: "Comic Sans MS";
}

You can attach any font name you wish, and the browser attempts to use that font to display the element. Even though a font may work perfectly fine on your computer, it may not work if that font isn't installed on the user's machine. If you run exactly the same page on an iPad, you might see an entirely different font. The specific font Comic Sans MS is installed on Windows machines, but the MS stands for Microsoft. This font isn't always installed on Linux or Mac. (Sometimes it's there, and sometimes it isn't.) You can't count on users having any particular fonts installed. The Comic Sans font is fine for an example, but it has been heavily overused in web development. Serious web developers avoid using it in real applications because it tends to make your page look amateurish.
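Because you can't guarantee that any one font is installed, CSS lets you list fallbacks: the browser tries each font in order and ends with a generic family it can always satisfy. Here's a small variation on the rule above that degrades gracefully on machines without Comic Sans MS:

h1 {
  /* The browser tries each font left to right and uses the first
     one installed; "cursive" is a generic family every browser
     can satisfy, so the heading never falls back to a surprise. */
  font-family: "Comic Sans MS", "Comic Sans", cursive;
}

Ending every font-family rule with a generic family (serif, sans-serif, monospace, or cursive) is a habit worth forming, because it keeps you in control of the worst case.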
Article / Updated 05-03-2023
A resource record is the basic data component in the Domain Name System (DNS). DNS resource records define not only names and IP addresses but also domains, servers, zones, and services. This list shows you the most common types of resource records:

A: Address resource records match an IP address to a host name.
CNAME: Canonical name resource records associate a nickname with a host name.
MX: Mail exchange resource records identify the mail servers for the specified domain.
NS: Name server resource records identify servers (other than the SOA server) that contain zone information files.
PTR: Pointer resource records match a host name to a given IP address. This is the opposite of an A record, which matches an IP address to the supplied host name.
SOA: Start of authority resource records specify which server contains the zone file for a domain.
SRV: Service resource records identify servers that provide special services to the domain.
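To see a couple of these record types in action, you can resolve names right from Python's standard library. This sketch performs a forward (A-record-style) lookup and a reverse (PTR-style) lookup; example.com is used as a stand-in domain, and the reverse lookup simply reuses whatever address the forward lookup returns.

# Forward and reverse lookups using only Python's standard library.
# A forward lookup answers "what address has this name?" (A record);
# a reverse lookup answers "what name has this address?" (PTR record).
import socket

host = "example.com"  # stand-in domain for illustration

ip = socket.gethostbyname(host)            # forward: name -> IP (A)
print(f"A:   {host} -> {ip}")

try:
    name, _, _ = socket.gethostbyaddr(ip)  # reverse: IP -> name (PTR)
    print(f"PTR: {ip} -> {name}")
except socket.herror:
    print(f"PTR: no reverse entry for {ip}")  # many hosts lack PTR records

Don't be surprised if the reverse lookup fails or returns a different name than you started with; PTR records are maintained separately from A records, which is exactly why the table lists them as distinct types.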
Article / Updated 04-27-2023
Google Chrome takes the privacy and security of your content seriously while you browse the web because, like it or not, certain people out there will try to take advantage of you by getting hold of the information on your computer. As with most things in life, it's better to be safe than sorry when protecting your personal information.

If you're working with a personal (that is, non-work) computer, managing these settings is your responsibility. But if you're using a work computer, you may find that your employer's IT department is already enforcing some of these settings according to its security policy. Those settings appear grayed out, with a little building icon next to them, meaning that you can't change them. Here's a rundown of what all those content privacy settings mean.

Cookies
Cookies allow external websites to store information on your computer to help them remember you. This information may include the last time you visited the site, the links you've clicked, and so on. You may not want external websites to store that kind of data on your computer, or your employer may not want them to. Just check the box to block third-party cookies and site data if you don't want websites to have that kind of access.

Images
Deciding whether to show images on websites isn't really a security concern, but not showing images can speed up your browsing considerably. You'll miss out on a lot, though. You may want to disable images only if your connection is very slow or if you're on a data plan (for example, if you're on the road and tethering your laptop to your phone's data connection so that you can access the Internet). Downloading images can eat into your allowed data quickly, and if you're interested only in the text, why waste your data?

JavaScript
JavaScript can be a major security concern. JavaScript applications are tiny programs that run on websites. Most above-board websites use JavaScript in a positive, nonthreatening way, such as gathering website traffic data (that is, tracking where you go and what you click on their website). However, some not-so-nice websites can use JavaScript to try to get at the information on your computer. If you're in the habit of visiting only nice websites, you can leave JavaScript enabled; however, if you tend to venture on the wild side of the web, you may want to disable it.

Handlers
Handlers are external applications (that is, not your browser) on your computer that are allowed to handle certain tasks. For example, if you click a link for someone's email address, it's very likely that Chrome will tell your default email application to open so that you can write a new message to the recipient. Websites may ask whether you'd like them to handle certain tasks for you, and it's up to you to decide whether to let them. Rest assured, websites can't do this without your permission, which is why they ask.

Plug-ins
Plug-ins are little applications that you install in your browser to enhance its functionality. You might also hear them called add-ons. Plug-ins are a great way to enable your browser to do things more easily. For example, if you frequently take screenshots of websites, you can get a screenshot plug-in that enables you to take a screenshot with just one or two clicks. But plug-ins can also do some nefarious things, which is why you may want to limit their use.

Pop-ups
We all know about pop-ups: those mostly annoying browser windows that pop up with advertisements, interrupting the flow of what you're doing. They're worse than TV commercials! Sometimes, though, pop-ups are necessary, such as when you're purchasing something online. But that's a relatively rare situation compared to when ads pop up, so it's best to keep pop-ups disabled and add exceptions on a case-by-case basis.

Location
Some websites may want to know where you're located, such as a shopping site asking where you are so that it can show you pricing for the nearest store. Most of the time, this is harmless. But still, you may not want people to know where you are, so it's probably best not to let websites know your location except when they ask and you decide to allow it. Note, though, that on your work computer, your IT department may disable this completely so that, no matter what, websites can't know where you are.

Notifications
Chrome allows websites to provide desktop notifications, such as when new email arrives in your web-based email app, or for the latest football scores or weather updates. The default for this setting is to have websites ask whether you want to receive notifications. But if you know for sure that you either want or don't want them, you can change this setting accordingly.

Fullscreen
Believe it or not, some websites have the audacity to want to take over your entire screen. Luckily, Chrome makes them ask first, so you can rightfully say no. You can use this setting to specify exceptions — that is, sites that you allow to take over your screen automatically, such as gaming sites.

Mouse cursor
You may not realize this, but an external website can disable your mouse cursor if it wants to. For example, online games may disable your mouse cursor during play. You can decide whether you want websites to be able to do this; the default is that they have to ask.

Protected content
Protected content is usually content that you've subscribed to or purchased the right to view on your computer. If you do this often, you'll want to make sure the Allow box is checked for this option.

Media
Some websites, such as sites that offer web conferencing, may want to use your microphone and camera. That's perfectly understandable, given the usage. But beware of unfamiliar websites that want that access, which is why Chrome asks for your permission before granting it. If you're sure you'd never want a website to have that kind of access, choose Do Not Allow from the options.

Unsandboxed plug-in access
Chrome runs all of its plug-ins in a sandboxed environment, which means that it limits the access that the plug-ins have to your computer. That way, they can't cause all kinds of havoc. Some plug-ins, however, require unrestricted access. You can safely allow above-board plug-ins, such as a streaming video player from a company you trust (such as your cable company), to run outside the sandboxed environment. But you should be very careful about giving that kind of access to any and all plug-ins. It's best to let Chrome ask when to run plug-ins outside the sandbox.

Automatic downloads
Some websites may try to force Chrome to download multiple files — and some of them may be harmful. For example, if you download one file by choice, the site may try to download another file after that without your permission. Obviously, you don't want websites downloading stuff to your computer without your permission, so it's best to keep the Ask When option selected.

MIDI devices full control
MIDI is an old technology that allows for digital communication between electronic musical instruments. What does this have to do with Chrome? Well, believe it or not, your computer contains MIDI support (and has for a long, long time). Websites can access MIDI devices to make music in your Chrome browser. Will you ever use this? Probably not, but you might as well leave the default Ask Me option selected.

Most of these settings have a Manage Exceptions button that enables you to set which sites you want to exempt from a particular setting. So, for example, if you don't want to download images on most sites except for a few, you can list the exceptions under that setting.