Generative AI Articles
AI shows up in everything from the operating room to your home entertainment system. Check out these articles for a heads-up on the latest developments in artificial intelligence.
Articles From Generative AI
Article / Updated 12-09-2024
As you delve deeper into the realm of prompt engineering, you find out that the effectiveness of a prompt can vary greatly depending on the application. Whether you’re using AI for creative writing, data analysis, customer service, or any other specific use, the prompts you use need to be tailored to fit the task at hand. The art in prompt engineering is matching your form of communication to the nature of the task. If you succeed, you’ll unlock the vast potential of AI.

For instance, when engaging with AI for creative writing, your prompts should be open-ended and imaginative, encouraging the AI to generate original and diverse ideas. A prompt like “Write a story about a lost civilization discovered by a group of teenagers” sets the stage for a creative narrative. In contrast, data analysis requires prompts that are precise and data-driven. Here, you might need to guide the AI with specific instructions or questions, such as “Analyze the sales data from the last quarter and identify the top-performing products.” You may need to include that data in the prompt if it isn’t already loaded into the training data, retrieval-augmented generation (RAG), system or custom messages, or a specialized GPT. In any case, this type of prompt helps the AI focus on the exact task, ensuring that the output is relevant and actionable.

The key to designing effective prompts lies in understanding the domain you’re addressing. Each field has its own set of terminologies, expectations, and objectives. For example, legal prompts require a different structure and language than those used in entertainment or education. It’s essential to incorporate domain-specific knowledge into your prompts to guide the AI in generating the desired output. Following are some examples across various industries that illustrate how prompts can be tailored for domain-specific applications:

Legal domain: In the legal industry, precision and formality are paramount. Prompts must be crafted to reflect the meticulous nature of legal language and reasoning. For instance, a prompt for contract analysis might be, “Identify and summarize the obligations and rights of each party as per the contract clauses outlined in Sections 2.3 and 4.1.” This prompt is structured to direct the AI to focus on specific sections, reflecting the detail-oriented nature of legal work.

Healthcare domain: In healthcare, prompts must be sensitive to medical terminology and patient privacy. A prompt for medical diagnosis might be, “Given the following anonymized patient symptoms and test results, what are the potential differential diagnoses?” This prompt respects patient confidentiality while leveraging the AI’s capability to process medical data.

Education domain: Educational prompts often aim to engage and instruct. A teacher might use a prompt like, “Create a lesson plan that introduces the concept of photosynthesis to 5th graders using interactive activities.” This prompt is designed to generate educational content that is age-appropriate and engaging.

Finance domain: In finance, prompts need to be data-driven and analytical. A financial analyst might use a prompt such as, “Analyze the historical price data of XYZ stock over the past year and predict the trend for the next quarter based on the moving average and standard deviation.” This prompt asks the AI to apply specific financial models to real-world data.

Marketing domain: Marketing prompts often focus on creativity and audience engagement.
A marketing professional could use a prompt like, “Generate a list of catchy headlines for our new eco-friendly product line that will appeal to environmentally conscious consumers.” This prompt encourages the AI to produce creative content that resonates with a target demographic.

Software development domain: In software development, prompts can be technical and require understanding of coding languages. A prompt might be, “Debug the following Python code snippet and suggest optimizations for increasing its efficiency.” This prompt is technical, directing the AI to engage with code directly.

Customer service domain: For customer service, prompts should be empathetic and solution-oriented. A prompt could be, “Draft a response to a customer complaint about a delayed shipment, ensuring to express understanding and offer a compensatory solution.” This prompt guides the AI to handle a delicate situation with care.

By understanding the unique requirements and language of each domain, you can craft prompts to effectively guide AI in producing the desired outcomes. It’s not just about giving commands; it’s about framing them in a way that aligns with the goals, terms, and practices of the industry in question. As AI continues to evolve, the ability to engineer precise and effective prompts becomes an increasingly valuable skill across all sectors. (A code sketch at the end of this article shows how a domain-tailored prompt like these might be sent to a model programmatically.)

15 tips and tricks for better AI prompting

Although GenAI may seem like magic, it takes knowledge and practice to write effective prompts that will generate the content you’re looking for. The following list provides some insider tips and tricks to help you optimize your prompts to get the most out of your interactions with GenAI tools:

Know your goal. Decide what you want from the AI — like a simple how-to or a bunch of ideas — before you start asking.

Get specific. The clearer you are, the better the AI can help. Ask “How do I bake a beginner's chocolate cake?” instead of just “How do I make a cake?”

Keep it simple. Use easy language unless you’re in a special field like law or medicine where using the right terms is necessary.

Add context. Give some background if it's a special topic, like tips for small businesses on social media.

Play pretend. Tell the AI to act like someone, like a fitness coach, to get answers that fit that role.

Try again. If the first answer isn't great, change your question a bit and ask again.

Show examples. If you want something creative, show the AI an example to follow, like asking for a poem like one by Robert Frost.

Don't overwhelm. Keep your question focused. If it's too packed with info, it gets messy.

Mix it up. Try asking in different ways, like with a question or a command, to see what works best.

Embrace the multimodal functionality. Multimodal functionality means that the GenAI model you’re working with can accept more than one kind of prompt input. Typically, that means it can accept both text and images in the input.

Understand the model’s limitations. GenAI is not infallible and can still produce errors or “hallucinate” responses. Always approach the AI’s output with a critical eye and use it as a starting point rather than the final word on any subject.

Leverage the enhanced problem-solving abilities. GenAI’s enhanced problem-solving skills mean that you can tackle more complex prompts. Use this to your advantage when crafting prompts that require a deep dive into a topic.

Keep prompts aligned with AI training.
For example, remember that GPT-4, like its predecessors, is trained on a vast dataset up to a certain point in time (April 2023 at the time of this writing). It doesn’t know about anything that happened after that date. If you need to reference more recent events or data, provide that context within your prompt.

Experiment with different prompt lengths. Short prompts can be useful for quick answers, while longer, more detailed prompts can provide more context and yield more comprehensive responses.

Incorporate feedback loops. After receiving a response from your GenAI application, assess its quality and relevance. If it hit — or is close to — the mark, click the thumbs-up icon. If it’s not quite what you were looking for, click the thumbs-down icon and explain what was missing in your next prompt. This iterative process can help refine the AI’s understanding of your requirements and improve the quality of future responses.

By keeping these tips in mind and staying informed about the latest developments in the capabilities of various GenAI models and applications, you’ll be able to craft prompts that are not only effective but also responsible and aligned with the AI’s strengths and limitations.

How to use prompts to fine-tune the AI model

The point of prompt engineering is to carefully compose a prompt that can shape the AI’s learning curve and fine-tune its responses to perfection. In this section, you dive into the art of using prompts to refine the GenAI model, ensuring that it delivers the most accurate and helpful answers possible. In other words, you discover how to use prompts to also teach the model to perform better for you over time. Here are some specific tactics:

When you talk to the AI and it gives you answers, tell it if you liked the answer or not. Do this by clicking the thumbs-up or thumbs-down, or the + or – icons above or below the output. The model will learn how to respond better to you and your prompts over time if you do this consistently.

If the AI gives you a weird answer, there's a “do-over” button you can press. It's like asking your friend to explain something again if you didn't get it the first time. Look for “Regenerate Response” or some similar wording (the term varies among models) near the output. Click on that and you’ll instantly get the AI’s second try!

Think of different ways to ask the AI the same or related questions. It's like using magic words to get the best answers. If you're really good at it, you can make a list of prompts that others can use to ask good questions too. Prompt libraries are very helpful to all. It’s smart to look at prompt libraries for ideas when you’re stumped on how or what to prompt.

Share your successful prompts. If you find a super good way to ask something, you can share it online (at sites like GitHub) with other prompt engineers and use prompts others have shared there too.

Instead of teaching the AI everything from scratch (retraining the model), you can teach it a few new things through your prompting. Just ask it in different ways to do new things. Over time, it will learn to expand its computations. And with some models, what it learns from your prompts will be stored in its memory. This will improve the outputs it gives you too!

Redirect AI biases. If the AI says something that seems mean or unfair, rate it a thumbs down and state why the response was unacceptable in your next prompt. Also, change the way you ask questions going forward to redirect the model away from this tendency.
Be transparent and accountable when you work with AI. Tell people why you're asking the AI certain questions and what you hope to get from it. If something goes wrong, try to make it right. It's like being honest about why you borrowed your friend's toy and fixing it if it breaks.

Keep learning. The AI world changes a lot, and often. Keep up with new models, features, and tactics, talk to others, and always try to get better at making the AI do increasingly more difficult things. The more you help GenAI learn, the better it gets at helping you!

What to do when AI goes wrong

When you engage with AI through your prompts, be aware of common pitfalls that can lead to biased or undesirable outcomes. Following are some strategies to avoid these pitfalls, ensuring that your interactions with AI are both effective and ethically sound.

Recognize and mitigate biases. Biases in AI can stem from the data it was trained on or the way prompts are structured. For instance, a healthcare algorithm in the United States inadvertently favored white patients over people of color because it used healthcare cost history as a proxy for health needs, which correlated with race. To avoid such biases, carefully consider the variables and language used in your prompts. Ensure they do not inadvertently favor one group over another or perpetuate stereotypes.

Question assumptions. Wrong or flawed assumptions can lead to misguided AI behavior. For example, Amazon’s hiring algorithm developed a bias against women because it was trained on resumes predominantly submitted by men. Regularly review the assumptions behind your prompts and be open to challenging and revising them as needed.

Avoid overgeneralization. AI can make sweeping generalizations based on limited data. To prevent this, provide diverse and representative examples in your prompts. This helps the AI understand the nuances and variations within the data, leading to more accurate and fair outcomes.

Keep your purpose in sight. Losing sight of the purpose of your interaction with AI can result in irrelevant or unhelpful responses. Always align your prompts with the intended goal and avoid being swayed by the AI’s responses into a direction that deviates from your original objective.

Diversify information sources. Relying on too narrow a set of information can skew AI responses. Ensure that the data and examples you provide cover a broad spectrum of scenarios and perspectives. This helps the AI develop a well-rounded understanding of the task at hand. For example, if an AI trained to find causes of helicopter crashes has only a dataset of events in which helicopters crashed, it will conclude that all helicopters crash, which in turn will produce skewed outputs that could be costly or even dangerous. Add data on flights or events when helicopters did not crash, and you’ll get better outputs because the model has more diverse and more complete information to analyze.

Encourage open debate. AI can sometimes truncate debate by providing authoritative-sounding answers. Encourage open-ended prompts that allow for multiple viewpoints and be critical of the AI’s responses. This fosters a more thoughtful and comprehensive exploration of the topic.

Be wary of consensus. Defaulting to consensus can be tempting, especially when AI confirms our existing beliefs. However, it’s important to challenge the AI and yourself by considering alternative viewpoints and counterarguments. This helps in uncovering potential blind spots and biases.

Check your work.
Always review the AI’s responses for accuracy and bias. As with the healthcare algorithm that skewed resources toward white patients, unintended consequences can arise from seemingly neutral variables. Rigorous checks and balances are necessary to ensure the AI’s outputs align with ethical standards.
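If you work with a model programmatically rather than through a chat window, the same domain-tailoring applies to the messages you send through an API. Here is a minimal sketch using the OpenAI Python client; the model name, the analyst persona, and the contract excerpt are illustrative placeholders rather than recommendations from this article:

```python
# A minimal sketch of a domain-specific prompt sent through the OpenAI Python
# client. The model name and contract excerpt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

contract_excerpt = "..."  # paste the clauses from Sections 2.3 and 4.1 here

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model your account provides
    messages=[
        # The system message frames the domain and the expected register.
        {"role": "system",
         "content": "You are a meticulous legal analyst. Answer formally and "
                    "cite the clause you rely on for every statement."},
        # The user message carries the task plus the data it needs (a simple
        # way of supplying context the model can't get from training data).
        {"role": "user",
         "content": "Identify and summarize the obligations and rights of "
                    "each party in the following contract clauses:\n\n"
                    + contract_excerpt},
    ],
)
print(response.choices[0].message.content)
```

The system message carries the domain framing, and the user message carries the task and the data it needs, the same structure you would use for the healthcare, finance, or marketing examples above.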
Article / Updated 10-28-2024
Bayes’ theorem can help you deduce how likely something is to happen in a certain context, based on the general probabilities of the fact itself and the evidence you examine, combined with the probability of the evidence given the fact. Seldom will a single piece of evidence diminish doubts and provide enough certainty in a prediction to ensure that it will happen. As a true detective, to reach certainty, you have to collect more evidence and make the individual pieces work together in your investigation. Noticing that a person has long hair isn’t enough to determine whether that person is female or male. Adding data about height and weight could help increase confidence.

The Naïve Bayes algorithm helps you arrange all the evidence you gather and reach a more solid prediction with a higher likelihood of being correct. Gathered evidence considered singularly couldn’t save you from the risk of predicting incorrectly, but all the evidence summed together can reach a more definitive resolution.

The following example shows how things work in a Naïve Bayes classification. This is an old, renowned problem, but it represents the kind of capability that you can expect from an AI. The dataset is from the paper “Induction of Decision Trees,” by John Ross Quinlan. Quinlan is a computer scientist who contributed to the development of another machine learning algorithm, decision trees, in a fundamental way, but his example works well with any kind of learning algorithm. The problem requires that the AI guess the best conditions to play tennis given the weather conditions. The set of features described by Quinlan is as follows:

Outlook: Sunny, overcast, or rainy
Temperature: Cool, mild, or hot
Humidity: High or normal
Windy: True or false

The following table contains the database entries used for the example:

Outlook    Temperature    Humidity    Windy    PlayTennis
Sunny      Hot            High        False    No
Sunny      Hot            High        True     No
Overcast   Hot            High        False    Yes
Rainy      Mild           High        False    Yes
Rainy      Cool           Normal      False    Yes
Rainy      Cool           Normal      True     No
Overcast   Cool           Normal      True     Yes
Sunny      Mild           High        False    No
Sunny      Cool           Normal      False    Yes
Rainy      Mild           Normal      False    Yes
Sunny      Mild           Normal      True     Yes
Overcast   Mild           High        True     Yes
Overcast   Hot            Normal      False    Yes
Rainy      Mild           High        True     No

The option of playing tennis depends on the four features shown here. The result of this AI learning example is a decision as to whether to play tennis, given the weather conditions (the evidence). Using just the outlook (sunny, overcast, or rainy) won’t be enough, because the temperature and humidity could be too high or the wind might be strong. These features represent real conditions that have multiple causes, or causes that are interconnected. The Naïve Bayes algorithm is skilled at guessing correctly when multiple causes exist. The algorithm computes a score, based on the probability of making a particular decision, multiplied by the probabilities of the evidence connected to that decision. For instance, to determine whether to play tennis when the outlook is sunny but the wind is strong, the algorithm computes the score for a positive answer by multiplying the general probability of playing (9 played games out of 14 occurrences) by the probability of the day’s being sunny (2 out of 9 played games) and of having windy conditions when playing tennis (3 out of 9 played games).
The same rules apply for the negative case (which has different probabilities for not playing given certain conditions):

likelihood of playing: 9/14 * 2/9 * 3/9 = 0.05
likelihood of not playing: 5/14 * 3/5 * 3/5 = 0.13

Because the score for not playing is higher, the algorithm decides that it’s safer not to play under such conditions. It converts the scores into probabilities by dividing each score by the sum of the two:

probability of playing: 0.05 / (0.05 + 0.13) = 0.278
probability of not playing: 0.13 / (0.05 + 0.13) = 0.722

You can further extend Naïve Bayes to represent relationships that are more complex than a series of factors that hint at the likelihood of an outcome by using a Bayesian network, which consists of graphs showing how events affect each other. Bayesian graphs have nodes that represent the events and arcs showing which events affect others, accompanied by a table of conditional probabilities that shows how each relationship works in terms of probability. The figure shows a famous example of a Bayesian network taken from a 1988 academic paper, “Local computations with probabilities on graphical structures and their application to expert systems,” by Steffen L. Lauritzen and David J. Spiegelhalter, published in the Journal of the Royal Statistical Society. The depicted network is called Asia. It shows possible patient conditions and what causes what. For instance, if a patient has dyspnea, it could be an effect of tuberculosis, lung cancer, or bronchitis. Knowing whether the patient smokes, has been to Asia, or has anomalous x-ray results (thus giving certainty to certain pieces of evidence, a priori in Bayesian language) helps infer the real (posterior) probabilities of having any of the pathologies in the graph.

Bayesian networks, though intuitive, have complex math behind them, and they’re more powerful than a simple Naïve Bayes algorithm because they mimic the world as a sequence of causes and effects based on probability. Bayesian networks are so effective that you can use them to represent any situation. They have varied applications, such as medical diagnoses, the fusing of uncertain data arriving from multiple sensors, economic modeling, and the monitoring of complex systems such as a car. For instance, because driving in highway traffic may involve complex situations with many vehicles, the Analysis of MassIve Data STreams (AMIDST) consortium, in collaboration with the automaker Daimler, devised a Bayesian network that can recognize maneuvers by other vehicles and increase driving safety.
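If you want to verify the arithmetic yourself, the following short Python sketch (not part of Quinlan’s paper; it simply recomputes the numbers above from the table) counts the rows and reproduces the two scores and the final probabilities. Note that the calculation above rounds the scores to 0.05 and 0.13 before normalizing, which is why it reports 0.278 and 0.722; the unrounded values come out near 0.270 and 0.730.

```python
# A minimal sketch that reproduces the Naive Bayes arithmetic above
# using Quinlan's play-tennis table.

records = [
    # (outlook, temperature, humidity, windy, play)
    ("Sunny", "Hot", "High", False, "No"),
    ("Sunny", "Hot", "High", True, "No"),
    ("Overcast", "Hot", "High", False, "Yes"),
    ("Rainy", "Mild", "High", False, "Yes"),
    ("Rainy", "Cool", "Normal", False, "Yes"),
    ("Rainy", "Cool", "Normal", True, "No"),
    ("Overcast", "Cool", "Normal", True, "Yes"),
    ("Sunny", "Mild", "High", False, "No"),
    ("Sunny", "Cool", "Normal", False, "Yes"),
    ("Rainy", "Mild", "Normal", False, "Yes"),
    ("Sunny", "Mild", "Normal", True, "Yes"),
    ("Overcast", "Mild", "High", True, "Yes"),
    ("Overcast", "Hot", "Normal", False, "Yes"),
    ("Rainy", "Mild", "High", True, "No"),
]

def score(play_label, outlook, windy):
    """Prior for the label times the likelihood of each piece of evidence."""
    rows = [r for r in records if r[4] == play_label]
    prior = len(rows) / len(records)                            # 9/14 for "Yes"
    p_outlook = sum(r[0] == outlook for r in rows) / len(rows)  # 2/9 for sunny
    p_windy = sum(r[3] == windy for r in rows) / len(rows)      # 3/9 for windy
    return prior * p_outlook * p_windy

yes = score("Yes", "Sunny", True)   # about 0.048
no = score("No", "Sunny", True)     # about 0.129
print(f"P(play | sunny, windy)     = {yes / (yes + no):.3f}")   # about 0.270
print(f"P(not play | sunny, windy) = {no / (yes + no):.3f}")    # about 0.730
```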
Cheat Sheet / Updated 10-17-2024
The first public release of ChatGPT ignited the world’s demand for increasingly sophisticated Generative AI (GenAI) models and tools, and the market was quick to deliver. But what’s the use of having so many GenAI tools if you get stuck using them? And make no mistake, everyone gets stuck quite often! This cheat sheet helps you get the very best results by introducing you to advanced (but pretty easy) prompting techniques and giving you useful tips on how to choose models or applications that are right for the task.
Article / Updated 08-19-2024
The landscape of contract lifecycle management (CLM) is rapidly evolving with the advent of advanced technologies like generative AI (Gen AI). Gen AI is a new iteration of AI whose key benefit is the generation of new content based on the patterns and information it’s learned from existing datasets. Gen AI isn’t a trend or a fad. It’s a new technology that represents a seismic shift in many ways. Organizations are no longer asking if they should embrace AI in CLM but rather how swiftly and effectively they can adapt. The golden age of powerful intelligent technology must be embraced, and you must adapt to advance your business. Integrating these technologies into your CLM can make your CLM an even more powerful tool.

AI is like giving machines a brain to think and learn, while Gen AI is about giving them creativity to make new things. When you apply Gen AI to CLM and your contracting processes, it truly expedites your third-party paper review, contract redlining, playbook review, negotiation, and more. In this article, you discover how Gen AI’s powerful use cases are wielded in CLM.

Tackling Gen AI Use Cases that Impact CLM

Gen AI streamlines contract creation, analysis, and risk assessment, revolutionizing how businesses manage contracts. It’s an exciting development that promises efficiency and accuracy in CLM processes. Within CLM, Gen AI’s prominent use cases include the following:

Drafting your contracts with ease: Transform how your organization handles your contracts and their processes. Creating contracts through traditional methods is a time-consuming process that requires highly trained experts, but Gen AI can flip that old way of doing things and start automating your contract drafting. Gen AI does this by learning from your existing contracts and then generating new ones based on your specific business needs and the specific inputs that you provide to the tool.

Improved adoption: Gen AI becomes a critical co-pilot, working with your users without requiring training. By adding this resource capacity, you can increase efficiency through automating repetitive processes, such as expedited contract review and risk analysis. Your business can do more and free up valuable human resources to focus on strategic initiatives. Although Gen AI is still new and adoption is gradual, the benefits are compelling enough for businesses to adopt it faster.

Voice- and text-activated operation: You can easily communicate your objectives through voice commands or by typing, and Gen AI provides guided, click-free actions to efficiently achieve your goals.

Intelligent search: Gen AI can review large amounts of data more quickly than before, so you spend less time on searches and get precise results faster. It can swiftly identify key provisions and the existence of specific business terms across agreements, making audits or merger and acquisition (M&A) transactions much easier.

Advanced business intelligence: Gen AI offers more robust contextual insights and actionable recommendations, including summaries of data that it can then use to drive more data-driven decisions. These AI insights can help you negotiate better terms, optimize contract structures, and align legal strategies with broader business objectives.

Proactive support and risk management: Gen AI facilitates smooth collaboration during document review, and it can proactively identify legal risks, offering recommendations to ensure compliance and mitigate potential issues.
In today’s culture, minimizing risk and ensuring compliance are paramount. Gen AI can leverage advanced algorithms to systematically analyze agreements, flag potential compliance issues, and ensure adherence to legal standards. With Gen AI’s contract analysis and risk assessment, your organization can make better-informed decisions about its contracts.

Using Gen AI Use Cases to Strengthen Your Teams

AI-powered CLM use cases provide value in diverse scenarios. By implementing AI contract software, all your teams benefit:

Legal: Legal departments can automate contract analysis, strategy development, and negotiations. AI also ensures that contracts comply with the latest legal standards and regulations.

Procurement: Procurement teams can automate the vendor contract lifecycle and third-party paper reviews. AI streamlines the creation, review, and approval of contracts, ensuring that procurement processes are seamless and compliant.

Sales: Sales teams leverage AI to accelerate the contract negotiation process. By expediting redlining and ensuring the accuracy of contract terms, sales professionals can close deals more efficiently and with reduced risks.

Compliance: AI helps you monitor and ensure adherence to contractual obligations. By providing real-time insights into contract performance, AI-enhanced solutions help identify and mitigate risks associated with non-compliance.

Expanding Gen AI in CLM with Malbek

You’re ready to elevate your CLM experience and unleash the power of Gen AI. You want to maximize the power of your digital contracts, but you need a solid partner along the way. In this section, you learn more about Malbek and how the company can help you do just that. To learn more about Malbek, you can also visit one of these resources:

www.malbek.io
www.malbek.io/platform

Simplify CLM complexity

Malbek empowers its customers with a dynamic, centralized, and fully configurable CLM platform that simplifies your CLM processes. CLM can be complex, but with a trusted partner, you can distill critical insights from contracts for actionable decision-making and peak profitability.

Accelerate contracting velocity

Build and launch contract and approval processes with ease. From intuitive workflows and seamless approvals to swift contract generation, Malbek’s platform empowers enterprises to navigate contracts with unprecedented speed, ensuring efficiency, compliance, and strategic impact at every turn.

Unite global teams and improve collaboration

Malbek seamlessly integrates with your favorite business apps, such as Salesforce, Microsoft, SAP, NetSuite, Slack, Coupa, OneTrust, Adobe Sign, DocuSign, and more. By connecting your CLM system with the rest of your business, you can maintain a single source of truth and streamline your operations.

Improve decision-making and minimize risk

Eliminate time-consuming, manual tasks that take away from high-value objectives. With Malbek AI infused throughout the contracting process, you gain immediate access to timely contextual insights and recommendations to have the greatest impact on your business. AI also streamlines negotiations and shortens review cycles.

Download your free copy of Contract Lifecycle Management (CLM) For Dummies, Malbek Special Edition today.
Article / Updated 05-31-2024
At work as well as in your personal life, you’ve almost certainly been bombarded with talk about generative artificial intelligence (AI). It’s all over the mainstream media, in trade journals, in C-suite conversations, and on the front lines of whatever work your organization does. There’s no escaping it. The stories make AI sound so miraculous that, in fact, you could be forgiven for thinking it must be a bunch of hype. But the reality is, generative AI can truly be transformational for businesses.

You can leave it for textbooks to fill in the details about what AI is and how it works. But in a nutshell, AI relies on building large language models (LLMs) with the help of machine learning (ML). AI trains on vast amounts of data, immerses itself, and learns from the data in ways not unlike how humans learn (but a whole lot faster, and ingesting far, far more data). Notice that the title of this article refers to generative AI. This AI doesn’t just make recommendations — it actually creates new data or content, or generates insights by using the power of natural language processing (NLP) and ML.

Tackling many tasks

What can generative AI really do for your business? What business problems can it solve? For starters, it’s a fantastic headache remedy. Some of the business headaches generative AI can cure include:

Production bottlenecks: Got processes that are stuck and unable to keep up with the demands of customers? Generative AI breaks through bottlenecks by automating processes, improving efficiency, facilitating faster and better human decisions, increasing output, maximizing resources, and speeding up development cycles.

Tedious tasks: Generative AI can tackle mundane and tedious tasks, freeing up human brainpower for real value-creating initiatives that your people will find more fulfilling.

Inconsistencies and noncompliance: Generative AI creates consistency across your organization’s communications and enforces compliance with internal and external standards. It’s easy for discrepancies and errors to pop up and multiply — generative AI can identify these issues, offer insights and recommendations, and even automatically fix them.

Training hurdles: Generative AI helps new hires onboard and get up to speed quickly by generating training materials and job simulations. Personalized instruction can fill knowledge gaps.

Customer-service struggles: When equipped with information-retrieval solutions, the technology can answer questions quickly and can even handle some customer interactions entirely on its own. It also improves live human interactions by empowering agents and creating instant conversation summaries.

Exploring the use cases

What generative AI can do for your organization boils down to three primary areas:

Creating: This is what it sounds like — using AI to come up with something new. It also may mean editing or revising something that has already been created, by a person or AI, perhaps by turning it into a different format. For your marketing team, a generative AI tool can write the first draft of an ebook about a new product, or create a press release or search engine optimization (SEO)-ready web content. It can come up with a knowledge base article on the latest product feature to help the support team, or a best-practices management article for learning and development. It can help the human resources (HR) team write a job description, making sure it’s doing so in inclusive language.
The product development team will love how it ingests and crunches a list of features and bug tickets to come up with release notes.

Analyzing: This means taking an in-depth look at content of some kind and generating insights. Generative AI can spot trends or reach conclusions of some sort, perhaps even analyze sentiment amid a batch of customer feedback. Marketing may ask the AI platform to process a webinar recording and summarize the key takeaways. The support team can have it scour customer support survey responses to come up with insights on areas of improvement to consider. Generative AI can help learning and development conjure up some FAQs by analyzing and categorizing what’s in an internal wiki. AI can listen to a recording of a job interview and create a summary for a recruiter. Product developers can have it study customer feedback to find insights for what new features to prioritize.

Governing: The govern use case includes a focus on compliance, looking for language that runs afoul of legal and regulatory rules. It finds incorrect terminology and statements and works to prevent data loss and global compliance problems. This type of AI work also means checking for factual accuracy, detecting claims that are wrong and suggesting replacement wording. Marketers can use it to find errors and violations in advertising copy, and for HR, AI can flag non-inclusive language in employee communications, then make suggested revisions. The learning and development team may use it to ensure training materials are compliant with industry certification requirements and other vital standards.

Making it happen

Many generative AI tools are out there right now, and they’re ready for the masses. Countless people subscribe to platforms such as ChatGPT and Google’s Gemini, and Meta AI is now built right into social media platforms. For the use cases outlined in the preceding section, though, it’s essential to seek an enterprise-grade, full-stack generative AI platform rather than a consumer-targeted AI assistant. Your organization will want a platform that can be truly customized to your needs and integrated with your operations, trained on accurate data that’s relevant to your business and industry, and fully in line with your security and compliance requirements.

So, should you do it yourself? That’s not such a great plan, either. Building your own AI stack can be slow and expensive. Look for a partner that can abstract the complexity so you can benefit from the AI-first workflows, not get bogged down building and maintaining infrastructure. When picking a platform, follow these tips:

Keep pace with your organizational needs. Get a tool that can deploy custom AI apps in a snap for any use case, including digital assistants, content generation, summarization, and data analysis.

Seek the right model. Palmyra LLMs from Writer, for example, are top-ranked on key benchmarks for model performance set by Stanford’s Holistic Evaluation of Language Models.

Connect to your company knowledge. An LLM alone can’t deliver accurate answers about information that’s locked inside your business knowledge bases. For that, you need retrieval-augmented generation (RAG), which is basically a way to feed an LLM-based AI app company-specific information that can’t be found in its training data. (A simplified sketch of the retrieve-then-prompt idea appears at the end of this article.) Check out writer.com/product/graph-based-rag for more information.

Be sure it’s fully customizable. You need consistent, high-quality outputs that meet your organization’s specific requirements, and a general consumer tool can’t do that.
You also must have AI guardrails that enforce all your rules and standards.

Integrate the tool. To fit into your flow, AI apps need to be in your people’s hands however they’re working. You need an enterprise application programming interface (API) and extensions that’ll build tools right into Microsoft Word and Outlook, Google Docs and Chrome, Figma, Contentful, or whatever else your people love to use.

Deploy it your way. Look for options that include single-tenant or multi-tenant deployments.

Get things done quickly. Look for a platform that can have you up and running in days, not months. Wouldn’t you rather spend your time adopting than tediously building?

Keep it secure. Here’s an incredibly vital area where consumer tools can leave your enterprise at great risk. You need an LLM that’s secure, auditable, and never uses your sensitive data in model training. You’ll lose a lot of sleep if your tool doesn’t comply with the standards your organization must follow, whether that means SOC 2 Type II, HIPAA, PCI, GDPR, or CCPA. Find a tool that manages access with single sign-on (SSO), multifactor authentication, and role-based permissions.

Writer is the full-stack generative AI platform for enterprises. It empowers your entire organization to accelerate growth, increase productivity, and ensure compliance. For more information on how to transform work with generative AI, download Generative AI For Dummies, Writer Special Edition.
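To make the retrieval-augmented generation idea a bit more concrete, here is a deliberately oversimplified sketch of the retrieve-then-prompt pattern. It is not Writer's implementation (which uses graph-based RAG); the keyword-overlap retrieval below is only a stand-in for a real embedding-based vector search, and the knowledge-base entries are invented for illustration:

```python
# A toy illustration of the retrieve-then-prompt pattern behind RAG.
# Real systems use embeddings and a vector database; the keyword-overlap
# scoring below is only a stand-in so the example stays self-contained.

knowledge_base = {
    "refund-policy": "Refunds are issued within 14 days of purchase ...",
    "shipping": "Standard shipping takes 3 to 5 business days ...",
    "warranty": "Hardware is covered by a 2-year limited warranty ...",
}

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        knowledge_base.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Feed the retrieved company-specific text to the model inside the prompt."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long do refunds take?"))
# The resulting string is what you would send to the LLM of your choice.
```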
Article / Updated 12-05-2023
We depend on machines to produce everyday essentials — such as power, food, and medicine — and to support nearly every aspect of society. Thus, machine health is vital to overall manufacturing and business health. Machine health transforms reliability, maintenance, operations, and asset performance management by using artificial intelligence (AI) and Internet of Things (IoT) technologies to improve performance, reduce downtime, and help manufacturers reach Industry 4.0 standards.

Transform the way you work

In an era of labor shortages and technology innovation, manufacturing needs to move faster toward digitization. Using AI in manufacturing, companies can eliminate repetitive tasks, reduce inefficiencies, and strengthen data-driven decision making. With insights into the real-time condition of their machines, workers can break away from traditional maintenance schedules and manual tasks. This means more time for proactive work and collaboration with other departments, which leads to stronger cross-functional teams. It also leads to further innovations, such as process optimization. When employees have insights into the health of their machines, they can predict their workdays and own their schedules, opening up new opportunities for innovation and engagement. Learn more about transforming the way you work with machine health at www.augury.com/use-cases/business-goal/transform-the-way-you-work.

Eliminate unnecessary downtime

Unplanned downtime can be expensive. There’s the cost of the repairs themselves, lost production and sales, and reputational damage. Maintenance and reliability teams often have to scramble to diagnose and fix the problem as quickly as possible — often resulting in overtime pay and expedited shipping costs for emergency spare parts. Sudden and catastrophic machine failures can also harm worker morale and jeopardize worker safety. Thus, reducing unnecessary downtime has a significant impact on both your top and bottom lines. Learn more about eliminating unnecessary downtime with machine health at www.augury.com/use-cases/business-goal/eliminate-unnecessary-downtime.

Reduce loss, waste, and emissions

Reducing waste to improve sustainability has become a top priority for manufacturers for cost-cutting/efficiency purposes, as well as to promote a healthier planet. Healthy machines can run at capacity and with less downtime, leading to less waste and more efficient energy use. An industry study by the Electric Power Research Institute (EPRI) found that optimizing the performance of rotating assets — which account for approximately 54 percent of U.S. industrial electricity consumption — can reduce energy consumption by 12 to 15 percent. Learn more about reducing loss, waste, and emissions with machine health at www.augury.com/use-cases/business-goal/reduce-loss-waste-and-emissions.

Maximize yield and capacity

Real-time machine health insights allow maintenance teams to adjust shutdown schedules based on the current condition of the machine, its history, and the recommendations of experts. In the longer term, machine health minimizes unplanned downtime by improving the health of your machines, reduces planned downtime by deferring nonessential maintenance activities, and can increase output on production lines with healthier machines running optimally. Learn more about maximizing yield and capacity with machine health at www.augury.com/use-cases/business-goal/maximize-yield-and-capacity.
Optimize asset care

When it comes to asset care — from acquiring and storing parts to managing maintenance resources — manufacturers have traditionally taken a more preventive than predictive approach: Machines are serviced on fixed schedules, regardless of whether or not they need maintenance. Many manufacturers keep spare parts for critical machine assets in inventory because calendar-based maintenance dictates when parts should be replaced — whether or not replacement is needed. The alternative is overspending to get parts on short notice and expensive downtime while you wait for the parts to arrive. Whether hoarding parts or paying more for last-minute parts, both methods are inefficient and costly. Machine health empowers you to control how you spend time and money based on the real-time condition of your machine assets. Learn more about optimizing asset care with machine health at www.augury.com/use-cases/business-goal/optimize-asset-care.

Start building a winning machine health culture

Machine health can transform operations for every manufacturer. Get your free copy of Machine Health For Dummies at https://www.augury.com/machinehealthfordummies/.

How much will you save with Augury Machine Health?

Use the ROI calculator at www.augury.com/value-calculator/ to see how much time and money you can save with Augury’s Machine Health.
Article / Updated 10-31-2023
Prompting is both the easiest and the most difficult part of using a generative artificial intelligence (AI) model, like ChatGPT. The complexity of cues and nuances in text-based prompts is why some organizations have a prompt engineering job role. What is a ChatGPT prompt? It's a phrase or sentence that you write in ChatGPT to initiate a response from the AI. ChatGPT responds based on its existing knowledge base.

Don't have time to read the entire article? Jump to the quick read summary.

Prompt engineering is the act of crafting an input, which is a deed borne partly of art and partly of logic. And yes, you can do this! However, you might want to practice and polish your prompting skills before you apply for a job. If you have a good command of the subtleties of language and great critical-thinking and problem-solving skills, seasoned with more than a dash of intuitive intelligence, you’ll be amazed at the responses you can tease out of this technology with a single, well-worded prompt.

When you prompt ChatGPT, you embed the task description in the input (called the prompt) in a natural-language format, rather than entering explicit instructions via computer code. Prompt engineers can be trained AI professionals or people who possess sufficient intuitive intelligence or skills transferrable to crafting the best prompts for ChatGPT (or other generative AI platforms) that solicit the desired outputs. One example of a transferrable skill is a journalist’s ability to tease out the answers they seek in an interview by using direct or indirect methods.

Prompt-based learning is a strategy AI engineers use to train large language models. The engineers make the model multipurpose to avoid retraining it for each new language-based task. Currently, the demand for talented writers who know how to write a prompt, or prompt engineers, is very high. However, there is a strong debate as to whether employers should delineate this unique skill as a dedicated job role, a new profession, or a universal skill to be required of most workers, much like typing skills are today. Meanwhile, people are sharing their prompts with other ChatGPT users in several forums. You can see one example on GitHub.

How to write a prompt

If you enter a basic prompt, you’ll get a bare-bones, encyclopedic answer, as shown in the figure below. Do that enough times and you’ll convince yourself that this is just a toy and you can get better results from an internet search engine. This is a typical novice’s mistake and a primary reason why beginners give up before they fully grasp what ChatGPT is and can do. Understand that your previous experience with keywords and search engines does not apply here. To write awesome ChatGPT prompts, you must think of and use ChatGPT in a different way.

Think hard about how you’re going to word your prompt. You have many options to consider. You can assign ChatGPT a role or a persona, or several personas and roles if you decide it should respond as a team, as illustrated in the following figure. You can assign yourself a new role or persona as well. Or tell it to address any type of audience, such as a high school graduating class, a surgical team, or attendees at a concert or a technology conference. You can set the stage or situation in great or minimal detail. You can ask a question, give it a command, or require specific behaviors. A prompt, as you can see now, is much more than a question or a command.
Your success with ChatGPT hinges on your ability to master crafting a prompt in such a way as to trigger the precise response you seek. Ask yourself these questions as you are writing or evaluating your prompt:

Who do you want ChatGPT to be?

Where, when, and what is the situation or circumstance you want ChatGPT’s response framed within?

Is the question you're entering in the prompt the real question you want it to answer, or were you trying to ask something else?

Is the command you're prompting complete enough for ChatGPT to draw from sufficient context to give you a fuller, more complete, and richly nuanced response?

And the ultimate question for you to consider: Is your prompt specific and detailed, or vague and meandering? Whichever is the case, that’s what ChatGPT will mirror in its response.

ChatGPT’s responses are only as good as your prompt. That’s because the prompt starts a pattern that ChatGPT must then complete. Be intentional and concise about how you present that pattern starter — the prompt. For more details on using ChatGPT, including how to start a chat, reviewing your chat history, and much more, check out my book ChatGPT For Dummies.

Thinking in threads

Conversations happen when one entity’s expression initiates and influences another entity’s response. Most conversations do not conclude after a simple one-two exchange, but rather continue in a flow of responses cued by the interaction with the other participant. The resulting string of messages in a conversation is called a thread. To increase your success with ChatGPT, write prompts as part of a thread rather than as standalone queries. In this way, you'll craft prompts targeted toward the outputs you seek, building one output on another to reach a predetermined end. In other words, you don’t have to pile everything into one prompt. You can write a series of prompts to more precisely direct ChatGPT’s “thought processes.”

Basic prompts result in responses that can be too general or vague. When you think in threads, you’re not aiming to craft a series of basic prompts; you’re looking to break down what you seek into prompt blocks that aim ChatGPT’s responses in the direction you want the conversation to go. In effect, you're using serialized prompts to manipulate the content and direction of ChatGPT's response. Does it work all the time? No, of course not. ChatGPT can opt for an entirely different response than expected, repeat an earlier response, or simply hallucinate one. But serialized prompts do work often enough to enable you to keep the conversation targeted and the responses flowing toward the end you seek.

You can use this method to shape a single prompt by imagining someone asking for clarification of your thought or question. Write the prompt so that it includes that information, and the AI model will have more of the context it needs to deliver an intelligent and refined answer. ChatGPT will not ask for clarification of your prompt; it will guess at your meaning instead. You’ll typically get better quality responses by clarifying your meaning in the prompt itself at the outset.

Chaining prompts and other tips and strategies

Here’s a handy list of other tips and refinements to help get you started on the path to mastering the art of the prompt:

Plan to spend more time than expected on crafting a prompt. No matter how many times you write prompts, the next one you write won’t be any easier to do. Don’t rush this part.

Start by defining the goal. What exactly do you want ChatGPT to deliver?
Craft your prompt to push ChatGPT toward that goal; if you know where you want to end up, you’ll be able to craft a prompt that will get you there.

Think like a storyteller, not an inquisitor. Give ChatGPT a character or a knowledge level from which it should shape its answer. For example, tell ChatGPT that it's a chemist, an oncologist, a consultant, or any other job role. You can also instruct it to answer as if it were a famous person, such as Churchill, Shakespeare, or Einstein, or a fictional character such as Rocky. Give it a sample of your own writing and instruct ChatGPT to write its answer to your question, or complete the task, in the way you would.

Remember that any task or thinking exercise (within reason and the law) is fair game and within ChatGPT’s general scope. For example, instruct ChatGPT to check your homework, your kids’ homework, or its own homework. Enter something such as computer code or a text passage in quotation marks and instruct ChatGPT to find errors in it or in the logic behind it. Or skip the homework checking and ask it to help you think instead. Ask it to finish a thought, an exercise, or a mathematical equation that has you stumped. The only limit to what you can ask is your own imagination and whatever few safety rules the AI trainer installed.

Be specific. The more details you include in the prompt, the better. Basic prompts lead to basic responses. More specific and concise prompts lead to more detailed, more nuanced responses and better performance from ChatGPT — and usually well within token limits.

Use prompt chains as a way of strategizing. Prompt chaining is a technique used to build chatbots, but we can reimagine it here as a way to develop a strategic plan using combined or serial prompting in ChatGPT. This technique uses multiple prompts to guide ChatGPT through a more complex thought process. You can use multiple prompts as a single input, such as telling ChatGPT it's a team consisting of several members with different roles, all of whom are to answer the one prompt you entered. Or you can use multiple prompts in a sequence in which the output of one becomes the input of the next. In this case, each response builds on the prompt you just entered and the prompts you entered earlier. This type of prompt chain forms organically, unless you stop ChatGPT from considering earlier prompts in its responses by starting a new chat. (A short code sketch of this sequential approach appears at the end of this article.)

Use prompt libraries and tools to improve your prompting. Some examples follow:

Check out the Awesome ChatGPT Prompts repository on GitHub at https://github.com/f/awesome-chatgpt-prompts

Use a prompt generator to ask ChatGPT to improve your prompt by visiting PromptGenerator.

Visit ChatGPT and Bing AI Prompts on GitHub.

Use a tool such as Hugging Face’s ChatGPT Prompt Generator.

Try specialized prompt templates, such as the curated list for sales and marketing use cases at Tooltester.

On GitHub, you can find tons of curated lists in repositories as well as lots of free prompting tools from a variety of sources. Just make sure that you double-check sources, apps, and browser extensions for malware before using or relying on them.

Quick Read Summary

Writing effective prompts for ChatGPT is both a craft and a science. A prompt is the phrase or sentence that initiates a response from the AI model. To excel in this skill, consider these essential tips.

Crafting an artful prompt: A well-crafted prompt is essential to unlock ChatGPT's potential. Think beyond basic questions and commands.
You can assign roles or personas to ChatGPT, set the stage, or address different audiences.

Prompt engineering: This skill can be highly valuable. It involves creating prompts that draw out the desired responses from the AI. Prompt engineers often have a background in AI, journalism, or other fields where they've honed their ability to solicit specific information.

Thinking in threads: Instead of standalone queries, use prompts as part of a conversation thread. This helps you build on previous outputs and guide the AI's responses toward your desired end.

Chaining prompts: Connect prompts sequentially to steer ChatGPT's thought process. This approach can lead to more targeted and refined responses. Be patient and put thought into each prompt.

Specificity is key: Detailed prompts lead to more detailed and nuanced responses. Avoid vague or meandering instructions, as ChatGPT mirrors the prompt's clarity.

Prompt libraries and tools: Leverage existing resources to improve your prompting skills. There are repositories and tools available, like the Awesome ChatGPT Prompts repository on GitHub and Hugging Face's ChatGPT Prompt Generator.

The art of imagination

Within reasonable and legal limits, you can instruct ChatGPT to take on various tasks, from checking homework to creative writing. The only boundary is your imagination. In a world where the demand for skilled prompt writers is increasing, your ability to craft the perfect prompt is a valuable asset. By mastering this art, you can unlock the full potential of ChatGPT and guide its responses to meet your specific needs. Hungry for more? Go back and read the article or check out the book.
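For readers who drive ChatGPT through OpenAI's API rather than the chat window, the thread and prompt-chain ideas above amount to resending the accumulated conversation with each call. A minimal sketch follows; the model name, system persona, and the two chained prompts are illustrative placeholders, not prescriptions from this article:

```python
# A minimal sketch of prompt chaining through the OpenAI Python client:
# each response is appended to the message list, so the next prompt builds
# on the previous output. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a concise marketing copywriter."}]

chained_prompts = [
    "List three angles for announcing our eco-friendly product line.",
    "Take the strongest angle from your list and draft a 50-word teaser.",
]

for prompt in chained_prompts:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keeps the thread
    print(answer, "\n---")
```

Keeping the assistant's replies in the message list is what makes the second prompt a continuation of the thread rather than a standalone query.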
Article / Updated 10-30-2023
ChatGPT is a huge phenomenon and a major paradigm shift in the accelerating march of technological progression. Artificial intelligence (AI) research company OpenAI released a free preview of the chatbot in November 2022, and by January 2023, it had more than a million users. So, what is ChatGPT? It's a large language model (LLM) that belongs to a category of AI called generative AI, which can generate new content rather than simply analyze existing data.

Additionally, anyone can interact with ChatGPT (GPT stands for generative pre-trained transformer) in their own words. A natural, humanlike dialog ensues. ChatGPT is often accessed directly online by users, but it is also being integrated with several existing applications, such as Microsoft Office apps (Word, Excel, and PowerPoint) and the Bing search engine. The number of app integrations seems to grow every day as existing software providers hurry to capitalize on ChatGPT’s popularity.

What is ChatGPT used for?

The ways to use ChatGPT are as varied as its users. Most people lean toward more basic requests, such as creating a poem, an essay, or short marketing content. Students often turn to it to do their homework. Heads up, kids: ChatGPT stinks at answering riddles and sometimes word problems in math. Other times, it just makes things up. In general, people tend to use ChatGPT to guide or explain something, as if the bot were a fancier version of a search engine. Nothing is wrong with that use, but ChatGPT can do so much more. How much more depends on how well you write the prompt. If you write a basic prompt, you’ll get a bare-bones answer that you could have found using a search engine such as Google or Bing. That’s the most common reason why people abandon ChatGPT after a few uses. They erroneously believe it has nothing new to offer. But this particular failing is the user’s fault, not ChatGPT’s.

What can ChatGPT do?

This list covers just some of the more unique uses of this technology. Users have asked ChatGPT to:

Conduct an interview with a long-dead legendary figure regarding their views of contemporary topics.

Recommend colors and color combinations for logos, fashion designs, and interior decorating designs.

Generate original works such as articles, e-books, and ad copy.

Predict the outcome of a business scenario.

Develop an investment strategy based on stock market history and current economic conditions.

Make a diagnosis based on a patient’s real-world test results.

Write computer code to make a new computer game from scratch.

Leverage sales leads.

Inspire ideas for a variety of things, from A/B testing to podcasts, webinars, and full-feature films.

Check computer code for errors.

Summarize legalese in software agreements, contracts, and other forms into simple layman’s language.

Calculate the terms of an agreement into total costs.

Teach a skill or provide instructions for a complex task.

Find an error in their logic before implementing their decision in the real world.

Much ado has been made of ChatGPT’s creativity. But that creativity is a reflection and result of the human doing the prompting. If you can think it, you can probably get ChatGPT to play along. Unfortunately, that’s true for bad guys too.
For example, they can prompt ChatGPT to find vulnerabilities in computer code or a computer system; steal your identity by writing a document in your style, tone, and word choices; or edit an audio clip or a video clip to fool your biometric security measures or make it say something you didn’t actually say. Only their imagination limits the possibilities for harm and chaos. Unwrapping ChatGPT fears Perhaps no other technology is as intriguing and disturbing as generative artificial intelligence. Emotions were raised to a fever pitch when 100 million monthly active users snatched up the free, research preview version of ChatGPT within two months after its launch. You can thank science fiction writers and your own imagination for both the tantalizing and terrifying triggers that ChatGPT is now activating in your head, making you wonder: Is ChatGPT safe? There are definitely legitimate reasons for caution and concern. Lawsuits have been launched against generative AI programs for copyright and other intellectual property infringements. OpenAI and other AI companies and partners stand accused of illegally using copyrighted photos, text, and other intellectual property without permission or payment to train their AI models. These charges generally spring from copyrighted content getting caught up in the scraping of the Internet to create massive training datasets. In general, legal defense teams are arguing the inevitability and unsustainability of such charges in the age of AI and requesting that charges be dropped. The lawsuits regarding who owns the content generated by ChatGPT and its ilk lurk somewhere in the future. However, the U.S. Copyright Office has already ruled that AI-generated content, be it writing, images, or music, is not protected by copyright law. In the U.S., at least for now, the government will not protect anything generated by AI in terms of rights, licensing, or payment. Meanwhile, realistic concerns exist over other types of potential liabilities. ChatGPT and ChatGPT alternatives are known to sometimes deliver incorrect information to users and other machines. Who is liable when things go wrong, particularly in a life-threatening scenario? Even if a business’s bottom line is at stake and not someone's life, risks can run high and the outcome can be disastrous. Inevitably, someone will suffer and likely some person or organization will eventually be held accountable for it. Then, there are the magnifications of earlier concerns, such as data privacy, biases, unfair treatment of individuals and groups through AI actions, identity theft, deep fakes, security issues, and reality apathy, which is when the public can no longer tell what is true and what isn’t and thinks the effort to sort it all out is too difficult to pursue. In short, all of this probably has you wondering: Is ChatGPT safe? The potential to misuse it accelerates and intensifies the need for the rules and standards currently being studied, pursued, and developed by organizations and governments seeking to establish guardrails aimed at ensuring responsible AI. The big question is whether they’ll succeed in time, given ChatGPT’s incredibly fast adoption rate worldwide. 
Examples of groups working on guidelines, ethics, standards, and responsible AI frameworks include the following:
ACM US Technology Committee’s Subcommittee on AI & Algorithms
World Economic Forum
UK’s Centre for Data Ethics
Government agencies and efforts such as the US AI Bill of Rights and the European Council of the European Union’s Artificial Intelligence Act
IEEE and its 7000 series of standards
Universities such as New York University’s Stern School of Business
The private sector, wherein companies make their own responsible AI policies and foundations
How does ChatGPT work?
ChatGPT works differently than a search engine. A search engine, such as Google or Bing, or an AI assistant, such as Siri, Alexa, or Google Assistant, works by searching the Internet for matches to the keywords you enter in the search bar. Algorithms refine the results based on any number of factors, but your browser history, topic interests, purchase data, and location data usually figure into the equation. You’re then presented with a list of search results ranked in order of relevance as determined by the search engine’s algorithm. From there, the user is free to consider the sources of each option and click a selection to do a deeper dive for more details from that source. By comparison, ChatGPT generates its own unified answer to your prompt. It doesn't offer citations or note its sources. You ask; it answers. Easy-peasy, right? No. That task is incredibly hard for AI to do, which is why generative AI is so impressive. Generating an original result in response to a prompt is achieved by using either the GPT-3 (Generative Pre-trained Transformer 3) or GPT-4 model to analyze the prompt with context and predict the words that are likely to follow. Both GPT models are extremely powerful large language models capable of processing billions of words per second. In short, transformers enable ChatGPT to generate coherent, humanlike text as a response to a prompt. ChatGPT creates a response by considering context and assigning weight (values) to words that are likely to follow the words in the prompt to predict which words would be an appropriate response. Some ChatGPT basics here: User input is called a prompt rather than a command or a query, although it can take either form. You are, in effect, prompting AI to predict and complete a pattern that you initiated by entering the prompt. If you'd like a comprehensive ChatGPT guide, including more detail on how it works and how to use it, check out my book ChatGPT For Dummies.
Peeking at the ChatGPT architecture
As its name implies, ChatGPT is a chatbot running on a GPT model. GPT-3, GPT-3.5, and GPT-4 are large language models (LLMs) developed by OpenAI. When GPT-3 was introduced, it was the largest LLM at 175 billion parameters. An upgraded version called GPT-3.5 turbo is a highly optimized and more stable version of GPT-3 that's ten times cheaper for developers to use. ChatGPT is now also available on GPT-4, which is a multimodal model, meaning it accepts both image and text inputs although its outputs are text only. It's now the largest LLM to date, although GPT-4’s exact number of parameters has yet to be disclosed. Parameters are numerical values that weigh and define connections between nodes and layers in the neural network architecture. The more parameters a model has, the more complex its internal representations and weighting. In general, more parameters lead to better performance on specific tasks.
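The weighting idea is easier to picture with a deliberately tiny toy example. The candidate words and scores below are invented for illustration only; a real GPT model derives its weights from billions of trained parameters in a transformer network and works on tokens rather than whole words, but the sampling step at the end is conceptually similar.

```python
# Toy illustration of next-word prediction by weighted sampling.
# The candidate words and scores are made up; a real LLM computes
# them with a transformer network trained on vast amounts of text.
import math
import random

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend these are the model's raw scores for what follows the prompt.
prompt = "The cat sat on the"
candidates = {"mat": 4.0, "sofa": 2.5, "roof": 1.5, "equation": -2.0}

words = list(candidates)
probs = softmax(list(candidates.values()))

# Sample the next word in proportion to its weight, then repeat the
# process word by word -- which is how a full response gets generated.
next_word = random.choices(words, weights=probs, k=1)[0]
print(f"{prompt} {next_word}")
```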
ChatGPT for beginners
Here, you'll learn the basics of how to use ChatGPT and why it relies on your skills to optimize its performance. But the real treasure here is the tips and insights on how to write prompts so that ChatGPT can perform its true magic. You can learn even more about writing prompts in my book ChatGPT For Dummies.
Writing effective ChatGPT prompts
ChatGPT appears deceptively simple. The user interface is elegantly minimalistic and intuitive, as shown in the figure below. The first part of the page offers information to users regarding ChatGPT’s capabilities and limitations plus a few examples of prompts. The prompt bar, which resembles a search bar, runs across the bottom of the page. Just enter a question or a command to prompt ChatGPT to produce results immediately. If you enter a basic prompt, you’ll get a bare-bones, encyclopedic answer, as shown in the figure below. Do that enough times and you’ll convince yourself that this is just a toy and you can get better results from an Internet search engine. This is a typical novice’s mistake and a primary reason why beginners give up before they fully grasp what ChatGPT is and can do. Understand that your previous experience with keywords and search engines does not apply here. You must think of and use ChatGPT in a different way. Think hard about how you’re going to word your prompt. You have many options to consider. You can assign ChatGPT a role or a persona, or several personas and roles if you decide it should respond as a team, as illustrated in the figure below. You can assign yourself a new role or persona as well. Or tell it to address any type of audience — such as a high school graduating class, a surgical team, or attendees at a concert or a technology conference. You can set the stage or situation in great or minimal detail. You can ask a question, give it a command, or require specific behaviors. A prompt, as you can see now, is much more than a question or a command. Your success with ChatGPT hinges on your ability to master crafting a prompt in such a way as to trigger the precise response you seek. Ask yourself these questions as you are writing or evaluating your prompt:
Who do you want ChatGPT to be?
Where, when, and what is the situation or circumstances you want ChatGPT’s response framed within?
Is the question you're entering in the prompt the real question you want it to answer, or were you trying to ask something else?
Is the command you're prompting complete enough for ChatGPT to draw from sufficient context to give you a fuller, more complete, and richly nuanced response?
And the ultimate question for you to consider: Is your prompt specific and detailed, or vague and meandering? Whichever is the case, that’s what ChatGPT will mirror in its response. ChatGPT’s responses are only as good as your prompt. That’s because the prompt starts a pattern that ChatGPT must then complete. Be intentional and concise about how you present that pattern starter — the prompt.
Starting a chat
To start a chat, just type a question or command in the prompt bar, shown at the bottom of the figure below. ChatGPT responds instantly. You can continue the chat by using the prompt bar again. Usually, you do this to gain further insights or to get ChatGPT to further refine its response. Following are some things you can do in a prompt that may not be readily evident: Add data in the prompt along with your question or command regarding what to do with this data.
Adding data directly in the prompt enables you to add more current info as well as make ChatGPT responses more customizable and on point. You can use the Browsing plug-in to connect ChatGPT to the live Internet, which will give it access to current information. However, you may want to add data to the prompt anyway to better focus its attention on the problem or task at hand. However, there are limits on prompting and response sizes, so make your prompt as concise as possible. Direct the style, tone, vocabulary level, and other factors to shape ChatGPT's response. Command ChatGPT to assume a specific persona, job role, or authority level in its response. If you’re using ChatGPT-4, you'll soon be able to use images in the prompt too. ChatGPT can extract information from the image to use in its analysis. When you’ve finished chatting on a particular topic or task, it’s wise to start a new chat (by clicking or tapping the New Chat button in the upper left). Starting a new dialogue prevents confusing ChatGPT, which would otherwise treat subsequent prompts as part of a single conversational thread. On the other hand, starting too many new chats on the same topic or related topics can lead the AI to use repetitious phrasing and outputs, whether or not they apply to the new chat’s prompt. To recap: Don't confuse ChatGPT by chatting in one long continuous thread with a lot of topic changes or by opening too many new chats on the same topic. Otherwise, ChatGPT will probably say something offensive or make up random and wrong answers. When writing prompts, think of the topic or task in narrow terms. For example, don't have a long chat on car racing, repairs, and maintenance. To keep ChatGPT more intently focused, narrow your prompt to a single topic, such as determining when the vehicle will be at top trade-in value so you can best offset a new car price. Your responses will be of much higher quality. ChatGPT may call you offensive names and make up stuff if the chat goes on too long. Shorter conversations tend to minimize these odd occurrences, or so most industry watchers think. For example, after ChatGPT responses to Bing users became unhinged and argumentative, Microsoft limited conversations with it to 5 prompts in a row, for a total of 50 conversations a day per user. But a few days later, it increased the limit to 6 prompts per conversation and a total of 60 conversations per day per user. The limits will probably increase when AI researchers can figure out how to tame the machine to an acceptable — or at least a less offensive — level. Quick Read Summary ChatGPT, a product of OpenAI, represents a groundbreaking advancement in the world of artificial intelligence. Released as a free preview in November 2022, it quickly gained over a million users by January 2023. ChatGPT is a powerful example of generative AI, capable of generating new content instead of just analyzing existing data. This versatile tool is accessible online and is being integrated into various applications like Microsoft Office and Bing search, expanding its utility daily. Users initially engage with ChatGPT for basic tasks like crafting poems, essays, or marketing content. Students use it for homework. But all who use it should be cautious: ChatGPT struggles with riddles and word problems in math. It also has a tendency to make things up. People tend to use ChatGPT to guide or explain something, but its potential goes beyond simple requests. Depending on the quality of your prompt, it can perform a wide range of tasks. 
Users have leveraged ChatGPT for tasks like conducting interviews with historical figures, recommending color combinations, generating articles, predicting business scenarios, and even diagnosing medical conditions based on patients’ real-world test results. Users can harness ChatGPT’s capabilities for both good and ill, from identifying vulnerabilities in computer systems to creating deepfakes. Therefore, as ChatGPT's popularity soars, concerns about its safety and misuse grow. Legal battles surrounding copyright infringement and accountability for incorrect information continue to emerge. Ethical guidelines and standards are under development by organizations and governments to ensure responsible AI usage. ChatGPT operates differently from search engines and AI assistants. It generates original responses to prompts, making it a valuable tool for diverse tasks. Users must craft prompts effectively to receive meaningful responses, considering factors like context, role assignment, and audience specification. In summary, ChatGPT is a game-changer in AI technology, offering endless possibilities when used responsibly. Its potential for good or harm depends on the user, emphasizing the need for ethical guidelines and responsible AI practices. Hungry for more? Go back and read the article or check out the book.
Article / Updated 07-10-2023
They say if it ain’t broke, don’t fix it, but anyone with high-value assets, whether a fleet of bucket trucks or drilling rigs, knows preventive maintenance is much more effective than performing repairs reactively. Servicing equipment before it fails reduces costly downtime and extends its lifespan, thus stretching your resources as far as possible. This concept certainly isn’t new. Routine equipment checks and preventive maintenance in general have been the responsibility of every maintenance department for decades. But here’s the good part. You can use AI and Internet of Things (IoT) sensors to go beyond preventive maintenance to implement predictive maintenance. Preventive maintenance prevents failures with inspections and services performed at predetermined intervals. Predictive maintenance uses large volumes of data and advanced analytics to anticipate the likelihood of failure based on the history and current status of a specific piece of equipment and recommends service before the failure happens. How do you like the sound of that? It’s the sound of asset performance optimization. This figure traces the evolution of this concept.
Spying on Your Machines
Asset performance optimization (APO) collects the digital output from IoT-enabled equipment and associated processes, analyzes the data, tracks performance, and makes recommendations regarding maintenance. APO allows you to forecast future needs and perform predictive maintenance before immediate actions are needed. Although some machines run continuously with little need for maintenance, others require much more care and attention to operate at their best level. Determining which equipment needs more frequent servicing can be time-consuming and tedious. Often, maintenance guidelines rely heavily on guesswork. Time frames for tune-ups tend to be little more than suggestions, based on information such as shop manuals and a recommendation from the lead mechanics rather than hard data, such as metrics from the performance history of each piece of equipment, including downtime and previous failures. APO, on the other hand, analyzes both structured and unstructured data, such as field notes, to add context for equipment readings and deliver more precise recommendations. Using IoT devices, APO systems gather data from sensors, EIM systems, and external sources. The system then uses AI to acquire, merge, manage, and analyze this information, presenting the results in real-time dashboards that can be shared and reviewed quickly for at-a-glance updates. Throughout the lifespan of the equipment, through regular use and maintenance, the system continues to learn and improve its insights over time.
Fixing It Before It Breaks
APO allows you to take a strategic approach to predictive maintenance by focusing on what will make the greatest difference to your operation. Unforeseen equipment malfunctions and downtime cause disruptions, which can ultimately jeopardize project timelines, customer satisfaction, and business revenue. These are benefits of APO:
Smoother, more predictable operations: Equipment issues are addressed preemptively instead of after a disruption occurs, leading to greater overall operational efficiency. Implementing APO can help companies eliminate 70 percent of unanticipated equipment failures.
Reduced downtime: Predictive maintenance typically reduces machine downtime by 30 to 50 percent.
Boosted productivity: In addition to reducing downtime, predictive maintenance allows you to become more strategic in scheduling maintenance. This also can uncover any routine maintenance tasks that can be performed at less frequent intervals.
Lowered costs: APO can reduce the time required to plan maintenance by 20 to 50 percent, increase equipment uptime and availability by 10 to 20 percent, and reduce overall maintenance costs by 5 to 10 percent, according to a Deloitte study.
Increased customer satisfaction: Assets nearing failure can sacrifice production quality, cause service outages, and create other circumstances that ultimately affect the customer. By preventing these issues from happening, APO helps companies achieve and maintain better customer satisfaction.
Improved safety outcomes: Equipment malfunctions can result in serious injury, but often companies don’t know a system is about to fail until it’s too late. PwC reports that APO and predictive maintenance can improve maintenance safety by up to 14 percent.
Reduced risk of litigation and penalties: With fewer breakdowns and disruptions to service, organizations can minimize their risk of costly fines, lawsuits, and subsequent reputational damage.
Ultimately, in any industry with high-value equipment, or even large numbers of low-cost assets, APO pays off. It directs technicians’ efforts to the machines that need attention the most, instead of performing inspections or maintenance on the equipment that doesn’t need it. This leads to predictable and seamless operations, improved uptime, increased revenue opportunities, and greater customer satisfaction.
Learning from the Future
APO solutions allow you to enhance your operations by making your machines smarter and sharing that intelligence with your workforce, thereby maximizing the value of both your human teams and your mechanical equipment. As the age of AI advances, the companies that thrive will be those that find the best ways to harness data for improved operational performance.
Data collection
APO continuously collects data on mechanical performance from IoT sensors in virtually any type of device or machine, ranging from hydraulic brake system sensors on a train to temperature monitors inside industrial and medical refrigerators holding sensitive products. The system collects numerous data points from the sensors, blends them with other sources, and analyzes the results. For example, in the case of a hydraulic brake system, APO compares current data to historical performance records, including failure reports, to deliver predictive maintenance insights. When further data inputs are blended with this specific brake data, even richer and more accurate recommendations can be delivered. Additional inputs from internal and third-party sources can be blended to provide context and greater insight; these types of input can be invaluable:
Weather data
Maintenance recommendations from manuals
Supplier quality data
Historical brake maintenance schedules and failure rates
Passenger travel analysis
Heavy or unusual usage data
Using this comprehensive blend of data, you can implement an APO solution to recognize patterns and perform in-depth analysis in multiple applications, from manufacturing plants to utilities and even healthcare. The data can include metrics such as temperature, movement, light exposure, and more, collected from IoT sensors on fleets, plants, pipelines, medical imaging equipment, jets, grids, and any other Internet-enabled device.
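To make the data-blending idea concrete, here is a rough sketch that compares each asset's recent sensor readings with its own historical baseline and flags the assets drifting out of range. The file names, column names, and the three-standard-deviation rule are hypothetical choices for illustration; a production APO system blends many more sources and typically relies on trained models rather than a single fixed rule.

```python
# Hypothetical sketch: flag assets whose recent readings drift from
# their own historical baseline. File and column names are invented.
import pandas as pd

history = pd.read_csv("sensor_history.csv")   # asset_id, timestamp, temperature, vibration
recent = pd.read_csv("sensor_last_24h.csv")   # same columns, last 24 hours only

# Per-asset historical baseline: mean and standard deviation of each metric.
baseline = history.groupby("asset_id")[["temperature", "vibration"]].agg(["mean", "std"])
baseline.columns = ["temp_mean", "temp_std", "vib_mean", "vib_std"]

# Average the recent readings per asset and join them to the baseline.
latest = recent.groupby("asset_id")[["temperature", "vibration"]].mean()
merged = latest.join(baseline)

# Simple rule: flag anything more than 3 standard deviations from its own norm.
merged["flag"] = (
    ((merged["temperature"] - merged["temp_mean"]).abs() > 3 * merged["temp_std"])
    | ((merged["vibration"] - merged["vib_mean"]).abs() > 3 * merged["vib_std"])
)

print(merged[merged["flag"]].index.tolist())  # assets to inspect first
```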
Analysis
AI uses big data analytics and natural-language processing to derive key data points from structured data as well as unstructured content such as field notes and equipment manuals. You can use AI to analyze this information and relevant historical data to identify patterns and generate questions that help engineers, maintenance supervisors, production managers, and other key personnel make informed and timely decisions. APO solutions use these patterns to make predictive conclusions that help you answer questions in various areas, such as these:
Timing: Am I performing inspections at appropriate intervals? Would shortening the intervals improve overall uptime? Or could I afford to lengthen the intervals to reduce resource expenditure?
Quality: Could defective components be slipping through my inspections and leading to downtime? If so, how can I improve the inspection process to prevent this issue moving forward?
Design: Can my design be modified to reduce future failures?
For example, a predictive conclusion formed by the patterns observed in the case of the train brake system might indicate the need for shorter inspection intervals. This is where humans come in and leverage all of these valuable findings to improve their business.
Putting insights to use
After patterns are identified and their related questions are answered, the predictive conclusions provided by APO solutions can then be implemented. For example, train maintenance workers can schedule inspections more frequently to check a key component in the hydraulic brake system that has shown a tendency to fail. Or perhaps the APO solution discovers that a defective component in the train needs attention. Field engineers can use a digital model of the train to determine a repair strategy. If the part cannot be repaired, the APO solution can trigger a replacement part order through the supply chain network, using automation to streamline the process of getting the train back up and running.
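As an illustration of how historical patterns become a predictive conclusion, the sketch below trains a simple classifier on past inspection records to estimate the probability that a brake assembly fails before its next scheduled inspection. The file names, feature columns, and choice of logistic regression are assumptions made for the example, not a description of any particular APO product.

```python
# Illustrative sketch: estimate failure risk before the next inspection
# from historical records. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

records = pd.read_csv("brake_inspection_history.csv")
# Hypothetical columns: days_since_service, mileage_since_service,
# avg_brake_temp, heavy_usage_flag, failed_before_next_inspection (0/1)

features = ["days_since_service", "mileage_since_service",
            "avg_brake_temp", "heavy_usage_flag"]
X = records[features]
y = records["failed_before_next_inspection"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")

# Score the current fleet; assets above a chosen risk threshold get
# pulled forward for an earlier inspection.
fleet = pd.read_csv("current_fleet_status.csv")[features].copy()
fleet["failure_risk"] = model.predict_proba(fleet[features])[:, 1]
print(fleet.sort_values("failure_risk", ascending=False).head())
```

Maintenance planners would then translate the highest risk scores into shorter inspection intervals or immediate work orders, which is the human step described above.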
Article / Updated 07-05-2023
All that data being collected in manufacturing from IoT devices at unprecedented volume and velocity is driving the fourth industrial revolution. The first industrial revolution was powered by steam. The second was powered by electricity. The third was powered by silicon, which enabled unprecedented computing power. And the fourth industrial revolution is being powered by data. In fact, in the last decade, data has emerged as the new currency that operates across all levels of commerce, right down to the consumer, who pays for the use of “free” social media platforms with their personal data, which those platforms exchange with their clients. The combination of AI and analytics can help manufacturers optimize the use of their IoT data for many applications. Consider three related strategies: proactive replenishment, predictive maintenance, and pervasive visibility. The following figure shows the relationship between the method of controlling costs and the AI technique used to accomplish it. Connected supply chain A supply chain connects a customer to the raw materials required to produce the product. It can be as simple as a single link, such as when the customer stops at a roadside stand to buy tomatoes directly from the farmer. Or it can be very complex and involve dozens of links, such as all the steps between a customer driving a car off the lot back to mining the iron ore or bauxite for the steel or aluminum engine block. In a traditional supply chain, each link operates as a black box, connected to each other by a paper-thin link made up of documents such as purchase orders, shipping manifests, invoices, and the like. Each entity does its own planning, forecasting, and ordering while being blind to conditions on either side of it in the chain. You could think of the links in the traditional supply chain as separate continents, each with its own ecosystem but largely isolated and insulated from the ecosystems of other continents. A connected supply chain brings those continents together like the supercontinent Pangea in the Paleozoic era, forming one interconnected ecosystem of partners, suppliers, and customers. In the connected supply chain, the paper-thin connection of the traditional model is replaced with a digital connection that provides full visibility in all directions. A 2017 McKinsey study suggests that companies that aggressively digitize their supply chains can expect to boost annual growth of earnings by an average of 3.2 percent — the largest increase from digitizing any business area — and annual revenue growth by 2.3 percent. In a recent KRC Research survey of manufacturing executives, 46 percent indicated that big data analytics and IoT are essential for improving supply chain performance. They identified the two areas where big data analytics can have the greatest impact on manufacturing to be improving supply chain performance (32 percent) and enabling real-time decisions and avoiding unplanned downtime (32 percent). The study also identified the top three benefits of big data for manufacturers: Enabling well-informed decisions in real-time (63 percent) Reducing wasted resources (57 percent) Predicting the risks of downtime (56 percent) The connected supply chain enables you to react to changing market needs by sharing insights between every node in the value chain. 
There are many other applications for AI in manufacturing, some of which I address later, but the three elements of the connected supply chain — proactive replenishment, predictive maintenance, and pervasive visibility — provide an intuitive case for using AI to revolutionize your business.
Proactive replenishment
Optimizing inventory levels while improving customer experience requires the ability to automate much of the replenishment process. Proactive replenishment leverages analytics to monitor inventory consumption and initiate a purchase order on the business network to the supplier to replenish the stock before an out-of-stock situation occurs. (A simplified reorder-point sketch appears at the end of this article.) The intelligent and connected supply chain provides real-time inventory visibility. In addition to reporting stock levels, it can indicate the condition of each item — such as the temperature at which it is stored — to ensure the quality of those items. Manufacturers can automate the replenishment of parts from the supplier before they are needed in the production process. They can also query multiple suppliers and procure from whichever has availability and the best rates, taking into account the shipping times required for on-time delivery.
Predictive maintenance
The ability to predict when a part or sub-system of a serviceable product is likely to fail, and to intervene before it does, can save a manufacturer millions of dollars, and thus predictive maintenance is a key investment area for the supply chain. Whether that part is within the production process, within the warehousing environment, or part of a connected vehicle, an IoT network automatically monitors and analyzes performance to boost operating capacity and lifespan. This system intelligently decides whether the part needs to be replaced or repaired and can automatically trigger the correct process, typically reducing machine downtime by 30 to 50 percent and increasing machine life by 20 to 40 percent. Studies by the Federal Energy Management Program of the U.S. Department of Energy found that predictive maintenance provides a 30 to 40 percent spending reduction compared to reactive maintenance, and an 8 to 12 percent spending reduction compared to preventive maintenance. Organizations often make incremental progress toward predictive maintenance, starting with monitoring IoT data in the control room and reacting to problematic readings. In the next stage, they run reports to view recommendations for maintenance. In the final stage, they remove the requirement for human interaction to initiate maintenance by automating the creation of repair tickets.
Pervasive visibility
Pervasive visibility is the ability to see exactly where goods are during their life cycle, providing a view of the current state of all assets across the entire organization and beyond to partners, customers, competitors, and even the impact of the weather on operations and fulfillment. IoT plays a key role in providing that visibility. Current estimates predict that there will be 75 billion connected devices by 2025, ten connected devices for every human on the planet. Most of those devices will be in the manufacturing sector. When you consider that a single device can produce gigabytes of information each minute, the volume and velocity of data within production and operations can easily spiral out of control — or, more often, simply be ignored. While 60 percent of executives that McKinsey surveyed said that IoT data yielded significant insights, 54 percent admitted that they used less than 10 percent of that IoT information.
Making sense of that data is where AI comes in. By combining AI and analytics, you can bring together information from a variety of data sources, identify patterns and trends, and provide recommendations for future actions. It provides the basis for new levels of production and business process automation, as well as improved support for employees in their daily roles and in their decision-making. To achieve these gains in the supply chain, AI has the potential to bring together both structured and unstructured data from a wide range of sources, including IoT devices, plant operations, and external partners to identify patterns linking factors such as demand, location, socioeconomic conditions, weather, and political status. This information forms a basis for a new level of supply chain optimization spanning raw materials, logistics, inventory control, and supplier performance and helps anticipate and react to market changes.
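As a simplified picture of the proactive replenishment described earlier in this article, the sketch below projects how many days of stock remain from average consumption and raises a purchase order whenever projected cover falls inside the supplier's lead time plus a safety buffer. The part numbers, usage figures, and ordering function are placeholders; a real system would pull live stock levels from the business network and route the order automatically.

```python
# Simplified proactive-replenishment sketch. Inventory figures, lead
# times, and the ordering function are placeholders for illustration.

inventory = {
    # part_id: (units_on_hand, avg_daily_usage, supplier_lead_time_days)
    "hydraulic-seal-A12": (140, 18, 6),
    "bearing-B7":         (900, 25, 10),
    "filter-C3":          (60,  2,  4),
}

SAFETY_DAYS = 3  # extra buffer on top of the supplier lead time

def place_purchase_order(part_id, quantity):
    """Stand-in for sending an order to the supplier over the business network."""
    print(f"PO raised: {quantity} x {part_id}")

for part_id, (on_hand, daily_usage, lead_time) in inventory.items():
    days_of_cover = on_hand / daily_usage
    reorder_point_days = lead_time + SAFETY_DAYS
    if days_of_cover <= reorder_point_days:
        # Order enough to cover the lead time plus the buffer again.
        quantity = daily_usage * reorder_point_days
        place_purchase_order(part_id, quantity)
```

In this toy run, only the hydraulic seal (about 7.8 days of cover against a 9-day reorder point) triggers an order; richer systems would also weigh supplier availability, rates, and shipping times, as described above.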