Give Generative AI a Try: What Are Prompts and How to Create Them

So, what exactly are “prompts” in the context of using AI? The MIT Sloan School of Management, in partnership with Sloan Technology Services, describes prompts as “conversation starters: what and how you tell something to the AI for it to respond in a way that generates useful responses for you.” Once you have a response from AI, you can then build upon that response with another prompt. Essentially, it’s like having a conversation with another person, only in this case the conversation is text-based, and your conversation partner is AI.1 However, for prompts to produce helpful and relevant responses, there are ways in which prompts should (and should not) be written to yield the most effective results.

Here are 5 Pitfalls to Avoid When Writing Prompts:

  1. The Vagueness Trap – Using clear, specific prompts is crucial for obtaining accurate and useful outputs from AI. You might think a vague prompt will yield more results, but you will spend more time evaluating the AI’s responses for relevance, ultimately costing you time rather than saving it.
  2. Information Overload – Break your requests into a series of focused, manageable prompts. As stated above, be specific in your initial prompt. Once you have a useful response, you can then build upon that response with another prompt.
  3. Context Vacuum – Craft a prompt that includes critical details and relevant information. Again, be specific in your prompts. If your initial prompt doesn’t return a helpful response, simply try again. But the more relevant information you include up front, the more likely the response will be helpful for your task.
  4. Creativity Crunch – Use AI as a brainstorming partner or as a starting point. If you are having difficulty drafting a document, starting a task, or deciding where to begin, prompt AI. Understand, however, that AI is a tool to enhance creativity, not to replace it.
  5. Privacy Pitfall – NEVER put anything into public AI you can’t “write in the sky” or declare on the courthouse steps. It is important to understand that whatever information you input into generative AI, such as “ChatGPT” or “Perplexity,” goes into the “collective”: both the information you input and the output generated can become part of the AI’s training data and be used to enhance the model itself. Thus, never put any identifying client or privileged information into AI that would expose you to violations of the Rules of Professional Conduct.

As an example of a “bad” versus “good” prompt, a “bad” prompt might simply say, “write about a dog.” A “good” prompt, by contrast, might say, “Write a short story about a golden retriever that gets lost in the woods, focusing on how it finds its way home.” The difference is that the “good” prompt specifies the type of content (a short story) and describes the subject (a golden retriever lost in the woods), which produces a more useful response.

Another example of a “bad” prompt would be, “Write about the history of space exploration, including all major milestones, key astronauts, scientific discoveries, and how it has impacted society.” A “good” prompt might instead say, “Write about the Apollo 11 mission and its impact on space exploration and society.” The latter avoids information overload and narrows the scope to one specific event, making the request clear and manageable for the AI.

There are multiple free AI sites and apps (ChatGPT, Perplexity, etc.) where you can try out and practice your prompting skills. LMICK will continue to share useful AI information, tips, and tricks in our AI series in future issues of the LMICK Minute. So, stay tuned!

Finally, when using AI technology, please also be aware that AI can produce “hallucinations”: incorrect or misleading results that the AI model has generated. These results may appear legitimate and lead the user and others to believe the information is true or real, when in fact it is false or made up. Accordingly, you must always double-check the information contained in any result or response from AI. A good rule of thumb is to supervise AI’s results as you would supervise an associate or paralegal.