Building Better Prompts: A Practical Guide to Prompt Engineering
Prompt engineering has become part of daily work for many people with the rise of modern LLMs such as ChatGPT, Claude, Gemini, and DeepSeek. In simple terms, it is the practice of structuring and phrasing instructions for AI systems so that they perform tasks more accurately. Whether you’re querying a model for quick tasks, creating content, or building AI-powered tools, the way you phrase a prompt has a major impact on the quality of the output. Even small adjustments can improve reliability, reduce hallucinations, and make the results much easier to use in production.
At Opply, as we develop agents to automate supply chain processes, we’ve identified several practical techniques we rely on to get stable, accurate and context-aware outputs from the models. Below are five tips that have helped us the most.
1. Give the model an identity
One of the most effective prompt engineering techniques is assigning the model a clear identity. Instead of giving instructions in a generic way, you frame the model with a defined role so it adopts the right perspective. This helps it understand how it should think, not just what it should do. Some examples include:
- Domain specialist: "You are a food and beverage domain specialist who understands ingredient standards."
- Tone or style control: "You are a concise technical writer who explains concepts in simple language."
- Safety-focused identity: "You are an AI agent, not a human expert. Provide safe, general information and do not make assumptions."
There are many other identities you can use to guide behaviour, such as giving the agent a company-specific role, defining its reasoning style, or narrowing its focus to a single task. These identities help anchor the agent’s behaviour, align responses with the role you expect, and often improve accuracy and consistency.
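To make this concrete, here is a minimal sketch of setting the identity as a system message, assuming the OpenAI Python SDK (the model name and both prompts are just placeholders):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The identity lives in the system message; the task lives in the user message.
system_prompt = (
    "You are a food and beverage domain specialist who understands "
    "ingredient standards. Answer concisely and do not make assumptions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "List the typical ingredients of oat milk."},
    ],
)
print(response.choices[0].message.content)

The same identity text can be reused across many calls, which keeps the agent's perspective consistent.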
2. Define the input and output structure
LLMs perform best when the expected format is crystal clear. Provide a structured input format, and define the exact output format you want. JSON works especially well because it is easy to parse in production code.
Example:
The input will be provided in the following JSON format:
{
"product_name": "string",
"description": "string"
}
Return a valid JSON object with the exact structure below. Do not add or remove fields:
{
"ingredients": ["string"],
"allergens": ["string"],
"category": "string"
}
Think of it as giving the model a form to fill in rather than asking for a free-form answer. This reduces ambiguity and makes the results more reliable.
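To enforce that contract on the way back in, the reply can be validated against the same structure. A minimal sketch, assuming pydantic v2 and a response_text variable that holds the raw model output:

from pydantic import BaseModel, ValidationError

# Mirrors the output structure requested in the prompt.
class ProductAnalysis(BaseModel):
    ingredients: list[str]
    allergens: list[str]
    category: str

def parse_model_output(response_text: str) -> ProductAnalysis | None:
    try:
        # Fails if fields are missing, renamed or have the wrong type.
        return ProductAnalysis.model_validate_json(response_text)
    except ValidationError:
        # In production you might retry the call or log the raw output for review.
        return None

If validation fails, you know the prompt (or the model) needs attention, instead of letting malformed data flow downstream.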
3. Provide fallback responses
Whenever you ask the model to extract, classify or detect something, always define what it should do when there is nothing to return. Without a fallback, the model may try to invent information to complete the task. By giving it a default response, you prevent this from happening and keep the output predictable.
Example:
- If no ingredients are found in the input, return "none".
- If the field cannot be determined, use an empty list or an empty string instead of guessing.
Clear fallback rules reduce hallucinations and make your downstream pipelines easier to maintain.
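As an illustration, the fallback rules can simply be appended to the task description when the prompt is built. A small sketch with hypothetical helper names:

# Prompt fragment that makes the fallback behaviour explicit.
FALLBACK_RULES = (
    'If no ingredients are found in the input, return an empty list for "ingredients".\n'
    'If the category cannot be determined, return an empty string for "category".\n'
    "Never invent values to fill a field."
)

def build_extraction_prompt(description: str) -> str:
    # Hypothetical helper: combines the task, the fallback rules and the input text.
    return (
        "Extract the ingredients and category from the product description below.\n"
        + FALLBACK_RULES
        + "\n\nDescription:\n"
        + description
    )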
4. Be specific and give examples
Specificity is one of the strongest tools you have. Clear rules and unambiguous instructions help the model understand exactly what you expect, which reduces variation in the output. This is especially important for tasks that are unusual, multi-step, or require domain knowledge the model might not naturally apply.
Whenever possible, include examples. These can be concrete input-output pairs, or lighter scenario-based hints such as:
- If two categories seem possible, choose the more specific one.
- If the description contains multiple products, only analyse the first one.
Examples act as behavioural reference points and give the model patterns to follow. Even a single example can significantly improve consistency, reduce confusion and help the model generalise correctly across edge cases.
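For instance, two input-output pairs can be embedded directly in the prompt as a reference. A minimal sketch with made-up example data:

# Hypothetical few-shot pairs; each one shows the model the exact mapping we expect.
FEW_SHOT_EXAMPLES = """
Input: {"product_name": "Oat Drink", "description": "Water, oats (10%), sunflower oil, salt."}
Output: {"ingredients": ["water", "oats", "sunflower oil", "salt"], "allergens": ["oats"], "category": "plant-based drink"}

Input: {"product_name": "Sea Salt", "description": "Pure sea salt, nothing else."}
Output: {"ingredients": ["sea salt"], "allergens": [], "category": "seasoning"}
"""

def build_few_shot_prompt(product_json: str) -> str:
    # The new input is appended after the examples so the model completes the pattern.
    return (
        "Analyse the product below and answer in the same JSON format as the examples.\n\n"
        + FEW_SHOT_EXAMPLES
        + "\nInput: " + product_json + "\nOutput:"
    )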
5. Avoid overloading the prompt
Building on the previous tip, it is helpful to provide examples and clear rules, but there is a limit. If your prompt contains too many conditions, exceptions or branching cases, the model may struggle to follow them. Natural language instructions become harder to interpret when several layers of logic are stacked together. If a task involves multiple branches or complex decision paths, it is more effective to break it into separate steps or even separate prompts. This keeps each instruction focused, reduces errors and helps the model follow the intended logic more reliably.
Example:
Instead of writing one long prompt like:
Extract ingredients, classify the product category, check for allergens, and if the text contains multiple products pick only the first one, and if there are no allergens return none, and if the category is unclear ask for clarification.
Break it into smaller, clearer steps:
- Prompt 1: Extract the ingredients.
- Prompt 2: Based on the extracted ingredients, classify the product category.
- Prompt 3: Extract allergens and return an empty list if none are found.
Each prompt then has a single purpose, which makes the entire chain far more stable and predictable.
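Put together, the chain might look like the sketch below, where call_llm is a hypothetical wrapper around whichever model client you use (for example, the snippet from tip 1):

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your model client."""
    raise NotImplementedError

def analyse_product(description: str) -> dict:
    # Prompt 1: a single, focused extraction task.
    ingredients = call_llm(
        "Extract the ingredients from this description as a JSON list. "
        "Return [] if none are found.\n\n" + description
    )
    # Prompt 2: classification works from the previous step's output, not the raw text.
    category = call_llm(
        "Classify the product category for these ingredients. "
        "Return an empty string if the category is unclear.\n\n" + ingredients
    )
    # Prompt 3: allergen detection with its own fallback.
    allergens = call_llm(
        "List any allergens in these ingredients as a JSON list. "
        "Return [] if none are found.\n\n" + ingredients
    )
    return {"ingredients": ingredients, "category": category, "allergens": allergens}

Each call can be tested and adjusted independently, which is much harder to do with one monolithic prompt.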
Conclusion and Final Tip
While these tips will help you craft stronger prompts, keep in mind that prompts almost never work perfectly on the first try. Iteration and testing are part of the process. Write a version, see how the model behaves, adjust it, and try again. Over time, this cycle will help you build a library of reliable prompting patterns that consistently produce high-quality results.
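One lightweight way to make that loop concrete is a small regression script that replays known inputs whenever the prompt changes. A sketch with hypothetical test cases, reusing the call_llm wrapper from tip 5:

# Hypothetical cases: (input description, substring expected somewhere in the output).
TEST_CASES = [
    ("Water, oats (10%), sunflower oil, salt.", "oats"),
    ("Pure sea salt, nothing else.", "sea salt"),
]

def check_prompt() -> None:
    for description, expected in TEST_CASES:
        output = call_llm("Extract the ingredients from this description:\n" + description)
        status = "PASS" if expected in output.lower() else "FAIL"
        print(f"{status}: {description!r} -> {output!r}")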
What prompting patterns have worked best for you? Let us know and we might explore them in a future post!