# The Four Ingredients of a Good Prompt
> LOs: LO-S1-1

## Introduction
Have you ever asked an AI assistant a question and received a beautifully written, completely useless answer? That gap between "what you typed" and "what you actually needed" is almost always a prompt problem, not a model problem. In this class we unpack the four ingredients — Role, Task, Context, and Constraints — that turn vague requests into reliable, work-ready outputs.

## Core Concept
A weak prompt usually fails because it leaves the model guessing. When you write "Summarize this report," the AI has to invent who it is, who you are, what kind of summary you want, and how long it should be. Each guess is a coin flip, and four coin flips rarely all land in your favor (see Slide 1).

Strong prompts replace those guesses with four explicit ingredients:

| Ingredient | What it answers | Mini example |
|------------|-----------------|--------------|
| **Role** | Who should the AI act as? | "You are a senior ERP data analyst." |
| **Task** | What exact action do you want? | "Compare Q3 vs Q4 inventory turnover." |
| **Context** | What background does it need? | "We use SAP S/4HANA; data is in the attached CSV." |
| **Constraints** | What are the limits or format rules? | "Answer in 5 bullets, flag anomalies over 15%." |

Think of these as the four legs of a stool (see Slide 2). Drop one and the answer wobbles. Role sets tone and depth. Task removes ambiguity about the verb. Context grounds the answer in your reality. Constraints make the output usable without rework. Together they shrink the space of "plausible answers" the model can drift into, which is exactly why outputs become more consistent across runs.
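To make the four-leg idea concrete, here is a minimal Python sketch that assembles the table's mini examples into one prompt and refuses to build if any leg is missing. The `Prompt` class and its field names are illustrative scaffolding, not part of any real prompting library:

```python
from dataclasses import dataclass


@dataclass
class Prompt:
    """One field per ingredient; build() fails loudly if any leg is blank."""
    role: str
    task: str
    context: str
    constraints: str

    def build(self) -> str:
        parts = {
            "Role": self.role,
            "Task": self.task,
            "Context": self.context,
            "Constraints": self.constraints,
        }
        # A blank ingredient means the model would be guessing again.
        missing = [name for name, text in parts.items() if not text.strip()]
        if missing:
            raise ValueError(f"Missing ingredient(s): {', '.join(missing)}")
        return "\n".join(f"{name}: {text.strip()}" for name, text in parts.items())


prompt = Prompt(
    role="You are a senior ERP data analyst.",
    task="Compare Q3 vs Q4 inventory turnover.",
    context="We use SAP S/4HANA; data is in the attached CSV.",
    constraints="Answer in 5 bullets, flag anomalies over 15%.",
)
print(prompt.build())
```

Treating the ingredients as required fields rather than free text is the whole trick: the structure, not the code, is what removes the coin flips.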

## Worked Example
Slide 3 walks through a real rewrite for an ERP analyst. The before-prompt reads:

> "Look at our inventory data and tell me what's going on."

The model has no role, the task verb "look at" is fuzzy, there is zero context about which system or period, and no format rules. Predictably, you get a generic essay about inventory management best practices.

The after-prompt layers in all four ingredients:

> "**Role:** You are a senior ERP analyst familiar with SAP S/4HANA.
> **Task:** Compare Q3 and Q4 inventory turnover for our top 20 SKUs.
> **Context:** Data is in the attached CSV; fiscal year ends Dec 31; we recently switched a supplier in October.
> **Constraints:** Reply in 5 bullets max, highlight any SKU with turnover change above 15%, and flag the supplier-switch effect explicitly."

The same model, same data, now produces a focused diagnostic you can paste into a Monday standup. Notice that nothing magical happened: we just stopped making the AI guess.

## Common Pitfalls
- **Vague verbs.** Words like "analyze," "look at," or "help with" let the model pick its own task. Swap them for precise verbs: compare, rank, summarize in 3 bullets, draft an email to X.
- **Missing audience or role.** Telling the AI *who it is* and *who is reading* changes vocabulary, depth, and tone. A prompt without a role tends to default to a Wikipedia voice — neutral, generic, and rarely matched to your audience (engineer, executive, customer).
- **Over-constraining.** The opposite trap: stuffing the prompt with twenty rules, contradictory format demands, and edge cases. The model starts negotiating between rules instead of solving your problem. Start with 2-4 constraints that actually matter and add more only if you see drift.

## Recap
Four ingredients — Role, Task, Context, Constraints — turn a wish into a work order (see Slide 5). Skip any one and you reintroduce guesswork; balance all four and even a short prompt becomes reliable. Next up in **S1.C2** we zoom into the first ingredient, **Role**, and see how a single well-chosen persona can change an answer more than any other tweak.
