# Write dataset instructions

> Give the agent lasting context about the survey — what fields mean, which weight to use, important caveats, and house-style conventions. Instructions live with the project and reach both the in-app agent and MCP clients.

*Source:* https://recense.ai/docs/dataset-instructions

## What instructions do

Instructions supplement what the agent can infer from metadata alone. Use them for:

- Survey context the agent can't derive from variable labels (e.g. "this was a B2B panel, not general population").
- Interpretation rules ("always weight by `wgt_design`", "report column percentages by default").
- Caveats about the data ("Q12 was asked only in wave 3").
- House style preferences ("use 95% confidence level", "always include unweighted base").
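Pulling these categories together, a dataset-wide instruction block might look like the following. The wording is illustrative, not a required format — instructions are free text:

```text
Context: B2B panel, not general population.
Weighting: always weight by wgt_design.
Defaults: report column percentages; use 95% confidence level.
Caveat: Q12 was asked only in wave 3.
House style: always include the unweighted base.
```

Short, factual statements like these are easier for the agent to apply consistently than long narrative paragraphs.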

## Managing instructions

- Draft, edit, and publish instruction blocks from the Instructions panel.
- Filter blocks by scope (dataset-wide or variable-specific) and activation mode.
- View version history and restore older versions.
- Instructions are saved with the project file.

## How instructions reach the agent

Published instructions are included in the agent's context automatically — both for the in-app agent and MCP clients. You don't need to repeat them in every prompt.

## Tips

- Keep instructions factual and specific. "This survey has 2,400 respondents from the UK aged 18+" is useful. "Analyse this data well" is not.
- Use variable-scoped instructions for field-specific caveats rather than one large block.
- Update instructions when the dataset or analysis requirements change.
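As a sketch of the second tip, a variable-scoped instruction attached to a single field stays out of the way of every other analysis. The example below is hypothetical wording for the wave-3 caveat mentioned above:

```text
Q12: asked only in wave 3. Do not report trends across waves for this
variable, and note the reduced base when tabulating it.
```

Because the block is scoped to `Q12`, the agent only surfaces it when that variable is actually used.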

## Next steps

- **[Agent recipes](/docs/agent-recipes)** — Pair good instructions with prompt patterns to maximise agent quality.
- **[Build tables and analysis](/docs/tables-and-analysis)** — See your instructions take effect on real tables.
- **[Methodology](/docs/methodology)** — Reference the methodology page when writing instructions about tests.
