Agent recipes
Patterns for getting reliable, high-quality work out of the agent — from a one-shot dataset description to a full draft report. Each recipe shows the prompt, what the agent does, and how to verify the output.
How to prompt the agent #
The agent is good at survey-shaped work: describing data, building tables, applying filters and weights, summarising results, and drafting narrative. It is less reliable when the prompt is vague or asks it to make a strategic call without enough context.
Three habits make the difference: be specific about the variables and segments, name the measure you want, and state the format of the answer.
Recipe — describe the dataset #
Use this as a first prompt on a new dataset to confirm the agent has loaded it and understands the structure.
- What you should see: a short structural summary, named splits, and a flagged anomaly if one exists.
- Verify by spot-checking the sample size and at least one demographic split against the Variables view.
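Spot-checking the reported sample size and a split is quick if you export the data. A minimal sketch with made-up data — the `RespondentID` and `AgeGroup` column names are illustrative, not Recense's schema:

```python
import pandas as pd

# Toy stand-in for an exported dataset; column names are hypothetical.
df = pd.DataFrame({
    "RespondentID": range(1, 9),
    "AgeGroup": ["18-34", "18-34", "35-54", "35-54", "35-54", "55+", "55+", "55+"],
})

# Compare these against the sample size and split the agent described.
print(len(df))                        # total completes
print(df["AgeGroup"].value_counts())  # one demographic split
```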
Describe this dataset: sample size, panel, key demographic splits, and any question groups that look stackable. Highlight anything that looks unusual.

Recipe — build a cross-tabulation #
Specify variables, segments, measure, and weighting explicitly.
- What you should see: a new table on the canvas matching the spec.
- Verify the weight is applied (Weight pill shows the variable name) and the unweighted base looks plausible.
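Weighted column percentages are easy to reproduce by hand if you want a deeper check than one cell. A minimal pandas sketch with toy data — the `Q5`, `AgeGroup`, and `weight` names are illustrative assumptions, not a fixed schema:

```python
import pandas as pd

# Toy respondent-level data: Q5 is brand awareness, weight is a design weight.
df = pd.DataFrame({
    "Q5":       ["Aware", "Aware", "Not aware", "Aware", "Not aware", "Aware"],
    "AgeGroup": ["18-34", "18-34", "18-34", "35+", "35+", "35+"],
    "weight":   [0.8, 1.2, 1.0, 1.1, 0.9, 1.0],
})

# Weighted counts: sum the design weight within each cell.
counts = df.pivot_table(index="Q5", columns="AgeGroup",
                        values="weight", aggfunc="sum", fill_value=0.0)

# Column percentages: each column sums to 100.
col_pct = counts.div(counts.sum(axis=0), axis=1) * 100
print(col_pct.round(1))
```

If a cell in the agent's table disagrees with this arithmetic, check the weight variable first — an unweighted table is the most common cause.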
Cross-tabulate Q5 (brand awareness) by AgeGroup with column percentages. Apply the design weight. Show significance letters at 95%. 

Recipe — segment comparison #
Useful for comparing two cohorts on a battery of metrics.
- What you should see: a side-by-side table with a Difference column and significance markers.
- Verify segment definitions match what you intended — the agent should state them back.
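The difference column and significance flag can be sanity-checked with a Welch-style t statistic. A sketch with invented scores for two hypothetical segments, using only the standard library (the rough |t| > 2 threshold stands in for a proper p-value at samples this size):

```python
import statistics as st

# Invented satisfaction scores for two hypothetical segments.
high = [8, 9, 7, 8, 9, 8, 7, 9]
low  = [6, 5, 7, 6, 5, 6, 7, 5]

diff = st.mean(high) - st.mean(low)

# Welch's t statistic (unequal variances): difference over its standard error.
se = (st.variance(high) / len(high) + st.variance(low) / len(low)) ** 0.5
t = diff / se
print(f"difference = {diff:.2f}, t = {t:.2f}")
```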
For respondents in the High-engagement segment vs Low-engagement segment, compare mean scores on the Q12 satisfaction grid. Show the difference and flag any significant gaps. 

Recipe — narrative summary #
Use after you have built and reviewed the supporting tables. The agent reads what is on the canvas.
- What you should see: three paragraphs in a workspace note, anchored near the source tables.
- Verify every statistic in the narrative against the source table — agents sometimes round inconsistently.
Draft a three-paragraph executive summary from the four tables tagged "headline". Lead with the strongest finding, qualify with sample size, and end with the one thing a stakeholder should do next. 

Recipe — text coding #
Use during text coding to consolidate or refine themes.
- What you should see: a revised theme list with explicit merges and splits explained.
- Verify by sampling responses for any merged or split themes before publishing the codebook.
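Sampling responses for a merged theme takes a few lines. A sketch with invented coded responses — the theme names and the `(text, theme)` shape are illustrative, not Recense's export format:

```python
import random

# Hypothetical coded responses as (response_text, theme) pairs.
coded = [
    ("Delivery was slow", "Delivery speed"),
    ("Took two weeks to arrive", "Delivery speed"),
    ("Arrived late again", "Delivery speed"),
    ("Great flavour", "Product quality"),
    ("Tastes fresh", "Product quality"),
]

# Before publishing a merged theme, read a random sample of its responses
# to confirm they genuinely belong together.
merged_theme = "Delivery speed"
pool = [text for text, theme in coded if theme == merged_theme]
sample = random.sample(pool, k=min(3, len(pool)))
for text in sample:
    print(text)
```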
Review the proposed themes for Q15. Merge anything semantically duplicate, split themes that are doing two jobs, and propose at most one new theme if you see a gap. 

Verifying agent output #
Treat the agent like a fast junior analyst. The output is usually right, sometimes wrong, and always worth a quick check before it leaves your workspace.
- For tables: check the weight, base, and at least one cell against the variables view.
- For narratives: check every number in the narrative against its source table.
- For coded data: spot-check 10 responses across high- and low-confidence themes.
- For statistical claims: confirm the test the agent named matches what the table is configured to run.
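The narrative check above reduces to a rounding comparison: a narrated figure should equal its source cell rounded to the stated precision. A minimal sketch with made-up numbers — the helper name `matches_source` is hypothetical:

```python
# Hypothetical figures: what the narrative says vs. what the table shows.
narrative_value = 47.0   # e.g. "47% of respondents were aware"
table_value = 46.52      # exact cell value from the source table

def matches_source(narrated: float, exact: float, decimals: int = 0) -> bool:
    """True if the narrated figure equals the exact value rounded."""
    return round(exact, decimals) == narrated

print(matches_source(narrative_value, table_value))
```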
Known limitations #
- The agent does not invent statistical methods Recense doesn't support. If you ask for a method outside the methodology page, it will tell you and propose the nearest equivalent.
- The agent will not delete tables or notes without explicit instruction.
- For very long conversations, summarise progress occasionally — context windows have limits.
- Built-in mode and BYOK use the same tools and the same prompt; differences in output usually reflect model capability, not Recense behaviour.