Creating Recipes - Step-by-Step Guide

This guide walks you through creating your own multi-step AI workflows, from concept to working recipe.

Step 1: Define Your Workflow

Break down your task into distinct steps. Each step should have a clear purpose.

Example task: Code review with improvements

Your mental model:
1. Analyze the code for issues
2. Suggest concrete improvements
3. Validate suggestions are feasible
4. Format as markdown report

Good step characteristics:
  • Single purpose - each step does one thing well
  • Descriptive ID and a clearly named output
  • A result that a later step (or a human) can verify
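
Sketched as a recipe, this plan is just four step IDs. A bare skeleton (each step still needs an agent or type, a prompt, and an output, all filled in during Step 4):

steps:
  - id: "analyze"
  - id: "suggest-improvements"
  - id: "validate-feasibility"
  - id: "create-report"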

Step 2: Map Data Flow

Identify what each step needs and produces:

Step 1 (analyze):
  Needs: file_path (from context)
  Produces: issues

Step 2 (suggest):
  Needs: issues (from step 1)
  Produces: improvements

Step 3 (validate):
  Needs: improvements (from step 2)
  Produces: validation

Step 4 (report):
  Needs: issues, improvements, validation (from steps 1-3)
  Produces: final_report

Key insight: Draw arrows showing dependencies. This becomes your recipe structure!
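
In recipe terms, each arrow is an output name that a later step references in its prompt. A minimal sketch of the first arrow (issues flowing from analyze into suggest), using the same template syntax as the full recipe in Step 4:

- id: "analyze"
  agent: "foundation:zen-architect"
  prompt: "Analyze {{file_path}} for code quality issues"
  output: "issues"                 # produced here...

- id: "suggest-improvements"
  agent: "foundation:zen-architect"
  prompt: "Suggest improvements for these issues: {{issues}}"   # ...consumed here
  output: "improvements"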

Step 3: Choose Agents and Step Types

Step Types

Type      When to Use                      Example
agent     Analysis, reasoning, generation  Code review, brainstorming, writing
bash      File ops, tests, builds          Run pytest, git status, file checks
recipe    Reusable sub-workflows           Call a shared component analyzer
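
The agent type is the one used throughout the example in Step 4. For the other two, a short sketch: the bash fields match the conditional-step example later in this guide, but the recipe step's field names are an assumption, so check them against your recipe schema:

- id: "run-tests"
  type: "bash"
  command: "pytest tests/"
  output: "test_results"

- id: "analyze-components"
  type: "recipe"
  recipe: "component-analyzer.yaml"   # hypothetical field name - verify against your schema
  output: "component_report"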

Agent selection:
  • foundation:zen-architect - analysis, design, and review (ANALYZE, ARCHITECT, REVIEW modes)
  • foundation:modular-builder - building artifacts such as reports or code
  • foundation:web-research - fetching external data (see the retry example below)

Pick whichever agent matches the work each step does; the example below uses zen-architect for the thinking steps and modular-builder for the final report.

Step 4: Write the YAML

Translate your plan to recipe format:

name: "code-review-flow"
description: "Multi-stage code review with validation"
version: "1.0.0"
author: "Your Name"
tags: ["code-review", "quality"]

context:
  file_path: ""  # Required input (empty = must be provided)

steps:
  - id: "analyze"
    agent: "foundation:zen-architect"
    mode: "ANALYZE"
    prompt: |
      Analyze {{file_path}} for:
      - Code quality issues
      - Potential bugs
      - Performance concerns
    output: "issues"
    timeout: 300

  - id: "suggest-improvements"
    agent: "foundation:zen-architect"
    mode: "ARCHITECT"
    prompt: |
      Given these issues:
      {{issues}}

      Suggest concrete, actionable improvements for {{file_path}}.
    output: "improvements"
    timeout: 300

  - id: "validate-feasibility"
    agent: "foundation:zen-architect"
    mode: "REVIEW"
    prompt: |
      Review these improvement suggestions:
      {{improvements}}

      For each, assess:
      - Feasibility (easy/medium/hard)
      - Breaking change risk
      - Testing requirements
    output: "validation"
    timeout: 300

  - id: "create-report"
    agent: "foundation:modular-builder"
    prompt: |
      Create a markdown code review report:

      File: {{file_path}}
      Issues Found: {{issues}}
      Suggested Improvements: {{improvements}}
      Validation: {{validation}}

      Format professionally with clear sections.
    output: "final_report"
    timeout: 300
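
The empty file_path in the context block marks a required input. Presumably, a context entry with a non-empty value acts as a default (the comment above implies only empty values must be supplied); a small sketch with a hypothetical optional input:

context:
  file_path: ""         # required - must be provided at run time
  focus: "security"     # hypothetical optional input with a default value

A prompt could then reference {{focus}} the same way it references {{file_path}}.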

Step 5: Test and Iterate

Validate Structure

# Check YAML syntax and structure
amplifier run "validate recipe code-review-flow.yaml"

Validation checks:
  • YAML syntax errors
  • Recipe structure (required fields and step definitions)

Test Execution

# Run with a real file
amplifier run "execute code-review-flow.yaml with file_path=src/auth.py"

# Watch it execute:
# [1/4] analyze (agent)
# [2/4] suggest-improvements (agent)
# [3/4] validate-feasibility (agent)
# [4/4] create-report (agent)

Debug if Needed

# Check session state
ls ~/.amplifier/projects/*/recipe-sessions/

# View state
cat ~/.amplifier/projects/*/recipe-sessions/recipe_*/state.json | jq .

# View events
cat ~/.amplifier/projects/*/recipe-sessions/recipe_*/events.jsonl | jq .

Iterate

Common refinements:
  • Tighten prompts that produce vague or overly long output
  • Adjust timeouts to realistic per-step estimates
  • Split steps that try to do too much, or merge trivial ones
  • Add retries, conditions, or approval gates (see below)

Advanced: Adding Features

Retry Logic

- id: "api-call"
  agent: "foundation:web-research"
  prompt: "Fetch data from API"
  retry:
    max_attempts: 5
    backoff: "exponential"
    initial_delay: 5
    max_delay: 300
  output: "api_data"

Conditional Steps

- id: "check-tests-exist"
  type: "bash"
  command: "test -d tests/"
  output_exit_code: "has_tests"
  on_error: "continue"

- id: "run-tests"
  condition: "{{`{has_tests}`}} == '0'"
  type: "bash"
  command: "pytest tests/"
  output: "test_results"

Agent Configuration Override

- id: "creative-brainstorm"
  agent: "foundation:zen-architect"
  agent_config:
    providers:
      - module: "provider-anthropic"
        config:
          temperature: 0.9       # More creative
          model: "claude-opus-4"  # More capable
  prompt: "Brainstorm 10 alternative approaches"
  output: "alternatives"

Best Practices

Design Principles
  • Small steps - Each does one thing well
  • Descriptive IDs - analyze-security not step1
  • Clear outputs - Name what the data represents
  • Handle errors - Decide: critical or optional?
  • Set timeouts - Realistic per-step estimates
  • Use approval gates - Before destructive operations
  • Document context - Comment required inputs

Common Patterns

Analyze → Improve → Validate

1. Analyze current state
2. Generate improvements
3. Validate improvements
4. Report/apply

Plan → Approve → Execute

1. Plan changes
2. Human approval gate
3. Execute if approved
4. Verify results
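
The approval gate in step 2 maps to the "Use approval gates" advice above. The exact approval syntax isn't shown in this guide, so the approval step in this sketch is an assumption meant only to illustrate the shape of the pattern:

- id: "plan-changes"
  agent: "foundation:zen-architect"
  mode: "ARCHITECT"
  prompt: "Plan the changes for {{file_path}}"
  output: "plan"

- id: "approve-plan"
  type: "approval"                  # hypothetical step type - check your recipe schema
  prompt: "Apply this plan? {{plan}}"
  output: "approved"

- id: "execute-plan"
  condition: "{{approved}} == 'yes'"
  type: "bash"
  command: "./apply_plan.sh"        # placeholder command
  output: "apply_result"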

Collect → Process → Aggregate

1. Foreach to collect data
2. Process each item
3. Aggregate results
4. Generate summary
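
Foreach collection is mentioned here but not shown elsewhere in this guide, so the foreach and {{item}} names in this sketch are assumptions rather than confirmed recipe syntax:

- id: "list-modules"
  type: "bash"
  command: "ls src/*.py"
  output: "module_list"

- id: "review-each-module"
  foreach: "{{module_list}}"        # hypothetical foreach syntax
  agent: "foundation:zen-architect"
  mode: "ANALYZE"
  prompt: "Review {{item}} for issues"    # {{item}} assumed to be the loop variable
  output: "module_reviews"

- id: "summarize"
  agent: "foundation:modular-builder"
  prompt: "Summarize these per-module reviews: {{module_reviews}}"
  output: "summary_report"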

Using the Recipe-Author Agent

Let AI help you create recipes:

amplifier run --bundle git+https://github.com/microsoft/amplifier-bundle-recipes@main \
  "I need a recipe for upgrading Python dependencies safely"

The recipe-author agent will:

  1. Ask about your workflow
  2. Identify steps and dependencies
  3. Generate YAML specification
  4. Validate structure
  5. Save recipe file

Next Steps