How to Create Dynamic Prompt Chains
Chain multiple AI calls together where each output feeds the next prompt.
Jay Banlasan
The AI Systems Guy
Dynamic prompt chaining breaks a complex task into steps that run sequentially, where each AI output becomes the input for the next prompt. A single prompt asking "analyze this customer, write a proposal, and suggest follow-up actions" produces mediocre output across all three. Three focused prompts, each doing one thing well and passing its result forward, produce output that is better on every dimension. I chain prompts in almost every production system I build because the improvement over single-call approaches is substantial and the code is not complex.
The "dynamic" part means the chain's path can change based on intermediate outputs. Not every call goes through every step. The output of step one decides which step two runs.
What You Need Before Starting
- Python 3.10+ with the anthropic SDK installed (pip install anthropic)
- A multi-step task currently handled by one large prompt
- Clear definitions of what each step should produce
Step 1: Understand When to Chain
Chain when a task naturally decomposes into distinct steps where each step requires different instructions or model behavior.
Good candidates for chaining:
- Analyze data, then write a recommendation based on the analysis
- Extract facts from text, then generate content using those facts
- Classify the input, then route to a specialized handler based on the classification
- Research phase, then synthesis phase, then output formatting phase
Do not chain when:
- The task is genuinely atomic (classify this, translate this)
- The steps are so connected that separating them produces worse context
- Latency is critical and you cannot run steps in parallel
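As a concrete sketch, the "analyze this customer, write a proposal, suggest follow-up actions" task from the intro decomposes into three focused step definitions. The prompts here are abbreviated and illustrative; the dict fields mirror the executor built in Step 2.

```python
# Three focused steps instead of one overloaded prompt.
# Each dict becomes one AI call; the field names match the executor in Step 2.
customer_chain = [
    {
        "name": "analyze_customer",
        "system_prompt": "Analyze this customer's situation. Return key pain points and context.",
    },
    {
        "name": "write_proposal",
        "system_prompt": "Write a short proposal addressing the pain points you are given.",
        # Receives the analysis output, not the raw customer data
        "input_fn": lambda orig, steps, ctx: f"Pain points:\n{steps['analyze_customer']}",
    },
    {
        "name": "suggest_followups",
        "system_prompt": "Suggest follow-up actions based on the proposal you are given.",
    },
]

step_names = [s["name"] for s in customer_chain]
```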
Step 2: Build the Basic Chain Executor
import anthropic
from typing import Callable, Any

client = anthropic.Anthropic()

def ai_step(
    system_prompt: str,
    user_message: str,
    model: str = "claude-haiku-4-5",
    max_tokens: int = 500
) -> str:
    response = client.messages.create(
        model=model,
        max_tokens=max_tokens,
        system=system_prompt,
        messages=[{"role": "user", "content": user_message}]
    )
    return response.content[0].text
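One production detail worth planning for: API calls fail transiently, and one blip should not kill a whole chain. A generic retry helper is a few lines; this is a sketch (with_retries is my own helper, not part of the SDK), and you would pass the SDK's retryable exception types as retry_on.

```python
import time

def with_retries(fn, attempts: int = 3, backoff: float = 1.0, retry_on=(Exception,)):
    """Wrap fn so retryable errors trigger exponential-backoff retries."""
    def wrapped(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except retry_on:
                if attempt == attempts - 1:
                    raise  # Out of attempts: surface the error to the caller
                time.sleep(backoff * (2 ** attempt))
    return wrapped

# Usage sketch:
# safe_ai_step = with_retries(ai_step, retry_on=(anthropic.APIStatusError,))
```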
def run_chain(steps: list, initial_input: str, context: dict = None) -> dict:
    context = context or {}
    current_input = initial_input
    step_outputs = {}

    for i, step in enumerate(steps):
        step_name = step.get("name", f"step_{i+1}")
        print(f"Running step: {step_name}")

        # Check if this step should be skipped based on conditions
        condition = step.get("condition")
        if condition and not condition(context, step_outputs):
            print(f"  Skipping {step_name} (condition not met)")
            step_outputs[step_name] = None
            continue

        # Build the input for this step
        input_fn = step.get("input_fn")
        if input_fn:
            step_input = input_fn(current_input, step_outputs, context)
        else:
            step_input = current_input

        # Run the step
        output = ai_step(
            system_prompt=step["system_prompt"],
            user_message=step_input,
            model=step.get("model", "claude-haiku-4-5"),
            max_tokens=step.get("max_tokens", 500)
        )
        step_outputs[step_name] = output

        # Transform output for next step if needed
        transform_fn = step.get("transform_fn")
        if transform_fn:
            current_input = transform_fn(output, context)
        else:
            current_input = output

        print(f"  Completed: {output[:100]}...")

    return {"final_output": current_input, "steps": step_outputs}
Step 3: Build a Lead Processing Chain
A real-world example: process an inbound lead through analysis, scoring, and draft creation.
import json

def build_lead_processing_chain() -> list:
    return [
        {
            "name": "extract_info",
            "system_prompt": """Extract key information from this lead inquiry. Return JSON only.
Schema: {"name": string, "company": string, "problem": string, "urgency": "high|medium|low", "budget_signal": "has_budget|unknown|price_shopper"}""",
            "model": "claude-haiku-4-5",
            "max_tokens": 200,
            "transform_fn": lambda output, ctx: output  # Pass raw JSON to next step
        },
        {
            "name": "score_lead",
            "system_prompt": """Score this lead based on extracted information. Return JSON only.
Schema: {"score": 1-10, "tier": "hot|warm|cold", "top_signal": string, "red_flags": [string]}""",
            "model": "claude-haiku-4-5",
            "max_tokens": 200,
            "input_fn": lambda orig, steps, ctx: f"Score this lead:\n{steps['extract_info']}",
        },
        {
            "name": "draft_response",
            "system_prompt": """Write a personalized follow-up email. Under 100 words. End with one question.
No placeholders. Casual and specific.""",
            "model": "claude-haiku-4-5",
            "max_tokens": 300,
            "condition": lambda ctx, steps: (
                steps.get("score_lead") is not None and
                json.loads(steps["score_lead"]).get("tier") in ["hot", "warm"]
            ),
            "input_fn": lambda orig, steps, ctx: f"""Write a follow-up email for this lead:
Lead info: {steps['extract_info']}
Lead score: {steps['score_lead']}
Original message: {ctx.get('original_message', '')}"""
        },
        {
            "name": "suggest_actions",
            "system_prompt": "Suggest 2-3 specific next actions based on lead analysis. Be concrete, not generic.",
            "model": "claude-haiku-4-5",
            "max_tokens": 200,
            "input_fn": lambda orig, steps, ctx: f"Suggest next actions:\nLead: {steps['extract_info']}\nScore: {steps['score_lead']}"
        }
    ]
def process_lead(lead_message: str) -> dict:
    chain = build_lead_processing_chain()
    result = run_chain(
        steps=chain,
        initial_input=lead_message,
        context={"original_message": lead_message}
    )

    # Parse JSON outputs
    parsed = {}
    for step_name, output in result["steps"].items():
        if output:
            try:
                parsed[step_name] = json.loads(output)
            except json.JSONDecodeError:
                parsed[step_name] = output
    return parsed
# Test
result = process_lead("""
Hi, I run a 40-person consulting firm. We're wasting hours every week on manual reporting.
I saw your AI automation case studies and this looks exactly like what we need.
Budget is not a concern if the ROI is there. Who can I talk to this week?
""")
print(json.dumps(result, indent=2))
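One failure mode to expect: even with "Return JSON only" in the prompt, models occasionally wrap JSON in markdown fences. A small defensive parser (parse_model_json is a hypothetical helper, not part of the SDK) keeps the chain from silently falling back to raw strings:

```python
import json
import re

def parse_model_json(output: str):
    """Parse JSON from model output, tolerating ```json fences around it."""
    text = output.strip()
    # Strip a markdown code fence if present
    fence = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None  # Caller decides how to handle unparseable output
```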
Step 4: Build a Parallel Chain for Independent Steps
When multiple steps do not depend on each other, run them in parallel to cut latency.
import concurrent.futures
def run_parallel_steps(
    steps: list,
    shared_input: str,
    context: dict = None
) -> dict:
    context = context or {}
    results = {}

    def run_step(step: dict) -> tuple:
        name = step["name"]
        input_fn = step.get("input_fn")
        user_input = input_fn(shared_input, {}, context) if input_fn else shared_input
        output = ai_step(
            system_prompt=step["system_prompt"],
            user_message=user_input,
            model=step.get("model", "claude-haiku-4-5"),
            max_tokens=step.get("max_tokens", 300)
        )
        return name, output

    with concurrent.futures.ThreadPoolExecutor(max_workers=len(steps)) as executor:
        futures = {executor.submit(run_step, step): step["name"] for step in steps}
        for future in concurrent.futures.as_completed(futures):
            name, output = future.result()
            results[name] = output
            print(f"Completed parallel step: {name}")

    return results
# Example: analyze a document from multiple angles simultaneously
def analyze_document_parallel(document: str) -> dict:
    analysis_steps = [
        {
            "name": "summary",
            "system_prompt": "Summarize this document in 3 bullet points.",
            "max_tokens": 200
        },
        {
            "name": "key_risks",
            "system_prompt": "Identify the top 3 risks or concerns in this document.",
            "max_tokens": 200
        },
        {
            "name": "action_items",
            "system_prompt": "Extract all action items or next steps mentioned in this document.",
            "max_tokens": 200
        },
        {
            "name": "sentiment",
            "system_prompt": "Assess the overall tone and sentiment of this document in one sentence.",
            "max_tokens": 100
        }
    ]
    return run_parallel_steps(analysis_steps, document)
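Parallel fan-out usually ends with a fan-in: one final call that synthesizes the independent results. Building that combined prompt is plain string work; build_synthesis_input is a hypothetical helper, and the final ai_step call is only sketched in a comment.

```python
def build_synthesis_input(results: dict) -> str:
    """Combine parallel step outputs into one labeled block for a synthesis call."""
    sections = [f"## {name}\n{output}" for name, output in results.items()]
    return "Synthesize these analyses into one executive brief:\n\n" + "\n\n".join(sections)

# Usage sketch:
# results = analyze_document_parallel(document)
# brief = ai_step("You write concise executive briefs.", build_synthesis_input(results))
```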
Step 5: Dynamic Routing Based on Classification
Route to different chain paths based on the output of an early classification step.
def dynamic_support_chain(ticket: dict) -> dict:
    ticket_text = f"Subject: {ticket.get('subject')}\n{ticket.get('body')}"

    # Step 1: Classify
    classification_response = ai_step(
        system_prompt='Return JSON only: {"category": "billing|technical|account|other", "urgency": "high|normal|low"}',
        user_message=f"Classify: {ticket_text}",
        max_tokens=100
    )
    try:
        classification = json.loads(classification_response)
    except json.JSONDecodeError:
        classification = {"category": "other", "urgency": "normal"}

    category = classification.get("category", "other")

    # Step 2: Route to specialized chain based on classification
    specialist_prompts = {
        "billing": "You are a billing specialist. Provide a clear, empathetic response to this billing issue. Mention the next concrete step.",
        "technical": "You are a technical support engineer. Diagnose the issue and provide step-by-step troubleshooting instructions.",
        "account": "You are an account manager. Address this account issue warmly and offer a resolution path.",
        "other": "You are a customer success agent. Respond helpfully and route to the right team."
    }

    system_prompt = specialist_prompts.get(category, specialist_prompts["other"])
    response = ai_step(
        system_prompt=system_prompt,
        user_message=ticket_text,
        max_tokens=400,
        model="claude-haiku-4-5"
    )

    return {
        "ticket_id": ticket.get("id"),
        "classification": classification,
        "response_draft": response,
        "routed_to": category
    }
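Routing does not have to end at a draft. A cheap deterministic gate on the classification, with no extra AI call, can decide whether the draft auto-sends or queues for a human. should_auto_send below is a hypothetical policy; tune it to your own risk tolerance.

```python
def should_auto_send(classification: dict) -> bool:
    """Deterministic gate: only auto-send low-risk, non-urgent drafts."""
    category = classification.get("category", "other")
    urgency = classification.get("urgency", "normal")
    # High urgency or unrecognized categories always get human review
    return urgency == "low" and category in ("billing", "account")
```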
Step 6: Add Checkpointing for Long Chains
For chains with 5+ steps, save intermediate results so failures do not restart from step 1.
import json
from pathlib import Path
def run_chain_with_checkpoints(
    steps: list,
    initial_input: str,
    chain_id: str,
    checkpoint_dir: str = "checkpoints"
) -> dict:
    Path(checkpoint_dir).mkdir(exist_ok=True)
    checkpoint_file = f"{checkpoint_dir}/{chain_id}.json"

    # Load existing checkpoints if resuming
    step_outputs = {}
    current_input = initial_input
    if Path(checkpoint_file).exists():
        with open(checkpoint_file) as f:
            checkpoint_data = json.load(f)
        step_outputs = checkpoint_data.get("steps", {})
        current_input = checkpoint_data.get("last_output", initial_input)
        print(f"Resuming from checkpoint. Completed steps: {list(step_outputs.keys())}")

    for step in steps:
        step_name = step.get("name", "unnamed")
        if step_name in step_outputs:
            print(f"Skipping (checkpointed): {step_name}")
            current_input = step_outputs[step_name]
            continue

        print(f"Running: {step_name}")
        input_fn = step.get("input_fn")
        step_input = input_fn(current_input, step_outputs, {}) if input_fn else current_input

        output = ai_step(step["system_prompt"], step_input,
                         step.get("model", "claude-haiku-4-5"), step.get("max_tokens", 500))
        step_outputs[step_name] = output
        current_input = output

        # Save checkpoint after each step
        with open(checkpoint_file, "w") as f:
            json.dump({"steps": step_outputs, "last_output": current_input}, f)

    # Clean up checkpoint on success
    Path(checkpoint_file).unlink(missing_ok=True)
    return {"final_output": current_input, "steps": step_outputs}
What to Build Next
- Build a visual chain builder that lets non-technical team members construct prompt chains through a UI
- Add cost tracking per chain so you know exactly what each workflow costs per execution
- Implement chain versioning so you can roll back to a previous chain definition when a step degrades
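For the cost-tracking idea above, the Messages API response includes a usage block with token counts, so a running tally per chain is only a few lines. The per-million-token prices below are placeholders, not quoted rates; check current pricing for your model.

```python
# Placeholder prices in dollars per million tokens -- substitute current rates.
PRICING = {"claude-haiku-4-5": {"input": 1.00, "output": 5.00}}

def step_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one step, computed from the usage block on the API response."""
    rates = PRICING.get(model, {"input": 0.0, "output": 0.0})
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Inside ai_step, after the API call:
# usage = response.usage  # has input_tokens and output_tokens
# chain_total += step_cost(model, usage.input_tokens, usage.output_tokens)
```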
Related Reading
- Prompt: Create a Standard Operating Procedure
- Prompt: Create a Project Brief
- Prompt: Create Email Subject Line Variations
Want this system built for your business?
Get a free assessment. We will map every system your business needs and show you the ROI.
Get Your Free Assessment