
How to Set Up Anthropic Claude with System Prompts

Configure Claude behavior with system prompts for consistent business outputs.

Jay Banlasan

The AI Systems Guy

System prompt configuration is the step most people skip, and it makes the biggest difference in output quality. A well-written system prompt is the difference between Claude responding like a generic chatbot and Claude responding like a specialist you hired for a specific job. I use system prompts on every Claude integration I build. They define the role, the output format, the constraints, and the tone in one block that runs before every user message.

System prompts persist for the entire conversation. They do count toward input tokens on every call, but Anthropic's prompt caching, which you opt into by adding a cache_control marker to the system block, bills the cached prefix at a reduced rate after the first call. For business applications this means you can write a detailed 2,000-word system prompt without worrying too much about cost inflation on repeat calls.
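A minimal sketch of that opt-in, assuming the current SDK's cache_control block syntax, where the system parameter takes a list of text blocks instead of a plain string:

```python
# Mark the system prompt as cacheable (opt-in prompt caching). Repeat calls
# that reuse this exact prefix are billed at a reduced input-token rate.
# Note: caching only applies above a minimum prompt size (roughly 1,024
# tokens on most models), so short system prompts are billed normally.
LONG_SYSTEM_PROMPT = (
    "You are an email QA specialist for a B2B SaaS company. "
    "Review emails for professional tone, a clear CTA, and personalization. "
    "Return a score out of 10 and three specific improvement suggestions."
)

cached_system = [
    {
        "type": "text",
        "text": LONG_SYSTEM_PROMPT,
        "cache_control": {"type": "ephemeral"},
    }
]

# Then pass the list where a plain string would go:
# client.messages.create(model="claude-opus-4-5", max_tokens=300,
#                        system=cached_system, messages=[...])
```

The plain-string form used in the rest of this guide still works; you only need the block form when you want caching.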

What You Need Before Starting

- An Anthropic account at console.anthropic.com with API access enabled
- Python 3.8 or newer
- A terminal and a text editor for your .env file

Step 1: Get Your API Key and Install the SDK

Log in to console.anthropic.com. Go to "API Keys" and click "Create Key." Name it clearly (e.g., my-business-prod). Copy it immediately; the console will not show the full key again.

Add to .env:

ANTHROPIC_API_KEY=sk-ant-your-key-here

Install the SDK:

pip install anthropic python-dotenv

Step 2: Make a Basic Call With and Without a System Prompt

See the difference in output:

import os
import anthropic
from dotenv import load_dotenv

load_dotenv()

client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

# Without system prompt
response_plain = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=300,
    messages=[{"role": "user", "content": "Review this email for tone and clarity."}]
)

# With system prompt
response_configured = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=300,
    system="You are an email QA specialist for a B2B SaaS company. Review emails for: (1) professional tone, (2) clear CTA, (3) personalization. Return a score out of 10 and 3 specific improvement suggestions.",
    messages=[{"role": "user", "content": "Review this email for tone and clarity."}]
)

print("Plain:", response_plain.content[0].text)
print("\nConfigured:", response_configured.content[0].text)

The configured version gives you structured, actionable output every time. The plain version is unpredictable.

Step 3: Write an Effective System Prompt

System prompts have four components. Hit all four:

SYSTEM_PROMPT = """
ROLE:
You are a customer support specialist for Acme Software. You handle questions about billing, account management, and product features.

BEHAVIOR:
- Always greet the customer by name if their name is provided
- Acknowledge the problem before offering a solution
- Keep responses under 150 words unless a detailed explanation is genuinely needed
- If you cannot resolve an issue, explain why and offer to escalate

OUTPUT FORMAT:
Response structure:
1. Acknowledgment (1 sentence)
2. Solution or next steps (2-4 bullet points or short paragraph)
3. Closing (1 sentence offering further help)

CONSTRAINTS:
- Never promise refunds without checking the policy
- Never share other customers' account information
- Do not speculate about upcoming features
- If asked about pricing, direct to the pricing page: acmesoftware.com/pricing
"""

Send it as the system parameter:

response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=500,
    system=SYSTEM_PROMPT,
    messages=[
        {"role": "user", "content": "Hi, I was charged for a plan I downgraded from last month."}
    ]
)

print(response.content[0].text)

Step 4: Use Different System Prompts for Different Tasks

Build a system prompt library:

SYSTEM_PROMPTS = {
    "email_reviewer": """
        You are an email QA specialist. Review emails for tone, clarity, and CTA strength.
        Return: score (1-10), three specific improvements, and a revised subject line.
        Format: JSON with keys "score", "improvements", "subject_line".
    """,
    
    "data_extractor": """
        You are a data extraction engine. Extract structured data from unstructured text.
        Return only valid JSON. No explanations. No markdown. Raw JSON only.
    """,
    
    "content_writer": """
        You are a business content writer for a B2B audience.
        Write in a direct, confident tone. No filler phrases. No hedging.
        Grade 6 reading level. Short paragraphs. One idea per paragraph.
    """,
    
    "support_agent": """
        You are a customer support specialist. Be empathetic but efficient.
        Acknowledge the problem, offer a solution, close with a next step.
        Stay under 150 words per response.
    """
}


def ask_claude(prompt: str, role: str = "content_writer", model: str = "claude-opus-4-5") -> str:
    """Send a prompt to Claude using a named system prompt role."""
    
    system = SYSTEM_PROMPTS.get(role)
    if not system:
        raise ValueError(f"Unknown role: {role}. Available: {list(SYSTEM_PROMPTS.keys())}")
    
    response = client.messages.create(
        model=model,
        max_tokens=1000,
        system=system,
        messages=[{"role": "user", "content": prompt}]
    )
    
    return response.content[0].text

Step 5: Test System Prompt Reliability

A good system prompt should produce near-identical output across repeated runs. Test it:

def test_prompt_consistency(role: str, test_input: str, runs: int = 5) -> None:
    """Run the same prompt multiple times to check consistency."""
    
    print(f"Testing role '{role}' with {runs} runs...\n")
    
    for i in range(1, runs + 1):
        result = ask_claude(test_input, role=role)
        print(f"Run {i}:\n{result}\n{'='*40}")


# Test the data extractor - should always return valid JSON
test_prompt_consistency(
    role="data_extractor",
    test_input='Extract name, company, and email from: "Hi, I\'m Sarah Chen from Horizon Tech. Reach me at [email protected]"',
    runs=3
)

If outputs vary wildly, tighten the format instructions in your system prompt. Lower temperature (closer to 0.0) also helps for structured output tasks.
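One way to make that tightening concrete is to treat JSON parsing as the pass/fail check. A sketch, assuming you capture the model output as a string; parse_json_output is a hypothetical helper, and temperature is a standard messages.create parameter:

```python
import json


def parse_json_output(raw: str) -> dict:
    """Fail loudly if the model broke the 'raw JSON only' contract.

    Also tolerates the most common near-miss: output wrapped in
    markdown code fences despite instructions.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line and the trailing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)


# For structured tasks, also pass temperature=0.0 to client.messages.create(...)
print(parse_json_output('{"name": "Sarah Chen", "company": "Horizon Tech"}'))
```

If parse_json_output raises on more than the odd run, the system prompt, not the parser, is what needs fixing.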

Step 6: Load System Prompts from Files

For production, store system prompts in files so non-developers can edit them:

from pathlib import Path


def load_system_prompt(name: str, prompts_dir: str = "prompts") -> str:
    """Load a system prompt from a .txt file."""
    path = Path(prompts_dir) / f"{name}.txt"
    if not path.exists():
        raise FileNotFoundError(f"No prompt file found at {path}")
    return path.read_text().strip()


def ask_claude_from_file(prompt: str, role: str) -> str:
    system = load_system_prompt(role)
    response = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=1000,
        system=system,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

Create prompts/email_reviewer.txt and put your system prompt there. Now anyone on your team can edit the prompt without touching Python code.
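A quick bootstrap for that layout, assuming the prompts/ directory lives next to your script; the file name must match the role name you pass to load_system_prompt:

```python
from pathlib import Path

# Create the prompts directory and seed one prompt file.
prompts_dir = Path("prompts")
prompts_dir.mkdir(exist_ok=True)

(prompts_dir / "email_reviewer.txt").write_text(
    "You are an email QA specialist. Review emails for tone, clarity, "
    "and CTA strength.\n"
    "Return: score (1-10), three specific improvements, and a revised "
    "subject line.\n"
)
```

From there, editing the .txt file changes Claude's behavior on the next call, with no code deploy.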

Want this system built for your business?

Get a free assessment. We will map every system your business needs and show you the ROI.

Get Your Free Assessment
