Prompt: Generate A/B Test Ideas

Jay Banlasan

The AI Systems Guy

tl;dr

Twenty A/B test ideas for your website, ads, or emails. Prioritized by expected impact and ease of implementation.

This prompt generates A/B test ideas tailored to your specific business, prioritized so you run the high-impact tests first instead of guessing.

Most teams run random A/B tests. Different button color. New headline. Slightly different image. That is not testing. That is hoping. Structured test generation changes the game.

The Prompt

You are a conversion rate optimization expert. Generate 20 A/B test ideas for my business.

Business type: [e.g., SaaS landing page, e-commerce product page, lead generation funnel]
Current conversion rate: [e.g., 2.3% visitor to trial signup]
Traffic volume: [e.g., 5,000 visitors/month]
What I have already tested: [list any previous tests and results]
Biggest objection from prospects: [e.g., "too expensive", "not sure it works for my industry"]
Primary CTA: [e.g., "Start Free Trial", "Book a Call", "Download Guide"]

For each test idea, provide:
1. TEST NAME: Short descriptive name
2. HYPOTHESIS: "We believe [change] will [outcome] because [reasoning]"
3. WHAT TO CHANGE: Specific element and the variation
4. EXPECTED IMPACT: High / Medium / Low
5. IMPLEMENTATION EFFORT: Easy (copy change) / Medium (design change) / Hard (structural change)
6. PRIORITY SCORE: Impact x Ease (High-Easy = run first, Low-Hard = run last)

Sort the list by priority score, highest first.

Focus on tests that address the stated objection and improve the specific conversion metric. Skip cosmetic tests unless there is evidence they matter.
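The priority score the prompt asks for is just Impact x Ease. If you want to re-sort the output yourself (say, after adjusting a rating), a minimal sketch of that scoring logic might look like this. The example test names are hypothetical placeholders, not output from the prompt:

```python
# Map the prompt's qualitative ratings to numbers.
# Easier implementation gets a HIGHER score, so High-Easy sorts first.
IMPACT = {"High": 3, "Medium": 2, "Low": 1}
EASE = {"Easy": 3, "Medium": 2, "Hard": 1}

def priority_score(impact: str, effort: str) -> int:
    """Impact x Ease: High-Easy = 9 (run first), Low-Hard = 1 (run last)."""
    return IMPACT[impact] * EASE[effort]

# Hypothetical test ideas for illustration.
tests = [
    {"name": "Add industry case studies near CTA", "impact": "High", "effort": "Hard"},
    {"name": "Reframe headline around price objection", "impact": "High", "effort": "Easy"},
    {"name": "Swap hero image", "impact": "Low", "effort": "Medium"},
]

# Sort highest priority first, as the prompt instructs.
tests.sort(key=lambda t: priority_score(t["impact"], t["effort"]), reverse=True)
for t in tests:
    print(priority_score(t["impact"], t["effort"]), t["name"])
```

The copy change that addresses the stated objection scores 9 and runs first; the cosmetic image swap scores 2 and waits.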

How to Use the Output

Take the top 5 tests. Run them one at a time if your traffic is under 10,000 monthly visitors. Running multiple tests simultaneously with low traffic gives you garbage data.

Each test needs at least 100 conversions per variation to be meaningful. Do the math on your traffic before committing to a test duration.
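Doing that math is a one-liner. A rough back-of-envelope sketch, using the article's 100-conversions-per-variation rule of thumb (not a formal statistical power calculation):

```python
def weeks_to_run(monthly_visitors: float, conversion_rate: float,
                 variations: int = 2, min_conversions_per_variation: int = 100) -> float:
    """Rough test duration in weeks to hit the minimum conversion count."""
    total_needed = min_conversions_per_variation * variations
    conversions_per_week = (monthly_visitors / 4.345) * conversion_rate
    return total_needed / conversions_per_week

# Using the example numbers from the prompt: 5,000 visitors/month at 2.3%.
print(round(weeks_to_run(5000, 0.023), 1))
```

At those numbers a two-variation test needs roughly seven to eight weeks. If that feels too long, it is a signal to test bigger, bolder changes rather than to shorten the test.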

What Makes This Prompt Different

The hypothesis requirement. Every test idea comes with a reason. That means you are not just testing random changes. You are testing beliefs about your customer. When a test wins or loses, you learn something about your audience, not just about button colors.

Follow-Up Prompt

After running tests, feed the results back: "Test A won by 23%. Test B was inconclusive. Test C lost. Based on these results, what does this tell us about our audience and what should we test next?"

That creates a compounding learning loop. Each round of tests makes the next round smarter.
