How to Build an AI Landing Page Copy Optimizer
Use AI to test and optimize landing page copy for higher conversion rates.
Jay Banlasan
The AI Systems Guy
An AI landing page copy optimization tool analyzes your existing page, scores every section, and generates better alternatives. I run this on every landing page before launch. It catches weak headlines, unclear CTAs, and friction points that kill conversion rates.
The system does not replace a copywriter. It gives the copywriter data-backed direction on what to fix first.
What You Need Before Starting
- Anthropic API key
- Python 3.8+ with `anthropic` and `requests` installed
- Your landing page URL or raw HTML
- Conversion rate data if available
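If the two libraries aren't installed yet, one pip command covers both (assuming Python and pip are already on your path):

```shell
pip install anthropic requests
```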
Step 1: Extract Page Copy
```python
import requests
from html.parser import HTMLParser

class CopyExtractor(HTMLParser):
    """Collects the visible copy from the tags that carry the page's message."""

    def __init__(self):
        super().__init__()
        self.sections = []
        self.current_tag = None
        self.current_text = ""

    def handle_starttag(self, tag, attrs):
        if tag in ["h1", "h2", "h3", "p", "li", "button", "a"]:
            self.current_tag = tag
            self.current_text = ""

    def handle_data(self, data):
        if self.current_tag:
            self.current_text += data  # keep inner spacing; strip once at the end

    def handle_endtag(self, tag):
        if tag == self.current_tag and self.current_text.strip():
            self.sections.append({"tag": tag, "text": self.current_text.strip()})
        self.current_tag = None

def extract_copy(url):
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    parser = CopyExtractor()
    parser.feed(resp.text)
    return parser.sections

# Usage: sections = extract_copy("https://yoursite.com/landing")
```
Step 2: Score Each Section
```python
import anthropic
import json

def score_page_copy(sections, target_audience, offer):
    client = anthropic.Anthropic()
    copy_text = "\n".join([f"[{s['tag'].upper()}] {s['text']}" for s in sections])
    prompt = f"""Score this landing page copy section by section.

Target audience: {target_audience}
Offer: {offer}

PAGE COPY:
{copy_text}

For each section, score 1-10 on:
- Clarity: Is the message immediately clear?
- Persuasion: Does it move the reader toward action?
- Specificity: Does it use concrete details instead of vague claims?
- Voice: Does it sound human and direct?

Also identify:
- The weakest section (fix this first)
- Missing elements (social proof, objection handling, urgency)
- Friction points (confusing language, unclear next steps)

Return JSON: {{
  "sections": [{{"tag": "...", "text": "...", "clarity": N, "persuasion": N, "specificity": N, "voice": N, "feedback": "..."}}],
  "weakest_section": "...",
  "missing_elements": [...],
  "friction_points": [...],
  "overall_score": N
}}"""
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1500,
        messages=[{"role": "user", "content": prompt}]
    )
    return json.loads(resp.content[0].text)
```
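One caveat with calling `json.loads` on the raw reply: models sometimes wrap the answer in a Markdown code fence, which breaks a direct parse. A small hardening helper you can swap in (my addition, not part of the original steps):

```python
import json
import re

def parse_json_reply(text):
    """Parse a model reply that may wrap its JSON in a ```json ... ``` fence."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)  # keep only what's inside the fence
    return json.loads(text)
```

Swap it in for the bare `json.loads` calls if you see intermittent `JSONDecodeError`s.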
Step 3: Generate Optimized Alternatives
```python
def generate_alternatives(weak_sections, audience, offer):
    client = anthropic.Anthropic()
    sections_text = "\n".join([f"[{s['tag']}] {s['text']} (Score: {s.get('clarity', 0)}/10)" for s in weak_sections])
    prompt = f"""Rewrite these underperforming landing page sections.

Audience: {audience}
Offer: {offer}

SECTIONS TO IMPROVE:
{sections_text}

For each section, provide 3 alternatives that:
- Score higher on clarity and persuasion
- Use Grade 5-6 reading level
- Are specific and concrete (numbers, outcomes, proof)
- Sound like a confident practitioner, not a copywriter
- Use contractions naturally
- Avoid: "leverage", "utilize", "game-changing", "revolutionary", "in today's"

Return JSON array: [{{"original": "...", "alternatives": ["...", "...", "..."], "why_better": "..."}}]"""
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1500,
        messages=[{"role": "user", "content": prompt}]
    )
    return json.loads(resp.content[0].text)
```
Step 4: A/B Test Recommendations
```python
def create_test_plan(score_results, alternatives):
    plan = []
    # Sort sections by impact (lowest scoring first)
    sections = sorted(score_results["sections"], key=lambda x: (x.get("clarity", 0) + x.get("persuasion", 0)) / 2)
    for section in sections[:3]:  # Test top 3 weakest
        alt = next((a for a in alternatives if a["original"] == section["text"]), None)
        if alt:
            plan.append({
                "element": section["tag"],
                "current": section["text"],
                "test_variant": alt["alternatives"][0],
                "expected_impact": alt["why_better"],
                "priority": "high" if section.get("clarity", 5) < 5 else "medium",
            })
    return plan
```
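The sort key ranks sections weakest-first by averaging clarity and persuasion. With dummy scores (hypothetical values, just to show the ordering):

```python
# Hypothetical scored sections -- not real output from the scorer
sections = [
    {"tag": "h1", "clarity": 8, "persuasion": 7},  # average 7.5
    {"tag": "p", "clarity": 3, "persuasion": 4},   # average 3.5 -- tested first
    {"tag": "a", "clarity": 5, "persuasion": 5},   # average 5.0
]
ranked = sorted(sections, key=lambda s: (s["clarity"] + s["persuasion"]) / 2)
print([s["tag"] for s in ranked])  # ['p', 'a', 'h1']
```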
Step 5: Track Results
```python
import sqlite3

def log_test_result(db_path, page_url, element, variant, conversion_rate_control, conversion_rate_variant):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS lp_tests (
        id INTEGER PRIMARY KEY, page_url TEXT, element TEXT, variant TEXT,
        cr_control REAL, cr_variant REAL, lift REAL,
        tested_at DATETIME DEFAULT CURRENT_TIMESTAMP
    )""")
    # Percent lift of variant over control; guard against a zero control rate
    lift = ((conversion_rate_variant - conversion_rate_control) / conversion_rate_control * 100) if conversion_rate_control > 0 else 0
    conn.execute("INSERT INTO lp_tests (page_url, element, variant, cr_control, cr_variant, lift) VALUES (?,?,?,?,?,?)",
                 (page_url, element, variant, conversion_rate_control, conversion_rate_variant, lift))
    conn.commit()
    conn.close()
    print(f"Test result: {element} -> {lift:+.1f}% lift")
```
Run the optimizer before every page launch and after every major traffic milestone. The AI catches patterns humans miss, especially after staring at the same page for weeks.
What to Build Next
Build a page-level heatmap analyzer that combines the copy scores with user behavior data from Clarity or Hotjar. Then add automatic variant generation that feeds winning test results back into the system.
Related Reading
- AI for Content Creation at Scale - scaling content production
- The Compounding Advantage Nobody Talks About - how incremental improvements compound
- Input, Process, Output: The Universal AI Framework - the framework behind every AI system
Want this system built for your business?
Get a free assessment. We will map every system your business needs and show you the ROI.
Get Your Free Assessment