How to Build an AI Process Optimization Analyzer
Analyze business processes with AI to find bottlenecks and optimization opportunities.
Jay Banlasan
The AI Systems Guy
You cannot fix a process you do not understand. I built an AI process optimization analyzer that takes workflow data, identifies bottlenecks, measures cycle times, and recommends specific changes. It finds the bottleneck hidden in your process before you waste time optimizing the wrong step.
Feed it your process data. It tells you where the time goes.
What You Need Before Starting
- Python 3.8+
- An Anthropic API key (for Claude)
- Process execution data (timestamps per step)
- A process map or description
Step 1: Collect Process Execution Data
```python
import sqlite3
from datetime import datetime

def init_process_db(db_path="processes.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS process_executions (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            process_name TEXT,
            instance_id TEXT,
            step_name TEXT,
            step_order INTEGER,
            started_at TEXT,
            completed_at TEXT,
            assignee TEXT,
            status TEXT
        )
    """)
    conn.commit()
    return conn

def log_step(conn, process_name, instance_id, step_name, step_order, assignee):
    conn.execute(
        "INSERT INTO process_executions"
        " (process_name, instance_id, step_name, step_order, assignee, started_at, status)"
        " VALUES (?, ?, ?, ?, ?, ?, ?)",
        (process_name, instance_id, step_name, step_order, assignee,
         datetime.now().isoformat(), "in_progress"),
    )
    conn.commit()
```
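`log_step` records when a step begins, but everything in Step 2 depends on `completed_at` being filled in too. A minimal sketch of the matching completion helper (`complete_step` and the `"done"` status value are my naming, not from the original):

```python
from datetime import datetime

def complete_step(conn, instance_id, step_name):
    # Stamp the completion time on the in-progress row for this
    # instance and step, so cycle-time math has both endpoints.
    conn.execute(
        "UPDATE process_executions SET completed_at = ?, status = 'done' "
        "WHERE instance_id = ? AND step_name = ? AND status = 'in_progress'",
        (datetime.now().isoformat(), instance_id, step_name),
    )
    conn.commit()
```

Call it from whatever event marks the step as finished (a webhook, a form submission, a ticket transition).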
Step 2: Calculate Step-Level Metrics
```python
from datetime import datetime, timedelta

def analyze_step_times(conn, process_name, days=30):
    cutoff = (datetime.now() - timedelta(days=days)).isoformat()
    rows = conn.execute("""
        SELECT step_name, step_order,
               AVG((julianday(completed_at) - julianday(started_at)) * 24) AS avg_hours,
               MIN((julianday(completed_at) - julianday(started_at)) * 24) AS min_hours,
               MAX((julianday(completed_at) - julianday(started_at)) * 24) AS max_hours,
               COUNT(*) AS executions
        FROM process_executions
        WHERE process_name = ? AND completed_at IS NOT NULL AND started_at > ?
        GROUP BY step_name, step_order
        ORDER BY step_order
    """, (process_name, cutoff)).fetchall()
    steps = []
    for row in rows:
        steps.append({
            "step": row[0], "order": row[1],
            "avg_hours": round(row[2], 2), "min_hours": round(row[3], 2),
            "max_hours": round(row[4], 2), "executions": row[5],
            # Range between the fastest and slowest run: a rough variability signal.
            "variability": round(row[4] - row[3], 2),
        })
    return steps
```
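The query leans on SQLite's `julianday()` for the time math: differences come back in days, so multiplying by 24 yields hours. A quick sanity check of that conversion (timestamps invented for illustration):

```python
import sqlite3

# Jan 1 09:00 to Jan 2 03:00 is 18 hours; julianday() returns the
# difference in days, so * 24 converts it to hours.
conn = sqlite3.connect(":memory:")
hours = conn.execute(
    "SELECT (julianday('2099-01-02T03:00:00') - julianday('2099-01-01T09:00:00')) * 24"
).fetchone()[0]
print(round(hours, 1))  # 18.0
```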
Step 3: Identify Bottlenecks
```python
def find_bottlenecks(step_metrics):
    if not step_metrics:
        return []
    total_time = sum(s["avg_hours"] for s in step_metrics)
    bottlenecks = []
    for step in step_metrics:
        pct_of_total = (step["avg_hours"] / total_time * 100) if total_time > 0 else 0
        # Flag a step if it eats over 30% of total cycle time, or if its
        # run-to-run variability exceeds its own average.
        if pct_of_total > 30 or step["variability"] > step["avg_hours"]:
            bottlenecks.append({
                "step": step["step"],
                "avg_hours": step["avg_hours"],
                "pct_of_total": round(pct_of_total, 1),
                "variability": step["variability"],
                "type": "time_hog" if pct_of_total > 30 else "inconsistent",
            })
    bottlenecks.sort(key=lambda x: -x["avg_hours"])
    return bottlenecks
```
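To see the two thresholds in action, here is the same flagging rule applied to a toy set of step metrics (numbers invented):

```python
# Three steps totaling 10h of average cycle time.
steps = [
    {"step": "intake", "avg_hours": 1.0, "variability": 0.5},   # 10%, steady: not flagged
    {"step": "review", "avg_hours": 6.0, "variability": 2.0},   # 60% of total: time_hog
    {"step": "approve", "avg_hours": 3.0, "variability": 5.0},  # variability > avg: inconsistent
]
total = sum(s["avg_hours"] for s in steps)
flags = []
for s in steps:
    pct = s["avg_hours"] / total * 100
    if pct > 30:
        flags.append((s["step"], "time_hog"))
    elif s["variability"] > s["avg_hours"]:
        flags.append((s["step"], "inconsistent"))
print(flags)  # [('review', 'time_hog'), ('approve', 'inconsistent')]
```

A step can be a problem without being slow on average: "approve" only takes 3 hours typically, but a 5-hour spread between its best and worst runs means nobody can predict when it finishes.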
Step 4: Generate AI Recommendations
```python
import anthropic
import json

client = anthropic.Anthropic()

def get_optimization_recommendations(process_name, step_metrics, bottlenecks):
    prompt = f"""Analyze this business process and recommend optimizations.

Process: {process_name}

Step Metrics:
{json.dumps(step_metrics, indent=2)}

Identified Bottlenecks:
{json.dumps(bottlenecks, indent=2)}

For each bottleneck, recommend:
1. Root cause (why this step takes so long or varies so much)
2. Specific fix (what to change)
3. Expected improvement (percentage time reduction)
4. Implementation difficulty (low/medium/high)

Also identify:
- Any steps that could run in parallel
- Any steps that could be automated entirely
- Quick wins (easy changes, big impact)

Return as JSON."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(message.content[0].text)
```
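One caveat: `json.loads` raises if the model wraps its answer in a markdown fence, which can happen even when the prompt asks for JSON. A defensive parsing helper you can swap in for the bare `json.loads` call (`parse_json_reply` is my addition, not part of the original):

```python
import json

def parse_json_reply(text):
    # Strip a surrounding ```json ... ``` fence if the model added one,
    # then parse whatever remains.
    cleaned = text.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.split("\n", 1)[1]    # drop the opening fence line
        cleaned = cleaned.rsplit("```", 1)[0]  # drop the closing fence
    return json.loads(cleaned)
```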
Step 5: Generate the Optimization Report
```python
def optimization_report(process_name, metrics, bottlenecks, recommendations):
    total_time = sum(s["avg_hours"] for s in metrics)
    report = f"# Process Optimization: {process_name}\n\n"
    report += f"Total avg cycle time: {round(total_time, 1)} hours\n"
    report += f"Steps: {len(metrics)} | Bottlenecks: {len(bottlenecks)}\n\n"
    report += "## Step Breakdown\n\n"
    report += "| Step | Avg Hours | % of Total | Variability |\n|---|---|---|---|\n"
    for s in metrics:
        pct = round(s["avg_hours"] / total_time * 100, 1) if total_time else 0
        report += f"| {s['step']} | {s['avg_hours']}h | {pct}% | {s['variability']}h |\n"
    report += "\n## Bottlenecks\n\n"
    for b in bottlenecks:
        report += f"- **{b['step']}**: {b['avg_hours']}h avg ({b['pct_of_total']}% of total) - {b['type']}\n"
    return report
```
What to Build Next
Add before/after comparison. After you implement a change, the system should measure whether the bottleneck actually improved. Without measurement, you are guessing. With measurement, you are optimizing.
Related Reading
- Identifying Your Biggest Bottleneck - the framework for bottleneck analysis
- The Feedback Loop That Powers Everything - measuring process improvements
- Cost of Manual vs Cost of Automated - calculating the value of process optimization
Want this system built for your business?
Get a free assessment. We will map every system your business needs and show you the ROI.
Get Your Free Assessment