
How to Set Up OpenAI Function Calling

Configure GPT to call external functions and tools for dynamic responses.

Jay Banlasan

The AI Systems Guy

OpenAI function calling is the gateway to building agents that actually do things. Without function calling, GPT-4 is a text generator. With it, the model becomes a reasoning engine that decides which tools to use, extracts the right parameters from the conversation, and hands off to your code for execution. I use this pattern to build chatbots that can look up customer records, check inventory, send emails, and update CRMs directly from a conversation.

Function calling does not execute your functions. It decides what to call and what arguments to pass. Your Python code does the actual work. This separation is what makes the pattern safe and controllable.
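To make that separation concrete, here is a sketch of what a tool call looks like once deserialized. The field values are illustrative, but the shape matches what the API sends back: a function name plus the arguments as a JSON string, which your code parses and then decides what to do with.

```python
import json

# Illustrative tool-call payload, shaped like what the API serializes
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_customer_info",
        "arguments": '{"email": "[email protected]"}',  # arguments arrive as a JSON string
    },
}

# Your code, not the model, decides whether and how to execute this
args = json.loads(tool_call["function"]["arguments"])
print(tool_call["function"]["name"], args)
```

Nothing runs until your code calls a function with those parsed arguments, which is exactly the control point that makes the pattern safe.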

What You Need Before Starting

You need Python 3.9 or newer, an OpenAI API key, and two packages: openai and python-dotenv (pip install openai python-dotenv). Put your key in a .env file as OPENAI_API_KEY, then initialize the client:

import os
import json
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

Step 1: Understand the Function Calling Loop

Each turn takes two API calls. The first call (steps 1-2) lets the model pick a tool; the second (steps 4-5) turns the tool's result into an answer:

  1. Send the user message plus function definitions to GPT
  2. GPT returns a "tool call" object specifying which function to call and what arguments to pass
  3. Your code executes the real function
  4. Send the function result back to GPT
  5. GPT incorporates the result and generates the final response

Step 2: Define Your Tools

Tools are defined as JSON Schema objects describing the function name, description, and parameters:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_customer_info",
            "description": "Look up a customer's account information by email address. Returns name, plan, and account status.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {
                        "type": "string",
                        "description": "The customer's email address"
                    }
                },
                "required": ["email"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Check the status of an order by order ID. Returns order status and estimated delivery date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "The order ID to check"
                    }
                },
                "required": ["order_id"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "create_support_ticket",
            "description": "Create a support ticket for a customer issue. Use when the issue cannot be resolved directly.",
            "parameters": {
                "type": "object",
                "properties": {
                    "customer_email": {
                        "type": "string",
                        "description": "Customer's email address"
                    },
                    "issue_summary": {
                        "type": "string",
                        "description": "Brief description of the issue"
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["low", "medium", "high"],
                        "description": "Ticket priority based on issue severity"
                    }
                },
                "required": ["customer_email", "issue_summary", "priority"]
            }
        }
    }
]
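One caveat worth planning for: the model occasionally emits arguments that are missing a required field. Since the schemas above already declare their required parameters, you can reuse that list as a cheap pre-dispatch check. A minimal sketch (the helper name and return shape are my own, not part of the OpenAI SDK):

```python
def validate_arguments(tool_schema: dict, arguments: dict) -> list:
    """Return the required parameters missing from the model's arguments."""
    params = tool_schema["function"]["parameters"]
    return [name for name in params.get("required", []) if name not in arguments]

# Check against the get_customer_info schema from above
schema = {
    "type": "function",
    "function": {
        "name": "get_customer_info",
        "parameters": {
            "type": "object",
            "properties": {"email": {"type": "string"}},
            "required": ["email"],
        },
    },
}

print(validate_arguments(schema, {}))                          # missing 'email'
print(validate_arguments(schema, {"email": "[email protected]"}))  # nothing missing
```

If anything comes back missing, return an error result to the model instead of crashing; it will usually retry with corrected arguments.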

Step 3: Implement the Real Functions

These are your actual business-logic functions; the tool calls GPT emits will route to them:

# Simulated database for this example
CUSTOMER_DB = {
    "[email protected]": {"name": "Sarah Chen", "plan": "Pro", "status": "active"},
    "[email protected]": {"name": "James Okafor", "plan": "Starter", "status": "past_due"},
}

ORDER_DB = {
    "ORD-1234": {"status": "shipped", "delivery_date": "2024-06-15", "carrier": "FedEx"},
    "ORD-5678": {"status": "processing", "delivery_date": "2024-06-20", "carrier": None},
}


def get_customer_info(email: str) -> dict:
    """Look up customer by email."""
    customer = CUSTOMER_DB.get(email.lower())
    if customer:
        return {"found": True, **customer, "email": email}
    return {"found": False, "email": email, "message": "Customer not found"}


def get_order_status(order_id: str) -> dict:
    """Get order status by ID."""
    order = ORDER_DB.get(order_id.upper())
    if order:
        return {"found": True, "order_id": order_id, **order}
    return {"found": False, "order_id": order_id, "message": "Order not found"}


def create_support_ticket(customer_email: str, issue_summary: str, priority: str) -> dict:
    """Create a support ticket (simulated)."""
    # Note: hash() is salted per Python process, so these IDs are only stable within a run
    ticket_id = f"TKT-{hash(customer_email + issue_summary) % 10000:04d}"
    return {
        "ticket_id": ticket_id,
        "status": "created",
        "customer_email": customer_email,
        "priority": priority,
        "message": f"Ticket {ticket_id} created. Support team will respond within 24 hours."
    }


# Map function names to actual functions
FUNCTION_MAP = {
    "get_customer_info": get_customer_info,
    "get_order_status": get_order_status,
    "create_support_ticket": create_support_ticket,
}
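A straight `FUNCTION_MAP[name](**arguments)` lookup will raise if the model hallucinates a function name or produces malformed arguments. Here is a hedged sketch of defensive dispatch around a map like the one above; the error-as-data shapes are my choice, not anything the API mandates:

```python
import json

def dispatch(function_map: dict, name: str, raw_arguments: str) -> dict:
    """Safely route a model tool call to a local function, returning errors as data."""
    func = function_map.get(name)
    if func is None:
        return {"error": f"Unknown function: {name}"}
    try:
        arguments = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return {"error": "Model produced invalid JSON arguments"}
    try:
        return func(**arguments)
    except TypeError as exc:  # wrong or missing keyword arguments
        return {"error": str(exc)}

# Usage with a trivial map
result = dispatch({"echo": lambda text: {"text": text}}, "echo", '{"text": "hi"}')
print(result)                           # {'text': 'hi'}
print(dispatch({}, "missing", "{}"))    # {'error': 'Unknown function: missing'}
```

Returning errors as JSON results (instead of raising) lets you feed them back to the model, which can apologize, retry, or escalate.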

Step 4: Build the Function Calling Loop

def run_agent(user_message: str, conversation_history: list | None = None) -> str:
    """
    Run a single agent turn with function calling support.
    
    Args:
        user_message: The user's input
        conversation_history: Previous messages in the conversation
    
    Returns:
        Final response text from the assistant
    """
    
    if conversation_history is None:
        conversation_history = []
    
    # Add user message
    messages = conversation_history + [{"role": "user", "content": user_message}]
    
    # First API call: Let GPT decide if a function call is needed
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful customer service agent. Use the available tools to look up information and assist customers accurately."
            }
        ] + messages,
        tools=tools,
        tool_choice="auto"  # Let the model decide
    )
    
    assistant_message = response.choices[0].message
    
    # Check if the model wants to call a function
    if assistant_message.tool_calls:
        # Execute each tool call
        tool_results = []
        
        for tool_call in assistant_message.tool_calls:
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)
            
            print(f"Calling function: {function_name}({arguments})")  # Debug
            
            # Execute the real function
            result = FUNCTION_MAP[function_name](**arguments)
            
            tool_results.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": json.dumps(result)
            })
        
        # Second API call: send results back to get the final response.
        # Omitting tools here forces a plain-text answer instead of another tool call.
        final_response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful customer service agent. Use the available tools to look up information and assist customers accurately."
                }
            ] + messages + [assistant_message] + tool_results
        )
        
        return final_response.choices[0].message.content
    
    # No function call needed, return direct response
    return assistant_message.content
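The loop above handles one batch of tool calls per turn. If the model needs a second tool after seeing the first result (look up a customer, then check their order), you need an iterative version. Here is a sketch of that control flow with the API call abstracted behind a call_model callable so the loop itself is testable offline; the function names and message shapes in the fake are mine, simplified from the real SDK objects:

```python
import json

def run_tool_loop(call_model, function_map, messages, max_rounds=5):
    """Keep exchanging tool calls and results until the model answers in text."""
    for _ in range(max_rounds):
        message = call_model(messages)        # one chat-completions round trip
        if not message.get("tool_calls"):
            return message["content"]         # plain text: we're done
        messages = messages + [message]
        for call in message["tool_calls"]:
            result = function_map[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": json.dumps(result)})
    return "Stopped: too many tool rounds."

# Fake model: requests one tool call, then answers in text
replies = iter([
    {"tool_calls": [{"id": "1", "name": "ping", "arguments": "{}"}], "content": None},
    {"tool_calls": None, "content": "pong received"},
])
answer = run_tool_loop(lambda msgs: next(replies), {"ping": lambda: {"ok": True}}, [])
print(answer)  # pong received
```

The max_rounds cap matters: without it, a model stuck in a tool-calling loop will burn tokens indefinitely.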

Step 5: Test It

# Test the agent with various queries
test_queries = [
    "Can you check the account status for [email protected]?",
    "What's the status of order ORD-1234?",
    "I need to create a high priority ticket for [email protected] - he's been unable to log in for 2 days."
]

for query in test_queries:
    print(f"\nUser: {query}")
    response = run_agent(query)
    print(f"Agent: {response}")
    print("-" * 60)

Step 6: Add Multi-Turn Support

For conversations that span multiple messages:

def interactive_agent():
    """Run an interactive multi-turn agent session."""
    print("Customer Support Agent (type 'quit' to exit)\n")
    
    history = []
    
    while True:
        user_input = input("Customer: ").strip()
        if user_input.lower() == "quit":
            break
        
        response = run_agent(user_input, history)
        print(f"Agent: {response}\n")
        
        # Update history with the text exchange only; tool calls and
        # results from this turn are not carried forward
        history.append({"role": "user", "content": user_input})
        history.append({"role": "assistant", "content": response})


if __name__ == "__main__":
    interactive_agent()
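One caveat with the session loop above: history grows without bound, and every turn resends all of it, so cost and latency climb with conversation length. The simplest fix is a sliding window. A minimal sketch (the 20-message cutoff is an arbitrary choice, and a production version would summarize rather than drop):

```python
def trim_history(history: list, max_messages: int = 20) -> list:
    """Keep only the most recent messages so the prompt stays small."""
    return history[-max_messages:]

# Simulate a long conversation and trim it
history = [{"role": "user", "content": f"msg {i}"} for i in range(50)]
trimmed = trim_history(history)
print(len(trimmed), trimmed[0]["content"])  # 20 msg 30
```

Call it on history right before passing it to run_agent and the window stays fixed no matter how long the session runs.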
