
Overview

Function calling enables LLMs to interact with external tools and APIs by generating structured function call requests. Qwen-Agent provides a robust function calling implementation with support for parallel execution, custom prompting, and seamless tool integration.

How Function Calling Works

The function calling workflow involves several steps:

  1. The user sends a message to the agent.
  2. The LLM decides whether a function is needed and, if so, emits a structured function call request.
  3. The agent parses the request and executes the matching tool with the generated arguments.
  4. The tool result is appended to the conversation as a function message and sent back to the LLM.
  5. The LLM either requests another call or generates the final response.

Basic Usage

Enabling Function Calling

Function calling is automatically enabled when you provide tools to an agent:
from qwen_agent.agents import FnCallAgent

agent = FnCallAgent(
    function_list=['code_interpreter', 'image_gen'],
    llm={'model': 'qwen-plus'}
)

# Agent automatically handles function calls
responses = agent.run_nonstream([
    {'role': 'user', 'content': 'Calculate fibonacci(15)'}
])

# The agent:
# 1. Receives user message
# 2. LLM decides to call code_interpreter
# 3. Executes the code
# 4. Returns result to LLM
# 5. LLM generates final response
Source Reference: qwen_agent/agents/fncall_agent.py:73-108

Manual Function Calling

You can also use function calling directly with an LLM:
import json

from qwen_agent.llm import get_chat_model
from qwen_agent.llm.schema import Message, FUNCTION

llm = get_chat_model({'model': 'qwen-plus'})

# Stand-in implementation; replace with a real weather lookup
def get_weather(location: str, unit: str = 'celsius') -> str:
    return f'Weather in {location}: 25 degrees {unit}, sunny'

# Define functions
functions = [{
    'name': 'get_weather',
    'description': 'Get current weather for a location',
    'parameters': {
        'type': 'object',
        'properties': {
            'location': {
                'type': 'string',
                'description': 'City name, e.g., Beijing'
            },
            'unit': {
                'type': 'string',
                'enum': ['celsius', 'fahrenheit']
            }
        },
        'required': ['location']
    }
}]

# Initial request
responses = llm.chat(
    messages=[Message(role='user', content='What is the weather in Tokyo?')],
    functions=functions,
    stream=False
)

# Check for function call
if responses[0].function_call:
    fn_name = responses[0].function_call.name
    fn_args = responses[0].function_call.arguments
    
    print(f"LLM wants to call: {fn_name}")
    print(f"With arguments: {fn_args}")
    
    # Execute the function; the arguments arrive as a JSON string
    result = get_weather(**json.loads(fn_args))
    
    # Send result back
    messages = [
        Message(role='user', content='What is the weather in Tokyo?'),
        responses[0],
        Message(role=FUNCTION, name=fn_name, content=result)
    ]
    
    final_responses = llm.chat(
        messages=messages,
        functions=functions,
        stream=False
    )
    print(final_responses[0].content)
Source Reference: qwen_agent/llm/base.py:118-176

Function Call Configuration

Generate Config Parameters

parallel_function_calls (bool, default: False)
Enable parallel execution of multiple function calls in a single response. When enabled, the LLM can request multiple independent function calls simultaneously.

function_choice (str, default: 'auto')
Control when functions are called:
  • 'auto' - Model decides whether to call functions
  • 'none' - Disable function calling (functions still in context)
  • a function name - Force a call to that specific function

thought_in_content (bool, default: False)
Include the model's reasoning in the content field alongside function calls.

fncall_prompt_type (str, default: 'nous')
Choose the function calling prompt format:
  • 'nous' - Nous Research format (default)
  • 'qwen' - Qwen-specific format
agent = FnCallAgent(
    function_list=['tool1', 'tool2', 'tool3'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'parallel_function_calls': True,
            'function_choice': 'auto',
            'thought_in_content': True,
            'fncall_prompt_type': 'qwen'
        }
    }
)
Source Reference: qwen_agent/llm/function_calling.py:25-39

Parallel Function Calls

Parallel function calling allows the LLM to request multiple independent function executions in a single response:
from qwen_agent.agents import FnCallAgent

agent = FnCallAgent(
    function_list=['get_weather', 'get_traffic', 'get_news'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'parallel_function_calls': True
        }
    }
)

# User asks for multiple pieces of information
responses = agent.run_nonstream([{
    'role': 'user',
    'content': 'What is the weather, traffic, and latest news in Beijing?'
}])

# The LLM can call all three functions in parallel:
# 1. get_weather({"location": "Beijing"})
# 2. get_traffic({"location": "Beijing"})
# 3. get_news({"location": "Beijing"})
#
# All executed simultaneously, then results sent back to LLM

Message Format for Parallel Calls

With parallel function calls, the response contains multiple assistant messages with function calls:
[
    Message(
        role='assistant',
        content='',
        function_call=FunctionCall(
            name='get_weather',
            arguments='{"location": "Beijing"}'
        ),
        extra={'function_id': '1'}
    ),
    Message(
        role='assistant',
        content='',
        function_call=FunctionCall(
            name='get_traffic',
            arguments='{"location": "Beijing"}'
        ),
        extra={'function_id': '2'}
    ),
    Message(
        role='assistant',
        content='',
        function_call=FunctionCall(
            name='get_news',
            arguments='{"location": "Beijing"}'
        ),
        extra={'function_id': '3'}
    )
]

# Followed by corresponding function result messages
[
    Message(role='function', name='get_weather', content='25°C, Sunny', extra={'function_id': '1'}),
    Message(role='function', name='get_traffic', content='Light traffic', extra={'function_id': '2'}),
    Message(role='function', name='get_news', content='...', extra={'function_id': '3'})
]
Source Reference: qwen_agent/llm/function_calling.py:59-65
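
If you execute the calls yourself rather than through an agent, you can run independent calls concurrently and copy each call's function_id into the matching result message. A minimal sketch, where run_tool is a hypothetical dispatcher standing in for your own tool implementations:
import json
from concurrent.futures import ThreadPoolExecutor

from qwen_agent.llm.schema import FUNCTION, Message

def run_tool(name: str, args: dict) -> str:
    # Hypothetical dispatcher -- replace with your real tool implementations
    raise NotImplementedError

def execute_one(call_msg: Message) -> Message:
    result = run_tool(call_msg.function_call.name,
                      json.loads(call_msg.function_call.arguments))
    # Preserve extra={'function_id': ...} so each result pairs with its request
    return Message(role=FUNCTION, name=call_msg.function_call.name,
                   content=result, extra=call_msg.extra)

def execute_parallel(call_msgs):
    # call_msgs: assistant messages carrying function_call fields, as shown above
    with ThreadPoolExecutor() as pool:
        return list(pool.map(execute_one, call_msgs))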

Function Calling Modes

Auto Mode (Default)

The LLM decides when to use functions based on the user’s request:
agent = FnCallAgent(
    function_list=['calculator'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'function_choice': 'auto'  # Default
        }
    }
)

# Calls calculator
response1 = agent.run_nonstream([{'role': 'user', 'content': 'What is 25 * 17?'}])

# Doesn't call calculator
response2 = agent.run_nonstream([{'role': 'user', 'content': 'Hello!'}])

None Mode

Disable function calling while keeping function context:
agent = FnCallAgent(
    function_list=['calculator'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'function_choice': 'none'
        }
    }
)

# LLM knows about calculator but won't call it
# Instead, it might explain how to do the calculation
response = agent.run_nonstream([{'role': 'user', 'content': 'What is 25 * 17?'}])
Source Reference: qwen_agent/llm/base.py:201-209

Forced Mode

Force the LLM to call a specific function:
agent = FnCallAgent(
    function_list=['search_database', 'web_search'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'function_choice': 'search_database'  # Always use this
        }
    }
)

# Will always call search_database, never web_search
response = agent.run_nonstream([{'role': 'user', 'content': 'Find information about AI'}])

Function Prompting

Qwen-Agent uses specialized prompt templates to guide function calling:

Nous Format (Default)

The Nous format is compatible with most models:
from qwen_agent.llm.fncall_prompts.nous_fncall_prompt import NousFnCallPrompt

prompt = NousFnCallPrompt()

# Automatically formats messages with function schemas
# Functions are injected as part of the system message
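
The prompt classes are normally applied for you by the LLM layer, based on fncall_prompt_type. To inspect what the model actually sees, you can call the preprocessor directly; a rough sketch, noting that the exact preprocess_fncall_messages signature may differ between versions:
from qwen_agent.llm.fncall_prompts.nous_fncall_prompt import NousFnCallPrompt
from qwen_agent.llm.schema import Message

functions = [{
    'name': 'get_weather',
    'description': 'Get current weather for a location',
    'parameters': {
        'type': 'object',
        'properties': {'location': {'type': 'string'}},
        'required': ['location']
    }
}]

prompt = NousFnCallPrompt()
processed = prompt.preprocess_fncall_messages(
    messages=[Message(role='user', content='What is the weather in Tokyo?')],
    functions=functions,
    lang='en'
)

# The function schemas are now embedded in a system message
for msg in processed:
    print(msg.role, str(msg.content)[:120])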

Qwen Format

Optimized for Qwen models:
agent = FnCallAgent(
    function_list=['tool1', 'tool2'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'fncall_prompt_type': 'qwen'
        }
    }
)
Source Reference: qwen_agent/llm/function_calling.py:27-39

Message Processing

Preprocessing

Before sending to the LLM, messages are preprocessed to inject function schemas:
# Original messages
messages = [
    Message(role='user', content='What is the weather?')
]

# After preprocessing with functions
# Function schemas are added to system message
messages = [
    Message(
        role='system',
        content='You have access to the following functions:\n...'
    ),
    Message(role='user', content='What is the weather?')
]
Source Reference: qwen_agent/llm/function_calling.py:41-66

Postprocessing

LLM outputs are parsed to extract function calls:
# Raw LLM output (text format)
raw_output = '''
I will check the weather for you.
<function_call>
{"name": "get_weather", "arguments": {"location": "Beijing"}}
</function_call>
'''

# After postprocessing
Message(
    role='assistant',
    content='I will check the weather for you.',
    function_call=FunctionCall(
        name='get_weather',
        arguments='{"location": "Beijing"}'
    )
)
Source Reference: qwen_agent/llm/function_calling.py:68-82
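
As a mental model, the parsing behaves roughly like the sketch below. The <function_call> tag mirrors the illustrative output above; the real tag format depends on fncall_prompt_type, so treat this as an approximation rather than the library's actual parser:
import json
import re

from qwen_agent.llm.schema import FunctionCall, Message

def parse_fncall(raw_output: str) -> Message:
    # Split free text from an embedded function call (illustrative only)
    match = re.search(r'<function_call>\s*(\{.*\})\s*</function_call>',
                      raw_output, flags=re.DOTALL)
    if not match:
        return Message(role='assistant', content=raw_output.strip())
    call = json.loads(match.group(1))
    return Message(
        role='assistant',
        content=raw_output[:match.start()].strip(),
        function_call=FunctionCall(name=call['name'],
                                   arguments=json.dumps(call['arguments']))
    )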

Tool Detection

Agents use _detect_tool to identify function calls in LLM responses:
class MyAgent(Agent):
    def _run(self, messages, **kwargs):
        # Get LLM response
        for responses in self._call_llm(messages, functions=[...]):
            for msg in responses:
                # Detect if this message contains a function call
                use_tool, tool_name, tool_args, text = self._detect_tool(msg)
                
                if use_tool:
                    # Execute the tool
                    result = self._call_tool(tool_name, tool_args)
                    # Add result to messages
                    messages.append(
                        Message(role=FUNCTION, name=tool_name, content=result)
                    )
Source Reference: qwen_agent/agent.py:239-259

Iterative Tool Use

Agents can use tools iteratively to solve complex tasks:
# User: "Fetch the latest stock price and analyze the trend"

# Iteration 1:
# LLM → function_call: get_stock_price({"symbol": "AAPL"})
# Tool → "Current price: $150.25"

# Iteration 2:
# LLM → function_call: get_historical_data({"symbol": "AAPL", "days": 30})
# Tool → "[historical data]"

# Iteration 3:
# LLM → function_call: analyze_trend({"data": "..."})
# Tool → "Upward trend with 5% growth"

# Iteration 4:
# LLM → Final response: "Based on the data, AAPL shows an upward trend..."
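
Conceptually, the control flow is roughly the following (a sketch of the loop, not the actual FnCallAgent code; the helper functions are illustrative):
MAX_LLM_CALL_PER_RUN = 8  # illustrative value; the real constant is defined by the library

def agent_loop(call_llm, detect_tool, call_tool, messages):
    for _ in range(MAX_LLM_CALL_PER_RUN):
        response = call_llm(messages)               # one LLM round trip
        messages.extend(response)
        use_tool, name, args, _ = detect_tool(response[-1])
        if not use_tool:
            break                                   # final answer reached
        result = call_tool(name, args)              # execute the requested tool
        messages.append({'role': 'function', 'name': name, 'content': result})
    return messages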
The agent automatically manages this loop for up to MAX_LLM_CALL_PER_RUN iterations.
Source Reference: qwen_agent/agents/fncall_agent.py:73-108

Advanced Patterns

Conditional Function Calling

class SmartAgent(FnCallAgent):
    def _run(self, messages, **kwargs):
        # Determine if we need expensive tools
        query = messages[-1].content
        
        if 'code' in query.lower():
            # Enable code interpreter
            extra_cfg = {'function_choice': 'code_interpreter'}
        elif 'search' in query.lower():
            extra_cfg = {'function_choice': 'auto'}
        else:
            # Disable tools for simple queries
            extra_cfg = {'function_choice': 'none'}
        
        return self._call_llm(messages, 
                             functions=[...],
                             extra_generate_cfg=extra_cfg)

Custom Function Results

Tools can return multimodal results:
from qwen_agent.llm.schema import ContentItem
from qwen_agent.tools.base import BaseTool

class PlotTool(BaseTool):
    def call(self, params, **kwargs):
        # generate_plot is your own helper that renders and saves a figure
        plot_path = self.generate_plot(params)
        
        # Return as multimodal content
        return [
            ContentItem(text="Here is the plot:"),
            ContentItem(image=plot_path)
        ]
Source Reference: qwen_agent/agent.py:205-210

Thought in Content

Include reasoning alongside function calls:
agent = FnCallAgent(
    function_list=['calculator'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'thought_in_content': True
        }
    }
)

# Response will include both reasoning and function call:
# Message(
#     role='assistant',
#     content='To solve this, I need to calculate 25 * 17.',
#     function_call=FunctionCall(name='calculator', arguments='...')
# )

Error Handling

Tool Errors

from qwen_agent.log import logger
from qwen_agent.tools.base import ToolServiceError  # assumed import location

class RobustAgent(FnCallAgent):
    def _call_tool(self, tool_name, tool_args, **kwargs):
        try:
            return super()._call_tool(tool_name, tool_args, **kwargs)
        except ToolServiceError as e:
            # Return error to LLM so it can try a different approach
            return f"Error calling {tool_name}: {e.message}"
        except Exception as e:
            # Log and return generic error
            logger.error(f"Unexpected error in {tool_name}: {e}")
            return f"Tool {tool_name} encountered an error. Please try another approach."
Source Reference: qwen_agent/agent.py:178-210

Best Practices

Function Descriptions

  • Write clear, detailed function descriptions (see the schema example after this list)
  • Specify parameter types and constraints precisely
  • Include examples in descriptions when helpful
  • Keep function names descriptive and unambiguous
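
For example, the get_weather schema from earlier, expanded along these lines (the wording is illustrative):
weather_function = {
    'name': 'get_weather',
    'description': ('Get the current weather for a city. '
                    'Use this whenever the user asks about weather conditions. '
                    'Example: get_weather(location="Beijing", unit="celsius")'),
    'parameters': {
        'type': 'object',
        'properties': {
            'location': {
                'type': 'string',
                'description': 'City name in English, e.g. "Beijing" or "Tokyo"'
            },
            'unit': {
                'type': 'string',
                'enum': ['celsius', 'fahrenheit'],
                'description': 'Temperature unit; defaults to celsius if omitted'
            }
        },
        'required': ['location']
    }
}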

Parallel Execution

  • Enable parallel calls for independent operations
  • Ensure tools are thread-safe if used in parallel
  • Consider rate limits when parallelizing API calls
  • Test parallel behavior thoroughly

Error Recovery

  • Return informative error messages to the LLM
  • Let the LLM try alternative approaches
  • Use ToolServiceError for expected failures
  • Log errors for debugging

Performance

  • Set appropriate tool timeouts
  • Use function_choice='none' for non-tool queries
  • Monitor iteration counts
  • Cache expensive tool results (see the caching sketch below)
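
A minimal caching sketch, assuming tool results are deterministic for the same arguments (skip the cache for time-sensitive tools such as weather or news):
from qwen_agent.agents import FnCallAgent

class CachingFnCallAgent(FnCallAgent):
    # Memoize tool results keyed by (tool name, serialized arguments)
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._tool_cache = {}

    def _call_tool(self, tool_name, tool_args='{}', **kwargs):
        key = (tool_name, str(tool_args))
        if key not in self._tool_cache:
            self._tool_cache[key] = super()._call_tool(tool_name, tool_args, **kwargs)
        return self._tool_cache[key]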

Debugging

import logging
from qwen_agent.log import logger

# Enable debug logging
logger.setLevel(logging.DEBUG)

agent = FnCallAgent(
    function_list=['tool1', 'tool2'],
    llm={'model': 'qwen-plus'}
)

# Inspect messages at each step
messages = [{'role': 'user', 'content': 'What can you do?'}]
for i, responses in enumerate(agent.run(messages)):
    print(f"\n=== Step {i} ===")
    for msg in responses:
        print(f"Role: {msg.role}")
        if msg.function_call:
            print(f"Function: {msg.function_call.name}")
            print(f"Arguments: {msg.function_call.arguments}")
        else:
            print(f"Content: {msg.content[:100]}...")

Tools

Learn how to create and configure tools

Agents

Understand agent architecture and workflows

LLM Configuration

Configure LLM parameters for optimal function calling