Function calling enables LLMs to interact with external tools and APIs by generating structured function call requests. Qwen-Agent provides a robust function calling implementation with support for parallel execution, custom prompting, and seamless tool integration.
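For orientation, here is a minimal end-to-end sketch of the basic flow before the more advanced options below. The `get_weather` tool, its parameters, and its canned result are illustrative stand-ins; the registration pattern follows Qwen-Agent's `BaseTool`/`register_tool` API and the agent usage mirrors the examples in this section:

```python
import json

import json5

from qwen_agent.agents import FnCallAgent
from qwen_agent.tools.base import BaseTool, register_tool


# Illustrative tool: a real implementation would call a weather API.
@register_tool('get_weather')
class GetWeather(BaseTool):
    description = 'Look up the current weather for a city.'
    parameters = [{
        'name': 'location',
        'type': 'string',
        'description': 'City name, e.g. "Beijing"',
        'required': True,
    }]

    def call(self, params: str, **kwargs) -> str:
        location = json5.loads(params)['location']
        # Canned result for illustration only
        return json.dumps({'location': location, 'condition': 'sunny', 'temp_c': 22})


agent = FnCallAgent(
    function_list=['get_weather'],
    llm={'model': 'qwen-plus'},
)

# The LLM emits a structured call to get_weather, the agent executes the
# tool, and the final assistant message incorporates the tool result.
responses = agent.run_nonstream([
    {'role': 'user', 'content': 'What is the weather in Beijing?'}
])
print(responses[-1].content)
```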
The `parallel_function_calls` setting enables parallel execution of multiple function calls in a single response. When enabled, the LLM can request multiple independent function calls simultaneously.
Parallel function calling allows the LLM to request multiple independent function executions in a single response:
```python
from qwen_agent.agents import FnCallAgent

agent = FnCallAgent(
    function_list=['get_weather', 'get_traffic', 'get_news'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'parallel_function_calls': True
        }
    }
)

# User asks for multiple pieces of information
responses = agent.run_nonstream([{
    'role': 'user',
    'content': 'What is the weather, traffic, and latest news in Beijing?'
}])

# The LLM can call all three functions in parallel:
# 1. get_weather({"location": "Beijing"})
# 2. get_traffic({"location": "Beijing"})
# 3. get_news({"location": "Beijing"})
#
# All executed simultaneously, then results sent back to the LLM
```
Set `function_choice` to `'none'` to disable function calling while keeping the function context:
```python
agent = FnCallAgent(
    function_list=['calculator'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'function_choice': 'none'
        }
    }
)

# The LLM knows about the calculator but won't call it.
# Instead, it might explain how to do the calculation itself.
response = agent.run_nonstream([{'role': 'user', 'content': 'What is 25 * 17?'}])
```
Function call prompting is handled by dedicated prompt classes such as `NousFnCallPrompt`:

```python
from qwen_agent.llm.fncall_prompts import NousFnCallPrompt

prompt = NousFnCallPrompt()
# Automatically formats messages with function schemas.
# Functions are injected as part of the system message.
```
Before sending to the LLM, messages are preprocessed to inject function schemas:
```python
# Original messages
messages = [
    Message(role='user', content='What is the weather?')
]

# After preprocessing with functions:
# function schemas are added to the system message
messages = [
    Message(
        role='system',
        content='You have access to the following functions:\n...'
    ),
    Message(role='user', content='What is the weather?')
]
```
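For reference, the function schemas being injected typically follow the OpenAI-style layout of name, description, and JSON-Schema parameters; the `get_weather` entry below is an illustrative example:

```python
functions = [{
    'name': 'get_weather',
    'description': 'Look up the current weather for a city.',
    'parameters': {
        'type': 'object',
        'properties': {
            'location': {
                'type': 'string',
                'description': 'City name, e.g. "Beijing"',
            },
        },
        'required': ['location'],
    },
}]

# These schemas are what gets rendered into the system message shown above,
# e.g. when calling self._call_llm(messages, functions=functions).
```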
```python
# Raw LLM output (text format)
raw_output = '''I will check the weather for you.
<function_call>
{"name": "get_weather", "arguments": {"location": "Beijing"}}
</function_call>'''

# After postprocessing
Message(
    role='assistant',
    content='I will check the weather for you.',
    function_call=FunctionCall(
        name='get_weather',
        arguments='{"location": "Beijing"}'
    )
)
```
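To make the transformation concrete, here is a simplified sketch of how such a tagged block could be split into plain content plus a `FunctionCall`. It only illustrates the idea; the actual parsing is done inside the prompt classes and may use a different tag format:

```python
import json
import re

from qwen_agent.llm.schema import FunctionCall, Message

FN_CALL_RE = re.compile(r'<function_call>\s*(\{.*?\})\s*</function_call>', re.DOTALL)


def parse_fncall(raw: str) -> Message:
    """Split raw model text into plain content plus an optional function call."""
    match = FN_CALL_RE.search(raw)
    if not match:
        return Message(role='assistant', content=raw)
    call = json.loads(match.group(1))
    return Message(
        role='assistant',
        content=raw[:match.start()].strip(),
        function_call=FunctionCall(
            name=call['name'],
            arguments=json.dumps(call['arguments'], ensure_ascii=False),
        ),
    )
```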
Agents use `_detect_tool` to identify function calls in LLM responses:
```python
from qwen_agent.agent import Agent
from qwen_agent.llm.schema import FUNCTION, Message


class MyAgent(Agent):
    def _run(self, messages, **kwargs):
        # Get LLM response
        for responses in self._call_llm(messages, functions=[...]):
            for msg in responses:
                # Detect if this message contains a function call
                use_tool, tool_name, tool_args, text = self._detect_tool(msg)
                if use_tool:
                    # Execute the tool
                    result = self._call_tool(tool_name, tool_args)
                    # Add the result to the message history
                    messages.append(
                        Message(role=FUNCTION, name=tool_name, content=result)
                    )
```
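By default the decision is based on whether the message carries a `function_call`; the sketch below shows the shape of the returned tuple. The concrete values are illustrative and assume the `MyAgent` class above:

```python
from qwen_agent.llm.schema import FunctionCall, Message

agent = MyAgent(function_list=['get_weather'], llm={'model': 'qwen-plus'})

# A message that carries a function call
msg = Message(
    role='assistant',
    content='',
    function_call=FunctionCall(
        name='get_weather',
        arguments='{"location": "Beijing"}',
    ),
)
use_tool, tool_name, tool_args, text = agent._detect_tool(msg)
# Expected: use_tool=True, tool_name='get_weather',
#           tool_args='{"location": "Beijing"}', text=''

# A plain text message without a function call
msg = Message(role='assistant', content='Just a normal reply.')
use_tool, tool_name, tool_args, text = agent._detect_tool(msg)
# Expected: use_tool=False, tool_name=None, tool_args=None, text='Just a normal reply.'
```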
With `thought_in_content` enabled, the model's reasoning is kept in the message content alongside the function call:

```python
agent = FnCallAgent(
    function_list=['calculator'],
    llm={
        'model': 'qwen-plus',
        'generate_cfg': {
            'thought_in_content': True
        }
    }
)

# Response will include both reasoning and the function call:
# Message(
#     role='assistant',
#     content='To solve this, I need to calculate 25 * 17.',
#     function_call=FunctionCall(name='calculator', arguments='...')
# )
```
To handle tool failures gracefully, override `_call_tool` and return the error as text so the conversation can continue:

```python
class RobustAgent(FnCallAgent):
    def _call_tool(self, tool_name, tool_args, **kwargs):
        try:
            return super()._call_tool(tool_name, tool_args, **kwargs)
        except ToolServiceError as e:
            # Return the error to the LLM so it can try a different approach
            return f"Error calling {tool_name}: {e.message}"
        except Exception as e:
            # Log and return a generic error
            logger.error(f"Unexpected error in {tool_name}: {e}")
            return f"Tool {tool_name} encountered an error. Please try another approach."
```
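Usage is the same as for a plain `FnCallAgent`; the difference is that a failing tool call is fed back to the model as an error string in the function result, so the model can retry with different arguments or answer without the tool:

```python
agent = RobustAgent(
    function_list=['calculator'],
    llm={'model': 'qwen-plus'},
)

# If the calculator tool raises, the model sees the error message as the
# function result instead of the whole run aborting with an exception.
responses = agent.run_nonstream([
    {'role': 'user', 'content': 'What is 25 * 17?'}
])
```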