{
 "cells": [
  { "cell_type": "markdown", "metadata": {}, "source": [
   "# Termination Conditions\n",
   "\n",
   "In the previous section, we explored how to define agents and organize them into teams that can solve tasks by communicating (having a conversation). A conversation, however, can go on forever, and in many cases we need to know _when_ to stop it. This is the role of termination conditions.\n",
   "\n",
   "AgentChat supports several termination conditions by providing a base {py:class}`~autogen_agentchat.base.TerminationCondition` class and several implementations that inherit from it.\n",
   "\n",
   "A termination condition is a callable that takes the sequence of ChatMessage objects received since the last time the condition was called, and returns a StopMessage if the conversation should be terminated, or None otherwise. Once a termination condition has been reached, it must be reset before it can be used again.\n",
   "\n",
   "Some important things to note about termination conditions:\n",
   "- They are stateful and must be reset before they can be used again (see the sketch just below).\n",
   "- They can be combined using AND and OR operators (a combined example is sketched later in this section).\n",
   "- They are implemented and enforced by the team, not by the agents. An agent may signal or request termination by sending a StopMessage, but the team is responsible for enforcing it."
  ] },
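  { "cell_type": "markdown", "metadata": {}, "source": [
   "To make these semantics concrete, the unexecuted cell below drives a termination condition by hand, outside of a team. It is a minimal sketch that assumes `TextMessage` and `StopMessage` can be imported from `autogen_agentchat.messages`, that a condition can be awaited directly with the list of new messages, and that it is reset with `await ...reset()`; the names `demo_termination` and `result` are introduced here for illustration only."
  ] },
  { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
   "from autogen_agentchat.messages import StopMessage, TextMessage\n",
   "from autogen_agentchat.task import MaxMessageTermination\n",
   "\n",
   "# A condition that triggers once it has seen 2 messages in total (assumed API).\n",
   "demo_termination = MaxMessageTermination(max_messages=2)\n",
   "\n",
   "# A team would call the condition with the new messages after each agent turn.\n",
   "result = await demo_termination([TextMessage(content=\"Hello\", source=\"user\")])\n",
   "print(result)  # None: only one message seen so far, so the conversation continues.\n",
   "\n",
   "result = await demo_termination([TextMessage(content=\"Hi there!\", source=\"assistant\")])\n",
   "print(isinstance(result, StopMessage))  # True: the limit of 2 messages has been reached.\n",
   "\n",
   "# The condition is stateful; reset it before using it again.\n",
   "await demo_termination.reset()"
  ] },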
  { "cell_type": "markdown", "metadata": {}, "source": [
   "To begin, let us define a simple team with a single agent and then explore how multiple termination conditions can be applied to guide the resulting behavior."
  ] },
  { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [
   { "name": "stderr", "output_type": "stream", "text": [
    "/tmp/ipykernel_326523/3735405718.py:21: DeprecationWarning: CodingAssistantAgent is deprecated. Use AssistantAgent instead.\n",
    " writing_assistant_agent = CodingAssistantAgent(\n"
   ] }
  ], "source": [
   "import logging\n",
   "\n",
   "from autogen_agentchat import EVENT_LOGGER_NAME\n",
   "from autogen_agentchat.agents import CodingAssistantAgent\n",
   "from autogen_agentchat.logging import ConsoleLogHandler\n",
   "from autogen_agentchat.task import MaxMessageTermination, TextMentionTermination\n",
   "from autogen_agentchat.teams import RoundRobinGroupChat\n",
   "from autogen_core.components.models import OpenAIChatCompletionClient\n",
   "\n",
   "logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
   "logger.addHandler(ConsoleLogHandler())\n",
   "logger.setLevel(logging.INFO)\n",
   "\n",
   "model_client = OpenAIChatCompletionClient(\n",
   "    model=\"gpt-4o\",\n",
   "    temperature=1,\n",
   "    api_key=\"sk-\",  # Optional if you have an OPENAI_API_KEY env variable set.\n",
   ")\n",
   "\n",
   "writing_assistant_agent = CodingAssistantAgent(\n",
   "    name=\"writing_assistant_agent\",\n",
   "    system_message=\"You are a helpful assistant that solves tasks by generating text responses and code.\",\n",
   "    model_client=model_client,\n",
   ")"
  ] },
  { "cell_type": "markdown", "metadata": {}, "source": [
   "## MaxMessageTermination\n",
   "\n",
   "The simplest termination condition is {py:class}`~autogen_agentchat.task.MaxMessageTermination`, which terminates the conversation after a fixed number of messages."
  ] },
  { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [
   { "name": "stdout", "output_type": "stream", "text": [
    "{\"timestamp\": \"2024-12-06T17:57:10.104944\", \"message\": \"{\\n \\\"source\\\": \\\"user\\\",\\n \\\"models_usage\\\": null,\\n \\\"content\\\": \\\"Write a unique, Haiku about the weather in Paris\\\"\\n}\", \"type\": \"TextMessage\"}\n",
    "{\"timestamp\": \"2024-12-06T17:57:11.015792\", \"message\": \"{\\n \\\"source\\\": \\\"writing_assistant_agent\\\",\\n \\\"models_usage\\\": {\\n \\\"prompt_tokens\\\": 38,\\n \\\"completion_tokens\\\": 19\\n },\\n \\\"content\\\": \\\"Rain whispers softly, \\\\nEiffel towers through the mist\\u2014 \\\\nCity wrapped in gray. \\\"\\n}\", \"type\": \"TextMessage\"}\n",
    "{\"timestamp\": \"2024-12-06T17:57:11.737698\", \"message\": \"{\\n \\\"source\\\": \\\"writing_assistant_agent\\\",\\n \\\"models_usage\\\": {\\n \\\"prompt_tokens\\\": 66,\\n \\\"completion_tokens\\\": 19\\n },\\n \\\"content\\\": \\\"Rain whispers softly, \\\\nEiffel towers through the mist\\u2014 \\\\nCity wrapped in gray. \\\"\\n}\", \"type\": \"TextMessage\"}\n",
    "{\"timestamp\": \"2024-12-06T17:57:11.738580\", \"message\": \"{\\n \\\"source\\\": \\\"MaxMessageTermination\\\",\\n \\\"models_usage\\\": null,\\n \\\"content\\\": \\\"Maximum number of messages 3 reached, current message count: 3\\\"\\n}\", \"type\": \"StopMessage\"}\n"
   ] }
  ], "source": [
   "max_msg_termination = MaxMessageTermination(max_messages=3)\n",
   "round_robin_team = RoundRobinGroupChat([writing_assistant_agent], termination_condition=max_msg_termination)\n",
   "round_robin_team_result = await round_robin_team.run(task=\"Write a unique, Haiku about the weather in Paris\")"
  ] },
  { "cell_type": "markdown", "metadata": {}, "source": [
   "We see that the conversation terminated after the agent had sent the specified number of messages."
  ] },
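  { "cell_type": "markdown", "metadata": {}, "source": [
   "As noted at the top of this section, termination conditions can also be combined. The unexecuted cell below is a minimal sketch of that idea: it assumes the OR combination is exposed through the Python `|` operator on termination conditions (with `&` for AND), and the names `combined_termination`, `combined_team`, and `combined_team_result` are introduced here for illustration only. The combined team stops as soon as either the text \"TERMINATE\" is mentioned or ten messages have been exchanged; since the current writing assistant has not been told to say TERMINATE, the message limit acts as a safety net."
  ] },
  { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
   "# Stop when EITHER condition triggers: \"TERMINATE\" is mentioned OR 10 messages have been exchanged.\n",
   "# (Assumes `|` builds an OR combination of conditions; `&` would require both to trigger.)\n",
   "combined_termination = MaxMessageTermination(max_messages=10) | TextMentionTermination(\"TERMINATE\")\n",
   "\n",
   "combined_team = RoundRobinGroupChat([writing_assistant_agent], termination_condition=combined_termination)\n",
   "combined_team_result = await combined_team.run(task=\"Write a unique, Haiku about the weather in Paris\")"
  ] },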
  { "cell_type": "markdown", "metadata": {}, "source": [
   "## TextMentionTermination\n",
   "\n",
   "In this scenario, the team terminates the conversation when a specific text string appears in a message. Relatedly, if any agent sends a `StopMessage`, the team also terminates the conversation. So, when does an agent send a `StopMessage`? Typically, this is implemented in the agent's `on_messages` method, where the agent can inspect the incoming messages and decide to send a `StopMessage` based on some condition.\n",
   "\n",
   "A common pattern here is to prompt the agent (or one of the agents participating in the conversation) to emit a specific text string in its response, and to use that string to trigger a termination condition such as {py:class}`~autogen_agentchat.task.TextMentionTermination`.\n",
   "\n",
   "In fact, if you look at the implementation of the default `CodingAssistantAgent` class provided by AgentChat, you will observe two things:\n",
   "- The default `system_message` instructs the agent to end its response with the word \"TERMINATE\" when it considers the task to be done.\n",
   "- In its `on_messages` method, the agent checks whether an incoming message contains the text \"TERMINATE\" and returns a `StopMessage` if it does."
  ] },
  { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [
   { "name": "stderr", "output_type": "stream", "text": [
    "/tmp/ipykernel_326523/3843252945.py:1: DeprecationWarning: CodingAssistantAgent is deprecated. Use AssistantAgent instead.\n",
    " writing_assistant_agent = CodingAssistantAgent(\n"
   ] },
   { "name": "stdout", "output_type": "stream", "text": [
    "{\"timestamp\": \"2024-12-06T17:58:44.631351\", \"message\": \"{\\n \\\"source\\\": \\\"user\\\",\\n \\\"models_usage\\\": null,\\n \\\"content\\\": \\\"Write a unique, Haiku about the weather in Paris\\\"\\n}\", \"type\": \"TextMessage\"}\n",
    "{\"timestamp\": \"2024-12-06T17:58:45.489890\", \"message\": \"{\\n \\\"source\\\": \\\"writing_assistant_agent\\\",\\n \\\"models_usage\\\": {\\n \\\"prompt_tokens\\\": 48,\\n \\\"completion_tokens\\\": 25\\n },\\n \\\"content\\\": \\\"Cobblestones glisten, \\\\nParis whispers in the rain\\u2014 \\\\nClouds dance with the Seine. \\\\n\\\\nTERMINATE\\\"\\n}\", \"type\": \"TextMessage\"}\n",
    "{\"timestamp\": \"2024-12-06T17:58:45.490716\", \"message\": \"{\\n \\\"source\\\": \\\"TextMentionTermination\\\",\\n \\\"models_usage\\\": null,\\n \\\"content\\\": \\\"Text 'TERMINATE' mentioned\\\"\\n}\", \"type\": \"StopMessage\"}\n"
   ] }
  ], "source": [
   "writing_assistant_agent = CodingAssistantAgent(\n",
   "    name=\"writing_assistant_agent\",\n",
   "    system_message=\"You are a helpful assistant that solves tasks by generating text responses and code. Respond with TERMINATE when the task is done.\",\n",
   "    model_client=model_client,\n",
   ")\n",
   "\n",
   "text_termination = TextMentionTermination(\"TERMINATE\")\n",
   "round_robin_team = RoundRobinGroupChat([writing_assistant_agent], termination_condition=text_termination)\n",
   "\n",
   "round_robin_team_result = await round_robin_team.run(task=\"Write a unique, Haiku about the weather in Paris\")"
  ] }
 ],
 "metadata": {
  "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" },
  "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}