v3.2.2.3

Latest
@Maplemx released this 08 Apr 03:23
· 56 commits to main since this release
cf94bf7

New Features

  • [Agent.load_yaml_prompt()]:

    We provide developers with a new way to manage request prompt templates in a YAML file!

    • HOW TO USE:

      • YAML file:
      input: ${user_input}
      use_public_tools:
        - browse
      set_tool_proxy: http://127.0.0.1:7890
      instruct:
        output language: English
      output:
        page_topic:
          $type: str
          $desc: ""
        summary:
          $type: str
          $desc: ""
      • Python file:
      import Agently
      
      agent_factory = (
          Agently.AgentFactory()
              .set_settings("model.Google.auth.api_key", "")
              .set_settings("current_model", "Google")
      )
      
      agent = agent_factory.create_agent()
      
      print(
          agent
              .load_yaml_prompt(
                  path="./yaml_prompt.yaml",
                  # or just pass YAML string like this:
                  #yaml=yaml_str
                  variables={
                      "user_input": "http://Agently.tech",
                  }
              )
              .start()
      )
      • Result:
      {'page_topic': 'Agently - Artificial Intelligence for the Enterprise', 'summary': 'Agently is a leading provider of AI-powered solutions for the enterprise. Our platform enables businesses to automate tasks, improve efficiency, and gain insights from their data. We offer a range of services, including:\n\n* **AI-powered automation:** Automate repetitive tasks, such as data entry and customer service, to free up your team to focus on more strategic initiatives.\n* **Machine learning:** Use machine learning to improve the accuracy of your predictions and decisions. We can help you identify trends and patterns in your data, and develop models that can predict future outcomes.\n* **Natural language processing:** Use natural language processing to understand and generate human language. This can be used for a variety of applications, such as chatbots, text analysis, and sentiment analysis.\n\nAgently is committed to helping businesses succeed in the digital age. We believe that AI is a powerful tool that can be used to improve efficiency, innovation, and customer satisfaction. We are excited to partner with you to explore the possibilities of AI for your business.'}
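
    If you prefer to pass the template inline, the commented-out yaml=yaml_str line above shows that load_yaml_prompt() also accepts a YAML string through its yaml parameter instead of path. The snippet below is a minimal sketch of that variant; the trimmed template and example values are illustrative and not taken from this release.

      import Agently
      
      agent_factory = (
          Agently.AgentFactory()
              .set_settings("model.Google.auth.api_key", "")
              .set_settings("current_model", "Google")
      )
      
      agent = agent_factory.create_agent()
      
      # The same kind of template as above, inlined as a YAML string
      # (tool and proxy settings omitted to keep the sketch short)
      yaml_str = (
          "input: ${user_input}\n"
          "instruct:\n"
          "  output language: English\n"
          "output:\n"
          "  summary:\n"
          "    $type: str\n"
          '    $desc: ""\n'
      )
      
      print(
          agent
              .load_yaml_prompt(
                  yaml=yaml_str,
                  variables={"user_input": "http://Agently.tech"},
              )
              .start()
      )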
  • [Agently Workflow: YAML Flow]: [🧪beta] This feature may change in the future

    We want to give developers a simple way to manage workflows with YAML files, so we are publishing YAML Flow as a beta feature to present the idea.

    With this new feature, you can use YAML files to declare chunks and manage the connections between them.

    We also preset some basic chunks (Start, UserInput and Print) to help you build your own workflow more quickly.

    • BASIC USE:

      • YAML file:
      chunks:
        start:
          type: Start
        user_input:
          type: UserInput
          placeholder: '[User Input]:'
        print:
          type: Print
      connections:
        - start->user_input->print
      • Python file:
      import Agently
      workflow = Agently.Workflow()
      # You can use draw=True to output workflow Mermaid code instead of running it
      #print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
      workflow.start_yaml(path="./yaml_file.yaml")
      • Result:
      [User Input]: 1+2
      >>> 1+2
    • ADD YOUR OWN EXECUTORS:

      • YAML file:
      chunks:
        start:
          type: Start
        user_input:
          type: UserInput
          placeholder: '[User Input]:'
        # We declare a new chunk named 'calculate'
        calculate:
          # and bind it to the executor registered
          # with executor_id = 'calculate'
          executor: calculate
        print:
          type: Print
      connections:
        # Then add the 'calculate' chunk into the workflow
        - start->user_input->calculate->print
      • Python file:
      import Agently
      
      workflow = Agently.Workflow()
      
      # Use the decorator `@workflow.executor_func(<executor_id>)`
      # to register an executor function.
      # Note: the workflow instance must be created before the decorator is applied.
      @workflow.executor_func("calculate")
      def calculate_executor(inputs, storage):
          result = eval(inputs["input"])
          return str(result)
      
      #print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
      workflow.start_yaml(path="./yaml_file.yaml")
      • Result:
      [User Input]: 1+2
      >>> 3
  • [Basic Prompt Management Methods]:

    Added a series of prompt management methods to help developers directly manage prompt information on agent instances or single requests, each with its own information life cycle.

    The methods below manage prompt information on the agent instance; that information is passed to the model on every request until the agent instance is dropped.

    • agent.set_agent_prompt(<slot_name>, <value>)
    • agent.get_agent_prompt(<slot_name>)
    • agent.remove_agent_prompt(<slot_name>)

    The methods below manage prompt information for a single request, so the information is used only once: when the request finishes, all of it is erased. A short sketch of both life cycles follows this section.

    • agent.set_request_prompt(<slot_name>, <value>)
    • agent.get_request_prompt(<slot_name>)
    • agent.remove_request_prompt(<slot_name>)

Read the Development Handbook - Standard Request Slots to learn more.
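
A minimal sketch of the two life cycles (the slot names "role" and "input" used here are illustrative assumptions, not confirmed slot names; check the Standard Request Slots list for the slots your setup actually supports):

import Agently

agent_factory = (
    Agently.AgentFactory()
        .set_settings("model.Google.auth.api_key", "")
        .set_settings("current_model", "Google")
)

agent = agent_factory.create_agent()

# Agent life cycle: stored on the agent instance and sent with every
# request until the instance is dropped.
agent.set_agent_prompt("role", "You are a concise assistant.")  # slot name assumed
print(agent.get_agent_prompt("role"))

# Request life cycle: used for the next request only, then erased
# automatically once that request finishes.
agent.set_request_prompt("input", "Introduce Agently in one sentence.")  # slot name assumed
print(agent.start())

# Remove the agent-level slot when it is no longer needed.
agent.remove_agent_prompt("role")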

Updates:

  • [Agently Workflow]: Made changes to make complex flows more stable. #64
  • [Framework Core]: Renamed the basic prompt slot variables to keep them consistent. 3303aa1
  • [Facility]: Added Agently.lib as an alias of Agently.facility
  • [Tools: browse]: Removed newspaper3k and replaced it with BeautifulSoup4 df8c69a

Bugs Fixed:

  • [Request: OpenAI]: Fixed a bug that raised an "await can not use on response" error when using a proxy 7643cfe
  • [Request: OAIClient]: Fixed a bug where the proxy did not work correctly 7643cfe
  • [Request: OAIClient]: Fixed a bug where the system prompt did not work correctly 1f9d275
  • [Agent Component: Tool]: Fixed a bug that prevented tool calling from working correctly 48b80f8

New Features

Manage a single agent request template with YAML-formatted data

[Agent.load_yaml_prompt()]

We provide a brand-new way to express a single agent request in YAML syntax. Beyond letting developers decouple different modules more easily, we also hope this approach allows the capabilities Agently provides to be expressed across languages in a standardized, configuration-style way, or handed to non-developers to use.

How to use

  • YAML file / YAML text content:
input: ${user_input}
use_public_tools:
  - browse
set_tool_proxy: http://127.0.0.1:7890
instruct:
  output language: Chinese
output:
  page_topic:
    $type: str
    $desc: ""
  summary:
    $type: str
    $desc: ""
  • Python file:
import Agently

agent_factory = (
    Agently.AgentFactory()
        .set_settings("model.Google.auth.api_key", "")
        .set_settings("current_model", "Google")
)

agent = agent_factory.create_agent()

print(
    agent
        .load_yaml_prompt(
            path="./yaml_prompt.yaml",
            # or pass a YAML-formatted string directly like this:
            #yaml=yaml_str
            variables={
                "user_input": "http://Agently.tech",
            }
        )
        .start()
)
  • Result:
{
    "page_topic": "An easy-to-use, flexible and efficient open-source framework for LLM application development",
    "summary": "Agently is an open-source framework for developing large language model applications that lets developers easily build applications on top of LLMs. Agently's features include:\n\n* Simple, easy-to-learn syntax: get started in 5 minutes\n* Simple installation: just run pip install -U Agently\n* Flexible usage: specify the model, authentication information and more with a few lines of code\n* Chained calls: interact with an agent instance as if calling a function\n* Designed for engineering developers, with high flexibility for application development\n* Supports expressing requests flexibly with structured data, managing agent instance settings, and providing custom functions\n* Supports listening to streaming output and splitting complex tasks into chunks with Agently Workflow\n* Deep architectural design that deconstructs the structure of LLM-driven agents and maintains basic atomic components such as the pre- and post-request information processing workflows\n* Provides ability plugins, workflow management and other ways to enrich what developers can express at the application layer"
}

Manage your workflow with YAML-formatted data

[Agently Workflow: YAML Flow]

[🧪beta] The usage or syntax of this feature may change later

We provide an experimental way to manage workflows through YAML-formatted data. With it, you can define the chunks in a workflow and manage the connections between chunks in a more convenient and intuitive way. This feature presents our initial idea, and we will keep improving this capability and strengthening its expressiveness.

We also preset three basic chunks in this new capability, Start, UserInput and Print, to help you build your own workflow more quickly. Reading how these three chunks are defined can also give you ideas for creating custom chunks of your own.

Basic usage

  • YAML file / YAML text content:
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]: '
  print:
    type: Print
connections:
  - start->user_input->print
  • Python file:
import Agently
workflow = Agently.Workflow()
# You can set draw=True to output the workflow's Mermaid code instead of running it
#print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
  • Result:
[User Input]: 1+2
>>> 1+2

Define your own chunk executors

  • YAML file / YAML text content:
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]:'
  # Here we declare a new chunk named 'calculate'
  calculate:
    # and attach an executor to calculate the user input,
    # specifying the executor id 'calc' in the executor field
    executor: calc
  print:
    type: Print
connections:
  # Then put the 'calculate' chunk into the workflow
  - start->user_input->calculate->print
  • Python file:
import Agently

workflow = Agently.Workflow()

# Use the decorator `@workflow.executor_func(<executor_id>)`
# to register an executor function with executor id 'calc'.
# Note: the workflow instance must be created before the decorator is applied.
@workflow.executor_func("calc")
def calculate_executor(inputs, storage):
    result = eval(inputs["input"])
    return str(result)

#print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
  • Result:
[User Input]: 1+2
>>> 3

Understand the different prompt life cycles through the basic prompt management methods

We added a series of prompt management methods to help developers directly manage the prompt information set on an agent instance or on a single request; depending on which one it is set on, that prompt information has a different life cycle.

When we use agent.set_agent_prompt() to set prompt information on an agent instance, the information is stored inside the agent instance and is carried along every time this agent instance requests the model, until the agent instance is destroyed or recycled.

• agent.set_agent_prompt(<slot_name>, <value>)
• agent.get_agent_prompt(<slot_name>)
• agent.remove_agent_prompt(<slot_name>)

When we use agent.set_request_prompt() to set prompt information on the single-request instance inside an agent, the information is only passed to the model on the next request; once that request finishes, the information is cleared and no longer kept.

• agent.set_request_prompt(<slot_name>, <value>)
• agent.get_request_prompt(<slot_name>)
• agent.remove_request_prompt(<slot_name>)

Among the agent instructions we already provide, the methods offered by agent ability plugins, such as the .set_role() method from the Role plugin, use a mechanism similar to .set_agent_prompt(). Information set through .set_role() is therefore kept across multiple requests.

Basic instructions such as .input(), .instruct() and .output() use a mechanism similar to .set_request_prompt(). Information set through methods like .input() is therefore cleared once the current request (marked by the .start() command) ends and has to be set again for the next request. A short sketch of this difference follows below.

Read the Development Handbook - Basic Instruction List to learn which basic instructions we support.
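
A minimal sketch of this difference in life cycle (the key/value signature of .set_role() is assumed here for illustration; treat the snippet as a sketch rather than the definitive API):

import Agently

agent_factory = (
    Agently.AgentFactory()
        .set_settings("model.Google.auth.api_key", "")
        .set_settings("current_model", "Google")
)

agent = agent_factory.create_agent()

# Agent life cycle: information set through .set_role() stays on the
# agent instance and is sent with every request until the instance is
# dropped. (Key/value form of .set_role() assumed.)
agent.set_role("role", "A concise assistant that answers in one sentence.")

# Request life cycle: .input() / .instruct() / .output() only apply to
# the next .start() call and are erased once that request finishes.
print(agent.input("What is Agently?").start())

# The role setting still applies here, but the previous input has been
# cleared, so each new request needs its own .input().
print(agent.input("What is Agently Workflow?").start())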

Updates

  • [Agently Workflow]: Made many optimizations to make complex workflows more stable and reliable. See details
  • [Framework Core]: Renamed the basic prompt slots so that they stay consistent with the basic instruction names. See details
  • [Facility]: Added Agently.lib as an alias of Agently.facility for convenience.
  • [Tools: browse]: Removed the dependency on the newspaper3k package and replaced it with BeautifulSoup4 as the browsing tool. See details

Bugs Fixed

  • [Request Plugin: OpenAI]: Fixed a bug that raised an "await can not use on response" error when using a proxy. See details
  • [Request Plugin: OAIClient]: Fixed a bug where the proxy did not take effect. See details
  • [Request Plugin: OAIClient]: Fixed a bug that prevented the system prompt from working correctly. See details
  • [Agent Ability Plugin: Tool]: Fixed a bug, caused by the prompt slot renaming, that prevented tool calling from working. See details