
Releases: modelscope/modelscope-agent

v0.5.3 release

22 May 02:16

Feature

  • v1/chat/completions is now fully compatible with the OpenAI API, by @Zhikaiiii in #441 (see the sketch below).
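
A minimal sketch of calling the OpenAI-compatible endpoint with the official openai Python client; the base URL, port, API key, and model name below are assumptions for illustration, not values documented by this release.

```python
# Minimal sketch: point the official OpenAI client at the modelscope-agent
# chat server. base_url, api_key, and model are placeholder assumptions;
# substitute the values of your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31512/v1",  # assumed local server address
    api_key="EMPTY",                       # placeholder; the server may not check it
)

response = client.chat.completions.create(
    model="qwen-max",  # assumed model name configured on the server
    messages=[{"role": "user", "content": "Hello, what can you do?"}],
)
print(response.choices[0].message.content)
```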

Fix

Doc

  • Add documentation for agent training (train_cn and train_agentfabric_llm_tool_use_cn), by @slin000111 in #421

Full Changelog: v0.5.2...v0.5.3

v0.5.2 release

20 May 08:47

Feature

Fix

Full Changelog: v0.5.1...v0.5.2

v0.5.1 release

15 May 06:45

Features

  • Support GPT-4o and OpenAI-compatible multi-modal input, by @Zhikaiiii in #437 (see the sketch below).
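
A hedged sketch of a multi-modal request in the OpenAI chat format that this compatibility targets; the server address, model name, and image URL are assumptions for illustration.

```python
# Sketch of an OpenAI-style multi-modal request (text plus an image URL).
# base_url, model, and the image URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:31512/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="gpt-4o",  # or any multi-modal model configured on the server
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```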

Bug Fixes

New Contributors

Full Changelog: v0.5.0...v0.5.1

v0.5.0 release

10 May 08:47

What's Changed

Features

  • Add an Assistant API server with v1/chat/completions for tool calling and v1/assistant/lite for running agents (a request sketch follows this list).
  • Add a Tool Manager API server that lets users execute utilities in isolated, secure containers.
  • Add a RAG workflow based on llama-index.
  • Add automatic stop-word detection for different LLMs' special tokens.
  • Support Llama 3.
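
A hedged sketch of a tool-calling request against the Assistant API server's chat endpoint using the OpenAI tools schema; the server address, model name, and the get_weather tool definition are hypothetical, chosen only to illustrate the request shape.

```python
# Sketch of an OpenAI-style tool-calling request to the Assistant API server.
# The address, model name, and get_weather tool are illustrative assumptions.
import requests

payload = {
    "model": "qwen-max",
    "messages": [{"role": "user", "content": "What is the weather in Beijing?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, not shipped by this release
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

resp = requests.post("http://localhost:31512/v1/chat/completions", json=payload)
# A tool-calling-capable model should return a tool_calls entry in the message.
print(resp.json()["choices"][0]["message"])
```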

Full Changelog: v0.4.1...v0.5.0

v0.4.1 release

16 Apr 11:18

What's Changed

Features

  • 🔥 The Ray-based multi-agent solution is now available in modelscope-agent; please see the documentation.
  • Update the distributed instantiation method for multi-agent so that multiple users can run cases in the same Ray cluster (a generic sketch follows this list).
  • Fix bugs introduced by multi-agent.
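
The sketch below is a generic Ray named-actor pattern that illustrates how several users can share one Ray cluster by using separate namespaces; the EchoAgent class and all names are hypothetical and are not modelscope-agent's API, so refer to the project documentation for the real multi-agent usage.

```python
# Generic Ray sketch (not modelscope-agent's API): each agent runs as a named
# actor, and a per-user namespace keeps several users' agents apart inside
# one shared Ray cluster.
import ray

ray.init(namespace="user_42")  # hypothetical per-user namespace

@ray.remote
class EchoAgent:
    """Stand-in agent; modelscope-agent's real agent classes differ."""

    def __init__(self, role: str):
        self.role = role

    def step(self, message: str) -> str:
        return f"[{self.role}] {message}"

# Named actors let multiple roles coexist in the cluster and be looked up later.
writer = EchoAgent.options(name="writer").remote("writer")
critic = EchoAgent.options(name="critic").remote("critic")

draft = ray.get(writer.step.remote("Write a slogan for a tea shop."))
review = ray.get(critic.step.remote(draft))
print(review)
```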

Demos & Apps

  • The multi-role chatroom use case now supports multiple users; prompts for multi-agent updated.

Full Changelog: https://github.com/modelscope/modelscope-agent/commits/v0.4.1

v0.4.0 release

14 Apr 09:40

What's Changed

Features

  • 🔥 The Ray-based multi-agent solution is now available in modelscope-agent; please see the documentation.
  • Add a simple server API in apps/agentfabric, which is also the API running on ModelScope Studio.
  • Add lazy loading of tools so that not all tools are loaded at startup.
  • Add token counting in the memory module.
  • Update the agent loop logic to make results much more robust.

Demos & Apps

  • Multi-role chatroom and video-generation demos built on the multi-agent framework in apps.
  • Use a local model as the LLM via vLLM in the demos (see the sketch below).
  • Multi-round conversation example with memory in the demos.
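
A minimal sketch of querying a locally served model through vLLM's OpenAI-compatible endpoint, assuming the server was started separately (for example with python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen1.5-7B-Chat); the port and model name are assumptions.

```python
# Sketch: stream a reply from a locally hosted model through vLLM's
# OpenAI-compatible endpoint (default port 8000). Port and model name
# are assumptions; use whatever the vLLM server was started with.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="Qwen/Qwen1.5-7B-Chat",
    messages=[{"role": "user", "content": "Summarize what modelscope-agent does."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```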

Full Changelog: https://github.com/modelscope/modelscope-agent/commits/v0.4.0