ModelScope-Agent provides a general, customizable Agent framework that helps users create their own agents. The framework is built around open-source large language models (LLMs) and provides a user-friendly system library with modules such as memory control and tool use.
The following first demonstrates some capabilities of ModelScopeGPT, which is implemented on top of ModelScope-Agent:
1. Single-step tool call: the Agent selects the appropriate tool, generates the tool request, and replies to the user based on the execution result.
2. Multi-step tool call: the Agent plans, schedules, and executes multiple tools and composes a reply from their results.
3. Tool calls in multi-turn dialogue: the Agent mines the parameters that need to be passed to the tool from the dialogue history.
4. Community knowledge Q&A built on search tools.
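To make the single-step pattern concrete, here is a small, self-contained toy sketch of the select-tool, call-tool, reply loop. It only illustrates the pattern: tool selection here is a naive keyword rule rather than an LLM, the tool functions are stubs, and none of this is the ModelScope-Agent API (which is covered later in this article).

```python
# Toy illustration of a single-step tool call: the "LLM" is replaced by a
# trivial keyword rule so the example stays self-contained; a real agent lets
# the LLM choose the tool and fill in its arguments.

def speech_synthesis(text: str, voice: str = "female") -> str:
    # Stand-in for a real text-to-speech tool.
    return f"[audio of '{text}' in a {voice} voice]"

def text_to_image(prompt: str) -> str:
    # Stand-in for a real image-generation tool.
    return f"[image generated for '{prompt}']"

TOOLS = {"speech_synthesis": speech_synthesis, "text_to_image": text_to_image}

def single_step_agent(query: str) -> str:
    # 1. Select a tool (naive keyword matching instead of LLM planning).
    tool_name = "speech_synthesis" if "read" in query else "text_to_image"
    # 2. Generate the request and execute the tool.
    result = TOOLS[tool_name](query)
    # 3. Compose a reply for the user from the execution result.
    return f"Done with {tool_name}: {result}"

print(single_step_agent("Please read this sentence aloud"))
```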
## Framework introduction

### What are the design principles of the ModelScope-Agent framework?
ModelScope-Agent is a general, customizable Agent framework aimed at practical application development. It uses open-source large language models (LLMs) as its core and includes modules for memory control and tool use. The open-source LLM is responsible for task planning, scheduling, and reply generation; the memory-control module covers knowledge retrieval and prompt management; the tool-use module covers the tool library, tool retrieval, and tool customization. The overall ModelScope-Agent system architecture is built from these three modules.
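To summarize this division of labour, here is a small structural sketch in Python. The class names, fields, and methods are illustrative stand-ins for the three modules described above, not the library's actual classes.

```python
# Structural sketch of the three modules: the open-source LLM handles planning,
# scheduling and reply generation; the memory module handles knowledge
# retrieval and prompt management; the tool module handles the tool library and
# tool retrieval. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MemoryModule:
    knowledge_base: List[str] = field(default_factory=list)         # knowledge retrieval
    prompt_templates: Dict[str, str] = field(default_factory=dict)  # prompt management

@dataclass
class ToolModule:
    tool_library: Dict[str, Callable] = field(default_factory=dict)  # registered tools

    def retrieve(self, query: str) -> List[str]:
        # Tool retrieval: pick candidate tools relevant to the query
        # (a real system would use embedding search, not substring matching).
        return [name for name in self.tool_library if name in query.lower()]

@dataclass
class Agent:
    llm: Callable[[str], str]        # planning, scheduling, reply generation
    memory: MemoryModule
    tools: ToolModule

    def run(self, query: str) -> str:
        candidates = self.tools.retrieve(query)
        # The LLM would plan over `candidates` and decide which tools to call.
        return self.llm(f"Plan for '{query}' using tools {candidates}")

# Example wiring (the lambda stands in for a real open-source LLM):
agent = Agent(llm=lambda prompt: f"LLM reply to: {prompt}",
              memory=MemoryModule(),
              tools=ToolModule({"speech_synthesis": speech_synthesis}))
print(agent.run("use speech_synthesis to read a story"))
```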
### How does the ModelScope-Agent framework execute tasks?
ModelScope-Agent works by splitting a goal into smaller tasks and completing them one by one. For example, when a user requests "Write a short story, read it with a female voice, and add a video", ModelScope-Agent exposes the whole task-planning process. It first retrieves the relevant tools (such as speech synthesis) through tool retrieval, and the open-source LLM then handles planning and scheduling: it generates a story, calls the corresponding speech-generation model to read the story in a female voice and presents the audio to the user, and finally calls a video-generation model to produce a video based on the generated story. The whole process requires no user configuration of which tools a given request might need, which greatly improves ease of use.
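As a toy illustration of that flow, the sketch below hard-codes the kind of plan the LLM might produce for this request and executes it step by step. The tool functions are stubs and the plan format is invented for illustration; it is not the framework's internal representation.

```python
# Illustration of multi-step planning for "Write a short story, read it with a
# female voice, and add a video": the plan is what the LLM might produce; the
# three tools are stubs, and outputs of earlier steps feed later ones.

def write_story(topic: str) -> str:
    return f"Once upon a time... (a short story about {topic})"

def text_to_speech(text: str, voice: str) -> str:
    return f"[{voice}-voice audio for: {text[:30]}...]"

def text_to_video(script: str) -> str:
    return f"[video rendered from: {script[:30]}...]"

# Plan produced by the planning step: (step name, how to build the tool call).
plan = [
    ("write_story",    lambda ctx: write_story("a small village")),
    ("text_to_speech", lambda ctx: text_to_speech(ctx["write_story"], voice="female")),
    ("text_to_video",  lambda ctx: text_to_video(ctx["write_story"])),
]

context = {}
for step_name, build_call in plan:
    # Execute each step; later steps read earlier results from the context.
    context[step_name] = build_call(context)
    print(f"{step_name}: {context[step_name]}")
```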
## Open-source large-model training framework: new training method, data, and model released

In addition to the ModelScope-Agent framework, the research team proposed a new tool-instruction fine-tuning method, Weighted LM, which improves an open-source LLM's tool-calling ability by up-weighting the training loss on the tokens that make up tool-call instructions. The team also released MSAgent-Bench, a high-quality Chinese and English dataset containing 600,000 samples of multi-round, multi-step tool calling. Using this dataset and the new training method, they fine-tuned Qwen-7B and obtained MSAgent-Qwen-7B. The dataset and model have been publicly released on the ModelScope platform.
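To make the Weighted LM idea concrete, here is a minimal PyTorch-style sketch of a loss that up-weights the tokens belonging to tool-call spans. It illustrates the general technique only; it is not the released training code, and the weight value and the way tool-call spans are marked are assumptions.

```python
# Sketch of a Weighted LM loss: tokens inside tool-call spans get a larger
# weight than ordinary tokens, so mistakes in the calling format are penalised
# more heavily. Assumes labels are already shifted for next-token prediction
# and that padded positions are marked with -100.
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, labels, tool_call_mask, tool_weight=2.0):
    # logits: (batch, seq, vocab); labels, tool_call_mask: (batch, seq)
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
        ignore_index=-100,
    ).reshape(labels.shape)

    # Up-weight the tokens that belong to tool-call instructions.
    weights = torch.ones_like(per_token)
    weights[tool_call_mask] = tool_weight

    valid = (labels != -100).float()            # ignore padding when averaging
    return (per_token * weights * valid).sum() / (weights * valid).sum()
```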
## Integrated tool list

ModelScope-Agent currently connects to many AI models by default, covering natural language processing, speech, vision, and multi-modality, and also integrates open-source capabilities such as knowledge retrieval and API retrieval out of the box.

## ModelScope-Agent Practice

The ModelScope-Agent GitHub repository provides a beginner-friendly, step-by-step practice demo so that even novices can build their own agent. The demo notebook can be downloaded here: https://github.com/modelscope/modelscope-agent/blob/master/demo/demo_qwen_agent.ipynb

1. First pull the ModelScope-Agent code and install the required dependencies.
2. Configure the config file, including the ModelScope token, and set up the API tool retrieval engine.
3. Start the central large language model.
4. Build and use the agent, relying on the previously prepared central LLM, tool list, tool retrieval, and memory modules.
## Register New Tool Practice

1. After pulling the ModelScope-Agent code, go to the modelscope_agent/tools directory and add a file named custom_tool.py. In this file, configure the API's description, name, and parameters, and implement two calling options: local_call (local call) and remote_call (remote call). A minimal, illustrative sketch of such a tool class is shown after this list.
2. For environment configuration and large-model deployment, refer to steps 2 and 3 of the previous section.
3. Put the newly registered tool into a tool list and add that list to the agent's construction process.
4. Call the agent.run() method with a query to test whether the tool successfully calls the corresponding API.
5. The agent automatically calls the corresponding API and returns the execution result to the central model, which then composes the reply.
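Below is a minimal, illustrative sketch of such a custom tool. It covers the pieces named in step 1 (description, name, parameters, local_call, remote_call), but the exact base class, field layout, and registration mechanics are assumptions made for illustration; consult the repository's modelscope_agent/tools directory for the actual interface.

```python
# custom_tool.py — illustrative sketch only; the real base class and field
# names in modelscope_agent/tools may differ. It shows a tool description,
# name, parameter schema, and local/remote call paths.
import requests

class CustomWeatherTool:
    name = "weather_query"                      # tool name used by the agent
    description = "Query today's weather for a given city."
    parameters = [                              # parameter schema shown to the LLM
        {"name": "city", "type": "string", "description": "City name", "required": True}
    ]

    def __init__(self, endpoint: str = "https://example.com/weather"):
        self.endpoint = endpoint                # placeholder API endpoint

    def local_call(self, city: str) -> dict:
        # Local call: run the logic in-process (here just a stubbed answer).
        return {"city": city, "weather": "sunny (stubbed local result)"}

    def remote_call(self, city: str) -> dict:
        # Remote call: forward the request to the deployed API service.
        response = requests.get(self.endpoint, params={"city": city}, timeout=10)
        response.raise_for_status()
        return response.json()

# Per step 3, the new tool would then be placed in the tool list that is passed
# to the agent when it is constructed, e.g. additional_tools = [CustomWeatherTool()].
```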
## One More Thing

Developers can follow the tutorials above to easily build their own agents. ModelScope-Agent is backed by the ModelScope community and will adapt to more newly open-sourced large models in the future, and more applications built on ModelScope-Agent will be launched, such as customer-service agents, personal-assistant agents, story agents, motion agents, and multi-modal agents.