Beyond Devin! Led by a Tsinghua Yao Class alumnus, they set a new world record for large-model programming


Jun 04, 2024, 12:50 PM

Beyond Devin! SWE-Bench has welcomed a new contender on its leaderboard:

StarShip CodeGen Agent, built by the startup OpenCSG, whose team is led by a Tsinghua Yao Class alumnus, ranked second in the world with a score of 23.67%.

At the same time, it set the highest record (SOTA) among systems not built on GPT-4o.


The SWE-Bench evaluation closely mirrors real programming scenarios and is extremely difficult. Beyond traditional code generation, it requires the model to understand requirements, coordinate changes across multiple functions, classes, and even files, interact with the execution environment, handle extremely long contexts, and perform complex logical reasoning.
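As a hedged illustration of the evaluation setup: the field names below follow the published SWE-Bench dataset schema, but the values and the `is_resolved` helper are invented for this sketch, not taken from the benchmark's actual harness.

```python
# Illustrative sketch of one SWE-Bench task record. Field names follow the
# published dataset schema; the values are made up for this example.
instance = {
    "repo": "example-org/example-lib",           # GitHub project under test
    "base_commit": "abc123",                     # commit the model starts from
    "problem_statement": "Bug: parser crashes on empty input",  # issue text
    "patch": "diff --git a/parser.py ...",       # gold fix, hidden from the model
    "test_patch": "diff --git a/test_parser.py ...",  # tests the fix must satisfy
}

def is_resolved(fail_to_pass_results):
    """A task counts as solved only if every previously failing test now passes."""
    return all(fail_to_pass_results)

print(is_resolved([True, True]))   # a patch that fixes all designated tests
print(is_resolved([True, False]))  # a patch that misses one test
```

This is what makes the benchmark so punishing: a model's patch only counts if the repository's own tests pass end to end, with no partial credit.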

On this difficult, realistic test, the industry's most advanced systems, GPT-4 and Devin, can solve only 1.74% and 13.86% of the problems, respectively.

This achievement reflects OpenCSG's push to develop language models in a more practical, intelligent, and autonomous direction, and it marks an important step by a domestic company in advancing language-model applications along that path.

How difficult is large model programming?

In March 2024, the debut of Devin, billed as the first AI software engineer, electrified the technology world. Though accompanied by a series of controversies, Devin's strong innovation and huge potential raised new expectations among many AI enthusiasts and practitioners.

Devin can not only handle coding tasks with ease, but also complete the entire software development cycle independently, from project planning to deployment: building websites, finding and fixing bugs on its own, training and fine-tuning AI models, and more.


Why can Devin outperform base models such as GPT-4 at programming?

The core reason is that software engineering involves far more than writing code: requirements understanding, code interpretation, programming planning, code generation, debugging, and exception repair. Every one of these links affects the usability and effectiveness of large-model programming.

For such real-world scenarios, Princeton University proposed SWE-Bench, a benchmark that quantitatively evaluates end-to-end code generation capability.

GPT-4's score on SWE-Bench is only 1.74%; even with RAG, it remains below 3%. This shows that a base model alone cannot directly solve real-world programming problems.

Devin's innovation was to build an agent-based workflow, which lifted the SWE-Bench resolution rate to a new level.

In March, Devin topped the leaderboard with an independent resolution rate of 13.86%, taking "large-model programming" from nearly unusable to genuinely promising. Major Silicon Valley companies and large-model startups have since entered the LLM-for-SE field, and the record has been rewritten repeatedly.

As of the end of April 2024, the best result was 20.33%, set by the Amazon Q Developer Agent from Amazon's AI team.

Regrettably, in contrast to the "hundred flowers blooming" of Chinese companies on base-model leaderboards, Chinese companies rarely took part in this difficult challenge, until OpenCSG rewrote the record.

From a Chinese startup

In SWE-Bench's latest evaluation results, OpenCSG has jumped to second place on the leaderboard: the company's StarShip CodeGen Agent achieved a 23.67% pass rate on the Lite evaluation, surpassing both Devin's and Amazon's results.

OpenCSG, founded only a year ago, is dedicated to building a large-model ecosystem community, bringing together upstream and downstream enterprises in the AI industry to jointly provide solutions and tool platforms for large-model applications in vertical industries.

The team has deep roots in both open source and large models:

CEO Chen Ran is a well-known entrepreneur in open-source software and has built several successful open-source businesses.

CTO Wang Wei comes from Tsinghua University's Yao Class of 2005 and has many years of R&D experience in artificial intelligence.

The company's core R&D team also brings together elite graduates of Tsinghua University, Peking University, Wharton, the Hong Kong University of Science and Technology, and other institutions.

So how did such a team set a new record?

While many companies are actively exploring base models, vertical-domain models, and RAG, OpenCSG has chosen a focused direction: innovative development of programming agents, combined with deep optimization of large-model algorithms.

Agent level: Unlike LLM+RAG or general-purpose agent frameworks, the OpenCSG StarShip CodeGen Agent is a highly customized, highly optimized agent for the software R&D domain. Each stage of development (requirements understanding, code retrieval, programming planning, code writing, cyclic verification, etc.) is implemented via an LLM agent and deeply optimized with software-engineering methods such as AST syntax analysis and dependency retrieval. By polishing every link and then integrating them, the system achieves more accurate code generation.
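As one hedged example of the AST-based dependency retrieval mentioned above (a minimal sketch of the general technique, not OpenCSG's implementation), Python's standard `ast` module can extract a crude call graph that tells an agent which functions a change may ripple into:

```python
import ast

def call_graph(source):
    """Map each top-level function name to the function names it calls directly."""
    tree = ast.parse(source)
    graph = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                # Only direct calls to plain names; attribute calls are skipped
                # in this sketch.
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = calls
    return graph

SRC = """
def helper(x):
    return x + 1

def main(x):
    return helper(x) * 2
"""

print(call_graph(SRC))  # {'helper': set(), 'main': {'helper'}}
```

A real agent would combine such a graph with cross-file import resolution so that editing `helper` flags `main` (and its tests) for re-verification.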

Algorithm level: To address typical problems such as API conflicts caused by code version changes, OpenCSG proposes an adaptive teacher model. The teacher model analyzes code version change records and generates high-quality programming data to improve the base model's generation quality. In evaluations, these innovations significantly outperform current RAG approaches, especially on popular projects whose API structures change frequently. This work has been written up as papers and submitted to international conferences.
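The article does not disclose the teacher model's design, so the following is only a hypothetical sketch of the general idea: recorded API changes are turned into supervised pairs that could steer a base model toward current APIs. The `ApiChange` type, the prompt format, and the pandas example are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ApiChange:
    old_call: str   # usage that a stale base model tends to emit
    new_call: str   # current, correct usage
    since: str      # release in which the API changed

# Hypothetical change record mined from version-change history.
CHANGES = [
    ApiChange("df.append(row)", "pd.concat([df, row])", "pandas 2.0"),
]

def to_training_pair(change):
    """One supervised example: deprecated usage in, current usage out."""
    return {
        "prompt": f"Rewrite for {change.since}: {change.old_call}",
        "completion": change.new_call,
    }

pairs = [to_training_pair(c) for c in CHANGES]
print(pairs[0]["completion"])  # pd.concat([df, row])
```

The appeal of this kind of pipeline over RAG is that the version knowledge is baked into the fine-tuned model rather than having to be retrieved correctly at every generation.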

It is this two-pronged algorithm-plus-engineering approach, together with a continuous-improvement model, that lets the OpenCSG CodeGen Agent stand out.

"StarShip is all kinds of home appliances"

If the CodeGen Agent's benchmark result is a small test, StarShip carries OpenCSG's grand blueprint.

Regarding StarShip’s product positioning, OpenCSG CEO Chen Ran said:

StarShip carries our vision of large models reshaping software development. Users assemble their own team of digital employees through StarShip's built-in agents. CodeGen Agent is the platform's built-in digital programmer; a CodeReview Agent (code reviewer) and CodeSearch (code Q&A engineer) have also been released. Unlike coding-assistance tools, we expect these digital workers to operate directly and independently, without human intervention. In the future we will release more types of digital employees, fully covering requirements, design, coding, testing, and operations.

CTO Wang Wei said this path is full of challenges but fascinating: "From first principles, whether large models improve productivity is no longer a question of 'yes or no'; it is a question of when, where, and in what form. StarShip is the answer we are trying to give."

The company has also been highly productive: the CSGHub open-source model platform, the wukong pre-trained model, the CSGCoder fine-tuned code model, and more. These products are precisely positioned and well received in the industry.

The rapid launch and iteration of these products not only meets market demand but also serves a common goal: bringing large models to every person in every enterprise.

To let large models empower every enterprise and every individual, large models must become as ubiquitous as water and electricity. If the large model is electric power, then CSGHub is the power grid, and StarShip is the range of home appliances that ultimately brings that power into thousands of households.

OpenCSG's philosophy is open source. As a company with open source at its core, it open-sources not only its models and code but the platform itself.

CTO Wang Wei summed it up this way: "We are a young company that benefits from open source, which is how we achieved results in a short time. We will also give back to the open-source community in every way; that is the basic principle of the open-source community. I also very much agree with Sam Altman's view that open source is only a model, and product value matters more than the model."

"A benchmark itself is just a number. With the launch of GPT-4o, SWE-Bench scores are expected to exceed 30% soon, and optimistic estimates put them above 50% next year. What we care about more is the product value behind these numbers: as model capability and engineering technology improve, digital employees will go from quantitative change to qualitative change, from usable to easy to use, and see a full-scale explosion across industries," Wang Wei explained. "This may be a defining shift of the large-model era; from companies to individuals, we must prepare for it."

