DoNews reported on June 7 that one of the biggest shortcomings of the current GPT-4 model is its arithmetic ability. Because the model's logical reasoning still needs improvement, GPT-4 often fails to produce correct results even on calculations that most people would consider fairly simple.
According to IT Home, researchers at the National University of Singapore recently released a model called Goat, designed specifically to solve arithmetic problems. The researchers state that after fine-tuning the LLaMA model, Goat achieves higher accuracy and better performance on arithmetic than GPT-4.
The researchers proposed a new approach that classifies arithmetic tasks by whether they are learnable, and then uses basic arithmetic principles to decompose the unlearnable tasks into a series of learnable ones (IT Home note: the calculation process of a complex task is broken down into simple steps) before feeding them to the AI model, as illustrated in the sketch below.
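To make the decomposition idea concrete, here is a minimal Python sketch, not the authors' actual pipeline: a multi-digit multiplication (a "hard" task) is rewritten as a chain of simpler steps, single-term multiplications followed by additions. The function name and the exact step format are illustrative assumptions, not taken from the Goat codebase.

```python
def decompose_multiplication(a: int, b: int) -> str:
    """Expand a * b into a sequence of simple, learnable sub-steps."""
    # Split b into its place-value terms, e.g. 356 -> [300, 50, 6].
    digits = str(b)
    terms = [int(d) * 10 ** (len(digits) - i - 1)
             for i, d in enumerate(digits) if d != "0"]

    steps = [f"{a} * {b} = {a} * ({' + '.join(str(t) for t in terms)})"]
    partials = []
    for t in terms:
        partials.append(a * t)
        steps.append(f"{a} * {t} = {a * t}")  # single-term multiplication
    # Accumulate the partial products with simple additions.
    running = partials[0]
    for p in partials[1:]:
        steps.append(f"{running} + {p} = {running + p}")  # learnable addition step
        running += p
    steps.append(f"So {a} * {b} = {running}")
    return "\n".join(steps)


if __name__ == "__main__":
    print(decompose_multiplication(1234, 356))
```

Each line of the printed chain is a task a language model can plausibly learn directly, which is the spirit of turning one unlearnable computation into several learnable ones.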
This new method allows the model to learn answer patterns and generalize the process to unseen data, rather than relying purely on memorizing calculations in its weights. As a result, it effectively improves arithmetic performance and, in a zero-shot setting, generates answers for large-number addition and subtraction with "near-perfect accuracy."
The researchers trained the model on a GPU with 24 GB of video memory and evaluated the final model on the BIG-bench arithmetic subtask. The accuracy results were outstanding, ahead of established models in the industry such as BLOOM, GPT-NeoX, and OPT; a simple sketch of this kind of exact-match evaluation follows below.
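As a rough illustration of how accuracy on such an arithmetic benchmark is typically scored, here is a minimal exact-match evaluation loop. The tiny hand-written example set and the `generate` callable are placeholders for the real BIG-bench data and the model's inference code, which are not shown in the article.

```python
from typing import Callable, List, Tuple


def arithmetic_accuracy(generate: Callable[[str], str],
                        examples: List[Tuple[str, str]]) -> float:
    """Fraction of prompts whose generated answer exactly matches the target."""
    correct = 0
    for prompt, target in examples:
        correct += generate(prompt).strip() == target
    return correct / len(examples)


if __name__ == "__main__":
    examples = [("What is 17 + 25?", "42"),
                ("What is 123 - 58?", "65")]

    # Stand-in "model" that actually computes the answer, just to exercise the loop.
    def toy_generate(prompt: str) -> str:
        import re
        a, op, b = re.search(r"(\d+)\s*([+-])\s*(\d+)", prompt).groups()
        return str(int(a) + int(b) if op == "+" else int(a) - int(b))

    print(arithmetic_accuracy(toy_generate, examples))  # 1.0
```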
The accuracy of the zero-shot Goat-7B even exceeded that of the few-shot PaLM-540B, and far surpassed GPT-4 on large-number calculations.