


Large models such as GPT-4 make their own tools and identify ChatGPT fraud
Table of contents:
- Multiscale Positive-Unlabeled Detection of AI-Generated Texts
- Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective
- Large Language Models as Tool Makers
- SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification
- Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video
- Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited
Paper 1: Multiscale Positive-Unlabeled Detection of AI-Generated Texts
- Authors: Yuchuan Tian, Hanting Chen, et al.
- Paper address: https://arxiv.org/abs/2305.18149
- Abstract: AI-enabled fraud can be alarmingly effective: a case in which a victim was "defrauded of 4.3 million yuan in 10 minutes" recently became a trending topic. Targeting today's most popular large language models, researchers from Peking University and Huawei have explored a method for recognizing AI-generated text. Below are several examples of a human and an AI answering the same question:
Recommendation:
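The paper's framing treats detection as positive-unlabeled (PU) learning: AI-generated texts are labeled positives, while scraped human corpora are treated as unlabeled. As a rough illustration only (not the paper's multiscale formulation), here is a minimal sketch of a non-negative PU risk such a detector could be trained with; the logistic loss and the class prior are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nn_pu_risk(pos_scores, unl_scores, prior):
    """Non-negative PU risk over raw classifier logits.
    `pos_scores`: logits on known AI-generated texts (positives);
    `unl_scores`: logits on unlabeled texts;
    `prior`: assumed fraction of positives among the unlabeled data."""
    # loss for predicting "positive" (AI-written) on labeled positives
    loss_p_pos = sum(-math.log(sigmoid(s)) for s in pos_scores) / len(pos_scores)
    # loss for predicting "negative" on positives and on unlabeled texts
    loss_p_neg = sum(-math.log(sigmoid(-s)) for s in pos_scores) / len(pos_scores)
    loss_u_neg = sum(-math.log(sigmoid(-s)) for s in unl_scores) / len(unl_scores)
    # unbiased estimate of the negative-class risk, clipped at zero
    neg_risk = max(0.0, loss_u_neg - prior * loss_p_neg)
    return prior * loss_p_pos + neg_risk
```

A classifier that scores positives high and unlabeled texts low gets a small risk; one that does the opposite gets a large one, which is the signal a PU-trained detector learns from.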
Paper 2: Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective
- Authors: Guhao Feng, Bohang Zhang, et al.
- Paper address: https://arxiv.org/abs/2305.15408
- Abstract:
This paper selects two basic but core mathematical tasks, arithmetic and solving equations (the figure below gives example inputs and outputs for both tasks), and analyzes theoretically why chain-of-thought prompting helps on them.
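The theoretical intuition is that emitting intermediate steps gives a bounded-depth model extra serial computation: each "thought" only has to solve one easy sub-step. A toy analogue, assuming a simple fully parenthesized expression grammar (not the paper's exact task format), reduces one innermost operation per step:

```python
import re

def solve_stepwise(expr):
    """Evaluate an arithmetic expression one operation at a time,
    recording every intermediate expression -- a toy analogue of how a
    chain of thought turns one hard problem into many easy steps."""
    steps = [expr]
    # match the leftmost innermost parenthesized pair, e.g. "(2+3)"
    pat = re.compile(r"\((-?\d+)\s*([+\-*])\s*(-?\d+)\)")
    while True:
        m = pat.search(expr)
        if not m:
            break
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        val = a + b if op == "+" else a - b if op == "-" else a * b
        expr = expr[:m.start()] + str(val) + expr[m.end():]
        steps.append(expr)
    return steps
```

For `"((2+3)*(4-1))"` this yields the chain `(5*(4-1))`, `(5*3)`, `15` -- a direct-answer model must compute all of that in one shot, while a chain-of-thought model gets one rewrite per generated step.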
Paper 3: Large Language Models as Tool Makers
- Authors: Tianle Cai, Xuezhi Wang, et al.
- Paper address: https://arxiv.org/pdf/2305.17126.pdf
- Abstract:
Recommendation: GPT-4 and other large models have reached an evolutionary turning point: they can not only use tools, but also make their own.
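The tool-maker/tool-user split can be caricatured in a few lines. The snippet below is a hypothetical mock: a canned "tool-maker LLM" emits Python source for a reusable tool, and a "tool-user" just executes it. In the paper's setup a strong model (e.g. GPT-4) would write the tool once and a cheaper model would reuse it, which this sketch does not actually do:

```python
def mock_tool_maker(task_description):
    """Stand-in for the tool-maker LLM: given a task description, it
    returns Python source for a reusable tool. A real system would
    prompt a strong model here; this canned template is an assumption."""
    if "sort" in task_description:
        return (
            "def tool(items):\n"
            "    # sort a list of (name, score) pairs by score, descending\n"
            "    return sorted(items, key=lambda p: p[1], reverse=True)\n"
        )
    raise ValueError("no tool template for this task")

def use_tool(tool_source, *args):
    """Stand-in for the (cheaper) tool-user LLM: it only has to call
    the tool, not re-derive the solution for every new instance."""
    namespace = {}
    exec(tool_source, namespace)  # in practice, sandbox untrusted code
    return namespace["tool"](*args)
```

The economic point survives the caricature: the expensive tool-making step is amortized, since every later instance of the task only pays for the cheap tool-user call.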
Paper 4: SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification
- Authors: Xupeng Miao, Gabriele Oliaro, et al.
- Paper address: https://arxiv.org/abs/2305.09781
- Abstract: The Catalyst Group at Carnegie Mellon University (CMU) recently released SpecInfer, a "speculative inference" engine that uses lightweight small models to assist a large model, achieving a two- to three-fold inference speedup without affecting the accuracy of the generated content at all.
Recommendation: LLM inference sped up 2.8x: CMU and Tsinghua Yao-class alumni propose SpecInfer, a "speculative inference" engine in which small models leverage large models for efficient inference.
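A minimal sketch of the underlying speculative-decoding idea, assuming deterministic greedy next-token functions for both models. SpecInfer's actual engine verifies a whole token *tree* proposed by several small models in one pass, which this linear, single-draft version does not capture:

```python
def speculative_decode(draft_next, target_next, prompt, n_tokens, k=4):
    """Greedy speculative decoding: a cheap draft model proposes k
    tokens, the large target model verifies them, and the longest
    agreeing prefix is accepted. `draft_next` / `target_next` map a
    token sequence to its next token."""
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # 1) draft model speculates k tokens autoregressively (cheap)
        spec = []
        for _ in range(k):
            spec.append(draft_next(seq + spec))
        # 2) target model checks each position; in a real engine these
        #    k checks happen in a single batched forward pass
        accepted = []
        for i in range(k):
            t = target_next(seq + spec[:i])
            if t == spec[i]:
                accepted.append(t)
            else:
                accepted.append(t)  # keep the target's correction
                break
        seq += accepted
    return seq[: len(prompt) + n_tokens]
```

The output is token-for-token identical to decoding with the target model alone; the speedup comes purely from verifying several draft tokens per expensive large-model pass, which is why accuracy is unaffected.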
Paper 5: Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models
- Authors: Gen Luo, Yiyi Zhou, et al.
- Paper address: https://arxiv.org/pdf/2305.15023.pdf
- Abstract: This paper proposes a novel and cost-effective solution, called MMA, for effectively adapting LLMs to VL (vision-language) tasks. Instead of using large neural networks to connect image encoders and LLMs, MMA adopts lightweight modules, called adapters, to bridge the gap between LLMs and VL tasks, while also enabling joint optimization of the image and language models. MMA is additionally equipped with a routing algorithm that helps the LLM switch automatically between single-modal and multi-modal instructions without compromising its natural-language understanding.
Recommendation: Training time cut by 71.4% and storage cost by 99.9%: MMA, an excellent new instruction-tuning solution from Xiamen University, gives the alpaca model multi-modal capability.
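A toy sketch of the adapter-plus-routing idea, with plain Python lists standing in for tensors. The weight shapes and the "route by whether an image is present" rule are illustrative assumptions, not MMA's actual architecture:

```python
def adapter(x, w_down, w_up):
    """Bottleneck adapter: project down, ReLU, project up, then add the
    residual. Only these small matrices would be trained; the big
    frozen model around them is omitted."""
    def matvec(w, v):
        return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]
    h = [max(0.0, z) for z in matvec(w_down, x)]
    out = matvec(w_up, h)
    return [xi + oi for xi, oi in zip(x, out)]

def route(text_feats, image_feats, text_adapter, mm_adapter):
    """Toy routing: pick the single-modal or multi-modal adapter
    depending on whether image features are present."""
    if image_feats is None:
        return adapter(text_feats, *text_adapter)
    fused = [t + i for t, i in zip(text_feats, image_feats)]
    return adapter(fused, *mm_adapter)
```

Because only the tiny adapter weights are updated and stored per task, this style of tuning is where the large training-time and storage savings quoted above come from.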
Paper 6: mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video
- Authors: Haiyang Xu, Qinghao Ye, et al.
- Paper address: https://arxiv.org/pdf/2302.00402.pdf
- Abstract: For a multi-modal foundation model, we want it not only to handle specific multi-modal tasks but also to perform excellently on single-modal tasks. The DAMO Academy team found that existing models often fail to balance modality collaboration against modality entanglement, which limits performance across a range of single-modal and cross-modal downstream tasks. Based on this, DAMO Academy researchers proposed mPLUG-2, which uses a modularized network design to balance collaboration and entanglement between modalities. With the same data volume and model size, mPLUG-2 achieves SOTA or comparable results on more than 30 multi-modal and single-modal tasks, and on VideoQA and video captioning it surpasses much larger models such as Flamingo, VideoCoCa, and GITv2 to reach absolute SOTA. In addition, mPLUG-Owl, the latest work in Alibaba DAMO Academy's mPLUG series, continues the series' modularized training idea and upgrades an LLM into a multi-modal large model. The mPLUG-2 paper has been accepted by ICML 2023.
Recommendation: ICML 2023 | Based on a modular design, Alibaba DAMO Academy proposes the multi-modal foundation model mPLUG-2
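The modularized design can be caricatured as composing shared modules (which let modalities collaborate) with modality-specific ones (which keep them from entangling). A hypothetical sketch with plain functions standing in for network modules, purely to show the composition pattern:

```python
def build_pipeline(modality, shared, specific):
    """Assemble a per-modality pipeline: one modality-specific front
    module followed by the modules shared across all modalities.
    `shared` is a list of functions; `specific` maps modality -> function."""
    steps = [specific[modality]] + shared
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run
```

Every modality reuses the same shared trunk while keeping its own entry module, which is the structural trade-off the paper tunes between collaboration and entanglement.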
Paper 7: Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited
- Authors: Zheng Yuan, Fajie Yuan, et al.
- Paper address: https://arxiv.org/abs/2303.13835
- Abstract: This paper investigates a pressing question: can the modality-based recommendation paradigm MoRec end IDRec's ten-year dominance of the recommender-system field? The paper studies this question in depth, and the results have been accepted by SIGIR 2023. The figure below shows the network architecture.
Recommendation: SIGIR 2023 | Where will recommender systems go next? Will the classic ID paradigm be subverted?
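The IDRec-vs-MoRec contrast comes down to where the item vector comes from. A minimal illustrative sketch, in which `encode` stands in for a modality encoder such as a text or vision model (the function names and toy encoder are assumptions, not the paper's code):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def score_idrec(user_vec, item_id, id_embeddings):
    """IDRec: the item is a learned embedding looked up by its ID.
    A brand-new item has no embedding, hence the cold-start problem."""
    return dot(user_vec, id_embeddings[item_id])

def score_morec(user_vec, item_content, encode):
    """MoRec: the item vector is produced by a modality encoder from
    the item's raw content, so unseen items still get a representation
    and the encoder can ride improvements in NLP/vision models."""
    return dot(user_vec, encode(item_content))
```

That last property, inheriting progress from general-purpose content encoders, is the main reason the paper asks whether MoRec can eventually displace the ID paradigm.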
