-
- Why artificial intelligence could revolutionize mathematics
- Editor | Cabbage Leaf. "Proposing a conjecture (a proposition suspected to be true but still awaiting rigorous proof) is like a moment of divine inspiration for a mathematician. Mathematical conjectures are more than educated guesses: formulating them takes a combination of genius, intuition, and experience, and even mathematicians struggle to explain their own discovery process. Yet, counterintuitively, I think this is where machine intelligence will initially be most transformative," says Thomas Fink, director of the London Institute for Mathematical Sciences, UK. In 2017, researchers at the institute began applying machine learning to mathematical data as a side project. During the COVID-19 pandemic, they discovered that simple artificial intelligence (AI)...
- AI 634 2024-06-02 14:47:39
-
- How to solve the long tail problem in autonomous driving scenarios?
- Yesterday during an interview I was asked whether I had worked on anything related to the long tail, so here is a brief summary. The long-tail problem in autonomous driving refers to edge cases for autonomous vehicles, that is, scenarios that are possible but occur with low probability. The long tail in perception is one of the main factors currently limiting the operational design domain of single-vehicle intelligent autonomous driving. The underlying architecture and most technical problems of autonomous driving have been solved, and the remaining 5% of long-tail problems have gradually become the key constraint on its development. These problems include a wide variety of fragmented scenarios, extreme situations, and unpredictable human behavior. The "long tail" of edge scenarios in autonomous driving refers to edge cases in autonomous vehicles (AVs): possible scenarios with a low probability of occurrence. These rare events...
- AI 1280 2024-06-02 14:44:00
-
- IBM releases Granite AI models to the open source community
- IBM Research recently announced that its Granite code foundation models are open source, with the goal of democratizing advanced AI tools and transforming how code is written, maintained, and developed across industries. The move lets developers create, optimize, and deploy AI models more efficiently, accelerating the adoption of artificial intelligence. Granite is a powerful AI programming toolset developed by IBM Research and built on the open-standard IBM Granite code models. Granite grew out of IBM's ambition to simplify the coding process: recognizing the complexity and rapid pace of change inherent in software development, IBM drew on its strong research capabilities to build a set of AI-driven tools aimed at...
- AI 796 2024-06-02 13:46:40
-
- A new way to crowdsource! A benchmark born from the LLM Arena that strictly separates the weak models from the top performers
- Whose large model is stronger? Check the LLM Arena. As of now, a total of 90 LLMs have joined the battle, and total user votes have exceeded 770,000. Yet while netizens poke fun at new models rushing to the top and old models losing face, LMSYS, the organization behind the Arena, has quietly turned those results into something more: Arena-Hard, the most convincing benchmark yet to be born from real-world combat. Arena-Hard demonstrates four advantages, exactly what current LLM benchmarks need most: separability (87.4%) is significantly better than MT-Bench's (22.6%); its rankings agree closely with Chatbot Arena, reaching 89.1%; it runs fast and costs little...
- AI 389 2024-06-02 13:21:08
-
- 2,500 pages of algorithm documents leaked! The most powerful black box in search history is exposed; is Google headed for another stumble?
- Written by Noah | 51CTO Technology Stack (WeChat ID: blog51cto). Google is having a rough year. Over the past two days, its "AI Overviews" search feature has been found to frequently return egregiously inaccurate results, for example absurdly suggesting that users put glue on pizza to keep the cheese from sliding off. CEO Sundar Pichai had to admit this was caused by large language model hallucination and that there is currently no solution. Now an internal document about Google's search engine has been leaked, possibly showing the public for the first time how it works. Google has yet to issue an official response to the leak and has not disputed the authenticity of the documents. For a long time, Google has been the...
- AI 795 2024-06-02 12:21:35
-
- Goose Factory has built an AI translation company: it specializes in online novels, automatically adapts its language style, and both human readers and GPT-4 say it reads well
- Goose Factory (Tencent) has set up a "translation company" with more than 150 staff, and from the boss down to the employees, every one of them is an AI agent. Its main business is translating online novels, and the quality is extremely high: readers who took part in the evaluation judged the translations better than those of human translators. Compared with hiring humans, using it to translate literary works cuts costs by nearly a factor of 80. TransAgents, with 30 different employees in each position, can adapt its translation style to the language, genre, and target audience. Compared with traditional translation, the resulting text is more flexible and varied, better matches the expressive habits of the target language, and is more literary. As a result, although TransAgents "failed" the similarity-based automatic evaluation, it has won strong recognition from readers and professionals. So...
- AI 410 2024-06-02 12:09:21
-
- AI coding, is it a real need or a gimmick?
- Guest | Interview by Xu Xiaoqiang | Written by Zhang Xiaonan | Produced by Li Meihan | 51CTO Technology Stack (WeChat ID: blog51cto). Since the rise of generative AI, AI has seemed locked in a tug-of-war with the role of the programmer. Every so often, the question of whether AI programming tools can replace programmers gets debated all over again. The heated discussion around AI coding leaves people wondering: will it set off a productivity revolution in programming, or is it just another over-hyped stunt? Thanks to AI coding, Baidu has achieved a 10% improvement in engineering efficiency, and 27% of the new code submitted by its engineers today is generated by AI. The pioneers exploring this answer are the major vendors. As the architect of Baidu Comate and the founder of the product...
- AI 1099 2024-06-02 10:15:47
-
- Adapting to multiple embodiments and tasks, the most powerful open-source robot learning system "Octopus" is born
- A common approach in robot learning is to collect a dataset specific to a particular robot and task and then train a policy on it. However, learning from scratch this way requires collecting sufficient data for every task, and the resulting policy usually generalizes poorly. "In principle, experience gathered from other robots and tasks offers a possible solution, letting the model see a wide variety of robot control problems, which may improve its generalization and performance on downstream tasks. Yet even though general-purpose models that handle a variety of natural language and computer vision tasks have emerged, building a 'general robot model' remains very difficult." Training a unified control policy for robots is extremely challenging, involving...
- AI 679 2024-06-02 10:04:53
-
- Why are small language models the next big thing in the AI world?
- Translator | Bugatti; Reviewer | Chonglou. In the AI field, the tech giants have been racing to build ever larger language models, but a surprising new trend has emerged: small is the new big. As progress on large language models (LLMs) shows signs of stalling, researchers and developers are increasingly turning their attention to small language models (SLMs). These compact, efficient, and adaptable AI models are challenging the notion that "bigger is better" and promise to change the way we approach AI development. Are LLMs starting to stagnate? Recently released performance comparisons from Vellum and Hugging Face show that the performance gap between LLMs is closing rapidly. The trend is especially evident on specific tasks such as multiple-choice questions, reasoning, and math problems...
- AI 1154 2024-06-01 22:35:35
-
- More data or higher quality? This research can help you choose
- Scaling a foundation model means pre-training with more data, more compute, and more parameters; in short, "scale expansion". Although simply scaling up a model may seem crude, it has genuinely delivered many outstanding models to the machine learning community. Many earlier studies have validated the practice of scaling up neural models: quantitative change leads to qualitative change, a view also known as the neural scaling laws (a typical power-law form is sketched after this entry). However, as model size grows, so does the consumption of computing resources: larger models demand more processing power and memory, which is infeasible for many practical applications, especially on resource-constrained devices. Researchers therefore began...
- AI 1161 2024-06-01 22:09:19
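- As context for the excerpt above (background from the scaling-law literature, e.g. Kaplan et al. 2020, not something stated in the article itself), neural scaling laws are usually written as power laws relating loss to model size:

```latex
% One common form of a neural scaling law: test loss falls as a power law in
% the number of non-embedding parameters N, with fitted constants N_c and alpha_N.
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}
```

- Analogous power laws are typically fitted for dataset size and training compute, which is what makes trade-offs between adding data and adding parameters quantifiable at all.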
-
- KAN, the proposed replacement for MLP, has been extended to convolutions by open-source projects
- Earlier this month, researchers from MIT and other institutions proposed KAN, a very promising alternative to the MLP. KAN outperforms the MLP in both accuracy and interpretability, and with a very small number of parameters it can beat an MLP running with far more. For example, the authors report using KAN to reproduce DeepMind's results with a smaller network and a higher degree of automation: DeepMind's MLP had about 300,000 parameters, while the KAN had only about 200. Like the MLP, KAN has a solid mathematical foundation: the MLP rests on the universal approximation theorem, while KAN rests on the Kolmogorov-Arnold representation theorem (stated after this entry). As shown in the figure below, KAN...
- AI 979 2024-06-01 22:03:37
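- For reference, the Kolmogorov-Arnold representation theorem cited in the excerpt (the classical statement of the theorem, not a result from the article) says that any continuous multivariate function on a bounded domain can be written as a finite superposition of continuous univariate functions:

```latex
% Kolmogorov-Arnold representation: an n-variate f decomposes into sums of
% univariate inner functions phi_{q,p} and outer functions Phi_q.
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

- KAN layers parameterize these univariate functions with learnable splines, whereas an MLP layer learns a weight matrix followed by a fixed nonlinearity.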
-
- Multi-grid redundant bounding box annotation for accurate object detection
- 1. Introduction. The leading object detectors today are two-stage or single-stage networks built on deep CNN backbone classifiers repurposed for detection. YOLOv3 is one well-known state-of-the-art single-stage detector: it takes an input image and divides it into an equal-sized grid, and the grid cell containing a target's center is responsible for detecting that target (a minimal sketch of this assignment rule appears after this entry). What I am sharing today is a new mathematical method that assigns multiple grid cells to each target in order to predict accurate, tight-fitting bounding boxes. The researchers also propose an effective offline copy-paste data augmentation for object detection. The newly proposed method significantly outperforms some current state-of-the-art object detectors and promises better performance. 2. Background. Object detection networks are designed to...
- AI 698 2024-06-01 21:46:08
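- Below is a minimal sketch (an assumed illustration, not the paper's code) of the standard YOLOv3-style assignment rule mentioned above, where the single grid cell containing a target's center is responsible for predicting it; the article's contribution of assigning multiple redundant cells per target is not reproduced here.

```python
# Minimal sketch of YOLOv3-style "responsible cell" assignment (illustrative only).
def responsible_cell(center_x, center_y, img_w, img_h, grid_size):
    """Return (row, col) of the grid cell that contains the target center.

    center_x, center_y: target center in pixels
    img_w, img_h:       image width and height in pixels
    grid_size:          number of cells per side (e.g. 13 for a 13x13 grid)
    """
    col = int(center_x / img_w * grid_size)
    row = int(center_y / img_h * grid_size)
    # Clamp so centers lying exactly on the right/bottom edge stay in range.
    return min(row, grid_size - 1), min(col, grid_size - 1)

# Example: a target centered at (210, 160) in a 416x416 image with a 13x13 grid
# falls into grid cell (row=5, col=6).
print(responsible_cell(210, 160, 416, 416, 13))
```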
-
- Kimi + Coze is a great combo; I want to build my own GPT-4o
- Hello everyone, I am Lao Du. Among domestic large models, Kimi performs very well, and fortunately the Coze platform supports the Kimi large model. Coze is a platform for building agents. Today we will try to use Kimi + Coze to build an agent with a GPT-4o-like effect. First, click "Create Bot" on the Coze homepage; a Bot is essentially an agent. In the screenshot, the model selected from the Moonshot series is the Kimi large model. The other highlight of the screenshot is the "plug-ins": Coze provides a very rich set of plug-ins that can be combined with the large model to accomplish many complex functions. For example, visual ability: add a plug-in and the large model can generate images and view...
- AI 1148 2024-06-01 20:23:12
-
- Overview of path planning: sampling-based, search-based, and optimization-based methods, all covered!
- 1. Overview of decision-making, control, and motion planning. Current decision and control methods can be divided into three categories: sequential planning, behavior-aware planning, and end-to-end planning. Sequential planning: the most traditional approach, with perception, decision-making, and control as three clearly separated stages. Behavior-aware planning: compared with the first approach, its highlight is the introduction of human-machine co-driving, vehicle-road cooperation, and risk estimation of the external dynamic environment. End-to-end planning: deep learning and deep reinforcement learning techniques trained on large amounts of data map sensory inputs such as images directly to control quantities such as steering wheel angle...
- AI 1162 2024-06-01 20:12:48
-
- Is generative AI leading to a private cloud renaissance?
- Compiled by Noah | Produced by 51CTO Technology Stack (WeChat ID: blog51cto). As another round of technological revolution approaches, many companies face a strategic choice: keep relying on the convenience of the public cloud, or return to the embrace of the private cloud? With the rapid development of AI technology, this decision has become more urgent. According to Forrester's 2023 Infrastructure Cloud Survey, about 79% of the roughly 1,300 enterprise cloud decision-makers surveyed said their organizations are implementing private clouds. In addition, IDC predicts that global spending on dedicated private cloud services, including hosted private clouds, will reach $20.4 billion in 2024 and will at least double by 2027. Before 2024, IDC data showed that...
- AI 884 2024-06-01 20:11:36