-
- Results of the first AI Mathematical Olympiad announced: the four winning teams all chose the Chinese model DeepSeekMath
- The winning AI Mathematical Olympiad models have been revealed. With the recent announcement of the results, there has been much discussion of the Progress Prize of the world's first AI Mathematical Olympiad (AIMO). Five teams won awards in this competition: the Numina team took first place, CMU_MATH came second, afterexams was provisionally third, and the codeinter and Conor#2 teams placed fourth and fifth respectively. Image source: https://www.kaggle.com/c/ai-mathematical-olympiad-prize/leaderboard The result surprised even Terence Tao. At the time, only the official list of winners had been announced.
- AI 991 2024-07-16 18:14:57
-
- Hands-on report on the hot new GPTs: create your own exclusive GPT, and spring arrives for people who can't code
- We are getting closer to AI agents. OpenAI's developer conference was like a stone dropped into water: now that it is over, the ripples are spreading in all directions. Not only does GPT take integration a step further, eliminating the need to call it step by step, it has also become a powerful tool that anyone can build with. Even without coding skills or basic computer knowledge, you can easily create one. Official blog: https://openai.com/blog/introducing-gpts It seems we are not far from the ultimate vision for AI: the "AI agent". The definition of the term is still vague, but it roughly refers to an autonomous AI program that, once given a goal, can achieve it independently. In the past few months, there have been
- AI 474 2024-07-16 17:23:42
-
- DeepMind develops neural network variational Monte Carlo for quantum chemical calculations
- Quantum chemical calculations of the ground-state properties of positron-molecule complexes are challenging. Here, researchers from DeepMind and Imperial College London tackle the problem using the recently developed Fermionic Neural Network (FermiNet) wave function, which does not rely on a basis set. They found that FermiNet produces highly accurate, and in some cases state-of-the-art, ground-state energies across a range of atoms and small molecules with a variety of qualitative positron-binding properties. The researchers calculated the binding energy of the challenging nonpolar benzene molecule and found it in close agreement with experimental values.
- AI 517 2024-07-16 15:26:30
-
- Llama beats GPT at molecular embedding. Can LLMs understand molecules? Meta defeats OpenAI in this round
- Editor | Radish Skin Large language models (LLMs) such as OpenAI's GPT and Meta AI's Llama are increasingly recognized for their potential in cheminformatics, particularly in understanding the Simplified Molecular Input Line Entry System (SMILES). These LLMs can also decode SMILES strings into vector representations. Researchers at the University of Windsor in Canada compared the performance of GPT and Llama against models pre-trained on SMILES for embedding SMILES strings in downstream tasks, focusing on two key applications: molecular property prediction and drug-drug interaction prediction. The study is titled "Can large language mod
- AI 653 2024-07-16 13:33:18
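The Windsor study above hinges on one step that is easy to sketch: collapsing a SMILES string into a single fixed-length vector that a downstream classifier can consume. Below is a minimal, self-contained illustration; the deterministic toy embedding stands in for GPT or Llama hidden states, and the character-level tokenization, 64-dimensional width, and mean pooling are assumptions for illustration, not the paper's actual setup.

```python
import hashlib
import numpy as np

DIM = 64  # embedding width; a toy stand-in for an LLM's hidden size


def token_embedding(token: str) -> np.ndarray:
    """Deterministic pseudo-embedding for one SMILES token.

    A real pipeline would read per-token hidden states out of GPT or
    Llama here; we hash the token to seed a reproducible random vector.
    """
    seed = int.from_bytes(hashlib.sha256(token.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(DIM)


def embed_smiles(smiles: str) -> np.ndarray:
    """Mean-pool per-token embeddings into one fixed-length molecule vector."""
    tokens = list(smiles)  # character-level tokenization, for simplicity
    vecs = np.stack([token_embedding(t) for t in tokens])
    return vecs.mean(axis=0)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two molecule embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Compare benzene with toluene (toluene adds one methyl group).
benzene = embed_smiles("c1ccccc1")
toluene = embed_smiles("Cc1ccccc1")
print(benzene.shape, round(cosine(benzene, toluene), 3))
```

The resulting vectors can be fed to any standard classifier or regressor for property prediction; the study's point is that which LLM produces them materially changes downstream accuracy.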
-
- For the first time, the neural encoding of language has been localized to the cellular level
- The highest-resolution map to date of the neurons that encode word meaning is here. Humans access rich and subtle meanings through language, which is crucial to human communication. Despite a growing understanding of the brain regions that support language and semantic processing, much remains unknown about semantic representation at the cellular level. Recently, a research paper published in the journal Nature uncovered fine-grained cortical representations of semantic information in single neurons by tracking neuronal activity during natural speech processing. The paper is titled "Semantic encoding during language comprehension at single-cell resolution." Paper address: https://www.nature.com/articles/s41586-024-07643-
- AI 856 2024-07-16 12:12:59
-
- New work led by Academician Weinan E: large models have not only RAG and parameter storage, but also a third kind of memory
- The 2.4B Memory3 achieves better performance than larger LLM and RAG models. In recent years, large language models (LLMs) have received unprecedented attention for their extraordinary performance, but their training and inference costs are high, and many optimization methods have been tried to reduce them. In this paper, researchers from the Shanghai Algorithm Innovation Research Institute, Peking University and other institutions, inspired by the memory hierarchy of the human brain, reduce this cost by equipping LLMs with explicit memory (a memory format cheaper than model parameters or RAG). Conceptually, the LLM can then enjoy a smaller parameter size, training cost and inference cost, since most of its knowledge is externalized into explicit memory. Paper address: https:
- AI 579 2024-07-16 11:57:51
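The core idea of the entry above, externalizing knowledge into a store that is cheaper to hold than model parameters, can be illustrated with a toy explicit-memory lookup that retrieves facts to prepend to an LLM's context. Everything here (the store contents, the word-overlap scoring, the top-k recall) is an illustrative assumption, not the paper's actual Memory3 format.

```python
from collections import Counter

# Toy explicit-memory store: facts the "model" need not hold in its
# parameters. In Memory3 terms, this knowledge is externalized.
MEMORY = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
    "AlphaFold predicts protein structures from sequences.",
]


def score(query: str, fact: str) -> int:
    """Overlap of lowercase word multisets: a stand-in for learned retrieval."""
    q, f = Counter(query.lower().split()), Counter(fact.lower().split())
    return sum((q & f).values())


def recall(query: str, k: int = 1) -> list[str]:
    """Return the k best-matching memories to prepend to the LLM's context."""
    return sorted(MEMORY, key=lambda fact: -score(query, fact))[:k]


print(recall("What is the capital of France?"))
```

The recalled text would be concatenated ahead of the user's prompt, so a smaller model can answer from memory it does not store in its weights; the paper's contribution is making this memory format cheap relative to both parameters and RAG.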
-
- Peking University's embodied intelligence team proposes demand-driven navigation, aligning with human needs to make robots more efficient
- Imagine a robot that could understand your needs and work hard to meet them. Wouldn't that be great? To get a robot's help, you usually need to give a fairly precise command, but executing that command may not turn out as intended. Consider a real environment: when a robot is asked to find a specific item, that item may simply not exist in the current environment, so the robot cannot find it no matter what. But might there be another item in the environment, with a function similar to the requested one, that could also meet the user's need? That is the benefit of using "demands" as task instructions. Recently, Dong Hao's team at Peking University proposed a new navigation task: demand-driven navigation (
- AI 1098 2024-07-16 11:27:39
-
- The giant behind the scenes is taking industrial AI to the next stage
- There is no new king in industrial AI; it shines without dazzling, and still waters run deep. No one would dispute that generative AI is today's king of topics. With a few simple words, the Terracotta Warriors can be "resurrected" to sing Qin opera, and Trump can appear on a talk show. With the emotional value maxed out, do you dare to imagine something cooler, such as creating whatever you want with words alone? AI can not only generate a video, but also build an immersive, highly realistic virtual space that obeys physical laws. Instructions can be given by natural voice alone, converted into professional industrial language, and handed to the intelligent systems of a real factory, turning a chemical production line into "the real thing". Such a wonderful future may seem far off, but by Siemens' account it is no longer a castle in the air.
- AI 1166 2024-07-16 09:50:46
-
- Peking University launches a new multi-modal robot model! Efficient reasoning and manipulation for general and robotic scenarios
- The AIxiv column is where this site publishes academic and technical content. In the past few years, the AIxiv column has carried more than 2,000 reports, covering top laboratories at major universities and companies around the world and effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com This article was completed by HMILab. Relying on two major platforms, Peking University's National Engineering Research Center for Video and Visual Technology and the National Key Laboratory of Multimedia Information Processing, HMILab has long been engaged in research on machine learning, multi-modal learning and embodied intelligence. This work No.
- AI 431 2024-07-16 03:51:40
-
- ICML 2024 high-scoring paper | Zeroth-order optimizers fine-tune large models with significantly reduced memory
- Introduction to the co-first authors of this article: Zhang Yihua is a third-year PhD student in the Department of Computer Science at Michigan State University, supervised by Professor Sijia Liu; his main research directions are the security, privacy and efficiency of large models. Li Pingzhi graduated from the University of Science and Technology of China and will
- AI 982 2024-07-16 03:17:30
-
- With accuracy comparable to AlphaFold, EPFL's AI method pairs interacting proteins from sequences
- Proteins are the building blocks of life and participate in almost all biological processes, so understanding how proteins interact is critical to explaining the complexity of cellular function. Anne-Florence Bitbol's team at the Ecole Polytechnique Fédérale de Lausanne (EPFL) has proposed a method for pairing interacting protein sequences that exploits the power of protein language models trained on multiple sequence alignments. The method performs well on small datasets and can improve supervised structure prediction of protein complexes. The results are published as "Pairing interacting protein
- AI 858 2024-07-16 01:18:30
-
- Karpathy, who got into deep learning 12 years ago, set off a wave of AlexNet-era nostalgia, and LeCun, Goodfellow and others all joined in
- Hard to believe, but 12 years have passed since AlexNet kicked off the deep learning revolution in 2012, and we have now entered the era of large models. Recently, a post by the well-known AI research scientist Andrej Karpathy prompted many leading figures of that wave of the deep learning revolution to reminisce. From Turing Award winner Yann LeCun to Ian Goodfellow, the father of GANs, everyone recalled the old days. So far, the post has had more than 630,000 views. In it, Karpathy noted an interesting fact: many people may have heard of the ImageNet/AlexNet moment of 2012 and the deep learning revolution it started. However, there may be few
- AI 1062 2024-07-16 01:08:30
-
- AlphaFold 3 is launched, predicting the interactions and structures of proteins and all of life's molecules with far greater accuracy than ever before
- Editor | Radish Skin Since the release of the powerful AlphaFold 2 in 2021, scientists have been using protein structure prediction models to map protein structures within cells, discover drugs, and draw a "cosmic map" of every known protein interaction. Just now, Google DeepMind released the AlphaFold 3 model, which performs joint structure prediction for complexes including proteins, nucleic acids, small molecules, ions and modified residues. The accuracy of AlphaFold 3 is significantly improved over many past dedicated tools (protein-ligand interaction, protein-nucleic acid interaction, antibody-antigen prediction). This shows that within a single unified deep learning framework, it is possible to achieve
- AI 509 2024-07-16 00:08:11
-
- Hands-on with the latest AI speech model: Trump and Ding Zhen saying tongue twisters sounds convincingly real, but the sentences come out choppy
- Machine Power Report, editor: Yang Wen. The new AI voice model FishSpeech is an excellent imitator of timbre. Recently, the AI voice track has suddenly become lively. A little over a month ago, ChatTTS, dubbed the "ceiling of open-source voice TTS", went viral. How viral? In just three days it collected 9.2k stars on GitHub, topped GitHub Trending, and continued to dominate the list. Not long after, ByteDance launched a similar project, Seed-TTS, under the same banner of "generating natural, realistic speech". In the past few days, a new player has entered the track: FishSpeech. Reportedly, after training on 150,000 hours of data, the model has become proficient in
- AI 469 2024-07-15 20:44:38
-
- Everyone can be a prompt engineer! New in Claude: Generate, test and evaluate prompts with one click
- If you don't know how to write prompts, take a look at this. When building AI applications, prompt quality has a significant impact on results. However, producing high-quality prompts is challenging, requiring a deep understanding of the application's requirements and expertise with large language models. To speed up development and improve results, the AI startup Anthropic has streamlined the process to make it easier for users to create high-quality prompts. Specifically, new functionality has been added to the Anthropic Console to generate, test and evaluate prompts. Anthropic prompt engineer Alex Albert said this is the result of a lot of work over the past few weeks, and now C
- AI 637 2024-07-15 20:13:31