- Runway and Luma are at it again! Yann LeCun fires back: no matter how good you are, you are not a 'world model'
- Machine Power Report. Editor: Yang Wen. The wave of artificial intelligence represented by large models and AIGC has been quietly changing the way we live and work, yet most people still don't know how to use it. We have therefore launched the "AI in Use" column to show, through intuitive, interesting, and concise use cases, how AI can be applied, and to spark everyone's thinking. We also welcome readers to submit innovative, hands-on use cases. The AI video industry is "fighting" again! On June 29, the well-known generative AI platform Runway announced that its latest model, Gen-3 Alpha, had begun testing with some users. On the same day, Luma launched a new keyframe feature and made it available to all users for free. As the saying goes, for every move there is a countermove…
- AI 1090 2024-07-03 09:13:06
-
- Published in a Nature family journal: topological Transformer model predicts multi-scale protein-ligand interactions to aid drug development
- Editor | Radish Skin. A new artificial intelligence application will help researchers improve their drug development capabilities. The project, called TopoFormer, was developed by an interdisciplinary team led by Professor Guowei Wei of the Department of Mathematics at Michigan State University. TopoFormer converts three-dimensional information about molecules into data that typical AI-based drug interaction models can use, expanding those models' ability to predict drug effectiveness. "With artificial intelligence, you can make drug development faster, more efficient, and cheaper," said Wei, who holds appointments in both the Department of Biochemistry and Molecular Biology and the Department of Electrical and Computer Engineering. Professor Wei explained that in the United States…
- AI 1219 2024-07-02 15:23:21
-
- Can't wait for OpenAI's Q*? Huawei Noah's Ark's secret weapon for exploring LLM reasoning, MindStar, arrives first
- AIxiv is the column in which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. The authors of this paper are Kang Jikun, Li Xinze, Chen Xi, Amirreza Kazemi, and Chen Boxing from Huawei's Noah's Ark Lab in Montreal. Artificial intelligence (AI) has made great progress over the past decade, especially in natural language processing and computer vision.
- AI 681 2024-07-02 05:01:41
-
- Come quickly! Luchen's Open-Sora is a freebie worth grabbing: get started with video generation for just 10 yuan.
- The field of video generation models has been booming lately, with text-to-video and image-to-video models emerging one after another. Yet even with so many models on the market, most people still cannot try them: without a closed-beta invitation, they can only look on and sigh. Not long ago we covered Luchen Technology's Open-Sora. As the world's first open-source Sora-like model, it not only performs well across multiple types of video but is also low-cost and available to everyone. Does it work well? How do you use it? Here is this site's hands-on review. The newly open-sourced Open-Sora 1.2 can generate 720p high-definition videos up to 16 seconds long. The official demo results are genuinely impressive; no wonder so many readers have asked how to try it themselves. Compared…
- AI 1186 2024-07-02 04:22:00
-
- Amazon Cloud Technology's innovation 'Neural Sparse Retrieval': only text matching is needed to achieve semantic search
- AIxiv is the column in which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. The authors of this article are Dr. Yang Yang, head of machine learning, and machine learning engineers Geng Zhichao and Guan Cong from the OpenSearch China R&D team. OpenSearch is a fully open-source search and real-time analytics engine project initiated by Amazon Cloud Technology… (A minimal sketch of the sparse text-matching idea follows this entry.)
- AI 1028 2024-07-02 02:55:57
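- The sketch below is only a toy, hedged illustration of the general neural sparse retrieval idea described above, not OpenSearch's actual implementation or API: documents and queries are represented as sparse {term: weight} vectors (in real systems the weights come from a neural sparse encoder), and relevance is plain term matching, i.e. a dot product over shared terms, which an inverted index can compute. The hand-written weights and names are assumptions for illustration.

```python
# Toy sketch of neural sparse retrieval scoring (not OpenSearch's API).
# In practice the {term: weight} vectors come from a neural sparse encoder;
# the hand-written weights below are placeholders for illustration only.

def sparse_score(query: dict[str, float], doc: dict[str, float]) -> float:
    # Only terms present in both vectors contribute, which is exactly the
    # kind of text matching an inverted index performs.
    return sum(weight * doc[term] for term, weight in query.items() if term in doc)

docs = {
    "d1": {"semantic": 1.2, "search": 0.9, "vector": 0.4},
    "d2": {"keyword": 1.1, "search": 0.8},
}
query = {"semantic": 1.0, "search": 0.5}

ranked = sorted(docs, key=lambda d: sparse_score(query, docs[d]), reverse=True)
print(ranked)  # ['d1', 'd2']
```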
-
- Write a graphics program just from a hand-drawn sketch: UC Berkeley teaches diffusion models new skills
- It turns out that diffusion models can be used not only to generate images and videos but also to synthesize programs. Given a hand-drawn "5"-shaped figure, the model can repeatedly mutate a program until it finally produces one whose output matches the target graphic. The model comes from a research team at the University of California, Berkeley, which proposed a new program synthesis method that uses a neural diffusion model to directly manipulate syntax trees. The paper's first author is Shreyas Kapur, a doctoral student at the university, advised by Stuart Russell, professor of computer science there. Paper title: Diffusion On Syntax Trees For Program Synthesis. Paper address: https://arxiv. (An illustrative mutate-and-keep sketch follows this entry.)
- AI 1086 2024-07-02 01:14:04
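- The paper's neural diffusion model over syntax trees is not reproduced here; purely to illustrate the mutate-and-keep loop the summary describes, the sketch below hill-climbs over a toy arithmetic expression tree toward a numeric target. All names and the objective are illustrative assumptions, not the authors' code.

```python
import random

# Toy illustration only: random hill-climbing over a tiny expression tree.
# The actual paper uses a learned neural diffusion model over syntax trees;
# this sketch just shows the "mutate the program, keep the mutation if the
# output gets closer to the target" loop in its simplest possible form.

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_tree(depth=2):
    """Build a random arithmetic expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.randint(1, 9)                      # leaf: a constant
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(node):
    if isinstance(node, int):
        return node
    op, left, right = node
    return OPS[op](evaluate(left), evaluate(right))

def mutate(node):
    """Replace a randomly chosen subtree with a fresh random one."""
    if isinstance(node, int) or random.random() < 0.3:
        return random_tree(depth=1)
    op, left, right = node
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def synthesize(target, steps=5000):
    program = random_tree()
    best_err = abs(evaluate(program) - target)
    for _ in range(steps):
        candidate = mutate(program)
        err = abs(evaluate(candidate) - target)
        if err <= best_err:                              # keep improving mutations
            program, best_err = candidate, err
        if best_err == 0:
            break
    return program

if __name__ == "__main__":
    print(synthesize(42))  # prints a tree that evaluates to 42 (or close to it)
```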
-
- Beating 25 molecular design algorithms: Georgia Tech, the University of Toronto, and Cornell propose the LLM-based method MOLLEO
- Author | Wang Haorui, Georgia Institute of Technology; Editor | ScienceAI. Molecular discovery, framed as an optimization problem, poses significant computational challenges because its optimization objectives may not be differentiable. Evolutionary algorithms (EAs) are commonly used to optimize black-box objectives in molecular discovery, traversing chemical space through random mutation and crossover, but this requires extensive and expensive objective evaluations. In this work, researchers from the Georgia Institute of Technology, the University of Toronto, and Cornell University collaborated to propose Molecular Language-Enhanced Evolutionary Optimization (MOLLEO), which integrates pre-trained large language models (LLMs) carrying chemical knowledge into evolutionary algorithms, significantly improving their molecular optimization capability. The study is titled "Efficient Evolutionary Searc…" (A hedged sketch of the generic LLM-in-the-loop EA pattern follows this entry.)
- AI 1307 2024-07-02 01:07:36
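- MOLLEO's actual prompts and oracles are not shown here; as a hedged sketch of the general pattern the summary describes (an evolutionary loop in which an LLM, rather than random edits alone, proposes offspring), the code below uses a placeholder `llm_propose_edit` stand-in for the language model call and a toy `score` objective. Both names are assumptions for illustration, not MOLLEO's API.

```python
import random

# Hedged sketch of the generic "LLM inside an evolutionary loop" pattern.
# `score` and `llm_propose_edit` are placeholders, not MOLLEO's actual code.

def score(molecule: str) -> float:
    """Black-box objective (e.g. a property or docking oracle). Placeholder."""
    return -abs(len(molecule) - 20)          # toy stand-in objective

def llm_propose_edit(parent: str) -> str:
    """Where an LLM-guided method would prompt the model to edit a SMILES
    string; here a random single-character edit stands in for that call."""
    i = random.randrange(len(parent))
    return parent[:i] + random.choice("CNO()=") + parent[i + 1:]

def evolve(population: list[str], generations: int = 50, keep: int = 10) -> str:
    for _ in range(generations):
        # "Mutation" step: each survivor proposes one edited offspring.
        offspring = [llm_propose_edit(p) for p in population]
        # Select the best candidates by the black-box score.
        population = sorted(population + offspring, key=score, reverse=True)[:keep]
    return population[0]

if __name__ == "__main__":
    seeds = ["CCO", "CCCCN", "c1ccccc1O"]
    print(evolve(seeds))
```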
-
- ICML 2024 | Large language models help CLIP-based out-of-distribution detection
- Machine learning models perform well when the training and test data come from the same distribution. In an open-world environment, however, models often encounter out-of-distribution (OOD) samples. OOD samples can make a model behave unpredictably, and the consequences of such errors can be fatal, especially in high-risk scenarios such as autonomous driving [1, 2]. OOD detection is therefore crucial for ensuring the reliability of machine learning models in real deployments. Most OOD detection methods [1, 3] effectively detect OOD samples by building on a well-trained in-distribution (ID) classifier. Ran… (A minimal sketch of score-based OOD detection follows this entry.)
- AI 725 2024-07-01 23:29:18
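- The paper's CLIP-and-LLM-based method is not reproduced here; as a minimal, hedged sketch of the score-based OOD detection paradigm mentioned above (threshold a confidence score from a trained in-distribution classifier), the code below implements maximum-softmax-probability scoring. The logits and threshold are illustrative placeholders.

```python
import numpy as np

# Minimal sketch of score-based OOD detection with a trained ID classifier:
# compute the maximum softmax probability (MSP) and flag low-confidence
# inputs as out-of-distribution. The threshold is a placeholder; in practice
# it is calibrated on held-out ID data (e.g. at a 95% true-positive rate).

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Higher score => more likely in-distribution."""
    return softmax(logits).max(axis=-1)

def is_ood(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    return msp_score(logits) < threshold

if __name__ == "__main__":
    logits = np.array([[8.0, 0.1, -2.0],   # confident prediction -> ID
                       [0.4, 0.3, 0.5]])   # flat prediction -> flagged as OOD
    print(msp_score(logits))               # approx. [0.9996, 0.367]
    print(is_ood(logits))                  # [False  True]
```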
-
- ICML 2024 Spotlight | Decoding-time realignment makes language models hallucinate less and better match human preferences
- AIxiv is the column in which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. This article introduces a paper on language model alignment, completed by doctoral students from three universities in Switzerland, the United Kingdom, and France together with researchers from Google DeepMind and Google Research. Among them, the corresponding author Ti…
- AI 596 2024-07-01 22:09:56
-
- Developers are ecstatic! Meta's newly released LLM Compiler achieves 77% automatic tuning efficiency
- Meta has built an impressive LLM Compiler to help programmers work with code more efficiently. Yesterday, the three AI giants OpenAI, Google, and Meta each released the latest research results for their own large models: OpenAI launched CriticGPT, a model trained on top of GPT-4 specifically to find bugs; Google open-sourced the 9B and 27B versions of Gemma 2; and Meta delivered its latest AI breakthrough, LLM Compiler. This is a powerful suite of open models designed to optimize code and rethink compiler design. The innovation has the potential to change how developers approach code optimization, making it faster, more efficient, and more economical.
- AI 1434 2024-07-01 18:16:39
-
- 30 times more efficient than traditional methods: Chinese Academy of Sciences team's Transformer deep learning model predicts sugar-protein interaction sites
- Sugars are the most abundant organic substances in nature and are essential for life. Understanding how carbohydrates regulate proteins during physiological and pathological processes can open the way to answering key biological questions and developing new treatments. However, the diversity and complexity of sugar molecules make it challenging to experimentally identify sugar-protein binding and interaction sites. Here, a team from the Chinese Academy of Sciences developed DeepGlycanSite, a deep learning model that accurately predicts sugar-binding sites on a given protein structure. DeepGlycanSite integrates the geometric and evolutionary features of proteins into a deep equivariant graph neural network with a Transformer architecture. Its performance significantly surpasses previous state-of-the-art methods, and it can effectively predict…
- AI 1100 2024-07-01 15:17:50
-
- Covering more than 300 related studies: the latest multimodal image editing survey from Fudan University and Nanyang Technological University
- AIxiv is the column in which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. Shuai Xincheng, the first author of this article, is a PhD student in the FVL Laboratory at Fudan University and received his bachelor's degree from Shanghai Jiao Tong University. His main research interests include image and video editing and multimodal learning. Introduction: this article proposes a unified approach to solving general editing tasks.
- AI 620 2024-06-29 06:14:41
-
- With an accuracy of 0.96, a physicochemically constrained graph neural network predicts protein-ligand interactions from sequences
- Editor | Radish Skin. In drug development, determining the binding affinity and functional effects of small-molecule ligands on proteins is crucial. Current computational methods can predict these protein-ligand interaction properties, but without high-resolution protein structures they often lose accuracy and cannot predict functional effects. Researchers at Monash University and Griffith University have developed PSICHIC (PhySIcoCHemICal graph neural network), a framework that incorporates physicochemical constraints to decode interaction fingerprints directly from sequence data. This allows PSICHIC to decode the mechanisms behind protein-ligand interactions, achieving state-of-the-art accuracy and interpretability. In the absence of structural data…
- AI 810 2024-06-29 05:16:50
-
- Google's labor of love: open-sourcing the 9B and 27B versions of Gemma 2, with a focus on efficiency and economy!
- How does Gemma 2 deliver twice the performance of the comparably sized Llama 3? On the AI track, the technology giants compete fiercely: no sooner had GPT-4o come out than Claude 3.5 Sonnet appeared. In such a heated battle, Google entered late but has shown it can catch up quickly, which speaks to its potential for technological development and innovation. Beyond the Gemini models, Gemma, a family of lightweight state-of-the-art open models, feels closer to everyday users. It is built on the same research and technology as Gemini and aims to give everyone the tools to build with AI. Google continues to expand the Gemma family, including CodeGemma, RecurrentGemma, and P…
- AI 1148 2024-06-29 00:59:21
-
- ICML 2024 | Revealing the mechanisms by which nonlinear Transformers learn and generalize in in-context learning
- AIxiv is the column in which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. The author of this article, Li Hongkang, is a doctoral candidate in the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute in the United States. He received his bachelor's degree from the University of Science and Technology of China. His research interests include deep learning theory, large language model theory, and statistical machine learning. Currently at ICLR/…
- AI 503 2024-06-29 00:44:41