-
- With an accuracy of 0.96, a physicochemically constrained graph neural network predicts protein-ligand interactions from sequence alone
- Editor | Radish Skin In drug development, determining the binding affinity and functional effect of small-molecule ligands on proteins is crucial. Current computational methods can predict these protein-ligand interaction properties, but without high-resolution protein structures accuracy often suffers and functional effects cannot be predicted. Researchers at Monash University and Griffith University have developed PSICHIC (PhySIcoCHemICal graph neural network), a framework that incorporates physicochemical constraints to decode interaction fingerprints directly from sequence data. This enables PSICHIC to decode the mechanisms behind protein-ligand interactions with state-of-the-art accuracy and interpretability. In the absence of structural data… (see the sketch after this entry for the general idea)
- AI 630 2024-06-29 05:16:50
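The excerpt above describes predicting protein-ligand binding affinity from a protein sequence and a ligand graph under physicochemical constraints. Below is a minimal, hypothetical sketch of that general idea in PyTorch: a sequence encoder, a simple ligand message-passing step, and a cross-attention "interaction" module feeding an affinity head. All module names, dimensions, and the architecture itself are illustrative assumptions, not the authors' PSICHIC implementation.

```python
# Minimal, hypothetical sketch of a sequence-based protein-ligand affinity predictor
# in the spirit of the entry above (not the authors' PSICHIC code). Assumes PyTorch.
import torch
import torch.nn as nn

class SequenceLigandAffinityModel(nn.Module):
    def __init__(self, n_residue_types=25, n_atom_types=40, dim=128):
        super().__init__()
        # Protein branch: embed residues, then a 1-D convolution over the sequence.
        self.res_embed = nn.Embedding(n_residue_types, dim)
        self.seq_conv = nn.Conv1d(dim, dim, kernel_size=7, padding=3)
        # Ligand branch: embed atoms, then one round of mean-aggregation message passing.
        self.atom_embed = nn.Embedding(n_atom_types, dim)
        self.atom_update = nn.Linear(2 * dim, dim)
        # Cross-attention from ligand atoms to protein residues stands in for the
        # physicochemically constrained interaction fingerprint described above.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.affinity_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, residues, atoms, adjacency):
        # residues: (B, L) residue indices; atoms: (B, N) atom indices;
        # adjacency: (B, N, N) ligand bond adjacency matrix (float).
        h_res = self.res_embed(residues)
        h_res = self.seq_conv(h_res.transpose(1, 2)).transpose(1, 2)   # (B, L, D)
        h_atom = self.atom_embed(atoms)                                # (B, N, D)
        neighbors = torch.bmm(adjacency, h_atom) / adjacency.sum(-1, keepdim=True).clamp(min=1)
        h_atom = torch.relu(self.atom_update(torch.cat([h_atom, neighbors], dim=-1)))
        interaction, _ = self.cross_attn(h_atom, h_res, h_res)         # atoms attend to residues
        return self.affinity_head(interaction.mean(dim=1)).squeeze(-1) # (B,) affinity score

# Tiny smoke test with random inputs.
model = SequenceLigandAffinityModel()
residues = torch.randint(0, 25, (2, 100))
atoms = torch.randint(0, 40, (2, 30))
adjacency = (torch.rand(2, 30, 30) > 0.8).float()
print(model(residues, atoms, adjacency).shape)  # torch.Size([2])
```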
-
- Google's 'sincere work': Gemma 2 open-sourced in 9B and 27B versions, focusing on efficiency and economy!
- How does Gemma 2, which reportedly doubles the performance of Llama 3 models in its class, stack up? On the AI track, technology giants are competing fiercely: GPT-4o had barely arrived when Claude 3.5 Sonnet followed on its heels. In such a fierce battle, Google entered late but has shown it can catch up in a short time, which speaks to its potential for technological development and innovation. Beyond the Gemini models, Gemma, a family of lightweight, state-of-the-art open models, feels closer to everyday users. It is built on the same research and technology as Gemini and aims to give everyone the tools to build AI. Google continues to expand the Gemma family, including CodeGemma, RecurrentGemma and P…
- AI 1008 2024-06-29 00:59:21
-
- ICML 2024 | Revealing how non-linear Transformers learn and generalize in in-context learning
- The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. The author of this article, Li Hongkang, is a doctoral candidate in the Department of Electrical, Computer and Systems Engineering at Rensselaer Polytechnic Institute in the United States. He received his bachelor's degree from the University of Science and Technology of China. His research interests include deep learning theory, large language model theory, and statistical machine learning. Currently at ICLR/…
- AI 425 2024-06-29 00:44:41
-
- Beating Gemini-1.5-Pro and GPT-4V, ranking among the world's top three in large-model multi-modal capability
- Recently, Yuncong Technology's large model made significant progress on the multi-modal track of OpenCompass, an authoritative comprehensive evaluation platform. The latest results show that Yuncong's Congrong large model scored an average of 65.5 in this system, placing it among the top three in the world, surpassing Google's Gemini-1.5-Pro and GPT-4V and behind only GPT-4o (69.9) and Claude 3.5 Sonnet (67.9). In the domestic market, it also outscored InternVL-Chat (61.7) and GLM-4V (60.8), ranking first. 1. The OpenCompass multi-modal leaderboard OpenCom…
- AI 963 2024-06-29 00:25:01
-
- The Bengio team proposes a new multi-modal benchmark, targeting the weaknesses of Claude 3.5 and GPT-4o
- The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. The author of this article, Zhang Tianyu, studies at the Mila Artificial Intelligence Institute in Canada under Professor Yoshua Bengio, winner of the Turing Award. His doctoral work focuses on multi-modality, GFlowNets, multi-agent reinforcement learning, and AI for climate change
- AI 779 2024-06-29 00:06:53
-
- Depth Anything V2 from ByteDance's large model team selected among Apple's latest CoreML models
- Recently, Apple released 20 new CoreML models and 4 datasets on HuggingFace, and Depth Anything V2, the monocular depth estimation model from ByteDance's large model team, was among them. CoreML, Apple's machine learning framework, integrates machine learning models into apps on iOS and macOS devices so they run efficiently on-device, performing complex AI tasks without an internet connection while enhancing user privacy and reducing latency. Apple developers can use these models to build intelligent and secure AI applications. Depth Anything V2 is a monocular depth estimation model developed by ByteDance's large model team; the V2 version offers finer detail handling, stronger robustness, and significantly improved speed, with variants ranging from 25M to… (see the sketch after this entry for a hypothetical way of running such a model)
- AI 364 2024-06-28 22:40:06
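The excerpt above explains that CoreML runs models on-device. Below is a small, hypothetical sketch of how a developer might load and query a converted depth-estimation model from Python with coremltools (prediction requires macOS). The package file name and the "image"/"depth" feature names are assumptions for illustration, not the actual names in Apple's Depth Anything V2 release; inspect the model spec for the real ones.

```python
# Hypothetical sketch: running a Core ML depth-estimation model via coremltools.
# File name and input/output feature names are illustrative assumptions.
import coremltools as ct
from PIL import Image

model = ct.models.MLModel("DepthAnythingV2.mlpackage")  # path is illustrative
print(model.get_spec().description)                      # inspect real input/output names

img = Image.open("example.jpg").resize((518, 518))       # resize to the model's expected input size
result = model.predict({"image": img})                    # input key depends on the model spec
depth_map = result["depth"]                               # output key is likewise an assumption
```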
-
- Automatically converting images into text, with higher-quality and more accurate image descriptions
- The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. Pi Renjie: a third-year doctoral student at the Hong Kong University of Science and Technology, supervised by Professor Zhang Tong and Professor Zhou Xiaofang. He received an Apple Scholarship in 2024. His current research focuses on multi-modal large language models and data-centric AI. Zhang Jianshu: a third-year undergraduate student at Wuhan University,
- AI 879 2024-06-28 21:41:35
-
- The birth of Cambrian-1: Xie Saining and Yann LeCun's team release the most powerful open-source multi-modal LLM
- Just as animals have eyes, Cambrian-1 from Yann LeCun's team gives AI powerful visual representation learning capabilities. Throughout the ages, many philosophers have explored the question: does understanding the meaning of language need to be grounded in the senses? Although philosophers disagree, one thing is clear: solid and effective sensory grounding can at least help. For example, scientists generally believe that the emergence of vision during the Cambrian Explosion was a key step in the evolution of early animals; it not only helped animals find food and avoid predators, but also drove the evolution of the animals themselves. In fact, most of the knowledge humans (and nearly all animals) possess is acquired through sensory experience interacting with the physical…
- AI 1071 2024-06-28 21:28:07
-
- Domestic large models reach new heights! iFlytek Spark 4.0 released: overall surpassing GPT-4 Turbo, ranking first in 8 international authoritative test sets
- The capabilities of domestic large models have reached a new level! On June 27, iFlytek officially released iFlytek Spark V4.0, along with artificial intelligence applications in fields such as medical care, education, and business. With the new version, the seven core capabilities of iFlytek Spark V4.0 have been fully upgraded; it ranks first on 8 mainstream international test sets, surpasses GPT-4 Turbo overall, and leads domestic large models. Liu Qingfeng said that downloads of the Spark app have reached 131 million and a number of popular application assistants have emerged. With the support of the Spark model, sales of smart hardware in some scenarios grew more than 70% year-on-year, and average monthly usage exceeded 40 million. In addition, the Spark (Xinghuo) V4.0 model is built on the country's first fully domestic "Wanka" (ten-thousand-GPU)…
- AI 1018 2024-06-28 20:52:47
-
- The first real-time AI video generation technology in history: works with any DiT, 10.6x faster
- It can be applied to any DiT-based video generator with no quality loss and no training required. Real-time AI video generation is here! On Wednesday, the National University of Singapore team led by You Yang proposed the industry's first DiT-based video generation method capable of real-time output. The technique is called Pyramid Attention Broadcast (PAB). By cutting redundant attention computation, PAB reaches frame rates of up to 21.6 FPS with a 10.6x speedup, without sacrificing the quality of popular DiT-based video generation models including Open-Sora, Open-Sora-Plan, and Latte. Notably, as a training-free method, PAB can be applied to any future DiT-based… (a minimal sketch of the broadcast idea follows this entry)
- AI 1243 2024-06-28 19:14:46
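The excerpt above says the speedup comes from cutting redundant attention computation. Below is a minimal, hypothetical sketch of that "compute once, broadcast for a few steps" idea: attention is recomputed only at periodic refresh steps and the cached output is reused in between. The wrapper class, the refresh interval, and the toy denoising loop are illustrative assumptions, not the authors' PAB implementation.

```python
# Hypothetical sketch of broadcasting attention outputs across diffusion steps
# (illustrative only; not the PAB implementation).
import torch
import torch.nn as nn

class BroadcastAttention(nn.Module):
    """Wraps a self-attention layer; recomputes it only every `interval` steps
    and reuses the cached output in between."""
    def __init__(self, attn: nn.MultiheadAttention, interval: int = 2):
        super().__init__()
        self.attn = attn
        self.interval = interval
        self._cache = None

    def forward(self, x, step: int):
        if self._cache is None or step % self.interval == 0:
            out, _ = self.attn(x, x, x)   # full attention at "refresh" steps
            self._cache = out
        return self._cache                 # broadcast the cached output otherwise

attn = BroadcastAttention(nn.MultiheadAttention(64, 4, batch_first=True), interval=3)
x = torch.randn(1, 16, 64)                 # (batch, tokens, dim)
for step in range(6):                      # toy denoising loop
    y = attn(x, step)
print(y.shape)                             # torch.Size([1, 16, 64])
```

The sketch relies on the same approximation the excerpt alludes to: attention outputs change little between adjacent diffusion steps, so reusing them for a few steps trades a small error for large savings.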
-
- Tsinghua AIR and others propose ESM-AA, the first protein language model spanning amino-acid and atomic scales
- Research teams from Tsinghua University's AIR, Peking University, and Nanjing University have proposed the ESM-AA model. The model marks important progress in protein language modeling, providing a unified solution that integrates multi-scale information; it is the first protein pre-trained language model that can handle both amino-acid-level and atomic-level information. Its strong performance demonstrates the potential of multi-scale unified modeling to overcome existing limitations and unlock new capabilities. As a base model, ESM-AA has drawn attention and extensive discussion from many scholars (see screenshot below), who consider it possible to develop ESM-AA-based models that can compete with AlphaFold3 and RoseTTAFold All-Atom in studying the relationships between the structures of different biomolecules…
- AI 1120 2024-06-28 18:10:06
-
- Efficient and accurate, Zhengzhou University team develops new AI tool to identify drug-target interactions
- Editor | Dry Leaf Butterfly Accurate identification of drug-target interactions (DTIs) is one of the key steps in drug discovery and drug repositioning. Many computational models have been proposed for DTI prediction, and some notable advances have been made. However, these methods rarely address how to properly fuse the multi-view similarity networks associated with drugs and targets, and how to fully incorporate known interaction relationships to accurately represent drugs and targets has not been well studied. Improving the accuracy of DTI prediction models therefore remains necessary. In the latest research, teams from Zhengzhou University and the University of Electronic Science and Technology of China propose a new method, MIDTI, which uses a multi-view similarity network fusion strategy and a deep interactive attention mechanism to predict drug… (a minimal sketch of these two ideas follows this entry)
- AI 1139 2024-06-28 02:31:25
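The excerpt above names two ingredients: fusing multiple drug/target similarity views into one embedding, and a deep interactive attention mechanism between drug and target representations. Below is a minimal, hypothetical PyTorch sketch of those two ideas only; the class, layer choices, and shared attention module are illustrative assumptions, not the authors' MIDTI code.

```python
# Hypothetical sketch: multi-view similarity fusion plus drug-target cross-attention
# for DTI prediction (illustrative; not the MIDTI implementation).
import torch
import torch.nn as nn

class MultiViewDTI(nn.Module):
    def __init__(self, n_drugs, n_targets, n_views=3, dim=64):
        super().__init__()
        # One projection per similarity view, with learned fusion weights.
        self.drug_proj = nn.ModuleList(nn.Linear(n_drugs, dim) for _ in range(n_views))
        self.target_proj = nn.ModuleList(nn.Linear(n_targets, dim) for _ in range(n_views))
        self.view_weights = nn.Parameter(torch.ones(n_views))
        # A single attention module is shared for both interaction directions.
        self.interact = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)

    def fuse(self, sim_views, projs):
        # Each view is an (n_entities, n_entities) similarity matrix.
        w = torch.softmax(self.view_weights, dim=0)
        return sum(w[i] * projs[i](sim_views[i]) for i in range(len(projs)))

    def forward(self, drug_sims, target_sims, drug_idx, target_idx):
        drugs = self.fuse(drug_sims, self.drug_proj)        # (n_drugs, dim)
        targets = self.fuse(target_sims, self.target_proj)  # (n_targets, dim)
        d = drugs[drug_idx].unsqueeze(1)                     # (B, 1, dim)
        t = targets[target_idx].unsqueeze(1)                 # (B, 1, dim)
        d_att, _ = self.interact(d, t, t)                    # drug attends to target
        t_att, _ = self.interact(t, d, d)                    # target attends to drug
        pair = torch.cat([d_att, t_att], dim=-1).squeeze(1)  # (B, 2*dim)
        return torch.sigmoid(self.score(pair)).squeeze(-1)   # interaction probability

# Toy usage with random similarity matrices.
n_drugs, n_targets = 50, 80
drug_sims = [torch.rand(n_drugs, n_drugs) for _ in range(3)]
target_sims = [torch.rand(n_targets, n_targets) for _ in range(3)]
model = MultiViewDTI(n_drugs, n_targets)
prob = model(drug_sims, target_sims, torch.tensor([0, 1]), torch.tensor([2, 3]))
print(prob.shape)  # torch.Size([2])
```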
-
- Specifically customized for five major scientific fields, NASA and IBM cooperate to develop a large language model INDUS
- INDUS, named after the southern constellation, is a comprehensive suite of large language models supporting five scientific fields. (Source: NASA) Editor | KX Large language models (LLMs) trained on massive amounts of data perform well on natural language understanding and generation tasks. Most popular LLMs are trained on general corpora such as Wikipedia, but shifts in vocabulary distribution lead to poor performance in specific domains. Inspired by this, NASA collaborated with IBM to develop INDUS, a comprehensive suite of LLMs tailored to Earth science, biological and physical sciences, heliophysics, planetary science, and astrophysics, trained on a curated scientific corpus drawn from diverse data sources. INDUS contains two types of models: encoder and sentence…
- AI 817 2024-06-27 20:28:32
-
- OpenAI suddenly cuts off supply to China! SenseTime launches a zero-cost 'moving service' with a big gift package
- On June 25, SenseTime announced a "0-yuan purchase" plan for its large models. From now on, newly registered enterprise users of SenseTime's "SenseNova" platform receive a free service package covering API calls, migration, training and more: 0 yuan, go! The "RiRiXin SenseNova" platform covers many types of model API interfaces, such as the SenseChat language models, SenseChat text-and-image multi-modal models, the Miaohua text-to-image models, speech models, vector models, etc., meeting the varied needs of enterprise users. SenseTime has long insisted on original AI, technological security and independent controllability, and its advanced, low-cost, large-scale new-generation AI infrastructure such as SenseCore guarantees computing power. SenseTime will…
- AI 430 2024-06-27 00:23:50
-
- Simulating 500 million years of evolution, the first large-scale biological model to simultaneously reason about protein sequence, structure and function
- Editor | Over three billion years of natural evolution, the forms of today's proteins were shaped by a long process of natural selection. Evolution is like a parallel experiment run on geological time scales: through random mutation and selection, it sifts proteins according to their sequence, structure and function. Here, researchers at EvolutionaryScale show that language models trained on tokens generated by evolution can serve as evolutionary simulators, generating functional proteins that differ from known protein sequences. The researchers propose ESM3, a multimodal generative language model that can reason about proteins
- AI 934 2024-06-26 20:40:11