


Add special effects to videos in one sentence; the most complete insect brain map to date
Directory:
- Composer: Creative and Controllable Image Synthesis with Composable Conditions
- Structure and Content-Guided Video Synthesis with Diffusion Models
- The connectome of an insect brain
- Uncertainty-driven dynamics for active learning of interatomic potentials
- Combinatorial synthesis for AI-driven materials discovery
- Masked Images Are Counterfactual Samples for Robust Fine-tuning
- One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale
- ArXiv Weekly Radiostation: selected papers in NLP, CV, ML, and more (with audio)
Paper 1: Composer: Creative and Controllable Image Synthesis with Composable Conditions
- Author: Lianghua Huang et al.
- Paper address: https://arxiv.org/pdf/2302.09778v2.pdf
Abstract: In the field of AI painting, many researchers are working to improve the controllability of AI painting models, that is, to make the images a model generates better match human intent. Some time ago, a model called ControlNet pushed this controllability to a new peak. Around the same time, researchers from Alibaba and Ant Group achieved results in the same field; this article is a detailed introduction to that work.
Recommended: New ideas for AI painting: a domestically developed open-source model with 5 billion parameters achieves a leap forward in controllability and quality through composable conditions.
Paper 2: Structure and Content-Guided Video Synthesis with Diffusion Models
- Author: Patrick Esser et al.
- Paper address: https://arxiv.org/pdf/2302.03011.pdf
Abstract: Many people have already come to appreciate the power of generative AI, especially after the AIGC boom of 2022. Text-to-image generation, represented by Stable Diffusion, became popular worldwide, and countless users poured in to express their artistic imagination with the help of AI...
Compared with image editing, video editing is a more challenging topic: it requires synthesizing new motion rather than merely modifying visual appearance, all while maintaining temporal consistency. Many companies are exploring this track. Some time ago, Google released Dreamix, which applies a text-conditioned video diffusion model (VDM) to video editing.
Recently, Runway, a company that participated in the creation of Stable Diffusion, launched a new AI model, "Gen-1", which can convert existing videos into new ones in any style specified by a text prompt or reference image. For example, turning "people on the street" into "clay puppets" takes just a single line of prompt.
Recommended: Add special effects with just one sentence or one picture; the company behind Stable Diffusion has new AIGC tricks.
Paper 3: The connectome of an insect brain
- Author: Michael Winding et al.
- Paper address: https://www.science.org/doi/10.1126/science.add9330
Abstract: Researchers have completed the most advanced atlas of an insect brain to date, a landmark achievement in neuroscience that brings scientists closer to a true understanding of the mechanisms of thought. An international team led by Johns Hopkins University and the University of Cambridge produced an astonishingly detailed map of every neural connection in the brain of a fruit fly larva, an organism regarded as a prototypical scientific model with relevance to the human brain. The research may support future brain studies and inspire new machine learning architectures.
Recommended: The most complete insect brain map to date, which may inspire new machine learning architectures.
Paper 4: Uncertainty-driven dynamics for active learning of interatomic potentials
- Author: Maksim Kulichenko et al.
- Paper address: https://www.nature.com/articles/s43588-023-00406-5
Abstract: Machine learning (ML) models, if trained on data sets from high-fidelity quantum simulations, can produce accurate and efficient interatomic potentials. Active learning (AL) is a powerful tool for iteratively generating diverse data sets: the ML model provides both a prediction and an uncertainty estimate for each new atomic configuration, and if the uncertainty estimate exceeds a certain threshold, the configuration is added to the data set.
Recently, researchers from Los Alamos National Laboratory developed a strategy, uncertainty-driven dynamics for active learning (UDD-AL), to more quickly discover configurations that meaningfully augment the training data set. UDD-AL modifies the potential energy surface used in molecular dynamics simulations to favor regions of configuration space where model uncertainty is large. The performance of UDD-AL is demonstrated on two AL tasks, including a comparison of the UDD-AL and MD-AL methods on a glycine test case; a minimal sketch of the AL loop is given below.
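Conceptually, the loop described above is simple. The following is a minimal, illustrative Python sketch of threshold-based active learning plus the UDD-style uncertainty bias; `md_sampler`, `labeler`, the threshold value, and the gain `k` are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

UNCERTAINTY_THRESHOLD = 0.05  # hypothetical cutoff, in the model's energy units

def ensemble_predict(models, config):
    """Predict energy with an ensemble; use the spread as the uncertainty estimate."""
    preds = np.array([m.predict(config) for m in models])
    return preds.mean(), preds.std()

def biased_energy(models, config, k=1.0):
    """UDD-AL idea: lower the effective potential where the ensemble disagrees,
    so molecular dynamics drifts toward high-uncertainty regions (k is a gain)."""
    energy, sigma = ensemble_predict(models, config)
    return energy - k * sigma

def active_learning_round(models, md_sampler, labeler, dataset, n_steps=1000):
    """One AL round: run (biased) dynamics and harvest uncertain configurations."""
    for _ in range(n_steps):
        config = md_sampler.step()  # propose the next atomic configuration
        _, sigma = ensemble_predict(models, config)
        if sigma > UNCERTAINTY_THRESHOLD:
            # Label with a high-fidelity quantum simulation and grow the data set.
            dataset.add(config, labeler.compute(config))
    return dataset
```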
Recommended: Nature sub-journal | Uncertainty-driven dynamics for automatic sampling in active learning.
Paper 5: Combinatorial synthesis for AI-driven materials discovery
- Author: John M. Gregoire et al.
- Paper address: https://www.nature.com/articles/s44160-023-00251-4
Abstract: Synthesis is the cornerstone of solid-state materials experimentation, and any synthesis technique necessarily involves changing some synthesis parameters, the most common being composition and annealing temperature. Combinatorial synthesis generally refers to automated/parallelized materials synthesis to create collections of materials with systematic variations of one or more synthesis parameters. Artificial intelligence-controlled experimental workflows place new requirements on combinatorial synthesis.
Here, Caltech researchers provide an overview of combinatorial synthesis, envisioning a future of accelerated materials science driven by the co-development of combinatorial synthesis and AI technologies, and establish ten metrics for evaluating trade-offs among different techniques, covering speed, scalability, scope, and quality. These metrics help assess a technique's suitability for a given workflow and illustrate how advances in combinatorial synthesis can usher in a new era of accelerated materials science.
Recommended: Nature Synthesis review: combinatorial synthesis for AI-driven materials discovery.
Paper 6: Masked Images Are Counterfactual Samples for Robust Fine-tuning
- Author: Yao Xiao et al.
- Paper address: https://arxiv.org/abs/2303.03052
Abstract: The Human-Cyber-Physical Intelligence Integration Laboratory (HCP Lab) at Sun Yat-sen University has produced fruitful results in AIGC and multimodal large models, with more than ten papers accepted at the recent AAAI 2023 and CVPR 2023, placing it in the first echelon of research institutions worldwide. One of these works, "Masked Images Are Counterfactual Samples for Robust Fine-tuning", uses a causal model to significantly improve the robustness and generalization of multimodal large models during fine-tuning; a rough sketch of the masking idea appears below.
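As a rough illustration of what a "masked counterfactual sample" could look like in practice, here is a minimal NumPy sketch that replaces a random subset of patches in one image with patches from another image. The paper itself selects patches more carefully (e.g., guided by image semantics), so treat this purely as an assumption-laden toy; the function name and parameters are invented for illustration.

```python
import numpy as np

def masked_counterfactual(image, donor, mask_ratio=0.5, patch=16, rng=None):
    """Build a counterfactual sample: replace a random subset of patches in
    `image` with the corresponding patches from `donor`. Both are H x W x C
    arrays with H and W divisible by `patch`. Illustrative only."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape[0] // patch, image.shape[1] // patch
    n_masked = int(mask_ratio * h * w)
    # Pick which patch indices to mask, then copy donor pixels into them.
    for i in rng.choice(h * w, size=n_masked, replace=False):
        r, c = divmod(i, w)
        rs, cs = r * patch, c * patch
        out[rs:rs + patch, cs:cs + patch] = donor[rs:rs + patch, cs:cs + patch]
    return out
```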
Recommended: A new breakthrough from Sun Yat-sen University's HCP Lab: using the causal paradigm to upgrade multimodal large models.
Paper 7: One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale
- Author: Fan Bao et al.
- Paper address: https://ml.cs.tsinghua.edu.cn/diffusion/unidiffuser.pdf
Abstract: This paper proposes UniDiffuser, a probabilistic modeling framework designed for multi-modality. Using U-ViT, the transformer-based network architecture proposed by the same team, the authors trained a one-billion-parameter model on the open-source large-scale image-text data set LAION-5B, enabling a single underlying model to complete a variety of generation tasks with high quality. Simply put, in addition to one-way text-to-image generation, it also supports image-to-text generation, joint image-text generation, unconditional image and text generation, image-text rewriting, and more, greatly improving the efficiency of producing image and text content and broadening the application potential of generative models; a sketch of the core idea appears below.
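The paper's central trick is to give each modality its own diffusion timestep, so a single noise-prediction network can represent conditional, marginal, and joint distributions. The sketch below illustrates this with a toy stand-in for the U-ViT network; all module names and dimensions are invented for illustration, not the released model.

```python
import torch
import torch.nn as nn

class TinyJointDenoiser(nn.Module):
    """Toy stand-in for a U-ViT-style joint noise-prediction network."""
    def __init__(self, img_dim=64, txt_dim=32):
        super().__init__()
        self.img_dim = img_dim
        self.net = nn.Linear(img_dim + txt_dim + 2, img_dim + txt_dim)

    def forward(self, z_img, z_txt, t_img, t_txt):
        # Concatenate both latents with both timesteps; predict noise for both.
        t = torch.tensor([t_img, t_txt], dtype=z_img.dtype).expand(z_img.shape[0], 2)
        out = self.net(torch.cat([z_img, z_txt, t], dim=-1))
        return out[:, :self.img_dim], out[:, self.img_dim:]

model = TinyJointDenoiser()
z_img = torch.randn(4, 64)  # noisy image latents
z_txt = torch.randn(4, 32)  # text embeddings (clean if t_txt == 0)

# Text-to-image: text timestep 0 (treated as a clean condition), image timestep > 0.
eps_img, _ = model(z_img, z_txt, t_img=0.8, t_txt=0.0)
# Joint image-text generation: both modalities share a nonzero timestep.
eps_img, eps_txt = model(z_img, z_txt, t_img=0.8, t_txt=0.8)
```

Other timestep patterns recover the remaining tasks in the same way: image timestep 0 gives image-to-text generation, and running one modality alone gives unconditional generation.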
Recommended: Zhu Jun's team at Tsinghua open-sources the first Transformer-based large multimodal diffusion model: text-image interconversion and rewriting, all in one model.
ArXiv Weekly Radiostation
Heart of Machine, in cooperation with the ArXiv Weekly Radiostation initiated by Chu Hang, Luo Ruotian, and Mei Hongyuan, has selected more important papers from this week beyond the 7 Papers above, including 10 selected papers in each of the NLP, CV, and ML fields, with audio introductions to the paper abstracts.
This week’s 10 selected NLP papers are:
1. GLEN: General-Purpose Event Detection for Thousands of Types. (from Martha Palmer, Jiawei Han)
2. An Overview on Language Models: Recent Developments and Outlook. (from C.-C. Jay Kuo)
3. Learning Cross-lingual Visual Speech Representations. (from Maja Pantic)
4. Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential. (from Ge Wang)
5. A Picture is Worth a Thousand Words: Language Models Plan from Pixels. (from Honglak Lee)
6. Do Transformers Parse while Predicting the Masked Word? (from Sanjeev Arora)
7. The Learnability of In-Context Learning. (from Amnon Shashua)
8. Is In-hospital Meta-information Useful for Abstractive Discharge Summary Generation? (from Yuji Matsumoto)
9. ChatGPT Participates in a Computer Science Exam. (from Ulrike von Luxburg)
10. Team SheffieldVeraAI at SemEval-2023 Task 3: Mono and multilingual approaches for news genre, topic and persuasion technique classification. (from Kalina Bontcheva)
This week’s 10 selected CV papers are:
1. From Local Binary Patterns to Pixel Difference Networks for Efficient Visual Representation Learning. (from Matti Pietikäinen, Li Liu)
2. Category-Level Multi-Part Multi-Joint 3D Shape Assembly. (from Wojciech Matusik, Leonidas Guibas)
3. PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision. (from Leonidas Guibas)
4. Exploring Recurrent Long-term Temporal Fusion for Multi-view 3D Perception. (from Xiangyu Zhang)
5. Grab What You Need: Rethinking Complex Table Structure Recognition with Flexible Components Deliberation. (from Bing Liu)
6. Unified Visual Relationship Detection with Vision and Language Models. (from Ming-Hsuan Yang)
7. Contrastive Semi-supervised Learning for Underwater Image Restoration via Reliable Bank. (from Huan Liu)
8. InstMove: Instance Motion for Object-centric Video Segmentation. (from Xiang Bai, Alan Yuille)
9. ViTO: Vision Transformer-Operator. (from George Em Karniadakis)
10. A Simple Framework for Open-Vocabulary Segmentation and Detection. (from Jianfeng Gao, Lei Zhang)
This week’s 10 selected ML papers are:
1. Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap. (from Bernhard Schölkopf)
2. AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks. (from Jure Leskovec)
3. Relational Multi-Task Learning: Modeling Relations between Data and Tasks. (from Jure Leskovec)
4. Interpretable Outlier Summarization. (from Samuel Madden)
5. Visual Prompt Based Personalized Federated Learning. (from Dacheng Tao)
6. Interpretable Joint Event-Particle Reconstruction for Neutrino Physics at NOvA with Sparse CNNs and Transformers. (from Pierre Baldi)
7. FedLP: Layer-wise Pruning Mechanism for Communication-Computation Efficient Federated Learning. (from Fei Wang, Khaled B. Letaief)
8. Traffic4cast at NeurIPS 2022 -- Predict Dynamics along Graph Edges from Sparse Node Data: Whole City Traffic and ETA from Stationary Vehicle Detectors. (from Sepp Hochreiter)
9. Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning. (from Thomas Hofmann)
10. Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning. (from Dimitris N. Metaxas)
