-
- Apple teaches large models to be lazy: spit out the first token faster while maintaining accuracy
- Being lazy can make you work better. Llama 3.1 has just been released; have you tried it yet? Even on a top-of-the-line personal computer, running the smallest 8B version can still cause noticeable delays. To improve model inference efficiency, researchers have come up with a variety of methods, but many of them force the model to sacrifice some accuracy. Recently, a research team from Apple and Meta AI proposed a new method that speeds up the prefilling stage of Llama 2 inference by more than 2x without a significant drop in accuracy, which may offer some inspiration for accelerating Llama 3.1. They call this approach LazyLLM, short for Lazy Large Language Model. Paper title: LazyL
- AI 668 2024-08-05 20:41:02
-
- Published in a Nature family journal: a Transformer-based inverse protein sequence design method that is 10 times faster
- Editor | Radish Skin. Protein design and engineering are advancing at an unprecedented pace thanks to progress in deep learning. However, current models cannot naturally account for non-protein entities during the design process. Here, researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland propose a deep learning method, based entirely on a geometric transformer over atomic coordinates and element names, that predicts protein sequences from backbone scaffolds subject to constraints imposed by different molecular environments. Using this method, the researchers produced highly thermostable, catalytically active enzymes at a high success rate, which is expected to increase the versatility of protein design pipelines for achieving desired functions. The study, titled "Context-aware geometric de
- AI 1073 2024-08-05 20:33:31
-
- A Transformer author returns to Google as Character.AI's founding team is 'acqui-hired': they want the people, not the company
- Will AI startups all end up inside big companies? When we woke up, the "battle royale" of generative AI had seen its circle shrink again. On Friday, the startup Character.AI announced that it has signed an agreement with Google granting Google a non-exclusive license to Character.AI's large language model (LLM) technology. Google also announced it is rehiring Noam Shazeer and Daniel De Freitas. Noam Shazeer is the founder and CEO of Character.AI and one of the authors of the Transformer paper; he previously served as a chief software engineer at Google. Daniel De Freitas is the president of Character.AI and previously served as a senior engineer at Google.
- AI 946 2024-08-05 20:17:10
-
- This high-definition video isn't real: 3D scenes rendered from a few photos are nearly impossible to tell from reality
- Note that the animation above is entirely a 3D scene rendered from multiple photos; it is hard for humans to spot its flaws. So let's look at how this is achieved. Meshes and points are the most common representations of 3D scenes because they are explicit and well suited to fast GPU/CUDA-based rasterization. In contrast, state-of-the-art Neural Radiance Field (NeRF) methods are built on continuous scene representations, typically synthesizing novel views of a captured scene with multi-layer perceptrons (MLPs) optimized through volumetric ray rendering. While the continuity of these methods helps optimization, the random sampling required for rendering is expensive and introduces noise. Researchers from Université Côte d'Azur have introduced a new method that combines the two approaches
- AI 711 2024-08-05 20:15:51
-
- Why is the late-interaction model the standard for next-generation RAG?
- AIxiv is the column in which this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission emails: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. Zhang Yingfeng: co-founder of InfiniFlow, with many years of experience building search, AI, and infrastructure systems, currently working on next-generation core RAG products. In building a RAG system, a good reranker model is an indispensable component.
- AI 1263 2024-08-05 20:15:22
-
- ECCV 2024 | Harvard team develops FairDomain to achieve fairness in cross-domain medical image segmentation and classification
- Editor | ScienceAI. Author | Yu Tian's team. In the field of artificial intelligence (AI), and especially in medical AI, addressing fairness is crucial to ensuring equitable medical outcomes. Recent efforts to enhance fairness have introduced new methods and datasets. However, fairness has been little explored in the context of domain shift, even though clinics often rely on different imaging technologies (for example, different retinal imaging modalities) for patient diagnosis. This paper proposes FairDomain, the first systematic study of algorithmic fairness under domain shift. We test state-of-the-art domain adaptation (DA) and domain generalization (DG) algorithms on medical image segmentation and classification tasks, aiming to understand how bias transfers between different domains.
- AI 1344 2024-08-05 20:04:36
-
- From now on, more than 100 million developers on GitHub can directly access the world's top large models to build AI applications
- GitHub's new "GitHub Models" feature is expected to accelerate the arrival of the era of AI engineers. What? The familiar code-hosting platform GitHub has evolved again: it now also provides a playground for large AI models. Every popular large model you can name, including Microsoft's Phi-3, OpenAI's GPT-4o, Meta's Llama 3.1, Cohere's Command R+, and Mistral AI's Mistral Large, can be tried out in an interactive sandbox. In the coming months, GitHub will also add more language, vision, and other types of models. In other words, the model in this picture
- AI 1195 2024-08-05 19:36:38
-
- AI in Use | Using large models to write 'glowing' copy; the articles are full of national beauty
- The Power of Machines. Contributor: Jia Siluan. The wave of artificial intelligence represented by large models and AIGC has quietly been changing the way we live and work, yet most people still don't know how to use these tools. We have therefore launched the "AI in Use" column to show, through intuitive, interesting, and concise use cases, how AI can be applied, and to stimulate everyone's thinking. We also welcome readers to submit innovative, hands-on use cases. Submission email: content@jiqizhixin.com. Two days ago, I saw a short article on AI applications published by Jiqizhixin; using a large model to write "crazy" copy was rather interesting. The general process is to first have the large model write a few funny titles and passages of copy that "make you laugh until you shake", then have the large model summarize the style of that copy
- AI 710 2024-08-05 19:26:47
-
- After Sora, OpenAI's Lilian Weng has personally written a post teaching you how to design a video generation diffusion model from scratch
- The powerful image synthesis capabilities of diffusion models have been well demonstrated over the past few years. The research community is now tackling a harder task: video generation. Recently, Lilian Weng, head of OpenAI's Safety Systems team, published a blog post on her website about diffusion models for video generation. The post has been compiled and organized here without changing its original meaning. The following is the text of the blog: The video generation task is itself a superset of image synthesis, because an image is a single frame of video. Video synthesis is much more difficult, for the following reasons: 1. Video synthesis also requires temporal consistency across frames, which naturally requires encoding more world knowledge into the model. 2. Compared to text or images
- AI 1059 2024-08-05 19:20:02
-
- We tested 7 'Sora-level' video generation tools. Which one has what it takes to claim the 'Iron Throne'?
- Machine Power report. Editor: Yang Wen. Who will become king of the AI video circle? In the American TV series "Game of Thrones" there is an "Iron Throne". Legend has it that it was forged when the great dragon "the Black Dread" melted down thousands of swords surrendered by enemies, and it symbolizes supreme authority. To sit on this iron chair, the great houses fought each other endlessly. Since the emergence of Sora, a vigorous "Game of Thrones" has been under way in the AI video circle. The main players include Runway Gen-3 and Luma from across the ocean, as well as domestic contenders such as Kuaishou's Kling, ByteDance's Jimeng, Zhipu's Qingying, Vidu, PixVerse V2, and others. Today we evaluate them to see who is qualified to sit on the "Iron Throne" of the AI video circle. -1- Text-to-video
- AI 1052 2024-08-05 19:19:51
-
- AI helps a human painter win first place in an art competition. What's the secret behind it?
- Two years ago, a work titled "Space Opera Theater" won first place in the art competition at the Colorado State Fair. The painting is majestic, with striking contrasts of light and shadow, quite reminiscent of the French Symbolist painter Gustave Moreau. Yet it was not drawn by a human: a contestant with no painting background created it with an AI drawing tool. Turn the clock back to 2018. That year, an AI painting titled "Portrait of Edmond de Belamy" sold for more than $400,000 at Christie's auction house in New York. It was the first artificial intelligence work ever auctioned, marking the beginning of AI artworks being recognized by the market. Nowadays, AI painting has become commonplace, and AI players at home and abroad are competing fiercely on this track.
- AI 730 2024-08-05 18:29:12
-
- A small trick with a big effect: 'just read the prompt twice' lets recurrent language models surpass Transformer++
- In today's AI field, the mainstream architecture for large language models is the Transformer. However, with the advent of architectures such as RWKV and Mamba, a clear trend has emerged: recurrent large language models that rival the Transformer in language modeling perplexity are quickly drawing attention. What is exciting is that these architectures use a constant amount of memory during inference. However, because of that limited memory, recurrent language models (LMs) cannot remember and use all the information in long contexts, which leads to poor in-context learning (ICL) quality. A key challenge in building efficient large language models is therefore choosing which information to store and which to discard.
- AI 660 2024-08-05 17:09:49
-
- It only takes a few demonstrations to align a large model: DITTO, proposed by Yang Diyi's team, is remarkably efficient
- Human teaching methods also work well on large models. In raising children, people through the ages have emphasized one important method: leading by example, that is, letting yourself be a model for children to imitate and learn from rather than simply telling them what to do. When training a large language model (LLM), we may be able to use the same approach: demonstrate to the model. Recently, Yang Diyi's team at Stanford University proposed a new framework, DITTO, that can align an LLM to a specific setting through a small number of demonstrations (user-provided examples of desired behavior). These examples can be drawn from the user's existing interaction logs or obtained by directly editing the LLM's outputs, letting the model efficiently understand and align with user preferences across different users and tasks
- AI 898 2024-08-05 16:10:32
-
- Leaving their old employer en masse, the Stable Diffusion team starts a company: their first release beats MJ v6 and SD3, and it is open source
- Another powerful player has entered the field of AI image and video generation. Remember Robin Rombach, the research scientist who resigned from the AI startup Stability AI at the end of March this year? As one of the two main authors of the text-to-image model Stable Diffusion, he joined Stability AI in 2022. Now, nearly five months after leaving Stability AI, Robin Rombach has announced his new venture on Twitter: he has founded "Black Forest Labs" to develop SOTA high-quality generative deep learning models for images and video and make them available to as many people as possible. Team member Youjie
- AI 1052 2024-08-05 16:06:52
-
- Forum Preview | 'Inspiring Cultural Creativity, Stimulating Unlimited New Productivity': the 'AI + Cultural Creativity' Development Forum
- Forum theme: Inspiring cultural creativity with intelligence, stimulating unlimited new productivity. Forum time: July 6, 9:30-11:40. Forum location: Conference Room 515, Shanghai World Expo Center. In recent years, with the rapid development of artificial intelligence technology, industries of all kinds have burst with new momentum with the help of new technologies. In cultivating and developing the new productive forces of "AI + cultural creativity", Shanghai is actively implementing the plan for building a digital China, providing top-level design and strategic layout for the new tracks of digital cultural creativity and the metaverse, and creating a "nuclear explosion point" that offers new opportunities for the innovative development of the cultural and creative industry. To promote Chinese culture, advance the innovative development of the cultural and creative industry, and build a communication platform for the cultural and creative fields at home and abroad, this "AI + Cultural Creativity" Development Forum came into being. The forum invites global experts, scholars, cultural and creative industry elites, and industry leaders to gather together
- AI 504 2024-08-05 15:58:42