- OmniDrive: A framework for aligning large models with 3D driving tasks
- We start with a novel 3D MLLM architecture that uses sparse queries to lift and compress visual representations into 3D, which are then fed into the LLM (a minimal illustrative sketch of this query-based compression follows this entry). Title: OmniDrive: A Holistic LLM-Agent Framework for Autonomous Driving with 3D Perception, Reasoning and Planning. Author affiliations: Beijing Institute of Technology, NVIDIA, Huazhong University of Science and Technology. Open-source address: GitHub - NVlabs/OmniDrive. The development of multimodal large language models (MLLMs) has led to growing interest in LLM-based autonomous driving, leveraging their powerful reasoning capabilities
- AI 1146 2024-05-06 15:16:35
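- The compression step described above can be pictured as a small set of learnable queries cross-attending to a large bag of visual features and emitting a short token sequence for the LLM. The sketch below is only an illustration of that general idea, not the OmniDrive architecture; all class names, dimensions, and the 6-camera feature count are assumptions.

```python
# Illustrative sketch of query-based compression: a few learnable queries cross-attend
# to many visual features and produce a compact token sequence an LLM could consume.
# Dimensions and names are hypothetical; this is not the OmniDrive implementation.
import torch
import torch.nn as nn

class SparseQueryLifter(nn.Module):
    def __init__(self, num_queries=64, feat_dim=256, llm_dim=1024):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.to_llm = nn.Linear(feat_dim, llm_dim)   # project into the LLM's hidden size

    def forward(self, visual_feats):                 # (batch, num_feats, feat_dim)
        b = visual_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        compressed, _ = self.cross_attn(q, visual_feats, visual_feats)
        return self.to_llm(compressed)               # (batch, num_queries, llm_dim)

feats = torch.randn(2, 6 * 900, 256)                 # stand-in for multi-camera features
tokens = SparseQueryLifter()(feats)
print(tokens.shape)                                  # torch.Size([2, 64, 1024])
```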
-
- MLP killed overnight! MIT, Caltech and others release the revolutionary KAN, which breaks records and discovers mathematical theorems, crushing DeepMind
- Overnight, the machine learning paradigm may be about to change! Today, the infrastructure that dominates deep learning is the multilayer perceptron (MLP), which places fixed activation functions on neurons. Is there a new route beyond it? Just today, a team from MIT, Caltech, Northeastern University and other institutions released a new neural network structure: Kolmogorov–Arnold Networks (KAN). The researchers made a simple change to the MLP: the learnable activation function is moved from the nodes (neurons) to the edges (weights) (a minimal sketch of this change follows this entry)! Paper address: https://arxiv.org/pdf/2404.19756 This change may seem unfounded at first glance
- AI 1099 2024-05-06 15:10:01
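- The structural change described above is easiest to see side by side in code. The following is a minimal, illustrative sketch (not the authors' implementation): the MLP layer applies a fixed activation on nodes after a linear map, while the KAN-style layer places a learnable univariate function on every edge. The Gaussian-bump basis and all sizes are assumptions made for brevity.

```python
# Minimal sketch contrasting an MLP layer (fixed activation on nodes) with a KAN-style
# layer (learnable univariate function on every edge). Not the official KAN code.
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """Each edge (i -> j) carries phi_ij(x) = sum_k c_ijk * B_k(x), where the B_k are
    fixed Gaussian bumps and the coefficients c are learned."""
    def __init__(self, in_dim, out_dim, num_basis=8, x_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*x_range, num_basis))
        self.width = (x_range[1] - x_range[0]) / num_basis
        # learnable coefficients: one set per (input, output) edge and basis function
        self.coef = nn.Parameter(torch.randn(in_dim, out_dim, num_basis) * 0.1)

    def forward(self, x):                                  # x: (batch, in_dim)
        # evaluate every basis function at every input value
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)  # (B, in, K)
        # sum phi_ij(x_i) over inputs i: contract the input and basis dimensions
        return torch.einsum("bik,iok->bo", basis, self.coef)  # (batch, out_dim)

# For contrast, a standard MLP layer: linear weights plus a fixed activation on nodes.
mlp_layer = nn.Sequential(nn.Linear(4, 3), nn.ReLU())
kan_layer = KANLayer(4, 3)

x = torch.randn(5, 4)
print(mlp_layer(x).shape, kan_layer(x).shape)  # both: torch.Size([5, 3])
```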
-
- Artificial Intelligence in Medical Diagnostics Market to Reach $4 Billion by 2028
- In healthcare, where accuracy and speed are critical, the integration of artificial intelligence (AI) has become a transformative force. The market for artificial intelligence in medical diagnostics was once an emerging niche, but it has quickly grown into a powerful market with forecasts reaching billions of dollars. The AI-in-medical-diagnostics market was valued at US$1.25 billion in revenue in 2023 and is expected to reach US$4.48 billion by 2028, growing at a CAGR of 29.04% over the forecast period. The growth of the AI-in-medical-diagnostics market is driven by several key factors: Growing demand for AI-based solutions: As the modern healthcare landscape continues to evolve and new diseases and conditions are discovered, the demand for artificial intelligence-based solutions
- AI 621 2024-05-06 15:01:06
-
- Tesla's Optimus humanoid robot works in the factory: skilled at assembling battery cells, self-correcting, and able to go even farther
- Tesla's humanoid robot has unlocked new skills! Yesterday, Tesla Optimus officially released a new demo video showing the latest progress of the second-generation Optimus humanoid robot. This time, Optimus has started working in the factory: it has learned to assemble battery cells at a Tesla battery factory and walks faster, farther and more steadily than before. Let's take a first look at Optimus' latest skills and training details. Optimus' end-to-end neural network is now trained to accurately assemble battery cells at Tesla factories. It runs in real time on the robot's FSD computer, relying solely on 2D cameras plus hand touch and force sensors. Optimus uses its legs to maintain balance while the network drives its entire upper body.
- AI 1105 2024-05-06 14:52:10
-
- CVPR 2024 | With the help of neural structured light, Zhejiang University achieves real-time acquisition and reconstruction of dynamic three-dimensional phenomena
- AIxiv is a column where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions, covering top laboratories from major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. Efficient and high-quality reconstruction of dynamic three-dimensional physical phenomena such as smoke is an important problem in scientific research, with broad application prospects in aerodynamic design verification, three-dimensional meteorological observation and other fields. By reconstructing a three-dimensional density sequence that changes over time, it can help scientists
- AI 919 2024-05-06 14:50:14
-
- ICLR 2024 Spotlight | NoiseDiffusion: Correcting diffusion model noise to improve interpolated image quality
- Author | Pengfei Zheng. Affiliations | USTC, HKBU TMLR Group. In recent years, the rapid development of generative AI has injected strong momentum into eye-catching fields such as text-to-image generation and video generation. The core of these techniques lies in diffusion models. A diffusion model first gradually turns an image into Gaussian noise through a forward process that keeps adding noise, then gradually denoises that Gaussian noise through a reverse process to recover a clear image and obtain realistic samples (a minimal sketch of the forward process and noise interpolation follows this entry). Interpolating between the noise latents of generated images via the diffusion model's ODE formulation has great application potential in video generation and some advertising creatives. However, we noticed that when this method is applied to natural images, the interpolated results are often unsatisfactory.
- AI 1165 2024-05-06 14:01:24
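- To make the forward process and the noise-interpolation setup above concrete, here is a brief sketch. It is an illustration under common assumptions (a linear noise schedule, spherical interpolation between latents), not the NoiseDiffusion method or the paper's code.

```python
# Illustrative sketch of the forward diffusion process and of interpolating between two
# noisy latents. Schedule values and the slerp choice are assumptions, not the paper's.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)           # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta_t)

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps

def slerp(z0, z1, lam):
    """Spherical interpolation between two Gaussian latents, a common choice when
    interpolating noise for diffusion models."""
    theta = torch.arccos((z0 * z1).sum() / (z0.norm() * z1.norm()))
    return (torch.sin((1 - lam) * theta) * z0 + torch.sin(lam * theta) * z1) / torch.sin(theta)

x0, x1 = torch.randn(3, 64, 64), torch.randn(3, 64, 64)   # stand-ins for two images
z_mid = slerp(forward_noise(x0, t=800), forward_noise(x1, t=800), lam=0.5)
print(z_mid.shape)   # the reverse (denoising) process would decode this interpolated latent
```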
-
- AI learns to hide its thinking and reason in secret! It solves complex tasks without relying on human experience, but becomes even more of a black box
- When AI does math problems, is the real thinking actually happening secretly, as "mental arithmetic"? A new study from a New York University team found that even if the AI is not allowed to write out its steps and they are replaced with meaningless "...", its performance on some complex tasks can improve greatly (the prompt formats compared are sketched after this entry)! One author, Jacob Pfau, said: as long as you spend compute generating extra tokens, you gain an advantage; it doesn't matter which tokens you choose. For example, ask Llama 34M a simple question: how many of the first 6 digits of the natural constant e are greater than 5? Answering directly, the AI just flails: asked about only the first 6 digits, it actually counts 7 of them. Let the AI write out the steps to verify each digit, and it gets the correct answer. Let the AI hide the steps and replace them with a large number of "...
- AI 1001 2024-05-06 12:00:30
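- The comparison described above boils down to asking the same question three ways: direct answer, written-out reasoning steps, and meaningless filler tokens in place of those steps. The sketch below shows stand-in prompt formats for the three conditions; the wording, filler count, and token budget are illustrative assumptions, not the NYU study's actual prompts.

```python
# Sketch of the three prompting conditions: direct answer, chain of thought, and
# meaningless "." filler tokens. The texts are illustrative stand-ins only.
question = "How many of the first 6 digits of e (2.71828) are greater than 5?"

direct_prompt = f"{question}\nAnswer:"

chain_of_thought_prompt = (
    f"{question}\n"
    "Digits: 2, 7, 1, 8, 2, 8. Greater than 5: 7, 8, 8.\n"   # 3 digits exceed 5
    "Answer:"
)

# Same idea as the reasoning steps, but every intermediate token is a filler.
filler_prompt = f"{question}\n" + ". " * 20 + "\nAnswer:"

for name, prompt in [("direct", direct_prompt),
                     ("chain-of-thought", chain_of_thought_prompt),
                     ("filler", filler_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")
```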
-
- Stanford's Li Feifei starts her first business: taking two years of academic leave, aiming at 'spatial intelligence'
- "AI Godmother" Li Feifei started a business. Unexpectedly, in the era of large models, the well-known "AI Godmother" Li Feifei would also "start a business" and completed a seed round of financing. According to an exclusive report from Reuters, famous computer scientist Li Feifei is creating a start-up company. The company leverages human-like visual data processing to enable artificial intelligence to perform advanced reasoning. People familiar with the matter revealed that Li Feifei recently raised a seed round of financing for the company, with investors including Silicon Valley venture capital firm Andreessen Horowitz and Canadian company Radical Ventures, which she joined last year. However, spokespersons for both Andreessen Horowitz and Radical Ventures confirmed this.
- AI 1270 2024-05-05 13:04:06
-
- Walking the 'dog' on a yoga ball! Eureka, selected as one of NVIDIA's top ten projects, makes a new breakthrough
- The robot dog walks steadily on a yoga ball, with remarkably good balance: it can handle various scenes, whether a flat sidewalk or a challenging lawn. It can even withstand being kicked by researchers: even while standing on the yoga ball, the robot dog does not tip over. Even if the ball is deflated, the robot dog can keep its balance. The demonstrations above are all at 1x speed and have not been sped up. Paper address: https://eureka-research.github.io/dr-eureka/assets/dreureka-paper.pdf Project homepage: https://github.com/eureka-research/DrEureka Paper title: DrE
- AI 761 2024-05-05 13:01:01
-
- Small models saturate during training and end up performing poorly. Is the root cause the Softmax?
- Small language models emerged to offset the expensive training and inference of large language models, but they also exhibit a decline in performance after training reaches a certain stage (the saturation phenomenon). So what causes this phenomenon? Can it be overcome and exploited to improve the performance of small language models? The latest progress in language modeling consists of pre-training highly parameterized neural networks on extremely large-scale web text corpora. In practice, training and inference with such models can be costly, prompting the use of smaller alternative models. However, it has been observed that smaller models may suffer from saturation, a phenomenon characterized by declining capability and plateauing at some advanced stage of training. A recent paper found that this saturation phenomenon in smaller models can be
- AI 1106 2024-05-04 13:10:01
-
- Finally, someone investigated the overfitting of small models: two-thirds of them showed data contamination, and Microsoft Phi-3 and Mixtral 8x22B were named
- Do two-thirds of the most popular models currently have overfitting problems? A study that just came out surprised many researchers in the field. Improving the reasoning capabilities of large language models is one of the most important directions of current research. In this type of task, many recently released small models seem to perform well and handle such tasks nicely, for example Microsoft's Phi-3, Mixtral 8x22B and other models. The researchers point out a key problem in current large-model research: many studies fail to accurately benchmark the capabilities of existing LLMs, which suggests we need to spend more time evaluating and testing current LLM capability levels. This is because most current research uses GSM8k, MATH, M
- AI 672 2024-05-04 13:05:13
-
- A relay spanning more than 300 years: Inspired by Terence Tao, mathematicians decide to use AI to formalize the proof of Fermat's Last Theorem
- Inspired by Terence Tao, more and more mathematicians have begun trying to use artificial intelligence for mathematical exploration. This time, their target is Fermat's Last Theorem, one of the world's most famously difficult mathematical problems. Although the theorem was finally proved in the 1990s, its proof has never been formally verified by computer, and mathematicians hope that, with the powerful computing power and intelligent algorithms of artificial intelligence, they can formalize it (a sketch of what the formal statement looks like follows this entry). Fermat's Last Theorem (FLT) was proposed by the 17th-century French mathematician Pierre de Fermat, and there is a legendary story behind it. It is said that around 1637, while reading the Latin translation of Diophantus' Arithmetica, Fermat wrote next to Problem 8 of Book II
- AI 838 2024-05-03 13:04:01
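- Since the goal is a machine-checked proof, it helps to see what a formal statement of the theorem looks like. Below is a minimal sketch in Lean 4 (assuming Mathlib for the ℕ notation); it illustrates the formalization target and is not code from the actual project. The `sorry` placeholder marks the enormous gap the effort aims to fill.

```lean
import Mathlib

-- Sketch of Fermat's Last Theorem as a formal statement: for n > 2 there are no
-- positive natural numbers a, b, c with a^n + b^n = c^n. Illustrative only.
theorem fermat_last_theorem_sketch :
    ∀ n : ℕ, 2 < n → ∀ a b c : ℕ, 0 < a → 0 < b → 0 < c → a ^ n + b ^ n ≠ c ^ n := by
  sorry  -- the formalization effort aims to replace this placeholder with a full proof
```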
-
- Will Transformer become Kansformer? After decades, MLP finally meets its challenger, KAN
- MLPs (multilayer perceptrons) have been used for decades. Is there really no alternative? Multilayer perceptrons, also known as fully connected feedforward neural networks, are the fundamental building blocks of today's deep learning models. Their importance cannot be overstated, as they are the default method for approximating nonlinear functions in machine learning. But is the MLP the best nonlinear regressor we can build? Although MLPs are widely used, they have significant drawbacks. In Transformer models, for example, MLPs consume almost all of the non-embedding parameters and, without post-hoc analysis tools, are generally less interpretable than the attention layers. So, is there an alternative to the MLP?
- AI 1063 2024-05-03 13:01:04
-
- The hottest generative AI hardware has sold more than 100,000 units, but a teardown shows it's essentially just an Android app?
- "RabbitR1, it is essentially a Launcher program on the Android system. After being cracked, it can run on the phone." Through cracking, Rahman managed to start and run the R1 application on the Pixel6a phone. On Tuesday, American journalist Mishaal Rahman exposed the details of the well-known generative AI hardware RabbitR1, which immediately attracted the attention of the technology circle. A few months ago, two startups, Humane and Rabbit, continued to launch their artificial intelligence devices - AiPin and RabbitR1. Initially, some believed these devices would usher in a new era of wearable artificial intelligence. However, as the months passed, controversy grew over the two devices. R
- AI 890 2024-05-02 16:01:19
-
- Yu Chengdong steps down as CEO of Huawei Terminal BG, He Gang will take over
- According to multiple media reports, Huawei internally issued a personnel adjustment document on the afternoon of April 30, announcing that Yu Chengdong would step down as CEO of Huawei Terminal BG while remaining its chairman. He Gang, formerly Chief Operating Officer of Huawei Terminal BG, will take over as CEO of Huawei Terminal BG. According to the reports, apart from the personnel changes above, the document contains no further information; there is no further explanation of the background of this major personnel change or of Yu Chengdong's new business focus after stepping down as CEO of Terminal BG. Some sources said this is a routine adjustment of the business structure that will give Yu Chengdong more energy to create high-quality products for consumers. Yu Chengdong was born in 1969; he earned a bachelor's degree from the Automatic Control Department of Northwestern Polytechnical University and a master's degree from Tsinghua University.
- AI 814 2024-05-02 16:01:14