Table of Contents
Trend 1: Generative AI requires explainable AI
Trend 2: The FastSaaS race begins
Trend 3: Reliance on supercomputers
Trend 4: Beyond 3nm chips
Trend 5: Integration of quantum and traditional computing

TOP5 Artificial Intelligence Development Trends in 2023

Apr 11, 2023, 07:28 PM

2022 saw many groundbreaking advances in the field of AI/ML. Big tech companies like Google, Meta, and Microsoft made major strides on innovations ranging from quantum computing to generative AI.

For example, some of the biggest breakthroughs include Meta’s HyperTree Proof Search (HTPS) for solving International Mathematical Olympiad problems; DeepMind’s AlphaFold and Meta AI’s ESMFold for protein structure prediction; and Google’s DeepNull, which models the relationship between phenotypes and covariate effects to improve genome-wide association studies (GWAS).

Next, let’s look at some predictions for 2023.

ChatGPT took the Internet by storm with its excellent conversational capabilities. It is built on OpenAI’s GPT-3, which has 175 billion parameters and relies on sheer model scale. Although there are other LLMs with two, three, or even ten times the parameters of GPT-3, some smaller models from DeepMind and Meta (also known as small language models, or SLMs) outperform GPT-3 on logical reasoning and prediction across multiple tasks.

Alongside this push toward smaller models, an even larger model is anticipated: GPT-4 is rumored to have approximately 100 trillion parameters. Since the largest model today is Google’s Switch Transformer at 1.6 trillion parameters, that jump would be enormous.
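To put those parameter counts in perspective, a back-of-the-envelope sketch of the memory needed just to store the weights (assuming 2 bytes per parameter, i.e. fp16 — an illustrative assumption, not a statement about any real deployment):

```python
# Rough memory footprint of model weights at different parameter counts,
# assuming 2 bytes per parameter (fp16). Illustrative arithmetic only.

def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Return approximate weight storage in gigabytes."""
    return n_params * bytes_per_param / 1e9

gpt3 = weight_memory_gb(175e9)      # GPT-3: 175B parameters
switch = weight_memory_gb(1.6e12)   # Switch Transformer: 1.6T parameters
rumored = weight_memory_gb(100e12)  # rumored 100T-parameter model

print(f"GPT-3:  {gpt3:,.0f} GB")    # → GPT-3:  350 GB
print(f"Switch: {switch:,.0f} GB")  # → Switch: 3,200 GB
print(f"100T:   {rumored:,.0f} GB") # → 100T:   200,000 GB
```

Even at fp16, a 100-trillion-parameter model would need roughly 200 TB for weights alone, which is why such a jump would be so dramatic.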

However, to achieve lower latency and better predictability, the next few years could see existing models being fine-tuned for specific purposes. Recently, OpenAI fine-tuned GPT-3 with its DaVinci update.

Trend 1: Generative AI requires explainable AI

Text-to-image generation was the chart-topping trend of 2022. Models like DALL-E, Stable Diffusion, and Midjourney top the list among enthusiasts who want to experiment with AI-generated art. The conversation quickly moved from text-to-image to text-to-video to text-to-anything, and multiple models were created that can also generate 3D models.

As language models scale up and diffusion models improve, the text-to-anything trend is expected to climb even higher. Publicly available datasets make generative AI models more scalable.
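The diffusion models behind these systems share one core idea: data is gradually mixed with Gaussian noise, and the model learns to reverse the process. A toy, stdlib-only sketch of that forward (noising) step, purely for intuition:

```python
# Toy sketch of the forward (noising) process at the heart of diffusion
# models like Stable Diffusion. alpha_bar near 1.0 keeps the signal;
# alpha_bar near 0.0 leaves pure noise. Illustrative only.
import math
import random

def noise_sample(x0: float, alpha_bar: float, rng: random.Random) -> float:
    """q(x_t | x_0): scale the clean value and add scaled Gaussian noise."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

rng = random.Random(0)
print(noise_sample(1.0, 0.9, rng))  # mostly signal, a little noise
print(noise_sample(1.0, 0.1, rng))  # mostly noise
```

Training then teaches a network to predict `eps` from the noisy sample, so generation can run the chain in reverse from pure noise.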

These datasets also raise the question of explainable AI: the provenance and properties of each image used to train these generative models become critical.

Trend 2: The FastSaaS race begins

Companies riding the generative AI trend have begun offering it as a cloud service. As LLMs and generative models such as GPT-3 and DALL-E became publicly available, it became increasingly easy for enterprises to offer them as a service, which gave rise to FastSaaS.

Recently, Shutterstock announced plans to integrate DALL-E 2 into its platform, Microsoft added Copilot as a VS Code extension, TikTok announced an in-app text-to-image AI generator, and Canva launched an AI image-generation feature on its platform.
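The pattern behind these FastSaaS offerings is simple: put a generative model behind a single service entry point that dispatches requests by model name. A minimal in-process sketch (all names here are hypothetical stand-ins, and the "model" is a stub rather than a real API call):

```python
# Minimal sketch of the "model as a service" pattern behind FastSaaS:
# a registry maps model names to callables, and one entry point
# dispatches requests, as a hosted HTTP API would.
from typing import Callable, Dict

MODELS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a model callable to the service registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        MODELS[name] = fn
        return fn
    return wrap

@register("echo-lm")  # hypothetical model name
def echo_model(prompt: str) -> str:
    # Stand-in for a real generative model call (e.g. a GPT-3 request).
    return f"generated: {prompt}"

def serve(model: str, prompt: str) -> str:
    """Dispatch a request to a registered model."""
    if model not in MODELS:
        raise KeyError(f"unknown model: {model}")
    return MODELS[model](prompt)

print(serve("echo-lm", "a cat in space"))  # → generated: a cat in space
```

In a real FastSaaS product the registry entries would wrap hosted models behind authentication, billing, and rate limiting, but the dispatch shape is the same.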

Trend 3: Reliance on supercomputers

This is the trend of building supercomputers to handle generative workloads and offer them as a service to companies. With ever-growing datasets and generative models, demand for supercomputers is rising and is expected to rise further. Given the FastSaaS race, better, higher-performance computing is the next requirement.

NVIDIA and Microsoft recently collaborated on Quantum-2, a cloud-native supercomputing platform. In October, Tesla announced that its Dojo supercomputer was built entirely from scratch using chips developed in-house, and it looks like enterprise customers could soon get access. Additionally, Cerebras launched Andromeda, a 13.5-million-core AI supercomputer that delivers over 1 exaflop of AI compute. Recently, Jasper partnered with Cerebras for better performance.

Trend 4: Beyond 3nm chips

As Moore’s Law predicts, processing power increases as transistors shrink. So for supercomputers to run large models, they need smaller chips, and we're already seeing chips getting smaller.

In recent years, the chip industry has pushed miniaturization, with manufacturers constantly looking for ways to make chips smaller and more compact. Apple's M2 and A16 chips, for example, are built on 5nm and 4nm processes respectively. TSMC is expected to ship 3nm chips in 2023, which should improve the efficiency and performance of AI/ML development.
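Under idealized dimensional scaling, shrinking features from A nm to B nm raises transistor density by roughly (A/B)². Modern "nm" node names are largely marketing labels, so treat this strictly as a ballpark illustration:

```python
# Idealized density gain from a process shrink: density scales with the
# inverse square of the feature size. Node names are marketing labels,
# so this is a ballpark, not a real foundry comparison.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Approximate transistor-density multiplier for a node shrink."""
    return (old_nm / new_nm) ** 2

print(f"5nm -> 3nm: ~{density_gain(5, 3):.2f}x density")  # ~2.78x
print(f"4nm -> 3nm: ~{density_gain(4, 3):.2f}x density")  # ~1.78x
```

Even under this generous idealization, a 5nm-to-3nm move is under 3x, which is why each new node matters so much for large-model compute budgets.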

Trend 5: Integration of quantum and traditional computing

As companies such as NVIDIA, Google, and Microsoft bring quantum hardware to the cloud, more innovation in quantum computing is bound to follow. This will allow small tech companies to train, test, and build AI/ML models without heavy on-premises hardware.

The rise of quantum computing in the coming years should be on developers' radar, as its use will grow in many other areas such as healthcare and financial services.

In a recent announcement, a quantum computer was connected to Europe's fastest supercomputer, combining classical and quantum machines to solve problems faster. Similarly, NVIDIA released QODA (Quantum Optimized Device Architecture), the first platform for hybrid quantum-classical computers.
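The "classical" half of these hybrid platforms often means simulating small quantum circuits on conventional hardware. A tiny stdlib-only state-vector sketch of one qubit (real amplitudes only, nothing like a full simulator):

```python
# Tiny state-vector simulation of one qubit -- the kind of classical
# simulation that hybrid quantum-classical platforms pair with real
# quantum hardware. Real amplitudes only; a sketch, not a framework.
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1.0 / math.sqrt(2.0)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Measurement probabilities for |0> and |1> (real amplitudes)."""
    a0, a1 = state
    return (a0 * a0, a1 * a1)

# |0> through a Hadamard: equal superposition, 50/50 measurement odds.
p0, p1 = probabilities(hadamard((1.0, 0.0)))
print(p0, p1)
```

Classical simulation like this scales exponentially in qubit count, which is exactly why hybrid setups hand the hard part to real quantum processors.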

IBM recently announced its quantum hardware and software at its annual Quantum Summit 2022, outlining a groundbreaking vision for quantum-centric supercomputing built around a 433-qubit processor. IBM also announced that next year it will demonstrate a 1,000-qubit system, which could be a disruptor for further innovation across fields.
