


Cindy Elder: OpenAI CEO Sam Altman discusses AI's energy consumption and regulatory stance
January 17, Davos, Switzerland - Speaking at the World Economic Forum in Davos, OpenAI CEO Sam Altman said that future artificial intelligence development will need to break through energy constraints, meaning it will consume far more power than we expected. He also discussed the potential impact artificial intelligence could have on upcoming global elections and shared his views on the regulatory attitudes of the United States and the European Union.
The Energy Needs of Artificial Intelligence
Speaking at a Bloomberg event on the sidelines of the World Economic Forum's annual meeting in Davos, Altman said he believes the silver lining is a shift toward greener energy sources, such as nuclear fusion and lower-cost solar power and energy storage. He added that advances in artificial intelligence itself would also be an important part of addressing the problem.
“There’s no way to get there without a breakthrough,” he said. “This motivates us to invest more in nuclear fusion.”
He revealed that in 2021 he personally invested US$375 million in Helion Energy, a private US nuclear fusion company. Helion subsequently signed a deal to supply energy to Microsoft in the coming years. Microsoft is OpenAI's most important financial backer and provides it with AI computing resources.
Altman said his company is developing next-generation artificial intelligence models designed to do more than existing models can. These models, however, will require more energy. "The goal is to create an artificial intelligence system that can understand and improve the world," he explained. That goal demands enormous computing power, which in turn requires a large supply of electricity.
He hopes that artificial intelligence will bring benefits to humanity rather than disaster. He said: "We do not want to create artificial intelligence that will destroy us, but artificial intelligence that can help us."
The EU's Regulatory Stance Is Encouraging
"What's impressive to me, and really remarkable, is that they're still having a continuous conversation around artificial intelligence," he said. "It shows there is a consensus on the importance of artificial intelligence, which is a good sign."
He expressed support for a principled, flexible, and collaborative regulatory framework, rather than regulation that is either overly strict or overly lax. He emphasized: "We do not want artificial intelligence to become an uncontrollable beast, nor do we want it to be a prisoner in shackles. We hope artificial intelligence will become a respected and well-guided partner."
He said he hopes to work with government, business, academia, civil society, and other stakeholders to develop rules for AI that benefit people and the planet. "Our goal is to create an artificial intelligence that can coexist harmoniously with us," he said. "This requires effort and cooperation from all of us."

