What is the place of generative AI in the Internet of Things?
Generative artificial intelligence refers to a set of machine learning models that are trained to predict the next set of words, or the right image, in response to a prompt.
Recently, the mainstream media has fretted over Alexa, Siri, and Google's digital assistants, largely because, so far, these products have not used generative AI.
DALL-E, Stable Diffusion, and Midjourney have become popular tools for image generation. But the latest hype is around large language models, specifically ChatGPT from OpenAI. Given a prompt and some style suggestions, the results read fluently and deliver real information, or at least the illusion of it.
These tools are exciting and will have a major impact on how work is done, content is created, and businesses are built. However, there are real concerns about the accuracy of generative AI, and we still have to grapple with how easily it can be used for propaganda, fraud, and other malicious behavior, rather than simply accepting it blindly as a world-changing technology.
That said, we are currently at the stage where technologists are trusting generative AI rather uncritically, investing billions of dollars in companies experimenting with new models and use cases. We are also at the stage where members of the media spend hours trying to trick these models into misbehaving, or trying to prove that AI is sentient and potentially hostile to us.
But that is not the focus of this article. This article focuses on where generative AI will have a significant impact on how the IoT is deployed and used. For example, where can it improve the user experience? What kinds of jobs can it help with or take over? Beyond words and images, what other kinds of generative AI models could help in the Internet of Things?
Let's start with the smart home. Rather than conflating Amazon Alexa with a chatbot, it and other digital assistants will continue to use natural language processing (NLP) to "understand" and then act on task-based requests, such as "turn on the lights" or "good morning" to launch a wake-up routine, while also adding a GPT-style chatbot to handle requests that require more in-depth conversation.
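To make that concrete, here is a minimal sketch of how such a hybrid assistant might route requests; the intent phrases, device helpers, and ask_llm() stub are illustrative placeholders, not any vendor's actual API:

```python
# Hypothetical sketch: route simple, task-based voice commands to local
# handlers, and fall back to a generative model for open-ended requests.

def set_device(device_id: str, on: bool) -> str:
    # Placeholder for a real smart-home call (e.g. over MQTT or a hub API).
    return f"{device_id} -> {'on' if on else 'off'}"

def run_routine(name: str) -> str:
    # Placeholder for a scripted routine (lights, thermostat, news briefing).
    return f"running routine: {name}"

def ask_llm(prompt: str) -> str:
    # Placeholder for a metered call to a hosted large language model.
    return f"[chatbot response to: {prompt!r}]"

KNOWN_INTENTS = {
    "turn on the lights": lambda: set_device("living_room_light", on=True),
    "good morning": lambda: run_routine("wake_up"),
}

def handle_utterance(text: str) -> str:
    """Exact task-based intents stay local and cheap; anything open-ended
    goes to the generative model."""
    handler = KNOWN_INTENTS.get(text.strip().lower())
    return handler() if handler is not None else ask_llm(text)

print(handle_utterance("Turn on the lights"))
print(handle_utterance("Tell my kids a story about a brave robot"))
```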
A good digital assistant will not rely on just one or two models, but will draw on many models to provide the most useful functionality to the user. There are also economic concerns: each call to a chatbot can incur a fee, which requires a different business model, and not everyone is willing to pay for a subscription.
Additionally, we may soon see chatbot-style generative AI models used in homes. Recently, Home Assistant founder Paulus Schoutsen demonstrated how to use HomePod to access a GPT-style chatbot to tell stories to his children.
Indeed, the utility of combining the NLP that is already part of digital assistants with generative AI models is clear to SoundHound, which is introducing a platform that pairs voice assistants with generative AI. So ChatGPT won't replace Alexa, but it may eventually become part of Alexa, with Alexa as the interface and ChatGPT as just one of the many services it offers.
Other smart home areas where ChatGPT or other generative AI models will have an impact include children's toys, fitness services, and recipe or activity suggestions. That's because generative AI is really just another reason to add connectivity and intelligence to everyday objects, with those objects either providing personalized training data or acting as a conduit for such services.
On the enterprise side, there is clear utility in using generative AI to help business people implement digital solutions without coding. One example is how Software AG is combining its webMethods cloud-to-cloud integration platform with generative AI models to help employees figure out how to link data and various digital services. Ultimately, as more things become connected in buildings, production lines, commercial kitchens, and more, using simple written language to tell connected devices how to work with connected business software will help managers become more efficient and capable.
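A rough sketch of what "telling connected software what to do in plain language" could look like under the hood is shown below; the prompt format, connector names, and ask_llm() stub are hypothetical illustrations and are not drawn from Software AG's product:

```python
# Hypothetical sketch of no-code integration via an LLM: the model is asked to
# emit a structured spec, which is validated against an allow-list of
# connectors before anything is deployed.
import json

ALLOWED_CONNECTORS = {"salesforce", "sap", "postgres", "slack"}

PROMPT_TEMPLATE = (
    "Convert this request into JSON with keys 'source', 'target', 'trigger':\n"
    "Request: {request}\nJSON:"
)

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to a hosted model; hard-coded here for the sketch.
    return '{"source": "salesforce", "target": "slack", "trigger": "new_lead"}'

def build_integration(request: str) -> dict:
    spec = json.loads(ask_llm(PROMPT_TEMPLATE.format(request=request)))
    for key in ("source", "target"):
        if spec.get(key) not in ALLOWED_CONNECTORS:
            raise ValueError(f"unsupported connector: {spec.get(key)}")
    return spec

print(build_integration("Post a Slack message whenever a new lead lands in Salesforce"))
```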
In industrial settings, the promise of ChatGPT comes with compelling use cases as well as caveats. Some advocate using generative AI for things like predictive maintenance. Generative AI models work by training on large amounts of data and then generating the most likely next element; in a large language model, that means training on a huge amount of text and generating the next word or phrase the model considers most likely.
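As a toy illustration of "generating the most likely next element," the sketch below counts which token follows which in a tiny training sequence and picks the most frequent successor; real models use neural networks trained on vast corpora, but the underlying idea is the same:

```python
# Toy next-element prediction: count bigram frequencies in a training sequence,
# then return the most frequent successor of a given token.
from collections import Counter, defaultdict

training_text = "turn on the lights turn on the heater turn off the lights"
tokens = training_text.split()

successors = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    successors[current][nxt] += 1

def most_likely_next(token: str) -> str:
    return successors[token].most_common(1)[0][0]

print(most_likely_next("the"))   # "lights" (seen twice vs. "heater" once)
print(most_likely_next("turn"))  # "on" (seen twice vs. "off" once)
```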
Presumably, with enough machine data, such a model could predict what should happen next and send an alert when reality diverges from the expected result. But honestly, this feels like overkill: traditional anomaly detection works well for predictive maintenance and is much less expensive. Where generative AI might get interesting is in taking process data and suggesting alternative workflows, or in letting someone describe a workflow in plain written language and having the AI turn it into code.
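For comparison, here is a minimal sketch of the kind of traditional anomaly detection referred to above, flagging a sensor reading that drifts more than a few standard deviations from its recent history; the vibration readings are made up for illustration:

```python
# Simple statistical anomaly detection: flag a reading whose z-score against a
# rolling window of recent history exceeds a threshold.
import statistics

def detect_anomalies(readings, window=20, z_threshold=3.0):
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # avoid division by zero
        z = abs(readings[i] - mean) / stdev
        if z > z_threshold:
            alerts.append((i, readings[i], round(z, 1)))
    return alerts

# Simulated vibration readings with one obvious spike at index 25.
vibration = [1.0 + 0.05 * (i % 5) for i in range(40)]
vibration[25] = 4.2
print(detect_anomalies(vibration))
```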
But there are also things to watch out for. These models are only as good as their training data, and they can produce wrong answers that are phrased so convincingly that it is hard to tell they are wrong.
Given the intellectual property battles surrounding generative AI, a final concern is that the provenance of training data will become an issue, especially if a model built on proprietary data ends up deployed outside the intended factory or enterprise. In practice, though, it is relatively simple to set limits on where training data actually comes from.
Time, along with education about how generative AI models are created and how they work, will address some of the IP issues. We are only a few months into this cycle, but I believe generative AI will eventually become as important and as widely accepted as computer vision and NLP.