Vince Kellen, Chief Information Officer of the University of California, San Diego (UCSD), is familiar with ChatGPT, DALL-E and other generative AI tools. The technology has well-documented limitations: the answers it generates may be inaccurate, the images it produces may lack integrity, and its output may be biased. But he is moving forward anyway, saying employees are already using ChatGPT to write code and job descriptions.
OpenAI’s text generator ChatGPT and image generator DALL-E are the most prominent of a wave of large language models (also called generative language models, or generative AI) that have captured the public imagination. These models respond to written requests by generating output ranging from text documents and images to programming code.
Kellen sees code generated by ChatGPT as a productivity tool, much as the compiler was an improvement on assembly language. “Generating things for libraries and software is no different than searching GitHub,” he said, “and we also use it to write job descriptions that are content- and format-sensitive. You can then move on to editing very quickly, looking for errors and areas of confusion.” While the technology is still in its early stages, there is no denying the impact it is already having on certain enterprise applications, particularly those that are content- and workflow-intensive, but you will want to proceed with caution.
Oliver Wittmaier, chief information officer and product lead at DB SYSTEL, a wholly owned subsidiary of DB AG and the digital partner for all of the group’s companies, said generative AI is ready for coding, managing workflows, refining data and simple use cases such as pre-filling forms. In the transportation industry, he said, “artificial intelligence can directly or indirectly affect congestion avoidance and the steering and management of transportation processes.”
Content generation is also of particular interest to Michal Cenkl, director of innovation and experimentation at Mitre. His team is currently investigating two uses of the technology in knowledge and expertise work. “The first is if I want to write an email to one of our sponsors summarizing the work we do and the work relevant to them, in the context of the email communications we’ve already had with them. This technology is incredibly powerful.”
The second is project staffing. Typically, Cenkl reviews resumes and searches by skill tags to find candidates who match a project. Generative AI can help with this. “For example, I might want to ask, ‘What can Michael do on this project?’ and summarize what he could contribute based on what he is doing now, so that I don’t need to search through resumes.”
Used car retailer CarMax has been using generative AI for more than a year, leveraging OpenAI’s API to consolidate customer review text into summaries that are more manageable and readable. But Shamim Mohammad, the company's chief information officer, said his team has applied the technology to other areas as well.
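The article does not show CarMax’s implementation, but the underlying pattern of sending review text to a hosted model and asking for a summary is easy to sketch. Below is a minimal, illustrative example using the openai Python package; the model name, prompt wording and sample reviews are assumptions, not CarMax’s actual pipeline.

```python
# Minimal sketch of the review-summarization pattern described above.
# The model name, prompt wording, and sample reviews are illustrative
# assumptions, not CarMax's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reviews = [
    "Great mileage and the trunk is bigger than expected.",
    "Comfortable ride, but the infotainment system feels dated.",
    "Smooth purchase process; the car was exactly as described.",
]

prompt = (
    "Summarize the following customer reviews into one short, "
    "readable paragraph that highlights common themes:\n\n"
    + "\n".join(f"- {r}" for r in reviews)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

print(response.choices[0].message.content)
```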
One of those areas is vehicle imaging, which can help improve the customer experience. He said AI can optimize the images of every vehicle the company adds to its inventory, which ranges from 50,000 to 60,000 vehicles at any given time. “We make every image as realistic as possible without sacrificing effectiveness.” For example, their data scientists created a “digital sweeper” model that replaces a photo of a car parked on a dirty floor with an image of the same car parked on a clean floor. “It’s still the same car, but the photos look better, which is a better experience for customers.” Similarly, Forrester analyst Rowan Curran said Nike has been using generative AI to generate product prototype images. “You can use a text-to-3D modeler to test it in 3D space and get a more intuitive idea of how it will look in the real world, all with ease,” he said.
Applications with the greatest potential returns
Creating code and improving customer experiences are the main areas where businesses are using generative AI today, with the greatest potential returns in terms of increased efficiency, Mohammad said.
CarMax is testing GitHub’s Copilot, and Mohammad says engineers may be writing up to 40% less code in some cases. “It’s evolving very quickly, but if you use it to create software, you have to make sure you’re not infringing copyright, generating false content, or embedding malware. You can’t insert this code without supervision.”
Curran said other areas are ripe for enterprise applications, such as generating marketing copy, graphics and designs, and creating better summaries of data so people can use it more effectively. “Some people are even using these large language models to clean unstructured data,” he said. Next, he said, generative AI capabilities may start to appear in some enterprise software, such as technical support software and Microsoft Office applications.
CarMax’s Mohammad warns that, in addition to the benefits, CIOs deploying this technology also need to understand the issues surrounding the generated output, including potential intellectual property problems. Generative models such as DALL-E, which are trained on internet data, can produce content that may infringe copyright, which is why Getty Images recently sued Stability AI over its AI-powered art generator Stable Diffusion.
This technology also requires human supervision. “Systems like ChatGPT don’t know what they’re creating, and these systems are very good at making you believe that what they’re saying is accurate, even if it’s not,” Cenkl said. No AI can guarantee that: there is no attribution or reference information to let you know how it arrived at a response, and no interpretability to show why something is written the way it is. “You don’t know what the underlying basis is, you don’t know what parts of the training set are influencing the model, and what you get is purely an analysis based on the existing data set, so you have the potential not only for bias but also for factual errors.”
Wittmaier is bullish on the technology but still believes it is an early technology, one that could be used for customer-facing deployments. At this point, he said, office suite environments, customer contact chatbots, technical support functions and general documentation all have short-term potential, but when it comes to safety-related areas of a transportation company’s business, the answer is clearly no. “We still have a lot to learn and improve before we can incorporate generative AI into these sensitive areas,” he said.
TruStone Financial’s Jeter has similar concerns. His team used ChatGPT to identify code fixes and deploy them to the site in 30 minutes, and “without ChatGPT it would have taken much longer.” He believes ChatGPT is also useful for drafting contract terms and conditions, but these have not yet been fully vetted. “We will not expose any generative AI to outside members, and TruStone will not be at the forefront in this area.”
When TruStone eventually starts using the technology to deliver benefits to its members, he added, conversations will be monitored through human and automated review to protect members and the brand.
Today, the key to successful deployments remains having humans in the loop to review the generated content for accuracy and compliance, said Kellen of the University of California, San Diego. “Ensuring that machines make the right decisions will be a significant litigation point, and it will be a long time before organizations use the technology to do anything high-risk, such as medical diagnostics. But generative AI can be very good at generating things like review summaries, assuming there’s human supervision. This slows us down a bit, but it’s the right thing to do. Eventually, we’ll find automated ways to ensure quality. But for now, you have to have a review process to ensure the content generated is accurate.”
In addition to accuracy, another well-documented risk is the potential for bias that models pick up from their training data. Kellen says this is especially problematic when generative AI is trained on content from the internet, as ChatGPT is. But when you train the model on your own private company data, you can check for potential bias, and this may not be an issue. “The deeper you go into the enterprise, and the more restricted and common the data categories there are, the more useful generative AI will be,” he said.
The thing you need to know about large language models, Cenkl said, is that these machines are only experts to a certain extent. “They don’t understand, but they are very good at computing.”
"Technology can make things better, but it also brings us a lot of extra work.” However, he thinks generative AI is different. "This is exciting because it takes away some of the things we don't like to do and makes us smarter, and it makes humans stronger."
But Curran pointed out that generative AI will not completely replace any role in the short term. “It might reduce the number of people needed to perform a role, such as content development, product information management, or software development, but there will always be humans involved,” he said. Mohammad added that while generative AI technology can write and summarize, human intelligence is always needed to ensure content quality and to curate the generated content to make it better.
Kellen said now is the time to get up to speed on generative AI technology and start experimenting. He said: “CIOs must work this out before they are confused by vendors embedding the technology into their enterprise software products. If you keep delaying into next year, you will be behind the curve.”
Curran said it is important to understand the technology and explore it in depth, rather than getting caught up in the public buzz around ChatGPT, and to recognize that the technology is much more than that one application. Then start thinking about where generative AI could improve the efficiency or quality of existing processes. Finally, ask yourself what type of functionality you need and whether you will get it from a vendor or build it yourself.
The next step is to test the technology and consider potential use cases. “A lot of your systems, whether they’re using structured or unstructured data, are going to have at least some components of natural language and conversational interfaces,” Cenkl said. “Think about the data you have and which parts of it these technologies can enhance,” and then demonstrate the potential. For example, Jeter said he generated a terms-and-conditions template and sent it to the compliance department to show how the team was using the technology.
Generative AI models are large, and training them from scratch is extremely expensive, so the best way to get started is to use one of the cloud services, Curran said. CarMax, for example, uses Microsoft’s Azure OpenAI Service with GPT-3.5. “The data we load is our own – it’s not shared with others, and we can take large amounts of data and quickly process it to run our models,” Mohammad said. “This might be useful if you have a small team or a particular business problem. If you want to learn generative AI technology, give it a try.”
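For readers who want to see what the hosted approach looks like in practice, here is a minimal sketch of calling a GPT-3.5 deployment through the Azure OpenAI Service using the openai Python package. The endpoint, API version, deployment name and prompts are placeholders, not CarMax’s configuration.

```python
# Illustrative sketch of calling a GPT-3.5 deployment through the
# Azure OpenAI Service, so data stays within your own Azure resource.
# The endpoint, API version, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-35-deployment",  # name of your Azure model deployment
    messages=[
        {"role": "system", "content": "You summarize internal documents concisely."},
        {"role": "user", "content": "Summarize: the Q3 review backlog grew by 12%."},
    ],
)

print(response.choices[0].message.content)
```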