Table of Contents
What’s wrong with DevOps?
DevOps and the future of generative AI
1. Automatic fault detection and suggested remedial measures
2. On-demand code/configuration generation and deployment
3. Prompt-driven, on-demand workflow management
Key Takeaway: Generative AI securely accelerates your work
ROI is critical for generative AI
Summary

Generative AI for DevOps: A realistic perspective

Apr 12, 2023 pm 04:52 PM

Generative AI enables DevOps teams to eliminate tedious duplication, enhance automation, and compress complex workflows into simple conversational actions.


The concept of generative AI describes machine learning algorithms that can create new content from minimal human input. The field has grown rapidly over the past few years, with projects such as text author tool ChatGPT and photorealistic image creator DALL-E2 gaining mainstream attention.

Generative AI isn’t just for content creators, though. It is also poised to transform technology jobs in software engineering and DevOps. For example, the controversial "AI Pair Programmer" GitHub Copilot is already prompting a rethinking of how code is written, but the potential of collaborative AI remains underexplored in the DevOps world.

In this article, we look toward a future where generative AI enables DevOps teams to eliminate tedious duplication, enhance their automation, and compress complex workflows into simple conversational actions. But before that, let’s dive into the DevOps problems that generative AI can improve.

What’s wrong with DevOps?

DevOps is far from solved. While the adoption of DevOps thinking is growing rapidly year over year, the process still relies on many tools, a limited talent pool, and repetitive tasks that are only partially automated.

DevOps engineers can spend too much time on menial tasks that don’t contribute significant business value, such as approving deployments, checking environment status, and building basic configuration files. Although unavoidable, these tasks are chores and do not directly contribute to the final product. They're also great candidates for generative AI processing, and both ChatGPT and Copilot (or the OpenAI Codex model that powers Copilot) might take some of the pressure off:

  • They can populate common configuration files and templates so engineers don't have to.
  • They help team members gain new skills by suggesting contextually relevant snippets, reducing the upskilling curve by providing assistance exactly when it's needed.
  • They help improve maintainability by reducing the time required to build new assets and making them more consistent.
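As a rough illustration of the first point, a team might wrap a model call in a template-driven helper. This is a minimal sketch: `call_model` is a stub standing in for a real ChatGPT/Codex API request, and the template, function names, and example values are all hypothetical.

```python
# Sketch: using a generative model to populate a common config file.
# build_prompt fills the placeholders an engineer would otherwise type by hand;
# call_model is a stub standing in for a real ChatGPT/Codex API request so the
# example runs standalone.

PROMPT_TEMPLATE = (
    "Generate a Dockerfile for a {language} service named {name} "
    "that listens on port {port}."
)

def build_prompt(language: str, name: str, port: int) -> str:
    """Assemble the model prompt from the template placeholders."""
    return PROMPT_TEMPLATE.format(language=language, name=name, port=port)

def call_model(prompt: str) -> str:
    """Placeholder for a generative AI completion request (no real API call)."""
    return f"# model output for: {prompt}"

prompt = build_prompt("Python", "billing-api", 8080)
print(call_model(prompt))
```

In practice the stub would be replaced by a call to the model provider's API, with the generated file routed into code review like any other change.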

However, existing systems are limited by their narrow focus on content generation. DevOps assistants would be even more powerful if they also provided intent-based and action-based experiences to trigger workflow steps and apply state changes. For example, imagine the experience of merging Copilot's code authorship with a two-way conversational interface:

  • You can ask the assistant to start a process on demand, and it can then prompt you for input when needed.
  • Developers have self-service access to potentially sensitive tasks, such as requesting deployment to production. The AI performs operations securely on their behalf, minimizing the risk of errors and creating a security barrier between developers and infrastructure. The AI assistant can also request a review from relevant team members before submitting changes, ensuring everyone is aware of modifications to the platform.
  • AI can alert you in real time when monitoring indicators change. For example, when a deployment fails, a security vulnerability is detected, or performance deviates from baseline, you'll receive a message and have the option to take immediate action.
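The real-time alerting idea above can be sketched as a notifier that packages a monitoring event together with one-click action options. The event fields and action identifiers below are illustrative assumptions, not a real alerting API.

```python
# Sketch: turning a monitoring event into a chat message with action buttons.
# The event shape and the action names are illustrative assumptions.

def build_alert(event: dict) -> dict:
    """Map a monitoring event to a message payload with suggested actions."""
    actions_by_type = {
        "deployment_failed": ["rollback", "retry_deploy", "view_logs"],
        "security_vulnerability": ["quarantine", "view_advisory"],
        "performance_regression": ["scale_up", "view_metrics"],
    }
    return {
        "text": f"Alert: {event['type']} in {event['service']}",
        "actions": actions_by_type.get(event["type"], ["acknowledge"]),
    }

alert = build_alert({"type": "deployment_failed", "service": "checkout"})
print(alert["text"])     # Alert: deployment_failed in checkout
print(alert["actions"])  # ['rollback', 'retry_deploy', 'view_logs']
```

A real assistant would render these actions as buttons in a chat client, so taking immediate action is a single click rather than a console session.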

Importantly, these abilities do not replace humans or fundamentally change their roles. This form of AI enhances engineering capabilities by handling the mundane and consistently enforcing safety mechanisms. It frees up DevOps teams to complete more meaningful work in less time.

DevOps and the future of generative AI

Generative AI has huge potential to redefine the way DevOps works. Here are three specific areas where it will dominate.

1. Automatic fault detection and suggested remedial measures

Failures are a common problem for developers and operations personnel alike. They are unpredictable interruptions that force an immediate context switch to prioritize repairs. Unfortunately, this can impact productivity, slow down release progress, and lead to frustration when remediation efforts don't go as planned.

Artificial intelligence agents can detect failures and investigate their causes. Additionally, they can combine their analytics with generative capabilities and knowledge of past failures to recommend immediate actions within the context of displayed alerts.

Consider a simple Kubernetes example: an assistant notices a production outage, realizes that a Pod has been evicted due to resource constraints, and provides action buttons to restart the Pod, scale the cluster, or clean up abandoned resources. Teams can resolve incidents with a single click instead of spending minutes troubleshooting manually.
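A minimal sketch of that diagnostic step, assuming simplified pod-status dictionaries rather than the real Kubernetes client API (the remediation names are also assumptions):

```python
# Sketch: inspecting pod status summaries and proposing one-click remediations.
# The status dictionaries loosely mimic what a Kubernetes API client reports;
# this is not the real client library.

def suggest_remediation(pod: dict) -> list[str]:
    """Return ordered remediation options for a problematic pod."""
    if pod.get("reason") == "Evicted":
        # Eviction usually signals node resource pressure.
        return ["restart_pod", "scale_cluster", "clean_up_abandoned_resources"]
    if pod.get("reason") == "CrashLoopBackOff":
        return ["view_logs", "rollback_image"]
    return []

evicted = {"name": "web-7d9f", "phase": "Failed", "reason": "Evicted"}
print(suggest_remediation(evicted))
# ['restart_pod', 'scale_cluster', 'clean_up_abandoned_resources']
```

The assistant's value is in attaching these suggestions directly to the alert, so the "single click" maps to a pre-vetted remediation rather than an ad-hoc command.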

2. On-demand code/configuration generation and deployment

Generative AI's ability to write code provides incredible value, and layering conversational intent on top makes it even more accessible and convenient. For example, you can ask an AI agent to set up a new project, configuration file, or Terraform state definition by writing a short message in the chat interface. The agent can prompt you to provide a value for any template placeholder and then notify the appropriate stakeholders that the content is ready for review.

Once approved, AI can notify the original developers, launch the project into a live environment, and provide a link to view the deployment and start iterating on it. This condenses several distinct sequences into one self-service operation for the developer. Operations teams no longer need to manually provision project resources in advance, freeing them to focus on other tasks.

3. Prompt-driven, on-demand workflow management

Next-generation AI agents go beyond simple text and photo creation to support fully automated, prompt-driven workflows. For example, bidirectional AI lets you use natural language to initiate processes such as "restart production cluster" to interact with your AWS ECS resources. There's no need to tell the AI which platform you're using or the specific steps it should run. At Kubiya.ai, we have taken full advantage of this and now offer our customers the option to create any DevOps workflow through natural-language prompts.

The language models behind these agents are trained on the vocabulary of your cloud service. When you ask for a cluster restart, the agent uses its domain knowledge to interpret your words: it knows that your "production" cluster runs on AWS, so it must retrieve the cluster's details and then make the correct API calls, such as ecs.UpdateService, to restart it. Your words translate directly into fully functional workflows.
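The translation from phrase to API call can be pictured as a lookup from resolved intent to a platform operation. The hard-coded registry below is a deliberately tiny stand-in for what a trained model with live inventory data would infer; in a real agent the final step would invoke the actual AWS API (e.g. ecs.UpdateService with a forced new deployment) rather than return a string.

```python
# Sketch: resolving a natural-language request to a platform-specific API call.
# A real agent would combine a trained model with live inventory data; this
# hard-coded registry only illustrates the shape of the translation.

CLUSTER_REGISTRY = {
    "production": {"platform": "aws_ecs", "cluster": "prod-cluster"},
    "staging": {"platform": "aws_ecs", "cluster": "staging-cluster"},
}

INTENT_TO_CALL = {
    ("aws_ecs", "restart"): "ecs.UpdateService(forceNewDeployment=True)",
}

def plan_workflow(request: str) -> str:
    """Turn 'restart production cluster' into the call the agent would make."""
    words = request.lower().split()
    env = next(w for w in words if w in CLUSTER_REGISTRY)  # resolve the target
    target = CLUSTER_REGISTRY[env]
    call = INTENT_TO_CALL[(target["platform"], "restart")]
    return f"{call} on {target['cluster']}"

print(plan_workflow("restart production cluster"))
# ecs.UpdateService(forceNewDeployment=True) on prod-cluster
```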

Additionally, the two-way aspect means the AI agent becomes more powerful over time. Once you start running your workflows, the agent is also trained on them, allowing it to suggest similar processes for future scenarios and describe what each workflow actually does.

This approach allows developers to do more without involving the operations team. AI agents mediate between humans and infrastructure platforms, allowing anyone to launch workflows consistently without compromising security. As part of the workflow, the agent can prompt for input at relevant points, such as when you ask it to "Add a new VM," asking you to select a cloud account, datacenter region, machine type, and pricing tier.
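The input-gathering step can be sketched as a workflow schema the agent checks before launching anything. The schema format is an assumption; the workflow and parameter names follow the "Add a new VM" example above.

```python
# Sketch: an agent collecting required inputs before running a workflow.
# The schema format is an assumption; the parameter list follows the
# "Add a new VM" example.

WORKFLOWS = {
    "add_vm": ["cloud_account", "region", "machine_type", "pricing_tier"],
}

def missing_inputs(workflow: str, provided: dict) -> list[str]:
    """Return the parameters the agent still needs to prompt the user for."""
    return [p for p in WORKFLOWS[workflow] if p not in provided]

# The agent keeps prompting until nothing is missing, then launches the workflow.
print(missing_inputs("add_vm", {"region": "us-east-1"}))
# ['cloud_account', 'machine_type', 'pricing_tier']
```

Because the agent, not the developer, holds the credentials and validates each input, anyone can launch the workflow without gaining direct access to the infrastructure.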

Key Takeaway: Generative AI securely accelerates your work

DevOps use cases for generative AI accelerate key tasks while improving accessibility, security, and reliability. Additionally, they enable developers to focus on advancing new features rather than repeatedly running familiar processes and waiting for results.

An agent that is smart enough to sustain a conversation is like another member of your team. It provides support to developers who may be unfamiliar with certain tools, while ensuring full compliance with the organization's security and compliance policies. These safeguards protect the code base and give developers confidence that they can launch any workflow. Additionally, reducing the number of back-and-forth interactions with DevOps teams increases efficiency and tightens feedback loops.

Generative AI is not a static experience either. It gets better over time as it analyzes interactions to more accurately determine user intent. For example, if the suggestions aren't appropriate the first time you type your query, you can expect them to improve as you and others repeat the request and take different courses of action.

AI agents also compensate for gaps in human knowledge. They allow developers to start a process even when they are unfamiliar with some of the steps, tools, or terminology involved. Ask "Which instances failed?" and the AI can fill in the gaps, figuring out that you're referring to the Kubernetes Pods in your production cluster. These capabilities let AI effectively complement human abilities, making it a source of supportive guidance for teams.

ROI is critical for generative AI

Organizations that use AI regularly are likely to achieve the best results because their agents will be better at anticipating their needs. However, it’s also important not to overdo it when adding AI to your workflow. The most successful adoptions will focus on solving real business needs. First, assess your processes to identify bottlenecks between development and operations teams, then use AI to target those repeating use cases.

The solution you choose should help you hit your KPIs, such as closing more issues or resolving incidents faster. Otherwise, the AI agent will be underutilized and hinder your natural operating procedures.

Summary

Generative AI is one of the fastest-maturing technologies today. ChatGPT spread rapidly as researchers, consumers, and organizations began to explore its capabilities. DALL-E2 has achieved similarly impressive results, and GitHub Copilot attracted more than 1.2 million developers in its first 12 months.

All three technologies demonstrate clear revolutionary potential, but it is the hybrid and highly complex workflows of DevOps that may benefit the most in the long run. For example, DevOps combines the creation of new assets such as code and configurations with sequential processes such as deployment approvals and review requests.

Contrary to the predictions of some outsiders, generative AI for DevOps will move beyond simple templating of common file snippets to provide complete workflow automation. Using simple conversational phrases, you can instruct your agents to take specific actions on your behalf, from provisioning new cloud resources to checking production performance. As a result, agents will provide a real-time, two-way feedback loop that improves collaboration, increases productivity, and reduces the daily stress developers face.
