CIO shares: How to harness generative AI in the enterprise
Generative AI is bringing innovative opportunities to enterprises, but in this new era senior managers need to pay close attention to how it is applied in order to safeguard code quality and reduce technical risk. Executives should carefully evaluate the reliability and security of AI solutions and put effective monitoring in place to detect and correct potential problems in a timely manner. By establishing strict technical standards and oversight mechanisms, companies can better leverage generative AI to transform the organization early on and have a profound impact on IT strategy. While large language models accelerate engineering agility, they also create technical debt. Stephen O'Grady, principal analyst and co-founder of RedMonk, pointed out: "Generative systems may increase the speed of code generation, thus leading to the accumulation of technical debt."
But this should not stop CIOs from exploring and implementing AI, added Juan Perez, senior vice president and chief information officer at Salesforce. He views AI as an application that requires proper governance, security controls, maintenance and support, and lifecycle management. He said that as the number of AI products continues to increase, selecting the most appropriate model and underlying data is crucial to support the AI journey.
If applied correctly, generative AI can produce higher-quality products at lower cost. Neal Sample, chief information officer of Walgreens Boots Alliance, said: "It is not a question of whether AI will have a positive impact on the overall business, but the extent and speed of the impact." He emphasized that government regulation and corporate governance are of utmost importance to promoting responsible AI development.
Generative AI: The heart of IT strategy
Machine learning models have the potential to enable faster IT iteration. Andrea Malagodi, CIO of code testing platform Sonar, said that at a minimum these models can automate routine, repetitive tasks, freeing up bandwidth for software developers to focus on more creative, higher-level work. "Investing in generative AI tools to support these teams is an investment in their growth, productivity and overall satisfaction," he said. Meerah Rajavel, chief information officer at Palo Alto Networks, added that generative AI will greatly facilitate development, especially code generation in mature programming languages such as Java, Python and C, but its power doesn't stop there. She believes AI can help shift code testing left, assisting with unit testing, debugging and identifying misconfigurations early in the software development cycle. "As CIO, providing our developers with the best tools to help them succeed is a key component of my job, and AI will undoubtedly help increase efficiency."
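As a loose illustration of the shift-left idea Rajavel describes — running unit tests against generated code early in the cycle rather than after integration — consider this minimal Python sketch. The helper function and its test are entirely hypothetical, a stand-in for code an AI assistant might produce:

```python
# Hypothetical AI-generated helper: parse a "key=value" config line.
def parse_config_line(line):
    key, value = line.split("=", 1)
    return key.strip(), value.strip()

# Shift-left check: a unit test run early in the development cycle
# exercises both the happy path and a malformed input.
def test_parse_config_line():
    assert parse_config_line("retries = 3") == ("retries", "3")
    # A line with no "=" should fail loudly rather than silently:
    try:
        parse_config_line("no-delimiter")
        raised = False
    except ValueError:
        raised = True
    assert raised

test_parse_config_line()
print("shift-left tests passed")
```

The point is not the specific test framework but the timing: catching the missing-delimiter edge case at generation time costs far less than discovering it after deployment.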
AI is at the heart of this year’s IT strategy for Carter Busse, CIO of no-code automation platform company Workato. However, the potential of AI is not limited to the IT field. It can also play a role in customer support, improving productivity and promoting cross-department innovation. Busse pointed out: "The mission of the CIO is to support the efficient development of the business, and AI is a key means for us to advance." AI can significantly promote cross-department operations, create more value for the enterprise, and promote the overall development of the organization.
So code generation isn’t the only area benefiting from the latest wave of AI. Sunny Bedi, chief information officer and chief data officer at cloud data warehouse company Snowflake, said employee productivity has been most affected. He predicts that in the future, all employees will work closely with AI assistants to help personalize the onboarding experience for new employees, coordinate internal communications, and prototype innovative ideas. He added that by leveraging large language models' out-of-the-box capabilities, enterprises can also reduce their reliance on third parties for operations such as search, document extraction, content creation and review, and chatbots.
How AI Alleviates Technical Debt
Generative AI models are not the main contributor to IT debt; how they are applied is. "The aspects you choose to implement AI in your organization, and the way you implement it, need to be carefully considered to avoid the creation of technical debt," Sample noted. He further pointed out that applying AI models to an existing technology ecosystem (for example, modifying connections in a legacy stack while integrating generative AI at the same time) increases the risk of accumulating technical debt.
On the other hand, if used correctly, generative AI can help eliminate old technical debt by rewriting legacy applications and automating backlog tasks. That said, CIOs shouldn't jump in without the right cloud environment and strategy. "If organizations implement generative AI too early, existing technical debt may continue to grow or, in some cases, become long-term technical debt," said Steve Watt, chief information officer at Hyland, the company behind the enterprise management software suite OnBase. He therefore recommends developing a plan to address existing technical debt so that new AI-driven initiatives don't collapse.
Initially, companies may increase IT debt as they experiment with AI and large language models. Busse believes, however, that in the long run large language models will reduce debt, although this depends on AI's ability to respond dynamically to changing needs. "By embedding AI into your business processes, you will be able to adapt to process changes faster, thereby reducing technical debt," he said.
Assessing the quality of AI code
Recently, questions have been raised about the quality of code generated by AI, with reports highlighting an increase in code changes and code reuse since the advent of AI assistants. RedMonk's O'Grady said the quality of AI-generated code depends on many factors, including the deployed model, the use case at hand, and the skills of the developer. "Just like human developers, AI systems do output flawed code and will continue to do so in the future." Sonar's Malagodi, for example, cited a recent Microsoft Research study that assessed 22 models and found that they generally performed poorly on benchmarks, suggesting fundamental blind spots in their training setup. The report explains that while AI assistants can generate functional code, they do not always go beyond functional correctness to consider other concerns such as efficiency, security and maintainability, let alone adherence to coding conventions.
Malagodi believes there is still plenty of room for improvement here. "While generative AI can generate more lines of code faster, if the quality is not good, the process can become very time-consuming," he said. He urged CIOs and CTOs to take the necessary measures to ensure that AI-generated code is clean. "This means that the code generated by AI is consistent, intentional, adaptable and responsible, resulting in software that is safe, maintainable, reliable and accessible."
Quality issues rooted in these models can adversely affect code output. Alastair Pooley, chief information officer of cloud technology intelligence platform Snow Software, said that although generative AI has the potential to produce excellent technical results, data quality, model architecture and training procedures can lead to poor ones. "Under-trained models or unforeseen edge cases can result in degraded output quality, introduce operational risks and compromise system reliability," he said. All of this requires ongoing review and verification of output and quality.
Palo Alto Networks’ Rajavel added that AI is like any other tool, and the results depend on which tool you use and how you use it. To her, without proper AI governance, the model you choose may result in low-quality artifacts that do not align with the product architecture and expected results. She added that another important factor is which AI you choose for the job at hand, since no one model is one-size-fits-all.
List of Potential AI Risks
In addition to IT debt and code quality, there are a range of potential adverse outcomes to consider when deploying generative AI. “These issues may involve data privacy and security, algorithmic bias, job displacement, ethical dilemmas of AI-generated content, etc.,” Pooley said.
One aspect is that malicious individuals use generative AI to launch attacks. Rajavel noted that cybercriminals have begun leveraging this technology to conduct large-scale attacks, as generative AI is capable of drafting convincing phishing campaigns and spreading disinformation. Attackers can also target generative AI tools and models themselves, causing data leakage or poisoning content output.
O'Grady said: "Generative systems have the potential to accelerate and help attackers, but arguably the biggest concern for many enterprises is the leakage of private data from closed vendor systems."
These techniques can produce very convincing results, but those results can also be riddled with errors. Beyond errors in the model, there are cost implications to consider: it's easy to spend a lot of money on AI unknowingly or unnecessarily, whether by using the wrong model, not understanding the consumption costs, or simply not using it effectively.
Perez said: "AI is not without risks, and it needs to be built from the ground up, with humans in control in every area, to ensure that anyone can trust its results, from the most basic user to the most experienced engineer." Another unresolved issue for Perez is ownership of AI development and maintenance, which also puts pressure on IT teams to keep up with the demand for innovation, as many IT employees lack the time to implement and train AI models and algorithms.
An issue that cannot be ignored: Employment
Then there is the result that attracts the attention of mainstream media: AI replaces the human workforce. But how generative AI will impact employment in the IT industry remains to be determined. “It’s difficult to predict the impact on employment at the moment, so that’s a potential concern,” O’Grady said.
While there are undoubtedly multiple viewpoints in this debate, Walgreens' Sample does not believe AI poses an existential threat to humanity. Instead, he is optimistic about the potential of generative AI to improve workers' lives. He said: "The negative view is that AI will affect many jobs, but the positive view is that AI will make humans better. Ultimately, I think AI will free people from repetitive tasks that can be automated so they can focus on higher-level work."
How to ease concerns caused by AI
There are many ways to alleviate concerns about AI. For Perez, the quality of generative AI depends on the data the models ingest. "If you want high-quality, trustworthy AI, you need high-quality, trustworthy data," he said. The problem, however, is that data is often riddled with errors and requires tools to integrate data from different sources and in different formats, including unstructured data. He also emphasized not just being "in the game" but putting humans firmly in the driver's seat. "I see AI as a trusted advisor, but not the sole decision-maker."
Maintaining software quality also requires rigorous testing to check that AI-generated code is accurate. To this end, Malagodi encourages enterprises to adopt a "clean code" approach, including static analysis and unit testing, to ensure proper quality checks. "When developers focus on clean code best practices, they can be confident that their code and software are safe, maintainable, reliable and accessible."
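To make the static-analysis half of that approach concrete, here is a minimal Python sketch (not any particular vendor's tool) that walks the syntax tree of a hypothetical AI-generated snippet and flags bare `except:` clauses, a common maintainability smell in hastily generated code:

```python
import ast

# Hypothetical AI-generated snippet with a maintainability smell:
GENERATED_SNIPPET = """
def fetch(url):
    try:
        return open(url).read()
    except:
        return None
"""

def find_bare_excepts(source):
    """Return line numbers of except clauses that name no exception type."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

issues = find_bare_excepts(GENERATED_SNIPPET)
print(f"bare except on lines: {issues}")
```

Real static analyzers check far more than this, but the principle is the same: mechanical rules applied uniformly to every piece of generated code, before a human reviewer ever sees it.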
Bedi added that, as with any new technology, initial enthusiasm needs to be tempered with appropriate caution. IT leaders should therefore consider steps to use AI assistants effectively, such as observability tools that can detect architectural drift and support preparation for application needs.
Governance around the adoption of AI
Pooley said: "Generative AI represents a new era of technological advancement that, if managed correctly, has the potential to bring huge benefits." However, he suggested that CIOs should balance innovation with the inherent risks, especially the need to put controls and usage guidelines in place to limit data breaches caused by uncontrolled use of these tools. "As with many technological opportunities, CIOs find themselves held accountable if something goes wrong."
For Sample, regulators have a responsibility to fully address the risks AI poses to society. As an example, he pointed to a recent executive order issued by the Biden administration to establish new AI safety standards. Another aspect is taking the lead in developing corporate guidelines to manage this fast-paced technology. For example, Walgreens has begun to develop a governance framework around AI that includes considerations such as fairness, transparency, security, and explainability.
Workato’s Busse also advocates developing internal directives that prioritize security and governance. He recommends training employees, developing an internal playbook, and implementing an approval process for AI experiments. Pooley noted that many companies have established AI working groups to help address the risks and capture the benefits of generative AI. Some security-conscious organizations are taking more stringent measures: O’Grady added that to guard against intrusion, many buyers will still prioritize on-premises systems.
“CIOs should take the lead in ensuring their teams have the appropriate training and skills to identify, build, implement and use generative AI in a way that benefits their organizations,” Perez said, noting that Salesforce’s product and engineering teams are building a layer of trust between AI inputs and outputs to minimize the risks associated with using this powerful technology.
That said, intentional adoption of AI and governance of it are equally important. “Organizations are rushing to implement AI without having a clear idea of what it does and how it will best benefit their business,” Hyland’s Watt said. AI won’t solve every problem, so understanding which problems the technology can and cannot solve is crucial to getting the most out of it.
Positive Impact on Business
With proper scrutiny, generative AI will improve agility in countless areas, and CIOs expect it to be used to achieve tangible business outcomes, such as better user experiences. Perez said: "Generative AI will allow enterprises to create experiences for customers that once felt impossible. AI will no longer be just a tool for niche teams. Everyone will have the opportunity to use it to increase productivity and efficiency."
But the user experience benefits aren’t limited to external customers. The internal employee experience will also benefit, Rajavel added. She predicts that AI assistants trained on internal data could cut IT requests in half by simply fetching answers already available on internal corporate pages.
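As a rough, hypothetical illustration of the kind of internal assistant Rajavel predicts, the sketch below answers an IT question by ranking invented internal help pages by keyword overlap. A real deployment would index actual corporate content and use an LLM with embeddings rather than word matching; every page and name here is made up:

```python
import re

# Hypothetical internal help pages standing in for real corporate content.
INTERNAL_PAGES = {
    "vpn-setup": "Install the VPN client, then sign in with your corporate SSO account.",
    "password-reset": "Reset your password from the self-service portal under Account Security.",
    "printer-setup": "Add the office printer from Settings using the print server address.",
}

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question):
    """Return the (page, text) pair sharing the most words with the question."""
    q = words(question)
    return max(INTERNAL_PAGES.items(), key=lambda kv: len(q & words(kv[1])))

page, text = answer("How do I reset my password?")
print(page)  # → password-reset
```

Even this toy version shows the mechanism behind the "cut IT requests in half" claim: the answer already exists on an internal page, and the assistant's job is only to find and surface it.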
Sample said Walgreens is also improving the customer experience through generative AI-driven voice assistants, chatbots and text messages. By reducing call volume and improving customer satisfaction, team members can better focus on in-store customers. In addition, the company has deployed AI to optimize in-store operations such as supply chain, floor space and inventory management, helping leaders make decisions about business revenue and profits. But vigilance is key.
O'Grady said: "As with all previous technological waves, AI will undoubtedly bring significant negative impacts and collateral damage. Overall, AI will accelerate development and enhance human capabilities, but at the same time it will greatly expand the scope of various problems."
The above is the detailed content of CIO shares: How to harness generative AI in the enterprise, published on the PHP Chinese website.
