Generative AI is opening up innovative opportunities for enterprises, but in this new era senior managers need to pay close attention to how the technology is applied in order to ensure code quality and reduce technical risk. Executives should carefully evaluate the reliability and security of AI solutions and put effective monitoring in place so that potential problems are detected and corrected in a timely manner. By establishing strict technical standards and oversight mechanisms, companies can better leverage generative AI to transform the organization early on and shape IT strategy for years to come. While large language models accelerate engineering agility, they also create technical debt. Stephen O'Grady, principal analyst and co-founder of RedMonk, pointed out: "Generative systems may increase the speed of code generation, thus leading to the accumulation of technical debt."
But this should not stop CIOs from exploring and implementing AI, added Juan Perez, senior vice president and chief information officer at Salesforce. He views AI as an application that requires proper governance, security controls, maintenance and support, and lifecycle management. He said that as the number of AI products continues to increase, selecting the most appropriate model and underlying data is crucial to support the AI journey.
If applied correctly, generative AI can produce higher quality products at lower cost. Neal Sample, chief information officer of Walgreens Boots Alliance, said: "It is not a question of whether AI will have a positive impact on the overall business, but of the extent and speed of that impact." He emphasized that both government regulation and corporate governance are essential to promoting responsible AI development.
Generative AI: The heart of IT strategy
AI is at the heart of this year's IT strategy for Carter Busse, CIO of no-code automation platform company Workato. The potential of AI is not limited to IT, however; it can also play a role in customer support, improve productivity, and spur cross-department innovation. Busse pointed out: "The mission of the CIO is to support the efficient development of the business, and AI is a key means for us to advance." AI can significantly improve cross-department operations, create more value for the enterprise, and drive the development of the organization as a whole.
So code generation isn’t the only area benefiting from the latest wave of AI. Sunny Bedi, chief information officer and chief data officer at cloud data warehouse company Snowflake, said employee productivity has been most affected. He predicts that in the future, all employees will work closely with AI assistants to help personalize the onboarding experience for new employees, coordinate internal communications, and prototype innovative ideas. He added that by leveraging large language models' out-of-the-box capabilities, enterprises can also reduce their reliance on third parties for operations such as search, document extraction, content creation and review, and chatbots.
Generative AI models themselves are not the main contributor to IT debt; how they are applied is. "The aspects you choose to implement AI in your organization, and the way you implement it, need to be carefully considered to avoid the creation of technical debt," Sample noted. He further pointed out that applying AI models to an existing technology ecosystem, for example modifying integrations in a legacy stack while layering in generative AI at the same time, increases the risk of accumulating technical debt.
On the other hand, if used correctly, generative AI can help eliminate old technical debt by rewriting legacy applications and automating backlog tasks. That said, CIOs shouldn't jump in without the right cloud environment and strategy. "If organizations implement generative AI too early, existing technical debt may continue to grow or, in some cases, become long-term technical debt," said Steve Watt, chief information officer at Hyland, the company behind the enterprise management software suite OnBase. He therefore recommends developing a plan to address existing technical debt so that new AI-driven initiatives don't collapse under it.
Initially, companies may increase IT debt as they experiment with AI and large language models. Busse believes, however, that in the long run large language models will reduce debt, though this depends on AI's ability to respond dynamically to changing needs. "By embedding AI into your business processes, you will be able to adapt to process changes faster, thereby reducing technical debt," he said.
Recently, questions have been raised about the quality of code generated by AI, with reports highlighting an increase in code churn and copy-pasted code since the advent of AI assistants. RedMonk's O'Grady said the quality of AI-generated code depends on many factors, including the deployed model, the use case at hand, and the skills of the developer. "Just like human developers, AI systems do output flawed code and will continue to do so in the future." Sonar's Malagodi, for example, cited a recent Microsoft Research study that assessed 22 models and found that they generally performed poorly on benchmarks, suggesting fundamental blind spots in their training setup. The report explains that while AI assistants can generate functional code, they do not always go beyond functional correctness to consider factors such as efficiency, security, and maintainability, let alone adherence to coding conventions.
Malagodi believes there is still plenty of room for improvement here. "While generative AI can generate more lines of code faster, if the quality is not good, the process can become very time-consuming," he said. He urged CIOs and CTOs to take the necessary measures to ensure that AI-generated code is clean. "This means that the code generated by AI is consistent, intentional, adaptable and responsible, resulting in software that is safe, maintainable, reliable and accessible."
Quality issues rooted in the models themselves can also adversely affect code output. Alastair Pooley, chief information officer of cloud technology intelligence platform Snow Software, said that although generative AI has the potential to produce excellent technical results, data quality, model architecture and training procedures can all lead to poor outcomes. "Under-trained models or unforeseen edge cases can result in degraded output quality, introduce operational risks and compromise system reliability," he said. All of this requires ongoing review and verification of outputs and their quality.
Palo Alto Networks' Rajavel added that AI is like any other tool: the results depend on which tool you use and how you use it. In her view, without proper AI governance, the model you choose may produce low-quality artifacts that do not align with the product architecture and expected results. Another important factor, she added, is which AI you choose for the job at hand, since no single model is one-size-fits-all.
A list of potential AI risks
One concern is malicious actors using generative AI to launch attacks. Rajavel noted that cybercriminals have begun leveraging the technology to conduct large-scale attacks, as generative AI is capable of drafting convincing phishing campaigns and spreading disinformation. Attackers can also target generative AI tools and models themselves, causing data leakage or poisoning content output.
O'Grady said: "Generative systems have the potential to accelerate and help attackers, but arguably the biggest concern for many enterprises is the leakage of private data from closed vendor systems."
These technologies can produce very convincing results, but those results can also be riddled with errors. Beyond errors in the model, there are cost implications to consider: it's easy to spend a lot of money on AI unknowingly or unnecessarily, whether by using the wrong model, not understanding consumption-based costs, or simply failing to use the technology effectively.
Perez said: "AI is not without risks, and it needs to be built from the ground up, with humans in control of every area, to ensure that anyone can trust its results, from the most basic user to the most experienced engineer." Another unresolved issue for Perez is ownership of AI development and maintenance, which also puts pressure on IT teams to keep up with the demand for innovation, as many IT employees lack the time to implement and train AI models and algorithms.
Then there is the outcome that attracts the most mainstream media attention: AI replacing the human workforce. How generative AI will affect employment in the IT industry, however, remains to be seen. "It's difficult to predict the impact on employment at the moment, so that's a potential concern," O'Grady said.
While there are undoubtedly multiple viewpoints in this debate, Walgreens' Sample does not believe AI poses an existential threat to humanity. Instead, he is optimistic about the potential of generative AI to improve workers' lives. He said: "The negative view is that AI will affect many jobs, but the positive view is that AI will make humans better. Ultimately, I think AI will free people from repetitive tasks that can be automated so they can focus on higher-level work."
There are many ways to ease the concerns AI raises. For Perez, the quality of generative AI depends on the data the models ingest. "If you want high-quality, trustworthy AI, you need high-quality, trustworthy data," he said. The problem, however, is that data is often riddled with errors, and enterprises need tools to integrate data from different sources and in different formats, including unstructured data. He also emphasized not just keeping humans involved but putting them firmly in the driver's seat. "I see AI as a trusted advisor, but not the sole decision-maker."
Maintaining software quality also requires rigorous testing to verify that AI-generated code is accurate. To this end, Malagodi encourages enterprises to adopt a clean code approach, including static analysis and unit testing, to ensure proper quality checks. "When developers focus on clean code best practices, they can be confident that their code and software are safe, maintainable, reliable and accessible."
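As a rough illustration of what such a quality gate might look like in practice, the sketch below combines the two checks Malagodi mentions: a static analysis pass and a unit test run against a snippet of generated code. It is a minimal example under stated assumptions rather than anything prescribed in the article: the apply_discount snippet is a hypothetical stand-in for real assistant output, and ruff is used only as one possible linter; any analyzer a team has standardized on would fill the same role.

```python
import ast
import subprocess
import tempfile
import unittest

# Hypothetical stand-in for code an AI assistant produced; in practice this
# string would come from the assistant's output, not be hard-coded.
GENERATED_SOURCE = """
def apply_discount(price, rate):
    return round(price * (1 - rate), 2)
"""


def static_checks(source: str) -> list[str]:
    """Run basic quality gates on a string of generated Python code."""
    # Gate 1: the code must at least parse.
    try:
        ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]

    # Gate 2: hand the code to an external analyzer. ruff is used here only
    # as an example; substitute whichever linter or scanner you have adopted.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(source)
        path = tmp.name
    try:
        result = subprocess.run(["ruff", "check", path],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return ["ruff not installed; static analysis skipped"]
    return [] if result.returncode == 0 else [result.stdout.strip()]


class TestGeneratedCode(unittest.TestCase):
    """Gate 3: the generated function must still pass the existing unit tests."""

    def setUp(self):
        self.namespace = {}
        exec(GENERATED_SOURCE, self.namespace)  # load the generated function

    def test_discount_is_applied(self):
        self.assertEqual(self.namespace["apply_discount"](100.0, 0.10), 90.0)


if __name__ == "__main__":
    print("static issues:", static_checks(GENERATED_SOURCE))
    unittest.main()
```

In a real pipeline these gates would typically run in CI, so that an AI-generated change is blocked the same way any other failing change would be.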
As with any new technology, Bedi added, initial enthusiasm needs to be tempered with appropriate caution. IT leaders should therefore consider measures that make AI assistants effective to use, such as observability tools that can detect architectural drift and help teams prepare for application needs.
Pooley said: "Generative AI represents a new era of technological advancement that, if managed correctly, has the potential to bring huge benefits." However, he suggested that CIOs balance innovation against the inherent risks, in particular by putting controls and usage guidelines in place to limit data breaches caused by uncontrolled use of these tools. "As with many technological opportunities, CIOs find themselves held accountable if something goes wrong."
For Sample, regulators have a responsibility to fully address the risks AI poses to society. As an example, he pointed to a recent executive order issued by the Biden administration to establish new AI safety standards. Another aspect is taking the lead in developing corporate guidelines to manage this fast-paced technology. For example, Walgreens has begun to develop a governance framework around AI that includes considerations such as fairness, transparency, security, and explainability.
Workato's Busse also advocates developing internal directives that prioritize security and governance. He recommends training employees, developing an internal playbook, and implementing an approval process for AI experiments. Pooley noted that many companies have established AI working groups to help address the risks and capture the benefits of generative AI. Some security-conscious organizations are taking more stringent measures: O'Grady added that, to guard against data exposure, many buyers will continue to prioritize on-premises systems.
"CIOs should take the lead in ensuring their teams have the appropriate training and skills to identify, build, implement and use generative AI in a way that benefits their organizations," Perez said, noting that Salesforce's product and engineering teams are building a trust layer between AI inputs and outputs to minimize the risks associated with using this powerful technology.
That said, adopting AI intentionally matters as much as governing it. "Organizations are rushing to implement AI without having a clear idea of what it does and how it will best benefit their business," Hyland's Watt said. AI won't solve every problem, so understanding which problems the technology can solve, and which it cannot, is crucial to getting the most out of it.
With proper scrutiny, generative AI will improve agility in countless areas, and CIOs expect it to be used to achieve tangible business outcomes, such as better user experiences. Perez said: "Generative AI will allow enterprises to create experiences for customers that once felt impossible. AI will no longer be just a tool for niche teams. Everyone will have the opportunity to use it to increase productivity and efficiency."
But the user experience benefits aren’t limited to external customers. The internal employee experience will also benefit, Rajavel added. She predicts that AI assistants trained on internal data could cut IT requests in half by simply fetching answers already available on internal corporate pages.
Sample said Walgreens is also improving the customer experience through generative AI-driven voice assistants, chatbots and text messages. By reducing call volume and improving customer satisfaction, team members can better focus on in-store customers. In addition, the company has deployed AI to optimize in-store operations such as supply chain, floor space and inventory management, helping leaders make decisions about business revenue and profits. But vigilance is key.
O'Grady said: "As with all previous technological waves, AI will undoubtedly bring significant negative impacts and collateral damage. Overall, AI will accelerate development and enhance human capabilities, but at the same time It will greatly expand the scope of various problems."