
How Artificial Intelligence Can Empower Organizations in 2023


Since ChatGPT's release at the end of 2022, the Internet has swung between pessimism and optimism about it. Love it or hate it, artificial intelligence is coming to your development team. Even if you don't plan to build an AI product or employ AI bots to write code, AI may still be integrated into the tools and platforms used to build, test, and run the source code you write by hand.

AI tools pose distinctive risks, and those risks could offset much of the productivity gained by automating tasks that once required a human brain. They arise from how AI is trained, built, hosted, and used, all of which differ from the software tools developers rely on today. Understanding risk is the first step to managing it, so to help you understand the potential risks of the AI tools headed your way, we have written some interview questions you can treat as part of the AI onboarding process.

These questions should be asked regardless of the type of AI or the purpose you hope to use it for.

#1. Where will the AI tool you choose be hosted?

Modern AI currently requires specialized, expensive hardware to accomplish the headline-grabbing feats we see today. Unless you plan to acquire a brand-new data center, your AI bots will work remotely and require the same security considerations as remote human workers who use remote access and off-site data storage.

What protections are in place to prevent IP loss when code leaves your boundary? Everything from smart TVs to cars is reporting usage data back to manufacturers. Some use that data to improve their software, but others sell it to advertisers. Understand exactly how the AI tool will use or process your source code or other private data in the course of its primary mission.
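As one concrete, purely illustrative mitigation, some teams scrub obvious secrets from code before it is sent to any hosted service. The minimal sketch below assumes hypothetical redaction rules; it is not the mechanism of any particular AI product.

```python
import re

# Illustrative patterns only; real secret scanners cover far more cases.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def redact_before_upload(source: str) -> str:
    """Replace likely credentials with a placeholder before code is sent off-site."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub("[REDACTED]", source)
    return source

if __name__ == "__main__":
    snippet = 'api_key = "sk-1234567890"\nprint("hello")'
    print(redact_before_upload(snippet))  # the credential assignment becomes [REDACTED]
```

A scrubber like this is only a first layer; contractual and architectural controls on the hosting side matter just as much.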

#2. Will your input be used for future training of artificial intelligence?

The ongoing training of AI models will be an area of increasing concern, both for the models' owners and for those whose data is used to train them. For example, owners may want to prevent advertisers from biasing a bot in a direction that favors their customers. Artists who share their work online have already had their styles copied by image-generating bots, and they worry about the loss or theft of their identity as the original authors.

#3. How accurate are the results?

ChatGPT's most famous shortcoming is the inaccuracy of its results: it will confidently assert falsehoods as truth. These are known as AI "hallucinations." Understanding how and where an AI might hallucinate can help you manage it when it does.

Beyond that, AI owners and developers will have their own security concerns, including threats against the training models themselves that could undermine results or reveal proprietary information about how a model works. In addition, AI models will have to interface with APIs and with web, mobile, and other applications, all of which need to be built securely.

Developers will have to ask specific questions when using AI tools, such as AI security scanners, to manage risks introduced during software development.

#4. Is an AI tool best suited for this use case?

Understanding what AI is and is not good at is key. The more a task can be broken down into "making decisions based on learned rules" or "creating content that passes learned rules," the better the AI will do. The further a problem deviates from that, the worse it performs.

What safeguards are in place if the tool misses something, or hallucinates something that isn't there?

Never introduce a single point of failure into your process, especially one that can hallucinate. Rely on traditional defense-in-depth practices, or the "Swiss cheese" approach to managing risk: even if one layer misses a problem, the next layer catches it.
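As a rough illustration of that layered idea, the sketch below chains several independent review layers so an AI check is never the only gate. Every layer function here is a hypothetical placeholder, not a real scanner.

```python
from typing import Callable, Iterable

# Hypothetical review layers: each one can miss a problem, so none of them
# (including the AI-based check) is allowed to be the only gate.

def ai_scan(code: str) -> list[str]:
    """Placeholder for a hosted AI review; assume it returns a list of findings."""
    return []

def static_analysis(code: str) -> list[str]:
    """Placeholder for a conventional static analyzer."""
    return ["unresolved TODO"] if "TODO" in code else []

def size_gate(code: str) -> list[str]:
    """Final layer: flag anything large for mandatory human review."""
    return ["needs human sign-off"] if len(code.splitlines()) > 200 else []

LAYERS: Iterable[Callable[[str], list[str]]] = (ai_scan, static_analysis, size_gate)

def review(code: str) -> list[str]:
    """Run every layer and combine their findings, Swiss-cheese style."""
    findings: list[str] = []
    for layer in LAYERS:
        findings.extend(layer(code))
    return findings
```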

What is required to monitor and review the tool's results? This problem is old wine in a new bottle: traditional logging guidance still applies, and it comes in two parts. The first is capturing data about important events; the second is keeping an audit log. Until AI matures further and its shortcomings are understood or mitigated, humans will remain integral to the workflow.
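A minimal sketch of that two-part logging guidance follows; the file names and fields are illustrative assumptions and do not reflect any specific AI tool's API.

```python
import json
import logging
from datetime import datetime, timezone

# Part one: an event log that captures important events as they happen.
logging.basicConfig(filename="ai_tool_events.log", level=logging.INFO)

def record_ai_result(tool: str, input_ref: str, verdict: str, details: dict) -> None:
    """Write both an event-log line and an append-only audit entry for later human review."""
    logging.info("tool=%s input=%s verdict=%s", tool, input_ref, verdict)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "input": input_ref,
        "verdict": verdict,
        "details": details,
        "reviewed_by_human": False,  # flipped when a person signs off
    }
    # Part two: an audit log, one JSON record per line.
    with open("ai_audit_log.jsonl", "a") as audit:
        audit.write(json.dumps(entry) + "\n")
```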

More and more developers are "hiring" ChatGPT to write source code. Early reports indicate that ChatGPT can write code in many programming languages and is fluent in all of the commonly and publicly discussed ones. But because of the limitations of this beta's training data and model, the code it produces is not always perfect. It often contains business logic flaws that can change how the software behaves, syntax errors that mix different versions of the software, and other issues that look like mistakes a human might make. In other words, ChatGPT is a junior developer. When you use code written by this junior developer, you must decide how to manage it.

Who will be its steward, ensuring the code is functional, optimized, high quality, and up to security standards? Junior developers need guidance from senior developers. Every line of code has to be tested, and some of it has to be fixed. Even so, early reports suggest this proofreading process is faster and easier than writing code from scratch.
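One way to formalize that stewardship is to gate AI-written code behind the same checks a junior developer's pull request would face. The sketch below assumes a pytest/ruff toolchain purely for illustration; substitute whatever test and lint commands your project actually uses.

```python
import subprocess

def checks_pass(path: str) -> bool:
    """Run the test suite and the linter against the AI-authored change."""
    for cmd in (["pytest", path], ["ruff", "check", path]):
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            return False
    return True

def ready_to_merge(path: str, human_approver: str | None) -> bool:
    """AI-written code merges only with green checks and a named human steward."""
    return checks_pass(path) and human_approver is not None
```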

#5. Is it injecting or remixing training code into your code base?

A more insidious threat is that AI bots such as GitHub Copilot sometimes produce source code that copies blocks verbatim from their training data. This calls for anti-plagiarism tooling to ensure copyright risks are managed.
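As a rough sketch of how such a check might work, the example below compares overlapping token windows ("shingles") of generated code against a corpus of known snippets, such as code under restrictive licenses. Real anti-plagiarism tools are far more sophisticated; the window size and threshold here are arbitrary assumptions.

```python
def shingles(code: str, window: int = 8) -> set[tuple[str, ...]]:
    """Overlapping windows of tokens, used to detect near-verbatim copying."""
    tokens = code.split()
    return {tuple(tokens[i:i + window]) for i in range(len(tokens) - window + 1)}

def overlap_ratio(generated: str, known: str) -> float:
    """Fraction of the generated code's shingles that also appear in a known snippet."""
    gen, ref = shingles(generated), shingles(known)
    return len(gen & ref) / len(gen) if gen else 0.0

def needs_license_review(generated: str, corpus: list[str], threshold: float = 0.2) -> bool:
    """Flag output for review if it overlaps heavily with any known snippet."""
    return any(overlap_ratio(generated, known) >= threshold for known in corpus)
```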

#6. Where does the bot get its training data?

An AI model is only as good as its training data. A bot trained on stale or incorrect code will produce stale and incorrect results.

#7. Where is the engine hosted?

Similarly, AI bots that analyze source code need to bring that source code to their processing facilities. Pay special attention to how your data is protected, used, and processed after it leaves your company.

The late-2022 release of ChatGPT heralds a new era in software development. It is important to adapt to these changes rather than be knocked flat by them. As you adopt these new tools, keep in mind that the more things change, the more one principle stays the same: it is better to prevent a security incident than to clean up after one.

Source: www.cio.com
