Translator|Zhu Xianzhong
Planning|Xu Jiecheng
Unlike the other software development tools developers trust, AI tools carry unique risks that stem from how they are trained, built, hosted, and used.
Since the release of ChatGPT at the end of 2022, the Internet has been filled with arguments for and against it in almost equal measure. Whether you like it or not, AI is making its way into your development organization. Even if you don't plan to build an AI product or have an AI tool write code for you, it may still be baked into the tools and platforms you use to build, test, and run your source code.
AI tools carry special risks that can undermine the productivity gains of automating tasks. These risks stem mainly from how the AI is trained, built, hosted, and used, and they make AI tools different in important ways from the other tools developers trust. Understanding the risks is the first step toward managing them. To help you evaluate those risks, we have drawn up a set of interview questions for AI tools, questions that can determine whether a tool deserves to "join" your company.
Generally speaking, all AI tools share certain traits. Regardless of the type or purpose of the AI, ask the following questions before choosing to use it:
- Where does this AI tool's infrastructure live? All modern AI requires dedicated, expensive hardware to run. Unless you plan to acquire a new data center, your AI tool will work remotely, which means remote access and off-site data storage, and the security risks that come with them.
- What protections are in place against IP loss when code leaves the processing boundary? From smart TVs to smart cars, AI-powered products feed data back to their manufacturers. Some businesses use this data to refine their software, but others sell it to advertisers. You must therefore understand exactly how the AI tool will use or process the source code or other private data it handles for its primary task; a sketch after this list shows one pre-submission redaction approach.
- Can your input be used for model training? Continuous training of AI models is of keen interest to every model owner and trainer. Model owners, for example, generally do not want advertisers influencing their model's training in order to win themselves free advertising.
- How accurate are the results? ChatGPT's most notorious shortcoming is the inaccuracy of its output: it produces falsehoods as fluently as truths, a failure known as AI hallucination. Understanding how and where an AI can go wrong helps you manage the mistakes an AI tool will inevitably make.
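To make the IP-loss question above concrete, here is a minimal sketch of how a team might redact obvious secrets from source before it is sent to a remotely hosted AI tool. The `redact_secrets` helper and its patterns are invented for illustration; a real deployment would use a vetted secret scanner rather than this hand-rolled list.

```python
import re

# Illustrative patterns for material that should never leave the
# processing boundary; assumptions, not a complete or vetted list.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def redact_secrets(source: str) -> str:
    """Replace likely credentials with a placeholder before the code
    is submitted to a remote AI service."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub("[REDACTED]", source)
    return source

if __name__ == "__main__":
    snippet = 'api_key = "sk-live-1234"\nprint("hello")'
    print(redact_secrets(snippet))  # the api_key line is masked
```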
Beyond this, every AI comes with its own set of security concerns. These new concerns include attacks on AI model training that can corrupt a model's results or leak proprietary information about how the model operates, as well as questions about the quality of what the model generates. And AI models must interact with the traditional world through APIs, web access, mobile apps, and other applications, all of which need to be built securely.
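Because those integration points are ordinary attack surface, one practical habit is to treat model output like any other untrusted input. The sketch below is a hedged illustration, not any real tool's API: it validates that a hypothetical model reply conforms to an expected shape before the application acts on it.

```python
import json

# Illustrative whitelist of actions the application is willing to take.
ALLOWED_ACTIONS = {"comment", "suggest_fix", "no_op"}

def parse_model_response(raw: str) -> dict:
    """Validate a model's JSON reply as untrusted input: reject
    malformed JSON, non-object replies, and unknown action values."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(data, dict):
        raise ValueError("response must be a JSON object")
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unexpected action: {action!r}")
    return {"action": action, "body": str(data.get("body", ""))}
```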
In addition to these general questions, developers who use AI tools, such as an AI security scanner, must ask further questions in order to manage the risks those tools introduce into the software development process:
- Is an AI tool right for this scenario? It is critical to understand what AI is and is not good at. If a task can be framed as "make a decision based on learned rules" or "write content that follows learned rules", AI usually handles it well. If the problem varies beyond that, AI may perform poorly.
- What guardrails are in place for when the AI tool makes a mistake? Never introduce a single point of failure into your process, least of all one that can hallucinate. The recommended approach is the traditional practice of defense in depth for managing risk: if one layer of the system creates a problem, the next layer should catch it (see the sketch after this list).
- How will the tool's results be monitored and reviewed? This is an old question asked anew. Traditional logging of problems comes in two parts: capturing data about significant events, and keeping an audit log. Until AI matures further and its flaws are understood or mitigated, humans must stay in the loop; the sketch after this list combines both ideas.
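The sketch below, a hypothetical gate rather than any real tool's interface, combines the last two points: each AI-written change must pass independent layers (static checks, then tests, then human sign-off), and every verdict is written to an audit log so the results can be reviewed later. The layer functions are placeholders standing in for real scanners and test suites.

```python
import logging

# Two-part logging, as described above: significant events plus an audit trail.
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai_gate.audit")

def static_checks_pass(code: str) -> bool:
    """Layer 1: placeholder for a linter / security scanner run."""
    return "eval(" not in code  # illustrative rule only

def tests_pass(code: str) -> bool:
    """Layer 2: placeholder for running the project's test suite."""
    return True  # assume the suite ran elsewhere

def human_approved(code: str) -> bool:
    """Layer 3: a human stays in the loop until AI matures."""
    return input("Approve this AI-written change? [y/N] ").lower() == "y"

def gate(code: str) -> bool:
    """Defense in depth: any failing layer blocks the change,
    and every verdict is recorded for audit."""
    for name, layer in [("static", static_checks_pass),
                        ("tests", tests_pass),
                        ("human", human_approved)]:
        ok = layer(code)
        audit.info("layer=%s verdict=%s", name, "pass" if ok else "fail")
        if not ok:
            return False
    return True
```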
These days, more and more developers are "hiring" ChatGPT to write source code. Early reports indicate that ChatGPT can write code in many programming languages and is fluent in all the common ones. Because of the limits of its current training and models, though, the code it produces is not always perfect: it often contains business logic flaws that change how the software behaves, syntax errors that mix different language versions, and other mistakes that look remarkably human.
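As a hypothetical illustration of the "business logic flaw" category, consider a discount function of the kind an AI assistant might plausibly produce: syntactically clean Python that quietly applies discount and tax in the wrong order, which only a test or a careful reviewer would catch.

```python
def total_price(price: float, discount_pct: float, tax_rate: float) -> float:
    # Plausible-looking AI output: taxes the pre-discount price, so
    # customers are overcharged whenever a discount applies.
    return price * (1 + tax_rate) - price * discount_pct / 100

def total_price_fixed(price: float, discount_pct: float, tax_rate: float) -> float:
    # Correct order: apply the discount first, then tax what is actually paid.
    return price * (1 - discount_pct / 100) * (1 + tax_rate)

# 100 at 10% off with 8% tax: buggy version charges 98.00, correct is 97.20.
assert total_price(100, 10, 0.08) != total_price_fixed(100, 10, 0.08)
```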
Roughly speaking, ChatGPT is a junior programmer. So who will be its supervisor? When working with code written by this junior developer, you must consider how to manage it:
- Who will supervise it and vouch for the overall quality of the code it writes? Junior developers need help from senior developers: every line of code must be tested, and some of it must be fixed. Reports suggest, however, that this proofreading can be more time-consuming and complicated than writing the code from scratch.
- Is it injecting or remixing training code into your codebase? A more insidious threat is that AI bots such as GitHub Copilot sometimes produce source code that exactly replicates blocks of code from their training data. Anti-plagiarism tooling is therefore needed to manage the licensing risk (see the fingerprinting sketch after this list).
- Where does the AI tool get its training data? An AI model is only as capable as its training data. If an AI is trained on old or incorrect code, it will produce old and incorrect results.
- Where is the engine hosted? AI bots that analyze source code have to bring that code to their processors. Give special consideration to how the data is protected, used, and disposed of after it leaves your company's control.
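For the replication risk above, one minimal, hedged approach is to fingerprint normalized code blocks and compare them against an index built from license-sensitive sources. Real anti-plagiarism tools are far more sophisticated (token-level and structural matching, large corpora); the corpus and helpers here are invented purely for illustration.

```python
import hashlib

def fingerprint(block: str) -> str:
    """Hash a code block after collapsing whitespace, so trivially
    reformatted copies still collide."""
    normalized = " ".join(block.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Hypothetical index of fingerprints from license-sensitive code.
KNOWN_FINGERPRINTS = {
    fingerprint("def quicksort(xs):\n    ..."),
}

def flag_replicated(generated: str) -> bool:
    """True if an AI-generated block exactly matches known code
    (modulo whitespace); such blocks need licensing review before merge."""
    return fingerprint(generated) in KNOWN_FINGERPRINTS
```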
In any case, the release of ChatGPT in December 2022 heralds a new era of software development. Keep an eye on how tools like these evolve, and don't be overwhelmed by them. When adopting these new tools, remember that the more things change, the more they should stay the same: it is always better to prevent a security incident than to discover one.
Original link: https://thenewstack.io/hiring-an-ai-tool-to-code-what-to-ask-at-the-interview/