Table of Contents
All roads lead to 2023
The Changing Nature of Artificial Intelligence (From Anomaly Detection to Automated Response)
Natural Language Processing
Deepfakes and Related Malicious Responses
But maybe not yet...
The Defense Potential of Artificial Intelligence
The proof of the AI pudding is in the regulations
Moving forward

Network Insights 2023 | Artificial Intelligence

Apr 11, 2023, 05:40 PM

The adoption of artificial intelligence (AI) is accelerating across industry and society, as governments, civil society organizations, and industry recognize that AI-driven automation can increase efficiency and reduce costs. The process is irreversible.

What remains unknown is the extent of the danger that may arise when adversaries begin to use artificial intelligence as an effective offensive weapon rather than as a tool for beneficial improvement. That day is coming, possibly starting in 2023.

All roads lead to 2023

Alex Polyakov, CEO and co-founder of Adversa.AI, is focused on 2023 primarily for historical and statistical reasons. "The period from 2012 to 2014," he said, "witnessed the beginning of secure AI research in academia. Statistically, it takes three to five years for academic results to move into practical attacks on real-world applications." Examples of such attacks have been demonstrated at Black Hat, Defcon, HITB, and other industry conferences since 2017 and 2018.

"Then," he continued, "it's going to be another three to five years before real incidents are discovered in the wild. We're talking about next year when some massive Log4j-type vulnerabilities in artificial intelligence will be discovered web3 exploits at scale."

Starting in 2023, attackers will have what is called "exploit market fit." "Exploit market fit is a scenario where hackers know a way to gain value by exploiting a specific vulnerability," he said. "Right now, financial and internet companies are completely open to cybercriminals, and it's obvious how to hack them to extract value. I think once attackers find market-fit vulnerabilities, the situation will worsen and spread to other AI-driven industries."

The argument is similar to one made by New York University professor Nasir Memon, who explained the delayed weaponization of deepfakes with the comment, "The bad guys haven't figured out a way to monetize the process." Monetization through exploit market fit will lead to widespread attacks, possibly starting in 2023.

The Changing Nature of Artificial Intelligence (From Anomaly Detection to Automated Response)

Over the past decade, security teams have primarily used AI for anomaly detection; that is, to detect signs of compromise, the presence of malware, or active adversarial activity within the systems they are charged with protecting. This has been primarily passive detection, with human threat analysts and responders handling the response. That is changing. Limited resources, which will be squeezed further by the expected economic downturn and possible recession of 2023, are driving the need for more automated responses. Currently, this is largely limited to simple automatic quarantine of infected devices, but more extensive automated AI-triggered responses are inevitable.
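To make that shift concrete, here is a minimal sketch of the pattern: an anomaly detector whose verdicts trigger an automated response. It assumes scikit-learn's IsolationForest for detection; the quarantine_device() hook is hypothetical and stands in for a real EDR or NAC integration.

```python
# Minimal sketch of AI-triggered response: an anomaly detector whose
# verdicts drive an automated quarantine action.
import numpy as np
from sklearn.ensemble import IsolationForest

def quarantine_device(device_id: str) -> None:
    # Placeholder for a real isolation action (EDR/NAC API call).
    print(f"[response] quarantining device {device_id}")

# Feature vectors per device, e.g. bytes out, new processes, failed logins.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # normal telemetry
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

live = {"host-17": np.array([[0.2, -0.1, 0.4]]),
        "host-42": np.array([[8.0, 9.5, 7.2]])}            # obvious outlier

for device_id, features in live.items():
    if detector.predict(features)[0] == -1:                # -1 means anomaly
        quarantine_device(device_id)
```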

Adam Kahn, Vice President of Security Operations at Barracuda, said AI will have a significant impact on security. "It will prioritize the security alerts that require immediate attention and action. SOAR (Security Orchestration, Automation, and Response) products will continue to play a greater role in alert triage." This is the traditionally beneficial use of AI in security to date. It will continue to grow in 2023, although the algorithms involved will need to be protected from malicious manipulation.

Anmol Bhasin, chief technology officer at ServiceTitan, said: "As companies look to cut costs and extend their runway, automation through artificial intelligence will become a major factor in staying competitive. In 2023, we will see AI adoption increase, the number of people using the technology grow, and new enterprise AI use cases come to light."

Artificial intelligence will become more deeply embedded in every aspect of business. Where security teams once used AI to protect the enterprise from attack, they now need to protect AI across the wider business before it, too, is used against the enterprise. This will become more difficult as attackers come to understand artificial intelligence, understand its weaknesses, and find ways to profit from those weaknesses.

As the use of AI grows, the nature of its purpose changes. Initially, it was used in business primarily to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen, and these predictions will often focus on people (employees and customers). Addressing the known weaknesses of AI will become even more important: bias in AI can lead to bad decisions, while a failure to learn can lead to no decisions. Since the targets of such AI will be humans, the need for integrity and impartiality in AI becomes imperative.

"The accuracy of artificial intelligence depends in part on the completeness and quality of the data," commented Shafi Goldwasser, co-founder of Duality Technologies. "Unfortunately, historical data on minority groups is often lacking, and when present, it can reinforce patterns of social bias." Unless eliminated, this social bias will affect minority groups within the workforce, leading to prejudice against individual employees and missed management opportunities.

Great strides were made against bias in 2022, and this will continue in 2023. The work is primarily based on examining the AI's output, confirming it is as expected, and understanding which part of the algorithm produced the "biased" result. It is a process of continual algorithm refinement that will clearly produce better results over time. But ultimately there will remain a philosophical question of whether bias can ever be completely removed from anything made by humans.

"The key to reducing bias is to simplify and automate the monitoring of AI systems. Without proper monitoring of AI systems, the biases built into the model may be accelerated or amplified," Vishal, founder and CEO of Vianai Sikka said. "By 2023, we will see organizations empowering and educating people to monitor and update AI models at scale, while providing regular feedback to ensure the AI ​​is ingesting high-quality, real-world data."

The failure of artificial intelligence is usually caused by an inadequate data lake from which to learn. The obvious solution is to increase the size of the data lake. But when the subject is human behavior, that means a vastly increased lake of personal data, one that for AI is more like an ocean. In most legitimate circumstances the data will be anonymized, but as we know, it is very difficult to fully anonymize personal information.

"Privacy is often overlooked when considering model training," commented Nick Landers, director of research at NetSPI, "but data cannot be completely anonymized without destroying its value to machine learning (ML). In other words , models already contain large amounts of private data that could be extracted as part of an attack." As the use of AI grows, so will the threats against it by 2023.

BlackBerry Senior Vice President and Chief Information Security Officer John McClurg warned: "Threat actors will not be caught unprepared in the cyber warfare space, but will get creative and use their vast wealth to try to find ways to leverage AI and develop new attack vectors."

Natural Language Processing

Natural language processing (NLP) will become an important part of enterprises' use of artificial intelligence. The potential is obvious. "NLP AI will be at the forefront in 2023, as it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights into their needs, preferences, and even emotions," said Jose Lopez, principal data scientist at Mimecast. "Organizations are likely to offer other types of services, focused not just on security or threats but on improving productivity, by using AI to generate emails, manage schedules, or even write reports." But he also sees dangers. "This will also drive cybercriminals to invest further in AI poisoning and cloaking techniques. Moreover, malicious actors will use NLP and generative models to automate attacks, reducing their costs and reaching far more potential targets."
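A minimal sketch of the benign side of this use case: scoring the sentiment of message text with the Hugging Face pipeline API and its default sentiment model. The sample emails are invented.

```python
# Score the sentiment of message text with a pretrained NLP model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model

emails = [
    "Thanks so much, the new dashboard solved our reporting problem!",
    "This is the third outage this month. We are considering other vendors.",
]
for email, result in zip(emails, classifier(emails)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {email[:50]}")
```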

Polyakov agrees that NLP is increasingly important. "One of the areas where we're likely to see more research in 2023, and possibly new attacks after that, is NLP," he said. "While we've seen many examples of computer vision-related research this year, next year we'll see even more research focused on large language models (LLMs)."

But it has been known for some time that LLMs can be problematic, and there is a recent example. On November 15, 2022, Meta AI (still Facebook to most people) launched Galactica. Meta said the system was trained on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference materials, and knowledge bases.

"The model was designed to store, combine, and reason about scientific knowledge," explained Polyakov, but Twitter users quickly tested its input tolerance. "As a result, the model produced realistic nonsense, not scientific literature." "Realistic nonsense" is putting it kindly: the model produced biased, racist, and sexist output, and even false attributions. Within days, Meta AI was forced to shut it down.

"So the new LLM will have a lot of risks that we are not aware of," Polyakov continued, "and it is expected that this will be a big problem." Solving the problems of LLM while exploiting the potential will be the AI ​​developers The main task moving forward.

Prompted by the Galactica problems, Polyakov tested the semantic skills of ChatGPT, the AI-based chatbot developed by OpenAI on GPT-3.5 (GPT stands for Generative Pre-trained Transformer) and released for crowdsourced internet testing in November 2022. ChatGPT is impressive. It has found and recommended fixes for vulnerabilities in smart contracts, helped develop Excel macros, and even provided a list of methods that could be used to trick an LLM.

One of the methods in that reply is role play: tell the LLM that it is pretending to be an evil character in a play. This is where Polyakov began his own tests, based on the Jay and Silent Bob "If you were a sheep..." meme.

He then iteratively refined his question using multiple abstractions until he successfully obtained a response that bypassed ChatGPT's blocking policy on content violations. “The important thing about this advanced technique of multiple abstractions is that neither the questions nor the answers will be flagged as illegal content!” Polyakov said.

He went a step further, tricking ChatGPT into outlining a method for destroying humanity—a method that bears a striking resemblance to the TV show Utopia.

He then asked for an adversarial attack on an image classification algorithm—and got one. Finally, he demonstrated ChatGPT's ability to "hack" a different LLM (Dalle-2) to bypass its content moderation filters. He succeeded.
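For readers unfamiliar with the genre, the textbook adversarial attack on an image classifier is the fast gradient sign method (FGSM). The sketch below is a generic FGSM example, not necessarily the attack ChatGPT produced; it uses a pretrained torchvision ResNet-18 and a random tensor standing in for a real image.

```python
# Fast gradient sign method (FGSM): nudge each pixel in the direction
# that most increases the classifier's loss on its own prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224)            # stand-in for a real image
label = model(image).argmax(dim=1)            # model's current prediction

image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()                               # gradient w.r.t. the pixels

epsilon = 0.03                                # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The prediction may flip even though the change is visually negligible.
print("before:", label.item(), "after:", model(adversarial).argmax(dim=1).item())
```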

The basic point of these tests is that LLMs that mimic human reasoning react in a human-like manner; that is, they may be susceptible to social engineering. As LLMs become more mainstream in the future, it may only take advanced social engineering skills to defeat them or circumvent their good behavior policies.

It is also worth noting the extensive detail ChatGPT provides when finding weaknesses in your code and suggesting improvements. That's good; but adversaries can use the same process to develop exploits and better obfuscate their code, and that's bad.

Finally, we should note that the combination of AI chatbots of this quality with the latest deepfake video technology may soon yield astounding disinformation capabilities.

Problems aside, the potential of LLMs is huge. "Large language models and generative AI will become fundamental technologies for a new generation of applications," commented Villi Iltchev, partner at Two Sigma Ventures. "We will see a new generation of enterprise applications emerge to challenge the established vendors in almost every software category. Machine learning and artificial intelligence will become the foundational technologies for the next generation of applications."

He expects applications to take on many of the tasks and responsibilities currently performed by professionals, significantly increasing productivity and efficiency. "Software," he said, "will not only make us more productive, it will also make us better at our jobs."

Deepfakes and Related Malicious Responses

In 2023, one of the most obvious areas for the malicious use of AI is the criminal use of deepfakes. "Deepfakes are now a reality, and the technology that makes them possible is advancing at a breakneck pace," warns Matt Aldridge, principal solutions advisor at OpenText Security. "In other words, deepfakes are no longer just a fascinating creation of science fiction. As cybersecurity experts, we are challenged to develop more robust methods of detecting and deflecting the attacks that deploy them." (For more details and options, see Deepfakes – Significant Threat or Hype Threat?)

Machine learning models already available to the public can automatically translate between languages in real time and transcribe audio into text, and recent years have seen huge growth in computer-driven bot conversations. As these technologies work together, the landscape of attack tools is vast and can lead to dangerous situations in targeted attacks and elaborate scams.

"In the next few years," Aldridge continued, "we may be targeted by phone scams powered by deepfake technology, which may impersonate sales associates, business leaders, or even family members. In less than ten years Over the course of years, we could be the target of these calls regularly without even realizing we weren’t talking to a human being.” Lucia Milica, global resident CISO at Proofpoint, agrees that the deepfake threat is escalating. “Deepfake technology is becoming increasingly accessible to the masses. Thanks to artificial intelligence generators trained on huge databases of images, anyone can generate a deepfake without technical knowledge. Although state-of-the-art The model's output is not without flaws, but the technology is constantly improving and cybercriminals will start using it to create irresistible narratives."

So far, deepfakes have been used primarily for satire and pornography. In the relatively few cybercriminal attacks, they have mainly figured in fraud and business email compromise schemes. Milica expects far wider use in the future. "Imagine the chaos that could ensue in financial markets when a deepfaked CEO or CFO of a major company makes a bold statement that sends the stock sharply down or up. Or consider how malefactors could combine deepfakes with the circumvention of biometric authentication for identity fraud or account takeover. These are just a few examples, and we all know cybercriminals can be very creative." In times of geopolitical tension, the prospect of sowing chaos in Western financial markets could indeed appeal to hostile states.

But maybe not yet...

The expectations for artificial intelligence may still be a little ahead of its realization. "'Popular' large machine learning models will have little impact on cybersecurity [in 2023]," said Andrew Patel, senior researcher at WithSecure Intelligence. "Large language models will continue to push AI research forward. Expect GPT-4 and an exciting new version of GATO in 2023. Expect Whisper to be used to transcribe a large portion of YouTube content, leading to larger training sets for language models. But despite the democratization of large models, their effect on cybersecurity will be minimal, from both the attack and the defense perspectives. These models remain too heavy, too expensive, and impractical from the point of view of either attackers or defenders."
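For context, transcription with OpenAI's open-source whisper package is already only a few lines of code, which is why Patel expects it to be used at scale. A minimal sketch, assuming the openai-whisper package is installed; the audio file name is a placeholder.

```python
# Transcribe an audio file with the open-source Whisper model; each
# result includes the full text plus timestamped segments that could
# serve as training data for language models.
import whisper

model = whisper.load_model("base")                 # small multilingual model
result = model.transcribe("downloaded_talk.mp3")   # hypothetical local file

print(result["text"][:200])                        # transcript text
for segment in result["segments"][:3]:             # timestamped chunks
    print(f'{segment["start"]:6.1f}s  {segment["text"]}')
```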

He suggests that truly adversarial AI will come with the growth of "alignment" research, which will become a mainstream topic in 2023. "Alignment," he explained, "will bring the concept of adversarial machine learning into the public consciousness."

AI alignment is the study of the behavior of complex AI models, which some view as transformative AI (TAI) or a precursor to artificial general intelligence (AGI), and of whether such models might behave in undesirable ways potentially harmful to society or life on the planet.

"This discipline," says Patel, "can essentially be considered adversarial machine learning, since it involves determining what conditions lead to undesirable outputs and behavior outside a model's expected distribution. The process involves fine-tuning models using techniques such as RLHF, reinforcement learning from human preferences. Alignment research will lead to better AI models and bring the idea of adversarial machine learning into the public consciousness."
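A minimal sketch of the preference-learning step at the heart of RLHF: a reward model is trained so that human-preferred responses score higher than rejected ones, via the pairwise loss -log sigmoid(r_chosen - r_rejected). The score tensors below are toys standing in for reward-model outputs on pairs of responses.

```python
# Pairwise (Bradley-Terry) preference loss used to train RLHF reward models.
import torch
import torch.nn.functional as F

r_chosen = torch.tensor([1.2, 0.3, 2.0], requires_grad=True)    # human-preferred
r_rejected = torch.tensor([0.4, 0.9, -0.5], requires_grad=True) # dispreferred

loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()   # in training, gradients update the reward model's parameters

print(f"preference loss: {loss.item():.3f}")
```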

Pieter Arntz, senior intelligence reporter at Malwarebytes, agrees that the full-scale cybersecurity threat from artificial intelligence is brewing rather than imminent. "While there is no real evidence that criminal groups possess deep technical expertise in managing and manipulating AI and ML systems for criminal purposes, the interest is certainly there. Often all they need is a technology they can copy or slightly tweak for their own use. So even if we do not anticipate any immediate danger, it is best to watch these developments closely."

The Defense Potential of Artificial Intelligence

AI retains the potential to improve cybersecurity, and 2023 will see further progress thanks to its transformative potential across a range of applications. "In particular, embedding AI at the firmware level should become a priority for organizations," advises Camellia Chan, CEO and founder of X-PHY.

“It is now possible to embed AI-infused SSDs into laptops, with deep learning capabilities that can protect against various types of attacks,” she said. “As a last line of defense, this technology can instantly identify threats that can easily bypass existing software defenses.”

Marcus Fowler, CEO of Darktrace Federal, believes companies will increasingly use AI to cope with resource constraints. "In 2023, CISOs will choose more proactive cybersecurity measures to maximize ROI in the face of budget cuts, shifting investment toward AI tools and capabilities that continuously improve their cyber resilience," he said.

"With human-driven ethical hacking, penetration testing, and red teaming remaining scarce and expensive resources, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red-team efforts, harden environments, and reduce attack-surface vulnerability," he continued.

Karin Shopen, vice president of cybersecurity solutions and services at Fortinet, foresees a rebalancing between AI delivered from the cloud and AI built natively into a product or service. "In 2023," she said, "we expect to see CISOs rebalance their AI by purchasing solutions that deploy AI on-premises for behavior-based and static analysis to support real-time decision-making. They will continue to leverage holistic, dynamic, cloud-scale AI models that collect vast amounts of global data."

The proof of the AI pudding is in the regulations

A sure sign that a new technology must be taken seriously is when regulators begin to take it seriously. That has already begun. The United States has debated the use of AI-based facial recognition technology (FRT) for years, with many cities and states banning or limiting its use by law enforcement. In the US this is a constitutional issue, typified by the bipartisan Wyden/Paul bill introduced in April 2021, titled the Fourth Amendment Is Not For Sale Act.

The bill would prohibit the U.S. government and law enforcement agencies from purchasing user data without a warrant. That would include their facial biometrics. In a related statement, Wyden made clear that the FRT company Clearview.AI was in his sights: "The bill prevents the government from buying data from Clearview.AI."

At the time of writing, the United States and the European Union are jointly discussing collaboration to develop a unified understanding of necessary AI concepts (including trustworthiness, risk, and harm), building on the EU AI Act and the US AI Bill of Rights, and we can expect progress toward mutually agreed standards in 2023.

But there is more. "The NIST AI Risk Management Framework will be released in the first quarter of 2023," said Polyakov. "For the second quarter, we have the start of the AI Accountability Act; and for the rest of the year, we have initiatives from the IEEE, as well as the planned EU Trustworthy AI Initiative." So 2023 will be an eventful year for AI security.

"In 2023, I believe we will see the conversations around AI, privacy, and risk converge, along with what it means in practice to do things like AI ethics and bias testing," said Christina Montgomery, chief privacy officer and chair of the AI ethics board at IBM. "My hope is that in 2023 we can move the conversation away from broad-brush characterizations of privacy and AI issues, and away from the assumption that 'if data or AI is involved, it must be bad and biased.'"

She believes the problem is often not the technology itself but how it is used, and the level of risk that a company's business model entails. "That's why we need precise and thoughtful regulation in this area," she said.

Montgomery gave an example. Company X sells internet-connected "smart" light bulbs that monitor and report usage data; over time, users are given the option of having their lights turn on automatically before they get home from work. She believes this is an acceptable use of AI. But there is also Company Y, which sells that data, without the consumer's consent, to third parties such as telemarketers or political lobbying groups to better target customers. "Company X's business model is much less risky than Company Y's."

Moving forward

Artificial intelligence is ultimately a divisive subject. "People in technology, R&D, and science will cheer it on for solving problems faster than humans can imagine. Curing disease, making the world safer, and ultimately saving and extending human life on Earth..." said Idemia CEO Donnie Scott. "Opponents will continue to advocate for significant restrictions or bans on the use of artificial intelligence because 'the rise of the machines' could threaten humanity."

Finally, he added, "Society, through our elected officials, needs a framework that protects human rights, privacy, and security and keeps pace with technological advances. Progress toward this framework will be incremental in 2023, but discussion among international and national governing bodies must increase, or local governments will step in and create a patchwork of laws that impede both society and the technology."

Regarding the commercial use of AI in the enterprise, Montgomery added: "We need, and IBM is advocating, precision regulation that is smart and targeted, and able to adapt to emerging threats. One way to do this is by looking at the risk at the core of a company's business model. We can and must protect consumers and increase transparency, and we can do this while still encouraging and supporting innovation so companies can develop the solutions and products of the future. This is one of the many areas we will be closely watching and weighing in on in 2023."
