Compilation丨Yifeng
produced | 51CTO technology stack (WeChat ID: blog51cto)
Generative The demand for artificial intelligence (generative AI) is growing, and concerns about the safety and reliability of LLM have become more prominent than ever. Enterprises want to ensure that large-scale language models (LLMs) developed for internal and external use provide high-quality output without straying into uncharted territory. To meet this need, there are several key aspects to consider. First, the interpretability of the LLM model should be enhanced so that it can transparently display the source and logical reasoning process of its generated results. This will help users understand the quality of the output and assess its trustworthiness. Secondly, more tools and techniques need to be provided to verify and detect the accuracy and correctness of LLM output. These tools can help users realize these concerns when using
Microsoft. For a long time, Microsoft's model using OpenAI can only call APIs, and lacks control over the secrets in the black box. Microsoft recently announced the launch of new Azure AI tools to help solve the hallucination problem of large models, and can also solve security vulnerabilities such as prompt input attacks, where models are attacked to generate privacy-infringing or other harmful content - just like Microsoft Own AI image creator generates Taylor Swift deepfake images as well.
It is reported that the security tool will be widely rolled out in the next few months, and a specific timetable has not yet been disclosed.
With the popularity of LLM, the problem of prompt injection attacks has become particularly prominent. Essentially, an attacker can alter a model's input prompts in a way that bypasses the model's normal operation, including security controls, and manipulate it to display personal or harmful content, compromising security or privacy. These attacks can be carried out in two ways: direct attacks, where the attacker interacts directly with LLM; or indirect attacks, which involve the use of third-party data sources such as malicious web pages.
To address these two forms of prompt injection, Microsoft is adding Prompt Shields to Azure AI. This is a comprehensive capability that uses advanced machine learning (ML) algorithms and natural language processing to automatically analyze prompts and third-party data for malicious intent and prevent them from reaching the model.
It will be integrated into three related products of Microsoft: Azure OpenAI service (Editor’s note: Azure OpenAI is a cloud-based service product launched by Microsoft, which provides access to OpenAI Access to powerful language models. The core advantage of Azure OpenAI is that it combines the advanced technology of OpenAI with the security and enterprise-level commitment of Microsoft Azure), Azure AI content security and Azure AI Studio.
In addition to its efforts to stop hint injection attacks that threaten security and safety, Microsoft is also introducing tools focused on the reliability of generative AI applications. This includes pre-built Security Center system message templates and a new feature called Groundedness Detection.
As Microsoft explains, Security Center system message templates allow developers to build system messages that guide model behavior toward safe, responsible, and data-based output. Basic detection uses a fine-tuned custom language model to detect artifacts or inaccurate material in the text output produced by the model. Both will be available in Azure AI Studio and Azure OpenAI products.
Notably, detecting fundamental metrics will also be accompanied by automated assessments to stress-test the risks and security of generative AI applications. These metrics will measure the likelihood that an application is jailbroken and produces any inappropriate content. The assessment will also include natural language explanations to guide developers on how to build appropriate mitigations to resolve issues.
“Today, many organizations lack the resources to stress-test their generative AI applications so that they can confidently move from prototype to market adoption. First, build a High-quality test datasets can be challenging, such as with jailbreak attacks. Even with high-quality data, evaluation can be a complex and manual process, and development teams may find it difficult to interpret the results to inform effective mitigations, ” Sarah Bird, chief product officer of Microsoft Security AI, pointed out in a blog post.
During actual use of Azure AI, Microsoft will provide real-time monitoring to help developers pay close attention to trigger security Inputs and outputs for functions such as prompt shield. This feature, which is integrated into the Azure OpenAI service and AI Studio products, will generate detailed visualizations that highlight the number and proportion of blocked user inputs/model outputs, along with a breakdown by severity/category.
Using this visual, real-time monitoring, developers can understand harmful request trends over time and adjust their content filter configurations, controls, and broader application design to Enhanced security.
Microsoft has been working on strengthening its AI products for a long time. Previously, Microsoft CEO Satya Nadella emphasized in an interview that Microsoft is not completely dependent on OpenAI, but is also developing its own AI projects and helping OpenAI build its products: "I am very concerned about our current situation. I am very satisfied with the relationship. I also think that this will help us control the destiny of our respective companies."
has changed the pattern of "All in OpenAI", and Microsoft has also used Mistral in Large model inside. Recently, Microsoft's newly established team Microsoft AI has been making frequent moves, and it even hired Mustafa Suleyman and his team from Inflection AI. This seems to be a way to reduce dependence on Sam Altman and OpenAI.
Now, the addition of these new security and reliability tools builds on the work the company has already done, giving developers a better, more secure way to build their Generative AI applications on top of the provided models.
Reference link: https://venturebeat.com/ai/microsoft-launches-new-azure-ai-tools-to-cut-out-llm -safety-and-reliability-risks/
The above is the detailed content of Better, safer, and less dependent on OpenAI, Microsoft's new AI trend launches large model security tool Azure AI. For more information, please follow other related articles on the PHP Chinese website!