The craze for generative artificial intelligence has swept across the U.S. federal government. Microsoft has announced the launch of the Azure OpenAI Service, allowing Azure Government customers to access GPT-3, GPT-4, and Embeddings models.
Government agencies will have access to ChatGPT use cases through the service without sacrificing "the rigorous security and compliance standards they need to meet government requirements for sensitive data," Microsoft said in a statement.
Microsoft claims that it has developed an architecture that allows government customers to "securely access large language models in commercial environments from Azure Government." Microsoft says the service will be accessed through the Python SDK, REST APIs, or Azure AI Studio, all without exposing government data to the public internet.
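As a rough illustration, a call from Python might look like the following minimal sketch, written against the 0.x-era openai package's Azure mode. The resource endpoint, API version, and deployment name below are placeholders for illustration, not values from Microsoft's announcement.

```python
import os
import openai

# Point the SDK at an Azure OpenAI resource instead of the public OpenAI API.
# The endpoint and API version are hypothetical placeholders.
openai.api_type = "azure"
openai.api_base = "https://my-gov-resource.openai.azure.com/"  # hypothetical endpoint
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

# With the Azure API type, "engine" names the model deployment created in the
# Azure resource (a hypothetical name here), not the model family itself.
response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this procurement memo."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

In this pattern the request goes to the agency's own Azure resource endpoint rather than to openai.com, which is the point of the architecture Microsoft describes: the query is routed to the model without the data traversing the public internet.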
Microsoft promises: "Only queries submitted to the Azure OpenAI Service are transmitted to the Azure OpenAI model in the commercial environment. Azure Government peers directly with the commercial Microsoft Azure network, not directly with the public internet or the Microsoft enterprise network."
Microsoft reports that it encrypts all Azure traffic using the IEEE 802.1AE, or MACsec, network security standard, and that all traffic stays within its global backbone, a network of more than 250,000 kilometers of optical fiber and submarine cable systems.
Azure OpenAI Service for government has been fully launched and is available to approved enterprise or government customers.
Microsoft has always wanted to win the trust of the U.S. government — but it has also made mistakes.
News emerged that more than a terabyte of sensitive government military documents had been exposed on the public internet, and the Department of Defense and Microsoft blamed each other for the lapse.
OpenAI, the Microsoft-backed creator of ChatGPT, also has a spotty security record. In March, a bug in an open-source library exposed some users' chat histories. Since then, a number of high-profile companies — including Apple, Amazon and several banks — have banned internal use of ChatGPT over concerns it could expose confidential internal information.
Britain’s spy agency GCHQ has even warned of the risk. So is the U.S. government doing the right thing in handing its secrets to Microsoft, even if those secrets supposedly never traverse an untrusted network?
Microsoft says it won’t use government data to train its OpenAI models, so top-secret data is unlikely to leak in replies to other users. But that doesn’t make the service safe by default. In its announcement, Microsoft quietly acknowledged that some data will still be logged when government users use the OpenAI models.
Microsoft said: "Microsoft allows customers who meet additional limited-access eligibility criteria and attest to specific use cases to request modification of the Azure OpenAI content management features."
It added: "If Microsoft approves a customer's request to modify data logging, no prompts and completions associated with the approved Azure subscription are stored, and data logging for that subscription in commercial Azure is set to off." This means that unless a government agency meets those criteria, its prompts and completions, the text sent to and returned by the AI model, will be retained.