ECM offers a fully managed OpenAI service offering for its customers. The offering is built from two Azure services: 

Azure API Management, which provides services to secure, load balance, and ensure that connected applications can access the Azure OpenAI service in a predictable fashion. 

API Management provides several important features for ensuring availability of the Azure OpenAI Service.
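For example, a client application can reach the chat completions endpoint through the API Management gateway rather than calling the OpenAI resource directly. A minimal sketch in Python, assuming the requests package and a hypothetical gateway URL, deployment name, and APIM subscription key (the gateway policy is assumed to inject the backend api-key):

    import os
    import requests

    # Hypothetical APIM-fronted endpoint; the real gateway URL, deployment name,
    # and api-version depend on how the API is published in API Management.
    APIM_URL = (
        "https://my-apim.azure-api.net/openai/deployments/gpt-4o/chat/completions"
    )

    response = requests.post(
        APIM_URL,
        params={"api-version": "2024-02-01"},
        headers={
            # APIM subscription key; an APIM policy adds the backend api-key.
            "Ocp-Apim-Subscription-Key": os.environ["APIM_SUBSCRIPTION_KEY"],
        },
        json={
            "messages": [{"role": "user", "content": "Hello"}],
            "max_tokens": 64,
        },
        timeout=30,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Because clients only ever see the gateway URL, API Management can load balance across multiple backend OpenAI resources and apply throttling or failover policies without any client-side changes.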


 

OpenAI offers a range of services and products that leverage artificial intelligence and natural language processing technologies. Azure OpenAI Service provides REST API access to OpenAI's powerful language models. 
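As an illustration of that REST/SDK surface, the sketch below uses the openai Python package's AzureOpenAI client; the endpoint, key, deployment name, and api-version are placeholders rather than values from this offering:

    import os
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://my-openai.openai.azure.com",  # placeholder endpoint
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    completion = client.chat.completions.create(
        model="gpt-4o",  # the *deployment* name, not the base model name
        messages=[{"role": "user", "content": "Summarize what Azure OpenAI Service is."}],
    )
    print(completion.choices[0].message.content)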

Service Highlights

 

Feature | Azure OpenAI Gov | Azure OpenAI Global
Authorization | FedRAMP High | Various (e.g., GDPR, HIPAA)
Data Centers | US-based | Global (multiple regions)
Compliance | FISMA, DFARS | GDPR, HIPAA, etc.
Support | Dedicated government support | Priority business support
Availability | US-only | Global
Language Support | English | Multi-language support
Scalability | Designed for government agencies | Designed for all business sizes

Use Cases

Things to know

Model options

Deployment Options


Not all models go through a deprecation period prior to retirement. Some models/versions only have a retirement date.

Fine-tuned models are subject to the same deprecation and retirement schedule as their equivalent base model.


Compliance and Standards

Security and Monitoring

Required RBAC Permissions

The following actions are the minimum required to deploy the Azure OpenAI service and create a model deployment:
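As a sketch only, a custom role covering those actions might be defined with the payload below; the action strings are assumptions and should be verified against the Microsoft.CognitiveServices resource provider operations before use:

    # Hypothetical custom role definition (wire-format payload shown as a Python dict).
    # Verify each action string against the Microsoft.CognitiveServices provider.
    openai_deployer_role = {
        "Name": "OpenAI Deployer (example)",
        "Description": "Minimum actions to create an Azure OpenAI resource and a model deployment.",
        "Actions": [
            "Microsoft.CognitiveServices/accounts/read",
            "Microsoft.CognitiveServices/accounts/write",
            "Microsoft.CognitiveServices/accounts/listKeys/action",
            "Microsoft.CognitiveServices/accounts/deployments/read",
            "Microsoft.CognitiveServices/accounts/deployments/write",
        ],
        "NotActions": [],
        "AssignableScopes": ["/subscriptions/<subscription-id>"],
    }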

Required Info

Pricing Model 

Service Quotas 

Automated Deployment
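A minimal automation sketch, assuming the azure-identity and azure-mgmt-cognitiveservices packages; the resource names, region, SKU, model name/version, and capacity are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
    from azure.mgmt.cognitiveservices.models import (
        Account,
        AccountProperties,
        Deployment,
        DeploymentModel,
        DeploymentProperties,
        Sku,
    )

    client = CognitiveServicesManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
    )

    # 1. Create the Azure OpenAI resource (kind "OpenAI").
    client.accounts.begin_create(
        resource_group_name="my-rg",
        account_name="my-openai",
        account=Account(
            location="eastus",
            kind="OpenAI",
            sku=Sku(name="S0"),
            properties=AccountProperties(custom_sub_domain_name="my-openai"),
        ),
    ).result()

    # 2. Create a model deployment under that resource.
    client.deployments.begin_create_or_update(
        resource_group_name="my-rg",
        account_name="my-openai",
        deployment_name="gpt-4o",
        deployment=Deployment(
            sku=Sku(name="Standard", capacity=10),
            properties=DeploymentProperties(
                model=DeploymentModel(format="OpenAI", name="gpt-4o", version="2024-05-13"),
            ),
        ),
    ).result()

The same two steps can equally be expressed as Bicep/ARM templates or Azure CLI commands in a pipeline; the SDK form is shown only because it embeds easily in existing automation.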

 Monitoring and Logging: 

Compliance and Standards: 

Secure Development Practices:  


Ethical AI Principles: 

Collaboration and Transparency: 

Response and Incident Management: 

User Education and Awareness: 

Availability: 

Compliance Certifications: 

Azure OpenAI offers multiple security tiers to cater to your specific compliance needs: 

Logging: 

OpenAI, like many other technology companies, typically directs diagnostic logs to several logging targets for monitoring and troubleshooting purposes. While specific details can vary based on their current infrastructure and practices, here are some common logging targets where diagnostic logs may be sent (a configuration sketch for routing diagnostics to one such target appears after this list): 

1. Centralized Logging Systems: 

2. Cloud Platform Services: 

3. Application-specific Logging: 

4. Security Information and Event Management (SIEM) Systems: 

5. Container Orchestration Platforms: 

6. Monitoring and Alerting Systems: 

7. Database and Application Performance Logs: 

8. Customized Logging Pipelines: 

Extensive Security Logging: 

Enhanced Security Analysis: 

Real-time Threat Detection: 

Advanced Analytics Integration: 

Customizable Security: 

Bring Your Own Data (BYOD): 

These logging targets enable OpenAI to maintain operational visibility, troubleshoot issues promptly, monitor performance metrics, and ensure the reliability and security of their AI services and applications. The specific choice of logging targets may evolve over time based on technological advancements and organizational priorities. 
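For the Azure-hosted service specifically, a common centralized target is a Log Analytics workspace configured through an Azure Monitor diagnostic setting. A sketch using the ARM REST API, assuming azure-identity and requests, hypothetical resource IDs, and log category names that should be checked against the resource's supported categories:

    import requests
    from azure.identity import DefaultAzureCredential

    # Hypothetical resource IDs; substitute real ones.
    OPENAI_RESOURCE_ID = (
        "/subscriptions/<sub-id>/resourceGroups/my-rg"
        "/providers/Microsoft.CognitiveServices/accounts/my-openai"
    )
    WORKSPACE_ID = (
        "/subscriptions/<sub-id>/resourceGroups/my-rg"
        "/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
    )

    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

    # Route audit and request/response logs plus metrics to Log Analytics.
    body = {
        "properties": {
            "workspaceId": WORKSPACE_ID,
            "logs": [
                {"category": "Audit", "enabled": True},
                {"category": "RequestResponse", "enabled": True},
            ],
            "metrics": [{"category": "AllMetrics", "enabled": True}],
        }
    }

    resp = requests.put(
        "https://management.azure.com" + OPENAI_RESOURCE_ID
        + "/providers/Microsoft.Insights/diagnosticSettings/openai-diagnostics",
        params={"api-version": "2021-05-01-preview"},
        headers={"Authorization": "Bearer " + token},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()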

Azure OpenAI: Government vs. Global Offerings 

 This document compares and contrasts the key features of Azure OpenAI Government (Gov) and Azure OpenAI Global offerings. 

Target Audience 

Security and Compliance 

Azure OpenAI Gov: 

Azure OpenAI Global: 

Feature Comparison Table 

Azure Global: 
  Azure OpenAI Service Models Legacy 
  Azure OpenAI Service Models 


Additional Resources 

By following these steps, you can deploy and utilize OpenAI services through Azure effectively, leveraging the power of advanced AI models for your applications. 

Pricing: 

OpenAI offers a variety of pricing plans for its ChatGPT and API services to cater to different needs: 

ChatGPT Plans: 

API Pricing: 

Additionally, OpenAI provides special pricing for educational institutions and nonprofits, offering discounts for broader deployment and accessibility. 

For the latest details and updates, you can visit the OpenAI pricing page and the API pricing page on their official website. 

Consider using Provisioned Throughput Units (PTUs) for optimal scaling and minimal latency variance.
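Workloads that stay on standard (pay-as-you-go) deployments remain bound by the quota limits listed below, so client code typically retries throttled (HTTP 429) calls. A minimal backoff sketch, assuming the openai Python package and a hypothetical endpoint and deployment name:

    import os
    import time

    from openai import AzureOpenAI, RateLimitError

    client = AzureOpenAI(
        azure_endpoint="https://my-openai.openai.azure.com",  # placeholder
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    def chat_with_backoff(messages, retries=5):
        """Retry quota throttling (HTTP 429) with exponential backoff."""
        for attempt in range(retries):
            try:
                return client.chat.completions.create(model="gpt-4o", messages=messages)
            except RateLimitError:
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...

    reply = chat_with_backoff([{"role": "user", "content": "ping"}])
    print(reply.choices[0].message.content)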

Quotas and limits reference 

The following sections provide you with a quick guide to the default quotas and limits that apply to Azure OpenAI: 

 

Limit Name | Limit Value
OpenAI resources per region per Azure subscription | 30
Default DALL-E 2 quota limits | 2 concurrent requests
Default DALL-E 3 quota limits | 2 capacity units (6 requests per minute)
Maximum prompt tokens per request | Varies per model. For more information, see Azure OpenAI Service models

Max fine-tuned model deployments