OpenAI released GPT-4o to the general public a few months ago, and Microsoft recently partnered with OpenAI to integrate GPT-4o mini with Azure AI. GPT-4o mini comprehends more than previous versions, scoring 82% on the Measuring Massive Multitask Language Understanding (MMLU) benchmark, higher than GPT-3.5 Turbo.
The earlier GPT-3.5 Turbo scored 70%. GPT-4o mini powers applications at a lower cost and with greater speed:
We are also announcing safety features by default for GPT-4o mini, expanded data residency and service availability, plus performance upgrades to Microsoft Azure OpenAI Service.
The combination of GPT-4o mini and Azure AI enables fast text processing along with image, audio, and video capabilities. GPT-4o mini is especially adept at streaming scenarios such as assistants, code interpreter, and retrieval.
Safety plays a pivotal role, with Azure AI Content Safety features such as prompt shields and protected material detection enabled by default:
We have invested in improving the throughput and speed of the Azure AI Content Safety capabilities—including the introduction of an asynchronous filter—so you can maximize the advancements in model speed while not compromising safety.
Azure AI offers data residency in all 27 regions. Data residency allows customers to determine where their data is stored and where it is processed, “offering a complete data residency solution that helps customers meet their unique compliance requirements.”
GPT-4o mini is available through a cheaper global pay-as-you-go deployment, priced at 15 cents per million input tokens and 60 cents per million output tokens:
Global pay-as-you-go offers customers the highest possible scale, offering 15M tokens per minute (TPM) throughput for GPT-4o mini and 30M TPM throughput for GPT-4o. Azure OpenAI Service offers GPT-4o mini with 99.99% availability and the same industry leading speed as our partner OpenAI.
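To put the pay-as-you-go rates in concrete terms, the following sketch estimates the dollar cost of a workload from its token counts. The helper name and the example token volumes are illustrative assumptions; only the per-million-token prices come from the announcement above.

```python
# Illustrative cost arithmetic for GPT-4o mini global pay-as-you-go pricing:
# $0.15 per 1M input tokens and $0.60 per 1M output tokens (per the article).
# The function name and sample workload below are hypothetical.

INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a workload at GPT-4o mini's listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 2M input tokens plus 500K output tokens
# costs $0.30 + $0.30 = $0.60.
print(f"${estimate_cost(2_000_000, 500_000):.2f}")  # → $0.60
```

At these rates, even a million short chat turns remains in the tens-of-dollars range, which is the cost advantage the announcement emphasizes.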