Convergence India
Day 09: OpenAI debuts the OpenAI o1 API along with other new tools for developers
It is now available for building agentic applications, from streamlining customer support and optimizing supply-chain decisions to forecasting complex financial trends.

By Kumar Harshit

on December 18, 2024

On day 09 of 12 Days, 12 Livestreams, OpenAI debuted the API version of OpenAI o1, the reasoning model the company designed to handle complex multi-step tasks with high accuracy, alongside other features for developers, including Realtime API improvements and a new fine-tuning method.

The ninth day of the livestream was devoted to developers, the people who drive the real-world growth of these platforms. The new tools unveiled by the company aim to make building on OpenAI's platform more attractive as its journey continues.

OpenAI o1 API 

The o1 API is now available for building agentic applications that streamline customer support, optimize supply-chain decisions, and forecast complex financial trends, tasks developers have already been using o1-preview for, the blog post states. The key features that have been unveiled are:

  1. Function calling: This makes it seamless to connect the model to external data sources and APIs.
  2. Developer messages: These allow developers to specify instructions or context for the model to follow, such as tone, style, and other behavioral guidance.
  3. Vision capabilities: This feature allows the model to reason over images, enabling many more applications in science, manufacturing, or coding, where visual inputs matter significantly.
  4. Lower latency: The new model ranks higher in efficiency, using on average 60% fewer reasoning tokens than o1-preview for a given request.
  5. Processing time: A new `reasoning_effort` API parameter allows developers to control how long the model thinks before answering.
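The developer-message and `reasoning_effort` features can be exercised together in a single chat-completions request. The sketch below assembles such a request body; the snapshot name `o1-2024-12-17` comes from the post, but the exact payload shape is an assumption based on OpenAI's published chat-completions schema and should be checked against the current API reference:

```python
# Sketch of an o1 chat-completions request body combining a developer
# message with the reasoning_effort parameter (payload shape assumed
# from OpenAI's chat-completions schema).

def build_o1_request(question: str, effort: str = "low") -> dict:
    """Assemble a request body for the o1 API."""
    return {
        "model": "o1-2024-12-17",
        # Developer messages carry tone, style, and other behavioral
        # guidance for the model to follow.
        "messages": [
            {"role": "developer", "content": "Answer tersely, in plain English."},
            {"role": "user", "content": question},
        ],
        # reasoning_effort trades latency against thinking depth:
        # "low", "medium", or "high".
        "reasoning_effort": effort,
    }

payload = build_o1_request("Summarize our Q3 supply-chain risks.")
print(payload["reasoning_effort"])  # -> low
```

With the official `openai` Python package, the same body would be passed to `client.chat.completions.create(**payload)`.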

Additionally, OpenAI observed that o1-2024-12-17 significantly outperforms GPT-4o in its function calling and Structured Outputs testing. The company is rolling out access incrementally while working to expand it to additional usage tiers and ramp up rate limits.
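Function calling works by describing your backend functions to the model as JSON-schema tools, which the model can then choose to invoke. The sketch below defines one such tool; the JSON structure follows OpenAI's published function-calling convention, while the `get_shipment_status` function itself is hypothetical:

```python
# Sketch of a function-calling tool definition for the o1 API.
# The tool format follows OpenAI's function-calling convention;
# get_shipment_status is a hypothetical backend function.

def shipment_status_tool() -> dict:
    """Describe a shipment-lookup function as a JSON-schema tool."""
    return {
        "type": "function",
        "function": {
            "name": "get_shipment_status",  # hypothetical function name
            "description": "Look up the live status of a shipment by its ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "shipment_id": {"type": "string"},
                },
                "required": ["shipment_id"],
            },
        },
    }

tool = shipment_status_tool()
print(tool["function"]["name"])  # -> get_shipment_status
```

A list of such definitions is passed via the `tools` parameter of a chat-completions request; when the model decides a call is needed, it returns the function name and arguments for your code to execute.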

Read about OpenAI o1 here: OpenAI unveils OpenAI o1, with image upload support and a 34% performance boost

[Image: Evaluation of the OpenAI o1 API]

Performance metrics released by OpenAI

Other features unveiled 

  1. WebRTC Support: WebRTC is an open standard that makes it easier to build and scale real-time voice products across platforms—whether for browser-based apps, mobile clients, IoT devices, or direct server-to-server setups. It has been designed to enable smooth and responsive interactions in real-world conditions.  
  2. Preference Fine-Tuning: The fine-tuning API now supports Preference Fine-Tuning to make it easy to customize models based on user and developer preferences. This method uses Direct Preference Optimization (DPO) to compare pairs of model responses, teaching the model to distinguish between preferred and non-preferred outputs. By learning from pairwise comparisons rather than fixed targets, Preference Fine-Tuning is especially effective for subjective tasks where tone, style, and creativity matter. 
  3. Two new SDKs: Finally, the company has introduced two new official Software Development Kits (SDKs) for Go and Java in beta, in addition to the existing official Python, Node.js, and .NET libraries. OpenAI's goal is to make its platform easier to use, no matter which programming language you choose. 
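For Preference Fine-Tuning, each training example pairs a preferred and a non-preferred completion for the same prompt, and the job request selects DPO as the training method. The sketch below shows both pieces; the field names follow OpenAI's fine-tuning API documentation but should be treated as assumptions to verify, and `file-abc123` is a placeholder file ID:

```python
# Sketch of a Preference Fine-Tuning (DPO) job request. Field names
# (method.type == "dpo", training_file) are assumed from OpenAI's
# fine-tuning docs; "file-abc123" is a placeholder file ID.

def build_dpo_job(training_file_id: str, base_model: str = "gpt-4o-2024-08-06") -> dict:
    """Assemble a fine-tuning job body that trains with DPO on pairwise data."""
    return {
        "model": base_model,
        "training_file": training_file_id,  # JSONL file of preference pairs
        "method": {
            "type": "dpo",
            # beta controls how strongly training favors the preferred output
            # over staying close to the base model.
            "dpo": {"hyperparameters": {"beta": 0.1}},
        },
    }

# One training example: the same prompt with a preferred and a
# non-preferred completion, which is what DPO learns to separate.
example_pair = {
    "input": {"messages": [{"role": "user", "content": "Write a tagline."}]},
    "preferred_output": [{"role": "assistant", "content": "Ship ideas faster."}],
    "non_preferred_output": [{"role": "assistant", "content": "We sell software."}],
}

job = build_dpo_job("file-abc123")
print(job["method"]["type"])  # -> dpo
```

With the `openai` Python package, the job body would be submitted via `client.fine_tuning.jobs.create(...)` after uploading the JSONL file of preference pairs.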

Read about the developments of Day 08 here: Day 08: OpenAI rolls out ChatGPT Search to all the GPT users to expedite information hunt