On Day 09 of 12 Days, 12 Livestreams, OpenAI debuts the API version of OpenAI o1, the reasoning model the company designed to handle complex multi-step tasks with advanced accuracy, along with other features for developers, including Realtime API improvements and a new fine-tuning method.
The ninth day of the livestream was devoted to developers, the people who truly drive the growth of these platforms. The new tools the company unveiled aim to make building with AI on OpenAI's platform more attractive.
We're bringing OpenAI o1 to the API. We're rolling out access to developers on usage tier 5 starting today, and rollout will continue over the next few weeks.
— OpenAI Developers (@OpenAIDevs) December 17, 2024
o1 supports:
⚙️ Function calling
Structured Outputs
Vision
Developer messages
Reasoning effort pic.twitter.com/Ax8TT0IRke
OpenAI o1 API
It is now available for building agentic applications that streamline customer support, optimize supply-chain decisions, and forecast complex financial trends, uses developers have already been putting it to, the blog post states. The key features unveiled are:
- Function calling: This makes it seamless to connect o1 to external data and APIs (see the first sketch after this list, which also demonstrates developer messages and `reasoning_effort`).
- Developer messages: This allows developers to specify instructions or context for the model to follow, such as defining tone, style, and other behavioral guidance.
- Vision capabilities: this feature allows the model to reason over images, enabling many more applications in science, manufacturing, or coding, where visual inputs matter significantly (see the second sketch after this list).
- Lower latency: the new model is more efficient, using on average 60% fewer reasoning tokens than o1-preview for a given request.
- Processing time: a new `reasoning_effort` API parameter lets developers control how long the model thinks before answering.
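To make these concrete, here is a minimal sketch using the official openai Python SDK; the `get_delivery_date` tool is hypothetical, invented purely for illustration:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tool for illustration; the schema follows the
# standard function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_delivery_date",
        "description": "Look up the delivery date for a customer's order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1-2024-12-17",
    reasoning_effort="low",  # "low" | "medium" | "high"
    messages=[
        # Developer messages take the place of system messages for o1.
        {"role": "developer", "content": "You are a terse support agent."},
        {"role": "user", "content": "When will order 1234 arrive?"},
    ],
    tools=tools,
)
print(response.choices[0].message)
```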
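And a similar sketch of the new vision input, passing an image alongside text; the URL here is only a placeholder:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-2024-12-17",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which component in this schematic looks miswired?"},
            # Placeholder URL; base64 data URLs are also accepted.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/schematic.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```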
Additionally, OpenAI reports that o1-2024-12-17 significantly outperforms GPT-4o in its function calling and Structured Outputs testing. The company is rolling out access incrementally while working to expand it to additional usage tiers and ramp up rate limits.
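For reference, one way to request Structured Outputs is the Python SDK's `parse` helper, sketched below; the `SupportTicket` schema is invented for illustration:

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Hypothetical schema, for illustration only.
class SupportTicket(BaseModel):
    category: str
    priority: int
    summary: str

completion = client.beta.chat.completions.parse(
    model="o1-2024-12-17",
    messages=[{"role": "user",
               "content": "The app crashes every time I log in."}],
    response_format=SupportTicket,  # output is constrained to this schema
)
print(completion.choices[0].message.parsed)
```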
Read about OpenAI o1 here: OpenAI unveils OpenAI o1, with image upload support and a 34% performance boost
Other features unveiled
- WebRTC Support: WebRTC is an open standard that makes it easier to build and scale real-time voice products across platforms, whether for browser-based apps, mobile clients, IoT devices, or direct server-to-server setups. It has been designed to enable smooth and responsive interactions in real-world conditions (a server-side sketch follows after this list).
- Preference Fine-Tuning: The fine-tuning API now supports Preference Fine-Tuning to make it easy to customize models based on user and developer preferences. This method uses Direct Preference Optimization (DPO) to compare pairs of model responses, teaching the model to distinguish between preferred and non-preferred outputs. By learning from pairwise comparisons rather than fixed targets, Preference Fine-Tuning is especially effective for subjective tasks where tone, style, and creativity matter (a sketch of starting such a job follows after this list).
- 2 New SDKs: Finally, the company has introduced two new official Software Development Kits (SDKs) for Go and Java in beta, in addition to the existing official Python, Node.js, and .NET libraries. OpenAI's goal is to make the platform easy to use no matter which programming language you choose.
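As a rough sketch of the server side of a WebRTC integration, the snippet below mints a short-lived client token via the Realtime sessions endpoint described in OpenAI's announcement; the endpoint path, model name, and response fields are taken from OpenAI's docs at the time of writing and should be re-checked against current documentation:

```python
import os

import requests  # pip install requests

# Mint an ephemeral client secret for a browser WebRTC session.
resp = requests.post(
    "https://api.openai.com/v1/realtime/sessions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-realtime-preview-2024-12-17",
        "voice": "verse",
    },
)
resp.raise_for_status()
session = resp.json()

# Hand the ephemeral key to the browser; the real API key never leaves
# the server. The browser uses it to open the WebRTC peer connection.
print(session["client_secret"]["value"])
```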
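And a hedged sketch of kicking off a Preference Fine-Tuning job through the fine-tuning API's `dpo` method; the training file ID and the `beta` hyperparameter are placeholders, and the uploaded JSONL dataset is expected to hold pairwise preferred / non-preferred responses:

```python
from openai import OpenAI

client = OpenAI()

# "file-abc123" is a placeholder ID for an uploaded JSONL dataset where
# each line pairs an input with a preferred and a non-preferred output.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",
    training_file="file-abc123",
    method={
        "type": "dpo",
        "dpo": {"hyperparameters": {"beta": 0.1}},  # placeholder value
    },
)
print(job.id, job.status)
```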
Read about the developments of Day 08 here: Day 08: OpenAI rolls out ChatGPT Search to all the GPT users to expedite information hunt