OpenAI unveils o3 and o3-mini on the last day of its 12 days of 'Shipmas'
OpenAI CEO Sam Altman saved the biggest reveal for the final day of the company's 'Shipmas' campaign: A preview of its most advanced model yet, o3.
- OpenAI's marketing campaign "Shipmas" ended Friday.
- The campaign included 12 days of product releases, demos, and new features.
- On the final day, OpenAI previewed o3, its most advanced model yet.
OpenAI released new features and products ahead of the holidays, a campaign it called "Shipmas."
The company saved the most exciting news for the final day: a preview of o3, its most advanced model yet, which the company said could be available to the public as soon as the end of January.
Here's everything OpenAI has released so far for "Shipmas."
'Shipmas' Day 1
OpenAI started the promotion with a bang by releasing the full version of its latest reasoning model, o1.
OpenAI previewed o1 in September, describing it as a series of artificial-intelligence models "designed to spend more time thinking before they respond." Until now, only a limited version of these models was available to ChatGPT Plus and Team users.
Now, these users have access to the full capabilities of o1 models, which Altman said are faster, smarter, and easier to use than the preview. They're also multimodal, which means they can process images and text together.
Max Schwarzer, a researcher at OpenAI, said the full version of o1 was updated based on user feedback from the preview version and said it's now more intelligent and accurate.
"We ran a pretty detailed suite of human evaluations for this model, and what we found was that it made major mistakes about 34% less often than o1 preview while thinking fully about 50% faster," he said.
Along with o1, OpenAI unveiled a new tier of ChatGPT called ChatGPT Pro. It's priced at $200 a month and includes unlimited access to the latest version of o1.
'Shipmas' Day 2
On Friday, OpenAI previewed a feature that lets users fine-tune o1 on their own datasets. Users can apply OpenAI's reinforcement-learning algorithms, which mimic the human trial-and-error learning process, to create customized versions of the model.
The technology will be available to the public next year, allowing anyone from machine-learning engineers to genetic researchers to create domain-specific AI models. OpenAI has already partnered with Thomson Reuters to develop a legal assistant based on o1-mini, and with the Lawrence Berkeley National Laboratory to develop computational methods for assessing rare genetic diseases.
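For a rough sense of what that could look like for developers, here is a minimal sketch assuming the reinforcement fine-tuning flow follows the shape of OpenAI's existing fine-tuning API. The dataset name, the base-model name, and the `method` setting are illustrative assumptions, since the final API had not been published at the time of the announcement.

```python
# Minimal sketch of reinforcement fine-tuning via the OpenAI Python SDK.
# The file-upload and job-creation calls mirror the existing fine-tuning API;
# the base-model name and the "reinforcement" method setting are assumptions,
# since the final API was not public at the time of the announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL dataset of domain-specific tasks with reference answers.
training_file = client.files.create(
    file=open("rare_disease_cases.jsonl", "rb"),  # hypothetical dataset
    purpose="fine-tune",
)

# Start a fine-tuning job against an o-series model.
job = client.fine_tuning.jobs.create(
    model="o1-mini",                   # hypothetical base model
    training_file=training_file.id,
    method={"type": "reinforcement"},  # assumed parameter name and value
)

print(job.id, job.status)
```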
'Shipmas' Day 3
OpenAI announced on December 9 that its AI video generator Sora was launching to the public.
Sora can generate up to 20-second videos from written instructions. The tool can also complete a scene and extend existing videos by filling in missing frames.
"We want our AIs to be able to understand video and generate video and I think it really will deeply change the way that we use computers," the CEO added.
Rohan Sahai, Sora's product lead, said a team of about five or six engineers built the product in months.
The company showed off the new product and its features, including the Explore page, a feed of videos shared by the community, and style presets such as pastel symmetry, film noir, and balloon world.
The team also gave a demo of Sora's Storyboard feature, which lets users organize and edit sequences on a timeline.
Sora is rolling out to the public in the US and many countries around the world. However, Altman said it will be "a while" before the tool rolls out in the UK and most of Europe.
ChatGPT Plus subscribers, who pay $20 a month, can generate up to 50 videos per month at 720p resolution and up to five seconds long. ChatGPT Pro users, who pay $200 a month, get unlimited generations in the slow queue mode and 500 faster generations, Altman said in the demo. Pro users can also generate videos up to 20 seconds long at 1080p resolution, without watermarks.
'Shipmas' Day 4
OpenAI announced that it's bringing its collaborative canvas tool to all ChatGPT web users — with some updates.
The company demonstrated the tech in a holiday-themed walkthrough of some of its new capabilities. Canvas is an interface that turns ChatGPT into a writing or coding assistant on a project. OpenAI first launched it to ChatGPT Plus and Team users in October.
Starting Tuesday, canvas will be available to free web users who'll be able to select the tool from a drop-down of options on ChatGPT. The chatbot can load large bodies of text into the separate canvas window that appears next to the ongoing conversation thread.
The new updates make canvas more intuitive in its responses, OpenAI said. To demonstrate, employees uploaded an essay about Santa Claus's sleigh and asked ChatGPT to give editing notes from the perspective of a physics professor.
For writers, it can craft entire bodies of text, make changes based on requests, and add emojis. Coders can run code in canvas to double-check that it's working properly.
'Shipmas' Day 5
OpenAI talked about its integration with Apple for the iPhone, iPad, and macOS.
As part of the iOS 18.2 software update, Apple users can now access ChatGPT directly from Apple's operating systems without an OpenAI account. This new integration allows users to consult ChatGPT through Siri, especially for more complex questions.
They can also use ChatGPT to generate text through Apple's generative AI features, collectively called Apple Intelligence. The first of these features arrived in October and included tools for proofreading and rewriting text, summarizing messages, and editing photos. Users can also access ChatGPT through the Camera Control feature on the iPhone 16 to learn more about objects in the camera's view.
'Shipmas' Day 6
OpenAI launched its highly anticipated video and screensharing capabilities in ChatGPT's Advanced Voice Mode.
The company originally teased the public with a glimpse of the chatbot's ability to "reason across" vision along with text and audio during OpenAI's Spring Update in May. However, Advanced Voice Mode didn't become available for users until September, and the video capabilities didn't start rolling out until December 12.
In the livestream demonstration on Thursday, ChatGPT helped guide an OpenAI employee through making pour-over coffee. The chatbot gave him feedback on his technique and answered questions about the process. During the Spring Update, OpenAI employees showed off the chatbot's ability to act as a math tutor and interpret emotions based on facial expressions.
Users can access the live video by selecting the Advanced Voice Mode icon in the ChatGPT app and then choosing the video button on the bottom-left of the screen. Users can share their screen with ChatGPT by hitting the drop-down menu and selecting "Share Screen."
'Shipmas' Day 7
For "Shipmas" Day 7, OpenAI introduced Projects, a new way for users to "organize and customize" conversations within ChatGPT. The tool allows users to upload files and notes, store chats, and create custom instructions.
"This has been something we've been hearing from you for a while that you really want to see inside ChatGPT," OpenAI chief product officer Kevin Weil said. "So we can't wait to see what you do with it."
During the live stream demonstration, OpenAI employees showed a number of ways to use the feature, including organizing work presentations, home maintenance tasks, and programming.
The tool started rolling out to Plus, Pro, and Team users on Friday. The company said in the demonstration that it will roll out the tool to free users "as soon as possible."
'Shipmas' Day 8
OpenAI is rolling out ChatGPT search to all logged-in free users on ChatGPT, the company announced during its "Shipmas" livestream on Monday. The company previously launched the feature on October 31 to Plus and Team users, as well as waitlist users.
Search is now also integrated into Advanced Voice Mode. On the livestream, OpenAI employees showed off its ability to provide quick search results, search while users talk to ChatGPT, and act as a default search engine.
"What's really unique about ChatGPT search is the conversational nature," OpenAI's search product lead, Adam Fry, said.
The company also said it made search faster and "better on mobile," including the addition of some new maps experiences. The ChatGPT search feature is rolling out globally to all users with an account.
'Shipmas' Day 9
OpenAI launched tools geared toward developers on Tuesday.
It launched o1 out of preview in the API. OpenAI's o1 is its series of AI models designed to reason through complex tasks and solve more challenging problems. Developers have experimented with o1-preview since September to build agentic applications for uses like customer support and financial analysis, OpenAI employee Michelle Pokrass said.
The company also added some "core features" to o1 that it said developers had been asking for on the API, including function calling, structured outputs, vision inputs, and developer messages.
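As a rough illustration of how those pieces fit together, the sketch below calls o1 through the OpenAI Python SDK with a developer message and a structured-output JSON schema. The model name, the schema, and the example task are placeholders chosen for illustration, not details from the announcement.

```python
# Sketch of calling o1 in the API with a developer message and structured
# outputs. The model name, schema, and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",  # placeholder model name
    messages=[
        # Developer messages steer o-series models in place of system prompts.
        {"role": "developer", "content": "You are a concise financial-analysis assistant."},
        {"role": "user", "content": "List the main risk factors mentioned in this filing: ..."},
    ],
    # Structured outputs constrain the reply to a JSON schema.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "risk_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "risks": {"type": "array", "items": {"type": "string"}},
                    "overall_assessment": {"type": "string"},
                },
                "required": ["risks", "overall_assessment"],
                "additionalProperties": False,
            },
        },
    },
)

print(response.choices[0].message.content)  # JSON string matching the schema
```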
OpenAI also announced new SDKs and a new flow for getting an API key.
'Shipmas' Day 10
OpenAI is bringing ChatGPT to your phone through phone calls and WhatsApp messages.
"ChatGPT is great but if you don't have a consistent data connection, you might not have the best connection," OpenAI engineer Amadou Crookes said in the livestream. "And so if you have a phone line you can jump right into that experience."
You can add ChatGPT to your contacts or dial 1-800-ChatGPT (1-800-242-8478). The calling feature is available only to those in the US. Those outside the US can message ChatGPT on WhatsApp.
OpenAI employees in the livestream demonstrated the calling feature on a range of devices, including an iPhone, a flip phone, and even a rotary phone. OpenAI chief product officer Kevin Weil said the feature came out of a hack-week project and was built just a few weeks ago.
'Shipmas' Day 11
OpenAI focused on features for its desktop apps during Thursday's "Shipmas" reveal. Users can now see and automate their work on the macOS desktop with ChatGPT.
Additionally, users can click the "Works With Apps" button, which allows ChatGPT to work with more coding apps, such as TextMate, BBEdit, and PyCharm. The desktop app will also support Notion, Quip, and Apple Notes.
Also, the desktop app will have Advanced Voice Mode support.
The update became available for the macOS desktop app on Thursday. OpenAI CPO Kevin Weil said the Windows version is "coming soon."
'Shipmas' Day 12
OpenAI finished its "12 days of Shipmas" campaign by introducing o3, the successor to the o1 model. The company first previewed o1 in September, advertising its "enhanced reasoning capabilities."
The rollout includes the o3 and o3-mini models. Although "o2" would be the next model number, an OpenAI spokesperson told Bloomberg that it didn't use that name "out of respect" for the British telecommunications company.
Greg Kamradt of the ARC Prize Foundation, which measures progress toward artificial general intelligence, appeared during the livestream and said o3 performed notably better than o1 on the ARC-AGI benchmark.
OpenAI CEO Sam Altman said during the livestream that the models are available for public safety testing. He said OpenAI plans to launch o3-mini "around the end of January" and o3 "shortly after that."
In a post on X on Friday, Weil said the o3 model is a "massive step up from o1 on every one of our hardest benchmarks."