
OpenAI's GPT-4.5 Staggered Rollout: GPU Shortage Blamed
So, OpenAI's Sam Altman dropped a bit of a bombshell recently: the much-anticipated GPT-4.5 rollout is going to be... staggered. Why? According to Altman himself, they're "out of GPUs." Yep, those powerful graphics cards that are the lifeblood of AI development are apparently in short supply at OpenAI HQ.
Altman took to X (formerly Twitter, of course) to explain the situation. He described GPT-4.5 as "giant" and "expensive," and emphasized the need for "tens of thousands" more GPUs before they can open the floodgates to more ChatGPT users. It sounds like this model is a real beast!
Here's the breakdown of the rollout: ChatGPT Pro subscribers get first dibs starting this Thursday, followed by ChatGPT Plus customers next week. So, if you're paying for the premium tier, you'll get a head start on experiencing the power of GPT-4.5.
And speaking of expensive, brace yourselves for the pricing. OpenAI is charging a whopping $75 per million tokens (roughly 750,000 words) fed into the model, and $150 per million tokens generated by it. To put that in perspective, that's 30 times the input cost and 15 times the output cost of OpenAI's current workhorse, GPT-4o, which works out to $2.50 and $10 per million tokens respectively. Ouch! Some are already calling the pricing "unhinged," and speculating that it points to a significantly larger model.
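To make those numbers concrete, here's a minimal back-of-the-envelope sketch (the `estimate_cost` helper and the example token counts are hypothetical; the per-million-token rates are the ones quoted above):

```python
# Rates quoted in the article, in USD per 1 million tokens.
GPT45_INPUT_PER_M = 75.0
GPT45_OUTPUT_PER_M = 150.0

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single GPT-4.5 API call."""
    return (input_tokens / 1_000_000) * GPT45_INPUT_PER_M \
         + (output_tokens / 1_000_000) * GPT45_OUTPUT_PER_M

# Example: a 10,000-token prompt with a 2,000-token reply.
# Input:  10,000 / 1,000,000 * $75  = $0.75
# Output:  2,000 / 1,000,000 * $150 = $0.30
print(f"${estimate_cost(10_000, 2_000):.2f}")  # → $1.05
```

At those rates, even a single moderately long conversation adds up fast, which helps explain the "unhinged" reactions.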
Altman acknowledged the issue, stating, "We’ve been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week and roll it out to the Plus tier then. This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages."
This isn't the first time OpenAI has cited a lack of computing power as a bottleneck. They're working on long-term solutions, including developing their own AI chips and building a massive network of data centers. It seems the future of AI development is going to be heavily reliant on access to serious computing resources.
Source: TechCrunch