I was listening to a podcast recently featuring the CEO of an investment management firm, who was asked about GPT-3 and its groundbreaking implications for a wide range of industries. My ears perked up when she uttered quite the jaw-dropping food-for-thought stat: she said that if GPT-3 had been trained in 2016, it would have cost OpenAI $800 million in computing costs. Incredibly, by the time OpenAI actually produced GPT-3, it cost them only $4.5 million in computing.
But wait – there’s more. By her estimate, if GPT-3 were trained today, it would cost only about $450,000 in computing!
What’s really going on here? Where does this enormous drop in computing costs come from? The trend is that the cost of training these AI models appears to be dropping at a rate approaching 70% per year.
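To put those figures in perspective, here’s a quick back-of-the-envelope calculation in Python. It’s a minimal sketch using only the three numbers she quoted; the years attached to them (roughly 2016, 2020 and 2023) are my assumptions for illustration, not hers.

```python
# Back-of-the-envelope: annual decline in training cost implied by the
# figures quoted in the podcast. The years attached to each figure are
# assumptions for illustration, not numbers from the interview.
costs = {
    2016: 800_000_000,  # hypothetical cost had GPT-3 been trained in 2016
    2020: 4_500_000,    # the compute cost she cited for the actual GPT-3 run
    2023: 450_000,      # her estimate for training it today
}

years = sorted(costs)
for start, end in zip(years, years[1:]):
    ratio = costs[end] / costs[start]
    annual_decline = 1 - ratio ** (1 / (end - start))
    print(f"{start} -> {end}: ~{annual_decline:.0%} drop per year")
```

Run it and you get roughly a 73% annual drop from 2016 to 2020 and around 54% per year from 2020 to 2023 – both in the same steep-decline ballpark as the trend described above.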
That’s why we can likely expect an avalanche of new foundational models.
For example, those in the financial industry are paying close attention to BloombergGPT, a first-of-its-kind model that can rapidly evaluate financial data to assess risk and economic sentiment. Amazon has unveiled Amazon Bedrock, a service for building and scaling generative AI applications, along with its Amazon Titan family of models. And we’ve heard plenty about Google’s Bard, which can now write code in as many as 20 different programming languages.
The Pitch For AI Domination
If there’s one brand to keep an eye on in this great AI race, it’s not Microsoft or OpenAI. It’s Amazon.
Why? Amazon will say, “Look. You’re already our customer. You’re already familiar with AWS. We’ve already got a platform built for you. We’re cheaper than GPT-4 as a foundational model, and we’re already providing a computing environment that’s more user-friendly and easier for your IT people. What’s not to like? It’s all here.”
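To make that pitch concrete: for a team already living in AWS, calling a Titan model through Bedrock looks like any other boto3 call. The sketch below is only an illustration – the model ID, request shape and region are assumptions based on Bedrock’s Titan text models and may differ for your account or model version.

```python
import json

import boto3

# Minimal sketch: invoking an Amazon Titan text model through Bedrock.
# Assumes Bedrock model access is already enabled on the AWS account;
# the model ID and request/response shapes are assumptions that may vary.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize the key risks in this earnings call transcript: ...",
    "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.2},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed Titan text model ID
    body=body,
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

The point isn’t the model itself – it’s that the authentication, billing and permissions run through the same IAM and AWS tooling a company’s engineers already use every day.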
In the meantime, I’m finding that consumers have been adapting to ChatGPT, and some have been accessing Bard, even though Google has been relatively conservative in rolling out Bard. OpenAI sat on GPT-4 for at least six months while making it more reliable. And internally, OpenAI has been continually working on cutting the cost of training its models.
Amid this, Emad Mostaque, founder and CEO of Stability AI, claimed that GPT-4 was coming up with thousands of questions that it would then be quizzed on…by another model.
So, for every answer one GPT-4 produced, another GPT-4 tried to verify it. Instead of reinforcement learning from human feedback, we’re moving to the next phase – reinforcement learning from machine feedback: start with a foundational model, then add reinforcement learning by having one GPT-4 check the other. Mostaque pointed to this relationship as the reason computing costs have been collapsing so rapidly.
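As a rough illustration of that pattern – not OpenAI’s actual pipeline, just a toy sketch in which a hypothetical ask() helper stands in for a call to some hosted model API – the generate-and-verify loop looks something like this:

```python
# Toy sketch of one model checking another, in the spirit of Mostaque's
# description. ask() is a hypothetical placeholder for a real model API
# call; nothing here reflects OpenAI's actual training setup.

def ask(model: str, prompt: str) -> str:
    """Placeholder for a hosted-model call; here it just returns a canned echo."""
    return f"[{model} reply to: {prompt[:40]}...]"

def generate_and_verify(topic: str, n_questions: int = 3):
    """One model writes and answers a quiz; a second model grades each answer."""
    labeled = []
    for i in range(n_questions):
        question = ask("generator", f"Write question #{i + 1} about {topic}.")
        answer = ask("generator", f"Answer this question: {question}")
        verdict = ask("verifier",
                      f"Is this answer correct? Reply YES or NO.\nQ: {question}\nA: {answer}")
        labeled.append((question, answer, verdict.strip().upper().startswith("YES")))
    # The (question, answer, passed) triples become training signal,
    # replacing some of the human labeling that RLHF would require.
    return labeled

if __name__ == "__main__":
    for question, answer, passed in generate_and_verify("interest-rate risk"):
        print(passed, "|", question, "->", answer)
```

Having a second model do the grading is far cheaper than paying humans to label every answer, which is one way a setup like this could pull training costs down.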
Talman Advantage #5: A Real Partner With A Plan
When a recruiter talks to you on the phone just once for 20 minutes, there’s only so much they can know about you beyond the resume. Roy Talman & Associates, on the other hand, will work with you to gain a robust understanding of your skill set, goals, work-style preferences and more. Then, rather than “blasting” your resume out to the hiring universe with random results, we’ll make a plan with you about the order in which we’ll present you to the firms we feel are the best fit.
Your career deserves more than a quick chat. Partner with a recruiter who can help you feel more in control of the process – as you should be. Talk to Talman first.
Essentially, I get the distinct impression that this rush of new AI models is like a lot of race cars accelerating around a track – and while they’re accelerating, you have even more cars entering the race.
The only questions are: Who’s going to drive all of them? And how fast will companies jump on these rapidly moving bandwagons? In our next Talman Tidbit, we’ll begin to answer these very questions. We’ll also speak to the hardware side of rapid model development, including why some companies, despite their enthusiasm for machine learning, may experience some inertia on implementation.
When news stories on AI and machine learning are swirling in the media, it’s hard to know the straight story on how fast companies are really adapting to new technologies. That’s when more financial and technology leaders Talk To Talman First.
With our deep knowledge of how companies have moved over the last several decades, including how their cultures are shifting, we can speak in much greater detail to how they are (or aren’t) embracing certain advancements – no matter what the headlines say. And that will only help you get more clarity on how to “hedge your bets” on the investments you make in technology and candidates.