On a global scale, including among the clients we have at Roy Talman & Associates, we're seeing more and more companies treat machine learning as a powerful resource to incorporate into their business processes.
It’s the kind of titanic shift we’ve seen play out before.
Many of us remember early in our careers when all data lived in flat files. Those, in time, gave way to databases, and knowing how to use a database became paramount – ultimately, you couldn't build a system without one.
Once again, we're now on the cusp of another momentous turn in the technology landscape. In a growing number of situations, machine learning capabilities will be built into systems that we don't even think of as machine learning enabled at all.
For example, insurance companies will experience extraordinary change by building machine learning into their claims adjusting process. If you get into an accident, the company will ask you to take a few pictures of your car, tell you where to go to have it fixed and share how much it's willing to pay for the repair. No appraiser is needed at all.
In another example, AutoX, the predominant self-driving (and driverless) car on the road in China, has touted its vehicles' ability to handle the most challenging traffic scenarios possible. With so many variables to process, AutoX claims its system delivers a computing capacity of 2,200 TOPS (tera operations per second – trillions of computing operations a chip can process each second).
To give this some context, 2,200 TOPS is the capacity of a very sizable supercomputer. And all of that capacity sits in a single car with 15 million data points, over two dozen cameras and 220 million pixels streaming per second. How does just one vehicle come to need such unfathomable capacity? Broadly, machine learning is now being applied on an industrial scale, forcing compute requirements to escalate even further.
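To get a rough sense of that scale, here is a back-of-the-envelope sketch using only the two figures quoted above (2,200 TOPS and 220 million pixels per second, taken at face value – this is an illustration, not a description of AutoX's actual processing pipeline):

```python
# Back-of-the-envelope arithmetic on AutoX's quoted figures.
ops_per_second = 2200 * 10**12      # 2,200 TOPS = 2.2 quadrillion ops/sec
pixels_per_second = 220 * 10**6     # 220 million pixels streaming per second

# Compute budget available for each incoming pixel.
ops_per_pixel = ops_per_second / pixels_per_second
print(f"{ops_per_pixel:,.0f} operations available per pixel")
# prints "10,000,000 operations available per pixel"
```

In other words, even at this supercomputer-class capacity, the system has on the order of ten million operations to spend on each pixel it ingests – a budget that large neural networks consume quickly.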
We see that same trend toward an increased machine learning presence with AutoX's counterpart, Tesla. During Tesla's special "AI Day," most of the content focused on a chip Elon Musk calls Dojo. Dojo chips are assembled into the Dojo computer, which Tesla uses to train its systems for autonomous driving. Courtesy of the Dojo chip, the company will be taking on some major competitors to become the most valuable semiconductor company in the world.
Essentially, it takes that much more computing to train machine learning systems. What companies are quickly realizing is this: The bigger the system, the more intelligence and quality it can deliver – and the more computing it requires.
Talman Advantage #9: A Smoother Transition Into The New Environment
Thanks to close rapport with senior managers and relationships with clients that have lasted for many years, Roy Talman & Associates has the in-depth knowledge of a firm’s work atmosphere that few can bring to the table.
As a result, we can often advise you on what to expect from the culture you're about to join, hopefully making your integration into that environment all the more seamless. Make your first days in a new role better than you ever expected by talking to Talman first.
The Virtuous Cycle
The bigger the system, the better it is – provided you can figure out how to spend the money the right way to benefit from it. To put things in perspective, it used to be that a system with 500 million parameters was considered significant and one with 1.7 billion parameters was very large. Over time, computer scientists found that the more parameters a system had, the "smarter" it became across a variety of areas.
Fast forward to today: GPT-3, a system that in the last year has learned to code, blog, tweet, summarize emails and more, has a gigantic 175 billion parameters.
The future systems are likely to have – get this – over a trillion parameters.
That substantial jump from 175 billion parameters to over a trillion is happening for good reason. By all indications, systems will need to be that big to drive the unprecedented change being felt across our society. As more high-capacity systems on the level of Tesla's and AutoX's proliferate, we should ultimately see an explosion of machine learning capabilities across many industries and applications.
Machine learning is rapidly becoming today what electricity was for people in the early 1900s. It took a few decades for electricity to permeate because its "killer app" turned out to be the electric motor – and manufacturers then needed to redesign everything to accommodate electric motors and do away with steam power.
We're approaching a very similar situation with machine learning today. Building systems that can be taught to solve real-life problems is relatively painless, and the learning curve is shortening. There is still far to go, but the pace at which we are moving upward is accelerating. This should, in turn, accelerate the hiring of talented people who understand machine learning and can help companies push the full-throttle adoption of new systems. Because machine learning is still relatively new, identifying top talent for it is critical to a company's success.
With hiring smarter in mind, Talk To Talman First. Roy Talman & Associates has 40+ years of experience interfacing with ideal candidates, challenging them and recommending the very best options based on the person you need today – and the person who can grow by leaps and bounds in your organization tomorrow.