When you think about it, the number of computer chips surrounding us is absolutely mind-boggling. We simply don’t realize how many things have chips in them or how many chips are in our devices.
If you are staring at a giant computer screen, how many chips are in it? You have to factor in the electronics behind the screen itself. The camera. The microphone. And guess what? We’re not even talking about the actual computer itself yet!
Just think about all the chips that touch each of our lives in this way. Now multiply that for every person in every developed country in the world. If this sounds like an almost uncountable number of chips, you’re right – and demand is only about to climb several notches higher.
Here’s why: The pandemic supercharged a massive shift toward remote work in markets across the globe. Rather than spending 90 minutes each way commuting on a train or in a car, we now count on new technologies to make remote work feel easy to slip into, whether we’re working remotely a couple of days per month or a couple of days per week.
All of which is to say that, with millions of us working remotely and needing ever-more intelligent technologies to bring us together, we may soon be relying on chips – and the staggering number of operations they perform – more than we ever have.
Are we ready for that kind of shift and reliance? Maybe. But at the same time, we have to face facts: The way chips are designed and manufactured has fundamentally changed over the past several years. Let’s take a closer look, with a short trip back in time for some context.
The Early Dominance of Intel
Most people unfamiliar with computer hardware likely believe that the marketplace still consists exclusively of the likes of Intel and AMD. It doesn’t. And it’s essential to recognize that there is a gigantic difference between designing a chip and manufacturing one.
For many years, Intel both designed and manufactured the chips most of the population used. In fact, it was such a given that if a chip saw the light of day, Intel was likely the name behind it.
However, as time progressed, the landscape changed – and gamers were at the forefront of that change. That’s right. Gamers! How so? Gaming relies on a co-processor – the graphics processing unit (GPU) – essential to the gaming experience because game screens need to be redrawn faster and faster to keep the action smooth.
With the arrival of the GPU, Intel was no longer the only “game in town” in the chip business. Companies such as Nvidia emerged to serve this market segment, while AMD and Intel made acquisitions to grow their market share. A single chip could now combine a CPU and a GPU – what the industry calls a “system-on-a-chip.”
That said, in the minds of most people, “he who designs it also manufactures it” – regardless of whether that’s true. AMD, for instance, used to manufacture its own chips, but to focus purely on design, it spun off its manufacturing operations (what became GlobalFoundries) and trusted another business to handle that responsibility.
Talman Advantage #8: True Help To Hit The Ground Running
With an offer coming, do you have a solid understanding of what you’ll be doing in the first six months? The first year? Having placed a variety of senior people at each client’s firm, Roy Talman & Associates can help you clarify a whole lot about the environment you’re about to join, your role and the true expectations of your new manager. A recruiter without the credibility we’ve earned may not be able to shed as much light on what’s in store for you on Day 1 and beyond. So get the insight you need and Talk to Talman First.
Apple Changes The Game Of Chip Design
For many years, like a variety of other companies, Apple relied on Intel chips – until a major problem presented itself: Intel’s manufacturing had fallen far behind rivals who could produce more advanced chips at a faster rate.
Namely, chipmaker TSMC (Taiwan Semiconductor Manufacturing Company) was investing enormous sums in new fabrication plants that could manufacture chips with smaller geometries. Chip processes are measured by their smallest silicon feature, expressed in nanometers. For an exceptionally long period, Intel was stuck at 10 nanometers. Then TSMC delivered a seven-nanometer process, leapfrogging Intel.
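To see why that jump from 10 to 7 nanometers was such a big deal, here’s a rough back-of-the-envelope sketch. It’s illustrative only – modern process-node names no longer map cleanly to a single physical dimension – but the core intuition holds: area scales with feature size squared, so a smaller node packs roughly quadratically more transistors into the same silicon.

```python
def density_gain(old_nm: float, new_nm: float) -> float:
    """Approximate transistor-density multiplier when the smallest
    feature shrinks from old_nm to new_nm (area ~ feature size squared)."""
    return (old_nm / new_nm) ** 2

# Moving from a 10 nm process to a 7 nm process:
print(round(density_gain(10, 7), 2))  # ~2.04x the transistors per unit area
```

In other words, the 10-to-7 shrink is not a 30% improvement – it’s roughly a doubling of how much logic fits on the same sliver of silicon.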
That was all Apple needed to see to make a monumental decision: it didn’t need to rely exclusively on Intel for its chips anymore.
What’s more, Apple viewed many components within Intel’s chips as non-essential to its products. It would design its own chips and turn to a manufacturer like TSMC to produce them.
Chips Get Cooked And Potentially Burned
If there’s one thing that’s rarely been a problem for chip manufacturers, it’s speed. After all, chip performance kept doubling every two to three years. The real issue arose once a chip could perform more than 10 billion operations per second. While impressive on the surface, that level of speed generates so much heat that, without aggressive cooling, the chip will quite literally cook itself.
That’s why an alternative route was required: rather than chasing ever-faster clock speeds, manufacturers began focusing on low energy consumption. As a result, chips gained considerably more capability without running faster. These simple, tiny, low-power chips could go into all kinds of applications and practically any device you could think of. That versatility is exciting, but the world of chipmaking wasn’t done evolving.
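The physics behind that pivot can be sketched with the classic CMOS dynamic-power approximation, P ≈ C·V²·f. This is a simplified model (it ignores leakage and other real-world effects, and the numbers below are hypothetical), but it captures the trap: pushing clock frequency up usually also demands a higher voltage, so power – and therefore heat – grows much faster than speed.

```python
def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """Simplified CMOS dynamic power: P ~ C_eff * V^2 * f (arbitrary units)."""
    return c_eff * volts ** 2 * freq_ghz

base = dynamic_power(1.0, 1.0, 3.0)    # a 3 GHz chip at 1.0 V
faster = dynamic_power(1.0, 1.2, 4.5)  # 1.5x the clock, but needs 1.2 V
print(round(faster / base, 2))  # ~2.16x the power for 1.5x the speed
```

Heat scales the same way as power, which is why the industry stopped racing clocks and started optimizing energy per operation instead.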
Raising The Stakes
Today, those who can design highly complex chips stand atop the industry hierarchy – and, quite frankly, at that level the price of a single chip hardly matters.
The manufacturers producing these very complex chips play by an entirely different standard. Fabrication involves photolithography, in which layer after layer of patterns is printed and superimposed over the one below. In the end, it’s not surprising that producing a very complex chip can take hundreds of steps on specialized lithography equipment – which also explains why these chips cost as much as they do. Each machine handles its own step or layer, so an entire array of machines is needed, and acquiring such an array is incredibly challenging from both a cost and a logistical perspective.
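To get a feel for why those step counts balloon, here’s a toy model – the step names and layer count are hypothetical, purely for illustration: each patterned layer typically needs its own mini-sequence of deposition, resist coating, exposure, development and etching, so the totals multiply quickly.

```python
# Hypothetical per-layer sequence; real fab flows vary widely.
PER_LAYER_STEPS = ["deposit film", "coat resist", "expose (lithography)",
                   "develop", "etch", "strip and clean"]

def total_steps(num_layers: int) -> int:
    """Total process steps for a stack of patterned layers."""
    return num_layers * len(PER_LAYER_STEPS)

print(total_steps(60))  # 360 steps for a 60-layer stack, before any inspection
```

Even this simplified count lands in the hundreds – and a real flow adds inspection, metrology and rework on top.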
In other words, we have two very different categories of chip manufacturing: one runs a chip through this long sequence of steps, producing a chip that costs far more than its predecessors; the other produces a straightforward chip worth much less than the first kind, but one that can be manufactured in far fewer steps using older machines.
The stakes rise significantly once you know there are only so many facilities in the world capable of manufacturing chips, period. A company could spend $10 billion building a foundry, but it still has to ensure that a large percentage of the chips it makes are good – and that they sell.
No wonder so many foundries run at a 24/7 pace, often with every one of their multi-million-dollar machines operating simultaneously and robots moving wafers from one station to the next.
All of this, technically speaking, is hardware. But how are you going to use hardware without a language for it? That requires sophisticated software. Nvidia was one of the first to provide it, through CUDA – the platform and language for programming its GPUs.
As time has gone on, a great deal more of that software has been developed. And to some degree the process now runs in reverse: GPUs are increasingly engineered backward from the software, designed so that the programs people actually run on them perform as well as possible.
In the old days, by contrast, you designed for any and all potential problems, because it was very hard to predict how your chips would be used. Intel focused on creating a “Swiss Army knife” of a chip that could run any software. But if you understand precisely how a chip will be used most of the time, you can design it for that workload with confidence.
Today, you don’t need to spend $10 billion to build a foundry. All you need to do is design the chip. Instead of building a massive facility for manufacturing, what if you hired a few hundred people? How much would that cost? Would that be as bad as you think?
Here’s what I mean: I was watching Elon Musk present during Tesla’s special “AI Day,” introducing people on staff who were likely superstar chip designers paid hundreds of thousands of dollars per year. Even if those salaries added up to $10 million a year, that’s chump change for the likes of Elon Musk!
In other words, as a designer, you can go to a manufacturer and get access to state-of-the-art fabrication technology that would cost you a fortune to build on your own. That is leading many companies to say, “We don’t need Intel. We can design our own chips, then partner with a manufacturer. And with potentially four more global foundries on top of the original ones out there, we don’t have to treat any single manufacturer as our only option.”
That’s the dynamic of designing a chip yourself and sending it out for manufacturing to a select number of shops as we sit here today. Might it change tomorrow? What kinds of companies and fields might be affected most? It demands a watchful eye because it could impact us all in some form or fashion. It wouldn’t be the first time we’ve seen this particular area evolve.
Fortunately, if there’s one constant companies can rely on, it’s the dependability of Roy Talman & Associates in explaining how specific processes, technologies and industries are shifting – and how those shifts impact how we work, how we hire and how we live. It can all start with the design and manufacturing of one tiny chip, the first domino in a host of other events.
Rather than try to decipher where those chips are falling, Talk To Talman First. We’ll supply the much-needed perspective from 40+ years of experience on these kinds of technological movements to help you plan accordingly, move forward and go full speed ahead in the direction of your goals.