
Thank You for Being Late

Thomas L. Friedman


  And it just kept coming true.

  “The fact that something similar is going on for fifty years is truly amazing,” Moore said to me. “You know, there were all kinds of barriers we could always see that [were] going to prevent taking the next step, and somehow or other, as we got closer, the engineers had figured out ways around these.”

  What is equally striking in Moore’s 1965 article is how many predictions he got right about what these steadily improving microchips would enable:

  Integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today …

  In telephone communications, integrated circuits in digital filters will separate channels on multiplex equipment. [They] will also switch telephone circuits and perform data processing.

  Computers will be more powerful, and will be organized in completely different ways … Machines similar to those in existence today will be built at lower costs and with faster turn-around.

  Moore could fairly be said to have anticipated the personal computer, the cell phone, self-driving cars, the iPad, big data, and the Apple Watch. The only thing he missed, I joked with him, was “microwave popcorn.”

  I asked Moore, when was the moment he came home and said to his wife, Betty, “Honey, they’ve named a law after me”?

  “For the first twenty years, I couldn’t utter the term ‘Moore’s law’—it was embarrassing,” he responded. “It wasn’t a law. Finally, I got accustomed to it, where now I could say it with a straight face.”

  Given that, is there something that he wishes he had predicted—like Moore’s law—but didn’t? I asked him.

  “The importance of the Internet surprised me,” said Moore. “It looked like it was going to be just another minor communications network that solved certain problems. I didn’t realize it was going to open up a whole universe of new opportunities, and it certainly has. I wish I had predicted that.”

  There are so many wonderful examples of Moore’s law in action that it is hard to pick a favorite. Here’s one of the best I have ever come across, offered by the writer John Lanchester in a March 5, 2015, essay in the London Review of Books entitled “The Robots Are Coming.”

  “In 1996,” wrote Lanchester, “in response to the 1992 Russo-American moratorium on nuclear testing, the U.S. government started a program called the Accelerated Strategic Computing Initiative [ASCI]. The suspension of testing had created a need to be able to run complex computer simulations of how old weapons were ageing, for safety reasons, and also—it’s a dangerous world out there!—to design new weapons without breaching the terms of the moratorium.”

  In order to accomplish that, Lanchester added:

  ASCI needed more computing power than could be delivered by any existing machine. Its response was to commission a computer called ASCI Red, designed to be the first supercomputer to process more than one teraflop. A “flop” is a floating point operation, i.e., a calculation involving numbers which include decimal points … (computationally much more demanding than calculations involving binary ones and zeroes). A teraflop is a trillion such calculations per second. Once Red was up and running at full speed, by 1997, it really was a specimen. Its power was such that it could process 1.8 teraflops. That’s 18 followed by 11 zeros. Red continued to be the most powerful supercomputer in the world until about the end of 2000.

  I was playing on Red only yesterday—I wasn’t really, but I did have a go on a machine that can process 1.8 teraflops. This Red equivalent is called the PS3 [PlayStation 3]: it was launched by Sony in 2005 and went on sale in 2006. Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred [pounds]. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.
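
  Lanchester’s arithmetic is easy to check. Here is a minimal sketch in Python that reproduces the “18 followed by 11 zeros” figure and the rough scale of the price collapse; the dollar conversion of the PS3’s price is my own assumption, not his:

    teraflop = 10 ** 12              # one trillion floating-point operations per second
    red_flops = 1.8 * teraflop       # ASCI Red's (and the PS3's) throughput
    print(f"{red_flops:,.0f}")       # 1,800,000,000,000: "18" followed by 11 zeros

    red_cost_usd = 55_000_000        # ASCI Red, 1997
    ps3_cost_usd = 200 * 1.5         # "under two hundred pounds"; ~1.5 $/GBP assumed
    print(f"{red_cost_usd / ps3_cost_usd:,.0f}x cheaper")  # roughly 183,333x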

  Now that Moore’s law has entered the second half of the chessboard, how much farther can it go? A microchip, or chip, as we said, is made up of transistors, which are tiny switches; these switches are connected by tiny copper wires that act like pipes through which electrons flow. The way a chip operates is that you push electrons as fast as possible through many copper wires on a single chip. When you send electrons from one transistor to another, you are sending a signal to turn a given switch on or off and thus perform some kind of computing function or calculation. With each new generation of microchips, the challenge is to push electrons through thinner and thinner wires to more and smaller switches, to switch the electron flow on and off faster and faster, to generate more computing power with as little energy and heat as possible, for as low a cost as possible, in as small a space as possible.

  “Someday it has to stop,” said Moore. “No exponential like this goes on forever.”

  We are not there yet, though.

  For fifty years the industry has kept finding new ways either to halve the area each transistor occupies at roughly the same cost, thus offering twice the transistors for the same price, or to fit the same number of transistors into half the space for half the cost. It has done so by shrinking the transistors and making the wires thinner and more closely spaced. In some cases, this has involved coming up with new structures and materials, all to keep that exponential growth roughly on track every twenty-four months or so. Just one example: the earliest integrated circuits used one layer of aluminum wire pipes; today they use thirteen layers of copper pipes, stacked one atop the other with nanoscale manufacturing.
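
  To make that cadence concrete, here is a minimal sketch in Python of the idealized doubling described above. The starting point, Intel’s 4004 microprocessor of 1971 with roughly 2,300 transistors, is historical; the clean exponential that follows is a simplification that real chips only roughly track:

    # Intel's first microprocessor, the 4004 (1971), held roughly 2,300
    # transistors; everything past that point is an idealized projection.
    def transistors(year, start_year=1971, start_count=2_300, period=2.0):
        """Projected transistor count if the count doubles every `period` years."""
        return start_count * 2 ** ((year - start_year) / period)

    for year in (1971, 1985, 2000, 2016):
        print(year, f"{transistors(year):,.0f}")

    # A doubling period closer to 2.5 years, the recent slowdown Krzanich
    # describes below, noticeably flattens the same curve:
    print(2016, f"{transistors(2016, period=2.5):,.0f}")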

  “I have probably seen the death of Moore’s law predicted a dozen times,” Intel’s CEO, Brian Krzanich, told me. “When we were working at three microns [a micron is one-thousandth of a millimeter, or about 0.000039 inches], people said, ‘How will we get below that—can we make film thickness thin enough to make such devices, and could we reduce the wavelength of light to pattern such small features?’ But each time we found breakthroughs. It is never obvious beforehand, and it is not always the answer that is first prescribed that provides the breakthrough. But every time we have broken through the next barrier.”

  Truth be told, said Krzanich, the last two iterations of Moore’s law were accomplished after closer to two and a half years rather than two, so there has been some slowing down. Even so, whether the exponential is happening every one, two, or three years, the important point is that thanks to this steady nonlinear improvement in microchips, we keep steadily making machines, robots, phones, watches, software, and computers smarter, faster, smaller, cheaper, and more efficient.

  “We are at the fourteen-nanometer generation, which is way below anything you can see with the human eye,” Krzanich explained, referring to Intel’s latest microchip. “The chip might be the size of your fingernail and on that chip will be over one billion transistors. We know how to get to ten nanometers pretty well, and we have most of the answers for seven and even five. Beyond five nanometers there are a bunch of ideas that people are thinking about. But that is how it has always been through time.”

  Bill Holt, Intel’s executive vice president of technology and manufacturing, is the man in charge of keeping Moore’s law going. On the tour he gave me of Intel’s Portland, Oregon, chip fabrication plant, or fab, I watched through windows into the clean room where twenty-four hours a day robots move the chips from one manufacturing process to the next, while men and women in white lab coats make sure the robots are happy. Holt, too, has little patience for those who are sure Moore’s law is running out. So much work is being done now with new materials that can pack more transistors that use less energy and create less heat, says Holt, that he is confident in ten years “something” will come along and lead the next generation of Moore’s law.

  But even if new materials aren’t discovered, it is important to remember that from the very beginning the processing power in microchips was also improved by software advances, not just silicon. “More powerful chips were what enabled more sophisticated software, and some of that more sophisticated software was then used to make the chips themselves get faster through new designs and optimization of all the complexity that was growing on the chip itself,” remarked Craig Mundie.

  And it is these mutually reinforcing breakthroughs in chip design and software that have laid the foundation for the recent breakthroughs in artificial intelligence, or AI. Because machines are now able to absorb and process data at previously unimagined rates and amounts, they can now recognize patterns and learn much as our biological brains do.

  But it all started with that first microchip and Moore’s law. “Plenty of people have predicted the end of Moore’s law plenty of times,” Holt concluded, “and they predicted it for different reasons. The only thing they all have in common is that they were all wrong.”

  Sensors: Why Guessing Is Officially Over

  There was a time when you might have referred to someone as “dumb as a fire hydrant” or “dumb as a garbage can.”

  I wouldn’t do that anymore.

  One of the major and perhaps unexpected consequences of technological acceleration is this: fire hydrants and garbage cans are now getting really smart. For instance, consider the Telog Hydrant Pressure Recorder, which attaches to a fire hydrant and broadcasts its water pressure wirelessly straight to the desktop of the local utility, greatly reducing blowouts and hydrant breakdowns. And now you can pair that with Bigbelly garbage cans, which are loaded with sensors that wirelessly announce when they are full and in need of being emptied—so the garbage collectors can optimize their service routes and the city can become cleaner for less money. Yes, even the garbageman is a tech worker now. The company’s website notes that “each Bigbelly receptacle measures 25" W × 26.8" D × 49.8" H and uses built-in solar panels to run motorized compactors, which dramatically reduce waste volumes to help create greener, cleaner streets … The receptacles have built-in cloud computing technology to digitally signal to trash collectors that they have reached capacity and need immediate attention.”

  That garbage can could take an SAT exam!

  What is making hydrants and garbage cans so much smarter is another acceleration, not directly related to computing per se but critical for expanding what computing can now do—and that is sensors. WhatIs.com defines a sensor as “a device that detects and responds to some type of input from the physical environment. The specific input could be light, heat, motion, moisture, pressure, or any one of a great number of other environmental phenomena. The output is generally a signal that is converted to human-readable display at the sensor location or transmitted electronically over a network for reading or further processing.”

  Thanks to the acceleration of the miniaturization of sensors, we are now able to digitize four senses—sight, sound, touch, and taste—and are working on the fifth: smell. A wirelessly connected fire hydrant pressure sensor creates a digital measurement that tells the utility when the pressure is too high or too low. A temperature sensor tracks the expansion and contraction of the liquid in a thermometer to create a digital temperature readout. A motion sensor emits a regular energy flow—microwaves, ultrasonic waves, or a light beam—and sends out a digital signal when that flow is interrupted by a person, car, or animal entering its path. Police now bounce sensor beams off cars to measure their speed, and bounce sound waves off buildings to locate the source of a gunshot. The light sensor on your computer measures the light in your work area and then adjusts the screen brightness accordingly. Your Fitbit is a combination of sensors measuring the number of steps you take, the distance you’ve gone, the calories you’ve burned, and how vigorously you move your limbs. The camera in your phone is a still and video camera capturing and transmitting images from anywhere to anywhere.
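
  As an illustration of the first of those examples, here is a minimal sketch in Python of a threshold-alerting pressure sensor. The function names and the 40 to 120 psi safe band are hypothetical stand-ins, not the Telog device’s real interface:

    import random
    import time

    LOW_PSI, HIGH_PSI = 40.0, 120.0  # assumed safe operating band

    def read_pressure_psi():
        # Stand-in for an actual analog-to-digital read from the hydrant.
        return random.gauss(80.0, 25.0)

    def transmit(message):
        # Stand-in for the wireless uplink to the utility's desktop software.
        print(message)

    def monitor(samples=10, interval_s=0.1):
        for _ in range(samples):
            psi = read_pressure_psi()
            if psi < LOW_PSI or psi > HIGH_PSI:
                transmit(f"ALERT: {psi:.1f} psi is outside {LOW_PSI}-{HIGH_PSI} psi")
            time.sleep(interval_s)

    monitor()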

  This vast expansion in our ability to sense our environment and turn it into digitized data was made possible by breakthroughs in materials science and nanotechnology that created sensors so small, cheap, smart, and resistant to heat and cold that we could readily install them and fasten them to measure stress under extreme conditions and then transmit the data. Now we can even paint them—using a process called 3-D inking—on any parts of any machine, building, or engine.

  To better understand the world of sensors I visited General Electric’s huge software center in San Ramon, California, to interview Bill Ruh, GE’s chief digital officer. That in itself is a story. GE, thanks in large part to its accelerating ability to put sensors all over its industrial equipment, is becoming more of a software company, with a big base now in Silicon Valley. Forget about washing machines—think intelligent machines. GE’s ability to install sensors everywhere is helping to make possible the “industrial Internet,” also known as the “Internet of Things” (IoT), by enabling every “thing” to carry a sensor that broadcasts how it is feeling at any moment, thus allowing its performance to be immediately adjusted or predicted in response. This Internet of Things, Ruh explained, “is creating a nervous system that will allow humans to keep up with the pace of change, make the information load more usable,” and basically “make every thing intelligent.”

  General Electric itself gathers data from more than 150,000 GE medical devices, 36,000 GE jet engines, 21,500 GE locomotives, 23,000 GE wind turbines, 3,900 gas turbines, and 20,700 pieces of oil and gas equipment, all of which wirelessly report to GE how they are feeling every minute.

  This new industrial nervous system, argued Ruh, was originally accelerated by advances in the consumer space—such as camera-enabled smartphones with GPS. They are to the industrial Internet in the twenty-first century, said Ruh, what the moonshot was to industrial progress in the twentieth century—they drove a great leap forward in an array of interlinked technologies and materials, making all of them smaller, smarter, cheaper, and faster. “The smartphone enabled sensors to get so cheap that they could scale, and we could put them everywhere,” said Ruh.

  And now those sensors are churning out insights at a level of granularity we have never had before. When all of these sensors transmit their data to centralized data banks, and then increasingly powerful software applications look for the patterns in that data, we can suddenly see weak signals before they become strong ones, and we can see patterns before they cause problems. Those insights can then be looped back for preventive action—when we empty the garbage bins at the optimal moment or adjust the pressure in a fire hydrant before a costly blowout, we are saving time, money, energy, and lives and generally making humanity more efficient than we ever imagined we could be.

  “The old approach was called ‘condition-based maintenance’—if it looks dirty, wash it,” explained Ruh. “Preventive maintenance was: change the oil every six thousand miles, whether you drive it hard or not.” The new approach is “predictive maintenance” and “prescriptive maintenance.” We can now predict nearly the exact moment when a tire, engine, car or truck battery, turbine fan, or widget needs to be changed, and we can prescribe the exact detergent that works best for that particular engine operating under different circumstances.

  If you look at the GE of the past, added Ruh, it was based on mechanical engineers’ belief that by using physics you could model the whole world and right away get insights into how things worked. “The idea,” he explained, “was that if you know exactly how the gas turbine and combustion engine work, you can use the laws of physics and say: ‘This is how it is going to work and when it is going to break.’ There was not a belief in the traditional engineering community that the data had much to offer. They used the data to verify their physics models and then act upon them. The new breed of data scientists here say: ‘You don’t need to understand the physics to look for and find the patterns.’ There are patterns that a human mind could not find, because the signals are so weak early on that you won’t see them. But now that we have all this processing power, those weak signals just pop out at you. And so as you get that weak signal, it now becomes clear that it is an early indication that something is going to break or is becoming inefficient.”
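
  A toy version of that weak-signal hunting takes only a few lines: keep a rolling window of recent readings and flag any new reading that sits several standard deviations outside the recent norm, well before it would trip a hard failure limit. This Python sketch, including its synthetic drift, is illustrative only and not GE’s actual method:

    from collections import deque
    from statistics import mean, stdev
    import random

    def weak_signal_alerts(readings, window=50, z_threshold=3.0):
        """Yield (index, value) for readings that drift away from the recent norm."""
        recent = deque(maxlen=window)
        for i, x in enumerate(readings):
            if len(recent) == window:
                mu, sigma = mean(recent), stdev(recent)
                if sigma > 0 and abs(x - mu) / sigma > z_threshold:
                    yield i, x  # statistically unusual long before outright failure
            recent.append(x)

    # Synthetic vibration trace: stable at first, then a slow upward drift
    # far too gradual for a person watching a gauge to notice.
    trace = [random.gauss(1.00, 0.01) for _ in range(200)]
    trace += [random.gauss(1.00 + 0.002 * t, 0.01) for t in range(200)]

    for i, x in weak_signal_alerts(trace):
        print(f"sample {i}: reading {x:.3f} deviates from recent history")
        break  # the first alert is the "weak signal"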

  In the past, the way we detected weak signals was with intuition, added Ruh. Experienced workers knew how to process weak data. But now, with big data, “with a much finer grain of fidelity, we can make finding the needle in the haystack the norm”—not the exception. “And we can then augment the human worker with machines, so they work as colleagues, and enable them to process weak signals together and overnight become like a thirty-year veteran.”

  Think about that. The intuition about how a machine is operating on a factory floor used to come from working there for thirty years and being able to detect a slightly different sound signature emanating from the machine, telling you something might not be exactly right. That is a weak signal. Now, with sensors, a new employee can detect a weak signal on the first day of work—without any intuition. The sensors will broadcast it.

  This ability to generate and apply knowledge so much faster is enabling us to get the most not only out of humans but also out of cows. Guessing is over for dairy farmers, too, explained Joseph Sirosh, corporate vice president of the Data group in Microsoft’s Cloud and Enterprise Division. Sounds like a pretty brainy job—managing bits and bytes. But when I sat down with Sirosh to learn about the acceleration in sensing, he chose to explain it to me with a very old example: cows.