Superconductivity’s First Century



Ohm Killer: A winding machine weaves superconducting niobium-titanium wire into multiple-strand cable. Photo: Peter Ginter/Getty Images

Absolute zero, as the name suggests, is as cold as it gets.

In 1848, Lord Kelvin, the great British physicist, pegged it at –273 °C. He thought that bringing something to this temperature would freeze electrons in their tracks, making what is normally a conductor into the perfect resistor. Others believed that electrical resistance would diminish gradually as a conductor cooled, so that by the time it reached absolute zero, all vestiges of resistance would disappear. It turns out that everybody was wrong.

Heike Kamerlingh Onnes, professor of physics at Leiden University, in the Netherlands, found the answer early in 1911 by measuring the resistance of mercury that was frozen solid and chilled to within a few degrees of absolute zero. He found that the resistance declined in proportion to the temperature all the way down to about 4.2 kelvins (4.2 °C above absolute zero), at which point it fell abruptly to zero. Onnes first thought he had a short circuit. It took him a while to realize that what he had was, in fact, the makings of a Nobel Prize—the discovery of superconductivity.

Since then, physicists have sought to understand the quantum-mechanical origins of superconductivity, and engineers have tried to make use of it. While scientific efforts in this area have been rewarded by no fewer than seven Nobel prizes, all commercial applications of superconductivity have pretty much fizzled except one, which came out of the blue: magnetic resonance imaging (MRI).

Why did MRI alone pan out? Can we expect to see a second widespread application anytime soon? Without a crystal ball, it’s hard to know, of course, but reviewing the evolution of superconductivity’s first century offers some interesting clues about what we might expect for its second.

Onnes himself expected that superconductivity would be valuable because it would allow electrical power to be transmitted without any loss of energy in the wires. Those early hopes were dashed, however, by the observation that few materials became superconducting at temperatures above 4 K and that those materials stopped superconducting if much current was passed through them. That is why, for the next five decades, most research in this field centered on finding materials that could remain superconducting while carrying appreciable amounts of current. But that was not the only requirement for practical devices. The people working on them also needed superconducting materials that weren’t too expensive and that could be drawn into thin, reasonably strong wires.

In 1962 researchers at Westinghouse Research Laboratories, in Pennsylvania, developed the first commercial superconducting wire, an alloy of niobium and titanium. Soon after, other researchers, at the Rutherford Appleton Laboratory, in the United Kingdom, improved it by adding copper cladding. At the time, the most promising application appeared to be in the giant magnets physicists use for particle accelerators, as superconducting magnets were able to offer much higher magnetic fields than ones made from ordinary copper wire.

With this and other similar applications in mind, one of us (Abetti) and his fellow scientists and engineers at General Electric Co. in Schenectady, N.Y., succeeded within months in building the world’s first 10-tesla magnet using superconducting wire. Although a scientific and technical triumph, the magnet was a commercial failure. Development costs ran to more than US $200 000, well above the $75 000 fixed-price contract that Bell Telephone Laboratories had paid GE for the magnet, which was to be used for basic research in materials science.

Around this time, engineers at GE and elsewhere demonstrated some other practical applications for superconductors, such as for the windings of large generators, motors, and transformers. But superconducting versions of such industrial machinery never caught on. The problem was that the existing equipment was technologically mature, having already achieved high electrical efficiencies. Indeed, motors, generators, and transformers were practically commodities, with reliability and low cost being what customers most cared about.

That seemingly left only one niche open: superconducting cables for power transmission in areas where overhead lines could not be used—over large bodies of water and in densely populated areas, for example. While the promised gains in efficiency were attractive, the need for expensive and unreliable cooling vessels for the cables made them a dicey proposition.

GE’s management considered the market for superconductivity’s one proven product—magnets—too small and uncertain. But one of the researchers at GE wouldn’t take no for an answer. In 1971, Carl Rosner created an independent spin-off, Intermagnetics General Corp., or IGC, in Latham, N.Y., which made and sold laboratory-size magnets and received government research-and-development grants. The new company was immediately profitable.

At about this time, Martin Wood, a senior research officer at the University of Oxford’s Clarendon Laboratory, and his wife, Audrey, also decided to try to turn superconductivity into a business. In addition to design and consulting work, their newly hatched company, Oxford Instruments, developed and marketed magnets for research purposes, building the first high-field superconducting magnet outside the United States in March of 1962. By 1970, Oxford Instruments had 95 employees.

The 1970s saw the emergence of a few other start-ups that used superconductivity for building such things as sensitive magnetometers. And various research efforts were spawned to explore other applications, including superconductive magnets for storing energy and for levitating high-speed trains. But GE, Philips, Siemens, Westinghouse, and other big players still showed little interest in superconductivity, which, in the view of the managements of these companies, seemed destined to remain a sideshow. They were, however, proved very wrong toward the end of the decade, when a stunning new use for superconducting magnets appeared on the scene: MRI.

MRI was an outgrowth of an analysis technique chemists had long been using called nuclear magnetic resonance (NMR), which itself has since created a significant market for superconducting magnets. There were hints as far back as the 1950s that the NMR signals emanating from different points within a magnet could be distinguished, but it wasn’t until the late 1970s that it became apparent that medically relevant images could be made this way.

The advent of medical imagers that required a person to be immersed in an intense magnetic field brought a swift change in the business calculus at GE, where managers suddenly smelled a billion-dollar market. They knew that the technical risk involved in making the superconducting magnets required for MRI was low—after all, GE’s spin-off IGC had already fabricated comparable products. They knew also that GE could take advantage of its long-established presence in the medical-imaging market, for which it had produced X-ray machines and, more recently, computerized axial tomography scanners. Also, by this time GE had a more entrepreneurial climate, which encouraged the company’s operating units to take risks.

This really was superconductivity’s golden moment, and GE seized it. In 1984 the company rolled out its first MRI system, and by the end of the decade, GE could boast an installed base of over 1000 imagers. Although it constructed its MRI magnets in-house, GE used niobium-titanium wire manufactured by IGC.

Meanwhile, IGC learned to build MRI magnets of its own, which it sold to GE’s competitors. And with a budget of only $5 million, IGC succeeded in building MRI scanners that were functionally equivalent to those GE, Hitachi, Philips, Siemens, and Toshiba were then selling. IGC, however, lacked the marketing clout and reputation within the health-care industry to compete with these multinational giants. So it fell back on its main business, manufacturing superconducting MRI magnets, which it sold primarily to Philips.

Although there were a few attempts at the time to find other commercial uses for superconductivity—in X-ray photolithography or for separating ore minerals, for example—MRI provided the only substantial market. It was around this time, though, that yet another scientific breakthrough put superconductivity back on everyone’s radar.

In 1986, Karl Alexander Müller and Johannes Georg Bednorz, researchers at IBM Research–Zurich, concocted a barium-lanthanum-copper oxide that displayed superconductive properties at 35 K. That’s 12 or so kelvins warmer than any other superconductive material known at the time. What made this discovery even more remarkable was that the material was a ceramic, and ceramics normally don’t conduct electricity. There had been hints of superconducting ceramics before, but until this time, none of them had shown much promise.

Müller and Bednorz’s work triggered a flurry of research around the world. And within a year scientists at the University of Alabama at Huntsville and the University of Houston found a similar ceramic compound that showed superconductivity at temperatures they could attain using liquid nitrogen. Before, all superconductors had required liquid helium—an expensive, hard-to-produce substance—for cooling. Liquid nitrogen, however, can be made from air without that much effort. So the new high-temperature superconductors, in principle, threw the door wide open for all sorts of practical uses, or at least they appeared to.

The discovery of high-temperature superconductors sparked tremendous publicity—which, in retrospect, was clearly hype. Newsweek called it a dream come true. The cover of Time magazine showed a futuristic automobile controlled by superconducting circuits. BusinessWeek declared, “Superconductors! More important than the light bulb and the transistor” on its cover. Many sober scientists and engineers shared this enthusiasm. Among them were Yet-Ming Chiang, David A. Rudman, John B. Vander Sande, and Gregory J. Yurek, the four MIT professors who founded American Superconductor Corp. during this time of feverish excitement over the new high-temperature materials.

Despite all the hoopla, managers at Oxford Instruments, one of the few companies with any real experience using superconductivity at that point, had a dim view of the prospects for the high-temperature ceramics. For the most part, they decided to stick to their former course: working to improve the company’s low-temperature niobium-titanium wire and making incremental improvements in its MRI magnets. Oxford Instruments put only a small effort into studying the new high-temperature superconductors.

The management at IGC, which at the time included one of us (Haldar), saw more promise in the new materials and worked hard to see how they could commercialize them for such things as electrical transmission cables, industrial-scale current limiters, energy-storage coils, motors, and generators. American Superconductor, which went public in 1991, did the same.

It took more than a decade to do, but IGC eventually developed a high-temperature superconducting wire and in collaboration with Waukesha Electric Systems, in Wisconsin, built a transformer with it in 1998. In 2000, IGC and Southwire, of Carrollton, Ga., demonstrated a superconducting transmission cable. Soon after, Haldar and his IGC colleagues established a subsidiary, called IGC-SuperPower, to develop and market electrical devices based on high-temperature superconductivity.

In 2001, American Superconductor tested a superconducting cable for the transmission of electrical power at one of Detroit Edison’s substations. In 2006, SuperPower connected a 30-meter superconductive power cable to the grid near Albany, N.Y. American Superconductor carried out an even more impressive demonstration of this kind in 2008, when it threw the switch for a 600-meter-long superconducting power cable used by New York’s Long Island Power Authority, part of a program funded by the U.S. Department of Energy.

While all these projects were technically successful, they were merely government-sponsored demonstrations; electric utilities are hardly clamoring for such products. The only commercial initiative now slated to use superconducting cables is the proposed Tres Amigas Superstation in New Mexico, an enterprise aimed at tying the eastern, western, and Texas power grids together in one spot. Using superconducting cables would allow the station to transfer massive amounts of power, and because these lines can be relatively short, they wouldn’t be prohibitively expensive.

But Tres Amigas is an exception. For the most part, the electric power industry has shown a stunning lack of interest in superconductors, despite the many potential benefits over conventional copper and aluminum wire: three to five times as much capacity within a given conduit size, half the power losses, no need for toxic or flammable insulating materials. With all those advantages, you might well wonder why this technology hasn’t taken the electric-power industry by storm over the past two decades.

One reason (other than cost) may stem from the changing nature of electric utilities, which in many countries have lost their former monopoly status. These companies are by and large reluctant to make substantial investments in infrastructure, especially for projects that don’t promise a quick return [see “How the Free Market Rocked the Grid,” IEEE Spectrum, December 2010]. So the last thing many of them desire is to assume the risk of adopting anything as radical as superconductive cables, generators, or transformers.

It may be that superconductivity just needs time to mature. Plenty of technologies work that way. Perhaps the next generation of wind turbines will sport superconducting generators in their nacelles, an application that American Superconductor is working toward. A better bet in our view, though, is that superconductivity will remain limited to applications like MRI, where it’s very difficult to build something any other way.

What will those applications be? A ship that cuts through the waves using superconducting magnetic propulsion instead of propellers? Unlikely: Japanese engineers built such a vessel in 1991, and it’s long since been mothballed. An antigravity device that can make living creatures float? Probably not: The 2010 Nobel laureate Andre Geim demonstrated that this could be done in 1997, and it hasn’t been put to any real use. A magnetically levitated train that can top 580 kilometers per hour? Japanese engineers built one in 2003; yet few rail systems are giving up on wheels. A supercooled microprocessor that can run at 500 gigahertz? Perhaps: IBM and Georgia Tech captured that speed record in 2007, but it would be hard to make such a setup practical.

We certainly don’t know what’s ahead. But we suspect that the next big thing for superconductivity, whatever it is, will, like MRI, take the world by surprise.

Pradeep Haldar and Pier Abetti together have almost a half century of experience with superconductivity. Haldar, an IEEE Senior Member, is a professor of nanoengineering at the University at Albany, State University of New York. Abetti, an IEEE Life Fellow and professor of enterprise management at Rensselaer Polytechnic Institute, once had Haldar as a student. Only after Abetti began lecturing about the commercialization of superconductivity did he learn that Haldar had been a director of technology in that very industry. “He made me get up and talk about the whole thing,” says Haldar.

The beam-steering approach aims to make 5G relays and IoT devices batteryless

Prototype of a 64-element millimeter-wave-band phased-array transceiver.

Transmitting electric power wirelessly over long distances has been a goal of electrical engineers since the end of the 19th century, when Nikola Tesla tried his hand at it, to no avail. In the 1970s, engineers at NASA and the U.S. Department of Energy achieved some notable successes in wireless power transfer (WPT) in the kilowatt-kilometer range, their efforts spurred on by the energy crises of the time. Interest waned, however, as energy became plentiful again.

Now, with the advent of 5G and its ability to transmit at high frequencies in the millimeter-wave-band range, new opportunities and approaches are opening up for WPT. Researchers at the Tokyo Institute of Technology have developed a prototype 64-element millimeter-wave-band phased-array transceiver that can send and receive data while simultaneously receiving power. The aim is to employ the transceiver initially as a 5G relay, and later to integrate into Internet of Things (IoT) devices. This would enable such devices to shed their batteries, plugs, and cables, says lead researcher Atsushi Shirane. The result would be devices that are smaller, more practical, and capable of speedier communications, with potentially reduced maintenance costs.

Shirane notes that the transceiver has overcome the two main hurdles that have thwarted similar research efforts to date based on rectennas and array rectennas: short transmission distances and a fixed direction from which power can be received. “What’s more,” he says, “it’s the first device to achieve simultaneous reception of power and communication signals with beam steering using phase-shifting.” Shirane presented the team’s findings at the 2022 IEEE Symposium on LSI Technology & Circuits held in Honolulu, Hawaii, this June.

The front side of the transceiver is a 64-element phased array of antennas laid out in four quadrants. The backside has a flexible printed circuit board housing four custom-made radio-frequency integrated circuit chips, each one individually wired to one of the four antenna quadrants. The chips are integrated on an 8-by-8 array and work as a fully passive unit. Each chip incorporates a phase shifter to enable beam steering, and a rectifier. The rectifier's power and communication outputs are connected to the application device.

The transceiver operates in two modes.

In receiving mode, a base station transmits a 28-gigahertz communications signal and a 24-GHz WPT signal, which are simultaneously received by the four quadrants of antennas and sent on to their respective transceiver chips. The WPT signal activates the device, and both the communications and power signals are phase-shifted to enable fine spatial beam steering of up to plus or minus 45 degrees. The signals then go to a 16-way power combiner that aligns their phases and produces a common output, which enables a longer transmission distance. A rectifier converts the WPT signal into direct current to run the application, while the 28-GHz communications signal is down-converted to an intermediate frequency—for example, 4 GHz—to make it easier for the application to manage.
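To make the beam-steering step concrete, here is a minimal Python sketch of how per-element phase shifts could be chosen to point an 8-by-8 array in a given direction. It is our illustration, not the team’s code: the half-wavelength element spacing and the software implementation are assumptions, and in the actual device the phase shifting is done in the analog RF chips.

```python
# Minimal sketch (our illustration, not the Tokyo Tech design):
# choose per-element phase shifts that steer an 8-by-8 planar array
# toward a desired direction. Element spacing is an assumed half
# wavelength; the real chips do this phase shifting in analog hardware.
import math

C = 3.0e8            # speed of light, m/s
F_WPT = 24e9         # 24-GHz wireless-power signal from the article
SPACING = 0.5 * C / F_WPT   # assumed half-wavelength element pitch, m

def steering_phases(theta_deg, phi_deg, n=8, spacing=SPACING, freq=F_WPT):
    """Return an n x n grid of phase shifts (degrees) steering the main
    beam to elevation angle theta and azimuth angle phi."""
    lam = C / freq
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    u = math.sin(theta) * math.cos(phi)   # direction cosines of the beam
    v = math.sin(theta) * math.sin(phi)
    return [[(-360.0 * spacing * (col * u + row * v) / lam) % 360.0
             for col in range(n)]
            for row in range(n)]

# Example: steer 45 degrees off boresight, the edge of the reported range.
grid = steering_phases(45.0, 0.0)
print(f"phase at element (0, 1): {grid[0][1]:.1f} degrees")
```

In transmit mode, the same phase settings send the up-converted 28-GHz signal back along the direction it arrived from, which is the essence of the retro-reflective backscattering described next.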

The reverse happens in transmission mode: the 4-GHz intermediate-frequency signal from the application is up-converted to 28 GHz and sent back in the same direction from which it was received, using retro-reflective backscattering.

Shirane explains that WPT performance depends on the number of antenna elements and the output power of the base station. The prototype transceiver, with 64 antennas, produced 1 milliwatt, and it continued to generate 46 percent of that output at 4.5 meters, even at a plus or minus 45-degree receiving angle. He estimates that an array of 1,024 elements would generate 10 mW.
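The quoted figures allow a rough back-of-the-envelope scaling check. The short Python sketch below fits a power law to the two numbers above (1 mW from 64 elements, an estimated 10 mW from 1,024) and applies the reported 46 percent angular derating; the power-law form and the function names are our assumptions, not a model published by the researchers.

```python
# Back-of-envelope sketch using only the figures quoted in the article.
# The power-law fit is our own illustration, not the researchers' model.
import math

P_64 = 1.0e-3          # watts reported for the 64-element prototype
P_1024_EST = 10e-3     # watts estimated for a 1,024-element array
ANGLE_RETENTION = 0.46 # fraction of output kept at 4.5 m and +/-45 degrees

# Implied exponent k in P ~ N**k from the two quoted data points.
k = math.log(P_1024_EST / P_64) / math.log(1024 / 64)

def harvested_dc(n_elements, steered=False):
    """Rough DC output (watts) for an n-element array, optionally derated
    to the worst-case steering angle reported in the article."""
    p = P_64 * (n_elements / 64) ** k
    return p * ANGLE_RETENTION if steered else p

print(f"implied exponent k = {k:.2f}")   # about 0.83, i.e. sub-linear
print(f"256 elements, boresight:  {harvested_dc(256) * 1e3:.1f} mW")
print(f"256 elements, 45 degrees: {harvested_dc(256, steered=True) * 1e3:.1f} mW")
```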

With proof of concept established, the researchers are now working to fabricate transceivers with larger antenna arrays and higher frequencies, to increase the output power as well as the communication speed and distance.

“As a first step towards commercialization,” says Shirane, “we aim to apply the technology as a batteryless 5G relay transceiver to extend the service area coverage of millimeter-wave 5G communications. And after increasing DC power generation, the transceiver can also be adopted for batteryless IoT devices.”

New tools track and reduce emissions from machine learning

Matthew Hutson is a freelance writer who covers science and technology, with specialties in psychology and AI. He’s written for Science, Nature, Wired, The Atlantic, The New Yorker, and The Wall Street Journal. He’s a former editor at Psychology Today and is the author of The 7 Laws of Magical Thinking. Follow him on Twitter at @SilverJacket.

Machine-learning models are growing exponentially larger. At the same time, they require exponentially more energy to train, so that they can accurately process images or text or video. As the AI community grapples with its environmental impact, some conferences now ask paper submitters to include information on CO2 emissions. New research offers a more accurate method for calculating those emissions. It also compares factors that affect them, and tests two methods for reducing them.

Several software packages estimate the carbon emissions of AI workloads. Recently a team at Université Paris-Saclay tested a group of these tools to see if they were reliable. “And they’re not reliable in all contexts,” says Anne-Laure Ligozat, a co-author of that study who was not involved in the new work.

The new approach differs in two respects, says Jesse Dodge, a research scientist at the Allen Institute for AI and the lead author of the new paper, which he presented last week at the ACM Conference on Fairness, Accountability, and Transparency (FAccT). First, it records server chips’ energy usage as a series of measurements, rather than summing their use over the course of training. Second, it aligns this usage data with a series of data points indicating the local emissions per kilowatt-hour (kWh) of energy used. This number also changes continually. “Previous work doesn’t capture a lot of the nuance there,” Dodge says.
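In code, the accounting method Dodge describes reduces to a running sum of power samples, each weighted by the grid’s carbon intensity at that moment. The sketch below is a minimal illustration under assumed data shapes and function names, not the paper’s actual tooling.

```python
# Minimal sketch of time-aligned carbon accounting (our illustration,
# not the paper's code): each power sample is weighted by the grid's
# carbon intensity at that moment instead of using a single average.

def training_emissions(power_samples_w, sample_period_s, intensity_g_per_kwh):
    """power_samples_w: GPU power draw in watts, one reading per period.
    intensity_g_per_kwh: grams of CO2 per kWh at the same timestamps
    (e.g. regional five-minute data repeated to match the sampling rate).
    Returns total grams of CO2."""
    grams = 0.0
    for watts, g_per_kwh in zip(power_samples_w, intensity_g_per_kwh):
        kwh = watts * sample_period_s / 3_600_000.0   # watt-seconds -> kWh
        grams += kwh * g_per_kwh
    return grams

# Toy example: one hour of a 300-W GPU while local intensity jumps
# from 300 to 700 g CO2/kWh halfway through.
samples = [300.0] * 3600
intensity = [300.0] * 1800 + [700.0] * 1800
print(f"{training_emissions(samples, 1.0, intensity):.0f} g CO2")   # 150 g
```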

The new tool is more sophisticated than older ones but still tracks only some of the energy used in training models. In a preliminary experiment, the team found that a server’s GPUs used 74% of its energy. CPUs and memory accounted for the rest, but because they support many workloads simultaneously, the team focused on GPU usage. The tool also doesn’t measure the energy used to build the computing equipment, to cool the data center, or to construct the facility and transport engineers to and from it, nor the energy used to collect data or run trained models. Still, it provides some guidance on ways to reduce emissions during training.

“What I hope is that the vital first step towards a more green future and more equitable future is transparent reporting,” Dodge says. “Because you can’t improve what you can’t measure.”

The researchers trained 11 machine-learning models of different sizes to process language or images. Training ranged from an hour on one GPU to eight days on 256 GPUs. They recorded energy used every second or so. They also obtained, for 16 geographical regions, carbon emissions per kWh of energy used throughout 2020, at five-minute granularity. Then they could compare emissions from running different models in different regions at different times.

Powering the GPUs to train the smallest models emitted about as much carbon as charging a phone. The largest model contained six billion parameters, a measure of its size. Even though it was trained to only 13% completion, the GPUs emitted almost as much carbon as powering a home in the United States does in a year. Meanwhile, some deployed models, such as OpenAI’s GPT-3, contain more than 100 billion parameters.

Chart: Allen Institute for AI et al., FAccT 2022

The biggest measured factor in reducing emissions was geographical region: grams of CO2 per kWh ranged from 200 to 755. Besides changing location, the researchers tested two CO2-reduction techniques, made possible by their temporally fine-grained data. The first, Flexible Start, could delay training by up to 24 hours. For the largest model, which required several days of training, delaying it by up to a day typically reduced emissions by less than 1%, but for a much smaller model, such a delay could save 10–80%. The second, Pause and Resume, could pause training at times of high emissions, as long as overall training time didn’t more than double. This method benefited the small model by only a few percent, but in half the regions it cut the largest model’s emissions by 10–30%. Emissions per kWh fluctuate over time in part because, lacking sufficient energy storage, grids must sometimes rely on dirty power sources when intermittent clean sources such as wind and solar can’t meet demand.
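As a concrete illustration of the two techniques, the sketch below schedules a job against a carbon-intensity forecast. It is a simplified rendering under our own assumptions (fixed hourly slots, constant energy per slot, perfect foresight); the paper applied these ideas retrospectively to measured data, and the function names here are ours.

```python
# Simplified sketches of Flexible Start and Pause and Resume (our own
# rendering, not the paper's code). Assumes hourly carbon-intensity
# slots, constant GPU energy per slot, and perfect foresight; the
# forecast must cover at least max_delay + job_slots slots.

def flexible_start(intensity, job_slots, max_delay):
    """Pick the start slot (delayed at most max_delay slots) that
    minimizes total emissions for a contiguous run of job_slots."""
    best = min(range(max_delay + 1),
               key=lambda s: sum(intensity[s:s + job_slots]))
    return best, sum(intensity[best:best + job_slots])

def pause_and_resume(intensity, job_slots, max_total_slots):
    """Run only during the cleanest job_slots slots within the first
    max_total_slots, so wall-clock time never exceeds max_total_slots."""
    window = intensity[:max_total_slots]
    cleanest = sorted(range(len(window)), key=lambda i: window[i])[:job_slots]
    return sorted(cleanest), sum(window[i] for i in cleanest)

# Toy forecast: grams of CO2 per kWh for each hour over two days.
day = [520, 500, 480, 450, 400, 380, 390, 430, 490, 560, 630, 690,
       720, 730, 710, 680, 650, 610, 580, 560, 545, 535, 528, 524]
forecast = day * 2

start, e1 = flexible_start(forecast, job_slots=12, max_delay=24)
hours, e2 = pause_and_resume(forecast, job_slots=12, max_total_slots=24)
print(f"Flexible Start: begin at hour {start}, relative emissions {e1}")
print(f"Pause and Resume: run in hours {hours}, relative emissions {e2}")
```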

Ligozat found these optimization techniques the most interesting part of the paper. But they were based on retrospective data. Dodge says that in the future he’d like to be able to predict emissions per kWh so that such techniques could be applied in real time. Ligozat offers another way to reduce emissions: “The first good practice is just to think before running an experiment,” she says. “Be sure that you really need machine learning for your problem.”

Register for this webinar to enhance your modeling and design processes for microfluidic organ-on-a-chip devices using COMSOL Multiphysics

If you want to enhance your modeling and design processes for microfluidic organ-on-a-chip devices, tune into this webinar.

You will learn methods for simulating the performance and behavior of microfluidic organ-on-a-chip devices and microphysiological systems in COMSOL Multiphysics. Additionally, you will see how to couple multiple physical effects in your model, including chemical transport, particle tracing, and fluid–structure interaction. You will also learn how to distill simulation output to find key design parameters and obtain a high-level description of system performance and behavior.

There will also be a live demonstration of how to set up a model of a microfluidic lung-on-a-chip device with two-way coupled fluid–structure interaction. The webinar will conclude with a Q&A session. Register now for this free webinar!