Engineering | Popular Science
https://www.popsci.com/category/engineering/
Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That’s Popular Science, 145 years strong.

Workers rely on medieval era tech to reconstruct the Notre Dame
https://www.popsci.com/technology/notre-dame-reconstruction-medieval-tools/ | Thu, 01 Jun 2023
Notre Dame de Paris cathedral on sunny day
Carpenters are using the same tools and materials to reconstruct Notre Dame as were used to first build it. Deposit Photos

Laborers are taking a decidedly old school approach to rebuilding the fire-ravaged cathedral.

The post Workers rely on medieval era tech to reconstruct the Notre Dame appeared first on Popular Science.


It’s been a little over four years since a major fire ravaged France’s iconic Notre Dame de Paris cathedral, causing an estimated $865 million of damage to the majority of its roof and recognizable spire. Since then, the French government, engineers, and a cadre of other dedicated restoration experts have been hard at work rebuilding the architectural wonder, which is currently slated to reopen to the public by the end of 2024.

It’s a tight turnaround, and one that would be much easier to meet if carpenters used modern technology and techniques to repair the iconic building. But as AP News explained earlier this week, it’s far more important to use the same approaches that helped first construct Notre Dame—well over 800 years ago. According to the recent dispatch, rebuilders are deliberately employing medieval-era tools such as hand axes, mallets, and chisels to hew the hundreds of tons of oak beams for the cathedral’s roof.

Although the work would progress faster with modern equipment and materials, that’s not the point. Instead, restorers consider it ethically and artistically imperative to stay true to “this cathedral as it was built in the Middle Ages,” explained Jean-Louis Georgelin, the retired French army general overseeing the project.

[Related: The Notre Dame fire revealed a long-lost architectural marvel.]

Thankfully, everything appears to be on track for the December 2024 reopening. Last month, overseers successfully conducted a “dry run” to assemble and erect large sections of the timber frame at a workshop in western France’s Loire Valley. The next time the pieces are put together will be atop the actual Notre Dame cathedral.

As rudimentary as some of these construction techniques may seem now, at the time they were considered extremely advanced. Earlier this year, in fact, researchers discovered Notre Dame was likely the first Gothic-style cathedral to utilize iron for binding sections of stonework together.

It’s not all old-school handiwork, however. The team plans to transport the massive roofing components to Paris by truck, then lift them into place with the help of a large mechanical crane. Throughout the process, detailed computer analysis has been used to make absolutely sure the carpenters’ measurements and hand-hewn work stay on track. The melding of bygone and modern technology appears to complement the project perfectly, ensuring that when Notre Dame finally rises from the ashes, literally and figuratively, it will be as stunning as ever.


Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

How do sound waves work?
https://www.popsci.com/reviews/what-are-sound-waves/ | Wed, 28 Jul 2021
Blue sine waves on a black background. Sine waves, like these, are a way to envision how sound works.
Whether you’re recording or just vibin’, the science of sound can be cool. Pawel Czerwinski / Unsplash

Sound waves are vibrations that can move us, hurt us, and maybe even heal us.

The post How do sound waves work? appeared first on Popular Science.


We live our entire lives surrounded by them. They slam into us constantly at more than 700 miles per hour, sometimes hurting, sometimes soothing. They have the power to communicate ideas, evoke fond memories, start fights, entertain an audience, scare the heck out of us, or help us fall in love. They can trigger a range of emotions, and they can even cause physical damage. This reads like something out of science fiction, but what we’re talking about is very much real and already part of our day-to-day lives. They’re sound waves. So, what are sound waves and how do they work?

If you’re not in the audio industry, you probably don’t think too much about the mechanics of sound. Sure, most people care about how sounds make them feel, but they aren’t as concerned with how sound actually affects them. Understanding how sound works has a number of practical applications, however, and you don’t have to be a physicist or engineer to explore this fascinating subject. Here’s a primer on the science of sound to help get you started.

What’s in a wave

When energy moves through a substance such as water or air, it makes a wave. There are two kinds of waves: longitudinal and transverse. Transverse waves, as NASA notes, are probably what most people picture when they think of waves—like the up-down ripples of a battle rope used in a workout. Longitudinal waves, also known as compression waves, are what sound waves are. There’s no perpendicular motion in these; instead, the particles of the medium vibrate back and forth along the same direction the wave travels.

How sound waves work

Sound waves are a type of energy that’s released when an object vibrates. Those acoustic waves travel from their source through air or another medium, and when they come into contact with our eardrums, our brains translate the pressure waves into words, music, or signals we can understand. These pulses help you place where things are in your environment.

We can experience sound waves in ways that are more physical, not just physiological, too. If sound waves reach a microphone—whether it’s a plug-n-play USB livestream mic or a studio-quality microphone for vocals—it transforms them into electronic impulses that are turned back into sound by vibrating speakers. Whether listening at home or at a concert, we can feel the deep bass in our chest. Opera singers can use them to shatter glass. It’s even possible to see sound waves sent through a medium like sand, which leaves behind a kind of sonic footprint. 

That shape is rolling peaks and valleys, the signature of a sine (aka sinusoid) wave. The faster the wave vibrates—the higher its frequency—the closer together those peaks and valleys sit; slower vibrations spread them out. The ocean comparison isn’t perfect, but it’s a reasonable way to picture them. It’s this movement that allows sound waves to do so many other things.

It’s all about frequency

When we talk about a sound wave’s pitch, we’re referring to how rapidly the wave cycles from peak to trough and back to peak. Up … and then down … and then up … and then down. The technical term is frequency, but many of us know it as pitch. We measure sound frequency in hertz (Hz), which represents cycles per second, with faster frequencies creating higher-pitched sounds. For instance, the A note right above Middle C on a piano is measured at 440 Hz—it cycles up and down 440 times per second. Middle C itself is 261.63 Hz—a lower pitch, vibrating at a slower frequency.
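Those note frequencies aren’t arbitrary: in twelve-tone equal temperament, each semitone multiplies the frequency by the twelfth root of two. A quick illustrative sketch (the key numbering assumes a standard 88-key piano, where A above Middle C is key 49):

```python
import math

def piano_key_frequency(n, a4_key=49, a4_hz=440.0):
    """Frequency of the nth key on a standard 88-key piano in equal
    temperament: each semitone multiplies pitch by 2 ** (1/12)."""
    return a4_hz * 2 ** ((n - a4_key) / 12)

print(round(piano_key_frequency(49), 2))  # A above Middle C: 440.0
print(round(piano_key_frequency(40), 2))  # Middle C: 261.63
```

Nine semitones below A440 lands exactly on the 261.63 Hz figure for Middle C mentioned above.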

Sine waves. Illustration.
It’s sine waves of various frequencies that send waves of emotion through you. The closer together the peaks, the higher the tone. Wikipedia

Understanding frequencies can be useful in many ways. You can precisely tune an instrument by analyzing the frequencies of its strings. Recording engineers use their understanding of frequency ranges to dial in equalization settings that help sculpt the sound of the music they’re mixing. Car designers work with frequencies—and materials that can block them—to help make engines quieter. And active noise cancellation uses microphones and signal-processing algorithms to measure external sound and generate inverse waves that cancel environmental rumble and hum, allowing top-tier ANC headphones and earphones to isolate the wearer from the noise around them. The average frequency range of human hearing is 20 to 20,000 Hz.
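The “inverse wave” idea behind noise cancellation is easy to sketch: flip the sign of every sample, and the original wave and its inverse sum to silence. A toy illustration (real ANC must also contend with latency and imperfect microphones):

```python
import math

RATE = 48_000  # samples per second

def tone(freq_hz, n_samples, rate=RATE):
    """Sample a pure sine tone at the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n_samples)]

hum = tone(60, 480)                    # a 60 Hz mains-style hum
anti = [-s for s in hum]               # the phase-inverted "anti-noise" wave
residual = [a + b for a, b in zip(hum, anti)]

print(max(abs(s) for s in residual))   # 0.0: the waves cancel exactly
```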

What’s in a name? 

The hertz measurement is named for the German physicist Heinrich Rudolf Hertz, who proved the existence of electromagnetic waves. 

woman talking through a handheld megaphone
Can you hear me now? Cottonbro / Pexels

Getting amped

Amplitude equates to sound’s volume or intensity. Using our ocean analogy—because, hey, it works—amplitude describes the height of the waves.

We measure amplitude in decibels (dB). The dB scale is logarithmic, which means equal steps in dB correspond to equal ratios of sound power, not equal amounts. Every increase of 10 dB multiplies the power by 10: a 70 dB vacuum cleaner puts out roughly ten times the acoustic power of a 60 dB conversation, and a 110 dB rock concert roughly 100,000 times as much. That’s why the scale climbs so treacherously. Cranking a guitar amp from 100 dB to 130 dB doesn’t add a third more sound; it’s a thousandfold jump in power, past the threshold of pain. The logarithmic scale exists precisely so we can describe that enormous span, from a pin drop to a jet engine, with conveniently small numbers.

Twice as nice

We perceive an increase of about 10 dB as a doubling of loudness.
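The arithmetic behind decibels is compact: amplitude (sound pressure) ratios use 20 times the base-10 logarithm, while power ratios use 10 times it. A small sketch:

```python
import math

def amplitude_ratio_to_db(ratio):
    """Decibel change for an amplitude (sound pressure) ratio: 20 * log10."""
    return 20 * math.log10(ratio)

def db_to_power_ratio(db):
    """Power ratio implied by a decibel change: 10 ** (db / 10)."""
    return 10 ** (db / 10)

print(round(amplitude_ratio_to_db(2), 2))  # doubling amplitude adds ~6.02 dB
print(db_to_power_ratio(10))               # +10 dB is 10x the sound power
```

Note the asymmetry with perception: +10 dB is ten times the power, but we hear it as only about twice as loud.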

Parts of a sound wave

Timbre and envelope are two characteristics of sound waves that help determine why, say, two instruments can play the same chords but sound nothing alike. 

Timbre is determined by the unique mix of harmonics an instrument produces. When you play an A, the fundamental note is only part of what you hear—overtones at higher frequencies stack on top of it. The particular blend of those harmonics helps keep a piano from sounding like a guitar, or an angry grizzly bear from sounding like a rumbling tractor engine.

[Related: Even plants pick up on good vibes]

But we also rely on envelopes, which determine how a sound’s amplitude changes over time. A cello’s note might swell slowly to its maximum volume, then hold for a bit before gently fading out again. On the other hand, a slamming door delivers a quick, sharp, loud sound that cuts off almost instantly. Envelopes comprise four parts: Attack, Decay, Sustain, and Release. In fact, they’re more formally known as ADSR Envelopes. 

  • Attack: This is how quickly the sound achieves its maximum volume. A barking dog has a very short attack; a rising orchestra has a slower one. 
  • Decay: This describes how fast the sound settles into its sustained volume. When a guitar player plucks a string, the note starts off loudly but quickly settles into something quieter before fading out completely. The time it takes to hit that sustained volume is decay. 
  • Sustain: Sustain isn’t a measure of time; it’s a measure of amplitude, or volume. It’s how loud the plucked guitar note is after the initial attack but before it fades out. 
  • Release: This is the time it takes for the note to drift off to silence. 
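Those four stages can be sketched as a simple piecewise function that returns an amplitude multiplier over time. The specific durations and levels below are arbitrary illustration values, not properties of any real instrument:

```python
def adsr(t, attack=0.05, decay=0.10, sustain=0.6, release=0.20, note_len=0.5):
    """Amplitude multiplier (0..1) at time t seconds for a note held for
    note_len seconds. attack/decay/release are durations; sustain is a level."""
    if t < attack:                      # Attack: ramp up to full volume
        return t / attack
    if t < attack + decay:              # Decay: settle toward the sustain level
        return 1 - (t - attack) / decay * (1 - sustain)
    if t < note_len:                    # Sustain: hold while the note is held
        return sustain
    if t < note_len + release:          # Release: fade to silence
        return sustain * (1 - (t - note_len) / release)
    return 0.0

print(adsr(0.3))   # mid-sustain: 0.6
print(adsr(1.0))   # long after release: 0.0
```

A short attack makes the function jump to full volume almost instantly, like the barking dog; a long attack gives the slow orchestral swell.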

Speed of sound

Science fiction movies like it when spaceships explode with giant, rumbling, surround-sound booms. However, sound needs to travel through a medium so, despite Hollywood saying otherwise, you’d never hear an explosion in the vacuum of space. 

Sound’s velocity, or the speed it travels at, differs depending on the density (and even temperature) of the medium it’s moving through—it’s faster in water than in air, for instance. In air at room temperature, sound moves at about 1,127 feet per second, or 767.54 miles per hour. When jets break the sound barrier, they’re traveling faster than that. And knowing these numbers lets you estimate the distance of a lightning strike by counting the seconds between the flash and the thunder’s boom—if you count to 10, the strike is approximately 11,270 feet away, a little over two miles. (Very roughly, of course.)
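The flash-to-thunder estimate is simple arithmetic: the delay multiplied by the speed of sound in air.

```python
SPEED_OF_SOUND_FT_PER_S = 1127  # in air, at roughly room temperature

def lightning_distance(delay_seconds):
    """Distance of a strike from the flash-to-thunder delay, in (feet, miles)."""
    feet = delay_seconds * SPEED_OF_SOUND_FT_PER_S
    return feet, feet / 5280

feet, miles = lightning_distance(10)
print(f"{feet} ft, about {miles:.1f} miles")  # 11270 ft, about 2.1 miles
```

A handy mental shortcut from the same numbers: roughly five seconds of delay per mile.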

A stimulating experience

Anyone can benefit from understanding the fundamentals of sound and what sound waves are. Musicians and content creators with home recording set-ups and studio monitors obviously need a working knowledge of frequencies and amplitude. If you host a podcast, you’ll want as many tools as possible to ensure your voice sounds clear and rich, and this can include understanding the frequencies of your voice, what microphones are best suited to them, and how to set up your room to reflect or dampen the sounds you do or do not want. Having some foundational information is also useful when doing home-improvement projects—when treating a recording workstation, for instance, or just soundproofing a new enclosed deck. And who knows, maybe one day you’ll want to shatter glass. Having a better understanding of the physics of sound opens up wonderful new ways to explore and experience the world around us. Now, go out there and make some noise!

This post has been updated. It was originally published on July 27, 2021.



The tallest building in the world remains unchallenged—for now
https://www.popsci.com/technology/tallest-building-in-the-world/ | Wed, 31 May 2023
the burj khalifa, the tallest building in the world in dubai
The Burj Khalifa. Depositphotos

The Burj Khalifa soars over 2,700 feet high, and a tower designed to rise even higher is on pause. What happens next is anyone's guess.

The post The tallest building in the world remains unchallenged—for now appeared first on Popular Science.


For more than a decade, the king of the skyscrapers—the tallest building in the world—has been the Burj Khalifa in Dubai. With a total height of 2,722 feet, it’s the undisputed champion of the vertical world, a megatall building constructed with a core of reinforced concrete that sits on a piled raft foundation.

Since its completion, the 163-story building has become a shining part of the world’s architectural and cultural landscape, providing a soaring platform for content that will make your stomach clench. A woman donned flight attendant garb and stood at its dizzying pinnacle not once but twice in ads for Emirates airline, the second stunt featuring an enormous A380 aircraft flying behind her. And Tom Cruise famously scaled its glass exterior in a Mission: Impossible film.

The Burj Khalifa has owned the superlative designation of tallest building in the world since 2010, towering over everything else. “That’s pretty good staying power considering that there was actually a pretty high rate of replacement—between the replacement of the Sears Tower by Kuala Lumpur’s Petronas Towers, then Taipei 101, and then we moved onto the Burj, which is considerably higher than its predecessors by a good margin,” says Daniel Safarik, the director for research and thought leadership at Council on Tall Buildings and Urban Habitat (CTBUH) in Chicago.

“That begs the inevitable question then: What’s going to be the next new tallest building in the world? And I think the answer is, we don’t know,” he adds. “Initially it was projected to be the Jeddah Tower in Saudi Arabia, but that building has stopped construction with no specified resumption date.” 

The tallest building in the world rises into the unknown 

Adrian Smith is the architect behind the Burj Khalifa, as well as the on-pause Jeddah Tower. In a video chat from Chicago, he reflects on the question of when and if another building will surpass the height of the Burj. “I think inevitably, that’s the case,” he says.

“One of the interesting things about the ‘tallest building in the world’ as a title, is that if one is serious about doing the tallest building in the world, there’s an enormous amount of publicity that goes along with that,” he adds. Smith is now at Adrian Smith + Gordon Gill Architecture and formerly was at Skidmore, Owings & Merrill, which is known as SOM. “We’ve had clients hire us to do world’s tallest buildings before—they get an enormous amount of publicity and then for whatever reason, it doesn’t happen. Usually, 90 percent of the time, that reason is money.” 

As for the on-pause Jeddah Tower, which used to be called Kingdom Tower, Smith says that “it’s pursuing the process of starting up again,” and adds, “I have nothing that I can really disclose at all.”

Earlier this year, the Los Angeles Times took a close look at the Jeddah Tower’s frozen progress, and other mega projects in Saudi Arabia, reporting that the tower, at 826 feet tall, “remains a construction site with no construction.” 

[Related: 6 architectural facts about history’s tallest buildings]

But regardless of the Jeddah Tower’s question mark, the Burj remains a decisive and enormous exclamation point. Each time a new tallest building in the world rises up, its designers, engineers, and contractors are pushing into unexplored territory. “First of all, the structure is the most important single thing in a supertall building,” Smith reflects. “And the reason it’s the most important thing is that very few of them are done, and the history of the design process for a supertall—especially a world’s tallest—if it’s truly a world’s tallest, it’s never been done, you don’t know what you’re going to run into.”

The world’s tallest tower came from ‘a tube’

The Burj Khalifa’s core, which is supported by buttresses, is made of reinforced concrete. That’s a change from some of the classic skyscrapers of the previous century that may come to mind. “The structure of Sears Tower is all steel,” Smith says. So too is the structure of the Empire State Building, now just the 51st tallest building in the world but standing proudly since 1931. 

“The structure of Burj Khalifa is all concrete,” he adds. “And the structure for Kingdom Tower will be all concrete as well—but when I say all concrete, they’re heavily reinforced concrete structures. A lot of steel goes into that concrete.” 

Indeed, concrete technology has evolved over the decades, achieving higher and higher compressive strength—the load it can bear in compression as the building’s weight presses down on it.

Stefan Al, an architect, author of the book Supertall, and an assistant professor at Virginia Tech, charts just how much concrete has improved. In the 1950s, he says, concrete was rated at around 20 megapascals. The concrete in the Burj was 80 megapascals, and today’s can do about 250 megapascals. “So basically it’s gotten 10 times stronger—or 10 times more able to withstand compression, meaning you can have 10 times more weight coming from top,” he says. “That’s certainly super impressive.” 
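Megapascals translate directly into load-bearing capacity: one megapascal is one meganewton of force per square meter of cross-section. A rough back-of-the-envelope sketch using the article’s strength figures (this ignores rebar, safety factors, and buckling, so it is illustrative only):

```python
# Rough illustration only: ignores rebar, safety factors, and slenderness/buckling.
def crush_load_meganewtons(strength_mpa, area_m2):
    """Axial load a plain concrete column could carry before crushing.
    1 MPa = 1 meganewton per square meter, so capacity = strength * area."""
    return strength_mpa * area_m2

area = 1.0  # a column with a 1 square-meter cross-section
for mpa in (20, 80, 250):  # 1950s concrete, Burj-era concrete, today's strongest
    print(f"{mpa} MPa -> {crush_load_meganewtons(mpa, area):.0f} MN")
```

The tenfold jump in strength is exactly the tenfold jump in supportable weight that Al describes.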

There’s another benefit to concrete (don’t get it confused with cement), which is the way it gets up to where workers need it—by being pumped up and then flowing out of a tube. Reinforced concrete’s current popularity is “a function of concrete’s ability to pump, because that makes it much easier to work with,” Al says. 

That’s different from working with steel way up high, because for that, Al says, “you need super-large cranes” to hoist the beams upwards. And concrete is quick. Al notes that using concrete in a city like New York can result in a building story going up every two to three days. 

Of course, pumping concrete up against gravity produces its own challenges—and opportunities to celebrate. A company that makes concrete pumps, Putzmeister, boasted that its equipment was able to get the material up 1,988 feet—a record at the time. In 2019, they looked back on that 2008 accomplishment, punning that in helping build the Burj, “Putzmeister was a concrete part.”

Smith points out that the plans for the Jeddah Tower call for it to be made out of concrete as well, including even its top spire, which on the Burj is made from steel. “Every few years, technology advances and changes—the concrete gets stronger. There are new additives, new ways of making concrete that’s better for this kind of application,” he says. “If you think about Burj Khalifa and Kingdom [Jeddah] Tower, they’re ultimately built out of a tube that’s maybe 8 inches to a foot in diameter.” He chuckled. 

The second-tallest building in the world

Words like supertall and megatall may sound vague, but in fact they have specific definitions. A supertall building is at least 984 feet tall, while a megatall stands at least 1,968 feet high. At 1,776 feet tall, One World Trade in New York City is a supertall building, but not a megatall one, and is the sixth-tallest building globally. And a new second-tallest building in the world is set to be finished this year—it’s the angular Merdeka 118 in Kuala Lumpur, Malaysia, and measures a megatall 2,233 feet tall at the tippy top. (The current second tallest building in the world is the 2,073-foot Shanghai Tower.)

But architecture is about more than height, and Stefan Al highlights an exciting diversity of design he sees in new modern buildings. “You can really speak of a new generation of skyscrapers, which are much taller, but also, you could say, more exuberant” compared to what came before, he observes. “Most of the 20th century, we only had a handful of supertall buildings, including the Chrysler Building and the Empire State Building, but now we have more than 100, and most of them have been built in the last 20 years.” 

As for buildings with wild and varied new styles, he cites the “super slender” trend in New York City, with the skinny and supertall 111 West 57th Street as a notable example. Another is the Central Park Tower, which Adrian Smith + Gordon Gill designed. 

But buildings get even more interesting. The Capital Gate Tower in Abu Dhabi may only be 540 feet tall, but it looks like it could tip over. “It deliberately leans 18 degrees,” Al points out. He says that buildings like this one “are not very logical from a structural perspective.” 

Or check out the M.C. Escher-like CCTV Headquarters in Beijing, or Mexico City’s cool Torre Reforma.

So will a building ever exceed the height of the Burj Khalifa? Al thinks so, saying he anticipates it happening “within our lifetime.” 

Safarik, of the CTBUH in Chicago, is more cautious, noting that the future seems murky when it comes to a building rising higher than the Burj. But one thing is clear: When it comes to the tallest buildings in the world, things have changed since the CTBUH was founded the same year that the US landed on the moon.

“If you were to have looked at the 100 tallest buildings in the world in 1969, you would be almost certainly looking at steel buildings that were office function, and they would be in North America, and predominantly in the United States,” Safarik says. 

Now? They are “composite buildings—some combination of both steel and concrete,” he adds. “And the buildings would largely be located in [the] Middle East and Asia, and they would have mixed functions—so that’s how the coin has really flipped over the interceding half century.”



Termite mounds may one day inspire ‘living, breathing’ architecture
https://www.popsci.com/technology/termites-green-architecture/ | Mon, 29 May 2023
Large termite mound in the African Savannah
Termites could soon help build buildings instead of destroy them. Deposit Photos

Termites can be a nuisance to humans, but their homes may teach us a thing or two about sustainability.

The post Termite mounds may one day inspire ‘living, breathing’ architecture appeared first on Popular Science.


Termites are often thought of as structural pests, but two researchers have taken a contrarian viewpoint. As detailed in a paper recently published in Frontiers in Materials, David Andréen of Lund University and Rupert Soar of Nottingham Trent University studied the tens of millions of years of architectural experience embodied in termites’ massive mounds. According to the duo’s findings, the insects’ abilities could inspire a new generation of green, energy-efficient architecture.

Termites are responsible for building the tallest biological structures in the world, with the biggest mound ever recorded measuring an astounding 42 feet high. These insects aren’t randomly building out their homes, however—in fact, the structures are meticulously designed to make the most of the environment around them. Termite mounds in Namibia, for example, rely on intricate, interconnected tunnels known as an “egress complex.” As explained in Frontiers’ announcement, these mounds’ complexes grow northward during the November-to-April rainy season in order to be directly exposed to the midday sun. Throughout the rest of the year, however, termites block these egress tunnels, thus regulating ventilation and moisture levels depending on the season.

To better study the architectural intricacies, Andréen and Soar created a 3D-printed copy of an egress complex fragment. They then used a speaker to simulate winds by sending oscillating amounts of CO2-air mixture through the model while tracking mass transference rates. Turbulence within the mound depended on the frequency of oscillation, which subsequently moved excess moisture and respiratory gasses away from the inner mound.

[Related: Termites work through wood faster when it’s hotter out.]

From there, the team created a series of 2D models of the egress complex. Using an electric motor to drive an oscillating flow of water through these lattice-like tunnels, Andréen and Soar found they only needed to push the flow back and forth by a few millimeters to circulate water throughout the entire model. The implication: termites need only small, oscillating amounts of wind power to ventilate their mounds’ egress complex.

The researchers believe integrating the egress complex design into future buildings’ walls could create promising green architecture threaded with tiny air passageways. This could hypothetically be accomplished via technology such as powder bed printers alongside low-energy sensors and actuators to move air throughout the structures.

“When ventilating a building, you want to preserve the delicate balance of temperature and humidity created inside, without impeding the movement of stale air outwards and fresh air inwards,” explained Soar, adding the egress complex is “an example of a complicated structure that could solve multiple problems simultaneously: keeping comfort inside our homes, while regulating the flow of respiratory gasses and moisture through the building envelope,” with minimal to no A/C necessary. Once realized, the team believes society may soon see the introduction of “true living, breathing” buildings.



Electric cars are better for the environment, no matter the power source
https://www.popsci.com/technology/are-electric-cars-better-for-the-environment/ | Fri, 26 May 2023
Ioniq 6 EV
An Ioniq 6 electric vehicle. Hyundai

Experts say that across the board, EVs are a win compared to similar gas-powered vehicles.

The post Electric cars are better for the environment, no matter the power source appeared first on Popular Science.


These days, it seems like every carmaker—from those focused on luxury options to those with an eye more toward the economical—is getting into electric vehicles. And with new US policies around purchasing incentives and infrastructure improvements, consumers might be more on board as well. But many people are still concerned about whether electric vehicles are truly better for the environment overall, considering certain questions surrounding their production process.

Despite concerns about the pollution generated from mining materials for batteries and the manufacturing process for the EVs themselves, the environmental and energy experts PopSci spoke to say that across the board, electric vehicles are still better for the environment than similar gasoline or diesel-powered models. 

When comparing a typical commercial electric vehicle to a gasoline vehicle of the same size, there are benefits across many different dimensions.

“We do know, for instance, if we’re looking at carbon dioxide emissions, greenhouse gas emissions, that electric vehicles operating on the typical electric grid can end up with fewer greenhouse gas emissions over the life of their vehicle,” says Dave Gohlke, an energy and environmental analyst at Argonne National Lab. “The fuel consumption (using electricity to generate the fuel as opposed to burning petroleum) ends up releasing fewer emissions per mile and over the course of the vehicle’s expected lifetime.”

[Related: An electrified car isn’t the same thing as an electric one. Here’s the difference.]

How the electricity gets made

With greenhouse gas emissions, it’s also worth considering how the electricity for charging the EV is generated. Electricity made by a coal- or oil-burning plant will have higher emissions compared to a natural gas plant, while nuclear and renewable energy will have the fewest emissions. But even an electric vehicle that got its juice from a coal plant tends to have fewer emissions compared to a gasoline vehicle of the same size, Gohlke says. “And that comes down to the fact that a coal power plant is huge. It’s able to generate electricity at a better scale, [be] more efficient, as opposed to your relatively small engine that fits in the hood of your car.” Power plants could additionally have devices in place to scrub their smokestacks or capture some of the emissions that arise.  

EVs also produce no tailpipe emissions, which means reductions in particulate matter or in smog precursors that contribute to local air pollution.

“The latest best evidence right now indicates that in almost everywhere in the US, electric vehicles are better for the environment than conventional vehicles,” says Kenneth Gillingham, professor of environmental and energy economics at Yale School of the Environment. “How much better for the environment depends on where you charge and what time you charge.”

Electric motors tend to be more efficient compared to the spark ignition engine used in gasoline cars or the compression ignition engine used in diesel cars, where there’s usually a lot of waste heat and wasted energy.

Let’s talk about EV production

“It’s definitely the case that any technology has downsides. With technology you have to use resources, [the] raw materials we have available, and convert them to a new form,” says Jessika Trancik, a professor of data, systems, and society at the Massachusetts Institute of Technology. “And that usually comes with some environmental impacts. No technology is perfect in that sense, but when it comes to evaluating a technology, we have to think of what services it’s providing, and what technology providing the same service it’s replacing.”

Creating an EV produces pollution during the manufacturing process. “Greenhouse gas emissions associated with producing an electric vehicle are almost twice that of an internal combustion vehicle…that is due primarily to the battery. You’re actually increasing greenhouse gas emissions to produce the vehicle, but there’s a net overall lifecycle benefit or reduction because of the significant savings in the use of the vehicle,” says Gregory Keoleian, the director of the Center for Sustainable Systems at the University of Michigan. “We found in terms of the overall lifecycle, on average, across the United States, taking into account temperature effects, grid effects, there was 57 percent reduction in greenhouse gas emissions for a new electric vehicle compared to a new combustion engine vehicle.” 

In terms of reducing greenhouse gas emissions associated with operating the vehicles, fully battery-powered electric vehicles were the best, followed by plug-in hybrids, and then hybrids, with internal combustion engine vehicles faring the worst, Keoleian notes. Range anxiety might still be top of mind for some drivers, but he adds that households with more than one vehicle can consider diversifying their fleet to add an EV for everyday use, when appropriate, and save the gas vehicle (or the gas feature on their hybrids) for longer trips.

The breakeven point at which the cost of producing and operating an electric vehicle starts to gain an edge over a gasoline vehicle of similar make and model occurs at around two years in, or around 20,000 to 50,000 miles. But when that happens can vary slightly on a case-by-case basis. “If you have almost no carbon electricity, and you’re charging off solar panels on your own roof almost exclusively, that breakeven point will be sooner,” says Gohlke. “If you’re somewhere with a very carbon intensive grid, that breakeven point will be a little bit later. It depends on the style of your vehicle as well because of the materials that go into it.” 
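The break-even logic is simple enough to sketch. The figures below are hypothetical placeholders chosen to land inside the range Gohlke describes, not data from the article.

```python
# Break-even mileage: the EV starts with higher production emissions but
# saves emissions on every mile driven. All inputs are hypothetical.

def breakeven_miles(ev_production_kg, gas_production_kg,
                    ev_kg_per_mile, gas_kg_per_mile):
    """Miles at which the EV's cumulative emissions fall below the gas car's."""
    extra_production = ev_production_kg - gas_production_kg
    per_mile_savings = gas_kg_per_mile - ev_kg_per_mile
    return extra_production / per_mile_savings

# Assumed: 12 t CO2e to build the EV vs. 7 t for the gas car;
# 0.15 kg/mile for the EV on an average grid vs. 0.35 kg/mile for gasoline.
print(round(breakeven_miles(12_000, 7_000, 0.15, 0.35)))  # 25000 miles
```

With these assumptions the EV pulls ahead at 25,000 miles, squarely inside the 20,000-to-50,000-mile window cited above; a cleaner grid shrinks the denominator's gas-side term less, so the break-even point arrives sooner.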

[Related: Why solid-state batteries are the next frontier for EV makers]

For context, Gohlke notes that the average vehicle on the road right now is around 12 years old based on registration data. And these vehicles are expected to drive approximately 200,000 miles over their lifetime.

“Obviously if you drive off your dealer’s lot and you drive right into a light pole and that car never takes more than a single mile, that single vehicle will have had more embedded emissions than if you had wrecked a gasoline car on your first drive,” says Gohlke. “But if you look at the entire fleet of vehicles, all 200-plus-million vehicles that are out there and how long we expect them to survive, over the life of the vehicle, each of those electric vehicles is expected to consume less energy and emit lower emissions than the corresponding gas vehicle would’ve been.”

To put things in perspective, Gillingham says that extracting and transporting fossil fuels like oil is energy intensive as well. When you weigh those factors, electric vehicle production doesn’t appear that much worse than the production of gasoline vehicles, he says. “Increasingly, they’re actually looking better depending on the battery chemistry and where the batteries are made.” 

And while it’s true that there are issues with mines, the petrol economy has damaged a lot of the environment and continues to do so. That’s why improving individual vehicle efficiency needs to be paired with reducing overall consumption.

EV batteries are getting better

Mined materials like rare metals can have harmful social and environmental effects, but that’s an economy-wide problem. There are many metals that are being used in batteries, but the use of metals is nothing new, says Trancik. Metals can be found in a range of household products and appliances that many people use in their daily lives. 

Plus, there have been dramatic improvements in battery technology and the engineering of the vehicle itself in the past decade. The batteries have become cheaper, safer, more durable, faster charging, and longer lasting. 

“There’s still a lot of room to improve further. There’s room for improved chemistry of the batteries and improved packaging and improved coolant systems and software that manages the batteries,” says Gillingham.

The two primary battery chemistries used in electric vehicles today are NMC (nickel-manganese-cobalt) and LFP (lithium iron phosphate). NMC batteries tend to use scarcer metals like cobalt, much of it mined in the Democratic Republic of the Congo, but they are also more energy dense. LFP uses more abundant metals, and although the technology is improving fast, it is still at an earlier stage: sensitive to cold weather and not quite as energy dense. LFP tends to be a good fit for utility-scale uses, like storing electricity on the grid.

[Related: Could swappable EV batteries replace charging stations?]

Electric vehicles also offer an advantage when it comes to fewer trips to the mechanic; conventional vehicles have more moving parts that can break down. “You’re more likely to be doing maintenance on a conventional vehicle,” says Gillingham. He says that there have been Teslas in his studies that are around eight years old, with 300,000 miles on them, which means that even though the battery does tend to degrade a little every year, that degradation is fairly modest.

Eventually, if the electric vehicle market grows substantially and there are many of these vehicles in circulation, reusing the metals in the cars can increase their benefits. “This is something that you can’t really do with the fossil fuels that have already been combusted in an internal combustion engine,” says Trancik. “There is a potential to set up that circularity in the supply chain of those metals that’s not readily done with fossil fuels.”

Since batteries are fairly environmentally costly, the best case is for consumers who are interested in EVs to get a car with a small battery, or a plug-in hybrid electric car that runs on battery power most of the time. “A Toyota Corolla-sized car, maybe with some hybridization, could in many cases, be better for the environment than a gigantic Hummer-sized electric vehicle,” says Gillingham. (The charts in this New York Times article help visualize that distinction.) 

Where policies could help

Electric vehicles are already better for the environment than their gasoline counterparts, and they are steadily improving.

The biggest factor that could make EVs even better is if the electrical grid goes fully carbon free. Policies that provide subsidies for carbon-free power, or carbon taxes to incentivize cleaner power, could help in this respect. 

The other aspect that would make a difference is to encourage more efficient electric vehicles and to discourage the production of enormous electric vehicles. “Some people may need a pickup truck for work. But if you don’t need a large car for an actual activity, it’s certainly better to have a more reasonably sized car,” Gillingham says.  

Plus, electrifying public transportation, buses, and vehicles like the fleet of trucks run by the USPS can have a big impact because of how often they’re used. Making these vehicles electric can reduce air pollution from idling, and routes can be designed so that they don’t need as large of a battery.  

“The rollout of EVs in general has been slower than demand would support…There’s potentially a larger market for EVs,” Gillingham says. The holdup is due mainly to supply chain problems.

Switching over completely to EVs is, of course, not the end-all solution for the world’s environmental woes. Car culture is deeply embedded in American life and consumerism in general, Gillingham says, and that’s not easy to change. Climate policy around transportation needs to address all the different modes of transportation people use, as well as industrial energy services, to bring down greenhouse gas emissions across the board.

The greenest form of transportation is walking, followed by biking, then public transit. Electrifying the vehicles that can be electrified is great, but policies should also consider the way cities are designed: Are they walkable and livable, with a reliable public transit system connecting communities to where they need to go?

“There’s definitely a number of different modes of transport that need to be addressed and green modes of transport that need to be supported,” says Trancik. “We really need to be thinking holistically about all these ways to reduce greenhouse gas emissions.”

The post Electric cars are better for the environment, no matter the power source appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

A new material creates clean electricity from the air around it https://www.popsci.com/technology/air-gen-electricity-film/ Wed, 24 May 2023 18:00:00 +0000 https://www.popsci.com/?p=543118
Concept art of water molecules passing through Air-gen material
Ambient air's water molecules can be harvested to generate clean electricity at a nanoscale level. Derek Lovley/Ella Maru Studio

The physics at play in a storm cloud, but in a thin, hole-filled film.

The post A new material creates clean electricity from the air around it appeared first on Popular Science.


Researchers recently constructed a material capable of generating near-constant electricity from just the ambient air around it—thus possibly laying the groundwork for a new, virtually unlimited source of sustainable, renewable energy. In doing so, and building upon their past innovations, they now claim almost any surface could potentially be turned into a generator by replicating the electrical properties of storm clouds… but trypophobes beware.

According to a new study published today in Advanced Materials, engineers at the University of Massachusetts Amherst have demonstrated a novel “air generator” (Air-gen) film that relies on microscopic holes smaller than 100 nanometers across—less than a thousandth the width of a single human hair. The holes’ incredibly small diameters exploit what’s known as the “mean free path,” which is the distance a single molecule can travel before colliding with another molecule of the same substance.

[Related: The US could reliably run on clean energy by 2050.]

Water molecules float all around us in the air, and their mean free path is around 100 nm. As humid air passes through the Air-gen material’s minuscule holes, the water molecules come into contact with first an upper, then a lower chamber in the film. This creates a charge imbalance, i.e., electricity.

It’s the same physics at play in storm clouds’ lightning discharges. Although the UMass Amherst team’s product generates a minuscule fraction of a lightning bolt’s estimated 300 million volts, its sustained output of several hundred millivolts is incredibly promising for scalability and everyday usage. This is particularly evident when considering that air humidity diffuses in three-dimensional space. In theory, thousands of Air-gen layers could be stacked atop one another, scaling up the device without increasing its overall footprint. According to the researchers, such a product could offer kilowatts of power for general usage.
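As a toy illustration of that stacking claim: the 500-millivolt and 240-layer figures below are made up for arithmetic's sake (the study reports output on the order of hundreds of millivolts per film), and a real stack would face wiring, humidity, and packaging losses not modeled here.

```python
# Toy series-stacking estimate for Air-gen films (all figures assumed).
millivolts_per_layer = 500        # assumed sustained output per film
layers = 240                      # assumed films stacked in one footprint

stack_volts = millivolts_per_layer * layers / 1000
print(stack_volts)  # 120.0 volts from the same device footprint, in theory
```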

[Related: How an innovative battery system in the Bronx will help charge up NYC’s grid.]

The team believes their Air-gen devices could one day be far more space efficient than other renewable energy options like solar and wind power. What’s more, the material can be engineered into a variety of form factors to blend into an environment, as contrasted with something as visually noticeable as a solar farm or wind turbine.

“Imagine a future world in which clean electricity is available anywhere you go,” Jun Yao, an assistant professor of electrical and computer engineering and the paper’s senior author, said in a statement. “The generic Air-gen effect means that this future world can become a reality.”


The Air Force used microwave energy to take down a drone swarm https://www.popsci.com/technology/thor-weapon-drone-swarm-test/ Tue, 23 May 2023 22:03:27 +0000 https://www.popsci.com/?p=543044
THOR stands for Tactical High-power Operational Responder.
THOR stands for Tactical High-power Operational Responder. Adrian Lucero / US Air Force

The defensive weapon is called THOR, and in a recent test it zapped the drones out of the sky.

The post The Air Force used microwave energy to take down a drone swarm appeared first on Popular Science.


In the desert plain south of Albuquerque, New Mexico, and just north of the Isleta Pueblo reservation, the Air Force defeated a swarm of drones with THOR, a powerful microwave weapon. THOR, or the Tactical High-power Operational Responder, is designed to defend against drone swarms, frying electronics at scale in a way that could protect against many flying robots at once.

THOR has been in the works for years, with a successful demonstration in February 2021 at Kirtland Air Force Base, south of Albuquerque. From 2021 to 2022, THOR was also tested overseas.

This latest demonstration, which took place on April 5, saw the microwave face off against a swarm of multiple flying uncrewed aerial vehicles. The event took place at the Chestnut Range, short for “Conventional High Explosives & Simulation Test,” which has long been used by the Air Force Research Lab for testing.

“The THOR team flew numerous drones at the THOR system to simulate a real-world swarm attack,” said Adrian Lucero, THOR program manager at AFRL’s Directed Energy Directorate, in a release earlier this month. “THOR has never been tested against these types of drones before, but this did not stop the system from dropping the targets out of the sky with its non-kinetic, speed-of-light High-Power Microwave, or HPM pulses,” he said.

Crucial to THOR’s concept and operation is that the weapon disables and defeats drones without employing explosive or concussive power, the kind derived from rockets, missiles, bombs, and bullets. The military lumps these technologies together as “kinetics,” and they make up the bread and butter of how the military uses force. Against drones, which can cost mere hundreds or even thousands of dollars per vehicle, missiles represent an expensive form of ammunition. While the bullets used in existing counter-rocket weapons are much cheaper than missiles, they still create the problem of dangerous debris everywhere they don’t hit. Using microwaves means that only the damaged drone itself becomes a falling danger, without an added risk from the tools used to shoot it down.
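That cost asymmetry is easy to sketch. Every figure below is a hypothetical placeholder rather than an official Air Force number.

```python
# Hypothetical cost-per-engagement comparison: missiles vs. microwave pulses.
drone_cost = 2_000             # assumed cheap commercial drone
interceptor_cost = 150_000     # assumed cost of one missile shot
hpm_cost_per_pulse = 10        # assumed marginal cost of one HPM pulse
swarm_size = 50

kinetic_total = interceptor_cost * swarm_size
microwave_total = hpm_cost_per_pulse * swarm_size
print(kinetic_total, microwave_total)  # 7500000 500
```

At these assumed prices, each interceptor costs 75 times the drone it destroys, while the microwave's per-shot cost is effectively noise, which is the whole argument for directed energy against cheap swarms.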

“THOR was extremely efficient with a near continuous firing of the system during the swarm engagement,” Capt. Tylar Hanson, THOR deputy program manager, said in a release. “It is an early demonstrator, and we are confident we can take this same technology and make it more effective to protect our personnel around the world.”

The THOR system fits into a broader package of directed-energy countermeasures being used to take on small, cheap, and effective drones. Another directed-energy weapon explored for this purpose is the laser, which can burn through a drone’s hull and circuitry, though that approach takes time, as the beam must hold focus on a target long enough to melt it.

“The system uses high power microwaves to cause a counter electronic effect. A target is identified, the silent weapon discharges in a nanosecond and the impact is instantaneous,” reads an Air Force fact sheet about the weapon. In a video from AFRL, THOR is described as a “low cost per shot, speed of light solution,” which uses “a focused beam of energy to defeat drones at a large target area.”

An April 2023 report from the Government Accountability Office is much more straightforward: A High Power Microwave uses “energy to affect electronics by overwhelming critical components intended to carry electrical currents such as circuit boards, power systems, or sensors. HPM systems engage targets over an area within its wider beam and can penetrate solid objects.”

Against commercial or cheaply produced drones, the kind most likely to see use on the battlefield in great numbers today, microwaves may prove to be especially effective. While THOR is still a ways from development into a fieldable weapon, the use of low-cost drones on the battlefield has expanded tremendously since the system started development. A report from RUSI, a British think tank, found that in its fight against Russia’s invasion, “Ukrainian UAV losses remain at approximately 10,000 per month.”

While that illustrates the limits of existing drone models, it also highlights the scale of drones seeing use in regular warfare. As drone technology improves, and militaries move from adapting commercial drones to dedicated military models made close to commercial cost and scale, countering those drones en masse will likely be a greater priority for militaries. In that, weapons like THOR offer an alternative to existing countermeasures, one that promises greater effects at scale.

Watch a video about THOR, which also garnered a Best of What’s New award from PopSci in 2021, from the Air Force Research Laboratory, below:

These massive, wing-like ‘sails’ could add wind power to cargo ships https://www.popsci.com/technology/shipping-maritime-sail-oceanbird/ Tue, 23 May 2023 20:00:00 +0000 https://www.popsci.com/?p=542970
Bon voyage!
Bon voyage! Oceanbird

The new technology is a welcome modernization of classic engineering.

The post These massive, wing-like ‘sails’ could add wind power to cargo ships appeared first on Popular Science.


The concept of a sailboat might conjure up thoughts of swanky sailing holidays or fearsome pirates—and some companies are hoping to bring sails back into the mainstream, albeit in a modern, emissions-focused way. According to the International Maritime Organization (IMO), there are seven types of wind propulsion technologies, or sails, that could potentially help the organization bring down the shipping industry’s currently massive carbon footprint.

[Related: Colombia is deploying a new solar-powered electric boat.]

Wired reports that a Swedish company called Oceanbird is building a sail that can fit onto existing vessels. The Wingsail 560 looks kind of like an airplane wing placed vertically, like a mast, on a boat, and this summer the company plans to test out a prototype on land. If all goes well, next year it could make its oceanic debut on a 14-year-old car carrier, a type of ship known as a roll-on/roll-off or RoRo vessel, called the Wallenius Tirranna.

Here’s how the sail, which stands 40 meters tall and weighs 200 metric tons, works: it has two parts, one of which is a flap that brings air into a more rigid, steel-cored component, allowing for peak, yacht-racing-inspired aerodynamics, according to Wired. Additionally, the wing can fold down or tilt in order to pass underneath bridges or to reduce wind power in case of an approaching storm. One Oceanbird sail placed on an existing vessel is estimated to reduce fuel consumption from the main engine by up to 10 percent, saving around 675,000 liters of diesel each year, according to trade publication Offshore Energy.
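To put that fuel figure in climate terms, here's a quick conversion. The emission factor of roughly 2.68 kg of CO2 per liter of burned diesel is a standard approximation, not a number from the article.

```python
# CO2 avoided by one Wingsail retrofit, from the article's fuel savings.
liters_diesel_saved = 675_000      # per year, per Offshore Energy
kg_co2_per_liter = 2.68            # assumed standard diesel combustion factor

tonnes_co2_per_year = liters_diesel_saved * kg_co2_per_liter / 1000
print(round(tonnes_co2_per_year))  # about 1,800 metric tons per ship per year
```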

But the real excitement is the idea of a redesigned vessel built especially for the gigantic sails. According to Wired, the Oceanbird-designed, 200-meter-long car carrier Orcelle Wind could cut emissions by at least 60 percent compared to a sailless RoRo vessel. The company itself estimates that the ship could reduce emissions by “up to 90 percent if all emissions-influencing factors are aligned.” However, it will still be a few years before one of these hits the high seas.

[Related: Care about the planet? Skip the cruise, for now.]

Oceanbird isn’t the only company setting sail—according to Gavin Allwright, secretary general of the International Windship Association, by the end of the year there could be 48 or 49 wind-powered vessels on the seas. One such ship already took a voyage from Rotterdam to French Guiana in late 2022 using a hybrid propulsion system of traditional engines and sails. However, Allwright tells Wired, “we’re still in pretty early days.”

The IMO has already set a climate goal of halving emissions between 2008 and 2050, but experts have called this goal “important, but inadequate” for keeping emissions low enough for a livable future. Currently, these goals are still not being met, with a Climate Action Tracker assessment showing that emissions are set to grow through 2050 unless further action is taken.

Watch a Google drone deliver beer and snacks to Denver’s Coors Field https://www.popsci.com/technology/wing-stadium-beer-delivery/ Tue, 23 May 2023 19:00:00 +0000 https://www.popsci.com/?p=542882
Wing's drone flying in the stadium
Wing's drone flying in Coors Field. Wing

It might never match the pace and precision of a human vendor, but it's still a cool demonstration.

The post Watch a Google drone deliver beer and snacks to Denver’s Coors Field appeared first on Popular Science.


Wing, Google parent company Alphabet’s drone-delivery subsidiary, pulled off a fun demonstration delivery earlier this month: one of its drones delivered beer and peanuts to Coors Field, the Colorado Rockies’ stadium in the middle of Denver. While this novel first comes with a heavy dose of caveats, it still gives a nice glimpse of how far some drone delivery operations have come over the past few years. 

What are the caveats? According to Wing, the drone delivered a small package of beer (“Coors of course”) and peanuts to the outfield area of Coors Field during the opening party for the Association for Unmanned Vehicle Systems International’s (AUVSI) annual autonomous systems conference. There were apparently 1,000 people in the stands, though as you can see in the video, it was no game-day crowd. Crucially, Wing wasn’t using its drones to deliver beers and peanuts on demand—this was purely a demonstration flight to show the drone operating in a downtown urban environment.

“Our drones will never match the experience of flagging down a vendor and having them toss peanuts to you from 20 seats away. Nor do we think delivering during game day is a particularly compelling use-case for our technology,” writes Jonathan Bass, Wing’s head of marketing and communications, in the blog post announcing the feat. “We’re more focused on supplementing existing methods of ground-based delivery to move small packages more efficiently across miles, not feet.”

And Coors Field was a suitable environment to show just how capable its drones have become. Over the past few years, the former moonshot has progressed from delivering to rural farms and lightly populated suburbs to flying packages around denser suburbs and large metro areas like Dallas-Fort Worth in Texas. As Bass explains it, despite Wing having done 1,000 deliveries on some days at one of its Australian bases of operations, the company is still regularly asked if drone delivery could work in “dense, urban environments.”

“We chose Coors Field because it’s a particularly challenging environment,” writes Bass. “Coors Field sits in the middle of Denver, Colorado—one of the fastest growing cities in America. Any professional sports stadium—with stadium seating, jumbotrons, and the like—makes for a fun challenge.”

The demonstration is all part of Wing’s plans to massively expand where it operates in the coming years. Earlier this year, it announced the Wing Delivery Network. Drones in this program would work more like ride-sharing vehicles, picking up and dropping off packages as needed instead of operating from a single store or base. To make this possible, Wing unveiled a device called the AutoLoader. It sits in a parking spot outside a store and enables staff to leave a package for a drone to autonomously collect.

While things seem to be taking off for Wing, the scene is a bit more turbulent across the drone delivery industry. In particular, Amazon’s Prime Air is really struggling to launch. Despite first being unveiled almost a decade ago, Prime Air has now completed a total of “100 deliveries in two small US markets,” according to a report earlier this month by CNBC. The company apparently intended to reach 10,000 deliveries this year, but has had to revise those projections. It probably doesn’t help that a significant number of workers were laid off earlier this year.

Other companies are having more success. Zipline, best known for delivering medical supplies by parachute in rural Africa from catapult-launched fixed-wing drones, recently showcased a new platform that would allow it to deliver more typical packages—like a Sweetgreen salad—by lowering them on a tether from a hover-capable drone. It, along with DroneUp and Flytrex, have partnered with Walmart and collectively completed more than 6,000 deliveries last year. The big question consumers have: Are delivery drones going to be everywhere in the next few years? Probably not, but they are likely to be more present. 

Watch the drone in action below:

A super pressure balloon built by students is cruising Earth’s skies to find dark matter https://www.popsci.com/science/high-altitude-balloons-dark-matter/ Tue, 23 May 2023 10:00:00 +0000 https://www.popsci.com/?p=542439
SuperBIT high-altitude balloon with space telescope in the skies after its launch
The Superpressure Balloon-borne Imaging Telescope after launch. SuperBIT

SuperBIT belongs to a new class of budget space telescopes, ferried by high-altitude balloons rather than rockets.

The post A super pressure balloon built by students is cruising Earth’s skies to find dark matter appeared first on Popular Science.


High altitude balloons have drawn a lot of fire lately. In February, the US military shot down a spy balloon potentially operated by the Chinese government and an “unidentified aerial phenomenon” that was later revealed to likely be a hobbyist balloon.

So, when people caught sight of another large balloon in the southern hemisphere in early May, there was concern it could be another spy device. Instead, it represents the future of astronomy: balloon-borne telescopes that peer deep into space without leaving the stratosphere.

“We’re looking up, not down,” says William Jones, a professor of physics at Princeton University and head of NASA’s Super Pressure Balloon Imaging Telescope (SuperBIT) team. Launched from Wānaka, New Zealand, on April 15, the nearly 10-foot-tall telescope has already circled the southern hemisphere four times on a football stadium-sized balloon made from polyethylene film. Its three onboard cameras also took stunning images of the Tarantula Nebula and Antennae galaxies to rival those of the Hubble Space Telescope. The findings from SuperBIT could help scientists unravel one of the greatest mysteries of the universe: the nature of dark matter, a theoretically invisible material only known from its gravitational effects on visible objects.

[Related: $130,000 could buy you a Michelin-star meal with a view of the stars]

Scientists can use next-level observatories like the James Webb Space Telescope to investigate dark matter, relying on their large mirrors and positions outside Earth’s turbulent atmosphere to obtain pristine views of extremely distant celestial objects. But developing a space telescope and launching it on a powerful rocket is expensive. Lofting Hubble into orbit cost around $1.5 billion, for instance, and sending JWST to Lagrange point 2 cost nearly $10 billion.

SuperBIT took just $5 million to launch—a price cut stemming from the relative cheapness of balloons versus rockets and the lower barrier of entry for skilled workers to build the system.

“The whole thing is run by students. That’s what makes projects such as these so nimble and able to do so much with limited resources,” Jones says, referring to the SuperBIT collaborative between Princeton, the University of Durham in the UK, and the University of Toronto in Canada. “We have no professional engineers or technicians working on this full time—only the grad students have the luxury of being able to devote their full-time attention to the project.”

SuperBIT is not the first telescope carried aloft by a balloon: That honor goes to Stratoscope I, which was built in 1957 by another astronomy group at Princeton. But SuperBIT is one of a handful of new observatories made possible by 20 years of NASA research into so-called super pressure balloons. That work finally culminated in test flights beginning in 2015 and the groundbreaking launch of SuperBIT.

Traditional balloons contain a lifting gas that expands as the sun heats it and as atmospheric pressure changes with altitude. That changes the volume of the envelope and, in turn, the balloon’s buoyancy, making it impossible to maintain a constant altitude over time.

Superpressure balloons keep the lifting gas, typically helium, pressurized inside a main envelope so that volume and buoyancy remain constant across day and night. The balloon then uses a smaller balloon—a ballonet—inside or beneath the main envelope as a ballast, filling or emptying the pocket of compressed air to change altitude and effectively steer the ship.
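The physics can be sketched with the ideal buoyancy relation: net lift equals the mass of displaced air minus the mass of the lifting gas. The densities and envelope volume below are rough assumptions for stratospheric conditions, not mission specifications.

```python
# Idealized buoyancy sketch: lift = mass of displaced air - mass of lifting gas.
# A sealed superpressure envelope keeps its volume (and thus its lift) fixed;
# a traditional zero-pressure balloon's volume, and hence its lift, drifts as
# the gas heats and cools. All figures below are assumed.
def net_lift_kg(air_density_kg_m3, gas_density_kg_m3, volume_m3):
    return (air_density_kg_m3 - gas_density_kg_m3) * volume_m3

rho_air = 0.018            # kg/m^3, thin air near 108,000 feet (assumed)
rho_helium = 0.0025        # kg/m^3, helium at similar conditions (assumed)
envelope_volume = 500_000  # m^3, assumed sealed envelope

print(round(net_lift_kg(rho_air, rho_helium, envelope_volume)))  # 7750 kg
```

With these assumed densities the fixed envelope supports several thousand kilograms, comfortably more than SuperBIT's 3,500-pound (roughly 1,600 kg) payload.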

The super pressure balloon carrying SuperBIT can maintain an altitude of 108,000 feet (higher than 99.2 percent of Earth’s atmosphere) while carrying the 3,500-pound payload of scientific instruments. Unlike JWST and other missions, the purpose of the SuperBIT telescope isn’t to see farther or wider swaths of the universe or to detect exoplanets. Instead, it’s hunting for signs of a more ubiquitous and enigmatic entity.  


“Dark matter is not made of any of the elements or particles that we are familiar with through everyday observations,” Jones says. That said, there’s a lot of it around us: It might make up about 27 percent of the universe. “We know this through the gravitational influence that it has on the usual matter—stars and gas, and the like—that we can see,” which make up around 5 percent of the universe, Jones explains.

Scientists estimate that the remaining 68 percent of the cosmos is made of dark energy, another largely mysterious material not to be confused with dark matter. Whereas the gravity of dark matter may help pull galaxies together and structure the way they populate the cosmos, dark energy may be responsible for the accelerating expansion of the entire universe.

Researchers probe extreme forces where dark matter might exist and calculate its presence by observing galactic clusters so massive their gravity bends the light that passes by them from more distant objects—a technique known as gravitational lensing. Astronomers can use this approach to turn galaxies into a sort of magnifying lens to see more distant objects than they normally could (something JWST excels at). It can also reveal the mass of the galactic clusters that make up the “lens,” including the amount of dark matter around them.
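The geometry behind that measurement can be sketched with the simplest possible lens model. The snippet below computes the Einstein radius of a point-mass lens; the cluster mass and distances are hand-picked for illustration, and real cluster-lensing analyses fit extended mass profiles and use proper cosmological distances:

```python
import math

# Einstein radius of a point-mass lens:
#   theta_E = sqrt( (4*G*M/c^2) * D_ls / (D_l * D_s) )
# All inputs below are illustrative assumptions, not measured values.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
GPC = 3.086e25         # meters per gigaparsec

M = 1e14 * M_SUN       # a massive galaxy cluster (assumed)
d_lens = 1.0 * GPC     # observer-to-lens distance (assumed)
d_src = 2.0 * GPC      # observer-to-source distance (assumed)
d_ls = d_src - d_lens  # lens-to-source distance (flat-space shortcut)

theta_e = math.sqrt(4 * G * M / C**2 * d_ls / (d_lens * d_src))
arcsec = math.degrees(theta_e) * 3600
print(f"Einstein radius: {arcsec:.0f} arcseconds")
```

For these inputs the radius comes out around 20 arcseconds, comparable to the giant arcs seen around real massive clusters; comparing observed arc positions to this kind of prediction is how astronomers weigh a cluster, dark matter included.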

“After measuring how much dark matter there is, and where it is, we’re trying to figure out what dark matter is,” says Richard Massey, a member of the SuperBIT science team and a professor of physics at Durham University. “We do this by looking at the few special places in the universe where lumps of dark matter happen to be smashing into each other.”

Those places include the two large Antennae galaxies, which are in the process of colliding about 60 million light-years from Earth. Massey and others have studied the Antennae galaxies using Hubble, but its design “gives it a field of view too small to see the titanic collisions of dark matter,” Massey says. “So, we had to build SuperBIT.”

Antennae galaxies in NASA SuperBIT image
The Antennae galaxies, cataloged as NGC 4038 and NGC 4039, are two large galaxies colliding 60 million light-years away toward the southerly constellation Corvus. The galaxies have previously been captured by the Hubble Space Telescope, Chandra X-ray Observatory, and now-retired Spitzer Space Telescope. NASA/SuperBIT

Like Hubble, SuperBIT sees light in the visible to ultraviolet range, or 300- to 1,000-nanometer wavelengths. But while Hubble’s widest field of view is less than a tenth of a degree, SuperBIT’s field of view is wider at half a degree, allowing it to image larger swaths of the sky at once. That’s despite SuperBIT having a smaller mirror (half a meter in diameter compared to Hubble’s 2.4 meters).
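Because sky coverage grows with the square of the field's angular width, that half-degree field compounds quickly. A quick sketch of the comparison, treating both fields as circles (a simplification) and using the article's figures:

```python
import math

# Sky area scales with the square of the field of view's angular diameter.
hubble_fov_deg = 0.1    # upper bound on Hubble's widest field, per the article
superbit_fov_deg = 0.5  # SuperBIT's field of view

def sky_area_sq_deg(fov_deg):
    """Area of a circular field of the given angular diameter, in sq. deg."""
    return math.pi * (fov_deg / 2) ** 2

ratio = sky_area_sq_deg(superbit_fov_deg) / sky_area_sq_deg(hubble_fov_deg)
print(f"SuperBIT covers ~{ratio:.0f}x more sky per exposure")
```

A fivefold wider field means roughly 25 times the sky area per exposure, which is exactly what a survey hunting diffuse dark-matter structure needs.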

SuperBIT has another advantage over space telescopes. With less time from development to deployment and without complex accessories needed to protect it from radiation, extreme temperatures, and space debris, the SuperBIT team was able to use far more advanced camera sensors than those on existing space telescopes. Where Hubble’s Wide Field Camera 3 contains a pair of 8-megapixel sensors, Jones says, SuperBIT contains a 60-megapixel sensor. The balloon-carried telescope is also designed to float down on a parachute after the end of each flight, which means scientists can update the technology regularly from the ground.

“We’re currently communicating with SuperBIT live, 24 hours a day, for the next 100 days,” Massey says. “It has just finished its fourth trip around the world, experiencing the southern lights, turbulence over the Andes, and the quiet cold above the middle of the Pacific Ocean.” The team expects to retrieve the system sometime in late August, likely in southern Argentina, according to Jones.

[Related on PopSci+: Alien-looking balloons might be the next weapon in the fight against wildfires]

SuperBIT may just be the beginning. NASA has already funded the development of a Gigapixel-class Balloon Imaging Telescope (GigaBIT), which will sport a mirror as wide as Hubble’s. Not only is it expected to be cheaper than any space telescope sensing the same spectrum of light, but GigaBIT would also be “much more powerful than anything likely to be put into space in the near term,” Jones says.

As to whether SuperBIT will crack the mystery of just what dark matter is, it’s too early to tell. After a few flights, the grad students will have to pore over the project’s findings.

“What will the [data] tell us? Who knows! That’s the excitement of it—and also the guilty secret,” Massey says. “After 2,000 years of science, we still have absolutely no idea what the two most common types of stuff in the universe are, or how they behave.”

The post A super pressure balloon built by students is cruising Earth’s skies to find dark matter appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The new Tacoma’s shock-absorbing seats help you keep your eyes on the prize https://www.popsci.com/technology/2024-toyota-tacoma/ Mon, 22 May 2023 22:00:00 +0000 https://www.popsci.com/?p=542738
The 2024 Toyota Tacoma
The 2024 Tacoma. Toyota

Take a look at the fourth generation of a beloved vehicle, which now comes in a new Trailhunter trim.

The post The new Tacoma’s shock-absorbing seats help you keep your eyes on the prize appeared first on Popular Science.

]]>

Rejoice, Tacoma fans: The fourth generation of the beloved pickup is finally here, and there’s a lot to like. The midsize truck was redesigned from the ground up, retaining its off-road-capable bones and getting new skin, more power, and more options that should please truck buyers of all types. The last time the Taco, as it’s affectionately known, had a full workup was for model year 2016, so this has been a long-awaited update. 

In its popular TRD Pro trim, the new Tacoma includes brand-new seats for the driver and front passenger that ride on a shock absorber system. The purpose of these so-called IsoDynamic Performance Seats is to keep your head—and in turn, your eyes—steady and focused while driving (or riding in the right seat) on rugged terrain. If you’ve ever ridden a horse or performed in a marching band, you understand how important it is to keep your vision intact while moving. 

Let’s take a closer look at this and some of the Tacoma’s other new features. 

Shock-absorbing seats

When driving off-road, your entire body gets bounced around. Depending on the quality of your suspension system, you could be shaken like a James Bond martini. But wouldn’t it be better to float as though you’re moving in tune with the vehicle? Sheldon Brown, the chief engineer for the Tacoma, says the team started by plumping up the bolsters (the narrow pillows that surround your seat) in the seat and seat back, which snugs the occupant into the vehicle securely and comfortably. 

“We were looking to do something and provide better stabilization of the driver and the occupant in those high-speed or even some of the tactical off-road driving scenarios,” Brown told The Drive, which is owned by Recurrent Ventures, PopSci’s parent company. “If you think about, for example, a downhill skier or even if you look to the wild you see a cheetah chasing its prey. The eyes are focused and fixed, the body is moving but the head and the eyes are staying stable, so the goal here is to stabilize the upper torso, particularly the head.”

The Toyota engineering team started with a hot-formed steel tube to create the superstructure of the seats, and surrounded it with a lightweight reinforced resin for the seat pan and back frame. A swivel joint, spring-loaded ball joint, and articulation structure provide the flexibility and movement. The human body’s bone structure works closely with tendons and muscles for full range of motion; the new IsoDynamic Performance Seat is designed to move with those elements for a much less bone-jarring ride. 

Most notably, the seat can be customized to your liking. Airing it up is as simple as using a bicycle tire pump to achieve the level of pressure you like, and Toyota provides a set of recommended pressures based on your unique body mass. From there, you can tweak the comfort as desired. And, of course, you can turn off the adjustments entirely and it becomes a plain old truck seat. 

More power, more torque—and the manual remains

Available in a whopping eight variants—SR, SR5, TRD PreRunner, TRD Sport, TRD Off Road, Limited, TRD Pro, and Trailhunter—the 2024 Tacoma is offered with two different powertrains and myriad shiny new accessories straight from the factory. 

Starting with the base SR, the Tacoma gets a turbocharged 2.4-liter four-cylinder engine making 228 horsepower and 243 pound-feet of torque. Moving up to the SR5 and above, the same engine is tuned for 278 hp and 317 pound-feet of torque. Automatic and manual transmissions are available, and the manual option is largely attributed to Brown’s influence, as he is not just the engineer but a major Tacoma enthusiast. 

The star of the lineup is the i-Force Max hybrid powertrain. Engineers paired the turbo 2.4-liter engine with an electric motor and 1.87-kilowatt-hour battery for 326 horsepower and an impressive 465 pound-feet of torque. Standard on the TRD Pro and Trailhunter models and available on the TRD Sport, TRD Off-Road, and Limited variants, the i-Force Max is the most potent power combination ever offered on the Tacoma. 

“The great part about the hybrid system, which is what we just launched in the Tundra (and the motor and battery are identical, by the way) is instantaneous torque,” Brown told PopSci. “While we’re waiting for those turbos to spin up, which isn’t too long, it can really supplement the overall drive experience with an instant burst of power, especially when you’re towing or heavily laden.” 

With the i-Force Max, the truck has nearly double the torque of the previous generation’s V6. Gas mileage for the outgoing model ranged from 19 to 21 miles per gallon. While we don’t know the EPA mileage ratings for the new Tacoma, Toyota has definitely made efforts to improve those numbers with a massive air dam in front that creates better aerodynamics. Don’t fret, though, off-roaders: it can be removed to increase ground clearance as necessary. 

The new Trailhunter trim.
The new Trailhunter trim. Kristin Shaw

Trailhunter vs TRD Pro

New for 2024 is the Trailhunter trim, designed for the ever-increasing overlanding population. Since 2020, the popularity of overlanding (in basic terms, camping in or near your car over long distances) has exploded, and Toyota is making the most of that trend with the Trailhunter. 

Before this trim debuted this year, the TRD Pro was the top of the line for ruggedness, but it’s built more for driving fast in the desert. The Trailhunter fills a need for go-everywhere adventurers with a whole catalog of accessories available straight from the factory, all of which can be rolled into a monthly payment versus purchasing piece by piece. Two years ago, the Trailhunter was teased at the Specialty Equipment Market Association annual trade show as a concept, and enthusiasts will be excited to see it in production. 

Toyota chose custom shocks from an Australian company called Old Man Emu to cushion the ride for both on- and off-road comfort. It’s also key for carrying a heavy load with lots of gear, which is what overlanders tend to do with on-board refrigerators from Dometic, rooftop tents, hydraulic lifts, and spare tires. For the uninitiated, Old Man Emu shocks were created Down Under, and are a popular choice to replace factory suspension components for other outdoors-focused brands like Land Rover. 

“In the Australian outback, Old Man Emu is the OG of overlanding,” Brown says. “They have a reputation for building good, reliable solutions for the aftermarket and we wanted to partner with them to work on the development together. This is a custom-tuned set that you can’t buy off the shelf.” 

The Trailhunter also boasts an onboard air compressor for airing up tires after an off-roading session, plus a fuel tank protector, mid-body skid plate, front bash plate, and rock sliders all designed to safeguard the truck from damage. 

Stay tuned, because the 2024 Toyota Tacoma is scheduled for dealerships later this year. As soon as we can get behind the wheel, we’ll tell you more about how it performs. 


]]>
Spy tech and rigged eggs help scientists study the secret lives of animals https://www.popsci.com/technology/oregon-zoo-sensor-condor-egg/ Mon, 22 May 2023 11:00:00 +0000 https://www.popsci.com/?p=542389
eggs in a nest
The Oregon Zoo isn't putting all its eggs in a basket when it comes to condor conservation. The Dark Queen / Unsplash

The field of natural sciences has been embracing sensors, cameras, and recorders packaged in crafty forms.

The post Spy tech and rigged eggs help scientists study the secret lives of animals appeared first on Popular Science.

]]>

Last week, The New York Times went backstage at the Oregon Zoo for an intimate look at the fake eggs the zoo is developing as part of its endangered condor nursery program. 

The idea is that caretakers can swap out the real eggs the birds lay for smart egg spies that look and feel the same. These specially designed, 3D-printed eggs are equipped with sensors that can monitor the general environment of the nest and the natural behaviors of the California condor parents (like how long each parent sat on the egg, and when the pair traded places). 

In addition to recording data related to surrounding temperature and movement, there’s also a tiny audio recorder that can capture ambient sounds. So what’s the use of the whole charade? 

The Oregon Zoo’s aim is to use all the data gathered by the egg to better recreate natural conditions within its artificial incubators, whether that means adjusting the temperatures the machines are set to, integrating periodic movements, or playing back sounds from the nest. Ideally, those tweaks will improve the outcomes of its breeding efforts. And it’s not the only group tinkering with tech like this.

A ‘spy hippo’

This setup at the Oregon Zoo may sound vaguely familiar to you, if you’ve been a fan of the PBS show “Spy in the Wild.” The central gag of the series is that engineers craft hyper-realistic robots masquerading as animals, eggs, boulders, and more to get up close and personal with a medley of wildlife from all reaches of the planet. 

[Related: Need to fight invasive fish? Just introduce a scary robot]

If peeking at the inner lives of zoo animals is a task in need of an innovative tech solution, imagine the challenges of studying animals in their natural habitats, in regions that are typically precarious or even treacherous for humans to visit. Add on cameras and other heavy equipment, and it becomes an even more demanding trip. Instead of having humans do the Jane Goodall method of community immersion with animals, these spies in disguise can provide invaluable insights into group or individual behavior and habits without being intrusive or overly invasive to their ordinary way of life.  

A penguin rover

Testing unconventional methods like these is key for researchers to understand as much as they can about endangered animals, since scientists have to gather important information in a relatively short time frame to help with their conservation. 

[Related: Open data is a blessing for science—but it comes with its own curses]

To prove that these inventions are not all gimmick and have some practical utility, a 2014 study in Nature showed that a penguin-shaped rover can get more useful data on penguin colonies than human researchers, whose presence elevated stress levels in the animals. 

The point of all this animal espionage?

Minimizing the effects created by human scientists has always been a struggle in behavioral research for the natural sciences. Along with the advancement of other technologies like better cameras and more instantaneous data transfer, ingenious new sensor devices like the spy eggs are changing the field itself. The other benefit is that every once in a while, non-scientist humans can also be privy to the exclusive access provided into the secret lives of these critters, like through “Spy in the Wild,” and use these as portals for engaging with the world around them.


]]>
‘Extended reality’ will help preserve some of Afghanistan’s most endangered historical sites https://www.popsci.com/technology/mit-afghanistan-ways-of-seeing-history/ Fri, 19 May 2023 17:00:00 +0000 https://www.popsci.com/?p=542227
A digital rendering of the Green Mosque in Balkh, Afghanistan
MIT digitally recreated four historical sites located in Afghanistan. Nikolaos Vlavianos/MIT

Four at-risk, hard-to-reach historical sites in Afghanistan are being painstakingly recreated for virtual preservation.

The post ‘Extended reality’ will help preserve some of Afghanistan’s most endangered historical sites appeared first on Popular Science.

]]>

A documentary project using cutting-edge 3D imaging, drone photography, and virtual reality combined with painstakingly detailed hand drawings is digitally preserving some of Afghanistan’s most awe-inspiring, endangered historical sites. On Friday, MIT previewed the impending release of “Ways of Seeing,” a collaborative effort between MIT Libraries and its Aga Khan Documentation Center alongside the Aga Khan Trust for Culture that aims to create “extended reality” (XR) experiences of significant architectural locales throughout the country.

“Ways of Seeing” currently focuses on four separate historical sites across Afghanistan: the Green Mosque in Balkh, a Buddhist dome south of Kabul known as the Parwan Stupa, the 15th century tomb of Queen Gawhar Saad, and the 200-foot-tall Minaret of Jam, built during the 12th century in a remote location in western Afghanistan. According to MIT’s announcement, scholars chose the sites for their architectural and religious diversity, as well as the relative inaccessibility of some of the locales.

[Related: Staggering 3D scan of the Titanic shows the wreck down to the millimeter.]

To amass the visual data, MIT researchers worked alongside an Afghan digital production crew that traveled to the chosen sites after being remotely trained to pilot a “3D scanning aerial operation.” Once there, the on-location journalists collected between 15,000 and 30,000 images at each location. Meanwhile, Nikolaos Vlavianos, a PhD candidate in MIT’s Department of Architecture Design and Computation group, led an effort to “computationally [generate] point clouds and mesh geometry with detailed texture mapping.”

Side-by-side of hand drawn and renderings of the Green Mosque.
CREDIT: Jelena Pejkovic (Left), Nikolaos Vlavianos (Right)

Afterwards, Jelena Pejkovic, an MIT alum and practicing architect, created detailed drawings of the locations via VERNADOC, a traditional ink rendering technique first developed by the Finnish architect Markku Mattila. “I wanted to rediscover the most traditional possible kind of documentation—measuring directly by hand, and drawing by hand,” Pejkovic said in Friday’s announcement.

While “Ways of Seeing” is meant to provide a cutting-edge means of digital preservation of remote and potentially at-risk historical sites, the team ultimately hopes to make the archive available to displaced Afghans around the world, as well as “anyone keen to witness them,” says Fotini Christia, a political science professor at MIT who led the project. Christia’s team also hopes this approach to extended reality modeling could eventually be scaled and replicated for other at-risk heritage sites around the world in the face of environmental catastrophes, wars, and cultural appropriation. “Ways of Seeing” is scheduled to be publicly released by the end of June 2023.


]]>
Dirty diapers could be recycled into cheap, sturdy concrete https://www.popsci.com/technology/diaper-concrete-homes/ Thu, 18 May 2023 20:00:00 +0000 https://www.popsci.com/?p=542051
Close-up of children's diapers stacked in a piles
Mixing disposable diapers into concrete can cut down on one of landfills' biggest problems. Deposit Photos

Diapers are a scourge on landfills. Mixing them into buildings' concrete frames could dramatically reduce that problem.

The post Dirty diapers could be recycled into cheap, sturdy concrete appeared first on Popular Science.

]]>

American families go through billions of diapers every year for the roughly 4 million babies born across the country. Diaper use can extend far past the first year of infants’ lives—children generally don’t finish potty training until somewhere between 1.5 and 3 years old. Extrapolate those needs to the entire world, and it’s easy to see how disposable diapers are the third-most prevalent consumer product found in landfills. Because most diapers contain plastics such as polyester, polyethylene, and polypropylene, they are expected to linger in those same landfills for about 500 years before breaking down.

But what if disposable diapers’ lifespans expanded far beyond their one-and-done use? Environmental engineers recently pondered that very question, and have reportedly found a surprising solution: diaper domiciles.

As detailed in a paper published Thursday in Scientific Reports, a trio of researchers at Japan’s University of Kitakyushu combined six different amounts of washed, dried, and shredded diaper waste with gravel, sand, cement, and water, then cured the samples for 28 days. Afterwards, they tested their composite materials’ resilience and recorded some extremely promising results.

[Related: Steel built the Rust Belt. Green steel could help rebuild it.]

For a three-story, 36-square-meter floor plan, the team found that the cured diaper waste could replace as much as 10 percent of sand within a structure’s traditional concrete support beams and columns. In a single-story home, that percentage nearly tripled. Meanwhile, diapers could swap out 40 percent of the sand needed in partition wall mortar, alongside 9 percent of the sand in flooring and garden paving. All told, disposable diaper waste could replace as much as 8 percent of all sand in a single-story, 36-square-meter floor plan.
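The element-by-element caps roll up into a whole-building figure as a weighted average over how much of the building's sand each element uses. The sketch below uses hypothetical mass shares purely for illustration; the paper's actual structural design is what yields its 8 percent figure:

```python
# Weighted-average sand replacement across building elements.
# The replacement caps come from the study; the sand-mass shares assigned to
# each element are hypothetical placeholders for illustration only.
elements = {
    #                      (share of building's sand, max diaper replacement)
    "structural concrete": (0.50, 0.10),
    "partition wall mortar": (0.15, 0.40),
    "flooring and paving": (0.35, 0.09),
}

# Each element contributes (its share of the sand) x (its replacement cap).
overall = sum(share * cap for share, cap in elements.values())
print(f"Overall sand replaced: {overall:.1%}")
```

With these placeholder shares the average lands in the low teens; the actual split between structural concrete, mortar, and paving in the paper's design is what pulls the whole-building number down to 8 percent.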

The team’s results are extremely promising for low- and middle-income nations facing intense housing crises. For the purposes of their study, researchers adhered to Indonesian building codes to mirror a real world application. “Like other developing countries, low-cost housing provision in Indonesia has been a serious concern in the last three decades,” writes the team in their article. Indonesia’s urban population is growing at around 4 percent per year, resulting in an annual housing deficit of as much as 300,000 homes per year, the authors also noted.

Moving forward, researchers note that collaboration would be needed with government and waste facility officials to develop a means for large-scale collection, sanitization, and shredding of diaper waste. At the same time, nations’ building regulations must be amended to allow for diaper-imbued concrete. Still, the findings are a creative potential solution to the literal and figurative mountain of a sustainability issue—one that may soon finally be toppled. Just make sure it’s all sanitized first.


]]>
This summer could push US energy grids to their limits https://www.popsci.com/technology/summer-energy-grid-report/ Thu, 18 May 2023 19:00:00 +0000 https://www.popsci.com/?p=542036
Sun setting behind an high voltage power line transformer
The NERC's assessment warns two-thirds of North America is at an elevated risk for blackouts this summer. Deposit Photos

A new assessment shows that most of the US may not possess enough energy reserves to handle seasonal heatwaves, severe storms, and hurricanes.

The post This summer could push US energy grids to their limits appeared first on Popular Science.

]]>

A worrying new report from the North American Electric Reliability Corporation (NERC) estimates over two-thirds of North America will see elevated risks of energy grid shortfalls and blackouts over the summer if faced with extreme temperature spikes and dire weather. While resources remain “adequate” for normal seasonal peak demand, the major non-profit international regulatory authority’s 2023 Summer Reliability Assessment warns most of the US—including the West, Midwest, Texas, Southeast, and New England regions—may not possess enough energy reserves to handle heatwaves, severe storms, and hurricanes.

NERC’s report is particularly troubling given this year’s El Niño forecast. El Niño historically produces wetter-than-average conditions along the Gulf Coast alongside drier climates for areas such as the Pacific Northwest and the Rocky Mountains. While they are naturally occurring events, both El Niño and La Niña weather patterns are expected to strengthen rapidly by the end of the decade as climate change exacerbates them. On top of this, industry watchdogs say the US power grid still requires critical maintenance, repairs, and modernization. “The system is close to its edge,” warned NERC’s Director of Reliability Assessment and Performance Analysis John Moura in a call with reporters.

In Texas, for example, the NERC explains that “dispatchable generation may not be sufficient to meet reserves during an extreme heat wave that is accompanied by low winds.” Wildfire risks in the West and Northwest, on the other hand, could jeopardize the ability to transfer electricity as needed, resulting in “localized load shedding.”

[Related: How an innovative battery system in the Bronx will help charge up NYC’s grid.]

“This report is an especially dire warning that America’s ability to keep the lights on has been jeopardized. That’s unacceptable,” Jim Matheson, the CEO of the National Rural Electric Cooperative Association, said in a statement. “Federal policies must recognize the compromised reliability reality facing the nation before it’s too late.”

In addition to reliability concerns during peak performance times, the NERC report notes that continued supply chain issues concerning labor, material, and equipment have affected preseason maintenance for generation and transmission facilities across North America.

Still, NERC’s assessment isn’t entirely bad news—much of northern Canada and the US East Coast face a low risk of exceeding their operating reserves. Meanwhile, no region in North America is currently staring down a “high” risk of not meeting their needs during normal peak conditions. “Increased, rapid deployment of wind, solar and batteries have made a positive impact,” said Mark Olson, NERC’s manager of Reliability Assessments. “However, generator retirements continue to increase the risks associated with extreme summer temperatures, which factors into potential supply shortages in the western two-thirds of North America if summer temperatures spike.”


]]>
Wendy’s wants underground robots to deliver food to your car https://www.popsci.com/technology/wendys-underground-delivery-robot/ Thu, 18 May 2023 16:30:00 +0000 https://www.popsci.com/?p=541984
Wendy's chain restaurant at night.
Wendy's wants to automate its drive-thru. Batu Gezer / Unsplash

The concept is similar to a pneumatic tube system.

The post Wendy’s wants underground robots to deliver food to your car appeared first on Popular Science.

]]>

Wendy’s announced this week that it is going to try using underground autonomous robots to speed up how customers collect online orders. The burger joint plans to pilot the system designed by “hyperlogistics” company Pipedream, and aims to be able to send food from the kitchen to designated parking spots.

Wendy’s seems to be on a quest to become the most technologically advanced fast food restaurant in the country. Last week, it announced that it had partnered with Google to develop its own AI system (called Wendy’s FreshAI) that could take orders at a drive-thru. This week, it’s going full futuristic. (Pipedream’s current marketing line is “Someday we’ll use teleportation, until then we’ll use Pipedream.”)

According to a PR email sent to PopSci, digital orders now make up 11 percent of Wendy’s total sales and are growing. That’s on top of the 75 to 80 percent of orders that are placed at a drive-thru.

The proposed autonomous system aims “to make digital order pick-up fast, reliable and invisible.” When customers or delivery drivers are collecting an online order, they pull into a dedicated parking spot with an “Instant Pickup portal,” where there will be a drive-thru style speaker and kiosk to confirm the order with the kitchen. In a matter of seconds, the food is then sent out by robots moving through an underground series of pipes using “Pipedream’s temperature-controlled delivery technology.” The customer can then grab their order from the kiosk without ever leaving their car. Apparently, the “first-of-its-kind delivery system” is designed so that drinks “are delivered without a spill and fries are always Hot & Crispy.”

[Related: What robots can and can’t do for a restaurant]

Wendy’s is far from the first company to try and use robots to streamline customer orders, though most go further than the parking lot. Starship operates a delivery service on 28 university campuses while Uber Eats is still trialing sidewalk delivery robots in Miami, Florida; Fairfax, Virginia; and Los Angeles, California. Whether these knee-height six-wheeled electric autonomous vehicles can graduate from school and make it into the real world remains to be seen.

The other big semi-autonomous delivery bets are aerial drones. Wing, a subsidiary of Google-parent Alphabet, unveiled a device called the Auto-Loader earlier this year. It also calls for a dedicated parking spot and aims to make it quicker and easier for staff at partner stores to attach deliveries to one of the company’s drones. 

What sets Wendy’s and Pipedream’s solution apart is that it all happens in a space the restaurant controls. Starship, Uber Eats, and Wing are all trying to bring robots out into the wider world, where they can get attacked by students, take out power lines, and otherwise have to deal with humans, street furniture, and the chaos of existence. Provided Wendy’s abides by building ordinances and any necessary health and safety laws, cost is the only thing stopping the company from adding tube-dwelling robots to every restaurant it controls. Really, the option Wendy’s is trialing has more in common with a pneumatic tube system—hopefully it will be a bit more practical.

The post Wendy’s wants underground robots to deliver food to your car appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Navy SEALs will finally stay dry in a cozy new submarine https://www.popsci.com/technology/navy-seals-dry-combat-submersible/ Tue, 16 May 2023 22:04:15 +0000 https://www.popsci.com/?p=541534
A SEAL Delivery Vehicle (SDV) Mark 11 is seen in Hawaii in 2020. The DOD notes: "This photo has been altered for security purposes"
A SEAL Delivery Vehicle (SDV) Mark 11 is seen in Hawaii in 2020. The DOD notes: "This photo has been altered for security purposes". Christopher Perez / US Navy

The existing method of transportation involves a sub that is exposed to the elements. That should change soon.

The post Navy SEALs will finally stay dry in a cozy new submarine appeared first on Popular Science.


Navy SEALs have a well-earned reputation as an amphibious force. The special operations teams, whose acronym derives from “Sea, Air and Land,” are trained to operate from a range of vehicles, departing as needed to carry out missions through water, in the sky, or on the ground. When deploying covertly in the ocean, SEALs have for decades taken the SEAL Delivery Vehicle, a flooded transport in which the crew ride submerged and immersed in ocean water. Now, Special Operations Command says a new enclosed submarine—in other words, one that’s dry inside—should be ready for operation before the end of May.

This new submarine, in contrast to the open-water SEAL Delivery Vehicle, is called the Dry Combat Submersible. It’s been in the works since at least 2016, and was designed as a replacement for a previous enclosed transport submarine, the Advanced SEAL Delivery System. This previous advanced sub, developed in the early 2000s, was canceled after a prototype caught fire in 2008. That, compounded by cost overruns in the program, halted development on the undersea vehicle. It also came at a time when SEALs were operating largely on land and through the air, as part of the increased operational tempo of the Iraq and Afghanistan wars. 

But now, it appears to be full-steam ahead for the Dry Combat Submersible. The news was confirmed at the SOF [Special Operations Forces] Week conference in Tampa, Florida, which ran from May 8 through 11. The convention is a place for Special Operations Forces from across the military to talk shop, meet with vendors selling new and familiar tools, and gather as a chattering class of silent professionals. It is also, like the Army, Navy, and Air Force association conventions, a place for the military to announce news directly relevant to those communities.

“This morning we received an operational test report. So that means the Dry Combat Submersible is going to be operational by Memorial Day, and we’re coming to an end scenario,” John Conway, undersea program manager at SOCOM’s program executive office-maritime, said on May 10, as reported by National Defense Magazine.

The flooded submersible in use today allows four SEALs, plus a driver and a navigator, all clad in wetsuits, to travel undetected several miles beneath the surface. With just the driver and navigator aboard, the craft can traverse 36 nautical miles at 4 knots, a journey taking nine hours. With the four SEALs, the range is limited not just by the weight of the passengers and their gear, but by the conditions inside the submersible itself.

“Because the SEALs are exposed to the environment water temperature can be a more limiting factor than battery capacity,” wrote Christopher J. Kelly, in a 1998 study of the submarine in joint operations.

When Lockheed Martin announced in 2016 that it would be manufacturing Dry Combat Submersibles, it offered no specifics on the vehicle other than that it would weigh more than 30 tons and be capable of launch from surface ships. (The current SEAL Delivery Vehicle is launched from larger submarines.) The Dry Combat Submersible, at announcement, promised “longer endurance and operate at greater depths than swimmer delivery vehicles (SDV) in use,” the ability to travel long distances underwater, and an overall setup that “allows the personnel to get closer to their destination before they enter the water, and be more effective upon arrival.”

Concept art for the vehicle showed a passenger capacity of at least nine, though it would still be a fairly compact ride. The S351 Nemesis, made by MSubs, which has partnered with Lockheed Martin on the project, is the likely basis for the Dry Combat Submersible. As listed, the Nemesis has a capacity of eight passengers and one pilot, and can travel as far as 66 nautical miles at a speed of 5 knots, making the journey in about 13 hours.
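Those endurance figures are easy to sanity-check, since a knot is simply one nautical mile per hour. A quick sketch using the reported numbers:

```python
def transit_hours(range_nm: float, speed_knots: float) -> float:
    """Transit time in hours; a knot is one nautical mile per hour."""
    return range_nm / speed_knots

# SEAL Delivery Vehicle: 36 nautical miles at 4 knots
print(transit_hours(36, 4))  # 9.0 hours
# S351 Nemesis (as listed): 66 nautical miles at 5 knots
print(transit_hours(66, 5))  # 13.2 hours
```

The 13.2-hour result is why the Nemesis journey is reported as roughly 13 hours.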

Once in the Navy’s hands, the new submersible will ensure better starts to operations for SEALs, who can arrive at missions having only briefly donned wetsuits instead of spending hours immersed in the ocean.

As the Pentagon shifts focus from terrestrial counter-insurgencies to the possibility of major power war, especially in and over the islands of the Pacific, the Dry Combat Submersible will expand how its SEALs can operate. It’s a lot of effort for a relatively small part of the overall military, but the precise application of specialized forces can have an outsized impact on the course of subsequent operations, from harbor clearing to covert action behind fortified lines. 



An inside look at the data powering McLaren’s F1 team https://www.popsci.com/technology/mclaren-f1-data-technology/ Tue, 16 May 2023 19:00:00 +0000 https://www.popsci.com/?p=541361
McLaren's F1 race car
McLaren’s F1 race car, seen here in the garage near the track, belonging to driver Oscar Piastri. McLaren

Go behind the scenes at the Miami Grand Prix and see how engineers prep for the big race.

The post An inside look at the data powering McLaren’s F1 team appeared first on Popular Science.


Formula 1, a 70-year-old motorsport, has recently undergone a cultural renaissance. That renaissance has been fueled in large part by the growing popularity of the glitzy, melodrama-filled Netflix reality series, “Drive To Survive,” which Mercedes team principal Toto Wolff once said was closer to the fictional “Top Gun” than a documentary. Relaxed social media rules after F1 changed owners also helped provide a look into the interior lives of drivers-turned-new-age-celebrities.

As a result, there’s been an explosion of interest among US audiences, which means more eyeballs and more ticket sales. Delving into the highly technical world of F1 can be daunting, so here are the basics to know about the design of the sport—plus an inside look at the complex web of communications and computer science at work behind the scenes. 

Data and a new era of F1

Increasingly, Formula 1 has become a data-driven sport; this becomes evident when you look into the garages of modern F1 teams. 

“It started really around 60, 70 years ago with just a guy with a stopwatch, figuring out which was the fastest lap—to this day and age, having every car equipped with sensors that generate around 1.1 million data points each second,” says Luuk Figdor, principal sports tech advisor with Amazon Web Services (AWS), which is a technology partner for F1. “There’s a huge amount of data that’s being created, and that’s per car.” Part of AWS’ job is to put this data in a format that is understandable not only to experts, but also to viewers at home, with features like F1 Insights.

There was a time when cars had unreliable radios, and engineers could only get data on race performance at the very end. Now, things look much different. Every car is able to send instantaneous updates on steering, G-force, speed, fuel usage, engine and tire status, gear selection, and much more. Around the track itself, teams have more accurate ways to get GPS data on car positions, weather data, and timing data.

“This is data from certain sensors that are drilled into the track before the race and there’s also a transponder in the car,” Figdor explains. “And whenever the car passes the sensor, it sends a signal. Based on those signals you can calculate how long it took for a car to pass a certain section of the track.” 
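The timing loops Figdor describes work like split gates: each buried sensor records a timestamp as the transponder passes, and the difference between consecutive crossings is the time for that section of track. A minimal illustration (the timestamps here are invented):

```python
# Invented timestamps (in seconds) as one car's transponder crosses
# the start line and three successive timing loops.
crossings = [0.0, 28.41, 61.87, 92.30]

# Time through each section = difference between consecutive crossings.
sectors = [round(t1 - t0, 2) for t0, t1 in zip(crossings, crossings[1:])]
print(sectors)  # [28.41, 33.46, 30.43] -> the three sector times
```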

These innovations have made racing more competitive over the years, and made the margins in speed between some of the cars much closer. Fractions of seconds can divide cars coming in first or second place.

F1 101

For newbies, here’s a quick refresher on the rules of the game. Twenty international drivers from 10 teams compete for two championships: the Drivers’ Championship and the Constructors’ Championship.

Pre-season testing starts in late February, and racing spans from March to November. There are 20 or so races at locations around the world, and each race is around 300 km (186 miles), which equals 50 to 70 laps (except for the Monaco circuit, which is shorter). Drivers get points for finishing high in the order—those who place 11th or lower get no points. Individuals with the highest points win the Drivers’ Championship, and teams with the highest points win the Constructors’ Championship.

A good car is as essential for winning as a good driver. And an assortment of engineers are crucial for ensuring that both the driver and the car are performing at their best. In addition to steering and shifting gears, drivers can control many other settings like engine power and brake balance. Races run rain or shine, but special tires are often required for wet roads. Every team is required to build certain elements of their car, including the chassis, from scratch (they are allowed to buy engines from other suppliers). The goal is to have a car with low air resistance, high speed, low fuel consumption, and good grip on the track. Most cars can reach speeds of around 200 mph. Carefully shaped aerodynamic surfaces generate the downforce needed to keep the cars planted on the track.

Technical regulations from the FIA contain rules about how the cars can be built—what’s allowed and not allowed. Rules can change from season to season, and teams tend to refresh their designs each year. Every concept undergoes thorough aerodynamic and road testing, and modifications can be made during the season. 

The scene backstage before a race weekend

It’s the Thursday before the second-ever Miami Grand Prix. In true Florida fashion, it’s sweltering. The imposing Hard Rock Stadium in Miami Gardens has been transformed into a temporary F1 campus in preparation for race weekend, with the race track wrapping around the central arena and its connected lots like a metal-guarded moat. Bridges take visitors in and out of the stadium. The football field that normally sits at its center has been turned into a paddock park, where the 10 teams have erected semi-permanent buildings that act as their hubs during the week.

Setting up everything the 10 teams need ahead of the competition is a whole production. Some might even call it a type of traveling circus.

The paddock park inside the football field of the Hard Rock Stadium. Charlotte Hu

Ed Green, head of commercial technology for McLaren, greets me in the team’s temporary building in the paddock park. He’s wearing a short-sleeved polo in signature McLaren orange, as is everyone else walking around or sitting in the space. Many team members are also sporting what looks like a Fitbit, likely part of the technology partnership they have with Google. The partnership means that the team will also use Android connected devices and equipment—including phones, tablets and earbuds—as well as the different capabilities provided by Chrome. 

McLaren has developed plenty of custom web applications for Formula 1. “We don’t buy off-the-shelf too much, in the past two years, a lot of our strategy has moved to be on web apps,” Green says. “We’ve developed a lot into Chrome, so the team have got really quick, instant access…so if you’re on the pit wall looking at weather data and video systems, you could take that with you on your phone, or onto the machines back in the engineering in the central stadium.” 

The entrance to McLaren’s garage. Charlotte Hu

This season, there are 23 races. The structure that’s been built here is the team’s hub for flyaway races, the races they can’t drive to from the factory. Marketing, engineering, team hospitality, and the drivers all share the hub. The important points in space—the paddock, garage, and race track—are linked by fiber optic cables.

“This is sort of the furthest point from the garage that we have to keep connected on race weekend,” Green says. “They’ll be doing all the analysis of all the information, the systems, from the garage.”

To set up this infrastructure so it’s ready to transmit and receive data in time for when the cars hit the track, an early crew of IT personnel have to arrive the Saturday before to run the cabling, and get the basics in order. Then, the wider IT team arrives on Wednesday, and it’s a mad scramble to get the rest of what they need stood up so that by Thursday lunchtime, they can start running radio checks and locking everything down. 

“We fly with our IT rig, and that’s because of the cost and complexity of what’s inside it. So we have to bring that to every race track with us,” says Green. The path to and from the team hub to the garages involves snaking in and out of corridors, long hallways and lobbies under the stadium. As we enter McLaren’s garage, we first come across a wall of headsets, each with a name label underneath, including the drivers and each of their race engineers. This is how members of the team stay in contact with one another. 

Headsets help team members stay connected. Charlotte Hu

The garage, with its narrow hallway, opens in one direction into the pit. Here you can see the two cars belonging to McLaren drivers Lando Norris and Oscar Piastri being worked on by engineers, with garage doors that open onto the race track. The two cars are suspended in various states of disassembly, with mechanics examining and tweaking them like surgeons at an operating table. The noise of drilling, whirring, and miscellaneous clunking fills the space. There are screens everywhere, running numbers and charts. One screen has the local track time, a second is running a countdown clock until curfew tonight. During the race, it will post video feeds from the track and the drivers, along with social media feeds. 

McLaren team members work on the Lando Norris McLaren MCL60 in the garage
McLaren team members work on Lando Norris’ MCL60 in the garage. McLaren

We step onto a platform viewing area overlooking the hubbub. On the platform, there are two screens: one shows the mission control room back in England, and the other shows a diagram of the race circuit as a circle. “We look at the race as a circle, and that’s because it helps us see the gaps between the cars in time,” Green says. “Looking through the x, y, z coordinates is useful but actually they bunch up in the corners. Engineers like to see gaps in distances.” 
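The circular view is straightforward to compute: given each car's fractional progress around the lap and a representative lap time, the gap between two cars in seconds is just the difference in progress multiplied by the lap time. A toy sketch with invented numbers (the 90-second lap time is an assumption):

```python
LAP_TIME = 90.0  # representative lap time in seconds (assumed)

def gap_seconds(leader_progress: float, chaser_progress: float) -> float:
    """Time gap between two cars from their fractional lap progress
    (0.0 = start line, 1.0 = a full lap); wraps around the circle."""
    return round((leader_progress - chaser_progress) % 1.0 * LAP_TIME, 2)

print(gap_seconds(0.64, 0.62))   # 1.8 seconds
print(gap_seconds(0.62, 0.575))  # 4.05 seconds
print(gap_seconds(0.1, 0.9))     # 18.0 -- wrap-around across the start line
```

Plotted on a circle, equal time gaps look equal everywhere on the lap, which is exactly the property Green says the engineers want.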

“This is sort of home away from home for the team. This is where we set up our garage and move our back office central services as well as engineering,” he notes. “We’re still in construction.”

From Miami to mission control in Woking

During race weekend, the mission control office in England, where McLaren is based, has about 32 people who are talking to the track in near real time. “We’re running just over 100 milliseconds from here in Miami back to base in Woking. They will get all the data feeds coming from these cars,” Green explains. “If you look at the team setting up the cars, you will see various sensors on the underside of the car. There’s an electronic control unit that sits under the car. It talks to us as the cars go around track. That’s regulated by the FIA. We cannot send information to the car but we can receive information from the car. Many, many years ago that wasn’t possible.”

For the Miami Grand Prix, Green estimates that McLaren will have about 300 sensors on each car for pressure taps (to measure airflow), temperature reading, speed checks across the car, and more. “There’s an enormous amount of information to be seen,” Green says. “From when we practice, start racing, to when we finish the race, we generate just about 1.5 terabytes of information from these two cars. So it’s a huge amount of information.” 
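Those two numbers (roughly 1.1 million data points per car per second, per AWS, and about 1.5 terabytes from two cars over a weekend) can be loosely reconciled with a back-of-envelope calculation. The four hours of track running assumed below is purely illustrative:

```python
POINTS_PER_SEC = 1.1e6   # per car (AWS figure)
CARS = 2
TRACK_HOURS = 4.0        # assumed running time per car across the weekend
WEEKEND_BYTES = 1.5e12   # ~1.5 TB from both cars (McLaren figure)

total_points = POINTS_PER_SEC * CARS * TRACK_HOURS * 3600
bytes_per_point = WEEKEND_BYTES / total_points
print(f"{total_points:.2e} points, ~{bytes_per_point:.0f} bytes per point")
```

That lands at a few tens of bytes per data point, which is plausible for a timestamped sensor reading.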

[Related: Inside the search for the best way to save humanity’s data]

Because the data comes in too quickly for any one person to handle, machine learning algorithms and neural networks in the loop help engineers spot patterns or irregularities. This software helps package the information into a form that can be used to make decisions, like when a car should switch tires, push up its speed, stay out, or make a pit stop.
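The article doesn't say which algorithms McLaren runs, but the simplest version of "spotting irregularities" in a telemetry channel is a rolling statistical check: flag any sample that sits far outside the channel's recent mean. A minimal sketch of that idea (the tire-temperature trace is invented):

```python
from collections import deque
from statistics import mean, stdev

def flag_anomalies(samples, window=20, threshold=3.0):
    """Yield indices of samples more than `threshold` standard
    deviations away from the rolling mean of recent samples."""
    history = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(history) >= 5:  # wait for a few samples before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) > threshold * sigma:
                yield i
        history.append(x)

# Invented slowly rising tire-temperature trace with one sudden spike
trace = [100 + 0.1 * i for i in range(60)]
trace[30] = 140
print(list(flag_anomalies(trace)))  # [30]
```

Production systems are far more sophisticated, but the shape is the same: a stream goes in, and only the samples worth a human's attention come out.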

“It’s such a data-driven sport, and everything we do is founded on data in the decision-making, making better use of digital twins, which has been part of the team for a long time,” Green says. Digital twins are virtual models of objects based on scanned information. They’re useful for running simulations.

Throughout the race weekend, McLaren will run around 200 simulations to explore different scenarios such as what would happen if the safety car came out to clear debris from a crash, or if it starts raining. “We’ve got an incredibly smart team, but when you have to make a decision in three seconds, you’ve got to have human-in-the-loop technology to feed you what comes next as well,” Green says. “It’s a lot of fun.” 
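McLaren's models aren't public, but the flavor of those scenario runs can be conveyed with a toy Monte Carlo: replay the closing laps many times under a random chance of a safety car, and compare pitting now against staying out. Every number below is invented for illustration:

```python
import random

def simulate(pit_now: bool, laps: int = 20, rng=None) -> float:
    """One toy run of a race's closing laps; all figures are invented."""
    rng = rng or random.Random()
    time = 21.0 if pit_now else 0.0      # pit-lane time loss, seconds
    fresh = pit_now
    for _ in range(laps):
        time += 88.0 if fresh else 89.5  # fresh tires ~1.5 s/lap faster
        if rng.random() < 0.03:          # chance of a safety car this lap
            time += 15.0                 # everyone slows behind it
            if not fresh:                # stopping under it is nearly free
                time += 5.0
                fresh = True
    return time

rng = random.Random(42)
pit = sum(simulate(True, rng=rng) for _ in range(10_000)) / 10_000
stay = sum(simulate(False, rng=rng) for _ in range(10_000)) / 10_000
print(f"pit now: {pit:.1f} s, stay out: {stay:.1f} s")
```

A real strategy model layers in tire-degradation curves, traffic, and competitor behavior, but the decision structure is the same: compare expected outcomes across many simulated scenarios.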

[Related: Can software really define a vehicle? Renault and Google are betting on it.]

Improved computing resources and better simulation technology have helped change the sport as a whole, too. Not only do they reduce the cost of testing design options (important because of the new cost cap rule that puts a ceiling on how much teams are allowed to spend on designing and building their cars), they also inform new rules for racing.

“One of the things pre-2022, the way that the cars were designed resulted in the fact it was really hard to follow another car closely. And this is because of the aerodynamics of the car,” Figdor says. When a car zooms down the track, it distorts the air behind it. It’s like how a speedboat disrupts the water it drives through. And if you try to follow a speedboat with another speedboat in the lake, you will find that it’s quite tricky. 

“The same thing happens with Formula 1 cars,” says Figdor. “What they did in 2022 is they came up with new regulations around the design of the car that should make it easier for cars to follow each other closely on the track.”

That was possible because F1 and AWS were able to create and run realistic, and relatively fast simulations more formally called “two-car Computational Fluid Dynamics (CFD) aerodynamic simulations” that were able to measure the effects of various cars with different designs following each other in a virtual wind tunnel. “Changing regulations like that, you have to be really sure of what you’re doing. And using technology, you can just estimate many more scenarios at just a fraction of the cost,” Figdor says. 

Making sure there aren’t too many engineers in the garage

The pit wall bordering the race track may be the best seat in the house, but the engineering island is one of the most important. It sits inside the garage, cramped between the two cars. Engineers from both sides of the garage share resources there to look at material reliability and car performance. The engineering island is connected to the pit wall and also to a stack of servers and an IT tower tucked away in a corner of the garage. The IT tower, which has 140 terabytes of storage, 4.5 terabytes of memory, 172 logical processors, and many, many batteries, keeps the team in communication with the McLaren Technology Center.

McLaren engineers speak in the garage
McLaren engineers at the engineering island in the middle of the garage. McLaren

All the crew on the ground in Miami, about 80 engineers, make up around 10 percent of the McLaren team. It’s just the tip of the iceberg. The team of engineers at large work in three umbrella categories: design, build, and race. 

[Related: Behind the wheel of McLaren’s hot new hybrid supercar, the Artura]

McLaren flies their customized IT rig out to every race. McLaren

The design team will use computers to mock up parts in ways that make them lighter, more structurally sound, or give more performance. “Material design is part of that, you’ll have aerodynamicists looking at how the car’s performing,” says Green. Then, the build team will take the 3D designs, and flatten them into a pattern. They’ll bring out rolls of carbon fiber that they store in a glass chiller, cut out the pattern, laminate it, bind different parts together, and put it into a big autoclave or oven. As part of that build process, a logistics team will take that car and send it out to the racetrack and examine how it drives. 

Formula 1 cars can change dramatically from the first race of the season to the last. 

“If you were to do nothing to the car that wins the first race, it’s almost certain to come last at the end of the season,” Green says. “You’ve got to be constantly innovating. Probably about 18 percent of the car changed from when we launched it in February to now. And when we cross that line in Abu Dhabi, probably 80 percent of the car will change.” 

There’s a rotating roster of engineers at the stadium and in the garage on different days of race week. “People have got very set disciplines and you also hear that on the radio as well. It’s the driver’s engineers that are going to listen to everything and they’re going to be aware of how the car’s set up,” Green says. “But you have some folks in aerodynamics on Friday, Saturday, particularly back in Woking. That’s so important now in modern F1—how you set the car up, the way the air is performing—so you can really over-index and make sure you’ve got more aerodynamic expertise in the room.”

The scene on Sunday

On race day, the makeup of engineers is a slightly different blend. There are more specialists focused on competitor intelligence, analysis, and strategy insight. Outside of speed, the data points they are really interested in are related to the air pressures and the air flows over the car. 

“Those things are really hard to measure and a lot of energy goes into understanding that. Driver feedback is also really important, so we try to correlate that feedback here,” Green says. “The better we are at correlating the data from our virtual wind tunnel, our physical wind tunnel, the manufacturing parts, understanding how they perform on the car, the quicker we can move through the processes and get upgrades to the car. Aerodynamics is probably at the moment the key differentiator between what teams are doing.” 

As technology advances, and partners work on more interesting products in-house, some of that work is sure to translate over to F1. Green says that there are some exciting upcoming projects exploring whether Google could help the team apply speech-to-text software to transcribe driver radios from other teams during the races—work that’s currently being done by human volunteers.



No machine can beat a dog’s bomb-detecting sniffer https://www.popsci.com/story/technology/dogs-bomb-detect-device/ Mon, 18 Mar 2019 21:21:29 +0000 https://www.popsci.com/uncategorized/dogs-bomb-detect-device/
A Labrador retriever smelling for explosives with a member of a bomb squad at the trial of the 2015 Boston Marathon bomber
A bomb-sniffing dog walks in front of a courthouse during the 2015 trial for accused Boston Marathon bomber Dzhokhar Tsarnaev. Matt Stone/MediaNews Group/Boston Herald via Getty Images

Dogs are the best bomb detectors we have. Can scientists do better?

The post No machine can beat a dog’s bomb-detecting sniffer appeared first on Popular Science.


This story was first published on June 3, 2013. It covered the most up-to-date technology in bomb detection at the time, with a focus on research based on canine olfaction. Today, dogs still hold an edge over chemical sensors: they’ve even been trained to sniff out bed bugs, the coronavirus, and homemade explosives like HMTD.

IT’S CHRISTMAS SEASON at the Quintard Mall in Oxford, Alabama, and were it not a weekday morning, the tiled halls would be thronged with shoppers, and I’d probably feel much weirder walking past Victoria’s Secret with TNT in my pants. The explosive is harmless in its current form—powdered and sealed inside a pair of four-ounce nylon pouches tucked into the back pockets of my jeans—but it’s volatile enough to do its job, which is to attract the interest of a homeland defender in training by the name of Suge.

Suge is an adolescent black Labrador retriever in an orange DO NOT PET vest. He is currently a pupil at Auburn University’s Canine Detection Research Institute and comes to the mall once a week to practice for his future job: protecting America from terrorists by sniffing the air with extreme prejudice.

Olfaction is a canine’s primary sense. It is to him what vision is to a human, the chief input for data. For more than a year, the trainers at Auburn have honed that sense in Suge to detect something very explicit and menacing: molecules that indicate the presence of an explosive, such as the one I’m carrying.

The TNT powder has no discernible scent to me, but to Suge it has a very distinct chemical signature. He can detect that signature almost instantly, even in an environment crowded with thousands of other scents. Auburn has been turning out the world’s most highly tuned detection dogs for nearly 15 years, but Suge is part of the school’s newest and most elite program. He is a Vapor Wake dog, trained to operate in crowded public spaces, continuously assessing the invisible vapor trails human bodies leave in their wake.

Unlike traditional bomb-sniffing dogs, which are brought to a specific target—say, a car trunk or a suspicious package—the Vapor Wake dog is meant to foil a particularly nasty kind of bomb, one carried into a high traffic area by a human, perhaps even a suicidal one. In busy locations, searching individuals is logistically impossible, and fixating on specific suspects would be a waste of time. Instead, a Vapor Wake dog targets the ambient air.

As I approach the mall’s central courtyard, where its two wings of chain stores intersect, Suge is pacing back and forth at the end of a lead, nose in the air. At first, I walk toward him and then swing wide to feign interest in a table covered with crystal curios. When Suge isn’t looking, I walk past him at a distance of about 10 feet, making sure to hug the entrance of Bath & Body Works, conveniently the most odoriferous store in the entire mall. Within seconds, I hear the clattering of the dog’s toenails on the hard tile floor behind me.

As Suge struggles at the end of his lead (once he’s better trained, he’ll alert his handler to threats in a less obvious manner), I reach into my jacket and pull out a well-chewed ball on a rope—his reward for a job well done—and toss it over my shoulder. Christmas shoppers giggle at the sight of a black Lab chasing a ball around a mall courtyard, oblivious that had I been an actual terrorist, he would have just saved their lives.

That Suge can detect a small amount of TNT at a distance of 10 feet in a crowded mall in front of a shop filled with scented soaps, lotions, and perfumes is an extraordinary demonstration of the canine’s olfactory ability. But what if, as a terrorist, I’d spotted Suge from a distance and changed my path to avoid him? And what if I’d chosen to visit one of the thousands of malls, train stations, and subway platforms that don’t have Vapor Wake dogs on patrol?

Dogs may be the most refined scent-detection devices humans have, a technology in development for 10,000 years or more, but they’re hardly perfect. Graduates of Auburn’s program can cost upwards of $30,000. They require hundreds of hours of training starting at birth. There are only so many trainers and a limited supply of purebred dogs with the right qualities for detection work. Auburn trains no more than a couple of hundred a year, meaning there will always be many fewer dogs than there are malls or military units. Also, dogs are sentient creatures. Like us, they get sleepy; they get scared; they die. Sometimes they make mistakes.

As the tragic bombing at the Boston Marathon made all too clear, explosives remain an ever-present danger, and law enforcement and military personnel need dogs—and their noses—to combat them. But it also made clear that security forces need something in addition to canines, something reliable, mass-producible, and easily positioned in a multitude of locations. In other words, they need an artificial nose.

Engineer in glasses and a blue coat in front of a bomb detector mass spectrometer
David Atkinson at the Pacific Northwest National Laboratory has created a system that uses a mass spectrometer to detect the molecular weights of common explosives in air. Courtesy Pacific Northwest National Laboratory

IN 1997, DARPA created a program to develop just such a device, targeted specifically to land mines. No group was more aware than the Pentagon of the pervasive and existential threat that explosives represent to troops in the field, and it was becoming increasingly apparent that the need for bomb detection extended beyond the battlefield. In 1988, a group of terrorists brought down Pan Am Flight 103 over Lockerbie, Scotland, killing 270 people. In 1993, Ramzi Yousef and Eyad Ismoil drove a Ryder truck full of explosives into the underground garage at the World Trade Center in New York, nearly bringing down one tower. And in 1995, Timothy McVeigh detonated another Ryder truck full of explosives in front of the Alfred P. Murrah Federal Building in Oklahoma City, killing 168. The “Dog’s Nose Program,” as it was called, was deemed a national security priority.

Over the course of three years, scientists in the program made the first genuine headway in developing a device that could “sniff” explosives in ambient air rather than test for them directly. In particular, an MIT chemist named Timothy Swager homed in on the idea of using fluorescent polymers that, when bound to molecules given off by TNT, would turn off, signaling the presence of the chemical. The idea eventually developed into a handheld device called Fido, which is still widely used today in the hunt for IEDs (many of which contain TNT). But that’s where progress stalled.

Olfaction, in the most reductive sense, is chemical detection. In animals, molecules bind to receptors that trigger a signal that’s sent to the brain for interpretation. In machines, scientists typically use mass spectrometry in lieu of receptors and neurons. Most scents, explosives included, are created from a specific combination of molecules. To reproduce a dog’s nose, scientists need to detect minute quantities of those molecules and identify the threatening combinations. TNT was relatively easy. It has a high vapor pressure, meaning it releases abundant molecules into the air. That’s why Fido works. Most other common explosives, notably RDX (the primary component of C-4) and PETN (in plastic explosives such as Semtex), have very low vapor pressures—parts per trillion at equilibrium, and perhaps even parts per quadrillion once they’re loose in the air.
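To put “parts per trillion” in perspective, here’s a back-of-envelope calculation. This is a rough sketch, not from the article: it assumes a standard approximate figure of about 2.5 × 10^19 air molecules per cubic centimeter at room conditions.

```python
# Back-of-envelope: how many trace molecules float in one liter of air
# at a vapor pressure of 5 parts per trillion (RDX at equilibrium)?

AIR_MOLECULES_PER_CM3 = 2.5e19  # approximate number density of air at room conditions
CM3_PER_LITER = 1000

def molecules_per_liter(parts_per_trillion: float) -> float:
    """Molecules of the trace compound in one liter of ambient air."""
    fraction = parts_per_trillion * 1e-12
    return fraction * AIR_MOLECULES_PER_CM3 * CM3_PER_LITER

rdx = molecules_per_liter(5)   # RDX at equilibrium: about 5 ppt
print(f"{rdx:.2e}")            # ~1.25e+11 molecules per liter
```

That sounds like a lot, but it is a vanishingly small fraction of the air itself, which is why direct vapor detection stayed out of reach for so long.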

“That was just beyond the capabilities of any instrumentation until very recently,” says David Atkinson, a senior research scientist at the Pacific Northwest National Laboratory, in Richland, Washington. A gregarious, slightly bearish man with a thick goatee, Atkinson is the co-founder and “perpetual co-chair” of the annual Workshop on Trace Explosives Detection. In 1988, he was a PhD candidate at Washington State University when Pan Am Flight 103 went down. “That was the turning point,” he says. “I’ve spent the last 20 years helping to keep explosives off airplanes.” He might at last be on the verge of a solution.

When I visit him in mid-January, Atkinson beckons me into a cluttered lab with a view of the Columbia River. At certain times of the year, he says he can see eagles swooping in to poach salmon as they spawn. “We’re going to show you the device we think can get rid of dogs,” he says jokingly and points to an ungainly, photocopier-size machine with a long copper snout in a corner of the lab; wires run haphazardly from various parts.

Last fall, Atkinson and two colleagues did something tremendous: They proved, for the first time, that a machine could perform direct vapor detection of two common explosives—RDX and PETN—under ambient conditions. In other words, the machine “sniffed” the vapor as a dog would, from the air, and identified the explosive molecules without first heating or concentrating the sample, as currently deployed chemical-detection machines (for instance, the various trace-detection machines at airport security checkpoints) must. In one shot, Atkinson opened a door to the direct detection of the world’s most nefarious explosives.

As Atkinson explains the details of his machine, senior scientist Robert Ewing, a trim man in black jeans and a speckled gray shirt that exactly matches his salt-and-pepper hair, prepares a demonstration. Ewing grabs a glass slide soiled with RDX, an explosive that even in equilibrium has a vapor pressure of just five parts per trillion. This particular sample, he says, is more than a year old and just sits out on the counter exposed; the point being that it’s weak. Ewing raises this sample to the snout end of a copper pipe about an inch in diameter. That pipe delivers the air to an ionization source, which selectively pairs explosive compounds with charged particles, and then on to a commercial mass spectrometer about the size of a small copy machine. No piece of the machine is especially complicated; for the most part, Atkinson and Ewing built it with off-the-shelf parts.

Ewing allows the machine to sniff the RDX sample and then points to a computer monitor where a line graph that looks like an EKG shows what is being smelled. Within seconds, the graph spikes. Ewing repeats the experiment with C-4 and then again with Semtex. Each time, the machine senses the explosive.

A commercial version of Atkinson’s machine could have enormous implications for public safety, but to get the technology from the lab to the field will require overcoming a few hurdles. As it stands, the machine recognizes only a handful of explosives (at least nine as of April), although both Ewing and Atkinson are confident that they can work out the chemistry to detect others if they get the funding. Also, Atkinson will need to shrink it to a practical size. The current smallest version of a high-performance mass spectrometer is about the size of a laser printer—too big for police or soldiers to carry in the field. Scientists have not yet found a way to shrink the device’s vacuum pump. DARPA, Atkinson says, has funded a project to dramatically reduce the size of vacuum pumps, but it’s unclear if the work can be applied to mass spectrometry.

If Atkinson can reduce the footprint of his machine, even marginally, and refine his design, he imagines plenty of very useful applications. For instance, a version affixed to the millimeter wave booths now common at American airports (the ones that require passengers to stand with their hands in the air—also invented at PNNL, by the way) could use a tube to sniff air and deliver it to a mass spectrometer. Soldiers could also mount one to a Humvee or an autonomous vehicle that could drive up and sniff suspicious piles of rubble in situations too perilous for a human or dog. If Atkinson could reach backpack size or smaller, he may even be able to get portable versions into the hands of those who need them most: the marines on patrol in Afghanistan, the Amtrak cops guarding America’s rail stations, or the officers watching over a parade or road race.

Atkinson is not alone in his quest for a better nose. A research group at MIT is studying the use of carbon nanotubes lined with peptides extracted from bee venom that bind to certain explosive molecules. And at the French-German Research Institute in France, researcher Denis Spitzer is experimenting with a chemical detector made from micro-electromechanical machines (MEMs) and modeled on the antennae of a male silkworm moth, which are sensitive enough to detect a single molecule of female pheromone in the air.

Atkinson may have been first to demonstrate extremely sensitive chemical detection—and that research is all but guaranteed to strengthen terror defense—but he and other scientists still have a long way to go before they approach the sophistication of a dog nose. One challenge is to develop a sniffing mechanism. “With any electronic nose, you have to get the odorant into the detector,” says Mark Fisher, a senior scientist at Flir Systems, the company that holds the patent for Fido, the IED detector. With every sniff, a dog processes about half a liter of air, and it sniffs up to 10 times per second. Fido processes fewer than 100 milliliters per minute, and Atkinson’s machine sniffs a maximum of 20 liters per minute.
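Those throughput figures are easy to compare directly. A quick sketch, using only the numbers quoted above:

```python
# Compare air throughput: a dog's nose vs. two machine sniffers,
# using the figures quoted in the article.

DOG_LITERS_PER_SNIFF = 0.5
DOG_SNIFFS_PER_SECOND = 10

dog_lpm = DOG_LITERS_PER_SNIFF * DOG_SNIFFS_PER_SECOND * 60  # liters per minute
fido_lpm = 0.1       # Fido: fewer than 100 milliliters per minute
atkinson_lpm = 20    # Atkinson's machine: up to 20 liters per minute

print(f"dog: {dog_lpm:.0f} L/min")                         # 300 L/min
print(f"dog vs. Atkinson: {dog_lpm / atkinson_lpm:.0f}x")  # 15x
print(f"dog vs. Fido: {dog_lpm / fido_lpm:.0f}x")          # 3000x
```

Even the best machine, in other words, moves roughly a fifteenth of the air a working dog does.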

Another much greater challenge, perhaps even insurmountable, is to master the mechanisms of smell itself.

German shepherd patrolling Union Station in Washington, D.C.
To condition detection dogs to crowds and unpredictable situations, such as Washington, D.C.’s Union Station at Thanksgiving [above], trainers send them to prisons to interact with inmates. Mandel Ngan/Afp/Getty Images

OLFACTION IS THE OLDEST of the sensory systems and also the least understood. It is complicated and ancient, sometimes called the primal sense because it dates back to the origin of life itself. The single-celled organisms that first floated in the primordial soup would have had a chemical detection system in order to locate food and avoid danger. In humans, it’s the only sense with its own dedicated processing station in the brain—the olfactory bulb—and also the only one that doesn’t transmit its data directly to the higher brain. Instead, the electrical impulses triggered when odorant molecules bind with olfactory receptors route first through the limbic system, home of emotion and memory. This is why smell is so likely to trigger nostalgia or, in the case of those suffering from PTSD, paralyzing fear.

All mammals share the same basic system, although there is great variance in sensitivity between species. Those that use smell as the primary survival sense, in particular rodents and dogs, are orders of magnitude better than humans at identifying scents. Architecture has a lot to do with that. Dogs are lower to the ground, where molecules tend to land and linger. They also sniff much more frequently and in a completely different way (by first exhaling to clear distracting scents from around a target and then inhaling), drawing more molecules to their much larger array of olfactory receptors. Good scent dogs have 10 times as many receptors as humans, and 35 percent of the canine brain is devoted to smell, compared with just 5 percent in humans.

Unlike hearing and vision, both of which have been fairly well understood since the 19th century, scientists first explained smell only 50 years ago. “In terms of the physiological mechanisms of how the system works, that really started only a few decades ago,” says Richard Doty, director of the Smell and Taste Center at the University of Pennsylvania. “And the more people learn, the more complicated it gets.”

Whereas Atkinson’s vapor detector identifies a few specific chemicals using mass spectrometry, animal systems can identify thousands of scents that are, for whatever reason, important to their survival. When molecules find their way into a nose, they bind with olfactory receptors that dangle like upside-down flowers from a sheet of brain tissue known as the olfactory epithelium. Once a set of molecules links to particular receptors, an electrical signal is sent through axons into the olfactory bulb and then through the limbic system and into the cortex, where the brain assimilates that information and says, “Yum, delicious coffee is nearby.”

As is the case with explosives, most smells are compounds of chemicals (only a very few are pure; for instance, vanilla is only vanillin), meaning that the system must pick up all those molecules together and recognize the particular combination as gasoline, say, and not diesel or kerosene. Doty explains the system as a kind of code, and he says, “The code for a particular odor is some combination of the proteins that get activated.” To create a machine that parses odors as well as dogs, science has to unlock the chemical codes and program artificial receptors to alert for multiple odors as well as combinations.

In some ways, Atkinson’s machine is the first step in this process. He’s unlocked the codes for a few critical explosives and has built a device sensitive enough to detect them, simply by sniffing the air. But he has not had the benefit of many thousands of years of bioengineering. Canine olfaction, Doty says, is sophisticated in ways that humans can barely imagine. For instance, humans don’t dream in smells, he says, but dogs might. “They may have the ability to conceptualize smells,” he says, meaning that instead of visualizing an idea in their mind’s eye, they might smell it.

Animals can also convey metadata with scent. When a dog smells a telephone pole, he’s reading a bulletin board of information: which dogs have passed by, which ones are in heat, etc. Dogs can also sense pheromones in other species. The old adage is that they can smell fear, but scientists have proved that they can smell other things, like cancer or diabetes. Gary Beauchamp, who heads the Monell Chemical Senses Center in Philadelphia, says that a “mouse sniffing another mouse can obtain much more information about that mouse than you or I could by looking at someone.”

If breaking chemical codes is simple spelling, deciphering this sort of metadata is grammar and syntax. And while dogs are fluent in this mysterious language, scientists are only now learning the ABC’s.

Dog in an MRI machine with computer screens in front
Paul Waggoner at Auburn University treats dogs as technology. He studies their neurological responses to olfactory triggers with an MRI machine. Courtesy Auburn Canine Detection Institute

THERE ARE FEW people who better appreciate the complexities of smell than Paul Waggoner, a behavioral scientist and the associate director of Auburn’s Canine Research Detection Institute. He has been hacking the dog’s nose for more than 20 years.

“By the time you leave, you won’t look at a dog the same way again,” he says, walking me down a hall where military intelligence trainees were once taught to administer polygraphs and out a door and past some pens where new puppies spend their days. The CRDI occupies part of a former Army base in the Appalachian foothills and breeds and trains between 100 and 200 dogs—mostly Labrador retrievers, but also Belgian Malinois, German shepherds, and German shorthaired pointers—a year for Amtrak, the Department of Homeland Security, police departments across the US, and the military. Training begins in the first weeks of life, and Waggoner points out that the floor of the puppy corrals is made from a shiny tile meant to mimic the slick surfaces they will encounter at malls, airports, and sporting arenas. Once weaned, the puppies go to prisons in Florida and Georgia, where they get socialized among prisoners in a loud, busy, and unpredictable environment. And then they come home to Waggoner.

What Waggoner has done over tens of thousands of hours of careful study is begin to quantify a dog’s olfactory abilities. For instance, how small a sample dogs can detect (parts per trillion, at least); how many different types of scents they can detect (within a certain subset, explosives for instance, there seems to be no limit, and a new odor can be learned in hours); whether training a dog on multiple odors degrades its overall detection accuracy (typically, no); and how certain factors like temperature and fatigue affect performance.

The idea that the dog is a static technology just waiting to be obviated really bothers Waggoner, because he feels like he’s innovating every bit as much as Atkinson and the other lab scientists. “We’re still learning how to select, breed, and get a better dog to start with—then how to better train it and, perhaps most importantly, how to train the people who operate those dogs.”

Waggoner even taught his dogs to climb into an MRI machine and endure the noise and tedium of a scan. If he can identify exactly which neurons are firing in the presence of specific chemicals and develop a system to convey that information to trainers, he says it could go a long way toward eliminating false alarms. And if he could get even more specific—whether, say, RDX fires different cells than PETN—that information might inform more targeted responses from bomb squads.

After a full day of watching trainers demonstrate the multitudinous abilities of CRDI’s dogs, Waggoner leads me back to his sparsely furnished office and clicks a video file on his computer. It was from a lecture he’d given at an explosives conference, and it featured Major, a yellow lab wearing what looked like a shrunken version of the Google Street View car array on its back. Waggoner calls this experiment Autonomous Canine Navigation. Working with preloaded maps, a computer delivered specific directions to the dog. By transmitting beeps that indicated left, right, and back, it helped Major navigate an abandoned “town” used for urban warfare training. From a laptop, Waggoner could monitor the dog’s position using both cameras and a GPS dot, while tracking its sniff rate. When the dog signaled the presence of explosives, the laptop flashed an alert, and a pin was dropped on the map.

It’s not hard to imagine this being very useful in urban battlefield situations or in the case of a large area and a fast-ticking clock—say, an anonymous threat of a bomb inside an office building set to detonate in 30 minutes. Take away the human and the leash, and a dog can sweep entire floors at a near sprint. “To be as versatile as a dog, to have all capabilities in one device, might not be possible,” Waggoner says.

It’s important to recognize that both sides—the dog people and the scientists working to emulate the canine nose—have a common goal: to stop bombs from blowing up. And the most effective result of this technology race, Waggoner thinks, is a complementary relationship between dog and machine. It’s impractical, for instance, to expect even a team of Vapor Wake dogs to protect Grand Central Terminal, but railroad police could perhaps one day install a version of Atkinson’s sniffer at that station’s different entrances. If one alerts, they could call in the dogs.

There’s a reason Flir Systems, the maker of Fido, has a dog research group, and it’s not just for comparative study, says the man who runs it, Kip Schultz. “I think where the industry is headed, if it has forethought, is a combination,” he told me. “There are some things a dog does very well. And some things a machine does very well. You can use one’s strengths against the other’s weaknesses and come out with a far better solution.”

Despite working for a company that is focused mostly on sensor innovation, Schultz agrees with Waggoner that we should be simultaneously pushing the dog as a technology. “No one makes the research investment to try to get an Apple approach to the dog,” he says. “What could he do for us 10 or 15 years from now that we haven’t thought of yet?”

On the other hand, dogs aren’t always the right choice; they’re probably a bad solution for screening airline cargo, for example. It’s a critical task, but it’s tedious work sniffing thousands of bags per day as they roll by on a conveyor belt. There, a sniffer mounted over the belt makes far more sense. It never gets bored.

“The perception that sensors will put dogs out of business—I’m telling you that’s not going to happen,” Schultz told me, at the end of a long conference call. Mark Fisher, who was also on the line, laughed. “Dogs aren’t going to put sensors out of business either.”

The post No machine can beat a dog’s bomb-detecting sniffer appeared first on Popular Science.

Why the EU wants to build an underwater cable in the Black Sea https://www.popsci.com/technology/eu-georgia-undersea-cable/ Mon, 15 May 2023 11:00:00 +0000 https://www.popsci.com/?p=541041
Illustration of a submarine communications cable.
Illustration of a submarine communications cable. DEPOSIT PHOTOS

According to reports, this effort will reduce reliance on communications infrastructure that runs through Russia.

Since 2021, the EU and the nation of Georgia have highlighted a need to install an underwater internet cable through the Black Sea to improve the connectivity between Georgia and other European countries. 

Since the start of the war in Ukraine, the project has garnered increased attention as countries in the South Caucasus region have been working to decrease their reliance on Russian resources—a trend that applies to energy as well as communications infrastructure. Internet cables have come under scrutiny because they could be tapped by hackers or governments for spying.

“Concerns around intentional sabotage of undersea cables and other maritime infrastructure have also grown since multiple explosions on the Nord Stream gas pipelines last September, which media reports recently linked to Russian vessels,” the Financial Times reported. The proposed cable, which will cross international waters in the Black Sea, will be 1,100 kilometers (684 miles) long and will link the Caucasus nations to EU member states. It’s estimated to cost €45 million (approximately $49 million). 
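For a rough sense of scale, those reported figures work out to roughly €41,000 per kilometer of cable. This is a back-of-envelope division that ignores converter stations and other fixed costs:

```python
# Rough per-kilometer cost of the proposed Black Sea cable,
# using the length and budget figures reported above.

length_km = 1100
cost_eur = 45_000_000

eur_per_km = cost_eur / length_km
print(f"≈ €{eur_per_km:,.0f} per km")  # ≈ €40,909 per km
```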

[Related: An undersea cable could bring speedy internet to Antarctica]

“Russia is one of multiple routes through which data packages move between Asia and Europe and is integral to connectivity in some parts of Asia and the Caucasus, which has sparked concern from some politicians about an over-reliance on the nation for connectivity,” The Financial Times reported. 

Across the dark depths of the globe’s oceans there are 552 cables that are “active and planned,” according to TeleGeography. Altogether, they may measure nearly 870,000 miles, the company estimates. TeleGeography’s map shows the existing cables, including those in the Black Sea area.

[Related: A 10-million-pound undersea cable just set an internet speed record]

The Black Sea cable is just one project in the European Commission’s infrastructure-related Global Gateway Initiative. According to the European Commission’s website, “the new cable will be essential to accelerate the digital transformation of the region and increase its resilience by reducing its dependency on terrestrial fibre-optic connectivity transiting via Russia. In 2023, the European Investment Bank is planning to submit a proposal for a €20 million investment grant to support this project.”

Currently, the project is still in the feasibility testing stage. While the general route and the locations for the converter stations have already been selected, it will have to go through geotechnical and geophysical studies before formal construction can go forward.

How an innovative battery system in the Bronx will help charge up NYC’s grid https://www.popsci.com/technology/ninedot-battery-energy-storage-system-bronx/ Sat, 13 May 2023 11:00:00 +0000 https://www.popsci.com/?p=540875
The four white units are the batteries, which can provide about three megawatts of power over four hours.
The four white units are the batteries, which can provide about three megawatts of power over four hours. Rob Verger

The state has a goal of getting six gigawatts of battery storage online by 2030. Take an inside look at how one small system will work.

On a small patch of land in the northeast Bronx in New York City sits a tidy but potent battery storage system. Located across the street from a beige middle school building, and not too far from a Planet Fitness and a Dollar Tree, the battery system is designed to send power into the grid at peak moments of demand on hot summer afternoons and evenings. 

New York state has a goal of getting a whopping six gigawatts of battery storage systems online in the next seven years, and this system, at about three megawatts, is a very small but hopefully helpful part of that. It’s intended to be able to send out those three megawatts of power over a four-hour period, typically between 4 pm and 8 pm on the toastiest days of the year, with the goal of making a burdened power grid a bit less stressed and ideally a tad cleaner. 

The local power utility, Con Edison, recently connected the battery system to the grid. Here’s how it works, and why systems like this are important.

From power lines to batteries, and back again

The source of the electricity for these batteries is the existing power distribution lines that run along the top of nearby poles. Those wires carry power at 13,200 volts, but the battery system itself needs to work with a much lower voltage. That’s why before the power even gets to the batteries themselves, it needs to go through transformers. 

battery storage
Adam Cohen, of NineDot Energy, at the battery facility in January. Rob Verger

During a January tour of the site for Popular Science, Adam Cohen, the CTO of NineDot Energy, the company behind this project, opens a gray metal door. Behind it are transformers. “They look really neato,” he says. Indeed, they do look neat—three yellowish units that take that voltage and transform it into 480 volts. This battery complex is actually two systems that mirror each other, so other transformers are in additional equipment nearby. 

After those transformers do their job and convert the voltage to a lower number, the electricity flows to giant white Tesla Megapack battery units. Those batteries are large white boxes with padlocked cabinets, and above them is fire-suppression equipment. These units don’t just store the power; they also contain inverters that convert the incoming AC power to DC before the juice can be stored. When the power flows back out of the batteries, it’s converted to AC again. 
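The voltage step itself is simple to quantify. For an ideal transformer, which is a simplification of the real hardware, the 13,200-to-480-volt conversion implies a turns ratio of 27.5 to 1, with current scaled up by the same factor:

```python
# Ideal-transformer sketch of the site's 13,200 V -> 480 V step-down.
# Real transformers have losses; this is a simplified illustration.

V_PRIMARY = 13_200  # volts, on the distribution lines
V_SECONDARY = 480   # volts, on the battery-side bus

ratio = V_PRIMARY / V_SECONDARY           # turns ratio of an ideal transformer
print(f"step-down ratio: {ratio:.1f}:1")  # 27.5:1

# Power in equals power out (ideally), so current rises by the same ratio.
# Hypothetical example: 100 A drawn on the primary side.
primary_amps = 100
secondary_amps = primary_amps * ratio
print(f"{secondary_amps:.0f} A on the 480 V side")  # 2750 A
```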

electrical transformers
Transformer units like these convert the electricity from 13,200 volts to 480 volts. Rob Verger

The battery storage system is designed to follow a specific rhythm. It will charge gradually between 10 pm and 8 am, Cohen says. That’s a time “when the grid has extra availability, the power is cheaper and cleaner, [and] the grid is not overstressed,” he says. When the day begins and the grid starts experiencing more demand, the batteries stop charging. 
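That rhythm can be summarized as a simple daily schedule. Below is a minimal sketch of the modes described above; it is hypothetical illustration code, not NineDot’s actual control software:

```python
# A minimal sketch of the daily rhythm described in the article:
# charge overnight, discharge at the evening peak on grid-event days,
# idle otherwise. Hypothetical logic, not NineDot's control system.

def battery_mode(hour: int, grid_event: bool) -> str:
    """Return the battery's mode for a given hour of the day (0-23)."""
    if hour >= 22 or hour < 8:          # 10 pm to 8 am: cheaper, cleaner power
        return "charge"
    if grid_event and 16 <= hour < 20:  # 4 pm to 8 pm on hot summer days
        return "discharge"
    return "idle"

print(battery_mode(23, grid_event=False))  # charge
print(battery_mode(17, grid_event=True))   # discharge
print(battery_mode(12, grid_event=False))  # idle
```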

In the summer heat, when there’s a “grid event,” that’s when the magic happens, Cohen says. Starting around 4 pm, the batteries will be able to send their power back out into the grid to help destress the system. They’ll be able to produce enough juice to power about 1,000 homes over that four-hour period, according to an estimate by the New York State Energy Research and Development Authority, or NYSERDA.
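The headline numbers fit together neatly: three megawatts sustained for four hours is 12 megawatt-hours, or about 12 kilowatt-hours for each of those 1,000 homes:

```python
# The system's headline numbers: 3 MW sustained for four hours,
# enough for about 1,000 homes per NYSERDA's estimate.

power_mw = 3
hours = 4
homes = 1000

energy_mwh = power_mw * hours             # total energy delivered per event
kw_per_home = power_mw * 1000 / homes     # power per home while discharging
kwh_per_home = energy_mwh * 1000 / homes  # energy per home over the event

print(energy_mwh, kw_per_home, kwh_per_home)  # 12 3.0 12.0
```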

[Related: How the massive ‘flow battery’ coming to an Army facility in Colorado will work]

The power will flow back up into the same wires that charged them before, and then onto customers. The goal is to try to make the grid a little bit cleaner, or less dirty, than it would have been if the batteries didn’t exist. “It’s offsetting the dirty energy that would have been running otherwise,” Cohen says. 

Of course, the best case scenario would be for batteries to get their power from renewable sources, like solar or wind, and the site does have a small solar canopy that could send a teeny tiny bit of clean energy into the grid. But New York City and the other downstate zones near it currently rely very heavily on fossil fuels. In New York City in 2022, for example, utility-scale energy production was 100 percent from fossil fuels, according to a recent report from the New York Independent System Operator. (One of several solutions in the works to that problem involves a new transmission line.) What that means is that the batteries will be drawing power from a fossil-fuel-dominated grid, but doing so at nighttime when that grid is hopefully less polluting. 

NineDot Energy says that this is the first use of Tesla Megapacks in New York City.
NineDot Energy says that this is the first use of Tesla Megapacks in New York City. Rob Verger

How systems like these can help

Electricity is very much an on-demand product. What we consume “has to be made right now,” Cohen notes from behind the wheel of his Nissan Leaf, as we drive towards the battery storage site in the Bronx on a Friday in January. Batteries, of course, can change that dynamic, storing the juice for when it’s needed. 

This project in the Bronx is something of an electronic drop in a bucket: At three megawatts, the batteries represent a tiny step towards New York State’s goal to have six gigawatts, or 6,000 megawatts, of battery storage on the grid by 2030. Even though this one facility in the Bronx represents less than one percent of that goal, it can still be useful, says Schuyler Matteson, a senior advisor focusing on energy storage and policy at NYSERDA. “Small devices play a really important role,” he says. 

One way that small devices like these can help is that they can be placed near the people using the electricity in their homes or businesses, so that less power is lost in transmission from farther away. “They’re very close to customers on the distribution network, and so when they’re providing power at peak times, they’re avoiding a lot of the transmission losses, which can be anywhere from five to eight percent of energy,” Matteson says. 
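Matteson’s five-to-eight-percent figure means a distant generator must produce extra power to deliver the same amount to customers. A quick sketch of that overhead:

```python
# With 5-8 percent transmission losses, a distant plant must generate
# extra power to deliver the same amount to customers.

def generation_needed(delivered_mw: float, loss_fraction: float) -> float:
    """MW a distant generator must produce to deliver `delivered_mw`."""
    return delivered_mw / (1 - loss_fraction)

for loss in (0.05, 0.08):
    mw = generation_needed(3.0, loss)
    print(f"{loss:.0%} losses: {mw:.2f} MW generated to deliver 3 MW")
```

A battery sitting across the street from its customers sidesteps most of that overhead entirely.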

And being close to a community provides interesting opportunities. A campus of the Bronx Charter Schools for Better Learning sits on the third floor of the middle school across the street. There, two dozen students have been working in collaboration with a local artist, Tijay Mohammed, to create a mural that will eventually hang on the green fence in front of the batteries. “They are so proud to be associated with the project,” says Karlene Buckle, the manager of the enrichment program at the schools.

Student council representatives at the Bronx Charter Schools for Better Learning (BBL2) participate in a mural project for the battery facility.
Student council representatives at the Bronx Charter Schools for Better Learning (BBL2) participate in a mural project for the battery facility. Kevin Melendez / Bronx Charter Schools for Better Learning

Grid events

The main benefit a facility like this can have is the way it helps the grid out on a hot summer day. That’s because when New York City experiences peak temperatures, energy demand peaks too, as everyone cranks up their air conditioners. 

To meet that electricity demand, the city relies on its more than one dozen peaker plants, which are dirtier and less efficient than an everyday baseline fossil fuel plant. Peaker plants disproportionately impact communities located near them. “The public health risks of living near peaker plants range from asthma to cancer to death, and this is on top of other public health crises and economic hardships already faced in environmental justice communities,” notes Jennifer Rushlow, the dean of the School for the Environment at Vermont Law and Graduate School, via email. The South Bronx, for example, has peaker plants, and the borough as a whole has an estimated 22,855 cases of pediatric asthma, according to the American Lung Association. Retiring these plants or diminishing their use isn’t just a matter of energy security—it’s an environmental justice issue.

So when power demand peaks, “what typically happens is we have to ramp up additional natural gas facilities, or even in some instances, oil facilities, in the downstate region to provide that peak power,” Matteson says. “And so every unit of storage we can put down there to provide power during peak times offsets some of those dirty, marginal units that we would have to ramp up otherwise.” 

By charging at night, instead of during the day, and then sending the juice out at peak moments, “you’re actually offsetting local carbon, you’re offsetting local particulate matter, and that’s having a really big benefit of the air quality and health impacts for New York City,” he says.  

[Related: At New York City’s biggest power plant, a switch to clean energy will help a neighborhood breathe easier]

Imagine, says Matteson, that a peaker plant is producing 45 megawatts of electricity. A 3-megawatt battery system coming online could mean that operators could dial down the dirty plant to 42 megawatts instead. But in an ideal world, it doesn’t come online at all. “We want 15 of [these 3 megawatt] projects to add up to 45 megawatts, and so if they can consistently show up at peak times, maybe that marginal dirty generator doesn’t even get called,” he says. “If that happens enough, maybe they retire.” 
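The arithmetic here is straightforward:

```python
# Matteson's example: each 3 MW battery lets operators dial a 45 MW
# peaker plant down a notch; 15 such projects cover the plant entirely.

peaker_mw = 45
battery_mw = 3

remaining = peaker_mw - battery_mw             # with one battery online
projects_to_replace = peaker_mw // battery_mw  # batteries to cover the plant

print(remaining)            # 42
print(projects_to_replace)  # 15
```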

Nationally, most of the United States experiences a peak need for electricity on hot summer days, just like New York City does, with a few geographic exceptions, says Paul Denholm, a senior research fellow focusing on energy storage at the National Renewable Energy Laboratory in Colorado. “Pretty much most of the country peaks during the summertime, in those late afternoons,” he says. “And so we traditionally build gas turbines—we’ve got hundreds of gigawatts of gas turbines that have been installed for the past several decades.” 

A very small amount of power can come from this solar canopy on site—a reminder that the cleanest energy comes from renewable sources. Rob Verger

While the three-megawatt project in the Bronx is not going to replace a peaker plant by any means, Denholm says that in general, the trend is moving towards batteries taking over what peaker plants do. “As those power plants get old and retire, you need to build something new,” he says. “Within the last five years, we’ve reached this tipping point, where storage can now outcompete new traditional gas-fired turbines on a life-cycle cost basis.” 

Right now, New York state has 279 megawatts of battery storage already online, which is around 5 percent of the total goal of 6 gigawatts. Denholm estimates that nationally, nearly nine gigawatts of battery storage are online already. 

“There’s significant quantifiable benefits to using [battery] storage as peaker,” Denholm says. One of those benefits is fewer local emissions, which is important because “a lot of these peaker plants are in places that have historically been [environmental-justice] impacted regions.” 

“Even when they’re charging off of fossil plants, they’re typically charging off of more efficient units,” he adds. 

If all goes according to plan, the batteries will start discharging their juice this summer, on the most sweltering days. 

The post How an innovative battery system in the Bronx will help charge up NYC’s grid appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

All the products that Google has sent to the graveyard https://www.popsci.com/technology/google-discontinued-products/ Thu, 11 May 2023 21:00:00 +0000 https://www.popsci.com/?p=540628
What happened to Google Glass?

Google Reader, Jacquard, and Wave are among the many hyped-up projects that never really took off.

The post All the products that Google has sent to the graveyard appeared first on Popular Science.

At Google’s annual I/O developer’s conference, the tech giant announced a whole heap of AI-powered features that will be coming soon to its core apps, like Gmail, Docs, Sheets, Photos, and Meet. It even showcased an updated version of Project Starline, the 3D video-calling booth it announced back in 2021.

Fun and exciting as that all is, Google’s flashy new project announcements are usually met with some degree of trepidation by the tech press. The company has undeniably revolutionized search and advertising, and products like Gmail and Docs are incredibly popular. But it has also announced countless products with great fanfare, failed to support them, and then quietly killed them. Let’s have a look at some of the highlights and lowlights from Google’s product graveyard. 

Google Glass, Wave, Reader, and the other ones people are still bitter about

Over the past two decades, Google has killed off a lot of products—and some of them were pretty popular, or at least had diehard fans. Others, not so much. 

Google Reader is, perhaps, the biggest victim here. The beloved RSS reader app was unceremoniously axed, possibly in an attempt to drive people to Google+. It’s still missed by a lot of tech writers. 

The Google URL Shortener was a handy free alternative to bit.ly and other similar services. It got killed in 2019. A similar service, Google Go Links, which let you make your own custom URL shortener, was discontinued in 2021.

Inbox by Gmail, an innovative mobile-first email app, was pulled in 2019. However, most of its features, like snoozing emails and smart replies, were added to Gmail. 

Another groundbreaking Google app was Google Wave, a real-time collaborative document-editing tool. Apps like Notion, Slack, and even Google Docs owe a lot to the trend-setting app, which was shut down in 2012. 

Google Glass drew less bitterness: it was discontinued for consumers in 2015, and the Glass OS version of Android followed in 2017. Its official demise was announced earlier this year. Not many people were sad to see it go, though if rumors are to be believed, we might be gearing up for the next AR goggle hype cycle.

And perhaps most famously, Google+ was an attempt to build a Facebook-style social network that failed spectacularly. Google crammed Google+ features into YouTube, Gmail, and every other Google app, but the network was nonetheless phased out in 2019.

Now, with some of the big names out of the way, here are some products you might have forgotten Google even launched. 

Stadia, we hardly knew ya

Google Stadia was a cloud gaming service that ran through Chrome, a Chromecast, or a mobile app. The idea was that you could stream games that actually ran on Google’s servers. As long as you had a fast enough internet connection, it would effectively turn your smartphone, TV, or under-powered PC into a games console. 

Unfortunately, despite some dedicated fans and a lot of hype from Google, the company never delivered the one thing a games console needs: great games. Stadia stopped operating early this year.

Jacquard

One of Google’s wildest ideas, Jacquard was a collaboration between Google and Levi’s, the clothing brand. Somehow, the two companies made two generations of a smart jacket—one in 2017 and another in 2019. It featured a touch-sensitive strip of fabric on your wrist so you could play and pause music and answer phone calls. 

It’s hard to argue that Jacquard ever really took off, and Google officially killed it earlier this year.

YouTube (not so) Originals

Launched in 2016, YouTube Originals was a somewhat misguided attempt to compete with Netflix and justify the $12/month Google was asking for YouTube Premium (at the time called YouTube Red). Already-popular YouTubers like PewDiePie were given large budgets to make poorly received shows.

Though it wasn’t all bad: Cobra Kai, a sequel to The Karate Kid, got two seasons as a YouTube Original before moving to Netflix. 

YouTube Originals was finally discontinued in late 2022. 

About 9 different messaging apps

Google has a long history of releasing messaging apps before merging them, pivoting them, killing them, and reusing their names. The situation is so ridiculous that we had to write a full explainer last year.

But in short, Google currently has three communications apps: Google Chat, Google Meet, and Messages. To get to this streamlined situation, it has killed, rebranded, or otherwise discontinued: Google Talk or GChat, Google+ Messenger, SMS on Android, Google Voice, Google Messenger (a different app again), YouTube Messages, Google Allo, Google Duo, and Google Hangouts.

So, while Project Starline looks awesome, we fear there’s a good chance the general public never sees it. The AI features look more likely to get some support, but who knows how long Google will let them stick around.

This soft robotic skull implant could change epilepsy treatment https://www.popsci.com/technology/soft-electrode-epilepsy-neurosurgery/ Thu, 11 May 2023 19:00:00 +0000 https://www.popsci.com/?p=540598
The device can be folded small enough to fit through a 2-centimeter hole in the skull. 2023 EPFL/Alain Herzog

The flower-shaped device can fit through a tiny hole in the skull and then delicately unfold.

The post This soft robotic skull implant could change epilepsy treatment appeared first on Popular Science.

After being approached by a neurosurgeon seeking a less invasive way to treat conditions that require a brain implant, a team of researchers at Switzerland’s Ecole Polytechnique Fédérale de Lausanne (EPFL), led by neurotechnology expert Stephanie Lacour, got to work. They took inspiration from soft robots to create a large cortical electrode array that can squeeze through a tiny hole in the skull. They published their findings in Science on May 10. 

A cortical electrode array stimulates, records, or monitors electrical activity in the brain for patients with conditions like epilepsy. Epilepsy is relatively common, affecting around 1.2 percent of the US population. The disorder causes seizures, bursts of electrical activity in the brain that may trigger uncontrollable shaking, sudden stiffness, collapsing, and other symptoms. 

While microelectrode arrays were first invented decades ago, their use for deep brain stimulation in epilepsy patients only gained FDA approval in the past handful of years. Even so, current devices often come with trade-offs, be it in electrode resolution, cortical surface coverage, or even aesthetics, the authors write in their paper.

The researchers created a superthin, flower-shaped device that can be folded small enough to fit through a 2-centimeter hole in the skull, where it rests between the skull and the surface of the brain, in a tiny, delicate gap that measures only around a millimeter in width. Once deployed, the flexible electrode releases each of its six spiraled arms one by one to extend across a region of the brain around 4 centimeters in diameter. Other devices may require a hole in the skull as large as the diameter of the electrode array itself. 

“The beauty of the eversion mechanism is that we can deploy an arbitrary size of electrode with a constant and minimal compression on the brain,” Sukho Song, lead author of the study, said in an EPFL statement. “The soft robotics community has been very much interested in this eversion mechanism because it has been bio-inspired. This eversion mechanism can emulate the growth of tree roots, and there are no limitations in terms of how much tree roots can grow.”

The device isn’t exactly ready for human brains yet—the team has only tested it in a mini-pig—but it will continue to be developed by Neurosoft Bioelectronics, a spinoff of EPFL’s Laboratory for Soft Bioelectronic Interfaces. 

“Minimally invasive neurotechnologies are essential approaches to offer efficient, patient-tailored therapies,” Lacour said in the EPFL statement.

A new mask adds ‘realistic’ smells to VR https://www.popsci.com/technology/virtual-reality-smell-mask/ Thu, 11 May 2023 15:00:00 +0000 https://www.popsci.com/?p=540439
From pineapples to pancakes, these scientists are bringing scents to VR. Nature/YouTube

The device, the authors hope, can make virtual reality feel more lifelike.

The post A new mask adds ‘realistic’ smells to VR appeared first on Popular Science.

In even the most immersive virtual reality setting, it’s unusual to encounter smells. Previous attempts at incorporating smells into VR often utilized aerosols or atomizers, which take the gear to a whole new level of bulkiness, not to mention more complicated cleaning requirements. 

However, scientists from Beihang University and the City University of Hong Kong recently published a report in Nature Communications detailing their methods for integrating smell into existing VR technology. 

The first of the two devices is a sort of patch designed to be worn right under the nose, while the second looks more like a soft mask. Both work basically the same way: a temperature-sensing resistor controls a heating element, which warms a scented paraffin wax to deliver a number of scents (two for the nose patch and nine for the mask). When smelling time is over, magnetic induction coils sweep heat away from the face, effectively blowing out the smelly wax. 

“This is quite an exciting development,” Jas Brooks, a PhD candidate at the University of Chicago’s Human-Computer Integration Lab who has studied chemical interfaces and smell, told MIT Technology Review. “It’s tackling a core problem with smell in VR: How do we miniaturize this, make it not messy, and not use liquid?”

[Related: A new VR exhibit takes you inside the James Webb Space Telescope’s images.]

The authors were able to produce 30 different scents in total, from herbal rosemary to fruity pineapple to sweet baked pancakes. They even included some less-than-pleasant scents, such as stinky durian. Eleven volunteers were able to identify the smells with an average success rate of 93 percent. 

The device, the authors hope, can make VR feel more realistic. But it can also help people who are physically far from each other feel close again, something that may have come in handy during the COVID-19 lockdowns.

Xinge Yu, an author of the study and a scientist at City University of Hong Kong, told New Scientist that he hopes the device can help families or couples feel closer together by creating shared smells. “In terms of entertainment,” Yu continued, “users could experience various outdoor environments with different nature smells at home by VR.”

In a health setting, the sniffable tool could help jog memories for people with cognitive decline, or even help people retrain their sense of smell after temporary loss due to COVID or another illness, Scientific American reports. But before any of that happens, the researchers plan to work on shrinking down the size of the tools, and maybe even fiddling with the concept of taste next.

To build a better crawly robot, add legs—lots of legs https://www.popsci.com/technology/centipede-robot-georgia-tech/ Mon, 08 May 2023 11:00:00 +0000 https://www.popsci.com/?p=539360
The centipede robot from Georgia Tech is a rough terrain crawler. Georgia Institute of Technology

Researchers hope that more limbs will allow them to have fewer sensors.

The post To build a better crawly robot, add legs—lots of legs appeared first on Popular Science.

When traveling on rough and unpredictable ground, the more legs the better—at least for robots. Balancing on two legs is hard; on four legs, it’s slightly easier. But what if a robot had many, many legs, like a centipede? Researchers at the Georgia Institute of Technology have found that giving a robot multiple connected legs allows the machine to easily clamber over landscapes with cracks, hills, and uneven surfaces, without the extensive sensor systems that would otherwise help it navigate its environment. Their results are published this week in the journal Science.

The team has previously modeled the motion of these creepy critters. In this new study, they created a framework for operating their centipede-like robot that was influenced by mathematician Claude Shannon’s communication theory, which posits that, to transmit a signal between two points despite noise, it’s better to break the message into discrete, repeating units. 
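Shannon’s principle can be illustrated with a toy repetition code, sketched below (this is a generic communication-theory example of ours, not the researchers’ actual control framework):

```python
import random

def encode(bits, n=5):
    """Repetition code: send each bit as n identical copies (discrete, repeating units)."""
    return [b for b in bits for _ in range(n)]

def noisy_channel(bits, flip_prob, rng):
    """Simulate noise by flipping each transmitted bit with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def decode(received, n=5):
    """Majority vote over each group of n copies recovers the intended bit."""
    return [int(sum(received[i:i + n]) > n // 2) for i in range(0, len(received), n)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
rng = random.Random(42)
received = noisy_channel(encode(message), flip_prob=0.1, rng=rng)
print(decode(received))
```

With a mild error rate, the majority vote usually recovers the message even though individual copies get corrupted—the same logic by which one leg’s bad footing is outvoted by its many neighbors.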

“We were inspired by this theory, and we tried to see if redundancy could be helpful in matter transportation,” Baxi Chong, a physics postdoctoral researcher, said in a news release. Their creation is a robot built of joined segments, like a model train, with two legs sticking out from each segment to let it “walk.” The notion is that, after the robot is told to go to a certain destination, its legs make contact with the surface along the way and send information about the terrain to the other segments, which then adjust their motion and position accordingly. The team put the robot through a series of real-world and computer trials to see how it walked, how fast it could go, and how it performed on grass, blocks, and other rough surfaces. 

[Related: How a dumpy, short-legged bird could change water bottle designs]

“One value of our framework lies in its codification of the benefits of redundancy, which lead to locomotor robustness over environmental contact errors without requiring sensing,” the researchers wrote in the paper. “This contrasts with the prevailing paradigm of contact-error prevention in the conventional sensor-based closed-loop controls that take advantage of visual, tactile, or joint-torque information from the environment to change the robot dynamics.”

They repeated the experiment with robots that had different numbers of legs (six, 12, and 14). In future work, the researchers want to home in on the optimal number of legs for the centipede-bot so that it can move smoothly in the most cost-effective way possible.

“With an advanced bipedal robot, many sensors are typically required to control it in real time,” Chong said. “But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing.” 

An ambitious underwater ‘space station’ just got a major research collaborator https://www.popsci.com/technology/noaa-underwater-research-station-proteus/ Wed, 03 May 2023 19:00:00 +0000 https://www.popsci.com/?p=538695
A rendering of Proteus. Concept designs by Yves Béhar and fuseproject

Fabien Cousteau's Proteus project will make a bigger splash this year.

The post An ambitious underwater ‘space station’ just got a major research collaborator appeared first on Popular Science.

Today, the National Oceanic and Atmospheric Administration announced that it will be signing a new research agreement with Proteus Ocean Group, which has been drawing up ambitious plans to build a roomy underwater research facility that can host scientists for long stays while they study the marine environment up close. 

The facility, called Proteus, is the brainchild of Fabien Cousteau, the grandson of Jacques Cousteau.

“On PROTEUS™ we will have unbridled access to the ocean 24/7, making possible long-term studies with continuous human observation and experimentation,” Cousteau, founder of Proteus Ocean Group, said in a press release. “With NOAA’s collaboration, the discoveries we can make — in relation to climate refugia, super corals, life-saving drugs, micro environmental data tied to climate events and many others — will be truly groundbreaking. We look forward to sharing those stories with the world.”

This is by no means new territory for the government agency. NOAA has previously supported a similar reef base off the coast of Florida called Aquarius. But Aquarius is aging, and space there is relatively confined, accommodating up to six occupants in 400 square feet. Proteus, the new project, aims to create a roughly 2,000-square-foot habitat for up to 12 occupants. 

This kind of habitat, the first of which is set to be located off the coast of Curacao in the Caribbean, is still on track to be operational by 2026, Lisa Marrocchino, CEO of Proteus Ocean Group, tells PopSci. A second location is set to be announced soon as well. “As far as the engineering process and partners, we’re just looking at that. We’ll be announcing those shortly. We’re evaluating a few different partners, given that it’s such a huge project.” 

[Related: Jacques Cousteau’s grandson is building a network of ocean floor research stations]

Filling gaps in ocean science is a key part of understanding its role in the climate change puzzle. Now that the collaborative research and development agreement is signed, the two organizations will soon be starting workshops on how to tackle future missions related to climate change, collecting ocean data, or even engineering input in building the underwater base. 

“Those will start progressing as we start working together,” Marrocchino says. “We’re just beginning the design process. It’s to the point where we are narrowing down the location. We’ve got one or two really great locations. Now we’re getting in there to see what can be built and what can’t be built.”

The NOAA partnership is only the beginning for Proteus. According to Marrocchino, Proteus Ocean Group has been chatting with other government agencies, and expects to announce more collaborations later this year. “The space community in particular is super excited about what we’re planning to do,” she says. “They really resonate with the idea that it’s very familiar to them in extreme environments, microgravity and pressure.”

Marrocchino also teased that there are ongoing negotiations with large multi-million dollar global brand partners, which will fund large portions of the innovative research set to happen at Proteus. “We’re seeing a trend towards big corporate brands coming towards the idea of a lab underwater,” she says. “You’ll see some partnership agreements geared towards advancing ocean science.” 

Inside Microsoft’s surprising push for a right to repair law—and why it matters https://www.popsci.com/technology/microsoft-right-to-repair-legislation/ Wed, 03 May 2023 01:00:00 +0000 https://www.popsci.com/?p=537845
Repair advocates say Microsoft’s support for a repair bill in Washington — a notable first for a major U.S. tech company — is bringing other manufacturers to the table for the first time. DepositPhotos

Major tech companies have long opposed the right to repair, but Microsoft is finally engaging with lawmakers and activists.

The post Inside Microsoft’s surprising push for a right to repair law—and why it matters appeared first on Popular Science.

This article originally appeared in Grist.

In March, Irene Plenefisch, a senior director of government affairs at Microsoft, sent an email to the eight members of the Washington state Senate’s Environment, Energy, and Technology Committee, which was about to hold a hearing to discuss a bill intended to facilitate the repair of consumer electronics. 

Typically, when consumer tech companies reach out to lawmakers concerning right-to-repair bills — which seek to make it easier for people to fix their devices, thus saving money and reducing electronic waste — it’s because they want them killed. Plenefisch, however, wanted the committee to know that Microsoft, which is headquartered in Redmond, Washington, was on board with this one, which had already passed the Washington House.

“I am writing to state Microsoft’s support for E2SHB 1392,” also known as the Fair Repair Act, Plenefisch wrote in an email to the committee. “This bill fairly balances the interests of manufacturers, customers, and independent repair shops and in doing so will provide more options for consumer device repair.”

The Fair Repair Act stalled out a week later due to opposition from all three Republicans on the committee and Senator Lisa Wellman, a Democrat and former Apple executive. (Apple frequently lobbies against right-to-repair bills, and during a hearing, Wellman defended the iPhone maker’s position that it is already doing enough on repair.) But despite the bill’s failure to launch this year, repair advocates say Microsoft’s support — a notable first for a major U.S. tech company — is bringing other manufacturers to the table to negotiate the details of other right-to-repair bills for the first time. 

“We are in the middle of more conversations with manufacturers being way more cooperative than before,” Nathan Proctor, who heads the U.S. Public Interest Research Group’s right-to-repair campaign, told Grist. “And I think Microsoft’s leadership and willingness to be first created that opportunity.”

Across a wide range of sectors, from consumer electronics to farm equipment, manufacturers attempt to monopolize repair of their devices by restricting access to spare parts, repair tools, and technical documentation. While manufacturers often claim that controlling the repair process limits cybersecurity and safety risks, they also financially benefit when consumers are forced to take their devices back to the manufacturer or upgrade due to limited repair options.

Right-to-repair bills would compel manufacturers to make spare parts and information available to everyone. Proponents argue that making repair more accessible will allow consumers to use older products for longer, saving them money and reducing the environmental impact of technology, including both electronic waste and the carbon emissions associated with manufacturing new products. 

But despite dozens of state legislatures taking up right-to-repair bills in recent years, very few of those bills have passed due to staunch opposition from device makers and the trade associations representing them. New York state passed the first electronics right-to-repair law in the country last year, but before the governor signed it, tech lobbyists convinced her to water it down through a series of revisions.

Like other consumer tech giants, Microsoft has historically fought right-to-repair bills while restricting access to spare parts, tools, and repair documentation to its network of “authorized” repair partners. In 2019, the company even helped kill a repair bill in Washington state. But in recent years the company has started changing its tune on the issue. In 2021, following pressure from shareholders, Microsoft agreed to take steps to facilitate the repair of its devices — a first for a U.S. company. Microsoft followed through on the agreement by expanding access to spare parts and service tools, including through a partnership with the repair guide site iFixit. The tech giant also commissioned a study that found repairing Microsoft products instead of replacing them can dramatically reduce both waste and carbon emissions.

Microsoft has also started engaging more cooperatively with lawmakers over right-to-repair bills. In late 2021 and 2022, the company met with legislators in both Washington and New York to discuss each state’s respective right-to-repair bill. In both cases, lawmakers and advocates involved in the bill negotiations described the meetings as productive. When the Washington state House introduced an electronics right-to-repair bill in January 2022, Microsoft’s official position on it was neutral — something that state representative and bill sponsor Mia Gregerson, a Democrat, called “a really big step forward” at a committee hearing.

Despite Microsoft’s neutrality, last year’s right-to-repair bill failed to pass the House amid opposition from groups like the Consumer Technology Association, a trade association representing numerous electronics manufacturers. Later that year, though, the right-to-repair movement scored some big wins. In June 2022, Colorado’s governor signed the nation’s first right-to-repair law, focused on wheelchairs. The very next day, New York’s legislature passed the bill that would later become the nation’s first electronics right-to-repair law.

When Washington lawmakers revived their right-to-repair bill for the 2023 legislative cycle, Microsoft once again came to the negotiating table. From state senator and bill sponsor Joe Nguyen’s perspective, Microsoft’s view was, “We see this coming, we’d rather be part of the conversation than outside. And we want to make sure it is done in a thoughtful way.”

Proctor, whose organization was also involved in negotiating the Washington bill, said that Microsoft had a few specific requests, including that the bill require repair shops to possess a third-party technical certification and carry insurance. It was also important to Microsoft that the bill only cover products manufactured after the bill’s implementation date, and that manufacturers be required to provide the public only the same parts and documents that their authorized repair providers already receive. Some of the company’s requests, Proctor said, were “tough” for advocates to concede on. “But we did, because we thought what they were doing was in good faith.”

In early March, just before the Fair Repair Act was put to a vote in the House, Microsoft decided to support it. 

“Microsoft has consistently supported expanding safe, reliable, and sustainable options for consumer device repair,” Plenefisch told Grist in an emailed statement. “We have, in the past, opposed specific pieces of legislation that did not fairly balance the interests of manufacturers, customers, and independent repair shops in achieving this goal. HB 1392, as considered on the House floor, achieved this balance.”

While the bill cleared the House by a vote of 58 to 38, it faced an uphill battle in the Senate, where either Wellman or one of the bill’s Republican opponents on the Environment, Energy, and Technology Committee would have had to change their mind for the Fair Repair Act to move forward. Microsoft representatives held meetings with “several legislators,” Plenefisch said, “to urge support for HB 1392.” 

“That’s probably the first time any major company has been like, ‘This is not bad,’” Nguyen said. “It certainly helped shift the tone.”

Microsoft’s engagement appears to have shifted the tone beyond Washington state as well. As other manufacturers became aware that the company was sitting down with lawmakers and repair advocates, “they realized they couldn’t just ignore us,” Proctor said. His organization has since held meetings about proposed right-to-repair legislation in Minnesota with the Consumer Technology Association and TechNet, two large trade associations that frequently lobby against right-to-repair bills and rarely sit down with advocates. 

“A lot of conversations have been quite productive” around the Minnesota bill, Proctor said. TechNet declined to comment on negotiations regarding the Minnesota right-to-repair bill, or whether Microsoft’s support for a bill in Washington has impacted its engagement strategy. The Consumer Technology Association shared letters it sent to legislators outlining its reasons for opposing the bills in Washington and Minnesota, but it also declined to comment on specific meetings or on Microsoft.

While Minnesota’s right-to-repair bill is still making its way through committees in the House and Senate, in Washington state, the Fair Repair Act’s opponents were ultimately unmoved by Microsoft’s support. Senator Drew MacEwen, one of the Republicans on the Environment, Energy, and Technology Committee who opposed the bill, said that Microsoft called his office to tell him the company supported the Fair Repair Act.

“I asked why after years of opposition, and they said it was based on customer feedback,” MacEwen told Grist. But that wasn’t enough to convince MacEwen, who sees device repairability as a “business choice,” to vote yes.

“Ultimately, I do believe there is a compromise path that can be reached but will take a lot more work,” MacEwen said.

Washington state representative and bill sponsor Mia Gregerson wonders if Microsoft could have had a greater impact by testifying publicly in support of the bill. While Gregerson credits the company with helping right-to-repair get further than ever in her state this year, Microsoft’s support was entirely behind the scenes. 

“They did a lot of meetings,” Gregerson said. “But if you’re going to be first in the nation on this, you’ve got to do more.”

Microsoft declined to say why it didn’t testify in support of the Fair Repair Act, or whether that was a mistake. The company also didn’t say whether it would support future iterations of the Washington state bill, or other state right-to-repair bills.

But it signaled to Grist that it might. And in doing so, Microsoft appears to have taken its next small step out of the shadows.

“We encourage all lawmakers considering right to repair legislation to look at HB 1392 as a model going forward due to its balanced approach,” Plenefisch said. 

This article originally appeared in Grist at https://grist.org/technology/microsoft-right-to-repair-quietly-supported-legislation-to-make-it-easier-to-fix-devices-heres-why-thats-a-big-deal/.

Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org

The post Inside Microsoft’s surprising push for a right to repair law—and why it matters appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Australia wants a laser weapon powerful enough to stop a tank https://www.popsci.com/technology/australia-anti-tank-laser-weapon/ Tue, 02 May 2023 22:00:00 +0000 https://www.popsci.com/?p=538555
An armored vehicle in Australia in 2016.
An armored vehicle in Australia in 2016. Mandaline Hatch / US Marine Corps.

Existing laser weapons focus on zapping drones out of the sky. Taking on an armored vehicle would require much more energy.

The post Australia wants a laser weapon powerful enough to stop a tank appeared first on Popular Science.

]]>

On April 4, Australia’s Department of Defence announced the award of $12.9 million to defense giant QinetiQ for a laser weapon. The move followed years of work and interest by Australia’s government in developing lasers for the battlefields of tomorrow. What is most ambitious about the Australian research into laser weapons is not the modest funding to QinetiQ, but a powerful goal set by the Department of Defence in 2020: Australia wants a laser weapon powerful enough to stop a tank.

Laser weapons, more broadly referred to as directed energy, are a science fiction concept with a profoundly mundane reality. Instead of the flashy beams or targeted phasers of Star Wars or Star Trek, lasers work most similarly to a magnifying lens held to fry a dry leaf, concentrating photons into an invisible beam that destroys with heat and time. Unlike the child’s tool for starting fires, modern directed energy weapons derive their power from electricity, either generated on site or stored in batteries. 

Most of the work on laser weapons, in development and testing, has so far focused on relatively small and fragile targets, like drones, missiles, or mortar rounds. Lasers are energy intensive. When PopSci had a chance to try using a 10-kilowatt laser against commercial drones, it still took seconds to destroy each target, a process aided by all the sensors and accouterments of a targeting pod. Because a laser delivers concentrated heat energy over time, cameras that track the target and gimbals that hold and stabilize the beam against it ensure that as much of the beam as possible stays focused on one spot. Once part of a drone was burned through, the whole system would crash to the ground, gravity completing the task.

Tanks, by design and definition, are the opposite of lightly armored and fragile flying machines. That makes Australia’s plan to destroy tanks by laser all the more daring.

Tanks for the idea

In the summer of 2020, Australia’s Department of Defence released a strategy called the 2020 Force Structure Plan. This document, like similar versions in other militaries, offers a holistic vision of what kinds of conflicts the country is prepared to fight in the future. Because the strategy is also focused on procurement, it offers useful insight into the weapons and vehicles the military will want to buy to meet those challenges.

The tank-killing laser comes in the section on Land Combat Support. “A future program to develop a directed energy weapon system able to be integrated onto [Australian Defence Forces] protected and armoured vehicles, and capable of defeating armoured vehicles up to and including main battle tanks. The eventual deployment of directed energy weapons may also improve land force resilience by reducing the force’s dependence on ammunition stocks and supply lines,” reads the strategy.

The latter part of the statement is a fairly universal claim across energy weapons development. While laser weapons are power-intensive, they do not need individual missiles, bullets, or shells the way chemical explosive or kinetic weapons do. Using stored and generated energy, instead of specifically manufactured ammunition, could enable long-term operation on even field-renewable power sources, if available. This could also push the cost per shot below the cost of a bullet, though it would take many shots for those savings to offset the cost of developing a laser system.

But getting a laser to punch through the armor of a tank is a distinct and challenging task. A drone susceptible to melting by laser might have a plastic casing a couple millimeters thick. Tank armor, even on older versions of modern tanks, can be at least 600 mm of steel or composite, and is often thicker. This armor can be enhanced by a range of add-ons, including reactive plating that detonates outward in response to impact by explosive projectiles.

Defeating tank armor with lasers means finding a way to not just hold a beam of light against the tank, but to ensure that the beam is powerful and long-lasting enough to get the job done. 

“One problem faced by laser weapons is the huge amount of power required to destroy useful targets such as missiles. To destroy something of this size requires lasers with hundreds of kilowatts or even megawatts of power. And these devices are only around 20% efficient, so we would require five times as much power to run the device itself,” wrote Sean O’Byrne, an engineering professor at UNSW Canberra and UNSW Sydney, in a piece explaining the promise and peril of anti-tank lasers.

O’Byrne continued: “We are well into megawatt territory here — that’s the kind of power consumed by a small town. For this reason, even portable directed energy devices are very large. (It’s only recently that the US has been able to make a relatively small 50kW laser compact enough to fit on an armoured vehicle, although devices operating at powers up to 300kW have been developed.)”
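The arithmetic behind O’Byrne’s estimate is straightforward: at roughly 20 percent wall-plug efficiency, the electrical input must be about five times the beam power. A minimal sketch of that calculation (the function name and any figures beyond the quoted 20 percent are illustrative):

```python
def input_power_kw(beam_power_kw, efficiency=0.20):
    """Electrical input power needed to produce a given beam power.

    efficiency is the wall-plug efficiency: beam power out / electrical power in.
    """
    return beam_power_kw / efficiency

# A 300 kW beam, the largest vehicle-mounted power O'Byrne mentions:
print(input_power_kw(300))   # 1500.0 kW, i.e. 1.5 MW of electrical input

# A 1 MW (1,000 kW) beam pushes demand to small-town scale:
print(input_power_kw(1000))  # 5000.0 kW, i.e. 5 MW
```

At these power levels, the limiting factor becomes generating and storing the electricity on a vehicle, which is why current fielded systems top out in the tens of kilowatts.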

April’s announcement of a modest sum to develop a domestic laser weapon capability in Australia is a starting point for eventually getting to the scale of lasers powerful enough to melt tanks. Should the feat be accomplished, Australia will find itself with an energy-hungry tool, but one that can defeat hostile armor for as long as it is charged to do so.

The post Australia wants a laser weapon powerful enough to stop a tank appeared first on Popular Science.


]]>
How John Deere’s tech evolved from 19th-century plows to AI and autonomy https://www.popsci.com/technology/john-deere-tech-evolution-and-right-to-repair/ Tue, 02 May 2023 19:00:00 +0000 https://www.popsci.com/?p=538366
John Deere farm equipment
John Deere

Plus, catch up on what's going on with farmers' right to repair this heavy equipment.

The post How John Deere’s tech evolved from 19th-century plows to AI and autonomy appeared first on Popular Science.

]]>

Buzzwords like autonomy, artificial intelligence, electrification, and carbon fiber are common in the automotive industry, and it’s no surprise that they are hot topics: Manufacturers are racing to gain an advantage over competitors while balancing cost and demand. What might surprise you, however, is just how much 180-year-old agriculture equipment giant John Deere uses these same technologies. The difference is that they’re using them on 15-ton farm vehicles.

A couple of years ago, John Deere’s chief technology officer Jahmy Hindman told The Verge that the company now employs more software engineers than mechanical engineers. You don’t have to dig much deeper to find that John Deere is plowing forward toward technology and autonomy in a way that may feel anachronistic to those outside the business.  

“It’s easy to underestimate the amount of technology in the industries we serve, agriculture in particular,” Hindman told PopSci. “Modern farms are very different from the farms of 10 years ago, 20 years ago, and 30 years ago. There are farms that are readily adopting technology that makes agriculture more efficient, more sustainable, and more profitable for growers. And they’re using high-end technology: computer vision, machine learning, [Global Navigation Satellite System] guidance, automation, and autonomy.”

PopSci took an inside look at the company’s high-tech side at its inaugural 2023 John Deere Technology Summit last month. Here’s how it’s all unfolding.

John Deere cab interior and computers
John Deere

Where it started—and where it’s going

John Deere, the OG founder behind the agricultural equipment giant, started as a blacksmith. When Deere, who was born in 1804, moved from his native Vermont to Illinois, he heard complaints from farmer clients about the commonly used cast-iron plows of the day. Sticky soil clung to the iron plows, resulting in a substantial loss in efficiency every time a farmer had to stop and scrape the equipment clean, which could be every few feet.

Deere was inspired to innovate, and grabbed a broken saw blade to create the first commercially successful, “self-scouring” steel plow in 1837. The shiny, polished surface of the steel worked beautifully to cut through the dirt much more quickly, with fewer interruptions, and Deere pivoted to a new business. Over 180 years later, the company continues to find new ways to improve the farming process.

It all starts with data, and the agriculture community harnesses and extrapolates a lot of it. Far beyond almanacs, notebooks, and intellectual property passed down from generation to generation, data used by the larger farms drives every decision a farm makes. And when it comes to profitability, every data point can mean the difference between earnings and loss. John Deere, along with competitors like Caterpillar and Mahindra, are in the business of helping farms collect and analyze data with software tied to its farm equipment. 

[Related: John Deere finally agrees to let farmers fix their own equipment, but there’s a catch]

With the uptake of technology, farming communities in the US—and around the world, for that matter—are finding ways to make their products more efficient. John Deere has promised to deliver 20 or more electric and hybrid-electric construction equipment models by 2026. On top of that, the company is working to improve upon the autonomous software it uses to drive its massive vehicles, with the goal of ensuring that every one of the 10 trillion corn and soybean seeds can be planted, cared for, and harvested autonomously by 2030.

Farming goes electric

In February, John Deere launched its first all-electric zero-turn lawn mower. (That means it can rotate in place without requiring a wide circle.) Far from the noisy, often difficult-to-start mowers of your youth, the Z370R Electric ZTrak won’t wake the neighbors at 7:00 a.m. The electric mower features a USB-C charging port and an integrated, sealed battery that allows for mowing even in wet and rainy conditions.

On a larger scale, John Deere is pursuing all-electric equipment and has set ambitious emissions reduction targets. As such, the company has vowed to reduce its greenhouse gas emissions by 50 percent by 2030 from a 2021 baseline. To grow its EV business more quickly, it will benefit from its early-2022 purchase of Kreisel Electric, an Austrian company specializing in immersion-cooled battery technology. Kreisel’s batteries are built with a modular design, which makes them well suited to different sizes of farm equipment. Kreisel also promises extended battery life, efficiency in cold and hot climates, and mechanical stability.

Even with a brand-new battery division, however, John Deere is not bullishly pushing into EV and autonomous territory. It still offers lower-tech options for farmers who aren’t ready to go down that path. After all, farm equipment can last for many years and tossing new technology into an uninterested or unwilling operation is not the best route to adoption. Instead, the company actively seeks out farmers willing to try out new products and software to see how it works in the real world. (To be clear, the farms pay for the use of the machines and John Deere offers support.)

“If it doesn’t deliver value to the farm, it’s not really useful to the farmer,” Hindman says.

See and Spray, launched last year, is a product that John Deere acquired from Blue River Technology. The software uses artificial intelligence and machine learning to recognize and distinguish crop plants from weeds. It’s programmed to “read” the field and only spray the unwanted plants, which saves farmers money by avoiding wasted product. See and Spray uses an auto-leveling carbon fiber boom and dual nozzles that can deliver two different chemicals in a single pass.

john deere see and spray tech
Kristin Shaw

Another new technology, ExactShot, reduces the amount of starter fertilizer needed during planting by more than 60 percent, the company says. This product uses a combination of sensors and robotics to spritz each seed as it’s planted versus spraying the whole row; once again, that saves farmers an immense amount of money and supplies.

Right to Repair brings victory

Just one machine designed for farmland can cost hundreds of thousands of dollars. Historically, if equipment broke down, farmers had to call in the issue and wait for a technician from John Deere or an authorized repair shop. Many farms are located far from city centers, which means a quick fix isn’t in the cards. That could be frustrating for a farmer at any time, particularly in the middle of a hectic planting or harvest season. 

At the beginning of this year, John Deere and the American Farm Bureau Federation signed a memorandum of understanding stating that farmers and independent repair shops can gain access to John Deere’s software, manuals, and other information needed to service their equipment. This issue has been a point of contention for farmers, and a new law in Colorado establishes the right to repair in that state, starting January 1 of next year. 

However, that comes with a set of risks, according to John Deere. The company says its equipment “doesn’t fit in your pocket like a cell phone or come with a handful of components; our combines can weigh more than 15 tons and are manufactured with over 18,500 parts.”

In a statement to DTN, a representative from John Deere said, “[The company] supports a customer’s decision to repair their own products, utilize an independent repair service or have repairs completed by an authorized dealer. John Deere additionally provides manuals, parts and diagnostic tools to facilitate maintenance and repairs. We feel strongly that the legislation in Colorado is unnecessary and will carry unintended consequences that negatively impact our customers.”

The company warns that modifying the software of heavy machinery could “override safety controls and put people at risk,” and that it creates risks related to safe operation of the machine, plus emissions compliance, data security, and more. There’s a tricky balance between giving farmers control over their investments and potentially putting those same farmers, or anyone in the path of the machinery, in peril if the software is altered in a way that causes a failure of some kind. Of course, that’s true for any piece of machinery, even a car. 

[Related: John Deere tractors are getting the jailbreak treatment from hackers]

Farming machinery has come a long way from that first saw blade plow John Deere built in 1837. Today, with machine learning, the equipment can detect buildup and adjust the depth on its own without stopping the process. Even in autonomous mode, a tractor can measure wheel slip and speed, torque and tire pressure, and that helps farmers do more in less time. 

In the life cycle of farming, technology will make a big difference in reducing waste and emissions and offering a better quality of life. Watching the equipment in action on John Deere’s demo farm in Texas, it’s clear that there are more bits and bytes on those machines than anyone might imagine.

The post How John Deere’s tech evolved from 19th-century plows to AI and autonomy appeared first on Popular Science.


]]>
How fast is supersonic flight? Fast enough to bring the booms. https://www.popsci.com/technology/how-fast-is-supersonic-flight/ Mon, 01 May 2023 22:00:00 +0000 https://www.popsci.com/?p=538001
shock waves from supersonic jet
This striking picture is a composite image showing a T-38 flying at supersonic speeds and the resulting shock waves forming off the aircraft. The process involves a technique called "schlieren visualization," according to NASA. JT Heineck / NASA

Aircraft that can travel faster than the speed of sound have evolved since 1947, even if the physics haven't changed.

The post How fast is supersonic flight? Fast enough to bring the booms. appeared first on Popular Science.

]]>

To fly at supersonic speeds is to punch through an invisible threshold in the sky. Rocketing through the air at a rate faster than sound waves can travel through it means surpassing a specific airspeed, but that exact airspeed varies. On Mars, the speed of sound is different from the speed of sound on Earth. And on Earth, the speed of sound varies depending on the temperature of the air an aircraft is traveling through. 

Breaking the so-called sound barrier in 1947 made Chuck Yeager famous. But today, if a person in a military jet flies faster than the speed of sound, it’s not a significant or even noticeable moment, at least from the perspective of the occupants of the aircraft. “Man, in the airplane you feel nothing,” says Jessica Peterson, a flight test engineer for the US Air Force’s Test Pilot School at Edwards Air Force Base in California. People on the ground may beg to differ, depending on how close they are to the plane. 

Here’s what to know about the speed of supersonic flight, a type of travel that’s been inaccessible to civilians who want to experience it in an aircraft ever since the Concorde stopped flying in 2003. 

shock waves coming from supersonic jets
More shockwave visualizations from NASA involving two T-38 aircraft in a composite image. JT Heineck / NASA

Ripples in the water, shockwaves in the air 

Traveling at supersonic speed involves cruising “faster than the sound waves can move out of the way,” says Edward Haering, an aerospace engineer at NASA’s Armstrong Flight Research Center who has been researching sonic booms since the 1990s.

One way to think about the topic is to picture a boat in the water. “If you’re in a rowboat, sitting on a lake, not moving, there might be some ripples that come out, but you’re not going any faster than the ripples are,” he says. “But if you’re in a motorboat or a sailboat, you’ll start to see a V-wake coming off the nose of your boat, because you’re going faster than those ripples can get out of the way.” That’s like a plane flying faster than the speed of sound.

But, he adds, a supersonic plane pushes through those ripples in three-dimensional space. “You have a cone of these disturbances that you’re pushing through,” he says. 

The temperature of the air determines how fast sound waves move through it. In a zone of the atmosphere on Earth between about 36,000 feet up to around 65,600 feet, the temperature is consistent enough that the speed of sound theoretically stays about the same. And in that zone, on a typical day, the speed of sound is about 660 mph. That’s also referred to as Mach 1. Mach 2, or twice the speed of sound, would be about 1,320 mph in that altitude range. However, since a real-world day will likely be different from what’s considered standard, your actual speed when attempting to fly supersonic may vary.

[Related: How high do planes fly? It depends on if they’re going east or west.]

If you wanted to fly a plane at supersonic speeds at lower altitudes, the speed of sound is faster in that warmer air. At 10,000 feet, supersonic flight begins at 735 mph, NASA says. The thicker air takes more work to fly through at those speeds, though.
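Those figures follow from the standard formula for the speed of sound in an ideal gas, a = √(γRT). A minimal sketch that reproduces the article’s numbers, using textbook constants for air and standard-atmosphere temperatures (the function name is ours):

```python
import math

GAMMA = 1.4     # ratio of specific heats for air
R = 287.05      # specific gas constant for dry air, J/(kg*K)

def speed_of_sound_mph(temp_kelvin):
    """Speed of sound a = sqrt(gamma * R * T), converted from m/s to mph."""
    a_ms = math.sqrt(GAMMA * R * temp_kelvin)
    return a_ms * 2.23694  # m/s -> mph

# Standard temperature in the ~36,000-65,600 ft band: -56.5 C = 216.65 K
print(round(speed_of_sound_mph(216.65)))   # 660 (Mach 1 at cruise altitudes)

# Standard-atmosphere temperature at 10,000 ft: ~268.3 K
print(round(speed_of_sound_mph(268.34)))   # 735
```

Note that only temperature appears in the formula: the denser air at low altitude doesn’t change the speed of sound, it just makes pushing through at that speed harder.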

For the record books: the first supersonic flight

Chuck Yeager became the first documented person to fly at supersonic speeds on October 14, 1947. He recalled in his autobiography, Yeager, that he was at 42,000 feet flying at 0.96 Mach on that autumn day. “I noted that the faster I got, the smoother the ride,” he wrote. 

“Suddenly the Mach needle began to fluctuate. It went up to .965 Mach—then tipped right off the scale,” he recalled. “I thought I was seeing things! We were flying supersonic!” He learned afterwards that he had been going 700 mph, or 1.07 Mach. 

Over the radio, from below, Yeager wrote that people in a “tracking van interrupted to report that they heard what sounded like a distant rumble of thunder: my sonic boom!” 

illustration of the shock waves coming off the x-59 nasa plane
A NASA illustration visualizes how shock waves may form off the X-59, a plane that NASA is developing that has not yet flown. NASA

Why don’t we hear sonic booms anymore?

Supersonic flight causes those loud sonic booms for those below. That’s why the FAA banned supersonic civilian flight above the US and near its coasts. As NASA notes, this prohibition formally turned 50 years old in April 2023, and before it existed, people understandably did not like hearing sonic booms. In the 1950s and 60s, the space agency says, people in “Atlanta, Chicago, Dallas, Denver, Los Angeles, and Minneapolis, among others, all were exposed to sonic booms from military fighter jets and bombers flying overhead at high altitude.” And in 1968, one specific incident in Colorado, at the Air Force Academy, was especially destructive. The event happened on May 31, when a “fighter jet broke the sound barrier flying 50 feet over the school grounds,” NASA reports. “The sonic boom blew out 200 windows on the side of the iconic Air Force Chapel and injured a dozen people.”

Sonic booms happen thanks to shock waves forming off different features on the aircraft. For example, the canopy of a fighter jet, or the inlet for its engine, can produce them. The problem occurs because of the way those various shock waves join up, coalescing into two: one from the front of the plane, and one from the rear. “When they combine, they just get higher and higher pressure,” says Haering. People on the ground will detect a “boom, boom,” he says. 

Interestingly, the length of the aircraft matters in this case, affecting how far apart those booms are in time. The space shuttle, for example, measured more than 100 feet long. In that case, people would notice a “boom… boom,” Haering says. “And a very short plane, it’s booboom. And if it’s really short, and really far away, sometimes the time between those two booms [is] so short, you can’t really tell that there’s two distinct booms, so you just hear boom.” 
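A rough way to estimate that spacing is to divide the aircraft’s length by its speed, since the bow and tail shocks travel with the plane. A sketch under that assumption (the lengths below are approximate, and this ignores how the atmosphere stretches the pressure signature on its way down):

```python
def boom_separation_s(length_ft, speed_mph):
    """Rough time gap between bow and tail shocks: aircraft length / speed."""
    speed_fps = speed_mph * 5280 / 3600  # mph -> feet per second
    return length_ft / speed_fps

print(round(boom_separation_s(122, 660), 3))  # shuttle-length craft: 0.126 s ("boom... boom")
print(round(boom_separation_s(46, 660), 3))   # T-38-length craft: 0.048 s ("booboom")
```

A tenth of a second is near the edge of what human hearing resolves as two events, which is why short planes at a distance register as a single boom.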

[Related: How does a jet engine work? By running hot enough to melt its own innards.]

The issue with these booms is leading NASA to develop a new experimental aircraft, along with Lockheed Martin, called the X-59. Its goal is to fly faster than the speed of sound, but in a quieter way than a typical supersonic plane would. Remarkably, instead of a canopy for the pilot to see the scene in front of them, the aviator will rely on an external vision system—a monitor on the inside that shows what’s in front of the plane. NASA said that the testing wrapped up in 2021 for this design, which helps keep the aircraft sleek. The ultimate goal is to manage any shock waves coming off that aircraft through its design. “On the X-59, from the tip of the nose to the back of the tail, everything is tailored to try to keep those shock waves separated,” Haering says. 

nasa x-59 being build
The X-59 being built. Lockheed Martin

NASA says they plan to fly it this year, with the goal of seeing how much noise it makes and how people react to its sound signature. The X-59 could make a noise that’s “a lot like if your neighbor across the street slams their car door,” Haering speculates. “If you’re engaged in conversation, you probably wouldn’t even notice it.” But actual flights will be the test of that hypothesis.

The X-59 has a goal of flying at Mach 1.4, at an altitude of around 55,000 feet. Translated into miles per hour, that rate is 924 mph. Then imagine that the aircraft has a tailwind, and its ground speed could surpass 1,000 mph. (Note that winds in the atmosphere will affect a plane’s ground speed—the speed the plane is moving compared to the ground below. A tailwind will make it faster and a headwind will make it slower.) 
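The ground-speed arithmetic in that example is simple: Mach number times the local speed of sound, plus any tailwind. A sketch, with the 660 mph speed of sound and the wind value as illustrative inputs:

```python
def ground_speed_mph(mach, local_speed_of_sound_mph, wind_mph=0):
    """Ground speed: Mach * local speed of sound, plus tailwind (negative = headwind)."""
    return mach * local_speed_of_sound_mph + wind_mph

print(round(ground_speed_mph(1.4, 660)))       # 924 mph in still air
print(round(ground_speed_mph(1.4, 660, 100)))  # 1024 mph with a 100 mph tailwind
```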

Supersonic corridors 

At Edwards Air Force Base in California, supersonic corridors permit pilots to fly at Mach 1 or faster above certain altitudes. In one corridor, the aircraft must be at 30,000 feet or higher. In another, the Black Mountain Supersonic Corridor, the aircraft can be as low as 500 feet. Remember, the speed to fly supersonic will be higher at a low altitude than it will be at high altitudes, and it will take more effort to push through the denser air.

supersonic corridors
This map depicts the supersonic corridors where military pilots are allowed to fly faster than the speed of sound near Edwards Air Force Base in California. US Air Force Test Pilot School

“From a flight-test perspective—so that’s what we do here at Edwards, and we’re focusing on testing the new aircraft, testing the new systems—we regularly go supersonic,” says Peterson, the flight test engineer at the US Air Force’s Test Pilot School. 

[Related: Let’s talk about how planes fly]

The fact that one of the supersonic corridors is over the base means that sonic booms are audible there, although the aircraft has to be above 30,000 feet. “We can boom the base, and we hear it all the time,” she adds. 

She notes that in a recent flight in a T-38, when she broke the sound barrier at 32,000 feet, her aircraft had a ground speed of 665 mph. But at 14,000 feet, she was supersonic at a ground speed of 734 mph.

But there’s a difference between flying at supersonic speeds in a test scenario and doing it for operational reasons. Corey Florendo, a pilot and instructor also at the US Air Force Test Pilot School, notes that he’d do it “only as often as I need to,” during a real-world mission.

“When I go supersonic, I’m using a lot of gas,” he adds. 

nasa x-59 supersonic plane
An illustration depicting what the X-59 could look like in flight. Lockheed Martin

Supersonic flight thus remains available to the military in certain scenarios when they’re willing to burn the fuel, but not so for regular travelers. A Boeing 787, for example, is designed to cruise at 85 percent of the speed of sound. However, one company, called Boom Supersonic, aims to bring that type of flight back for commercial travel; their aircraft, which they call Overture, could fly in tests in 2027. You may not want to hold your breath. 

Joe Jewell, an associate professor at Purdue University’s School of Aeronautics and Astronautics, reflects that supersonic flight still has a “mystique” to it. 

“It’s still kind of a rare and special thing because the challenges that we collectively referred to as the sound barrier still are there, physically,” Jewell says. Pressure waves still accrue in front of the aircraft as it pushes through the air. “It’s still there, just the same as it was in 1947, we just know how to deal with it now.”

In the video below, watch an F-16 overtake a T-38; both aircraft are flying at supersonic speeds, and a subtle rocking motion is the only indication that shock waves are interacting with the aircraft. Courtesy Jessica Peterson and the US Air Force Test Pilot School.

The post How fast is supersonic flight? Fast enough to bring the booms. appeared first on Popular Science.


]]>
Seals provided inspiration for a new waddling robot https://www.popsci.com/technology/seal-soft-robot/ Mon, 01 May 2023 16:00:00 +0000 https://www.popsci.com/?p=537958
Two seals laying on shore near water.
Pinnipeds are getting robotic cousins. Deposit Photos

Fin-footed mammals, aka pinnipeds, provided the template for a new soft robot.

The post Seals provided inspiration for a new waddling robot appeared first on Popular Science.

]]>

It might be difficult to see at first, but if you squint just right, you can tell the latest animal-inspired robot owes its ungainly waddle to seals. Researchers at Chicago’s DePaul University looked at the movements of the aquatic mammal and its relatives for their new robot prototype—and while it may look a bit silly, the advances could one day help in extremely dire situations.

In their paper’s abstract, the team writes that they aimed to build a robot featuring “improved degrees of freedom, gait trajectory diversity, limb dexterity, and payload capabilities.” To do this, they studied the movements of pinnipeds—the technical term for fin-footed mammals such as seals, walruses, and sea lions—as an alternative to existing quadrupedal and soft-limbed robots. Their final result is a simplified, three-limbed device that propels itself via undulating motions and is supported by a rigid “backbone” like those of its mammalian inspirations.

As also detailed last week via TechXplore, the robot’s soft limbs are each roughly 9.5 inches long by 1.5 inches wide, and encased in a protective outer casing. Each arm is driven by pneumatic actuators filled with liquid to obtain varying degrees of stiffness. Changing the limbs’ rigidness controls the robot’s directional abilities, something researchers say is generally missing from similar crawling machines.

[Related: Robot jellyfish swarms could soon help clean the oceans of plastic.]

Interestingly, the team realized that their pinniped product actually moves faster when walking “backwards.” While in reverse, the robot waddled at a solid 6.5 inches per second, compared to just 4.5 inches per second during forward motion. “Pinnipeds use peristaltic body movement to propel forward since the bulk of the body weight is distributed towards the back,” explains the team in its research paper. “But, the proposed soft robot design has a symmetric weight distribution and thus it is difficult to maintain stability while propelling forward. As a consequence, the robot shows limited frontal movements. Conversely, when propelling backward, the torque imbalance is countered by the body.”

But despite the reversal and slightly ungainly stride, the DePaul University team believes soft robots such as their seal-inspired creation could one day come in handy for dangerous tasks, including nuclear site inspections, search and rescue efforts, and even future planetary explorations. It might be one small step for robots, but it may prove one giant waddle for pinniped propulsion tech.


]]>
How a quantum computer tackles a surprisingly difficult airport problem https://www.popsci.com/technology/quantum-algorithm-flight-gate-assignment/ Mon, 01 May 2023 11:00:00 +0000 https://www.popsci.com/?p=537718
airplanes at different gates in an airport
Is there an optimal way to assign flights to gates?. Chris Leipelt / Unsplash

Here’s what a quantum algorithm for a real-world problem looks like.

The post How a quantum computer tackles a surprisingly difficult airport problem appeared first on Popular Science.

]]>

At first glance, quantum computers seem like machines that will only exist in the far-off future. And in a way, they are. For now, the processing power of these devices is limited by the number of qubits they contain, which are the quantum equivalent of the 0-or-1 bits in classical computers. 

The engineers behind the most ambitious quantum projects say they can orchestrate hundreds of qubits, but because these qubits have unique yet ephemeral quantum properties, like superposition and entanglement, keeping them in the ideal state is a tough task. Taken together, this means that the problems researchers have touted as better suited to quantum computers than to classical machines have still not been fully realized.

Scientists say that in general, quantum machines could be better at solving problems involving optimization operations, nature simulations, and searching through unstructured databases. But without real-world applications, it can all seem very abstract. 

The flight gate assignment challenge

A group of researchers working with IBM have been crafting and testing special algorithms for specific problems that work with quantum circuits. That means the broad category of optimization tasks becomes a more specific problem, like finding the best gate to put connecting flights in at an airport. 

[Related: In photos: Journey to the center of a quantum computer]

Here are the requirements for the problem: The computer needs to find the optimal gates for incoming and connecting flights in an airport in order to minimize passenger travel time. Travel time to and from gates, the number of passengers, and whether or not a flight occupies a gate all become variables in a complex series of math equations. 

Essentially, each qubit represents either the gate or the flight. So the number of qubits you need to solve this problem is the number of gates multiplied by the number of flights, explains Karl Jansen, a research physicist at DESY (a particle physics research center in Germany) and an author of the preprint paper on the algorithm. 
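As a toy illustration of that encoding (hypothetical numbers, and a brute-force classical search rather than the team's quantum algorithm), each flight-gate pair gets one binary variable, so the variable count is flights times gates:

```python
from itertools import product

# Toy flight-gate assignment: 2 flights, 3 gates (hypothetical numbers).
# One binary variable per (flight, gate) pair, so 2 * 3 = 6 variables --
# the same count as qubits in the encoding described above.
flights, gates = 2, 3
walk_time = [[3, 1, 4],   # walk_time[f][g]: minutes if flight f uses gate g
             [2, 5, 1]]
passengers = [120, 80]

best_cost, best_assignment = None, None
for assignment in product(range(gates), repeat=flights):
    if len(set(assignment)) < flights:   # constraint: at most one flight per gate
        continue
    cost = sum(passengers[f] * walk_time[f][assignment[f]] for f in range(flights))
    if best_cost is None or cost < best_cost:
        best_cost, best_assignment = cost, assignment

print(best_assignment, best_cost)  # → (1, 2) 200
```

Brute force like this scales exponentially with problem size, which is exactly why the researchers are exploring quantum encodings of the same search.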

How a ‘Hamiltonian’ is involved

In order to perform the operation on a quantum device, first they have to integrate all of this information into something called a “Hamiltonian,” which is a quantum mechanical function that measures the total energy of a system. The system, in this case, would be the connections in the airport. “If you find the minimal energy, then this corresponds to the optimal path through the airport for all passengers to find the optimal connections,” says Jansen. “This energy function, this Hamiltonian, is horribly complicated and scales exponentially. There is no way to do this on a classical computer. However, you can translate this Hamiltonian into a quantum circuit.”
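One common way to write such an energy function for assignment problems (a generic QUBO-style sketch, not the paper's exact Hamiltonian) folds the travel-time costs and the one-gate-per-flight constraint into a single expression to be minimized:

```latex
H \;=\; \sum_{f,g} c_{fg}\,x_{fg} \;+\; \lambda \sum_{f} \Bigl(\sum_{g} x_{fg} - 1\Bigr)^{2},
\qquad x_{fg} \in \{0,1\}
```

Here the coefficient on each binary variable aggregates passenger travel times for putting a given flight at a given gate, and the penalty term makes any assignment that gives a flight zero or multiple gates energetically costly, so the minimum of H encodes the optimal assignment.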

[Related: IBM’s latest quantum chip breaks the elusive 100-qubit barrier]

In their study, Jansen and his colleagues only worked with around 20 qubits, which is not much and doesn’t offer an edge over the best classical algorithms directed at the problem today. At the moment it doesn’t really make sense to compare the solution time or accuracy to a classical calculation. “For this it would need 100 or 200 functioning qubits,” he notes. “What we want to know is if I make my problem size larger and larger, so I go to a larger and larger airport, is there at some point an advantage to using quantum mechanical principles to solve the problem.” 

Superposition, entanglement and interference

It’s important to note that controlling these machines means that the best minds across a wide number of industries, from applied math to chemistry to physics, must work together to design clever quantum algorithms, or instructions that tell the quantum computer what operations to perform and how. These algorithms are by nature different from classical algorithms. They can involve higher-level math, like linear algebra and matrices. “The fundamental descriptions of the systems are different,” says Jeannette Garcia, senior research manager for the quantum applications and software team at IBM Research. “Namely, we have superposition and entanglement and this concept of interference.”

Although it is still to be proven, many researchers think that by using superposition, they can pack more information into the problem, and with entanglement, they could find more correlations, such as if a certain flight is correlated with another flight and another gate because they’re both domestic.

Every answer that a quantum computer gives is basically a probability, Garcia explains. A lot of work goes into formulating ways to combine answers together in a creative way to come up with the most probable answer over many repeating trials. That is what interference is—adding up or subtracting waveforms. The entanglement piece in particular is promising for chemistry but also for machine learning. “In machine learning datasets, you might have data that’s super correlated, so in other words, they are not independent from each other,” Garcia says. “This is what entanglement is. We can put that in and program that into what we’re studying as a way to conserve resources in the end, and computational power.” 
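A minimal sketch of that "most probable answer over many trials" step, with made-up measurement counts, looks like this in Python:

```python
from collections import Counter

# Hypothetical measurement counts from many repeated runs ("shots") of a
# quantum circuit: each key is a measured bitstring, each value how often
# it appeared. Interference is what boosts the amplitude -- and hence the
# sampling probability -- of good answers.
counts = Counter({"0110": 512, "1001": 310, "0000": 130, "1111": 72})

total = sum(counts.values())
best, freq = counts.most_common(1)[0]
print(best, freq / total)  # most frequent answer and its estimated probability
```

On real hardware, the sampled distribution also includes noise, which is part of why so many repetitions are needed before an answer can be trusted.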

And while the new algorithm from Jansen’s team can’t really be used to make airports more efficient today, it can be translated to a variety of other problems. “Once we found really good ways of solving the flight gate assignment problem, we transferred the algorithms and improvements to these problems we are looking at for particle tracking, both at CERN but also at DESY,” Jansen said. 

In addition, you can apply the same formulation to other logistics problems such as optimizing bus routes or traffic light placements in a city. You just have to modify the information of the problem and what numbers you put into the coefficients and the binary variables. “For me, this was a good attempt to find a solution for the flight gate assignment problem,” Jansen says. “Now I’m looking at other instances where I can use this mathematical formulation to solve other problems.”


]]>
Name a better duo than NASA’s hard-working Mars rover and helicopter https://www.popsci.com/science/nasa-mars-rover-helicopter-duo/ Fri, 28 Apr 2023 12:30:00 +0000 https://www.popsci.com/?p=537408
NASA Ingenuity helicopter lost in a Mars crater in a photo taken by Perseverance rover
Perseverance snapped Ingenuity on its 50th flight on Mars with this "Where's Waldo"-style pic. Hint: Look for the helicopter at center left. NASA/JPL-Caltech/ASU/MSSS

In uncharted Martian territory, Ingenuity has proven to be a trusty sidekick to Perseverance and engineers at home.

The post Name a better duo than NASA’s hard-working Mars rover and helicopter appeared first on Popular Science.

]]>

On April 19, 2021, a little more than a century after the Wright Brothers’ first test flight on Earth, humans managed to zoom a helicopter around on another planet. The four-pound aircraft, known as Ingenuity, is part of NASA’s Mars2020 exploration program, along with the Perseverance rover.

The dynamic duo made history again this month, as Ingenuity celebrated its landmark 50th flight. The small aircraft was built to fly only five times—as a demonstration of avionics customized for the thin Mars air, not a key part of the science mission—but it has surpassed that goal 10 times over with no signs of slowing down.

[Related: InSight says goodbye with what may be its last wistful image of Mars]

“Ingenuity has changed the way that we think about Mars exploration,” says Håvard Grip, NASA engineer and former chief pilot of Ingenuity. Although the helicopter started as a tech demo, proving that humans could make an aircraft capable of navigating the thin Martian atmosphere, it has become a useful partner to Percy. Ingenuity can zip up to 39 feet into the sky, scout the landscape, and inform the rover’s next moves through the Red Planet’s rocky terrain.

In the past months, Perseverance has been wrapping up its main science mission in Jezero Crater, a dried-up delta that could give astronomers insight on Mars’ possibly watery past and ancient microbial life. Ingenuity has been leap-frogging along with the rover, taking aerial shots of its robotic bestie and getting glimpses into the path ahead. This recon helps scientists determine their priorities for exploration, and helps NASA’s planning team prepare for unexpected hazards and terrain.

Aerial map showing Perseverance and Ingenuity route across Jezero Crater during NASA Mars 2020 mission
This animation shows the progress of NASA’s Perseverance Mars rover and its Ingenuity Mars Helicopter as they make the climb up Jezero Crater’s delta toward ancient river deposits. NASA/JPL-Caltech

Unfortunately, the narrow channels in the delta are causing difficulties for the helicopter’s communications with the rover, forcing them to stay close together for fear of being irreparably separated. Ingenuity also can’t fall behind the rover, because its limited stamina (up to 3-minute-long flights at a time) means it might not be able to catch up. Over the past month, though, the team shepherded the pair through a particularly treacherous stretch of the drive, and they’re still going strong, even setting flight speed and frequency records along the way. Meanwhile, Percy has been investigating some crater walls and funky-colored rocks whose origins scientists are still trying to determine.

Ingenuity has certainly proven the value of helicopters in planetary exploration, and each flight adds to the pile of data engineers have at their disposal for planning the next generation of aerial robots. “When we look ahead to potential future missions, helicopters are an inevitable part of the equation,” says Grip.

What exactly comes next for Ingenuity itself, though, is anyone’s guess. “Every sol [Martian day] that Ingenuity survives on Mars is one step further into uncharted territory,” Grip adds. And while the team will certainly feel a loss when the helicopter finally goes out, they’ve already completed their main mission of demonstrating that the avionics can work. All the extra scouting and data collection is a reward for building something so sturdy.

[Related: Two NASA missions combined forces to analyze a new kind of marsquake]

They’re now continuing to push the craft to its limits, testing out how far they can take this technology. For those at home who want to follow along, the mission provides flight previews on Ingenuity’s status updates page. 

“It may all be over tomorrow,” says Grip. “But one thing we’ve learned over the last two years is not to underestimate Ingenuity’s ability to hang on.” 


]]>
New AI-based tsunami warning software could help save lives https://www.popsci.com/technology/ai-tsunami-detection-system/ Wed, 26 Apr 2023 19:17:46 +0000 https://www.popsci.com/?p=537034
tsunami warning sign in Israel
New research aims to give people more warning time before a tsunami strikes. Deposit Photos

Researchers hope that new software could lead to tsunami alerts that are faster and more accurate.

The post New AI-based tsunami warning software could help save lives appeared first on Popular Science.

]]>

To mitigate the death and disaster brought by tsunamis, people on the coasts need the most time possible to evacuate. Hundred-foot waves traveling as fast as a car are forces of nature that cannot be stopped—the only approach is to get out of the way. To tackle this problem, researchers at Cardiff University in Wales have developed new software that can analyze real-time data from hydrophones, ocean buoys, and seismographs in seconds. The researchers hope that their system can be integrated into existing technology, saying that with it, monitoring centers could issue warnings faster and with more accuracy. 

Their research was published in Physics of Fluids on April 25. 

“Tsunamis can be highly destructive events causing huge loss of life and devastating coastal areas, resulting in significant social and economic impacts as whole infrastructures are wiped out,” said co-author Usama Kadri, a researcher and lecturer at Cardiff University, in a statement.

Tsunamis are a rare but constant threat, highlighting the need for a reliable warning system. The most infamous tsunami occurred on December 26, 2004, after a 9.1-magnitude earthquake struck off the coast of Indonesia. The tsunami inundated the coasts of more than a dozen countries over the seven hours it lasted, including India, Indonesia, Malaysia, Maldives, Myanmar, Sri Lanka, Seychelles, Thailand and Somalia. This was the deadliest and most devastating tsunami in recorded history, killing at least 225,000 people across the countries in its wake. 

Current warning systems utilize seismic waves generated by undersea earthquakes. Data from seismographs and buoys are then transmitted to control centers that can issue a tsunami warning, setting off sirens and other local warnings. Earthquakes of 7.5 magnitude or above can generate a tsunami, though not all undersea earthquakes do, causing an occasional false alarm. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

These existing tsunami monitors also verify an oncoming wave with ocean buoys that outline the coasts of continents. Tsunamis travel at an average speed of 500 miles per hour, the speed of a jet plane, in the open ocean. When approaching a coastline, they slow down to the speed of a car, from 30 to 50 miles per hour. Once the buoys are triggered, warnings go out, but by the time the waves reach the buoys, people have a few hours, at most, to evacuate.

The new system uses two algorithms in tandem to assess tsunamis. An AI model assesses the earthquake’s magnitude and type, while an analytical model assesses the resulting tsunami’s size and direction.

Once Kadri and his colleagues’ software receives the necessary data, it can predict the tsunami’s source, size, and coasts of impact in about 17 seconds. 

The AI software can also differentiate between types of earthquakes and their likelihood of causing tsunamis, a common problem faced by current systems. Vertical earthquakes that raise or lower the ocean floor are much more likely to cause tsunamis, whereas those with a horizontal tectonic slip do not—though they can produce similar seismic activity, leading to false alarms. 

“So, knowing the slip type at the early stages of the assessment can reduce false alarms and complement and enhance the reliability of the warning systems through independent cross-validation,” said co-author Bernabe Gomez Perez, a researcher currently at the University of California, Los Angeles, in a press release.

Over 80 percent of tsunamis are caused by earthquakes, but they can also be caused by landslides (often from earthquakes), volcanic eruptions, extreme weather, and much more rarely, meteorite impacts.

This new system can also predict tsunamis not generated by earthquakes by monitoring vertical motion of the water.

The researchers behind this work trained the program with historical data from over 200 earthquakes, using seismic waves to assess the quake’s epicenter and acoustic-gravity waves to determine the size and scale of tsunamis. Acoustic-gravity waves are sound waves that move through the ocean at much faster speeds than the ocean waves themselves, offering a faster method of prediction. 
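A rough back-of-the-envelope comparison shows why that head start matters. The tsunami speed comes from the article, the roughly 1,500 m/s sound speed in seawater is a standard figure, and the distance here is hypothetical:

```python
# How much earlier does an acoustic-gravity wave arrive than the tsunami itself?
# Open-ocean tsunami: ~500 mph (~220 m/s); sound in seawater: ~1,500 m/s.
distance_km = 1000       # hypothetical distance from quake to a coastline
tsunami_speed = 220      # m/s
acoustic_speed = 1500    # m/s

tsunami_eta = distance_km * 1000 / tsunami_speed / 60      # minutes
acoustic_eta = distance_km * 1000 / acoustic_speed / 60    # minutes
print(round(tsunami_eta), round(acoustic_eta))  # → 76 11
```

At that distance the acoustic signal arrives roughly an hour ahead of the wave, which is the window the prediction software is trying to exploit.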

Kadri says that the software is also user-friendly. Accessibility is a priority for Kadri and his colleague, Ali Abdolali at the National Oceanic and Atmospheric Administration (NOAA), as they continue to develop their software, which they have been jointly working on for the past decade.

By combining predictive software with current monitoring systems, the hope is that agencies could issue reliable alerts faster than ever before.

Kadri says that the system is far from perfect, but it is ready for integration and real-world testing. One warning center in Europe has already agreed to host the software in a trial period, and researchers are in communication with UNESCO’s Intergovernmental Oceanographic Commission.

“We want to integrate all the efforts together for something which can allow global protection,” he says. 


]]>
Cloud computing has its security weaknesses. Intel’s new chips could make it safer. https://www.popsci.com/technology/intel-chip-trust-domain-extensions/ Tue, 25 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=536626
a computer chip from Intel
Intel's new chip comes with verified security upgrades. Christian Wiediger / Unsplash

A new security feature called Trust Domain Extensions has undergone a months-long audit.

The post Cloud computing has its security weaknesses. Intel’s new chips could make it safer. appeared first on Popular Science.

]]>

Intel and Google Cloud have just released a joint report detailing a months-long audit of a new security feature on Intel’s latest server chips: Trust Domain Extensions (TDX). The report is a result of a collaboration between security researchers from Google Cloud Security and Project Zero, and Intel engineers. It led to a number of pre-release security improvements for Intel’s new CPUs.

TDX is a feature of Intel’s 4th-generation “Sapphire Rapids” Xeon processors, though it will be available on more chips in the future. It’s designed to enable Confidential Computing on cloud infrastructure. The idea is that important computations are encrypted and performed on hardware that’s isolated from the regular computing environment. This means that the cloud service operator can’t spy on the computations being done, and makes it harder for hackers and other bad actors to intercept, modify, or otherwise interfere with the code as it runs. It basically makes it safe for companies to use cloud computing providers like Google Cloud and Amazon Web Services for processing their most important data, instead of having to operate their own secure servers.

However, for organizations to rely on features like TDX, they need some way to know that they’re genuinely secure. As we’ve seen in the past with the likes of Meltdown and Spectre, vulnerabilities at the processor level are incredibly hard to detect and mitigate, and can allow bad actors an extraordinary degree of access to the system. A similar style of vulnerability in TDX, a supposedly secure processing environment, would be an absolute disaster for Intel, any cloud computing provider that used its Xeon chips, and their customers. That’s why Intel invited the Google security researchers to review TDX so closely. Google also collaborated with chipmaker AMD on a similar report last year.

According to Google Cloud’s blogpost announcing the report, “the primary goal of the security review was to provide assurances that the Intel TDX feature is secure, has no obvious defects, and works as expected so that it can be confidently used by both cloud customers and providers.” Secondarily, it was also an opportunity for Google to learn more about Intel TDX so they could better deploy it in their systems. 

While external security reviews—both solicited and unsolicited—are a common part of computer security, Google and Intel engineers collaborated much more closely for this report. They had regular meetings, used a shared issue tracker, and let the Intel engineers “provide deep technical information about the function of the Intel TDX components” and “resolve potential ambiguities in documentation and source code.”

The team looked for possible methods hackers could use to execute their own code inside the secure area, weaknesses in how data was encrypted, and issues with the debug and deployment facilities. 

In total, they uncovered 81 potential attack vectors and found ten confirmed security issues. All the problems were reported to Intel and were mitigated before these Xeon CPUs entered production. 

As well as allowing Google to perform the audit, Intel is open-sourcing the code so that other researchers can review it. According to the blogpost, this “helps Google Cloud’s customers and the industry as a whole to improve our security posture through transparency and openness of security implementations.”

All told, Google’s report concludes that the audit was a success since it met its initial goals and “was able to ensure significant security issues were resolved before the final release of Intel TDX.” While there were still some limits to the researchers’ access, they were still able to confirm that “the design and implementation of Intel TDX as deployed on the 4th gen Intel Xeon Scalable processors meets a high security bar.” 


]]>
Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness https://www.popsci.com/technology/fish-disco-arctic-ocean/ Mon, 24 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=536004
northern lights over the Arctic ocean
Northern lights over the Arctic ocean. Oliver Bergeron / Unsplash

It's one of the many tools they use to measure artificial light’s impact on the Arctic ocean's sunless world.

The post Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness appeared first on Popular Science.

]]>

During the winter, the Arctic doesn’t see a sunrise for months on end. Although completely immersed in darkness, life in the ocean goes on. Diurnal animals like humans would be disoriented by the lack of daylight, having been accustomed to regular cycles of day and night. 

But to scientists’ surprise, it seems that even the photosynthetic plankton—microorganisms that normally derive their energy from sunlight—have found a way through the endless night. These marine critters power the region’s ecosystem, through the winter and into the spring bloom. Even without the sun, daily patterns of animals migrating from the surface to the depths and back again (called the diel vertical migration) remain unchanged. 

However, scientists are concerned that artificial light could have a dramatic impact on this uniquely adapted ecosystem. The Arctic is warming fast, and the ice is getting thinner—that means there’s more ships, cruises, and coastal developments coming in, all of which can add light pollution to the underwater world. We know that artificial light is harmful to terrestrial animals and birds in flight. But its impact on ocean organisms is still poorly understood. 

A research team called Deep Impact is trying to close this knowledge gap, as reported in Nature earlier this month. Doing the work, though, is no easy feat. Mainly, there’s a bit of creativity involved in conducting experiments in the darkness—researchers need to understand what’s going on without changing the behaviors of the organisms. Any illumination, even from the research ship itself, can skew their observations. This means that the team has to make good use of a range of tools that allow them to “see” where the animals are and how they’re behaving, even without light. 

One such invention is a specially designed circular steel frame called a rosette, which contains a suite of optical and acoustic instruments. It is lowered into the water to survey how marine life is moving under the ship. During data collection, the ship will make one pass across an area of water without any light, followed by another pass with the deck lights on. 

[Related: Boaty McBoatface has been a very busy scientific explorer]

There are a range of different rosettes, made up of varying instrument combinations. One rosette, called Frankenstein, can measure light’s effect on where zooplankton and fish move to in the water column. Another, called Fish Disco, “emits sequences of multicolored flashes to measure how they affect the behavior of zooplankton,” according to Nature.

And of course, robots that can operate autonomously can come in handy for occasions like these. Similar robotic systems have already been deployed on other aquatic missions like exploring the ‘Doomsday glacier,’ scouring for environmental DNA, and listening for whales. In absence of cameras, they can use acoustic-based tech, like echosounders (a sonar system) to detect objects in the water. 

In fact, without the element of sight, sound becomes a key tool for perceiving without seeing. It’s how most critters in the ocean communicate with one another. And making sense of the sound becomes an important problem to solve. To that end, a few scientists on the team are trying to see if machine learning can be used to identify what’s in the water through the pattern of the sound frequencies they reflect. So far, an algorithm currently being tested has been able to discern two species of cod.
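As a purely schematic sketch of that kind of acoustic classification (entirely synthetic numbers; the team's actual model isn't described in detail), one simple approach compares an echo's strength across a few frequency bands to reference profiles for each species:

```python
import math

# Hypothetical mean echo strengths in 3 frequency bands for two species,
# standing in for the two cod species the algorithm has learned to discern.
reference = {
    "cod_A": [0.9, 0.4, 0.1],
    "cod_B": [0.2, 0.5, 0.8],
}

def classify(echo):
    # Nearest-profile rule: pick the species whose reference spectrum is
    # closest to the measured echo (Euclidean distance).
    return min(reference, key=lambda species: math.dist(echo, reference[species]))

print(classify([0.8, 0.5, 0.2]))  # → cod_A
```

A trained machine-learning model replaces these hand-picked profiles with patterns learned from labeled recordings, but the underlying idea of matching reflected frequency signatures is the same.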


]]>
Ancient Maya masons had a smart way to make plaster stronger https://www.popsci.com/science/ancient-maya-plaster/ Wed, 19 Apr 2023 18:16:42 +0000 https://www.popsci.com/?p=535272
Ancient Maya idol in Copán, Guatemala
The idols, pyramids, and dwellings in the ancient Maya city of Copán have lasted longer than a thousand years. DEA/V. Giannella/Contributor via Getty Images

Up close, the Mayas' timeless recipe from Copán looks similar to mother-of-pearl.

The post Ancient Maya masons had a smart way to make plaster stronger appeared first on Popular Science.

]]>

An ancient Maya city might seem an unlikely place for people to be experimenting with proprietary chemicals. But scientists think that’s exactly what happened at Copán, an archaeological complex nestled in a valley in the mountainous rainforests of what is now western Honduras.

By historians’ reckoning, Copán’s golden age began in 427 CE, when a king named Yax Kʼukʼ Moʼ came to the valley from the northwest. His dynasty built one of the jewels of the Maya world, but abandoned it by the 10th century, leaving its courts and plazas to the mercy of the jungle. More than 1,000 years later, Copán’s buildings have kept remarkably well, despite baking in the tropical sun and humidity for so long. 

The secret may lie in the plaster the Maya used to coat Copán’s walls and ceilings. New research suggests that sap from the bark of local trees, which Maya craftspeople mixed into their plaster, helped reinforce its structures. Whether by accident or by design, those Maya builders created a material not unlike mother-of-pearl, a natural component of mollusc shells.

“We finally unveiled the secret of ancient Maya masons,” says Carlos Rodríguez Navarro, a mineralogist at the University of Granada in Spain and the paper’s first author. Rodríguez Navarro and his colleagues published their work in the journal Science Advances today.

[Related: Scientists may have solved an old Puebloan mystery by strapping giant logs to their foreheads]

Plaster makers followed a fairly straightforward recipe. Start with carbonate rock, such as limestone; bake it at over 1,000 degrees Fahrenheit; mix in water with the resulting quicklime; then, set the concoction out to react with carbon dioxide from the air. The final product is what builders call lime plaster or lime mortar. 
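That recipe traces the classic lime cycle; in standard chemical notation, the three steps are:

```latex
\begin{align*}
\text{calcination:} \quad & \mathrm{CaCO_3} \xrightarrow{\ \text{heat}\ } \mathrm{CaO} + \mathrm{CO_2} \\
\text{slaking:}     \quad & \mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2} \\
\text{carbonation:} \quad & \mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
\end{align*}
```

The carbon dioxide reabsorbed in the final step is what slowly hardens the plaster back into the same mineral the makers started with.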

Civilizations across the world discovered this process, often independently. For example, Mesoamericans in Mexico and Central America learned how to do it by around 1,100 BCE. While ancient people found it useful for covering surfaces or holding together bricks, this basic lime plaster isn’t especially durable by modern standards.

Ancient Maya pyramid in Copán, Guatemala, in aerial photo
Copán, with its temples, squares, terraces and other characteristics, is an excellent representation of Classic Mayan civilization. Xin Yuewei/Xinhua via Getty Images

But, just as a dish might differ from town to town, lime plaster recipes varied from place to place. “Some of them perform better than others,” says Admir Masic, a materials scientist at the Massachusetts Institute of Technology who wasn’t part of the study. Maya lime plaster, experts agree, is one of the best.

Rodríguez Navarro and his colleagues wanted to learn why. They found their first clue when they examined brick-sized plaster chunks from Copán’s walls and floors with X-rays and electron microscopes. Inside some pieces, they found traces of organic materials like carbohydrates. 

That made them curious, Rodríguez Navarro says, because it seemed to confirm past archaeological and written records suggesting that ancient Maya masons mixed plant matter into their plaster. The other standard ingredients (lime and water) wouldn’t account for complex carbon chains.

To follow this lead, the authors decided to make the historic plaster themselves. They consulted living masons and Maya descendants near Copán. The locals referred them to the chukum and jiote trees that grow in the surrounding forests—specifically, the sap that came from the trees’ bark.

Jiote or gumbo-limbo tree in the Florida Everglades
Bursera simaruba, sometimes locally known as the jiote tree. Deposit Photos

The authors tested the sap’s reaction when mixed into the plaster. Not only did it toughen the material, it also made the plaster insoluble in water, which partly explains how Copán survived the local climate so well.

The microscopic structure of the plant-enhanced plaster is similar to nacre or mother-of-pearl: the iridescent substance that some molluscs create to coat their shells. We don’t fully understand how molluscs make nacre, but we know that it consists of crystal plates sandwiching elastic proteins. The combination toughens the sea creatures’ exteriors and reinforces them against weathering from waves.

A close study of the ancient plaster samples and the modern analog revealed that they also had layers of rocky calcite plates and organic sappy material, giving the materials the same kind of resilience as nacre. “They were able to reproduce what living organisms do,” says Rodríguez Navarro. 

“This is really exciting,” says Masic. “It looks like it is improving properties [of regular plaster].”

Now, Rodríguez Navarro and his colleagues are trying to answer another question: Could other civilizations that depended on masonry—from Iberia to Persia to China—have stumbled upon the same secret? We know, for instance, that Chinese lime-plaster-makers mixed in a sticky rice soup for added strength.

Plaster isn’t the only age-old material that scientists have reconstructed. Masic and his colleagues found that ancient Roman concrete has the ability to “self-heal.” More than two millennia ago, builders in the empire may have added quicklime to a rocky aggregate, creating microscopic structures within the material that help fill in pores and cracks when water seeps in.

[Related: Ancient architecture might be key to creating climate-resilient buildings]

If that property sounds useful, modern engineers think so too. There’s a blossoming field devoted to studying—and recreating—materials of the past. Standing structures at archaeological sites already prove these materials can withstand the test of time. As a bonus, ancient people tended to work with more sustainable methods and use less fuel than their industrial counterparts.

“The Maya paper…is another great example of this [scientific] approach,” Masic says.

Not that Maya plaster will replace the concrete that’s ubiquitous in the modern world—but scientists say it could have its uses in preserving and upgrading the masonry found in pre-industrial buildings. A touch of plant sap could add centuries to a structure’s lifespan.

The post Ancient Maya masons had a smart way to make plaster stronger appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This robot dog learned a new trick—balancing like a cat https://www.popsci.com/technology/robot-dog-balance-beam/ Wed, 19 Apr 2023 14:00:00 +0000 https://www.popsci.com/?p=535177
Just a step at a time.
Just a step at a time. Carnegie Mellon University

Without a tail or a bendy spine, no less.

The post This robot dog learned a new trick—balancing like a cat appeared first on Popular Science.

]]>

We’ve seen how a quadruped robot dog can dribble a ball, climb walls, run on sand, and open doors with its “paws.” The latest test isn’t that of motion, necessarily, but of balance. This time, researchers at Carnegie Mellon University’s Robotics Institute have found a way to make an off-the-shelf quadruped robot agile and stable enough to walk across a balance beam.

Even for humans, the balance beam is quite a feat to conquer—something that leaves even gymnasts nervous. “It’s the great equalizer,” Michigan women’s gymnastics coach Beverly Plocki told the Chicago Tribune in 2016. “No other event requires the same mental focus. You stumble on the floor, it’s a minor deduction. The beam is the event of perfection. No room for error.”

[Related: A new tail accessory propels this robot dog across streams.]

But a robot dog’s legs aren’t exactly coordinated. With three feet on the ground, the robots are generally fine; reduce that to one or two feet and they’re in trouble. “With current control methods, a quadruped robot’s body and legs are decoupled and don’t speak to one another to coordinate their movements,” Zachary Manchester, an assistant professor in the Robotics Institute and head of the Robotic Exploration Lab, said in a statement. “So how can we improve their balance?”

CMU’s scientists managed to get a robot to daintily walk a narrow beam—the first time this has been done, the researchers claim—by leveraging hardware often used on spacecraft: a reaction wheel actuator. The system helps the robot balance wherever its feet are, standing in for the tail or flexible spine that helps actual four-legged animals catch their balance. 

[Related: This bumblebee-inspired bot can bounce back after injuring a wing.]

“You basically have a big flywheel with a motor attached,” said Manchester. “If you spin the heavy flywheel one way, it makes the satellite spin the other way. Now take that and put it on the body of a quadruped robot.”
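The flywheel trick Manchester describes is conservation of angular momentum: with no outside torque acting on the system, spinning the wheel one way forces the body to rotate the other way. A minimal sketch of that relationship (the inertia and spin-rate numbers are illustrative, not the robot’s real specs):

```python
# Conservation of angular momentum for a body carrying a reaction wheel:
# with no external torque, I_body * w_body + I_wheel * w_wheel stays constant.
def body_rate(i_body: float, i_wheel: float, wheel_rate: float) -> float:
    """Angular rate induced on the body by spinning the wheel, starting at rest."""
    return -(i_wheel / i_body) * wheel_rate

# Hypothetical numbers: a small 0.05 kg*m^2 wheel spun to 200 rad/s on a
# 0.5 kg*m^2 body makes the body counter-rotate at 20 rad/s.
print(body_rate(i_body=0.5, i_wheel=0.05, wheel_rate=200.0))  # -> -20.0
```

Mounting one wheel per axis, as the CMU team did for pitch and roll, gives the controller a torque source on each axis that works no matter where the feet are.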

The team mounted two reaction wheel actuators on the pitch and roll axes of a commercial Unitree A1 robot, letting the little bot balance itself no matter where its feet were. Then they ran two dexterity tests, the first dropping the robot upside down from about half a meter in the air. Like a cat, it was able to flip itself over and land on its feet. 

Second came the balance beam test: the robot walked along a six-centimeter-wide beam with ballerina-like gracefulness. That skill could come in handy in the future, not just for entertainment value, but for maneuvering through tricky scenarios such as search-and-rescue, a frequent goal of development across all sorts of robots. The team will show off their latest endeavor at the 2023 International Conference on Robotics and Automation this summer in London.

The post This robot dog learned a new trick—balancing like a cat appeared first on Popular Science.

]]>
Meet xenobots, tiny machines made out of living parts https://www.popsci.com/technology/xenobots/ Mon, 17 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=534352
A xenobot, or a living robot, in culture, under a microscope.
Xenobots can work together to gather particulate matter into a pile. Douglas Blackiston and Sam Kriegman

The starting ingredient for these bio-robots: frog cells.

The post Meet xenobots, tiny machines made out of living parts appeared first on Popular Science.

]]>

You may or may not have heard of xenobots, a kind of Frankenfrog creation that involves researchers turning frog embryo cells into tiny bio-machines that can move around, push or carry objects, and work together. These ephemeral beings were first made by a team of scientists from Tufts University and the University of Vermont in 2020. 

The goal behind building these “bots” was to understand how cells communicate with one another. Here’s a breakdown of the hard facts behind how xenobots actually work, and what they are currently used for. 

What are xenobots?

A “living robot” can sound like a scary sci-fi term, but xenobots are nothing like the sentient androids you may have seen on screen.

“At the most basic level, this is a platform or way to build with cells and tissues, the way we can build robots out of mechanical components,” says Douglas Blackiston, a senior scientist at Tufts University. “You can almost think of it as Legos, where you can combine different Legos together, and with the same set of blocks you can make a bunch of different things.” 

Biology photo
Xenobots are tiny. Here they are against a dollar bill for size. Douglas Blackiston and Sam Kriegman

But why would someone want to build robots out of living components instead of traditional materials, like metal and plastic? One advantage is that having a bio-robot of sorts means that it is biodegradable. In environmental applications, that means if the robot breaks, it won’t contaminate the environment with garbage like metal, batteries, or plastic. Researchers can also program xenobots to fall apart naturally at the end of their lives. 

How do you make a xenobot?

The building blocks for xenobots come from the eggs laid by the female African clawed frog, which goes by the scientific name Xenopus laevis.

Just like with a traditional robot, they need other essential components: a power source, a motor or actuator for movement, and sensors. But with xenobots, all of these components are biological.

A xenobot’s energy comes from the yolk that’s a part of all amphibian eggs, which can power these machines for about two weeks with no added food. To get them to move, scientists can add biological “motors” like muscle or cardiac tissue. They can arrange the motors in different configurations to get the xenobots to move in certain directions or with a certain speed.  

“We use cardiac tissue because cardiac cells pulse at a regular rate, and that gives you sort of an inchworm type of movement if you build with it,” says Blackiston. “The other types of movement we get are from cilia. These are small hair-like structures that beat on the outside of different types of tissues. And this is a type of movement that dominates the microscopic world. If you take some pond water and look, most of what you see will move around with cilia.” 

Biology photo
Swimming xenobots with cilia covering their surface. Douglas Blackiston and Sam Kriegman

Scientists can also add components like optogenetic muscle tissues or chemical receptors to allow these biobots to respond to light or other stimuli in their environment. Depending on how the xenobots are programmed, they can autonomously navigate through their surroundings, or researchers can add stimuli to “drive” them around. 

“There’s also a number of photosynthetic algae that have light sensors that directly hook onto the motors, and that allows them to swim towards sunlight,” says Blackiston. “There’s been a lot of work on the genetic level to modify these to respond to different types of chemicals or different types of light sources and then to tie them to specific motors.”

[Related: Inside the lab that’s growing mushroom computers]

Even in their primitive form, xenobots can still convey some type of memory, or relay information back to the researchers about where they went and what they did. “You can pretty easily hook activation of these different sensors into fluorescent molecules that either turn on or change color when they’re activated,” Blackiston explains. For example, when the bots swim through a blue light, they might change color from green to red permanently. As they move through mazes with blue lights in certain parts of it, they will glow different colors depending on the choices they’ve made in the maze. The researcher can walk away while the maze-solving is in progress, and still be in the know about how the xenobot navigated through it.  

They can also, for example, release a compound that changes the color of the water if they sense something.  

These sensors make the xenobot easy to manage. In theory, scientists can make a system in which the xenobots are drawn to a certain wavelength of light. They could then shine this at an area in the water to collect all of the bots. And the ones that slip through can still harmlessly break down at the end of their life. 

A xenobot simulator

Blackiston, along with collaborators at Northwestern University and the University of Vermont, is using an AI simulator they built to design different types of xenobots. “It looks sort of like Minecraft, and you can simulate cells in a physics environment and they will behave like cells in the real world,” he says. “The red ones are muscle cells, blue ones are skin cells, and green ones are other cells. You can give the computer a goal, like: ‘use 5,000 cells and build me a xenobot that will walk in a straight line or pick something up,’ and it will try hundreds of millions of combinations on a supercomputer and return to you blueprints that it thinks will be extremely performant.”

Most of the xenobots he’s created have come from blueprints that have been produced by this AI. He says this speeds up a process that would have taken him thousands of years otherwise. And it’s fairly accurate as well, although there is a bit of back and forth between playing with the simulator and modeling the real-world biology. 
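As a loose illustration of that design loop, here is a toy version in Python: generate random “blueprints,” score each against a goal, and keep the best. Everything here (the grid size, cell types, and fitness rule) is invented for illustration; the real system evolves designs inside a physics simulator on a supercomputer.

```python
import random

CELL_TYPES = ["skin", "muscle", "cilia"]

def random_blueprint(n: int = 16) -> list[str]:
    """A candidate design: a flat list of cell types."""
    return [random.choice(CELL_TYPES) for _ in range(n)]

def fitness(blueprint: list[str]) -> int:
    """Stand-in goal: 'move fast', rewarded as muscle cells in the bottom half."""
    bottom = blueprint[len(blueprint) // 2:]
    return sum(1 for cell in bottom if cell == "muscle")

def search(trials: int = 10_000, seed: int = 0) -> list[str]:
    """Brute-force search: keep the highest-scoring random blueprint."""
    random.seed(seed)
    return max((random_blueprint() for _ in range(trials)), key=fitness)

best = search()
print(fitness(best))  # a high score; an all-muscle bottom half would score 8
```

The real pipeline replaces the random sampling with evolutionary search and the toy fitness function with full physics simulation, which is why it needs a supercomputer.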

Biology photo
Xenobots of different shapes crafted using computer-simulated blueprints. Douglas Blackiston and Sam Kriegman

The xenobots that Blackiston and his colleagues use are not genetically modified. “When we see the xenobots doing kinematic self-replication and making copies of themselves, we didn’t program that in. We didn’t have to design a circuit that tells the cells how to do kinematic self replication,” says Michael Levin, a professor of biology at Tufts. “We triggered something where they learned to do this, and we’re taking advantage of the native problem-solving capacity of cells by giving it the right stimuli.” 

What can xenobots help us do?

Xenobots are not just a blob of cells congealing together—they work like an ecosystem and can be used as tools to explore new spaces, in some cases literally, like searching for cadmium contamination in water. 

“We’re jamming together cells in configurations that aren’t natural. Sometimes it works, sometimes the cells don’t cooperate,” says Blackiston. “We’ve learned about a lot of interesting disease models.”

For example, with one model of xenobot, they’ve been able to examine how cilia in lung cells may work to push particles out of the airway or spread mucus correctly, and see that if the cilia don’t work as intended, defects can arise in the system.

The deeper application is using these biobots to understand collective intelligence, says Levin. That could be a groundbreaking discovery for the space of regenerative medicine. 

“For example, cells are not hardwired to do these specific things. They can adapt to changes and form different configurations,” he adds. “Once we figure out how cells decide together what structures they’re going to form, we can take advantage of those computations and build new organs, regenerate after injury, reprogram tumors—all of that comes from using these biobots as a way to understand how collective decision-making works.” 

The post Meet xenobots, tiny machines made out of living parts appeared first on Popular Science.

]]>
A new kind of Kevlar aims to stop bullets with less material https://www.popsci.com/technology/new-kevlar-exo-body-armor/ Sat, 15 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=534315
The new Kevlar fabric.
The new Kevlar fabric. DuPont

It's not quite the stuff of John Wick's suit, but this novel fiber is stronger than its predecessor.

The post A new kind of Kevlar aims to stop bullets with less material appeared first on Popular Science.

]]>

Body armor has a clear purpose: to prevent a bullet, or perhaps a shard from an explosion, from puncturing the fragile human tissue behind it. But donning it doesn’t come lightly, and its weight is measured in pounds. For example, the traditional Kevlar fabric that would go into soft body armor weighs about 1 pound per square foot, and you need more than one square foot to do the job. 

But a new kind of Kevlar is coming out, and it aims to be just as resistant to projectiles as the original material while also being thinner and lighter. It will not be tailored into a John Wick-style suit, which is the stuff of Hollywood, but DuPont, the company that makes it, says that it’s about 30 percent lighter. If regular Kevlar has that approximate weight of 1 pound per square foot, the new material weighs in at about 0.65 to 0.7 pounds per square foot. 

“We’ve invented a new fiber technology,” says Steven LaGanke, a global segment leader at DuPont.

Here’s what to know about how bullet-resistant material works in general, and how the new stuff is different. 

A bullet-resistant layer needs to do two tasks: ensure that the bullet cannot penetrate it, and absorb the bullet’s energy—ideally by transferring that energy back into the bullet itself, deforming it on impact. A layer of fabric that caught a bullet but then stretched like a loose net catching a baseball would be bad, explains Joseph Hovanec, a global technology manager at the company. “You don’t want that net to fully extend either, because now that bullet is extending into your body.”

The key is how strong the fibers are, plus the fact that “they do not elongate very far,” says Hovanec. “It’s the resistance of those fibers that will then cause the bullet—because it has such large momentum, [or] kinetic energy—to deform. So you’re actually catching it, and the energy is going into deforming the bullet versus breaking the fiber.” The bullet, he says, should “mushroom.”

Kevlar is a type of synthetic fiber called a para-aramid, and it’s not the only para-aramid in town: another that can be used in body armor is Twaron, made by a company called Teijin Limited. Some body armor is also made out of polyethylene, a type of plastic. 

The new form of Kevlar, which the company calls Kevlar EXO, is also a type of aramid fiber, although slightly different from the original Kevlar. Regular Kevlar is made up of two monomers—a monomer being a kind of molecular building block—and the new kind has one more, for a total of three. “That third monomer allows us to gain additional alignment of those molecules in the final fiber, which gives us the additional strength, over your traditional aramid, or Kevlar, or polyethylene,” says Hovanec.

Body armor in the US generally needs to meet a specific standard from the National Institute of Justice. Because the new kind of Kevlar is stronger, the goal is for it to meet the same standard while being used in thinner quantities in body armor. For example, regular Kevlar is roughly 0.26 or 0.27 inches thick, and the new material could be as thin as 0.19 inches, says Hovanec. “It’s a noticeable decrease in thickness of the material.”  
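Taking the article’s rough figures at face value, the weight and thickness claims are consistent with each other; a quick back-of-the-envelope check (approximate numbers from this article, not DuPont specifications):

```python
# Weight: traditional soft-armor Kevlar at ~1 lb per square foot,
# with Kevlar EXO claimed to be about 30 percent lighter.
classic_weight = 1.0                       # lb / sq ft
exo_weight = classic_weight * (1 - 0.30)
print(round(exo_weight, 2))                # -> 0.7, in the quoted 0.65-0.7 range

# Thickness: ~0.26-0.27 in for regular Kevlar vs. as thin as 0.19 in.
classic_thickness = 0.265                  # inches (midpoint of quoted range)
exo_thickness = 0.19                       # inches
reduction = 1 - exo_thickness / classic_thickness
print(round(100 * reduction))              # -> 28, roughly the same ~30 percent
```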

And the ballistic layer that’s made up of a material like Kevlar or Twaron is just one part of what goes into body armor. “There’s ballistics [protection], but then the ballistics is in a sealed carrier to protect it, and then there’s the fabric that goes over it,” says Hovanec. “When you finally see the end article, there’s a lot of additional material that goes on top of it.”

The post A new kind of Kevlar aims to stop bullets with less material appeared first on Popular Science.

]]>
How the Tonga eruption rang Earth ‘like a bell’ https://www.popsci.com/science/tonga-volcano-tsunami-simulation/ Fri, 14 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=534151
Satellite image of the powerful eruption.
Earth-observing satellites captured the powerful eruption. NASA Earth Observatory

A detailed simulation of underwater shockwaves changes what we know about the Hunga Tonga-Hunga Ha’apai eruption.

The post How the Tonga eruption rang Earth ‘like a bell’ appeared first on Popular Science.

]]>

When the Hunga Tonga–Hunga Haʻapai volcano in Tonga exploded on January 15, 2022—setting off a sonic boom heard as far north as Alaska—scientists instantly knew that they were witnessing history. 

“In the geophysical record, this is the biggest natural explosion ever recorded,” says Ricky Garza-Giron, a geophysicist at the University of California at Santa Cruz. 

It also spawned a tsunami that raced across the Pacific Ocean, killing two people in Peru. Meanwhile, the disaster devastated Tonga and caused four deaths in the archipelago. Those deaths were tragic, but experts would have expected an event of this magnitude to claim far more lives. So why didn’t it?

Certainly, the country’s disaster preparations deserve much of the credit. But the nature of the eruption itself, and how the tsunami it spawned spread across Tonga’s islands, also saved the country from a worse outcome, according to research published today in the journal Science Advances. By combining field observations with drone and satellite data, the study team was able to recreate the event through a simulation.

2022 explosion from Hunga-Tonga volcano captured by satellites
Satellites captured the explosive eruption of the Hunga Tonga-Hunga Ha’apai volcano. National Environmental Satellite Data and Information Service

It’s yet another way that scientists have studied how this eruption shook Tonga and the whole world. For a few hours, the volcano’s ash plume bathed the country and its surrounding waters with more lightning than everywhere else on Earth—combined. The eruption spewed enough water vapor into the sky to boost the amount in the stratosphere by around 10 percent. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

The eruption shot shockwaves into the ground, water, and air. When Garza-Giron and his colleagues measured those waves, they found that the eruption released an order of magnitude more energy than the 1980 eruption of Mount St. Helens.

“It literally rang the Earth like a bell,” says Sam Purkis, a geoscientist at the University of Miami in Florida and the Khaled bin Sultan Living Oceans Foundation. Purkis is the first author of the new paper. 

The aim of the simulation is to present a possible course of events. Purkis and his colleagues began by establishing a timeline. Scientists agree that the volcano erupted in a sequence of multiple bursts, but they don’t agree on when or how many. Corroborating witness statements with measurements from tide gauges, the study team suggests a quintet of blasts, each steadily increasing in strength up to a climactic fifth blast measuring 15 megatons—equivalent to a hydrogen bomb.

Credit: Steven N. Ward Institute of Geophysics and Planetary Physics, University of California Santa Cruz, U.S.A.

Then, the authors simulated what those blasts may have done to the ocean—and how fearsome the waves they spawned were as they battered Tonga’s other islands. The simulation suggests the isle of Tofua, about 55 miles northeast of the eruption, may have fared worst: bearing waves more than 100 feet tall.

But there’s a saving grace: Tofua is uninhabited. The simulation also helps explain why Tonga’s capital and largest city, Nuku’alofa, was able to escape the brunt of the tsunami. It sits just 40 miles south of the eruption, and seemingly experienced much shallower waves. 

[Related: Tonga is fighting multiple disasters after a historic volcanic eruption]

The study team thinks geography is partly responsible. Tofua, a volcanic caldera, sits in deep waters and has sharp, mountainous coasts that offer no protection from an incoming tsunami. Meanwhile, Nuku’alofa is surrounded by shallower waters and a lagoon, giving a tsunami less water to displace. Coral reefs may have also helped protect the city from the tsunami. 

Researchers believed that reefs could cushion tsunamis, Purkis says, but they didn’t have the real-world data to show it. “You don’t have a real-world case study where you have waves which are tens of meters high hitting reefs,” says Purkis.

We do know of volcanic eruptions more violent than Hunga Tonga–Hunga Haʻapai: for instance, Tambora in 1815 (which famously caused a “Year Without a Summer”) and Krakatau in 1883. But those occurred before the 1960s, when geophysicists started deploying the worldwide net of sensors and satellites they use today.

Ultimately, the study authors write that this eruption resulted in a “lucky escape.” It occurred under the most peculiar circumstances: At the time of its eruption, Tonga had shut off its borders due to Covid-19, reducing the number of overseas tourists visiting the islands. Scientists credit this as another reason for the low death toll. But the same closed borders meant scientists had to wait to get data.

Ash cloud from Hunga-Tonga volcano over the Pacific ocean seen from space
Ash over the South Pacific could be seen from space. NASA

That’s part of why this paper came out 15 months after the eruption. Other scientists had been able to simulate the tsunami before, but Purkis and his colleagues bolstered theirs with data from the ground. Not only did this help them reconstruct a timeline, it also helped them to corroborate their simulation with measurements from more than 100 sites along Tonga’s coasts. 

The study team argues that the eruption serves as a “natural laboratory” for the Earth’s activity. Understanding this tsunami can help humans plan how to stay safe from them. There are many other volcanoes like Hunga Tonga–Hunga Haʻapai, and volcanoes located underwater can devastate coastal communities if they erupt at the wrong time.

Garza-Giron is excited about the possibility of comparing the new study’s results with prior studies, such as his own, about seismic activity—in addition to other data sources, like the sounds of the ocean—to create a more complete picture of what happened that day.

“It’s not very often that we can see the Earth acting as a whole system, where the atmosphere, the ocean, and the solid earth are definitely interacting,” says Garza-Giron. “That, to me, was one of the most fascinating things about this eruption.”

The post How the Tonga eruption rang Earth ‘like a bell’ appeared first on Popular Science.

]]>
At 441,000 pounds and 192 feet underwater, this is the world’s deepest wind turbine https://www.popsci.com/technology/scotland-seagreen-wind-farm/ Thu, 13 Apr 2023 19:30:00 +0000 https://www.popsci.com/?p=533939
Seagreen's offshore windfarm in Scotland
Seagreen's offshore windfarm in Scotland. Seagreen

It will be part of Scotland's largest wind farm when it's fully operational later this year.

The post At 441,000 pounds and 192 feet underwater, this is the world’s deepest wind turbine appeared first on Popular Science.

]]>

The foundation for the world’s deepest offshore wind turbine has just been installed 17 miles off the coast of Scotland. Last week, the roughly 441,000-pound “jacket,” or foundation, was placed at a depth of 58.6 meters—just over 192 feet—by the Saipem 7000, the world’s third largest semi-submersible crane vessel. It was the 112th jacket installed at the 114-turbine Seagreen wind farm, which will be Scotland’s largest when it is fully operational later this year.

Wind turbines like these work like an inverse fan. Instead of using electricity to generate wind, they generate electricity using wind. The thin blades are shaped like aircraft wings and as the wind flows across them, the air pressure on one side decreases. This difference in air pressure across the blade generates both lift and drag, which causes the rotor to spin. The spinning rotor then powers a generator, sending electricity to the grid. 

Offshore wind farms like Seagreen have a number of advantages over land-based wind turbines. Since wind speeds at sea tend to be faster and more consistent than they are over land, it’s easier to reliably generate greater amounts of electricity. Even small increases in wind speed can have a dramatic effect: in a 15-mph wind, a turbine can generate double the amount of electricity it can generate in a 12-mph wind.
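That outsized jump comes from the cube law: the power available in wind scales with the cube of wind speed. Checking the article’s example:

```python
# Available wind power scales as the cube of wind speed (P proportional to v^3),
# so a 25 percent speed increase nearly doubles the available power.
def power_ratio(v_new: float, v_old: float) -> float:
    return (v_new / v_old) ** 3

print(round(power_ratio(15, 12), 2))  # -> 1.95, i.e. roughly double
```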

[Related: The NY Bight could write the book on how we build offshore wind farms in the future]

Also, coastal areas frequently have high energy requirements. In the US, more than 40 percent of the population, some 127 million people, live in coastal counties. By generating power offshore close to where it’s used, there is less need for long-distance energy transmission, and cities don’t have to dedicate already scarce space to power plants. 

But of course, the biggest advantage of any wind farm is that it can provide renewable energy without emitting toxic pollutants or greenhouse gasses. Wind farms don’t consume important resources like water, either, although they can have other environmental impacts that engineers are trying to solve. 

The recently installed foundations at Seagreen will each support a Vestas V164-10 MW turbine. With a rotor diameter of roughly 540 feet—more than one-and-a-half football fields—and standing up to 672 feet tall—more than twice the height of the Statue of Liberty—these turbines will be absolutely huge. Each one will be capable of generating up to 10,000 kilowatts (kW) of power in good conditions.

Although Seagreen actually started generating electricity last summer, when the wind farm is fully operational later this year, the 114 wind turbines will have a combined total capacity of 1,075 megawatts (MW). While that’s not enough to crack the top 100 power stations in the US, the wind farm is projected to produce around 5,000 gigawatt hours (GWh) of electricity each year, which is enough to provide clean and sustainable power to more than 1.6 million UK households. That’s around two-thirds of the population of Scotland. 
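The output and capacity figures above also imply Seagreen’s expected capacity factor, that is, the share of its theoretical maximum output it is projected to actually deliver. A rough calculation from the quoted numbers:

```python
# Capacity factor = annual energy produced / (capacity x hours in a year).
capacity_mw = 1075          # total installed capacity, MW
annual_output_gwh = 5000    # projected annual output, GWh
hours_per_year = 8760

max_possible_gwh = capacity_mw * hours_per_year / 1000  # MWh -> GWh
capacity_factor = annual_output_gwh / max_possible_gwh
print(round(capacity_factor, 2))  # -> 0.53, a strong figure for wind power
```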

Really, the Seagreen site shows how far wind power has come. While wind farms don’t yet have the capacity to fully replace fossil fuel power plants, Seagreen will still displace more than 2 million tonnes of carbon dioxide that would otherwise have been released by Scottish electricity generation. According to Seagreen, that’s the equivalent of removing a third of all Scotland’s cars from the road. 

The post At 441,000 pounds and 192 feet underwater, this is the world’s deepest wind turbine appeared first on Popular Science.

]]>
How a dumpy, short-legged bird could change water bottle designs https://www.popsci.com/technology/sandgrouse-feather-design/ Wed, 12 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=533674
Pin-tailed sandgrouse in a body of water
Pin-tailed sandgrouse hovering in a watering hole. DEPOSIT PHOTOS

The sandgrouse’s unique feathers can hold and transport fluids.

The post How a dumpy, short-legged bird could change water bottle designs appeared first on Popular Science.

]]>

The sandgrouse isn’t considered an elegant-looking bird. In fact, the online database eBird describes it as a “dumpy, short-legged, pigeon-like bird that shuffles awkwardly on the ground.” But it also harbors a special secret: It can carry water weighing around 15 percent of its body weight over short flights—up to 20 miles, enough to travel from a watering hole back to its nest. The key to this ability is the architecture of its belly feathers. 

In a new study published this week in the Journal of the Royal Society Interface, researchers used high-tech microscopes and 3D technology to reveal the detailed design of these feathers, in hopes that it might inspire future engineers to create better water bottles or sports hydration packs. According to a press release, the team plans to “print similar structures [as the bird feathers] in 3D and pursue commercial applications.”

To observe how the feathers performed their water-toting magic, the researchers examined specimens obtained from natural history museums, looking at dry feathers up close with light microscopy, scanning electron microscopy, and micro-CT. They then dunked the feathers in water and repeated the process. 

[Related: Cuttlefish have amazing eyes, so robot-makers are copying them]

What they saw was that the feathers themselves were made up of a mesh of barbs, tubes, grooves, hooks, and helical coils, with components that could bend, curl, cluster together, and more when wetted. In other words, they were optimized for holding and retaining water.

Engineering photo
Close up of sandgrouse feathers. Johns Hopkins University

“The microscopy techniques used in the new study allowed the dimensions of the different parts of the feather to be measured,” according to the press release. “In the inner zone, the barb shafts are large and stiff enough to provide a rigid base about which the other parts of the feather deform, and the barbules are small and flexible enough that surface tension is sufficient to bend the straight extensions into tear-like structures that hold water. And in the outer zone, the barb shafts and barbules are smaller still, allowing them to curl around the inner zone, further retaining water.”

The team also used measurements of the feather components’ dimensions to build computer models of them, and to calculate factors like surface tension force and the stiffness of the components that bend. 

“To see that level of detail…This is what we need to understand in order to use those principles to create new materials,” Jochen Mueller, an assistant professor in Johns Hopkins’ Department of Civil and Systems Engineering and an author on the paper, said in a press release.

One use of this design would be to make water bottles that don’t allow the water to slosh around. Another possible application is to incorporate bits of this structure into netting in desert areas to capture and collect water from fog and dew. 

]]>
After 2,000 years of debate, Italy’s massive suspension bridge to Sicily may finally happen https://www.popsci.com/technology/strait-messina-bridge-italy-sicily/ Wed, 12 Apr 2023 16:00:00 +0000 https://www.popsci.com/?p=533557
Strait between Sicily and Italy, view from Messina, Sicily
Attempts at building a bridge across the Strait of Messina stretch back 2,000 years. Deposit Photos

Ecological, social, and economic drawbacks may still keep the bridge more myth than reality.

The post After 2,000 years of debate, Italy’s massive suspension bridge to Sicily may finally happen appeared first on Popular Science.

]]>

Boasting 6,637 feet between its two towers, Turkey’s 1915 Çanakkale Bridge officially nabbed the title of world’s longest suspension bridge when it opened to the public on March 18, 2022. Barely a year after it earned the crown, however, Italy may be stepping up to construct an even longer engineering feat—one that’s over 2,000 years in the making, no less. Critics, though, argue that its odds of completion may be as long, figuratively speaking, as its literal span.

As Wired explained on Tuesday, following governmental approval to restart project planning and construction, the country is closer than it’s ever been to finally tackling a bridge that would cross the Strait of Messina to connect the island of Sicily with the mainland. 

According to current plans, the bridge would measure 3,300 meters (about 10,800 feet) across upon completion—over 60 percent longer than the 1915 Çanakkale Bridge. Not only that, but its pylon towers’ height of 380 meters would also make it the tallest bridge in the world.
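That “over 60 percent” comparison checks out. A quick arithmetic sketch using the spans cited above:

```python
# Quick check of the length comparison between the two bridges.
FT_TO_M = 0.3048

canakkale_span_m = 6637 * FT_TO_M  # 1915 Çanakkale Bridge main span, in meters
messina_span_m = 3300              # planned Strait of Messina crossing

increase = messina_span_m / canakkale_span_m - 1
print(f"{increase:.0%} longer")  # about 63%
```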

“Sicily is considered a territory apart from the rest of Italy and the bridge—a physical link—would unify it with the continent,” Federico Basile, mayor of Messina, told the Financial Times in January.

But there are a few catches. First and foremost, according to Wired’s recap, this is far from the first time such a bridge has been proposed. In fact, the first considerations of how to tackle such a structure go back as far as 1866. Politicians have repeatedly promised to make the suspension bridge a reality, but have so far run into numerous engineering, economic, social, and environmental issues surrounding the project. Its location in an earthquake zone doesn’t help things, either.

[Related: The 1915 Çanakkale Bridge sets engineering records.]

The ecological problems could be some of the bridge’s biggest impediments, too. Speaking with Wired, Dante Caserta, vice president of the World Wildlife Fund’s Italian branch, argues that “we’re still at a stage where there is no evidence that this is feasible economically, technically, and environmentally.” According to Caserta, the Messina Strait encompasses two environmentally protected areas that are vital to the migratory movements of seabirds and sea mammals. The sustainability organization Nostra adds that the Strait of Messina “constitutes a unique source of biodiversity” in the region.

And then there’s the simple economics. Ferries are currently the main form of transport between Sicily and the Italian mainland, but their ports aren’t near where the bridge would be built—that is, the two closest points on the strait. That means entirely new roadways would need to be added, which reportedly could account for half the project’s total cost of roughly $11 billion, according to Reuters.

“There would not be enough traffic to pay for the project through tolls, because over 75 percent of the people who cross the strait do so without a car,” added Caserta. “[S]o doing all this just to shave off 15 minutes doesn’t make sense, especially because it connects two areas with severe infrastructure problems.”

Still, many Italian politicians and organizations remain adamant that the bridge will finally be realized. Some hope construction could begin in the summer of 2024, but those estimates are extremely tentative. As close as a bridge linking Sicily with the mainland may now seem, it could still be quite a ways off.

]]>
Pendulums under ocean waves could prevent beach erosion https://www.popsci.com/environment/ocean-wave-pendulums/ Mon, 10 Apr 2023 17:00:00 +0000 https://www.popsci.com/?p=533009
Ocean waves crashing on rocky shoreline on cloudy day.
A relatively simple underwater system could absorb some of waves' energy before they reach shore. Deposit Photos

Waves are getting worse, but letting these cylinders take the hit could help slow coastal erosion.

The post Pendulums under ocean waves could prevent beach erosion appeared first on Popular Science.

]]>

Climate change is giving us stronger, more destructive ocean waves, which in turn exacerbate already serious coastal erosion. With this in mind, researchers are designing a new underwater engineering project that could help literally swing the pendulum back in humanity’s favor. As first highlighted by New Scientist on Sunday, a team at the Italian National Research Council’s Institute of Marine Science is working on MetaReef—a system of upside-down, submersible pendulum prototypes capable of absorbing underwater energy to mitigate wave momentum.

Although still in its laboratory design phase, MetaReef is already showing promising results. To test early versions of their idea, the team tethered 11 half-meter-long plastic cylinders to the bottom of a narrow, 50-meter-long tank. Each cylinder is made from commercial PVC pipe, filled with air to make it less dense than water, and waterproofed with polyurethane foam. A steel cable anchors each cylinder with just enough tension to hold it in place underwater while still letting it swing back and forth with the current’s strength and direction.
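The tension that keeps each cylinder taut comes from its net buoyancy. A rough sketch of that force balance—only the half-meter length comes from the experiment; the diameter and average density below are assumptions for illustration:

```python
import math

# Net buoyant force on one tethered, air-filled cylinder.
RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.81            # m/s^2

length = 0.5     # m, as described in the experiment
diameter = 0.10  # m, assumed
rho_cyl = 400.0  # kg/m^3, assumed average density of the foam-sealed pipe

volume = math.pi * (diameter / 2) ** 2 * length
tension = (RHO_WATER - rho_cyl) * volume * G  # upward pull on the steel cable
print(f"cable tension ~{tension:.0f} N")
```

It’s this constant upward pull, opposed by the anchor cable, that lets each cylinder behave like an inverted pendulum, swaying with passing waves and stealing a fraction of their energy.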

[Related: Maritime students gear up to fight high-seas cyberattacks.]

It’s not as simple as just anchoring a series of tubes beneath the waves, however. The researchers needed to hone both the cylinders’ size and the spacing between them to avoid accidentally creating a watery echo chamber that would amplify the currents. Once the parameters were fine-tuned, a piston at one end of the tank generated waves that interacted with the cylinders. By absorbing the waves’ energy, the team’s MetaReef reduced wave amplitudes by as much as 80 percent.

Of course, ocean current interactions are much more complicated than pistons splashing water in a relatively small tank. Speaking with New Scientist, Mike Meylan, a professor of information and physical sciences, warned that especially strong storms—themselves increasingly frequent—could easily damage pendulum systems deployed in the real world. That said, the researchers are confident that MetaReef’s customizability, alongside further experimentation, could yield a solid new tool for protecting both threatened coastlines and valuable structures such as offshore platforms. This malleability contrasts with artificial coastal reefs, which, while effective, are far more static and limited in placement than MetaReef or similar designs.

The team is presenting its findings this week at the annual International Workshop on Water Waves and Floating Bodies in Giardini Naxos, Italy. Although societal shifts in energy consumption remain the top priority for stemming the worst climate catastrophes, tools like MetaReef could still offer helpful, customizable ways to deal with damage already done to our oceanic ecosystems.

]]>
These glasses can pick up whispered commands https://www.popsci.com/technology/echospeech-glasses/ Sat, 08 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=532690
silent speech-recognizing glasses
They may look like ordinary glasses but they're not. Cornell University

It's like a tiny sonar system that you wear on your face.

The post These glasses can pick up whispered commands appeared first on Popular Science.

]]>

These trendy-looking glasses from researchers at Cornell have a special ability—and it doesn’t have to do with nearsightedness. Embedded on the bottom of the frames are tiny speakers and microphones that can emit silent sound waves and receive echoes back. 

This ability comes in handy for detecting mouth movements, allowing the device to pick up low-volume or even silent speech. That means you can whisper or mouth a command, and the glasses will catch it like a lip reader. 

The engineers behind this contraption, called EchoSpeech, are set to present their paper on it at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Germany this month. “For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer,” Ruidong Zhang, a doctoral student at Cornell University and an author on the study, said in a press release. The tech could also be used by its wearers to give silent commands to a paired device, like a laptop or a smartphone. 

[Related: Your AirPods Pro can act as hearing aids in a pinch]

In a small study that had 12 people wearing the glasses, EchoSpeech proved that it could recognize 31 isolated commands and a string of connected digits issued by the subjects with error rates of less than 10 percent. 

Here’s how EchoSpeech works. The speakers and microphones sit on different lenses, on opposite sides of the face. When the speakers emit sound waves at around 20 kilohertz (near ultrasound), the waves travel from one lens to the lips and on to the opposite lens. As the sound waves reflect and diffract off the lips, their distinct patterns are captured by the microphones and used to build “echo profiles” for each phrase or command. It effectively works like a simple, miniaturized sonar system.
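Some rough acoustics hint at why millimeter-scale lip movements leave a measurable imprint on those echoes. The path length and movement size below are assumptions for illustration, not values from the EchoSpeech paper:

```python
# Rough acoustics for a 20 kHz carrier traveling across the face.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
FREQ = 20_000.0         # Hz, the near-ultrasound carrier described above

wavelength = SPEED_OF_SOUND / FREQ
print(f"wavelength: {wavelength * 1000:.1f} mm")  # about 17 mm

path = 0.12        # assumed lens-to-lips-to-lens path, meters
movement = 0.005   # assumed 5 mm lip movement

delay_us = path / SPEED_OF_SOUND * 1e6      # echo's travel time, microseconds
shift_us = movement / SPEED_OF_SOUND * 1e6  # timing shift from the movement
print(f"echo delay ~{delay_us:.0f} us, shift ~{shift_us:.0f} us")
```

With a wavelength of only about 17 millimeters, lip movements of a few millimeters meaningfully reshape the reflected signal, which is what gives the machine learning model something to work with.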

Through machine learning, these echo profiles can be used to infer the words being spoken. While the model is pre-trained on select commands, it also goes through a fine-tuning phase that takes each new user around six to seven minutes to complete, just to enhance its performance.

[Related: A vocal amplification patch could help stroke patients and first responders]

The sound-wave sensors are connected to a microcontroller with a customized audio amplifier that can communicate with a laptop through a USB cable. In a real-time demo, the team used a low-power version of EchoSpeech that communicated wirelessly via Bluetooth with a microcontroller and a smartphone. The Android phone handled all processing and prediction, transmitting results to certain “action keys” that let it play music, interact with smart devices, or activate voice assistants.

“Because the data is processed locally on your smartphone instead of uploaded to the cloud, privacy-sensitive information never leaves your control,” François Guimbretière, a professor at Cornell University and an author on the paper, noted in a press release. Plus, audio data takes less bandwidth to transmit than videos or images, and takes less power to run as well. 

See EchoSpeech in action below: 

]]>
Can this 10-story wooden building survive a seismic shakedown? https://www.popsci.com/technology/tallwood-project-earthquake/ Fri, 07 Apr 2023 14:00:00 +0000 https://www.popsci.com/?p=532366
Key components of shake testing structure being installed.
Key components of shake testing structure being installed. UC San Diego Jacobs School of Engineering

An earthquake-testing instrument is about to find out what structures can survive 'the big one.'

The post Can this 10-story wooden building survive a seismic shakedown? appeared first on Popular Science.

]]>

For nearly two decades, the University of California San Diego has been home to a key instrument for understanding earthquakes: a 40-by-25-foot steel platform that uses a hydraulic system to mimic seismic movements. This “shake table,” which can literally shake whatever is on top of it, is one of the largest in the world. It has been used to test more than 30 structures since it was opened in 2004, with the results informing changes to building codes and road regulations. But for the past nine months, the table has stood still as an unprecedented experiment was prepared for it: a bespoke 10-story wooden building, the highest ever to be put to a high-magnitude test. 

According to principal investigator Shiling Pei, the goal of the aptly-named TallWood project is to prove that wooden buildings can withstand strong shaking without losing their structural integrity. With a lower carbon footprint than concrete or steel, wood has become an increasingly popular choice in recent years as a more sustainable building material. Plus, the flexibility of wooden structures makes them particularly well-suited to riding out earthquakes, Pei says—think about the ability of tree branches to bend without snapping. And while years of research and modeling allow him to endorse such buildings confidently, he is eager to demonstrate it in a real-world forum that will prove the power of wooden designs to a broader audience. 

“It is proof that, using current technology, we can build a 10-story building and produce resilient results after earthquakes,” Pei said. “It is our plan to throw about 40 earthquakes at the building, and the building will not be damaged structurally.” At least, that’s the hope.

[Related: Why most countries don’t have enough earthquake-resilient buildings]

Pei, who is also a civil and environmental engineering professor at the Colorado School of Mines, specializes in timber systems and hazard mitigation through engineering, which makes him a unique fit for this project. But he’s just one part of the extensive, multidisciplinary team that has been working on this project since 2016 as part of the National Science Foundation’s Natural Hazards Engineering Research Infrastructure (NHERI) program. The team includes professionals from six universities, more than two dozen industry partners, and the United States Forest Service, among other government agencies. 

Members of this team first put their timber theories to the test in 2017, building a two-story wooden building for the shake table. It held up successfully over the course of around 30 “earthquakes,” Pei said, including movements modeled after the 6.7-magnitude Northridge Earthquake, which happened nearly three decades ago in California. 

The 10-story TallWood project builds on that research, literally—not only in height but in design. At its core is a mass timber system, meaning it is composed of layers of wood laminated together into solid panels that can be assembled in the desired form—like an “IKEA assembly on steroids,” as Pei describes it. While this model is being built for the shake table only, the hope is that the design could be replicated in the real world if successful. 

The key to the earthquake-adaptive design is the so-called “rocking walls,” which are made to allow movement. Rather than being firmly affixed to the foundation of steel base beams, which form the ground support for the shake table, the walls are placed on top and held in position by steel bars that run up the entire structure. These bars act like rubber bands, keeping the walls in place while affording some flexibility. If an earthquake were to occur, the rocking walls would shake and even lift off the foundation, with the bars preventing them from moving too far out of line. This design aims to prevent buildings from sustaining the sort of structural damage observed after past earthquakes, which can lead to collapse or be difficult to fix. 

Other features like columns and bendable plates also help dissipate energy, while steel armoring helps mitigate any structural damage. Co-principal investigator Jeffrey Berman, a civil engineering professor at the University of Washington, described the overall composition during a NHERI radio interview as an essentially “damage-free structural system” since this design can accommodate an abundance of movement. 

The TallWood building is also outfitted with nonstructural features like doors, windows, stairs, ceilings, and additional walls. Keri Ryan, an earthquake engineer at the University of Nevada, Reno, and another co-principal investigator on the project, explained in another NHERI radio interview that this is important for getting a fuller picture of how real wooden buildings would respond to earthquake stress.

“The earthquake engineering community has focused mainly on structural design in the past, but in past earthquakes, a lot of the damage is focused on nonstructural systems,” Ryan said, citing the Northridge Earthquake as one example of this phenomenon. 

The ultimate hope is that the project reveals that a large wooden structure can be resilient to earthquakes, perhaps even performing better than a concrete, brick, or steel one, while at the same time being better for the environment overall.

While construction on the TallWood building was completed earlier this year, the team is still putting on final touches, including more than 700 sensors to record data like building displacement and acceleration. They are also installing dozens of cameras to see what the test looks like from various angles, since the building will, of course, be empty during the simulated seismic shocks. The final step before TallWood testing begins is to run smaller tests on the shake table to ensure the hydraulic system is working properly after months at rest. 

Once the table is given the go-ahead for the official shaking to begin later this spring, those hundreds of data streams will be compared with the research team’s models to see how they stack up and whether any adjustments need to be made. If all goes as expected and the building’s structural components remain intact, Pei said, the TallWood team hopes its design could be replicated in other buildings and used to inform future building codes. All of the data will eventually be shared in a publicly accessible database as well, allowing other researchers to integrate it into their own models and experiments. 

To stay updated on the next steps for TallWood, including testing dates, you can check out the project website. There are also livestream cameras aimed at the structure daily, which you can watch here.

Update on May 17: Watch video of the structure being tested, below.

]]>
Quantum computers can’t teleport things—yet https://www.popsci.com/technology/wormhole-teleportation-quantum-computer-simulation/ Fri, 07 Apr 2023 12:28:09 +0000 https://www.popsci.com/?p=532454
Google Sycamore processor for quantum computer hanging from a server room with gold and blue wires
Google's Sycamore quantum computer processor was recently at the center of a hotly debate wormhole simulation. Rocco Ceselin/Google

It's almost impossible to simulate a good wormhole without more qubits.

The post Quantum computers can’t teleport things—yet appeared first on Popular Science.

]]>

Last November, a group of physicists claimed they’d simulated a wormhole for the first time inside Google’s Sycamore quantum computer. The researchers tossed information into one batch of simulated particles and said they watched that information emerge in a second, separated batch of circuits. 

It was a bold claim. Wormholes—tunnels through space-time—are a highly theoretical consequence of gravity that Albert Einstein helped popularize. It would be a remarkable feat to create even a wormhole facsimile with quantum mechanics, an entirely different branch of physics that has long been at odds with gravity. 

And indeed, three months later, a different group of physicists argued that the results could be explained through alternative, more mundane means. In response, the team behind the Sycamore project doubled down on their results.

Their case highlights a tantalizing dilemma. Successfully simulating a wormhole in a quantum computer could be a boon for solving an old physics conundrum, but so far, quantum hardware hasn’t been powerful or reliable enough to do the complex math. It’s getting there quickly, though.

[Related: Journey to the center of a quantum computer]

The root of the challenge lies in the difference of mathematical systems. “Classical” computers, such as the device you’re using to read this article, store their data and do their computations with “bits,” typically made from silicon. These bits are binary: They can be either zero or one, nothing else. 

For the vast majority of human tasks, that’s no problem. But binary isn’t ideal for crunching the arcana of quantum mechanics—the bizarre rules that guide the universe at the smallest scales—because the system essentially operates in a completely different form of math.

Enter a quantum computer, which swaps out the silicon bits for “qubits” that adhere to quantum mechanics. A qubit can be zero, one—or, due to quantum trickery, some combination of zero and one. Qubits can make certain calculations far more manageable. In 2019, Google operators used Sycamore’s qubits to complete a task in minutes that they said would have taken a classical computer 10,000 years.
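That “combination of zero and one” can be made concrete with a toy state vector—a minimal sketch of the underlying math, not of how Sycamore is actually programmed:

```python
import math

# A single qubit as a 2-entry state vector: amplitudes for |0> and |1>.
# A classical bit is exactly one basis state; a qubit can be any
# normalized combination of the two.
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)  # an equal superposition
assert abs(alpha**2 + beta**2 - 1) < 1e-12        # amplitudes must normalize

p_zero = abs(alpha) ** 2  # probability of measuring 0
p_one = abs(beta) ** 2    # probability of measuring 1
print(round(p_zero, 3), round(p_one, 3))  # 0.5 0.5
```

Measuring this qubit yields 0 or 1 with equal probability; it's the ability to hold and manipulate such superpositions (and entangle many of them) that makes certain calculations far more manageable.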

There are several ways of simulating wormholes with equations that a computer can solve. The 2022 paper’s researchers used something called the Sachdev–Ye–Kitaev (SYK) model. A classical computer can crunch the SYK model, but only very inefficiently. Not only does the model involve particles interacting at a distance, it also features a good deal of randomness, both of which are tricky for classical computers to process.

Even the wormhole researchers greatly simplified the SYK model for their experiment. “The simulation they did, actually, is very easy to do classically,” says Hrant Gharibyan, a physicist at Caltech, who wasn’t involved in the project. “I can do it in my laptop.”

But simplifying the model opens up new questions. If physicists want to show that they’ve created a wormhole through quantum math, simplification makes it harder to confirm that they’ve actually done so. And if they want to learn how quantum mechanics interacts with gravity, it gives them less information to work with.

Critics have pointed out that the Sycamore experiment didn’t use enough qubits. While the chips in your phone or computer might have billions or trillions of bits, quantum computers are far, far smaller. The wormhole simulation, in particular, used nine.

While the team certainly didn’t need billions of qubits, according to experts, they should have used more than nine. “With a nine-qubit experiment, you’re not going to learn anything whatsoever that you didn’t already know from classically simulating the experiment,” says Scott Aaronson, a computer scientist at the University of Texas at Austin, who wasn’t an author on the paper.

If size is the problem, then current trends give physicists reason to be optimistic that they can simulate a proper wormhole in a quantum computer. Only a decade ago, even getting one qubit to function was an impressive feat. In 2016, the first quantum computer with cloud access had five. Now, quantum computers are in the dozens of qubits. Google Sycamore has a maximum of 53. IBM is planning a line of quantum computers that will surpass 1,000 qubits by the mid-2020s.

Additionally, today’s qubits are extremely fragile. Even small blips of noise or tiny temperature fluctuations—qubits need to be kept at frigid temperatures, just barely above absolute zero—may cause the medium to decohere, snapping the computer out of the quantum world and back into a mundane classical bit. (Newer quantum computers focus on trying to make qubits “cleaner.”)

Some quantum computers use individual particles; others use atomic nuclei. Google’s Sycamore, meanwhile, uses loops of superconducting wire. It all shows that qubits are in their VHS-versus-Betamax era: There are multiple competitors, and it isn’t clear which qubit—if any—will become the equivalent to the ubiquitous classical silicon chip.

“You need to make bigger quantum computers with cleaner qubits,” says Gharibyan, “and that’s when real quantum computing power will come.”

[Related: Scientists eye lab-grown brains to replace silicon-based computer chips]

For many physicists, that’s when great intangible rewards come in. Quantum physics, which guides the universe at its smallest scales, doesn’t have a complete explanation for gravity, which guides the universe at its largest. Showing a quantum wormhole—with qubits effectively teleporting—could bridge that gap.

So, the Google users aren’t the only physicists poring over this problem. Earlier in 2022, a third group of researchers published a paper, listing signs of teleportation they’d detected in quantum computers. They didn’t send a qubit through a simulated wormhole—they only sent a classical bit—but it was still a promising step. Better quantum gravity experiments, such as simulating the full SYK model, are about “purely extending our ability to build processors,” Gharibyan explains.

Aaronson is skeptical that a wormhole will ever be modeled in a meaningful form, even in the event that quantum computers do reach thousands of qubits. “There’s at least a chance of learning something relevant to quantum gravity that we didn’t know how to calculate otherwise,” he says. “Even then, I’ve struggled to get the experts to tell me what that thing is.”

]]>
Meet the first 4 astronauts of the ‘Artemis Generation’ https://www.popsci.com/science/artemis-2-astronauts/ Mon, 03 Apr 2023 17:14:45 +0000 https://www.popsci.com/?p=525007
Artemis II astronauts in orange NASA and Canadian Space Agency spacesuits
Official crew portrait for Artemis II. Clockwise from left: NASA Astronauts Christina Koch and Victor Glover, Canadian Space Agency Astronaut Jeremy Hansen, and NASA astronaut and Artemis II commander Reid Wiseman. Josh Valcarcel/NASA

Scheduled to launch in November 2024, these American and Canadian astronauts will be the first humans to visit the moon in more than 50 years.

The post Meet the first 4 astronauts of the ‘Artemis Generation’ appeared first on Popular Science.

]]>

Years after Apollo 17 commander Eugene Cernan returned from NASA’s last crewed mission to the moon, he still felt the massive weight of the milestone. “I realize that other people look at me differently than I look at myself, for I am one of only 12 human beings to have stood on the moon,” he wrote in his autobiography. “I have come to accept that and the enormous responsibility it carries, but as for finding a suitable encore, nothing has ever come close.”

Cernan, who died in 2017, and his crewmates will soon be joined in their lonely chapter of history by four new astronauts, bringing the grand total of people who’ve flown to the moon to 28. Today, NASA and the Canadian Space Agency announced the crew for Artemis II, the first mission to take humans beyond low-Earth orbit since Apollo 17 in 1972. The 10-day mission will take the team on a gravity-assisted trip around the moon and back.

The big reveal occurred at Johnson Space Center in Houston, Texas, in front of an audience of NASA partners, politicians, local students, international astronauts, and Apollo alums. NASA Director of Flight Operations Norman Knight, NASA Chief Astronaut Joe Acaba, and Johnson Space Center Director Vanessa Wyche selected the crew. They were joined on stage during the announcement by NASA Administrator Bill Nelson and Canada’s Minister of Innovation, Science, and Industry Francois-Philippe Champagne. 

“You are the Artemis generation,” Knight said after revealing the final lineup. “We are the Artemis generation.” These are the four American and Canadian astronauts representing humanity in the next lunar launch.

Christina Koch – Mission Specialist, NASA

Koch has served on three expeditions aboard the International Space Station (ISS) and set the record for the longest single spaceflight by a woman in 2020. Before that, the Michigan native conducted research at the South Pole and tinkered on instruments at the Goddard Space Flight Center. She will be the only professional engineer on the Artemis II crew. “I know who mission control will be calling when it’s time to fix something on board,” Knight joked during her introduction.

Koch relayed her anticipation of riding NASA’s Space Launch System (SLS) on a lunar flyby and back to those watching from home: “It will be a four-day journey [around the moon], testing every aspect of Orion, going to the far side of the moon, and splashing down in the Atlantic. So, am I excited? Absolutely. But one thing I’m excited about is that we’re going to be carrying your excitement, your dreams, and your aspirations on your mission.”

[Related: ‘Phantom’ mannequins will help us understand how cosmic radiation affects female bodies in space]

With the Artemis II mission, Koch will officially become the first woman to travel beyond low-Earth orbit. She and her crewmates will pass about 6,400 miles beyond the far side of the moon before returning home.

Jeremy Hansen – Mission Specialist, Canada

Hansen’s training experience has brought him to the ocean floor off Key Largo, Florida, the rocky caves of Sardinia, Italy, and the frigid atmosphere above the Arctic Circle. The Canadian fighter pilot led ISS communications from mission control in 2011, but this will mark his first time in space. Hansen will also be the first Canadian ever to fly on a lunar mission.

“It’s not lost on any of us that the US could go back to the moon by themselves. Canada is grateful for that global mindset and leadership,” he said during the press conference. He also highlighted Canada’s can-do attitude in science and technology: “All of those have added up to this step where a Canadian is going to the moon with an international partnership. Let’s go.”

Victor Glover – Pilot, NASA

Glover is a seasoned pilot both on and off Earth. Hailing from California, he’s flown or ridden more than 40 different types of craft, including the SpaceX Crew Dragon capsule, which he piloted to the ISS in 2020 on the first operational commercial crew flight. His outsized leadership presence in his astronaut class was mentioned multiple times during the event. “In the last few years, he has become a mentor to me,” Artemis II commander Reid Wiseman said.

[Related on PopSci+: Victor J. Glover on the cosmic ‘relay race’ of the new lunar missions]

In his speech, Glover looked into the lofty future of human spaceflight. “Artemis II is more than a mission to the moon and back,” he said. “It’s the next step on the journey that gets humanity to Mars. We have a lot of work to do to get there, and we understand that.” Glover will be the first Black astronaut to travel to the moon.

G. Reid Wiseman – Commander, NASA

Wiseman got a lot done in his single foray into space. During a 2014 ISS expedition, he contributed to upwards of 300 scientific experiments and conducted two lengthy spacewalks. The Maryland native served as NASA’s chief astronaut from 2020 to 2022 and led diplomatic efforts with Roscosmos, Russia’s space agency. 

“This was always you,” Knight said while talking about Wiseman’s decorated military background. “It’s what you were meant to be.”

Flight commanders are largely responsible for safety during space missions. As the first astronauts to travel on the SLS rocket and Orion spacecraft, the Artemis II crew will test the longevity and stability of NASA’s new flight hardware as they exit Earth’s atmosphere, slingshot into the moon’s gravitational field, swing around it, and attempt a safe reentry. Wiseman will be in charge of all that with the support of his three fellow astronauts and guidance from mission control.

The new Lamborghini Revuelto is a powerful hybrid beast https://www.popsci.com/technology/lamborghini-revuelto-plug-in-hybrid/ Mon, 03 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=524666
The Lamborghini Revuelto is the automaker's first plug-in hybrid.
The Lamborghini Revuelto is the automaker's first plug-in hybrid. Lamborghini

This new plug-in hybrid is an important first for the Italian automaker, but its electric-only range is just six miles.

The post The new Lamborghini Revuelto is a powerful hybrid beast appeared first on Popular Science.


For decades, Automobili Lamborghini has built its reputation on creating supercars with large-displacement engines. Mid-mounted naturally aspirated V12 combustion engines have been its signature since the debut of the classically stunning Miura in 1966.

But change is on the horizon, and Lamborghini’s rivals at Ferrari and McLaren have already begun the shift toward turbocharged smaller-displacement engines to maximize efficiency. Characteristically, Lamborghini is plotting a different course. Battery-electric Lamborghinis are on the CAD screens of the company’s engineers, but before they debut, Lamborghini aims to give its naturally aspirated V12 models a fitting send-off with a hybrid-electric assist.

The Revuelto is that V12 tribute model. As is customary, the car’s name comes from a traditional Spanish fighting bull. Revuelto was famous in 1880, so you’re forgiven if you haven’t heard of him. The word means “mixed up,” and it was chosen in reference to the Revuelto’s combination of combustion and electric power. The bull was said to be mixed up because eight different times he leapt out of the ring into the crowd in the stands.

The Revuelto is a plug-in hybrid-electric vehicle

In a step toward the electric future, Lamborghini has for the first time ever added a plug-in hybrid drivetrain that boosts efficiency and, crucially, lets the Revuelto drive into the fashionable city centers of Europe, where there are often prohibitions on combustion power. This is only the first plug-in hybrid from Lamborghini, which will electrify its entire portfolio in the coming years, says chairman and CEO Stephan Winkelmann during my visit to Lamborghini’s Sant’Agata Bolognese, Italy, headquarters to view the Revuelto.

“The Miura and Countach established the V12 engine as an icon of Lamborghini,” notes Winkelmann. 

“However, things change and we have new challenges in front of us right here and right now,” he continues. “Geopolitics are a constant companion to all of our planning.” 

The company will roll out a hybrid-electric Huracan by the end of 2024, with the first battery-electric cars arriving in 2028 or 2029. Given the Revuelto’s limited lifespan as the last of the V12 line, one might expect Lamborghini to settle for a simple evolutionary update; instead, the company went the extra mile with a full redesign. 

The Revuelto features an all-new carbon fiber platform, an all-new combustion engine, an all-new transmission, and even a new drivetrain layout in the chassis. The chassis is 10 percent lighter and 25 percent stiffer than before, and employs a new carbon fiber front impact structure in place of the Aventador’s aluminum structure.

Lamborghini Revuelto
The V12 and trio of electric motors produce a combined 1,000 horsepower. Lamborghini

The Revuelto’s V12 engine, explained 

The new 814-horsepower, 6.5-liter, L545 V12 engine still rides behind the cockpit, nestled in an all-aluminum rear subframe to which the rear suspension attaches. At a time when rivals’ engines are muted by turbochargers, you’ll hear the Revuelto’s song better than ever, because the L545 now spins to a 9,500-rpm rev limit and explodes each combustion stroke with the force of a 12.6:1 compression ratio rather than the Aventador’s 11.8:1 ratio.

This 12-cylinder beast is even 37 pounds lighter than the Aventador’s power plant. As the Revuelto carries the last Lamborghini V12, we can chart the progress from the original Lamborghini V12 of the 1960s, which displaced 3.5 liters, spun to 6,500 rpm, and churned out 280 horsepower under the more optimistic rating system of that era.

The Miura’s V12 rode side saddle, bolted transversely across the back of the cockpit, with its transaxle behind it. Its replacement, the Countach, rotated the V12 90 degrees into a longitudinal position and routed power to a transmission installed ahead of the engine. This “Longitudinale Posteriore” location was the source of the Countach’s LP500 designation, and the layout has remained that way ever since.

Until now. The Revuelto’s 8-speed dual-clutch paddle-shifted transmission was designed by Lamborghini’s engineers and is built by Graziano, the same company that built the Aventador’s transmission and also supplies gearboxes to McLaren for sports cars like the Artura, which is also a plug-in hybrid. The Aventador’s single-clutch automated manual transmission was consistently criticized for clunky shifts, so the buttery smooth action of the new dual clutch should be a dramatic improvement, especially in urban driving.

The gearbox contains a 147.5-hp electric motor from Germany’s Mahle that boosts the power going to the road. The electric motor also serves as the V12’s starter, and provides the Revuelto’s reverse function, eliminating the need for a reverse gear in the transmission. This motor can also work as a generator, letting the combustion engine recharge the battery pack when driving in Recharge mode.

This gearbox is a transverse design, mounted behind the longitudinal engine, which provides abundant packaging benefits. But crucially for the hybrid-electric Revuelto, this location leaves the car’s center tunnel vacant, so there is space there now for the car’s 3.8-kilowatt-hour lithium-ion battery pack.

The Revuelto’s battery and electric motors 

Yes, 3.8 kWh is a tiny battery. Lamborghini engineers wanted to minimize the amount of mass the battery would add to the car, and the short six miles of electric-only driving range should be enough to get the Revuelto to the trendy urban club’s valet parking line on electric power. 
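Those two figures imply a rough electric-only energy consumption, worth a quick sanity check. This is a back-of-the-envelope sketch that assumes the full nominal 3.8 kWh is usable over the full six-mile range, which real-world buffers and driving conditions would change:

```python
# Implied electric-only consumption from the figures above: 3.8 kWh over 6 miles.
# Assumes the whole nominal pack is usable across the whole quoted range,
# so treat the result as a rough estimate, not an official spec.

battery_kwh = 3.8
ev_range_miles = 6

wh_per_mile = battery_kwh / ev_range_miles * 1000
print(round(wh_per_mile))  # 633 Wh per mile
```

At roughly 633 watt-hours per mile, the Revuelto’s electric mode is several times thirstier than a typical EV sedan’s roughly 250 to 350 Wh per mile, which is unsurprising for a supercar whose battery exists mainly for short, silent urban hops.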

The Revuelto is all-wheel drive thanks to a pair of 147.5-hp electric motors under the front hood. These are Yasa axial flux motors from Britain, another similarity to the McLaren Artura, which also employs compact pancake-shaped axial-flux motors.

The V12 and trio of electric motors produce a combined 1,000 horsepower. Remember that combustion engines and electric motors produce their peak power at different speeds, so you can’t just add up the peak power of all the motors in a hybrid system to calculate the actual horsepower total. They combine to push the Revuelto to 60 mph in less than 2.5 seconds and to a top speed of more than 219 mph.
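That point can be made concrete with a toy model: a hybrid’s combined peak is the maximum of the summed power curves across engine speed, not the sum of the individual peaks. The curves below are invented purely for illustration and are not Lamborghini’s actual power maps; only the 814-hp and 147.5-hp peak figures come from the article.

```python
# Toy model of why a hybrid's combined peak power is less than the sum of
# the individual peaks: the V12 peaks near redline while the e-motors are
# strongest at low speed. Both curves are invented for illustration only.

def engine_hp(rpm):
    # Hypothetical V12 curve rising linearly to an 814-hp peak at 9,250 rpm.
    return 814 * rpm / 9250 if rpm <= 9250 else 814 * (9500 - rpm) / 250

def motors_hp(rpm):
    # Hypothetical combined e-motor curve fading as rpm climbs.
    return 3 * 147.5 * (1 - rpm / 12000)

rpms = range(1000, 9501, 50)
sum_of_peaks = 814 + 3 * 147.5                                   # naive sum: 1256.5 hp
combined_peak = max(engine_hp(r) + motors_hp(r) for r in rpms)   # ~915 hp in this toy

print(sum_of_peaks, round(combined_peak))
```

Because the engine’s and motors’ maxima never occur at the same rpm, the system peak in the sketch lands well below the naive 1,256.5-hp sum, just as the Revuelto’s real 1,000-hp rating sits below the sum of its components’ peaks.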

Revuelto’s performance also benefits from advanced aerodynamics in a body shell that incorporates extra space for improved comfort. There’s an extra inch of headroom to make it easier to operate while wearing a helmet for track driving and the added 3.3 inches of legroom is a blessing, as the front wheel wells intrude into the footwell of mid-engine cars like the Revuelto.

Despite the added size, the Revuelto optimizes the balance between drag and downforce using adaptive aerodynamics, such as a rear wing that can lie flat for less drag or stand up for traction-boosting downforce. The transverse transmission leaves more space under the car’s rear, so the diffuser ramps upward at a steeper angle, contributing to the 74 percent increase in rear downforce.

At the front, downforce is increased by 33 percent thanks to a complex front splitter. That’s the chin jutting out from beneath the front bumper, and on the Revuelto it has a radial leading edge in the center between the headlights and slanted outer edges that provide downforce and create vortices (like the ones you might see off airplane wing tips in humid air) to push airflow away from the drag-inducing front tires.

Lamborghini Revuelto
The engine, albeit beneath a cover, is visible in the rear. Lamborghini

The engine is exposed (kind of) 

Revuelto’s coolest styling detail is its exposed engine. While typical cars have their engines covered with sheet metal hoods, and exhibitionist supercars have recently showcased their power plants beneath glass covers, the Revuelto’s combustion V12 is on proud display through an opening in the engine cover. At least, it appears to be. That’s because the engine wears a plastic cover that looks like a crinkle-finish intake plenum, so that is what is actually visible from outside the car. 

The engine’s exhaust note is authentic, even if the engine itself is wearing a mask. Since this is the final V12, and to draw a contrast with turbocharged rivals with fewer cylinders, Lamborghini engineers prioritized Revuelto’s sound, says chief technical officer Rouven Mohr. “It is not only about the numbers,” he says, referring to the car’s impressive performance. “It is also about the heart. The sound. And the Revuelto is the best-sounding Lamborghini ever.”

Engineers specifically targeted the sharp frequencies in the engine’s exhaust note to cultivate a mellower bellow, he explains. And in an unprecedented Lamborghini capability, the car’s six miles of pure electric driving range means that you can also drive completely silently when exiting your neighborhood in the morning. Your neighbors will surely think this combination of roar and snore is the best kind of “mixed up” at 6 am.

Why most countries don’t have enough earthquake-resilient buildings https://www.popsci.com/environment/earthquake-engineering-buildings/ Thu, 30 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=524264
Complete collapse of tall apartment building after Turkey Syria earthquake. Seen from aerial view.
A destroyed apartment block is seen on February 20, 2023 in Hatay, Turkey. Chris McGrath/Getty Images

Outdated construction killed more people in Turkey and Syria than the earthquakes themselves.

The post Why most countries don’t have enough earthquake-resilient buildings appeared first on Popular Science.


Amidst the rubble of broken door frames, shards of windows, and concrete pillars in Gaziantep, a Turkish city known for its Byzantine citadels and castles, social media users shared photos of one building in particular that stood firm last month. The surviving structure that made the social media rounds is a stout office which, despite two successive earthquakes of magnitude 7.8 and 7.7, appeared to not even have shattered glass. 

The building was designed by the Union of Chambers of Turkish Engineers and Architects, who had for years warned the national government and local officials that earthquakes do not kill people, but poorly constructed buildings do. Their warnings turned out to be prescient. The February 6 earthquake and its aftershocks caused the collapse or severe damage of more than 160,000 buildings in Turkey and Syria. And many of those buildings were apartments, where a large part of the population lives, leading to the deaths of more than 40,000 people. 

The destruction of some buildings but not others has drawn attention to the importance of earthquake-safe architecture and building codes in the two countries. Popular Science spoke with structural engineers who study how buildings can withstand earthquakes to understand what makes certain designs more life-saving. 

How to engineer buildings for earthquakes

The most important factor when constructing a building that can withstand an earthquake is to make it strong and deformable, meaning that it’s both sturdy but can also bend and sway with the ground without collapsing, says Mehrdad Sasani, an engineering professor at Northeastern University in Massachusetts, who researches earthquake-resilience modeling for buildings. 

On a granular level, this means that buildings built from concrete, stone, brick, and similar materials need reinforcing steel bars to make them strong. And to make buildings deformable, engineers need to carefully design and construct the building’s beams and columns, and place the steel bars in a particular way. This strategy is called seismic detailing, and it focuses on areas where the building would experience more impact from severe tremors. The metal rods have multiple joints that help stabilize the columns while still allowing them to sway and absorb pressure. In finished construction, the metal rods disappear inside the columns and beams, usually behind an apartment’s walls.

[Related: A humble seismograph beneath the Great Smoky Mountains could be one of the best in the world]

Before engineers construct the columns and add the stabilizers, though, they must consider a number of factors, including the ground beneath the building itself. They design buildings based on how severely the earth might shake, rather than a certain magnitude or the earthquake’s number on the Richter scale. “The more severely the ground shakes, the more damaging it’s going to be. That is not directly related to magnitude, but magnitude certainly affects it,” Sasani says. 

“The point is this: When the ground starts to shake, buildings start to move. And if that movement is more than the buildings can tolerate, they could collapse. And that shaking is what we design for.”

When building codes save lives

Engineering is only half of the challenge with making dwellings safer during earthquakes. A large reason why the Gaziantep earthquake was so deadly was likely that some developers didn’t follow Turkey’s building codes, which Sasani says are adequate. Three apartment buildings completed in 2019 that were reviewed by the BBC, presumably up to Turkish codes, still collapsed during the disaster. And up to 75,000 buildings in southern Turkey were given construction amnesties, where property owners could pay a fee to be forgiven for construction violations, according to Pelin Pınar Giritlioğlu, Istanbul head of the Union of Chambers of Turkish Engineers.

“We cannot design buildings to be earthquake-proof. We make them earthquake-resistant. Even if you follow the standards and codes, there is always a probability of failure and collapse, but that probability is a low probability,” Sasani says. “If buildings were designed based on the code, would some of them collapse? Yes. Would this many [buildings] have collapsed in Turkey and Syria? The answer is no.”

Silver drill-like machine to test structural strength of apartment buildings after Turkey Syria earthquake
An engineer of Istanbul Metropolitan Municipality checks the pillar of a flat on March 10, 2023 in Istanbul, Turkey. Since the 7.8 magnitude earthquake of February 6, the municipality has received more than 140,000 requests from residents wishing to have the structural safety of their building tested. OZAN KOSE/AFP via Getty Images

Syria first included earthquake safety measures in its building codes in 1995, and updated them again in 2013. But the country is in the midst of a 12-year civil war, and bombing from the Assad regime likely made structures more vulnerable to collapse during the earthquake, says Bilal Hamad, an engineering professor at the American University of Beirut and a former mayor of Beirut. As a comparison, some of his students studied how the 2020 Port of Beirut explosion, in which large quantities of ammonium nitrate that had sat in the port for several years suddenly detonated, compared to the power of an earthquake. 

“Shelling is applying a blast on a building. It’s energy,” Hamad says. “They found similarities, because the [chemical] blast is almost like an earthquake. It is worse even, because the earthquake energy blast is underground. It gets dampened by the soil. But the [chemical] blast is above ground level, and there is nothing that dampens it.”

A way forward for resource-strapped countries

When planning for earthquakes, engineers consider two types of buildings: old and new ones. A building counts as “old” if it was built before engineers applied earthquake science to the most commonly used rules for construction. In the US, Sasani says most codes, which are determined locally, were updated with modern research on seismic forces and construction techniques in the 1970s. Anything built before then is considered an old building. 

Sasani says the majority of countries use US building codes, but it takes some time for the rules to travel across land and sea, meaning the date they actually adopted the code could be well past the 1970s. For example, Hamad says that Lebanon didn’t require any earthquake safety in building until 2005. And it took another seven years, until 2012, for the government to require builders to hire technical offices to ensure the buildings were built up to code. In the earthquake’s aftermath, Turkey’s president claimed that 98 percent of the buildings that collapsed were built before 1999, although experts have cast doubt on the statistic, saying it’s been used to divert blame from his construction amnesties policy.

Old structures will therefore probably need rehabilitation, especially if they involve masonry and are built from brick, stone, or other similar material, according to Sasani. Bricks and the like are connected by mortar, which makes them more susceptible to earthquakes, he says. Masonry structures are more common in the global south, where buildings are older, while the US has more structures built from wood, which is less susceptible to earthquake damage. 

These older buildings can be rehabilitated through mechanisms such as adding what’s called a shear wall in engineering to make the building stiffer. But rehabilitation is expensive. Sometimes, it’s even cheaper to scrap a building and start a new one. “If you don’t have the resources, you think about your more important priorities,” says Sasani. “If you haven’t had an earthquake in so many decades, you don’t think about that as a priority.” (The last time Turkey and Syria experienced mass casualties from an earthquake was in the 1990s.)

So, what are resource-strapped countries to do? Building inspections aren’t expensive, Sasani says—that’s where they could start. “The least less-developed countries can do is to have codes and standards that are implemented,” he says. “That would reduce the likelihood of events like this in the future.”

[Related: Disaster prep can save lives, but isn’t as accessible to those most at risk]

It also doesn’t cost very much to make future buildings earthquake safe, explains Karim Najjar, an architecture professor at the American University of Beirut, who researches climate-responsive design strategies. He says adding additional beams and columns to strengthen a building’s infrastructure usually accounts for only a fraction of the total cost. “These measures make 5 to 10 percent of the costs for the structures,” Najjar writes in an email to PopSci. “Often cement is reduced in the concrete for maximizing profits,” which can make the building less strong, and therefore, less earthquake-resilient.

Hamad instead estimates that the shell of a building costs about 20 percent of the total project. If a building is designed to resist earthquakes, that 20 percent only goes up by 3 to 5 percent. “There would be shear walls, more reinforcement, beams, columns, and foundations,” he says. “Why not? Safety comes before changing a bathroom.”

Gordon Moore, modern computing pioneer, dies at 94 https://www.popsci.com/technology/gordon-moore-obituary/ Mon, 27 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=523253
Intel co-founder Gordon Moore laughing in 2015 during 50th anniversary of Moore's Law
Moore's Law helped shape the trajectory of computer innovation for over half a century. Walden Kirsch/Intel Corporation

The co-founder of Intel predicted modern computing power more than half a century ago.

The post Gordon Moore, modern computing pioneer, dies at 94 appeared first on Popular Science.


Gordon Moore, co-founder of Intel and one of the most influential minds in computing history, died on Friday at his home in Hawai’i at the age of 94. Known as one of the architects of modern electronics for his work on transistors and microprocessors, his observations on the trajectory of computing advancements—later known as Moore’s law—have proved highly accurate for nearly six decades.

Writing for the 35th anniversary issue of Electronics Magazine in 1965, Moore, a Caltech graduate and engineer, estimated that technological advancements ensured transistors’ physical dimensions would continue shrinking. He forecast that roughly double the number of transistors could fit on the same-sized chip every year. A decade later, Moore revised this timespan to two years, but the general idea proved remarkably stable over the ensuing years. As PC Mag notes, Intel’s first commercially available microprocessor contained 2,250 transistors in 1971. Apple’s M2 Max chip, released in January, contains 67 billion transistors.
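Those two chips make the law easy to check with back-of-the-envelope arithmetic. The sketch below applies the revised two-year doubling period to the 1971 figure; the function name and structure are illustrative, not from any source:

```python
# Back-of-the-envelope check of Moore's law (doubling every two years),
# using the figures cited above: 2,250 transistors in 1971 and
# 67 billion in Apple's M2 Max (2023).

def moores_law_projection(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

projected = moores_law_projection(2_250, 1971, 2023)  # 26 doublings
actual = 67_000_000_000                               # M2 Max

print(f"projected: {projected:.3g}")  # ~1.51e11, about 151 billion
print(f"actual:    {actual:.3g}")     # 6.7e10
```

Fifty-two years and 26 doublings land the projection within a factor of about 2.3 of the real chip, which is striking agreement for a rule of thumb spanning five decades.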

“Integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment,” Moore wrote in his 1965 article, “Cramming More Components onto Integrated Circuits,” twenty years before PCs revolutionized computing and forty years prior to the iPhone’s debut.

[Related: Here’s the simple law behind your shrinking gadgets.]

Moore’s law guided the trajectory of Intel and rival processor companies’ innovations for years. Nvidia CEO Jensen Huang recently declared the law “dead” as advances have slowed—an arguably inevitable conclusion as transistors approach their physical limits and supply costs increase. The assertion is not without its detractors, however. Intel, for its part, maintains Moore’s law is still in effect, and expects 1 trillion transistors per chip by the end of the decade.

Regardless of when Moore’s law finally taps out, its guiding principles helped propel the modern computing industry for years, providing a benchmark for companies as electronics quickly integrated themselves into the fabric of global society. As physical limitations make themselves known, researchers have turned to next generation advancements like quantum computing to continue cutting-edge innovation. “Gordon’s vision lives on as our true north as we use the power of technology to improve the lives of every person on Earth,” Intel CEO Pat Gelsinger said in a statement on Friday.

“All I was trying to do was get that message across, that by putting more and more stuff on a chip we were going to make all electronics cheaper,” Moore said in a 2008 interview. Outside of his electronics advancements, Moore started an environmental protection foundation alongside his wife using $5 billion of his own Intel stock, and remained an ardent financial supporter of the Search for Extraterrestrial Intelligence, or SETI.

Tiny, fast lasers are unlocking the mysteries of photosynthesis https://www.popsci.com/technology/ultrafast-spectroscopy-photosynthesis/ Mon, 27 Mar 2023 10:00:00 +0000 https://www.popsci.com/?p=522857
plant leaf up close
How does photosynthesis really work? This tool might help us figure it out. Clay Banks / Unsplash

Seeing the process in fractions of a blink could provide insights for clean fuel and more climate-sturdy plants.

The post Tiny, fast lasers are unlocking the mysteries of photosynthesis appeared first on Popular Science.


Renewable energy is easy for plants. These green organisms take water, sunlight and carbon dioxide and make their own fuel. The magic happens within teeny molecular structures too small for the human eye to perceive. 

But while this process is a breeze for plants, truly understanding what happens is surprisingly hard for humans. Scientists know that it involves electrons, charge transfers, and some atomic-level physics, but the specifics of what happens, and when, are a bit hazy. Researchers have tried to decipher this mystery with tools ranging from nuclear magnetic resonance to quantum computers.

Enter an approach that shoots ultrafast laser pulses at live plant cells to image them, as study author Tomi Baikie, a fellow at the Cavendish Laboratory at Cambridge University, explained to Earther. Using this tech, Baikie and his colleagues delved into the reaction centers of plant cells. Their findings were published this week in the journal Nature.

Engineering photo
An animation of the photosynthesis process. Mairi Eyres

The technique they used allowed the researchers to carefully watch what the electrons are doing, and “follow the flow of energy in the living photosynthetic cells on a femtosecond scale – a thousandth of a trillionth of a second,” according to a press release from University of Cambridge. 
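The unit arithmetic behind that description checks out: a thousandth (10^-3) of a trillionth (10^-12) of a second is 10^-15 seconds, one femtosecond. The 0.3-second blink used for scale below is a rough round number assumed here, not a figure from the study:

```python
# Sanity-check the scale: a femtosecond is a thousandth of a trillionth
# of a second. The 0.3 s blink duration is an assumed round number.

femtosecond = 1e-3 * 1e-12   # 1e-15 seconds
blink_s = 0.3                # rough duration of a human blink

print(f"{femtosecond:.0e} s")                       # 1e-15 s
print(f"{blink_s / femtosecond:.1e} fs per blink")  # on the order of 1e14
```

In other words, hundreds of trillions of femtoseconds fit inside a single blink, which is why the researchers can watch an entire charge-transfer event unfold in what is, to human senses, no time at all.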

Having such a close eye on the electrons enabled the scientists to observe, for example, where the protein complex could leak electrons and how charges move down the chain of chemical reactions. “We didn’t know as much about photosynthesis as we thought we did, and the new electron transfer pathway we found here is completely surprising,” Jenny Zhang, who coordinated the research, said in the statement.

[Related: The truth about carbon capture technology]

Knowing the intricacies behind how this natural process functions “opens new possibilities for re-wiring biological photosynthesis and creates a link between biological and artificial photosynthesis,” the authors wrote in the paper. That means they could one day use this knowledge to help reengineer plants to tolerate more sun, or create new formulas for cleaner, light-based fuel for people to use. 

Although the possibility of “hacking” photosynthesis is more speculative, the team is excited about the potential of ultrafast spectroscopy itself, seeing how it can provide “rich information” on the “dynamics of living systems.” As PopSci previously reported, “using ultrashort pulses for spectroscopy allows scientists to peer into the depths of molecules and atoms, or into processes that start and finish in tiny fractions of a blink.”

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.
Room-temperature superconductors could zap us into the future https://www.popsci.com/science/room-temperature-superconductor/ Sat, 25 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=522900
Superconductor cuprate rings lit up in blue and green on a black grid
In this image, the superconducting Cooper-pair cuprate is superimposed on a dashed pattern that indicates the static positions of electrons caught in a quantum "traffic jam" at higher energy. US Department of Energy

Superconductors convey powerful currents and intense magnetic fields. But right now, they can only be built at searing temperatures and crushing pressures.

The post Room-temperature superconductors could zap us into the future appeared first on Popular Science.


In the future, wires might cross underneath oceans to effortlessly deliver electricity from one continent to another. Those cables would carry currents from giant wind turbines or power the magnets of levitating high-speed trains.

All these technologies rely on a long-sought wonder of the physics world: superconductivity, a physical state that lets a material carry an electric current without losing any juice.

But superconductivity has so far worked only at frigid temperatures, far too cold for most devices. To make it more useful, scientists have to coax the same behavior out of materials at ordinary temperatures. And even though physicists have known about superconductivity since 1911, a room-temperature superconductor still evades them, like a mirage in the desert.

What is a superconductor?

Superconducting materials have a point called the “critical temperature.” Cool the material below that temperature, and its electrical resistance vanishes, letting electrons flow through unimpeded. To put it another way, an electric current running through a closed loop of superconducting wire could circulate forever. 

Today, anywhere from 8 to 15 percent of mains electricity is lost between the generator and the consumer because the electrical resistivity in standard wires naturally wicks some of it away as heat. Superconducting wires could eliminate all of that waste.
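For a sense of scale, here's a quick illustrative sketch of that loss range. Only the 8 to 15 percent figures come from the text above; the 1,000 kWh generation number is invented for the example:

```python
# Illustrative only: how much generated electricity survives the trip
# at the quoted 8-15 percent transmission loss.
def delivered_power(generated_kwh, loss_fraction):
    """Energy reaching consumers after grid losses."""
    return generated_kwh * (1 - loss_fraction)

generated = 1000.0  # kWh, a made-up example figure
print(delivered_power(generated, 0.08))  # 920.0 kWh delivered at the low end
print(delivered_power(generated, 0.15))  # 850.0 kWh delivered at the high end
# A superconducting line, with zero resistance, would deliver all 1000 kWh.
```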

[Related: This one-way superconductor could be a step toward eternal electricity]

There’s another upside, too. When electricity flows through a coiled wire, it produces a magnetic field; superconducting wires intensify that magnetism. Already, superconducting magnets power MRI machines, help particle accelerators guide their quarry around a loop, shape plasma in fusion reactors, and push maglev trains like Japan’s under-construction Chūō Shinkansen.

Turning up the temperature

While superconductivity is a wondrous ability, physics nerfs it with the cold caveat. Most known materials’ critical temperatures are barely above absolute zero (-459 degrees Fahrenheit). Aluminum, for instance, comes in at -457 degrees Fahrenheit; mercury at -452 degrees Fahrenheit; and the ductile metal niobium at a balmy -443 degrees Fahrenheit. Chilling anything to temperatures that frigid is tedious and impractical. 
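Those Fahrenheit figures are just converted from the kelvin values physicists usually quote. As a sketch (the kelvin critical temperatures below are approximate textbook values, not from this article):

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvins to degrees Fahrenheit."""
    return k * 9 / 5 - 459.67

# Approximate critical temperatures, in kelvins
critical_temps_k = {"aluminum": 1.2, "mercury": 4.2, "niobium": 9.3}
for metal, tc in critical_temps_k.items():
    print(f"{metal}: {kelvin_to_fahrenheit(tc):.1f} °F")
# aluminum: -457.5 °F, mercury: -452.1 °F, niobium: -442.9 °F
```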

Scientists pushed critical temperatures higher, if only somewhat, by experimenting with exotic materials like cuprates, a type of ceramic that contains copper and oxygen. In 1986, two IBM researchers found a cuprate that superconducted at -396 degrees Fahrenheit, a breakthrough that won them the Nobel Prize in Physics. Soon enough, others in the field pushed cuprate superconductors past -321 degrees Fahrenheit, the boiling point of liquid nitrogen—a far more accessible coolant than the liquid hydrogen or helium they’d otherwise need. 

“That was a very exciting time,” says Richard Greene, a physicist at the University of Maryland. “People were thinking, ‘Well, we might be able to get up to room temperature.’”

Now, more than 30 years later, the search for a room-temperature superconductor continues. Equipped with algorithms that can predict what a material’s properties will look like, many researchers feel that they’re closer than ever. But some of their ideas have been controversial.

The replication dilemma

One way the field is making strides is by turning attention away from cuprates to hydrides, or materials with negatively charged hydrogen atoms. In 2015, researchers in Mainz, Germany, set a new record with a sulfur hydride that superconducted at -94 degrees Fahrenheit. Some of them then quickly broke their own record with a hydride of the rare-earth element lanthanum, pushing the mercury up to around -9 degrees Fahrenheit—about the temperature of a home freezer. 

But again, there’s a catch. Critical temperatures shift when the surrounding pressure changes, and hydride superconductors, it seems, require rather inhuman pressures. The lanthanum hydride only achieved superconductivity at pressures above 150 gigapascals—roughly equivalent to conditions in the Earth’s core, and far too high for any practical purpose in the surface world.

[Related: How the small, mighty transistor changed the world]

So imagine the surprise when mechanical engineers at the University of Rochester in upstate New York presented a hydride made from another rare-earth element, lutetium. According to their results, the lutetium hydride superconducts at around 70 degrees Fahrenheit and 1 gigapascal. That’s still 10,000 times Earth’s air pressure at sea level, but low enough to be used for industrial tools.
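A quick sanity check on those pressure comparisons, assuming standard sea-level pressure of 101,325 pascals:

```python
# Illustrative arithmetic only: comparing the quoted pressures.
GIGAPASCAL = 1e9      # pascals
ATMOSPHERE = 101_325  # pascals, standard sea-level air pressure

lutetium_pressure_pa = 1 * GIGAPASCAL     # the new claim: ~1 GPa
lanthanum_pressure_pa = 150 * GIGAPASCAL  # the earlier hydride's requirement

print(round(lutetium_pressure_pa / ATMOSPHERE))  # 9869 -- "roughly 10,000 times" sea level
print(round(lanthanum_pressure_pa / lutetium_pressure_pa))  # 150 -- how much milder the new requirement is
```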

“It is not a high pressure,” says Eva Zurek, a theoretical chemist at the University at Buffalo. “If it can be replicated, [this method] could be very significant.”

Scientists aren’t cheering just yet, however—they’ve seen this kind of an attempt before. In 2020, the same research group claimed they’d found room-temperature superconductivity in a hydride of carbon and sulfur. After the initial fanfare, many of their peers pointed out that they’d mishandled their data and that their work couldn’t be replicated. Eventually, the University of Rochester engineers caved and retracted their paper.

Now, they’re facing the same questions with their lutetium superconductor. “It’s really got to be verified,” says Greene. The early signs are inauspicious: A team from Nanjing University in China recently tried to replicate the experiment, without success.

“Many groups should be able to reproduce this work,” Greene adds. “I think we’ll know very quickly whether this is correct or not.”

But if the new hydride does mark the first room-temperature superconductor—what next? Will engineers start stringing power lines across the planet tomorrow? Not quite. First, they have to understand how this new material behaves under different temperatures and other conditions, and what it looks like at smaller scales.

“We don’t know what the structure is yet. In my opinion, it’s going to be quite different from a high-pressure hydride,” says Zurek. 

If the superconductor is viable, engineers will have to learn how to make it for everyday uses. But if they succeed, the result could be a gift for world-changing technologies.

The first 3D printed rocket launch was both a failure and a success https://www.popsci.com/technology/relativity-space-terran-launch/ Fri, 24 Mar 2023 15:00:00 +0000 https://www.popsci.com/?p=522693
Upper portion of Relativity Space's 3D printed Terran rocket at night prior to launch
Relativity's Terran rocket remains impressive, despite failing its debut launch. Relativity Space

Relativity Space's Terran rocket failed to achieve orbit, but still moved the industry forward.

The post The first 3D printed rocket launch was both a failure and a success appeared first on Popular Science.


Third time was unfortunately not the charm for Relativity Space. After two scrubbed attempts, Terran—the aerospace startup’s 110-foot rocket largely composed of 3D-printed materials—lifted off from Cape Canaveral Space Force Station on Wednesday night and completed its first-stage flight. Unfortunately, it failed to reach its intended 125-mile-high orbit. Instead, the uncrewed vehicle’s second stage briefly ignited before shutting off entirely and subsequently plummeting into the Atlantic Ocean. Still, there’s much to celebrate for the upstart rocket company.

Supporters hope Relativity’s Terran rocket, which is made from 85-percent 3D-printed metal materials, will prove a major step forward for the company as it attempts to compete within the private spacefaring industry alongside the heavy hitters of Elon Musk’s SpaceX and Jeff Bezos’ Blue Origin. During its second launch attempt earlier this month, Terran came within less than a second of takeoff before aborting the flight after its first stage rockets malfunctioned.

[Related: What to expect from space exploration in 2023.]

Founded in 2015, Relativity Space aims to create a line of entirely 3D-printed, reusable rockets for a variety of potential projects, including a goal to transport the first commercial mission to Mars. According to its official website, the company builds its vehicles in Long Beach, California, using a combination of massive 3D printers, artificial intelligence aids, and autonomous robotics. Relativity claims this approach requires 100 times fewer parts, and that production can be finished in less than 60 days.

The commitment to 3D-printed material even extends as far as Relativity’s line of Aeon rocket engines, with reduced part counts within the igniters, turbopumps, combustion chambers, thrusters, and pressurization systems. Each engine uses a combination of liquid oxygen and liquid natural gas as propellants.

[Related: How loud is a rocket launch?]

Success, in this case, is… well, relative. As TechCrunch notes, very few space launch platforms ever achieve orbit on their first flight. Additionally, Terran withstood max q, the point in ascent when the vehicle encounters the greatest atmospheric stress and resistance, and it successfully cut off its main engines and separated from the first stage as planned. In this sense, Relativity proved that 3D-printed rockets can hold up during some of the most intense moments of an orbital launch, which is certainly reason enough to celebrate.

“Maiden launches are always exciting and today’s flight was no exception,” said Relativity Space launch commentator Arwa Tizani Kelly following Wednesday’s launch attempt.

Representatives for Relativity did not respond to a request for comment at the time of writing. It is unknown whether Wednesday’s results will affect future rocket launch timelines, including plans to test the company’s larger Terran R in 2024.

The secret to Voyagers’ spectacular space odyssey https://www.popsci.com/science/voyager-1-and-2-engineering/ Tue, 21 Mar 2023 13:00:00 +0000 https://www.popsci.com/?p=521007
Deep Space photo
Christine Rösch

'Simple' hardware and software from the 1970s pushed the Voyager mission to the solar system's edge. But how long can it keep going?

The post The secret to Voyagers’ spectacular space odyssey appeared first on Popular Science.


IN 1989, rock-and-roll legend Chuck Berry attended one of the biggest parties of the summer. The bash wasn’t a concert, but a celebration of two space probes about to breach the edge of our solar system: NASA’s Voyager mission

Launched from Cape Canaveral, Florida, in 1977, identical twins Voyager 1 and 2 embarked on a five-year expedition to observe the moons and rings of Jupiter and Saturn, carrying with them Golden Records preserving messages from Earth, including Berry’s smash single “Johnny B. Goode.” But 12 years later, out on the grassy “Mall” of NASA’s Jet Propulsion Laboratory, scientists celebrated as Voyager 2 made a previously unscheduled flyby of Neptune. Planetary scientist Linda Spilker remembers the bittersweet moment: the sight of the eighth planet’s azure-colored atmosphere signaled the end of the mission’s solar system grand tour.

“We kind of thought of it as a farewell party, because we’d flown by all the planets,” says Spilker. “Both of them were well past their initial lifetimes.”

Many in the scientific community expected the spacecraft to go dark soon after. But surprisingly, the pair continued whizzing beyond the heliopause into interstellar space, where they’ve been wandering ever since, for more than three decades. Spilker, now the Voyager mission project scientist, says the probes’ journeys have shed light on the universe we live in—and ourselves. “It’s really helped shape and change the way we think about our solar system,” she says. 

Currently traveling at distances between 12 and 14 billion miles from Earth, Voyager 1 and 2 are the oldest, farthest-flung objects ever forged by humanity. Nearly five decades on, the secret to the Voyagers’ apparent immortality is most likely their robust design—and their straightforward, redundant technology. By today’s standards, each machine’s three separate computer systems are primitive, but that simplicity, as well as construction from the best materials available at the time, has played a large part in allowing the twins to survive. 

For example, the spacecraft’s short list of commands proved versatile as they hopped from one planet to the next, says Candice Hansen-Koharcheck, a planetary scientist who worked with the mission’s camera team. This operational flexibility allowed engineers to turn the Voyagers into scientific chameleons, adapting to one new objective after another.

As the machines puttered far from home, new discoveries, like active volcanoes on Jupiter’s moon Io and a possible subsurface ocean on neighboring Europa, helped us realize that “we weren’t in Kansas anymore,” says Hansen-Koharcheck. Since then, many of the tools that have contributed to the Voyagers’ success, such as optics and multiple fail-safes, have been carried over to other long-term space missions, like the Cassini Saturn probe and the Mars Reconnaissance Orbiter. 

Both Voyagers are expected to transmit data back to Earth until about 2025—or until the spacecraft’s plutonium “batteries” are unable to power critical functions. But even if they do cease contact, it’s unlikely they will crash into anything or ever be destroyed in the cosmic void. 

Instead, the Voyagers may travel the Milky Way eternally, both alone and together in humanity’s most spectacular odyssey. 

Read more PopSci+ stories.

The tricky search for just the right amount of automation in our cars https://www.popsci.com/technology/alliance-innovation-lab-autonomy-tech/ Mon, 20 Mar 2023 22:00:00 +0000 https://www.popsci.com/?p=521306
the nissan ariya
The Ariya, an EV. Nissan

The director of the Alliance Innovation Lab wants there to always be a human in the loop when it comes to vehicles that can drive themselves.

The post The tricky search for just the right amount of automation in our cars appeared first on Popular Science.


Nestled in the heart of California’s high-tech Silicon Valley is the Alliance Innovation Lab, where Nissan, Renault, and Mitsubishi work in partnership. The center is a cradle-to-concept lab for projects related to energy, materials, and smart technologies in cities, all with an eye toward automotive autonomy.

Maarten Sierhuis, the global director of the laboratory, is both exuberant and realistic about what Nissan has to offer as electric and software-driven vehicles go mainstream. And it’s not the apocalyptic robot-centric future portrayed by Hollywood in movies like Minority Report.

“Show me an autonomous system without a human in the loop, and I’ll show you a useless system,” Sierhuis quips to PopSci. “Autonomy is built by and for humans. Thinking that you would have an autonomous car driving around that never has to interact with any person, it’s kind of a silly idea.”

Lessons from space

Educated at The Hague and the University of Amsterdam, Sierhuis is a specialist in artificial intelligence and cognitive science. For more than a dozen years, he was a senior research scientist for intelligent systems at NASA. There, he collaborated on the invention of a Java-based programming language and human behavior simulation environment used at NASA’s Mission Control for the International Space Station.

Based on his experience, Sierhuis says expecting certain systems to fail is wise. “We need to figure there is going to be failure, so we need to design for failure,” he says. “Now, one way to do that—and the automotive industry has been doing this for a long time—is to build redundant systems. If one fails, we have another one that takes over.”

[Related: How Tesla is using a supercomputer to train its self-driving tech]

One vein of research has Nissan partnering with the Japan Aerospace Exploration Agency (JAXA) to develop an uncrewed rover prototype for NASA. Based on Nissan’s EV all-wheel drive control technology (dubbed e-4ORCE) used on the brand’s newest EV, Ariya, the rover features front and rear electric motors to navigate challenging terrain. 

Sierhuis calls the Ariya Nissan’s most advanced vehicle to date. It is a stepping stone toward combining all the technology the lab is working on in one actual product. He and the team have switched from a Leaf to an Ariya for their hands-on research, even simulating lunar dust to test the system’s capabilities in space.

‘There is no autonomy without a human in the loop’

There is an air of distrust of autonomous technology from some car buyers, amplified by some high-profile crashes involving Tesla’s so-called “Full Self-Driving” vehicles.

“It’s hard for OEMs to decide where and how to bring this technology to market,” Sierhuis says. “I think this is part of the reason why it’s not there yet, because is it responsible to go from step zero or step one to fully autonomous driving in one big step? Maybe that’s not the right way to teach people how to interact with autonomous systems.”

From the lab team’s perspective, society is on a learning curve, so the team is ensuring that the technology is rolled out gradually and responsibly. Nissan’s approach is to carefully calibrate its systems so the car doesn’t simply take over. Computing is developed for people, Sierhuis says, and people should always be at the center of it. That’s not just about the system itself; driving should still be fun.

“There is no autonomy without a human in the loop,” he says. “You should have the ability to be the driver yourself and maybe have the autonomous system be your co-driver, making you a better driver, and then use autonomy when you want it and use the fun of driving when you want it. There shouldn’t be an either-or.”

[Related: Why an old-school auto tech organization is embracing electrification]

The Ariya is equipped with Nissan’s latest driver-assist suite, enhanced by seven cameras, five millimeter-wave radars, and 12 ultrasonic sonar sensors for accuracy. A high-definition 3D map predicts the road surface, and on certain roads, Nissan says the driver can take their hands off the wheel. That doesn’t mean a nap is in order, though; a driver-attention monitor ensures the driver is still engaged.

New driver-assistance technologies raise questions about the relationship between technology and drivers-to-be: What if someone learns how to drive with a full suite of autonomous features and then tries to operate a car that doesn’t have the technology; will they be flummoxed? Ultimately, Sierhuis says, this is a topic the industry hasn’t fully worked through yet.

Making cities smarter

The Alliance Innovation Lab is also studying the roads and cities where EVs operate. So-called “smart cities” integrate intelligence not just into the cars but into the infrastructure, enabling the future envisioned by EV proponents. Adding intelligence to the environment means, for example, that an intersection can be programmed to interface with a software-enabled vehicle making a right-hand turn toward a crosswalk where pedestrians are present. The autonomous system can alert the driver to a potentially dangerous situation and protect both the driver and those in the vicinity from tragedy.  

Another way to make cities smarter is by improving the efficiency of power across the board. According to the Energy Information Administration (EIA), the average home consumes about 20 kilowatt-hours per day. Nissan’s new Ariya carries an 87-kilowatt-hour battery, enough to power a home for roughly four days. Currently, Sierhuis says, we have a constraint optimization problem: car batteries can store a fantastic amount of energy that could be shared with the grid bi-directionally, but we haven’t figured out how to do that effectively.  
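The arithmetic behind that four-day figure is simple division; a minimal sketch using the numbers quoted above:

```python
# Back-of-envelope check on the quoted figures. Note the units:
# the battery stores kilowatt-HOURS (energy); a kilowatt alone is power.
HOME_DAILY_USE_KWH = 20  # EIA average daily consumption, as quoted
ARIYA_BATTERY_KWH = 87   # Ariya battery capacity

days_of_backup = ARIYA_BATTERY_KWH / HOME_DAILY_USE_KWH
print(days_of_backup)  # 4.35 -- "about four days" of household power
```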

On top of that, car batteries discharge in larger bursts than household loads demand, and they have a limited working life before they must be retired from vehicles. That doesn’t mean the batteries are trash at that point; on the contrary, they retain quite a bit of energy capacity in a second life. Nissan has been harnessing both new and used Leaf batteries, working in tandem with a robust solar array, to power Amsterdam’s giant Johan Cruijff Arena soccer stadium since 2018. That same year, Nissan kicked off a project with the British government to install 1,000 vehicle-to-grid charging points across the United Kingdom. It’s just a taste of what the brand and its lab see as a way to overcome the infrastructure issues erupting around the world as EVs gain traction.

Combining EV batteries and smart technology, Nissan envisions a way for vehicles to communicate with humans and the grid to manage the system together, in space and here on Earth.

Our homes on Mars could be made from potato-based ‘StarCrete’ https://www.popsci.com/technology/mars-starcrete-potato/ Mon, 20 Mar 2023 20:30:00 +0000 https://www.popsci.com/?p=521245
Two hands holding pile of potatoes
Potato starch, salt, and Martian dirt could make astronauts' homes. Deposit Photos

Just add astronaut tears.

The post Our homes on Mars could be made from potato-based ‘StarCrete’ appeared first on Popular Science.


Humans may not arrive on Mars for (at least) a decade or two, but when they do get there, they’ll need shelter of some kind. To help toward that end, researchers from the University of Manchester in England have developed a new building material for future visitors to Mars that is twice as strong as traditional concrete and primarily composed of just potato starch, a bit of salt, and Martian dirt. It’s even already got a solid name to boot: StarCrete.

Judging by what is known about the environment on the Red Planet, humans won’t have a whole lot to work with once they get to Mars. That will be a challenge, since space for supplies will be limited on the rides over, so astronauts will need to be extremely resourceful to make things work. Sturdy structures are key to survival, and while there are a number of high-tech possibilities, one of the most promising and strongest could be comparatively one of the simplest to achieve.

[Related: With Artemis 1 launched, NASA is officially on its way back to the moon.]

As recently detailed in a paper published in the journal Open Engineering, a team at the University of Manchester capitalized on the fact that potato starch is likely to feature on the menus of any upcoming Mars excursions. According to the team’s estimates, a roughly 25 kg (55 pound) sack of dehydrated potatoes contains enough starch for half a metric ton of their StarCrete—enough for around 213 bricks. By combining the starch with salt and magnesium chloride, taken either from Martian soil or even from astronauts’ own tears, the team increased StarCrete’s strength dramatically; the material can even be baked at normal microwave or home-oven temperatures.
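A back-of-envelope sketch of those estimates (the per-brick mass is derived here, not quoted by the team):

```python
# Rough arithmetic behind the researchers' StarCrete estimates.
SACK_KG = 25          # dehydrated potatoes per sack
STARCRETE_KG = 500    # half a metric ton of StarCrete per sack
BRICKS_PER_SACK = 213

print(STARCRETE_KG / SACK_KG)  # 20.0 -- each kg of potatoes yields ~20 kg of StarCrete
print(round(STARCRETE_KG / BRICKS_PER_SACK, 2))  # 2.35 -- kg of StarCrete per brick
```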

In their own laboratory tests using simulated Martian regolith—aka dirt—scientists measured a compressive strength of 72 megapascals (MPa), roughly twice regular concrete’s 32 MPa rating. As an added bonus, a similar mixture using mock moon dust showed a compressive strength of over 91 MPa, meaning the lunar StarCrete variant is also a possibility for humans’ upcoming return to the moon.

[Related: NASA’s Curiosity rover captures a moody Martian sunset for the first time.]

Aled Roberts, the project’s lead researcher and a fellow at the university’s Future Biomanufacturing Research Hub, explained StarCrete can step in as alternative options remain far off from practical implementation. “Current building technologies still need many years of development and require considerable energy and additional heavy processing equipment, which all adds cost and complexity to a mission,” Roberts said in a statement, adding, “StarCrete doesn’t need any of this and so it simplifies the mission and makes it cheaper and more feasible.”

Meanwhile, Roberts’ team isn’t waiting for StarCrete’s potential Martian benefits. Their startup, DeakinBio, is looking to see how similar material could be employed here on Earth as a cheap, greener, alternative to existing concrete materials. At least none of the new building options require Roberts’ suggestion from previous research—a mixture that required human urine and blood for solidification.

Engineers created a paper plane-throwing bot to learn more about flight https://www.popsci.com/technology/paper-airplane-robot-epfl/ Sat, 18 Mar 2023 11:00:00 +0000 https://www.popsci.com/?p=520729
paper airplanes
Paper airplane designs are being put formally to the test. Matt Ridley / Unsplash

The bot made and launched more than 500 planes with dozens of designs. Here’s what happened.

The post Engineers created a paper plane-throwing bot to learn more about flight appeared first on Popular Science.


How you fold a paper airplane can determine how fast or how far it goes. A lot of people arrive at the best designs through trial, error, and perhaps a little bit of serendipity. The paper plane can be modeled after the structure of a real aircraft, or something like a dart. But this question is no child’s play for engineers at the Swiss Federal Institute of Technology Lausanne (EPFL). 

A new paper out in Scientific Reports this week proposes a rigorous, technical approach for testing how the folding geometry can impact the trajectory and behavior of these fine flying objects. 

“Outwardly a simple ‘toy,’ they show complex aerodynamic behaviors which are most often overlooked,” the authors write. “When launched, there are resulting complex physical interactions between the deformable paper structure and the surrounding fluid [the air] leading to a particular flight behavior.”

To dissect the relationship between a folding pattern and flight, the team developed a robotic system that can fabricate, test, analyze, and model the flight behavior of paper planes. This robot paper plane designer (really a robot arm fashioned with silicone grippers) can run through this whole process without human feedback. 

A video of the robot at work. Obayashi et al., Scientific Reports

[Related: How to make the world’s best paper airplane]

In this experiment, the bot arm made and launched over 500 paper airplanes with 50 different designs. Then it used footage from a camera that recorded the flights to obtain stats on how far each design flew and the characteristics of that flight. 

Flying behaviors with paths mapped. Obayashi et al., Scientific Reports

During the study, while the paper planes did not always fly the same, the researchers found that the different shapes could be sorted into three broad “behavioral groups.” Some designs follow a nose-dive path, which, as you might imagine, means a short flight before plunging to the ground. Others glide, descending at a consistent, relatively controlled rate and covering a longer distance than the nose dive. The third type is the recovery glide, in which the paper creation descends steadily before leveling off and holding a certain height above the ground.

“Exploiting the precise and automated nature of the robotic setup, large scale experiments can be performed to enable design optimization,” the researchers noted. “The robot designer we propose can advance our understanding and exploration of design problems that may be highly probabilistic, and could otherwise be challenging to observe any trends.”

When they say the problem is probabilistic, they are referring to the fact that every design can vary in flight across different launches. In other words, folding a paper plane the same way each time doesn’t guarantee that it will fly the exact same way. This insight also applies to the changeable flight paths of small flying vehicles. “Developing these models can be used to accelerate real-world robotic optimization of a design—to identify wing shapes that fly a given distance,” they wrote. 
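As a toy illustration of that probabilistic framing, here is a minimal simulation sketch; the "glide" distance distribution below is invented for the example, not taken from the study's data:

```python
import random

def simulated_flight(mean_m, spread_m, rng):
    """One launch of a fixed design; the distance varies run to run."""
    return max(0.0, rng.gauss(mean_m, spread_m))

rng = random.Random(42)  # fixed seed so the sketch is repeatable
# Hypothetical "glide" design: ~4 m average flight with noticeable scatter
launches = [simulated_flight(4.0, 0.8, rng) for _ in range(500)]

mean_distance = sum(launches) / len(launches)
print(f"mean over 500 launches: {mean_distance:.2f} m")
print(f"best: {max(launches):.2f} m, worst: {min(launches):.2f} m")
```

Identical folds, different outcomes: only the distribution of flight distances, not any single launch, characterizes the design.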

The post Engineers created a paper plane-throwing bot to learn more about flight appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

These 3D printed engines can power space-bound rockets—or hypersonic weapons https://www.popsci.com/technology/3d-printed-rocket-engines/ Thu, 16 Mar 2023 15:11:40 +0000 https://www.popsci.com/?p=520110
Rockets firing from the ground into a black sky with moons and flames. Illustrated.
Ard Su for Popular Science

A Colorado company is fabricating powerful engines with names like Hadley and Ripley. Here's why.

The post These 3D printed engines can power space-bound rockets—or hypersonic weapons appeared first on Popular Science.


ON THE COLORADO PLAINS just below the Rocky Mountains, near the quaint town of Berthoud, lies the headquarters of a space company called Ursa Major. There, just about an hour’s drive north of Denver, the company regularly test-fires rocket engines straight out the back of an onsite bunker. 

These engines, which are mostly 3D printed, aren’t just for launching satellites into space: They’re also of interest to the US military for propelling hypersonic vehicles. Their dual-use nature is a modern manifestation of the two faces rocket technology has always had: it is simultaneously useful for defensive and offensive purposes, and for cosmic exploration.

With this technology in hand, the company hopes to get both civilian and military projects off the ground.

3… 2…1… liftoff

Joe Laurienti, who founded Ursa Major in 2015, grew up not too far from Berthoud. His father worked for Ball Aerospace—the cosmic arm of the company that makes a whole lot of aluminum cans, and the former owner of Ursa Major’s current 90-acre site. “He was always working on satellites,” says Laurienti. But when Laurienti went to see one of his father’s payloads launch, he thought, “The thing my dad worked on is really important. It’s on top of this rocket. But the fire coming out the bottom is way more exciting.”

Laurienti has been chasing that fire ever since, his life consumed by propulsion: the technology that makes rockets go up fast enough to counteract gravity and reach orbit. As an adult, he joined SpaceX’s propulsion team, then slipped over to Blue Origin—hitting two of the trifecta of space-launch companies owned by famous billionaires. (The third is Richard Branson’s Virgin Galactic.)

Soon, Laurienti saw others in the industry trying to start commercial rocket companies. He, perhaps biased, didn’t think that was a good idea: The heavy hitters that were founded first would obviously win, and the others would just be also-rans.

Nevertheless, he thought he had a startup to contribute to the mix: one that wouldn’t make entire rockets but just engines, to sell to rocket companies—much like General Electric makes engines that propel aircraft from Boeing or Airbus. “I spent my career on the engines, and that was always kind of a pain point” for the industry, says Laurienti.

Rocket engines, of course, are pretty important for heaving the space-bound vehicle upward. “A little over 50 percent of launch failures in the last 10 years are propulsion-related,” explains Bill Murray, Ursa’s vice president of engineering, who’s known Laurienti since they were both undergrads at the University of Southern California. You can take that to mean that half the complexity of a rocket exists inside the engines. Take that out of some rocket maker’s equation for them? Their job theoretically gets a lot easier.

“That’s the next wave of aerospace,” thought Laurienti. “It’s specialization.” 

With that idea, he sold his SpaceX stock in preparation for his new venture. “Instead of buying a house and starting a family, I bought a 3D printer, started the company, and made my mom cry,” he says.

rocket engine test
Testing an engine called Ripley. Ursa Major

3D printing engines—and entire rockets

The 3D printer was key to Laurienti’s vision. Today, 80 percent of a given Ursa engine is 3D printed with a metal alloy—and printed as a unit, rather than as separate spat-out elements welded together later. Most space companies use additive manufacturing (another way to refer to 3D printing) to some degree, but in general, they aren’t 3D printing the majority of their hardware. And they also aren’t, in general, designing their space toys to take advantage of 3D printing’s special traits, like making a complicated piece of hardware as one single part rather than hundreds.

That kind of mindset is also important at another company, Relativity Space, which has 3D printed basically an entire rocket—including the engines. Its Terran 1 rocket is the largest 3D printed object on Earth. The team attempted to launch the rocket on March 8 and 11, but it ultimately scrubbed the shots both times due to issues with ground equipment, fuel pressure, and automation systems.

Like Laurienti, Relativity founder Tim Ellis noticed a reluctance to fully embrace 3D printing tech at traditional space companies. At Blue Origin, his former employer, Ellis was the first person to do metal 3D printing; he was an intern desperate to finish creating a turbo pump assembly before his apprenticeship was over. Later, as a full employee, Ellis would go on to start and lead a metal 3D printing division at the company. 

But the way traditional space companies like Blue Origin usually do 3D printing didn’t work for him, because he felt that it didn’t always include designing parts to take advantage of additive manufacturing’s unique capabilities. “Every 3D printed part that Relativity has made would not be possible to build with traditional manufacturing,” says Ellis. The result of that approach has been “structures that ended up looking highly integrated, [because] so many parts of our rocket engine, for example, are built in single pieces.” Those one-part pieces would, in traditional manufacturing, have been made of up to thousands of individual pieces.

He thought more people would have come over to this side by now. “It’s been a lot slower than I’ve expected, honestly, to adopt 3D printing,” he says. “And I think it’s because it’s been slower for people to realize this is not just a manufacturing technology. It’s a new way to develop products.”

Five times the speed of sound

Initially, Ursa Major’s business model focused on space launch: getting things to orbit, a process powered by the company’s first engine, called Hadley. The design, currently still in production, slurps liquid oxygen and kerosene to produce 5,000 pounds of thrust. That’s about the same as the engines on Rocket Lab’s small Electron vehicle, or Virgin Orbit’s LauncherOne spaceplane.

But then an early customer—whose name Laurienti did not share—approached the company about a different application: hypersonics. These vehicles are designed to fly within Earth’s atmosphere at more than five times the speed of sound. Usually, when people discuss hypersonics, they’re talking about fast-moving, maneuverable weapons. 

“Hey, we were buying rocket engines from someone else, but they’re not really tailored for hypersonics,” Laurienti recalls this customer saying. “You’re [in] early development. Can you make some changes?” 

They could, although it wouldn’t be as easy as flipping a switch. Hypersonic vehicles often launch from the air—from the bottom of planes—whereas rockets typically shoot from the ground on their way to space. Hypersonics also remain within the atmosphere. That latter part is surprisingly hard, in the context of high speeds.  

Just like rubbing your hand on fabric warms both up, rubbing a hypersonic vehicle against the air raises the temperature of both. “The atmosphere around you is glowing red, trying to eat your vehicle,” says Laurienti. That heat, which creates a plasma around the craft, also makes it hard to send communications signals through. Sustaining high speeds and a working machine in that harsh environment remains a challenge.

But the company seems to have figured out how to make Hadley, which is now in its fourth iteration, work in the contexts of both launching a rocket to space and propelling a hypersonic vehicle that stays within Earth’s atmosphere. As part of one of Ursa Major’s contracts, the military wanted the engine to power an aircraft called the X-60A, a program run by the Air Force Research Lab. The X-60A was built as a system on which hypersonic technologies could fly, to test their mettle and give engineers a way to clock the weapons’ behavior.

Hypersonic weapons—fast, earthbound missiles—aren’t actually faster than intercontinental ballistic missiles (ICBMs), which carry nuclear warheads and arc up into space and then back down to their targets. But they’re of interest and concern to military types because they don’t have to follow trajectories as predictable as ICBMs do, meaning they’re harder to track and shoot down. Russia, China, India, France, Australia, Germany, Japan, both Koreas, and Iran all have hypersonic weapon research programs.

To intercept these fast-moving weapons, a country might need its own hypersonics, so there’s a defensive element and an offensive one. That’s partly why the Department of Defense has invested billions of dollars in hypersonics research, in addition to its desire to keep up with other countries’ technological abilities. That, of course, often makes other countries want to keep pace or get ahead, which can lead to everyone investing more money in the research.

A long-standing duality

Rocket technology, often touted as a way for humans to explore and dream grandly, has always had a military connection—not implicitly, but in a burning-bright obvious way. “[Nazi Germany’s] V-2 rocket was the progenitor to the intercontinental ballistic missiles,” says Lisa Ruth Rand, an assistant professor of history at Caltech, who focuses on space technologies and their afterlives.

Space-destined rockets were, at least at first, basically ballistic missiles. After all, a powerful stick of fire is a powerful stick of fire, no matter where it is intended to go. And that was true from the Space Age’s very beginning. “The R-7 rocket that launched Sputnik was one of the first operational ICBMs,” says Rand. The first American astronauts, she continues, shot to space on the tip of a modified Redstone ballistic missile. Then came Atlas rockets and Titan rockets, which even share the same names as the US missiles that were souped up to make them.

Rockets and flying weapons also share a kind of philosophical lineage, in terms of the subconscious meaning they impart on those who experience their fire. “They really shrunk the world, in a lot of ways, in time and space,” says Rand. “Accessing another part of the world, whether you were launching a weapon or a satellite, really made the world smaller.”

Today, in general, the development of missile technology has been decoupled from space-launch technology, as the rockets intended for orbit have been built specifically for that purpose. But it’s important not to forget where they came from. “They still all descend from the V-2 and from these military rockets,” says Rand. “And also most of them still launch DOD payloads.”

In a lot of ways, a 3D printed rocket engine that can both power a hypersonic vehicle and launch a satellite into orbit is the 21st-century manifestation of the duality that’s been there from the beginning. “Maybe it’s just saying the quiet part out loud,” says Rand. “What’s happening here—that was always kind of the case. But now we’re just making it very clear that, ‘Yeah, this has got to be used for both. We are building a company and this is our market and, yes, rockets are used for two main things: satellites and launching weapons.’”

rocket engine test
A fiery scene in Colorado: The Ripley engine fires. Ursa Major

‘A shock hitting your chest’ 

It’s no surprise that hypersonic capabilities have gotten their share of American hype—not all of it totally deserved. As defense researchers pointed out in Scientific American recently, the US has for decades put ballistic missiles on steerable maneuvering reentry vehicles, or MaRVs. Although they can only shift around toward the end of their flight, they can nonetheless change their path. Similarly, the scientists continued, while a lower-flying hypersonic might evade radar until it approaches, the US doesn’t totally rely on radar for missile defense: It also has infrared-seeking satellites that could expose a burning rocket engine like Hadley.

Still, the Air Force has been interested in what Ursa Major might be able to contribute to its hypersonic research, having funded seven programs with the company, according to the website USA Spending, which tracks federal contracts and awards. In fact, the Air Force is Ursa’s only listed government customer, having invested a few million dollars in both the hypersonic and space-launch sides of the business. It’s also responsible for two of Relativity’s four federal awards.

Also of national security interest, of late, is decreasing the country’s reliance on Russian rocket engines for space launch. To that end, Ursa Major has a new engine, called Arroway, in development, which boasts 200,000 pounds of thrust. “Arroway engines will be one of very few commercially available engines that, when clustered together, can displace the Russian-made RD-180 and RD-181, which are no longer available to US launch companies,” the company said last June. It is also developing a third, in-between engine called Ripley, a scaled-up version of Hadley. 

Today, Ursa Major tests their 3D printed engines up to three times daily. On any given day, visitors in Berthoud might unknowingly be near six or nine high-powered experiments. When the static rocket engine begins its test, huge vapor clouds from the cryogenics can envelop an engineer. 

“When it lights, it’s just a shock hitting your chest,” says Laurienti. A cone of flames shoots from the back of the engine, toward a pile of sand in the field behind the bunker. Onlookers face the fire head-on, their backs to the mountains and their eyes on the prize.

Read more PopSci+ stories.

The post These 3D printed engines can power space-bound rockets—or hypersonic weapons appeared first on Popular Science.

A $25 whistle-like tool could be a game changer for COPD patients https://www.popsci.com/technology/pep-buddy-copd/ Mon, 13 Mar 2023 13:00:00 +0000 https://www.popsci.com/?p=519091
PEP Buddy breathing aid on lanyard for COPD patients
The PEP Buddy helps slow patients' breathing to regulate air flow and improve oxygen levels. University of Cincinnati

The PEP Buddy is cheap, uses no electronics, and could help regulate breathing for COPD sufferers.

The post A $25 whistle-like tool could be a game changer for COPD patients appeared first on Popular Science.


Nearly 16 million Americans suffer from chronic obstructive pulmonary disease (COPD). The often severe respiratory issues can dramatically influence patients’ day-to-day lives, making even once-simple physical tasks like walking and going to the store incredibly difficult, and sometimes even life-threatening. 

While there are a number of treatments and medications available, the therapies are often expensive, complicated, and time-consuming. Recently, however, researchers designed a cheap, simple, tiny tool that could not only alleviate COPD patients’ breathing issues, but offer relief for others dealing with anxiety and stress, as well as aid practitioners of meditation and yoga.

[Related: Seniors are struggling with chronic anxiety, but don’t seek treatment.]

As detailed in a new paper published in the journal Respiratory Care, a team at the University of Cincinnati has created a new positive-expiratory pressure (PEP) device, roughly the size and shape of a whistle, that attaches to a lanyard so users can keep it on them during everyday activities. Unlike existing PEP products, which are often handheld, bulky, and expensive, Muhammad Ahsan Zafar and Ralph Panos’ PEP Buddy costs only around $25 and includes no electronics.

Because of respiratory system degradation and weakened airways, it often takes COPD sufferers longer to exhale. When their breathing quickens, such as during physical activity or while stressed, more and more air stays within the lungs, causing “dynamic hyperinflation” that leads to breathlessness and lower oxygen levels. This compounds over time, and often restricts or discourages further physical movement and exertion, which can then worsen existing COPD symptoms.

[Related: How to make the most of meditation with science.]

To combat these problems, users put the device in their mouth as needed, just as they would a whistle, then breathe in through the nose and exhale through the product. PEP Buddy’s design relies on creating slight back pressure as users breathe out, slowing their exhalations to better regulate air flow. In their studies, Zafar and Panos found that around 72 percent of patients using PEP Buddy over a two-week period reported a “significant impact” in reducing shortness of breath while also improving their everyday living. What’s more, over a third of study participants showed no drop in oxygen levels while PEP Buddy was in use.
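The back-pressure mechanism can be sketched with a toy orifice model: for turbulent flow through a small opening, pressure drop rises roughly with the square of flow rate, so holding pressure at a comfortable level forces a slower, longer exhalation. The resistance constant, target pressure, and lung volume below are illustrative assumptions, not PEP Buddy specifications.

```python
# Toy model: back pressure across a small orifice scales roughly as
# dP = k * Q**2 (turbulent flow). All constants here are illustrative
# assumptions, not measured device parameters.

def flow_for_pressure(target_cm_h2o, k):
    """Flow rate (L/s) that produces the target back pressure."""
    return (target_cm_h2o / k) ** 0.5

def exhale_time(volume_l, flow_lps):
    """Seconds needed to exhale a given volume at a constant flow rate."""
    return volume_l / flow_lps

TIDAL_VOLUME_L = 0.5  # assumed exhaled volume per breath
TARGET_CM_H2O = 10.0  # assumed comfortable back pressure

open_mouth = exhale_time(TIDAL_VOLUME_L, 1.0)  # unrestricted, assumed 1 L/s
with_device = exhale_time(TIDAL_VOLUME_L,
                          flow_for_pressure(TARGET_CM_H2O, k=40.0))
print(open_mouth, with_device)  # the orifice lengthens the exhalation
```

Under these made-up numbers the orifice halves the flow rate and doubles the exhale time, which is the qualitative effect the device aims for: a longer, steadier out-breath.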

Because there are no respiratory medications involved, the PEP Buddy can also be used by anyone looking to simply better regulate their breathwork following intense exercise or while practicing mindfulness and meditation exercises. Going forward, researchers hope to oversee a long-term study to see PEP Buddy’s potential in conjunction with rescue inhalers, alongside emergency room visits and usage within pulmonary rehabilitation programs.

The post A $25 whistle-like tool could be a game changer for COPD patients appeared first on Popular Science.

Butterfly-inspired ‘plasmonic paint’ could be brilliant for energy-efficient buildings https://www.popsci.com/technology/plasmonic-paint-butterflies/ Thu, 09 Mar 2023 17:00:00 +0000 https://www.popsci.com/?p=518388
Butterfly cutouts painted with plasmonic paint hues against grass background
Butterflies' vibrant hues are the result of nanostructural overlays instead of pigment molecules. University of Central Florida

Light reflection off of nanostructural geometric arrangements creates the striking hues.

The post Butterfly-inspired ‘plasmonic paint’ could be brilliant for energy-efficient buildings appeared first on Popular Science.


The exterior paint on a building is often a major factor in keeping its interior appropriately warm or cool, and a lot of work goes into developing new concoctions to improve insulation. Unfortunately, the volatile organic compounds found in modern synthetic paint have been shown to have harmful effects on both the environment and humans. On top of all that, air conditioning still accounts for over 10 percent of all electricity consumption in the US. Thankfully, we have butterflies and squid.

Those species and others inspired a researcher at University of Central Florida’s NanoScience Technology Center to create an ultra-lightweight, environmentally safe “plasmonic paint.” The unique paint relies on nanoscale structural arrangements of aluminum and aluminum oxide instead of traditional pigments to generate its hues. As detailed in Debashis Chanda’s recent paper published in Science Advances, traditional pigment paint colorants rely on their molecules’ light absorption properties to determine colors. Chanda’s plasmonic paint, in contrast, employs light reflection, absorption, and scattering based on its nanostructural geometric arrangements to create its visual palettes.

[Related: Are monarch butterflies endangered in the US?]

“The range of colors and hues in the natural world are astonishing—from colorful flowers, birds and butterflies to underwater creatures like fish and cephalopods,” said Chanda in a statement on Wednesday. Chanda went on to explain that these examples’ structural color serves as their hue-altering mechanism, as two colorless materials combine to produce color.

Compared to traditionally available paint, Chanda’s plasmonic version is dramatically longer lasting, more eco-friendly, and more efficient. Normal paints fade as their pigments lose the ability to absorb light, but plasmonics’ nanostructural attributes ensure the color could remain as vibrant as the day it was applied “for centuries,” claimed Chanda.

A layer of plasmonic paint can achieve full coloration at just 150 nanometers thick, making it arguably the lightest paint in the world, and ensuring magnitudes less is needed for projects. Chanda estimated that just three pounds of plasmonic paint would cover an entire Boeing 747 jet exterior—a job that usually requires around 1,000 pounds of synthetic paint.
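The jet-coverage claim can be sanity-checked with back-of-the-envelope arithmetic using the article’s 150-nanometer layer thickness. The painted area and effective layer density below are assumptions chosen for illustration, not figures from the paper.

```python
# Back-of-the-envelope check of the coverage claim. Only the 150 nm
# thickness comes from the article; the painted area (~3,400 m²) and the
# effective density (~3 g/cm³, between aluminum and aluminum oxide) are
# assumptions for illustration.
THICKNESS_M = 150e-9     # 150 nm layer, from the article
AREA_M2 = 3_400          # assumed painted area of a 747-class airframe
DENSITY_KG_M3 = 3_000    # assumed effective density of the Al/Al2O3 layer

mass_kg = AREA_M2 * THICKNESS_M * DENSITY_KG_M3
mass_lb = mass_kg * 2.20462
print(round(mass_lb, 1))  # lands in the low single digits of pounds
```

Under these assumptions the whole coating comes out to roughly a kilogram and a half, consistent with Chanda’s three-pound estimate, and orders of magnitude below the roughly 1,000 pounds of conventional paint.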

[Related: A new paint can reflect up to 98.1 percent of sunlight.]

And then there’s the energy savings. Plasmonic paint reflects the entire infrared spectrum, thereby absorbing far less heat. During testing, a surface layered with the new substance typically remained 25 to 30 degrees Fahrenheit cooler than a surface painted with commonly available commercial options. That could save consumers bucketloads of cash, not to mention dramatically cut down on the energy needed to power A/C systems.

Chanda said fine-tuning is still needed to improve plasmonics’ commercial viability, as well as scale up production abilities to make it a feasible replacement for synthetic paint. Still, natural inspirations like butterflies could be what ultimately help save their beauty for centuries to come.

“As a kid, I always wanted to build a butterfly,” said Chanda. “Color draws my interest.”

The post Butterfly-inspired ‘plasmonic paint’ could be brilliant for energy-efficient buildings appeared first on Popular Science.

Get a high-tech tour of the long-lost Ironton shipwreck discovered in the Great Lakes https://www.popsci.com/technology/ironton-shipwreck-lake-huron/ Tue, 07 Mar 2023 17:30:00 +0000 https://www.popsci.com/?p=517840
Underwater image of sunken ship, Ironton, in Lake Huron
The three-masted 'Ironton' has been lost at the bottom of Lake Huron for nearly 130 years. NOAA/ Undersea Vehicles Program UNCW

With help from self-driving boats and powerful sonar, the missing 19th century ship was finally discovered.

The post Get a high-tech tour of the long-lost Ironton shipwreck discovered in the Great Lakes appeared first on Popular Science.


A 191-foot-long sunken ship, missing beneath the waves of Lake Huron for almost 130 years, has been discovered nearly intact with the help of self-driving boats and high-powered sonar imaging.

At around 12:30 AM on September 24, 1894, a three-masted schooner barge called the Ironton collided head-on with the wooden freighter Ohio after being cut loose from a tow line in the face of inclement weather. Both vessels quickly sank beneath the waves, and although all of the Ohio’s crew escaped aboard a lifeboat, only two of the Ironton’s crew survived the ordeal. For decades, both pieces of history rested somewhere along the bottom of Lake Huron, their exact locations unknown.

[Related: Watch never-before-seen footage of the Titanic shipwreck from the 1980s.]

In 2017, however, researchers at Thunder Bay National Marine Sanctuary collaborated with the National Oceanic and Atmospheric Administration’s (NOAA) Office of Ocean Exploration and Research to begin search efforts for the roughly 100 ships known to have sunk within the 100-square-miles of unmapped lakebed. Using state-of-the-art equipment including multibeam sonar systems aboard the Great Lakes Environmental Research Lab’s 50-foot-long research vessel, RV Storm, the team scoured the sanctuary’s waters for evidence of long-lost barges, schooners, and other boats.

In May 2017, the teams finally located Ohio’s remnants, although Ironton eluded rediscovery. Two years later, Thunder Bay National Marine Sanctuary set out on another expedition, this time partnered with Ocean Exploration Trust, the organization founded by Robert Ballard, famous for his discoveries of the Titanic, Bismarck, and USS Yorktown. For their new trip, researchers also brought along BEN (Bathymetric Explorer and Navigator), a 12-foot-long, diesel-fueled, self-driving boat built and run by University of New Hampshire’s Center for Coastal and Ocean Mapping. 

Triangulating from the Ohio’s now-known location, along with wind and weather records from the day of the ship’s demise, RV Storm got to work with BEN’s high-resolution multibeam sonar sensor, mapping Lake Huron’s floor for evidence of the Ironton. With only a few days left in their trip, researchers were finally rewarded with 3D sonar scans of a clear, inarguable shipwreck featuring three masts.

Archaeology photo
Sonar imaging of the Ironton Credit: Ocean Exploration Trust/NOAA

[Related: For this deep-sea archaeologist, finding the Titanic at the bottom of the sea was just the start.]

Video footage provided by an underwater remotely operated vehicle (ROV) the following month confirmed their suspicions—there lay the Ironton, almost perfectly preserved thanks to Lake Huron’s extremely cold, clear waters. “Ironton is yet another piece of the puzzle of [the region’s] fascinating place in America’s history of trade,” Ballard said in a statement, adding that they “look forward to continuing to explore sanctuaries and with our partners reveal the history found in the underwater world to inspire future generations.”

Future research expeditions and divers searching for the Ironton’s exact resting place will have no trouble going forward—Thunder Bay National Marine Sanctuary plans to deploy one of its deep-water mooring buoys meant to mark the spot, as well as warn nearby travelers to avoid dropping anchors atop the fragile remains. The Ironton’s made it this far in nearly pristine condition, after all.

The post Get a high-tech tour of the long-lost Ironton shipwreck discovered in the Great Lakes appeared first on Popular Science.

How to use the power of mushrooms to improve your life https://www.popsci.com/environment/how-to-use-mushrooms-creatively/ Tue, 07 Mar 2023 13:00:06 +0000 https://www.popsci.com/?p=517411
Beech mushrooms growing on a substrate against a gold background
Beech mushrooms. Ted Cavanaugh for Popular Science

Enter the worlds of mushroom dyeing, mycotecture, and more.

The post How to use the power of mushrooms to improve your life appeared first on Popular Science.


YOU’RE WALKING through a forest. The soil is soft beneath your feet, and the sun is shining brightly through the dark green treetops. To your left, you see rotten logs with dense clusters of oyster mushrooms. On your right, a thick bundle of chanterelles sprouts from the leaf-littered floor. Farther off the beaten path are stout-looking porcinis, frequently with a colony of poisonous fly agarics nearby, and, maybe, a bunch of magic blue gyms—those might ruin your nature walk, though. 

The mushroom kingdom holds many shapes and secrets beyond those of the little white buttons and baby bellas found at the grocery store. Ethical foraging is one of the easiest and most valuable ways to incorporate an array of mushrooms into your life; to get started, you can join a mycology group or contact a local guide to learn how to harvest edible fungi safely and sustainably.

But there are more creative ways to incorporate the power of mushrooms into your days. Fungi are a versatile and adaptable group, which is why they offer a range of benefits to a variety of people. They’re a multifaceted food source, providing fiber, protein, and other nutrients. They can be used to create dyes, build structures, or breed new strains of mushrooms. In essence, they’re really cool, and they’re inspiring biologists, artists, and engineers to develop practices that can make the world prosper. Here’s a mini-tour of what the flourishing field of mushrooming has to offer.

Pink oyster mushrooms
Pink oyster mushrooms. Ted Cavanaugh for Popular Science

Shopping for mushrooms 

Head to the supplement aisle in any health food store, and you’re bound to find shelf space dedicated to the medicinal wonder of mushrooms. Research on fruit flies and mice shows that cordyceps, popular among consumers (and apocalyptic TV shows), has anti-cancer properties and possibly anti-aging effects, too. Reishi and turkey tail are coveted for their potential immune-stimulating effects, while lion’s mane may help soften dementia, according to a small pilot study.  

Most of these benefits have been investigated on animals or in test tubes, making it challenging to draw conclusions on human health. If you’re looking for guaranteed results, it’s better to grab fresh, whole mushrooms from the produce section than spend all your money on pills and potions. 

“Eating food is always safer and less expensive than using its supplemental form,” says Lori Chong, a registered dietitian at the Ohio State University Wexner Medical Center. With fungi, you should know which edible varieties are good to cook with. Reishi and turkey tail are not commonly used for culinary purposes because their tough texture and bitter taste make them unpalatable. On the other hand, lion’s mane, shiitake, enoki, and maitake make fine ingredients for a meal, each with its distinct flavors and properties. 

A steady intake of mushrooms can work wonders for our bodies. Eating 18 grams daily could reduce someone’s cancer risk by 45 percent, according to a scientific review of 17 observational studies. Using mushrooms to lessen meat consumption can also help reduce the risk of heart disease by lowering saturated fat in a diet—you can do this by mixing chewy stems and caps with ground meat. And they’re one of a few sources of ergothioneine, an amino acid with anti-inflammatory effects, according to several international medical papers. 

Getting them into your diet isn’t too difficult, says Chong. “Mushrooms make a great addition to any combination of stir-fried vegetables,” she explains. “They are easy to prep and quick to cook. Consider sautéing a package of mushrooms and keeping them in the refrigerator to add to an omelet, spaghetti sauce, sandwich, or salad.” 

Oh, and don’t eat them raw: Farmed mushrooms may contain agaritine, a toxic compound destroyed by heat during the cooking process. Research has found that certain store-bought varieties have less agaritine than freshly picked ones, but questions remain.

When shopping for whole mushrooms, make sure they’re firm to the touch, smooth, and dry on the surface. You don’t want any that look dried out, feel slimy, have big spots of discoloration, or show wet spots. Once you get home, store them in the fridge in a loose bag or a glass container with the lid cracked to prevent moisture buildup and fast spoilage.   

Chestnut mushrooms on blue background
Chestnut mushrooms. Ted Cavanaugh for Popular Science

Dyeing with mushrooms 

Though they’re certainly delicious, there’s much more you can do with mushrooms than eat them, including making pigments for fabric dyes, ink, and all varieties of paint. In fact, the vastness of the fungus kingdom covers every color of the rainbow, says Julie Beeler, a naturalist, teacher, and artist. “Mushrooms contain a variety of different chemical compounds that create colors ranging from red to yellow to blue and colors in between,” says Beeler. “These pigments can be found throughout the mushroom, but for certain species like Cortinarius semisanguineus [the surprise webcap], the color is concentrated in the caps. For Hydnellum caeruleum [the blue and orange hydnellum], the color is throughout the mushroom. And for Hypomyces lactifluorum [the lobster mushroom], it is only the parasitized outer layer.”

Beeler created the website Mushroom Color Atlas as an educational resource for people who want to use mushrooms to make hues. She walks beginners through the process of extracting dyes from 28 fungal varieties that are common in the wild, and she intends to add another 13 in the coming months. Those few dozen specimens can produce more than 800 colors, she notes.

Woman with gray hair and a blue shirt in front of a wall with samples of mushroom paints
Julie Beeler, founder of the Mushroom Color Atlas, turns fungi pigments into paints. Mee Ree Rales

While the practice is growing in popularity, it has centuries of history. Fungi, particularly lichens—complex organisms created by a symbiotic relationship between a fungus and an alga—have been used in cultural practices across North America, North Africa, Asia, and Europe. Prior to the Industrial Revolution, all pigments were processed naturally. Since then, pretty much every dyed item we encounter has been colored using synthetic dyes. “Mushrooms allow you to get back to natural practices that are more regenerative and sustainable for the environment and the planet as a whole,” says Beeler. 

To stain fabrics, she explains, you need a pot, similar to one for making tea. Beeler suggests cutting the fungi into smaller pieces and steeping them for about an hour in hot, but not boiling, water. (A temperature of about 160 degrees Fahrenheit will prevent the compounds from degrading.) When the color of the water has changed, you can dip natural fibers in to dye them. 

The look of your final product will depend on the mushrooms you use and your material. Wool tends to absorb more vibrant, bolder shades from the organisms than other textiles. Cotton, the world’s most widely used fiber, is surprisingly more complicated because it’s cellulose-based and requires a lengthier mordanting process to fix the chemicals to the threads. “You’ll need to be a lot more advanced to get really great colors on cotton,” says Beeler, “but you can get some incredible colors with wool.” 

Strips of mushroom-dyed fibers on a rack
The dyes can also be used to colorize fibers. Micah Fisher

If you’re not getting the look you want, you can alter the pH of the dye bath depending on what the mushroom you’re working with responds to best. Certain species prefer more acidic environments, so you can add vinegar to produce an orange tinge. Or for greater alkalinity, add a sprinkle of sodium carbonate to get a vibrant blue or green. The hues might fade over time with repeated washing or exposure to sunlight, unless you use a mordant like alum to bind them to the fibers.

The best part is that you can find your main materials almost anywhere: while moving dead limbs around your yard, during a walk through the park, or perched upon a strip of grass in a parking lot after a good rain. Some will look like the mushrooms you get from the grocery store, with the expected gills underneath; others will have more novel structures. Boletes, such as the spring king, have a spongy cap and produce a range of beautiful earth tones. Some false gill mushrooms deliver a spectrum of blues, greens, and yellows, depending on which you grab. Tooth fungi have fanglike spines and often produce blues or greens. Another excellent clue to the dyeing potential of a mushroom is whether it’s colorful inside and out. The lobster mushroom, for example, makes a variety of pinks and reds, true to its name. 

“I just love that as I’m walking in different environments, every step I’m taking, I’m thinking about that fungal underground in the soil and the mycelium, this web of connections creating a rainbow beneath my feet,” Beeler says. 

Black king mushroom on a light brown background
Black king mushroom. Ted Cavanaugh for Popular Science

Building on mushrooms

Creating structures with mycelium—the network of fungal filaments that allows mushrooms to grow aboveground—is an exercise in simulating the layers in natural ecosystems. The practice is a chance to think of the presence of trash as an opportunity to create something new. “In the living world, there isn’t really such a thing as waste,” says Merlin Sheldrake, the author of Entangled Life, a bestselling book on mycology. Scraps are always used to create something else, like a scavenger breaking down a carcass. “Are there ways that we can learn from those cyclical processes to behave more like other living organisms do?” Sheldrake continues. “Or will we continue just to produce stuff and then put it in landfills?” 

Building with fungi is a relatively new field that’s in a state of expansion. Mycelium can be used to create packaging, clothing, and even buildings; researchers are working on making the materials more robust and streamlining production. BioHAB, an architectural project in Namibia, for instance, is salvaging the remains of cleared encroacher bush, an indigenous species that drastically reduces usable land and resources, to create a substrate for farming mushrooms. The waste from cultivating the fungi is then compacted into eco-friendly bricks. The end product is strong, flexible, insulative, and soundproof, and can be used to reinforce structures in local villages, BioHAB’s website states. 

Man in blue shirt in warehouse holding a brick of compressed mycelium
Local supervisor Ivan Severus holds one of BioHAB’s signature mycelium-based bricks. MycoHab Ltd.

Similarly, NASA is looking into mycelium-based construction materials for astronaut dwellings on the moon and Mars. These composites are light and easy to transport, offer better protection against radiation, could self-replicate in their new environments as a virtually endless resource, and can be turned into fertilizer at the end of their life spans.

Working with mushroom structures encourages builders to think about the whole cycle of production. “If you’re growing composite material using mycelium and hemp, for example, then you think about where the hemp is coming from,” Sheldrake explains. “Then you start thinking about the fact that you are harnessing a waste stream from another industry to produce the feedstock to grow the fungus.” 

Accessing mycotecture at the consumer level is a bit more complicated, but more opportunities are sprouting up. If you want to wear your mushrooms, luxury fashion houses like Stella McCartney, Balenciaga, and Hermès are experimenting with mycelium leather. In 2021 Hermès introduced a bag in partnership with MycoWorks, a company that develops leatherlike materials in a variety of colors from reishi. 

Sheets of brown mushroom "leather"
MycoWorks’ reishi-sourced material mimics leather. Jesse Green/MycoWorks

Pivoting to mushrooms could, in part, help buffer the effect industrialization has on the planet. Manufacturing is a major cause of environmental degradation, pollution, carbon emissions, and waste. Mushroom-sourced components can offer a break from petrochemicals and plastics if they can be produced sustainably enough and brought to scale. But the field, which is still in its infancy, has a ways to go before it can make an earnest contribution to the use of sustainable goods. 

“These fungal materials are exciting when you step back and look at how all these different industries go together and the possibilities that exist between them,” says Sheldrake. “Unless we rethink the way that we build and produce, then we are going to be in even bigger trouble than we already are.” 

Lion's mane mushroom in front of a blue-green background
Lion’s mane mushroom. Ted Cavanaugh for Popular Science

Growing your own mushrooms

When Tavis Lynch started raising mushrooms in the early 1990s, he approached it as a hobby before expanding into more complicated projects, eventually becoming a professional mycologist and commercial cultivator. He currently grows 20 indoor and outdoor mushroom varieties employing genetic pairing—creating new strains of mushrooms by mating spores from two existing varieties. 

Lynch has made a fruitful career out of something people can do at home. A DIY venture doesn’t have to be complicated. “There are a lot of different ways to grow mushrooms,” Lynch explains. “We can grow them on wheat or oat straw. We can grow them on natural logs. We can grow them on compost. We can even grow them on blended substrates that we create, typically an enriched sawdust or coffee grounds.” 

Most varieties of mushrooms grown at home are used for cooking or medicine. But the first thing to assess is the resources available where you live. Coffee grounds, compost, or sawdust will be the best substrates for anyone living in a major metropolitan area where green space is limited or tightly regulated. For those budding hobbyists, going the kitchen counter route with a tabletop kit, rearing specimens in a basement, or even hanging them somewhere in your shower will be your best bet. (Choosing a shaded, humid spot is the most important element.)

Once you’ve figured out the logistics, including what type of mushroom you want to farm, Lynch suggests finding a spawn supplier—a step that, like growing the fungi, won’t be too hard. “They’re popping up left and right every day because the trend toward home cultivation of mushrooms is massive right now,” he says. Companies such as Tavis’s Mushrooms, North Spore, Field & Forest Products, Earth Angel Mushrooms, and Mushroom Queens offer online ordering and quick shipping across the US.

I ordered a pink oyster mushroom kit online from Forest Origins. Starting the growth process was as simple as Lynch had said it would be: All I had to do was cut into the substrate bag, disturb some of the top layer with a fork, dampen it, and place it on my counter to get indirect sunlight. Then, twice a day, I came by and spritzed it with a water bottle. I started seeing fruiting bodies develop about a week into this daily ritual. Sadly, I accidentally sprayed it with bleach while cleaning and had to order another kit. 

Bleaching aside, checking on my baby mushrooms felt as good as tending to my other plants. Ensuring they had enough sun and moisture gave me a few minutes of grounding amid chaotic days. It was a reminder that nearly everything provided to us by this Earth is beautiful and useful.

“Getting out, working with your hands, having a distraction from your digital devices and from the noise of others and the city—that’s the real medicine,” says Lynch. “I’m looking out my window right now at my mushroom farm, and I wish I was out there working on it.” 


The post How to use the power of mushrooms to improve your life appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

How concept cars hint at a dazzling automotive future https://www.popsci.com/technology/concept-cars-explained/ Mon, 06 Mar 2023 23:00:00 +0000 https://www.popsci.com/?p=517640
The Genesis X Convertible concept. Kristin Shaw

Some concept cars never become production models, while others are more art than anything else. Here's why car makers create them.

The post How concept cars hint at a dazzling automotive future appeared first on Popular Science.


Concept cars are designed to be flights of fancy—showpieces that give automakers the chance to put their creativity on display. Quite often, a concept car represents just a blip on a timeline and a blast of buzzy excitement, later shelved in a museum for all of us to marvel at a company’s foresight or folly. 

A concept, by definition, is an idea; in this case, a concept car is an idea that takes the temperature of the public to see how buyers might react to a set of features and designs. Automakers don’t necessarily release a concept every year, and they have to balance the cost of building a vehicle that may or may not ever see the light of the production line. While it’s true that some concepts fade into oblivion, others become successful models that carry many of the same characteristics as the concept. Even those that are wildly futuristic and wacky lay the groundwork for innovations to come. 

Most recently, truck maker Ram announced the 1500 Rev, the production version of its Revolution EV concept. The Revolution (not the Rev) was unveiled at the Consumer Electronics Show in January, with some exciting features, like coach doors (which open at the center like French doors in a home), and a glass roof that adjusts its tint electronically. But when the production version launched at the Chicago Auto Show in February, some expressed disappointment in how much it looked like its gas-powered sibling. Where were the cool removable third-row seats from the concept? Where was the storage tunnel to hold long objects?

To be fair, automakers—especially when they’re large, public companies—are beholden to not just manufacturing and safety regulations but their shareholders. In the case of the Ram 1500 Rev, the company will build the production vehicle on the new all-electric architecture from its parent company Stellantis instead of the one used by the gas version of the 1500 truck.

Otherworldly concepts

There’s a long history of wild concept cars, many of which never became actual production models.

Consider the otherworldly Berlinetta Aerodynamica Tecnica series commissioned by luxury automaker Alfa Romeo in the mid-1950s. These three cars featured unusual, gorgeous bodies that evoke sea creatures in motion. And somehow, all of them survived in remarkable shape and sold as a set for more than $14 million at auction in 2020. These concepts, which never became production vehicles, were more art than realism, unlike recent modern offerings. 

In 2021, Genesis unveiled its X Concept EV, a sleek coupe with wraparound parallel LED lights defining its curves. Last year, it followed up with the X Concept convertible that peeled back the top and showed off more futuristic details. To our great joy, Automotive News reported that the X Convertible recently got the green light for official production. 

Also under the Hyundai Motor Group, Kia introduced a streamlined concept in 2011 that eventually gave way to the Stinger, which was widely lauded by the industry as a game-changer for the Korean manufacturer. Engineered by a former BMW vice president of engineering and sketched out by a celebrated former Audi designer, the Stinger was finally launched to the world in 2017. It was taller than the concept and included more buttoned-down design features on the outside, but under the hood the performance was impressive, especially the 365-horsepower GT model. A moment of silence for the now-discontinued Stinger, please. Hope springs eternal, as rumors of an all-electric Stinger have been swirling. 

On the gas-powered side, the raw and rowdy Dodge Viper started life as a concept showcased for the first time at the 1989 Detroit auto show. Using an existing truck engine as its base, the concept evolved over three years into the 1992 Viper RT/10 and delighted fast-car enthusiasts for more than two and a half decades until it was discontinued in 2017. 

the ram rev electric pickup truck
The Rev. Ram

From Revolution to Rev

In the same automotive manufacturing family as the Viper, the Ram 1500 Rev moved quickly from concept to production. And while the Rev may not be exactly the same as the Revolution, it retains the benefit of sharing some parts with the gas-powered Ram 1500 pickup. That will both speed production and keep the cost on the manageable side. Ford did the same thing for its F-150 Lightning, which is purposely built to feel familiar to F-150 customers to avoid alienating its loyal base. 

The 1500 Rev will not be equipped with the removable jump seats from the concept, which could have turned the Ram pickup into the first third-row truck. Ryan Nagode, Ram/SRT’s chief designer for interiors, was inspired to add the track seating when he noticed parents hauling around stadium seats to make hours of sitting on the bleachers at their kids’ sporting events more comfortable. He wondered if something like that could be incorporated into the truck and successfully integrated the idea into the cabin of the Revolution concept. 

“There have been vehicles in the past with jump seats, and I think there is a lot of reality built into these ideas,” Nagode told PopSci at the Concept Garage of the Chicago Auto Show in February. “Obviously, some of these things take a little pushing and pulling with the engineering team, but I think it’s not far-fetched.” 

Alas, those seats won’t be included in the Rev, but the seeds of creativity could feasibly show up sometime in the future. 

Inside the lab that’s growing mushroom computers https://www.popsci.com/technology/unconventional-computing-lab-mushroom/ Mon, 27 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=515615
electrodes hooked up to mushrooms
Recording electrical activity of split gill fungi Schizophyllum commune. Irina Petrova Adamatzky

The lead researcher says he is “planning to make a brain from mushrooms.”

The post Inside the lab that’s growing mushroom computers appeared first on Popular Science.



Upon first glance, the Unconventional Computing Laboratory looks like a regular workspace, with computers and scientific instruments lining its clean, smooth countertops. But if you look closely, the anomalies start appearing. A series of videos shared with PopSci show the weird quirks of this research: On top of the cluttered desks, there are large plastic containers with electrodes sticking out of a foam-like substance, and a massive motherboard with tiny oyster mushrooms growing on top of it. 

No, this lab isn’t trying to recreate scenes from “The Last of Us.” The researchers there have been working on stuff like this for a while: the lab was founded in 2001 with the belief that the computers of the coming century will be made of chemical or living systems, or wetware, that are going to work in harmony with hardware and software.

Why? Integrating these complex dynamics and system architectures into computing infrastructure could in theory allow information to be processed and analyzed in new ways. And it’s definitely an idea that has gained ground recently, as seen through experimental biology-based algorithms and prototypes of microbe sensors and kombucha circuit boards.

In other words, they’re trying to see if mushrooms can carry out computing and sensing functions.

Computers photo
A mushroom motherboard. Andrew Adamatzky

With fungal computers, mycelium—the branching, web-like root structure of the fungus—acts as both the conductor and the electronic components of a computer. (Remember, mushrooms are only the fruiting body of the fungus.) It can receive and send electric signals, as well as retain memory. 

“I mix mycelium cultures with hemp or with wood shavings, and then place it in closed plastic boxes and allow the mycelium to colonize the substrate, so everything then looks white,” says Andrew Adamatzky, director of the Unconventional Computing Laboratory at the University of the West of England in Bristol, UK. “Then we insert electrodes and record the electrical activity of the mycelium. So, through the stimulation, it becomes electrical activity, and then we get the response.” He notes that this is the UK’s only wet lab—one where chemical, liquid, or biological matter is present—in any department of computer science.

Computers photo
Preparing to record dynamics of electrical resistance of hemp shaving colonized by oyster fungi. Andrew Adamatzky

Classical computers see problems in binary: everything is reduced to the ones and zeros these devices traditionally use. However, many dynamics in the real world can’t be captured through that system. This is why researchers are working on technologies like quantum computers (which could better simulate molecules) and living brain cell-based chips (which could better mimic neural networks): they can represent and process information in different ways, using complex, multi-dimensional functions, and provide more precise calculations for certain problems. 

Already, scientists know that mushrooms stay connected with the environment and the organisms around them using a kind of “internet” communication. You may have heard this referred to as the wood wide web. By deciphering the language fungi use to send signals through this biological network, scientists might be able not only to get insights about the state of underground ecosystems, but also to tap into them to improve our own information systems. 

Cordyceps fungi
An illustration of the fruit bodies of Cordyceps fungi. Irina Petrova Adamatzky

Mushroom computers could offer some benefits over conventional computers. Although they can’t ever match the speeds of today’s modern machines, they could be more fault tolerant (they can self-regenerate), reconfigurable (they naturally grow and evolve), and consume very little energy.

Before stumbling upon mushrooms, Adamatzky worked on slime mold computers—yes, that involves using slime mold to carry out computing problems—from 2006 to 2016. Physarum, as slime molds are called scientifically, is an amoeba-like creature that spreads its mass amorphously across space. 

Slime molds are “intelligent,” which means that they can figure out their way around problems, like finding the shortest path through a maze without programmers giving them exact instructions or parameters about what to do. Yet, they can be controlled as well through different types of stimuli, and be used to simulate logic gates, which are the basic building blocks for circuits and electronics.

[Related: What Pong-playing brain cells can teach us about better medicine and AI]

Computers photo
Recording electrical potential spikes of hemp shaving colonized by oyster fungi. Andrew Adamatzky

Much of the work with slime molds was done on what are known as “Steiner tree” or “spanning tree” problems that are important in network design, and are solved by using pathfinding optimization algorithms. “With slime mold, we imitated pathways and roads. We even published a book on bio-evaluation of the road transport networks,” says Adamatzky. “Also, we solved many problems with computational geometry. We also used slime molds to control robots.” 

When he had wrapped up his slime mold projects, Adamatzky wondered if anything interesting would happen if they started working with mushrooms, an organism that’s both similar to, and wildly different from, Physarum. “We found actually that mushrooms produce action potential-like spikes. The same spikes as neurons produce,” he says. “We’re the first lab to report about spiking activity of fungi measured by microelectrodes, and the first to develop fungal computing and fungal electronics.”  

Computers photo
An example of how spiking activity can be used to make gates. Andrew Adamatzky

In the brain, neurons use spiking activities and patterns to communicate signals, and this property has been mimicked to make artificial neural networks. Mycelium does something similar. That means researchers can use the presence or absence of a spike as their zero or one, and code the different timing and spacing of the spikes that are detected to correspond to the various logic gates seen in computer programming (AND, OR, and so on). Further, if you stimulate mycelium at two separate points, conductivity between them increases, and they communicate faster and more reliably, allowing memory to be established. This is similar to how brain cells form habits.
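As a loose illustration of that encoding idea—this is a toy sketch, not the lab’s actual method, and the spike times and window sizes are made up—spike trains can be discretized into bits and fed through logic gates:

```python
# Toy illustration: treat the presence or absence of a spike within each
# time window as a 1 or 0, then combine two input "channels" with a gate.

def spikes_to_bits(spike_times, window=1.0, n_windows=4):
    """Discretize a list of spike timestamps into one bit per time window."""
    return [int(any(start <= t < start + window for t in spike_times))
            for start in (i * window for i in range(n_windows))]

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def apply_gate(gate, bits_a, bits_b):
    """Apply a named logic gate window-by-window across two bit streams."""
    return [GATES[gate](a, b) for a, b in zip(bits_a, bits_b)]

channel_a = spikes_to_bits([0.2, 2.5])        # spikes in windows 0 and 2
channel_b = spikes_to_bits([0.7, 1.1, 2.9])   # spikes in windows 0, 1, and 2

print(apply_gate("AND", channel_a, channel_b))  # [1, 0, 1, 0]
print(apply_gate("OR",  channel_a, channel_b))  # [1, 1, 1, 0]
```

The real research involves far messier analog signals, but the principle—spike timing as information, geometry as circuitry—is the same.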

Mycelium with different geometries can compute different logical functions, and they can map these circuits based on the electrical responses they receive from it. “If you send electrons, they will spike,” says Adamatzky. “It’s possible to implement neuromorphic circuits… We can say I’m planning to make a brain from mushrooms.” 

Computers photo
Hemp shavings in the shaping of a brain, injected with chemicals. Andrew Adamatzky

So far, they’ve worked with oyster fungi (Pleurotus djamor), ghost fungi (Omphalotus nidiformis), bracket fungi (Ganoderma resinaceum), enoki fungi (Flammulina velutipes), split gill fungi (Schizophyllum commune), and caterpillar fungi (Cordyceps militaris).

“Right now it’s just feasibility studies. We’re just demonstrating that it’s possible to implement computation, and it’s possible to implement basic logical circuits and basic electronic circuits with mycelium,” Adamatzky says. “In the future, we can grow more advanced mycelium computers and control devices.” 

A vocal amplification patch could help stroke patients and first responders https://www.popsci.com/technology/throat-patch-vocal-amplifier/ Fri, 24 Feb 2023 21:00:00 +0000 https://www.popsci.com/?p=515247
The new device is barely 25 micrometers thick, and can amplify to normal conversational volume.
The new device is barely 25 micrometers thick, and can amplify to normal conversational volume. DepositPhotos

The device is only 25 micrometers thick and painless.

The post A vocal amplification patch could help stroke patients and first responders appeared first on Popular Science.


The countless ways people can accidentally hurt their voices are truly impressive—not to mention disconcerting. Everything from a particularly exuberant sing-along, to giving a long public presentation, to simply a bad bout of acid reflux can cause long-lasting, sometimes even permanent damage.

Until now, there haven’t been many practical technological solutions to these issues, but a new device developed in China could soon alleviate the literal and figurative pains of shouting to be heard. Researchers led by Qinsheng Yang at Beijing’s Tsinghua University recently created an incredibly thin patch capable of interpreting and projecting both barely voiced and even silently mouthed words when attached to the outside of a user’s throat.

[Related: A smartly trained voice monitor could save singers, teachers, and loud talkers from strain.]

Although multichannel acoustic sensors already exist, their bulky designs make them largely inconvenient for everyday people to use on a regular basis. However, a new “graphene-based intelligent, wearable artificial throat” (AT), detailed in a paper published this week in Nature Machine Intelligence, measures barely one square centimeter in area and just 25 micrometers in thickness. It painlessly adheres to the skin above a person’s larynx using a standard medical adhesive. Tiny wires connect the patch to a pocket-sized microcontroller powered by a coin-sized battery that provides multiple hours of operation.

When enabled, the device monitors for tiny throat vibrations, which are interpreted by an AI model. Following that analysis, the patch itself projects artificial sound: electrical input from the battery produces rapid temperature fluctuations in the graphene, which generate sound waves of up to 60 decibels—the volume of a typical conversation. Testing showed the device was over 99 percent accurate for people speaking audibly, and over 90 percent accurate for those who couldn’t.

[Related: Why your voice sounds weird on recordings.]

The potential benefits extend to both vocal individuals, as well as those unable to audibly speak, such as those who have received a laryngectomy to remove their voice box or suffer from aphasia after a stroke. As New Scientist explains, the new instrument could soon offer an invaluable tool for those working in loud environments, such as emergency responders and pilots, as well as those who simply might want a bit of a volume boost depending on their circumstances.

It’s an exciting time for vocal therapy devices—earlier this week, another team of researchers at Northwestern University also released their findings on a new wireless voice monitor that can detect and notify users of vocal strain and fatigue.

How Google plans to fix quantum computing’s accuracy problem https://www.popsci.com/technology/google-quantum-error-correction/ Fri, 24 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=514755
google's quantum processor
A look at Google's quantum processor. Google / YouTube

Although the accuracy rate only improved by a small percent, the company claims it's a "big step forward."

The post How Google plans to fix quantum computing’s accuracy problem appeared first on Popular Science.


In a paper published in Nature this week, Google’s Quantum AI researchers have demonstrated a way to reduce errors in quantum computers by increasing the number of “qubits” in operation. According to Google CEO Sundar Pichai, it’s a “big step forward” towards “making quantum applications meaningful to human progress.”

Traditional computers use binary bits—that can either be a zero or a one—to make calculations. Whether you’re playing a video game, editing a text document, or creating some AI generated art, all the underlying computational tasks are represented by strings of binary. But there are some kinds of complex calculations, like modeling atomic interactions, that are impossible to do at scale on traditional computers. Researchers have to rely on approximations which reduce the accuracy of the simulation, and renders the whole process somewhat pointless.

This is where quantum computers come in. Instead of regular bits, they use qubits that can be a zero, a one, or both at the same time. They can even be entangled, rotated, and manipulated in other quantum-specific ways. Not only could a workable quantum computer allow researchers to better understand molecular interactions, but it could also let us model complex natural phenomena, more easily detect credit card fraud, and discover new materials. (Of course, there are also some potential downsides—quantum computers can break the classical algorithms that secure everything today from passwords and banking transactions to corporate and government secrets.)
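The difference can be sketched with a toy single-qubit simulation (illustrative only—real quantum hardware is not programmed this way): a qubit’s state is a pair of amplitudes whose squared magnitudes give the odds of measuring a 0 or a 1.

```python
# A classical bit holds exactly one of two values; a qubit's state is a
# pair of amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
import math

def hadamard(state):
    """Rotate a single-qubit state into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probs(state):
    """Probabilities of reading 0 or 1 when the qubit is measured."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)            # the qubit starts as a definite 0
superposed = hadamard(zero)  # now 0 and 1 are equally likely
print(measure_probs(superposed))  # roughly (0.5, 0.5)
```

Simulating n qubits this way requires tracking 2^n amplitudes, which is precisely why classical machines can’t keep up and physical quantum hardware becomes attractive.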

For now though, all this is largely theoretical. Quantum computers are currently much too small and error-prone to change the world. Google’s latest research goes some way toward fixing the latter half of the problem. (IBM is trying to fix the first half.)

The problem is that quantum computers are incredibly sensitive to, well, everything. They have to operate in sealed, cryogenically cooled cases. Even a stray photon can cause a qubit to “decohere” or lose its quantum state, which creates all kinds of wild errors that interfere with the calculation of the problem. Until now, adding more qubits has also meant increasing your chances of getting a random error.

According to Google, its third generation Sycamore quantum processor with 53 qubits typically experiences error rates of between 1 in 10,000 and 1 in 100. That is orders of magnitude too high to solve real world problems; Google’s researchers reckon we will need qubits with error rates of between 1 in 1,000,000,000 and 1 in 1,000,000 for that.

Unfortunately, it’s highly unlikely that anyone will be able to get that increase in performance from the current designs for physical qubits. But by combining multiple physical qubits into a single logical qubit, Google has been able to demonstrate a potential path forward. 

The research team gives a simple example of why this kind of setup can reduce errors: If “Bob wants to send Alice a single bit that reads ‘1’ across a noisy communication channel. Recognizing that the message is lost if the bit flips to ‘0’, Bob instead sends three bits: ‘111’. If one erroneously flips, Alice could take a majority vote (a simple error-correcting code) of all the received bits and still understand the intended message.”
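The Bob-and-Alice scheme is a classical repetition code, and it is easy to sketch in a few lines of Python. This toy (my own illustration, not Google's software) encodes a bit three times, pushes it through a noisy channel, and decodes by majority vote:

```python
import random

def send_noisy(bits, flip_prob=0.1):
    """Simulate a noisy channel that flips each bit with probability flip_prob."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in bits]

def majority_vote(bits):
    """Decode a repetition code by taking the most common received value."""
    return 1 if sum(bits) > len(bits) // 2 else 0

# Bob encodes '1' as '111'; Alice decodes whatever arrives.
received = send_noisy([1, 1, 1])
decoded = majority_vote(received)
```

A single flipped bit can never change the majority, which is exactly why the redundancy protects the message.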

Since qubits have additional states that they can flip to, things are a bit more complicated. It also really doesn’t help that, since we’re dealing with quantum systems, directly measuring their values can cause them to lose their “superposition”—a quantum quirk that allows them to have the value of ‘0’ and ‘1’ simultaneously. To overcome these issues, you need quantum error correction (QEC), where information is encoded across multiple physical qubits to create a single logical qubit.

The researchers arranged two types of qubits (one for dealing with data, and one for measuring errors) in a checkerboard. According to Google, “‘Data’ qubits on the vertices make up the logical qubit, while ‘measure’ qubits at the center of each square are used for so-called ‘stabilizer measurements’.” The measure qubits are able to tell when an error has occurred without “revealing the value of the individual data qubits” and thus destroying the superposition state.

To create a single logical qubit, the Google researchers used 49 physical qubits: 25 data qubits and 24 measure qubits. Crucially, they tested this setup against a logical qubit composed of 17 physical qubits (9 data qubits and 8 measure qubits) and found the larger grid outperformed the smaller one by being around 4 percent more accurate. While only a small improvement, it’s the first time in the field that adding more qubits reduced the number of errors instead of increasing it. (Theoretically, a grid of 577 qubits would have an error rate close to the target 1 in 10,000,000.)
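The 17-, 49-, and even the theoretical 577-qubit figures all fit the standard surface-code pattern, in which a distance-d code uses d² data qubits and d² − 1 measure qubits. A quick sketch of the arithmetic (the formula is the textbook one; the function name is my own):

```python
def surface_code_counts(d):
    """Physical-qubit counts for a distance-d surface code."""
    data = d * d           # data qubits on the vertices of the grid
    measure = d * d - 1    # measure qubits between them
    return data, measure, data + measure

# distance 3 -> (9, 8, 17); distance 5 -> (25, 24, 49); distance 17 -> 577 total
```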

And despite its recent layoffs, Google is seemingly committed to more quantum research. In his blog post, Pichai says that Google will “continue to work towards a day when quantum computers can work in tandem with classical computers to expand the boundaries of human knowledge and help us find solutions to some of the world’s most complex problems.” 

The post How Google plans to fix quantum computing’s accuracy problem appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A smartly trained voice monitor could save singers, teachers, and loud talkers from strain https://www.popsci.com/technology/opera-singers-vocal-monitor/ Wed, 22 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=514176
Close up of person wearing vocal strain monitor and haptic wrist device
The new monitor can help reduce vocal strain and fatigue in real time. Northwestern University

It's opera singer-tested.

The post A smartly trained voice monitor could save singers, teachers, and loud talkers from strain appeared first on Popular Science.

]]>

Researchers at Northwestern University recently received some world-class help in developing their newest health aid. As detailed in a paper published this week in the journal Proceedings of the National Academy of Sciences, the team enlisted opera singers to help design the first wearable device that monitors users’ voices for vocal fatigue and strain in real time. Once fine-tuned, the new medical tool could soon become an invaluable asset to actors, musicians, coaches, teachers, and virtually anyone else who relies on their voice for a living.

The closed-system vocal monitor consists of a small, flexible patch that adheres to one’s chest and detects subtle changes in vibration. The device then transmits its data via Bluetooth to an app installed on users’ smartphones or tablets, and an accompanying wearable haptic device such as a smartwatch can be set to alert them for any warning signs of vocal fatigue or strain. Users can even hone their experience by clicking an in-app button whenever they experience vocal discomfort, thus enabling the program to establish personalized thresholds.

[Related: Why your voice sounds weird on recordings.]

According to researchers, one of the biggest challenges involved training their monitor to distinguish between singing and speaking. To solve the issue, the team turned to a group of opera performers and professional vocalists across a spectrum of vocal ranges. These volunteers each recorded 2,500 one-second samples of both singing and speaking clips, which were subsequently fed into a machine learning program. The final algorithm used in the health monitor now distinguishes between the two forms of communication with over 95 percent accuracy.

Going forward, the team hopes to integrate its preexisting temperature, heart rate, and respiratory monitoring programs with the vocal monitor to study how all these bodily functions interact and influence one another. Gaining better insight into these complexities could one day help inform experts within vocal therapy, as well as better prevent injuries from occurring in the first place.

Although vocal injuries most often make the news whenever a popular singer cancels a tour or string of shows, they are extremely common occurrences that can befall anyone relying on their voice—that is to say, plenty of people. Basic care such as staying hydrated and warming up your voice with exercises can alleviate and prevent many issues, but additional tools such as wearable monitors could easily boost guards against the worst fallout.

“Your voice is part of your identity—whether you are a singer or not,” Theresa Brancaccio, a vocal expert at Northwestern who co-led the project, said in a statement. “It’s integral to daily life, and it’s worth protecting.”

The post A smartly trained voice monitor could save singers, teachers, and loud talkers from strain appeared first on Popular Science.


]]>
The ability for cities to survive depends on smart, sustainable architecture https://www.popsci.com/technology/moma-nyc-architecture-exhibit/ Tue, 21 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=513938
An architecture mockup of the Hunter's Point South Park.
An architecture mockup of the Hunter's Point South Park. Charlotte Hu

Creation and destruction is ongoing in NYC. These promising projects could be models for the future of construction.

The post The ability for cities to survive depends on smart, sustainable architecture appeared first on Popular Science.

]]>

All around New York City, building projects seem to be constantly in the works. But in an era in which climate resiliency and a growing population are key factors that architects must consider, the approach to construction and its related waste may require some more creative thinking. 

A new exhibit at the Museum of Modern Art in Manhattan, Architecture Now: New York, New Publics, shines a spotlight on cutting-edge ideas that aim to reimagine the relationship between the city’s architecture, its people, and the nature around it. Here’s a peek at some of the projects that have been selected for display. 

“All of the projects we highlight are what we see as models for future construction in New York or in the world,” says Martino Stierli, chief curator of architecture and design at the Museum of Modern Art. “This exhibition is kind of an archipelago, where each of the projects is an island and you can roam around freely.”

Working with nature 

New York has seen a renewed focus on achieving cleaner energy in the next few decades. That includes decarbonizing buildings and transportation wherever possible. For example, a project at Jones Beach Energy and Nature Center in Long Island is putting this vision into practice. Converted from a former bathhouse, the new facility, which opened in September 2020, is net-zero—meaning that it generates all the energy it needs through renewables—and is designed to have a small footprint. It also has a climate resilient landscape. 

It features solar panels atop the building, geothermal wells that heat its insides, and restored beachscape with local native plants that filter stormwater and help secure sediments against erosion. There is a battery on site that stores extra electricity produced by the solar panels that can supply power through nights and stormy weather. “The building is interesting by itself. But you have to see it as a larger environmental system,” Stierli says. 

[Related: This startup plans to collect carbon pollution from buildings before it’s emitted]

On the front of climate resiliency, another project, Hunter’s Point South Waterfront Park, has taken into account how rising seas should influence the design of coastal structures. In one way or another, engineers across New York have been thinking of ways to fight the water, or keep it off. 

“This park is designed so part of it can flood. The coastline becomes much more like what it would’ve been naturally, so the water goes back and forth… As you know, New York before civilization was basically a swamp,” says Stierli. “So instead of building high walls to keep the water out, you have these artificial flood plains, and of course that creates a new, but ancient again, biosphere for plants and animals who have always lived in this presence of saltwater.”

Architects from WEISS/MANFREDI tailored the design to the specific ecological conditions and geography of the land. The second phase of the park opened in 2018 next to the East River, which is tidal, narrow, and prone to wave action. Because of this, they developed a landscaped, walkable fortified edge that protects the emerging wetlands from harsh wave action. In an extreme flooding event, the height of the wall is calibrated to gently let water in, allowing the wetlands to act like sponges to absorb flood water. After a storm, water is then slowly released in a safe and controlled way, Marion Weiss and Michael Manfredi, two architects from the firm, explain in an email. This design was tested through computer and analogue models that factored in the specific features of the East River. And the park held its own even against the real and unexpected test of Hurricane Sandy. 

When the team conducted research into the site history, they found that a marsh shoreline existed in the past along a wider and gentler tidal estuary, Tom Balsley, principal designer at SWA, said in an email. To reinstate the marsh in the present day, they worked in collaboration with civil, marine, and marsh ecologist consultants to create a balanced habitat.

Making use of trash 

Waste is another theme that echoes throughout the exhibit, especially when it comes to addressing the building industry’s relationship with trash and its effective use of supplies. “The construction and the architecture industry really has to come to terms with the fact that our resources are finite,” Stierli says. “This idea of reusing, recycling, is probably the most important aspect of contemporary thinking in architecture.” 

The TestBeds research project, for example, imagines how life-size prototypes for future buildings can be given a second life as greenhouses, or other community structures, instead of heading to the landfill. To illustrate how different bits and pieces of buildings and developments move across the city and find new homes, MoMA created an accompanying board game to help visitors understand the rules and processes behind new construction projects. They can scan a QR code to play it.  

“This is the only project I know that actually deals with architectural mockups, because no one ever asks about them. Often, they just get left behind. They get destroyed,” he adds. “And so here we have this idea to say, these are actually valid building components and you can just integrate them into designs. These are five propositions, one of them is actually built, which is this community garden shed here.” 

On the topic of waste, Freshkills Park—formerly the Fresh Kills Landfill—is in the spotlight for its unique ambitions. “This is the site of the largest dump in the US,” Stierli says. “Field Operations, who are very important landscape architects based in New York, they have been working with the city for the last 20 years or so to renaturate and to make it a park that is accessible and create a place for leisure and outdoor activities. Of course a lot of it has to do with the management of toxic waste.”

Part of this involved putting the Landfill Gas System in place that “collects and controls gas emission through a network of wells connected by pipes below the surface that convey the gas through a vacuum,” according to the park’s official website. “Once collected, the gas is processed to pipeline quality (recovery for domestic energy use) at an on–site LFG recovery plant.” There is also a leachate management system to remove and treat pollutants that are made when the waste breaks down. And of course, landfill engineers made sure to put many layers, liners, and caps between the waste and the new park soil. 

Some parts of Freshkills Park are now open to visitors. However, Stierli notes that “this is a work in progress.” 

New York, New Publics will be on display at The Museum of Modern Art in Manhattan, New York through July 29, 2023. 

The post The ability for cities to survive depends on smart, sustainable architecture appeared first on Popular Science.


]]>
Why some Toyotas have ‘fish fins’ https://www.popsci.com/technology/toyota-aero-stabilizing-fish-fins/ Mon, 20 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=513450
Technically called aero stabilizing fins, or ASFs, these components are also known casually as fish fins.
Technically called aero stabilizing fins, or ASFs, these components are also known casually as fish fins. Kristin Shaw

Once you start noticing these aero stabilizing components, you won't be able to stop.

The post Why some Toyotas have ‘fish fins’ appeared first on Popular Science.

]]>

Take a closer look at a Toyota Tundra pickup truck, and among other places, you’ll see ridges embedded in the housings of the taillights and headlamps. The shapes of these 2- to 3-inch lumps evoke a distant memory of the submarine game piece in a Battleship board game, and might go unnoticed if you’re not looking for them.

But once discovered, you can’t unsee them, and you’ll find yourself hunting for them on everything from the Toyota Tacoma to the brand’s Sienna minivan. Technically, they’re called aero stabilizing fins, or ASFs, but Toyota aerodynamics and ride handling specialists Cory Tafoya and Jesse Rydell say they’re affectionately called fish fins.

Here’s how they work, and why engineers use them on vehicles. 

Small but mighty engineering 

Odds are that you’ve noticed tiny symmetrical dimples all the way around any average golf ball. These depressions have a purpose: Unlike a ping-pong ball, which must travel only short distances, golf balls are designed to soar into the air for hundreds of yards at a time. The dimples reduce air friction, directing disruptive air around the back to reduce drag and create a smoother flight. 

“There’s no question a multi-layer cover and technologically advanced core will help your game,” Jonathan Wall wrote for Golf.com. “But without those dimples on the cover, you’re basically driving a Lamborghini with a Ford Pinto engine.” 

toyota fish fin
A fish fin. Kristin Shaw

In the automotive world, modern race cars employ a longitudinal “shark fin” along the spine, not for a fierce look but to maintain stability by directing airflow and pressure properly. Off the track, everyday drivers on US highways and city streets don’t need that kind of performance, but they definitely appreciate stable, smooth driving dynamics, and that requires a slightly different tool to direct airflow. These components, also sometimes called vortex generators, do something a bit counterintuitive: by creating air vortices, they help the air hug the sides of the vehicles. 

In general terms, a fish fin, or ASF, causes the flow of air to follow the side surface of the vehicle more closely, affecting the ride in a positive way. Through extensive testing in the wind tunnel and on the track, the engineers found that even though the fins were very small, they made a noticeable improvement in ride and handling, Rydell tells PopSci.

“If we can avoid random disruption of airflow, it has an effect on the dynamics of the vehicle,” says Tafoya. “The high-level idea is to control the air in a way that’s consistent every time you drive it, or to try to make it as consistent as possible. And if we can keep that airflow close to the vehicle, we can manage what the driving dynamics feel like.”

‘I drove two and a half hours for this piece of plastic?’

The first time Mike Sweers, the executive chief engineer for the Toyota Tundra, Sequoia, Tacoma, and 4Runner vehicle programs, saw these aero stabilizers, he couldn’t believe his eyes. He had been called to Japan for meetings, and one of his teams invited him to the proving grounds, far from the office. When Sweers arrived, the team presented a new solution that they said could reduce body roll and increase stability of the Tundra as it passed large vehicles, like 18-wheel trucks. 

“’The vehicle becomes much more stable if we put these wings on the vehicle,’ they told me,” Sweers remembers. “And I’m thinking, ‘oh, this sounds great,’ and I’m looking at graphs and data and that, and I’m thinking they’re going to take me out and they’re going to have some big aircraft wing on the side of the truck, right? Then the guy reaches in his pocket [and pulls out an ASF] and says, ‘This is our proposal.’” 

toyota fish fin
Kristin Shaw

Sweers thought to himself, “I drove two and a half hours for this piece of plastic; are you kidding me?” But as he placed the fish fins on the truck, tested it on the track, and pulled them off and tested it again, he was convinced. He spent four hours on the track that day, noting the stability while passing or experiencing crosswinds. 

Balancing road noise, drag, and driving dynamics

Tafoya says he sees the influence of the fish fins on straight-line stability, as they generate disruption in the airflow that creates a tighter stream around the vehicle. It may seem paradoxical that a lump creating disruption in airflow channels can direct the air, but that’s exactly what it does. With that tighter airstream, drivers get a more precise steering feel. 

“Sometimes, people will allude to some vehicles not having a very defined center or feel it’s kind of vague in the steering,” he says. “And [ASFs] actually do help to improve those characteristics too.”

Employing a wind tunnel for testing, Toyota engineers use smoke visualization (smoke trails that demonstrate air flow) to see where the flow is fastest. That helps the engineers decide where to place the fish fins to maximize efficiency. 

And as it turns out, designers and engineers interface fairly often to talk about these small pieces of plastic. There’s a balance to ensure that factors like road noise and the amount of drag on the vehicle are not affected. 

“We have it down to where we know kind of what areas we can apply [ASFs] and avoid disruption to other functions,” Tafoya says. “Surprisingly, for such a small feature it takes a lot of time in negotiation.” 

The post Why some Toyotas have ‘fish fins’ appeared first on Popular Science.


]]>
A torpedo-like robot named Icefin is giving us the full tour of the ‘Doomsday’ glacier https://www.popsci.com/technology/icefin-robot-thwaites-glacier/ Fri, 17 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=513275
The Icefin robot under the sea ice.
Icefin under the sea ice. Rob Robbins, USAP Diver

It may look like a long, narrow tube, but this robot is useful for a range of scientific tasks.

The post A torpedo-like robot named Icefin is giving us the full tour of the ‘Doomsday’ glacier appeared first on Popular Science.

]]>

Thwaites, a notoriously unstable glacier in western Antarctica, is cracking and disintegrating, spelling bad news for sea level rise across the globe. Efforts are afoot to understand the geometry and chemistry of Thwaites, which is about the size of Florida, in order to gauge the impact that warming waters and climate change may have on it. 

An 11-foot tube-like underwater robot called Icefin is offering us a detailed look deep under the ice at how the vulnerable ice shelf in Antarctica is melting. By way of two papers published this week in the journal Nature, Icefin has been providing pertinent details regarding the conditions beneath the freezing waters. 

The torpedo-like Icefin was first developed at Georgia Tech, and the first prototype of the robot dates back to 2014. But it has since found a new home at Cornell University. This robot is capable of characterizing below-ice environments using the suite of sensors that it carries. It comes equipped with HD cameras, laser ranging systems, sonar, doppler current profilers, single beam altimeters (to measure distance), and instruments for measuring salinity, temperature, dissolved oxygen, pH, and organic matter. Its range is impressive: It can go down to depths of 3,280 feet and squeeze through narrow cavities in the ice shelf. 

Since Icefin is modular, it can be broken down, customized, and reassembled according to the needs of the mission. Researchers can remotely control Icefin’s trajectory, or let it set off on its own.  

Icefin isn’t alone in these cold waters. Its journey is part of the International Thwaites Glacier Collaboration (ITGC), which includes other radars, sensors, and vehicles like Boaty McBoatface.

[Related: The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how.]

In 2020, through a nearly 2,000-foot-deep borehole drilled in the ice, Icefin ventured out across the ocean to the critical point where the Thwaites Glacier joins the Amundsen Sea and the ice starts to float. Data gathered by Icefin, and analyzed by human researchers, showed that the glacier had retreated up the ocean floor, thinning at the base, and melting outwards quickly. Additionally, the shapes of certain crevasses in the ice are helping funnel in warm ocean currents, making sections of the glacier melt faster than previously expected. 

These new insights, as foreboding as they are, may improve older models that have been used to predict the changes in Thwaites, and in the rates of possible sea level rise if it collapses. 

“Icefin is collecting data as close to the ice as possible in locations no other tool can currently reach,” Peter Washam, a research scientist from Cornell University who led analysis of Icefin data used to calculate melt rates, said in a press release. “It’s showing us that this system is very complex and requires a rethinking of how the ocean is melting the ice, especially in a location like Thwaites.”

Outside of Thwaites, you can find Icefin monitoring ice-ocean ecosystems around Antarctica’s McMurdo research station, or helping astrobiologists understand how life came to be in ocean worlds and their biospheres. 

Learn more about Icefin below: 

The post A torpedo-like robot named Icefin is giving us the full tour of the ‘Doomsday’ glacier appeared first on Popular Science.


]]>
Engineers finally peeked inside a deep neural network https://www.popsci.com/science/neural-network-fourier-mathematics/ Thu, 16 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=512935
An illustration of a circuit in the form of a human brain.
Neural networks may be viewed as black boxes, even by their creators. Deposit Photos

Nineteenth-century math can give scientists a tour of 21st-century AI.

The post Engineers finally peeked inside a deep neural network appeared first on Popular Science.

]]>

Say you have a cutting-edge gadget that can crack any safe in the world—but you haven’t got a clue how it works. What do you do? You could take a much older safe-cracking tool—a trusty crowbar, perhaps. You could use that lever to pry open your gadget, peek at its innards, and try to reverse-engineer it. As it happens, that’s what scientists have just done with mathematics.

Researchers have examined a deep neural network—a type of artificial intelligence that’s notoriously enigmatic on the inside—with a well-worn form of mathematical analysis that physicists and engineers have used for decades. The researchers published their results in the journal PNAS Nexus on January 23. The results hint that the AI is doing many of the same calculations that humans have long done themselves.

The paper’s authors typically use deep neural networks to predict extreme weather events or for other climate applications. While better local forecasts can help people schedule their park dates, predicting the wind and the clouds can also help renewable energy operators plan what to put into the grid in the coming hours.

“We have been working in this area for a while, and we have found that neural networks are really powerful in dealing with these kinds of systems,” says Pedram Hassanzadeh, a mechanical engineer from Rice University in Texas, and one of the study authors.

Today, meteorologists often do this sort of forecasting with models that require behemoth supercomputers. Deep neural networks need much less processing power to do the same tasks. It’s easy to imagine a future where anyone can run those models on a laptop in the field.

[Related: Disney built a neural network to automatically change an actor’s age]

AI comes in many forms; deep neural networks are just one of them, if a very important one. A neural network has three parts. Say you build a neural network that identifies an animal from its image. The first part might translate the picture into data; the middle part might analyze the data; and the final part might compare the data to a list of animals and output the best matches.

What makes a deep neural network “deep” is that its creators expand that middle part into a far more convoluted affair, consisting of multiple layers. For instance, each layer of an image-watching deep network might analyze successively more complex sections of the image.
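As a toy sketch of that three-part structure (purely illustrative, with made-up sizes; not a network from the study), a single forward pass through a small deep network might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Part 1: translate the picture into data (flatten a tiny 2x2 "image").
image = np.array([[0.0, 1.0], [1.0, 0.0]])
x = image.flatten()

# Part 2: the "deep" middle -- several stacked layers of weights and nonlinearities.
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 3))]
h = x
for w in layers:
    h = np.maximum(h @ w, 0.0)  # ReLU keeps only positive activations

# Part 3: compare against the candidate outputs (softmax over 3 "animal" classes).
scores = np.exp(h - h.max())
probs = scores / scores.sum()   # one probability per candidate match
```

Training would adjust the `layers` weights from data; here they are random, so the output probabilities only show the plumbing, not a real prediction.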

That complexity makes deep neural networks very powerful, and they’ve fueled many of AI’s more impressive feats in recent memory. One of their first abilities, more than a decade ago, was to transcribe human speech into words. In later years, they’ve colorized images, tracked financial fraud, and designed drug molecules. And, as Hassanzadeh’s group has demonstrated, they can predict the weather and forecast the climate.

[Related: We asked a neural network to bake us a cake. The results were…interesting.]

The problem, for many scientists, is that nobody can actually see what the network is doing, due to the way these networks are made. Creators train a network by assigning it a task and feeding it data. As the newborn network digests more data, it adjusts itself to perform that task better. The end result is a “black box,” a tool whose innards are so scrambled that even its own creators can’t fully understand them. 

AI experts have devoted countless hours to finding better ways of looking inside their own creations. That’s already tough to do with a simple image-recognition network. It’s even more difficult to understand a deep neural network that’s crunching a system such as Earth’s climate, which consists of myriad moving parts.

Still, the rewards are worth the work. If scientists know how their neural network works, not only can they know more about their own tools, they can think about how to adapt those tools for other uses. They could make weather-forecasting models, for instance, that work better in a world with more carbon dioxide in the air.

So, Hassanzadeh and his colleagues had the idea to apply Fourier analysis—a method that has fit neatly in the toolboxes of physicists and mathematicians for decades—to their AI. Think of Fourier analysis as an act of translation: it represents a dataset as the sum of smaller functions. You can then apply certain filters to blot out parts of that sum, allowing you to see the patterns that remain.
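Concretely, the "translation" is a Fourier transform and the "filters" are masks applied in frequency space. Here is a minimal NumPy sketch of a generic low-pass filter, not the specific filters the team recovered from their network:

```python
import numpy as np

def fourier_low_pass(signal, keep_fraction=0.05):
    """Translate to frequency space, blot out the high-frequency terms,
    and translate back."""
    spectrum = np.fft.rfft(signal)                # the "translation"
    cutoff = int(len(spectrum) * keep_fraction)
    spectrum[cutoff:] = 0                         # the "filter"
    return np.fft.irfft(spectrum, n=len(signal))  # translate back

# A slow trend plus fast wiggles: the filter keeps only the trend.
t = np.linspace(0, 1, 512, endpoint=False)
noisy = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)
smooth = fourier_low_pass(noisy)
```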

As it happened, their attempt was a success. Hassanzadeh and his colleagues discovered that what their neural network was doing, in essence, was a combination of the same filters that many scientists would use.

“This better connects the inner workings of a neural network with things that physicists and applied mathematicians have been doing for the past few decades,” says Hassanzadeh.

If he and his colleagues are correct about the work they’ve just published, then it means that they’ve opened—slightly—something that might seem like magic with a crowbar fashioned from math that scientists have been doing for more than a century.

The post Engineers finally peeked inside a deep neural network appeared first on Popular Science.


]]>
NASA is using AI to help design lighter parts https://www.popsci.com/technology/nasa-evolved-structures-spacecraft-ai/ Thu, 16 Feb 2023 16:05:00 +0000 https://www.popsci.com/?p=512885
NASA evolved structure spacecraft part
AI-assisted engineering helped construct advanced spacecraft parts like this one. NASA

'The algorithms do need a human eye.'

The post NASA is using AI to help design lighter parts appeared first on Popular Science.


NASA is enlisting artificial intelligence software to assist engineers in designing the next generation of spacecraft hardware, and the real-world results resemble the stuff of science fiction.

The agency utilized commercially available AI software at NASA’s Goddard Space Flight Center in Maryland. NASA states that research engineer Ryan McClelland, who worked on the new materials with the assistance of AI, has dubbed them “evolved structures.” They have already been used in the design and construction of astrophysics balloon observatories, space weather monitors, and space telescopes, as well as the Mars Sample Return mission and more.

Before the evolved structures are created, a computer-assisted design (CAD) specialist first sets the new object’s “off limits” parameters, such as where the part connects to the spacecraft or other instruments, as well as other specifications like bolt and fitting placements, additional hardware, and electronics. Once those factors are defined, AI software “connects the dots” to sketch out a potential new structural design, often within two hours or less.

The finished products take on curious, unique forms that are up to two-thirds lighter than their purely human-designed counterparts. However, proposed forms generally require some human fine-tuning, McClelland is careful to highlight. “The algorithms do need a human eye,” McClelland said. “Human intuition knows what looks right, but left to itself, the algorithm can sometimes make structures too thin.”

[Related: NASA just announced a plane with a radical wing design.]

Optimizing materials and hardware is especially important for NASA’s spacefaring projects, given each endeavor’s unique requirements. As opposed to assembly line construction for mass-produced items, almost every NASA part is unique, so shortening design and construction times with AI input expands the agency’s capabilities.

When combined with other production techniques like 3D-printing, researchers envision a time when larger parts could be constructed while astronauts are already in orbit, thus reducing costly payloads. Such assembly plans might even be employed during construction of permanent human bases on the moon and Mars.

Cuttlefish have amazing eyes, so robot-makers are copying them https://www.popsci.com/technology/cuttlefish-eye-imaging-system/ Wed, 15 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=512718
a cuttlefish in darkness
Cuttlefish are clever critters with cool eyes. Will Turner / Unsplash

Cameras inspired by cuttlefish eyes could help robots and cars see better.

The post Cuttlefish have amazing eyes, so robot-makers are copying them appeared first on Popular Science.


Cuttlefish are smart, crafty critters that have long fascinated scientists. They’re masters of disguise, creative problem solvers, and they wear their feelings on their skin. On top of all that, they have cool-looking eyes and incredible sight. With w-shaped pupils, a curved retina, and a special arrangement of cells that respond to light, they have stellar 3D vision, great perception of contrast, and an acute sensitivity to polarized light. This vision system allows these creatures to hunt in underwater environments where lighting is often uneven or less than optimal. An international team of roboticists wanting to create machines that can see and navigate in these same conditions is now looking to nature for inspiration on artificial vision.

In a new study published this week in Science Robotics, the team created an artificial vision design that was inspired by cuttlefish eyes. It could help the robots, self-driving vehicles, and drones of the future see the world better. 

“Aquatic and amphibious animals have evolved to have eyes optimized for their habitats, and these have inspired various artificial vision systems,” the researchers wrote in the paper. For example, imaging systems have been modeled after fish eyes with their panoramic view, the wide-spectrum vision of mantis shrimp, and the 360-degree field of view of fiddler crab eyes.

[Related: A tuna robot reveals the art of gliding gracefully through water]

Because the cuttlefish has photoreceptors (nerve cells that take light and turn it into electrical signals) that are packed together in a belt-like region and stacked in a certain configuration, it’s good at recognizing approaching objects. This feature also allows them to filter out polarized light reflecting from the objects of interest in order to obtain a high visual contrast. 

Meanwhile, the imaging system the team made mimics the unique structural and functional features of the cuttlefish eye. It contains a w-shaped pupil attached to the outside of a ball-shaped lens with an aperture sandwiched in the middle. The pupil shape is intended to reduce distracting light outside the field of vision and balance brightness levels. The device also contains a flexible polarizer on the surface and a cylindrical silicon photodiode array that can convert photons into electrical currents. These kinds of image sensors usually pair one photodiode to one pixel.

“By integrating these optical and electronic components, we developed an artificial vision system that can balance the uneven light distribution while achieving high contrast and acuity,” the researchers wrote. 

In a small series of imaging tests, the cuttlefish-inspired camera was able to pick up the details on a photo better than a regular camera, and it was able to fairly accurately translate the outlines of complex objects like a fish even when the light on it was harsh or shone at an angle. 

The team notes that this approach is promising for reducing blind spots that most modern cameras on cars and bots have trouble with, though they acknowledge that some of the materials used in their prototype may be difficult to fabricate on an industrial level. Plus, they note that “there is still room for further improvements in tracking objects out of sight by introducing mechanical movement systems such as biological eye movements.”

These sex toys are designed to heal, one orgasm at a time https://www.popsci.com/health/sex-toys-medical-devices/ Tue, 14 Feb 2023 15:30:00 +0000 https://www.popsci.com/?p=511636
three MysteryVibe products
With products like Poco (left), Crescendo 2 (center), and Tenuto 2 (right), MysteryVibe works directly with medical professionals to bring sweet relief to patients and partners alike. Charlie Surbey for Popular Science

MysteryVibe set out to design sex toys and ended up creating medical devices for people with menopause, erectile dysfunction, and pelvic floor pain.

The post These sex toys are designed to heal, one orgasm at a time appeared first on Popular Science.


IT’S HARD to find pleasure these days. Our fast-paced, tech-centric lives can give us little space for intimacy and satisfaction in bed. But instead of leaving us to our own devices, innovators around the world are taking on the challenge of making sex better for anyone who’s experienced childbirth, reproductive health problems, or the erosion of romance.

Founded in 2014, UK-based MysteryVibe is one company filling that niche. Using robotics and other facets of engineering, it set out to create a durable sex toy that could adapt to a variety of bodies, shapes, and desires. What it didn’t know was that its first creation, a flexible six-motor vibrator called Crescendo, would also end up as a certified medical device for conditions like pelvic floor pain. A 2022 MysteryVibe-funded study at the Murcia Institute of Sexology in Spain found that exercises performed with the handheld model could significantly reduce discomfort in patients suffering from chronic pain during sex. The same institution later saw that the penis-hugging Tenuto improved erectile dysfunction after one patient’s colorectal cancer surgery. At the moment, the UK’s National Health Service is looking into the ability of the G-spot-targeting Poco to ease long-term vulva discomfort, with results expected later this year.

Engineering photo
Charlie Surbey for Popular Science

MysteryVibe’s team of four engineers develops all of the company’s prototypes at an inconspicuous farmhouse in Thursley, England. One of the steps involves using a hot plate and a hot-air soldering technique to manually tweak the printed circuit boards, or PCBs, that control the sex toys. The high temperature allows staff to reassemble the electronics as they evolve.


Legato (set to be released in summer 2023) was devised based on a request from a medical professional to help menopause patients overcome vaginal dryness, a common symptom that makes penetrative sex more difficult and potentially painful. It’s the first sex toy designed to stimulate the labia by increasing blood flow around the vulva, improving natural lubrication.


Medical research has found that specific vibration frequencies can effectively treat pelvic floor pain. As part of its product development, MysteryVibe constructed a testing rig to measure how the motors in its products interact with each other, and the exact vibrations they deliver. These efforts are still experimental.


While designing silicone-covered devices with multiple hinges inside, the MysteryVibe team learned that versatility comes at the cost of durability. This became a major engineering hurdle, as one of the original driving missions of the company was to make a sex toy that would last. The entire product line has a 24-month warranty, but with proper care, the products should survive for at least five years, says CTO and co-founder Rob Weekly.


During the years-long trip from conception to online retail, MysteryVibe’s offerings go through multiple transformations that take into account customer and medical feedback. The Thursley research and development center is filled with boxes of old models that engineers use to ensure software updates work seamlessly with discontinued models.  


Tenuto’s largest arm sits against the highly sensitive perineum, stimulating blood circulation to the genitals. With customizable oscillations from three other motors, the result can be a longer-lasting erection and an overall more blissful experience for the user and their partner.


3D printers play a big role in helping the company develop, test, and refine products—from silicone molds to working parts. Once the design process is complete, the newly minted prototypes are shipped to China, where, after some back and forth with the research team to improve efficiency, they go into the production line for mass manufacturing. 


When building samples, MysteryVibe staff members use a metal stencil to apply two soldering pastes with different melting points to the PCB plates. This allows them to work at a lower temperature to affix the electrical components to one side of the panel, protecting the hardware already assembled on the other end.


Next, the PCBs get transferred to a “pick-and-place” machine that can build 10 boards in one move. A robotic head creates a vacuum to suction up components from the paper reels on the left and position them exactly where they belong on the panel. From there, they get wired up to the motors.


Another new release in 2023, Molto is crafted to arouse the prostate by simulating the standard approach adopted by pelvic floor therapists. Besides generating pleasure, this technique also has possible health benefits and can be used to improve conditions like erectile dysfunction and enlarged prostate.  


Tenuto 2 posed a particular manufacturing challenge, as it’s the company’s only “over-molded” product. While the skeletons of other MysteryVibe toys are pushed into fully dried silicone pieces, in this case, the bare electronics go into a special cast and are then sealed in the molten material. If the device accidentally turns on from the pressure, or heats up too much, the team needs to reset the whole process.


When Crescendo launched nearly a decade ago, its six motors set it apart from the competition. More machinery means more targeted vibrations and flexibility, and customers can’t seem to get enough. The model and its successor (seen here) still hold the record for maximum motors in the industry: “I don’t think anyone’s touching that at the moment,” says Weekly. “I think the most I’ve seen is three.”

Read more PopSci+ stories.

How neutral atoms could help power next-gen quantum computers https://www.popsci.com/technology/neutral-atom-quantum-computer/ Fri, 10 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=511415
A prototype of QuEra's neutral atom quantum computer.
A prototype of QuEra's neutral atom quantum computer. QuEra

These atoms would function as individual qubits and would be controlled by "optical tweezers."

The post How neutral atoms could help power next-gen quantum computers appeared first on Popular Science.


There’s a twist in the race to build a practical quantum computer. Several companies are betting that there’s a better way to create the basic unit of a quantum computer, the qubit, than the leading method being pursued by giants like IBM and Google. 

To back up a moment to the fundamental design of quantum computers, think of qubits as the quantum equivalent of the binary bits contained in classical computers. But instead of storing either on-or-off states like bits (the famous 1 or 0), qubits store waveforms, which allows them to have a value of 1, 0, or a combination of the two. To exhibit these quantum properties, objects have to be either very small or very cold. In theory, this quality allows qubits to perform more complex calculations compared to bits. But in reality, the unique state that qubits attain is hard to maintain, and once the state is lost, so is the information carried in these qubits. So, how long qubits can stay in this quantum state currently sets the limit on the calculations that can be performed. 
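The bit-versus-qubit distinction can be made concrete with a short Python sketch. This is a textbook toy model, not any company's hardware: a qubit is represented as a normalized pair of complex amplitudes, and the squared magnitudes give the odds of each measurement outcome.

```python
import numpy as np

# A classical bit is exactly one of two states. A qubit is a normalized
# pair of complex amplitudes over the basis states |0> and |1>.
ket0 = np.array([1.0, 0.0], dtype=complex)  # plays the role of "0"
ket1 = np.array([0.0, 1.0], dtype=complex)  # plays the role of "1"

# An equal superposition: the "combination of the two" described above.
qubit = (ket0 + ket1) / np.sqrt(2)

# Squared magnitudes give the probability of measuring each outcome.
p0, p1 = np.abs(qubit) ** 2
```

Measurement yields 0 or 1 with probabilities p0 and p1, and the superposition is destroyed in the process, which is why losing the quantum state means losing the information it carried.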

A frontrunner in the race to build a useful quantum computer is IBM, and its approach to these fundamental computing units is a contraption called superconducting qubits. This technique involves engineering small pieces of superconducting metals and insulators to create a material that behaves like an artificial atom in an ultra-cold environment (a more in-depth explanation is available here). 

[Related: IBM’s latest quantum chip breaks the elusive 100-qubit barrier]

But emerging companies like QuEra, Atom Computing, and Pasqal are trying something new: building a quantum computer using neutral atoms, an approach that has long been seen as a promising platform. A neutral atom is an atom that contains a balanced amount of positive and negative charges.

Previously, this approach has largely been tested by small companies and university laboratories, but that might soon start to change. Working with qubits made from neutral atoms may in some ways be easier than fabricating an artificial atom, experts told PopSci in 2021. 

Lasers in the prototype of Atom Computing's neutral atom quantum computer.
Lasers in the prototype of Atom Computing’s neutral atom quantum computer. Atom Computing

QuEra, for example, uses rubidium atoms as qubits. Rubidium appears on the periodic table as one of the alkali metals, with the atomic number 37. To get the atom to carry quantum information, researchers shine a laser at it to excite it to different energy levels. Two of these levels can be isolated and labeled as the 0 and 1 values for the qubit. In their excited states, atoms can interact with other atoms close by. Lasers also act as “optical tweezers” for individual atoms, holding them in place and reducing their movement, which cools them down and makes them easier to work with. The company says that it can pack thousands of laser-trapped atoms in a square millimeter in flexible configurations. QuEra claims that it has at times achieved coherence times of more than 1 second (coherence time is how long the qubits retain their quantum properties). For comparison, the average coherence time for IBM’s quantum chips is around 300 microseconds.
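To get a feel for what those two numbers mean, here is a back-of-the-envelope Python comparison assuming an idealized exponential decay of coherence. Both the decay model and the 300-microsecond computation time are illustrative assumptions, not measurements from either platform.

```python
import math

def coherence_remaining(t, coherence_time):
    """Fraction of coherence left after time t, under an idealized
    exponential-decay model exp(-t / T)."""
    return math.exp(-t / coherence_time)

t = 300e-6  # run a computation for 300 microseconds
neutral_atom = coherence_remaining(t, 1.0)        # ~1 s coherence time
superconducting = coherence_remaining(t, 300e-6)  # ~300 us coherence time
# The neutral-atom qubit retains ~99.97 percent of its coherence here,
# while the superconducting qubit is already down to ~37 percent.
```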

“To assemble multiple qubits, physicists split a single laser beam into many, for example by passing it through a screen made of liquid crystals. This can create arrays of hundreds of tweezers, each trapping their own atom,” reported Nature. “One major advantage of the technique is that physicists can combine multiple types of tweezers, some of which can move around quickly — with the atoms they carry…This makes the technique more flexible than other platforms such as superconductors, in which each qubit can interact only with its direct neighbors on the chip.”

Already, peer-reviewed papers have been published, testing the possibilities of running a quantum algorithm on such a technology. A paper published in January in the journal Nature Physics even characterized the behavior of a neutral atom trapped in an optical tweezer.  

Currently, QuEra is able to work with around 256 qubits and is offered as part of Amazon Web Services’ quantum computing service. According to an Amazon blog post, these neutral atom-based processors are suitable for “arranging atoms in graph patterns, and solving certain combinatorial optimization problems.”

Meanwhile, Atom Computing, which bases its qubits on the alkaline earth metal strontium, uses a vacuum chamber, magnetic fields, and lasers to create its array. Its prototype has caught the eyes of the Pentagon’s DARPA research division, and it recently received funding as part of the agency’s Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program. 

Pasqal, a Paris-based quantum computing startup, has also rallied quite a bit of capital behind this up-and-coming approach. According to TechCrunch, it raised around €100M in late January to build out its neutral atom quantum computer.

In the latest State of the Union, Biden highlights infrastructure, chips, and healthcare https://www.popsci.com/science/biden-state-of-the-union-2023/ Wed, 08 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=510668
U.S. President Joe Biden delivers the State of the Union address to a joint session of Congress on February 7, 2023 in the House Chamber of the U.S. Capitol in Washington, DC.
U.S. President Joe Biden delivers the State of the Union address to a joint session of Congress on February 7, 2023 in the House Chamber of the U.S. Capitol in Washington, DC. Jacquelyn Martin-Pool/Getty Images

In his second SOTU address, Biden urged Congress to ‘finish the job.’

The post In the latest State of the Union, Biden highlights infrastructure, chips, and healthcare appeared first on Popular Science.


On February 7, President Joe Biden gave his 2023 State of the Union Address to a joint session of a newly split Congress, with Democrats controlling the Senate and Republicans controlling the House. This is what he had to say on major science, tech, and health related issues. 

Health policy priorities—COVID and healthcare

Biden touted the progress made against COVID-19 since he first took office in January 2021, when the COVID-19 vaccine rollout, begun the previous December, was just getting underway. “COVID no longer controls our lives,” he said. “While the virus is not gone, thanks to the resilience of the American people and the ingenuity of medicine, we have broken COVID’s grip on us.” 

The administration plans to end the public health emergency on May 11. Formally ending the national emergency declarations would restructure the federal government’s response, treating the virus as an endemic threat to public health that can be managed through normal authorities.

[Related: Biden will end COVID-19 national emergencies in May. Here’s what that means.]

He also pointed to several policies Congress can still achieve to deliver cheaper prescription drugs to the American people—for example continuing to expand Medicaid under the Affordable Care Act, and capping the cost of insulin at $35 for seniors on Medicare.

“But there are millions of other Americans who are not on Medicare, including 200,000 young people with Type I diabetes who need insulin to save their lives,” said Biden. “Let’s finish the job this time. Let’s cap the cost of insulin at $35 a month for every American who needs it.”

This was the first State of the Union after the Supreme Court overturned Roe v. Wade, and President Biden vowed to veto any national abortion ban. The Biden administration has taken steps to expand abortion access in the wake of the decision, including steps to make it easier to access the prescription pills used in a medication abortion. 

He touted the success of the PEPFAR program, which has saved 25 million lives and transformed the global fight against HIV/AIDS, and the Cancer Moonshot program that Biden led while vice president to Barack Obama. The latter is a deeply personal initiative for the Bidens, whose son Beau died of a brain tumor in 2015. 

“Our goal is to cut the cancer death rate by at least 50 percent over the next 25 years. Turn more cancers from death sentences into treatable diseases. And provide more support for patients and families,” said Biden.

When it comes to tech, CHIPS takes the spotlight

American ingenuity in tech was also on full display, with Biden highlighting the bipartisan Infrastructure Law and CHIPS and Science Act, especially when it comes to the jobs that will be created by investing in infrastructure and tech. The legislation devotes more than $50 billion intended to spur semiconductor manufacturing, research, development, and more in the United States.

[Related: Can the Chips and Science Act help the US avoid more shortages?]

“Semiconductors, the small computer chips the size of your fingertip that power everything from cellphones to automobiles, and so much more. These chips were invented right here in America. Let’s get that straight, they were invented in America,” said Biden. “America used to make nearly 40 percent of the world’s chips. But in the last few decades, we lost our edge and we’re down to producing only 10 percent.”

He also announced a new standard that will require all construction materials used in federal infrastructure projects to be made in America and stressed his administration’s commitment to providing Americans with universal access to high-speed internet. 

Climate and the environment—wins and losses

The Biden administration’s recent flurry of environmental legislation, amid the past year’s spike in gas prices, shifted the spotlight onto his climate change policies.  

“The Inflation Reduction Act is also the most significant investment ever to tackle the climate crisis. Lowering utility bills, creating American jobs, and leading the world to a clean energy future,” said Biden, before touting the investments aimed at modernizing infrastructure in the face of a changing planet, from electric grids to flood and water systems and clean energy.

[Related: 4 ways the Inflation Reduction Act invests in healthier forests and greener cities.]

He also called the $200 billion in profits brought in by oil and gas companies during a global energy crisis “outrageous,” and proposed quadrupling the tax on corporate stock buybacks to encourage more investment in increasing domestic energy production and keeping costs down.    

High profile attendees included wildfire experts and cancer survivors

U2 frontman Bono, Tyre Nichols’ family, and Paul Pelosi were among the high-profile guests joining the 535 members of Congress. Several were innovators, activists, and scientists making a mark on the science and tech world. 

These included Jennifer Gray Thompson, the CEO of After the Fire USA; Paul Bruchez, a rancher who has worked with other landowners to restore a part of the threatened Colorado River; Grover Fugate, the executive director of the Rhode Island Coastal Resources Management Council (CRMC); and David Anderson, president and CEO of NY-CREATES and the Albany Nanotech Complex. 

Some of the guests invited to the First Lady’s box included Maurice and Kandice Barron, whose daughter Ava is a survivor of a rare form of pediatric cancer; Amanda Zurawski, a Texas woman who almost lost her life to a miscarriage due to the state’s abortion law; and Lynette Bonar, an enrolled member of the Navajo Nation who helped open the first cancer center on a Native American reservation.

Honda’s newest Accord hybrid is a sleek, brawny beast https://www.popsci.com/technology/2023-honda-accord-hybrid-review/ Tue, 07 Feb 2023 23:10:00 +0000 https://www.popsci.com/?p=510588
The 2023 Honda Accord.
The 2023 Honda Accord. Kristin Shaw

A typically boring sedan gets a trip to the gym, and the result is a lively, more efficient vehicle with a powerful hybrid powertrain.

The post Honda’s newest Accord hybrid is a sleek, brawny beast appeared first on Popular Science.


You can barely throw a rock in America without hitting a Honda Accord. More than 12.5 million Accords have been sold in North America since 1982, and Honda says 98 percent of those were built in the USA. The latest iteration of Honda’s Accord is now available, and it packs in some new tech upgrades along with improvements to the hybrid powertrain.

Here’s what’s new under the hood, and what it’s like to drive it.

Solid lines, subtle updates

There have been ten previous generations of Accords, and this model kicks off its eleventh. The 2023 Accord is a product of Honda’s intention to amp up its hybrid sales. Honda is actively chasing a 50 percent sales target for the hybrid versions of the Accord, and of its six trims, only the two lowest of the bunch are offered with a gas-only, no-electric-motor option. It’s clear that Honda is checking a box for gas-only fans as a transition, while gently steering its customers away from the lesser trim levels.

And for good reason: While it looks and feels very familiar, the newest Accord hybrid has been to the gym. It’s pumped up with a strengthened core and tweaked powertrain that’s more efficient.

the 2023 Honda Accord hybrid
The 2023 Accord comes in six trim levels, with all but two of them being hybrids. Kristin Shaw

The freshest Accord in the stable is longer and broader than the previous generation, giving Honda’s cash-cow sedan a sleeker profile and a livelier front end that one might attribute to a sportier vehicle. That’s due, in part, to structural updates to the chassis with new brace bars that increase the rigidity of the ride; the result is a smoother ride that absorbs mild bumps in the road like a member of a top-tier college marching band glides across the football field at halftime.  

The lineup starts at $28,390 for the gas-only Accord LX model. Then, the first hybrid skips over the EX (also a gas-only model) up to the Sport for $32,990. At the top of the lineup, the Touring trim is decked out with all the goodies, along with the hybrid powertrain, for $38,985 and up.

2023 Honda Accord hybrid interior
Kristin Shaw

Engineered with electrification in mind

Behind the wheel, I expected a pleasant ride, and I wasn’t disappointed. The Accord hasn’t lasted for 11 generations for nothing, after all. It’s an all-around favorite, with solid fuel economy figures (44-48 mpg combined for the hybrid and 32 mpg combined for the gas-only trims) and plenty of value packed in for the price.

Testing out the new Accord Touring in each of its three main drive modes (Normal, Eco, and Sport), I found that Normal makes the most sense for the majority of the time. Reserve the Eco mode only for long highway drives when you’re already moving at a good clip, because the stunted acceleration is a bummer otherwise. Sport mode was the most exciting, with a zip that made it easy to pass and merge from highway ramps onto the freeway. It also adds a weightier feel to the steering, which firms up the driving experience.

[Related: Pete Buttigieg on how to improve the deadly track record of US drivers]

Honda opted to equip the hybrid models (Sport, EX-L, Sport-L, and Touring) with an all-new 2.0-liter four-cylinder engine paired with the same two-motor hybrid-electric system that debuted in the 2023 Honda CR-V. Together, the Accord hybrid is good for 204 horsepower and 247 pound-feet of torque. Gas-only models may be cheaper, but they sacrifice horsepower, torque, and fuel efficiency in the exchange.

The two-motor hybrid system includes an electric generator motor, which supplies power to the battery; an electric propulsion motor to drive the front wheels; an Atkinson-cycle gas engine that feeds power to the battery and propulsion motor; a new, smaller intelligent power unit that protects and controls the battery; and a power control unit that acts as the brains of the hybrid system. 

2023 Honda Accord Hybrid interior
Kristin Shaw

There’s a prominent button on the console with an “e” printed on it in stylized script, and pushing it notifies the Accord to maximize your electric drive mode as much as possible, defaulting to electric over gas.

“[Pressing the button] doesn’t necessarily make the vehicle more efficient,” says Chris Martin, a communications manager with American Honda. “Let’s say you are trying to pull quietly out of your driveway or out of your neighborhood. You have to manage the throttle carefully to avoid activating the gas engine, and by pushing this button the car is going to require you to give it a little bit more throttle before it engages the gas engine. Kind of like a quiet mode.”

What’s different about the 2023 Accord hybrid system?

Previously, Honda situated the two motors in-line longitudinally, with the generator motor connected directly to the engine and the propulsion motor connected to the front wheels. Engineers for the new Accord hybrid nestled the two electric motors side-by-side instead (in the same configuration used in the new CR-V), allowing the propulsion motor to be bigger and stronger. Honda eschewed heavy rare-earth metals for this system, which contributes to a higher top speed. The new 2.0-liter four-cylinder engine also promises reduced emissions: 22 percent lower nitrogen oxide emissions and 24 percent lower total hydrocarbon emissions.

Martin says the entire core package has been improved in many ways, with an eye on improving handling and making the car quieter, smoother, and safer. The Accord chassis itself accounts for much of what improves the drive versus the prior model. 

While Honda’s hybrids don’t claim one-pedal driving—the brand calls it “one-pedal like”—the Accord hybrid comes close. (One-pedal driving allows the driver to use just the accelerator without moving their foot to the brake, as the car slows or even stops as soon as they lift their foot off the accelerator. That’s a big benefit in stop-and-go traffic, when a light tap to the accelerator is all you need to move forward.) The new Accord features paddle shifters on the left and right side that control the amount of braking regeneration up to six levels; on the maximum regeneration setting the vehicle will slow considerably when you take your foot off the accelerator. The four-wheel disc brakes are slightly squishy, so prepare to press down a little further than expected.

On the technology front, the new Accord receives over-the-air software updates, making it easy for Honda to push out fixes and plug any potential problems. Honda gifted its sedan with a front camera offering a 90-degree field of view, nearly double that of the previous Accord. And the radar was relocated behind the Honda logo on the grille, which bumped up its field of detection from 50 degrees to an astonishing 120 degrees. Combined with updated driver-assist technology, this helps the car avoid collisions and more easily distinguish objects from people and signs, for example. 

Honda uses a Google built-in system that’s standard on the top Touring trim, including Google Maps and Google Play enhanced by a speech-to-text service that also controls interior functions like climate control. 

2023 Honda Accord Hybrid
Kristin Shaw

Tip-toeing into the EV age

Much like Toyota and supercar makers such as Lamborghini, Honda is not rushing headlong into the EV age just to be first. The brand seems content to take it slow. Honda has said publicly that it’s committed to selling 100 percent electric vehicles by 2040. The pathway to get there, though, is not simply to start selling all EVs right now, Martin says. The brand’s first EV will be the Prologue in 2024, which Martin refers to as a “toe in the water for the next generation of EVs.”

Last year, Honda launched the CR-V hybrid and hoped to incentivize customers to make the switch with attractive two-year lease deals. That stopgap allows the brand to hold onto electric-hungry customers, marking time until the all-electric Prologue SUV is ready for its debut. 

“We’ve got more than the Prologue coming,” Martin says with a wink in his voice. “We haven’t announced a lot of things, but obviously as we’re going to be selling a hundred percent [EVs] by 2040 there are a lot of other products in the pipeline.”

In the meantime, car buyers can climb into the muscular 2023 Accord and enjoy both the legacy this sedan offers plus all of the new technology and engineering Honda brought to the table.

The post Honda’s newest Accord hybrid is a sleek, brawny beast appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Inside the project to bring ‘self-healing’ Roman concrete to American shorelines https://www.popsci.com/science/roman-concrete/ Tue, 31 Jan 2023 15:37:11 +0000 https://www.popsci.com/?p=508620
ancient-style illustration of poseidon and workers building seawall
Andre Ducci

Lessons from 2,000-year-old Roman material could help us build structures better suited for a waterlogged future.

The post Inside the project to bring ‘self-healing’ Roman concrete to American shorelines appeared first on Popular Science.

]]>

ANCIENT ROMANS were masters of concrete, fashioning concoctions of sand, water, and rock into long-lasting marvels. Bridges, stadiums, and other structures they built with the stuff still stand tall—even harbors and breakwaters that have been soaked by tides and storms for nearly 2,000 years. This substance, robust to the microscopic level, far outlives the modern material, which generally requires steel supports in salt water and is still likely to corrode within decades.

When the Roman Empire ended, so did its method of making marine concrete. But by following chemical clues within ancient architecture, today’s scientists have revived this technique. In recent years, researchers have only gotten better at understanding it, applying lessons from fields as diverse as archaeology, civil engineering, and volcanology. They have pulled tubes of the ancient substance from under the ocean. They have zapped it with X-rays to observe its microscopic minerals. Now they’ve mixed up their own industrial version.

In 2023, for the first time in nearly two millennia, Roman-style marine concrete will be tested on a coastline. Silica-X, a US-based company that specializes in experimental glass, plans to place four or five slabs into Long Island Sound beginning this summer. Unlike virtually all other concrete products made today, which are designed to resist their environments, these 2,600-pound samples will embrace their aquatic surroundings—and are expected to become stronger over time.

As water moves through the porous solid, the material’s minerals will dissolve, and new, strengthening compounds will form. “That is actually the secret of Roman concrete,” says University of Utah geology and geophysics research associate professor Marie D. Jackson, who is working on a reboot of the stuff with a $1.4 million grant from the Advanced Research Projects Agency–Energy, a federal program that supports early-stage technology research. 

Built in 1 BCE, the Tomb of Caecilia Metella rests on a base of Roman concrete. Many of the city’s long-standing landmarks were built with a version of the mixture. Universal Images Group North America LLC / Alamy

Jackson has spent more than a decade investigating what happens when Roman concrete meets seawater. She is part of a team working alongside Silica-X; the prototypes destined to be dunked in the New York estuary are based on her recipe.

“One hundred percent, Marie is the most significant person” trying to understand and develop the substance, says her frequent collaborator, Google hardware developer Philip Brune. More than a decade ago, when Brune was a Ph.D. student, he and Jackson created the first of what they call Roman concrete analogues. After making a terrestrial type—similar to the basis of the Pantheon and Trajan’s Market—they switched to a marine variant.

Jackson has an application in mind for these historical replicants: guarding against the effects of climate change. The National Oceanic and Atmospheric Administration projects that by 2050, sea levels will rise by an average of 10 to 12 inches along American coasts. Modern concrete seawalls, which need to be replaced roughly every 30 years, already cover a substantial percentage of the US shoreline. If waves keep mounting, it will be necessary to find a more durable and sustainable option to reinforce our seaboards. 
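That 30-year replacement cycle invites a back-of-envelope comparison. The service life of modern seawalls comes from the article; the 120-year planning horizon and the Roman-style life span below are assumptions for illustration:

```python
import math

# Back-of-envelope: how many times a seawall must be rebuilt over a
# planning horizon. The 30-year service life is from the article; the
# 120-year horizon and the long-lived alternative are assumed figures.

def rebuilds(service_life_yr, horizon_yr=120):
    """Number of full replacements needed within the horizon."""
    return math.ceil(horizon_yr / service_life_yr) - 1

modern = rebuilds(30)        # conventional concrete: 3 rebuilds
roman_style = rebuilds(120)  # a wall lasting the whole horizon: 0
```

Every avoided rebuild saves not just money but another round of cement production and its emissions, which is the core of the sustainability argument.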

The duality of concrete

Concrete’s ingredients are about as simple as a sugar cookie’s. Besides water and air, it requires a grainy material called aggregate, which may be sand, gravel, or crushed rock. The other necessity is cement, a mineral glue that holds the constituents together. Portland cement, invented in the mid-1800s in England, remains the basis for the majority of modern concrete formulas. This mix results in a consistently potent product. “You can make it on Mars with the same ingredients, and you know it will work,” says Admir Masic, associate professor of civil and environmental engineering and principal investigator at the Massachusetts Institute of Technology’s Concrete Sustainability Hub.

Portland cement production is the noxious part: Not only is it thirsty for fresh water and energy, it also releases loads of carbon dioxide. The manufacturing process is responsible for 7 to 8 percent of worldwide CO2 emissions, according to Sabbie Miller, a civil and environmental engineering assistant professor at the University of California, Davis. If the global concrete industry were a nation, its greenhouse gas footprint would be the third biggest on the planet, after those of China and the US. 

The concrete sector is aware of its product’s environmental legacy and is willing to work toward change, Miller says. Global construction conglomerate HeidelbergCement, for one, announced in 2021 that it would construct the first carbon-neutral cement plant by 2030, a facility that would capture greenhouse gases and lock them up in bedrock below the sea. Other types of concrete in development are designed to lock up pollution within the material itself. Miller, who is working on techniques to turn carbon into a solid, storable mineral, says these are “very much early-days, we’ll-see-if-it-works technologies.”

making an ancient-style concrete block in the lab
Philip Brune (left) and Brad Cottle mix synthetic tephra for a marine Roman concrete analog. Marie D. Jackson / University of Utah
a freshly-poured, arc-shaped piece of concrete
After being molded… Marie D. Jackson / University of Utah
attaching inserts to arc-shaped mold
…the sample goes through fracture testing. Marie D. Jackson / University of Utah

Making concrete as the Romans did should reduce troublesome emissions, researchers say, in large part because this substance won’t need to be replaced frequently. Yet the ancient process doesn’t yield quite as much compressive strength—this resource won’t hold up super-tall buildings or heavily traveled bridges. In the concrete heart of Manhattan, “We will not use Roman-inspired material,” says Masic, who co-authored a paper with Jackson and two others on reactions within the building materials in the Roman tomb of Caecilia Metella and is an inventor of what he calls a “self-healing” substance. Rather, he says, the timeless concoction could be fashioned into roads that resist wear, walls that withstand waves, and vaults that confine nuclear waste.

What Roman-style concrete does best is survive, aided by its ability to repair itself within days. “This material has phenomenal durability,” Brune says. “Nothing else that you find in the built environment lasts with as much integrity and fidelity.” A key ingredient that gives it this ability lies in the sand-like pozzolan of Pozzuoli, Italy.

ancient-style illustration of pliny talking to reporter with vesuvius erupting in background
Andre Ducci

From fire to the sea

Jackson did not set out to unlock the secrets of Roman concrete. Drawn to volcanology and rock mechanics, she studied Hawaii’s Mauna Loa in the late ’80s and early ’90s. In 1995, she spent a year in Rome with her family, living near the ruins of the Circus Maximus, once a huge chariot-racing stadium. While there, she became fascinated with the volcanic rock incorporated into the city’s celebrated classical architecture.

Roman concrete has been the subject of intense scholarship—structures that persist for thousands of years tend to attract attention. But Jackson, with her geologist’s eye, saw something powerful below the surface. “It is very difficult to understand this material unless one understands volcanic rocks,” she says. In her analysis, Jackson focused on tephra, particles spit out in a volcanic eruption, and tuff, the rock that forms when tephra firms up. 

Her first paper about Roman building materials, a collaboration with four other scientists, was published in the journal Archaeometry in 2005. The group described seven deposits where ancient builders had collected tuff and stones. These were products of explosive eruptions from two volcanoes north and south of Rome. By the first century BCE, Roman architects had recognized the resilience of these rocks and had begun to place them in what Jackson notes were “strategic positions” around the city.

While she examined materials in the Eternal City, others were separately scouring the sea. A trio of scholars and scuba divers—classical archaeologists Robert L. Hohlfelder and John Oleson, and London-based architect Christopher Brandon—launched the Roman Maritime Concrete Study in 2001. Over the next several years, they collected dozens of core samples from Egypt, Greece, Italy, Israel, and Turkey, taken from 10 Roman harbor sites and one piscina, a seaside tank for corralling edible fish.

Some of the locations they inspected were immense structures: At Caesarea Palaestinae, a port city built between 22 and 10 BCE during the reign of King Herod, Romans created a harbor from an estimated 20,000 metric tons of volcanic ash. 

To look inside the ruins, the archaeologists needed heavy machinery. “You used to whack some pieces off the outside of a big, maybe 400-cubic-meter lump of concrete on the ocean floor,” says Oleson, a University of Victoria professor emeritus. But that approach has flaws. The surface is already decayed from sea growth, so whatever breaks off might not represent what’s deeper inside. “You’ve also been whacking on it with a hammer,” he says, which can foul the opportunity to measure its material strength.

Romacons Project diver Chris Brandon collects a concrete core from Portus Julius in the Gulf of Pozzuoli. The underwater missions offered a closer look at Roman concrete. Romacons Project

The project required a more precise, piercing touch. A cement company in Italy, Italcementi, provided funding and helped get the three men a specialized hydraulic coring rig. Diving beneath the Mediterranean, they spent hours drilling, extracting cylindrical cores up to 20 feet long. “It was difficult,” Oleson says. “In places like Alexandria, the visibility—because of all the things you don’t want to think about—was less than your arm length.”

That effort paid off. No one had been able to look at the layers within the submerged structures before. The opinion at the time was that the concrete must have been extra strong to last for thousands of years in seawater. But that wasn’t the case, Oleson and his colleagues found: “In modern engineering terms, it’s quite weak,” he says. What it was, though, was remarkably consistent in its volcanic elements. Oleson theorizes that grain ships used Neapolitan pozzolan as ballast, ferrying it to work sites hundreds of miles from its source.

In 2007, the trio’s presentation on seawater concrete won an award at the Archaeological Institute of America’s annual meeting. “I was standing there, bathing in the glory, and this short, excitable woman came up and started talking to me,” Oleson recalls. The stranger was Jackson, who Oleson says launched into a detailed explanation of the rare crystal minerals she had observed within Roman architectural concrete. Oleson, for his part, had never taken a college chemistry course, but he recognized a kindred spirit—and that this geologist had expertise his group needed. 

They gave Jackson access to the maritime samples. And when she peered inside, she found chemical laboratories on a nanometer scale.

Reactions in the rock

In his first-century work Natural History, Roman author Pliny the Elder wrote of a dust that “as soon as it comes into contact with the waves of the sea and is submerged, becomes a single stone mass, impregnable to the waves and every day stronger.” How precisely these wet grains—the pozzolan—became ever stronger would not be revealed for almost 2,000 years.

When Jackson investigated the core samples Oleson and his colleagues had obtained, she spotted some of the same features she’d seen in the architectural concrete in metropolitan Rome. But in the sunken stuff, she also saw what she labeled mineral cycling: a looping reaction in which compounds formed, dissolved, and formed new ones.

To make concrete, Romans mixed tephra with hydrated lime. That accelerates the production of a mineral glue called calcium aluminum silicate hydrate, or C-A-S-H. (The backbone of unadulterated modern concrete, C-S-H, is a similar binder.) This happens within the first months of installation, Jackson says. Within five to 10 years, the material composition changes again, consuming all the hydrated lime through a kind of microscopic interior remodeling. By then, percolating fluid “begins to really make a difference” as it produces long-lasting, cementlike minerals within.
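The staged timeline described above can be compressed into a simple lookup. The cutoffs paraphrase the article's rough figures ("the first months," "five to 10 years") and approximate what is really a continuous chemical process:

```python
# Simplified timeline of the curing stages described above. The cutoffs
# are rough paraphrases of the article's figures, not precise chemistry.

def roman_concrete_stage(age_years):
    if age_years < 0.5:
        return "C-A-S-H binder forming from tephra and hydrated lime"
    elif age_years < 10:
        return "hydrated lime being consumed by interior remodeling"
    else:
        return "percolating fluid producing long-lasting cementlike minerals"
```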

B&W closeups of pumice clast (top) and lime clast
Microscopy images from the Jackson lab reveal the crystalline reactions of the C-A-S-H binder (top) and lime clast with seawater (bottom) in original Roman concrete. Marie D. Jackson / University of Utah (2)

Jackson and a team of scientists used sophisticated microscope and X-ray techniques, including work done at the Lawrence Berkeley National Laboratory’s Advanced Light Source, to look at these powerful but teeny crystals. “We were able to show systematically that Roman seawater concrete had continued to change over time,” she says. Within each pore of the concrete, seawater had reacted with glass or crystal compounds. In particular, she found stiff, riblike plates of a rare mineral known as aluminous tobermorite, which probably help prevent fractures, as she and her colleagues wrote in a 2017 paper in the journal American Mineralogist.

The ocean itself plays a vital role. Roman fabricators made their marine concrete mixtures with seawater, and its salts became part of the mineral structure—sodium, chlorine, and other ions helped activate the tephra-lime reaction. Once the concrete was in the tides, as fluid slowly percolated through the hulking edifices, life flourished on the facades. Worms made tubes and other invertebrates sprouted shells.

Modern reinforced concrete, meanwhile, needs a high pH to preserve the steel rebar within, which means its surface is less friendly to living things. Once it is cast, after about 28 days of hardening and curing, it is near its maximum sturdiness, Brune notes. (Attempts are underway to give newer kinds of concrete the ability to restore themselves, such as infusing the material with bacterial spores that create limestone.)

“We were able to show systematically that Roman seawater concrete had continued to change over time.”

—Marie D. Jackson

Specifically, concrete using Portland cement is as brittle as it is strong. Under too much strain, it cracks, sometimes with a sharp snap that propagates and causes wide-scale failure. “The ability of the material to carry further loads, it’s gone. It’s fractured,” Jackson says. 

Roman concrete breaks differently. Brune and Jackson have tested their analogues under strain, creating semicircles out of the blend and pressing them to the cracking point. They observed that unlike extremely inflexible substances that will fail and essentially split into halves, Roman concrete displaces the strain over many small fractures, without necessarily losing its overall integrity. “Roman concrete-style materials respond really well to that kind of cracking,” Brune says, adding that this feature could explain why the age-old recipe has endured so long despite earthquakes and the churn of aquatic environments.

White clumps of lime found in Roman concrete can also keep it robust, as Masic and fellow MIT scientists reported in a Science Advances paper in January. In lab experiments, the team drained water through cracked concrete cylinders for 30 days. Water continuously flowed through broken samples of typical concrete. But in concrete with added lime gobs, calcite crystallized to fill the gaps. 

Jackson and Brune have observed similar self-restoring abilities in their marine concrete replicas. In to-be-published experiments funded by the Advanced Research Projects Agency-Energy grant, they again cracked semicircles of the concoction. When they placed the damaged arcs in containers of seawater, chemical reactions resumed—new glue accumulated in the fractures. This, Jackson says, is concrete that self-repairs.

New trials, new island

As 2023 surges on, Roman-style concrete will venture further than ever before. US Army Corps of Engineers research geologist Charles Weiss, who studies concrete and other structural materials, has submitted a proposal to try out Jackson’s formula. If the Vicksburg, Mississippi, military lab receives the funds—“Working for the government, nothing is for sure,” he says—Corps researchers will cast the material and place it in a body of water.

Elsewhere, another federal project’s failure may have helped Jackson’s creation along. In 2018 in South Carolina, at the Department of Energy’s Savannah River National Laboratory, scientists were trying to make a product that could safely store radioactive garbage.

view of surtsey island, iceland
Surtsey Island, located nearly 20 miles off the southern end of Iceland, is still geologically young. This makes it ripe for studying tephra in its natural habitat. Arctic Images / Alamy

The national lab wanted to create foam glass, a type of bubble-filled substance meant to be inactive, and contracted the Silica-X team to help. They weren’t successful. The mixture kept reacting with its surroundings—a problem because if radioactive waste receptacles dissolve, they can release unstable particles. But what’s bad for nuclear trash is good for seawalls designed to respond to their environs. Glass designers at the lab recognized this potential and connected Jackson with the company.

Despite the growing interest in Roman-style concrete, it is neither feasible nor sustainable to mine industrial amounts of pozzolan from Naples. Instead, Philip Galland, Silica-X’s chief executive, says its production process digs into nonnuclear US waste streams to obtain silica, which is then transformed into synthetic tephra. That will be the basis for the upcoming Long Island Sound field test, Galland says, in an “area where it can offer shoreline resilience.”

Silica-X plans to assess the 3.5-foot cubes’ durability over two years. Along with its partners—the New York Department of Environmental Conservation and Alfred University, home to an influential ceramics college—the company will analyze the material’s potential as a storm-surge barrier and how it performs as a habitat for microbes and other local marine life.

At the same time, Jackson has returned to her original subject matter: volcanoes. She is the principal investigator of a project to study Surtsey, a tiny volcanic island off of Iceland that’s just 60 years old. A UNESCO World Heritage site, it emerged from the Atlantic in sprays of smoke and lava from 1963 to 1967. “I remember when it first erupted,” Jackson says, “because my dad came home from work and told us that there was a baby volcano erupting.”

At Surtsey, scientists have found microbial life in basalt rocks previously untouched by humans. (Aside from research teams who arrive by boat or helicopter, visitors are banned from the volcano.) They have drilled to the seafloor, through stone that is still hot years after the last eruption, and examined the tephra there. As it slumbers, Jackson believes this place can reveal what happened in the early years of submerged Roman concrete. 

What she knows about the material has been gleaned from stuff that’s aged for thousands of years underwater. Although the young terrain is an imperfect replica of the coveted ancient ingredient—the fluids there aren’t quite the same as what percolated through the Roman structures—Jackson says she has already spied some similar geochemical processes. The ash and seawater around the volcano offer a parallel to the early reactions that gave a great civilization its building blocks. This is a living laboratory that could teach us Roman concrete’s art of change, witnessed on a scale as massive as a new island or as tiny as minerals morphing across millennia. If all goes well, the modern version of this powerful invention will outlast its makers just the same.

Read more PopSci+ stories.



]]>
A new artificial skin could be more sensitive than the real thing https://www.popsci.com/technology/artificial-skin-iontronic/ Fri, 27 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=508099
two hands
Could artificial skin be the next frontier in electronics?. Shoeib Abolhassani / Unsplash

It can detect direct pressure as well as objects that are hovering close by.

The post A new artificial skin could be more sensitive than the real thing appeared first on Popular Science.

]]>

Human skin is the body’s largest organ. It also provides one of our most important senses: touch. Touch enables people to interact with and perceive objects in the external world. In building robots and virtual environments, though, touch has not been the easiest feature to translate compared to, say, vision. Many labs are nonetheless trying to make touch happen, and various versions of artificial skin show promise in making electronics (like the ones powering prosthetics) smarter and more sensitive.

A study out this week in the journal Small presents a new type of artificial skin, created by a team at Nanyang Technological University in Singapore, that can sense not only direct pressure applied to it but also objects approaching it. 

[Related: One of Facebook’s first moves as Meta: Teaching robots to touch and feel]

Various artificial skin mockups in the past have been able to pick up on factors like temperature, humidity, surface details, and force, and turn those into digital signals. In this case, the artificial skin is “iontronic,” meaning it integrates ions and electrodes to enable sensing. 

Specifically, it’s made up of a porous, spongy layer soaked with salty liquid, sandwiched between two fabric electrode layers embedded with nickel. These raw components are low-cost and easily scalable, which the researchers say makes this type of technology suitable for mass production. The result is a material that is bendy, soft, and conductive. The structure’s internal chemistry means that pressure applied to the material induces a change in capacitance, producing an electric signal. 
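A parallel-plate sketch shows why pressure changes capacitance: compressing the spongy ionic layer thins the gap between the electrode layers, and capacitance scales inversely with that gap (C = εrε0A/d). All dimensions and the permittivity below are illustrative, not values from the study:

```python
# Parallel-plate approximation of the pressure response described above.
# Area, gap, and relative permittivity are illustrative assumptions,
# not figures from the Nanyang Technological University study.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_f(area_m2, gap_m, eps_r):
    """Ideal parallel-plate capacitance: C = eps_r * eps_0 * A / d."""
    return eps_r * EPS0 * area_m2 / gap_m

at_rest = capacitance_f(1e-4, 1.0e-3, eps_r=80.0)
pressed = capacitance_f(1e-4, 0.5e-3, eps_r=80.0)  # gap halved by pressure
assert pressed > at_rest  # thinner gap -> larger capacitance -> signal
```

Halving the gap doubles the capacitance, and it is that shift the readout electronics convert into a pressure signal.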

“We created artificial skin with sensing capabilities superior to human skin. Unlike human skin that senses most information from touching actions, this artificial skin also obtains rich cognitive information encoded in touchless or approaching operations,” says corresponding author Yifan Wang, an assistant professor at Nanyang Technological University in Singapore, in a press release. “The work could lead to next-generation robotic perception technologies superior to existing tactile sensors.”

The design of the device also creates a “fringing electric field” around the edge of the skin. This field can sense when objects get close and can even discern what material an object is made of. In a small proof-of-concept demo, for example, it distinguished between plastic, metal, and human skin. 
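That proximity demo can be caricatured as a threshold classifier on the fringe-capacitance shift. The ordering (metal perturbs the field most, plastic least) is physically plausible, but the threshold and shift numbers here are invented for illustration, not taken from the study:

```python
# Toy version of the material-discrimination demo described above.
# Thresholds (in picofarads of capacitance shift) are invented; only
# the qualitative ordering is meant to be plausible.

def classify_nearby(shift_pf):
    if shift_pf < 0.1:
        return "plastic"      # low permittivity, small perturbation
    if shift_pf < 1.0:
        return "human skin"   # watery tissue, moderate perturbation
    return "metal"            # conductor, strongest perturbation
```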

As for use cases, the artificial skin could be put onto robot fingers or serve as a control interface for an electronic game, using the touch of a finger to move the characters. In their experiment, users played Pac-Man and navigated through electronic maps by interacting with a panel of the artificial skin. 



]]>
NASA aims to fly its experimental electric plane this year https://www.popsci.com/technology/nasa-electric-plane-x-57-first-flight-plans/ Wed, 25 Jan 2023 23:00:00 +0000 https://www.popsci.com/?p=507843
The X-57 will fly in a configuration like this one—with an electric motor on each wing. Here, it undergoes testing in April, 2021. NASA/Lauren Hughes

Following a turbulent development that saw some components dramatically failing during testing, the X-57 is set to finally take flight in 2023. Here's what's been happening.

The post NASA aims to fly its experimental electric plane this year appeared first on Popular Science.

]]>

Sometime later this year—perhaps this summer, perhaps this fall—an electric aircraft from NASA, the X-57, is set to take flight in California. It’s what NASA describes as its “first all-electric experiment aircraft,” and when it does lift off the ground, it won’t look the way that NASA has been depicting the plane on its website.

Instead of a whopping 14 electric motors and propellers, the aircraft will have just two. But those two motors, powered by more than 5,000 cylindrical battery cells in the aircraft’s fuselage, should be enough to get it up in the air before 2023 is over, which is when the X-57 program is set to power down, too. 
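The cell count supports a back-of-envelope energy estimate. The per-cell voltage and capacity below are generic lithium-ion assumptions, not NASA's actual cell specs:

```python
# Back-of-envelope pack energy from the cell count in the article.
# Per-cell voltage and capacity are generic cylindrical-cell
# assumptions for illustration, not NASA's X-57 specifications.
cells = 5000          # "more than 5,000" cylindrical cells, per the article
cell_volts = 3.6      # nominal voltage, assumed
cell_amp_hours = 3.0  # capacity, assumed

pack_kwh = cells * cell_volts * cell_amp_hours / 1000
# roughly 54 kWh under these assumptions
```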

Here’s what to know about how the plane will work, the challenges the program has faced, and how lessons from spaceflight helped inform the details of its battery system. 

Modification 2 

If the plane does indeed take flight this year as planned, it will do so in a form called Modification 2, which involves one electric motor and propeller on each wing giving the aircraft the thrust it needs to take to the skies.

While the aeronautics and space agency had hoped to fly the plane—which is based on a Tecnam P2006T—in additional configurations, known as Modifications 3 and 4, that won’t happen. Why? Because making a plane that flies safely on just electricity is hard, and the program is only funded through 2023. (IEEE Spectrum has more on the program’s original plans.)

“We’ve been learning a lot over the years, and we thought we’d be learning through flight tests—it turns out we had a lot of lessons to learn during the design and integration and airworthiness qualification steps, and so we ended up spending more time and resources on that,” says Sean Clark, the principal investigator for the X-57 program at NASA. 

“And that’s been hugely valuable,” he adds. “But it means that we’re not going to end up having resources for those Mod 4 [or 3] flights.” 

It will still fly as an all-electric plane, but in Mod 2, with two motors. 

Exploding transistors 

One glitch the team had to iron out before the aircraft could safely take flight involved the components that electricity from the batteries has to travel through on its way to the motors. The problem was with transistor modules inside the inverters, which convert the electricity from DC to AC.

“We were using these modules that are several transistors in a package—they were specced to be able to tolerate the types of environments we were expecting to put it in,” says Clark. “But every time we would test them, they would fail. We would have transistors just blowing up in our environmental test chamber.” 

[Related: This ‘airliner of the future’ has a radical new wing design]

A component failure—such as a piece of equipment blowing up—is the type of issue that aircraft makers prefer to resolve on the ground. Clark says the team figured it out. “We did a lot of dissection of them—after they explode, it’s hard to know what went wrong,” he notes wryly. The solution was newer hardware and “redesigning the inverter system basically from the ground up,” he says.

They are now “working really well,” he adds. “We’ve put a full set through qualification, and they’ve all passed.”

An older rendering of the X-57 shows it with a skinny wing and 14 motors; it will not fly with this configuration. NASA Graphic / NASA Langley/Advanced Concepts Lab, AMA, Inc.

Lessons from space

Traditional aircraft burn fossil fuels—obviously flammable and explosive substances—to power their engines. Those working on electric aircraft, powered by batteries, need to ensure that the battery cells don’t spark fires, either. Last year in Kansas, for example, an FAA-sponsored test dropped a pack of aviation batteries from 50 feet to ensure the cells could handle the impact. They did.

In the X-57, the batteries are a model known as 18650 cells, made by Samsung. The aircraft uses 5,120 of them, divided into 16 modules of 320 cells each. An individual module, which includes both battery cells and packaging, weighs around 51 pounds, Clark says. The trick is to make sure all of these components are packaged in the right way to avoid a fire, even if one battery experiences a failure. In other words, failure is an option, but the plan is to manage any failure so that it does not start a blaze. “We found that there was not an industry standard for how to package these cells into a high-voltage, high-power pack, that would also protect them against cell failures,” Clark says.

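The module figures above are easy to sanity-check with quick arithmetic. In the sketch below, the total cell count comes straight from the article; the total pack weight is our own rough extrapolation from the per-module figure, not a number from NASA:

```python
# Quick sanity check of the X-57 battery figures cited above.
CELLS_PER_MODULE = 320
MODULE_COUNT = 16
MODULE_WEIGHT_LB = 51  # approximate; includes cells and packaging

total_cells = CELLS_PER_MODULE * MODULE_COUNT
total_weight_lb = MODULE_COUNT * MODULE_WEIGHT_LB  # rough estimate only

print(total_cells)      # 5120, matching the count in the article
print(total_weight_lb)  # 816, a rough all-module pack weight in pounds
```

That back-of-the-envelope total (on the order of 800 pounds of batteries) helps explain why weight and packaging dominated the design work.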
[Related: The Air Force wants to modernize air refueling, but it’s been a bumpy ride]

Help came from higher up. “We ended up redesigning the battery pack based on a lot of input from some of the design team that works on the space station here at NASA,” he adds. He notes that lithium batteries on the International Space Station, as well as in the EVA suits astronauts use and a device called the pistol grip tool, were relevant examples in the process. The key takeaways involved the spacing between the battery cells, as well as how to handle the heat if a cell did malfunction, like by experiencing a thermal runaway. “What the Johnson [Space Center] team found was one of the most effective strategies is to actually let that heat from that cell go into the aluminum structure, but also have the other cells around it absorb a little bit of heat each,” he explains.

NASA isn’t alone in exploring the frontier of electric aviation, which represents one way that the aviation industry could be greener for short flights. Others working in the space include Beta Technologies, Joby Aviation, Archer Aviation, Wisk Aero, and Eviation with a plane called Alice. One prominent company, Kitty Hawk, shuttered last year.

Sometime this year, the X-57 should fly for the first time, likely making multiple sorties. “I’m still really excited about this technology,” says Clark. “I’m looking forward to my kids being able to take short flights in electric airplanes in 10, 15 years—it’s going to be a really great step for aviation.”

Watch a brief video about the aircraft, below:

The post NASA aims to fly its experimental electric plane this year appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Watch this metallic material move like the T-1000 from ‘Terminator 2’ https://www.popsci.com/technology/magnetoactive-liquid-metal-demo/ Wed, 25 Jan 2023 22:00:00 +0000 https://www.popsci.com/?p=507689
Lego man liquid-metal model standing in mock jail cell
Hmm. This scene looks very familiar. Wang and Pan, et al.

A tiny figure made from the magnetoactive substance can jailbreak by shifting phases.

The post Watch this metallic material move like the T-1000 from ‘Terminator 2’ appeared first on Popular Science.

Sci-fi film fans are likely very familiar with that scene in Terminator 2 when Robert Patrick’s slick, liquid-metal T-1000 robot oozes through the metal bars of a security door. It’s an iconic set piece that relied on then-cutting-edge computer visual effects—that’s sort of director James Cameron’s thing, after all. But researchers recently developed a novel substance capable of recreating a variation on that ability. With more experimentation and fine-tuning, this new “magnetoactive solid-liquid phase transitional machine” could provide a host of tools for everything from construction repair to medical procedures.

[Related: ‘Avatar 2’s high-speed frame rates are so fast that some movie theaters can’t keep up.]

So far, researchers have been able to make their substance “jump” over moats, climb walls, and even split into two cooperative halves to move around an object before reforming back into a single entity, as detailed in a new study published on Wednesday in Matter. In a cheeky video featuring some strong T2 callbacks, a Lego man-shaped mold of the magnetoactive solid-liquid can even be seen liquifying and moving through tiny jail cell bars before reforming into its original structure. If that last part seems a bit impossible, well, it is. For now.

“There is some context to the video. It [looks] like magic,” Carmel Majidi, a senior author and mechanical engineer at Carnegie Mellon, explains to PopSci with a laugh. According to Majidi, everything leading up to the model’s reformation is as it appears—the shape does liquefy before being drawn through the mesh barrier via alternating electromagnetic currents. From there, however, someone pauses the camera to recast the mold into its original shape.

But even without the little cinema-history gag, Majidi explains that the new material could have major benefits in a host of situations. The team, made up of experts from The Chinese University of Hong Kong and Carnegie Mellon University, created the “phase-shifting” material by embedding magnetic particles within gallium, a metal with an extremely low melting point of just 29.8 degrees Celsius, or about 86 degrees Fahrenheit. To melt it, the magnetically infused gallium is exposed to an alternating magnetic field, which generates heat through induction. Changing the field can also steer the liquefied metal, which remains far less viscous than similar phase-changing materials.

[Related: Acrobatic beetle bots could inspire the latest ‘leap’ in agriculture.]

“There’s been a tremendous amount of work on these soft magnetic devices that could be used for biomedical applications,” says Majidi. “Increasingly, those materials [could be] used for diagnostics, drug delivery… [and] recovering or removing foreign objects.”

Majidi and colleagues’ latest variation, however, stands apart from similar amorphous substances. “What this endows those systems with is their ability to now change stiffness and change shape, so they can now have even greater mobility within that context.”

[Related: Boston Dynamics’s bipedal robots can throw heavy objects now.]

Majidi cautions, however, that any deployment in doctors’ offices is still far down the road. In the meantime, it’s much closer to being deployed in situations such as circuit assembly and repair, where the material could ooze into hard-to-reach areas before congealing as simultaneously both a conductor and solder.

Further testing is needed to determine the substance’s biocompatibility in humans, but Majidi argues that it’s not hard to imagine patients one day entering an MRI-like machine that can guide ingested versions of the material during medical procedures. For now, however, modern technology is at least one step closer to catching up with Terminator 2’s visual-effects wizardry from over 30 years ago.

From film to forensics, here’s how lidar laser systems are helping us visualize the world https://www.popsci.com/technology/lidar-use-cases/ Sat, 21 Jan 2023 12:00:00 +0000 https://www.popsci.com/?p=506859
movie set
A movie production ongoing. DEPOSIT PHOTOS

The technology is being applied in wacky and unexpected ways.

The post From film to forensics, here’s how lidar laser systems are helping us visualize the world appeared first on Popular Science.

Lidar, a way to use laser light to measure how far away objects are, has come a long way since it was first put to work on airplanes in 1960. Today, it can be seen mounted on drones, robots, self-driving cars, and more. Since 2016, Leica Geosystems has been thinking of ways to apply the technology to a range of industries, from forensics, to building design, to film. (Leica Geosystems was acquired by Swedish industrial company Hexagon in 2005 and is separate from Leica Microsystems and Leica Camera). 

To do this, Leica Geosystems squeezed the often clunky, 3D-scanning lidar technology down to a container the size of a soft drink can. A line of products called BLK is specifically designed for the task of “reality capture,” and is being used in a suite of ongoing projects, including the mapping of ancient water systems hidden beneath Naples that were used to naturally cool the city, the explorations of Egyptian tombs, and the modeling of the mysterious contours of Scotland’s underground passages, as Wired UK recently covered. 

The star of these research pursuits is the BLK360, which, like a 360-degree camera, swivels around on a tripod to image its surroundings. Instead of taking photos, it’s measuring everything with lasers. The device can be set up and moved around to create multiple scans that can be compiled in the end to construct a 3D model of an environment. “That same type of [lidar] sensor that’s in the self-driving car is used in the BLK360,” says Andy Fontana, Reality Capture specialist at Leica Geosystems. “But instead of having a narrow field of view, it has a wide field of view. So it goes in every direction.”

[Related: Stanford researchers want to give digital cameras better depth perception]

Besides the BLK360, Leica Geosystems also offers a flying sky scanner, a scanner for robots, and a handheld scanner that works on the go. The Rhode Island School of Design-led team studying Naples’ waterways is using both the BLK360 and the on-the-go devices in order to scan as much of the city as possible. Figuring out the particular designs ancient cities used to create conduits for water as a natural cooling infrastructure can provide insights on how modern cities around the world might be able to mitigate the urban heat island effect.

Engineering photo
Leica Geosystems

Once all the scans come off the devices, they exist as a 3D point cloud—clusters of data points in space. This format is frequently used in the engineering industry, and it can also be used to generate visualizations, like in the Scotland souterrain project. “You can see that it’s pixelated. All of those little pixels are measurements, individual measurements. So that’s kind of what a point cloud is,” Fontana explains. “What you can do with this is convert it and actually make it into a 3D surface. This is where you can use this in a lot of other applications.”

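At its simplest, a point cloud is nothing more exotic than a list of XYZ coordinates. The toy sketch below (with made-up coordinates) shows the idea of compiling two tripod scans into one cloud; real scanning software first aligns the scans against each other, a step called registration, before converting the cloud into a surface:

```python
# A point cloud is a collection of individual XYZ measurements.
# Two toy "scans" of the same corner, from different tripod spots:
scan_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
scan_b = [(1.0, 1.0, 0.0), (2.0, 1.0, 0.0)]

# Compiling scans into one model is, at its core, concatenation --
# after the scans have been registered into a shared coordinate frame.
merged = scan_a + scan_b
print(len(merged))  # 5 individual measurements in the combined cloud
```

Each tuple here stands in for one of the "little pixels" Fontana describes; a real BLK360 scan contains millions of them.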
[Related: A decked out laser truck is helping scientists understand urban heat islands]

Lidar has become an increasingly popular tool in archaeology, as it is able to procure more accurate dimensions of a space than images alone with scans that take less than a minute—and can be triggered remotely from a smartphone. But Leica Geosystems has found an assortment of useful applications for this type of 3D data.

One of the industries interested in this tech is film. Imagine this scenario: a major studio constructs an entire movie set for an expensive action film. Particular structures and platforms are needed for a specific scene. After the scene is captured, the set gets torn down to make room for another one to be erected. If, in the editing process, it’s decided that the footage is not good enough, the crew would have to rebuild that whole structure and bring people back—a costly process.

However, another option now is for the movie crew to do a scan of every set they build. And if they do miss something or need to make a last-minute addition, they can use the 3D scan to edit the scene virtually on the computer. “They can fix things in CGI way easier than having to rebuild [the physical set],” Fontana says. “And if it’s too big a lift to do it on the computer, they can rebuild it really accurately because they have the 3D data.”

[Related: These laser scans show how fires have changed Yosemite’s forests]

Other than film, forensics is a big part of Leica Geosystems’ business. Instead of only photographing a crime scene, investigators now scan it, for a couple of different reasons. “Let’s say it’s a [car] crash scene. If they take a couple of scans, you can have the entire scene captured in 2 minutes in 3D. And then you can move the cars out of the way of traffic,” says Fontana. “That 3D data can be used in court. In the scan, you can even see skid marks. You can see that this person was braking and there were these skid marks, and they can calculate the weight of the car, compared to the length of the skid mark, to see how fast they were going.”

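The speed calculation Fontana alludes to has a textbook form: for a skid to a stop, the pre-braking speed is roughly v = sqrt(2·mu·g·d), where d is the skid length and mu is the tire-road friction ("drag factor"). Notably, the vehicle's mass cancels out of this basic model; weight enters only through adjustments to the drag factor. A minimal sketch, with an assumed dry-asphalt friction value of 0.7:

```python
import math

# Standard skid-to-stop estimate used in crash reconstruction:
# v = sqrt(2 * mu * g * d). Mass cancels; friction is what matters.
def speed_from_skid(skid_length_m, mu=0.7, g=9.81):
    """Approximate pre-braking speed in m/s from skid-mark length."""
    return math.sqrt(2 * mu * g * skid_length_m)

v = speed_from_skid(20.0)  # a 20 m skid on dry asphalt (mu ~= 0.7)
print(round(v * 3.6))      # ~60 km/h
```

Because the 3D scan preserves the skid geometry exactly, investigators can plug measured lengths into formulas like this long after the roadway has been cleared.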
With more graphic situations, like a murder or a shooting, this 3D data can be used to create “cones that show a statistical confidence of where that bullet came from based on how it hit the wall,” he says. 

As lidar continues to be expanded in tried and true applications, the growing variety of use cases will hopefully inspire innovators to think of ever more new approaches for this old tech. 

It’s not a UFO—this drone is scooping animal DNA from the tops of trees https://www.popsci.com/technology/e-dna-drone-tree-top/ Wed, 18 Jan 2023 22:22:15 +0000 https://www.popsci.com/?p=506207
drone on branch
An eDNA sampling drone perched on a branch. ETH Zurich

This flying robot can help ecologists understand life in forest canopies.

The post It’s not a UFO—this drone is scooping animal DNA from the tops of trees appeared first on Popular Science.

If an animal passes through the forest and no one sees it, does it leave a mark? A century ago, there would have been no way to pick up whatever clues were left behind. But with advancements in DNA technology, particularly environmental DNA (eDNA) detecting instruments, scientists can glean what wildlife visited an area based on genetic material in poop, as well as microscopic skin and hair cells that critters shed and leave behind. For ecologists seeking to measure an ecosystem’s biodiversity as non-invasively as possible, eDNA can be a treasure trove of insight: it can capture the presence of multiple species in just one sample.

But collecting eDNA is no easy task. Forests are large open spaces that aren’t often easily accessible (canopies, for example, are hard to reach), and eDNA could be lurking anywhere. One way to break up this big problem is to focus on a particular surface in the forest to sample eDNA from, and use a small robot to go where humans can’t. That’s the chief strategy of a team of researchers from ETH Zurich, the Swiss Federal Institute for Forest, Snow and Landscape Research WSL, and the company SPYGEN. A paper on their approach was published this week in the journal Science Robotics.

In aquatic environments, eDNA-gathering robots sip and swim to do their jobs. But to reach the treetops, not only do researchers have to employ flying drones (which are tougher to orient and harder to protect), these drones also need to be able to perch on a variety of surfaces. 

[Related: These seawater-sipping robots use drifting genes to make ocean guest logs]

The design the Swiss team came up with looks much like a levitating basket, or maybe a miniature flying saucer. They named the 2.6-pound contraption eDrone. It has a cage-like structure made up of four arcs that extend below the ring mainframe, which measures around 17 inches in diameter. The ring and the cage-like body protect the drone and its four propellers from obstacles, kind of like the ring around a bumper car.

To maneuver, the eDrone uses a camera and a “haptic-based landing strategy,” according to the paper, that can perceive the position and magnitude of forces being applied to the body of the robot in order to map out the appropriate course of action. To help it grip, there are also features like non-slip material, and carbon cantilevers on the bottom of each arc. 

Once it firmly touches down, the drone uses a sticky material on each arc to peel off an eDNA sample from the tree branch and stow it away for later analysis. In a small proof-of-concept run, the eDrone successfully obtained eDNA samples from seven trees across three different families. Testing across families matters because different tree species have their own branch morphologies (some branches are cylindrical while others jut out more irregularly), and different trees host different animals and insects.

“The physical interaction strategy is derived from a numerical model and experimentally validated with landings on mock and real branches,” the researchers wrote in the paper.  “During the outdoor landings, eDNA was successfully collected from the bark of seven different trees, enabling the identification of 21 taxa, including insects, mammals, and birds.”

Although the robot did its intended job well in these small trials, the researchers noted that more extensive studies are needed into how its performance may be affected by tree species beyond the ones they tested, or by changing environmental conditions like wind or overcast skies. Moreover, eDNA gathering by robot, they propose, can be an additional way to sample eDNA in forests alongside other methods like analyzing eDNA from pooled rainwater.

“By allowing these robots to dwell in the environment, this biomonitoring paradigm would provide information on global biodiversity and potentially automate our ability to measure, understand, and predict how the biosphere responds to human activity and environmental changes,” the team wrote. 

Watch the drone in action below: 

The illuminating tech inside night vision goggles, explained https://www.popsci.com/technology/how-does-night-vision-work/ Mon, 16 Jan 2023 23:00:00 +0000 https://www.popsci.com/?p=505358
A night vision view of a C-17 pilot in the airplane's cockpit.
Traditionally, night vision goggles have displayed their scenes in green and black. Here, a pilot of a C-17 wears goggles in February, 2022. US Air National Guard / Mysti Bicoy

Seeing in the dark is all about photons, electrons, and phosphor.

The post The illuminating tech inside night vision goggles, explained appeared first on Popular Science.

If you want to be able to see in the dark, a good first step is to turn on the lights. That’s why cars have headlights, nocturnal hikers wear headlamps, and dog-walkers carry flashlights after the sun sets. Adding artificial light to the scene makes it knowable. 

But there’s another approach to seeing in the dark that involves a piece of military gear: night vision goggles. If you’ve ever seen a green-tinted scene in the movies and wondered how this type of equipment works, here’s a look at the three-step process that takes place inside this type of device. 

A member of the Hawaii Air National Guard holding night vision goggles up to his eyes.
A member of the Hawaii Air National Guard tries out night vision goggles; the model he is using displays with white phosphor, not green. US Air National Guard / Mysti Bicoy

How does night vision work?

When the sun is out, the reason you can see an object like a tree nearby is because light is reflecting off of it and making its way to your eyes. Of course, that reflected light doesn’t exist in the same quantities at night. Put another way, there are “very few photons” after dark, observes Matthew Renzi, the senior director of engineering at L3Harris, a defense contractor that makes a night-vision device for the Army called the ENVG-B. (As a reminder, light behaves both like a wave and a particle. The fundamental particles of light are called photons.) 

Imagine a single photon entering the goggles. The initial trick that the night vision device pulls off involves that incoming photon. “We convert that photon to an electron through a photocathode,” Renzi says. “That’s a specialized material that’s there to make that transition from a single photon of light into that electron.”

In brief, this step involves converting particles from the domain of light to the domain of electricity. 

[Related: Let’s talk about how planes fly]

The next step involves boosting the signal from that electron, and for that, the device uses onboard battery power, like from an AA battery or two. “That electron multiplies significantly,” Renzi says, noting that it could be multiplied “tens of thousands” of times. While the part of the device that converted the photon to an electron is called a photocathode, this part that turns the volume up on the electron is known as the microchannel plate. 

Three soldiers standing in a row, with the center one holding night vision goggles up to his eyes, viewed through night vision and thermal sensing.
A soldier, center, with ENVG-B night vision goggles in front of his eyes. This view shows both traditional night vision information as well as thermal sensing. US Army / Pierre Osias

Finally, the information needs to be transferred back into the visual realm, so that whoever is looking through the goggles can see the scene. That happens thanks to a phosphor screen, which the user can see when they look through the eyepiece. “The phosphor screen is what takes that energy from those electrons and converts it back into visible light,” Renzi says. 

White and black is the new green and black

That last stage is where the traditional green and black images are produced, but Renzi says that instead of green, the more modern devices today display the scene in black and white. “You might say that a green and a white are equivalent in terms of measurable performance, but the human eye perceives white and black better than it does green versus black,” he argues. The white-versus-green phosphor distinction also surfaces when it comes to night vision equipment that’s for sale

So in brief, to make a dark scene more seeable, these gadgets take in photons, convert them to electrons, amplify those electrons, and then convert that information back to the visible again. In some cases, night vision goggles will include “an illuminator,” which actually produces a small amount of new light to brighten up the scene, he says.  

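The three-stage chain described above (photocathode, microchannel plate, phosphor screen) can be sketched as simple arithmetic. All three efficiency and gain numbers below are illustrative assumptions chosen for the example; the only figure grounded in the article is that the microchannel plate multiplies each electron "tens of thousands" of times:

```python
# Toy model of the image-intensifier chain described above.
# All constants are assumptions for illustration, not device specs.
PHOTOCATHODE_EFFICIENCY = 0.2  # fraction of photons converted (assumed)
MICROCHANNEL_GAIN = 20_000     # electrons out per electron in (assumed)
PHOSPHOR_EFFICIENCY = 0.5      # electrons back to visible light (assumed)

def intensify(photons_in):
    electrons = photons_in * PHOTOCATHODE_EFFICIENCY  # photocathode
    electrons *= MICROCHANNEL_GAIN                    # microchannel plate
    return electrons * PHOSPHOR_EFFICIENCY            # phosphor screen

print(intensify(100))  # 200000.0 -- a dim scene becomes a visible one
```

The point of the sketch is the shape of the pipeline: even with lossy conversions at both ends, the enormous gain in the middle stage is what turns a handful of starlight photons into an image the eye can see.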
Renzi notes that the parts of the electromagnetic spectrum that these types of night-vision goggles are perceiving comes from both the visible range and the near infrared; the near infrared is the part of the spectrum that’s found right next to the red part of the visible spectrum. This cheesy NASA video breaks that spectrum down:

Traditional night vision goggles focus on the light from both the visible and near-infrared parts of the electromagnetic spectrum, which are, along with shortwave infrared, “part of what we call the reflective bands—where you’re still looking at light that bounces off something,” he says. 

Meanwhile, a different piece of gear—a thermal camera—sees a different part of the electromagnetic spectrum, which is longwave infrared; that’s emissive, as opposed to reflected light. That part of the spectrum, Renzi says, is “outside of the visuals of your traditional night-vision goggles.” A gizmo from L3Harris called the ENVG-B actually combines both traditional night vision goggles, with their focus on the visible spectrum and the near infrared (both of which are reflected light), together with thermal sensing from the longwave infrared area that can see emissive body heat, for example.

The difference between these two types of information is imaginable in a scenario like this: “Let’s say somebody was off in the distance and behind some leaves,” he says. “You would likely pick up that heat with the long-wave infrared, [which] might have been more difficult with [just] the reflective band—but you might not know much about it at that point—you would just see that there’s some heat there.” 

Ultimately, he says that the technology that allows people to see in the dark has evolved over the decades. Older systems with a “passive imaging” approach needed “a full-moon scenario or some sort of ambient light,” he says. Today, night vision goggles can “use starlight.” 

This startup plans to collect carbon pollution from buildings before it’s emitted https://www.popsci.com/technology/carbonquest-building-carbon-capture/ Fri, 13 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=505296
high rise buildings in new york
High rise buildings generate a good proportion of greenhouse gas emissions. Dan Calderwood / Unsplash

CarbonQuest's carbon capture system is being installed at five other locations after a successful pilot test.

The post This startup plans to collect carbon pollution from buildings before it’s emitted appeared first on Popular Science.

After a successful pilot run, a start-up called CarbonQuest is expanding its footprint in New York City. Its mission? Outfit high-rise buildings in carbon capture technology.

Carbon capture tech aims to do exactly what the name implies: capture carbon dioxide emissions generated by burning fossil fuels. It’s just one of the many experimental methods to combat the climate crisis. In this case, instead of grabbing carbon out of the air, the goal would be to prevent it from being expelled by the building in the first place.

“The deal marks the first multibuilding deployment of a new technology that could prove crucial for eliminating carbon emissions from buildings,” Canary Media reported. 

One year ago, the company launched its first working system at 1930 Broadway, a luxury apartment building of more than 350,000 square feet in Lincoln Square owned by Glenwood Management. The heart of the technology is tucked away in the building’s basement, taking up the equivalent of three parking spaces.

While the building operates as usual, CarbonQuest’s equipment captures emissions generated by activities like cooking and heating with natural gas, filters the carbon dioxide out of the mix of exhaust gases, and turns it into a liquid by applying pressure. The company uses software to verify, measure, and report carbon dioxide emissions to third-party verifiers, auditors, and regulators. Since its technology provides “point source capture,” it would in theory prevent any carbon dioxide from being discharged into the atmosphere. (Read PopSci’s guide to carbon capture and storage here.)

[Related: The truth about carbon capture technology]

This liquid carbon dioxide can then be re-used in applications like specially formulated concrete blocks, sustainable jet fuel, chemical manufacturing, algae bioreactors, and more. Currently, Glenwood Management sells its liquid carbon dioxide to concrete maker Glenwood Mason Supply (same name, but unrelated to the management company). 

Some studies suggest that injecting carbon dioxide into concrete can alter its properties, making it stronger than traditional concrete. The Department of Transportation in the City of San Jose has even used carbon dioxide-infused concrete for their ramps

Structures like 1930 Broadway are facing growing pressure to become more sustainable, with the latest incentive coming from NYC’s Local Law 97 requiring large buildings to meet new energy efficiency and greenhouse gas emissions limits by 2024. Those limits will become even more strict in 2030. (The Biden administration rolled out similar emission-cutting regulations around federal buildings.)

According to the City of New York, an estimated “20-25 percent of buildings will exceed their emissions limits in 2024, if they take no action to improve their building’s performance. In 2030, if owners take no action to make improvements, approximately 75-80 percent of buildings will not comply with their emission limits.” 

In the 1930 Broadway case study, technology implemented by CarbonQuest is “expected to cut 60-70 percent of CO2 emissions from natural gas usage” and reduce a building’s annual carbon emissions by 25 percent. Without such an instrument in place, the building could be penalized hundreds of thousands of dollars by the city every year after 2024. Glenwood Management has already ordered five more systems for other rental properties in the city to be installed by March of 2023, according to an announcement earlier this month.

“The installations at these buildings — which include The Fairmont (300 East 75th Street), The Bristol (300 East 56th Street), The Paramount Tower (240 East 39th Street), The Barclay (1755 York Avenue) and The Somerset (1365 York Avenue) — come on the heels of the success of Glenwood’s pilot project with CarbonQuest at The Grand Tier (1930 Broadway), which is the first commercially operational building carbon capture project on the market,” the companies elaborated in the press release

Buildings are proportionally the greatest source of carbon emissions in NYC—they make up around 70 percent of the city’s greenhouse gas emissions (for comparison, here are the greatest sources of carbon emissions nationally). 

Engineers have been brainstorming ways to make high-rise buildings greener, including rethinking design, construction materials, and the construction process. Others have considered integrating plants and outer skins on the surfaces of these skyscrapers to help them conserve energy. CarbonQuest claims it has no direct competitors in the building space, although many big companies have been investing in nascent technologies that can remove and repurpose greenhouse gas emissions. 

Ultimately, capturing carbon emissions offers a separate, less disruptive way of reducing emissions than electrification or heat pumps, though it alone is unlikely to be the end-all solution. “Building Carbon Capture can provide a cost-effective means of providing immediate reductions while the grid, over time, becomes greener,” the company noted on its FAQ page.

The post This startup plans to collect carbon pollution from buildings before it’s emitted appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Will ‘flying cars’ cause traffic jams in the sky? https://www.popsci.com/technology/flying-cars-and-traffic-control/ Tue, 10 Jan 2023 14:00:00 +0000 https://www.popsci.com/?p=482617
Engineering photo
Josie Norton

Soaring over street traffic is appealing, but we need to figure out how to manage congestion in the air.

The post Will ‘flying cars’ cause traffic jams in the sky? appeared first on Popular Science.

]]>
Engineering photo
Josie Norton

ON A GOOD DAY, assuming decent weather, little traffic, and skills behind the wheel, a cab ride from John F. Kennedy International Airport in Queens to downtown Manhattan should take about 45 minutes. Yet who can reliably predict New York City traffic? That trip could last twice as long on a bad day.

Now assume you could avoid the streets entirely and get to lower Manhattan in minutes. By some estimates, more than 200 startups are racing to deploy what popular culture has dubbed flying cars. And, by their admittedly optimistic estimates, there’s a chance that the 45-minute drive on pavement from JFK will be converted to a 10-minute flight through the air by the end of this decade.

Leaders in the quest to make cabs airborne believe everyday passengers at places like airports will exchange treks on four wheels for sorties through the skies. “Being able to fly over traffic and reach your destination in a much more predictable time will be very appealing,” says John Criezis, head of mobility operations for Overair, a California-based maker of flying taxis. Market studies commissioned by NASA predict that by 2030, as many as 750 million flights will ferry passengers to and from destinations near or within US cities each year.

So let’s be glass-half-full and assume companies overcome the pervasive technical challenges of air taxis: updating hundred-year-old flight controls, manufacturing durable carbon-fiber structures, crafting designs capable of vertical and forward flight, developing batteries that last a long time and don’t overheat. (Oh, and getting clearance from the Federal Aviation Administration.) On top of that, there still remains another puzzling question. How will we manage all this new traffic in the sky?

In the search for answers, it helps to first consider what the fleet will look like. Regardless of what midcentury Popular Science may have promised us (er, sorry), these will not be family sedans that soar to Grandma’s house. Many of the proposed aircraft feature a fixed wing with multiple rotors that pivot for takeoff and landing. These lithium-battery-powered electric VTOLs—aircraft capable of vertical takeoff and landing—are meant to flip the script on traditional public transportation. “We intend to function…as an aerial ride-sharing service, to move people in and around our cities,” says Andrew Cummins, director of strategy and business development at Archer Aviation. For that to work, VTOLs will need places to land and corridors through which to fly.

Initially, Archer, Overair, and many of their counterparts expect to use existing infrastructure. They hope to rent space at some of the 5,000 public-use airports and heliports in the US. Pick any big city, and the idea is the same: Land at the airport and walk outside, past the rows of wheeled taxis, to a VTOL waiting to speed you to a helipad atop some downtown edifice mere blocks from your meeting. (And vice versa.) Last October, air-taxi company Joby Aviation announced a first-of-its-kind partnership with Delta Air Lines to ferry flyers this way in NYC and Los Angeles.

Eventually, cities will also need dedicated landing and parking areas for fleets of VTOLs—known in the biz as vertiports. Situated on the top floors of parking garages or in large parking lots, these are the spots where 6,000-pound VTOLs will recharge, be maintained, and take off.

Suchithra Rajendran, a professor of systems engineering at the University of Missouri, has spent the last five years mapping out how such a network might look in the Big Apple. By analyzing two years’ worth of taxi data—both the number of voyages taken and the pickup and drop-off points—Rajendran’s model recommends 17 vertiports, with 84 VTOLs flying among them. Assuming four passengers per flight, that adds up to 6,500 riders making 1,600 trips every day.
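The headline figures in Rajendran's model follow from simple arithmetic, sketched here using the numbers reported above:

```python
# Back-of-the-envelope check of the vertiport model's ridership figures.
vertiports = 17
vtols = 84
passengers_per_flight = 4
trips_per_day = 1600

riders_per_day = trips_per_day * passengers_per_flight
print(riders_per_day)                # 6400, roughly the ~6,500 riders cited
print(round(trips_per_day / vtols))  # 19 trips per VTOL per day
```

Nineteen hops per aircraft per day is a demanding duty cycle, which is part of why recharging time at the vertiports matters so much.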

Models from air-taxi companies also assume a big market. In huge cities like NYC and LA, where drivers make roughly 50 million trips every day, they project that there are perhaps 5 million riders who would be better served by VTOLs.

Figuring out the choreography of where all those flying contraptions will go is currently the job of Brock Lascara, a systems engineer at MITRE, a nonprofit research org funded by the FAA. For starters, you won’t see them zipping between buildings, he says: Cruising altitudes will reach a couple of thousand feet, which is what’s necessary to hit optimal speeds of 150 miles per hour. At the same time, the taxis can’t encroach on existing controlled airspace (up to 10,000 feet above sea level) intended for passenger jets during takeoff. Lascara adds that specific urban corridors—VTOL-only pathways near airports and through cities—will have to be established. Those avenues will let airliners know where commuter birds will be and let VTOL pilots know where they can, and can’t, fly.

Still, no one wants to trade congestion on the street for gridlock in the sky, which means there’s another problem to be worked out. “A big constraint point is the vertiports themselves,” says Lascara. One way around the potential commotion is through a technique called vectoring. Already used in air traffic control, it would send various VTOLs, all jockeying for landing space, on different routes to the same destinations. One craft might fly in a straight line while another swoops in a semicircle, providing enough time for the first one to land, drop off passengers, and take off again before the other needs to touch down.

Zooming so close to an urban grid will also create its own aerodynamic complications. That’s why Lascara’s counterpart, systems engineer Mike Robinson, is involved in his own MITRE project to predict how turbulent winds created by the canyons between buildings might affect flight. By running a simulator called JOULES—Joint Outdoor-Indoor Urban Large Eddy Simulation—Robinson’s team has been able to map out the wind hazards in places like Atlantic City, Chicago, and swaths of NYC. That data could help nail down where to situate vertiports so buffeting breezes don’t rattle a VTOL as it comes in for landing.

As for what to do when a storm blows in? “The weather’s the weather,” Robinson says, shrugging. We might find that there are many days when a flying taxi just can’t get airborne. The good news? Unlike when an airplane is grounded, we still have an alternative—if we can bear the traffic.

This story originally appeared in the High Issue of Popular Science. Read more PopSci+ stories.

The post Will ‘flying cars’ cause traffic jams in the sky? appeared first on Popular Science.


]]>
What can ‘smart intersections’ do for a city? Chattanooga aims to find out. https://www.popsci.com/technology/smart-intersections-chattanooga-tennessee/ Mon, 09 Jan 2023 12:00:00 +0000 https://www.popsci.com/?p=503806
an aerial view of an intersection
Photo by John Matychuk on Unsplash

Sensors can help make an intersection more knowable. Here's how a network of them could help a Tennessee city.

The post What can ‘smart intersections’ do for a city? Chattanooga aims to find out. appeared first on Popular Science.

]]>
an aerial view of an intersection
Photo by John Matychuk on Unsplash

Intersections are complex places, even when regulated by traffic signals. They’re full of vehicles with potentially distracted drivers trying to inch across the asphalt, and pedestrians with different levels of mobility attempting to use crosswalks. Throw bikes and other two-wheelers into the mix, and it can get hectic and hazardous, especially for the people not protected in machines made of metal and glass. 

There are other aspects of a modern urban streetscape as well, like operators of electric vehicles who want to find a place to charge. 

Experts hope that integrating more data-collection tech, in addition to traffic signals, can potentially help with issues like these. Chattanooga, Tennessee, is planning to create 86 new so-called smart intersections that are monitored by sensors such as lidar and cameras. 

The goal of making an intersection smart is “to be able to make sense of that intersection” based on the information provided by the sensors, says Mina Sartipi, the director of the Center for Urban Informatics and Progress at the University of Tennessee, Chattanooga. It will help them answer questions like: “Where are the cars? Where are the people? How close do they get to each other? How safe is it for a wheelchair [user]? Do we allow a disabled person, or an elderly [person], or a mom or a dad pushing a stroller, enough time to cross the street or not?” 

Adding the sensors will “make that environment observable,” she adds. 

[Related: It’s an especially dangerous time to be a pedestrian in America]

The project is supported by a $4.57 million grant from the US Department of Transportation, and builds on an already existing testbed of 11 other smart intersections in the same city. All told, the city will have nearly 100 smart intersections once the new ones come online. 

The DOT grant, she says, “basically brings transportation, energy, and people together.” The energy element comes from trying to connect people driving electric vehicles to charging stations when they need them, taking into account variables like whether a station is available. 

The gray area represents the expected area the smart intersection project will span.
The gray area represents the expected area the smart intersection project will span. Courtesy Center for Urban Informatics and Progress (CUIP)

Gathering data from intersections involves sensors like cameras and lidar, which uses lasers to detect objects. And intersections, like people, are not all the same. “We do pay attention to the needs of each intersection as well,” she says. “It’s not necessarily copy-paste.” 

With lidar—which is also a key sensor that autonomous vehicles use to perceive the world around them—the data will be interpreted by a computer vision company called Seoul Robotics. “We interpret the information by looking at the objects that it sees in that world,” says William Muller, the vice president of business development at the company. “Those main three objects that we look at are people, vehicles, and bikes.”

“Because it’s all three-dimensional, it’s highly accurate,” he adds. “We’re within centimeters of accuracy, of knowing where those objects are in the three-dimensional space.” In an ideal world, the system could know if someone is crossing an intersection slowly, and the signals could take that into account—or even warn vehicles to be aware of them. 
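Centimeter-accurate 3D tracks make proximity checks like that straightforward. Here is a minimal sketch; the coordinates and threshold are hypothetical and this is not Seoul Robotics' actual pipeline:

```python
import math

def too_close(obj_a, obj_b, threshold_m=2.0):
    """Flag when two tracked objects (x, y, z centroids in meters, e.g.
    from a lidar perception pipeline) come within threshold_m of each other."""
    return math.dist(obj_a, obj_b) < threshold_m

pedestrian = (10.0, 4.0, 0.9)  # hypothetical centroids in the intersection frame
vehicle = (11.0, 4.5, 0.8)
print(too_close(pedestrian, vehicle))  # True: just over a meter apart
```

In a deployed system, a check like this would feed signal timing or vehicle warnings rather than a print statement.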

To the west and south of Chattanooga, on an old airport runway in Texas, is a smart intersection used for research purposes at Texas A&M University’s RELLIS campus. “There’s a lot of paved surface there,” says Srinivasa Sunkari, a senior research engineer at Texas A&M Transportation Institute. Part of what makes the intersection smart, he says, is the detection sensors that it has, such as radar and a fish-eye camera. The intersection does not have regular traffic passing through it, but is used for tests. 

Of smart intersection initiatives like Chattanooga’s, Sunkari says that “when done smartly, and when implemented with the right infrastructure, it gives an opportunity to improve pedestrian safety.” 

The project in Chattanooga starts later this year and is expected to last for three years. While connecting EV drivers with charging stations is the main focus of the $4.57 million grant, having nearly 100 intersections with rich sensor data flowing from them should allow researchers to study various aspects of them and ideally optimize the streetscape.

The post What can ‘smart intersections’ do for a city? Chattanooga aims to find out. appeared first on Popular Science.


]]>
New hydrogel sheets could one day replace paper towels https://www.popsci.com/technology/material-hydrogel-water/ Tue, 03 Jan 2023 12:00:00 +0000 https://www.popsci.com/?p=500904
Hand wiping up liquid spill on table with a paper towel
The sheets also could be handy for biohazards and other toxic spills. Deposit Photos

The paper-thin material can absorb more than 100 times its weight in water.

The post New hydrogel sheets could one day replace paper towels appeared first on Popular Science.

]]>
Hand wiping up liquid spill on table with a paper towel
The sheets also could be handy for biohazards and other toxic spills. Deposit Photos


For all their utility and ubiquity (not to mention TV commercials’ allegations), paper towels are simply not very good at their job—at least when compared with hydrogels, which are already utilized in the form of microbeads between fabric or paper sheets for products like diapers or tampons. These materials, composed of large, interlocking polymers, absorb upwards of 100 times their weight in water, making them roughly 30 percent more effective than porous options like paper towels. 
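The 100-times-its-weight figure translates into striking numbers; a quick sketch, using a hypothetical sheet mass:

```python
def hydrated_mass(dry_mass_g, absorption_ratio=100):
    """Mass of a hydrogel after soaking, assuming it absorbs
    absorption_ratio times its own dry weight in water (as reported)."""
    return dry_mass_g * (1 + absorption_ratio)

# A hypothetical 2 g sheet:
print(hydrated_mass(2))  # 202 -> roughly 200 g of liquid held by 2 g of gel
```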

Once used, however, hydrogels tend to become brittle and deteriorate after redrying, making them inefficient and often cost-prohibitive for everyday use (actual sheets of the polymers have previously been too labor-intensive and difficult to manufacture). If one could combine the best of both worlds, everyday cleanups—not to mention medical or hazardous situations—could become a lot more manageable.

According to a new study published earlier this month in the scientific journal Matter, researchers at the University of Maryland are doing just that. Their new quick(er) picker-upper is made from a novel form of hydrogel that can be folded and cut with scissors much like paper, but is capable of absorbing three times as much liquid as common household materials. 

“To our knowledge, this is the first hydrogel that has been reported to have such tactile and mechanical properties. We are trying to achieve some unique properties with simple starting materials,” the research paper’s co-author, Srinivasa Raghavan, said in an announcement.

[Related: Watch this penny-sized soft robot paddle with hydrogel fins.]

Appropriately, Raghavan and their group relied in part on another common household kitchen item—zip-top bags—to help craft their new tool. First, an acid, base, and other ingredients to make the hydrogel were combined in storage bags. From there, the mix generated carbon dioxide in the gel. The bag was then placed between glass and subsequently exposed to UV light to set the gel around the bubbles, making it much more porous. The new sheet was finally submerged in glycerol and alcohol, then dried to become as pliant as normal fabrics, but capable of expanding as it absorbed and retained liquid.

In test comparisons, their gel sheet absorbed over 25 mL of water within just 20 seconds, while a cloth pad managed only 60 percent of that volume. The researchers also tested their hydrogel on 40 mL of blood, which it absorbed almost entirely within 60 seconds, while gauze managed only 55 percent.

If available to the public, the material could help in countless scenarios, from everyday chores to hazardous or toxic liquid cleanups. Unfortunately, the team’s new hydrogel sheets won’t be stocked on store shelves anytime soon—they would still be out of most consumers’ price range, and they remain single-use for the time being. Raghavan’s team next hopes to lower overall costs, increase absorbency even further, make the sheets reusable, and even enable them to absorb oil.

The post New hydrogel sheets could one day replace paper towels appeared first on Popular Science.


]]>
Rocket fuel might be polluting the Earth’s upper atmosphere https://www.popsci.com/science/rocket-fuel-types/ Mon, 02 Jan 2023 14:15:00 +0000 https://www.popsci.com/?p=482498
15 rocket launches from SpaceX, NASA, Blue Origin, and more in a collage
Many rockets are still using the same sooty fuels early space programs were using. Copyrighted images, see below

With more spaceships launching than before, engineers are looking for alternative rocket fuels that leave less gunk in the air.

The post Rocket fuel might be polluting the Earth’s upper atmosphere appeared first on Popular Science.

]]>
15 rocket launches from SpaceX, NASA, Blue Origin, and more in a collage
Many rockets are still using the same sooty fuels early space programs were using. Copyrighted images, see below

ON A FOGGY midsummer morning 54 miles northwest of Santa Barbara, California, SpaceX engineers hustled through a ritual they’d been through before. They loaded a Falcon 9 rocket with tens of thousands of gallons of kerosene and supercold liquid oxygen, a propellant combo that brought the craft’s nine Merlin engines roaring to life with 1.7 million pounds of thrust. Soon after, the machine shot through the stratosphere, ready to dispatch 46 of the company’s Starlink internet satellites into low Earth orbit. But the rocket made another delivery too: a trail of sooty particles that lingered over the Pacific hours after blastoff.

The launch was the company’s 32nd of 2022, maintaining its current pace of firing off close to one rocket per week. With a record number of rides shuttling equipment, astronauts, and über-rich tourists to and from Earth, the high skies have never been busier. Between government programs like China’s Long March and private shots like SpaceX’s Crew Dragon, the world tallied some 130 successful launches in 2021 and is on pace to finish 2022 with even more. Many trips, however, spew tiny bits of matter into the stratosphere, an area that hasn’t seen much pollution firsthand yet.

Climate scientists are still working to fully understand how rocket residue affects the planet’s UV shield. But even if they find warning signs, some organization or authority figure would have to step up to establish emission standards for the industry. In the meantime, a few aerospace companies are exploring sustainable alternatives, like biofuels, to power their far-flying systems.

The increasing frequency of launches has researchers like Martin N. Ross, an atmospheric physicist and project engineer at the Aerospace Corporation, a nonprofit research center in California, worried about the future of the stratosphere—and the world. Predictions for rocket traffic in the coming decades point dramatically up, like a Falcon 9 on a pad. Should the sun heat up enough of the particles from the fuel trails, as some computer models suggest it will, space travel could become a significant driver of climate change. “This is not a theoretical concern,” Ross says.


CHOOSE YOUR FUEL: KEROSENE
What is it? Kerosene, which is derived from petroleum, consists of chains of carbon and hydrogen atoms. The refined liquid-fuel version is loaded into trash can–size tanks and burned alongside an oxidizer. In those containers, SpaceX pressurizes its kerosene with helium—so much that experts say the company is using “a good fraction” of the planet’s supply of the element.
Who uses it? Rocket Lab, SpaceX, the Air Force, and many others.
How green is it? That depends on how efficiently the engine burns, but it always produces black carbon soot, a heat magnet.

Unless you are reading this while floating aboard the International Space Station, you are breathing air from the troposphere—the bottommost band of the Earth’s atmosphere, which extends upward for several miles. The layer just above that, the stratosphere, sits anywhere from 6 to 31 miles above sea level and is deathly dull by comparison: There are barely any clouds there, so it doesn’t rain. The air is thin and freezing and contains ozone, an oxygen-based gas that protects all life from solar radiation but is toxic to the lungs.

Most greenhouse gases, including the 900 million tons of carbon dioxide produced by the aviation industry each year, trap heat in the troposphere. But rockets rip their vapors at higher altitudes, making them the single direct source of emissions in the upper stratosphere.

Acid in the sky

The stratosphere was people-free until 1931, when Swiss physicist Auguste Piccard and his aide floated nearly 10 miles up, and back down, with a 500,000-cubic-foot hydrogen balloon. They were the first of many. By the 1960s, the US and Soviet space programs were regularly shooting rockets to the edge of the sky.

As astronaut and cosmonaut programs evolved during the Cold War, so did climate change research—especially studies of carbon dioxide pollution and atmospheric degradation. In the 1970s, NASA’s space shuttle program piqued the interest of atmospheric chemists like Ralph Cicerone and Richard Stolarski, who then attempted some of the first investigations into stratospheric rocket exhaust. The shuttle’s solid engines used a crystalline compound called ammonium perchlorate, which releases hydrochloric acid as a byproduct. Chlorine is highly destructive to ozone—the Environmental Protection Agency estimates a lone atom can break down tens of thousands of molecules of the atmospheric gas.

In a June 1973 report to NASA, Cicerone, Stolarski, and their colleagues calculated that 100 shuttle launches a year would produce “quite small” amounts of chlorine-containing compounds. But they warned that these chemicals could build up over time. Cicerone and Stolarski ultimately focused their attention on volcanic eruptions, because those belches represented larger and more dramatic releases of chlorine.

SpaceX Falcon rocket with smoky trail
SpaceX’s Ax-1 mission, the first all-private flight to the ISS, used a Falcon 9 rocket powered by liquid oxygen and RP-1 kerosene. Geopix/Alamy

In the 1980s, British meteorologists revealed that the ozone layer in the Antarctic stratosphere was thinning. They traced the culprit, chlorine, to aerosol spray cans and to O3-munching chemicals called chlorofluorocarbons from other human-made sources. That hole began to heal only after the 1987 Montreal Protocol, the first international agreement ever ratified by every member state of the United Nations. It phased out the use of CFCs, setting the atmosphere on a decades-long path to recovery.

In the wake of that treaty, “Anything that emitted chlorine was under suspicion,” Ross says. But it remained unclear whether rocket emissions too could alter the ozone layer.

For the following two decades, the US Air Force enlisted the Aerospace Corporation and atmospheric scientists like Darin Toohey, now a University of Colorado Boulder professor, to study the chemical composition of rocket exhaust. Using NASA’s WB-57 aircraft, a jet bomber able to fly 11 miles high and retrofitted for scientific observations, teams directly sampled emissions from American launch vehicles including Titan, Athena, and Delta into the early 2000s.


CHOOSE YOUR FUEL: METHANE
What is it? CH4 naturally occurs when wetland bacteria decompose matter. It’s a relatively new choice for rocket fuel, and it debuted in 2007 with a successful NASA engine test. Burning methane creates about 10 percent more specific impulse—the rocket equivalent of gas mileage—than kerosene.
Who uses it? The Chinese National Space Administration, Indian Space Research Organization, and SpaceX, though all their versions are in the development phase.
How green is it? While methane itself is a greenhouse gas (in fact, it has more atmospheric warming power than carbon dioxide), the stuff burned as fuel is consumed in the combustion reaction. Methane engines are cleaner than the more common kerosene engines, but it isn’t clear how much sooty black carbon they emit.
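That 10 percent gain in specific impulse carries straight through to a rocket's ideal velocity change, as the Tsiolkovsky rocket equation shows. The Isp values and mass ratio below are illustrative, not any particular engine's figures:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s, mass_ratio):
    """Tsiolkovsky rocket equation: ideal velocity change for a given
    specific impulse (seconds) and initial-to-final mass ratio."""
    return isp_s * G0 * math.log(mass_ratio)

# Hypothetical numbers: a kerosene-class Isp of 300 s vs a 10 percent bump,
# both with the same mass ratio of 5.
kerosene = delta_v(300, 5)
methane = delta_v(330, 5)
print(round(methane / kerosene, 2))  # 1.1 -> 10 percent more delta-v
```

Because delta-v scales linearly with Isp, a 10 percent efficiency gain means either more payload or less propellant for the same mission.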

Freshly collected material from the plumes gave the researchers a firmer grasp on the ways solid propellant interacted with air. For instance, they examined the particles that were expelled when shuttle boosters burned aluminum powder as fuel—and how those bits reacted to ozone. The effect wasn’t as severe as they had feared, Ross says. Though the plumes depleted nearby ozone within the first hour after a launch, the layer was quickly restored after the emissions diffused.

Meanwhile, at the turn of the 21st century, blastoffs were decreasing in the US and Russia. After the space shuttle Columbia disintegrated on reentry in 2003, killing its seven-person crew, NASA suspended other flights in the program for two years. Missions using the WB-57 aircraft to observe exhaust came to an end in 2005. Six years later, NASA officially retired the shuttle system.

New rockets, more soot

When SpaceX sent its first liquid-fueled rocket into orbit in 2008, it set the stage for more privately developed spaceflights. But the chemical it pumped into its marquee machines wasn’t anything new. A refined version of kerosene, Rocket Propellant-1 or RP-1, has powered generations of rockets, including the first-stage engines of the spaceships that ferried Apollo astronauts to the moon. It was well known and relatively cheap.

Sensing an aerospace trend, Ross, Toohey, and their colleague Michael Mills calculated what emissions would be produced by a fleet of similarly hydrocarbon-powered rockets anywhere between the Earth’s surface and 90 miles aloft. Their predictions, which they published in 2010 in the journal Geophysical Research Letters, turned up something unexpected: an emissions signature full of black carbon, the same contaminant belched by poorly tuned diesel engines on the ground. “It seemed to have a disproportionate impact on the upper atmosphere,” Toohey says.

Those dark particles are “very, very good at absorbing the sun’s radiation,” adds Eloise Marais, an atmospheric chemist at University College London. Think of how you heat up faster on a hot summer day while wearing a black shirt rather than a white one, and you get the idea.


CHOOSE YOUR FUEL: LIQUID HYDROGEN
What is it? Despite being the most abundant element in the universe, cold, flowing hydrogen is more expensive to source than other fuels. It needs to be stored in large external tanks and kept at minus 423°F to preserve its state. Think of it this way: If methane- and kerosene-powered rockets are space sedans, hydrogen-powered engines are sports cars.
Who uses it? Blue Origin and NASA for some parts of the SLS rocket system.
How green is it? Exhaust from this cryogenic fuel is mostly water vapor. When you burn hydrogen, there’s no carbon, which means no black soot.

Near the ground, rain or other precipitation will flush dark carbon out of the air. But in the stratosphere, above rain clouds, soot sticks around. “As soon as we start to put things in that layer of the atmosphere, their impact is much greater, because it’s considerably cleaner up there than it is down here,” Marais says. In other words, the pristineness of the stratosphere makes it more vulnerable to the sun’s searing rays.

Black carbon particles can persist for about two years in the stratosphere before gravity drags them back down to the ground. They also heat up as they wait: In a study published this June in the journal Earth’s Future, Marais and her colleagues calculated that soot from rockets is about 500 times more efficient at warming the air than that from planes or emitters on the surface.

Another recent model run by Ross and Christopher Maloney, a research scientist at the National Oceanic and Atmospheric Administration Chemical Sciences Laboratory, came to a similar conclusion about the dark stuff’s impact on climate change. Should space traffic increase tenfold within the next two decades, the stratosphere will warm by about 3 degrees Fahrenheit, they predict.

That uptick is enough that “stratospheric dynamics [will] begin to shift,” Maloney says. Currents carry naturally produced ozone from hotter tropical regions toward cooler poles. If rockets scorch a pool of air above the Northern Hemisphere, where most launches take place, that warm-to-cold path could be disturbed—disrupting the circulation that ferries fresh O3 northward. The upshot: a thinner ozone layer at the higher latitudes and a toastier stratosphere overall.


CHOOSE YOUR FUEL: SOLID ROCKET FUEL
What is it? Solid rocket motors, or SRMs, use powders and other ignited components to produce thrust. For NASA’s space shuttles, the mix included aluminum powder and ammonium perchlorate. Its SLS rocket uses the same formula with the additive polybutadiene acrylonitrile, a rubbery compound the space agency says has the consistency of pencil erasers.
Who uses it? NASA continues to use SRMs, especially as boosters.
How green is it? Some particles from these engines can thin regions of ozone, researchers warn. Though the impact isn’t as significant as black carbon’s, it might cause local depletions if rocket traffic continues to increase.

Sizing up old launches can help clear up some of the gray areas in this process. In a paper published this July in the journal Physics of Fluids, a pair of researchers at the University of Nicosia in Cyprus simulated the plume from a SpaceX Falcon 9 rocket from 2016. According to their model, in the first 2.75 minutes of flight, the craft generated 116 tons of carbon dioxide, which is equivalent to a year’s worth of emissions from about 70 cars.

Toohey sees these projections as validation of the black carbon concerns he raised more than a decade ago—but thinks they’re not as compelling as direct observations would be. There has been “basically no progress, except additional model studies, telling us the original hypothesis was correct,” he says. What’s needed, he adds, is detection in the style of the earlier WB-57 missions. For example, spectrometers planted on the sides of spaceships could measure black carbon.

Policy is another limiting factor. The International Air Transport Association, an influential trade organization, has set carbon-neutral goals for airlines for 2050, but there is no comparable target for space—in part because there is no equivalent leader in the industry or regulatory body like the Federal Aviation Administration. “We don’t have an agreed-upon way to measure what rocket engines are doing to the environment,” Ross says.


CHOOSE YOUR FUEL: BIOFUEL
What is it? These chemicals come from eco-friendly sources. In one example, the UK-based company Orbex is adapting diesel byproducts to make propane.
Who uses it? Orbex, BluShift Aerospace, and other small commercial groups, most of which are still working on proofs of concept.
How green is it? Sustainability is the goal behind this class of fuels. A University of Exeter scientist working as a consultant for Orbex calculated that its rocket’s emissions are 86 percent lower than those from a similar vehicle powered by fossil fuels.

While there are newer fuels out there, there’s no good way to determine how green they are. Even the one that burns cleanest, hydrogen, requires extra energy to be refined to its pure molecular form from methane or water. “The picture is very complex, as all propellants have environmental impact,” says Stephen Heister, who studies aerospace propulsion at Purdue University.

Atmospheric scientists say solutions to preserve the stratosphere must be developed collaboratively, as with the unified front that made the Montreal Protocol a juggernaut. “The way to deal with it is to start getting people with common interests together,” Toohey says, to find a sustainable path to space before lasting damage is done.

Photo credits for lead image: Left to right, top to bottom: Patrick T. Fallon/Getty Images; Wang Jiangbo/Xinhua/Getty Images; Zheng Bin/Xinhua/Getty Images; Yang Guanyu/Xinhua/Getty Images; Cai Yang/Xinhua/Getty Images; Wang Jiangbo/Xinhua/Getty Images; Wang Jiangbo/Xinhua/Getty Images; Korea Aerospace Research Institute/Getty Images; SOPA Images Ltd./Alamy (2); Jonathan Newton/The Washington Post/Getty Images; GeoPix/NASA/Joel Kowsky/Alamy; Wang Jiangbo/Xinhua/Getty Images; Paul Hennessy/Anadolu Agency/Getty Images; Zheng Bin/Xinhua/Getty Images

This story originally appeared in the High Issue of Popular Science. Read more PopSci+ stories.

The post Rocket fuel might be polluting the Earth’s upper atmosphere appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Ford used a quantum computer to explore EV battery materials https://www.popsci.com/technology/ford-quantum-ev-battery/ Sat, 24 Dec 2022 12:00:00 +0000 https://www.popsci.com/?p=501690
One of Ford's battery modules
One of Ford's battery modules. Ford

Quantum computers can simulate the properties of new materials that might make batteries safer, more energy-dense, and easier to recycle.

The post Ford used a quantum computer to explore EV battery materials appeared first on Popular Science.

]]>

Quantum researchers at Ford have just published a new preprint study that modeled crucial electric vehicle (EV) battery materials using a quantum computer. While the results don’t reveal anything new about lithium-ion batteries, they demonstrate how more powerful quantum computers could be used to accurately simulate complex chemical reactions in the future. 

In order to discover and test new materials with computers, researchers have to break the process into many separate calculations: one set for all the relevant properties of each individual molecule, another for how those properties are affected by the smallest environmental changes, like fluctuating temperatures, another for all the possible ways any two molecules can interact, and on and on. Even something that sounds simple, like two hydrogen molecules bonding, requires incredibly deep calculations.

But developing materials using computers has a huge advantage: the researchers don’t have to physically perform every possible experiment, which can be incredibly time-consuming. Tools like AI and machine learning have been able to speed up the research process for developing novel materials, but quantum computing offers the potential to make it even faster. For EVs, finding better materials could lead to longer-lasting, faster-charging, more powerful batteries.

Traditional computers use binary bits—which can be a zero or a one—to perform all their calculations. While they are capable of incredible things, there are some problems, like highly accurate molecular modeling, that they just don’t have the power to handle—and because of the kinds of calculations involved, possibly never will. Once researchers model more than a few atoms, the computations become too big and time-consuming, so they have to rely on approximations, which reduce the accuracy of the simulation.

Instead of regular bits, quantum computers use qubits that can be a zero, a one, or both at the same time. Qubits can also be entangled, rotated, and manipulated in other wild quantum ways to carry more information. This gives them the power to solve problems that are intractable with traditional computers—including accurately modeling molecular reactions. Plus, molecules are quantum by nature, and therefore map more accurately onto qubits, which are represented as waveforms.

Unfortunately, a lot of this is still theoretical. Quantum computers aren’t yet powerful enough or reliable enough to be widely commercially viable. There’s also a knowledge gap—because quantum computers operate in a completely different way from traditional computers, researchers still need to learn how best to employ them.

[Related: Scientists use quantum computing to create glass that cuts the need for AC by a third]

This is where Ford’s research comes in. Ford is interested in making batteries that are safer, more energy and power-dense, and easier to recycle. To do that, they have to understand chemical properties of potential new materials like charge and discharge mechanisms, as well as electrochemical and thermal stability.

The team wanted to calculate the ground-state energy (the lowest possible energy state of the system) of LiCoO2, a material that could potentially be used in lithium-ion batteries. They did so using an algorithm called the variational quantum eigensolver (VQE) to simulate the Li2Co2O4 and Co2O4 gas-phase models (basically, the simplest form of chemical reaction possible), which represent the charge and discharge of the battery. VQE uses a hybrid quantum-classical approach, with the quantum computer (in this case, 20 qubits in an IBM statevector simulator) employed only to solve the parts of the molecular simulation that benefit most from its unique attributes. Everything else is handled by traditional computers.
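To illustrate the variational idea behind VQE, here is a purely classical toy sketch: a one-parameter trial state is tuned until the energy of a small, made-up Hamiltonian is minimized. In real VQE the energy evaluations run on quantum hardware while a classical optimizer proposes new parameters; the 2x2 matrix below is an illustrative stand-in, not Ford’s LiCoO2 model.

```python
import math

# Toy illustration of the variational principle behind VQE (classical only).
# Trial state |psi(theta)> = (cos theta, sin theta) is tuned to minimize
# <psi|H|psi> for a small, made-up 2x2 Hamiltonian H. In actual VQE, the
# energy evaluation would run on a quantum processor while a classical
# optimizer proposes new parameters.
H = [[1.0, 0.5],
     [0.5, -1.0]]

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)>."""
    psi = [math.cos(theta), math.sin(theta)]
    h_psi = [H[0][0] * psi[0] + H[0][1] * psi[1],
             H[1][0] * psi[0] + H[1][1] * psi[1]]
    return psi[0] * h_psi[0] + psi[1] * h_psi[1]

# "Classical optimizer": a simple grid search over the single parameter.
best = min(energy(k * math.pi / 1000) for k in range(1000))
exact = -math.sqrt(1.0 + 0.5 ** 2)  # exact ground-state eigenvalue of H

print(round(best, 3), round(exact, 3))  # both round to -1.118
```

For this tiny real-valued example the variational minimum matches the exact ground-state energy; the payoff of the hybrid split only appears for molecules too large to diagonalize classically.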

As this was a proof of concept for quantum computing, the team tested three approaches with VQE: unitary coupled-cluster singles and doubles (UCCSD), unitary coupled-cluster generalized singles and doubles (UCCGSD), and k-unitary pair coupled-cluster generalized singles and doubles (k-UpCCGSD). Besides comparing the quantitative results, they compared the quantum resources necessary to perform the calculations as accurately as classical wavefunction-based approaches. They found that k-UpCCGSD produced similar results to UCCSD at lower cost, and that the results from the VQE methods agreed with those obtained using classical methods—like coupled-cluster singles and doubles (CCSD) and complete active space configuration interaction (CASCI).

Although not quite there yet, the researchers concluded that quantum-based computational chemistry on the kinds of quantum computers that will be available in the near-term will play “a vital role to find potential materials that can enhance the battery performance and robustness.” While they used a 20-qubit simulator, they suggest a 400-qubit quantum computer (which will soon be available) would be necessary to fully model the Li2Co2O4 and Co2O4 system they considered.

All this is part of Ford’s attempt to become a dominant EV manufacturer. Trucks like its F-150 Lightning push the limits of current battery technology, so further advances—likely aided by quantum chemistry—are going to become increasingly necessary as the world moves away from gas-burning cars. And Ford isn’t the only player hoping quantum computing can give it an edge in the battery chemistry game. IBM is also working with Mercedes and Mitsubishi on using quantum computers to reinvent the EV battery.



]]>
This map-making AI could be the first step towards GPS on the moon https://www.popsci.com/science/moon-gps-navigation-lunanet/ Thu, 22 Dec 2022 11:00:00 +0000 https://www.popsci.com/?p=500984
the surface of the moon revealing beautiful craters
Landing back on the moon is in reach, but humans will need some assistance with directions to further explore the landscape. NASA Johnson

The navigation system will also work alongside the moon's future internet, LunaNet.

The post This map-making AI could be the first step towards GPS on the moon appeared first on Popular Science.

]]>

For years, scientists have been working out ways to navigate across the lunar surface, a task that’s been a herculean undertaking without tools like the GPS we have on Earth. 

Since the moon has a much thinner atmosphere than Earth, it’s difficult to judge both the distance and size of faraway landmarks, as there’s a lack of perspective from the horizon. Trees or buildings on Earth offer hazy but helpful points of reference for distance, but no such cues exist on the moon. Additionally, without an atmosphere to scatter light, the sun’s bright rays would skew the visual and depth perception of an astronaut on the moon, making it a real challenge to get around the vast, unmapped terrain.

On Earth, “we have GPS, and it’s easy to take advantage of that and not think about all the technology that goes into it,” says Alvin Yew, a research engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “But now when we’re on the moon, we just don’t have that.”

Inspired by previous research on lunar navigation, Yew is developing an AI system that guides explorers across the lunar floor by scanning the horizon for distinct landmarks. Trained on data gathered from NASA’s Lunar Reconnaissance Orbiter, the system works by recreating features on the lunar horizon as they would appear to an explorer standing on the surface of the moon. 

“Because [the moon] has no atmosphere, there’s not a lot of scattering of the light,” says Yew. But by using the outline of the landscape, “we’re able to get a very clear demarcation of where the ground is relative to space.” 

[Related: With Artemis, NASA is aiming for the moon once more. But where will it land?]

Yew’s AI system would be able to navigate using geographic features like boulders, ridges, and even craters, whose distances are normally difficult for a person to judge accurately. These measurements could be matched against features identified in images already captured by astronauts and rovers, similar to how GPS pinpoints locations on Earth. Developing GPS-like technology specifically tuned to help explorers get around the moon is especially important for supporting autonomous robotic operations, Yew says. Now that NASA’s Artemis I mission wrapped up with a successful splashdown earlier this month, such technology will also be needed for humans to return to the moon in the not-so-distant future. When astronauts on the upcoming Artemis III mission make landfall, having handheld or built-in systems to help navigate the new terrain could be the deciding factor in how far (and how well) they can explore, both on the moon and beyond.

“NASA’s focus on trying to get to the moon, and eventually to Mars someday, requires an investment of these vital technologies,“ says Yew. His work is also planned to complement the moon’s future “internet,” called LunaNet. The framework will support communications, lunar navigation operations, as well as many other science services on the moon. According to NASA scientists, the collection of lunar satellites aims to offer internet access similar to Earth’s, a network that spacecraft and future astronauts can tap into without needing to schedule data transfers in advance, like space missions currently do. 

Cheryl Gramling, the associate chief for technology of the mission engineering and systems analysis division at NASA’s Goddard Space Flight Center, says the moon is a testbed where we can take lessons learned from our planet, and see how they translate to deeper space exploration. 

“You also don’t have the fundamental infrastructure that we’ve built up on the Earth,” she says. The moon is like a blank slate: “You have to think about, well, ‘what is it that you need?’”

Much like how different internet providers allow their customers access to the web and other services, Gramling says that NASA, as well as other space agencies like the ESA or JAXA, could come together to comprise LunaNet. “It’s extending the internet to space,” she says. These “providers” (in this case, space agencies like NASA, ESA, and JAXA) would be able to communicate with each other and share data across networks, much like different pieces of the Global Navigation Satellite System (GNSS) are able to work in tandem. 

“Looking at what we have implemented on Earth and taking it over to the moon is a challenge, but at the same time, it’s an opportunity to think of how we make it work,” says Juan Crenshaw, a member of NASA Goddard Space Flight Center’s navigation and mission design branch. The goal, he says, is to create a network that isn’t constrained to one single implementation or purpose, and enables standards and protocols for diverse users. “If we build an interoperable service, it allows us to provide better coverage and services to users with less assets, more efficiently.”

[Related: Is it finally time for a permanent base on the moon?]

But LunaNet is still a long way from coming online—it’ll be some time before astronauts can download games or stream their favorite space movie like they’d be able to on Earth. While LunaNet’s service volume is currently being designed to cover the entirety of the moon up to an altitude of 200 kilometers (about 125 miles), Yew says his AI could be a backup to a rover or astronaut’s navigation capabilities when the network experiences disruptions, like power or signal outages. According to NASA, Yew’s work could even help explorers find their way during similar interferences on Earth.

“When we’re doing human expeditions, you always want [backup systems] for very dangerous missions,” says Yew. His AI is “not tied to the internet, per se, but [it] can be.” Though the AI is still only in development, Yew would like to continue making improvements by testing the system in a simulated environment before hopefully utilizing real lunar landscape data from one of the Artemis missions. 

“We want to test the robustness of the algorithm to make sure that we’re returning solutions that are global, meaning I can throw you anywhere on the moon, and you can locate anywhere,” he says. “And maybe if that’s not possible, we want to test the limits of that too.”



]]>
Silkworm-inspired weaving techniques can produce better nanofibers https://www.popsci.com/technology/silkworms-nanofibers-medicine-electronics/ Wed, 21 Dec 2022 18:00:00 +0000 https://www.popsci.com/?p=500950
Macro photo of a silkworm eating a mulberry leaf
Thank this little guy's spit for advances in nanofibers. Deposit Photos

The insect's intricate way of weaving silk could be used soon for relatively simple medical and electronics advancements.

The post Silkworm-inspired weaving techniques can produce better nanofibers appeared first on Popular Science.

]]>

Worm spit, aka silk, has inspired a relatively simple, new process of nanofiber weaving that could advance everything from wound bandages to flexible electronics.

As unappealing as it may all sound, the popular, luxurious fabric indeed stems from a two-protein compound secreted by its namesake worm, which uses the threads to help weave its cocoon. However, a team of Chinese researchers has also found that—beyond making expensive sheets—humans can produce far more uniform micro- and nanofibers by imitating silkworms’ head movements as they secrete, pull, and weave their silk.

[Related: How researchers leveled up worm silk to be tougher than a spider’s.]

The group recently showcased their work in a new paper published in the American Chemical Society’s journal Nano Letters. First, researchers poked microneedles into foam blocks soaked in a polyethylene oxide solution, then pulled the needles away via a procedure known as microadhesion-guided (MAG) spinning to create nanofiber filaments that are thousands of times thinner than a single strand of human hair.

Existing nanofiber production methods are either slow and expensive, or otherwise result in inefficient, wadded material. By imitating silkworms’ weaving movement, however, the team found they could create an array of products—pulling the foam blocks straight away from one another produced orderly fibers, while a vibrating retraction crossweaved the material. Twisting the setup gave a similarly shaped “all-in-one” fiber. Regardless of the arrangement, the results clumped far less than those of existing methods.

[Related: Watch this bird-like robot make a graceful landing on its perch.]

Going a step further, however, the team realized that the microneedling step wasn’t actually needed at all—the foam’s abrasive surface was enough to pull apart the polyethylene oxide solution into nanofilaments. It was so simple, in fact, that one can use the foam stretching method to hand-wrap a nanofiber bandage around a person’s wrist. In their experiments, the team utilized an antibiotic fiber to ensure a sterile, bacterial growth-inhibiting dressing that easily washes off with warm water, offering a potential new medical application in the near future.

Turning to the animal world for inspiration consistently yields impressive discoveries and advancements in tech and robotics, whether the task is weaving, flying, running, or capturing.



]]>
Why European researchers hooked up a quantum machine to a supercomputer https://www.popsci.com/technology/lumi-vtt-quantum-enabled-supercomputer/ Thu, 08 Dec 2022 23:00:00 +0000 https://www.popsci.com/?p=495983
LUMI supercomputer
Fade Creative

Two machines are better than one.

The post Why European researchers hooked up a quantum machine to a supercomputer appeared first on Popular Science.

]]>

VTT, a Finnish research group, announced last month that it had connected a small quantum computer to Europe’s most powerful classical supercomputer. Here are the specifics: VTT’s quantum computer is a 5-qubit machine called HELMI, and LUMI is a pan-European supercomputer that ranks third on the Top500 list. Both are situated in Finland. Combining the best functionalities of HELMI and LUMI to offer hybrid services allows researchers to better use the quantum computer’s unique computing properties—and crucially, to learn how to take advantage of them to solve future problems. 

Quantum computers can in theory perform certain operations and complete different tasks far faster than traditional computers, but they are still a long way from reaching their full potential. While a traditional computer uses binary bits—which can be a zero or a one—to perform all its calculations, quantum computers use qubits that can be a zero, a one, or both at the same time. As hard to wrap your head around as that sounds, things get even more complicated when you consider that qubits can be entangled, rotated, and manipulated in other quantum ways to carry additional information. All this is to say that quantum computers aren’t just regular computers with an extra digit to play with: they provide a completely different way of working that has its own strengths and weaknesses. 

In the pros column, quantum computers should be able to make certain computing tasks that are currently incredibly hard—many of which boil down to solving linear algebra problems—significantly easier.

One big example is factoring, where the computer has to divide an incredibly long number into the two numbers that produce it when multiplied together. (For example, the factors of 21 are 3 and 7.) This is an incredibly resource-intensive task for traditional computers, which is why it’s at the core of nearly every encryption algorithm that’s widely used today. All of our passwords, banking transactions, and important corporate secrets are protected by the fact that current computers kind of suck at factoring large numbers. A quantum computer, though, is theoretically much better at factoring large numbers, and a sufficiently powerful one could tear through the encryption layers that protect digital life. That’s why the US Government has been working to develop quantum-resistant cryptographic algorithms.
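To see why factoring is hard classically, here is a minimal trial-division sketch (illustrative only, not how real cryptanalysis works): its running time grows with the square root of n, which is exponential in the number of digits, whereas Shor’s algorithm on a quantum computer would scale polynomially.

```python
def trial_factor(n):
    """Return a nontrivial factor pair of n via trial division, or None.

    The loop runs up to sqrt(n) times -- exponential in the number of
    digits of n, which is why classically factoring the long numbers
    used in encryption keys is so slow.
    """
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None  # n is prime (or 1)

print(trial_factor(21))  # (3, 7)
```

Doubling the number of digits in n roughly squares the work for this approach; for a 2048-bit key, the loop would never finish in practice.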

[Related: Quantum computers could break encryption. The US government is trying to prevent that.]

Breaking encryption is just the tip of the iceberg when it comes to new problems that these machines can tackle. Quantum computers also show promise for modeling complex phenomena in nature, detecting credit card fraud, and discovering new materials. According to VTT, they could be used for predicting short-term events, like the weather or trading patterns. 

In the cons column, quantum computers are hard to use, require a very controlled setup to operate, and have to contend with “decoherence”—losing their quantum state—which corrupts results. They’re also rare, expensive, and for most tasks, way less efficient than a traditional computer. 

Still, a lot of these issues can be offset by combining a quantum computer with a traditional computer, just as VTT has done. Researchers can create a hybrid algorithm that has LUMI, the traditional supercomputer, handle the parts it does best while handing off anything that could benefit from quantum computing to HELMI. LUMI can then integrate the results of HELMI’s quantum calculations, perform any additional calculations necessary or even send more calculations to HELMI, and return the complete results to the researchers. 

Finland is now one of the few nations in the world with both a quantum computer and a supercomputer, and LUMI is the most powerful quantum-enabled supercomputer. While quantum computers are still a long way from being broadly commercially viable, these kinds of integrated research programs are likely to accelerate progress. VTT is currently developing a 20-qubit quantum computer, with a 50-qubit upgrade planned for 2024.



]]>
Spider robots could soon be swarming Japan’s aging sewer systems https://www.popsci.com/technology/spider-robot-japan-sewer/ Thu, 08 Dec 2022 18:00:00 +0000 https://www.popsci.com/?p=496367
Three Tmsuk spider robots meant for sewer system inspections and repairs
The stuff of nightmares. Tmsuk/YouTube

Faced with an ongoing labor shortage, Japan could turn to robots to handle utility maintenance.

The post Spider robots could soon be swarming Japan’s aging sewer systems appeared first on Popular Science.

]]>

What’s worse—getting trapped in a dank, decrepit sewer system, or finding yourself face-to-face with an army of robotic spiders? The correct answer is getting trapped in a dank, decrepit sewer system, where you then find yourself face-to-face with an army of robotic spiders.

[Related: This spooky robot uses inflatable tentacles to grab delicate items.]

The latter half of this scenario happens if Japan’s robotics manufacturer Tmsuk has its way. As a new video report courtesy of the South China Morning Post detailed earlier this week, the company recently unveiled its line of SPD1 prototypes—small robots powered by Raspberry Pi CPUs that creep along on eight legs modeled after their arachnid inspirations. The little spider-bots also have 360-degree vision thanks to an array of very spidey-like camera eyes.

Tmsuk’s video below shows the tiny machines in action. The company certainly seems to be leaning into the spookiness in its promotional material.

SPD1 comes as Japan continues to reckon with a labor shortage affecting over half of the country’s industries, including public utility maintenance. With some projections estimating 6.4 million job vacancies by decade’s end, businesses like Tmsuk are offering creative, if arguably off-putting, alternatives to hard-to-fill positions such as those involving sewer repairs.

“The lifespan (of sewer pipes) is 50 years, and there are many sewer pipes reaching the end of that lifespan,” Tmsuk CEO Yuji Kawakubo explained in the SCMP video interview. “There is an overwhelming shortage of manpower to inspect such pipes, and the number of sewer pipes that have not been inspected is increasing.”

[Related: Meet the world’s speediest laundry-folding robot.]

Kawakubo recounted that early iterations of the SPD1 relied on wheels for movement. However, sewer systems’ rocky, unstable terrain quickly proved too difficult. Replacing the wheel system with eight legs allowed the remote-controlled devices a much greater range of mobility and reach during testing. Tmsuk hopes the SPD1 can hit the market sometime soon after April 2024, with future editions able to handle small repair jobs on top of their current surveillance and examination capabilities.

If a swarm of SPD1 bots crawling underneath your home isn’t spooky enough, it’s worth noting that this isn’t the only spider robot in development. Last year, a UK government-funded company appropriately named Pipebots introduced its own designs for sewer repairing automatic arachnids. Like the SPD1, Pipebots hopes its products can begin traipsing through the muck and mire sometime in 2024.



]]>
Magnetic microrobots could zap the bacteria out of your cold glass of milk https://www.popsci.com/technology/magnetic-microrobots-dairy/ Thu, 08 Dec 2022 00:00:00 +0000 https://www.popsci.com/?p=496186
milk products
Aleksey Melkomukov / Unsplash

These “MagRobots” can specifically target toxins in dairy that survive pasteurization.

The post Magnetic microrobots could zap the bacteria out of your cold glass of milk appeared first on Popular Science.

]]>

A perfect mix of chemistry and engineering has produced microscopic robots that function like specialized immune cells—capable of pursuing pathogenic culprits with a specific mugshot. 

The pathogen in question is Staphylococcus aureus (S. aureus), which can impact dairy cows’ milk production. These bacteria also make toxins that cause food poisoning and gastrointestinal illnesses in humans (that includes the usual trifecta of diarrhea, abdominal cramps, and nausea). 

Removing the toxins from dairy products is not easy to do. The toxins tend to be stable and can’t be eradicated by common hygienic practices in food production, like pasteurization and heat sterilization. However, an international group of researchers led by a team from the University of Chemistry and Technology Prague may have come up with another way to get rid of these pesky pathogens: with a tiny army of magnetic microrobots. Plus, each “MagRobot” is equipped with an antibody that specifically targets a protein on the S. aureus bacteria, like a lock-and-key mechanism. 

In a small proof-of-concept study published in the journal Small, the team detailed how these MagRobots could bind and isolate S. aureus from milk without affecting other microbes that may naturally occur.

Bacteria-chasing nanobots have been making waves lately in medicine, clearing wounds and dental plaque. And if these tiny devices can work in real, scaled-up trials, as opposed to just in the lab, they promise to cut down on the use of antibiotics. 

In the past, microscopic robots have been propelled by light, chemicals, and even ultrasound. But these MagRobots are driven by a special magnetic field. The team considered this form of control the best option since the robots wouldn’t produce any toxic byproducts and can be remotely reconfigured and reprogrammed. 

To make the MagRobots, paramagnetic microparticles are covered with a chemical compound that allows them to be coated with antibodies matching proteins on the cell wall of S. aureus. This allows the MagRobots to find, bind, and retrieve the bacteria. A transversal rotating magnetic field at different frequencies is used to coordinate the bots; at higher frequencies, the MagRobots move faster. Researchers preset the trajectory of the microrobots so that they would “walk” back and forth through a control solution and a container of milk arranged in three rows and two columns. The robots are then retrieved using a permanent magnet.

During the experiment, MagRobots measuring 2.8 micrometers across were able to remove around 60 percent of the S. aureus cells in one hour. When the MagRobots were placed in milk containing both S. aureus and another bacterium, E. coli, they avoided the E. coli and went solely after the S. aureus. 

“These results indicate that our system can successfully remove S. aureus remaining after the milk has been pasteurized,” the researchers wrote. “Moreover, this fuel-free removal system based on magnetic robots is specific to S. aureus bacteria and does not affect the natural milk microbiota or add toxic compounds resulting from fuel catalysis.”

Additionally, they propose that this method can be applied to a variety of other pathogens simply by modifying the surfaces of these microrobots. 



]]>
Our first look at the Air Force’s new B-21 stealth bomber was just a careful teaser https://www.popsci.com/technology/b-21-raider-bomber-reveal/ Mon, 05 Dec 2022 22:00:36 +0000 https://www.popsci.com/?p=495172
the B-21 raider bomber
The B-21 Raider was unveiled on Dec. 2. At right is Secretary of Defense Lloyd Austin, who spoke at the event. DOD / Chad J. McNeeley

Northrop Grumman revealed the B-21 Raider in a roll-out ceremony. Here's what we know about it—and what remains hidden.

The post Our first look at the Air Force’s new B-21 stealth bomber was just a careful teaser appeared first on Popular Science.


On Friday, the public finally got a glimpse at the Air Force’s next bomber, the B-21 Raider. Northrop Grumman, which is producing it, rolled out the futuristic flying machine at a ceremony in Palmdale, California, on Dec. 2. It’s a stealthy aircraft, meaning that it’s designed to have a minimal radar signature. It’s also intended to carry both conventional and nuclear weapons. 

The new aircraft will eventually join a bomber fleet that currently consists of three different aircraft types: the old, not-stealthy B-52s, the supersonic B-1Bs, and the B-2 flying wing, which is the B-21’s most direct ancestor. 

Here’s what to know about the B-21 Raider.

The B-21 Raider
The B-21 Raider. US Air Force

A throwback to 1988

At the B-21’s unveiling, the US Secretary of Defense, Lloyd Austin, referred to the new plane as “the first bomber of the 21st century.” Indeed, the bomber models it will eventually replace include the 1980s-era aircraft, the B-2 Spirit. 

As Peter Westwick recounts in his history of low-observable aircraft in the United States, Stealth, two aircraft makers competed against each other to build the B-2. Northrop prevailed against Lockheed to build the stealth bomber, while Lockheed had previously beaten Northrop when it came to creating the first stealth fighter: the F-117. Northrop scored the contract to build the B-2 in late 1981, and rolled out the craft just over seven years later, in 1988. 

The 1988 roll-out event, Westwick writes, included “no fewer than 41 Air Force generals,” and an audience of 2,000 people. “A tractor towed the plane out of the hangar, the crowd went wild, the press snapped photos, and then the tractor pushed it back out of sight,” he writes. It flew for the first time in 1989.

[Related: The B-21 bomber won’t need a drone escort, thank you very much]

Today, the B-2 represents the smallest segment of the US bomber fleet, by the numbers. “We only bought 21 of them,” says Todd Harrison, a defense analyst at Metrea Strategic Insights. “One has crashed, one is used for testing, and at any given time, several others will be in maintenance—so the reality is we have far too few stealthy bombers in our inventory, and the only way to get more was to design and build a whole new bomber.” 

The B-2 Spirit, seen here from a refueling aircraft, in 2012.
The B-2 Spirit, seen here from a refueling aircraft, in 2012. US Air Force / Franklin Ramos

The new bomber

The B-21, when it does fly, will join the old group of bombers. Those planes, such as the B-1, “are really aging, and are hard to keep in the air—they’re very expensive to fly, and they just don’t have the capabilities that we need in the bomber fleet of today and in the future,” Harrison says. The B-52s date to the early 1960s; one B-52 pilot once told Popular Science that being at the controls of that aircraft feels like “flying a museum.” If the B-52 is officially called the Stratofortress, it’s also been called the Stratosaurus. (A likely future scenario is that the bomber fleet eventually becomes just two models: B-52s, which are getting new engines, and the B-21.)

[Related: Inside a training mission with a B-52 bomber, the aircraft that will not die]

With the B-21, the view offered by the unveiling video is just of the aircraft from the front, a brief vision of a futuristic plane. “They’re not likely to reveal the really interesting stuff about the B-21,” observes Harrison. “What’s most interesting is what they can’t show us.” That includes internal as well as external attributes. 

Publicly revealing an aircraft like this represents a calculated decision to show that a capability exists without revealing too much about it. “You want to reveal things that you think will help deter Russia or China from doing things that might provoke us into war,” he says. “But, on the other hand, you don’t want to show too much, because you don’t want to make it easy for your adversary to develop plans and technologies to counter your capabilities.”

Indeed, the way that Secretary of Defense Austin characterized the B-21 on Dec. 2 walked that line. “The B-21 looks imposing, but what’s under the frame, and the space-age coatings, is even more impressive,” he said. He then spoke about its range, stealth attributes, and other characteristics in generalities. (The War Zone, a sibling website to PopSci, has deep analysis on the aircraft here and has interviewed the pilots who will likely fly it for the first time here.)

Mark Gunzinger, the director for future concepts and capability assessments at the Mitchell Institute for Aerospace Studies, says that the B-21 rollout, which he attended, “was very carefully staged.” 

[Related: The stealth helicopters used in the 2011 raid on Osama bin Laden are still cloaked in mystery]

“There were multiple lights on each side of the aircraft that were shining out into the audience,” he recalls. “The camera angles were very carefully controlled, reporters were told what they could and could not do in terms of taking photos, and of course, the aircraft was not rolled out all the way—half of it was still pretty much inside the hangar, so people could not see the tail section.” 

“The one word you heard the most during the presentation from all the speakers was ‘deterrence,'” Gunzinger adds. Part of achieving that is signaling to others that the US has “a creditable capability,” but at the same time, “there should be enough uncertainty about the specifics—performance specifics and so forth—so they do not develop effective countermeasures.”

The B-21 rollout concluded with Northrop Grumman’s CEO, Kathy Warden, who mentioned the aircraft’s next big moment. “The next time you see this plane, it’ll be in the air,” she said. “Now, let’s put this plane to bed.” 

And with that, it was pushed back into the hangar, and the doors closed in front of it. 

Watch the reveal video, below.

The most ingenious engineering feats of 2022 https://www.popsci.com/technology/best-engineering-innovations-2022/ Thu, 01 Dec 2022 16:00:00 +0000 https://www.popsci.com/?p=490053
It's the Best of What's New.
It's the Best of What's New. IBM

Solar-powered consumer gadgets, an AI that can generate images from text, and more ground-breaking tech are the Best of What’s New.

The post The most ingenious engineering feats of 2022 appeared first on Popular Science.


Zero-emissions vehicles, artificial intelligence, and self-charging gadgets are helping remake and update some of the most important technologies of the last few centuries. Personal devices like headphones and remote controls may be headed for a wireless, grid-less future, thanks to a smaller and more flexible solar panel. Boats can now sail human-free from the UK to the US, using a suite of sensors and AI. Chemical factories, energy facilities, trucks and ships are getting green makeovers as engineers figure out clever new ways to make them run on hydrogen, batteries, or other alternative, non-fossil fuel power sources.

Looking for the complete list of 100 winners? Check it out here.

Grand Award Winner

1915 Çanakkale by the Republic of Turkey: The world’s longest suspension bridge

Çanakkale Motorway Bridge Construction Investment Operation

Learn More

An international team of engineers had to solve several difficult challenges to build the world’s longest suspension bridge, which stretches 15,118 feet across the Dardanelles Strait in Turkey. To construct it, engineers used tugboats to float out 66,000-ton concrete foundations known as caissons to serve as pillars. They then flooded chambers in the caissons to sink them 40 meters (131 feet) deep into the seabed. Prefabricated sections of the bridge deck were carried out with barges and cranes, then assembled. Completed in March 2022, the bridge boasts a span between the two towers that measures an incredible 6,637 feet. Ultimately the massive structure shortens the commuting time across the congested strait, which is a win for everyone.

NuGen by Anglo American: World’s largest hydrogen fuel cell EV

Anglo American

Learn More

When carrying a full load of rock, the standard-issue Komatsu 930E-5 mining truck weighs over 1 million pounds and burns 800 gallons of diesel per work day. Collectively, mining trucks emit 68 million tons of carbon dioxide each year (about as much as the entire nation of New Zealand). Anglo American’s solution was to turn to hydrogen power: the mining company hired American contractor First Mode to hack together a hydrogen fuel cell version of its mining truck, called NuGen. Since the original Komatsu truck already had electric traction motors, powered by diesel, the engineers replaced the fossil-fuel-burning engine with eight separate 800-kW fuel cells that feed into a giant 1.1-MWh battery. (The battery further recaptures power through regenerative braking.) Deployed at a South African platinum mine in May, the truck refuels with green hydrogen produced using energy from a nearby solar farm.
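For a sense of scale, the 800-gallon daily burn can be converted into CO2 with a back-of-envelope calculation. The emission factor (roughly 10.2 kg of CO2 per gallon of diesel, a standard figure) and the number of work days per year are assumptions, not numbers from the article:

```python
# Back-of-envelope CO2 estimate for one diesel mining truck.
GALLONS_PER_DAY = 800        # from the article
KG_CO2_PER_GALLON = 10.2     # assumed standard diesel emission factor
WORK_DAYS_PER_YEAR = 300     # assumption

daily_tonnes = GALLONS_PER_DAY * KG_CO2_PER_GALLON / 1000
annual_tonnes = daily_tonnes * WORK_DAYS_PER_YEAR
print(f"~{daily_tonnes:.1f} t CO2/day, ~{annual_tonnes:,.0f} t CO2/year")
```

Roughly eight tonnes of CO2 per truck per day, which makes the fleet-wide 68-million-ton figure plausible.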

Hydeal España by ArcelorMittal, Enagás, Grupo Fertiberia and DH2 Energy: The biggest green hydrogen hub

Negro Elkha – stock.adobe.com

Learn More

Hydrogen can be a valuable fuel for decarbonizing industrial processes, but at scale the gas is typically obtained from natural gas. The cleaner route is to split water into hydrogen and oxygen with electrical currents, and to be sustainable, that process needs to be powered with renewables. That’s the goal of an industrial consortium in Spain, comprising the four companies listed above, which is beginning work on HyDeal España, set to be the world’s largest green hydrogen hub. Solar panels with a capacity of 9.5 GW will power electrolysers that will separate hydrogen from water at an unprecedented scale. The project will help create fossil-free ammonia (for fertilizer and other purposes), and hydrogen for use in the production of green steel. The hub is scheduled to be completed in 2030, and according to the consortium’s estimates, the project will reduce the greenhouse gas footprint of Spain by 4 percent. 
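A rough sense of the hub’s scale, under assumed values for electrolyzer energy use and solar capacity factor (neither figure comes from the project, so this is an order-of-magnitude sketch only):

```python
# Rough annual hydrogen yield for a 9.5 GW solar-powered electrolysis hub.
SOLAR_GW = 9.5            # from the article
CAPACITY_FACTOR = 0.25    # assumed for Spanish solar
KWH_PER_KG_H2 = 50        # typical electrolyzer figure, assumed
HOURS_PER_YEAR = 8760

annual_kwh = SOLAR_GW * 1e6 * CAPACITY_FACTOR * HOURS_PER_YEAR
annual_tonnes_h2 = annual_kwh / KWH_PER_KG_H2 / 1000
print(f"~{annual_tonnes_h2:,.0f} tonnes of H2 per year")
```

Hundreds of thousands of tonnes per year, which is what “unprecedented scale” means in practice for electrolysis.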

DALL-E 2 by OpenAI: A groundbreaking text-to-image generator

OpenAI

Learn More

Art students will often mimic the style of a master as part of their training. DALL-E 2 by OpenAI takes this technique to a scale only artificial intelligence can achieve, by studying hundreds of millions of captioned images scraped from the internet. It allows users to write text prompts that the algorithm then renders into pictures in less than a minute. Compared to previous image generators, the quality of the output is getting rave reviews, and there are “happy accidents” that feel like real creativity. And it’s not just artists—urban planning advocates and even a reconstructive surgeon have used the tool to visualize rough concepts.

The P12 shuttle by Candela: A speedy electric hydrofoil ferry

Candela

Learn More

When the first Candela P12 electric hydrofoil goes into service next year in Stockholm, Sweden, it will take commuters from the suburbs to downtown in about 25 minutes. That’s a big improvement over the 55 minutes the trip takes on diesel ferries. Because the P12 produces almost no wake, it is allowed to exceed the speed restrictions placed on other watercraft; it travels at roughly 30 miles per hour, which according to the company makes it the world’s fastest aquatic electric vessel. Computer-guided stabilization technology aims to make the ride feel smooth. And as a zero-emissions way to avoid traffic congestion at bridge or tunnel chokepoints without needing to build expensive infrastructure, the boats are a win for transportation.

Bioforge by Solugen: Zero-emission chemical factory

Solugen

Learn More

Petrochemical plants typically require acres of towering columns and snaking pipes to turn fossil fuels into useful products. In addition to producing toxic emissions like benzene, these facilities put out 925 million metric tons of greenhouse gas every year, according to an IEA estimate. But outside Houston, Solugen built a “Bioforge” plant that produces 10,000 tons of chemicals like fertilizer and cleaning solutions annually through a process that yields zero air emissions or wastewater. The secret sauce consists of enzymes: instead of using fossil fuels as a feedstock, these proteins turn corn syrup into useful chemicals for products much more efficiently than conventional fossil fuel processes, and at a competitive price. These enzymes even like to eat pieces of old cardboard that can’t be recycled anymore, turning trash into feedstock treasure. Solugen signed a deal this fall with a large company to turn cardboard landfill waste into usable plastics.

HydroSKIN by ILEK/U of Stuttgart: Zero-Emissions Cooling

Institute for Lightweight Structures and Conceptual Design (ILEK), University of Stuttgart

Learn More

Air conditioners and fans already consume 10 percent of the world’s electricity, and AC use is projected to triple by the year 2050. But there are other ways to cool a structure. Installed in an experimental building in Stuttgart, Germany, an external facade add-on called HydroSKIN employs layers of modern textiles to update the ancient technique of using wet cloth to cool the air through evaporation. The top layer is a mesh that serves to keep out bugs and debris. The second layer is a thick spacer fabric designed to absorb water—from rain or water vapor when it’s humid out—and then facilitate evaporation in hot weather. The third layer is an optional film that provides additional absorption. The fourth (closest to the wall of the building) is a foil that collects any moisture that soaks through, allowing it to either be stored or drained. A preliminary estimate found that a single square meter of HydroSKIN can cool an 8x8x8 meter (26x26x26 feet) cube by 10 kelvins (18 degrees Fahrenheit).
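The physics behind the facade is latent heat: evaporating water absorbs energy from the surrounding air. A hedged sketch of the cooling power available per square meter, with an assumed evaporation rate (not a figure from the HydroSKIN team):

```python
# Evaporative cooling sketch: each liter of evaporated water absorbs
# roughly 2.26 MJ (water's latent heat of vaporization; slightly higher
# at ambient temperature than at the boiling point).
LATENT_HEAT_MJ_PER_L = 2.26
liters_per_hour = 1.0  # assumed evaporation from 1 square meter of facade

# Convert MJ/hour to watts: 1 MJ/h = 1e6 J / 3600 s.
cooling_watts = liters_per_hour * LATENT_HEAT_MJ_PER_L * 1e6 / 3600
print(f"~{cooling_watts:.0f} W of cooling per square meter")
```

Even a modest evaporation rate yields several hundred watts of cooling per square meter, which is why a passive textile facade can move the temperature of a whole room.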

Powerfoyle by Exeger: Self-charging gadgets

Exeger

Learn More

Consumer electronics in the U.S. used about 176 terawatt hours of electricity in 2020, more than the entire nation of Sweden. Researchers at the Swedish company Exeger have devised a new architecture for solar cells that’s compact, flexible, and can be integrated into a variety of self-charging gadgets. Silicon solar panels generate power cheaply at massive scale, but are fragile and require unsightly silver lines to conduct electricity.  Exeger’s Powerfoyle updates a 1980s innovation called dye-sensitized solar cells with titanium dioxide, an abundant material found in white paint and donut glaze, and a new electrode that’s 1,000 times more conductive than silicon. Powerfoyle can be printed to look like brushed steel, carbon fiber or plastic, and can now be found in self-charging headphones by Urbanista and Adidas, a bike helmet, and even a GPS-enabled dog collar.

The Mayflower by IBM: Uncrewed trans-Atlantic voyage

Collecting data in the corrosive salt waves and high winds of the Atlantic can be dull, dirty, and dangerous. Enter the Mayflower, an AI-captained, electrically-powered ship. It has 30 sensors and 16 computing devices that can process data onboard in lieu of a galley, toilets, or sleeping quarters. After the Mayflower successfully piloted itself from Plymouth in the UK to Plymouth, MA earlier this year—with pit stops in the Azores and Canada due to mechanical failures—the team is prepping a vessel more than twice the size for a longer journey. The boat is designed to collect data on everything from whales to the behavior of eddies or gyres at a hundredth the cost of a crewed voyage and without risking human life. The next milestone will be a 12,000 mile trip from the UK to Antarctica, with a return trip via the Falkland Islands.

The Wheatridge Renewable Energy Facilities by NextEra Energy Resources and Portland General Electric: A triple threat of renewable energy

Portland General Electric

Learn More

In Oregon, the Wheatridge Renewable Energy Facilities, co-owned by NextEra Energy Resources and Portland General Electric (PGE), is combining solar, wind, and battery storage to bring renewable energy to the grid at utility scale. Key to the equation are those batteries, which stabilize the intermittency of wind and solar power. All told, it touts 300 megawatts of wind, 50 megawatts of solar, and 30 megawatts of battery storage capable of serving around 100,000 homes, and it’s already started producing power. The facility is all part of the Pacific Northwestern state’s plan to achieve 100-percent carbon-free electricity by 2040. 

Correction on Dec. 2, 2022: This post has been updated to correct an error regarding the date that the suspension bridge in Turkey was completed.

This device will allow the marines to make drinking water from thin air https://www.popsci.com/technology/marine-corps-atmospheric-portable-water-sustainment-unit/ Tue, 29 Nov 2022 22:11:01 +0000 https://www.popsci.com/?p=493102
A representative of U.S. Indo-Pacific Command Logistics Science and Technology briefs distinguished visitors on the Atmospheric Portable-water Sustainment Unit and Lightweight Water Purification System at Marine Corps Base Hawaii,
The Atmospheric Portable-water Sustainment Unit and Lightweight Water Purification System installed at Marine Corps Base in Hawaii. Cpl Patrick King / DVIDS

It can generate over 15 gallons in a day, or enough water for a squad of marines.

The post This device will allow the marines to make drinking water from thin air appeared first on Popular Science.


An army may march on its stomach, but it can’t march at all if the soldiers don’t have water. To ensure that its forces are always able to hydrate wherever they operate, this year, the Marine Corps has been testing a machine that can pull drinkable water out of the air. Called the Atmospheric Portable-water Sustainment Unit, when paired with a water purification system it can generate over 15 gallons in a day, or enough water for a squad of marines.

Capt. Sean Conderman, of the 3rd Marine Littoral Regiment’s combat logistics battalion at MCBH, told The Honolulu Star-Advertiser that it’s in essence a small dehumidifier paired with a purifier. “We can mount it basically on any vehicle, and what it does is it pulls water out of the air to give us potable water without having to connect to an actual water source.” He further elaborated to the Star-Advertiser that this device would be ideal in humid environments like the ones across the United States Indo-Pacific Command. 

The Atmospheric Portable-water Sustainment Unit, or APSU, is paired with the Corps’ Lightweight Water Purification System to ensure that the water it pulls from the atmosphere is drinkable. Together they generate 15 to 20 gallons of drinkable water every 24 hours. Since the Corps recommends “three to four and a half quarts (96–144 fl oz) of fluid per day for men and two to three quarts (64–96 fl oz) for women,” using the high end of those recommendations, the system can sustain 13 men or 20 women on its low-end output. With variable water consumption rates across people, and production of up to 20 gallons, a single unit could sustain at least one squad, possibly a squad and a half.
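The arithmetic can be checked directly; the 128 fluid ounces per gallon conversion is standard, and the intake figures are the ones quoted above:

```python
# People supplied per day by the unit's low-end output (15 gallons),
# at the high end of the Corps' fluid-intake guidance quoted above.
GALLON_FL_OZ = 128
output_fl_oz = 15 * GALLON_FL_OZ   # 1,920 fl oz per day

men = output_fl_oz // 144          # 4.5 quarts = 144 fl oz per man
women = output_fl_oz // 96         # 3 quarts = 96 fl oz per woman
print(men, women)  # 13 20
```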

Drinking water is a necessity anywhere the military operates. In the Pacific or other humid environments, the unit can turn the oppressively moist air into an asset, freeing forces from a reliance on known streams and instead letting them drink from the sky. 

Snowbird Water Technologies built the APSU for the military, and describes it as an “Air Water Generator.” The air water generator “produces water from air, using an extremely efficient process by which condensation is collected and treated with an ozonator and UV light, ensuring safe and potable drinking water is produced at the tactical edge of the battlefield.”

Snowbird first announced its contract with the military in April 2021, highlighting that the system can fit on the back of a trailer or vehicle. Being able to bring a water generator into the field means that the water supply is constrained only by the availability of power and storage.

One possibility this opens up is that soldiers or marines could set up temporary camps in austere places where shipping in drinking water would be more trouble than it’s worth.

As the Marine Corps revisits its Pacific past and considers island campaigns, one challenge is resupply. Logistics, the process of getting forces in the field everything they need, is a hard problem, and it is harder over sea and in war zones. A marine regiment that can supply its own water will still need some aid: everything from food to bullets to medical supplies is depleted in war. But the ability to free itself from dependence on local water supplies, which the Atmospheric Portable-water Sustainment Unit promises, could let the marines go longer between supply drops, or move through otherwise impassable routes without sacrificing health.

For centuries, the most meaningful constraint on a military was how much food it could carry on the march (or forage in the field), and that was along routes premised on water being available. 

The ability to bring water resupply into the field expands where an army can go, and how long it can operate. Often, battles have been forced by soldiers desperate for supply seeking what they can before rations run out. With at least water resupply on hand (for as long as there’s power to run the water generator), a unit can wait, choosing instead to raid when it is most advantageous to do so.
