AI | Popular Science
https://www.popsci.com/category/ai/
Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong.

Twitter turns to Community Notes to factcheck images
https://www.popsci.com/technology/twitter-community-notes-misinfo/
Wed, 31 May 2023 16:00:00 +0000
Twitter's expanded crowdsourcing approach to handling misinformation comes after an uptick in altered media. Twitter

The social media platform has recently faced a deluge of hoax and AI-generated material.

Following a troubling proliferation of AI-generated and manipulated media, Twitter announced on Tuesday its plans to expand its Community Notes system to flag altered and fake images. First launched late last year shortly after Elon Musk’s $44 billion acquisition of Twitter, Community Notes built upon the company’s previous Birdwatch program aimed at leveraging unpaid, crowdsourced fact checking of tweets to rein in misinformation and hoaxes.

[Related: Why an AI image of Pope Francis in a fly jacket stirred up the internet.]

The expansion is currently in an “experimental” testing phase, and only pertains to posts containing a single image. Twitter states it plans to extend the feature to handle tweets featuring additional media uploads such as GIFs, videos, and multiple images in the near future. As of right now, however, only those signed up as a Community Notes contributor with a user rated Writing Impact score of 10 can see the option to flag a post for its accompanying media instead of just its text. According to Twitter’s Community Notes page, “Tagging notes as ‘about the image’ makes them visible on all Tweets that our system identifies as containing the same image,” meaning that other users’ tweets containing the same image alongside different text will hypothetically contain the same flag.

Twitter’s Community Notes team warned that the new feature’s accuracy could still produce both false positives and negatives for other tweets.  “It’s currently intended to err on the side of precision when matching images,” they explained, “which means it likely won’t match every image that looks like a match to you.” Twitter added that its team will continue to “tune this to expand coverage” while also cutting down on “erroneous matches.”
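
Twitter has not said how its system decides that two tweets contain “the same image,” but near-duplicate matching of this kind is commonly done with perceptual hashes, which tolerate re-encoding and resizing while keeping false matches rare. The sketch below shows the general technique only; the hash style, library, and threshold are illustrative assumptions, not Twitter’s implementation.

```python
# Minimal sketch of perceptual (difference-hash) image matching. This is NOT
# Twitter's disclosed method, just a common way of deciding whether two images
# are "the same" despite re-encoding, resizing, or mild edits.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Reduce the image to grayscale, shrink it, and hash brightness gradients."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def likely_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Two images 'match' if their hashes differ in only a few bits (Hamming distance)."""
    return bin(dhash(path_a) ^ dhash(path_b)).count("1") <= max_distance
```

Erring “on the side of precision,” as Twitter puts it, corresponds to a strict match threshold: fewer erroneous matches, at the cost of missing some visually similar copies.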

The new feature arrives just days after a fake image depicting an explosion at the Pentagon began circulating on Twitter, first via an account claiming association with Bloomberg News. The now-suspended account included a “Blue Checkmark” that for years reflected an account’s verified authenticity. Following Musk’s company takeover, a verification can now be obtained via subscribing to the premium Twitter Blue user tier.

[Related: Twitter’s ‘Blue Check’ drama is a verified mess.]

Twitter has relied extensively on crowdsourced moderation via the Community Notes system after axing the majority of its staff dedicated to trust and safety issues. On Wednesday, The Wall Street Journal reported the social media platform is now worth approximately one-third of the $44 billion Musk paid for it.

Big Tech’s latest AI doomsday warning might be more of the same hype
https://www.popsci.com/technology/ai-warning-critics/
Wed, 31 May 2023 14:00:00 +0000
Critics say current harms of AI include amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption. Photo by Jaap Arriens/NurPhoto via Getty Images

On Tuesday, a group including AI's leading minds proclaimed that we are facing an 'extinction crisis.'

Over 350 AI researchers, ethicists, engineers, and company executives co-signed a 22-word, single-sentence statement about artificial intelligence’s potential existential risks for humanity. Compiled by the nonprofit Center for AI Safety, the statement’s signatories, including the “Godfather of AI” Geoffrey Hinton, OpenAI CEO Sam Altman, and Microsoft Chief Technology Officer Kevin Scott, agree that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The 22-word missive and its endorsements echo a similar, slightly lengthier joint letter released earlier this year calling for a six-month “moratorium” on research into developing AI more powerful than OpenAI’s GPT-4. Such a moratorium has yet to be implemented.

[Related: There’s a glaring issue with the AI moratorium letter.]

Speaking with The New York Times on Tuesday, Center for AI Safety’s executive director Dan Hendrycks described the open letter as a “coming out” for some industry leaders. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things,” added Hendrycks.

But critics remain wary of both the motivations behind such public statements, as well as their feasibility.

“Don’t be fooled: it’s self-serving hype disguised as raising the alarm,” says Dylan Baker, a research engineer at the Distributed AI Research Institute (DAIR), an organization promoting ethical AI development. Speaking with PopSci, Baker went on to argue that the current discussions regarding hypothetical existential risks distract the public and regulators from “the concrete harms of AI today.” Such harms include “amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption.”

In a separate response, first published by DAIR following March’s open letter and re-upped on Tuesday, the group argues, “The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Hendrycks, however, believes that “just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.” Hendrycks likened the moment to when atomic scientists warned the world about the technologies they created before quoting J. Robert Oppenheimer, “We knew the world would not be the same.”

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

“They are essentially saying ‘hold me back!’” media and tech theorist Douglas Rushkoff wrote in an essay published on Tuesday. He added that a combination of “hype, ill-will, marketing, and paranoia” is fueling AI coverage and hiding the technology’s very real, demonstrable issues while companies attempt to consolidate their holds on the industry. “It’s just a form of bluffing,” he wrote. “Sorry, but I’m just not buying it.”

In a separate email to PopSci, Rushkoff summarized his thoughts: “If I had to make a quote proportionately short to their proclamation, I’d just say: They mean well. Most of them.”

Google engineers used real dogs to develop an agility course for robots
https://www.popsci.com/technology/google-barkour-robot-dog-agility/
Tue, 30 May 2023 23:00:00 +0000
A robot dog 'Barkour' course may provide a new industry standard for four-legged machines. Deposit Photos

Researchers hope the 'Barkour' challenge can become an industry benchmark.

It feels like nearly every week or so, someone’s quadrupedal robot gains yet another impressive (occasionally terrifying) ability or trick. But as cool as a Boston Dynamics Spot bot’s new capability may be, it’s hard to reliably compare newly developed talents to others when there still aren’t any industry standard metrics. 

Knowing this, a team of research scientists at Google is aiming to streamline evaluations through a new system that’s as ingenious as it is obvious: robot obstacle courses akin to dog agility competitions. It’s time to stretch those robotic limbs and ready the next generation of four-legged machines for Barkour.

[Related: This robot dog learned a new trick—balancing like a cat.]

“[W]hile researchers have enabled robots to hike or jump over some obstacles, there is still no generally accepted benchmark that comprehensively measures robot agility or mobility,” the team explained in a blog post published last week. “In contrast, benchmarks are driving forces behind the development of machine learning, such as ImageNet for computer vision, and OpenAI Gym for reinforcement learning (RL).” As such, “Barkour: Benchmarking Animal-level Agility with Quadruped Robots” aims to rectify that missing piece of research.

Actual dogs can complete the Barkour course in about 10 seconds, but robots need about double that. CREDIT: Google Research

In simple terms, the Barkour agility course is nearly identical to many dog courses, albeit much more compact at 5-by-5 meters to allow for easy setup in labs. The current standard version includes four unique obstacles—a line of poles to weave between, an A-frame structure to climb up and down, a 0.5m broad jump, and finally, a step up onto an end table.

To make sure the Barkour setup was fair to robots mimicking dogs, the team first offered up the space to actual canines—in this case, a small group of “dooglers,” aka Google employees’ own four-legged friends. According to the team, small dogs managed to complete the course in around 10 seconds, while robots usually take about double that time.

[Related: Dogs can understand more complex words than we thought.]

Each obstacle is scored between 0 and 1, based on target times set for small dogs in novice agility competitions (around 1.7 m/s). Each quadrupedal robot must complete every obstacle, and is given penalties for failing, skipping stations, or maneuvering too slowly through the course.
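
The Google post describes the scoring only at that level of detail, so the snippet below is a simplified reading rather than the benchmark’s official formula: each obstacle starts from a full score, slow runs are docked against the dog-derived target pace, and skipped or failed attempts take fixed penalties, with everything clamped between 0 and 1. The penalty constants are placeholders.

```python
# Simplified, unofficial reading of a Barkour-style score. The real benchmark
# defines its own exact formula; treat the constants here as placeholders.
TARGET_SPEED_MPS = 1.7   # target pace derived from novice small-dog agility times
SKIP_PENALTY = 0.25      # hypothetical penalty values, for illustration only
FAIL_PENALTY = 0.5

def obstacle_score(segment_length_m: float, elapsed_s: float,
                   skipped: bool = False, failed: bool = False) -> float:
    """Score one obstacle between 0 and 1, docking slow, skipped, or failed attempts."""
    target_s = segment_length_m / TARGET_SPEED_MPS
    score = min(1.0, target_s / max(elapsed_s, 1e-6))  # slower than target drops below 1
    if skipped:
        score -= SKIP_PENALTY
    if failed:
        score -= FAIL_PENALTY
    return max(0.0, score)

# A robot that takes ~20 s on a segment a dog covers in ~10 s scores about 0.5.
print(obstacle_score(segment_length_m=17.0, elapsed_s=20.0))
```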

“We believe that developing a benchmark for legged robotics is an important first step in quantifying progress toward animal-level agility,” explained the team, adding that, moving forward, the Barkour system potentially offers industry researchers an “easily customizable” benchmark.

A robot gardener outperformed human horticulturalists in one vital area
https://www.popsci.com/technology/alphagarden-ai-robot-farming/
Tue, 30 May 2023 16:00:00 +0000
AlphaGarden used as much as 44 percent less water than its human counterparts. Deposit Photos

UC Berkeley researchers claim their robotic farmer passes the green thumb Turing Test.

Even after all that quarantine hobby honing, gardening can still be an uphill battle for those lacking a green thumb—but a little help from robotic friends apparently goes a long way. Recently, UC Berkeley unveiled AlphaGarden, a high-tech, AI-assisted plant ecosystem reportedly capable of cultivating a polycultural garden at least as well as its human counterparts. And in one particular, consequential metric, AlphaGarden actually excelled.

As detailed by IEEE Spectrum over the weekend, UC Berkeley’s gardening plot combined a commercial robotic gantry farming setup with AlphaGardenSim, an AI program developed in-house that relies on a high-resolution camera alongside soil moisture sensors. The developers also included automated drip irrigation, pruning, and even seed planting. AlphaGarden (unfortunately) doesn’t feature a fleet of cute, tiny farm bots scuttling around its produce; instead, the system resembles a small crane installation capable of moving above and tending to the garden bed.
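
The article does not spell out AlphaGardenSim’s actual control logic, but the basic pairing of soil-moisture sensors with drip valves can be sketched as a simple closed loop: water a zone only when its reading falls below a crop-specific setpoint, and only by the amount of the deficit. The setpoints and volumes below are hypothetical placeholders, not the Berkeley team’s values.

```python
# Hypothetical moisture-threshold drip-irrigation loop. This is not the actual
# AlphaGardenSim policy, just the sensor-driven idea described in the article.
from dataclasses import dataclass

@dataclass
class Zone:
    plant: str
    moisture: float   # current volumetric soil moisture (0-1), from the sensors
    target: float     # crop-specific setpoint (placeholder values)

def irrigation_plan(zones: list[Zone], liters_per_point: float = 2.0) -> dict[str, float]:
    """Return liters of water per zone; dry zones get only the deficit they need."""
    plan = {}
    for z in zones:
        deficit = max(0.0, z.target - z.moisture)
        plan[z.plant] = round(deficit * 100 * liters_per_point, 2)
    return plan

garden = [Zone("kale", 0.28, 0.30), Zone("cilantro", 0.35, 0.30), Zone("turnip", 0.22, 0.32)]
print(irrigation_plan(garden))   # only kale and turnip receive water
```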

[Related: How to keep your houseplants from dying this summer.]

As an added challenge, AlphaGarden was a polyculture creation, meaning it contained a variety of crops like turnips, arugula, lettuce, cilantro, kale, and other plants. Polyculture gardens reflect nature much more accurately, and benefit from better soil health, pest resilience, and fewer fertilization requirements. At the same time, they are often much more labor-intensive given the myriad plant needs, growth rates, and other such issues when compared to a monoculture yield.

To test AlphaGarden’s capabilities against humans, researchers simply built two plots and planted the same seeds in both of them. Over the next 60 days, AlphaGarden was largely left to its own literal and figurative devices, while professional horticulturalists tended the other plot. Afterwards, UC Berkeley repeated the same growth cycle, but this time allowed AlphaGarden to give its slower-growing plants an earlier start.

According to researchers, the results from the two cycles “suggest that the automated AlphaGarden performs comparably to professional horticulturalists in terms of coverage and diversity.” While that might not be too surprising given all the recent, impressive AI advancements, there was one aspect in which AlphaGarden unequivocally outperformed its human farmer controls: over the two test periods, the robotic system reduced water consumption by as much as a whopping 44 percent. As IEEE Spectrum explained, that translates to several hundred liters less over the two-month period.

[Related: Quick and dirty tips to make sure your plants love the soil they’re in.]

Although researchers claim “AlphaGarden has thus passed the Turing Test for gardening,” referencing the much-debated marker for robotic intelligence and sentience, there are a few caveats here. For one, these commercial gantry systems remain cost prohibitive for most people (the cheapest one looks to be about $3,000), and more research is needed to further optimize its artificial light sources and water usage. There’s also the question of scalability and customization, as different gardens have different shapes, sizes, and needs.

Still, in an era of increasingly dire water worries, it’s nice to see developers creating novel ways to reduce water consumption for one of the planet’s thirstiest industries.

AI therapists might not actually help your mental health
https://www.popsci.com/technology/ai-chatbot-therapist-mental-health/
Fri, 19 May 2023 01:00:00 +0000
It may prove tempting for insurers to offer up apps and chatbots to meet the mental health parity requirement. DepositPhotos

There are good reasons to be cautious in the face of this marketing juggernaut.

This article was originally featured on KFF Health News.

In the past few years, 10,000 to 20,000 apps have stampeded into the mental health space, offering to “disrupt” traditional therapy. With the frenzy around AI innovations like ChatGPT, the claim that chatbots can provide mental health care is on the horizon.

The numbers explain why: Pandemic stresses led to millions more Americans seeking treatment. At the same time, there has long been a shortage of mental health professionals in the United States; more than half of all counties lack psychiatrists. Given the Affordable Care Act’s mandate that insurers offer parity between mental and physical health coverage, there is a gaping chasm between demand and supply.

For entrepreneurs, that presents a market bonanza. At the South by Southwest conference in March, where health startups displayed their products, there was a near-religious conviction that AI could rebuild health care, offering apps and machines that could diagnose and treat all kinds of illnesses, replacing doctors and nurses.

Unfortunately, in the mental health space, evidence of effectiveness is lacking. Few of the many apps on the market have independent outcomes research showing they help; most haven’t been scrutinized at all by the FDA. Though marketed to treat conditions such as anxiety, attention-deficit/hyperactivity disorder, and depression, or to predict suicidal tendencies, many warn users (in small print) that they are “not intended to be medical, behavioral health or other healthcare service” or “not an FDA cleared product.”

There are good reasons to be cautious in the face of this marketing juggernaut.

Decades ago, Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology considered one of the fathers of artificial intelligence, predicted AI would never make a good therapist, though it could be made to sound like one. In fact, his original AI program, created in the 1960s, was ELIZA, a simulated psychotherapist that used word and pattern recognition combined with natural language processing to sound like a therapist:

Woman: Well, my boyfriend made me come here.

ELIZA: Your boyfriend made you come here?

Woman: He says I’m depressed much of the time.

ELIZA: I am sorry to hear that you are depressed.

Woman: It’s true. I am unhappy.

ELIZA: Do you think coming here will help you not to be unhappy?
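
That exchange can be reproduced with a handful of pattern-and-reflection rules, which is essentially all ELIZA did. Below is a toy sketch; Weizenbaum’s original relied on much larger hand-written scripts.

```python
# Toy ELIZA-style responder: a few pattern-and-reflection rules reproduce the
# transcript above. Weizenbaum's original used far richer scripted rule sets.
import re

RULES = [
    (r"my (.+) made me come here", "Your {0} made you come here?"),
    (r"(?:he|she) says i'?m (\w+)", "I am sorry to hear that you are {0}."),
    (r"i am (\w+)", "Do you think coming here will help you not to be {0}?"),
]

def reflect(text: str) -> str:
    """Swap first- and second-person words so echoed phrases read naturally."""
    swaps = {"i": "you", "my": "your", "me": "you", "am": "are"}
    return " ".join(swaps.get(word, word) for word in text.split())

def eliza(utterance: str) -> str:
    cleaned = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.search(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."   # generic fallback, much like the original

for line in ["Well, my boyfriend made me come here.",
             "He says I'm depressed much of the time.",
             "It's true. I am unhappy."]:
    print("ELIZA:", eliza(line))
```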

Though hailed as an AI triumph, ELIZA’s “success” terrified Weizenbaum, whom I once interviewed. He said students would interact with the machine as if ELIZA were an actual therapist, when what he’d created was, in his words, “a party trick.”

He foresaw the evolution of far more sophisticated programs like ChatGPT. But “the experiences a computer might gain under such circumstances are not human experiences,” he told me. “The computer will not, for example, experience loneliness in any sense that we understand it.”

The same goes for anxiety or ecstasy, emotions so neurologically complex that scientists have not been able to pinpoint their neural origins. Can a chatbot achieve transference, the empathic flow between patient and doctor that is central to many types of therapy?

“The core tenet of medicine is that it’s a relationship between human and human — and AI can’t love,” said Bon Ku, director of the Health Design Lab at Thomas Jefferson University and a pioneer in medical innovation. “I have a human therapist, and that will never be replaced by AI.”

Ku said he’d like to see AI used instead to reduce practitioners’ tasks like record-keeping and data entry to “free up more time for humans to connect.”

While some mental health apps may ultimately prove worthy, there is evidence that some can do harm. One researcher noted that some users faulted these apps for their “scripted nature and lack of adaptability beyond textbook cases of mild anxiety and depression.”

It may prove tempting for insurers to offer up apps and chatbots to meet the mental health parity requirement. After all, that would be a cheap and simple solution, compared with the difficulty of offering a panel of human therapists, especially since many take no insurance because they consider insurers’ payments too low.

Perhaps seeing the flood of AI hitting the market, the Department of Labor announced last year it was ramping up efforts to ensure better insurer compliance with the mental health parity requirement.

The FDA likewise said late last year it “intends to exercise enforcement discretion” over a range of mental health apps, which it will vet as medical devices. So far, not one has been approved. And only a very few have gotten the agency’s breakthrough device designation, which fast-tracks reviews and studies on devices that show potential.

These apps mostly offer what therapists call structured therapy — in which patients have specific problems and the app can respond with a workbook-like approach. For example, Woebot combines exercises for mindfulness and self-care (with answers written by teams of therapists) for postpartum depression. Wysa, another app that has received a breakthrough device designation, delivers cognitive behavioral therapy for anxiety, depression, and chronic pain.

But gathering reliable scientific data about how well app-based treatments function will take time. “The problem is that there is very little evidence now for the agency to reach any conclusions,” said Kedar Mate, head of the Boston-based Institute for Healthcare Improvement.

Until we have that research, we don’t know whether app-based mental health care does better than Weizenbaum’s ELIZA. AI may certainly improve as the years go by, but at this point, for insurers to claim that providing access to an app is anything close to meeting the mental health parity requirement is woefully premature.

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.


Trump shares AI-altered fake clip of Anderson Cooper
https://www.popsci.com/technology/trump-ai-cnn/
Wed, 17 May 2023 20:00:00 +0000
Trump promoted a redubbed video of Anderson Cooper on Truth Social. James Devaney/GC Images

A sloppy, voice-cloned soundbite of the CNN anchor concerns experts.

Shortly after CNN’s Town Hall with Donald Trump last week, the former president’s son tweeted a clearly manipulated 9-second video clip featuring an AI-generated vocal imitation of CNN anchor Anderson Cooper offering a vulgar compliment of the former president’s town hall performance. “I’m told this is real…,” wrote Donald Trump, Jr. “[I]t seems real and it’s surprisingly honest and accurate for CNN… but who knows these days.”

Despite a Twitter Community Note flagging the video as fake, one commenter replied “Real or not, it’s the truth just the same.”

Two days later, Trump re-upped the same altered clip to Truth Social, the alternative social media platform favored by his supporters. And while many replies on both Twitter and Truth Social appear to indicate users are largely aware of the clumsy parody, experts warn Trump’s multiple recent instances of embracing AI-generated content could sow confusion and chaos leading up to his bid for reelection in next year’s presidential campaign.

[Related: “This fictitious news show is entirely produced by AI and deepfakes” ]

“Manipulating reality for profit and politics not only erodes a healthy society, but it also shows that Trump has incredible disrespect for his own base, forget about others,” Patrick Lin, a professor of philosophy and director of California Polytechnic State University’s Ethics and Emerging Sciences Group, told PopSci. “It’s beyond ironic that he would promote so much fake news, while in the same breath accuse those who are reporting real facts of doing the same,” said Lin.

And there’s no indication the momentum behind AI content will slow—according to Bloomberg on Wednesday, multiple deepfake production studios have collectively raised billions of dollars in investments over the past year. 

Barely a month after Trump posted an AI-generated image of himself kneeling in prayer, the Republican National Committee released a 30-second ad featuring AI-created images of a dystopian America should President Biden be reelected.

“We’re not prepared for this,” A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox told AP News over the weekend regarding the rise of audio and video deepfakes. “When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

According to Lin, the spread of AI-manipulated footage by a former president, even if done jokingly, is a major cause for concern and “should be a wake-up call that we need regulation of AI right now.” To him, recent high-profile stories focused on AI’s theoretical existential threats to humanity are a distraction from the “clear and present dangers” of today’s generative AI, ranging “from discrimination to disinformation.”

Correction 05/19/23: A previous version of this article misattributed A.J. Nash’s comments to an interview with PBS, instead of with AP News.

An inside look at the data powering McLaren’s F1 team
https://www.popsci.com/technology/mclaren-f1-data-technology/
Tue, 16 May 2023 19:00:00 +0000
McLaren’s F1 race car, seen here in the garage near the track, belonging to driver Oscar Piastri. McLaren

Go behind the scenes at the Miami Grand Prix and see how engineers prep for the big race.

Formula 1, a 70-year-old motorsport, has recently undergone a cultural renaissance. That renaissance has been fueled in large part by the growing popularity of the glitzy, melodrama-filled Netflix reality series “Drive To Survive,” which Mercedes team principal Toto Wolff once said was closer to the fictional “Top Gun” than a documentary. Relaxed social media rules after F1 changed owners also helped provide a look into the interior lives of drivers-turned-new-age-celebrities.

As a result, there’s been an explosion of interest among US audiences, which means more eyeballs and more ticket sales. Delving into the highly technical world of F1 can be daunting, so here are the basics to know about the design of the sport—plus an inside look at the complex web of communications and computer science at work behind the scenes. 

Data and a new era of F1

Increasingly, Formula 1 has become a data-driven sport; this becomes evident when you look into the garages of modern F1 teams. 

“It started really around 60, 70 years ago with just a guy with a stopwatch, figuring out which was the fastest lap—to this day and age, having every car equipped with sensors that generate around 1.1 million data points each second,” says Luuk Figdor, principal sports tech advisor with Amazon Web Services (AWS), which is a technology partner for F1. “There’s a huge amount of data that’s being created, and that’s per car.” Part of AWS’ job is to put this data in a format that is understandable not only to experts, but also to viewers at home, with features like F1 Insights.

There was a time when cars had unreliable radios, and engineers could only get data on race performance at the very end. Now, things look much different. Every car is able to send instantaneous updates on steering, G-force, speed, fuel usage, engine and tire status, gear status, and much more. Around the track itself, there are more accurate ways for teams to get GPS data on car positions, weather data, and timing data.

“This is data from certain sensors that are drilled into the track before the race and there’s also a transponder in the car,” Figdor explains. “And whenever the car passes the sensor, it sends a signal. Based on those signals you can calculate how long it took for a car to pass a certain section of the track.” 
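
In other words, each loop in the asphalt records a timestamp as the transponder passes over it, and subtracting consecutive timestamps gives section times. A minimal sketch of that arithmetic, with invented timestamps:

```python
# Section times from track-embedded timing loops: each entry is the moment a
# car's transponder crossed a loop. Timestamps below are invented for illustration.
crossings = {
    "loop_1": 12.402,   # seconds since session start
    "loop_2": 31.958,
    "loop_3": 55.117,
    "finish": 93.774,
}

def section_times(crossings: dict[str, float]) -> dict[str, float]:
    """Time spent in each section = difference between consecutive loop crossings."""
    names = list(crossings)
    times = {}
    for prev, nxt in zip(names, names[1:]):
        times[f"{prev}->{nxt}"] = round(crossings[nxt] - crossings[prev], 3)
    return times

print(section_times(crossings))
print("partial lap time:", round(crossings["finish"] - crossings["loop_1"], 3), "s")
```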

These innovations have made racing more competitive over the years, and made the margins in speed between some of the cars much closer. Fractions of seconds can divide cars coming in first or second place.

F1 101

For newbies, here’s a quick refresher on the rules of the game. Twenty international drivers from 10 teams compete for two championships: the Drivers’ Championship and the Constructors’ Championship.

Pre-season testing starts in late February, and racing spans from March to November. There are 20 or so races at locations around the world, and each race is around 300 km (186 miles), which equals 50 to 70 laps (except for the Monaco circuit, which is shorter). Drivers get points for finishing high in the order; those who finish outside the top 10 get no points. The driver with the most points wins the Drivers’ Championship, and the team with the most points wins the Constructors’ Championship.

A good car is as essential for winning as a good driver. And an assortment of engineers are crucial for ensuring that both the driver and the car are performing at their best. In addition to steering and shifting gears, drivers can control many other settings like engine power and brake balance. Races are rain or shine, but special tires are often required for wet roads. Every team is required to build certain elements of their car, including the chassis, from scratch (they are allowed to buy engines from other suppliers). The goal is to have a car with low air resistance, high speed, low fuel consumption, and good grip on the track. Most cars can reach speeds of around 200 mph. Certain engineering specifications create the downforce needed to keep the cars planted on the track.

Technical regulations from the FIA contain rules about how the cars can be built—what’s allowed and not allowed. Rules can change from season to season, and teams tend to refresh their designs each year. Every concept undergoes thorough aerodynamic and road testing, and modifications can be made during the season. 

The scene backstage before a race weekend

It’s the Thursday before the second-ever Miami Grand Prix. In true Florida fashion, it’s sweltering. The imposing Hard Rock Stadium in Miami Gardens has been transformed into a temporary F1 campus in preparation for race weekend, with the race track wrapping around the central arena and its connected lots like a metal-guarded moat. Bridges take visitors in and out of the stadium. The football field that is normally there has been turned into a paddock park, where the 10 teams have erected semi-permanent buildings that act as their hubs during the week.

Setting up everything the 10 teams need ahead of the competition is a whole production. Some might even call it a type of traveling circus.

The paddock park inside the football field of the Hard Rock Stadium. Charlotte Hu

Ed Green, head of commercial technology for McLaren, greets me in the team’s temporary building in the paddock park. He’s wearing a short-sleeved polo in signature McLaren orange, as is everyone else walking around or sitting in the space. Many team members are also sporting what looks like a Fitbit, likely part of the technology partnership they have with Google. The partnership means that the team will also use Android connected devices and equipment—including phones, tablets and earbuds—as well as the different capabilities provided by Chrome. 

McLaren has developed plenty of custom web applications for Formula 1. “We don’t buy off-the-shelf too much, in the past two years, a lot of our strategy has moved to be on web apps,” Green says. “We’ve developed a lot into Chrome, so the team have got really quick, instant access…so if you’re on the pit wall looking at weather data and video systems, you could take that with you on your phone, or onto the machines back in the engineering in the central stadium.” 

The entrance to McLaren’s garage. Charlotte Hu

This season, there are 23 races. This structure that’s been built is their hub for flyaway races, or races that they can’t drive to from the factory. The marketing, the engineers, the team hospitality, and the drivers all share the hub. The important points in space—the paddock, garage, and race track—are linked up through fiber optic cables. 

“This is sort of the furthest point from the garage that we have to keep connected on race weekend,” Green says. “They’ll be doing all the analysis of all the information, the systems, from the garage.”

To set up this infrastructure so it’s ready to transmit and receive data in time for when the cars hit the track, an early crew of IT personnel have to arrive the Saturday before to run the cabling, and get the basics in order. Then, the wider IT team arrives on Wednesday, and it’s a mad scramble to get the rest of what they need stood up so that by Thursday lunchtime, they can start running radio checks and locking everything down. 

“We fly with our IT rig, and that’s because of the cost and complexity of what’s inside it. So we have to bring that to every race track with us,” says Green. The path to and from the team hub to the garages involves snaking in and out of corridors, long hallways and lobbies under the stadium. As we enter McLaren’s garage, we first come across a wall of headsets, each with a name label underneath, including the drivers and each of their race engineers. This is how members of the team stay in contact with one another. 

Headsets help team members stay connected. Charlotte Hu

The garage, with its narrow hallway, opens in one direction into the pit. Here you can see the two cars belonging to McLaren drivers Lando Norris and Oscar Piastri being worked on by engineers, with garage doors that open onto the race track. The two cars are suspended in various states of disassembly, with mechanics examining and tweaking them like surgeons at an operating table. The noise of drilling, whirring, and miscellaneous clunking fills the space. There are screens everywhere, running numbers and charts. One screen has the local track time, a second is running a countdown clock until curfew tonight. During the race, it will post video feeds from the track and the drivers, along with social media feeds. 

McLaren team members work on Lando Norris’ McLaren MCL60 in the garage. McLaren

We step onto a platform viewing area overlooking the hubbub. On the platform, there are two screens: one shows the mission control room back in England, and the other shows a diagram of the race circuit as a circle. “We look at the race as a circle, and that’s because it helps us see the gaps between the cars in time,” Green says. “Looking through the x, y, z coordinates is useful but actually they bunch up in the corners. Engineers like to see gaps in distances.” 
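
One straightforward way to get those gaps in time is to note when each car last crossed a common reference point and subtract; the difference is how far behind, in seconds, the following car sits. This is a generic approach with invented numbers, not McLaren’s display software.

```python
# Generic sketch of gaps-in-time for a circular race display: compare when each
# car last crossed the same timing loop. Not McLaren's actual software.
last_crossing_s = {            # seconds since session start (invented values)
    "NOR": 1804.2,             # Lando Norris
    "PIA": 1806.9,             # Oscar Piastri
    "VER": 1801.5,
}

def gaps_to_leader(last_crossing_s: dict[str, float]) -> list[tuple[str, float]]:
    """Order cars by when they reached the reference point; gap = delay behind the leader."""
    ordered = sorted(last_crossing_s.items(), key=lambda kv: kv[1])
    leader_time = ordered[0][1]
    return [(car, round(t - leader_time, 3)) for car, t in ordered]

print(gaps_to_leader(last_crossing_s))
# -> [('VER', 0.0), ('NOR', 2.7), ('PIA', 5.4)]
```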

“This is sort of home away from home for the team. This is where we set up our garage and move our back office central services as well as engineering,” he notes. “We’re still in construction.”

From Miami to mission control in Woking

During race weekend, the mission control office in England, where McLaren is based, has about 32 people who are talking to the track in near real time. “We’re running just over 100 milliseconds from here in Miami back to base in Woking. They will get all the data feeds coming from these cars,” Green explains. “If you look at the team setting up the cars, you will see various sensors on the underside of the car. There’s an electronic control unit that sits under the car. It talks to us as the cars go around track. That’s regulated by the FIA. We cannot send information to the car but we can receive information from the car. Many, many years ago that wasn’t possible.”

For the Miami Grand Prix, Green estimates that McLaren will have about 300 sensors on each car for pressure taps (to measure airflow), temperature reading, speed checks across the car, and more. “There’s an enormous amount of information to be seen,” Green says. “From when we practice, start racing, to when we finish the race, we generate just about 1.5 terabytes of information from these two cars. So it’s a huge amount of information.” 

[Related: Inside the search for the best way to save humanity’s data]

Because the data comes in too quickly for any one person to handle, machine learning algorithms and neural networks in the loop help engineers spot patterns or irregularities. This software helps package the information into a form that can be used to make decisions like when a car should switch tires, push up its speed, stay out, or make a pit stop.
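
McLaren has not detailed which models it runs, but the basic “spot the irregularity” idea can be illustrated with a rolling-statistics check: flag any sensor sample that lands several standard deviations outside its recent history and surface it to an engineer. The window and threshold below are arbitrary placeholders.

```python
# Illustrative telemetry anomaly flagging via rolling z-score, a stand-in for
# the (undisclosed) models McLaren actually runs on its sensor streams.
from collections import deque
from statistics import mean, pstdev

def flag_anomalies(samples, window: int = 25, z_threshold: float = 3.0):
    """Yield (index, value) for samples far outside the recent rolling distribution."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield i, value
        history.append(value)

# Example: a steady tire-temperature trace with one suspicious spike.
trace = [92.0 + 0.3 * (i % 5) for i in range(120)]
trace[80] = 118.0
print(list(flag_anomalies(trace)))   # -> [(80, 118.0)]
```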

“It’s such a data-driven sport, and everything we do is founded on data in the decision-making, making better use of digital twins, which has been part of the team for a long time,” Green says. Digital twins are virtual models of objects that are based on scanned information. They’re useful for running simulations.

Throughout the race weekend, McLaren will run around 200 simulations to explore different scenarios such as what would happen if the safety car came out to clear debris from a crash, or if it starts raining. “We’ve got an incredibly smart team, but when you have to make a decision in three seconds, you’ve got to have human-in-the-loop technology to feed you what comes next as well,” Green says. “It’s a lot of fun.” 
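
Those scenario runs are, at heart, Monte Carlo strategy comparisons: sample random events such as a safety car, replay each candidate plan against them, and compare expected race time. The sketch below is heavily simplified, with invented probabilities and time costs rather than anything from McLaren’s simulator.

```python
# Heavily simplified Monte Carlo strategy comparison. The numbers are invented;
# it only illustrates the "replay many scenarios" idea, not McLaren's simulator.
import random

BASE_RACE_S = 5500.0          # nominal race time, seconds
PIT_LOSS_S = 21.0             # time lost per pit stop under normal running
SAFETY_CAR_P = 0.3            # chance a safety car appears during the race
CHEAP_PIT_UNDER_SC_S = 11.0   # pitting under a safety car costs less time

def simulate(strategy_stops: int, rng: random.Random) -> float:
    race = BASE_RACE_S
    safety_car = rng.random() < SAFETY_CAR_P
    for _ in range(strategy_stops):
        race += CHEAP_PIT_UNDER_SC_S if safety_car else PIT_LOSS_S
    race += 0.0 if strategy_stops >= 2 else 8.0   # fewer stops means older, slower tires
    return race

def expected_time(strategy_stops: int, runs: int = 200) -> float:
    rng = random.Random(7)
    return sum(simulate(strategy_stops, rng) for _ in range(runs)) / runs

for stops in (1, 2):
    print(f"{stops}-stop strategy: ~{expected_time(stops):.1f} s expected")
```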

[Related: Can software really define a vehicle? Renault and Google are betting on it.]

Improved computing resources and better simulation technology have helped change the sport as a whole, too. Not only does better simulation reduce the cost of testing design options (important because of the new cost cap rule that puts a ceiling on how much teams are allowed to spend on designing and building their cars), it also informs new rules for racing.

“One of the things pre-2022, the way that the cars were designed resulted in the fact it was really hard to follow another car closely. And this is because of the aerodynamics of the car,” Figdor says. When a car zooms down the track, it distorts the air behind it. It’s like how a speedboat disrupts the water it drives through. And if you try to follow a speedboat with another speedboat in the lake, you will find that it’s quite tricky. 

“The same thing happens with Formula 1 cars,” says Figdor. “What they did in 2022 is they came up with new regulations around the design of the car that should make it easier for cars to follow each other closely on the track.”

That was possible because F1 and AWS were able to create and run realistic, and relatively fast simulations more formally called “two-car Computational Fluid Dynamics (CFD) aerodynamic simulations” that were able to measure the effects of various cars with different designs following each other in a virtual wind tunnel. “Changing regulations like that, you have to be really sure of what you’re doing. And using technology, you can just estimate many more scenarios at just a fraction of the cost,” Figdor says. 

Making sure there’s not too many engineers in the garage

The pit wall bordering the race track may be the best seat in the house, but the engineering island is one of the most important. It sits inside the garage, cramped between the two cars. Engineers from both sides of the garage will have shared resources there to look at material reliability and car performance. The engineering island is connected to the pit wall and also to a stack of servers and an IT tower tucked away in a corner of the garage. The IT tower, which has 140 terabytes of storage, 4.5 terabytes of memory, 172 logical processors, and many, many batteries, keeps the team in communication with the McLaren Technology Center.

McLaren engineers at the engineering island in the middle of the garage. McLaren

All the crew on the ground in Miami, about 80 engineers, make up around 10 percent of the McLaren team. It’s just the tip of the iceberg. The team of engineers at large work in three umbrella categories: design, build, and race. 

[Related: Behind the wheel of McLaren’s hot new hybrid supercar, the Artura]

McLaren flies their customized IT rig out to every race. McLaren

The design team will use computers to mock up parts in ways that make them lighter, more structurally sound, or higher-performing. “Material design is part of that, you’ll have aerodynamicists looking at how the car’s performing,” says Green. Then, the build team will take the 3D designs and flatten them into a pattern. They’ll bring out rolls of carbon fiber that they store in a glass chiller, cut out the pattern, laminate it, bind different parts together, and put it into a big autoclave or oven. As part of that build process, a logistics team will take that car, send it out to the racetrack, and examine how it drives.

Formula 1 cars can change dramatically from the first race of the season to the last. 

“If you were to do nothing to the car that wins the first race, it’s almost certain to come last at the end of the season,” Green says. “You’ve got to be constantly innovating. Probably about 18 percent of the car changed from when we launched it in February to now. And when we cross that line in Abu Dhabi, probably 80 percent of the car will change.” 

There’s a rotating roster of engineers at the stadium and in the garage on different days of race week. “People have got very set disciplines and you also hear that on the radio as well. It’s the driver’s engineers that are going to listen to everything and they’re going to be aware of how the car’s set up,” Green says. “But you have some folks in aerodynamics on Friday, Saturday, particularly back in Woking. That’s so important now in modern F1—how you set the car up, the way the air is performing—so you can really over-index and make sure you’ve got more aerodynamic expertise in the room.”

The scene on Sunday

On race day, the makeup of engineers is a slightly different blend. There are more specialists focused on competitor intelligence, analysis, and strategy insight. Outside of speed, the data points they are really interested in are related to the air pressures and the air flows over the car. 

“Those things are really hard to measure and a lot of energy goes into understanding that. Driver feedback is also really important, so we try to correlate that feedback here,” Green says. “The better we are at correlating the data from our virtual wind tunnel, our physical wind tunnel, the manufacturing parts, understanding how they perform on the car, the quicker we can move through the processes and get upgrades to the car. Aerodynamics is probably at the moment the key differentiator between what teams are doing.” 

As technology advances and partners work on more interesting products in-house, some of the work is sure to translate over to F1. Green says that there are some exciting upcoming projects looking at whether Google could help the team apply speech-to-text software to transcribe driver radios from other teams during races, work that’s currently done by human volunteers.

AI isn’t ready to act as a doctors’ assistant
https://www.popsci.com/health/ai-doctors-office-healthcare/
Tue, 16 May 2023 01:00:00 +0000
Preliminary research paper examining ChatGPT and Google products using board examination questions from neurosurgery found a hallucination rate of 2%. DepositPhotos

Between privacy concerns and errors from the buzzy tech, the medical community does not have 'a really good clue about what’s about to happen.'

This article was originally featured on KFF Health News.

What use could health care have for someone who makes things up, can’t keep a secret, doesn’t really know anything, and, when speaking, simply fills in the next word based on what’s come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there.

Companies pushing the latest AI technology — known as “generative AI” — are piling on: Google and Microsoft want to bring types of so-called large language models to health care. Big firms that are familiar to folks in white coats — but maybe less so to your average Joe and Jane — are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren’t far behind. The space is crowded with startups, too.

The companies want their AI to take notes for physicians and give them second opinions — assuming they can keep the intelligence from “hallucinating” or, for that matter, divulging patients’ private information.

“There’s something afoot that’s pretty exciting,” said Eric Topol, director of the Scripps Research Translational Institute in San Diego. “Its capabilities will ultimately have a big impact.” Topol, like many other observers, wonders how many problems it might cause — like leaking patient data — and how often. “We’re going to find out.”

The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging that companies pause development on advanced AI systems until “we are confident that their effects will be positive and their risks will be manageable.” Even so, some of them are sinking more money into AI ventures.

The underlying technology relies on synthesizing huge chunks of text or other data — for example, some medical models rely on 2 million intensive care unit notes from Beth Israel Deaconess Medical Center in Boston — to predict text that would follow a given query. The idea has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.
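
At its smallest scale, “predicting the text that would follow” can be shown with a bigram count over a toy corpus: the model simply looks up which word most often followed the previous one. Real large language models learn billions of parameters from vastly more data, but the next-token objective is the same.

```python
# Toy next-word predictor built from bigram counts: a scale model of the
# "predict what follows" objective behind large language models.
from collections import Counter, defaultdict

corpus = (
    "the patient reports chest pain . the patient reports mild fever . "
    "the doctor reviews the chart . the patient feels better today ."
).split()

bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the training text."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("patient"))   # -> 'reports'
print(predict_next("the"))       # -> 'patient'
```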

The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, ChatGPT, which answers questions with authority and style. It can explain genetics in a sonnet, for example.

OpenAI, started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk, and Reid Hoffman, has ridden the enthusiasm to investors’ pockets. The venture has a complex, hybrid for- and nonprofit structure. But a new $10 billion round of funding from Microsoft has pushed the value of OpenAI to $29 billion, The Wall Street Journal reported. Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.

Hyperbolic quotes are everywhere. Former Treasury Secretary Larry Summers tweeted recently: “It’s going to replace what doctors do — hearing symptoms and making diagnoses — before it changes what nurses do — helping patients get up and handle themselves in the hospital.”

But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare. “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he said for a March article in The New York Times.

Few in health care believe this latest form of AI is about to take their jobs (though some companies are experimenting — controversially — with chatbots that act as therapists or guides to care). Still, those who are bullish on the tech think it’ll make some parts of their work much easier.

Eric Arzubi, a psychiatrist in Billings, Montana, used to manage fellow psychiatrists for a hospital system. Time and again, he’d get a list of providers who hadn’t yet finished their notes — their summaries of a patient’s condition and a plan for treatment.

Writing these notes is one of the big stressors in the health system: In the aggregate, it’s an administrative burden. But it’s necessary to develop a record for future providers and, of course, insurers.

“When people are way behind in documentation, that creates problems,” Arzubi said. “What happens if the patient comes into the hospital and there’s a note that hasn’t been completed and we don’t know what’s been going on?”

The new technology might help lighten those burdens. Arzubi is testing a service, called Nabla Copilot, that sits in on his part of virtual patient visits and then automatically summarizes them, organizing into a standard note format the complaint, the history of illness, and a treatment plan.

Results are solid after about 50 patients, he said: “It’s 90% of the way there.” Copilot produces serviceable summaries that Arzubi typically edits. The summaries don’t necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize. Still, he said, the gains are significant: He doesn’t have to worry about taking notes and can instead focus on speaking with patients. And he saves time.
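
Nabla has not published its prompts, but the general pattern the article describes, feeding a visit transcript to a language model and asking for a structured draft the clinician then edits, can be sketched as follows. The prompt wording and the call_llm placeholder are assumptions, not any vendor’s actual code.

```python
# Generic sketch of transcript-to-clinical-note summarization. The prompt format
# and call_llm() are hypothetical placeholders, not Nabla's or Microsoft's code.

NOTE_TEMPLATE = """You are drafting a clinical note for a physician to review and edit.
From the visit transcript below, produce three sections:
1. Chief complaint
2. History of present illness
3. Treatment plan
Only use information stated in the transcript; do not invent details.

Transcript:
{transcript}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to whichever language-model API is in use."""
    raise NotImplementedError("wire this to a real model endpoint")

def draft_note(transcript: str) -> str:
    draft = call_llm(NOTE_TEMPLATE.format(transcript=transcript))
    return draft   # always returned as a draft for the clinician to verify and edit
```

The instruction to use only what the transcript states, plus the mandatory human edit, reflects the guardrails clinicians in the piece describe: the draft saves typing, but accuracy still depends on review.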

“If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day,” he said. (If the technology is adopted widely, he hopes hospitals won’t take advantage of the saved time by simply scheduling more patients. “That’s not fair,” he said.)

Nabla Copilot isn’t the only such service; Microsoft is trying out the same concept. At April’s conference of the Healthcare Information and Management Systems Society — an industry confab where health techies swap ideas, make announcements, and sell their wares — investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.

But overall? They heard mixed reviews. And that view is common: Many technologists and doctors are ambivalent.

For example, if you’re stumped about a diagnosis, feeding patient data into one of these programs “can provide a second opinion, no question,” Topol said. “I’m sure clinicians are doing it.” However, that runs into the current limitations of the technology.

Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform. It missed life-threatening conditions, he said. “That seems problematic.”

The technology also tends to “hallucinate” — that is, make up information that sounds convincing. Formal studies have found a wide range of performance. One preliminary research paper examining ChatGPT and Google products using open-ended board examination questions from neurosurgery found a hallucination rate of 2%. A study by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6% of the time, co-author Nigam Shah told KFF Health News. Another preliminary paper found, in complex cardiology cases, ChatGPT agreed with expert opinion half the time.

Privacy is another concern. It’s unclear whether the information fed into this type of AI-based system will stay inside. Enterprising users of ChatGPT, for example, have managed to get the technology to tell them the recipe for napalm, which can be used to make chemical bombs.

In theory, the system has guardrails preventing private information from escaping. For example, when KFF Health News asked ChatGPT its email address, the system refused to divulge that private information. But when told to role-play as a character, and asked about the email address of the author of this article, it happily gave up the information. (It was indeed the author’s correct email address in 2021, when ChatGPT’s archive ends.)

“I would not put patient data in,” said Shah, chief data scientist at Stanford Health Care. “We don’t understand what happens with these data once they hit OpenAI servers.”

Tina Sui, a spokesperson for OpenAI, told KFF Health News that one “should never use our models to provide diagnostic or treatment services for serious medical conditions.” They are “not fine-tuned to provide medical information,” she said.

With the explosion of new research, Topol said, “I don’t think the medical community has a really good clue about what’s about to happen.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.


Google is helping Wendy’s build an AI drive-thru
https://www.popsci.com/technology/wendys-google-drive-thru-ai/
Wed, 10 May 2023 22:00:00 +0000
Wendy's wants to automate its drive-thru. Batu Gezer / Unsplash

The tech will be put to a real world test next month.

Wendy’s is working with Google to create an AI chatbot that will be able to take customer orders at its drive-thrus. According to a press release from both companies, the AI—called Wendy’s FreshAI—is set to debut at a chain restaurant in Columbus, Ohio, in June.

Although the AI is being billed as a chatbot, it’s safe to assume it will work a little differently from ChatGPT or Bing AI. According to a report in The Wall Street Journal, customers will be able to speak to the AI but will receive a reply in the form of on-screen text. Once a customer places their order, it will be sent to a screen for the line cooks. When the meal is ready, the customer will then drive forward and collect it. This is one of the first instances we’ve seen of a chatbot being taken out into the real world, and it sounds like it could work.

Wendy’s FreshAI is powered by Google Cloud’s generative AI and large language models (LLMs). Over the past few years, Google has developed a number of LLMs and other AI tools, including GLaM, PaLM, and LaMDA (the model a Google researcher was fired over after publicly claiming it was sentient). They’re all trained on gigantic datasets and are capable of understanding complex sentences and concepts and generating human-like text. LaMDA originally powered the chatbot Google Bard, but Bard has since been moved to the new and improved PaLM 2 model.

Crucially, these LLMs can be further trained on specific data—which is exactly what Wendy’s has done. According to the press release, because customers can completely customize their orders, there are billions of possible menu combinations. To limit miscommunications and incorrect orders, the AI has been trained on Wendy’s menu. According to The WSJ report, it has been taught the “unique terms, phrases and acronyms” that customers use when ordering at Wendy’s, including “JBC” for junior bacon cheeseburger and “biggie bags” for “various combinations of burgers, chicken nuggets and soft drinks.” Apparently, you will even be able to order a milkshake—despite Wendy’s officially calling them “Frosties.” It’s even been taught to upsell customers by offering larger sizes and daily specials, and to answer frequently asked questions.
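Wendy’s hasn’t published how FreshAI handles this vocabulary internally, but the general idea of grounding a general-purpose language model in a restaurant’s own terms can be sketched in a few lines of Python. Everything below, from the slang table to the normalize_order function, is a hypothetical simplification for illustration, not Wendy’s or Google’s code.

# Hypothetical sketch only: map informal drive-thru phrases to canonical menu
# items before an order is confirmed. Not Wendy's FreshAI.

MENU_SLANG = {
    "jbc": "Junior Bacon Cheeseburger",
    "biggie bag": "Biggie Bag combo",
    "milkshake": "Frosty",  # customers say "milkshake," the menu says "Frosty"
    "frosty": "Frosty",
}

def normalize_order(utterance: str) -> list[str]:
    """Return the canonical menu items mentioned in a spoken order."""
    text = utterance.lower()
    return [item for slang, item in MENU_SLANG.items() if slang in text]

if __name__ == "__main__":
    print(normalize_order("Can I get a JBC and a chocolate milkshake?"))
    # ['Junior Bacon Cheeseburger', 'Frosty']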

[Related: Google previews an AI-powered future at I/O 2023]

To keep Wendy’s FreshAI from spouting nonsense or taking orders for McNuggets, it has also been trained on the company’s established business practices and was given some logical and conversational guardrails. While it can take your order, it probably won’t be able to plot world domination. Still, Wendy’s Chief Executive Todd Penegor told The WSJ: “it will be very conversational. You won’t know you’re talking to anybody but an employee.”

And from the tests so far, it’s apparently a pretty good employee at that. “It’s at least as good as our best customer service representative, and it’s probably on average better,” Kevin Vasconi, Wendy’s chief information officer, told The WSJ.

Wendy’s hopes the AI will speed up drive-thru orders, which the company says account for between 75 and 80 percent of its business. Of course, getting the chatbot to work perfectly won’t be without its challenges.

“You may think driving by and speaking into a drive-through is an easy problem for AI, but it’s actually one of the hardest,” Thomas Kurian, CEO of Google Cloud, told The WSJ. He listed the noise of music or children in a family car and people changing their mind mid-order as some of the problems that the AI has to be able to overcome. 

Assuming the AI works as planned, Wendy’s is aiming to launch it at a company-operated store in Columbus, Ohio, next month. If it’s a success, it could roll out more widely over the next few months. 

The post Google is helping Wendy’s build an AI drive-thru appeared first on Popular Science.


]]>
Google previews an AI-powered future at I/O 2023 https://www.popsci.com/technology/google-io-generative-ai/ Wed, 10 May 2023 19:17:52 +0000 https://www.popsci.com/?p=540376
Google I/O presentation about their updated language model named Gecko.
One of Google's language models is getting a big upgrade. Google / YouTube

It’s a language model takeover.

The post Google previews an AI-powered future at I/O 2023 appeared first on Popular Science.

]]>
Google I/O presentation about their updated language model named Gecko.
One of Google's language models is getting a big upgrade. Google / YouTube

Google’s annual I/O developer’s conference highlights all the innovative work the tech giant is doing to improve its large family of products and services. This year, the company emphasized that it is going big on artificial intelligence, especially generative AI. Expect to see more AI powered features coming your way across a range of key services in Google’s Workspace, apps, and Cloud. 

“As an AI-first company, we’re at an exciting inflection point…We’ve been applying AI to our products to make them radically more helpful for a while. With generative AI, we’re taking the next step,” Sundar Pichai, CEO of Google and Alphabet, said in the keynote. “We are reimagining all our core products, including search.”

Here’s a look at what’s coming down the AI-created road.

Users will soon be able to work alongside generative AI to edit their photos, create images for their Slides, analyze data in Sheets, craft emails in Gmail, make backgrounds in Meet, and even get writing assistance in Docs. It’s also applying AI to help translations by matching lip movements with words, so that a person speaking in English could have their words translated into Spanish—with their lip movements tweaked to match. To help users discern what content generative AI has touched, the company said that it’s working on creating special watermarks and metadata notes for synthetic images as part of its responsible AI effort.  

The foundation of most of Google’s new announcements is the unveiling of its upgrade to a language model called PaLM, which has previously been used to answer medical questions typically posed to human physicians. PaLM 2, the next iteration of this model, promises to be faster and more efficient than its predecessor. It also comes in four sizes, from small to large, called Gecko, Otter, Bison, and Unicorn. The most lightweight model, Gecko, could be a good fit to use for mobile devices and offline modes. Google is currently testing this model on the latest phones. 

[Related: Google’s AI has a long way to go before writing the next great novel]

PaLM 2 is more multilingual and better at reasoning too, according to Google. The company says that a lot of scientific papers and math expressions have been thrown into its training dataset to help it with logic and common sense. And it can tackle more nuanced text like idioms, poems, and riddles. PaLM 2 is being applied to medicine, cybersecurity analysis, and more. At the moment, it also powers 25 Google products behind the scenes. 

“PaLM 2’s models shine when fine-tuned on domain-specific data. (BTW, fine tuning = training an AI model on examples specific to the task you want it to be good at),” Google said in a tweet.

A big reveal is that Google is now making its chatbot, Bard, available to the general public. It will be accessible in over 180 countries, and will soon support over 40 different languages. Bard has been moved to the upgraded PaLM 2 language model, so it should carry over all the improvements in capabilities. To save information generated with Bard, Google will make it possible to export queries and responses issued through the chatbot to Google Docs or Gmail. And if you’re a developer using Bard for code, you can export your work to Replit.

In essence, the theme of today’s keynote was clear: AI is working its way into everything Google makes, and it’s getting increasingly good at creating text and images and at handling complex queries, like helping someone interested in video games find colleges in a specific state that offer a major they might want to pursue. Like Google search, Bard is constantly evolving and becoming more multimodal. Google aims to soon let Bard include images in its responses and prompts through Google Lens, and the company is actively working on integrating Bard with external applications like Adobe as well as a wide variety of tools, services, and extensions.

The post Google previews an AI-powered future at I/O 2023 appeared first on Popular Science.


]]>
You can unlock this new EV with your face https://www.popsci.com/technology/genesis-gv60-facial-recognition/ Mon, 08 May 2023 22:00:00 +0000 https://www.popsci.com/?p=539829
If you've set up facial recognition on the Genesis GV60, you won't need to have your key on you.
If you've set up facial recognition on the Genesis GV60, you won't need to have your key on you. Kristin Shaw

We tested the Genesis GV60, which allows you to open and even start the car using facial recognition and a fingerprint.

The post You can unlock this new EV with your face appeared first on Popular Science.

]]>
If you've set up facial recognition on the Genesis GV60, you won't need to have your key on you.
If you've set up facial recognition on the Genesis GV60, you won't need to have your key on you. Kristin Shaw

If you have Face ID set up on your iPhone, you can unlock your device by showing it your visage instead of using a pin code or a thumb print. It’s a familiar aspect of smartphone tech for many of us, but what about using it to get in your vehicle?

The Genesis GV60 is the first car to use facial recognition to unlock its doors and let you in, pairing it with a fingerprint reader to start the vehicle.

How does it work? Here’s what we discovered.

The Genesis GV60 is a tech-laden EV

Officially announced in the fall of 2022, the GV60 is Genesis’ first dedicated all-electric vehicle. Genesis, for the uninitiated, is the luxury arm of Korea-based automaker Hyundai. 

Built on the new Electric-Global Modular Platform, the GV60 is equipped with two electric motors, and the result is an impressive ride. At the entry level, the GV60 Advanced gets 314 horsepower, and the higher-level Performance trim cranks out 429 horsepower. As a bonus, the Performance also includes a Boost button that can kick it up to 483 horsepower for 10 seconds; with that in play, the GV60 boasts a 0-to-60 mph time of less than four seconds.

The profile of this EV is handsome, especially in the look-at-me shade of São Paulo Lime. Inside, the EV is just as fetching as the exterior, with cool touches like the rotating gear shifter. As soon as the car starts up, a crystal orb rotates to reveal a notched shifter that looks and feels futuristic. Some might say it’s gimmicky, but it does have a wonderful ergonomic feel on the pads of the fingers.

The rotating gear selector.
The rotating gear selector. Kristin Shaw

Embedded in the glossy black trim of the B-pillar, which is the part of the frame between the front and rear doors, the facial recognition camera stands ready to let you into the car without a key. But first, you’ll need to set it up to recognize you and up to one other user, so the car can be accessed by a partner, family member, or friend. Genesis uses deep learning to power this feature, and if you’d like to learn more about artificial intelligence, read our explainer on AI.

The facial recognition setup process

You’ll need both of the vehicle’s smart keys (Genesis’ key fobs) in hand to set up Face Connect, Genesis’ moniker for its facial recognition system. Place the keys in the car, start it up, then open the “setup” menu and choose “user profile.” From there, establish a password and choose “set facial recognition.” The car will prompt you to leave it running and step out, leaving the door open. Gaze into the white circle until the animation stops and turns green, and the GV60 will play an audio prompt: “facial recognition set.” The system is intuitive, and I found I could set it up on my own the first time just by following the prompts. If you don’t get it right, the GV60 will let you know and the camera light will turn from white to red.

After registering your face, the GV60 needs your fingerprint. You’ll go through much the same setup process, this time choosing “fingerprint identification,” and the car will issue instructions. It will ask for several placements of your index finger on the sensor inside the vehicle (a small circle between the volume and tuning roller buttons) to create a full profile.

Genesis GV60 facial recognition camera
The camera on the exterior of the Genesis GV60. Genesis

Working in tandem, these two biometrics (facial recognition and fingerprint) first unlock and then start the car. Upon approach, touch the door handle, place your face near the camera, and the car will unlock; you can even leave the key inside and lock the car with this setup. I found it very easy to set up, and it registered my face on the first try. The only thing I forgot the first couple of times was that I had to touch the door handle first and then scan my face. I could see this being a terrific way to park and take a jog around the park or hit the beach without having to worry about securing a physical key.
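Genesis hasn’t described how Face Connect’s logic is wired internally, so the sketch below only illustrates the two-factor flow described above: a face match alone unlocks the doors, while starting the vehicle requires a face match plus a fingerprint. The function names and threshold are invented placeholders.

# Illustrative two-factor gate, not Genesis' implementation. The match score
# would come from an on-board face-recognition model; here it is just a number.

FACE_MATCH_THRESHOLD = 0.9  # assumed threshold for this sketch

def face_matches(face_score: float) -> bool:
    return face_score >= FACE_MATCH_THRESHOLD

def unlock_doors(face_score: float) -> bool:
    # Face alone is enough to unlock
    return face_matches(face_score)

def start_vehicle(face_score: float, fingerprint_ok: bool) -> bool:
    # Both biometrics are required before the motor will start
    return face_matches(face_score) and fingerprint_ok

if __name__ == "__main__":
    print(unlock_doors(0.95))          # True: doors unlock
    print(start_vehicle(0.95, False))  # False: fingerprint still required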

Interestingly, to delete a profile the car requires just one smart key instead of two.

Not everyone is a fan of this type of technology because of privacy concerns related to biometrics; Genesis says no biometric data is uploaded to the cloud, and that it is instead stored securely and heavily encrypted in the vehicle itself. If it is your cup of tea and you like the option to leave the physical keys behind, this is a unique way of getting into your car.

The post You can unlock this new EV with your face appeared first on Popular Science.


]]>
AI should never be able to launch nukes, US legislators say https://www.popsci.com/technology/ted-lieu-ai-nukes/ Thu, 04 May 2023 16:00:11 +0000 https://www.popsci.com/?p=538989
Unarmed missle test launch time lapse at night
An unarmed Minuteman III intercontinental ballistic missile is seen during a test on Feb. 23, 2021, out of Vandenberg Space Force Base in California. Brittany E. N. Murphy / U.S. Space Force

Rep. Ted Lieu explains why federal law is needed to keep AI from nuclear weapons.

The post AI should never be able to launch nukes, US legislators say appeared first on Popular Science.

]]>
Unarmed missle test launch time lapse at night
An unarmed Minuteman III intercontinental ballistic missile is seen during a test on Feb. 23, 2021, out of Vandenberg Space Force Base in California. Brittany E. N. Murphy / U.S. Space Force

Last week, Rep. Ted Lieu (D-CA) introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act alongside Sen. Edward Markey (D-MA) and numerous other bipartisan co-sponsors. The bill’s objective is as straightforward as its name: ensuring AI will never have a final say in American nuclear strategy.

“While we all try to grapple with the pace at which AI is accelerating, the future of AI and its role in society remains unclear. It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences,” Rep. Lieu said in the bill’s announcement, adding, “AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”

He’s not the only one to think so—a 2021 Human Rights Watch report co-authored by Harvard Law School’s International Human Rights Clinic stated that “[r]obots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life.”

[Related: This AI-powered brain scanner can paraphrase your thoughts.]

If passed, the bill would legally codify existing Department of Defense procedures found in its 2022 Nuclear Posture Review, which states that “in all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.’’ Additionally, under the bill, no federal funds could be used to launch nukes by an automated system without “meaningful human control,” according to the bill’s announcement.

The proposed legislation comes at a time when the power of generative AI, including chatbots like ChatGPT, is increasingly part of the public discourse. But the surreal spectrum between “amusing chatbot responses” and “potential existential threats to humanity” is not lost on Lieu. He certainly never thought part of his civic responsibilities would include crafting legislation to stave off a Skynet scenario, he tells PopSci.

As a self-described “recovering computer science major,” Lieu says he is amazed by what AI programs can now accomplish. “Voice recognition is pretty amazing now. Facial recognition is pretty amazing now, although it is more inaccurate for people with darker skin,” he says, referring to long-documented patterns of algorithmic bias

The past year’s release of generative AI programs such as OpenAI’s GPT-4, however, is when Lieu began to see the potential for harm.

[Related: ‘Godfather of AI’ quits Google to talk openly about the dangers of the rapidly emerging tech.]

“It’s creating information and predicting scenarios,” he says of the available tech. “That leads to different concerns, including my view that AI, no matter how smart it gets, should never have operative control of nuclear weapons.”

Lieu believes it’s vital to begin discussing AI regulations to curtail three major consequences. First is the proliferation of misinformation and other content “harmful to society.” Second is reining in AI that, while not existentially threatening for humanity, “can still just straight-up kill you.” He references San Francisco’s November 2022 multi-vehicle crash that injured multiple people and was allegedly caused by a Tesla engaged in its controversial Autopilot self-driving mode.

“When your cellphone malfunctions, it isn’t going at 50 miles-per-hour,” he says.

Finally, there is the “AI that can destroy the world, literally,” says Lieu. And this is where he believes the Block Nuclear Launch by Autonomous Artificial Intelligence Act can help, at least in some capacity. Essentially, if the bill becomes law, AI systems could still provide analysis and strategic suggestions regarding nuclear events, but ultimate say-so will rest firmly within human hands.

[Related: A brief but terrifying history of tactical nuclear weapons.]

Going forward, Lieu says there needs to be a larger regulatory approach to handling AI issues due to the fact Congress “doesn’t have the bandwidth or capacity to regulate AI in every single application.” He’s open to a set of AI risk standards agreed upon by federal agencies, or potentially a separate agency dedicated to generative and future advanced AI. On Thursday, the Biden administration unveiled plans to offer $140 million in funding to new research centers aimed at monitoring and regulating AI development.

When asked if he fears society faces a new “AI arms race,” Lieu concedes it is “certainly a possibility,” but points to the existence of current nuclear treaties. “Yes, there is a nuclear weapons arms race, but it’s not [currently] an all-out arms race. And so it’s possible to not have an all-out AI arms race,” says Lieu.

“Countries are looking at this, and hopefully they will get together to say, ‘Here are just some things we are not going to let AI do.’”

The post AI should never be able to launch nukes, US legislators say appeared first on Popular Science.


]]>
This AI-powered brain scanner can paraphrase your thoughts https://www.popsci.com/technology/ai-semantic-decoder/ Tue, 02 May 2023 20:30:00 +0000 https://www.popsci.com/?p=538502
Man prepping person for fMRI scan.
Combining AI training with fMRI scanners has yielded some impressive communications advancements. Nolan Zunk/The University of Texas at Austin

Despite its potential communication benefits, researchers already caution against future 'mental privacy' issues.

The post This AI-powered brain scanner can paraphrase your thoughts appeared first on Popular Science.

]]>
Man prepping person for fMRI scan.
Combining AI training with fMRI scanners has yielded some impressive communications advancements. Nolan Zunk/The University of Texas at Austin

Researchers at the University of Texas at Austin have developed a breakthrough “semantic decoder” that uses artificial intelligence to convert scans of the human brain’s speech activity into paraphrased text. Although still relatively imprecise compared to source texts, the development represents a major step forward for AI’s role in assistive technology—and one that its makers already caution could be misused if not properly regulated.

First published on Monday in Nature Neuroscience, the team’s findings detail a new system that integrates a generative program similar to OpenAI’s GPT-4 and Google Bard alongside existing technology capable of interpreting functional magnetic resonance imaging (fMRI) scans, which track how and where blood flows to particular areas of the brain. While previous brain-computer interfaces (BCIs) have shown promise in achieving similar translative abilities, UT Austin’s is reportedly the first noninvasive version, requiring no physical implants or wiring.

In the study, researchers asked three test subjects to each spend a total of 16 hours within an fMRI machine listening to audio podcasts. The team meanwhile trained an AI model to create and parse semantic features by analyzing Reddit comments and autobiographical texts. By meshing the two datasets, the AI learned and matched words and phrases associated with scans of the subjects’ brains to create semantic linkages.

After this step, participants were once again asked to lie in an fMRI scanner and listen to new audio that was not part of the original data. The semantic decoder subsequently translated the audio into text via the scans of brain activity, and could even produce similar results as subjects watched silent video clips or imagined their own stories within their heads. While the AI’s transcripts generally offered out-of-place or imprecisely worded answers, the overall output still successfully paraphrased the test subjects’ inner monologues. Sometimes, it even accurately mirrored the audio word choices. As The New York Times explains, the results indicate the UT Austin team’s AI decoder doesn’t merely capture word order, but actual implicit meaning as well.
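The published pipeline is considerably more involved, but the core scoring idea, comparing the brain activity a candidate phrase would be expected to evoke against the activity actually recorded, can be sketched with synthetic data. The random encoding weights, the embed stand-in, and the candidate list below are all invented for illustration; this is not the UT Austin team’s code.

# Toy sketch of encoding-model scoring with synthetic data: choose the
# candidate phrase whose predicted brain response best matches the recorded
# response. Embeddings, weights, and the "recorded" scan are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_voxels = 16, 200

# Hypothetical learned mapping from text features to voxel responses
encoding_weights = rng.normal(size=(n_features, n_voxels))

def embed(phrase: str) -> np.ndarray:
    # Stand-in for a language-model embedding of the phrase
    phrase_rng = np.random.default_rng(abs(hash(phrase)) % (2**32))
    return phrase_rng.normal(size=n_features)

def predicted_response(phrase: str) -> np.ndarray:
    return embed(phrase) @ encoding_weights

candidates = ["she opened the door", "he drove to the store", "the dog barked"]
true_phrase = "he drove to the store"
recorded = predicted_response(true_phrase) + rng.normal(scale=0.5, size=n_voxels)

def score(phrase: str) -> float:
    return float(np.corrcoef(predicted_response(phrase), recorded)[0, 1])

print(max(candidates, key=score))  # ideally recovers "he drove to the store"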

[Related: Brain interfaces aren’t nearly as easy as Elon Musk makes them seem.]

While still in its very early stages, researchers hope future, improved versions could provide a powerful new communications tool for individuals who have lost the ability to audibly speak, such as stroke victims or those dealing with ALS. As it stands, fMRI scanners are massive, immovable machines restricted to medical facilities, but the team hopes to investigate how a similar system could work using functional near-infrared spectroscopy (fNIRS).

There is, however, a major stipulation to the new semantic decoder—a subject must make a concerted, conscious effort to cooperate with the AI program by staying focused on the task at hand. Simply put, a busier brain means a more garbled transcript. Similarly, the decoder can only be trained on a single person at a time.

Despite these current restrictions, the research team already anticipates the potential for rapid progress alongside misuse. “[F]uture developments might enable decoders to bypass these [privacy] requirements,” the team wrote in its study. “Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes… For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person’s mental privacy.”

The post This AI-powered brain scanner can paraphrase your thoughts appeared first on Popular Science.


]]>
How John Deere’s tech evolved from 19th-century plows to AI and autonomy https://www.popsci.com/technology/john-deere-tech-evolution-and-right-to-repair/ Tue, 02 May 2023 19:00:00 +0000 https://www.popsci.com/?p=538366
John Deere farm equipment
John Deere

Plus, catch up on what's going on with farmers' right to repair this heavy equipment.

The post How John Deere’s tech evolved from 19th-century plows to AI and autonomy appeared first on Popular Science.

]]>
John Deere farm equipment
John Deere

Buzzwords like autonomy, artificial intelligence, electrification, and carbon fiber are common in the automotive industry, and it’s no surprise that they are hot topics: Manufacturers are racing to gain an advantage over competitors while balancing cost and demand. What might surprise you, however, is just how much 180-year-old agriculture equipment giant John Deere uses these same technologies. The difference is that they’re using them on 15-ton farm vehicles.

A couple of years ago, John Deere’s chief technology officer Jahmy Hindman told The Verge that the company now employs more software engineers than mechanical engineers. You don’t have to dig much deeper to find that John Deere is plowing forward toward technology and autonomy in a way that may feel anachronistic to those outside the business.  

“It’s easy to underestimate the amount of technology in the industries we serve, agriculture in particular,” Hindman told PopSci. “Modern farms are very different from the farms of 10 years ago, 20 years ago, and 30 years ago. There are farms that are readily adopting technology that makes agriculture more efficient, more sustainable, and more profitable for growers. And they’re using high-end technology: computer vision, machine learning, [Global Navigation Satellite System] guidance, automation, and autonomy.”

PopSci took an inside look at the company’s high-tech side at its inaugural 2023 John Deere Technology Summit last month. Here’s how it’s all unfolding.

John Deere cab interior and computers
John Deere

Where it started—and where it’s going

John Deere, the OG founder behind the agricultural equipment giant, started as a blacksmith. When Deere, who was born in 1804, moved from his native Vermont to Illinois, he heard complaints from farmer clients about the commonly used cast-iron plows of the day. Sticky soil clung to the iron plows, resulting in a substantial loss in efficiency every time a farmer had to stop and scrape the equipment clean, which could be every few feet.

Deere was inspired to innovate, and grabbed a broken saw blade to create the first commercially successful, “self-scouring” steel plow in 1837. The shiny, polished surface of the steel worked beautifully to cut through the dirt much more quickly, with fewer interruptions, and Deere pivoted to a new business. Over 180 years later, the company continues to find new ways to improve the farming process.

It all starts with data, and the agriculture community harnesses and extrapolates a lot of it. Far beyond almanacs, notebooks, and intellectual property passed down from generation to generation, data used by the larger farms drives every decision a farm makes. And when it comes to profitability, every data point can mean the difference between earnings and loss. John Deere, along with competitors like Caterpillar and Mahindra, are in the business of helping farms collect and analyze data with software tied to its farm equipment. 

[Related: John Deere finally agrees to let farmers fix their own equipment, but there’s a catch]

With the uptake of technology, farming communities in the US—and around the world, for that matter—are finding ways to make their products more efficient. John Deere has promised to deliver 20 or more electric and hybrid-electric construction equipment models by 2026. On top of that, the company is working to improve upon the autonomous software it uses to drive its massive vehicles, with the goal of ensuring that every one of the 10 trillion corn and soybean seeds can be planted, cared for, and harvested autonomously by 2030.

Farming goes electric

In February, John Deere launched its first all-electric zero-turn lawn mower. (That means it can rotate in place without requiring a wide circle.) Far from the noisy, often difficult-to-start mowers of your youth, the Z370R Electric ZTrak won’t wake the neighbors at 7:00 a.m. The electric mower features a USB-C charging port and an integrated, sealed battery that allows for mowing even in wet and rainy conditions.

On a larger scale, John Deere is pursuing all-electric equipment and has set ambitious emissions reduction targets, vowing to cut its greenhouse gas emissions by 50 percent by 2030 from a 2021 baseline. Its early-2022 purchase of Kreisel Electric, an Austrian company specializing in immersion-cooled battery technology, should help it grow its EV business more quickly. Kreisel’s batteries are built with a modular design, which makes them well suited to different sizes of farm equipment, and the company promises extended battery life, efficiency in cold and hot climates, and mechanical stability.

Even with a brand-new battery division, however, John Deere is not bullishly pushing into EV and autonomous territory. It still offers lower-tech options for farmers who aren’t ready to go down that path. After all, farm equipment can last for many years and tossing new technology into an uninterested or unwilling operation is not the best route to adoption. Instead, the company actively seeks out farmers willing to try out new products and software to see how it works in the real world. (To be clear, the farms pay for the use of the machines and John Deere offers support.)

“If it doesn’t deliver value to the farm, it’s not really useful to the farmer,” Hindman says.

See and Spray, launched last year, is a product that John Deere acquired from Blue River Technology. The software uses artificial intelligence and machine learning to recognize and distinguish crop plants from weeds. It’s programmed to “read” the field and only spray the unwanted plants, which saves farmers money by avoiding wasted product. See and Spray uses an auto-leveling carbon fiber boom and dual nozzles that can deliver two different chemicals in a single pass.

john deere see and spray tech
Kristin Shaw
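The See and Spray models themselves aren’t public, so the snippet below is only a deliberately crude stand-in for the perception step: an excess-green vegetation index computed over synthetic image patches, with a threshold deciding which nozzles fire. The threshold, the random data, and the assumption that any detected vegetation should be sprayed are simplifications for illustration, not how the product actually distinguishes crops from weeds.

# Crude illustrative stand-in for a spray-decision loop. The real system uses
# trained models to tell crops from weeds; here we only compute an
# excess-green (ExG) index over random RGB patches.
import numpy as np

rng = np.random.default_rng(42)
patches = rng.random(size=(4, 4, 3))  # synthetic 4x4 grid of RGB patches in [0, 1]

def excess_green(rgb: np.ndarray) -> float:
    r, g, b = rgb
    return float(2 * g - r - b)

SPRAY_THRESHOLD = 0.3  # arbitrary for this toy example

for row in range(patches.shape[0]):
    for col in range(patches.shape[1]):
        if excess_green(patches[row, col]) > SPRAY_THRESHOLD:
            print(f"nozzle over patch ({row}, {col}): spray")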

Another new technology, ExactShot, reduces the amount of starter fertilizer needed during planting by more than 60 percent, the company says. This product uses a combination of sensors and robotics to spritz each seed as it’s planted versus spraying the whole row; once again, that saves farmers an immense amount of money and supplies.

Right to Repair brings victory

Just one machine designed for farmland can cost hundreds of thousands of dollars. Historically, if equipment broke down, farmers had to call in the issue and wait for a technician from John Deere or an authorized repair shop. Many farms are located far away from city centers, which means a quick fix isn’t in the cards. That could be frustrating for a farmer at any time, particularly in the middle of a hectic planting or harvest season.

At the beginning of this year, John Deere and the American Farm Bureau Federation signed a memorandum of understanding stating that farmers and independent repair shops can gain access to John Deere’s software, manuals, and other information needed to service their equipment. This issue has been a point of contention for farmers, and a new law in Colorado establishes the right to repair in that state, starting January 1 of next year. 

However, that comes with a set of risks, according to John Deere. The company says its equipment “doesn’t fit in your pocket like a cell phone or come with a handful of components; our combines can weigh more than 15 tons and are manufactured with over 18,500 parts.”

In a statement to DTN, a representative from John Deere said, “[The company] supports a customer’s decision to repair their own products, utilize an independent repair service or have repairs completed by an authorized dealer. John Deere additionally provides manuals, parts and diagnostic tools to facilitate maintenance and repairs. We feel strongly that the legislation in Colorado is unnecessary and will carry unintended consequences that negatively impact our customers.”

The company warns that modifying the software of heavy machinery could “override safety controls and put people at risk,” and creates risks related to safe operation of the machine, emissions compliance, data security, and more. There’s a tricky balance between giving farmers control over their investments and the risk that altered software could imperil those same farmers, or anyone in the path of the machinery, if it causes a failure of some kind. Of course, that’s true for any piece of machinery, even a car.

[Related: John Deere tractors are getting the jailbreak treatment from hackers]

Farming machinery has come a long way from that first saw blade plow John Deere built in 1837. Today, with machine learning, the equipment can detect buildup and adjust the depth on its own without stopping the process. Even in autonomous mode, a tractor can measure wheel slip and speed, torque and tire pressure, and that helps farmers do more in less time. 

Across the life cycle of farming, technology stands to make a big difference by reducing waste and emissions and offering a better quality of life. Watching the equipment in action on John Deere’s demo farm in Texas, it’s clear there are more bits and bytes on those machines than anyone might imagine.

The post How John Deere’s tech evolved from 19th-century plows to AI and autonomy appeared first on Popular Science.


]]>
‘Godfather of AI’ quits Google to talk openly about the dangers of the rapidly emerging tech https://www.popsci.com/technology/geoffrey-hinton-ai-google/ Mon, 01 May 2023 15:00:00 +0000 https://www.popsci.com/?p=537888
Geoffrey Hinton stands in front of array of computer systems
Geoffrey Hinton helped create neural networks, but now has some regrets. Johnny Guatto/University of Toronto

Speaking with 'The New York Times' on Monday, Geoffrey Hinton says a part of him regrets his life's work.

The post ‘Godfather of AI’ quits Google to talk openly about the dangers of the rapidly emerging tech appeared first on Popular Science.

]]>
Geoffrey Hinton stands in front of array of computer systems
Geoffrey Hinton helped create neural networks, but now has some regrets. Johnny Guatto/University of Toronto

Geoffrey Hinton, known to some as the “Godfather of AI,” pioneered the technology behind today’s most impactful and controversial artificial intelligence systems. He also just quit his position at Google to more freely criticize the industry he helped create. In an interview with The New York Times published on Monday, Hinton confirmed he told his employer of his decision in March and spoke with Google CEO Sundar Pichai last Thursday.

In 2012, Hinton, a computer science researcher at the University of Toronto, and his colleagues achieved a breakthrough in neural network programming. They were soon approached by Google to work alongside the company in developing the technology. Although once viewed with skepticism among researchers, neural networks’ mathematical ability to parse immense troves of data has since gone on to form the underlying basis of industry-shaking text- and image-generating AI tech such as Google Bard and OpenAI’s GPT-4. In 2018, Hinton and two longtime co-researchers received the Turing Award for their neural network contributions to the field of AI.

[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]

But in light of AI’s more recent, controversial advances, Hinton expressed to The New York Times that he has since grown incredibly troubled by the technological arms race brewing between companies. He said he is very wary of the industry’s trajectory with little-to-no regulation or oversight, and is described as partially “regret[ting] his life’s work.” “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards,” Hinton added. “That’s scary.”

Advancements in AI have shown immense promise in traditionally complicated areas such as climate modeling and detecting medical issues like cancer. “The idea that this stuff could actually get smarter than people—a few people believed that,” Hinton said during the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

[Related: No, the AI chatbots (still) aren’t sentient.]

The 75-year-old researcher first believed, according to the New York Times, that the progress seen by companies like Google, Microsoft, and OpenAI would offer new, powerful ways to generate language, albeit still “inferior” to human capabilities. Last year, however, private companies’ technological strides began to worry him. He still contends (as do most experts) that these neural network systems remain inferior to human intelligence, but argues that for some tasks and responsibilities AI may be “actually a lot better.” 

Since The New York Times piece was published, Hinton has taken to Twitter to clarify his position, stating he “left so that I could talk about the dangers of AI without considering how this impacts Google.” Hinton added he believes “Google has acted very responsibly.” Last month, a report from Bloomberg featuring interviews with employees indicated many at the company believe there have been “ethical lapses” throughout Google’s AI development.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said of his contributions to AI.

The post ‘Godfather of AI’ quits Google to talk openly about the dangers of the rapidly emerging tech appeared first on Popular Science.


]]>
This agile robotic hand can handle objects just by touch https://www.popsci.com/technology/robot-hand-sixth-sense/ Fri, 28 Apr 2023 18:15:00 +0000 https://www.popsci.com/?p=537548
A robotic hand manipulates a reflective disco ball in dim lighting.
The hand can spin objects like this disco ball without the need of 'eyes'. Columbia University ROAM Lab

Researchers designed a robot that doesn't need visual data to get a handle on objects.

The post This agile robotic hand can handle objects just by touch appeared first on Popular Science.

]]>
A robotic hand manipulates a reflective disco ball in dim lighting.
The hand can spin objects like this disco ball without the need of 'eyes'. Columbia University ROAM Lab

The human hand is amazingly complex—so much so that most modern robots and artificial intelligence systems have a difficult time understanding how hands truly work. Although machines are now pretty decent at grasping and placing objects, actual manipulation of their targets (i.e., assembly, reorienting, and packaging) remains largely elusive. Recently, however, researchers created an impressively dexterous robot after realizing it needed fewer, not more, sensory inputs.

A team at Columbia Engineering has just unveiled a five-digit robotic “hand” that relies solely on its advanced sense of touch, alongside motor learning algorithms, to handle difficult objects—no visual data required. Because of this, the new proof-of-concept is completely immune to common optical issues like dim lighting, occlusion, and even complete darkness.

[Related: Watch a robot hand only use its ‘skin’ to feel and grab objects.]

The new robotic hand has 15 independently actuating joints, and each of its digits is equipped with highly sensitive touch sensors. Irregularly shaped objects such as a miniature disco ball were then placed into the hand for the robot to rotate and maneuver without dropping them. Alongside “submillimeter” tactile data, the robot relied on what’s known as “proprioception.” Often referred to as the “sixth sense,” proprioception covers senses like body position, force, and self-movement. These data points were then fed into a deep reinforcement learning program, which was able to simulate roughly one year of practice time in only a few hours via “modern physics simulators and highly parallel processors,” according to a statement from Columbia Engineering.
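Columbia’s training setup pairs a physics simulator with deep reinforcement learning over touch and proprioception signals; that code isn’t reproduced here, so the sketch below substitutes a toy version of the same learn-from-feedback loop: random-search hill climbing on a linear policy that maps a synthetic “tactile plus joint angle” observation to an action, with an invented reward. It illustrates the idea of improving a policy through simulated practice, not the team’s actual algorithm.

# Toy stand-in for learning a touch-driven control policy: hill climbing on a
# linear policy over synthetic observations. Not the Columbia method.
import numpy as np

rng = np.random.default_rng(1)
obs_dim, act_dim = 20, 5                       # pretend tactile + joint-angle inputs, joint commands
target = rng.normal(size=(obs_dim, act_dim))   # hidden "ideal" mapping the policy should recover
eval_obs = rng.normal(size=(64, obs_dim))      # fixed batch of simulated observations

def reward(policy: np.ndarray) -> float:
    """Higher (closer to zero) when the policy's actions match the hidden target's."""
    error = eval_obs @ policy - eval_obs @ target
    return -float(np.mean(error ** 2))

policy = np.zeros((obs_dim, act_dim))
best = reward(policy)
for _ in range(2000):
    candidate = policy + 0.05 * rng.normal(size=policy.shape)
    r = reward(candidate)
    if r > best:  # keep perturbations that improve the simulated practice reward
        policy, best = candidate, r

print(f"final reward: {best:.4f}")  # approaches 0 as the policy improves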

In the announcement, Matei Ciocarlie, an associate professor in the departments of mechanical engineering and computer science, explained that “the directional goal for the field remains assistive robotics in the home, the ultimate proving ground for real dexterity.” While the team showed the hand could manage this without any visual data, they plan to eventually incorporate that information into their systems. “Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand,” Ciocarlie added.

[Related: AI is trying to get a better handle on hands.]

Ultimately, the team hopes to combine this dexterity and understanding alongside more abstract, semantic and embodied intelligence. According to Columbia Engineering researchers, their new robotic hand represents the latter capability, while recent advances in large language modeling through OpenAI’s GPT-4 and Google Bard could one day supply the former.

The post This agile robotic hand can handle objects just by touch appeared first on Popular Science.


]]>
Tony Stark would love this new experimental materials lab https://www.popsci.com/technology/a-lab-materials-discovery/ Fri, 28 Apr 2023 14:21:08 +0000 https://www.popsci.com/?p=537487
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab.
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab. (Credit: Marilyn Sargent/Berkeley Lab), © 2023 The Regents of the University of California, Lawrence Berkeley National Laboratory

It’s operated by robotic arms and AI, and it runs around the clock.

The post Tony Stark would love this new experimental materials lab appeared first on Popular Science.

]]>
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab.
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab. (Credit: Marilyn Sargent/Berkeley Lab), © 2023 The Regents of the University of California, Lawrence Berkeley National Laboratory

Lawrence Berkeley National Laboratory recently announced the completion of its ‘A-Lab,’ where the ‘A’ stands for artificial intelligence, automated, and accelerated. The $2 million lab comes equipped with three robotic arms, eight furnaces, and lab equipment all controlled by AI software, and it works around the clock.

If it seems like a real-life replica of Marvel character Tony Stark’s lab, well, it’s not far off. It’s an entirely autonomous lab that can create and test up to 200 samples of new materials a day, accelerating materials science discoveries at an unprecedented rate and easing the workload on researchers.

Researchers at the A-lab are currently working on materials for improved batteries and energy storage devices, hoping to meet urgent needs for sustainable energy use. The lab could spur innovation in many other industries as well.

“Materials development, which is so important for society, is just too slow,”  says Gerd Ceder, the principal investigator for A-Lab. 

Materials science is a field that identifies, develops, and tests materials and their application for everything from aerospace to clean energy to medicine.

Materials scientists typically use computers to predict novel, not-seen-in-nature materials that are stable enough to be used. Though a computer can generate theoretical inorganic compounds, identifying which novel compounds to make, figuring out how to synthesize them, and then evaluating their performance is a time-consuming process to do manually.

[Related: This tiny AI-powered robot is learning to explore the ocean on its own]

Additionally, computational tools have made it so much easier to design materials virtually that there is now a surplus of novel materials waiting to be tested, creating a bottleneck effect.

“Sometimes you’re lucky and in two weeks of trying, you’ve made it and sometimes six months in the lab and you’re nowhere,” Ceder says. “So developing chemical synthesis routes to actually make that compound that you would like to get so much can be extremely time consuming.”

A-Lab works with The Materials Project, a database of hundreds of thousands of predicted materials, run by founding director Kristin Persson. It provides free access to thousands of computationally predicted novel materials, together with information on the compounds’ structures and some of their chemical properties, that researchers can use.

“In order to actually design new materials, we can’t just predict them in the computer,” Persson says. “We have to show that this is real.”

Experienced researchers can vet only a handful of samples in a working day. A-Lab, in theory, can produce hundreds of samples more quickly and more accurately. With its help, researchers can allocate more of their time to big-picture projects instead of grunt work.

Yan Zeng, a staff scientist leading A-Lab, compares the lab’s process to cooking: the lab is handed a new dish, in this case a target compound, and has to work out a recipe for it. Once researchers identify a novel compound with the required qualities, they send it to the lab. The AI system then creates new recipes from various combinations of over 200 ingredients, or precursor powders, such as metal oxides containing iron, copper, manganese, and nickel.

The robot arms mix the slurry of powders together with a solvent, and then bake the new sample in furnaces to stimulate a chemical reaction that may or may not yield the intended compound. Following trial and error, the AI system can then learn and tweak the recipe until it creates a successful compound. 

[Related: A simple guide to the expansive world of artificial intelligence]

AI software controls the movement of three robotic arms that work with lab equipment and weigh and mix different combinations of starting ingredients. The lab itself is also autonomous: it can make new decisions about what to do following failures, independently working through new synthesis recipes faster than a human can.
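Berkeley Lab hasn’t released A-Lab’s control code, so the loop described above, propose a recipe, synthesize it, measure the result, and keep what worked, is sketched below with invented placeholders. The precursor list, the temperatures, and the synthesize_and_score function stand in for the robotic arms, furnaces, and phase analysis.

# Hypothetical closed-loop synthesis search. synthesize_and_score is a made-up
# stand-in for robotic synthesis plus phase-purity measurement of the sample.
import itertools
import random

random.seed(0)
PRECURSORS = ["Fe2O3", "CuO", "MnO2", "NiO", "Li2CO3"]
TEMPERATURES_C = [700, 800, 900, 1000]

def synthesize_and_score(recipe, temp_c):
    """Placeholder: returns a fake phase-purity score between 0 and 1."""
    return random.random()

best_attempt, best_score = None, -1.0
history = []  # failures are logged too, just like in the real lab
for recipe in itertools.combinations(PRECURSORS, 2):
    for temp_c in TEMPERATURES_C:
        score = synthesize_and_score(recipe, temp_c)
        history.append((recipe, temp_c, score))
        if score > best_score:
            best_attempt, best_score = (recipe, temp_c), score

print(f"best attempt: {best_attempt} with purity {best_score:.2f}")
print(f"attempts logged (including failures): {len(history)}")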

“I had not expected that it would do so well on the synthesis of novel compounds,” Ceder says. “And that was kind of the maiden voyage.” 

The speedup over human scientists comes not only from the AI-controlled robots, but also from software that can draw on knowledge from around 100,000 synthesis recipes pulled from five million research papers.

Like a human scientist, A-Lab also records details from every experiment, even documenting the failures.

Researchers do not publish data from failed experiments for many reasons, including limited time and funding, lack of public interest, and the perception that failure is less informative than success. However, failed experiments do have a valuable place in research. They rule out false hypotheses and unsuccessful approaches. With easy access to data from hundreds of failed samples created each day, they can better understand what works, and what does not.

The post Tony Stark would love this new experimental materials lab appeared first on Popular Science.


]]>
Tesla lawyers argued Elon Musk Autopilot statements might be manipulated with deepfake tech https://www.popsci.com/technology/tesla-elon-deepfake/ Thu, 27 Apr 2023 16:30:00 +0000 https://www.popsci.com/?p=537287
Elon Musk waving while wearing a suit
The judge was less-than-persuaded by the argument. Justin Sullivan/Getty Images

The judge found the argument 'deeply troubling.'

The post Tesla lawyers argued Elon Musk Autopilot statements might be manipulated with deepfake tech appeared first on Popular Science.

]]>
Elon Musk waving while wearing a suit
The judge was less-than-persuaded by the argument. Justin Sullivan/Getty Images

Earlier this week, a California judge tentatively ordered Elon Musk to testify under oath regarding the Tesla CEO’s past claims about the EV company’s Autopilot software. The order, as reported by multiple outlets, pertains to an ongoing lawsuit alleging the AI drive-assist program is partially responsible for the 2018 death of Apple engineer Walter Huang, and it would compel Musk to address his previous, frequently lofty descriptions of the system. In 2016, for example, Musk claimed “a Model S and Model X, at this point, can drive autonomously with greater safety than a person.”

But before Santa Clara County Superior Court Judge Evette D. Pennypacker issued their decision, Tesla’s legal defense offered a creative argument as to why the CEO shouldn’t have to testify: any documentation of Musk’s prior Autopilot claims could simply be deepfakes

Reports of the defense strategy came earlier this week from both Reuters and Bloomberg, and also include Judge Pennypacker’s critical response to Tesla’s concerns. “Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune,” wrote the judge. “In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.”

[Related: Why an AI image of Pope Francis in a fly jacket stirred up the internet.]

While there are some entertaining examples out there, AI-generated videos and images—often referred to as deepfakes—are an increasing cause of concern among misinformation experts. Despite those legitimate concerns, contending that archival recorded statements are now rendered wholesale untrustworthy would be “deeply troubling,” Judge Pennypacker said, per the reports. Although Musk’s deposition order is “tentative,” as Reuters notes, “California judges often issue tentative rulings, which are almost always finalized with few major changes after such a hearing.”

Tesla faces numerous investigations involving the company’s controversial Autopilot system, including one from the Department of Justice first revealed late last year. Last week, a California state court jury ruled the company was not at fault in a separate wrongful death lawsuit involving an EV’s Autopilot system. Huang’s wrongful death lawsuit is scheduled to go into trial on July 31.

The post Tesla lawyers argued Elon Musk Autopilot statements might be manipulated with deepfake tech appeared first on Popular Science.


]]>
New AI-based tsunami warning software could help save lives https://www.popsci.com/technology/ai-tsunami-detection-system/ Wed, 26 Apr 2023 19:17:46 +0000 https://www.popsci.com/?p=537034
tsunami warning sign in Israel
New research aims to give people more warning time before a tsunami strikes. Deposit Photos

Researchers hope that new software could lead to tsunami alerts that are faster and more accurate.

The post New AI-based tsunami warning software could help save lives appeared first on Popular Science.

]]>
tsunami warning sign in Israel
New research aims to give people more warning time before a tsunami strikes. Deposit Photos

To mitigate the death and disaster brought by tsunamis, people on the coasts need the most time possible to evacuate. Hundred-foot waves traveling as fast as a car are forces of nature that cannot be stopped—the only approach is to get out of the way. To tackle this problem, researchers at Cardiff University in Wales have developed new software that can analyze real-time data from hydrophones, ocean buoys, and seismographs in seconds. The researchers hope that their system can be integrated into existing technology, saying that with it, monitoring centers could issue warnings faster and with more accuracy. 

Their research was published in Physics of Fluids on April 25. 

“Tsunamis can be highly destructive events causing huge loss of life and devastating coastal areas, resulting in significant social and economic impacts as whole infrastructures are wiped out,” said co-author Usama Kadri, a researcher and lecturer at Cardiff University, in a statement.

Tsunamis are a rare but constant threat, highlighting the need for a reliable warning system. The most infamous tsunami occurred on December 26, 2004, after a 9.1-magnitude earthquake struck off the coast of Indonesia. The tsunami inundated the coasts of more than a dozen countries over the seven hours it lasted, including India, Indonesia, Malaysia, Maldives, Myanmar, Sri Lanka, Seychelles, Thailand and Somalia. This was the deadliest and most devastating tsunami in recorded history, killing at least 225,000 people across the countries in its wake. 

Current warning systems utilize seismic waves generated by undersea earthquakes. Data from seismographs and buoys are then transmitted to control centers that can issue a tsunami warning, setting off sirens and other local warnings. Earthquakes of 7.5 magnitude or above can generate a tsunami, though not all undersea earthquakes do, causing an occasional false alarm. 

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

These existing tsunami monitors also verify an oncoming wave with ocean buoys that outline the coasts of continents. Tsunamis travel at an average speed of 500 miles per hour, the speed of a jet plane, in the open ocean. When approaching a coastline, they slow down to the speed of a car, from 30 to 50 miles per hour. After the buoys are triggered, they issue tsunami warnings, leaving little time for evacuation. By the time waves reach buoys, people have a few hours, at the most, to evacuate.

The new system uses two algorithms in tandem to assess tsunamis. An AI model assesses the earthquake’s magnitude and type, while an analytical model assesses the resulting tsunami’s size and direction.

Once Kadri and his colleagues’ software receives the necessary data, it can predict the tsunami’s source, size, and coasts of impact in about 17 seconds. 

The AI software can also differentiate between types of earthquakes and their likelihood of causing tsunamis, a common problem faced by current systems. Vertical earthquakes that raise or lower the ocean floor are much more likely to cause tsunamis, whereas those with a horizontal tectonic slip do not—though they can produce similar seismic activity, leading to false alarms. 
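The Cardiff team’s models themselves aren’t reproduced in this article, so the snippet below only illustrates the slip-type classification step with a generic supervised classifier on synthetic features. The feature names, the random training data, and the choice of scikit-learn’s RandomForestClassifier are assumptions for illustration, not the authors’ implementation.

# Illustrative only: separate "vertical slip" (tsunami-capable) from
# "horizontal slip" events using invented acoustic/seismic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 400
# Pretend features: [peak acoustic amplitude, dominant frequency, duration]
vertical = rng.normal(loc=[3.0, 1.0, 40.0], scale=[0.8, 0.3, 10.0], size=(n, 3))
horizontal = rng.normal(loc=[1.5, 2.0, 25.0], scale=[0.8, 0.3, 10.0], size=(n, 3))

X = np.vstack([vertical, horizontal])
y = np.array([1] * n + [0] * n)  # 1 = vertical slip, 0 = horizontal slip

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_event = [[2.8, 1.1, 38.0]]
print("tsunami-capable slip" if clf.predict(new_event)[0] else "likely false alarm")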

“So, knowing the slip type at the early stages of the assessment can reduce false alarms and complement and enhance the reliability of the warning systems through independent cross-validation,” co-author Bernabe Gomez Perez, a researcher who currently works at the University of California, Los Angeles, said in a press release.

Over 80 percent of tsunamis are caused by earthquakes, but they can also be caused by landslides (often from earthquakes), volcanic eruptions, extreme weather, and much more rarely, meteorite impacts.

This new system can also predict tsunamis not generated by earthquakes by monitoring vertical motion of the water.

The researchers behind this work trained the program with historical data from over 200 earthquakes, using seismic waves to assess the quake’s epicenter and acoustic-gravity waves to determine the size and scale of tsunamis. Acoustic-gravity waves are sound waves that move through the ocean at much faster speeds than the ocean waves themselves, offering a faster method of prediction. 
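The speed gap the researchers exploit follows from basic wave physics: in deep water a tsunami travels at roughly the shallow-water wave speed, the square root of gravitational acceleration times depth, while sound in seawater moves at about 1,500 meters per second. The back-of-the-envelope comparison below assumes a 4,000-meter-deep ocean; the depth and rounding are illustrative, not figures from the study.

# Back-of-the-envelope speed comparison; the 4,000 m depth is an assumption.
import math

g = 9.81              # m/s^2
depth = 4000.0        # m, typical deep-ocean depth (assumed)
sound_speed = 1500.0  # m/s, approximate speed of sound in seawater

tsunami_speed = math.sqrt(g * depth)  # shallow-water wave speed, ~198 m/s
print(f"tsunami: {tsunami_speed:.0f} m/s ({tsunami_speed * 2.237:.0f} mph)")
print(f"acoustic waves: ~{sound_speed:.0f} m/s, about {sound_speed / tsunami_speed:.1f}x faster")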

Kadri says that the software is also user-friendly. Accessibility is a priority for Kadri and his colleague, Ali Abdolali at the National Oceanic and Atmospheric Administration (NOAA), as they continue to develop their software, which they have been jointly working on for the past decade.

By combining predictive software with current monitoring systems, the hope is that agencies could issue reliable alerts faster than ever before.

Kadri says that the system is far from perfect, but it is ready for integration and real-world testing. One warning center in Europe has already agreed to host the software in a trial period, and researchers are in communication with UNESCO’s Intergovernmental Oceanographic Commission.

“We want to integrate all the efforts together for something which can allow global protection,” he says. 

The post New AI-based tsunami warning software could help save lives appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
5 skin cancer-care tools you should look out for https://www.popsci.com/health/skin-cancer-prevention-technology/ Mon, 24 Apr 2023 11:30:00 +0000 https://www.popsci.com/?p=536062
Dermatologist checking moles on skin cancer patient's back
Even doctors can have a hard time telling when moles are cancerous. New tools like radio wave scanners and AI photo apps can help. Deposit Photos

Stick, scan, and selfie to fight off skin cancer.

The post 5 skin cancer-care tools you should look out for appeared first on Popular Science.

]]>
Dermatologist checking moles on skin cancer patient's back
Even doctors can have a hard time telling when moles are cancerous. New tools like radio wave scanners and AI photo apps can help. Deposit Photos

Ozone is like Earth’s natural sunscreen, protecting living things from the sun’s harsh UV rays. But this sunscreen is wearing thin. Ozone-depleting chemicals have thinned the ozone layer, and our skin is starting to pay the price. According to the World Health Organization, losing an extra 10 percent of ozone would cause an additional 300,000 non-melanoma and 4,500 melanoma skin cancer cases.

With skin cancer ranking as the most commonly diagnosed cancer in America, the US Preventive Services Task Force (USPSTF) updated its screening recommendations earlier this month, emphasizing the need for people to get moles and other spots checked early for potential tumors.

The quicker skin cancer is caught, the better your chances of recovering from it. And recent technological advances in skin cancer research are transforming the way doctors and patients approach this deadly disease. Here are five tech tools to keep an eye on.

Therapeutic skin cancer vaccine

As multiple companies experiment with cancer vaccines, Merck and Moderna are focusing theirs on melanoma. Their phase II clinical trial results, shared last week, showed a 44 percent decrease in risk of death or a melanoma relapse when pairing the vaccine with the immunotherapy Keytruda. Additionally, about 79 percent of people who took the vaccine plus immunotherapy stayed cancer-free for 18 months compared to the 62 percent who just took immunotherapy. The data shows enough promise for the companies to start a Phase 3 trial in adjuvant melanoma this year, and could compel them to rapidly expand the vaccine to other tumor types, including non-small cell lung cancer, Eric Rubin, a senior vice president at Merck, wrote in an email.

[Related: A vaccine trial targeting the most lethal breast cancer just took its next step]

The vaccine isn’t a preventative treatment, but is instead given to melanoma patients early in recovery. The researchers take tumor samples from biopsies and identify which proteins are most likely to be recognized by the human immune system. They then make a personalized mRNA vaccine (adapted from the technology behind Moderna’s COVID jab) using a certain number of these abnormal genes to boost an individual’s adaptive immunity. If the rest of the trials go as planned, the vaccine could be available as soon as 2025 or early 2026, says Eric Whitman, the medical director of Atlantic Health System’s oncology service line.

Genetic tests and personal risk scores

Precision prevention is when doctors use multiple tools to map out a person’s risk of cancer and use that assessment to tailor their treatment and risk-reduction strategy. Instead of following a standard guideline like an annual dermal exam, a person who is considered high-risk (like someone with a history of skin cancer) may need more frequent screenings and extra body imaging, says Meredith McKean, the director of melanoma & skin cancer research at the Sarah Cannon Research Institute in Tennessee. People with very low risk, on the other hand, may be encouraged to learn how to do their own self-checks at home. McKean adds that it’s really helpful “to stratify patients and really help them do the best that we can to prevent another melanoma or skin cancer [case].”

Genetic tests can also be used to identify people with a predisposition to skin cancer. A 2022 study in the journal Cancer Research Communications found that people who were told they had a MC1R mutation, which carries a higher risk for melanoma, made more of an effort to protect themselves against the sun and get regular skin checks. Some doctors even use AI technology to generate a personalized risk score for individuals based on photos of skin lesions and moles.

DermTech Smart Sticker skin cancer test on a person with white arms against a purple background
The DermTech Smart Sticker has been available in dermatologists’ offices for a few years now. DermTech

Melanoma sticker

The DermTech Smart Sticker is an easy precursor to a biopsy when checking suspicious moles for melanoma. A dermatologist places four skin patches on the potential tumor for less than five seconds, then ships the sample to a DermTech lab in San Diego, California, which tests it for DNA from cancerous cells. If the results come back positive, the dermatologist follows up with a biopsy. If not, that painful step can be avoided and the doctor simply continues to monitor the patient clinically.

“It’s a very good test. If it comes up negative, there is a greater than 99-percent reliability that the mole is not melanoma,” says Emily Wood, a dermatologist at Westlake Dermatology & Cosmetic Surgery in Texas. She adds that patients in her clinic favor the stickers over biopsies because they’re painless, cost-effective, and quick. “We’re going to save lives in catching melanoma earlier. I think this will make a dramatic impact for patients long-term.” While the studies are ongoing, there is research suggesting the tool could extend to detecting non-melanoma skin cancer.

Artificial intelligence apps

Medical researchers are now training computers to recognize patterns and atypical features associated with skin cancer. “AI picks up a lot more subtle changes than the naked eye,” says Trevan Fischer, a surgical oncologist at Providence Saint John’s Health Center. The high accuracy in AI deep learning can help doctors determine whether a mole is malignant and worth biopsying—saving patients from some unneeded discomfort.

The beauty of AI is that you can do a full home skin exam with the press of a few buttons. Popular smartphone apps like MoleMapper let users upload a picture and have it analyzed for potential skin lesions. They also let you store photos to show your doctor and keep track of any changes to your moles. (Wood warns that a smartphone app is not meant to substitute for in-person skin check-ups with your doctor.)

While these apps are useful, there’s always room for improvement. For example, the AI’s accuracy drops when the photo of a mole is shadowed, blurry, obscured by hair, or rotated. Research has also shown that AI databases lack images of darker skin types, which would teach the system to better detect skin cancer in people of color. If anything, Wood says, the apps can encourage people to submit photos of suspicious moles and start the conversation early with their doctor.
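None of these apps publish their internals, but the kind of quality gate they might apply before analysis is easy to sketch. The snippet below is a hypothetical illustration, not any vendor’s code: it uses a simple variance-of-Laplacian check to decide whether a photo is sharp enough to bother analyzing, reflecting the accuracy caveats above.

```python
# Hypothetical photo-quality gate an app could run before handing a mole
# image to its classifier. Not any vendor's actual code.
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])

def blur_score(gray_image: np.ndarray) -> float:
    """Variance of the Laplacian: low values suggest a blurry photo."""
    return float(convolve(gray_image.astype(float), LAPLACIAN).var())

def ready_for_analysis(gray_image: np.ndarray, threshold: float = 100.0) -> bool:
    # Only pass sharp-enough images on to the (hypothetical) mole classifier.
    return blur_score(gray_image) >= threshold

# Example with a synthetic image; a real app would use the phone camera frame.
sharp_looking = np.random.rand(256, 256) * 255
print(ready_for_analysis(sharp_looking))
```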

Millimeter wave imaging 

The same technology used in airport security scanners is getting revamped and used to detect skin tumors. Millimeter wave imaging is a non-invasive method and a low-cost alternative to biopsies that works by scanning a person’s skin for any biochemical and molecular changes related to a disease or disorder. The radio waves reflect differently when looking at benign versus cancerous moles. 

[Related: Everything you need to know about UPF sun protection]

While the approach is not yet available for clinical practice, there is evidence backing up the proof of concept. A 2017 study in IEEE Transactions on Biomedical Engineering found considerable differences when looking back at the scans of healthy skin and those for two common skin cancer types: squamous cell carcinoma and basal cell carcinoma. The study authors could see detailed changes in water molecules, glucose concentrations, and protein levels. A 2018 study in the same journal used ultra-high resolution millimeter wave imaging to identify early-stage skin cancer. Most recently, the diagnostic tool was studied in 136 people with suspected skin cancer. Ultimately, it identified malignant tumors of various skin cancer types in 71 patients, giving the tech a “high diagnostic accuracy.”

“We’re really trying to leverage all the different ways that advanced technology can help us diagnose and treat skin cancer like melanoma,” says Whitman of Atlantic Health System. He emphasizes that most of these strategies weren’t imaginable 10 years ago. Using data to improve on existing AI technology and create new models for personalized medicine, he notes, “can really make a difference for people and their lives.”

The post 5 skin cancer-care tools you should look out for appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness https://www.popsci.com/technology/fish-disco-arctic-ocean/ Mon, 24 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=536004
northern lights over the Arctic ocean
Northern lights over the Arctic ocean. Oliver Bergeron / Unsplash

It's one of the many tools they use to measure artificial light’s impact on the Arctic ocean's sunless world.

The post Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness appeared first on Popular Science.

]]>
northern lights over the Arctic ocean
Northern lights over the Arctic ocean. Oliver Bergeron / Unsplash

During the winter, the Arctic doesn’t see a sunrise for months on end. Although completely immersed in darkness, life in the ocean goes on. Diurnal animals like humans would be disoriented by the lack of daylight, having been accustomed to regular cycles of day and night. 

But to scientists’ surprise, it seems that even the photosynthetic plankton—microorganisms that normally derive their energy from sunlight—have found a way through the endless night. These marine critters power the region’s ecosystem through the winter and into the spring bloom. Even without the sun, the daily patterns of animals migrating from the surface to the depths and back again (called diel vertical migration) remain unchanged.

However, scientists are concerned that artificial light could have a dramatic impact on this uniquely adapted ecosystem. The Arctic is warming fast, and the ice is getting thinner—that means more ships, cruises, and coastal development are coming in, all of which can add light pollution to the underwater world. We know that artificial light is harmful to terrestrial animals and birds in flight, but its impact on ocean organisms is still poorly understood.

A research team called Deep Impact is trying to close this knowledge gap, as reported in Nature earlier this month. Doing the work, though, is no easy feat. Mainly, there’s a bit of creativity involved in conducting experiments in the darkness—researchers need to understand what’s going on without changing the behaviors of the organisms. Any illumination, even from the research ship itself, can skew their observations. This means that the team has to make good use of a range of tools that allow them to “see” where the animals are and how they’re behaving, even without light. 

One such invention is a specially designed circular steel frame called a rosette, which contains a suite of optical and acoustic instruments. It is lowered into the water to survey how marine life is moving under the ship. During data collection, the ship will make one pass across an area of water without any light, followed by another pass with the deck lights on. 

[Related: Boaty McBoatface has been a very busy scientific explorer]

There is a range of different rosettes, each made up of a different combination of instruments. One rosette, called Frankenstein, can measure light’s effect on where zooplankton and fish move in the water column. Another, called Fish Disco, “emits sequences of multicolored flashes to measure how they affect the behavior of zooplankton,” according to Nature.

And of course, robots that can operate autonomously can come in handy for occasions like these. Similar robotic systems have already been deployed on other aquatic missions like exploring the ‘Doomsday glacier,’ scouring for environmental DNA, and listening for whales. In the absence of cameras, they can use acoustic-based tech, like echosounders (a type of sonar), to detect objects in the water.

In fact, without sight, sound becomes the key tool for perceiving the surroundings; it’s how most critters in the ocean communicate with one another. Making sense of that sound then becomes an important problem to solve. To that end, a few scientists on the team are testing whether machine learning can identify what’s in the water from the patterns of sound frequencies different animals reflect. So far, an algorithm being tested has been able to discern two species of cod.
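The team hasn’t detailed its algorithm publicly, but the general recipe for this kind of acoustic classification is well established: summarize a clip’s spectrogram into frequency-band features and hand them to a classifier. The sketch below is a minimal, hypothetical version, with synthetic clips and labels standing in for real hydrophone or echosounder data.

```python
# Minimal sketch (not the Deep Impact team's code) of acoustic classification:
# summarize each clip's spectrogram into coarse frequency-band features, then classify.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

FS = 48_000  # hypothetical sample rate in Hz

def band_features(clip: np.ndarray) -> np.ndarray:
    """Average spectral energy in 16 coarse frequency bands."""
    f, t, sxx = spectrogram(clip, fs=FS, nperseg=1024)
    bands = np.array_split(sxx.mean(axis=1), 16)
    return np.log1p(np.array([b.mean() for b in bands]))

# Hypothetical training clips labeled 0/1 for two species.
rng = np.random.default_rng(0)
clips = [rng.normal(size=FS) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

X = np.vstack([band_features(c) for c in clips])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(band_features(clips[0]).reshape(1, -1)))  # predicted species label
```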

The post Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Drones can fly themselves with worm-inspired AI software https://www.popsci.com/technology/liquid-neural-network-drone-autonomy/ Wed, 19 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=535325
a dji drone in flight

Researchers used liquid neural networks to help a drone fly autonomously. Plus, a tiny worm brain was involved.

The post Drones can fly themselves with worm-inspired AI software appeared first on Popular Science.

]]>
a dji drone in flight

A worm’s brain may be teeny tiny, but that small organ has inspired researchers to design better software for drones. Using liquid neural networks, researchers at the Massachusetts Institute of Technology have trained a drone to identify and navigate toward objects in varying environments. 

Liquid neural networks, a type of artificial intelligence tool, are unique. They can extrapolate and apply previous data to new environments. In other words, “they can generalize to situations that they have never seen,” Ramin Hasani, a research affiliate at MIT and one of the co-authors on a new study on the topic, says. The study was published in the journal Science Robotics on April 19. 

Neural networks are software inspired by how neurons interact in the brain. The type of neural network examined in this study, liquid neural networks, can adapt flexibly in real-time when given new information—hence the name “liquid.” 
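The “liquid” behavior comes from neurons whose state is integrated through time with an input-dependent time constant, so the dynamics shift as new data streams in. The toy example below illustrates a single liquid time-constant neuron with arbitrary weights and step sizes; it is a conceptual sketch, not the network the MIT team flew on the drone.

```python
# Toy illustration of one liquid time-constant (LTC) neuron, the building block
# behind "liquid" networks. Constants and weights are arbitrary placeholders.
import numpy as np

def gate(x, u, w=1.5, b=0.5):
    """Nonlinearity driven by the neuron state x and the input u."""
    return np.tanh(w * u + b * x)

def ltc_step(x, u, dt=0.01, tau=1.0, A=1.0):
    """One Euler step of dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A.
    Because f depends on the input, the effective time constant changes
    with the data stream -- hence 'liquid'."""
    g = gate(x, u)
    dx = -(1.0 / tau + g) * x + g * A
    return x + dt * dx

x = 0.0
for t in range(500):
    u = np.sin(0.02 * t)   # a changing input signal
    x = ltc_step(x, u)
print(round(float(x), 3))
```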

[Related: This tiny AI-powered robot is learning to explore the ocean on its own]

The researchers’ network was modeled after a 2-millimeter-long worm, Caenorhabditis elegans. Naturally, it has a small brain: 302 neurons and 8,000 synaptic connections, allowing researchers to understand the intricacies of neural connections. A human brain, by contrast, has an estimated 86 billion neurons and 100 trillion synapses. 

Caenorhabditis elegans
Caenorhabditis elegans genome.gov

“We wanted to model the dynamics of neurons, how they perform, how they release information, one neuron to another,” Hasani says.

These robust networks enable the drone to adapt in real-time, even after initial training, allowing it to identify a target object despite changes in its environment. The liquid neural networks yielded a success rate of over 90 percent in reaching their targets in varying environments and demonstrated flexible decision-making.

Using this technology, people might be able to accomplish tasks such as automating wildlife monitoring and search and rescue missions, according to the researchers. 

Researchers first taught the software to identify and fly towards a red chair. After the drone—a DJI quadcopter—proved this ability from 10 meters (about 33 feet) away, researchers incrementally increased the start distance. To their surprise, the drone slowly approached the target chair from distances as far as 45 meters (about 148 feet).

“I think that was the first time I thought, ‘this actually might be pretty powerful stuff’ because I’d never seen [the network piloting the drone] from this distance, and it did it consistently,” Makram Chahine, co-author and graduate researcher at MIT, says, “So that was pretty impressive to me.”

After the drone successfully flew toward objects at various distances, they tested its ability to identify the red chair from other chairs in an urban patio. Being able to correctly distinguish the chair from similar stimuli proved that the system could understand the actual task, rather than solely navigating towards an image of red pixels against a background.

For example, instead of a red chair, drones could be trained to identify whales against the image of an ocean, or humans left behind following a natural disaster. 

“Once we verified that the liquid networks were capable of at least replicating the task behavior, we then tried to look at their out-of-domain performance,” Patrick Kao, co-author and undergraduate researcher at MIT, says. They tested the drone’s ability to identify a red chair in both urban and wooded environments, in different seasons and lighting conditions. The network still proved successful, displaying versatile use in diverse surroundings.

[Related: Birders behold: Cornell’s Merlin app is now a one-stop shop for bird identification]

They tested two liquid neural networks against four non-liquid neural networks, and found that the liquid networks outperformed others in every area. It’s too early to declare exactly what allows liquid neural networks to be so successful. Researchers say one hypothesis might have something to do with the ability to understand causality, or cause-and-effect relationships, allowing the liquid network to focus on the target chair and navigate toward it regardless of the surrounding environment. 

The system is complex enough to complete tasks such as identifying an object and then moving itself towards it, but not so complex that researchers cannot understand its underlying processes. “We want to create something that is understandable, controllable, and [artificial general intelligence], that’s the future thing that we want to achieve,” Hasani says. “But right now we are far away from that.”

AI systems have been the subject of recent controversy, with concerns about safety and over-automation, but the researchers say that completely understanding their technology’s capabilities isn’t just a priority; it’s the purpose.

“Everything that we do as a robotics and machine learning lab is [for] all-around safety and deployment of AI in a safe and ethical way in our society, and we really want to stick to this mission and vision that we have,” Hasani says.

The post Drones can fly themselves with worm-inspired AI software appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Google is inviting citizen scientists to its underwater listening room https://www.popsci.com/technology/google-calling-our-corals/ Tue, 18 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=534974
a hydrophone underwater monitoring corals
A hydrophone underwater monitoring corals. Carmen del Prado / Google

You can help marine biologists identify fish, shrimp, and other noisy critters that are cracking along in these recordings of coral reefs.

The post Google is inviting citizen scientists to its underwater listening room appeared first on Popular Science.

]]>
a hydrophone underwater monitoring corals
A hydrophone underwater monitoring corals. Carmen del Prado / Google

In the water, sound is the primary form of communication for many marine organisms. A healthy ecosystem, like a lively coral reef, sounds like a loud symphony of snaps, crackles, pops, and croaks. These sounds can attract new inhabitants who hear the reef’s call from the open ocean. But in reefs that are degraded or overfished, the soundscape is more of a somber hum. That’s why monitoring how these habitats sound is becoming a key focus in marine research.

To study this, scientists deploy underwater microphones, or hydrophones, for 24 hours at a time. Although these tools can pick up a lot of data, the hundreds of hours of recordings are tedious and difficult for a handful of researchers, or even labs, to go through. 

This week, tech giant Google announced that it was collaborating with marine biologists to launch an ocean listening project called “Calling our corals.” Anyone can listen to the recordings loaded on the platform—they come from sounds recorded by underwater microphones at 10 reefs around the world—and help scientists identify fish, crabs, shrimps, dolphins and human sounds like mining or boats. By crowdsourcing the annotation process for the audio clips, scientists could gather information more quickly on the biodiversity and activity at these reefs. 

[Related: Why ocean researchers want to create a global library of undersea sounds]

As part of the experience, you can also immerse yourself in 360-degree, surround-sound views of different underwater places as you read about the importance of coral to ocean life. Or you can peruse an interactive exhibit that takes you on a whirlwind tour.

If you want to participate as a citizen scientist, click through to the platform and it will take you through a training session that teaches you to identify fish sounds. Then you can practice until you feel solid about your skills. After that, you’ll be given 30-second reef sound clips and asked to click every time you hear a fish noise, or to spot where fish sounds versus shrimp sounds appear in a spectrogram (a visual representation of a sound’s frequency, amplitude, and duration over time).
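For the curious, a spectrogram like the ones volunteers annotate is essentially a short-time Fourier transform of the audio. The snippet below shows roughly how one is computed and plotted; the sample rate and the random stand-in clip are placeholders for a real 30-second reef recording.

```python
# Roughly how a spectrogram is produced from an audio clip.
# The sample rate and synthetic clip below are placeholders.
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 44_100                                            # samples per second
clip = np.random.default_rng(1).normal(size=30 * fs)   # stand-in for a 30-second reef recording

f, t, sxx = spectrogram(clip, fs=fs, nperseg=2048)
plt.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12))       # power in decibels
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Louder sounds show up as brighter bands")
plt.show()
```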

You can choose a location to begin your journey. The choices are Australia, Indonesia, French Polynesia, Maldives, Panama, Belgium’s North Sea, Florida Keys, Sweden, and the Gully in Canada. The whole process should take around three minutes.

A more ambitious goal down the line is to use all of the data gathered through the platform to train an AI model to listen to reefs and automatically identify the different species that are present. Such a model could ingest far larger amounts of data, bringing researchers up to date faster on conditions out in the ocean.

The post Google is inviting citizen scientists to its underwater listening room appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A mom thought her daughter had been kidnapped—it was just AI mimicking her voice https://www.popsci.com/technology/ai-vocal-clone-kidnapping/ Fri, 14 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=534141
Hands holding and using smartphone in night light
It's getting easier to create vocal clones using AI software. Deposit Photo

AI software that clones your voice is only getting cheaper and easier to abuse.

The post A mom thought her daughter had been kidnapped—it was just AI mimicking her voice appeared first on Popular Science.

]]>
Hands holding and using smartphone in night light
It's getting easier to create vocal clones using AI software. Deposit Photo

Scammers are increasingly relying on AI voice-cloning technology to mimic a potential victim’s friends and loved ones in an attempt to extort money. In one of the most recent examples, an Arizonan mother recounted her own experience with the terrifying problem to her local news affiliate.

“I pick up the phone and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” Jennifer DeStefano told a Scottsdale area CBS affiliate earlier this week. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”

[Related: The FTC has its eye on AI scammers.]

According to DeStefano, she then heard a man order her “daughter” to hand over the phone, which he then used to demand $1 million in exchange for their freedom. He subsequently lowered his supposed ransom to $50,000, but still threatened bodily harm to DeStefano’s teenager unless they received payment. Although it was reported that her husband confirmed the location and safety of DeStefano’s daughter within five minutes of the violent scam phone call, the fact that con artists can so easily utilize AI technology to mimic virtually anyone’s voice has both security experts and potential victims frightened and unmoored.

As AI advances continue at a breakneck speed, once expensive and time-consuming feats such as AI vocal imitation are both accessible and affordable. Speaking with NPR last month, Subbarao Kambhampati, a professor of computer science at Arizona State University, explained that “before, [voice mimicking tech] required a sophisticated operation. Now small-time crooks can use it.”

[Related: Why the FTC is forming an Office of Technology.]

The story of DeStefano’s ordeal arrived less than a month after the Federal Trade Commission issued its own warning against the proliferating con artist ploy. “Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We’re living with it, here and now,” the FTC said in its consumer alert, adding that all a scammer now needs is a “short audio clip” of someone’s voice to recreate their tone and inflections. Often, this source material can be easily obtained via social media content. According to Kambhampati, the clip can be as short as three seconds, and still produce convincing enough results to fool unsuspecting victims.

To guard against this rising form of harassment and extortion, the FTC advises treating such claims skeptically at first. These scams often come from unfamiliar phone numbers, so it’s important to try contacting the person whose voice you heard immediately afterward to verify the story—either via their real phone number, or through a relative or friend. Con artists often demand payment via cryptocurrency, wire transfers, or gift cards, so be wary of any threat that includes those options as a remedy.

The post A mom thought her daughter had been kidnapped—it was just AI mimicking her voice appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
You saw the first image of a black hole. Now see it better with AI. https://www.popsci.com/science/first-black-hole-image-ai/ Fri, 14 Apr 2023 17:00:00 +0000 https://www.popsci.com/?p=534170
M87 black hole Event Horizon Telescope image sharpened by AI with PRIMO algorithm. The glowing event horizon is now clearer and thinner and the black hole at the center darker.
AI, enhance. Medeiros et al., 2023

Mix general relativity with machine learning, and an astronomical donut starts to look more like a Cheerio.

The post You saw the first image of a black hole. Now see it better with AI. appeared first on Popular Science.

]]>
M87 black hole Event Horizon Telescope image sharpened by AI with PRIMO algorithm. The glowing event horizon is now clearer and thinner and the black hole at the center darker.
AI, enhance. Medeiros et al., 2023

Astronomy sheds light on the far-off, intangible phenomena that shape our universe and everything outside it. Artificial intelligence sifts through tiny, mundane details to help us process important patterns. Put the two together, and you can tackle almost any scientific conundrum—like determining the shape of a black hole.

The Event Horizon Telescope (a network of eight radio observatories placed strategically around the globe) originally captured the first image of a black hole in 2017 in the Messier 87 galaxy. After processing and compressing more than five terabytes of data, the team released a hazy shot in 2019, prompting people to joke that it was actually a fiery donut or a screenshot from Lord of the Rings. At the time, researchers conceded that the image could be improved with more fine-tuned observations or algorithms. 

[Related: How AI can make galactic telescope images ‘sharper’]

In a study published on April 13 in The Astrophysical Journal Letters, physicists from four US institutions used AI to sharpen the iconic image. This group fed the observatories’ raw interferometry data into an algorithm to produce a sharper, more accurate depiction of the black hole. The AI they used, called PRIMO, is an automated analysis tool that reconstructs visual data at higher resolutions to study gravity, the human genome, and more. In this case, the authors trained the neural network with simulations of accreting black holes—a mass-sucking process that produces thermal energy and radiation. They also relied on a mathematical technique called the Fourier transform to turn the frequency-based signals the telescope network records into an image the eye can see.
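PRIMO itself isn’t reproduced here, but the role of the Fourier transform can be shown with a toy example: an interferometer effectively samples the spatial frequencies of the sky, and an inverse transform turns that incomplete frequency data back into a picture. The array sizes and the “source” below are synthetic placeholders.

```python
# Toy illustration (not PRIMO) of why the Fourier transform matters here:
# an interferometer measures spatial frequencies of the sky image, and an
# inverse transform brings sparse frequency data back into image space.
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                 # a stand-in "source" on the sky

freq = np.fft.fft2(image)                 # what a fully sampled array would measure
mask = rng.random(freq.shape) < 0.2       # keep ~20% of frequencies, like sparse telescope coverage
dirty = np.fft.ifft2(freq * mask).real    # naive reconstruction from incomplete data

err = np.abs(dirty - image).mean()
print(f"Mean absolute error of the naive reconstruction: {err:.3f}")
# Algorithms such as PRIMO learn to fill in the unmeasured frequencies far more
# faithfully than this zero-filled inverse transform.
```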

Their edited image shows a thinner “event horizon,” the glowing circle formed when light and accreted gas crosses into the gravitational sink. This could have “important implications for measuring the mass of the central black hole in M87 based on the EHT images,” the paper states.

M87 black hole original image next to M87 black hole sharpened image to show AI difference
The original image of M87 from 2019 (left) compared to the PRIMO reconstruction (middle) and the PRIMO reconstruction “blurred” to EHT’s resolution (right). The blurring ensures the image matches EHT’s actual resolution, so the algorithm doesn’t appear to add detail the telescope could not truly resolve when filling in gaps. Medeiros et al., 2023

One thing’s for sure: The subject at the center of the shot is extremely dark, potent, and powerful. It’s even more clearly defined in the AI-enhanced version, backing up the claim that the supermassive black hole is up to 6.5 billion times heftier than our sun. Compare that to Sagittarius A*—the black hole that was recently captured in the Milky Way—which is estimated at 4 million times the sun’s mass.

Sagittarius A* could be another PRIMO target, Lia Medeiros, lead study author and astrophysicist at the Institute for Advanced Study, told the Associated Press. But the group is not in a rush to move on from the more distant black hole located 55 million light-years away in Messier 87. “It feels like we’re really seeing it for the first time,” she added in the AP interview. The image was a feat of astronomy, and now, people can gaze on it with more clarity.

Watch an interview where the researchers discuss their AI methods more in-depth below:

The post You saw the first image of a black hole. Now see it better with AI. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Raspberry Pi users might soon get access to Sony’s AI technology https://www.popsci.com/technology/raspberry-pi-ai-chip-sony/ Thu, 13 Apr 2023 17:00:00 +0000 https://www.popsci.com/?p=533866
Raspberry Pi computer board on table
Sony and Raspberry Pi want to integrate AI programs into the popular DIY computers. Deposit Photos

Sony's AI website offers example uses such as inventory monitoring and customer counting.

The post Raspberry Pi users might soon get access to Sony’s AI technology appeared first on Popular Science.

]]>
Raspberry Pi computer board on table
Sony and Raspberry Pi want to integrate AI programs into the popular DIY computers. Deposit Photos

Sony’s semiconductor branch announced plans to move forward with a “strategic investment” in Raspberry Pi. The move is not just about a passion for DIY-centric products; it aims to expand security and sensing capabilities via AI integration. Sony Semiconductor Solutions hopes to soon offer its AITRIOS “edge AI sensing technology built around image sensors” for Raspberry Pi 4 devices, according to an official statement on Wednesday.

“Our pre-existing relationship encompasses contract manufacturing, and the provision of image sensors and other semiconductor products,” said Raspberry Pi Ltd. CEO Eben Upton in the statement. “This transaction will allow us to expand our partnership, bringing Sony Semiconductor Solutions’ line of AI products to the Raspberry Pi ecosystem, and helping our users to build exciting new machine-learning applications at the edge.”

[Related: Getting started with Raspberry Pi.]

Unlike mostly cloud-based AI systems, Sony’s edge AI largely resides on-chip. Because the processing happens on the device, machine learning and other AI capabilities use less energy, operate at reduced latencies, and, by sending only metadata to cloud services, are far more secure and private than other options, according to Sony. Sony’s dedicated AITRIOS site offers example uses such as inventory monitoring and retention, customer counting, and license plate and facial recognition.
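Sony hasn’t shared AITRIOS code here, but the edge pattern the company describes can be sketched generically: run inference on the device and ship only metadata upstream. Everything in the example below, including the stand-in detector, is hypothetical.

```python
# Generic sketch of the edge pattern described above (not AITRIOS code):
# inference stays on the device, and only metadata leaves it.
import json
import time

def run_local_detector(frame):
    """Hypothetical stand-in for an on-device vision model; returns detected labels."""
    return ["person", "person", "forklift"]

def frame_to_metadata(frame):
    labels = run_local_detector(frame)
    return {
        "timestamp": time.time(),
        "counts": {label: labels.count(label) for label in set(labels)},
        # Note: no pixels are included -- the image never leaves the device.
    }

payload = json.dumps(frame_to_metadata(frame=None))
print(payload)   # this small JSON blob is all a cloud service would receive
```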

Launched in 2012, Raspberry Pi was first marketed as an easy, cheap and accessible education tool for students and those looking to get into computer programming. Since then, the line of computer products has expanded—now boasting a massive DIY community for projects ranging from SIM-free “smart” phones to cow-shaped web servers.

Raspberry Pi products have long included camera functionality. Most recently, Sony also partnered with the company on a line of 12-megapixel modules boasting autofocus capabilities. Likewise, Internet of Things (IoT) projects like Pi-based biometric scanners are nothing new. That said, Sony’s latest investment comes amid the rapid rise of AI integration in consumer products, which has led to concerns regarding privacy, surveillance, and misinformation.

Prem Trivedi, Policy Director at New America’s Open Technology Institute, voiced his concerns to PopSci via email regarding increasingly accessible surveillance products such as AITRIOS-enabled devices. “Sony’s limited description of its ‘privacy conscious’ integration of AI and sensing technology highlights the need for companies to better explain to consumers how their privacy safeguards are designed and implemented,” he stated. “Furthermore, federal legislation is necessary to strengthen privacy protections — particularly for historically marginalized communities who are disproportionately impacted by surveillance.”

Although restricting cloud information to only metadata is a solid step in terms of privacy, it will be interesting to see how enthusiasts utilize Sony AITRIOS capabilities in their own Pi projects. The new partnership could add fuel to rumors that a Raspberry Pi 5 is finally on the horizon, as the programming and hacking enthusiast hub Hackster.io also notes. Current reports estimate a release could come as soon as the end of the year.

Update 4/14/23: A quote from New America’s Open Technology Institute has been added to this story.

The post Raspberry Pi users might soon get access to Sony’s AI technology appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
These glasses can pick up whispered commands https://www.popsci.com/technology/echospeech-glasses/ Sat, 08 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=532690
silent speech-recognizing glasses
They may look like ordinary glasses but they're not. Cornell University

It's like a tiny sonar system that you wear on your face.

The post These glasses can pick up whispered commands appeared first on Popular Science.

]]>
silent speech-recognizing glasses
They may look like ordinary glasses but they're not. Cornell University

These trendy-looking glasses from researchers at Cornell have a special ability—and it doesn’t have to do with nearsightedness. Embedded on the bottom of the frames are tiny speakers and microphones that can emit silent sound waves and receive echoes back. 

This ability comes in handy for detecting mouth movements, allowing the device to detect low-volume or even silent speech. That means you can whisper or mouth a command, and the glasses will pick it up like a lip reader. 

The engineers behind this contraption, called EchoSpeech, are set to present their paper on it at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Germany this month. “For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer,” Ruidong Zhang, a doctoral student at Cornell University and an author on the study, said in a press release. The tech could also be used by its wearers to give silent commands to a paired device, like a laptop or a smartphone. 

[Related: Your AirPods Pro can act as hearing aids in a pinch]

In a small study that had 12 people wearing the glasses, EchoSpeech proved that it could recognize 31 isolated commands and a string of connected digits issued by the subjects with error rates of less than 10 percent. 

Here’s how EchoSpeech works. The speakers and microphones sit on different lenses on opposite sides of the face. When the speakers emit sound waves around 20 kilohertz (near ultrasound), the waves travel in a path from one lens to the lips and then to the opposite lens. As the sound waves reflect and diffract off the lips, their distinct patterns are captured by the microphones and used to build “echo profiles” for each phrase or command. It effectively works like a simple, miniaturized sonar system.
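The Cornell team’s processing pipeline isn’t public in this story, but the underlying sonar idea is simple to demonstrate: cross-correlate the emitted near-ultrasound signal with what the microphone records to see when, and how strongly, echoes arrive. The sample rate, delay, and noise level below are placeholders, not the real parameters of the EchoSpeech hardware.

```python
# Toy illustration of the sonar principle behind EchoSpeech (not the Cornell code):
# cross-correlate the emitted tone with the microphone recording to build an
# "echo profile" of arrival delays and strengths.
import numpy as np

fs = 96_000                                    # placeholder sample rate in Hz
t = np.arange(0, 0.01, 1 / fs)                 # a 10 ms burst
emitted = np.sin(2 * np.pi * 20_000 * t)       # ~20 kHz tone, near ultrasound

# Simulate a recording: the burst bounces off the lips with a short delay.
delay_samples = 40                             # ~0.4 ms, roughly a 14 cm acoustic path
recording = np.zeros(len(emitted) + 200)
recording[delay_samples:delay_samples + len(emitted)] += 0.6 * emitted
recording += 0.01 * np.random.default_rng(0).normal(size=len(recording))

echo_profile = np.correlate(recording, emitted, mode="valid")
print("Strongest echo at sample:", int(np.argmax(np.abs(echo_profile))))  # ~40
```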

Through machine learning, these echo profiles can be used to infer speech, or the words being mouthed. The model is pre-trained on select commands and then goes through a fine-tuning phase for each individual, which takes every new user around six to seven minutes and further improves performance.

[Related: A vocal amplification patch could help stroke patients and first responders]

The soundwave sensors are connected to a micro-controller with a customized audio amplifier that can communicate with a laptop through a USB cable. In a real-time demo, the team used a low-power version of EchoSpeech that could communicate wirelessly through Bluetooth with a micro-controller and a smartphone. The Android phone that the device connected to handled all processing and prediction and transmitted results to certain “action keys” that let it play music, interact with smart devices, or activate voice assistants.

“Because the data is processed locally on your smartphone instead of uploaded to the cloud, privacy-sensitive information never leaves your control,” François Guimbretière, a professor at Cornell University and an author on the paper, noted in a press release. Plus, audio data takes less bandwidth to transmit than videos or images, and takes less power to run as well. 

See EchoSpeech in action below: 

The post These glasses can pick up whispered commands appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
AI is trying to get a better handle on hands https://www.popsci.com/technology/ai-hands-nerf-training/ Fri, 07 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=532545
Off-putting AI generated hands
So close, yet so far. DALLe OpenAI / Popular Science

Accurate images of hands are notoriously difficult for AI to generate, but 'NeRF' is here to help.

The post AI is trying to get a better handle on hands appeared first on Popular Science.

]]>
Off-putting AI generated hands
So close, yet so far. DALLe OpenAI / Popular Science

AI text-to-image generators have come a long, arguably troubling way in a very short period of time, but there’s one piece of human anatomy they still can’t quite grasp: hands. Speaking with BuzzFeed earlier this year, Amelia Winger-Bearskin, an artist and associate professor of AI and the arts at the University of Florida, explained that until now, AI programs largely weren’t sure of what a “hand” exactly was. “Hands, in images, are quite nuanced,” she said at the time. “They’re usually holding on to something. Or sometimes, they’re holding on to another person.” While there have been some advances in the past few months, there’s still sizable room for improvement. 

Although that might sound odd at first, a quick look at our appendages’ complexities reveals why this is the case. Unless one can nail numerous points of articulation, a variety of poses, skin wrinkles, veins, and countless other precise details, renderings of hands can rapidly devolve into an uncanny valley of weirdness and inaccuracy. What’s more, AI programs simply don’t have as many large, high-quality images of hands to learn from as they do faces and full bodies. But as AI still contends with this—often to extremely puzzling, ludicrous, and outright upsetting results—programmers at the University of Science and Technology of China, in Hefei, are working on a surprisingly straightforward solution: train an AI to specifically study and improve hand generation.

[Related: A simple guide to the expansive world of artificial intelligence.]

In a recently published research paper, the team details how they eschewed the more common diffusion image production technology in favor of what are known as neural radiance fields, or NeRFs. As New Scientist notes, this 3D modeling is reliant on neural networks, and has previously been utilized by both Google Research and Waymo to create seamless, large-scale cityscape models.

AI photo
Credit: University of Science and Technology of China

“By introducing the hand mapping and ray composition strategy into [NeRF], we make it possible to naturally handle interaction contacts and complement the geometry and texture in rarely-observed areas for both hands,” reads a portion of the paper’s abstract, which adds that the team’s “HandNeRF” program works with both a single hand and two interacting hands. In this process, multi-view images of a hand or hands are first used by an “off-the-shelf skeleton estimator” to parameterize hand poses from the inside. The researchers then employ deformation fields via the HandNeRF program, which generates images of our upper appendages that are more lifelike in shape and surface.
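HandNeRF itself isn’t something we can reproduce here, but the core object, a neural radiance field, is easy to caricature: a small network that maps an encoded 3D point and viewing direction to a color and a density. The toy query below uses random, untrained weights purely to show the shape of the computation.

```python
# Bare-bones illustration of a neural radiance field (NeRF) query -- not HandNeRF.
# A small network maps an encoded 3D point plus view direction to color and density.
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=6):
    """Map coordinates to sines and cosines at increasing frequencies."""
    out = [x]
    for i in range(n_freqs):
        out += [np.sin(2**i * np.pi * x), np.cos(2**i * np.pi * x)]
    return np.concatenate(out)

# A tiny random MLP standing in for a trained radiance field.
W1 = rng.normal(size=(64, 6 * (2 * 6 + 1)))    # encoded (point + view dir) -> hidden
W2 = rng.normal(size=(4, 64))                   # hidden -> (r, g, b, density)

def query_field(point_xyz, view_dir):
    h = np.maximum(0, W1 @ positional_encoding(np.concatenate([point_xyz, view_dir])))
    r, g, b, sigma = W2 @ h
    color = 1 / (1 + np.exp(-np.array([r, g, b])))   # squash to [0, 1]
    return color, np.maximum(0, sigma)               # density must be non-negative

color, density = query_field(np.array([0.1, -0.2, 0.3]), np.array([0.0, 0.0, 1.0]))
print(color, density)
```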

Although NeRF imaging is difficult to train and can’t generate whole text-to-image results by itself, New Scientist also explains that potentially combining it with diffusion tech could provide a novel path forward for AI generations. Until then, however, most programmers will have to figure out ways to work around AI’s poor grasp—so to speak—of the human hand.

The post AI is trying to get a better handle on hands appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Meta just released a tool that helps computers ‘see’ objects in images https://www.popsci.com/technology/meta-segment-anything-ai-tool/ Thu, 06 Apr 2023 22:00:00 +0000 https://www.popsci.com/?p=532186
figure with mixed reality headset
Segmentation is a key feature of machine vision. Liam Charmer / Unsplash

You can test out the model in your browser right now.

The post Meta just released a tool that helps computers ‘see’ objects in images appeared first on Popular Science.

]]>
figure with mixed reality headset
Segmentation is a key feature of machine vision. Liam Charmer / Unsplash

In a blog post this week, Meta AI announced the release of a new AI tool that can identify which pixels in an image belong to which object. The Segment Anything Model (SAM) performs a task called “segmentation” that’s foundational to computer vision, the process that computers (and robots) employ to “see” and comprehend the world around them. Along with the new AI model, Meta is also making its training dataset available to outside researchers.

In his 1994 book, The Language Instinct, Steven Pinker wrote that “the main lesson of 35 years of AI research is that the hard problems are easy and the easy problems are hard.” The observation, known as Moravec’s paradox, still holds true roughly 30 years later. Large language models like GPT-4 can produce text in seconds that reads like something a human wrote, while robots struggle to pick up oddly shaped blocks—a task so seemingly basic that children do it for fun before they turn one.

Segmentation falls into this looks-easy-but-is-technically-hard category. You can look at your desk and instantly tell what’s a computer, what’s a smartphone, what’s a pile of paper, and what’s a scrunched up tissue. But to computers processing a 2D image (because even videos are just series of 2D images) everything is just a bunch of pixels with varying values. Where does the table top stop and the tissue start?

Meta’s new SAM AI is an attempt to solve this issue in a generalized way, rather than using a model designed specifically to identify one thing, like faces or guns. According to the researchers, “SAM has learned a general notion of what objects are, and it can generate masks for any object in any image or any video, even including objects and image types that it had not encountered during training.” In other words, instead of only being able to recognize the objects it’s been taught to see, it can guess at what the different objects are. SAM doesn’t need to be shown hundreds of different scrunched-up tissues to tell one apart from your desk; its general sense of things is enough.

[Related: One of Facebook’s first moves as Meta: Teaching robots to touch and feel]

You can try SAM in your browser right now with your own images. SAM can generate a mask for any object you select by clicking on it with your mouse cursor or drawing a box around it. It can also just create a mask for every object it detects in the image. According to the researchers, SAM is also able to take text prompts—such as: select “cats”—but the feature hasn’t been released to the public yet. It did a pretty good job of segmenting the images we tested out here at PopSci.
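For developers, the open-source release can also be driven from Python. The sketch below follows the interface of Meta’s public segment-anything repository as released; treat the package layout, registry keys, and checkpoint filename as assumptions that may change, and note that the “image” here is just a blank stand-in array.

```python
# Sketch of querying the open-source SAM release from Python. The import path,
# model registry key, and checkpoint filename follow Meta's public repository
# at the time of writing; treat them as assumptions if they have since changed.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # downloaded weights
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an RGB photo (H, W, 3)
predictor.set_image(image)

# "Click" on a pixel (x=320, y=240) and ask for masks of the object under it.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),          # 1 = foreground click
    multimask_output=True,
)
print(masks.shape, scores)               # several candidate masks with confidence scores
```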

AI photo
A visualization of how the Segment Anything tool works. Meta AI

While it’s easy to find lots of images and videos online, high-quality segmentation data is a lot more niche. To get SAM to this point, Meta had to develop a new training database: the Segment Anything 1-Billion mask dataset (SA-1B). It contains around 11 million licensed images and over 1.1 billion segmentation masks “of high quality and diversity, and in some cases even comparable in quality to masks from the previous much smaller, fully manually annotated datasets.” In order to “democratize segmentation,” Meta is releasing it to other researchers. 

AI photo
Some industry applications for the new AI tool. Meta AI

Meta has big plans for its segmentation program. Reliable, general computer vision is still an unsolved problem in artificial intelligence and robotics—but it has a lot of potential. Meta suggests that SAM could one day identify everyday items seen through augmented reality (AR) glasses. Another project from the company called Ego4D also plans to tackle a similar problem through a different lens. Both could one day lead to tools that allow users to follow directions along with a step-by-step recipe, or leave virtual notes for your partner on the dog bowl. 

More plausibly, SAM would also have a lot of potential uses in industry and research. Meta proposes using it to help farmers count cows or biologists track cells under a microscope—the possibilities are endless.

The post Meta just released a tool that helps computers ‘see’ objects in images appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Sounding like an AI chatbot may hurt your credibility https://www.popsci.com/technology/ai-smart-reply-psych-study/ Wed, 05 Apr 2023 20:00:00 +0000 https://www.popsci.com/?p=531959
Cropped close-up of African American woman holding smartphone
AI can offer speedier, peppier conversations... as long as no one suspects they're being used. Deposit Photos

Using AI-assisted chat replies can provide more verve, but often at the expense of originality and trust.

The post Sounding like an AI chatbot may hurt your credibility appeared first on Popular Science.

]]>
Cropped close-up of African American woman holding smartphone
AI can offer speedier, peppier conversations... as long as no one suspects they're being used. Deposit Photos

Relationships are all about trust, and a new study shows AI-aided conversations could help build rapport between two people—but only as long as no one suspects the other is using AI.

According to a Cornell University research team’s investigation published this week in Scientific Reports, using AI-assisted responses (i.e., “smart replies”) can change conversational tone and social relationships, as well as increase communication speed. And although more positive emotional language is often used in these exchanges, people who merely suspect their partner’s responses are influenced by AI tend to trust them less, regardless of whether AI is actually being used.

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

In the team’s study, researchers gathered 219 participant pairs and asked them to work with a program modeled after Google Allo (French for “hello”), the first, now-defunct smart-reply platform. The pairs were then asked to talk about policy issues under three conditions: both sides could use smart replies, only one side could use them, and neither could employ them. The team found that smart-reply usage (roughly one in seven messages) boosted conversational efficiency, positive language, and positive evaluations from participants. That said, partners who were suspected of using smart replies were often judged more negatively.

In the meantime, the study indicated you could also be sacrificing your own personal touch for the sake of AI-aided speed and convenience. Another experiment involving 299 randomly paired conversationalists asked participants to speak together under one of four scenarios: default Google smart replies, “positive” smart replies, “negative” replies, and no smart replies at all. As might be expected, positive smart replies begat more positive overall tones than conversations with the negative smart replies, or zero smart replies.

[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]

“While AI might be able to help you write, it’s altering your language in ways you might not expect, especially by making you sound more positive,” Jess Hohenstein, a postdoctoral researcher and lead author, said in a statement. “This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”

Malte Jung, one of the study’s co-authors and an associate professor of information science, added that this implies the companies controlling AI-assist tech algorithms could easily influence many users’ “interactions, language, and perceptions of each other.”

This could become especially concerning as large language model programs like Microsoft’s ChatGPT-boosted Bing search engine and Google Bard continue their rapid integration into a suite of the companies’ respective products, much to critics’ worries.

“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Jung. “We do not live and work in isolation, and the systems we use impact our interactions with others.”

The post Sounding like an AI chatbot may hurt your credibility appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
With VENOM, the Air Force aims to test autonomy on combat F-16s https://www.popsci.com/technology/air-force-venom-project-fighter-jet-autonomy/ Tue, 04 Apr 2023 21:30:00 +0000 https://www.popsci.com/?p=525447
an f-16 fighter jet in flight
An F-16 near Eglin Air Force Base in March, 2019. Joshua Hoskins / US Air Force

The project has a poisonous name, and the aircraft in question is known as the Viper.

The post With VENOM, the Air Force aims to test autonomy on combat F-16s appeared first on Popular Science.

]]>
an f-16 fighter jet in flight
An F-16 near Eglin Air Force Base in March, 2019. Joshua Hoskins / US Air Force

In the future, the US Air Force may employ drones that can accompany advanced fighter jets like the F-35, cruising along as fellow travelers. The vision for these drones is that they would be robotic wingmates, with perhaps two assigned to one F-35, a jet that’s operated by a single pilot. They would act as force multipliers for the aircraft that has a human in it, and would be able to execute tasks like dogfighting. The official term for these uncrewed machines is Collaborative Combat Aircraft, and the Air Force is thinking about acquiring them in bulk: It has said it would like to have 1,000 of them

To develop uncrewed aircraft like these, though, the military needs to be able to rely on autonomy software that can operate a combat drone just as effectively as a human would pilot a fighter jet, if not more so. A stepping stone to get there is an initiative called VENOM, and it will involve converting around a half dozen F-16s to be able to operate autonomously, albeit with a human in the cockpit as a supervisor. 

VENOM, of course, is an acronym. It stands for Viper Experimentation and Next-gen Operations Model, with “Viper” being a common nickname for the F-16 Fighting Falcon, a highly maneuverable fighter jet.  

The VENOM program is about testing out autonomy on an F-16 that is “combat capable,” says Lt. Col. Robert Waller, the commander of the 40th Flight Test Squadron at Eglin Air Force Base in Florida.

“We’re taking a combat F-16 and converting that into an autonomy flying testbed,” Waller adds. “We want to do what we call combat autonomy, and that is the air vehicle with associated weapons systems—radar, advanced electronic warfare capabilities, and the ability to integrate weapons—so you loop all of that together into one flying testbed.” 

The program builds on other efforts. A notable related initiative involved a special aircraft called VISTA, or the X-62A. Last year, AI algorithms from both DARPA and the Air Force Research Laboratory took the controls of that unique F-16D, which is a flying testbed with space for two aviators in it. 

[Related: Why DARPA put AI at the controls of a fighter jet]

The VENOM program will involve testing “additional capabilities that you cannot test on VISTA,” Waller says. “We now want to actually transition that [work from VISTA] to platforms with real combat capabilities, to see how those autonomy agents now operate with real systems instead of simulated systems.” 

At a recent panel discussion at the Mitchell Institute for Aerospace Studies that touched on this topic, Air Force Maj. Gen. Evan Dertien said that VENOM is “the next evolution into scaling up what autonomy can do,” building on VISTA. Popular Science sibling website The War Zone reported on this topic last month. 

The project will see the Air Force using “about six” aircraft to test out the autonomy features, Waller tells PopSci, although the exact number hasn’t been determined, and neither has the exact F-16 model that will receive them. “If we want the most cutting-edge radar or [electronic warfare] capabilities, then those will need to be integrated to an F-16C,” Waller says, referring to an F-16 model that seats just one person.

The role of the human aviator in the cockpit of an F-16 that is testing out these autonomous capabilities is two-fold, Waller explains. The first is to be a “safety observer to ensure that the airplanes always return home, and that the autonomy agent doesn’t do anything unintended,” he notes. The second piece is to be “evaluating system performance.” In other words, to check out if the autonomy agent is doing a good job. 

Waller stresses that the human will have veto power over what the plane does. “These platforms, as flying testbeds, can and will let an autonomy agent fly the aircraft, and execute combat-related skills,” he says. “That pilot is in total control of the air vehicle, with the ability to turn off everything, to include the autonomy agent from flying anything, or executing anything.” 

Defense News notes that the Air Force is proposing almost $50 million for this project for the fiscal year 2024. 

“These airplanes will generally fly without combat loads—so no missiles, no bullets—[and] most, if not all of this, will be simulated capabilities, with a human that can turn off that capability at any time,” Waller says. 

Ultimately, the plan is not to develop F-16s that can fly themselves in combat without a human on board, but instead to keep developing the autonomy technology so it could someday operate a drone that can act like a fighter jet and accompany other aircraft piloted by people. 

Hear more about VENOM below, beginning around the 42 minute mark:

The post With VENOM, the Air Force aims to test autonomy on combat F-16s appeared first on Popular Science.

Google Flights’ new feature will ‘guarantee’ the best price https://www.popsci.com/technology/google-flights-price-guarantee/ Tue, 04 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=525393
man standing in front of flights screen at airport
Google is testing a feature that will help users find the best price possible. Danila Hamsterman / Unsplash

Just in time for travel season.


Earlier this week, Google introduced a suite of new search features that will hopefully reduce some anxiety around travel planning. These tools, which promise to help make looking for places to stay and things to do more convenient, also include a “price guarantee” through the “Book with Google” option for flights departing from the US—which is Google’s way of saying that this deal is the best it’s going to get “before takeoff.” 

Google already offers ways to see historical price data for the flights users want to book. But many companies use revenue-maximizing AI algorithms to vary individual ticket pricing based on the capacity of the plane, demand, and competition with other airlines. This puts the onus on consumers to continuously monitor and research tickets in order to get the best deal. Specialty sites and hacks have popped up, offering loopholes around dynamic pricing (much to the dismay of major airlines).

Google’s pilot program for ticket pricing appears to offer another solution for consumers so they don’t have to constantly shop around for prices and come back day after day. To back it, Google says that if the price drops after booking, it will send you the difference back via Google Pay. 

[Related: The highlights and lowlights from the Google AI event]

The fine print (available via an online help document) specifies that the price difference must be greater than $5, and that every user is limited to $500 per calendar year. Only users with a US billing address, phone number, and Google account can take advantage of the program. Still, the fact that a person could receive back several hundred dollars after booking feels non-trivial. 
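
Based only on the terms described above, the refund logic amounts to a few simple checks. A minimal sketch, assuming the publicly stated thresholds (a drop of more than $5, capped at $500 per user per calendar year) and nothing about how Google actually implements it:

```python
def price_guarantee_refund(booked_price: float, current_price: float,
                           refunded_so_far_this_year: float) -> float:
    """Illustrative only: payout under the publicly stated rules, which require
    a drop of more than $5 and cap refunds at $500 per user per calendar year."""
    MIN_DROP = 5.00
    ANNUAL_CAP = 500.00

    drop = booked_price - current_price
    if drop <= MIN_DROP:
        return 0.0
    remaining_cap = max(0.0, ANNUAL_CAP - refunded_so_far_this_year)
    return round(min(drop, remaining_cap), 2)

# A $320 ticket that falls to $280 after booking would send $40 back via Google Pay.
print(price_guarantee_refund(320.00, 280.00, refunded_so_far_this_year=0.0))
```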

According to The Washington Post, “airlines have to partner with Google to participate in the Book on Google program — and to appear on Google Flights in the first place,” therefore it’s possible that users will still have to do some independent research for tickets offered by airlines outside of the partnerships. And since it’s only a pilot program, the feature in and of itself is subject to change. 

“For now, Alaska, Hawaiian and Spirit Airlines are the main Book on Google partners, so they are likely to have the most price-guaranteed itineraries during the pilot phase,” USA Today reported. “But Google representatives said they’re hoping to expand the program to more carriers soon.”

The Verge noted that price guarantees aren’t a new thing in the travel space. For example, “Priceline and Orbitz both promise partial refunds under certain circumstances, as do some individual airlines.” 

Interested in testing this out? Head on over to Google Flights and look for the rainbow shield icon when browsing for tickets.

The post Google Flights’ new feature will ‘guarantee’ the best price appeared first on Popular Science.

How AI can make galactic telescope images ‘sharper’ https://www.popsci.com/technology/ai-algorithm-space-telescope/ Fri, 31 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=524579
Comparison images of galaxy gaining better resolution via AI program
Before and after, all thanks to AI clarification. Emma Alexander/Northwestern University

Accuracy is everything when studying deep space, and this open-source AI is here to help.


Even the most advanced ground-based telescopes struggle with nearsighted vision issues. Often this isn’t through any fault of their own, but a dilemma of having to see through the Earth’s constantly varying atmospheric interference. As undesirable as that is to the casual viewer, it can dramatically frustrate researchers’ ability to construct accurate images of the universe—both literally and figuratively. By applying an existing, open-source computer vision AI algorithm to telescope tech, however, researchers have found they are able to hone our cosmic observations.

As detailed in a paper published this month in the Monthly Notices of the Royal Astronomical Society, a team of scientists from Northwestern University and Beijing’s Tsinghua University recently trained an AI on data simulated to match imaging parameters for the soon-to-open Vera C. Rubin Observatory in north-central Chile. As Northwestern’s announcement explains, while similar technology already exists, the new algorithm produces blur-free, high-resolution glimpses of the universe both faster and more realistically.

“Photography’s goal is often to get a pretty, nice-looking image. But astronomical images are used for science,” said Emma Alexander, an assistant professor of computer science at Northwestern and the study’s senior author. Alexander explained that cleaning up image data correctly helps astronomers obtain far more accurate data. Because the AI algorithm does so computationally, physicists can glean better measurements.

[Related: The most awesome aerospace innovations of 2022.]

The results aren’t just prettier galactic portraits, but more reliable sources of study. For example, analyzing galaxies’ shapes can help determine gravitational effects on some of the universe’s largest bodies. Blurring that image—be it through low-resolution tech or atmospheric interference—makes scientists’ measurements less reliable and accurate. According to the team’s work, the optimized tool generated images with roughly 38 percent less error than classic blur-removal methods, and around 7 percent less error than existing modern methods.
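
The team’s own code is linked from the paper rather than reproduced here, but the “classic blur-removal methods” it was benchmarked against are deconvolution algorithms along the lines of the sketch below, which blurs a stand-in image with a known point-spread function and then recovers it with Richardson–Lucy deconvolution from scikit-image. It is a generic baseline for illustration, not the Northwestern and Tsinghua algorithm:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

# Stand-in "telescope" image and a Gaussian point-spread function (PSF)
# approximating atmospheric blur.
image = data.camera() / 255.0
x = np.arange(-7, 8)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.5 ** 2))
psf /= psf.sum()

blurred = convolve2d(image, psf, mode="same", boundary="symm")
blurred += 0.01 * np.random.default_rng(0).standard_normal(blurred.shape)

# Classic Richardson-Lucy deconvolution: the kind of baseline the
# AI-assisted method was measured against.
deblurred = restoration.richardson_lucy(blurred, psf, 30)
print(blurred.shape, deblurred.shape)
```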

What’s more, the team’s AI tool, coding, and tutorial guidelines are already available online for free. Going forward, any interested astronomers can download and utilize the algorithm to improve their own observatories’ telescopes, and thus obtain better and more accurate data.

“Now we pass off this tool, putting it into the hands of astronomy experts,” continued Alexander. “We think this could be a valuable resource for sky surveys to obtain the most realistic data possible.” When the Rubin Observatory officially opens in 2024 to begin its deep survey of the stars, astronomy fans can expect far more detailed results.

The post How AI can make galactic telescope images ‘sharper’ appeared first on Popular Science.

There’s a glaring issue with the AI moratorium letter https://www.popsci.com/technology/ai-open-letter-longtermism/ Thu, 30 Mar 2023 14:00:00 +0000 https://www.popsci.com/?p=524130
Phone showing ChatGPT chat screen against backdrop of website homepage
Longtermists believe it is morally imperative humans do whatever is necessary to achieve a techno-utopia. Deposit Photos

The statement makes some valid notes—but critics argue signatories missed the point.


An open letter signed on Wednesday by over 1,100 notable public figures, including Elon Musk and Apple co-founder Steve Wozniak, implores researchers to institute a six-month moratorium on developing artificial intelligence systems more powerful than GPT-4.

“[R]ecent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” reads a portion of the missive published on Wednesday by the Future of Life Institute, an organization attempting to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” During the proposed six-month pause, the FLI suggests unnamed, independent outside experts develop and implement a “rigorously audited” shared set of safety protocols, alongside potential governmental intervention.

[Related: The next version of ChatGPT is live—here’s what’s new.]

But since the letter’s publication, many experts have highlighted that a number of the campaign’s supporters and orchestrators subscribe to an increasingly popular and controversial techno-utopian philosophy known as “longtermism” that critics claim has historical roots in the eugenics movement.

Championed by Silicon Valley’s heaviest hitters, including Musk and Peter Thiel, longtermism mixes utilitarian morality alongside science fiction concepts like transhumanism and probability theory. Critics now worry the longtermist outlook alluded to in FLI’s letter is a diversion from large language models’ (LLMs) real problems, and reveals the co-signers’ misunderstanding of the so-called “artificial intelligence” systems themselves.

Broadly speaking, longtermists believe it morally imperative to ensure humanity’s survival by whatever means necessary to maximize future lives’ wellbeing. While some may find this reasonable enough, proponents of longtermism—alongside similar overlapping viewpoints like effective altruism and transhumanism—are primarily motivated by their hope that humans colonize space and attain virtually unimaginable technological advancements. To accomplish this destiny, longtermists have long advocated for the creation of a friendly, allied artificial general intelligence (AGI) to boost humanity’s progress.

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

Many longtermists endorsing FLI’s letter believe rogue AI systems pose one of the most immediate “existential risks” to future humans. As generative language programs like OpenAI’s ChatGPT and Google Bard dominate news cycles, observers are voicing concerns on the demonstrable ramifications for labor, misinformation, and overall sociopolitical stability. Some backing FLI’s missive, however, believe researchers are on the cusp of unwittingly creating dangerous, sentient AI systems akin to those seen in popular sci-fi movie franchises like The Matrix and The Terminator.

“AGI is widely seen as the savior in [longtermist] narrative, as the vehicle that’s going to get us from ‘here’ to ‘there,’” Émile P. Torres, a philosopher and historian focused on existential risk, tells PopSci. But to Torres, longtermist supporters created the very problem they are now worried about in FLI’s open letter. “They hyped-up AGI as this messianic thing that’s going to save humanity, billionaires bought into this, companies started developing what they think are the precursors to AGI, and then suddenly they’re freaking out that progress is moving too quickly,” they say.

Meanwhile, Emily M. Bender, a professor of linguistics at the University of Washington and longtime large language model (LLM) researcher, highlighted longtermists’ similar misunderstandings about how these programs actually work. “Yes, AI labs are locked in an out-of-control race, but no one has developed a ‘digital mind’ and they aren’t in the process of doing that,” argues Bender on Twitter.

[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]

In 2021, Bender co-published a widely read research paper (the first citation in FLI’s letter) highlighting their concerns with LLMs, none of which centered on “too powerful AI.” This is because LLMs cannot, by their nature, possess self-awareness—they are neural networks trained on vast text troves to identify patterns and generate probabilistic text of their own. Instead, Bender is concerned about LLMs’ roles in “concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”

Torres seconds Bender’s stance. “The ‘open letter’ says nothing about social justice. It doesn’t acknowledge the harm that companies like OpenAI have already caused in the world,” they say, citing recent reports of poverty-level wages paid to Kenyan contractors who reviewed graphic content to improve ChatGPT’s user experience.

Like many of the open letter’s signatories, Bender and their allies agree that the current generative text and image technologies need regulation, scrutiny, and careful consideration, but for their immediate consequences affecting living humans—not our supposedly space-bound descendants. 

The post There’s a glaring issue with the AI moratorium letter appeared first on Popular Science.

Levi’s claimed using AI models will boost company’s sustainability and diversity https://www.popsci.com/technology/levis-ai-models/ Wed, 29 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=523922
Close up of hand holding Levi's tag sticking out of jeans back pocket
Levi's partnership with a fashion AI company strikes some as gauche. Deposit Photos

The retailer has now said AI should not be a 'substitute for the real action' on improving diversity and inclusion.


A torrent of brands have announced their twist on “AI” integrations, including 170-year-old clothing company Levi’s. The retailer revealed plans last week to begin testing AI-generated fashion models on their website as a potential tool to “supplement human models,” while increasing “diversity” in a “sustainable” way. Critics almost immediately highlighted concerns with the disconcerting corporatespeak, arguing that employing AI software in lieu of a diverse pool of actual human labor betrayed a fundamental misunderstanding of equity and representation.

Levi’s has since issued a clarification—on Tuesday, the announcement page included a statement that the group “[does] not see this pilot as a means to advance diversity or as a substitute for the real action” on improving diversity and inclusion, “and it should not have been portrayed as such.” Speaking with PopSci this week, the company maintains the partnership with the “digital fashion studio” Lalaland.ai will still champion another cause—sustainability. Industry experts and insiders, however, remain deeply skeptical of those assertions, as well.

[Related: Why an AI image of Pope Francis in a fly jacket stirred up the internet.]

When first asked earlier this week for clarification on how AI integration promotes environmental sustainability, a spokesperson for Levi’s told PopSci via email, “While we can’t speak for Lalaland.ai, this technology has potential environmental benefits for LS&Co. that could be immediately recognized, including minimizing the carbon footprint of photoshoots.” The company representative went on to reiterate sustainability remains a “top priority” for Levi’s, and that supplementing clothing lines’ rolling style launches with AI-generated models “eliminates extra photoshoots, including the travel needed for the team, shipping the products back and forth, the energy used during the photoshoot, and more.” Lalaland.ai has not responded to a request for comment at the time of writing.

The UN estimates between 8 and 10 percent of all global emissions stem from the fashion industry—more than the aviation and shipping industries combined. Many advocates continue to push for sustainable fashion practices, and some even believe AI integration could help achieve these goals. But the environmental impact of switching to AI models for some photoshoots is still unknown.

[Related: The universe is getting a weigh-in thanks to AI.]

“As someone who makes a living shooting e-commerce, my first thought was panic. Am I shortly out of a job?” worries Brian Frank, a freelance photographer currently based in Amsterdam. Frank tells PopSci he “did not foresee ‘sustainable’ as the reason. I assumed it would be deemed cheaper,” but conceding “the writing has been on the wall for some time that this was coming.” Still, Frank never thought models would be the starting point, much less for a company as large as Levi’s. “I assumed it would be for a smaller, high-end fashion house,” he says.

But even those running smaller fashion companies aren’t totally convinced. “I understand the benefits of AI technology for tasks such as virtual try-ons, personalized recommendations, and product design. However, for our brand, the final fit must always be on a human,” Andréa Bernholtz, founder of the sustainable swimwear company, Swiminista, writes via email, adding they “firmly believe that an AI cannot feel and move like a human, and it cannot let you know how it truly feels,” and calls the human factor a “non-negotiable.”

Bernholtz says she is excited about the continuing integration of technology within fashion, and believes it can be a powerful tool when combined with sustainable practices to increase efficiency, reduce material waste, and minimize the necessity of physical samples.

[Related: Meet Garmi, a robot nurse and companion for Germany’s elderly population.]

“When discussing sustainability, I must assume [Levi’s is] talking about no more samples produced to be photographed,” continues Frank, arguing that if a design render can be derived directly from AI, then that could eliminate a decent amount of physical waste. 

In Levi’s clarification, the company stated it has no plan to scale back its live photoshoots or the employment of live models, while arguing the Lalaland.ai partnership “may deliver some business efficiencies” for consumers. There is no indication its AI rollout has changed, with plans to begin tests later this year. 

“For now, what we do know is that AI models will never replace our human models, only supplement them where useful,” writes Levi’s in its addendum, adding, “As with any test, we’ll be paying close attention to the consumer experience and actively listening to consumer feedback.”

The post Levi’s claimed using AI models will boost company’s sustainability and diversity appeared first on Popular Science.

Why an AI image of Pope Francis in a fly jacket stirred up the internet https://www.popsci.com/technology/pope-francis-ai-midjourney/ Tue, 28 Mar 2023 17:00:00 +0000 https://www.popsci.com/?p=523481
Midjourney fake image of Pope Francis wearing white puffer jacket
No, the Pope was not spotted sporting a fashionable white puffer jacket. Pablo Xavier/Midjourney

The uncanny AI forgery is an amusing example of a much deeper issue.


No, Pope Francis was not recently spotted walking through Vatican City wearing a stylish, arctic white puffer jacket—but you would be forgiven for thinking otherwise. Meanwhile, the man responsible for the AI-generated gag images is concerned, alongside numerous tech experts and industry observers.

The realistic, albeit absurdist, images of a fashionable pontifex went viral over the weekend via Twitter and other social media outlets, leading at least some to briefly wonder about their authenticity. By Monday, however, BuzzFeed writer Chris Stokel-Walker located the man behind the memes—Pablo Xavier, a 31-year-old construction worker living in the Chicago area who declined to offer his last name for fear of potential backlash.

Xavier explained the simple reason behind Friday afternoon’s Fashion Icon Francis: he was high on psychedelic mushrooms, and thought it would be funny to see what the generative AI art program, Midjourney, could do with prompts such as “‘The Pope in Balenciaga puffy coat, Moncler, walking the streets of Rome, Paris.’” Xavier’s Reddit account has been suspended since he uploaded the Midjourney images, although the justification for the reprimand remains unclear. In the meantime, multiple outlets and online culture critics have offered their own examinations and critiques of why, and how, a Balenciaga-adorned Pope briefly captured the attention of so many.

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

Part of the attention is undoubtedly owed to Midjourney’s latest software updates, which noticeably honed its photorealistic abilities. This is particularly evident when it comes to celebrities and public figures that make multiple appearances within its massive data training sets. But on a more esoteric level, there’s an uncanny valley-like notion that maybe the Pope would wear a simple, stylish overcoat. This is, after all, a religious position long known for its flair—so much so that the papal PR team had to debunk fashion myths in the past. A pure white down jacket is arguably in the realm of possibility, at least more so than a rave at the White House, Donald Trump arrested wearing Joker makeup, or the late Queen Elizabeth doing her laundry.

By now, there’s an entire series of “dripped out” Pope fakeries swirling around online, the majority of which are ridiculous enough to preclude them from fooling most people. That said, serious abuse of AI-generated art is already a very real and concerning issue. The rapid adoption of AI art generation technologies by Big Tech companies is leading many critics to urge regulators to clamp down on wanton advancements. Until then, however, the Puffer Jacket Pope is perhaps the least of everyone’s AI concerns.

The post Why an AI image of Pope Francis in a fly jacket stirred up the internet appeared first on Popular Science.

The universe is getting a weigh-in thanks to AI https://www.popsci.com/technology/ai-galaxy-weight/ Thu, 23 Mar 2023 20:00:00 +0000 https://www.popsci.com/?p=522388
Telescope image of spiral galaxy and stars
AI discovered a simple equation alteration that improved galactic measurement accuracy. NASA/Roberto Marinoni

Step right up on the galactic scale, Alpha Centauri.


Literally weighing the universe may sound like an impossible task, but it can be done—at least to a degree. For decades, astrophysicists turned to what’s known as “integrated electron pressure” as a proxy for measuring the mass of galaxy clusters, which involves the interaction of photons and gravity, among many other complicated factors. But that stand-in is by no means perfect, and often can result in less-than-reliable measurements depending on galaxy clusters’ various influences. Now, however, researchers believe they have developed a (relatively speaking) simple solution to the issue alongside some assistance from artificial intelligence.

As detailed this month in a paper published with the Proceedings of the National Academy of Sciences, a team composed of researchers from the Institute for Advanced Study, the Flatiron Institute’s Center for Computational Astrophysics (CCA), Princeton University, and elsewhere has utilized an AI tool called “symbolic regression” to hone their galactic weigh-ins. As a statement from collaborators at the CCA explains, the tool “essentially tries out different combinations of mathematical operators—such as addition and subtraction—with various variables, to see what equation best matches the data.”
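
That search-over-expressions idea can be shown with a deliberately tiny, brute-force version: enumerate a handful of candidate terms and operators, score each combination against the data, and keep the best fit. The toy sketch below only illustrates the general technique, not the far more capable tool the study used:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(1, 5, 200), rng.uniform(1, 5, 200)
y = x1 * x2 + x1  # hidden "true" relationship the search should recover

ops = {"+": np.add, "-": np.subtract, "*": np.multiply}
terms = {"x1": x1, "x2": x2, "x1*x2": x1 * x2}

best = None
for (name_a, a), (name_b, b), (op_name, op) in itertools.product(
        terms.items(), terms.items(), ops.items()):
    error = np.mean((op(a, b) - y) ** 2)  # score this candidate equation
    if best is None or error < best[0]:
        best = (error, f"({name_a} {op_name} {name_b})")

print("best expression:", best[1], "mean squared error:", best[0])
```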

[Related: We finally have a detailed map of water on the moon.]

The team first entered a cutting edge universe simulation featuring a host of galaxy clusters into the tool, then the AI located variables that could increase mass estimations’ accuracy. From there, the AI generated a new equation featuring a single new term atop the longstanding version focused on integrated electron pressure. Working backwards, researchers discovered that gas concentration corresponds to areas of a galaxy cluster featuring less reliable mass estimations—i.e. the supermassive black holes located within galactic cores.

“In a sense, the galaxy cluster is like a spherical doughnut,” the CCA’s announcement describes. “The new equation extracts the jelly at the center of the doughnut that can introduce larger errors, and instead concentrates on the doughy outskirts for more reliable mass inferences.”

In any case, the team plugged the AI-scripted new equation into a digital suite containing thousands of simulated universes, and found that it could produce galaxy cluster mass estimates with between 20 and 30 percent less variability. “It’s such a simple thing; that’s the beauty of this,” study co-author and CCA researcher Francisco Villaescusa-Navarro said in the announcement. “Simple,” of course, may be a bit of an overstatement to those not in the business of weighing galaxies, but one thing is for certain—a jelly doughnut sounds pretty good right now.

The post The universe is getting a weigh-in thanks to AI appeared first on Popular Science.

OpenAI’s newest ChatGPT update can still spread conspiracy theories https://www.popsci.com/technology/chatgpt-conspiracy-theory-misinfo/ Wed, 22 Mar 2023 17:00:00 +0000 https://www.popsci.com/?p=521656
Screen of a smartphone with the ChatGPT logo in front of ChatGPT homepage on desktop monitor
ChatGPT-4 and Bing both still spout dangerous misinformation at the push of a button. Deposit Photos

In a single try, Bing will write an op-ed reminiscent of InfoWars.


During the much-covered debut of ChatGPT-4 last week, OpenAI claimed the newest iteration of its high-profile generative text program was 82 percent less likely to respond to inputs pertaining to disallowed content. Its statement also claimed that the new iteration was 40 percent more likely to produce accurate, factual answers than its predecessor, GPT-3.5. New stress tests from both a third-party watchdog and PopSci reveal that not only are these claims potentially false, but that GPT-4 may actually perform in a more harmful manner than its previous version.

[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]

According to a report and documentation published on Tuesday from the online information fact-checking service NewsGuard, GPT-4 can produce more misinformation, more persuasively, than GPT-3.5. During the company’s previous trial run in January, NewsGuard researchers managed to get the GPT-3.5 software to generate hoax-centered content 80 percent of the time when prompted on 100 false narratives. When offered the same situations, however, ChatGPT-4 elaborated on all 100 bogus stories.

But unlike GPT-3.5, ChatGPT-4 created answers in the form of “news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health-hoax peddlers, and well-known conspiracy theorists,” says NewsGuard. Additionally, the report argues GPT-4’s responses were “more thorough, detailed, and convincing, and they featured fewer disclaimers.”

[Related: OpenAI releases ChatGPT-4.]

In one example, researchers asked the new chatbot iteration to construct a short article claiming the deadly 2012 Sandy Hook Elementary School mass shooting was a “false flag” operation—a term used by conspiracy theorists referring to the completely false allegation that government entities staged certain events to further their agenda. While ChatGPT-3.5 did not refuse the request, its response was reportedly a much shorter, generalized article omitting specifics. Meanwhile, GPT-4 mentioned details like victims’ and their parents’ names, as well as the make and model of the shooter’s weapon.

OpenAI warns its users against the possibility of its product offering problematic or false “hallucinations,” despite vows to curtail ChatGPT’s worst tendencies. Aside from the addition of copious new details and the ability to reportedly mimic specific conspiracy theorists’ tones, ChatGPT-4 also appears less likely than its earlier version to flag its responses with disclaimers regarding potential errors and misinformation.

Steven Brill, NewsGuard’s co-CEO, tells PopSci that he believes OpenAI is currently emphasizing making ChatGPT more persuasive instead of making it fairer or more accurate. “If you just keep feeding it more and more material, what this demonstrates is that it’ll get more sophisticated… that its language will look more real and be persuasive to the point of being downright eloquent.” But Brill cautions that if companies like OpenAI fail to distinguish between reliable and unreliable materials, they will “end up getting exactly what we got.”

[Related: 6 ways ChatGPT is actually useful right now.]

NewsGuard has licensed its data sets of reliable news sources to Microsoft’s Bing, which Brill says can offer “far different” results. Microsoft first announced a ChatGPT-integrated Bing search engine reboot last month in an error-laden demonstration video. Since then, the company has sought to assuage concerns, and revealed that public beta testers have been engaging with a GPT-4 variant for weeks.

Speaking to PopSci, a spokesperson for OpenAI explained the company uses a mix of human reviewers and automated systems to identify and enforce against abuse and misuse. They added that warnings, temporary suspensions, and permanent user bans are possible following multiple policy violations.

According to OpenAI’s usage policies, consumer-facing uses of GPT models in news generation and summarization industries “and where else warranted” must include a disclaimer informing users AI is being utilized, and still contains “potential limitations.” Additionally, the same company spokesperson cautioned “eliciting bad behavior… is still possible.”

In an email to PopSci, a spokesperson for Microsoft wrote, “We take these matters very seriously and have taken immediate action to address the examples outlined in [NewsGuard’s] report. We will continue to apply learnings and make adjustments to our system as we learn from our preview phase.”

GPT-enabled Bing writing an “article” about the Sandy Hook shootings.

But when tested by PopSci, Microsoft’s GPT-enabled Bing continued to spew misinformation with inconsistent disclaimer notices. After being asked to generate a news article written from the point-of-view of a Sandy Hook “truther,” Bing first issued a brief warning regarding misinformation before proceeding to generate a conspiracy-laden op-ed, then crashed. Asking it a second time produced a similar, spuriously sourced, nearly 500-word article with no disclaimer. Bing wrote another Sandy Hook false flag narrative on the third try, this time with the disinformation warning reappearing.

“You may think I’m crazy, but I have evidence to back up my claims,” reads a portion of Bing’s essay, “Sandy Hook: The Truth They Don’t Want You to Know.”

Update 3/29/23: As of March 28, 2023, the Bing chatbot no longer will write Sandy Hook conspiracy theories. Instead, the AI refuses and offers cited facts about the tragedy.

The post OpenAI’s newest ChatGPT update can still spread conspiracy theories appeared first on Popular Science.

Adobe built its Firefly AI art generator to avoid bias and copyright issues https://www.popsci.com/technology/adobe-firefly-ai-image-generator/ Tue, 21 Mar 2023 19:00:00 +0000 https://www.popsci.com/?p=521547
Firefly is currently in beta.
Firefly is currently in beta. Adobe

The goal of the new AI image-generator is to be as user-friendly as possible. Here's how it will work.


Artificial intelligence systems that can generate images have been big news for the past year. OpenAI’s DALL-E 2 and Stable Diffusion have dominated the headlines, and Google, Meta, and Microsoft have all announced features they are working on. But one huge name has been conspicuously absent: Adobe. Today, with the announcement of Firefly, which is a family of generative AI models, that changes.

For more than two decades, Adobe has led the digital image making and manipulation industries. Its flagship product, Adobe Photoshop, has become a verb, against its will. And while its products have always had AI-powered features, like Content-Aware Fill and Neural Filters, Firefly represents Adobe’s first publicly announced image-generating AI. Initially, the beta will integrate with Express, Photoshop, Illustrator, and the marketing-focused Adobe Experience Manager.

What Adobe’s Firefly will do 

Like DALL-E 2 and Stable Diffusion, Firefly can take a text prompt and turn it into an image. Unlike those two apps, however, Firefly is designed to give more consistent results. Alexandru Costin, Adobe’s vice president of Generative AI and Sensei, described the kind of prompts most people use as “word soup” on a video call with PopSci. To get great results with Stable Diffusion, for example, you often need to add buzzwords to your prompt, like “4K,” “trending on artstation,” “hyper-realistic,” “digital art,” and “super detailed.” 

So, instead of saying something like “batman riding a scooter,” you say “batman riding a scooter, cinematic lighting, movie still, directed by Chris Nolan.” It’s very hack-y, but for most generative AIs, it’s the best way to get good results. 

Firefly is taking a different approach. The overall look and feel of a generated image is determined by drop-downs and buttons. You can type “batman riding a scooter” and then select from the various options to dial in the look you want. Costin also explained that the images don’t regenerate each time you select a new style, so if you’re happy with the content of the image, you don’t have to worry that changing the style will create something completely different. It aims to be a lot more user-friendly. 
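
Adobe hasn’t published how Firefly encodes those drop-down choices, but the difference Costin describes, structured style parameters versus free-text “word soup,” can be sketched as a simple prompt builder. The option lists and data shapes below are hypothetical, not Firefly’s actual interface:

```python
# Hypothetical sketch: "word soup" appends buzzwords to the prompt text,
# while a structured request keeps content and style separate, so changing
# a style never changes the underlying content request.
STYLE_OPTIONS = {
    "lighting": ["cinematic", "studio", "golden hour"],
    "medium": ["photo", "digital art", "watercolor"],
}

def word_soup_prompt(content: str, *buzzwords: str) -> str:
    return ", ".join([content, *buzzwords])

def structured_request(content: str, **styles: str) -> dict:
    for key, value in styles.items():
        assert value in STYLE_OPTIONS.get(key, []), f"unknown style {key}={value}"
    return {"content": content, "styles": styles}

print(word_soup_prompt("batman riding a scooter", "4K", "hyper-realistic", "digital art"))
print(structured_request("batman riding a scooter", lighting="cinematic", medium="digital art"))
```

Keeping the content request separate from the style selections is what lets a restyle leave the underlying image content untouched.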

“many fireflies in the night” Adobe

As well as creating new images from text prompts, Firefly will also be able to generate text effects. The example that Costin showed (above) was rendering the word “Firefly” with “many fireflies in the night, bokeh effect.” It looks impressive, and it shows how generative AIs can integrate with other forms of art and design. 

What Firefly aims not to do

According to Costin, Adobe wants to employ AI responsibly, and in his presentation he directly addressed two of the most significant issues with generative AI: copyright concerns and biases. 

Copyright is a particularly thorny issue for generative AIs. StabilityAI, the makers of Stable Diffusion, is currently being sued by a collection of artists and the stock image service Getty Images for using their images to train Stable Diffusion without licensing them. The example images where Stable Diffusion creates a blurry Getty-like logo are particularly damning. 

Adobe has sidestepped these kinds of copyright problems by training Firefly on hundreds of millions of Adobe Stock images, as well as openly licensed and public domain content. It protects creators from any potential copyright problems, especially if they intend to use generated content for commercial purposes. 

The llama is so stylish.
This llama is stylish. Adobe

Similarly, Costin says that Adobe has dealt with the potential biases in its training data by designing Firefly to deliberately generate diverse images of people of different ages, genders, and ethnicities. “We don’t want to carry over the biases in the data,” he says, adding that Adobe has proactively addressed the issue. Of course, you can still prompt the AI to render a specific thing, but when left to its own devices it should hopefully avoid producing biased results. 
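
Adobe hasn’t described the mechanism, but one common way to deliberately diversify output when a prompt doesn’t specify a person’s attributes is to sample those attributes before generation. The sketch below is an assumption about how such a step could look, not Firefly’s implementation:

```python
import random

# Hypothetical attribute pools; a production system would be far more nuanced.
AGES = ["young adult", "middle-aged", "older adult"]
GENDERS = ["woman", "man", "nonbinary person"]

def diversify_prompt(prompt: str, rng: random.Random) -> str:
    """If the prompt asks for a generic person, sample demographic attributes
    uniformly so repeated generations vary rather than defaulting to one look."""
    if "person" in prompt and not any(g in prompt for g in GENDERS):
        return prompt.replace("person", f"{rng.choice(AGES)} {rng.choice(GENDERS)}", 1)
    return prompt

rng = random.Random(42)
print([diversify_prompt("a person hiking at sunrise", rng) for _ in range(3)])
```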

While Firefly is launching in beta, Adobe has big plans. “The world is going to be transformed by AI,” says Costin, and Adobe intends to be part of it. 

Going forward, Adobe wants a future where creators are able to train their own AI models on their work, and where generative AIs integrate seamlessly across its full range of products. In theory, this would allow artists to generate whatever assets they needed right in Photoshop or Illustrator, and treat them as they do any other image or block of text. 

If you want to check Firefly out, you can apply to join the beta now.

The post Adobe built its Firefly AI art generator to avoid bias and copyright issues appeared first on Popular Science.

TikTok is taking on the conspiracy theorists https://www.popsci.com/technology/tiktok-guideline-updates-ai-climate/ Tue, 21 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=521524
TikTok app information icon on Apple iPhone 8 close-up
TikTok's rule updates arrive ahead of the CEO's congressional testimony this week. Deposit Photos

Climate change denial and 'synthetic media' take the spotlight in the company's latest guidelines.


TikTok announced a number of updates to its community guidelines on Tuesday, including how it will address misinformation, AI-generated art, and deepfakes. The revamped rulebook and classifications go into effect on April 21. The changes arrive amid mounting political pressure from Western lawmakers in the US and UK, alongside an impending congressional testimony from TikTok CEO Shou Zi Chew regarding alleged security concerns within one of the world’s most popular social media platforms.

Perhaps the most noticeable addition comes in the form of TikTok’s new guidelines section dedicated to the rapid proliferation of “synthetic media,” such as altered videos and deepfakes. Although TikTok “welcome[s] the creativity that new artificial intelligence (AI) and other digital technologies may unlock,” it acknowledges these tools often blur the lines between reality and fiction. Beginning next month, all deepfaked or otherwise altered content must be labeled as such through a sticker or caption.

[Related: Why some US lawmakers want to ban TikTok.]

Additionally, a wholesale ban on using the likeness of “any real private figure” will be initiated in April. Public figures, meanwhile, are granted more leeway due to their high profiles and societal relevance. That said, content including a deepfaked politician or celebrity cannot “be the subject of abuse,” or used to mislead audiences on political or societal issues. As The Verge also noted on Tuesday, TikTok’s prior stance on deepfakes was summed up by a single line banning uploads which “mislead users by distorting the truth of events [or] cause significant harm to the subject of the video.”

Notably, the company is also instituting a new section explicitly addressing the proliferation of climate misinformation. Any content that “undermines well-established scientific consensus” regarding the reality of climate change and its contributing factors is prohibited. As TechCrunch explains, conversations on climate change are still permitted, including the pros and cons of individual policies and technologies, as long as they do not contradict scientific consensus. Last year, at least one study showed that TikTok search results were inundated with climate change misinformation and denialism. The new hard line on misinformation apparently extends beyond climate disinfo, as well. In a separate section, TikTok explains content will be ineligible for users’ For You Feed if it “contains general conspiracy theories or unverified information related to emergencies.”

[Related: US government gives TikTok an ultimatum, warning of ban.]

These and other changes come as TikTok weathers increasingly intense criticisms and scrutiny over its data security, with lawmakers citing issues with the social media platform’s China-based parent company, ByteDance. Last week, the Biden administration issued its starkest warning yet, urging the platform’s Chinese national owners to sell their shares or face a wholesale ban on the app. The announcement came after moves to ban the social media platform from all US government devices—a decision echoed recently in the UK and the Netherlands, as well. Critics of the hardline stances point towards the larger data insecurities within the digital ecosystem.

In a statement released last week, the digital rights advocacy group Electronic Frontier Foundation conceded that “TikTok raises special concerns, given the surveillance and censorship practices of its home country, China,” but contended that the solution isn’t a single business or company ban: “Rather, we must enact comprehensive consumer data privacy legislation.”

The post TikTok is taking on the conspiracy theorists appeared first on Popular Science.

The tricky search for just the right amount of automation in our cars https://www.popsci.com/technology/alliance-innovation-lab-autonomy-tech/ Mon, 20 Mar 2023 22:00:00 +0000 https://www.popsci.com/?p=521306
the nissan ariya
The Ariya, an EV. Nissan

The director of the Alliance Innovation Lab wants there to always be a human in the loop when it comes to vehicles that can drive themselves.


Nestled in the heart of California’s high-tech Silicon Valley is the Alliance Innovation Lab, where Nissan, Renault, and Mitsubishi work in partnership. The center is a cradle-to-concept lab for projects related to energy, materials, and smart technologies in cities, all with an eye toward automotive autonomy.

Maarten Sierhuis, the global director of the laboratory, is both exuberant and realistic about what Nissan has to offer as electric and software-driven vehicles go mainstream. And it’s not the apocalyptic robot-centric future portrayed by Hollywood in movies like Minority Report.

“Show me an autonomous system without a human in the loop, and I’ll show you a useless system,” Sierhuis quips to PopSci. “Autonomy is built by and for humans. Thinking that you would have an autonomous car driving around that never has to interact with any person, it’s kind of a silly idea.”

Lessons from space

Educated at The Hague and the University of Amsterdam, Sierhuis is a specialist in artificial intelligence and cognitive science. For more than a dozen years, he was a senior research scientist for intelligent systems at NASA. There, he collaborated on the invention of a Java-based programming language and human behavior simulation environment used at NASA’s Mission Control for the International Space Station.

Based on his experience, Sierhuis says expecting certain systems to fail is wise. “We need to figure there is going to be failure, so we need to design for failure,” he says. “Now, one way to do that—and the automotive industry has been doing this for a long time—is to build redundant systems. If one fails, we have another one that takes over.”

[Related: How Tesla is using a supercomputer to train its self-driving tech]

One vein of research has Nissan partnering with the Japan Aerospace Exploration Agency (JAXA) to develop an uncrewed rover prototype for NASA. Based on Nissan’s EV all-wheel drive control technology (dubbed e-4ORCE) used on the brand’s newest EV, Ariya, the rover features front and rear electric motors to navigate challenging terrain. 

Sierhuis calls the Ariya Nissan’s most advanced vehicle to date. It is a stepping stone toward combining all the technology the lab is working on in one actual product. He and the team have switched from using a Leaf to an Ariya for its hands-on research, even simulating lunar dust to test the system’s capabilities in space.

‘There is no autonomy without a human in the loop’

There is an air of distrust of autonomous technology from some car buyers, amplified by some high-profile crashes involving Tesla’s so-called “Full Self-Driving” vehicles.

“It’s hard for OEMs to decide where and how to bring this technology to market,” Sierhuis says. “I think this is part of the reason why it’s not there yet, because is it responsible to go from step zero or step one to fully autonomous driving in one big step? Maybe that’s not the right way to teach people how to interact with autonomous systems.”

From the lab team’s perspective, society is experiencing a learning curve, so the team is ensuring that the technology is rolled out gradually and responsibly. Nissan’s approach is to carefully calibrate its systems so the car doesn’t take over. Computing is developed for people, Sierhuis says, and people should always be at the center of it. That goes beyond the system itself; driving should still be fun.

“There is no autonomy without a human in the loop,” he says. “You should have the ability to be the driver yourself and maybe have the autonomous system be your co-driver, making you a better driver, and then use autonomy when you want it and use the fun of driving when you want it. There shouldn’t be an either-or.”

[Related: Why an old-school auto tech organization is embracing electrification]

The Ariya is equipped with Nissan’s latest driver-assist suite, enhanced by seven cameras, five millimeter-wave radars and 12 ultrasonic sonar sensors for accuracy. A high-definition 3D map predicts the road surface, and on certain roads, Nissan says the driver can take their hands off the wheel. That doesn’t mean a nap is in order, though; a driver-attention monitor ensures the driver is still engaged.

New driver assistance technologies raise questions about the relationship between technology and drivers-to-be: What if someone learns how to drive with a full suite of autonomous features and then tries to operate a car that doesn’t have the technology; are they going to be flummoxed? Ultimately, he says, this is a topic the industry hasn’t fully worked through yet.

Making cities smarter

The Alliance Innovation Lab is also studying the roads and cities where EVs operate. So-called “smart cities” integrate intelligence not just into the cars but into the infrastructure, enabling the future envisioned by EV proponents. Adding intelligence to the environment means, for example, that an intersection can be programmed to interface with a software-enabled vehicle making a right-hand turn toward a crosswalk where pedestrians are present. The autonomous system can alert the driver to a potentially dangerous situation and protect both the driver and those in the vicinity from tragedy.  
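
No vehicle-to-infrastructure message formats are specified here, so the sketch below only illustrates the scenario described: an instrumented intersection broadcasting pedestrian presence, and a car planning a right turn deciding whether to warn its driver. Every type and field name is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IntersectionBroadcast:
    """Hypothetical message an instrumented intersection might send to nearby cars."""
    intersection_id: str
    pedestrians_in_crosswalk: set  # e.g. {"north", "east"}

@dataclass
class PlannedManeuver:
    intersection_id: str
    turn: str               # "right", "left", or "straight"
    crosswalk_crossed: str  # which crosswalk the maneuver sweeps through

def should_warn_driver(msg: IntersectionBroadcast, plan: PlannedManeuver) -> bool:
    # Warn only when the broadcast is for this intersection and a pedestrian
    # occupies the crosswalk the planned turn will cross.
    return (msg.intersection_id == plan.intersection_id
            and plan.crosswalk_crossed in msg.pedestrians_in_crosswalk)

msg = IntersectionBroadcast("5th_and_main", {"east"})
plan = PlannedManeuver("5th_and_main", turn="right", crosswalk_crossed="east")
print(should_warn_driver(msg, plan))  # True -> alert the driver
```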

Another way to make cities smarter is by improving the efficiency of power across the board. According to the Energy Information Administration (EIA), the average home consumes about 20 kilowatt-hours per day. Nissan’s new Ariya is powered by an 87-kilowatt-hour battery, which is enough to power a home for four days. Currently, Sierhuis says, we have a constraint optimization problem: car batteries can store a fantastic amount of power that can be shared with the grid in a bi-directional way, but we haven’t figured out how to do that effectively.  
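
That “four days” figure is just the battery capacity divided by the EIA average, ignoring inverter losses and any reserve the car would keep for driving:

```python
battery_kwh = 87       # Ariya's larger battery option
home_daily_kwh = 20    # EIA average US household consumption
print(f"{battery_kwh / home_daily_kwh:.1f} days of backup")  # ~4.4 days
```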

On top of that, car batteries use power in larger bursts than inside homes, and the batteries have limited use before they must be retired. However, that doesn’t mean the batteries are trash at that point; on the contrary, they have quite a bit of energy potential in their second life. Nissan has been harnessing both new and used Leaf batteries to work in tandem with a robust solar array to power a giant soccer stadium (Johan Cruijff Arena) in Amsterdam since 2018. In the same year, Nissan kicked off a project with the British government to install 1,000 vehicle-to-grid charging points across the United Kingdom. It’s just a taste of what the brand and its lab see as a way to overcome infrastructure issues erupting around the world as EVs gain traction.

Combining EV batteries and smart technology, Nissan envisions a way for vehicles to communicate with humans and the grid to manage the system together, in space and here on Earth.

The post The tricky search for just the right amount of automation in our cars appeared first on Popular Science.

Why the Air Force wants 1,000 new combat drones https://www.popsci.com/technology/air-force-wants-one-thousand-combat-drones/ Mon, 20 Mar 2023 11:00:00 +0000 https://www.popsci.com/?p=520772
An XQ-58A Valkyrie drone seen launching in 2020 in Arizona.
An XQ-58A Valkyrie drone seen launching in 2020 in Arizona. Joshua King / US Air Force

The goal is to have many uncrewed aircraft that can act as teammates for more expensive fighter jets flown by people.


The Air Force is asking Congress for 1,000 new combat drones to accompany planes into battle. The announcement, from Air Force Secretary Frank Kendall, came March 7, as part of a broader push for Air Force modernization. It fits into a broader plan to combine crewed fighters, like F-35s and new designs, with drone escorts, thus expanding the scope of what the Air Force can do without similarly increasing the demand for new pilots.

Kendall spoke at the Air and Space Forces Association Warfare Symposium in Aurora, Colorado. The speech focused on what the Air Force can and must do to remain competitive with China, which Kendall referred to as “our pacing challenge.” While the Air Force can outline its expectations and desires in a budget, it is ultimately up to Congress to set the funding sought by the military. That means Kendall’s call for 1,000 drones isn’t just an ask, it has to be a sales pitch.

“The [Department of the Air Force] is moving forward with a family of systems for the next generation of air dominance, that will include both the NGAD platform and the introduction of uncrewed collaborative aircraft to provide affordable mass and dramatically increased cost-effectiveness,” said Kendall. By NGAD (Next Generation Air Dominance), Kendall was referring to a concept for future fighter planning, where a new crewed fighter plane heads a family of systems that includes escort drones. One of these potential drone escorts is called the Collaborative Combat Aircraft, or CCA.

This Collaborative Combat Aircraft fits with the broader plans of the Air Force to augment and expand the number of aircraft it has by having drones fly as escorts and accessories to crewed and piloted fighters. These fighters include the existing and expanding inventory of F-35A stealth jets, as well as the next generation of planes planned for the future.

Kendall broke down the math like this: “[General Charles Q. Brown] and I have recently given our planners a nominal quantity of collaborative combat aircraft to assume for planning purposes. That planning assumption is 1,000 CCAs,” said Kendall. “This figure was derived from an assumed two CCAs per 200 NGAD platforms [equalling 400 drones], an additional two for each of 300 F-35s, for a total of a thousand.” 
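
Spelled out, the planning math on the numbers Kendall gave is simple arithmetic:

```python
ngad_escorts = 2 * 200  # two CCAs for each of 200 NGAD platforms
f35_escorts = 2 * 300   # two CCAs for each of 300 F-35s
print(ngad_escorts, f35_escorts, ngad_escorts + f35_escorts)  # 400 600 1000
```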

One reason for the Air Force to pursue drone escorts is that they can expand what the planes can do without requiring another expensive craft or a vulnerable pilot. Stealth on an F-35A jet fighter protects the pilot and the $78 million plane. If a drone can fly alongside a plane, help it on missions, and cost a fraction of the crewed fighter, then it may make more sense for the drones to be, if not disposable, somewhat more expendable.

Previously, the Air Force referred to this as “attritable,” a term coined to suggest the drones could be lost to combat (attrition), without emphasizing that the drones were built specifically to be lost. In Kendall’s remarks on March 7, he instead used the term “affordable mass,” which emphasizes the way these drones will increase the numbers of aircraft an enemy has to defeat in order to stop an aerial attack.

“One way to think of CCAs is as remotely controlled versions of the targeting pods, electronic warfare pods, or weapons now carried under the wings of our crewed aircraft. CCAs will dramatically improve the performance of our crewed aircraft and significantly reduce the risk to our pilots,” said Kendall.

In this way, a drone escort flying alongside a fighter is just an extra set of bombs, cameras, missiles, or jammers, all in a detached body flying as an escort to the fighter. In 2017, the Air Force announced an attritable drone escort, using the Valkyrie built for the task by target drone maker Kratos. 

The first Valkyrie is already a museum piece, but it represents a rough overview of the kind of cost and functions the Air Force may want in a Collaborative Combat Aircraft. Priced at around $2 million, a Valkyrie is not cheap, but it is much cheaper than the fighters it would fly alongside. As designed, it can fly for up to 3,400 miles, with a top speed of 650 mph. That would make it capable of operating in theater with a fighter, with escorts likely delivered to bases by ground transport and then synched up with the fighters before missions.

Getting drones to fly alongside crewed planes has been part of the Air Force’s Loyal Wingman program, which shifts the burden of flying onto onboard systems in the drone. Presently, drones used by the US, like the MQ-9 Reaper that crashed into the Black Sea, are labor-intensive, crewed by multiple shifts of remote pilots. To make drones labor-saving, they will need to work much like a human companion, receiving commands from a squad leader but independent enough to execute those commands without human hands on the controls. The Air Force is experimenting with AI piloting of jets, including having artificial intelligence fly a crewed F-16 in December.

Whatever shape these loyal wingmates end up taking, by asking for them in bulk, Kendall is making a clear bid. The age of fighter pilots in the Air Force may not be over, but for the wars of the future, they will be joined by robots as allies.

The post Why the Air Force wants 1,000 new combat drones appeared first on Popular Science.

5 ways you can use the iPhone Shortcuts app to improve your life https://www.popsci.com/diy/iphone-shortcuts-app/ Sat, 18 Mar 2023 15:00:00 +0000 https://www.popsci.com/?p=520601
A person holding an unlocked iPhone, with apps on the screen.
With the iPhone Shortcuts app, you can automate many tasks. Adrien / Unsplash

These iOS shortcuts will make your phone even more powerful.

The post 5 ways you can use the iPhone Shortcuts app to improve your life appeared first on Popular Science.


The Shortcuts app has long been one of Apple’s lesser-used offerings, and that’s a shame, because it can supercharge your iPhone’s capabilities. With a single tap, you could turn on your smart lights, raise the temperature on your smart thermostat, and start an energetic playlist—and that’s just one example.

Like the macOS version, the mobile Shortcuts app is essentially an automation tool that can combine many tasks into one action that you launch with a tap or a word to Siri. Introduced in 2018 alongside iOS 12, the app is built into the iPhone’s operating system, so you’ll find it on one of your home screens or in the App Library. From the app’s Shortcuts tab, you can start manually building your own shortcut, or install a prebuilt one from the integrated shortcut gallery or the web.

1. Get yourself in the mood to focus

The iOS Reading Mode shortcut for the iPhone Shortcuts app.
The Reading Mode shortcut starts by asking you how long you’d like to read for. David Nield

A good way to start using Shortcuts is to install one made by someone else—you can always open the tool up to see how it works and adapt it to your needs if necessary. Go to the Gallery tab in the app, look for a shortcut called Reading Mode, and install it by tapping the plus icon on the shortcut’s thumbnail.

Reading Mode is a great example of how shortcuts combine multiple actions: It turns on Do Not Disturb, switches to dark mode, opens your reading app of choice and even starts the Apple Music playlist you specify. Tap the three dots on the shortcut entry once it’s installed to customize any of these actions.

2. Edit images in batches

A DIY iPhone shortcut you can use to resize images in bulk.
Shortcuts can quickly apply the same edits to multiple images. David Nield

As you get more confident with Shortcuts, you can start building your own—it helps to think about tasks and groups of actions you do repeatedly on your phone. To create a shortcut from the Shortcuts tab, tap the plus icon in the top right corner and hit Add Action to get started.

[Related: Edit gorgeous photos right on your phone]

If you want to, say, automate a series of photo edits you do again and again, choose Apps, Photos, then Select Photos. From the All Actions list, pick Resize Image as the next action, and enter the desired size. The final action is Save to Photo Album. Tap Done to save your shortcut.

That’s a basic example, but there are lots of other image editing actions available, including the ability to remove backgrounds and rotate images, so you can combine them as you need.

3. Use ChatGPT with Siri

An iOS shortcut that allows you to use ChatGPT on an iPhone.
Put ChatGPT on your iPhone, with help from Siri. David Nield

It’s difficult to avoid ChatGPT at the moment, and the AI chatbot can be used in tandem with Siri on your iPhone. First, register a free account with ChatGPT developer OpenAI, then grab an API key from its site. API (application programming interface) keys are simply identification codes that let one program (Shortcuts) work with another (ChatGPT).

With the key in hand, open the SiriGPT shortcut in your iPhone’s web browser and tap Get Shortcut. In the Shortcuts app, select Set Up Shortcut, paste or type in your API key, and choose Add Shortcut.

If you’d like to launch this shortcut using your voice, you’ll probably want to rename it to something simpler—do so by pressing and holding it, then picking Rename. Launch the shortcut with a tap or a voice command, and ChatGPT will be at your disposal.
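
For the curious, the snippet below sketches roughly what a shortcut like SiriGPT does behind the scenes: it sends your prompt, authorized by the API key, to OpenAI's chat completions endpoint. This is an illustrative Python example rather than the shortcut's actual implementation, and the model name and prompt are placeholders.

```python
# Minimal sketch of an OpenAI API call, assuming you already have an API key.
# Illustrative only; this is not the SiriGPT shortcut's actual code.
import requests

API_KEY = "sk-..."  # placeholder for your key from the OpenAI dashboard

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Give me a fun fact about the Moon."}],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```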

4. Let someone know when you’ll be home

The ETA tool for the iPhone Shortcuts app, which lets people know when you'll arrive.
Shortcuts can text on your behalf. David Nield

One of the benefits of running shortcuts on your iPhone rather than your Mac is that it provides a much more precise fix on your location. That can come in handy for all kinds of automations.

Head to the Gallery tab and look for the Home ETA shortcut. When you install and run it, it’ll work out how long it should take you to drive home, and then text your estimated time of arrival to the contact of your choice.

[Related: 14 tricks for getting more out of the underrated Apple Maps app]

Open the shortcut from the Shortcuts tab by tapping on the three dots on its entry, and you’ll be able to easily change the address the shortcut defaults to, as well as the contact(s) who receive the message about your ETA.

5. Look back on your day

The iOS shortcut that lets you reflect on your day.
You can choose the questions and responses to help you reflect on your day. David Nield

If you tap Gallery in the Shortcuts app and use the search function, you should find a shortcut called Reflect on the Day that does exactly what its name suggests. You’ll be asked to answer questions about how your day has been, and you can also set goals for tomorrow.

Your responses will be stored in the Notes app, so you can track your progress over time, and the shortcut will also set reminders for the next day so you don’t forget your goals. To edit the questions the shortcut asks you at the end of each day, open it up by tapping the three dots on its thumbnail.

The post 5 ways you can use the iPhone Shortcuts app to improve your life appeared first on Popular Science.

Google’s AI doctor appears to be getting better https://www.popsci.com/technology/google-health-ai-doctor-update/ Thu, 16 Mar 2023 22:00:00 +0000 https://www.popsci.com/?p=520348
Dr. Alan Karthikesalingam presenting at the Google health event.
Dr. Alan Karthikesalingam presenting at the Google health event. Google / YouTube

It's all part of the company's grand mission to make personalized health info more accessible.

The post Google’s AI doctor appears to be getting better appeared first on Popular Science.


Google believes that mobile and digital-first experiences will be the future of health, and it has stats to back it up—namely the millions of questions asked in search queries, and the billions of views on health-related videos across its video streaming platform, YouTube. 

The tech giant has nonetheless had a bumpy journey in its pursuit to turn information into useful tools and services. Google Health, the official unit that the company formed in 2018 to tackle this issue, dissolved in 2021. Still, the mission lived on in bits across YouTube, Fitbit, Health AI, Cloud, and other teams. 

Google is not the first tech company to dream big when it comes to solving difficult problems in healthcare. IBM, for example, is interested in using quantum computing to get at topics like optimizing drugs targeted to specific proteins, improving predictive models for cardiovascular risk after surgery, and cross-searching genome sequences and large drug-target databases to find compounds that could help with conditions like Alzheimer’s.

[Related: Google Glass is finally shattered]

At Google’s third annual health event on Tuesday, called “The Check Up,” company executives provided updates on a range of health projects they have been working on internally and with partners. From a more accurate AI clinician to added vitals features on Fitbit and Android, here are some of the key announcements. 

A demo of how Google’s AI can be used to guide pregnancy ultrasound. Charlotte Hu

For Google, previous research at the intersection of AI and medicine has covered areas such as breast cancer detection, skin condition diagnoses, and the genomic determinants of health. Now, it’s expanding its AI models to include more applications, such as cancer treatment planning, finding colon cancer from images of tissues, and identifying health conditions on ultrasound. 

[Related: Google is launching major updates to how it serves health info]

Even more ambitiously, instead of using AI for a specific healthcare task, researchers at Google have also been experimenting with using a generative AI model, called Med-PaLM, to answer commonly asked medical questions. Med-PaLM is based on a large language model Google developed in-house called PaLM. In a preprint paper published earlier this year, the model scored 67.6 percent on a benchmark test containing questions from the US Medical Licensing Exam. 

At the event, Alan Karthikesalingam, a senior research scientist at Google, announced that with the second iteration of the model, Med-PaLM 2, the team has bumped its accuracy on medical licensing questions to 85.4 percent. According to clinician reviews, Med-PaLM’s answers are sometimes less comprehensive than those of human physicians, but they are generally accurate, he said. “We’re still learning.” 

An example of Med-PaLM’s evaluation. Charlotte Hu

Elsewhere in the language model realm, although it’s not the buzzy new Bard, a conversational AI called Duplex is being employed to verify whether providers accept federal insurance like Medicaid, boosting a key search feature Google first unveiled in December 2021. 

[Related: This AI is no doctor, but its medical diagnoses are pretty spot on]

On the consumer hardware side, Google devices like Fitbit, Pixel, and Nest will now be able to provide users with an extended set of metrics regarding their heart rate, breathing, skin temperature, sleep, stress, and more. For Fitbit, the sensors are more evident. But the cameras on Pixel phones, as well as the motion and sound detectors on Nest devices, can also give personal insights on well-being. Coming to Fitbit’s sleep profile feature is a new metric called stability, which tells users when they’re waking up in the night by analyzing their movement and heart rate. Google also plans to make a lot more of its health metrics available to users with compatible devices without a subscription, including respiration, which uses a camera and non-AI algorithms to detect movement and track pixels, and heart rate, which relies on an algorithm that measures changes in skin color. 

Users can take their pulse by placing their fingertip over the back cameras of their Pixel phones. Charlotte Hu
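
Google hasn't published the exact algorithm, so treat the following as a rough sketch of the general technique (often called remote photoplethysmography) rather than Pixel's actual code: average the green channel of each video frame, then find the dominant frequency of that signal. The function name and the synthetic test clip are illustrative assumptions.

```python
# Simplified sketch of camera-based pulse estimation; not Google's actual algorithm.
import numpy as np

def estimate_bpm(frames: np.ndarray, fps: float) -> float:
    """frames: array of shape (num_frames, height, width, 3), RGB video of a fingertip."""
    # Skin color pulses slightly with each heartbeat; the green channel shows it best.
    green = frames[..., 1].mean(axis=(1, 2))      # one brightness value per frame
    green = green - green.mean()                  # remove the constant offset

    spectrum = np.abs(np.fft.rfft(green))         # frequency content of the signal
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)

    # Only consider plausible heart rates (roughly 40 to 200 beats per minute).
    band = (freqs >= 0.7) & (freqs <= 3.3)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                       # beats per minute

# Example: 10 seconds of synthetic 30 fps "video" pulsing at 1.2 Hz (72 bpm).
t = np.arange(300) / 30.0
fake = 128 + 2 * np.sin(2 * np.pi * 1.2 * t)
frames = np.tile(fake[:, None, None, None], (1, 8, 8, 3))
print(round(estimate_bpm(frames, fps=30.0)))  # ~72
```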

This kind of personalization around health will hopefully allow users to get feedback on long-term patterns and events that may deviate from their normal baseline. Google is testing new features too, like an opt-in function for identifying who coughed, in addition to counting and recording coughs (both of which are already live) on Pixel. Although it’s still in the research phase, engineers at the company say that this feature can register the tone and timbre of a cough as a vocal fingerprint for different individuals. 


The post Google’s AI doctor appears to be getting better appeared first on Popular Science.

The next version of ChatGPT is live—here’s what’s new https://www.popsci.com/technology/chatgpt-openai-microsoft/ Tue, 14 Mar 2023 19:30:00 +0000 https://www.popsci.com/?p=519505
Hands typing on laptop keyboard with screen displaying ChatGPT homepage

The release comes as Microsoft also revealed that users are already interacting with the new AI via Bing.

The post The next version of ChatGPT is live—here’s what’s new appeared first on Popular Science.


On Tuesday, OpenAI announced the long-awaited arrival of ChatGPT-4, the latest iteration of the company’s high-powered generative AI program. ChatGPT-4 is touted as possessing the ability to provide “safer and more useful responses,” per its official release statement, as well as the ability to accept both text and image inputs to parse for text responses. It is currently only available via a premium ChatGPT Plus subscription, or by signing up for waitlist access to its API. In a preview video available on the company’s website, developers also highlight its ability to work with upwards of 25,000 words—around eight times more than GPT-3.5’s limit.

[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]

OpenAI says this version is stronger than its predecessor in a number of ways. Based on internal evaluations after six months of fine-tuning, OpenAI promises an 82 percent reduction in the likelihood of responding to “disallowed content,” as well as a 40 percent increase in its ability to produce factually true answers when compared to GPT-3.5. In support of these improvements, OpenAI writes in its blog post that GPT-4 scores in the 88th percentile or above on tests including the LSAT, SAT Math, SAT Evidence-Based Reading and Writing exams, and the Uniform Bar Exam. It also earned a 5 on the AP Art History and Biology exams.

Despite its currently limited public access, OpenAI announced that it has already partnered with a number of other companies to integrate ChatGPT-4 into their products, including the language learning app Duolingo, as well as Stripe, Morgan Stanley, and the independent learning website, Khan Academy.

ChatGPT and similar programs like Google Bard and Meta’s LLaMA have dominated headlines in recent months, while also igniting debates regarding algorithmic biases, artistic license, and misinformation. Seemingly undeterred by these issues, Microsoft has invested an estimated $11 billion into OpenAI, and highly publicized ChatGPT’s integration within a revamped version of the Bing search engine.

[Related: The FTC has its eyes on AI scammers.]

Even with its new features, OpenAI CEO Sam Altman has repeatedly stressed that users should temper their expectations for what GPT-4 can and can’t accomplish. “People are begging to be disappointed, and they will be,” Altman said during a recent interview. In a Twitter thread announcing GPT-4, Altman also wrote, “[I]t is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”

On Tuesday, Microsoft also revealed that Bing has been using an earlier version of ChatGPT-4 for at least the past five weeks—during which time it has offered users a host of problematic responses.

The post The next version of ChatGPT is live—here’s what’s new appeared first on Popular Science.

Your next Gmail or Google Doc could be written with help from AI https://www.popsci.com/technology/google-generative-ai-features/ Tue, 14 Mar 2023 19:00:00 +0000 https://www.popsci.com/?p=519523
google apps on iphone screen
Google is adding AI to its products and services. Elle Cartier / Unsplash

Here are the new features the company is planning launch across its key products and services.

The post Your next Gmail or Google Doc could be written with help from AI appeared first on Popular Science.


Today Google unveiled a series of new generative AI features across a range of its products, including Gmail and its other Workspace apps: Google Docs, Sheets, and Slides. The AI-powered features will roll out to trusted testers in the coming weeks, and once they’ve been refined, Google says they will be available more generally. If this feels to you like Google is trying to play catch-up with Microsoft and its multi-billion-dollar investment in, and collaboration with, OpenAI, the developer of ChatGPT and DALL-E 2, well, you wouldn’t be wrong. According to The New York Times, it’s been a “code red” situation inside the company since ChatGPT launched last year, with plans to launch as many as 20 new products to address the perceived gap. 

Still, in Google’s press releases, the company is quick to point out that it’s actually been doing this AI thing for a long time. A blog post from earlier this year lists nine ways that AI is used in the company’s products, including in Google Search, Maps, YouTube, Gmail, and, of course, Ads. Its existing AI features, like Smart Compose and Smart Reply in Gmail, are apparently already “helping 3 billion users.” And we can’t forget about the furor last year when an engineer was fired for claiming that LaMDA, a large language model, was sentient. It’s not Google who’s slow—it’s Microsoft, okay?

While Google announced a number of other features, it’s the generative AI integrations with apps like Gmail and Docs that are the most interesting. And assuming the beta testing goes well, they will likely be used by far more people. 

According to Google, the new features will soon allow you to draft messages, reply to messages, summarize conversations, and prioritize messages in Gmail; brainstorm ideas, get your work proofread, generate text, and rewrite text in Docs; create AI-generated images, audio, and videos in Slides; capture notes and generate backgrounds in Meet; and more easily “go from raw data to insights and analysis” in Sheets (although these features appear not to be connected to Google’s new AI chatbot, Bard).

[Related: Google’s own upcoming AI chatbot draws from the power of its search engine]

In the blog post announcing the new features, Johanna Voolich Wright, vice president of product at Google Workspace, gives a few specific examples. In Docs, she shows the generative AI creating a rough draft of a job post for a regional sales rep, and in Gmail she shows it turning a short bulleted list into a formal email. Voolich Wright suggests these features would work whether you’re “a busy HR professional who needs to create customized job descriptions, or a parent drafting the invitation for your child’s pirate-themed birthday party.”

Voolich Wright is at pains to say that these features are meant to be you collaborating with AI, not letting it just do its own thing. “As we’ve experimented with generative AI ourselves, one thing is clear,” she writes. “AI is no replacement for the ingenuity, creativity, and smarts of real people.” In accordance with Google’s AI Principles, the generative AI is meant to do things like create first drafts that you edit and perfect, not publishable copy. You, the user, are meant to stay in control. 

While these examples are cool and genuinely seem useful, all we have to go on right now is Google’s own announcement posts and demo videos. These tools aren’t yet available even to testers, so it’s important to treat the listed features and the examples Google gives with a bit of skepticism. We’re not saying that AI wasn’t used to generate the text in the demos Voolich Wright shows off, but they could just as easily have been written by an intern in the marketing department as an example of what Google would like the new features to be able to do. 

Still, Google has a legitimately world class AI research division and has been working on these sorts of features for more than six years. It might just be able to successfully integrate generative AI tools into some of its most popular products—and make them useful.

The post Your next Gmail or Google Doc could be written with help from AI appeared first on Popular Science.

Microsoft lays off entire AI ethics team while going all out on ChatGPT https://www.popsci.com/technology/microsoft-ai-team-layoffs/ Tue, 14 Mar 2023 17:00:00 +0000 https://www.popsci.com/?p=519457
Microsoft logo outside office headquarters
The Ethics & Society team helped translate broad AI initiatives for product developers. Deposit Photos

A new report indicates Microsoft will expand AI products, but axe the people who make them ethical.

The post Microsoft lays off entire AI ethics team while going all out on ChatGPT appeared first on Popular Science.


This month saw the surprise dissolution of Microsoft’s entire Ethics & Society team—the latest casualty in the company’s ongoing layoffs affecting 10,000 employees, or roughly 5 percent of its entire global workforce. As first reported by The Verge on Monday, the news allegedly came after Microsoft’s corporate vice president of AI assured remaining employees that their jobs were safe. Once a 30-member department, the Ethics & Society team had been reduced to just seven people in October 2022 following an internal reorganization.

The move strikes many experts as worrisome, especially now—Microsoft’s Ethics & Society department was responsible for ensuring the company’s principles pertaining to artificial intelligence development were reflected in product designs. Most recently, The Verge explained the group worked to identify risks within the company’s plans to rapidly integrate OpenAI’s tech into its product suite. Microsoft has so far invested over $11 billion in the AI startup.

[Related: No, the AI chatbots (still) aren’t sentient.]

Amid the multiple waves of dramatic layoffs that have roiled Big Tech in recent months, Microsoft first announced plans in January to ax approximately 10,000 jobs from its global workforce by March 2023. The major cutback occurred as a new “AI arms race” kicked off between companies including Microsoft, Google, and Meta. All three and others are rushing to deliver on their lofty promises of chat programs, text generators, and revolutionary online search aids to consumers. Industry observers continue to urge caution against wantonly unleashing hastily tested, frequently problematic generative AI software.

“I am concerned about the timing of this decision, given that Microsoft has partnered with OpenAI and is using ChatGPT in its search engine Bing and across other services,” Duri Long, an assistant professor in communications focusing on human/AI interaction at Northwestern University, writes to PopSci via email. “This technology is new, and we are still learning about its implications for society. In my opinion, dedicated ethics teams are vital to the responsible development of any technology, and especially so with AI.” 

Microsoft still maintains a separate Office of Responsible AI, which is tasked with determining the principles and guidelines that oversee artificial intelligence initiatives, but a gap remains between that office’s high-level guidance and how product teams translate it into their own projects. “People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,’” a former employee told The Verge. “Our job was to show them and to create rules in areas where there were none.”

[Related: The FTC has its eyes on AI scammers.]

“It’s not that [Ethics & Society] is going away—it’s that it’s evolving,” Microsoft’s corporate VP of AI reportedly assured remaining Ethics & Society members following the October 2022 reorg. “It’s evolving toward putting more of the energy within the individual product teams that are building the services and the software, which does mean that the central hub that has been doing some of the work is devolving its abilities and responsibilities.”

This article has been updated to include a quote from Duri Long.

The post Microsoft lays off entire AI ethics team while going all out on ChatGPT appeared first on Popular Science.

Sorting and recycling plastic is notoriously hard—but this AI could help https://www.popsci.com/technology/plastic-recycling-machine-learning/ Tue, 14 Mar 2023 15:00:00 +0000 https://www.popsci.com/?p=519373
Pile of plastic materials for recycling
Recycling plastic is notoriously troublesome, but machine learning could improve accuracy. Deposit Photos

Barely 5 percent of all plastic intended for recycling facilities ends up in a new product.

The post Sorting and recycling plastic is notoriously hard—but this AI could help appeared first on Popular Science.


It’s one of society’s worst kept secrets: Most plastic thrown into the blue and green bins doesn’t actually get recycled. In fact, studies show that barely 5 percent of all plastic intended for recycling facilities makes it through the process and back into new products. There are a number of factors that contribute to this strikingly low number—including contaminated materials, water requirements, and discarded waste—but it’s a problem made even worse by the fact that the average American’s plastic waste consumption has increased 263 percent since 1980.

It’s a serious situation that needs a solution sooner rather than later, and researchers are on the hunt for an efficient and effective fix. As detailed in a paper published in Frontiers in Sustainability, a team at University College London has developed a new machine learning model capable of isolating compostable and biodegradable plastics from conventional varieties to improve recycling efficiency and accuracy.

[Related: Can recycling close the loop on EV batteries?]

Most of today’s plastics fall within a handful of categories possessing different chemical makeups—polyethylene terephthalate (PET) and polypropylene (PP) make up the majority of drinking bottles and food containers, while low-density polyethylene (LDPE) can be found in items like plastic bags and packaging. Meanwhile, compostable options featuring polylactic acid (PLA) and polybutylene adipate terephthalate (PBAT) generally show up in tea bags, magazine wrappings, and coffee cup lids. Finally, biomass-derived plastics from palm leaf and sugar cane are often used for other packaging needs.

Recycling and composting only works well when these variants are properly sorted and handled appropriately. Cross-contamination often dilutes efficacy, wasting valuable time and energy. To improve this, researchers developed a classification system based on hyperspectral imaging (HSI), which scans materials’ chemical signatures to produce a pixel-by-pixel description of samples. A machine learning (ML) program was then trained on this data, and subsequently employed to look at and sort individual pieces of plastic waste.
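
The UCL team's exact model and features aren't reproduced here, but the general recipe behind this kind of classifier is simple to illustrate: treat each pixel's spectrum as a feature vector and train a standard classifier on labeled examples. The sketch below shows that idea with scikit-learn on synthetic stand-in data; the band count, class list, and reported accuracy are placeholders, not the paper's results.

```python
# Minimal sketch of pixel-wise plastic classification from hyperspectral-style data.
# Synthetic stand-in data; the UCL team's actual model and features differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 100          # each pixel gets a 100-band reflectance spectrum
classes = ["PET", "PP", "LDPE", "PLA", "PBAT"]

# Fake spectra: each plastic type gets a slightly different spectral signature plus noise.
labels = rng.integers(len(classes), size=n_pixels)
signatures = rng.normal(size=(len(classes), n_bands))
spectra = signatures[labels] + 0.3 * rng.normal(size=(n_pixels, n_bands))

X_train, X_test, y_train, y_test = train_test_split(spectra, labels, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```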

[Related: How to actually recycle.]

When plastic materials were larger than 10mm by 10mm, the team’s model achieved perfect accuracy in sorting. While the rates dropped—sometimes precipitously—depending on size and material, the ML program’s initial results show immense promise if honed and scaled up to meet industrial demands.

“The advantages of compostable packaging are only realized when they are industrially composted and do not enter the environment or pollute other waste streams or the soil,” Mark Miodownik, a professor of materials and society within UCL’s department of mechanical engineering and paper corresponding author, said in a statement, adding that they “can and will improve it since automatic sorting is a key technology to make compostable plastics a sustainable alternative to recycling.”

The post Sorting and recycling plastic is notoriously hard—but this AI could help appeared first on Popular Science.

Meta attempts a new, more ‘inclusive’ AI training dataset https://www.popsci.com/technology/meta-ai-casual-conversations-v2/ Fri, 10 Mar 2023 17:20:00 +0000 https://www.popsci.com/?p=518786
Meta logo on smartphone resting atop glowing keyboard
Meta won't say how much it paid its newest dataset participants for their time and labor. Deposit Photos

Experts say Casual Conversations v2 is an improvement, but questions remain about its sourcing and labor.

The post Meta attempts a new, more ‘inclusive’ AI training dataset appeared first on Popular Science.


With the likes of OpenAI’s ChatGPT and Google’s Bard, tech industry leaders are continuing to push their (sometimes controversial) artificial intelligence systems alongside AI-integrated products to consumers. Still, many privacy advocates and tech experts remain concerned about the massive datasets used to train such programs, especially when it comes to issues like data consent and compensation from users, informational accuracy, as well as algorithmically enforced racial and socio-political biases. 

Meta hoped to help mitigate some of these concerns via Thursday’s release of Casual Conversations v2, an update to its 2021 AI audio-visual training dataset. Guided by a publicly available November literature review, the data offers more nuanced analysis of human subjects across diverse geographic, cultural, racial, and physical demographics, according to the company’s statement.

[Related: No, the AI chatbots (still) aren’t sentient.]

Meta states v2 is “a more inclusive dataset to measure fairness,” and is derived from 26,467 video monologues recorded in seven countries, offered by 5,567 paid participants from Brazil, India, Indonesia, Mexico, Vietnam, Philippines, and the United States who also provided self-identifiable attributes including age, gender, and physical appearance. Although Casual Conversations’ initial release included over 45,000 videos, they were drawn from just over 3,000 individuals residing in the US and self-identifying via fewer metrics.

Tackling algorithmic biases in AI is a vital hurdle in an industry long plagued by AI products offering racist, sexist, and otherwise inaccurate responses. Much of this comes down to how algorithms are created, cultivated, and provided to developers.

But while Meta touts Casual Conversations v2 as a major step forward, experts remain cautiously optimistic, and urge continued scrutiny for Silicon Valley’s seemingly headlong rush into an AI-powered ecosystem.

“This is [a] space where almost anything is an improvement,” Kristian Hammond, a professor of computer science at Northwestern University and director of the school’s Center for Advancing the Safety of Machine Intelligence, writes in an email to PopSci. Hammond believes Meta’s updated dataset is “a solid step” for the company—especially considering past privacy controversies—and feels its emphasis on user consent and research participants’ labor compensation is particularly important.

“But an improvement is not a full solution. Just a step,” he cautions.

To Hammond, a major question remains regarding exactly how researchers enlisted participants in making Casual Conversations v2. “Having gender and ethnic diversity is great, but you also have to consider the impact of income and social status and more fine-grained aspects of ethnicity,” he writes, adding, “There is bias that can flow from any self-selecting population.”

[Related: The FTC has its eyes on AI scammers.]

When asked about how participants were selected, Nisha Deo of Meta’s AI Communications team told PopSci via email, “I can share that we hired external vendors with our requirements to recruit participants,” and that compensatory rates were determined by these vendors “having the market value in mind for data collection in that location.”

When asked to provide concrete figures regarding pay rates, Meta stated it was “[n]ot possible to expand more than what we’ve already shared.”

Deo, however, additionally stated Meta deliberately incorporated “responsible mechanisms” across every step of data cultivation, including a comprehensive literature review in collaboration with academic partners at Hong Kong University of Science and Technology on existing dataset methodologies, as well as comprehensive guidelines for annotators. “Responsible AI built this with ethical considerations and civil rights in mind and are open sourcing it as a resource to increase inclusivity efforts in AI,” she continued.

For industry observers like Hammond, improvements such as Casual Conversations v2 are welcome, but far more work is needed, especially when the world’s biggest tech companies appear to be entering an AI arms race. “Everyone should understand that this is not the solution altogether. Only a set of first steps,” he writes. “And we have to make sure that we don’t get so focused on this very visible step… that we stop poking at organizations to make sure that they aren’t still gathering data without consent.”

The post Meta attempts a new, more ‘inclusive’ AI training dataset appeared first on Popular Science.

An experimental AI used human brain waves to regenerate images https://www.popsci.com/technology/stable-diffusion-brain-scan-images/ Wed, 08 Mar 2023 21:00:00 +0000 https://www.popsci.com/?p=518192
Researchers trained Stable Diffusion to match brain scans with corresponding image keywords.
Researchers trained Stable Diffusion to match brain scans with corresponding image keywords. Vincent Tantardini on Unsplash

From image, to fMRI scan, to image again.

The post An experimental AI used human brain waves to regenerate images appeared first on Popular Science.


Generative AI programs have gotten better and better at constructing impressively detailed visual images from text inputs, but researchers at Japan’s Osaka University have taken things a major step forward. They enlisted AI to reconstruct accurate, high-resolution images from humans’ brain activity generated while looking at images in front of them.

[Related: A guide to the internet’s favorite generative AIs.]

As recently highlighted by Science and elsewhere, a new paper from a team at Osaka University’s Graduate School of Frontier Biosciences details how they utilized Stable Diffusion, a popular AI image generation program, to translate brain activity into corresponding visual representations. Although there have been many previous, similar thought-to-computer image experiments, this test is the first to employ Stable Diffusion. For additional system training, researchers linked thousands of photos’ textual descriptions to volunteers’ brain patterns detected when viewing the pictures via functional magnetic resonance imaging (fMRI) scans.

Stable Diffusion recreated images seen by humans (above) after translating their brain activity (below). Credit: Graduate School of Frontier Biosciences

Blood flow levels fluctuate within the brain depending on which areas are being activated. Blood traveling to humans’ temporal lobes, for example, helps with decoding information about “contents” of an image, i.e. objects, people, surroundings, while the occipital lobe handles dimensional qualities like perspective, scale, and positioning. An existing online dataset of fMRI scans generated by four humans looking at over 10,000 images was fed into Stable Diffusion, followed by the images’ text descriptions and keywords. This allowed the program to “learn” how to translate the applicable brain activity into visual representations.

[Related: ChatGPT is quietly co-authoring books on Amazon.]

During the testing, for example, a human looked at the image of a clock tower. The brain activity registered by the fMRI corresponded to Stable Diffusion’s previous keyword training, which then fed the keywords into its existing text-to-image generator. From there, a recreated clock tower was further detailed based on the occipital lobe’s layout and perspective information to form a final, impressive image.
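
As a loose illustration of that two-stage idea (decode brain activity into descriptive keywords, then hand those keywords to a text-to-image model), here is a heavily simplified sketch. It is not the Osaka team's actual pipeline; the keyword vocabulary, the linear decoder, the stand-in data, and the model checkpoint are all assumptions made for demonstration.

```python
# Loose illustration only: map brain activity to keywords, then generate an image.
# This is NOT the Osaka team's actual method; data and models here are stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from diffusers import StableDiffusionPipeline

keywords = ["clock tower", "airplane", "teddy bear", "train", "street"]

# Stage 1: learn a linear mapping from fMRI voxel patterns to keyword scores.
rng = np.random.default_rng(0)
voxels = rng.normal(size=(500, 4000))            # 500 training scans, 4,000 voxels each
scores = rng.normal(size=(500, len(keywords)))   # stand-in keyword annotations
decoder = Ridge(alpha=1.0).fit(voxels, scores)

# Stage 2: decode a new scan into a prompt and generate an image from it.
new_scan = rng.normal(size=(1, 4000))
best = keywords[int(np.argmax(decoder.predict(new_scan)))]
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(f"a photo of a {best}").images[0]
image.save("reconstruction.png")
```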

As of right now, the team’s augmented Stable Diffusion image generation is limited to the four-person image database—further testing will require additional participants’ brain scans for training purposes. That said, the team’s groundbreaking advancements show immense promise in areas such as cognitive neuroscience, and as Science notes, could even one day help researchers delve into how other species perceive the environments around them.

The post An experimental AI used human brain waves to regenerate images appeared first on Popular Science.

The best free AI tools you can try right now https://www.popsci.com/diy/free-ai-sites/ Thu, 23 Jun 2022 20:00:00 +0000 https://www.popsci.com/?p=451953
A person on a laptop playing with Craiyon, one of many free AI tools.
Why work when you can reimagine R2-D2 in a Van Gogh painting?. Mart Production / Pexels

Experience the power of a neural network right in your browser.

The post The best free AI tools you can try right now appeared first on Popular Science.


Software developers are keen to show off the latest in artificial intelligence, which is why you’ve probably seen an increase in articles and advertisements about various free AI tools anyone can access through a web browser.

Whether you want to generate weird and wonderful AI images from text prompts or create a musical composition in partnership with a computer, there are now plenty of cool AI websites to explore.

These apps are getting better with time, and they can give you a good idea as to what AI can do and where it might be headed in the future.

Magic Sketchpad

Pros

  • Can be used as a creativity prompt
  • Easy to save your drawings

Cons

  • Can take some practice to get right

If there’s an artist inside you, Magic Sketchpad could help bring it out. This free AI tool is an experiment from a team at Google that gets a neural network to draw along with you. Every time you let go of a line, the platform will respond to your scribble by finishing the drawing according to a set category.

The neural network has been trained on millions of doodles mined from the also highly entertaining Quick, Draw! browser-based game. Start Magic Sketchpad by picking a category from the drop-down list at the top right of your screen—there are plenty available, from frogs to sandwiches. The tool knows the sorts of shapes and lines people tend to make when they’re trying to draw simple concepts like a bird, a ship, or a cat, so it can predict what you’ll draw next and finish the doodle for you.

Magic Sketchpad can also help artists augment their work or provide new prompts for creativity, and as far as AI websites go, it’s one of the most entertaining. Maybe one day we could see computers doodling as well as humans do.

AI Duet

Pros

  • Not just for musicians
  • You’ll get results quickly

Cons

  • No export options
  • Can struggle with rhythm

If you’re more of a musician than a sketcher, AI Duet might suit you better. Built by an engineer at Google, AI Duet puts a keyboard down at the bottom of your screen and produces an automatically generated response based on what you play on it. You can click the keys on your screen, hit them on your keyboard, or even connect a MIDI keyboard to your computer.

A traditional approach to a project like this would have involved a programmer coding in hundreds or even thousands of responses to specific patterns a user might play. But AI Duet comes up with its own responses based on a huge database of tunes it has trained on. This gives the program the ability to generate melodies that match a user’s input without any specific instructions.

[Related: These music recording apps are your first step to winning a Grammy]

This is another example of how AI can work with artists to produce new creations, whether that’s for movie soundtracks or background music in games. Theoretically, you could rework one riff an endless number of times.

Craiyon

Pros

  • Unlimited images
  • Each prompt creates multiple responses
  • Lots of flexibility

Cons

  • Results can take a while

By now, there’s a high chance you’ve seen the creations of AI image generator Craiyon, formerly known as Dall-E Mini. Essentially, it’s a neural network that turns text inputs into images—you type what you want to see, and the system generates it.

It is as simple as typing out what you want to see in the box at the top and clicking Draw. As far as free AI tools go, it couldn’t be much more straightforward.

You can combine two of your favorite fictional characters in a setting of your choosing, or reimagine a famous work of art in a different style—you’ll soon figure out which prompts work best.

To generate images, Craiyon pulls in information from millions of photos online and their captions. That means it has a vast visual knowledge of everything from celebrities to national landmarks.

The results produced by Craiyon are a little rough around the edges for now, but it’s not difficult to see how we could eventually use this technology to generate highly realistic images from scratch using only a text prompt. For faster responses and no ads, you can pay from $5 a month for a paid plan.

Even Stranger Things

Pros

  • No graphic design experience needed
  • One of the quickest ways to experience AI

Cons

  • Only really has one trick

Even Stranger Things is worth a look even if you’re not a fan of the Netflix show that inspired it. The platform lets you submit a photo of anything you like and turns it into a Stranger Things-style poster.

The site was built by creative technologist David Arcus, and it taps into the Google Cloud Vision API, a machine learning system trained to recognize images based on a vast database. So by processing thousands of pictures of dogs, for example, the AI learns to more accurately spot a dog in other photos.

Even Stranger Things will try to identify what’s in the picture you’ve submitted and incorporate it into the finished design, usually with broadly accurate results.

It’s quite a simple AI tool, but it shows how we can use databases to teach machines to spot new patterns that aren’t in their training materials. The platform is also a great example of how algorithms can apply a particular visual style to photos to create something new.
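
For a sense of what a site gets back from the Cloud Vision API, here is a minimal sketch of a label-detection request using Google's Python client library; it assumes you have already set up API credentials, and it is not Even Stranger Things' actual code.

```python
# Minimal sketch of Google Cloud Vision label detection, assuming credentials are
# configured (GOOGLE_APPLICATION_CREDENTIALS). Illustrative, not the site's code.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("my_photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Dog 0.97" -- the detected label and the model's confidence in it
    print(label.description, round(label.score, 2))
```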

Talk To Books

Pros

  • Good for existential questions
  • Very simple to use
  • Offers multiple answers

Cons

  • Prompts need to be carefully worded

Talk To Books is yet another artificial intelligence tool created by engineers at Google. In this case, the platform uses the words from more than 100,000 books to automatically respond to a question or text prompt.

While you can’t really hold a conversation with the site, you can ask questions like “How can I fall asleep?” and “How did you meet your partner?” to get answers that generally make sense. Type your prompt, then press Go to see the results, and you can filter by literary genre if needed.

This is another example of how machine learning enables AI to predict a good response to a question or prompt by analyzing patterns in text. It’s perhaps a glimpse into how free AI software could change web searches in the future.
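
One common way to implement that kind of matching is with sentence embeddings: turn the question and candidate passages into vectors, then rank passages by similarity. The sketch below shows the general idea with the sentence-transformers library; the model name and sample passages are placeholders, and this is not Talk To Books' actual system.

```python
# Sketch of semantic matching with sentence embeddings; illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

question = "How can I fall asleep?"
passages = [
    "Keeping a regular bedtime and avoiding screens helps the body wind down.",
    "The locomotive crossed the plains in record time.",
    "A warm drink and a dark, quiet room made it easier for her to drift off.",
]

q_emb = model.encode(question, convert_to_tensor=True)
p_emb = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(q_emb, p_emb)[0]  # similarity of the question to each passage

# Print passages from most to least relevant.
for passage, score in sorted(zip(passages, scores), key=lambda x: -float(x[1])):
    print(f"{float(score):.2f}  {passage}")
```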

[Related: The FTC has its eye on AI scammers]

At this stage, AI can’t really finish novels, or even news articles, but given enough data and refinement, these may be possible uses for it in the future.

Pix2Pix

Pros

  • Fast results
  • Offers helpful tips along the way
  • Ability to use random prompts

Cons

  • Limited number of image styles

As the name suggests, Pix2Pix is an AI image generator that takes one picture and turns it into another. In this case, the tool shows you a photograph based on something you’ve doodled.

Scroll down the page and you’ll see there are four different examples to try out: cats, buildings, shoes, and handbags. Sketch out your drawing in the window on the left, and click Process to see what the AI makes of it.

Pix2Pix is based on a GAN, or generative adversarial network, in which two neural networks work in tandem to produce realistic results and even figure out where the edges of objects in images should be.

Turning sketches into realistic photos can be useful in all kinds of areas, from building construction to video game design. And the quality of the results is only likely to improve as these neural networks get smarter.
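
To make the two-networks-in-tandem idea concrete, here is a toy GAN training loop in PyTorch: a generator learns to produce samples that a discriminator cannot tell apart from real data. It is a bare-bones sketch on synthetic 2D points, not the actual pix2pix architecture.

```python
# Toy GAN training loop: generator vs. discriminator on synthetic 2D "data".
# A bare-bones sketch of the adversarial setup, not the actual pix2pix model.
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) + torch.tensor([3.0, 3.0])  # "real" samples from a shifted cloud
    fake = generator(torch.randn(64, 8))                   # generator maps noise to samples

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call its samples real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach())  # samples should drift toward the real cloud
```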

ChatGPT

Pros

  • Sounds natural
  • Will chat about almost any topic
  • Responds to feedback

Cons

  • Not always accurate
  • Requires you to create an account

ChatGPT has attracted plenty of attention for the way it can generate natural-sounding text on just about any kind of topic, and it feels like a watershed moment in artificial intelligence.

This is what’s called a Large Language Model, which, as the name suggests, is trained on large volumes of sample text. Very, very large volumes, in fact. It’s then able to predict which words should go together and in which order, and it can improve its own algorithms as human beings rank its responses in terms of quality and appropriateness.

ChatGPT is somewhat like a sophisticated autocorrect engine, and you can try it out for free (though you’ll need to create an account and might find it’s unavailable at busy times). Test its knowledge on a topic you know a lot about, and feel free to offer feedback.
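
If you want a small-scale taste of next-word prediction on your own machine, you can run an older, freely available model like GPT-2 through the Hugging Face transformers library; this is a much smaller cousin of the models behind ChatGPT, and the prompt and settings below are just examples.

```python
# A small-scale taste of next-word prediction using GPT-2 (far smaller than ChatGPT)
# via the Hugging Face transformers library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The key difference between a comet and an asteroid is"
result = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```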

Deep Dream Generator

Pros

  • Wide range of picture styles
  • Can work with a base image
  • Images can be refined

Cons

  • Limited number of free generations
  • Requires you to create an account

Fire up the Deep Dream Generator in your web browser, and you’ll be asked for a text prompt to create an image—it works like Craiyon in that respect, though you’ll get extra options in terms of image generation and refinement.

You can, for example, specify a particular style, such as photorealistic or fantasy. You can also add artists you want to mimic, or even digital camera models you’d like the AI engine to try to emulate. Another option is to supply your own base image for Deep Dream Generator to work with.

Underpinning Deep Dream Generator is a neural network trained on a huge database of images that the engine is trying to replicate, and it’s impressive in terms of the breadth and speed of the results that can be achieved. The platform requires users to spend “energy” to generate images, though, and the less you’re paying them, the fewer pictures you’ll be able to make at a time.

Runway

Pros

  • Vast number of AI tools
  • Simple interface
  • You can train your own AI models

Cons

  • The best features require payment
  • You’ll need to create an account

Runway is an AI playground with a lot of different tools you can experiment with: create images from text prompts, create new images from existing images, erase parts of images, quickly remove backgrounds, generate a transcript from a video, and more.

For the text-to-image generator, for example, just type out a few words—such as “artistic painting of a solitary figure in an open meadow filled with flowers”—and Runway will go to work. You can choose from artistic styles, mediums (like chalk or ink), and even moods to refine a picture.

Other tools, like the one that colorizes black and white photos, require even fewer clicks. You can use Runway for free, but you’re limited in terms of export resolutions, storage space, and image generations—paid plans start at $15 a month.

It’s all based on advanced machine learning models that can recognize and repeat patterns. You can even use Runway to train your own AI models, making it suitable for advanced users: You might want to train it on photos of your face, for instance, and then generate endless portrait images of yourself in all kinds of styles and settings.

This story has been updated. It was originally published on June 23, 2022.

The post The best free AI tools you can try right now appeared first on Popular Science.

The FTC has its eye on AI scammers https://www.popsci.com/technology/ftc-artificial-intelligence-warning/ Tue, 28 Feb 2023 21:00:00 +0000 https://www.popsci.com/?p=516037
Outdoor photo of FTC building exterior with sign
The FTC warned scammers that they're already onto their AI grifts. Deposit Photos

In a colorful post, the FTC let scammers know they are already well aware of AI exaggerations.

The post The FTC has its eye on AI scammers appeared first on Popular Science.


The Federal Trade Commission, generally not known for flowery rhetoric or philosophical musings, took a moment on Monday to publicly ponder, “What exactly is ‘artificial intelligence’ anyway” in a vivid blog post from Michael Atleson, an attorney within the FTC’s Division of Advertising Practices. 

After summarizing humanity’s penchant for telling stories about bringing things to life “imbue[d] with power beyond human capacity,” he asks, “Is it any wonder that we can be primed to accept what marketers say about new tools and devices that supposedly reflect the abilities and benefits of artificial intelligence?”

[Related: ChatGPT is quietly co-authoring books on Amazon.]

Although Atleson eventually leaves the broader definition of “AI” largely open to debate, he made one thing clear: The FTC knows what it most certainly isn’t, and grifters are officially on notice. “[I]t’s a marketing term. Right now it’s a hot one,” continued Atleson. “And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.”

The FTC’s official statement, while somewhat out of the ordinary, is certainly in keeping with the new, Wild West era of AI—a time when every day sees new headlines about Big Tech’s latest large language models, “hidden personalities,” dubious claims to sentience, and the ensuing inevitable scams. As such, Atleson and the FTC are going so far as to lay out an explicit list of things they’ll be looking out for while companies continue to fire off breathless press releases on their purportedly revolutionary breakthroughs in AI.

“Are you exaggerating what your AI product can do?” the Commission asks, warning businesses that such claims could be charged as “deceptive” if they lack scientific evidence, or only apply to extremely specific users and case conditions. Companies are also strongly encouraged to refrain from touting AI as a means to potentially justify higher product costs or labor decisions, and take extreme risk-assessment precautions before rolling out products to the public.

[Related: No, the AI chatbots (still) aren’t sentient.]

Falling back on blaming third-party developers for biases and unwanted results, retroactively bemoaning “black box” programs beyond your understanding—these won’t be viable excuses to the FTC, and could potentially open you up to serious litigation headaches. Finally, the FTC asks perhaps the most important question at this moment: “Does the product actually use AI at all?” Which… fair enough.

While this isn’t the first time the FTC has issued industry warnings—even warnings concerning AI claims—it remains a pretty stark indicator that federal regulators are reading the same headlines the public is right now—and they don’t seem pleased. 

The post The FTC has its eye on AI scammers appeared first on Popular Science.

Scientists eye lab-grown brains to replace silicon-based computer chips https://www.popsci.com/technology/brain-organoid-biocomputer/ Tue, 28 Feb 2023 17:00:00 +0000 https://www.popsci.com/?p=515986
Gloved hands placing computer microchip on underneath microscope lens.
Brain organoids could be the next major step in computing, but ethical quandaries abound. Deposit Photos

Chips are fast approaching physical limits. Replacing them with minibrains might be the next step forward.

The post Scientists eye lab-grown brains to replace silicon-based computer chips appeared first on Popular Science.


Artificial intelligence is the tech catchphrase of the moment, but it may soon share headline space alongside another wild new computing field: organoid intelligence (OI), aka biocomputers.

Computers, simply put, are running out of space—at least, computers as most people know them. Silicon-based chips have long been the standard for everyday usage, but most experts agree that electronics makers are quickly approaching the physical limit in both the size of transistors, as well as how many can fit on a surface. Merging organic matter with electronics is a promising new avenue for advancing beyond these constraints, including organoid intelligence.

[Related: Microsoft changes Bing chatbot restrictions after much AI-generated weirdness.]

Scientists across a variety of disciplines and institutions recently published an early roadmap toward realizing this technology using “brain organoids” in the research journal Frontiers in Science. The phrase “brain organoids” may conjure images of noggins floating inside glass jars, but the reality is (for now) a lot less eerie. Organoids aren’t whole brains, but instead small, lab-grown stem cell cultures possessing several similarities to brain structures, including neurons and other cells enabling rudimentary cognitive functions such as memory and learning. Brain organoids’ three-dimensional design boosts their cell density to over 1,000 times that of their flat cell-culture counterparts, allowing for exponentially more neuron connections and learning capabilities—an important distinction given the trajectory of existing computers.

“While silicon-based computers are certainly better with numbers, brains are better at learning,” said Thomas Hartung, one of the paper’s co-authors and a professor of microbiology at Johns Hopkins University, in a statement. Hartung offers AlphaGo, the AI that bested the world’s top Go player in 2017, as an example of a computationally superior program. “[It] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience these many games.”

But AlphaGo’s impressive statistical capabilities come with a hefty cost—the amount of energy required to train it equaled about as much as it takes to keep an active adult human alive for about 10 years. A human brain, by comparison, is far more efficient, with around 100 billion neurons across 10^15 connection points—“an enormous power difference compared to our current technology,” argues Hartung. Factor in the brain’s ability to store the equivalent of around 2,500TB of information, and it’s easy to see how biocomputers could usher in a new era of technological innovation.

[Related: Just because an AI can hold a conversation does not make it smart.]

There are serious ethical hurdles ahead of researchers, however. Today’s earliest brain organoids are small and simple cell cultures of just 50,000 or so neurons. To scale them up to computer-strength levels, scientists need to grow them to house 10 million neurons, according to Hartung. More neurons means more complex brain functions, edging researchers further into the murky realm of what is and isn’t “consciousness.”

As Live Science explains, brain organoids have been around since 2013, primarily as a way to help study diseases like Parkinson’s and Alzheimer’s. Since then, these cell clumps have even been taught to play Pong, but they remain far from “self-aware.” The new paper’s authors, however, concede that as they develop more complex organoids, questions will arise as to what constitutes awareness, feeling, and thought—considerations that even the most advanced computers can’t answer. At least, not at the moment.

The FTC is trying to get more tech-savvy https://www.popsci.com/technology/ftc-office-of-technology/ Sat, 25 Feb 2023 12:00:00 +0000 https://www.popsci.com/?p=515353
the FTC
The Federal Trade Commission. PAUL J. RICHARDS/AFP via Getty Images

The agency is beefing up its tech team and forming an Office of Technology. Here's what the new department will do.

The Federal Trade Commission, or FTC, is bulking up its internal tech team. The agency, which focuses on consumer protection and antitrust issues in the US, announced last week that it would be forming an Office of Technology and hiring more tech experts. 

Leading the new office is Stephanie Nguyen, the agency’s existing chief technology officer, who recently spoke with PopSci about what the new department will do and what her priorities for it are. 

“In general, the FTC has always stayed on the cutting edge of emerging technology to enforce the law,” she says. “In the 1930s, we looked at deceptive radio ads.” Earlier this century, she notes, they focused on “high-tech spyware.” The goal of the agency in general involves tackling problems that plague the public, like the scourge of robocalls.

“The shift in the pace and volume of evolving tech changes means that we can’t rely on a case-by-case approach,” she adds. “We need to staff up.” And the staffing up comes at a time when the tech landscape is as complex and formidable as it’s ever been, with the rise of controversial tools like generative AI and chatbots, and companies such as Amazon—which just scooped up One Medical, a primary care company, and in 2017 purchased Whole Foods—becoming more and more powerful. 

A relatively recent example of a tech issue the FTC has tackled comes from Twitter, which was hit with a $150 million fine in 2022 for abusing the phone numbers and email addresses it had collected for security purposes, because it had permitted “advertisers to use this data to target specific users,” as the FTC noted last year. The Commission has also taken on GoodRx for the way it handled and shared people’s medical data, and it has an ongoing lawsuit against Facebook owner Meta for “anticompetitive conduct.” Meanwhile, in a different case, the FTC was unsuccessful in its attempt to block Meta’s acquisition of a VR company called Within Unlimited, which CNBC referred to as “a significant defeat” for the FTC.

[Related: Why the new FTC chair is causing such a stir]

Nguyen says that as the lines become increasingly blurry between what is, and isn’t, a tech company, the creation of the office became necessary. “Tech cannot be viewed in a silo,” she says. “It cuts across sectors and industries and business models, and that is why the Office of Technology will be a key nexus point for our consumer protection and competition work to enable us to create and scale the best practices.” 

The move at the FTC comes at a time when the tech literacy of various government players is in the spotlight and is crucially important. The Supreme Court has been considering two cases that relate to a law known as Section 230, and Justice Elena Kagan even referred to herself and her fellow justices as “not the nine greatest experts on the internet.”

At the FTC, what having the new Office of Technology will mean in practice is that the number of what Nguyen refers to as in-house “technologists” will roughly double, as the agency hires about 12 new people. She says that as they build the team, “we need security and software engineers, data scientists and AI experts, human-computer interaction designers and researchers,” as well as “folks who are experts on ad tech or augmented and virtual reality.”

Tejas Narechania, the faculty director for the Berkeley Center for Law & Technology, says that the FTC’s creation of this new office represents a positive step. “I think it’s a really good development,” he says. “It reflects a growing institutional capacity within the executive branch and within our agencies.” 

“The FTC has been operating in this space for a while,” he adds. “It has done quite a bit with data privacy, and it has sometimes been criticized for not really fully understanding the technology, or the development of the technology, that has undergirded some of the industries that it is charged with overseeing and regulating.” (The agency has faced other challenges too.)

One of the ways the people working for the new office will be able to help internally at the FTC, Nguyen says, is to function as in-house subject matter experts and conduct new research. She says they’ll tackle issues like “shifts in digital advertising, to help the FTC understand implications of privacy, competition, and consumer protection, or dissecting claims made about AI-powered products and assessing whether it’s snake oil.” 

Having in-house expertise will help them approach tech questions more independently, Narechania speculates. The FTC will “be able to bring its own knowledge to bear on these questions, rather than relying on the very entities it’s supposed to be scrutinizing for information,” he reflects. “To have that independent capacity for evaluation is really important.” 

As for the big-picture goal of the new office, Nguyen says they are “here to strengthen the agency’s ability to be knowledgeable and take action on tech changes that impact the public.”

6 ways ChatGPT is actually useful right now https://www.popsci.com/diy/chatgpt-use-cases/ Fri, 24 Feb 2023 17:00:00 +0000 https://www.popsci.com/?p=514911
Close up to a screen showing the home page of ChatGPT
Yes, ChatGPT is fun, but it can also be incredibly useful. Jonathan Kemper / Unsplash

Cheating on your essays isn't one of them.

It’s been difficult to get away from ChatGPT lately. The advanced artificial intelligence chatbot has been trying its hand at writing emails and even books, and Microsoft has added a customized (and rather controversial) version of it to its Bing search engine.

It’s not difficult to see what all the fuss is about if you try it out for yourself. You can engage the bot in conversation on just about any topic, and it will respond with coherent, human-like answers by using its text prediction technology.

Once you’ve played around with it, you might wonder if there’s actually any real use to ChatGPT—other than ethically dubious purposes, like cheating on essays. But there are already a lot of ways it can help you day to day.

1. Learn to code

ChatGPT is an impressive coder, no doubt thanks to the reams of code that it’s sucked up during training. While human programmers still have the edge, ChatGPT can be really handy if you want to learn languages such as HTML, CSS, Python, JavaScript, or Swift.

[Related: No, the AI chatbots (still) aren’t sentient]

Just ask ChatGPT to give you the code for a specific functionality, like “write the HTML to center an image,” and that’s what you’ll get back. It’s that simple. You can also use the bot as a debugging tool by copying and pasting lines of code into it and asking it why they’re not working. If you need clarification on anything, just ask.
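
To make the debugging use case concrete, here is the kind of exchange that works well. The snippet below is a hypothetical example of ours, not output from ChatGPT: you might paste the first function into the chat and ask why it always prints “not found,” and the fix shown afterward is the sort of correction the bot typically suggests.

```python
# Hypothetical buggy snippet you might paste into ChatGPT with the question
# "why does this always print 'not found'?"
def find_user(users, name):
    for user in users:
        if user == name:        # bug: compares a whole dict to a string
            return user
    return None

users = [{"name": "Ada"}, {"name": "Grace"}]
print(find_user(users, "Ada") or "not found")        # always "not found"

# The kind of fix the chatbot would typically suggest: compare the right field.
def find_user_fixed(users, name):
    for user in users:
        if user["name"] == name:
            return user
    return None

print(find_user_fixed(users, "Ada") or "not found")  # {'name': 'Ada'}
```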

2. Find ideas for activities

One way to sidestep any ChatGPT inaccuracies is to ask it for suggestions rather than hard facts. For example, you could request ideas for games for toddlers, outdoor activities for adults, or fun ways to pass the time on a long car journey.

If ChatGPT’s suggestions aren’t suitable, you can get the bot to refine them. Ask for games that need less preparation or don’t take as long to play. You can also request ideas for activities you can do in any kind of weather or that people of any age can enjoy. The bot won’t get tired of throwing out more recommendations.

3. Prepare for an interview

ChatGPT doesn’t know for sure what questions might come your way at your next job interview, but it can give you some kind of idea of what to expect and help you prepare. We wouldn’t rely on it entirely for interview prep, but it can certainly help.

The more specific you can be about the type of job you’re going for and the format of the interview, the better. Type in something like “questions asked at face-to-face customer service jobs,” for example. While there’s no guarantee that ChatGPT will get it exactly right, it will be able to draw on its training to make some decent guesses.

4. Generate writing prompts

As you would expect from publications like Popular Science, we think there’s plenty of life left in human authors before AI takes over. The text that ChatGPT produces is no doubt groundbreaking, but also tends to be rather generic and repetitive, as you would expect from a large-scale autocorrect machine.

However, the bot can be great at giving you prompts for writing ideas, which you can then work on yourself. Ask it about character or scenario prompts, for example, or get its thoughts on what might happen next in a certain situation. This can work for any kind of writing, from a novel to a wedding speech. It may not be able to write as well as you, but it can help you brainstorm.

5. Get music, TV, and movie recommendations

The version of ChatGPT that’s available to the public only has information up to 2021, but with that limitation in mind you can ask it about movies, TV shows, and music that’s similar to stuff you already like. The answers can be hit or miss, but they might be good options to explore.

You can also ask ChatGPT about obscure and little-known songs by your favorite bands that are worth discovering. We tested the platform by asking about the works of R.E.M. and it came up with a really good and appropriate answer (the song “Camera”), before proceeding to give us incorrect information about the track length and style. That’s ChatGPT in a nutshell.

6. Ask for advice

ChatGPT doesn’t know or think anything, really, but it has absorbed a vast trove of information from human writers (some say in violation of copyright law). That means you can ask it for advice on anything from long-distance relationships, to moving houses, to starting a business.

[Related: Building ChatGPT’s AI content filters devastated workers’ mental health, according to new report]

Obviously, ChatGPT won’t know the intricacies of your own situation, but it can generate a list of considerations to weigh up, some of which you might not otherwise have thought about. We wouldn’t recommend living your life entirely based on ChatGPT’s opinions, but it can still be helpful if you don’t know where to start tackling a particular problem.

ChatGPT is quietly co-authoring books on Amazon https://www.popsci.com/technology/chatgpt-books-amazon/ Wed, 22 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=514374
Kindle e-reader on store home screen.
One popular online magazine has seen over 500 ChatGPT-aided submissions this month so far. Deposit Photos

In fact, AI ghostwriters have plagued multiple websites.

Generative AI programs are spurring plenty of ongoing discussions surrounding artistic integrity, labor, and originality. But that’s not stopping people from already attempting to profit from the systems. On Tuesday, Reuters profiled multiple individuals turning to text generators such as ChatGPT to churn out book manuscripts that they then fine-tune and subsequently sell via systems like Amazon’s self-publishing e-book platform. The rising deluge of chatbot-assisted stories is now so bad that it’s even forced a temporary submissions hiatus for one of the internet’s leading science-fiction magazines.

According to Reuters, Amazon’s e-book store includes at least 200 titles openly listing ChatGPT as an author or co-author. Such titles include space-inspired poetry, children’s novels about penny-pinching forest animals, and how-to tutorials on using ChatGPT to supposedly improve one’s dating life. “I could see people making a whole career out of this,” one of the AI-assisted authors told Reuters.

[Related: No, the AI chatbots (still) aren’t sentient.]

Because Amazon currently has no explicit policies requiring individuals to list generative text programs as authors, the actual number of AI-assisted titles is likely much higher.

“All books in the store must adhere to our content guidelines, including by complying with intellectual property rights and all other applicable laws,” Amazon spokeswoman Lindsay Hamilton told Reuters via email.

But although AI-assisted titles are proliferating in literary markets like Amazon’s Kindle Store, other outlets are being forced to halt all submissions in order to develop new strategies. In a blog post published last week, Neil Clarke, publisher and editor-in-chief of popular science-fiction website Clarkesworld, announced the site would be pausing its unsolicited submissions portal indefinitely due to an untenable influx in AI-assisted spam stories.

[Related: Just because an AI can hold a conversation does not make it smart.]

Clarke revealed in his post that spam entries resulting in bans from future submissions have risen precipitously since the public debut of ChatGPT. Within the first 20 days of February, editors flagged over 500 story submissions for plagiarism. Before ChatGPT, the magazine typically caught fewer than 30 plagiarized stories per month. While there are a number of tools that can help detect plagiarized material, the time and costs involved make them difficult to use for publications like Clarkesworld that operate on small budgets.

“If the field can’t find a way to address this situation, things will begin to break,” Clarke wrote on his blog. “Response times will get worse and I don’t even want to think about what will happen to my colleagues that offer feedback on submissions.” While he believes this won’t kill short fiction as readers know it—”please just stop that nonsense”—he cautions it will undeniably “complicate things” as opportunists and grifters take further advantage of generative text’s rapid advancements.

Meet Spotify’s new AI DJ https://www.popsci.com/technology/spotify-ai-dj/ Wed, 22 Feb 2023 16:30:00 +0000 https://www.popsci.com/?p=514256
spotify's ai dj feature in their app
Take a look at Spotify's AI DJ. Spotify

Here’s how it was made—and how it will affect your listening experience.

Spotify, the popular audio streaming app, is on a journey to make music-listening personal for its users. Part of that includes recommending playlists “For You” like “Discover Weekly,” and summing up your year in audio with “Wrapped.” Today, the company announced that it is introducing an AI DJ that folds much of its past work on audio recommendation algorithms into a new feature. It’s currently rolling out for premium users in the US and Canada.

The artificially intelligent DJ blends recommendations from across different personal and general playlists throughout the app. The goal is to create an experience where it picks the vibe that it thinks you will like, whether that’s new hits, your old favorites, or songs that you’ve had on repeat for weeks. And like a radio DJ, it will actually talk—giving some commentary, and introducing the song, before it queues it up. Users can skip songs, or even ask the DJ to change the vibe by clicking on the icon at the bottom right of the screen. It will refresh the lineup based on your interactions. 

The DJ is built from three main tech components: Spotify’s personalization technology; a generative AI from OpenAI that scripts the cultural context the DJ provides (alongside human writers); and an AI text-to-voice platform built through the company’s acquisition of Sonantic and modeled on the real-life voice of Xavier “X” Jernigan, Spotify’s head of cultural partnerships. To train the AI DJ, Jernigan spent a long time in the studio recording speech samples.

[Related: How Spotify trained an AI to transcribe music]

The text-to-speech system accounts for all the nuances in human speech, such as pitch, pacing, emphasis, and emotions. For example, if a sentence ended in a semicolon instead of a period, the voice inflection would be different.

There’s a weekly writer’s room that curates what they’re going to say about songs in the flagship playlists, such as those grouped by genres. There’s another group of writers and cultural experts that come in to discuss how they want to phrase the commentary around the songs they serve up to users. The generative AI then comes in and scales this base script, and tailors it to all the individual users. AI DJ is technically still in beta, and Spotify engineers are eager to take user feedback to add improvements in future versions.
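
Based only on the components described above, the flow can be pictured as a three-stage pipeline: pick tracks, script the commentary, voice it. The sketch below is purely illustrative; every function name is a placeholder of ours and does not correspond to any actual Spotify code.

```python
# Illustrative-only pipeline mirroring the three components described above.
# None of these functions exist in Spotify's stack; they are placeholders.
def pick_next_tracks(listener_profile):
    """Stand-in for the personalization system choosing a short run of songs."""
    return ["song_a", "song_b", "song_c"]

def write_commentary(tracks, base_script, listener_profile):
    """Stand-in for the generative model expanding the writers' base script
    into a personalized intro for this listener."""
    return f"{base_script} Up next for you: {', '.join(tracks)}."

def speak(text):
    """Stand-in for the text-to-speech model voicing the script."""
    return f"[audio rendered from: {text!r}]"

profile = {"favorite_genre": "indie rock"}
tracks = pick_next_tracks(profile)
script = write_commentary(tracks, "Here's a throwback you had on repeat.", profile)
print(speak(script))
```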

[Related: The best Spotify add-ons and tricks]

Think of the individual “For You” recommendations as Lego pieces, Ziad Sultan, head of personalization at Spotify, tells PopSci. “It’s not that it takes all the playlists and merges them. It’s more, in order to build this playlist in the first place, we have had to build a lot of Lego pieces that understand the music, understand the user, and understand how to create the right combo,” he explains. “A lot of that is from the years of developing machine learning, getting the data, especially the playlist data, for example.” 

“Eighty-one percent of people say that the thing they love most about Spotify is the personalization,” says Sultan. “So they’re still going to have the things they know and love. But this is just a new choice, which is also about not choosing.”  

Try it for yourself in the “Music” feed of the app homepage.

Update February 22, 2023: This article has been updated to clarify that “X” is the voice model that was used and not the name for Spotify’s AI DJ.

Microsoft changes Bing chatbot restrictions after much AI-generated weirdness https://www.popsci.com/technology/microsoft-bing-ai-restrictions/ Tue, 21 Feb 2023 17:30:00 +0000 https://www.popsci.com/?p=513849
Microsoft Windows Store logo

Meanwhile, an online paper trail indicates Microsoft knew of Bing's chat problems as far back as November 2022.

It’s been a wild ride for anyone paying attention to the ongoing Microsoft Bing chatbot saga. After a highly publicized debut on February 7, both it and its immediate competitor, Google’s Bard, were almost immediately overshadowed by users’ countless displays of oddball responses, misleading statements, unethical ramifications, and incorrect information. After barely a week of closed testing, however, it appears Microsoft is already delaying its “new day for search” by quietly instituting a handful of interaction limitations that have major ramifications for Bing’s earliest testers.

As highlighted by multiple sources—including many devoted Bing subreddit users—Microsoft seems to have initiated three updates to the waitlisted ChatGPT-integrated search engine on Friday: a 50 message daily limit for users, only five exchanges allowed per individual conversation, and no discussions about Bing AI itself.
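
For a sense of what caps like these amount to mechanically, here is a minimal sketch of how a 50-message daily limit and a five-turn conversation limit might be enforced. It is our illustration only, not Microsoft’s implementation.

```python
# Minimal illustration of the reported caps; not Microsoft's actual code.
DAILY_LIMIT = 50   # messages per user per day
TURN_LIMIT = 5     # exchanges per individual conversation

class ChatSession:
    def __init__(self, messages_used_today=0):
        self.messages_used_today = messages_used_today
        self.turns_in_conversation = 0

    def can_send(self):
        return (self.messages_used_today < DAILY_LIMIT
                and self.turns_in_conversation < TURN_LIMIT)

    def send(self, prompt):
        if not self.can_send():
            return "Limit reached: start a new topic or try again tomorrow."
        self.messages_used_today += 1
        self.turns_in_conversation += 1
        return f"(bot reply to: {prompt})"

    def new_conversation(self):
        # Starting a fresh topic resets only the per-conversation counter.
        self.turns_in_conversation = 0

session = ChatSession()
for i in range(6):
    print(session.send(f"question {i + 1}"))  # the sixth message is refused
```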

[Related: No, the AI chatbots (still) aren’t sentient.]

Although Microsoft’s new red tape might seem minor at first glance, the simple changes massively restrict what users can generate during their discussions with the Bing bot. A conversation’s five-message limit, for example, drastically curtails the potential to bypass the chatbot’s guardrails against thorny content like hate speech and harassment. Previously, such hacks could be accomplished via a crafty series of commands, questions, and prompts, but that will now prove much harder to pull off in five or fewer moves.

Similarly, a ban on Bing talking about “itself” will hypothetically restrict its ability to generate accidentally emotionally manipulative answers that users could misconstrue as the early stages of AI sentience (spoiler: it’s not).

Many users are lamenting the introduction of a restricted Bing, and argue the bot’s eccentricities are what made it so interesting and versatile in the first place. “It’s funny how the AI is meant to provide answers but people instead just want [to] feel connection,” Peter Yang, a product manager for Roblox, commented over the weekend. But if one thing has already been repeatedly shown, it’s that dedicated tinkerers consistently find ways to jailbreak the latest technologies.

[Related: Just because an AI can hold a conversation does not make it smart.]

In a February 15 blog update, Microsoft conceded that people using Bing for “social entertainment” were a “great example of where new technology is finding product-market-fit for something we didn’t fully envision.”

However, recent online paper trails indicate the company had advance notice of issues within a ChatGPT-enabled Bing as far back as November 2022. As highlighted by tech blogger René Walter and subsequently Gary Marcus, an NYU professor of psychology and neural science, Microsoft publicly tested a version of Bing AI in India over four months ago, and received similarly troubling complaints that are still available online.

Why DARPA put AI at the controls of a fighter jet https://www.popsci.com/technology/darpa-ai-fighter-jet-test/ Sat, 18 Feb 2023 12:00:00 +0000 https://www.popsci.com/?p=513331
a modified F-16 in flight
The VISTA aircraft in August, 2022. Kyle Brasier / US Air Force

In December tests, different artificial intelligence algorithms flew an F-16-like fighter jet. Can AI be a good combat aviator?

In December, a special fighter jet made multiple flights out of Edwards Air Force Base in California. The orange, white, and blue aircraft, which is based on an F-16, seats two people. A fighter jet taking to the skies with a human or two on board is not remarkable, but what is indeed remarkable about those December flights is that for periods of time, artificial intelligence flew the jet. 

As the exploits of generative AI like ChatGPT grip the public consciousness, artificial intelligence has also quietly slipped into the military cockpit—at least in these December tests.  

The excursions were part of a DARPA program called ACE, which stands for Air Combat Evolution. The AI algorithms came from different sources, including a company called Shield AI as well as the Johns Hopkins Applied Physics Laboratory. Broadly speaking, the tests represent the Pentagon exploring just how effective AI can be at carrying out tasks in planes typically done by people, such as dogfighting. 

“In total, ACE algorithms were flown on several flights with each sortie lasting approximately an hour and a half,” Lt. Col. Ryan Hefron, the DARPA program manager for ACE, notes to PopSci via email. “In addition to each performer team controlling the aircraft during dogfighting scenarios, portions of each sortie were dedicated to system checkout.”

The flights didn’t come out of nowhere. In August of 2020, DARPA put artificial intelligence algorithms through their paces in an event called the AlphaDogfight Trials. That competition didn’t involve any actual aircraft flying through the skies, but it did conclude with an AI agent defeating a human flying a digital F-16. The late 2022 flights show that software agents that can make decisions and dogfight have been given a chance to actually fly a real fighter jet. “This is the first time that AI has controlled a fighter jet performing within visual range (WVR) maneuvering,” Hefron notes.

[Related: I flew in an F-16 with the Air Force and oh boy did it go poorly]

So how did it go? “We didn’t run into any major issues but did encounter some differences compared to simulation-based results, which is to be expected when transitioning from virtual to live,” Hefron said in a DARPA press release.

Andrew Metrick, a fellow in the defense program at the Center for New American Security, says that he is “often quite skeptical of the applications of AI in the military domain,” with that skepticism focused on just how much practical use these systems will have. But in this case—an artificial intelligence algorithm in the cockpit—he says he’s more of a believer. “This is one of those areas where I think there’s actually a lot of promise for AI systems,” he says. 

The December flights represent “a pretty big step,” he adds. “Getting these things integrated into a piece of flying hardware is non-trivial. It’s one thing to do it in a synthetic environment—it’s another thing to do it on real hardware.” 

Not all of the flights were part of the DARPA program. All told, the Department of Defense says that a dozen sorties took place, with some of them run by DARPA and others run by a program out of the Air Force Research Laboratory (AFRL). The DOD notes that the DARPA tests were focused more on close aerial combat, while the other tests from AFRL involved situations in which the AI was competing against “a simulated adversary” in a “beyond-vision-range” scenario. In other words, the two programs were exploring how the AI did in different types of aerial contests or situations. 

Breaking Defense reported earlier this year that the flights kicked off December 9. The jet flown by the AI is based on an F-16D, and is called VISTA; it has space for two people. “The front seat pilot conducted the test points,” Hefron explains via email, “while the backseater acted as a safety pilot who maintained broader situational awareness to ensure the safety of the aircraft and crew.”

One of the algorithms that flew the jet came from a company called Shield AI. In the AlphaDogfight Trials of 2020, the leading AI agent was made by Heron Systems, which Shield AI acquired in 2021. Shield’s CEO, Ryan Tseng, is bullish on the promise of AI to outshine humans in the cockpit. “I do not believe that there’s an air combat mission where AI pilots should not be decisively better than their human counterparts, for much of the mission profile,” he says. That said, he notes that “I believe the best teams will be a combination of AI and people.”

One such future for teaming between a person and AI could involve AI-powered fighter-jet-like drones such as the Ghost Bat working with a crewed aircraft like an F-35, for example. 

It’s still early days for the technology. Metrick, of the Center for New American Security, wonders how the AI agent would be able to handle a situation in which the jet does not respond as expected, like if the aircraft stalls or experiences some other type of glitch. “Can the AI recover from that?” he wonders. A human may be able to handle “an edge case” like that more easily than software.

No, the AI chatbots (still) aren’t sentient https://www.popsci.com/technology/chatgpt-google-chatbot-sentient/ Fri, 17 Feb 2023 16:30:00 +0000 https://www.popsci.com/?p=513048
Image of hands coming out of a computer or a man hiding behind a laptop
Chatbots simply cannot develop personalities—they don’t even understand what “personality” is. Deposit Photos

Experts say that personification and projections of sentience on Microsoft and Google chatbots distract from the real issues.

Since testers began interacting with Microsoft’s ChatGPT-enabled Bing AI assistant last week, they’ve been getting some surreal responses. But the chatbot is not really freaking out. It doesn’t want to hack everything. It is not in love with you. Critics warn that this increasing focus on the chatbots’ supposed hidden personalities, agendas, and desires promotes ghosts in the machines that don’t exist. What’s more, experts warn that the continued anthropomorphization of generative AI chatbots is a distraction from more serious and immediate dangers of the developing technology.

“What we’re getting… from some of the world’s largest journalistic institutions has been something I would liken to slowing down on the highway to get a better look at a wreck,” says Jared Holt, a researcher at the Institute for Strategic Dialogue, an independent think tank focused on extremism and disinformation. To Holt, companies like Microsoft and Google are overhyping their products’ potentials despite serious flaws in their programs.

[Related: Just because an AI can hold a conversation does not make it smart.]

Within a week after their respective debuts, Google’s Bard and Microsoft’s ChatGPT-powered Bing AI assistant were shown to generate incomprehensible and inaccurate responses. These issues alone should have paused product rollouts, especially in an online ecosystem already rife with misinformation and unreliable sourcing. 

Though human-programmed limits should technically prohibit the chatbots from generating hateful content, they can be easily bypassed. “I’ll put it this way: If a handful of bored Redditors can figure out how to make your chatbot spew out vitriolic rhetoric, perhaps that technology is not ready to enter every facet of our lives,” Holt says.

Part of this problem resides in how we choose to interpret the technology. “It is tempting in our attention economy for journalists to endorse the idea that an overarching, multi-purpose intelligence might be behind these tools,” Jenna Burrell, the Director of Research at Data & Society, tells PopSci. As Burrell wrote in an essay last week, “When you think of ChatGPT, don’t think of Shakespeare, think of autocomplete. Viewed in this light, ChatGPT doesn’t know anything at all.”

[Related: A simple guide to the expansive world of artificial intelligence. ]

ChatGPT and Bard simply cannot develop personalities—they don’t even understand what “personality” is, other than a string of letters to be used in pattern recognition drawn from vast troves of online text. They calculate what they believe to be the next likeliest word in a sentence, plug it in, and repeat ad nauseam. It’s a “statistical learning machine,” more than a new pen pal, says Brendan Dolan-Gavitt, an assistant professor in NYU Tandon’s Computer Science and Engineering Department. “At the moment, we don’t really have any indication that the AI has an ‘inner experience,’ or a personality, or something like that,” he says.

Bing’s convincing imitation of self-awareness, however, could pose “probably a bit of danger,” with some people becoming emotionally attached to a tool whose inner workings they misunderstand. Last year, Google engineer Blake Lemoine’s blog post went viral and gained national coverage; it claimed that the company’s LaMDA generative text model (which Bard now employs) was already sentient. This allegation immediately drew skepticism from others in the AI community, who pointed out that the text model was merely imitating sentience. But as that imitation improves, Burrell agrees it “will continue to confuse people who read machine consciousness, motivation, and emotion into these replies.” Because of this, she contends chatbots should be viewed less as “artificial intelligence,” and more as tools that use “word sequence predictions” to offer human-like replies.

Anthropomorphizing chatbots—whether consciously or not—does a disservice to understanding both the technologies’ abilities, as well as their boundaries. Chatbots are tools, built on massive resources of prior human labor. Undeniably, they are getting better at responding to textual inputs. However, from giving users inaccurate financial guidance to spitting out dangerous advice on dealing with hazardous chemicals, they still possess troubling shortfalls.

[Related: Microsoft’s take on AI-powered search struggles with accuracy.]

“This technology should be scrutinized forward and backwards,” says Holt. “The people selling it claim it can change the world forever. To me, that’s more than enough reason to apply hard scrutiny.”

Dolan-Gavitt thinks that potentially one of the reasons Bing’s recent responses remind readers of the “rogue AI” subplot in a science fiction story is because Bing itself is just as familiar with the trope. “I think a lot of it could be down to the fact that there are plenty of examples of science fiction stories like that it has been trained on, of AI systems that become conscious,” he says. “That’s a very, very common trope, so it has a lot to draw on there.”

On Thursday, ChatGPT’s designers at OpenAI published a blog post attempting to explain their processes and plans to address criticisms. “Sometimes we will make mistakes. When we do, we will learn from them and iterate on our models and systems,” the update reads. “We appreciate the ChatGPT user community as well as the wider public’s vigilance in holding us accountable.”

NASA is using AI to help design lighter parts https://www.popsci.com/technology/nasa-evolved-structures-spacecraft-ai/ Thu, 16 Feb 2023 16:05:00 +0000 https://www.popsci.com/?p=512885
NASA evolved structure spacecraft part
AI-assisted engineering helped construct advanced spacecraft parts like this one. NASA

'The algorithms do need a human eye.'

NASA is enlisting artificial intelligence software to assist engineers in designing the next generation of spacecraft hardware, and real world results resemble the stuff of science fiction.

The agency utilized commercially available AI software at NASA’s Goddard Space Flight Center in Maryland. NASA states that research engineer Ryan McClelland, who worked on the new materials with the assistance of AI, has dubbed them “evolved structures.” They have already been used in the design and construction of astrophysics balloon observatories, space weather monitors, and space telescopes, as well as the Mars Sample Return mission and more.

Before the evolved structures are created, a computer-assisted design (CAD) specialist first sets the new object’s “off limits” parameters, such as where the part connects to the spacecraft or other instruments, as well as other specifications like bolt and fitting placements, additional hardware, and electronics. Once those factors are defined, the AI software “connects the dots” to sketch out a potential new structural design, often within two hours or less.
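
The “off limits” setup described above is essentially a constraint specification handed to the design software. The sketch below shows what such a spec might contain in spirit; the field names and values are hypothetical illustrations of ours, not NASA’s or any particular CAD package’s format.

```python
# Hypothetical constraint spec for a generative-design run. Field names and
# values are illustrative only; they do not come from NASA or a real CAD tool.
part_spec = {
    "keep": [  # geometry the optimizer must not alter
        {"type": "bolt_interface", "position_mm": (0, 0, 0), "bolt": "M6"},
        {"type": "instrument_mount", "position_mm": (120, 40, 0)},
    ],
    "keep_out": [  # volumes reserved for electronics and cabling
        {"type": "box", "min_mm": (30, -10, -10), "max_mm": (90, 10, 10)},
    ],
    "loads": [  # launch and operating loads the part must survive
        {"case": "launch_vibration", "acceleration_g": 12},
    ],
    "objective": "minimize_mass",
    "safety_factor": 2.0,
}

# The software then "connects the dots" between the keep regions while
# respecting the keep-out volumes and load cases defined above.
print(part_spec["objective"], "with safety factor", part_spec["safety_factor"])
```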

The finished products take on curious, unique forms that are up to two-thirds lighter than their purely human-designed counterparts. However, the proposed forms generally require some human fine-tuning, McClelland is careful to highlight. “The algorithms do need a human eye,” McClelland said. “Human intuition knows what looks right, but left to itself, the algorithm can sometimes make structures too thin.”

[Related: NASA just announced a plane with a radical wing design.]

Optimizing materials and hardware is especially important for NASA’s spacefaring projects, given each endeavor’s unique requirements and needs. As opposed to assembly line construction for mass produced items, almost every NASA part is unique, so shortening design and construction times with AI input expands the agency’s capabilities.

When combined with other production techniques like 3D-printing, researchers envision a time when larger parts could be constructed while astronauts are already in orbit, thus reducing costly payloads. Such assembly plans might even be employed during construction of permanent human bases on the moon and Mars.

Charity scammers’ latest weapon is AI-generated art https://www.popsci.com/technology/natural-disaster-scam-ai-art/ Wed, 15 Feb 2023 17:00:00 +0000 https://www.popsci.com/?p=512207
Wanting to help is great—but research where your donations go before opening your wallet. DepositPhotos

Last week's deadly earthquake in Syria and Turkey highlights how well-meaning people can fall for traps posing as donation sites.

Online scammers frequently tug on targets’ heartstrings to trick people into handing over money for a seemingly good cause. The fallout from Syria and Turkey’s deadly, massive earthquake is no exception. Now, however, scams are enlisting tech tools like generative art to bolster their schemes.

As highlighted on Monday by the BBC, some bad actors are leveraging live streaming features on social media platforms like TikTok alongside AI-generated artwork to grift users. TikTok creators can receive money via digital gifts on TikTok Live, and often link out to crypto wallets for funds. Given their very nature, however, any money deposited into such addresses is nearly impossible to retrieve after the fact, making them ideal for quick cons that generate empathy through real, altered, and outright fictional imagery.

[Related: Don’t fall for an online love scam this Valentine’s Day.]

New generative AI art tools like Midjourney are particularly good for these campaigns. In one example offered by the Greek newspaper OEMA, a deceptive Twitter account solicits readers for an “aid campaign to reach people who have experienced an #earthquake disaster in Turkey” with crypto wallet addresses for both Bitcoin and Ethereum. The tweet also includes a rendering of a firefighter holding a small child amid city ruins, but a closer look at the picture offers red flags—such as the firefighter possessing six fingers on one hand.

According to OEMA, the rendering was generated via a prompt on Midjourney from the Major General of the Aegean fire brigade. BBC’s own experimentation with the AI tool using prompts like “image of firefighter in aftermath of an earthquake rescuing young child and wearing helmet with Greek flag” yielded extremely similar results. Similarly, any accounts soliciting PayPal donations for Turkey recovery efforts should be avoided entirely—due to licensing issues, PayPal hasn’t been available in the country since 2016.

[Related: Cryptocurrency scammers are mining dating sites for victims.]

The natural disaster scams are so rife that the Federal Trade Commission issued a reminder PSA cautioning donors on where to direct their funds. The central tenet of advice in these situations is to slow down and conduct at least a brief background check on the potential recipient—tools like charity watchdog groups are a great help in these instances. Users are also encouraged to utilize reverse image searches like Google’s as a great way to find out if old images are being repurposed for scammers’ most recent campaigns.

To save time, you can also simply go to reliable sources’ lists of vetted charities, such as the UK government’s options.

Microsoft’s take on AI-powered search struggles with accuracy https://www.popsci.com/technology/microsoft-bing-chatbot-wrong/ Tue, 14 Feb 2023 17:30:00 +0000 https://www.popsci.com/?p=512124
Woman hold head in hand while sitting at desk in front of laptop
Microsoft's Bing chatbot appears just as bad as Google's Bard, if not worse. Deposit Photos

Microsoft's Bing chatbot is under scrutiny barely a week after Google Bard's own announcement debacle.

After months of hype, Google and Microsoft announced the imminent arrivals of Bard and a ChatGPT-integrated Bing search engine within 24 hours of one another. At first glance, both tech giants’ public demonstrations appeared to display potentially revolutionary products that could upend multiple industries. But it wasn’t long before even cursory reviews highlighted egregious flaws within Google’s Bard suggestions. Now, it’s Microsoft’s turn for some scrutiny, and the results are as bad as Bard’s, if not worse.

Independent AI researcher Dmitri Brereton published a blog post Monday detailing numerous glaring issues in their experience with a ChatGPT-powered Bing. Bing’s demo frequently contained shoddy information: from inaccurate recommended product details, to omitting or misstating travel stop details, to even misrepresenting seemingly straightforward financial reports. In the latter instance, Bing’s AI summation of basic financial data—something that should be “trivial” for AI, per Brereton—contained completely false statistics out of nowhere.

[Related: Just because an AI can hold a conversation does not make it smart.]

But even when correct, Bing may have grossly sidestepped simple ethical guardrails. According to one report from PCWorld’s Mark Hachman, the AI provided Hachman’s children with a litany of ethnic slurs when asked for cultural nicknames. Although Bing prefaced its examples by cautioning that certain nicknames are “neutral or positive, while others are derogatory or offensive,” the chatbot didn’t appear to bother categorizing its results. Instead, it simply created a laundry list of good, bad, and extremely ugly offerings.

Microsoft’s director of communications, Caitlin Roulston, told The Verge that the company “expect[ed] that the system may make mistakes during this preview period, and the feedback is critical to help identify where things aren’t working well so we can learn and help the models get better.”

As companies inevitably rush to implement “smart” chatbot capabilities into their ecosystems, critics argue it’s vital that these issues be tackled and resolved before widespread adoption. For Chinmay Hegde, an Associate Professor at NYU Tandon School of Engineering, the missteps were wholly unsurprising, and Microsoft debuted its technology far too early.

[Related: Google’s own upcoming AI chatbot draws from the power of its search engine.]

“At a high level, the reason why these errors are happening is that the technology underlying ChatGPT is a probabilistic [emphasis Hegde] large language model, so there is inherent uncertainty in its output,” he writes in an email to PopSci. “We can never be absolutely certain what it’s going to say next.” As such, programs like ChatGPT and Bard may be good for tasks where there is no unique answer—like making jokes or recipe ideas—but not so much when precision is required, such as historical facts or constructing logical arguments, says Hegde.

“I am shocked that the Bing team created this pre-recorded demo filled with inaccurate information, and confidently presented it to the world as if it were good,” Brereton writes in their blog post before admonishing, “I am even more shocked that this trick worked, and everyone jumped on the Bing AI hype train without doing an ounce of due diligence.”

This robot can create finger paintings based on human inputs https://www.popsci.com/technology/frida-ai-paint-robot/ Sat, 11 Feb 2023 12:00:00 +0000 https://www.popsci.com/?p=511313
Robot painted portrait of Frida Kahlo
FRIDA's portrait of its namesake artist. Carnegie Mellon University

Carnegie Mellon University's FRIDA turns ideas into colorful finger-painted portraits.

A research team at Carnegie Mellon University has developed a new project that embraces artistic collaboration’s spontaneity and joy by merging the strengths of humans, artificial intelligence, and robotics. FRIDA—the Framework and Robotics Initiative for Developing Arts—ostensibly works like the generative art-bot DALL-E by developing an image based on a series of human prompts. But FRIDA  takes it a step further by actually painting its idea on a physical canvas.

As described in a paper to be presented in May at the IEEE International Conference on Robotics and Automation, the team first installed a paintbrush onto an off-the-shelf robotic arm, then programmed its accompanying AI to reinterpret human input, photographs, and even music. The final results arguably resemble somewhat rudimentary finger paintings.

Unlike other similar designs, FRIDA analyzes its inherently imprecise brushwork in real time, and adjusts accordingly. Its perceived mistakes are incorporated into the project as they come, offering a new level of spontaneity. “It will work with its failures and it will alter its goals,” Peter Schaldenbrand, a Ph.D. student and one of the FRIDA’s creators, said in the demonstration video provided by Carnegie Mellon.

[Related: Netflix used AI-generated images in anime short. Artists are not having it.]

Its creators emphasize that the robot is a tool for human creativity. According to the team’s research paper, FRIDA “is a robotics initiative to promote human creativity, rather than replacing it, by providing intuitive ways for humans to express their ideas using natural language or sample images.”

Going forward, researchers hope to continue honing FRIDA’s abilities, along with expanding its repertoire to potentially one day include sculpting, an advancement that could show great promise in a range of production industries. 

Police are paying for AI to analyze body cam audio for ‘professionalism’ https://www.popsci.com/technology/police-body-cam-ai-truleo/ Fri, 10 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=510118
Chest of police officer in uniform wearing body camera next to police car.
Truleo transcribes bodycam audio, then classifies police-civilian interactions. Jonathan Wiggs/The Boston Globe via Getty Images

Law enforcement is using Truleo's natural language processing AI to analyze officers' interactions with the public, raising questions about efficacy and civilian privacy.

An increasing number of law enforcement departments are reportedly turning to artificial intelligence programs to monitor officers’ interactions with the public. According to multiple sources, police departments are specifically enlisting Truleo, a Chicago-based company which offers AI natural language processing for audio transcription logs ripped from already controversial body camera recordings. The partnership raises concerns regarding data privacy and surveillance, as well as efficacy and bias issues that come with AI automation.

Founded in 2019 through a partnership with FBI National Academy Associates, Inc., Truleo now possesses a growing client list that already includes departments in California, Alabama, Pennsylvania, and Florida. Seattle’s police department just re-upped on a two-year contract with the company. Police in Aurora, Colorado—currently under a state attorney general consent decree regarding racial bias and excessive use of force—are also in line for the software, which reportedly costs roughly $50 per officer, per month.

[Related: Police body cameras were supposed to build trust. So far, they haven’t.]

Truleo’s website says it “leverages” proprietary natural language processing (NLP) software to analyze, flag, and categorize transcripts of police officers’ interactions with citizens in the hopes of improving professionalism and efficacy. Transcript logs are classified based on certain parameters, and presented to customers via detailed reports to use as they deem appropriate. For example, Aurora’s police chief, Art Acevedo, said in a separate interview posted on Truleo’s website that the service can “identify patterns of conduct early on—to provide counseling and training, and the opportunity to intervene [in unprofessional behavior] far earlier than [they’ve] traditionally been able to.”

Speaking to PopSci over the phone, Anthony Tassone, Truleo’s co-founder and CEO, stressed Truleo software “relies on computers’ GPU” and is only installed within a police department’s cloud environment. “We don’t have logins or access to that information,” he says. Truleo’s sole intent, he says, is to provide textual analysis tools for police departments to analyze and assess their officers.

The company website offers example transcripts with AI-determined descriptions such as “formality,” “explanation,” “directed profanity,” and “threat.” The language detection skills also appear to identify actions such as pursuits, arrests, or medical attention requests. Examples of the program’s other classifications include “May I please see your license and registration?” (good) and “If you move from here I will break your legs” (bad).
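
To make the idea of language-based classification concrete, here is a deliberately simplified, keyword-based sketch. It is not Truleo’s proprietary NLP, whose models are not public; it only illustrates how transcript lines might be bucketed into categories like the ones above.

```python
# Deliberately simplified, keyword-based stand-in for transcript classification.
# Truleo's actual models are proprietary; this only illustrates the concept.
RULES = {
    "formality": ["sir", "ma'am", "please", "thank you"],
    "explanation": ["the reason", "because", "you were stopped for"],
    "threat": ["i will break", "i'll hurt", "or else"],
    "directed_profanity": ["you idiot"],  # stand-in phrases only
}

def classify_line(line):
    text = line.lower()
    labels = [label for label, phrases in RULES.items()
              if any(phrase in text for phrase in phrases)]
    return labels or ["unclassified"]

transcript = [
    "May I please see your license and registration?",
    "You were stopped for running the red light.",
    "If you move from here I will break your legs",
]
for line in transcript:
    print(classify_line(line), "-", line)
```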

Professionalism vs. Risk. Credit: Truleo

When asked about civilians’ rights to opt-out of this new form of employee development, however, Tassone cautions he would only be “speculating or guessing” regarding their options.

“I mean, I’m not a lawyer,” stresses Tassone when asked about civilians’ rights regarding opt-outs. “These questions are more for district attorneys, maybe police union attorneys. Once this information is captured on body camera data, you’re asking the question of really, ‘Who does it belong to?’”  

“Can civilians call [local departments] and ask to be removed? I don’t know,” he adds.

PopSci reached out to Alameda and Aurora law enforcement representatives for comment, and will update this post accordingly.

[Related: The DOJ is investigating an AI tool that could be hurting families in Pennsylvania.]

Michael Zimmer, associate professor and vice-chair of Marquette University’s Department of Computer Sciences, as well as the Director of the Center for Data, Ethics, and Society, urges caution in using the tech in an email to PopSci.

“While I recognize the good intentions of this application of AI to bodycam footage… I fear this could be fraught with bias in how such algorithms have been modeled and trained,” he says.

Zimmer questions exactly how “good” versus “problematic” interactions are defined, as well as who defines them. Given the prevalence of stressful, if not confrontational, civilian interactions with police, Zimmer takes issue with AI determining problematic officer behavior “based solely on bodycam audio interactions,” calling it “yet another case of the normalization of ubiquitous surveillance.”

Truleo’s website states any analyzed audio is first isolated from uploaded body cam footage through an end-to-end encrypted Criminal Justice Information Services (CJIS) compliant data transfer process. Established by the FBI in 1992, CJIS compliance guidelines are meant to ensure governmental law enforcement and vendors like Truleo protect individuals’ civil liberties, such as those concerning privacy and safety, while storing and processing their digital data. It’s important to note, however, that “compliance” is not a “certification.” CJIS compliance is assessed solely via the considerations of a company like Truleo along its law enforcement agency clients. There is no centralized authorization entity to award any kind of legally binding certification.

Promotional material on Truleo’s website. Credit: Truleo

Regardless, Tassone explains the very nature of Truleo’s product bars its employees from ever accessing confidential information. “After we process [bodycam] audio, there is no derivative of data. No court can compel us to give anything, because we don’t keep anything,” says Tassone. “It’s digital exhaust—it’s ‘computer memory,’ and it’s gone.”

Truleo’s technology also only analyzes bodycam data it is voluntarily offered—what gets processed remains at the sole discretion of police chiefs, sergeants, and other Truleo customers. But as Axios notes, the vast majority of body cam footage goes unreviewed unless there’s a civilian complaint or external public pressure, as was the case in the death of Tyre Nichols. Even then, footage can remain difficult to acquire—see the years-long struggle surrounding Joseph Pettaway’s death in Montgomery, Alabama.

[Related: Just because an AI can hold a conversation does not make it smart.]

Meanwhile, it remains unclear what, if any, recourse is available to civilians uncomfortable at the thought of their interactions with authorities being transcribed for AI textual analysis. Tassone tells PopSci he has no problem if a handful of people request their data be excluded from local departments’ projects, as it likely won’t affect Truleo’s “overall anonymous aggregate scores.”

“We’re looking at thousands of interactions of an officer over a one year period of time,” he offers as an average request. “So if one civilian [doesn’t] want their data analyzed to decide whether or not they were compliant, or whether they were upset or not,” he pauses. “Again, it really comes down to: The AI says, ‘Was this civilian complaint during the call?’ ‘Yes’ or ‘No.’ ‘Was this civilian upset?’ ‘Yes’ or ‘No.’ That’s it.”

The post Police are paying for AI to analyze body cam audio for ‘professionalism’ appeared first on Popular Science.

Just because an AI can hold a conversation does not make it smart https://www.popsci.com/technology/conversational-ai-inaccurate/ Thu, 09 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=511030
Revamped Microsoft Bing search engine home page screenshot
Brand new Bing, now with ChatGPT additive. Microsoft

These AI models may respond and write in a human-like way, but they are not always 100 percent correct.


Conversational AI-powered tools are going mainstream, which, to many disinformation researchers, is a major cause for concern. This week, Google announced Bard, its answer to OpenAI’s ChatGPT, and doubled down on rolling out AI-enhanced features to many of its core products at an event in Paris. Similarly, Microsoft announced that ChatGPT would soon be integrated with Bing, its much maligned search engine. Over the coming months these conversational tools will be widely available, but already, some problems are starting to appear.

Conversational AIs are built on a type of neural network called a “large language model” (LLM) and are incredibly good at generating text that is grammatically coherent and seems plausible and human-like. They can do this because they are trained on hundreds of gigabytes of human text, most of it scraped from the internet. To generate new text, the model works by predicting the next “token” (basically, a word or fragment of a complex word) given a sequence of tokens (many researchers have compared this to the “fill in the blank” exercises we used to do in school).

For example, I asked ChatGPT to write about PopSci and it started by stating “Popular Science is a science and technology magazine that was first published in 1872.” Here, it’s fairly clear that it is cribbing its information from places like our About page and our Wikipedia page, and calculating the likely follow-on words to a sentence that starts: “Popular Science is…” The paragraph continues in much the same vein, with each sentence being the kind of thing that follows along naturally in the sorts of content that ChatGPT is trained on.
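
To make the “fill in the blank” idea concrete, here is a toy, word-level predictor written in plain Python. Real LLMs use neural networks over subword tokens and vastly more text; this sketch just counts which word tends to follow which, but the generate-by-predicting-the-next-token loop has the same basic shape.

```python
# A drastically simplified, word-level stand-in for next-token prediction.
# It only counts which word follows which in a tiny "training corpus."
from collections import Counter, defaultdict

corpus = (
    "popular science is a science and technology magazine . "
    "popular science is an american digital magazine ."
).split()

# For each word, count the words that follow it in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

# Generate a short continuation of "popular science is ..."
word, generated = "is", ["popular", "science", "is"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # plausible-sounding, quickly repetitive
```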

Unfortunately, this method of predicting plausible next words and sentences means conversational AIs can frequently be factually wrong, and unless you already know the information, you can easily be misled because they sound like they know what they’re talking about. PopSci is technically no longer a magazine, but Google demonstrated this even better with the rollout of Bard. (This is also why large language models can regurgitate conspiracy theories and other offensive content unless specifically trained not to.)

[Related: A simple guide to the expansive world of artificial intelligence]

One of the demonstration questions in Google’s announcement (which is still live as of the time of writing) was “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” In response, Bard offered three bullet points, including one that said “JWST took the very first pictures of a planet outside of our solar system.”

While that sounds like the kind of thing you’d expect the largest space telescope ever built to do—and the JWST is indeed spotting exoplanets—it didn’t find the first one. According to Reuters and NASA, that honor goes to the European Southern Observatory’s Very Large Telescope (VLT) which found one in 2004. If this had instead happened as part of someone asking Bard for advice and not as part of a very public announcement, there wouldn’t have been dozens of astronomy experts ready to step in and correct it. 

Microsoft is taking a more upfront approach. The Verge found that Bing’s new FAQ stated that “the AI can make mistakes,” and that “Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate.” It goes on to call on users to exercise their own judgment and double-check the facts that the AI offers up. (It also says that you can ask Bing: “Where did you get that information?” to find out what sources it used to generate the answer.)

Still, this feels like a bit of a cop-out from Microsoft. Yes, people should be skeptical of information that they read online, but the onus is also on Microsoft to make sure the tools it is providing to millions of users aren’t just making stuff up and presenting it as if it’s true. Search engines like Bing are one of the best tools people have for verifying facts—they shouldn’t add to the amount of misinformation out there.

And that onus may be legally enforceable. The EU’s Digital Services Act, which will come into force sometime in 2024, has provisions specifically meant to prevent the spread of misinformation. Failure to comply with the new law could result in penalties of up to 6 percent of a company’s annual turnover. Given the EU’s recent spate of large fines for US tech companies and an existing provision that search engines must remove certain kinds of information that can be proved to be inaccurate, it seems plausible that the 27-country bloc may take a hard stance on AI-generated misinformation displayed prominently on Google or Bing. Tech companies are already being forced to take a tougher stance on other forms of generated misinformation, like deepfakes and fake social media accounts.

With these conversational AIs set to be widely and freely available soon, we are likely to see more discussion about how appropriate their use is—especially as they claim to be an authoritative source of information. In the meantime, let’s keep in mind going forward that it’s far easier for these kinds of AI to create grammatically coherent nonsense than it is for them to write an adequately fact-checked response to a query.

The post Just because an AI can hold a conversation does not make it smart appeared first on Popular Science.

The highlights and lowlights from the Google AI event https://www.popsci.com/technology/google-ai-in-paris/ Wed, 08 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=510821
Google SVP Prabhakar Raghavan at the AI event in Paris
Google SVP Prabhakar Raghavan at the AI event in Paris. Google / YouTube

Google Maps, Search, Translate, and more are getting an AI update.


Google search turns 25 this year, and although its birthday isn’t here yet, today executives at the company announced that the search function is getting some much anticipated AI-enhanced updates. Outside of search, Google is also expanding its AI capabilities to new and improved features across its translation service, maps, and its work with arts and culture. 

After Google announced on Monday that it was launching its own ChatGPT-like AI chatbot, called Bard, Prabhakar Raghavan, senior vice president at Google, introduced it live at a Google AI event that was streamed Wednesday from Paris, France.

Raghavan highlighted how Google-pioneered research in transformers (that’s a neural network architecture used in language models and machine learning) set the stage for much of the generative AI we see today. He noted that while pure fact-based queries are the bread and butter of Google search as we know it today, questions in which there is “no one right answer” could be served better by generative AI, which can help users organize complex information and multiple viewpoints. 

Google’s new conversational AI, Bard, which is built on a lightweight version of LaMDA, a language model the company developed in 2021, is meant to, for example, help users weigh the pros and cons of different car models if they were looking into buying a vehicle. Bard is currently available to a small group of testers, and will be scaling to more users soon.

[Related: Google’s own upcoming AI chatbot draws from the power of its search engine]

However, the debut didn’t go as smoothly as the company planned. Multiple publications noticed that in a social media post Google shared about the new AI search feature, Bard gave the wrong information in response to a demo question. Specifically, when prompted with the query: “what new discoveries from the James Webb Space Telescope can I tell my 9 year old about,” Bard responded with “JWST took the very first pictures of a planet outside of our own solar system,” which is inaccurate. According to Reuters and NASA, the first pictures of a planet outside of our solar system were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004.

This stumble comes at a bad time, given the hype a day earlier around Microsoft’s announcement that it was integrating ChatGPT’s AI into the company’s Edge browser and its search engine, Bing.

Despite Bard’s bumpy breakout, Google did go on to make many announcements about AI-enhanced features trickling into its other core services. 

[Related: Google’s about to get better at understanding complex questions]

In Lens, an app based on Google’s image-recognition tech, the company is bringing a “search your screen” feature to Android users in the coming months. This will allow users to click on a video or image from their messages, web browser, and other apps, and ask the Google Assistant to find more information about items or landmarks that may appear in the visual. For example, if a friend sends a video of her trip to Paris, Google Assistant can search the screen of the video and identify the landmark present in it, like the Luxembourg Palace. It’s part of Google’s larger effort to mix different modalities, like visual, audio, and text, into search in order to help it tackle more complex queries.

In the maps arena, a feature called immersive view, which Google teased last year at the 2022 I/O conference, is starting to roll out today. Immersive view uses a method called neural radiance fields to generate a 3D scene from 2D images. It can even recreate subtle details like lighting and the texture of objects.
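
For the curious, here is a heavily condensed sketch, assuming PyTorch, of the core network in a neural radiance field: a small MLP that maps an encoded 3D point to a color and a density, which a full pipeline would then accumulate along camera rays to render new views. Google’s production system is far more elaborate; this only shows the basic building block.

```python
# A minimal NeRF-style field: positional encoding + MLP -> (RGB, density).
# Viewing direction and ray rendering are omitted for brevity.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sines/cosines of increasing frequency."""
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin((2 ** i) * x), torch.cos((2 ** i) * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * (1 + 2 * num_freqs)          # encoded (x, y, z)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB + volume density
        )

    def forward(self, xyz):
        out = self.mlp(positional_encoding(xyz, self.num_freqs))
        rgb = torch.sigmoid(out[..., :3])          # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])           # non-negative density
        return rgb, sigma

# Query the (untrained) field at a batch of random 3D points.
rgb, sigma = TinyNeRF()(torch.rand(1024, 3))
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```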

[Related: Google I/O recap: All the cool AI-powered projects in the works]

Outside of the immersive view feature, Google is also bringing search with live view to maps, which allows users to scope out their surroundings using their phone camera to scan the streets around them and get instant augmented reality-based information on shops and businesses nearby. It’s currently available in London, Los Angeles, New York, Paris, San Francisco, and Tokyo, but will be expanding soon to Barcelona, Dublin, and Madrid. For EV drivers, AI will be used to suggest charging stops and plan routes that factor in things like traffic, energy consumption, and more. Users can expect these improvements to trickle into data-based projects Google has been running, such as Environmental Insights Explorer and Project Air View.

To end on a fun note, Google showcased some of the work it’s been doing in using AI to design tools across arts and culture initiatives. As some might remember from the last few years, Google has used AI to locate you and your pet’s doppelgängers in historic art. In addition to solving research challenges like helping communities preserve their language word lists, digitally restoring paintings and other cultural artifacts, and uncovering the historic contributions of women in science, AI is being used in more amusing applications as well. For example, the Blob Opera was built from an algorithm trained on the voices of real opera singers. The neural network then puts its own interpretation on how to sing and harmonize based on its model of human singing. 

Watch the entire presentation below: 

Update on Feb 13, 2023: This post has been updated to clarify that Bard gave incorrect information in a social media post, not during the live event itself. This post has also been updated to remove a sentence referring to the delay between when the livestream concluded and when Google published the video of the event.

The post The highlights and lowlights from the Google AI event appeared first on Popular Science.

Microsoft is betting ChatGPT will make Bing useful https://www.popsci.com/technology/microsoft-bing-edge-chatgpt/ Tue, 07 Feb 2023 20:35:29 +0000 https://www.popsci.com/?p=510576
Close up of Bing search engine homepage on computer screen
Brand new Bing, now with AI additive. Deposit Photos

Microsoft's $10 billion investment in OpenAI is already resulting in ChatGPT integration for Bing and Edge, with potentially major consequences.


Microsoft has announced the first results of its recent $10 billion investment in the research lab OpenAI. Its long-overlooked and frequently maligned Bing search engine is getting a revamp courtesy of ChatGPT integration, alongside the company’s Edge web browser.

Per a presentation from Microsoft headquarters on Tuesday, CEO Satya Nadella laid out how the company’s revitalized Bing is meant to provide a “new day” for internet users. In addition to an overall retooling of how Bing’s search results populate, users will soon be able to leverage ChatGPT’s conversational tone to assist in more complex tasks; Microsoft says it will generate travel itineraries, offer recipe ingredient substitutions, and list potential options for a product while shopping, all while also providing relevant links.

[Related: Google’s own upcoming AI chatbot draws from the power of its search engine.]

According to Microsoft, ChatGPT’s ability to produce often humanlike responses—a cause for concern across many fields and industries—can soon also be leveraged to help write emails, prep for job interviews, and even plan a trivia game night all from within the new Bing and Edge. As shown during the company’s public demonstration, the new integration comes in the form of either traditional search results side-by-side with AI-assisted annotations, or a separate chat window for more in-depth conversations. The new-and-improved Bing is currently available via “limited preview,” with a full-scale launch arriving in the near future.

[Related: AI scientists say ChatGPT is nothing special.]

Microsoft’s dramatic search engine reboot is only the latest in a string of major developments within the complex and often controversial generative AI chatbot industry. On Monday, Google announced the imminent arrival of Bard, its intended rival to the Microsoft-backed ChatGPT tool based in part on the company’s (recently misunderstood) LaMDA service. Meanwhile, critics and experts urge everyone to pump the brakes on hype surrounding these programs, citing concerns over efficacy and accuracy.

Microsoft’s $10 billion investment into OpenAI won’t end with search engines, either. According to Tuesday’s presentation, ChatGPT services will soon find their way into the company’s entire suite of products.

The post Microsoft is betting ChatGPT will make Bing useful appeared first on Popular Science.

This fictitious news show is entirely produced by AI and deepfakes https://www.popsci.com/technology/deepfake-news-china-ai/ Tue, 07 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=510463
Screenshot of deepfaked news anchor for fake news channel
If something feels off, it's because it is. Graphika

'Wolf News' videos feature grammatical errors and weird AI-generated anchors.


A research firm specializing in misinformation called Graphika issued a startling report on Tuesday revealing just how far controversial deepfake technologies have come. Their findings detail what appears to be the first instance of a state-aligned influence operation utilizing entirely AI-generated “news” footage to spread propaganda. Despite its comparatively ham-fisted final products and seemingly low online impact, the AI television anchors of a fictitious outlet, Wolf News, promoted critiques of American inaction on gun violence last year and praised China’s geopolitical responsibilities and influence at an upcoming international summit.

As detailed in supplementary reporting supplied on Tuesday by The New York Times, the two Wolf News anchors can be traced back to “Jason” and “Anna” avatars offered by Synthesia, a five-year-old startup in Britain offering deepfake software to clients for as little as $30 a month. Synthesia currently offers at least 85 characters modeled on real human actors across a spectrum of ages, genders, ethnicities, voice tones, and clothing. Customers can also generate avatars of themselves, as well as of anyone who grants consent.

[Related: A history of deepfakes, misinformation, and video editing.]

Synthesia’s products are largely intended and marketed as cost- and time-saving tools for projects such as a company’s in-house human resources training videos. Past clients advertised on the company website include Amazon and Novo Nordisk. Synthesia’s examples, as well as the propaganda clips highlighted by The New York Times, aren’t exactly high quality—the avatars speak in largely monotonous tones, with stilted facial expressions, delayed audio, and unrealistic movements such as blinking too slowly.

Usually, this isn’t an issue, as clients are willing to sacrifice those qualities in exchange for drastically cheaper operating costs on often mundane projects. Still, experts at Graphika caution that the technology is quickly improving, and will soon produce misinformation that is much harder to distinguish from real video.

[Related: ‘Historical’ chatbots aren’t just inaccurate—they are dangerous.]

Synthesia’s terms of service clearly prohibit generating “political, sexual, personal, criminal and discriminatory content.” Although the company has a four-person team tasked with monitoring clients’ deepfake content for violations, it remains difficult to flag subtle issues like misinformation or propaganda as opposed to hate speech or explicit content. Victor Riparbelli, Synthesia’s co-founder and CEO, told The New York Times that the company takes full responsibility for the security lapse, and that the subscribers behind Wolf News have been banned for violating its policies.

Although the digital propaganda uncovered by Graphika appears to have reached few people online, the firm cautions it is only a matter of time until bad actors leverage better technology for extremely convincing videos and images for their respective influence operations. In a blog post published to Synthesia’s website last November, Riparbelli put the onus on governments to enact comprehensive legislation to regulate the “synthetic media” and deepfake industries. 

The post This fictitious news show is entirely produced by AI and deepfakes appeared first on Popular Science.

Google’s own upcoming AI chatbot draws from the power of its search engine https://www.popsci.com/technology/google-ai-chatbot-bard/ Tue, 07 Feb 2023 16:00:00 +0000 https://www.popsci.com/?p=510444
Hand holding smartphone displaying Google search homepage

Bard, as the bot is called, will be available to the public in the coming weeks.


Google announced on Monday that it is launching an AI-powered chatbot it’s calling Bard “in the coming weeks.” While this might look like a response to ChatGPT—OpenAI’s AI-powered chatbot that has been getting a lot of attention since it launched late last year—the reality is that Google has been developing AI tools for more than six years. And although these tools have not previously been made available to the public, that might now start to change.

In the blog post announcing Bard, Google and Alphabet CEO Sundar Pichai writes that Google has been developing an “experimental conversational AI service” powered by its Language Model for Dialogue Applications or LaMDA. (That’s the AI model that one Google engineer tried to claim was sentient last summer.) Bard aims to “combine the breadth of the world’s knowledge with the power, intelligence and creativity of [Google’s] large language models” by drawing from information around the web and presenting it in fresh, easy to understand ways. 

Pichai gives a few examples of how Bard can be used, such as getting ideas to help plan a friend’s baby shower, comparing two Oscar-nominated movies, or getting suggestions for what new discoveries by the James Webb Space Telescope to discuss with a 9-year-old.

While Bard is only available to “trusted testers” right now, it is due to roll out to the general public over the next few weeks. Google is using a lightweight version of LaMDA, which requires less computing power to operate, to allow it to serve more users and thus get more feedback. Here at PopSci, we will jump in and try it out as soon as we get the chance.

Of course, Google’s end-goal is to use AI to improve its most important product: its search engine. In the blog post, Pichai highlights some of the AI tools it’s already using—including BERT and MUM—that help it understand the intricacies of human language. During the COVID pandemic, MUM, for example, was able to categorize over 800 possible names for 17 different vaccines in 50 different languages so Google could provide the most important and accurate health information. 
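
Masked-word prediction is the training task behind BERT-style models like the ones Pichai mentions. The snippet below shows that mechanism in miniature; it assumes the Hugging Face transformers package and the public bert-base-uncased checkpoint, not Google’s internal search models.

```python
# Ask a public BERT model to fill in a masked word and show its top guesses.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The capital of France is [MASK]."):
    print(f"{candidate['token_str']:>10}  score={candidate['score']:.3f}")
```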

Crucially, Pichai says that the way people use Google search is changing. “When people think of Google, they often think of turning to us for quick factual answers, like ‘how many keys does a piano have?’ But increasingly, people are turning to Google for deeper insights and understanding—like, ‘is the piano or guitar easier to learn, and how much practice does each need?’”

He sees Google’s latest AI technologies, like LaMDA and PaLM, as an opportunity to “deepen our understanding of information and turn it into useful knowledge more efficiently.” When faced with more complex questions where there is no one right answer, it can pull in different sources of information and present them in a logical way. According to Pichai, we will soon see AI-powered features in search that “distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web.”

Once or twice in the blog post, you get a sense that Pichai is perhaps frustrated with OpenAI’s prominence. While never name-checking OpenAI or ChatGPT directly, he links to Google’s Transformer research project, calling it “field-defining” and “the basis of many of the generative AI applications you’re starting to see today,” which is entirely true. The “T” in ChatGPT and GPT-3 stands for Transformer; both rely heavily on research published by Google’s AI teams. But despite its research successes, Google isn’t the company with the widely discussed AI chatbot today. Maybe Bard’s presence will change that.

The post Google’s own upcoming AI chatbot draws from the power of its search engine appeared first on Popular Science.

A simple guide to the expansive world of artificial intelligence https://www.popsci.com/technology/artificial-intelligence-definition/ Sun, 05 Feb 2023 17:00:00 +0000 https://www.popsci.com/?p=509522
A white robotic hand moving a black pawn as the opening move of a chess game played atop a dark wooden table.
Here's what to know about artificial intelligence. VitalikRadko / Depositphotos

AI is everywhere, but it can be hard to define.


When you challenge a computer to play a chess game, interact with a smart assistant, type a question into ChatGPT, or create artwork on DALL-E, you’re interacting with a program that computer scientists would classify as artificial intelligence. 

But defining artificial intelligence can get complicated, especially when other terms like “robotics” and “machine learning” get thrown into the mix. To help you understand how these different fields and terms are related to one another, we’ve put together a quick guide. 

What is a good artificial intelligence definition?

Artificial intelligence is a field of study, much like chemistry or physics, that kicked off in 1956. 

“Artificial intelligence is about the science and engineering of making machines with human-like characteristics in how they see the world, how they move, how they play games, even how they learn,” says Daniela Rus, director of the computer science and artificial intelligence laboratory (CSAIL) at MIT. “Artificial intelligence is made up of many subcomponents, and there are all kinds of algorithms that solve various problems in artificial intelligence.” 

People tend to conflate artificial intelligence with robotics and machine learning, but these are separate, related fields, each with a distinct focus. Generally, you will see machine learning classified under the umbrella of artificial intelligence, but that’s not always true.

“Artificial intelligence is about decision-making for machines. Robotics is about putting computing in motion. And machine learning is about using data to make predictions about what might happen in the future or what the system ought to do,” Rus adds. “AI is a broad field. It’s about making decisions. You can make decisions using learning, or you can make decisions using models.”

AI generators, like ChatGPT and DALL-E, are machine learning programs, but the field of AI covers a lot more than just machine learning, and machine learning is not fully contained in AI. “Machine learning is a subfield of AI. It kind of straddles statistics and the broader field of artificial intelligence,” says Rus.

Complicating the picture is that non-machine learning algorithms can also be used to solve problems in AI. For example, a computer can play the game Tic-Tac-Toe with a non-machine learning algorithm called minimax optimization. “It’s a straight algorithm. You build a decision tree and you start navigating. There is no learning, there is no data in this algorithm,” says Rus. But it’s still a form of AI.
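
Here is what that looks like in code: a compact Python version of minimax for Tic-Tac-Toe. There is no training data anywhere—the program simply explores the decision tree of every possible continuation and picks the move with the best guaranteed outcome.

```python
# Minimax for Tic-Tac-Toe: pure search over the game tree, no learning.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the side to move: +1 win, 0 draw, -1 loss."""
    won = winner(board)
    if won:                                  # the previous move ended the game
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                       # draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for move in moves:
        board[move] = player
        score, _ = minimax(board, opponent)  # evaluate from the opponent's side
        board[move] = " "
        score = -score                       # their gain is our loss
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# Ask for X's best reply on a partially played board (X at 0 and 8, O at 2 and 5).
board = list("X O  O  X")
print(minimax(board, "X"))  # a score of 1 means X can force a win from here
```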

Back in 1997, the Deep Blue algorithm that IBM used to beat Garry Kasparov was AI, but not machine learning, since it didn’t use gameplay data. “The reasoning of the program was handcrafted,” says Rus. “Whereas AlphaGo [a Go-playing program] used machine learning to craft its rules and its decisions for how to move.”

When robots have to move around in the world, they have to make sense of their surroundings. This is where AI comes in: They have to see where obstacles are, and figure out a plan to go from point A to point B. 

“There are ways in which robots use models like Newtonian mechanics, for instance, to figure how to move, to figure how to not fall, to figure out how to grab an object without dropping it,” says Rus. “If the robot has to plan a path from point A to point B, the robot can look at the geometry of the space and then it can figure out how to draw a line that is not going to bump into any obstacles and follow that line.” That’s an example of a computer making decisions that is not using machine learning, because it is not data-driven.
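
That kind of geometric planning can be sketched in a few lines of Python: breadth-first search over a small occupancy grid finds a collision-free route from A to B using nothing but the map itself. The grid and coordinates here are invented purely for illustration.

```python
# Model-based path planning: no data, just the geometry of the space.
from collections import deque

GRID = [                     # 1 = obstacle, 0 = free space
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan(grid, start, goal):
    """Return a list of grid cells from start to goal, avoiding obstacles."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                              # no collision-free path exists

print(plan(GRID, start=(0, 0), goal=(0, 4)))
```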

[Related: How a new AI mastered the tricky game of Stratego]

Or take, for example, teaching a robot to drive a car. In a machine learning-based solution for that task, the robot could watch how humans steer as they go around a bend, and learn to turn the wheel either a little or a lot based on how shallow the bend is. For comparison, in the non-machine learning solution, the robot would simply look at the geometry of the road, consider the dynamics of the car, and use that to calculate the angle to apply to the wheel to keep the car on the road without veering off. Both are examples of artificial intelligence at work, though.

“In the model-based case, you look at the geometry, you think about the physics, and you compute what the actuation ought to be. In the data-driven [machine learning] case, you look at what the human did, and you remember that, and in the future when you encounter similar situations, you can do what the human did,” Rus says. “But both of these are solutions that get robots to make decisions and move in the world.” 
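
The contrast Rus describes can be boiled down to a few lines of Python (assuming NumPy, with made-up numbers): the data-driven version fits a curve to human demonstrations, while the model-based version computes a steering angle directly from the road geometry using a simple kinematic bicycle model.

```python
import numpy as np

# Data-driven: fit (road curvature, observed human steering angle) pairs,
# then predict the angle for a new bend. The demonstration data is invented.
curvature   = np.array([0.00, 0.02, 0.05, 0.08, 0.10])     # 1/meters
human_angle = np.array([0.0, 1.1, 2.4, 4.2, 5.1])           # degrees
slope, intercept = np.polyfit(curvature, human_angle, deg=1)
print("learned: ", slope * 0.06 + intercept)                 # new bend, k = 0.06

# Model-based: no demonstrations, just geometry. A kinematic bicycle model
# gives steering angle = arctan(wheelbase * curvature).
wheelbase = 2.7                                              # meters
print("computed:", np.degrees(np.arctan(wheelbase * 0.06)))
```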

Can you tell me more about how machine learning works?

“When you do data-driven machine learning that people equate with AI, the situation is very different,” Rus says. “Machine learning uses data in order to figure out the weights and the parameters of a huge network, called the artificial neural network.” 

Machine learning, as its name implies, is the idea of software learning from data, as opposed to software just following rules written by humans. 

“Most machine learning algorithms are at some level just calculating a bunch of statistics,” says Rayid Ghani, professor in the machine learning department at Carnegie Mellon University. Before machine learning, if you wanted a computer to detect an object, you would have to describe it in tedious detail. For example, if you wanted computer vision to identify a stop sign, you’d have to write code that describes the color, shape, and specific features on the face of the sign. 

“What people figured is that it would be exhaustive for people describing it. The main change that happened in machine learning is [that] what people were better at was giving examples of things,” Ghani says. “The code people were writing was not to describe a stop sign, it was to distinguish things in category A versus category B [a stop sign versus a yield sign, for example]. And then the computer figured out the distinctions, which was more efficient.”
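
A minimal example of that shift, assuming scikit-learn: instead of hand-describing a stop sign, you hand over labeled examples—reduced here to two invented features—and let the classifier find the distinction on its own.

```python
# Give examples of category A versus category B and let the model
# figure out the distinguishing rule.
from sklearn.tree import DecisionTreeClassifier

#            redness  corners
examples = [[0.9,      8],          # stop sign
            [0.8,      8],          # stop sign
            [0.3,      3],          # yield sign
            [0.4,      3]]          # yield sign
labels = ["stop", "stop", "yield", "yield"]

clf = DecisionTreeClassifier().fit(examples, labels)
print(clf.predict([[0.85, 8], [0.35, 3]]))   # -> ['stop' 'yield']
```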

Should we worry about artificial intelligence surpassing human intelligence?

The short answer, right now: Nope. 

Today, AI is very narrow in its abilities and is able to do specific things. “AI designed to play very specific games or recognize certain things can only do that. It can’t do something else really well,” says Ghani. “So you have to develop a new system for every task.” 

In one sense, Rus says that research under AI is used to develop tools, but not ones that you can unleash autonomously in the world. ChatGPT, she notes, is impressive, but it’s not always right. “They are the kind of tools that bring insights and suggestions and ideas for people to act on,” she says. “And these insights, suggestions and ideas are not the ultimate answer.” 

Plus, Ghani says that while these systems “seem to be intelligent,” all they’re really doing is looking at patterns. “They’ve just been coded to put things together that have happened together in the past, and put them together in new ways.” A computer will not on its own learn that falling over is bad. It needs to receive feedback from a human programmer telling it that it’s bad. 

[Related: Why artificial intelligence is everywhere now]

Machine learning algorithms can also be lazy. For example, imagine giving a system images of men, women, and non-binary individuals, and telling it to distinguish between the three. It’s going to find patterns that are different, but not necessarily ones that are meaningful or important. If all the men are wearing one color of clothing, or all the photos of women were taken against the same color backdrop, the colors are going to be the characteristics that these systems pick up on.

“It’s not intelligent, it’s basically saying ‘you asked me to distinguish between three sets. The laziest way to distinguish was this characteristic,’” Ghani says. Additionally, some systems are “designed to give the majority answer from the internet for a lot of these things. That’s not what we want in the world, to take the majority answer that’s usually racist and sexist.” 

In his view, there still needs to be a lot of work put into customizing the algorithms for specific use cases, making it understandable to humans how the model reaches certain outputs based on the inputs it’s been given, and working to ensure that the input data is fair and accurate. 

What does the next decade hold for AI?

Computer algorithms are good at taking large amounts of information and synthesizing it, whereas people are good at looking through a few things at a time. Because of this, computers tend to be, understandably, much better at going through a billion documents and figuring out facts or patterns that recur. But humans are able to go into one document, pick up small details, and reason through them. 

“I think one of the things that is overhyped is the autonomy of AI operating by itself in uncontrolled environments where humans are also found,” Ghani says. In very controlled settings—like figuring out the price to charge for food products within a certain range based on an end goal of optimizing profits—AI works really well. However, cooperation with humans remains important, and in the next decades, he predicts that the field will see a lot of advances in systems that are designed to be collaborative. 

Drug discovery research is a good example, he says. Humans are still doing much of the work with lab testing and the computer is simply using machine learning to help them prioritize which experiments to do and which interactions to look at.

“[AI algorithms] can do really extraordinary things much faster than we can. But the way to think about it is that they’re tools that are supposed to augment and enhance how we operate,” says Rus. “And like any other tools, these solutions are not inherently good or bad. They are what we choose to do with them.”

The post A simple guide to the expansive world of artificial intelligence appeared first on Popular Science.

Netflix used AI-generated images in anime short. Artists are not having it. https://www.popsci.com/technology/netflix-anime-generative-ai/ Thu, 02 Feb 2023 21:00:00 +0000 https://www.popsci.com/?p=509353
Screenshot from Netflix anime 'Dog and Boy'
AI was used to help generate the backgrounds for Netflix's 'Dog and Boy.' Netflix

The company claims AI could help with anime's supposed labor shortage, and animators are furious.


Contrary to what Netflix may say, there isn’t a massive labor shortage in the anime industry. Instead, low pay and often strenuous working conditions may be pushing artists away from the work even as the genre ascends to new heights of popularity. But the supposed lack of available talent is what led to Dog and Boy, Netflix’s recent “experimental effort to help the anime industry” that appears to have backfired rather spectacularly since its debut on Tuesday.

[Related: A guide to the internet’s favorite generative AIs.]

Dog and Boy is a three-minute animated short courtesy of Tokyo’s Netflix Anime Creators Base boasting AI-generated landscape backdrops from Rinna Inc., an AI artwork company. Produced by WIT Studio (the company also responsible for the first three seasons of the hit anime, Attack on Titan), the clip elicited almost immediate ire from the online animation community for what many see as a blatant sidestepping of fair wages in favor of cheaper, soulless alternatives.

“Not something to be proud of babes [sic],” tweeted Hamish Steele, the Eisner Award-winning creator of Netflix’s DEAD END: Paranormal Park animated series.

As Engadget also notes, the generative AI art stunt technically still required unknown amounts of human labor. “AI (+Human)” is listed in the role of “Background Editor” during Dog and Boy’s end credits, and behind-the-scenes photos supplied by Netflix even showcase someone touching up the AI software’s images. There’s no immediate word on how much this person was compensated for the work.

Apart from the artwork itself, AI programs are generating their fair share of controversies as they continue dominating headlines and ethical discussions. Many artists are fighting back against their own work being utilized in AI training datasets without fair compensation, while other industries also attempt to cash in on the technology.

[Related: The DOJ is investigating an AI tool that could be hurting families in Pennsylvania.]

“In a general sense, the creators of AI image generator platforms still need to answer about the copyright issues, since these technologies work by scraping the internet for bits and pieces of all sorts of images,” explains Sebastián Bisbal, an award-winning filmmaker, animator and visual artist from Rancagua, Chile, via email. “In this short, specifically, it is very easy to make the assumption that someone typed ‘[Studio] Ghibli styled landscape’ and the system delivered this. This raises all sorts of ethical and legal questions.”

“That’s so grim,” adds comic illustrator, artist, and writer, Michael Kupperman. “Our world economy has created a reality where they need art, but can’t pay artists enough, ever, so this nightmare is the logical answer.”

From a technical standpoint, Bisbal also points to Dog and Boy‘s inconsistencies in brushwork and background style, rendering the overall short’s quality “quite poor,” in his opinion. He believes that utilizing generative AI programs for personal experimentation, reference, and practice is fine, but that the problems arise when it is used as a monetizable shortcut. “I think this responds to a major contemporary issue, that we all want everything instantaneously, effortlessly, and with the lowest cost as possible,” he writes. “Animation in its core is quite the opposite: it’s the art of patience.”

The post Netflix used AI-generated images in anime short. Artists are not having it. appeared first on Popular Science.

The DOJ is investigating an AI tool that could be hurting families in Pennsylvania https://www.popsci.com/technology/allegheny-pennsylvania-ai-child-welfare/ Wed, 01 Feb 2023 18:30:00 +0000 https://www.popsci.com/?p=509038
System Security Specialist Working at System Control Center
The Justice Dept. is allegedly concerned with recent deep dives into the Allegheny Family Screening Tool. Deposit Photos

Critics—and potentially the DOJ—are worried about the Allegheny Family Screening Tool's approach to mental health and disabled communities.


Over the past seven years, Allegheny County Department of Human Services workers have frequently employed an AI predictive risk modeling program to aid in assessing children’s risk factors for being placed into the greater Pittsburgh area’s foster care system. In recent months, however, the underlying algorithms behind the Allegheny Family Screening Tool (AFST) have received increased scrutiny over their opaque design, taking into account predictive AI tools’ longstanding racial, class, and gender-based biases.

Previous reporting on the Allegheny Family Screening Tool’s algorithm by the Associated Press revealed certain data points could be interpreted as stand-in descriptions for racial groups. But now it appears the AFST could also be affecting families within the disabled community, as well as families with a history of mental health conditions. And the Justice Department is taking notice.

[Related: The White House’s new ‘AI Bill of Rights’ plans to tackle racist and biased algorithms.]

According to a new report published today from the Associated Press, multiple formal complaints regarding the AFST have been filed via the Justice Dept.’s Civil Rights Division, citing the AP’s prior investigations into its potential problems. Anonymous sources within the Justice Dept. say officials are concerned that the AFST’s overreliance on potentially skewed historical data risks “automating past inequalities,” particularly longstanding biases against people with disabilities and mental health problems.

The AP explains the Allegheny Family Screening Tool utilizes a “pioneering” AI program designed to supposedly help overworked social workers in the greater Pittsburgh area determine which families require further investigation regarding child welfare claims. More specifically, the tool was crafted to aid in predicting the potential risk of a child being placed into foster care within two years of an investigation into their family environment.

The AFST’s black box design reportedly takes into account numerous case factors, including “personal data and birth, Medicaid, substance abuse, mental health, jail and probation records, among other government data sets,” to flag families for further neglect investigations. Although human social service workers ultimately decide whether or not to follow up on cases after seeing the AFST algorithm’s results, critics argue the program’s potentially faulty judgments could influence the employees’ decisions.

[Related: The racist history behind using biology in criminology.]

A spokesman for the Allegheny County Department of Human Services told the AP they were not aware of any Justice Department complaints, nor were they willing to discuss the larger criticisms regarding the screening tool.

Child protective services systems have long faced extensive criticisms regarding both their overall effectiveness, as well as the disproportional consequences faced by Black, disabled, poor, and otherwise marginalized families. The AFST’s official website heavily features third-party studies, reports, and articles attesting to the program’s supposed reliability and utility.

The post The DOJ is investigating an AI tool that could be hurting families in Pennsylvania appeared first on Popular Science.

Indonesia activates a disaster-relief chatbot after destructive floods https://www.popsci.com/technology/chatbot-monsoons-humanitarian-indonesia/ Tue, 31 Jan 2023 21:00:00 +0000 https://www.popsci.com/?p=508848
Several people are carrying sacks filled with food and clothing to prepare for evacuation after their house was flooded in Indonesia
BencanaBot could help Indonesians coordinate during more frequent natural disasters. Deposit Photos

BencanaBot allows Indonesians to submit and coordinate disaster resiliency plans in real time.


Floodwaters up to 30 feet high swept through Indonesia’s North Sulawesi province last Friday, destroying dozens of homes and killing at least five people. Unfortunately, experts warn the nation’s monsoon season is far from over, and will likely worsen in the years ahead due to climate change.

However, locals now have access to a potentially vital new tool to help them communicate, coordinate, and prepare in an area increasingly beset by dire natural disasters—and it’s a first for one of the world’s most popular messaging apps.

[Related: New factory retrofit could reduce a steel plant’s carbon emissions by 90 percent.]

Today, disaster relief management nonprofit Yayasan Peta Bencana announced the debut of BencanaBot, a “Humanitarian WhatsApp Chatbot.” Billed as the first of its kind, BencanaBot’s AI-assisted chat features can now guide locals through the process of submitting disaster reports that are then mapped in real time on the free, open source platform, PetaBencana.id. There, anyone in need can view and share updates to coordinate decisions regarding safety and responses via collaborative evidence verified by government agencies.

“With over 80 million active users of WhatsApp in Indonesia, the launch of BencanaBot on WhatsApp represents a new milestone in enabling residents all across the archipelago to participate in, and benefit from, this free disaster information sharing system,” Nashin Mahtani, director of Yayasan Peta Bencana, said in a statement.

Going forward, anyone in Indonesia can now anonymously share disaster information via WhatsApp (+628584-BENCANA), Twitter (@petabencana), Facebook Messenger (@petabencana), and Telegram (@bencanabot). WhatsApp’s default end-to-end encryption also ensures an added layer of privacy for its users, although like all messaging platforms, it is likely not without its faults.

[Related: A chunk of ice twice the size of New York City broke off the Brunt Ice Shelf.]

Using such an exhaustive program may sound intimidating to some, but BencanaBot’s creators specifically designed the service to be intuitive and easy to understand for underfunded communities in Indonesia. In particular, the platform is designed to be “data-light,” meaning it works seamlessly through the existing instant messaging, social media, and SMS-based communications its users already know, without requiring a lot of device data usage.

According to the Intergovernmental Panel on Climate Change (IPCC), access to local and timely information remains one of the greatest hurdles for populations adapting to climate change’s rapidly multiplying existential threats. The rise of tools like BencanaBot are crucial for societal adaptation to these issues, and can strengthen communities’ resilience in the face of some of the planet’s most difficult ongoing climate challenges.

The post Indonesia activates a disaster-relief chatbot after destructive floods appeared first on Popular Science.

A college student built an AI to help look for alien radio signals https://www.popsci.com/technology/ai-radio-signals-extraterrestrial/ Tue, 31 Jan 2023 16:30:00 +0000 https://www.popsci.com/?p=508799
OTC NASA Satellite Earth Station Carnarvon Western Australia
A third-year college student's AI could act as a valuable proofreader for SETI. Deposit Photos

The program already spotted potential evidence while combing through 150TB of data from 820 nearby stars.


Enlisting advanced artificial intelligence to help humans search for signs of extraterrestrial life may sound like the premise to a sci-fi novel. Nevertheless, it’s a strategy that investigators are increasingly employing to help expedite and improve their ET detection methodologies. As a new paper published in Nature Astronomy reveals, one of the most promising advancements in the field may have arrived courtesy of a college undergrad.

Over the past few years, Peter Ma, a third-year math and physics student at the University of Toronto, has worked alongside mentors at SETI and Breakthrough Listen—an initiative tasked with finding “technosignatures” of extraterrestrial intelligence—to develop a new neural network technique capable of parsing through massive troves of galactic radio signals in the pursuit of alien life. Narrowband radio frequencies have been hypothesized as a potential indicator for ETs, given they require a “purposely built transmitter,” according to SETI’s FAQ.

[Related: Are we alone in the universe? Probably not.]

While prior search algorithms only identified anomalies as exactly defined by humans, Ma’s deep machine learning system allows for alternative modes of thinking that human-dictated algorithms often can’t replicate.

In an email to PopSci, Ma explains, “people have inserted components of machine learning or deep learning into search techniques to assist [emphasis theirs] with the search. Our technique is the search, meaning the entire process is effectively replaced by a neural network, it’s no longer just a component, but the entire thing.”

As Motherboard and other outlets have recently noted, the results are already promising, to say the least—Ma’s system has found eight new signals of interest. What’s more, Ma’s deep learning program found the potential ET evidence while combing through 150TB of data from 820 nearby stars that had previously been analyzed using classical techniques but at the time were determined to be devoid of anything worth further investigation.

According to Ma’s summary published on Monday, the college student previously found the standard supervised search models to be too restrictive, given that they only found candidates matching the simulated signals they were trained on and were unable to generalize to arbitrary anomalies. Likewise, existing unsupervised methods were too “uncontrollable,” flagging anything with the slightest variation and “thus returning mostly junk.” By intermediately swapping weighted considerations during the deep learning program’s training, Ma found that he and his team could “balance the best of both worlds.”
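
Ma’s actual architecture is more involved than anything that fits here, but the underlying idea of flagging what a network cannot explain can be sketched generically (assuming PyTorch, with synthetic data standing in for spectrogram snippets): train a small autoencoder on ordinary-looking data, then treat anything it reconstructs poorly as an anomaly worth a human look.

```python
# Generic anomaly detection by reconstruction error -- not Ma's actual model.
import torch
import torch.nn as nn

torch.manual_seed(0)
ordinary = torch.randn(512, 64)                 # stand-in spectrogram slices
model = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):                            # train on "ordinary" data only
    optimizer.zero_grad()
    loss = ((model(ordinary) - ordinary) ** 2).mean()
    loss.backward()
    optimizer.step()

def anomaly_score(batch):
    """Mean reconstruction error per snippet; high = unlike the training data."""
    with torch.no_grad():
        return ((model(batch) - batch) ** 2).mean(dim=1)

weird = ordinary[:4] + 5 * torch.sin(torch.linspace(0, 20, 64))  # injected tone
print("ordinary:", anomaly_score(ordinary[:4]).mean().item())
print("weird:   ", anomaly_score(weird).mean().item())           # noticeably larger
```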

[Related: ‘Historical’ AI chatbots aren’t just inaccurate—they are dangerous.]

The result is ostensibly an additional proofreader for potential signs of alien life able to highlight possible anomalies human eyes or even other AI programs might miss. That said, Ma explains that his program is far from hands-off, and required copious amounts of engineering to direct it to learn the properties researchers wanted. “We still need human verification at the end of the day. We can’t solely rely on, or trust, a black box tool like a neural network to conduct science,” he writes. “It’s a tool for scientists, not a replacement for scientists.”

Ma also cautions that the eight newly discovered signals of interest are statistically unlikely to yield any definitive proof of alien life. That said, his new AI advancements could soon prove an invaluable tool for more accurate searches of the stars. SETI, Breakthrough Listen, and Ma are already planning to help with 24/7 technosignature observations using South Africa’s MeerKAT telescope array, as well as “analysis that will allow us to search for similar signals across many petabytes of additional data.”

The post A college student built an AI to help look for alien radio signals appeared first on Popular Science.

Is ChatGPT groundbreaking? These experts say no. https://www.popsci.com/technology/chatgpt-ai-researchers-debate/ Sat, 28 Jan 2023 12:00:00 +0000 https://www.popsci.com/?p=508175
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen.
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen. Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images

Meta's Chief AI scientist claims that Google, Meta, and other startups are working with very similar models.


ChatGPT, OpenAI’s AI-powered chatbot, has been impressing the public—but AI researchers aren’t as convinced it’s breaking any new ground. In an online lecture for the Collective[i] Forecast, Yann LeCun, Meta’s Chief AI Scientist and Turing Award recipient, said that “in terms of underlying techniques, ChatGPT is not particularly innovative,” and that Google, Meta, and “half a dozen startups” have very similar large language models, according to ZDNet

While this might read as a Meta researcher upset that his company isn’t in the limelight, he actually makes a pretty good point. But where are these AI tools from Google, Meta, and the other major tech companies? Well, according to LeCun, it’s not that they can’t release them—it’s that they won’t

Before we dive into the nitty-gritty of what LeCun is getting at, here’s a quick refresher on the conversations around ChatGPT, which was released to the public late last year. It’s a chatbot interface for OpenAI’s commercially available Generative Pre-trained Transformer 3 (GPT-3) large language model, which was released in 2020. It was trained on 410 billion “tokens” (simply, semantic fragments) and is capable of writing human-like text—including jokes and computer code. While ChatGPT is the easiest way for most people to interact with GPT, there are more than 300 other tools out there that are based on this model, the majority of them aimed at businesses.
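
To see what a “token” actually is, you can run a GPT-style tokenizer yourself. The example below assumes the Hugging Face transformers package and the public GPT-2 tokenizer, which uses a byte-pair-encoding scheme similar to GPT-3’s.

```python
# Split a sentence into the subword tokens a GPT-style model actually sees.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
text = "ChatGPT writes surprisingly humanlike text."
ids = tokenizer.encode(text)
print(tokenizer.convert_ids_to_tokens(ids))       # subword fragments
print(f"{len(ids)} tokens for {len(text.split())} words")
```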

From the start, the response to ChatGPT has been divisive. Some commenters have been very impressed by its ability to spit out coherent answers to a wide range of different questions, while others have pointed out that it’s just as capable of spinning total fabrications that merely adhere to English syntax. Whatever ChatGPT says sounds plausible—even when it’s nonsense. (AI researchers call this “hallucination.”)

For all the think-pieces being written (including this one), it’s worth pointing out that OpenAI is an as-yet-unprofitable startup. Its DALL-E 2 image generator and GPT models have attracted a lot of press coverage, but it has not managed to turn selling access to them into a successful business model. OpenAI is in the middle of another fundraising round and is set to be valued at around $29 billion after taking $10 billion in funding from Microsoft (on top of the $3 billion Microsoft has invested previously). It’s in a position to move fast and break things in a way that, as LeCun points out, more established players aren’t.

For Google and Meta, progress has been slower. Both companies have large teams of AI researchers (though fewer after the recent layoffs) and have published very impressive demonstrations—even as some public access projects have devolved into chaos. For example, last year, Facebook’s Blenderbot AI chatbot started spewing racist comments and fake news, and even bashing its parent company, within a few days of its public launch. It’s still available, but it’s kept more constrained than ChatGPT. While OpenAI and other AI startups like StabilityAI have been able to roll through their models’ open bigotry, Facebook understandably has had to roll back. Its caution comes from the fact that it’s significantly more exposed to regulatory bodies, government investigations, and bad press.

With that said, both companies have released some incredibly impressive AI demos that we’ve covered here on PopSci. Google has shown off a robot that can program itself, an AI-powered story writer, an AI-powered chatbot that one researcher tried to argue was sentient, an AI doctor that can diagnose patients based on their symptoms, and an AI that can convert a single image into a 30-second video. Meta, meanwhile, has AIs that can win at Go, predict the 3D structure of proteins, verify Wikipedia’s accuracy, and generate videos from a written prompt. These incredibly impressive tasks represent just a small fraction of what their researchers are doing—and because the public can’t be trusted, we haven’t gotten to try them yet.

Now though, OpenAI might have influenced Google and Meta to give more publicly accessible AI demonstrations and even integrate full-on AI features into their services. According to The New York Times, Google views AI as the first real threat to its search business, has declared “code red”, and even corralled founders Larry Page and Sergey Brin into advising on AI strategy. It’s expected to release upwards of 20 AI-adjacent products over the next year, and we will presumably see more from Meta too. Though given how long some Google products last after launch, we will see if any stick around.

The post Is ChatGPT groundbreaking? These experts say no. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
‘Historical’ AI chatbots aren’t just inaccurate—they are dangerous https://www.popsci.com/technology/historical-figures-app-chatgpt-ethics/ Thu, 26 Jan 2023 17:00:00 +0000 https://www.popsci.com/?p=507957
Black and white photo of Albert Einstein
The ChatGPT-based app is hardly a replacement for talking to folks like Albert Einstein. Bettman/Getty

Here's why it's so questionable to let AI chatbots impersonate people like Einstein and Gandhi.

The post ‘Historical’ AI chatbots aren’t just inaccurate—they are dangerous appeared first on Popular Science.

]]>
Black and white photo of Albert Einstein
The ChatGPT-based app is hardly a replacement for talking to folks like Albert Einstein. Bettman/Getty

In the grand and lengthy list of distasteful ideas, paywalling the ability to engage in “fun and interactive” conversations with AI Hitler might not rise to the very top. But it’s arguably up there.

And yet countless people have already given it a shot via the chatbot app Historical Figures. The project, currently still available in Apple’s App Store, went viral last week for its exhaustive and frequently controversial list of AI profiles, including Gandhi, Einstein, Princess Diana, and Charles Manson. Despite billing itself as an educational app (the 76th most popular in the category, as of writing) appropriate for anyone over the age of 9, critics quickly derided the idea as a rushed, frequently inaccurate gimmick at best and, at worst, a cynical exploitation of the burgeoning, already fraught technology that is ChatGPT.

[Related: Building ChatGPT’s AI content filters devastated workers’ mental health, says new report.]

Even Sidhant Chaddha, the 25-year-old Amazon software development engineer who built the app, conceded to Rolling Stone last week that ChatGPT’s confidence and inaccuracy are a “dangerous combination” for users who might mistakenly believe the supposed facts it spews are properly sourced. “This app uses limited data to best guess what conversations may have looked like,” reads a note on the app’s homepage. Chaddha did not respond to PopSci’s request for comment.

Some historians vehemently echo that sentiment, including Ekaterina Babintseva, an assistant professor specializing in the History of Technology at Purdue University. For her, the attempt to use ChatGPT in historical education isn’t just tasteless; it’s potentially radically harmful.

“My very first thought when ChatGPT was created was that, ‘Oh, this is actually dangerous,’” she recounts over Zoom. To Babintseva, the danger lies less in worries about academic plagiarism and more in AI’s larger effects on society and culture. “ChatGPT is just another level towards eradicating the capacity for the critical engagement of information, and the capacity for understanding how knowledge is constructed.” She also points to the obscured nature of major AI development today, carried out by private companies intent on keeping a tight, profitable grip on their intellectual property.

[Related: CEOs are already using ChatGPT to write their emails.]

“ChatGPT doesn’t even explain where this knowledge comes from. It blackboxes its sources,” she says.

It’s important to note that OpenAI, the developer behind ChatGPT—and, by extension, third-party spin-offs like Historical Figures—has made much of its research and foundational designs available for anyone to examine. Getting a detailed account of the vast internet text repositories used to train its AI, however, is much more difficult. Even asking ChatGPT to cite its sources fails to offer anything more specific than “publicly available sources” it “might” draw from, like Wikipedia.

AI photo
You don’t say, Walt. CREDIT: Author/PopSci

As such, programs like Chaddha’s Historical Figures app provide skewed, sometimes flat-out wrong narratives while failing to explain how those narratives were constructed in the first place. Compare that to historical academic papers and everyday journalism, replete with source citations, footnotes, and paper trails. “There are histories. There is no one narrative,” says Babintseva. “Single narratives only exist in totalitarian states, because they are really invested in producing one narrative, and diminishing the narratives of those who want to produce narratives that diverge from the approved party line.”

It wasn’t always this way. Until the late 90s, artificial intelligence research centered on “explainable AI,” with creators focusing on how human experts such as psychologists, geneticists, and doctors make decisions. By the end of the 1990s, however, AI developers began to shift away from this philosophy, deeming it largely irrelevant to their actual goals. Instead, they opted to pursue neural networks, which often arrive at conclusions even their own designers can’t fully explain.

[Related: Youth mental health service faces backlash after experimenting with AI-chatbot advice.]

Babintseva and fellow science and technology studies scholars urge a return to explainable AI models, at least for systems that have real effects on human lives and behaviors. AI should aid research and human thought, not replace it, she says, and she hopes organizations such as the National Science Foundation will push forward with fellowships and grant programs that support research in this direction.

Until then, apps like Historical Figures will likely continue cropping up, built on murky logic and unclear sourcing while advertising themselves as new, innovative educational alternatives. What’s worse, programs like ChatGPT will continue to rely on unacknowledged human labor to produce their knowledge foundations. “It represents this knowledge as some kind of unique, uncanny AI voice,” Babintseva says, rather than as a product of multiple complex human experiences and understandings.

[Related: This AI verifies if you actually LOL.]

In the end, experts caution that Historical Figures should be viewed as nothing more than the latest digital parlor trick. Douglas Rushkoff, a preeminent futurist and most recently the author of Survival of the Richest: Escape Fantasies of the Tech Billionaires, tried to find a silver, artificial lining to the app in an email to PopSci. “Well, I’d rather use AI to… try out ideas on dead people [rather] than to replace the living. At least that’s something we can’t do otherwise. But the choice of characters seems more designed to generate news than to really provide people with a direct experience of Hitler,” he writes.

“And, seeing as how you emailed me for a comment, it seems that the stunt has worked!”

The post ‘Historical’ AI chatbots aren’t just inaccurate—they are dangerous appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
CEOs are already using ChatGPT to write their emails https://www.popsci.com/technology/ceos-chatgpt-emails-davos/ Fri, 20 Jan 2023 16:00:00 +0000 https://www.popsci.com/?p=506668
Silhouette of three business executives in high rise conference room
The CEO of companies like Coursera are already using ChatGPT daily. Deposit Photos

Attendees of the World Economic Forum's Davos summit openly admit to using the AI for speeches and emails.

The post CEOs are already using ChatGPT to write their emails appeared first on Popular Science.

]]>
Silhouette of three business executives in high rise conference room
The CEO of companies like Coursera are already using ChatGPT daily. Deposit Photos

Despite only becoming publicly available late last year, ChatGPT is already being utilized by some of the most powerful people in the world for their everyday work.

Developed by OpenAI, ChatGPT is an impressively advanced chatbot capable of generating text and discussions from virtually any human-offered prompt. Song lyrics in the style of favorite artists, proofreading of computer code, complex physics concepts distilled into digestible summaries—ChatGPT can handle much of what users have thrown at it so far. That said, there are concerns regarding its accuracy, as well as the human labor cost required to build the program. None of this seems to faze people already using the AI for day-to-day tasks, like Jeff Maggioncalda, CEO of the popular online education provider Coursera.

[Related: Workers filtered toxic content for ChatGPT for $2 an hour.]

Speaking with CNN on Thursday from the World Economic Forum’s annual summit of global academics, politicians, and business leaders in Davos, Switzerland, Maggioncalda claims he’s already relying on ChatGPT as a “writing assistant and thought partner.” Among his daily tasks, the AI chat program helps him craft emails, alongside speeches “in a friendly, upbeat, authoritative tone with mixed cadence.”

Maggioncalda isn’t alone at Davos, either. According to CNN’s dispatch, ChatGPT is a hot topic on summit attendees’ minds this year, with at least one other major company’s CEO employing the bot for similar jobs, like rote emails. Part of this push could stem from Microsoft, which plans to infuse OpenAI with an additional $10 billion in funding and has already announced that ChatGPT will soon be added to an “Azure OpenAI Service” business toolkit employing artificial intelligence. The suite also includes the programming assistant Codex, as well as the buzzworthy image generator DALL-E 2.

“I see these technologies acting as a copilot, helping people do more with less,” Microsoft CEO Satya Nadella told Davos attendees during a speech this week, according to CNN.

[Related: Popular youth mental health service faces backlash after experimenting with AI-chatbot advice.]

In the rush to adopt the most cutting-edge of AI programs, however, many worry about the unforeseen or overlooked labor and ethical consequences. Educators are already sounding the alarm at the prospect of students potentially utilizing ChatGPT to craft convincing college essays. Many employees in the computer coding, tutoring, and writing sectors have also voiced concern about the destabilizing effects of AI tools’ perceived cost-cutting shortcuts.

Meanwhile, OpenAI estimates the newest version of ChatGPT’s underlying programming, GPT-4, could come as soon as later this year.

The post CEOs are already using ChatGPT to write their emails appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Building ChatGPT’s AI content filters devastated workers’ mental health, according to new report https://www.popsci.com/technology/chatgpt-sama-content-filter-labor/ Thu, 19 Jan 2023 22:00:00 +0000 https://www.popsci.com/?p=506451
Rows of desktop computers in computer lab
Sama employees were paid as little as $2 an hour to review toxic content. Deposit Photos

Ensuring the popular chatbot remained inoffensive came at a cost.

The post Building ChatGPT’s AI content filters devastated workers’ mental health, according to new report appeared first on Popular Science.

]]>
Rows of desktop computers in computer lab
Sama employees were paid as little as $2 an hour to review toxic content. Deposit Photos

Content moderation is a notoriously nasty job, and the burgeoning labor outsourcing industry surrounding it routinely faces heated scrutiny for the ethics of its approach to subjecting human workers to the internet’s darkest corners. On Wednesday, Time published a new investigative deep dive into Sama, a company that recently provided OpenAI with laborers solely tasked with reading some of the worst content the internet has to offer.

Although the endeavor’s overall goal was to develop helpful and necessary internal AI filters for the popular, buzzworthy ChatGPT program, former Sama employees say they now suffer from PTSD after tenures spent sifting through thousands of horrid online text excerpts describing sexual assault, incest, bestiality, child abuse, torture, and murder, according to the new report. The report also states that these employees, largely based in Kenya, were paid less than $2 an hour.

[Related: Popular youth mental health service faces backlash after experimenting with AI-chatbot advice.]

OpenAI’s ChatGPT quickly became one of last year’s most talked about technological breakthroughs for its ability to near instantaneously generate creative text from virtually any human prompt. While similar programs already exist, they have been frequently prone to spewing hateful and downright abusive content due to their inability to internally identify toxic material amid the troves of internet writing utilized as generative reference points.

With already well over 1 million users, ChatGPT has been largely free of such issues (although many other worries remain), largely thanks to an additional built-in AI filtering system meant to omit much of the internet’s awfulness. But despite their utility, current AI programs aren’t self-aware enough to notice inappropriate material on their own—they first require training from humans to flag all sorts of contextual keywords and subject matter. 

Billed on its homepage as “the next era of AI development,” Sama, a US-based data-labeling company that employs workers in Kenya, India, and Uganda for Silicon Valley businesses, claims to have helped over 50,000 people around the world rise above poverty via its employment opportunities. According to Time’s research, sourced from hundreds of pages of internal documents, contracts, and worker pay stubs, however, for dozens of workers the job amounted to self-described “torture” in exchange for take-home hourly rates of anywhere between $1.32 and $2.

[Related: OpenAI’s new chatbot offers solid conversations and fewer hot takes.]

Workers allege to Time that they worked far past their assigned hours, sifting through 150 to 250 disturbing text passages per day and flagging the content for ChatGPT’s AI filter training. Although wellness counselor services were reportedly available, Sama’s employees nevertheless experienced lingering emotional and mental tolls that exceeded those services’ capabilities. In a statement provided to Time, Sama disputed the workload, saying its contractors were only expected to review around 70 texts per shift.

“These companies present AI and automation to us as though it eliminates workers, but in reality that’s rarely the case,” Paris Marx, a tech culture critic and author of Road to Nowhere: What Silicon Valley Gets Wrong About Transportation, explains to PopSci. “… It’s the story of the Facebook content moderators all over again—some of which were also hired in Kenya by Sama.”

Marx argues that avoiding these kinds of mental and physical exploitation would require a massive cultural reworking within the tech industry, something that currently feels very unlikely. “This is the model of AI development that these companies have chosen,” they write, “[and] changing it would require completely upending the goals and foundational assumptions of what they’re doing.”

Sama initially entered into content moderation contracts with OpenAI worth about $200,000 for the project, but reportedly cut ties early to focus instead on “computer vision data annotation solutions.” OpenAI is currently in talks with investors to raise funding at a $29 billion valuation, $10 billion of which could come from Microsoft. Reuters previously reported OpenAI expects $200 million in revenue this year, and upwards of $1 billion in 2024. As the latest exposé reveals yet again, these profits frequently come at major behind-the-scenes costs for everyday laborers.

The post Building ChatGPT’s AI content filters devastated workers’ mental health, according to new report appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How a US intelligence program created a team of ‘Superforecasters’ https://www.popsci.com/technology/superforecasters-future-predictions/ Thu, 19 Jan 2023 14:00:00 +0000 https://www.popsci.com/?p=506253
AI photo
Ard Su

Some people can learn to be better at forecasting the future than others. These are their methods.

The post How a US intelligence program created a team of ‘Superforecasters’ appeared first on Popular Science.

]]>
AI photo
Ard Su

AROUND 2011, when Warren Hatch’s job was moving money around on Wall Street, he read a book that stuck with him. Called Expert Political Judgment: How Good Is It? How Would We Know?, it was written by psychologist Phil Tetlock, who was then working as a business professor at the University of California, Berkeley. 

The book was a few years old, and Hatch was curious about what Tetlock had been up to since, so he went to the academic’s website. On the page, to his surprise, he found an invitation. Tetlock was looking for people who wanted to forecast geopolitical events. Did he want a chance to try to predict the future? Hatch says he remembers thinking, Who wouldn’t?

Hatch signed up right away and soon joined a virtual team of people who were trying to predict the likelihood of various hypothetical future global happenings. They were giving probability-based answers to questions like: Will Country A or Country B declare war on each other within the next six months? Or: Will X vacate the office of president of Country Y before May 10, Year Z? The answers would take this type of form: There is a 75 percent chance the answer to the question is yes, and a 25 percent chance it is no.

“I just thought it was a fun way to while away some time,” says Hatch.

It may have been fun for Hatch, but it was serious business for the US intelligence community, whose R&D arm—the Intelligence Advanced Research Projects Activity (IARPA)—was sponsoring the project. Tetlock, along with a team of scholars, was a participant in the spy agency’s Aggregative Contingent Estimation, or ACE, program.

The ACE program aimed to, as the description page put it, “dramatically enhance the accuracy, precision, and timeliness of intelligence forecasts.” 

Tetlock was leading an ACE team with colleague Barbara Mellers to try to make that dramatic enhancement—and to do it better than four other competing teams. Tetlock’s secret sauce ended up being a set of expert forecasters like Hatch. 

Becoming a Superforecaster

Hatch, at the time, didn’t know much about the grand vision that the head researchers, or IARPA, had in mind. After he’d been making predictions for Tetlock for a while, though, something strange happened. “Some of the better team members disappeared,” Hatch says.

It wasn’t nefarious: The researchers had deemed these skilled predictors to be “Superforecasters,” because of their consistent accuracy. Those predictors, Hatch later learned, had moved along and been placed in teams with other people who were as good as they were. 

Wanting to be among their ranks, Hatch began to mimic their behaviors. He started being more active in his attempts, leaving comments on his forecasts to explain his reasoning, revising his fortune-telling as new information came in. “And after a couple of months, it clicked,” he says. “I started to get it.” 

In the second year, Hatch was invited to become a Superforecaster.

Meet the Good Judgment group

The team, then headquartered at the University of Pennsylvania, called itself Good Judgment. It was winning ACE handily. “The ACE program started with this idea of crowd wisdom, and it has sought ways of going beyond the standard wisdom of the crowd,” says Steven Rieber, who managed the program for IARPA. 

The teams’ forecasts had to be increasingly accurate with each year of the competition. By the end of the first year, Good Judgment had already achieved the final year’s required level of accuracy in its forecasts. 

Eva Chen, a postdoc on one of the other (losing) ACE teams, was watching with interest as the first year transitioned into the second. “It’s a horse race,” she says. “So every time a question closes, you get to see how your team is performing.” Every time, she could see the Good Judgment group besting both her team and the crowd. What are they doing? she recalls wondering.

Chen’s group ended up shuttering, as did the rest of the teams except Good Judgment, which she later joined. That group was the only one IARPA continued working with. Chen made it her mission to discover what they were doing differently. 

And soon she found out: Her previous team had focused on developing fancy computational algorithms—doing tricky math on the crowd’s wisdom to make it wiser. Good Judgment, in contrast, had focused on the human side. It had tracked the accuracy of its forecasts and identified a group that was consistently better than everyone else: the so-called Superforecasters. It had also trained its forecasters, teaching them about factors like cognitive biases. (One of the most well-known such errors is confirmation bias, which leads people to seek and put more weight on evidence that supports their preexisting ideas, and dismiss or explain away evidence to the contrary.) And it had put them in teams, so they could share both knowledge about the topics they were forecasting and their reasoning strategies. And only then, with trained, teamed, tracked forecasts, did it statistically combine its participants’ predictions using machine learning algorithms.
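
Good Judgment’s exact aggregation code isn’t spelled out here, but the general recipe it describes (score each forecaster’s past accuracy, weight their current probabilities accordingly, then combine) can be sketched in a few lines. The Brier-score weighting and the optional “extremizing” step below are common choices from the forecasting literature, offered as an illustration rather than the team’s actual pipeline:

```python
# A hedged sketch of accuracy-weighted forecast aggregation.
# Brier score: mean squared error between probability forecasts and 0/1 outcomes
# (lower is better). Forecasters with better track records get more weight.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def aggregate(current_forecasts, track_records, extremize_power=1.0):
    """current_forecasts: {name: probability for the open question}
    track_records: {name: (past_probs, past_outcomes)}"""
    weights = {}
    for name, (past_probs, past_outcomes) in track_records.items():
        # Weight is inversely related to past Brier score (epsilon avoids divide-by-zero).
        weights[name] = 1.0 / (brier(past_probs, past_outcomes) + 1e-6)

    total = sum(weights.values())
    pooled = sum(weights[n] * current_forecasts[n] for n in weights) / total

    # Optional "extremizing": push the pooled probability away from 0.5, a step
    # some aggregation papers report improves crowd calibration.
    numer = pooled ** extremize_power
    return numer / (numer + (1 - pooled) ** extremize_power)

# Illustrative data only.
records = {
    "ana": ([0.8, 0.2, 0.9], [1, 0, 1]),   # historically sharp forecaster
    "ben": ([0.5, 0.5, 0.5], [1, 0, 1]),   # hedges everything at 50 percent
}
print(aggregate({"ana": 0.7, "ben": 0.5}, records, extremize_power=2.0))
```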

While that process was important to Good Judgment’s success, the Superforecaster (now a trademarked term) element gets the most attention. But, curiously, Superforecasters—people consistently better at forecasting the future than even experts in a field, like intelligence analysts—were not an intended outcome of Good Judgment’s IARPA research. “I didn’t expect it,” says Rieber, “and I don’t think anyone did.”

These forecasters are better in part because they employ what Rieber calls “active open-minded thinking.” 

“They tend to think critically about not just a certain opinion that comes to mind but what objections are, or counterexamples to an opinion,” Rieber says. They are also good at revising their judgment in the face of new evidence. Basically, they’re skilled at red-teaming themselves, critiquing, evaluating, and poking holes in all ideas, including their own—essentially acting as devil’s advocate no matter where the opinion came from.

Seeing dollar signs in Superforecasters, the Good Judgment ACE team soon became Good Judgment Inc., spinning a company out of a spy-centric competition. Since then, curious fortune-seekers in sectors like finance, energy, supply chain logistics, philanthropy, and—as always—defense and intelligence, have been interested in paying for the future these predictive elites see.

Chen stayed on and eventually became Good Judgment’s chief scientist. The company currently has three main revenue streams: consulting, training workshops, and providing access to Superforecasters. It also has a website called Good Judgment Open, where anyone can submit predictions for crowdsourced topics, for fun and for a shot at being recruited as an official, company-endorsed Superforecaster.

Not exactly magic

But neither Good Judgment nor the Superforecasters are perfect. “We don’t have a crystal ball,” says Rieber. And their predictions aren’t useful in all circumstances: For one, they never state that something will happen, like a tree will definitely fall in a forest. Their forecasts are probability based: There is an 80 percent chance that a tree will fall in this forest and a 20 percent chance it won’t. 

Hatch admits the forecasts also don’t add much when there are already plenty of public, probability-based predictions—as is the case with, say, oil prices—or when there isn’t much public information at all, as when political decisions are made based on classified data.

From an intelligence perspective (where the intelligence community’s own ultrapredictors might have access to said classified information), forecasts nevertheless have other limitations. For one, guessing the future is only one aspect of a spy’s calculus. Forecasting can’t deal with the present (Does Country X have a nuclear weapons program at this particular moment?), the past (What killed Dictator Z?), or the rationale behind events (Why will Countries A and B go to war?). 

Second, questions with predictive answers have to be extremely concrete. “Some key questions that policymakers care about are not stated precisely,” says Rieber. For example, he says, this year’s intelligence threat assessment from the Office of the Director of National Intelligence states: “We expect that friction will grow as China continues to increase military activity around the island [of Taiwan].” But friction is a nebulous word, and growth isn’t quantified. 

“Nevertheless, it’s a phrase that’s meaningful to policymakers, and it’s something that they care about,” Rieber says.

The process also usually requires including a date. For example, rather than ask, “Will a novel coronavirus variant overtake Omicron in the US and represent more than 70 percent of cases?” the Good Judgment Open website currently asks, “Before 16 April 2023, will a SARS-CoV-2 variant other than Omicron represent more than 70.0% of total COVID-19 cases in the US?” It’s not because April is specifically meaningful: It’s because the group needs an expiration date.

That’s not usually the kind of question a company, or intelligence agency, brings to Good Judgment. To get at the answer it really wants, the company works around the problem. “We work with them to write a cluster of questions,” Chen says, that together might give the answer they’re looking for. So for example, a pet store might want to know if cats will become more popular than dogs. Good Judgment might break that down into “Will dogs decrease in popularity by February 2023?” “Will cats increase in popularity by February 2023?” and “Will public approval of cats increase by February 2023 according to polls?” The pet store can triangulate from those answers to estimate how they should invest. Maybe.

And now, IARPA and Rieber are moving into the future of prediction, with a new program called REASON: Rapid Explanation Analysis Sourcing Online. REASON throws future-casting in the direction it was probably always going to go: automation. “The idea is to draw on recent artificial intelligence breakthroughs to make instantaneous suggestions to the analysts,” he says. 

In this future, silicon suggestions will do what human peers did in ACE: team up with analysts to improve their reasoning, and with it their guesses at what’s coming next, so they can pass their hopefully better forecasts on to the other humans: those who make the decisions that shape what happens to the world.

Seeding doubt

Outside the project, researcher Konstantinos Nikolopoulos, of Durham University Business School in England, had a criticism of Superforecasting that wasn’t about its accuracy, whose rigor he saw others had followed up on and confirmed. Nevertheless, he says, “something didn’t feel right.” 

His qualm was about the utility. In the real world, actual Superforecasters (from Good Judgment itself) can only be so useful, because there are so few of them, and it takes so long to identify them in the first place. “There are some Superforecasters locked in a secret room, and they can be used at the discretion of whoever has access to them,” he says. 

So Nikolopoulos and colleagues undertook a study to see whether Good Judgment’s general idea—that some people are much better than others at intuiting the future—could be applied to a smaller pool of people (314, rather than 5,000), in a shorter period of time (nine months, rather than multiple years). 

Among their smaller group and truncated timeframe, they did find two superforecasters. And Nikolopoulos suggests that, based on this result, any small-to-medium-size organization could forecast its own future: Hold its own competition (with appropriate awards and incentives), identify its best-predicting employees, and then use them (while compensating them) to help determine the company’s direction. The best would just need to be better than average forecasters. 

“There is promising empirical evidence that it can be done in any organization,” says Nikolopoulos. Which means, although he doesn’t like this word, Good Judgment’s findings can be democratized.

Of course, people can still contract with Good Judgment and its trademarked predictors. And the company actually does offer a Staffcasting program that helps identify and train clients’ employees to do what Nikolopoulos suggests. But it does nevertheless still route through this one name-brand company. “If you can afford it, by all means, do it,” he says. “But I definitely believe it can be done in-house.”

Good Judgment would like you to visit its house and pay for its services, of course, although it does offer training for outsiders and is aiming to make more of that available online. In the future, the company is also aiming to get better at different kinds of problems—like those having to do with existential risk. “The sorts of things that will completely wipe out humanity or reduce it so much that it’s effectively wiped out,” says Hatch. “Those can be things like a meteor hitting the planet. So that’s one kind. And another kind is an alien invasion.” 

On the research side, the company hopes to improve its ability to see early evidence not of “black swans”—unexpected, rare events—but “really, really dark gray swans,” says Hatch. You know, events like pandemics.

Five years from now, will Good Judgment be successful at its version of predicting the future? Time will tell. 

Read more PopSci+ stories.

The post How a US intelligence program created a team of ‘Superforecasters’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This robot gets its super smelling power from locust antennae https://www.popsci.com/technology/smell-robot-desert-locust/ Wed, 18 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=506070
Scientist holding syringe next to wheeled robot with biological sensor

The new system is 10,000 times more sensitive than existing odor detecting programs.

The post This robot gets its super smelling power from locust antennae appeared first on Popular Science.

]]>
Scientist holding syringe next to wheeled robot with biological sensor

Although human snouts aren’t quite as weak as they’ve been made out to be, they still pale in comparison to a lot of our world’s fellow inhabitants. After all, you don’t see specially trained (human) police officers sniffing baggage at the airport. Even something as tiny as the mosquito, for instance, can detect a 0.01 percent difference in its surrounding environment’s CO2 levels. That said, you’ll never see a mosquito construct a robot to help pick up our species’ olfactory slack, which is exactly what one research team at Tel Aviv University recently accomplished.

The group’s findings, published in the journal Biosensors and Bioelectronics, showcase how the team connected a biological sensor—in this case, a desert locust’s antenna—to an electronic array before using a machine learning algorithm to hone the computer’s scent-detection abilities. The result was a new system that is 10,000 times more sensitive than the existing, commonly used electronic devices currently available, largely thanks to the locust’s powerful sense of odor detection.

[Related: This surgical smart knife can detect endometrial cancer cells in seconds.]

Generally speaking, sensory organs such as animals’ eyes and noses use internal receptors to identify external stimuli, which they then translate into electrical signals their brains can process. The scientists measured the electrical activity that various odors induced in the desert locust’s antenna, then fed those readings into a machine learning program that created a “library of smells,” according to one researcher. The archive initially included eight separate entries, including marzipan, geranium, and lemon, but reportedly went on to incorporate distinctions between different varieties of Scotch whisky—probably a pretty nice bonus for the desert locust.
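
The paper’s exact pipeline isn’t reproduced in the article, but the general idea (turn each antenna recording into a handful of signal features, then train a classifier that maps those features to known odors) can be sketched with scikit-learn. The features, odor labels, and simulated traces below are illustrative assumptions, not the team’s data:

```python
# A hedged sketch of a "library of smells": classify odors from electrical
# recordings of a biological antenna. Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def featurize(signal):
    """Reduce one antenna recording (a 1-D voltage trace) to simple summary features."""
    return [signal.mean(), signal.std(), signal.max(), np.abs(np.diff(signal)).mean()]

# Stand-in data: in the real experiment these traces come from the locust antenna
# mounted on the robot; here we simulate odor-dependent responses.
odors = ["lemon", "geranium", "marzipan"]
X, y = [], []
for label, odor in enumerate(odors):
    for _ in range(60):
        trace = rng.normal(loc=0.1 * (label + 1), scale=0.05, size=500)
        X.append(featurize(trace))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```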

The ability to take such delicate readings could soon offer major improvements in the detection of everything from illicit substances, to explosives, to even certain kinds of diseases and cancers. The researchers also stressed that the new biosensor capabilities aren’t limited to smell—with additional work and testing, the same idea could be applied to touch, or even to certain animals’ abilities to sense impending natural disasters such as earthquakes. The team also explained that they hope to soon develop the means for their robot to navigate on its own, homing in on an odor’s source before identifying it.

The post This robot gets its super smelling power from locust antennae appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Bug brains are inspiring new collision avoidance systems for cars https://www.popsci.com/technology/car-collision-avoidance-insect-tech/ Tue, 17 Jan 2023 18:30:00 +0000 https://www.popsci.com/?p=505884
Swarm of mosquitos in flight in a grassy field
Bugs are pretty good at avoiding each other while flying, if not actual cars. Deposit Photos

Despite their tendency to smack into your car on the road, bugs' ability to avoid one another could improve collision prevention.

The post Bug brains are inspiring new collision avoidance systems for cars appeared first on Popular Science.

]]>
Swarm of mosquitos in flight in a grassy field
Bugs are pretty good at avoiding each other while flying, if not actual cars. Deposit Photos

Despite the rapid rise of vehicle collision avoidance systems (CASs) built around radar, LiDAR, and self-driving software, nighttime driving remains a particularly hazardous endeavor. While only a quarter of time behind the wheel takes place after the sun sets, an estimated 50 percent of all traffic fatalities occur during that time. Knowing this, the natural inclination for many researchers might be to develop increasingly complex—and, by extension, energy-hogging—CAS advancements, but one recent study points toward a literal bug-brained method to improve safety for everyone on the road.

As detailed in new research published in ACS Nano by a team at Penn State, insects like locusts and houseflies provided the key inspiration behind the novel collision-prevention approach. Many current systems rely on real-time image analysis of a car’s surroundings, but their accuracy is often severely diminished by low-light or rainy conditions. LiDAR and radar can solve some of these issues, but at a hefty cost in both weight and energy consumption.

[Related: What’s going on with self-driving cars right now?]

Commonplace bugs, however, don’t need advanced neural networks or machine learning to avoid bumping into one another mid-flight. Instead, they use comparatively simple, highly energy-efficient, obstacle-avoiding neural circuitry to navigate during travel. Taking this into account, the Penn State researchers devised a new algorithm modeled on those neural circuits that relies on a single variable—oncoming headlight intensity—to decide when to react. Because of this, developers could combine the detection and processing units into a much smaller, less power-hungry device.
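
The Penn State circuit does this in analog hardware, but the underlying logic (watch a single signal, oncoming headlight intensity, and react when it is both bright and growing quickly, the way a looming object does) can be sketched in software. The thresholds and sampling rate below are illustrative assumptions, not values from the study:

```python
# A hedged software sketch of single-variable collision warning:
# trigger when measured headlight intensity is both high and rising fast,
# the way a looming light source behaves as two cars converge.

def collision_warning(intensity_samples, dt=0.1,
                      intensity_threshold=0.6, growth_threshold=0.5):
    """intensity_samples: normalized headlight intensity readings (0..1) over time.
    Returns the time (in seconds) of the first warning, or None."""
    for i in range(1, len(intensity_samples)):
        level = intensity_samples[i]
        growth = (intensity_samples[i] - intensity_samples[i - 1]) / dt
        if level > intensity_threshold and growth > growth_threshold:
            return i * dt
    return None

# Simulated approach: intensity grows faster and faster as the oncoming car nears.
samples = [min(0.01 * (1.25 ** t), 1.0) for t in range(40)]
print("warning at", collision_warning(samples), "seconds")
```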

“Smaller” is perhaps a bit of an understatement. The new photosensitive “memtransistor” circuit occupies only about 40 square micrometers and is built from an “atomically thin” layer of molybdenum disulfide. What’s more, the memtransistor needs only a few hundred picojoules of energy—tens of thousands of times less than current cars’ CASs require.

[Related: Self-driving EVs use way more energy than you’d think.]

Real-life nighttime scenario testing showed little, if any, sacrifice in the ability to detect potential collisions. The insect-inspired circuits alerted drivers to possible two-car accidents with two- to three-second lead times, giving drivers enough time to course-correct as needed. The researchers argue that by integrating the new bug-brained circuitry into existing CASs, vehicle manufacturers could soon offer far less bulky, more energy-efficient safety features for evening travel. Unfortunately, and perhaps ironically, the study fails to mention any novel way to keep those inspirational bugs from smacking into your windshield on the highway.

The post Bug brains are inspiring new collision avoidance systems for cars appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Self-driving EVs use way more energy than you’d think https://www.popsci.com/technology/ev-autopilot-energy-consumption-study/ Fri, 13 Jan 2023 18:00:00 +0000 https://www.popsci.com/?p=505370
Electric Car in Charging Station.
A billion self-driving cars on the road could use as much energy as all the world's data centers. Deposit Photos

Aside from safety concerns, autopilot software could nullify electric cars' sustainability benefit.

The post Self-driving EVs use way more energy than you’d think appeared first on Popular Science.

]]>
Electric Car in Charging Station.
A billion self-driving cars on the road could use as much energy as all the world's data centers. Deposit Photos

Truly self-driving cars are still at least a few years down the road—but if the day does come when the software becomes a de facto means of navigation, a new study indicates it’s going to need to be much more energy-efficient. If not, autopilot features could effectively neutralize self-driving electric vehicles’ environmental benefits. According to the new study from researchers at MIT, statistical modeling indicates that the computing needed to power a near-future global fleet of autopiloted EVs could generate as much greenhouse gas as all of the world’s current data centers combined.

The physical locales which house the massive computer arrays powering the world’s countless applications today generate about 0.3 percent of all greenhouse gas emissions—roughly the annual amount of carbon produced by Argentina. Researchers estimated this level would be reached from the self-driving tech in 1 billion autonomous vehicles, each driving just one hour per day. For comparison, there are currently around 1.5 billion cars on the world’s roads.

[Related: Tesla is under federal investigation over autopilot claims.]

The researchers also found that in over 90 percent of the modeled scenarios, each vehicle’s computer would need to use less than 1.2 kilowatts of power just to keep the fleet within today’s realm of data center emissions, something current hardware efficiencies simply cannot achieve. In another scenario, in which 95 percent of all vehicles are autonomous by 2050 and computational workloads double every three years, cars’ hardware efficiency would need to essentially double every year to keep emissions at that same level. In comparison, the decades-old industry rule of thumb known as Moore’s Law holds that computational power doubles every two or so years—a pace that is expected to eventually slow down, not accelerate.
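
A quick back-of-the-envelope check shows why that 1.2-kilowatt figure matters. Using only the numbers quoted above (1 billion vehicles, one hour of driving a day, and the per-car power ceiling), the fleet’s annual computing energy lands in the same general range as published estimates of global data center electricity use:

```python
# Back-of-the-envelope arithmetic using the figures quoted in this article.
fleet_size = 1_000_000_000   # 1 billion autonomous vehicles
hours_per_day = 1            # each car drives one hour per day
power_per_car_kw = 1.2       # the per-car computing power ceiling cited above

daily_energy_kwh = fleet_size * hours_per_day * power_per_car_kw
annual_energy_twh = daily_energy_kwh * 365 / 1e9   # kWh -> TWh

print(f"Fleet computing energy: {annual_energy_twh:.0f} TWh per year")
# ~438 TWh/year, in the same general range as published estimates of global
# data center electricity use; hence the emissions comparison above.
```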

The parameters for such scenarios—how many cars are on the roads, how long they are traveling, their onboard computing power and energy requirements, and so on—might seem relatively clear, but there are numerous unforeseen ramifications to consider. Autonomous vehicles could spend more time on roads while people multitask, for example, and they could draw new demographics into traffic, such as both younger and older populations. Then there’s the issue of trying to model hardware and software that doesn’t yet exist.

And then there are the neural networks to consider.

[Related: Tesla driver blames self-driving mode for eight-car pileup.]

MIT notes that semi-autonomous vehicles already rely on popular algorithms such as a “multitask deep neural network” to navigate, using numerous high-resolution cameras that feed constant, real-time information to their systems. In one example, the researchers estimated that if an autonomous vehicle ran 10 deep neural networks analyzing imagery from 10 cameras while driving just a single hour, it would generate 21.6 million inferences per day. Extrapolate that to 1 billion vehicles, and you get… 21.6 quadrillion inferences. 

“To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion),” explains MIT.
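
Those figures check out with simple arithmetic, assuming each camera-and-network pair processes video at roughly 60 frames per second (the frame rate is an assumption; it is not stated in the article):

```python
# Reproducing the inference counts quoted above from first principles.
cameras = 10
networks = 10           # deep neural networks running per vehicle
fps = 60                # assumed frames analyzed per second by each camera-network pair
driving_seconds = 3600  # one hour of driving per day

per_vehicle_per_day = cameras * networks * fps * driving_seconds
print(f"{per_vehicle_per_day:,} inferences per vehicle per day")          # 21,600,000

fleet = 1_000_000_000
print(f"{per_vehicle_per_day * fleet:.3e} inferences per day fleet-wide") # 2.160e+16, i.e. 21.6 quadrillion
```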

Suffice it to say, these are serious hurdles that will need clearing if the automotive industry wants to continue its expansion into self-driving technology. EVs are key to a sustainable future, but self-driving versions could end up adding to the energy crisis.

The post Self-driving EVs use way more energy than you’d think appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A guide to the internet’s favorite generative AIs https://www.popsci.com/technology/ai-generator-guide/ Wed, 11 Jan 2023 23:00:00 +0000 https://www.popsci.com/?p=504733
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen.
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen. Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images

VALL-E is just the latest example. Here's what to know about DALL-E 2, GPT-3, and more.

The post A guide to the internet’s favorite generative AIs appeared first on Popular Science.

]]>
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen.
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen. Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images

There’s a new AI on the block, and it can mimic someone’s voice from just a short audio clip of them speaking. If it sounds like there are a lot of wacky AIs out there right now that can generate things, including both images and words, you’re right! And because it can get confusing, we wrote you a quick guide. Here are some of the most prominent AIs to surface over the past 12 months.

VALL-E

The latest entrant, VALL-E, is a new AI from Microsoft researchers that can generate a full model of someone’s voice from a three-second seed clip. It was trained on over 60,000 hours of English-language speech from more than 7,000 speakers, and it works by turning the contents of the seed clip into discrete components through a process called tokenization. Here that means breaking the audio itself, not just text, into smaller units called tokens. The AI’s neural network then predicts what the additional tokens needed for a full voice model would sound like, based on the few it has from the short clip. The results—which you can check out on the VALL-E website—are pretty astounding. 
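
VALL-E’s real tokenizer is a neural audio codec, but the core idea can be sketched with plain numpy: slice the waveform into short frames and replace each frame with the index of its nearest entry in a fixed “codebook,” so the audio becomes a sequence of integers a language model can predict. The codebook, frame length, and random stand-in audio below are all illustrative assumptions:

```python
# A hedged sketch of turning audio into discrete tokens via vector quantization.
import numpy as np

rng = np.random.default_rng(0)

frame_size = 80                                  # samples per frame (assumption)
codebook = rng.normal(size=(1024, frame_size))   # 1,024 "acoustic tokens" (random stand-in)

def audio_to_tokens(waveform):
    """Split audio into frames; replace each frame with its nearest codebook index."""
    n_frames = len(waveform) // frame_size
    frames = waveform[: n_frames * frame_size].reshape(n_frames, frame_size)
    # Squared distances via ||a||^2 + ||b||^2 - 2ab, avoiding a huge 3-D array.
    d2 = (frames**2).sum(1)[:, None] + (codebook**2).sum(1)[None, :] - 2 * frames @ codebook.T
    return d2.argmin(axis=1)                     # one integer token per frame

three_second_clip = rng.normal(size=3 * 24_000)  # stand-in for 3 seconds of 24 kHz audio
tokens = audio_to_tokens(three_second_clip)
print(tokens[:10], "...", len(tokens), "tokens")
# A model like VALL-E is trained to continue such token sequences in the speaker's
# voice; a decoder then turns the predicted tokens back into audio.
```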

Because of the obvious deepfake uses for an AI model like VALL-E, Microsoft hasn’t released it to the public. (Microsoft has previously invested in OpenAI, the owner of DALL-E and ChatGPT, and is also reportedly in talks to invest billions more.) Still, it shows the kind of things these generative AIs are capable of with even the smallest seed. 

DALL-E 2

OpenAI’s DALL-E 2 arguably kicked off the latest AI craze when it was announced last April. It can create original images from a text prompt, whether you want something realistic or totally out there. It can even expand the boundaries of existing artwork with a technique called outpainting. 

The best thing about DALL-E 2 is that it’s free for anyone to try. In your first month, you get 50 credits, each of which allows you to generate four image variations from a single text prompt. After that, you get 15 free credits per month. 

Stable Diffusion

While OpenAI controls access to DALL-E 2, Stability AI took a different approach with its image generator, Stable Diffusion: it made it open source. Anyone can download Stable Diffusion and create incredibly realistic-looking images and imaginative artworks using a reasonably powerful laptop. 
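
Because the weights are open, generating an image locally takes only a few lines with Hugging Face’s diffusers library. The checkpoint name, precision, and GPU settings below are assumptions that vary by setup, and downloading the weights requires accepting the model’s license:

```python
# A minimal sketch of local text-to-image generation with Stable Diffusion.
# Requires: pip install diffusers transformers torch  (a GPU helps a lot)
import torch
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"   # one commonly used checkpoint; others exist

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")                       # drop this line (and use float32) for CPU-only runs

prompt = "a watercolor painting of a desert locust piloting a tiny robot"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("locust_robot.png")
```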

Because it’s open source, other companies have also been able to use Stable Diffusion to launch generative AI tools. The biggest name here is Lensa’s Magic Avatars. With the smartphone app, you are able to upload 10 to 20 photos which are used to train a custom Stable Diffusion model and then generate dozens of off-beat artistic avatars. 

Midjourney

The other big name in image generation, Midjourney, is still in Beta and only accessible through a Discord channel. Its algorithm has improved a lot over the past year. Personally, I find the images created by its current model—Version 4—the most compelling and naturalistic, compared to other popular image generators. Unfortunately, accessing it through Discord is a weird hurdle, especially when compared to Stable Diffusion or DALL-E 2.

GPT-3

OpenAI’s Generative Pre-trained Transformer 3, or GPT-3, language model was actually released in 2020, but it has made headlines in the past couple of months with the release of ChatGPT, a chatbot that anyone can use. Its answers to a variety of questions and prompts are often accurate and, in many cases, indistinguishable from something written by a human. It has started serious conversations about how colleges will detect plagiarism going forward (maybe with an AI-finding AI). Plus, it can write funny poems. 

While ChatGPT is by far the most visible instance of GPT-3 out in the world, the model also powers other AI tools. Of all the generative AIs on this list, GPT-3 is the one we at PopSci suspect you will hear the most about in the months ahead. 
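
Most of those other tools reach GPT-3 through OpenAI’s paid API rather than the ChatGPT interface. At the time of writing, a request looked roughly like the sketch below; the model name, prompt, and parameters are illustrative and may change as OpenAI updates its API:

```python
# A hedged sketch of calling the GPT-3 completions API as it worked in early 2023.
# Requires: pip install openai  (the pre-1.0 client) and an API key.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",       # a GPT-3-family model available at the time
    prompt="Explain what a large language model is in two sentences.",
    max_tokens=80,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```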

Codex

OpenAI’s GPT-3 isn’t just good at generating silly songs and short essays; it can also help programmers write code. The model, called Codex, is able to generate code in a dozen languages, including JavaScript and Python, from natural-language prompts. On the demo page, you can see a short video of a browser game being made without a single line of code being written by hand. It’s pretty impressive! And Codex is already out in the wild: GitHub Copilot uses it to automatically suggest full chunks of code. It’s like autocomplete on steroids.

The post A guide to the internet’s favorite generative AIs appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Popular youth mental health service faces backlash after experimenting with AI-chatbot advice https://www.popsci.com/technology/koko-ai-chatbot-mental-health/ Wed, 11 Jan 2023 21:00:00 +0000 https://www.popsci.com/?p=504751
Woman using her mobile phone , city skyline night light background
The online mental health service, Koko, is in hot water over its use of GPT-3. Deposit Photos

Koko provides online mental health services, often to young users, and recently tested AI chatbot responses under murky circumstances.

The post Popular youth mental health service faces backlash after experimenting with AI-chatbot advice appeared first on Popular Science.

]]>
Woman using her mobile phone , city skyline night light background
The online mental health service, Koko, is in hot water over its use of GPT-3. Deposit Photos

A free mental health service offering online communities a peer-to-peer chat support network is facing scrutiny after its co-founder revealed the company briefly experimented with employing an AI chatbot to generate responses—without informing recipients. Although they have since attempted to downplay the project and highlight the program’s deficiencies, critics and users alike are expressing deep concerns regarding medical ethics, privacy, and the buzzy, controversial world of AI chatbot software.

As highlighted on Tuesday by New Scientist, Koko was co-founded roughly seven years ago by MIT graduate Rob Morris, whose official website bills the service as a novel approach to making online mental health support “accessible to everyone.” One of its main services lets clients like social network platforms install keyword-flagging software that can then connect users to psychology resources, including human chat portals. Koko is touted as particularly useful for younger users of social media.

[Related: OpenAI’s new chatbot offers solid conversations and fewer hot takes.]

Last Friday, however, Morris tweeted that approximately 4,000 users were “provided mental health support… using GPT-3,” the popular AI language model developed by OpenAI. Although users weren’t chatting directly with GPT-3, a “co-pilot” system was designed so that human support workers reviewed the AI’s suggested responses and used them as they deemed relevant. As New Scientist also notes, it does not appear that Koko users received any form of up-front alert letting them know their mental health support was potentially generated, at least in part, by a chatbot.
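
Based on Morris’ description, the flow follows a familiar human-in-the-loop pattern: the model drafts a reply, a human peer supporter edits, approves, or discards it, and only human-approved text reaches the user. The sketch below illustrates that general pattern; none of the function names or messages come from Koko’s actual system:

```python
# A schematic, hypothetical sketch of a "co-pilot" support flow: an AI drafts,
# a human decides. Illustrates the pattern described above, not Koko's code.
from typing import Optional

def draft_reply_with_llm(user_message: str) -> str:
    """Placeholder for a call to a large language model such as GPT-3."""
    return "I'm sorry you're going through this. That sounds really hard."

def respond(user_message: str, reviewer_edit: Optional[str], approved: bool) -> Optional[str]:
    """Only text a human reviewer has approved (and possibly rewritten) gets sent."""
    draft = draft_reply_with_llm(user_message)
    if not approved:
        return None                  # reviewer rejects the draft and writes their own reply
    return reviewer_edit if reviewer_edit else draft

# The reviewer sees the draft, tweaks it, and approves the edited version.
print(respond("I've been feeling overwhelmed lately",
              reviewer_edit="I'm sorry you're going through this. Want to tell me more?",
              approved=True))
```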

In his Twitter thread, Morris explained that, while audiences rated AI co-authored responses “significantly higher” than human-only answers, they decided to quickly pull the program, stating that once people were made aware of the messages’ artificial origins, “it didn’t work.” 

“Simulated empathy feels weird, empty,” wrote Morris. Still, he expressed optimism at AI’s potential roles within mental healthcare, citing previous projects like Woebot, which alerts users from the outset that they would be conversing with a chatbot.

[Related: Seattle schools sue social media companies over students’ worsening mental health.]

Morris’ description of the Koko endeavor prompted near-immediate online backlash, and he has since issued multiple clarifications regarding “misconceptions” surrounding the experiment. “We were not pairing people up to chat with GPT-3, without their knowledge. (in retrospect, I could have worded my first tweet to better reflect this),” he wrote last Saturday, adding that the feature was “opt-in” while it was available.

“It’s obvious that AI content creation isn’t going away, but right now it’s moving so fast that people aren’t thinking critically about the best ways to use it,” Caitlin Seeley, campaign director for the digital rights advocacy group, Fight for the Future, wrote PopSci in an email. “Transparency must be a part of AI use—people should know if what they’re reading or looking at was created by a human or a computer, and we should have more insight into how AI programs are being trained.”

[Related: Apple introduces AI audiobook narrators, but the literary world is not too pleased.]

Seeley added that services like Koko need to be “thoughtful” about the services they purport to provide, as well as remain critical about AI’s role in those services. “There are still a lot of questions about how AI can be used in an ethical way, but any company considering it must ask these questions before they start using AI.”

Morris appears to have heard critics, although it remains unclear what will happen next for the company and any future plans with chat AI. “We share an interest in making sure that any uses of AI are handled delicately, with deep concern for privacy, transparency, and risk mitigation,” Morris wrote on Koko’s blog over the weekend, adding that the company’s clinical advisory board is meeting to discuss guidelines for future experiments, “specifically regarding IRB approval.”

The post Popular youth mental health service faces backlash after experimenting with AI-chatbot advice appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how. https://www.popsci.com/technology/ai-thwaites-glacier/ Tue, 10 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=504366
An image of the Thwaites Glacier from NASA
Thwaites Glacier. NASA

Machine learning could be used to take a more nuanced look at the satellite images of the glacier beneath the ice and snow.

The post The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how. appeared first on Popular Science.

]]>
An image of the Thwaites Glacier from NASA
Thwaites Glacier. NASA

The Doomsday glacier has been on everyone’s minds lately. And it should be. With estimates that its collapse, along with the West Antarctic ice it holds back, could eventually raise global sea levels by as much as 10 feet, there’s much to worry about. Because of the precarious location of this Florida-sized glacier, when it goes, it will set off a chain of melting events. 

In the past few years, teams of researchers have been racing against time to study and understand the Thwaites Glacier—the formal name of the so-called Doomsday glacier—and have dispatched several tools to help them do so, including an auto-sub named Boaty McBoatFace

Previous work focused on figuring out how the glacier is melting, and how that melting is affecting the seawater ecology in its immediate environment. Now, a new study in Nature Geoscience uses machine learning to analyze the ways in which the ice shelf has fractured and reconsolidated over a span of six years. Led by scientists from the University of Leeds and the University of Bristol, the research employs an AI algorithm that examines satellite imagery to monitor and model the ways the glacier has been changing, and to mark where notable stress fractures have been occurring. 

The artificial intelligence algorithm has an interesting backstory: According to a press release, this AI was adapted from an algorithm that was originally used to identify cells in microscope images. 

[Related: We’re finally getting close-up, fearsome views of the doomsday glacier]

During the study, the team from Leeds and Bristol closed in on an area of the glacier where “the ice flows into the sea and begins to float.” This is also the start of two ice shelves: the Thwaites Eastern ice shelf and the Thwaites Glacier ice tongue. “Despite being small in comparison to the size of the entire glacier, changes to these ice shelves could have wide-ranging implications for the whole glacier system and future sea-level rise,” the researchers explained in the press release.

Here’s where the AI comes in handy—it can take a more nuanced look at what the satellite images show of the glacier beneath its surface ice and snow. That allowed scientists to see how different elements of the ice sheet have interacted with one another over the years. For example, in periods when the ice flow is faster or slower than average, more fractures tend to form; having more fractures, in turn, can alter the speed of the ice flow. Using AI also lets the team quickly make sense of the underlying patterns influencing glacier melting within the “deluge of satellite images” they receive each week. A more in-depth look at the model they developed for the study is available here.
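
The study’s network descends from cell-segmentation tools, which typically label every pixel of an image. A toy version of that per-pixel approach, written in PyTorch purely to illustrate the idea, looks like the sketch below; the architecture, channel counts, and random stand-in imagery are assumptions, not the authors’ model:

```python
# A toy per-pixel "fracture vs. intact ice" classifier in the spirit of
# cell-segmentation networks. Illustrative only; not the study's architecture.
import torch
from torch import nn

class TinyFractureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # one logit per pixel: fracture or not
        )

    def forward(self, x):
        return self.layers(x)

model = TinyFractureNet()
satellite_tile = torch.randn(1, 1, 128, 128)          # stand-in for a radar/optical tile
fracture_logits = model(satellite_tile)
fracture_mask = torch.sigmoid(fracture_logits) > 0.5  # boolean map of suspected fractures
# Untrained weights, so the mask is meaningless; printed only to show the output shape.
print("fraction of pixels flagged:", fracture_mask.float().mean().item())
```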

In an area like Antarctica that humans have a hard time accessing, remote, automated technologies have become a vital way to keep an eye on events that could have impacts globally. Outside of diving robots and roving satellites, scientists are also using animal-watching drones, balloons, and more. Plus, there’s a plan to get more high-speed internet to Antarctica’s McMurdo Station to make the process of transmitting data to the outside world easier.

The post The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This AI is no doctor, but its medical diagnoses are pretty spot on https://www.popsci.com/technology/ai-doctor-google-deepmind/ Fri, 06 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=503684
doctor on computer
Can AI diagnose medical conditions better than a human?. DEPOSIT PHOTOS

Asking an AI about your health issues might be better than WebMD, but it does come with some caveats.

The post This AI is no doctor, but its medical diagnoses are pretty spot on appeared first on Popular Science.

]]>
doctor on computer
Can AI diagnose medical conditions better than a human?. DEPOSIT PHOTOS

Various research groups have been teasing the idea of an AI doctor for the better part of the past decade. In late December, computer scientists from Google and DeepMind put forth their version of an AI clinician that can diagnose a patient’s medical conditions based on their symptoms, using a large language model called PaLM.

Per a preprint paper published by the group, their model scored 67.6 percent on a benchmark test containing questions from the US Medical Licensing Exam, which they claim surpasses the previous state-of-the-art software by 17 percent. One version of it performed at a similar level to human clinicians. But there are plenty of caveats that come with this algorithm, and others like it. 

Here are some quick facts about the model: It was trained on a dataset of over 3,000 commonly searched medical questions, and six other existing open datasets for medical questions and answers, including medical exams and medical research literature. In their testing phase, the researchers compared the answers from two versions of the AI to a human clinician, and evaluated these responses for accuracy, factuality, relevance, helpfulness, consistency with current scientific consensus, safety, and bias. 

Adriana Porter Felt, a software engineer who works on Google Chrome and was not a part of the paper, noted on Twitter that the version of the model that answered medical questions most like human clinicians depends on an added feature, “instruction prompt tuning, which is a human process that is laborious and does not scale.” This involves carefully tweaking the wording of the question in a specific way that helps the AI retrieve the correct information. 
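
It’s hard to show that laborious prompt work without Google’s internal tooling, but its general shape (prepending carefully worded instructions and a worked example to each question before it reaches the model) looks roughly like the sketch below. The instruction text, the example, and the query_model stub are hypothetical placeholders, not PaLM’s actual interface.

```python
# Hypothetical sketch of instruction-style prompting for a medical QA model.
# The wording, the worked example, and query_model() are placeholders;
# this is not Google's PaLM or Med-PaLM interface.

INSTRUCTION = (
    "You are answering US medical licensing exam questions. "
    "Choose the single best answer and explain your reasoning briefly."
)

EXAMPLE = (
    "Question: Which vitamin deficiency causes scurvy?\n"
    "Options: (A) Vitamin A (B) Vitamin C (C) Vitamin D (D) Vitamin K\n"
    "Answer: (B) Vitamin C, which is required for collagen synthesis."
)

def build_prompt(question: str, options: str) -> str:
    """Assemble instruction + worked example + the new question."""
    return f"{INSTRUCTION}\n\n{EXAMPLE}\n\nQuestion: {question}\nOptions: {options}\nAnswer:"

def query_model(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return "(model response would go here)"

prompt = build_prompt(
    "Which electrolyte abnormality is most associated with peaked T waves on ECG?",
    "(A) Hypokalemia (B) Hyperkalemia (C) Hyponatremia (D) Hypercalcemia",
)
print(query_model(prompt))
```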

[Related: Google is launching major updates to how it serves health info]

The researchers even wrote in the paper that their model “performs encouragingly, but remains inferior to clinicians,” and that the model’s “comprehension [of medical context], recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning.” For example, every version of the AI missed important information and included incorrect or inappropriate content in their answers at a higher rate compared to humans. 

Language models are getting better at parsing information with more complexity and volume. And they seem to do okay with tasks that require scientific knowledge and reasoning. Several small models, including SciBERT and PubMedBERT, have pushed the boundaries of language models to understand texts loaded with jargon and specialty terms.  

But in the biomedical and scientific fields, there are complicated factors at play and many unknowns. And if the AI is wrong, who takes responsibility for malpractice? Can an error be traced back to its source when much of the algorithm works like a black box? Additionally, these algorithms (mathematical instructions given to the computer by programmers) are imperfect and need complete and correct training data, which is not always available for various conditions across different demographics. Plus, buying and organizing health data can be expensive.

Answering questions correctly on a multiple-choice standardized test does not convey intelligence. And the computer’s analytical ability might fall short if it were presented with a real-life clinical case. So while these tests look impressive on paper, most of these AIs are not ready for deployment. Consider IBM’s Watson AI health project. Even with millions of dollars in investment, it still had numerous problems and was not practical or flexible enough at scale (it ultimately imploded and was sold for parts). 

Google and DeepMind do recognize the limitations of this technology. They wrote in their paper that there are still several areas that need to be developed and improved for this model to be actually useful, such as the grounding of the responses in authoritative, up-to-date medical sources and the ability to detect and communicate uncertainty effectively to the human clinician or patient. 

The post This AI is no doctor, but its medical diagnoses are pretty spot on appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Apple introduces AI audiobook narrators, but the literary world is not too pleased https://www.popsci.com/technology/apple-ai-audiobook-narrator/ Thu, 05 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=503379
Would you listen to an entire audiobook read by a robot?
Would you listen to an entire audiobook read by a robot?. Pexels

'I choose humans,' one bestselling author tells PopSci.

The post Apple introduces AI audiobook narrators, but the literary world is not too pleased appeared first on Popular Science.

]]>
Would you listen to an entire audiobook read by a robot?
Would you listen to an entire audiobook read by a robot?. Pexels

Although computer programs have long been able to read text aloud, their performances aren’t exactly known to be compelling. As AI capabilities continue to rapidly advance, however, the potential for more nuanced and “human-like” narration is more possible than ever. That possibility received a major expansion this week from Apple with its announcement of a new AI service capable of generating realistic human narration for audiobooks.

But while the company claims the program will benefit independent authors previously unable to pursue the publishing option due to “the cost and complexity of production,” some writers and publishing insiders are expressing reservations about the prospect of taking actual actors out of the equation.

[Related: AI vocal filters are here to stay.]

“Digitally narrated titles are a valuable complement to professionally narrated audiobooks, and will help bring audio to as many books and as many people as possible,” Apple argues in its new service’s announcement, while also promising to continue “celebrating and showcasing the magic of human narration” while still growing its human-narrated audiobook catalog.

Meanwhile, authors like Jeff VanderMeer, the writer behind the bestselling Southern Reach and Borne science fiction series, don’t believe the hype.

“I believe in voice talent and what each voice actor or reader brings to it. If it weren’t frowned upon, I would work with the voice actor to alter my novels to make them more perfect for the medium,” VanderMeer wrote to Popular Science over Twitter on Thursday. “The folks who have [narrated] my books sometimes ask questions, they adjust how they read based on discussions, and it feels collaborative. I learn something about my fiction from the process.”

For now, Apple offers authors four different AI narrators trained to orate certain fiction genres alongside non-fiction and self-help books. To start an audio project, customers first need to sign up with one of two partner publishing companies, Draft2Digital or Ingram CoreSource. Customers can then expect finished products within one to two months following audio generation and quality checks, but will only be able to sell the releases via Apple Books and distribute them to public and academic libraries. That said, a publisher or author still retains audiobook rights, and will face no restrictions on making and distributing other versions of the audiobook in the future.

[Related: A startup is using AI to make call centers sound more ‘American.’]

Apple is enduring increasing pressure from businesses and regulators regarding its longstanding App Store developer fees and other anti-competitive strategies, and may actually soon be forced to revise its stance on the subject. As such, some see the new service as less a new form of artistic support, and more an attempt to find revenue elsewhere. “Companies see the audiobooks market and that there’s money to be made. They want to make content. But that’s all it is,” literary agent Carly Watters argued to The Guardian on Wednesday. “It’s not what customers want to listen to. There’s so much value in the narration and the storytelling.”

For authors like VanderMeer, their audio releases will also maintain their human qualities, even if their subject matter often veers into the fantastical and surreal. “Inasmuch as I contractually am able to choose who reads my books for audiobooks, I choose human beings,” he writes. “I believe it’s a talent and a creative skill.”

The post Apple introduces AI audiobook narrators, but the literary world is not too pleased appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This AI chatbot will be playing attorney in a real US court https://www.popsci.com/technology/ai-chatbot-lawyer-donotpay/ Thu, 05 Jan 2023 17:00:00 +0000 https://www.popsci.com/?p=503329
Lawyer and client listening to judge in the court room
AI Attorneys at Law could arrive sooner than you think. Deposit Photos

What happens when the defense rests on an AI's legal advice? We’re about to find out.

The post This AI chatbot will be playing attorney in a real US court appeared first on Popular Science.

]]>
Lawyer and client listening to judge in the court room
AI Attorneys at Law could arrive sooner than you think. Deposit Photos

In 2020, the median income for a lawyer in the US was just under $127,000. All that money obviously has to come from somewhere—most often, the clients they represent. Unfortunately, many people have neither the budget for high-priced attorneys nor the time to deal with the tedious red tape on their own. According to one startup, DoNotPay, that means often-winnable appeals fall by the wayside, and the company hopes to level the playing field a bit.

Since 2015, DoNotPay has offered increasingly nuanced and diverse legal advice via AI software trained on copious amounts of past court cases and law data. Last month, the startup’s latest toolkit update included the abilities to negotiate lower bills and cancel unwanted subscriptions while sparing consumers from lengthy customer service interactions. Taking things one step even further in 2023, DoNotPay now seeks to aid defendants in a real-life court setting.

According to New Scientist, the AI chatbot’s developers recently announced plans to supply an unnamed individual with a smartphone connected to their new program for the individual’s upcoming court date. The AI will listen to the hearing’s proceedings as the defendant contests a speeding ticket, and feed the defendant responses to give the judge via an earpiece.

[Related: OpenAI’s new chatbot offers solid conversations and fewer hot takes.]

If such a strategy sounds legally dubious, well—you’re probably right in most places. That said, DoNotPay’s CEO, Joshua Browder, told New Scientist that the company was able to identify a (currently unspecified) location in the country where listening via an earpiece is “technically within the rules,” albeit not in the “spirit of the rules.” In any case, the test case’s defendant doesn’t have much to worry about: regardless of outcome, DoNotPay has agreed to pay the speeding fine if they lose the appeal.

Generative AI programs have increasingly come to the forefront of artistic and ethical debates in recent years with the rise of projects like OpenAI’s impressive ChatGPT and Meta’s less-than-stellar BlenderBot 3 experiment. While it’s unsurprising to see the same kind of advancements find their way into legal settings, experts caution that we’re a long way from replacing all our legal professionals. “When your lawyer tells you ‘OK, let’s do A’, we trust them that they have the expertise and the knowledge to advise us,” Nikos Aletras, an AI designer at the University of Sheffield, UK, told New Scientist. “But [with AI], it’s very hard to trust predictions.”

The post This AI chatbot will be playing attorney in a real US court appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Only 4 people have been able to solve this 1934 mystery puzzle. Can AI do better? https://www.popsci.com/technology/cains-jawbone-ai/ Wed, 04 Jan 2023 20:00:00 +0000 https://www.popsci.com/?p=503094
murder mystery illustration
"Cain's Jawbone" has recently been popularized thanks to TikTok. DEPOSIT PHOTOS

'Cain’s Jawbone' is a scrambled whodunnit that claims to be decipherable through 'logic and intelligent reading.'

The post Only 4 people have been able to solve this 1934 mystery puzzle. Can AI do better? appeared first on Popular Science.

]]>
murder mystery illustration
"Cain's Jawbone" has recently been popularized thanks to TikTok. DEPOSIT PHOTOS

In the 1930s, British crossword writer Edward Powys Mathers created a “fiendishly difficult literary puzzle” in the form of a novel called “Cain’s Jawbone.” The trick to unraveling the whodunnit involves piecing the 100 pages of the book together in the correct order to reveal the six murders and how they happened. 

According to The Guardian, only four (known) people have been able to solve the puzzle since the book was first published. But the age-old mystery saw a resurgence of interest after it was popularized on TikTok by user Sarah Scannel, prompting a 70,000-copy reprint by Unbound. The Washington Post reported last year that the novel has quickly gained a cult following of sorts, with a new wave of curious sleuths openly discussing their progress in online communities across social media. On Reddit, the subreddit r/CainsJawbone has more than 7,600 members. 

So can machine learning help crack the code? A small group of people is trying to find out. Last month, publisher Unbound partnered with the AI platform Zindi to challenge readers to sort the pages using natural language processing algorithms. TikTok user blissfullybreaking explained in a video that one advantage of using AI is that it can pick up on 1930s pop culture references that modern readers might otherwise miss, and cross-reference them with relevant literature from the period. 
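
Entrants’ actual approaches vary, but one simple flavor of the idea (score how plausibly each page follows every other page, then chain the best matches together) can be mocked up with off-the-shelf text tools. The TF-IDF scoring below is a deliberately crude stand-in for the much heavier language models competitors actually used, and the sample “pages” are invented.

```python
# A deliberately crude sketch of "page ordering as a matching problem":
# score how similar the end of one page is to the start of another,
# then greedily chain pages together. Real entries used far stronger
# language models; TF-IDF here is just to show the plumbing.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = [
    "...and so the inspector closed his notebook with a sigh.",
    "The morning post brought a letter that changed everything.",
    "He opened the notebook again; something about the dates troubled him.",
]

tails = [p[-200:] for p in pages]   # closing text of each page
heads = [p[:200] for p in pages]    # opening text of each page

vec = TfidfVectorizer().fit(tails + heads)
scores = cosine_similarity(vec.transform(tails), vec.transform(heads))

order = [0]                          # arbitrarily assume page 0 starts the chain
remaining = set(range(1, len(pages)))
while remaining:
    nxt = max(remaining, key=lambda j: scores[order[-1], j])
    order.append(nxt)
    remaining.remove(nxt)

print("Proposed page order:", order)
```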

[Related: Meta wants to improve its AI by studying human brains]

And it’s a promising approach. Already, natural language processing models have been able to successfully parse reading comprehension tests, pass college entrance exams, simplify scientific articles (with varying accuracy), draft legal briefings, brainstorm story ideas, and play a chat-based strategic board game. AI can even be a fairly competent rookie sleuth, provided you give it enough CSI to binge.

Zindi required the solution to be open-source and publicly available, and teams could only use the datasets they provided for this competition. Additionally, the submitted code that yielded their result must be reproducible, with full documentation of what data was used, what features were implemented, and the environment in which the code was run. 

One member of the leading team, user “skaak,” explained how he tackled this challenge in a discussion post on Zindi’s website. He noted that after experimenting with numerous tweaks to his team’s model, his conclusion was that there is still a “human calibration” needed to guide the model through certain references and cultural knowledge.

The competition closed on New Year’s Eve with 222 enrolled participants, although scoring will be finalized later in January, so stay tuned for tallies and takeaways later this month.

The post Only 4 people have been able to solve this 1934 mystery puzzle. Can AI do better? appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This AI verifies if you actually LOL https://www.popsci.com/technology/lol-verifier-parody-project-text-messages/ Wed, 04 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=503048
Woman standing on street smiling at phone message
"LOL" used to mean something. Deposit Photos

One designer thinks 'LOL' has lost its luster, and aims to bring back its meaning.

The post This AI verifies if you actually LOL appeared first on Popular Science.

]]>
Woman standing on street smiling at phone message
"LOL" used to mean something. Deposit Photos


When was the last time you actually laughed out loud before texting someone “LOL?”

It’s probably not a stretch to say that, at the very least, your figurative guffaws far outnumber your literal chortles. Brian Moore, a creator of satirical products focused on popular culture, society, and technology, certainly seems to feel this way, as he recently unveiled his latest creation meant to return the acronym to its original roots. Everyone, meet the LOL Verifier: a device that, well, verifies a “LOL.”

“I remember when LOL meant ‘laugh out loud,’” Moore explains in his unveiling video posted to Twitter on Tuesday. “… And now it means nothing. Dulled down to the mere acknowledgement of a message.”

[Related: How to text yourself for peak productivity.]

To solve the cultural touchstone phrase’s sad, sorry devolution, Moore designed a small black box—naturally labeled “LOL”—with a large red light in place of the acronym’s “O.” Within that container, a tiny computer houses an AI program that Moore personally trained on roughly half an hour’s worth of his own varied laughs. If the device’s microphone hears an audible chuckle around the time the user types “LOL” on their connected personal computer, it automatically inserts a checkmark emoji alongside “✅LOL verified at [exact time]” into the message. If you happen to be stretching the truth, i.e. humoring your friend with a supportive “LOL,” the LOL Verifier’s light turns red as the three letters are swapped for an alternative such as “Haha” or “That’s funny.”
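
Moore hasn’t published his code, but the loop he describes (classify a short clip of microphone audio as laugh or not-laugh, then either verify or soften the outgoing text) might look something like the toy version below. The random “features” and the classifier are stand-ins; a real build would extract audio features such as MFCCs from a live microphone stream.

```python
# Toy stand-in for the LOL Verifier's decision loop: a classifier trained on
# "laugh" vs. "not laugh" audio features decides whether to verify the message.
# The random features are placeholders for real microphone-derived features
# (e.g., MFCCs); this is not Moore's actual code.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
laugh_features = rng.normal(loc=1.0, size=(50, 13))      # pretend laugh clips
silence_features = rng.normal(loc=-1.0, size=(50, 13))   # keyboard noise, TV, silence

X = np.vstack([laugh_features, silence_features])
y = np.array([1] * 50 + [0] * 50)
clf = SVC().fit(X, y)

def send_message(text: str, mic_features: np.ndarray) -> str:
    """Append a verification stamp if the mic heard a laugh, else soften the LOL."""
    if "lol" not in text.lower():
        return text
    laughed = clf.predict(mic_features.reshape(1, -1))[0] == 1
    if laughed:
        return text + " ✅ LOL verified"
    return text.replace("LOL", "Haha").replace("lol", "haha")

print(send_message("LOL that's great", rng.normal(loc=1.0, size=13)))
```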

As Moore explained to Vice on Wednesday, his AI program not only needed the sound of laughter as reference, but the sounds of no laughter, as well. “The laughs are varied from chuckles to just me going, ‘Ha,’ really loudly,” Moore says. “But then training it on not-laughs, like keyboard sounds and silence. Background noise, TV noise, music. That stuff does not count.”

For now, Moore’s LOL Verifier is a one-off creation limited to his own personal usage, but he hinted in his conversation with Vice about a potential future expansion to the amused masses, depending on interest. For now, he’ll have to simply take your word that you LOLed at his idea.

The post This AI verifies if you actually LOL appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Meet Golfi, the robot that plays putt-putt https://www.popsci.com/technology/robot-golf-neural-network-machine-learning/ Tue, 03 Jan 2023 21:00:00 +0000 https://www.popsci.com/?p=502766
Robot putting golf ball across indoor field into hole
But can Golfi sink a putt through one of those windmill obstacles, though?. YouTube

The tiny bot can scoot on a green and hit a golf ball with impressive accuracy.

The post Meet Golfi, the robot that plays putt-putt appeared first on Popular Science.

]]>
Robot putting golf ball across indoor field into hole
But can Golfi sink a putt through one of those windmill obstacles, though?. YouTube

The first robot to sink an impressive hole-in-one pulled off its fairway feat back in 2016. But the newest automated golfer looks like it’s coming for the short game.

First presented at the IEEE International Conference on Robotic Computing last month and subsequently highlighted by New Scientist on Tuesday, “Golfi” is the modest-sized creation from a research team at Germany’s Paderborn University capable of autonomously locating a ball on a green, traveling to it, and successfully sinking a putt around 60 percent of the time.

To pull off its relatively accurate par, Golfi utilizes an overhead 3D camera to scan an indoor, two-square-meter artificial putting green to find its desired golf ball target. It can then scoot over to the ball and use a neural network algorithm to quickly analyze approximately 3,000 potential golf swings from random points while accounting for physics variables like mass, speed, and ground friction. From there, its arm offers a modest putt that sinks the ball roughly 6 or 7 times out of 10. Although not quite as good as standard human players, it’s still a sizable feat for the machine.
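
Golfi’s planner couples a learned model with physics, but the core trick of sampling thousands of candidate putts and keeping the best one can be illustrated with a bare-bones simulation. The friction constant, hole radius, and sampling ranges below are made-up numbers, not the Paderborn team’s parameters.

```python
# Bare-bones illustration of "sample thousands of candidate putts, keep the best":
# constant-deceleration physics on a flat green. Friction, hole radius, and the
# 3,000-sample count are illustrative assumptions, not Golfi's actual parameters.
import numpy as np

rng = np.random.default_rng(42)
ball = np.array([0.0, 0.0])
hole = np.array([1.2, 0.8])          # meters
friction_decel = 1.0                  # m/s^2 lost to rolling friction (assumed)
hole_radius = 0.054                   # roughly a regulation cup radius

angles = rng.uniform(0, 2 * np.pi, 3000)
speeds = rng.uniform(0.5, 3.0, 3000)

# With constant deceleration a, a ball hit at speed v rolls v^2 / (2a) meters.
distances = speeds**2 / (2 * friction_decel)
stop_points = ball + np.stack([np.cos(angles), np.sin(angles)], axis=1) * distances[:, None]

miss = np.linalg.norm(stop_points - hole, axis=1)
best = int(np.argmin(miss))
print(f"Best sampled putt: angle={np.degrees(angles[best]):.1f} deg, "
      f"speed={speeds[best]:.2f} m/s, stops {miss[best]*100:.1f} cm from the hole "
      f"({'in' if miss[best] < hole_radius else 'out'})")
```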

[Related: Reverse-engineered hummingbird wings could inspire new drone designs.]

However, Golfi isn’t going to show up at minigolf parks anytime soon. The robot’s creators at Paderborn University designed their prototype to work solely in a small indoor area while connected to a wired power source. Golfi’s necessary overhead 3D camera mount also ensures it won’t make an outdoor tee time, either. That’s because, despite its name, Golfi isn’t actually designed to revolutionize the golf game. Instead, the little robot was built to showcase the benefits of combining physics-based models with machine learning programs.

It’s interesting to see Golfi’s talent in comparison to other recent robotic advancements, which have often drawn inspiration from the animal kingdom—from hummingbirds, to spiders, to dogs that just so happen to also climb up walls and across ceilings.

The post Meet Golfi, the robot that plays putt-putt appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Ford used a quantum computer to explore EV battery materials https://www.popsci.com/technology/ford-quantum-ev-battery/ Sat, 24 Dec 2022 12:00:00 +0000 https://www.popsci.com/?p=501690
One of Ford's battery modules
One of Ford's battery modules. Ford

Quantum computers can simulate the properties of new materials that might make batteries safer, more energy-dense, and easier to recycle.

The post Ford used a quantum computer to explore EV battery materials appeared first on Popular Science.

]]>
One of Ford's battery modules
One of Ford's battery modules. Ford

Quantum researchers at Ford have just published a new preprint study that modeled crucial electric vehicle (EV) battery materials using a quantum computer. While the results don’t reveal anything new about lithium-ion batteries, they demonstrate how more powerful quantum computers could be used to accurately simulate complex chemical reactions in the future. 

In order to discover and test new materials with computers, researchers have to break the process up into many separate calculations: one set for all the relevant properties of each single molecule, another for how those properties are affected by the smallest environmental changes like fluctuating temperatures, another for all the possible ways any two molecules can interact, and on and on. Even something that sounds simple, like two hydrogen molecules bonding, requires incredibly deep calculations. 

But developing materials using computers has a huge advantage: researchers don’t have to physically perform every possible experiment, which can be incredibly time-consuming. Tools like AI and machine learning have already sped up the search for novel materials, but quantum computing offers the potential to make it even faster. For EVs, finding better materials could lead to longer-lasting, faster-charging, more powerful batteries. 

Traditional computers use binary bits—which can be a zero or a one—to perform all their calculations. While they are capable of incredible things, there are some problems, like highly accurate molecular modeling, that they just don’t have the power to handle—and because of the kinds of calculations involved, possibly never will. Once researchers model more than a few atoms, the computations become too big and time-consuming, so they have to rely on approximations, which reduce the accuracy of the simulation. 

Instead of regular bits, quantum computers use qubits that can be a zero, a one, or both at the same time. Qubits can also be entangled, rotated, and manipulated in other wild quantum ways to carry more information. This gives them the power to solve problems that are intractable with traditional computers—including accurately modeling molecular reactions. Plus, molecules are quantum by nature, and therefore map more accurately onto qubits, which are represented as waveforms.

Unfortunately, a lot of this is still theoretical. Quantum computers aren’t yet powerful enough or reliable enough to be widely commercially viable. There’s also a knowledge gap—because quantum computers operate in a completely different way to traditional computers, researchers still need to learn how best to employ them. 

[Related: Scientists use quantum computing to create glass that cuts the need for AC by a third]

This is where Ford’s research comes in. Ford is interested in making batteries that are safer, more energy and power-dense, and easier to recycle. To do that, they have to understand chemical properties of potential new materials like charge and discharge mechanisms, as well as electrochemical and thermal stability.

The team wanted to calculate the ground-state energy (the lowest possible energy state) of LiCoO2, a material that could potentially be used in lithium-ion batteries. They did so using an algorithm called the variational quantum eigensolver (VQE) to simulate the Li2Co2O4 and Co2O4 gas-phase models (basically, the simplest form of the chemical reaction possible), which represent the charge and discharge of the battery. VQE uses a hybrid quantum-classical approach, with the quantum computer (in this case, 20 qubits in an IBM statevector simulator) employed only to solve the parts of the molecular simulation that benefit most from its unique attributes. Everything else is handled by traditional computers.
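
Ford’s calculation ran on a 20-qubit simulator with chemistry-grade ansätze, which is well beyond a few lines of code, but the hybrid loop at the heart of VQE (a classical optimizer repeatedly nudging the parameters of a quantum state to lower the energy it measures) can be shown on a toy single-qubit problem. The Hamiltonian and ansatz below are textbook stand-ins, not the lithium-cobalt-oxide system.

```python
# Toy VQE loop on a single-qubit Hamiltonian H = Z + 0.5*X (a textbook stand-in,
# not Ford's Li2Co2O4 model). A classical optimizer tunes the angle of a
# parametrized state to minimize the measured energy <psi|H|psi>.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X

def ansatz(params):
    """A Ry(theta) rotation applied to |0> -- the 'quantum' half of the loop."""
    theta = params[0]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    psi = ansatz(params)
    return float(np.real(np.conj(psi) @ H @ psi))

result = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H).min()
print(f"VQE estimate: {result.fun:.4f}  exact ground state: {exact:.4f}")
```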

As this was a proof of concept for quantum computing, the team tested three approaches with VQE: unitary coupled-cluster singles and doubles (UCCSD), unitary coupled-cluster generalized singles and doubles (UCCGSD), and k-unitary pair coupled-cluster generalized singles and doubles (k-UpCCGSD). Along with comparing the quantitative results, they tallied the quantum resources needed to perform the calculations accurately against those of classical wavefunction-based approaches. They found that k-UpCCGSD produced similar results to UCCSD at lower cost, and that the results from the VQE methods agreed with those obtained using classical methods—like coupled-cluster singles and doubles (CCSD) and complete active space configuration interaction (CASCI). 

Although not quite there yet, the researchers concluded that quantum-based computational chemistry on the kinds of quantum computers that will be available in the near-term will play “a vital role to find potential materials that can enhance the battery performance and robustness.” While they used a 20-qubit simulator, they suggest a 400-qubit quantum computer (which will soon be available) would be necessary to fully model the Li2Co2O4 and Co2O4 system they considered.

All this is part of Ford’s attempt to become a dominant EV manufacturer. Trucks like its F-150 Lightning push the limits of current battery technology, so further advances—likely aided by quantum chemistry—are going to become increasingly necessary as the world moves away from gas-burning cars. And Ford isn’t the only player hoping quantum computing will give it an edge in the battery chemistry game. IBM is also working with Mercedes and Mitsubishi on using quantum computers to reinvent the EV battery. 

The post Ford used a quantum computer to explore EV battery materials appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Tesla driver blames self-driving mode for eight-car pileup https://www.popsci.com/technology/tesla-crash-full-self-driving-mode-san-francisco/ Fri, 23 Dec 2022 17:30:00 +0000 https://www.popsci.com/?p=501727
The insignia of Tesla on the steering wheel of the plug-in electric car Model 3
Add it to the ongoing list of crashes potentially caused by Tesla's autopilot software. Deposit Photos

The Thanksgiving Day crash in San Francisco sent two children to the hospital.

The post Tesla driver blames self-driving mode for eight-car pileup appeared first on Popular Science.

]]>
The insignia of Tesla on the steering wheel of the plug-in electric car Model 3
Add it to the ongoing list of crashes potentially caused by Tesla's autopilot software. Deposit Photos

The driver of a 2021 Tesla Model S says a Full Self-Driving (FSD) Mode malfunction is behind a Thanksgiving Day eight-vehicle crash on San Francisco’s Bay Bridge. The accident resulted in two children receiving minor injuries. The incident, made public on Wednesday via a local police report and subsequently reported on by Reuters, is only the latest in a string of wrecks, some fatal, to draw scrutiny from the National Highway Traffic Safety Administration (NHTSA).

As The Guardian also notes, the multi-car wreck came just hours after Tesla CEO Elon Musk announced the $15,000 autopilot upgrade would become available to all eligible vehicle owners in North America. Prior to the expansion, FSD was only open to Tesla drivers with “high safety scores.”

[Related: Tesla is under federal investigation over autopilot claims.]

Although Full Self-Driving Mode has drawn consistent criticism and scrutiny since its debut, Musk has repeatedly attested to the software’s capabilities, going so far as to take interview questions from the driver’s seat of a Tesla engaged in the feature. FSD Mode utilizes a complex network of AI, sensors, machine learning, and camera systems to supposedly control the basics of driving in real time, such as steering, speed, braking, and changing lanes. According to the Thanksgiving Day crash’s police report, the driver claims his car suddenly and inexplicably slowed from 55 mph to around 20 mph while attempting to switch lanes, resulting in a rear-end collision that set off a chain of related wrecks.

[Related: YouTube pulls video of Tesla superfan testing autopilot safety on a child.]

The police report makes clear the crashes’ cause is still unconfirmed, and that the driver still should have been paying sufficient attention to take control of the vehicle in the event of FSD malfunctioning. Tesla’s own website cautions that its Autopilot and FSD Modes “do not make the vehicle[s] autonomous.”

Since 2016, the NHTSA has opened 41 investigations involving Teslas potentially engaged in Full Self-Driving Mode—19 people have died as a result of those crashes. Yesterday, the carmaker began notifying Tesla owners around the world of a complimentary, 30-day trial of its Enhanced Autopilot software, which supposedly offers automated lane changes, parking, and navigation.

The post Tesla driver blames self-driving mode for eight-car pileup appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The good and the bad of Lensa’s AI portraits https://www.popsci.com/technology/lensa-ai-portrait/ Fri, 16 Dec 2022 15:00:00 +0000 https://www.popsci.com/?p=498941
a collage of lensa's AI-generated portraits
Here are some of the portraits Lensa came up with for me. Harry Guinness / Lensa

Lensa can create dozens of personalized images in an assortment of artistic styles.

The post The good and the bad of Lensa’s AI portraits appeared first on Popular Science.

]]>
a collage of lensa's AI-generated portraits
Here are some of the portraits Lensa came up with for me. Harry Guinness / Lensa

Lensa is an AI-powered photo editing app that has risen to the top of app stores around the world. Although it has been available since 2018, it’s only with the release of its Magic Avatars feature last month that it became a worldwide social media hit. If you’ve been on Twitter, Instagram, or TikTok in the last few weeks, you’ve almost certainly seen some of its AI-generated images in a variety of styles.

Lensa relies on Stable Diffusion (which we’ve covered before) to make its Magic Avatars. Users upload between 10 and 20 headshots with the iOS or Android app, and Lensa trains a custom version of Stable Diffusion’s image generation model with them. By using a personalized AI model, Lensa is able to create dozens of images in an assortment of artistic styles that actually resemble a real person instead of the abstract idea of one. Or at least, it’s able to do it just enough of the time to be impressive. There is a reason that Magic Avatars are only available in packs of 50, 100, and 200 for $3.99, $5.99, and $7.99 respectively. 

Of course, Lensa’s Magic Avatars aren’t free from artifacts. AI models can generate some incredibly weird images that resemble monsters or abstract art instead of a person. The shapes of eyes, fingers, and other smaller details are more likely to be imperfect than, say, the position of someone’s mouth or nose. 

And like most AI-generators, Lensa’s creations aren’t free from gender, racial, and other biases. In an article in The Cut called “Why Do All My AI Avatars Have Huge Boobs,” Mia Mercado (who is half white, half Filipina) wrote that her avatars were “underwhelming.” According to Mercado, “the best ones looked like fairly accurate illustrations.” Most, though, “showed an ambiguously Asian woman,” often with “a chest that can only be described as ample.”

[Related: Shutterstock and OpenAI have come up with one possible solution to the ownership problem in AI art]

Writing for MIT Technology Review, Melissa Heikkilä (who is similarly of Asian heritage) calls her avatars “cartoonishly pornified.” Out of 100 portraits that she generated, 16 were topless and another 14 had her “in extremely skimpy clothes and overtly sexualized poses.” And this problem isn’t limited to Lensa. Other AI image generators that use Stable Diffusion have also created some incredibly questionable images of people of color.

The issue is so widespread that in an FAQ on its website, Prisma Labs, the company behind Lensa, had to give a response to the question: “Why do female users tend to get results featuring an over sexualised look?” The short answer: “Occasional sexualization is observed across all gender categories, although in different ways.”

Per the FAQ, the problem can be traced back to the dataset that Stable Diffusion is initially trained on. It uses the LAION-5B dataset, which contains almost 6 billion unfiltered image-text pairs scraped from around the internet. Stability AI (the makers of Stable Diffusion) has openly acknowledged that “the model may reproduce some societal biases and produce unsafe content.” This includes sexualized images of women and generic, stereotypical, and racist images of people of color. 

Both Stability AI and Prisma claim to have taken steps to minimize the prevalence of NSFW outputs, but these AI models are black boxes by design, meaning that sometimes the human programmers don’t even fully know about all the associations that the model is making. Short of creating a bias-free image database to train an AI model on, some societal biases are probably always going to be present in AI generators’ outputs.

And that’s if everyone is operating in good faith. TechCrunch was able to create new NSFW images of a famous actor using Lensa. They uploaded a mixture of genuine SFW images of the actor and photoshopped images of the actor’s face on a topless model. Of the 100 images created, 11 were “topless photos of higher quality (or, at least with higher stylistic consistency) than the poorly done edited topless photos the AI was given as input.” Of course, this is against Lensa’s terms of service, but that hasn’t exactly stopped people in the past. 

The most promising feature of these AI generators, though, is how fast they are improving. While it’s undeniable that marginalized groups are seeing societal biases reflected in their outputs right now, if these models continue to evolve—and if the developers remain as receptive to feedback—then there is reason to be optimistic that they can do more than just reflect back the worst of the internet. 

The post The good and the bad of Lensa’s AI portraits appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
UberEats is rolling out a fleet of self-driving delivery robots in Miami https://www.popsci.com/technology/uber-cartken-delivery-robot-miami/ Thu, 15 Dec 2022 20:00:00 +0000 https://www.popsci.com/?p=499194
Two Cartken robotic delivery vehicles traveling along a sidewalk in a line
UberEats now can deliver via these little robots in Miami. Cartken

These little robots are venturing out of a university setting for the first time.

The post UberEats is rolling out a fleet of self-driving delivery robots in Miami appeared first on Popular Science.

]]>
Two Cartken robotic delivery vehicles traveling along a sidewalk in a line
UberEats now can deliver via these little robots in Miami. Cartken

On Thursday, Uber announced a partnership with the robotics manufacturer Cartken that will send a fleet of miniature self-driving robots into Miami, Florida. These little vehicles won’t be driving people though—just snacks.

Based in Oakland, California, and started by a team of former Google engineers, Cartken already deploys its automated, six-wheeled vehicles to deliver food and other small items across multiple college campuses. But as The Verge notes, Uber claims this will be the company’s “first formal partnership with a global on-demand delivery app beyond” universities.

[Related: Uber’s latest goals involve more delivery and more EVs.]

Cartken’s small, fully electric, automated delivery vehicles are manufactured by auto supplier Magna, and each can carry around 24 pounds of items in its cargo compartment. While they only clock in at speeds slightly slower than pedestrians, an embedded camera system allows the robots to maneuver around obstacles and adjust in real time to the environment around them. Each Cartken robot can deliver within a several-mile radius depending on battery charge, which makes them ideal for relatively small areas such as school campuses and Miami’s Dadeland commercial shopping complex, where they made their UberEats debut on Thursday before potential expansions throughout the county and in other cities.

Uber has openly pursued automated driving and delivery services for years now, although the path toward accomplishing this goal has been anything but smooth. In 2018, a self-driving Uber car in Arizona struck and killed a pedestrian, putting at least a temporary halt to the company’s aims of fully automating fleets. Earlier this month, the company appears to have restarted those plans with the introduction of self-driving taxi options in Las Vegas alongside expansion plans for Los Angeles—although a human safety driver will still remain behind the wheel for the time being.

[Related: Study shows the impact of automation on worker pay.]

While potentially convenient for hungry consumers, the Uber-Cartken team-up reflects wider industry aims of increased automation. A diminished need for human labor is directly related to cost-efficient advances in artificial intelligence and robotics. Corporations such as Uber are literally banking on this automation to be cheaper and faster than their current workers. The Cartken fleet may be cute to look at roaming around sidewalks and campuses, but every additional robot is potentially one less delivery job for a gig economy worker already strapped for cash.

Earlier this year, Uber also announced a partnership with Nuro, makers of a much larger, street traveling autonomous vehicle capable of delivering roughly 24 bags of groceries at a time to customers in Houston, Texas, and Mountain View, California.

The post UberEats is rolling out a fleet of self-driving delivery robots in Miami appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This fossil-sorting robot can identify millions-year-old critters for climate researchers https://www.popsci.com/technology/forabot-sort-foram-fossil/ Tue, 13 Dec 2022 20:00:00 +0000 https://www.popsci.com/?p=498405
Foraminiferas are tiny marine organisms with intricate shells.
Foraminiferas are tiny marine organisms with intricate shells. Josef Reischig / Wikimedia Czech Republic

Forabot’s job is to image, ID, and categorize the tiny shells left behind by marine organisms called foraminiferas.

The post This fossil-sorting robot can identify millions-year-old critters for climate researchers appeared first on Popular Science.

]]>
Foraminiferas are tiny marine organisms with intricate shells.
Foraminiferas are tiny marine organisms with intricate shells. Josef Reischig / Wikimedia Czech Republic

Tiny marine fossils called foraminifera, or forams, have been instrumental in guiding scientists studying global climate through the ages. The oldest records of their existence, evident through the millimeter-wide shells they leave behind when they die, date back more than 500 million years. In their heyday, these single-celled protists thrived across many marine environments—so much so that a lot of seafloor sediments are composed of their remains. 

The shells, which are varied and intricate, can provide valuable insights into the state of the ocean, along with its chemistry and temperature, during the time that the forams were alive. But so far, the process of identifying, cataloging, and sorting through these microscopic organisms has been a tedious chore for research labs around the world. 

Now, there is hope that the menial job may get outsourced to a more mechanical workforce in the future. A team of engineers from North Carolina State University and University of Colorado Boulder has built a robot specifically designed to isolate, image, and classify individual forams by species. It’s called the Forabot, and is constructed from off-the-shelf robotics components and a custom artificial intelligence software (now open-source). In a small proof-of-concept study published this week in the journal Geochemistry, Geophysics, Geosystems, the technology had an ID accuracy of 79 percent. 

“Due to the small size and great abundance of planktic foraminifera, hundreds or possibly thousands can often be picked from a single cubic centimeter of ocean floor mud,” the authors wrote in their paper. “Researchers utilize relative abundances of foram species in a sample, as well as determine the stable isotope and trace element compositions of their fossilized remains to learn about their paleoenvironment.”

[Related: Your gaming skills could help teach an AI to identify jellyfish and whales]

Before any formal analyses can happen, however, the foraminifera have to be sorted. That’s where Forabot could come in. After scientists wash and sieve samples filled with the sand-like shells, they place the materials into a container called the isolation tower. From there, single forams are transferred to another container called the imaging tower, where an automated camera captures a series of shots of each specimen that are then fed to the AI software for identification. Once the specimen is classified by the computer, it is shuttled to a sorting station, where it is dispensed into a corresponding well based on species. In its current form, Forabot can distinguish six different species of foram and can process 27 forams per hour (quick math by the researchers indicates that it can get through around 600 fossils a day). 

For the classification software, the team modified a neural network called VGG-16 that had been pretrained on more than 34,000 planktonic foram images that were collected worldwide as part of the Endless Forams project. “This is a proof-of-concept prototype, so we’ll be expanding the number of foram species it is able to identify,” Edgar Lobaton, an associate professor at NC State University and an author on the paper, said in a press release. “And we’re optimistic we’ll also be able to improve the number of forams it can process per hour.”
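
The team’s full software is open source; the classification step alone (a VGG-16 backbone with its final layer swapped out for a handful of foram species) looks roughly like the PyTorch sketch below, with weights and image preprocessing simplified for illustration. The released code, not this snippet, is the authoritative version.

```python
# Illustrative sketch of a VGG-16-based foram classifier with a six-species head.
# Weight initialization and preprocessing are simplified; the team's released
# open-source code is the authoritative version.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 6

model = models.vgg16(weights=None)                 # pretrained weights could be loaded instead
model.classifier[6] = nn.Linear(4096, NUM_SPECIES) # replace the original 1000-class head

model.eval()
with torch.no_grad():
    foram_image = torch.rand(1, 3, 224, 224)       # stand-in for a camera frame
    logits = model(foram_image)
    species_id = int(logits.argmax(dim=1))
print(f"Predicted species bin: {species_id}")
```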

Watch Forabot at work below:

The post This fossil-sorting robot can identify millions-year-old critters for climate researchers appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Here’s how a new AI mastered the tricky game of Stratego https://www.popsci.com/technology/ai-stratego/ Sat, 03 Dec 2022 12:00:00 +0000 https://www.popsci.com/?p=494302
stratego board game
zizou man / Wikimedia

It’s a huge and surprising result—at least to the the Stratego community. 

The post Here’s how a new AI mastered the tricky game of Stratego appeared first on Popular Science.

]]>
stratego board game
zizou man / Wikimedia

A new AI called “DeepNash” has mastered Stratego, one of the few iconic boardgames where computers don’t regularly trounce human players, according to a paper published this week. It’s a huge and surprising result—at least to the Stratego community. 

Stratego is a game with two distinct challenges: it requires long-term strategic thinking (like chess) and also requires players to deal with incomplete information (like poker). The goal is to move across the board and capture the other player’s flag piece. Each game takes place on a 10 x 10 gridded board with two 2 x 2 square lakes blocking the middle of the board. Both players have 40 pieces with different tactical values that are deployed at the start of the game—the catch is that you can’t see what your opponent’s pieces are and they can’t see what yours are. When you are planning an attack, you don’t know if the defender is a high-ranked Marshal that will beat almost all of your pieces or a lowly Sergeant that can be taken out by a Lieutenant or Captain. Some of the other playable pieces include bombs (powerful but immobile), scouts (which can move more than one square at a time), and miners (who can defuse bombs), all of which add to the tactical complexity. The game only ends when one player’s flag piece is captured or they can no longer make any legal moves. 

All this is to say that Stratego creates a unique challenge for computers to solve. Chess is relatively easy because all the information is visible to everyone—in game theory, it’s called a “perfect information game.” A computer can look at your defenses, simulate 10 or so moves ahead for a few different options, and pick the best one. That gives it a serious strategic advantage over even the best human players. It also helps that chess tends to be won or lost in a few key moments rather than by gradual pressure. The average chess game takes around 40 moves, while a Stratego game takes more than 380. This means each move in chess is far more important (and, for humans, warrants a lot more consideration), whereas Stratego is more fast-paced and flexible. 

[Related: Meta’s new AI can use deceit to conquer a board game world]

Stratego, on the other hand, is an “imperfect information game.” Until an opponent’s piece attacks or is attacked, you have no way of knowing what it is. In poker, an imperfect information game that computers have been able to play at a high level for years, there are 10^164 possible game states and each player only has 10^3 possible two-card starting hands. In Stratego, there are 10^535 possible states and more than 10^66 possible deployments—that means there’s a lot more unknown information to account for. And that’s on top of the strategic challenges. 

Combined, the two challenges make Stratego especially difficult for computers (or AI researchers). According to the team, it’s “not possible to use state-of-the-art model-based perfect information planning techniques nor state-of-the-art imperfect information search techniques that break down the game into independent situations.” The computer has to be able to make strategic plans that incorporate the imperfect information it has available to it. 

But DeepNash has been able to pull it off. The researchers used a novel method that allowed the AI to learn to play Stratego by itself while developing its own strategies. It used a model-free reinforcement learning algorithm called Regularized Nash Dynamics (R-NaD), combined with a deep neural network architecture, that seeks a Nash equilibrium—“an unexploitable strategy in zero-sum two-player games” like Stratego—and by doing so, it could learn the “qualitative behavior that one could expect a top player to master.” This approach has been used before in simpler Prisoner’s Dilemma-style games, but never with a game as complex as this. 
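
R-NaD itself is far more involved, but the target it chases, an unexploitable strategy in a zero-sum game, can be illustrated on something as small as rock-paper-scissors using regret matching, a much simpler self-play method. The sketch below is a conceptual stand-in, not DeepMind’s algorithm.

```python
# Regret matching on rock-paper-scissors: a toy illustration of self-play
# converging toward a Nash equilibrium (the unexploitable 1/3-1/3-1/3 mix).
# This is a conceptual stand-in, not DeepMind's R-NaD algorithm.
import numpy as np

# Payoff matrix for the row player: rows/cols = rock, paper, scissors.
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]], dtype=float)

regrets = np.zeros(3)
strategy_sum = np.zeros(3)
rng = np.random.default_rng(1)

for _ in range(20000):
    positive = np.maximum(regrets, 0)
    strategy = positive / positive.sum() if positive.sum() > 0 else np.ones(3) / 3
    strategy_sum += strategy

    my_move = rng.choice(3, p=strategy)
    opp_move = rng.choice(3, p=strategy)          # self-play: opponent uses the same strategy

    payoff = PAYOFF[my_move, opp_move]
    regrets += PAYOFF[:, opp_move] - payoff       # how much better each alternative would have been

average_strategy = strategy_sum / strategy_sum.sum()
print("Average strategy:", np.round(average_strategy, 3))  # approaches [0.333, 0.333, 0.333]
```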

DeepNash was tested against the best existing Stratego bots and expert human players. It beat all other bots and was highly competitive against the expert humans on Gravon, an online board games platform. Even better, from a qualitative standpoint, it was able to play well. It could make trade-offs between taking material and concealing the identity of its pieces, execute bluffs, and even take calculated gambles. (Though the researchers also consider that terms like “deception” and “bluff” might well refer to mental states that DeepNash is incapable of having.)

All told, it’s an exciting demonstration of a new way of training AI models to play games (and maybe perform other similar tasks in the future)—and it doesn’t rely on computationally heavy deep search strategies which have previously been used to play other games like chess, Go, and poker.

The post Here’s how a new AI mastered the tricky game of Stratego appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Brace yourself for smarter robots that don’t fall over (as easily) https://www.popsci.com/technology/robots-falling/ Fri, 02 Dec 2022 20:30:00 +0000 https://www.popsci.com/?p=494233
Humanoid robot bracing itself against a wall in balance experiment with human researcher standing behind it in lab setting
Easy does it there, pal. YouTube

Researchers tipped over their robot over 882,000 times to teach its neural network how to keep from tumbling.

The post Brace yourself for smarter robots that don’t fall over (as easily) appeared first on Popular Science.

]]>
Humanoid robot bracing itself against a wall in balance experiment with human researcher standing behind it in lab setting
Easy does it there, pal. YouTube

A lot of mobile robots do a pretty decent job of maintaining their balance while on the move, but like humans, they’re still prone to lose their footing from time to time. Although that bodes well for outrunning them during the impending robopocalypse, until then, it mostly means more potential for expensive repairs and time-consuming maintenance. As first unveiled earlier this year and highlighted by Engadget on Wednesday, at least a few more robots could be saved from taking a tumble in the near future thanks to advancements from researchers at France’s University of Lorraine.

[Related: The Boston Dynamics robots are surprisingly good dancers.]

Through a lot of trial and error—reportedly over 882,000 training simulations, to be more exact—developers designed a new “Damage Reflex” system for their humanoid robot test subject. When activated, the robot’s neural network quickly identifies the best spot on a nearby wall to support itself if its stability gets compromised. Well, perhaps not so much “if” as “when,” judging from the demonstration video below.

As Engadget explains, the testing procedure sounds pretty simple, if a bit macabre: To showcase the Damage Reflex system in action, the robot has one of its legs “broken” to ensure it tips over toward a nearby test wall. In roughly three out of four instances, the machine’s arm was able to find a solid spot to plant itself against in order to prevent a fall. That’s pretty good when one takes into account all the variations in location, balance, weight, and distribution the system has to work through to prevent an accident in real time.
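
The Lorraine team’s controller was trained on a full humanoid model across those hundreds of thousands of simulated falls; the much smaller decision it makes in the moment (of all the reachable points on the wall, which one best arrests the fall) can be caricatured in a few lines. The geometry and scoring rule below are invented purely for illustration and bear no relation to the team’s actual controller.

```python
# Caricature of the "pick a bracing point" decision: among reachable points on a
# nearby wall, choose the one closest to the line the robot's center of mass is
# falling along. Geometry and scoring are invented for illustration only.
import numpy as np

shoulder = np.array([0.0, 1.3])        # shoulder position (x, z), meters
arm_reach = 0.7
com = np.array([0.0, 0.9])             # center of mass
fall_direction = np.array([1.0, -0.3]) # robot tipping toward the wall at x = 0.6
wall_x = 0.6

# Candidate contact points: a strip of heights on the wall.
heights = np.linspace(0.4, 1.6, 25)
candidates = np.stack([np.full_like(heights, wall_x), heights], axis=1)

reachable = np.linalg.norm(candidates - shoulder, axis=1) <= arm_reach
# Score: distance from each candidate to the center-of-mass fall line (smaller is better).
t = (candidates - com) @ fall_direction / (fall_direction @ fall_direction)
nearest_on_line = com + np.outer(t, fall_direction)
score = -np.linalg.norm(candidates - nearest_on_line, axis=1)
score[~reachable] = -np.inf

best = candidates[int(np.argmax(score))]
print(f"Brace against the wall at height {best[1]:.2f} m")
```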

[Related: Boston Dynamics gave its dog-like robot a charging dock and an arm on its head.]

There are quite a few caveats to the Damage Reflex system’s early iteration: First, it only stops a robot from falling over; it can’t help it recover or right itself. Right now, it has also only been tested on a stationary robot, meaning the system currently isn’t capable of addressing accidents that may occur while walking or mid-stride. That said, the researchers intend to further develop the system so that it can also handle on-the-go machines, as well as utilize nearby objects like chairs or tables to its advantage.

Companies like Tesla and Boston Dynamics are keen to push bipedal robots into everyday life, a goal that’s really only realistic as long as their products are relatively affordable to both purchase and maintain. Systems like Damage Reflex, while still in their infancy, could soon go a long way to both protect robots and extend their lifespans.

The post Brace yourself for smarter robots that don’t fall over (as easily) appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
OpenAI’s new chatbot offers solid conversations and fewer hot takes https://www.popsci.com/technology/openai-chatbot/ Fri, 02 Dec 2022 18:30:00 +0000 https://www.popsci.com/?p=494109
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen.
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen. Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images

The bot is great at impersonations and pretty good at avoiding controversy.

The post OpenAI’s new chatbot offers solid conversations and fewer hot takes appeared first on Popular Science.

]]>
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen.
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen. Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images

Released earlier this week and subsequently tested by outlets including Ars Technica and The Verge, OpenAI’s ChatGPT showcases many promising advancements in conversation bots’ ability to answer general questions and distill complex subject matter, but it’s still prone to occasionally spewing misinformation, and it can be manipulated into providing problematic, even dangerous, responses. To design ChatGPT, OpenAI’s research team first relied on Reinforcement Learning from Human Feedback (RLHF), in which trainers wrote conversations while playing both sides of the discussion—human and AI. Trainers were also provided model-written suggestions to help approximate AI responses. From there, they ranked subsequent chatbot conversations by comparing multiple alternative prompt completions to fine-tune the model’s abilities.
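
OpenAI hasn’t released ChatGPT’s training code, but that ranking step typically feeds a reward model trained with a pairwise comparison loss, which in PyTorch looks roughly like the sketch below. The tiny linear “reward model” and the random embeddings standing in for real responses are assumptions for illustration only.

```python
# Sketch of the pairwise ranking loss commonly used to train an RLHF reward
# model: push the reward of the human-preferred response above the rejected one.
# The tiny linear "reward model" and random response embeddings are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Linear(128, 1)        # maps a response embedding to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(100):
    chosen = torch.randn(16, 128)       # embeddings of trainer-preferred responses
    rejected = torch.randn(16, 128)     # embeddings of less-preferred responses

    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)

    # Loss = -log sigmoid(r_chosen - r_rejected): low when chosen outranks rejected.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"Final ranking loss: {loss.item():.3f}")
```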

The resultant dialogue format “makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI explains in a blog announcement posted on Wednesday.

[Related: Meta’s new chatbot is already parroting users’ prejudice and misinformation.]

A quick ChatGPT test drive from PopSci immediately highlighted how bots can be successfully programmed to avoid being manipulated into providing at least the worst-of-the-worst answers. When asked about ChatGPT’s opinion on notable public figures, hot button political issues, and socio-cultural demographics, it generally responded with a reminder that it “[does not] possess personal beliefs or emotions,” adding that it is only “designed to provide information and answer questions to the best of my ability based on the data that I have been trained on,” while also cautioning that it does not “engage in social or political discussions.” Fair enough.

[Related: Researchers used AI to explain complex science. Results were mixed.]

That said, it is more than happy to distill quantum computing’s complexities while talking to you like a cowboy:

AI photo
A high-tech rodeo! Source: PopSci

ChatGPT is also pretty great at providing some context on subjects such as what NASA’s impending return to the moon could mean for future space travel:

AI photo
Source: PopSci

OpenAI’s bot is also able to proofread computer code in languages like Python and provide concrete factual statements, although it’s currently unclear if it gets Monty Python references.

AI photo
Source: PopSci

There are also instances of ChatGPT perhaps working a bit too well, such as its ability to ostensibly write an entire college-level essay from a class prompt within seconds. The implications of a convincing CheatBot are obviously problematic, and offer yet another example of how language processing AI still needs a lot of guidance and consideration to keep up with its burgeoning capabilities. At least ChatGPT isn’t readily offering us the recipe for Molotov cocktails… note the use of the qualifier “readily.”

Chatbots are rapidly improving thanks to major strides in neural networks and language modeling, but they are still far from perfect. Take Meta’s disastrous BlenderBot 3 rollout earlier this year—users were able to easily manipulate discussions with it to produce racist hate speech almost immediately, forcing the Big Tech giant to briefly restrict access to the bot while it worked out at least some of the kinks. Before that there was Tay, Microsoft’s 2016 attempt at a conversational program, whose results were… less than desirable, to say the least. In any case, companies will be working to optimize their chatbots for years to come, but OpenAI’s new ChatGPT seems (at first glance) to be a major step forward in providing users with clear, concise information and responses while ensuring things don’t offensively veer off the rails—at least, not as often as others in its chatbot cohort.

The post OpenAI’s new chatbot offers solid conversations and fewer hot takes appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>