Glenn Greenwald and his colleagues at The Intercept have just released an extensive report on the NSA's use of XKEYSCORE. And here's a video on the same topic:
Check out the related news here.
Google Faces French Ultimatum Over Right to Be Forgotten
by Stephanie Bodoni
June 12, 2015 — 5:22 PM HKT
Updated on June 12, 2015 — 11:24 PM HKT
Google Inc. risks French fines after being handed a 15-day ultimatum to extend the so-called right to be forgotten to all its websites, including those outside the European Union.
France’s data protection regulator, CNIL, ordered the world’s most-used search engine to proceed with delistings of links across its network, irrespective of the domain name, according to a statement on Friday. CNIL said it received “hundreds of complaints following Google’s refusals.”
The order comes more than a year after a ruling by the EU's highest court created a right to be forgotten, allowing people to seek the deletion of links on search engines if the information was outdated or irrelevant. The ruling created a furor, with Mountain View, California-based Google appointing a special panel to advise it on implementing the ruling. The panel opposed applying it beyond EU domains.
If Google "doesn't comply with the formal notice within the 15 days," Isabelle Falque-Pierrotin, the president of CNIL, "will be in position to nominate a rapporteur to draft a report recommending to the CNIL Select Committee to impose a sanction to the company," the watchdog said.
“We’ve been working hard to strike the right balance in implementing the European court’s ruling, cooperating closely with data protection authorities,” Al Verney, a spokesman for Google in Brussels, said in an e-mailed statement. “The ruling focused on services directed to European users, and that’s the approach we are taking in complying with it.”
EU data protection chiefs, currently headed by Falque-Pierrotin, already urged Google last year to remove links, when needed, from .com sites as well.
Google Chairman Eric Schmidt has argued that the EU court’s ruling in May 2014 — in which it ordered search links tied to individuals cut when those people contend the material is irrelevant or outdated — didn’t need to be extended to the U.S. site.
"It is easy to circumvent the right to be forgotten by using the domain Google.com," said Johannes Caspar, the Hamburg data protection commissioner. "Google should comply with the decision and close the protection gap quickly."
Google has removed 342,161, or 41.3 percent, of links that it has “fully processed,” according to a report on its website.
The U.K.’s Information Commissioner’s Office said in a statement that its experience with removal requests “suggests that, for the most part, Google are getting the balance right between the protection of the individual’s privacy and the interest of internet users.”
The right-to-be-forgotten rules add to separate demands for curbs on Google’s market power being considered by lawmakers this week. EU antitrust regulators in April escalated their four-year-old probe into Google, sending the company a statement of objections accusing the Internet giant of abusing its dominance of the search-engine market.
The same day, the EU also started a new investigation into Google’s Android mobile-phone software.
Note: The announcements start at 50:25.
And here’s a nice article from Quartz that sums up the key Google announcements:
Everything Google just announced at its I/O developer conference
Brace yourself. (Alice Truong/Quartz)
As anticipated, Google made a flurry of announcements during the two-and-a-half-hour keynote at its I/O developer conference. The company debuted the new capabilities of its next Android release, along with a photo-sharing app with unlimited storage; updates to its lo-fi virtual-reality headset made of cardboard; and much, much more.
Here’s a rundown of what was announced today:
Android M: Google didn't reveal what the M actually stands for, but the next major release of Google's mobile operating system will be packed with new goodies (many of which are broken out below). A feature called Chrome Custom Tabs will let developers use Google's browser within their apps, so they don't have to build their own from scratch. M also will include more nuanced app permissions, with apps prompting users to grant or deny permissions when a feature launches, rather than at installation. (Users would be able to easily modify permissions after the fact as well.)
M's hardware changes: Though some smartphone manufacturers, such as Samsung, have already added fingerprint readers to their devices, Google is officially adding support for this in Android M. In addition, it will support USB type-C, the next-generation standard for charging and file transfer. When users plug in a USB type-C cable, they'll be able to choose the type of connection, depending on whether they want to charge the device, use the device as a battery pack to charge another device, transfer files or photos, or connect to external devices such as keyboards.
Android Pay: Google didn’t talk about the fate of Google Wallet, but it did introduce Android Pay. Like Apple Pay, it’ll allow merchants to accept tap-to-pay transactions at the store, as well as purchases made on mobile apps. So far, about 7,000 merchants have agreed to accept Android Pay. People with Android M devices will be able to authorize payments with their fingerprints, similar to how Apple Pay works with Touch ID.
Power conservation: A new M feature called Doze will help mobile devices conserve battery life. When a device has been left unattended for an extended period, it’ll automatically enter a power-saving mode that will still allow alarms and important notifications to come through. With this feature, Google says, smartphone charges can last twice as long.
Google Photos: The company launched a new photo and video service with unlimited storage. The interface makes it easy to scan through years of photos and can group photos of the same person over time (even back to birth, as indicated by the conference demo). The app also can be used to create collages, animations, and movies with soundtracks.
Android TV, Chromecast, and HBO Now: Playing catch-up to Apple, Google announced that HBO’s standalone streaming service, HBO Now, will head to Chromecast and Android devices. The company also revealed that it’s sold 17 million Chromecast devices, and that 20,000 apps have been built for its streaming dongle.
Android Auto: Android Auto now has 35 car manufacturers on board, including GM, Hyundai, and Volkswagen. Just this week, Android Auto made its way to its first consumer car: the 2015 Hyundai Sonata.
Android Wear: Updates to Android Wear, the software used in Android smartwatches, include a low-power, always-on mode. This will let people keep useful information, such as directions, on their wrist without the display going dark. New wrist gestures will allow wearers to navigate the menus of a smartwatch so they don’t need to use both hands. And users will be able to add emoji to messages by drawing them on the watch face—the software would then detect and select the proper emoji.
Project Brillo and Weave: Based on Android, Project Brillo is Google’s underlying operating system for connected devices. Google also introduced Weave, a language that will allow internet-of-things devices to communicate with each other, with Nest products, and with smartphones.
A smarter Google Now: Google Now currently helps users plan their days, letting them know when to commute or pulling up boarding passes when they’re at the airport. But the company’s vision is to make it smarter and more actionable. The service is getting better at understanding context, so it can pull up information such as reviews or show times when a movie is referenced. In addition, with more than 100 partners on board for a pilot, it’ll be able to do things like hail an Uber or Lyft, reorder groceries from Instacart, and make restaurant reservations on OpenTable.
Faster loading and offline support: Good news for the next billion: Google has streamlined Search, Chrome, YouTube, and Maps so they work faster on slow internet connections. A more lightweight version of search on mobile is about 10 times smaller and loads 30% faster. Changes to Chrome, such as putting in placeholder images instead of loading actual ones, mean sites are about 80% smaller and use less memory. In some countries, offline access is available for Chrome, YouTube, and Maps.
Cardboard VR: Last year, Google showed off its lo-fi virtual reality headset, which can be constructed from cardboard. The headset has since been redesigned so it takes only three steps to construct and can fit phones with displays of up to 6 inches. The software developer kit will now support iOS as well as Android. Google also announced Expeditions, which will let students take field trips to far-flung parts of the globe using Cardboard.
Immersive 360-degree video: To create immersive video for virtual reality, Google previewed a new multi-camera array that can shoot videos in 360 degrees. Though the idea is to make this system, called Jump, available to anyone, Google also tapped GoPro to build and sell its own array with 16 Hero4 cameras.
Tools to test and increase exposure of apps: Cloud Test Lab, a result of Google’s acquisition last year of Appurify, will let developers easily test their apps on 20 Android devices. Universal App Campaigns will help them advertise their apps across AdMob, YouTube, and search ads in Google Play. Developers only have to set their ad budgets and specify how much they want to spend to add each new user. Google also will offer granular analytics for Google Play listings, so developers know if the photos they’ve chosen are attracting (or deterring) new users.
Google snooping on your web browsing or email may now be the least of your worries.
Late last week, it emerged that Google has filed its creepiest patent yet – for a toy that can control other Wi-Fi connected devices. Well, for starters, just imagine this: if that toy senses you're looking at it, it will rotate its head and look back at you…
It’s simple. Whenever Bruce Schneier speaks, listen.
How we sold our souls – and more – to the internet giants
Sunday 17 May 2015 11.00 BST
Last year, when my refrigerator broke, the repair man replaced the computer that controls it. I realised that I had been thinking about the refrigerator backwards: it’s not a refrigerator with a computer, it’s a computer that keeps food cold. Just like that, everything is turning into a computer. Your phone is a computer that makes calls. Your car is a computer with wheels and an engine. Your oven is a computer that cooks lasagne. Your camera is a computer that takes pictures. Even our pets and livestock are now regularly chipped; my cat could be considered a computer that sleeps in the sun all day.
Computers are being embedded into all sorts of products that connect to the internet. Nest, which Google purchased last year for more than $3bn, makes an internet-enabled thermostat. You can buy a smart air conditioner that learns your preferences and maximises energy efficiency. Fitness tracking devices, such as Fitbit or Jawbone, collect information about your movements, awake and asleep, and use that to analyse both your exercise and sleep habits. Many medical devices are starting to be internet-enabled, collecting and reporting a variety of biometric data. There are – or will be soon – devices that continually measure our vital signs, moods and brain activity.
This year, we have had two surprising stories of technology monitoring our activity: Samsung televisions that listen to conversations in the room and send them elsewhere for transcription – just in case someone is telling the TV to change the channel – and a Barbie that records your child’s questions and sells them to third parties.
All these computers produce data about what they’re doing and a lot of it is surveillance data. It’s the location of your phone, who you’re talking to and what you’re saying, what you’re searching and writing. It’s your heart rate. Corporations gather, store and analyse this data, often without our knowledge, and typically without our consent. Based on this data, they draw conclusions about us that we might disagree with or object to and that can affect our lives in profound ways. We may not like to admit it, but we are under mass surveillance.
Internet surveillance has evolved into a shockingly extensive, robust and profitable surveillance architecture. You are being tracked pretty much everywhere you go, by many companies and data brokers: 10 different companies on one website, a dozen on another. Facebook tracks you on every site with a Facebook Like button (whether you’re logged in to Facebook or not), while Google tracks you on every site that has a Google Plus g+ button or that uses Google Analytics to monitor its own web traffic.
Most of the companies tracking you have names you’ve never heard of: Rubicon Project, AdSonar, Quantcast, Undertone, Traffic Marketplace. If you want to see who’s tracking you, install one of the browser plug-ins that let you monitor cookies. I guarantee you will be startled. One reporter discovered that 105 different companies tracked his internet use during one 36-hour period. In 2010, the seemingly innocuous site Dictionary.com installed more than 200 tracking cookies on your browser when you visited.
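Those trackers work through ordinary HTTP cookies set by third-party domains. As a rough illustration (the header below is invented, not taken from any real tracker), Python's standard library can parse the kind of `Set-Cookie` header a cookie-monitoring plug-in would surface:

```python
from http.cookies import SimpleCookie

# Hypothetical third-party Set-Cookie header, of the sort a
# cookie-monitoring browser plug-in lets you inspect.
header = "uid=a1b2c3; Domain=.tracker.example; Max-Age=31536000; Path=/"

cookie = SimpleCookie()
cookie.load(header)

morsel = cookie["uid"]
print(morsel.value)       # the unique visitor ID the tracker assigned you
print(morsel["domain"])   # a third-party domain, not the site you visited
print(morsel["max-age"])  # 31536000 seconds: the ID persists for a year
```

The `Domain` attribute is what makes the cookie third-party: it belongs to the tracker, so the same ID comes back on every site that embeds that tracker's content.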
It’s no different on your smartphone. The apps there track you as well. They track your location and sometimes download your address book, calendar, bookmarks and search history. In 2013, the rapper Jay Z and Samsung teamed up to offer people who downloaded an app the ability to hear the new Jay Z album before release. The app required that users give Samsung consent to view all accounts on the phone, track its location and who the user was talking to. The Angry Birds game even collects location data when you’re not playing. It’s less Big Brother and more hundreds of tittletattle little brothers.
Most internet surveillance data is inherently anonymous, but companies are increasingly able to correlate the information gathered with other information that positively identifies us. You identify yourself willingly to lots of internet services. Often you do this with only a username, but increasingly usernames can be tied to your real name. Google tried to enforce this with its “real name policy”, which required users to register for Google Plus with their legal names, until it rescinded that policy in 2014. Facebook pretty much demands real names. Whenever you use your credit card number to buy something, your real identity is tied to any cookies set by companies involved in that transaction. And any browsing you do on your smartphone is tied to you as the phone’s owner, although the website might not know it.
Surveillance is the business model of the internet for two primary reasons: people like free and people like convenient. The truth is, though, that people aren’t given much of a choice. It’s either surveillance or nothing and the surveillance is conveniently invisible so you don’t have to think about it. And it’s all possible because laws have failed to keep up with changes in business practices.
In general, privacy is something people tend to undervalue until they don’t have it anymore. Arguments such as “I have nothing to hide” are common, but aren’t really true. People living under constant surveillance quickly realise that privacy isn’t about having something to hide. It’s about individuality and personal autonomy. It’s about being able to decide who to reveal yourself to and under what terms. It’s about being free to be an individual and not having to constantly justify yourself to some overseer.
This tendency to undervalue privacy is exacerbated by companies deliberately making sure that privacy is not salient to users. When you log on to Facebook, you don’t think about how much personal information you’re revealing to the company; you chat with your friends. When you wake up in the morning, you don’t think about how you’re going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.
But by accepting surveillance-based business models, we hand over even more power to the powerful. Google controls two-thirds of the US search market. Almost three-quarters of all internet users have Facebook accounts. Amazon controls about 30% of the US book market, and 70% of the ebook market. Comcast owns about 25% of the US broadband market. These companies have enormous power and control over us simply because of their economic position.
Our relationship with many of the internet companies we rely on is not a traditional company-customer relationship. That’s primarily because we’re not customers – we’re products those companies sell to their real customers. The companies are analogous to feudal lords and we are their vassals, peasants and – on a bad day – serfs. We are tenant farmers for these companies, working on their land by producing data that they in turn sell for profit.
Yes, it’s a metaphor, but it often really feels like that. Some people have pledged allegiance to Google. They have Gmail accounts, use Google Calendar and Google Docs and have Android phones. Others have pledged similar allegiance to Apple. They have iMacs, iPhones and iPads and let iCloud automatically synchronise and back up everything. Still others let Microsoft do it all. Some of us have pretty much abandoned email altogether for Facebook, Twitter and Instagram. We might prefer one feudal lord to the others. We might distribute our allegiance among several of these companies or studiously avoid a particular one we don’t like. Regardless, it’s becoming increasingly difficult to avoid pledging allegiance to at least one of them.
After all, customers get a lot of value out of having feudal lords. It’s simply easier and safer for someone else to hold our data and manage our devices. We like having someone else take care of our device configurations, software management, and data storage. We like it when we can access our email anywhere, from any computer, and we like it that Facebook just works, from any device, anywhere. We want our calendar entries to appear automatically on all our devices. Cloud storage sites do a better job of backing up our photos and files than we can manage by ourselves; Apple has done a great job of keeping malware out of its iPhone app store. We like automatic security updates and automatic backups; the companies do a better job of protecting our devices than we ever did. And we’re really happy when, after we lose a smartphone and buy a new one, all of our data reappears on it at the push of a button.
In this new world of computing, we’re no longer expected to manage our computing environment. We trust the feudal lords to treat us well and protect us from harm. It’s all a result of two technological trends.
The first is the rise of cloud computing. Basically, our data is no longer stored and processed on our computers. That all happens on servers owned by many different companies. The result is that we no longer control our data. These companies access our data—both content and metadata—for whatever profitable purpose they want. They have carefully crafted terms of service that dictate what sorts of data we can store on their systems, and can delete our entire accounts if they believe we violate them. And they turn our data over to law enforcement without our knowledge or consent. Potentially even worse, our data might be stored on computers in a country whose data protection laws are less than rigorous.
The second trend is the rise of user devices that are managed closely by their vendors: iPhones, iPads, Android phones, Kindles, ChromeBooks, and the like. The result is that we no longer control our computing environment. We have ceded control over what we can see, what we can do, and what we can use. Apple has rules about what software can be installed on iOS devices. You can load your own documents onto your Kindle, but Amazon is able to delete books it has already sold you. In 2009, Amazon automatically deleted some editions of George Orwell’s Nineteen Eighty-Four from users’ Kindles because of a copyright issue. I know, you just couldn’t write this stuff any more ironically.
It’s not just hardware. It’s getting hard to just buy a piece of software and use it on your computer in any way you like. Increasingly, vendors are moving to a subscription model—Adobe did that with Creative Cloud in 2013—that gives the vendor much more control. Microsoft hasn’t yet given up on a purchase model, but is making its MS Office subscription very attractive. And Office 365’s option of storing your documents in the Microsoft cloud is hard to turn off. Companies are pushing us in this direction because it makes us more profitable as customers or users.
Given current laws, trust is our only option. There are no consistent or predictable rules. We have no control over the actions of these companies. I can’t negotiate the rules regarding when Yahoo will access my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello. I don’t even know the cloud providers to whom those companies have outsourced their infrastructures. If any of those companies delete my data, I don’t have the right to demand it back. If any of those companies give the government access to my data, I have no recourse. And if I decide to abandon those services, chances are I can’t easily take my data with me.
Political scientist Henry Farrell observed: “Much of our life is conducted online, which is another way of saying that much of our life is conducted under rules set by large private businesses, which are subject neither to much regulation nor much real market competition.”
The common defence is something like “business is business”. No one is forced to join Facebook or use Google search or buy an iPhone. Potential customers are choosing to enter into these quasi-feudal user relationships because of the enormous value they receive from them. If they don’t like it, goes the argument, they shouldn’t do it.
This advice is not practical. It’s not reasonable to tell people that if they don’t like their data being collected, they shouldn’t email, shop online, use Facebook or have a mobile phone. I can’t imagine students getting through school anymore without an internet search or Wikipedia, much less finding a job afterwards. These are the tools of modern life. They’re necessary to a career and a social life. Opting out just isn’t a viable choice for most of us, most of the time; it violates what have become very real norms of contemporary life.
Right now, choosing among providers is not a choice between surveillance or no surveillance, but only a choice of which feudal lords get to spy on you. This won’t change until we have laws to protect both us and our data from these sorts of relationships. Data is power and those that have our data have power over us. It’s time for government to step in and balance things out.
Adapted from Data and Goliath by Bruce Schneier, published by Norton Books. To order a copy for £17.99 go to bookshop.theguardian.com. Bruce Schneier is a security technologist and CTO of Resilient Systems Inc. He blogs at schneier.com, and tweets at @schneierblog
Some thoughts for the weekend… listen especially to the first six and a half minutes of this clip below about the conspiracy theories surrounding the recent mysterious death of Dave Goldberg, the husband of Facebook Chief Operating Officer Sheryl Sandberg – the “Facebook-NSA Queen”.
(Above) Photo credit: dailydotcom
Are you wondering why this “problem” (data overload – see article below) did not happen earlier…?
NSA is so overwhelmed with data, it’s no longer effective, says whistleblower
Summary: One of the agency’s first whistleblowers says the NSA is taking in too much data for it to handle, which can have disastrous — if not deadly — consequences.
By Zack Whittaker for Zero Day | April 30, 2015 — 14:29 GMT (22:29 GMT+08:00)
NEW YORK — A former National Security Agency official turned whistleblower has spent almost a decade and a half in civilian life. And he says he’s still “pissed” by what he’s seen leak in the past two years.
In a lunch meeting hosted by Contrast Security founder Jeff Williams on Wednesday, William Binney, a former NSA official who spent more than three decades at the agency, said the US government’s mass surveillance programs have become so engorged with data that they are no longer effective, losing vital intelligence in the fray.
That, he said, can lead — and has led — to terrorist attacks succeeding.
Binney said that an analyst today can run one simple query across the NSA’s various databases, only to become immediately overloaded with information. With about four billion people — around two-thirds of the world’s population — under the NSA and partner agencies’ watchful eyes, according to his estimates, there is too much data being collected.
“That’s why they couldn’t stop the Boston bombing, or the Paris shootings, because the data was all there,” said Binney. Because the agency isn’t carefully and methodically setting its tools up for smart data collection, analysts are left searching for a needle in a haystack.
“The data was all there… the NSA is great at going back over it forensically for years to see what they were doing before that,” he said. “But that doesn’t stop it.”
Binney called this a “bulk data failure” — in that the NSA programs, leaked by Edward Snowden, are collecting too much for the agency to process. He said the problem runs deeper across law enforcement and other federal agencies, like the FBI, the CIA, and the Drug Enforcement Administration (DEA), which all have access to NSA intelligence.
Binney left the NSA a month after the September 11 attacks in New York City in 2001, days after controversial counter-terrorism legislation — the Patriot Act — was enacted in the wake of the attacks. Binney stands jaded by his experience leaving the shadowy eavesdropping agency, but impassioned for the job he once had. He left after a program he helped develop was scrapped three weeks prior to September 11, replaced by a system he said was more expensive and more intrusive. Snowden has said that Binney’s case in part inspired him to leak thousands of classified documents to journalists.
Since then, the NSA has ramped up its intelligence gathering mission to indiscriminately “collect it all.”
Binney said the NSA today is not as interested in phone records — such as who calls whom, when, and for how long. Although the Obama administration calls that program a “critical national security tool,” the agency is increasingly looking at the content of communications, as the Snowden disclosures have shown.
Binney estimated that a “maximum” of 72 companies were participating in the bulk records collection program — including Verizon — but said that was a drop in the ocean. He also called PRISM, the clandestine surveillance program that grabs data from nine named Silicon Valley giants, including Apple, Google, Facebook, and Microsoft, just a “minor part” of the data collection process.
“The Upstream program is where the vast bulk of the information was being collected,” said Binney, talking about how the NSA tapped undersea fiber optic cables. With help from its British counterparts at GCHQ, the NSA is able to “buffer” more than 21 petabytes a day.
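A rough back-of-envelope calculation (assuming decimal petabytes, i.e. 10^15 bytes) puts that figure in perspective: buffering 21 petabytes a day is a sustained intake of nearly two terabits per second, around the clock:

```python
PETABYTE = 10 ** 15      # decimal petabyte, in bytes (assumption)
SECONDS_PER_DAY = 86_400

bytes_per_day = 21 * PETABYTE
# Convert to terabits per second of sustained throughput.
rate_tbps = bytes_per_day * 8 / SECONDS_PER_DAY / 10 ** 12

print(f"{rate_tbps:.2f} Tbit/s")  # roughly 1.94 Tbit/s, continuously
```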
Binney said the “collect it all” mantra now may be the norm, but it’s expensive and ineffective.
“If you have to collect everything, there’s an ever increasing need for more and more budget,” he said. “That means you can build your empire.”
They say you never leave the intelligence community. Once you’re a spy, you’re always a spy — it’s a job for life, with few exceptions. One of those is blowing the whistle, which he did. Since then, he has spent his retirement lobbying for change and reform in industry and in Congress.
“They’re taking away half of the constitution in secret,” said Binney. “If they want to change the constitution, there’s a way to do that — and it’s in the constitution.”
An NSA spokesperson did not immediately comment.
Here’s an insight into one man at Google to keep tabs on – see the article below.
New Google security chief looks for balance with privacy
By GLENN CHAPMAN, AFP April 19, 2015 4:55am
MOUNTAIN VIEW, United States – Google has a new sheriff keeping watch over the wilds of the Internet.
Austrian-born Gerhard Eschelbeck has ranged the British city of Oxford, cavorted at notorious Def Con hacker conclaves, wrangled a herd of startups, and camped out in Silicon Valley.
He now holds the reins of security and privacy for all-things Google.
In an exclusive interview with AFP, Eschelbeck spoke of using Google’s massive scope to protect users from cyber villains such as spammers and state-sponsored spies.
“The size of our computing infrastructure allows us to process, analyze, and research the changing threat landscape and look ahead to predict what is coming,” Eschelbeck said during his first one-on-one press interview in his new post.
“Security is obviously a constant race; the key is how far can you look ahead.”
Eschelbeck took charge of Google’s 500-strong security and privacy team early this year, returning to Silicon Valley after running engineering for a computer security company in Oxford for two years.
“It was a very natural move for me to join Google,” Eschelbeck said. “What really excited me was doing security at large scale.”
Google’s range of global services and products means there are many fronts for a security expert to defend. Google’s size also means there are arsenals of powerful computer servers for defenders to employ and large-scale data from which to discern cyber dangers.
Eschelbeck’s career in security stretches back two decades to a startup he built while a university student in Austria that was acquired by security company McAfee.
What started out as a six-month work stint in California where McAfee is based turned into a 15-year stay by Eschelbeck.
He created and advised an array of computer security startups before heading off to Oxford. Eschelbeck has worked at computer technology titans such as Sophos and Qualys, and holds patents for network security technologies.
He was confident his team was up to the challenge of fending off cyber attacks, even from onslaughts of sophisticated operations run by the likes of the US National Security Agency or the Chinese military.
Eschelbeck vowed that he would “absolutely” find any hacker that came after his network.
“As a security guy, I am never comfortable,” he said. “But, I do have a very strong team…I have confidence we have the right reactive and proactive defense mechanisms as well.”
State-sponsored cyber attacks making news in the past year come on top of well-known trends of hacking expressly for fun or profit.
The sheer number of attack “vectors” has rocketed over time, with weapons targeting smartphones, applications, datacenters, operating systems and more.
“You can safely assume that every property on the Internet is continuously under attack,” Eschelbeck said.
“I feel really strong about our ability to identify them before they become a threat and the ability to block and prevent them from entering our environment.”
Eschelbeck is a backer of encrypting data, whether it be an email to a friend or photos stored in the cloud.
“I hope for a time when all the traffic on the Internet is encrypted,” he said.
“You’re not sending a letter to your friend in a transparent envelope, and that is why encryption in transport is so critical.”
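Encryption in transport on the web means TLS, and modern client libraries now enforce the "sealed envelope" by default. As a small sketch, Python's `ssl` module shows the two defaults that matter: the server must present a certificate that validates, and its hostname must match:

```python
import ssl

# The default client-side context refuses unverified connections.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate chain must validate
print(ctx.check_hostname)                    # cert must match the server name

# A real HTTPS client would then wrap its TCP socket with
# ctx.wrap_socket(sock, server_hostname="example.com") before sending data.
```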
He believes that within five years, accessing accounts with no more than passwords will be a thing of the past.
Google lets people require that codes sent to their phones be entered along with passwords to access accounts, in what is referred to as “two-factor” authentication.
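The mechanics of an SMS-style second factor can be sketched roughly like this (a minimal illustration, not Google's actual implementation; the names, the five-minute expiry and the storage format are all assumptions for demonstration):

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed expiry: codes die after five minutes

def issue_code():
    """Generate a random six-digit code and the record the server keeps."""
    code = f"{secrets.randbelow(10**6):06d}"
    record = {"code": code, "expires_at": time.time() + CODE_TTL_SECONDS}
    return code, record  # 'code' is texted to the phone; 'record' is stored

def verify_code(submitted, record):
    """Accept only an unexpired, exactly matching code."""
    if time.time() > record["expires_at"]:
        return False
    # compare_digest avoids leaking the code through timing differences
    return hmac.compare_digest(submitted, record["code"])

code, record = issue_code()
assert verify_code(code, record)
```

The security of the scheme rests entirely on the attacker not seeing the texted code, which is exactly the assumption the call-forwarding attack described later in this post defeats.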
The Internet titan also provides “safe browsing” technology that warns people when they are heading to websites rigged to attack visitors.
Google identifies about 50,000 malicious websites monthly, and another 90,000 phishing websites designed to trick people into giving up their passwords or other valuable personal information, Eschelbeck said.
“We have some really great visibility into the Web, as you can imagine,” he said.
“The time for us to recognize a bad site is incredibly short.”
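At its simplest, a "safe browsing" style check boils down to matching a URL's host against a list of known-bad sites. The toy blocklist and helper below are invented for illustration; Google's real system uses hashed URL prefixes and continuously updated lists:

```python
from urllib.parse import urlsplit

# Hypothetical local blocklist; the real service distributes hashed
# prefixes of bad URLs rather than plain hostnames.
MALICIOUS_HOSTS = {"evil.example", "phish.example"}

def is_flagged(url):
    """Return True if the URL's host, or any parent domain, is blocklisted."""
    host = urlsplit(url).hostname or ""
    parts = host.split(".")
    # check the host itself and every parent suffix (covers subdomains)
    return any(".".join(parts[i:]) in MALICIOUS_HOSTS for i in range(len(parts)))

assert is_flagged("http://login.phish.example/account")
assert not is_flagged("https://www.example.com/")
```

Checking parent suffixes matters because phishing pages are routinely hosted on throwaway subdomains of a flagged domain.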
Doubling down on privacy
Eschelbeck saw the world of online security as fairly black and white, while the privacy side of his job required subjective interpretations.
Google works closely with data protection authorities in Europe and elsewhere to try to harmonize privacy protections with the standards in various countries.
“I really believe that with security and privacy, there is more overlap than there are differences,” he said.
“We have made a tremendous effort to focus and double-down on privacy issues.”
Like other large Internet companies, Google routinely makes public the requests it receives from government agencies for information about users.
Requests are carefully reviewed, and only about 65 percent of them satisfied, according to Google.
“Privacy, to me, is protecting and securing my activities; that they are personal to myself and not visible to the whole wide world,” Eschelbeck said. — Agence France-Presse
The question is, would you buy it? See Facebook’s response here and below.
Have to feel sorry for Snowden here…
Here’s an exclusive story (below) from Al Jazeera neither Google nor the NSA wants you to know.
Exclusive: Emails reveal close Google relationship with NSA
National Security Agency head and Internet giant’s executives have coordinated through high-level policy discussions
May 6, 2014 5:00AM ET
by Jason Leopold
Email exchanges between National Security Agency Director Gen. Keith Alexander and Google executives Sergey Brin and Eric Schmidt suggest a far cozier working relationship between some tech firms and the U.S. government than was implied by Silicon Valley brass after last year’s revelations about NSA spying.
Disclosures by former NSA contractor Edward Snowden about the agency’s vast capability for spying on Americans’ electronic communications prompted a number of tech executives whose firms cooperated with the government to insist they had done so only when compelled by a court of law.
But Al Jazeera has obtained two sets of email communications dating from a year before Snowden became a household name that suggest not all cooperation was under pressure.
On the morning of June 28, 2012, an email from Alexander invited Schmidt to attend a four-hour-long “classified threat briefing” on Aug. 8 at a “secure facility in proximity to the San Jose, CA airport.”
“The meeting discussion will be topic-specific, and decision-oriented, with a focus on Mobility Threats and Security,” Alexander wrote in the email, obtained under a Freedom of Information Act (FOIA) request, the first of dozens of communications between the NSA chief and Silicon Valley executives that the agency plans to turn over.
Alexander, Schmidt and other industry executives met earlier in the month, according to the email. But Alexander wanted another meeting with Schmidt and “a small group of CEOs” later that summer because the government needed Silicon Valley’s help.
“About six months ago, we began focusing on the security of mobility devices,” Alexander wrote. “A group (primarily Google, Apple and Microsoft) recently came to agreement on a set of core security principles. When we reach this point in our projects we schedule a classified briefing for the CEOs of key companies to provide them a brief on the specific threats we believe can be mitigated and to seek their commitment for their organization to move ahead … Google’s participation in refinement, engineering and deployment of the solutions will be essential.”
Jennifer Granick, director of civil liberties at Stanford Law School’s Center for Internet and Society, said she believes information sharing between industry and the government is “absolutely essential” but “at the same time, there is some risk to user privacy and to user security from the way the vulnerability disclosure is done.”
The challenge facing government and industry was to enhance security without compromising privacy, Granick said. The emails between Alexander and Google executives, she said, show “how informal information sharing has been happening within this vacuum where there hasn’t been a known, transparent, concrete, established methodology for getting security information into the right hands.”
The classified briefing cited by Alexander was part of a secretive government initiative known as the Enduring Security Framework (ESF), and his email provides some rare information about what the ESF entails, the identities of some participant tech firms and the threats they discussed.
Alexander explained that the deputy secretaries of the Department of Defense, Homeland Security and “18 US CEOs” launched the ESF in 2009 to “coordinate government/industry actions on important (generally classified) security issues that couldn’t be solved by individual actors alone.”
“For example, over the last 18 months, we (primarily Intel, AMD [Advanced Micro Devices], HP [Hewlett-Packard], Dell and Microsoft on the industry side) completed an effort to secure the BIOS of enterprise platforms to address a threat in that area.”
“BIOS” is an acronym for “basic input/output system,” the system software that initializes the hardware in a personal computer before the operating system starts up. NSA cyberdefense chief Debora Plunkett in December disclosed that the agency had thwarted a “BIOS plot” by a “nation-state,” identified as China, to brick U.S. computers. That plot, she said, could have destroyed the U.S. economy. “60 Minutes,” which broke the story, reported that the NSA worked with unnamed “computer manufacturers” to address the BIOS software vulnerability.
But some cybersecurity experts questioned the scenario outlined by Plunkett.
“There is probably some real event behind this, but it’s hard to tell, because we don’t have any details,” wrote Robert Graham, CEO of the penetration-testing firm Errata Security in Atlanta, on his blog in December. “It’s completely false in the message it is trying to convey. What comes out is gibberish, as any technical person can confirm.”
And by enlisting the NSA to shore up their defenses, those companies may have made themselves more vulnerable to the agency’s efforts to breach them for surveillance purposes.
“I think the public should be concerned about whether the NSA was really making its best efforts, as the emails claim, to help secure enterprise BIOS and mobile devices and not holding the best vulnerabilities close to their chest,” said Nate Cardozo, a staff attorney with the Electronic Frontier Foundation’s digital civil liberties team.
He doesn’t doubt that the NSA was trying to secure enterprise BIOS, but he suggested that the agency, for its own purposes, was “looking for weaknesses in the exact same products they’re trying to secure.”
The NSA “has no business helping Google secure its facilities from the Chinese and at the same time hacking in through the back doors and tapping the fiber connections between Google base centers,” Cardozo said. “The fact that it’s the same agency doing both of those things is in obvious contradiction and ridiculous.” He recommended dividing offensive and defensive functions between two agencies.
Two weeks after the “60 Minutes” broadcast, the German magazine Der Spiegel, citing documents obtained by Snowden, reported that the NSA inserted back doors into BIOS, doing exactly what Plunkett accused a nation-state of doing during her interview.
Google’s Schmidt was unable to attend the mobility security meeting in San Jose in August 2012.
“General Keith.. so great to see you.. !” Schmidt wrote. “I’m unlikely to be in California that week so I’m sorry I can’t attend (will be on the east coast). Would love to see you another time. Thank you !” Since the Snowden disclosures, Schmidt has been critical of the NSA and said its surveillance programs may be illegal.
Army Gen. Martin E. Dempsey, chairman of the Joint Chiefs of Staff, did attend that briefing. Foreign Policy reported a month later that Dempsey and other government officials — no mention of Alexander — were in Silicon Valley “picking the brains of leaders throughout the valley and discussing the need to quickly share information on cyber threats.” Foreign Policy noted that the Silicon Valley executives in attendance belonged to the ESF. The story did not say that mobility threats and security were the top agenda item, or that a classified threat briefing took place.
A week after the gathering, Dempsey said during a Pentagon press briefing, “I was in Silicon Valley recently, for about a week, to discuss vulnerabilities and opportunities in cyber with industry leaders … They agreed — we all agreed on the need to share threat information at network speed.”
Google co-founder Sergey Brin attended previous meetings of the ESF group, but because of a scheduling conflict, according to Alexander’s email, he also could not attend the Aug. 8 briefing in San Jose, and it’s unknown whether someone else from Google was sent.
A few months earlier, Alexander had emailed Brin to thank him for Google’s participation in the ESF.
“I see ESF’s work as critical to the nation’s progress against the threat in cyberspace and really appreciate Vint Cerf [Google’s vice president and chief Internet evangelist], Eric Grosse [vice president of security engineering] and Adrian Ludwig’s [lead engineer for Android security] contributions to these efforts during the past year,” Alexander wrote in a Jan. 13, 2012, email.
“You recently received an invitation to the ESF Executive Steering Group meeting, which will be held on January 19, 2012. The meeting is an opportunity to recognize our 2012 accomplishments and set direction for the year to come. We will be discussing ESF’s goals and specific targets for 2012. We will also discuss some of the threats we see and what we are doing to mitigate those threats … Your insights, as a key member of the Defense Industrial Base, are valuable to ensure ESF’s efforts have measurable impact.”
A Google representative declined to answer specific questions about Brin’s and Schmidt’s relationship with Alexander or about Google’s work with the government.
“We work really hard to protect our users from cyberattacks, and we always talk to experts — including in the U.S. government — so we stay ahead of the game,” the representative said in a statement to Al Jazeera. “It’s why Sergey attended this NSA conference.”
Brin responded to Alexander the following day, even though the head of the NSA hadn’t used the appropriate email address when contacting the co-founder.
“Hi Keith, looking forward to seeing you next week. FYI, my best email address to use is [redacted],” Brin wrote. “The one your email went to — firstname.lastname@example.org — I don’t really check.”
Facebook ‘tracks all visitors, breaching EU law’
Exclusive: People without Facebook accounts, logged out users, and EU users who have explicitly opted out of tracking are all being tracked, report says
Facebook tracks the web browsing of everyone who visits a page on its site even if the user does not have an account or has explicitly opted out of tracking in the EU, extensive research commissioned by the Belgian data protection agency has revealed.
The researchers now claim that Facebook tracks computers of users without their consent, whether they are logged in to Facebook or not, and even if they are not registered users of the site or explicitly opt out in Europe. Facebook tracks users in order to target advertising.
The issue revolves around Facebook’s use of its social plugins such as the “Like” button, which has been placed on more than 13m sites including health and government sites.
Facebook places tracking cookies on users’ computers if they visit any page on the facebook.com domain, including fan pages or other pages that do not require a Facebook account to visit.
When a user visits a third-party site that carries one of Facebook’s social plug-ins, the plug-in detects the tracking cookies and sends them back to Facebook – even if the user does not interact with the Like button, Facebook Login or any other extension of the social media site.
EU privacy law states that prior consent must be given before issuing a cookie or performing tracking, unless it is necessary for either the networking required to connect to the service (“criterion A”) or to deliver a service specifically requested by the user (“criterion B”).
A cookie is a small file placed on a user’s computer by a website; it stores settings, previous activities and other small amounts of information needed by the site. It is sent back to the site on each visit and can therefore be used to identify a user’s computer and track the user’s movements across the web.
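The round trip described above can be sketched with Python's standard `http.cookies` module. This is an illustration of the general mechanism, not Facebook's actual code; the cookie name is borrowed from the article, and the lifetime and domain are assumptions:

```python
from http.cookies import SimpleCookie
import secrets

# Server side, first visit: no cookie present, so mint a random
# identifier and attach it to the response.
visitor_id = secrets.token_hex(16)
cookie = SimpleCookie()
cookie["datr"] = visitor_id                       # name borrowed from the article
cookie["datr"]["max-age"] = 2 * 365 * 24 * 3600   # roughly two years (assumed)
cookie["datr"]["domain"] = ".example.com"         # hypothetical domain

set_cookie_header = cookie.output(header="Set-Cookie:")

# Browser side, later visit: the browser echoes back only name=value,
# letting the server recognise the same machine.
returned = SimpleCookie()
returned.load(f"datr={visitor_id}")
assert returned["datr"].value == visitor_id
```

An identifying cookie like this is exactly what distinguishes tracking from a harmless preference cookie: the random value ties every subsequent request back to one browser for as long as the cookie lives.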
“We collect information when you visit or use third-party websites and apps that use our services. This includes information about the websites and apps you visit, your use of our services on those websites and apps, as well as information the developer or publisher of the app or website provides to you or us,” states Facebook’s data usage policy, which was updated this year.
Facebook’s tracking practices have ‘no legal basis’
An opinion published by Article 29, the pan-European data regulator working party, in 2012 stated that unless delivering a service specifically requested by the user, social plug-ins must have consent before placing a cookie. “Since by definition social plug-ins are destined to members of a particular social network, they are not of any use for non-members, and therefore do not match ‘criterion B’ for those users.”
The same applies for users of Facebook who are logged out at the time, while logged-in users should only be served a “session cookie” that expires when the user logs out or closes their browser, according to Article 29.
The Article 29 working party has also said that cookies set for “security purposes” can only fall under the consent exemptions if they are essential for a service explicitly requested by the user – not general security of the service.
The social network tracks its users for advertising purposes across non-Facebook sites by default. Users can opt out of ad tracking, but an opt-out mechanism “is not an adequate mechanism to obtain average users’ informed consent”, according to Article 29.
“European legislation is really quite clear on this point. To be legally valid, an individual’s consent towards online behavioural advertising must be opt-in,” explained Brendan Van Alsenoy, a researcher at ICRI and one of the report’s authors.
“Facebook cannot rely on users’ inaction (ie not opting out through a third-party website) to infer consent. As far as non-users are concerned, Facebook really has no legal basis whatsoever to justify its current tracking practices.”
Opt-out mechanism actually enables tracking for the non-tracked
The researchers also analysed the opt-out mechanism used by Facebook and many other internet companies including Google and Microsoft.
Users wanting to opt out of behavioural tracking are directed to sites run by the Digital Advertising Alliance in the US, Digital Advertising Alliance of Canada in Canada or the European Digital Advertising Alliance in the EU, each of which allow bulk opting-out from 100 companies.
But the researchers discovered that far from opting out of tracking, Facebook places a new cookie on the computers of users who have not been tracked before.
“If people who are not being tracked by Facebook use the ‘opt out’ mechanism proposed for the EU, Facebook places a long-term, uniquely identifying cookie, which can be used to track them for the next two years,” explained Günes Acar from Cosic, who also co-wrote the report. “What’s more, we found that Facebook does not place any long-term identifying cookie on the opt-out sites suggested by Facebook for US and Canadian users.”
The finding was confirmed by Steven Englehardt, a researcher at Princeton University’s department of computer science who was not involved in the report: “I started with a fresh browsing session and received an additional ‘datr’ cookie that appears capable of uniquely identifying users on the UK version of the European opt-out site. This cookie was not present during repeat tests with a fresh session on the US or Canadian version.”
Facebook sets an opt-out cookie on all the opt-out sites, but this cookie cannot be used for tracking individuals since it does not contain a unique identifier. Why Facebook places the “datr” cookie on computers of EU users who opt out is unknown.
For users worried about tracking, third-party browser add-ons that block tracking are available, says Acar: “Examples include Privacy Badger, Ghostery and Disconnect. Privacy Badger replaces social plug-ins with privacy preserving counterparts so that users can still use social plug-ins, but not be tracked until they actually click on them.”
“We argue that it is the legal duty of Facebook to design its services and components in a privacy-friendly way,” Van Alsenoy added. “This means designing social plug-ins in such a way that information about individual’s personal browsing activities outside of Facebook are not unnecessarily exposed.”
A Facebook spokesperson said: “This report contains factual inaccuracies. The authors have never contacted us, nor sought to clarify any assumptions upon which their report is based. Neither did they invite our comment on the report before making it public. We have explained in detail the inaccuracies in the earlier draft report (after it was published) directly to the Belgian DPA, who we understand commissioned it, and have offered to meet with them to explain why it is incorrect, but they have declined to meet or engage with us. However, we remain willing to engage with them and hope they will be prepared to update their work in due course.”
“Earlier this year we updated our terms and policies to make them more clear and concise, to reflect new product features and to highlight how we’re expanding people’s control over advertising. We’re confident the updates comply with applicable laws including EU law.”
Van Alsenoy and Acar, authors of the study, told the Guardian: “We welcome comments via the contact email address listed within the report. Several people have already reached out to provide suggestions and ideas, which we really appreciate.”
“To date, we have not been contacted by Facebook directly nor have we received any meeting request. We’re not surprised that Facebook holds a different opinion as to what European data protection laws require. But if Facebook feels today’s releases contain factual errors, we’re happy to receive any specific remarks it would like to make.”
Let’s continue on the Facebook topic from yesterday and hear it this time from software freedom activist and computer programmer Richard Stallman (also known as rms).
Do you need convincing reasons to leave Facebook for good? Look no further than this video clip and Guardian article below.
To be honest, I signed up to Facebook only late last year but used it exclusively to promote this blog. Yet, I’m always having second thoughts…
Leave Facebook if you don’t want to be spied on, warns EU
European Commission admits Safe Harbour framework cannot ensure privacy of EU citizens’ data when sent to the US by American internet firms
Thursday 26 March 2015 19.11 GMT
The European Commission has warned EU citizens that they should close their Facebook accounts if they want to keep information private from US security services, finding that current Safe Harbour legislation does not protect citizens’ data.
The comments were made by EC attorney Bernhard Schima in a case brought by privacy campaigner Maximilian Schrems, examining whether the data of EU citizens should be considered safe if sent to the US in a post-Snowden landscape.
“You might consider closing your Facebook account, if you have one,” Schima told advocate general Yves Bot in a hearing of the case at the European court of justice in Luxembourg.
When asked directly, the commission could not confirm to the court that the Safe Harbour rules provide adequate protection of EU citizens’ data as it currently stands.
The US no longer qualifies
The case, dubbed “the Facebook data privacy case”, concerns the current Safe Harbour framework, which covers the transmission of EU citizens’ data across the Atlantic to the US. Without the framework, it is against EU law to transmit private data outside of the EU. The case collects complaints lodged against Apple, Facebook, Microsoft, Microsoft-owned Skype and Yahoo.
Schrems maintains that companies operating inside the EU should not be allowed to transfer data to the US under Safe Harbour protections – which state that US data protection rules are adequate if information is passed by companies on a “self-certify” basis – because the US no longer qualifies for such a status.
The case argues that the US government’s Prism data collection programme, revealed by Edward Snowden in the NSA files, which sees EU citizens’ data held by US companies passed on to US intelligence agencies, breaches the EU’s Data Protection Directive “adequacy” standard for privacy protection, meaning that the Safe Harbour framework no longer applies.
Poland and a few other member states as well as advocacy group Digital Rights Ireland joined Schrems in arguing that the Safe Harbour framework cannot ensure the protection of EU citizens’ data and therefore is in violation of the two articles of the Data Protection Directive.
The commission, however, argued that Safe Harbour is necessary both politically and economically and that it is still a work in progress. The EC and the Irish data protection watchdog argue that the EC should be left to reform it with a 13-point plan to ensure the privacy of EU citizens’ data.
“There have been a spate of cases from the ECJ and other courts on data privacy and retention showing the judiciary as being more than willing to be a disrupting influence,” said Paula Barrett, partner and data protection expert at law firm Eversheds. “Bringing down the safe harbour mechanism might seem politically and economically ill-conceived, but as the decision of the ECJ in the so-called ‘right to be forgotten’ case seems to reinforce that isn’t a fetter which the ECJ is restrained by.”
An opinion on the Safe Harbour framework from the ECJ is expected by 24 June.
Facebook declined to comment.
You may want to think twice about the new MacBook.
Apple may have big plans for its newly introduced USB-C port, but widely reported vulnerabilities in USB devices spell big trouble ahead, as the following article explains.
The NSA Is Going to Love These USB-C Charging Cables
Thanks to Apple’s new MacBook and Google’s new Chromebook Pixel, USB-C has arrived. A single flavor of cable for all your charging and connectivity needs? Hell yes. But that convenience doesn’t come without a cost; our computers will be more vulnerable than ever to malware attacks, from hackers and surveillance agencies alike.
The trouble with USB-C stems from the fact that the USB standard isn’t very secure. Last year, researchers wrote a piece of malware called BadUSB which attaches to your computer using USB devices like phone chargers or thumb drives. Once connected, the malware basically takes over a computer imperceptibly. The scariest part is that the malware is written directly to the USB controller chip’s firmware, which means that it’s virtually undetectable and so far, unfixable.
Before USB-C, there was a way to keep yourself somewhat safe. As long as you kept tabs on your cables, and never stuck random USB sticks into your computer, you could theoretically keep it clean. But as The Verge points out, the BadUSB vulnerability still hasn’t been fixed in USB-C, and now the insecure port is the slot where you connect your power supply. Heck, it’s shaping up to be the slot where you connect everything. You have no choice but to use it every day. Think about how often you’ve borrowed a stranger’s power cable to get charged up. Asking for a charge from a stranger is like having unprotected sex with someone you picked up at the club.
What the Verge fails to mention however, is that it’s potentially much worse than that. If everyone is using the same power charger, it’s not just renegade hackers posing as creative professionals in coffee shops that you need to worry about. With USB-C, the surveillance establishment suddenly has a huge incentive to figure out how to sneak a compromised cable into your power hole.
It might seem alarmist and paranoid to suggest that the NSA would try to sneak a backdoor into charging cables through manufacturers, except that the agency has been busted trying exactly this kind of scheme. Last year, it was revealed that the NSA paid security firm RSA $10 million to keep a deliberately weakened encryption algorithm as a default in its products. There’s no telling if or when or how the NSA might try to accomplish something similar with USB-C cables, but it stands to reason they would try.
We live in a world where we plug in with abandon, and USB-C’s flexibility is designed to make plugging in easier than ever. Imagine never needing to guess whether or not your aunt’s house will have a charger for your phone. USB-C could become so common that this isn’t even a question. Of course she has one! With that ubiquity and convenience comes a risk that the tech could become exploited—not just by criminals, but also by the government’s data siphoning machine.
Ever wonder what happens when one’s hacked?
Here’s an insightful, chilling account of how one victim attempted to trace the hacker who broke into his online life and Bitcoin wallet.
Anatomy of a Hack
In the early morning hours of October 21st, 2014, Partap Davis lost $3,000. He had gone to sleep just after 2AM in his Albuquerque, New Mexico, home after a late night playing World of Tanks. While he slept, an attacker undid every online security protection he had set up. By the time he woke up, most of his online life had been compromised: two email accounts, his phone, his Twitter, his two-factor authenticator, and most importantly, his bitcoin wallets.
Davis was careful when it came to digital security. He chose strong passwords and didn’t click on bogus links. He used two-factor authentication with Gmail, so when he logged in from a new computer, he had to type in six digits that were texted to his phone, just to make sure it was him. He had made some money with the rise of bitcoin and held onto the bitcoin in three protected wallets, managed by Coinbase, Bitstamp, and BTC-E. He also used two-factor with the Coinbase and BTC-E accounts. Any time he wanted to access them, he had to verify the login with Authy, a two-factor authenticator app on his phone.
Other than the bitcoin, Davis wasn’t that different from the average web user. He makes his living coding, splitting time between building video education software and a patchwork of other jobs. On the weekends, he snowboards, exploring the slopes around Los Alamos. This is his 10th year in Albuquerque; last year, he turned 40.
After the hack, Davis spent weeks tracking down exactly how it had happened, piecing together a picture from access logs and reluctant customer service reps. Along the way, he reached out to The Verge, and we added a few more pieces to the puzzle. We still don’t know everything — in particular, we don’t know who did it — but we know enough to say how they did it, and the points of failure sketch out a map of the most glaring vulnerabilities of our digital lives.
It started with Davis’ email. When he was first setting up an email account, Davis found that Partap@gmail.com was taken, so he chose a Mail.com address instead, setting up Partap@mail.com to forward to a less memorably named Gmail address.
Some time after 2AM on October 21st, that link was broken. Someone broke into Davis’ mail.com account and stopped the forwarding. Suddenly there was a new phone number attached to the account — a burner Android device registered in Florida. There was a new backup email too, email@example.com, which is still the closest thing we have to the attacker’s name.
For simplicity’s sake, we’ll call her Eve.
How did Eve get in? We can’t say for sure, but it’s likely that she used a script to target a weakness in Mail.com’s password reset page. We know such a script existed. For months, users on the site Hackforum had been selling access to a script that reset specific account passwords on Mail.com. It was an old exploit by the time Davis was targeted, and the going rate was $5 per account. It’s unclear how the exploit worked and whether it has been closed in the months since, but it did exactly what Eve needed. Without any authentication, she was able to reset Davis’ password to a string of characters that only she knew.
Eve’s next step was to take over Partap’s phone number. She didn’t have his AT&T password, but she just pretended to have forgotten it, and ATT.com sent along a secure link to firstname.lastname@example.org to reset it. Once inside the account, she talked a customer service rep into forwarding his calls to her Long Beach number. Strictly speaking, there are supposed to be more safeguards required to set up call forwarding, and it’s supposed to take more than a working email address to push it through. But faced with an angry client, customer service reps will often give way, putting user satisfaction over the colder virtues of security.
Once forwarding was set up, all of Davis’ voice calls belonged to Eve. Davis still got texts and emails, but every call was routed straight to the attacker. Davis didn’t realize what had happened until two days later, when his boss complained that Davis wasn’t picking up the phone.
Google and Authy
Next, Eve set her sights on Davis’ Google account. Experts will tell you that two-factor authentication is the best protection against attacks. A hacker might get your password or a mugger might steal your phone, but it’s hard to manage both at once. As long as the phone is a physical object, that system works. But people replace their phones all the time, and they expect to be able to replace the services, too. Accounts have to be reset 24 hours a day, and two-factor services end up looking like just one more account to crack.
Davis hadn’t set up Google’s Authenticator app, the more secure option, but he had two-factor authentication enabled — Google texted him a confirmation code every time he logged in from a new computer. Call forwarding didn’t pass along Davis’ texts, but Eve had a back door: thanks to Google’s accessibility functions, she could ask for the confirmation code to be read out loud over the phone.
Authy should have been harder to break. It’s an app, like Authenticator, and it never left Davis’ phone. But Eve simply reset the app on her phone using a mail.com address and a new confirmation code, again sent by a voice call. A few minutes after 3AM, the Authy account moved under Eve’s control.
It was the same trick that had fooled Google: as long as she had Davis’ email and phone, two-factor couldn’t tell the difference between them. At this point, Eve had more control over Davis’s online life than he did. Aside from texting, all digital roads now led to Eve.
At 3:19AM, Eve reset Davis’s Coinbase account, using Authy and his Mail.com address. At 3:55AM, she transferred the full balance (worth roughly $3,600 at the time) to a burner account she controlled. From there, she made three withdrawals — one 30 minutes after the account was opened, then another 20 minutes later, and another five minutes after that. After that, the money disappeared into a nest of dummy accounts, designed to cover her tracks. Less than 90 minutes after his Mail.com account was first compromised, Davis’ money was gone for good.
Authy might have known something was up. The service keeps an eye out for fishy behavior, and while the company is cagey about what it monitors, it seems likely that an account reset to an out-of-state number in the middle of the night would have raised at least a few red flags. But the number wasn’t from a known fraud center like Russia or Ukraine, even if Eve might have been. It would have seemed even more suspicious when Eve logged into Coinbase from a Canadian IP address. Could they have stopped her then? Modern security systems like Google’s reCAPTCHA often work this way, adding together small indicators until there’s enough evidence to freeze an account — but Coinbase and Authy each saw only half the picture, and neither had enough to justify freezing Davis’ account.
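Risk engines of this sort typically assign each anomaly a weight and freeze the account only when the total crosses a threshold. The signals, weights, and threshold below are invented for illustration; the sketch shows how two services that each see only some of the signals can both stay under the threshold, even though the combined evidence would have crossed it:

```python
# Hypothetical signal weights and freeze threshold, for illustration only.
SIGNAL_WEIGHTS = {
    "password_reset": 2,
    "middle_of_night": 1,
    "out_of_state_number": 2,
    "new_country_ip": 3,
}
FREEZE_THRESHOLD = 6

def risk_score(signals):
    """Sum the weights of the anomalies one service observed."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

# Each service saw only part of the picture:
authy_saw = {"password_reset", "middle_of_night", "out_of_state_number"}
coinbase_saw = {"password_reset", "new_country_ip"}

assert risk_score(authy_saw) < FREEZE_THRESHOLD                   # 5: no freeze
assert risk_score(coinbase_saw) < FREEZE_THRESHOLD                # 5: no freeze
assert risk_score(authy_saw | coinbase_saw) >= FREEZE_THRESHOLD   # 8: would freeze
```

The union of the two views scores well past the threshold, which is exactly the evidence neither company had on its own.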
BTC-e and Bitstamp
When Davis woke up, the first thing he noticed was that his Gmail had mysteriously logged out. The password had changed, and he couldn’t log back in. It took hours on the phone with customer service reps and a faxed copy of his driver’s license before he could convince them he was the real Partap Davis. Once he was back in the account, he saw how deep the damage went. There were reset emails from each service, sketching out a map of the attack. When he finally got into his Coinbase account, he found it empty. Eve had made off with 10 bitcoin, worth more than $3,000 at the time.
What about the two other wallets? There was $2,500 worth of bitcoin in them, with no advertised protections that the Coinbase wallet didn’t have. But when Davis checked, both accounts were still intact. BTC-e had put a 48-hour hold on the account after a password change, giving him time to prove his identity and recover the account. Bitstamp had an even simpler protection: when Eve emailed to reset Davis’ authentication token, they had asked for an image of his driver’s license. Despite all Eve’s access, it was one thing she didn’t have. Davis’ last $2,500 worth of bitcoin was safe.
It’s been two months now since the attack, and Davis has settled back into his life. The last trace of the intrusion is his Twitter account, which stayed hacked for weeks after the other accounts were recovered. @Partap is a short handle, which makes it valuable, so Eve held onto it, putting in a new picture and erasing any trace of Davis. A few days after the attack, she posted a screenshot of a hacked Xfinity account, tagging another handle. The account didn’t belong to Davis, but it belonged to someone. She had moved on to the next target, and was using @partap as a disposable accessory to her next theft, like a stolen getaway car.
Who was behind the attack? Davis has spent weeks looking for her now — whole afternoons wasted on the phone with customer service reps — but he hasn’t gotten any closer. According to account login records, Eve’s computer was piping in from a block of IP addresses in Canada, but she may have used Tor or a VPN service to cover her tracks. Her phone number belonged to an Android device in Long Beach, California, but that phone was most likely a burner. There are only a few tracks to follow, and each one peters out fast. Wherever she is, Eve got away with it.
Why did she choose Partap Davis? She knew about the wallets upfront, we can assume. Why else would she have spent so much time digging through the accounts? She started at the mail.com account too, so we can guess that somehow, Eve came across a list of bitcoin users with Davis’ email address on it. A number of leaked Coinbase customer lists are floating around the internet, although I couldn’t find Davis’ name on any of them. Or maybe his identity came from an equipment manufacturer or a bitcoin retailer. Leaks are commonplace these days, and most go unreported.
Davis is more careful with bitcoin these days, and he’s given up on the mail.com address — but otherwise, not much about his life has changed. Coinbase has given refunds before, but this time they declined, saying the company’s security wasn’t at fault. He filed a report with the FBI, but the bureau doesn’t seem interested in a single bitcoin theft. What else is there to do? He can’t stop using a phone or give up the power to reset an account. There were just so many accounts, so many ways to get in. In the security world, they call this the attack surface. The bigger the surface, the harder it is to defend.
Most importantly, resetting a password is still easy, as Eve discovered over and over again. When a service finally stopped her, it wasn’t an elaborate algorithm or a fancy biometric. Instead, one service was willing to make customers wait 48 hours before authorizing a new password. On a technical level, it’s a simple fix, but a costly one. Companies are continuously balancing the small risk of compromise against the broad benefits of convenience. A few people may lose control of their account, but millions of others are able to keep using the service without a hitch. In the fight between security and convenience, security is simply outgunned.
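The mechanics of such a hold are simple: let the password change go through, but freeze sensitive actions for a fixed window so the reset email can reach the real owner before any money moves. A toy sketch of the idea (class and field names are hypothetical, not BTC-e’s actual implementation):

```python
from datetime import datetime, timedelta

RESET_HOLD = timedelta(hours=48)

class WalletAccount:
    """Toy model: a password reset freezes withdrawals for 48 hours."""

    def __init__(self):
        self.hold_until = None

    def reset_password(self, now):
        # The new password takes effect immediately, but withdrawals stay
        # frozen long enough for the reset email to reach the real owner.
        self.hold_until = now + RESET_HOLD

    def can_withdraw(self, now):
        return self.hold_until is None or now >= self.hold_until

acct = WalletAccount()
t0 = datetime(2015, 2, 1, 3, 0)                           # the 3AM reset
acct.reset_password(t0)
assert not acct.can_withdraw(t0 + timedelta(minutes=36))  # the attacker, minutes later
assert acct.can_withdraw(t0 + timedelta(hours=48))        # after the hold expires
```

The cost is real: every legitimate user who forgets a password also waits 48 hours, which is exactly the convenience trade-off most services decline to make.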
3/5 11:10am ET: Updated to clarify Bitstamp security protocols.
It’s mid-week… thought I should share something light for a change: a comic’s alternative look at privacy and the government takeover of the internet in our daily lives.
Use only end-to-end encryption programs and apps like SpiderOak, Signal, RedPhone and TextSecure, according to Snowden – see article below.
And never ever anything like Dropbox, Facebook and Google, as he has previously stressed (watch this video clip):
The apps Edward Snowden recommends to protect your privacy online
Mar 05, 2015 9:57 AM ET
Andrea Bellemare, CBC News
There are a host of free, easy-to-use apps and programs that can help protect your privacy online, and if everybody uses them it can provide a sort of “herd immunity,” said Edward Snowden in a live video chat from Russia on Wednesday.
Snowden appeared via teleconference in an event hosted by Ryerson University and Canadian Journalists for Free Expression, to launch the CJFE’s online database that compiles all of the publicly released classified documents the former U.S. National Security Agency contractor leaked. In response to a Twitter question, Snowden expanded on what tools he recommends for privacy.
“I hardly touch communications for anything that could be considered sensitive just because it’s extremely risky,” said Snowden.
But Snowden did go on to outline a few free programs that can help protect your privacy.
“You need to ensure your communications are protected in transit,” said Snowden. “It’s these sort of transit interceptions that are the cheapest, that are the easiest, and they scale the best.”
Snowden recommended using programs and apps that provide end-to-end encryption for users, which means the computer on each end of the transaction can access the data, but not any device in between, and the information isn’t stored unencrypted on a third-party server.
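In other words, encryption and decryption happen only at the endpoints; the server relays and stores ciphertext it cannot read. The sketch below illustrates that property with a deliberately toy stream cipher built from SHA-256 (not a real cipher, and not how Signal or SpiderOak actually work); the point is that without the key, the stored blob is opaque:

```python
import hashlib
import secrets

def _keystream(key, nonce, length):
    # Toy keystream from SHA-256 in counter mode: for illustration only,
    # never for real data.
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        block += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce, bytes(p ^ k for p, k in zip(plaintext, stream))

def decrypt(key, nonce, ciphertext):
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))

# Only the two endpoints hold the key; the server stores (nonce, ciphertext).
key = hashlib.sha256(b"shared between sender and recipient").digest()
nonce, stored_on_server = encrypt(key, b"meet at noon")
assert stored_on_server != b"meet at noon"            # the server sees only ciphertext
assert decrypt(key, nonce, stored_on_server) == b"meet at noon"
```

A provider that never holds the key, SpiderOak’s model as Snowden describes it below, simply has nothing readable to sell or to hand over.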
“SpiderOak doesn’t have the encryption key to see what you’ve uploaded,” said Snowden, who recommends using it instead of a file-sharing program like Dropbox. “You don’t have to worry about them selling your information to third parties, you don’t have to worry about them providing that information to governments.”
“For the iPhone, there’s a program called Signal, by Open Whisper Systems, it’s very good,” said Snowden.
He also recommended RedPhone, which allows Android users to make encrypted phone calls, and TextSecure, a private messaging app by Open Whisper Systems.
“I wouldn’t trust your lives with any of these things, they don’t protect you from metadata association but they do strongly protect your content from precisely this type of in-transit interception,” said Snowden.
He emphasized that encryption is for everyone, not just people with extremely sensitive information.
“The more you do this, the more you get your friends, your family, your associates to adopt these free and easy-to-use technologies, the less stigma is associated with people who are using encrypted communications who really need them,” said Snowden. “We’re creating a kind of herd immunity that helps protect everybody, everywhere.”
Sending an email message is like sending a postcard. That’s the message Hillary Clinton probably now wishes she had heard earlier.
Andy Yen, a scientist at CERN – the European Organization for Nuclear Research – co-founded ProtonMail, an encrypted email startup based in Geneva, Switzerland. As he explained in this TED Talk, encryption can be made easy enough for everyone to use, keeping all email private.
But curiously, it seems so much like PGP.
“Those kinds of restrictive practices I think would ironically hurt the Chinese economy over the long term because I don’t think there is any US or European firm, any international firm, that could credibly get away with that wholesale turning over of data, personal data, over to a government.”
That’s a quote from Obama reported in The Guardian (see article below).
Oh great, so Obama actually understood the consequences of government gaining backdoors into encryption? He should give the same advice to his NSA director Mike Rogers who somehow struggled when asked about the issue recently.
Building backdoors into encryption isn’t only bad for China, Mr President
Wednesday 4 March 2015 16.15 GMT
Want to know why forcing tech companies to build backdoors into encryption is a terrible idea? Look no further than President Obama’s stark criticism of China’s plan to do exactly that on Tuesday. If only he would tell the FBI and NSA the same thing.
In a stunningly short-sighted move, the FBI – and more recently the NSA – have been pushing for a new US law that would force tech companies like Apple and Google to hand over the encryption keys or build backdoors into their products and tools so the government would always have access to our communications. It was only a matter of time before other governments jumped on the bandwagon, and China wasted no time in demanding the same from tech companies a few weeks ago.
As President Obama himself described to Reuters, China has proposed an expansive new “anti-terrorism” bill that “would essentially force all foreign companies, including US companies, to turn over to the Chinese government mechanisms where they can snoop and keep track of all the users of those services.”
Obama continued: “Those kinds of restrictive practices I think would ironically hurt the Chinese economy over the long term because I don’t think there is any US or European firm, any international firm, that could credibly get away with that wholesale turning over of data, personal data, over to a government.”
Bravo! Of course these are the exact arguments for why it would be a disaster for US government to force tech companies to do the same. (Somehow Obama left that part out.)
As Yahoo’s top security executive Alex Stamos told NSA director Mike Rogers in a public confrontation last week, building backdoors into encryption is like “drilling a hole into a windshield.” Even if it’s technically possible to produce the flaw – and we, for some reason, trust the US government never to abuse it – other countries will inevitably demand access for themselves. Companies will no longer be in a position to say no, and even if they did, intelligence services would find the backdoor unilaterally – or just steal the keys outright.
For an example of how this works, look no further than last week’s Snowden revelation that the UK’s intelligence service and the NSA stole the encryption keys for millions of Sim cards used by many of the world’s most popular cell phone providers. It’s happened many times before, too. As security expert Bruce Schneier has documented with numerous examples, “Back-door access built for the good guys is routinely used by the bad guys.”
Stamos repeatedly (and commendably) pushed the NSA director for an answer on what happens when China or Russia also demand backdoors from tech companies, but Rogers didn’t have an answer prepared at all. He just kept repeating “I think we can work through this”. As Stamos insinuated, maybe Rogers should ask his own staff why we actually can’t work through this, because virtually every technologist agrees backdoors just cannot be secure in practice.
(If you want to further understand the details behind the encryption vs. backdoor debate and how what the NSA director is asking for is quite literally impossible, read this excellent piece by surveillance expert Julian Sanchez.)
It’s downright bizarre that the US government has been warning of the grave cybersecurity risks the country faces while, at the very same time, arguing that we should pass a law that would weaken cybersecurity and put every single citizen at more risk of having their private information stolen by criminals, foreign governments, and our own.
Forcing backdoors would be just as disastrous for the US economy as it would be for China’s. US tech companies – which already have suffered billions of dollars of losses overseas because of consumer distrust over their relationships with the NSA – would lose all credibility with users around the world if the FBI and NSA succeed with their plan.
The White House is supposedly coming out with an official policy on encryption sometime this month, according to the New York Times – but the President can save himself a lot of time and just apply his comments about China to the US government. If he knows backdoors in encryption are bad for cybersecurity, privacy, and the economy, why is there even a debate?
This is bizarre (see article below), but it’s a good sign that what Mega offers in encrypted communications is the real deal. The authorities are certainly not impressed, hence the pressure on credit card companies to force PayPal to block out Mega, as they did previously with WikiLeaks.
BUT don’t forget Kim Dotcom’s newly launched end-to-end encrypted voice calling service “MegaChat” comes in both free and paid versions – see my earlier piece on how to register for MegaChat.
Under U.S. Pressure, PayPal Nukes Mega For Encrypting Files
on February 27, 2015
After coming under intense pressure PayPal has closed the account of cloud-storage service Mega. According to the company, SOPA proponent Senator Patrick Leahy personally pressured Visa and Mastercard who in turn called on PayPal to terminate the account. Bizarrely, Mega’s encryption is being cited as a key problem.
During September 2014, the Digital Citizens Alliance and NetNames teamed up to publish a brand new report. Titled ‘Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,’ it offered insight into the finances of some of the world’s most popular cyberlocker sites.
The report had its issues, however. While many of the sites covered might at best be considered dubious, the inclusion of Mega.co.nz – the most scrutinized file-hosting startup in history – was a real head scratcher. Mega conforms with all relevant laws and responds quickly whenever content owners need something removed. By any standard the company lives up to the requirements of the DMCA.
“We consider the report grossly untrue and highly defamatory of Mega,” Mega CEO Graham Gaylard told TF at the time. But now, just five months on, Mega’s inclusion in the report has come back to bite the company in a big way.
Speaking via email with TorrentFreak this morning, Gaylard highlighted the company’s latest battle, one which has seen the company become unable to process payments from customers. It’s all connected with the NetNames report and has even seen the direct involvement of a U.S. politician.
According to Mega, following the publication of the report last September, SOPA and PIPA proponent Senator Patrick Leahy (Vermont, Chair Senate Judiciary Committee) put Visa and MasterCard under pressure to stop providing payment services to the ‘rogue’ companies listed in the NetNames report.
Following Leahy’s intervention, Visa and MasterCard then pressured PayPal to cease providing payment processing services to MEGA. As a result, Mega is no longer able to process payments.
“It is very disappointing to say the least. PayPal has been under huge pressure,” Gaylard told TF.
The company did not go without a fight, however.
“MEGA provided extensive statistics and other evidence showing that MEGA’s business is legitimate and legally compliant. After discussions that appeared to satisfy PayPal’s queries, MEGA authorised PayPal to share that material with Visa and MasterCard. Eventually PayPal made a non-negotiable decision to immediately terminate services to MEGA,” the company explains.
What makes the situation more unusual is that PayPal reportedly apologized to Mega for the withdrawal while acknowledging that the company’s business is indeed legitimate.
However, PayPal also advised that Mega’s unique selling point – its end-to-end encryption – was a key concern for the processor.
“MEGA has demonstrated that it is as compliant with its legal obligations as USA cloud storage services operated by Google, Microsoft, Apple, Dropbox, Box, Spideroak etc, but PayPal has advised that MEGA’s ‘unique encryption model’ presents an insurmountable difficulty,” Mega explains.
As of now, Mega is unable to process payments but is working on finding a replacement. In the meantime the company is waiving all storage limits and will not suspend any accounts for non-payment. All accounts have had their subscriptions extended by two months, free of charge.
Mega indicates that it will ride out the storm and will not bow to pressure nor compromise the privacy of its users.
“MEGA supplies cloud storage services to more than 15 million registered customers in more than 200 countries. MEGA will not compromise its end-to-end user controlled encryption model and is proud to not be part of the USA business network that discriminates against legitimate international businesses,” the company concludes.
Photo (above) credit: US-China Perception Monitor.
It’s not as if the NSA hasn’t been warned, and China may just be the first of many to come.
The United States Is Angry That China Wants Crypto Backdoors, Too
February 27, 2015 // 03:44 PM EST
When the US demands technology companies install backdoors for law enforcement, it’s okay. But when China demands the same, it’s a whole different story.
The Chinese government is about to pass a new counterterrorism law that would require tech companies operating in the country to turn over encryption keys and include specially crafted code in their software and hardware so that Chinese authorities can defeat security measures at will.
Technologists and cryptographers have long warned that you can’t design a secure system that will enable law enforcement—and only law enforcement—to bypass the encryption. The nature of a backdoor is that it is also a vulnerability, and if discovered, hackers or foreign governments might be able to exploit it, too.
Yet, over the past few months, several US government officials, including the FBI director James Comey, outgoing US Attorney General Eric Holder, and NSA Director Mike Rogers, have all suggested that companies such as Apple and Google should give law enforcement agencies special access to their users’ encrypted data—while somehow offering strong encryption for their users at the same time.
Their fear is that cops and feds will “go dark,” an FBI term for a potential scenario where encryption makes it impossible to intercept criminals’ communications.
But in light of China’s new proposals, some think the US’ own position is a little ironic.
“You can’t have it both ways,” Trevor Timm, the co-founder and the executive director of the Freedom of the Press Foundation, told Motherboard. “If the US forces tech companies to install backdoors in encryption, then tech companies will have no choice but to go along with China when they demand the same power.”
He’s not the only one to think the US government might end up regretting its stance.
Someday US officials will look back and realize how much global damage they’ve enabled with their silly requests for key escrow.
— Matthew Green (@matthew_d_green) February 27, 2015
Matthew Green, a cryptography professor at Johns Hopkins University, tweeted that someday US officials will “realize how much damage they’ve enabled” with their “silly requests” for backdoors.
Ironically, the US government sent a letter to China expressing concern about its new law. “The Administration is aggressively working to have China walk back from these troubling regulations,” US Trade Representative Michael Froman said in a statement.
A White House spokesperson did not respond to a request for comment from Motherboard.
“It’s stunningly shortsighted for the FBI and NSA not to realize this,” Timm added. “By demanding backdoors, these US government agencies are putting everyone’s cybersecurity at risk.”
In an oft-cited example of “if you build it, they will come,” hackers exploited a system designed to let police tap phones to spy on more than a hundred Greek cellphones, including that of the prime minister.
At the time, Steven Bellovin, a computer science professor at Columbia University, wrote that this incident shows how “built-in wiretap facilities and the like are really dangerous, and are easily abused.”
That hasn’t stopped others from asking, though. Several countries, including India, Kuwait and the UAE, have asked BlackBerry to include a backdoor in its devices so that authorities could access encrypted communications. And a leaked document in 2013 revealed that BlackBerry’s lawful interception system in India was “ready for use.”