It’s simple. Whenever Bruce Schneier speaks, listen.
How we sold our souls – and more – to the internet giants
Sunday 17 May 2015 11.00 BST
Last year, when my refrigerator broke, the repair man replaced the computer that controls it. I realised that I had been thinking about the refrigerator backwards: it’s not a refrigerator with a computer, it’s a computer that keeps food cold. Just like that, everything is turning into a computer. Your phone is a computer that makes calls. Your car is a computer with wheels and an engine. Your oven is a computer that cooks lasagne. Your camera is a computer that takes pictures. Even our pets and livestock are now regularly chipped; my cat could be considered a computer that sleeps in the sun all day.
Computers are being embedded into all sorts of products that connect to the internet. Nest, which Google purchased last year for more than $3bn, makes an internet-enabled thermostat. You can buy a smart air conditioner that learns your preferences and maximises energy efficiency. Fitness tracking devices, such as Fitbit or Jawbone, collect information about your movements, awake and asleep, and use that to analyse both your exercise and sleep habits. Many medical devices are starting to be internet-enabled, collecting and reporting a variety of biometric data. There are – or will be soon – devices that continually measure our vital signs, moods and brain activity.
This year, we have had two surprising stories of technology monitoring our activity: Samsung televisions that listen to conversations in the room and send them elsewhere for transcription – just in case someone is telling the TV to change the channel – and a Barbie that records your child’s questions and sells them to third parties.
All these computers produce data about what they’re doing and a lot of it is surveillance data. It’s the location of your phone, who you’re talking to and what you’re saying, what you’re searching and writing. It’s your heart rate. Corporations gather, store and analyse this data, often without our knowledge, and typically without our consent. Based on this data, they draw conclusions about us that we might disagree with or object to and that can affect our lives in profound ways. We may not like to admit it, but we are under mass surveillance.
Internet surveillance has evolved into a shockingly extensive, robust and profitable surveillance architecture. You are being tracked pretty much everywhere you go, by many companies and data brokers: 10 different companies on one website, a dozen on another. Facebook tracks you on every site with a Facebook Like button (whether you’re logged in to Facebook or not), while Google tracks you on every site that has a Google Plus +1 button or that uses Google Analytics to monitor its own web traffic.
Most of the companies tracking you have names you’ve never heard of: Rubicon Project, AdSonar, Quantcast, Undertone, Traffic Marketplace. If you want to see who’s tracking you, install one of the browser plug-ins that let you monitor cookies. I guarantee you will be startled. One reporter discovered that 105 different companies tracked his internet use during one 36-hour period. In 2010, the seemingly innocuous site Dictionary.com installed more than 200 tracking cookies on your browser when you visited.
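If you export your browser’s cookies, you can do the same tally yourself. Here is a rough sketch in Python – the sample data is invented for illustration, but the tab-separated Netscape cookies.txt format is the standard one most browsers and tools can export:

```python
# Hypothetical sketch: count the distinct domains that have set cookies in a
# browser, from an exported Netscape-format cookies.txt file. The sample
# entries below are invented for illustration.
SAMPLE_COOKIES_TXT = """\
# Netscape HTTP Cookie File
.quantcast.com\tTRUE\t/\tFALSE\t1893456000\tmc\tabc123
.rubiconproject.com\tTRUE\t/\tFALSE\t1893456000\tkhaos\txyz789
.quantcast.com\tTRUE\t/\tFALSE\t1893456000\td\tqwe456
"""

def tracking_domains(cookies_txt: str) -> set[str]:
    """Return the set of distinct domains that appear in a cookies.txt dump."""
    domains = set()
    for line in cookies_txt.splitlines():
        if not line or line.startswith("#"):
            continue  # skip the header comment and blank lines
        domain = line.split("\t")[0].lstrip(".")  # first field is the domain
        domains.add(domain)
    return domains

print(sorted(tracking_domains(SAMPLE_COOKIES_TXT)))
```

Run against a real export after a day of ordinary browsing, a tally like this is what produces the startling numbers the reporter found.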
It’s no different on your smartphone. The apps there track you as well. They track your location and sometimes download your address book, calendar, bookmarks and search history. In 2013, the rapper Jay Z and Samsung teamed up to offer people who downloaded an app the ability to hear the new Jay Z album before release. The app required that users give Samsung consent to view all accounts on the phone, track its location and see who the user was talking to. The Angry Birds game even collects location data when you’re not playing. It’s less Big Brother and more hundreds of tittle-tattle little brothers.
Most internet surveillance data is inherently anonymous, but companies are increasingly able to correlate the information gathered with other information that positively identifies us. You identify yourself willingly to lots of internet services. Often you do this with only a username, but increasingly usernames can be tied to your real name. Google tried to enforce this with its “real name policy”, which required users to register for Google Plus with their legal names, until it rescinded that policy in 2014. Facebook pretty much demands real names. Whenever you use your credit card number to buy something, your real identity is tied to any cookies set by companies involved in that transaction. And any browsing you do on your smartphone is tied to you as the phone’s owner, although the website might not know it.
Surveillance is the business model of the internet for two primary reasons: people like free and people like convenient. The truth is, though, that people aren’t given much of a choice. It’s either surveillance or nothing and the surveillance is conveniently invisible so you don’t have to think about it. And it’s all possible because laws have failed to keep up with changes in business practices.
In general, privacy is something people tend to undervalue until they don’t have it anymore. Arguments such as “I have nothing to hide” are common, but aren’t really true. People living under constant surveillance quickly realise that privacy isn’t about having something to hide. It’s about individuality and personal autonomy. It’s about being able to decide who to reveal yourself to and under what terms. It’s about being free to be an individual and not having to constantly justify yourself to some overseer.
This tendency to undervalue privacy is exacerbated by companies deliberately making sure that privacy is not salient to users. When you log on to Facebook, you don’t think about how much personal information you’re revealing to the company; you chat with your friends. When you wake up in the morning, you don’t think about how you’re going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.
But by accepting surveillance-based business models, we hand over even more power to the powerful. Google controls two-thirds of the US search market. Almost three-quarters of all internet users have Facebook accounts. Amazon controls about 30% of the US book market, and 70% of the ebook market. Comcast owns about 25% of the US broadband market. These companies have enormous power and control over us simply because of their economic position.
Our relationship with many of the internet companies we rely on is not a traditional company-customer relationship. That’s primarily because we’re not customers – we’re products those companies sell to their real customers. The companies are analogous to feudal lords and we are their vassals, peasants and – on a bad day – serfs. We are tenant farmers for these companies, working on their land by producing data that they in turn sell for profit.
Yes, it’s a metaphor, but it often really feels like that. Some people have pledged allegiance to Google. They have Gmail accounts, use Google Calendar and Google Docs and have Android phones. Others have pledged similar allegiance to Apple. They have iMacs, iPhones and iPads and let iCloud automatically synchronise and back up everything. Still others let Microsoft do it all. Some of us have pretty much abandoned email altogether for Facebook, Twitter and Instagram. We might prefer one feudal lord to the others. We might distribute our allegiance among several of these companies or studiously avoid a particular one we don’t like. Regardless, it’s becoming increasingly difficult to avoid pledging allegiance to at least one of them.
After all, customers get a lot of value out of having feudal lords. It’s simply easier and safer for someone else to hold our data and manage our devices. We like having someone else take care of our device configurations, software management, and data storage. We like it when we can access our email anywhere, from any computer, and we like it that Facebook just works, from any device, anywhere. We want our calendar entries to appear automatically on all our devices. Cloud storage sites do a better job of backing up our photos and files than we can manage by ourselves; Apple has done a great job of keeping malware out of its iPhone app store. We like automatic security updates and automatic backups; the companies do a better job of protecting our devices than we ever did. And we’re really happy when, after we lose a smartphone and buy a new one, all of our data reappears on it at the push of a button.
In this new world of computing, we’re no longer expected to manage our computing environment. We trust the feudal lords to treat us well and protect us from harm. It’s all a result of two technological trends.
The first is the rise of cloud computing. Basically, our data is no longer stored and processed on our computers. That all happens on servers owned by many different companies. The result is that we no longer control our data. These companies access our data—both content and metadata—for whatever profitable purpose they want. They have carefully crafted terms of service that dictate what sorts of data we can store on their systems, and can delete our entire accounts if they believe we violate them. And they turn our data over to law enforcement without our knowledge or consent. Potentially even worse, our data might be stored on computers in a country whose data protection laws are less than rigorous.
The second trend is the rise of user devices that are managed closely by their vendors: iPhones, iPads, Android phones, Kindles, ChromeBooks, and the like. The result is that we no longer control our computing environment. We have ceded control over what we can see, what we can do, and what we can use. Apple has rules about what software can be installed on iOS devices. You can load your own documents onto your Kindle, but Amazon is able to delete books it has already sold you. In 2009, Amazon automatically deleted some editions of George Orwell’s Nineteen Eighty-Four from users’ Kindles because of a copyright issue. I know, you just couldn’t write this stuff any more ironically.
It’s not just hardware. It’s getting hard to just buy a piece of software and use it on your computer in any way you like. Increasingly, vendors are moving to a subscription model—Adobe did that with Creative Cloud in 2013—that gives the vendor much more control. Microsoft hasn’t yet given up on a purchase model, but is making its MS Office subscription very attractive. And Office 365’s option of storing your documents in the Microsoft cloud is hard to turn off. Companies are pushing us in this direction because it makes us more profitable as customers or users.
Given current laws, trust is our only option. There are no consistent or predictable rules. We have no control over the actions of these companies. I can’t negotiate the rules regarding when Yahoo will access my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello. I don’t even know the cloud providers to whom those companies have outsourced their infrastructures. If any of those companies delete my data, I don’t have the right to demand it back. If any of those companies give the government access to my data, I have no recourse. And if I decide to abandon those services, chances are I can’t easily take my data with me.
Political scientist Henry Farrell observed: “Much of our life is conducted online, which is another way of saying that much of our life is conducted under rules set by large private businesses, which are subject neither to much regulation nor much real market competition.”
The common defence is something like “business is business”. No one is forced to join Facebook or use Google search or buy an iPhone. Potential customers are choosing to enter into these quasi-feudal user relationships because of the enormous value they receive from them. If they don’t like it, goes the argument, they shouldn’t do it.
This advice is not practical. It’s not reasonable to tell people that if they don’t like their data being collected, they shouldn’t email, shop online, use Facebook or have a mobile phone. I can’t imagine students getting through school anymore without an internet search or Wikipedia, much less finding a job afterwards. These are the tools of modern life. They’re necessary to a career and a social life. Opting out just isn’t a viable choice for most of us, most of the time; it violates what have become very real norms of contemporary life.
Right now, choosing among providers is not a choice between surveillance or no surveillance, but only a choice of which feudal lords get to spy on you. This won’t change until we have laws to protect both us and our data from these sorts of relationships. Data is power and those that have our data have power over us. It’s time for government to step in and balance things out.
Adapted from Data and Goliath by Bruce Schneier, published by Norton Books. To order a copy for £17.99 go to bookshop.theguardian.com. Bruce Schneier is a security technologist and CTO of Resilient Systems Inc. He blogs at schneier.com, and tweets at @schneierblog
Some thoughts for the weekend… listen especially to the first six and a half minutes of this clip below about the conspiracy theories surrounding the recent mysterious death of Dave Goldberg, the husband of Facebook Chief Operating Officer Sheryl Sandberg – the “Facebook-NSA Queen”.
Are you wondering why this “problem” (data overload – see article below) did not happen earlier…?
NSA is so overwhelmed with data, it’s no longer effective, says whistleblower
Summary: One of the agency’s first whistleblowers says the NSA is taking in too much data for it to handle, which can have disastrous — if not deadly — consequences.
By Zack Whittaker for Zero Day | April 30, 2015 — 14:29 GMT (22:29 GMT+08:00)
NEW YORK — A former National Security Agency official turned whistleblower has spent almost a decade and a half in civilian life. And he says he’s still “pissed” by what he’s seen leak in the past two years.
In a lunch meeting hosted by Contrast Security founder Jeff Williams on Wednesday, William Binney, a former NSA official who spent more than three decades at the agency, said the US government’s mass surveillance programs have become so engorged with data that they are no longer effective, losing vital intelligence in the fray.
That, he said, can — and has — led to terrorist attacks succeeding.
Binney said that an analyst today can run one simple query across the NSA’s various databases, only to become immediately overloaded with information. With about four billion people — around two-thirds of the world’s population — under the NSA and partner agencies’ watchful eyes, according to his estimates, there is too much data being collected.
“That’s why they couldn’t stop the Boston bombing, or the Paris shootings, because the data was all there,” said Binney. Because the agency isn’t carefully and methodically setting its tools up for smart data collection, that leaves analysts to search for a needle in a haystack.
“The data was all there… the NSA is great at going back over it forensically for years to see what they were doing before that,” he said. “But that doesn’t stop it.”
Binney called this a “bulk data failure” — in that the NSA programs, leaked by Edward Snowden, are collecting too much for the agency to process. He said the problem runs deeper across law enforcement and other federal agencies, like the FBI, the CIA, and the Drug Enforcement Administration (DEA), which all have access to NSA intelligence.
Binney left the NSA a month after the September 11 attacks in New York City in 2001, days after controversial counter-terrorism legislation — the Patriot Act — was enacted in the wake of the attacks. Binney stands jaded by his experience leaving the shadowy eavesdropping agency, but impassioned for the job he once had. He left after a program he helped develop was scrapped three weeks prior to September 11, replaced by a system he said was more expensive and more intrusive. Snowden has said that Binney’s case in part inspired him to leak thousands of classified documents to journalists.
Since then, the NSA has ramped up its intelligence gathering mission to indiscriminately “collect it all.”
Binney said the NSA today is less interested in phone records — such as who calls whom, when, and for how long — than in the content of communications, as the Snowden disclosures have shown, even though the Obama administration calls the phone records program a “critical national security tool.”
Binney said he estimated that a “maximum” of 72 companies were participating in the bulk records collection program — including Verizon, but said it was a drop in the ocean. He also called PRISM, the clandestine surveillance program that grabs data from nine named Silicon Valley giants, including Apple, Google, Facebook, and Microsoft, just a “minor part” of the data collection process.
“The Upstream program is where the vast bulk of the information was being collected,” said Binney, talking about how the NSA tapped undersea fiber optic cables. With help from its British counterparts at GCHQ, the NSA is able to “buffer” more than 21 petabytes a day.
Binney said the “collect it all” mantra now may be the norm, but it’s expensive and ineffective.
“If you have to collect everything, there’s an ever increasing need for more and more budget,” he said. “That means you can build your empire.”
They say you never leave the intelligence community. Once you’re a spy, you’re always a spy — it’s a job for life, with few exceptions. One of those is blowing the whistle, which he did. Since then, he has spent his retirement lobbying for change and reform in industry and in Congress.
“They’re taking away half of the constitution in secret,” said Binney. “If they want to change the constitution, there’s a way to do that — and it’s in the constitution.”
An NSA spokesperson did not immediately comment.
Have to feel sorry for Snowden here…
Here’s an exclusive story (below) from Al Jazeera that neither Google nor the NSA wants you to read.
Exclusive: Emails reveal close Google relationship with NSA
National Security Agency head and Internet giant’s executives have coordinated through high-level policy discussions
May 6, 2014 5:00AM ET
by Jason Leopold
Email exchanges between National Security Agency Director Gen. Keith Alexander and Google executives Sergey Brin and Eric Schmidt suggest a far cozier working relationship between some tech firms and the U.S. government than was implied by Silicon Valley brass after last year’s revelations about NSA spying.
Disclosures by former NSA contractor Edward Snowden about the agency’s vast capability for spying on Americans’ electronic communications prompted a number of tech executives whose firms cooperated with the government to insist they had done so only when compelled by a court of law.
But Al Jazeera has obtained two sets of email communications dating from a year before Snowden became a household name that suggest not all cooperation was under pressure.
On the morning of June 28, 2012, an email from Alexander invited Schmidt to attend a four-hour-long “classified threat briefing” on Aug. 8 at a “secure facility in proximity to the San Jose, CA airport.”
“The meeting discussion will be topic-specific, and decision-oriented, with a focus on Mobility Threats and Security,” Alexander wrote in the email, obtained under a Freedom of Information Act (FOIA) request, the first of dozens of communications between the NSA chief and Silicon Valley executives that the agency plans to turn over.
Alexander, Schmidt and other industry executives met earlier in the month, according to the email. But Alexander wanted another meeting with Schmidt and “a small group of CEOs” later that summer because the government needed Silicon Valley’s help.
“About six months ago, we began focusing on the security of mobility devices,” Alexander wrote. “A group (primarily Google, Apple and Microsoft) recently came to agreement on a set of core security principles. When we reach this point in our projects we schedule a classified briefing for the CEOs of key companies to provide them a brief on the specific threats we believe can be mitigated and to seek their commitment for their organization to move ahead … Google’s participation in refinement, engineering and deployment of the solutions will be essential.”
Jennifer Granick, director of civil liberties at Stanford Law School’s Center for Internet and Society, said she believes information sharing between industry and the government is “absolutely essential” but “at the same time, there is some risk to user privacy and to user security from the way the vulnerability disclosure is done.”
The challenge facing government and industry was to enhance security without compromising privacy, Granick said. The emails between Alexander and Google executives, she said, show “how informal information sharing has been happening within this vacuum where there hasn’t been a known, transparent, concrete, established methodology for getting security information into the right hands.”
The classified briefing cited by Alexander was part of a secretive government initiative known as the Enduring Security Framework (ESF), and his email provides some rare information about what the ESF entails, the identities of some participant tech firms and the threats they discussed.
Alexander explained that the deputy secretaries of the Department of Defense, Homeland Security and “18 US CEOs” launched the ESF in 2009 to “coordinate government/industry actions on important (generally classified) security issues that couldn’t be solved by individual actors alone.”
“For example, over the last 18 months, we (primarily Intel, AMD [Advanced Micro Devices], HP [Hewlett-Packard], Dell and Microsoft on the industry side) completed an effort to secure the BIOS of enterprise platforms to address a threat in that area.”
“BIOS” is an acronym for “basic input/output system,” the system software that initializes the hardware in a personal computer before the operating system starts up. NSA cyberdefense chief Debora Plunkett in December disclosed that the agency had thwarted a “BIOS plot” by a “nation-state,” identified as China, to brick U.S. computers. That plot, she said, could have destroyed the U.S. economy. “60 Minutes,” which broke the story, reported that the NSA worked with unnamed “computer manufacturers” to address the BIOS software vulnerability.
But some cybersecurity experts questioned the scenario outlined by Plunkett.
“There is probably some real event behind this, but it’s hard to tell, because we don’t have any details,” wrote Robert Graham, CEO of the penetration-testing firm Errata Security in Atlanta, on his blog in December. “It’s completely false in the message it is trying to convey. What comes out is gibberish, as any technical person can confirm.”
And by enlisting the NSA to shore up their defenses, those companies may have made themselves more vulnerable to the agency’s efforts to breach them for surveillance purposes.
“I think the public should be concerned about whether the NSA was really making its best efforts, as the emails claim, to help secure enterprise BIOS and mobile devices and not holding the best vulnerabilities close to their chest,” said Nate Cardozo, a staff attorney with the Electronic Frontier Foundation’s digital civil liberties team.
He doesn’t doubt that the NSA was trying to secure enterprise BIOS, but he suggested that the agency, for its own purposes, was “looking for weaknesses in the exact same products they’re trying to secure.”
The NSA “has no business helping Google secure its facilities from the Chinese and at the same time hacking in through the back doors and tapping the fiber connections between Google base centers,” Cardozo said. “The fact that it’s the same agency doing both of those things is in obvious contradiction and ridiculous.” He recommended dividing offensive and defensive functions between two agencies.
Two weeks after the “60 Minutes” broadcast, the German magazine Der Spiegel, citing documents obtained by Snowden, reported that the NSA inserted back doors into BIOS, doing exactly what Plunkett accused a nation-state of doing during her interview.
Google’s Schmidt was unable to attend the mobility security meeting in San Jose in August 2012.
“General Keith.. so great to see you.. !” Schmidt wrote. “I’m unlikely to be in California that week so I’m sorry I can’t attend (will be on the east coast). Would love to see you another time. Thank you !” Since the Snowden disclosures, Schmidt has been critical of the NSA and said its surveillance programs may be illegal.
Army Gen. Martin E. Dempsey, chairman of the Joint Chiefs of Staff, did attend that briefing. Foreign Policy reported a month later that Dempsey and other government officials — no mention of Alexander — were in Silicon Valley “picking the brains of leaders throughout the valley and discussing the need to quickly share information on cyber threats.” Foreign Policy noted that the Silicon Valley executives in attendance belonged to the ESF. The story did not mention that mobility threats and security were the top agenda item, along with a classified threat briefing.
A week after the gathering, Dempsey said during a Pentagon press briefing, “I was in Silicon Valley recently, for about a week, to discuss vulnerabilities and opportunities in cyber with industry leaders … They agreed — we all agreed on the need to share threat information at network speed.”
Google co-founder Sergey Brin had attended previous meetings of the ESF group, but according to Alexander’s email a scheduling conflict kept him from the Aug. 8 briefing in San Jose as well; it is unknown whether Google sent someone else.
A few months earlier, Alexander had emailed Brin to thank him for Google’s participation in the ESF.
“I see ESF’s work as critical to the nation’s progress against the threat in cyberspace and really appreciate Vint Cerf [Google’s vice president and chief Internet evangelist], Eric Grosse [vice president of security engineering] and Adrian Ludwig’s [lead engineer for Android security] contributions to these efforts during the past year,” Alexander wrote in a Jan. 13, 2012, email.
“You recently received an invitation to the ESF Executive Steering Group meeting, which will be held on January 19, 2012. The meeting is an opportunity to recognize our 2012 accomplishments and set direction for the year to come. We will be discussing ESF’s goals and specific targets for 2012. We will also discuss some of the threats we see and what we are doing to mitigate those threats … Your insights, as a key member of the Defense Industrial Base, are valuable to ensure ESF’s efforts have measurable impact.”
A Google representative declined to answer specific questions about Brin’s and Schmidt’s relationship with Alexander or about Google’s work with the government.
“We work really hard to protect our users from cyberattacks, and we always talk to experts — including in the U.S. government — so we stay ahead of the game,” the representative said in a statement to Al Jazeera. “It’s why Sergey attended this NSA conference.”
Brin responded to Alexander the following day, even though the head of the NSA hadn’t used the appropriate email address when contacting the Google co-founder.
“Hi Keith, looking forward to seeing you next week. FYI, my best email address to use is [redacted],” Brin wrote. “The one your email went to — firstname.lastname@example.org — I don’t really check.”
Facebook ‘tracks all visitors, breaching EU law’
Exclusive: People without Facebook accounts, logged out users, and EU users who have explicitly opted out of tracking are all being tracked, report says
Facebook tracks the web browsing of everyone who visits a page on its site even if the user does not have an account or has explicitly opted out of tracking in the EU, extensive research commissioned by the Belgian data protection agency has revealed.
The researchers now claim that Facebook tracks computers of users without their consent, whether they are logged in to Facebook or not, and even if they are not registered users of the site or explicitly opt out in Europe. Facebook tracks users in order to target advertising.
The issue revolves around Facebook’s use of its social plugins such as the “Like” button, which has been placed on more than 13m sites including health and government sites.
Facebook places tracking cookies on users’ computers if they visit any page on the facebook.com domain, including fan pages or other pages that do not require a Facebook account to visit.
When a user visits a third-party site that carries one of Facebook’s social plug-ins, it detects and sends the tracking cookies back to Facebook – even if the user does not interact with the Like button, Facebook Login or other extension of the social media site.
EU privacy law states that prior consent must be given before issuing a cookie or performing tracking, unless it is necessary for either the networking required to connect to the service (“criterion A”) or to deliver a service specifically requested by the user (“criterion B”).
A cookie is a small file placed on a user’s computer by a website that stores settings, previous activities and other small amounts of information needed by the site. They are sent to the site on each visit and can therefore be used to identify a user’s computer and track their movements across the web.
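The round trip described above is simple enough to sketch in a few lines of Python using the standard http.cookies module. The identifier value here is invented, but the mechanics are the real ones: a site sets a cookie once, and the browser then presents it with every later request, which is exactly what makes the cookie usable as a tracking handle:

```python
# A minimal sketch of the cookie round trip: a site sets a cookie via a
# Set-Cookie response header, and the browser sends the stored value back
# in a Cookie header on every subsequent request. The "uid" value is an
# invented example identifier.
from http.cookies import SimpleCookie

# First response from the site: a Set-Cookie header carrying an identifier
# with a two-year lifetime.
set_cookie_header = "uid=a1b2c3d4; Max-Age=63072000; Path=/"
jar = SimpleCookie()
jar.load(set_cookie_header)

# On every later visit, the browser reassembles the stored values into a
# Cookie request header, re-identifying this browser to the site.
cookie_header = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
print(cookie_header)  # uid=a1b2c3d4
```

Nothing about the value needs to name you; it only needs to be unique to your browser, and the correlation with your real identity can come later.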
“We collect information when you visit or use third-party websites and apps that use our services. This includes information about the websites and apps you visit, your use of our services on those websites and apps, as well as information the developer or publisher of the app or website provides to you or us,” states Facebook’s data usage policy, which was updated this year.
Facebook’s tracking practices have ‘no legal basis’
An opinion published by Article 29, the pan-European data regulator working party, in 2012 stated that unless delivering a service specifically requested by the user, social plug-ins must have consent before placing a cookie. “Since by definition social plug-ins are destined to members of a particular social network, they are not of any use for non-members, and therefore do not match ‘criterion B’ for those users.”
The same applies for users of Facebook who are logged out at the time, while logged-in users should only be served a “session cookie” that expires when the user logs out or closes their browser, according to Article 29.
The Article 29 working party has also said that cookies set for “security purposes” can only fall under the consent exemptions if they are essential for a service explicitly requested by the user – not general security of the service.
The social network tracks its users for advertising purposes across non-Facebook sites by default. Users can opt out of ad tracking, but an opt-out mechanism “is not an adequate mechanism to obtain average users’ informed consent”, according to Article 29.
“European legislation is really quite clear on this point. To be legally valid, an individual’s consent towards online behavioural advertising must be opt-in,” explained Brendan Van Alsenoy, a researcher at ICRI and one of the report’s authors.
“Facebook cannot rely on users’ inaction (ie not opting out through a third-party website) to infer consent. As far as non-users are concerned, Facebook really has no legal basis whatsoever to justify its current tracking practices.”
Opt-out mechanism actually enables tracking for the non-tracked
The researchers also analysed the opt-out mechanism used by Facebook and many other internet companies including Google and Microsoft.
Users wanting to opt out of behavioural tracking are directed to sites run by the Digital Advertising Alliance in the US, the Digital Advertising Alliance of Canada in Canada or the European Digital Advertising Alliance in the EU, each of which allows bulk opt-outs from 100 companies.
But the researchers discovered that, far from being excluded from tracking, users who had never been tracked before had a new Facebook cookie placed on their computers.
“If people who are not being tracked by Facebook use the ‘opt out’ mechanism proposed for the EU, Facebook places a long-term, uniquely identifying cookie, which can be used to track them for the next two years,” explained Günes Acar from COSIC, who also co-wrote the report. “What’s more, we found that Facebook does not place any long-term identifying cookie on the opt-out sites suggested by Facebook for US and Canadian users.”
The finding was confirmed by Steven Englehardt, a researcher at Princeton University’s department of computer science who was not involved in the report: “I started with a fresh browsing session and received an additional ‘datr’ cookie that appears capable of uniquely identifying users on the UK version of the European opt-out site. This cookie was not present during repeat tests with a fresh session on the US or Canadian version.”
Facebook sets an opt-out cookie on all the opt-out sites, but this cookie cannot be used for tracking individuals since it does not contain a unique identifier. Why Facebook places the “datr” cookie on computers of EU users who opt out is unknown.
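The distinction the researchers draw, between a short-lived session cookie and a long-term identifier like “datr”, comes down to the attributes a server attaches when setting the cookie. Below is a minimal sketch of how such headers can be classified using Python’s standard library; the header strings are illustrative examples, not captured from Facebook:

```python
# Sketch: classify Set-Cookie headers as session vs persistent cookies.
# A "session" cookie (no Expires/Max-Age attribute) dies when the
# browser closes; a "persistent" cookie, like the two-year 'datr'
# cookie described above, survives and can identify a user across
# visits. The header strings below are illustrative only.
from http.cookies import SimpleCookie

def classify_cookies(set_cookie_headers):
    """Return (name, kind) pairs for each cookie in the given headers."""
    results = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            persistent = bool(morsel["expires"] or morsel["max-age"])
            results.append((name, "persistent" if persistent else "session"))
    return results

headers = [
    "datr=abc123; Expires=Fri, 12 May 2017 10:00:00 GMT; Path=/",
    "sess=xyz; Path=/",
]
print(classify_cookies(headers))
# [('datr', 'persistent'), ('sess', 'session')]
```

Under Article 29’s reading, only the second kind of cookie, expiring with the browsing session, may be served to logged-out users without consent.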
For users worried about tracking, third-party browser add-ons that block tracking are available, says Acar: “Examples include Privacy Badger, Ghostery and Disconnect. Privacy Badger replaces social plug-ins with privacy preserving counterparts so that users can still use social plug-ins, but not be tracked until they actually click on them.”
“We argue that it is the legal duty of Facebook to design its services and components in a privacy-friendly way,” Van Alsenoy added. “This means designing social plug-ins in such a way that information about individual’s personal browsing activities outside of Facebook are not unnecessarily exposed.”
A Facebook spokesperson said: “This report contains factual inaccuracies. The authors have never contacted us, nor sought to clarify any assumptions upon which their report is based. Neither did they invite our comment on the report before making it public. We have explained in detail the inaccuracies in the earlier draft report (after it was published) directly to the Belgian DPA, who we understand commissioned it, and have offered to meet with them to explain why it is incorrect, but they have declined to meet or engage with us. However, we remain willing to engage with them and hope they will be prepared to update their work in due course.”
“Earlier this year we updated our terms and policies to make them more clear and concise, to reflect new product features and to highlight how we’re expanding people’s control over advertising. We’re confident the updates comply with applicable laws including EU law.”
Van Alsenoy and Acar, authors of the study, told the Guardian: “We welcome comments via the contact email address listed within the report. Several people have already reached out to provide suggestions and ideas, which we really appreciate.”
“To date, we have not been contacted by Facebook directly nor have we received any meeting request. We’re not surprised that Facebook holds a different opinion as to what European data protection laws require. But if Facebook feels today’s releases contain factual errors, we’re happy to receive any specific remarks it would like to make.”
Let’s continue on the Facebook topic from yesterday and hear it this time from software freedom activist and computer programmer Richard Stallman (also known as rms).
Do you need convincing reasons to leave Facebook for good? Look no further than this video clip and Guardian article below.
To be honest, I signed up to Facebook only late last year but used it exclusively to promote this blog. Yet, I’m always having second thoughts…
Leave Facebook if you don’t want to be spied on, warns EU
European Commission admits Safe Harbour framework cannot ensure privacy of EU citizens’ data when sent to the US by American internet firms
Thursday 26 March 2015 19.11 GMT
The European Commission has warned EU citizens that they should close their Facebook accounts if they want to keep information private from US security services, finding that current Safe Harbour legislation does not protect citizens’ data.
The comments were made by EC attorney Bernhard Schima in a case brought by privacy campaigner Maximilian Schrems, looking at whether the data of EU citizens should be considered safe if sent to the US in a post-Snowden revelation landscape.
“You might consider closing your Facebook account, if you have one,” Schima told advocate general Yves Bot in a hearing of the case at the European court of justice in Luxembourg.
When asked directly, the commission could not confirm to the court that the Safe Harbour rules provide adequate protection of EU citizens’ data as it currently stands.
The US no longer qualifies
The case, dubbed “the Facebook data privacy case”, concerns the current Safe Harbour framework, which covers the transmission of EU citizens’ data across the Atlantic to the US. Without the framework, it is against EU law to transmit private data outside of the EU. The case collects complaints lodged against Apple, Facebook, Microsoft, Microsoft-owned Skype and Yahoo.
Schrems maintains that companies operating inside the EU should not be allowed to transfer data to the US under Safe Harbour protections – which state that US data protection rules are adequate if information is passed by companies on a “self-certify” basis – because the US no longer qualifies for such a status.
The case argues that the US government’s Prism data collection programme, revealed by Edward Snowden in the NSA files, which sees EU citizens’ data held by US companies passed on to US intelligence agencies, breaches the EU’s Data Protection Directive “adequacy” standard for privacy protection, meaning that the Safe Harbour framework no longer applies.
Poland and a few other member states as well as advocacy group Digital Rights Ireland joined Schrems in arguing that the Safe Harbour framework cannot ensure the protection of EU citizens’ data and therefore is in violation of the two articles of the Data Protection Directive.
The commission, however, argued that Safe Harbour is necessary both politically and economically and that it is still a work in progress. The EC and the Irish data protection watchdog argue that the EC should be left to reform it with a 13-point plan to ensure the privacy of EU citizens’ data.
“There have been a spate of cases from the ECJ and other courts on data privacy and retention showing the judiciary as being more than willing to be a disrupting influence,” said Paula Barrett, partner and data protection expert at law firm Eversheds. “Bringing down the safe harbour mechanism might seem politically and economically ill-conceived, but as the decision of the ECJ in the so-called ‘right to be forgotten’ case seems to reinforce that isn’t a fetter which the ECJ is restrained by.”
An opinion on the Safe Harbour framework from the ECJ is expected by 24 June.
Facebook declined to comment.
Something is fundamentally wrong…
The new Windows 10, reportedly due for release this summer, comes with Windows Hello, a biometric authentication feature that logs users in by recognising their face, fingerprint or iris, which Microsoft labels “more personal and more secure”, with security and privacy accounted for.
Well, let’s see how long this lasts. Recall that Apple’s fingerprint-reading technology on its earlier iPhones was hacked within 24 hours.
And speaking of facial recognition, I know someone whose six-year-old son managed to fool a Samsung smartphone because of his resemblance to his mother. All it took was for him to stare at his mother’s phone while she was asleep and… Bingo!
So here’s my question: what about identical twins?
Good luck, Windows 10.
If there’s one lesson on computer/phone scams you need to remember, it’s this: Microsoft, or Apple for that matter, will not initiate a call to offer a remote computer scan to fix a “problem”.
So here’s an actual incident when the scammers called and met their match – it was a computer security researcher on the line, who recorded the entire conversation (his two audio files below).
At one point, after allowing the scammer to gain limited control of his computer screen, he informed the caller that she was busted; she in turn threatened to hack him (second audio file).
Enjoy witnessing scammers at work and here’s the article for a brief background.
Oh by the way, the caller’s number was 949-000-7676.
This is bizarre (see article below), but it is a good sign that what Mega offers in encrypted communications is the real deal. The authorities are certainly not impressed, hence the pressure on credit card companies to force PayPal to block out Mega, as they previously did with WikiLeaks.
BUT don’t forget Kim Dotcom’s newly launched end-to-end encrypted voice calling service “MegaChat” comes in both free and paid versions – see my earlier piece on how to register for MegaChat.
Under U.S. Pressure, PayPal Nukes Mega For Encrypting Files
on February 27, 2015
After coming under intense pressure, PayPal has closed the account of cloud-storage service Mega. According to the company, SOPA proponent Senator Patrick Leahy personally pressured Visa and Mastercard, who in turn called on PayPal to terminate the account. Bizarrely, Mega’s encryption is being cited as a key problem.
During September 2014, the Digital Citizens Alliance and NetNames teamed up to publish a brand new report. Titled ‘Behind The Cyberlocker Door: A Report On How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,’ it offered insight into the finances of some of the world’s most popular cyberlocker sites.
The report had its issues, however. While many of the sites covered might at best be considered dubious, the inclusion of Mega.co.nz – the most scrutinized file-hosting startup in history – was a real head scratcher. Mega conforms with all relevant laws and responds quickly whenever content owners need something removed. By any standard the company lives up to the requirements of the DMCA.
“We consider the report grossly untrue and highly defamatory of Mega,” Mega CEO Graham Gaylard told TF at the time. But now, just five months on, Mega’s inclusion in the report has come back to bite the company in a big way.
Speaking via email with TorrentFreak this morning, Gaylard highlighted the company’s latest battle, one which has seen the company become unable to process payments from customers. It’s all connected with the NetNames report and has even seen the direct involvement of a U.S. politician.
According to Mega, following the publication of the report last September, SOPA and PIPA proponent Senator Patrick Leahy (Vermont, Chair Senate Judiciary Committee) put Visa and MasterCard under pressure to stop providing payment services to the ‘rogue’ companies listed in the NetNames report.
Following Leahy’s intervention, Visa and MasterCard then pressured PayPal to cease providing payment processing services to MEGA. As a result, Mega is no longer able to process payments.
“It is very disappointing to say the least. PayPal has been under huge pressure,” Gaylard told TF.
The company did not go without a fight, however.
“MEGA provided extensive statistics and other evidence showing that MEGA’s business is legitimate and legally compliant. After discussions that appeared to satisfy PayPal’s queries, MEGA authorised PayPal to share that material with Visa and MasterCard. Eventually PayPal made a non-negotiable decision to immediately terminate services to MEGA,” the company explains.
What makes the situation more unusual is that PayPal reportedly apologized to Mega for its withdrawal while acknowledging that the company’s business is indeed legitimate.
However, PayPal also advised that Mega’s unique selling point – its end-to-end encryption – was a key concern for the processor.
“MEGA has demonstrated that it is as compliant with its legal obligations as USA cloud storage services operated by Google, Microsoft, Apple, Dropbox, Box, Spideroak etc, but PayPal has advised that MEGA’s ‘unique encryption model’ presents an insurmountable difficulty,” Mega explains.
As of now, Mega is unable to process payments but is working on finding a replacement. In the meantime the company is waiving all storage limits and will not suspend any accounts for non-payment. All accounts have had their subscriptions extended by two months, free of charge.
Mega indicates that it will ride out the storm and will not bow to pressure nor compromise the privacy of its users.
“MEGA supplies cloud storage services to more than 15 million registered customers in more than 200 countries. MEGA will not compromise its end-to-end user controlled encryption model and is proud to not be part of the USA business network that discriminates against legitimate international businesses,” the company concludes.
Photo (above) credit: US-China Perception Monitor.
It’s not as if the NSA hasn’t been warned, and China may just be the first of many to come.
The United States Is Angry That China Wants Crypto Backdoors, Too
February 27, 2015 // 03:44 PM EST
When the US demands technology companies install backdoors for law enforcement, it’s okay. But when China demands the same, it’s a whole different story.
The Chinese government is about to pass a new counter-terrorism law that would require tech companies operating in the country to turn over encryption keys and include specially crafted code in their software and hardware so that Chinese authorities can defeat security measures at will.
Technologists and cryptographers have long warned that you can’t design a secure system that will enable law enforcement – and only law enforcement – to bypass the encryption. The nature of a backdoor is that it is also a vulnerability, and if discovered, hackers or foreign governments might be able to exploit it, too.
Yet, over the past few months, several US government officials, including the FBI director James Comey, outgoing US Attorney General Eric Holder, and NSA Director Mike Rogers, have all suggested that companies such as Apple and Google should give law enforcement agencies special access to their users’ encrypted data—while somehow offering strong encryption for their users at the same time.
“If the US forces tech companies to install backdoors in encryption, then tech companies will have no choice but to go along with China when they demand the same power.”
Their fear is that cops and feds will “go dark,” an FBI term for a potential scenario where encryption makes it impossible to intercept criminals’ communications.
But in light of China’s new proposals, some think the US’ own position is a little ironic.
“You can’t have it both ways,” Trevor Timm, the co-founder and the executive director of the Freedom of the Press Foundation, told Motherboard. “If the US forces tech companies to install backdoors in encryption, then tech companies will have no choice but to go along with China when they demand the same power.”
He’s not the only one to think the US government might end up regretting its stance.
Someday US officials will look back and realize how much global damage they’ve enabled with their silly requests for key escrow.
— Matthew Green (@matthew_d_green) February 27, 2015
Matthew Green, a cryptography professor at Johns Hopkins University, tweeted that someday US officials will “realize how much damage they’ve enabled” with their “silly requests” for backdoors.
Ironically, the US government sent a letter to China expressing concern about its new law. “The Administration is aggressively working to have China walk back from these troubling regulations,” US Trade Representative Michael Froman said in a statement.
A White House spokesperson did not respond to a request for comment from Motherboard.
“It’s stunningly shortsighted for the FBI and NSA not to realize this,” Timm added. “By demanding backdoors, these US government agencies are putting everyone’s cybersecurity at risk.”
In an oft-cited example of “if you build it, they will come,” hackers exploited a system designed to let police tap phones to spy on more than a hundred Greek cellphones, including that of the prime minister.
At the time, Steven Bellovin, a computer science professor at Columbia University, wrote that this incident shows how “built-in wiretap facilities and the like are really dangerous, and are easily abused.”
That hasn’t stopped others from asking, though. Several countries, including India, Kuwait and the UAE, have asked BlackBerry to include a backdoor in its devices so that authorities could access encrypted communications. And a leaked document in 2013 revealed that BlackBerry’s lawful interception system in India was “ready for use.”
A group of hackers known as the “Sandworm Team”, allegedly from Russia, has found a fundamental flaw in Microsoft Windows (a zero-day vulnerability affecting all supported versions of Windows, as well as Windows Server 2008 and 2012) and turned it into a cyber-espionage campaign targeting NATO, the European Union and the telecommunications and energy sectors. According to new research from iSight Partners, a Dallas-based cybersecurity firm, the attackers pulled emails and documents off computers belonging to NATO, Ukrainian government groups, Western European government officials, energy-sector companies and telecommunications firms.
Photo credit: iSight Partners.
A new homegrown Chinese operating system aimed at sweeping aside foreign rivals such as Microsoft, Google and Apple could arrive as soon as this October, according to a Xinhua news report on Sunday.
The new OS would first target desktops, with smartphones and other mobile devices to follow, according to Ni Guangnan, who heads the development effort launched in March.
Now, it’s not that China has never attempted to create its very own OS. A Chinese Linux-based OS for mobile devices, dubbed the China Operating System (COS), was launched some years ago. It was developed jointly by the company Shanghai Liantong, ISCAS (the Institute of Software at the Chinese Academy of Sciences) and the Chinese government, but it failed to take off and was later discontinued.
But the Chinese determination to have its very own system has grown considerably of late, further sparked by the Snowden revelations that the American NSA planted “backdoor” surveillance tools on US-made hardware. Similarly, the US has long been suspicious of China-made devices. Hmmm, is it still possible to get laptops with NO parts made in China? Check out my earlier column here if you are keen.
More recently, after the US made poster boys of five Chinese military officers it accused of cyber-espionage in May, China swiftly banned government use of Windows 8. Just last month, it was also reported that as many as 10 Apple products were pulled from a government procurement list as the mistrust continued.
China also complained early last year that Google had too much control over its smartphone industry via the Android mobile operating system and had discriminated against some local firms.
Any bets on a fake Chinese OS any time soon – and sooner than October?
From China with Love
It’s the one-year anniversary of what is now known as the Snowden revelations, which began on June 5 and June 9 when the Guardian broke news of classified National Security Agency documents and Edward Snowden revealed himself in Hong Kong as the source of the leaks.
There is still much to decipher from the chronology of events in the aftermath and the sudden global awakening to the end of privacy. Among the impacts on the personal, business and political fronts, one salient feature is the hypocritical rhetorical sparring between the US and China in recent weeks, which could set the tone for US-Sino relations for years to come.
Snowden said his biggest fear is that nothing would change following his bold decision a year ago.
You can find the entire column here.
End of Windows XP is No Dawn for Windows 8
Don’t be fooled into upgrading to Windows 8 after Microsoft recently ended support for the popular Windows XP OS. It’s high time to switch to Linux instead, as I did three years ago.
There is an unspoken underlying tension in the workplace on privacy matters relating to office telephones, computers, emails, documents, CCTV cameras, etc. Employers like to think they reserve the right to probe what they consider their property, while employees believe their turf is free from invasion.
This tension is nowhere better exemplified than by reports last Thursday that operatives at US tech giant Microsoft hacked into a blogger’s Hotmail account in the course of an investigation to identify an employee accused of stealing Microsoft trade secrets.
And it is not uncommon in my business to encounter client complaints about potential espionage and other alleged misconduct by their employees, leading them to consider searching the (company-owned) computers, emails, phone records, etc.
Take your pick: Edward Snowden, Internet and phone service providers, or just everybody?
The furor over the past week about how US intelligence agencies like the National Security Agency and the Federal Bureau of Investigation have for years scooped up massive loads of private communications data raises one critical and distressing question.
Who, worldwide and in the US, are the general public supposed to trust now that it seems all forms of digital and cyber communications risk being read by the American authorities? The Americans, it seems, don’t believe it’s that big a deal. By 62% to 34%, according to the latest poll by Pew Research and the Washington Post, they say it’s more important to investigate the threats than to protect their privacy. But what about the rest of the world?
The immediate acknowledgement, rather than point blank denial, of the massive clandestine eavesdropping programs is no doubt alarming even for those long suspicious of such covert undertakings. But the more disturbing part is that the official response amounts to plain outright lies.
Please read this entire Opinion Column here.