Category Archives: Espionage


Shhh… Snowden: iPhone has Secret Surveillance Spyware that Can Be Remotely Controlled

The NSA whistleblower Edward Snowden revealed last week that he doesn’t use an iPhone because the Apple device carries secret surveillance spyware that can be remotely controlled by the US intelligence agency.


Obama: Why Is Your Blackberry Super-Encrypted When You Want to Ban the World from Using Encryption?

Let’s have a different take on Obama and his endorsement (of Cameron’s drive) to kill encryption.

Obama is not allowed to use an iPhone because it’s “not safe”, the NSA has advised him – and Edward Snowden recently said the iPhone was made to remotely track and transmit data about its users.

Obama uses a Blackberry because of its reputation for security. But even that was not considered safe enough, so his device was further encrypted – though experts warned it still offers no absolute guarantee.

So Mr. President, you understand very well the value of encryption and privacy. And yet you want to ban encryption in the name of national security, when you know very well the terrorists you’re after are adept at finding alternatives (remember Osama bin Laden?), including primitive channels like typewriters, pen and paper?

And at the same time, you’re crippling the entire world – companies, individuals and governments (what did Merkel tell you?) – by throwing the floodgates open to cyber-criminals and hackers?

Reckon you can see that the equation doesn’t add up?


Shhh… Blackberry to Cameron & Obama: Encryption Ban a Gift to Hackers & Cyber-Criminals

Blackberry’s CEO John Chen in his latest blog post “Encryption Needn’t Be An Either/Or Choice Between Privacy and National Security” responded to the recent push by British Prime Minister David Cameron – endorsed by US President Barack Obama last week – to ban encrypted communications in the name of national security:

Encryption Needn’t Be An Either/Or Choice Between Privacy and National Security

In the wake of the Paris terror attacks earlier this month, U.K. Prime Minister David Cameron proposed banning encrypted communications services such as those offered by Apple, Facebook and others. President Obama partially endorsed Prime Minister Cameron’s proposal a few days later, indicating he would support banning encrypted communications services that cannot be intercepted by law enforcement and national security agencies. While there is no publicly-available evidence that encrypted communications played any role in the Paris attacks, security officials say their ability to prevent future attacks will be hindered if terrorists are able to evade surveillance using encrypted communications and messaging services.

Privacy advocates have harshly criticized the Cameron-Obama proposals, arguing that encryption is a vital tool for protecting sensitive government, corporate and personal data from hacking and other forms of cyber theft. Following the recent spate of hacking attacks against Sony, Target, Home Depot, certain celebrity users of popular but hackable smartphones, and others, these advocates argue we need more, not less encryption. Further, they argue that banning encryption will not necessarily make it easier for security agencies to surveil terror plotters; after all, the terrorists will know they are being overheard and will simply communicate in new and ever-changing forms of coded language.

Reconciling these opposing perspectives on encryption requires a reasoned approach that balances legitimate national security concerns with legitimate cyber security concerns.

Privacy is Everyone’s Concern

Our dependence on computing devices for transmitting and storing sensitive personal information has become irreversible. Billions of items of personal information including health records, bank account records, social security numbers and private photographs reside on millions of computers and in the cloud. This information is transmitted via the internet every day. The same is true for highly confidential and proprietary business information. And of course no government or law enforcement agency could function without maintaining high levels of information security.

With so much information residing on computer networks and flowing through the internet, cyber security has emerged as one of society’s uppermost concerns. Protecting private and sensitive information from hacking, intrusion and exfiltration now commands the attention not just of computer professionals, but also heads of state, CEOs, Boards of Directors, small business owners, and every individual using a computer or smartphone, and even those who never use a computing device.

Modern forms of encrypting voice and data traffic provide the best protection for highly valuable and private personal, business and government information. Rendering data unreadable to the intruder greatly diminishes the incentive to hack or steal. Banning encryption, therefore, would dramatically increase the exposure of all such information to hacking and cyber theft. Clearly that is not a viable option.

Call of Duty

On the other hand, the same encryption technology that enables protection of sensitive data can also be abused by criminals and terrorists to evade legitimate government efforts to track their data and communications. Companies offering encrypted communications thus have a duty to comply with lawful requests to provide information to security agencies monitoring would-be terrorists. Companies like BlackBerry do: we’ve supported FIPS 140-2 validated encryption in all of our devices for the past 10 years – longer than many of our competitors have been selling smartphones.

Depending on the particular technology involved, the information requested by law enforcement agencies might include the content of encrypted messages, but it may also include other vital data such as user information, the dates and times the subscriber contacted other users, the length of such communications, the location of the user, and so forth. In many instances non-content user information can be even more important than the actual content itself, because such metadata can provide crucial leads and other vital intelligence to law enforcement and security agencies.

Let’s be clear: I am not advocating sharing data with governments for their ongoing data collection programs without a court order, subpoena or other lawful request. However, telecommunications companies, Internet Service Providers, and other players in the modern communications and messaging ecosystem need to take seriously their responsibility to comply and to facilitate compliance with reasonable and lawful requests for such information. Unfortunately, not all players in the industry view this issue the same way. Some Silicon Valley companies have publicly opposed government efforts to enable lawful surveillance and data gathering, even where lives may hang in the balance. These companies appear to be trying to position themselves as staunchly “pro-privacy,” without according sufficient weight to legitimate and reasonable governmental efforts to monitor and track would-be terrorists. Far from protecting privacy rights, this irresponsible approach risks providing ever stronger arguments to those who would subjugate all cyber privacy concerns to national security.

The answer, therefore, is not to ban encryption, because doing so would give hackers and cyber-criminals a windfall, making it much easier for them to mine billions of items of sensitive personal, business and government data. Instead, telecommunications and internet companies should cooperate with the reasonable and lawful efforts of governments to fight terrorism. That way we can help protect both privacy and lives.


Shhh… Obama to Support Cameron on Encryption Ban – Knowingly Betraying Our Privacy and Security

US President Obama has openly voiced support for the British Prime Minister’s idea of banning encryption, but as The Guardian’s report (below) last week on a secret 2009 US cybersecurity document showed, both governments are well aware that such a decision would leave the entire world highly vulnerable to cyber attacks – a price they are willing to pay for their interests in national security and counter-terrorism.


Secret US cybersecurity report: encryption vital to protect private data


Newly uncovered Snowden document contrasts with British PM’s vow to crack down on encrypted messaging after Paris attacks

A secret US cybersecurity report warned that government and private computers were being left vulnerable to online attacks from Russia, China and criminal gangs because encryption technologies were not being implemented fast enough.

The advice, in a newly uncovered five-year forecast written in 2009, contrasts with the pledge made by David Cameron this week to crack down on encryption use by technology companies.

In the wake of the Paris terror attacks, the prime minister said there should be no “safe spaces for terrorists to communicate” that British authorities could not access.

Cameron, who landed in the US on Thursday night, is expected to urge Barack Obama to apply more pressure to tech giants, such as Apple, Google and Facebook, which have been expanding encrypted messaging for their millions of users since the revelations of mass NSA surveillance by the whistleblower Edward Snowden.

Cameron said the companies “need to work with us. They need also to demonstrate, which they do, that they have a social responsibility to fight the battle against terrorism. We shouldn’t allow safe spaces for terrorists to communicate. That’s a huge challenge but that’s certainly the right principle”.

But the document from the US National Intelligence Council, which reports directly to the US director of national intelligence, made clear that encryption was the “best defence” for computer users to protect private data.

Part of the cache given to the Guardian by Snowden was published in 2009 and gives a five-year forecast on the “global cyber threat to the US information infrastructure”. It covers communications, commercial and financial networks, and government and critical infrastructure systems. It was shared with GCHQ and made available to the agency’s staff through its intranet.

One of the biggest issues in protecting businesses and citizens from espionage, sabotage and crime – hacking attacks are estimated to cost the global economy up to $400bn a year – was a clear imbalance between the development of offensive versus defensive capabilities, “due to the slower than expected adoption … of encryption and other technologies”, it said.

An unclassified table accompanying the report states that encryption is the “[b]est defense to protect data”, especially if made particularly strong through “multi-factor authentication” – similar to two-step verification used by Google and others for email – or biometrics. These measures remain all but impossible to crack, even for GCHQ and the NSA.

The report warned: “Almost all current and potential adversaries – nations, criminal groups, terrorists, and individual hackers – now have the capability to exploit, and in some cases attack, unclassified access-controlled US and allied information systems.”

It further noted that the “scale of detected compromises indicates organisations should assume that any controlled but unclassified networks of intelligence, operational or commercial value directly accessible from the internet are already potentially compromised by foreign adversaries”.

The primary adversaries included Russia, whose “robust” operations teams had “proven access and tradecraft”, it said. By 2009, China was “the most active foreign sponsor of computer network intrusion activity discovered against US networks”, but lacked the sophistication or range of capabilities of Russia. “Cyber criminals” were another of the major threats, having “capabilities significantly beyond those of all but a few nation states”.

The report had some cause for optimism, especially in the light of Google and other US tech giants having in the months prior greatly increased their use of encryption efforts. “We assess with high confidence that security best practices applied to target networks would prevent the vast majority of intrusions,” it concluded.

Official UK government security advice still recommends encryption among a range of other tools for effective network and information defence. However, end-to-end encryption – which means only the two people communicating with each other, and not the company carrying the message, can decode it – is problematic for intelligence agencies as it makes even warranted collection much more difficult.

The latest versions of Apple and Google’s mobile operating systems are encrypted by default, while other popular messaging services, such as WhatsApp and Snapchat, also use encryption. This has prompted calls for action against such strong encryption from ministers and officials. Speaking on Monday, Cameron asked: “In our country, do we want to allow a means of communication between people which we cannot read?”

The previous week, a day after the attack on the Charlie Hebdo office in Paris, the MI5 chief, Andrew Parker, called for new powers and warned that new technologies were making it harder to track extremists.

In November, the head of GCHQ, Robert Hannigan, said US social media giants had become the “networks of choice” for terrorists. Chris Soghoian, principal senior policy analyst at the American Civil Liberties Union, said attempts by the British government to force US companies to weaken encryption faced many hurdles.

“The trouble is these services are already being used by hundreds of millions of people. I guess you could try to force tech companies to be less secure but then they would be less secure against attacks for anyone,” he said.

GCHQ and the NSA are responsible for cybersecurity in the UK and US respectively. This includes working with technology companies to audit software and hardware for use by governments and critical infrastructure sectors.

Such audits uncover numerous vulnerabilities which are then shared privately with technology companies to fix issues that could otherwise have caused serious damage to users and networks. However, both agencies also have intelligence-gathering responsibilities under which they exploit vulnerabilities in technology to monitor targets. As a result of these dual missions, they are faced with weighing up whether to exploit or fix a vulnerability when a product is used both by targets and innocent users.

The Guardian, New York Times and ProPublica have previously reported the intelligence agencies’ broad efforts to undermine encryption and exploit rather than reveal vulnerabilities. This prompted Obama’s NSA review panel to warn that the agency’s conflicting missions caused problems, and so recommend that its cyber-security responsibilities be removed to prevent future issues.

Another newly discovered document shows GCHQ acting in a similarly conflicted manner, despite the agencies’ private acknowledgement that encryption is an essential part of protecting citizens against cyber-attacks.

The 2008 memo was addressed to the then foreign secretary, David Miliband, and classified with one of the UK’s very highest restrictive markings: “TOP SECRET STRAP 2 EYES ONLY”. It is unclear why such a document was posted to the agency’s intranet, which is available to all agency staff, NSA workers, and even outside contractors.

The memo requested a renewal of the legal warrant allowing GCHQ to “modify” commercial software in violation of licensing agreements. The document cites examples of software the agency had hacked, including commonly used software to run web forums, and website administration tools. Such software is widely used by companies and individuals around the world.

The document also said the agency had developed “capability against Cisco routers”, which would “allow us to re-route selected traffic across international links towards GCHQ’s passive collection systems”.

GCHQ had also been working to “exploit” the anti-virus software Kaspersky, the document said. The report contained no information on the nature of the vulnerabilities found by the agency.

Security experts regularly say that keeping software up to date and being aware of vulnerabilities is vital for businesses to protect themselves and their customers from being hacked. Failing to fix vulnerabilities leaves open the risk that other governments or criminal hackers will find the same security gaps and exploit them to damage systems or steal data, raising questions about whether GCHQ and the NSA neglected their duty to protect internet systems in their quest for more intelligence.

A GCHQ spokesman said: “It is long-standing policy that we do not comment on intelligence matters. Furthermore, all of GCHQ’s work is carried out in accordance with a strict legal and policy framework, which ensures that our activities are authorised, necessary and proportionate, and that there is rigorous oversight, including from the secretary of state, the interception and intelligence services commissioners and the parliamentary intelligence and security committee. All our operational processes rigorously support this position. In addition, the UK’s interception regime is entirely compatible with the European convention on human rights.”

Michael Beckerman, president and CEO of the Internet Association, a lobby group that represents Facebook, Google, Reddit, Twitter, Yahoo and other tech companies, said: “Just as governments have a duty to protect the public from threats, internet services have a duty to our users to ensure the security and privacy of their data. That’s why internet services have been increasing encryption security.”


Shhh… Mark Cuban on Social Media Mistakes and Self-Destructing Messaging App Cyber Dust

Rather than hearing from the geeks, it may be refreshing to hear the same from Mark Cuban, Shark Tank investor and owner of the NBA team Dallas Mavericks:


Shhh… List of 3,500 Spy Names sold by German Double Agent

The double agent is reportedly known as Markus R., a 32-year-old employee of Germany’s foreign intelligence agency (BND) who allegedly passed the list to a CIA contact.


Shhh… Paris Attacks: Dangerous Precedent & Irreversible Damage with Cameron’s Pursuit of “Safe Spaces” & Ban on Encrypted Online Messaging Apps

In the aftermath of the recent Charlie Hebdo attacks, it came as no surprise that politicians were quick to up the ante (again) on surveillance and stifle the right to privacy – whilst, in the same breath, draping themselves publicly in Paris in an embrace of free speech and press freedom.

British Prime Minister David Cameron, for example, stole the headlines this week saying that, if re-elected in May, he would ban encrypted online messaging apps like WhatsApp and Snapchat if the British intelligence agencies were not given backdoors to access the communications.

“We must not allow terrorists safe space to communicate with each other,” said Cameron as he spoke about a “comprehensive piece of legislation” to close the “safe spaces” used by suspected terrorists – and also planned to encourage US President Barack Obama (who should be reminded that he has promised to pursue NSA reforms) to make internet companies like Facebook and Twitter cooperate with British intelligence agencies to track the online activities of Islamist extremists.

Backdoors are by and large security holes, and what Cameron is proposing would set a dangerous precedent with irreversible consequences far beyond the loss of free speech – this is best summed up in the following open letter to David Cameron (below – and here):

[Open letter to David Cameron]


Shhh… CENTCOM Hack: Snowden on Why the US Pays the Price for Deprioritizing Cyber Defense

Photo (above) credit: US Central Command

Snowden was spot on – the US, as he predicted, is now paying the price for focusing too much on cyber offense at the expense of cyber defense.

“The National Security Agency has two halves, one that handles defense and one that handles offense. Michael Hayden and Keith Alexander, the former directors of NSA, they shifted those priorities… But the problem is when you deprioritize defense, you put all of us at risk,” according to Snowden.

“If we attack a Chinese university and steal the secrets of their research program, how likely is it that that is going to be more valuable to the United States than when the Chinese retaliate and steal secrets from a U.S. university, from a U.S. defense contractor, from a U.S. military agency?

“The most important thing to us is not being able to attack our adversaries, the most important thing is to be able to defend ourselves. And we can’t do that as long as we’re subverting our own security standards for the sake of surveillance.”

The website PBS.org published an exclusive interview with Snowden on his views on cyber warfare just days before the CENTCOM hack earlier this week. Interestingly, the video link is no longer working, but the full unedited transcript is reproduced below.


Exclusive: Edward Snowden on Cyber Warfare

By James Bamford and Tim De Chant on Thu, 08 Jan 2015

Last June, journalist James Bamford, who is working with NOVA on a new film about cyber warfare that will air in 2015, sat down with Snowden in a Moscow hotel room for a lengthy interview. In it, Snowden sheds light on the surprising frequency with which cyber attacks occur, their potential for destruction, and what, exactly, he believes is at stake as governments and rogue elements rush to exploit weaknesses found on the internet, one of the most complex systems ever built by humans. The following is an unedited transcript of their conversation.

James Bamford: Thanks very much for coming. I really appreciate this. And it’s really interesting—the very day we’re meeting with you, this article came out in The New York Times, seemed to be downplaying the potential damage, which they really seem to have hyped up in the original estimate. What did you think of this article today?

Edward Snowden: So this is really interesting. It’s the new NSA director saying that the alleged damage from the leaks was way overblown. Actually, let me do that again.

So this is really interesting. The NSA chief in this who replaced Keith Alexander, the former NSA director, is calling the alleged damage from the last year’s revelations to be much more insignificant than it was represented publicly over the last year. We were led to believe that the sky was going to fall, that the oceans were going to boil off, the atmosphere was going to ignite, the world would end as we know it. But what he’s saying is that it does not lead him to the conclusion that the sky is falling.

And that’s a significant departure from the claims of the former NSA director, Keith Alexander. And it’s sort of a pattern that we’ve seen where the only U.S. officials who claim that these revelations cause damage rather than serve the public good were the officials that were personally embarrassed by it. For example, the chairs of the oversight committees in Congress, the former NSA director himself.

But we also have, on the other hand, the officials on the White House’s independent review panels who said that these programs had never been shown to stop even a single imminent terrorist attack in the United States, and they had no value. So how could it be that these programs were so valuable that talking about them, revealing them to the public would end the world if they hadn’t stopped any attacks?

But what we’re seeing and what this article represents is that the claims of harm that we got last year were not accurate and could in fact be claimed to be misleading, and I think that’s a concern. But it is good to see that the director of NSA himself now today, with full access to classified information, is beginning to come a little bit closer to the truth, getting a little bit closer to the President’s viewpoint on that, which is this discussion that we’ve had over the last year doesn’t hurt us. It makes us stronger. So thanks for showing that.

Bamford: Thanks. One other thing that the article gets into, which is what we’re talking about here today, is the article quotes the new NSA director, who is also the commander of Cyber Command, as basically saying that it’s possible in the future that these cyber weapons will become sort of normal military weapons, and they’ll be treated sort of like guided missiles or cruise missiles and so forth.

Snowden: Cruise missiles or drones.

Bamford: What are your thoughts about that, having spent time in this whole line of work yourself?

Snowden: I think the public still isn’t aware of the frequency with which these cyber-attacks, as they’re being called in the press, are being used by governments around the world, not just the US. But it is important to highlight that we really started this trend in many ways when we launched the Stuxnet campaign against the Iranian nuclear program. It actually kicked off a response, sort of retaliatory action from Iran, where they realized they had been caught unprepared. They were far behind the technological curve as compared to the United States and most other countries. And this is happening across the world nowadays, where they realize that they’re caught out. They’re vulnerable. They have no capacity to retaliate to any sort of cyber campaign brought against them.

The Iranians targeted open commercial companies of U.S. allies. Saudi Aramco, the oil company there—they sent what’s called a wiper virus, which is actually sort of a Fisher Price, baby’s first hack kind of a cyber-campaign. It’s not sophisticated. It’s not elegant. You just send a worm, basically a self-replicating piece of malicious software, into the targeted network. It then replicates itself automatically across the internal network, and then it simply erases all of the machines. So people go into work the next day and nothing turns on. And it puts them out of business for a period of time.

But with enterprise IT capabilities, it’s not trivial, but it’s not impossible to restore a company to working order in fairly short time. You can image all of the work stations. You can restore your backups from tape. You can perform what’s called bare metal restores, where you get entirely new hardware that matches your old hardware, if the hardware itself was broken, and just basically paint it up, restore the data just like the original target was, and you’re back in the clear. You’re moving along.

Now, this is something that people don’t understand fully about cyber-attacks, which is that the majority of them are disruptive, but not necessarily destructive. One of the key differentiators with our level of sophistication and nation-level actors is they’re increasingly pursuing the capability to launch destructive cyber-attacks, as opposed to the disruptive kinds that you normally see online, through protestors, through activists, denial of service attacks, and so on. And this is a pivot that is going to be very difficult for us to navigate.

Bamford: Let me ask you about that, because that is the focus of the program here. It’s a focus because very few people have ever discussed this before, and it’s the focus because the U.S. launched their very first destructive cyber-attack, the Stuxnet attack, as you mentioned, in Iran. Can you just tell me what kind of a milestone that was for the United States to launch their very first destructive cyber-attack?

Snowden: Well, it’s hard to say it’s the first ever, because attribution is always hard with these kind of campaigns. But it is fair to say that it was the most sophisticated cyber-attack that anyone had ever seen at the time. And the fact that it was launched as part of a U.S. authorized campaign did mark a radical departure from our traditional analysis of the levels of risks we want to assume for retaliation.

When you use any kind of internet based capability, any kind of electronic capability, to cause damage to a private entity or a foreign nation or a foreign actor, these are potential acts of war. And it’s critical we bear in mind as we discuss how we want to use these programs, these capabilities, where we want to draw the line, and who should approve these programs, these decisions, and at what level, for engaging in operations that could lead us as a nation into a war.

The reality is if we sit back and allow a few officials behind closed doors to launch offensive attacks without any oversight against foreign nations, against people we don’t like, against political groups, radicals, and extremists whose ideas we may not agree with, and could be repulsive or even violent—if we let that happen without public buy-in, we won’t have any seat at the table of government to decide whether or not it’s appropriate for these officials to drag us into some kind of war activity that we don’t want, but we weren’t aware of at the time.

Bamford: And what you seem to be talking about also is the blowback effect. In other words, if we launch an attack using cyber warfare, a destructive attack, we run the risk of having been the most industrialized and electronically connected country in the world, that that’s a major problem for the US. Is that your thinking?

Snowden: I do agree that when it comes to cyber warfare, we have more to lose than any other nation on earth. The technical sector is the backbone of the American economy, and if we start engaging in these kind of behaviors, in these kind of attacks, we’re setting a standard, we’re creating a new international norm of behavior that says this is what nations do. This is what developed nations do. This is what democratic nations do. So other countries that don’t have as much respect for the rules as we do will go even further.

And the reality is when it comes to cyber conflicts between, say, America and China or even a Middle Eastern nation, an African nation, a Latin American nation, a European nation, we have more to lose. If we attack a Chinese university and steal the secrets of their research program, how likely is it that that is going to be more valuable to the United States than when the Chinese retaliate and steal secrets from a U.S. university, from a U.S. defense contractor, from a U.S. military agency?

We spend more on research and development than these other countries, so we shouldn’t be making the internet a more hostile, a more aggressive territory. We should be cooling down the tensions, making it a more trusted environment, making it a more secure environment, making it a more reliable environment, because that’s the foundation of our economy and our future. We have to be able to rely on a safe and interconnected internet in order to compete.

Bamford: Where do you see this going in terms of destruction? In Iran, for example, they destroyed the centrifuges. But what other types of things might be targeted? Power plants or dams? What do you see as the ultimate potential damage that could come from the cyber warfare attack?

Snowden: When people conceptualize a cyber-attack, they do tend to think about parts of the critical infrastructure like power plants, water supplies, and similar sort of heavy infrastructure, critical infrastructure areas. And they could be hit, as long as they’re network connected, as long as they have some kind of systems that interact with them that could be manipulated from internet connection.

However, what we overlook and has a much greater value to us as a nation is the internet itself. The internet is critical infrastructure to the United States. We use the internet for every communication that businesses rely on every day. If an adversary didn’t target our power plants but they did target the core routers, the backbones that tie our internet connections together, entire parts of the United States could be cut off. They could be shunted offline, and we would go dark in terms of our economy and our business for minutes, hours, days. That would have a tremendous impact on us as a society and it would have a policy backlash.

The solution, however, is not to give the government more secret authorities to put kill switches and monitors and snooping devices on the internet. It’s to reorder our priorities for how we deal with threats to the security of our critical infrastructure, for our electronic infrastructure. And what that means is taking bodies like the National Security Agency that have traditionally been about securing the nation and making sure that that’s their first focus.

In the last 10 years, we’ve seen—in the last 10 years, we’ve seen a departure from that traditional role of signals intelligence gathering overseas that’s related to responding to threats that are—

Bamford: Take your time.

Snowden: Right. What we’ve seen over the last decade is we’ve seen a departure from the traditional work of the National Security Agency. They’ve become sort of the national hacking agency, the national surveillance agency. And they’ve lost sight of the fact that everything they do is supposed to make us more secure as a nation and a society.

The National Security Agency has two halves, one that handles defense and one that handles offense. Michael Hayden and Keith Alexander, the former directors of NSA, they shifted those priorities, because when they went to Congress, they saw they could get more budget money if they advertised their success in attacking, because nobody is ever really interested in doing the hard work of defense.

But the problem is when you deprioritize defense, you put all of us at risk. Suddenly, policies that would have been unbelievable, incomprehensible even 20 years ago are commonplace today. You see decisions being made by these agencies that authorize them to install backdoors into our critical infrastructure, that allow them to subvert the technical security standards that keep your communication safe when you’re visiting a banking website online or emailing a friend or logging into Facebook.

And the reality is, when you make those systems vulnerable so that you can spy on other countries and you share the same standards that those countries have for their systems, you’re also making your own country more vulnerable to the same attacks. We’re opening ourselves up to attack. We’re lowering our shields to allow us to have an advantage when we attack other countries overseas, but the reality is when you compare one of our victories to one of their victories, the value of the data, the knowledge, the information gained from those attacks is far greater to them than it is to us, because we are already on top. It’s much easier to drag us down than it is to grab some incremental knowledge from them and build ourselves up.

Bamford: Are you talking about China particularly?

Snowden: I am talking about China and every country that has a robust intelligence collection program that is well-funded in the signals intelligence realm. But the bottom line is we need to put the security back in the National Security Agency. We can’t have the national surveillance agency. We’ve got to go—look, the most important thing to us is not being able to attack our adversaries, the most important thing is to be able to defend ourselves. And we can’t do that as long as we’re subverting our own security standards for the sake of surveillance.

Bamford: That is a very strange combination, where you have one half of the NSA, the Information Assurances Directorate, which is charged with protecting the country from cyber-attacks, coexisting with the Signals Intelligence Directorate and the Cyber Command, which is pretty much focused on creating weaknesses. Can you just tell me a little bit about how that works, the use of vulnerabilities and implants and exploits?

Snowden: So broadly speaking, there are a number of different terms that are used in the CNO, computer networks operations world.

Broadly speaking, there are a number of different terms that are used to define the vernacular in the computer network operations world. There’s CNA, computer network attack, which is to deny, degrade, or destroy the functioning of a system. There’s CND, computer network defense, which is protecting systems, which is noticing vulnerabilities, noticing intrusions, cutting them off, and repairing them, patching the holes. And there’s CNE, computer network exploitation, which is breaking into a system and leaving something behind, this sort of electronic ear that will allow you to monitor everything that’s happening from that point forward. CNE is typically used for espionage, for spying.

To achieve these goals, we use things like exploits, implants, vulnerabilities, and so on. A vulnerability is a weakness in a system, where a computer program has a flaw in its code that, when it thinks it’s going to execute a normal routine task, it’s actually been tricked into doing something the attacker asks it to do. For example, instead of uploading a file to display a picture online, you could be uploading a bit of code that the website will then execute.

Or instead of logging into a website, you could enter code into the username field or into the password field, and that would crash through the boundaries of memory—that were supposed to protect the program—into the executable space of computer instructions. Which means when the computer goes through its steps of what is supposed to occur, it goes, I’m looking for user login. This is the username. This is the password. And then when it should go, check to see that these are correct, if you put something that was too long in the password field, it actually overwrites those next instructions for the computer. So it doesn’t know it’s supposed to check for a password. Instead, it says, I’m supposed to create a new user account with the maximum privileges and open up a port for the adversary to access my network, and then so on and so forth.
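As a rough illustration of the memory overrun Snowden is describing, here is a minimal, deliberately vulnerable C sketch – the function, buffer size and password are hypothetical, not taken from any real system. The login check copies attacker-supplied input into a fixed-size stack buffer without checking its length, so an overlong “password” spills into the adjacent memory the program depends on next.

#include <stdio.h>
#include <string.h>

/* Hypothetical, deliberately vulnerable login check: the 16-byte buffer
   sits near the 'authenticated' flag on the stack, and strcpy() never
   checks the length of the attacker-supplied input. */
static int check_password(const char *input)
{
    int  authenticated = 0;   /* adjacent stack variable */
    char password[16];        /* fixed-size buffer       */

    strcpy(password, input);  /* input longer than 15 characters overruns
                                 'password' and corrupts nearby memory    */

    if (strcmp(password, "s3cret") == 0)
        authenticated = 1;

    return authenticated;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <password>\n", argv[0]);
        return 1;
    }
    printf("access %s\n", check_password(argv[1]) ? "granted" : "denied");
    return 0;
}

Whether the overflow actually flips the flag depends on how the compiler happens to lay out the stack, which is precisely the point: once the input exceeds the buffer, the program no longer behaves as its author intended, and that undefined behaviour is what an exploit is written against.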

Vulnerabilities are generally weaknesses that can be exploited. The exploit itself are little shims of computer code that allow you to run any sort of program you want.

Exploits are the shims of computer code that you wedge into vulnerabilities to allow you to take over a system, to gain access to them, to tell that system what you wanted to do. The payload or implant follows behind the exploit. The exploit is what wedges you into the system. The payload is the instructions that are left behind. Now, those instructions often say install an implant.

The implant is an actual program that runs—it stays behind after the exploit has occurred—and says, tell me all of the files on this machine. Make a list of all of the users. Send every new email or every new keystroke that’s been recorded on this program each day to my machine as the attacker, or really anything you can imagine. It can also tell nuclear centrifuges to spin up to the maximum RPM and then spin down quickly enough that no one notices. It can tell a power plant to go offline.

Or it could say, let me know what this dissident is doing day to day, because it lives on their cell phone and it keeps track of all their movements, who they call, who they’re associating with, what wireless device it’s nearby. Really an exploit is only limited—or not an exploit. An implant is only limited by the imagination. Anything you can program a computer to do, you can program an implant to do.

Bamford: So you have the implant, and then you have the payload, right?

Snowden: The payload includes the implant. The exploit is what basically breaks into the vulnerability. The payload is what the exploit runs, and that is basically some kind of executable code. And the implant is a payload that’s left behind long term, some kind of basically listening program, some spying program, or some kind of a destructive program.

Bamford: Interviewing you is like doing power steering. I don’t have to pull this out.

Snowden: Yeah, sorry, I get a little ramble-y on my answers, and the political answers aren’t really strong, but I’m not a politician, so I’m just trying my best on these.

Bamford: This isn’t nightly news, so we’ve got an hour.

Snowden: Yeah, I hope you guys cut this so it’s not so terrible.

Producer: We’ve got two cameras, and we can carve your words up.

Snowden: (laughter) Great.

Producer: But we won’t.

Bamford: Should mention this implant now—the implant sounds a bit like what used to be sleeper agents back in the old days of the Cold War, where you have an agent that’s sitting there that can do anything. It can do sabotage. It can do espionage. It can do whatever. And looking at one of those slides that came out, what was really fascinating was the fact that the slide was a map of the world, and they had little yellow dots on it. The little yellow dots were indicated as CNEs, computer network exploitation. And you expect to see them in North Korea, China, different places like that. But what was interesting when we looked at it was there were quite a few actually in Brazil, for example, and other places that were friendly countries. Any idea why the U.S. would want to do something like that?

Snowden: So the way the United States intelligence community operates is it doesn’t limit itself to the protection of the homeland. It doesn’t limit itself to countering terrorist threats, countering nuclear proliferation. It’s also used for economic espionage, for political spying to gain some knowledge of what other countries are doing. And over the last decade, that sort of went too far.

No one would argue that it’s in the United States’ interest to have independent knowledge of the plans and intentions of foreign countries. But we need to think about where to draw the line on these kind of operations so we’re not always attacking our allies, the people we trust, the people we need to rely on, and to have them in turn rely on us. There’s no benefit to the United States hacking Angela Merkel’s cell phone. President Obama said if he needed to know what she was thinking, he would just pick up the phone and call her. But he was apparently allegedly unaware that the NSA was doing precisely that. These are similar things we see happening in Brazil and France and Germany and all these other countries, these allied nations around the world.

And we also need to remember that when we talk about computer network exploitation, computer network attack, we’re not just talking about your home PC. We’re not just talking about a control system in a factory somewhere. We’re talking about your cell phone, and we’re also talking about internet routers themselves. The NSA and its sister agencies are attacking the critical infrastructure of the internet to try to take ownership of it. They hack the routers that connect nations to the internet itself.

And this is dangerous for a number of reasons. It does provide us a real intelligence advantage, but at the same time, it’s a serious risk. If one of these hacking operations goes wrong, and this has happened in the past, and it’s a core router that connects all of the internet service providers for an entire country to the internet, we’ve blacked out that entire nation from online access until that problem can be corrected. And these routers are not your little Linksys, D-Link routers sitting at home. We’re talking $60,000, $600,000, $6 million devices, complexes, that are not easy to fix, and they don’t have an off the shelf replacement that’s ready to swap in.

So we need to be very careful, and we need to make sure that whenever we’re engaging in a cyber-warfare campaign, a cyber-espionage campaign in the United States, that we understand the word cyber is used as a euphemism for the internet, because the American public would not be excited to hear that we’re doing internet warfare campaigns, internet espionage campaigns, because we realize that we ourselves are impacted by it. The internet is shared critical infrastructure for everyone on earth. It’s not supposed to be a domain of warfare. We’re not supposed to be putting our economy on the frontlines in the battleground. But that’s increasingly what’s happening today.

So we need to put processes, policies, and procedures in place with real laws that forbid going beyond the borders of what’s reasonable to ensure that the only time that we and other countries around the world exercise these authorities are when it is absolutely necessary, there’s not alternative means of achieving the appropriate outcome, and it’s proportionate to the threat. We shouldn’t be putting an entire nation’s infrastructure at risk to spy on one company, to spy on one person. But increasingly, we see that happening more and more today.

Bamford: You mentioned the problems, the dangers involved if you’re trying to put an exploit into some country’s central nervous system when it comes to the internet. For example in Syria, there was a time when everything went down, and that was blamed on the president of Syria, Bashar al-Assad. Did you have any particular knowledge of that?

Snowden: I don’t actually want to get into that one on camera, so I’ll have to demur on that.

Bamford: Can you talk around it somehow?

Snowden: What I would say is when you’re attacking a router on the internet, and you’re doing it remotely, it’s like trying to shoot the moon with a rifle. Everything has to happen exactly right. Every single variable has to be controlled and precisely accounted for. And that’s not possible to do when you have limited knowledge of the target you’re attacking.

So if you’ve got this gigantic router that you’re trying to hack, and you want to hack it in a way that’s undetectable by the systems administrators for that device, you have to get below the operating system level of that device, of that router. Not where it says here are the rules, here are the user accounts, here are the routes and the proper technical information that everybody who’s administering this device should have access to. Down onto the firmware level, onto the hardware control level of the device that nobody ever sees, because it’s sort of a dark place.

The problem is if you make a mistake when you’re manipulating the hardware control of a device, you can do what’s called bricking the hardware, and it turns it from a $6 million internet communications device to a $6 million paperweight that’s in the way of your entire nation’s communications. And that’s something that all I can say is has happened in the past.

Bamford: When we were in Brazil, we were shown this major internet connection facility. It was the largest internet hub in the southern hemisphere, and it’s sitting in Brazil. And the Brazilians had a lot of concern, because again, they saw the slide that showed all this malware being planted in Brazil. Is that a real concern that they should have, the fact that they’ve, number one, got this enormous internet hub sitting right in Sao Paulo, and then on the second hand, they’ve got NSA flooding the country with malware?

Snowden: The internet exchange is sort of the core points where all of the international cables come together, where all of the internet service providers come together, and they trade lines with each other, where we move from separate routes, separate highways on the internet into one coherent traffic circle where everybody can get on and off on the exit they want. These are priority one targets for any sort of espionage agency, because they provide access to so many people’s communications.

Internet exchanges and internet service providers—international fiber optic landing points—these are the key tools that governments go after in order to enable their programs of mass surveillance. If they want to be able to watch the entire population of a country instead of a single individual, you have to go after those bulk interchanges. And that’s what’s happening.

So it is a real threat, and the only way that can be accounted for is to make sure that there’s some kind of independent control and auditing, some sort of routine forensic investigations into these devices, to ensure that not only were they secure when they were installed, but they hadn’t been monitored or tampered with or changed in any way since that last audit occurred. And that requires doing things like creating mathematical proofs called hashes of the validity of the actual hardware signature and software signatures on these devices and their hardware.
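As a very rough sketch of the kind of audit Snowden mentions – the file name and reference digest below are placeholders, not a procedure from any real agency – the following C program uses OpenSSL to compute the SHA-256 hash of a firmware image and compare it with the known-good value recorded at the previous audit; any tampering with the image since then yields a different digest.

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>   /* build with: cc audit.c -lcrypto */

int main(void)
{
    /* Placeholders for illustration only: a firmware image captured from
       the device and the digest recorded at the previous audit. */
    const char *image_path   = "router-firmware.bin";
    const char *expected_hex =
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

    FILE *fp = fopen(image_path, "rb");
    if (!fp) { perror("fopen"); return 1; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(fp);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Final(digest, &ctx);

    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", digest[i]);

    printf("computed: %s\n", hex);
    puts(strcmp(hex, expected_hex) == 0
         ? "image matches the last audit"
         : "image has changed since the last audit");
    return 0;
}

On its own this only shows that the image differs from the last snapshot; binding the measurement to the device itself is what approaches such as signed firmware and hardware-backed attestation try to add.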

Bamford: Another area—you mentioned the presidential panel that looked into all these areas that are of concern now, which you’ve basically brought out these areas. And the presidential panel came out with I think 46 different recommendations. One of those recommendations dealt with restricting the use or cutting back or maybe even doing away with the idea of going after zero-day exploits. Can you tell me a little bit about your fears that you may have of the U.S. creating this market of zero-day exploits?

Snowden: So a zero-day exploit is a method of hacking a system. It’s sort of a vulnerability that has an exploit written for it, sort of a key and a lock that go together to a given software package. It could be an internet web server. It could be Microsoft Office. It could be Adobe Reader or it could be Facebook. But these zero-day exploits—they’re called zero-days because the developer of the software is completely unaware of them. They haven’t had any chance to react, respond, and try to patch that vulnerability away and close it.

The danger that we face in terms of policy of stockpiling zero-days is that we’re creating a system of incentives in our country and for other countries around the world that mimic our behavior or that see it as a tacit authorization for them to perform the same sort of operations is we’re creating a class of internet security researchers who research vulnerabilities, but then instead of disclosing them to the device manufacturers to get them fixed and to make us more secure, they sell them to secret agencies.

They sell them on the black market to criminal groups to be able to exploit these to attack targets. And that leaves us much less secure, not just on an individual level, but on a broad social level, on a broad economic level. And beyond that, it creates a new black market for computer weapons, basically digital weapons.

And there’s a little bit of a free speech issue involved in regulating this, because people have to be free to investigate computer security. People have to be free to look for these vulnerabilities and create proof of concept code to show that they are true vulnerabilities in order for us to secure our systems. But it is appropriate to regulate the use and sale of zero-day exploits for nefarious purposes, in the same way you would regulate any other military weapon.

And today, we don’t do that. And that’s why we see a growing black market with companies like Endgame, with companies like Vupen, where all they do—their entire business model is finding vulnerabilities in the most critical infrastructure software packages we have around the internet worldwide, and instead of fixing those vulnerabilities, they tear them open and let their customers walk in through them, and they try to conceal the knowledge of these zero-day exploits for as long as possible to increase their commercial value and their revenues.

Bamford: Now, of those 46 recommendations, including the one on the zero-day exploits that the panel came up with, President Obama only approved maybe five or six at the most of those 46 recommendations, and he didn’t seem to talk at all about the zero-day exploit recommendation. What do you think of that, the fact that that was sort of ignored by the President?

Snowden: I can’t comment on presidential policies. That’s a landmine for me. I would recommend you ask Chris Soghoian at the ACLU, American Civil Liberties Union, and he can get you any quote you want on that. You don’t need me to speak to that point, but you’re absolutely right that where there’s smoke, there’s fire, as far as that’s concerned.

Bamford: Well, as someone who has worked at the NSA, been there for a long time, during that time you were there, they created this entire new organization called Cyber Command. What are your thoughts on the creation of this new organization that comes just like the NSA, under the director of NSA? Again, backing up, the director of NSA for ever since the beginning was only three stars, and now he’s a four star general, or four star admiral, and he’s got this enormous largest intelligence agency in the world, the NSA, under him, and now he’s got Cyber Command. What are your thoughts on that, having seen this from the inside?

Snowden: There was a strong debate last year about whether or not the National Security Agency and Cyber Command should be split into two independent agencies, and that was what the President’s independent review board suggested was the appropriate method, because when you have an agency that’s supposed to be defensive married to an agency that’s entire purpose in life is to break things and set them on fire, you’ve got a conflict of interest that is really going to reduce the clout of the defensive agency, while the offensive branch gains more clout, they gain more budget dollars, they gain more billets and personnel assignments.

So there’s a real danger with that happening. And Cyber Command itself has always existed in a—Cyber Command itself has always been branded in a sort of misleading way from its very inception. The director of NSA, when he introduced it, when he was trying to get it approved, he said he wanted to be clear that this was not a defensive team. It was a defend the nation team. He’s saying it’s defensive and not defensive at the same time.

Now, the reason he says that is because it’s an attack agency, but going out in front of the public and asking them to approve an aggressive warfare focused agency that we don’t need is a tough sell. It’s much better if we say, hey, this is for protecting us, this is for keeping us safe, even if all it does every day is burn things down and break things in foreign countries that we aren’t at war with.

So there’s a real careful balance that needs to be struck there that hasn’t been addressed yet, but so long as the National Security Agency and Cyber Command exist under one roof, we’ll see the offensive side of their business taking priority over the defensive side of the business, which is much more important for us as a country and as a society.

Bamford: And you mentioned earlier, if we could just go back a little bit over this again, how much more money is going to the cyber offensive time than going to the cyber defensive side. Not only more money, but more personnel, more attention, more focus.

Snowden: I didn’t actually get the question on that one.

Bamford: I just wondered if you could just elaborate a little bit more on that. Again, we have Cyber Command and we have the Information Assurance Division and so forth, and there’s far more money and personnel and emphasis going on the cyber warfare side than the defensive side.

Snowden: I think the key point in analyzing the balance and where we come out in terms of offense versus defense at the National Security Agency and Cyber Command is that, more and more, what we’ve read in the newspapers and what we see debating in Congress, the fact the Senate is now trying to put forward a bill called CISPA, the Cyber Intelligence Sharing—I don’t even know what it’s called—let me take that back.

We see more and more things occurring like the Senate putting forward a bill called CISPA, which is for cyber intelligence sharing between private companies and government agencies, where they’re trying to authorize not just the total immunity, a grant of total immunity, to private companies if they share the information on all of their customers, on all the American citizens and whatnot that are using their services, with intelligence agencies, under the intent that that information be used to protect them.

Congress is also trying to immunize companies in a way that will allow them to invite groups like the National Security Agency or the FBI to voluntarily put surveillance devices on their internal networks, with the stated intent being to detect cyber-attacks as they occur and be able to respond to them. But we’re ceding a lot of authority there. We’re immunizing companies from due diligence and protecting their customers’ privacy rights.

Actually, this is a point that’s way too difficult to make in the interview. Let me dial back out of that.

What we see more and more is sort of a breakdown in the National Security Agency. It’s becoming less and less the National Security Agency and more and more the national surveillance agency. It’s gaining more offensive powers with each passing year. It’s gained this new Cyber Command that’s under the director of NSA that by any measure should be an entirely separate organization because it has an entirely separate mission. All it does is attack.

And that’s putting us, both as a nation and an economy, in a state of permanent vulnerability and permanent risk, because when we lose a National Security Agency and instead get an offensive agency, we get an attack agency in its place, all of our eyes are looking outward, but they’re not looking inward, where we have the most to lose. And this is how we miss attacks time and time again. This results in intelligence failures such as the Boston Marathon bombings or the underwear bomber, Abdul Farouk Mutallab (sic).

In recent years, the majority of terrorist attacks that have been disrupted in the United States have been disrupted due to things like the Times Square bomber, who was caught by a hotdog vendor, not a mass surveillance program, not a cyber-espionage campaign.

So when we cannibalize dollars from the defensive business of the NSA, securing our communications, protecting our systems, patching zero-day vulnerabilities, and instead we’re giving those dollars to them to be used for creating new vulnerabilities in our systems so that they can surveil us and other people abroad who use the same systems. When we give those dollars to subvert our encryption methods so we don’t have any more privacy online and we apply all of that money to attacking foreign countries, we’re increasing the state of conflict, not just in diplomatic terms, but in terms of the threat to our critical infrastructure.

When the lights go out at a power plant sometime in the future, we’re going to know that that’s a consequence of deprioritizing defense for the sake of an advantage in terms of offense.

Bamford: One other problem I think is that people think that, as you mentioned—just to sort of clarify this—people out there that don’t really follow this that closely think that the whole idea of Cyber Command was to protect the country from cyber-attacks. Is that a misconception, the fact that these people think that the whole idea of Cyber Command is to protect them from cyber-attack?

Snowden: Well, if you ask anybody at Cyber Command or look at any of the job listings for openings for their positions, you’ll see that the one thing they don’t prioritize is computer network defense. It’s all about computer network attack and computer network exploitation at Cyber Command. And you have to wonder, if these are people who are supposed to be defending our critical infrastructure at home, why are they spending so much time looking at how to attack networks, how to break systems, and how to turn things off? I don’t think it adds up as representing a defensive team.

Bamford: Now, also looking a little bit into the future, it seems like there’s a possibility that a lot of this could be automated, so that when the Cyber Command or NSA sees a potential cyber-attack coming, there could be some automatic devices that would in essence return fire. And given the fact that it’s so very difficult to—or let me back up. Given the fact that it’s so easy for a country to masquerade where an attack is coming from, do you see a problem where you’re automating systems that automatically shoot back, and they may shoot back at the wrong country, and could end up starting a war?

Snowden: Right. So I don’t want to respond to the first part of your question, but the second part there I can use, which is relating to attribution and automated response. Which is that the—it’s inherently dangerous to automate any kind of aggressive response to a detected event because of false positives.

Let’s say we have a defensive system that’s tied to a cyber-attack capability that’s used in response. For example, a system is created that’s supposed to detect cyber-attacks coming from Iran, denial of service attacks brought against a bank. They detect what appears to be an attack coming in, and instead of simply taking a defensive action, instead of simply blocking it at the firewall and dumping that traffic so it goes into the trash can and nobody ever sees it—no harm—it goes a step further and says we want to stop the source of that attack.

So we will launch an automatic cyber-attack at the source IP address of that traffic stream and try to take that system offline. We will fire a denial of service attack in response to it, to destroy, degrade, or otherwise diminish their capability to act from that.

But if that’s happening on an automated basis, what happens when the algorithms get it wrong? What happens when instead of an Iranian attack, it was simply a diagnostic message from a hospital? What happens when it was actually an attack created by an independent hacker, but you’ve taken down a government office that the hacker was operating from? That wasn’t clear.

What happens when the attack hits an office that a hacker from a third country had hacked into to launch that attack? What if it was a Chinese hacker launching an attack from an Iranian computer targeting the United States? When we retaliate against a foreign country in an aggressive manner, we the United States have stated in our own policies that’s an act of war that justifies a traditional kinetic military response.

We’re opening the doors to people launching missiles and dropping bombs by taking the human out of the decision chain for deciding how we should respond to these threats. And this is something we’re seeing more and more happening in the traditional means as our methods of warfare become increasingly automated and roboticized such as through drone warfare. And this is a line that we as a society, not just in the United States but around the world, must never cross. We should never allow computers to make inherently governmental decisions in terms of the application of military force, even if that’s happening on the internet.
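
To make the false-positive danger concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (looks_like_attack, block_at_firewall, launch_counterattack) is hypothetical and the detection rule is a toy; nothing here reflects any real NSA or Cyber Command system. The structural point is simply that a defensive policy fails safe when the detector is wrong, while an automated retaliation policy does not.

    # Illustrative sketch only: hypothetical names, toy detection logic.
    def looks_like_attack(source_ip: str, packets_per_second: int) -> bool:
        # Stand-in for a real detector; real detectors also misfire.
        return packets_per_second > 10_000

    def block_at_firewall(source_ip: str) -> str:
        # Defensive action: on a false positive, the worst case is that
        # some legitimate traffic gets dropped.
        return f"dropped traffic from {source_ip}"

    def launch_counterattack(source_ip: str) -> str:
        # Automated offensive action: on a false positive, the "attacker"
        # may be a hospital, a hacked proxy, or the wrong country entirely.
        return f"fired denial-of-service at {source_ip}"

    def respond(source_ip: str, packets_per_second: int, policy: str = "defend") -> str:
        if not looks_like_attack(source_ip, packets_per_second):
            return "passed"
        if policy == "defend":
            return block_at_firewall(source_ip)
        return launch_counterattack(source_ip)

    # A burst of diagnostic traffic from a hospital trips the detector:
    print(respond("192.0.2.50", packets_per_second=12_000, policy="defend"))
    print(respond("192.0.2.50", packets_per_second=12_000, policy="retaliate"))

The only difference between the two runs is the policy, but with the human taken out of the loop the second one is the kind of aggressive response Snowden warns can be read as an act of war.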

Bamford: And Richard Clarke has said that it’s more important for us to defend ourselves against attacks from China than to attack China using cyber tools. Do you agree with that?

Snowden: I strongly agree with that. The concept there is that there’s not much value to us attacking Chinese systems. We might take a few computers offline. We might take a factory offline. We might steal secrets from university research programs, and even something high-tech. But how much more does the United States spend on research and development than China does? Defending ourselves from internet-based attacks, internet-originated attacks, is much, much more important than our ability to launch attacks against similar targets in foreign countries, because when it comes to the internet, when it comes to our technical economy, we have more to lose than any other nation on earth.

Bamford: I think you said this before, but in the past, the U.S. has actually used cyber warfare to attack things like hospitals and things like that in China?

Snowden: So they’re not cyber warfare capabilities. They’re CNE, computer network exploitation.

Bamford: Yeah, if you could just explain that a little.

Snowden: I’m not going to get into that on camera. But what the stories showed and what you can sort of voice over is that Chinese universities—not just Chinese, actually—scratch that—is that the National Security Agency has exploited internet exchanges, internet service providers, including in Belgium—the Belgacom case— through their allies at GCHQ and the United Kingdom.

They’ve attacked universities, hospitals, internet exchange points, internet service providers—the critical infrastructure that all of us around the world rely on.

And it’s important to remember when you start doing things like attacking hospitals, when you start doing things like attacking universities, when you start attacking things like internet exchange points, when something goes wrong, people can die. If a hospital’s infrastructure is affected, lifesaving equipment turns off. When an internet exchange point goes offline, voice over IP calls, which are now a common method of communication because cell phone networks route through internet communication points, stop going through, and people can’t call 911. Buildings burn down. All because we wanted to spy on somebody.

So we need to be very careful about where we draw the line and what is absolutely necessary and proportionate to the threat that we face at any given time. I don’t think there’s anything, any threat out there today that anyone can point to, that justifies placing an entire population under mass surveillance. I don’t think there’s any threat that we face from some terrorist in Yemen that says we need to hack a hospital in Hong Kong or Berlin or Rio de Janeiro.

Bamford: I know we’re on a time limit here, but are there questions that I haven’t—

Producer: Let’s take a two minute break here.

Bamford: One of the most interesting things about the Stuxnet attack was that the President—both President Bush and President Obama—were told don’t worry, this won’t be detected by anybody. There’ll be no return address on this. And number two, it won’t escape from the area that they’re focusing it on anyway, the centrifuges. Both of those proved wrong, and the virus did escape, and it was detected, and then it was traced back to the United States. So is this one of the big dangers, the fact that the President is told these things, the President doesn’t have the capability to look into every technical issue, and then these things can wind up hitting us back in the face?

Snowden: The problem is the internet is the most complex system that humans have ever invented. And with every internet enabled operation that we’ve seen so far, all of these offensive operations, we see knock on effects. We see unintended consequences. We see emergent behavior, where when we put the little evil virus in the big pool of all our private lives, all of our private systems around the internet, it tends to escape and go Jurassic Park on us. And as of yet, we’ve found no way to prevent that. And given the complexity of these systems, it’s very likely that we never will.

What we need to do is we need to create new international standards of behavior—not just national laws, because this is a global problem. We can’t just fix it in the United States, because there are other countries that don’t follow U.S. laws. We have to create international standards that say these kinds of things should only ever occur when it is absolutely necessary, and that the response, the operation, is tailored to be precisely restrained and proportionate to the threat faced. And that’s something that today we don’t have, and that’s why we see these problems.

Bamford: Another problem is, back in the Cold War days—and most people are familiar with that—when there was a fairly limited number of countries that could actually develop nuclear weapons. There were a handful of countries basically that could have the expertise, take the time, find the plutonium, put a nuclear weapon together. Today, the world is completely different, and you could have a small country like Fiji with the capability of doing cyber warfare. So it isn’t limited like it was in those days to just a handful of countries. Do you see that being a major problem with this whole idea of getting into cyber warfare, where so many countries have the capability of doing cyber warfare, and the U.S. being the most technologically vulnerable country?

Snowden: Yeah, you’re right. The problem is that we’re more reliant on these technical systems. We’re more reliant on the critical infrastructure of the internet than any other nation out there. And when there’s such a low barrier to entering the domain of cyber-attacks—cyber warfare as they like to talk up the threat—we’re starting a fight that we can’t win.

Every time we walk on to the field of battle and the field of battle is the internet, it doesn’t matter if we shoot our opponents a hundred times and hit every time. As long as they’ve hit us once, we’ve lost, because we’re so much more reliant on those systems. And because of that, we need to be focusing more on creating a more secure, more reliable, more robust, and more trusted internet, not one that’s weaker, not one that relies on this systemic model of exploiting every vulnerability, every threat out there. Every time somebody on the internet sort of glances at us sideways, we launch an attack at them. That’s not going to work out for us long term, and we have to get ahead of the problem if we’re going to succeed.

Bamford: Another thing that the public doesn’t really have any concept of, I think at this point, is how organized this whole Cyber Command is, and how aggressive it is. People don’t realize there’s a Cyber Army now, a Cyber Air Force, a Cyber Navy. And the fact that the models for some of these organizations like the Cyber Navy are things like we will dominate cyberspace the same way we dominate the sea or the same way that we dominate land and the same way we dominate space. So it’s this whole idea of creating an enormous military just for cyber warfare, and then using this whole idea of we’re going to dominate cyberspace, just like the navies of centuries ago dominated the seas.

Snowden: Right. The reason they say that they want to dominate cyberspace is because it’s politically incorrect to say you want to dominate the internet. Again, it’s sort of a branding effort to get them the support they need, because we the public don’t want to authorize the internet to become a battleground. We need to do everything we can as a society to keep that a neutral zone, to keep that an economic zone that can reflect our values, both politically, socially, and economically. The internet should be a force for freedom. The internet should not be a tool for war. And for us, the United States, a champion of freedom, to be funding and encouraging the subversion of a tool for good to be a tool used for destructive ends is, I think, contrary to the principles of us as a society.

Bamford: You had a question, Scott?

Producer: It was really just a question about (inaudible) vulnerabilities going beyond operating systems that we know of, (inaudible) and preserving those vulnerabilities, that that paradox extends over into critical infrastructure as well as—

Snowden: Let me just freestyle on that for a minute, then you can record the question part whenever you want. Something we have to remember is that everything about the internet is interconnected. All of our systems are not just common to us because of the network links between them, but because of the software packages, because of the hardware devices that comprise it. The same router that’s deployed in the United States is deployed in China. The same software package that controls the dam floodgates in the United States is the same as in Russia. The same hospital software is there in Syria and the United States.

So if we are promoting the development of exploits, of vulnerabilities, of insecurity in this critical infrastructure, and we’re not fixing it when we find it—when we find critical flaws, instead we put it on the shelf so we can use it the next time we want to launch an attack against some foreign country. We’re leaving ourselves at risk, and it’s going to lead to a point where the next time a power plant goes down, the next time a dam bursts, the next time the lights go off in a hospital, it’s going to be in America, not overseas.

Bamford: Along those lines, one of the things we’re focusing on in the program is the potential extent of cyber warfare. And we show a dam, for example, in Russia, with a major power plant under it. This was a facility that was three times larger than the Hoover Dam, and it exploded. One of the turbines, which weighed as much as two Boeing 747s, was blown 50 feet into the air and then crashed down and killed 75 people. And that was all because of what was originally thought to be a cyber-attack, but turned out to be a mistaken cyber command that was sent to make this happen. It was accidental.

But the point is this is what can happen if somebody wants to deliberately do this, and I don’t think that’s what many people in the U.S. have a concept of, that this type of warfare can be that extensive. And if you could just give me some ideas along those lines of how devastating this can be, not just in knocking off a power grid, but knocking down an entire dam or an entire power plant.

Snowden: So I don’t actually want to get in the business of enumerating the list of the horrible of horribles, because I don’t want to hype the threat. I’ve said all these things about the dangers and what can go wrong, and you’re right that there are serious risks. But at the same time, it’s important to understand that this is not an existential threat. Nobody’s going to press a key on their keyboard and bring down the government. Nobody’s going to press a key on their keyboard and wipe a nation off the face of the earth.

We have faced threats from criminal groups, from terrorists, from spies throughout our history, and we have limited our responses. We haven’t resorted to total war every time we have a conflict around the world, because that restraint is what defines us. That restraint is what gives us the moral standing to lead the world. And if we go, “there are cyber threats out there, this is a dangerous world, and we have to be safe, we have to be secure no matter the cost,” we’ve lost that standing.

We have to be able to reject disproportionate and unjustified responses in the cyber domain just as we do in the physical domain. We reject techniques like torture regardless of whether they’re effective or ineffective because they are barbaric and harmful on a broad scale. It’s the same thing with cyber warfare. We should never be attacking hospitals. We should never be taking down power plants unless that is absolutely necessary to ensure our continued existence as a free people.

Bamford: That’s fine with me. If there’s anything that you think we didn’t cover or you want to put in there?

Snowden: I was thinking about two things. One is—I went a lot off on the politics here, and a lot of it was ramble-y, so I might try one more thing on that. The other one I was talking about the VFX thing for the cloud, how cyber-attacks happen.

Producer: So I just want sort of an outline of where you want to go to make sure we get that.

Bamford: Yeah, what kind of question you want me to ask.

Snowden: You wouldn’t even necessarily have to ask a question. It would just be—

Producer: (inaudible).

Snowden: Yeah. It would just be like a segment. I would say people ask how does a cyber-attack happen. People ask what does exploitation on the internet look like, and how do you find out where it came from. Most people nowadays are aware of what IP addresses are, and they know that you shouldn’t send an email from a computer that’s associated with you if you don’t want it to be tracked back to you. You don’t want to hack the power plant from your house if you don’t want them to follow the trail back and see your IP address.

But there are also what are called proxies, proxy servers on the internet, and this is very typical for hackers to use. They create what are called proxy chains where they gain access to a number of different systems around the world, sometimes by hacking these, and they use them as sort of relay boxes. So you’ve got the originator of an attack all the way over here on the other side of the planet in the big orb of the internet, just a giant constellation of network links all around. And then you’ve got their intended victim over here.

But instead of going directly from them to the victim in one straight path where this victim sees the originator, the attacker, was the person who sent the exploit to them, who attacked their system, you’ll see they do something where they zigzag through the internet. They go from proxy to proxy, from country to country around the world, and they use that last proxy, that last step in the chain, to launch the attack.

So while the attack could have actually come from Missouri, an investigator responding to the attack will think it came from the Central African Republic or from the Sudan or from Yemen or from Germany. And the only way to track that back is to hack each of those systems back through the chain or to use mass surveillance techniques to have basically a spy on each one of those links so you can follow the tunnel all the way home.
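
As a toy model of the attribution problem described above, a proxy chain is nothing more than a list of hops, and the victim’s logs only ever record the last one. The Python sketch below uses addresses from the RFC 5737 documentation ranges, and the country labels are invented for the example; it illustrates the concept, not any real case.

    # Toy model of proxy-chain attribution. Documentation addresses only.
    proxy_chain = [
        ("192.0.2.10",   "true origin (say, Missouri)"),
        ("198.51.100.4", "compromised relay (say, Germany)"),
        ("203.0.113.99", "last hop (say, the Central African Republic)"),
    ]

    def victim_log_view(chain):
        # The target only ever sees the source address of the final hop.
        ip, label = chain[-1]
        return f"attack observed from {ip}, {label}"

    def full_trace(chain):
        # Recovering the real origin means compromising or monitoring every
        # relay in the chain, hop by hop, back toward the attacker.
        return " -> ".join(ip for ip, _ in reversed(chain))

    print(victim_log_view(proxy_chain))
    print("path back to the origin:", full_trace(proxy_chain))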

The more I think about it, the more I think that would be way too complicated to—

Producer: No, I was just watching your hands. That was just filling in the blanks.

Bamford: No, I was, too. That’ll be fine.

Producer: And it’s a good point of how you can automate responses and how you—

Bamford: Yeah, we can just drive in and draw in those zigzags.

Snowden: Right. I mean, yeah, the way I would see it is just sort of like stars, like a constellation of points. And you’ve got different colored paths going between them. And then you just highlight the originator and the victim. And they don’t have to be on the edges. They could even be in the center of the cloud somewhere. And then you have sort of a green line going straight between them, and it turns red when it hacks, but then you see the little police agency follow it back. And then so you put an X on it and you replace it with the zigzag line that’s green, and then it goes red when it attacks, to sort of call it the path.

Bamford: From Missouri to the Central African Republic.

Snowden: Yeah.

Producer: Are there any other visualizations that you can think of that maybe you see it as an image as opposed to a (multiple conversations; inaudible).

Snowden: I think one of the good ones to do—and you can do it pretty cheaply, even almost funny, like cartoon-like, and sort of like almost a Flash animation, like paper cutouts—would be to help people visualize the problem with the U.S. prioritizing offense over defense is you look at it—and I’ll give a voiceover here.

When you look at the problem of the U.S. prioritizing offense over defense, imagine you have two bank vaults, the United States bank vault and the Bank of China. But the U.S. bank vault is completely full. It goes all the way up to the sky. And the Chinese bank vault or the Russian bank vault or the African bank vault or whoever the adversary of the day is, theirs is only half full or a quarter full or a tenth full.

But the U.S. wants to get into their bank vault. So what they do is they build backdoors into every bank vault in the world. But the problem is their vault, the U.S. bank vault, has the same backdoor. So while we’re sneaking over to China and taking things out of their vault, they’re also sneaking over to the United States and taking things out of our vault. And the problem is, because our vault is full, we have so much more to lose. So in relative terms, we gain much less from breaking into the vaults of others than we do from having others break into our vaults. That’s why it’s much more important for us to be able to defend against foreign attacks than it is to be able to launch successful attacks against foreign adversaries.

You know, just something sort of symbolic and quick that people can instantly visualize.

Producer: The other thing I’d like to put to you, because we have to find somebody to do it, is how do you make a cyber-weapon? What is malware? What is that?

Snowden: When people are talking about malware, what they really mean is—when people are talking about malware, what they—

When people are talking about cyber weapons, digital weapons, what they really mean is a malicious program that’s used for a military purpose. A cyber weapon could be something as simple as an old virus from 1995 that just happens to still be effective if you use it for that purpose.

Custom-developed digital weapons, cyber weapons nowadays typically chain together a number of zero-day exploits that are targeted against the specific site, the specific target that they want to hit. But it depends, this level of sophistication, on the budget and the quality of the actor who’s instigating the attack. If it’s a country that’s poorer or less sophisticated, it’ll be a less sophisticated attack.

But the bare bones tools for a cyber-attack are to identify a vulnerability in the system you want to gain access to or you want to subvert or you want to deny, destroy, or degrade, and then to exploit it, which means to send codes, deliver code to that system somehow, whether it’s locally in the physical realm or on the same network or remotely across the internet, across the global network, and get that code to that vulnerability, to that crack in their wall, jam it in there, and then have it execute.

The payload can then be the action, the instructions that you want to execute on that system, which typically, for the purposes of espionage, would be leaving an implant behind to listen in on what they’re doing, but could just as easily be something like the wiper virus that just deletes everything from the machines and turns them off. Really, it comes down to any instructions that you can think of that you would want to execute on that remote system.
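
Purely as a conceptual model, and with no operational detail, the three pieces named here (the vulnerability, the delivery of the exploit code, and the payload) can be thought of as fields in a simple data structure. The field values below are placeholders, not descriptions of any real tool.

    # Conceptual model only; placeholder values, no operational content.
    from dataclasses import dataclass

    @dataclass
    class CyberWeapon:
        vulnerability: str  # the flaw that gives access: "the crack in the wall"
        delivery: str       # how the code reaches the target: local, same network, or remote
        payload: str        # the instructions executed once inside

    espionage_example = CyberWeapon(
        vulnerability="unpatched flaw in a widely deployed software package",
        delivery="remotely, across the internet",
        payload="implant that quietly listens and reports back",
    )

    sabotage_example = CyberWeapon(
        vulnerability="the same kind of flaw",
        delivery="locally, in the physical realm or on the same network",
        payload="wiper that deletes everything and turns the machines off",
    )

    for weapon in (espionage_example, sabotage_example):
        print(weapon)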

Bamford: Along those lines, there’s one area that could really be visualized I think a lot better, and that’s the vulnerabilities. The way I’ve said it a few times but might be good if you thought about it is looking at a bank vault, and then there are these little cracks, and that enables somebody to get into the bank vault. So what the U.S. is doing is cataloguing all those little cracks instead of telling the bank how to correct those cracks. Problem is other people can find those same cracks.

Snowden: Other people can see the same cracks, yeah.

Bamford: And take the money from the bank, in which case the U.S. did a disservice to the customers of the bank, which is the public, by not telling the bank about the cracks in the first place.

Snowden: Yeah, that’s perfect. And another way to do it is not just cracks in the walls, but it could be other ways in. You can show a guy sort of peeking over the wall, you can see a guy tunneling underneath, you can see a guy going through the front door. All of those, in cyber terms, are vulnerabilities, because it’s not that you have to look for one hole of a specific type. It’s the whole paradigm. You look at the totality of their security situation, and you look for any opening by which you might subvert the intent of that system. And you just go from there. There’s a whole world of exploitation, but it goes beyond the depth of the general audience.

Producer: We can just put them all (multiple conversations; inaudible).

Bamford: Any others?

Snowden: One thing, yeah. There were a couple things I wanted to think about. One was man-in-the-middle, a type of attack you should illustrate. It’s routine hacking, but it’s related to CNE specifically, computer network exploitation. But I think conflating it into cyber warfare helps people understand what it is.

A man-in-the-middle attack is where someone like the NSA, somebody who has access to the transmission medium that you use for communicating, actually subverts your communication. They intercept it, read it, and pass it on, or they intercept it, modify it, and pass it on.

You can imagine this as you put a letter in your mailbox for the postal carrier to pick up and then deliver, but you don’t know that the postal carrier actually took it to the person that you want until they confirm that it happened. The postal carrier could have replaced it with a different letter. They could have opened it. If it was a gift, they could have taken the gift out, things like that.

We have, over time, created global standards of behavior that mean mailmen don’t do that. They’re afraid of the penalties. They’re afraid of getting caught. And we as a society recognize that the value of having trusted means of communication, trusted mail, far outweighs any benefit that we might get from being able to freely tamper with mail. We need those same standards to apply to the internet. We need to be able to trust that when we send our emails through Verizon, that Verizon isn’t sharing them with the NSA, that Verizon isn’t sharing them with the FBI or German intelligence or French intelligence or Russian intelligence or Chinese intelligence.

The internet has to be protected from this sort of intrusive monitoring, or else we’ll lose the medium upon which we all rely for the basis of our economy and our normal life (everybody touches the internet nowadays), and that’s going to have broad consequences that we cannot predict.
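
To see the tamper-evident-letter idea in code: if sender and recipient share a secret key, a message authentication code makes any modification by a man in the middle detectable. The sketch below covers only the integrity half of the story; real protocols such as TLS or Signal combine it with encryption and key exchange, and the shared key here is just a stand-in.

    # Minimal integrity-check sketch using Python's standard library.
    import hashlib
    import hmac

    shared_key = b"pre-shared secret between sender and recipient"  # placeholder

    def seal(message: bytes):
        # The sender computes a tag over the message with the shared key.
        tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
        return message, tag

    def verify(message: bytes, tag: str) -> bool:
        # The recipient recomputes the tag; any tampering changes it.
        expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    msg, tag = seal(b"meet at noon")
    tampered = b"meet at midnight"

    print(verify(msg, tag))       # True: the letter arrived untouched
    print(verify(tampered, tag))  # False: the carrier rewrote it in transit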

Producer: Terrific. I think we ought to keep going and do like an interactive Edward Snowden kind of app.

Snowden: My lawyer would murder me.

Producer: No, you really—(inaudible) used to give classes.

Snowden: Yeah, I used to teach. It was on a much more specific level, which is why I keep having to dial back and think about it.

Producer: You’re a very clear speaker about it.

Snowden: Let me just one more time do the offense and defense and security thing. I think you guys already have enough to patch it together, but let me just try to freestyle on it.

The community of technical experts who really manage the internet, who built the internet and maintain it, are becoming increasingly concerned about the activities of agencies like the NSA or Cyber Command, because what we see is that defense is becoming less of a priority than offense. There are programs we’ve read about in the press over the last year, such as the NSA paying RSA $10 million to use an insecure encryption standard by default in their products. That’s making us more vulnerable not just to the snooping of our domestic agencies, but also foreign agencies.

We saw another program called Bullrun which subverted the—which subverts—it continues to subvert similar encryption standards that are used for the majority of e-commerce all over the world. You can’t go to your bank and trust that communication if those standards have been weakened, if those standards are vulnerable. And this is resulting in a paradigm where these agencies wield tremendous power over the internet at the price of making the rest of their nation incredibly vulnerable to the same kind of exploitative attacks, to the same sort of mechanisms of cyber-attack.

And that means while we may have a real advantage when it comes to eavesdropping on the military in Syria or trade negotiations over the price of shrimp in Indonesia—which is actually a real anecdote—or even monitoring the climate change conference, it means we end up living in an America where we no longer have a National Security Agency. We have a national surveillance agency. And until we reform our laws and until we fix the excesses of these old policies that we inherited in the post-9/11 era, we’re not going to be able to put the security back in the NSA.
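
One modest check any reader can do is look at which protocol version and cipher suite their own connections actually negotiate. The sketch below uses only Python’s standard library, with example.org as a placeholder host; it can flag obviously weak parameters, though it cannot, of course, reveal a standard that was deliberately weakened at the design stage.

    # Inspect the TLS parameters negotiated with a server (placeholder host).
    import socket
    import ssl

    def negotiated_params(host: str, port: int = 443):
        ctx = ssl.create_default_context()  # modern defaults, certificate checks on
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # e.g. ('TLSv1.3', ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256))
                return tls.version(), tls.cipher()

    if __name__ == "__main__":
        version, cipher = negotiated_params("example.org")
        print("protocol:", version)
        print("cipher suite:", cipher)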

Bamford: That’s great. Just along those lines, from what you know about the project Bullrun and so forth, how secure do you think things like AES, DES, those things are, the advanced encryption standard?

Snowden: I don’t actually want to respond to that one on camera, and the answer is I actually don’t know. But yeah, so let’s leave that one.

Bamford: I mean, that would have been the idea to weaken it.

Snowden: Right. The idea would be to weaken it, but which standards? Like is it AES? Is it the other ones? DES was actually stronger than we thought it was at the time because the NSA had secretly manipulated the standard to make it stronger back in the day, which was weird, but that shows the difference in thinking between the ’80s and the ’90s. It was the S-boxes. That’s what the modification was called. The S-boxes were the change that was made. And today, they go, oh, this is too strong, let’s weaken it. The NSA was actually concerned back in the time of the crypto wars with improving American security. Nowadays, we see that their priority is weakening our security, just so they have a better chance of keeping an eye on us.

Bamford: Right, well, I think that’s perfect. So why don’t we just do the—

Producer: Would you like some coffee? Something to drink?

Bamford: Yeah, we can get something from room service, if you like.

Snowden: I actually only drink water. That was one of the funniest things early on. Mike Hayden, the former NSA and CIA director, was—he gave some sort of incendiary speech—

Bamford: Oh, I know what you’re going to say, yeah.

Snowden: —in like a church in D.C., and Barton Gellman was there. He was one of the reporters. It was funny because he was talking about how I was—everybody in Russia is miserable. Russia is a terrible place. And I’m going to end up miserable and I’m going to be a drunk and I’m never going to do anything. I don’t drink. I’ve never been drunk in my life. And they talk about Russia like it’s the worst place on earth. Russia’s great.

Bamford: Like Stalin is still in charge.

Snowden: Yeah, I know. It’s crazy.

Bamford: But you know what he was referring to, I think. You know what he was flashing back to was—and I’d be curious whether you’ve actually heard about this or not—

Snowden: Philby and Burgess and—

Bamford: Martin and Mitchel.

Snowden: I actually don’t remember the Martin and Mitchell case that well. I’m aware of the outlines of it.

Bamford: But you know what they did?

Snowden: No.

CentcomHack

Shhh… CENTCOM Hack More Than Just Twitter Feed & YouTube Channel

The CENTCOM hack was much more damaging than the Pentagon has openly admitted (a Pentagon spokesman said it was “little more than a prank or vandalism”):

ShadowPeople-thespecialhead

Shhh… Online Privacy: How to Track & Manage Our Digital Shadow

Photo (above) credit: http://thespecialhead.deviantart.com/art/Shadow-people-304525517

I found this excellent MyShadow website, which not only explains what digital shadows are but also provides a useful tool to check what traces one leaves online (by specifying the hardware and software one uses) and, best of all, explores ways to mitigate them.

Have fun cleaning up your digital footprints.

Shadow-myshadowORG
Shadow-myshadowORG2
Shadow-myshadowORG3
Shadow-myshadowORG4

FacialRecog-FBI2

Shhh… Facial Recognition & Risks: FBI to Photograph All Americans

FacialRecog-FBI3

Following up on an earlier post on the same topic:

iCloud

Shhh… Facial Recognition & Risks: Encoding Your Photos with Photoscrambler

Continuing from my blog post yesterday: shouldn’t one feel guilty about posting photos of their loved ones online without knowing or truly understanding the underlying risks?

Well, instead of covering the face(s), how about encoding your photos with a personal secret code so that only you and selected parties can see them? That’s what the software PhotoScrambler is about.

FacialRecogn

Shhh… Facial Recognition & Risks: How Much Is Your Face Worth?

If you’re still drawing up your new year resolutions… how about resolving never to post (and tag) any photos of yourself and your loved ones online?

Yes, it’s a social norm these days – just look at the Facebook sphere – but I can’t explain the risks better than this excellent presentation (below) from the Make Use Of blog about facial recognition technology and the risks of posting our photos online.

Food for thought?

FacialRecog-1
FacialRecog-2
FacialRecog-3
FacialRecog-4
FacialRecog-5
FacialRecog-6
FacialRecog-7
FacialRecog-8

FortuneCookie

Shhh… Get a New Home Router – 12 Million Vulnerable to “Misfortune Cookie” Hacks

Here’s one for the (urgent) To Do List, as the following article (below) from threatpost.com explains…

12 Million Home Routers Vulnerable to Takeover

by Michael Mimoso December 18, 2014 , 12:23 pm

More than 12 million devices running an embedded webserver called RomPager are vulnerable to a simple attack that could give a hacker man-in-the-middle position on traffic going to and from home routers from just about every leading manufacturer.

Mostly ISP-owned residential gateways manufactured by D-Link, Huawei, TP-Link, ZTE, Zyxel and several others are currently exposed. Researchers at Check Point Software Technologies reported the flaw, which they’ve called Misfortune Cookie, to all of the affected vendors and manufacturers, and most have responded that they will push new firmware and patches in short order.

The problem with embedded device security is that, with consumer-owned gear especially, it’s up to the device owner to find and flash new firmware, leaving most of the devices in question vulnerable indefinitely.

In the case of the RomPager vulnerability, an attacker need only send a single packet containing a malicious HTTP cookie to exploit the flaw. Such an exploit would corrupt memory on the device and allow an attacker to remotely gain administrative access to the device.

“We hope this is a game-changing wake-up call,” said Shahar Tal, malware and vulnerability research manager with Check Point. “Certainly in terms of numbers, I don’t remember a vulnerability released that had 12 million endpoints online since maybe Conficker in 2008. This is really, really bad and the incredibly slow update propagation chain makes it worse.”

Tal said the vulnerable code was written in 2002 and given to chipset makers bundled in a software development kit (SDK). This SDK was given to manufacturers who used it when building their respective firmware; ISPs, Tal said, also used the same SDK to prepare custom firmware used in consumer residential devices.

“The vulnerable code is from 2002 and was actually fixed in 2005 [by AllegroSoft, makers of RomPager] and yet still did not make it into consumer devices,” Tal said. “It’s present in device firmware manufactured in 2014 that we downloaded last month. This is an industry problem; something is wrong.”

Tal said Check Point conducted Internet scans that show the 12 million devices exposed online in 189 countries. In some of those countries, Tal said, vulnerability rates hover around 10 percent, and in one country half of its Internet users are at risk.

“Even when people become aware of this, I don’t expect updated firmware to be deployed in 189 countries,” Tal said. “This will be with us for months and years to come.”

That means that vulnerable home routers are at risk to remote attacks that put not only Internet traffic at risk, but also other devices on a local network such as printers.

“The implications of these risks mean more than just a privacy violation – they also set the stage for further attacks, such as installing malware on devices and making permanent configuration changes,” Check Point wrote in an analysis published today. “This WAN-to-LAN free-crossing is also bypassing any firewall or isolation functionality previously provided by your gateway and breaks common threat models. For example, an attacker can try to access your home webcam (potentially using default credentials) or extract data from your business NAS backup drive.”

Tal said Check Point is not aware of any exploits of this issue, but assumes that researchers and black hats will soon begin pinging Shodan and doing Google searches looking for vulnerable devices.

“This is very easy to exploit once you figure out the program internals,” Tal said. “We are assuming that some researchers will do that in upcoming days and we hope vendors react as fast as possible to get consumers protected.”

Some vendors, which Tal would not name, have already shared beta versions of upgraded firmware with Check Point, and Check Point has confirmed the issue as patched in those cases.

“Everyone is aware that embedded devices are insecure, but we haven’t had one game-changing event that crosses boundaries and makes the industry understand this,” Tal said. “This one is definitely worth the attention and needs fixing.”
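
While waiting for vendor patches, one rough first check is to see whether your router’s web interface even advertises RomPager, and which version. The sketch below (standard-library Python, assuming the common gateway address 192.168.1.1) only reads the HTTP Server banner; a missing or altered banner proves nothing and firmware can lie, so treat any RomPager hit as a prompt to look for updated firmware, not as a real vulnerability test. The 4.34 threshold is the version in which AllegroSoft reportedly fixed the flaw back in 2005.

    # Heuristic banner check only; not a vulnerability scanner.
    import re
    import urllib.error
    import urllib.request

    FIXED_VERSION = (4, 34)  # version in which the flaw was reportedly fixed

    def server_banner(router_ip="192.168.1.1"):
        req = urllib.request.Request(f"http://{router_ip}/", method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.headers.get("Server")
        except urllib.error.HTTPError as err:
            return err.headers.get("Server")  # 401/404 replies still carry the banner
        except OSError:
            return None

    banner = server_banner()
    match = re.search(r"RomPager/(\d+)\.(\d+)", banner or "")
    if match:
        version = tuple(int(part) for part in match.groups())
        note = "older than the reported fix" if version < FIXED_VERSION else "at or past the reported fix"
        print(f"RomPager {version[0]}.{version[1]} detected ({note}); ask your vendor for new firmware.")
    else:
        print("No RomPager banner seen, which is not proof the device is safe.")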

data-breach-DATA

New US Sanctions on North Korea – Comparing Sony & the World’s Biggest Data Breaches

In what looks like the opening salvo in response to the major cyberattack on Sony Pictures Entertainment, the United States slapped North Korea with a new round of sanctions last Friday when President Obama signed an Executive Order authorizing the imposition of sanctions and designated 3 entities and 10 individuals for being agencies or officials of the North Korean government.

According to a Treasury Department statement:

databreach-Sanctions

databreach-Sanctions2

The identifiers of these 10 individuals are:

databreach-Sanctions3

But the US government knows sanctions have had limited impact on the Hermit Kingdom. The new sanctions might be deemed swift and decisive measures in some quarters, but they are really nothing more than window-dressing of sorts – much like miming a gun with one’s fingers under a coat as a first warning, at best. Consider, for example, what kind of impact one should expect from these new sanctions: the 3 organizations were already on the US sanctions list, and the 10 North Koreans are highly unlikely to have assets in the US, at least not under their own names.

In any case, 2015 is likely to bring many more headlines about catastrophic data breaches.

And the Sony cyberattack actually pales in comparison to other data breaches on record, as shown (below) by independent data journalist and information designer David McCandless – you can also click on the bubbles to find out more about the cases compiled and presented in the chart and table on his blog.

databreachChart1
databreachChart2
databreachChart3

Airport

Shhh… The WikiLeaks’ CIA Travel Guide

I’d like to share with you the latest WikiLeaks release, “CIA Travel Advice to Operatives”. Its press release is pasted below (click here for the full report).

And I find it appropriate to highlight an earlier column, Spies and the Airport Screening Machine.

Enjoy!

CIA Travel Advice to Operatives – Press Release

Today, 21 December 2014, WikiLeaks releases two classified documents by a previously undisclosed CIA office detailing how to maintain cover while travelling through airports using false ID – including during operations to infiltrate the European Union and the Schengen passport control system. This is the second release within WikiLeaks’ CIA Series, which will continue in the new year.

The two classified documents aim to assist CIA undercover officials to circumvent these systems around the world. They detail border-crossing and visa regulations, the scope and content of electronic systems, border guard protocols and procedures for secondary screenings. The documents show that the CIA has developed an extreme concern over how biometric databases will put CIA clandestine operations at risk – databases other parts of the US government made prevalent post-9/11.

How to Survive Secondary Screening without Blowing your CIA Cover

The CIA manual “Surviving Secondary”, dated 21 September 2011, details what happens in an airport secondary screening in different airports around the world and how to pass as a CIA undercover operative while preserving one’s cover. Among the reasons for why secondary screening would occur are: if the traveller is on a watchlist (noting that watchlists can often contain details of intelligence officials); or is found with contraband; or “because the inspector suspects that something about the traveler is not right”.

The highlighted box titled “The Importance of Maintaining Cover––No Matter What” at the end of the document provides an example of an occasion when a CIA officer was selected for secondary screening at an EU airport. During the screening his baggage was swiped and traces of explosives found. The officer “gave the cover story” to explain the explosives; that he had been in counterterrorism training in Washington, DC. Although he was eventually allowed to continue, this example begs the question: if the training that supposedly explained the explosives was only a cover story, what was a CIA officer really doing passing through an EU airport with traces of explosives on him, and why was he allowed to continue?

The CIA identifies secondary screening as a threat in maintaining cover due to the breadth and depth of the searches, including detailed questioning, searches of personal belongings and electronic databases and collection of biometrics “all of which focus significant scrutiny on an operational traveler”.

The manual provides advice on how best to prepare for and pass such a process: having a “consistent, well-rehearsed, and plausible cover”. It also explains the benefits of preparing an online persona (for example, Linked-In and Twitter) that aligns with the cover identity, and the importance of carrying no electronic devices with accounts that are not for the cover identity, as well as being mentally prepared.

CIA Overview of EU Schengen Border Control

The second document in this release, “Schengen Overview”, is dated January 2012 and details guidelines for border officials in the EU’s Schengen zone and the threats their procedures might pose in exposing the “alias identities of tradecraft-conscious operational travelers”, the CIA terminology for US spies travelling with false ID during a clandestine operation. It outlines how various electronic systems within Schengen work and the risks they pose to clandestine US operatives, including the Schengen Information System (SIS), the European fingerprint database EURODAC (European Dactyloscopie) and FRONTEX (Frontières extérieures) – the EU agency responsible for easing travel between member states while maintaining security.

While Schengen currently does not use a biometric system for people travelling with US documents, if it did this “would increase the identity threat level” and, the report warns, this is likely to come into place in 2015 with the EU’s Entry/Exit System (EES). Currently, the Visa Information System (VIS), operated by a number of Schengen states in certain foreign consular posts, provides the most concern to the CIA as it includes an electronic fingerprint database that aims to expose travellers who are attempting to use multiple and false identities. As use of the VIS system grows it will increase the “identity threat for non-US-documented travelers”, which would narrow the possible false national identities the CIA could issue for undercover operatives.

WikiLeaks’ Editor-in-Chief Julian Assange said: “The CIA has carried out kidnappings from European Union states, including Italy and Sweden, during the Bush administration. These manuals show that under the Obama administration the CIA is still intent on infiltrating European Union borders and conducting clandestine operations in EU member states.”

Both documents are classified and marked NOFORN (preventing allied intelligence liaison officers from reading it). The document detailing advice on maintaining cover through secondary screening also carries the classification ORCON (originator controlled) and specifically allows distribution to Executive Branch Departments/Agencies of the US government with the appropriate clearance, facilitating clandestine operations by the other 16 known US government spy agencies. Both documents were produced by a previously unknown office of the CIA: CHECKPOINT, situated in the Identity Intelligence Center (i2c) within the Directorate of Science and Technology. CHECKPOINT specifically focuses on “providing tailored identity and travel intelligence” including by creating documents such as those published today designed specifically to advise CIA personnel on protecting their identities while travelling undercover.

Phones-eavesdropping

Shhh… A Feasible Strategy Despite Severe Innate Phone Security (Eavesdropping) Flaws Like SS7

The Washington Post article below once again highlights one approach to mobile phone usage: keep many spares apart from your regular smartphone(s) – good old basic cellphones and disposable low-value SIM cards. Dispose of the SIM card after each use and keep switching among those phones.

It can’t stop eavesdropping, but at least hackers and spies cannot trace you so easily. The approach may sound extreme to most people, so for all practical purposes it’s best reserved for the most important and confidential conversations.

SpareSimsPhones2

German researchers discover a flaw that could let anyone listen to your cell calls.
By Craig Timberg December 18

German researchers have discovered security flaws that could let hackers, spies and criminals listen to private phone calls and intercept text messages on a potentially massive scale – even when cellular networks are using the most advanced encryption now available.

The flaws, to be reported at a hacker conference in Hamburg this month, are the latest evidence of widespread insecurity on SS7, the global network that allows the world’s cellular carriers to route calls, texts and other services to each other. Experts say it’s increasingly clear that SS7, first designed in the 1980s, is riddled with serious vulnerabilities that undermine the privacy of the world’s billions of cellular customers.

The flaws discovered by the German researchers are actually functions built into SS7 for other purposes – such as keeping calls connected as users speed down highways, switching from cell tower to cell tower – that hackers can repurpose for surveillance because of the lax security on the network.

Those skilled at the myriad functions built into SS7 can locate callers anywhere in the world, listen to calls as they happen or record hundreds of encrypted calls and texts at a time for later decryption. There also is potential to defraud users and cellular carriers by using SS7 functions, the researchers say.

These vulnerabilities continue to exist even as cellular carriers invest billions of dollars to upgrade to advanced 3G technology aimed, in part, at securing communications against unauthorized eavesdropping. But even as individual carriers harden their systems, they still must communicate with each other over SS7, leaving them open to any of thousands of companies worldwide with access to the network. That means that a single carrier in Congo or Kazakhstan, for example, could be used to hack into cellular networks in the United States, Europe or anywhere else.

“It’s like you secure the front door of the house, but the back door is wide open,” said Tobias Engel, one of the German researchers.

Engel, founder of Sternraute, and Karsten Nohl, chief scientist for Security Research Labs, separately discovered these security weaknesses as they studied SS7 networks in recent months, after The Washington Post reported the widespread marketing of surveillance systems that use SS7 networks to locate callers anywhere in the world. The Post reported that dozens of nations had bought such systems to track surveillance targets and that skilled hackers or criminals could do the same using functions built into SS7. (The term is short for Signaling System 7 and replaced previous networks called SS6, SS5, etc.)

The researchers did not find evidence that their latest discoveries, which allow for the interception of calls and texts, have been marketed to governments on a widespread basis. But vulnerabilities publicly reported by security researchers often turn out to be tools long used by secretive intelligence services, such as the National Security Agency or Britain’s GCHQ, but not revealed to the public.

“Many of the big intelligence agencies probably have teams that do nothing but SS7 research and exploitation,” said Christopher Soghoian, principal technologist for the ACLU and an expert on surveillance technology. “They’ve likely sat on these things and quietly exploited them.”

The GSMA, a global cellular industry group based in London, did not respond to queries seeking comment about the vulnerabilities that Nohl and Engel have found. For the Post’s article in August on location tracking systems that use SS7, GSMA officials acknowledged problems with the network and said it was due to be replaced over the next decade because of a growing list of security and technical issues.

The German researchers found two distinct ways to eavesdrop on calls using SS7 technology. In the first, commands sent over SS7 could be used to hijack a cell phone’s “forwarding” function — a service offered by many carriers. Hackers would redirect calls to themselves, for listening or recording, and then onward to the intended recipient of a call. Once that system was in place, the hackers could eavesdrop on all incoming and outgoing calls indefinitely, from anywhere in the world.

The second technique requires physical proximity but could be deployed on a much wider scale. Hackers would use radio antennas to collect all the calls and texts passing through the airwaves in an area. For calls or texts transmitted using strong encryption, such as is commonly used for advanced 3G connections, hackers could request through SS7 that each caller’s carrier release a temporary encryption key to unlock the communication after it has been recorded.

Nohl on Wednesday demonstrated the ability to collect and decrypt a text message using the phone of a German senator, who cooperated in the experiment. But Nohl said the process could be automated to allow massive decryption of calls and texts collected across an entire city or a large section of a country, using multiple antennas.

“It’s all automated, at the push of a button,” Nohl said. “It would strike me as a perfect spying capability, to record and decrypt pretty much any network… Any network we have tested, it works.”

Those tests have included more than 20 networks worldwide, including T-Mobile in the United States. The other major U.S. carriers have not been tested, though Nohl and Engel said it’s likely at least some of them have similar vulnerabilities. (Several smartphone-based text messaging systems, such as Apple’s iMessage and Whatsapp, use end-to-end encryption methods that sidestep traditional cellular text systems and likely would defeat the technique described by Nohl and Engel.)

In a statement, T-Mobile said: “T-Mobile remains vigilant in our work with other mobile operators, vendors and standards bodies to promote measures that can detect and prevent these attacks.”

The issue of cell phone interception is particularly sensitive in Germany because of news reports last year, based on documents provided by former NSA contractor Edward Snowden, that a phone belonging to Chancellor Angela Merkel was the subject of NSA surveillance. The techniques of that surveillance have not become public, though Nohl said that the SS7 hacking method that he and Engel discovered is one of several possibilities.

U.S. embassies and consulates in dozens of foreign cities, including Berlin, are outfitted with antennas for collecting cellular signals, according to reports by German magazine Der Spiegel, based on documents released by Snowden. Many cell phone conversations worldwide happen with either no encryption or weak encryption.

The move to 3G networks offers far better encryption and the prospect of private communications, but the hacking techniques revealed by Nohl and Engel undermine that possibility. Carriers can potentially guard their networks against efforts by hackers to collect encryption keys, but it’s unclear how many have done so. One network that operates in Germany, Vodafone, recently began blocking such requests after Nohl reported the problem to the company two weeks ago.

Nohl and Engel also have discovered new ways to track the locations of cell phone users through SS7. The Post story, in August, reported that several companies were offering governments worldwide the ability to find virtually any cell phone user, virtually anywhere in the world, by learning the location of their cell phones through an SS7 function called an “Any Time Interrogation” query.

Some carriers block such requests, and several began doing so after the Post’s report. But the researchers in recent months have found several other techniques that hackers could use to find the locations of callers by using different SS7 queries. All networks must track their customers in order to route calls to the nearest cellular towers, but they are not required to share that information with other networks or foreign governments.

Carriers everywhere must turn over location information and allow eavesdropping of calls when ordered to by government officials in whatever country they are operating in. But the techniques discovered by Nohl and Engel offer the possibility of much broader collection of caller locations and conversations, by anyone with access to SS7 and the required technical skills to send the appropriate queries.

“I doubt we are the first ones in the world who realize how open the SS7 network is,” Engel said.

Secretly eavesdropping on calls and texts would violate laws in many countries, including the United States, except when done with explicit court or other government authorization. Such restrictions likely do little to deter criminals or foreign spies, say surveillance experts, who say that embassies based in Washington likely collect cellular signals.

The researchers also found that it was possible to use SS7 to learn the phone numbers of people whose cellular signals are collected using surveillance devices. The calls transmit a temporary identification number which, by sending SS7 queries, can lead to the discovery of the phone number. That allows location tracking within a certain area, such as near government buildings.
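The following tiny Python sketch illustrates, with entirely invented values and a lookup table standing in for the SS7 query, the two-step idea described above: capture a temporary identifier over the air, then resolve it to a phone number through the network.

```python
# Illustrative only: the dictionary below stands in for the SS7 query a real
# attacker would send; all identifiers and numbers are made up.
captured_tmsis = ["0x3a91c2d4", "0x7f02e811"]  # temporary IDs seen near a building

simulated_ss7_lookup = {
    "0x3a91c2d4": "+49 151 0000 0001",
    "0x7f02e811": "+49 160 0000 0002",
}

for tmsi in captured_tmsis:
    number = simulated_ss7_lookup.get(tmsi, "unresolved")
    print(f"temporary ID {tmsi} -> phone number {number}")
```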

The German senator who cooperated in Nohl’s demonstration of the technology, Thomas Jarzombek of Merkel’s Christian Democratic Union party, said that while many in that nation have been deeply angered by revelations about NSA spying, few are surprised that such intrusions are possible.

“After all the NSA and Snowden things we’ve heard, I guess nobody believes it’s possible to have a truly private conversation on a mobile phone,” he said. “When I really need a confidential conversation, I use a fixed-line” phone.

Fingerprint-electronicInvestigation

Are You Unique – How to Check Your Browser Fingerprints & Online Privacy?

Think you have taken all measures to remain anonymous and untraceable online? Or are you still (unknowingly) leaving browser fingerprints that can be traced to you and your devices?

The good news is, there’s a way to check and confirm if you are unique in cyberspace.

A browser fingerprint, or device fingerprint, is the systematic collection of information about a remote device for identification purposes, even when cookies are turned off.
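To make the idea concrete, here is a minimal sketch (not how "Am I Unique" actually works, and with invented attribute names) of how a server can derive a stable fingerprint from characteristics the browser reveals on every visit, no cookies required:

```python
# A minimal sketch, assuming a server has already collected these attributes
# from request headers and client-side scripts; attribute names are illustrative.
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    # Canonicalise the attribute set so the same browser always hashes the same way.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "accept_language": "en-US,en;q=0.9",
    "screen": "1920x1080x24",
    "timezone": "UTC+1",
    "installed_fonts": "Arial,Calibri,Helvetica,Times New Roman",
    "canvas_hash": "f3b1c9",  # result of a canvas-rendering test
}
print(browser_fingerprint(visitor))
# The rarer this combination is across all visitors, the more "unique" -- and
# therefore the more trackable -- the browser is.
```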

There's a website called "Am I Unique" which you can visit to check your own fingerprint by clicking "View my browser fingerprint", as shown below:

Fingerprinting-Browser

That should give you much food for thought over the Christmas holidays.

A recent international survey of 23,376 Internet users in 24 countries, carried out between October 7 and November 12, 2014, found that some 64 percent confessed they're more concerned about online privacy today than they were a year ago.

Privacy-survey

That's one way to gauge the post-Snowden effects. And if you still wonder why privacy matters, I highly recommend Glenn Greenwald's TED Talk "Why Privacy Matters".

Surveillance-Homes

Shhh… US Federal Court: Warrantless Surveillance Footage in Public Areas is an Invasion of Privacy

One might easily assume privacy does not apply in public areas – just look at the proliferation of CCTV cameras in the streets.

Well, that's not necessarily the case, judging by a recent court ruling in Washington state. It may be good news for the general public and bad news for law enforcement.

Now first, many would probably associate the following two photos with typical covert surveillance operations, in which operatives wait patiently to capture photo (and video) evidence of their subjects.

Surveillance-Detectives

Surveillance-Detectives2

But in this case involving the Washington police and Leonel Vargas (an “undocumented” immigrant suspected of drug trafficking), the authorities had a better idea.

The police planted a video camera, without a warrant, on a utility pole 100 yards from Vargas' rural Washington state house and shot six weeks' worth of footage of his front yard, eventually capturing convincing evidence.

Vargas challenged the evidence as a violation of his privacy; the government argued the challenge was not valid because his front yard is a public space and thus privacy does not apply.

The evidence put forward by the authorities was subsequently thrown out of court by US District Judge Edward Shea, whose ruling is well summed up as follows:

Law enforcement’s warrantless and constant covert video surveillance of Defendant’s rural front yard is contrary to the public’s reasonable expectation of privacy and violates Defendant’s Fourth Amendment right to be free from unreasonable search. The video evidence and fruit of the video evidence are suppressed.

Find out more about this case from here and there.

FBI-SilkRoad

Shhh… The FBI Unmasking of TOR Users with Metasploit

I'd like to share this WIRED update on the use of TOR.

The FBI Used the Web’s Favorite Hacking Tool to Unmask Tor Users
By Kevin Poulsen 12.16.14 | 7:00 am

For more than a decade, a powerful app called Metasploit has been the most important tool in the hacking world: An open-source Swiss Army knife of hacks that puts the latest exploits in the hands of anyone who’s interested, from random criminals to the thousands of security professionals who rely on the app to scour client networks for holes.

Now Metasploit has a new and surprising fan: the FBI. WIRED has learned that FBI agents relied on Flash code from an abandoned Metasploit side project called the “Decloaking Engine” to stage its first known effort to successfully identify a multitude of suspects hiding behind the Tor anonymity network.

That attack, “Operation Torpedo,” was a 2012 sting operation targeting users of three Dark Net child porn sites. Now an attorney for one of the defendants ensnared by the code is challenging the reliability of the hackerware, arguing it may not meet Supreme Court standards for the admission of scientific evidence. “The judge decided that I would be entitled to retain an expert,” says Omaha defense attorney Joseph Gross. “That’s where I am on this—getting a programming expert involved to examine what the government has characterized as a Flash application attack of the Tor network.”

A hearing on the matter is set for February 23.

Tor, a free, open-source project originally funded by the US Navy, is sophisticated anonymity software that protects users by routing traffic through a labyrinthine delta of encrypted connections. Like any encryption or privacy system, Tor is popular with criminals. But it also is used by human rights workers, activists, journalists and whistleblowers worldwide. Indeed, much of the funding for Tor comes from grants issued by federal agencies like the State Department that have a vested interest in supporting safe, anonymous speech for dissidents living under oppressive regimes.

With so many legitimate users depending upon the system, any successful attack on Tor raises alarm and prompts questions, even when the attacker is a law enforcement agency operating under a court order. Did the FBI develop its own attack code, or outsource it to a contractor? Was the NSA involved? Were any innocent users ensnared?

Now, some of those questions have been answered: Metasploit’s role in Operation Torpedo reveals the FBI’s Tor-busting efforts as somewhat improvisational, at least at first, using open-source code available to anyone.

Created in 2003 by white hat hacker HD Moore, Metasploit is best known as a sophisticated open-source penetration testing tool that lets users assemble and deliver an attack from component parts—identify a target, pick an exploit, add a payload and let it fly. Supported by a vast community of contributors and researchers, Metasploit established a kind of lingua franca for attack code. When a new vulnerability emerges, like April’s Heartbleed bug, a Metasploit module to exploit it is usually not far behind.

Moore believes in transparency—or “full disclosure”—when it comes to security holes and fixes, and he’s applied that ethic in other projects under the Metasploit banner, like the Month of Browser Bugs, which demonstrated 30 browser security holes in as many days, and Critical.IO, Moore’s systematic scan of the entire Internet for vulnerable hosts. That project earned Moore a warning from law enforcement officials, who cautioned that he might be running afoul of federal computer crime law.

In 2006, Moore launched the “Metasploit Decloaking Engine,” a proof-of-concept that compiled five tricks for breaking through anonymization systems. If your Tor install was buttoned down, the site would fail to identify you. But if you’d made a mistake, your IP would appear on the screen, proving you weren’t as anonymous as you thought. “That was the whole point of Decloak,” says Moore, who is chief research officer at Austin-based Rapid7. “I had been aware of these techniques for years, but they weren’t widely known to others.”

One of those tricks was a lean 35-line Flash application. It worked because Adobe’s Flash plug-in can be used to initiate a direct connection over the Internet, bypassing Tor and giving away the user’s true IP address. It was a known issue even in 2006, and the Tor Project cautions users not to install Flash.
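The underlying leak is easy to demonstrate without Flash at all. The hedged Python sketch below compares the IP address a remote service sees for a request routed through a local Tor SOCKS proxy against one sent directly; it assumes a Tor client is running on 127.0.0.1:9050 and that the requests library is installed with SOCKS support (requests[socks]).

```python
# A minimal sketch of the leak the Decloaking Engine exploited: traffic routed
# through Tor's SOCKS proxy shows an exit node's IP, while anything that
# connects directly (as the Flash plug-in did) shows the user's real IP.
# Assumes a local Tor client on 127.0.0.1:9050 and `requests[socks]` installed.
import requests

IP_ECHO = "https://check.torproject.org/api/ip"  # echoes back the IP it sees

via_tor = requests.get(
    IP_ECHO,
    proxies={"https": "socks5h://127.0.0.1:9050"},  # DNS is also resolved via Tor
    timeout=30,
).json()

direct = requests.get(IP_ECHO, timeout=30).json()  # bypasses the proxy entirely

print("Seen via Tor:  ", via_tor)   # exit-node IP
print("Seen directly: ", direct)    # the user's real IP
```

A plug-in that opens its own connection behaves like the second request, which is exactly why the Tor Project gives the warning above.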

The decloaking demonstration eventually was rendered obsolete by a nearly idiot-proof version of the Tor client called the Tor Browser Bundle, which made security blunders more difficult. By 2011, Moore says virtually everyone visiting the Metasploit decloaking site was passing the anonymity test, so he retired the service. But when the bureau obtained its Operation Torpedo warrants the following year, it chose Moore’s Flash code as its “network investigative technique”—the FBI’s lingo for a court-approved spyware deployment.

Torpedo unfolded when the FBI seized control of a trio of Dark Net child porn sites based in Nebraska. Armed with a special search warrant crafted by Justice Department lawyers in Washington DC, the FBI used the sites to deliver the Flash application to visitors’ browsers, tricking some of them into identifying their real IP address to an FBI server. The operation identified 25 users in the US and an unknown number abroad.

Gross learned from prosecutors that the FBI used the Decloaking Engine for the attack — they even provided a link to the code on Archive.org. Compared to other FBI spyware deployments, the Decloaking Engine was pretty mild. In other cases, the FBI has, with court approval, used malware to covertly access a target’s files, location, web history and webcam. But Operation Torpedo is notable in one way. It’s the first time—that we know of—that the FBI deployed such code broadly against every visitor to a website, instead of targeting a particular suspect.

The tactic is a direct response to the growing popularity of Tor, and in particular an explosion in so-called “hidden services”—special websites, with addresses ending in .onion, that can be reached only over the Tor network.

Hidden services are a mainstay of the nefarious activities carried out on the so-called Dark Net, the home of drug markets, child porn, and other criminal activity. But they’re also used by organizations that want to evade surveillance or censorship for legitimate reasons, like human rights groups, journalists, and, as of October, even Facebook.

A big problem with hidden services, from a law enforcement perspective, is that when the feds track down and seize the servers, they find that the web server logs are useless to them. With a conventional crime site, those logs typically provide a handy list of Internet IP addresses for everyone using the site – quickly leveraging one bust into a cascade of dozens, or even hundreds. But over Tor, every incoming connection traces back only as far as the nearest Tor node—a dead end. See the toy example below.
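The sketch below makes the point with invented log entries and relay addresses: nothing in a seized log of this kind traces past the Tor network.

```python
# Illustrative sketch (all addresses invented or typical of Tor relay ranges):
# why a seized server's logs are a dead end. A hidden service only ever sees
# connections from its local Tor daemon; a clearnet site visited over Tor only
# sees an exit relay. Neither reveals the visitor's real IP.
known_tor_relays = {"185.220.101.4", "199.87.154.255"}  # e.g. from Tor consensus data

seized_log = [
    ("127.0.0.1", "GET /forum/thread/42"),   # hidden service: local Tor daemon
    ("185.220.101.4", "GET /login"),         # clearnet site: Tor exit relay
    ("185.220.101.4", "POST /upload"),
]

for ip, request in seized_log:
    if ip == "127.0.0.1" or ip in known_tor_relays:
        print(f"{ip:15} {request:22} -> traces only to the Tor network")
    else:
        print(f"{ip:15} {request:22} -> a real, actionable lead")
```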

Thus, the mass spyware deployment of Operation Torpedo. The Judicial Conference of the United States is currently considering a Justice Department petition to explicitly permit spyware deployments, based in part on the legal framework established by Operation Torpedo. Critics of the petition argue the Justice Department must explain in greater detail how it's using spyware, allowing a public debate over the capability.

“One thing that’s frustrating for me right now, is it’s impossible to get DOJ to talk about this capability,” says Chris Soghoian, principal technologist at the ACLU. “People in government are going out of their way to keep this out of the discussion.”

For his part, Moore has no objection to the government using every available tool to bust pedophiles—he once publicly proposed a similar tactic himself. But he never expected his long-dead experiment to drag him into a federal case. Last month he started receiving inquiries from Gross' technical expert, who had questions about the efficacy of the decloaking code. And last week Moore started getting questions directly from the accused pedophile in the case—a Rochester IT worker who claims he was falsely implicated by the software.

Moore finds that unlikely, but in the interest of transparency, he answered all the questions in detail. “It only seemed fair to reply to his questions,” Moore says. “Though I don’t believe my answers help his case at all.”

Using the outdated Decloaking Engine would not likely have resulted in false identifications, says Moore. In fact, the FBI was lucky to trace anyone using the code. Only suspects using extremely old versions of Tor, or who took great pains to install the Flash plug-in against all advice, would have been vulnerable. By choosing an open-source attack, the FBI essentially selected for the handful of offenders with the worst op-sec, rather than the worst offenders.

Since Operation Torpedo, though, there's evidence the FBI's anti-Tor capabilities have been rapidly advancing. Torpedo was in November 2012. In late July 2013, computer security experts detected a similar attack through Dark Net websites hosted by a shady ISP called Freedom Hosting—court records have since confirmed it was another FBI operation. For this one, the bureau used custom attack code that exploited a relatively fresh Firefox vulnerability—the hacking equivalent of moving from a bow-and-arrow to a 9-mm pistol. In addition to the IP address, which identifies a household, this code collected the MAC address of the particular computer that was infected by the malware.

“In the course of nine months they went from off the shelf Flash techniques that simply took advantage of the lack of proxy protection, to custom-built browser exploits,” says Soghoian. “That’s a pretty amazing growth … The arms race is going to get really nasty, really fast.”

MichaelHayden

Shhh… Michael Hayden on the Senate’s CIA Interrogation Report

Photo (above) credit: CIA

I'd like to share this POLITICO MAGAZINE exclusive interview with former CIA Director (May 30, 2006 – February 12, 2009) Michael Hayden on the release of the US Senate's report.

Michael Hayden Is Not Sorry
The Senate report rakes Bush’s former CIA director over the coals. He fires back in an exclusive interview.

By MICHAEL HIRSH
December 09, 2014

Though the CIA’s “enhanced interrogation” program long predated his takeover of the agency in 2006, former Director Michael Hayden has found himself at the center of the explosive controversy surrounding the Senate Intelligence Committee’s executive summary of its still-classified report on torture. In a long, impassioned speech on the floor Tuesday, Committee Chair Dianne Feinstein cited Hayden’s testimony repeatedly as evidence that the CIA had not been forthright about a program that the committee majority report called brutal, ineffective, often unauthorized “and far worse than the CIA represented to policymakers and others.” She publicly accused Hayden of falsely describing the CIA’s interrogation techniques “as minimally harmful and applied in a highly clinical and professional manner.” In an interview with Politico Magazine National Editor Michael Hirsh, Hayden angrily rebuts many of the report’s findings.

Michael Hirsh: The report concludes, rather shockingly, that Pres. George W. Bush and other senior officials—including Defense Secretary Donald Rumsfeld for a time and Secretary of State Colin Powell—were not aware of many details of the interrogation programs for a long period. According to CIA records, it concludes, no CIA officer including Directors George Tenet and Porter Goss briefed the president on the specific enhanced interrogation techniques before April 2006. Is that true?

Michael Hayden: It is not. The president personally approved the waterboarding of Abu Zubaydah [in 2002]. It’s in his book! What happened here is that the White House refused to give them [the Senate Intelligence Committee] White House documents based upon the separation of powers and executive privilege. That’s not in their report, but all of that proves that there was dialogue going on with the White House. What I can say is that the president never knew where the [black] sites were. That’s the only fact I’m aware that he didn’t know.

Hirsh: The report directly challenges your truthfulness, repeatedly stating that your testimony on the details of the programs—for example on whether the interrogations could be stopped at any time by any CIA participant who wanted them halted—is “not congruent with CIA records.” Does that mean you weren’t telling the truth?

Hayden: I would never lie to the committee. I did not lie.

Hirsh: Does it mean that you, along with others at senior levels, were misled about what was actually going on in the program?

Hayden: My testimony is consistent with what I was told and what I had read in CIA records. I said what the agency told me, but I didn’t just accept it at face value. I did what research I could on my own, but I had a 10-day window in which to look at this thing [the committee’s request for information]. I was actually in Virginia for about 30 hours and studied the program for about three before I went up to testify. I was trying to describe a program I didn’t run. The points being made against my testimony in many instances appear to be selective reading of isolated incidents designed to prove a point where I was trying to describe the overall tenor of the program. I think the conclusions they drew were analytically offensive and almost street-like in their simplistic language and conclusions. The agency has pushed back rather robustly in its own response.

Hirsh: You seem upset.

Hayden: Yeah, I’m emotional about it. Everything here happened before I got there [to the CIA], and I’m the one she [Sen. Feinstein] condemns on the floor of the Senate? Gee, how’d that happen? I’m the dumb son of a bitch who went down and tried to lay out this program in great detail to them. I’m mentioned twice as much in there as George Tenet—but George and Porter Goss had 97 detainees during their tenure, while I had two.

Hirsh: Is there anything you think the report gets right?

Hayden: All of us are really upset because we could have used a fair and balanced review of what we did. … The agency clearly admits it was fly-by-wire in the beginning. They were making it up as they went along and it should have been more well-prepared. They’ve freely admitted that. They said that early on they lacked the core competencies required to undertake an unprecedented program of detaining and interrogating suspected terrorists around the world. But then what the committee does is to take what I said out of context. They take statements I made about the later days of the program, for example when I said it was well-regulated and there were medical personnel available, etc., and then apply it to the early days of the program, when there were not. It misrepresents what I said.

Hirsh: One of the most stunning and cited conclusions of the report is that interrogations of CIA detainees were brutal and far worse than the CIA represented to policymakers and others.

Hayden: That is untrue. And let me give you a data point. John Durham, a special independent prosecutor, over a three-year period investigated every known CIA interaction with every CIA detainee. At the end of that the Obama administration declined any prosecution. [In 2012, the Justice Department announced that its investigation into two interrogation deaths that Durham concluded were suspicious out of the 101 he examined—those of Afghan detainee Gul Rahman and Iraqi detainee Manadel al-Jamadi—would be closed with no charges.] So if A is true how does B get to be true? If the CIA routinely did things they weren’t authorized to do, then why is there no follow-up? I have copies of the DOJ reports they’re using today. The question is, is the DoJ going to open any investigation and the DoJ answer is no. You can’t have it both ways. You can’t have all this supposed documentary evidence saying the agency mistreated these prisoners and then Barack Obama’s and Eric Holder’s Department of Justice saying no, you’ve got bupkis here.

Hirsh: What about the report’s overarching conclusion that these enhanced techniques simply were not effective at getting intelligence?

Hayden: My very best argument is that I went to [then-Deputy CIA Director] Mike Morell and I said, ‘Don’t fuck with me. If this story [about the usefulness of intelligence gained from enhanced techniques] isn’t airtight then I’m not saying it to Congress.’ They came back and said our version of the story is correct. Because of this program Zubaydah begat [Khalid Sheikh Mohammed], who begat [others]. We learned a great deal from the detainees.

Hirsh: The report says that even the CIA’s inspector general was not fully informed about the programs—that in fact the CIA impeded oversight by the IG.

Hayden: The IG never told me that. The IG never reported that to Congress. Look, I’m relying on people below me. If they tell you an untruth, you get rid of them. But I never felt I was being misled, certainly not on the important contours of this program. What they [the committee] are doing is grabbing emails out of the ether in a massive fishing expedition. This is a partisan report, as you can see from the minority report out of the committee.

Hirsh: Can you sort out the discrepancy between your testimony that there were only 97 detainees in the history of the program when the report says there were 119?

Hayden: We knew there were more. The high-value-target program—they don’t show up on my list if they’re at the [black] sites. And the committee knew all about that. They have chapter and verse from [former CIA IG John] Helgerson about it. It’s a question of what criteria you use. When I met with my team about these discrepancies, I said, ‘You tell [incoming CIA director] Leon Panetta he’s got to change the numbers that have been briefed to Congress.’

Hirsh: The report suggests that you misrepresented what you told Congress in the briefings, telling a meeting of foreign ambassadors to the United States in 2006 that every committee member was “fully briefed.”

Hayden: I mean what are they doing—trying to score my public speeches? What’s that about? You want me to go out and score Ron Wyden’s speeches?

Hirsh: You don’t believe you’re in legal jeopardy?

Hayden: No, not at all. I didn’t do anything wrong. How could I be in legal jeopardy?

Michael Hirsh is national editor for Politico Magazine.

CIAreport-Guatanamo

The US Senate Intelligence Committee & CIA Interrogation Report – A Closer Look at the Tortures at Guantanamo Bay

CIA-guantanamo

In view of the huge trove of news coverage following the release of the long overdue and highly anticipated CIA Interrogation report (the BBC has a nice summary of the 20 key findings) by the US Senate Intelligence Committee on Tuesday, I thought it would be good to (re)view the UK Channel 4 documentary “Guantanamo Handbook”.

It is a reenactment of the tortures at the well-known US military prison in Cuba, the Guantanamo Bay detention camp (also referred to as Guantánamo, G-bay or GTMO), in which seven British volunteers acted as detainees and were subjected to selected CIA-style tortures for 48 hours.

Most notably, one volunteer who started off saying he supported the torture program as a means to gather intelligence and save lives – as per the White House line – was the first to withdraw on medical grounds after just 10 hours, saying that even though he had “strong views” earlier, he had “become more sympathetic of what’s going on there than before” and felt lucky he was “pulled” (out of the program).

Actions speak louder than words. Period.