Check out the original New York Times story.
Check out this video by Anonymous:
House of Representatives Passes Cybersecurity Bills Without Fixing Core Problems
April 22, 2015 | By Mark Jaycox
The House passed two cybersecurity “information sharing” bills today: the House Permanent Select Committee on Intelligence’s Protecting Cyber Networks Act, and the House Homeland Security Committee’s National Cybersecurity Protection Advancement Act. Both bills will be “conferenced” to create one bill and then sent to the Senate for advancement. EFF opposed both bills and has been urging users to tell Congress to vote against them.
The bills are not cybersecurity “information sharing” bills, but surveillance bills in disguise. Like other bills we’ve opposed during the last five years, they authorize more private sector spying under new legal immunity provisions and use vague definitions that aren’t carefully limited to protect privacy. The bills further facilitate companies’ sharing even more of our personal information with the NSA and some even allow companies to “hack back” against potentially innocent users.
As we’ve noted before, information sharing is not a silver bullet to stopping security failures. Companies can already share the necessary technical information to stop threats via Information Sharing and Analysis Centers (ISACs), public reports, private communications, and the DHS’s Enhanced Cybersecurity Services.
While we are disappointed in the House, we look forward to the fight in the Senate where equally dangerous bills, like the Senate Select Committee on Intelligence’s Cybersecurity Information Sharing Act, have failed to pass every year since 2010.
Contact your Senator now to oppose the Senate bills.
Time to brace for further loss of privacy, as the PCNA would amount to a voluntary wholesale transfer of data to the NSA (see story below).
And Congress actually believes it’s in the name of stopping hackers and cyber attacks?
House Passes Cybersecurity Bill Despite Privacy Protests
Congress is hellbent on passing a cybersecurity bill that can stop the wave of hacker breaches hitting American corporations. And they’re not letting the protests of a few dozen privacy and civil liberties organizations get in their way.
On Wednesday the House of Representatives voted 307-116 to pass the Protecting Cyber Networks Act, a bill designed to allow more fluid sharing of cybersecurity threat data between corporations and government agencies. That new system for sharing information is designed to act as a real-time immune system against hacker attacks, allowing companies to warn one another via government intermediaries about the tools and techniques of advanced hackers. But privacy critics say it also threatens to open up a new backchannel for surveillance of American citizens, in some cases granting the same companies legal immunity to share their users’ private data with government agencies that include the NSA.
“PCNA would significantly increase the National Security Agency’s (NSA’s) access to personal information, and authorize the federal government to use that information for a myriad of purposes unrelated to cybersecurity,” reads a letter signed earlier this week by 55 civil liberties groups and security experts that includes the American Civil Liberties Union, the Electronic Frontier Foundation, the Freedom of the Press Foundation, Human Rights Watch and many others.
“The revelations of the past two years concerning the intelligence community’s abuses of surveillance authorities and the scope of its collection and use of individuals’ information demonstrates the potential for government overreach, particularly when statutory language is broad or ambiguous,” the letter continues. “[PCNA] fails to provide strong privacy protections or adequate clarity about what actions can be taken, what information can be shared, and how that information may be used by the government.”
Specifically, PCNA’s data-sharing privileges let companies give data to government agencies—including the NSA—that might otherwise have violated the Electronic Communications Privacy Act or the Wiretap Act, both of which restrict the sharing of users’ private data with the government. And PCNA doesn’t even restrict the use of that shared information to cybersecurity purposes; its text also allows the information to be used for investigating any potential threat of “bodily harm or death,” opening its application to the surveillance of run-of-the-mill violent crimes like robbery and carjacking.
Congressman Adam Schiff, who led the advocacy for the bill on the House floor, argued in a statement to reporters that PCNA in fact supports privacy by protecting Americans from future hacker breaches. “We do this while recognizing the huge and growing threat cyber hacking and cyber espionage poses to our privacy, as well as to our financial wellbeing and our jobs,” he writes.
“In the process of drafting this bill, protecting privacy was at the forefront throughout, and we consulted extensively with privacy and civil liberties groups, incorporating their suggestions in many cases. This is a strong bill that protects privacy, and one that I expect will get even better as the process goes forward—we expect to see large bipartisan support on the Floor.”
Here’s a video [above] of Schiff’s statement on the House floor.
PCNA does include some significant privacy safeguards, such as a requirement that companies scrub “unrelated” data of personally identifying information before sending it to the government, and that the government agencies pass it through another filter to delete such data after receiving it.
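At bottom, the scrubbing requirement is a data-minimization filter applied before an indicator leaves the company. As a rough illustration only — the field names, and the notion of which fields count as “unrelated” personal information, are my own assumptions, not anything PCNA specifies — such a filter might look like:

```python
# Hypothetical pre-sharing scrub: drop personal fields from a threat
# indicator before it is shared with the government. Field names are
# illustrative assumptions, not defined by the bill.
PII_FIELDS = {"name", "email", "home_ip", "phone"}

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy of the indicator with personal fields removed."""
    return {k: v for k, v in indicator.items() if k not in PII_FIELDS}

raw = {
    "malware_hash": "9f86d081884c7d65",   # technical indicator: keep
    "c2_domain": "evil.example.com",      # technical indicator: keep
    "email": "victim@example.com",        # unrelated personal data: drop
    "name": "Jane Doe",                   # unrelated personal data: drop
}
shared = scrub_indicator(raw)
# shared now contains only the technical fields
```

The privacy fight is over exactly this boundary: anything deemed a “threat indicator” passes through the filter untouched, which is why critics argue the scrub alone is not enough.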
But those protections still don’t go far enough, says Robyn Greene, policy counsel for the Open Technology Institute. Any information considered a “threat indicator” could still legally be sent to the government—even, for instance, the IP addresses of innocent victims of botnets used in distributed denial of service attacks against corporate websites. No further amendments that might have added new privacy restrictions to the bill were considered before the House’s vote Wednesday. “I’m very disappointed that the House has passed an information sharing bill that does so much to threaten Americans’ privacy and civil liberties, and no real effort was made to address the problems the bill still had,” says Greene. “The Rules Committee has excluded amendments that would have resolved privacy concerns…This is little more than a backdoor for general purpose surveillance.”
In a surprise move yesterday, the White House also publicly backed PCNA and its Senate counterpart, the Cybersecurity Information Sharing Act, in a statement to the press. That’s a reversal of its threat to veto the similar Cyber Intelligence Sharing and Protection Act in 2013 over privacy concerns, a decision that all but killed that earlier attempt at cybersecurity data sharing legislation. Since then, however, a string of high-profile breaches seems to have swayed President Obama’s thinking, from the cybercriminal breaches of Target and health insurer Anthem that spilled millions of users’ data, to the devastating hack of Sony Pictures Entertainment, which the FBI has claimed was perpetrated as an intimidation tactic by the North Korean government to prevent the release of its Kim Jong-un assassination comedy The Interview.
If the White House’s support stands, it now leaves only an upcoming Senate vote sometime later this month on the Senate’s CISA as the deciding factor as to whether it and PCNA are combined to become law.
But privacy advocates haven’t given up on a presidential veto. A new website called StopCyberspying.com launched by the internet freedom group Access, along with the EFF, the ACLU and others, includes a petition to the President to reconsider a veto for PCNA, CISA and any other bill that threatens to widen internet surveillance.
OTI’s Greene says she’s still banking on a change of heart from Obama, too. “We’re hopeful that the administration would veto any bill that doesn’t address these issues,” she says. “To sign a bill that resembles CISA or PCNA would represent the administration doing a complete 180 on its commitment to protect Americans’ privacy.”
Has Julian Assange gone overboard with the latest WikiLeaks dump of over 200,000 Sony documents and emails on its website this week?
“This archive shows the inner workings of an influential multinational corporation. It is newsworthy and at the centre of a geo-political conflict. It belongs in the public domain. WikiLeaks will ensure it stays there,” Assange explains in his press statement.
Sony’s lawyer David Boies was certainly not impressed and he has sent letters to media outlets urging them not to make use of the data, according to a Bloomberg report.
Here’s an interesting story:
Meet the privacy activists who spy on the surveillance industry
by Daniel Rivero | April 6, 2015
LONDON – On the second floor of a narrow brick building in the London Borough of Islington, Edin Omanovic is busy creating a fake company. He is playing with the invented company’s business cards in a graphic design program, darkening the reds, bolding the blacks, and testing fonts to strike the right tone: informational, ambiguous, no bells and whistles. In a separate window, a barren website is starting to take shape. Omanovic, a tall, slender, Bosnian-born, Scottish-raised Londoner, gives the company a fake address that forwards to his real office, and plops in a red and black company logo he just created. The privacy activist doesn’t plan to scam anyone out of money, though he does want to learn their secrets. Ultimately, he hopes that the business cards, combined with a suit and a close-cropped haircut, will grant him access to a surveillance industry trade show, a privilege usually restricted to government officials and law enforcement agencies.
Once he’s infiltrated the trade show, he’ll pose as an industry insider, chatting up company representatives, swapping business cards, and picking up shiny brochures that advertise the invasive capabilities of bleeding-edge surveillance technology. Few of the features are ever marketed or revealed openly to the general public, and if the group didn’t go through the pains of going undercover, it wouldn’t know the lengths to which law enforcement and the intelligence community are going to keep tabs on their citizens.
“I don’t know when we’ll get to use this [company], but we need a lot of these to do our research,” Omanovic tells me. (He asked Fusion not to reveal the name of the company in order to not blow its cover.)
The strange tactic – sneaking into an expo in order to come into close proximity with government hackers and monitors – is a regular part of operations at Privacy International, a London-based anti-surveillance advocacy group founded 25 years ago. Omanovic is one of a few activists for the group who go undercover to collect the surveillance promotional documents.
“At last count we had about 1,400 files,” Matt Rice, PI’s Scottish-born advocacy officer says while sifting through a file cabinet full of the brochures. “[The files] help us understand what these companies are capable of, and what’s being sold around the world,” he says. The brochures vary in scope and claims. Some showcase cell site simulators, commonly called Stingrays, which allow police to intercept cell phone activity within a certain area. Others provide details about Finfisher– surveillance software that is marketed exclusively to governments, which allows officials to put spyware on a target’s home computer or mobile device to watch their Skype calls, Facebook and email activity.
The technology buyers at these conferences are the usual suspects — the Federal Bureau of Investigation (FBI), the UK’s Government Communications Headquarters (GCHQ), and the Australian Secret Intelligence Service — but also representatives of repressive regimes — Bahrain, Sudan, pre-revolutionary Libya — as the group has revealed in attendee lists it has surfaced.
At times, companies’ claims can raise eyebrows. One brochure shows a soldier, draped in fatigues, holding a portable device up to the faces of a somber group of Arabs. “Innocent civilian or insurgent?,” the pamphlet asks.
“Our systems are.”
The treasure trove of compiled documents was available as an online database, but PI recently took it offline, saying the website had security vulnerabilities that could have compromised information of anyone who wanted to donate to the organization online. They are building a new one. The group hopes that the exposure of what Western companies are selling to foreign governments will help the organization achieve its larger goal: ending the sale of hardware and software to governments that use it to monitor their populations in ways that violate basic privacy rights.
The group acknowledges that it might seem they are taking an extremist position when it comes to privacy, but “we’re not against surveillance,” Michael Rispoli, head of PI’s communications, tells me. “Governments need to keep people safe, whether it’s from criminals or terrorists or what it may be, but surveillance needs to be done in accordance with human rights, and in accordance with the rule of law.”
The group is waging its fight in courtrooms. In February of last year, it filed a criminal complaint to the UK’s National Cyber Crime Unit of the National Crime Agency, asking it to investigate British technology allegedly used repeatedly by the Ethiopian government to intercept the communications of an Ethiopian national. Even after Tadesse Kersmo applied for– and was granted– asylum in the UK on the basis of being a political refugee, the Ethiopian government kept electronically spying on him, the group says, using technology from British firm Gamma International. The group currently has six lawsuits in action, mostly taking on large, yet opaque surveillance companies and the British government. Gamma International did not respond to Fusion’s request for comment on the lawsuit, which alleges that exporting the software to Ethiopian authorities means the company assisted in illegal electronic spying.
“The irony that he was given refugee status here, while a British company is facilitating intrusions into his basic right to privacy isn’t just ironic, it’s wrong,” Rispoli says. “It’s so obvious that there should be laws in place to prevent it.”
PI says it has uncovered other questionable business relationships between oppressive regimes and technology companies based in other Western countries. An investigative report the group put out a few months ago on surveillance in Central Asia said that British and Swiss companies, along with Israeli and Israeli-American companies with close ties to the Israeli military, are providing surveillance infrastructure and technical support to countries like Turkmenistan and Uzbekistan– some of the worst-ranking countries in the world when it comes to freedom of speech, according to Freedom House. Only North Korea ranks lower than them.
PI says it used confidential sources, whose accounts have been corroborated, to reach those conclusions.
Not only are these companies complicit in human rights violations, the Central Asia report alleges, but they know they are. Fusion reached out to the companies named in the report — NICE Systems (Israel), Verint Israel (U.S./Israel), Gamma (UK), and Dreamlab (Switzerland) — and none responded to repeated requests for comment.
The report is a “blueprint” for the future of the organization’s output, says Rice, the advocacy officer. “It’s the first time we’ve done something that really looks at the infrastructure, the laws, and putting it all together to get a view on how the system actually works in a country, or even a whole region,” says Rice.
“What we can do is take that [report], and have specific findings and testimonials to present to companies, to different bodies and parliamentarians, and say this is why we need these things addressed,” adds Omanovic, the researcher and fake company designer.
The tactic is starting to show signs of progress, he says. One afternoon, Omanovic was huddled over a table in the back room, taking part in what looked like an intense conference call. “European Commission,” he says afterwards. The Commission has been looking at surveillance exports since it was revealed that Egypt, Tunisia, and Bahrain were using European tech to crack down on protesters during the Arab Spring, he added. Now, PI is consulting with some members, and together they “hope to bring in a regulation specifically on this subject by year’s end.”
Privacy International has come a long way from the “sterile bar of an anonymous business hotel in Luxembourg,” where founder Simon Davies, then a lone wolf privacy campaigner, hosted its first meeting with a handful of people 25 years ago. In a blog post commemorating that anniversary, Davies (who left the organization about five years ago) described the general state of privacy advocacy when that first meeting was held:
“Those were strange times. Privacy was an arcane subject that was on very few radar screens. The Internet had barely emerged, digital telephony was just beginning, the NSA was just a conspiracy theory and email was almost non-existent (we called it electronic mail back then). We communicated by fax machines, snail mail – and through actual real face to face meetings that you travelled thousands of miles to attend.”
Immediately, there were disagreements about the scope of issues the organization should focus on, as detailed in the group’s first report, filed in 1991. Some of the group’s 120-odd loosely affiliated members and advisors wanted the organization to focus on small privacy flare-ups; others wanted it to take on huge, international privacy policies, from “transborder data flows” to medical research. Disputes arose as to what “privacy” actually meant at the time. It took years for the group to narrow down the scope of its mandate to something manageable and coherent.
Gus Hosein, current executive director, describes the ’90s as a time when the organization “just knew that it was fighting against something.” He became part of the loose collective in 1996, three days after moving to the UK from New Haven, Connecticut, thanks to a chance encounter with Davies at the London School of Economics. For the first thirteen years he worked with PI, he says, the group’s headquarters was the school pub.
They were then fighting some of the same battles that are back in the news cycle today, such as the U.S. government wanting to ban encryption, calling it a tool for criminals to hide their communications from law enforcement. “[We were] fighting against the Clinton Administration and its cryptography policy, fighting against new intersections of law, or proposals in countries X, Y and Z, and almost every day you would find something to fight around,” he says.
Just as privacy issues stemming from the dot com boom were starting to stabilize, 9/11 happened. That’s when Hosein says “the shit hit the fan.”
In the immediate wake of that tragedy, Washington pushed through the Patriot Act and the Aviation and Transportation Security Act, setting an international precedent of invasive pat-downs and extensive monitoring in the name of anti-terrorism. Hosein, being an American, followed the laws closely, and the group started issuing criticism of what it considered unreasonable searches. In the UK, a public debate about issuing national identification cards sprang up. PI fought it vehemently.
“All of a sudden we’re being called upon to respond to core policy-making in Western governments, so whereas policy and surveillance were often left to some tech expert within the Department of Justice or whatever, now it had gone to mainstream policy,” he says. “We were overwhelmed because we were still just a ragtag bunch of people trying to fight fights without funding, and we were taking on the might of the executive arm of government.”
The era was marked by a collective struggle to catch up. “I don’t think anyone had any real successes in that era,” Hosein says.
But around 2008, the group’s advocacy work in India, Thailand and the Philippines started to gain the attention of donors, and the team decided it was time to organize. The three staff members then started the formal process of becoming a charity, after being registered as a corporation for ten years. By the time it got its first office in 2011 (around the time its founder, Davies, walked away to pursue other ventures) the Arab Spring was dominating international headlines.
“With the Arab Spring and the rise of attention to human rights and technology, that’s when PI actually started to realize our vision, and become an organization that could grow,” Hosein says. “Four years ago we had three employees, and now we have 16 people,” he says with a hint of pride.
“This is a real vindication for [Edward] Snowden,” Eric King, PI’s deputy director says about one of the organization’s recent legal victories over the UK’s foremost digital spy agency, known as the Government Communications Headquarters or GCHQ.
PI used the documents made public by Snowden to get the British court that oversees GCHQ to determine that all intelligence sharing between GCHQ and the National Security Agency (NSA) was illegal up until December 2014. Ironically, the court went on to say that the sharing was illegal only because the program had not been publicly disclosed. Now that details of the program have been made public thanks to the lawsuit, the court said, the operation is legal and GCHQ can keep doing what it was doing.
“It’s like they’re creating the law on the fly,” King says. “[The UK government] is knowingly breaking the law and then retroactively justifying themselves. Even though we got the court to admit this whole program was illegal, the things they’re saying now are wholly inadequate to protect our privacy in this country.”
Nevertheless, it was a “highly significant ruling,” says Elizabeth Knight, Legal Director of fellow UK-based civil liberties organization Open Rights Group. “It was the first time the [courts have] found the UK’s intelligence services to be in breach of human rights law,” she says. “The ruling is a welcome first step towards demonstrating that the UK government’s surveillance practices breach human rights law.”
In an email, a GCHQ spokesperson downplayed the significance of the ruling, saying that PI only won the case in one respect: on a “transparency issue,” rather than on the substance of the data sharing program. “The rulings re-affirm that the processes and safeguards within these regimes were fully adequate at all times, so we have not therefore needed to make any changes to policy or practice as a result of the judgement,” the spokesperson says.
Before coming on board four years ago, King, a 25-year-old Wales native, worked at Reprieve, a non-profit that provides legal support to prisoners. Some of its clients are at Guantanamo Bay and other off-the-grid prisons, something that made him mindful of security concerns when the group was communicating with clients. King worried that every time he made a call to his clients, they were being monitored. “No one could answer those questions, and that’s what got me going on this,” says King.
Right now, he tells me, most of the group’s legal actions have to do with fighting the “Five Eyes”– the nickname given to the intertwined intelligence networks of the UK, Canada, the US, Australia and New Zealand. One of the campaigns, stemming from the lawsuit against GCHQ that established a need for transparency, is asking GCHQ to confirm if the agency illegally collected information about the people who signed a “Did the GCHQ Illegally Spy On You?” petition. So far, 10,000 people have signed up to be told whether their communications or online activity were collected by the UK spy agency when it conducted mass surveillance of the Internet. If a court actually forces GCHQ to confirm whether those individuals were spied on, PI will then ask that all retrieved data be deleted from the database.
“It’s such an important campaign not only because people have the right to know, but it’s going to bring it home to people and politicians that regular, everyday people are caught up in this international scandal,” King says. “You don’t even have to be British to be caught up in it. People all over the world are being tracked in that program.”
Eerke Boiten, a senior lecturer at the interdisciplinary Cyber Security Centre at the University of Kent, says that considering recent legal victories, he can’t write off the effort, even if he would have dismissed it just a year ago.
“We have now finally seen some breakthroughs in transparency in response to Snowden, and the sense that intelligence oversight needs an overhaul is increasing,” he wrote in an email to me. “So although the [British government] will do its best to shore up the GCHQ legal position to ensure it doesn’t need to respond to this, their job will be harder than before.”
“Privacy International have a recent record of pushing the right legal buttons,” he says. “They may win again.”
A GCHQ spokesperson says that the agency will “of course comply with any direction or order” a court might give it, stemming from the campaign.
King is also the head of PI’s research arm– organizing in-depth investigations into national surveillance ecosystems, in tandem with partner groups in countries around the world. The partners hail from places as disparate as Kenya and Mexico. One recently released report features testimonials from people who reported being heavily surveilled in Morocco. Another coming out of Colombia will be more of an “exposé,” with previously unreported details on surveillance in that country, he says.
And then there’s the stuff that King pioneered: the method of sneaking into industry conferences by using a shadow company. He developed the technique Omanovic is using. King can’t go to the conferences undercover anymore because his face is now too well known. When asked why he started sneaking into the shows, he says: “Law enforcement doesn’t like talking about [surveillance]. Governments don’t talk about it. And for the most part our engagement with companies is limited to when we sue them,” he laughs.
When it comes to the surveillance field, you would be hard pressed to find a company that does exactly what it says it does, King tells me. So when he or someone else at PI sets up a fake company, they expect to get about as much scrutiny as the next ambiguous, potentially official organization that lines up behind them.
Collectively, PI has been blacklisted or escorted out of a few conferences over the past four years it has been doing this, he estimates.
“If we have to navigate some spooky places to get what we need, then that’s what we’ll do,” he says. Sometimes you have to walk through a dark room to turn on a light. Privacy International sees a world with a lot of dark rooms.
“Being shadowy is acceptable in this world.”
No arrests yet, but the good news is that the US and Europe have, via the FBI and Europol’s European Cybercrime Centre, on Wednesday dismantled a network of as many as 12,000 computers that cyber-criminals had used to elude security firms and law enforcement agencies for years. Check out the video clip and Bloomberg article below.
Meanwhile, recall yesterday’s blog on data breach and the 22 countries where stolen data were most frequently accessed.
Police Shut Europe Computer Network Enabling Theft, Extortion
by Cornelius Rahn and Chris Strohm
European and U.S. police on Wednesday shut down a computer network used by cybercriminals to facilitate the theft of banking passwords and extortion, one that had eluded security companies and law enforcement for years.
Agents of the U.S. Federal Bureau of Investigation and the European Cybercrime Center seized servers across Europe that had been responsible for spreading malware on thousands of mainly U.S.-based victim computers, said Raj Samani, chief technology officer for Intel Corp.’s security unit in the region, which helped prepare the takedown.
Governments are responding to the increasing frequency and impact of online attacks by setting up dedicated cybercrime units and working with security-software companies to weed out threats before more damage is done. The network functioned as a portal offered by criminals to others seeking to spread their own malware, according to Paul Gillen, head of operations at Europol’s European Cybercrime Centre.
“If that carried on in earnest, it had great potential from a criminal perspective,” Gillen said. “People set up infrastructure like that and rent it out to others, saying ‘here are a lot of infected computers so you can upload all your banking malware or other things on them.’”
The FBI and Europol said there had been no arrests yet, as it was too early to say who the perpetrators were or what damage the malware had caused. Police will now sift through the data gained from the seized machines before notifying victims and determining the culprits, according to Gillen.
The malicious code, labeled W32/Worm-AAEH, was first detected in 2009 but was difficult to weed out because it changed its shape as many as six times a day, Intel’s Samani said. The worm had evolved capabilities such as shutting down connections with servers from antivirus companies and disabling tools that could terminate it, he said.
Even after the control servers are no longer available to the criminals to morph existing pieces of malware, users must still clean up their machines. Computer owners can stop the software’s core function by setting rules that prevent new software from running automatically and shutting certain ports, Intel said.
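The cat-and-mouse at work here is easy to see in miniature. A classic antivirus signature is, at its simplest, a hash of a known-bad file; a worm that rewrites even one byte of itself gets a new hash and sails past the blocklist. Here is a toy Python sketch of that idea — an illustration of why shape-shifting defeats naive hash matching, not the actual W32/Worm-AAEH mechanics, which are far more elaborate:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a byte string, standing in for a file hash."""
    return hashlib.sha256(data).hexdigest()

# Signature database: exact hashes of known-bad samples.
original = b"MZ" + b"payload-generation-1"   # pretend malware sample
blocklist = {sha256(original)}

# One "shape change": the worm rewrites part of its own body.
mutated = b"MZ" + b"payload-generation-2"

print(sha256(original) in blocklist)  # the known sample is caught
print(sha256(mutated) in blocklist)   # the morphed copy slips through
```

Real engines therefore layer behavioral and heuristic detection on top of hashes, which is why takedowns of the control servers that drive the morphing matter so much.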
Here’s an interesting experiment (below) on where stolen data goes after a data breach.
The list of the 22 countries where the (fake) sensitive data was accessed is noteworthy, especially if one falls under your jurisdiction – mine is in the list…
What happens to data after a breach?
Posted on 07 April 2015.
Bitglass undertook an experiment geared towards understanding what happens to sensitive data once it has been stolen. In the experiment, stolen data traveled the globe, landing in five different continents and 22 countries within two weeks.
Overall, the data was viewed more than 1,000 times and downloaded 47 times; some activity had connections to crime syndicates in Nigeria and Russia.
Threat researchers programmatically synthesized 1,568 fake names, social security numbers, credit card numbers, addresses and phone numbers, which were saved in an Excel spreadsheet. The spreadsheet was then transmitted through the Bitglass proxy, which automatically watermarked the file.
Each time the file was opened, the persistent watermark, which survives copy, paste and other file manipulations, “called home” to record view information such as IP address, geographic location and device type. Finally, the spreadsheet was posted anonymously to cyber-crime marketplaces on the Dark Web.
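Generating the bait file is the straightforward half of the experiment; the call-home watermark is Bitglass’s proprietary piece. A hypothetical, stdlib-only Python sketch of the honeytoken side — the names, field layout, and per-file watermark ID are all invented for illustration, and the real watermark is embedded invisibly rather than sitting in a visible column:

```python
import csv
import io
import random
import uuid

random.seed(0)  # reproducible fake data for the sketch

def fake_record() -> dict:
    """One entirely synthetic identity -- no real person's data."""
    first = random.choice(["Alice", "Bob", "Carol", "Dan"])
    last = random.choice(["Smith", "Jones", "Lee", "Garcia"])
    return {
        "name": f"{first} {last}",
        "ssn": f"{random.randint(100, 899)}-{random.randint(10, 99)}-{random.randint(1000, 9999)}",
        "card": "4" + "".join(str(random.randint(0, 9)) for _ in range(15)),
        "phone": f"555-{random.randint(1000, 9999)}",
    }

def build_bait_file(n: int, watermark: str) -> str:
    """CSV of n fake identities, each row tagged with the file's watermark ID."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "ssn", "card", "phone", "wm"])
    writer.writeheader()
    for _ in range(n):
        row = fake_record()
        row["wm"] = watermark  # the ID a tracking endpoint would report on open
        writer.writerow(row)
    return buf.getvalue()

bait = build_bait_file(1568, uuid.uuid4().hex)
```

In the real experiment the unique ID is what ties each later “view” event back to the specific drop site where the file was planted, which is how the traffic clusters described below were attributed.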
The experiment offers insight into how stolen records from data breaches are shared, bought and then sold on the black market. During the experiment, crime syndicates in Nigeria and Russia emerged via clusters of closely-related activity. Traffic patterns indicate the fake data was shared among members of the syndicates to vet its validity and subsequently shared elsewhere on the Dark Web, beyond the original drop sites.
In 2014, 783 data breaches were reported, which represents a 27.5 percent spike over the previous year. Data breaches continue to spike in 2015 – as of March 20, 174 breaches affecting nearly 100 million customer records had been reported. While many are suffering from data-breach fatigue, this experiment sheds light on how cybercriminals interact with pilfered data and thus helps enterprises understand why visibility is critical when it comes to limiting the damage of breaches.
The falsified data was placed on Dropbox as well as on seven Dark Web sites believed to be frequented by cybercriminals. The result of the experiment found that within 12 days the data was:
– Accessed from five continents – North America, Asia, Europe, Africa and South America
– Accessed from 22 countries – United States, Brazil, Belgium, Nigeria, Hong Kong, Spain, Germany, the United Kingdom, France, Sweden, Finland, the Maldives, New Zealand, Canada, Norway, the Russian Federation, the Netherlands, the Czech Republic, Denmark, Italy, Turkey
– Accessed most often from Nigeria, Russia and Brazil
– Viewed 1,081 times, with 47 unique downloads.
Photo (above) credit: http://www.freakingnews.com
Here’s breaking news (below) from CNN:
How the U.S. thinks Russians hacked the White House
By Evan Perez and Shimon Prokupecz, CNN
Updated 0037 GMT (0737 HKT) April 8, 2015
Washington (CNN) Russian hackers behind the damaging cyber intrusion of the State Department in recent months used that perch to penetrate sensitive parts of the White House computer system, according to U.S. officials briefed on the investigation.
While the White House has said the breach only affected an unclassified system, that description belies the seriousness of the intrusion. The hackers had access to sensitive information such as real-time non-public details of the president’s schedule. While such information is not classified, it is still highly sensitive and prized by foreign intelligence agencies, U.S. officials say.
The White House in October said it noticed suspicious activity in the unclassified network that serves the executive office of the president. The system has been shut down periodically to allow for security upgrades.
The FBI, Secret Service and U.S. intelligence agencies are all involved in investigating the breach, which they consider among the most sophisticated attacks ever launched against U.S. government systems. The intrusion was routed through computers around the world, as hackers often do to hide their tracks, but investigators found tell-tale codes and other markers that they believe point to hackers working for the Russian government.
National Security Council spokesman Mark Stroh didn’t confirm the Russian hack, but he did say that “any such activity is something we take very seriously.”
“In this case, as we made clear at the time, we took immediate measures to evaluate and mitigate the activity,” he said. “As has been our position, we are not going to comment on [this] article’s attribution to specific actors.”
Neither the U.S. State Department nor the Russian Embassy immediately responded to a request for comment.
Ben Rhodes, President Barack Obama’s deputy national security adviser, said the White House’s use of a separate system for classified information protected sensitive national security-related items from being obtained by hackers.
“We do not believe that our classified systems were compromised,” Rhodes told CNN’s Wolf Blitzer on Tuesday.
“We’re constantly updating our security measures on our unclassified system, but we’re frankly told to act as if we need not put information that’s sensitive on that system,” he said. “In other words, if you’re going to do something classified, you have to do it on one email system, one phone system. Frankly, you have to act as if information could be compromised if it’s not on the classified system.”
To get to the White House, the hackers first broke into the State Department, investigators believe.
The State Department computer system has been bedeviled by signs that despite efforts to lock them out, the Russian hackers have been able to reenter the system. One official says the Russian hackers have “owned” the State Department system for months and it is not clear the hackers have been fully eradicated from the system.
As in many hacks, investigators believe the White House intrusion began with a phishing email that was launched using a State Department email account that the hackers had taken over, according to the U.S. officials.
Director of National Intelligence James Clapper, in a speech at an FBI cyberconference in January, warned government officials and private businesses to teach employees what “spear phishing” looks like.
“So many times, the Chinese and others get access to our systems just by pretending to be someone else and then asking for access, and someone gives it to them,” Clapper said.
The ferocity of the Russian intrusions in recent months caught U.S. officials by surprise, leading to a reassessment of the cybersecurity threat as the U.S. and Russia increasingly confront each other over issues ranging from the Russian aggression in Ukraine to the U.S. military operations in Syria.
The attacks on the State and White House systems are one reason why Clapper told a Senate hearing in February that the “Russian cyberthreat is more severe than we have previously assessed.”
The revelations about the State Department hacks also come amid controversy over former Secretary of State Hillary Clinton’s use of a private email server to conduct government business during her time in office. Critics say her private server likely was even less safe than the State system. The Russian breach is believed to have come after Clinton departed State.
But hackers have long made Clinton and her associates targets.
The website The Smoking Gun first reported in 2013 that a hacker known as Guccifer had broken into the AOL email of Sidney Blumenthal, a friend and advisor to the Clintons, and published emails Blumenthal sent to Hillary Clinton’s private account. The emails included sensitive memos on foreign policy issues and were the first public revelation of the existence of Hillary Clinton’s private email address now at the center of controversy: firstname.lastname@example.org. The address is no longer in use.
Wesley Bruer contributed to this report
Was that a brainfart?
President Barack Obama signed an executive order Wednesday that permits the US to impose economic sanctions on individuals and entities anywhere in the world for destructive cyber-crimes and online corporate espionage – see the Bloomberg article below.
Now what’s this about? An all-out effort against cyber-criminals, or just plain window dressing?
For all their abilities to trace the attacks right down to the identities of the hackers, have the US authorities been able to do anything? Recall the Mandiant Report two years ago that allegedly traced Chinese hackers down to the very unit of a military base in Shanghai?
Recall also the five Chinese military hackers (above) placed on the FBI’s wanted list last year? Where has that led (see video clip below)? And what about the alleged North Korean hacks on Sony Pictures?
With all good intent and seriousness to go on the offensive, Obama has yet to put his words into action on this front…
Hackers, Corporate Spies Targeted by Obama Sanctions Order
by Justin Sink and Chris Strohm
President Barack Obama signed an executive order Wednesday allowing the use of economic sanctions for the first time against perpetrators of destructive cyber-attacks and online corporate espionage.
That will let the Treasury Department freeze the assets of people, companies or other entities overseas identified as the source of cybercrimes. The federal government also will be able to bar U.S. citizens and companies from doing business with those targeted for sanctions.
“Cyberthreats pose one of the most serious economic and national security challenges to the United States,” Obama said in a statement. “As we have seen in recent months, these threats can emanate from a range of sources and target our critical infrastructure, our companies and our citizens.”
Under the order, sanctions only will be used if a cyber-attack threatens to harm U.S. national security, foreign policy or the broader economy. It’s aimed at cybercriminals who target critical infrastructure, disrupt major computer networks, or are involved in the “significant” theft of trade secrets or intellectual property for competitive advantage or private financial gain.
The administration is using the threat of sanctions to help prevent large-scale data theft after breaches at major U.S. corporations, including retailer Target Corp., health-insurer Anthem Inc. and home-improvement chain Home Depot Inc. It’s also a recognition that companies are facing increasingly destructive attacks, such as the hack against Sony Pictures Entertainment that crippled thousands of computers and delayed release of a comedy movie.
Sanctions imposed under the executive order will help disrupt the operations of hackers who may be in countries outside the reach of U.S. law enforcement, John Carlin, U.S. assistant attorney general for national security, said in a phone interview.
Banks and other companies connected to the U.S. financial system will be required to prohibit sanctioned hackers and entities from using their services, cutting them off from valuable resources, Carlin said.
“It’s a new powerful tool and we intend to use it,” Carlin said. “It has the capability to significantly raise the cost for those who steal or benefit through cybercrime.”
The unique aspect of the executive order is that it allows the U.S. to impose sanctions on individuals or entities over hacking attacks regardless of where they are located, White House Cybersecurity Coordinator Michael Daniel told reporters on a conference call. While other sanctions are tied to a particular country or group of persons, hacking attacks transcend borders.
“What sets this executive order apart is that it is focused on malicious cyber-activity,” Daniel said. “What we’re trying to do is enable us to have a new way of both deterring and imposing costs on malicious cyber-actors wherever they may be.”
The order is a signal of the administration’s “clear intent to go on offense against the full range of very serious cyberthreats that are out there,” said Peter Harrell, the former principal deputy assistant secretary for sanctions at the State Department.
“This is a message that if folks around the world don’t cut out these activities, they’re going to find themselves cut off from the American banking system,” Harrell said in an interview.
Harrell said there are potential stumbling blocks to effective implementation. For one, hackers work hard to conceal their identity. Even though the U.S. and private companies have improved their ability to trace attacks, attribution can sometimes be difficult.
Daniel acknowledged that determining who is actually behind hacking attacks is still a challenge but said the U.S. is getting better at it.
In other cases, diplomatic considerations may be at play. The administration’s decision in 2014 to file criminal charges against five members of the Chinese military over their role in cyber-espionage strained relations with Beijing.
In January, Obama authorized economic sanctions against 10 North Korean officials and government entities in connection with the Sony attack. The North Korean government has denied any involvement in the Sony case.
Harrell said the use of sanctions can provide leverage as the U.S. registers complaints with governments overseas about cyber-attacks. Targeted use of the new sanctions powers also may help deter criminals.
“A number of these cyber-attacks are organized by fairly significant actors out there — large hacking collectives, or organized by foreign intelligence agencies,” Harrell said. “They all have real potential costs if they were put on sanctions lists.”
The Obama administration has been under pressure to take action to help companies protect their networks from cyber-attacks. In early March, Premera Blue Cross announced that hackers may have accessed 11 million records, including customer Social Security numbers, bank account data and medical information.
Home Depot in September said 56 million payment cards and 53 million e-mail addresses had been stolen by hackers. And just days earlier, JPMorgan Chase & Co. announced a data breach affecting 76 million households and 7 million small businesses.
The highest-profile breach, however, may have been the hacking of Sony Pictures. The U.S. government said North Korean hackers broke into the studio’s network and then exposed e-mails and private employment and salary records. U.S. authorities said it was in retaliation for plans to release “The Interview,” a satirical film depicting the assassination of leader Kim Jong Un.
This is one app all parents should be aware of. The Secrets app is the cyberspace where kids make their confessions and share their best kept secrets and the nightmare is, their supposedly anonymous postings were highly vulnerable after all.
It should come as no surprise that health insurance companies store far more sensitive and personal information about their clients than banks and credit card companies do, and it certainly doesn’t help when they fail to take cybersecurity seriously, as the recent hacks on Anthem and Premera (article below) have highlighted.
And what’s going to happen to these clients following the (Anthem and Premera) hacks? Watch the video clips below.
The disturbing truth behind the Premera, Anthem attacks
March 24, 2015 | By Dan Bowman
As details continue to emerge following the recent hack attacks on payers Anthem and Premera–in which information for close to 90 million consumers combined may have been put at risk–perhaps the most disturbing revelation of all is that, in both instances, neither entity appears to truly take security seriously.
Premera, for instance, knew three weeks prior to the initial penetration of its systems in May 2014 that network security issues loomed large. A report sent by the U.S. Office of Personnel Management’s Office of Inspector General detailed several vulnerabilities, including a lack of timely patch implementations and insecure server configurations.
The findings were so bad, they prompted OPM to warn Premera, “failure to promptly install important updates increases the risk that vulnerabilities will not be remediated and sensitive data could be breached.” In addition, OPM told the Mountlake Terrace, Washington-based insurer that failure to remove outdated software would increase the risk of a successful malicious attack on its information systems.
“Promptly” to Premera apparently meant eight months down the road. And one month after its self-imposed Dec. 31, 2014, deadline to resolve its issues, guess what the payer found?
Just imagine how much damage could have been spared had Premera acted with more haste.
In Anthem’s case, negligence continues to persist. The nation’s second-largest payer has refused to allow a federal watchdog agency to perform vulnerability scans and compliance tests on its systems in the wake of its massive hack attack. It also prevented auditors from adequately testing whether it appropriately secured its computer information systems during a 2013 audit, citing corporate policy prohibiting external entities from connecting to the Anthem network.
Corporate policy is all well and good, but it’s not going to mean squat to a consumer two years from now when Anthem’s complimentary credit monitoring wears off and the hackers begin wading through the treasure trove of stolen information. As one of those consumers, it would be nice to hear Anthem take the advice of Shaun Greene, chief operating officer of Salt Lake City-based Arches Health Plan, who told my colleague Brian Eastwood last month that payers should hire third parties to conduct HIPAA risk assessments.
“That way, you avoid internal posturing and receive objective feedback,” Greene said.
Following last summer’s massive Community Health Systems breach–and on the heels of other high-profile cybersecurity attacks–it appeared earlier this year that the healthcare industry was finally starting to truly prioritize information protection.
That’s not to say that the majority of the industry doesn’t take such matters seriously. But it’s disappointing to see that some of its biggest players seem to feel differently. – Dan (@Dan_Bowman and @FierceHealthIT)
You may want to think twice about the new MacBook.
Apple may have big plans for its newly introduced USB-C port, but the widely reported vulnerabilities of USB devices signal big trouble ahead, as the following article explains.
The NSA Is Going to Love These USB-C Charging Cables
Thanks to Apple’s new MacBook and Google’s new Chromebook Pixel, USB-C has arrived. A single flavor of cable for all your charging and connectivity needs? Hell yes. But that convenience doesn’t come without a cost; our computers will be more vulnerable than ever to malware attacks, from hackers and surveillance agencies alike.
The trouble with USB-C stems from the fact that the USB standard isn’t very secure. Last year, researchers wrote a piece of malware called BadUSB which attaches to your computer using USB devices like phone chargers or thumb drives. Once connected, the malware basically takes over a computer imperceptibly. The scariest part is that the malware is written directly to the USB controller chip’s firmware, which means that it’s virtually undetectable and so far, unfixable.
Before USB-C, there was a way to keep yourself somewhat safe. As long as you kept tabs on your cables, and never stuck random USB sticks into your computer, you could theoretically keep it clean. But as The Verge points out, the BadUSB vulnerability still hasn’t been fixed in USB-C, and now the insecure port is the slot where you connect your power supply. Heck, it’s shaping up to be the slot where you connect everything. You have no choice but to use it every day. Think about how often you’ve borrowed a stranger’s power cable to get charged up. Asking for a charge from a stranger is like having unprotected sex with someone you picked up at the club.
What the Verge fails to mention however, is that it’s potentially much worse than that. If everyone is using the same power charger, it’s not just renegade hackers posing as creative professionals in coffee shops that you need to worry about. With USB-C, the surveillance establishment suddenly has a huge incentive to figure out how to sneak a compromised cable into your power hole.
It might seem alarmist and paranoid to suggest that the NSA would try to sneak a backdoor into charging cables through manufacturers, except that the agency has been busted trying exactly this kind of scheme. Last year, it was revealed that the NSA paid security firm RSA $10 million to leave a backdoor in their encryption unpatched. There’s no telling if or when or how the NSA might try to accomplish something similar with USB-C cables, but it stands to reason they would try.
We live in a world where we plug in with abandon, and USB-C’s flexibility is designed to make plugging in easier than ever. Imagine never needing to guess whether or not your aunt’s house will have a charger for your phone. USB-C could become so common that this isn’t even a question. Of course she has one! With that ubiquity and convenience comes a risk that the tech could become exploited—not just by criminals, but also by the government’s data siphoning machine.
Ever wonder what happens when one’s hacked?
Here’s an insightful, chilling account of how one victim attempted to trace the hacker who invaded his online life and Bitcoin wallet.
Anatomy of a Hack
In the early morning hours of October 21st, 2014, Partap Davis lost $3,000. He had gone to sleep just after 2AM in his Albuquerque, New Mexico, home after a late night playing World of Tanks. While he slept, an attacker undid every online security protection he had set up. By the time he woke up, most of his online life had been compromised: two email accounts, his phone, his Twitter, his two-factor authenticator, and most importantly, his bitcoin wallets.
Davis was careful when it came to digital security. He chose strong passwords and didn’t click on bogus links. He used two-factor authentication with Gmail, so when he logged in from a new computer, he had to type in six digits that were texted to his phone, just to make sure it was him. He had made some money with the rise of bitcoin and held onto the bitcoin in three protected wallets, managed by Coinbase, Bitstamp, and BTC-E. He also used two-factor with the Coinbase and BTC-E accounts. Any time he wanted to access them, he had to verify the login with Authy, a two-factor authenticator app on his phone.
Other than the bitcoin, Davis wasn’t that different from the average web user. He makes his living coding, splitting time between building video education software and a patchwork of other jobs. On the weekends, he snowboards, exploring the slopes around Los Alamos. This is his 10th year in Albuquerque; last year, he turned 40.
After the hack, Davis spent weeks tracking down exactly how it had happened, piecing together a picture from access logs and reluctant customer service reps. Along the way, he reached out to The Verge, and we added a few more pieces to the puzzle. We still don’t know everything — in particular, we don’t know who did it — but we know enough to say how they did it, and the points of failure sketch out a map of the most glaring vulnerabilities of our digital lives.
It started with Davis’ email. When he was first setting up an email account, Davis found that Partap@gmail.com was taken, so he chose a Mail.com address instead, setting up Partap@mail.com to forward to a less memorably named Gmail address.
Some time after 2AM on October 21st, that link was broken. Someone broke into Davis’ mail.com account and stopped the forwarding. Suddenly there was a new phone number attached to the account — a burner Android device registered in Florida. There was a new backup email too, email@example.com, which is still the closest thing we have to the attacker’s name.
For simplicity’s sake, we’ll call her Eve.
How did Eve get in? We can’t say for sure, but it’s likely that she used a script to target a weakness in Mail.com’s password reset page. We know such a script existed. For months, users on the site Hackforum had been selling access to a script that reset specific account passwords on Mail.com. It was an old exploit by the time Davis was targeted, and the going rate was $5 per account. It’s unclear how the exploit worked and whether it has been closed in the months since, but it did exactly what Eve needed. Without any authentication, she was able to reset Davis’ password to a string of characters that only she knew.
Eve’s next step was to take over Partap’s phone number. She didn’t have his AT&T password, but she just pretended to have forgotten it, and ATT.com sent along a secure link to firstname.lastname@example.org to reset it. Once inside the account, she talked a customer service rep into forwarding his calls to her Long Beach number. Strictly speaking, there are supposed to be more safeguards required to set up call forwarding, and it’s supposed to take more than a working email address to push it through. But faced with an angry client, customer service reps will often give way, putting user satisfaction over the colder virtues of security.
Once forwarding was set up, all of Davis’ voice calls belonged to Eve. Davis still got texts and emails, but every call was routed straight to the attacker. Davis didn’t realize what had happened until two days later, when his boss complained that Davis wasn’t picking up the phone.
Google and Authy
Next, Eve set her sights on Davis’ Google account. Experts will tell you that two-factor authentication is the best protection against attacks. A hacker might get your password or a mugger might steal your phone, but it’s hard to manage both at once. As long as the phone is a physical object, that system works. But people replace their phones all the time, and they expect to be able to replace the services, too. Accounts have to be reset 24 hours a day, and two-factor services end up looking like just one more account to crack.
Davis hadn’t set up Google’s Authenticator app, the more secure option, but he had two-factor authentication enabled — Google texted him a confirmation code every time he logged in from a new computer. Call forwarding didn’t pass along Davis’ texts, but Eve had a back door: thanks to Google’s accessibility functions, she could ask for the confirmation code to be read out loud over the phone.
Authy should have been harder to break. It’s an app, like Authenticator, and it never left Davis’ phone. But Eve simply reset the app on her phone using a mail.com address and a new confirmation code, again sent by a voice call. A few minutes after 3AM, the Authy account moved under Eve’s control.
It was the same trick that had fooled Google: as long as she had Davis’ email and phone, two-factor couldn’t tell the difference between them. At this point, Eve had more control over Davis’s online life than he did. Aside from texting, all digital roads now led to Eve.
At 3:19AM, Eve reset Davis’s Coinbase account, using Authy and his Mail.com address. At 3:55AM, she transferred the full balance (worth roughly $3,600 at the time) to a burner account she controlled. From there, she made three withdrawals — one 30 minutes after the account was opened, then another 20 minutes later, and another five minutes after that. After that, the money disappeared into a nest of dummy accounts, designed to cover her tracks. Less than 90 minutes after his Mail.com account was first compromised, Davis’ money was gone for good.
Authy might have known something was up. The service keeps an eye out for fishy behavior, and while they’re cagey about what they monitor, it seems likely that an account reset to an out-of-state number in the middle of the night would have raised at least a few red flags. But the number wasn’t from a known fraud center like Russia or Ukraine, even if Eve might have been. It would have seemed even more suspicious when Eve logged into Coinbase from a Canadian IP. Could they have stopped her then? Modern security systems like Google’s ReCAPTCHA often work this way, adding together small indicators until there’s enough evidence to freeze an account – but Coinbase and Authy each only saw half the picture, and neither had enough to justify freezing Partap’s account.
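The "adding together small indicators" idea can be sketched in a few lines. The signal names and weights below are invented for illustration – the point is only that each provider's partial view can stay under a freeze threshold while the combined view would not:

```python
# Hypothetical risk signals and weights (not Authy's or Coinbase's actual rules).
SIGNALS = {
    "reset_at_night": 2,
    "new_phone_number": 3,
    "out_of_state_number": 2,
    "login_from_new_country": 3,
    "full_balance_withdrawn": 4,
}

FREEZE_THRESHOLD = 8  # freeze the account if the summed score reaches this

def risk_score(observed):
    """Sum the weights of the signals a given provider actually observed."""
    return sum(SIGNALS[s] for s in observed)

authy_view = ["reset_at_night", "new_phone_number", "out_of_state_number"]  # score 7
coinbase_view = ["login_from_new_country", "full_balance_withdrawn"]        # score 7
combined_view = authy_view + coinbase_view                                  # score 14
```

Each provider alone scores 7 and lets the activity through; a system that saw both halves would score 14 and freeze the account.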
BTC-E and Bitstamp
When Davis woke up, the first thing he noticed was that his Gmail had mysteriously logged out. The password had changed, and he couldn’t log back in. Once he was back in the account, he saw how deep the damage went. There were reset emails from each account, sketching out a map of the damage. When he finally got into his Coinbase account, he found it empty. Eve had made off with 10 bitcoin, worth more than $3,000 at the time. It took hours on the phone with customer service reps and a faxed copy of his driver’s license before he could convince them he was the real Partap Davis.
What about the two other wallets? There was $2,500 worth of bitcoin in them, with no advertised protections that the Coinbase wallet didn’t have. But when Davis checked, both accounts were still intact. BTC-e had put a 48-hour hold on the account after a password change, giving him time to prove his identity and recover the account. Bitstamp had an even simpler protection: when Eve emailed to reset Davis’s authentication token, they had asked for an image of his driver’s license. Despite all Eve’s access, it was one thing she didn’t have. Davis’ last $2,500 worth of bitcoin was safe.
It’s been two months now since the attack, and Davis has settled back into his life. The last trace of the intrusion is Davis’ Twitter account, which stayed hacked for weeks after the other accounts. @Partap is a short handle, which makes it valuable, so Eve held onto it, putting in a new picture and erasing any trace of Davis. A few days after the attack, she posted a screenshot of a hacked Xfinity account, tagging another handle. The account didn’t belong to Davis, but it belonged to someone. She had moved on to the next target, and was using @partap as a disposable accessory to her next theft, like a stolen getaway car.
Who was behind the attack? Davis has spent weeks looking for her now — whole afternoons wasted on the phone with customer service reps — but he hasn’t gotten any closer. According to account login records, Eve’s computer was piping in from a block of IP addresses in Canada, but she may have used Tor or a VPN service to cover her tracks. Her phone number belonged to an Android device in Long Beach, California, but that phone was most likely a burner. There are only a few tracks to follow, and each one peters out fast. Wherever she is, Eve got away with it.
Why did she choose Partap Davis? She knew about the wallets upfront, we can assume. Why else would she have spent so much time digging through the accounts? She started at the mail.com account too, so we can guess that somehow, Eve came across a list of bitcoin users with Davis’ email address on it. A number of leaked Coinbase customer lists are floating around the internet, although I couldn’t find Davis’ name on any of them. Or maybe his identity came from an equipment manufacturer or a bitcoin retailer. Leaks are commonplace these days, and most go unreported.
Davis is more careful with bitcoin these days, and he’s given up on the mail.com address — but otherwise, not much about his life has changed. Coinbase has given refunds before, but this time they declined, saying the company’s security wasn’t at fault. He filed a report with the FBI, but the bureau doesn’t seem interested in a single bitcoin theft. What else is there to do? He can’t stop using a phone or give up the power to reset an account. There were just so many accounts, so many ways to get in. In the security world, they call this the attack surface. The bigger the surface, the harder it is to defend.
Most importantly, resetting a password is still easy, as Eve discovered over and over again. When a service finally stopped her, it wasn’t an elaborate algorithm or a fancy biometric. Instead, one service was willing to make customers wait 48 hours before authorizing a new password. On a technical level, it’s a simple fix, but a costly one. Companies are continuously balancing the small risk of compromise against the broad benefits of convenience. A few people may lose control of their account, but millions of others are able to keep using the service without a hitch. In the fight between security and convenience, security is simply outgunned.
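The 48-hour hold that finally stopped Eve is, as the article says, technically simple. A sketch of the idea (illustrative only, not any exchange's actual code): a reset request is parked in a pending state, the real owner can dispute it during the window, and the new credential only takes effect after the hold expires:

```python
import time

HOLD_SECONDS = 48 * 3600  # the 48-hour window BTC-E applied

class ResetHold:
    """Password resets are queued, disputable, and applied only after a delay."""

    def __init__(self):
        self.pending = {}  # account -> (new_password_hash, requested_at)

    def request_reset(self, account, new_hash, now=None):
        """Queue a reset; nothing changes on the account yet."""
        self.pending[account] = (new_hash, now if now is not None else time.time())

    def dispute(self, account):
        """Real owner proves their identity during the window: cancel the reset."""
        self.pending.pop(account, None)

    def finalize(self, account, now=None):
        """Apply the reset only once the hold has expired; returns the new hash or None."""
        if account not in self.pending:
            return None
        new_hash, requested_at = self.pending[account]
        now = now if now is not None else time.time()
        if now - requested_at >= HOLD_SECONDS:
            del self.pending[account]
            return new_hash
        return None
```

An attacker with stolen email and phone access can still request a reset, but cannot collect on it before the legitimate owner has two days to notice and dispute – exactly the trade of convenience for security the article describes.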
3/5 11:10am ET: Updated to clarify Bitstamp security protocols.
If there’s any one lesson on computer/phone scams you need to remember: Microsoft, or Apple for that matter, will not initiate a call to offer a remote computer scan to fix a “problem”.
So here’s an actual incident when the scammers called and met their match – it was a computer security researcher on the line, who recorded the entire conversation (his two audio files below).
At one point, after allowing the scammer to gain limited control of his computer screen, he informed the caller that she was busted; she in turn threatened to hack him (second audio file).
Enjoy witnessing scammers at work and here’s the article for a brief background.
Oh by the way, the caller’s number was 949-000-7676.
Above photo credit: http://background-kid.com/blurred-people-background.html
Great – now there’s a new technology to pull clear pictures out of blurred CCTV images, just when we learned last week that there are gadgets to hide one’s identity from the prying eyes of facial recognition programs like the FBI’s US$1 billion futuristic Next Generation Identification (NGI) System.
Fujitsu, the Japanese multinational information technology equipment and services company, recently said it has invented a new, first of its kind image-processing technology that can detect people from low-resolution imagery and track people in security camera footage, even when the images are heavily blurred to protect privacy. See full story below.
Sad to say, this is probably the easiest, effective and most feasible solution:
Fujitsu tech can track heavily blurred people in security videos
By Tim Hornyak
IDG News Service | March 6, 2015
Fujitsu has developed image-processing technology that can be used to track people in security camera footage, even when the images are heavily blurred to protect their privacy.
Fujitsu Laboratories said its technology is the first of its kind that can detect people from low-resolution imagery in which faces are indistinguishable.
Detecting the movements of people could be useful for retail design, reducing pedestrian congestion in crowded urban areas or improving evacuation routes for emergencies, it said.
Fujitsu used computer-vision algorithms to analyze the imagery and identify the rough shapes, such as heads and torsos, that remain even if the image is heavily pixelated. The system can pick out multiple people in a frame, even if they overlap.
Using multiple camera sources, it can then determine if two given targets are the same person by focusing on the distinctive colors of a person’s clothing.
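The cross-camera matching described above can be illustrated with a toy sketch. This is not Fujitsu’s actual algorithm, which operates on pixel crops from video frames; here we simply compare coarse clothing-color histograms with histogram intersection, a common baseline for this kind of re-identification:

```python
# Toy illustration: decide whether two camera detections are the same
# person by comparing the color distribution of their clothing.
# All data below is made up for demonstration purposes.
from collections import Counter

def color_histogram(pixels):
    """Normalize a list of coarse color labels into a frequency histogram."""
    counts = Counter(pixels)
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return sum(min(h1.get(c, 0.0), h2.get(c, 0.0)) for c in set(h1) | set(h2))

cam_a = color_histogram(["red"] * 70 + ["blue"] * 20 + ["gray"] * 10)
cam_b = color_histogram(["red"] * 65 + ["blue"] * 25 + ["gray"] * 10)
cam_c = color_histogram(["green"] * 80 + ["black"] * 20)

print(similarity(cam_a, cam_b))  # high: likely the same person
print(similarity(cam_a, cam_c))  # low: different clothing colors
```

A production system would also have to handle lighting changes between cameras, which is why the body-shape detection step matters: color alone is a weak signal.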
An indoor test of the system was able to track the paths of 80 percent of test subjects, according to the company. Further details of the trial were not immediately available.
“The technology could be used by a business owner when planning the layout of their next restaurant/shop,” a Fujitsu spokesman said via email. “It would also be used by the operators of a large sporting event during times of heavy foot traffic.”
People-tracking know-how has raised privacy concerns in Japan. Last year, the National Institute of Information and Communications Technology (NICT) was forced to delay and scale down a large, long-term face-recognition study it was planning to carry out at Osaka Station, one of the country’s busiest rail hubs.
The Fujitsu research is being presented at a conference of the Information Processing Society of Japan being held at Tohoku University in northern Japan. The company hopes to improve the accuracy of the system, with the aim of commercializing it in the year ending March 31, 2016.
Fujitsu has also been developing retail-oriented technology, such as sensors that follow a person’s gaze as he or she looks over merchandise, as well as LED lights that can beam product information to smartphones.
Forget Google Glass – there’s something more fun and useful (picture above). But first, consider the picture below.
It may sound like the Hollywood movie The Matrix, but let’s face it: sooner or later, everyone will have their photo captured in a public space.
Consider, for example, the FBI’s US$1 billion futuristic facial recognition program – the Next Generation Identification (NGI) System – which is already up and running with the aim of capturing photographs of every American and everyone on US soil.
The pictures above are an example of what the US government has collected on one individual – she filed a Freedom of Information Act request to see what had been collected, and the Department of Homeland Security subsequently released the data gathered under the Global Entry Program.
But apart from immigration checkpoints – and potentially files held by other government departments, at home and abroad – we are also subject to the millions of CCTV cameras in public areas and the facial recognition programs scanning through the captured images (as well as those on the internet and social networks).
So it’s good to know there may be a solution – though it’s still early days, and it may not apply to cameras at immigration checkpoints.
The (computer) antivirus software company AVG is working on a “privacy glasses” project. These glasses (above) are designed to obfuscate your identity and prevent any facial recognition software from figuring out who you are, either by matching you with the pictures in their database or creating a new file of you for future use.
Find out more from this article below.
It could be game over for Russian hacker Evgeniy Bogachev: the US State Department and FBI have issued a “Wanted” poster with a US$3 million reward for information leading to his arrest, the highest price US authorities have ever placed on a head in a cyber case.
Bogachev, apparently still in Russia, was charged by the US with running a computer attack operation called GameOver Zeus that allegedly siphoned in excess of US$100 million from the online bank accounts of businesses and consumers in the US and around the world.
However, despite the takedown of the GameOver botnet and the demise of CryptoLocker, it’s not all over: new variants of file-encrypting ransomware keep appearing. The following screen is one you don’t want to see on your computer monitor.
Check out this nice article about how to protect yourself from ransomware with the Sophos Virus Removal Tool.
I have an easier, effective, and unorthodox solution, which I have mentioned in public lectures and previous columns: changing your cyber lifestyle by keeping “naked” computers, i.e. not storing a single file on the computer’s hard disk apart from the operating system and software program files.
In essence, I store all my files on an encrypted external hard disk and use either a one-laptop or a two-laptop approach. With the former, you alternate between being online and offline, depending on when the external disk is connected to the laptop. With the latter, you attach the external disk to a laptop that stays offline (you can go one step further with the Snowden approach of using an “air-gapped” computer, as he recommended to Glenn Greenwald) and work online only on the other computer. The two-laptop approach comes in handy on the road (even with the extra weight), as there are always risks with public internet connections (which one should always avoid), hotel connections, spying walls, and so on.
Congratulations to Laura Poitras and her team behind “CitizenFour” on winning the Oscar for Best Documentary Feature. And did you notice that Snowden‘s girlfriend Lindsay Mills was on the stage (see picture above (Credit: YouTube) and video clips below)?
This news, originally from The Intercept and based on files leaked by Edward Snowden, shouldn’t come as a surprise: the NSA has been on a mission to Collect It All (Chapter 3 of Glenn Greenwald’s book “No Place to Hide”, above).
I’m a self-confessed hardcore fan of the good old IBM ThinkPad laptops, but I’ve shied away from the black box ever since the Lenovo acquisition in 2005. And this (see video clips below) is one of the reasons. These days I tilt towards laptops with no parts made in China…
Amid continuing Sino-US spats over cyber-espionage and related matters, China is beefing up its cyber and national security in a big way. It is reportedly just months away from launching the longest quantum communications network on earth, stretching some 2,000 kilometers between its capital Beijing and financial center Shanghai, to transfer data at close to the speed of light with no hacking risks – initially to transmit sensitive diplomatic and classified information for the government and military, with personal and financial data also on the cards for the near future.
And that’s ahead of the previously announced plan to become, in 2016, the first country to launch a quantum communications satellite into orbit.
Looks like Snowden was spot on again. In a post just a month ago, I quoted him on how the US would pay – and is now paying – the price for focusing too much on cyber offense at the expense of cyber defense.
Meanwhile, following the recent cyber-attack on Sony Pictures, President Barack Obama’s homeland security and counter-terrorism adviser Lisa Monaco announced earlier this week a new intelligence unit – the Cyber Threat Intelligence Integration Center – to take the lead in tracking cyber-threats by pooling and disseminating data on cyber-breaches to other US agencies.
“Currently, no single government entity is responsible for producing coordinated cyber threat assessments,” according to Monaco.
China nears launch of hack-proof ‘quantum communications’ link
Published: Feb 9, 2015 11:13 p.m. ET
Technology to be employed for military and other official uses
BEIJING (Caixin Online) — This may be a quantum-leap year for an initiative that accelerates data transfers close to the speed of light with no hacking threats through so-called “quantum communications” technology.
Within months, China plans to open the world’s longest quantum-communications network, a 2,000-kilometer (1,240-mile) electronic highway linking government offices in the cities of Beijing and Shanghai.
Meanwhile, the country’s aerospace scientists are preparing a communications satellite for a 2016 launch that would be a first step toward building a quantum communications network in the sky. It’s hoped this and other satellites can be used to overcome technical hurdles, such as distance restrictions, facing land-based systems.
Physicists around the world have spent years working on quantum-communications technology. But if all goes as planned, China would be the first country to put a quantum-communications satellite in orbit, said Wang Jianyu, deputy director of the China Academy of Science’s (CAS) Shanghai branch.
At a recent conference on quantum science in Shanghai, Wang said scientists from CAS and other institutions have completed major research and development tasks for launching the satellite equipped with quantum-communications gear.
The satellite program’s likelihood for success was confirmed by China’s leading quantum-communications scientist, Pan Jianwei, a CAS academic who is also a professor of quantum physics at the University of Science and Technology of China (USTC) in Hefei, in the eastern province of Anhui. Pan said researchers reported significant progress on systems development after conducting experiments at a test center in Qinghai province, in the northwest.
The satellite would be used to transmit encoded data through a method called quantum key distribution (QKD), which relies on cryptographic keys transmitted via light-pulse signals. QKD is said to be nearly impossible to hack, since any attempted eavesdropping would change the quantum states and thus could be quickly detected by data-flow monitors.
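The eavesdropping-detection property described above can be illustrated with a toy simulation of BB84, the protocol QKD systems are typically based on. This is a statistical sketch, not the actual Chinese implementation: an eavesdropper who measures each photon in a randomly chosen basis disturbs roughly a quarter of the bits that Alice and Bob keep, and that elevated error rate is what the data-flow monitors detect:

```python
import random

random.seed(42)

def bb84_error_rate(n_bits, eavesdrop):
    """Toy BB84: Alice sends bits encoded in random bases; Bob measures in
    random bases; only bits where their bases match are kept. Eve's
    interception shows up as errors in that kept (sifted) key."""
    errors = matched = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)
        bob_basis = random.randint(0, 1)
        value, photon_basis = bit, alice_basis
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != alice_basis:     # wrong basis randomizes the state
                value = random.randint(0, 1)
            photon_basis = eve_basis         # photon collapses into Eve's basis
        if bob_basis == alice_basis:         # sifting: keep matching-basis bits
            matched += 1
            measured = value if bob_basis == photon_basis else random.randint(0, 1)
            if measured != bit:
                errors += 1
    return errors / matched

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0: clean channel
print(bb84_error_rate(20000, eavesdrop=True))   # ~0.25: Eve is detectable
```

The point of the sketch is the asymmetry: a passive wiretap on a classical fiber leaves no trace, whereas here measurement itself leaves statistical fingerprints.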
A satellite-based quantum-communications system could be used to build a secure information bridge between the nation’s capital and Urumqi, a city that’s the capital of the restive Xinjiang Uyghur Autonomous Region in the west, Pan said.
It’s likely the technology initially will be used to transmit sensitive diplomatic, government-policy and military information. Future applications could include secure transmissions of personal and financial data.
Plans call for China to put additional satellites into orbit after next year’s ground-breaking launch, Pan said, without divulging how many satellites might be deployed or when. He did say that China hopes to complete a QKD system linking Asia and Europe by 2020, and have a worldwide quantum-communications network in place by 2030.
In 2009, China became the first country in the world to put quantum-communications technology to work outside of a laboratory.
In October of that year, a team of scientists led by Pan built a secure network for exchanging information among government officials during a military parade in Beijing celebrating the 60th anniversary of the People’s Republic. The demonstration underscored the research project’s key military application.
“China is completely capable of making full use of quantum communications in a regional war,” Pan said. “The direction of development in the future calls for using relay satellites to realize quantum communications and control that covers the entire army.”
The country is also working to configure the new technology for civilian use.
A pilot quantum-communications network that took 18 months to build was completed in February 2012 in Hefei. The network, which cost the city’s government 60 million yuan ($9.6 million), was designed by Pan’s team to link 40 telephones and 16 video cameras installed at city government agencies, military units, financial institutions and health-care offices.
A similar, civilian-focused network built by Pan’s team in Jinan, the provincial capital of the eastern province of Shandong, started operating in March 2014. It connects some 90 users, most of whom tap the network for general business and information.
In late 2012, Pan’s team installed a quantum-communications network that was used to securely connect the Beijing venue hosting a week-long meeting of the 18th National Congress of the Communist Party, with hotel rooms where delegates stayed, as well as the Zhongnanhai compound in Beijing where the nation’s top leaders live and work.
Next on the development agenda is opening the network linking Beijing and Shanghai. Pan is leading that project as well.
If all goes as planned, Pan said, existing networks in Hefei and Jinan would eventually be tied to the Beijing-Shanghai channel to provide secure communications connecting government and financial agencies in each of the four regions. The new network could be operating as early as 2016.
No room for hype
A quantum code expert said that so far, quantum-communications technology development efforts in China have basically focused on protecting national security. “How important it will be for the public and in everyday life are questions that remain unanswered,” said the expert.
To date, Pan said, technical barriers and the high cost of systems development have kept private capital out of what’s now almost exclusively a government initiative. Moreover, it’s still too early to tell whether the technology has any potential commercial value.
Pan has warned the public not to listen to investment come-ons that hype the money-making potential of quantum-communications businesses. At this stage of the game, he said, the focus is still on technological development, not commercial applications.
Nevertheless, since 2009, USTC has been building a commercial enterprise called Anhui Quantum Communication Technology Co. to produce equipment based on technology developed by Pan and his team. The company is China’s largest quantum-communications equipment supplier. Last September, it said it had started mass-producing quantum-cryptography equipment.
Anhui Quantum general manager Zhao Yong said the company’s clients include financial institutions and government agencies seeking to supplement, not replace, conventional communications systems. Their shared goal, he said, is to improve data security.
Once the technology has matured, said Wang Xiangbin, a physicist at Beijing’s Tsinghua University, its range of applications should be targeted to specific industries and regions because of its high barrier in technology and cost. Quantum communications is not a technology suitable for mass use via the Internet, for example, Wang told a group of scientists at a 2012 seminar.
Some experts say it’s wrong to assume that quantum communications is a flawlessly secure means of transmitting information. Another Tsinghua physics professor, Long Guilu, said quantum communication is only theoretically safe, since malfunctioning equipment or operational errors can open doors to risk.
Experimental systems built in 2007 by Chinese and U.S. physicists reportedly achieved secure QKD transmissions between two points more than 100 kilometers apart. But the experiment also taught scientists that data can be intercepted by a third party during a transmission.
In addressing the naysayers, Pan admitted that quantum communications is not perfect. But he defended it as safer than conventional means of communication. In fact, he said, no means of protecting data is more secure than quantum communications.
To test the capacity and safety of the network linking Beijing and Shanghai, Pan said his team plans to ask other communications experts to carefully study the system and look for potential security holes. The network could then be modified in ways that close any detected gaps and reduce hacking risks.
“Assessments and testing will be conducted after the network is completed,” said Pan, who remains convinced that any network using quantum cryptographic technology is more secure than any other communications channel.
Pan has been working on quantum-communications technology since the late 1990s, when he was a researcher at the University of Vienna and working in a partnership with Austrian physicist Anton Zeilinger. That team is credited with developing the first protocol for quantum communications.
Pan worked with Zeilinger about a decade after U.S. physicist Charles Bennett and colleagues at IBM Research built the world’s first functioning quantum cryptographic system. Based on their research, the first network was installed in the U.S. city of Boston.
Like their counterparts in China, researchers in the United States, Japan and European countries continue work to advance the technology. A key effort is aimed at extending the potential reach of quantum-communications systems, which for years were used only to span short distances.
Some experts have even wondered whether the new technology has been misidentified, since its key feature is high-level cryptography, not electronic communications.
“What we can do now is merely encrypt data, which is far from real quantum communications,” said one expert who declined to be named. “Theoretically it can’t be hacked, but in practice it has many limitations.”
Guo Guangcan, director of USTC’s quantum-communications lab, said networks now operating and those being built in China “achieve encryption only,” whereas true communications networks “involve content.”
“It’s not accurate to call it quantum communications,” said Guo.
Whatever it’s called, China appears determined to push ahead with the research and development that paves the way for a new era of secure communications. And according to Pan, that era is still at least a decade away.
“It will take 10 to 20 years to really put (the technology) into practice,” said Pan.
Rewritten by Han Wei
This is really nothing new but I’m posting it because similar “news” resurfaced again the past week.
If you’ve already bought one, the easy solution is to cover the webcam with duct tape except when you need to use it.
Photo (above) credit: US Central Command
Snowden was spot on: he said the US would pay – and it is now paying – the price for focusing too much on cyber offense at the expense of cyber defense.
“The National Security Agency has two halves, one that handles defense and one that handles offense. Michael Hayden and Keith Alexander, the former directors of NSA, they shifted those priorities… But the problem is when you deprioritize defense, you put all of us at risk,” according to Snowden.
“If we attack a Chinese university and steal the secrets of their research program, how likely is it that that is going to be more valuable to the United States than when the Chinese retaliate and steal secrets from a U.S. university, from a U.S. defense contractor, from a U.S. military agency?
“The most important thing to us is not being able to attack our adversaries, the most important thing is to be able to defend ourselves. And we can’t do that as long as we’re subverting our own security standards for the sake of surveillance.”
The website PBS.org published an exclusive interview with Snowden on his views on cyber warfare just days before the CENTCOM hacks earlier this week. Interestingly, the video link no longer works, but the full unedited transcript is reproduced below.
Exclusive: Edward Snowden on Cyber Warfare
By James Bamford and Tim De Chant on Thu, 08 Jan 2015
Last June, journalist James Bamford, who is working with NOVA on a new film about cyber warfare that will air in 2015, sat down with Snowden in a Moscow hotel room for a lengthy interview. In it, Snowden sheds light on the surprising frequency with which cyber attacks occur, their potential for destruction, and what, exactly, he believes is at stake as governments and rogue elements rush to exploit weaknesses found on the internet, one of the most complex systems ever built by humans. The following is an unedited transcript of their conversation.
James Bamford: Thanks very much for coming. I really appreciate this. And it’s really interesting—the very day we’re meeting with you, this article came out in The New York Times, seemed to be downplaying the potential damage, which they really seem to have hyped up in the original estimate. What did you think of this article today?
Edward Snowden: So this is really interesting. It’s the new NSA director saying that the alleged damage from the leaks was way overblown. Actually, let me do that again.
So this is really interesting. The NSA chief in this who replaced Keith Alexander, the former NSA director, is calling the alleged damage from the last year’s revelations to be much more insignificant than it was represented publicly over the last year. We were led to believe that the sky was going to fall, that the oceans were going to boil off, the atmosphere was going to ignite, the world would end as we know it. But what he’s saying is that it does not lead him to the conclusion that the sky is falling.
And that’s a significant departure from the claims of the former NSA director, Keith Alexander. And it’s sort of a pattern that we’ve seen where the only U.S. officials who claim that these revelations cause damage rather than serve the public good were the officials that were personally embarrassed by it. For example, the chairs of the oversight committees in Congress, the former NSA director himself.
But we also have, on the other hand, the officials on the White House’s independent review panels who said that these programs had never been shown to stop even a single imminent terrorist attack in the United States, and they had no value. So how could it be that these programs were so valuable that talking about them, revealing them to the public would end the world if they hadn’t stopped any attacks?
But what we’re seeing and what this article represents is that the claims of harm that we got last year were not accurate and could in fact be claimed to be misleading, and I think that’s a concern. But it is good to see that the director of NSA himself now today, with full access to classified information, is beginning to come a little bit closer to the truth, getting a little bit closer to the President’s viewpoint on that, which is this discussion that we’ve had over the last year doesn’t hurt us. It makes us stronger. So thanks for showing that.
Bamford: Thanks. One other thing that the article gets into, which is what we’re talking about here today, is the article quotes the new NSA director, who is also the commander of Cyber Command, as basically saying that it’s possible in the future that these cyber weapons will become sort of normal military weapons, and they’ll be treated sort of like guided missiles or cruise missiles and so forth.
Snowden: Cruise missiles or drones.
Bamford: What are your thoughts about that, having spent time in this whole line of work yourself?
Snowden: I think the public still isn’t aware of the frequency with which these cyber-attacks, as they’re being called in the press, are being used by governments around the world, not just the US. But it is important to highlight that we really started this trend in many ways when we launched the Stuxnet campaign against the Iranian nuclear program. It actually kicked off a response, sort of retaliatory action from Iran, where they realized they had been caught unprepared. They were far behind the technological curve as compared to the United States and most other countries. And this is happening across the world nowadays, where they realize that they’re caught out. They’re vulnerable. They have no capacity to retaliate to any sort of cyber campaign brought against them.
The Iranians targeted open commercial companies of U.S. allies. Saudi Aramco, the oil company there—they sent what’s called a wiper virus, which is actually sort of a Fisher Price, baby’s first hack kind of a cyber-campaign. It’s not sophisticated. It’s not elegant. You just send a worm, basically a self-replicating piece of malicious software, into the targeted network. It then replicates itself automatically across the internal network, and then it simply erases all of the machines. So people go into work the next day and nothing turns on. And it puts them out of business for a period of time.
But with enterprise IT capabilities, it’s not trivial, but it’s not impossible to restore a company to working order in fairly short time. You can image all of the work stations. You can restore your backups from tape. You can perform what’s called bare metal restores, where you get entirely new hardware that matches your old hardware, if the hardware itself was broken, and just basically paint it up, restore the data just like the original target was, and you’re back in the clear. You’re moving along.
Now, this is something that people don’t understand fully about cyber-attacks, which is that the majority of them are disruptive, but not necessarily destructive. One of the key differentiators with our level of sophistication and nation-level actors is they’re increasingly pursuing the capability to launch destructive cyber-attacks, as opposed to the disruptive kinds that you normally see online, through protestors, through activists, denial of service attacks, and so on. And this is a pivot that is going to be very difficult for us to navigate.
Bamford: Let me ask you about that, because that is the focus of the program here. It’s a focus because very few people have ever discussed this before, and it’s the focus because the U.S. launched their very first destructive cyber-attack, the Stuxnet attack, as you mentioned, in Iran. Can you just tell me what kind of a milestone that was for the United States to launch their very first destructive cyber-attack?
Snowden: Well, it’s hard to say it’s the first ever, because attribution is always hard with these kind of campaigns. But it is fair to say that it was the most sophisticated cyber-attack that anyone had ever seen at the time. And the fact that it was launched as part of a U.S. authorized campaign did mark a radical departure from our traditional analysis of the levels of risks we want to assume for retaliation.
When you use any kind of internet based capability, any kind of electronic capability, to cause damage to a private entity or a foreign nation or a foreign actor, these are potential acts of war. And it’s critical we bear in mind as we discuss how we want to use these programs, these capabilities, where we want to draw the line, and who should approve these programs, these decisions, and at what level, for engaging in operations that could lead us as a nation into a war.
The reality is if we sit back and allow a few officials behind closed doors to launch offensive attacks without any oversight against foreign nations, against people we don’t like, against political groups, radicals, and extremists whose ideas we may not agree with, and could be repulsive or even violent—if we let that happen without public buy-in, we won’t have any seat at the table of government to decide whether or not it’s appropriate for these officials to drag us into some kind of war activity that we don’t want, but we weren’t aware of at the time.
Bamford: And what you seem to be talking about also is the blowback effect. In other words, if we launch an attack using cyber warfare, a destructive attack, we run the risk of having been the most industrialized and electronically connected country in the world, that that’s a major problem for the US. Is that your thinking?
Snowden: I do agree that when it comes to cyber warfare, we have more to lose than any other nation on earth. The technical sector is the backbone of the American economy, and if we start engaging in these kind of behaviors, in these kind of attacks, we’re setting a standard, we’re creating a new international norm of behavior that says this is what nations do. This is what developed nations do. This is what democratic nations do. So other countries that don’t have as much respect for the rules as we do will go even further.
And the reality is when it comes to cyber conflicts between, say, America and China or even a Middle Eastern nation, an African nation, a Latin American nation, a European nation, we have more to lose. If we attack a Chinese university and steal the secrets of their research program, how likely is it that that is going to be more valuable to the United States than when the Chinese retaliate and steal secrets from a U.S. university, from a U.S. defense contractor, from a U.S. military agency?
We spend more on research and development than these other countries, so we shouldn’t be making the internet a more hostile, a more aggressive territory. We should be cooling down the tensions, making it a more trusted environment, making it a more secure environment, making it a more reliable environment, because that’s the foundation of our economy and our future. We have to be able to rely on a safe and interconnected internet in order to compete.
Bamford: Where do you see this going in terms of destruction? In Iran, for example, they destroyed the centrifuges. But what other types of things might be targeted? Power plants or dams? What do you see as the ultimate potential damage that could come from the cyber warfare attack?
Snowden: When people conceptualize a cyber-attack, they do tend to think about parts of the critical infrastructure like power plants, water supplies, and similar sort of heavy infrastructure, critical infrastructure areas. And they could be hit, as long as they’re network connected, as long as they have some kind of systems that interact with them that could be manipulated from internet connection.
However, what we overlook and has a much greater value to us as a nation is the internet itself. The internet is critical infrastructure to the United States. We use the internet for every communication that businesses rely on every day. If an adversary didn’t target our power plants but they did target the core routers, the backbones that tie our internet connections together, entire parts of the United States could be cut off. They could be shunted offline, and we would go dark in terms of our economy and our business for minutes, hours, days. That would have a tremendous impact on us as a society and it would have a policy backlash.
The solution, however, is not to give the government more secret authorities to put kill switches and monitors and snooping devices on the internet. It’s to reorder our priorities for how we deal with threats to the security of our critical infrastructure, for our electronic infrastructure. And what that means is taking bodies like the National Security Agency that have traditionally been about securing the nation and making sure that that’s their first focus.
In the last 10 years, we’ve seen—in the last 10 years, we’ve seen a departure from that traditional role of signals intelligence gathering overseas that’s related to responding to threats that are—
Bamford: Take your time.
Snowden: Right. What we’ve seen over the last decade is we’ve seen a departure from the traditional work of the National Security Agency. They’ve become sort of the national hacking agency, the national surveillance agency. And they’ve lost sight of the fact that everything they do is supposed to make us more secure as a nation and a society.
The National Security Agency has two halves, one that handles defense and one that handles offense. Michael Hayden and Keith Alexander, the former directors of NSA, they shifted those priorities, because when they went to Congress, they saw they could get more budget money if they advertised their success in attacking, because nobody is ever really interested in doing the hard work of defense.
But the problem is when you deprioritize defense, you put all of us at risk. Suddenly, policies that would have been unbelievable, incomprehensible even 20 years ago are commonplace today. You see decisions being made by these agencies that authorize them to install backdoors into our critical infrastructure, that allow them to subvert the technical security standards that keep your communication safe when you’re visiting a banking website online or emailing a friend or logging into Facebook.
And the reality is, when you make those systems vulnerable so that you can spy on other countries and you share the same standards that those countries have for their systems, you’re also making your own country more vulnerable to the same attacks. We’re opening ourselves up to attack. We’re lowering our shields to allow us to have an advantage when we attack other countries overseas, but the reality is when you compare one of our victories to one of their victories, the value of the data, the knowledge, the information gained from those attacks is far greater to them than it is to us, because we are already on top. It’s much easier to drag us down than it is to grab some incremental knowledge from them and build ourselves up.
Bamford: Are you talking about China particularly?
Snowden: I am talking about China and every country that has a robust intelligence collection program that is well-funded in the signals intelligence realm. But the bottom line is we need to put the security back in the National Security Agency. We can’t have the national surveillance agency. We’ve got to go—look, the most important thing to us is not being able to attack our adversaries, the most important thing is to be able to defend ourselves. And we can’t do that as long as we’re subverting our own security standards for the sake of surveillance.
Bamford: That is a very strange combination, where you have one half of the NSA, the Information Assurance Directorate, which is charged with protecting the country from cyber-attacks, coexisting with the Signals Intelligence Directorate and the Cyber Command, which is pretty much focused on creating weaknesses. Can you just tell me a little bit about how that works, the use of vulnerabilities and implants and exploits?
Snowden: So broadly speaking, there are a number of different terms that are used in the CNO, computer network operations, world.
Broadly speaking, there are a number of different terms that are used to define the vernacular in the computer network operations world. There’s CNA, computer network attack, which is to deny, degrade, or destroy the functioning of a system. There’s CND, computer network defense, which is protecting systems, which is noticing vulnerabilities, noticing intrusions, cutting them off, and repairing them, patching the holes. And there’s CNE, computer network exploitation, which is breaking into a system and leaving something behind, this sort of electronic ear that will allow you to monitor everything that’s happening from that point forward. CNE is typically used for espionage, for spying.
To achieve these goals, we use things like exploits, implants, vulnerabilities, and so on. A vulnerability is a weakness in a system, where a computer program has a flaw in its code that, when it thinks it’s going to execute a normal routine task, it’s actually been tricked into doing something the attacker asks it to do. For example, instead of uploading a file to display a picture online, you could be uploading a bit of code that the website will then execute.
Or instead of logging into a website, you could enter code into the username field or into the password field, and that would crash through the boundaries of memory—that were supposed to protect the program—into the executable space of computer instructions. Which means when the computer goes through its steps of what is supposed to occur, it goes, I’m looking for user login. This is the username. This is the password. And then when it should go, check to see that these are correct, if you put something that was too long in the password field, it actually overwrites those next instructions for the computer. So it doesn’t know it’s supposed to check for a password. Instead, it says, I’m supposed to create a new user account with the maximum privileges and open up a port for the adversary to access my network, and then so on and so forth.
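The overflow Snowden walks through can be sketched as a toy model. Everything here is made up for illustration (an 8-cell buffer, a single "next instruction" slot); real overflows corrupt stack frames and return addresses rather than a Python list, but the mechanism, unchecked input spilling into the cells that control what executes next, is the same:

```python
# Toy model: the password buffer sits directly before the slot holding
# the program's next instruction, and the copy routine never checks
# length (like an unchecked strcpy).

BUF_SIZE = 8

def make_memory():
    # cells [0:8] are the password buffer; cell [8] is the instruction
    # the program will execute next
    return ["\x00"] * BUF_SIZE + ["check_password"]

def write_password(memory, cells):
    # Flawed routine: copies input cell by cell with no bounds check.
    for i, cell in enumerate(cells):
        memory[i] = cell
    return memory

# A normal-length password leaves the instruction intact:
ok = write_password(make_memory(), list("hunter2"))
assert ok[BUF_SIZE] == "check_password"

# Input that is too long spills past the buffer and replaces the
# instruction, so the password check never runs:
evil = write_password(make_memory(), list("AAAAAAAA") + ["create_admin_account"])
assert evil[BUF_SIZE] == "create_admin_account"
```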
Vulnerabilities are generally weaknesses that can be exploited. Exploits themselves are little shims of computer code that allow you to run any sort of program you want.
Exploits are the shims of computer code that you wedge into vulnerabilities to allow you to take over a system, to gain access to them, to tell that system what you wanted to do. The payload or implant follows behind the exploit. The exploit is what wedges you into the system. The payload is the instructions that are left behind. Now, those instructions often say install an implant.
The implant is an actual program that runs—it stays behind after the exploit has occurred—and says, tell me all of the files on this machine. Make a list of all of the users. Send every new email or every new keystroke that’s been recorded on this program each day to my machine as the attacker, or really anything you can imagine. It can also tell nuclear centrifuges to spin up to the maximum RPM and then spin down quickly enough that no one notices. It can tell a power plant to go offline.
Or it could say, let me know what this dissident is doing day to day, because it lives on their cell phone and it keeps track of all their movements, who they call, who they’re associating with, what wireless devices are nearby. Really an exploit is only limited—or not an exploit. An implant is only limited by the imagination. Anything you can program a computer to do, you can program an implant to do.
Bamford: So you have the implant, and then you have the payload, right?
Snowden: The payload includes the implant. The exploit is what basically breaks into the vulnerability. The payload is what the exploit runs, and that is basically some kind of executable code. And the implant is a payload that’s left behind long term, some kind of basically listening program, some spying program, or some kind of a destructive program.
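The vocabulary Snowden lays out (vulnerability, exploit, payload, implant) can be sketched as a toy chain of functions. All the names here are invented for illustration; no real tooling works this simply:

```python
# Toy walkthrough of the terminology: a vulnerability grants entry, the
# exploit uses it to run a payload, and the payload leaves an implant
# behind that persists and reports back.

class TargetSystem:
    def __init__(self):
        self.vulnerable = True   # an unpatched flaw (the vulnerability)
        self.implants = []       # programs left behind long term
        self.log = []            # what the implant has recorded

def exploit(target, payload):
    """The exploit wedges into the vulnerability and runs the payload."""
    if not target.vulnerable:
        return False
    payload(target)
    return True

def payload_install_implant(target):
    """A payload whose instructions say: install an implant."""
    target.implants.append(keylogger_implant)

def keylogger_implant(target, keystroke):
    """The implant: stays behind after the exploit and records activity."""
    target.log.append(keystroke)

box = TargetSystem()
assert exploit(box, payload_install_implant)  # break in, leave the implant
for imp in box.implants:                      # the implant keeps running later
    imp(box, "password123")
assert box.log == ["password123"]
```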
Bamford: Interviewing you is like doing power steering. I don’t have to pull this out.
Snowden: Yeah, sorry, I get a little ramble-y on my answers, and the political answers aren’t really strong, but I’m not a politician, so I’m just trying my best on these.
Bamford: This isn’t nightly news, so we’ve got an hour.
Snowden: Yeah, I hope you guys cut this so it’s not so terrible.
Producer: We’ve got two cameras, and we can carve your words up.
Snowden: (laughter) Great.
Producer: But we won’t.
Bamford: Should mention this implant now—the implant sounds a bit like what used to be sleeper agents back in the old days of the Cold War, where you have an agent that’s sitting there that can do anything. It can do sabotage. It can do espionage. It can do whatever. And looking at one of those slides that came out, what was really fascinating was the fact that the slide was a map of the world, and they had little yellow dots on it. The little yellow dots were indicated as CNEs, computer network exploitation. And you expect to see them in North Korea, China, different places like that. But what was interesting when we looked at it was there were quite a few actually in Brazil, for example, and other places that were friendly countries. Any idea why the U.S. would want to do something like that?
Snowden: So the way the United States intelligence community operates is it doesn’t limit itself to the protection of the homeland. It doesn’t limit itself to countering terrorist threats, countering nuclear proliferation. It’s also used for economic espionage, for political spying to gain some knowledge of what other countries are doing. And over the last decade, that sort of went too far.
No one would argue that it’s in the United States’ interest to have independent knowledge of the plans and intentions of foreign countries. But we need to think about where to draw the line on these kinds of operations so we’re not always attacking our allies, the people we trust, the people we need to rely on, and to have them in turn rely on us. There’s no benefit to the United States in hacking Angela Merkel’s cell phone. President Obama said if he needed to know what she was thinking, he would just pick up the phone and call her. But he was apparently unaware that the NSA was doing precisely that. These are similar things we see happening in Brazil and France and Germany and all these other countries, these allied nations around the world.
And we also need to remember that when we talk about computer network exploitation, computer network attack, we’re not just talking about your home PC. We’re not just talking about a control system in a factory somewhere. We’re talking about your cell phone, and we’re also talking about internet routers themselves. The NSA and its sister agencies are attacking the critical infrastructure of the internet to try to take ownership of it. They hack the routers that connect nations to the internet itself.
And this is dangerous for a number of reasons. It does provide us a real intelligence advantage, but at the same time, it’s a serious risk. If one of these hacking operations goes wrong, and this has happened in the past, and it’s a core router that connects all of the internet service providers for an entire country to the internet, we’ve blacked out that entire nation from online access until that problem can be corrected. And these routers are not your little Linksys, D-Link routers sitting at home. We’re talking $60,000, $600,000, $6 million devices, complexes, that are not easy to fix, and they don’t have an off the shelf replacement that’s ready to swap in.
So we need to be very careful, and we need to make sure that whenever we’re engaging in a cyber-warfare campaign, a cyber-espionage campaign in the United States, that we understand the word cyber is used as a euphemism for the internet, because the American public would not be excited to hear that we’re doing internet warfare campaigns, internet espionage campaigns, because we realize that we ourselves are impacted by it. The internet is shared critical infrastructure for everyone on earth. It’s not supposed to be a domain of warfare. We’re not supposed to be putting our economy on the frontlines in the battleground. But that’s increasingly what’s happening today.
So we need to put processes, policies, and procedures in place with real laws that forbid going beyond the borders of what’s reasonable to ensure that the only time that we and other countries around the world exercise these authorities are when it is absolutely necessary, there’s not alternative means of achieving the appropriate outcome, and it’s proportionate to the threat. We shouldn’t be putting an entire nation’s infrastructure at risk to spy on one company, to spy on one person. But increasingly, we see that happening more and more today.
Bamford: You mentioned the problems, the dangers involved if you’re trying to put an exploit into some country’s central nervous system when it comes to the internet. For example in Syria, there was a time when everything went down, and that was blamed on the president of Syria, Bashar al-Assad. Did you have any particular knowledge of that?
Snowden: I don’t actually want to get into that one on camera, so I’ll have to demur on that.
Bamford: Can you talk around it somehow?
Snowden: What I would say is when you’re attacking a router on the internet, and you’re doing it remotely, it’s like trying to shoot the moon with a rifle. Everything has to happen exactly right. Every single variable has to be controlled and precisely accounted for. And that’s not possible to do when you have limited knowledge of the target you’re attacking.
So if you’ve got this gigantic router that you’re trying to hack, and you want to hack it in a way that’s undetectable by the systems administrators for that device, you have to get below the operating system level of that device, of that router. Not where it says here are the rules, here are the user accounts, here are the routes and the proper technical information that everybody who’s administering this device should have access to. Down onto the firmware level, onto the hardware control level of the device that nobody ever sees, because it’s sort of a dark place.
The problem is if you make a mistake when you’re manipulating the hardware control of a device, you can do what’s called bricking the hardware, and it turns it from a $6 million internet communications device into a $6 million paperweight that’s in the way of your entire nation’s communications. And all I can say is that’s something that has happened in the past.
Bamford: When we were in Brazil, we were shown this major internet connection facility. It was the largest internet hub in the southern hemisphere, and it’s sitting in Brazil. And the Brazilians had a lot of concern, because again, they saw the slide that showed all this malware being planted in Brazil. Is that a real concern that they should have, the fact that they’ve, number one, got this enormous internet hub sitting right in Sao Paulo, and then on the second hand, they’ve got NSA flooding the country with malware?
Snowden: The internet exchange is sort of the core points where all of the international cables come together, where all of the internet service providers come together, and they trade lines with each other, where we move from separate routes, separate highways on the internet into one coherent traffic circle where everybody can get on and off on the exit they want. These are priority one targets for any sort of espionage agency, because they provide access to so many people’s communications.
Internet exchanges and internet service providers—international fiber optic landing points—these are the key tools that governments go after in order to enable their programs of mass surveillance. If they want to be able to watch the entire population of a country instead of a single individual, you have to go after those bulk interchanges. And that’s what’s happening.
So it is a real threat, and the only way that can be accounted for is to make sure that there’s some kind of independent control and auditing, some sort of routine forensic investigations into these devices, to ensure that not only were they secure when they were installed, but they haven’t been monitored or tampered with or changed in any way since that last audit occurred. And that requires doing things like creating mathematical proofs called hashes to verify the validity of the actual hardware and software signatures on these devices.
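The audit Snowden suggests can be sketched with a standard cryptographic hash. This is a minimal illustration, assuming a hypothetical firmware image read as bytes; a real forensic audit would also cover bootloaders, configuration, and hardware identifiers:

```python
# Record a hash of a device's firmware at install time, then re-hash on
# each inspection and compare. Any tampering changes the digest.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical firmware image captured when the router was installed:
firmware_at_install = b"router-os v2.1 build 4407"
baseline = fingerprint(firmware_at_install)

# Later audit: identical firmware matches the baseline...
assert fingerprint(b"router-os v2.1 build 4407") == baseline

# ...while even a one-byte change (say, an implant) is detected.
assert fingerprint(b"router-os v2.1 build 4408") != baseline
```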
Bamford: Another area—you mentioned the presidential panel that looked into all these areas that are of concern now, which you’ve basically brought out these areas. And the presidential panel came out with I think 46 different recommendations. One of those recommendations dealt with restricting the use or cutting back or maybe even doing away with the idea of going after zero-day exploits. Can you tell me a little bit about your fears that you may have of the U.S. creating this market of zero-day exploits?
Snowden: So a zero-day exploit is a method of hacking a system. It’s sort of a vulnerability that has an exploit written for it, sort of a key and a lock that go together to a given software package. It could be an internet web server. It could be Microsoft Office. It could be Adobe Reader or it could be Facebook. But these zero-day exploits—they’re called zero-days because the developer of the software is completely unaware of them. They haven’t had any chance to react, respond, and try to patch that vulnerability away and close it.
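The "zero days" idea can be modeled in a few lines. This is a toy state machine, not any real vulnerability-tracking scheme; it just encodes the point that a vendor can only patch what it knows about, which is why disclosure matters:

```python
# Toy model: the clock on a vulnerability starts when the vendor learns
# of it. A zero-day is exploited while that clock still reads zero, so
# the vendor has had no days to ship a fix.

class Vulnerability:
    def __init__(self):
        self.vendor_knows = False
        self.patched = False

    def disclose_to_vendor(self):
        self.vendor_knows = True

    def patch(self):
        # The vendor cannot patch a flaw it has never been told about.
        if self.vendor_knows:
            self.patched = True

    def exploitable(self):
        return not self.patched

bug = Vulnerability()
assert bug.exploitable() and not bug.vendor_knows  # a zero-day
bug.patch()
assert bug.exploitable()                           # patching fails: vendor unaware
bug.disclose_to_vendor()
bug.patch()
assert not bug.exploitable()                       # disclosure -> fix -> safer
```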
The danger that we face in terms of policy of stockpiling zero-days is that we’re creating a system of incentives in our country and for other countries around the world that mimic our behavior or that see it as a tacit authorization for them to perform the same sort of operations is we’re creating a class of internet security researchers who research vulnerabilities, but then instead of disclosing them to the device manufacturers to get them fixed and to make us more secure, they sell them to secret agencies.
They sell them on the black market to criminal groups to be able to exploit these to attack targets. And that leaves us much less secure, not just on an individual level, but on a broad social level, on a broad economic level. And beyond that, it creates a new black market for computer weapons, basically digital weapons.
And there’s a little bit of a free speech issue involved in regulating this, because people have to be free to investigate computer security. People have to be free to look for these vulnerabilities and create proof of concept code to show that they are true vulnerabilities in order for us to secure our systems. But it is appropriate to regulate the use and sale of zero-day exploits for nefarious purposes, in the same way you would regulate any other military weapon.
And today, we don’t do that. And that’s why we see a growing black market with companies like Endgame, with companies like Vupen, where all they do—their entire business model is finding vulnerabilities in the most critical infrastructure software packages we have around the internet worldwide, and instead of fixing those vulnerabilities, they tear them open and let their customers walk in through them, and they try to conceal the knowledge of these zero-day exploits for as long as possible to increase their commercial value and their revenues.
Bamford: Now, of those 46 recommendations, including the one on the zero-day exploits that the panel came up with, President Obama only approved maybe five or six at the most of those 46 recommendations, and he didn’t seem to talk at all about the zero-day exploit recommendation. What do you think of that, the fact that that was sort of ignored by the President?
Snowden: I can’t comment on presidential policies. That’s a landmine for me. I would recommend you ask Chris Soghoian at the ACLU, American Civil Liberties Union, and he can get you any quote you want on that. You don’t need me to speak to that point, but you’re absolutely right that where there’s smoke, there’s fire, as far as that’s concerned.
Bamford: Well, as someone who worked at the NSA, been there for a long time: during the time you were there, they created this entire new organization called Cyber Command. What are your thoughts on the creation of this new organization that comes, just like the NSA, under the director of NSA? Again, backing up: ever since the beginning, the director of NSA was only three stars, and now he’s a four star general, or four star admiral, and he’s got the largest intelligence agency in the world, the NSA, under him, and now he’s got Cyber Command. What are your thoughts on that, having seen this from the inside?
Snowden: There was a strong debate last year about whether or not the National Security Agency and Cyber Command should be split into two independent agencies, and that was what the President’s independent review board suggested was the appropriate method, because when you have an agency that’s supposed to be defensive married to an agency whose entire purpose in life is to break things and set them on fire, you’ve got a conflict of interest that is really going to reduce the clout of the defensive agency, while the offensive branch gains more clout, more budget dollars, more billets and personnel assignments.
So there’s a real danger with that happening. And Cyber Command itself has always existed in a—Cyber Command itself has always been branded in a sort of misleading way from its very inception. The director of NSA, when he introduced it, when he was trying to get it approved, he said he wanted to be clear that this was not a defensive team. It was a defend the nation team. He’s saying it’s defensive and not defensive at the same time.
Now, the reason he says that is because it’s an attack agency, but going out in front of the public and asking them to approve an aggressive warfare focused agency that we don’t need is a tough sell. It’s much better if we say, hey, this is for protecting us, this is for keeping us safe, even if all it does every day is burn things down and break things in foreign countries that we aren’t at war with.
So there’s a real careful balance that needs to be struck there that hasn’t been addressed yet, but so long as the National Security Agency and Cyber Command exist under one roof, we’ll see the offensive side of their business taking priority over the defensive side of the business, which is much more important for us as a country and as a society.
Bamford: And you mentioned earlier, if we could just go back a little bit over this again, how much more money is going to the cyber offensive side than to the cyber defensive side. Not only more money, but more personnel, more attention, more focus.
Snowden: I didn’t actually get the question on that one.
Bamford: I just wondered if you could just elaborate a little bit more on that. Again, we have Cyber Command and we have the Information Assurance Division and so forth, and there’s far more money and personnel and emphasis going on the cyber warfare side than the defensive side.
Snowden: I think the key point in analyzing the balance and where we come out in terms of offense versus defense at the National Security Agency and Cyber Command is that, more and more, what we’ve read in the newspapers and what we see debating in Congress, the fact the Senate is now trying to put forward a bill called CISPA, the Cyber Intelligence Sharing—I don’t even know what it’s called—let me take that back.
We see more and more things occurring like the Senate putting forward a bill called CISPA, which is for cyber intelligence sharing between private companies and government agencies, where they’re trying to authorize not just the total immunity, a grant of total immunity, to private companies if they share the information on all of their customers, on all the American citizens and whatnot that are using their services, with intelligence agencies, under the intent that that information be used to protect them.
Congress is also trying to immunize companies in a way that will allow them to invite groups like the National Security Agency or the FBI to voluntarily put surveillance devices on their internal networks, with the stated intent being to detect cyber-attacks as they occur and be able to respond to them. But we’re ceding a lot of authority there. We’re immunizing companies from due diligence and protecting their customers’ privacy rights.
Actually, this is a point that’s way too difficult to make in the interview. Let me dial back out of that.
What we see more and more is sort of a breakdown in the National Security Agency. It’s becoming less and less the National Security Agency and more and more the national surveillance agency. It’s gaining more offensive powers with each passing year. It’s gained this new Cyber Command that’s under the director of NSA that by any measure should be an entirely separate organization because it has an entirely separate mission. All it does is attack.
And that’s putting us, both as a nation and an economy, in a state of permanent vulnerability and permanent risk, because when we lose a National Security Agency and instead get an offensive agency, we get an attack agency in its place, all of our eyes are looking outward, but they’re not looking inward, where we have the most to lose. And this is how we miss attacks time and time again. This results in intelligence failures such as the Boston Marathon bombings or the underwear bomber, Abdul Farouk Mutallab (sic).
In recent years, the majority of terrorist attacks that have been disrupted in the United States have been disrupted by things like the Times Square bomber, who was caught by a hotdog vendor, not a mass surveillance program, not a cyber-espionage campaign.
So when we cannibalize dollars from the defensive business of the NSA, from securing our communications, protecting our systems, patching zero-day vulnerabilities, and instead give those dollars to them to create new vulnerabilities in our systems so that they can surveil us and other people abroad who use the same systems, when we give those dollars to subvert our encryption methods so we don’t have any more privacy online and we apply all of that money to attacking foreign countries, we’re increasing the state of conflict, not just in diplomatic terms, but in terms of the threat to our critical infrastructure.
When the lights go out at a power plant sometime in the future, we’re going to know that that’s a consequence of deprioritizing defense for the sake of an advantage in terms of offense.
Bamford: One other problem I think is that people think that, as you mentioned—just to sort of clarify this—people out there that don’t really follow this that closely think that the whole idea of Cyber Command was to protect the country from cyber-attacks. Is that a misconception, the fact that these people think that the whole idea of Cyber Command is to protect them from cyber-attack?
Snowden: Well, if you ask anybody at Cyber Command or look at any of the job listings for openings for their positions, you’ll see that the one thing they don’t prioritize is computer network defense. It’s all about computer network attack and computer network exploitation at Cyber Command. And you have to wonder, if these are people who are supposed to be defending our critical infrastructure at home, why are they spending so much time looking at how to attack networks, how to break systems, and how to turn things off? I don’t think it adds up as representing a defensive team.
Bamford: Now, also looking a little bit into the future, it seems like there’s a possibility that a lot of this could be automated, so that when the Cyber Command or NSA sees a potential cyber-attack coming, there could be some automatic devices that would in essence return fire. And given the fact that it’s so very difficult to—or let me back up. Given the fact that it’s so easy for a country to masquerade where an attack is coming from, do you see a problem where you’re automating systems that automatically shoot back, and they may shoot back at the wrong country, and could end up starting a war?
Snowden: Right. So I don’t want to respond to the first part of your question, but the second part there I can use, which is relating to attribution and automated response. Which is that the—it’s inherently dangerous to automate any kind of aggressive response to a detected event because of false positives.
Let’s say we have a defensive system that’s tied to a cyber-attack capability that’s used in response. For example, a system is created that’s supposed to detect cyber-attacks coming from Iran, denial of service attacks brought against a bank. They detect what appears to be an attack coming in, and instead of simply taking a defensive action, instead of simply blocking it at the firewall and dumping that traffic so it goes into the trash can and nobody ever sees it—no harm—it goes a step further and says we want to stop the source of that attack.
So we will launch an automatic cyber-attack at the source IP address of that traffic stream and try to take that system offline. We will fire a denial of service attack in response to it, to destroy, degrade, or otherwise diminish their capability to act from that.
But if that’s happening on an automated basis, what happens when the algorithms get it wrong? What happens when instead of an Iranian attack, it was simply a diagnostic message from a hospital? What happens when it was actually an attack created by an independent hacker, but you’ve taken down a government office that the hacker was operating from? That wasn’t clear.
What happens when the attack hits an office that a hacker from a third country had hacked into to launch that attack? What if it was a Chinese hacker launching an attack from an Iranian computer targeting the United States? When we retaliate against a foreign country in an aggressive manner, we the United States have stated in our own policies that’s an act of war that justifies a traditional kinetic military response.
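The danger of automating retaliation can be sketched with a toy rule that "returns fire" at whatever source address sends the most traffic. Addresses and thresholds here are invented; the point is that the source field of hostile traffic is attacker-controlled, so the counterattack can land on an innocent party, while a purely defensive drop does no harm:

```python
# Toy illustration of automated response versus defensive filtering.
THRESHOLD = 3  # packets per window before the naive system retaliates

def naive_auto_retaliate(packets):
    """Counts packets per claimed source and 'attacks' any heavy sender."""
    counts = {}
    for src, _payload in packets:
        counts[src] = counts.get(src, 0) + 1
    return [src for src, n in counts.items() if n >= THRESHOLD]

def firewall_drop(packets, blocklist):
    """The defensive alternative: just discard the hostile traffic."""
    return [p for p in packets if p[0] not in blocklist]

# A hacker floods us while spoofing a hospital's address:
flood = [("198.51.100.7", "junk")] * 5  # 198.51.100.7 is spoofed (a hospital)

# The automated system counterattacks the innocent spoofed address...
assert naive_auto_retaliate(flood) == ["198.51.100.7"]

# ...while the firewall silently drops the same traffic, harming no one.
assert firewall_drop(flood, {"198.51.100.7"}) == []
```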
We’re opening the doors to people launching missiles and dropping bombs by taking the human out of the decision chain for deciding how we should respond to these threats. And this is something we’re seeing more and more happening in the traditional means as our methods of warfare become increasingly automated and roboticized such as through drone warfare. And this is a line that we as a society, not just in the United States but around the world, must never cross. We should never allow computers to make inherently governmental decisions in terms of the application of military force, even if that’s happening on the internet.
Bamford: And Richard Clarke has said that it’s more important for us to defend ourselves against attacks from China than to attack China using cyber tools. Do you agree with that?
Snowden: I strongly agree with that. The concept there is that there’s not much value to us in attacking Chinese systems. We might take a few computers offline. We might take a factory offline. We might steal secrets from a university research program, maybe even something high-tech. But how much more does the United States spend on research and development than China does? Defending ourselves from internet-based attacks, internet-originated attacks, is much, much more important than our ability to launch attacks against similar targets in foreign countries, because when it comes to the internet, when it comes to our technical economy, we have more to lose than any other nation on earth.
Bamford: I think you said this before, but in the past, the U.S. has actually used cyber warfare to attack things like hospitals and things like that in China?
Snowden: So they’re not cyber warfare capabilities. They’re CNE, computer network exploitation.
Bamford: Yeah, if you could just explain that a little.
Snowden: I’m not going to get into that on camera. But what the stories showed and what you can sort of voice over is that Chinese universities—not just Chinese, actually—scratch that—is that the National Security Agency has exploited internet exchanges, internet service providers, including in Belgium—the Belgacom case— through their allies at GCHQ and the United Kingdom.
They’ve attacked universities, hospitals, internet exchange points, internet service providers—the critical infrastructure that all of us around the world rely on.
And it’s important to remember when you start doing things like attacking hospitals, when you start doing things like attacking universities, when you start attacking things like internet exchange points, when something goes wrong, people can die. If a hospital’s infrastructure is affected, lifesaving equipment turns off. When an internet exchange point goes offline, voice over IP calls, now the common method of communication, stop working; even cell phone networks route through internet communication points nowadays. People can’t call 911. Buildings burn down. All because we wanted to spy on somebody.
So we need to be very careful about where we draw the line and what is absolutely necessary and proportionate to the threat that we face at any given time. I don’t think there’s anything, any threat out there today that anyone can point to, that justifies placing an entire population under mass surveillance. I don’t think there’s any threat that we face from some terrorist in Yemen that says we need to hack a hospital in Hong Kong or Berlin or Rio de Janeiro.
Bamford: I know we’re on a time limit here, but are there questions that I haven’t—
Producer: Let’s take a two minute break here.
Bamford: One of the most interesting things about the Stuxnet attack was that the President—both President Bush and President Obama—were told don’t worry, this won’t be detected by anybody. There’ll be no return address on this. And number two, it won’t escape from the area that they’re focusing it on anyway, the centrifuges. Both of those proved wrong, and the virus did escape, and it was detected, and then it was traced back to the United States. So is this one of the big dangers, the fact that the President is told these things, the President doesn’t have the capability to look into every technical issue, and then these things can wind up hitting us back in the face?
Snowden: The problem is the internet is the most complex system that humans have ever invented. And with every internet-enabled operation that we’ve seen so far, all of these offensive operations, we see knock-on effects. We see unintended consequences. We see emergent behavior, where when we put the little evil virus in the big pool of all our private lives, all of our private systems around the internet, it tends to escape and go Jurassic Park on us. And as of yet, we’ve found no way to prevent that. And given the complexity of these systems, it’s very likely that we never will.
What we need to do is we need to create new international standards of behavior—not just national laws, because this is a global problem. We can’t just fix it in the United States, because there are other countries that don’t follow U.S. laws. We have to create international standards that say these kind of things should only ever occur when it is absolutely necessary, and that the response, the operation, is tailored to be precisely restrained and proportionate to the threat faced. And that’s something that today we don’t have, and that’s why we see these problems.
Bamford: Another problem is, back in the Cold War days—and most people are familiar with that—when there was a fairly limited number of countries that could actually develop nuclear weapons. There were a handful of countries basically that could have the expertise, take the time, find the plutonium, put a nuclear weapon together. Today, the world is completely different, and you could have a small country like Fiji with the capability of doing cyber warfare. So it isn’t limited like it was in those days to just a handful of countries. Do you see that being a major problem with this whole idea of getting into cyber warfare, where so many countries have the capability of doing cyber warfare, and the U.S. being the most technologically vulnerable country?
Snowden: Yeah, you’re right. The problem is that we’re more reliant on these technical systems. We’re more reliant on the critical infrastructure of the internet than any other nation out there. And when there’s such a low barrier to entering the domain of cyber-attacks—cyber warfare, as they like to call it to talk up the threat—we’re starting a fight that we can’t win.
Every time we walk onto the field of battle and the field of battle is the internet, it doesn’t matter if we shoot our opponents a hundred times and hit every time. As long as they’ve hit us once, we’ve lost, because we’re so much more reliant on those systems. And because of that, we need to be focusing more on creating a more secure, more reliable, more robust, and more trusted internet, not one that’s weaker, not one that relies on this systemic model of exploiting every vulnerability, every threat out there. Every time somebody on the internet sort of glances at us sideways, we launch an attack at them. That’s not going to work out for us long term, and we have to get ahead of the problem if we’re going to succeed.
Bamford: Another thing that the public doesn’t really have any concept of, I think at this point, is how organized this whole Cyber Command is, and how aggressive it is. People don’t realize there’s a Cyber Army now, a Cyber Air Force, a Cyber Navy. And the fact that the models for some of these organizations like the Cyber Navy are things like we will dominate cyberspace the same way we dominate the sea or the same way that we dominate land and the same way we dominate space. So it’s this whole idea of creating an enormous military just for cyber warfare, and then using this whole idea of we’re going to dominate cyberspace, just like it’s the navies of centuries ago dominating the seas.
Snowden: Right. The reason they say that they want to dominate cyberspace is because it’s politically incorrect to say you want to dominate the internet. Again, it’s sort of a branding effort to get them the support they need, because we the public don’t want to authorize the internet to become a battleground. We need to do everything we can as a society to keep that a neutral zone, to keep that an economic zone that can reflect our values, both politically, socially, and economically. The internet should be a force for freedom. The internet should not be a tool for war. And for us, the United States, a champion of freedom, to be funding and encouraging the subversion of a tool for good to be a tool used for destructive ends is, I think, contrary to the principles of us as a society.
Bamford: You had a question, Scott?
Producer: It was really just a question about (inaudible) vulnerabilities going beyond operating systems that we know of, (inaudible) and preserving those vulnerabilities, that that paradox extends over into critical infrastructure as well as—
Snowden: Let me just freestyle on that for a minute, then you can record the question part whenever you want. Something we have to remember is that everything about the internet is interconnected. All of our systems are not just common to us because of the network links between them, but because of the software packages, because of the hardware devices that comprise it. The same router that’s deployed in the United States is deployed in China. The same software package that controls the dam floodgates in the United States is the same as in Russia. The same hospital software is there in Syria and the United States.
So if we are promoting the development of exploits, of vulnerabilities, of insecurity in this critical infrastructure, and we’re not fixing it when we find it—when we find critical flaws, instead we put them on the shelf so we can use them the next time we want to launch an attack against some foreign country—we’re leaving ourselves at risk, and it’s going to lead to a point where the next time a power plant goes down, the next time a dam bursts, the next time the lights go off in a hospital, it’s going to be in America, not overseas.
Bamford: Along those lines, one of the things we’re focusing on in the program is the potential extent of cyber warfare. And we show a dam, for example, in Russia, where there was a major power plant under that. This was a facility that was three times larger than the Hoover Dam, and it exploded. One of the turbines, which weighed as much as two Boeing 747s, shot 50 feet into the air and then crashed down and killed 75 people. And that was all because of what was originally thought to be a cyber-attack, but turned out to be a mistaken piece of code that was sent to make this happen. It was accidental.
But the point is this is what can happen if somebody wants to deliberately do this, and I don’t think that’s what many people in the U.S. have a concept of, that this type of warfare can be that extensive. And if you could just give me some ideas along those lines of how devastating this can be, not just in knocking off a power grid, but knocking down an entire dam or an entire power plant.
Snowden: So I don’t actually want to get in the business of enumerating the list of the horrible of horribles, because I don’t want to hype the threat. I’ve said all these things about the dangers and what can go wrong, and you’re right that there are serious risks. But at the same time, it’s important to understand that this is not an existential threat. Nobody’s going to press a key on their keyboard and bring down the government. Nobody’s going to press a key on their keyboard and wipe a nation off the face of the earth.
We have faced threats from criminal groups, from terrorists, from spies throughout our history, and we have limited our responses. We haven’t resorted to total war every time we have a conflict around the world, because that restraint is what defines us. That restraint is what gives us the moral standing to lead the world. And if we say, there are cyber threats out there, this is a dangerous world, and we have to be safe, we have to be secure no matter the cost, we’ve lost that standing.
We have to be able to reject disproportionate and unjustified responses in the cyber domain just as we do in the physical domain. We reject techniques like torture regardless of whether they’re effective or ineffective because they are barbaric and harmful on a broad scale. It’s the same thing with cyber warfare. We should never be attacking hospitals. We should never be taking down power plants unless that is absolutely necessary to ensure our continued existence as a free people.
Bamford: That’s fine with me. If there’s anything that you think we didn’t cover or you want to put in there?
Snowden: I was thinking about two things. One is—I went off a lot on the politics here, and a lot of it was ramble-y, so I might try one more thing on that. The other one I was talking about the VFX thing for the cloud, how cyber-attacks happen.
Producer: So I just want sort of an outline of where you want to go to make sure we get that.
Bamford: Yeah, what kind of question you want me to ask.
Snowden: You wouldn’t even necessarily have to ask a question. It would just be—
Snowden: Yeah. It would just be like a segment. I would say people ask how does a cyber-attack happen. People ask what does exploitation on the internet look like, and how do you find out where it came from. Most people nowadays are aware of what IP addresses are, and they know that you shouldn’t send an email from a computer that’s associated with you if you don’t want it to be tracked back to you. You don’t want to hack the power plant from your house if you don’t want them to follow the trail back and see your IP address.
But there are also what are called proxies, proxy servers on the internet, and this is very typical for hackers to use. They create what are called proxy chains where they gain access to a number of different systems around the world, sometimes by hacking them, and they use them as sort of relay boxes. So you’ve got the originator of an attack all the way over here on the other side of the planet in the big orb of the internet, just a giant constellation of network links all around. And then you’ve got their intended victim over here.
But instead of going directly from them to the victim in one straight path where this victim sees the originator, the attacker, was the person who sent the exploit to them, who attacked their system, you’ll see they do something where they zigzag through the internet. They go from proxy to proxy, from country to country around the world, and they use that last proxy, that last step in the chain, to launch the attack.
So while the attack could have actually come from Missouri, an investigator responding to the attack will think it came from the Central African Republic or from the Sudan or from Yemen or from Germany. And the only way to track that back is to hack each of those systems back through the chain or to use mass surveillance techniques to have basically a spy on each one of those links so you can follow the tunnel all the way home.
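The relay mechanics Snowden describes can be sketched in a few lines of Python. This is a purely illustrative model (the hosts and addresses are hypothetical, drawn from documentation-reserved IP ranges): the victim's logs only ever record the last relay in the chain, and an investigator has to unwind the chain hop by hop to reach the true originator.

```python
# Illustrative sketch of a proxy chain. All addresses are hypothetical,
# taken from IP blocks reserved for documentation (RFC 5737).
chain = [
    "203.0.113.5",   # originator (e.g. somewhere in Missouri)
    "198.51.100.7",  # compromised relay in Germany
    "192.0.2.9",     # compromised relay in Sudan
    "192.0.2.44",    # final relay in the Central African Republic
]

def apparent_source(proxy_chain):
    """The victim's logs record only the address of the last relay."""
    return proxy_chain[-1]

def trace_back(proxy_chain, observed):
    """An investigator must unwind the chain hop by hop, from the
    observed address back toward the true originator."""
    idx = proxy_chain.index(observed)
    return list(reversed(proxy_chain[: idx + 1]))

victim_sees = apparent_source(chain)        # the last relay, not the originator
path_home = trace_back(chain, victim_sees)  # hop-by-hop path back to the source
```

The point of the model is the asymmetry: `apparent_source` is free for the attacker, while `trace_back` requires compromising (or surveilling) every intermediate link.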
The more I think about it, the more I think that would be way too complicated to—
Producer: No, I was just watching your hands. That was just filling in the blanks.
Bamford: No, I was, too. That’ll be fine.
Producer: And it’s a good point of how you can automate responses and how you—
Bamford: Yeah, we can just drive in and draw in those zigzags.
Snowden: Right. I mean, yeah, the way I would see it is just sort of like stars, like a constellation of points. And you’ve got different colored paths going between them. And then you just highlight the originator and the victim. And they don’t have to be on the edges. They could even be in the center of the cloud somewhere. And then you have sort of a green line going straight between them, and it turns red when it hacks, but then you see the little police agency follow it back. And then so you put an X on it and you replace it with the zigzag line that’s green, and then it goes red when it attacks, to sort of call it the path.
Bamford: From Missouri to the Central African Republic.
Producer: Are there any other visualizations that you can think of that maybe you see it as an image as opposed to a (multiple conversations; inaudible).
Snowden: I think one of the good ones to do—and you can do it pretty cheaply, even almost funny, like cartoon-like, and sort of like almost a Flash animation, like paper cutouts—would be to help people visualize the problem with the U.S. prioritizing offense over defense is you look at it—and I’ll give a voiceover here.
When you look at the problem of the U.S. prioritizing offense over defense, imagine you have two bank vaults, the United States bank vault and the Bank of China. But the U.S. bank vault is completely full. It goes all the way up to the sky. And the Chinese bank vault or the Russian bank vault or the African bank vault or whoever the adversary of the day is, theirs is only half full or a quarter full or a tenth full.
But the U.S. wants to get into their bank vault. So what they do is they build backdoors into every bank vault in the world. But the problem is their vault, the U.S. bank vault, has the same backdoor. So while we’re sneaking over to China and taking things out of their vault, they’re also sneaking over to the United States and taking things out of our vault. And the problem is, because our vault is full, we have so much more to lose. So in relative terms, we gain much less from breaking into the vaults of others than we do from having others break into our vaults. That’s why it’s much more important for us to be able to defend against foreign attacks than it is to be able to launch successful attacks against foreign adversaries.
You know, just something sort of symbolic and quick that people can instantly visualize.
Producer: The other thing I’d like to put to you, because we have to find somebody to do it, is how do you make a cyber-weapon? What is malware? What is that?
Snowden: When people are talking about cyber weapons, digital weapons, what they really mean is a malicious program that’s used for a military purpose. A cyber weapon could be something as simple as an old virus from 1995 that just happens to still be effective if you use it for that purpose.
Custom-developed digital weapons, cyber weapons nowadays typically chain together a number of zero-day exploits that are targeted against the specific site, the specific target that they want to hit. But this level of sophistication depends on the budget and the quality of the actor who’s instigating the attack. If it’s a country that’s poorer or less sophisticated, it’ll be a less sophisticated attack.
But the bare-bones tools for a cyber-attack are to identify a vulnerability in the system you want to gain access to or you want to subvert or you want to deny, destroy, or degrade, and then to exploit it, which means to send code, deliver code to that system somehow, whether it’s locally in the physical realm or on the same network or remotely across the internet, across the global network, and get that code to that vulnerability, to that crack in their wall, jam it in there, and then have it execute.
The payload can then be the action, the instructions that you want to execute on that system, which typically, for the purposes of espionage, would be leaving an implant behind to listen in on what they’re doing, but could just as easily be something like the wiper virus that just deletes everything from the machines and turns them off. Really, it comes down to any instructions that you can think of that you would want to execute on that remote system.
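The three stages described here, find a vulnerability, deliver an exploit to it, then execute a payload, can be modeled abstractly. The sketch below is a hypothetical toy (no real exploit code; all names are invented for illustration) showing how the same delivery pipeline can carry either an espionage implant or a destructive wiper, which is the point about payloads being arbitrary instructions.

```python
# Toy model of the attack stages described above. Everything here is
# illustrative: "systems" are plain dicts, and payloads are just labels.

def find_vulnerability(system):
    """Stage 1: identify a crack in the target's wall (the first known flaw)."""
    return next(iter(system["vulnerabilities"]), None)

def deliver_exploit(system, vuln):
    """Stage 2: get code to the vulnerability and have it execute.
    Modeled as a simple success check against the target's flaw list."""
    return vuln in system["vulnerabilities"]

def run_payload(system, action):
    """Stage 3: the payload is whatever instructions the attacker wants --
    an implant that listens for espionage, or a wiper that deletes everything."""
    if action == "implant":
        system["implant"] = True
    elif action == "wipe":
        system["data"] = []
    return system

target = {"vulnerabilities": ["unpatched-service"], "data": ["records"], "implant": False}
vuln = find_vulnerability(target)
if vuln and deliver_exploit(target, vuln):
    run_payload(target, "implant")  # espionage path: data survives, implant stays
```

Swapping `"implant"` for `"wipe"` in the last call models the wiper-virus case: same vulnerability, same delivery, entirely different consequence.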
Bamford: Along those lines, there’s one area that could really be visualized I think a lot better, and that’s the vulnerabilities. The way I’ve said it a few times but might be good if you thought about it is looking at a bank vault, and then there are these little cracks, and that enables somebody to get into the bank vault. So what the U.S. is doing is cataloguing all those little cracks instead of telling the bank how to correct those cracks. Problem is other people can find those same cracks.
Snowden: Other people can see the same cracks, yeah.
Bamford: And take the money from the bank, in which case the U.S. did a disservice to the customers of the bank, which is the public, by not telling the bank about the cracks in the first place.
Snowden: Yeah, that’s perfect. And another way to do it is not just cracks in the walls, but it could be other ways in. You can show a guy sort of peeking over the wall, you can see a guy tunneling underneath, you can see a guy going through the front door. All of those, in cyber terms, are vulnerabilities, because it’s not that you have to look for one hole of a specific type. It’s the whole paradigm. You look at the totality of their security situation, and you look for any opening by which you might subvert the intent of that system. And you just go from there. There’s a whole world of exploitation, but it goes beyond the depth of the general audience.
Producer: We can just put them all (multiple conversations; inaudible).
Bamford: Any others?
Snowden: One thing, yeah. There were a couple things I wanted to think about. One was man-in-the-middle, a type of attack you should illustrate. It’s routine hacking, but it’s related to CNE specifically, computer network exploitation. But I think conflating it into cyber warfare helps people understand what it is.
A man-in-the-middle attack is where someone like the NSA, somebody who has access to the transmission medium that you use for communicating, actually subverts your communication. They intercept it, read it, and pass it on, or they intercept it, modify it, and pass it on.
You can imagine this as you put a letter in your mailbox for the postal carrier to pick up and then deliver, but you don’t know that the postal carrier actually took it to the person that you want until they confirm that it happened. The postal carrier could have replaced it with a different letter. They could have opened it. If it was a gift, they could have taken the gift out, things like that.
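The postal analogy maps directly onto code. Below is a minimal, hypothetical sketch: an honest carrier passes the letter through unchanged, while a man-in-the-middle reads a copy of everything and can alter it before delivery, and the recipient has no way to tell the difference from the delivered letter alone.

```python
# Toy model of the mail analogy for a man-in-the-middle attack.
# Letters are plain dicts; the "tamper" function is the attacker's choice.

def honest_carrier(letter):
    """A trusted carrier delivers the letter exactly as sent."""
    return letter

def man_in_the_middle(letter, tamper):
    """An attacker on the transmission path intercepts, reads, and
    optionally modifies the letter before passing it on."""
    intercepted = dict(letter)      # read a copy of everything in transit
    return tamper(intercepted)      # deliver a possibly altered letter

letter = {"to": "Alice", "body": "meet at noon", "gift": "watch"}

# The recipient receives a letter either way -- but not necessarily yours:
# here the attacker removes the gift and forwards the rest untouched.
delivered = man_in_the_middle(letter, lambda l: {**l, "gift": None})
```

The defense Snowden goes on to describe, trusted standards of behavior plus end-to-end protections, amounts to making `tamper` either detectable or useless.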
We have, over time, created global standards of behavior that mean mailmen don’t do that. They’re afraid of the penalties. They’re afraid of getting caught. And we as a society recognize that the value of having trusted means of communication, trusted mail, far outweighs any benefit that we might get from being able to freely tamper with mail. We need those same standards to apply to the internet. We need to be able to trust that when we send our emails through Verizon, that Verizon isn’t sharing them with the NSA, that Verizon isn’t sharing them with the FBI or German intelligence or French intelligence or Russian intelligence or Chinese intelligence.
The internet has to be protected from this sort of intrusive monitoring or else the medium upon which we all rely for the basis of our economy and our normal life—everybody touches the internet nowadays—we’ll lose that, and it’s going to have broad effects as a consequence that we cannot predict.
Producer: Terrific. I think we ought to keep going and do like an interactive Edward Snowden kind of app.
Snowden: My lawyer would murder me.
Producer: No, you really—(inaudible) used to give classes.
Snowden: Yeah, I used to teach. It was on a much more specific level, which is why I keep having to dial back and think about it.
Producer: You’re a very clear speaker about it.
Snowden: Let me just one more time do the offense and defense and security thing. I think you guys already have enough to patch it together, but let me just try to freestyle on it.
The community of technical experts who really manage the internet, who built the internet and maintain it, are becoming increasingly concerned about the activities of agencies like the NSA or Cyber Command, because what we see is that defense is becoming less of a priority than offense. There are programs we’ve read about in the press over the last year, such as the NSA paying RSA $10 million to use an insecure encryption standard by default in their products. That’s making us more vulnerable not just to the snooping of our domestic agencies, but also foreign agencies.
We saw another program called Bullrun, which subverts—it continues to subvert—similar encryption standards that are used for the majority of e-commerce all over the world. You can’t go to your bank and trust that communication if those standards have been weakened, if those standards are vulnerable. And this is resulting in a paradigm where these agencies wield tremendous power over the internet at the price of making the rest of their nation incredibly vulnerable to the same kind of exploitative attacks, to the same sort of mechanisms of cyber-attack.
And that means while we may have a real advantage when it comes to eavesdropping on the military in Syria or trade negotiations over the price of shrimp in Indonesia—which is an actually real anecdote—or even monitoring the climate change conference, it means we end up living in an America where we no longer have a National Security Agency. We have a national surveillance agency. And until we reform our laws and until we fix the excesses of these old policies that we inherited in the post-9/11 era, we’re not going to be able to put the security back in the NSA.
Bamford: That’s great. Just along those lines, from what you know about the project Bullrun and so forth, how secure do you think things like AES, DES, those things are, the advanced encryption standard?
Snowden: I don’t actually want to respond to that one on camera, and the answer is I actually don’t know. But yeah, so let’s leave that one.
Bamford: I mean, that would have been the idea to weaken it.
Snowden: Right. The idea would be to weaken it, but which standards? Like is it AES? Is it the other ones? DES was actually stronger than we thought it was at the time because the NSA had secretly manipulated the standard to make it stronger back in the day, which was weird, but that shows the difference in thinking between the ’80s and the ’90s. It was the S-boxes. That’s what it was called. The S-boxes was the modification made. And today, where they go, oh, this is too strong, let’s weaken it. The NSA was actually concerned back in the time of the crypto-wars with improving American security. Nowadays, we see that their priority is weakening our security, just so they have a better chance of keeping an eye on us.
Bamford: Right, well, I think that’s perfect. So why don’t we just do the—
Producer: Would you like some coffee? Something to drink?
Bamford: Yeah, we can get something from room service, if you like.
Snowden: I actually only drink water. That was one of the funniest things early on. Mike Hayden, former NSA and CIA director, was—he did some sort of incendiary speech—
Bamford: Oh, I know what you’re going to say, yeah.
Snowden: —in like a church in D.C., and Barton Gellman was there. He was one of the reporters. It was funny because he was talking about how I was—everybody in Russia is miserable. Russia is a terrible place. And I’m going to end up miserable and I’m going to be a drunk and I’m never going to do anything. I don’t drink. I’ve never been drunk in my life. And they talk about Russia like it’s the worst place on earth. Russia’s great.
Bamford: Like Stalin is still in charge.
Snowden: Yeah, I know. It’s crazy.
Bamford: But you know what he was referring to, I think. You know what he was flashing back to was—and I’d be curious whether you’ve actually heard about this or not—
Snowden: Philby and Burgess and—
Bamford: Martin and Mitchell.
Snowden: I actually don’t remember the Martin and Mitchell case that well. I’m aware of the outlines of it.
Bamford: But you know what they did?
The CENTCOM hack was much more damaging than what the Pentagon has openly admitted (Pentagon spokesman said it was “little more than a prank or vandalism”):