In January, Representative Charles Albert “Dutch” Ruppersberger (D-MD) reintroduced the Cyber Intelligence Sharing and Protection Act (CISPA) as H.R. 234 in the 114th Congress. The bill was first introduced by Representative Mike Rogers (R-MI) in 2011.
On the executive branch side, President Obama trumpeted the need for intelligence sharing in January before his State of the Union Address and then signed an executive order to encourage and promote public/private cyber threat intelligence sharing at the White House Summit on Cybersecurity and Consumer Protection in February.
What’s so great about public/private threat intelligence sharing? The basic assumption is that federal intelligence agencies (e.g., CIA, NSA) and law enforcement agencies (e.g., FBI, Secret Service) have classified, unique cyber threat intelligence about cyber-adversaries and their tactics, techniques, and procedures (TTPs). Thus, this information could be extremely valuable to private sector organizations under constant attack from nation-state APTs and cyber-criminals.
If the government really does have exceptional threat intelligence, this effort makes a lot of sense. Okay, but what if the government’s threat intelligence is already publicly available from open source organizations like FS-ISAC, Team Cymru, US-CERT, or VirusTotal? Or suppose it’s commercially available from threat intelligence experts like CrowdStrike, FireEye, iSight Partners, Kaspersky Lab, Norse, Verisign, or Webroot? Either way, if the US government does not have a monopoly on specific threat intelligence, then all this public/private threat intelligence sharing dialogue is nothing more than political rhetoric.
These questions persuaded me to do a bit of digging to see if I could find examples where public/private cyber threat intelligence sharing worked and where it failed. In a recent blog, I reviewed Shane Harris’s book @War: The Rise of the Military-Internet Complex, which provides some great background for assessing these programs. In my humble opinion, the results are a mixed bag:
- One of the most successful efforts came after hackers penetrated a military contractor’s network in 2007 and exfiltrated classified data about the next-generation F-22 fighter jet. After this event, the Feds took the unusual step of sharing classified cyber threat intelligence with an array of strategic government suppliers. This effort ultimately evolved into the Department of Defense (DoD) Defense Industrial Base (DIB) cyber-threat intelligence sharing program. Since its inception, DIB has grown bigger, more extensive, and more structured, and the general feedback is that it has been a true success.
Now the bad news:
- In 2009, former FBI Deputy Assistant Director for Cyber Issues Steve Chabinsky held a meeting with law enforcement/intelligence officials and leading US banks. When he asked the bankers for feedback about an ongoing public/private threat intelligence sharing program, a spokesperson responded: “It’s not going well. We give you all our information voluntarily and we get nothing back.”
- In a 2010 meeting at the Department of Homeland Security, then NSA director Keith Alexander gave a presentation on NSA’s threat signature catalogue to a number of leaders from the Internet industry. After seeing the presentation, former Google CEO Eric Schmidt is quoted as saying: “You mean to tell me they spent all this money and this is what they came up with? Threat signatures don’t just come where the NSA points its sensors.”
- In a 2011 program, Internet Service Providers (ISPs) monitored their Internet traffic using classified government cyber-intelligence from NSA. The program was then reviewed by Carnegie Mellon University, one of the nation’s top computer science and cybersecurity schools. CMU found that most of the government’s cyber-threat intelligence was out-of-date and ineffective: of 52 cyber-attacks detected, only 2 resulted from NSA threat signatures.
One other example is worth noting. In January 2015, former Senator Tom Coburn (R-OK) published a report titled A Review of the Department of Homeland Security’s Missions and Performance. On page 94 of this report, Coburn states that when reporting on critical software vulnerabilities, US-CERT “does not provide information nearly as quickly as alternative private sector analysis companies.” The report provides examples as well.
I’m sure there are other successful examples of public/private cyber-threat intelligence sharing that I didn’t cover, but these visible failures are disheartening to say the least. To be worth the effort, a public/private cyber-threat intelligence sharing program must add value to all participants and protect privacy.
So before we move forward, we need to define a model for strategic and long-term success for large enterprises and mid-sized organizations alike.