Thursday 10 May 2012

Breach Disclosure @AthCon

After being fortunate enough to have my presentation on breach disclosure selected by the AthCon CFP panel, I had a fun trip over to Athens last week!
AthCon (http://www.athcon.org/) is in its third year now, and it was great to see such a large attendance despite all the talk of economic turmoil in Greece.  The badges were Arduino printed circuit boards, and one of the sponsors had even hacked one together by the end of the conference to run a small version of the game Simon.

So what was I doing there?  Most of the presentations were significantly more technical than mine, covering subjects such as finux's talk on IDS evasion techniques, iOS hacking, rootkits and so on.  Ian Glover from CREST spoke about the need for standardised professionalism in the security testing industry, and I found myself completely agreeing with him.  I was there to talk about breach disclosure.  Not reporting vulnerabilities, not putting out a security advisory, but how I think we should be thinking about security breaches.

Having seen a number of breaches as an auditor and QSA, I've seen first-hand lots of bad responses.  I've also seen businesses be reluctant to communicate about incidents for fear of damage to their reputation.  So some of what I had to say was about thinking about the problem differently, from the consumer's perspective.
The idea behind my presentation was really inspired by a conversation, or rather a set of questions, raised at the PCI community event in London.  One of my old colleagues was there and we were chatting about his new role, and when the microphones went open one of his new colleagues stood up and made a really good point to Visa and the SSC, which was basically: "Why aren't all the payment card breaches made public?"  There was a lot of mumbling about NDAs, reputation, and it not being appropriate.  However, I couldn't help thinking he'd made a really good point.  Would it really hurt if the details from a breach were made public (even anonymously) so that everyone could see what happened and learn from it?

This made me think: in the infosec community we are really good at finding vulnerabilities, exploiting them and telling people what could happen, but we're not really great at telling them how likely they are to be attacked or breached.  In my experience this leads to difficult discussions with senior executives who perhaps don't really understand the threats out there, or who want things quantified consistently with other business risks.
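To illustrate what "quantified consistently with other risks" might look like, here's a minimal sketch in Python of the classic annualised loss expectancy calculation (expected frequency times expected impact).  Every figure in it is a made-up assumption, not data from any real breach:

# Annualised loss expectancy (ALE): the probability-times-impact
# formula commonly used to quantify other business risks.
def annualised_loss_expectancy(single_loss_gbp, events_per_year):
    """ALE = SLE x ARO: expected cost per year of one breach scenario."""
    return single_loss_gbp * events_per_year

# Hypothetical scenario: a breach costing 250,000 GBP per event,
# estimated to occur once every five years (0.2 events/year).
print(annualised_loss_expectancy(250000, 0.2))  # 50000.0 GBP/year

The hard part, of course, is the 0.2: without shared breach statistics, that frequency estimate is a guess rather than a measurement, which is exactly the gap that open disclosure could fill.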

So I started putting together some examples of how other industries do this, primarily healthcare, the police, and the nuclear energy industry.  These industries all seem better than infosec at sharing information in order to quantify risk.  The UK Health and Safety Executive have even gone to the trouble of writing a document on the tolerability of risk from nuclear power stations, available here.  It puts the risk of death from a number of causes in layman's terms (e.g. cancer: 1 in 374 people).  The thing that really interested me, though, was that human life as an asset had been assigned a monetary value.  Without getting into the morality of this, I actually thought it seemed like a good idea (for reference, it's £600,000 in the UK), as insurers can use it to help quantify losses in the event of various incidents.

Valuing Information Assets
This then led me on to one of the other problematic areas in information security: how do we value information assets?  The answer can be hugely subjective, depending on the type of business you are in or the type of information.  Let's look at personal information such as a name and address.  While this has some inherent value to both businesses and fraudsters, it's also not too difficult to obtain legitimately, whereas banking information, medical records, and other data that the UK DPA would consider sensitive are not.
One of the things I suggest in my presentation is that we should look to, or even lobby, our regulators to set some absolute minimum values for these assets.  They legislate for them and can fine people, so it doesn't seem unreasonable that they should also give them a formal minimum value.  For simple maths' sake, if we gave someone's name and address an arbitrary value of 1p but gave their medical record a value of £10, we can infer that the medical data is significantly more valuable and sensitive.  We can then start to ask whether we can set security expectations around certain values of data.  Do we really care if someone loses a couple of names and addresses on a piece of paper?  Maybe not: we valued the data at 2p and protected it accordingly.  But if those same two records were medical data, they'd be worth £20, and would anyone leave a £20 note unattended on a train?  Probably not.

The ICO could set some sensible thresholds around what is acceptable to lose, what isn't, and in what circumstances.  Businesses could value their data and then apply controls appropriate to the value of the asset.  Getting back on topic, breaches could also be classified by the value of the data lost or stolen.  Post-breach reviews could then apply consistent context when the auditors and forensics teams come in, with the control environment reviewed against the value of the data.  These breaches could be disclosed anonymously to the regulator, or a body operating on its behalf, which could then openly publish statistics on the likelihood of certain breaches occurring in certain sectors over time.  That would give us all a better ability to beat off the FUD, and to manage our information security risks in a more quantifiable manner.
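To make the arithmetic concrete, here's a rough sketch of how a business might value a lost dataset against a regulator-set tariff.  The per-record values are the arbitrary 1p and £10 examples above, and the reporting threshold is equally hypothetical:

# Hypothetical regulator-set minimum values per record (GBP).
RECORD_VALUES_GBP = {
    "name_and_address": 0.01,  # 1p: easy to obtain legitimately
    "medical_record": 10.00,   # sensitive under the DPA
}

# Hypothetical ICO disclosure threshold (GBP).
REPORTING_THRESHOLD_GBP = 5.00

def breach_value_gbp(lost_records):
    """Total value of lost data: record count times per-record value."""
    return sum(RECORD_VALUES_GBP[kind] * count
               for kind, count in lost_records.items())

# Two names and addresses on a piece of paper: 2p, below the threshold.
print(breach_value_gbp({"name_and_address": 2}) > REPORTING_THRESHOLD_GBP)  # False

# The same two records as medical data: 20 GBP, well over it.
print(breach_value_gbp({"medical_record": 2}) > REPORTING_THRESHOLD_GBP)    # True

The point isn't these particular numbers; it's that once the regulator publishes the tariff, every business applies the same sums, and the resulting breach reports become comparable across sectors.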

More soon

Andy
