Sunday, 27 May 2012

BYOD - Bring Your Own Disaster!

BYOD - perhaps we should call it "Bring Your Own Disaster!"

There have been plenty of good reasons for not letting people connect whatever they like to the enterprise network.  These have not changed; in fact there are more threats to the corporate computing environment than ever before, with ever simpler attack vectors.  However, the BYOD brigade has charged on.  Want an iPad for email?  Fine, sign off the risk.  Non-corporate laptop?  Sign here ______.  What concerns me is not executives signing off risk - that, of course, is their decision - it's whether they understand the risk in the first instance.

It seems strange, at a time when information security is apparently so high on the corporate agenda, that BYOD has as much traction as it does.  Does this show us that non-infosec executive management still have the "attacks come from the outside" mindset?  Successful breaches almost always seem to include some form of end-user device being used to attack the rest of the network.  They can be an easy target, and one authorised to access other services.  Why try to attack the data at rest when you can attack a vulnerable PC or user and steal their authorised credentials?  This can happen even without BYOD policies!

Whose asset is it anyway?
The question of who owns the underlying asset is a really important one.  If an employee owns the device, the enterprise can't really tell them what to do with it.  This will make data management policies almost impossible to police.  The data belongs to the enterprise, the device to the staff member, and the staff member is almost certainly going to be able to do whatever they like with that device.  If a device gets lost or stolen and the data on it is compromised and used fraudulently, I can't see too many judges looking on fondly: "So, you let them copy this information onto their personal tablet/laptop/smartphone?"  I'm not a lawyer, but I could see this becoming a legal minefield around duty of care.

No real cost saving.
As IBM has discovered, its BYOD initiative hasn't delivered the cost savings expected. I suspect this is down to the well-known fact that the cost of tin is always outweighed by support costs.  Non-standard builds and non-standard software/hardware are a support team's worst nightmare.
If you add security requirements to this, network architecture suddenly has to be far more defensive from the outset.  Really, all endpoint devices should be considered untrusted - perhaps even the network itself can't be trusted - so more and more checks and policy must be applied. IT and infosec can't even assume they'll be able to install products on the devices, so they have to look for more and more agentless technologies; endpoint analysis is required to see what on earth has been plugged in; network access control has to be deployed to quarantine unknown or suspicious devices; all internal systems require additional hardening and firewalling; there are multiple patch regimes to adhere to; and on, and on, and on.

A successful BYOD roll-out requires an incredibly well locked-down, hardened architecture with extensive internal firewalling, the like of which many organisations don't operate at the moment.  This sort of approach slows down delivery of services to "the business" because it simply takes longer.  Not helpful when the IT function is still trying to prove it is relevant as senior execs hear they can just move everything to the cloud.

Imagine malware that can detect whether it is plugged into the home network or the office one and operates intelligently based on that decision.  Whilst at work, it sniffs credentials, hoovers up data and does as much as it can with as much stealth as it can muster.  When it realises it has been plugged into the home network, with a nice big internet connection and no real firewall or IDS, it starts to transfer that data to the criminals, unbeknownst to the entity it was stolen from...

The BYOD culture reminds me of those days in primary school when the kids all bring a toy in to play with.  Everyone is impressed by the cool toys on display, and the older kids get to show off their latest action figures and video games, but not a great deal of work gets done by anyone.

BYOD - here to stay but probably for all the wrong reasons.

Thursday, 24 May 2012

Are you ready for the Olympics?

The Olympics is coming - are you ready?

Now, I'm not talking about your 100m personal best or whether you are a medal contender.  Unfortunately, I'm talking about the heightened risk of cyber security incidents.  During the Beijing Olympics there were 12 million cyber security incidents.  We should use that as the benchmark for our risk management for the London 2012 games.

The London Olympics has the potential to be an incredible event, attracting people from all over the world to the UK.  With this, unfortunately, comes a heightened level of risk.  Combat aircraft are already planned to fly in London airspace, and there is clear concern about the risk of a terrorist attack, as well as cyber attack.  The working assumption is that the threat level will be severe, but with a focus on the games taking place come what may.

With a potentially reduced workforce it will be even more important to ensure that information security controls don't slip. Olympic phishing attacks are likely to be prevalent, no doubt offering access to events or tickets - these could be laced with malware or end up compromising sensitive information from your staff or customers.

Sensible steps to take -

Ensure that you have cover for staff responsible for authorising changes to IT infrastructure and be prepared to limit the number of IT changes during the games period.

Compliance controls - if you have recurring compliance controls that are likely to fall within the games period ensure someone is specifically tasked with staying on top of them and has a deputy.  External and internal vulnerability scans, patching and AV should all be kept up to date particularly if you are subject to PCI DSS.  Resist the temptation to open up your outbound internet access so that staff can access streaming sites from their PCs.  This can make matters worse if you get hit by some rogue malware.  A couple of TVs in the office may be a better solution - see below.

Plan for multiple types of incident, and ensure you have contingency for the assets that may be affected (staff, internet, telecoms, IT etc.).  Ensure that all staff know how to respond and are empowered to do so - consider adding social media engagement to your incident response plan; this can be a good way to get the message out en masse quickly, from and to multiple types of devices.  Neira Jones of Barclaycard has a good blog post on that subject here.

For those taking card payments who are serving the tourist community attending the games, there is probably going to be an increased risk of payment fraud.  Cards from various parts of the world that may normally be declined are likely to be in use by tourists and fraudsters.  Make sure your staff are kept up to date, and take additional steps if necessary to verify the cardholder's identity.  Have a quick chat with your acquiring bank to see if there is any advice they can offer and to understand what your responsibilities are.

Engage employees and customers!  Whilst this might seem slightly off-topic for an infosec post, I'd suggest finding a way to get your staff engaged in the Olympic celebrations whilst at work.  It is highly likely that a number of people will have taken annual leave, and unauthorised absence may be higher than usual.  A couple of live TV feeds and positive acceptance that the games are going on might be enough to stop the odd straggler from an unauthorised absence when there is a big event.  Travel in London is likely to be slower and busier than normal, so expect more remote working requests or delays in people getting to the office.

As a real example, I was in Portugal during the 2010 World Cup, working at a client site, when Portugal beat North Korea 7-0.  Anyone who wanted to watch the game was given a free pass to do so by the operations manager, who had arranged for a TV to be set up in the board room and a stack of pizza for everyone.  She had a full office every day I was there.  Contractors were invited, as were clients, and everyone enjoyed the atmosphere.

For those of you looking to be a little more proactive, now is the time to be reviewing information security policies and procedures, updating risk assessments and incident response plans, and ensuring you have up-to-date contacts with suppliers, third parties and any contractors.  They should be thinking about this too.

Still 64 days to go ......


Monday, 14 May 2012

PCI DSS - How to - Configuration Standards - Part 2

Configuration Standards Part 2

Time to put some meat on the bones of this How-To.  Writing a configuration standard doesn't have to be a nightmare if you follow some straightforward structure.

Document Control!

First off, apply some sensible document controls.  This is of particular importance if your documentation is going to reside in Word/Excel documents and be controlled manually.  If you are using SharePoint or a wiki to host the documentation, you can automate this.  The document control section should contain the following details:

Author - name and contact details
Document version and version history - it's common to have three fields:
Version - Update reason / revision notes - Date

Revision period - with a reference to the revision owner as necessary (it may be different from the original author)
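
As a sketch, a minimal document-control header covering the fields above might look like this (all names, dates and revision notes are purely illustrative):

```
Document:       Windows 2008 Server Configuration Standard
Author:         J. Bloggs (j.bloggs@example.com)
Review period:  Annual - review owner: Infrastructure Team Lead

Version | Update reason / revision notes     | Date
--------|------------------------------------|-----------
1.0     | Initial release                    | 2012-01-10
1.1     | Added CIS benchmark exception list | 2012-04-02
```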

Write a brief introduction to the config standard: explain why the standard exists and why it must be followed.

Detail which assets the standard is intended to be applied to, e.g. all Windows 2008 servers, or Windows 2008 servers in the cardholder data environment ("CDE").  If this is going to be used for PCI DSS compliant assets, now is a good time to make reference to the industry-standard hardening approach you are using (SANS, NIST, CIS etc.) and also to your exceptions appendix, discussed later.

Implementation Instructions
If you have an existing build guide or base install you might wish to reference it here.  Otherwise it's common for this to be a step-by-step procedure so that it can be followed by anyone - very handy in a DR scenario.
PCI DSS - requirement reference - 2.1 
For a PCI DSS relevant system you should ensure that your implementation instructions mandate the changing of vendor-supplied defaults before the system goes onto "the network".  PCI DSS lists some simple examples, such as SNMP community strings, removal of unnecessary accounts etc.  If you are following something such as a CIS benchmark for Windows, you will get all this covered.
If the asset in question comes with a default password - this should also be changed. Default passwords can easily be found on sites like

PCI DSS - requirement reference - 2.2.1.  
To meet 2.2.1 the standard requires that you implement one primary function per server.  This applies to physical and virtual environments - in a virtual environment the hypervisor's sole function is to be a hypervisor, and each guest system has its own primary function.  The one primary function can often be seen as a stack.  I've often seen people get hung up on breaking out n-tier applications onto multiple bits of hardware for no real security benefit.  If the tiers are so closely coupled that a breach of any one tier would be a breach of another, then the "function" of those tiers is to support the application.  That is the primary function, and they can sit on one box.  The PCI SSC have repeatedly confirmed this when asked by clients I've worked with.

PCI DSS - requirement reference - 2.2.2
Your configuration standard should say that only the required services and protocols are to be enabled.  This is the foundation of a hardened build.  Mandatory!
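
One lightweight way to check for drift against 2.2.2 is to diff what is actually enabled on a host against the list documented in the standard.  A minimal sketch - the service names and allowlist here are purely illustrative, and in practice you'd collect the running set from the host itself (e.g. `sc query` on Windows):

```python
# Diff the services enabled on a host against the list documented
# in the configuration standard (all names are illustrative).
DOCUMENTED_SERVICES = {"W3SVC", "EventLog", "WinDefend", "Dnscache"}

def undocumented_services(running):
    """Return services that are enabled but absent from the standard."""
    return sorted(set(running) - DOCUMENTED_SERVICES)

# In practice this list would come from the host, e.g. 'sc query'.
running_now = ["W3SVC", "EventLog", "TlntSvr", "Spooler"]
for svc in undocumented_services(running_now):
    print("NOT IN STANDARD:", svc)
```

Anything the script flags is either a service to disable or a gap in the document - both of which your QSA will want resolved.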

PCI DSS - requirement reference - 2.2.3
This particular requirement requires that common security settings are documented in the configuration standard.  The vagueness of this requirement's wording doesn't help us implement it much, but the intention is that the config document details the specific settings.
One sensible way of covering this is to reference the implementation of your chosen industry standard, let's say CIS, and then put an appendix at the back of the standard to detail which settings have not been applied and why.  Simple!
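
As an illustration, an entry in that exceptions appendix could be as simple as the following (the benchmark item, setting and justification are made up):

```
Exception 1
Benchmark item:      CIS 1.1.4 - "Set minimum password age to 1 day"
Setting applied:     Minimum password age = 0
Reason:              Service account passwords are rotated automatically
                     by password vault software
Approved by / date:  Security Manager, 2012-03-15
```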

PCI DSS - requirement reference - 2.2.4
Dealing with 2.2.4 is really the other half of 2.2.2.  In 2.2.2 we are to enable just the required services, protocols, daemons etc.  For 2.2.4 we're being asked to remove what we've disabled, as well as unnecessary scripts, drivers, functionality etc.  In a Windows environment this isn't as difficult as it seems.  Have a look at this TechNet article on the use of the sc.exe tool.

PCI DSS - requirement reference - 2.3
Whilst 2.3 doesn't relate directly to the config standard document, it makes sense to ensure that the need to encrypt all non-console administrative access is documented and made mandatory.
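
A quick way to sanity-check 2.3 is to flag any well-known cleartext administration protocols left listening on a host.  A rough sketch - the port list is just the usual suspects, and the open-port data would really come from a scanner such as nmap:

```python
# Flag well-known cleartext admin protocols in a port-scan result
# (PCI DSS 2.3: encrypt all non-console administrative access).
CLEARTEXT_ADMIN_PORTS = {23: "telnet", 512: "rexec", 513: "rlogin", 514: "rsh"}

def cleartext_admin_findings(open_ports):
    """Return (port, protocol) pairs needing an encrypted replacement."""
    return [(p, CLEARTEXT_ADMIN_PORTS[p])
            for p in sorted(set(open_ports)) if p in CLEARTEXT_ADMIN_PORTS]

# Illustrative scan result - in practice this comes from your scanner.
for port, proto in cleartext_admin_findings([22, 23, 443, 3389]):
    print("Port %d (%s): close it or use an encrypted equivalent" % (port, proto))
```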

In the rest of the implementation instructions it is useful to add in the details from, or refer to, other documents that cover any of the following: host-based firewall/IDS config, antivirus settings, specific log/audit settings, any encryption implemented, and file integrity monitoring settings.

To finish up, it's important that these config documents are kept up to date (your QSA will ask!).  It helps to integrate these updates with the change control process you operate, but if you are implementing them just for PCI DSS then you will still have to update them when you find vulnerabilities that need to be addressed.

Sunday, 13 May 2012

PCI DSS - How to - Configuration Standards - Part 1

Writing good configuration standards is an important job for any IT team.  Everybody needs to know how that box was built, and a good configuration standard can help you piece the information together at a time of need.  Like the time you are on call, and someone is shouting at you down the phone line.  It's also a great place to document all the hardening and security config that has been done, and how everything has been locked down.  This is useful for patch management: patching services that have been disabled or removed in most cases doesn't warrant the reboot!

However, if experience has taught me anything, it is that almost everyone hates writing technical documentation, especially retrospectively.  So if you are retro-documenting a build, I do feel for you.

However!  There are compliance brownie points for you in the PCI DSS world if you have good configuration standards.  The PCI DSS expects a build that shows the system to have been hardened consistent with "industry standards", such as those from the Centre for Internet Security (CIS), NIST or SANS.

My personal favourites have always been the CIS docs; I just prefer the way they are written.  They detail everything in a fairly linear manner as well as explain what the changes you are making actually do.  Have a look on those sites and you'll get a good feel for what is required to lock down a device, if you are relatively new to this.

The next few blog posts will be about the content needed in these config documents in order to a) satisfy the PCI DSS requirements and b) still be useful.
I'll not go into the details of how you store/manage them; you can do these in Word/Excel/PDF or make use of a wiki (good if you have an environment that changes frequently).

Part two - here

Thursday, 10 May 2012

Breach Disclosure @AthCon

After being fortunate enough to have my presentation on breach disclosure selected by the AthCon CFP panel I had a fun trip over to Athens last week!
AthCon is in its third year now, and it was great to see such a large attendance despite all the talk of economic turmoil in Greece.  Arduino printed circuit boards were the badges, and one of the sponsors even hacked one together by the end of the conference to run a small version of the game Simon.

So what was I doing there?  Most of the presentations were significantly more technical than mine - talks such as finux's on IDS evasion techniques, iOS hacking, rootkits and so on.  Ian Glover from CREST spoke about the need for standardised professionalism in the security testing industry, and I found myself completely agreeing with what he said.  I was there to talk about breach disclosure.  Not reporting vulnerabilities, not putting out a security advisory, but how I think we should be thinking about security breaches.

Having seen a number of breaches as an auditor and QSA, I've seen first-hand lots of bad responses.  I've also seen businesses be reluctant to communicate about events occurring for fear of damage to their reputation.  So some of what I had to say was about thinking about the problem differently - thinking about it from the consumer's perspective.
The idea behind my presentation was really inspired by a conversation, or rather a set of questions, raised at the PCI community event in London.  One of my old colleagues was there and we were chatting about his new role, and when the microphones went open one of his new colleagues stood up and made a really good point to Visa and the SSC.  It was basically: "Why aren't all the payment card breaches made public?"  There was a lot of mumbling about NDAs, reputation and it not being appropriate, but I couldn't help thinking he'd made a really good point.  Would it really hurt if the details from a breach were made public (even anonymously), so that everyone could see what happened and learn from it?  This made me think: in the infosec community we are really good at finding vulnerabilities, exploiting them and telling people what could happen, but we're not really great at telling them how likely they are to be attacked, or how likely they are to be breached.  In my experience this leads to difficult discussions when dealing with senior executives who perhaps don't really understand the threats out there, or who want things quantified consistently with other risks.

So I started putting together some examples of how other industries do this - primarily the healthcare industry, the police, and the nuclear energy industry.  These industries all seem better than the infosec industry at sharing information in order to quantify risk.  The UK Health and Safety Executive have even gone to the trouble of writing a document on the tolerability of risk from nuclear power stations, available here.  This puts in layman's terms the risk of death from a number of causes (e.g. cancer, 1/374 people).  The thing that really interested me, though, was that human life as an asset had been assigned a monetary value.  Without getting into the morality of this, I actually thought it seemed like a good idea (for reference, it's £600,000 in the UK), as it can be used by insurers to help quantify losses in the event of various incidents.

Valuing Information Assets
This then led me on to one of the other problematic areas when dealing with information security: how do we value information assets?  This could end up being hugely subjective, dependent on the type of business one is in or the type of information.  Let's look at personal information such as a name and address.  Whilst this has some inherent value to both businesses and fraudsters, it's also not too difficult to obtain legitimately.  Banking information, medical records and other data that the UK DPA would consider sensitive are not.
One of the things I suggest in my presentation is that we should look to, or even lobby, our regulators to give some absolute minimum values for these assets.  They legislate for them and can fine people, so it doesn't feel unreasonable that they should give them a formal minimum value.  For simple maths' sake: if we gave someone's name and address an arbitrary value of 1p, but gave their medical record a value of £10, we can infer that the medical data is significantly more valuable/sensitive.  We can then start to put security expectations around certain values of data.  Do we really care if someone loses a couple of names and addresses on a piece of paper?  Maybe not - we valued the data at 2p and protected it accordingly.  But if the same two records are medical data worth £20, would someone leave a £20 note unattended on a train?  Probably not.

The ICO could set some sensible thresholds around what is acceptable to lose, and in what circumstances.  Businesses could value the data and then apply controls appropriate to the value of the asset.  Getting back on topic, breaches could also be classified based on the value of data lost or stolen.  Post-breach reviews could have consistent context applied when the auditors and forensics teams come in, and the control environment could be reviewed in the context of the value of the data.  These breaches could be disclosed anonymously to the regulator, or a body operating for the regulator, who could then openly produce statistical information on the likelihood of certain breaches occurring in certain sectors (over time).  This would give us all a better ability to beat off the FUD, and to manage our information security risks in a more quantifiable manner.
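
The arithmetic above is trivially scriptable.  A toy sketch, using only the arbitrary per-record values from the text (1p per name/address, £10 per medical record - the record-type names are my own):

```python
# Toy breach-exposure calculator using the arbitrary per-record
# values from the text: 1p per name/address, GBP 10 per medical record.
RECORD_VALUES_GBP = {"name_address": 0.01, "medical_record": 10.00}

def breach_exposure(record_counts):
    """Total illustrative value (GBP) of the records lost in a breach."""
    return sum(RECORD_VALUES_GBP[kind] * count
               for kind, count in record_counts.items())

# Two names/addresses: 2p.  Two medical records: GBP 20.
# Same record count, very different sensitivity.
print(breach_exposure({"name_address": 2}))
print(breach_exposure({"medical_record": 2}))
```

The point isn't the trivial maths, but that with regulator-set minimum values the same calculation would be consistent across businesses and across post-breach reviews.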

More soon


First post!

After much deliberating, and finally succumbing to Twitter, I've decided to start a blog.

The purpose of this is really to help answer, or at least start discussion on, the question I get asked most often: "How do we make it compliant?"  As an information security consultant, IT security auditor and PCI QSA I must hear this question on a daily basis.  So I thought I'd start to post some of the thoughts, conversations and various other info I have, for the benefit of whoever is listening.

A few words of warning, I'll probably rant, I'll probably go off at tangents and I'll probably not give you all the answers.  That being said I'll certainly try to keep this as useful as possible.