Monday 15 October 2012

Vulnerability Scanning for PCI DSS compliance - Part 2


How do you know you’re doing it right?

You need to scan all IPs allocated to systems that are in scope for assessment.  Scope of assessment is based on systems storing, processing or transmitting cardholder data, or being connected to systems handling cardholder data. For your external vulnerability scan, you must use an Approved Scanning Vendor (ASV) and scan all public-facing interface addresses of in-scope systems.

If you do not know the full extent of your scope then you need to take a step back and review your network diagrams and data flows to understand the scope of compliance requirements. If your scope has not been reduced based on the issued guidance on network segmentation or removal of cardholder data, the chances are that your entire network is in scope and all systems and interfaces must be scanned.

'Doing it right' for PCI means having a passing vulnerability scan for all in-scope systems each quarter.
For internal scans, that means achieving a 'pass' according to the risk ranking used for internal vulnerabilities, based on CVSS scoring. For external scans, it means the ASV has not identified any 'PCI Fail' vulnerabilities on your external interfaces and websites.
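
To make that pass/fail logic concrete, here is a minimal sketch in Python. It assumes scan findings arrive as simple dictionaries with a CVSS base score and an internal risk rank - your scanning tool's actual export format will differ - and uses the 4.0 CVSS threshold the ASV Program Guide applies to external scans.

def external_scan_passes(findings):
    # ASV rules: any vulnerability with a CVSS base score of 4.0 or
    # higher is an automatic 'PCI Fail' on an external scan
    return all(f["cvss"] < 4.0 for f in findings)

def internal_scan_passes(findings):
    # Internal rules: nothing ranked 'high' under your own risk ranking
    return all(f["risk_rank"] != "high" for f in findings)

# Illustrative findings only - the field names are assumptions, not a real export
findings = [
    {"title": "Self-signed SSL certificate", "cvss": 6.4, "risk_rank": "high"},
    {"title": "TCP timestamp disclosure", "cvss": 2.6, "risk_rank": "low"},
]
print(external_scan_passes(findings))  # False - one finding scores 4.0 or above
print(internal_scan_passes(findings))  # False - one finding is ranked 'high'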

Depending on your risk appetite and approach to managing security, 'doing it right' may mean something quite different. Performing more regular scans and remediation exercises on both in-scope and out-of-scope systems should mean the following:

  • You will identify vulnerabilities across all environments
  • You will identify vulnerabilities earlier
  • You will not face a backlog of vulnerabilities to remediate prior to the end of the quarter

All of the above can help to reduce the overall security risk to the business for cardholder and non-cardholder systems.

Scan profiles
A practical approach to achieving passing scans for PCI while maintaining a vulnerability scanning strategy for your entire organisation is to use scanning profiles. You could use a single profile for in-scope environments and alternate profiles for other environments, perhaps based on criticality to the business. For PCI scans, a passing scan result can be submitted for compliance quarterly; the reporting requirements depend on the categorisation of the business. For other scans, reducing risk using an internally agreed risk ranking system, documenting a baseline and reducing the risk of that baseline over time may be a sensible objective.
Using schedules, you might alter scan frequency for various environments depending on identified risk factors.
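
As an illustration of the idea (the profile names, ranges and frequencies below are invented for the example, not any particular product's configuration), profiles and schedules might be captured as simple data:

from datetime import date, timedelta

SCAN_PROFILES = {
    # PCI in-scope systems: results submitted quarterly, but scanned monthly
    "pci-in-scope":   {"targets": "10.1.0.0/24", "every_days": 30, "ranking": "cvss"},
    # Business-critical but outside PCI scope: internal risk ranking applies
    "critical-infra": {"targets": "10.2.0.0/24", "every_days": 30, "ranking": "internal"},
    # General estate: scanned less often, with the baseline tracked over time
    "general-estate": {"targets": "10.0.0.0/16", "every_days": 90, "ranking": "internal"},
}

def next_scan_due(last_scanned, profile):
    # Schedule the next scan based on the profile's frequency
    return last_scanned + timedelta(days=profile["every_days"])

print(next_scan_due(date(2012, 10, 1), SCAN_PROFILES["pci-in-scope"]))  # 2012-10-31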

False positives
The following is extracted from the ASV Program Guide issued by the PCI SSC:
The scan customer may dispute the findings in the ASV scanning report including, but not limited to:
  • Vulnerabilities that are incorrectly found (false positives)
  • Vulnerabilities that have a disputed CVSS Base score
  • Vulnerabilities for which a compensating control is in place
  • Exceptions in the report
  • Conclusions of the scan report
  • List of components designated as segmented from PCI-scope by scan customer

A false positive means that a vulnerability has been reported on a system where it could not or does not exist. For example, an IIS vulnerability identified on an Apache server, or an OpenSSH vulnerability reported where you can evidence that the vulnerability is not present in the version in use. When you have identified and proven a false positive, inform your ASV (or flag it as a false positive in your scanning tool). The false positive flag suppresses the finding for one year; at that point you'll need to dispute it as a false positive again (if it is still being identified).
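
Because that one-year expiry is easy to lose track of, something as simple as the sketch below (the finding names and dates are made up for illustration) can flag which accepted false positives need to be disputed again:

from datetime import date, timedelta

# When each false positive was last accepted (illustrative entries only)
false_positives = {
    "OpenSSH version-based finding": date(2011, 10, 1),
    "IIS finding on an Apache host": date(2012, 6, 15),
}

def due_for_renewal(today):
    # Flags accepted a year or more ago must be disputed again
    return [name for name, accepted in false_positives.items()
            if today - accepted >= timedelta(days=365)]

print(due_for_renewal(date(2012, 10, 15)))  # ['OpenSSH version-based finding']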

Compensating controls
A compensating control for a vulnerability scan is the same as any other compensating control. There must be a valid business or technical justification as to why you cannot meet the requirement, and the compensating control in place must at least satisfy the intent and rigour of the original requirement. You must also have a methodology for validating and maintaining the compensating control.  It is the responsibility of your QSA to approve any compensating controls in use as part of the assessment process.

Friday 12 October 2012

Project OSNIF

There are times when thoughts and chats at conferences start something.  I'm hoping that the Open Source Network Intrusion Framework is one of those somethings.

Intrusion detection and prevention systems are something of a way of life to my buddy Arron Finnon (aka @finux); he's a fairly regular speaker at conferences on evasion techniques, technical misgivings and the general misuse of intrusion detection/prevention systems.  After much discussion with him at @Athcon earlier this year, we agreed that something was missing from the community.  IDS/IPS have become something of a dark art in network security, with the overhead of managing endless tuning profiles, architectural issues, false positives, false negatives and claims from overzealous vendors that are very rarely borne out in deployment.  At @AthC0n I floated an idea with Arron that the other makeitcompliant bloggers and I have been discussing for a while: the need for security-vendor testing criteria that could be repeatable, automated and consistent across products, so that different vendors can be evaluated in a neutral manner rather than through a paid-for lab certification.  This led to a conversation on the sheer volume of IDS technology in the market, in no small part thanks to:

PCI DSS - 11.4 Use intrusion-detection systems, and/or intrusion-prevention systems to monitor all traffic at the perimeter of the cardholder data environment as well as at critical points inside of the cardholder data environment, and alert personnel to suspected compromises.
Keep all intrusion-detection and prevention engines, baselines, and signatures up-to-date.

IDS functionality is becoming prolific now: integrated into firewalls, as host software, as standalone infrastructure, as appliances - the list is endless.  However, in my experience I'd seen common issues in deployment.  These were not compliance failings; the PCI standard is very flexible on IDS specifics because it is such a broad technology, deployable in many ways.  Because of that breadth, confusion seems to be the norm over just where to begin with an IDS/IPS deployment.  Having seen IDS regularly deployed in scenarios where it's inspecting 1% of the throughput because the other 99% is encrypted, or where it's been deployed post-breach and then tuned to a potentially compromised environment, I am always somewhat sceptical of the real value of an IDS.  There are numerous evasion techniques readily available in Metasploit already and, as one of Arron's talks covered, there are even techniques originally designed for evasion that, due to issues with Metasploit, were run as the norm and IDSs tuned to them - meaning the original exploit technique actually goes unnoticed by the IDS!

So, over in the Athens heat, a conversation started along the lines of "wouldn't it be nice if we had an OWASP-like framework for intrusion management"...  This led to the concept that would become OSNIF, something we hope will give consistent guidance on the use, testing and deployment of IDS/IPS.

When we met up again at DeepIntel, we stumbled across Richard Perlotto from Shadowserver.org mid-way through a conversation about how to sensibly set up an IDS/IPS testing methodology that could be run consistently without just depending on the Metasploit tools.  After a short "mmmm", Richard said they might be able to help with that.  Shadowserver does lots of AV testing and scoring against malware in the wild, and has a whole stash of pcap resources that would be beneficial to run against an IDS/IPS in a similar way.  We will definitely be talking to them in future about how they can help and what we can do with some of their data.
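
To give a flavour of how pcap-based testing might work in practice: a capture with known-bad traffic can be replayed at a sensor with a tool such as tcpreplay (for example, tcpreplay --intf1=eth1 sample.pcap, where eth1 faces the IDS and sample.pcap is whatever labelled capture you are testing with), and the alerts the engine raises - or fails to raise - can then be scored against what is known to be in the capture.  That is a gross simplification of what a repeatable methodology would need, but it is the general shape of the idea.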

Arron's managed to herd the cats caught up in the initial discussion, some goals have been set and I'm quite pleased to be one of the volunteers on this project.

The initial objectives of OSNIF are as follows:
·         Develop an OSNIF Top 5.
·         Develop a “Risk Assessment” guide for deploying detection systems.
·         Develop a “Best Practices” deployment guideline.
·         Develop an open source IDS/IPS testing methodology.
·         Operate as an independent legal organisation to maintain and manage community data.

The OSNIF framework could well be the start of some common-sense open and collaborative thinking in this space.  I hope so.  Head over to osnif.org to get connected with the various mailing lists etc.

Saturday 6 October 2012

Social Media - do too many tweets make a ?

This week has had the media crawling all over Ashley Cole, the England left back, who has been heavily criticised for his use of certain language towards the FA.  Interestingly, it isn't that long ago that the Prime Minister David Cameron used the same language in relation to Twitter itself (perhaps with less venom) (http://www.youtube.com/watch?v=d3Mrfut-FSw).  Whilst the details of the two instances don't really interest me, they do show how difficult it is to control the use of social media by employees.  Equally, it shows how quickly someone's opinion or language can reflect on them, or perhaps their employer.  It is interesting to think about when the views of the individual are the views of the employer and when they are not.

I think the usefulness of a corporate Twitter profile should now be obvious, if for no other reason than to be able to clearly distinguish between "corporate messaging" and employee "chatter".  If a message comes from the company-managed Twitter account it can clearly be identified as such, as opposed to an employee saying something on their personal account.  Whether an employer chooses to take action against something an employee says is clearly their decision and may depend on the type of organisation they are.

Restricting access to social media in the enterprise does have some benefits - 

1) It might help to stop non-corporate tweets being directly linked to you, as tweets can be restricted so they don't originate from your network during working hours unless authorised (although the benefit is limited - tweets can be sent many other ways).
2) It can help stop malicious URLs propagating via retweets.  It is not uncommon for staff to follow each other, so one bad retweet could put a URL in front of a large number of staff at a company from what looks like a trusted source.

Imposing restrictions on corporate network use should be the norm, but in my experience those that tweet personally do so from their smartphones via 3G, so corporate network controls are ineffective.  Corporate tweeters tend to (or should) use a desktop/tablet application that can provide statistics, so access can be managed.  I have seen some organisations say it's acceptable to use social media on personal devices but block it completely from corporate devices.  Clearly, this could be problematic in BYOD scenarios.

If you already impose lifestyle restrictions on your employees (very common in football clubs, media/entertainment and certain social roles) then including an "acceptable social media policy" in your brand management strategy is the way forward.  This should outline your company's position on what it considers acceptable from an employee during their employment, and can then be incorporated into their employment contract.  How you choose to enforce this is a different matter and likely to be very difficult.  Managing by exception is common - censuring employees if a tweet/post/message is reported.  I've seen very few organisations actively monitoring employee Twitter activity, mainly due to the privacy concerns and the amount of resource it takes to do so.

Placing restrictions on the use of social media such as Twitter or Facebook needs to be a considered decision and should be made in line with the culture of your organisation.  If you do allow it, the risks should be considered and measures put in place to address the technical vulnerabilities.

Thursday 4 October 2012

Vulnerability Scanning for PCI DSS Compliance


Maintaining compliance with PCI DSS requirements for vulnerability scanning can present a number of challenges for companies.  This blog post is aimed at tackling those challenges and presenting ideas for managing the process, as well as evidencing it to your QSA.

The best starting point is to define the process for vulnerability scanning.  PCI DSS requires the following:
  • Scans are performed at least quarterly
  • Scans must be performed by a qualified resource for internal scans (not required to be an ASV) and by an Approved Scanning Vendor (ASV) for external scans
  • External vulnerability scans must satisfy the ASV program requirements
  • Scans must be performed after significant change in the environment
  • Scans must achieve a passing score (as defined in ASV requirements for external scans, and by having no vulnerabilities ranked as ‘high’ for internal scans)

These guidelines provide the control requirements for the vulnerability scanning process within an organisation. Establishing a process to satisfy those requirements can be difficult because of the size, complexity or architecture of the environment.

When it comes to vulnerability scanning, it is much like any other job you would rather ignore; the longer you leave it the more time, effort and interruption it causes when you want or need to be doing something else.  Scanning more regularly, such as monthly, is often touted by QSAs as being a good way to manage the process (and it is) but it will not deliver the added benefits unless configurations and security patching are managed correctly. 

In order to perform a scan, you need to have identified what systems are in scope for your internal and external scans and to have created the appropriate profiles in the scanning configuration. Implementing network segmentation can reduce the number of systems which need to be scanned.  For the ASV scan, the PCI SSC ASV Program Guide requires your ASV provider to do the following (a rough sketch of the hostname-discovery step follows the list):
  • Information about any scoping discrepancies must be indicated on the Attestation of Scan Compliance cover sheet under the heading "Number of components found by ASV but not scanned because scan customer confirmed components were out of scope". This information should NOT be factored into the compliance status
  • Include any IP address or domain that was previously provided to the ASV that has been removed at the request of the customer
  • For each domain provided, look up the IP address of the domain to determine if it was already provided by the customer
  • For each domain provided, perform a DNS forward lookup of common host-names - like "www", "mail", etc. - that were not provided by the customer
  • Identify any IPs found during MX record DNS lookup
  • Identify any IPs outside of scope reached via web redirects from in-scope web servers (this includes all forms of redirect: JavaScript, meta redirect and HTTP 30x codes)
  • Match domains found during crawling to user supplied domains to find undocumented domains belonging to the customer
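
As a rough sketch of just the common-hostname discovery step (the hostname list and domain below are illustrative, and real ASV tooling also performs the MX lookups, redirect following and crawling listed above), in Python:

import socket

COMMON_HOSTS = ["www", "mail", "ftp", "vpn", "remote"]

def discover_undeclared_hosts(domain, declared_ips):
    undeclared = {}
    for host in COMMON_HOSTS:
        fqdn = "%s.%s" % (host, domain)
        try:
            ip = socket.gethostbyname(fqdn)  # simple forward (A record) lookup
        except socket.gaierror:
            continue  # name does not resolve; nothing to report
        if ip not in declared_ips:
            undeclared[fqdn] = ip  # resolves to an IP the customer never declared
    return undeclared

# Example: compare what resolves against the customer-supplied scope
print(discover_undeclared_hosts("example.com", {"192.0.2.10", "192.0.2.11"}))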

I mention this because, of the ASVs I have worked with through my clients, only a few have demonstrated the processes identified above.  Note too that the vulnerability scanning requirements oblige the organisation, as the scan customer, to acknowledge its responsibility for managing scope and to properly inform the ASV about the environment, including any local or external load balancers that may affect the scanning process.

For internal vulnerability scanning, the chosen solution should be managed by someone who can demonstrate sufficient knowledge to perform the process.  To achieve this, the scope of scanning should be approved within the governance framework of the organisation and implemented within the chosen solution.  The reports should always be passed back to the team responsible for the process.
There are a number of tools available, from managed and hosted solutions to customer implementations and free open source tools.  I have been to clients with 100+ servers in scope for PCI DSS assessment and observed them using scanning tools which make issue review and remediation management difficult. If you want to scan large environments, investing in a tool which provides the business with usable reports is a must, as remediation in the early days of this process will likely be rather time consuming.  All of the findings within the reports should be considered for action, even the informational issues.

When assets are identified for scanning they should be assigned an owner; this is the person who investigates the remediation plan for an identified issue and raises the relevant change control documentation to ensure the fixes are implemented.  This is where the functionality of the tools selected to assist the business in maintaining compliance has to be considered. I am always amazed at how limited the functionality of some solutions is - such as only providing a PDF report or a CSV download. It just makes the job harder!  A solution that can mirror the internal governance structure of an organisation can save time, meetings and other resources just by adding some workflow.

Managing the issues identified by a scan is important, especially when a failure is noted.  Failures are, for external scans, anything with a CVSS base score of 4.0 or higher and, for internal scans, any issue ranked 'high'.  The ASV Program Guide suggests organisations attempt to address issues based on the CVSS scoring.  Any issue that implies a failure needs the following:
  • Root cause analysis (newly identified issue, security misconfiguration etc.)
  • An action plan for remediation
  • Communication and assessment of compensating controls (for external scans this is completed with the ASV)
  • Change management documentation

It is not just the changing security landscape that needs to be managed for vulnerabilities.  Changes made to the environment, such as the addition of new servers or new applications, should always result in security testing being completed.  A full vulnerability scan of internal and external changes should be completed prior to any other security testing.  This should be the case even if you are going to complete penetration testing, as it ensures the tester's time is spent on harder tasks rather than on the 'low hanging fruit'.

Where an organisation has established and embedded change management procedures and governance structures such as a Change Advisory Board, these should be closely aligned and involved in the vulnerability scanning process. The ability to manage changes in the environment securely and in line with company processes will provide key documented evidence to the QSA that robust processes are in operation.

Part two of this post will cover the following items in more detail than they have been touched on here:

  • How do you know you’re doing it right?
  • Scan profiles
  • False positives
  • Compensating controls



Wireless scanning - PCI DSS requirement 11.1


Requirement 11.1 of the PCI DSS requires scanning for rogue wireless access points. The risk identified here is that some miscreant will connect a rogue wireless access point to your environment, then sit in their car outside and sniff traffic with the goal of intercepting passwords and other sensitive information so they can steal cardholder data.  This is an understandable view, don't get me wrong, but it's a limited one.   The control is there to mitigate the risk of anyone doing this, internal or external - and it is far easier for a member of staff to gain access to the office environment and plug in a rogue device than for an external third party.

The requirement is as follows:
“11.1 Test for the presence of wireless access points and detect unauthorized wireless access points on a quarterly basis.
Note: Methods that may be used in the process include but are not limited to wireless network scans, physical/logical inspections of system components and infrastructure, network access control (NAC), or wireless IDS/IPS. Whichever methods are used, they must be sufficient to detect and identify any unauthorized devices.”

The methods above are slightly expanded from those enumerated in PCI DSS v1.2 and provide a bit more flexibility to merchants and service providers. In July 2009, the PCI SSC released an Information Supplement on PCI DSS Wireless Guidance prepared by the Wireless Special Interest Group. As that document runs to just over 30 pages, I'll try to keep this short and sweet. If you're interested, it is available here: https://www.pcisecuritystandards.org/pdfs/PCI_DSS_Wireless_Guidelines.pdf
I'm going to go through the testing procedures for this requirement one by one for completeness. However, only 11.1.b requires any real activity!

11.1.a Verify that the entity has a documented process to detect and identify wireless access points on a quarterly basis.
This is simply about incorporating the wireless scanning requirements into an appropriate policy (such as a Security Testing Policy).  A quarterly process that identifies the controls to be implemented should be documented, with the required evidence and control forms referenced.  The adequacy of the process itself is assessed under 11.1.b.

“11.1.b Verify that the methodology is adequate to detect and identify any unauthorized wireless access points, including at least the following:
 • WLAN cards inserted into system components
 • Portable wireless devices connected to system components (for example, by USB, etc.)
 • Wireless devices attached to a network port or network device”

If you use wireless devices, these should all be documented as part of the asset inventory.  A simple process to validate the status of devices on the inventory against the active devices provides a good basis for control.  Also look to any wireless systems already implemented to provide additional functionality for managing the PCI DSS requirements.

Network ports should be activated for required use only, and a check of LAN and switch ports should be performed to verify only required devices are connected. E.g. if you have 10 servers and a firewall, you should probably have 11 cables going into your switch. While I know this is horribly difficult to monitor in many server rooms that are a cabling nightmare, tidy cabling can be a separate goal in itself and will not be further discussed here!

Network Access Control can provide a simple solution to the issue, although with a heftier price tag.  Any investment should be decided based on the requirements of the business rather than solely on compliance requirements. Technology alone does not solve the problem; it must be configured and maintained.
Regular system users should not be permitted to install Plug'N'Play devices on their systems. Role-based access control is already required by the PCI DSS and this is something that is fairly easy to lock down in most environments (usually via group policy).  Only devices that require, and have been authorised for, wireless access should have wireless configuration system services available.

On top of the normal physical security controls associated with a data centre or networking cabinet, racks should be locked and an eyes-on review should verify whether additional hardware has been introduced.
In terms of tools, an option which requires only configuration and some training is the nmap scanner with the -O (OS fingerprinting) switch, which can help identify wireless access points. The traditional use of WiFi analysers such as NetStumbler or Kismet will also assist you in establishing the wireless baseline of surrounding areas.  Once the baseline is established, monitoring can be based on any identified changes.
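
For example, a sweep along the lines of nmap -O 192.168.1.0/24 (substitute your own address range) attempts OS fingerprinting across the subnet; any device fingerprinted as an embedded or wireless platform that does not appear on your asset inventory deserves investigation. Treat this as one input to the methodology rather than a complete control - fingerprinting is not always accurate.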

The requirement does not specify that all of the above need to be implemented; the list is here to provide a flavour of the measures available to demonstrate compliance.  The controls that are implemented must be effective - simply walking around once a quarter with a wireless scanner, without direction, does not provide adequate control.  It is important to note the requirement calls for a methodology.

11.1.c Verify that the documented process to identify unauthorized wireless access points is performed at least quarterly for all system components and facilities.
If the environment is split over many sites or locations, the processes must be sufficient to cover all the locations included within the scope of compliance, not just the sampled locations.  If a third-party hosting company is used as a service provider, someone must be responsible for the controls at that location.

11.1.d If automated monitoring is utilized (for example, wireless IDS/IPS, NAC, etc.), verify the configuration will generate alerts to personnel.
If you are using a Wireless IPS (WIPS), configure the settings to email or otherwise alert the appropriate security personnel if rogue access points are introduced to the environment.

11.1.e Verify the organization’s incident response plan (Requirement 12.9) includes a response in the event unauthorized wireless devices are detected.
The whole process is broken if the people that you rely on to implement the controls do not know what to do in the case of exceptions.  It is important for these controls to filter into the incident response framework.  Incident response should include a provision for the processes to be followed for alerts about possible use of rogue devices, and invocation procedures if the presence of a rogue device is confirmed.

Thursday 13 September 2012

UK Government release 10 Steps to Cyber Security advice sheet

The UK government, via CESG (the information security arm of GCHQ), has recently released a document entitled “10 Steps to Cyber Security”. The full document is available at http://www.bis.gov.uk/assets/biscore/business-sectors/docs/0-9/12-1121-10-steps-to-cyber-security-advice-sheets.pdf

The 10 areas of focus within the document are given two pages each for further review and are as follows:

Home and Mobile Working
User Education and Awareness
Incident Management
Information Risk Management Regime
Managing User Privileges
Removable Media Controls
Monitoring
Secure Configuration
Malware Protection
Network Security

Overall, it is very welcome that the government is being proactive in highlighting the online threat landscape for businesses, and references to control frameworks such as ISO 27000 are useful. On the other hand, the fact that third-party service providers and [often exploited] online interfaces are not referenced appears to be a massive oversight. Unfortunately, many of the control frameworks referenced are not easily found online. The controls will be familiar from the PCI DSS, ISO 27000, the Code of Connection, Public Sector Network and IL3 requirements - but not all of these standards are freely distributed.

Sources of training and other informational material would also be of enormous value to those perusing the document, as otherwise it appears to come to a ‘dead end’. SANS, NIST and CIS for secure system baselines, and the ‘Think Privacy’ campaign for user awareness, are examples of excellent resources. Achieving other controls through the implementation of sound and considered policies for users, passwords and audit logs can also draw on the SANS, NIST and CIS documents, as well as Microsoft and other online resources.



Tuesday 4 September 2012

Deep INTEL Day two

Another good day at DeepINTEL: a combination of talks on APTs, security intelligence gathering, social media and evasion techniques.

So if I had to pick my two favourites (other than yours, finux!) from day two, they would be:

Massive Storage - Richard Perlotto (of Shadow Server fame)

Richard's talk had tech-awesomeness stamped right through it.  The Shadowserver Foundation does some really cool analysis and intelligence gathering.  Have a look at their site to get a good idea - I'll never do it justice here: http://www.shadowserver.org/wiki/.  Richard went into the details of how they handle the sheer volume of data they have to work with.  We're talking petabyte storage requirements without EVAs or SANs; relational databases are out, Hadoop HDFS and Cassandra are in, plus some custom software to do even more index and data management.  Without doubt my favourite slide was the server density pic, showing servers mounted vertically rather than horizontally as this allows more to be squeezed into a rack.  The shelves were straining and the lights were flashing.  Couldn't look at it without wanting one!

Facebook and you - Jonathon Deutsch

Here's Johnny!  A nicely delivered presentation showing how intelligence gathering can be done by the various government agencies by crawling through Facebook profiles and the default settings for friend lists.  The concept of Facebook-hardening was interesting, although quite counter to what Facebook is all about.  There were some good examples of where certain nation states had crafted fake profiles to try to get intel on military personnel.


The day was stacked with discussion on mass malware, advanced persistent threats, and how to respond to them.  Add in some antivirus evasion and DNS tunnelling examples and the audience were well engaged.

Hope I get to speak at a Deepsec event again, the guys run a good con.  Everything ran really smoothly, scheduling was kept on top of and the venue was top notch.  Highly recommended.


Deep INTEL - Day one

The guys from DeepSec have done a great job with the DeepINTEL conference.  Well organised, great location and a good speaker line up.  They kindly let me talk about the importance of breach disclosure, so I gave an updated version of the Athcon talk incorporating some of the feedback and post con chatter.

Quick summary of my favourite presentations from day one.

Wargames in the fifth domain - Karin Kosina

Karin gave a really great presentation on the concept and notions of "cyberwar" - or rather, what it isn't.  When the slides go out I highly recommend a read through, as it was well delivered and referenced, covering the various international treaties and conventions on what actually constitutes war and which acts of violence constitute force.

I think the biggest take away point for me from Karin's talk was that most of the rhetoric on cyber war actually describes electronic espionage (I'm going to stop saying cyber now!). Very few instances of damage have occurred that would constitute violence in order for the act to be considered war.

Hopefully I'll manage to get her to co-author the piece I'm writing on collateral damage from electronic espionage.


Sexy Defence - Maximising the home field advantage - Iftach Ian Amit

Some really interesting content from Ian on establishing a culture of counter-intelligence, on the legal extent of certain counter-ops, and on the benefits of sensible risk-based pen-testing.  There was a good demo on poisoning malware to give it an easily detectable signature, which helps verify that your source of threat intelligence is accurate and also enables the malware to be blocked with a custom IDS signature.  I think the BSides Dallas crew might have pinched Ian's subject, as the theme for their CFP is just called "sexy defence"!

Picking two favourites from the day two line-up could be tricky, as there do appear to be some good subjects on the roster.

Tuesday 21 August 2012

Some thoughts on PTPE


In recent years there has been much discussion about the use of Point-To-Point Encryption (PTPE or P2PE) as a method for minimising the scope of compliance requirements for merchants. Effectively this works as follows: 
- Merchant has a Point of Interaction such as a PIN Entry Device which encrypts cardholder data using keys to which the merchant has no access.
- Encrypted cardholder data is transmitted from the merchant environment to the service provider’s environment
- Encrypted cardholder data is decrypted by the service provider using a HSM and sent for authorisation
- Authorisation message is sent back to the merchant

In the above process, the merchant has no access to cardholder data, or to any of the keys used for the encryption, at any time.   This is a hugely important part of the process, as a number of discussions were had about using other types of encryption for securing the data.  In a PTPE environment the confidentiality of the card data is maintained using encryption, and responsibility for the key management is transferred to a third party.   According to the current release from the PCI Council, only a hardware-to-hardware implementation of the above solution can achieve scope reduction. This means that all encryption takes place within a PTS-validated terminal approved for the SRED (Secure Read and Exchange of Data) and, if using IP communications, Open Protocols modules, and all decryption takes place within a Hardware Security Module (HSM) in the service provider’s environment.

I’ll go further into the P2PE requirements overview in a moment but it’s worth pointing something out here.

A service provider can provide a solution to a merchant whereby cardholder data is encrypted in the terminal and decrypted in the service provider’s environment; the merchant and service provider can each be PCI DSS compliant without the service being validated and certified as a PTPE solution. The benefit of a validated PTPE solution is that a merchant can effectively write off their face-to-face channel’s PCI scope, and much of the risk, by relying on the service provider’s validated solution. If a merchant uses a PTPE solution which is not listed with the SSC, that merchant will need to complete an applicable SAQ (depending on the environment) or a RoC (depending on transaction volumes).

The PTPE requirements are split across 6 domains. In a nutshell, these are as follows:
Domain 1: the solution must use a POI validated to PTS (PIN Transaction Security) 2.0 or 3.0, with the SRED module validated and enabled, and with Open Protocols listed and enabled if the terminal uses IP.

Domain 2: any application on the PTS PED (PIN Entry Device) must be validated by a PA-QSA (PTPE). Such validation must be performed for applications which do and which do not handle cardholder data.

Domain 3: the POI (Point Of Interaction) device must be secured at all times. This includes inventories, physical security and transport controls.

Domain 4: this domain covers segmentation between encryption and decryption environments and is not currently in scope, as the standard is hardware/hardware only. It will be elaborated upon for hybrid environments if and when such solutions are approved.

Domain 5: the decryption environment must be PCI DSS compliant and must use secured and approved decryption devices. Physical security and inventories are again prominent here.

Domain 6: management of encryption keys is detailed here and is far more complete than you may be familiar with from requirement 3 of the PCI DSS.
Domain 6 also contains 2 annexes:

Annex A: Cryptographic Key Operations for Symmetric-Key Distribution using Asymmetric Techniques

Annex B: Cryptographic Key Operations for Key-Injection Facilities


The PCI SSC has clearly put a lot of work into this standard and there is pretty much no room for interpretation. The exams are a test of knowledge much more so than the standard QSA exams. While there are no validated PTPE solutions as yet, this will be an interesting area to watch develop.
My considerations are:

Service providers able to offer this service may become an oligopoly (at least in Europe). I’ve heard rumours of terminal providers restricting the ability to install custom applications.

However, this could also become a very disruptive technology in the market, allowing smaller terminal manufacturers to team up with smaller payment processors and enter the face to face payment market.

There is an opt-out clause available to merchants whereby they can disable the encryption mechanism in the PTPE solution. This is effectively a ‘KILL’ switch and requires the merchant to accept responsibility for using alternative controls and/or processing methods. I imagine the problem threshold to effect this should be pretty high and tightly controlled! There’s not much detail as to how this would work in practice.


Tuesday 17 July 2012

Information Security, Reputation and FUD.

Often I hear sales people use brand and reputation damage to secure information security investment.  Typically done without example, real context or evidence, this is a shameless use of FUD. For the uninitiated, that's Fear, Uncertainty & Doubt.


FUD is the tool of choice for bad sales people in the information security world: "you might be subject to this, This or even THIS!!".  If you hear these cries you are probably talking to a bad sales person.  Honest consultants will help you manage and understand information security risks.  They may even get to the point of telling you that some risks can't be quantified using traditional methods, and then frame advice using good practice references.  Sensible historical evidence shows how breaches have occurred, and we need to learn lessons from them by being open about their cause, target and outcome.  Too many people are suffering preventable breaches at the moment.

However, focusing only on damage to your reputation may lead you astray.  Reputational value is very difficult to quantify in real terms.   Information security professionals should deal in risks based on facts, and in how those risks can really impact a business.  I wanted to explore a couple of examples after Dark Reading put out an article recently on the "6 Biggest Breaches of 2012".  The list looks skewed towards the US, but I'm going to pick out three of the commercial organisations on it.

All are arguably big names in their respective sectors.  All have suffered breaches that have been publicly announced one way or another.  All are still trading.  Let's look at some financial indicators:-

Global Payments (data from Google Finance stock feeds)

If you were shown the YTD graph for Global Payments (NYSE: GPN) you could be forgiven for thinking that "the breach" caused an ever-increasing share price to drop suddenly and dramatically (in reality the graph scale makes this look worse).   However, a full year of their NYSE price shows that in July 2011 they also had a big share price dip and that the price had been fairly volatile until recent months.  But share price != reputation.  Let's look at the other key stats and ratios that are considered: in 2012 Global Payments' mean quarter-on-quarter sales estimates and estimated earnings per share were both up compared with 2011.

So is it fair to say their reputation was damaged by a security breach?  Perhaps - there were a number of articles in the press and a lot of fairly scathing commentary. In reality, though, whilst their share price took a bit of a knock, it was small, and GPN had seen similar price drops before.  Key business indicators seem to show a company that is holding up fairly well in tough economic times.  Most CEO incentives are geared around financial performance, not around what is reported about the company.

Zappos.com Inc is privately owned, so digging up financial data again isn't as straightforward.  They are apparently the number one seller of shoes online.  Gross revenues appear to be circa $1bn with an approximate margin of 10% (unverified!).  So we would perhaps expect Zappos to be über brand-conscious and to take intellectual property management seriously as part of their information security processes...  Well, maybe they do - I don't know, I've never worked with them.  A good friend of mine worked in the fashion industry as an information security manager and told me that culturally that's just not how the industry works.  Lots of designers see the design IP as "theirs" until it's actually made into a product that is sold.  Designers flit between companies with their ideas and are given free rein to do as they please.  Anything that seems to put restrictions on them is met by a huge barrage of reasons why not to do it, and by senior management "accepting the risk".  His organisation put a lot of effort into shutting down counterfeiters instead.

Zappos is a retailer though, perhaps run by people who are only interested in selling product and making margin.  Their senior management probably isn't (or wasn't) going to be too interested in information risk management practices.  One would expect their information security to be focused on the risks that will really affect overall margin, logistics and the ability to actually deliver the product and customer service expected.  A denial of service, or a breach that took down call centres or heavily disrupted customer service, would likely get people's attention.  Trying to convince someone in this space that information security protects reputation misses the point: their reputation isn't made from security, it's made from good service.



And then there was LinkedIn... You might even be reading this because I posted it on my LinkedIn status! LinkedIn suffered a breach that was widely seen as embarrassing within the information security community. However, LinkedIn is still online, still traded and still doing business. It will be interesting to see if the breach is ever properly disclosed and if anyone discontinues the service. That being said, the LinkedIn service has value, and its reputation is built on other things.




Looking at the LinkedIn one-year stock graph doesn't really show us much either. Hand on heart, you can't look at that graph and say "ta da! that's the breach announcement day".

LinkedIn shows the same sort of key-stat picture as Global Payments: revenue estimates are all higher. Although it is interesting that LinkedIn's share price is more than double Global Payments' despite LinkedIn generating lower revenue.

So, next time you hear someone pulling out the FUD gun trying to tell you "it's all about reputation" - it's fairly clear it's all about them not getting the facts straight.

Security breaches do damage the reputation of companies, but that's not what it's all about. Those companies hold data which affects others - card numbers, personal data etc. - so both the business and the consumer are affected. Businesses can and do recover, in some cases with limited share price damage. Consumers can and do recover too, though they are left having to cancel cards, argue with banks, or check credit reference agencies in case their identity has been stolen.

SME companies can suffer more acutely: having typically thrown the problem at IT, they are suddenly hit with unbudgeted consultancy, audit, and a lot of new processes to implement and technology to buy. A breach might not leave their reputation in tatters, but it can come as a financial and operational burden. Suffering a breach without the tools or talent to deal with it can be an expensive exercise - another reason for having information security management with some degree of strategic oversight.

Andy

Friday 22 June 2012

Advice for Parents - Photos in school

Being married to a teacher and having a number of teachers in my family, I'm sad to say that I've seen a number of overzealous headteachers, particularly in primary schools, who get a little carried away quoting the Data Protection Act at parents - almost always incorrectly.
It normally happens like this :-
Parent whips out a camera to take a picture of little Johnny as he crosses the finish line at the school sports day.  Headteacher comes running over citing all sorts of data-protection-related reasons why Parent can't take a photo of little Johnny.  Parent gets annoyed and frustrated, upset that the Headteacher is denying them the opportunity to take photos of their child.

This kind of frustration is shared by many parents who unknowingly forego the opportunity to take photos/videos because of an over-aggressive and ignorant application of the Data Protection Act.

So... what does the Information Commissioner's Office (ICO) actually have to say on the matter?  Well, as you might expect, they have issued a specific guidance note on just this topic - a note that, on occasion, I have actually sent to headteachers so they are aware of what the rules are.

"Recommended Good Practice
The Data Protection Act is unlikely to apply in many cases where photographs are taken in schools and other educational institutions. Fear of breaching the provisions of the Act should not be wrongly used to stop people taking photographs or videos which provide many with much pleasure.
Where the Act does apply, a common sense approach suggests that if the photographer asks for permission to take a photograph, this will usually be enough to ensure compliance.

Photos taken for official school use may be covered by the Act and pupils and students should be advised why they are being taken.

Photos taken purely for personal use are exempt from the Act."

Yes, you read that correctly.  Photos taken purely for personal use are exempt from the Act.  What typically happens is that a head assumes that because THE SCHOOL(!) may have data protection issues to deal with when taking photos of pupils and students, the same rules must apply to parents taking photos for personal use.

The ICO guidance even gives specific examples for complete clarity.

"Examples
Personal use:

A parent takes a photograph of their child and some friends taking part in the school Sports Day to be put in the family photo album. These images are for personal use and the Data Protection Act does not apply.

Grandparents are invited to the school nativity play and wish to video it. These images are for personal use and the Data Protection Act does not apply.


Official school use:

Photographs of pupils or students are taken for building passes. These images are likely to be stored electronically with other personal data and the terms of the Act will apply.

A small group of pupils are photographed during a science lesson and the photo is to be used in the school prospectus. This will be personal data but will not breach the Act as long as the children and/or their guardians are aware this is happening and the context in which the photo will be used.

Media use:

A photograph is taken by a local newspaper of a school awards ceremony. As long as the school has agreed to this, and the children and/or their guardians are aware that photographs of those attending the ceremony may appear in the newspaper, this will not breach the Act
"

The only way I can see a school enforcing a ban on taking photos at a school event is if the parents have somehow signed up to some sort of contract that prohibits it.  That is different (and I am not a lawyer).  If someone starts swinging the Data Protection Act at you and tells you to switch off your camera - well, at least you know where you stand now.

The guidance paper quoted can be downloaded from the ICO here.





Tuesday 19 June 2012

Documentation in a Business Environment


Documentation is just as important as correctly securing a technology; without it, the details stay inside people's heads, where they can be forgotten or can easily leave the organisation.  Let's look at how to do it so that it is both useful and meets our compliance or regulatory requirements.

Before looking any further it is important to understand the different types of documentation, so you can understand what a standard such as PCI DSS actually requires from you:

Policy: Policy is management’s way to direct staff to do something, or to set expectations for something such as a certain level of control.    An information security policy is the starting point of an organisations commitment to managing security risks. 

Procedure: a procedure is a detailed description of the steps necessary to perform a specific operation.  Put another way, a procedure is how you as a business are going to implement the desired level of control - the business response to the management objective set in a policy.  A procedure can also describe the operation of a specific control; in some cases a specific procedure details the correct way a set of actions should be performed.

Standard: a standard is a set of rules or specifications that, when implemented together, define a software or hardware asset.  The make it compliant blog has already completed a guide on defining configuration standards; this forms a really good basis for a standards document (but I may be biased!).  A key point to note with standards is that there should be a defined exception handling and authorisation process, because we all know it is not always possible to implement every detail all the time.

Guidelines: these are suggested actions or recommendations related to an area of a policy that may supplement a procedure.  Unlike a standard, implementation of guidelines may be at the reader's discretion.  For example, a policy statement may state that employees are required to wear identification at all times; a guideline may outline that acceptable ways of wearing it are on a neck lanyard or in a pass holder, often directing the user where to request or find supplies of these.

Form: a policy may state a requirement for records of control implementation to be maintained; the best way to record this is through the completion of a form.  A procedure would define the level of detail required within a form and the management authorisations to be captured.  Forms can be paper-based or electronic, but they are useless if they are not maintained.

Presenting an auditor or assessor with a good set of structured documentation is an excellent way to start presenting your business as meeting the spirit and intent of many compliance and certification requirements.  If your documentation is a mess, one's first impression is that your environment is going to reflect this.
Some believe that a 100-page policy document is not a good move; others will argue that if it is the one source of information, people always know where to look.  That debate is never going to be resolved, certainly not here.  The truth is it doesn't really matter, as long as the target audience can navigate and understand the document - and the key to this is referencing.

A policy document should identify supporting procedure documentation when required; a procedure should identify the applicable standards, and so on.  If you have a 100-page policy document you should reference procedures as you go.  If you have independent documentation, then these can be referenced at the end.
Assigning an owner to a document is a requirement to show that controls are owned; this creates accountability within the business.  Controls are only ever implemented when people are informed and aware of their responsibilities.  This is why hierarchy is important in documentation, and why assignment of responsibilities starts at the executive level: tone at the top is important to ensure that staff at all levels understand the business requirements, especially for security.  A simple responsibilities matrix across the documentation of a business can be an excellent tool to help capture and define accountability and responsibility.

Make it clear who the target audience for the document is; make sure it is available to them, that they know where to find it and that they can reference it easily.  Producing documentation that is applicable to the staff and can be maintained by them is important.  Documentation for compliance is not a one-off exercise; it must be demonstrably maintained during the year.

Documentation should also identify the expected frequency of the control’s operation and associated reporting criteria.  The reporting criteria should include metrics that allow the effectiveness of the control(s) to be measured.  Having good management information about compliance before assessments ensures that there are no surprises.  If there has been notification about control failure or a noted issue, management can prepare a response. 

Management of exceptions is especially important when dealing with a control-based assessment such as PCI DSS; if a control is not in place, non-compliance must be noted.  The PCI DSS allows compensating controls to be used in areas of non-compliance; however, these are not just documentation. The compensating controls should produce evidence to show that you have been managing the risk, and will continue to do so once the assessment has finished.

Documentation should also identify where management information on controls is reported.  Not all information is required at executive level; to ensure effective sponsorship of and engagement in information security, only relevant information should be provided.  Control failure tolerances should be defined, as these too can affect reporting lines.  Some people only need to know about failure.  The most important part is that someone knows and is taking the necessary action.  It is the control owner's responsibility to delegate this, either to individual functions or to the governance structures of the business, such as a security committee or risk committee.

Monday 11 June 2012

Restricting root shell and root user access through sudo


One of the issues I’ve encountered a number of times in assessments of Linux and AIX environments is the provision of excessive permissions using sudo. This article is an attempt to highlight those issues and provide some guidance as to practical resolution.

It is typical in a secured Windows environment that the administrator username is not used for day-to-day business and that users who require elevated privileges are members of the “Domain Admins” group.  The generic Windows administrator account would be renamed and given a randomised, long and complex password, which would then be physically secured and access to it restricted.  In Windows, audit trails can be maintained against each user, and users do not execute commands as other users.  This is somewhat different in a Linux environment.

In Linux, it has become standard to use sudo (“substitute user and do”) to run privileged commands. A default sudo configuration generally looks like this (example from CentOS):

%wheel ALL = (ALL) ALL

In Red Hat and CentOS, members of the wheel group are granted full sudo privileges (the % prefix denotes a group). In Ubuntu or Debian this would be the members of the sudo or admin group.

The above means that a user who is a member of the wheel group can execute ALL commands as ALL users from ALL terminals. In other words, a member of the wheel group can masquerade as other users or can drop to a root shell, and no longer has a full audit trail against them.

Users often drop to a root shell to avoid typing sudo before every command. Dropping to a root shell is usually done using su -, sudo -i, sudo -s, sudo bash etc. In order to prevent sudoers from dropping to a root shell, the shell commands can be removed from the executable files available to the users. This can be done by editing the sudoers file (via visudo) as follows:

Cmnd_Alias SHELLS = /usr/bin/sh, /usr/bin/csh, /usr/bin/ksh, /usr/local/bin/tcsh, /usr/bin/rsh, /usr/local/bin/zsh
Cmnd_Alias SU = /usr/bin/su

%wheel ALL = (ALL) ALL, !SHELLS, !SU

In the above, members of the wheel group can execute all commands as all users except those listed in the SHELLS and SU aliases. Sometimes it is necessary for users to drop to a shell when performing administrative functions. Using the above, you could have a configuration in which one or two users in an admin group retain full root ability while the other administrators get the restricted entry:

%admin ALL = (ALL) ALL
%wheel ALL = (ALL) ALL, !SHELLS, !SU

Limit access to the admin group in the same way you might limit access to the administrator account in Windows.
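
Two practical notes if you adopt this approach: visudo -c will syntax-check the sudoers file before you depend on it, and sudo -l -U <username> (run as root) lists exactly what a given user may run, which is a quick way to confirm the restrictions behave as intended. Be aware, too, that command negation in sudoers is not watertight - a user who can copy a shell binary to a path not listed in SHELLS can still run it - so treat this as an audit-trail aid and a deterrent rather than a hard technical control.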

Saturday 2 June 2012

Valuing data - should we have a minimum value?

How much is your data worth?  Do you know, and how do you work it out?
One of the points I tried to make during my breach disclosure presentation at AthCon'12 was the need for some regulated standard for the value of personal data.  I wanted to stress the importance of setting a value for data, so that the value can be used to help estimate how much protection it needs.


For example - if you are the proud owner of a 1961 E-type Jag, a quick shufti on Autotrader will tell you that these are valued at around £129,000, and a '72 model around £100k less at £27,000.
Ok, where am I going with this little detour into classic cars?  Within a few clicks of a mouse I've got a rough idea of what my asset is worth.  If I wanted to, I could fill out the forms on one of the many comparison sites and get a quote for car insurance; a few clicks later and I know how much a third party will charge me to accept the risk of damage, theft, fire and so on...


If only information security were that simple, but it isn't.  The threats we have to manage change daily, and risk mitigation is complex and often a seemingly unachievable, endless battle.  The risk-focused CISO could easily be forgiven for finding themselves in a spin.  Buzzwords are aplenty, opinions vary, and technology is only half the battle! 
To make things worse, it is hard to value data.  As is often said, you wouldn't spend £100 to protect an asset valued at £1.  But how do we know what data is worth?  The problem with data is that it's worth different things to different people; we can't just use a "market" valuation in the same way we can with the great E-type.  Your customer data may be hugely valuable to you in some instances and of no value to you in others.  However, it is always of value to the customer in terms of their privacy, and it is the regulator's responsibility to uphold that.  Even if the customer doesn't care, the regulator is responsible for making sure that certain principles are adhered to.  We know personal data is worth less than sensitive data in respect of the Data Protection Act, and there is a maximum fine of £500,000 from the ICO.  Sometimes it feels like that is about as much as we have to work with.  No comparison sites, no defined minimum value.  Confused?(.com)


One of the reasons I think the PCI DSS got traction in its early days was that the data was given a value: $25 per card, if memory serves me correctly, based on the cost of re-issuing a card.  Couple this with the cost of any fraud committed, and then perhaps a fine for non-compliance, and the value can be quantified.  If you store 100,000 card numbers, that data (then) would be worth at least $2.5m.  From here the business case for a security standard is born.  


Personal data, on the other hand, seems to be something of a quandary.  If you review the monetary penalty notices issued by the ICO you will start to see what I mean.
Recently a fine of £325,000 was issued to Brighton and Sussex University Hospitals for not controlling the destruction of highly sensitive data properly.  Circa 70,000 records containing highly sensitive medical data were lost, which equates to roughly £4.64 per record.  That seems very low for information relating to an individual's sexual health and preferences.


A £90,000 fine was issued to Central London Community Healthcare NHS Trust for the loss and inappropriate transmission of 59 faxes containing information relating to medical diagnoses and palliative care (around £1,525 per fax).


Looking at less sensitive data is even less helpful.  When a gambling industry worker sold over 65,000 records from an online bingo company for in the region of £25,000 (according to the ICO), he received a conditional discharge and was ordered to pay £1,700 as well as £830.80 in costs.


The case of the bingo bandit highlights a significant problem.  The organisation that had the data stolen couldn't, or didn't, identify the perpetrator, and the punishment levied against the seller wasn't really proportionate to the value the data had to him.


If the ICO were able to set a minimum value for a personal record and a minimum value for a sensitive record, they could then set expectations of what reasonable controls are for that data, based on the data's minimum value.  In the event of a loss of that data, the ICO could say: X records multiplied by value A is my starting point.  Then apply a distress factor, and perhaps a responsibility factor (how well controlled the data was, and so on); this would give people an indicator.  The factors in the multiplier could be used to dictate behaviours.  For example, organisations that come forward openly and demonstrate transparency should be rewarded (in my opinion), as this allows the situation to be dealt with swiftly and with the interests of the data subject at the centre.
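To illustrate with purely hypothetical numbers (none of these figures come from any ICO guidance): suppose a sensitive record were given a minimum value of £5.  Losing 70,000 such records would give a starting point of 70,000 × £5 = £350,000.  A distress factor of 1.5 for particularly sensitive content would raise that to £525,000, while a responsibility factor of 0.5 for an organisation that self-reported promptly and cooperated fully would bring it back down to £262,500.  The arithmetic is trivial; the point is that the inputs would be published, predictable and capable of shaping behaviour before a breach ever happens.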

Sunday 27 May 2012

BYOD - Bring Your Own Disaster!

BYOD - perhaps we should call it "Bring Your Own Disaster!"


There have been lots of good reasons for not letting people connect whatever they like to the enterprise network.  These have not changed; in fact there are more threats to the corporate computing environment than ever before, with ever simpler attack vectors.  However, the BYOD brigade have charged on.  Want an iPad for email?  Fine, sign off the risk.  Non-corporate laptop?  Sign here ______.  What concerns me is not executives signing off risk (that, of course, is their decision); it's whether they understand the risk in the first instance.


It seems strange, at a time when information security is apparently so high on the corporate agenda, that BYOD has as much traction as it does.  Does this show us that non-infosec executive management still have the "attacks come from the outside" mindset?  Successful breaches almost always seem to involve some form of end-user device being used to attack the rest of the network.  Such devices can be an easy target, and one that is authorised to access other services.  Why try to attack the data at rest when you can attack a vulnerable PC or user and steal the authorised credentials?  This can happen even without BYOD policies!  


Whose asset is it anyway?
The question of who owns the underlying asset is a really important one.  If an employee owns the device, the enterprise can't really tell them what to do with it, which makes data management policies almost impossible to police.  The data belongs to the enterprise, the device to the staff member, and the staff member is almost certainly going to be able to do whatever they like with that device.  If devices are lost or stolen, and the data is then compromised and used fraudulently, I can't see too many judges looking on fondly: "So, you let them copy this information onto their personal tablet/laptop/smartphone?"  I'm not a lawyer, but I could see this becoming a legal minefield around duty of care.


No real cost saving.
As IBM have discovered, their BYOD initiative hasn't seen the cost savings expected. I suspect this is down to the well-known fact that the cost of tin is always outweighed by support costs: non-standard builds and non-standard software and hardware are a support team's worst nightmare.
If you add security requirements to this, suddenly the network architecture has to be much more defensive from scratch.  Really, all endpoint devices should be considered untrusted; perhaps even the network itself can't be trusted, so more and more checks and policy must be applied.  IT and infosec can't even assume that they'll be able to install products on the devices, so they have to look for more and more agentless technologies; endpoint analysis is required to see what on earth has been plugged in; network access control will have to be deployed to quarantine unknown or suspicious devices; all internal systems will require additional hardening and firewalling; there are multiple patch regimes to adhere to; and on, and on, and on.


To have a successful BYOD roll-out requires an incredibly well locked-down, hardened architecture with extensive internal firewalling, the like of which many organisations don't operate at the moment.  This sort of approach slows down delivery of services to "the business" because it simply takes longer.  Not helpful when the IT function is still trying to prove it is relevant while senior execs hear they can just move everything to the cloud.


Imagine malware that can detect whether it is plugged into the home network or the office one and operate intelligently based on that decision.  While at work, it sniffs credentials, hoovers up data and does as much as it can with as much stealth as it can muster.  When it realises it has been plugged into the home network, with a nice big internet connection and no real firewall or IDS, it starts to transfer that data to the criminals, unbeknownst to the entity it was stolen from...


The BYOD culture reminds me of those days in primary school when the kids all bring a toy in to play with.  Everyone is impressed by the cool toys on display, and the older kids get to show off their latest action figures and video games, but not a great deal of work gets done by anyone.  


BYOD - here to stay but probably for all the wrong reasons.

Thursday 24 May 2012

Are you ready for the Olympics?

The Olympics are coming. Are you ready?


Now, I'm not talking about your 100m personal best or whether you are a medal contender.  Unfortunately, I'm talking about the heightened risk of cyber security incidents.  During the Beijing Olympics there were reportedly 12 million cyber security incidents, and we should use that as the benchmark for our risk management for the London 2012 games.


The London Olympics has the potential to be an incredible event, attracting people from all over the world to the UK.  With this, unfortunately, comes a heightened level of risk.  UK.gov is already planning to fly combat aircraft in London airspace and is clearly concerned about the risk of a terrorist attack, as well as cyber attack.  UK.gov is working on the assumption that the threat level will be severe, but with a focus on the games taking place come what may.


With a potentially reduced workforce it will be even more important to ensure that information security controls don't slip.  Olympic phishing attacks are likely to be prevalent, no doubt offering access to events or tickets; these could be laced with malware or end up compromising sensitive information from your staff or customers.


Sensible steps to take -


Ensure that you have cover for staff responsible for authorising changes to IT infrastructure, and be prepared to limit the number of IT changes during the games period.


Compliance controls - if you have recurring compliance controls that are likely to fall due within the games period, ensure someone is specifically tasked with staying on top of them and has a deputy.  External and internal vulnerability scans, patching and AV should all be kept up to date, particularly if you are subject to PCI DSS.  Resist the temptation to open up your outbound internet access so that staff can access streaming sites from their PCs; this can make matters worse if you get hit by some rogue malware.  A couple of TVs in the office may be a better solution - see below.

Plan for multiple types of incident, and ensure you have contingency for the assets that may be affected (staff, internet, telecoms, IT, etc.).  Ensure that all staff know how to respond and are empowered to do so.  Consider adding social media engagement to your incident response plan; it can be a good way to get a message out en masse, quickly, from and to multiple types of device.  Neira Jones of Barclaycard has a good blog post on that subject here.  


For those taking card payments while serving the tourist community attending the games, there is probably going to be an increased risk of payment fraud.  Cards from various parts of the world that might normally be declined are likely to be in use by both tourists and fraudsters.  Make sure your staff are kept up to date, and take additional steps if necessary to verify the cardholder's identity.  Have a quick chat with your acquiring bank to see if there is any advice they can offer and to understand what your responsibilities are.


Engage employees and customers!  While this might seem slightly off-topic for an infosec post, I'd suggest finding a way to get your staff engaged in the Olympic celebrations while at work.  It is highly likely that a number of people will have taken annual leave, and unauthorised absence may be higher than usual.  A couple of live TV feeds and positive acceptance that the games are going on might be enough to stop the odd straggler from taking an unauthorised absence when there is a big event on.  Travel in London is likely to be slower and busier than normal, so expect more remote-working requests and delays in people getting to the office.


As a real example, I was working at a client site in Portugal during the 2010 World Cup when Portugal beat North Korea 7-0.  Anyone who wanted to watch the game was given a free pass to do so by the operations manager, who had arranged for a TV to be set up in the boardroom and a stack of pizza for everyone.  She had a full office every day I was there.  Contractors were invited, as were clients, and everyone enjoyed the atmosphere.


For those of you looking to be a little more proactive, now is the time to be reviewing information security policies and procedures, updating risk assessments and incident response plans, and ensuring you have up-to-date contacts with suppliers, third parties and any contractors.  They should be thinking about this too. 




Still 64 days to go ......


Andy