Monday 15 October 2012

Vulnerability Scanning for PCI DSS compliance - Part 2


How do you know you’re doing it right?

You need to scan all IPs allocated to systems that are in scope for assessment.  Scope of assessment is based on systems storing, processing or transmitting cardholder data, or systems connected to systems handling cardholder data. For your external vulnerability scan, you must use an Approved Scanning Vendor (ASV) and scan all public-facing interface addresses of in-scope systems.

If you do not know the full extent of your scope then you need to take a step back and review your network diagrams and data flows to understand the scope of compliance requirements. If your scope has not been reduced based on the issued guidance on network segmentation or removal of cardholder data, the chances are that your entire network is in scope and all systems and interfaces must be scanned.

'Doing it right' for PCI means achieving a passing vulnerability scan for all in-scope systems each quarter.
For internal scans, this means a 'pass' according to the risk ranking used for internal vulnerabilities, based on CVSS scoring. For external scans, it means the ASV has not identified any 'PCI Fail' vulnerabilities on your external interfaces and websites.
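To make this concrete, here is a minimal sketch (purely illustrative Python) of the pass criteria described above. The 4.0 CVSS failure threshold for external scans comes from the ASV Program Guide (discussed later in this series); the internal 'high' ranking is whatever your documented risk ranking assigns.

def scan_passes(findings, external=False):
    # findings: a list of dicts such as {"cvss": 7.5, "rank": "high"}
    if external:
        # ASV scans: a CVSS base score of 4.0 or above is a 'PCI Fail'
        return all(f["cvss"] < 4.0 for f in findings)
    # internal scans: no findings ranked 'high' under your risk ranking
    return all(f["rank"] != "high" for f in findings)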

Depending on your risk appetite and how you manage security, 'doing it right' may mean something quite different. Performing more regular scans and remediation exercises on both in-scope and out-of-scope systems should mean the following:

  • You will identify vulnerabilities across all environments
  • You will identify vulnerabilities earlier
  • You will not face a backlog of vulnerabilities to remediate prior to the end of the quarter

All of the above can help to reduce the overall security risk to the business for cardholder and non-cardholder systems.

Scan profiles
A practical approach to achieving passing scans for PCI while maintaining a vulnerability scanning strategy for your entire organisation is to use scanning profiles. You could use a single profile for in-scope environments and alternative profiles for other environments, perhaps based on criticality to the business. For PCI scans, a passing scan result can be submitted for compliance quarterly; reporting requirements are based on the categorisation of the business. For other scans, reducing risk using an internally agreed risk ranking system, documenting a baseline and reducing the risk of that baseline over time may be a sensible objective.
Using schedules, you might alter scan frequency for various environments depending on identified risk factors, as in the sketch below.
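As a sketch of how the profile/schedule idea might be recorded, assuming a scanner that can be driven from configuration like this; the names, ranges and frequencies are illustrative, not taken from any particular product:

SCAN_PROFILES = {
    # in-scope PCI systems: scanned monthly, quarterly result submitted
    "pci-in-scope": {"targets": ["10.1.0.0/24"],
                     "frequency_days": 30,
                     "pass_criteria": "no CVSS >= 4.0, no 'high' findings"},
    # business-critical but out of PCI scope: internally agreed ranking
    "business-critical": {"targets": ["10.2.0.0/24"],
                          "frequency_days": 30,
                          "pass_criteria": "internal risk ranking"},
    # everything else: baseline documented, risk reduced over time
    "general": {"targets": ["10.3.0.0/16"],
                "frequency_days": 90,
                "pass_criteria": "no regression from baseline"},
}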

False positives
The following is extracted from the PCI SSC-issued ASV Program Guide:
The scan customer may dispute the findings in the ASV scanning report including, but not limited to:
  • Vulnerabilities that are incorrectly found (false positives)
  • Vulnerabilities that have a disputed CVSS Base score
  • Vulnerabilities for which a compensating control is in place
  • Exceptions in the report
  • Conclusions of the scan report
  • List of components designated as segmented from PCI scope by the scan customer

A false positive means that a vulnerability has been identified on a system where it could not or does not exist. For example, an IIS vulnerability identified on an Apache server, or an OpenSSH vulnerability identified where you can evidence that the vulnerability is not present in the version in use. When you have identified and proven a false positive, inform your ASV (or flag it as a false positive in your scanning tool). The false-positive flag typically suppresses the finding for one year; at that point you'll need to flag it as a false positive again (if it is still being identified).
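A minimal sketch of tracking those expiry dates, so flags are re-validated before the year lapses; the record structure and entries are illustrative:

from datetime import date, timedelta

FP_SUPPRESSION = timedelta(days=365)  # typical false-positive flag lifetime

false_positives = {
    # finding id: (date flagged, evidence reference) - illustrative entries
    "apache-iis-mismatch": (date(2012, 1, 15), "server header evidence"),
    "openssh-backport": (date(2012, 3, 2), "vendor backport notice"),
}

def flags_needing_review(today=None):
    # return findings whose suppression has lapsed and must be re-flagged
    today = today or date.today()
    return [fid for fid, (flagged, _) in false_positives.items()
            if today >= flagged + FP_SUPPRESSION]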

Compensating controls
A compensating control for a vulnerability scan is the same as any other compensating control. There must be a valid business or technical justification as to why you cannot meet the requirement, and the compensating control in place must at least satisfy the intent and rigour of the original requirement. You must also have a methodology for validating and maintaining the compensating control.  It is the responsibility of your QSA to approve any compensating controls in use as part of the assessment process.

Friday 12 October 2012

Project OSNIF

There are times when thoughts and chats at conferences start something.  I'm hoping that the Open Source Network Intrusion Framework is one of those somethings.

Intrusion detection and prevention systems are something of a way of life to my buddy Arron Finnon (aka @finux); he's a fairly regular speaker at conferences on evasion techniques, technical misgivings and the general misuse of intrusion detection/prevention systems.  After much discussion with him at @AthCon earlier this year, we agreed that something was missing from the community.  IDS/IPS have become something of a dark art in network security, with the overhead of managing endless tuning profiles, architectural issues, false positives, false negatives and claims from overzealous vendors that are very rarely reality in deployment.  At @AthCon I floated an idea with Arron that the other makeitcompliant bloggers and I have been discussing for a while: the need for security-vendor testing criteria that could be repeatable, automated and consistent across products, so that different vendors can be evaluated in a neutral manner rather than via a paid-for lab certification.  This led to a conversation on the sheer volume of IDS technology in the market, in no small part thanks to:

PCI DSS - 11.4 Use intrusion-detection systems, and/or intrusion-prevention systems to monitor all traffic at the perimeter of the cardholder data environment as well as at critical points inside of the cardholder data environment, and alert personnel to suspected compromises.
Keep all intrusion-detection and prevention engines, baselines, and signatures up-to-date.

IDS functionality is prolific now: integrated into firewalls, as host software, as standalone infrastructure, as appliances; the list is endless.  However, in my experience I'd seen common issues in deployment.  These were not compliance failings; the PCI standard is very flexible on IDS specifics because it is such a broad technology, deployable in many ways.  Because of that breadth, confusion seems to be the norm about just where to begin with an IDS/IPS deployment.  Having seen IDS regularly deployed in scenarios where it's inspecting 1% of the throughput because the other 99% is encrypted, or where it's been deployed post-breach and then tuned to a potentially compromised environment, I am always somewhat sceptical of the real value of an IDS.  There are numerous evasion techniques readily available in Metasploit already and, as one of Arron's talks covered, there are even techniques originally designed for evasion that, due to issues with Metasploit, were run as the norm and IDSs tuned to them, meaning the original exploit technique actually goes unnoticed by the IDS!

So, over in the Athens heat, a conversation started along the lines of "wouldn't it be nice if we had an OWASP-like framework for intrusion management".  This led to the concept that would become OSNIF, something we hope will give consistent guidance on the use, testing and deployment of IDS/IPS.

When we met up again at DeepIntel, we stumbled across Richard Perlotto from Shadowserver.org and were midway through a conversation about how to sensibly set up an IDS/IPS testing methodology that could be applied consistently without just depending on the Metasploit tools.  After a short "mmmm", Richard said they might be able to help with that.  Shadowserver does a lot of AV testing and scoring against malware in the wild, and has a whole stash of pcap resources that would be beneficial to run against an IDS/IPS in a similar way.  We will definitely be talking to them in future about how they can help and what we can do with some of their data.

Arron's managed to herd the cats caught up in the initial discussion; some goals have been set, and I'm quite pleased to be one of the volunteers on this project.

The initial objectives of OSNIF are as follows:
  • Develop an OSNIF Top 5.
  • Develop a “Risk Assessment” guide for deploying detection systems.
  • Develop a “Best Practices” deployment guideline.
  • Develop an open source IDS/IPS testing methodology.
  • Operate as an independent legal organisation to maintain and manage community data.

The OSNIF framework could well be the start of some common-sense open and collaborative thinking in this space.  I hope so.  Head over to osnif.org to get connected with the various mailing lists etc.

Saturday 6 October 2012

Social Media - do too many tweets make a …?

This week has had the media crawling all over Ashley Cole, the England left-back, who has been heavily criticised for his use of certain language towards the FA.  Interestingly, it isn't that long ago that the Prime Minister, David Cameron, used the same language in relation to Twitter itself, perhaps with less venom (http://www.youtube.com/watch?v=d3Mrfut-FSw).  Whilst the details of the two instances don't really interest me, they do show how difficult it is to control the use of social media by employees.  Equally, they show how quickly someone's opinion or language can reflect on them, or perhaps their employer.  It is interesting to consider when the views of the individual are, or are not, the views of the employer.

I think the usefulness of a corporate Twitter profile should now be obvious, if for no other reason than to be able to clearly distinguish between "corporate messaging" and employee "chatter".  If a message comes from the company-managed Twitter account it can clearly be identified as such, as opposed to an employee saying something on their personal account.  Whether an employer chooses to take action against something an employee says is clearly their decision and may depend on the type of organisation they are.

Restricting access to social media in the enterprise does have some benefits:

1) It might help to stop non-corporate tweets being directly linked to you, as tweets can be restricted so they don't originate from your network during working hours unless authorised (although the benefit is limited; tweets can be sent in many other ways).
2) It can help stop malicious URL propagation via re-tweets. It is not uncommon for staff to follow each other, so one bad re-tweet could deliver a URL to a large number of staff at a company from what looks like a trusted source.

Imposing restrictions on corporate network use should be the norm, but in my experience those who tweet personally do so from their smartphones via 3G, so corporate network controls are ineffective.  Corporate tweeters tend to (or should) use a desktop/tablet application that can provide statistics, so access can be managed.  I have seen some organisations say it's acceptable to use social media on personal devices but block it completely from corporate devices.  Clearly, this could be problematic in BYOD scenarios.

If you already impose lifestyle restrictions on your employees (very common in football clubs, media/entertainment and certain social roles), then including an "acceptable social media policy" in your brand management strategy is the way forward.  This should outline your company's position on what it considers acceptable from an employee during their employment.  It can then be incorporated into their employment contract.  How you choose to enforce this is a different matter, and likely to be very difficult.  Managing by exception is common: censuring employees if a tweet/post/message is reported.  I've seen very few organisations actively monitoring employee Twitter activity, mainly due to the privacy concerns and the amount of resource it takes to do so.

Placing restrictions on the use of social media such as Twitter or Facebook needs to be a considered decision and should be made in line with the culture of your organisation.  If you do allow it, the risks should be considered and measures put in place to help mitigate the technical vulnerabilities.

Thursday 4 October 2012

Vulnerability Scanning for PCI DSS Compliance


Maintaining compliance with the PCI DSS requirements for vulnerability scanning can present a number of challenges for companies.  This blog post tackles those challenges and presents ideas for managing the process, as well as for evidencing it to your QSA.

The best starting point is to define the process for vulnerability scanning.  PCI DSS requires the following:
  • Scans are performed at least quarterly
  • Scans must be performed by a qualified resource for internal scans (not required to be an ASV) and by an Approved Scanning Vendor (ASV) for external scans
  • External vulnerability scans must satisfy the ASV program requirements
  • Scans must be performed after significant change in the environment
  • Scans must achieve a passing score (as defined in ASV requirements for external scans, and by having no vulnerabilities ranked as ‘high’ for internal scans)

These guidelines provide the control requirements for the vulnerability scanning process within an organisation. Establishing a process to satisfy those requirements can be difficult, because of size, complexity or architectures that have been implemented.

When it comes to vulnerability scanning, it is much like any other job you would rather ignore; the longer you leave it, the more time, effort and interruption it causes when you want or need to be doing something else.  Scanning more regularly, such as monthly, is often touted by QSAs as a good way to manage the process (and it is), but it will not deliver the added benefits unless configurations and security patching are managed correctly.

In order to perform a scan, you need to have identified which systems are in scope for your internal and external scans and to have created the appropriate profiles in the scanning configuration. Implementing network segmentation can reduce the number of systems which need to be scanned.  For the ASV scan, the PCI SSC ASV Program Guide requires the following of your ASV provider (a rough sketch of the DNS lookups follows the list):
  • Information about any scoping discrepancies must be indicated on the Attestation of Scan Compliance cover sheet under the heading "Number of components found by ASV but not scanned because scan customer confirmed components were out of scope". This information should NOT be factored into the compliance status.
  • Include any IP address or domain that was previously provided to the ASV that has been removed at the request of the customer
  • For each domain provided, look up the IP address of the domain to determine if it was already provided by the customer
  • For each domain provided, perform a DNS forward lookup of common hostnames (such as "www", "mail", etc.) that were not provided by the customer
  • Identify any IPs found during MX record DNS lookup
  • Identify any IPs outside of scope reached via web redirects from in scope web-servers (includes all forms of redirect including: JavaScript, Meta redirect and HTTP codes 30x)
  • Match domains found during crawling to user supplied domains to find undocumented domains belonging to the customer
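To illustrate the discovery steps above, here is a rough sketch of the forward and MX lookups, assuming the third-party dnspython package for the MX query; the hostname list and arguments are placeholders:

import socket
import dns.resolver  # third-party: pip install dnspython

COMMON_HOSTS = ["www", "mail", "ftp", "vpn", "webmail"]

def find_scope_discrepancies(domain, customer_ips):
    found = {}
    # forward lookups of common hostnames not supplied by the customer
    for host in COMMON_HOSTS:
        fqdn = "%s.%s" % (host, domain)
        try:
            found.setdefault(socket.gethostbyname(fqdn), []).append(fqdn)
        except socket.gaierror:
            continue  # hostname does not resolve
    # identify any IPs found during the MX record DNS lookup
    try:
        answers = dns.resolver.resolve(domain, "MX")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        answers = []
    for mx in answers:
        mx_host = str(mx.exchange).rstrip(".")
        try:
            found.setdefault(socket.gethostbyname(mx_host), []).append(mx_host)
        except socket.gaierror:
            continue
    # anything that resolves but was not supplied is a discrepancy to raise
    return {ip: names for ip, names in found.items() if ip not in customer_ips}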

I mention this because I can name only a few of the ASVs I have worked with through my clients who have demonstrated the processes identified above.  Note, too, that the vulnerability scanning requirements oblige the organisation (the scan customer) to acknowledge its responsibility to manage scope and to properly inform the ASV provider about its environment, including any local or external load balancers that may impact the scanning process.

For internal vulnerability scanning, the chosen solution should be managed by someone able to demonstrate sufficient knowledge to perform the process.  To achieve this, the scope of scanning should be approved within the governance framework of the organisation and implemented within the chosen solution.  The reports should always be passed back to the team responsible for the process.
There are a number of tools available, from managed and hosted solutions to in-house implementations and free open source tools.  I have been to clients with 100+ servers in scope for PCI DSS assessment and observed them using scanning tools which do not facilitate easy review of issues or management of remediation. If you want to scan large environments, investing in a tool which provides the business with useable reports is a must, as remediation in the early days of this process will likely be rather time-consuming.  All of the findings within the reports should be considered for action, even the informational issues.

When assets are identified for scanning they should be assigned an owner; this is the person who investigates the remediation plan for the identified issue and documents the relevant change control documentation to ensure the fixes are implemented.  This is where the functionality of the tools selected to assist the business in maintaining compliance has to be considered. I am always amazed at how limited the functionality of some solutions is, such as only providing a PDF report or a CSV download; it just makes the job harder!  A solution that can mirror the internal governance structure of an organisation can save time, meetings and other resources just by adding some workflow.

Managing the identified issues from a scan is important, especially when a failure is noted.  Failures, as mentioned above, are, for external scans, anything with a CVSS base score of 4.0 or above and, for internal scans, any issues ranked as ‘high’.  The ASV Program Guide suggests organisations attempt to address issues based on the CVSS scoring.  Any issues that have a failure implication need the following:
  • Root cause analysis (newly identified issue, security misconfiguration, etc.)
  • Action plan for remediation
  • Communication and assessment of compensating controls (for external scans this is completed with the ASV)
  • Change Management documentation
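One way to keep that evidence together is a simple record per failing finding, sketched below; the field names are illustrative, not a prescribed format:

from dataclasses import dataclass, field

@dataclass
class FailingFinding:
    finding_id: str
    cvss_base: float
    root_cause: str                  # e.g. "new issue", "misconfiguration"
    action_plan: str
    compensating_control: str = ""   # assessed with the ASV for external scans
    change_refs: list = field(default_factory=list)  # change management tickets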

It is not just the changing security landscape that needs to be managed for vulnerabilities.  Changes made to the environment, such as the addition of new servers or new applications, should always result in security testing being completed.  Prior to any other security testing, a full vulnerability scan covering the internal and external changes should be completed.  This should be the case even if you are going to complete penetration testing, as it ensures the tester’s time is spent on harder tasks rather than on the ‘low-hanging fruit’.

Where an organisation has established and embedded change management procedures and governance structures such as a Change Advisory Board, these should be closely aligned and involved in the vulnerability scanning process. The ability to manage changes in the environment securely and in line with company processes will provide key documented evidence to the QSA that robust processes are in operation.

Part two of this post will cover the following items in more detail than they have been touched on in this post:

  • How do you know you’re doing it right?
  • Scan profiles
  • False positives
  • Compensating controls



Wireless scanning - PCI DSS requirement 11.1


Requirement 11.1 of the PCI DSS requires scanning for rogue wireless access points. The risk identified here is that some miscreant will connect a rogue wireless access point to your environment, then sit in their car outside and sniff information, with the goal of intercepting passwords and other sensitive information so they can steal cardholder data.  This is an understandable view, don’t get me wrong, but it’s a limited one.  The control is there to mitigate the risk of any miscreant doing this, internal or external, and it is far easier for a member of staff to gain access to the office environment and plug in a rogue device than for an external third party.

The requirement is as follows:
“11.1 Test for the presence of wireless access points and detect unauthorized wireless access points on a quarterly basis.
Note: Methods that may be used in the process include but are not limited to wireless network scans, physical/logical inspections of system components and infrastructure, network access control (NAC), or wireless IDS/IPS. Whichever methods are used, they must be sufficient to detect and identify any unauthorized devices.”

The methods above are slightly expanded from the methods enumerated in PCI DSS v1.2 and provide a bit more flexibility to merchants and service providers. In July 2009, the PCI SSC released an Information Supplement on PCI DSS Wireless Guidance, prepared by the Wireless Special Interest Group. As that document runs to just over 30 pages, I'll try to keep this short and sweet. If you're interested, it is available here: https://www.pcisecuritystandards.org/pdfs/PCI_DSS_Wireless_Guidelines.pdf
I'm going to go through the testing procedures for this requirement one by one for completeness. However, only 11.1.b requires any real activity!

11.1.a Verify that the entity has a documented process to detect and identify wireless access points on a quarterly basis.
This is simply about incorporating the wireless scanning requirements into an appropriate policy (such as a Security Testing Policy).  A quarterly process that identifies the controls to be implemented should be documented, with the required evidence and control forms referenced.  The adequacy of the process is assessed in 11.1.b.

“11.1.b Verify that the methodology is adequate to detect and identify any unauthorized wireless access points, including at least the following:
  • WLAN cards inserted into system components
  • Portable wireless devices connected to system components (for example, by USB, etc.)
  • Wireless devices attached to a network port or network device”

If you use wireless devices, these should all be documented as part of the asset inventory.  A simple process to validate the devices on the inventory against the devices actually observed provides a good basis for control; a sketch of that check follows.  Also look to the wireless systems you have implemented to provide additional functionality for managing the PCI DSS requirements.
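A sketch of that validation step, comparing documented BSSIDs against those actually observed. The inventory source and the scan parser are placeholders for whatever you already use (for example, Kismet output):

def reconcile_wireless(inventory_bssids, observed_bssids):
    documented = set(inventory_bssids)
    observed = set(observed_bssids)
    return {
        # seen on the air but not documented: investigate as potential rogues
        "unauthorised": observed - documented,
        # documented but not seen: update the inventory or investigate
        "missing": documented - observed,
    }

# illustrative usage with placeholder BSSIDs
result = reconcile_wireless(
    ["00:11:22:33:44:55"],
    ["00:11:22:33:44:55", "66:77:88:99:aa:bb"],
)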

Network ports should be activated for required use only, and a check of LAN and switch ports should be performed to verify that only required devices are connected. For example, if you have 10 servers and a firewall, you should probably have 11 cables going into your switch. While I know this is horribly difficult to monitor in many server rooms that are a cabling nightmare, tidy cabling can be a separate goal in itself and will not be discussed further here!

Network Access Control can provide a simple solution to the issue, although with a heftier price tag.  Any investment should be decided based on the requirements of the business rather than solely on compliance requirements. Technology alone does not solve the problem; it must be configured and maintained.
Regular system users should not be permitted to install Plug'n'Play devices on their systems. Role-based access control is already required within the PCI DSS and is fairly easy to lock down in most environments (usually via group policy).  Only devices that require and have authorised wireless access should have wireless configuration services available.

On top of the normal physical security controls associated with a data centre or networking cabinet, racks should be locked and an eyes-on review should verify whether additional hardware has been introduced.
In terms of tools, an option which only requires configuration and some training is the nmap scanner with the -O switch (OS fingerprinting), which can help identify devices that present as wireless access points. The traditional use of WiFi analysers such as NetStumbler or Kismet will also assist you in establishing the wireless baseline of surrounding areas.  Once the baseline is established, monitoring can be based on any identified changes.
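As a rough sketch of driving that from a script, assuming nmap is installed and run with sufficient privileges (OS detection requires them) and that the network range is a placeholder; hosts whose fingerprint reports a WAP device type are flagged:

import subprocess

def probable_waps(network="192.168.1.0/24"):
    # run nmap OS detection and parse its plain-text output
    out = subprocess.run(["nmap", "-O", network],
                         capture_output=True, text=True).stdout
    waps, current = [], None
    for line in out.splitlines():
        if line.startswith("Nmap scan report for"):
            current = line.split()[-1]
        elif line.startswith("Device type:") and "WAP" in line:
            waps.append(current)  # fingerprint suggests an access point
    return waps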

The requirement does not specify that all of the above need to be implemented; this is here to provide a flavour of the measures available to demonstrate compliance.  The controls that are implemented must be effective: simply walking around once a quarter with a wireless scanner, without direction, does not provide adequate control.  It is important to note that the requirement calls for a methodology.

11.1.c Verify that the documented process to identify unauthorized wireless access points is performed at least quarterly for all system components and facilities.
If the environment is split over many sites or locations, the processes must be sufficient to cover all locations included within the scope of compliance, not just the sampled locations.  If a third-party hosting company is used as a Service Provider, someone must be responsible for the controls at that location.

11.1.d If automated monitoring is utilized (for example, wireless IDS/IPS, NAC, etc.), verify the configuration will generate alerts to personnel.
If you are using a wireless IPS (WIPS), configure it to email or otherwise alert the appropriate security personnel if rogue access points are introduced to the environment.
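A minimal sketch of the alerting hook such monitoring might call; the SMTP host and addresses are placeholders:

import smtplib
from email.message import EmailMessage

def alert_rogue_ap(bssid, ssid, smtp_host="mail.example.com"):
    # notify the security team that a rogue access point was observed
    msg = EmailMessage()
    msg["Subject"] = "Rogue AP detected: %s (%s)" % (ssid, bssid)
    msg["From"] = "wips-alerts@example.com"
    msg["To"] = "security-team@example.com"
    msg.set_content("Investigate per the incident response plan (Requirement 12.9).")
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)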

11.1.e Verify the organization’s incident response plan (Requirement 12.9) includes a response in the event unauthorized wireless devices are detected.
The whole process is broken if the people you rely on to implement the controls do not know what to do when exceptions occur.  It is important for these controls to filter into the incident response framework.  Incident response should include provision for the processes to be followed when an alert indicates possible use of a rogue device, and invocation procedures if the presence of a rogue device is confirmed.