If Compliance is Not Enough, What Else is Needed to Secure Web Applications?
As we saw in part 1, PCI Compliance: the Good, the Bad and the Insecure, PCI compliance is a good idea in the abstract; however, it should be viewed only as a starting point, given its rather minimalistic and generic approach to meeting compliance requirements. One of the largest problems with PCI compliance is the near-total lack of real, technical requirements. For example, the very first requirement is to have a firewall designed to protect cardholder data. That sounds good on paper, but nothing actually specifies how, or to what degree, this firewall must protect that data.
Consider that any random Joe McSysadmin can throw a firewall on his network and call himself compliant, and he would be technically correct. But that firewall would not actually protect the network and its web applications in any realistic way unless it was finely and appropriately tuned, something the PCI guidelines never detail in any meaningful way. Indeed, most merchants meet the requirements at only the most basic and minimal levels necessary, which goes a long way toward explaining the amount of cardholder data compromised every year. Instead, merchants should go well above and beyond the basic and often ambiguous generalities of the PCI compliance requirements.
As mentioned earlier, there are six categories of PCI compliance, each with a subset of rules. The following details a good starting point and some additional steps all merchants should follow when attempting to become PCI compliant:
A Complete Guide to Having PCI Compliant Web Applications and Business
Build and Maintain a Secure Network
1. Install and maintain a firewall configuration to protect cardholder data:
Just installing and configuring a basic firewall is not enough, even if it meets the PCI requirements. It is also imperative that all externally-facing systems (and, indeed, some internal-only systems as well) not only be properly configured with adaptive, well-tuned firewalls, but that the firewall logs be inspected frequently as well. By adaptive, we mean that the firewall's rules should be improved continuously, both automatically and manually, in response to the traffic it actually sees: rate-limiting or outright blocking questionable traffic, and alerting security engineers to any possible trouble. This is not only to keep external threats out, but also to prevent insider threats from gaining access they should not have (hence the earlier mention of internal-only systems).
This is not limited to your web servers, but applies to any systems on your network, such as your employees' desktop computers. In 2011, RSA Security - an American computer and network security company whose products are used in both high-level corporate business and government contracting - fell victim to a social engineering and trojan horse attack that rendered their SecurID two-factor authentication tokens useless, all due to an employee desktop compromised by a simple infected email attachment. Most insider threats are not intentionally committed by disgruntled employees, but in fact stem from poor computing practices on insecure networks.
2. Do not use vendor-supplied defaults for system passwords and other security parameters:
Time and time again, network engineers install routers with cisco:cisco username/password combinations, thinking, "Surely, no one will make it in this far." Wrong. The same can be said for practically anything that ships with defaults, be they passwords or configurations. There exist plenty of black-hat scanners that search for fresh installations of WordPress, phpMyAdmin, and various other easy-access web applications and software during that brief period just after installation when default passwords have not yet been changed. Even this momentary exposure can wreak havoc on an administrator's setup, or even the whole network.
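A routine credential audit can catch lingering defaults before a black-hat scanner does. Below is a minimal sketch in Python; the helper name and the tiny wordlist are our own illustration, and a real audit should use a maintained default-password list such as those bundled with vulnerability scanners.

```python
# Illustrative audit helper: flag accounts still using vendor-default
# credentials. The wordlist below is a tiny sample, not a real database.
DEFAULT_CREDENTIALS = {
    ("cisco", "cisco"),
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def flag_default_credentials(accounts):
    """Return every (username, password) pair that matches a known default."""
    return [pair for pair in accounts if pair in DEFAULT_CREDENTIALS]

if __name__ == "__main__":
    audit = [("cisco", "cisco"), ("alice", "correct horse battery staple")]
    for user, _ in flag_default_credentials(audit):
        print(f"WARNING: vendor-default credentials still active for '{user}'")
```

Run periodically (and immediately after any new installation), such a check closes exactly the window of exposure described above.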
It is also worth mentioning that this requirement should cover any defaults, including configuration details such as ports and version replies. There is no reason to leave SSH port 22 open to the world unless you are running a shell server, in which case that shell server should never be even the slightest bit connected to cardholder data to begin with. There is likewise no reason to leave the full version string in the Apache web server's reply headers. In fact, wherever possible, the most minimal information should be supplied, or none at all if it is not critically necessary. The less potential attackers can glean from your systems, and the fewer entry points made available to them, the more secure those systems will be.
Protect Cardholder Data
3. Protect stored cardholder data:
This requirement should go without saying, but it often gets ignored or overlooked once the first requirement is completed. For example: PCI compliance requires that CVV numbers not be stored whatsoever, and that cardholder data such as the card number, ZIP code, and cardholder name all be stored in an encrypted format. All too often, neither of these two requirements is met. Some eCart software provides this functionality already, but does an ineffective job of protecting the keys used in the encryption/decryption process. What good is a lock if you leave the keys in it? This sort of mass compromise is easily preventable by two simple methods of data protection:
- One-Way Encryption: Cardholder and personally identifiable information should not be stored in a recoverable form unless absolutely necessary, such as for recurring charge payments or saving cardholder data for future easy payments. If you must store cardholder data but have no reasonable need to retrieve it later, then encrypt it using a highly secure one-way algorithm, such as salted SHA512.
- Store Keys Offsite: If you absolutely must store cardholder data and have a reasonable need to retrieve it later, then keep your encryption keys offsite (or, if multiple servers are infeasible, inaccessible to the publicly-facing services, such as via a process chroot and strict permissions). One way to do this is to run a service, on a system unreachable from your publicly-facing servers and services, that takes only two actions: receive cardholder data to encrypt and store, or charge existing stored cardholder data a defined amount (such a command could look like: charge client #123 with $29.95 USD to their payment method #2). This service would never return cardholder or personally identifiable data for any query, thus preventing that data from ever being read back out and compromised. For simplicity, this service could also be granted exclusive access to your cardholder database, just so long as - again - it does not return any privileged data.
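The one-way option above can be sketched in a few lines of Python. This is an illustrative sketch only (the function names are ours): each record gets its own random salt, and note that in practice a deliberately slow key-derivation function such as PBKDF2 or scrypt is stronger still than a single SHA-512 pass.

```python
import hashlib
import hmac
import os

def hash_card_number(card_number: str, salt: bytes = None):
    """One-way hash of a card number; a random per-record salt defeats
    precomputed rainbow tables."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha512(salt + card_number.encode()).hexdigest()
    return salt, digest  # store both; the original number is unrecoverable

def matches_stored(card_number: str, salt: bytes, stored_digest: str) -> bool:
    """Re-hash an incoming number with the stored salt and compare the
    digests in constant time."""
    candidate = hashlib.sha512(salt + card_number.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)
```

Because only the salt and digest are stored, a database compromise yields nothing directly chargeable, yet a returning card can still be matched against its stored record.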
Additionally, many eCart owners often store all of their data in a single database with shared permissions, including cardholder data. If a website owner runs a web forum that charges for premium services such as access to exclusive hidden forums, and stores that forum's data in the same database or via the same access controls as the cardholder data, that cardholder data is as good as compromised the moment a vulnerability exists and is exploited on that forum, which happens exceedingly often. For this reason, it is critical to employ a Separation of Privileges and Segregation of Data set of principles, as follows:
Separation of Privileges:
Take the prior example of a web forum. If you must run both on the same server, then separate out the permissions of each one's access. If you are running forum.com, then set both forum.com and www.forum.com up as one segregated web application (such as nginx running with php-fpm for speed and application server security, listening on ports 80 and 443). Then, set store.forum.com to handle your premium forum access eCart purchase system as an isolated, segregated system. This could be done via suPHP with individual system users for each of the forum and the eCart. Another, more traditional method that retains nginx for both services involves jailing off a second nginx instance in its own chroot; couple this with a chrooted php-fpm instance, and this can work. However, a simpler method for full service segregation would be to run the eCart in an Apache instance with mod_php under a different system user and group with strict permissions (similar manual and module chroot methods are still applicable for highly restrictive security, if desired). This Apache instance would listen localhost-only on a different, publicly-firewalled port (e.g. 8080, firewalled in a deny state just in case Apache is misconfigured to listen on public IP addresses), and the nginx instance would proxy SSL requests through for this sub-domain.
Segregation of Data:
With Separation of Privileges, the access point is secured, but the data it uses is not ... yet. To address the data side, we employ the concept of data segregation. First, this involves more of the prior concept - Separation of Privileges - where you restrict the logins and access controls between your forum and eCart database users. Next, provide individual databases for each element: one for the forums, one for the eCart and cardholder data. So long as your eCart application remains secure, so, too, will your cardholder data, regardless of the security of your forums application (assuming no privilege escalation occurs, of course). The one caveat is the security of the eCart itself, which can also fall victim to vulnerabilities; in mid-2012, one of the largest eCart systems was exploited through multiple attacks and vulnerabilities, so even your eCart can become the insecure entry point. A clever additional approach involves custom-coding a localhost-only, publicly-firewalled network listener service. By listening for only two commands ("store [clientID] [cardholderData]", "charge [clientID] [amount]"), having exclusive access to the encryption keys via a unique user and group ID, permissions, and perhaps a chroot environment, and having exclusive access to the database user and tables holding cardholder data, this service - with a little additional coding and hacking, such as writing the payment method plugin for your eCart application - could act as a middleman between the eCart and the sensitive data itself. It may seem a bit much, but nothing is overkill when it comes to strict security.
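A minimal sketch of such a two-command listener might look like the following Python. Everything here is illustrative - the verb names, port, and in-memory store are our assumptions, and a real service would encrypt at rest and actually talk to a payment processor - but the key property is visible: no command ever returns stored cardholder data.

```python
import socketserver

# In-memory stand-in for the encrypted cardholder store; a real service would
# encrypt at rest with keys only this process's user can read.
_vault = {}

def handle_command(line: str) -> str:
    """Accept exactly two verbs; refuse everything else. No command ever
    returns stored cardholder data."""
    parts = line.strip().split(maxsplit=2)
    if len(parts) == 3 and parts[0] == "store":
        client_id, card_data = parts[1], parts[2]
        _vault[client_id] = card_data  # encrypt before storing in real use
        return "OK stored"
    if len(parts) == 3 and parts[0] == "charge":
        client_id, amount = parts[1], parts[2]
        if client_id not in _vault:
            return "ERR unknown client"
        # ...submit _vault[client_id] and amount to the payment processor...
        return f"OK charged {amount}"
    return "ERR unsupported command"  # no reads, no listing, no export

class VaultHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:
            self.wfile.write((handle_command(raw.decode()) + "\n").encode())

if __name__ == "__main__":
    # Loopback only; the public firewall should deny the port regardless.
    with socketserver.TCPServer(("127.0.0.1", 9909), VaultHandler) as server:
        server.serve_forever()
```

Even if the eCart itself is fully compromised, an attacker talking to this listener can only add records or trigger charges; there is simply no verb that reads data back out.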
4. Encrypt transmission of cardholder data across open, public networks:
This requirement definitely goes without saying. Simply put, if you are handling cardholder data between your servers and your clients' computers without encryption, you have no business running an eCart system to begin with. You absolutely must encrypt this traffic, and you must do so with reliable, trustworthy SSL certificates. Free single sub-domain certificates are available, as are plenty of commercial-grade, small-business to professional eCommerce levels of certificate options.
One aspect that sometimes gets overlooked is the communication from your servers to your payment processor, and all the steps in between. Nearly all payment processors now accept only secured communication methods. However, if you have a middle step in the process - such as a shopping cart mirror server hosted in a different data center which transmits the cardholder data, unencrypted, to your central database server before it reaches the payment processor - that in-between traffic crosses public networks and must also be strongly encrypted, just like the communication between your servers and your clients.
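In Python, for example, a properly verified TLS channel for such a server-to-server hop takes only a few lines. The function names here are our own sketch, not any particular library's API; the important parts are certificate verification against a trusted CA store, hostname checking, and a modern minimum protocol version.

```python
import socket
import ssl

def make_tls_context() -> ssl.SSLContext:
    """TLS client context that verifies the peer certificate and hostname."""
    context = ssl.create_default_context()            # loads the system CA store
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return context

def open_encrypted_channel(host: str, port: int) -> ssl.SSLSocket:
    """Open a verified TLS connection, e.g. cart mirror -> central server."""
    raw = socket.create_connection((host, port))
    return make_tls_context().wrap_socket(raw, server_hostname=host)
```

Verification matters as much as encryption here: an unverified TLS hop between your own data centers is still vulnerable to a man-in-the-middle on the public network.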
Maintain a Vulnerability Management Program
5. Use and regularly update anti-virus software:
As stated all the way back in Requirement #1 (Install and maintain a firewall configuration to protect cardholder data), protecting your externally-facing systems is a duty that must be continually maintained. As also previously stated, your employees' terminals are very important in this regard as well. However, a piece of software is only as good as the user running it, so this requirement should also include highly effective education in good computing practices. In the aforementioned 2011 RSA Security hack, ineffective anti-virus and firewall software, coupled with the poor computing practice of opening unsafe email attachments, ultimately led to the estimated $66 million USD loss RSA Security suffered as a result.
6. Develop and maintain secure systems and applications:
Probably one of the most important requirements of PCI compliance, this requirement acts as a sort of umbrella over the others, re-asserting the absolute and unarguable importance of system and web application security. As mentioned in Requirements #1, #5, later in #9, and several others, security is of the utmost importance with regard to cardholder data -- this cannot be over-stressed. Good firewalls, anti-virus services, and frequent web application security scans; encryption when crossing public channels; encrypted storage of cardholder data, authentication tokens, and passcodes (perhaps even two-factor authentication or biometrics). Additionally, in Requirement #10, we will address detailed and secured logging of all privileged activity. These all and more, you may notice, are repetitiously repeated in recurring repetition, repeatedly. Why? Because if they were not some of the most problematic failures of PCI compliance, there would be no reason to continually drive these points home.
Implement Strong Access Control Measures
7. Restrict access to cardholder data by business need-to-know:
As with the next requirement, #8, access restrictions are a crucial element of protecting cardholder data, particularly with regard to privileged personnel. Unfortunately, however, this requirement is often overlooked at the service level. In the technology industry, there exists a principle known as Least Privilege. As its name implies, the concept involves granting a service or user the least amount of privilege necessary to complete its job, including the revocation of temporary privileges once they are no longer needed. This principle should not be foreign to our readers, as we have discussed it several times already, and for good reason. Indeed, as discussed in our SQL injection articles, restricting permissions to the most minimal level required is quite a common concept; as Linux administrators, for example, we apply this methodology to stored data in the form of filesystem permissions. So, too, should this concept be applied wherever possible, especially in environments that handle sensitive information such as cardholder data.
In Requirement #3, we exemplified the scenario of a web forum coupled with an eCart for premium access. In that scenario, the principles of Separation of Privileges and Segregation of Data are further enhanced by the principle of Least Privilege when applied to database permissions. Of course, Least Privilege is not exclusive to database permissions, either. The concept is appropriately applied to everything that has any level of access: on-disk stored data, backups, employee file stores, communication pathways, command and control systems, even the contents of the access control lists themselves (it is unwise to tell an intruder what they must infiltrate next in order to gain the desired escalated privileges). In any and every possible area, Least Privilege should be applied and strictly enforced to minimize the damage when -- not if -- a hacker ultimately gains access to a service.
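On a Linux filesystem, this principle is mechanical both to enforce and to audit. A minimal sketch in Python (the helper names are ours): a key or credential file is created readable only by the owning service account, and a one-line check verifies that no group or world permission bits have crept in.

```python
import os
import stat

def write_secret(path: str, data: bytes) -> None:
    """Create a key/credential file readable and writable only by the
    owning service account (mode 0600)."""
    # O_EXCL refuses to reuse a pre-existing (possibly attacker-planted) file.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

def is_least_privilege(path: str) -> bool:
    """True only if no group or world permission bits are set."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```

A periodic sweep calling `is_least_privilege` over key stores, backups, and configuration directories catches permission drift before an intruder can exploit it.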
8. Assign a unique ID to each person with computer access:
This may seem like a relatively simple requirement, but it actually has quite a few layers of complexity beyond the obvious. As mentioned in Requirement #5, good education in computing practices should be mandated for anyone with privileged access to sensitive data, but there are other important aspects as well. For example, what good is a unique ID if the systems utilizing those authentication methods are insecure? By 'insecure', we do not mean only an insecure network or a poor anti-virus posture; very importantly, this also includes the computing practices of the users of those unique ID logins. As mentioned in Requirement #6, all systems involved, including those requiring unique ID authentication, should be secure. Equally important, though, are the security practices of the holders of those unique IDs. This can cover many things: a strong understanding of social engineering approaches, safe computing habits, reduced high-risk exposure on social networks and other arenas, and so forth. Again, proper education in secure computing practices cannot be stressed enough.
Also, as mentioned in the prior requirement, #7, this requirement does not exclusively apply to actual personnel, but services as well. Along with the principle of Least Privilege, services should possess unique access exclusive to each service unless the sharing of access is absolutely necessary (which should be avoided via protected communication pathways wherever possible). You can consider this another way: If a user can cause damage by sharing his or her credentials, so too can a service exploited by a hacker when its access is shared among other services.
9. Restrict physical access to cardholder data:
Indeed, for some merchants this requirement may exceed their ability to control, especially in the case of an online store. However, simply using a reliable and trustworthy hosting provider will adequately meet this requirement. It does also include ensuring that the server hosting your online shopping system is accessible only by you or other users properly privileged under Requirement #8 above, such as by avoiding shared hosting (a topic we have addressed previously) or other arrangements that would give unauthorized users privileged access (such as through a hypervisor terminal with VPS hosting).
And, of course, physical security includes the systems you and other privileged users have physical access to, permanently installed or otherwise. Over the past several years, major corporations and, indeed, even the United States federal government itself have fallen victim to massive security breaches due to failed physical security, most often due to unsecured laptops illogically carrying enormous troves of highly sensitive personally identifiable information. Even setting aside the absurdity of laptops carrying vast amounts of highly sensitive data in the first place, the lack of simple hard drive encryption led to tens of millions of people's private information being leaked to entities that had no business accessing that data (resulting in billions of dollars in losses through identity theft, fraud, and lawsuits).
Regularly Monitor and Test Networks
10. Track and monitor all access to network resources and cardholder data:
Not just on merchant systems that handle cardholder data, but on practically every type of server imaginable, this often gets dismissed as needlessly unimportant, when in fact it is an extremely valuable asset. First, look at the monitoring side, valuable not only for uptime but also for security and rapid response. It is unreasonable, indeed impossible, to manually check on services constantly to ensure their uptime and reliability. Many tools exist -- Nagios and Icinga are two of the most popular, among many others -- that allow you to monitor any conceivable service. Furthermore, most monitoring software is incredibly simple to set up, requiring only knowledge of the services you wish to monitor. For example, with the aforementioned Nagios and Icinga, system checks are performed via a series of check scripts or utilities. Nagios and Icinga require only two things from these check scripts: an exit code (0 for OK, 1 for Warning, 2 for Critical, 3 for Unknown) and a single line of status text. That really is all that is required. And you can write a check script for practically anything -- CPU and memory utilization, properly formatted website output, TCP service replies, even the local weather around your remote data centers. Anything and everything can be monitored, giving you visibility the moment any problem occurs -- including the moment any security issue erupts. That brings us to the second side of this: logging.
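To make the check-script contract concrete, here is a minimal Nagios/Icinga-style plugin in Python that classifies the system load average. The thresholds and message wording are our own illustration; the contract itself is just the two items above: one status line on stdout and one exit code.

```python
#!/usr/bin/env python3
"""Minimal Nagios/Icinga-style check plugin: one status line, one exit code.
Exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN."""
import os
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_load(warn: float, crit: float):
    """Classify the 1-minute load average against the given thresholds."""
    try:
        load1 = os.getloadavg()[0]
    except OSError:
        return UNKNOWN, "LOAD UNKNOWN - load average unavailable"
    if load1 >= crit:
        return CRITICAL, f"LOAD CRITICAL - load1={load1:.2f}"
    if load1 >= warn:
        return WARNING, f"LOAD WARNING - load1={load1:.2f}"
    return OK, f"LOAD OK - load1={load1:.2f}"

if __name__ == "__main__":
    code, status = check_load(warn=4.0, crit=8.0)
    print(status)   # the single line of status text Nagios reads
    sys.exit(code)  # the exit code Nagios acts on
```

Swap the body of `check_load` for any probe you like -- an HTTP fetch, a TCP banner check, a certificate expiry check -- and the same contract holds.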
Sometimes real-time monitoring of your systems and services may not prove completely effective. Sure, it keeps you apprised of your uptime and general responsiveness, but a monitor is only as good as the things it monitors: it cannot watch for what it does not know to watch for. While you may be able to catch most kinds of attacks as they occur, you will not catch them all. Thus, it is imperative to have a reliable, offsite logging system. Why do we stress the word 'offsite'? Well, we would not highlight something we felt no need to stress, now would we? Think of it this way: a convenience store has security cameras, and a system that records the images those cameras capture. Would you leave those recording devices behind the counter, beside the register a robber is stealing from? Of course not. So why would you leave the records of an attacker's intrusion on the very server they are intruding upon?
There exist many solutions, both free and commercial, that allow users to store logs offsite. The two simplest are a syslog variant and a dedicated offsite monitoring agent. By default, most Linux and Unix varieties already run their own form of syslog locally. To ship logs to an offsite collector, administrators can use either syslog-ng (which often comes standard on modern Linux distributions) or rsyslog, which are largely interchangeable with some similar-but-different functionality. A far better solution, however, is an offsite monitoring agent, such as the open-source OSSEC -- a host-based intrusion detection system that can perform offsite logging, among many other features, even specifically addressing this very requirement of PCI compliance.
11. Regularly test security systems and processes:
This requirement proves to create a rather tricky problem: A vulnerability scan is only as effective as the list of vulnerabilities it knows to scan for. Indeed, it is impossible to truly account for all unknowns, so the best a vulnerability scan can do is check for the conceivable known methods of intrusion. Thus, it is exceedingly necessary to take a very outside-the-box approach, where thinking abstractly like a hacker becomes a useful skill to employ. A skilled security engineer would not only run vulnerability scans, but could also even perform wargame scenarios to attempt real-world tests of all sorts of intrusion methods in order to most effectively find weaknesses and best craft successful remediation solutions. This is not just limited to checking firewalls, ensuring anti-virus scanners are up-to-date, or verifying traffic is encrypted. This includes everything and anything -- even almost absurd roleplaying tests, such as:
- Social Engineering: Real-world, unannounced tests of the personnel's ability to respond and resist being exploited as a security weakness
- Insider Threat: Testing both the damaging ramifications of a user's or service's access being compromised, either accidentally or intentionally, such as in the case of a disgruntled employee
- Response and Remediation: In the event of ultimate catastrophic failure of security protocols, determine how quickly can a security team react and control the situation
Obviously, this short list of ideas is far from comprehensive, but it should give a good idea of how to truly and deeply test all systems (and, indeed, personnel as well) with effective and usefully abstract methods to develop a successful security posture. They may border on the ridiculous, but you would be surprised how often these little-tested and wide-open security weak points fail in the real world. According to some recent security industry research, upwards of 70% of all cyber attacks involve insider threats in some part. Testing not only the systems and services themselves, but also the people responsible for them, may prove invaluable.
Maintain an Information Security Policy
12. Maintain a policy that addresses information security:
It is essential that a security team (even if that is just you, in a solo enterprise) be well prepared for any and every possible scenario that can be thought up, as it is an absolute guarantee that hackers are doing the same, dreaming up new and innovative ways to gain illegitimate access to cardholder data. A properly prepared security team will find itself planning for everything from the merely improbable to the absolute worst, and everything in between and surrounding. You must plan for basic first-level response and remediation -- secured configurations, firewalls, anti-virus software, and communication encryption. You must plan for cardholder data protection schemes -- encrypted data stores, physical and digital access restriction and control, Least Privilege, Separation of Privileges, Segregation of Data, and education of responsible stakeholders. (Remember, in Requirement #5 and repeatedly thereafter, we mentioned the necessity of effective education in good security practices.) And finally, you must plan for the known and, as best as possible, for the unknown, via regular and irregular testing, monitoring, and offsite storage for forensic research.
Simply put, if you find yourself unprepared for when -- again, when, not if -- an attack or intrusion occurs, you will also find yourself incapable of prompt reaction and mitigation. Similarly, if you or others responsible for protecting cardholder data find yourself incapable of fully and completely protecting that cardholder data, it will -- not can, but will -- eventually fall into the wrong hands. It is therefore perfectly fitting that this requirement is last, but certainly not least in the list of PCI requirements, as it sums up the most important requirement of all: Planning and being prepared.
PCI Compliance is just a Stepping Stone
As we hope this article has highlighted, there are nearly infinite expansive approaches to the very limited and basic starting guidelines of PCI compliance. The need to go beyond the minimums of PCI compliance should indeed be well understood, particularly due to the incredible ramifications from not going well above and beyond those bare minimums. Not only will a breach in security systems cost a business a large sum of revenue due to lost sales (mainly from properly lost trust), but also via a very substantial cost in the form of levied PCI SSC fines. Couple all of these with contributing to the billions per year in losses from identity theft and the untold misery of millions of consumers per year, and the undeniable need for an almost fanatic level of security becomes quite clear.
PCI compliance is just a stepping stone up the Himalayan-sized mountain of information security. It presents itself as perhaps the most modest beginning guideline for all merchants, both big and small, to expand from. It would be pragmatically impossible for this paper to expand upon every conceivable (and, indeed, inconceivable) notion to expand from the basics of PCI compliance, especially due to the infinite combinations of systems and services in a merchant's setup, so, indeed, the onus of preparing and planning falls ultimately on the merchants themselves (and, of course, their security team). The task is difficult, though not impossible. It is, however, indeed quite impossible if only the bare minimum basics of PCI compliance are all that are implemented.
If only one thing is to be taken away from all of this, then at the very least take from Requirement #12 one simple thought: hope for the best, but absolutely plan for the worst; it can, and sometimes does, happen.