Security is a constantly moving target. No system can ever be presumed "secure" - information security is a process, not a goal. Even the best-secured system will become vulnerable to new research over time: those who want to attack systems to extract information have every motive to refine their methods and practices, and they present a constantly moving threat. No amount of 'security controls' or certifications, even if they address every vulnerability known today, will remain useful against the novel forms of attack that tomorrow will bring. Nor are the standards and practices of yesterday guaranteed to still be effective tomorrow; they too will become outmoded as new research reveals what attackers can do when attempting to infiltrate or damage your systems. A determined attacker will always eventually succeed. Much like entropy, this is a game where winning is impossible; the best possible outcome is to delay your loss a little longer. You will lose, eventually; this is guaranteed.
This does not mean that investments in security are not worthwhile, though. It merely means that you will need to adjust your strategies to handle this different assumption - that you need strategies for losing as well as for winning.
The first step in building an effective strategy is to determine what you know and how it compares to what you can possibly know. For your strategies to be effective, you have to be aware of the full breadth and scope of your systems and networks - from the type and disposition of your network switches to the specific patches applied to every single system in your domain. You must know how all the myriad parts of these systems interconnect; you must know what a normal, well-running system looks like and what it looks like when it fails.
This is obviously not going to be the case for anyone with more than a small handful of systems at their command; this depth of information will quickly expand beyond the ability of any person to keep comfortably in their head. This is why tools for managing inventories exist, and for managing baselines.
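A minimal sketch of what such a baseline tool does - assuming a simple file-hashing approach, where real inventory and integrity-monitoring tools are far more elaborate - might look like this:

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 hash for each file: this is the 'known good' baseline."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def drift(baseline, current):
    """Report files that changed, appeared, or vanished since the baseline."""
    changed = [p for p in baseline if p in current and current[p] != baseline[p]]
    missing = [p for p in baseline if p not in current]
    added = [p for p in current if p not in baseline]
    return {"changed": changed, "missing": missing, "added": added}
```

Comparing a fresh `snapshot` against a stored one tells you what moved underneath you - the same question, at larger scale, that inventory and baseline management tools exist to answer.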
Knowing the limits of your knowledge matters too; finding the edges where what you know for sure trickles off into speculation and supposition is just as important in formulating strategies as having an effective inventory of your actual knowledge. It tells you what you will need to learn before you can make an effective choice at any given point, and it shows you where your blind spots are.
Speculating on the things you do not know and cannot know also has its uses - besides reinforcing your knowledge of the boundaries of your knowledge, you also can gain perspective on how to refine your further research. If you cannot know something, then it is a waste of time for you to try to find it out; there is distinct value in knowing that a given avenue is not worth your time.
This metaknowledge needs continual re-evaluation; as others research and release results, the boundaries of what you know will change, as will the boundaries of what is knowable. Maintenance of this metaknowledge database is, in its own way, just as important as the knowledge that you have.
What is Written is Dead
It is a common experience when finding policy documents for large organizations to see a notice that a printed copy is presumed obsolete, and that the current copy is to be found on a departmental fileserver. This is the case in, for instance, government agencies: the policies and procedures change on a regular enough basis that any printed copy is bound to be obsolete between the time it was printed and the next time it is consulted. This basic recognition that information is dynamic and needs constant adjustment pervades even a hidebound bureaucratic institution like the Federal Government; why, then, should information security be any different?
Every standard that is written is obsolete by the time it is published. PCI DSS, ISO 27001, or FIPS - each is an artifact of the time it was built or last revised, and retains, in the end, little more than historical interest. The landscape they were built to address has changed, and will change further by the time they are revised.
This is not to say that they are without merit; indeed, as historical documents, they show a very clear progression of what 'best practices' consist of at various times, and they can provide a useful milestone of the minimum standards and practices that a reasonably intelligent organization will need to adhere to in order to have even a modicum of success.
However, governance by checklist is doomed to fail, and to fail sooner than other strategies.
Evaluating the security posture of an organization by consulting a checklist of "controls" decreed as necessary (at the time the standard was written) limits the scope of the evaluation to the small window visible to the people who drafted that standard. Standards drafters are industry professionals, true, but they are not prescient: they do not know what types of attacks will become feasible in the future, so their standards are necessarily limited to counters for the types of attacks that were most likely when the standard was drafted.
Attack researchers do not stop their research because a standard is being drafted; by the time the standard is published, some new form of attack has become feasible and is being used - or an old form of attack that was discarded as irrelevant was given an update for new methods and has been brought back into widespread use.
Thus, simple adherence to standards is not enough to ensure any degree of safety - you can be entirely compliant with every standard and still fall victim to attacks that were developed after the standards were drafted or that were dismissed as antique and irrelevant by the authors of the standards. A standard is a starting point for an evaluation, and a starting point only.
No Heist Too Small
You may feel that your organization does not pose a viable target - that since you're not the Feds, nor a large corporation, nor a high-tech firm, attackers won't bother to infiltrate you. If there are no high-value targets on your network, why bother to do more than the bare minimum? Why should you be held to the same high standards as the big players?
Attackers do not see you in that light. Regardless of whether you are a big player or a small one, all an attacker sees is a system that could be exploited for either immediate gain or for future use. The Target breach of '13 shows one particular methodology that is bound to be used increasingly often in the future - that of using a smaller vendor's access to a larger company's systems as a means of infiltrating the larger company.
Additionally, not all attackers are going to be after large returns; much as with crime outside of the information security sphere, there are plenty of small individual criminals, outside the major organized crime syndicates, who will be pleased with relatively small returns. Many of these attackers are relatively unskilled themselves, but the rise of black markets selling software for system exploitation has made an alternate path available for would-be 'cybercriminals'. Where before, someone aspiring to break into a system and steal information needed to be highly knowledgeable, skilled, and persistent, now all that is required is the ability to follow directions and enough budget to pay for the tools required to accomplish a given goal.
Further, with the massive interconnection of nearly every system in the first world with the internet, persons in countries with much lower standards of living - places where a couple of dollars is a significant windfall rather than the expected price of a cup of coffee - have the means to access systems where data worth that amount is stored. Your credit card information may only be worth a few dollars on the black market, but a thief may only need a few dollars; your medical records may only be worth a couple hundred, but that would be enough to pay for an attacker's housing for a month.
Since these are people who you will never meet and who will never meet you, the psychological price of the attack is greatly decreased as well. To an attacker on the other side of the world, you aren't a person - you are a string of letters on their screen or an entry in their database. They don't care about the inconvenience, suffering, or trouble that you end up going through because of their activities, so they have no psychological reason to keep from deploying any of a number of dirty tricks to get what they want out of you - even if it's something as potentially damaging and life-altering as using child pornography to blackmail you into sending them a few dollars.
These attackers are myriad, and their numbers are growing by the day, and they don't care that you are not a 'serious target' nor do they care that your life is potentially ruined. All they care about is whether or not they can get what they want out of some target in those rich First-World countries - and whether the methods that they're using are effective in extracting dollars from pockets.
You Can't Patch People
Even if you manage to fix all your technological vulnerabilities - which will never happen - your users are still an enormous attack surface. Systems and networks exist solely to enable people to use them; no company or organization is going to buy computers to sit in a sort of platonic ideal of a network.
Where there are people, there are vulnerabilities that can never be entirely fixed. Even the most savvy professionals fall prey to social engineering; even the most cynical and suspicious people can be fooled. The hooks that are exploited by con artists, frauds, hucksters, snake-oil salesmen and marketers are deep and primal, based in the long history of human social interaction. Every time there is a human to human connection there is the possibility of attack; every business action is an opportunity for exploitation.
Even apparently benign activities can constitute an attack. Social media, for example, hosts any number of "quizzes" and "games" where people share trivia about themselves. "Your movie star name" or "Your superhero name," for instance, are built from items in your past (your school principal, the street you lived on, your pets' names) - the same types of information used to confirm identity in the case of a lost password. Sometimes people use this type of information for their passwords in the first place - it's personal and memorable. Once it's public, it becomes information that an attacker can exploit to attack that person's accounts, whether personal or business.
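A toy illustration of why this matters - the function name and the fact list here are hypothetical, not any real library's API - is how trivially a password screener (or an attacker) can match credentials against publicly shared trivia:

```python
def leaks_personal_info(password, public_facts):
    """Flag a password that embeds any publicly shared fact (case-insensitive).

    `public_facts` is whatever an attacker could scrape from quiz answers:
    pet names, streets, schools, and so on. Very short facts are skipped
    to avoid noise from incidental substrings.
    """
    lowered = password.lower()
    return any(fact.lower() in lowered for fact in public_facts if len(fact) >= 3)

# Tokens an attacker might harvest from a public "superhero name" quiz answer:
facts = ["Rex", "Maple Street", "Lincoln Elementary"]
```

The same check, run in reverse as a wordlist generator, is exactly how an attacker turns harmless-looking quiz answers into password and recovery-question guesses.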
Enabling people to do business effectively and to perform the tasks which are required by their positions necessarily provides them with the opportunity to act in ways which break security. Authorized users can be convinced to perform actions which should not be authorized; no access control can prevent an access that is allowed to a user but should not be allowed to the person the user is talking with.
Just as a compromised computer can be used as a pivot point to slip past countermeasures intended to prevent an attack, a compromised person can be used as a pivot point to gain accesses that would otherwise be forbidden - and no amount of training or policy can entirely prevent a suitably motivated and skilled attacker from exploiting this hole in your defenses. It is, after all, a hole that is guaranteed to be available regardless of the technology you use and regardless of any technological countermeasures you put into place. Targeting that particular vulnerability is usually a reliable way to get a payoff, drawing on skills that grifters and marketers have refined for thousands of years. It is the oldest exploit, and it is one that will never be entirely patched.
They Need to Win Once
There is a vast asymmetry between the success conditions of attackers and the success conditions of defenders. Attackers only have to succeed once in order to gain their 'win' condition, and they have the luxury of having a large variety of attacks to choose from. If one type of attack fails, they may move on, or they may change to another vector. The attacker is always in charge of the pace of engagement. The attacker gets to choose which of your attack surfaces to attack. They get to dictate the whole course of the battle; their only limitation is that they must find some surface where you are exposed in order to find a way in.
The defensive side, by contrast, is inherently reactive; there is no way to "head off" an attack anywhere other than on the local network. Every single day is a siege, with some number of attackers arriving from arbitrary locations at the gateway and trying to find a way through the defenses. Intelligence available to companies is also limited; many companies exist to monetize this type of data, and they are reluctant to share it with other companies in the same space - doing so would cost them revenue. The defensive side must either prevent or nullify every attack as it happens - and since attacks can arrive at any hour of the day or night, it must do so whether or not anyone is available to handle security measures. Worse, the defenders do not have the option of cutting off access to the outside world; they must carry out their defense while normal day-to-day business is occurring, with as little disruption to that business as possible. This day-to-day business can disguise attacks, or disguise the indications that an attack has succeeded; interrupting it constitutes, effectively, an attack on the company, in much the same way that some viruses turn a body's own immune system against it.
The attacker has an inherent advantage, and it is one which is exploited to the fullest. The attacker's goals are relatively simple; they only need to gain some limited kind of access for a limited period of time to accomplish their goal. The defender's goals are orders of magnitude more complex; the defender must seek out every single indication of attack without interrupting the business' operations, and they must do so continually throughout the entire life of the business.
Under these conditions of engagement, you will be breached. It is not a matter of "if" but of "when" - and "how bad."
Preventing every breach under these conditions is an entirely unreasonable goal; it is entirely doomed to failure, and any business strategy that relies on preventing breaches is one which is not based in reality.
As alluded to before, you cannot be assured that your technical systems are without flaws. From the silicon on up, every single layer of your systems and networks has the potential for unexpected behavior; any kind of unexpected behavior is a potential avenue for attack.
Even assuming that the code you're running has been vetted from top to bottom and is entirely without errors at the time of last vetting, you are not guaranteed that some new behavior will not be discovered at a later point. Given that code is never entirely vetted - given that every major project depends on a series of libraries in order to function - given that new vulnerabilities are found in major projects on a daily basis - there is absolutely no possibility for a given piece of code to be guaranteed as bug-free.
Unexpected behaviors can be derived from the interaction of the physical equipment that the code runs on as well; various groups with an interest in those fields have developed any number of security-breaching side-channel attacks, from listening to capacitor whine in power supplies (and extrapolating from that the operations being performed in the system in an attempt to derive cryptographic keys) to sniffing for Van Eck radiation to intercepting packages during shipping in order to implant listening devices.
Charles Stross said it most evocatively: "Didn't they know that the only unhackable computer is one that's running a secure operating system, welded inside a steel safe, buried under a ton of concrete at the bottom of a coal mine guarded by the SAS and a couple of armoured divisions, and switched off?"
Slowing the Burn
Physical security countermeasures are intended to do three things: to harden the target enough to repel attackers, to provide a way to detect attacks, and to slow successful and dedicated attackers enough that a police or other security response can take more direct action.
This does not work nearly as well in the information world. Some attackers use phishing, where their malicious traffic is, effectively, executed by an authorized user - so someone who is allowed by the security systems ends up carrying out the attack as a proxy for the actual attacker. Some attackers overwhelm monitoring systems with vast amounts of traffic - or, if their motive is merely to deny service, that can be the entirety of the attack, where the systems' resources are consumed through an excess of malicious traffic. Some attacks are subtle, probing from various points, in a way that does not alert those watching until it is too late.
The intent behind the various standards, and the intent behind most best practices, is to harden information systems enough to achieve the first goal - that of being too difficult for run-of-the-mill, casual, attackers to handle. For the typical profit-seeking bot herder, it's a waste of time to expend more than a moment's effort trying to exploit your systems; there are much easier targets available.
Attempting to detect attacks requires tools like an IDS (intrusion detection system) - a network appliance that watches ongoing traffic for indications that an attack is taking place. This raises the bar significantly for an attacker: even if they succeed in penetrating the network, their presence may be revealed to an attentive monitor.
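The simplest possible IDS rule - flag a source that is opening connections faster than any legitimate client would - can be sketched in a few lines. This is only an illustration of the watch-traffic-and-alert shape; real engines such as Snort, Suricata, or Zeek do signature matching and protocol analysis far beyond a rate count:

```python
from collections import defaultdict, deque

class NaiveRateDetector:
    """Toy IDS rule: alert on a source exceeding a connection-rate threshold."""

    def __init__(self, window_seconds=10, threshold=100):
        self.window = window_seconds
        self.threshold = threshold
        self.events = defaultdict(deque)   # source ip -> recent timestamps

    def observe(self, src_ip, timestamp):
        """Record one connection attempt; return True if the source is over threshold."""
        q = self.events[src_ip]
        q.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.threshold
```

Note how crude this is: as the surrounding text observes, an attacker who stays just under the threshold, or who probes slowly from many points, never trips the rule - which is why detection is a bar to raise, not a guarantee.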
The third function, that of slowing the attacker enough for a response, is the most difficult to accomplish. Some slowing is possible through various types of configurations - making the network nonresponsive to surveillance, making systems take an excessive amount of time to respond to incorrect passwords, and deploying honeypots to waste an attacker's time and provide extra warning. Unfortunately, unlike in the real world, it's generally not possible to call the police on someone performing an informational attack; many attacks take place from jurisdictions that are, to put it politely, uncooperative with law enforcement requests, and the rest are masked behind one or more proxies who may well be victims of previous attacks.
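One of those slowing tactics - stretching the response time to incorrect passwords - can be sketched as a simple per-user exponential backoff. The class name and parameters here are illustrative, not any particular product's interface:

```python
class LoginThrottle:
    """Delay responses after failed logins, doubling the stall per consecutive failure.

    The point is not to stop a determined attacker, but to slow an online
    password-guessing run from many tries per second to a handful per minute,
    buying the monitoring side time to notice.
    """

    def __init__(self, base_delay=0.5, cap=30.0):
        self.base_delay = base_delay       # seconds after the first failure
        self.cap = cap                     # never stall longer than this
        self.failures = {}                 # username -> consecutive failed attempts

    def penalty(self, user):
        """Seconds to stall before answering this user's next login attempt."""
        n = self.failures.get(user, 0)
        return min(self.base_delay * (2 ** (n - 1)), self.cap) if n else 0.0

    def record(self, user, success):
        """Update the failure count after an attempt; success resets it."""
        if success:
            self.failures.pop(user, None)
        else:
            self.failures[user] = self.failures.get(user, 0) + 1
```

A login handler would call `time.sleep(throttle.penalty(user))` before responding, then `throttle.record(user, success)` afterward; the cap keeps a flood of deliberate failures from turning the throttle itself into a denial-of-service lever against legitimate users.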
You Don't Need to Outrun the Bear
Even though you cannot be guaranteed security, it is still very worthwhile to take significant measures to protect your systems. Most reported 'hacks' are far from sophisticated; the vast majority of intrusions are the result of old, unpatched vulnerabilities. It is very rare for an ordinary person to encounter a "0-day" - one of those previously unknown vulnerabilities being exploited for the first time in an attack. More than 80% of attacks involve the exploitation of security holes that are weeks or months - or even years - old.
Remember, malware is a business. Bot herders are, for the most part, not doing this for fun and games; they are doing this to make money. Malware authors do not generally craft their wares and give them away out of the goodness of their hearts; they sell them because they can make money doing so. If you are too difficult a target for these people, they will pass you by in favor of someone else who is not taking the precautions that you are.
Much like the old joke about the two hikers, you do not need to outrun the bear - you need to outrun the other guy. If the bear catches the other guy, then you have all the time in the world to get to safety.
Consider all the business owners who think they know better - who think that they're not a target, and that they aren't going to be attacked, and that information security is a waste of time and money for them. They are the ones who will be eaten by the metaphorical bear; if you are a harder target than they are, then you will be able to use them as bear bait and live another day.
Taking measures above and beyond minimum compliance, and taking an active interest in the security of your systems and networks, gives you the advantage that your colleagues lack. Installing information security systems, and contracting with information security professionals (like the folks at BiJoTi.be for instance) can give you significant business advantages.
Failure is Always an Option
However, targeted attacks still exist. It may be a disgruntled ex-employee; it may be that you're the best way to get to something else; it may just be bad luck, but someday you may end up in the sights of a skilled and determined attacker. In that case, eventually, it is entirely likely that all your preparations will be for naught and you will be cracked open like an egg under a hammer.
Failure, given a long enough timeline, is inevitable. You cannot entirely prevent it. You can minimize the chances as far as possible, yes; you can make sure that you are much less likely than your colleagues to fail - but eventually you too will experience it.
Planning for failure does not mean that you are admitting you are less capable. Planning for failure is intelligent: it means you have taken the hazards of the information world seriously, and have anticipated what could happen in the event that someone finds a way to defeat the various measures that you have put into place.
Mitigation is the name of the game. Containment helps: if you have proper compartmentalization, then a failure in one part of your network will be much less likely to spread to other parts; this means that you can remediate the failure while still being allowed access to the rest of your resources. Full knowledge also helps: if you know everything that you have and how important each part is to your overall operation, then you can anticipate the effect that the loss or compromise of each of those things will have on your operations.
If you have a plan for failure, and if you can quickly put such a plan in place when there is a failure to contain and mitigate whatever incident has occurred, you will find yourself much better off than otherwise. The bear might eat your foot, but you'll still be able to limp off to live another day. Your colleagues without a plan for failure will find themselves much worse off.
Know where you can fail: by controlling your failures, you will enable your success.
Nihil Certum Praeter Mortem Ac Tributum Est
The laws of Thermodynamics have been stated as:
- There ain't no such thing as a free lunch
- You can't even get the lunch you paid for
- You have to have lunch
Information security adheres strictly to these rules as well. Securing your systems never comes free; you're going to have to make a continuous effort to keep them secured (because the attackers aren't going to stop trying to find new ways in, after all); and you have to secure them somehow or you'll find yourself out of business and without any systems to secure.
Security is not a goal that you can ever attain. You will never find yourself in a situation where there is nothing more that you can do; you will always find that there is more work to be done - and by the time you finish doing the work you know about, even more will need to be done. It's the Red Queen's race: you have to run as fast as you possibly can just to stay in the same place.
Security is a process. It is a state of mind, and it is an ongoing commitment. Security is an investment, much like electricity and water; you'll need some today to keep doing business today, and you'll need more tomorrow if you want to do business tomorrow.
If there is one thing to take away, it is this: you will never be secure, and you will always have to anticipate failure - but you can make failure less likely, and, by controlling where and how your failures occur, you can be successful overall.