Today is cost model day. I'm in the process of writing a manuscript that aligns cloud computing with the existing internal control and security models we know and love, and helps us identify those elements of cloud computing that introduce truly new control risks and vulnerabilities. One of the critical tools we seem to lack is a flexible, usable cost model that can be used to evaluate an investment in Cloud Computing up front, and further, to evaluate whether ongoing investment in Cloud is returning value to the business in line with expectations.
The RAD Lab at Berkeley has put forth some straightforward computations that regard IT as a compute utility and compare costs on that basis in the February paper Above the Clouds: A Berkeley View of Cloud Computing [p.12]. This needs to be expanded with the costs and anticipated benefits associated with various use cases, and turned into a template that enables both SMEs and large enterprises to evaluate whether the Cloud investment is worth a plunge, or just a "toe dip". From what I've found so far, most of the published case studies and use cases are one-off, major-cost-savings, couldn't-have-done-it-any-other-way feats of heroism, rather than answers to the mundane question: can this technology help me improve my proprietary business model, and is it worth the investment?
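The core of that utility-computing comparison can be sketched in a few lines. This is a minimal, illustrative sketch of the tradeoff (the numbers below are hypothetical, not taken from the paper): renting cloud capacity pays off when the expected profit per user-hour in the cloud meets or beats the profit per user-hour on owned hardware, once the datacenter's fixed costs are discounted by its average utilization.

```python
def hourly_profit_cloud(revenue_per_hour, cost_per_hour_cloud):
    """Profit per user-hour when capacity is rented on demand;
    you pay only for the hours you actually use."""
    return revenue_per_hour - cost_per_hour_cloud

def hourly_profit_datacenter(revenue_per_hour, cost_per_hour_dc, utilization):
    """Profit per user-hour on owned hardware; the amortized fixed cost
    is spread over only the fraction of capacity actually used."""
    return revenue_per_hour - cost_per_hour_dc / utilization

# Hypothetical inputs: $0.25/hr revenue per user-hour, $0.12/hr cloud cost,
# $0.08/hr amortized datacenter cost at 40% average utilization.
cloud = hourly_profit_cloud(0.25, 0.12)          # 0.25 - 0.12 = 0.13/hour
dc = hourly_profit_datacenter(0.25, 0.08, 0.40)  # 0.25 - 0.08/0.40 = 0.05/hour
prefer_cloud = cloud >= dc
```

Even with a lower sticker price per hour, the datacenter loses here because low utilization inflates its effective cost per useful hour — which is exactly why any usable cost template has to make utilization an explicit input rather than an afterthought.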
If you've got a great story about how cloud computing is used and the cost/benefit equation to quantify that benefit, I'd love to hear about it!
So, that's my Sunday... my goal is to make your IT simple[r].
Sunday, January 10, 2010
Tuesday, January 5, 2010
Measuring the Success of Our Efforts
One of the things we have lacked in the IT Risk and Information Security Fields is a decent way to measure how we are doing.
A decade ago, it was popular to measure the levels of viruses. Some of us (not this chick) would report numbers of viruses in the network every month. If they were up, things were trending badly; if they were down, we claimed victory.
Hindsight makes everything clearer, and if one thinks about it, that is a measurement filled with fallacy. I could report lower, or even zero, viruses in my network and it could mean a number of things: (1) viruses were higher and now they are lower, or (2) I am incapable of seeing how many viruses there really are. It also raises the question of whether viruses on the network are a particularly bad thing. When I was a CISO at Microsoft, we proudly (and publicly) acknowledged that the corporate network was as dirty as the Internet. But for most corporations, giving management a number - ANY number - made everyone feel better, and we happily measured the "success" of the security program on metrics that meant absolutely nothing at all. I suppose in the coming months we will start to measure our improvement of airport security by counting how many body scanners are in place and how many people are added to "no fly" lists - again, completely meaningless measurements, because they don't reflect reality.
The way we measure things is called a "measurement system." We must understand the root cause of the problem, or multiple root causes in complex problems, and choose measurements that tell how well we are addressing the root cause. That is a primary measurement. We also need a secondary measurement - one that will ensure that we didn't "fix" the problem in one department, only to create a nightmare situation for another department. That would be "transferring the problem" in a way that hurts the organization, but does not reduce the risk appropriately.
I'll give another example of transferring the problem, something we can almost all relate to this time of year: weight loss. If I need to lose 10 pounds (which I always do) and I starve myself, I will lose weight. The weight I lose will be the wrong kind of weight - I will lose muscle AND I will reduce my metabolism, so as soon as I eat again, it will all come back with interest, and my lean muscle mass will be worse than before. What happened? I lost weight, but I chose the wrong metric by only measuring weight loss. The root cause of the problem is not my extra 10 pounds; it is a sluggish metabolism, which is a direct result of not enough exercise. How far I walk every day would be an effective measurement in that scenario.
In the world of information security, and security in general, we react to the superficial, and content ourselves far too easily with responses that don't really solve anything, and often make things worse. How can that happen? Because we don't measure results properly, so we don't set expectations properly. As you listen in the days ahead to the discussions about national security, realize that it will not be solved by the superficial improvements being discussed. We could very well end up monitoring the wrong controls, spend billions of dollars, and find ourselves in an even worse security situation than we are in today. Think about what we are doing, and how the problem is being moved from place to place. We may also have to ask ourselves: is this a problem that can really be eliminated? Or, as individuals, do we accept some level of accountability for our personal security and safety and manage our personal risk accordingly?
Just something to think about.
Labels:
IT Risk,
Measurements,
Travel Security
Sunday, January 3, 2010
Airport Security - Just smile for the full-body scan?
I think I've just gotten the compelling incentive to workout more - I plan to travel and based on recent events, I'll likely be checked with a full body scanner. The news says the scanner technology provides visuals under clothing, detailed enough to "see the sweat on someone's back." Sweet. I suppose if a person has strong feelings about modesty they can stay home, or be prepared for a pat down like none experienced outside of the police station (we already know PETN in briefs needs checking, but the same amount of PETN would pad out a bra, or be hidden in body cavities just as easily).
The scanners are in place in various airports already (Denver, Reagan, and London, for example). They are in place in Amsterdam, but they just weren't used for the Christmas Day Bomber. If body scanners were going to be effective, they would need to be used on every single person - assuming they were able to detect the items of concern (reports indicate they may not). I understand one can refuse the scan, but must submit to a pat down instead. Will this include effective checks of gender-specific anatomy? It's definitely a weak-link model: all the investment, all the invasion, and potential abuse of personal privacy will only help us if we use it all-the-time-everywhere. If this is the control of choice, it will be coming soon to an airport near you.
But is this really the right control? Recently I traveled to Israel via Paris, and am keenly sensitive to the security aspects of travel there. Somehow, I made a mistake in packing, and found all my liquids for two weeks of travel (not a huge amount, but definitely over TSA limits) in my carry-on bag just as we were going through security. As we approached the x-ray at SEATAC, I was prepared to throw bottles of sunscreen, etc., into the trash, but tried one thing first: I used multiple 1-quart baggies to divide up the liquids, and then placed each baggie in a separate tote to go through the x-ray. It worked! At that point I realized TSA screeners have no idea who owns what in the various totes they screen. That seems like a big weakness in the scanning process. Could I have hidden a few ounces of PETN in a cosmetic container? Certainly, yes: the only risk of being caught would be if they had a specially trained dog to sniff for it, or if they did the additional scan for explosives. I think the chances of foiling the carry-on scanning system are pretty high. If that is true, the privacy we are giving up for full-body scanning at airports could be for naught. Ugh. DHS needs to look at their controls from an effectiveness standpoint. Full-body scanners are a "cosmetic" control - it looks like they are doing something, but it may not do any real good.
The ACLU is suggesting that full-body scanning at airports is just the first step on a slippery slope. Why not use them at sports events? Access to shopping malls? Access to anywhere there is a large concentration of people? You want to go shopping? Scan. Movies? Scan. Baseball? Scan. Yikes!
The potential for abuse is there. We don't see the person reading the scan, we don't know what is really viewed on the scan, we don't know where the information goes after the scan (we're told it is erased but come on...). Let's just say the transparency in this whole control arrangement leaves a lot to be desired (yes, that pun was intended).
So people are outraged by this, right? No. The acceptance of this intrusion is quite matter-of-fact. "In this day and age we have to accept this sort of thing," was the response from those queried, according to an article in USA Today. That is chilling.
This is not a new discussion. Yes, in ways it is much more physically personal. But we've been here before, with the Clipper Chip discussion during Clinton's administration. Lynn McNulty was the designated "arrow catcher" for NIST during that episode as the debate raged about the loss of privacy of our communications due to the back door designed into that crypto mechanism. The FBI pushed hard for that technology. Ultimately the public outcry caused the Feds to re-examine their approach and come up with a better way. One of my heroes, Marc Rotenberg of EPIC, continually watches out for our personal privacy. But this time... we just have to accept it or presumably face being blown out of the sky?
Not this chick. We have to do better. We must demand it.
Labels:
Privacy,
Travel Security
Friday, January 1, 2010
Due Diligence to a Standard of Care
I'm not sure it's legitimate (or legal) for me to use the tag-line Due Diligence to a Standard of Care in my business anymore.
That tagline was something I started using long before my brief stint at Microsoft, where "standard of care" seemed to really take off. It was my mantra as a CISO, because it sent the message that securing everything was just not feasible, but doing that which could be legally considered reasonable was at least an achievable target. It didn't stop me from having a boss whose performance objective for me as a CISO was "no hacks, no leaks" but I think most people understood the message that "due diligence to a standard of care" meant.
I did not originate the phrase. Donn Parker, who founded the International Information Integrity Institute at SRI International over two decades ago, first promoted the idea of establishing a security framework that could be defended as 'due care' versus conducting formalized risk assessments for every decision. He was right: we can't secure everything, and if we did a formal risk assessment for every security decision, we would get nothing else done. Theoretically, we should be able to establish a reasonable standard of care for our clients' business, and put in place processes and techniques to demonstrate due diligence to that standard of care on an ongoing basis.
Here is where a license to practice law would be more than handy: defining what is "reasonable" and what would constitute "due diligence" to a prudent individual. What "standard" should we adopt? ITIL? ISO 27001 (and the rest of the 27000 series)? CobiT? How does an information security professional determine that reasonable measures were taken to ensure the integrity of information in e-discovery? How does one defend technical architectures and supporting processes as reasonable measures to detect nefarious behavior in the network? You get the idea... these recommendations have gone beyond the role of the information security professional and require the advice of a qualified legal professional. How much about information security can we advise on, and how much requires us to work under the oversight of the legal profession? It won't be long before the answer is: "We can't say much."
When I started in information security (mid-80s-ish), we were just beginning to get a whiff of data security laws, mostly pertaining to information of interest to the US Federal Government. You probably could have counted the legislation relating to digital data protection on half the fingers of one hand, if that.
Today, we need software services to track pending US legislation on information security and privacy because there is so much of it. The volume is exploding, and while I'm excited about some legislative "teeth", the codification of information security advice into statutes is having an interesting secondary effect on those of us who consult full-time: giving advice on what constitutes "compliance" in a field where compliance can be defined by statute is effectively practicing law without a license (perhaps I should get legal advice before I offer this opinion)! We have healthcare information handling laws, data protection laws for personal information, and privacy laws, most of which are at the state level. Senate Bill S.773, introduced earlier this year and sent to committee, was an attempt to introduce new cybersecurity legislation at the Federal level to "ensure the continued free flow of commerce within the United States and with its global trading partners..." but it has not emerged from committee. May it never (that will be a topic for another post!)
For those of you in the Seattle area, I will be teaming up again with the CEO of Legicrawler, Beckie Krantz, JD, to give a presentation on compliance with information security and privacy laws in the US on January 19th, 2010. Information on the session is below - registration for the event is through the Puget Sound chapter of ISACA.
Labels:
Legislation
Recommended reading
I'm just getting into it, but I can tell that one of my favorite tech books of 2009 is going to be David S. Linthicum's book Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide. It is practical in its approach, and highlights the tremendous opportunity for business transformation that comes from combining SOA (Service-Oriented Architecture) with implementation in the Cloud. He makes a great case for SOA as part of an IT Risk Management approach because of the strong governance that SOA (at least good SOA) requires.
Labels:
IT Risk
An exceptional beginning
As the ball dropped in Times Square, and the fireworks lit off the Space Needle in Seattle, a Blue Moon accompanied the sendoff of 2009 and the welcome of 2010. A Blue Moon occurs when a full moon falls twice in one calendar month. It happens about every 2.5 years, but the next time it will fall on the cusp of the civil-year change will be 2028. So this feels a little exceptional - though like all heavenly events, it will happen again predictably. Where we will be, and what we will be doing when the next one comes around is, well, out of our hands.
What does this have to do with IT Risk? We deal with unpredictable events. Unlike the moons, unlike the tides, our profession is governed more by the unforeseen, the unplanned, the "zero day" event that we didn't see coming. At times it is thrilling, but over time, it is exhausting... and I believe it's unnecessary. Much of what we deal with as fire drills in our profession is foreseeable, and manageable, if we have the will to do so.
So welcome to the kickoff of Security Curmudgeon. This happens once in a New Year's Blue Moon. We cannot predict all that will happen to us in 2010, all we can do is anticipate the probable, and be ready for the improbable. With anticipation of Cloud Computing, a rise in cyber-crime, technology innovation, and the hoped-for economic upturn (not to mention what promises to be a thriller of a political year) we will likely have a lot to do in reactive mode this year. Let's change that so that by the next Blue Moon, we'll be in a more relaxed, proactive posture for managing IT Risk.
Labels:
IT Risk