How my site got hacked, and how you can learn from that

Detection

I should have acted on the first signals more aggressively. But let’s talk about that later.

Here is the story of my site being infected with malware, as viewed by a professional cloud security expert. I am going to apply all that cloud security theory to it.

The hack led to business damage at the end of one of my webinars. A couple of weeks ago, on a Friday, I did a webinar, at the end of which I showed two links to my site as a call to action: www.clubcloudcomputing.com, to be precise, and www.clubcloudcomputing.com/cloud-risk-scan/.

However, three participants reported in the chat that they could not access those links because their corporate firewalls blocked them. Three different security programs (Microsoft, McAfee and Kaspersky) rated the site as unsafe for various reasons, ranging from detected Trojans to the warning “Malicious software threat: This site contains links to viruses or other software programs that can reveal personal information stored or typed on your computer to malicious persons”.

So, instead of continuing the conversation about how I could be of help to these people, and talking about my next program, I stalled. Nobody bought my program. Business lost. And my time suddenly had to be diverted to fixing this. Another loss. This is all real damage. The only upside is that I can write this article about it.

That was the detection phase. And as I said, I could have found it earlier.

Analysis and eradication

Now for the analysis and eradication phase. What the heck was going on? I had very little luck in getting the security programs to tell me what was wrong with the site. www.virustotal.com reported one hit, by Yandex, though the diagnostics on its webmaster pages were vague. McAfee and Kaspersky don’t seem to have any service that is helpful with this.

In the meantime, three more reports came in on the site, adding Trend Micro to the list of blockers.

It took my site manager until Tuesday to fix it. Very disappointing. He was also not very communicative about the real underlying problem, other than that it was caused by a WordPress plugin that had not been updated. He did manage to restore the site and clean it. I think.

After I discovered the problem, I independently got a report from an SEO expert, who had noticed funny code in the pages and weird statistics in the analytics. He told me that the malware was in fact a five-year-old Mozilla exploit, number 17974 on exploit-db (I removed the link, because it gives my site a bad reputation).

It appeared to be an automated infection targeted at Mozilla users who had not updated their browsers. My site does not store any customer data; all form submissions go to a separate autoresponder and shopping cart service. So no data was lost or breached.

Recovery

Now for the recovery phase. Malware gone does not equal business problem gone. Even though the malware is erased, my site’s reputation is still suffering.

Getting off the blacklists is a hard process; they seem to parrot and echo each other. A week after the alleged site fix, I managed to get it off one or two engines. But it is still listed at Yandex, Clean MX, SCUMWARE and CRDF, none of which appears to have an expedient process for getting off its blacklist. The report at http://www.urlvoid.com/scan/clubcloudcomputing.com/ actually showed an increase in the number of blacklisting sites over the past few days, adding Fortinet’s FortiGuard.

One of the engines rates my site badly because it links to a bad site, namely itself. How Catch-22 can you get?

Sounds like a bad vigilante movie, where the main characters don’t care too much about the collateral damage they inflict. Listing malware sites is easy enough, delisting apparently is harder.

So this reputation might haunt me for who knows how long. Maybe the domain will never really recover.

On the positive side, some corporate firewalls now seem to allow my site again (but please help me verify that). Be aware, though, that most corporate firewalls are extremely paranoid, as they probably should be. Just having a simple link in my email message pointing to my homepage will get that message marked as [SUSPICIOUS MESSAGE] by one of the big four advisory firms.

Preparation

Finally, back to preparation. What could we have done to prevent this, or at least reduce the impact of the problem?

I have a backup running for this website. It is a WordPress plugin that dumps the entire site’s content into a Dropbox folder that is synced to my PC. Weeks before the webinar, I had installed F-Secure on the PC, and it flagged one of the files in the Dropbox folder. I reported this to my website manager, but I knew that the file was in a part of the website that was not in use, nor accessible to visitors of the website. That led me to believe it was a false positive, but I should have known better.

In the end, having the site itself generate a backup is not sufficient. The advantage is that such a backup should be easy to restore, but malware might take the backup software or its configuration as a first target. In fact, I suspect that in my case the malware created a few hundred thousand files, which clogged my Dropbox synchronization. However, I could not finish the forensics on that.

The site manager restored the site from a file system backup. I do not have access to that.

Externally spidering the website and versioning the result may be better. At any rate, this is a case for generating fully static websites.

So, obviously, the best direct preparation is regularly updating software and removing software you don’t need. Case in point: the malware was inserted into a piece of forum software that we never got to work properly. In the end we abandoned it in favor of a managed cloud solution (an e-learning platform).

Cloud security reference model

The cloud security reference model asks us to identify who is responsible for which piece of the software stack. I don’t think there is much confusion about who was supposed to keep the site’s software up to date. My site manager never denied that he was. But he did not put any warning system in place, and he ignored my F-Secure warning.

He also has not yet provided adequate forensics to me after the fact. Maybe a regular customer won’t need those details; I can see that. But I have professional interests beyond that, as this article proves.

Of course, my site manager is not the only one responsible for the software. He did not write it. The site’s software and plugins are either commercial or open source. Both have their own update policies or lack thereof. Both can be abandoned by their developers. But somebody needs to track that too.

Managing one custom WordPress website at a time is not likely to be a very viable business model in the long run. If your website is not very complicated functionally, you might consider static hosting, or move it to a cloud based website builder like squarespace.com or wix.com. You would still have to check their competence, but with thousands or hundreds of thousands of websites at stake, these companies are more likely to have the motivation and the resources to properly manage these risks.

As a business owner I am ultimately accountable for all IT risks that affect my business. Remember, any provider, including a managed hosting provider, can fail. You need a plan B. I do have some backup of the most important documents on my site. I wrote them. But in the end, the most irrecoverable asset lost here might be the domain name. As a precaution against that, I could have considered hosting the most important pages on another domain as well. In fact, I might have to do that, if this domain isn’t delisted quickly enough. It is a telling and disturbing sign that registrations for my newsletters these days mostly come from public email providers, not companies.

Wrapping up

I am disclosing my misfortune so that it may be of help to people. Whether you work in a large corporation or a small one, are on the consumer or on the provider side, you can use this case to improve your own management of IT risk.

What are the biggest lessons you should take?

Reputation damage that finds its way into the firewalls and proxies of customer companies leads to real and lasting business damage.

Exit and recovery plans can be considered on multiple levels. Sure, the basic backups matter, but at all times consider your business continuity from the top down, starting at your domain name.

I have multiple training programs developed, or in progress, to help improve IT risk management. Stay tuned.

So, in case you want to sign up to my newsletters, and cannot access www.clubcloudcomputing.com for whatever reason (LOL), hop over to www.ccsk.eu. That page is focused on the CCSK certification, but you will be updated on cloud risk in general. And if you have problems accessing www.clubcloudcomputing.com, please tell me which service is blocking it, when, and with what message.

Agile development requires modern digital infrastructures

Agile development is all the fashion nowadays. Why is that and what kind of digital infrastructures does that require?

Back in the old days, business software was primarily written to automate existing business processes. Those processes might change somewhat as a result, but at their core the processes were no different. Think accounting systems, scheduling, “customer relationship management” and so on.

Today we see that software not only automates these business processes, but becomes part of the product, or even becomes the product itself. And on top of that, this software is often an online service. Think of the smart room thermostat. Or financial services, where software increasingly is the main part of the product: think online banking. And in social media, from Facebook to Tinder, software really is the product.

The dance

Every product changes the world that uses it. Think how cars have changed the way people commute, or even choose where they live. Software is no different. But a changing world also changes the products we can use or want to use. There is a kind of dance between supply and demand. Do we have breakfast outside our house more often because there are more opportunities to do so, or does the supply of breakfast places increase as a result of us going out more? Just as in a dance, it is not always easy to tell who is leading whom.

Because software has now become the product, it also participates in the dance, and it becomes more important to adapt quickly to the dance partner. As a developer, you change the world that uses your software in ways you cannot always predict, so in reaction you have to adapt to that world.

The faster the better.

This explains the need for agile development. Between idea and realization (time to market) there should not be two years, but only two weeks, and preferably less.

What kind of digital infrastructures does that require?

The prime objective of digital infrastructures is to enable the deployment of application functionality. The quality of a digital infrastructure used to be measured by the number of users it could support well. For example, we used to talk about a system supporting 10,000 concurrent users with a response time of less than 4 seconds.

But agile development comes with a new goal: ‘feature velocity’. That is the speed with which new features can be deployed. The time between inception of a feature and its deployment to a large user base has to be shorter than the time it takes for the environment to change. In a dance you want to anticipate your partner, not stand on her toes.

Your digital infrastructure should not be a bottleneck. This requires features such as automated testing, quick scaling up and down of resources, and as little manual intervention as possible. This is the only way to shorten the lead time for a change.

Cloud computing

In summary: agile development requires cloud computing. Remember: the essential characteristics of cloud computing include fast, elastic and self-service provisioning of resources. That is what is required for agile development.

And then the dance continues. Because if we can do that, we can do other new things. Like better security. If you can respond quicker to new functional requirements, you can also respond quicker to security issues.

If you want more cloud security, look here.

A Dutch version of this article appeared earlier in Tijdschrift IT Management, a publication of ICT Media.

Cloud migration strategies and their impact on security and governance

Public cloud migrations come in different shapes and sizes, but I see three major approaches. Each of these has very different technical and governance implications.

Three approaches

Companies dying to get rid of their data centers often get started on a ‘lift and shift’ approach, where applications are moved from existing servers to equivalent servers in the cloud. The cloud service model consumed here is mainly IaaS (infrastructure as a service). Not much is outsourced to cloud providers here. Contrast that with SaaS.

At the other end of the spectrum is adopting SaaS solutions. More often than not, these trickle in from the business side, not from IT. They can range from small meeting planners to full-blown sales support systems.

More recently, developers have started to embrace cloud native architectures. Ultimately, both the target environment and the development environment can be cloud based. The cloud service model consumed here is typically PaaS.

I am not here to advocate the benefits of one over the other; I think there can be a business case for each of these.

The categories also have some overlap. Lift and shift can require some refactoring of code, to have it better fit cloud native deployments. And hardly any SaaS application is stand-alone, so some (cloud native) integration with other software is often required.

Profound differences

The big point I want to make here is that there are profound differences in the issues that each of these categories faces, and the hard decisions that have to be made. Most of these decisions are about governance and risk management.

With lift and shift, the application functionality is pretty clear, but bringing that out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application’s architecture may not be a good match for cloud, leading to poor performance and high cost.

One group of SaaS applications stems from ‘shadow IT’. The people who adopt them typically pay little attention to existing risk management policies. These applications can also add useless complexity to the application landscape. The governance challenges here are obvious: consolidate these applications and make them more compliant with company policies.

Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.

The positive side of SaaS solutions, in general, is that they are likely to be cloud native, which could greatly reduce their risk profile. Of course, this has to be validated, and a minimum risk control is to have a good exit strategy.

Finally, cloud native development is the most exciting, rewarding and risky approach. This is because it explores and creates new possibilities that can truly transform an organization.

One of the most obvious balances to strike here is between speed of innovation and independence of platform providers. The more you are willing to commit yourself to an innovative platform, the faster you may be able to move. The two big examples I see of that are big data and internet of things. The major cloud providers have very interesting offerings there, but moving a fully developed application from one provider to another is going to be a really painful proposition. And of course, the next important thing is for developers to truly understand the risks and benefits of cloud native development.

Again, big governance and risk management issues to address.

Next

Need to know more about the details of the service models and their impact on risk management and governance? You may find my training on cloud security very relevant for that. Click here for online training or classroom.

How the NSA hacks you, and what cloud can do about it

At the recent Usenix Enigma 2016 conference, NSA TAO chief Rob Joyce explained how his team works. By the way, TAO stands for Tailored Access Operations, which is a euphemism for hacking. See the full presentation here. Rob explains their methods, but between the lines he implies that other nation states are doing the same, so in a way he is here to help us. For that reason he also explains what makes their work hard.

After Snowden, I should not need to explain the extent of what is going on here.

In summary, the NSA’s method of operation is: “reconnaissance, initial exploitation, establish persistence, move laterally, collect and exfiltrate”.

In this article I won’t go into more detail on each of these. But here are a couple of rephrased quotes for illustration.

  • Reconnaissance: “We aim to know your network (i.e. infrastructure) better than you do”
  • Initial exploitation: “Zero day exploits are hardly ever necessary”
  • Lateral moves: “Nothing frustrates us more than being inside, and not able to move”

What is the implication of this for cloud security? Of course, if you replicate your legacy infrastructure into a cloud provider, it is not going to be more secure. So you need to do some more.

Cloud to the rescue?

Can the cloud model actually help with security? I think it can, and here are a few examples. They hinge on the cloud essential characteristics of self-service provisioning and rapid elasticity, which enable security automation.

Know your network. A good IaaS provider allows you to fully and automatically take stock of what you have provisioned. A very small proof of concept is on my github project ‘state of cloud’, which just lists all running EC2 instances in your AWS account across all regions. You can then do all kinds of reporting and analysis on your infrastructure, and in particular check for vulnerabilities like rogue machines and open ports.
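To make this concrete, here is a minimal sketch of such an inventory script, assuming Python with boto3 and configured AWS credentials. It illustrates the idea and is not the actual ‘state of cloud’ code.

```python
# Minimal sketch: list all running EC2 instances across all regions.
# Assumes boto3 is installed and AWS credentials are configured.
# Illustration only, not the actual 'state of cloud' project code.
import boto3

def list_running_instances():
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    for region in regions:
        regional = boto3.client("ec2", region_name=region)
        reservations = regional.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]
        for reservation in reservations:
            for instance in reservation["Instances"]:
                print(region, instance["InstanceId"], instance["InstanceType"])

if __name__ == "__main__":
    list_running_instances()
```

From such an inventory it is a small step to flag instances or open ports that nobody can account for.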

Code exploits. Why should you use zero day exploits, if organizations are months or even years behind on patching? Why are they behind? Because it is labor intensive. So automate it. Whenever an instance boots up, it should be patched automatically and then tested. All without manual intervention. This requires cloud automation.
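As one illustration of what that automation could look like on AWS, the sketch below triggers a patch run via Systems Manager. It assumes boto3, instances that run the SSM agent, and an instance profile that permits Systems Manager; the region is arbitrary.

```python
# Hedged sketch: trigger an automated patch run on newly started instances
# via AWS Systems Manager. Assumes the instances run the SSM agent and have
# an instance profile that permits Systems Manager. Illustration only.
import boto3

def patch_instances(instance_ids):
    ssm = boto3.client("ssm", region_name="eu-west-1")  # region is arbitrary
    response = ssm.send_command(
        InstanceIds=instance_ids,
        DocumentName="AWS-RunPatchBaseline",    # built-in patching document
        Parameters={"Operation": ["Install"]},  # install missing patches
    )
    return response["Command"]["CommandId"]
```

Hooking a call like this into the boot or deployment pipeline removes the manual step entirely.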

Lateral moves. A typical organization has a hard shell and a soft inside, so to speak. Once past the firewall, the attacker is like the fox in the henhouse. To counter this you need hyper-segregation, in particular of security groups and user credentials. You can have a security group per machine, and individual credentials per task. Only cloud automation enables you to do this at scale.
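As a sketch of what ‘a security group per machine’ can look like on AWS, the snippet below creates a dedicated group that opens only the single port an instance needs. It assumes boto3; the VPC ID, name, port and CIDR range are all hypothetical inputs.

```python
# Hedged sketch: one dedicated security group per machine, opening only
# the single port that machine needs. All inputs are hypothetical.
import boto3

def security_group_per_machine(vpc_id, name, port, cidr):
    ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is arbitrary
    group = ec2.create_security_group(
        GroupName=name,
        Description=f"Dedicated security group for {name}",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}],
        }],
    )
    return group["GroupId"]
```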

Summary

Hacking is an arms race. Automate and scale up your response or lose. Cloud computing might help.
Want to know more about cloud security? See the course calendar

Assuring your customer of your service quality

Do you deliver your software product as a service? Or do you offer another IT service online? Then you probably have found that your customers really need assurance that your service is good enough for their purposes. It has to be usable, of course, but it also has to fit within their risk appetite and compliance obligations.

During the service, or even in the process of procurement, your customer needs assurance. If not, they won’t remain your customer for very long.

Here is a sketch of a process that helps you demonstrate your quality to your customers.

Step 1. Figure out how the customer typically does assurance. Ask them. What are they scared of? Are they looking for specific certifications, audit reports or such? Think ISO27001, Cloud Controls Matrix, or staff certifications. You also want to figure out through what process they do this, if they have a process. Who is involved on the customer side? The IT security department? Internal audit? Corporate Risk Management? Your objective is to make their work in assessing your service as simple as possible. For example, a lot of companies are using the CAIQ format from the Cloud Security Alliance.

Step 2. What proof of your quality do you already have? There is knowledge about your service, processes and assets, but also skills in handling these. How can you show that? You should also be honest about your attitude in service delivery. Is your company doing stuff because it is “the right thing” or because you are forced to do it? In particular you are looking for the things that you do well in a repeatable, documented way.

Step 3. Gap analysis. Based on the output of step 1 and 2 you can get going on a gap analysis.

  • What are your big security holes? For example: do you allow your customers to rotate all login and API keys? (A minimal sketch of key rotation follows this list.)
  • What are the evidence holes? For example: is logging missing, or not exposed well enough to the customer?
  • What process holes are there in your system? For example, you may not have a repeatable process for tracking new security bugs or new compliance obligations.
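As a concrete illustration of the first bullet, here is what rotating a single API key can look like mechanically, using AWS IAM via boto3. The user name is hypothetical, and a real rotation process also has to distribute the new key to every consumer before the old one is disabled.

```python
# Hedged sketch: rotate one API access key for an IAM user on AWS.
# Assumes the user currently has only one key (IAM allows at most two),
# and that the new key is distributed to its consumers in between.
import boto3

def rotate_access_key(user_name, old_key_id):
    iam = boto3.client("iam")
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
    # ... distribute new_key to the consuming applications here ...
    iam.update_access_key(UserName=user_name, AccessKeyId=old_key_id,
                          Status="Inactive")  # disable before deleting
    iam.delete_access_key(UserName=user_name, AccessKeyId=old_key_id)
    return new_key["AccessKeyId"]
```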

Step 4. Talk to your customer. The ideal situation is where you can bring the results of the previous steps back to your customers and have an open dialogue about their requirements, desires and priorities. You want to figure out what the value is that the service is bringing them that offsets any residual risk that your service still poses to them.

Step 5. Improve, rinse and repeat. Feed the output of the previous step into your software development, your system and support organization, legal and sales. Make sure that whatever improvement they realize, they also build in the evidence that shows how it is an improvement. Status pages, dashboards, and self assessments (e.g. published under non-disclosure agreements) are good examples of that.

It does not stop here. Ideally you are in constant conversation with your customers about their new opportunities and new risks. As your service becomes more important to your customers, their risk appetite will change. You will need to address that.

If you want to know more about this, download the cloud security 101 one-pager, and drop me a line with your specific question.

How the internet is changing our thinking

The internet changes the way we think, Nicholas Carr writes in “The Shallows”.

Simple examples can be found in what we decide to look up instead of memorize, such as phone numbers. At the same time, it still makes sense to study and memorize traffic signs and history.

But old truths don’t all hold anymore.

The internet changes our decisions about what to share with whom and what to keep secret. A lot of people are comfortable sharing most of their feelings with the world, but it may be wise to be a bit more restrictive about sharing teenage party pictures, when you leave your house for a holiday, or your mother’s maiden name.

Not all old truths still hold.

Cloud computing changes the way we run professional IT. It changes our decisions on what we do ourselves and what we let others do for us.

Who owns the servers, who keeps the software safe? We can’t afford to run all of our interesting IT ourselves, but we don’t want to hand over everything. There are still things that we can do better than anybody else. So: less server hugging, and more useful applications that help our organizations stand out.

Old truths may need updating.

Want to know more? Have a look at my calendar for a free webinar where I will talk about this in more detail.

Cloud Security 101

Here is a one-page overview of the basic things you need to know about cloud security.

On the right you see the thumbnail version.
Just register below (or on the right if that does not work) for the full page. It has clickable links into additional resources.

 




IT leadership in the 21st century

The question I am working on is this: how can IT leaders drive the right level of cloud adoption? We know cloud computing can bring risks and benefits. But how can organizations swiftly and securely arrive at the right level of adoption?

No place for bean counters

When I talk about IT leaders, I don’t mean the bean counter who is only trying to keep IT costs down. That is not the skill set this article is about.

In an age where arguably every business is innovating through IT, a whole different skill set is required.

The IT leaders I am talking about are the ones whose mission it is to support the business goals and ambitions of their organizations. Their job is to use information technology to the maximum while staying within the flight envelope given by the organization’s risk appetite.

I suspect that this requires deep understanding of information technology and its risks.

The first question might be: why should we embrace the right level of cloud adoption?

My answer to that is that it depends on how digitally infused you are. If your company is mainly ‘brick and mortar’, cloud computing can lead to cost savings, if done right. But if your business is dependent on digital communication with customers, suppliers and other stakeholders, it will depend on its capacity to adapt its IT systems quickly. For that, cloud computing brings agility. And therefore IT leadership’s role includes enabling that agility for the business.

IT leaders

Who is actually leading IT and the company’s IT strategy? At first sight you might think of whoever sets the budget and oversees the staff. For example: which CRM system to invest in? In the end, the CIO is supposed to control that.

But that ignores a whole set of decisions on the long-term direction of IT. Although IT is a dynamic and rapidly changing field, it is also known for dragging along a lot of legacy. Anecdotes abound about COBOL mainframe applications that have been unchanged for decades, and production systems that are still running on Windows XP. Yet, at the time, those were sound, innovative decisions.

To be truly innovative requires making decisions on the future direction of technology and how the company is going to benefit from it. These decisions will impact the organization of IT way beyond the next budget cycle.

And if it is innovative, it is also unproven: “No risk, no fun”, “No pain, no gain”. It requires taking calculated risks that can have a large upside, yet fit within the risk appetite of the company as a whole. It requires making choices about what to adopt and what not.

These choices require a reasonably detailed understanding of new technology and the risks that it can bring. The better you understand the risks, the better the balance you can strike. Better brakes allow your car to go faster.

The competences to make these decisions are most likely to be found in the IT architects and IT risk managers. It might be in the skill set of the CIO, but I find that rare.

The required leadership skills not only extend to technology decisions. Every change in the structure of technology requires a change in the structure of the organization. Old silos fade away; new skill sets, tasks, and responsibilities set in. Nowhere is this clearer than in the uptake of cloud computing. Parts of the responsibility for running IT are outsourced, and the service model changes as well. That surely impacts the IT power structure.

IT leadership therefore also requires the skill to navigate the organization through a change in its power structure. Within IT, and increasingly also outside the IT department, tasks and responsibilities shift. That can be painful. Resistance is to be expected.

So what are the new conflicts to be resolved? And what is the mission of the IT leader?

For that, we first need to look at innovation with and within IT.

Click here for the next post in this series

Risks of virtualization in public and private clouds

Server virtualization is one of the cornerstone technologies of the cloud. The ability to create multiple virtual servers that can run any operating system on a single physical server has a lot of advantages.

These advantages can appear in public Infrastructure as a Service (IaaS) as well as in private clouds.

However, it also brings some risks.

Virtualization reduces the number of physical servers required for a given workload, which brings cost benefits. It also allows for more flexible sizing of computer resources such as CPU and memory. This in turn tends to speed up development projects, even without automatic provisioning. Virtualization can even increase the security of IT because it is easier to set up the right network access controls between machines.

So, in order to get the real benefits, you need to steer clear of the risks. A pretty extensive overview of these risks was written by the US National Institute of Standards and Technology (NIST); you can find it as Special Publication 800-125. This article is partly based on it.

Let us first get some of the important concepts straight. The host is the machine on which the hypervisor runs. The hypervisor is the piece of software that does the actual virtualization. The guests then are the virtual machines on top of the hypervisor, each of which runs its own operating system.

The hypervisors are controlled through what is called the ‘management plane’, a web console or similar that can remotely manage the hypervisors. This is a lot more convenient than walking up to the individual servers. It is also a lot more convenient for remote hackers. So it is good practice to control access to the management plane. That might involve using two-factor authentication (such as smart cards), and giving individual administrators only the access that is needed for their task.
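In a public cloud, one common way to enforce this is a policy that blocks management-plane actions unless the administrator signed in with multi-factor authentication. The sketch below shows such a policy as an AWS IAM policy document, written as a Python dict purely for illustration; a private-cloud management plane (a hypervisor console, for example) would use its own mechanism.

```python
# Hedged sketch: deny all management-plane actions when no MFA is present.
# Real policies usually also exempt the few actions needed to set up MFA.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}
```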

An often mentioned risk of virtualization is so-called ‘guest escape’, where one virtual machine accesses or breaks into its neighbor on the same host machine. This could happen through a buggy hypervisor or insecure network connections on the host. The hypervisor is a piece of software like any other. In fact, it is often based on a scaled-down version of Linux, and any Linux vulnerability could affect the hypervisor. However, whoever controls the hypervisor controls not just one system, but potentially the entire cloud system. So it is of the highest importance that you are absolutely sure that you run the right version of the hypervisor. You should be very sure of where it came from (its provenance), and you should be able to patch or update it immediately.

The network matters too

Related to this is the need for good network design. The network should allow real isolation of every guest, so that it cannot see any traffic from other guests, nor traffic to and from the host.

An intrinsic risk of server virtualization is so-called ‘resource abuse’, where one guest (or tenant) overuses the physical resources, thereby starving the other guests of the resources required to run their workloads. This is also called the ‘noisy neighbor’ problem. Addressing it can require a number of things. The hypervisor might be able to limit the overuse by a guest, but in the end somebody should be thinking about how to avoid putting too many guests on a single host. That is a tough balance to strike: too few guests means you are not saving enough money, too many guests means you risk performance issues.

In the real world, there are typically a lot of virtual servers that are identical. They run from the same ‘image’, and each virtual server is then called an instance of that image, or instance for short.

Then, with virtual servers it becomes easy to clone, snapshot, replicate, start and stop images. This has advantages, but also creates new risks. It can lead to an enormous sprawl or proliferation of server images that need to be stored somewhere. This can become hard to manage and represents a security risk. For example, how do you know that when a dormant image is restarted after a long time, that it is still up to date and patched? I heard a firsthand story of an image that got rootkitted by a hacker.

So the least you should do is run your anti-malware, virus and version checks on images that are not in use as well. Even when you work with a public IaaS provider, you are still responsible for patching the guest images.
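For example, here is a minimal sketch that flags your own machine images older than a given age, so that dormant images can be scanned and patched before they are reused. It assumes AWS with boto3, and the 180-day threshold is arbitrary.

```python
# Hedged sketch: list self-owned machine images older than a cutoff,
# as candidates for malware scanning and re-patching before reuse.
from datetime import datetime, timedelta, timezone
import boto3

def stale_images(days=180):
    ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is arbitrary
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    for image in ec2.describe_images(Owners=["self"])["Images"]:
        created = datetime.strptime(
            image["CreationDate"], "%Y-%m-%dT%H:%M:%S.%fZ"
        ).replace(tzinfo=timezone.utc)
        if created < cutoff:
            print(image["ImageId"], image.get("Name", ""), image["CreationDate"])
```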

In summary, server virtualization brings new power to IT professionals. But as the saying goes, with great power comes great responsibility.

Basic cloud risk assessment

You should worry about the risks of cloud computing. But don’t get too scared. With a few simple steps you can easily get a basic understanding of your risks in the cloud and even have a good start in managing these risks.

If you are a large corporation in a regulated industry, a cloud risk assessment can take weeks or months of work. But even that process starts from simple principles.

Oddly enough, I think any risk assessment of a cloud plan should start with the benefit you are expecting from the cloud service. There are two reasons for that. First, the benefit determines the risk appetite. You can accept a little risk if the benefit is large enough. But if the benefit is small, why take any chances?

The second reason is that not realizing the benefit is a risk as well.

For example, if there is a choice between running your CRM system in-house versus in the cloud, you might find that it takes too long to set up the system in-house and it won’t be accessible by sales people in the field. The cloud system will be quicker to deploy and easier to access from outside your company, so the benefit can be realized quicker.

The first question: what data do you want to store in the cloud?

Pretty essential in any cloud risk assessment is figuring out what data you want to store in the cloud. Most of cloud risk management is built on that pillar.

Pay particular attention to data that identifies persons, log files, credit card numbers, intellectual property, and anything that is essential to the conduct of your business. You can easily guess what this means for a CRM system: customers, proposals, contact details.

The second question then is: what do you want to do with that data?

How is the cloud provider giving you access to that data? Is the access convenient enough, and can you get the reports that you need? In this step you sometimes need to revisit the previous step. For example, as you do your reports you may figure out that you not only stored customer orders in the cloud, but also your product catalog. So add that to the data you should worry about.

Once you have a clear idea of the data and the functionality, you can start looking at the value at risk.

Beginning with the data, think about what the worst thing is that can happen to the data. What about it getting lost, or falling into the hands of the wrong people? What about the chance that it is changed without you knowing (maybe by a colleague who happens to have too many access rights)? In my experience, people overestimate the risk of the cloud provider leaking your data, and underestimate the risk of internal people leaking your data.

Similarly, what happens to the business if the data or the reports are not available for some period of time? How long can your business get by without having full access to the data? In the worst case the provider goes out of business. Can you survive the time it takes to set up a new service?

With that general picture in your head, you can start looking at the threats. The top risks are that the cloud provider fails to deliver, and that the cloud provider leaks information.

A little more subtle are the cases where you think they should be doing something, but they don’t. If you use IaaS, you may think that the cloud provider is patching your operating systems. Typically, they don’t. And any backup that the cloud provider makes does not protect you from a provider going out of business. So you want to review your assumptions on who takes care of which risk.

If anything, you should think about which data you still want to use after you stop working with that cloud service. This is easier to do before the cloud provider runs into trouble. Regular data extraction can be fairly simple. If your provider does not make that easy, well, maybe they should not be your provider.

Further reading? The European Network and Information Security Agency (ENISA) has produced a very good list of cloud risks. See my earlier blog which also has a brief video on that. For risk assessment purposes I have also created a brief risk triage worksheet. You can get that by signing up to my cloud newsletter at http://www.ccsk.eu.