Virtualization is a foundational technology in cloud computing, enabling resource sharing and multitenancy. With these benefits come security concerns, so the security of the virtualization method is crucial. The two primary methods of virtualization are VMs created and managed through a hypervisor and virtualization through containers.
A hypervisor, such as Hyper-V or vSphere, packages resources into a VM. Both creating and managing the VM are done through the hypervisor, so it is important that the hypervisor be secure. Hypervisors such as Hyper-V, VMware ESXi, and Citrix XenServer are type I, or native, hypervisors that run directly on the host's hardware.
A type I hypervisor is faster and more secure but more difficult to set up than a type II hypervisor, such as VMware Workstation or VirtualBox, which sits on top of the operating system. Type II hypervisors are easier to set up but less secure.
A hypervisor is a natural target for malicious users because it controls all the resources used by each VM. If an attacker compromises another tenant on the server you are on and can then compromise the hypervisor, they may be able to attack other customers through it. Hypervisor vendors are continually working to make their products more secure.
For the customer, security is enhanced by controlling admin access to the virtualization solution, designing security into the virtualization solution, and securing the hypervisor. All access to the hypervisor should be logged and audited. The hypervisor's network access should be limited to only what is necessary, and this traffic should be logged and audited. Finally, the hypervisor must remain current, with all security patches and updates applied as soon as is reasonable. More detailed security recommendations are published in NIST SP 800-125A Rev. 1 and by hypervisor vendors.
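As a minimal sketch of the logging and auditing recommendation, the following Python example scans a hypervisor access log for administrative logins from hosts that are not on an approved management allowlist. The log format, file path, and allowlist are assumptions for illustration, not part of any specific hypervisor's tooling.

```python
import csv

# Hosts approved to reach the hypervisor's management interface (assumed values).
MANAGEMENT_ALLOWLIST = {"10.0.50.10", "10.0.50.11"}

def audit_admin_logins(log_path: str) -> list[dict]:
    """Return admin login events originating outside the allowlist.

    Assumes a CSV log with columns: timestamp, source_ip, user, action.
    """
    findings = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] == "admin_login" and row["source_ip"] not in MANAGEMENT_ALLOWLIST:
                findings.append(row)
    return findings

if __name__ == "__main__":
    for event in audit_admin_logins("hypervisor_access.csv"):
        print(f"ALERT: admin login from {event['source_ip']} "
              f"by {event['user']} at {event['timestamp']}")
```

In practice this kind of review would feed a SIEM or alerting pipeline rather than a standalone script, but the principle is the same: hypervisor access records should be collected and examined regularly.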
Containerization, such as through Docker or LXC, has many benefits and some vulnerabilities. The benefits include resource efficiency, portability, easier scaling, and agile development. Containerization also improves security by isolating the cloud solution from the host system. Security risks arise through inadequate identity and access management and through misconfigured containers, and software bugs in the container software can also be an issue. The isolation of the container from the host system does not mean that the security of the host system can be ignored.
The security issues of containerization must first be addressed through education and training. Traditional DevOps practices and methodologies do not always translate to secure containerization. The use of specialized container operating systems is also beneficial as it limits the capabilities of the underlying OS to those functions a container may need. Much like disabling network ports that are unused, limiting OS functionality decreases the attack surface. Finally, all management and security tools used must be designed for containers. A number of cloud-based security services are available.
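To make the idea of limiting OS functionality concrete, the sketch below uses the Docker SDK for Python to start a container with Linux capabilities dropped, a read-only root filesystem, a non-root user, and resource limits. The image, command, and limits are illustrative assumptions; a specialized container OS or an orchestrator would typically enforce similar constraints as policy.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Start a container with a reduced attack surface: drop all Linux
# capabilities, mount the root filesystem read-only, run as a non-root
# user, and cap memory and process counts. Values are placeholders,
# not recommendations for any particular workload.
container = client.containers.run(
    "alpine:3.19",           # assumed image
    ["sleep", "60"],         # placeholder workload
    detach=True,
    cap_drop=["ALL"],        # remove all Linux capabilities
    read_only=True,          # root filesystem cannot be modified
    user="1000:1000",        # do not run as root inside the container
    mem_limit="256m",        # bound memory use
    pids_limit=100,          # bound the number of processes
)
print(container.short_id, container.status)
```

Each restriction removes functionality the workload does not need, which is the same attack-surface reduction principle as disabling unused network ports.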
There are many containerization solutions provided by major CSPs. One can easily find articles that extol the virtues of one solution over another. As with other areas of technology, which is best is often a matter of whom you ask. Determining which solution is best for your organization requires comparing costs and features.
Previous sections dealt with threats related to the specific technologies that are key parts of cloud computing, such as virtualization, media sanitization, and network security. However, all the threats that target traditional services are also of concern. Controls used to protect access to software solutions, data transfer and storage, and identity and access control in a traditional environment must be considered in a cloud environment as well.
UNDERSTAND DESIGN PRINCIPLES OF SECURE CLOUD COMPUTING
As processes and data move to the cloud, it is only right to consider the security implications of that business decision. Cloud computing is as secure as it is configured to be. With careful review of CSPs and cloud services, as well as fulfilling the customer's shared responsibilities for cloud security, the benefits of the cloud can be obtained securely. The following sections discuss methods and requirements that help the customer work securely in the cloud environment.
Cloud Secure Data Lifecycle
As with all development efforts, the best security is the security that is designed into a system. The cloud secure data lifecycle can be broken down into six steps or phases.
Create: This is the creation of new content or the modification of existing content.
Store: This generally happens at creation time and involves storing the new content in a data repository, such as a database or file system.
Use: This includes all the typical data activities such as viewing, processing, and changing.
Share: This is the exchange of data between two entities or systems.
Archive: Data is no longer used but is being stored.
Destroy: Data has reached the end of its life, as defined in a data retention policy or similar guidance. It is permanently destroyed.
At each of these steps in the data's lifecycle, there is the possibility of a data breach or data leakage. The general tools for preventing these are encryption and the use of data loss prevention (DLP) tools.
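As a minimal sketch of the encryption control, the following Python example uses the cryptography library's Fernet recipe to encrypt content before it is stored or shared and to decrypt it when it is used. Key management (for example, through a cloud key management service) is outside the sketch and is assumed to exist; generating the key inline is only for illustration.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key would come from a key management service,
# not be generated alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Create/Store: encrypt content before writing it to a repository.
record = b"customer record created by the application"
ciphertext = fernet.encrypt(record)

# Use/Share: decrypt only when an authorized process needs the data.
plaintext = fernet.decrypt(ciphertext)
assert plaintext == record
```

DLP tooling complements this by detecting sensitive data that is about to leave an approved boundary unencrypted, covering the share and archive phases in particular.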
Cloud-Based Disaster Recovery and Business Continuity Planning
A business continuity plan (BCP) is focused on keeping the business running following a disaster such as severe weather, civil unrest, terrorism, or fire. The BCP may focus on the critical business processes necessary to keep the business going while disaster recovery takes place. A disaster recovery plan (DRP) is focused on returning to normal business operations, which can be a lengthy process. The two plans work together.
Under a BCP, business operations must continue, but they often continue from an alternate location, so the needs of a BCP include space, personnel, technology, processes, and data. The cloud can support the organization with many of those needs: a cloud solution provides the technology infrastructure, processes, and data to keep the business going.
Larger CSPs such as AWS, Azure, and Google define regions, and the availability zones within a region are independent data centers that protect the customer from data center failures. Within a region, latency is low. However, a major disaster could impact all the data centers in a region and eliminate all availability zones in that region. A customer can set up their plan to include redundancy within a single region using multiple availability zones, or redundancy across multiple regions to provide the greatest possible availability of the necessary technology, processes, and data.
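As one hedged illustration of multiregion redundancy, the sketch below uses boto3 to copy a critical backup object from a bucket in one region to a bucket in a second region. The bucket names, object key, and regions are hypothetical, and a real deployment would more likely rely on the provider's managed replication features rather than ad hoc copies.

```python
import boto3  # AWS SDK for Python

# Hypothetical buckets in two different regions (names are placeholders).
PRIMARY_BUCKET = "example-backups-us-east-1"
SECONDARY_BUCKET = "example-backups-eu-west-1"
BACKUP_KEY = "nightly/core-db-backup.dump"

# A client in the secondary region performs the cross-region copy.
s3_secondary = boto3.client("s3", region_name="eu-west-1")
s3_secondary.copy_object(
    Bucket=SECONDARY_BUCKET,
    Key=BACKUP_KEY,
    CopySource={"Bucket": PRIMARY_BUCKET, "Key": BACKUP_KEY},
)
print(f"Copied {BACKUP_KEY} to {SECONDARY_BUCKET}")
```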
One drawback of multiregion plans is that the cost grows quickly. For this reason, many organizations replicate only their most critical data, the core systems the business cannot operate without, across two or more regions, while less critical processes and data may be stored in a single region. Functions and data that are on-premise may also utilize cloud backups, but they may not be up and running as quickly as the cloud-based solutions. The business keeps operating, although not all business processes may be enabled.
DRPs rely heavily on data backups. A DRP is about returning to normal operations, and returning the data to the on-premise environment is part of that. After the on-premise infrastructure has been rebuilt, reconfigured, or restored, the data must be returned.
One failure of many DRPs is the lack of an offsite backup or the ability to quickly access that backup. In the cloud, a data backup exists in the locations (regions or availability zones) you specify and is accessible from anywhere with network access. A cloud-based backup works only if you have network access and sufficient bandwidth to reach that data, so both must be accounted for in the DRP along with the offsite data backup itself. A physical, local backup can also be beneficial; not every disaster destroys the workplace.
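To tie the offsite backup into the recovery step, the short sketch below checks that a cloud-stored backup object exists and downloads it for restoration on the rebuilt on-premise environment. The bucket and key names are assumptions, and sufficient bandwidth is a precondition recorded in the DRP rather than something code can guarantee.

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "example-backups-us-east-1"   # hypothetical bucket
KEY = "nightly/core-db-backup.dump"    # hypothetical object key

s3 = boto3.client("s3")

def restore_backup(local_path: str) -> bool:
    """Download the offsite backup for an on-premise restore."""
    try:
        s3.head_object(Bucket=BUCKET, Key=KEY)   # confirm the backup exists
    except ClientError:
        return False
    s3.download_file(BUCKET, KEY, local_path)
    return True

if __name__ == "__main__":
    ok = restore_backup("/restore/core-db-backup.dump")
    print("restore staged" if ok else "backup not found")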