8. Virtualization and containers
Virtualization adds two new layers for security controls:
*Security of the virtualization technology itself, such as hypervisor security. This rests with the provider.
*Security controls for the virtual assets. The responsibility for implementing available controls rests with the customer. Exposing controls for the customers to leverage is the provider’s responsibility.
What are the main areas of virtualization you need to know for the exam?
Compute, network, and storage. Each of the three creates its own resource pools, and those pools are possible only as a result of virtualization. Virtualization is how compute, network, and storage pools are created, and it is the enabling technology behind the multitenancy aspect of cloud services.
What is compute virtualization?
Compute virtualization abstracts the running of code (including operating systems) from the underlying hardware. Instead of running code directly on the hardware, the code runs on top of an abstraction layer (such as a hypervisor) that isolates (not merely segregates) one virtual machine (VM) from another. This enables multiple operating systems (guest OSs) to run on the same hardware.
An older form of virtualization that you may be aware of is the Java Virtual Machine (JVM). What does it do?
The JVM creates an environment for a Java application to run in. The JVM abstracts the underlying hardware from the application. This allows for greater portability across hardware platforms, because the Java app does not need to communicate directly with the underlying hardware, only with the JVM.
There are many other examples of virtualization out there, but the big takeaway is that virtualization performs abstraction.
What are the primary cloud provider responsibilities in compute virtualization?
The primary security responsibilities of the cloud provider in compute virtualization are to enforce isolation and maintain a secure virtualization infrastructure. Isolation ensures that compute processes or memory in one virtual machine/container are not visible to another. This isolation supports a secure multitenant model, where multiple tenants can run processes on the same physical hardware (such as a single server).
The cloud provider is also responsible for securing the underlying physical infrastructure and the virtualization technology from external attack or internal misuse. Like any other software, hypervisors need to be properly configured and kept up to date with the latest patches to address new security issues.
Cloud providers should also have strong security in place for all aspects of virtualization for cloud users. This means creating a secure chain of processes from the image (or other source) used to run the virtual machine through a boot process, with security and integrity being top concerns. This ensures that tenants cannot launch machines based on images that they shouldn’t have access to, such as those belonging to another tenant, and that when a customer runs a virtual machine (or another process), it is the one the customer expects to be running.
Finally, cloud providers should also assure customers that volatile memory is safe from unapproved monitoring since important data could be exposed if another tenant, a malicious employee, or a bad actor can access running memory belonging to another tenant.
What is volatile memory?
Volatile memory contains all kinds of potentially sensitive information (think unencrypted data, credentials, and so on) and must be protected from unapproved access. Volatile memory must also have strong isolation implemented and maintained by the provider.
What are some cloud consumer responsibilities around virtualization?
The primary responsibility of the cloud user is to implement security properly for everything deployed and managed in a cloud environment. Cloud customers should take advantage of the security controls exposed by their providers for managing their virtual infrastructures. Of course, there are no rules or regulations as to what a provider must offer customers, but some controls are usually offered.
Cloud providers offer security settings such as identity and access management (IAM) to manage virtual resources. When you’re considering the IAM offered by the provider, remember that this generally applies at the management plane, not the applistructure. In other words, we’re talking about the ability to grant your organization’s users the appropriate management-plane permissions to, for example, start or stop an instance, not to log on to the server itself.
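As a concrete illustration of the management-plane idea, here is a minimal sketch in plain Python, assuming a toy allow-only policy model. The action names such as `compute:StartInstance` are made up for illustration; real providers have their own policy languages. The point is that the operator gets start/stop rights at the management plane without any right to log on to the guest OS:

```python
# Toy model of management-plane IAM, not any provider's real policy
# engine. Action and resource names are hypothetical placeholders.
from fnmatch import fnmatch

def is_allowed(policy: list, action: str, resource: str) -> bool:
    """Evaluate a simplified allow-only policy: each statement grants
    a set of actions on a set of resources; anything unmatched is denied."""
    for stmt in policy:
        if any(fnmatch(action, a) for a in stmt["actions"]) and \
           any(fnmatch(resource, r) for r in stmt["resources"]):
            return True
    return False

# Grants start/stop rights over web instances at the management plane;
# nothing here lets the operator log on to the server itself.
operator_policy = [
    {"actions": ["compute:StartInstance", "compute:StopInstance"],
     "resources": ["instance/web-*"]},
]

is_allowed(operator_policy, "compute:StopInstance", "instance/web-01")       # allowed
is_allowed(operator_policy, "compute:TerminateInstance", "instance/web-01")  # denied
```

A default-deny evaluation like this is why an unlisted action (terminating, logging on) is refused even though the operator can manage the instance's run state.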
Cloud providers will also likely offer logging of actions performed at the metastructure layer and monitoring of workloads at the virtualization level. This can include the status of a virtual machine, performance (such as CPU utilization), and other actions and workloads.
Another option that providers may offer is that of “dedicated instances” or “dedicated hosting.” This usually comes at an increased cost, but it may be a useful option if the perceived risk of running a workload on hardware shared with another tenant is deemed unacceptable, or if there is a compliance requirement to run a workload on a single-tenant server.
Finally, the customer is responsible for the security of everything within the workload itself. All the standard stuff applies here, such as starting with a secure configuration of the operating system, securing any applications, applying patches, using agents, and so on. The big difference in the cloud, given its automation, is the proper management of the images used to build running server instances. It is easy to make the mistake of deploying older configurations that may not be patched or properly secured if you don’t have strong asset management in place.
Other general compute security concerns include these:
*Virtualized resources tend to be more ephemeral and can change at a more rapid pace. Any corresponding security, such as monitoring, must keep up with the pace.
*Host-level monitoring/logging may not be available, especially for serverless deployments. Alternative log methods such as embedding logging into your applications may be required.
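The alternative-logging point above can be sketched as follows: when host-level logs aren't available (as in serverless deployments), the application itself emits structured events. This is a minimal sketch; the JSON field names are illustrative, not a provider-mandated schema:

```python
# Sketch: embed logging in the application when host-level logging
# isn't available. Emits one JSON line per security-relevant event.
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def log_event(action: str, outcome: str, **fields) -> str:
    """Serialize an event as a single JSON line and log it."""
    event = {"action": action, "outcome": outcome, **fields}
    line = json.dumps(event, sort_keys=True)
    logger.info(line)
    return line

log_event("login", "success", user="alice", source_ip="203.0.113.7")
```

One JSON object per line keeps the events machine-parseable, so they can be shipped to whatever log collection the provider or a third-party tool offers.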
What are cloud compute deployments based on?
Cloud compute deployments are based on master images (a virtual machine, container, or other code) that are then run as an instance in the cloud. Just as you would likely build a server in your data centre by using a trusted, preconfigured image, you would do the same in a cloud environment. Some Infrastructure as a Service (IaaS) providers may have “community images” available, but unless they are supplied by a trusted source, I would be very hesitant to use these in a production environment, because they may not have been inspected by the provider for malicious software or back doors installed by a bad actor waiting for someone to use them. Managing the images used by your organization is one of your most vital security responsibilities.
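One hedged sketch of that image-management responsibility: pin each approved image to a known digest and refuse to launch anything that doesn't match. The image name and digest below are invented for illustration:

```python
# Sketch: allowlist approved base images by SHA-256 digest so instances
# can only be launched from images the organization has vetted.
import hashlib

APPROVED_IMAGES = {
    # image name -> expected SHA-256 digest of the image contents
    "hardened-linux-2024.img": hashlib.sha256(b"vetted image bytes").hexdigest(),
}

def verify_image(name: str, image_bytes: bytes) -> bool:
    """Return True only for a known image whose digest matches."""
    expected = APPROVED_IMAGES.get(name)
    return expected is not None and \
        hashlib.sha256(image_bytes).hexdigest() == expected
```

An unknown community image fails the allowlist check, and a tampered copy of an approved image fails the digest check, which is the "secure chain from image to boot" idea in miniature.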
There are multiple network virtualization technologies out there, ranging from virtual LANs (VLANs) to software-defined networking (SDN). “Software-driven everything” is the direction the industry is heading, and this software-driven aspect is a key contributor to resource pooling, elasticity, and all the other aspects that make the cloud work at the scale it does.
We still have to inspect and filter network traffic, but we can no longer use the same security controls we used in the past. What are some other options?
Back in the early days of virtualization, some people thought it was a good idea to send all virtual network traffic out of the virtual environment, inspect the traffic using a physical firewall, and then reintroduce it back to the virtual network.
Newer virtual approaches to this problem include routing the virtual traffic to a virtual inspection machine on the same physical server, or routing the network traffic to a virtual appliance on the same virtual network. Both approaches are feasible, but they still introduce bottlenecks and result in less efficient routing.
The provider will most likely offer some form of filtering capability, whether through an SDN firewall or within the hypervisor itself.
From a network monitoring perspective, don’t be surprised if you can’t get the same level of detail about network traffic from the provider that you had in the past in your own environment. Why is that?
This is because the cloud platform/provider may not support access for direct network monitoring. They will state that this is because of complexity and cost. Access to raw packet data will be possible only if you collect it yourself in the host or by using a virtual appliance. This accounts only for network traffic that is directed to, or originates from, a system that you control. In other environments, such as systems managed by the provider, you will not be able to gain access to monitor this network traffic, because this would be a security issue for the provider.
By default, the virtual network management plane is available to the entire world, and if it’s accessed by bad actors, they can destroy the entire virtual infrastructure in a matter of seconds via an API or web access. It is therefore paramount that this management plane be properly secured.
As with compute virtualization in a cloud environment, virtual networks have a shared responsibility. What are some responsibilities of the provider?
The absolute top security priority is segregation and isolation of network traffic to prevent tenants from viewing another tenant’s traffic. At no point should one tenant ever be able to see traffic from another tenant unless this is explicitly allowed by both parties (via cross-account permissions, for example). This is the most foundational security control for any multitenant network.
Next, packet sniffing (such as using Wireshark), even within a tenant’s own virtual networks, should be disabled to reduce the ability of an attacker to compromise a single node and use it to monitor the network, which is common in traditional networks. This is not to say that customers cannot run packet-sniffing software on a virtual server, but they should be able to see only the traffic sent to that particular server.
In addition, all virtual networks should offer built-in firewall capabilities for cloud users without the need for host firewalls or external products. The provider is also responsible for detecting and preventing attacks on the underlying physical network and virtualization platform. This includes perimeter security of the cloud itself.
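The built-in firewall capability can be modeled, very roughly, as a set of allow rules over protocol, port, and source network, with a default deny. This toy Python sketch is not any provider's actual SDN firewall, just the evaluation idea:

```python
# Toy model of a virtual-network firewall: allow rules with default deny.
# Real SDN firewalls are far richer; rule values here are illustrative.
import ipaddress

RULES = [
    {"proto": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
    {"proto": "tcp", "port": 22,  "source": "10.0.0.0/8"},  # SSH from internal only
]

def permits(proto: str, port: int, source_ip: str) -> bool:
    """Allow the packet if any rule matches; otherwise deny by default."""
    src = ipaddress.ip_address(source_ip)
    return any(
        r["proto"] == proto and r["port"] == port
        and src in ipaddress.ip_network(r["source"])
        for r in RULES
    )
```

Because the rules live in the virtual network itself, traffic is filtered before it ever reaches the guest, without a host firewall or an external appliance in the path.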
As with compute virtualization in a cloud environment, virtual networks have a shared responsibility. What are some responsibilities of the cloud consumer?
The consumer is ultimately responsible for adhering to their own security requirements. This will require consuming and configuring security controls that are created and managed by the cloud provider, especially any virtual firewalls. Here are some recommendations for consumers when it comes to securing network virtualization.
Take advantage of new network architecture possibilities. For example, compartmentalizing application stacks in their own isolated virtual networks to enhance security can be done at little to no cost, whereas such an implementation may be cost-prohibitive in a traditional physical network environment.
Next, software-defined infrastructure (SDI) includes the ability to create templates of network configurations. You can essentially take a known-good network environment and save it as software. This approach enables you to rebuild an entire network environment incredibly quickly if needed. You can also use these templates to ensure that your network settings remain in a known-good configuration.
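A minimal sketch of the template idea: save the known-good network configuration as data, then check a live configuration against it for drift. The keys below are illustrative, not any real provider's template format:

```python
# Sketch of "network as a template": a known-good configuration saved
# as software, plus a drift check against it. Settings are made up.
KNOWN_GOOD = {
    "cidr": "10.0.0.0/16",
    "public_subnets": ["10.0.1.0/24"],
    "flow_logs_enabled": True,
}

def drift(current: dict) -> dict:
    """Return the settings in `current` that deviate from the template."""
    return {k: current.get(k) for k, v in KNOWN_GOOD.items()
            if current.get(k) != v}
```

The same saved template that rebuilds the environment quickly also serves as the baseline for detecting configuration drift, which is why treating the network as software pays off twice.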
Finally, when the provider doesn’t expose appropriate controls for customers to meet their security requirements, customers will need to implement additional controls (such as virtual appliances or host-based security controls) to meet their requirements.
What are cloud overlay networks?
Cloud overlay networks are a function of Virtual Extensible LAN (VXLAN) technology; they enable a virtual network to span multiple physical networks across a wide area network (WAN). This is possible because VXLAN encapsulates packets in a routable format.
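To see why encapsulation makes the traffic routable, here is a sketch of the VXLAN header layout from RFC 7348. The inner frame bytes are a stand-in, and the outer Ethernet/IP/UDP headers (VXLAN uses UDP port 4789) are omitted for brevity:

```python
# Sketch: the original Ethernet frame is wrapped in an 8-byte VXLAN
# header carrying a 24-bit VNI (Virtual Network Identifier), then sent
# inside a UDP datagram that ordinary IP routing can carry across a WAN.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header per RFC 7348: flags byte 0x08 (VNI present),
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # A real implementation also prepends outer Ethernet/IP/UDP headers;
    # only the VXLAN portion is shown here.
    return vxlan_header(vni) + inner_frame

pkt = encapsulate(b"\x00" * 14, vni=5001)  # 14 stand-in bytes for an inner frame
```

The 24-bit VNI is what keeps tenants' overlay networks separate on the wire: roughly 16 million segments, versus the 4,094 usable IDs of a traditional VLAN.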