Chapter 5 Flashcards
Virtualization
It allows you to host one or more virtual systems, or virtual machines (VMs), on a single physical system. With today’s technologies, you can host an entire virtual network within a single physical system, and organizations are increasingly using virtualization to reduce costs.
Virtualization typically provides the best return on investment (ROI) when an organization can consolidate many underutilized physical servers onto fewer hosts.
Hypervisor
(part of virtualization) The hypervisor is specialized software that creates, runs, and manages virtual machines. Several software vendors produce hypervisors, including VMware products, Microsoft Hyper-V products, and Oracle VM VirtualBox.
Host
(part of virtualization) The physical system hosting the VMs is the host. It requires more resources than a typical system, such as multiple high-speed multi-core processors, large amounts of RAM, fast and abundant disk space, and one or more fast network cards. Although these additional resources increase the cost of the host, it is still less expensive than paying for multiple physical systems. It also requires less electricity, less cooling, and less physical space. The host system runs the hypervisor software.
Guest
Operating systems running on the host system are guests or guest machines. Most hypervisors support several different operating systems, including various Microsoft operating systems and various Linux distributions. Additionally, most hypervisors support both 32-bit and 64-bit operating systems.
Cloud Scalability
Scalability refers to the ability to resize the computing capacity of the VM. You do this by assigning it more memory, processors, disk space, or network bandwidth. Scaling is a manual process, and it often requires a reboot. In other words, an administrator would manually change the resources assigned to the VM.
Cloud Elasticity
Elasticity refers to the ability to dynamically change resources assigned to the VM based on the load. As an example, imagine a VM has increased traffic. Monitoring software senses this increased load and automatically increases the VM resources to handle it. This does not require a reboot.
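The elastic scaling decision described above can be sketched as a simple policy function. This is a minimal illustration, not any vendor's actual autoscaling logic; the thresholds and vCPU limits are assumptions chosen for the example.

```python
# Toy sketch of elasticity: monitoring software watches a VM's load and
# adjusts its resources automatically, with no manual step and no reboot.
# Thresholds (80% / 20%) and vCPU bounds are illustrative assumptions.

def autoscale(current_vcpus: int, cpu_load: float,
              min_vcpus: int = 1, max_vcpus: int = 8) -> int:
    """Return the new vCPU count given average CPU load (0.0-1.0)."""
    if cpu_load > 0.80 and current_vcpus < max_vcpus:
        return current_vcpus + 1   # heavy load: scale up
    if cpu_load < 0.20 and current_vcpus > min_vcpus:
        return current_vcpus - 1   # idle: scale down to save resources
    return current_vcpus           # load is in the normal band

print(autoscale(2, 0.95))  # heavy load -> 3
print(autoscale(2, 0.10))  # idle -> 1
print(autoscale(2, 0.50))  # steady -> 2
```

Scalability, by contrast, would correspond to an administrator calling a function like this by hand and then rebooting the VM to apply the change.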
Thin Clients
A thin client is a computer with just enough resources to boot and connect to a server that runs specific applications or desktops. When the thin client is a traditional computer, it typically has a keyboard, mouse, and screen and may support other peripherals such as speakers and USB ports. The server side is a powerful system, located on-site or in the cloud, that supports multiple thin clients.
VDI
A virtual desktop infrastructure (VDI) hosts a user’s desktop operating system on a server. While traditional computers typically access VDIs within a network, it’s also possible to deploy a VDI that users can access with their mobile device. This allows users to access any applications installed on their desktop. When the organization hosts a remote access solution such as a virtual private network (VPN), users can access the mobile VDI from anywhere if they have Internet access.
Containerization
Containerization is a type of virtualization that runs services or applications within isolated containers or application cells.
A benefit of containerization is that it uses fewer resources and can be more efficient than a system using traditional Type II hypervisor virtualization. Internet Service Providers (ISPs) often use it for customers who need specific applications. One drawback is that containers must use the operating system kernel of the host. As an example, if the host is running Linux, all the containers must run Linux.
VM Escape Protection
VM escape is an attack that allows an attacker to access the host system from within a virtual guest system. As previously mentioned, the host system runs an application or process called a hypervisor to manage the virtual systems. In some situations, the attacker can run code on the virtual system and interact with the hypervisor. This should never be allowed.
A successful VM escape attack often gives the attacker unlimited control over the host system and each guest virtual machine running on that host.
VM sprawl
VM sprawl occurs when an organization has many VMs that aren't appropriately managed.
A challenge with VM sprawl is that each VM adds additional load onto its host server, and unmanaged VMs may not receive current patches and security updates.
Replication
Replication makes it easy to restore a failed virtual server. If you create a backup of the virtual server files and the original server fails, you simply restore the files. You can measure the amount of time it takes to restore a replicated virtual server in minutes. In contrast, rebuilding a physical server can take hours.
Snapshot
A snapshot provides you with a copy of a VM at a moment in time, which you can use as a backup. You are still able to use the VM just as you normally would. However, after taking a snapshot, the hypervisor keeps a record of all changes to the VM. If the VM develops a problem, you can revert the VM to the state it was in when you took the snapshot.
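The snapshot mechanism described above can be modeled in a few lines: after the snapshot, changes are recorded separately from the captured state, so reverting simply discards them. Real hypervisors track changes at the disk-block level; this dict-based `ToyVM` class is purely an illustrative assumption.

```python
# Toy model of snapshot semantics: the base state is frozen at snapshot
# time, later writes go to a delta, and revert discards the delta.

class ToyVM:
    def __init__(self, state: dict):
        self.base = dict(state)   # state captured when the snapshot is taken
        self.delta = {}           # changes recorded after the snapshot

    def write(self, key, value):
        self.delta[key] = value   # changes never touch the base state

    def read(self, key):
        return self.delta.get(key, self.base.get(key))

    def revert(self):
        self.delta.clear()        # return to the snapshot state

vm = ToyVM({"app": "v1.0"})
vm.write("app", "v2.0-broken")    # a bad update develops a problem
print(vm.read("app"))             # v2.0-broken
vm.revert()                       # roll back to the snapshot
print(vm.read("app"))             # v1.0
```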
Antivirus software
A type of endpoint security software. This software scans endpoints for viruses, worms, Trojan horses, and other malicious code. When it detects an infection, antivirus software can often step in and resolve the issue automatically.
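One classic detection technique is signature matching: comparing file contents against known byte patterns. The sketch below is a drastic simplification (real engines also use heuristics and behavioral analysis); the one signature used is the industry-standard EICAR test string, a harmless pattern that antivirus products detect by convention for testing.

```python
# Highly simplified sketch of signature-based scanning. The EICAR test
# string is harmless and exists specifically for testing AV detection.

EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def scan(data: bytes, signatures=(EICAR,)) -> bool:
    """Return True if any known signature appears in the data."""
    return any(sig in data for sig in signatures)

print(scan(b"prefix " + EICAR + b" suffix"))  # True: signature found
print(scan(b"a perfectly clean document"))    # False: no match
```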
EDR
Endpoint detection and response is a security technology that focuses on detecting and responding to threats at the endpoint level, often using advanced behavioral analysis techniques to identify suspicious activity and contain threats before they can cause damage.
XDR
Extended detection and response is a next-generation security technology that goes beyond the endpoint to include other types of devices and systems, such as network devices, cloud infrastructure, and IoT devices, providing a more comprehensive view of the entire IT environment and enabling faster threat detection and response.
HIPS
A host intrusion prevention system (HIPS) takes the concept of intrusion prevention and applies it to a single host or endpoint, using techniques such as behavior analysis, file integrity monitoring, and application control to prevent unauthorized access, tampering, and other types of attacks.
Resource reuse
Resource reuse in the context of cloud computing risks refers to the potential for data or resources to remain on a shared infrastructure even after a customer has finished using them, making them potentially accessible to other users of the cloud service. This can lead to a risk of data leakage or exposure, as well as the potential for malicious actors to gain unauthorized access to sensitive data or systems.
The best way to protect against this risk is to have contractual requirements with cloud service providers that they securely erase your data when it is no longer needed.
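As a local analogy to secure erasure, the sketch below overwrites a file with random bytes before deleting it, so the freed storage does not hand readable data to the next user. This is an illustration only: on SSDs and cloud storage layers, in-place overwrites are not guaranteed to reach the physical media, which is exactly why the contractual and provider-side controls mentioned above matter.

```python
# Sketch of "sanitize before release": overwrite file contents with
# random bytes, flush to the device, then delete. Illustrative only;
# it does not defeat SSD wear-leveling or cloud storage abstraction.

import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite toward the device
    os.remove(path)

# demo: create a file holding "sensitive" data, then sanitize it
fd, path = tempfile.mkstemp()
os.write(fd, b"customer secret data")
os.close(fd)
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```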
Establish an initial baseline configuration
Administrators use various tools to deploy systems consistently in a secure state.
Deploy the baseline
The baseline may be initially deployed on systems during the build process, or it may be pushed out to existing systems through Group Policy or other configuration management tools.
Maintain the baseline
Organizations change and so does the security landscape. It’s natural for system baselines to change over time as well. Security professionals should revise the baseline as needed and push out updates following the organization’s configuration and change management policies.
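Maintaining a baseline implies detecting drift: comparing a system's current settings against the approved baseline and reporting deviations. The sketch below shows the idea; the setting names are made-up examples, not real policy keys.

```python
# Sketch of baseline drift detection: report every setting whose current
# value differs from (or is missing from) the approved baseline.

def find_drift(baseline: dict, current: dict) -> dict:
    """Return {setting: current_value} for settings that deviate."""
    return {k: current.get(k, "<missing>")
            for k, v in baseline.items()
            if current.get(k) != v}

baseline = {"firewall": "on", "rdp": "disabled", "password_min_len": 14}
current  = {"firewall": "on", "rdp": "enabled",  "password_min_len": 14}
print(find_drift(baseline, current))  # {'rdp': 'enabled'}
```

In practice, tools such as Group Policy or other configuration management software perform this comparison and push the corrected settings back out.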
Using Master Images for Baseline Configurations
Administrators start with a blank source system. They install and configure the operating system, install and configure any desired applications, and modify security settings. Administrators perform extensive testing to ensure the system works as desired and is secure before going to the next step.
Next, administrators capture the image, which becomes their master image. Symantec Ghost is a popular imaging application, and Windows Server versions include free tools many organizations use to capture and deploy images. The captured image is simply a file stored on a server or copied to external media, such as a DVD or external USB drive.
In step 3, administrators deploy the image to multiple systems. When used within a network, administrators can deploy the same image to dozens of systems during initial deployment or to just a single system to rebuild it. The image installs the same configuration on the target systems as the original source system created in step 1.
Secure starting point
The image includes mandated security configurations for the system. Personnel who deploy the system don’t need to remember or follow extensive checklists to ensure that new systems are set up with all the detailed configuration and security settings. The deployed image retains all the settings of the original image. Administrators will still configure some settings, such as the computer name, after deploying the image.
Reduced costs
Deploying imaged systems reduces overall maintenance costs and improves reliability. Support personnel don't need to learn several different end-user system environments to assist end users; instead, they learn just one. When troubleshooting, support personnel spend their time helping the end user rather than learning the system configuration. Managers describe this as reducing the total cost of ownership (TCO) for systems.