Cloud + Flashcards

1
Q

Hub

A

A network hub is a basic networking device that connects multiple devices within a LAN. It is a central point where devices can be connected to share data and communicate with each other. However, network hubs have been largely replaced by more advanced devices such as switches.

Network hubs operate at the physical layer of the network and work by receiving data packets from one device and broadcasting them to all other connected devices, regardless of destination. This means that all devices on a hub’s network share the same bandwidth, and collisions are likely to occur if multiple devices transmit data simultaneously. Drawbacks of the network hub include its inability to manage or prioritize network traffic, filter data, or make intelligent forwarding decisions. As a result, hubs are rarely used in modern network setups.

2
Q

Bridge

A

A network bridge is a networking device or software component that connects multiple network segments or LANs (Local Area Networks) together. It operates at the data link layer (Layer 2) of the OSI (Open Systems Interconnection) model and is used to forward network traffic between different network segments.

The primary function of a network bridge is to selectively transmit data packets between network segments based on their destination MAC (Media Access Control) addresses. When a bridge receives a packet, it examines the MAC address of the packet and determines whether to forward it to the other network segment or discard it. The bridge maintains a table called the bridge forwarding table or MAC table, which associates MAC addresses with the network segments they belong to. Unlike network hubs, which broadcast data to all connected devices, a bridge is more selective and intelligent in its forwarding process. It only forwards packets across network segments if the destination MAC address is located on the other segment, thus reducing unnecessary traffic and improving overall network efficiency.
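The learn-and-forward behavior described above can be sketched in a few lines of Python. This is an illustrative model only — the class and port numbers are made up, not taken from any real product:

```python
# Minimal sketch of MAC learning and selective forwarding.
class LearningBridge:
    def __init__(self, ports=(1, 2, 3, 4)):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port/segment it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame should be sent out of."""
        # Learn: associate the source MAC with the arrival port.
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            out_port = self.mac_table[dst_mac]
            # Destination is on the same segment it came from: discard.
            return [] if out_port == in_port else [out_port]
        # Unknown destination: flood to every port except the arrival port.
        return [p for p in self.ports if p != in_port]
```

A hub, by contrast, would always "flood" — the selective step in the middle is what reduces unnecessary traffic.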

Network bridges have been largely replaced by more advanced technologies such as switches and routers. Switches, in particular, offer similar functionality to bridges but with additional features and improved performance. However, bridges still have their uses in specific networking scenarios, such as connecting legacy equipment or extending the range of a network.

3
Q

Switch

A

A network switch is a networking device that connects multiple devices within a Local Area Network (LAN) and facilitates communication between them. It operates at the data link layer (Layer 2) and sometimes at the network layer (Layer 3) of the OSI (Open Systems Interconnection) model. The primary function of a network switch is to receive incoming network packets and forward them to their intended destination based on the MAC (Media Access Control) addresses of the devices connected to the switch. When a switch receives a packet, it examines the destination MAC address and looks up its forwarding table to determine the port to which the packet should be sent. This process is known as switching, and it allows devices within the LAN to communicate directly with each other.

Network switches offer several advantages over network hubs and bridges. Unlike hubs, which broadcast data to all connected devices, switches create dedicated connections between devices, allowing for simultaneous communication without collisions. This improves network performance and bandwidth utilization. Additionally, switches can handle simultaneous traffic across multiple ports, providing full-duplex communication.

Switches come in various configurations, such as unmanaged, managed, and Layer 3 switches.

4
Q

Switching

A

When a switch receives a packet, it examines the destination MAC address and looks up its forwarding table to determine the port to which the packet should be sent.

5
Q

Unmanaged Switch

A

Unmanaged switches are plug-and-play devices that operate with default settings, making them easy to use but with limited configuration options.

6
Q

Managed Switch

A

Managed switches provide more control and configuration capabilities, allowing network administrators to monitor and manage the network traffic, implement security features, and optimize performance.

7
Q

Layer 3 Switch / Multi-layer Switch

A

Layer 3 switches, also known as multi-layer switches, can perform routing functions in addition to switching, making them capable of forwarding packets based on IP addresses.

8
Q

vNIC

A

A vNIC (virtual Network Interface Card) is a software-based representation of a physical network interface card within a virtualized environment. It emulates the functionality of a physical NIC, allowing virtual machines (VMs) or containers to connect to virtual networks and communicate with other devices and systems.

A vNIC is created and assigned to each virtual machine or container running on a hypervisor or containerization platform. It provides the necessary network connectivity for the virtual instance to send and receive data over the virtual network infrastructure. From the perspective of the virtual machine or container, a vNIC appears and behaves like a physical NIC, enabling network communication.

Virtualization technologies such as VMware, Hyper-V, or KVM, as well as container platforms like Docker or Kubernetes, utilize vNICs to establish network connectivity and enable virtual instances to access the underlying physical network infrastructure or communicate with other virtual machines or containers within the same virtual environment.

The configuration and properties of vNICs can be managed and adjusted within the virtualization or containerization platform, allowing network settings, such as IP addresses, subnet masks, VLAN tags, or quality-of-service parameters, to be defined and customized for each virtual instance. This flexibility enables administrators to tailor network connectivity to meet the specific requirements of virtual machines or containers within the virtual environment.

9
Q

vSwitch

A

A vSwitch (virtual switch) is a software-based networking component used in virtualized environments to connect and manage network traffic between virtual machines (VMs) or containers running on a hypervisor or containerization platform. Similar to a physical network switch, a vSwitch operates at the data link layer (Layer 2) of the OSI model and performs the following functions:

Network connectivity: A vSwitch provides vPorts (virtual network ports) to which virtual machines or containers can be connected. It enables communication between virtual instances within the same virtual network or across different virtual networks.

Packet forwarding: Incoming network traffic from virtual machines or containers is received by the vSwitch, which makes forwarding decisions based on the MAC (Media Access Control) addresses of the virtual instances. It forwards packets to the appropriate destination vPorts, ensuring proper delivery.

VLAN support: A vSwitch often includes support for Virtual LANs (VLANs), allowing network segmentation and isolation within the virtual environment. VLANs help to enhance network security, optimize network performance, and provide logical separation between different groups of virtual instances.

vSwitches are integral components of virtualization platforms such as VMware vSphere, Microsoft Hyper-V, or KVM, as well as containerization platforms like Docker or Kubernetes. They enable virtual machines or containers to access the physical network infrastructure and communicate with other virtual instances, while also providing network management capabilities within the virtual environment. The configuration and management of vSwitches are typically done through the virtualization or containerization platform’s management interfaces, allowing administrators to define network settings, monitor network traffic, and apply network policies to efficiently manage the virtual network infrastructure.

10
Q

vPorts

A

(Virtual Network Ports) Enables communication between virtual instances within the same virtual network or across different virtual networks.

11
Q

Packet Forwarding

A

Packet forwarding is the process of routing network packets from a source to a destination within a computer network. When a packet arrives at a network device (such as a router or switch), the device examines the packet’s destination address and determines the optimal path for forwarding the packet to its intended destination. This involves looking up routing tables or forwarding rules to identify the next hop or outgoing interface for the packet. The device then encapsulates the packet in a new frame with appropriate addressing information and transmits it toward the next network device in the path
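The routing-table lookup step above can be illustrated with the standard-library `ipaddress` module. The routes and interface names here are hypothetical, and a real router would use a specialized data structure (such as a trie) rather than a linear scan:

```python
# Sketch of longest-prefix-match route lookup.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def next_hop_interface(dst):
    """Pick the most specific (longest-prefix) route that matches dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]
```

A destination like 10.1.2.3 matches all three entries, but the /16 wins because it is the most specific.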

12
Q

VLAN

A

A VLAN (Virtual Local Area Network) is a logical network that is created within a physical network infrastructure. It allows network devices to be grouped together, even if they are not physically connected on the same network switch. VLANs provide isolation, security, and flexibility by segmenting a network into smaller, virtual subnetworks. Devices within the same VLAN can communicate with each other as if they were connected to the same physical network, while traffic between VLANs requires routing through a router or Layer 3 switch. VLANs enable network administrators to efficiently manage network traffic, implement security policies, and optimize network performance by logically separating devices and controlling communication between them
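On the wire, VLAN membership is carried as an IEEE 802.1Q tag inserted into the Ethernet frame. The sketch below shows the tag layout; the frame contents are made up for illustration:

```python
# Illustrative sketch: inserting an IEEE 802.1Q VLAN tag into an Ethernet frame.
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the destination and source MACs."""
    tpid = 0x8100                      # Tag Protocol Identifier for 802.1Q
    tci = (priority << 13) | vlan_id   # priority (3 bits) + DEI + VLAN ID (12 bits)
    tag = struct.pack("!HH", tpid, tci)
    # Destination MAC (6 bytes) + source MAC (6 bytes), then the tag, then the rest.
    return frame[:12] + tag + frame[12:]
```

Switch ports then use the 12-bit VLAN ID in the tag to keep traffic from different VLANs logically separated.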

13
Q

Traffic Shaping

A

Traffic shaping is a network management technique used to control and prioritize network traffic flows. It involves managing the bandwidth allocation and transmission rates of different types of network traffic to ensure optimal network performance and avoid congestion. By shaping traffic, administrators can regulate the flow of data based on predefined policies, such as prioritizing critical applications or limiting bandwidth for specific types of traffic. This helps to enhance network efficiency, minimize latency, and ensure fair usage of available network resources.

Traffic shaping is a specific technique within the broader concept of QoS
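One common mechanism for the rate control described above is a token bucket: tokens accumulate at the allowed rate, and a packet may only be sent if enough tokens are available. A minimal sketch (parameter values and names are illustrative):

```python
# Minimal token-bucket shaper.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second (sustained rate)
        self.capacity = capacity  # maximum burst size, in tokens
        self.tokens = capacity
        self.last = 0.0           # timestamp of the last refill

    def allow(self, size, now):
        """Return True if a packet costing `size` tokens may be sent at time `now`."""
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # over the limit: drop or queue the packet instead
```

Short bursts up to `capacity` are permitted, but the long-run rate is smoothed to `rate`, which is exactly the peak-flattening behavior traffic shaping aims for.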

14
Q

QoS

A

QoS, or Quality of Service, is a network management concept that aims to prioritize and control the delivery of network traffic based on specific requirements. It involves techniques and mechanisms to ensure that critical traffic receives preferential treatment in terms of bandwidth, latency, and reliability.

QoS focuses on delivering a consistent level of service to different types of network traffic, such as voice, video, data, or real-time applications. It involves setting priorities, allocating resources, and implementing policies to meet specific performance targets and ensure a satisfactory user experience.

Traffic shaping, on the other hand, is a specific technique within the broader concept of QoS. It involves controlling the flow of network traffic to smooth out peaks and prevent congestion.

15
Q

HTTPS

A

(Hypertext Transfer Protocol Secure) is a secure communication protocol used for secure and encrypted data transfer over computer networks, especially the internet. It is an extension of the standard HTTP protocol and adds an extra layer of security by using SSL (Secure Sockets Layer) or TLS (Transport Layer Security) encryption protocols.

HTTPS ensures that the data transmitted between a client (such as a web browser) and a server is encrypted and protected from eavesdropping or tampering. This encryption is achieved through the use of digital certificates, which authenticate the identity of the server and establish a secure connection.

Port 443
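The client side of this can be seen in Python's standard `ssl` module: the default context already enforces the certificate checks described above. No network connection is made in this sketch; the connection code in the comment is an assumed usage pattern:

```python
# Default client-side TLS settings in Python's standard library.
import ssl

context = ssl.create_default_context()  # loads the system's trusted CA certificates
# The defaults match the HTTPS guarantees described above:
# the server's certificate must verify, and its hostname must match.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Typical use (not executed here):
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         ...  # send HTTP requests over the encrypted channel
```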

16
Q

SSL

A

(Secure Sockets Layer) is a cryptographic protocol that provides secure communication over computer networks, especially the internet. It was widely used to establish secure connections between a client (such as a web browser) and a server, encrypting the data transmitted between them. SSL operates at the transport layer (Layer 4) of the OSI model and ensures confidentiality, integrity, and authentication of data. It uses asymmetric encryption (also known as public-key cryptography) to establish a secure session between the client and the server.

The SSL initiation process, also known as the SSL handshake, is the initial exchange between a client and a server to establish a secure SSL/TLS connection. It involves the client and server exchanging information about supported SSL versions, selecting cipher suites, authenticating certificates, and exchanging cryptographic keys. Once the handshake is complete, a secure session is established, enabling encrypted communication between the client and server.

SSL provides encryption, data integrity, authentication, and (through TLS) forward secrecy: if the private key of the server is compromised in the future, previously recorded SSL communications cannot be decrypted.

Though there are some differences, the terms “SSL” and “TLS” are used interchangeably.

17
Q

Forward Secrecy

A

Forward secrecy, also known as perfect forward secrecy (PFS), is a cryptographic property that ensures the confidentiality of past communication even if the long-term private key of a system is compromised in the future. It achieves this by generating unique session keys for each communication session, preventing the decryption of past sessions even if the private key is obtained.

18
Q

TLS

A

(Transport Layer Security) is a cryptographic protocol designed to provide secure communication over computer networks, such as the internet. It is the successor to SSL (Secure Sockets Layer) and operates at the transport layer (Layer 4) of the OSI model.

TLS differs from SSL in that it has its own version history (SSL 1.0, 2.0, and 3.0 vs. TLS 1.0, 1.1, 1.2, and 1.3), incorporating stronger cryptographic algorithms. More secure algorithms and cipher suites are used for key exchange, authentication, and encryption, and TLS adds support for forward secrecy. TLS was designed to be backwards-compatible with SSL, allowing it to negotiate using SSL protocols and cipher suites where necessary.

Though there are some differences, the terms “SSL” and “TLS” are used interchangeably.

19
Q

IPSEC

A

(Internet Protocol Security) is a suite of protocols used to secure Internet Protocol (IP) communications by providing authentication, integrity, and confidentiality services. It is commonly used for creating virtual private networks (VPNs) and ensuring secure communication between network devices over potentially insecure networks, such as the internet.

Operates in two modes: Transport and Tunnel

20
Q

Transport Mode (IPSEC)

A

IPSEC secures only the payload of the IP packet while leaving the IP headers intact. This mode is typically used for securing end-to-end communication between two hosts.

21
Q

Tunnel Mode (IPSEC)

A

The entire IP packet, including the original IP headers, is encapsulated within a new IP packet. This mode is commonly used for secure communication between networks or for remote access VPN’s.
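The contrast between the two modes can be sketched at the byte level. This is purely conceptual — real ESP headers, trailers, and cryptography are omitted, and `encrypt` is a stand-in, not a real cipher:

```python
# Conceptual sketch of IPsec transport mode vs. tunnel mode.
def encrypt(data: bytes) -> bytes:
    """Placeholder for real ESP encryption (illustration only)."""
    return bytes(b ^ 0xFF for b in data)

def transport_mode(ip_header: bytes, payload: bytes) -> bytes:
    # Original IP header stays visible; only the payload is protected.
    return ip_header + encrypt(payload)

def tunnel_mode(ip_header: bytes, payload: bytes, new_header: bytes) -> bytes:
    # The entire original packet, header included, is hidden inside
    # a new outer IP packet.
    return new_header + encrypt(ip_header + payload)
```

In transport mode an observer still sees the original endpoints in the header; in tunnel mode only the outer (gateway) addresses are visible, which is why it suits network-to-network VPNs.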

22
Q

SSH

A

(Secure Shell) is a network protocol that provides a secure and encrypted method for remote login, command execution, and data communication between two networked devices. It is commonly used to establish a secure remote connection to a server or network device over an unsecured network, such as the internet.

SSH provides secure communications through the use of strong encryption algorithms that protect against eavesdropping and tampering. SSH requires authentication of users (through generated key pairs) before establishing a connection, and once a connection has been formed, remote command execution can be accomplished. SSH also provides secure file transfer capabilities, allowing users to securely transfer files between the local and remote systems. Lastly, SSH incorporates port forwarding, which allows users to securely tunnel other network protocols or services through the SSH connection.

Port 22

23
Q

RDP

A

(Remote Desktop Protocol) is a proprietary protocol developed by Microsoft that allows users to remotely connect and control a Windows-based computer or server from another device. It provides a graphical user interface (GUI) for accessing and interacting with a remote computer as if you were sitting in front of it.

Port 3389

24
Q

Hardware Based VPN

A

A type of VPN implementation that relies on dedicated hardware devices to establish secure connections between remote networks or individual devices.

Hardware-based VPNs offload the VPN processing tasks to specialized devices, typically known as VPN appliances or VPN gateways. Hardware-based VPNs are particularly suitable for organizations that require high-performance, scalable, and secure VPN solutions. They are commonly deployed in enterprise networks, data centers, and large-scale VPN deployments where dedicated hardware resources can optimize VPN performance and manage large volumes of VPN traffic effectively.

Uses IPSEC for secure communications.

25
Q

Software Based VPN

A

A type of VPN implementation that relies on dedicated VPN client software to establish secure connections between remote networks or individual devices.

Uses IPSEC for secure communications.

26
Q

MPLS VPN

A

A type of VPN implementation that utilizes Multiprotocol Label Switching to create virtual pathways or tunnels that ensure the privacy and isolation of data traffic between the connected sites.

MPLS reduces routing complexity and lookups by substituting labels for network paths instead of using long IP address notations that may require complex routing table lookups. MPLS can increase the efficiency of routing network traffic. Access to and from cloud data centers, as well as access within an organization’s network, may involve many routers. Creating greater efficiency may enhance network performance.

Uses IPSEC for secure communications.
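The label-swapping idea behind MPLS can be sketched as a simple table: each router replaces an incoming label with an outgoing one in a single constant-time lookup, instead of searching IP prefixes. The labels and router names below are hypothetical:

```python
# Sketch of MPLS label swapping along a label-switched path.
# incoming label -> (outgoing label, next hop)
LABEL_TABLE = {
    17: (24, "router-b"),
    24: (31, "router-c"),
    31: (None, "egress"),   # final hop: pop the label, resume normal IP delivery
}

def forward(label):
    """One exact-match table lookup replaces a longest-prefix IP search."""
    out_label, next_hop = LABEL_TABLE[label]
    return out_label, next_hop
```

Following the table from label 17 traces the whole path: 17 becomes 24 at router-b, 24 becomes 31 at router-c, and the label is popped at the egress.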

27
Q

What are the 3 main components of a cloud service solution?

A
  1. Client: Means of access to cloud services for the consumer. Cloud services may include storage, email, e-commerce, office suites, and development environments. Users may access these services from phones, tablets, traditional computers, IoT devices, and servers. The cloud client devices may be any device with a network connection. The major operating systems on the client devices include Windows, macOS, Linux, iOS, and Android.
  2. CSP Datacenter: Hosts cloud services. Major CSPs (AWS, Microsoft, Google) have a great many datacenters distributed across the world. These datacenters are redundant, have extremely reliable access to power, have extremely reliable Internet access, and are physically secure. Cloud services are hosted within the walls of these datacenters.
  3. Network: Path between cloud services and client devices. In some deployment models, the network connection may be wholly owned and operated by your company. In other cases, the Internet may be the network path to cloud services. Access may also come via cell connections. In some cases, all three network connection types may be used.
28
Q

Public Cloud

A

Public cloud is a type of computing where resources are offered by a third-party provider via the internet and shared by organizations and individuals who want to use or purchase them. Some public cloud computing resources are available for free, while customers may pay for other resources through subscription or pay-per-usage pricing models.

29
Q

Private Cloud

A

Private cloud is defined as computing services offered either over the Internet or a private internal network and only to select users instead of the general public. Also called an internal or corporate cloud, private cloud computing gives businesses many of the benefits of a public cloud - including self service, scalability, and elasticity - with the added control and customization available from dedicated resources over a computing infrastructure hosted on-premises.

There are three main types of Private Clouds:

  1. On-premises Private Cloud
  2. Managed Private Cloud
  3. Virtual Private Cloud
30
Q

On-Premises Private Cloud

A

An on-premises private cloud is one that you can deploy on your own resources in an internal data center. You must purchase the resources, maintain and upgrade them, and ensure security. On-premises private cloud management is expensive, requiring a heavy initial investment and ongoing expenses.

31
Q

Managed Private Cloud

A

A managed private cloud is a single-tenant environment fully managed by a third party. For example, the IT infrastructure for your organization could be purchased and maintained by a third-party organization in its data center.

The third party provides maintenance, upgrades, support, and remote management of your private cloud resources. While managed private clouds are expensive, they are more convenient than on-premises solutions.

32
Q

Virtual Private Cloud

A

A virtual private cloud is a private cloud that you can deploy within a public cloud infrastructure. It is a secure, isolated environment where private cloud users can run code, host websites, store data, and perform other tasks that require a traditional data center.

Virtual Private Clouds efficiently give you the convenience and scalability of public cloud computing resources along with additional control and security.

Also known as Cloud Within a Cloud.

33
Q

Community Cloud

A

Community cloud computing refers to a shared cloud computing service environment that is targeted to a limited set of organizations or employees. The organizing principle for the community will vary, but the members of the community generally share similar security, privacy, performance and compliance requirements.

34
Q

Hybrid Cloud

A

A hybrid cloud is a combination of two or more private, public, or community deployments. For example, an organization may choose to utilize some services offered via a CSP’s public cloud while hosting other services in a private cloud environment.

The services in the public cloud portion may be cheaper, and security may be less of a concern. The services hosted in the private cloud may be more secure, but deployment is more expensive.

35
Q

Multitenancy

A

A cloud model where CSP resources are shared among multiple clients (tenants), and is the concept behind public cloud deployments. Multiple consumers, known as tenants, share computing resources owned and managed by the CSP. This is the opposite idea from a VPC deployment.

It is multitenancy that provides the cost benefits behind shared resource utilization.

36
Q

Multi-cloud

A

Multicloud is when an organization uses cloud computing services from at least two cloud providers to run their applications. Instead of using a single-cloud stack, multi-cloud environments typically include a combination of two or more public clouds, two or more private clouds, or some combination of both.

Multi-cloud deployments reduce reliance on a single vendor, provide greater service flexibility and choice, permit improved geographic control of data, and help manage disaster mitigation.

37
Q

Digital Ocean

A

Digital Ocean is a cloud hosting provider that offers cloud computing services and IaaS. Known for its low pricing and scalability, teams can deploy Digital Ocean resources in seconds at little cost. This structure can help anyone get up and running quickly in the cloud.

38
Q

Rackspace

A

Rackspace is a cloud service provider whose storage offerings include Cloud Files, Cloud Block Storage, and Cloud Backup. Rackspace also provides cloud servers, database platforms, load balancers, and other services to organizations. Users connect to it with the REST API.

39
Q

Red Hat Cloud Suite

A

Red Hat, originally known for its enterprise Linux operating system and supporting services, offers Red Hat Cloud Suite for cloud services. The suite consists of four key products: OpenStack Platform (for building public and private clouds), Virtualization, Satellite (for cloud services management), and OpenShift (for Kubernetes container management).

40
Q

OpenStack

A

OpenStack is an open source platform that uses pooled virtual resources to build and manage private and public clouds. The tools that comprise the OpenStack platform, called “projects”, handle the core cloud-computing services: compute, networking, storage, identity, and image services.

41
Q

OpenShift

A

OpenShift is a cloud-based Kubernetes platform that helps developers build applications. It offers automated installation, upgrades, and life cycle management throughout the container stack - the operating system, Kubernetes and cluster services, and applications - on any cloud.

42
Q

IoT

A

IoT refers to a combination of network connectivity and smart devices that facilitate the collection and analysis of data. These devices may include software, sensors, and robotics that exchange data and instructions over the Internet or internal networks. The IoT is enabled by nearly global network connectivity, low-cost sensors to collect data, and cloud management platforms.

Common uses for IoT products include:

  • Smart Homes
  • Medical Monitoring
  • Agriculture Management
  • Energy Management
  • Manufacturing
43
Q

Serverless Computing

A

A software architecture that runs functions within virtualized runtime containers in a cloud rather than on dedicated server instances. Serverless computing still utilizes compute resources, contrary to what the name implies. Compute resources are allocated on demand to applications, and no resources are reserved when the application is not in use. Billing reflects the application’s use of resources. Serverless environments require no configuration, monitoring, or capacity planning.

44
Q

Artificial Intelligence (AI)

A

The science of creating machines with the ability to develop problem solving and analysis strategies without significant human direction or intervention.

AI is concerned with simulating human intelligence by ingesting structured, semi-structured, and unstructured data and solving complex problems. AI accomplishes this by using a set of rules to manage its analysis.

45
Q

Machine Learning (ML)

A

A component of AI that enables a machine to develop strategies for solving a task given a labeled dataset where features have been manually identified but without further explicit instructions.

The goal of ML is to make accurate predictions by extracting data based on learned information and experience. ML systems are not explicitly programmed to find a particular outcome. Instead, they are programmed to learn from provided data and then make accurate decisions based on what they’ve learned. Insights are gained with minimal human interaction.

46
Q

Deep Learning

A

A refinement of machine learning that enables a machine to develop strategies for solving a task given a labeled dataset, without manually identified features and without further explicit instructions.

DL provides a greater degree of accuracy when analyzing unstructured data.

47
Q

Simple Storage Service (S3)

A

Amazon S3 is a service built to store, protect, and retrieve data from “buckets” at any time from anywhere on any device. Organizations of any size and industry can use this service. Use cases include websites, mobile apps, archiving, data backups and restorations, IoT devices, enterprise application storage, and providing the underlying storage layer for a data lake.

Organizing and retrieving data in Amazon S3 focuses on two key components: Buckets and Objects. These components work together to create the storage system. As AWS describes it, an S3 environment is a flat structure - a user creates a bucket; the bucket stores the objects in the cloud.
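The flat bucket/object structure can be modeled in a few lines. This is a toy illustration of the concept only — it is not the real S3 API (for that, AWS provides SDKs such as boto3), and all names here are made up:

```python
# Tiny in-memory model of a flat bucket/object store.
class FlatObjectStore:
    def __init__(self):
        self.buckets = {}  # bucket name -> {object key -> data}

    def create_bucket(self, name):
        self.buckets[name] = {}

    def put_object(self, bucket, key, data):
        # A key like "photos/2024/cat.jpg" looks hierarchical, but the
        # store itself is flat: the key is just a string, not a folder path.
        self.buckets[bucket][key] = data

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]
```

The "folders" you see in S3 tooling are a display convention over key prefixes; the model above has no directory tree at all.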

48
Q

Advantages of S3

A

Scalability: AWS allows you to scale resources up and down, while only charging you for the amount of resources you use.

Durability and Accessibility: S3 is designed for eleven nines (99.999999999%) of durability, meaning it is extremely reliable. The service automatically creates and stores copies of your S3 objects across multiple systems, meaning your data is protected and you can access it quickly whenever you need it.

Cost Effective: Data in S3 is stored in tiers, allowing frequently accessed data that must be available immediately to reside in hot storage, while less frequently needed data can be moved to warm or even cold storage. S3 can determine data priority from ongoing access patterns to allow for cost optimization.

Versioning: This is a setting that allows for multiple variants of the same file or object to exist in the same bucket. This provides an opportunity to roll back or recover a deleted object.

49
Q

Elastic Compute Cloud (EC2)

A

Amazon EC2 provides scalable computing capacity in the AWS cloud. Leveraging it enables organizations to develop and deploy applications faster, without needing to invest in hardware upfront. Users can launch virtual servers, configure security and networking, and manage storage from an intuitive dashboard.

AWS EC2 is important, as it does not require any hardware units, and is easily scalable (up or down). With EC2, you only pay for what you use, and you are given full control of your cloud environment. EC2 is highly secure, as well as highly available.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

What are some of the features of EC2

A

Virtual Machines, known as Instances.

Preconfigured Templates, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including OS and software)

Various hardware configurations.

Secure login through use of key pairs.

Storage volumes for temporary data that are discarded when an instance is stopped, hibernated, or terminated, known as instance store volumes.

Persistent storage volumes for your data using Amazon EBS

Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones.

A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances using security groups.

Use of Elastic IP Addresses.

Metadata, known as “Tags”, that you can create and assign to your Amazon EC2 resources.

Use of VPC

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

AWS Lambda

A

AWS Lambda is a compute service that lets you run code without provisioning or managing servers.

Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging. With Lambda, all you need to do is supply your code in one of the language runtimes that Lambda supports.

You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. You only pay for the compute time that you consume - there is no charge when your code is not running.
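
The shape of a Lambda function is just a handler that receives an event and a context object. A minimal Python handler, invoked locally here for illustration (the `name` event field is invented; in AWS the Lambda service calls the handler for you):

```python
def lambda_handler(event, context):
    # Lambda passes the triggering event as a dict; context carries runtime info.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Simulating an invocation locally; in AWS this happens per trigger/request.
print(lambda_handler({"name": "cloud"}, None))
```

The function only consumes compute time while the handler runs, which is why there is no charge when the code is idle.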

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

AWS S3 Glacier

A

Amazon S3 Glacier is a secure and durable service for low cost data archiving and long-term backup.

With S3 Glacier, you can store your data cost effectively for months, years, or even decades. S3 Glacier helps you offload the administrative burdens of operating and scaling storage to AWS, so you don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations.

S3 Glacier can be divided into three storage classes:

  • Instant Retrieval (Hot Storage)
  • Flexible Retrieval (Warm Storage)
  • Deep Archive (Cold Storage)
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

Amazon SNS

A

Amazon SNS is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers).

Publishers communicate asynchronously with subscribers by sending messages to a “topic”, which is a logical access point and communication channel.

Clients can subscribe to the SNS topic and receive published messages using a supported endpoint type, such as Amazon SQS, AWS Lambda, email, push notifications, etc.
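
The publish/subscribe pattern SNS implements can be sketched as a toy topic (this is not the boto3 API; the endpoints here are plain Python callables standing in for SQS queues, Lambda functions, and so on):

```python
class Topic:
    """Toy SNS-style topic: a publisher sends once, every subscriber receives."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, endpoint):
        # In SNS the endpoint could be an SQS queue, a Lambda function, email, etc.
        self.subscribers.append(endpoint)

    def publish(self, message):
        # The publisher does not know or care who is listening.
        for endpoint in self.subscribers:
            endpoint(message)

received = []
topic = Topic()
topic.subscribe(received.append)                        # e.g., an SQS queue
topic.subscribe(lambda m: received.append(m.upper()))   # e.g., a Lambda function
topic.publish("deploy finished")
print(received)
```

The key property is the decoupling: the publisher sends one message to the topic, and delivery fan-out to every subscriber is the topic's job.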

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

Amazon CloudFront

A

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and images files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that edge location, CloudFront retrieves it from an origin that you’ve defined - such as an Amazon S3 bucket, or an HTTP Server, that you have identified as the source for the definitive version of your content.
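
The routing logic described above - serve from the lowest-latency edge, and on a cache miss pull from the origin and cache the result - can be sketched like this (the edge names, latencies, and content are all invented):

```python
edges = {
    "us-east": {"latency_ms": 12, "cache": {"/logo.png": b"cached bytes"}},
    "eu-west": {"latency_ms": 85, "cache": {}},
}
ORIGIN = {"/logo.png": b"origin bytes", "/new.css": b"fresh css"}

def fetch(path):
    # Route the request to the edge with the lowest latency for this user.
    name, edge = min(edges.items(), key=lambda kv: kv[1]["latency_ms"])
    if path in edge["cache"]:
        return edge["cache"][path]   # cache hit: delivered immediately
    body = ORIGIN[path]              # cache miss: retrieve from the origin
    edge["cache"][path] = body       # ...and keep a copy at the edge
    return body

print(fetch("/logo.png"))  # already cached at the nearest edge
print(fetch("/new.css"))   # fetched from the origin, then cached
```

After the first miss, subsequent requests for the same path are served from the edge, which is where the latency benefit comes from.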

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

Azure Visual Studio

A

Visual Studio is a powerful developer tool that you can use to complete the entire development cycle in one place. It is a comprehensive integrated development environment (IDE) that you can use to write, edit, debug, and build code, and then deploy your app. Beyond code editing and debugging, Visual Studio includes compilers, code completion tools, source control, extensions, and many more features to enhance every stage of the software development process.

Visual Studio provides developers a feature rich development environment to develop high-quality code efficiently and collaboratively. Some features include:

  • Workload-based Installer (install only what you need)
  • Powerful coding tools and features
  • Multiple coding language support
  • Cross-platform development (build apps for any platform)
  • Version control integration (collaborate on code with teammates)
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

Azure Backup

A

The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Azure Cloud.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

Azure SQL

A

Azure SQL Database is a fully managed platform as a service (PaaS) database engine that handles most of the database management functions such as upgrading, patching, backups, and monitoring without user involvement. Azure SQL Database is always running on the latest stable version of the SQL Server database engine and patched OS with 99.99% availability. PaaS capabilities built into Azure SQL Database enable you to focus on the domain-specific database administration and optimization activities that are critical for your business.

With Azure SQL Database, you can create a highly available and high-performance data storage layer for the applications and solutions in Azure. SQL Database can be the right choice for a variety of modern cloud applications because it enables you to process both relational data and non-relational structures, such as graphs, JSON, spatial, and XML.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

Azure Cosmos DB

A

Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times and automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with SLA-backed availability and enterprise-grade security.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Managed Service Provider

A

An MSP is a company that remotely manages a customer’s IT infrastructure and/or end-user systems, typically on a proactive basis and under a subscription model. The terms “cloud service provider” and “managed service provider” are sometimes used as synonyms when the provider’s service is supported by an SLA and is delivered over the internet.

There are also MSPs who are independent of the CSP. Your organization may choose to outsource cloud design, migration, deployment, and management solutions to these companies, relying on their expertise and experience. Many CSPs also offer management services for their products. For example, AWS offers AWS Managed Services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

The Shared Responsibility Model

A

The Shared Responsibility Model is a security and compliance framework that outlines the responsibilities of cloud service providers (CSPs) and customers for securing every aspect of the cloud environment, including hardware, infrastructure, endpoints, data, configurations, settings, operating system (OS), network controls and access rights.

As Amazon puts it, “CSPs are responsible for the security of the cloud; the consumer is responsible for security in the cloud.”

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

Cloud Subscription Service Models

A

Refers to the pricing and billing structure that cloud service providers use to offer their services to customers. Instead of purchasing software or hardware upfront, customers pay a recurring fee to access and use cloud-based resources.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

IAM

A

Identity and Access Management (IAM) lets administrators authorize who can take action on specific resources, giving you full control and visibility to manage cloud resources centrally. This is especially valuable for enterprises with complex organizational structures, hundreds of workgroups, and many projects.

IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.

IAM provides tools to manage resource permissions with minimum fuss and high automation. Map job functions within your company to groups and roles. Users get access only to what they need to get the job done, and admins can easily grant default permissions to entire groups of users.

Create more granular access control policies to resources based on attributes like device security status, IP address, resource type, and date/time.
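
An attribute-based check like the one just described might look like this sketch. The attribute names and the policy format are invented for illustration - this is not a real IAM policy document:

```python
def is_allowed(request, policy):
    """Allow only if every attribute the policy names matches the request."""
    return all(request.get(attr) == wanted for attr, wanted in policy.items())

# Hypothetical policy: only secure devices may touch bucket-type resources.
policy = {"device_secure": True, "resource_type": "bucket"}

print(is_allowed(
    {"device_secure": True, "resource_type": "bucket", "ip": "10.0.0.5"}, policy))
print(is_allowed(
    {"device_secure": False, "resource_type": "bucket"}, policy))
```

Attributes not mentioned by the policy (like `ip` above) are ignored, which is how such policies stay granular without enumerating every request property.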

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

Provisioning

A

The process of deploying an application to the target environment, such as enterprise desktops, mobile devices, or cloud infrastructure.

Provisioning is one of several steps in the cloud services deployment process. The term refers to the allocation of cloud resources in the overall enterprise infrastructure. The provisioning process is governed by objectives, policies, and procedures for deploying services and data.

Provisioning is usually self-service, reflecting one of the NIST cloud characteristics discussed earlier.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

Cloud Applications

A

With cloud applications, the installation and processing occur in the cloud, rather than on local workstations or servers. The cloud may be a private or public network. The applications are accessed over the network. One advantage of cloud applications is a consistent experience for all users, whether they use the same workstation platform or mobile device.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Virtualization

A

Virtualization allocates hardware resources among one or more VMs. The VMs then have an operating system and one or more applications installed on them. The VM participates on the network as a regular node, providing database, authentication, storage, or other services. VMs have greater access to hardware resources and can be provided with redundancy to increase high availability.

VMs are a key component of cloud-based IaaS services, such as AWS EC2 or Azure Virtual Machines.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

Containerization

A

Containerization is a form of virtualization, but it is significantly different than VMs. Containers virtualize at the OS layer, rather than the hardware layer. A container holds a single application and everything it needs to run. This narrow focus allows containers to excel with technologies such as microservices. Containers are very lightweight, share a single OS (usually Linux), and provide a single function. GCP, Azure, and AWS all offer cloud-based container services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

Templates

A

A virtual machine template is a master copy of a virtual machine that usually includes the guest OS, a set of applications, and a specific VM configuration. Virtual machine templates are used when you need to deploy many VMs and ensure that they are consistent and standardized.

CSPs also use templates to offer flexible but standardized VM configurations to customers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

Post-Deployment Validation

A

Post-deployment validation ensures that deployed apps or services meet required service levels. Depending on the service, this may be handled through regression or functionality testing.

If possible, automate post-deployment validation for efficiency and consistency.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

Auto-Scaling

A

Auto-scaling takes advantage of automated deployments and virtualizations to provide appropriate resources for the current demand. Resources can be scaled up or down to manage costs. Your organization only pays for the resources that it consumes. Auto-scaling is useful when your resource utilization is difficult to predict or is seasonal.

Resources may be scaled up (more compute power, such as RAM, given to a single virtual server) or scaled out (more virtual servers deployed). When demand is reduced, the resources are reduced, saving money.
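
The scale-out/scale-in decision can be sketched as a simple utilization rule. The thresholds here are arbitrary placeholders, not values any cloud provider prescribes:

```python
def desired_instances(current, cpu_percent, high=70, low=30, minimum=1):
    """Scale out when busy, scale in when idle, never below the minimum."""
    if cpu_percent > high:
        return current + 1   # scale out: deploy another virtual server
    if cpu_percent < low and current > minimum:
        return current - 1   # scale in: demand dropped, so save money
    return current

print(desired_instances(2, 85))  # busy: grow to 3
print(desired_instances(3, 10))  # idle: shrink to 2
print(desired_instances(1, 10))  # already at the minimum: stay at 1
```

Real auto-scaling policies add cooldown periods and averaging windows so a brief spike does not cause the fleet to thrash, but the core decision looks like this.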

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

Hyper-converged

A

Hyper-convergence is an IT framework that combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability.

Hyper-converged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking, and they typically run on standard servers. Multiple nodes can be clustered together to create pools of shared compute and storage resources, designed for convenient consumption.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

What are the basic steps in the troubleshooting methodology?

A
  1. Identify the problem
  2. Determine the scope of the problem
  3. Establish a theory of probable cause, or question the obvious.
  4. Test the theory to determine the cause.
  5. Establish a plan of action
  6. Implement the solution, or escalate.
  7. Verify full system functionality
  8. Implement preventative measures
  9. Perform a root cause analysis
  10. Document findings, actions, and outcomes throughout the process
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

Service Level Agreement

A

A contract between the provider of a service and a user of that service, specifying the level of service that will be provided.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

What are the differing needs when comparing users to businesses?

A

Users typically are concerned with front-end needs such as applications, network performance, technical support, etc.

Businesses are typically concerned with costs, integration with existing services, compliance, and data storage.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

CapEx

A

The spending of business funds to buy or maintain fixed business assets, such as datacenters, servers, buildings, and so on.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

OpEx

A

The spending of business funds for ongoing business costs, such as utilities, payroll, and so on. Cloud subscriptions are usually an OpEx.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

SQL

A

A programming and query language common to many large-scale database systems.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

NoSQL

A

A non-relational database for storing unstructured data, common with big data technologies.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

Big Data

A

Large stores of unstructured and semi-structured information. As well as volume, big data is often described as having velocity, as it may involve the capture and analysis of high bandwidth network links.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

Business Requirement Documents (BRDs)

A

The document defining a project’s scope, success factors, constraints, and other information to achieve project goals.

Business analysts will help develop BRDs that provide the answers to “What?” and “Why?” questions regarding services and applications to ensure the business will benefit from projects such as cloud migrations and web app development.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

Development Environment

A

Development is the act of programming an application or other piece of code that executes on a computer.

The development environment is where programmers code projects, detect bugs, manage code versions, and implement code-level security.

In a cloud deployment, this environment may be a combination of PaaS (for actual development work) and IaaS (for testing).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

Staging Environment

A

Staging is a user testing environment that is a copy of the production environment.

The staging environment (which is also the quality assurance environment) is where QA testers validate cloud applications and services. This validation may include security and performance testing. The tests may be automated or manual (or both).

The cloud may provide an IaaS environment for staging. This environment may need to scale significantly as part of performance testing, so costs here may not reflect anticipated costs in the production environment.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

Production Environment

A

Production is an IT environment available to consumers for normal, day-to-day use.

The production environment is available to end-users. Security is in place to protect data and availability.

If the production environment is hosted in the cloud, scalability may be a concern too. Monitoring and availability must be assured here, probably at a higher level than in the other two environments.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

Blue-Green Release Model

A

A variation of the separate development, staging, and production environments is the blue-green release model. In this model, two identical environments are available, one labeled “blue” and the other “green”. At any given time, only one of these is hosting the production environment. The idle environment serves as the staging area for the next release of the software or service. Final testing and QA are performed there, and users are then switched over to the new environment.

Conventionally, the blue environment is the current production environment, while the green environment serves as the staging environment for the next release.
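
The cut-over itself is just flipping which environment receives production traffic. A sketch (the version labels are invented):

```python
environments = {"blue": "v1.0", "green": "v1.1"}
live = "blue"  # blue currently serves production; green is staging

def cut_over():
    """Point production at the other environment; the old one becomes staging."""
    global live
    live = "green" if live == "blue" else "blue"

print(f"production runs {environments[live]}")
cut_over()
print(f"production runs {environments[live]}")
```

Because the previous environment is left intact, rolling back a bad release is just another cut-over in the opposite direction.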

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

Canary Deployment Model

A

The deployment model that gradually moves users from an old deployment to a new one, rather than an immediate switchover of all users.

The Canary model is similar to the Blue-Green Model, except that users are gradually migrated from the older environment to the newer environment, instead of the complete and immediate migration used with the blue-green model.
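
Gradual migration can be sketched as routing a growing percentage of users to the new environment. The percentages and user-id-based routing rule here are invented for illustration:

```python
def route(user_id, canary_percent):
    """Deterministically send a fixed slice of users to the new environment."""
    return "new" if user_id % 100 < canary_percent else "old"

# Ramp the canary from a small slice up to full rollout.
for pct in (5, 25, 100):
    share = sum(route(uid, pct) == "new" for uid in range(1000)) / 1000
    print(f"{pct}% canary -> {share:.0%} of users on the new environment")
```

Routing on a stable user attribute (rather than at random per request) keeps each user on one environment, so problems surface in a contained group before the rollout widens.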

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

Vulnerability Testing

A

An evaluation of a system’s security and ability to meet compliance requirements based on the configuration state of the system, as represented by information collected from the system.

Vulnerability Testing empirically identifies, quantifies, and ranks vulnerabilities in networks, operating systems, services, and applications. The goal is to identify each vulnerability so that it can be mitigated.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

Penetration Testing

A

A test that uses active tools and security utilities to evaluate security by simulating an attack on a system. A pen test will verify that a threat exists, then will actively test and bypass security controls, and will finally exploit vulnerabilities on the system.

Such testing begins with an analysis of available resources, looking for older, unpatched, or vulnerable software. The testing also includes an analysis of business practices.

Penetration testing may help meet several strategic goals:

  • Compliance
  • Identify weaknesses in processes and configurations
  • Identify vulnerabilities in software and operating systems.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

Performance Testing

A

A test that shows an application’s ability to function under a given workload in order to confirm performance and scalability.

For cloud services, this information is useful for determining scalability settings. For example, scaling can be done via scale-up (more resources, such as memory, given to a VM) or scale-out (more VMs deployed). Applications may respond better to one or the other of these scaling practices.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Regression Testing

A

The process of testing an application after changes are made to see if these changes have triggered problems in older areas of code.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

Functional Testing

A

A test method used in QA to confirm that a solution meets the required needs.

Functional testing evaluates whether a system or application meets its specification - does it do what it is supposed to do?

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

Usability Testing

A

A testing method where end-users provide direct feedback on requirements and usability.

Usability Testing is performed by end-users and provides direct feedback on the interface, features, and practical use. Usability testing helps ensure the application or service meets requirements and will actually be useful upon release.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

Capacity Planning

A

Capacity planning is the process of determining and optimizing the resources required to meet the demands of an organization. It involves forecasting future needs, evaluating existing capacities, and making strategic decisions to ensure that sufficient resources are available to support business operations efficiently.

Capacity Planning is concerned with the following questions:

  • What is the current baseline or service level?
  • What is the current capacity?
  • What future needs can we predict, based on upcoming business initiatives?
  • Are there consolidation opportunities for services, applications, or data sources?
  • What recommendations can be made, and what actions can be taken?

Capacity planning helps organizations avoid overprovisioning or underprovisioning of resources. Overprovisioning can lead to unnecessary expenses and underutilization, while underprovisioning can result in performance degradation and user dissatisfaction. By accurately forecasting capacity needs, organizations can optimize resource allocation, improve system performance, enhance scalability, and ensure that service levels are maintained.
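
Forecasting future needs often starts with a simple trend projection. A sketch that fits a least-squares line to past utilization and projects it forward (the usage numbers are invented):

```python
def linear_forecast(history, periods_ahead):
    """Least-squares slope over evenly spaced samples, projected forward."""
    n = len(history)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history)) \
            / sum((x - mean_x) ** 2 for x in range(n))
    return history[-1] + slope * periods_ahead

# Monthly storage use in TB, growing about 2 TB per month.
usage = [10, 12, 14, 16]
print(linear_forecast(usage, 3))  # projected use three months out
```

A projection like this is only as good as the assumption that past growth continues, so real capacity plans also fold in known upcoming initiatives and seasonality.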

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

Solution Requirements

A

Defines the criteria for a solution to a given problem that software or services are expected to meet.

The requirements define what needs to happen without specifying how the solution will be met.

For cloud services, a solution requirements document might specify that content is quickly available to users. The solution might be a content delivery network, but that is selected later in the process.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

Business Needs Analysis

A

The document containing solutions that must be found in order for the organization to achieve its strategic goals.

Such goals might include decreasing costs, increasing revenue, increasing a customer base, or increasing operational effectiveness.

Many organizations believe they need to migrate to the cloud so they are not left behind technologically, but they don’t have a good understanding of why (or if) the migration is useful or what (if any) benefits they can expect from cloud services. A business needs analysis will identify a specific business problem for which cloud service might provide a solution.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

What are the different types of licensing for cloud-based services?

A

Per User: One license for each user that consumes the software or service

Socket Based: One license for each CPU that attaches to a socket of a motherboard, regardless of the number of cores the CPU contains.

Core Based: One license for each core in a server’s CPU.

Volume Based: One license that permits a specified number of installations, for example, installation of the software on up to 100 computers.

Perpetual: One-time fee for a license that may include additional support costs; however, the license is good for the life of the software.

Subscription: Periodic cost; usually includes at least basic technical support, maintenance, and possibly upgrades.
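
Choosing between a perpetual license and a subscription often comes down to a break-even calculation. The prices here are invented placeholders:

```python
def breakeven_months(perpetual_cost, monthly_subscription):
    """Months after which a perpetual license becomes cheaper than subscribing."""
    return perpetual_cost / monthly_subscription

# Hypothetical: $1,200 perpetual license vs. a $50/month subscription.
months = breakeven_months(1200, 50)
print(f"break-even after {months:.0f} months")
```

If you expect to use the software well past the break-even point, the perpetual license wins on raw cost; the subscription's included support, maintenance, and upgrades can still tip the decision the other way.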

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

System Load

A

A measure of how busy the system’s CPU is over a period of time. The load is usually reported over three points in time: one minute, five minutes, and 15 minutes.

While there are usually counters for CPU utilization itself, the system load is better measured by using CPU queue length. That value tracks processes currently being run by the CPU as well as those that are awaiting the CPU’s attention.

Typically, the queue length value should not exceed the number of logical processors (cores) in the system.

Operating systems such as Linux and Windows Server have tools to display the CPU queue length. These tools include “top” in Linux and “Performance Monitor” in Windows Server. Cloud administrators can watch these values on cloud-based VMs to ensure performance expectations are met.
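
The rule of thumb above - the run queue should not exceed the logical core count - is easy to express in code. The sampled queue lengths below are invented; on Unix-like systems the real 1-, 5-, and 15-minute load averages are available via `os.getloadavg()`:

```python
import os

def overloaded(queue_length, cores=None):
    """Flag a system whose CPU queue length exceeds its logical core count."""
    cores = cores or os.cpu_count()
    return queue_length > cores

# Hypothetical 1-, 5-, and 15-minute samples for a 4-core VM.
for sample in (2.5, 4.0, 9.3):
    print(sample, "overloaded" if overloaded(sample, cores=4) else "ok")
```

A sustained value above the core count means processes are waiting for CPU time, which is the signal to investigate or scale.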

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

Trend Analysis

A

The process of detecting patterns within a dataset over time, and using those patterns to make predictions about future events or better understand past events.

The results acquired are used for capacity planning and system scaling. Trend analysis can help the IT staff understand what to move to the cloud and when.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

Baselines

A

The point from which something varies. A configuration baseline is the original or recommended settings for a device while a performance baseline is the originally measured throughput.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

Price Estimators

A

Free tools offered by cloud service providers to estimate the costs of cloud services with various configurations.

They break the costs out into sections to help your organization better understand how changes to resources impact OpEx.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

Storage as a Service (STaaS)

A

A common cloud subscription for managing file storage for both home users and businesses.

Data stored using STaaS is available from any device, adding a significant layer of convenience.

Examples include:

  • Dropbox
  • MS OneDrive
  • iCloud
  • Google Drive
  • AWS Backup
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

Virtual Desktop Infrastructure (VDI)

A

A virtualization implementation that separates the personal computing environment from a user’s physical computer.

The desktops can be accessed from any device over a web-based connection and from any location. IT management of the desktops may be easier and less expensive due to centralization. A new patch or application is deployed only on the centralized server, and any desktop instance launched includes the update.

VDI is a subset of the desktop as a service (DaaS) concept. There are other ways of implementing remote desktops.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

Single Sign-On (SSO)

A

An authentication technology that enables a user to authenticate once and receive authorizations for multiple services.

Users may be assigned preconfigured roles that grant a given level of access to cloud-based resources. These roles are usually created based on the principle of least privilege and help ensure regulatory compliance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

Identity Management (IdM)

A

A security process that provides identification, authentication, and authorization mechanisms for users, computers, and other entities to work with organizational assets like networks, operating systems, and applications.

The terms IAM and IdM are often used interchangeably.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

Compute Resources

A

In cloud architecture, the resources that provide processing functionality and services, often in the context of an isolated container or VM.

Compute resources encompass CPU, memory, storage, and network allocations. Compute functions rely on computing I/O functionality to accomplish calculation-based tasks. Administrators will create compute solutions to meet specific needs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

Hypervisor

A

The software or firmware that creates a virtual machine on the host hardware and provides virtualization services.

This layer manages the hardware allocations and controls VM access to hardware. VMs can be restored to a given point in time with snapshots and replicated among host machines for high availability. Finally, adjustments can be made to the allocated hardware, allowing administrators to scale VM capacity depending on the workload.

105
Q

OS templates

A

A preconfigured OS ready for deployment with required settings and applications.

Composed of configuration files that specify all of the hardware allocations - CPU information, RAM quantities, network options, and storage - for the VM.

106
Q

Cloning

A

The process of quickly duplicating a virtual machine’s configuration when several identical machines are needed immediately.

Building from templates is useful for deploying machines that reflect a standardized initial deployment configuration.

107
Q

Solution Template

A

A complete template that includes multiple virtual servers, various services, and network configurations.

Solution templates include complete VM, network, and storage configurations to deploy an entire solution to a consumer. Once deployed, the consumer will manage the structure themselves.

108
Q

Managed Template

A

A complete VM, storage, and network configuration managed by IT staff.

Managed templates also include a complete VM, network, and storage configuration; however, you or your IT staff will manage the structure on behalf of the customer.

109
Q

Containers

A

An operating system virtualization deployment containing everything required to run a service, application, or microservice.

A container is a complete, portable solution. It contains the application code, runtime, libraries, settings, and other components - everything needed for the software to run. This complete package is portable and will run on any platform hosting a container solution.

Containerized software developed on a Linux workstation runs the same way on a Windows cloud-based VM. Containers are typically very quick and reliable.

Containers may be deployed on physical, on-premises servers or a cloud infrastructure. They can even be used on individual workstations. Deploying containers in the cloud provides all the usual cloud benefits: scalability, HA, and quick deployments.

Popular container solutions include:

  • Docker
  • Kubernetes
  • Hyper-V and Windows Containers
  • Podman
110
Q

Image

A

A duplicate of an operating system installation (including installed software, settings, and user data) stored on removable media. Windows makes use of image-based backups, and images are also used for deploying Windows to multiple PCs rapidly.

111
Q

Container Variables

A

Container variables are environment variables passed into a container at runtime (for example, with a `-e` flag at launch). They configure the containerized application, such as connection strings, log levels, or feature flags, without rebuilding the image, so the same image can run in development, testing, and production.
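
Because container platforms typically deliver these variables as environment variables, the containerized application reads them like any other environment variable. A minimal Python sketch (`DB_HOST` and `LOG_LEVEL` are illustrative names, not a standard):

```python
import os

# Read configuration injected into the container at launch
# (e.g., `docker run -e DB_HOST=db.internal ...`); names are illustrative.
db_host = os.environ.get("DB_HOST", "localhost")
log_level = os.environ.get("LOG_LEVEL", "INFO")

print(db_host, log_level)
```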

112
Q

Container Secrets

A

Container secrets refer to sensitive information, such as passwords, API keys, database credentials, or other confidential data, that are securely stored and accessed by containers within a cloud or container orchestration platform. Managing secrets is important to protect sensitive information from unauthorized access or exposure.
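
Secrets are often mounted into the container as files rather than passed as environment variables (the pattern used by Docker Swarm and Kubernetes). A sketch of reading one; the `/run/secrets` path and the `read_secret` helper are illustrative:

```python
from pathlib import Path

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Read a secret mounted into the container as a file.

    The /run/secrets location follows the Docker Swarm convention;
    adjust the directory for your platform.
    """
    return Path(secrets_dir, name).read_text().strip()

# Inside a container with the secret mounted, this would return its value:
# db_password = read_secret("db_password")
```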

113
Q

Persistent Storage

A

By default, Docker containers do not persistently store data. Data generated by the container exists in the container and disappears when the container is stopped. However, containers can be configured to store data on the host system.

114
Q

Auto-Scaling

A

The dynamic and automated method by which cloud compute capacity is scaled up or down to satisfy workload demand.

Auto-Scaling provides cost savings, solves the issue of too many or too few resources available, and detects unhealthy instances and replaces them, resulting in HA.

115
Q

Scaling Up

A

Adding compute resources to an existing instance. Also known as vertical scaling.

116
Q

Scaling Out

A

Adding more VM instances to an auto-scaling group. Also known as horizontal scaling.

117
Q

Type 1 Hypervisor

A

A hypervisor that runs directly on server hardware without an intermediate operating system.

Very efficient, they have direct access to the hardware, without having to go through a configuration layer (such as a type 2 hypervisor). In this deployment, there is not a host operating system, such as Windows Server or RHEL.

Advantages:
- Greater performance
- Greater security (no host OS vulnerabilities)

Disadvantages:
- Requires a management interface, usually on a different host

118
Q

Type 2 Hypervisor

A

A hypervisor that runs an application within an operating system.

These are very common for workstation or developer deployments and less common for production servers. For example, developers might create applications on an Ubuntu Linux workstation and use Windows 10 and RHEL 8 VMs to test the applications.

Advantages:
- Easy access to multiple operating systems
- Independent of the host operating system

Disadvantages:
- Additional latency and security problems by having the host OS between the hypervisor virtualization layer and hardware

119
Q

Simultaneous Multi-Threading (SMT)

A

A CPU design to manage more than one processing thread.

120
Q

Hyper Threading

A

Hyper-Threading is Intel's implementation of SMT. It exposes two or more logical processors to the OS, delivering performance benefits similar to SMP.

121
Q

Oversubscription

A

The allocation of compute resources to consumers with the assumption that not all resources will be required.

Involves allocating more resources than the physical server actually has. Private cloud administrators might do this based on the anticipated workload for a given set of VMs. For example, a physical server might have 16 GB RAM, but a total of 24 GB might be allocated to six VMs that reside on the hardware. The hypervisor layer is responsible for allocating memory based on actual utilization at any given time by running VMs. Clearly, this is not a good choice if all VMs will consume their fully allocated memory quantities. However, if well planned, over-allocation can permit maximization of hardware utilization.
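
The arithmetic behind the example above can be sketched as:

```python
def oversubscription_ratio(allocated_gb: float, physical_gb: float) -> float:
    """Ratio above 1.0 means the host is oversubscribed."""
    return allocated_gb / physical_gb

# The card's example: 24 GB allocated across six VMs on a 16 GB host.
ratio = oversubscription_ratio(allocated_gb=24, physical_gb=16)
print(f"{ratio:.2f}x oversubscribed")  # 1.50x oversubscribed
```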

122
Q

vCPU

A

A vCPU represents a portion of a physical CPU that is allocated to a virtual machine (VM) or a container.

In virtualized environments, a physical CPU is divided into multiple virtual CPUs, each of which behaves as if it were a separate physical CPU. These virtual CPUs are assigned to different VMs or containers to enable them to run concurrently on the same physical hardware.

The concept of vCPU allows for efficient utilization of computing resources by allowing multiple virtual machines or containers to share the processing power of a single physical CPU. Each vCPU is capable of executing instructions and performing computations, similar to a physical CPU core.

It’s important to note that while vCPUs provide isolation and resource allocation for virtual machines or containers, they are not equivalent to physical CPU cores in terms of performance. The actual performance of a vCPU depends on various factors, including the underlying physical hardware, the hypervisor or virtualization layer, and the workload characteristics.

123
Q

Graphical Processing Unit (GPU)

A

A specialized electronic circuit or chip designed to handle and accelerate the processing of computer graphics and visual data. Originally developed for rendering images, videos, and 3D graphics, GPUs have evolved to become powerful processors that excel at parallel processing and performing complex mathematical computations.

While GPUs are commonly associated with graphics-intensive applications, such as gaming and video editing, they have found extensive use in various other fields, including scientific research, machine learning, data analysis, and cryptocurrency mining. The ability of GPUs to perform large-scale parallel computations makes them valuable for tasks that involve massive amounts of data and require intensive mathematical calculations.

124
Q

vGPU

A

A virtualized instance of a GPU that is allocated to a virtual machine (VM) in a virtualized or cloud computing environment. vGPU technology allows multiple VMs to share the processing power and capabilities of a physical GPU.

vGPU technology enables the partitioning of a physical GPU into multiple virtual GPUs, which can be assigned to different VMs. Each VM with a vGPU has the illusion of having a dedicated GPU, allowing it to perform graphics-intensive tasks and take advantage of GPU acceleration.

125
Q

Pass-through GPU

A

A virtualization configuration that provides a virtual machine with direct access to GPU resources, bypassing the hypervisor.

While this makes hardware management more difficult, this configuration provides for predictable and excellent performance.

126
Q

Instructions per cycle (IPC) / Clock Speed

A

A common measurement for CPUs, IPC is the average number of instructions executed per clock cycle. Clock speed, by contrast, is the number of cycles per second; together the two determine how much work a CPU completes per unit of time.

Optimizing either can allow for greater time efficiency.

127
Q

Memory Ballooning

A

A method where a hypervisor reclaims unused memory from a VM if the host is low on RAM.

The VMs may still attempt to use that space, at which point other memory management tricks, such as swapping or paging, will occur.

Ballooning is an indication that your host does not have enough memory, and your best recourse is to add RAM (or reduce the host's workload). However, if increasing the host's memory is not possible, ballooning can help maintain performance until the underlying situation can be resolved.

128
Q

Data Compression

A

Reducing the amount of space that a file takes up on a disk using various algorithms that describe it more efficiently. File storage compression uses lossless techniques. NTFS-formatted drives can compress files automatically, while ZIP compression adds files to a compressed archive.

Lossy compression, such as that used by JPEG and MPEG image and video formats, discards some information in the file more or less aggressively, allowing for a trade-off between picture quality and file size.

This strategy is particularly useful for stored data that is not frequently accessed because decompressing the information takes CPU time.
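
Lossless compression is easy to demonstrate with Python's standard `zlib` module; repetitive data compresses especially well:

```python
import zlib

data = b"cloud storage " * 1000          # highly repetitive sample data
packed = zlib.compress(data)

assert zlib.decompress(packed) == data   # lossless: the round trip is exact
print(len(data), "->", len(packed), "bytes")
```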

129
Q

Data Deduplication

A

A technique for removing duplicate copies of repeated data. In SIEM, the removal of redundant information provided by several monitored systems.

Data deduplication removes duplicate file chunks, replacing the content with pointers to a definitive copy. Depending on the stored data, deduplication can result in immense storage savings.
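
The pointer-replacement idea can be sketched in a few lines. This is a toy model, assuming fixed chunks and ignoring real-world concerns such as chunk-boundary detection:

```python
import hashlib

def deduplicate(chunks):
    """Keep one definitive copy of each unique chunk; duplicates become
    pointers (here, just the chunk's SHA-256 digest) into the store."""
    store, pointers = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # first occurrence is the definitive copy
        pointers.append(digest)
    return store, pointers

chunks = [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]
store, pointers = deduplicate(chunks)
print(len(chunks), "chunks,", len(store), "stored")  # 4 chunks, 2 stored
```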

130
Q

Thin Storage Provisioning

A

Provisioning of storage that dynamically grows to meet storage capacity needs up to a maximum size.

Thin storage can grow on demand, up to the maximum capacity, but the organization will be charged only for the storage space actually used. Keep in mind, however, that dynamically growing the storage capacity on demand can result in a performance degradation.

131
Q

Thick Storage Provisioning

A

Provisioning of storage that reserves the entire specified storage capacity immediately, whether it is needed or not.

132
Q

How can data storage limits be set to control costs and capacity?

A

Storage Filesystem: Filesystems such as Microsoft NTFS support storage quotas on a per-user/ per-group basis.

Storage Device: Windows disk partitions support quotas to limit the total space users can consume on the partition

Cloud Filesystems: Cloud storage supports quotas to limit the total space users can consume.

133
Q

Soft Storage Quota

A

The storage limitation is not enforced. Typically, a notification will be sent to the user or the administrator or placed in log files.

134
Q

Hard Storage Quota

A

The storage limitation is enforced, and the user may not store additional data until space is freed or the quota is extended.

135
Q

Solid-State Drives (SSDs)

A

Persistent mass storage devices implemented using flash memory.

SSDs are the standard in end-user workstations and are becoming more common in servers. SSD technology has taken a long time to catch up to HDD capacity, but the gap is closing. While SSD is more expensive, the performance trade-off is often worth the extra cost.

136
Q

Hard Disk Drives (HDDs)

A

Device providing persistent mass storage. Data is stored using platters with a magnetic coating that are spun under disk heads that can read and write to locations on each platter (sectors).

137
Q

Hybrid Disks

A

Hybrid disks use a mix of SSD and HDD technologies to attempt to maximize performance and cost. Frequently accessed data is stored on the flash memory portion of the disk, while less commonly used information is stored on the spinning disks.

138
Q

Block Storage

A

The storage method that breaks data into chunks and distributes the chunks across available storage space (and even across several storage devices), independent of the server’s filesystem. Block storage organizes data for the benefit of the data itself, whereas file storage organizes data for the system’s benefit.

Block storage can be very efficient, but it can also be very expensive. Block storage is used with SANs, which can also be difficult and expensive to implement. It can be very effective for larger chunks of data that are modified frequently, such as databases.

Block storage such as AWS EBS provides high availability and redundancy by replicating data within the availability zone that hosts the EBS storage. This implementation differs from S3 data storage, which may be replicated between availability zones. EC2 instances must reside in the same zone as the EBS storage.

AWS Block Storage = AWS EBS
Azure = Azure Disk Storage
GCP = GCP Persistent Disk

139
Q

File Storage

A

A storage method where data is managed as a discrete file within an operating system’s filesystem. Common on workstations, servers, and NAS devices.

Information about retrieving data is stored in filesystem metadata. This type of storage is inexpensive and useful for relatively small pieces of data.

Access is provided via shared filesystems, such as network file system (NFS) and common Internet file system (CIFS). The file storage type is essentially a cloud-based NAS that shares folders to client computers, whether they are on premises or in the cloud infrastructure.

Cloud-based file services are scalable, simple, and inexpensive. They are also usually familiar to Windows and Linux server administrators.

140
Q

Object Storage

A

A storage method where data is broken into chunks and managed with detailed metadata. It is very scalable and is best for data that is written once and read often. Object storage is composed of tenants and buckets.

Tenants are members of the public cloud that share the existing environment. For security, tenant IDs uniquely label the consumer and help filter access to resources. Tenant IDs are assigned to namespaces within which storage buckets are created to provide further data management.

Buckets are the primary storage unit for data objects. When you begin working with cloud storage, you will create one or more buckets. You can then store data in these buckets. Buckets are created in a region selected by you (usually the one nearest to your location). Buckets are given a globally unique name and location when they are created. Before data can be stored, you must assign a storage class (Hot, Warm, Cold).

141
Q

Input/Output per second (IOPS)

A

A storage performance indicator that measures the number of read / write operations completed per second.

The drive’s own technology limits IOPS, although throughput also impacts performance.

142
Q

Throughput

A

Throughput is the amount of data that the network can transfer in typical conditions. This can be measured in various ways with different software applications.

Measured in megabytes per second (MB/sec), or the number of bits read or written per second. Throughput is the critical value for large file transactions and therefore may be the better choice for those workloads.

Goodput is typically used to refer to the actual “useful” data rate at the application layer (less overhead from headers and lost packets).
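
IOPS and throughput are linked by the size of each I/O operation; a rough back-of-the-envelope sketch:

```python
def throughput_mb_s(iops: int, block_size_kb: int) -> float:
    """Approximate throughput implied by an IOPS figure at a given I/O size."""
    return iops * block_size_kb / 1024

# e.g., a drive sustaining 10,000 IOPS at 64 KB per operation:
print(throughput_mb_s(10_000, 64), "MB/s")  # 625.0 MB/s
```

This is why IOPS matters most for small, random workloads, while throughput is the better indicator for large sequential transfers.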

143
Q

Network File System (NFS)

A

Remote file access protocol used principally on UNIX and Linux networks.

Standard NFS configuration uses the /etc/exports file to list what directories should be shared and what access levels should be enforced. Client computers then use the mount command to attach to the exported directories.

While NFSv4 is the underlying protocol for directory access in cloud storage, it is configured differently than with local file servers. In the case of AWS, NFS is provided via Elastic File System (EFS). Using the web-based EFS Management Console, administrators create and name a filesystem, connect it to a VPC, and then manage any performance settings. When this is complete, mount targets are created that allow administrators to connect to the resources and manage access controls.
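
A typical on-premises configuration might look like the following (the server name, export path, and options are illustrative):

```
# /etc/exports on the NFS server: share /srv/data read-write with one subnet
/srv/data   192.168.1.0/24(rw,sync,no_subtree_check)

# On the client, attach the exported directory:
#   mount -t nfs nfs-server:/srv/data /mnt/data
```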

144
Q

Mount Targets

A

NFS feature that allows administrators to connect to resources and manage access controls.

145
Q

Common Internet File System (CIFS) / Server Message Block (SMB)

A

This protocol is used for requesting files from Windows servers and delivering them to clients. SMB allows machines to share files and printers, thus making them available for other machines to use. SMB client software is available for UNIX-based systems. Samba software allows UNIX and Linux servers or NAS appliances to run SMB services for Windows clients.

Access controls are enabled to manage who has permissions to the resources.

Microsoft Azure offers Azure Files as another way of managing stored data. Azure Files is a serverless file-sharing service that makes shared data available via SMB or NFS protocols. A service such as Azure Files helps eliminate on-premises file servers. Cloud-stored data can be easily accessed by any authorized user from any type of device.

146
Q

Storage Area Network (SAN)

A

A network dedicated to data storage, typically consisting of storage devices and servers connected to switches via host bus adapters.

SANs provide greater flexibility, fault tolerance, and performance than NAS devices. SANs, however, are also significantly more complex and more expensive.

While the SAN is technically the supporting network between the servers and the storage devices, a complete SAN solution is made up of three primary components, in addition to the client workstations.

The first component is one or more servers that manage access to data. The second component is an isolated network between the servers and the storage infrastructure. The final component is the storage infrastructure itself.

SANs allow organizations to connect disparate types of storage, such as tapes and optical media. SANs also connect data storage across physical boundaries, such as remote datacenters.

The SAN infrastructure is transparent to the end user. In the case of a Windows user, they may simply have a drive mapped on their computer and will have no idea what the storage structure actually looks like.

There are two primary communication protocols used to support SAN solutions: Fibre Channel and iSCSI

147
Q

Internet Small Computer Systems Interface (iSCSI)

A

IP tunneling protocol that enables the transfer of SCSI data over an IP-based network to create a SAN. Common solution for small to medium size organizations.

iSCSI implementations have a client component and a target component.

The client component may be a software-based iSCSI initiator. Such software is often integrated into the server OS. It is inexpensive and relatively easy to configure. An HBA may be added to the server itself and provides a hardware solution. HBA implementations are faster, more complex, and more expensive.

The target component is the iSCSI target. This is the storage device itself, or the target for the iSCSI connection from the initiator or HBA.

iSCSI performance relies on the underlying network infrastructure.

149
Q

Fibre Channel

A

Fibre Channel carries SCSI or NVMe commands over fiber optic cables, providing block-level data transfers between server nodes and storage devices. This implementation requires specialized network devices and is the most common SAN structure.

Fibre Channel SANs may be organized in a point-to-point (direct) or switched topology.

150
Q

Fibre Channel over Ethernet (FCoE)

A

Standard allowing for a mixed-use Ethernet and Fibre Channel network carrying both ordinary data and storage network traffic. The FC protocol is embedded in Ethernet frames. There are fewer specialized devices, a single standardized Ethernet cable in the datacenter, and an overall lower cost.

151
Q

In what situations would you use iSCSI for your storage environment? In what situations would you use Fibre Channel?

A

iSCSI:

  • Cost is an issue
  • You’re connecting many hosts to one storage target (several servers storing different data on a single storage server)
  • Training is not available to your IT staff for the complexities of a Fibre Channel solution
  • You need or want a less complex infrastructure

Fibre Channel:

  • Performance is paramount
  • SAN components are widely distributed
152
Q

NVMe-oF

A

“Non-Volatile Memory Express” (NVMe) is a technology designed to improve the performance of storage devices, particularly solid-state drives (SSDs). It allows for faster data transfer and lower latency compared to traditional storage protocols.

“Fabrics” refers to network fabrics, which are the underlying infrastructure that connects devices in a network. It could be Ethernet, InfiniBand, or Fibre Channel, among others.

Now, when we say NVMe-oF, we are referring to the extension of NVMe technology over a network fabric. In other words, it enables the communication of NVMe storage devices, such as SSDs, over a network, rather than being directly connected to a server.

NVMe-oF allows multiple servers to access and share NVMe storage devices over the network, providing high-performance storage capabilities. It’s like extending the speed and efficiency of NVMe-based storage to multiple computers, enabling them to access and utilize the storage resources more effectively.

153
Q

RAID 0

A

RAID 0, also known as “striping,” is a data storage configuration that splits data across multiple drives, increasing performance by allowing data to be read from and written to multiple drives simultaneously. However, RAID 0 offers no data redundancy or fault tolerance, meaning that if one drive fails, all data stored across the array may be lost.

Requires a minimum of two disks.

154
Q

RAID 1

A

RAID 1, also known as “mirroring,” is a data storage configuration that duplicates data across multiple drives, providing data redundancy and fault tolerance. Each drive in the array contains an exact copy of the data, ensuring that if one drive fails, the data remains accessible from the other drive. Storage is used inefficiently, as two full copies of the data are kept for redundancy.

Requires a minimum of two disks.

155
Q

RAID 5

A

RAID 5 is a data storage configuration that uses striping with parity. It distributes data and parity information across multiple drives, providing a balance between performance, data redundancy, and storage efficiency. If one drive fails, the remaining drives can use the parity information to reconstruct the lost data, maintaining data integrity and allowing continued access to the stored information.

Requires a minimum of 3 HDDs

156
Q

RAID 6

A

RAID 6 is a data storage configuration that offers higher data protection compared to RAID 5. It utilizes striping with double parity, distributing data and two sets of parity information across multiple drives. This redundancy allows for the failure of two drives without losing data, ensuring better fault tolerance and data integrity in the event of multiple drive failures.

Requires a minimum of 4 HDDs.

157
Q

RAID 10

A

RAID 10, also known as RAID 1+0 or “striped mirroring,” combines the features of RAID 1 (mirroring) and RAID 0 (striping). It creates a mirrored set of drives and then stripes data across them. This configuration offers both data redundancy and improved performance by leveraging the benefits of both mirroring and striping. If a drive fails, the mirrored drive ensures data availability, while the striping enhances read and write speeds.
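
The usable-capacity rules for the RAID levels above can be collected into a small helper (a sketch assuming equal-size disks, not a sizing tool):

```python
def usable_capacity(level: str, disks: int, disk_tb: float) -> float:
    """Usable capacity in TB for common RAID levels with equal-size disks."""
    if level == "0":
        return disks * disk_tb        # striping: no redundancy overhead
    if level == "1":
        return disk_tb                # two-disk mirror: half the raw space
    if level == "5":
        return (disks - 1) * disk_tb  # one disk's worth of parity
    if level == "6":
        return (disks - 2) * disk_tb  # two disks' worth of parity
    if level == "10":
        return disks // 2 * disk_tb   # half the raw space lost to mirroring
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_capacity("5", 4, 2.0), "TB")  # 6.0 TB usable from 4 x 2 TB
```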

158
Q

Software RAID vs. Hardware RAID

A

Software RAID relies on the OS to manage the RAID array. First, this assumes the OS is capable of doing so - most server OSs can, but most client systems cannot. The computer’s primary processing power handles the array. However, software RAID is usually cheaper than hardware RAID.

Hardware RAID relies on a separate controller to manage the array. This RAID solution does not consume resources from the system. It is typically faster, more reliable, and more flexible. Those advantages lead to a higher cost.

159
Q

RAID in the Cloud

A

RAID configurations are approached differently in cloud storage than with traditional physical servers and disk drives. The process is to attach cloud-based disks to the VM you’re managing, and then use the VM operating system to manage the configuration. For example, with Linux, you could configure Logical Volume Manager (LVM), and with Windows, you could use Storage Spaces.

Microsoft Azure offers Managed Disks for VM storage. These disks can be attached to VMs and configured for RAID. Sometimes RAID configurations that provide performance increases, such as RAID 0, can be used to configure less expensive disks for greater speed.

Architects and administrators working in datacenters and deploying private clouds need to be familiar with RAID options, including controllers and the best use of SSD and HDD storage. Hardware management such as this is offloaded to CSPs for public clouds.

160
Q

Software-Defined Network (SDN)

A

APIs and compatible hardware / virtual appliances allowing for programmable network appliances and systems. SDN manages network devices differently, defining two management planes: the data plane and the control plane.

The data plane is the compute level that handles packet management tasks such as the actual forwarding or filtering of network traffic within and between segments.

The control plane is a layer of programmable or configurable control of multiple network devices that is decoupled from the individual devices. It provides the devices with the information needed for packet forwarding at the IP and MAC address layers. The control plane is hosted on a controller that permits network administrators to manage configurations for multiple devices, such as routers, switches, and load balancers. Furthermore, this configuration management is automated.

161
Q

PaaS only network design

A

Virtual network design that consists of basic and limited SDN support for platform services.

162
Q

Cloud-Native network design

A

Virtual network design that consists of a cloud-specific SDN without reliance on physical on-premises networks.

163
Q

Cloud DMZ network design

A

Virtual network design that consists of a perimeter network with tightly controlled access between your cloud network deployment and your on-premises physical network.

164
Q

Hub and Spoke

A

Virtual network design that centrally manages connectivity and services with isolated networks for specific workloads.

165
Q

Domain Name Service (DNS)

A

The service that maps names to IP addresses on most TCP / IP networks, including the Internet.

Each server, workstation, or other network node has an entry in the database. When a user requests access to a resource by name, their computer queries DNS to discover the related IP address. Network traffic is then addressed to that IP address. DNS is one of the most critical services on the network.

Public Internet resources are discovered the same way. Websites, for example, are requested by users with a URL, but they are actually contacted via IP address.

CSPs offer DNS services for the networks they host. Your organization can use these services to organize cloud resources and provide connectivity. Cloud DNS can integrate with your on-premises name resolution services to provide a more consistent infrastructure that’s easier to manage.

Both public and private DNS zones are supported.

166
Q

Public DNS Zone

A

Public zones provide DNS resolution for Internet-facing services, such as your organization’s public website.

167
Q

Private DNS Zone

A

Private Zones provide DNS services to internal resources, such as your company’s VPC of internal servers, printers, and databases.

168
Q

DNS / Zone Forwarding

A

A feature in DNS that allows a DNS server to forward DNS queries to another DNS server. It is typically used when a DNS server does not have the necessary information to resolve a domain name and needs to pass the query to another server that may have the required information.

169
Q

Reverse Lookups

A

Also known as reverse DNS or rDNS, this is a process that translates an IP address to a domain name. While regular DNS resolves domain names to IP addresses, reverse lookups perform the opposite function by providing the domain name associated with a given IP address.
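
The name actually queried in a reverse lookup is derived mechanically from the IP address; Python’s `ipaddress` module can show the PTR record name:

```python
import ipaddress

# 192.0.2.10 is from the TEST-NET documentation range, used for illustration.
ip = ipaddress.ip_address("192.0.2.10")
print(ip.reverse_pointer)  # 10.2.0.192.in-addr.arpa
```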

170
Q

DNS Peering

A

Also known as DNS interconnect, this is a mechanism used by network operators and ISPs to exchange DNS traffic directly between their DNS infrastructure. It involves establishing direct connections between DNS servers of different networks to exchange DNS queries and responses efficiently.

171
Q

DNSSEC

A

DNS protocol that adds security features to DNS. It aims to provide data integrity, authenticity, and authentication mechanisms for DNS queries and responses, mitigating various DNS-based attacks and ensuring the trustworthiness of DNS information.

172
Q

Fully Qualified Domain Name (FQDN)

A

Specifies the exact location of a specific host or resource within the DNS hierarchy. FQDN consists of two main components:

  1. Hostname: represents the unique name assigned to a specific device or resource within a domain. It could be a computer, server, or any networked device. For example, “www” for a web server
  2. Domain Name: The domain name represents the hierarchical structure of the DNS and identifies the organization, network, or entity to which the host belongs. It consists of one or more domain labels separated by dots. For example, “example.com”

To form an FQDN, the hostname is appended to the domain name, creating a fully qualified and unique address for a specific resource.
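
Using the card’s own example names:

```python
hostname = "www"           # host label
domain = "example.com"     # domain name

fqdn = f"{hostname}.{domain}"
print(fqdn)  # www.example.com
```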

173
Q

Dynamic Host Configuration Protocol (DHCP)

A

A protocol used to automatically assign IP addressing information to hosts that have not been configured manually.

Such configuration is automatic, making it less prone to mistakes and less time-consuming for administrators. However, network devices and services that must be easily and consistently found on the network should not be DHCP clients, because their IP address may change over time.

174
Q

Detail the DHCP lease generation process

A
  1. Client sends a DHCPDiscover broadcast, which is heard by the DHCP server.
  2. DHCP server sends a DHCPOffer broadcast, which is heard by the client.
  3. Client sends a DHCPRequest broadcast, accepting the offer, which is heard by the DHCP server.
  4. The DHCP server sends a DHCPAck broadcast, which is heard by the client, finalizing the process.
175
Q

IP address management (IPAM)

A

Software consolidating management of multiple DHCP and DNS services to provide oversight into IP address allocation across an enterprise network.

176
Q

Network Time Protocol (NTP)

A

TCP/IP application protocol that allows machines to synchronize to the same time clock; it runs over UDP port 123.

Clients and servers that experience time drift (unsynchronized and different time settings) may have difficulty communicating, users may not be able to authenticate, and time-sensitive applications may not run correctly.

In cloud environments, AWS EC2 and Azure VMs receive the definitive time from their hardware host by default.

177
Q

Content Delivery Networks (CDNs)

A

A distributed network of servers strategically located in different geographic locations to deliver web content and other digital assets to end-users with improved performance, reliability, and scalability. CDNs work by caching and serving content from edge servers that are geographically closer to the users, reducing latency and network congestion.

178
Q

Secure Socket Tunneling Protocol (SSTP)

A

A protocol that uses the HTTP over SSL protocol and encapsulates an IP packet with a PPP header and then with an SSTP header.

Primarily used with Windows OS

179
Q

OpenVPN

A

OpenVPN is an open-source VPN software that provides secure and encrypted connections between devices over the internet. It allows users to establish a private and encrypted tunnel between their device and a remote server, enabling secure communication and data transfer.

180
Q

Internet Key Exchange v2 (IKEv2)

A

A protocol within IPsec that sets up a Security Association using certificates to establish a secure network session.

Open standard, strong security, fast

181
Q

Point-to-Point Tunneling Protocol (PPTP)

A

Developed by a vendor consortium led by Microsoft to support VPNs over PPP and TCP/IP. PPTP is highly vulnerable to password-cracking attacks and considered obsolete.

182
Q

Virtual Routing and Forwarding (VRF)

A

Permits multiple isolated routing tables to exist on the same router, each associated with different interfaces.

VRF provides virtualization within a physical router, allowing it to maintain multiple independent routing tables. Each routing table directs traffic for a different network, eliminating the need for separate physical routers.

183
Q

Network Interface Cards (NIC)

A

Implements the physical and data link connection between a host and transmission media. An OS deployed on that physical server accesses the network via the NIC.

184
Q

Virtual NIC (vNIC)

A

A connection between a virtual machine instance and a physical network interface card in the host server.

185
Q

Subnetting

A

The practice of dividing a single network into two or more smaller networks by using subnet masks.

The goal could be performance or security (or both). Most cloud network configurations begin with the VPC. From there, the VPC can host one or more subnets. Instances can be added to the subnets as appropriate.
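Python's standard ipaddress module can illustrate the arithmetic behind subnetting; the 10.0.0.0/16 block and /24 prefix below are arbitrary examples, not values from any particular cloud setup.

```python
import ipaddress

# A VPC's address block, divided into smaller subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Splitting the /16 into /24 subnets yields 2^(24-16) = 256 smaller networks.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))          # 256
print(subnets[0])            # 10.0.0.0/24
print(subnets[0].netmask)    # 255.255.255.0
```

Each resulting /24 can then host instances independently, with routing or security controls applied per subnet.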

186
Q

Load Balancers

A

A type of switch or router that distributes client requests between different resources, such as communications links or similarly configured servers. This provides fault tolerance and improves throughput.

187
Q

Web Application Firewalls (WAFs)

A

A firewall designed to specifically protect software running on web servers and their backend databases from code injection and DoS attacks.

WAF filtering protects cloud resources from Internet threats. Isolating VPC traffic within a cloud deployment is accomplished by using security groups and network ACLs. When a new VM instance is created, it is associated with a security group, which controls access to the instance. Segments within a VPC can be filtered by using network ACLs, and logs are generated that report on permitted and denied traffic.

188
Q

Virtual Private Cloud (VPC)

A

A private network segment made available to a single cloud consumer on a public cloud. VPCs allow organizations to achieve greater security within a public cloud. Traditional public cloud IaaS deployments do isolate customer data by subnets and VLANs, but VPCs go further, implementing a single-tenant structure within the CSP’s multi-tenant environment. VPCs add VPN connectivity and isolation to regular public cloud implementations.

189
Q

Peering

A

Network connectivity that permits instances to communicate between two virtual private clouds using private IP addresses.

The virtual networks appear to consumers as a single network. In addition, fast connectivity is provided between the two networks, making data and resource access very efficient.

Peering is used in the hub-and-spoke model to connect the spoke networks with the hub network. Note that the spoke networks are not peered to each other in the hub-and-spoke model.

190
Q

VLAN

A

A logically separate network, created by using switching technology. Even though hosts on two VLANs may be physically connected to the same cabling, local traffic is isolated to each VLAN so they must use a router to communicate.

A VLAN segments the network at Layer 2, accomplished by tagging data frames with VLAN membership information.

191
Q

Virtual Extensible Local Area Network (VXLAN)

A

A method for overcoming VLAN shortfalls by providing a great number of available segments. VXLANs extend the functionality of VLANs by adding increased scalability that is appropriate for cloud, on-premises, and hybrid networks. VXLANs support up to 16 million separate networks.
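The 16 million figure follows from the size of the identifier fields: 802.1Q VLAN IDs are a 12-bit field, while VXLAN Network Identifiers (VNIs) are 24 bits. A quick check of the arithmetic:

```python
# VLAN IDs are a 12-bit field (802.1Q); VXLAN VNIs are a 24-bit field.
vlan_ids = 2 ** 12   # 4,096 possible VLANs
vnis = 2 ** 24       # 16,777,216 possible VXLAN segments
print(vlan_ids, vnis)
```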

192
Q

Generic Network Virtualization Encapsulation (GENEVE)

A

A network virtualization encapsulation standard that combines the capabilities of competing protocols such as VXLAN, NVGRE (network virtualization using generic routing encapsulation), and STT.

The use of GENEVE may be required across multi-cloud deployments.

193
Q

Network Flow Diagrams

A

Enable users to visualize and understand how data moves through a network infrastructure. The diagram displays the flow of data through any internal and external nodes, network devices (such as routers), and cloud services.

194
Q

Stretching

A

Refers to the concept of extending a network across multiple locations or sites, typically over a wide geographical area. It involves connecting multiple network segments or subnets in different physical locations to create a single logical network.

195
Q

Microsegmentation

A

Network microsegmentation is a security technique that involves dividing a network into smaller segments or subnetworks to enhance security and control network traffic. In traditional network architectures, all devices within a network share the same security perimeter, making it easier for threats to move laterally once they gain access to the network.

Microsegmentation addresses this vulnerability by creating smaller security zones within the network, where each segment has its own security policies and controls. This allows for granular control over network traffic and restricts the lateral movement of threats.

To implement network microsegmentation, organizations often leverage technologies such as virtual local area networks (VLANs), software-defined networking (SDN), network virtualization, or firewall-based segmentation solutions. These technologies enable the creation and enforcement of security policies at a more detailed level, improving network security and reducing the impact of potential security incidents.

196
Q

Service Chain

A

A separation of server roles into tiers to facilitate management

197
Q

Man-in-the-middle Attack

A

A form of eavesdropping where the attacker makes an independent connection between two victims and steals information to use fraudulently.

198
Q

Generic Routing Encapsulation (GRE) protocol

A

Tunneling protocol allowing the transmission of encapsulated frames or packets from different types of network protocol over an IP network.

Does not use encryption unless combined with IPsec and should be avoided for any secure communications.

199
Q

DNS over HTTPS (DoH)

A

The DNS name resolution service that uses HTTPS to encrypt communications between the client and server to ensure privacy and confidentiality.

200
Q

DNS over TLS (DoT)

A

The DNS name resolution service that uses TLS to encrypt communications between the client and server to ensure privacy and confidentiality.

201
Q

Stateful Firewall

A

A stateful firewall is a network security device that monitors and filters incoming and outgoing network traffic based on the context and state of the network connections. Unlike traditional packet-filtering firewalls, which examine individual network packets in isolation, stateful firewalls maintain knowledge about the ongoing sessions and connections passing through them.

Stateful firewalls are widely used in network security architectures to provide robust protection against unauthorized access, malicious activities, and network-based threats. They are a fundamental component of network security infrastructure, along with other security measures such as intrusion prevention systems (IPS), virtual private networks (VPNs), and secure web gateways.
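As a rough sketch of the idea (the packet tuples and session table are hypothetical, not any real firewall's API): a stateful filter records outbound sessions and admits inbound packets only when they match an established one.

```python
# Minimal sketch of stateful filtering: outbound connections are recorded,
# and inbound packets are allowed only if they match an established session.
established = set()  # (local_ip, local_port, remote_ip, remote_port)

def outbound(local_ip, local_port, remote_ip, remote_port):
    # Record the session when a local host initiates a connection.
    established.add((local_ip, local_port, remote_ip, remote_port))

def allow_inbound(remote_ip, remote_port, local_ip, local_port):
    # Inbound traffic is allowed only as a reply within a known session.
    return (local_ip, local_port, remote_ip, remote_port) in established

outbound("10.0.0.5", 50000, "93.184.216.34", 443)
print(allow_inbound("93.184.216.34", 443, "10.0.0.5", 50000))  # True: reply traffic
print(allow_inbound("203.0.113.9", 443, "10.0.0.5", 50000))    # False: unsolicited
```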

202
Q

Stateless Firewall

A

A stateless firewall, also known as a packet-filtering firewall, is a network security device or software that examines individual network packets without maintaining information about the state or context of network connections. Stateless firewalls make decisions about packet filtering based solely on the information contained in the packet headers, such as source and destination IP addresses, port numbers, and protocol types.

Stateless firewalls are often used in scenarios where simplicity, efficiency, and basic packet filtering functionality are sufficient for network security requirements. They are commonly implemented at network perimeters, where they can act as a first line of defense by blocking or allowing traffic based on predetermined rules. However, for more advanced security needs or when deeper inspection and context-aware filtering are required, stateful firewalls or other security solutions may be more appropriate.
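A minimal sketch of the idea (the rule format is hypothetical): each packet is judged on its header fields alone, with no memory of earlier packets.

```python
# Stateless packet filtering: decisions depend only on header fields.
RULES = [
    {"proto": "tcp", "dst_port": 443, "action": "allow"},  # permit HTTPS
    {"proto": "tcp", "dst_port": 23,  "action": "deny"},   # block Telnet
]
DEFAULT = "deny"  # implicit deny for anything unmatched

def filter_packet(proto, dst_port):
    for rule in RULES:
        if rule["proto"] == proto and rule["dst_port"] == dst_port:
            return rule["action"]
    return DEFAULT

print(filter_packet("tcp", 443))  # allow
print(filter_packet("tcp", 23))   # deny
print(filter_packet("udp", 53))   # deny (falls through to default)
```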

203
Q

Cross Site Scripting (XSS)

A

A malicious script hosted on the attacker’s site, or coded in a link injected onto a trusted site, designed to compromise clients browsing that trusted site by circumventing the browser’s security model of trusted zones.

204
Q

Cross-Site Request Forgery (CSRF)

A

A malicious script hosted on the attacker’s site that can exploit a session started on another site in the same browser.

205
Q

SQL injections

A

An attack that injects a database query into the input data directed at a server by accessing the client side of the application.

206
Q

DDoS

A

An attack that uses multiple compromised hosts (a botnet) to overwhelm a service with request or response traffic.

207
Q

Virtual Patch

A

Addresses vulnerabilities at the WAF layer rather than in the application layer.

May serve as a permanent fix or a temporary solution until the application’s code is updated.

208
Q

Application Delivery Controllers

A

A multifunctional device that provides load balancing, traffic flow, SSL offloading, and other optimization functions for web apps.

209
Q

Intrusion Detection System (IDS)

A

A software and/or hardware system that scans, audits, and monitors the security infrastructure for signs of attacks in progress.

IDSs are passive devices that match network traffic and patterns against known vulnerabilities. They monitor the network environment.

An IDS does not sit “in-line” (traffic does not flow through).

210
Q

Intrusion Prevention System (IPS)

A

An IDS that can actively block attacks. IPS devices are active, actually controlling the network traffic flow, and are the common standard on today’s networks.

211
Q

Data Loss Prevention (DLP)

A

A software solution that detects and prevents sensitive information from being stored on unauthorized systems or transmitted over unauthorized networks.

There are several ways to manage data loss, including examination of data at rest, in transit, and in use. DLP is often used to manage data security for authorized users who may deliberately or accidentally attempt to copy or otherwise remove data from the secured environment.

212
Q

Network Access Control (NAC)

A

A general term for the collected protocols, policies, and hardware that authenticate and authorize access to a network at the device level.

Examples:
Workstation security (endpoints): anti-virus, anti-spyware, patching, and vulnerability scans

Authentication: single sign-on and multifactor authentication

Network security: firewalls, network IDS, patching, and updated anti-virus definitions

213
Q

Network Packet Brokers (NPB)

A

A service that gathers and exposes network information to security tools.

NPBs exist between the network infrastructure and infrastructure security tools to gather information and expose that information to the appropriate tools.

Without an NPB, each network analysis tool is individually configured to collect information, placing a strain on systems and gathering a great deal of undesired information. NPBs gather all of the network information and then expose it to the appropriate tool.

214
Q

Network Time Security

A

A key exchange that verifies the identity of a source time server and the integrity of the time synchronization information received from it.

215
Q

Firmware

A

Refers to software instructions stored semi-permanently (embedded) on a hardware device. Modern types of firmware are stored in flash memory and can be updated more easily than legacy programmable (ROM) types.

Not as easy to patch as operating systems or applications; however, periodically, vendors will offer firmware updates. Typically, these updates will provide three possible benefits:

  • Security fix
  • Performance update
  • New feature release
216
Q

Web Proxy

A

An intermediary service for web browsing that provides content filtering.

May exist between users and web-based resources, and DLP solutions can be used to analyze this traffic and mitigate data exfiltration. Analysis and control are managed by allow lists and/or blocklists.

217
Q

Round Robin

A

Load balancing technique where workloads are assigned to servers in sequence, with no regard for the current work assignment.

Advantages include simplicity and equal distribution of traffic. Additionally, it can handle different types of traffic, and is suitable for environments where all servers or devices have similar capabilities.

Disadvantages include the lack of consideration for the actual capacity or current workload of each server or device. If some servers or devices are more powerful or have higher performance than others, they may end up handling more requests or traffic, leading to potential imbalances. To address this, more advanced load balancing techniques, such as weighted Round Robin or dynamic load balancing algorithms, can be employed.
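The rotation itself can be sketched in a few lines (the server names are illustrative):

```python
from itertools import cycle

# Round-robin assignment: requests go to servers in fixed rotation,
# regardless of each server's current load.
servers = cycle(["web-1", "web-2", "web-3"])

assignments = [next(servers) for _ in range(6)]
print(assignments)  # ['web-1', 'web-2', 'web-3', 'web-1', 'web-2', 'web-3']
```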

218
Q

Static Load Balancing Algorithm

A

The workload is evenly distributed among servers (best for workloads that rarely change).

219
Q

Dynamic Load Balancing Algorithm

A

Load balancing technique where the least busy server is given the next work cycle (best for variable workloads).
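A least-connections selection rule, one common dynamic strategy, can be sketched as follows (the connection counts are illustrative):

```python
# Least-busy selection: each new request goes to the server with the
# fewest active connections.
active = {"web-1": 12, "web-2": 3, "web-3": 7}

def pick_server(connections):
    # Choose the server whose current connection count is lowest.
    return min(connections, key=connections.get)

target = pick_server(active)
print(target)        # web-2 is least busy
active[target] += 1  # account for the newly assigned request
```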

220
Q

Default Gateway

A

The default gateway is an IP configuration parameter that identifies the location of a router on the local subnet that the host can use to contact other networks.

221
Q

Network Address Translation (NAT)

A

Network Address Translation (NAT) is a technique used in computer networks to modify IP addresses and/or port numbers in IP packet headers as they pass through a network device, such as a router or firewall. NAT allows multiple devices within a private network to share a single public IP address, enabling them to access the Internet.

The primary purpose of NAT is to conserve public IP addresses, as the number of available IPv4 addresses is limited. By using NAT, a private network with private IP addresses can communicate with external networks using a single public IP address.
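A sketch of port address translation (PAT, the NAT variant that lets many hosts share one address) shows the translation table at work; the addresses and port range are illustrative.

```python
# Port address translation sketch: many private hosts share one public IP,
# distinguished by translated source ports.
PUBLIC_IP = "203.0.113.10"
nat_table = {}      # (private_ip, private_port) -> translated public port
next_port = 40000   # next available public-side port

def translate_outbound(private_ip, private_port):
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port  # allocate a new public port for this flow
        next_port += 1
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.11", 51000))  # ('203.0.113.10', 40001)
```

Return traffic arriving at a public port is mapped back to the matching private host through the same table.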

222
Q

What are the four phases of the cloud migration?

A

Assessment: Understanding what services and applications exist and which are candidates for migration

Planning: Designing the migration approach, including the order, timing, and methods used

Implementation: Actual transfer of data, services, and servers to the cloud. The implementation is likely to be accomplished in phases.

Optimization and Security: Optimizing services and processes to ensure they are functioning as efficiently and as cost effectively as possible.

223
Q

Rehost (Lift and Shift)

A

No modification, application is cloud ready for cloud migration.

Typically the fastest and easiest migration. The software involved is cloud ready, meaning that it can take advantage of cloud computing benefits and can run in a virtualized environment.

224
Q

Replatform (Lift, tinker, and shift)

A

Application requires some modification before cloud migration. This method relies on some modification or optimization of the application in order to take advantage of cloud benefits. The bulk of the application does not have to be freshly developed, but some changes are necessary.

225
Q

Refactor (Rip and Replace)

A

Application will be entirely rearchitected to be cloud ready. This method requires a great deal of development time and may be quite expensive.

226
Q

Repurchase (Drop and Shop)

A

Application is retired and replaced by a modern, cloud-ready application. This is very common for legacy applications that cannot run in virtualized or cloud-based environments. The legacy application is retired, and a new application is selected and purchased. This method may be more cost effective than others because the new application will be cloud ready from the start.

227
Q

Retire

A

Application is retired and not replaced upon cloud migration.

228
Q

Hybrid Cloud Migration

A

Describes a mix of any of the six cloud migration types:

  • Rehost
  • Replatform
  • Refactor
  • Repurchase
  • Retire
  • Retain
229
Q

Manual P2V Migration

A

Administrators create a VM, install an OS and applications, and copy data.

230
Q

Semi-Automatic P2V Migration

A

A migration tool assists with some aspects of physical to virtual migration, such as hardware specifications and data migration.

231
Q

Automatic P2V Migration

A

A migration tool manages the entire process of physical to virtual migration.

232
Q

Cloud Native

A

Cloud native refers to a modern approach to building and running applications that takes full advantage of cloud computing and its characteristics. It involves designing applications specifically for deployment and operation in cloud environments, such as public or private clouds.

233
Q

Continuous Integration / Continuous Deployment (CI/CD)

A

Software development method in which code updates are rapidly committed to a code repository or build server (continuous integration) and app and platform updates are rapidly committed to production (continuous deployment).

This approach addresses configuration drift and decentralized management, which is often inconsistent. Code-based deployments are easier to test, faster to deploy, more accurate to scale up (or down), and repeatable.

234
Q

DevOps

A

A combination of software development and systems operations; refers to the practice of integrating one discipline with the other.

235
Q

Infrastructure as Code (IaC)

A

A provisioning architecture in which deployment of resources is performed by scripted automation and orchestration.

One of the most important aspects of IaC is that all changes are made in code and then applied to the devices. Manual changes should never be made directly on individual devices. Any alteration of the desired state is implemented through code.

There are two approaches to managing IaC: imperative and declarative

Benefits include:
- Quicker Deployments
- Quicker Configuration Changes
- Quicker Recovery
- Consistency and less config drift
- Reusability of code
- Version control
- Visibility of configuration settings

236
Q

Imperative IaC

A

A configuration method that specifies step by step exactly how the target machine will be configured.

Specific commands are defined that execute and configure the device.

237
Q

Declarative IaC

A

A configuration method that declares a desired configuration that the target machine configures itself to match.

The desired state is defined, and the automation tool matches it on the device.
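The contrast with the imperative approach can be sketched as a reconciliation loop: rather than scripting each step, the desired state is declared and the engine computes what to change. The "resources" below are hypothetical stand-ins for real infrastructure.

```python
# Declarative IaC sketch: state the goal, let a reconciliation loop close the gap.
desired = {"web_servers": 3}
current = {"web_servers": 1}

def reconcile(current, desired):
    # The engine computes and applies the difference automatically.
    for resource, want in desired.items():
        have = current.get(resource, 0)
        for _ in range(want - have):
            current[resource] = current.get(resource, 0) + 1  # "create" one resource
    return current

print(reconcile(current, desired))  # {'web_servers': 3}
```

An imperative tool would instead encode the explicit steps ("create two more servers"); the declarative form stays correct even if the starting state differs.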

238
Q

Application Programming Interfaces (APIs)

A

An API is a set of rules and protocols that allows different software applications to communicate and interact with each other. It defines how different components of software systems can interact, exchange data, and request services from each other.

APIs provide a standardized way for software developers to access the functionality of a system, whether it’s an operating system, a database, a web service, or any other software component. They define the methods, parameters, and data formats that developers use when making requests and receiving responses.

239
Q

Orchestration

A

The automation of multiple steps in a deployment process. The workflow is launched once, and then each process is started in order. When one process is completed, the next one begins without any interaction by the administrator.

Orchestration Sequencing refers to running the components of the workflow in the proper order.

Care must be taken by the system admin to ensure that the orchestration workflow has a way of testing each automated step. For example, when an instance is deployed and the next automated task should be run, the existence of the instance must be confirmed by the management tool.
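The points above can be sketched as a sequenced workflow in which each step's result is verified before the next begins (the step names are illustrative):

```python
# Orchestration sketch: steps run in order, each verified before continuing.
def deploy_instance(state): state["instance"] = True
def attach_storage(state):  state["storage"] = True
def configure_app(state):   state["app"] = True

workflow = [
    (deploy_instance, "instance"),
    (attach_storage,  "storage"),
    (configure_app,   "app"),
]

state = {}
for step, check in workflow:
    step(state)
    if not state.get(check):                        # confirm the step succeeded
        raise RuntimeError(f"step {check} failed")  # halt the workflow on failure
print(state)  # {'instance': True, 'storage': True, 'app': True}
```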

240
Q

Configuration Management

A

The process through which an organization’s information systems components are kept in a controlled state that meets the organization’s requirements, including those for security and compliance.

241
Q

Ansible

A

Ansible is an open-source automation tool that provides a simple and powerful way to automate IT infrastructure tasks, including configuration management, application deployment, and orchestration. It is designed to be agentless, meaning it doesn’t require any software to be installed on the managed hosts. Instead, it uses SSH or WinRM protocols to connect to remote systems and execute tasks.

Ansible uses a declarative language called YAML (YAML Ain’t Markup Language) to define playbooks, which are files containing a set of instructions that define the desired state of the system. Playbooks describe the tasks to be performed, the hosts on which to execute those tasks, and any required variables or conditions.

It allows you to streamline repetitive and complex tasks, reduce human error, and promote consistency across your IT infrastructure.

242
Q

YAML

A

A human-readable data serialization language commonly used for configuration files and by applications such as Ansible.

243
Q

Chef

A

Automation platform that transforms complex infrastructure into code, and automates how applications are configured, deployed, and managed.

Chef uses a client-server structure to deliver configuration requirements. The configurations are written in the Ruby programming language and stored in recipes.

Recipes are definitions of configuration settings written in Ruby and included in a Chef configuration cookbook.

Chef uses a pull configuration. The Chef clients periodically check in with the Chef server to see if there have been changes. The Chef server stores the configuration information. Typically, the Chef server is managed via a Chef workstation, where the recipes are created.

244
Q

Puppet

A

An open-source configuration management tool designed to manage the configuration of Unix-like and Microsoft Windows systems declaratively.

Puppet uses a client-server model. Puppet relies on an agent called the puppet-client to get configuration information from the Puppet Master server. Like Chef, Puppet files are written with Ruby. Puppet supports all of the common operating systems and may be used with physical machines, VMs, and Cloud Instances.

245
Q

Windows PowerShell

A

A command shell and scripting language built on the .NET Framework that can be used to run common commands on single Windows systems. Native PowerShell commands use a verb-noun syntax and are referred to as cmdlets. PowerShell can execute cmdlets on remote systems. Finally, PowerShell can be configured to deliver a file containing a desired state for the destination remote system.

PowerShell can also be used as a config manager by enabling and using PowerShell Desired State Configuration (DSC)

246
Q

PowerShell Desired State Configuration (DSC)

A

A declarative configuration management tool that uses PowerShell to manage the target systems. Relies on a Managed Object Format (MOF) file that holds the configurations. The service can be used with both Windows and Linux clients to configure physical servers and Azure-based instances

247
Q

Managed Object Format (MOF)

A

A configuration file using Common Information Model classes to define a desired configuration to be implemented via PowerShell Desired State Configuration.

248
Q

Docker

A

A container virtualization engine that can be installed on Linux, Windows, macOS, and other platforms to run containers.

Docker containers can be managed and automated by Docker’s own Swarm (Docker service used to create a cluster of Docker Engine hosts), or a third-party solution.

Docker hosts DockerHub, an online repository for storing container images. Your organization’s staff can access the repositories and use images stored there. The repository is kept current by using automation. Docker supports automated builds that create a new build when updated code is sent to a repository.

Docker containers are built by using a Dockerfile. This text file contains the necessary commands to construct and configure the container. Storing these settings in a Dockerfile makes it very easy to build containers quickly.

249
Q

Git

A

Type of version tracking software used primarily with collaborative development projects to ensure integrity and version control.

Git is the de facto standard for code management. Git software manages code versioning in collaborative development environments. It can also be used to manage code or files in many other contexts.

Functions by storing code in a central repository. The code can then be copied, or “cloned”, to a developer’s local workstation for work. The updated code is then pushed back up to the repository to be integrated with the rest of the project. Branches of code are created for additional work.

Once Git is installed on a workstation and a local repository (folder) exists, project files can be created. These might be files that make up code for an application or even plain text files. The files can then be uploaded to a Git repository stored at GitHub.

250
Q

Service Accounts

A

A host or network account that is designed to run a background service rather than to log on interactively.

The service accounts are given a very narrow scope of access—just enough to run the service or application and no more.

251
Q

Endpoint Detection and Response (EDR)

A

A software agent that collects system data and logs for analysis by a monitoring system to provide early detection of threats

252
Q

File Integrity Monitoring (FIM)

A

A type of software that reviews system files to ensure that they have not been tampered with.

253
Q

Data Retention

A

The process an organization uses to maintain the existence of and control over certain data in order to comply with business policies and/or applicable laws and regulations.

254
Q

Write Once Read Many (WORM)

A

Storage media used to maintain the integrity of the data being compiled by preventing modification.

255
Q

Records Management

A

The process of controlling data throughout its lifecycle. Includes the following:

  • Versioning
  • Retention
  • Destruction
256
Q

Cloud Access Security Broker (CASB)

A

CASB software observes the data flow between the on-premises network and the cloud, looking for potential data loss incidents. A CASB uses rules to determine legitimate and illegitimate exchanges of information. The rules can be established for specific users or entire departments.

CASB solutions have become increasingly important with the growth of cloud services, especially cloud storage. Third-party CASB software is available on the AWS Marketplace. Microsoft offers Cloud App Security to help organizations manage DLP. Cloud App Security combines Azure DLP, EDR, and IAM solutions to manage data security.

257
Q

Lifecycle Roadmap

A

Method to track the lifecycle phase of one or more hardware, service, or software systems in an organization. It tracks four primary phases:

  • Development
  • Deployment
  • Maintenance
  • Deprecation
258
Q

Configuration Management Database (CMDB)

A

Stores information about hardware and software deployed throughout the company. Entries in the database are referred to as configuration items.

The role of the CMDB has changed with the increased use of virtualization and cloud services. The original CMDB concept emphasized asset and inventory tracking. With virtualization, whether on premises or in the cloud, there are new challenges to managing the CMDB.

259
Q

Tags

A

Cloud resource labels that provide tracking for governance and cost management.

Tag labels are used to generate billing and utilization reports in tools such as the AWS Billing and Cost Management console.

260
Q

Showbacks

A

Reports that display the utilization of services without billing the business unit that consumed the resources.

261
Q

Linux Unified Key Setup (LUKS)

A

Linux tool used for drive encryption.