Cloud Flashcards
Differences between a Cluster and a Grid
- Cluster: Group of interconnected computers working as a single system, tightly coupled and located in close proximity. Provides high performance and availability for specific applications.
- Grid: Distributed computing infrastructure spanning multiple clusters or organizations. Focuses on resource sharing and collaboration across administrative domains.
Rationale of Cluster Computing
- High performance: Distributes tasks across nodes for parallel processing, reducing computation time.
- Scalability: Easily add nodes to handle increasing workloads without replacing infrastructure.
- Fault tolerance: Redundancy and failover mechanisms ensure high availability and minimize downtime.
- Cost-effectiveness: More cost-effective than a single high-end server.
Common Computing Model for Cluster of Computers
Farm model: a farmer node distributes computation to the workers (PCs), keeping their load balanced.
What happens if a pc in the farm model fails?
It will be replaced with a PC from the free pool.
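The free-pool replacement can be sketched as a toy Python snippet (the machine names and list-based pool are illustrative assumptions):

```python
# Toy sketch of the farm model's failover: a failed worker is
# swapped for a machine from the free pool (names are illustrative).
workers = ["pc1", "pc2", "pc3"]
free_pool = ["pc4", "pc5"]

def replace_failed(failed: str) -> None:
    """Remove the failed PC and promote one from the free pool."""
    workers.remove(failed)
    if free_pool:
        workers.append(free_pool.pop(0))

replace_failed("pc2")
print(workers)  # ['pc1', 'pc3', 'pc4']
```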
Amdahl’s Law Equation
Speedup = 1 / [(1 - p) + (p / n)], where p is the portion that can be parallelized and n is the number of processors.
Amdahl’s Law
Predicts the potential speedup of a program when only a portion of it can be parallelized.
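As a sketch, the equation above can be evaluated directly in Python (the function name is illustrative):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n), where p is the
    parallelizable fraction and n is the processor count."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with many processors, a 90%-parallel program approaches
# but never exceeds 1 / (1 - 0.9) = 10x speedup.
print(amdahl_speedup(0.9, 4))     # about 3.08
print(amdahl_speedup(0.9, 1000))  # about 9.91
```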
Definition of Cloud Computing
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
Scalability in Cloud Computing
Ability to handle increasing workloads by adding resources, either manually or automatically.
Elasticity in Cloud Computing
Ability to dynamically scale resources up and down based on demand, automatically allocating or releasing them as needed.
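A minimal sketch of the elastic scaling decision, assuming hypothetical utilization thresholds (the function name and threshold values are illustrative, not any provider's API):

```python
def scale_decision(utilization: float, instances: int,
                   high: float = 0.8, low: float = 0.3) -> int:
    """Return the new instance count: scale out when average
    utilization is high, scale in when it is low (never below 1)."""
    if utilization > high:
        return instances + 1          # allocate a resource
    if utilization < low and instances > 1:
        return instances - 1          # release a resource
    return instances

print(scale_decision(0.9, 4))  # 5 (scale out)
print(scale_decision(0.1, 4))  # 3 (scale in)
```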
Infrastructure as a service (IaaS)
Bare-bones resources are available to use; the user deploys the OS and applications.
Software as a service (SaaS)
Customers can use services and/or applications running in the cloud (applications and services are ready to use).
Platform as a service (PaaS)
Software suites are available to users (application servers, database servers, middleware, development runtime environments).
Private Clouds
Use is restricted to a single organization (accessed via an intranet or VPN).
Hybrid Clouds
Combination of private and public clouds (e.g., for backup and file recovery).
Community clouds
Access given to a community of users.
Criteria for Selecting Public Cloud
Cost-effectiveness, scalability, and flexibility. Suitable for organizations with unpredictable workloads and limited IT budget.
Criteria for Selecting Private Cloud
Compliance requirements, data security, and control. Suitable for organizations with sensitive data or regulatory restrictions.
When to Use a Community Cloud
When multiple organizations with shared requirements collaborate and share resources, such as in research or industry-specific projects. Offers cost-sharing and resource pooling benefits.
What is the Mobile Cloud?
Combination of mobile computing and cloud computing, where mobile devices leverage cloud resources and services to enhance their capabilities and storage capacity.
Role of an Edge Server in a Mobile Cloud System
Located at the edge of the network, it acts as an intermediary between mobile devices and the cloud, providing localized processing, caching, and reducing latency for mobile applications.
Services that can be run by an Edge Server
Content caching, data filtering, real-time analytics, security functions, and local computation offloading.
Definition of a Cloudlet and Interaction with Mobile Devices
Small-scale cloud data center located at the edge of the network. Mobile devices interact with cloudlets to offload computation, access resources, and leverage local services.
Dynamic VM Synthesis in Mobile Offloading to a Cloudlet
Process of creating a virtual machine instance on the cloudlet to execute offloaded tasks from a mobile device, optimizing resource utilization and performance.
Spine/Leaf Model
Two-tier architecture for cloud networks in which every leaf (access) switch connects to every spine switch.
Benefits of Using the Spine/Leaf Model
High scalability, flexibility, fault tolerance, and efficient interconnectivity between network switches.
Top-of-Rack Placement of Switches in Cloud Networks
One or two Ethernet switches are installed inside each rack to provide local server connectivity. This eases server-to-switch connectivity and removes the need for long cable runs.
End-of-Row Placement of Switches in Cloud Networks
Devised to provide two central points of aggregation for server connectivity in a row of cabinets: each server within each cabinet connects to both end-of-row switch cabinets (right and left).
Concept of Oversubscription
Allocating more network bandwidth or resources to users or devices than what is physically available, based on the assumption that not all users will require maximum capacity simultaneously. Example: A network switch with 10 Gbps uplink capacity shared among multiple devices with a total potential demand of 20 Gbps.
Impact of Oversubscription in a Cloud Network
Can be beneficial as it allows for resource sharing and cost savings. However, excessive oversubscription can lead to performance degradation and congestion if demand surpasses available resources.
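The 10 Gbps / 20 Gbps example from the card above can be expressed as a simple ratio (the function name is illustrative):

```python
def oversubscription_ratio(total_demand_gbps: float,
                           uplink_gbps: float) -> float:
    """Ratio of potential downstream demand to available uplink
    capacity; a ratio above 1 means the link is oversubscribed."""
    return total_demand_gbps / uplink_gbps

# 20 Gbps of potential demand sharing a 10 Gbps uplink
# gives a 2:1 oversubscription ratio.
print(oversubscription_ratio(20, 10))  # 2.0
```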
Overlay Network in a Cloud
Logical network infrastructure built on top of the physical network, enabling communication and connectivity between virtual machines or containers regardless of their physical location.
Use of Virtual Tunnel EndPoints
Network endpoints responsible for encapsulating and decapsulating network traffic in overlay networks, facilitating communication between virtual machines or containers.
Benefits of Using Virtual Machines in Cloud Computing
Isolation, flexibility, portability, resource allocation, and management, allowing multiple operating systems or applications to run concurrently on the same physical hardware.
Role of the Hypervisor and its Different Types
Software layer that enables the creation and management of virtual machines. Types include Type 1 (bare-metal) hypervisors installed directly on hardware, and Type 2 (hosted) hypervisors installed on top of an operating system.
Reason for Amazon Replacing its Initial Hypervisor with Nitro
Nitro is Amazon’s custom-built hypervisor. Amazon replaced its initial Xen-based hypervisor to improve performance, security, and efficiency by offloading certain virtualization tasks to dedicated hardware.
Amazon Machine Image (AMI)
A template that contains the desired software configuration: at a minimum, an OS and a root device volume; optionally, an application and supporting software.
Non-Live VM Migration
The VM is suspended, its state is migrated to the new host, and the VM is resumed there.
Live VM migration
The VM continues to run while being moved from one host to another, so the process is transparent to users.
Pre-copy
Memory pages are migrated in iterations while the VM is running; pages dirtied during one iteration are re-sent in the next.
Post-copy
First, the VM is suspended and its CPU state is transferred to the destination. The VM is then started at the destination, and memory pages are moved from the source host on demand.
Hybrid
The most frequently used VM memory pages are transferred first. Then the VM at the source is suspended to transfer the minimum remaining state to the destination, where the VM is resumed.
Benefits of Pre-copy
Reduces VM downtime. However, transferring memory pages several times increases the execution time and consumes network bandwidth.
Benefits of Post-copy
Reduces migration time but may result in temporary performance degradation until the entire memory of the virtual machine is transferred.
Benefits of Hybrid
Combines elements of both pre-copy and post-copy migration, aiming to balance migration time and downtime while ensuring efficient memory transfer.
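The iterative pre-copy loop can be sketched as a toy simulation (the dirty-page rate and stop threshold are illustrative assumptions, not real hypervisor parameters):

```python
def precopy_rounds(total_pages: int, dirty_rate: float,
                   max_rounds: int = 10) -> int:
    """Simulate pre-copy: each round re-sends the pages dirtied
    during the previous round, until few enough remain to stop
    the VM and copy the rest (the short downtime phase)."""
    pages, rounds = total_pages, 0
    # Stop once the remaining dirty set falls below 1% of memory.
    while pages > total_pages // 100 and rounds < max_rounds:
        rounds += 1
        pages = int(pages * dirty_rate)  # pages dirtied while copying
    return rounds

print(precopy_rounds(100_000, 0.2))  # 3 rounds before final stop-and-copy
```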
Virtual Machines
Provide full isolation and encapsulation by running a complete operating system and application stack within a virtualized environment.
Containers
Lightweight, isolated environments that share the host operating system’s kernel, allowing for faster startup times and more efficient resource utilization.