Lecture Ten - Virtualisation and Cloud Computing Flashcards
Moore’s Law - Principle
The number of transistors in an integrated circuit doubles approximately every two years
Moore’s Law - Implication
Continuous growth in computing power and efficiency, driving technological advancement
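The doubling rule above can be sketched numerically. This is an illustrative projection only; the baseline transistor count is a hypothetical example, not a figure from the lecture.

```python
# Illustrative only: project transistor counts under Moore's Law
# (doubling roughly every two years) from a hypothetical baseline.

def projected_transistors(base_count: int, years: int) -> int:
    """Transistor count after `years`, doubling once every two years."""
    return base_count * 2 ** (years // 2)

# Hypothetical chip with 1 billion transistors today, projected 10 years out:
print(projected_transistors(1_000_000_000, 10))  # 32000000000 (5 doublings)
```

Five doublings over ten years gives a 32x increase, which is why even modest-sounding exponential growth dominates over a decade.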
Servers - Definition
High-performance computers designed for continuous operation under heavy workloads
Servers - Characteristics
High-Quality Hardware - Uses reliable components designed for continuous operation
Service Providers - Facilitate client-server interactions by providing dedicated services.
Specialization: Servers can be dedicated to specific tasks, such as web hosting or database management
Data Centre Formation
Multiple servers can be deployed to create a robust data centre infrastructure
Servers vs. Computers - Memory
Servers: Typically have larger RAM to support multiple users and applications.
Computers: Designed for individual use with standard memory capacities.
Servers vs. Computers - Storage
Servers: Utilize large, fast disks with RAID configurations for reliability and speed.
Computers: Use standard hard drives without advanced redundancy features.
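The redundancy behind RAID can be illustrated with the XOR parity idea used by RAID 5: the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A minimal sketch, with made-up block contents:

```python
# Sketch of RAID 5's XOR parity: parity = d0 XOR d1 XOR d2, so any one
# lost block can be reconstructed from the remaining blocks plus parity.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks byte-wise to produce a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"disk0", b"disk1", b"disk2"]   # toy data blocks, same length
p = parity(data)

# Simulate losing disk 1 and rebuilding it from the survivors + parity:
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```

This is why a RAID 5 array tolerates exactly one disk failure: with two blocks missing, the XOR equation has no unique solution.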
Servers vs. Computers - Processing Power
Servers: May have multiple CPUs for enhanced performance.
Computers: Generally have single CPUs sufficient for personal tasks.
Servers vs. Computers - Backup
Servers: Feature high-capacity backup drives to prevent data loss.
Computers: Typically use external or cloud backup solutions.
Servers vs. Computers - Connectivity
Servers: Equipped with multiple network cards for increased data throughput.
Computers: Usually have a single network interface.
Servers vs. Computers - Robustness
Servers: Built with high-quality components for 24/7 operation.
Computers: Designed for standard usage patterns.
Servers vs. Computers - Scalability
Servers: Allow for expansion with additional disks, power supplies, and CPUs.
Computers: Limited upgrade capabilities.
Basic Server-Client Model
Architecture: Describes the interaction between clients and servers in a network.
Process:
Request: Client sends a service request to the server.
Processing: Server processes the request and performs the necessary operations.
Response: Server sends the results back to the client, completing the interaction.
Latency Consideration: The time taken to provide a service response is critical for performance.
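The request/processing/response cycle above can be shown with a minimal TCP echo-style exchange on localhost. This is a sketch, not production server code: port 0 lets the OS pick a free port, and the "processing" step is just upper-casing the request.

```python
# Minimal request -> processing -> response cycle over TCP on localhost.
import socket
import threading

def serve_once(server: socket.socket) -> None:
    conn, _ = server.accept()          # wait for a client request
    with conn:
        request = conn.recv(1024)      # processing: here, just upper-case it
        conn.sendall(request.upper())  # response back to the client

server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: OS assigns a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
client.sendall(b"hello server")        # request
reply = client.recv(1024)              # response
print(reply)  # b'HELLO SERVER'
client.close()
server.close()
```

The latency the flashcard mentions is the time between `sendall` and `recv` returning; over a real network this round trip, not raw server speed, often dominates perceived performance.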
Tiered Server-Client Model
Structure: Involves multiple server layers to manage complex services.
Components:
Client: Initiates requests for services.
Frontend Server: Handles client requests and interacts with backend servers.
Backend Server: Processes requests and manages data operations.
Server-Client Model Types
Centralized Model: All client applications interact with a single server or server cluster.
Distributed Model: Clients communicate with multiple servers distributed across a network.
Hybrid Model: Combines centralized and distributed approaches for enhanced flexibility and resilience.
Server-Client Architecture
Components:
Client Side: User interface and client-side logic handle user interactions and requests.
Server Side: Server-side logic processes requests, manages data, and communicates with other servers.
Communication Protocols: Define the rules for data exchange between client and server.
Interoperability: Ensures seamless interaction between different systems and platforms.
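A communication protocol in the sense above is just an agreed message format that both sides implement. A toy sketch using JSON messages; the field names (`action`, `status`, `result`) are invented for illustration, not a real standard:

```python
# Toy application protocol: client and server agree on a JSON message
# shape, which is what makes independently written systems interoperable.
import json

def encode_request(action: str, params: dict) -> bytes:
    """Client side: serialize a request into the agreed wire format."""
    return json.dumps({"action": action, "params": params}).encode()

def handle_request(raw: bytes) -> bytes:
    """Server side: parse the request and produce a structured response."""
    msg = json.loads(raw)
    if msg["action"] == "ping":
        return json.dumps({"status": "ok", "result": "pong"}).encode()
    return json.dumps({"status": "error", "result": "unknown action"}).encode()

reply = json.loads(handle_request(encode_request("ping", {})))
print(reply["result"])  # pong
```

Because only the message format is shared, the client and server could be written in different languages on different platforms, which is the interoperability point the card makes.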
File Servers
Provide centralized storage for files accessible by multiple users.
FTP Servers
Enable file transfers over the Internet or WAN.
Application Servers
Host network-enabled applications for shared use.
Web Servers
Serve web pages and content using HTTP.
Email Servers
Manage email communications using protocols like SMTP, POP3, and IMAP.
Print Servers
Connect printers to networks, managing print queues and jobs.
Communications Servers
Handle network services like remote access and firewall management.
Database Servers
Store and manage databases for data-driven applications.
Proxy Servers
Intermediate devices that handle requests for resources, providing filtering and security.
Introduction to Virtualisation
Origin: Concept emerged in the 1960s with time-sharing systems.
Evolution: Initially used to run legacy software on new hardware.
Purpose: Abstract physical resources to create virtual environments, improving efficiency and flexibility.
Modern Significance: Central to cloud computing and resource optimization in IT infrastructures.
Basic Concept of Virtualisation
Physical Resources: Include CPU, memory, storage, and network components.
Virtual Machines (VMs): Abstracted instances running on physical machines, enabling multiple operating systems to coexist.
Virtualisation Benefits
Resource Optimization: Increases hardware utilization.
Isolation: Separates environments for security and stability.
Flexibility: Allows dynamic resource allocation.
Server Virtualisation
Hypervisor (VMM): Software layer enabling multiple VMs to run on a single physical server.
Types of Hypervisors:
Type-1 (Bare-Metal): Runs directly on hardware, offering high performance and security.
Type-2 (Hosted): Runs on top of an operating system, providing flexibility and ease of use.
Host/Guest Model:
Host OS: Manages hardware resources.
Guest OS: Runs within VMs, isolated from the host.
Virtualisation Levels - Emulation
Software simulates hardware, allowing unmodified OSes and applications to run.
Virtualisation Levels - Full/Native Virtualisation
VMs run as if they have direct access to hardware, unaware of the virtual layer.
Virtualisation Levels - Para-Virtualisation
OS is aware of the virtual environment, requiring modifications for efficiency.
Virtualisation Levels - OS-Level Virtualisation
Multiple user spaces share a single OS kernel, providing lightweight isolation.
Virtualisation Levels - Application Level Virtualisation
Provides a virtual environment for specific applications, often using interpreters or runtime compilers.
Virtualisation Levels Examples
Storage Virtualisation: Virtual disks and cloud storage solutions.
Computing Power Virtualisation: Virtual machines and cloud computing services.
Network Virtualisation: Virtual paths, circuits, and VPNs.
Function Virtualisation: Network function virtualisation (NFV) for flexible service deployments.
Physical vs. Virtual Server - Performance
Physical Server: Offers dedicated resources and optimal performance.
Virtual Server: May incur a performance penalty but is often sufficient for most applications.
Physical vs. Virtual Server - Security
Physical Server: Provides complete control over hardware and data.
Virtual Server: May share physical resources with other users, raising security concerns.
Physical vs. Virtual Server - Availability
Physical Server: A single point of failure unless redundant hardware is provisioned.
Virtual Server: Offers high availability with seamless failover and recovery.
Physical vs. Virtual Server - Cost
Physical Server: Requires significant upfront investment in hardware and maintenance.
Virtualisation: Can reduce long-term costs, especially for large deployments.
Traditional Networking
Control Plane: Manages signaling and routing decisions.
Data Plane: Handles user data transport.
Software-Defined Networking (SDN) Principles
Decoupling of Control and Data Planes: Separates decision-making from data transport.
Centralized Control: SDN controllers manage network behavior programmatically.
Programmability: Enables dynamic and flexible network configurations.
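The control/data-plane split can be mimicked in a toy simulation: a "controller" computes forwarding rules centrally and installs them into switch flow tables, while the "switches" only do dumb lookups. The topology and rule format here are invented purely for illustration.

```python
# Toy SDN model: centralized control plane installs rules; the data plane
# (switches) only matches destinations against installed flow tables.
controller_topology = {
    "s1": {"10.0.0.1": "port1", "10.0.0.2": "port2"},
    "s2": {"10.0.0.1": "port3", "10.0.0.2": "port1"},
}

def install_rules(switch_tables: dict, topology: dict) -> None:
    """Control plane: push per-switch flow rules programmatically."""
    for switch, rules in topology.items():
        switch_tables[switch] = dict(rules)

def forward(switch_tables: dict, switch: str, dst_ip: str) -> str:
    """Data plane: a switch just looks up its installed rule."""
    return switch_tables[switch].get(dst_ip, "drop")

tables: dict = {}
install_rules(tables, controller_topology)
print(forward(tables, "s1", "10.0.0.2"))  # port2
print(forward(tables, "s1", "10.0.0.9"))  # drop
```

Reconfiguring the network is then a software change in one place (the controller's topology map) rather than per-device manual configuration, which is the programmability benefit the card describes.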
Network Function Virtualisation (NFV)
Concept: Separates network functions from hardware, enabling virtual deployment.
Benefits:
Flexibility: Allows dynamic service deployment and scaling.
Efficiency: Reduces the need for dedicated hardware devices.
Implementation: Utilizes virtual machines to host functions like firewalls, load balancers, and NAT.
SDN and NFV Integration
Combined Benefits:
SDN: Provides centralized network management and control.
NFV: Offers flexible, virtualized network functions.
Application: Supports diverse network services in fixed and mobile access networks, enhancing performance and reducing costs.
Cloud Computing Concepts - Definition
Internet-based computing providing shared resources and services on demand.
Cloud Computing Concepts - Cloud Types
Public Cloud: Services available to the general public.
Private Cloud: Exclusive services for a specific organization.
Cloud Computing Concepts - Resource Abstraction
Virtual resources are abstracted from physical data centers, allowing flexible and scalable solutions.
Infrastructure as a Service (IaaS)
Provides virtualized computing resources over the Internet.
Consumer Control: Over OS, storage, and applications.
Examples: Amazon EC2, Google Compute Engine.
Platform as a Service (PaaS)
Offers a development platform for building applications.
Development Platform: For building and deploying applications.
Examples: Windows Azure, Google AppEngine.
Software as a Service (SaaS)
Delivers software applications over the Internet.
Software Delivery: Applications accessed via web browsers.
Examples: Google Apps, Microsoft Office 365.
Everything as a Service (XaaS)
Extends the service model to include various aspects of IT infrastructure.
Computational Principles of Cloud Computing
Multi-Tenancy: A single software instance serves multiple customers, requiring privacy, performance, and failure isolation.
Elasticity: Dynamic resource allocation based on demand.
Resource Consolidation: Aggregates workloads to optimize resource usage and reduce variability.
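Elasticity boils down to a control loop: add instances when per-instance load is high, release them when it is low. A minimal sketch; the thresholds are invented, and real autoscalers also consider cooldowns and multiple metrics.

```python
# Toy elasticity rule: keep per-instance load inside a target band by
# scaling the instance count up or down (thresholds are illustrative).

def autoscale(instances: int, load_per_instance: float,
              scale_up_at: float = 0.8, scale_down_at: float = 0.3) -> int:
    if load_per_instance > scale_up_at:
        return instances + 1
    if load_per_instance < scale_down_at and instances > 1:
        return instances - 1
    return instances

print(autoscale(4, 0.9))   # 5  (demand spike -> add an instance)
print(autoscale(4, 0.2))   # 3  (idle capacity -> release an instance)
print(autoscale(4, 0.5))   # 4  (within band -> no change)
```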
Cloud Computing Economics
Provisioning Dilemma: Balancing resource allocation with fluctuating demand patterns.
Cost Model: Pay-as-you-go pricing reduces upfront investment and accommodates demand spikes.
Economy of Scale: Shares resources across multiple users, akin to a utility service.
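The provisioning dilemma can be made concrete with a back-of-the-envelope comparison: fixed provisioning must pay for peak capacity around the clock, while pay-as-you-go pays only for each hour's actual demand. All prices and the demand profile below are made up purely for illustration.

```python
# Fixed provisioning vs pay-as-you-go under a spiky demand profile
# (all numbers invented for illustration).

hourly_demand = [2, 2, 3, 10, 4, 2]   # servers needed in each hour
owned_cost_per_server_hour = 0.5      # amortized cost of owned hardware
cloud_cost_per_server_hour = 1.0      # on-demand cloud price (higher per hour)

# Fixed provisioning must cover the peak and idles the rest of the time:
fixed_cost = max(hourly_demand) * len(hourly_demand) * owned_cost_per_server_hour

# Pay-as-you-go pays only for what each hour actually uses:
cloud_cost = sum(hourly_demand) * cloud_cost_per_server_hour

print(fixed_cost)  # 30.0
print(cloud_cost)  # 23.0
```

Even though the cloud price per server-hour is double here, the spiky demand makes pay-as-you-go cheaper overall, which is exactly the demand-spike argument the cost model makes.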
Future Trends in Cloud Computing
Edge Computing: Brings cloud capabilities closer to users, enabling applications with low latency and high resource demands.
Multi-Access Edge Computing (MEC): Extends telecom infrastructure with computing facilities for mobile applications.
Fog and Crowd Computing: Engages user equipment and shared resources for enhanced performance.
Future Trends - MEC
Applications: Supports mobile multimedia, augmented reality, video streaming, and gaming.
Challenges: Balances cost and performance with dynamic user traffic and limited resources.
Solutions: Incorporates edge computing paradigms to reduce latency and improve user experiences.