F3.2 Contain Applications Flashcards
Containerized Platforms and Applications
Why are HPC systems applied?
To perform large-scale financial and engineering simulations that demand LOW LATENCY and HIGH THROUGHPUT.
HPC workload managers lack microservice support and deeply integrated container management capabilities. True or false?
True
A container orchestrator, such as Kubernetes, on its own does not address all the requirements of HPC systems and cannot replace existing workload managers in HPC centers. What can be used?
A hybrid architecture composed of TWO clusters, an HPC cluster and a cloud cluster, in which container orchestration on the HPC cluster is performed by the container orchestrator (e.g., Kubernetes) located in the cloud cluster.
What is the role of resource managers, workload managers, or job schedulers in HPC clusters?
They are used to allocate processors and memory on compute nodes to users’ jobs.
Name two mainstream workload managers mentioned in the slides.
Slurm and TORQUE
In the TORQUE-managed cluster, what components are present on the head node?
A Portable Batch System (PBS) server daemon and a job scheduler daemon.
What is the advantage of using containers over traditional virtual machines (VMs) in cloud orchestration?
Containers share their host OS kernel instead of running a full guest OS, resulting in a more efficient use of resources and faster start-up times.
How does Singularity differ from Docker in terms of network configurations?
Singularity does not require additional network configurations, unlike Docker.
What is Kubernetes based on, and how does it provide services?
Kubernetes is based on a highly modular architecture, and it provides services through ‘deployment,’ specified in YAML files.
In the HPC + Cloud architecture, what is the role of Kubernetes and TORQUE?
Jobs are co-scheduled and co-managed by both Kubernetes and TORQUE: Kubernetes performs first-level scheduling, and TORQUE handles the second level.
What is the purpose of the login node in the HPC + Cloud architecture?
The login node serves as a bridge between the TORQUE and Kubernetes clusters, submitting TORQUE jobs to the HPC cluster.
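To make the bridge concrete: jobs submitted from the login node to a TORQUE-managed cluster are plain batch scripts with PBS directives. The job name, resource request, and executable below are illustrative assumptions, not from the slides.

```shell
#PBS -N demo-job            # hypothetical job name
#PBS -l nodes=1:ppn=4       # request 1 compute node with 4 processors
#PBS -l walltime=00:10:00   # maximum run time

cd $PBS_O_WORKDIR           # directory from which the job was submitted
./run_simulation            # placeholder executable
```

Such a script would typically be handed to the pbs_server via `qsub demo-job.pbs` on the login node.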
What is TORQUE-operator, and how does it connect Kubernetes and TORQUE?
TORQUE-operator is a tool that connects Kubernetes and TORQUE by creating a deployment on the Kubernetes cluster. It creates virtual nodes corresponding to TORQUE queues.
What services are carried out by the Singularity containers in HPC + Cloud architecture?
Four Singularity containers carry out the following services:
1. Generate the virtual node
2. Fetch queue information
3. Launch TORQUE jobs to the Kubernetes cluster
4. Transfer TORQUE jobs back to the TORQUE cluster
What were the two use cases presented from the CYBELE project?
Pilot Wheat Ear and Pilot Soybean Farming
What is the goal of the Pilot Soybean Farming use case?
The goal is to use machine learning for soybean farming, developing a prediction algorithm that infers hidden dependencies between input parameters and yield.
How does fog computing overcome limitations of cloud computing?
By utilizing resources close to end devices.
Why are containers considered better than VMs for fog computing?
Containers are easily deployable and have high performance, making them preferable in fog computing.
What is Kubernetes, and what role does a pod play in it?
Kubernetes is an open-source orchestration platform for container-based applications. A pod is the most fundamental unit in Kubernetes, containing one or more containers.
How does Kubernetes expose an application to external cluster access, and what is the role of ClusterIP?
Kubernetes exposes an application through a Service, and the Service is bound to a ClusterIP, which is a virtual IP address that never changes.
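As an illustration of a Deployment exposed through a Service (the names, labels, image, and ports below are placeholder assumptions, not from the slides), a minimal pair of YAML manifests might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.25   # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service        # hypothetical name
spec:
  type: NodePort            # also reachable from outside the cluster
  selector:
    app: demo               # routes to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```

Kubernetes assigns the Service a stable ClusterIP automatically; clients inside the cluster address the Service rather than individual (and ephemeral) pod IPs.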
What is the role of Kube-scheduler in Kubernetes, and how does it select the optimal node for a pod?
Kube-scheduler is the default scheduler in Kubernetes. It watches unscheduled pods, adds them to a waiting list, and selects the best node for each pod based on filtering and scoring steps.
What are some policies supported by the FILTERING step in Kubernetes scheduling?
Policies supported by the filtering step include
* PodFitsHostPorts,
* PodFitsResources,
* PodFitsHost, and
* CheckNodeCondition.
Name some policies supported by the SCORING step in Kubernetes scheduling.
Policies supported by the scoring step include
* SelectorSpreadPriority,
* BalancedResourceAllocation,
* NodeAffinityPriority, and
* ImageLocalityPriority.
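The two-step selection can be sketched in a few lines of Python. This is a toy model of the filter-then-score idea, not the real kube-scheduler: the node capacities and the balance-based score below are illustrative assumptions, loosely mimicking PodFitsResources and BalancedResourceAllocation.

```python
def filter_nodes(nodes, pod):
    """Filtering step: keep nodes with enough free CPU and memory
    (a PodFitsResources-style check)."""
    return [n for n in nodes
            if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]

def score_node(node, pod):
    """Scoring step: prefer nodes whose remaining CPU and memory
    fractions stay balanced (a BalancedResourceAllocation-style score)."""
    cpu_frac = (node["free_cpu"] - pod["cpu"]) / node["cap_cpu"]
    mem_frac = (node["free_mem"] - pod["mem"]) / node["cap_mem"]
    return 1.0 - abs(cpu_frac - mem_frac)  # higher score = more balanced

def schedule(nodes, pod):
    """Filter first, then pick the highest-scoring feasible node."""
    feasible = filter_nodes(nodes, pod)
    if not feasible:
        return None  # pod stays in the waiting list
    return max(feasible, key=lambda n: score_node(n, pod))["name"]

# Hypothetical cluster state and pod request
nodes = [
    {"name": "node-a", "cap_cpu": 4, "cap_mem": 8, "free_cpu": 1, "free_mem": 6},
    {"name": "node-b", "cap_cpu": 4, "cap_mem": 8, "free_cpu": 3, "free_mem": 5},
]
pod = {"cpu": 2, "mem": 2}
print(schedule(nodes, pod))  # node-a is filtered out (1 CPU free < 2 requested)
```

The real scheduler runs many filtering and scoring plugins and normalizes their scores, but the filter-then-rank structure is the same.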
What is ElasticFog, and how does it address the resource allocation challenge in a fog computing environment?
ElasticFog is an elastic resource provisioning method for applications on a container-based fog computing platform. It considers network traffic status for resource allocation and dynamically adapts based on real-time traffic information.
How does ElasticFog use nodeAffinity rules in Kubernetes for pod allocation across locations?
ElasticFog uses the preferred rule of nodeAffinity to allocate pods based on the proportion of incoming network traffic at each location.
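A preferred (soft) nodeAffinity rule of the kind described might look like the fragment below. The `location` label key, location names, and weights are illustrative assumptions; the weights stand in for the observed traffic proportions at each location.

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 70              # assumed share of traffic at location-a
      preference:
        matchExpressions:
        - key: location       # hypothetical node label
          operator: In
          values: ["location-a"]
    - weight: 30              # assumed share of traffic at location-b
      preference:
        matchExpressions:
        - key: location
          operator: In
          values: ["location-b"]
```

Because the rule is "preferred" rather than "required", the scheduler still places pods elsewhere if the favored locations are full.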
How does ElasticFog demonstrate awareness of changes in network traffic in real-time during performance evaluations?
ElasticFog adjusts the number of pods among locations based on the proportion of requests coming from each location, showing real-time awareness of changes in network traffic status.
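The proportional adjustment can be sketched as a small function: given a pod budget and per-location request counts, distribute pods in proportion to traffic. This is a simplified illustration of the idea, not ElasticFog's actual algorithm; the location names and remainder-handling rule are assumptions.

```python
def allocate_pods(total_pods, traffic):
    """Distribute total_pods across locations in proportion to traffic.

    traffic: dict mapping location name -> observed request count.
    Integer remainders go to the highest-traffic locations
    (an illustrative tie-breaking choice).
    """
    total = sum(traffic.values())
    alloc = {loc: total_pods * t // total for loc, t in traffic.items()}
    leftover = total_pods - sum(alloc.values())
    for loc in sorted(traffic, key=traffic.get, reverse=True)[:leftover]:
        alloc[loc] += 1
    return alloc

# Hypothetical traffic snapshot: 50% / 30% / 20% of requests
print(allocate_pods(10, {"edge-1": 500, "edge-2": 300, "edge-3": 200}))
```

Re-running this on each new traffic snapshot and updating the deployment accordingly gives the real-time adaptation behavior described above.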