Cloud+ Flashcards
Hub
A network hub is a basic networking device that connects multiple devices within a LAN. It is a central point where devices can be connected to share data and communicate with each other. However, network hubs have been largely replaced by more advanced devices such as switches.
Network hubs operate at the physical layer of the network and work by receiving data packets from one device and broadcasting them to all other connected devices, regardless of destination. This means that all devices on a hub’s network share the same bandwidth, and collisions are likely to occur if multiple devices transmit data simultaneously. Drawbacks of the network hub include the inability to manage or prioritize network traffic, filter data, or make intelligent forwarding decisions. Hubs are rarely used in modern network setups.
Bridge
A network bridge is a networking device or software component that connects multiple network segments or LANs (Local Area Networks) together. It operates at the data link layer (Layer 2) of the OSI (Open Systems Interconnection) model and is used to forward network traffic between different network segments.
The primary function of a network bridge is to selectively transmit data packets between network segments based on their destination MAC (Media Access Control) addresses. When a bridge receives a packet, it examines the MAC address of the packet and determines whether to forward it to the other network segment or discard it. The bridge maintains a table called the bridge forwarding table or MAC table, which associates MAC addresses with the network segments they belong to. Unlike network hubs, which broadcast data to all connected devices, a bridge is more selective and intelligent in its forwarding process. It only forwards packets across network segments if the destination MAC address is located on the other segment, thus reducing unnecessary traffic and improving overall network efficiency.
Network bridges have been largely replaced by more advanced technologies such as switches and routers. Switches, in particular, offer similar functionality to bridges but with additional features and improved performance. However, bridges still have their uses in specific networking scenarios, such as connecting legacy equipment or extending the range of a network.
Switch
A network switch is a networking device that connects multiple devices within a Local Area Network (LAN) and facilitates communication between them. It operates at the data link layer (Layer 2) and sometimes at the network layer (Layer 3) of the OSI (Open Systems Interconnection) model. The primary function of a network switch is to receive incoming network packets and forward them to their intended destination based on the MAC (Media Access Control) addresses of the devices connected to the switch. When a switch receives a packet, it examines the destination MAC address and looks up its forwarding table to determine the port to which the packet should be sent. This process is known as switching, and it allows devices within the LAN to communicate directly with each other.
Network switches offer several advantages over network hubs and bridges. Unlike hubs, which broadcast data to all connected devices, switches create dedicated connections between devices, allowing for simultaneous communication without collisions. This improves network performance and bandwidth utilization. Additionally, switches can handle simultaneous traffic across multiple ports, providing full-duplex communication.
Switches come in various configurations, such as unmanaged, managed, and Layer 3 switches.
Switching
When a switch receives a packet, it examines the destination MAC address and looks up its forwarding table to determine the port to which the packet should be sent.
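To make the table lookup concrete, here is a minimal Python sketch of a switch’s learn-and-forward logic; the frame representation and port numbering are invented for illustration:

    # Minimal sketch of MAC learning and forwarding (illustrative only).
    class Switch:
        def __init__(self, num_ports):
            self.num_ports = num_ports
            self.mac_table = {}  # MAC address -> port number

        def receive(self, frame, in_port):
            # Learn: associate the sender's MAC with the ingress port.
            self.mac_table[frame["src_mac"]] = in_port
            out_port = self.mac_table.get(frame["dst_mac"])
            if out_port is not None and out_port != in_port:
                return [out_port]  # known destination: forward to one port
            # Unknown destination: flood to all ports except the ingress.
            return [p for p in range(self.num_ports) if p != in_port]

    sw = Switch(num_ports=4)
    sw.receive({"src_mac": "aa:aa", "dst_mac": "bb:bb"}, in_port=0)         # floods
    print(sw.receive({"src_mac": "bb:bb", "dst_mac": "aa:aa"}, in_port=2))  # [0]

Real switches implement this logic in hardware and also age out table entries that have not been seen recently.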
Unmanaged Switch
Unmanaged switches are plug-and-play devices that operate with default settings, making them easy to use but with limited configuration options.
Managed Switch
Managed switches provide more control and configuration capabilities, allowing network administrators to monitor and manage the network traffic, implement security features, and optimize performance.
Layer 3 Switch / Multi-layer Switch
Layer 3 switches, also known as multi-layer switches, can perform routing functions in addition to switching, making them capable of forwarding packets based on IP addresses.
vNIC
A vNIC (virtual Network Interface Card) is a software-based representation of a physical network interface card within a virtualized environment. It emulates the functionality of a physical NIC, allowing virtual machines (VMs) or containers to connect to virtual networks and communicate with other devices and systems.
A vNIC is created and assigned to each virtual machine or container running on a hypervisor or containerization platform. It provides the necessary network connectivity for the virtual instance to send and receive data over the virtual network infrastructure. From the perspective of the virtual machine or container, a vNIC appears and behaves like a physical NIC, enabling network communication.
Virtualization technologies such as VMware, Hyper-V, or KVM, as well as container platforms like Docker or Kubernetes, utilize vNICs to establish network connectivity and enable virtual instances to access the underlying physical network infrastructure or communicate with other virtual machines or containers within the same virtual environment.
The configuration and properties of vNICs can be managed and adjusted within the virtualization or containerization platform, allowing network settings, such as IP addresses, subnet masks, VLAN tags, or quality-of-service parameters, to be defined and customized for each virtual instance. This flexibility enables administrators to tailor network connectivity to meet the specific requirements of virtual machines or containers within the virtual environment.
vSwitch
A vSwitch (virtual switch) is a software-based networking component used in virtualized environments to connect and manage network traffic between virtual machines (VMs) or containers running on a hypervisor or containerization platform. Similar to a physical network switch, a vSwitch operates at the data link layer (Layer 2) of the OSI model and performs the following functions:
Network connectivity: A vSwitch provides vPorts (virtual network ports) to which virtual machines or containers can be connected. It enables communication between virtual instances within the same virtual network or across different virtual networks.
Packet forwarding: Incoming network traffic from virtual machines or containers is received by the vSwitch, which makes forwarding decisions based on the MAC (Media Access Control) addresses of the virtual instances. It forwards packets to the appropriate destination vPorts, ensuring proper delivery.
VLAN support: A vSwitch often includes support for Virtual LANs (VLANs), allowing network segmentation and isolation within the virtual environment. VLANs help to enhance network security, optimize network performance, and provide logical separation between different groups of virtual instances.
vSwitches are integral components of virtualization platforms such as VMware vSphere, Microsoft Hyper-V, or KVM, as well as containerization platforms like Docker or Kubernetes. They enable virtual machines or containers to access the physical network infrastructure and communicate with other virtual instances, while also providing network management capabilities within the virtual environment. The configuration and management of vSwitches are typically done through the virtualization or containerization platform’s management interfaces, allowing administrators to define network settings, monitor network traffic, and apply network policies to efficiently manage the virtual network infrastructure.
vPorts
(Virtual Network Ports) Enables communication between virtual instances within the same virtual network or across different virtual networks.
Packet Forwarding
Packet forwarding is the process of routing network packets from a source to a destination within a computer network. When a packet arrives at a network device (such as a router or switch), the device examines the packet’s destination address and determines the optimal path for forwarding the packet to its intended destination. This involves looking up routing tables or forwarding rules to identify the next hop or outgoing interface for the packet. The device then encapsulates the packet in a new frame with appropriate addressing information and transmits it toward the next network device in the path.
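As a rough illustration of the lookup step, this Python sketch performs a longest-prefix match against a small, made-up routing table using the standard ipaddress module:

    import ipaddress

    # Illustrative routing table: prefix -> next hop (all addresses made up).
    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "10.0.0.1",
        ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",
        ipaddress.ip_network("0.0.0.0/0"): "192.168.1.254",  # default route
    }

    def next_hop(dst):
        dst = ipaddress.ip_address(dst)
        # Longest-prefix match: among matching prefixes, pick the most specific.
        matches = [net for net in routes if dst in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return routes[best]

    print(next_hop("10.1.2.3"))  # 10.1.0.1 (the /16 wins over the /8)
    print(next_hop("8.8.8.8"))   # 192.168.1.254 (falls through to the default)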
VLAN
A VLAN (Virtual Local Area Network) is a logical network that is created within a physical network infrastructure. It allows network devices to be grouped together, even if they are not physically connected on the same network switch. VLANs provide isolation, security, and flexibility by segmenting a network into smaller, virtual subnetworks. Devices within the same VLAN can communicate with each other as if they were connected to the same physical network, while traffic between VLANs requires routing through a router or Layer 3 switch. VLANs enable network administrators to efficiently manage network traffic, implement security policies, and optimize network performance by logically separating devices and controlling communication between them.
Traffic Shaping
Traffic shaping is a network management technique used to control and prioritize network traffic flows. It involves managing the bandwidth allocation and transmission rates of different types of network traffic to ensure optimal network performance and avoid congestion. By shaping traffic, administrators can regulate the flow of data based on predefined policies, such as prioritizing critical applications or limiting bandwidth for specific types of traffic. This helps to enhance network efficiency, minimize latency, and ensure fair usage of available network resources.
Traffic shaping is a specific technique within the broader concept of QoS.
QoS
QoS, or Quality of Service, is a network management concept that aims to prioritize and control the delivery of network traffic based on specific requirements. It involves techniques and mechanisms to ensure that critical traffic receives preferential treatment in terms of bandwidth, latency, and reliability.
QoS focuses on delivering a consistent level of service to different types of network traffic, such as voice, video, data, or real-time applications. It involves setting priorities, allocating resources, and implementing policies to meet specific performance targets and ensure a satisfactory user experience.
Traffic shaping, on the other hand, is a specific technique within the broader concept of QoS. It involves controlling the flow of network traffic to smooth out peaks and prevent congestion.
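A common shaping mechanism is the token bucket, which enforces an average rate while allowing short bursts. A minimal Python sketch, with arbitrary rate and burst values:

    import time

    class TokenBucket:
        """Allow 'rate' packets per second with bursts up to 'capacity'."""
        def __init__(self, rate, capacity):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, cost=1):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, up to the cap.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False  # over the limit: the packet is delayed or dropped

    bucket = TokenBucket(rate=100, capacity=200)  # 100 pkt/s, bursts to 200
    print(bucket.allow())  # True while tokens remain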
HTTPS
(Hypertext Transfer Protocol Secure) is a communication protocol used for secure, encrypted data transfer over computer networks, especially the internet. It is an extension of the standard HTTP protocol and adds an extra layer of security by using the SSL (Secure Sockets Layer) or TLS (Transport Layer Security) encryption protocols.
HTTPS ensures that the data transmitted between a client (such as a web browser) and a server is encrypted and protected from eavesdropping or tampering. This encryption is achieved through the use of digital certificates, which authenticate the identity of the server and establish a secure connection.
Port 443
SSL
(Secure Sockets Layer) is a cryptographic protocol that provides secure communication over computer networks, especially the internet. It was widely used to establish secure connections between a client (such as a web browser) and a server, encrypting the data transmitted between them. SSL operates at the transport layer (Layer 4) of the OSI model and ensures confidentiality, integrity, and authentication of data. It uses asymmetric encryption (also known as public-key cryptography) to establish a secure session between the client and the server.
The SSL initiation process, also known as the SSL handshake, is the initial exchange between a client and a server to establish a secure SSL/TLS connection. It involves the client and server exchanging information about supported SSL versions, selecting cipher suites, authenticating certificates, and exchanging cryptographic keys. Once the handshake is complete, a secure session is established, enabling encrypted communication between the client and server.
SSL provides encryption, data integrity, and authentication; with TLS, it can also provide forward secrecy (if the server’s private key is compromised in the future, previously recorded communications still cannot be decrypted).
Though there are some differences, the terms “SSL” and “TLS” are used interchangeably.
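In practice the handshake is carried out by a TLS library. Here is a minimal Python sketch that performs a handshake against a public HTTPS endpoint; the hostname is illustrative:

    import socket
    import ssl

    hostname = "example.com"  # illustrative endpoint
    context = ssl.create_default_context()  # validates the server certificate

    with socket.create_connection((hostname, 443)) as sock:
        # wrap_socket runs the handshake: version/cipher negotiation,
        # certificate validation, and key exchange all happen here.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())  # e.g. 'TLSv1.3'
            print(tls.cipher())   # the negotiated cipher suite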
Forward Secrecy
Forward secrecy, also known as perfect forward secrecy (PFS), is a cryptographic property that ensures the confidentiality of past communication even if the long-term private key of a system is compromised in the future. It achieves this by generating unique session keys for each communication session, preventing the decryption of past sessions even if the private key is obtained.
TLS
(Transport Layer Security) is a cryptographic protocol designed to provide secure communication over computer networks, such as the internet. It is the successor to SSL (Secure Sockets Layer) and operates at the transport layer (Layer 4) of the OSI model.
TLS differs from SSL in that it has its own versions (SSL 1.0, 2.0, and 3.0 vs. TLS 1.0 through 1.3) and incorporates stronger cryptographic algorithms. More secure algorithms and cipher suites are used for key exchange, authentication, and encryption. TLS also adds support for forward secrecy, which SSL lacks. TLS is designed to be backward-compatible with SSL, allowing it to negotiate down to SSL protocols and cipher suites when necessary.
Though there are some differences, the terms “SSL” and “TLS” are used interchangeably.
IPSEC
(Internet Protocol Security) is a suite of protocols used to secure Internet Protocol (IP) communications by providing authentication, integrity, and confidentiality services. It is commonly used for creating virtual private networks (VPNs) and ensuring secure communication between network devices over potentially insecure networks, such as the internet.
Operates in two modes: Transport and Tunnel
Transport Mode (IPSEC)
IPSEC secures only the payload of the IP packet while leaving the IP headers intact. This mode is typically used for securing end-to-end communication between two hosts.
Tunnel Mode (IPSEC)
The entire IP packet, including the original IP headers, is encapsulated within a new IP packet. This mode is commonly used for secure communication between networks or for remote access VPNs.
SSH
(Secure Shell) is a network protocol that provides a secure and encrypted method for remote login, command execution, and data communication between two networked devices. It is commonly used to establish a secure remote connection to a server or network device over an unsecured network, such as the internet.
SSH provides secure communications through the use of strong encryption algorithms that protect against eavesdropping, tampering, and similar attacks. SSH requires authentication of users (such as through generated key pairs) before establishing a connection; once a connection has been established, remote commands can be executed. SSH also provides secure file transfer capabilities, allowing users to securely transfer files between the local and remote systems. Lastly, SSH incorporates port forwarding, which allows users to securely tunnel other network protocols or services through the SSH connection.
Port 22
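As an illustration, here is a minimal sketch using the third-party paramiko library to authenticate with a key pair and run a remote command; the host, username, and key path are placeholders:

    import os
    import paramiko  # pip install paramiko

    client = paramiko.SSHClient()
    # Demo only: in production, verify host keys instead of auto-accepting.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        "host.example.com",  # placeholder host
        port=22,
        username="admin",    # placeholder user
        key_filename=os.path.expanduser("~/.ssh/id_ed25519"),  # key-pair auth
    )
    stdin, stdout, stderr = client.exec_command("uname -a")  # remote command
    print(stdout.read().decode())
    client.close()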
RDP
(Remote Desktop Protocol) is a proprietary protocol developed by Microsoft that allows users to remotely connect and control a Windows-based computer or server from another device. It provides a graphical user interface (GUI) for accessing and interacting with a remote computer as if you were sitting in front of it.
Port 3389
Hardware Based VPN
A type of VPN implementation that relies on dedicated hardware devices to establish secure connections between remote networks or individual devices.
Hardware-based VPNs offload the VPN processing tasks to specialized devices, typically known as VPN appliances or VPN gateways. Hardware-based VPNs are particularly suitable for organizations that require high-performance, scalable, and secure VPN solutions. They are commonly deployed in enterprise networks, data centers, and large-scale VPN deployments where dedicated hardware resources can optimize VPN performance and manage large volumes of VPN traffic effectively.
Uses IPSEC for secure communications.
Software Based VPN
A type of VPN implementation that relies on dedicated VPN client software to establish secure connections between remote networks or individual devices.
Uses IPSEC for secure communications.
MPLS VPN
A type of VPN implementation that utilizes Multiprotocol Label Switching to create virtual pathways or tunnels that ensure the privacy and isolation of data traffic between the connected sites.
MPLS reduces routing complexity by forwarding traffic based on short labels rather than performing full IP routing-table lookups at every hop, which can increase the efficiency of routing network traffic. Access to and from cloud data centers, as well as access within an organization’s network, may involve many routers, so this added efficiency may enhance network performance.
Uses IPSEC for secure communications.
What are the 3 main components of a cloud service solution?
- Client: Means of access to cloud services for consumers. Cloud services may include storage, email, e-commerce, office suites, and development environments. Users may access these services from phones, tablets, traditional computers, IoT devices, and servers. The cloud client devices may be any device with a network connection. The major operating systems on the client devices include Windows, macOS, Linux, iOS, and Android.
- CSP Datacenter: Hosts cloud services. Major CSPs (AWS, Microsoft, Google) have a great many datacenters distributed across the world. These datacenters are redundant, have extremely reliable access to power, have extremely reliable Internet access, and are physically secure. Cloud services are hosted within the walls of these datacenters.
- Network: Path between cloud services and client devices. In some deployment models, the network connection may be wholly owned and operated by your company. In other cases, the Internet may be the network path to cloud services. Access may also come via cell connections. In some cases, all three network connection types may be used.
Public Cloud
Public cloud is a type of computing where resources are offered by a third-party provider via the internet and shared by organizations and individuals who want to use or purchase them. Some public cloud computing resources are available for free, while customers may pay for other resources through subscription or pay-per-usage pricing models.
Private Cloud
Private cloud is defined as computing services offered either over the Internet or a private internal network and only to select users instead of the general public. Also called an internal or corporate cloud, private cloud computing gives businesses many of the benefits of a public cloud - including self-service, scalability, and elasticity - with the added control and customization available from dedicated resources over a computing infrastructure hosted on-premises.
There are three main types of Private Clouds:
- On-premises Private Cloud
- Managed Private Cloud
- Virtual Private Cloud
On-Premises Private Cloud
An on-premises private cloud is one that you deploy on your own resources in an internal data center. You must purchase the resources, maintain and upgrade them, and ensure security. On-premises private cloud management is expensive, requiring a heavy initial investment and ongoing expenses.
Managed Private Cloud
A managed private cloud is a single-tenant environment fully managed by a third party. For example, the IT infrastructure for your organization could be purchased and maintained by a third-party organization in its data center.
The third party provides maintenance, upgrades, support, and remote management of your private cloud resources. While managed private clouds are expensive, they are more convenient than on-premises solutions.
Virtual Private Cloud
A virtual private cloud is a private cloud that you can deploy within a public cloud infrastructure. It is a secure, isolated environment where private cloud users can run code, host websites, store data, and perform other tasks that require a traditional data center.
Virtual Private Clouds efficiently give you the convenience and scalability of public cloud computing resources along with additional control and security.
Also known as Cloud Within a Cloud.
Community Cloud
Community cloud computing refers to a shared cloud computing service environment that is targeted to a limited set of organizations or employees. The organizing principle for the community will vary, but the members of the community generally share similar security, privacy, performance and compliance requirements.
Hybrid Cloud
A hybrid cloud is a combination of two or more private, public, or community deployments. For example, an organization may choose to utilize some services offered via a CSP’s public cloud while hosting other services in a private cloud environment.
The services in the public cloud portion may be cheaper, and security may be less of a concern. The services hosted in the private cloud may be more secure, but deployment is more expensive.
Multitenancy
A cloud model where CSP resources are shared among multiple clients (tenants), and is the concept behind public cloud deployments. Multiple consumers, known as tenants, share computing resources owned and managed by the CSP. This is the opposite idea from a VPC deployment.
It is multitenancy that provides the cost benefits behind shared resource utilization.
Multi-cloud
Multi-cloud is the use of cloud computing services from at least two cloud providers to run an organization’s applications. Instead of using a single-cloud stack, multi-cloud environments typically include a combination of two or more public clouds, two or more private clouds, or some combination of both.
Multi-cloud deployments reduce reliance on a single vendor, provide greater service flexibility and choice, permit improved geographic control of data, and help manage disaster mitigation.
Digital Ocean
Digital Ocean is a cloud hosting provider that offers cloud computing services and IaaS. Known for its pricing and scalability, teams can deploy Digital Ocean resources in seconds at low cost. This structure can help anyone get up and running quickly in the cloud.
Rackspace
Rackspace is a cloud service provider whose storage offerings include Cloud Files, Cloud Block Storage, and Cloud Backup. Rackspace also provides cloud servers, database platforms, load balancers, and other services to organizations. Users connect to it with the REST API.
Red Hat Cloud Suite
Red Hat, originally known for its enterprise Linux operating system and supporting services, offers Red Hat Cloud Suite for cloud services. The suite consists of four key products: OpenStack Platform (for building public and private clouds), Virtualization, Satellite (for cloud services management), and OpenShift (for Kubernetes container management).
OpenStack
OpenStack is an open source platform that uses pooled virtual resources to build and manage private and public clouds. The tools that comprise the OpenStack platform, called “projects”, handle the core cloud-computing services: compute, networking, storage, identity, and image services.
OpenShift
OpenShift is a cloud-based Kubernetes platform that helps developers build applications. It offers automated installation, upgrades, and life cycle management throughout the container stack - the operating system, Kubernetes and cluster services, and applications - on any cloud.
IoT
IoT refers to a combination of network connectivity and smart devices that facilitate the collection and analysis of data. These devices may include software, sensors, and robotics that exchange data and instructions over the Internet or internal networks. The IoT is enabled by nearly global network connectivity, low-cost sensors to collect data, and cloud management platforms.
Common uses for IoT products include:
- Smart Homes
- Medical Monitoring
- Agriculture Management
- Energy Management
- Manufacturing
Serverless Computing
A software architecture that runs functions within virtualized runtime containers in a cloud rather than on dedicated server instances. Serverless computing still utilizes compute resources, contrary to what the name implies. Compute resources are allocated on demand to applications, and no resources are reserved when the application is not in use. Billing reflects the application’s use of resources. Serverless environments require no configuration, monitoring, or capacity planning.
Artificial Intelligence (AI)
The science of creating machines with the ability to develop problem solving and analysis strategies without significant human direction or intervention.
AI is concerned with simulating human intelligence by providing structured, semi-structured, and unstructured data and solving complex problems. AI accomplishes this by using a set of rules to manage its analysis.
Machine Learning (ML)
A component of AI that enables a machine to develop strategies for solving a task given a labeled dataset where features have been manually identified but without further explicit instructions.
The goal of ML is to make accurate predictions by extracting data based on learned information and experience. ML systems are not explicitly programmed to find a particular outcome. Instead, they are programmed to learn from provided data and then make accurate decisions based on what they’ve learned. Insights are gained with minimal human interaction.
Deep Learning
A refinement of machine learning that enables a machine to develop strategies for solving a task given an unlabeled dataset and without further explicit instructions.
DL provides a greater degree of accuracy when analyzing unstructured data.
Simple Storage Service (S3)
Amazon S3 is a service built to store, protect, and retrieve data from “buckets” at any time from anywhere on any device. Organizations of any size and industry can use this service. Use cases include websites, mobile apps, archiving, data backups and restorations, IoT devices, enterprise application storage, and providing the underlying storage layer for a data lake.
Organizing and retrieving data in Amazon S3 focuses on two key components: Buckets and Objects. These components work together to create the storage system. As AWS describes it, an S3 environment is a flat structure - a user creates a bucket; the bucket stores the objects in the cloud.
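A minimal sketch of the bucket/object model using boto3, the AWS SDK for Python; the bucket and key names are placeholders, and AWS credentials are assumed to be configured:

    import boto3  # pip install boto3

    s3 = boto3.client("s3")
    # Store an object under a key in an existing bucket (names are placeholders).
    s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt",
                  Body=b"hello from S3")
    # Retrieve the same object by bucket and key.
    obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
    print(obj["Body"].read().decode())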
Advantages of S3
Scalability: AWS allows you to scale resources up and down, while only charging you for the amount of resources you use.
Durability and Accessibility: S3 is designed for eleven 9s (99.999999999%) of durability, meaning it is extremely reliable. The service automatically creates and stores copies of your S3 objects across multiple systems, meaning your data is protected and you can access it quickly whenever you need it.
Cost Effective: S3 offers a range of storage classes, allowing frequently accessed data that must be available immediately to live on hot storage, while less frequently needed data can be placed on warm or even cold storage. S3 can also prioritize data based on ongoing access patterns to allow for cost optimization.
Versioning: This is a setting that allows for multiple variants of the same file or object to exist in the same bucket. This provides an opportunity to roll back or recover a deleted object.
Elastic Compute Cloud (EC2)
Amazon EC2 provides scalable computing capacity in the AWS cloud. Leveraging it enables organizations to develop and deploy applications faster, without needing to invest in hardware upfront. Users can launch virtual servers, configure security and networking, and manage storage from an intuitive dashboard.
AWS EC2 is important, as it does not require any hardware units, and is easily scalable (up or down). With EC2, you only pay for what you use, and you are given full control of your cloud environment. EC2 is highly secure, as well as highly available.
What are some of the features of EC2?
Virtual Machines, known as Instances.
Preconfigured Templates, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including OS and software)
Various hardware configurations.
Secure login through use of key pairs.
Storage volumes for temporary data that are discarded when the instance is stopped, hibernated, or terminated, known as instance store volumes.
Persistent storage volumes for your data using Amazon EBS
Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones.
A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances using security groups.
Use of Elastic IP Addresses.
Metadata, known as “Tags”, that you can create and assign to your Amazon EC2 resources.
Use of VPC
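A minimal boto3 sketch that ties several of these features together by launching a single instance; every identifier below is a placeholder:

    import boto3  # pip install boto3; AWS credentials assumed configured

    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # an AMI: the preconfigured template
        InstanceType="t3.micro",          # the hardware configuration
        KeyName="my-key-pair",            # key pair for secure login
        SecurityGroupIds=["sg-0123456789abcdef0"],  # firewall rules
        MinCount=1, MaxCount=1,
        TagSpecifications=[{"ResourceType": "instance",
                            "Tags": [{"Key": "Name", "Value": "demo"}]}],
    )
    print(resp["Instances"][0]["InstanceId"])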
AWS Lambda
AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging. With Lambda, all you need to do is supply your code in one of the language runtimes that Lambda supports.
You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. You only pay for the compute time that you consume - there is no charge when your code is not running.
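A minimal Python Lambda function; Lambda invokes whatever handler you configure (here lambda_handler) with the event payload and a runtime context object. The "name" field is an illustrative part of the caller’s payload, not a Lambda requirement:

    import json

    def lambda_handler(event, context):
        # 'event' carries the invocation payload; 'context' carries runtime info.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }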
AWS S3 Glacier
Amazon S3 Glacier is a secure and durable service for low cost data archiving and long-term backup.
With S3 Glacier, you can store your data cost effectively for months, years, or even decades. S3 Glacier helps you offload the administrative burdens of operating and scaling storage to AWS, so you don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations.
S3 Glacier can be divided into three storage classes:
- Instant Retrieval (Hot Storage)
- Flexible Retrieval (Warm Storage)
- Deep Archive (Cold Storage)
Amazon SNS
Amazon SNS is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers).
Publishers communicate asynchronously with subscribers by sending messages to a “topic”, which is a logical access point and communication channel.
Clients can subscribe to the SNS topic and receive published messages using a supported endpoint type, such as Amazon SQS, AWS Lambda, email, push notifications, etc.
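A minimal boto3 sketch of the publish/subscribe flow; the topic name and email address are placeholders:

    import boto3  # pip install boto3; AWS credentials assumed configured

    sns = boto3.client("sns")
    topic = sns.create_topic(Name="demo-topic")  # idempotent; returns the ARN
    sns.subscribe(TopicArn=topic["TopicArn"], Protocol="email",
                  Endpoint="ops@example.com")    # the subscriber must confirm
    sns.publish(TopicArn=topic["TopicArn"],
                Subject="Deployment finished",
                Message="Every subscriber to this topic receives this message.")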
Amazon CloudFront
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that edge location, CloudFront retrieves it from an origin that you’ve defined - such as an Amazon S3 bucket or an HTTP server - that you have identified as the source for the definitive version of your content.
Azure Visual Studio
Visual Studio is a powerful developer tool that you can use to complete the entire development cycle in one place. It is a comprehensive integrated development environment (IDE) that you can use to write, edit, debug, and build code, and then deploy your app. Beyond code editing and debugging, Visual Studio includes compilers, code completion tools, source control, extensions, and many more features to enhance every stage of the software development process.
Visual Studio provides developers a feature rich development environment to develop high-quality code efficiently and collaboratively. Some features include:
- Workload-based Installer (install only what you need)
- Powerful coding tools and features
- Multiple coding language support
- Cross-platform development (build apps for any platform)
- Version control integration (collaborate on code with teammates)
Azure Backup
The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Azure Cloud.
Azure SQL
Azure SQL Database is a fully managed platform as a service (PaaS) database engine that handles most of the database management functions such as upgrading, patching, backups, and monitoring without user involvement. Azure SQL Database is always running on the latest stable version of the SQL Server database engine and patched OS with 99.99% availability. PaaS capabilities built into Azure SQL Database enable you to focus on the domain-specific database administration and optimization activities that are critical for your business.
With Azure SQL Database, you can create a highly available and high-performance data storage layer for the applications and solutions in Azure. SQL Database can be the right choice for a variety of modern cloud applications because it enables you to process both relational data and non-relational structures, such as graphs, JSON, spatial, and XML.
Azure Cosmos DB
Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times and automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with SLA-backed availability and enterprise-grade security.
Managed Service Provider
An MSP is a company that remotely manages a customer’s IT infrastructure and/or end-user systems, typically on a proactive basis and under a subscription model. The terms “cloud service provider” and “managed service provider” are sometimes used as synonyms when the provider’s service is supported by an SLA and is delivered over the internet.
There are also MSPs who are independent of the CSP. Your organization may choose to outsource cloud design, migration, deployment, and management solutions to these companies, relying on their expertise and experience. Many CSPs also offer management services for their products. For example, AWS offers AWS Managed Services.
The Shared Responsibility Model
The Shared Responsibility Model is a security and compliance framework that outlines the responsibilities of cloud service providers (CSPs) and customers for securing every aspect of the cloud environment, including hardware, infrastructure, endpoints, data, configurations, settings, operating system (OS), network controls and access rights.
As Amazon puts it: “CSPs are responsible for the security of the cloud; the consumer is responsible for security in the cloud.”
Cloud Subscription Service Models
Refers to the pricing and billing structure that cloud service providers use to offer their services to customers. Instead of purchasing software or hardware upfront, customers pay a recurring fee to access and use cloud-based resources.
IAM
Identity and Access Management (IAM) lets administrators authorize who can take action on specific resources, giving you full control and visibility to manage cloud resources centrally - even for enterprises with complex organizational structures, hundreds of workgroups, and many projects.
IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.
IAM provides tools to manage resource permissions with minimum fuss and high automation. Map job functions within your company to groups and roles. Users get access only to what they need to get the job done, and admins can easily grant default permissions to entire groups of users.
You can create more granular access control policies for resources based on attributes like device security status, IP address, resource type, and date/time.
Provisioning
The process of deploying an application to the target environment, such as enterprise desktops, mobile devices, or cloud infrastructure.
Provisioning is one of several steps in the cloud services deployment process. The term refers to the allocation of cloud resources in the overall enterprise infrastructure. The provisioning process is governed by objectives, policies, and procedures for deploying services and data.
Provisioning is usually self-service, reflecting one of the NIST cloud characteristics discussed earlier.
Cloud Applications
With cloud applications, the installation and processing occur in the cloud, rather than on local workstations or servers. The cloud may be a private or public network. The applications are accessed over the network. One advantage of cloud applications is a consistent experience for all users, whether they use the same workstation platform or mobile device.
Virtualization
Virtualization allocates hardware resources among one or more VMs. The VMs then have an operating system and one or more applications installed on them. The VM participates on the network as a regular node, providing database, authentication, storage, or other services. VMs have greater access to hardware resources and can be provided with redundancy to increase availability.
VMs are a key component of cloud-based IaaS services, such as AWS EC2 or Azure Virtual Machines.
Containerization
Containerization is a form of virtualization, but it is significantly different than VMs. Containers virtualize at the OS layer, rather than the hardware layer. A container holds a single application and everything it needs to run. This narrow focus allows containers to excel with technologies such as microservices. Containers are very lightweight, share a single OS (usually Linux), and provide a single function. GCP, Azure, and AWS all offer cloud-based container services.
Templates
A virtual machine template is a master copy of a virtual machine that usually includes the guest OS, a set of applications, and a specific VM configuration. Virtual machine templates are used when you need to deploy many VMs and ensure that they are consistent and standardized.
CSPs also use templates to offer flexible but standardized VM configurations to customers.
Post-Deployment Validation
Post-deployment validation ensures that deployed apps or services meet required service levels. Depending on the service, this may be handled through regression or functionality testing.
If possible, automate post-deployment validation for efficiency and consistency.
Auto-Scaling
Auto-scaling takes advantage of automated deployments and virtualizations to provide appropriate resources for the current demand. Resources can be scaled up or down to manage costs. Your organization only pays for the resources that it consumes. Auto-scaling is useful when your resource utilization is difficult to predict or is seasonal.
Resources may be scaled up (more compute power, such as RAM, given to a single virtual server) or scaled out (more virtual servers deployed). When demand is reduced, the resources are reduced, saving money.
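As a toy illustration of the scale-out decision, the sketch below sizes a fleet so average CPU utilization approaches a target; the target and bounds are invented, though managed services (for example, target-tracking scaling policies) apply a similar idea:

    import math

    def desired_instances(current, cpu_utilization, target=0.60,
                          min_n=2, max_n=20):
        """Scale out/in so average CPU utilization approaches the target."""
        # e.g. 4 instances at 90% against a 60% target -> ceil(4*0.9/0.6) = 6
        desired = math.ceil(current * cpu_utilization / target)
        return max(min_n, min(max_n, desired))  # clamp to the allowed range

    print(desired_instances(current=4, cpu_utilization=0.90))  # 6 (scale out)
    print(desired_instances(current=6, cpu_utilization=0.30))  # 3 (scale in)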
Hyper-converged
Hyper-convergence is an IT framework that combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability.
Hyper-converged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking, and they typically run on standard servers. Multiple nodes can be clustered together to create pools of shared compute and storage resources, designed for convenient consumption.
What are the basic steps in the troubleshooting methodology?
- Identify the problem
- Determine the scope of the problem
- Establish a theory of probable cause (question the obvious).
- Test the theory to determine the cause.
- Establish a plan of action
- Implement the solution, or escalate.
- Verify full system functionality
- Implement preventative measures
- Perform a root cause analysis
- Document findings, actions, and outcomes throughout the process
Service Level Agreement
A contract between the provider of a service and a user of that service, specifying the level of service that will be provided.
What are the differing needs when comparing users to businesses?
Users typically are concerned with front-end needs such as applications, network performance, technical support, etc.
Businesses are typically concerned with costs, integration with existing services, compliance, and data storage.
CapEx
The spending of business funds to buy or maintain fixed business assets, such as datacenters, servers, buildings, and so on.
OpEx
The spending of business funds for ongoing business costs, such as utilities, payroll, and so on. Cloud subscriptions are usually an OpEx.
SQL
A programming and query language common to many large-scale database systems.
NoSQL
A non-relational database for storing unstructured data, common with big data technologies.
Big Data
Large stores of unstructured and semi-structured information. In addition to volume, big data is often described as having velocity, as it may involve the capture and analysis of data from high-bandwidth network links.
Business Requirement Documents (BRDs)
The document defining a project’s scope, success factors, constraints, and other information to achieve project goals.
Business analysts help develop BRDs that provide the answers to “What?” and “Why?” questions regarding services and applications, to ensure the business will benefit from projects such as cloud migrations and web app development.
Development Environment
Development is the act of programming an application or other piece of code that executes on a computer.
The development environment is where programmers code projects, detect bugs, manage code versions, and implement code-level security.
In a cloud deployment, this environment may be a combination of PaaS (for actual development work) and IaaS (for testing).
Staging Environment
Staging is a user testing environment that is a copy of the production environment.
The staging environment (which is also the quality assurance environment) is where QA testers validate cloud applications and services. This validation may include security and performance testing. The tests may be automated or manual (or both).
The cloud may provide an IaaS environment for staging. This environment may need to scale significantly as part of performance testing, so costs here may not reflect anticipated costs in the production environment.
Production Environment
Production is an IT environment available to consumers for normal, day-to-day use.
The production environment is available to end-users. Security is in place to protect data and availability.
If the production environment is hosted in the cloud, scalability may be a concern too. Monitoring and availability must be assured here, probably at a higher level than in the other two environments.
Blue-Green Release Model
A variation of the model of separate development, staging, and production environments is the blue-green release model. In this model, two identical environments are available, one labeled “blue” and the other “green”. At any given time, only one of these is hosting the production environment. The idle environment serves as the staging area for the next release of the software or service. Final testing and QA are performed, and users are gradually migrated to the new environment with the canary deployment model.
Conventionally, the blue environment is the current production environment, while the green environment serves as the staging environment for the next release.
Canary Deployment Model
The deployment model that gradually moves users from an old deployment to a new one, rather than an immediate switchover of all users.
The canary model is similar to the blue-green model, except that users are gradually migrated from the older environment to the newer one instead of the complete and immediate migration used with the blue-green model.
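A minimal sketch of the routing decision behind a canary rollout: hashing the user ID pins each user to one environment while roughly the chosen fraction lands on the new release. The environment names and the 5% fraction are illustrative:

    import hashlib

    def route(user_id, canary_fraction=0.05):
        # A stable hash keeps each user pinned to one environment.
        bucket = hashlib.sha256(user_id.encode()).digest()[0] % 100
        return "new" if bucket < canary_fraction * 100 else "old"

    users = [f"user-{i}" for i in range(1000)]
    on_canary = sum(1 for u in users if route(u) == "new")
    print(f"{on_canary} of {len(users)} users on the canary")  # roughly 50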
Vulnerability Testing
An evaluation of a system’s security and ability to meet compliance requirements based on the configuration state of the system, as represented by information collected from the system.
Vulnerability testing empirically identifies, quantifies, and ranks vulnerabilities in networks, operating systems, services, and applications. The goal is to identify each vulnerability so that it can be mitigated.
Penetration Testing
A test that uses active tools and security utilities to evaluate security by simulating an attack on a system. A pen test will verify that a threat exists, then will actively test and bypass security controls, and will finally exploit vulnerabilities on the system.
Such testing begins with an analysis of available resources, looking for older, unpatched, or vulnerable software. The testing also includes an analysis of business practices.
Penetration testing may help meet several strategic goals:
- Compliance
- Identify weaknesses in processes and configurations
- Identify vulnerabilities in software and operating systems.
Performance Testing
A test that shows an application’s ability to function under a given workload in order to confirm performance and scalability.
For cloud services, this information is useful for determining scalability settings. For example, scaling can be done via scale-up (more resources, such as memory, given to a VM) or scale-out (more VMs deployed). Applications may respond better to one or the other of these scaling practices.
Regression Testing
The process of testing an application after changes are made to see if these changes have triggered problems in older areas of code.
Functional Testing
A test method used in QA to confirm that a solution meets the required needs.
Functional testing evaluates whether a system or application meets its specification - does it do what it is supposed to do?
Usability Testing
A testing method where end-users provide direct feedback on requirements and usability.
Usability testing is accomplished by end-users and provides direct feedback on the interface, features, and practical use. Usability testing helps ensure the application or service meets requirements and will actually be useful upon release.
Capacity Planning
Capacity planning is the process of determining and optimizing the resources required to meet the demands of an organization. It involves forecasting future needs, evaluating existing capacities, and making strategic decisions to ensure that sufficient resources are available to support business operations efficiently.
Capacity Planning is concerned with the following questions:
- What is the current baseline or service level?
- What is the current capacity?
- What future needs can we predict, based on upcoming business initiatives?
- Are there consolidation opportunities for services, applications, or data sources?
- What recommendations can be made, and what actions can be taken?
Capacity planning helps organizations avoid overprovisioning or underprovisioning of resources. Overprovisioning can lead to unnecessary expenses and underutilization, while underprovisioning can result in performance degradation and user dissatisfaction. By accurately forecasting capacity needs, organizations can optimize resource allocation, improve system performance, enhance scalability, and ensure that service levels are maintained.
Solution Requirements
Defines the criteria for a solution to a given problem that software or services are expected to meet.
The requirements define what needs to happen without specifying how the solution will be met.
For cloud services, a solution requirements document might specify that content is quickly available to users. The solution might be a content delivery network, but that is selected later in the process.
Business Needs Analysis
The document containing the solutions that must be found in order for the organization to achieve its strategic goals.
Such goals might include decreasing costs, increasing revenue, increasing a customer base, or increasing operational effectiveness.
Many organizations believe they need to migrate to the cloud so they are not left behind technologically, but they don’t have a good understanding of why (or if) the migration is useful or what (if any) benefits they can expect from cloud services. A business needs analysis will identify a specific business problem for which cloud service might provide a solution.
What are the different types of licensing for cloud-based services?
Per User: One license for each user that consumes the software or service.
Socket Based: One license for each CPU that attaches to a socket on the motherboard, regardless of the number of cores the CPU contains.
Core Based: One license for each core in a server’s CPU.
Volume Based: One license that permits a specified number of installations, for example, installation of the software on up to 100 computers.
Perpetual: One-time fee for a license that may include additional support costs; however, the license is good for the life of the software.
Subscription: Periodic cost; usually includes at least basic technical support, maintenance, and possibly upgrades.
System Load
A measure of how busy the system’s CPU is over a period of time. The load is usually reported over three time intervals: one minute, five minutes, and 15 minutes.
While there are usually counters for CPU utilization itself, the system load is better measured by using CPU queue length. That value tracks processes currently being run by the CPU as well as those that are awaiting the CPU’s attention.
Typically, the queue length value should not exceed the number of logical processors (cores) in the system.
Operating systems such as Linux and Windows Server have tools to display the CPU queue length. These tools include “top” in Linux and “Performance Monitor” in Windows Server. Cloud administrators can watch these values on cloud-based VMs to ensure performance expectations are met.
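On a Unix-like VM the same check can be scripted in a few lines of Python (os.getloadavg() is not available on Windows):

    import os

    load_1m, load_5m, load_15m = os.getloadavg()  # the 1/5/15-minute averages
    cores = os.cpu_count()  # number of logical processors

    print(f"load: {load_1m:.2f} (1m) {load_5m:.2f} (5m) {load_15m:.2f} (15m)")
    if load_1m > cores:
        # More runnable processes than logical processors: work is queueing.
        print(f"CPU run queue exceeds the {cores} available logical processors")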
Trend Analysis
The process of detecting patterns within a dataset over time, and using those patterns to make predictions about future events or better understand past events.
The results acquired are used for capacity planning and system scaling. Trend analysis can help the IT staff understand what to move to the cloud and when.
Baselines
The point from which something varies. A configuration baseline is the original or recommended settings for a device while a performance baseline is the originally measured throughput.
Price Estimators
A free tool offered by cloud service providers to estimate the costs of cloud services with various configurations.
They break the costs out into sections to help your organization better understand how changes to resources impact OpEx.
Storage as a Service (STaaS)
A common cloud subscription for managing file storage for both home users and businesses.
Data stored using STaaS is available from any device, adding a significant layer of convenience.
Examples include:
- Dropbox
- MS OneDrive
- iCloud
- Google Drive
- AWS Backup
Virtual Desktop Infrastructure (VDI)
A virtualization implementation that separates the personal computing environment from a user’s physical computer.
The desktops can be accessed from any device over a web-based connection and from any location. IT management of the desktops may be easier and less expensive due to centralization. A new patch or application is deployed only on the centralized server, and any desktop instance launched includes the update.
VDI is a subset of the desktop as a service (DaaS) concept. There are other ways of implementing remote desktops.
Single-Sign On (SSO)
An authentication technology that enables a user to authenticate once and receive authorizations for multiple services.
Users may be assigned preconfigured roles that grant a given level of access to cloud-based resources. These roles are usually created based on the principle of least privilege and help ensure regulatory compliance.
Identity Management (IdM)
A security process that provides identification, authentication, and authorization mechanisms for users, computers, and other entities to work with organizational assets like networks, operating systems, and applications.
The terms IAM and IdM are often used interchangeably.
Compute Resources
In cloud architecture, the resources that provide processing functionality and services, often in the context of an isolated container or VM.
Compute resources encompass CPU, memory, storage, and network allocations. Compute functions rely on computing I/O functionality to accomplish calculation-based tasks. Administrators will create compute solutions to meet specific needs.