Module 6 Flashcards
Deployment Environments
- Development environment
- Testing environment
- Staging environment
- Production environment
Code passes through a series of environments on its way to users, and as it does, its quality and reliability increase. These environments are self-contained and intended to mimic the ultimate environment in which the code will ‘live’. Typically, large organizations use a four-tier structure: development, testing, staging, and production.
The development environment is where you do your coding.
When you believe your code is finished, you may move on to a second environment that has been set aside for testing the code, though when working on small projects, the development and testing environments are often combined. This testing environment should be structurally similar to the final production environment, even if it is on a much smaller scale. The testing environment often includes automated testing tools such as Jenkins.
After the code has been tested, it moves to the staging environment. Staging should be as close as possible to the actual production environment, so that the code can undergo final acceptance testing in a realistic environment. Some organizations maintain two matching production environments, one of which hosts the current release of an application, the other standing by to receive a new release.
Finally, the code arrives at the production environment, where end users interact with it. At this point it has been tested multiple times and should be error-free.
Deployment models - bare metal
The most familiar and most basic way to deploy software is to install it directly on the target computer, or the “bare metal.” Besides being the simplest method, bare metal deployment has other advantages, such as giving software direct access to the operating system and hardware.
- useful when you need access to specialized hardware, or for High Performance Computing (HPC) applications where every bit of speed counts
- a disadvantage where you need to isolate different workloads from each other
- also inflexible in terms of resources: each workload gets the whole machine, no more and no less
More commonly, bare metal is now used as infrastructure to host virtualization (hypervisors) and cloud frameworks (orchestrators for virtual compute, storage, and networking resources).
Deployment models - virtual machines
One way to solve the flexibility and isolation problems is through the use of Virtual Machines, or VMs. A virtual machine is like a computer within your computer; it has its own computing power, network interfaces, and storage.
A hypervisor is software that creates and manages VMs. Hypervisors are generally classified as either ‘Type 1’, which run directly on the physical hardware (‘bare metal’), and ‘Type 2’, which run, usually as an application, under an existing operating system.
If you had three workloads you wanted to isolate from each other, you could create three separate virtual machines on one bare metal server.
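As a sketch, on a Linux host running the KVM hypervisor with the virt-install tool, creating one of those VMs might look like this (the name, sizes, and installer image are illustrative):
virt-install --name workload1 --memory 2048 --vcpus 2 --disk size=20 --cdrom /tmp/ubuntu-22.04.iso
Repeating the command as workload2 and workload3 yields three fully isolated machines on one physical server.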
Because they are so much like physical machines, VMs can host a wide range of software, even legacy software. Newer application environments, like containers, may not be “real machine-like” enough to host applications that are not written with their limitations in mind.
Deployment models - container-based infrastructure
Moving up the abstraction ladder from VMs, you will find containers. Software to create and manage or orchestrate containers is available from Docker, AWS (Elastic Container Service), Microsoft (Azure Container Service), and others.
- containers are designed to start up quickly and as such do not include much underlying software infrastructure
A container shares the operating system of the host machine and uses container-specific binaries and libraries.
- a container typically represents an application or a group of applications
- unlike a VM, which has its own complete operating system, a container includes only the parts of the operating system it needs
Containers also solve the problem that arises when multiple applications need different versions of the same library in order to run: each container carries its own copies, as the example below shows. Containers are useful, too, because of the ecosystem of tools that has grown up around them.
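A quick way to see this isolation, assuming Docker is installed (the image tags are illustrative): two containers on the same host each bring their own Python runtime, so both versions coexist without conflict.
docker run --rm python:3.8 python --version
docker run --rm python:3.12 python --version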
Containers are also the foundation of cloud native computing, in which applications are generally stateless. This statelessness makes it possible for any instance of a particular container to handle a request. Add to this the emphasis cloud computing places on services, and serverless computing becomes possible.
Deployment models - serverless computing
Let’s start with this important point: to say that applications are “serverless” is great for marketing, but it is not technically true. Of course your application is running on a server. It is just running on a server that you do not control, and do not have to think about. Hence the name “serverless”.
Serverless computing takes advantage of a modern trend toward applications that are built around services: the application calls another program or workload to accomplish a particular task. The result is an environment where applications are made available strictly on an “as needed” basis.
Step 1. Create your application.
Step 2. Deploy your application as a container, so it can run easily in any appropriate environment.
Step 3. Deploy that container to a serverless computing provider. This deployment includes a specification of how long the function should remain inactive before it is spun down.
Step 4. When necessary, your application calls the function.
Step 5. The provider spins up an instance of the container, performs the needed task, and returns the result.
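As a concrete sketch of Step 3, assuming Google Cloud Run as the serverless provider (the service name, project, and region are placeholders), deploying the container is a single command, after which the provider spins instances up and down with demand:
gcloud run deploy sample-app --image gcr.io/my-project/sample-app --region us-central1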
What is important to notice here is that if the serverless app is not needed, it is not running, and you are not getting charged for it.
Because the capacity goes up and down with need, it is generally referred to as “elastic” rather than “scalable.”
- the huge advantage is that you pay only for resources while they are in use, rather than for a server running all the time; however, you have zero control over the host machine, so serverless may not be appropriate from a security perspective
Types of Infrastructure - On-premises
Technically speaking, “on-premises” means any system that is literally within the confines of your building. In this case we are talking about traditional data centers that house individual machines which are provisioned for applications, rather than clouds, external or otherwise.
Operating a traditional on-premises data center requires servers, storage devices, and network equipment to be ordered, received, assembled in racks (“racked and stacked”), moved to a location, and cabled for power and data. This equipment must be provided with environmental services such as power protection, cooling, and fire prevention. Servers then need to be logically configured for their roles, operating systems and software must be installed, and all of it needs to be maintained and monitored.
All of this infrastructure work takes time and effort. In addition, scaling an application typically means moving it to a larger server, which makes scaling up or down a major event. These problems can be solved by moving to a cloud-based solution.
Types of Infrastructure - Private cloud
A cloud is a system that provides self-service provisioning for compute resources, networking, and storage. A cloud includes a control plane that handles your requests: you can create a new VM, attach a storage volume, or even create new network and compute resources.
Clouds provide self-service access to computing resources, such as VMs, containers, and even bare metal. This means that users can log into a dashboard or use the command line to spin up new resources themselves, rather than waiting for IT to resolve a ticket. These platforms are sometimes referred to as Infrastructure-as-a-Service (IaaS). Common private cloud platforms include VMware (proprietary), OpenStack (open source), and Kubernetes (a container orchestration framework). Underlying hardware infrastructure for clouds may be provided by conventional networked bare-metal servers, or by more advanced, managed bare-metal or “hyperconverged” physical infrastructure solutions.
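For instance, on an OpenStack private cloud with the standard openstack CLI (the flavor, image, and network names below are placeholders for whatever your environment provides), self-service provisioning looks like this:
openstack server create --flavor m1.small --image ubuntu-22.04 --network private-net my-vm
openstack volume create --size 10 my-volume
openstack server add volume my-vm my-volume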
What distinguishes a private cloud from other types of clouds is that all resources within the cloud are under the control of your organization.
The advantage of a private cloud is that you have complete control over where it is located, which is important in situations where there are specific compliance regulations, and that you do not typically have to worry about other workloads on the system. On the downside, you do have to have an operations team that can manage the cloud and keep it running.
Types of Infrastructure - Public cloud
A public cloud is essentially the same as a private cloud, but it is managed by a public cloud provider. Public clouds can also run systems such as OpenStack or Kubernetes, or they can be specific proprietary clouds such as Amazon Web Services or Azure.
Public cloud customers may share resources with other organizations: your VM may run on the same host as a VM belonging to someone else. Alternatively, public cloud providers may provide customers with dedicated infrastructure. Most provide several geographically-separate cloud ‘regions’ in which workloads can be hosted.
Public clouds can be useful because you do not have to pay for hardware you are not going to use: you can scale up virtually indefinitely while the load requires it, then scale back down when traffic is slow. Because you pay only for the resources you actually use, this can be the most economical solution; your application never runs out of resources, and you do not pay for idle capacity. You also do not have to worry about maintaining or operating the hardware; the public cloud provider handles that. In practice, however, when your cloud reaches a certain size, the cost advantages tend to disappear, and you are better off with a private cloud.
There is one disadvantage of public cloud: because you share the cloud with other users, you may have to contend with situations in which other workloads take up more than their share of resources, a problem made worse when the cloud provider overcommits its hardware.
Types of Infrastructure - Hybrid cloud
As you might guess, hybrid cloud is the combination of two different types of clouds. Typically, hybrid cloud is used to bridge a private cloud and a public cloud within a single application. For example, you might have an application that runs on your private cloud but “bursts” to the public cloud if it runs out of resources. In this way, you save money by not overbuying for your private cloud, but still have the resources when you need them.
You might also go in the other direction, and have an application that primarily runs on the public cloud, but uses resources in the private cloud for security or control. For example, you might have a web application that serves most of its content from the public cloud, but stores user information in a database within the private cloud.
Hybrid cloud is often confused with multi-cloud, in which an organization uses multiple clouds for different purposes. What distinguishes hybrid cloud is the use of more than one cloud within a single application. As such, a hybrid cloud application has to be much more aware of its environment than an application that lives in a single cloud.
Types of Infrastructure - Edge cloud
Edge cloud is gaining popularity because of the growth of the Internet of Things (IoT). Connected devices such as cameras, autonomous vehicles, and even smartphones increasingly benefit from computing power that exists closer to them on the network. The two primary reasons closer computing power helps IoT devices are speed (a nearby cloud can respond with lower latency) and bandwidth (less data must travel across the wider network).
To solve both of these problems, an edge cloud moves computing closer to where it is needed. Instead of transactions making their way from an end user in Cleveland, to the main cloud in Oregon, there may be an intermediary cloud, an edge cloud, in Cleveland. The edge cloud processes the data or transaction. It then either sends a response back to the client, or does preliminary analysis of the data and sends the results on to a regional cloud that may be farther away.
There is nothing “special” about edge clouds. They are just typical clouds. What makes them “edge” is where they are, and that they are connected to each other. There is one more thing about edge clouds, however. Because they often run on much smaller hardware than “typical” clouds, they may be more resource-constrained. In addition, edge cloud hardware must be reliable, efficient in terms of power usage, and preferably remotely manageable, because it may be located in a remote area.
Docker
- Namespaces
- Control groups
- Union File Systems
The most popular way to containerize an application is to deploy it as a Docker container. A container is a way of encapsulating everything you need to run your application, so that it can easily be deployed in a variety of environments. Docker is a way of creating and running that container.
Specifically, Docker is a format that wraps a number of different technologies to create what we know today as containers. These technologies are:
- Namespaces - These isolate different parts of the running container. For example, the process itself is isolated in the pid (process ID) namespace, the filesystem is isolated in the mnt (mount) namespace, and networking is isolated in the net namespace.
- Control groups - cgroups are a standard Linux kernel feature that enables the system to limit the resources, such as RAM or storage, used by an application.
- Union File Systems - UnionFS file systems are built layer by layer, combining and reusing layers from multiple sources to form the container’s filesystem.
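You can see cgroups at work through Docker’s resource flags; for example, this command (assuming Docker is installed) caps a container at 256 MB of RAM and one CPU:
docker run --rm --memory 256m --cpus 1 ubuntu echo "resource-limited hello"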
A Docker image is a set of read-only files which has no state. A Docker Image contains source code, libraries, and other dependencies needed to run an application. A Docker container is the run-time instance of a Docker image. You can have many running containers of the same Docker image. A Docker image is like a recipe for a cake, and you can make as many cakes (Docker containers) as you wish. Images can in turn be stored in registries such as Docker Hub.
A simplified version of the workflow of creating a container looks like this:
Step 1. Either create a new image using docker build or pull a copy of an existing image from a registry using docker pull. (Depending on the circumstances, this step is optional. See step 3.)
Step 2. Run a container based on the image using docker run or docker container create.
Step 3. The Docker daemon checks to see if it has a local copy of the image. If it does not, it pulls the image from the registry.
Step 4. The Docker daemon creates a container based on the image and, if docker run was used, logs into it and executes the requested command.
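You can watch this workflow end to end with hello-world, a tiny image Docker publishes for exactly this purpose: the daemon finds no local copy, pulls the image from Docker Hub, then creates the container and runs it.
docker run hello-world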
As you can see, if you are going to create a container-based deployment of the sample application, you are going to have to create an image. To do that, you need a Dockerfile.
What is a Dockerfile?
If you have used a language such as C that requires you to compile your code, you may be familiar with the concept of a “makefile.” This is the file that the make utility uses to compile and build all the pieces of the application.
That is what a Dockerfile does for Docker. It is a simple text file, named Dockerfile. It defines the steps that the docker build command needs to take to create an image that can then be used to create the target container.
You can create a very simple Dockerfile that builds an Ubuntu container. Use the cat command to create a file named Dockerfile in your current directory, add the single line FROM ubuntu, and press Ctrl+D to save and exit, as shown below.
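For example:
cat > Dockerfile
FROM ubuntu
(Press Ctrl+D to close cat and save the file.)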
That is all it takes, just that one line. Now you can use the docker build command to build the image, as shown in the following example. The -t option names the image. Notice the period (.) at the end of the command, which tells Docker to use the current directory as the build context. Use docker build --help to see all the available options.
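A minimal build command (the image name myubuntu is just an example):
docker build -t myubuntu .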
Enter the command docker images to see your image in the list of images. Now that you have the image, use the docker run command to run it. You are now in a bash shell INSIDE the container you created. Change to the home directory and enter ls to see that it is empty and ready for use. Enter exit to leave the Docker container and return to your DEVASC VM main operating system.
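Continuing the example with the myubuntu image (the official ubuntu image drops you into a bash shell by default):
docker images
docker run -it myubuntu
cd ~    # now inside the container; the home directory is empty
ls
exit    # return to the DEVASC VM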
Anatomy of a Dockerfile
FROM python
WORKDIR /home/ubuntu
COPY ./sample-app.py /home/ubuntu/.
RUN pip install flask
CMD python /home/ubuntu/sample-app.py
EXPOSE 8080
Of course, if all you could do with a Dockerfile was to start a clean operating system, that would be useful, but what you need is a way to start with a template and build from there.
In the Dockerfile above, the commands are explained as follows:
- The FROM command specifies the base image for the new image. Here it pulls the default Python image from Docker Hub, a Debian Linux-based image with the latest version of Python installed.
- The WORKDIR command tells Docker to use /home/ubuntu as the working directory.
- The COPY command tells Docker to copy the sample-app.py file from the Dockerfile’s directory (the build context) into /home/ubuntu.
- The RUN command lets you run commands directly in the container during the build. In this example, it installs Flask, the Python web framework that supports the sample app as a web app.
- The CMD command will start the server when you run the actual container. Here, you use the python command to run the sample-app.py inside the container.
- The EXPOSE command tells Docker that you want to expose port 8080. Note that this is the port on which Flask is listening. If you have configured your web server to listen somewhere else (such as HTTPS requests on port 443), this is the place to note it.
Use the docker build command to build the image.
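For example, naming the image sample-app-image (the name used in the docker run example later in these notes):
docker build -t sample-app-image .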
Docker goes through each step in the Dockerfile, starting with the base image, Python. If this image does not exist on your system, Docker pulls it from the registry. The default registry is Docker Hub. However, in a secure environment, you might set up your own registry of trusted container images. Notice that the image is actually a number of different images layered on top of each other, just as you are layering your own commands on top of the base image. Between steps such as executing a command, Docker actually creates a new container, then saves that container as an intermediate image, a new layer. In fact, you can do the same thing yourself: create a container, make the changes you want, then save that container as a new image.
Enter the command docker images to view a list of images.
Start a Docker Container Locally
Now that the image is created, use it to create a new container and actually do some work by entering the docker run command. In this case, several parameters are specified. The -d parameter is short for --detach and says you want to run it in the background. The -P parameter tells Docker to publish it on the ports that you exposed (in this case, 8080).
- you can name the container using the --name option, e.g. docker run -d -P --name pythontest sample-app-image
Notice also that, even though the container is listening on port 8080, that is just an internal port. Docker has specified an external port, in this case 32774, that will forward to that internal port. This lets you run multiple containers that listen on the same port without having conflicts. If you want to pull up your sample app website, you can use the public IP address for the host server and that port.
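To find the external port Docker chose (32774 in this example), you can ask Docker directly:
docker ps
docker port pythontest 8080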
When your container is running, you can log into it just as you would log into any physical or virtual host, using the docker exec command from the host on which the container is running.
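For example, to open a bash shell inside the running pythontest container:
docker exec -it pythontest /bin/bash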
To stop and remove a running container, you can refer to it by name.
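For example:
docker stop pythontest
docker rm pythontest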
Save a Docker Image to a Registry
Now that you know how to create and use your image, it is time to make it available for other people to use. One way to do this is by storing it in an image registry.
By default, Docker uses the Docker Hub registry, though you can create and use your own registry. You will need to start by logging in to the registry.
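For Docker Hub, this is a single command that prompts for your username and password:
docker login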
Next, you commit a running container instance of your image. In this example, the pythontest container is running. Commit it with the docker commit command.
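For example:
docker commit pythontest
Docker responds with the ID of the new image, which you use when tagging it in the next step.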
Next, use the docker tag command to give the image you committed a tag. The tag takes the following form:
<repository>/<imagename>:<tag>
The first part, the repository, is usually the username of the account storing the image. Next is the image name, and finally the optional tag. (Remember, if you do not specify a tag, it defaults to latest.) Now the image is ready to be pushed to the repository.
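For example (myusername, the v1 tag, and the image ID are placeholders; use your own username and the ID returned by docker commit):
docker tag 1e2b3c4d5f6a myusername/sample-app-image:v1
docker push myusername/sample-app-image:v1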