AZ Official Certification Test Flashcards
There are several ways to process messages in a Service Bus queue. One way is to run a console application on a virtual machine (VM), but this approach is costly and difficult to maintain. Fortunately, there is another approach to handling a Service Bus queue that is cost-effective, easy to maintain, and includes powerful functionality…
By using Azure Functions…
public static class MyServiceBusFunction
{
    [FunctionName("MyServiceBusFunction")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}
What is Azure Service Bus?
Azure Service Bus is a fully managed message broker that, among other things, allows you to group operations against multiple messaging entities within the scope of a single transaction. There are three core entities in Azure Service Bus:
Queues
Topics
Subscriptions
Queues allow messages to be processed by a single consumer; topics and subscriptions provide a one-to-many communication pattern.
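As a minimal sketch of the difference (assuming the Azure.Messaging.ServiceBus SDK; the connection string and entity names are hypothetical placeholders):

using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");

// Queue: each message is processed by a single consumer.
ServiceBusSender queueSender = client.CreateSender("myqueue");
await queueSender.SendMessageAsync(new ServiceBusMessage("order received"));

// Topic: every subscription receives its own copy of the message.
ServiceBusSender topicSender = client.CreateSender("mytopic");
await topicSender.SendMessageAsync(new ServiceBusMessage("order received"));

// A subscriber reads from its own subscription on the topic.
ServiceBusReceiver subscriber = client.CreateReceiver("mytopic", "subscription-a");
ServiceBusReceivedMessage copy = await subscriber.ReceiveMessageAsync();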
What is Azure Functions?
Azure Functions is a serverless compute service: it lets you run code while maintaining far less code and infrastructure than a conventional application requires. Azure Functions offers several trigger types, such as the HTTP trigger, Timer trigger, and Service Bus trigger; each trigger provides specific functionality.
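For example, a minimal sketch of a Timer trigger (in-process model, matching the other snippets in these cards; the function name and schedule are hypothetical):

[FunctionName("MyTimerFunction")]
public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    // NCRONTAB schedule: fires every five minutes.
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
}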
What does the following code represent?
public static void Run(
    [ServiceBusTrigger("myqueue", AccessRights.Manage)]
    string myQueueItem, ILogger log)
The Run method of an Azure Function that uses a Service Bus queue trigger: it executes whenever a message arrives on the myqueue queue.
The following table explains the properties you can set using the attribute:
QueueName: Name of the queue. Set only if sending queue messages, not for a topic.
TopicName: Name of the topic. Set only if sending topic messages, not for a queue.
Connection: The name of an app setting or setting collection that specifies how to connect to Service Bus. See Connections.
Access: Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.
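For illustration, a minimal sketch that sets these properties explicitly (hedged: this uses the Functions 1.x attribute syntax, since the Access property only exists there; the queue and setting names are hypothetical):

public static void Run(
    [ServiceBusTrigger("myqueue", AccessRights.Listen, Connection = "MyServiceBusConnection")]
    string myQueueItem, ILogger log)
{
    // Listen-only rights: the runtime will not attempt manage operations.
    log.LogInformation($"Received: {myQueueItem}");
}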
What does the following function do?
[FunctionName("BlobTriggerCSharp")]
public static void Run([BlobTrigger("samples-workitems/{name}")] Stream myBlob, string name, ILogger log)
{
    log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
}
A C# function that writes a log entry when a blob is added or updated in the samples-workitems container. The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob.
What does this Dockerfile code mean?
# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env uses the .NET 6.0 SDK image as the base image for the app and names this build stage build-env; in other words, we're telling Docker to build the app with the .NET 6.0 SDK.
WORKDIR /app then sets /app as the working directory inside the image.
More…
The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions. As such, a valid Dockerfile must start with a FROM instruction.
Containers don't run a full OS; they share the kernel of the host OS (typically, the Linux kernel). That's the host operating system layer in the usual container architecture diagram.
They do provide what's called "user space isolation," though. Roughly speaking, this means that every container manages its own copy of the part of the OS that runs in user mode: typically, a Linux distribution such as Ubuntu. In the diagram, that corresponds to the bins/libs layer.
You can use FROM scratch to start from a blank base image, then add all the user-mode pieces on top of the host kernel yourself.
What does this Dockerfile code mean?
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
Explanation
COPY *.csproj ./ copies the project file(s) matching *.csproj into the working directory (./) of the image.
RUN dotnet restore then restores the solution's NuGet dependencies. Copying only the project files first keeps the restore step in its own cached layer, so it re-runs only when a project file changes.
More…
Docker COPY instruction
Due to some functionality issues, Docker had to introduce an additional command for duplicating content – COPY.
Unlike its closely related ADD command, COPY has only one assigned function. Its role is to duplicate files/directories to a specified location in their existing format. This means that it doesn't deal with extracting a compressed file, but rather copies it as-is.
The instruction can be used only for locally stored files. Therefore, you cannot use it with URLs to copy external files to your container.
To use the COPY instruction, follow the basic command format:
COPY <src> <dest>
What does this Dockerfile code mean?
# Copy everything else and build
COPY ../engine/examples ./
RUN dotnet publish -c Release -o out
Explanation:
COPY ../engine/examples ./ copies the contents of ../engine/examples (the example C# sources) into the working directory (./) of the image.
RUN dotnet publish -c Release -o out then builds (compiles) the Release configuration and publishes the output to the out folder.
More…
dotnet publish compiles the application, reads through its dependencies specified in the project file, and publishes the resulting set of files to a directory. The output includes the following assets:
- Intermediate Language (IL) code in an assembly with a .dll extension.
- A .deps.json file that includes all of the dependencies of the project.
- A .runtimeconfig.json file that specifies the shared runtime that the application expects, as well as other configuration options for the runtime (for example, garbage collection type).
- The application's dependencies, which are copied from the NuGet cache into the output folder.
The dotnet publish command's output is ready for deployment to a hosting system (for example, a server, PC, Mac, laptop) for execution. It's the only officially supported way to prepare the application for deployment. Depending on the type of deployment that the project specifies, the hosting system may or may not have the .NET shared runtime installed on it.
What does this Dockerfile code mean?
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
FROM
The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions. As such, a valid Dockerfile must start with a FROM instruction. The image can be any valid image – it is especially easy to start by pulling an image from the Public Repositories.
ARG is the only instruction that may precede FROM in the Dockerfile. See Understand how ARG and FROM interact.
FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another. Simply make a note of the last image ID output by the commit before each new FROM instruction. Each FROM instruction clears any state created by previous instructions.
Optionally a name can be given to a new build stage by adding AS name to the FROM instruction. The name can be used in subsequent FROM and COPY --from=<name> instructions to refer to the image built in this stage.
The tag or digest values are optional. If you omit either of them, the builder assumes a latest tag by default. The builder returns an error if it cannot find the tag value.
WORKDIR
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction. For example:
WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd
The output of the final pwd command in this Dockerfile would be /a/b/c.
The WORKDIR instruction can resolve environment variables previously set using ENV. You can only use environment variables explicitly set in the Dockerfile. For example:
ENV DIRPATH=/path
WORKDIR $DIRPATH/$DIRNAME
RUN pwd
The output of the final pwd command in this Dockerfile would be /path/$DIRNAME
If not specified, the default working directory is /. In practice, if you aren’t building a Dockerfile from scratch (FROM scratch), the WORKDIR may likely be set by the base image you’re using.
Therefore, to avoid unintended operations in unknown directories, it is best practice to set your WORKDIR explicitly.
Docker COPY vs ADD - which is best?
Docker COPY vs ADD
Why was there a need to add a new, similar command?
The fact that ADD had so many functions proved to be problematic in practice, as it behaved extremely unpredictably. The result of such unreliable behavior often came down to copying when you wanted to extract, and extracting when you wanted to copy.
Docker couldn't completely replace the command due to its many existing usages. To avoid breaking backward compatibility, the safest option was to add the COPY command – a less diverse yet more reliable command.
Which to Use (Best Practices)
Considering the circumstances in which the COPY command was introduced, it is evident that keeping ADD was a matter of necessity. Docker released an official document outlining best practices for writing Dockerfiles, which explicitly advises against using the ADD command.
Docker’s official documentation notes that COPY should always be the go-to instruction as it is more transparent than ADD.
If you need to copy from the local build context into a container, stick to using COPY.
The Docker team also strongly discourages using ADD to download and copy a package from a URL. Instead, it’s safer and more efficient to use wget or curl within a RUN command. By doing so, you avoid creating an additional image layer and save space.
Let’s say you want to download a compressed package from a URL, extract the content, and clean up the archive.
Instead of using ADD and running the following command:
ADD http://source.file/package.file.tar.gz /tmp/
RUN tar -xzf /tmp/package.file.tar.gz \
&& make -C /tmp/package.file \
&& rm /tmp/package.file.tar.gz
You should use:
RUN curl http://source.file/package.file.tar.gz \
| tar -xzC /tmp/ \
&& make -C /tmp/package.file
Explain the Dockerfile ADD instruction.
Docker ADD Command
Let’s start by noting that the ADD command is older than COPY. Since the launch of the Docker platform, the ADD instruction has been part of its list of commands.
The command copies files/directories to a file system of the specified container.
The basic syntax for the ADD command is:
ADD <src> <dest>
It includes the source you want to copy (<src>) followed by the destination where you want to store it (<dest>). If the source is a directory, ADD copies everything inside of it (including file system metadata).
For instance, if the file is locally available and you want to add it to the directory of an image, you type:
ADD /source/file/path /destination/path
ADD can also copy files from a URL. It can download an external file and copy it to the wanted destination. For example:
ADD http://source.file/url /destination/path
An additional feature is that it copies compressed files, automatically extracting the content in the given destination. This feature only applies to locally stored compressed files/directories.
Type in the source and where you want the command to extract the content as follows:
ADD source.file.tar.gz /temp
Bear in mind that you cannot download and extract a compressed file/directory from a URL. The command does not unpack external packages when copying them to the local filesystem.
Note: The ADD command extracts a compressed source only if it is in a recognized compression format, which is determined solely from the contents of the file (not from the file name). The recognized compression formats include identity, gzip, bzip2, and xz.
What does this Dockerfile code mean?
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
COPY
COPY has two forms:
COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>",... "<dest>"]
The latter form is required for paths containing whitespace.
Note
The --chown feature is only supported on Dockerfiles used to build Linux containers, and will not work on Windows containers. Since user and group ownership concepts do not translate between Linux and Windows, the use of /etc/passwd and /etc/group for translating user and group names to IDs restricts this feature to only be viable for Linux OS-based containers.
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
Multiple <src> resources may be specified, but the paths of files and directories will be interpreted as relative to the source of the context of the build.
Each <src> may contain wildcards and matching will be done using Go's filepath.Match rules. For example:
To add all files starting with "hom":
COPY hom* /mydir/
In the example below, ? is replaced with any single character, e.g., "home.txt".
COPY hom?.txt /mydir/
The <dest> is an absolute path, or a path relative to WORKDIR, into which the source will be copied inside the destination container.
The example below uses a relative path, and adds "test.txt" to <WORKDIR>/relativeDir/:
COPY test.txt relativeDir/
Whereas this example uses an absolute path, and adds "test.txt" to /absoluteDir/:
COPY test.txt /absoluteDir/
When copying files or directories that contain special characters (such as [ and ]), you need to escape those paths following the Golang rules to prevent them from being treated as a matching pattern. For example, to copy a file named arr[0].txt, use the following:
COPY arr[[]0].txt /mydir/
All new files and directories are created with a UID and GID of 0, unless the optional --chown flag specifies a given username, groupname, or UID/GID combination to request specific ownership of the copied content. The format of the --chown flag allows for either username and groupname strings or direct integer UID and GID in any combination. Providing a username without groupname or a UID without GID will use the same numeric UID as the GID. If a username or groupname is provided, the container's root filesystem /etc/passwd and /etc/group files will be used to perform the translation from name to integer UID or GID respectively. The following examples show valid definitions for the --chown flag:
COPY --chown=55:mygroup files* /somedir/
COPY --chown=bin files* /somedir/
COPY --chown=1 files* /somedir/
COPY --chown=10:11 files* /somedir/
ENTRYPOINT
ENTRYPOINT has two forms:
The exec form, which is the preferred form:
ENTRYPOINT ["executable", "param1", "param2"]
The shell form:
ENTRYPOINT command param1 param2
An ENTRYPOINT allows you to configure a container that will run as an executable.
For example, the following starts nginx with its default content, listening on port 80:
docker run -i -t --rm -p 80:80 nginx
Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point. You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.
The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container's PID 1 - and will not receive Unix signals - so your executable will not receive a SIGTERM from docker stop <container>.
Only the last ENTRYPOINT instruction in the Dockerfile will have an effect.
Exec form ENTRYPOINT example
You can use the exec form of ENTRYPOINT to set fairly stable default commands and arguments and then use either form of CMD to set additional defaults that are more likely to be changed.
FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]
Differences between Azure Service Bus queues and Storage queues
Azure supports two types of queue mechanisms: Storage queues and Service Bus queues.
Storage queues are part of the Azure Storage infrastructure. They allow you to store large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously. For more information, see What are Azure Storage queues.
Service Bus queues are part of a broader Azure messaging infrastructure that supports queuing, publish/subscribe, and more advanced integration patterns. They’re designed to integrate applications or application components that may span multiple communication protocols, data contracts, trust domains, or network environments. For more information about Service Bus queues/topics/subscriptions, see the Service Bus queues, topics, and subscriptions.
Consider using Storage queues
As a solution architect/developer, you should consider using Storage queues when:
Your application must store over 80 gigabytes of messages in a queue.
Your application wants to track progress for processing a message in the queue. It's useful if the worker processing a message crashes: another worker can then use that information to continue from where the prior worker left off (see the sketch after this list).
You require server-side logs of all of the transactions executed against your queues.
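A minimal sketch of the progress-tracking point above (assuming the Azure.Storage.Queues SDK; the connection string and queue name are hypothetical placeholders):

using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

var queue = new QueueClient("<storage-connection-string>", "work-items");
QueueMessage msg = (await queue.ReceiveMessagesAsync(maxMessages: 1)).Value[0];

// Record progress by rewriting the message body. If this worker crashes,
// the message becomes visible again and the next worker sees the checkpoint.
await queue.UpdateMessageAsync(msg.MessageId, msg.PopReceipt,
    "step 2 of 3 complete", visibilityTimeout: TimeSpan.FromMinutes(5));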
Consider using Service Bus queues
As a solution architect/developer, you should consider using Service Bus queues when:
Your solution needs to receive messages without having to poll the queue. With Service Bus, you can achieve this by using a long-polling receive operation over the TCP-based protocols that Service Bus supports.
Your solution requires the queue to provide a guaranteed first-in-first-out (FIFO) ordered delivery.
Your solution needs to support automatic duplicate detection.
You want your application to process messages as parallel long-running streams (messages are associated with a stream using the session ID property on the message). In this model, each node in the consuming application competes for streams, as opposed to messages. When a stream is given to a consuming node, the node can examine the state of the application stream state using transactions.
Your solution requires transactional behavior and atomicity when sending or receiving multiple messages from a queue.
Your application handles messages that can exceed 64 KB but won’t likely approach the 256 KB or 1 MB limit, depending on the chosen service tier (although Service Bus queues can handle messages up to 100 MB).
You deal with a requirement to provide a role-based access model to the queues, and different rights/permissions for senders and receivers. For more information, see the following articles:
Authenticate with managed identities
Authenticate from an application
Your queue size won’t grow larger than 80 GB.
You want to use the AMQP 1.0 standards-based messaging protocol. For more information about AMQP, see Service Bus AMQP Overview.
You envision an eventual migration from queue-based point-to-point communication to a publish-subscribe messaging pattern. This pattern enables integration of additional receivers (subscribers). Each receiver receives independent copies of either some or all messages sent to the queue.
Your messaging solution needs to support the “At-Most-Once” and the “At-Least-Once” delivery guarantees without the need for you to build the additional infrastructure components.
Your solution needs to publish and consume batches of messages.
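As a minimal sketch of the last point (batched publish and consume, assuming the Azure.Messaging.ServiceBus SDK; the connection string and queue name are hypothetical):

using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusSender sender = client.CreateSender("orders");

// Publish a batch: multiple messages go out in a single operation.
using ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();
batch.TryAddMessage(new ServiceBusMessage("order 1"));
batch.TryAddMessage(new ServiceBusMessage("order 2"));
await sender.SendMessagesAsync(batch);

// Consume a batch: up to 10 messages in one receive call.
ServiceBusReceiver receiver = client.CreateReceiver("orders");
IReadOnlyList<ServiceBusReceivedMessage> messages =
    await receiver.ReceiveMessagesAsync(maxMessages: 10);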
Difference between Service Bus and Event Hubs
Event vs. message services
There’s an important distinction to note between services that deliver an event and services that deliver a message.
Event
An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn’t have the data that triggered the event. For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn’t have the file itself. Discrete events are ideal for serverless solutions that need to scale.
Series events report a condition and are analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
Message
A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.
Difference between Event Hubs & Service Bus
To an external publisher or receiver, Service Bus and Event Hubs can look very similar, which is what makes it difficult to understand the differences between the two and when to use which.
- Event Hubs focuses on event streaming where Service Bus is more of a traditional messaging broker.
- Service Bus is used as the backbone to connect applications running in the cloud to other applications or services and to transfer data between them, whereas Event Hubs is more concerned with receiving a massive volume of data with high throughput and low latency.
- Event Hubs decouples multiple event-producers from event-receivers whereas Service Bus aims to decouple applications.
- Service Bus messaging supports a per-message "Time to Live" property, whereas Event Hubs has a default retention period of 7 days.
- Service Bus has the concept of a message session, which allows relating messages based on their session-id property; Event Hubs does not.
- With Service Bus, messages are pulled by a receiver and cannot be processed again, whereas an Event Hubs message can be ingested by multiple receivers.
- Service Bus uses the terminology of queues and topics, whereas Event Hubs uses the terminology of partitions.
Use this loose general rule of thumb.
SOMETHING HAS HAPPENED – Event Hubs
DO SOMETHING or GIVE ME SOMETHING – Service Bus
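To make the rule of thumb concrete, a minimal "something has happened" sketch (assuming the Azure.Messaging.EventHubs SDK; the connection string and hub name are hypothetical), to contrast with the Service Bus senders shown earlier:

using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Publish telemetry events to an event stream; any number of consumer
// groups can read them independently later.
await using var producer = new EventHubProducerClient("<event-hubs-connection-string>", "telemetry");
using EventDataBatch batch = await producer.CreateBatchAsync();
batch.TryAdd(new EventData(BinaryData.FromString("reading: 21.5")));
batch.TryAdd(new EventData(BinaryData.FromString("reading: 21.7")));
await producer.SendAsync(batch);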
Explain Azure Data Redundancy in the primary region
Redundancy in the primary region
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers two options for how your data is replicated in the primary region:
Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option, but is not recommended for applications requiring high availability or durability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability, Microsoft recommends using ZRS in the primary region, and also replicating to a secondary region.
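A small hedged sketch for checking which replication option an existing account uses (assuming the Azure.Storage.Blobs SDK; the connection string is a placeholder):

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var service = new BlobServiceClient("<storage-connection-string>");
AccountInfo info = await service.GetAccountInfoAsync();
// SkuName reports the replication choice, e.g. StandardLrs or StandardZrs.
Console.WriteLine(info.SkuName);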