Microservices Flashcards

1
Q

Which of the following is not a core component of an HTTP response?

A) Status/Response Code
B) Version
C) Header
D) Time

A

The correct answer is:

D) Time

Time is not a core component of an HTTP response. The core components of an HTTP response are:

Status/Response Code: This is a three-digit number that indicates the outcome of the request. For example, 200 means “OK”, 404 means “Not Found”, etc.

Version: This specifies the HTTP version being used, such as HTTP/1.1 or HTTP/2.

Headers: These are additional pieces of information about the response or the server, such as Content-Type, Content-Length, Server, etc.

Body: This is the actual content of the response (though not listed in the options above). It’s optional and depends on the type of request and response.

While time-related information can be included in HTTP headers (like Date or Last-Modified), it’s not considered a core component of the HTTP response structure itself. The time of the response is typically handled at the server or application level, not as a fundamental part of the HTTP protocol.
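As a sketch, the components above can be seen by splitting a raw HTTP response by hand (illustrative only; real clients should use an HTTP library):

```python
# Minimal sketch: split a raw HTTP/1.1 response into its core components
# (version, status code, headers, body). Not a production parser.
def parse_http_response(raw: str) -> dict:
    head, _, body = raw.partition("\r\n\r\n")   # headers end at the blank line
    lines = head.split("\r\n")
    version, code, reason = lines[0].split(" ", 2)   # status line
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        headers[name] = value
    return {"version": version, "status": int(code),
            "reason": reason, "headers": headers, "body": body}

raw = ("HTTP/1.1 200 OK\r\n"
       "Content-Type: text/plain\r\n"
       "Content-Length: 5\r\n"
       "\r\n"
       "hello")
parsed = parse_http_response(raw)
```

Note how the version, status code, and headers are always present in the status line and header block, while the time appears only if the server chooses to send a Date header.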

2
Q

POST sends data to a particular URI and expects the resource at that URI to deal with the request. The web server at this point can decide what to do with the data in the context of the specified resource.

A) TRUE
B) FALSE

A

A) TRUE

This statement accurately describes the functionality of the HTTP POST method. Here’s why:

POST sends data: The POST method is indeed used to send data to a server.

To a particular URI: POST requests are directed to a specific Uniform Resource Identifier (URI), which identifies the resource that will handle the request.

Expects the resource to deal with the request: The server-side resource identified by the URI is responsible for processing the data sent in the POST request.

Web server decides what to do with the data: The server has the flexibility to determine how to handle the incoming data in the context of the specified resource. This could involve storing the data, updating a database, triggering a process, or any other server-side action appropriate for that resource.

The POST method is commonly used for submitting form data, uploading files, or sending data that will result in a change on the server. It allows for more complex operations than simpler methods like GET, as it can handle larger amounts of data and is not limited by URL length restrictions.
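A minimal sketch of constructing such a POST request with Python’s standard library (the URL is a placeholder; no network call is made):

```python
# Sketch: building a POST request to a particular URI with urllib.
# The resource at that URI decides what to do with the submitted data.
import json
import urllib.request

payload = json.dumps({"name": "Ada"}).encode("utf-8")
req = urllib.request.Request(
    "https://api.example.com/users",              # placeholder URI
    data=payload,                                  # body sent to the server
    headers={"Content-Type": "application/json"},
)
# urllib defaults the method to POST whenever a body is supplied.
```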

3
Q

Which of the following is not a tool or API for developing or testing web APIs?

A) Spring REST web service using MVC
B) Jersey API
C) AXIS
D) MQTT

A

D) MQTT

MQTT (Message Queuing Telemetry Transport) is not a tool or API specifically designed for developing or testing web APIs. Instead, it is a lightweight messaging protocol used primarily for Internet of Things (IoT) applications and machine-to-machine communication.

Let’s briefly explain the other options, which are tools or APIs for developing web services:

Spring REST web service using MVC: This is a framework for building RESTful web services using the Model-View-Controller architectural pattern within the Spring ecosystem.

Jersey API: This is a popular open-source framework for developing RESTful Web Services in Java that provides support for JAX-RS APIs.

AXIS: Apache Axis is a SOAP engine, which is used for developing web services and clients that leverage the SOAP protocol.

MQTT, on the other hand, is designed for low-bandwidth, high-latency networks, making it suitable for IoT and mobile applications where network conditions may be unreliable. While it can be used in conjunction with web services, it is not primarily a tool for developing or testing web APIs.

4
Q

Which markup languages can be used in a RESTful web API?

1) JSON
2) XML
3) None
4) 1&2

A

4) 1&2

Both JSON (JavaScript Object Notation) and XML (eXtensible Markup Language) can be used in RESTful web APIs for data representation and exchange. Here’s a brief explanation of each:

JSON: This is a lightweight, text-based, language-independent data interchange format. It’s easy for humans to read and write, and easy for machines to parse and generate. JSON has become increasingly popular in RESTful APIs due to its simplicity and efficiency.

XML: This is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. While it’s more verbose than JSON, XML is still widely used in many RESTful APIs, especially in enterprise environments.

Both JSON and XML serve the purpose of structuring data in a way that can be easily transmitted over the network and parsed by the receiving application. The choice between JSON and XML often depends on the specific requirements of the project, the preferences of the development team, and the needs of the API consumers.

It’s worth noting that while these are the most common formats, RESTful APIs are not limited to just JSON and XML. Other formats like YAML or even plain text can be used, though they are less common in practice.
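As a sketch, the same resource can be rendered in both formats with the standard library:

```python
# Sketch: one resource represented as JSON and as XML.
import json
import xml.etree.ElementTree as ET

user = {"id": "42", "name": "Ada"}

# JSON representation
json_doc = json.dumps(user)

# XML representation
root = ET.Element("user")
for key, value in user.items():
    ET.SubElement(root, key).text = value
xml_doc = ET.tostring(root, encoding="unicode")
```

Both documents carry the same data; the choice of format is negotiated between client and server (typically via the Accept and Content-Type headers).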

5
Q

How do you version your RESTful web services?
A) Via URL
B) Via HTTP headers.
C) None
D) Both

A

D) Both

Versioning RESTful web services can be done using both URL-based and HTTP header-based approaches. Here’s a brief explanation of each method:

Via URL: This method involves including the version number directly in the URL of the API endpoint. For example:

https://api.example.com/v1/users
https://api.example.com/v2/users

Via HTTP headers: This approach uses custom HTTP headers to specify the version of the API being requested. For example:

Accept-Version: v1
Api-Version: v2

Both methods have their advantages and are commonly used in practice. The choice between them (or using both) often depends on the specific requirements of the API and the preferences of the development team.

URL-based versioning is more visible and easier for API consumers to understand at a glance. However, it can lead to duplication of endpoints as the API evolves.

Header-based versioning keeps the URLs clean and allows for more flexibility, but it may be less intuitive for some developers and requires proper documentation.

By supporting both methods, API providers can offer more flexibility to their users and cater to different preferences or use cases.
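A minimal resolver supporting both schemes might look like this (the Api-Version header name is an assumed convention, not a standard):

```python
# Sketch: resolve an API version from either the URL path or a header.
import re

def resolve_version(path: str, headers: dict) -> str:
    match = re.match(r"^/v(\d+)/", path)         # URL-based: /v1/users
    if match:
        return "v" + match.group(1)
    return headers.get("Api-Version", "v1")      # header-based, default v1
```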

6
Q

Which directive of the Cache-Control header of an HTTP response indicates that the resource is cacheable by any component?

A) Private
B) Public

A

B) Public

The public directive of the Cache-Control header in an HTTP response indicates that the resource is cacheable by any component, including browsers, intermediate caches, and proxies. This directive allows the response to be stored by any cache, regardless of whether the response is normally non-cacheable or restricted.
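A small sketch of the public/private distinction, as a shared-cache check (limited to the two directives discussed here):

```python
# Sketch: may a shared cache (proxy, CDN) store this response?
# Only considers the public/private directives from the question.
def shared_cache_allowed(cache_control: str) -> bool:
    directives = [d.strip().lower() for d in cache_control.split(",")]
    if "private" in directives:
        return False       # only the end user's browser may store it
    return "public" in directives
```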

7
Q

Which of the following use cases are good candidates for microservices:

A) A micro or macro application that serves multiple purposes and performs multiple responsibilities.
B) Backend services of a well-architected, responsive client-side MVC web application (the Backend as a Service (BaaS) scenario) load data on demand in response to the user navigation. In most of these scenarios, data could be coming from multiple logically different data sources.
C) Applications that require Command Query Responsibility Segregation (CQRS)
D) B & C
E) A & C
F) All of the above

A

D) B & C

Here’s why:

B) Backend services of a well-architected, responsive client-side MVC web application (the Backend as a Service (BaaS) scenario) load data on demand in response to the user navigation. In most of these scenarios, data could be coming from multiple logically different data sources.

This scenario is a good candidate for microservices because it involves handling data from multiple sources and requires a modular, scalable backend that can efficiently respond to user interactions.

C) Applications that require Command Query Responsibility Segregation (CQRS)

CQRS is a pattern that separates read and write operations into different models, which aligns well with the microservices architecture. This separation can be effectively managed using microservices, allowing for independent scaling and optimization of read and write operations.

A) A micro or macro application that serves multiple purposes and performs multiple responsibilities

This scenario is less suitable for microservices because it involves a monolithic approach with multiple responsibilities, which contradicts the microservices principle of having small, single-responsibility services.
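The CQRS separation described in option C can be sketched in a few lines (class, method, and field names are illustrative, not from the source):

```python
# Minimal CQRS sketch: commands mutate a write model; queries read a
# separate read model that the command side projects into.
class CourseCommands:
    def __init__(self, store: dict, read_model: dict):
        self._store = store
        self._read = read_model

    def add_course(self, course_id: str, title: str) -> None:
        self._store[course_id] = {"title": title}   # write side
        self._read[course_id] = title               # project into read side

class CourseQueries:
    def __init__(self, read_model: dict):
        self._read = read_model

    def title_of(self, course_id: str) -> str:
        return self._read[course_id]                # read side only

store, read_model = {}, {}
commands = CourseCommands(store, read_model)
queries = CourseQueries(read_model)
commands.add_course("cs101", "Intro to Microservices")
```

In a microservices deployment, the command and query sides can be separate services scaled independently, which is why CQRS aligns well with the architecture.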

8
Q

Which is not a valid HTTP method used in a REST-based architecture?

A) GET
B) PUT
C) REMOVE
D) OPTIONS

A

C) REMOVE

REMOVE is not a valid HTTP method used in REST-based architecture. The standard HTTP methods commonly used in RESTful APIs are:

GET: Used to retrieve resources from the server.
POST: Used to create new resources on the server.
PUT: Used to update existing resources on the server.
DELETE: Used to remove resources from the server.
PATCH: Used to partially modify resources.
OPTIONS: Used to describe the communication options for the target resource.

While REMOVE might seem like a logical choice for deleting resources, the standard HTTP method for this purpose is actually DELETE. The REMOVE method is not part of the HTTP specification and is not used in RESTful APIs.

It’s worth noting that RESTful APIs typically use these standard HTTP methods to perform CRUD (Create, Read, Update, Delete) operations on resources, mapping them to appropriate actions within the application.
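The CRUD mapping above can be written out explicitly (a sketch; PATCH shown as the partial-update alternative):

```python
# The standard REST method set as a CRUD mapping. REMOVE is deliberately
# absent: the HTTP specification defines DELETE for that purpose.
CRUD_TO_METHOD = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",     # or PATCH for partial updates
    "delete": "DELETE",  # not REMOVE
}
VALID_METHODS = {"GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS", "HEAD"}
```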

9
Q

Consider a use case where a university runs different courses in different departments, led by multiple instructors. Each department can have multiple courses, and each course can have multiple instructors. Which of the following is the best way to design API endpoints, as per REST design principles, for listing the instructors attached to a course in the above scenario:

A) GET /departments/{departmentId}/courses/{courseId}?list=instructors
B) GET /departments/{departmentId}/courses/{courseId}/instructors
C) A and B both
D) none of the above

A

B) GET /departments/{departmentId}/courses/{courseId}/instructors

This endpoint follows RESTful design principles by clearly representing the hierarchical relationship between departments, courses, and instructors. It uses a clean and intuitive URL structure that makes it easy to understand the resource being accessed (instructors for a specific course within a specific department).
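As a sketch, the hierarchical route can be matched and its identifiers extracted with a simple pattern (a minimal illustration; real frameworks provide their own routing):

```python
# Sketch: match the nested REST path and extract its identifiers.
import re

PATTERN = re.compile(r"^/departments/(?P<departmentId>[^/]+)"
                     r"/courses/(?P<courseId>[^/]+)/instructors$")

def match_instructors_route(path: str):
    m = PATTERN.match(path)
    return m.groupdict() if m else None   # None when the path doesn't fit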

10
Q

What is not an advantage of statelessness in RESTful web services?

A) Web services need to get extra information in each request and then interpret to get the client’s state in case client interactions are to be taken care of.
B) Web services can treat each method request independently.
C) Web services need not maintain the client’s previous interactions. It simplifies application design.
D) As HTTP is itself a stateless protocol, RESTful Web services work seamlessly with the HTTP protocol.

A

A) Web services need to get extra information in each request and then interpret to get the client’s state in case client interactions are to be taken care of.

This is not an advantage of statelessness in RESTful web services. In fact, it’s more of a challenge or potential drawback. Here’s why:

Statelessness means that each request from a client to the server must contain all the information necessary to understand and process the request. The server does not store any client context between requests.

While this approach simplifies server design and improves scalability, it can lead to increased overhead in situations where client state needs to be maintained.

The need to include extra information in each request to convey the client’s state can result in larger request payloads and potentially more complex request handling on the server side.

The other options (B, C, and D) are indeed advantages of statelessness in RESTful web services:
B) Treating each method request independently simplifies server-side logic and improves scalability.
C) Not maintaining client’s previous interactions simplifies application design and reduces server-side complexity.
D) The stateless nature of RESTful web services aligns well with the stateless HTTP protocol, making them work seamlessly together.
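A stateless handler can be sketched as follows: each request carries everything the server needs, and the server keeps no session between calls (the token value and field names are placeholders):

```python
# Sketch: a stateless request handler. All context (credentials, paging)
# arrives with the request itself; nothing is remembered between calls.
def handle_list_users(request: dict) -> dict:
    token = request["headers"].get("Authorization")
    if token != "Bearer demo-token":          # placeholder credential check
        return {"status": 401}
    page = int(request["params"].get("page", "1"))
    return {"status": 200, "page": page}
```

The cost noted in option A is visible here: every single request must re-send the token and paging state that a stateful server could have remembered.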

11
Q

Which of the following Kubernetes commands is used to publish a resource as a service on a specific port?

A) run
B) expose
C) autoscale
D) rollout

A

B) expose

The ‘expose’ command in Kubernetes is used to publish a resource as a service on a specific port. This command creates a new Service object that routes network traffic to the resource (whether the service is reachable from outside the cluster depends on its type, such as ClusterIP, NodePort, or LoadBalancer).

Here’s a brief explanation of what the ‘expose’ command does:

  1. It creates a new service resource to expose pods to network traffic.
  2. It can expose various types of Kubernetes resources, such as pods, deployments, replication controllers, and services.
  3. The command allows you to specify the port on which the service should be exposed.

For example, a typical use of the ‘expose’ command might look like this:

kubectl expose deployment my-deployment --port=8080 --target-port=80

This command would create a new service that exposes the pods in ‘my-deployment’ on port 8080, routing traffic to the containers on port 80.

The other options mentioned in the question have different purposes:

  • ‘run’ is used to create and run a particular image in a pod.
  • ‘autoscale’ is used to automatically adjust the number of pods in a deployment, replica set, or replication controller based on observed CPU utilization.
  • ‘rollout’ is used to manage the rollout of a resource, such as performing updates to deployments.
12
Q

How do you display custom error pages using RESTful web services?

A) extend StatusService and implement getRepresentation
B) extend StatusService and implement getService

A

A) extend StatusService and implement getRepresentation

To display custom error pages using RESTful web services, you need to extend the StatusService class and implement the getRepresentation method. This approach allows you to customize the error responses returned by your RESTful API.

Here’s a brief explanation of how this works:

  1. The StatusService class is part of the RESTful framework (such as Restlet) and is responsible for handling status codes and error responses.
  2. By extending StatusService, you can override its default behavior and provide your own custom implementation for error handling.
  3. The getRepresentation method is the key method to implement. This method is called when an error occurs, and it allows you to define how the error should be represented to the client.
  4. In your implementation of getRepresentation, you can create custom error pages or responses based on different status codes or error conditions.

For example, a basic implementation might look like this:

```java
public class CustomStatusService extends StatusService {
    @Override
    public Representation getRepresentation(Status status, Request request, Response response) {
        // Create and return a custom error representation based on the status
        return new StringRepresentation("Custom error: " + status.getDescription());
    }
}
```

By implementing this method, you can return tailored error messages, format them in specific ways (e.g., JSON or XML), or even serve custom HTML pages for different types of errors.

The option B) “extend StatusService and implement getService” is incorrect because there is no getService method in the StatusService class that is relevant to customizing error pages.

13
Q

Which of the following are true about microservices monitoring:

A) Heterogeneous technologies may be used to implement microservices, which makes things even more complex. A single monitoring tool may not give all the required monitoring options.
B) Microservices deployment topologies are dynamic, making it impossible to preconfigure servers, instances, and monitoring parameters.
C) Microservice monitoring is typically done with three approaches: (i) Application performance monitoring (APM) (ii) Synthetic monitoring (iii) Real user monitoring (RUM) or user experience monitoring
D) A & B
E) A & C
F) All of the above

A

F) All of the above

All three statements (A, B, and C) are true about microservices monitoring. Let’s break down each point:

A) This statement is correct. Microservices often use diverse technologies, which can complicate monitoring. A single monitoring tool may not be sufficient to cover all aspects of a heterogeneous microservices architecture.

B) This statement is also true. Microservices deployments are typically dynamic, with instances scaling up or down based on demand. This dynamic nature makes it challenging to preconfigure monitoring parameters for specific servers or instances.

C) This statement accurately describes the three main approaches to microservices monitoring:

  1. Application Performance Monitoring (APM): This approach focuses on monitoring the performance and behavior of individual microservices and their interactions.
  2. Synthetic Monitoring: This involves simulating user interactions to proactively identify issues before they affect real users.
  3. Real User Monitoring (RUM) or User Experience Monitoring: This approach involves monitoring actual user interactions with the system to understand performance from the end-user perspective.

These three approaches together provide a comprehensive view of microservices performance and user experience.

Since all three statements are correct and provide valuable insights into the complexities and approaches of microservices monitoring, the most appropriate answer is F) All of the above.

14
Q

Which of the following are true about log management in microservices architecture:

A) To have a centralized logging solution
B) To have a distributed logging solution
C) A log store is part of centralized logging solution and is the place where all log messages are stored for real-time analysis, trending, and so on. Typically, a log store is a NoSQL database, such as HDFS, capable of handling large data volumes.
D) A & C
E) B & C
F) None of the above

A

D) A & C

Let’s break down why these statements are true about log management in microservices architecture:

A) To have a centralized logging solution
This is correct. In a microservices architecture, having a centralized logging solution is crucial. With multiple services generating logs independently, a centralized system allows for easier log aggregation, analysis, and troubleshooting across the entire application ecosystem.

C) A log store is part of centralized logging solution and is the place where all log messages are stored for real-time analysis, trending, and so on. Typically, a log store is a NoSQL database, such as HDFS, capable of handling large data volumes.
This statement is also correct. A centralized log store is an essential component of a centralized logging solution in microservices architecture. It provides a single repository for all log data, enabling efficient searching, analysis, and long-term storage of logs from multiple services. NoSQL databases, and distributed storage systems such as HDFS (strictly a distributed file system rather than a NoSQL database), are commonly used for this purpose due to their ability to handle large volumes of unstructured or semi-structured data.

The combination of a centralized logging approach (A) and a centralized log store (C) forms a comprehensive logging strategy for microservices architectures. This approach allows for:

  1. Easier correlation of events across different services
  2. Simplified troubleshooting and debugging
  3. Improved ability to perform system-wide analysis and generate insights
  4. Better scalability in handling logs from numerous microservices

Option B (To have a distributed logging solution) is not typically recommended for microservices, as it can make log analysis and troubleshooting more complex and time-consuming. While logs may be generated in a distributed manner, the goal is usually to aggregate them into a centralized system for easier management and analysis.

Therefore, the correct answer is D) A & C, as these statements accurately describe key aspects of effective log management in microservices architecture.

15
Q

In Kubernetes, a node is:

A) A tool for starting a kubernetes cluster on a local machine
B) A worker machine
C) A machine that coordinates the scheduling and management of application containers on the cluster
D) A & C
E) A & B
F) All of the above

A

B) A worker machine

In Kubernetes, a node is indeed a worker machine. Here’s a more detailed explanation:

  1. A node in Kubernetes is a physical or virtual machine that runs containerized workloads. It’s part of the Kubernetes cluster and is responsible for running pods, which are the smallest deployable units in Kubernetes.
  2. Nodes are the worker machines in a Kubernetes cluster where containers are deployed, managed, and run. Each node contains the necessary services to run pods, including the container runtime (like Docker), kubelet (the primary node agent), and kube-proxy (for network routing).
  3. Nodes are managed by the control plane, which is responsible for the global decisions about the cluster (such as scheduling) and detecting and responding to cluster events.

It’s important to note that the other options are incorrect or refer to different components in Kubernetes:

A) A tool for starting a Kubernetes cluster on a local machine is typically something like Minikube, not a node.

C) The machine that coordinates scheduling and management is the control plane or master node, not a regular node.

D, E, and F) These combinations are incorrect as they include the incorrect options A and C.

Therefore, the most accurate description of a node in Kubernetes is B) A worker machine.

16
Q

Which types of web service methods are required to be idempotent?

A) GET
B) PUT
C) DELETE
D) B & C

A

D) B & C

In the context of web services, idempotent methods are those that can be called multiple times without producing different results. This means that making the same request multiple times will have the same effect as making it once. The following HTTP methods are considered idempotent:

  • PUT: This method is used to update a resource. If you call PUT multiple times with the same data, the resource will remain in the same state after each call.
  • DELETE: This method is used to delete a resource. If you call DELETE multiple times, the resource will be deleted after the first call, and subsequent calls will have no further effect.

While GET is also idempotent (and additionally safe, since retrieving a resource does not change its state), the question refers to methods that modify resources and are nonetheless required to be idempotent. Therefore, the correct answer is:

D) B & C
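Idempotency of PUT and DELETE can be demonstrated with a tiny resource store (a sketch; the dict stands in for a server-side resource):

```python
# Sketch: applying PUT or DELETE twice leaves the resource store in the
# same state as applying it once -- the definition of idempotency.
def put(store: dict, key: str, value) -> None:
    store[key] = value          # replaces the resource; repeatable safely

def delete(store: dict, key: str) -> None:
    store.pop(key, None)        # deleting a missing key is a no-op

store = {}
put(store, "user/1", {"name": "Ada"})
put(store, "user/1", {"name": "Ada"})   # repeated PUT: same final state
```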

17
Q

How do you achieve low latency in microservices?

A) scale up
B) scale down

A

A) scale up

Here’s an explanation of why scaling up is the appropriate approach to achieve low latency in microservices:

  1. Scaling up (also known as vertical scaling) involves increasing the resources (such as CPU, memory, or storage) of individual nodes or servers running the microservices. This can help improve the performance and processing capacity of each service instance.
  2. By scaling up, you can reduce the processing time for individual requests, which directly contributes to lower latency. More powerful hardware can handle requests faster, reducing the overall response time.
  3. In a microservices architecture, low latency is often achieved through a combination of strategies, including efficient service design, optimized communication protocols, and appropriate scaling techniques. Scaling up is one of these techniques.
  4. While horizontal scaling (adding more instances of a service) is also a common approach in microservices, the question specifically asks about achieving low latency, which is more directly addressed by scaling up individual instances to process requests faster.

It’s worth noting that in practice, a combination of scaling up and scaling out (horizontal scaling) is often used to optimize performance and latency in microservices architectures. However, given the options provided in the question, scaling up is the more appropriate choice for directly addressing latency concerns.

19
Q

Different cluster management solutions for microservices are:

A) Apache Mesos
B) Kubernetes
C) Docker Swarm
D) A & B
E) A & C
F) A, B & C

A

F) A, B & C

All three options mentioned - Apache Mesos, Kubernetes, and Docker Swarm - are indeed cluster management solutions commonly used for microservices architectures. Let’s briefly discuss each:

  1. Apache Mesos: This is an open-source cluster manager that provides efficient resource isolation and sharing across distributed applications or frameworks. It can scale to very large clusters and is used by companies like Twitter and Airbnb.
  2. Kubernetes: Developed by Google, Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It has become one of the most popular choices for managing microservices.
  3. Docker Swarm: This is Docker’s native clustering and scheduling tool for Docker containers. It turns a pool of Docker hosts into a single, virtual host, making it easier to manage a cluster of Docker containers.

Each of these solutions has its own strengths and is suitable for different use cases within the microservices ecosystem. They all provide features essential for managing microservices at scale, such as:

  • Service discovery
  • Load balancing
  • Scaling
  • Self-healing
  • Rolling updates and rollbacks

The choice between these solutions often depends on specific project requirements, existing infrastructure, team expertise, and the scale of operations.

Therefore, the most comprehensive answer is F) A, B & C, as all three are valid and widely used cluster management solutions for microservices architectures.

20
Q

Which of the following is a tool for defining and running multi-container Docker applications?

A) docker-compose
B) docker-machine
C) a and b both
D) None of the above

A

A) docker-compose

Docker Compose is indeed a tool for defining and running multi-container Docker applications. Here’s a brief explanation:

  1. Docker Compose allows you to define a multi-container application in a single file, typically named docker-compose.yml.
  2. It provides a way to configure all the services, networks, and volumes required for your application stack.
  3. With Docker Compose, you can start all the services of your application with a single command, making it easier to manage complex, multi-container setups.
  4. It’s particularly useful for development, testing, and staging environments, as well as CI workflows.

Docker Machine, mentioned in option B, is a different tool. It’s used for creating and managing Docker hosts on various platforms, including virtual machines, cloud providers, and bare metal servers. While it’s a useful Docker tool, it’s not specifically designed for defining and running multi-container applications.

Therefore, the correct answer is A) docker-compose, as it’s the tool specifically designed for defining and running multi-container Docker applications.
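A minimal compose file illustrating the idea (service and image names are placeholders, not from the source):

```yaml
# docker-compose.yml -- two services started together with `docker compose up`
services:
  web:
    image: nginx:alpine          # placeholder image for a front-end proxy
    ports:
      - "8080:80"                # expose container port 80 on host port 8080
    depends_on:
      - api                      # start the api service first
  api:
    image: example/api:latest    # placeholder application image
    environment:
      - LOG_LEVEL=info
```

Running `docker compose up` starts both containers on a shared default network, where they can reach each other by service name.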

21
Q

What are the key principles of a microservice:

A) Deploy independently
B) Culture of Automation
C) Isolate failure
D) Highly Autonomous
E) All of the above

A

E) All of the above

All of the mentioned options are indeed key principles of microservices. Let’s briefly explain each principle:

  1. Deploy independently: This principle allows each microservice to be deployed separately from others, enabling faster and more flexible updates to specific parts of the system without affecting the entire application.
  2. Culture of Automation: Microservices architecture heavily relies on automation for deployment, testing, and monitoring. This principle ensures consistency, reduces human error, and enables rapid and reliable delivery of services.
  3. Isolate failure: Each microservice should be designed to contain failures within itself, preventing them from cascading through the entire system. This principle enhances the overall resilience and reliability of the application.
  4. Highly Autonomous: Microservices should be loosely coupled and highly cohesive, allowing teams to work on individual services with minimal dependencies on other services or teams. This autonomy enables faster development and easier maintenance.

These principles work together to provide the benefits commonly associated with microservices architecture, such as:

  • Improved scalability
  • Enhanced flexibility and agility in development and deployment
  • Better fault isolation and system resilience
  • Easier maintenance and updates
  • Support for polyglot programming and heterogeneous technology stacks

By adhering to all these principles, organizations can fully leverage the advantages of a microservices architecture. Therefore, the most comprehensive and correct answer is E) All of the above, as all the listed items are indeed key principles of microservices.

22
Q

What challenges and risks do organisations often face when switching from a monolithic system to microservices?

1) Service discovery
2) Service replication
3) 1&2
4) None

A

3) 1&2

Both service discovery and service replication are indeed challenges and risks that organizations often face when transitioning from a monolithic system to microservices. Let’s elaborate on each:

  1. Service Discovery:
    • In a microservices architecture, services need to be able to find and communicate with each other dynamically.
    • As services are deployed, scaled, or moved, their network locations can change frequently.
    • Implementing an effective service discovery mechanism becomes crucial to ensure that services can locate and interact with each other reliably.
    • This is a significant challenge compared to monolithic systems where all components are typically deployed together and have fixed locations.
  2. Service Replication:
    • Microservices often need to be replicated for scalability and high availability.
    • Ensuring consistent replication of services across different instances or containers can be complex.
    • Challenges include maintaining data consistency across replicas, load balancing between replicas, and managing the lifecycle of replicated services.
    • This is more complicated than in monolithic systems where scaling often involves replicating the entire application.

These challenges are interconnected and form part of the broader complexities in managing a distributed system:

  • Service discovery is essential for effective load balancing and routing between replicated services.
  • Replication strategies need to work in tandem with service discovery mechanisms to ensure optimal performance and reliability.

While these are not the only challenges in transitioning to microservices (others include data management, inter-service communication, monitoring, and more), service discovery and service replication are indeed significant hurdles that organizations need to address.

Therefore, the correct answer is 3) 1&2, as both service discovery and service replication present challenges and risks when switching from a monolithic system to microservices.
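The service-discovery challenge above can be sketched in code. This is a minimal in-memory registry, with hypothetical names chosen for illustration: instances register their network locations as they come up, and clients look them up by service name instead of hard-coding addresses.

```python
import random

class ServiceRegistry:
    """Minimal in-memory service registry sketch (illustrative only)."""

    def __init__(self):
        self._services = {}  # service name -> list of "host:port" entries

    def register(self, name, address):
        self._services.setdefault(name, []).append(address)

    def deregister(self, name, address):
        self._services.get(name, []).remove(address)

    def lookup(self, name):
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return random.choice(instances)  # naive load balancing across replicas

registry = ServiceRegistry()
registry.register("booking-service", "10.0.0.5:8080")
registry.register("booking-service", "10.0.0.6:8080")  # a second replica
print(registry.lookup("booking-service"))
```

Note how discovery and replication interact even in this toy version: `lookup` only works across replicas because every replica registered itself, which is exactly the coupling between the two challenges described above.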

23
Q

What are the best practices to be followed while designing a secure RESTful web service?

A) Validation
B) Restriction on Methods
C) Validate Malformed JSON
D) All above

A

D) All above

All of the mentioned practices are indeed important for designing a secure RESTful web service. Let’s briefly explain each:

  1. Validation:
    • Input validation is crucial for preventing various types of attacks, such as SQL injection, cross-site scripting (XSS), and buffer overflows.
    • All input data should be validated for type, length, format, and range before processing.
    • This includes validating query parameters, request headers, and request body content.
  2. Restriction on Methods:
    • Limiting HTTP methods to only those necessary for each endpoint helps reduce the attack surface.
    • For example, if an endpoint only needs to support GET requests, other methods like POST, PUT, DELETE should be disabled for that endpoint.
    • This practice follows the principle of least privilege, providing only the minimum required functionality.
  3. Validate Malformed JSON:
    • Proper validation of JSON payloads is essential to prevent attacks that exploit parsing vulnerabilities.
    • Malformed JSON can potentially lead to security issues or application crashes if not handled correctly.
    • Implementing strict JSON parsing and validation helps ensure that only well-formed data is processed.

These practices are part of a broader set of security measures for RESTful web services, which may also include:

  • Authentication and authorization
  • Use of HTTPS for all communications
  • Proper error handling and logging
  • Rate limiting to prevent abuse
  • Security headers implementation
  • Regular security audits and penetration testing

By implementing all of these practices (validation, restriction on methods, and validating malformed JSON), along with other security measures, you can significantly enhance the security of your RESTful web service.

Therefore, the most comprehensive and correct answer is D) All above, as all the listed items are indeed important best practices for designing secure RESTful web services.
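Points 1 and 3 can be combined in a single request-parsing step. The sketch below is illustrative (the field names and limits are assumptions, not from any particular API): it rejects malformed JSON explicitly, then validates type, presence, and range before any processing.

```python
import json

def parse_booking_request(raw_body: str) -> dict:
    # 1) Reject malformed JSON explicitly instead of letting the parser crash.
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed JSON: {exc}") from exc
    if not isinstance(payload, dict):
        raise ValueError("request body must be a JSON object")
    # 2) Validate type, length, and range before processing anything.
    guest = payload.get("guest")
    nights = payload.get("nights")
    if not isinstance(guest, str) or not 1 <= len(guest) <= 100:
        raise ValueError("'guest' must be a string of 1-100 characters")
    if not isinstance(nights, int) or not 1 <= nights <= 30:
        raise ValueError("'nights' must be an integer between 1 and 30")
    return {"guest": guest, "nights": nights}

print(parse_booking_request('{"guest": "Ada", "nights": 3}'))
```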

24
Q

Can you use a GET request instead of PUT to create a resource?

A) Yes
B) No

A

B) No

You should not use a GET request instead of PUT to create a resource in a RESTful web service. Here’s why:

  1. GET requests are designed to retrieve information and should not modify the server’s state. They are considered “safe” and “idempotent” operations, meaning they should not have side effects and can be called multiple times without changing the result.
  2. PUT requests, on the other hand, are specifically designed for updating existing resources or creating new resources at a specific URL if they don’t already exist. PUT is idempotent but not safe, as it modifies the server’s state.
  3. For creating new resources, POST is typically used when the server decides the URL of the new resource, while PUT is used when the client specifies the exact URL where the resource should be created.
  4. Using GET for resource creation would violate the HTTP protocol specifications and RESTful principles. It could lead to unexpected behavior, security vulnerabilities, and confusion for API consumers.
  5. Proper use of HTTP methods is crucial for maintaining the semantic meaning of operations in a RESTful API and ensuring that caching, proxies, and other HTTP infrastructure work correctly.

Therefore, it is not appropriate to use GET instead of PUT (or POST) to create a resource in a RESTful web service. The correct HTTP method should be used for each operation to maintain the integrity and proper functioning of the API.
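The safety and idempotency properties discussed in points 1 and 2 can be written out as a small table, following the HTTP semantics in RFC 7231. The helper function is illustrative: a method may create a resource only if it is not "safe", which is exactly why GET is ruled out.

```python
HTTP_METHOD_PROPERTIES = {
    #  method:   (safe, idempotent)
    "GET":      (True,  True),
    "HEAD":     (True,  True),
    "PUT":      (False, True),
    "DELETE":   (False, True),
    "POST":     (False, False),
    "PATCH":    (False, False),
}

def may_create_resource(method: str) -> bool:
    """A method may create a resource only if it is not 'safe'."""
    safe, _idempotent = HTTP_METHOD_PROPERTIES[method]
    return not safe

print(may_create_resource("GET"))   # GET is safe, so it must not create
print(may_create_resource("PUT"))
```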

25
Q

Which one is not an application integration style?

A) SSO
B) Shared database
C) Batch file transfer
D) Swapping asynchronous messages over a message-oriented middleware

A

A) SSO

Single Sign-On (SSO) is not an application integration style. SSO is a user authentication process that allows a user to access multiple applications with one set of login credentials. It simplifies the user experience by reducing the number of logins required but does not directly relate to how applications integrate or communicate with each other.

The other options are indeed application integration styles:

B) Shared database: This involves multiple applications accessing the same database to share data.

C) Batch file transfer: This method involves transferring files between systems in batches at scheduled intervals.

D) Swapping asynchronous messages over a message-oriented middleware: This style uses messaging systems to exchange data between applications asynchronously, allowing for loose coupling and scalability.
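The messaging style in option D can be sketched with an in-process queue standing in for the middleware (a real broker such as a message queue server would sit between processes; the names here are illustrative). The sender places messages on the queue and returns immediately, while a consumer thread processes them asynchronously.

```python
import queue
import threading

broker = queue.Queue()  # stand-in for the message-oriented middleware
processed = []

def consumer():
    while True:
        message = broker.get()
        if message is None:          # shutdown sentinel for this sketch
            break
        processed.append(f"handled {message}")
        broker.task_done()

worker = threading.Thread(target=consumer)
worker.start()

broker.put("order-created")          # sender does not wait for the consumer
broker.put("invoice-requested")
broker.put(None)
worker.join()
print(processed)
```

The sender and consumer never call each other directly, which is the loose coupling the answer attributes to this integration style.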

26
Q

What is the purpose of HTTP Verb in REST-based webservices?

A) identifies the service name
B) identifies the operation to be performed on the resource.
C) identifies protocol
D) none of the above

A

B) identifies the operation to be performed on the resource.

In REST (Representational State Transfer) architecture, HTTP verbs play a crucial role in defining the actions to be performed on resources. Here’s a more detailed explanation:

  1. HTTP verbs, also known as HTTP methods, are used to specify the desired action when making a request to a RESTful API.
  2. The most commonly used HTTP verbs in REST APIs are:
    • GET: Retrieve a resource
    • POST: Create a new resource
    • PUT: Update an existing resource (or create if it doesn’t exist)
    • DELETE: Remove a resource
    • PATCH: Partially modify an existing resource
  3. By using these verbs, the client communicates to the server what operation it wants to perform on the specified resource.
  4. This approach allows for a clear and standardized way of interacting with resources, making APIs more intuitive and easier to use.
  5. The use of HTTP verbs in this manner is a key principle of RESTful design, promoting a uniform interface for resource manipulation.

The other options are incorrect because:
A) The service name is typically identified by the URL path, not the HTTP verb.
C) The protocol is already known to be HTTP in this context.
D) This option is incorrect as the HTTP verb does have a specific purpose in REST.

Therefore, the correct answer is B) identifies the operation to be performed on the resource, as this accurately describes the purpose of HTTP verbs in REST-based web services.
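The uniform interface described above can be sketched as a dispatcher: the same URI is routed to different operations purely by the HTTP verb. The resource store and status strings here are simplified illustrations, not a real server framework.

```python
books = {}  # resource store sketch: id -> representation

def handle(method: str, resource_id: str, body=None):
    if method == "GET":
        return books.get(resource_id, "404 Not Found")
    if method == "PUT":
        books[resource_id] = body          # create or replace at that URI
        return "200 OK"
    if method == "DELETE":
        books.pop(resource_id, None)
        return "204 No Content"
    return "405 Method Not Allowed"

print(handle("PUT", "42", {"title": "REST in Practice"}))
print(handle("GET", "42"))
print(handle("DELETE", "42"))
print(handle("GET", "42"))
```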

27
Q

Consider a use case where a microservice needs to be developed to cater to different types of clients written in C++, Java, Python, PHP, Ruby, Erlang, and Node.js. The clients expect responses in JSON, binary, and compact binary formats. Which of the following is the preferred way of developing the microservice for the above scenario?

A) Develop microservice using REST style
B) Develop microservice using APACHE THRIFT
C) both A and B
D) None of the above

A

B) Develop microservice using APACHE THRIFT

Here’s why Apache Thrift is the preferred option for this scenario:

  1. Multiple Programming Languages: The scenario mentions clients written in various languages including C++, Java, Python, PHP, Ruby, Erlang, and Node.js. Apache Thrift is designed to support cross-language services development, making it ideal for this multi-language environment.
  2. Multiple Data Formats: The clients expect responses in different formats such as JSON, binary, and compact binary. Apache Thrift supports multiple data serialization formats, including binary and compact binary protocols, which aligns well with the requirements.
  3. Efficiency: For scenarios involving multiple client types and data formats, Thrift can be more efficient than REST, especially when dealing with binary data.
  4. Strong Typing: Thrift uses a strongly-typed interface definition language (IDL), which can help ensure consistency across different language implementations.
  5. Performance: For scenarios requiring high performance, especially with binary data, Thrift can offer better performance compared to REST-based services.

While REST (option A) is widely used and has its advantages, it may not be the best fit for this specific scenario due to the following reasons:

  • REST typically uses JSON for data exchange, which might not be as efficient for binary data formats.
  • Implementing consistent binary and compact binary format support across all the mentioned languages could be more challenging with REST.

Therefore, given the specific requirements of supporting multiple programming languages and various data formats including binary, Apache Thrift (option B) is the more suitable choice for developing this microservice.
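The size difference behind the JSON-versus-binary point can be illustrated without Thrift itself: below, the same record is serialized once as JSON and once as a fixed-layout binary struct (the record fields are hypothetical). Binary protocols like Thrift's compact protocol exploit exactly this kind of saving.

```python
import json
import struct

record = {"user_id": 12345, "score": 250}

json_bytes = json.dumps(record).encode("utf-8")
# '>II' = two big-endian unsigned 32-bit integers, 8 bytes total.
binary_bytes = struct.pack(">II", record["user_id"], record["score"])

print(len(json_bytes), len(binary_bytes))
```

The binary form is both smaller and cheaper to parse, at the cost of requiring both sides to agree on the layout in advance, which is the role Thrift's IDL plays across the listed languages.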

28
Q

Which header of HTTP response provides control over caching?

A) Cache-Control Header
B) Cache_Control Header
C) Cache Directive
D) Last Modified header

A

The correct answer is:

A) Cache-Control Header

The Cache-Control header in an HTTP response is used to provide control over caching. This header allows the server to specify directives for caching mechanisms in both requests and responses. Some common directives include no-cache, no-store, max-age, must-revalidate, and public.

Here’s a brief explanation of each option:

  • A) Cache-Control Header: This is the correct header used to control caching behavior in HTTP responses. It provides various directives to manage how and for how long the response should be cached.
  • B) Cache_Control Header: This is not a valid HTTP header. HTTP headers use hyphens, not underscores.
  • C) Cache Directive: This is not a specific HTTP header but rather a term that refers to the directives used within the Cache-Control header.
  • D) Last Modified header: This header indicates the last time the resource was modified. While it can be used in conjunction with caching mechanisms, it does not provide direct control over caching behavior.

Therefore, the correct answer is A) Cache-Control Header.
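A client or intermediary acts on this header by splitting it into directives. The sketch below parses a Cache-Control value into a dictionary; the example header value is illustrative, though the directive names are standard.

```python
def parse_cache_control(header_value: str) -> dict:
    """Split a Cache-Control header value into its directives."""
    directives = {}
    for part in header_value.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Valueless directives (e.g. "public") are recorded as True.
        directives[name.lower()] = value if value else True
    return directives

cc = parse_cache_control("public, max-age=3600, must-revalidate")
print(cc)
```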

29
Q

Is it true: “PUT” puts a file or resource at a particular URI, and exactly at that URI. If there is already a file or resource at that URI, PUT changes that file or resource. If there is no resource or file there, PUT makes one.

A) TRUE
B) FALSE

A

The correct answer is:

A) TRUE

This statement accurately describes the behavior of the PUT method in HTTP:

  1. Resource Creation or Update: PUT is used to place a resource at a specific URI. If a resource already exists at that URI, PUT will update it. If no resource exists, PUT will create a new one.
  2. Idempotency: PUT is an idempotent method, meaning that multiple identical requests should have the same effect as a single request. This is why it’s suitable for both creation and update operations.
  3. Complete Resource Representation: When using PUT, the client typically sends a complete representation of the resource. This means that if you’re updating a resource, you should send all fields, not just the ones that are changing.
  4. Specified URI: Unlike POST, which can create a resource at a server-determined URI, PUT requires the client to specify the exact URI where the resource should be created or updated.
  5. Overwriting: If a resource already exists at the specified URI, PUT will effectively replace or overwrite that resource with the new data provided in the request.

This behavior of PUT is consistent with RESTful principles and the HTTP specification. It allows for precise control over resource locations and supports both creation and update operations in a predictable manner.

Therefore, the statement is TRUE, accurately describing the behavior of the PUT method in HTTP and RESTful web services.
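The create-or-replace and idempotency behaviour can be shown with an in-memory store standing in for the server (purely illustrative). The first PUT to a URI creates the resource, a second PUT replaces it, and repeating the same PUT leaves the state unchanged.

```python
store = {}  # URI -> resource representation

def put(uri: str, representation):
    created = uri not in store
    store[uri] = representation            # replace if present, create if not
    return "201 Created" if created else "200 OK"

print(put("/files/report.txt", "v1"))      # no resource there yet
print(put("/files/report.txt", "v2"))      # existing resource replaced
print(put("/files/report.txt", "v2"))      # identical repeat: same end state
print(store["/files/report.txt"])
```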

30
Q

Assume a use case where a user clicks on a booking function which calls the booking service. When booking is successful, it sends a message to the customer’s e-mail address, sends a message to the hotel’s booking system, updates the cached inventory, updates the loyalty points system, prepares an invoice, and perhaps more. The user waits until a booking record is created by the Booking Service. On successful completion, a booking event is published and a confirmation message is returned to the user. Subsequently, all other activities happen in parallel. Which communication style is the ideal choice for the above implementation?

A) synchronous
B) asynchronous
C) combination of A and B both
D) none of the above

A

C) combination of A and B both

Here’s why a combination of synchronous and asynchronous communication is ideal for the described use case:

  1. Synchronous Communication:
    • The initial booking function call and the immediate creation of the booking record should be handled synchronously. This allows the user to receive a confirmation message once the booking is successfully created.
    • The user waits for this synchronous operation to complete, ensuring that the booking has been recorded before proceeding.
  2. Asynchronous Communication:
    • Subsequent activities such as sending an email to the customer, notifying the hotel’s booking system, updating the cached inventory, updating the loyalty points system, and preparing an invoice can be handled asynchronously.
    • These tasks can be triggered by publishing a booking event once the booking is confirmed. Each of these tasks can then be processed in parallel without making the user wait for them to complete.

This approach leverages the strengths of both communication styles:
- Synchronous communication ensures that the user gets an immediate response and confirmation of the booking.
- Asynchronous communication allows for efficient and scalable processing of additional tasks without blocking the user or the main booking service.

Therefore, the combination of synchronous and asynchronous communication (option C) is the most suitable choice for the given use case.
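The flow above can be sketched with a thread playing the asynchronous event handlers (names and side effects are illustrative assumptions): the booking call returns a confirmation synchronously, while the published event drives the remaining work in the background.

```python
import queue
import threading

events = queue.Queue()   # stand-in for the event bus
side_effects = []

def async_worker():
    while True:
        event = events.get()
        if event is None:    # shutdown sentinel for this sketch
            break
        # In a real system these would run as separate parallel handlers.
        side_effects.append(f"email sent for {event}")
        side_effects.append(f"loyalty points updated for {event}")

def book(user: str) -> str:
    booking_id = f"booking-{user}"       # synchronous: record is created here
    events.put(booking_id)               # publish the event, do not wait
    return f"confirmed {booking_id}"     # user gets an immediate confirmation

worker = threading.Thread(target=async_worker)
worker.start()
print(book("alice"))
events.put(None)
worker.join()
print(side_effects)
```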

31
Q

MSA focuses on
A) low coupling & high cohesion
B) re-usability

A

A) low coupling & high cohesion

Microservice Architecture (MSA) indeed focuses on low coupling and high cohesion. Here’s a brief explanation of why this is the correct answer:

  1. Low coupling: In MSA, services are designed to be independent and loosely coupled. This means that changes in one service should have minimal impact on other services. Low coupling allows for easier maintenance, updates, and scalability of individual services without affecting the entire system.
  2. High cohesion: MSA promotes the development of services that are highly cohesive, meaning each service is responsible for a specific, well-defined functionality or business capability. This makes services more focused, easier to understand, and maintain.

These principles are fundamental to the microservices approach and offer several benefits:

  • Improved modularity
  • Easier development and maintenance
  • Better scalability
  • Increased flexibility in technology choices for each service
  • Faster deployment and updates

While reusability (option B) can be a benefit of well-designed microservices, it is not the primary focus of MSA. The main goals of MSA are to create a more flexible, scalable, and maintainable architecture through low coupling and high cohesion.

Therefore, the correct answer is A) low coupling & high cohesion, as this accurately describes the key focus of Microservice Architecture.

32
Q

We are developing microservices and using client-side discovery pattern. The client is responsible for determining the network locations of available service instances and load balancing requests across them. The client queries a service registry, which is a database of available service instances. The client then uses a load-balancing algorithm to select one of the available service instances and makes a request. Which of the statements are true for this scenario:

A) Netflix OSS provides a great example of the client-side discovery pattern. Netflix Eureka is a service registry. It provides a REST API for managing service-instance registration and for querying available instances. Netflix Ribbon is an IPC client that works with Eureka to load balance requests across the available service instances.
B) The AWS Elastic Load Balancer (ELB) is an example of a client-side discovery router.
C) HTTP servers and load balancers such as NGINX Plus and NGINX can also be used as client-side discovery load balancers.
D) All of the above

A

A) Netflix OSS provides a great example of the client-side discovery pattern. Netflix Eureka is a service registry. It provides a REST API for managing service-instance registration and for querying available instances. Netflix Ribbon is an IPC client that works with Eureka to load balance requests across the available service instances.

This statement is true for the scenario described. Netflix OSS, specifically Netflix Eureka, is a well-known example of the client-side discovery pattern. Eureka acts as a service registry, and Netflix Ribbon is a client-side IPC library that integrates with Eureka to handle load balancing of requests across available service instances.

The other options are not correct in the context of client-side discovery:

B) The AWS Elastic Load Balancer (ELB) is an example of a client-side discovery router. - This is incorrect because AWS ELB is an example of server-side discovery, not client-side discovery.

C) HTTP servers and load balancers such as NGINX Plus and NGINX can also be used as client-side discovery load balancers. - This is incorrect because NGINX and NGINX Plus are typically used in server-side load balancing scenarios.

Therefore, the correct answer is A).
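The pattern in option A can be sketched as follows, with a dictionary standing in for a registry such as Eureka and a Ribbon-style round-robin choice made by the client itself (addresses and service names are illustrative).

```python
import itertools

registry = {
    "inventory-service": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"],
}
_round_robins = {}

def choose_instance(service: str) -> str:
    """Client-side discovery: query the registry, then load balance locally."""
    instances = registry[service]
    rr = _round_robins.setdefault(service, itertools.cycle(range(len(instances))))
    return instances[next(rr)]

print([choose_instance("inventory-service") for _ in range(4)])
```

The defining trait, as the answer explains, is that the selection logic runs in the client, whereas with server-side discovery (AWS ELB, NGINX) the client sends the request to a router that makes this choice for it.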