Microservices Flashcards
What is a microservice and how does it differ from a monolithic architecture?
Microservices and monolithic architecture are two distinct software design paradigms, each with its unique traits.
A monolithic architecture consolidates all software components into a single program, whereas a microservices architecture divides the application into separate, self-contained services.
Microservices offer several advantages but also have their own challenges, requiring careful consideration in the software design process.
Key Differences
Decomposition: Monolithic applications are not easily separable, housing all functionality in a single codebase. Microservices are modular, each responsible for a specific set of tasks.
Deployment Unit: The entire monolithic application is packaged and deployed as a single unit. In contrast, microservices are deployed individually.
Communication: In a monolith, modules communicate through in-process calls. Microservices use standard communication protocols like HTTP/REST or message brokers.
Data Management: A monolith typically has a single database, whereas microservices may use multiple databases.
Scaling: Monoliths scale by replicating the entire application. Microservices enable fine-grained scaling, allowing specific parts to scale independently.
Technology Stack: While a monolithic app often uses a single technology stack, microservices can employ a diverse set of technologies.
Development Team: Monoliths can be developed by a single team, whereas microservices are often the domain of distributed teams.
When to Use Microservices
Microservices are advantageous for certain types of projects:
Complex Systems: They are beneficial when developing complex, business-critical applications where modularity is crucial.
Scalability: If you anticipate varying scaling needs across different functions or services, microservices might be the best pick.
Technology Diversification: When specific functions are better suited to certain technologies or when you want to use the best tools for unique tasks.
Autonomous Teams: For bigger organizations with multiple teams that need to work independently.
Can you describe the principles behind the microservices architecture?
Microservices is an architectural style that structures an application as a collection of small, loosely coupled services. Each service is self-contained, focused on a specific business goal, and can be developed, deployed, and maintained independently.
Core Principles of Microservices
Codebase & Infrastructure as a Service
Each microservice manages its own codebase and data storage. It uses its own independent infrastructure, ranging from the number of virtual machines to persistence layers, messaging systems, or even data models.
Antifragility
Rather than merely resisting failure, microservices are designed to respond to it favorably: failures are expected and exercised, and the system adapts to become more resilient in the face of breakdowns.
Ownership
Development teams are responsible for the entire lifecycle of their respective microservices - from development and testing to deployment, updates, and scaling.
Design for Failure
Microservices are built to anticipate and handle failures at various levels, ensuring the graceful degradation of the system.
Decentralization
Services are autonomous, making their own decisions without requiring overarching governance. This agility permits independent deployments and ensures that changes in one service do not disrupt others.
Built Around Business Capability
Each service is crafted to provide specific and well-defined business capabilities. This focus increases development speed and makes it easier to comprehend and maintain the system.
Loose Coupling
Services are related through well-defined contracts, mainly acting as providers of specific functionalities. This reduces dependencies and integration challenges.
Directed Transparency
Each service exposes a well-defined API, sharing only the necessary information. Teams can independently choose the best technology stack, avoiding the need for a one-size-fits-all solution.
Infrastructure Automation
Deployments, scaling, and configuration undergo automation, preserving development velocity and freeing teams from manual, error-prone tasks.
Organizational Alignment
Teams are structured around services, aligning with Conway’s Law to support the Microservices architecture and promote efficiency.
Continuous Small Revisions
Services are frequently and iteratively improved, aiming for continual enhancement over major, infrequent overhauls.
Discoverability
Services make their features, capabilities, and interfaces discoverable via well-documented APIs, fostering an environment of interoperability.
The “DevOps” Connection
The DevOps method for software development merges software development (Dev) with software operation (Ops). It focuses on shortening the system’s software development life cycle and providing consistent delivery. The “you build it, you run it” approach, where developers are also responsible for operating their software in production, is often associated with both Microservices and DevOps.
Code Example: Loan Approval Microservice
Here is the sample Java code:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/loan")
public class LoanService {

    @Autowired
    private CreditCheckService creditCheckService;

    @PostMapping("/apply")
    public ResponseEntity<String> applyForLoan(@RequestBody Customer customer) {
        // Approve the loan only if the credit check passes
        if (creditCheckService.isEligible(customer)) {
            return ResponseEntity.ok("Congratulations! Your loan is approved.");
        }
        return ResponseEntity.status(HttpStatus.FORBIDDEN)
                .body("We regret to inform you that your credit rating did not meet our criteria.");
    }
}
What are the main benefits of using microservices?
Let’s look at the main advantages of using microservices:
Key Benefits
1. Scalability: Each microservice can be scaled independently, which is particularly valuable in dynamic, going-viral, or resource-intensive scenarios.
2. Flexibility: Decoupling services means one service's issues or updates generally won't affect others, promoting agility.
3. Technology Diversity: Different services can be built using varied languages or frameworks. While this adds some complexity, it allows for best-tool-for-the-job selection.
4. Improved Fault Tolerance: If a microservice fails, it ideally doesn't bring down the entire system, making the system more resilient.
5. Agile Development: Microservices mesh well with Agile, enabling teams to iterate independently, ship updates faster, and adapt to changing requirements more swiftly.
6. Easier Maintenance: No more unwieldy, monolithic codebases to navigate. With microservices, teams can focus on smaller, specific codebases, thereby enabling more targeted maintenance.
7. Tailored Security Measures: Security policies and mechanisms can be tailored to individual services, potentially reducing the overall attack surface.
8. Improved Team Dynamics: Thanks to reduced codebase ownership and the interoperability of services, smaller, focused teams can thrive and communicate more efficiently.
What are some of the challenges you might face when designing a microservices architecture?
When designing a microservices architecture, you are likely to encounter the following challenges:
Data Management
Database Per Microservice: Ensuring that each microservice has its own database can be logistically complex. Data relationships and consistency might be hard to maintain.
Eventual Consistency: Different microservices could be using data that might not be instantly synchronized. Dealing with eventual consistency can raise complications in some scenarios.
Service Communication
Service Synchronization: Maintaining synchronous communication between numerous services can result in a more tightly coupled and less scalable architecture.
Service Discovery: As the number of services grows, discovering and properly routing requests to the appropriate service becomes more challenging.
Security and Access Control
Decentralized Security: Implementing consistent security measures, such as access control and authentication, across all microservices can be intricate.
Externalized Authorization: When security-related decisions are taken outside the service, coherent and efficient integration is crucial.
Infrastructure Management
Server Deployment: Managing numerous server deployments entails additional overhead and might increase the risk of discrepancies among them.
Monitoring Complexity: With each microservice operating independently, gauging the collective functionality of the system necessitates more extensive monitoring capabilities.
Business Logic Distribution
Domain and Data Coupling: Microservices, especially those representing different business domains, may find it challenging to process complex business transactions that require data and logic from several services.
Cross-Cutting Concerns Duplication: Ensuring a uniform application of cross-cutting concerns like logging or caching across microservices is non-trivial.
Scalability
Fine-Grained Scalability: While microservices allow selective scale-up, guaranteeing uniform performance across varying scales might be troublesome.
Service Bottlenecks: Certain services might be hit more frequently, potentially becoming bottlenecks.
Development and Testing
Integration Testing: Interactions between numerous microservices in real-world scenarios might be challenging to replicate in testing environments.
Consistency and Atomicity
System-Wide Transactions: Ensuring atomic operations across multiple microservices is complex and might conflict with certain microservice principles.
Data Integrity: Without a centralized database, governing data integrity could be more intricate, especially for related sets of data that multiple microservices handle.
Challenges in Updating and Versioning
Deployment Orchestration: Coordinated updates or rollbacks, particularly in hybrid environments, can present difficulties.
Version Compatibility: Assuring that multiple, potentially differently-versioned microservices can still work together smoothly.
Team Structure and Organizational Alignment
Siloed Teams: Without a unified architectural vision or seamless communication, different teams developing diverse microservices might make decisions that are not entirely compatible with the overall system.
Documentation and Onboarding: With an extensive number of microservices, their functionalities, interfaces, and usage need to be well-documented for efficient onboarding and upkeep.
How do microservices communicate with each other?
Microservices often work together and need efficient communication mechanisms.
Communication Patterns
Synchronous: The caller sends a request and waits for the response, typically over HTTP/REST. This is simpler to implement but can lead to tighter coupling between services. It suits request/response workflows where the caller needs an immediate answer.
Asynchronous: Messages are placed on a queue or broker and delivered even when the receiving service is unavailable or under high load; the sender does not wait for a response, which keeps services decoupled. Use asynchronous communication for unpredictable or lengthy processes.
Data Streaming: For continuous data needs or applications that work with high-frequency data, such as stock prices or real-time analytics, this method is highly effective. Kafka or AWS Kinesis are examples of this pattern.
Inter-Service Communication Methods
RESTful APIs: Simple and clean, they utilize HTTP's request-response mechanism. Ideal for stateless, cacheable resource interactions.
Messaging: Uses a message broker that services communicate with over a messaging protocol (such as AMQP or MQTT). This approach offers decoupling, and the broker ensures message delivery. Common tools include RabbitMQ, Apache Kafka, or AWS SQS.
Service Mesh and Sidecars: A sidecar proxy, typically running in a container, works alongside each service. They assist in monitoring, load balancing, and authorization.
Remote Procedure Call (RPC): It involves a client and server where the client sends requests to the server with a defined set of parameters. They’re efficient but not perfectly decoupled.
Event-Based Communication: Here, services interact by producing and consuming events. A service can publish events into a shared event bus, and other services can subscribe to these events and act accordingly. This pattern supports decoupling and scalability. Common tools include Apache Kafka, AWS SNS, and GCP Pub/Sub.
Database per Service: It involves each microservice owning and managing its database. If a service A needs data from service B, it uses B’s API to retrieve or manipulate data.
API Gateway: Acts as a single entry point for services and consumers. Netscaler, HAProxy, and Kong are popular API Gateway tools.
Code Example: REST API
Here is the Python code:
import requests

# Make a GET request to retrieve a list of users.
response = requests.get('https://my-api/users')
users = response.json()
Code Example: gRPC
Here is the Python code:
import grpc

# Import the generated server and client classes.
import users_pb2
import users_pb2_grpc

# Create a gRPC channel and a stub.
channel = grpc.insecure_channel('localhost:50051')
stub = users_pb2_grpc.UserStub(channel)

# Call the remote procedure.
response = stub.GetUsers(users_pb2.UserRequest())
What is the best way to implement microservices?
For inter-service communication, the choice often comes down to RESTful APIs versus RPC frameworks such as gRPC, weighed against criteria like the following:
Ease of Development: If you need to onboard a large number of developers or have strict timelines, RESTful APIs are often easier to work with.
Performance: gRPC and other RPC approaches are superior to RESTful APIs in terms of speed, making them ideal when performance is paramount.
Type Safety: gRPC, due to its use of Protocol Buffers, ensures better type safety at the cost of being slightly less human-readable when compared to RESTful JSON payloads.
Portability: RESTful APIs, being HTTP-based, are more portable across platforms and languages. On the other hand, gRPC is tailored more towards microservices built with Protobufs.
What is Domain-Driven Design (DDD) and how is it related to microservices?
Domain-Driven Design (DDD) provides a model for designing and structuring microservices around specific business domains. It helps teams reduce complexity and align better with domain experts.
Context Boundaries
In DDD, a Bounded Context establishes clear boundaries for a domain model, focusing on a specific domain of knowledge. These boundaries help microservice teams to operate autonomously, evolving their services within a set context.
Ubiquitous Language
Ubiquitous Language is a shared vocabulary that unites developers and domain experts. Microservices within a Bounded Context are built around this common language, facilitating clear communication and a deeper domain understanding.
Strong Consistency and Relational Databases
Within a Bounded Context, microservices share a consistent data model, often dealing with strong consistency and using relational databases. This cohesion simplifies integrity checks and data relationships.
Code Example
PaymentService Microservice:
@Entity
public class Payment {
@Id
private String paymentId;
private String orderId;
// … other fields and methods
}
OrderService Microservice:
@Entity
public class Order {
@Id
private String orderId;
// … other fields and methods
}
// Application-layer operation within the OrderService microservice
public void updateOrderWithPayment(String orderId, String paymentId) {
    // Update the order using the payment reference
}
OrderDetailsService Microservice:
@Entity
public class OrderDetail {
@EmbeddedId
private OrderDetailId orderDetailId;
private String orderId;
private String itemId;
private int quantity;
// … other fields and methods
}
How would you decompose a monolithic application into microservices?
Decomposing a monolithic application into microservices involves breaking down a larger piece of software into smaller, interconnected services. This process allows for greater development agility, flexibility, and often better scalability.
Key Considerations
Domain-Driven Design (DDD): Microservices should be independently deployable and manageable pieces of the application, typically built around distinct business areas or domains.
Database Strategy: Each microservice should ideally own its data storage; a shared database is sometimes used as a pragmatic or transitional compromise, but it couples the services and should not be the default.
Communication: The microservices should interact with each other in a well-coordinated manner. Two common models are Direct communication via HTTP APIs or using events for asynchronous communication.
Steps to Decompose
Identify Domains: Break down the application into major business areas or domains.
Data Segregation: Determine the entities and relationships within each microservice. Use techniques like database-per-service or shared-database.
Service Boundary: Define the boundaries of each microservice - what data and functionality does it control?
Define Contracts: Intelligently design the APIs or events used for communication between microservices.
Decouple Services: The services should be loosely coupled, to the maximum extent possible. This is especially important in scenarios where you have independent development teams working on the various microservices.
Code Example: Decomposition with DDD
Here is the Java code:
@Entity
@Table(name = "product")
public class Product {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
private double price;
//…
}
@Entity
@Table(name = "order_item")
public class OrderItem {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private Long productId;
private Integer quantity;
private double price;
//…
}
public interface OrderService {
Order createOrder(String customerId, List<OrderItem> items);
List<Order> getOrdersForCustomer(String customerId);
//…
}
@RestController
@RequestMapping("/orders")
public class OrderController {
//…
@PostMapping("/")
public ResponseEntity<?> createOrder(@RequestBody Map<String, Object> order) {
//…
}
//…
}
In this example, a Product microservice could manage products and expose its services through RESTful endpoints, and an Order microservice could manage orders. The two microservices would communicate indirectly through APIs, following DDD principles. Each would have its own database schema and set of tables.
What strategies can be employed to manage transactions across multiple microservices?
Managing transactions across multiple microservices presents certain challenges, primarily due to the principles of independence and isolation that microservices are designed around. However, there are both traditional and modern strategies to handle multi-service transactions, each with its own benefits and trade-offs.
Traditional Approaches
Two-Phase Commit (2PC)
Two-Phase Commit is a transaction management protocol in which a global coordinator communicates with all participating services to ensure that the transaction can either be committed globally or rolled back across all involved services.
While it offers transactional integrity, 2PC has seen reduced popularity due to its potential for blocking scenarios, performance overhead, and the difficulties associated with its management in distributed ecosystems.
Three-Phase Commit (3PC)
A direct evolution of the 2PC model, 3PC provides a more robust alternative. By incorporating an extra phase, it tries to overcome some of the drawbacks of 2PC, such as the potential for indefinite blocking.
While 3PC is an improvement over 2PC in this regard, it’s not without its complexities and can still introduce performance penalties and maintenance overhead.
Transactional Outbox
The Transactional Outbox pattern involves using messaging systems as a mechanism to coordinate transactions across multiple microservices. In this approach:
The primary DB records changes in the outbox.
An event message is added to a message broker.
Subscribers read the message and execute the corresponding local transaction.
The transactional outbox offers a high degree of decoupling but provides eventual rather than the strong consistency of 2PC/3PC.
SAGA Pattern
A saga (the term comes from the long, episodic tales of Norse literature) is a sequence of local transactions, each initiated within a microservice. In a distributed setting, a saga is a coordination mechanism between these local transactions, aiming for eventual consistency.
With SAGA, you trade immediate consistency for long-term consistency. If something goes wrong during the saga, you need to define a strategy for compensation actions to bring the overall system back to a consistent state, hence the “epic journey” metaphor.
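To make the compensation idea concrete, here is a minimal, illustrative Python sketch of an orchestrated saga; the order-workflow steps and their compensations are hypothetical placeholders rather than any specific framework's API:
# Minimal orchestrated-saga sketch: each step has a compensating action
# that undoes its effect if a later step fails.
class SagaStep:
    def __init__(self, action, compensation):
        self.action = action
        self.compensation = compensation

def run_saga(steps, context):
    completed = []
    try:
        for step in steps:
            step.action(context)          # local transaction in one service
            completed.append(step)
    except Exception:
        # Roll back already-completed steps in reverse order.
        for step in reversed(completed):
            step.compensation(context)
        raise

# Hypothetical local transactions for an order workflow.
def reserve_stock(ctx):   ctx["stock_reserved"] = True
def release_stock(ctx):   ctx["stock_reserved"] = False
def charge_payment(ctx):  ctx["charged"] = True
def refund_payment(ctx):  ctx["charged"] = False

order_saga = [
    SagaStep(reserve_stock, release_stock),
    SagaStep(charge_payment, refund_payment),
]
run_saga(order_saga, {})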
Modern Approaches
Acknowledged Unreliability
The philosophy here is one of embracing a partially reliable set of distributed systems. Instead of trying to guarantee strong consistency across services, the focus is on managing and mitigating inconsistencies and failures through robust service designs and effective monitoring.
DDD and Bounded Contexts
When microservices are designed using Domain-Driven Design (DDD), each microservice focuses on a specific business domain, or “Bounded Context.” By doing so, services tend to be more independent, leading to fewer cross-service transactions in the first place.
This approach promotes a strong focus on clear service boundaries and effective communication and collaboration between the stakeholders who understand those boundaries and the associated service behavior.
CQRS and Event Sourcing
The Command Query Responsibility Segregation (CQRS) pattern involves separating read and write operations. This clear separation of concerns reduces the need for cross-service transactions.
With Event Sourcing, each state change is represented as an event, providing a reliable mechanism to propagate changes to multiple services in an asynchronous and non-blocking manner.
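As a toy illustration of event sourcing, the following Python sketch rebuilds state by replaying stored events; the event names and the in-memory event list are assumptions made for the example:
# Event-sourcing sketch: state is rebuilt by replaying stored events.
events = []  # in a real system this would be a durable event store

def append_event(event_type, data):
    events.append({"type": event_type, "data": data})

def rebuild_balance():
    balance = 0
    for event in events:
        if event["type"] == "Deposited":
            balance += event["data"]["amount"]
        elif event["type"] == "Withdrawn":
            balance -= event["data"]["amount"]
    return balance

append_event("Deposited", {"amount": 100})
append_event("Withdrawn", {"amount": 30})
print(rebuild_balance())  # 70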
What is crucial here is that the proliferation of these patterns and concepts in modern software and system design is a direct result of the unique needs and opportunities presented by new paradigms such as microservices. Instead of retrofitting old ways of thinking into a new environment, the focus is on adapting notions of consistency and reliability to the realities of modern, decentralized, and highly dynamic systems.
Explain the concept of ‘Bounded Context’ in the microservices architecture.
In the context of microservices architecture, the principle of “Bounded Context” emphasizes the need to segment a complex business domain into distinct and manageable sections.
It suggests a partitioning based on business context and clearly defined responsibilities to enable individual teams to develop and manage independent microservices.
Core Concepts
Ubiquitous Language
Each microservice and its bounded context must have a clearly defined “domain language” that is comprehensible to all the members of the team and aligns with the business context.
Context Boundaries
A bounded context delineates the scope within which a particular model or concept is operating, ensuring that the model is consistent and meaningful within that context.
It establishes clear boundaries, acting as a bridge between domain models, so that inside the context a specific language or model is used.
For instance, one context might use the notion of "sales leads" to represent potential customers, while the sales context might define a lead as an initial contact or expression of interest in a product.
Data Consistency
Data consistency and integrity are local to the bounded context. Each context's data is safeguarded with transactions, and data is propagated carefully only to other contexts it has a relationship with.
It ensures that the understanding of data by each service or bounded context is relevant and up-to-date.
Example: In an e-commerce system, the product catalog context is responsible for maintaining product data consistency.
Teams & Autonomy
Each bounded context is maintained and evolved by a specific team responsible for understanding the business logic, making it self-governing and allowing teams to work independently without needing to understand the logic of other contexts.
Relationship with Source Code
The concept of a bounded context is implemented and realized within the source code using Domain-Driven Design (DDD) principles. Each bounded context typically has its own codebase.
Code Example: Bounded Context and Ubiquitous Language
Here is the Tic Tac Toe game Model:
// Very specific to the context of the game
public enum PlayerSymbol {
NOUGHT, CROSS
}
// Specific to the game context
public class TicTacToeBoard {
private PlayerSymbol[][] board;
// Methods to manipulate board
}
// This event is purely for the game context to indicate the game has a winner.
public class GameWonEvent {
private PlayerSymbol winner;
// getter for winner
}
How do you handle failure in a microservice?
In a microservices architecture, multiple smaller components, or microservices, work together to deliver an application. Consequently, a failure in one of the services can have downstream effects, potentially leading to system-wide failure. To address this, several best practices and resilience mechanisms are implemented.
Best Practices for Handling Failure
Fault Isolation
Circuit Breaker Pattern: Implement a circuit breaker that isolates the failing service from the rest of the system. This way, the failure doesn’t propagate and affect other services.
Bulkhead Pattern: Use resource pools and set limits on the resources each service can consume. This limits the impact of failure, ensuring that it doesn’t exhaust the whole system’s resources.
Error Recovery
Retry Strategy: Implement a retry mechanism that enables services to recover from transient errors. However, it's important to set a maximum number of attempts and a backoff policy to prevent overload (see the sketch after this list).
Failsafe Services: Set up backup systems so that essential functionalities are not lost. For example, while one service is down, you can temporarily switch to a reduced-functionality mode or data backup to avoid complete system failure.
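Here is a minimal Python sketch of a retry with capped exponential backoff and jitter; the call_service callable stands in for any flaky downstream call:
import time
import random

def call_with_retries(call_service, max_attempts=5, base_delay=0.1, max_delay=2.0):
    # Retry a flaky call with capped exponential backoff and jitter.
    for attempt in range(1, max_attempts + 1):
        try:
            return call_service()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(delay + random.uniform(0, delay))  # add jitter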
Resilience Mechanisms
Auto-scaling
Resource Reallocation: Implement auto-scaling to dynamically adjust resources based on load and performance metrics, ensuring the system is capable of handling the current demand.
Data Integrity
Eventual Consistency: In asynchronous communication between services, strive for eventual consistency of data to keep services decoupled. This ensures data integrity is maintained even when a service is temporarily unavailable.
Transaction Management: Use a two-phase commit mechanism to ensure atomicity of transactions across multiple microservices. However, this approach can introduce performance bottlenecks.
Data Management
Data Redundancy: Introduce redundancy (data duplication) in services that need access to the same data, ensuring data availability if one of the services fails.
Caching: Implement data caching to reduce the frequency of data requests, thereby lessening the impact of failure in the data source.
Data Sharding: Distribute data across multiple databases or data stores in a partitioned manner. This reduces the risk of data loss due to a single point of failure, such as a database server outage.
Communication
Versioning: Maintain backward compatibility using API versioning. This ensures that services can communicate with older versions if the newer one is experiencing issues.
Message Queues: Decouple services using a message queuing system, which can help with load leveling and buffering of traffic to handle temporary fluctuations in demand.
Health Checks: Regularly monitor the health of microservices to identify and isolate services that are malfunctioning or underperforming.
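A health check is often just a lightweight HTTP endpoint that an orchestrator or load balancer polls. Here is a minimal Flask sketch; the endpoint path and port are illustrative:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/health')
def health():
    # In practice, verify critical dependencies (database, broker, ...) here.
    return jsonify(status="UP"), 200

if __name__ == '__main__':
    app.run(port=8080)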
Additional Best Practices
Self-Healing Components: Develop microservices capable of self-diagnosing and recovering from transient faults, decreasing reliance on external mechanisms for recovery.
Graceful Degradation: When a service fails or becomes overloaded, gracefully degrade the quality of service provided to users.
Continuous Monitoring: Regularly monitor all microservices and alert teams in real-time when there is a deviation from the expected behavior.
Failure Isolation: Localize and contain the impact of failures, and provide backup operations and data whenever possible to provide ongoing service.
What design patterns are commonly used in microservices architectures?
Several design patterns lend themselves well to microservices architectures, offering best practices in their design and implementation.
Common Design Patterns
API Gateway: A single entry point for clients, responsible for routing requests to the appropriate microservice.
Circuit Breaker: A fault-tolerance pattern that automatically switches from a failing service to a fallback to prevent service cascading failures.
Service Registry: Microservices register their network location, making it possible to discover and interact with them dynamically. This is essential in a dynamic environment where services frequently start and stop or migrate to new hosts.
Service Discovery: The ability for a microservice to locate and invoke another through its endpoint, typically facilitated by a service registry or through an intermediary like a load balancer.
Bulkhead: The concept of isolating different parts of a system from each other to prevent the failure of one from affecting the others.
Event Sourcing: Instead of persisting the current state of an entity, the system persists a sequence of events that describe changes to that entity, allowing users to reconstruct any state of the system.
Database per Service: Each microservice has a dedicated database, ensuring autonomy and loose coupling.
Saga Pattern: Orchestrates multiple microservices to execute a series of transactions in a way that maintains data consistency across the services.
Strangler Fig: A deployment pattern that gradually replaces monolithic or conventional systems with a modern architecture, such as microservices.
Blue-Green Deployment: This strategy reduces downtime and risk by running two identical production environments. Only one of them serves live traffic at any point. Once the new version is tested and ready, it switches.
A/B Testing: A/B testing refers to the practice of making two different versions of something and then seeing which version performs better.
Cache-Aside: A pattern where an application is responsible for loading data into the cache from the storage system.
Chained Transactions: Instead of each service managing its transactions, the orchestration service controls the transactions between multiple microservices.
Code Example: Circuit Breaker using the Resilience4j Library
Here is the Java code:
@CircuitBreaker(name = "backendA", fallbackMethod = "fallback")
public String doSomething() {
    // Call the downstream service and return its response
    return backendAClient.call(); // hypothetical client
}

public String fallback(Throwable t) {
    // Fallback logic executed when the circuit is open or the call fails
    return "fallback-response";
}
The "Circuit Breaker" term, described by Michael Nygard in Release It! and popularized by Martin Fowler, borrows from a well-known electrical-engineering device: when the current is too high, the breaker "trips" and interrupts the flow until it is reset. The software equivalent, in a microservices architecture, stops sending requests to a failing service, giving it time to recover.
Can you describe the API Gateway pattern and its benefits?
The API Gateway acts as a single entry point for a client to access various capabilities of microservices.
Gateway Responsibilities
Request Aggregation: Merges multiple service requests into a unified call to optimize client-server interaction.
Response Aggregation: Collects and combines responses before returning them, benefiting clients by reducing network traffic.
Caching: Stores frequently accessed data to speed up query responses.
Authentication and Authorization: Enforces security policies, often using JWT or OAuth 2.0.
Rate Limiting: Controls the quantity of requests to safeguard services from being overwhelmed.
Load Balancing: Distributes incoming requests evenly across backend servers to ensure performance and high availability.
Service Discovery: Provides a mechanism to identify the location and status of available services.
Key Benefits
Reduced Latency: By optimizing network traffic, it minimizes latency for both requests and responses.
Improved Fault-Tolerance: Service failures are isolated, preventing cascading issues. It also helps in providing fallback functionality.
Enhanced Security: Offers a centralized layer for various security measures, such as end-to-end encryption.
Simplified Client Interface: Clients interact with just one gateway, irrespective of the underlying complicated network of services.
Protocol Normalization: Allows backend services to use different protocols (like REST and SOAP) while offering a consistent interface to clients.
Data Shape Management: Can transform and normalize data to match what clients expect, hiding backend variations.
Operational Insights: Monitors and logs activities across services, aiding in debugging and analytics.
Contextual Use
The gateway pattern is particularly useful:
In systems built on SOA, where it is used to adapt to modern web-friendly protocols.
For modern applications built with microservices, especially when multiple services need to be accessed for a single user action.
When integrating with third-party services, helping in managing and securing the integration.
Code Example: Setting Up an API Gateway
Here is the Python code:
from flask import Flask
import requests

app = Flask(__name__)

@app.route('/')
def api_gateway():
    # Example: aggregating and forwarding requests to two backend services
    response1 = requests.get('http://service1.com')
    response2 = requests.get('http://service2.com')
    # Further processing of the responses would happen here
    return 'Aggregated response'
Explain the ‘Circuit Breaker’ pattern. Why is it important in a microservices ecosystem?
The Circuit Breaker pattern is a key mechanism in microservices architecture that aims to enhance fault tolerance and resilience.
Core Mechanism
State Management: The circuit breaker can be in one of three states: Closed (normal operation), Open (indicating a failure to communicate with the service), and Half-Open (an intermittent state to test if the service is again available).
State Transition: The circuit breaker can transition between states based on predefined triggers like the number of consecutive failures or timeouts.
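The state machine described above can be sketched without any framework; the thresholds and timeout below are illustrative defaults, not values from a specific library:
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "OPEN":
            if time.time() - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"  # probe whether the service recovered
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"
                self.opened_at = time.time()
            raise
        self.failures = 0
        self.state = "CLOSED"
        return result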
Benefits
Failure Isolation: Preventing cascading failures ensures that malfunctioning services do not drag down the entire application.
Latency Control: The pattern can quickly detect slow responses, preventing unnecessary resource consumption and improving overall system performance.
Graceful Degradation: It promotes a better user experience by continuing to operate, though possibly with reduced functionality, even when services are partially or completely unavailable.
Fast Recovery: After the system or service recovers from a failure, the circuit breaker transitions to its closed or half-open state, restoring normal operations promptly.
.NET’s Polly Example
Here is the C# code:
var circuitBreakerPolicy = Policy
    .Handle<Exception>()
    .CircuitBreaker(3, TimeSpan.FromSeconds(60));
Asynchronous Use Cases
For asynchronous activities, such as making API calls in a microservices context, the strategy can adapt to handle these as well. Libraries like Polly and Resilience4j are designed to cater to asynchronous workflows.
What is a ‘Service Mesh’? How does it aid in managing microservices?
A Service Mesh is a dedicated infrastructure layer that simplifies network requirements for microservices, making communication more reliable, secure, and efficient. It is designed to reduce the operational burden of communication between microservices.
Why Service Mesh?
Zero Trust: Service Meshes ensure secure communication, without relying on individual services to implement security measures consistently.
Service Health Monitoring: Service Meshes automate health checks, reducing the risk of misconfigurations.
Traffic Management: They provide tools for controlling traffic, such as load balancing, as well as for A/B testing and canary deployments.
Adaptive Routing: In response to dynamic changes in service availability and performance, Service Meshes can redirect traffic to healthier services.
Elements of Service Mesh
The Service Mesh architecture comprises two primary components:
Data Plane: Handles the actual service-to-service traffic. It is made up of sidecar proxies, such as Envoy (used by Istio) or Linkerd's proxy, which sit alongside running services to manage traffic.
Control Plane: Manages the configuration and policies that the Data Plane enforces. It includes systems like Istio and Consul.
Key Capabilities
Load Balancing: Service Meshes provide intelligent load balancing, distributing requests based on various criteria, like latency or round-robin.
Security Features: They offer a suite of security capabilities, including encryption, authentication, and authorization.
Traffic Control: Service Meshes enable fine-grained traffic control, allowing you to manage traffic routing, failover, and versioning.
Metrics and Tracing: They collect and provide key operational telemetry, making it easier to monitor the health and performance of your microservices.
Code Example: Service Mesh Components in Kubernetes
Here is the YAML configuration:
For the Control Plane:
apiVersion: v1
kind: Pod
metadata:
  name: control-plane-pod
  labels:
    component: control-plane
spec:
  containers:
    - name: controller
      image: control-plane-image
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: control-plane-service
spec:
  selector:
    component: control-plane
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
For the Data Plane:
apiVersion: v1
kind: Pod
metadata:
  name: service-1-pod
  labels:
    app: service-1
spec:
  containers:
    - name: service-1-container
      image: service-1-image
      ports:
        - containerPort: 8080
    - name: proxy
      image: envoyproxy/envoy-alpine
---
apiVersion: v1
kind: Pod
metadata:
  name: service-2-pod
  labels:
    app: service-2
spec:
  containers:
    - name: service-2-container
      image: service-2-image
      ports:
        - containerPort: 8080
    - name: proxy
      image: envoyproxy/envoy-alpine
In this example, Envoy serves as the sidecar proxy (Data Plane) injected alongside service-1 and service-2, and the control-plane-pod and control-plane-service represent the control plane.
How do you ensure data consistency across microservices?
Data consistency in a microservices architecture is crucial for ensuring that business-critical operations are executed accurately.
Approaches to Data Consistency in Microservices
Synchronous Communication: Via REST or gRPC, which ensures immediate consistency but can lead to performance issues and tight coupling.
Asynchronous Communication: Using message queues which ensure eventual consistency but are more resilient and performant.
Eventual Consistency with Compensating Actions: Involves completing a series of local operations across microservices and compensating for any failures with corrective actions, often orchestrated through a dedicated event handler or saga.
Code Example: Synchronous Communication
Here is the Python code:
# Synchronous communication with a RESTful API
import requests

def place_order(order):
    response = requests.post('http://order-service/api/v1/orders', json=order)
    if response.status_code == 201:
        return "Order placed successfully"
    return "Order placement failed"
Potential downside: If the order service is down, the basket service cannot commit the transaction.
Code Example: Asynchronous Communication with Event Bus
Here is the RabbitMQ code:
Producer:
import json
import pika

def send_order(order):
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='order_queue')
    channel.basic_publish(exchange='', routing_key='order_queue', body=json.dumps(order))
    connection.close()
Because there is no blocking call or shared transactional context, the producer stays fast and loosely coupled.
Consumer:
# Consumes messages from 'order_queue'
# Processes each order asynchronously
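A minimal pika consumer corresponding to the producer above might look like the following sketch; process_order is a hypothetical placeholder for the local transaction:
import json
import pika

def process_order(order):
    # Hypothetical business logic for handling the order
    print("processing", order)

def on_message(channel, method, properties, body):
    process_order(json.loads(body))
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='order_queue')
channel.basic_consume(queue='order_queue', on_message_callback=on_message)
channel.start_consuming()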
Eventual Consistency with Compensating Actions
CAP Theorem
The CAP theorem states that it’s impossible for a distributed data store to simultaneously provide more than two of the following three guarantees: Consistency, Availability, and Partition Tolerance.
BASE (Basically Available, Soft state, Eventually consistent)
BASE is an acronym that describes the properties of many NoSQL databases. It includes:
Basically Available: The system remains operational most of the time.
Soft state: The state of the system may change over time, even without input.
Eventually Consistent: The system will become consistent over time, provided no new updates are made in the meantime.
Transactional Outbox Pattern
This pattern, used in conjunction with an event store or message broker, ensures atomic operations across services. The microservice first writes an event to an “outbox” table within its own database before committing the transaction. A specialized, outbox-reading process then dispatches these events to the message broker.
Advantages
Ensures atomicity, preventing events from being published for transactions that were never committed.
Mitigates the risk of message loss that might occur if an external event publishing action happens after the transaction is committed.
Code Example: Transactional Outbox Pattern
Here is the Java code:
// Using Spring Data JPA with a relay that publishes outbox rows to RabbitMQ
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface OutboxEventRepository extends JpaRepository<OutboxEvent, Long> {

    @Modifying
    @Query(value = "INSERT INTO outbox_event (id, eventType, payload) VALUES (:id, :eventType, :payload)",
           nativeQuery = true)
    void create(@Param("id") long id,
                @Param("eventType") String eventType,
                @Param("payload") String payload);
}

public class OrderService {

    private final OutboxEventRepository outboxEventRepository;

    public OrderService(OutboxEventRepository outboxEventRepository) {
        this.outboxEventRepository = outboxEventRepository;
    }

    public void placeOrder(Order order) {
        // ... perform order placement
        outboxEventRepository.create(order.getId(), "OrderPlacedEvent", toJson(order));
        // ... commit transaction; a separate relay publishes outbox rows to the broker
    }
}
Discuss the strategies you would use for testing microservices.
When it comes to testing microservices, there are several strategies tailored to meet the unique challenges and opportunities of this architecture.
Key Strategies
Test Stubs and Mocks
For microservices, automated testing often starts at the unit level. To isolate parts of the system for testing, mocks and stubs are invaluable. Stubs provide canned responses, while mocks validate the behavior of the system under test.
Frameworks like WireMock or Mockito can assist in creating these.
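For example, a unit test can stub a downstream credit-check client with Python's built-in unittest.mock; the LoanService class and the is_eligible call are hypothetical stand-ins for your own code:
from unittest.mock import Mock

# Hypothetical service under test and its collaborator.
class LoanService:
    def __init__(self, credit_client):
        self.credit_client = credit_client

    def apply(self, customer):
        return "approved" if self.credit_client.is_eligible(customer) else "rejected"

def test_loan_is_approved_for_eligible_customer():
    credit_client = Mock()
    credit_client.is_eligible.return_value = True  # stubbed canned response

    assert LoanService(credit_client).apply({"id": 42}) == "approved"
    credit_client.is_eligible.assert_called_once_with({"id": 42})  # mock-style verification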
Smart End-To-End Tests with Cucumber
End-to-end (e2e) tests are crucial to ensure the proper integration of service components. However, these tests can be challenging to maintain as microservices evolve independently. Tools like Cucumber alleviate this issue through the use of easily comprehensible, plain-text specifications. They also help improve testing coverage.
Chaos Testing
In a microservices architecture, system components can fail independently. Chaos testing, popularized by Netflix’s “Chaos Monkey,” injects various forms of failure—such as latency or downtime—into the system to assess its resilience. Tools like Gremlin automate this approach, empowering teams to identify and remediate vulnerabilities.
Canary and Blue/Green Deployments
Canary and blue/green deployments are release strategies for rolling out updates to your microservices. They are designed to manage risk during deployment and help you identify issues early in the process. Chaos Engineering techniques can add further stability and confidence to your deployments.
Multi-Region Deployments
Using multi-region deployments, you can duplicate and distribute your services across different geographical locations to mitigate the effects of a region-specific outage. This offers a more robust, widely accessible, and reliable service.
Immutable Architectures
An immutable architecture involves replacing, rather than updating, elements of your application. This approach to microservice management offers a reliable, consistent, and efficient way to handle changes.
Orchestration with Kubernetes
Kubernetes automates the deployment, scaling, and management of microservices. Its self-healing capabilities are especially relevant in a microservices environment, ensuring that the system can recover from faults without human intervention.
Blameless Postmortems
Instituting blameless postmortems fosters a culture of continuous improvement, where teams openly discuss mistakes or system failures. This approach to tackling outages and discrepancies ensures a transparent process, where the focus is on root cause analysis and learning, not assigning blame.
Code Example: Chaos Monkey
Here is the Java code:
public class ChaosMonkey {
public void killRandomService() {
// Method to induce failure
}
}
How can you prevent configuration drift in a microservices environment?
Configuration drift refers to inconsistencies that can occur between your intended infrastructure state and its actual deployment. This phenomenon can lead to operational issues, discrepancies in monitoring and security, and headaches during deployments.
To combat configuration drift, you need strategies and tools for continual monitoring and remediation to ensure your infrastructure aligns with your gold standard.
One approach is creating static configurations and deploying them in immutable infrastructure. However, the focus here is on strategies outside of immutable infrastructure.
How to Achieve Configuration Consistency?
Centralized Configuration: Opt for a centralized configuration management system or service, like HashiCorp's Consul, that ensures all application instances access the latest, uniform configuration data.
Version Control: Leverage version control repositories (e.g., Git) to record your configuration changes. Automated deployments and CD pipelines can then ensure that the code from the repository aligns with your production systems.
Automated Auditing and Adjustments: Regularly review and, if necessary, adjust deployed configurations to match the central source of truth. Configuration-management libraries such as Netflix's Archaius can help keep runtime configuration consistent.
Container Orchestration Platforms: Containerized architectures and orchestration platforms such as Kubernetes deploy applications uniformly and consistently from their container definitions, keeping the service definition consistent across all nodes.
Dependency Management and Testing: Continuous Integration/Continuous Deployment (CI/CD) isn’t limited to application code. It should also include dependencies like configuration data, with tests to verify compatibility and consistency.
Service Registries: Implement service registries, such as Eureka, so services can dynamically discover others. This minimizes the need for static configuration files that could fall out of sync.
Code Example: CI/CD Pipeline
Here is the Python code:
from git import Repo
import os

# Clone or pull the configuration repository
config_repo_path = 'path/to/configuration/repo'
if os.path.exists(config_repo_path):
    repo = Repo(config_repo_path)
    repo.remotes.origin.pull()
else:
    repo = Repo.clone_from('https://github.com/organization/config-repo.git', config_repo_path)

# Deploy configurations using the repository's latest version.
# This is a simplified example; in an actual deployment you might hand off to a
# configuration-management tool like Ansible or Terraform.
deploy_configurations(config_repo_path)  # hypothetical deployment step
When should you use synchronous vs. asynchronous communication in microservices?
Deciding between synchronous and asynchronous communication in a microservices architecture requires carefully considering various factors, such as service boundaries, data dependencies, fault tolerance, performance, and consistency.
Key Considerations
Service Boundaries
Recommendation: Start with synchronous communication for intra-service tasks and choose asynchronous communication for inter-service tasks requiring loose coupling.
Data Dependencies:
Recommendation: Synchronous communication can be more practical when you have data dependencies that demand both request and response. Asynchronous communication provides greater decoupling but might require additional strategies, like eventual consistency, to ensure data integrity.
Performance and Latency Requirements
Recommendation: If low latency and immediate response are necessary, opt for synchronous communication. However, for tasks where immediate responses aren’t critical, like notifications or batch processing, asynchronous communication is more suitable.
Fault Tolerance and Resilience
Recommendation: Asynchronous communication, especially with message queues that support retry and error handling, offers better resilience against failures. Synchronous communication can lead to cascading failures. Thus, decoupling with asynchronous communication enhances the overall robustness of the system.
Complexity and Operational Overhead
Recommendation: Simplicity favors synchronous communication, making it easier to understand for developers, troubleshoot, and monitor. On the other hand, the additional complexity of managing asynchronous communication might be justified when it offers clear architectural advantages, such as better decoupling.
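To make the distinction concrete, here is a small Python sketch contrasting the two styles; the URLs, queue name, and payload are illustrative placeholders:
import json
import requests
import pika

order = {"id": 42, "amount": 99.0}

# Synchronous: the caller blocks until the order service responds.
response = requests.post('http://order-service/api/v1/orders', json=order)
print(response.status_code)

# Asynchronous: the caller publishes a message and moves on;
# the order service consumes it whenever it is available.
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='order_queue')
channel.basic_publish(exchange='', routing_key='order_queue', body=json.dumps(order))
connection.close()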
What role does containerization play in microservices?
Containerization is pivotal for building, deploying, and running microservices due to the consistent, isolated environments it provides. It’s the backbone of flexible, scalable, and agile microservices architecture.
Key Benefits of Containerization in Microservices
Consistent Deployment: Containers ensure identical runtime environments across different microservices, guaranteeing consistent behavior.
Resource Isolation: Each service operates within its container, isolating resources and dependencies from others to prevent compatibility issues.
Portability: Containers can be deployed across various infrastructure types, offering excellent portability. Services developed using containers can seamlessly move between development, testing, and production environments.
Scalability: Containers provide a straightforward mechanism for scaling microservices, such as via Kubernetes auto-scaling features, ensuring smooth and efficient resource utilization.
Dependency Management: Containers encapsulate both microservice code and its dependencies, simplifying version management and reducing potential conflicts. A service stays concise and self-sufficient.
Streamlined Updates: Containerized services can be updated without affecting others, enhancing agility.
Microservices and Macrotasks
Containers lay the groundwork for a clear division of work: each container typically hosts one microservice, aligning with the microservices mantra of "doing one thing well."
This modular approach makes development and maintenance more straightforward, fosters code reusability, and enables rapid system updates. It’s a stark contrast to monolithic architectures where a single codebase handles multiple responsibilities. Containerized microservices are akin to specialized craftsmen, each proficient in a specific task, working harmoniously to build a grand structure.
What are the challenges of deploying microservices?
The individual deployments in a microservices architecture present several challenges that a unified deployment strategy in monolithic systems does not have:
Challenges of Microservices Deployment
Service Discovery: Identifying and managing dynamic service locations is crucial in a microservices architecture. Centralized service registries or modern solutions like Kubernetes can help with this.
Data Consistency: Microservices usually follow a bounded context that could result in local data inconsistencies. Solutions include distributed transactions or event-driven systems where services react to data changes.
Inter-Service Communication: Keeping data consistent when different services must agree, traditionally supported by a single transaction, is now handled through asynchronous communication and graceful fault tolerance, which must be designed and validated explicitly.
Network Complexity: Deploying services across a network introduces a new layer of operational complexity and potential issues like latency, network outages, and reliability.
Resilience to Failure: While systems always have to be robust, microservices demand a more resilient architecture as the failure of one service should not bring down the entire system.
Deployable Artifacts: Each service typically requires its own deployable artifact. Possible solutions are creating Docker containers or using platforms such as Kubernetes for container orchestration.
Continuous Integration and Continuous Deployment (CI/CD): Microservices are more complex to test and deploy, requiring more automation in the CI/CD pipeline.
Versioning and Compatibility: Managing the coexistence of different service versions is crucial to ensuring that evolving services don’t break existing clients.
Security: Each service having its own API brings the challenge of securing these various APIs and handling permissions across multiple services.
Cross-Cutting Concerns: Functions like logging, monitoring, and caching can become more complicated with microservices. Tools aimed at microservices, like Istio, do a lot to help with this.
Describe blue-green deployment and how it applies to microservices.
Blue-Green Deployment is a release strategy that’s particularly well-suited to microservices architectures.
Key Principles
Zero Downtime: Ensuring uninterrupted service for end-users during updates by switching traffic between two identical environments.
Elastic Scaling: Each environment is independently scalable, allowing for dynamic resource allocation based on demand.
Quick Reversion: In case of issues, the deployment can be immediately rolled back to the last stable environment.
Workflow
Parallel Environments: Two identical environments - blue (current) and green (new) - run simultaneously.
Isolated Testing: The green environment undergoes rigorous testing without affecting production.
Traffic Switch: Once the green environment is validated, traffic is routed to it, typically by updating a DNS record or load-balancer configuration.
Continuous Monitoring: Post-deployment, both green and blue are monitored to safeguard operational integrity.
Code Example: Blue-Green Deployment
The task is to implement a function divide(num1, num2) in a new version (green) and perform a Blue-Green deployment. If everything is successful, the new version becomes the live one.
Original (Blue)
Here is the Python code:
def divide(num1, num2):
    return num1 / num2
New (Green)
Here is the Python code:
def divide(num1, num2):
    if num2 == 0:
        return "Cannot divide by 0"
    return num1 / num2
The Benefits
Exception Safety: With a roll-back mechanism, if new deployments encounter issues, the platform will instantly switch to the former environment.
Risk-Free Upgrades: Users are protected from potential problems with new versions, ensuring a seamless and superior user experience.
Framework Agnosticism: Blue-Green deployments are tool and platform-agnostic, and are compatible with numerous cloud platforms and management systems.
How does canary releasing work, and how is it beneficial for microservices deployments?
Canary Releasing is a deployment strategy in microservices architecture that provides a phased approach to roll out new features, reducing the impact of potential issues.
Key Components
Traffic Splitter: Tools like Istio or Nginx Ingress Controller can be configured to divert only a portion of incoming traffic to the upgraded version.
Validation Metrics: Real-time monitoring, A/B Testing, and user feedback help determine if upgrades meet operational standards and user expectations.
Benefits
Reduced Risk Exposure
By incrementally routing traffic to new deployments, any problems can be detected and addressed before a full rollout. This minimizes the impact on users if unexpected issues arise.
Controlled Rollouts
Canary releasing allows for user segment-based testing, letting you target specific demographics or geographic regions. This ensures a more focused beta testing approach.
Canary Metrics
The traffic split between the canary and the stable version is decided based on a set of key performance indicators (KPIs), such as request latency, error rate, requests per second (RPS), and custom metrics tailored to the specific microservice.
Canary Data Sources
Real Time Traffic: For immediate validation ensuring accuracy and responsiveness
Observability Tools: Utilize logs, metrics, and distributed tracing to monitor the canary’s performance against the stable version
User Feedback: Direct input from select users or through mechanisms like beta programs or feedback buttons
Canary Best Practices
Gradual Increase: Start with a small percentage of traffic sent to the canary, monitoring KPIs closely, before gradually increasing the percentage.
Automated Rollback: Utilize automated health checks to revert to the stable version if KPIs deviate (a minimal sketch of such a check follows this list).
Version Parity: Ensure the canary and stable versions are configured similarly to guarantee accurate comparisons.
Isolation and Debuggability: Employ methods to isolate canary users for detailed examination and debugging, like UUID Headers or session stickiness.
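As a rough illustration of the automated-rollback practice above, the following Python sketch compares canary and stable KPIs against tolerance thresholds and decides whether to roll back. The metric names and threshold values are assumptions for the example; in practice they would come from your observability stack (for instance, Prometheus queries).

def should_rollback(canary, stable, max_error_delta=0.02, max_latency_ratio=1.5):
    # Roll back if the canary's error rate is noticeably worse than the stable version's...
    error_regression = canary["error_rate"] - stable["error_rate"] > max_error_delta
    # ...or if its latency has degraded beyond the allowed ratio
    latency_regression = canary["p95_latency_ms"] > stable["p95_latency_ms"] * max_latency_ratio
    return error_regression or latency_regression

# Example with made-up numbers
canary_kpis = {"error_rate": 0.05, "p95_latency_ms": 420}
stable_kpis = {"error_rate": 0.01, "p95_latency_ms": 300}
if should_rollback(canary_kpis, stable_kpis):
    print("KPIs deviated: revert traffic to the stable version")
else:
    print("Canary healthy: continue increasing its traffic share")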
Canary Workflow
Trunk Development: Maintain a single codebase where ongoing work is integrated.
Release Candidate: A specific build is chosen for canary deployment.
Traffic Split: Incoming requests are divided between the stable and canary versions.
Validation: Real-time and post-deployment data are analyzed to determine if the canary version performs adequately, or if a rollback is necessary.
Full Deployment (Optional): After successful validation, the canary becomes the new stable version for all users.
Code Example: Canary Release with Istio
To implement canary releasing with Istio:
Define a Virtual Service: Create an Istio VirtualService that splits incoming traffic by weight between the stable and canary versions of the service.
Attach Canary Labels: Routing to the canary can be based on HTTP headers, cookies, or more advanced techniques such as User-Agent matching.
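Istio's actual configuration is expressed as a YAML VirtualService rather than application code, so as a language-neutral illustration, here is a small Python sketch of the weighted split that such a VirtualService describes: roughly 90% of requests go to the stable version and 10% to the canary. The weights, function names, and request handling are assumptions for the example.

import random

def handle_stable(request):
    return f"stable handled {request}"

def handle_canary(request):
    return f"canary handled {request}"

def route_request(request, stable_weight=90, canary_weight=10):
    # Mimic a weighted traffic split: about canary_weight% of requests
    # reach the canary, the rest go to the stable version
    if random.randint(1, stable_weight + canary_weight) <= canary_weight:
        return handle_canary(request)
    return handle_stable(request)

# Route a batch of requests and observe the approximate split
results = [route_request(i) for i in range(1000)]
print(sum(r.startswith("canary") for r in results), "of 1000 requests hit the canary")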
Explain the concept of ‘Infrastructure as Code’ and how it benefits microservices management.
Infrastructure as Code (IaC) is a practice where infrastructure is provisioned and managed through machine-readable definition files rather than manual, interactive configuration of physical hardware. This approach is beneficial in various areas of software deployment and management, particularly in the context of microservices.
Benefits of IaC in Microservices Management
Consistency: With IaC, infrastructure configurations are standardized, ensuring consistent behavior across microservices.
Scalability: IaC templates can be easily modified to accommodate rapid scaling, aligning with the microservices’ dynamic nature.
Resource Efficiency: The modularity of IaC enables optimized resource allocation, critical in microservices for limiting resource usage.
Centralized Control: IaC provides a centralized view of the distributed microservices infrastructure, enabling streamlined management.
Security and Compliance: IaC templates can include pre-configured security measures and compliance standards, promoting a more secure microservices architecture.
Key Tools and Technologies
CloudFormation: This AWS tool allows the creation and management of AWS resources using JSON or YAML templates.
Terraform: An open-source tool by HashiCorp that addresses multi-cloud environments using its own domain-specific language, HCL.
Ansible: Primarily designed for configuration management, Ansible also supports IaC functionalities, allowing for consistent infrastructure provisioning.
Chef and Puppet: While traditionally known for their configuration management capabilities, these tools also facilitate IaC principles.
The Lifecycles of IaC Objects
Create: New deployments are initialized, aligning with changes in the microservices ecosystem.
Update: Modifications and improvements are made to existing infrastructures, keeping pace with microservices’ constant evolution.
Destroy: Upon decommissioning a microservice, associated resources are removed, preventing any unnecessary clutter in infrastructure.
Common IaC Cornerstones
Declarative vs. Imperative: Declarative IaC defines the desired end state, while imperative IaC specifies the steps needed to reach that state (a small sketch of the difference follows this list).
Version Control: Just like application code, IaC scripts should be managed using version control systems, ensuring traceability and maintainability.
Automated Testing: Resource configurations in IaC scripts should undergo thorough testing to prevent potential discrepancies.
Documentation: Code comments, README files, and diagrammatic representations support IaC scripts, improving comprehensibility and maintainability.
Collaborative Approaches: Multiple developers can concurrently work on diverse parts of the IaC script, with systems in place for integration and conflict resolution, ensuring a streamlined, organic workflow for microservices management.
Compartmentalization: Distinct microservices and their infrastructure are segregated, minimizing their interdependencies and simplifying management.
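To illustrate the declarative-versus-imperative distinction noted above, here is a tool-agnostic Python sketch. The declarative path states only the desired number of instances and lets a reconcile step work out the actions; the imperative path spells out every action itself. All names (desired_state, reconcile, provision_imperatively) are hypothetical and not tied to any specific IaC tool.

# Declarative style: describe the end state and let the tool reconcile it
desired_state = {"service": "orders", "instances": 3}

def reconcile(current, desired):
    # Compute and apply only the difference between current and desired state
    delta = desired["instances"] - current["instances"]
    if delta > 0:
        print(f"Creating {delta} instance(s) of {desired['service']}")
    elif delta < 0:
        print(f"Destroying {-delta} instance(s) of {desired['service']}")
    current["instances"] = desired["instances"]

# Imperative style: spell out every step yourself
def provision_imperatively():
    print("Create instance 1 of orders")
    print("Create instance 2 of orders")
    print("Create instance 3 of orders")

current_state = {"service": "orders", "instances": 1}
reconcile(current_state, desired_state)   # declarative: prints "Creating 2 instance(s) of orders"
provision_imperatively()                  # imperative: every step is listed explicitly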
Describe what Continuous Integration/Continuous Deployment (CI/CD) pipelines look like for microservices.
Continuous Integration/Continuous Deployment (CI/CD) for microservices entails the integration and deployment of independent, self-sufficient units of software. This is often more complex than for monolithic applications, primarily due to the parallel development of multiple microservices and their intricate dependencies.
CI/CD for Microservices: Workflow
Build & Test: Each microservice is built and tested independently and published as a container image. Integration tests ensure that the microservice behaves as expected both in isolation and in a test environment.
Service Versioning: Microservice APIs may change, so their versions are managed meticulously. A central registry handles these versions, and changes are documented to ensure consistent communication and compatibility among services.
Release & Deployment: This step is challenging because each microservice has its own testing, validation, and deployment requirements. Services are typically released and deployed using some form of release train, ensuring that all interdependent services stay compatible.
System Integration: After independent builds, integration tests run to verify that collaborating components work together as expected.
Environment Flow: A stable flow of environments is essential to microservices. For instance, a service might proceed through development, testing, and staging before reaching production.
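As a rough, simplified illustration of the build-and-test stage in this workflow, the following script builds a microservice's container image, runs its test suite, and pushes the image only if the tests pass. The image name and commands are assumptions; in a real pipeline these steps would live in the CI system's own configuration (a GitLab CI job or Jenkins stage, for example) rather than an ad-hoc script.

import subprocess
import sys

IMAGE = "registry.example.com/orders-service:latest"  # hypothetical image name

def run(command):
    # Run a shell command and report whether it succeeded
    print(f"$ {command}")
    return subprocess.run(command, shell=True).returncode == 0

def pipeline():
    if not run(f"docker build -t {IMAGE} ."):
        sys.exit("Build failed")
    if not run("python -m pytest tests/"):
        sys.exit("Tests failed - image will not be published")
    if not run(f"docker push {IMAGE}"):
        sys.exit("Push failed")
    print("Microservice built, tested, and published")

if __name__ == "__main__":
    pipeline()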
Tools Utilized
Version Control: Systems such as Git ensure code changes are tracked.
Docker: To containerize microservices, making them more portable.
Container Orchestration Tools: Such as Kubernetes or Docker Swarm to manage the lifecycle of containers.
Continuous Integration Systems: Jenkins or GitLab CI, which handle the automated build, test, and merge of microservices.
Challenges and Considerations
Dependency Management: Microservices need to be independent, yet they might rely on different persistent resources and external services. Managing such dependencies can be complex.
Service Discovery and Load Balancing: To appropriately direct traffic between different microservice instances.
Logging and Monitoring: With potentially hundreds of microservices running, it’s critical to have a clear and unified way to monitor their health and gather logs for troubleshooting.
Best Practices
Automated Testing: Implement comprehensive test suites, such as unit, integration, and end-to-end tests.
Small, Frequent Changes: Frequent small changes make issues easier to identify and resolve.
Rolling Update: Use this update strategy to minimize disruption to the system.
Infrastructure as Code (IaC): Employ tools such as Terraform or AWS CloudFormation to automate infrastructure provisioning and facilitate environment consistency.
Code Example: Script to Ensure Microservice Dependency is Satisfied
Here is the Python code:
import requests

def check_dependency_service():
    # Query the dependent service's health endpoint before starting this service
    dependency_response = requests.get("http://dependency-service-url/health", timeout=5)
    if dependency_response.status_code == 200:
        print("Dependency service is healthy")
    else:
        raise Exception("Dependency service is not healthy")

check_dependency_service()