Cross-Cutting Concern Design Patterns Flashcards
External Configuration
Summary:
Externalize your configuration settings so the application can load them at startup or refresh them on the fly. Good for settings/variables that differ when deploying across multiple environments. Examples: HashiCorp Vault, AWS AppConfig, or Parameter Store (AWS Systems Manager).
Detail:
A service typically calls other services and databases as well. For each environment (dev, QA, UAT, prod), the endpoint URLs or some configuration properties might be different. A change in any of those properties might require a re-build and re-deploy of the service.
To avoid code modification, externalize all configuration, including endpoint URLs and credentials. The application should load it either at startup or on the fly, and it can be refreshed without a server restart.
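A minimal sketch of this idea in plain Java, assuming settings live in an external properties file and can be overridden by environment variables; the ConfigLoader class name, file path, and orders.service.url key are illustrative, not from any particular library:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative only: loads settings from an external properties file,
// letting environment variables override them, so the same build can
// run unchanged across dev, QA, UAT, and prod.
public class ConfigLoader {
    private final Properties props = new Properties();

    public ConfigLoader(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in); // e.g. a per-environment application.properties mounted at deploy time
        }
    }

    // An environment variable wins over the file value; neither is hard-coded in the service.
    public String get(String key, String defaultValue) {
        String env = System.getenv(key.toUpperCase().replace('.', '_'));
        if (env != null) return env;
        return props.getProperty(key, defaultValue);
    }

    public static void main(String[] args) throws IOException {
        ConfigLoader config = new ConfigLoader(args[0]);
        String ordersUrl = config.get("orders.service.url", "http://localhost:8080");
        System.out.println("Calling orders service at " + ordersUrl);
    }
}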
Service Discovery Pattern
Summary:
A service registry keeps the metadata of each service. Each service registers when starting and de-registers when shutting down. Discovery can be client-side (Netflix Eureka) or server-side (AWS ELB).
Detail:
When microservices come into the picture, we need to address a few issues in calling services:
With container technology, IP addresses are dynamically allocated to service instances. Every time an address changes, a consumer service can break and require manual changes.
The consumer has to remember each service URL, which creates tight coupling.
To solve this, a service registry is created that keeps the metadata and specification of each producer service. A service instance should register with the registry when starting and de-register when shutting down. There are two types of service discovery:
Client-side: e.g. Netflix Eureka (https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance)
Server-side: e.g. AWS ELB (https://aws.amazon.com/elasticloadbalancing/).
https://microservices.io/patterns/server-side-discovery.html
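A minimal sketch of the client-side flavor, assuming an in-memory registry; in practice this role is played by Eureka, Consul, or a cloud load balancer, and the ServiceRegistry class and its register/deregister/lookup methods are illustrative names only:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative in-memory registry; a real deployment would use Eureka,
// Consul, or a cloud load balancer instead of this class.
public class ServiceRegistry {
    // service name -> live instance addresses (host:port)
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // Called by a producer service on startup.
    public void register(String serviceName, String address) {
        instances.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Called by a producer service on shutdown (or by health checks on failure).
    public void deregister(String serviceName, String address) {
        List<String> list = instances.get(serviceName);
        if (list != null) list.remove(address);
    }

    // Client-side discovery: the consumer looks up live instances and
    // picks one itself (here, naively, the first one).
    public String lookup(String serviceName) {
        List<String> list = instances.getOrDefault(serviceName, List.of());
        if (list.isEmpty()) throw new IllegalStateException("No instances for " + serviceName);
        return list.get(0);
    }
}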
Circuit Breaker
Summary:
A proxy for operations that might fail. The proxy can be a state machine (Closed, Open, Half-Open) that avoids making calls against a remote service that is likely to fail. While Closed, a counter tracks failures; when failures cross a threshold, the state changes to Open. While Open, calls fail immediately and a timer runs; when it expires, the state changes to Half-Open and a limited number of test calls are allowed through. Depending on their success, the state returns to Closed or goes back to Open.
Detail:
A service generally calls other services to retrieve data, and there is a chance that a downstream service may be down. This causes two problems: first, requests keep going to the down service, exhausting network resources and slowing performance; second, the user experience becomes bad and unpredictable.
The consumer should invoke a remote service via a proxy that behaves in a similar fashion to an electrical circuit breaker. When the number of consecutive failures crosses a threshold, the circuit breaker trips, and for the duration of a timeout period, all attempts to invoke the remote service fail immediately. After the timeout expires, the circuit breaker allows a limited number of test requests to pass through. If those requests succeed, the circuit breaker resumes normal operation; otherwise, if there is a failure, the timeout period begins again. This pattern is suited to preventing an application from trying to invoke a remote service or access a shared resource when the operation is highly likely to fail.
https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker
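A minimal sketch of the state machine described above (Closed, Open, Half-Open) in plain Java; the CircuitBreaker class name, thresholds, and timeouts are illustrative, and a production system would more likely use a library such as Resilience4j:

import java.util.function.Supplier;

// Illustrative circuit breaker: CLOSED counts failures, OPEN fails fast
// until a timeout expires, HALF_OPEN lets a trial call through.
public class CircuitBreaker {
    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long openTimeoutMillis;

    private State state = State.CLOSED;
    private int failureCount = 0;
    private long openedAt = 0;

    public CircuitBreaker(int failureThreshold, long openTimeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.openTimeoutMillis = openTimeoutMillis;
    }

    public synchronized <T> T call(Supplier<T> remoteCall) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < openTimeoutMillis) {
                throw new IllegalStateException("Circuit open: failing fast");
            }
            state = State.HALF_OPEN; // timeout expired, allow a trial request
        }
        try {
            T result = remoteCall.get();
            // Success (in HALF_OPEN or CLOSED) resets the breaker.
            state = State.CLOSED;
            failureCount = 0;
            return result;
        } catch (RuntimeException e) {
            failureCount++;
            if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
                state = State.OPEN;
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}

A consumer would then wrap each remote call, e.g. new CircuitBreaker(5, 30_000).call(() -> ordersClient.fetchOrders()), where ordersClient is a placeholder for whatever client the service already uses.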
Blue Green Deployment Pattern
Summary:
Deploy new/updated services to a parallel environment to avoid downtime of the existing system. Once the services are deployed, traffic is routed to the new environment, and the old environment can be used for the next deployment. Can also be used in conjunction with canary deployments to test the stability/reliability of newly deployed service(s).
Detail:
With microservice architecture, one application can have many microservices. If we stop all the services and then deploy an enhanced version, the downtime will be huge and can impact the business. Also, rollback becomes a nightmare. The Blue-Green Deployment pattern avoids this.
The blue-green deployment strategy can be implemented to reduce or remove downtime. It achieves this by running two identical production environments, Blue and Green. Let’s assume Green is the existing live instance and Blue is the new version of the application. At any time, only one of the environments is live, with the live environment serving all production traffic. All cloud platforms provide options for implementing a blue-green deployment.
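In practice the switch happens at the load balancer or DNS level; the sketch below only illustrates the core idea of an atomic flip between two identical environments, and the BlueGreenRouter name and URLs are made up for illustration:

import java.util.concurrent.atomic.AtomicReference;

// Illustrative only: the "router" holds a single live environment and
// flips it atomically, so rollback is just flipping back.
public class BlueGreenRouter {
    private final String blueUrl;
    private final String greenUrl;
    private final AtomicReference<String> liveUrl;

    public BlueGreenRouter(String blueUrl, String greenUrl) {
        this.blueUrl = blueUrl;
        this.greenUrl = greenUrl;
        this.liveUrl = new AtomicReference<>(greenUrl); // Green is live initially
    }

    // All production traffic goes to whatever is currently live.
    public String routeRequest() {
        return liveUrl.get();
    }

    // Deploy and verify the new version on the idle environment,
    // then switch traffic in one step; flip again to roll back.
    public void switchTraffic() {
        liveUrl.updateAndGet(current -> current.equals(greenUrl) ? blueUrl : greenUrl);
    }
}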
Service Mesh
Summary:
An evolution of Service Discovery. In this model, infrastructure is provided so that services can discover one another, control traffic and policies, and provide observability, ideally without modifying the services themselves. There are a few ways to implement this: Gateway, In-Client, Sidecar Proxy, and Networking.
Detail:
Without service mesh infrastructure, each service would need to add a library that makes the service discoverable, encrypts traffic, emits metrics, etc. This is problematic because (1) the service is now responsible for code outside its core domain, and (2) not only does this library need to be reusable across services, it likely has to be re-implemented in multiple stacks to support a heterogeneous microservice environment. Alternatively, you can factor this code in at the client level (see the standard Service Discovery Pattern).
Adding a centralized service (e.g. an API Gateway) to offload server-side discovery introduces a single point of failure in the gateway itself; you are limited if you need to encrypt traffic, since the gateway can only manage encryption downstream of itself; and traffic visibility is limited, since metrics can only be gathered once, at the gateway. Alternatively, you can install a mini API gateway for each service, but this likely amplifies the gateway challenges beyond the single point of failure.
A Service Mesh can also be implemented with sidecar proxies (see Integration Patterns). The proxy process deploys alongside the actual service and intercepts its traffic; the proxy then implements all of the Service Mesh smarts. The downsides of this approach are added complexity, an extra processing hop, and possibly extra latency.
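A toy sketch of the sidecar idea, assuming the service listens on localhost:8080 and the proxy intercepts traffic on port 15001; it only forwards simple GET requests and counts them, standing in for the metrics, mTLS, and routing logic a real mesh proxy (e.g. Envoy) would handle:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sidecar: listens on 15001 and forwards every request to the
// co-located service on 8080, counting requests along the way.
public class SidecarProxy {
    private static final AtomicLong requestCount = new AtomicLong();

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer proxy = HttpServer.create(new InetSocketAddress(15001), 0);

        proxy.createContext("/", exchange -> {
            requestCount.incrementAndGet(); // observability without touching the service

            // Forward the intercepted request (GET only, headers/bodies dropped for brevity).
            HttpRequest upstream = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080" + exchange.getRequestURI()))
                    .build();
            try {
                HttpResponse<byte[]> response =
                        client.send(upstream, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(response.body());
                }
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1);
            }
        });
        proxy.start();
        System.out.println("Sidecar proxy listening on 15001, forwarding to localhost:8080");
    }
}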
Other options include integration through the networking layer and integrated thin clients. The former embeds service mesh logic into the network layer of your host; eBPF (https://lwn.net/Articles/740157/) and Cilium (https://cilium.io/) explore this option. The latter keeps a small portion of the service mesh code in a client library, while the heavier processing is managed in a control-plane layer (e.g. gRPC-LB, https://grpc.io/blog/grpc-load-balancing/).
For a table of pros and cons of each architecture check out:
https://medium.com/swlh/service-mesh-architectural-patterns-5dfa0ad96e38