Lecture 6 (VL 6) Flashcards
Monolithic App architecture
Design of a software program that is built and deployed as a single, all-in-one unit
Benefits of monolithic architecture
Easy to develop
Simple to test
Simple to deploy
Easy to scale horizontally (run multiple copies behind a load balancer)
Disadvantages of monolithic architecture
Limited in size and complexity
Codebase becomes too large and complex to fully understand
Slow start-up time
The complete app must be re-deployed on every update
Reduced reliability: a bug in any module can bring down the whole app
Microservices architecture
Single logical db per service
Built and deployed independently
Scalable
Stateless (complete the request and forget it)
Advantages of microservices
Easier to understand and maintain
Independence of services
No barrier on adopting new technologies
Independent service deployment
Each service can be scaled independently
Disadvantages of microservices
Complexity of creating a distributed system
Deployment complexity (need to implement a service discovery mechanism)
Service Mesh
Configurable, low latency infrastructure layer
Handles a high volume of network-based interprocess communication among application infrastructure services using APIs
Microservices application framework components: API Gateway
Server that is the single entry point into the system
Encapsulates the internal system architecture and provides an API that is tailored to each client
API Gateway: backends for frontends pattern
Defines a separate API Gateway for each kind of client
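For intuition, a minimal sketch (not from the lecture) of an API gateway as the single entry point: the service names, ports, and routes are assumptions made up for illustration, and Flask/requests stand in for whatever framework is actually used.

```python
# Minimal API gateway sketch (hypothetical service names/ports).
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Internal service locations stay hidden from the client behind the gateway.
SERVICES = {
    "orders":   "http://orders-service:8081",
    "payments": "http://payments-service:8082",
}

@app.route("/api/orders/<order_id>")
def get_order(order_id):
    # The gateway calls internal services and tailors one response to the client.
    order = requests.get(f"{SERVICES['orders']}/orders/{order_id}", timeout=2).json()
    payment = requests.get(f"{SERVICES['payments']}/payments/{order_id}", timeout=2).json()
    return jsonify({"order": order, "payment": payment})

if __name__ == "__main__":
    app.run(port=8080)
```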
Service Registry
Database containing the network locations of service instances
Service Registration
Self-registration: the service instance registers itself directly with the service registry
3rd-party registration: registration is handled through a service manager
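A minimal self-registration sketch, assuming a hypothetical registry URL and registration/heartbeat endpoints: the instance reports its own network location on startup and keeps renewing it.

```python
# Self-registration sketch: the instance registers itself with the registry
# and renews the registration with heartbeats (registry URL/endpoints are hypothetical).
import time
import requests

REGISTRY = "http://service-registry:8500"
INSTANCE = {"service": "payment-service", "host": "10.0.0.7", "port": 8082}

def register():
    requests.put(f"{REGISTRY}/register", json=INSTANCE, timeout=2)

def heartbeat_loop(interval_s=10):
    while True:
        requests.put(f"{REGISTRY}/heartbeat", json=INSTANCE, timeout=2)
        time.sleep(interval_s)

if __name__ == "__main__":
    register()
    heartbeat_loop()
```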
Service discovery: problem
In a microservices app, each service instance is assigned its IP address dynamically because of autoscaling, failures, and upgrades
Client-side service discovery
Clients are responsible for determining the network locations of available service instances
Client queries a service registry
Client uses a load balancing algorithm to select one of the available instances and makes a request
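A minimal client-side discovery sketch; the registry URL, the response shape, and random selection as the load-balancing algorithm are all assumptions for illustration.

```python
# Client-side discovery sketch: the client queries the registry and load-balances itself.
import random
import requests

REGISTRY = "http://service-registry:8500"

def discover(service_name):
    # Ask the registry for all available instances of the service.
    instances = requests.get(f"{REGISTRY}/instances/{service_name}", timeout=2).json()
    # The load-balancing decision is made by the client (random choice here).
    return random.choice(instances)

def call_payment_service(order_id):
    inst = discover("payment-service")
    url = f"http://{inst['host']}:{inst['port']}/payments/{order_id}"
    return requests.get(url, timeout=2).json()
```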
Client-side service discovery: pros
Straightforward; no moving parts except for the service registry
The client can make intelligent, application-specific load-balancing decisions because it knows the available service instances
Client-side service discovery: cons
• Couples the client with the service registry.
• Client-side service discovery logic must be implemented for each programming language and framework used by service clients.
Server-side service discovery
Client makes request to a service via load balancer
Load balancer queries the service registry and routes each request to an available service instance
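For comparison, a rough sketch of the server-side variant: a toy load balancer (hypothetical registry endpoints; Flask used only for illustration) that looks up instances and forwards the request, so the client only ever sees the load balancer's address.

```python
# Server-side discovery sketch: the client hits a fixed load-balancer address;
# the load balancer looks up instances in the registry and forwards the request.
import random
from flask import Flask, Response
import requests

app = Flask(__name__)
REGISTRY = "http://service-registry:8500"

@app.route("/<service_name>/<path:rest>")
def route(service_name, rest):
    instances = requests.get(f"{REGISTRY}/instances/{service_name}", timeout=2).json()
    inst = random.choice(instances)  # load-balancing decision happens here, not in the client
    upstream = requests.get(f"http://{inst['host']}:{inst['port']}/{rest}", timeout=2)
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=80)
```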
Server-side service discovery: pros & cons
Pros:
• details of discovery are abstracted away from the client.
• Eliminates the need to implement discovery logic for each programming language and framework used by your service clients.
Cons:
• Requires a load balancer (one more highly available system component to set up and manage)
Basic request flow
Each incoming request is assigned a thread responsible for getting the data and sending the response back to the client
The thread is freed after the response is sent to the user
Immediate failure
Wrap the code in the Order service in a try/catch block to handle exceptions
The thread is freed quickly after the response is sent to the user
Timeout failure
All requests end up waiting for a response when the Payment service is overloaded or has crashed
Reason: no timeout value is configured
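The immediate-failure and timeout cases boil down to wrapping the downstream call and always configuring a timeout; a small sketch, assuming a hypothetical Payment service URL:

```python
# Sketch: always set a timeout on the downstream call and catch failures,
# so the request thread is freed quickly instead of waiting forever.
import requests

def get_payment_status(order_id):
    try:
        # Without the timeout, the thread would block while the Payment service is overloaded.
        resp = requests.get(f"http://payments-service:8082/payments/{order_id}", timeout=2)
        return resp.json()
    except requests.RequestException:
        return {"status": "unknown"}   # default response, thread is freed immediately
```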
Cascading failure
At a high request rate all threads are blocked waiting, and the failure cascades to the calling services
Timeout failure: solution
Return a default (fallback) response immediately to keep the thread pool free
Add an interceptor for all requests
Change the status after a few failed requests within a time window and, after a timeout, check again
Problem: the service may still be down => the Payment service gets overloaded and you want to know the reason (service down / too many requests)
Allow partial failure with a circuit breaker
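A minimal circuit-breaker sketch along these lines; the thresholds, the fallback, and the wrapped payment call are illustrative assumptions, not the lecture's implementation.

```python
# Minimal circuit-breaker sketch: open after repeated failures, return a fallback
# immediately while open, and probe again (half-open) after a timeout.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout_s=30):
        self.failure_threshold = failure_threshold
        self.reset_timeout_s = reset_timeout_s
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout_s:
                return fallback()          # open: fail fast, keep the thread pool free
            self.opened_at = None          # half-open: let one request probe the service
        try:
            result = fn()
            self.failures = 0              # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()   # too many failures: open the circuit
            return fallback()

# Usage sketch (get_payment_status as in the timeout example above):
# breaker = CircuitBreaker()
# breaker.call(lambda: get_payment_status(order_id), fallback=lambda: {"status": "unknown"})
```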
Microservices app deployment: multiple service instances per VM
All resources are shared: CPU, memory, the VM
Microservices app deployment: service instance per VM
Pay for resources you don’t use
Microservices app deployment: multiple service containers per VM
Deploy with containers
Microservices app deployment: service container per VM
Containers can easily be moved from one machine to another, which allows scalability
Rolling updates: ramped
slow rollout
The number of instances of the old version is decreased and the new version is increased until the correct number of service instances is reached.
Pro:
- version is slowly released across instances
- convenient for stateful applications that can handle rebalancing of the data
Cons:
- rollout/rollback can take time
- supporting multiple APIs is hard
- no control over traffic
Rolling updates: blue/green
The “green” version of the application is deployed alongside the “blue” version.
After testing that the new version works as expected, update the load balancer to send traffic to the new version
Pro:
- instant rollout/rollback
- avoids versioning issues
Cons:
- requires double the resources
- proper test of the entire platform should be done before releasing to production
Rolling updates: canary
A canary deployment consists of routing a subset of users to a new functionality.
For example: send 25% of traffic to the new version (see the sketch after the cons below)
Pro:
- version released for a subset of users
- convenient for error rate and performance monitoring
- fast rollback
Cons:
- slow rollout
- fine tuned traffic distribution can be expensive
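A tiny sketch of how a gateway or edge router might split canary traffic; the version URLs and the 25% share are assumptions for illustration.

```python
# Canary traffic-split sketch: route a fixed fraction of requests to the new version.
import random

STABLE_URL = "http://orders-v1:8081"
CANARY_URL = "http://orders-v2:8081"
CANARY_SHARE = 0.25   # e.g. send 25% of traffic to the new version

def pick_upstream():
    return CANARY_URL if random.random() < CANARY_SHARE else STABLE_URL
```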
Rolling updates: A/B testing
Distributing traffic amongst versions based on a few parameters (cookie, user agent, etc.); see the routing sketch after the cons below.
Pro:
- intelligent load balancer
- several versions run in parallel
- full control over the traffic distribution
Cons:
- hard to troubleshoot errors for a given session, distributed tracing becomes mandatory
- not straightforward, you need to set up additional tools
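A tiny sketch of parameter-based routing for A/B testing; the cookie name, the user-agent rule, and the version URLs are assumptions for illustration.

```python
# A/B routing sketch: choose the version from request attributes (cookie, user agent).
def pick_version(cookies, headers):
    if cookies.get("experiment_group") == "B":
        return "http://orders-v2:8081"
    if "Mobile" in headers.get("User-Agent", ""):
        return "http://orders-v2:8081"   # e.g. mobile users get version B
    return "http://orders-v1:8081"
```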