Solutions Architecture Flashcards
What are the three most popular architecture patterns?
Microservices, event-driven, and service-oriented.
What is a microservices architecture?
A complex application is broken down into a collection of small, independent, and loosely coupled services.
Each microservice represents a specific business capability and can be developed, deployed, and scaled independently.
Communication between microservices typically occurs through lightweight protocols such as HTTP or messaging systems.
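As a minimal, hypothetical sketch of that communication style, the snippet below shows a single "orders" microservice exposing one business capability over HTTP; the service name, route, and data are invented for illustration, and Flask is used only as a convenient example framework.

```python
# Hypothetical "orders" microservice: one small service, one business capability,
# exposed over a lightweight HTTP interface. Requires Flask (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the service's own datastore; each microservice owns its data.
ORDERS = {1: {"id": 1, "status": "shipped"}}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    # Other services would call http://localhost:5001/orders/<id> over the network,
    # so this service can be developed, deployed, and scaled independently of them.
    app.run(port=5001)
```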
What is event-driven architecture?
Event-driven architecture (EDA) is an approach in which components communicate by producing, consuming, and reacting to events.
An event represents a significant change or occurrence within the system or the external environment.
Events are typically asynchronous and can be produced or consumed by various components or services.
Events are the primary means of communication between components or services, promoting loose coupling.
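To make the producer/consumer relationship concrete, here is a minimal in-process sketch; the event name and handlers are hypothetical, and a real system would typically route events through a broker such as Kafka or RabbitMQ rather than a local dictionary.

```python
# Minimal in-process event bus: producers publish events, decoupled consumers react.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # The producer knows nothing about who consumes the event (loose coupling).
    for handler in subscribers[event_type]:
        handler(payload)

# Two independent consumers react to the same hypothetical event.
subscribe("order_placed", lambda e: print(f"billing: invoicing order {e['id']}"))
subscribe("order_placed", lambda e: print(f"shipping: scheduling order {e['id']}"))

publish("order_placed", {"id": 42})
```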
What is service oriented architecture?
Service-oriented architecture (SOA) is an architectural style that focuses on organizing software systems as a collection of services.
Services are self-contained, reusable software components that expose well-defined interfaces and can be accessed and used by other services or applications.
Services encapsulate specific business functions or processes and expose them through standardized interfaces.
Services are designed to be loosely coupled, allowing them to evolve and be replaced without affecting other services.
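A small sketch of what a service contract can look like in code, assuming a hypothetical payment service: consumers depend only on the well-defined interface, so the implementation behind it can evolve or be replaced without affecting them.

```python
# Hypothetical SOA-style service contract: a standardized interface plus one
# replaceable implementation.
from abc import ABC, abstractmethod

class PaymentService(ABC):
    """Well-defined interface exposed to other services or applications."""

    @abstractmethod
    def charge(self, account_id: str, amount: float) -> bool: ...

class LegacyPaymentService(PaymentService):
    def charge(self, account_id: str, amount: float) -> bool:
        print(f"charging {amount} to {account_id} via the legacy gateway")
        return True

def checkout(payments: PaymentService) -> None:
    # The consumer codes against the interface, not the implementation.
    payments.charge("acct-123", 19.99)

checkout(LegacyPaymentService())
```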
How can you optimise performance when designing a solution?
- Performance requirements: Define them clearly early in the design process. Identify critical use cases and likely performance bottlenecks to focus optimization efforts.
- Scalability: Design the solution to scale horizontally or vertically as per the anticipated load. Use techniques such as load balancing, caching, and distributed computing to distribute the workload.
- Efficient data handling: Optimize data access and storage by using appropriate data structures, indexing, and caching mechanisms (see the caching sketch after this list). Leverage technologies like in-memory databases or data grids for fast data retrieval.
- Performance testing: Conduct comprehensive performance testing to identify and address performance bottlenecks. Use load testing tools and simulate realistic usage scenarios to validate the solution’s performance.
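As one concrete example of the caching point above, the sketch below memoizes an expensive lookup so repeated requests skip the slow path; the function, data, and delay are simulated for illustration.

```python
# Caching an expensive lookup with a simple in-memory cache.
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def get_customer_profile(customer_id: int) -> dict:
    time.sleep(0.5)  # stands in for a slow database or network call
    return {"id": customer_id, "tier": "gold"}

start = time.perf_counter()
get_customer_profile(7)  # cold call: pays the full cost
get_customer_profile(7)  # warm call: served from the cache
print(f"two calls took {time.perf_counter() - start:.2f}s")  # roughly 0.5s, not 1.0s
```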
How can you optimise reliability when designing a solution?
- Redundancy and fault tolerance: Incorporate redundancy and fault-tolerant mechanisms at different levels, such as hardware, software, and infrastructure, to mitigate failures and ensure high availability.
- Automated monitoring and alerting: Implement robust monitoring and logging solutions to proactively identify issues. Set up alerts to notify administrators about critical system events and failures.
- Failure recovery and disaster resilience: Design the solution with disaster recovery plans, backup strategies, and failover mechanisms to minimize downtime and data loss in the event of failures.
- Error handling and fault isolation: Implement proper error handling mechanisms and design services or components to be isolated from one another to prevent cascading failures.
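One common fault-tolerance and error-handling tactic from the list above is retrying a flaky dependency with exponential backoff so transient failures never reach the caller; the failing dependency below is simulated, and the retry counts and delays are illustrative.

```python
# Retry with exponential backoff around a simulated, transiently failing dependency.
import time

calls = {"count": 0}

def call_flaky_dependency() -> str:
    # Simulated dependency that fails on its first two calls, then recovers.
    calls["count"] += 1
    if calls["count"] <= 2:
        raise ConnectionError("transient failure")
    return "ok"

def call_with_retries(attempts: int = 5, base_delay: float = 0.1) -> str:
    for attempt in range(attempts):
        try:
            return call_flaky_dependency()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

print(call_with_retries())  # succeeds on the third attempt
```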
How can you optimise availability when designing a solution?
- High availability architecture: Design the solution with redundancy and failover capabilities to minimize downtime. Use technologies such as load balancers, clustering, and hot standby servers to ensure continuous availability.
- Geographical distribution: Consider deploying the solution across multiple regions or data centers to provide better availability and reduce the impact of localized failures or outages.
- Service-level agreements (SLAs): Define and meet SLAs for availability. Set recovery time objectives (RTO) and recovery point objectives (RPO) to guide the design and implementation of the solution’s availability features.
- Continuous monitoring and maintenance: Continuously monitor the solution’s health and performance to detect and address potential issues before they impact availability. Regularly update and maintain the system to address security vulnerabilities and improve stability.
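As a small illustration of the monitoring point above, the sketch below probes a health endpoint and alerts after consecutive failures; the URL, timeout, and threshold are hypothetical, and a production setup would use a dedicated monitoring stack instead.

```python
# Simple availability probe: check a health endpoint and alert on repeated failures.
import urllib.request

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused, DNS failure, timeout, etc.

def check_and_alert(url: str, failures_before_alert: int = 3) -> None:
    for _ in range(failures_before_alert):
        if is_healthy(url):
            return
    print(f"ALERT: {url} failed {failures_before_alert} consecutive health checks")

check_and_alert("http://localhost:5001/health")  # hypothetical endpoint
```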
What are the 6 key factors to consider when evaluating different solution architecture options?
- Functional Requirements: does the option align with the desired functionality, features, and use cases?
- Non-Functional Requirements:
  - Scalability
  - Reliability and Availability
  - Performance
  - Security
  - Maintainability
- Technology and Tools:
  - Compatibility with existing technology stacks, infrastructure, and development tools
  - Skills and expertise within the team
  - Vendor support
- Cost and ROI: upfront investments, licensing fees, operational expenses, and ongoing maintenance
- Future Flexibility and Adaptability: support for future changes, scalability, and integration of new technologies or features
- Stakeholder Alignment
What is the 12 step process for analysing and designing solution architectures?
- Understand Business Requirements
- Identify Stakeholders and User Needs
- Define Architecture Objectives and Constraints
- Conduct Current State Analysis
- Decompose and Analyze
- Identify Architecture Patterns and Technologies
- Iterative Design and Validation
- Risk Assessment and Mitigation
- Iteratively Refine and Document
- Collaborate and Communicate
- Review and Governance
- Monitor and Adapt
How should a solution architect handle trade offs between cost, performance, scalability and maintainability?
This requires a holistic, pragmatic approach: understand the unique requirements, priorities, and constraints of the specific project or organization, then make well-informed decisions that strike a balance between these factors.
Factors to consider:
- Understand Business Priorities
- Define Requirements and Constraints
- Analyze Impact and Dependencies
- Consider Lifecycle Costs
- Quantify Trade-offs
- Prioritize Performance-Critical Components (Invest more resources and effort in optimizing these areas)
- Leverage Cost-Effective Technologies
- Emphasize Modularity and Abstraction
- Plan for Scalability
- Automate Processes
- Consider Trade-off Reversibility
- Collaborate with Stakeholders
How can emerging technologies impact solution designs?
Emerging technologies significantly impact solution architecture by:
- expanding capabilities
- influencing architectural patterns
- introducing integration challenges
- improving scalability and performance
- raising security and privacy considerations
- affecting infrastructure and deployment models
- shaping data management and analytics approaches
Solution architects must embrace these technologies, understand their implications, and leverage them appropriately to design modern, efficient, and future-proof architectures.
What are the three security by design principles?
- least privilege (sketched after this list)
- defense-in-depth (multiple, layered security controls)
- separation of concerns
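A minimal sketch of the first principle, least privilege, expressed as role-based permission checks; the roles and actions are invented for illustration.

```python
# Least privilege via role-based access control: each role gets only the
# permissions it needs, nothing more.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("viewer", "read")
assert not is_allowed("viewer", "delete")  # viewers never get destructive access
```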
How can a solution architect address security and compliance concerns in their designs?
- Understand Security and Compliance Requirements
- Adopt Security by Design Principles
- Perform Risk Assessment
- Implement Secure Authentication and Authorization (a credential-hashing sketch follows this list)
- Apply Secure Data Handling and Data Protection Measures
- Secure Communication Channels
- Implement Threat Monitoring and Detection
- Conduct Regular Security Testing
- Establish Incident Response and Recovery Plans
- Stay Updated on Security Threats and Best Practices
- Collaborate with Security and Compliance Teams
- Document Security and Compliance Considerations
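To ground the authentication point above, here is a sketch of secure credential handling using only the standard library: passwords are stored as salted key-derivation hashes rather than plaintext. The iteration count and phrases are illustrative; production systems often use a dedicated library such as bcrypt or argon2 instead.

```python
# Salted password hashing with PBKDF2 so plaintext credentials are never stored.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```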
What are the key features of Databricks?
Databricks is a unified analytics platform designed for big data processing and machine learning.
Key Features:
- Apache Spark Integration: provides an optimized Spark runtime environment
- Unified Workspace: collaborative environment for data engineers, data scientists, and analysts, including interactive notebooks and data sharing
- Automated Spark Cluster Management
- High Performance Computing: highly optimized runtime for Spark workloads
- Data Source Integration: seamless integration with various data sources and storage systems, e.g. Amazon S3, Azure Blob Storage, Hadoop Distributed File System (HDFS), and more.
- Machine Learning: MLlib and MLflow (an MLflow tracking sketch follows this list)
- Streaming Analytics: integration with Apache Spark Streaming and Structured Streaming
- Data Engineering Capabilities: supports batch processing, real-time streaming, and complex data workflows
- Security and Governance: data protection, identity and access management (IAM), and compliance, including role-based access control (RBAC), data encryption at rest and in transit, audit logs, and integration with external authentication systems
- Scalable Cloud Infrastructure: runs on major cloud platforms such as AWS, Azure, and GCP, leveraging the underlying cloud infrastructure to provide scalable and elastic computing resources
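As a taste of the MLflow feature mentioned above, here is a minimal tracking sketch; the experiment name, parameters, and metric values are made up for illustration, and mlflow must be installed (it ships with Databricks runtimes).

```python
# Recording a training run's configuration and results with MLflow tracking.
import mlflow

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("max_depth", 5)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", 0.93)  # illustrative value, not a real result
```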
What is the benefit of Databricks’ integration with Spark?
Databricks’ integration with Apache Spark simplifies the deployment, management, and optimization of Spark workloads, enabling users to focus on data analysis, machine learning, and advanced analytics tasks rather than infrastructure management.
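For a flavor of what this looks like in practice, below is a minimal PySpark sketch of a batch job of the kind Databricks typically runs; the bucket paths and column names are hypothetical, and on Databricks a SparkSession is usually pre-created as `spark`.

```python
# Read raw data, aggregate it, and write the result: a typical Spark batch workload.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.read.json("s3://example-bucket/orders/")  # hypothetical input path
daily_revenue = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))  # hypothetical columns
    .agg(F.sum("amount").alias("revenue"))
)
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/daily_revenue/")
```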