Enterprise Computing Flashcards
Lecture 1 - Introduction
Define the waterfall model
The waterfall model cascades the three fundamental activities
of the software development process, so that they happen
sequentially.
It is performed as Exploration -> Development -> Operation
Why can the waterfall model be considered suboptimal?
Specifications often change, and a misinterpretation of the problem discovered deep into the development phase may be too late to fix cheaply. Modifying requirements late causes significant rework.
Define the Iterative/incremental Model
The iterative/incremental model is:
∙ iterative because the feed-forward between activities is augmented with feed-back between them, i.e. a waterfall in which work can flow both forwards and backwards;
∙ incremental because the interleaved activities regularly deliver small additional pieces of functionality
What advantage does iterative/incremental development have over waterfall?
Iterative development allows teams to go back and revise the work of earlier phases, so misunderstood requirements can be corrected before they become too expensive to fix.
Lecture 2 - Lean Cycle Evolution
What is the purpose of the Lean Cycle in software production?
The Lean Cycle is used to apply the scientific method to software production. It emphasizes continuous feedback and learning through iterative cycles of Build-Measure-Learn or Learn-Measure-Build to adapt and improve products efficiently.
What are the three main phases of the Lean Cycle?
The three main phases are:
Exploration: Identifying and testing hypotheses about the market.
Development: Building products or features based on validated hypotheses.
Operation: Delivering the product to customers and refining based on feedback.
What happens in the “Learn” phase of the Build-Measure-Learn cycle?
In the Learn phase, an enterprise formulates hypotheses about the market and determines the empirical data required to validate these hypotheses.
What is the focus of the “Measure” phase in the Build-Measure-Learn cycle?
The Measure phase involves testing the hypothesis by collecting empirical data, often through experiments or feedback from prototypes or early versions of the product.
Describe the “Build” phase of the Build-Measure-Learn cycle.
In the Build phase, an enterprise creates a Minimum Viable Product (MVP) to test hypotheses. This MVP allows for quick iteration and feedback collection.
What is an issue with the Build-Measure-Learn cycle?
Building first can incur significant costs; building without prior research into demand is risky.
What is a better alternative to the Build-Measure-Learn cycle?
Reversing the cycle to Learn-Measure-Build: first learn about customer demand, then measure the market, and only then build the product.
What is a “pivot” in the context of the Lean Cycle?
A pivot is a significant change in strategy without changing the vision. It occurs when empirical data suggests that the current approach isn’t working, leading to adjustments like technology changes or shifts in product focus.
What are the five types of pivots mentioned in the Lean Cycle?
Technology Pivot: Switching to a more efficient technology.
Zoom-In Pivot: Turning a product feature into the main product.
Zoom-Out Pivot: Making a product part of a larger product suite.
Customer Segment Pivot: Targeting a different customer group.
Customer Need Pivot: Addressing a different but more critical problem.
Provide an example of a technology pivot.
Microsoft shifted from selling standalone Office software to a subscription-based cloud service with Microsoft 365, improving value delivery.
Explain a Zoom-In Pivot with an example.
A Zoom-In Pivot occurs when a product feature becomes the main product. Example: Flickr started as a multiplayer game but pivoted to focus on its photo-sharing feature, which gained popularity.
What is a Zoom-Out Pivot? Provide an example
A Zoom-Out Pivot happens when a product becomes part of a larger offering. Example: DotCloud transitioned to manage Docker containers, focusing on application mobility across clouds.
What is a Customer Segment Pivot?
It occurs when a product addresses a different customer group than initially intended. Example: YouTube started as a dating platform but shifted to a general video-sharing platform.
Define a Customer Need Pivot with an example.
This pivot addresses a more critical customer problem. Example: Twitter evolved from a podcasting platform to a microblogging SMS-based social network after its initial model was rendered obsolete.
What was the pivot that led to the success of Instagram?
Instagram started as a location-based app with multiple features. It pivoted to focus solely on photo sharing, simplifying user experience and achieving massive success.
How did Netflix pivot to achieve its current model?
Netflix transitioned from a mail-order DVD rental service to a streaming platform, allowing instant access to films and TV shows, disrupting traditional rental models.
What are the four types of MVPs mentioned?
Concierge MVP: Personalized service with customer awareness.
Wizard of Oz MVP: Simulated functionality without customer awareness.
Landing Page MVP: Testing interest via a promotional webpage.
Video MVP: Demonstrating a concept through a video.
Describe a Concierge MVP and its use case.
A Concierge MVP involves hands-on interaction with customers to refine the product concept. It’s used when the solution hypothesis is unclear and customer feedback is critical.
What distinguishes a Wizard of Oz MVP?
In a Wizard of Oz MVP, the product’s functionality is manually simulated without the customer’s knowledge. It’s used to validate a clear solution hypothesis while minimizing development effort.
How does a Landing Page MVP test product ideas?
A Landing Page MVP uses a webpage to gauge interest in a product idea. Visitors can sign up or pledge support, providing data on demand before product development.
What was the MVP for Dropbox?
Dropbox used a Video MVP, creating a simple video to demonstrate its functionality. This validated interest and attracted early adopters without building a full product.
How did Airbnb use an MVP approach?
Airbnb’s MVP was a basic website showcasing their apartment space. They manually managed bookings, testing the concept of renting personal spaces to travelers.
Why is iterative learning essential in the Lean Cycle?
Iterative learning allows for continuous improvement by validating assumptions, minimizing waste, and adapting to market needs efficiently, ensuring the product aligns with customer demands.
Lecture 3
According to Eric Ries, what type of experiment is a startup?
A startup is a human experiment designed to create a new product or service under conditions of extreme uncertainty. The goal is to test hypotheses about customer needs and product-market fit.
According to Eric Ries, what is the biggest waste that product development faces today?
The biggest waste in product development is building products that nobody wants. This waste occurs due to a lack of understanding of customer needs before development begins.
What does Eric Ries describe as the universal constant of all successful startups?
Continuous learning is the universal constant of all successful startups. Startups must iteratively test and refine their ideas based on customer feedback and market demands, pivoting when necessary while staying grounded in what they have already learned.
Is agile development suitable for startups, according to Eric Ries?
Agile development is suitable for startups as it emphasizes iterative progress, flexibility, and adapting to changes, which aligns with the startup’s need to rapidly respond to feedback and refine their product.
What is a situation in which agile development is not suitable?
Agile is less suitable for safety-critical systems, where the product must be released in a state free of errors; releasing with defects can cause serious harm.
What is validated learning in the context of a startup?
Validated learning is the process of demonstrating empirically that a team has discovered valuable truths about a startup’s present and future business prospects. It uses metrics derived from customer behavior rather than opinions or assumptions.
What does Eric Ries suggest should be included in the first version of a product?
The first version of a product, or Minimum Viable Product (MVP), should include only the core functionalities required to test key assumptions about customer needs and product viability. The MVP is designed to gather feedback with minimal resources.
According to Ries, what is the most important thing for startups in terms of cycle time?
Reducing cycle time is the most important thing: the faster a startup gets through the Build-Measure-Learn loop, the faster it can learn and adapt.
According to Eric Ries, what should the heuristic be for any kind of startup advice?
The heuristic for startup advice is that it should be actionable, testable, and tied to specific customer or market contexts. Generic advice should be avoided in favor of insights that can drive specific experiments.
What are actionable metrics, and why are they important for startups?
Actionable metrics are specific, clear, and tied to decision-making. They enable startups to evaluate the effectiveness of their strategies and experiments. Unlike vanity metrics, actionable metrics directly inform whether the business is on the right path.
How does Lean Startup methodology reduce waste in development?
By focusing on building MVPs, testing hypotheses, and using validated learning, the Lean Startup methodology ensures resources are allocated to features and products that meet actual customer needs, reducing waste.
Why is customer feedback crucial in Lean methodology?
Customer feedback is crucial because it provides real-world insights into user needs, helping startups pivot or persevere based on evidence rather than assumptions.
What role does experimentation play in Lean Cycle Evolution?
Experimentation allows startups to test hypotheses about products, customers, and markets under real-world conditions, enabling informed decision-making and minimizing risks associated with uncertainty.
Lecture 4 - Monoliths
What is a monolithic system?
A monolithic system is a single program run by a single process, formed from a collection of modules that communicate via procedure calls.
Why is it easy to develop a monolithic system initially?
It is easy because all the code is in one language and one place, allowing the development team to utilize their existing programming skills, tools, and experience effectively.
What makes testing a monolithic system straightforward?
Testing is straightforward because the system is built as a single executable, enabling automatic testing with a suite of tests and easy debugging when issues arise.
How is scaling achieved in a monolithic system?
Scaling in a monolithic system is done through vertical scaling: upgrading to a more powerful machine on which to run the system.
What are the benefits of using a centralized database in a monolithic system?
Centralized databases ensure data accuracy, completeness, and consistency, which simplifies maintenance and management.
What are the two possible outcomes of a database transaction?
A transaction can either:
Commit, completing successfully and moving the database to a new consistent state.
Abort, completing unsuccessfully and restoring the database to its previous consistent state.
What does the acronym ACID stand for in the context of database transactions?
ACID stands for:
Atomicity
Consistency
Isolation
Durability
What is the atomicity property of a transaction?
Atomicity ensures that transactions either fully succeed or fully fail, even in the presence of system failures.
What is the consistency property of a transaction?
Consistency ensures that a transaction takes the database from one consistent state to another.
What does the isolation property guarantee in database transactions?
Isolation ensures that the effects of concurrent transactions are the same as if the transactions were performed sequentially.
What does the durability property guarantee in database transactions?
Durability guarantees that once a transaction is successful, its effects persist, even in the event of system failures.
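Taken together, commit and abort can be illustrated with a minimal Python sketch using the standard library's sqlite3 module; the accounts table and the transfer operation are hypothetical examples, not from the lecture:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
    conn.commit()

    def transfer(conn, src, dst, amount):
        # Atomicity: both updates commit together, or neither does.
        try:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")  # would violate consistency
            conn.commit()    # success: database moves to a new consistent state
        except Exception:
            conn.rollback()  # failure: restore the previous consistent state
            raise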
How does Fred Brooks’ observation to “plan to throw one away” apply to monolithic MVP development?
This observation suggests that teams should anticipate that their first version of a monolithic MVP might need to be discarded and rebuilt to incorporate lessons learned during its development.
What is Gall’s Law, and how does it relate to monolithic MVPs?
Gall’s Law states that a complex system that works evolves from a simple system that worked. For monolithic MVPs, it implies starting with a simpler design that can be refined over time.
What does the “You Aren’t Gonna Need It” (YAGNI) principle advocate in Extreme Programming?
YAGNI advises against implementing features until they are actually needed, emphasizing simplicity in monolithic MVPs to avoid unnecessary complexity.
What is Conway’s Law and how does it apply to monolithic MVP development?
Conway’s Law states that a system’s design mirrors the structure of the organization that created it. For monolithic MVPs, this means the design will reflect the communication structure of the team.
Why might it be better to reverse Conway's Law?
Structuring the organization to match the intended system design, with a team responsible for each module, makes the organization and the architecture congruent.
What does the phrase “eating your own dogfood” mean in the context of monolithic MVPs?
“Eating your own dogfood” means that development teams should use their own product to identify and address issues, ensuring its quality and usability.
What are some advantages of monolithic systems?
Advantages include simplicity in development and testing, centralized data management, and easier debugging due to having all code in one place.
What are the primary limitations of monolithic systems?
Limitations include difficulty scaling horizontally, potential for large and complex codebases, and challenges in adapting to changes or integrating new technologies.
What is vertical scaling, and why is it a common approach in monolithic systems?
Vertical scaling involves upgrading the hardware of a single machine to improve performance. It’s common in monolithic systems because they run as a single process that benefits from more powerful hardware.
Lecture 6 - Microservices
What is a microservices system?
A microservices system consists of multiple programs running as independent processes that communicate by sending messages over a network.
What is Representational State Transfer (REST)?
REST is not a distinct network protocol but a conventional way of using the HyperText Transfer Protocol (HTTP); microservices use it to communicate and to provide resources.
What are the different types of resources in RESTful microservices?
Document - File-like resources managed using GET (read), PUT (update), and DELETE (delete).
Controller - External resources that execute tasks using POST.
Collection - Directory-like resources where GET lists items and POST creates a new one with an invented name.
Store - Similar to collection, but PUT creates new resources with a given name.
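As an illustration of these four resource types, here is a minimal sketch using the Flask web framework; the orders and archive resources are hypothetical examples:

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    orders, archive, next_id = {}, {}, 0

    # Collection: GET lists the items; POST creates one with an invented name.
    @app.get("/orders")
    def list_orders():
        return jsonify(orders)

    @app.post("/orders")
    def create_order():
        global next_id
        next_id += 1
        orders[str(next_id)] = request.get_json()
        return jsonify({"id": next_id}), 201

    # Document: GET reads, PUT updates, DELETE deletes a single file-like resource.
    @app.get("/orders/<oid>")
    def read_order(oid):
        return jsonify(orders[oid])

    @app.put("/orders/<oid>")
    def update_order(oid):
        orders[oid] = request.get_json()
        return "", 204

    @app.delete("/orders/<oid>")
    def delete_order(oid):
        del orders[oid]
        return "", 204

    # Controller: POST executes a task.
    @app.post("/orders/<oid>/cancel")
    def cancel_order(oid):
        orders[oid]["status"] = "cancelled"
        return "", 204

    # Store: like a collection, but PUT creates the item under a client-given name.
    @app.put("/archive/<name>")
    def store_item(name):
        archive[name] = request.get_json()
        return "", 204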
What is a distributed database in microservices?
A distributed database is a system where data is stored across multiple databases, each accessed by individual microservices. Ensuring accuracy, completeness, and consistency in such a setup is challenging.
What are the two main approaches to managing distributed transactions?
Two-phase commit
Sagas
How does a two-phase commit work?
The coordinator asks each participant to vote on committing a change. If all vote to commit, the commit is executed; otherwise, all participants abort. Each participant holds locks on its data until the decision is made. If a participant does not vote within a certain timeframe, the vote times out and is counted as negative.
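A minimal in-process sketch of the protocol (illustrative only: real participants are remote services, and the vote timeout value is a hypothetical choice):

    import random
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    class Participant:
        def __init__(self, name):
            self.name = name
        def prepare(self):
            # Phase 1: take locks, stage the change, and vote.
            return random.random() > 0.1           # votes "yes" 90% of the time
        def commit(self):
            print(self.name, "committed")          # make the change permanent, release locks
        def abort(self):
            print(self.name, "aborted")            # undo the staged change, release locks

    def run_with_timeout(fn, timeout):
        with ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(fn).result(timeout=timeout)

    def two_phase_commit(participants, vote_timeout=5.0):
        votes = []
        for p in participants:
            try:
                votes.append(run_with_timeout(p.prepare, vote_timeout))
            except TimeoutError:
                votes.append(False)                # a missing vote counts as "no"
        for p in participants:
            p.commit() if all(votes) else p.abort()  # unanimous "yes" commits; else abort all

    two_phase_commit([Participant("orders"), Participant("payments")])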
How does the saga pattern manage distributed transactions?
A saga executes a sequence of transactions, committing or aborting each individually. If a failure occurs, compensating transactions undo previous commits. This approach sacrifices atomicity and relies on eventual consistency.
Why do we want to use Sagas?
Unlike two-phase commit, a saga does not hold locks while waiting for other participants to decide: each local transaction commits immediately and releases its locks, so services stay responsive. The trade-off is giving up atomicity in favour of eventual consistency.
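A minimal sketch of a saga with compensating transactions; the travel-booking steps are hypothetical examples:

    # Each step is a (local transaction, compensating transaction) pair.
    def book_flight(order):   print("flight booked")
    def cancel_flight(order): print("flight cancelled")
    def book_hotel(order):    print("hotel booked")
    def cancel_hotel(order):  print("hotel cancelled")
    def charge_card(order):   raise RuntimeError("card declined")  # this step fails

    STEPS = [(book_flight, cancel_flight), (book_hotel, cancel_hotel), (charge_card, None)]

    def run_saga(order):
        done = []
        for action, compensation in STEPS:
            try:
                action(order)                # commits locally, releases its locks at once
                done.append(compensation)
            except Exception:
                for comp in reversed(done):  # undo completed steps in reverse order
                    if comp:
                        comp(order)
                return False                 # saga aborted; eventual consistency restored
        return True

    run_saga({"id": 1})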
What is horizontal scaling in microservices?
Horizontal scaling involves adding more machines, each capable of running multiple microservice instances, to improve system scalability.
Does the number of machines allocated to run one microservice have to be the same as the number allocated to run another?
No. Each microservice scales independently: a heavily loaded service can be given more machines (or more instances) than a lightly loaded one.
What is the reliability challenge in microservices?
Since microservices communicate over a network, failures in one service can impact others. Strategies like retries, circuit breakers, and redundancy are used to improve reliability.
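A minimal sketch of the retry and circuit-breaker strategies mentioned above; the thresholds, delays, and cool-down values are hypothetical choices:

    import time

    class CircuitBreaker:
        # After several consecutive failures, fail fast for a cool-down period
        # instead of letting a broken dependency drag down its callers.
        def __init__(self, threshold=3, cooldown=30.0):
            self.threshold, self.cooldown = threshold, cooldown
            self.failures, self.opened_at = 0, None

        def call(self, fn):
            if self.opened_at and time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            try:
                result = fn()
            except Exception:
                self.failures += 1
                if self.failures >= self.threshold:
                    self.opened_at = time.monotonic()  # open the circuit
                raise
            self.failures, self.opened_at = 0, None    # success closes the circuit
            return result

    def with_retries(breaker, fn, attempts=3, delay=0.5):
        for i in range(attempts):
            try:
                return breaker.call(fn)
            except Exception:
                if i == attempts - 1:
                    raise
                time.sleep(delay * 2 ** i)  # exponential back-off between retries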
What is the strangler design pattern?
The strangler pattern is a gradual migration approach where monolithic modules are placed behind a facade and replaced one-by-one with microservices, updating the facade as changes are made.
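A minimal sketch of the facade's routing decision; the service URLs are hypothetical placeholders:

    # Requests for modules that have already been extracted go to the new
    # microservices; everything else falls through to the monolith.
    EXTRACTED = {"/billing": "http://billing-service.internal"}
    MONOLITH = "http://legacy-monolith.internal"

    def route(path: str) -> str:
        for prefix, service in EXTRACTED.items():
            if path.startswith(prefix):
                return service + path     # module already strangled out
        return MONOLITH + path            # not yet migrated

    assert route("/billing/42") == "http://billing-service.internal/billing/42"
    assert route("/orders/7") == "http://legacy-monolith.internal/orders/7"

As each module is extracted, its prefix is added to the table, until nothing routes to the monolith any more.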
What is the second-system effect, and how does it relate to microservices migration?
The second-system effect, identified by Fred Brooks, suggests that engineers tend to overcomplicate their second system. In microservices migration, teams must avoid unnecessary complexity when breaking apart a monolith.
What is Jeff Bezos’ two-pizza rule, and how does it apply to microservices?
Jeff Bezos suggests that teams should be small enough to be fed with two pizzas. In microservices, small, independent teams are ideal for maintaining and developing individual services efficiently.
What is the “Big Ball of Mud” problem in software architecture?
The “Big Ball of Mud” describes an unstructured, poorly designed software system. Microservices help avoid this by enforcing modular design principles.
How does Ward Cunningham’s technical debt concept apply to microservices?
Technical debt refers to the cost of shortcuts in development. Poorly designed microservices architectures can accumulate technical debt, requiring costly future refactoring.
What is the CAP theorem, and how does it apply to microservices?
The CAP theorem states that distributed systems can only provide two of three guarantees: Consistency, Availability, and Partition Tolerance. Microservices architectures must choose trade-offs based on system needs.
What is the role of API gateways in microservices?
API gateways act as intermediaries between clients and microservices, handling request routing, security, rate limiting, and load balancing.
What are some common tools used to manage microservices architectures?
Popular tools include Kubernetes (orchestration), Docker (containerization), Consul (service discovery), and Istio (service mesh).
What is event-driven architecture in microservices?
Event-driven architecture uses events to trigger and communicate between microservices asynchronously, improving decoupling and scalability.
What are some key advantages of microservices?
Scalability
Improved fault isolation
Flexibility in using different technologies
Faster development and deployment cycles
What are some disadvantages of microservices?
Increased complexity
Network latency issues
Distributed data management challenges
Higher infrastructure costs
Lecture 7
What are the nine common characteristics of microservices according to Martin Fowler?
The nine common characteristics of microservices according to Martin Fowler are:
Componentization via Services – Microservices are independently deployable services.
Organized Around Business Capabilities – Teams are structured around business functions.
Products Not Projects – Microservices focus on long-lived products, not temporary projects.
Smart Endpoints and Dumb Pipes – Business logic is in the services, not the communication mechanism.
Decentralized Governance – Different teams can use different technologies.
Decentralized Data Management – Each service manages its own database.
Infrastructure Automation – Deployment and monitoring are automated.
Design for Failure – Systems assume failures will happen and handle them gracefully (e.g., Netflix's Chaos Monkey, which deliberately causes failures in production to test resilience).
Evolutionary Design – Services can be updated or replaced independently.
What is a component according to Martin Fowler?
A component is a unit of software that can be replaced or upgraded independently. In microservices, a component is defined by its behavior and exposed via an API, making it easier to manage and scale.
Why should one organize around business capabilities in microservices?
Organizing around business capabilities ensures that teams focus on delivering value rather than being restricted by technology stacks. It enables better ownership, faster delivery, and clearer responsibilities. Each microservice aligns with a distinct business function.
Should endpoints be smart or dumb in microservices?
Endpoints should be smart, while the communication between them should be simple (dumb pipes). This means that microservices encapsulate business logic within the service, whereas the network simply routes data without complex logic.
What is the rule for microservice data management?
Each microservice should have its own dedicated database and not share it with other services. This ensures loose coupling and independence, allowing services to evolve independently.
What do you have to assume in any distributed system?
You must avoid the “Fallacies of Distributed Computing”, the false assumptions that:
The network is reliable.
Latency is zero.
Bandwidth is infinite.
The network is secure.
The topology doesn’t change.
There is one administrator.
Transport cost is zero.
The network is homogeneous.
Recognizing these assumptions helps design resilient and fault-tolerant microservices.
How big is a microservice?
There is no fixed size, but a microservice should be small enough to be developed and managed by a small team (typically 2-5 developers) and should perform a single business function well.
What things must be sorted out before adopting microservices?
Before transitioning to microservices, teams must consider:
Deployment automation – Microservices require CI/CD pipelines.
Monitoring and logging – Observability is critical.
Service discovery – Dynamic service registration is needed.
Fault tolerance – Handling failures must be a priority.
Data consistency – Distributed databases need careful management.
Organizational readiness – Teams must be capable of handling service independence.
Lecture 9
What are the two common repository models?
The two common repository models are:
Monorepo - A single large repository that contains all microservices. A single commit can trigger builds of multiple microservices.
Multirepo - A separate repository for each service. Any commit only affects a single service.
What are the advantages and disadvantages of using a Monorepo?
Advantages:
Simplifies dependency management.
Centralized codebase for better consistency.
Easier refactoring across services.
Disadvantages:
Can become slow and difficult to manage at scale.
Requires robust tooling to handle changes efficiently.
Can cause bottlenecks if too many teams work on the same repository.
What are the advantages and disadvantages of using a Multirepo?
Advantages:
Allows independent development and deployment of services.
Teams have full control over their own repositories.
Reduces risk of large-scale merge conflicts.
Disadvantages:
Harder to coordinate cross-service changes.
Dependency management can be more complex.
May lead to duplication of code across repositories.
What are the two common branching models?
Feature-Based Development - Developers create long-lived feature branches that may last for weeks or months before merging into the main branch.
Trunk-Based Development - Developers work primarily on a single main branch, with short-lived feature branches that are merged back within minutes or hours.
What are the advantages and disadvantages of Feature-Based Development?
Advantages:
Allows isolated development of new features.
Provides stability by keeping unfinished code out of the main branch.
Disadvantages:
Merging long-lived branches can be complex and lead to conflicts.
Delays in integration may cause unexpected failures when merging.
What are the advantages and disadvantages of Trunk-Based Development?
Advantages:
Encourages continuous integration.
Reduces merge conflicts by keeping branches short-lived.
Faster feedback and fewer integration problems.
Disadvantages:
Requires disciplined development practices.
Can be difficult for large teams to coordinate without proper tooling.
What are the essential practices of version control?
The essential practices of version control include:
Run commit tests locally.
Wait for commit tests to complete before proceeding.
Avoid committing on a broken build.
Never leave work with a broken build.
Be prepared to revert changes if needed.
Avoid commenting out failing tests.
Take responsibility for fixing breakages.
Why is it important to run commit tests locally?
Running commit tests locally ensures that the developer’s changes do not introduce failures before pushing them to the repository. This prevents unnecessary build failures and broken tests in shared branches.
Why should developers wait for commit tests to complete?
Developers should wait for commit tests to complete because:
It ensures that the build remains stable.
Developers can quickly fix failures instead of delaying corrections.
Why should developers avoid committing on a broken build?
Committing on a broken build:
Makes debugging more difficult.
Causes further build failures and wastes time.
Leads to a culture where broken builds become common and unresolved.
Why should developers never leave work with a broken build?
Developers should never leave a broken build because:
It delays fixes and impacts the entire team.
Developers may forget details of the change, making debugging harder.
Experienced developers commit changes at least an hour before leaving to ensure stability.
Why should developers be prepared to revert changes?
Developers should be prepared to revert changes because:
Quick reverts keep the project in a working state.
If a fix takes too long (e.g., over 10 minutes), reverting prevents prolonged issues.
Why should developers avoid commenting out tests?
It can lead to lower code quality.
Instead, developers should:
Fix the code if it fails.
Modify the test if assumptions change.
Delete the test if the functionality no longer exists.
Why should developers take responsibility for breakages?
Taking responsibility for breakages ensures that:
The codebase remains stable.
Developers collaborate to resolve issues quickly.
No single person is left fixing issues they did not introduce.
Lecture 10
What are the three benefits of a version control system according to Farley?
Step back to safety – Enables rolling back to previous versions in case of issues.
Share changes easily – Facilitates collaboration by allowing multiple contributors to work on the same project.
Store changes safely – Ensures that all changes are securely saved and can be retrieved when needed.
What are the three models of version control according to Farley?
Mono-repo – Stores everything in a single large repository.
Multi-repo – Each independent component has its own repository.
Multi-repo’ – Stores interdependent components in separate repositories.
Why does a mono-repo provide the three benefits of a version control system?
A mono-repo supports these benefits because:
Step back to safety – Rolling back changes affects all components together.
Share changes easily – Any component can be updated in a centralized manner.
Store changes safely – Everything, including dependencies, is stored in one place.
Why might a multi-repo not provide the three benefits of a version control system?
A multi-repo can struggle with these benefits because:
Step back to safety – No centralized rollback mechanism for all components.
Share changes easily – Difficult to coordinate updates across repositories.
Store changes safely – Versioning relationships between components are not inherently stored.
What are two solutions to the multi-repo problem according to Farley?
Fixed, well-understood APIs – Components interact through stable interfaces.
Flexible, backward/forward-compatible APIs – Ensures components work together despite version differences.
Why do Farley’s solutions to the multi-repo problem restore the three benefits of a version control system?
Step back to safety – Individual components can be rolled back independently.
Share changes easily – Updates can be coordinated through APIs (though this remains complex).
Store changes safely – Components are versioned separately but stored reliably.
Why is the “multi-repo’” model considered the worst of all worlds?
Because in the multi-repo’ model:
Components cannot be developed independently.
Components cannot be deployed independently.
This means teams face the worst aspects of both monolithic and multi-repo structures, making version control more difficult.
Lecture 11 - Continuous Integration
What is Continuous Integration (CI)?
Continuous Integration (CI) is the practice of quickly integrating newly developed code with the rest of the application code. This process is usually automated and results in a build artifact at the end. The goal of CI is to detect errors early and streamline the deployment process.
What are the key benefits of Continuous Integration?
Key benefits of CI include:
Early bug detection through automated testing.
Faster development cycles and releases.
Reduced integration problems.
Improved collaboration among development teams.
Higher code quality through continuous testing.
What are the four traditional product delivery releases?
The four traditional product delivery releases are:
Alpha Release - An early version for internal testing.
Beta Release - A more stable version for external user feedback.
Release Candidate - A near-final version tested for last-minute issues.
Final Release - The official version available to customers
What are the three modern feature delivery environments?
The three modern feature delivery environments are:
Development Environment - Where individual teams integrate their work, updated throughout a sprint.
Staging Environment - A near-production environment where multiple teams integrate their work, updated at the end of a sprint.
Production Environment - The live system where software is deployed for customer use, updated based on business needs.
What is Shift Left Testing, and how does it apply to CI?
Shift Left Testing refers to the practice of moving testing earlier in the software development cycle. It ensures that testing is done frequently and early, reducing defects and improving code quality. In CI, Shift Left Testing is crucial as it enables continuous feedback, helping to identify and fix issues before deployment.
What is the Test Pyramid, and what are its levels?
The Test Pyramid is a model that categorizes different levels of testing:
Unit Tests - Test individual functions or components, performed in milliseconds.
Service Tests - Test interactions between services, performed in minutes.
End-to-End Tests - Test the entire application workflow, mimicking user interaction, performed in several minutes.
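As an illustration of the pyramid's base, a minimal unit test written with Python's standard unittest module; the apply_discount function is a hypothetical example. It exercises a single function in milliseconds, with no services or network involved:

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()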
What is the Test Snow Cone, and why is it an anti-pattern?
The Test Snow Cone is an anti-pattern where more end-to-end tests exist than unit or service tests. This leads to slow test execution and longer feedback cycles. CI best practices encourage more unit and service tests over end-to-end tests to ensure efficiency.
What are Brittle Tests in Continuous Integration?
Brittle Tests are tests that fail because another dependent service fails, even if the functionality being tested is correct. They can cause false negatives, making debugging difficult.
What are Flaky Tests, and why are they problematic?
Flaky Tests sometimes fail due to non-deterministic issues such as timeouts or race conditions. They create unreliable feedback and reduce confidence in test automation.
What is the Normalization of Deviance, and how does it affect CI?
Normalization of Deviance is a concept where teams gradually accept small failures as normal, leading to degraded quality over time. In CI, failing tests must be addressed immediately to prevent this mindset and ensure reliable software.
What are Build Light Indicators, and how are they used in CI?
Build Light Indicators visually represent the status of CI builds. A green light means the build is successful, while a red light indicates a failure. Some teams use lava lamps or monitor screens to display build statuses.
What role does automation play in Continuous Integration?
Automation is central to CI as it enables frequent builds, automated testing, and fast feedback. It ensures that code changes do not introduce new errors and maintains software quality at scale.
Lecture 12 - Continuous Integration 2
What is integration hell in software development?
Integration hell is an anti-pattern in software development where different parts of a software system are integrated too late, leading to complex and time-consuming conflicts.
Why should commit tests be run locally before pushing changes (Rule 1)?
Running commit tests locally ensures that the deployment pipeline remains a valuable shared resource that is not blocked by unnecessary test failures.
Why should developers wait for test results before moving on (Rule 2)?
Developers should wait for test results to be available so they can immediately fix any issues, ensuring smooth progress.
Why must failures be fixed or reverted within 10 minutes (Rule 3)?
Fixing or reverting failures within 10 minutes prevents blocking progress for others and maintains development velocity.
What should happen if a teammate breaks the integration rules (Rule 4)?
If a teammate breaks the rules, their changes should be reverted to prevent them from blocking progress.
Why is it considered a “build sin” if someone else notices your failure first (Rule 5)?
If someone else notices a failure before you do, it indicates a lack of attentiveness and encourages developers to monitor their changes more closely.
What should a developer do once their commit passes (Rule 6)?
Once a commit passes, a developer should move on to their next task, as automated testing ensures that their changes are stable.
Who is responsible for fixing a failing test (Rule 7)?
The committer is responsible for fixing a failing test to ensure accountability in the development process.
What is the rule about responsibility when multiple people may be responsible for a failure (Rule 8)?
Everyone who may be responsible should agree on who will fix the failure, ensuring that accountability is maintained.
Why should developers monitor the progress of their changes (Rule 9)?
Monitoring changes ensures that any issue is detected early, preventing unfit software from being released
Why should any pipeline failure be addressed immediately (Rule 10)?
Immediate attention to pipeline failures ensures that the pipeline remains clear for other changes, maintaining continuous integration efficiency.
Lecture 13 - Continuous Delivery
What is Continuous Delivery (CD)?
Continuous Delivery (CD) is a software engineering practice where software is automatically moved from a source code repository to a staging environment. At the press of a “release” button, it can be deployed to the production environment for customer use.
How does Continuous Deployment differ from Continuous Delivery?
Continuous Deployment (CD) goes a step beyond Continuous Delivery by automatically deploying software to the production environment without manual intervention, making new features immediately available to customers.
What are the key principles of Continuous Delivery?
The key principles of Continuous Delivery include:
Create a repeatable process
Automate almost everything
Version control for everything
If it hurts, do it more frequently
Build quality in
Done means released
Everyone is responsible
Continuous improvement
Why is creating a repeatable process important in Continuous Delivery?
A repeatable process ensures consistency, reduces errors, and allows teams to become more efficient. A well-practiced process becomes routine and reliable.
Why is automation emphasized in Continuous Delivery?
Automation ensures accuracy, consistency, and efficiency. Manual processes introduce human error and inefficiencies, whereas automation standardizes execution and reduces risks.
What is the significance of version control in Continuous Delivery?
Version control allows any team member to build any version of the application on demand. It ensures traceability, facilitates rollback if needed, and supports collaboration.
Why should painful processes be done more frequently in Continuous Delivery?
Performing painful processes frequently helps improve efficiency, identify bottlenecks, and streamline workflows. Regular practice leads to familiarity and process refinement.
What does “Build Quality In” mean in Continuous Delivery?
This principle emphasizes fixing defects as soon as they are found. Early detection and resolution of defects are cost-effective and ensure higher software quality.
What does “Done Means Released” signify in Continuous Delivery?
A feature is only considered “done” once it has been deployed to a production-like environment. This avoids ambiguity in development progress and ensures accountability.
Why is team responsibility emphasized in Continuous Delivery?
Continuous Delivery requires collaboration across teams. When everyone is responsible, issues are resolved collectively rather than leading to blame culture and inefficiencies.
What is the role of continuous improvement in Continuous Delivery?
Continuous improvement encourages teams to reflect on successes and failures, leading to process optimizations and better software delivery over time.
What are the three types of testing in production?
The three types of testing in production are:
A/B Testing
Canary Testing
Blue/Green Testing
What is A/B Testing in production?
A/B Testing involves directing a small percentage of user traffic to a new interface in production. If users respond negatively, traffic is reverted to the old interface.
How does Canary Testing work?
Canary Testing directs a small percentage of traffic to a new version of the software. If issues arise, traffic is rolled back to the previous version.
What is Blue/Green Testing?
Blue/Green Testing swaps the production (blue) and staging (green) environments. If the new version performs well, the switch is made permanent; otherwise, it is reversed.
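A minimal sketch of how canary and A/B traffic splitting can be implemented; the version names and percentages are hypothetical choices:

    import random

    CANARY_WEIGHT = 0.05   # 5% of requests go to the new version

    # Canary: route a small share of requests to the canary; rolling back is
    # simply setting the weight back to zero.
    def choose_version() -> str:
        return "v2-canary" if random.random() < CANARY_WEIGHT else "v1-stable"

    # A/B testing splits on the user rather than the request, so each user
    # consistently sees one interface and their response can be measured.
    def choose_interface(user_id: int) -> str:
        return "interface-b" if user_id % 100 < 5 else "interface-a"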
Lecture 14 - Continuous Delivery 2
According to Jez Humble, how can we achieve continuous delivery?
Continuous delivery is achieved through fast, automated feedback on the production readiness of applications every time there is a change — to code, infrastructure, or configuration.
What condition should software always be in, according to Humble?
Software should always be in a production-ready or releasable state.
How does continuous delivery help to avoid the biggest source of waste in the software development process?
Continuous delivery helps avoid waste by making it easier to deploy new, experimental features into production quickly and efficiently, reducing delays and unnecessary rework.
When should testing be done in continuous delivery?
Testing should be done continuously throughout the development process, not just at the end.
Who is responsible for software quality in continuous delivery?
Everyone involved in the development process is responsible for quality, not just a dedicated QA team.
What is considered more important than delivering functionality, according to Humble?
Keeping the system in a working and stable state is more important than delivering new functionality.
How does continuous delivery reduce the risk of releases?
Continuous delivery reduces risk by enabling small, extensively tested changes to be released frequently, and by making reversion easy in case of issues.
What role does automation play in continuous delivery?
Automation is critical for providing fast feedback, reducing human error, and ensuring that every change can be safely and quickly deployed.
Why is continuous integration important in the context of continuous delivery?
Continuous integration ensures that changes are merged and tested frequently, reducing integration problems and allowing for quicker releases.
What are some common tools used in continuous delivery pipelines?
Common tools include Jenkins, GitHub Actions, GitLab CI/CD, CircleCI, and Travis CI for automation, along with Docker and Kubernetes for containerization and deployment.
What are the benefits of frequent, smaller releases in continuous delivery?
Frequent, smaller releases reduce risk, improve feedback loops, enable faster value delivery, and make it easier to pinpoint issues when they arise.
How does continuous delivery improve collaboration between development and operations teams?
Continuous delivery encourages DevOps practices, breaking down silos and promoting shared responsibility for deployment, monitoring, and system reliability.
What are some challenges organizations face when adopting continuous delivery?
Challenges include cultural resistance to change, legacy system constraints, lack of automation, and the need for robust testing strategies.
How does continuous delivery support business agility?
Continuous delivery enables businesses to respond quickly to market changes, customer feedback, and new opportunities by streamlining the software release process.
What is the relationship between continuous delivery and DevOps?
Continuous delivery is a key practice within DevOps, aiming to integrate development and operations for seamless, automated software releases.
How does monitoring play a role in continuous delivery?
Monitoring provides real-time feedback on system performance, helping teams detect and resolve issues quickly to maintain system reliability.
Why is rollback capability important in continuous delivery?
Rollback capability ensures that if an issue arises in production, teams can quickly revert to a previous stable version, minimizing downtime and impact.
How does feature flagging complement continuous delivery?
Feature flagging allows teams to deploy changes without exposing them to all users, enabling controlled testing and gradual rollouts.
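A minimal sketch of a feature flag with a gradual percentage rollout; the flag name and rollout figures are hypothetical:

    FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 10}}

    def is_enabled(flag: str, user_id: int) -> bool:
        cfg = FLAGS.get(flag)
        if not cfg or not cfg["enabled"]:
            return False
        # Deterministic bucketing within a process: the same user always gets
        # the same answer (real systems use a stable hash across processes).
        return hash((flag, user_id)) % 100 < cfg["rollout_percent"]

    def checkout(user_id: int) -> str:
        if is_enabled("new-checkout", user_id):
            return "new checkout flow"   # exposed only to the rollout group
        return "old checkout flow"       # also the instant rollback path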
What is the difference between continuous delivery and continuous deployment?
Continuous delivery ensures software is always ready for release, while continuous deployment automatically releases every successful change into production without manual intervention.
How can organizations measure the success of continuous delivery?
Success can be measured using metrics such as deployment frequency, lead time for changes, mean time to recovery (MTTR), and change failure rate.
Lecture 15 - Cloud Computing
What is cloud computing?
Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, and more—over the internet (“the cloud”). It allows users to rent rather than own IT infrastructure, offering scalability, flexibility, and cost savings.
Why is cloud computing compared to electricity utilities?
Just as electricity utilities provide power from centralized plants, cloud computing providers supply computing resources from centralized data centers. This model allows for economies of scale and expertise that individual users cannot achieve on their own.
What is a staging environment in cloud computing?
A staging environment is a pre-production environment where software is tested in conditions similar to production. It allows stakeholders to validate the software before deployment. Many enterprises rent their staging environments in the cloud rather than maintaining physical infrastructure.
Why has cloud computing become successful?
Broad network access (availability over standard networks, including VPNs).
On-demand, self-service (users can provision resources as needed).
Measured service (pay-per-use billing model).
Rapid elasticity (scaling resources up or down as needed).
Resource pooling (multi-tenant models for efficiency).
What are the key characteristics of broad network access in cloud computing?
Broad network access means cloud services are available over standard network technologies, including the internet and VPNs, ensuring accessibility from various devices and locations.
What does on-demand, self-service mean in cloud computing?
On-demand, self-service means customers can provision computing resources automatically without human intervention, typically through a web interface or API.
What is meant by measured service in cloud computing?
Measured service refers to the provider’s ability to track and optimize resource usage, ensuring customers pay only for what they consume.
What is rapid elasticity in cloud computing?
Rapid elasticity allows customers to scale computing resources up or down dynamically based on demand, ensuring efficiency and cost-effectiveness.
What is resource pooling in cloud computing?
Resource pooling enables cloud providers to serve multiple customers using shared resources, efficiently distributing computing power among users through multi-tenancy.
What are the two phases of cloud computing?
The two main phases are:
Serverful computing (traditional model with dedicated infrastructure).
Serverless computing (execution-based model where infrastructure management is abstracted).
What are the different serverful computing models?
Serverful computing includes:
Infrastructure-as-a-Service (IaaS): Access to raw computing resources (e.g., virtual machines).
Platform-as-a-Service (PaaS): Managed infrastructure with OS and development tools.
Software-as-a-Service (SaaS): Fully managed applications delivered over the cloud.
What technologies enable serverful computing?
Serverful computing relies on virtualization, which includes:
Virtual Machines (VMs): Software-based simulations of physical computers managed by hypervisors.
Containers: Lightweight, OS-level virtualization managed by the operating system.
How does the serverful cost model work?
The serverful cost model is based on resource rental, where customers pay for allocated resources, regardless of whether they are fully utilized. This is similar to renting a car.
What are the serverless computing models?
Serverless computing includes:
Backend-as-a-Service (BaaS): Pre-built backend services (e.g., authentication, databases).
Function-as-a-Service (FaaS): Execution of code in response to events without managing infrastructure.
How is serverless computing implemented?
Serverless computing uses hidden containers to run function code. Though servers are still used, their management is abstracted, and responsibility shifts to the cloud provider.
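A minimal sketch of a function in the style of an AWS Lambda Python handler; the platform provisions the hidden container, calls the handler once per event, and may tear it down afterwards (the event shape shown is a hypothetical example):

    import json

    def handler(event, context):
        # The provider invokes this per event; no server is written or managed.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }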
How does the serverless cost model work?
Serverless computing charges customers based on execution time rather than resource allocation. This model is often compared to hailing a taxi—you pay only for the ride, not for keeping a car.
How can microservices be implemented in cloud computing?
Microservices can be implemented using:
Virtual machines (serverful approach, more overhead).
Containers (lightweight, efficient serverful approach).
Function instances (serverless approach, potential maintenance/performance challenges).
What are potential issues when mapping microservices to multiple function instances?
Mapping a single microservice to multiple function instances may create:
Maintenance issues (tracking instances).
Performance issues (cold start problems when instances are inactive).
Lecture 16 - Cloud Computing 2
How long did it take to get a new server ready for code deployment in an FT data centre vs. an AWS data centre, according to Wells?
FT data centre: Several weeks to months.
AWS data centre: A few minutes to hours.
This highlights the agility and scalability benefits of cloud computing.
Should one worry about vendor lock-in, according to Wells?
Vendor lock-in occurs when it becomes costly or difficult to switch cloud providers.
Wells suggests it is not always a major concern because cloud providers offer significant advantages.
Mitigation strategies include using multi-cloud approaches and open standards.
What was the deployment frequency before and after moving to the cloud?
Before: Infrequent, possibly quarterly or monthly releases.
After: Continuous deployment, allowing multiple releases per day.
The cloud enables faster development cycles and quicker feedback loops.
Do you have to choose between speed and stability in cloud computing?
No, modern DevOps practices enable both.
Automation, continuous integration/continuous deployment (CI/CD), and robust monitoring improve stability while maintaining rapid delivery.
Why should you use a queue in cloud-native architecture?
Queues decouple system components, enhancing scalability and reliability.
They help handle asynchronous processing and load balancing.
Example: Message queues (e.g., AWS SQS, RabbitMQ) prevent system overload.
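A minimal in-process sketch of the decoupling idea using Python's standard queue module (a real system would use a network message queue such as those named above):

    import queue, threading

    jobs = queue.Queue()

    def worker():
        while True:
            job = jobs.get()           # blocks until work is available
            print("processing", job)   # the slow work happens asynchronously
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    for i in range(5):
        jobs.put(f"job-{i}")           # the producer returns immediately

    jobs.join()                        # a spike fills the queue, not the workers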
What should you focus on when developing a distributed system?
Resilience and fault tolerance.
Network latency and eventual consistency.
Observability: logging, monitoring, and tracing.
Scalability: designing for auto-scaling and load balancing.
Why should one adopt business-focused monitoring?
Traditional monitoring focuses on infrastructure metrics (CPU, memory, etc.).
Business-focused monitoring tracks key performance indicators (KPIs) like user engagement, conversion rates, and revenue.
Helps align IT efforts with business goals.
Why should one test infrastructure recovery plans?
Ensures business continuity in case of failures.
Identifies weaknesses in disaster recovery strategies.
Techniques include chaos engineering (e.g., Netflix’s Chaos Monkey) to simulate failures and test resilience.