Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith by Sam Newman
This book provides a comprehensive guide to transitioning from monolithic systems to microservice architectures. It emphasizes the importance of understanding whether microservices are suitable for your organization and offers strategies for managing this transition effectively.
Key Concepts
Microservices Overview:
- Independent Deployability: Microservices are designed to be deployed independently, allowing for more flexible updates and scaling.
- Business Domain Modeling: They should be organized around business domains, ensuring that each service aligns with specific business needs.
- Data Ownership: Each microservice should manage its own data, reducing dependencies and potential conflicts.
Advantages and Challenges:
- Advantages: Microservices can improve team autonomy, reduce time to market, and scale cost-effectively. They also enhance robustness and allow for the adoption of new technologies.
- Challenges: They can lead to complexities such as ownership issues, increased overhead in managing multiple services, and potential difficulties in maintaining data consistency.
Monoliths:
- Types: Includes single-process monoliths, modular monoliths, and distributed monoliths, each of which can become challenging to manage as it grows.
- Challenges: High coupling and low cohesion can make changes difficult and slow.
- Advantages: Simplicity in deployment and easier management of a single codebase.
Migration Planning
Key Considerations:
- Assessing Suitability: Evaluate if microservices fit your organizational goals, considering factors like domain clarity and team readiness.
- Incremental Migration: Transition gradually rather than all at once, allowing for testing and adaptation along the way.
- Organizational Change: Successful migration requires changes in organizational structure and culture, including team reorganization and skill development.
Decomposition Strategies
Application Decomposition:
- Strangler Fig Pattern: Gradually replace parts of the monolith with microservices, minimizing disruption.
- UI Composition: Integrate microservices at the user interface level to allow for phased migration.
Database Decomposition:
- Shared Database Pattern: Initially use a shared database to ease transition but plan for eventual separation.
- Database Wrapping Service: Encapsulate database access within a service to manage data more effectively.
Growing Pains and Solutions
As microservice architectures expand, they introduce new challenges:
- Ownership: Clear ownership and responsibility are crucial as the number of services grows.
- Breaking Changes: Implement strategies to manage breaking changes and maintain service compatibility.
- Monitoring and Troubleshooting: Develop robust monitoring systems to handle increased complexity.
- Local Developer Experience: Ensure a streamlined development process to prevent bottlenecks.
Conclusion
The book serves as a practical guide for organizations considering or undergoing a transition to microservices. It highlights the importance of careful planning, understanding the trade-offs involved, and preparing for the complexities that come with microservice architectures.
For more detailed exploration, including technical patterns and real-world examples, refer to the full text.
Microservices Overview
Microservices are independently deployable services modeled around business domains, forming a type of service-oriented architecture (SOA). They communicate via network endpoints, encapsulating data storage and retrieval, and are technology agnostic. The key advantage of microservices is independent deployability, allowing changes to be made and deployed without affecting other services. This requires services to be loosely coupled, with well-defined, stable interfaces to ensure smooth communication and deployment.
Independent Deployability
The concept of independent deployability is crucial for microservices. It involves deploying changes to a single microservice without impacting others, necessitating loosely coupled services with explicit contracts. Sharing databases is discouraged as it complicates this process. The goal is to minimize the need for cross-service changes, maintaining stable service interfaces.
Business Domain Modeling
Microservices should be modeled around business domains to reduce the complexity of making changes across service boundaries. This approach contrasts with traditional three-tier architectures, which often align with team structures rather than business functionalities. By focusing on business domains, microservices can encapsulate UI, application logic, and data storage, making it easier to manage changes and organize teams.
Data Ownership
Microservices should own their data, avoiding shared databases. This promotes stable interfaces and reduces coupling, essential for independent deployability. Encapsulating data and behavior within services ensures high cohesion and reduces the effort needed for business-related changes.
Advantages of Microservices
Microservices offer flexibility, scalability, and robustness. They allow parallel development, enabling more developers to work on different parts of the system without interference. This architecture supports varied technology choices, facilitating the use of different programming languages and platforms.
Challenges and Considerations
Despite their advantages, microservices introduce complexities, especially in network communication, which involves latency and potential failures. Handling transactions and maintaining data consistency across distributed systems are significant challenges. The architecture requires careful consideration of system decomposition and technology choices to avoid costly mistakes.
User Interfaces and Technology
It’s important not to leave the user interface as a monolithic layer. Breaking apart the UI can enhance the benefits of microservices. While adopting new technologies can be tempting, it’s crucial to avoid overcomplicating the architecture with unnecessary innovations.
Conclusion
Microservices provide options and flexibility but come with costs. They require thoughtful implementation, focusing on business domain modeling, data ownership, and careful technology adoption to maximize their benefits while managing inherent complexities.
Adopting a microservice architecture involves navigating challenges inherent to distributed systems. It’s beneficial to start with familiar technology and adjust as needed. Microservices are technology agnostic; they allow flexibility in choosing programming languages and tools. The focus should be on managing complexity and defining clear service boundaries rather than on the size of the services.
The term “microservices” emerged from the idea of “micro-apps,” with an emphasis on small, easy-to-replace services. This aligns IT artifacts with business domains, promoting integration between business and IT functions. Microservices facilitate aligning delivery teams around product lines, reducing shared services across teams, and minimizing delivery contention.
Monoliths, in contrast, are units of deployment where all functionality is deployed together. There are single-process monoliths, modular monoliths, and distributed monoliths. Single-process monoliths are common and can be challenging due to coupling and deployment issues. Modular monoliths offer a middle ground by allowing parallel work on separate modules, though database decomposition can be complex.
Distributed monoliths have multiple services but require the entire system to be deployed together, often leading to tightly coupled architectures. Third-party systems can also be monolithic, and decomposition techniques can help manage them. While monoliths are often seen as legacy systems, they offer simpler deployment and code reuse, making them a valid architectural choice.
Understanding coupling and cohesion is crucial in defining microservice boundaries. High cohesion and low coupling lead to stable structures, minimizing the cost of changes across service boundaries. The goal is independent deployability, allowing changes to be made and deployed without affecting other services. These concepts are foundational to modular software and microservice architectures, focusing on creating stable, independently deployable modules.
The concepts of coupling and cohesion, initially outlined by Larry Constantine in 1968, are fundamental to software design. Cohesion refers to grouping functionality that changes together, minimizing the need to alter multiple services when implementing changes. This enhances the efficiency of microservice architecture by reducing the cost of change.
Coupling, on the other hand, is about the interdependence between components. High coupling means that changes in one component necessitate changes in another. Different types of coupling include implementation, temporal, deployment, and domain coupling.
Implementation Coupling occurs when components are dependent on each other’s internal implementations. For example, if a Recommendation service directly accesses the Order service’s database, changes in the Order service’s schema could break the Recommendation service. Using an API to fetch data instead can mitigate this issue by hiding implementation details.
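To make the contrast concrete, here is a minimal sketch of a Recommendation component fetching order data through the Order service's API rather than reading its tables directly. The endpoint, response shape, and field names are hypothetical, and it uses the third-party requests library:

```python
import requests  # third-party HTTP client

ORDER_SERVICE_URL = "http://orders.internal/api/orders"  # hypothetical endpoint

def recent_orders_for(customer_id: str) -> list[dict]:
    """Fetch orders via the Order service's public API.

    The Recommendation code depends only on the API contract, so the
    Order service can change its internal schema freely behind it.
    """
    response = requests.get(ORDER_SERVICE_URL, params={"customer_id": customer_id}, timeout=5)
    response.raise_for_status()
    return response.json()["orders"]  # assumed response shape
```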
Temporal Coupling involves runtime dependencies where services need to be available simultaneously for operations to complete. This is common with synchronous calls. Solutions include caching or using asynchronous messaging to decouple service availability from operation execution.
Deployment Coupling refers to the need to deploy multiple components together. Reducing this coupling, perhaps through independently deployable microservices, lowers deployment risk. Smaller, isolated updates are less risky and easier to manage, aligning with continuous delivery principles.
Domain Coupling is inherent in systems where services must interact to model real-world processes. For instance, an Order Processing service might send order details to a Warehouse service. By sharing only necessary information, such as using a Pick Instruction instead of full order details, domain coupling can be minimized. This approach also helps protect sensitive information.
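A minimal sketch of that idea, with hypothetical field names: the Warehouse receives a narrow Pick Instruction projected from the full order, so payment details and other sensitive data never cross the service boundary.

```python
from dataclasses import dataclass

@dataclass
class PickInstruction:
    """Only what the Warehouse needs to pick an order (hypothetical fields)."""
    order_id: str
    shelf_locations: list[str]
    quantity_by_sku: dict[str, int]

def to_pick_instruction(order: dict) -> PickInstruction:
    # Project the full internal order onto the narrow shared contract.
    return PickInstruction(
        order_id=order["id"],
        shelf_locations=[line["shelf"] for line in order["lines"]],
        quantity_by_sku={line["sku"]: line["qty"] for line in order["lines"]},
    )
```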
Information hiding, a concept introduced by David Parnas, is crucial in managing coupling. It involves separating frequently changing parts of the code from stable ones, ensuring module boundaries are stable. This allows internal changes without affecting external interfaces, supporting independent deployability.
Service design should adopt “outside-in” thinking, focusing on consumer needs when defining service interfaces. This perspective helps shape user-friendly interfaces and maintains a clear separation between external contracts and internal implementations.
Overall, carefully managing coupling and cohesion in microservice architectures supports easier changes, safer deployments, and better alignment with business needs. These principles, rooted in early software design theories, remain essential for effective system architecture today.
Domain-Driven Design (DDD) helps align software with real-world domains. It emphasizes modeling domain concepts like Orders or Invoices as aggregates, which are self-contained units with their own lifecycle and state transitions. Aggregates can interact but should maintain clear boundaries. In microservices, a service manages one or more aggregates, ensuring encapsulation and preventing illegal state changes.
Bounded contexts represent larger organizational boundaries, encapsulating aggregates and hiding internal details. They define explicit interfaces for interaction, which can map to service boundaries in microservice architectures. Initially, it’s advisable to target services that encompass entire bounded contexts and later split them into smaller services around aggregate boundaries if needed.
Microservices, independently deployable services modeled around business domains, communicate over networks. They use information hiding and DDD principles to create stable boundaries, reducing coupling. Transitioning to microservices requires careful planning and a clear understanding of goals, avoiding the trend of adopting them without purpose.
Organizations should consider if microservices align with their goals, exploring alternatives and defining metrics to measure success. Common motivations include improving team autonomy and reducing time to market. Autonomous teams, empowered with decision-making capabilities, can enhance efficiency. Microservices can facilitate this by allowing teams to own and control services.
Reducing time to market involves deploying changes independently, without waiting for coordinated releases. However, improving delivery speed involves various factors beyond architecture, such as identifying bottlenecks in the production path. Analyzing the entire process from idea to deployment can reveal delays and areas for improvement.
Ultimately, adopting microservices should be a strategic decision based on achieving specific outcomes, not a goal in itself. Organizations should critically assess their needs, consider simpler alternatives, and focus on delivering value, even if it means reverting to traditional methods.
The text discusses the strategic implementation of microservices in software development, emphasizing its advantages and considerations. Microservices allow for independent scaling of processes, leading to cost-effective management of operational loads. This architecture is beneficial in SaaS products as it provides greater control over costs. Alternatives to microservices include vertical scaling, horizontal scaling, and technology replacement, though these have limitations.
Microservices enhance robustness and resilience by decomposing applications into independently deployable processes, thus isolating failures and minimizing system-wide outages. However, merely adopting microservices doesn’t guarantee robustness; it requires careful design to handle network partitions and service outages. Alternatives include running multiple monolith copies with load balancers and distributing instances across failure planes.
Scaling the number of developers is another benefit, as microservices reduce delivery contention by allowing independent work on decoupled components. However, successful scaling requires team autonomy and alignment with service ownership. An alternative is a modular monolith, where teams own different modules, though this can still lead to deployment coordination challenges.
Microservices offer technological flexibility, allowing different technologies for each service, which can be advantageous for innovation and developer satisfaction. Monoliths, by contrast, limit technology choices, and adopting a new technology often means overhauling the entire system. The "strangler fig" approach can incrementally replace an old monolith with a new one.
Reuse, often seen as a goal of microservices, is critiqued as a poor objective. Instead, focus should be on actual outcomes like faster time to market, as reuse can lead to unnecessary coordination and delays.
Microservices may not be suitable in certain situations. Unclear domain boundaries can lead to costly errors, as seen with SnapCI’s initial microservice implementation. Startups, focused on finding product-market fit, may find microservices premature, as they primarily address scaling issues post-success.
In summary, while microservices offer several advantages in scalability, robustness, developer productivity, and technological flexibility, they require careful planning and consideration of alternatives. They are not a one-size-fits-all solution and may not be suitable for all organizations, especially those in early stages or with unclear domain boundaries.
When considering a microservices architecture, startups should focus on initial success rather than prematurely adopting microservices. It’s often easier to partition an existing system than to start with microservices from scratch. With a brownfield system, you have a functional baseline and understand its production behavior, which helps identify mistakes and manage performance issues. Startups should cautiously split around clear boundaries and maintain a monolithic approach elsewhere, assessing operational maturity before scaling.
For customer-installed software, microservices can introduce operational complexity that customers may not be equipped to handle. Customers are familiar with monolithic deployments on specific platforms, such as Windows or macOS. Transitioning to microservices could overwhelm them, especially if it requires managing multiple processes or using technologies like Kubernetes, which vary significantly in implementation.
Adopting microservices without a clear goal is unwise. The desired outcomes should guide migration and system decomposition. Avoid adopting microservices just because it’s a trend. It’s crucial to separate core drivers from secondary benefits to prevent confusing priorities. For instance, while scaling might be the primary goal, other objectives like team autonomy or new technologies should not overshadow it. Prioritizing outcomes helps in decision-making and aligning goals across teams.
Change initiatives often involve multiple objectives, which can complicate priorities and delay benefits. It’s important to focus on the primary driver, such as scaling, and recognize secondary goals as less critical. Using a prioritization framework, like sliders, helps balance competing priorities and guides decision-making. This approach also aids in distributing responsibility and empowering autonomous teams.
Organizational change requires a shared understanding of goals. Dr. John Kotter’s eight-step process for change management is useful, especially in large-scale shifts. Establishing urgency is key; leveraging teachable moments can motivate change. Forming a guiding coalition involves identifying key supporters within and outside IT to champion the change. Trust and small wins can build support.
Developing a vision and strategy involves defining the change goal and the approach to achieve it. The vision should be realistic yet aspirational, while the strategy may evolve. Communicating the vision effectively is crucial; face-to-face interactions are more impactful than digital communication. Empowering employees involves removing roadblocks and providing the necessary bandwidth and resources to implement change. For microservices, this might mean addressing infrastructure provisioning challenges.
Overall, adopting microservices should be a well-considered decision, aligned with clear objectives and supported by a structured change management process. Prioritizing core goals, involving key stakeholders, and effectively communicating the vision are essential steps for successful implementation.
Google's Test Mercenaries program developed frameworks to simplify test suite creation and management, enhancing the visibility of tests in the code review system; this led to the creation of a new company-wide CI tool. The approach was incremental, starting small and expanding based on initial successes. Quick wins were crucial to maintain momentum and faith in the vision; in microservice decomposition, that means focusing first on easily extractable functionality.
Once successes were achieved, it was vital to reflect and adapt strategies for continued progress, especially as more complex challenges like database decomposition emerged. Sharing successes and failures helped embed new practices into the organizational culture, making them standard. Incremental migration was emphasized over big-bang rewrites, allowing for learning and minimizing the impact of errors.
Jeff Bezos’ concept of Type 1 and Type 2 decisions was highlighted, stressing that most decisions in microservice transitions are reversible and should be made quickly. The importance of assessing decisions on a spectrum of reversibility was noted, with irreversible decisions requiring more deliberation.
Cost of change varies, with code changes being less risky than database alterations. Mistakes should be made where the impact is lowest, using tools like whiteboards for initial planning. Domain-driven design plays a key role in identifying microservice boundaries and prioritizing decomposition. Event Storming is a collaborative technique that helps define a shared domain model, focusing on domain events without necessarily leading to an event-driven system.
Prioritization can be guided by domain models, assessing dependencies to determine the complexity of extracting functionalities. This approach provides a structured path for microservice decomposition, allowing for continuous refinement and adaptation based on ongoing insights.
In planning a migration to microservices, it’s crucial to assess the difficulty of extracting system components. The text suggests starting with components like Invoicing, which have fewer dependencies, using patterns like the strangler fig to ease the transition. However, the logical domain model may not reflect the actual code structure, necessitating a deeper code analysis to understand entanglements.
A two-axis model is proposed for prioritizing service decomposition, balancing ease of extraction against potential benefits. This helps identify quick wins to build momentum, though priorities might shift as more is learned during the process.
Reorganizing teams is also key. Traditional IT structures, with siloed teams based on competencies, are evolving. Modern approaches integrate roles like testing and operations into delivery teams, promoting autonomy and reducing hand-offs. This shift aligns with microservices’ focus on business functionality rather than technology.
However, organizational change should be context-specific. The text warns against blindly adopting models like Spotify’s, which may not fit different business contexts. Instead, organizations should adapt based on their specific needs and culture, using other companies’ experiences as inspiration rather than blueprints.
To facilitate this transition, the text recommends mapping current responsibilities and envisioning future states. This involves understanding existing processes and planning incremental changes, such as merging frontend and backend responsibilities or enabling teams to provision their own test environments.
Skill development is another focus. Encouraging self-assessment helps identify areas where team members need growth, allowing for targeted coaching and resource allocation. This approach was effectively used in a project with The Guardian, where developers set personal goals and received tailored support.
Overall, the text emphasizes a strategic, context-aware approach to transitioning to microservices, considering both technical and organizational aspects to ensure a successful migration.
In the context of software development, self-assessment of skills is crucial for personal and team growth. Keeping these assessments private ensures individuals focus on self-improvement rather than comparison with others. By anonymizing and aggregating these assessments, teams can identify skill gaps and address them through training or hiring specialists. For example, if a team needs to improve in areas like Kafka or Kubernetes, it might justify internal training or hiring experts.
When transitioning to microservices, it’s essential to define measurable goals to track success. These measures should include both quantitative metrics, like cycle time and failure rates, and qualitative feedback from team members. Regular checkpoints should be established to evaluate progress, allowing teams to reflect and decide if changes are needed. This process helps prevent the sunk cost fallacy, where teams persist with failing strategies due to prior investments.
The transition to microservices should be incremental, allowing for small, manageable mistakes. This approach minimizes disruption and enables continuous learning and adaptation. It’s crucial to integrate new microservices with existing systems and deploy them in production to gain real-world experience and benefits.
Deciding whether to modify the existing monolith is a key consideration. If changes are possible, it offers flexibility in migration strategies. However, if the monolith is unchangeable due to constraints like vendor restrictions or lack of skills, alternative patterns must be used. Even with access to the monolith’s code, the decision to move or re-implement functionality depends on the code’s organization. Often, monoliths are not structured around business domains, creating challenges in migration.
Ultimately, the transition to microservices involves adopting a culture of continuous improvement and being open to new approaches. Teams should focus on delivering incremental value, learning from each step, and adjusting strategies as needed. This mindset ensures adaptability and long-term success in evolving software architectures.
Reorganizing a monolithic codebase along business domain boundaries can be challenging, but leveraging techniques from “Working Effectively with Legacy Code” by Michael Feathers, such as defining seams, can help. Seams allow changes without altering existing behavior and can align with domain-driven design principles. Once the codebase is understood, extracting identified seams into separate modules can create a modular monolith, which retains a single deployment unit but consists of multiple modules. This approach offers benefits similar to microservices without their complexities.
Incremental rewrites of the monolith are generally preferable to complete rewrites. However, many teams opt for clean-room implementations of microservices, focusing on small, manageable functionality pieces to avoid big bang rewrite issues. This involves regular shipping of reworked functionality to customers.
The strangler fig pattern is a common technique for migrating monolithic systems to microservices. Inspired by the strangler fig tree, this pattern allows the new system to coexist with the old, gradually replacing it. This incremental migration supports rollback and reduces risk. The process involves identifying parts of the system to migrate, implementing them in a new microservice, and redirecting calls from the monolith to the new service. This separation of deployment from release allows for testing in production before full rollout.
The strangler fig pattern is effective when the monolith is actively used or a black-box system. It allows for functionality extraction without altering the existing system, beneficial when the monolith is third-party software. Challenges arise when extracting deeper functionality or when multiple inbound calls trigger a function, requiring more complex solutions like branch by abstraction.
HTTP reverse proxies are commonly used to implement the strangler fig pattern. They intercept and redirect calls, facilitating transparency and incremental rollout. Proxies like NGINX can manage redirection effectively, especially with HTTP-based systems. Redirection complexity varies based on how functionality is identified in requests, with URI paths being simpler than buried request body parameters.
Overall, the strangler fig pattern provides a structured approach to incrementally migrate systems, allowing for error correction and minimizing disruption during the transition to microservices.
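The routing decision at the heart of a strangler fig proxy can be sketched minimally as below. The service URLs are hypothetical, and a real deployment would typically use an off-the-shelf proxy such as NGINX rather than hand-rolled code:

```python
# Routes migrated so far; everything else still goes to the monolith.
MIGRATED_PREFIXES = {
    "/payroll": "http://payroll-service.internal",  # hypothetical new microservice
}
MONOLITH = "http://monolith.internal"

def upstream_for(path: str) -> str:
    """Pick the upstream for an inbound request path.

    Adding an entry to MIGRATED_PREFIXES diverts traffic to a new
    service; removing it rolls traffic back to the monolith, which is
    what makes the strangler fig migration incremental and reversible.
    """
    for prefix, service in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return service
    return MONOLITH

assert upstream_for("/payroll/run") == "http://payroll-service.internal"
assert upstream_for("/invoices/42") == "http://monolith.internal"
```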
The text discusses strategies for transitioning from monolithic to microservice architectures, emphasizing caution in custom proxy coding due to inefficiencies. Instead, adding custom behavior to existing proxies, like NGINX with Lua, is recommended. The incremental rollout of microservices allows for gradual architectural changes without halting other operations, avoiding the need for a complete switchover.
The text highlights the potential for using proxies to transform communication protocols, such as switching from SOAP to gRPC. However, this can complicate the proxy layer, slowing down independent service deployments. The advice is to keep proxies simple and manage protocol changes within services, supporting both old and new protocols to facilitate smoother migrations.
Service meshes, like those used at Square, are introduced as a solution for managing service-to-service communications. Each service instance communicates through a dedicated local proxy, avoiding centralized complexity. While service meshes offer centralized control, the tooling is still stabilizing, and caution is advised before adoption.
The strangler pattern is applicable beyond HTTP systems, as demonstrated by Homegate’s transition from FTP to a REST API for real estate listings. By intercepting FTP uploads and redirecting them to a new service, Homegate maintained customer experience while improving backend processes.
For message-driven systems, content-based routing and selective consumption are methods to intercept and redirect messages. Content-based routing adds complexity to the messaging layer, while selective consumption involves coordinating changes between the monolith and new services. Both methods have trade-offs, and careful implementation is necessary to avoid issues like message duplication or loss.
The strangler fig pattern is a versatile approach for incremental system re-platforming, not limited to microservices. It has been used in various industries for monolithic rebuilds. When migrating functionality, avoiding changes in behavior until migration completion is advised to maintain rollback capability.
UI composition is another technique for incremental migration, allowing functionality to be served from both monolithic and microservice architectures. This approach was used in The Guardian’s migration to a new CMS, demonstrating its effectiveness in phased transitions.
Overall, the text emphasizes the importance of gradual transitions, managing complexity, and maintaining flexibility in system migrations to microservices.
The text discusses the transition from monolithic architectures to microservice architectures, using The Guardian’s migration as a case study. Initially, The Guardian implemented a system that lasted nearly a decade before transitioning to a new architecture. This transition involved a page-based migration strategy, allowing for a gradual rollout of new features and aesthetics on a vertical-by-vertical basis, starting with less critical sections like Travel. This method ensured minimal disruption, with old page links redirected to new URLs. A similar strategy was used later when moving away from a Java monolith, utilizing a content delivery network (CDN) like Fastly for routing rules.
The REA Group in Australia also adopted a page-based composition approach, which was logical given their distinct branding for different channels. This allowed teams to own the entire end-to-end experience. The Guardian initially tested their new platform with a single widget to mitigate risks before a full-scale release.
The text highlights the evolution of UI composition techniques, noting the shift from server-side to browser-based widget splicing due to advancements in browser technology. This allows for more robust web UIs where individual widgets can fail without affecting the entire page. Companies like Orbitz (now part of Expedia) have used dedicated services for each widget, transitioning from a monolithic Content Orchestration service to microservices by delegating responsibilities to downstream services.
The text also touches on mobile applications, explaining how component-based UIs in Android and iOS allow for isolated work on different parts of the UI. However, deploying changes in native mobile apps is slower due to app store vetting processes. Companies like Spotify use server-side configurations to update UI components dynamically without needing app store resubmissions.
Micro Frontends are discussed as a modern approach to breaking down UIs into independently manageable components, leveraging frameworks like Vue, Angular, and React. This modularization allows for incremental migration to microservices, though it introduces complexity in managing dependencies and information sharing between components.
The Branch by Abstraction pattern is introduced as a method for incremental changes within a monolith. It involves creating an abstraction layer to allow old and new implementations to coexist, facilitating gradual migration to microservices without disrupting ongoing development. This pattern consists of creating an abstraction, refactoring existing clients to use it, developing a new implementation, switching to the new implementation, and cleaning up the old code. This approach minimizes disruption and supports continuous integration by avoiding long-lived source code branches.
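A minimal sketch of those steps, assuming a hypothetical notification feature; the monolith and service calls are stand-ins:

```python
from abc import ABC, abstractmethod

def legacy_notify(user_id, message):                  # stand-in for existing monolith code
    print(f"[monolith] notify {user_id}: {message}")

def post_to_notification_service(user_id, message):  # stand-in for an HTTP call
    print(f"[new service] notify {user_id}: {message}")

class NotificationSender(ABC):                        # step 1: create the abstraction
    @abstractmethod
    def send(self, user_id: str, message: str) -> None: ...

class MonolithNotificationSender(NotificationSender):
    """Step 2: existing behavior, refactored to sit behind the abstraction."""
    def send(self, user_id, message):
        legacy_notify(user_id, message)

class MicroserviceNotificationSender(NotificationSender):
    """Step 3: the new implementation, calling the extracted service."""
    def send(self, user_id, message):
        post_to_notification_service(user_id, message)

def make_sender(use_new_service: bool) -> NotificationSender:
    # Step 4: switch implementations via configuration; step 5 is deleting
    # the monolith implementation once the new one has bedded in.
    return MicroserviceNotificationSender() if use_new_service else MonolithNotificationSender()

make_sender(use_new_service=False).send("u-1", "order shipped")
```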
Overall, the text emphasizes the benefits of modularization and incremental migration in transitioning from monolithic to microservice architectures, highlighting various techniques and patterns to manage this process effectively.
The text discusses techniques for managing transitions from monolithic systems to microservices using patterns like feature toggles, branch by abstraction, and parallel runs. Feature toggles enable easy switching between old and new implementations, allowing for quick reversions if issues arise. This is achieved through configuration files, eliminating the need for code changes. After implementing a new microservice, the old functionality and any unused feature flags should be removed to simplify the system.
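A minimal feature-toggle sketch under those assumptions: the flag lives in a config file (hypothetical name and key), so switching or reverting an implementation needs no code change.

```python
import json
import os

def load_toggles(path: str = "feature_toggles.json") -> dict:
    """Feature flags live in configuration, not code."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

TOGGLES = load_toggles()  # e.g. {"use_new_invoicing_service": true}

def monolith_create_invoice(order):     # stand-in for existing code
    return f"invoice for {order} (monolith)"

def new_invoicing_service_call(order):  # stand-in for the new microservice call
    return f"invoice for {order} (microservice)"

def create_invoice(order):
    if TOGGLES.get("use_new_invoicing_service", False):
        return new_invoicing_service_call(order)
    return monolith_create_invoice(order)

print(create_invoice("order-123"))
```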
Branch by abstraction is a strategy for introducing changes without disrupting existing functionality. It allows two implementations to coexist, providing a fallback if the new implementation fails. However, this can complicate system behavior, especially if both implementations are stateful, as data consistency becomes a concern. The pattern is useful when code changes are time-consuming, and it helps avoid long-lived code branches.
Parallel run involves executing both old and new implementations simultaneously to verify equivalency. Only one implementation is the source of truth, usually the old one, until the new system is verified. This pattern is useful for high-risk changes and can assess both functional and non-functional aspects, like response time and failure rates. It has been applied in financial systems to ensure accurate calculations, and in systems like Homegate for listing imports.
N-version programming is a variation where multiple implementations run concurrently, often used in safety-critical systems like aircraft controls. It aims to reduce the impact of defects by comparing responses and selecting the correct one based on a consensus.
Verification techniques include using Spies to simulate functionality without executing it, allowing for verification without side effects. GitHub’s Scientist library supports this by enabling side-by-side implementation comparisons.
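In the spirit of Scientist (itself a Ruby library), a hand-rolled parallel run might look like this minimal sketch: both implementations execute, results and timings are compared and logged, but callers only ever see the old implementation's answer.

```python
import logging
import time

log = logging.getLogger("parallel_run")

def parallel_run(old_impl, new_impl, *args):
    """Run both implementations; trust only the old one's answer.

    Mismatches and timings are logged, so the new code path is verified
    against real traffic without affecting what callers observe.
    """
    t0 = time.perf_counter()
    old_result = old_impl(*args)
    old_ms = (time.perf_counter() - t0) * 1000

    try:
        t1 = time.perf_counter()
        new_result = new_impl(*args)
        new_ms = (time.perf_counter() - t1) * 1000
        if new_result != old_result:
            log.warning("mismatch: old=%r new=%r args=%r", old_result, new_result, args)
        log.info("timings: old=%.1fms new=%.1fms", old_ms, new_ms)
    except Exception:
        log.exception("new implementation failed; old result still returned")

    return old_result  # the old implementation remains the source of truth
```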
Dark launching and canary releasing are related techniques. Dark launching tests new functionality invisibly to users, while canary releasing exposes it to a subset of users to minimize risk. These strategies are part of progressive delivery, which controls software rollout to ensure stability and effectiveness.
Implementing a parallel run is complex and should be reserved for high-risk changes. The decorating collaborator pattern can be used when the monolith cannot be changed, allowing new functionality to be added externally. This approach intercepts calls and triggers additional behavior without altering the monolith itself.
The text discusses the implementation of microservices using patterns like the strangler fig, decorating collaborator, and change data capture (CDC). It highlights the complexity of proxies in microservices, which need to handle calls to new services and manage responses. The proxy can become a microservice itself if it grows too complex, and it requires sufficient information from inbound requests to function properly. If the necessary data isn’t available, the monolith might need modification or the use of invasive techniques like CDC.
The decorating collaborator pattern is useful when information can be extracted from requests or responses, but becomes complex if additional data is needed. CDC, on the other hand, reacts to changes in a datastore and is tightly coupled to the monolith’s database. It can be implemented using database triggers or transaction log pollers. Triggers are easy to use but can complicate system understanding if overused. Transaction log pollers are sophisticated and can run independently of the database, reducing coupling concerns.
The text also covers the issuance of loyalty cards and how CDC can be used to trigger a loyalty card printing service upon detecting changes in a database table. Techniques for implementing CDC include database triggers, transaction log pollers, and batch delta copiers, each with trade-offs in complexity and reliability. CDC is particularly useful when reacting to data changes in a monolith without intercepting system calls or altering the codebase.
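As a minimal sketch of the simplest of those options, a batch-style poller can detect newly inserted loyalty accounts and trigger card printing. The table, columns, and downstream call are hypothetical; a transaction log poller would replace the SELECT with log tailing.

```python
import sqlite3
import time

def print_loyalty_card(name: str) -> None:  # stand-in for the printing service
    print(f"printing loyalty card for {name}")

def poll_for_new_cards(db_path: str, last_seen_id: int) -> int:
    """Pick up rows inserted since the last poll and react to each one."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, customer_name FROM loyalty_accounts WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    conn.close()
    for row_id, name in rows:
        print_loyalty_card(name)
        last_seen_id = row_id
    return last_seen_id

if __name__ == "__main__":
    last_id = 0
    while True:  # simple polling loop against the monolith's database
        last_id = poll_for_new_cards("monolith.db", last_id)
        time.sleep(5)
```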
The document emphasizes the challenges of database decomposition in microservices, advocating for microservices to encapsulate their own data storage. It warns against shared databases, which can lead to issues with data control and business logic cohesion. Instead, splitting databases allows each microservice to own its data, though this is not always feasible initially. Coping patterns like database views and database wrapping services can help manage shared databases.
Database views provide a limited projection of data to services, mitigating coupling concerns. The text stresses the importance of addressing database coupling and suggests that while splitting schemas is ideal, coping patterns can be interim solutions. Overall, the document offers a variety of techniques for transitioning to microservices, emphasizing the need for careful consideration of data management and system complexity.
The text discusses various strategies for managing database schemas in complex systems, particularly when dealing with external dependencies and performance bottlenecks. A significant issue identified was the inefficiency in write operations due to an outdated schema, compounded by multiple external applications accessing the database with shared credentials. This lack of individual credentials made it difficult to track or control access. The proposed solution involves restructuring the schema and utilizing tools like HashiCorp’s Vault for managing credentials securely.
To address schema rigidity, the use of database views was suggested. Views allow for schema changes without impacting external systems by presenting a stable interface. They can hide sensitive information, but are limited by their read-only nature and dependency on the same database engine. Materialized views offer performance benefits by caching data, though they might serve stale data.
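A runnable sqlite3 sketch of the idea, with a hypothetical schema: the view gives consumers a stable, limited projection while hiding a sensitive column that must not leak.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- The 'real' schema, free to evolve internally (hypothetical columns).
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        full_name TEXT,
        email TEXT,
        credit_card_token TEXT   -- sensitive: must not leak to consumers
    );

    -- The stable, limited projection external systems see instead.
    CREATE VIEW customers_external AS
        SELECT id, full_name FROM customers;
""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada Lovelace', 'ada@example.com', 'tok_123')")
print(conn.execute("SELECT * FROM customers_external").fetchall())  # [(1, 'Ada Lovelace')]
```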
For more complex scenarios, a database wrapping service can encapsulate the database, allowing changes under the hood while presenting a stable API to consumers. This approach helps delineate ownership and control over the schema, encouraging upstream applications to decouple from direct database access. It also offers more flexibility compared to views, as it can handle sophisticated data projections and write operations.
The text also introduces the concept of a Database-as-a-Service Interface, which involves exposing a dedicated read-only database endpoint for external queries. This pattern, similar to the reporting database pattern, allows for the separation of internal and external databases, enabling consumers to perform ad-hoc queries while maintaining internal data integrity. The mapping engine plays a crucial role in synchronizing changes between databases, with options like change data capture systems offering real-time updates.
Overall, these patterns provide incremental solutions for managing database dependencies, enhancing performance, and maintaining data integrity in complex systems. They emphasize the importance of careful planning, ownership, and the potential need for more fundamental changes to achieve long-term scalability and flexibility.
The text discusses strategies for transitioning from a monolithic database architecture to a microservice-oriented approach, focusing on data ownership and synchronization. It emphasizes the importance of determining where data should reside as services are split from the monolith. When a microservice needs to access data still owned by the monolith, it can do so through well-defined interfaces, such as APIs or event streams, rather than direct database access. This approach helps define future service boundaries and can lead to the creation of new services, like an Employee service, by exposing necessary data through APIs.
The text introduces the concept of transferring data ownership, where the data lifecycle should be managed by the new service. For instance, invoice-related data should move from the monolith to the new Invoice service, making it the source of truth for that data. This transition involves complex challenges, such as breaking foreign-key constraints and transactional boundaries. Instead of relying on shared databases, the text suggests projecting views from the new service’s database to the monolith or, preferably, changing the monolith to call the new service directly.
Data synchronization is critical when migrating to new services. The text describes a scenario using the strangler pattern, where both the monolith and the new service manage the same data. To maintain consistency, data should initially be kept in one place, possibly using a database view, until the transition is complete. For a big-bang switchover, a batch process can migrate data in advance, but this risks losing state if a rollback is needed.
The text details a real-world example from a Danish medical records project, where data was synchronized between MySQL and Riak databases. Initially, MySQL remained the source of truth while Riak was synchronized. After verifying Riak’s reliability, the application switched to read from Riak while still writing to both databases, ensuring a fallback option.
The “Synchronize Data in Application” pattern involves keeping data in sync between old and new databases via the application itself. It allows for fast switching and rollback but requires careful implementation to avoid inconsistencies. This pattern is useful when the application cannot be taken offline, as demonstrated in the Danish medical records system.
The “Tracer Write” pattern allows for incremental migration of data by maintaining two sources of truth during the transition. The new service hosts the relocated data, while the current system ensures synchronization by writing changes to both the old and new sources. This pattern facilitates gradual migration while minimizing disruption.
Overall, the text stresses the importance of carefully managing data ownership and synchronization during the transition to microservices, highlighting strategies and patterns to ensure consistency and reliability.
The text discusses the concept of having multiple sources of truth for data, particularly in the context of transitioning from a monolithic architecture to microservices. The “tracer write” pattern is introduced as a method for phased data migration, allowing for incremental synchronization of data from an old system to a new microservice. This approach mitigates the risks associated with a single switchover by allowing data to be gradually synchronized and consumers to transition at their own pace.
Data Synchronization Challenges: The primary challenge is ensuring consistency between multiple data sources. Several strategies are outlined:
- Single Source Writes: All writes are directed to one source, with synchronization occurring afterward.
- Dual Source Writes: Writes are sent to both sources, either directly by clients or via an intermediary.
- Two-Way Synchronization: Writes can be made to either source, requiring complex two-way synchronization.
Each method introduces a window of inconsistency, which can vary based on the synchronization approach used, such as batch processing or continuous streaming. Eventual consistency is a goal, where both sources eventually align, but a reconciliation process is necessary to verify synchronization accuracy.
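A minimal sketch of the single-source-write variant with a reconciliation check, using in-memory dicts as stand-ins for the two stores:

```python
def write_order(order: dict, old_store: dict, new_store: dict) -> None:
    """Single-source write: the old store stays the source of truth and
    the new store is brought into line afterward. A failed sync leaves a
    window of inconsistency that reconciliation must catch."""
    old_store[order["id"]] = order
    try:
        new_store[order["id"]] = order  # synchronization step
    except Exception:
        pass  # drift is picked up later by reconcile()

def reconcile(old_store: dict, new_store: dict) -> list[str]:
    """Report ids whose copies have drifted; under eventual consistency
    this list should trend toward empty between synchronization runs."""
    return [oid for oid, order in old_store.items() if new_store.get(oid) != order]

old, new = {}, {}
write_order({"id": "o-1", "total": 42}, old, new)
assert reconcile(old, new) == []
```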
Case Study - Square: Square’s implementation of the tracer write pattern involved creating a new Fulfillments service to manage order data separately from the existing system. The migration process included:
- Running a background worker to copy data to the new service, controlled by a feature flag for easy rollback.
- Synchronizing updates by ensuring API calls were made to both systems for relevant changes.
- Handling non-atomic updates, accepting brief inconsistencies as part of eventual consistency.
Square’s approach allowed for incremental consumer migration to the new service, with minimal disruption. The Fulfillments service initially supported basic functionality, with additional features added as consumers transitioned.
Considerations for Implementation:
- Avoid two-way synchronization if possible, as it complicates the process.
- Utilize existing event-driven systems or change data capture pipelines to facilitate synchronization.
- Assess the acceptable window of inconsistency for your use case, as shorter windows increase complexity.
Database Decomposition: Logical separation of schemas within a single database engine can provide benefits without requiring physical separation. However, physical separation may enhance robustness and performance. The decision to separate logically or physically depends on factors like resilience, resource contention, and existing infrastructure investments.
Sequencing Database and Code Splits: Options for sequencing include splitting the database first, the code first, or both simultaneously. Each approach has its advantages and challenges, and the choice should align with the specific goals and constraints of the migration effort.
Overall, the tracer write pattern and careful synchronization strategies enable effective data migration and service decomposition, facilitating a smoother transition from monolithic to microservice architectures.
When considering database and application code decomposition, the decision to split the database first or the code first involves trade-offs in transactional integrity, performance, and future flexibility. Splitting the database first can help identify performance and consistency issues early, but may not provide immediate benefits and could complicate transactional integrity. This approach is suitable if there’s concern about performance or data consistency, but not feasible if dealing with a black-box system.
Tooling is crucial for database changes. Unlike code refactoring, databases lack robust tools. Historical tools like DBDeploy have been replaced by modern solutions like FlywayDB, which manage schema changes via version-controlled scripts.
The “Repository per Bounded Context” pattern suggests organizing repositories by domain context, aiding in understanding database usage and potential decomposition points. This pattern helps identify service boundaries but requires additional tools to visualize database constraints.
The “Database per Bounded Context” pattern emphasizes microservices owning their data, facilitating future separation into services. This approach can be more work initially but offers flexibility for future microservices architecture. It’s particularly useful for new systems where domain boundaries are unclear.
Conversely, splitting the code first can yield short-term improvements and clarify data needs for new services. However, it risks leaving a shared database, which can cause future issues. Ensuring data owned by microservices is split out is crucial.
The “Monolith as Data Access Layer” pattern involves creating APIs within the monolith to access data, avoiding direct data binding. This can be effective when the monolith continues to manage state transitions, but if the microservice should own the data, splitting the data is preferable.
“Multischema Storage” suggests storing new microservice data separately from the monolith. This approach is suitable for new functionality and gradual data migration from the monolith.
Splitting both code and data simultaneously is a larger undertaking with delayed impact assessment. Typically, splitting either the schema or application tier first is recommended.
Schema separation involves complex challenges, especially with relational databases. Patterns like “Split Table” address scenarios where data in a single table crosses service boundaries, necessitating careful decomposition to maintain integrity across services.
Overall, the choice between splitting the database first or the code first depends on specific circumstances, including the ability to modify the monolith and concerns about performance and data consistency. Each approach has distinct advantages and challenges, and the decision should be guided by the organization’s context and goals.
In the process of decomposing a monolithic database, tables need to be split based on their ownership by different bounded contexts, separating them into distinct schemas and services. For instance, a Customer table with a Status column might be updated by both customer management and finance code. The Status column should be managed by a Customer service, with the Finance service making calls to update it. This separation, however, complicates data consistency, as the single database transaction that previously ensured consistency is lost.
Moving foreign-key relationships into code raises further challenges. For example, a Catalog service managing album information and a Finance service tracking sales may need to join data that used to sit in tables of the same database. Previously, a database join ensured consistency and efficiency; in a microservices architecture, the Finance service must request album information from the Catalog service, increasing latency and the potential for data inconsistency.
To address this, several strategies can be employed:
- Performing Joins in Code: The Finance service queries its Ledger table for SKUs and then requests album details from the Catalog service (see the sketch after this list). This replaces efficient database joins with potentially slower service calls.
- Handling Data Consistency: Options include checking with the Finance service before deleting records, handling missing data gracefully, or not allowing deletion at all. For instance, a soft delete that marks records as unavailable can prevent dangling references.
- Managing Shared Static Data: Duplicate static reference data, such as country codes, in each service. This approach is viable if the data changes infrequently and minor inconsistencies do not impact communication between services.
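A minimal sketch of the join-in-code option, with a hypothetical Catalog endpoint and field names, again using the requests library:

```python
import requests

CATALOG_URL = "http://catalog.internal/api/albums"  # hypothetical Catalog endpoint

def best_sellers(ledger_rows: list[dict]) -> list[dict]:
    """Replace the old SQL join with a local query plus service calls.

    ledger_rows is what the Finance service reads from its own Ledger
    table; album titles now come from the Catalog service, which adds
    latency and can encounter missing (deleted) albums.
    """
    report = []
    for row in ledger_rows:
        resp = requests.get(f"{CATALOG_URL}/{row['sku']}", timeout=5)
        title = resp.json()["title"] if resp.ok else "(no longer available)"
        report.append({"sku": row["sku"], "title": title, "copies_sold": row["count"]})
    return report
```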
Ultimately, the decision to break foreign-key relationships should consider whether entities truly belong together. For example, order lines should remain with their corresponding order if moved out of a monolith. Understanding the context and needs of users is crucial to choosing the right solution. In cases of potential data inconsistency, services should be resilient, handling failures gracefully to maintain system stability.
In summary, decomposing databases into microservices requires careful consideration of data ownership, consistency, and communication between services. Strategies such as moving joins to code, handling data deletion gracefully, and managing static data duplication are essential to address the challenges posed by this architectural shift.
In microservice architectures, managing data consistency and duplication is a common challenge. When services like Warehouse and Finance need the same country information, duplication becomes a concern. Synchronizing copies with a background process can minimize inconsistency, though it may not eliminate it entirely. Duplication can sometimes be a necessary trade-off to avoid coupling. This approach is best for large datasets that don’t require real-time consistency, such as postal code files.
A dedicated reference data schema can centralize country codes, reducing duplication and allowing cross-schema joins. This approach mitigates some coupling concerns due to the infrequent changes and simple structure of the data. However, changes to the schema can significantly impact multiple services.
For small, static datasets like country codes, storing data in a shared library is feasible. This method, used by companies like Stitch Fix, works well for data that changes infrequently. However, it requires redeploying services to update data, which can be cumbersome in microservices that need independent deployability.
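The shared-library option can be as small as the following sketch: a module of country codes packaged as a versioned dependency of every service. The package and names are hypothetical, and updating the data means releasing a new library version and redeploying consumers, which is exactly the drawback noted above.

```python
# shared_reference_data/countries.py - packaged as a versioned library
# that each service declares as a dependency (names hypothetical).
ISO_COUNTRIES = {
    "AU": "Australia",
    "DK": "Denmark",
    "GB": "United Kingdom",
}

def country_name(code: str) -> str:
    """Look up a country by its ISO 3166-1 alpha-2 code."""
    return ISO_COUNTRIES[code.upper()]
```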
Alternatively, a static reference data service can be created, serving as a centralized source for country codes. This approach is often seen as overkill due to the complexity and potential latency introduced by additional network dependencies. However, it can be efficient if the setup and management of microservices are streamlined, or if using Function-as-a-Service platforms.
When considering data consistency across services, a shared library is often the preferred solution if real-time consistency isn’t critical. For more complex or larger datasets, local databases might be more suitable. A dedicated service is considered when consistent data between services is essential, despite the higher setup cost.
Database transactions, particularly ACID (Atomicity, Consistency, Isolation, Durability) transactions, ensure data consistency and reliability. However, splitting databases in a microservices architecture complicates maintaining atomicity. While ACID properties can still be applied, the transaction scope is limited, potentially leading to issues when operations span multiple databases.
In summary, choosing the right data management pattern in microservices depends on the volume and consistency needs of the data, the operational complexity of managing services, and the trade-offs between duplication and coupling. Each approach has its merits and drawbacks, and the decision should align with the specific requirements and constraints of the system architecture.
In distributed systems, managing transactions across multiple services can be challenging, especially when atomicity is lost by decomposing operations into separate database transactions. This often leads to exploring distributed transactions, like the two-phase commit (2PC) algorithm, to coordinate changes. The 2PC involves a voting phase where a central coordinator asks all involved workers if a state change can be made, and a commit phase where changes are applied if all agree. However, 2PC has limitations, such as potential inconsistencies due to latency and the complexity of coordinating distributed locks, which can lead to failure modes requiring manual intervention. Due to these drawbacks, 2PC is typically avoided for long-lived operations.
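A minimal sketch of the two phases with an in-memory coordinator; it deliberately glosses over the hard parts the text warns about (timeouts, coordinator crashes, locks held across the protocol):

```python
class Worker:
    """A participant that can vote in the prepare phase, then commit or abort."""
    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy

    def prepare(self) -> bool:          # voting phase: can I commit?
        return self.healthy

    def commit(self) -> None:
        print(f"{self.name}: committed")

    def rollback(self) -> None:
        print(f"{self.name}: rolled back")

def two_phase_commit(workers: list[Worker]) -> bool:
    if all(w.prepare() for w in workers):   # phase 1: unanimous vote required
        for w in workers:                   # phase 2: apply everywhere
            w.commit()
        return True
    for w in workers:                       # any "no" vote aborts the lot
        w.rollback()
    return False

two_phase_commit([Worker("orders-db"), Worker("loyalty-db")])          # commits
two_phase_commit([Worker("orders-db"), Worker("loyalty-db", False)])   # aborts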
An alternative is to keep data in a single database if atomicity is crucial, but when splitting data is necessary, sagas offer a solution. Sagas coordinate multiple state changes without long-term locking by modeling steps as discrete activities. This approach is beneficial for long-lived transactions (LLTs) that require multiple database changes over extended periods. Sagas break LLTs into shorter transactions, reducing contention and lock duration.
Sagas do not provide atomicity at the saga level but ensure atomicity for each subtransaction. They require explicit modeling of business processes and handling of failure modes through backward or forward recovery. Backward recovery involves compensating actions to undo committed transactions, while forward recovery allows continuation from failure points. Compensating transactions, unlike database rollbacks, are semantic rollbacks that revert changes without erasing transaction history.
Reordering saga steps can simplify rollback scenarios. For instance, awarding loyalty points after dispatching an order avoids unnecessary rollbacks. Mixing failure recovery modes is also viable; some failures may require rollbacks, while others may involve retrying or human intervention.
Implementing sagas can be done through orchestrated or choreographed approaches. Orchestrated sagas involve centralized coordination, whereas choreographed sagas rely on decentralized interactions. Both styles aim to manage complex transactions across distributed systems effectively, balancing the need for consistency and flexibility.
Overall, sagas provide a robust framework for handling distributed transactions in microservices architectures, offering a more manageable alternative to two-phase commits by reducing the need for extensive locking and accommodating long-lived processes.
Looking more closely at the two styles: orchestrated sagas use a central coordinator, known as the orchestrator, to manage the order of operations and to trigger compensating actions when needed. This provides good visibility into the process but introduces domain coupling, since the orchestrator must know about every involved service, potentially leading to anemic services that lack independent behavior. To avoid excessive centralization, different services can act as orchestrators for different processes.
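As a rough sketch of the orchestrated style, the following toy Order Processor coordinates hypothetical payment, warehouse, and loyalty services; all names are invented, and real collaborators would be remote calls rather than in-process objects.

```python
class Payment:
    def take_payment(self, oid): print(f"payment taken for {oid}")
    def refund(self, oid): print(f"payment refunded for {oid}")

class Warehouse:
    def dispatch(self, oid): print(f"{oid} dispatched")

class Loyalty:
    def award_points(self, oid): print(f"points awarded for {oid}")

class OrderProcessor:
    """Central orchestrator: it alone knows the full sequence of steps
    and owns the compensation logic -- the source of both its
    visibility and its domain coupling."""

    def __init__(self, payment, warehouse, loyalty):
        self.payment, self.warehouse, self.loyalty = payment, warehouse, loyalty

    def place_order(self, oid):
        self.payment.take_payment(oid)
        try:
            self.warehouse.dispatch(oid)
        except Exception:
            self.payment.refund(oid)  # compensating action (backward recovery)
            raise
        # Points are awarded after dispatch, so this step never needs
        # rolling back (the step-reordering trick mentioned earlier).
        self.loyalty.award_points(oid)

OrderProcessor(Payment(), Warehouse(), Loyalty()).place_order("order-123")
```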
Choreographed sagas distribute responsibility among services, using events for collaboration. This model is less coupled, as services react to events without direct knowledge of each other. However, it complicates understanding the process flow and tracking the saga’s state. Implementing a correlation ID can help track the state by collecting events into a unified view.
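A correspondingly minimal choreography sketch, with an in-memory publish/subscribe map standing in for a real broker such as Kafka; the event names and correlation-ID convention are illustrative.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event type -> handler functions

def on(event_type):
    """Register a handler; each handler stands in for a separate service."""
    def register(handler):
        subscribers[event_type].append(handler)
        return handler
    return register

def publish(event_type, payload):
    print(f"[{payload['correlation_id']}] {event_type}")
    for handler in subscribers[event_type]:
        handler(payload)

@on("order_placed")
def take_payment(evt):        # the Payment service's reaction
    publish("payment_taken", evt)

@on("payment_taken")
def dispatch_order(evt):      # the Warehouse service's reaction
    publish("order_dispatched", evt)

# No service knows about any other; the correlation ID lets a separate
# view collect these events back into one saga's history.
publish("order_placed", {"correlation_id": "saga-42"})
```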
The book also critiques Business Process Modeling (BPM) tools, which aim to let nondevelopers define processes but usually end up being driven by developers anyway. These tools can be cumbersome, often lacking good version control and testing support. Developer-friendly BPM tools like Camunda and Zeebe are mentioned as alternatives.
The discussion extends to the merits of mixing orchestrated and choreographed styles, suggesting that some processes might naturally fit one model over the other. The choice between these approaches depends on factors like team structure and familiarity with event-driven systems. Orchestrated sagas may suit single-team implementations, while choreographed sagas are better for distributed teams due to their loosely coupled nature.
Contrasted with distributed transactions, sagas avoid failure modes such as system stalls caused by node failures, and they force business processes to be modeled explicitly, making those processes clearer to the developers who maintain them.
The book then turns to the growing pains of adopting microservices, noting that the challenges vary with the number of services and the organization's complexity. As systems grow, issues like service coupling, robustness, and testing become more pronounced, requiring new strategies and technologies to manage them effectively.
As microservice architectures scale, ownership and management become critical. Martin Fowler’s concepts of code ownership—strong, weak, and collective—are applicable here. Strong ownership involves a defined group managing changes, while weak ownership allows anyone to make changes with the expectation of communication. Collective ownership, where anyone can change anything, can lead to chaos if not managed well. As teams grow, especially rapidly, collective ownership becomes problematic, often resulting in a “distributed monolith” with tangled dependencies. Strong ownership is typically adopted in larger setups, allowing teams to focus on specific business domains and maintain domain expertise.
Breaking changes in microservices can disrupt the system. A microservice’s functionality is defined by a contract, and changes to this contract can lead to production issues if not managed carefully. Problems often manifest as production outages or the need for lock-step releases, where multiple services must be deployed simultaneously. This issue arises early in development, particularly when multiple teams are involved. Solutions include preventing accidental breaking changes through explicit schemas, thinking twice before making changes, and allowing consumers time to migrate by supporting old and new contracts simultaneously. This can be done by running multiple service versions or supporting multiple contracts within a single service.
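One way to support old and new contracts within a single service is to expose both versions and derive each response from the same underlying data; the sketch below uses invented field names and a toy routing table in place of a real web framework.

```python
# One service, two live contract versions, one source of truth.

def _load(customer_id):
    # Stand-in for the service's internal data access.
    return {"id": customer_id, "first": "Ada", "last": "Lovelace"}

def get_customer_v1(customer_id):
    # Old contract: a single "name" field, kept alive while consumers migrate.
    c = _load(customer_id)
    return {"id": c["id"], "name": f"{c['first']} {c['last']}"}

def get_customer_v2(customer_id):
    # New contract: split name fields.
    c = _load(customer_id)
    return {"id": c["id"], "firstName": c["first"], "lastName": c["last"]}

ROUTES = {"/v1/customers": get_customer_v1, "/v2/customers": get_customer_v2}
print(ROUTES["/v1/customers"]("c-1"))
print(ROUTES["/v2/customers"]("c-1"))
```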
Reporting in microservice architectures poses challenges due to data being scattered across various services. Unlike monolithic systems with a single database, microservices require a strategy to consolidate data for reporting. This problem often arises when decomposing a monolithic schema. Solutions involve creating a separate reporting database that aggregates data from various services, allowing stakeholders to access data without impacting microservice databases. This separation ensures that reporting needs are met without constraining the design and evolution of service-specific databases.
Overall, handling ownership, breaking changes, and reporting well is crucial to scaling microservice architectures successfully. Strong ownership models, careful management of service contracts, and strategic data aggregation for reporting are essential practices for maintaining system integrity and meeting stakeholder needs.
In microservice architectures, integrating data from various services into a cohesive reporting database is a challenge. Techniques such as change data capture, using views, and programmatically copying data are potential solutions. These methods allow microservices to push data into a unified schema, facilitating reporting.
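A bare-bones sketch of the programmatic-copy option, using SQLite to stand in for both a service's own database and the reporting database; table and column names are hypothetical.

```python
import sqlite3

# A batch job that pushes a service's rows into the reporting schema.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, total REAL)")
source.execute("INSERT INTO orders VALUES (1, 9.99), (2, 25.00)")

reporting = sqlite3.connect(":memory:")
reporting.execute("CREATE TABLE report_orders (id INTEGER, total REAL)")

# The owning service controls what it exposes here, so the reporting
# schema can evolve independently of the service's internal storage.
rows = source.execute("SELECT id, total FROM orders").fetchall()
with reporting:
    reporting.executemany("INSERT INTO report_orders VALUES (?, ?)", rows)

print(reporting.execute("SELECT COUNT(*) FROM report_orders").fetchone())
```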
Monitoring and troubleshooting in microservices differ significantly from monolithic applications. While monoliths have straightforward failure modes, microservices require monitoring individual service instances, complicating the detection of issues. As microservice architectures grow, the approach to monitoring must evolve, requiring continuous investment.
Predicting when monitoring issues will arise is difficult, but it often occurs when production problems can’t be easily traced. Implementing basic improvements preemptively is advisable. Log aggregation is a critical first step, allowing centralized log collection and alert generation. Tools like the ELK stack or Humio are recommended for this purpose.
Tracing is essential for understanding service call sequences and latency issues. Correlation IDs help track calls across services, and tools like Jaeger provide distributed tracing capabilities. This aids in identifying performance bottlenecks and is crucial for latency-sensitive applications.
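A minimal sketch of correlation-ID propagation: the edge service generates an ID if none arrived, and every downstream call forwards it, so aggregated logs can be filtered to a single request. The `X-Correlation-Id` header is a common convention, not a mandate of any particular tool.

```python
import uuid

def log(cid, message):
    print(f"correlation_id={cid} {message}")

def handle_inbound(headers):
    # Reuse the caller's ID if present; otherwise this is the edge,
    # so mint a fresh one.
    cid = headers.get("X-Correlation-Id") or str(uuid.uuid4())
    log(cid, "order service: request received")
    call_downstream("payment", {"X-Correlation-Id": cid})

def call_downstream(service, headers):
    # Stand-in for an outbound HTTP call carrying the header along.
    log(headers["X-Correlation-Id"], f"{service} service: called")

handle_inbound({})  # an edge request arriving with no ID yet
```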
Testing in production using synthetic transactions can catch issues before they affect real users. These tests simulate user behavior to ensure system functionality remains intact after changes. However, care must be taken to avoid unintended consequences.
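A synthetic transaction might look like the following toy probe; the endpoint and the `synthetic` flag are invented, and that flag is precisely the guard against the unintended consequences mentioned above (you do not want test orders shipped to real customers).

```python
# A scheduled probe exercising a real user journey in production.

def place_order(payload):
    # Stand-in for a real HTTP call to the production order endpoint.
    return {"status": "CONFIRMED", "id": "probe-1"}

def synthetic_order_check():
    # The synthetic flag lets downstream systems recognize and
    # discard the fake order instead of fulfilling it.
    resp = place_order({"sku": "TEST-SKU", "synthetic": True})
    assert resp["status"] == "CONFIRMED", "order journey broken"
    print("synthetic check passed")

synthetic_order_check()
```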
Observability extends beyond traditional monitoring by enabling open-ended queries about system behavior. This approach is crucial for diagnosing complex issues and should be supported by comprehensive logging and tracing.
The developer experience can degrade as the number of services increases, especially with resource-intensive runtimes like the JVM. Solutions include stubbing services or using hybrid local/remote development setups. Tools like Telepresence facilitate this by allowing local development while connecting to remote services.
Managing numerous services and instances requires scalable deployment strategies. Manual processes become inadequate as the number of microservices grows. Desired state management, such as that provided by Kubernetes, is essential for automating deployments and maintaining system stability. Kubernetes, despite its complexity, is the preferred tool for managing microservice deployments, especially when integrated with developer-friendly abstractions or packaged solutions like OpenShift.
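The essence of desired state management can be sketched in a few lines; this toy reconciliation loop is a drastic simplification of what Kubernetes controllers do, with invented service names.

```python
# Declare what you want; a controller converges reality toward it.

desired = {"catalog": 3, "payment": 2}   # replicas we asked for
running = {"catalog": 1, "payment": 4}   # replicas actually running

def reconcile(desired, running):
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            print(f"{service}: starting {want - have} instance(s)")
        elif have > want:
            print(f"{service}: stopping {have - want} instance(s)")
        running[service] = want  # record the converged state

reconcile(desired, running)
```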
Overall, transitioning to microservices necessitates a shift in monitoring, troubleshooting, and deployment strategies to handle the increased complexity and ensure system reliability and developer productivity.
In the realm of microservice architecture, deploying services can involve various strategies, including serverless technologies like Function-as-a-Service (FaaS), which reduce operational overhead by allowing developers to focus on code while the platform manages operations. Although FaaS has limitations, it is often a preferable starting point over container-based platforms like Kubernetes, which should be considered only when the complexity of managing numerous processes justifies it.
End-to-end testing in microservices presents challenges due to the broad scope required to test multiple services simultaneously. This can lead to longer test durations and increased maintenance difficulties. To mitigate these issues, it’s crucial to limit the scope of functional tests to within team boundaries and consider using consumer-driven contracts (CDCs) to define service expectations, reducing the need for extensive cross-service testing. Additionally, implementing automated release remediation and progressive delivery can help manage production issues by rolling out software incrementally and assessing impacts on smaller user groups before full deployment.
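In the spirit of consumer-driven contract tools such as Pact, but using no real library, a contract check might look like this: the consumer records the response shape it relies on, and the producer's own test suite verifies it. The endpoint and field names are invented.

```python
# The consumer publishes its expectation of the producer's response.
CONSUMER_EXPECTATION = {
    "endpoint": "/v2/customers",
    "required_fields": {"id", "firstName", "lastName"},
}

def producer_response(endpoint):
    # Stand-in for invoking the real producer inside its test suite.
    return {"id": "c-1", "firstName": "Ada", "lastName": "Lovelace"}

def test_contract():
    body = producer_response(CONSUMER_EXPECTATION["endpoint"])
    missing = CONSUMER_EXPECTATION["required_fields"] - body.keys()
    assert not missing, f"breaking change: missing {missing}"
    print("contract satisfied")

test_contract()
```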
Balancing global versus local optimization is another challenge in microservice environments. Teams often make decisions based on local needs, which can lead to inefficiencies, such as multiple teams solving the same problem differently. Establishing cross-team technical groups or communities of practice can help align local decisions with global organizational goals, ensuring consistency and efficiency.
Robustness and resiliency are critical in distributed systems, where failures such as network timeouts or machine crashes become more frequent as the number of services increases. These issues often surface in production, making it essential to design systems with resilience in mind, using strategies to handle failures gracefully and prevent cascading issues.
Overall, the key to successfully managing microservices lies in adopting suitable deployment strategies, optimizing testing practices, balancing local and global decision-making, and ensuring system resilience. Each organization must find the right balance to maximize efficiency and reliability while minimizing complexity and overhead.
In managing microservices, understanding potential failure modes and responses is crucial. Isolating services, using asynchronous communication, implementing time-outs, and applying circuit breakers can enhance resilience. Resiliency involves more than just patterns; it requires evolving organizational practices and documenting production issues to prevent recurrence.
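A minimal circuit breaker, one of the patterns just mentioned; the thresholds and half-open behavior below are simplified compared with production libraries.

```python
import time

class CircuitBreaker:
    """Fail fast after repeated errors instead of hammering a sick service."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result

breaker = CircuitBreaker()
print(breaker.call(lambda x: x * 2, 21))  # -> 42
```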
Orphaned services, where no one takes ownership, can become problematic. These services may run independently for extended periods, leading to forgotten responsibilities and lost source code. This issue often arises in long-term microservice environments where collective memory fades. Collective ownership can mitigate this by facilitating developer transitions between services and standardizing tooling for changes, testing, and deployment.
Service registries help manage metadata and ownership, with some organizations creating in-house solutions. The Financial Times’ Biz Ops tool exemplifies this, offering a comprehensive view of microservices and infrastructure, calculating System Operability Scores to ensure service operability.
Addressing orphaned services involves aligning them with current management practices, assigning ownership, or raising improvement tasks. The broader challenge in microservices is balancing proactive problem-solving with efficient resource use, acknowledging that microservice architectures vary and require context-specific approaches.
Adopting microservices should be incremental, leaving room to adapt, to learn from others' experiences, and to tailor solutions to your own context rather than imitating what worked elsewhere. Microservices are not universally suitable; a clear-eyed understanding of their benefits and costs in your specific situation is what guides successful implementation. In short, resilience, clear ownership, and deliberate, context-aware adoption are what keep these complex systems manageable.
The book closes with an extensive overview of the strategies, patterns, and challenges involved in moving from a monolithic application to a microservices architecture. Key aspects include:
- Microservices Overview: Microservices are designed for independent deployability and are modeled around business domains rather than technical layers. They offer advantages like improved scalability and resilience but also bring increased complexity and the risk of service misalignment.
- Database Patterns: Several patterns are discussed for managing databases in microservices, including:
  - Database as a Service Interface: Exposes data through a dedicated interface, reducing dependence on a shared, centralized database.
  - Database View Pattern: Projects a subset of the schema so the underlying schema can change without affecting consumers.
  - Move Foreign Key to Code: Moves relationship handling out of the database and into application logic, enabling data to be split across services at the cost of database-enforced integrity.
- Data Synchronization: Techniques like the tracer write pattern and change data capture keep data consistent across services; eventual consistency is presented as a trade-off distributed systems must often accept.
- Migration Strategies: Transitioning from monoliths involves patterns like:
  - Strangler Fig: Incrementally replacing old components with new microservices.
  - Branch by Abstraction: Creating an abstraction layer to allow parallel development of new services.
  - Parallel Run: Running old and new systems simultaneously to verify correctness.
- Organizational Change: Successful adoption of microservices requires cultural shifts, such as embracing DevOps and empowering teams. Kotter's 8-step process for change management is recommended to guide organizations through transitions.
- Challenges in Microservices: Issues such as service coupling, end-to-end testing, and monitoring are highlighted. Solutions include consumer-driven contracts, progressive delivery, and enhancing observability through tools like distributed tracing.
- Sagas for Transaction Management: Sagas manage long-lived transactions in a distributed system, offering an alternative to traditional ACID transactions. They come in choreographed and orchestrated styles, each with its trade-offs.
- Team and Process Reorganization: Transitioning to microservices often necessitates reorganizing teams to improve autonomy and ownership, aligning them with business functionality rather than technical silos.
- Incremental Migration and Testing: Incremental change and testing in production reduce risk and smooth the transition.
Overall, the book underscores the complexity of adopting microservices while providing a comprehensive framework of patterns and strategies to address the associated challenges.
Sam Newman is a developer, architect, and writer specializing in cloud technologies, continuous delivery, and microservices. He is known for his work across various industries worldwide and authored the best-selling “Building Microservices.” Newman operates independently and is based in East Kent.
The book “Monolith to Microservices” features the stinging cauliflower jellyfish (Drymonema dalmatinum) on its cover. This rare subtropical jellyfish, found in the Central Atlantic and Mediterranean, was first identified in 1880 near Croatia. Known for its brownish-pink color and large size, it is nicknamed “the big pink.” The jellyfish belongs to the class Scyphozoa, named for its cup-like shape, and the subphylum Medusozoa, noted for its long tentacles.
The stinging cauliflower jellyfish reproduces both sexually and asexually. In sexual reproduction, males release sperm and females release eggs, which fertilize in water. The resulting polyps reproduce asexually by budding. This jellyfish primarily feeds on other jellyfish, such as moon jellyfish, and inhabits saltwater environments exclusively. The cover illustration by Karen Montgomery is based on an image from “Medusae of the World.”
O’Reilly cover animals, including the stinging cauliflower, are often endangered and significant to ecosystems. The book’s cover fonts are Gilroy Semibold and Guardian Sans, with text in Adobe Minion Pro, headings in Adobe Myriad Condensed, and code in Dalton Maag’s Ubuntu Mono.