Fundamentals of Software Architecture: An Engineering Approach by Mark Richards and Neal Ford is a comprehensive guide aimed at both aspiring and experienced software architects. The book addresses the lack of structured resources for developers transitioning into architectural roles by providing an in-depth exploration of software architecture principles applicable across various technology stacks.
The authors, seasoned practitioners in software architecture, cover a wide range of topics including architectural characteristics, patterns, component determination, and the art of diagramming and presenting architecture. They emphasize the importance of understanding modern engineering practices and operational approaches that have significantly evolved in recent years.
Key areas of focus include:
- Architecture Patterns: These form the basis for many architectural decisions and are crucial for understanding the technical foundation of software systems.
- Components: The book delves into component identification, coupling, cohesion, partitioning, and granularity, which are essential for creating a modular and scalable architecture.
- Soft Skills: Effective team management, negotiation, and presentation skills are highlighted as crucial for architects to lead projects successfully.
- Engineering Discipline: The authors advocate for treating architecture as an engineering discipline, emphasizing repeatable results, metrics, and rigorous evaluations to ensure the architecture's effectiveness.
Richards and Ford also discuss the intersection of architecture with engineering practices, operations/DevOps, and the necessity of aligning architecture with business drivers. They stress the importance of balancing hands-on coding with architectural oversight to maintain technical relevance.
The book is structured into several parts:
- Foundations: Covers architectural thinking, modularity, and the identification and measurement of architectural characteristics.
- Architecture Styles: Discusses various architecture styles, including layered, pipeline, microkernel, service-based, event-driven, space-based, and orchestration-driven service-oriented architectures. Each style is examined for its topology, use cases, and suitability for different scenarios.
- Techniques and Soft Skills: Explores architecture decision-making, risk analysis, diagramming, team effectiveness, negotiation, and leadership skills. The authors provide insights into making architecture decisions, identifying and mitigating risks, and effectively communicating architectural concepts.
Additionally, the book includes practical case studies, such as "Silicon Sandwiches" and "Going, Going, Gone," to illustrate the application of architectural principles in real-world scenarios. It also provides guidance on developing a career path in software architecture, emphasizing continuous learning and adaptation to technological advancements.
Richards and Ford's work is praised for its clarity and depth, offering real-world advice and identifying common architecture characteristics and trade-offs necessary for success. The book serves as a valuable resource for those aiming to master the art of software architecture, reflecting the authors' extensive experience and expertise in the field.
Software architecture is a dynamic and evolving field, where foundational axioms and assumptions must be regularly questioned and updated. Unlike mathematics, where axioms are static, the software ecosystem changes rapidly, influenced by innovations like containerization and cloud resources. This dynamic nature requires architects to adapt continually, reassessing practices and tools to align with modern capabilities.
Historically, software architecture was seen as the "stuff that's hard to change later," but with the advent of microservices, change has become a fundamental design consideration. The field has evolved through various phases, including Extreme Programming, Continuous Delivery, DevOps, and containerization, each contributing new capabilities and trade-offs. Architects now face the challenge of balancing these trade-offs pragmatically, avoiding binary choices, and focusing on analyzing options critically.
The book emphasizes an engineering approach to software architecture, aiming to transform it from a craft to a discipline characterized by repeatability and rigor. Agile practices have facilitated significant advancements, allowing architects to design more complex systems. The focus is on providing a modern overview of software architecture, rethinking existing axioms in light of the current ecosystem, and addressing both technical and soft skills.
Defining software architecture remains complex, as it encompasses a wide range of responsibilities and constantly shifts with technological advancements. Traditional definitions, like architecture being about "fundamental structural choices," are outdated in the context of modern architectural styles like microservices, where incremental change is feasible. The book highlights the importance of understanding architecture in context, as past decisions were influenced by the constraints of their time, such as the cost of infrastructure.
Software architecture is defined by several components: structure, architecture characteristics, architecture decisions, and design principles. The structure refers to the architecture styles used, such as microservices or layered architectures. Architecture characteristics, or "ilities," define the system's success criteria, independent of functionality. Architecture decisions set the rules for constructing systems, while design principles offer guidelines rather than strict rules.
The book also addresses the need for a variance model, allowing exceptions to architecture decisions when necessary. This process is typically overseen by an architecture review board (ARB) or a chief architect. Overall, the book aims to equip architects with the tools and insights needed to navigate the evolving landscape of software architecture effectively.
O'Reilly Media, the publisher, provides resources for further learning, including an online platform with training courses and supplemental materials. The book encourages readers to engage with these resources to deepen their understanding of software architecture.
Acknowledgments are given to those who contributed feedback and support during the book's development, highlighting the collaborative effort behind its creation.
In a microservices architecture, asynchronous messaging is often recommended to enhance performance, but developers may choose other protocols like REST or gRPC based on specific needs. This flexibility highlights the role of design principles in guiding technology choices rather than dictating them. Software architects are expected to make architecture decisions, continually analyze architecture, stay updated with trends, ensure compliance with decisions, have diverse technical exposure, possess business domain knowledge, and exhibit interpersonal skills.
Architects guide technology choices by setting design principles, helping teams make informed decisions without prescribing specific technologies. They must also analyze existing architectures to prevent structural decay, ensuring that changes do not compromise performance, availability, or scalability. Keeping up with industry trends is crucial, as architectural decisions often have long-lasting impacts.
Ensuring compliance involves verifying that teams adhere to architecture decisions, preventing deviations that can lead to system inefficiencies. Architects should be familiar with a range of technologies, allowing them to interface with diverse systems and environments. Business domain knowledge is essential for architects to design solutions that align with business goals and communicate effectively with stakeholders.
Interpersonal skills, including leadership and teamwork, are vital for architects, as they must lead development teams and navigate organizational politics. Negotiation skills are crucial, as architectural decisions often face challenges from various stakeholders due to cost or effort implications.
The role of software architecture has expanded to include operational aspects, such as elastic scaling in microservices, which were traditionally handled by separate teams. The history of companies like Pets.com illustrates the importance of scalable infrastructure, which has led to modern frameworks that support elastic scaling.
Software architecture is increasingly influenced by engineering practices, such as continuous integration and delivery, which originated from methodologies like Extreme Programming (XP). These practices emphasize automation and testing, contributing to the evolution of software development into a more structured engineering discipline.
Overall, architects must balance technical guidance with leadership and negotiation, ensuring that architectural decisions support both current needs and future adaptability.
Unknown unknowns pose significant challenges in software development, as they are unforeseen issues that arise during projects. Agile methodologies address these by promoting iterative development, allowing for faster adaptation to changes. This contrasts with traditional "Big Design Up Front" approaches, which struggle with unforeseen changes.
Software architecture is closely linked to engineering practices. For instance, a microservices architecture requires automated processes and testing. The evolution from Extreme Programming to Continuous Delivery has improved automation and architectural governance, as highlighted in Neal Ford's book, "Building Evolutionary Architectures." This book introduces fitness functions, which are metrics or tests to ensure architectural integrity over time, similar to genetic algorithms' optimization techniques.
The integration of operations and architecture has become more prevalent with DevOps, which merges operational concerns with architectural design. This reduces complexity by allowing operations to manage scalability and performance, previously managed within the architecture itself.
The development process impacts architecture, with Agile methodologies providing benefits like faster feedback loops and support for architectural restructuring. This is crucial for transitioning from monolithic to modern architectures using techniques like the Strangler Pattern.
Data management is integral to software architecture, with external storage solutions like relational and NoSQL databases playing a crucial role. Architects and database administrators must collaborate to build effective data architectures.
The Laws of Software Architecture emphasize trade-offs and the rationale behind architectural decisions. The First Law states that everything in software architecture is a trade-off; the Second Law states that why is more important than how, stressing that the reasoning behind a decision matters more than its mechanics.
Architectural thinking involves distinguishing between architecture and design, requiring collaboration between architects and developers. Architects need a broad technical understanding, or technical breadth, to see various solutions, unlike developers who focus on technical depth. This breadth allows architects to identify multiple potential solutions for problems, enhancing their ability to design effective architectures.
Overall, modern software architecture requires an iterative approach, integration with operations, and a comprehensive understanding of both technical and business aspects to adapt to constant changes and unknown challenges.
As architects transition from developers, the focus shifts from technical depth to breadth. Architects need a broad understanding of various solutions to make informed decisions that align capabilities with constraints. This requires sacrificing some expertise to gain a wider portfolio. The knowledge pyramid highlights this shift, contrasting the deep expertise developers cultivate with the broad understanding architects need.
Two common dysfunctions arise during this transition: trying to maintain expertise across too many areas, leading to burnout, and relying on outdated knowledge, termed the "Frozen Caveman Anti-Pattern." This pattern occurs when architects cling to outdated concerns, failing to adapt to current realities. Architects must realistically assess risks, distinguishing between genuine and perceived threats, and avoid being overly cautious based on past experiences.
Architectural thinking involves analyzing trade-offs, as decisions depend on numerous factors, including deployment environments, business drivers, and team capabilities. For instance, choosing between queues and topics in an auction system involves trade-offs in extensibility, security, and contract heterogeneity. Topics offer architectural extensibility and decoupling but pose security risks and lack contract flexibility. Queues provide better security and contract customization but require more changes when new functionality is added. Architects must weigh these trade-offs based on specific needs.
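To ground that trade-off, here is a minimal JMS sketch (the destination names and the set of auction consumers are hypothetical): with a topic, the producer publishes once and new consumers can subscribe without touching producer code; with queues, the producer must address each consumer, which costs extensibility but allows per-consumer contracts and access control.

```java
import javax.jms.*;

// Illustrative sketch of the topic-vs-queue trade-off in an auction system.
public class BidPublisher {
    // Topic: one publish, consumers unknown to the producer (extensible and
    // decoupled, but any subscriber can listen and all share one contract).
    void publishToTopic(Session session, String bid) throws JMSException {
        Topic topic = session.createTopic("bids");
        session.createProducer(topic).send(session.createTextMessage(bid));
    }

    // Queues: the producer knows each consumer and sends separately (better
    // security and per-consumer contracts, but adding new functionality
    // means changing this code).
    void publishToQueues(Session session, String bid) throws JMSException {
        for (String name : new String[]{"bid.capture", "bid.tracking", "bid.analytics"}) {
            Queue queue = session.createQueue(name);
            session.createProducer(queue).send(session.createTextMessage(bid));
        }
    }
}
```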
Understanding business drivers is crucial for architects to translate requirements into architectural characteristics like scalability and performance. This requires domain knowledge and collaboration with stakeholders. Balancing architecture with hands-on coding is another challenge. Architects should maintain technical depth by coding and avoiding bottlenecks by delegating critical work. Engaging in proof-of-concepts, tackling technical debt, automating tasks, and conducting code reviews help architects stay hands-on while supporting the development team.
Modularity is a key organizing principle in architecture, crucial for maintaining structural soundness. It involves grouping related code into modules, ensuring order and consistency. Without attention to modularity, systems can become disordered, requiring constant effort to maintain clarity and sustainability. Understanding modularity is vital for architects as it underpins tools like metrics and fitness functions, essential for analyzing architecture.
Overall, architects must balance breadth with depth, analyze trade-offs, understand business drivers, and maintain modularity to create effective and sustainable software architectures.
Modularity in programming involves organizing code into logical groupings, such as packages or namespaces, which help manage complexity and facilitate reuse. This concept dates back to structured programming, before the advent of object-oriented languages. Developers initially used modular languages like Modula and Ada to logically group code, leading to the evolution of packages and namespaces in modern languages like Java and .NET.
Java, for example, uses packages to avoid name conflicts by matching physical directory structures with package names. While this approach initially prevented ambiguity, it later introduced complexities as projects grew. Java subsequently introduced JAR files to bundle packages and manage these complexities, though doing so created new issues with classpath conflicts.
Modularity is essential for software architecture, impacting how code is reused and maintained. It involves logical rather than physical separation, which can affect restructuring efforts, such as breaking a monolithic application into smaller components.
To measure modularity, architects use metrics like cohesion, coupling, and connascence. Cohesion assesses how related the parts of a module are, with functional cohesion being the most desirable. Conversely, coincidental cohesion, where module elements are unrelated, is the least desirable.
The Chidamber and Kemerer metrics suite includes the Lack of Cohesion in Methods (LCOM) metric, which measures structural cohesion by evaluating shared fields among methods. A high LCOM score indicates low cohesion, suggesting that a class might be incidentally coupled.
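A minimal sketch of the kind of class a high LCOM score flags (the class and its fields are hypothetical): the two method groups share no fields, so the "class" is really two unrelated responsibilities sharing a name.

```java
// High LCOM: the methods partition into groups with no shared fields,
// suggesting the class mixes two unrelated responsibilities.
public class CustomerAccount {
    private String name;
    private String email;
    private double balance;
    private double interestRate;

    // Group 1: touches only name/email.
    public String contactLabel() { return name + " <" + email + ">"; }

    // Group 2: touches only balance/interestRate.
    public double projectedBalance() { return balance * (1 + interestRate); }
}
```

Splitting this into a contact class and an account class would raise the cohesion of both.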
Coupling metrics, such as afferent and efferent coupling, measure incoming and outgoing connections to code artifacts. High coupling can lead to instability, where changes in one part of the code affect others. Instability is calculated as the ratio of efferent coupling to the sum of both afferent and efferent coupling.
Robert Martin introduced additional metrics, such as abstractness and distance from the main sequence, to evaluate code structure. Abstractness measures the ratio of abstract to concrete artifacts, while distance from the main sequence assesses the balance between abstractness and instability. Classes should ideally fall near the main sequence line to avoid the "zone of pain" (too much implementation) or the "zone of uselessness" (too abstract).
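These measures have compact definitions. Writing $C_a$ for afferent coupling, $C_e$ for efferent coupling, $A$ for abstractness, $I$ for instability, and $D$ for distance from the main sequence:

```latex
I = \frac{C_e}{C_a + C_e}, \qquad
A = \frac{\text{abstract artifacts}}{\text{total artifacts}}, \qquad
D = \lvert A + I - 1 \rvert
```

Instability ranges from 0 (maximally stable) to 1 (maximally unstable), and artifacts near $D = 0$ sit on the main sequence, away from both zones.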
Despite these metrics, software analysis tools remain blunt compared to other engineering disciplines. Metrics like cyclomatic complexity require interpretation to distinguish between essential and accidental complexity. Establishing baselines for these metrics helps architects evaluate and improve code quality, though they must interpret results to make informed decisions about software design and architecture.
In the realm of software architecture, connascence is a critical concept introduced by Meilir Page-Jones to describe the degree of interdependence between software components. Connascence is categorized into static and dynamic types, each with various subtypes that reflect different forms of coupling. Static connascence involves source-code-level dependencies, such as Connascence of Name (CoN), where multiple components must agree on the name of an entity, and Connascence of Type (CoT), where components must agree on an entity's type. Other static forms include Connascence of Meaning (CoM), Position (CoP), and Algorithm (CoA), each requiring agreement on values, order, or algorithms, respectively.
Dynamic connascence, on the other hand, pertains to runtime dependencies. Examples include Connascence of Execution (CoE), where the execution order is crucial, and Connascence of Timing (CoT), which deals with race conditions. Connascence of Values (CoV) and Identity (CoI) involve dependencies on value changes and shared entity references. These connascence types highlight the challenges in maintaining system correctness and the complexity of runtime analysis.
Connascence is a tool for analyzing software architecture, guiding architects to improve system modularity by minimizing connascence across boundaries and maximizing it within encapsulation boundaries. Page-Jones emphasizes minimizing overall connascence and converting strong forms into weaker ones, aligning with Jim Weirich's rules on degree and locality.
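As a small illustration of converting a strong form into a weaker one (the lookup example is mine, not the book's): a magic return value is Connascence of Meaning, because every caller must independently "know" what the value means; naming it weakens the coupling to Connascence of Name.

```java
public class Lookup {
    // Connascence of Meaning: callers must simply "know" that -1 means
    // "not found" -- the meaning of the value is an implicit agreement.
    public static int indexOfMagic(int[] items, int target) {
        for (int i = 0; i < items.length; i++)
            if (items[i] == target) return i;
        return -1;
    }

    // Weakened to Connascence of Name: callers now depend only on the
    // name NOT_FOUND, a weaker and more maintainable form of coupling.
    public static final int NOT_FOUND = -1;
    public static int indexOf(int[] items, int target) {
        for (int i = 0; i < items.length; i++)
            if (items[i] == target) return i;
        return NOT_FOUND;
    }
}
```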
The discussion extends to the unification of coupling and connascence metrics. While structured programming focuses on in/out coupling, connascence delves into the nature of these relationships, offering insights into both static and dynamic interactions that structured programming might overlook.
However, the application of connascence faces challenges, particularly in addressing architectural decisions like synchronous versus asynchronous communication, crucial in modern distributed systems such as microservices.
The transition from modules to components is another architectural focus. Components serve as fundamental building blocks, and their effective separation is vital for achieving desired outcomes. This transition involves understanding architecture characteristics, which are distinct from domain requirements. These characteristics, often termed nonfunctional requirements or quality attributes, are critical for system success. They include operational aspects like performance and scalability, and structural concerns like modularity and maintainability.
Operational characteristics encompass availability, continuity, performance, recoverability, reliability, robustness, and scalability, often overlapping with DevOps concerns. Structural characteristics involve configurability, extensibility, installability, leverageability, localization, maintainability, and portability. These characteristics guide the architectural design, influencing structural decisions and ensuring the system meets both explicit and implicit requirements. Architects must balance these characteristics, considering trade-offs to achieve a robust, scalable, and maintainable architecture.
The text outlines key considerations in software architecture, focusing on supportability, upgradeability, and cross-cutting characteristics. Supportability involves determining the level of technical support and logging needed for debugging. Upgradeability refers to the ease of upgrading applications across servers and clients.
Cross-cutting architecture characteristics are crucial design constraints that don't fit into standard categories. These include accessibility, archivability, authentication, authorization, legal constraints, privacy, security, supportability, and usability. Each characteristic affects the system's design and operation, requiring careful consideration.
The concept of "Italy-ility" illustrates the unique architectural characteristics that can arise from specific organizational needs, combining availability, recoverability, and resilience. The text emphasizes the ambiguity in defining these characteristics, with terms like interoperability and compatibility often overlapping. Interoperability involves ease of integration with other systems, while compatibility focuses on industry standards.
The ISO provides definitions for several architectural characteristics, including performance efficiency, compatibility, usability, reliability, security, maintainability, and portability. Each has subcategories, such as modularity for maintainability and adaptability for portability.
Trade-offs are inherent in architectural design. Enhancing one characteristic, like security, may negatively impact others, such as performance. The metaphor of flying a helicopter, where adjusting one control affects others, illustrates this balance. Architects aim for the "least worst architecture," recognizing that supporting too many characteristics can lead to overly complex and ineffective designs.
Identifying architectural characteristics involves understanding domain concerns, requirements, and implicit knowledge. Architects must collaborate with stakeholders to determine which characteristics are most important, translating domain concerns into architectural terms. This process helps in making informed decisions and prioritizing characteristics effectively.
The text also introduces architecture katas, a method for practicing the identification and implementation of architectural characteristics. This exercise helps architects refine their skills in balancing and prioritizing various architectural needs.
Overall, the text underscores the importance of clear communication, understanding domain-specific needs, and making informed trade-offs in software architecture. It highlights the iterative nature of architecture design, aligning with Agile principles to adapt and improve over time.
Architecture katas are exercises designed to help aspiring architects develop their skills by working on domain-specific problems. Originating from the concept of kata in martial arts, these exercises emphasize proper form and technique. In architecture katas, small teams work on a design problem for a short period, then present their solutions for feedback. This process helps architects practice deriving architecture characteristics from requirements and domain context.
The Silicon Sandwiches case study illustrates how architects can identify architecture characteristics from requirements. The scenario involves a national sandwich shop wanting to enable online ordering. Key requirements include handling thousands of users, integrating with mapping services, supporting mobile access, offering promotions, and accepting various payment methods. Additional context includes franchised shops, overseas expansion plans, and a corporate goal to hire inexpensive labor.
Architects must identify both explicit and implicit architecture characteristics. Explicit characteristics are directly stated in requirements, such as scalability to handle many users, while implicit characteristics, like elasticity, are inferred from domain knowledge. For instance, the shop's traffic may spike during meal times, necessitating an elastic system to handle bursts of requests.
Integration with external services, such as mapping, introduces considerations for reliability and potential fragility. Mobile-device accessibility suggests building a mobile-optimized web application, considering performance characteristics like page load time. Promotions require customizability, which might be supported by a microkernel architecture or design patterns like the Template Method.
Security, performance, and customizability are critical architecture characteristics. Security involves ensuring safe online transactions, often handled by third-party services. Performance requires balancing scalability and availability, ensuring the system remains responsive even during peak times. Customizability allows for tailored user experiences based on location and preferences.
Architects must balance trade-offs between design and architecture. For example, customizability can be achieved structurally through architecture or through design patterns. Collaboration with developers and stakeholders is crucial to making informed decisions and avoiding the "Ivory Tower Architect" anti-pattern.
Measuring and governing architecture characteristics involves defining them clearly across the organization. Operational measures like performance and scalability need nuanced interpretations. Performance may involve specific budgets for different application parts, such as ensuring quick page render times.
Overall, architecture katas and exercises like Silicon Sandwiches help architects practice identifying and prioritizing architecture characteristics, balancing explicit and implicit needs, and making informed trade-offs to achieve optimal design outcomes.
Modern web applications track nuanced metrics to enhance user experience, focusing on performance and scalability. Organizations often implement K-weight budgets to limit page download sizes, acknowledging network constraints, especially on mobile devices. Performance metrics like first contentful paint and first CPU idle are crucial for mobile users. Scalability is monitored using statistical models to predict and assess performance, with deviations indicating potential issues.
Structural measures, such as code complexity, are critical for maintaining code quality. Cyclomatic Complexity (CC) is a key metric used to assess the complexity of code by analyzing decision points and execution paths. A low CC suggests well-factored, modular code, whereas high complexity can indicate code smells, impacting modularity, testability, and deployability. Industry standards suggest a CC under 10 is acceptable, but ideally, it should be under five. Tools like Crap4J evaluate code quality by combining CC with code coverage.
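A quick worked example using the common decision-points-plus-one counting rule (the method itself is invented for illustration):

```java
public class ComplexityExample {
    // CC = decision points + 1: the loop condition plus two ifs gives
    // CC = 4, under the commonly cited ideal threshold of 5.
    public static int score(int[] values, int threshold) {
        int total = 0;
        for (int v : values) {       // decision 1: loop condition
            if (v < 0) {             // decision 2: skip invalid input
                continue;
            }
            if (v > threshold) {     // decision 3: keep only large values
                total += v;
            }
        }
        return total;
    }
}
```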
Process measures intersect with software development, emphasizing agility, testability, and deployability. Testability is measured through code coverage, though it requires thoughtful implementation to ensure code correctness. Deployability metrics include deployment success rates and bug reports, tailored to team goals.
Governance in architecture ensures adherence to priorities like modularity. Fitness functions automate governance, assessing architecture characteristics. These functions are inspired by evolutionary computing, providing objective measures of code quality. Tools like JDepend and ArchUnit help detect issues like cyclic dependencies and enforce architectural rules, ensuring modularity and layer integrity.
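As a sketch of such a fitness function using ArchUnit's fluent API (the package names are hypothetical): the test fails the build whenever the presentation layer reaches into the persistence layer, turning a layering rule into an automated, repeatable check.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class LayerFitnessTest {
    // Fitness function: enforce layer integrity on every build.
    @Test
    void presentationMustNotTouchPersistence() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");
        noClasses().that().resideInAPackage("..presentation..")
                .should().dependOnClassesThat().resideInAPackage("..persistence..")
                .check(classes);
    }
}
```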
Fitness functions can also be used for chaos engineering, exemplified by Netflix's Chaos Monkey and Simian Army. These tools test system resilience by simulating failures and enforcing governance rules, such as ensuring RESTful compliance and security checks.
Overall, the emphasis on nuanced metrics, structural integrity, and governance through fitness functions ensures robust, scalable, and maintainable software architectures.
Chaos engineering, exemplified by tools like Chaos Monkey and Chaos Kong, is pivotal in anticipating system failures. It emphasizes that failures are inevitable, and testing for them enhances system robustness. The use of fitness functions, akin to checklists in other professions, allows architects to enforce architectural principles and automate governance checks, ensuring secure and efficient code releases.
The traditional scope of architecture characteristics at the system level has evolved with modern techniques like microservices. The concept of architecture quantum, defined as an independently deployable artifact with high functional cohesion and synchronous connascence, provides a new framework. This approach allows architects to focus on specific parts of a system, considering dependencies like databases that impact operational characteristics.
Connascence, a measure of coupling, is crucial in understanding how components are interdependent. It includes static connascence, identified through code analysis, and dynamic connascence, related to runtime behavior. Synchronous and asynchronous connascence are defined by how services interact, impacting system design and flexibility.
Domain-Driven Design (DDD) introduces the concept of bounded contexts, which localizes entities within specific domains, reducing coupling and enhancing modularity. This principle aligns with the architecture quantum approach, where characteristics are scoped at the quantum level rather than the entire system. This granularity allows architects to identify challenges early and design hybrid architectures.
The architecture kata "Going, Going, Gone" illustrates the application of these concepts. It involves designing an online auction system with specific requirements like scalability, real-time performance, and security. By identifying different architecture quanta, such as bidder feedback and auctioneer, architects can tailor characteristics like availability and reliability to specific components, enhancing the overall system design.
Component-based thinking focuses on the physical manifestation of modules, known as components. These are packaged differently across languages (e.g., JAR files in Java, DLLs in .NET) and are crucial in organizing and stratifying code. Understanding component scope helps developers manage modularity and dependencies effectively.
Overall, the shift from monolithic to microservices architectures necessitates new approaches like architecture quanta and connascence to manage complexity and enhance system robustness. These concepts allow for more precise and adaptable architectural designs, accommodating the evolving software development landscape.
Components in software architecture serve as modular building blocks within systems, appearing as libraries, subsystems, or services. Libraries run in the same memory space as the calling code, while services operate independently, communicating through networking protocols like TCP/IP or REST. Architects use components to achieve higher modularity, especially in architectures like microservices where simplicity is key.
Architects play a crucial role in defining, managing, and governing components, working with various stakeholders to incorporate architectural characteristics and system requirements. Architecture exists independently of development processes, though Agile practices influence deployment and governance.
The First Law of Software Architecture highlights trade-offs in component creation, with common partitioning styles including layered and modular monoliths. Layered monoliths organize by technical capabilities, while modular monoliths focus on domains, offering different benefits and drawbacks. Domain partitioning aligns with business functions and facilitates cross-functional teams, while technical partitioning separates concerns but may increase coupling.
Conwayâs Law suggests that organizational communication structures influence system designs. The Inverse Conway Maneuver proposes evolving team structures to align with desired architectures. Domain-driven design, as seen in microservices, partitions systems around domains, allowing for easier migration to distributed architectures.
Architects must decide on top-level partitioning early, considering whether to organize by technical capabilities or domains. Technical partitioning offers clear separation of concerns, while domain partitioning better reflects business workflows. Both approaches have trade-offs, with domain partitioning gaining popularity for its alignment with modern architectural styles.
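To make the contrast concrete, here is a minimal sketch of the two top-level partitioning schemes as Java package layouts (all names are hypothetical):

```java
// Technical (layered) top-level partitioning: packages mirror technical
// roles, so one business change (e.g., to checkout) touches several packages:
//   com.example.app.presentation.CheckoutController
//   com.example.app.business.CheckoutService
//   com.example.app.persistence.CheckoutRepository
//
// Domain top-level partitioning: packages mirror business workflows, so the
// same change stays inside one package (layers, if kept, nest within it):
//   com.example.app.checkout.CheckoutController
//   com.example.app.checkout.CheckoutService
//   com.example.app.checkout.CheckoutRepository
package com.example.app.checkout;

public class CheckoutService { /* domain logic for the checkout workflow */ }
```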
Component identification is iterative, involving initial design, requirement alignment, role analysis, and restructuring based on feedback. Proper granularity is crucial, balancing communication and coupling. Architects collaborate with developers to refine components, avoiding pitfalls like the entity trap, which reduces architecture to mere object-relational mappings.
Ultimately, component design varies with architecture styles and development processes, requiring architects to iteratively refine designs to meet system needs and architecture characteristics.
The text discusses various architectural frameworks and patterns used in software development, focusing on component-based thinking and architecture styles.
Frameworks for CRUD Applications: Naked Objects and similar frameworks simplify CRUD application development by automatically generating user interfaces from database entities. These frameworks, like Ruby on Rails' scaffolding, offer default mappings from databases to user interfaces. However, they can lead to the "entity trap" anti-pattern, where database relationships are mistakenly treated as application workflows.
Component Design Techniques:
- Actor/Actions Approach: Originating from the Rational Unified Process, this method identifies system actors and their actions, mapping them to components. It suits systems with distinct roles and is adaptable to both monolithic and distributed systems.
- Event Storming: Derived from domain-driven design (DDD), this technique identifies system events and builds components around event and message handlers. It is well-suited for microservices and distributed architectures.
- Workflow Approach: Similar to event storming but without a focus on messaging, this approach models components around workflows, identifying key roles and activities.
Each technique has different trade-offs and is suited to different development processes.
Case Study: Going, Going, Gone (GGG): The GGG system identifies roles like bidder, auctioneer, and system, with corresponding actions and components. Initial components include VideoStreamer, BidStreamer, BidCapture, BidTracker, AuctionSession, and Payment. Architecture characteristics like scalability and reliability influence component design, leading to the separation of BidCapture and AuctioneerCapture components to address differing needs.
Architecture Quantum and Design Decisions: The architecture quantum concept helps determine whether a system should be monolithic or distributed. A monolithic architecture includes all functionality in a single deployable unit, while a distributed architecture consists of multiple services. The decision depends on the architecture characteristics and quanta identified during design.
Architecture Styles: Architecture styles define the structure and interaction of user interfaces and backend code. They include:
- Big Ball of Mud: An anti-pattern characterized by a lack of structure, making changes difficult and leading to poor scalability and performance.
- Unitary Architecture: Common in embedded systems, this involves software running on a single system without separation of concerns.
- Client/Server Architecture: Includes variations like desktop + database server and browser + web server, where the frontend and backend are separated.
- Three-tier Architecture: Adds an application tier between the user interface and database, enhancing separation and scalability.
Understanding these styles and their trade-offs is crucial for architects to make effective decisions tailored to specific business problems.
Distributed architectures, compared to monolithic ones, offer enhanced performance, scalability, and availability but come with significant trade-offs. The Java language, designed during the era of three-tier computing, integrated serialization to support network object movement. This decision, based on the assumption that three-tier architecture would remain dominant, has left lasting implications, complicating the addition of modern features due to backward compatibility requirements.
Distributed architectures face unique challenges, encapsulated in the "fallacies of distributed computing," introduced by L. Peter Deutsch. These fallacies highlight common misconceptions:
- The Network Is Reliable: Networks remain unreliable, affecting service communication. Techniques like timeouts and circuit breakers address these issues (see the sketch after this list).
- Latency Is Zero: Remote calls have higher latency than local ones. Understanding average and percentile latencies is crucial, especially for microservices.
- Bandwidth Is Infinite: Distributed systems consume significant bandwidth. Reducing data transfer between services can mitigate this, using techniques like private APIs or GraphQL.
- The Network Is Secure: Security is more complex in distributed systems due to increased endpoints, requiring robust security measures.
- The Topology Never Changes: Network topology changes frequently, impacting latency assumptions and requiring constant communication with network administrators.
- There Is Only One Administrator: Multiple administrators exist, necessitating coordination to manage latency and topology changes effectively.
- Transport Cost Is Zero: Distributed architectures incur higher costs due to additional infrastructure needs.
- The Network Is Homogeneous: Networks often comprise heterogeneous hardware, leading to integration challenges and affecting reliability and latency.
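To illustrate the mitigation named in the first fallacy, here is a minimal circuit-breaker sketch (threshold, cool-down, and error handling are illustrative, not a production implementation): after repeated failures the breaker "opens" and fails fast, retrying the remote call only after a cool-down.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int failureThreshold = 3;
    private final Duration coolDown = Duration.ofSeconds(30);
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public synchronized <T> T call(Supplier<T> remoteCall) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(coolDown))) {
                throw new IllegalStateException("circuit open: failing fast");
            }
            openedAt = null; // half-open: allow one trial call through
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // open: stop hammering a dead service
            }
            throw e;
        }
    }
}
```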
Additional considerations in distributed architectures include:
- Distributed Logging: Root-cause analysis is complex due to dispersed logs. Tools like Splunk can consolidate logs but don't solve all challenges.
- Distributed Transactions: Unlike monolithic architectures with straightforward transactions, distributed systems use eventual consistency. Techniques like transactional sagas and BASE transactions manage this complexity (see the saga sketch after this list).
- Contract Maintenance and Versioning: Maintaining and versioning contracts between decoupled services is challenging, requiring careful management and communication.
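As a sketch of the saga idea referenced above (the step/compensation shape is a common rendering, not the book's code): each completed step registers an undo action, and a failure triggers compensations in reverse order instead of a distributed ACID rollback.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OrderSaga {
    interface Step { void run(); void compensate(); }

    // Run steps in order; on failure, compensate completed steps in
    // reverse, restoring consistency without a distributed transaction.
    public void execute(Deque<Step> steps) {
        Deque<Step> done = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.run();
                done.push(step);
            }
        } catch (RuntimeException failure) {
            while (!done.isEmpty()) {
                done.pop().compensate(); // undo in reverse order
            }
            throw failure;
        }
    }
}
```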
The layered architecture style, common due to its simplicity and alignment with organizational structures, consists of logical layers (presentation, business, persistence, and database). It often defaults in the absence of explicit architectural choices, leading to anti-patterns like architecture by implication.
Distributed and layered architectures each have their use cases and challenges. Understanding these complexities helps architects make informed decisions about system design, balancing scalability, performance, and maintainability.
The text discusses various deployment variants and roles within the layered architecture style, emphasizing the separation of concerns and the roles of different layers. The architecture consists of presentation, business, persistence, and database layers, each with specific responsibilities. The presentation layer handles user interface logic, while the business layer manages business rules. The persistence layer interacts with databases, and the database layer stores data.
The architecture can be deployed in different ways: separating the presentation layer, combining business and persistence layers, or integrating all layers into a single deployment for smaller applications. This layered style is beneficial for clear role delineation but lacks agility due to its technical partitioning rather than domain partitioning.
Layers can be closed or open, affecting how requests traverse the architecture. Closed layers require requests to pass through each layer sequentially, promoting isolation and reducing interdependencies, whereas open layers allow bypassing certain layers, which can lead to tight coupling and brittleness.
Adding layers can help manage shared objects and enforce architectural constraints. For example, introducing a services layer can restrict access to shared business objects and maintain layer isolation. Proper documentation of open and closed layers is crucial to avoid tightly coupled architectures.
The layered architecture is suitable for small, simple applications or as a starting point when architectural decisions are still pending. However, as applications grow, maintainability, agility, and testability may suffer. The architecture sinkhole anti-pattern, where requests pass through layers without processing, is a potential issue. If prevalent, it suggests reconsideration of the architecture style.
In terms of characteristics, the layered architecture excels in cost and simplicity due to its monolithic nature. However, it rates low in deployability and testability because changes require redeploying the entire unit. Reliability is moderate, hindered by monolithic deployment risks. Scalability and performance are limited due to the lack of modularity and parallel processing capabilities. Fault tolerance and availability are also weak, affected by high recovery times.
The text also introduces the pipeline architecture style, characterized by pipes and filters. Pipes provide unidirectional communication, while filters perform specific tasks. Filters can be producers, transformers, testers, or consumers, allowing for compositional reuse. This style is prevalent in scenarios requiring simple, one-way processing, such as Electronic Data Interchange (EDI) tools.
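A minimal pipes-and-filters sketch (using Java streams as the "pipe"; the record format is invented) showing producer, tester, transformer, and consumer filters composed into a one-way flow:

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Stream;

public class Pipeline {
    public static void main(String[] args) {
        // Producer filter: the source of records entering the pipe.
        Stream<String> producer = List.of("ok:42", "err:-1", "ok:7").stream();

        // Tester filter: routes or discards records based on a criterion.
        Predicate<String> tester = s -> s.startsWith("ok:");

        // Transformer filter: reshapes each record for the next stage.
        Function<String, Integer> transformer = s -> Integer.parseInt(s.substring(3));

        // Consumer filter: the terminal stage of the one-way pipe.
        producer.filter(tester)
                .map(transformer)
                .forEach(n -> System.out.println("consumed " + n));
    }
}
```

Because each filter does one thing, a new tester or transformer can be added to the chain without touching the others, which is the compositional reuse the style promises.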
Overall, the layered architecture is advantageous for simplicity and initial development but may require transitioning to more modular styles as complexity increases.
The pipeline architecture is a design pattern used for processing data through a series of stages, each with specific tasks, using filters and pipes. This architecture is employed by ETL tools and orchestrators like Apache Camel for data flow and transformation. An example involves streaming service telemetry data to Apache Kafka, where filters like Service Info Capture and Duration Filter process and route data based on its characteristics. The architectureâs modularity allows easy addition of new filters without affecting existing ones, enhancing extensibility.
The pipeline architecture is technically partitioned, with application logic divided into producer, tester, transformer, and consumer filters. It is typically implemented as a monolithic deployment, resulting in a single architectural quantum. Its strengths lie in simplicity, cost-effectiveness, and modularity, enabling modifications of individual filters without impacting the entire system. However, it faces challenges in scalability and fault tolerance due to its monolithic nature.
The microkernel architecture, also known as the plug-in architecture, consists of a core system and plug-in components. It is suitable for product-based applications and divides application logic between the core and independent plug-ins, enhancing extensibility and maintainability. The core system handles basic functionality, while plug-ins add custom processing. For example, in an electronics recycling application, each device assessment can be implemented as a separate plug-in, allowing easy expansion by adding new plug-ins.
Plug-ins communicate with the core system through point-to-point connections, method invocations, or remote access using REST or messaging. They can be compile-based or runtime-based, with frameworks like OSGi managing runtime plug-ins. Plug-ins typically do not directly access a shared database, maintaining decoupling by allowing the core system to handle data interactions. They may have their own data stores for specific needs.
A plug-in registry helps the core system manage available plug-ins, containing details like names and access protocols. Contracts between the core and plug-ins define behavior and data exchange formats, ensuring consistency across plug-ins. These contracts can be standardized or adapted for third-party plug-ins. Overall, the microkernel architecture provides flexibility, allowing applications to adapt and extend functionality through isolated plug-in components.
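A minimal sketch of a registry and contract for the recycling example (the contract shape and device types are assumptions, not the book's code): the core system dispatches by device type and talks to plug-ins only through the shared contract.

```java
import java.util.HashMap;
import java.util.Map;

public class PluginRegistry {
    // Standard contract every assessment plug-in must implement.
    public interface AssessmentPlugin {
        AssessmentResult assess(Device device);
    }
    public record Device(String type, String serial) {}
    public record AssessmentResult(boolean recyclable, double value) {}

    private final Map<String, AssessmentPlugin> plugins = new HashMap<>();

    // Registering a new device type requires no change to the core logic.
    public void register(String deviceType, AssessmentPlugin plugin) {
        plugins.put(deviceType, plugin);
    }

    // Core system: look up and invoke the plug-in for this device type.
    public AssessmentResult assess(Device device) {
        AssessmentPlugin plugin = plugins.get(device.type());
        if (plugin == null) {
            throw new IllegalArgumentException("no plug-in for " + device.type());
        }
        return plugin.assess(device);
    }
}
```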
The text discusses the microkernel architecture style, commonly used in software tools like Eclipse IDE, PMD, Jira, and Jenkins, as well as in web browsers like Chrome and Firefox. This architecture is effective for complex business applications, such as insurance claims processing and tax preparation software, by modularizing rules and processes into plug-in components. This modularity allows for easier updates and customization without affecting the core system.
The microkernel architectureâs strengths include simplicity and cost-effectiveness, while its weaknesses are scalability, fault tolerance, and extensibility due to monolithic deployments. However, it uniquely supports both domain and technical partitioning, making it suitable for applications requiring user customization and feature extensibility.
Testability, deployability, and reliability are above average because functionalities are isolated in plug-in components, reducing testing scope and deployment risks. Modularity and extensibility are also above average, allowing for quick adaptation to changes, such as tax law updates in tax preparation software. Performance is slightly above average, benefiting from the ability to streamline applications by removing unnecessary functionalities.
The text then transitions to service-based architecture, a hybrid of microservices architecture, known for its flexibility and lower complexity compared to other distributed architectures. It features a distributed macro layered structure with separately deployed UI, remote services, and a monolithic database. Services are coarse-grained, typically ranging from 4 to 12 per application, and are accessed remotely, often via REST.
Service-based architecture allows for topology variants, such as breaking apart the UI or database into domain-specific components. An API layer can be added for external system interactions and consolidating cross-cutting concerns. Domain services are designed using layered architecture or domain partitioning, ensuring efficient orchestration of business processes within a single service, maintaining data integrity with ACID transactions.
Database partitioning in service-based architecture often involves a single shared database, which can complicate schema changes. Logical partitioning and federated shared libraries can mitigate this by isolating changes to specific domains, reducing the impact on unrelated services. This approach emphasizes the importance of well-defined data domains to manage changes effectively.
Overall, the microkernel and service-based architectures offer distinct advantages for specific use cases, allowing for modularity, flexibility, and efficient management of complex systems.
The text outlines a service-based architecture used for electronic device recycling, emphasizing key processes: quoting, receiving, assessment, accounting, item status, recycling, and reporting. Each is implemented as a separate domain service, allowing scalability, fault tolerance, and security. The architecture uses two databases, one for customer-facing operations and another for internal processes, enhancing security by segregating data access.
Service-based architecture is domain-partitioned, meaning each service corresponds to a specific domain, such as item assessment. This isolation enables agility, testability, and deployability, allowing changes to be made within a domain without affecting others. The architecture supports scalability by deploying multiple instances of services that require higher throughput, like quoting and item status, while others remain single-instance.
The architecture is rated highly for agility, testability, deployability, fault tolerance, and availability. However, scalability and elasticity are less efficient due to coarse-grained services compared to finer-grained architectures like microservices. Despite this, service-based architecture is simpler and more cost-effective, making it a pragmatic choice for many organizations, especially when domain-driven design is involved.
This architecture style maintains ACID transactions better than others due to its coarse-grained nature, although eventual consistency is sometimes necessary. It offers architectural modularity without complex orchestration or choreography, which are essential in finer-grained architectures.
The text also compares service-based architecture to event-driven architecture, which is highly scalable and adaptable. Event-driven architecture operates asynchronously, using components like event brokers and processors to handle events. It features two topologies: mediator and broker. The broker topology, without a central orchestrator, uses a chain-like broadcasting method via a message broker, suitable for simple event processing flows.
In the broker topology, an initiating event triggers a sequence of processing events. Each event processor performs a task and advertises the result, allowing other processors to react. This decoupled, asynchronous model supports architectural extensibility, enabling new processors to be added with minimal changes.
An example of broker topology is a retail order system where an order placement initiates events for notification, payment, and inventory management. Each processor performs its task independently, advertising results for further processing. This model allows for parallel processing and scalability, demonstrating the flexibility and efficiency of event-driven architecture.
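A compact sketch of that broker topology, with an in-memory stand-in for the message broker (topic names and processors are illustrative): each processor reacts to an event and advertises its own result as a new event, with no central orchestrator.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class BrokerDemo {
    static class Broker {
        private final Map<String, List<Consumer<String>>> subs = new ConcurrentHashMap<>();
        void subscribe(String topic, Consumer<String> processor) {
            subs.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(processor);
        }
        void publish(String topic, String event) {
            subs.getOrDefault(topic, List.of()).forEach(p -> p.accept(event));
        }
    }

    public static void main(String[] args) {
        Broker broker = new Broker();
        // Processors react independently to the advertised event.
        broker.subscribe("order.placed", e ->
            System.out.println("notification service emails customer: " + e));
        broker.subscribe("order.placed", e -> {
            System.out.println("payment service charges card: " + e);
            broker.publish("payment.applied", e); // advertise the result
        });
        broker.subscribe("payment.applied", e ->
            System.out.println("inventory service adjusts stock: " + e));

        broker.publish("order.placed", "order-123"); // initiating event
    }
}
```

Adding a new reaction to an order means one more `subscribe` call; no existing processor changes, which is the extensibility the topology is prized for.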
In event-driven architecture, the broker and mediator topologies are key frameworks for managing workflows. The broker topology involves decoupled event processors that handle events independently, akin to a relay race where each processor passes the baton (event) along. This design enhances scalability and fault tolerance, as processors can operate and scale independently. However, it lacks control over the overall workflow, making error handling and transaction recoverability challenging. If an event processor fails, no central entity is aware, potentially leading to inconsistencies.
In contrast, the mediator topology introduces an event mediator to manage and control workflows, providing better error handling and recoverability. The mediator receives initiating events, orchestrates the workflow by generating processing events, and ensures that each step is completed before proceeding. This topology allows for more structured control, enabling the mediator to pause and restart workflows in case of errors, thus maintaining data consistency.
Despite these advantages, the mediator topology has drawbacks, including increased coupling of event processors and potential bottlenecks, as the mediator must also scale to handle the load. Moreover, modeling complex workflows can be difficult, often requiring a combination of both broker and mediator approaches to address dynamic processing needs.
The choice between the two topologies depends on the trade-offs between control and error handling versus performance and scalability. While the broker topology excels in scalability and performance, the mediator topology offers improved control and error management capabilities.
Event-driven architectures leverage asynchronous communication, enhancing system responsiveness. For instance, posting a comment on a website can be processed asynchronously, reducing user-perceived response time significantly compared to synchronous processing. This approach is beneficial for tasks where immediate feedback is less critical, allowing systems to handle operations efficiently in the background.
In event-driven architecture, asynchronous communication enhances responsiveness by notifying users that actions are accepted and will be processed, without guaranteeing immediate completion. This contrasts with synchronous communication, which ensures actions are completed before responding. Asynchronous methods can significantly reduce perceived response times, but they complicate error handling and message processing.
The workflow event pattern addresses asynchronous error handling by delegating errors to a workflow processor, which attempts to repair and resubmit messages without impacting overall responsiveness. If errors can't be resolved programmatically, they are sent to a dashboard for manual intervention. This pattern can lead to out-of-sequence message processing, necessitating additional mechanisms to maintain order within specific contexts.
Data loss is a concern in asynchronous systems, occurring when messages fail to reach their destination or are not processed correctly. Techniques to mitigate this include using persistent message queues for guaranteed delivery, client acknowledge mode to prevent message loss if a processor crashes, and leveraging ACID transactions to ensure data persistence.
Broadcast capabilities allow messages to be sent without knowing the recipients or their actions, providing high decoupling between event processors. This is crucial for scenarios like stock price updates, where multiple systems might react to the same broadcasted event.
Request-reply messaging is used when synchronous communication is needed within an event-driven architecture. It involves a request queue and a reply queue, using correlation IDs or temporary queues to manage responses. The correlation ID technique is preferred for its efficiency, as temporary queues can burden message brokers under high load.
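A sketch of the correlation ID technique using plain JMS (queue names, timeout, and the surrounding session setup are assumed): the requester tags its message, then uses a message selector to receive only the matching reply from the shared reply queue.

```java
import javax.jms.*;
import java.util.UUID;

public class RequestReplyClient {
    public String requestReply(Session session, String body) throws JMSException {
        Queue requestQueue = session.createQueue("request.queue");
        Queue replyQueue = session.createQueue("reply.queue");

        // Tag the request so the reply can be matched back to us.
        String correlationId = UUID.randomUUID().toString();
        TextMessage request = session.createTextMessage(body);
        request.setJMSCorrelationID(correlationId);
        request.setJMSReplyTo(replyQueue);
        session.createProducer(requestQueue).send(request);

        // Message selector: receive only the reply carrying our ID, so a
        // shared reply queue serves many concurrent requesters.
        MessageConsumer consumer = session.createConsumer(
                replyQueue, "JMSCorrelationID = '" + correlationId + "'");
        TextMessage reply = (TextMessage) consumer.receive(5000); // 5s timeout
        return reply == null ? null : reply.getText();
    }
}
```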
Choosing between request-based and event-based models depends on the system's needs. Request-based models are suited for structured, data-driven workflows requiring control and certainty, while event-based models are ideal for dynamic, action-driven processes demanding responsiveness and scalability.
Event-Driven Architecture (EDA)
Advantages:
- Dynamic Content Response: EDA excels in handling dynamic user content.
- Scalability and Elasticity: Offers superior scalability through asynchronous communication and parallel processing.
- Agility and Extensibility: Easily adapts to changes and integrates new features.
- Performance and Responsiveness: Provides high performance with real-time decision-making and situational awareness.
Trade-offs:
- Eventual Consistency: Supports only eventual consistency, not immediate.
- Control and Testing Challenges: Less control over processing flow and difficult to test due to nondeterministic event flows.
- Outcome Uncertainty: Uncertain outcomes due to dynamic event interactions.
Hybrid Architectures:
- EDA is often combined with other architectures like microservices and space-based architectures to enhance scalability and remove bottlenecks.
- Microservices: Use messaging for asynchronous data transfer, enhancing scalability.
- Space-Based Architecture: Uses in-memory data grids to handle high concurrency and variable user loads.
Architecture Characteristics Ratings
- Performance, Scalability, Fault Tolerance: EDA scores high in these areas due to asynchronous communication and decoupled processors.
- Simplicity and Testability: Rates low due to complex event flows and dynamic interactions.
- Evolutionary Nature: High adaptability allows straightforward integration of new features.
Space-Based Architecture
Purpose:
- Designed to address high scalability, elasticity, and concurrency issues in applications with variable user volumes.
Components:
- Processing Unit: Contains application logic and in-memory data grids for high performance.
- Virtualized Middleware: Manages data synchronization and request handling through components like messaging and data grids.
- Data Pumps: Asynchronously update databases, ensuring eventual consistency.
Topology:
- Tuple Space Concept: Uses multiple parallel processors with shared memory, eliminating central database bottlenecks.
- In-Memory Data Grids: Data is kept in-memory and replicated, providing near-infinite scalability.
Virtualized Middleware Components:
- Messaging Grid: Manages input requests and session state, forwarding requests to available processing units.
- Data Grid: Ensures data consistency across processing units through asynchronous replication.
- Processing Grid (Optional): Manages orchestrated requests between multiple processing units.
- Deployment Manager: Dynamically adjusts processing units based on load conditions.
Data Management:
- Data Pumps: Essential for asynchronous data updates to the database, maintaining eventual consistency.
- Contracts and Messaging: Use schemas or messages to define data updates, ensuring accurate database synchronization.
By integrating EDA with other architecture styles, systems can achieve enhanced scalability, responsiveness, and adaptability, while managing the inherent challenges of testing and outcome predictability.
In space-based architecture, data writers and data readers form a critical part of the data abstraction layer, facilitating database interactions without tightly coupling processing units to database structures. Data writers update databases based on messages from data pumps, and they can be domain-based or dedicated to specific processing units. Domain-based writers handle updates for a specific domain, while dedicated writers align with individual processing units for scalability and agility.
Data readers, on the other hand, retrieve data from databases under specific conditions, such as system crashes or redeployments. They use a reverse data pump to load data into caches, ensuring processing units are synchronized. Both data writers and readers can be implemented as services, applications, or data hubs and are crucial for maintaining data abstraction, allowing for incremental database changes without affecting processing units.
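A minimal sketch of the cache / data pump / data writer flow, with SQLite standing in for the real database and illustrative names: the processing unit touches only the in-memory cache, while a background data writer drains the pump and brings the database to eventual consistency.

```python
import queue
import sqlite3
import threading

cache: dict[str, str] = {}         # stands in for the in-memory data grid
pump: queue.Queue = queue.Queue()  # data pump: async channel to the data writer

def update_order(order_id: str, status: str) -> None:
    cache[order_id] = status       # processing unit updates the cache...
    pump.put((order_id, status))   # ...and pushes the change into the pump

def data_writer() -> None:
    """Drains the pump and persists updates; the database lags the cache."""
    db = sqlite3.connect("orders.db")
    db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, status TEXT)")
    while True:
        order_id, status = pump.get()
        if order_id is None:       # sentinel: shut down after draining
            break
        db.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)", (order_id, status))
        db.commit()
    db.close()

writer = threading.Thread(target=data_writer)
writer.start()
update_order("A1", "placed")
update_order("A1", "shipped")
pump.put((None, None))
writer.join()
```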
Data collisions, a potential issue in replicated caching, occur when updates in one cache instance conflict with another due to replication latency. Factors influencing collision rates include the number of processing units, update rate, cache size, and replication latency. The collision rate formula helps estimate potential collisions, guiding decisions on the feasibility of replicated caching.
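As a hedged back-of-envelope version of such an estimate (the book's exact formula may differ), one can ask how likely it is that another instance updated the same row during the replication-latency window:

```python
def collisions_per_second(num_units: int, update_rate: float,
                          cache_rows: int, latency_ms: float) -> float:
    """Rough collision estimate for replicated caching.

    Assumes each instance performs update_rate updates/second, independently
    and uniformly spread across cache_rows rows.
    """
    latency_s = latency_ms / 1000.0
    # Updates made elsewhere while one instance's change is still replicating:
    concurrent_updates = (num_units - 1) * update_rate * latency_s
    p_same_row = concurrent_updates / cache_rows  # chance one hits the same row
    return num_units * update_rate * p_same_row

# 5 units, 10 updates/s each, 50,000 rows, 100 ms replication latency:
print(collisions_per_second(5, 10, 50_000, 100))  # ~0.004/s, about 346 per day
```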
Space-based architecture can be deployed in cloud, on-premises, or hybrid environments. A hybrid setup allows applications to run in cloud environments while maintaining data on-premises, supporting cloud-based synchronization and local data management.
Caching, essential for performance, can be replicated or distributed. Replicated caching offers high speed and fault tolerance by synchronizing in-memory data grids across processing units. However, it may struggle with high data volumes and update rates. Distributed caching centralizes data, enhancing consistency but potentially reducing performance and fault tolerance. The decision between caching models depends on factors like data consistency needs, cache size, and update frequency.
Near-cache models combine in-memory grids with distributed caches, using eviction policies to manage data. This setup balances performance and consistency by keeping recently or frequently used data in front caches while maintaining a full backing cache. Near-caches are not synchronized between processing units, allowing each to manage its data context independently.
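A toy near-cache sketch, with a small LRU-evicted front cache over a plain dict standing in for the full backing cache: reads are served from the front cache when possible, and the least recently used entry is evicted once capacity is exceeded.

```python
from collections import OrderedDict

class NearCache:
    def __init__(self, backing: dict, capacity: int = 3):
        self.backing = backing      # stands in for the full distributed cache
        self.front = OrderedDict()  # in-process front cache, LRU-evicted
        self.capacity = capacity

    def _remember(self, key, value):
        self.front[key] = value
        self.front.move_to_end(key)         # mark as most recently used
        if len(self.front) > self.capacity:
            self.front.popitem(last=False)  # evict the LRU entry

    def get(self, key):
        if key in self.front:
            self.front.move_to_end(key)
            return self.front[key]
        value = self.backing[key]           # miss: fall through to backing
        self._remember(key, value)
        return value

    def put(self, key, value):
        self.backing[key] = value           # write through to the backing cache
        self._remember(key, value)

cache = NearCache({"a": 1, "b": 2, "c": 3, "d": 4})
cache.get("a"); cache.get("b"); cache.get("c"); cache.get("d")
print(list(cache.front))  # ['b', 'c', 'd'] -- 'a' was evicted
```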
Space-based architecture is ideal for applications that experience sudden user spikes and concurrent loads exceeding 10,000 users, such as online concert ticketing and auction systems. These systems require high performance, scalability, and elasticity. In concert ticketing, user volumes can surge dramatically when tickets go on sale, necessitating real-time updates on seating availability. Traditional databases struggle under such concurrent request volumes, making space-based architecture, with its in-memory data caching, a suitable solution that allows processing units to scale rapidly to meet demand.
Similarly, online auction systems benefit from space-based architecture due to unpredictable spikes in user loads. Multiple processing units can be dynamically allocated as demand increases and released as it decreases. The architecture's asynchronous data handling improves performance by allowing bid data to be processed with minimal latency.
Space-based architecture excels in elasticity, scalability, and performance, achieving these through in-memory caching and minimizing database constraints. However, this complexity poses challenges in simplicity and testability. Testing at scale is complex and costly, often requiring production environments, which introduces operational risks. The architecture is expensive due to licensing fees for caching products and high resource utilization.
The architecture is both domain and technically partitioned. Domain partitioning aligns with specific high-elasticity systems, while technical partitioning separates transactional processing from data storage. Processing units, data pumps, and databases form technical layers similar to n-tier architectures.
In contrast, orchestration-driven service-oriented architecture, popular in the late 1990s, focused on enterprise-level reuse. It organized services into layers: business services for domain behavior, enterprise services for fine-grained implementations, application services for single-use needs, and infrastructure services for operational concerns. The orchestration engine coordinated these services, handling transactional behavior and integration.
Despite its reuse goals, this architecture faced issues with coupling and complexity. Centralized services like a canonical "Customer" led to widespread dependencies, making changes risky and requiring coordinated deployments. Attempts to technically partition systems often resulted in fragmented domain concepts, complicating development tasks.
Modern architectural evaluation criteria such as deployability and testability were not priorities during this architecture's peak. It suffered from coupling due to a single database and orchestration engine, leading to challenges in achieving modern engineering goals. The architecture's structure contributed to the rise of microservices, which address these disadvantages by promoting more independent service deployment and scalability.
Microservices architecture is a popular style that emphasizes high decoupling and is inspired by domain-driven design (DDD), particularly the concept of bounded context. Each service in a microservices architecture operates independently, encapsulating all necessary components like databases and subcomponents, which allows for decoupling from other services. This architecture is distributed, with each service running in its own process, facilitated by technologies like virtual machines and containers.
Granularity is crucial in microservices, as architects aim to find the right service size to balance functionality and communication overhead. Services should be functionally cohesive, often defined by business workflows or domains. The architecture avoids shared databases to prevent coupling, promoting data isolation, which allows services to choose the most suitable storage solutions independently.
An API layer is often used in microservices to manage interactions between services and consumers, but it should not serve as an orchestration tool, as this would violate the architecture's principles. Instead, operational concerns like monitoring and logging are managed through patterns like the sidecar, which allows for consistent operational interfaces across services.
Microservices also support heterogeneous environments, where different services can use different technology stacks. This approach prevents accidental coupling and allows teams to choose the most appropriate technologies for their specific service needs.
Communication in microservices can be synchronous or asynchronous. Synchronous calls require direct responses, while asynchronous communication often uses events and messages, aligning with an event-driven architecture. Choreography and orchestration are two patterns used to manage interactions, with choreography favoring decentralized coordination.
Microservices architecture often includes a service mesh, which provides unified control over operational concerns like logging and monitoring. This mesh forms a holistic view of the systemâs operational aspects, facilitating service discovery and scalability.
Frontends in microservices can be monolithic or follow the microfrontend pattern, which aligns user interfaces with backend services to maintain decoupling. Microfrontends allow for isolated service boundaries that extend from the user interface to backend services, often implemented using component-based frameworks like React.
Overall, microservices architecture emphasizes decoupling, flexibility, and scalability, allowing teams to develop and deploy services independently while maintaining a cohesive system structure.
In microservices architecture, decoupling is emphasized, aligning with event-driven architecture (EDA) patterns like choreography and orchestration. Choreography allows services to interact without a central mediator, preserving decoupling but complicating error handling and coordination. Orchestration introduces a mediator to manage complex workflows, creating coupling but simplifying coordination.
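A tiny choreography sketch, using an in-process event bus with illustrative service logic: each service subscribes to the events it cares about and reacts independently, with no mediator directing the workflow.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event type -> list of handlers

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)

# Payment service reacts to placed orders; shipping reacts to payments.
subscribe("order.placed", lambda order: publish("payment.charged", order))
subscribe("payment.charged", lambda order: print("shipping order", order["id"]))

publish("order.placed", {"id": 42})  # -> shipping order 42
```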
Transactions across microservices are challenging due to decoupling, and architects are advised to avoid them by adjusting service granularity. The saga pattern offers a solution for distributed transactions, using a mediator to handle success or failure across services. This pattern, while useful, increases complexity and should be used sparingly.
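A minimal orchestrated saga sketch with toy in-process services (in a real system each call would be a remote request): the mediator records a compensating action after each successful step and unwinds them in reverse order when a later step fails.

```python
class Inventory:
    def reserve(self, items):
        print("reserved", items)
        return items

    def release(self, reservation):
        print("released", reservation)

class Payments:
    def charge(self, total):
        if total > 100:                       # simulate a payment failure
            raise RuntimeError("card declined")
        print("charged", total)
        return total

    def refund(self, charge):
        print("refunded", charge)

def place_order_saga(items, total):
    inventory, payments = Inventory(), Payments()
    compensations = []                        # undo actions for completed steps
    try:
        reservation = inventory.reserve(items)
        compensations.append(lambda: inventory.release(reservation))
        charge = payments.charge(total)
        compensations.append(lambda: payments.refund(charge))
        print("order placed")
    except Exception as exc:
        for undo in reversed(compensations):  # compensate in reverse order
            undo()
        print("order rejected:", exc)

place_order_saga(["book"], 30)   # succeeds
place_order_saga(["tv"], 500)    # payment fails; the reservation is released
```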
Microservices architecture supports scalability, elasticity, and evolutionary change, benefiting from modern engineering practices like DevOps. However, it faces challenges in performance due to the overhead of network calls and security checks. Choreography is often preferred over orchestration to reduce coupling and improve communication efficiency.
The architecture is domain-centered, with service boundaries aligning with domains. Extreme decoupling poses challenges but offers significant benefits when executed well. Architects must understand when to adhere to or deviate from best practices intelligently.
Choosing an architecture style depends on various factors, including domain understanding, architecture characteristics, data architecture, organizational factors, and domain/architecture isomorphism. Architects must decide between monolithic or distributed architectures based on the need for differing architecture characteristics. They must also determine data placement and communication styles, opting for synchronous communication by default and asynchronous when necessary.
The decision process involves creating architecture topology, documenting design decisions, and establishing fitness functions to protect important principles. A modular monolith, for example, partitions components by domain within a single quantum, balancing simplicity and functionality.
Two architecture styles are then discussed, monoliths and microservices, illustrated by the Silicon Sandwiches and Going, Going, Gone (GGG) case studies.
Monolith Architecture: Silicon Sandwiches
- Design: A monolith with a single relational database and web-based UI, designed to be cost-effective. Domains appear as components.
- Customization: Achieved through an Override endpoint for individual customizations, ensuring domain components reference the Override component.
- Transition: If resources allow, tables can be separated by domain to ease a future move to a distributed architecture.
- Microkernel Alternative: Utilizes plug-ins for customization, with a core system of domain components and a single database. Backends for Frontends (BFF) pattern is used for API layer adaptation, allowing rich interfaces and future device support.
Microservices Architecture: Going, Going, Gone (GGG)
- Requirements: High scalability, elasticity, and performance, with differing characteristics for roles like auctioneer and bidder.
- Design: Components are services, supporting different operational characteristics. Microservices better accommodate customization and separation.
- Services: Include BidCapture, BidStreamer, BidTracker, Auctioneer Capture, Auction Session, Payment, Video Capture, and Video Streamer.
- Communication: Mix of synchronous and asynchronous, using message queues to handle varying message flow rates and prevent bottlenecks.
Architecture Decisions and Anti-Patterns
- Decision-Making: Involves gathering information, justifying, documenting, and communicating decisions. Architects must overcome anti-patterns:
- Covering Your Assets: Avoiding decisions due to fear. Overcome by collaborating with teams and making decisions at the last responsible moment.
- Groundhog Day: Repeated discussions due to lack of justification. Provide both technical and business justifications.
- Email-Driven Architecture: Poor communication of decisions. Use a single system of record and notify only those impacted.
Architecturally Significant Decisions
- Decisions affecting structure, nonfunctional characteristics, dependencies, interfaces, or construction techniques are significant. They impact the system's architecture and should be documented.
Architecture Decision Records (ADRs)
- ADRs are concise documents (1-2 pages) that effectively record architecture decisions, as popularized by Michael Nygard and recommended by ThoughtWorks.
ADRs can be written in formats like AsciiDoc, Markdown, or wiki templates, and tools such as Nat Pryce's ADR-tools help manage them. An ADR typically consists of five sections (Title, Status, Context, Decision, and Consequences), plus optional sections like Compliance and Notes.
Title: The title is sequentially numbered and descriptive, providing clarity on the decision's nature.
Status: This can be Proposed, Accepted, or Superseded. Proposed indicates pending approval, Accepted means ready for implementation, and Superseded shows a decision has been replaced. This status helps maintain a historical record of decisions.
Context: Describes the forces and situations prompting the decision, potentially including alternatives if necessary.
Decision: Clearly states the architecture decision and its justification, focusing on the "why" rather than the "how," which is crucial for understanding the rationale behind decisions.
Consequences: Documents the impact of the decision, including trade-offs and potential issues, providing a comprehensive view of the decision's implications.
Compliance: Although not standard, this section ensures the decision's compliance can be measured, often through automated tests (see the fitness-function sketch below).
Notes: Includes metadata like author, approval date, and modifications, aiding in tracking changes beyond what version control systems offer.
ADRs should be stored in accessible locations, such as wikis or shared directories, to ensure visibility and version control. The directory structure should accommodate application-specific, integration, and enterprise-wide decisions.
ADRs serve as effective documentation tools, providing context, decision rationale, and consequences, which are crucial for understanding and maintaining software architecture. They can also be used to document standards, ensuring they are justified and understood, thus increasing adherence.
By using ADRs, architects can document architecture decisions systematically, ensuring clarity and consistency across projects and teams.
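The Compliance section mentioned above calls for automated checks; one common form is an architecture fitness function run in CI. The sketch below is only illustrative: the layer names, directory layout, and the specific rule are hypothetical, but it shows how an ADR's layering decision can be enforced by failing the build when an import crosses layers in a forbidden direction.

```python
import ast
from pathlib import Path

# Hypothetical ADR: web may use service, service may use persistence, never upward.
ALLOWED = {"web": {"service"}, "service": {"persistence"}, "persistence": set()}

def layering_violations(src_root: str) -> list[str]:
    """Return import edges that break the documented layer dependencies."""
    violations = []
    for layer, allowed in ALLOWED.items():
        for path in Path(src_root, layer).rglob("*.py"):
            for node in ast.walk(ast.parse(path.read_text())):
                if isinstance(node, ast.Import):
                    targets = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom):
                    targets = [node.module or ""]
                else:
                    continue
                for top in (t.split(".")[0] for t in targets):
                    if top in ALLOWED and top != layer and top not in allowed:
                        violations.append(f"{path}: {layer} imports {top}")
    return violations

# Run as a CI fitness function:
violations = layering_violations("src")
assert not violations, violations
```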
Architecture Decision Records (ADRs) are essential for documenting and justifying decisions within a system. In the "Going, Going, Gone" case study, decisions like using event-driven microservices and publish-and-subscribe messaging are documented. An ADR example includes asynchronous pub/sub messaging between services.
Analyzing architecture risk involves assessing potential issues like availability, scalability, and data integrity. A risk matrix helps classify risks as low, medium, or high by evaluating impact and likelihood. For instance, a high-impact but low-likelihood event results in medium risk. Risk assessments summarize these evaluations, highlighting areas with the highest risk, such as data integrity.
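A sketch of that classification logic, assuming the common scheme (consistent with the example above) in which impact and likelihood are each rated 1 to 3 and their product determines the risk band:

```python
LEVEL = {"low": 1, "medium": 2, "high": 3}

def classify_risk(impact: str, likelihood: str) -> str:
    """Risk matrix: multiply the two ratings and map the product to a band."""
    score = LEVEL[impact] * LEVEL[likelihood]
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"  # products of 6 or 9

print(classify_risk("high", "low"))  # -> medium, matching the example above
```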
Risk assessments can be filtered to focus on high-risk areas, improving clarity. Direction of risk (improving or worsening) is indicated using symbols like plus/minus signs or arrows, with colors enhancing clarity.
Risk storming is a collaborative exercise to identify architectural risks. It involves architects, senior developers, and tech leads to gain a comprehensive perspective. The process includes individual risk identification using the risk matrix, followed by collaborative consensus and mitigation discussions.
During risk storming, participants use architecture diagrams to pinpoint risk areas. They classify risks individually before collaborating to reach consensus. This ensures diverse perspectives are considered, uncovering risks that might be overlooked by a single architect.
Consensus involves discussing differences in risk perception among participants. For instance, if one sees high risk in a component due to past issues, it prompts a discussion to reassess risk levels. This collaborative effort ensures a thorough understanding and agreement on risk areas.
Mitigation is the final step, where identified risks are addressed through changes or enhancements to the architecture. This collaborative approach ensures that architecture remains robust and resilient to identified risks.
By continuously analyzing and addressing risks through methods like risk storming and using tools like the risk matrix, architects can maintain a stable and secure architecture. This process not only identifies current risks but also tracks improvements or degradations over time, ensuring ongoing architectural health and performance.
Risk storming is a strategic process used to identify and mitigate risks in software architecture and development. It involves assessing potential risks and making informed decisions about architectural changes to address these risks, often involving negotiations between architects and stakeholders to balance cost and risk.
Key Concepts of Risk Storming
- Risk Identification and Mitigation:
- Risks are identified during sessions like risk storming, where stakeholders assess the architecture for potential issues.
- Solutions may involve refactoring or redesigning parts of the architecture, such as database clustering to improve availability.
- Cost-Benefit Analysis:
- Decisions are made based on whether the cost of mitigating a risk outweighs the potential impact of the risk itself.
- Stakeholders negotiate to find cost-effective solutions that sufficiently reduce risk.
- Agile Story Risk Analysis:
- Risk storming can be applied to Agile processes, assessing the risk of not completing user stories within an iteration.
- A risk matrix helps prioritize and track high-risk stories.
Application in a Call Center System
- System Requirements:
- Supports concurrent users and integrates with a third-party diagnostics engine.
- Ensures HIPAA compliance and handles high traffic during peak times.
- Architecture Overview:
- Includes separate interfaces for different user roles and services like case management and medical records exchange.
- Uses REST for communication, with proprietary protocols for external systems.
- Risk Areas and Mitigations:
- Availability: High-risk areas include the central database and diagnostics engine. Solutions involve database clustering and publishing SLAs for external systems.
- Elasticity: The diagnostics engine interface is a bottleneck. Mitigation includes asynchronous queues and caching to manage load spikes.
- Security: The API gateway is a high-risk area for unauthorized access. Separate API gateways for different user roles enhance security.
Continuous Risk Management
- Risk storming is an ongoing process, revisited as systems evolve and new features are added.
- It helps prevent issues before they occur in production by continuously identifying and addressing potential risks.
Diagramming and Presenting Architecture
- Effective Communication:
- Architects must effectively communicate their ideas to stakeholders and developers.
- Diagramming and presenting are crucial skills, requiring consistency in visual representation.
- Tools and Techniques:
- Use low-fidelity tools early in design to avoid attachment to initial ideas.
- Advanced tools with features like layers and templates are used for detailed diagrams.
- Representational Consistency:
- Always show relationships between architecture parts to avoid confusion.
- Incremental builds in diagrams help in presentations by gradually revealing complex structures.
Overall, risk storming and effective communication through diagramming are vital for managing software architecture risks, ensuring system robustness, and facilitating stakeholder engagement. These processes support continuous improvement and adaptability in dynamic development environments.
In technical diagramming, consistency and efficiency are achieved by using stencils for common patterns, which help architects quickly build new diagrams. Tools with features like magnets enhance alignment and connectivity. Supporting various formats, these tools must also handle lines, colors, and visual elements effectively. Standard diagramming techniques include UML, C4, and ArchiMate. UML, though initially popular, has seen reduced usage, with class and sequence diagrams remaining relevant. C4, developed by Simon Brown, offers a modern alternative, focusing on context, container, and component views, but is less suited for distributed architectures. ArchiMate provides a lightweight modeling language for enterprise ecosystems, emphasizing simplicity.
Effective diagram guidelines include clear titles, visible lines with directional arrows, and consistent use of solid and dotted lines to indicate synchronous and asynchronous communication. Shapes should be standardized within an organization, and labels and keys must be used to prevent ambiguity. Color can enhance differentiation between elements.
Presentation skills are crucial for architects, involving effective use of tools like PowerPoint and Keynote. Neal Ford's "Presentation Patterns" highlights the importance of manipulating time through transitions and animations to maintain audience engagement. Avoiding the "Bullet-Riddled Corpse" anti-pattern, presenters should use incremental builds to reveal information progressively, maintaining suspense and interest.
Infodecks differ from presentations by being standalone, comprehensive slide decks shared for individual reading, without the need for time elements. Presenters should remember that slides are only part of the story; strategic use of invisibility, such as inserting blank slides, can refocus attention on the speaker.
Effective architects also focus on team dynamics. They must communicate constraints clearly, balancing tight and loose boundaries to avoid frustration or confusion. Architect personalities influence these dynamics: control freaks impose excessive constraints, while armchair architects provide insufficient guidance. Effective architects strike a balance, ensuring teams have the tools and freedom to implement architecture successfully. By fostering collaboration and clear communication, architects can guide teams to create effective solutions.
In software architecture, the role of the architect varies based on project complexity and team dynamics. The "control freak" architect can disrupt development by micromanaging, while the "armchair architect" is often disconnected, lacking current coding experience and failing to provide practical guidance. An effective architect strikes a balance, offering appropriate constraints and support to the team.
Architectural Roles:
- Control Freak Architect: Becomes over-involved in the team's work, micromanaging details and hindering productivity.
- Armchair Architect: Stays detached from implementation, offering vague guidance; development teams end up assuming architectural duties themselves, which reduces team velocity.
Effective Architect Characteristics:
- Provides necessary tools and guidance.
- Removes roadblocks and fosters team collaboration.
- Balances involvement based on project needs and team dynamics.
Elastic Leadership: Developed by Roy Osherove, this concept involves adjusting control based on five factors:
- Team Familiarity: New teams require more guidance; familiar teams need less.
- Team Size: Larger teams require more control to manage complexity and prevent cliques.
- Overall Experience: Junior teams need more mentoring; senior teams need less control.
- Project Complexity: Complex projects demand more architect involvement.
- Project Duration: Short projects need less control due to urgency, while long projects require more to maintain momentum.
Control Assessment: Architects should continually reassess these factors to determine their level of involvement. Each factor is rated on a scale from -20 (armchair architect) to +20 (control freak), and the five ratings are summed. For example, a newly formed (+20) but small (-20), experienced (-20) team on a simple (-20), short (-20) project sums to -60, indicating minimal architect control.
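Expressed as a quick calculation, using the hypothetical ratings from the example above:

```python
# Per-factor ratings: +20 pushes toward control freak, -20 toward armchair.
ratings = {
    "team_familiarity": +20,    # newly formed team needs more guidance
    "team_size": -20,           # small team
    "overall_experience": -20,  # mostly senior developers
    "project_complexity": -20,  # relatively simple problem
    "project_duration": -20,    # short project
}

score = sum(ratings.values())
print(score)  # -60: lean strongly toward the hands-off, armchair end
```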
Team Dynamics:
- Process Loss: As team size increases, productivity decreases due to coordination challenges.
- Pluralistic Ignorance: Team members may conform to norms they privately reject, fearing judgment.
- Diffusion of Responsibility: Larger teams may suffer from unclear roles, leading to dropped tasks.
Checklist Utilization: Drawing from aviation and medical practices, checklists can ensure thoroughness and consistency in architectural processes, preventing oversights that could impact project success.
By understanding these dynamics and leveraging tools like checklists, architects can better support development teams and ensure successful project outcomes.
Dr. Gawande's surgical checklists significantly reduced staph infection rates in hospitals, demonstrating their effectiveness. Checklists ensure comprehensive coverage of tasks, but their use in software development requires discernment. Not all tasks need checklists; only those prone to errors or frequently skipped should be included. Overuse of checklists can lead to diminishing returns, as developers may ignore them if they are too numerous or lengthy. Effective checklists are concise, capturing necessary steps without procedural dependencies. Automation should replace checklist items where possible.
Three key checklists for software development are: developer code completion, unit and functional testing, and software release. The developer code completion checklist ensures all coding standards and frequently overlooked items are addressed before declaring code as "done." It includes tasks like code formatting and handling exceptions, with automation recommended for repetitive checks.
The unit and functional testing checklist covers edge cases and scenarios often missed by developers. It bridges the gap between developers and testers, ensuring comprehensive testing before code reaches production. Automated tests should replace checklist items when feasible.
The software release checklist mitigates risks associated with deploying software. It evolves with each deployment failure, incorporating new checks to prevent repeated errors. Typical items include server configuration changes, third-party library updates, and database modifications.
The Hawthorne effect can encourage checklist adherence by creating awareness of monitoring, even if sporadic. Developers are more likely to complete tasks correctly when they believe their actions are observed.
Providing guidance through design principles helps teams make informed decisions. Architects can use visual aids to clarify which decisions developers can make independently and which require approval. For instance, special-purpose libraries might be developer-approved, while framework choices remain with the architect.
Business justifications play a crucial role in decision-making. Developers must consider both technical and business impacts, fostering a healthier team dynamic. An example illustrates how requiring a business justification transformed a disruptive team member into a valuable contributor.
Effective team leadership involves not just technical guidance but also facilitating team dynamics and decision-making. Architects, with their close collaboration with development teams, are well-positioned to enhance team effectiveness. This role distinguishes technical architects from effective software architects, who lead teams in implementing architecture.
Negotiation and leadership skills are essential for architects, requiring years of practice. Architects must navigate organizational politics, as their decisions often face challenges from developers, other architects, and stakeholders. Effective negotiation involves balancing technical soundness with cost and time considerations, ensuring decisions align with organizational goals.
Negotiation is a crucial skill for software architects, who must balance technical and business needs. Effective architects understand organizational politics and possess strong negotiation and facilitation abilities to resolve conflicts between stakeholders. In scenario 1, a senior VP demands five nines (99.999%) availability for a trading system, while the architect believes three nines (99.9%) is sufficient. The architect must diplomatically negotiate, using techniques like understanding stakeholder language and providing clear downtime metrics to shift focus from abstract percentages to tangible impacts.
Gathering information before negotiations is vital. For instance, knowing that five nines of availability permits only about 5.26 minutes of downtime per year (roughly 5 minutes and 15 seconds) helps frame discussions around realistic expectations and costs. When all else fails, architects can discuss cost and time implications, but this should be a last resort.
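The arithmetic behind that figure, using a 365-day year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: int) -> float:
    availability = 1 - 10 ** -nines  # e.g., 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} minutes/year")
# 3 nines: 525.60, 4 nines: 52.56, 5 nines: 5.26
```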
Another strategy is "divide and conquer," where architects qualify requirements to specific system areas, reducing scope and complexity. This approach can clarify whether the entire system needs high availability or just parts of it.
Scenario 2 involves a debate between architects over using asynchronous messaging versus REST for service communication. Demonstration can be more effective than discussion; by testing both approaches in a production-like environment, the lead architect can provide evidence for their recommendation. Calm leadership and clear reasoning are essential, especially when arguments become heated.
With developers, architects should avoid the "Ivory Tower" approach, where decisions are imposed without explanation. Instead, providing justifications fosters respect and collaboration. For example, explaining the need for a closed-layer architecture helps developers understand decisions rather than just following orders.
Encouraging developers to find solutions themselves can also be effective. If a developer disagrees with a framework choice, they can be tasked with proving its viability, leading to either a validation of the architectâs decision or a better solution.
Leadership in architecture involves more than technical skills; it requires people skills, facilitation, and communication. The 4 C's of architecture (communication, collaboration, clarity, and conciseness) help architects become effective leaders and respected team members. Avoiding accidental complexity and focusing on these principles ensures clear and effective communication.
Being pragmatic yet visionary is a balancing act for architects. Visionaries plan for the future with imagination, while pragmatists consider practical constraints like budget, time, and team skills. A good architect finds a balance, ensuring solutions are both innovative and feasible.
In summary, software architects must skillfully navigate negotiations, lead teams effectively, and balance visionary ideas with practical implementation. By focusing on clear communication, collaboration, and strategic thinking, architects can create solutions that meet both business and technical requirements.
The text discusses the role of software architects in balancing visionary and pragmatic approaches, emphasizing leadership through example, effective communication, and integration with development teams. Architects should assess solutions like data meshes pragmatically, identifying bottlenecks and considering alternatives like caching. Leadership involves gaining respect without relying on titles, illustrated by the "lead by example" story of a captain and sergeant.
Effective communication is crucial; architects should avoid dictating solutions and instead foster collaboration by posing suggestions as questions. Using people's names in conversations builds familiarity and respect. Handshakes should be firm and respectful, avoiding overly personal gestures like hugs in professional settings.
Turning requests into favors can motivate team members. For instance, asking a developer to help with a task by framing it as a favor rather than a demand can yield better results. Architects should aim to become the go-to person on the team, offering help and hosting knowledge-sharing sessions to build leadership and mentoring skills.
Integration with development teams is vital, but architects often face packed schedules. They should manage meetings effectively, questioning their necessity and optimizing attendance based on agendas. Architects should also take meetings on behalf of tech leads to allow the development team to focus on tasks. Meetings should be scheduled to minimize disruption to developers' flow states, ideally at times that don't interrupt their peak productivity.
Sitting with the development team instead of apart fosters better communication and integration, reinforcing the architectâs role as a facilitator and collaborator. Overall, architects should focus on people skills, fostering collaboration, and minimizing unnecessary disruptions to effectively lead and integrate with their teams.
Effective software architects integrate closely with development teams to foster collaboration and respect. Physical proximity or regular engagement with the team is crucial for guiding and mentoring. Architects should also maintain open communication with other stakeholders, like operations heads, to ensure a collaborative environment.
Continuous learning is vital in the rapidly evolving tech landscape. Architects should regularly update their knowledge base with current resources, though these resources change swiftly. A suggested practice is the "20-minute rule": dedicating 20 minutes daily to learning new technologies or concepts, ideally first thing in the morning before distractions arise.
Developing a "personal radar" is essential for staying informed about technological trends. Inspired by ThoughtWorks' Technology Radar, architects can create a personal version to track tools, languages, and platforms. This involves categorizing technologies into quadrants: Tools, Languages and Frameworks, Techniques, and Platforms, and further into rings: Hold, Assess, Trial, and Adopt. This helps architects manage their technology portfolio strategically, akin to financial diversification.
Social media can enhance technical breadth by connecting architects with weak links: casual acquaintances who provide new insights and opportunities. Following respected technologists on platforms like Twitter can expose architects to emerging trends and ideas.
To become a proficient architect, practice is key. Engaging in architecture katas, exercises designed to improve architectural skills, can help architects refine their craft. While there are no definitive answers in architecture, understanding trade-offs is crucial. Practicing these skills regularly is essential for growth and effectiveness in the field.
In summary, architects should focus on continuous learning, maintain open communication, strategically manage their technology portfolio, and engage in regular practice to enhance their skills and effectiveness. Always learning and practicing are essential for success in the architectural domain.
The text focuses on various aspects of software architecture, addressing key concepts, architectural styles, and decision-making processes that architects must navigate.
Architectural Thinking
Traditional architecture approaches, which separate architecture from development, are outdated. Architects should prioritize technical breadth over depth to adapt to evolving technologies. Maintaining technical depth can be achieved by staying hands-on with coding.
Modularity and Connascence
Connascence refers to the degree of dependency between software components. Static connascence is preferred over dynamic, as it is easier to manage. Architects must balance these dependencies to maintain a modular codebase.
Architecture Characteristics
Architecture characteristics, or "-ilities," such as availability, performance, and scalability, are crucial. They should align with business requirements, and limiting their number can streamline architectural focus. Characteristics like scalability and elasticity are vital, especially during major business changes like acquisitions.
Measuring and Governing Architecture
Metrics like cyclomatic complexity help assess architectural soundness. Architecture fitness functions evaluate specific characteristics, such as scalability, allowing architects to implement improvements effectively.
Architecture Styles
Different architecture styles, including layered, pipeline, microkernel, service-based, event-driven, and space-based architectures, offer unique benefits and challenges. For instance, microservices architecture supports agility and testability but may struggle with performance due to its distributed nature. Choosing the right style depends on factors like data architecture and business needs.
Decision-Making and Risk Analysis
Architectural decisions should be documented using architecture decision records (ADRs). Risk analysis involves assessing potential issues and collaboratively developing mitigation strategies. Risk storming is a technique that combines individual and group assessments to address architectural risks.
Diagramming and Presentation
Effective communication of architecture involves clear documentation and avoiding anti-patterns like the Bullet-Riddled Corpse. The C4 model is a tool for creating structured architecture diagrams.
Team Dynamics and Career Development
Architects must balance leadership and negotiation skills, ensuring they communicate effectively with stakeholders. Techniques like the divide-and-conquer rule can aid in negotiations. Career paths should emphasize both breadth and depth of knowledge, with architects aspiring to be pragmatic and visionary.
Conclusion
The text provides a comprehensive overview of software architecture, emphasizing the importance of adaptability, clear communication, and strategic decision-making in the role of a software architect.
Key Concepts in Software Architecture
Architecture Patterns and Styles
- Microservices Architecture: Focuses on bounded contexts, granularity, and communication. It involves choreography and orchestration for service interactions and supports transactions through sagas. It emphasizes operational reuse and heterogeneity.
- Layered Architecture: Involves layers of isolation with considerations for fault tolerance and deployability. It is often used for technical partitioning.
- Event-Driven Architecture: Utilizes broker and mediator topologies for asynchronous communication. It handles error management and data loss prevention through event processing.
- Microkernel Architecture: Comprises a core system with plug-in components, focusing on extensibility and modularity. It is used for systems with remote access plug-ins.
- Space-Based Architecture: Addresses scalability and fault tolerance through distributed caching and data pumps. It is implemented on cloud and on-premises platforms.
Key Architectural Concepts
- Bounded Context: Essential in microservices for defining service boundaries and ensuring data isolation.
- Choreography vs. Orchestration: Methods for managing service interactions, with choreography being more decentralized.
- Granularity: Refers to the level of detail in service definitions, crucial for microservices.
- Modularity: Important for maintainability and extensibility, often tested using fitness functions.
Tools and Techniques
- C4 Diagramming: A standard for visualizing software architecture, focusing on context, containers, components, and code.
- Fitness Functions: Used to test modularity and other architectural characteristics.
- Chaos Engineering: Techniques like Chaos Monkey to test system resilience.
Challenges and Solutions
- Complexity: Managing complexity through clear architectural decisions and effective communication.
- Distributed Computing Fallacies: Recognizing common misconceptions such as zero latency and infinite bandwidth.
- Negotiation and Leadership: Essential skills for architects to facilitate stakeholder discussions and lead development teams.
Development Practices
- Continuous Delivery: Emphasizes the importance of deployability and automated processes in modern architectures.
- DevOps Integration: Highlights the intersection of architecture with development operations for streamlined processes.
Risk Management
- Mitigation Strategies: Focus on availability, elasticity, and security risks, often using risk storming techniques.
Anti-Patterns
- Big Ball of Mud: A common anti-pattern where systems lack clear structure and boundaries.
- Bullet-Riddled Corpse: A presentation anti-pattern in which slides are overloaded with bullet points, burying the message and losing the audience.
Industry References
- Building Evolutionary Architectures: Emphasizes the need for adaptable architectures.
- Domain-Driven Design (DDD): Influences microservices through concepts like bounded context and event storming.
This summary captures the essence of various architectural styles, key concepts, tools, challenges, and best practices in software architecture, providing a comprehensive overview for those familiar with the domain.
The text provides a comprehensive overview of various software architecture styles and characteristics, focusing on their operational and structural aspects. Key architectures discussed include microservices, service-based, microkernel, pipeline, and space-based architectures, each with unique features and trade-offs.
Orchestration-Driven Service-Oriented Architecture: This architecture uses orchestration engines as central coupling points, emphasizing reuse at the cost of tight coupling. It includes application and infrastructure services, message flow, and topology considerations.
Microservices Architecture: Highlights include partitioning components, operational reuse, and the saga pattern for transactions. Microservices emphasize decoupling and granularity, impacting performance and scalability ratings.
Microkernel Architecture: Features plug-in components and a core system, with a focus on partitioning and presentation layers. It offers simplicity and flexibility but may have low scalability.
Service-Based Architecture: This style includes service design and granularity, with REST access and service locators. It emphasizes scalability and simplicity, with specific topology variants and use cases.
Pipeline Architecture: Known for its pipes and filters approach, it focuses on the flow of data through a series of processing steps. It is characterized by architecture simplicity and defined topology.
Space-Based Architecture: This architecture deals with data replication, caching, and processing units. It offers advantages in scalability and flexibility, with specific implementations like concert ticketing and online auction systems.
Architecture Characteristics: Performance, reliability, scalability, security, and usability are critical characteristics across architectures. Trade-offs between these characteristics, such as security versus performance, are crucial in decision-making.
Development and Leadership: The role of software architects involves boundary creation, team integration, and negotiation with stakeholders. Effective architects are pragmatic, visionary, and possess strong leadership skills.
Technical and Domain Partitioning: Differentiates between domain and technical concerns, impacting architecture decisions. This includes technical breadth, trade-offs, and the impact of technology changes.
Risk Management: Involves risk assessments, risk storming, and mitigation strategies, particularly in case studies like the nurse diagnostics system. Understanding and navigating risks are essential for architecture stability.
Tools and Patterns: The text references various tools and patterns, such as the strangler pattern, service meshes, and the use of ADRs for standards. These tools aid in the effective implementation and management of architecture.
Overall, the text emphasizes the dynamic nature of software architecture, the importance of making informed trade-offs, and the need for continuous adaptation to technological changes. The integration of technical and business justifications is crucial for successful architectural decisions.