Design - OOD - CL2 Flashcards

1
Q

Architectural Patterns: MVC

A

Model–View–Controller (usually known as MVC) is a software design pattern[1] commonly used for developing user interfaces which divides the related program logic into three interconnected elements. This is done to separate internal representations of information from the ways information is presented to and accepted from the user.[2][3] Following the MVC architectural pattern decouples these major components allowing for code reuse and parallel development.

Traditionally used for desktop graphical user interfaces (GUIs), this pattern has become popular for designing web applications.[4] Popular programming languages like JavaScript, Python, Ruby, PHP, Java, and C# have MVC frameworks that are used in web application development straight out of the box.

Components
Model
The central component of the pattern. It is the application’s dynamic data structure, independent of the user interface.[5] It directly manages the data, logic and rules of the application.
View
Any representation of information such as a chart, diagram or table. Multiple views of the same information are possible, such as a bar chart for management and a tabular view for accountants.
Controller
Accepts input and converts it to commands for the model or view.[6]
In addition to dividing the application into these components, the model–view–controller design defines the interactions between them.
- The model is responsible for managing the data of the application. It receives user input from the controller.
- The view is the presentation of the model in a particular format.
- The controller responds to the user input and performs interactions on the data model objects. The controller receives the input, optionally validates it and then passes the input to the model.
As with other software patterns, MVC expresses the “core of the solution” to a problem while allowing it to be adapted for each system.[8] Particular MVC designs can vary significantly from the traditional description here.[9]
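
A minimal sketch of this division in Ruby (matching the examples later in this deck); the classes and methods are invented for illustration, not taken from any framework:

  #ruby
  # Model: owns the data and the rules; knows nothing about presentation.
  class TaskModel
    attr_reader :tasks
    def initialize
      @tasks = []
    end
    def add(task)
      raise ArgumentError, 'task must not be empty' if task.strip.empty?
      @tasks << task
    end
  end

  # View: one possible presentation of the model.
  class TaskListView
    def render(model)
      model.tasks.each_with_index { |t, i| puts "#{i + 1}. #{t}" }
    end
  end

  # Controller: converts raw input into commands for the model, then
  # asks a view to present the result.
  class TaskController
    def initialize(model, view)
      @model, @view = model, view
    end
    def handle_input(line)
      @model.add(line)   # optionally validate, then pass to the model
      @view.render(@model)
    end
  end

  TaskController.new(TaskModel.new, TaskListView.new).handle_input('Write flashcards')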

Use in web applications
Although originally developed for desktop computing, MVC has been widely adopted as a design for World Wide Web applications in major programming languages. Several web frameworks have been created that enforce the pattern. These software frameworks vary in their interpretations, mainly in the way that the MVC responsibilities are divided between the client and server.[15]

Some web MVC frameworks take a thin client approach that places almost the entire model, view and controller logic on the server. This is reflected in frameworks such as Django, Rails and ASP.NET MVC. In this approach, the client sends either hyperlink requests or form submissions to the controller and then receives a complete and updated web page (or other document) from the view; the model exists entirely on the server.[15] Other frameworks such as AngularJS, EmberJS, JavaScriptMVC and Backbone allow the MVC components to execute partly on the client (also see Ajax).

Goals of MVC
- Simultaneous development
Because MVC decouples the various components of an application, developers are able to work in parallel on different components without affecting or blocking one another. For example, a team might divide their developers between the front-end and the back-end. The back-end developers can design the structure of the data and how the user interacts with it without requiring the user interface to be completed. Conversely, the front-end developers are able to design and test the layout of the application prior to the data structure being available.
- Code reuse
The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. Unfortunately this does not work when that code is also useful for handling user input. For example, DOM code (including the application’s custom abstractions to it) is useful for both graphics display and user input. (Note that, despite the name Document Object Model, the DOM is actually not an MVC model, because it is the application’s interface to the user).
To address these problems, MVC (and patterns like it) are often combined with a component architecture that provides a set of UI elements. Each UI element is a single higher-level component that combines the 3 required MVC components into a single package. By creating these higher-level components that are independent of each other, developers are able to reuse components quickly and easily in other applications.

Advantages & disadvantages
Advantages
- Simultaneous development – Multiple developers can work simultaneously on the model, controller and views.
- High cohesion – MVC enables logical grouping of related actions on a controller. The views for a specific model are also grouped together.
- Loose coupling – The very nature of the MVC framework is such that there is low coupling among models, views, and controllers.
- Ease of modification – Because of the separation of responsibilities, future development or modification is easier.
- Multiple views for a model – Models can have multiple views.
Disadvantages
The disadvantages of MVC can be generally categorized as overhead for incorrectly factored software:
- Code navigability – The framework navigation can be complex because it introduces new layers of abstraction and requires users to adapt to the decomposition criteria of MVC.
- Multi-artifact consistency – Decomposing a feature into three artifacts causes scattering, requiring developers to maintain the consistency of multiple representations at once.
- Undermined by inevitable clustering – Applications tend to have heavy interaction between what the user sees and what the user uses. Therefore each feature’s computation and state tends to get clustered into one of the 3 program parts, erasing the purported advantages of MVC.
- Excessive boilerplate – Due to the application computation and state being typically clustered into one of the 3 parts, the other parts degenerate into either boilerplate shims or code-behind[16] that exists only to satisfy the MVC pattern.
- Pronounced learning curve – Knowledge on multiple technologies becomes the norm. Developers using MVC need to be skilled in multiple technologies.
- Lack of incremental benefit – UI applications are already factored into components and already achieve code reuse and independence via the component architecture, leaving no incremental benefit to MVC.

Link:
https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller

2
Q

Architectural Patterns: IoC

A

Inversion of Control is a common phenomenon that you come across when extending frameworks. Indeed it’s often seen as a defining characteristic of a framework.

Let’s consider a simple example. Imagine I’m writing a program to get some information from a user and I’m using a command line enquiry. I might do it something like this

  #ruby
  puts 'What is your name?'
  name = gets
  process_name(name)
  puts 'What is your quest?'
  quest = gets
  process_quest(quest)
In this interaction, my code is in control: it decides when to ask questions, when to read responses, and when to process those results.

However, if I were to use a windowing system to do something like this, I would do it by configuring a window:

  #ruby
  require 'tk'
  root = TkRoot.new()
  name_label = TkLabel.new() {text "What is Your Name?"}
  name_label.pack
  name = TkEntry.new(root).pack
  name.bind("FocusOut") {process_name(name)}
  quest_label = TkLabel.new() {text "What is Your Quest?"}
  quest_label.pack
  quest = TkEntry.new(root).pack
  quest.bind("FocusOut") {process_quest(quest)}
  Tk.mainloop()
There's a big difference now in the flow of control between these programs - in particular the control of when the process_name and process_quest methods are called. In the command line form I control when these methods are called, but in the window example I don't. Instead I hand control over to the windowing system (with the Tk.mainloop command). It then decides when to call my methods, based on the bindings I made when creating the form. The control is inverted - it calls me rather than me calling the framework. This phenomenon is Inversion of Control (also known as the Hollywood Principle - "Don't call us, we'll call you").

One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user’s application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.
– Ralph Johnson and Brian Foote

Inversion of Control is a key part of what makes a framework different to a library. A library is essentially a set of functions that you can call, these days usually organized into classes. Each call does some work and returns control to the client.

A framework embodies some abstract design, with more behavior built in. In order to use it you need to insert your behavior into various places in the framework either by subclassing or by plugging in your own classes. The framework’s code then calls your code at these points.

There are various ways you can plug your code in to be called. In the ruby example above, we invoke a bind method on the text entry field that passes an event name and a Lambda as an argument. Whenever the text entry box detects the event, it calls the code in the closure. Using closures like this is very convenient, but many languages don’t support them.

Another way to do this is to have the framework define events and have the client code subscribe to these events. .NET is a good example of a platform that has language features to allow people to declare events on widgets. You can then bind a method to the event by using a delegate.

The above approaches (they are really the same) work well for single cases, but sometimes you want to combine several required method calls in a single unit of extension. In this case the framework can define an interface that client code must implement for the relevant calls.

EJBs are a good example of this style of inversion of control. When you develop a session bean, you can implement various methods that are called by the EJB container at various lifecycle points. For example the Session Bean interface defines ejbRemove, ejbPassivate (stored to secondary storage), and ejbActivate (restored from passive state). You don't get to control when these methods are called, just what they do. The container calls us, we don't call it.

These are complicated cases of inversion of control, but you run into this effect in much simpler situations. A template method is a good example: the super-class defines the flow of control, and subclasses extend this by overriding methods or implementing abstract methods to do the extension. So in JUnit, the framework code calls setUp and tearDown methods for you to create and clean up your test fixture. It does the calling, your code reacts - so again control is inverted.
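
A minimal Ruby sketch of the same mechanism, with invented names standing in for JUnit's setUp/tearDown hooks:

  #ruby
  # The framework superclass owns the flow of control.
  class TestCase
    def run
      set_up        # the framework calls the subclass's hooks
      run_test
      tear_down
    end
    def set_up; end     # default no-op hooks for subclasses to override
    def tear_down; end
  end

  class DatabaseTest < TestCase
    def set_up
      @connection = 'open'
    end
    def run_test
      puts "running test with #{@connection} connection"
    end
    def tear_down
      @connection = nil
    end
  end

  DatabaseTest.new.run   # the framework calls us; we never call it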

There is some confusion these days over the meaning of inversion of control due to the rise of IoC containers; some people confuse the general principle here with the specific styles of inversion of control (such as dependency injection) that these containers use. The name is somewhat confusing (and ironic) since IoC containers are generally regarded as a competitor to EJB, yet EJB uses inversion of control just as much (if not more).

Etymology: As far as I can tell, the term Inversion of Control first came to light in Johnson and Foote’s paper Designing Reusable Classes, published by the Journal of Object-Oriented Programming in 1988. The paper is one of those that’s aged well - it’s well worth a read now over fifteen years later. They think they got the term from somewhere else, but can’t remember what. The term then insinuated itself into the object-oriented community and appears in the Gang of Four book. The more colorful synonym ‘Hollywood Principle’ seems to originate in a paper by Richard Sweet on Mesa in 1983. In a list of design goals he writes: “Don’t call us, we’ll call you (Hollywood’s Law): A tool should arrange for Tajo to notify it when the user wishes to communicate some event to the tool, rather than adopt an ‘ask the user for a command and execute it’ model.” John Vlissides wrote a column for C++ report that provides a good explanation of the concept under the ‘Hollywood Principle’ moniker. (Thanks to Brian Foote and Ralph Johnson for helping me with the Etymology.)

3
Q

SOLID principles

A

In object-oriented computer programming, SOLID is a mnemonic acronym for five design principles intended to make software designs more understandable, flexible and maintainable. It is not related to the GRASP software design principles. The principles are a subset of many principles promoted by American software engineer and instructor Robert C. Martin.[1][2][3] Though they apply to any object-oriented design, the SOLID principles can also form a core philosophy for methodologies such as agile development or adaptive software development.[3] The theory of SOLID principles was introduced by Martin in his 2000 paper Design Principles and Design Patterns,[2][4] although the SOLID acronym was introduced later by Michael Feathers.

Single responsibility principle[6]
A class should only have a single responsibility, that is, only changes to one part of the software's specification should be able to affect the specification of the class.
The single responsibility principle is a computer programming principle that states that every module, class, or function[1] should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class, module or function. All its services should be narrowly aligned with that responsibility. Robert C. Martin expresses the principle as, "A class should have only one reason to change,"[1] although, because of confusion around the word "reason" he more recently stated "This principle is about people."
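
An illustrative Ruby sketch (names invented): the report's content and its persistence are separated so that each class has only one reason to change:

  #ruby
  # Changes only when the report's content rules change.
  class Report
    attr_reader :body
    def initialize(body)
      @body = body
    end
  end

  # Changes only when the storage mechanism changes.
  class ReportSaver
    def save(report, path)
      File.write(path, report.body)
    end
  end

  ReportSaver.new.save(Report.new('quarterly numbers'), 'report.txt')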

Open–closed principle[7]
“Software entities … should be open for extension, but closed for modification.”
In object-oriented programming, the open/closed principle states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”;[1] that is, such an entity can allow its behaviour to be extended without modifying its source code.
The name open/closed principle has been used in two ways. Both ways use generalizations (for instance, inheritance or delegate functions) to resolve the apparent dilemma, but the goals, techniques, and results are different.
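
A small Ruby sketch of the idea (names invented): total_area is closed for modification, while the set of shapes remains open for extension:

  #ruby
  class Circle
    def initialize(radius)
      @radius = radius
    end
    def area
      Math::PI * @radius**2
    end
  end

  class Square
    def initialize(side)
      @side = side
    end
    def area
      @side**2
    end
  end

  # Adding a Triangle later means adding a class that responds to #area;
  # this function never has to change.
  def total_area(shapes)
    shapes.sum(&:area)
  end

  puts total_area([Circle.new(1.0), Square.new(2.0)])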

Liskov substitution principle[8]
“Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.” See also design by contract.
Substitutability is a principle in object-oriented programming stating that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e. an object of type T may be substituted with any object of a subtype S) without altering any of the desirable properties of the program (correctness, task performed, etc.). More formally, the Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called (strong) behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address titled Data abstraction and hierarchy. It is a semantic rather than merely syntactic relation, because it intends to guarantee semantic interoperability of types in a hierarchy, object types in particular. Barbara Liskov and Jeannette Wing described the principle succinctly in a 1994 paper as follows:
Subtype Requirement: Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T.
In the same paper, Liskov and Wing detailed their notion of behavioral subtyping in an extension of Hoare logic, which bears a certain resemblance to Bertrand Meyer’s design by contract in that it considers the interaction of subtyping with preconditions, postconditions and invariants.
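
The often-cited rectangle/square example, sketched here in Ruby, shows how a syntactically valid subtype can break a property clients can prove about the supertype:

  #ruby
  class Rectangle
    attr_writer :width, :height
    def initialize(width, height)
      @width, @height = width, height
    end
    def area
      @width * @height
    end
  end

  class Square < Rectangle
    # Preserving the square invariant silently changes the other side.
    def width=(w)
      @width = @height = w
    end
    def height=(h)
      @width = @height = h
    end
  end

  # Provable property for any Rectangle: after these writes, area == 50.
  def stretch(rect)
    rect.width = 5
    rect.height = 10
    rect.area
  end

  puts stretch(Rectangle.new(1, 1))  # 50
  puts stretch(Square.new(1, 1))     # 100 - substitution altered correctness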

Interface segregation principle[9]
“Many client-specific interfaces are better than one general-purpose interface.”
In the field of software engineering, the interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.[1] ISP splits interfaces that are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them. Such shrunken interfaces are also called role interfaces.[2] ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy. ISP is one of the five SOLID principles of object-oriented design, similar to the High Cohesion Principle of GRASP.
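
An illustrative Ruby sketch (names invented) using small modules as role interfaces, so each client sees only the methods it cares about:

  #ruby
  module Printable
    def print_document(doc)
      raise NotImplementedError
    end
  end

  module Scannable
    def scan
      raise NotImplementedError
    end
  end

  # A simple device implements only the role its clients actually use...
  class SimplePrinter
    include Printable
    def print_document(doc)
      puts "printing #{doc}"
    end
  end

  # ...while a multifunction device opts into several role interfaces.
  class MultifunctionDevice
    include Printable
    include Scannable
    def print_document(doc)
      puts "printing #{doc}"
    end
    def scan
      'scanned pages'
    end
  end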

Dependency inversion principle[10]
One should “depend upon abstractions, [not] concretions.”
In object-oriented design, the dependency inversion principle is a specific form of decoupling software modules. When following this principle, the conventional dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are reversed, thus rendering high-level modules independent of the low-level module implementation details. The principle states:[1]
A. High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
B. Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
By dictating that both high-level and low-level objects must depend on the same abstraction, this design principle inverts the way some people may think about object-oriented programming.[2]
The idea behind points A and B of this principle is that when designing the interaction between a high-level module and a low-level one, the interaction should be thought of as an abstract interaction between them. This not only has implications on the design of the high-level module, but also on the low-level one: the low-level one should be designed with the interaction in mind and it may be necessary to change its usage interface.
In many cases, thinking about the interaction in itself as an abstract concept allows the coupling of the components to be reduced without introducing additional coding patterns, allowing only a lighter and less implementation dependent interaction schema.
When the discovered abstract interaction schema(s) between two modules is/are generic and generalization makes sense, this design principle also leads to a dependency inversion coding pattern.
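
A minimal Ruby sketch of that idea (names invented): the high-level module receives an abstraction - here, anything that responds to send_message - so neither it nor the low-level notifiers depend on each other directly:

  #ruby
  class OrderProcessor              # high-level, policy-setting module
    def initialize(notifier)
      @notifier = notifier          # injected abstraction
    end
    def process(order_id)
      # ... domain logic ...
      @notifier.send_message("order #{order_id} processed")
    end
  end

  class EmailNotifier               # low-level detail, shaped to fit the abstraction
    def send_message(text)
      puts "email: #{text}"
    end
  end

  class SmsNotifier
    def send_message(text)
      puts "sms: #{text}"
    end
  end

  OrderProcessor.new(EmailNotifier.new).process(42)
  OrderProcessor.new(SmsNotifier.new).process(43)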

Link:
https://en.wikipedia.org/wiki/SOLID

4
Q

Anti-patterns

A

An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive.[1][2] The term, coined in 1995 by Andrew Koenig,[3] was inspired by a book, Design Patterns, which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective.
The term was popularized three years later by the book AntiPatterns, which extended its use beyond the field of software design to refer informally to any commonly reinvented but bad solution to a problem. Examples include analysis paralysis, cargo cult programming, death march, groupthink and vendor lock-in.

Definition
According to the authors of Design Patterns, there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:
A commonly used process, structure, or pattern of action that despite initially appearing to be an appropriate and effective response to a problem, has more bad consequences than good ones.
Another solution exists that is documented, repeatable, and proven to be effective.

Social and business operations
Organizational
Analysis paralysis: A project stalled in the analysis phase, unable to achieve support for any of the potential plans of approach
Bicycle shed: Giving disproportionate weight to trivial issues
Bleeding edge: Operating with cutting-edge technologies that are still untested or unstable leading to cost overruns, under-performance or delayed delivery
Bystander apathy: The phenomenon in which people are less likely to or do not offer help to a person in need when others are present
Cash cow: A profitable legacy product that often leads to complacency about new products
Design by committee: The result of having many contributors to a design, but no unifying vision
Escalation of commitment: Failing to revoke a decision when it proves wrong
Groupthink: A collective state where group members begin to (often unknowingly) think alike and reject differing viewpoints
Management by objectives: Management by numbers, focus exclusively on quantitative management criteria, when these are non-essential or cost too much to acquire
Micromanagement: Ineffectiveness from excessive observation, supervision, or other hands-on involvement from management
Moral hazard: Insulating a decision-maker from the consequences of their decision
Mushroom management: Keeping employees “in the dark and fed manure” (also “left to stew and finally canned”)
Peter principle: Continually promoting otherwise well-performing employees up to their level of incompetence, where they remain indefinitely[4]
Seagull management: Management in which managers only interact with employees when a problem arises, when they “fly in, make a lot of noise, dump on everyone, do not solve the problem, then fly out”
Stovepipe or Silos: An organizational structure of isolated or semi-isolated teams, in which too many communications take place up and down the hierarchy, rather than directly with other teams across the organization
Typecasting: Locking successful employees into overly safe, narrowly defined, predictable roles based on their past successes rather than their potential
Vendor lock-in: Making a system excessively dependent on an externally supplied component
Project management
Cart before the horse: Focusing too many resources on a stage of a project out of its sequence
Death march: A project whose staff, while expecting it to fail, are compelled to continue, often with much overwork, by management which is in denial[5]
Ninety-ninety rule: Tendency to underestimate the amount of time to complete a project when it is “nearly done”
Overengineering: Spending resources making a project more robust and complex than is needed
Scope creep: Uncontrolled changes or continuous growth in a project’s scope, or adding new features to the project after the original requirements have been drafted and accepted (also known as requirement creep and feature creep)
Smoke and mirrors: Demonstrating unimplemented functions as if they were already implemented
Brooks’s law: Adding more resources to a project to increase velocity, when the project is already slowed down by coordination overhead.

Software engineering
Software design
Abstraction inversion: Not exposing implemented functionality required by callers of a function/method/constructor, so that the calling code awkwardly re-implements the same functionality in terms of those calls
Ambiguous viewpoint: Presenting a model (usually Object-oriented analysis and design (OOAD)) without specifying its viewpoint
Big ball of mud: A system with no recognizable structure
Database-as-IPC: Using a database as the message queue for routine interprocess communication where a much more lightweight mechanism would be suitable
Gold plating: Continuing to work on a task or project well past the point at which extra effort is not adding value
Inner-platform effect: A system so customizable as to become a poor replica of the software development platform
Input kludge: Failing to specify and implement the handling of possibly invalid input
Interface bloat: Making an interface so powerful that it is extremely difficult to implement
Magic pushbutton: A form with no dynamic validation or input assistance, such as dropdowns
Race hazard: Failing to see the consequences of events that can sometimes interfere with each other
Stovepipe system: A barely maintainable assemblage of ill-related components
Object-oriented programming
Anemic domain model: The use of the domain model without any business logic. The domain model’s objects cannot guarantee their correctness at any moment, because their validation and mutation logic is placed somewhere outside (most likely in multiple places). Martin Fowler considers this to be an anti-pattern, but some disagree that it is always an anti-pattern.[6]
Call super: Requiring subclasses to call a superclass’s overridden method
Circle-ellipse problem: Subtyping variable-types on the basis of value-subtypes
Circular dependency: Introducing unnecessary direct or indirect mutual dependencies between objects or software modules
Constant interface: Using interfaces to define constants
God object: Concentrating too many functions in a single part of the design (class)
Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
Object orgy: Failing to properly encapsulate objects permitting unrestricted access to their internals
Poltergeists: Objects whose sole purpose is to pass information to another object
Sequential coupling: A class that requires its methods to be called in a particular order
Yo-yo problem: A structure (e.g., of inheritance) that is hard to understand due to excessive fragmentation
Programming
Accidental complexity: Programming tasks which could be eliminated with better tools (as opposed to essential complexity inherent in the problem being solved)
Action at a distance: Unexpected interaction between widely separated parts of a system
Boat anchor: Retaining a part of a system that no longer has any use
Busy waiting: Consuming CPU while waiting for something to happen, usually by repeated checking instead of messaging
Caching failure: Forgetting to clear a cache that holds a negative result (error) after the error condition has been corrected
Cargo cult programming: Using patterns and methods without understanding why
Coding by exception: Adding new code to handle each special case as it is recognized
Exceptions as flow control: Using exceptions to control program flow instead of reserving them for what they are meant to do (catching and handling errors)[7] - see the sketch after this list
Design pattern: The use of patterns has itself been called an anti-pattern, a sign that a system is not employing enough abstraction[8]
Error hiding: Catching an error message before it can be shown to the user and either showing nothing or showing a meaningless message. This anti-pattern is also named Diaper Pattern. Also can refer to erasing the Stack trace during exception handling, which can hamper debugging.
Hard code: Embedding assumptions about the environment of a system in its implementation
Lasagna code: Programs whose structure consists of too many layers of inheritance
Lava flow: Retaining undesirable (redundant or low-quality) code because removing it is too expensive or has unpredictable consequences[9][10]
Loop-switch sequence: Encoding a set of sequential steps using a switch within a loop statement
Magic numbers: Including unexplained numbers in algorithms
Magic strings: Implementing presumably unlikely input scenarios, such as comparisons with very specific strings, to mask functionality.
Repeating yourself: Writing code which contains repetitive patterns and substrings over again; avoid with once and only once (abstraction principle)
Shooting the messenger: Throwing exceptions from the scope of a plugin or subscriber in response to legitimate input, especially when this causes the outer scope to fail.
Shotgun surgery: Developer adds features to an application codebase which span a multiplicity of implementors or implementations in a single change
Soft code: Storing business logic in configuration files rather than source code[11]
Spaghetti code: Programs whose structure is barely comprehensible, especially because of misuse of code structures
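
To make one of the items above concrete, here is a small Ruby sketch of "exceptions as flow control" next to the straightforward alternative (helper names invented):

  #ruby
  ItemFound = Class.new(StandardError)

  # Anti-pattern: the exception is a non-local goto, not an error.
  def find_index_bad(items, wanted)
    items.each_with_index do |item, i|
      raise ItemFound, i.to_s if item == wanted
    end
    -1
  rescue ItemFound => e
    e.message.to_i
  end

  # Exceptions stay reserved for genuine errors; lookups return a value.
  def find_index_good(items, wanted)
    items.index(wanted) || -1
  end

  puts find_index_bad(%w[a b c], 'b')   # 1
  puts find_index_good(%w[a b c], 'b')  # 1
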
Methodological
Copy and paste programming: Copying (and modifying) existing code rather than creating generic solutions
Every Fool Their Own Tool: Failing to use proper software development principles when creating tools to facilitate the software development process itself.[12]
Golden hammer: Assuming that a favorite solution is universally applicable (See: Silver bullet)
Improbability factor: Assuming that it is improbable that a known error will occur
Invented here: The tendency towards dismissing any innovation or less than trivial solution originating from inside the organization, usually because of lack of confidence in the staff
Not Invented Here (NIH) syndrome: The tendency towards reinventing the wheel (failing to adopt an existing, adequate solution)
Premature optimization: Coding early-on for perceived efficiency, sacrificing good design, maintainability, and sometimes even real-world efficiency
Programming by permutation (or “programming by accident”, or “programming by coincidence”): Trying to approach a solution by successively modifying the code to see if it works
Reinventing the square wheel: Failing to adopt an existing solution and instead adopting a custom solution which performs much worse than the existing one
Silver bullet: Assuming that a favorite technical solution can solve a larger process or problem
Tester Driven Development: Software projects in which new requirements are specified in bug reports
Configuration management
Dependency hell: Problems with versions of required products
DLL hell: Inadequate management of dynamic-link libraries (DLLs), specifically on Microsoft Windows
Extension conflict: Problems with different extensions to classic Mac OS attempting to patch the same parts of the operating system
JAR hell: Overutilization of multiple JAR files, usually causing versioning and location problems because of misunderstanding of the Java class loading model

5
Q

Microservice architecture:

  • What Are Microservices?
  • Key Benefits
  • Trade-offs
  • How to Model Services
A
"Microservices" - yet another new term on the crowded streets of software architecture. Although our natural inclination is to pass such things by with a contemptuous glance, this bit of terminology describes a style of software systems that we are finding more and more appealing. We've seen many projects use this style in the last few years, and results so far have been positive, so much so that for many of our colleagues this is becoming the default style for building enterprise applications. Sadly, however, there's not much information that outlines what the microservice style is and how to
In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
To start explaining the microservice style it's useful to compare it to the monolithic style: a monolithic application built as a single unit. Enterprise Applications are often built in three main parts: a client-side user interface (consisting of HTML pages and javascript running in a browser on the user's machine), a database (consisting of many tables inserted into a common, and usually relational, database management system), and a server-side application. The server-side application will handle HTTP requests, execute domain logic, retrieve and update data from the database, and select and populate HTML views to be sent to the browser. This server-side application is a monolith - a single logical executable[2]. Any changes to the system involve building and deploying a new version of the server-side application.
Such a monolithic server is a natural way to approach building such a system. All your logic for handling a request runs in a single process, allowing you to use the basic features of your language to divide up the application into classes, functions, and namespaces. With some care, you can run and test the application on a developer's laptop, and use a deployment pipeline to ensure that changes are properly tested and deployed into production. You can horizontally scale the monolith by running many instances behind a load-balancer.
Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud. Change cycles are tied together - a change made to a small part of the application requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
These frustrations have led to the microservice architectural style: building applications as suites of services. As well as the fact that services are independently deployable and scalable, each service also provides a firm module boundary, even allowing for different services to be written in different programming languages. They can also be managed by different teams.
We do not claim that the microservice style is novel or innovative, its roots go back at least to the design principles of Unix. But we do think that not enough people consider a microservice architecture and that many software developments would be better off if they used it.

Characteristics of a Microservice Architecture
We cannot say there is a formal definition of the microservices architectural style, but we can attempt to describe what we see as common characteristics for architectures that fit the label. As with any definition that outlines common characteristics, not all microservice architectures have all the characteristics, but we do expect that most microservice architectures exhibit most characteristics. While we authors have been active members of this rather loose community, our intention is to attempt a description of what we see in our own work and in similar efforts by teams we know of. In particular we are not laying down some definition to conform to.
Componentization via Services
For as long as we’ve been involved in the software industry, there’s been a desire to build systems by plugging together components, much in the way we see things are made in the physical world. During the last couple of decades we’ve seen considerable progress with large compendiums of common libraries that are part of most language platforms.
When talking about components we run into the difficult definition of what makes a component. Our definition is that a component is a unit of software that is independently replaceable and upgradeable.
Microservice architectures will use libraries, but their primary way of componentizing their own software is by breaking down into services. We define libraries as components that are linked into a program and called using in-memory function calls, while services are out-of-process components who communicate with a mechanism such as a web service request, or remote procedure call. (This is a different concept to that of a service object in many OO programs [3].)
One main reason for using services as components (rather than libraries) is that services are independently deployable. If you have an application [4] that consists of multiple libraries in a single process, a change to any single component results in having to redeploy the entire application. But if that application is decomposed into multiple services, you can expect many single service changes to only require that service to be redeployed. That’s not an absolute, some changes will change service interfaces resulting in some coordination, but the aim of a good microservice architecture is to minimize these through cohesive service boundaries and evolution mechanisms in the service contracts.
Another consequence of using services as components is a more explicit component interface. Most languages do not have a good mechanism for defining an explicit Published Interface. Often it’s only documentation and discipline that prevents clients breaking a component’s encapsulation, leading to overly-tight coupling between components. Services make it easier to avoid this by using explicit remote call mechanisms.
Using services like this does have downsides. Remote calls are more expensive than in-process calls, and thus remote APIs need to be coarser-grained, which is often more awkward to use. If you need to change the allocation of responsibilities between components, such movements of behavior are harder to do when you’re crossing process boundaries.
At a first approximation, we can observe that services map to runtime processes, but that is only a first approximation. A service may consist of multiple processes that will always be developed and deployed together, such as an application process and a database that’s only used by that service.
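
As a rough sketch of "service as component", here is a single out-of-process component exposing an HTTP resource API in Ruby, using webrick (part of older Rubies' standard library, a gem since Ruby 3); the resource and its data are invented:

  #ruby
  require 'webrick'
  require 'json'

  # The service owns its data; a hash stands in for its private datastore.
  ORDERS = { '1' => { 'id' => '1', 'status' => 'shipped' } }.freeze

  server = WEBrick::HTTPServer.new(Port: 8080)
  server.mount_proc('/orders/1') do |_req, res|
    res['Content-Type'] = 'application/json'
    res.body = JSON.generate(ORDERS['1'])
  end
  trap('INT') { server.shutdown }
  server.start   # another service would consume this via plain HTTP
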
Organized around Business Capabilities
When looking to split a large application into parts, often management focuses on the technology layer, leading to UI teams, server-side logic teams, and database teams. When teams are separated along these lines, even simple changes can lead to a cross-team project taking time and budgetary approval. A smart team will optimise around this and plump for the lesser of two evils - just force the logic into whichever application they have access to. Logic everywhere in other words. This is an example of Conway’s Law[5] in action:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.
The microservice approach to division is different, splitting up into services organized around business capability. Such services take a broad-stack implementation of software for that business area, including user-interface, persistent storage, and any external collaborations. Consequently the teams are cross-functional, including the full range of skills required for the development: user-experience, database, and project management.
One company organised in this way is www.comparethemarket.com. Cross functional teams are responsible for building and operating each product and each product is split out into a number of individual services communicating via a message bus.
Large monolithic applications can always be modularized around business capabilities too, although that’s not the common case. Certainly we would urge a large team building a monolithic application to divide itself along business lines. The main issue we have seen here is that they tend to be organised around too many contexts. If the monolith spans many of these modular boundaries it can be difficult for individual members of a team to fit them into their short-term memory. Additionally we see that the modular lines require a great deal of discipline to enforce. The necessarily more explicit separation required by service components makes it easier to keep the team boundaries clear.

Products not Projects
Most application development efforts that we see use a project model: where the aim is to deliver some piece of software which is then considered to be completed. On completion the software is handed over to a maintenance organization and the project team that built it is disbanded.
Microservice proponents tend to avoid this model, preferring instead the notion that a team should own a product over its full lifetime. A common inspiration for this is Amazon’s notion of “you build, you run it” where a development team takes full responsibility for the software in production. This brings developers into day-to-day contact with how their software behaves in production and increases contact with their users, as they have to take on at least some of the support burden.
The product mentality ties in with the linkage to business capabilities. Rather than looking at the software as a set of functionality to be completed, there is an on-going relationship where the question is how can software assist its users to enhance the business capability.
There’s no reason why this same approach can’t be taken with monolithic applications, but the smaller granularity of services can make it easier to create the personal relationships between service developers and their users.
Smart endpoints and dumb pipes
When building communication structures between different processes, we’ve seen many products and approaches that stress putting significant smarts into the communication mechanism itself. A good example of this is the Enterprise Service Bus (ESB), where ESB products often include sophisticated facilities for message routing, choreography, transformation, and applying business rules.
Microservices and SOA
When we’ve talked about microservices a common question is whether this is just Service Oriented Architecture (SOA) that we saw a decade ago. There is merit to this point, because the microservice style is very similar to what some advocates of SOA have been in favor of. The problem, however, is that SOA means too many different things, and that most of the time that we come across something called “SOA” it’s significantly different to the style we’re describing here, usually due to a focus on ESBs used to integrate monolithic applications.
In particular we have seen so many botched implementations of service orientation - from the tendency to hide complexity away in ESB’s [6], to failed multi-year initiatives that cost millions and deliver no value, to centralised governance models that actively inhibit change, that it is sometimes difficult to see past these problems.
Certainly, many of the techniques in use in the microservice community have grown from the experiences of developers integrating services in large organisations. The Tolerant Reader pattern is an example of this. Efforts to use the web have contributed, using simple protocols is another approach derived from these experiences - a reaction away from central standards that have reached a complexity that is, frankly, breathtaking. (Any time you need an ontology to manage your ontologies you know you are in deep trouble.)
This common manifestation of SOA has led some microservice advocates to reject the SOA label entirely, although others consider microservices to be one form of SOA [7], perhaps service orientation done right. Either way, the fact that SOA means such different things means it’s valuable to have a term that more crisply defines this architectural style.
The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible - they own their own domain logic and act more as filters in the classical Unix sense - receiving a request, applying logic as appropriate and producing a response. These are choreographed using simple RESTish protocols rather than complex protocols such as WS-Choreography or BPEL or orchestration by a central tool.
The two protocols used most commonly are HTTP request-response with resource APIs and lightweight messaging[8]. The best expression of the first is:
Be of the web, not behind the web
Microservice teams use the principles and protocols that the world wide web (and to a large extent, Unix) is built on. Often used resources can be cached with very little effort on the part of developers or operations folk.
The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don’t do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services.
In a monolith, the components are executing in-process and communication between them is via either method invocation or function call. The biggest issue in changing a monolith into microservices lies in changing the communication pattern. A naive conversion from in-memory method calls to RPC leads to chatty communications which don’t perform well. Instead you need to replace the fine-grained communication with a coarser-grained approach.

Decentralized Governance
One of the consequences of centralised governance is the tendency to standardise on single technology platforms. Experience shows that this approach is constricting - not every problem is a nail and not every solution a hammer. We prefer using the right tool for the job and while monolithic applications can take advantage of different languages to a certain extent, it isn’t that common.
Splitting the monolith’s components out into services we have a choice when building each of them. You want to use Node.js to standup a simple reports page? Go for it. C++ for a particularly gnarly near-real-time component? Fine. You want to swap in a different flavour of database that better suits the read behaviour of one component? We have the technology to rebuild him.
Of course, just because you can do something, doesn’t mean you should - but partitioning your system in this way means you have the option.
Teams building microservices prefer a different approach to standards too. Rather than use a set of defined standards written down somewhere on paper they prefer the idea of producing useful tools that other developers can use to solve similar problems to the ones they are facing. These tools are usually harvested from implementations and shared with a wider group, sometimes, but not exclusively, using an internal open source model. Now that git and github have become the de facto version control system of choice, open source practices are becoming more and more common in-house.
Netflix is a good example of an organisation that follows this philosophy. Sharing useful and, above all, battle-tested code as libraries encourages other developers to solve similar problems in similar ways yet leaves the door open to picking a different approach if required. Shared libraries tend to be focused on common problems of data storage, inter-process communication and as we discuss further below, infrastructure automation.
For the microservice community, overheads are particularly unattractive. That isn’t to say that the community doesn’t value service contracts. Quite the opposite, since there tend to be many more of them. It’s just that they are looking at different ways of managing those contracts. Patterns like Tolerant Reader and Consumer-Driven Contracts are often applied to microservices. These aid service contracts in evolving independently. Executing consumer driven contracts as part of your build increases confidence and provides fast feedback on whether your services are functioning. Indeed we know of a team in Australia who drive the build of new services with consumer driven contracts. They use simple tools that allow them to define the contract for a service. This becomes part of the automated build before code for the new service is even written. The service is then built out only to the point where it satisfies the contract - an elegant approach to avoid the ‘YAGNI’[9] dilemma when building new software. These techniques and the tooling growing up around them, limit the need for central contract management by decreasing the temporal coupling between services.
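
A hedged sketch of a consumer-driven contract check in Ruby, written against plain minitest rather than dedicated tooling such as Pact; the endpoint shape and field names are invented:

  #ruby
  require 'minitest/autorun'
  require 'json'

  class OrderContractTest < Minitest::Test
    # The consumer declares only the fields it actually relies on.
    REQUIRED_FIELDS = %w[id status].freeze

    def provider_response
      # In a real build this would call the provider's test instance.
      JSON.parse('{"id":"1","status":"shipped","carrier":"acme"}')
    end

    def test_provider_honours_consumer_contract
      response = provider_response
      REQUIRED_FIELDS.each do |field|
        assert_includes response.keys, field, "provider dropped #{field}"
      end
      # Extra fields such as "carrier" are fine: the consumer reads tolerantly.
    end
  end
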
Perhaps the apogee of decentralised governance is the build it / run it ethos popularised by Amazon. Teams are responsible for all aspects of the software they build including operating the software 24/7. Devolution of this level of responsibility is definitely not the norm but we do see more and more companies pushing responsibility to the development teams. Netflix is another organisation that has adopted this ethos[11]. Being woken up at 3am every night by your pager is certainly a powerful incentive to focus on quality when writing your code. These ideas are about as far away from the traditional centralized governance model as it is possible to be.

Decentralized Data Management
Decentralization of data management presents in a number of different ways. At the most abstract level, it means that the conceptual model of the world will differ between systems. This is a common issue when integrating across a large enterprise: the sales view of a customer will differ from the support view. Some things that are called customers in the sales view may not appear at all in the support view. Those that do may have different attributes and (worse) common attributes with subtly different semantics.
This issue is common between applications, but can also occur within applications, particular when that application is divided into separate components. A useful way of thinking about this is the Domain-Driven Design notion of Bounded Context. DDD divides a complex domain up into multiple bounded contexts and maps out the relationships between them. This process is useful for both monolithic and microservice architectures, but there is a natural correlation between service and context boundaries that helps clarify, and as we describe in the section on business capabilities, reinforce the separations.
As well as decentralizing decisions about conceptual models, microservices also decentralize data storage decisions. While monolithic applications prefer a single logical database for persistent data, enterprises often prefer a single database across a range of applications - many of these decisions driven through vendors’ commercial models around licensing. Microservices prefer letting each service manage its own database, either different instances of the same database technology, or entirely different database systems - an approach called Polyglot Persistence. You can use polyglot persistence in a monolith, but it appears more frequently with microservices.
Decentralizing responsibility for data across microservices has implications for managing updates. The common approach to dealing with updates has been to use transactions to guarantee consistency when updating multiple resources. This approach is often used within monoliths.
Using transactions like this helps with consistency, but imposes significant temporal coupling, which is problematic across multiple services. Distributed transactions are notoriously difficult to implement and as a consequence microservice architectures emphasize transactionless coordination between services, with explicit recognition that consistency may only be eventual consistency and problems are dealt with by compensating operations.
Choosing to manage inconsistencies in this way is a new challenge for many development teams, but it is one that often matches business practice. Often businesses handle a degree of inconsistency in order to respond quickly to demand, while having some kind of reversal process to deal with mistakes. The trade-off is worth it as long as the cost of fixing mistakes is less than the cost of lost business under greater consistency.

Infrastructure Automation
Infrastructure automation techniques have evolved enormously over the last few years - the evolution of the cloud and AWS in particular has reduced the operational complexity of building, deploying and operating microservices.
Many of the products or systems being built with microservices are being built by teams with extensive experience of Continuous Delivery and its precursor, Continuous Integration. Teams building software this way make extensive use of infrastructure automation techniques, organized as a build pipeline.
Since this isn’t an article on Continuous Delivery we will call attention to just a couple of key features here. We want as much confidence as possible that our software is working, so we run lots of automated tests. Promotion of working software ‘up’ the pipeline means we automate deployment to each new environment.
A monolithic application will be built, tested and pushed through these environments quite happily. It turns out that once you have invested in automating the path to production for a monolith, then deploying more applications doesn’t seem so scary any more. Remember, one of the aims of CD is to make deployment boring, so whether it’s one or three applications, as long as it’s still boring it doesn’t matter[12].
Another area where we see teams using extensive infrastructure automation is when managing microservices in production. In contrast to our assertion above that as long as deployment is boring there isn’t that much difference between monoliths and microservices, the operational landscape for each can be strikingly different.

Design for failure
A consequence of using services as components is that applications need to be designed so that they can tolerate the failure of services. Any service call could fail due to unavailability of the supplier, so the client has to respond to this as gracefully as possible. This is a disadvantage compared to a monolithic design as it introduces additional complexity to handle it. The consequence is that microservice teams constantly reflect on how service failures affect the user experience. Netflix’s Simian Army induces failures of services and even datacenters during the working day to test both the application’s resilience and monitoring.
This kind of automated testing in production would be enough to give most operation groups the kind of shivers usually preceding a week off work. This isn’t to say that monolithic architectural styles aren’t capable of sophisticated monitoring setups - it’s just less common in our experience.
Since services can fail at any time, it’s important to be able to detect the failures quickly and, if possible, automatically restore service. Microservice applications put a lot of emphasis on real-time monitoring of the application, checking both architectural elements (how many requests per second is the database getting) and business relevant metrics (such as how many orders per minute are received). Semantic monitoring can provide an early warning system of something going wrong that triggers development teams to follow up and investigate.
This is particularly important to a microservices architecture because the microservice preference towards choreography and event collaboration leads to emergent behavior. While many pundits praise the value of serendipitous emergence, the truth is that emergent behavior can sometimes be a bad thing. Monitoring is vital to spot bad emergent behavior quickly so it can be fixed.
Monoliths can be built to be as transparent as a microservice - in fact, they should be. The difference is that you absolutely need to know when services running in different processes are disconnected. With libraries within the same process this kind of transparency is less likely to be useful.
Microservice teams would expect to see sophisticated monitoring and logging setups for each individual service such as dashboards showing up/down status and a variety of operational and business relevant metrics. Details on circuit breaker status, current throughput and latency are other examples we often encounter in the wild.
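
A toy Ruby circuit breaker, to make the "circuit breaker status" detail concrete; the threshold policy is deliberately simplified and the names are invented:

  #ruby
  class CircuitOpenError < StandardError; end

  # After `threshold` consecutive failures, fail fast instead of calling
  # the downstream service again.
  class CircuitBreaker
    def initialize(threshold: 3)
      @threshold = threshold
      @failures = 0
    end

    def open?
      @failures >= @threshold
    end

    def call
      raise CircuitOpenError, 'failing fast' if open?
      begin
        result = yield
        @failures = 0          # a success closes the circuit again
        result
      rescue StandardError
        @failures += 1         # consecutive failures trip the breaker
        raise
      end
    end
  end

  breaker = CircuitBreaker.new(threshold: 2)
  3.times do
    begin
      breaker.call { raise 'downstream service unavailable' }
    rescue CircuitOpenError => e
      puts "breaker open: #{e.message}"
    rescue StandardError => e
      puts "call failed: #{e.message}"
    end
  end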

Evolutionary Design
Microservice practitioners usually have come from an evolutionary design background and see service decomposition as a further tool to enable application developers to control changes in their application without slowing down change. Change control doesn’t necessarily mean change reduction - with the right attitudes and tools you can make frequent, fast, and well-controlled changes to software.
Whenever you try to break a software system into components, you’re faced with the decision of how to divide up the pieces - what are the principles on which we decide to slice up our application? The key property of a component is the notion of independent replacement and upgradeability[13] - which implies we look for points where we can imagine rewriting a component without affecting its collaborators. Indeed many microservice groups take this further by explicitly expecting many services to be scrapped rather than evolved in the longer term.
The Guardian website is a good example of an application that was designed and built as a monolith, but has been evolving in a microservice direction. The monolith still is the core of the website, but they prefer to add new features by building microservices that use the monolith’s API. This approach is particularly handy for features that are inherently temporary, such as specialized pages to handle a sporting event. Such a part of the website can quickly be put together using rapid development languages, and removed once the event is over. We’ve seen similar approaches at a financial institution where new services are added for a market opportunity and discarded after a few months or even weeks.
This emphasis on replaceability is a special case of a more general principle of modular design, which is to drive modularity through the pattern of change [14]. You want to keep things that change at the same time in the same module. Parts of a system that change rarely should be in different services to those that are currently undergoing lots of churn. If you find yourself repeatedly changing two services together, that’s a sign that they should be merged.
Putting components into services adds an opportunity for more granular release planning. With a monolith any changes require a full build and deployment of the entire application. With microservices, however, you only need to redeploy the service(s) you modified. This can simplify and speed up the release process. The downside is that you have to worry about changes to one service breaking its consumers. The traditional integration approach is to try to deal with this problem using versioning, but the preference in the microservice world is to only use versioning as a last resort. We can avoid a lot of versioning by designing services to be as tolerant as possible to changes in their suppliers.
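As a rough C++ sketch of such a tolerant consumer, which reads only the fields it needs and defaults the rest; representing a decoded payload as a string map is a simplifying assumption:

    #include <iostream>
    #include <map>
    #include <string>

    // A parsed message, e.g. the result of decoding a JSON payload.
    using Payload = std::map<std::string, std::string>;

    // Tolerant reader: pick out only the fields this consumer needs,
    // ignore everything else, and default what is missing, so the
    // supplier can add or reorder fields without breaking us.
    std::string field(const Payload& p, const std::string& key,
                      const std::string& fallback) {
        auto it = p.find(key);
        return it != p.end() ? it->second : fallback;
    }

    int main() {
        Payload order = {
            {"id", "42"},
            {"customer", "Ada"},
            {"priority", "high"},  // new field added by the supplier; we ignore it
        };
        std::cout << "order "  << field(order, "id", "?")
                  << " for "   << field(order, "customer", "unknown")
                  << " ships " << field(order, "shipBy", "asap") << "\n";
    }

Because unknown fields are ignored and missing optional fields are defaulted, the supplier can evolve its payload without forcing a new version on every consumer.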

6
Q

Layered Architecture:

  • Pattern Description
  • Key Concepts
  • Trade-offs
A

The Layers architectural pattern helps to structure applications that
can be decomposed into groups of subtasks in which each group of
subtasks is at a particular level of abstraction.

Example
Networking protocols are probably the best-known example of layered architectures. Such a protocol consists of a set of rules and conventions that describe how computer programs communicate across machine boundaries. The format, contents, and meaning of all messages are defined. All scenarios are described in detail, usually by giving sequence charts. The protocol specifies agreements at a variety of abstraction levels, ranging from the details of bit transmission to high-level application logic. Therefore designers use several subprotocols and arrange them in layers. Each layer deals with a specific aspect of communication and uses the services of the next lower layer.
A layered approach is considered better practice than implementing the protocol as a monolithic block, since implementing conceptually different issues separately reaps several benefits, for example aiding development by teams and supporting incremental coding and testing. Using semi-independent parts also enables the easier exchange of individual parts at a later date. Better implementation technologies such as new languages or algorithms can be incorporated by simply rewriting a delimited section of code.
While OSI is an important reference model, TCP/IP, also known as the ‘Internet protocol suite’, is the prevalent networking protocol. We use TCP/IP to illustrate another important reason for layering: the reuse of individual layers in different contexts. TCP, for example, can be used ‘as is’ by diverse distributed applications such as telnet or ftp.

Context
A large system that requires decomposition.

Problem
Imagine that you are designing a system whose dominant
characteristic is a mix of low- and high-level issues, where high-level
operations rely on the lower-level ones. Some parts of the system
handle low-level issues such as hardware traps, sensor input,
reading bits from a file or electrical signals from a wire. At the other
end of the spectrum there may be user-visible functionality such as
the interface of a multi-user ‘dungeon’ game or high-level policies
such as telephone billing tariffs. A typical pattern of communication
flow consists of requests moving from high to low level, and answers
to requests, incoming data or notification about events traveling in
the opposite direction.
Such systems often also require some horizontal structuring that is
orthogonal to their vertical subdivision. This is the case where several
operations are on the same level of abstraction but are largely independent of each other. You can see examples of this where the word
‘and’ occurs in the diagram illustrating the OS1 7-layer model.
The system specification provided to you describes the high-level
tasks to some extent, and specifies the target platform. Portability to
other platforms is desired. Several external boundaries of the system
are specified a priori, such as a functional interface to which your
system must adhere. The mapping of high-level tasks onto the platform is not straightforward, mostly because they are too complex to
be implemented directly using services provided by the platform.
In such a case you need to balance the following forces:
- Late source code changes should not ripple through the system. They should be confined to one component and not affect others.
- Interfaces should be stable, and may even be prescribed by a standards body.
- Parts of the system should be exchangeable. Components should be able to be replaced by alternative implementations without affecting the rest of the system. A low-level platform may be given but may be subject to change in the future. While such fundamental changes usually require code changes and recompilation, reconfiguration of the system can also be done at run-time using an administration interface. Adjusting cache or buffer sizes are examples of such a change. An extreme form of exchangeability might be a client component dynamically switching to a different implementation of a service that may not have been available at start-up. Design for change in general is a major facilitator of graceful system evolution.
- It may be necessary to build other systems at a later date with the
same low-level issues as the system you are currently designing.
- Similar responsibilities should be grouped to help understandability and maintainability. Each component should be coherent: if one component implements divergent issues its integrity may be lost. Grouping and coherence are conflicting at times.
- There is no ‘standard’ component granularity.
- Complex components need further decomposition.
- Crossing component boundaries may impede performance, for
example when a substantial amount of data must be transferred
over several boundaries, or where there are many boundaries to
cross.
- The system will be built by a team of programmers, and work has to be subdivided along clear boundaries, a requirement that is often overlooked at the architectural design stage.

Solution
From a high-level viewpoint the solution is extremely simple.
Structure your system into an appropriate number of layers and
place them on top of each other. Start at the lowest level of
abstraction; call it Layer 1. This is the base of your system. Work your way up the abstraction ladder by putting Layer J on top of Layer J-1 until you reach the top level of functionality; call it Layer N.
Note that this does not prescribe the order in which to actually design layers, it just gives a conceptual view. It also does not prescribe whether an individual Layer J should be a complex subsystem that needs further decomposition, or whether it should just translate requests from Layer J+1 to requests to Layer J-1 and make little contribution of its own. It is however essential that within an individual layer all constituent components work at the same level of abstraction.
Most of the services that Layer J provides are composed of services provided by Layer J-1. In other words, the services of each layer implement a strategy for combining the services of the layer below in a meaningful way. In addition, Layer J’s services may depend on other services in Layer J.

Dynamics
The following scenarios are archetypes for the dynamic behavior of layered applications. This does not mean that you will encounter every scenario in every architecture. In simple layered architectures you will only see the first scenario, but most layered applications involve Scenarios I and II. Due to space limitations we do not give object message sequence charts in this pattern.
Scenario I is probably the best-known one. A client issues a request to Layer N. Since Layer N cannot carry out the request on its own, it calls the next Layer N-1 for supporting subtasks. Layer N-1 provides these, in the process sending further requests to Layer N-2, and so on until Layer 1 is reached. Here, the lowest-level services are finally performed. If necessary, replies to the different requests are passed back up from Layer 1 to Layer 2, from Layer 2 to Layer 3, and so on until the final reply arrives at Layer N. The example code in the Implementation section illustrates this.
A characteristic of such top-down communication is that Layer J often translates a single request from Layer J+1 into several requests to Layer J-1. This is due to the fact that Layer J is on a higher level of abstraction than Layer J-1 and has to map a high-level service onto more primitive ones.
Scenario II illustrates bottom-up communication: a chain of actions starts at Layer 1, for example when a device driver detects input. The driver translates the input into an internal format and reports it to Layer 2, which starts interpreting it, and so on. In this way data moves up through the layers until it arrives at the highest layer. While top-down information and control flow are often described as ‘requests’, bottom-up calls can be termed ‘notifications’.
As mentioned in Scenario I, one top-down request often fans out to several requests in lower layers. In contrast, several bottom-up notifications may either be condensed into a single notification higher in the structure, or remain in a 1:1 relationship.
Scenario III describes the situation where requests only travel through a subset of the layers. A top-level request may only go to the next lower level N-1 if this level can satisfy the request. An example of this is where level N-1 acts as a cache, and a request from level N can be satisfied without being sent all the way down to Layer 1 and from here to a remote server. Note that such caching layers maintain state information, while layers that only forward requests are often stateless. Stateless layers usually have the advantage of being simpler to program, particularly with respect to re-entrancy.
Scenario IV describes a situation similar to Scenario III. An event is detected in Layer 1, but stops at Layer 3 instead of traveling all the way up to Layer N. In a communication protocol, for example, a re-send request may arrive from an impatient client who requested data some time ago. In the meantime the server has already sent the answer, and the answer and the re-send request cross. In this case, Layer 3 of the server side may notice this and intercept the re-send request without further action.
Scenario V involves two stacks of N layers communicating with each other. This scenario is well-known from communication protocols, where the stacks are known as ‘protocol stacks’. In the following diagram, Layer N of the left stack issues a request. The request moves down through the layers until it reaches Layer 1, is sent to Layer 1 of the right stack, and there moves up through the layers of the right stack. The response to the request follows the reverse path until it arrives at Layer N of the left stack.

Implementation
The following steps describe a step-wise refinement approach to the definition of a layered architecture. This is not necessarily the best method for all applications; often a bottom-up or ‘yo-yo’ approach is better. See also the discussion in step 5.
Not all of the following steps are mandatory; it depends on your application. For example, the results of several implementation steps can be heavily influenced or even strictly prescribed by a standards specification that must be followed.
1 Define the abstraction criterion for grouping tasks into layers. This
criterion is often the conceptual distance from the platform.
Sometimes you encounter other abstraction paradigms, for example
the degree of customization for specific domains, or the degree of
conceptual complexity. For example, a chess game application may
consist of the following layers, listed from bottom to top:
Elementary units of the game, such as a bishop
Basic moves, such as castling
Medium-term tactics, such as the Sicilian defense
Overall game strategies
In American Football these levels may correspond respectively to
linebacker, blitz, a sequence of plays for a two-minute drill, and finally
a full game plan.
In the real world of software development we often use a mix of abstraction criteria. For example, the distance from the hardware can shape the lower levels, and conceptual complexity governs the higher ones. An example layering obtained using a mixed-mode layering principle like this is as follows, ordered from top to bottom:
User-visible elements
Specific application modules
Common services level
Operating system interface level
Operating system (being a layered system itself, or structured according to the Microkernel pattern (171))
Hardware
2 Determine the number of abstraction levels according to your abstraction criterion. Each abstraction level corresponds to one layer of the pattern. Sometimes this mapping from abstraction levels to layers is not obvious. Think about the trade-offs when deciding whether to split particular aspects into two layers or combine them into one. Having too many layers may impose unnecessary overhead, while too few layers can result in a poor structure.
3 Name the layers and assign tasks to each of them. The task of the highest layer is the overall system task, as perceived by the client. The tasks of all other layers are to be helpers to higher layers. If we take a bottom-up approach, then lower layers provide an infrastructure on which higher layers can build. However, this approach requires considerable experience and foresight in the domain to find the right abstractions for the lower layers before being able to define specific requests from higher layers.
4 Specify the services. The most important implementation principle is that layers are strictly separated from each other, in the sense that no component may spread over more than one layer. Argument, return, and error types of functions offered by Layer J should be built-in types of the programming language, types defined in Layer J, or types taken from a shared data definition module. Note that modules that are shared between layers relax the principles of strict layering.
It is often better to locate more services in higher layers than in lower layers. This is because developers should not have to learn a large set of slightly different low-level primitives, which may even change during concurrent development. Instead the base layers should be kept ‘slim’ while higher layers can expand to cover a broader spectrum of applicability. This phenomenon is also called the ‘inverted pyramid of reuse’.
5 Refine the layering. Iterate over steps 1 to 4. It is usually not possible to define an abstraction criterion precisely before thinking about the implied layers and their services. Alternatively, it is usually wrong to define components and services first and later impose a layered structure on them according to their usage relationships. Since such a structure does not capture an inherent ordering principle, it is very likely that system maintenance will destroy the architecture. For example, a new component may ask for the services of more than one other layer, violating the principle of strict layering.
The solution is to perform the first four steps several times until a natural and stable layering evolves. ‘Like almost all other kinds of design, finding layers does not proceed in an orderly, logical way, but consists of both top-down and bottom-up steps, and a certain amount of inspiration...’ [Joh95]. Performing both top-down and bottom-up steps alternately is often called ‘yo-yo’ development, mentioned at the start of the Implementation section.
6 Specify an interface for each layer. If Layer J should be a ‘black box’ for Layer J+1, design a flat interface that offers all Layer J’s services, and perhaps encapsulate this interface in a Facade object [GHJV95]. The Known Uses section describes flat interfaces further. A ‘white-box’ approach is one in which Layer J+1 sees the internals of Layer J. A ‘gray-box’ approach is a compromise between the black-box and white-box approaches: Layer J+1 is aware that Layer J consists of several components, and addresses them separately, but does not see the internal workings of individual components.
Good design practice tells us to use the black-box approach whenever possible, because it supports system evolution better than other approaches. Exceptions to this rule can be made for reasons of efficiency, or a need to access the innards of another layer. The latter occurs rarely, and may be helped by the Reflection pattern (193), which supports more controlled access to the internal functioning of a component. Arguments over efficiency are debatable, especially when inlining can simply do away with a thin layer of indirection.
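As a rough illustration of the flat, black-box interface this step describes, here is a minimal C++ sketch; the layer and component names (LayerJ, Framer, Checksum) are illustrative assumptions, not from the original text:

    #include <iostream>
    #include <string>

    // Internal components of Layer J, invisible to Layer J+1.
    class Framer {
    public:
        std::string frame(const std::string& s) { return "[" + s + "]"; }
    };
    class Checksum {
    public:
        std::string add(const std::string& s) { return s + "#crc"; }
    };

    // Facade: the single flat interface that Layer J exposes upward.
    class LayerJ {
    public:
        std::string send(const std::string& payload) {
            return checksum_.add(framer_.frame(payload));  // composes internal services
        }
    private:
        Framer framer_;
        Checksum checksum_;
    };

    int main() {
        LayerJ layer;                              // Layer J+1 sees only send()
        std::cout << layer.send("hello") << "\n";  // prints [hello]#crc
    }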
7 Structure individual layers. Traditionally, the focus was on the proper relationships between layers, but inside individual layers there was often free-wheeling chaos. When an individual layer is complex it should be broken into separate components. This subdivision can be helped by using finer-grained patterns. For example, you can use the Bridge pattern [GHJV95] to support multiple implementations of services provided by a layer. The Strategy pattern [GHJV95] can support the dynamic exchange of algorithms used by a layer.
8 Specify the communication between adjacent layers. The most often used mechanism for inter-layer communication is the push model. When Layer J invokes a service of Layer J-1, any required information is passed as part of the service call. The reverse is known as the pull model and occurs when the lower layer fetches available information from the higher layer at its own discretion. The Publisher-Subscriber (339) and Pipes and Filters (53) patterns give details about push and pull model information transfer. However, such models may introduce additional dependencies between a layer and its adjacent higher layer. If you want to avoid dependencies of lower layers on higher layers introduced by the pull model, use callbacks, as described in the next step.
9 Decouple adjacent layers. There are many ways to do this. Often an
upper layer is aware of the next lower layer, but the lower layer is
unaware of the identity of its users. This implies a one-way coupling
only: changes in Layer J can ignore the presence and identity of Layer J+1 provided that the interface and semantics of the Layer J services being changed remain stable. Such a one-way coupling is perfect when requests travel top-down, as illustrated in Scenario I, as return values are sufficient to transport the results in the reverse direction.
For bottom-up communication, you can use callbacks and still
preserve a top-down one-way coupling. Here the upper layer registers
callback functions with the lower layer. This is especially effective
when only a fixed set of possible events is sent from lower to higher
layers. During start-up the higher layer tells the lower layer what
functions to call when specific events occur. The lower layer
maintains the mapping from events to callback functions in a
registry. The Reactor pattern [Sch94] illustrates an object-oriented
implementation of the use of callbacks in conjunction with event
demultiplexing. The Command pattern [GHJV95] shows how to
encapsulate callback functions into first-class objects.
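As a rough C++ sketch of such a callback registry, in which the lower layer notifies upward without knowing its callers; the names (LowerLayer, onEvent) are illustrative assumptions:

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    // Lower layer: keeps a registry from event names to callbacks, so it
    // never needs to know the identity of the layer above it.
    class LowerLayer {
    public:
        void onEvent(const std::string& event,
                     std::function<void(const std::string&)> cb) {
            registry_[event] = std::move(cb);
        }
        void deviceInput(const std::string& data) {       // e.g. driver detects input
            auto it = registry_.find("data-ready");
            if (it != registry_.end()) it->second(data);  // notify upward via callback
        }
    private:
        std::map<std::string, std::function<void(const std::string&)>> registry_;
    };

    int main() {
        LowerLayer lower;
        // During start-up the higher layer registers what to call per event.
        lower.onEvent("data-ready", [](const std::string& d) {
            std::cout << "upper layer interprets: " << d << "\n";
        });
        lower.deviceInput("0xCAFE");  // bottom-up notification, top-down coupling only
    }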
You can also decouple the upper layer from the lower layer to a certain degree. Here is an example of how this can be done using object-oriented techniques. The upper layer is decoupled from specific implementation variants of the lower layer by coding the upper layer against an interface. In the following C++ code, this interface is a base class. The lower-level implementations can then be easily exchanged, even at run-time. In the example code, a Layer 2 component talks to a Level 1 provider but does not know which implementation of Layer 1 it is talking to. The ‘wiring’ of the layers is done here in the main program, but will usually be factored out into a connection-management component. The main program also takes the role of the client by calling a service in the top layer.
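The listing itself did not survive in this card, so the following is a minimal reconstruction consistent with the description above; the concrete names (Level1Provider, EthernetLink, TokenRingLink, Layer2) are illustrative assumptions:

    #include <iostream>
    #include <string>

    // Layer 1 interface: Layer 2 codes against this base class only.
    class Level1Provider {
    public:
        virtual ~Level1Provider() = default;
        virtual void transmit(const std::string& bits) = 0;
    };

    // Two exchangeable Layer 1 implementations.
    class EthernetLink : public Level1Provider {
    public:
        void transmit(const std::string& bits) override {
            std::cout << "ethernet sends " << bits << "\n";
        }
    };
    class TokenRingLink : public Level1Provider {
    public:
        void transmit(const std::string& bits) override {
            std::cout << "token ring sends " << bits << "\n";
        }
    };

    // Layer 2 component: knows only the Level1Provider interface.
    class Layer2 {
    public:
        void setLevel1(Level1Provider* p) { level1_ = p; }   // wiring hook
        void sendPacket(const std::string& payload) {
            level1_->transmit("<frame>" + payload + "</frame>");
        }
    private:
        Level1Provider* level1_ = nullptr;
    };

    int main() {                      // main wires the layers and acts as client
        EthernetLink ethernet;
        TokenRingLink tokenRing;
        Layer2 layer2;
        layer2.setLevel1(&ethernet);
        layer2.sendPacket("hello");   // goes out via Ethernet
        layer2.setLevel1(&tokenRing); // implementation exchanged at run-time
        layer2.sendPacket("hello");   // same Layer 2 code, different Layer 1
    }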
10 Design an error-handling strategy. Error handling can be rather
expensive for layered architectures with respect to processing time
and, notably, programming effort. An error can either be handled in
the layer where it occurred or be passed to the next higher layer. In
the latter case, the lower layer must transform the error into an error
description meaningful to the higher layer. As a rule of thumb, try to
handle errors at the lowest layer possible. This prevents higher layers
from being swamped with many different errors and voluminous
error-handling code. As a minimum, try to condense similar error
types into more general error types, and only propagate these more
general errors. If you do not do this, higher layers can be confronted
with error messages that apply to lower-level abstractions that the
higher layer does not understand. And who hasn’t seen totally cryptic
error messages being popped up to the highest layer of all: the user?
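As a rough C++ sketch of this rule of thumb, condensing several low-level error types into one general error meaningful to the layer above; all error types named here are illustrative assumptions:

    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Low-level errors that Layer 1 can raise.
    struct CrcError   : std::runtime_error { CrcError()   : std::runtime_error("bad checksum") {} };
    struct TimeoutErr : std::runtime_error { TimeoutErr() : std::runtime_error("timeout") {} };

    // General error type that Layer 2 exposes upward.
    struct TransferFailed : std::runtime_error {
        explicit TransferFailed(const std::string& why)
            : std::runtime_error("transfer failed: " + why) {}
    };

    void layer1Receive() { throw TimeoutErr(); }  // stand-in for real I/O

    // Layer 2 condenses several low-level error types into one general one,
    // so higher layers never see Layer 1 abstractions.
    void layer2Receive() {
        try {
            layer1Receive();
        } catch (const std::runtime_error& low) {
            throw TransferFailed(low.what());
        }
    }

    int main() {
        try {
            layer2Receive();
        } catch (const TransferFailed& e) {
            std::cout << e.what() << "\n";  // a message the top layer understands
        }
    }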

Example Resolved
The most widely-used communication protocol, TCP/IP, does not strictly conform to the OSI model and consists of only four layers: TCP and IP constitute the middle layers, with the application at the top and the transport medium at the bottom.
TCP/IP has several interesting aspects that are relevant to our discussion. Corresponding layers communicate in a peer-to-peer fashion using a virtual protocol. This means that, for example, the two TCP entities send each other messages that follow a specific format. From a conceptual point of view, they communicate using the dashed line labeled ‘TCP protocol’ in the diagram above. We refer to this protocol as ‘virtual’ because in reality a TCP message traveling from left to right in the diagram is handled first by the IP entity on the left. This IP entity treats the message as a data packet, prefixes it with a header, and forwards it to the local Ethernet interface. The Ethernet interface then adds its own control information and sends the data over the physical connection. On the receiving side the local Ethernet and IP entities strip the Ethernet and IP headers respectively. The TCP entity on the right-hand side of the diagram then receives the TCP message from its peer on the left as if it had been delivered over the dashed line.
A notable characteristic of TCP/IP and other communication protocols is that standardizing the functional interface is a secondary concern, partly driven by the fact that TCP/IP implementations from different vendors differ from each other intentionally. The vendors usually do not offer single layers, but full implementations of the protocol suite. As a result, every TCP implementation exports a fixed set of core functions but is free to offer more, for example to increase flexibility or performance. This looseness has no impact on the application developer for two reasons. Firstly, different stacks understand each other because the virtual protocols are strictly obeyed. Secondly, application developers use a layer on top of TCP, or its alternative, UDP. This upper layer has a fixed interface. Sockets and TLI are examples of such a fixed interface.
Assume that we use the Socket API on top of a TCP/IP stack. The Socket API consists of system calls such as bind(), listen() or read(). The Socket implementation sits conceptually on top of TCP/UDP, but uses lower layers as well, for example IP and ICMP. This violation of strict layering principles is worthwhile to tune performance, and can be justified when all the communication layers from sockets to IP are built into the OS kernel.
The behavior of the individual layers and the structure of the data
packets flowing from layer to layer are much more rigidly defined in
TCP/IP than the functional interface. This is because different TCP/IP stacks must understand each other: they are the workhorses of the increasingly heterogeneous Internet. The protocol rules describe exactly how a layer behaves under specific circumstances. For
example, its behavior when handling an incoming re-transmit message after the original has been sent is exactly prescribed. The data
packet specifications mostly concern the headers and trailers added
to messages. The size of headers and trailers is specified, as well as
the meaning of their subfields. In a header, for example, the protocol
stack encodes information such as sender, destination, protocol
used, time-out information, sequence number, and checksums. For
more information on TCP/IP, see for example [Ste90]. For even more detail, study the series started in [Ste94].

Variants
Relaxed Layered System. This is a variant of the Layers pattern that is less restrictive about the relationship between layers. In a Relaxed Layered System each layer may use the services of all layers below it, not only of the next lower layer. A layer may also be partially opaque: this means that some of its services are only visible to the next higher layer, while others are visible to all higher layers. The gain of flexibility and performance in a Relaxed Layered System is paid for by a loss of maintainability. This is often a high price to pay, and you should consider carefully before giving in to the demands of developers asking for shortcuts. We see these shortcuts more often in infrastructure systems, such as the UNIX operating system or the X Window System, than in application software. The main reason for this is that infrastructure systems are modified less often than application systems, and their performance is usually more important than their maintainability.
Layering Through Inheritance. This variant can be found in some
object-oriented systems and is described in [BuCa96]. In this variant
lower layers are implemented as base classes. A higher layer requesting services from a lower layer inherits from the lower layer’s
implementation and hence can issue requests to the base class
services. An advantage of this scheme is that higher layers can modify
lower-layer services according to their needs. A drawback is that such
an inheritance relationship closely ties the higher layer to the lower
layer. If for example the data layout of a C++ base class changes, all
subclasses must be recompiled. Such unintentional dependencies
introduced by inheritance are also known as the fragile base class
problem.
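As a rough C++ sketch of this variant; the names (Storage, Journal) are illustrative assumptions:

    #include <iostream>
    #include <string>

    // Lower layer implemented as a base class.
    class Storage {
    public:
        void write(const std::string& s) { std::cout << "raw write: " << s << "\n"; }
    };

    // Higher layer inherits the lower layer's implementation and can
    // adjust its services, at the price of tight coupling to the base.
    class Journal : public Storage {
    public:
        void append(const std::string& entry) {
            write("journal: " + entry);  // reuses the inherited lower-layer service
        }
    };

    int main() {
        Journal j;
        j.append("order 42 shipped");
        // If Storage's data layout changes, Journal must be recompiled:
        // the fragile base class problem described above.
    }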

Known Uses
Virtual Machines.
We can speak of lower levels as a virtual machine
that insulates higher levels from low-level details or varying
hardware. For example, the Java Virtual Machine (JVM) defines a binary code format. Code written in the Java programming language is translated into a platform-neutral binary code, also called bytecodes, and delivered to the JVM for interpretation. The JVM itself is platform-specific: there are implementations of the JVM for different operating systems and processors. Such a two-step translation process allows platform-neutral source code and the delivery of binary code not readable by humans, while maintaining platform independence.
APIs.
An Application Programming Interface is a layer that encapsulates lower layers of frequently-used functionality. An API is usually a flat collection of function specifications, such as the UNIX system calls. ‘Flat’ means here that the system calls for accessing the UNIX file system, for example, are not separated from system calls for storage allocation: you can only know from the documentation to which group open() or sbrk() belong. Above system calls we find other layers, such as the C standard library [KR88] with operations like printf() or fopen(). These libraries provide the benefit of portability between different operating systems, and provide additional higher-level services such as output buffering or formatted output. They often carry the liability of lower efficiency, and perhaps more tightly-prescribed behavior, whereas conventional system calls would give more flexibility, and more opportunities for errors and conceptual mismatches, mostly due to the wide gap between high-level application abstractions and low-level system calls.
Information Systems (IS)
from the business software domain often
use a two-layer architecture. The bottom layer is a database that
holds company-specific data. Many applications work concurrently
on top of this database to fulfill different tasks. Mainframe interactive
systems and the much-extolled Client-Server systems often employ
this architecture. Because the tight coupling of user interface and
data representation causes its share of problems, a third layer is
introduced between them-the domain layer-which models the
conceptual structure of the problem domain. As the top level still
mixes user interface and application, this level is also split, resulting
in a four-layer architecture. These are, from highest to lowest:
Presentation
Application logic
Domain layer
Database

Consequences
The Layers pattern has several benefits:
Reuse of layers. If an individual layer embodies a well-defined
abstraction and has a well-defined and documented interface, the
layer can be reused in multiple contexts. However, despite the higher
costs of not reusing such existing layers, developers often prefer to
rewrite this functionality. They argue that the existing layer does not
fit their purposes exactly, layering would cause high performance
penalties-and they would do a better job anyway. An empirical study
hints that black-box reuse of existing layers can dramatically reduce
development effort and decrease the number of defects [ZEWH95].
Support for standardization. Clearly-defined and commonly-accepted
levels of abstraction enable the development of standardized tasks
and interfaces. Different implementations of the same interface can
then be used interchangeably. This allows you to use products from
different vendors in different layers. A well-known example of a
standardized interface is the POSIX programming interface [IEEE88].
Dependencies are kept local. Standardized interfaces between layers
usually confine the effect of code changes to the layer that is changed.
Changes of the hardware, the operating system, the window system,
special data formats and so on often affect only one layer, and you can
adapt affected layers without altering the remaining layers. This supports the portability of a system. Testability is supported as well,
since you can test particular layers independently of other components in the system.
Exchangeability. Individual layer implementations can be replaced by
semantically-equivalent implementations without too great an effort.
If the connections between layers are hard-wired in the code, these
are updated with the names of the new layer’s implementation. You
can even replace an old implementation with an implementation with
a different interface by using the Adapter pattern for interface adaptation [GHJV95]. The other extreme is dynamic exchange, which you
can achieve by using the Bridge pattern [GHJV95], for example, and
manipulating the pointer to the implementation at run-time.
Hardware exchanges or additions are prime examples for illustrating
exchangeability. A new hardware I/O device, for example, can be put in operation by installing the right driver program, which may be a
plug-in or replace an old driver program. Higher layers will not be affected by the exchange. A transport medium such as Ethernet could
be replaced by Token Ring. In such a case, upper layers do not need
to change their interfaces, and can continue to request services from
lower layers as before. However, if you want to be able to switch
between two layers that do not match closely in their interfaces and
services, you must build an insulating layer on top of these two layers. The benefit of exchangeability comes at the price of increased
programming effort and possibly decreased run-time performance.
The Layers pattern also imposes liabilities:
Cascades of changing behavior. A severe problem can occur when the behavior of a layer changes. Assume for example that we replace a 10 Megabit/sec Ethernet layer at the bottom of our networked application and instead put IP on top of 155 Megabit/sec ATM. Due to limitations with I/O and memory performance, our local-end system cannot process incoming packets fast enough to keep up with ATM’s high data rates. However, bandwidth-intensive applications such as medical imaging or video conferencing could benefit from the full speed of ATM. Sending multiple data streams in parallel is a high-level solution to avoid the above limitations of lower levels. Similarly, IP routers, which forward packets within the Internet, can be layered to run on top of high-speed ATM networks via multi-CPU systems that perform IP packet processing in parallel [PST96].
In summary, higher layers can often be shielded from changes in lower layers. This allows systems to be tuned transparently by collapsing
lower layers and/or replacing them with faster solutions such as
hardware. The layering becomes a disadvantage if you have to do a
substantial amount of rework on many layers to incorporate an apparently local change.
Lower efficiency. A layered architecture is usually less efficient than,
say, a monolithic structure or a ‘sea of objects’. If high-level services
in the upper layers rely heavily on the lowest layers, all relevant data
must be transferred through a number of intermediate layers, and
may be transformed several times. The same is true of all results or
error messages produced in lower levels that are passed to the highest
level. Communication protocols, for example, transform messages
from higher levels by adding message headers and trailers.
Unnecessary work. If lower layers perform excessive or duplicate work not actually required by the higher
layer, this has a negative impact on performance. Demultiplexing in
a communication protocol stack is an example of this phenomenon.
Several high-level requests cause the same incoming bit sequence to
be read many times because every high-level request is interested in
a different subset of the bits. Another example is error correction in
file transfer. A general-purpose low-level transmission system is written first and provides a very high degree of reliability, but it can be more economical or even mandatory to build reliability into higher layers, for example by using checksums. See [SRC84] for details of
these trade-offs and further considerations about where to place
functionality in a layered system.
Difficulty of establishing the correct granularity of layers. A layered
architecture with too few layers does not fully exploit this pattern’s
potential for reusability, changeability and portability. On the other
hand, too many layers introduce unnecessary complexity and
overheads in the separation of layers and the transformation of
arguments and return values. The decision about the granularity of
layers and the assignment of tasks to layers is difficult, but is critical
for the quality of the architecture. A standardized architecture can only be used if the scope of potential client applications fits the defined layers.

7
Q

Architectural Patterns: MVVM

A

Model–view–viewmodel (MVVM) is a software architectural pattern.
MVVM facilitates a separation of development of the graphical user interface – be it via a markup language or GUI code – from development of the business logic or back-end logic (the data model). The view model of MVVM is a value converter,[1] meaning the view model is responsible for exposing (converting) the data objects from the model in such a way that objects are easily managed and presented. In this respect, the view model is more model than view, and handles most if not all of the view’s display logic.[1] The view model may implement a mediator pattern, organizing access to the back-end logic around the set of use cases supported by the view.
MVVM is a variation of Martin Fowler’s Presentation Model design pattern.[2][3] MVVM abstracts a view’s state and behavior in the same way,[3] but a Presentation Model abstracts a view (creates a view model) in a manner not dependent on a specific user-interface platform.
MVVM was invented by Microsoft architects Ken Cooper and Ted Peters specifically to simplify event-driven programming of user interfaces. The pattern was incorporated into Windows Presentation Foundation (WPF) (Microsoft’s .NET graphics system) and Silverlight (WPF’s Internet application derivative).[3] John Gossman, one of Microsoft’s WPF and Silverlight architects, announced MVVM on his blog in 2005.

Components of MVVM pattern
Model
Model refers either to a domain model, which represents real state content (an object-oriented approach), or to the data access layer, which represents content (a data-centric approach).
View
As in the model-view-controller (MVC) and model-view-presenter (MVP) patterns, the view is the structure, layout, and appearance of what a user sees on the screen.[6] It displays a representation of the model and receives the user’s interaction with the view (clicks, keyboard, gestures, etc.), and it forwards the handling of these to the view model via the data binding (properties, event callbacks, etc.) that is defined to link the view and view model.
View model
The view model is an abstraction of the view exposing public properties and commands. Instead of the controller of the MVC pattern, or the presenter of the MVP pattern, MVVM has a binder, which automates communication between the view and its bound properties in the view model. The view model has been described as a state of the data in the model.[7]
The main difference between the view model and the presenter in the MVP pattern is that the presenter has a reference to a view, whereas the view model does not. Instead, a view directly binds to properties on the view model to send and receive updates. To function efficiently, this requires a binding technology or generating boilerplate code to do the binding.[6]
Binder
Declarative data and command-binding are implicit in the MVVM pattern. In the Microsoft solution stack, the binder is a markup language called XAML.[8] The binder frees the developer from being obliged to write boiler-plate logic to synchronize the view model and view. When implemented outside of the Microsoft stack, the presence of a declarative data binding technology is what makes this pattern possible,[4][9] and without a binder, one would typically use MVP or MVC instead and have to write more boilerplate (or generate it with some other tool).
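As a framework-free illustration of the binder idea, here is a minimal C++ sketch of an observable property, a view model that converts model data, and a view bound via a callback; this sketches the concept only, not the WPF/XAML mechanism, and all names are illustrative assumptions:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Observable property: the minimal mechanism a binder needs.
    template <typename T>
    class Property {
    public:
        void bind(std::function<void(const T&)> observer) {  // binding hook
            observers_.push_back(std::move(observer));
        }
        void set(const T& v) {
            value_ = v;
            for (auto& o : observers_) o(value_);  // push the change to bound views
        }
        const T& get() const { return value_; }
    private:
        T value_{};
        std::vector<std::function<void(const T&)>> observers_;
    };

    // View model: converts model data into view-ready values.
    class CustomerViewModel {
    public:
        Property<std::string> displayName;
        void load(const std::string& first, const std::string& last) {
            displayName.set(last + ", " + first);  // value conversion lives here
        }
    };

    int main() {
        CustomerViewModel vm;
        // "Binding": the view registers for updates; no code-behind needed.
        vm.displayName.bind([](const std::string& s) {
            std::cout << "label shows: " << s << "\n";
        });
        vm.load("Ada", "Lovelace");  // the view refreshes automatically
    }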

Rationale
MVVM was designed to make use of data binding functions in WPF (Windows Presentation Foundation) to better facilitate the separation of view layer development from the rest of the pattern, by removing virtually all GUI code (“code-behind”) from the view layer.[3] Instead of requiring user experience (UX) developers to write GUI code, they can use the framework markup language (e.g., XAML) and create data bindings to the view model, which is written and maintained by application developers. The separation of roles allows interactive designers to focus on UX needs rather than programming of business logic. The layers of an application can thus be developed in multiple work streams for higher productivity. Even when a single developer works on the entire code base, a proper separation of the view from the model is more productive, as the user interface typically changes frequently and late in the development cycle based on end-user feedback.
The MVVM pattern attempts to gain both advantages of separation of functional development provided by MVC, while leveraging the advantages of data bindings and the framework by binding data as close to the pure application model as possible.[3][10][11] It uses the binder, view model, and any business layers’ data-checking features to validate incoming data. The result is that the model and framework drive as many of the operations as possible, eliminating or minimizing application logic that directly manipulates the view (e.g., code-behind).

Link:
https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93viewmodel

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

Event-Driven Architecture:

  • Pattern Description
  • Broker Topology
  • Mediator Topology
  • Trade-offs
A

Event-driven architecture (EDA) is a software architecture paradigm promoting the production, detection, consumption of, and reaction to events.
An event can be defined as “a significant change in state”.[1] For example, when a consumer purchases a car, the car’s state changes from “for sale” to “sold”. A car dealer’s system architecture may treat this state change as an event whose occurrence can be made known to other applications within the architecture. From a formal perspective, what is produced, published, propagated, detected or consumed is a (typically asynchronous) message called the event notification, and not the event itself, which is the state change that triggered the message emission. Events do not travel, they just occur. However, the term event is often used metonymically to denote the notification message itself, which may lead to some confusion. This is because event-driven architectures are often designed atop message-driven architectures, where such a communication pattern requires one of the inputs to be text-only, the message, to differentiate how each communication should be handled.
This architectural pattern may be applied by the design and implementation of applications and systems that transmit events among loosely coupled software components and services. An event-driven system typically consists of event emitters (or agents), event consumers (or sinks), and event channels. Emitters have the responsibility to detect, gather, and transfer events. An event emitter does not know the consumers of the event; it does not even know whether a consumer exists, and if one exists, it does not know how the event is used or further processed. Sinks have the responsibility of applying a reaction as soon as an event is presented. The reaction might or might not be completely provided by the sink itself. For instance, the sink might just have the responsibility to filter, transform and forward the event to another component, or it might provide a self-contained reaction to such an event. Event channels are conduits in which events are transmitted from event emitters to event consumers. The knowledge of the correct distribution of events is exclusively present within the event channel. The physical implementation of event channels can be based on traditional components such as message-oriented middleware or point-to-point communication, which might require a more appropriate transactional executive framework.
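As a rough sketch of these three elements, here is a minimal C++ example with an emitter, a channel and two sinks; the synchronous, in-process delivery and all names are simplifying assumptions, since real event channels are usually asynchronous middleware:

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Event {
        std::string name;  // event header: the type of event
        std::string body;  // event body: details of the state change
    };

    // Event channel: the only place that knows how events are distributed.
    class EventChannel {
    public:
        void subscribe(const std::string& name,
                       std::function<void(const Event&)> sink) {
            sinks_[name].push_back(std::move(sink));
        }
        void publish(const Event& e) {
            for (auto& sink : sinks_[e.name]) sink(e);  // deliver to each consumer
        }
    private:
        std::map<std::string, std::vector<std::function<void(const Event&)>>> sinks_;
    };

    int main() {
        EventChannel channel;
        // Sinks react to events; the emitter knows nothing about them.
        channel.subscribe("car-sold", [](const Event& e) {
            std::cout << "billing reacts to: " << e.body << "\n";
        });
        channel.subscribe("car-sold", [](const Event& e) {
            std::cout << "inventory reacts to: " << e.body << "\n";
        });
        // Emitter: detects the state change and publishes a notification.
        channel.publish({"car-sold", "VIN 123 changed from 'for sale' to 'sold'"});
    }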
Building systems around an event-driven architecture simplifies horizontal scalability in distributed computing models and makes them more resilient to failure. This is because application state can be copied across multiple parallel snapshots for high availability.[2] New events can be initiated anywhere, but more importantly propagate across the network of data stores, updating each as they arrive. Adding extra nodes becomes trivial as well: you can simply take a copy of the application state, feed it a stream of events and run with it.[3]
Event-driven architecture can complement service-oriented architecture (SOA) because services can be activated by triggers fired on incoming events.[4][5] This paradigm is particularly useful whenever the sink does not provide any self-contained executive.
SOA 2.0 evolves the implications SOA and EDA architectures provide to a richer, more robust level by leveraging previously unknown causal relationships to form a new event pattern. This new business-intelligence pattern triggers further autonomous human or automated processing that adds exponential value to the enterprise by injecting value-added information into the recognized pattern, which could not have been achieved previously.

Event structure
An event can be made of two parts, the event header and the event body. The event header might include information such as event name, time stamp for the event, and type of event. The event body provides the details of the state change detected. An event body should not be confused with the pattern or the logic that may be applied in reaction to the occurrence of the event itself. CloudEvents provides an Open Source specification for describing event data in a common way.
Event generator
The first logical layer is the event generator, which senses a fact and represents that fact as an event message. As an example, an event generator could be an email client, an E-commerce system, a monitoring agent or some type of physical sensor.
Converting the data collected from such a diverse set of data sources to a single standardized form of data for evaluation is a significant task in the design and implementation of this first logical layer.[6] However, considering that an event is a strongly declarative frame, any informational operations can be easily applied, thus eliminating the need for a high level of standardization.
Event channel
This is the second logical layer. An event channel is a mechanism of propagating the information collected from an event generator to the event engine[6] or sink. This could be a TCP/IP connection, or any type of an input file (flat, XML format, e-mail, etc.). Several event channels can be opened at the same time. Usually, because the event processing engine has to process them in near real time, the event channels will be read asynchronously. The events are stored in a queue, waiting to be processed later by the event processing engine.
Event processing engine
The event processing engine is the logical layer responsible for identifying an event, and then selecting and executing the appropriate reaction. It can also trigger a number of assertions. For example, if the event that comes into the event processing engine is a product ID low in stock, this may trigger reactions such as “Order product ID” and “Notify personnel”.[6]
Downstream event-driven activity
This is the logical layer where the consequences of the event are shown. This can be done in many different ways and forms; e.g., an email is sent to someone and an application may display some kind of warning on the screen.[6] Depending on the level of automation provided by the sink (event processing engine) the downstream activity might not be required.

Event processing styles
There are three general styles of event processing: simple, stream, and complex. The three styles are often used together in a mature event-driven architecture.[6]
Simple event processing
Simple event processing concerns events that are directly related to specific, measurable changes of condition. In simple event processing, a notable event happens which initiates downstream action(s). Simple event processing is commonly used to drive the real-time flow of work, thereby reducing lag time and cost.[6]
For example, simple events can be created by a sensor detecting changes in tire pressures or ambient temperature. Incorrect tire pressure will generate a simple event from the sensor that triggers a yellow light advising the driver about the state of the tire.
Event stream processing
In event stream processing (ESP), both ordinary and notable events happen. Ordinary events (orders, RFID transmissions) are screened for notability and streamed to information subscribers. Event stream processing is commonly used to drive the real-time flow of information in and around the enterprise, which enables in-time decision making.[6]
Complex event processing
Complex event processing (CEP) allows patterns of simple and ordinary events to be considered to infer that a complex event has occurred. Complex event processing evaluates a confluence of events and then takes action. The events (notable or ordinary) may cross event types and occur over a long period of time. The event correlation may be causal, temporal, or spatial. CEP requires the employment of sophisticated event interpreters, event pattern definition and matching, and correlation techniques. CEP is commonly used to detect and respond to business anomalies, threats, and opportunities.[6]
Online event processing
Online event processing (OLEP) uses asynchronous distributed event logs to process complex events and manage persistent data.[7] OLEP makes it possible to reliably compose related events of a complex scenario across heterogeneous systems. It thereby enables very flexible distribution patterns with high scalability and offers strong consistency. However, it cannot guarantee an upper bound on processing time.

Extreme loose coupling and well distributed
An event-driven architecture is extremely loosely coupled and well distributed. The great distribution of this architecture exists because an event can be almost anything and exist almost anywhere. The architecture is extremely loosely coupled because the event itself doesn’t know about the consequences of its cause. For example, if we have an alarm system that records information when the front door opens, the door itself doesn’t know that the alarm system will add information when the door opens, just that the door has been opened.[6]
Semantic Coupling and further research
Event-driven architectures have loose coupling within space, time and synchronization, providing a scalable infrastructure for information exchange and distributed workflows. However, event architectures are tightly coupled, via event subscriptions and patterns, to the semantics of the underlying event schema and values. The high degree of semantic heterogeneity of events in large and open deployments such as smart cities and the sensor web makes it difficult to develop and maintain event-based systems. In order to address semantic coupling within event-based systems, the use of approximate semantic matching of events is an active area of research.

Link:
https://en.wikipedia.org/wiki/Event-driven_architecture

9
Q

SOA (Basic):

  • Definitions of SOA
  • SOA Concepts
  • Service Attributes
A

Service-oriented architecture (SOA) is a style of software design where services are provided to the other components by application components, through a communication protocol over a network. The basic principles of service-oriented architecture are independent of vendors, products and technologies.[1] A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online.
A service has four properties according to one of many definitions of SOA:
1. It logically represents a business activity with a specified outcome.
2. It is self-contained.
3. It is a black box for its consumers.
4. It may consist of other underlying services.[3]
Different services can be used in conjunction to provide the functionality of a large software application,[4] a principle SOA shares with modular programming. Service-oriented architecture integrates distributed, separately-maintained and -deployed software components. It is enabled by technologies and standards that facilitate components’ communication and cooperation over a network, especially over an IP network.

Overview
In SOA, services use protocols that describe how they pass and parse messages using description metadata. This metadata describes both the functional characteristics of the service and quality-of-service characteristics. Service-oriented architecture aims to allow users to combine large chunks of functionality to form applications which are built purely from existing services, combining them in an ad hoc manner. A service presents a simple interface to the requester that abstracts away the underlying complexity, acting as a black box. Users can access these independent services without any knowledge of their internal implementation.
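As a rough illustration of a service acting as a black box behind a simple interface, here is a minimal C++ sketch; in a real SOA the call would cross a network, and the names (StatementService, requester) are illustrative assumptions:

    #include <iostream>
    #include <string>

    // A service presents a simple interface that hides its implementation.
    class StatementService {
    public:
        virtual ~StatementService() = default;
        virtual std::string creditCardStatement(const std::string& account) = 0;
    };

    // The requester neither knows nor cares how the service does its work.
    class LegacyMainframeStatements : public StatementService {
    public:
        std::string creditCardStatement(const std::string& account) override {
            return "statement for " + account + " (from legacy system)";
        }
    };

    void requester(StatementService& svc) {  // depends only on the contract
        std::cout << svc.creditCardStatement("acct-7") << "\n";
    }

    int main() {
        LegacyMainframeStatements impl;      // wrapper around an existing system
        requester(impl);
    }

Note how the implementation here wraps a legacy system, which matches the idea that services can be either new applications or network-enabled wrappers around existing ones.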

Defining concepts
The related buzzword service-orientation promotes loose coupling between services. SOA separates functions into distinct units, or services,[6] which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services.[7]
A manifesto was published for service-oriented architecture in October, 2009. This came up with six core values which are listed as follows:
1. Business value is given more importance than technical strategy.
2. Strategic goals are given more importance than project-specific benefits.
3. Intrinsic inter-operability is given more importance than custom integration.
4. Shared services are given more importance than specific-purpose implementations.
5. Flexibility is given more importance than optimization.
6. Evolutionary refinement is given more importance than pursuit of initial perfection.
SOA can be seen as part of the continuum which ranges from the older concept of distributed computing[6][9] and modular programming, through SOA, and on to current practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).

Principles
There are no industry standards relating to the exact composition of a service-oriented architecture, although many industry sources have published their own principles. Some of these[11][12][13][14] include the following:
- Standardized service contract
Services adhere to a standard communications agreement, as defined collectively by one or more service-description documents within a given set of services.
- Service reference autonomy (an aspect of loose coupling)
The relationship between services is minimized to the level that they are only aware of each other’s existence.
- Service location transparency (an aspect of loose coupling)
Services can be called from anywhere within the network, no matter where they are located.
- Service longevity
Services should be designed to be long-lived. Where possible, services should avoid forcing consumers to change if they do not require new features; if you call a service today, you should be able to call the same service tomorrow.
- Service abstraction
The services act as black boxes, that is, their inner logic is hidden from the consumers.
- Service autonomy
Services are independent and control the functionality they encapsulate, from both a design-time and a run-time perspective.
- Service statelessness
Services are stateless: they either return the requested value or raise an exception, minimizing resource use.
- Service granularity
A principle to ensure services have an adequate size and scope. The functionality provided by the service to the user must be relevant.
- Service normalization
Services are decomposed or consolidated (normalized) to minimize redundancy. In some cases this is not done; these are the cases where performance optimization, access, and aggregation are required.[15]
- Service composability
Services can be used to compose other services.
- Service discovery
Services are supplemented with communicative metadata by which they can be effectively discovered and interpreted.
- Service reusability
Logic is divided into various services, to promote reuse of code.
- Service encapsulation
Many services which were not initially planned under SOA, may get encapsulated or become a part of SOA.

Patterns
Each SOA building block can play any of the three roles:
- Service provider
It creates a web service and provides its information to the service registry. Each provider must weigh trade-offs such as which services to expose, whether to prioritize security or easy availability, and what price to offer the service for. The provider also has to decide what category the service should be listed in for a given broker service[16] and what sort of trading partner agreements are required to use the service.
- Service broker, service registry or service repository
Its main functionality is to make the information regarding the web service available to any potential requester. Whoever implements the broker decides its scope. Public brokers are available to anyone, whereas private brokers are only available to a limited audience. UDDI was an early, no longer actively supported attempt to provide Web services discovery.
- Service requester/consumer
It locates entries in the broker registry using various find operations and then binds to the service provider in order to invoke one of its web services. Whatever services the consumers need, they look them up in the broker’s registry, bind to the respective provider, and then use them. They can access multiple services if the provider offers multiple services.
The service consumer–provider relationship is governed by a standardized service contract,[17] which has a business part, a functional part and a technical part.
Service composition patterns have two broad, high-level architectural styles: choreography and orchestration. Lower-level enterprise integration patterns that are not bound to a particular architectural style continue to be relevant and applicable in SOA design.

Implementation approaches
Service-oriented architecture can be implemented with web services.[21] This is done to make the functional building-blocks accessible over standard Internet protocols that are independent of platforms and programming languages. These services can represent either new applications or just wrappers around existing legacy systems to make them network-enabled.[22]
Implementers commonly build SOAs using web services standards. One example is SOAP, which has gained broad industry acceptance after recommendation of Version 1.2 from the W3C[23] (World Wide Web Consortium) in 2003. These standards (also referred to as web service specifications) also provide greater interoperability and some protection from lock-in to proprietary vendor software. One can, however, also implement SOA using any other service-based technology, such as Jini, CORBA or REST.
Architectures can operate independently of specific technologies and can therefore be implemented using a wide range of technologies, including:
- Web services based on WSDL and SOAP
- Messaging, e.g., with ActiveMQ, JMS, RabbitMQ
- RESTful HTTP, with Representational state transfer (REST) constituting its own constraints-based architectural style
- OPC-UA
- WCF (Microsoft’s implementation of Web services, forming a part of .NET)
- Apache Thrift
- gRPC
- SORCER
Implementations can use one or more of these protocols and, for example, might use a file-system mechanism to communicate data following a defined interface specification between processes conforming to the SOA concept. The key is independent services with defined interfaces that can be called to perform their tasks in a standard way, without a service having foreknowledge of the calling application, and without the application having or needing knowledge of how the service actually performs its tasks. SOA enables the development of applications that are built by combining loosely coupled and interoperable services.
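As an illustrative sketch (the endpoint URL and response shape are assumptions), a Java consumer can invoke a RESTful service purely through its HTTP interface, with no knowledge of how, or in what language, the service is implemented:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ServiceConsumer {
        public static void main(String[] args) throws Exception {
            // The consumer knows only the interface (URL + media type), not
            // the platform or language behind it.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/api/orders/42"))
                .header("Accept", "application/json")
                .GET()
                .build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + ": " + response.body());
        }
    }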
These services inter-operate based on a formal definition (or contract, e.g., WSDL) that is independent of the underlying platform and programming language. The interface definition hides the implementation of the language-specific service. SOA-based systems can therefore function independently of development technologies and platforms (such as Java, .NET, etc.). Services written in C# running on .NET platforms and services written in Java running on Java EE platforms, for example, can both be consumed by a common composite application (or client). Applications running on either platform can also consume services running on the other as web services that facilitate reuse. Managed environments can also wrap COBOL legacy systems and present them as software services.[24]
High-level programming languages such as BPEL and specifications such as WS-CDL and WS-Coordination extend the service concept by providing a method of defining and supporting orchestration of fine-grained services into more coarse-grained business services, which architects can in turn incorporate into workflows and business processes implemented in composite applications or portals.
Service-oriented modeling is an SOA framework that identifies the various disciplines that guide SOA practitioners to conceptualize, analyze, design, and architect their service-oriented assets. The Service-oriented modeling framework (SOMF) offers a modeling language and a work structure or “map” depicting the various components that contribute to a successful service-oriented modeling approach. It illustrates the major elements that identify the “what to do” aspects of a service development scheme. The model enables practitioners to craft a project plan and to identify the milestones of a service-oriented initiative. SOMF also provides a common modeling notation to address alignment between business and IT organizations.

Organizational benefits
Some enterprise architects believe that SOA can help businesses respond more quickly and more cost-effectively to changing market conditions.[26] This style of architecture promotes reuse at the macro (service) level rather than micro (classes) level. It can also simplify interconnection to—and usage of—existing IT (legacy) assets.
With SOA, the idea is that an organization can look at a problem holistically. A business has more overall control. Theoretically there would not be a mass of developers using whatever tool sets might please them; rather, they would code to a standard set within the business. They can also develop enterprise-wide SOA that encapsulates a business-oriented infrastructure. SOA has also been illustrated as a highway system providing efficiency for car drivers. The point is that if everyone had a car but there were no highways anywhere, any attempt to get anywhere quickly or efficiently would be limited and disorganized. IBM Vice President of Web Services Michael Liebow says that SOA “builds highways”.[27]
In some respects, SOA could be regarded as an architectural evolution rather than as a revolution. It captures many of the best practices of previous software architectures. In communications systems, for example, little development of solutions that use truly static bindings to talk to other equipment in the network has taken place. By embracing a SOA approach, such systems can position themselves to stress the importance of well-defined, highly inter-operable interfaces. Other predecessors of SOA include Component-based software engineering and Object-Oriented Analysis and Design (OOAD) of remote objects, for instance, in CORBA.
A service comprises a stand-alone unit of functionality available only via a formally defined interface. Services can be some kind of “nano-enterprises” that are easy to produce and improve. Also services can be “mega-corporations” constructed as the coordinated work of subordinate services. A mature rollout of SOA effectively defines the API of an organization.
Reasons for treating the implementation of services as separate projects from larger projects include:
1. Separation promotes the concept to the business that services can be delivered quickly and independently from the larger and slower-moving projects common in the organization. The business starts to understand systems as simplified user interfaces calling on services. This promotes agility; that is, it fosters business innovation and speeds up time-to-market.[28]
2. Separation promotes the decoupling of services from consuming projects. This encourages good design insofar as the service is designed without knowing who its consumers are.
3. Documentation and test artifacts of the service are not embedded within the detail of the larger project. This is important when the service needs to be reused later.

SOA promises to simplify testing indirectly. Services are autonomous, stateless, have fully documented interfaces, and are separate from the cross-cutting concerns of the implementation. If an organization possesses appropriately defined test data, then a corresponding stub can be built that reacts to that test data while a service is being built. A full set of regression tests, scripts, data, and responses is also captured for the service. The service can be tested as a ‘black box’ using existing stubs for the services it calls. Test environments can be constructed where the primitive and out-of-scope services are stubs, while the remainder of the mesh consists of test deployments of full services. Because each interface is fully documented with its own full set of regression test documentation, it becomes simple to identify problems in test services. Testing evolves to merely validating that the test service operates according to its documentation, and finding gaps in documentation and test cases of all services within the environment. Managing the data state of idempotent services is the only complexity.
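A minimal Java sketch of this stub-based approach, with hypothetical service names (InventoryService, PricingService):

    // The service under test depends only on an interface, so a stub reacting
    // to known test data can stand in for the out-of-scope service.
    interface InventoryService {
        int unitsInStock(String sku);
    }

    // Stub for the service being called; it reacts only to the test data.
    class InventoryServiceStub implements InventoryService {
        public int unitsInStock(String sku) {
            return "TEST-SKU".equals(sku) ? 7 : 0;
        }
    }

    // Black-box testing: inject the stub in the test environment without
    // touching the service's own code.
    class PricingService {
        private final InventoryService inventory;

        PricingService(InventoryService inventory) {
            this.inventory = inventory;
        }

        boolean isAvailable(String sku) {
            return inventory.unitsInStock(sku) > 0;
        }
    }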
Examples may prove useful when documenting a service to the level where it becomes usable. The documentation of some APIs within the Java Community Process provides good examples. As these are exhaustive, staff would typically use only important subsets. The ‘ossjsa.pdf’ file within JSR-89 exemplifies such a file.

Criticism
SOA has been conflated with Web services;[30] however, Web services are only one option to implement the patterns that comprise the SOA style. In the absence of native or binary forms of remote procedure call (RPC), applications could run more slowly and require more processing power, increasing costs. Most implementations do incur these overheads, but SOA can be implemented using technologies (for example, Java Business Integration (JBI), Windows Communication Foundation (WCF) and data distribution service (DDS)) that do not depend on remote procedure calls or translation through XML. At the same time, emerging open-source XML parsing technologies (such as VTD-XML) and various XML-compatible binary formats promise to significantly improve SOA performance. Services implemented using JSON instead of XML do not suffer from this performance concern.[31][32][33]
Stateful services require both the consumer and the provider to share the same consumer-specific context, which is either included in or referenced by messages exchanged between the provider and the consumer. This constraint has the drawback that it could reduce the overall scalability of the service provider if the service-provider needs to retain the shared context for each consumer. It also increases the coupling between a service provider and a consumer and makes switching service providers more difficult.[34] Ultimately, some critics feel that SOA services are still too constrained by applications they represent.[35]
A primary challenge faced by service-oriented architecture is managing metadata. Environments based on SOA include many services which communicate among each other to perform tasks. Because the design may involve multiple services working in conjunction, an application may generate millions of messages. Furthermore, services may belong to different organizations or even competing firms, creating a significant trust issue. This is where SOA governance comes into play.[36]
Another major problem faced by SOA is the lack of a uniform testing framework. There are no tools that provide the required features for testing these services in a service-oriented architecture. The major causes of difficulty are:[37]
- Heterogeneity and complexity of the solution.
- Huge set of testing combinations due to the integration of autonomous services.
- Inclusion of services from different and competing vendors.
- The platform is continuously changing due to the availability of new features and services.

Extensions and variants
1. Event-driven architectures
2. Web 2.0
Tim O’Reilly coined the term “Web 2.0” to describe a perceived, quickly growing set of web-based applications.[39] The relationship between Web 2.0 and service-oriented architectures has received extensive coverage.
SOA is the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. The notion of complexity-hiding and reuse, but also the concept of loosely coupling services, has inspired researchers to elaborate on similarities between the two philosophies, SOA and Web 2.0, and their respective applications. Some argue Web 2.0 and SOA have significantly different elements and thus cannot be regarded as “parallel philosophies”, whereas others consider the two concepts complementary and regard Web 2.0 as the global SOA.[40]
The philosophies of Web 2.0 and SOA serve different user needs and thus expose differences with respect to the design and also the technologies used in real-world applications. However, as of 2008, use-cases demonstrated the potential of combining technologies and principles of both Web 2.0 and SOA.
3. Microservices
Microservices are a modern interpretation of service-oriented architectures used to build distributed software systems. Services in a microservice architecture[41] are processes that communicate with each other over the network in order to fulfill a goal. These services use technology agnostic protocols,[42] which aid in encapsulating choice of language and frameworks, making their choice a concern internal to the service. Microservices are a new realisation and implementation approach to SOA, which have become popular since 2014 (and after the introduction of DevOps), and which also emphasize continuous deployment and other agile practices.[43]
There is no single commonly agreed definition of microservices. The following characteristics and principles can be found in the literature:
- fine-grained interfaces (to independently deployable services),
- business-driven development (e.g. domain-driven design),
- IDEAL cloud application architectures,
- polyglot programming and persistence,
- lightweight container deployment,
- decentralized continuous delivery, and
- DevOps with holistic service monitoring.

Link:
https://en.wikipedia.org/wiki/Service-oriented_architecture

10
Q

GoF

  • Abstract Factory
  • Prototype
  • Factory Method
  • Singleton
A

Creational Patterns
Creational design patterns abstract the instantiation process. They help make a system
independent of how its objects are created, composed, and represented. A class cre-
ational pattern uses inheritance to vary the class that’s instantiated, whereas an object
creational pattern will delegate instantiation to another object.
Creational patterns become important as systems evolve to depend more on object
composition than class inheritance. As that happens, emphasis shifts away from hard-
coding a fixed set of behaviors toward defining a smaller set of fundamental behaviors
that can be composed into any number of more complex ones. Thus creating objects
with particular behaviors requires more than simply instantiating a class.
There are two recurring themes in these patterns. First, they all encapsulate knowledge
about which concrete classes the system uses. Second, they hide how instances of these
classes are created and put together. All the system at large knows about the objects is
their interfaces as defined by abstract classes. Consequently, the creational patterns give
you a lot of flexibility in what gets created, who creates it, how it gets created, and when.
They let you configure a system with “product” objects that vary widely in structure
and functionality. Configuration can be static (that is, specified at compile-time) or
dynamic (at run-time).
Sometimes creational patterns are competitors. For example, there are cases when either
Prototype (117) or Abstract Factory (87) could be used profitably. At other times they
are complementary: Builder (97) can use one of the other patterns to implement which
components get built. Prototype (117) can use Singleton (127) in its implementation.
Because the creational patterns are closely related, we’ll study all five of them together
to highlight their similarities and differences. We’ll also use a common example—
building a maze for a computer game—to illustrate their implementations. The maze
and the game will vary slightly from pattern to pattern. Sometimes the game will be
simply to find your way out of a maze; in that case the player will probably only have
a local view of the maze. Sometimes mazes contain problems to solve and dangers to overcome, and these games may provide a map of the part of the maze that has been
explored.
We’ll ignore many details of what can be in a maze and whether a maze game has a
single or multiple players. Instead, we’ll just focus on how mazes get created. We define
a maze as a set of rooms. A room knows its neighbors; possible neighbors are another
room, a wall, or a door to another room.
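The book’s sample code is in C++; the following is a rough, hedged Java rendering of these maze types for reference in the pattern discussions that follow.

    // A maze is a set of rooms; a room knows its four neighbors, each of which
    // is another room, a wall, or a door.
    enum Direction { NORTH, SOUTH, EAST, WEST }

    abstract class MapSite {
        abstract void enter();
    }

    class Room extends MapSite {
        private final MapSite[] sides = new MapSite[4];
        private final int roomNumber;

        Room(int roomNumber) { this.roomNumber = roomNumber; }

        MapSite getSide(Direction d) { return sides[d.ordinal()]; }
        void setSide(Direction d, MapSite site) { sides[d.ordinal()] = site; }

        @Override
        void enter() { /* the player moves into this room */ }
    }

    class Wall extends MapSite {
        @Override
        void enter() { /* the player bumps into the wall */ }
    }

    class Door extends MapSite {
        Door(Room r1, Room r2) { /* connects the two rooms */ }
        @Override
        void enter() { /* pass through if the door is open */ }
    }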

ABSTRACT FACTORY

Intent
Provide an interface for creating families of related or dependent objects without
specifying their concrete classes.

Motivation
Consider a user interface toolkit that supports multiple look-and-feel standards,
such as Motif and Presentation Manager. Different look-and-feels define different
appearances and behaviors for user interface “widgets” like scroll bars, windows,
and buttons. To be portable across look-and-feel standards, an application should
not hard-code its widgets for a particular look and feel. Instantiating look-and-
feel-specific classes of widgets throughout the application makes it hard to change
the look and feel later.
We can solve this problem by defining an abstract WidgetFactory class that de-
clares an interface for creating each basic kind of widget. There’s also an abstract
class for each kind of widget, and concrete subclasses implement widgets for
specific look-and-feel standards. WidgetFactory’s interface has an operation that
returns a new widget object for each abstract widget class. Clients call these oper-
ations to obtain widget instances, but clients aren’t aware of the concrete classes
they’re using. Thus clients stay independent of the prevailing look and feel.
There is a concrete subclass of WidgetFactory for each look-and-feel standard.
Each subclass implements the operations to create the appropriate widget for the
look and feel. For example, the CreateScrollBar operation on the MotifWidgetFac-
tory instantiates and returns a Motif scroll bar, while the corresponding operation
on the PMWidgetFactory returns a scroll bar for Presentation Manager. Clients
create widgets solely through the WidgetFactory interface and have no knowl-
edge of the classes that implement widgets for a particular look and feel. In other
words, clients only have to commit to an interface defined by an abstract class,
not a particular concrete class.
A WidgetFactory also enforces dependencies between the concrete widget classes.
A Motif scroll bar should be used with a Motif button and a Motif text editor, and
that constraint is enforced automatically as a consequence of using a MotifWid-
getFactory.
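A hedged Java rendering of this motivation (the original is C++), reduced to two widget types for brevity:

    interface ScrollBar { }
    interface Window { }

    // Abstract factory: one creation operation per abstract product.
    interface WidgetFactory {
        ScrollBar createScrollBar();
        Window createWindow();
    }

    // One concrete factory per look-and-feel standard.
    class MotifWidgetFactory implements WidgetFactory {
        public ScrollBar createScrollBar() { return new MotifScrollBar(); }
        public Window createWindow() { return new MotifWindow(); }
    }

    class PMWidgetFactory implements WidgetFactory {
        public ScrollBar createScrollBar() { return new PMScrollBar(); }
        public Window createWindow() { return new PMWindow(); }
    }

    class MotifScrollBar implements ScrollBar { }
    class MotifWindow implements Window { }
    class PMScrollBar implements ScrollBar { }
    class PMWindow implements Window { }

A client configured with a MotifWidgetFactory obtains Motif widgets throughout; handing it a PMWidgetFactory instead swaps the whole product family at once.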

Applicability
Use the Abstract Factory pattern when
• a system should be independent of how its products are created, composed,
and represented.
• a system should be configured with one of multiple families of products.
• a family of related product objects is designed to be used together, and you
need to enforce this constraint.
• you want to provide a class library of products, and you want to reveal just
their interfaces, not their implementations.

Participants
• AbstractFactory (WidgetFactory)
- declares an interface for operations that create abstract product objects.
• ConcreteFactory (MotifWidgetFactory, PMWidgetFactory)
- implements the operations to create concrete product objects.
• AbstractProduct (Window, ScrollBar)
- declares an interface for a type of product object.
• ConcreteProduct (MotifWindow, MotifScrollBar)
- defines a product object to be created by the corresponding concrete factory.
- implements the AbstractProduct interface.
• Client
- uses only interfaces declared by AbstractFactory and AbstractProduct classes.

Collaborations
• Normally a single instance of a ConcreteFactory class is created at run-time.
This concrete factory creates product objects having a particular implementa-
tion. To create different product objects, clients should use a different concrete
factory.
• AbstractFactory defers creation of product objects to its ConcreteFactory sub-
class.

Consequences
The Abstract Factory pattern has the following benefits and liabilities:
1. It isolates concrete classes. The Abstract Factory pattern helps you control the
classes of objects that an application creates. Because a factory encapsulates
the responsibility and the process of creating product objects, it isolates clients
from implementation classes. Clients manipulate instances through their
abstract interfaces. Product class names are isolated in the implementation
of the concrete factory; they do not appear in client code.
2. It makes exchanging product families easy. The class of a concrete factory appears
only once in an application—that is, where it’s instantiated. This makes it
easy to change the concrete factory an application uses. It can use different
product configurations simply by changing the concrete factory. Because an
abstract factory creates a complete family of products, the whole product
family changes at once. In our user interface example, we can switch from
Motif widgets to Presentation Manager widgets simply by switching the
corresponding factory objects and recreating the interface.
3. It promotes consistency among products. When product objects in a family are
designed to work together, it’s important that an application use objects from
only one family at a time. AbstractFactory makes this easy to enforce.
4. Supporting new kinds of products is difficult. Extending abstract factories to
produce new kinds of Products isn’t easy. That’s because the AbstractFactory
interface fixes the set of products that can be created. Supporting new kinds of
products requires extending the factory interface, which involves changing
the AbstractFactory class and all of its subclasses. We discuss one solution to
this problem in the Implementation section.

Implementation
Here are some useful techniques for implementing the Abstract Factory pattern.
1. Factories as singletons. An application typically needs only one instance of a
ConcreteFactory per product family. So it’s usually best implemented as a
Singleton (127).
2. Creating the products. AbstractFactory only declares an interface for creating
products. It’s up to ConcreteProduct subclasses to actually create them. The
most common way to do this is to define a factory method (see Factory
Method (107)) for each product. A concrete factory will specify its products
by overriding the factory method for each. While this implementation is
simple, it requires a new concrete factory subclass for each product family,
even if the product families differ only slightly.
If many product families are possible, the concrete factory can be imple-
mented using the Prototype (117) pattern. The concrete factory is initialized
with a prototypical instance of each product in the family, and it creates a new
product by cloning its prototype. The Prototype-based approach eliminates
the need for a new concrete factory class for each new product family.
3. Defining extensible factories. AbstractFactory usually defines a different op-
eration for each kind of product it can produce. The kinds of products are
encoded in the operation signatures. Adding a new kind of product requires
changing the AbstractFactory interface and all the classes that depend on it.
A more flexible but less safe design is to add a parameter to operations that
create objects. This parameter specifies the kind of object to be created. It
could be a class identifier, an integer, a string, or anything else that identifies
the kind of product. In fact with this approach, AbstractFactory only needs
a single “Make” operation with a parameter indicating the kind of object
to create. This is the technique used in the Prototype- and the class-based
abstract factories discussed earlier.
This variation is easier to use in a dynamically typed language like Smalltalk
than in a statically typed language like C++. You can use it in C++ only when
all objects have the same abstract base class or when the product objects can
be safely coerced to the correct type by the client that requested them. The
implementation section of Factory Method (107) shows how to implement
such parameterized operations in C++.
But even when no coercion is needed, an inherent problem remains: All
products are returned to the client with the same abstract interface as given
by the return type. The client will not be able to differentiate or make safe
assumptions about the class of a product. If clients need to perform subclass-
specific operations, they won’t be accessible through the abstract interface.
Although the client could perform a downcast (e.g., with dynamic-cast in
C++), that’s not always feasible or safe, because the downcast can fail. This
is the classic trade-off for a highly flexible and extensible interface.
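A brief Java sketch of the parameterized “Make” variation described above; the kind identifiers are illustrative only. As the text notes, every product comes back through the same abstract interface, so clients lose compile-time knowledge of the concrete class.

    // One parameterized creation operation instead of one per product.
    interface Widget { }

    class ExtensibleWidgetFactory {
        Widget make(String kind) {
            switch (kind) {
                case "scrollbar": return new BasicScrollBar();
                case "window":    return new BasicWindow();
                default: throw new IllegalArgumentException("unknown kind: " + kind);
            }
        }
    }

    class BasicScrollBar implements Widget { }
    class BasicWindow implements Widget { }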

PROTOTYPE
Intent
Specify the kinds of objects to create using a prototypical instance, and create new
objects by copying this prototype.

Motivation
You could build an editor for music scores by customizing a general framework
for graphical editors and adding new objects that represent notes, rests, and
staves. The editor framework may have a palette of tools for adding these music
objects to the score. The palette would also include tools for selecting, moving,
and otherwise manipulating music objects. Users will click on the quarter-note
tool and use it to add quarter notes to the score. Or they can use the move tool to
move a note up or down on the staff, thereby changing its pitch.
Let’s assume the framework provides an abstract Graphic class for graphical com-
ponents, like notes and staves. Moreover, it’ll provide an abstract Tool class for
defining tools like those in the palette. The framework also predefines a Graphic-
Tool subclass for tools that create instances of graphical objects and add them to
the document.
But GraphicTool presents a problem to the framework designer. The classes for
notes and staves are specific to our application, but the GraphicTool class belongs
to the framework. GraphicTool doesn’t know how to create instances of our music
classes to add to the score. We could subclass GraphicTool for each kind of music
object, but that would produce lots of subclasses that differ only in the kind of
music object they instantiate. We know object composition is a flexible alternative
to subclassing. The question is, how can the framework use it to parameterize
instances of GraphicTool by the class of Graphic they’re supposed to create?
The solution lies in making GraphicTool create a new Graphic by copying or
“cloning” an instance of a Graphic subclass. We call this instance a prototype.
GraphicTool is parameterized by the prototype it should clone and add to the
document. If all Graphic subclasses support a Clone operation, then the Graphic-
Tool can clone any kind of Graphic.
So in our music editor, each tool for creating a music object is an instance of
GraphicTool that’s initialized with a different prototype. Each GraphicTool in-
stance will produce a music object by cloning its prototype and adding the clone
to the score.
We can use the Prototype pattern to reduce the number of classes even further.
We have separate classes for whole notes and half notes, but that’s probably
unnecessary. Instead they could be instances of the same class initialized with
different bitmaps and durations. A tool for creating whole notes becomes just a
GraphicTool whose prototype is a MusicalNote initialized to be a whole note. This
can reduce the number of classes in the system dramatically. It also makes it easier
to add a new kind of note to the music editor.
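A hedged Java sketch of this motivation (the book’s code is C++); Java’s Cloneable mechanism stands in for the Clone operation:

    abstract class Graphic implements Cloneable {
        @Override
        public Graphic clone() {
            try {
                return (Graphic) super.clone(); // shallow copy; override for deep copy
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);
            }
        }
        abstract void draw();
    }

    class MusicalNote extends Graphic {
        private final String duration; // e.g. "whole", "half"
        MusicalNote(String duration) { this.duration = duration; }
        @Override
        void draw() { System.out.println("drawing a " + duration + " note"); }
    }

    // One GraphicTool class serves every kind of Graphic: each tool instance
    // produces objects by cloning the prototype it was configured with.
    class GraphicTool {
        private final Graphic prototype;
        GraphicTool(Graphic prototype) { this.prototype = prototype; }
        Graphic createGraphic() { return prototype.clone(); }
    }

A tool for whole notes is then just new GraphicTool(new MusicalNote("whole")).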

Applicability
Use the Prototype pattern when a system should be independent of how its
products are created, composed, and represented; and
• when the classes to instantiate are specified at run-time, for example, by
dynamic loading; or
• to avoid building a class hierarchy of factories that parallels the class hierar-
chy of products; or
• when instances of a class can have one of only a few different combinations
of state. It may be more convenient to install a corresponding number of
prototypes and clone them rather than instantiating the class manually, each
time with the appropriate state.

Participants
• Prototype (Graphic)
- declares an interface for cloning itself.
• ConcretePrototype (Staff, WholeNote, HalfNote)
- implements an operation for cloning itself.
• Client (GraphicTool)
- creates a new object by asking a prototype to clone itself.

Collaborations
• A client asks a prototype to clone itself.

Consequences
Prototype has many of the same consequences that Abstract Factory (87) and
Builder (97) have: It hides the concrete product classes from the client, thereby
reducing the number of names clients know about. Moreover, these patterns let a
client work with application-specific classes without modification.
Additional benefits of the Prototype pattern are listed below.
1. Adding and removing products at run-time. Prototypes let you incorporate a
new concrete product class into a system simply by registering a prototyp-
ical instance with the client. That's a bit more flexible than other creational
patterns, because a client can install and remove prototypes at run-time.
2. Specifying new objects by varying values. Highly dynamic systems let you de-
fine new behavior through object composition—by specifying values for an object's variables, for example—and not by defining new classes. You ef-
fectively define new kinds of objects by instantiating existing classes and
registering the instances as prototypes of client objects. A client can exhibit
new behavior by delegating responsibility to the prototype.
This kind of design lets users define new "classes" without programming.
In fact, cloning a prototype is similar to instantiating a class. The Prototype
pattern can greatly reduce the number of classes a system needs. In our music
editor, one GraphicTool class can create a limitless variety of music objects.
3. Specifying new objects by varying structure. Many applications build objects
from parts and subparts. Editors for circuit design, for example, build cir-
cuits out of subcircuits. For convenience, such applications often let you
instantiate complex, user-defined structures, say, to use a specific subcircuit
again and again.
The Prototype pattern supports this as well. We simply add this subcircuit as
a prototype to the palette of available circuit elements. As long as the com-
posite circuit object implements Clone as a deep copy, circuits with different
structures can be prototypes.
4. Reduced subclassing. Factory Method (107) often produces a hierarchy of Cre-
ator classes that parallels the product class hierarchy. The Prototype pattern
lets you clone a prototype instead of asking a factory method to make a new
object. Hence you don't need a Creator class hierarchy at all. This benefit
applies primarily to languages like C++ that don't treat classes as first-class
objects. Languages that do, like Smalltalk and Objective C, derive less bene-
fit, since you can always use a class object as a creator. Class objects already
act like prototypes in these languages.
5. Configuring an application with classes dynamically. Some run-time environ-
ments let you load classes into an application dynamically. The Prototype
pattern is the key to exploiting such facilities in a language like C++.
An application that wants to create instances of a dynamically loaded class
won't be able to reference its constructor statically. Instead, the run-time envi-
ronment creates an instance of each class automatically when it's loaded, and
it registers the instance with a prototype manager (see the Implementation
section). Then the application can ask the prototype manager for instances of
newly loaded classes, classes that weren't linked with the program originally.
The ET++ application framework [WGM88] has a run-time system that uses
this scheme.
The main liability of the Prototype pattern is that each subclass of Prototype must
implement the Clone operation, which may be difficult. For example, adding
Clone is difficult when the classes under consideration already exist. Implement-
ing Clone can be difficult when their internals include objects that don't support
copying or have circular references.

Implementation
Prototype is particularly useful with static languages like C++, where classes are
not objects, and little or no type information is available at run-time. It’s less
important in languages like Smalltalk or Objective C that provide what amounts
to a prototype (i.e., a class object) for creating instances of each class. This pattern is
built into prototype-based languages like Self [US87], in which all object creation
happens by cloning a prototype.
Consider the following issues when implementing prototypes:
1. Using a prototype manager. When the number of prototypes in a system isn’t
fixed (that is, they can be created and destroyed dynamically), keep a registry
of available prototypes. Clients won’t manage prototypes themselves but will
store and retrieve them from the registry. A client will ask the registry for a
prototype before cloning it. We call this registry a prototype manager (a sketch follows this list).
A prototype manager is an associative store that returns the prototype match-
ing a given key. It has operations for registering a prototype under a key and
for unregistering it. Clients can change or even browse through the registry
at run-time. This lets clients extend and take inventory on the system without
writing code.
2. Implementing the Clone operation. The hardest part of the Prototype pattern
is implementing the Clone operation correctly. It’s particularly tricky when
object structures contain circular references.
Most languages provide some support for cloning objects. For example,
Smalltalk provides an implementation of copy that’s inherited by all sub-
classes of Object. C++ provides a copy constructor. But these facilities don’t
solve the “shallow copy versus deep copy” problem [GR83]. That is, does
cloning an object in turn clone its instance variables, or do the clone and
original just share the variables?
A shallow copy is simple and often sufficient, and that’s what Smalltalk
provides by default. The default copy constructor in C++ does a member-
wise copy, which means pointers will be shared between the copy and the
original. But cloning prototypes with complex structures usually requires a
deep copy, because the clone and the original must be independent. Therefore
you must ensure that the clone’s components are clones of the prototype’s
components. Cloning forces you to decide what if anything will be shared.
If objects in the system provide Save and Load operations, then you can use
them to provide a default implementation of Clone simply by saving the
object and loading it back immediately. The Save operation saves the object
into a memory buffer, and Load creates a duplicate by reconstructing the
object from the buffer.
3. Initializing clones. While some clients are perfectly happy with the clone as
is, others will want to initialize some or all of its internal state to values of their choosing. You generally can’t pass these values in the Clone oper-
ation, because their number will vary between classes of prototypes. Some
prototypes might need multiple initialization parameters; others won’t need
any. Passing parameters in the Clone operation precludes a uniform cloning
interface.
It might be the case that your prototype classes already define operations for
(re)setting key pieces of state. If so, clients may use these operations immedi-
ately after cloning. If not, then you may have to introduce an Initialize
operation (see the Sample Code section) that takes initialization parame-
ters as arguments and sets the clone’s internal state accordingly. Beware of
deep-copying Clone operations—the copies may have to be deleted (either
explicitly or within Initialize) before you reinitialize them.
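A small Java sketch of the prototype manager from issue 1, reusing the Graphic type from the earlier sketch:

    import java.util.HashMap;
    import java.util.Map;

    // An associative store: prototypes are registered under keys, and clients
    // obtain new objects by asking for a clone of the prototype for a key.
    class PrototypeManager {
        private final Map<String, Graphic> registry = new HashMap<>();

        void register(String key, Graphic prototype) { registry.put(key, prototype); }
        void unregister(String key) { registry.remove(key); }

        Graphic create(String key) {
            Graphic prototype = registry.get(key);
            if (prototype == null) {
                throw new IllegalArgumentException("no prototype for: " + key);
            }
            return prototype.clone();
        }
    }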

FACTORY METHOD
Intent
Define an interface for creating an object, but let subclasses decide which class to
instantiate. Factory Method lets a class defer instantiation to subclasses.

Motivation
Frameworks use abstract classes to define and maintain relationships between
objects. A framework is often responsible for creating these objects as well.
Consider a framework for applications that can present multiple documents to
the user. Two key abstractions in this framework are the classes Application and
Document. Both classes are abstract, and clients have to subclass them to realize
their application-specific implementations. To create a drawing application, for
example, we define the classes DrawingApplication and DrawingDocument. The
Application class is responsible for managing Documents and will create them as
required—when the user selects Open or New from a menu, for example.
Because the particular Document subclass to instantiate is application-specific, the
Application class can’t predict the subclass of Document to instantiate—the Ap-
plication class only knows when a new document should be created, not what kind
of Document to create. This creates a dilemma: The framework must instantiate
classes, but it only knows about abstract classes, which it cannot instantiate.
The Factory Method pattern offers a solution. It encapsulates the knowledge
of which Document subclass to create and moves this knowledge out of the
framework.
Application subclasses redefine an abstract CreateDocument operation on Appli-
cation to return the appropriate Document subclass. Once an Application sub-
class is instantiated, it can then instantiate application-specific Documents with-
out knowing their class. We call CreateDocument a factory method because it’s
responsible for “manufacturing” an object.
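A hedged Java rendering of this motivation (the original discussion assumes C++):

    abstract class Document {
        abstract void open();
    }

    abstract class Application {
        // The factory method: "manufactures" a Document without naming its class.
        protected abstract Document createDocument();

        Document newDocument() {
            Document doc = createDocument(); // defer the choice to the subclass
            doc.open();
            return doc;
        }
    }

    class DrawingDocument extends Document {
        @Override
        void open() { System.out.println("opening a drawing"); }
    }

    class DrawingApplication extends Application {
        @Override
        protected Document createDocument() { return new DrawingDocument(); }
    }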

Applicability
Use the Factory Method pattern when
• a class can’t anticipate the class of objects it must create.
• a class wants its subclasses to specify the objects it creates.
• classes delegate responsibility to one of several helper subclasses, and you
want to localize the knowledge of which helper subclass is the delegate.

Participants
• Product (Document)
- defines the interface of objects the factory method creates.
• ConcreteProduct (MyDocument)
- implements the Product interface.
• Creator (Application)
- declares the factory method, which returns an object of type Product. Creator may also define a default implementation of the factory method that
returns a default ConcreteProduct object.
- may call the factory method to create a Product object.
• ConcreteCreator (MyApplication)
- overrides the factory method to return an instance of a ConcreteProduct.

Collaborations
• Creator relies on its subclasses to define the factory method so that it returns
an instance of the appropriate ConcreteProduct.

Consequences
Factory methods eliminate the need to bind application-specific classes into your
code. The code only deals with the Product interface; therefore it can work with
any user-defined ConcreteProduct classes.
A potential disadvantage of factory methods is that clients might have to subclass
the Creator class just to create a particular ConcreteProduct object. Subclassing is
fine when the client has to subclass the Creator class anyway, but otherwise the
client now must deal with another point of evolution.
Here are two additional consequences of the Factory Method pattern:
1. Provides hooks for subclasses. Creating objects inside a class with a factory
method is always more flexible than creating an object directly. Factory
Method gives subclasses a hook for providing an extended version of an
object.
In the Document example, the Document class could define a factory method
called CreateFileDialog that creates a default file dialog object for opening an
existing document. A Document subclass can define an application-specific
file dialog by overriding this factory method. In this case the factory method
is not abstract but provides a reasonable default implementation.
2. Connects parallel class hierarchies. In the examples we've considered so far, the
factory method is only called by Creators. But this doesn't have to be the
case; clients can find factory methods useful, especially in the case of parallel
class hierarchies.
Parallel class hierarchies result when a class delegates some of its responsibil-
ities to a separate class. Consider graphical figures that can be manipulated
interactively; that is, they can be stretched, moved, or rotated using the
mouse. Implementing such interactions isn't always easy. It often requires
storing and updating information that records the state of the manipulation
at a given time. This state is needed only during manipulation; therefore
it needn't be kept in the figure object. Moreover, different figures behave
differently when the user manipulates them. For example, stretching a line
figure might have the effect of moving an endpoint, whereas stretching a text
figure may change its line spacing.
With these constraints, it's better to use a separate Manipulator object that
implements the interaction and keeps track of any manipulation-specific state that's needed. Different figures will use different Manipulator subclasses to
handle particular interactions.
The Figure class provides a CreateManipulator factory method that lets
clients create a Figure's corresponding Manipulator. Figure subclasses over-
ride this method to return an instance of the Manipulator subclass that's right
for them. Alternatively, the Figure class may implement CreateManipulator
to return a default Manipulator instance, and Figure subclasses may simply
inherit that default. The Figure classes that do so need no corresponding
Manipulator subclass—hence the hierarchies are only partially parallel.
Notice how the factory method defines the connection between the two class
hierarchies. It localizes knowledge of which classes belong together.

Implementation
Consider the following issues when applying the Factory Method pattern:
1. Two major varieties. The two main variations of the Factory Method pattern are
(1) the case when the Creator class is an abstract class and does not provide
an implementation for the factory method it declares, and (2) the case when
the Creator is a concrete class and provides a default implementation for
the factory method. It’s also possible to have an abstract class that defines a
default implementation, but this is less common.
The first case requires subclasses to define an implementation, because there’s
no reasonable default. It gets around the dilemma of having to instantiate
unforeseeable classes. In the second case, the concrete Creator uses the fac-
tory method primarily for flexibility. It’s following a rule that says, “Create
objects in a separate operation so that subclasses can override the way they’re
created.” This rule ensures that designers of subclasses can change the class
of objects their parent class instantiates if necessary.
2. Parameterized factory methods. Another variation on the pattern lets the fac-
tory method create multiple kinds of products. The factory method takes a parameter that identifies the kind of object to create. All objects the factory
method creates will share the Product interface. In the Document example,
Application might support different kinds of Documents. You pass Create-
Document an extra parameter to specify the kind of document to create.
The Unidraw graphical editing framework [VL90] uses this approach for
reconstructing objects saved on disk. Unidraw defines a Creator class with a
factory method Create that takes a class identifier as an argument. The class
identifier specifies the class to instantiate. When Unidraw saves an object to
disk, it writes out the class identifier first and then its instance variables.
When it reconstructs the object from disk, it reads the class identifier first.
Once the class identifier is read, the framework calls Create, passing the
identifier as the parameter. Create looks up the constructor for the corre-
sponding class and uses it to instantiate the object. Last, Create calls the
object’s Read operation, which reads the remaining information on the disk
and initializes the object’s instance variables.
3. Language-specific variants and issues. Different languages lend themselves to
other interesting variations and caveats.
Smalltalk programs often use a method that returns the class of the object
to be instantiated. A Creator factory method can use this value to create
a product, and a ConcreteCreator may store or even compute this value.
The result is an even later binding for the type of ConcreteProduct to be
instantiated.
An even more flexible approach akin to parameterized factory methods is to
store the class to be created as a class variable of Application. That way
you don’t have to subclass Application to vary the product.
Factory methods in C++ are always virtual functions and are often pure vir-
tual. Just be careful not to call factory methods in the Creator’s constructor—
the factory method in the ConcreteCreator won’t be available yet.
You can avoid this by being careful to access products solely through acces-
sor operations that create the product on demand. Instead of creating the
concrete product in the constructor, the constructor merely initializes it to 0.
The accessor returns the product. But first it checks to make sure the product
exists, and if it doesn’t, the accessor creates it. This technique is sometimes
called lazy initialization (a sketch follows this list).
4. Using templates to avoid subclassing. As we’ve mentioned, another potential
problem with factory methods is that they might force you to subclass just
to create the appropriate Product objects. Another way to get around this in
C++ is to provide a template subclass of Creator that’s parameterized by the
Product class.
5. Naming conventions. It’s good practice to use naming conventions that make
it clear you’re using factory methods. For example, the MacApp Macintosh
application framework [App89] always declares the abstract operation that
defines the factory method as Class* DoMakeClass(), where Class is
the Product class.
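A brief Java sketch of the lazy-initialization technique from issue 3 (the book discusses it in C++ terms):

    interface Product { }

    // The product is not created in the constructor; the accessor creates it
    // on first demand, so subclass factory methods are never called too early.
    abstract class Creator {
        private Product product; // deliberately left null until first access

        Product getProduct() {
            if (product == null) {
                product = createProduct(); // factory method invoked lazily
            }
            return product;
        }

        protected abstract Product createProduct();
    }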

SINGLETON
Intent
Ensure a class only has one instance, and provide a global point of access to it.

Motivation
It’s important for some classes to have exactly one instance. Although there can be
many printers in a system, there should be only one printer spooler. There should
be only one file system and one window manager. A digital filter will have one
A/D converter. An accounting system will be dedicated to serving one company.
How do we ensure that a class has only one instance and that the instance is easily
accessible? A global variable makes an object accessible, but it doesn’t keep you
from instantiating multiple objects.
A better solution is to make the class itself responsible for keeping track of its sole
instance. The class can ensure that no other instance can be created (by intercepting
requests to create new objects), and it can provide a way to access the instance.
This is the Singleton pattern.

Applicability
Use the Singleton pattern when
• there must be exactly one instance of a class, and it must be accessible to
clients from a well-known access point.
• when the sole instance should be extensible by subclassing, and clients
should be able to use an extended instance without modifying their code.

Participants
• Singleton
- defines an Instance operation that lets clients access its unique instance.
Instance is a class operation (that is, a class method in Smalltalk and a static
member function in C++).
- may be responsible for creating its own unique instance.

Collaborations
• Clients access a Singleton instance solely through Singleton’s Instance opera-
tion.

Consequences
The Singleton pattern has several benefits:
1. Controlled access to sole instance. Because the Singleton class encapsulates its
sole instance, it can have strict control over how and when clients access it.
2. Reduced name space. The Singleton pattern is an improvement over global
variables. It avoids polluting the name space with global variables that store
sole instances.
3. Permits refinement of operations and representation. The Singleton class may be
subclassed, and it’s easy to configure an application with an instance of this
extended class. You can configure the application with an instance of the
class you need at run-time.
4. Permits a variable number of instances. The pattern makes it easy to change your
mind and allow more than one instance of the Singleton class. Moreover,
you can use the same approach to control the number of instances that
the application uses. Only the operation that grants access to the Singleton
instance needs to change.
5. More flexible than class operations. Another way to package a singleton’s func-
tionality is to use class operations (that is, static member functions in C++ or
class methods in Smalltalk). But both of these language techniques make it
hard to change a design to allow more than one instance of a class. Moreover,
static member functions in C++ are never virtual, so subclasses can’t override
them polymorphically.

Implementation
Here are implementation issues to consider when using the Singleton pattern:
1. Ensuring a unique instance. The Singleton pattern makes the sole instance a
normal instance of a class, but that class is written so that only one instance can ever be created. A common way to do this is to hide the operation that
creates the instance behind a class operation (that is, either a static member
function or a class method) that guarantees only one instance is created. This
operation has access to the variable that holds the unique instance, and it
ensures the variable is initialized with the unique instance before returning
its value. This approach ensures that a singleton is created and initialized
before its first use (a sketch follows this list).
2. Subclassing the Singleton class. The main issue is not so much defining the
subclass but installing its unique instance so that clients will be able to use
it. In essence, the variable that refers to the singleton instance must get
initialized with an instance of the subclass. The simplest technique is to
determine which singleton you want to use in the Singleton’s Instance
operation. An example in the Sample Code shows how to implement this
technique with environment variables.
Another way to choose the subclass of Singleton is to take the implementation
of Instance out of the parent class (e.g., MazeFactory) and put it in the
subclass. That lets a C++ programmer decide the class of singleton at link-
time (e.g., by linking in an object file containing a different implementation)
but keeps it hidden from the clients of the singleton.
The link approach fixes the choice of singleton class at link-time, which
makes it hard to choose the singleton class at run-time. Using conditional
statements to determine the subclass is more flexible, but it hard-wires the
set of possible Singleton classes. Neither approach is flexible enough in all
cases.
A more flexible approach uses a registry of singletons. Instead of having
Instance define the set of possible Singleton classes, the Singleton classes
can register their singleton instance by name in a well-known registry.
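A minimal Java sketch of issue 1’s class-operation technique, using the MazeFactory name from the text (the synchronized keyword is an addition for thread safety, which the original discussion does not cover):

    class MazeFactory {
        private static MazeFactory instance;

        private MazeFactory() { } // clients cannot instantiate directly

        // The class operation that guards creation: it initializes the unique
        // instance on first use and returns it thereafter.
        static synchronized MazeFactory instance() {
            if (instance == null) {
                instance = new MazeFactory();
            }
            return instance;
        }
    }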

11
Q

GoF

  • Adapter
  • Composite
  • Decorator
A

Structural Patterns
Structural patterns are concerned with how classes and objects are composed to form
larger structures. Structural class patterns use inheritance to compose interfaces or im-
plementations. As a simple example, consider how multiple inheritance mixes two or
more classes into one. The result is a class that combines the properties of its parent
classes. This pattern is particularly useful for making independently developed class
libraries work together. Another example is the class form of the Adapter (139) pat-
tern. In general, an adapter makes one interface (the adaptee’s) conform to another,
thereby providing a uniform abstraction of different interfaces. A class adapter accom-
plishes this by inheriting privately from an adaptee class. The adapter then expresses
its interface in terms of the adaptee’s.
Rather than composing interfaces or implementations, structural object patterns de-
scribe ways to compose objects to realize new functionality. The added flexibility of
object composition comes from the ability to change the composition at run-time, which
is impossible with static class composition.
Composite (163) is an example of a structural object pattern. It describes how to build
a class hierarchy made up of classes for two kinds of objects: primitive and composite.
The composite objects let you compose primitive and other composite objects into
arbitrarily complex structures. In the Proxy (207) pattern, a proxy acts as a convenient
surrogate or placeholder for another object. A proxy can be used in many ways. It can
act as a local representative for an object in a remote address space. It can represent
a large object that should be loaded on demand. It might protect access to a sensitive
object. Proxies provide a level of indirection to specific properties of objects. Hence they
can restrict, enhance, or alter these properties.
The Flyweight (195) pattern defines a structure for sharing objects. Objects are shared
for at least two reasons: efficiency and consistency. Flyweight focuses on sharing for
space efficiency. Applications that use lots of objects must pay careful attention to
the cost of each object. Substantial savings can be had by sharing objects instead of
replicating them. But objects can be shared only if they don’t define context-dependent state. Flyweight objects have no such state. Any additional information they need to
perform their task is passed to them when needed. With no context-dependent state,
Flyweight objects may be shared freely.
Whereas Flyweight shows how to make lots of little objects, Facade (185) shows how
to make a single object represent an entire subsystem. A facade is a representative for a
set of objects. The facade carries out its responsibilities by forwarding messages to the
objects it represents. The Bridge (151) pattern separates an object’s abstraction from its
implementation so that you can vary them independently.
Decorator (175) describes how to add responsibilities to objects dynamically. Decorator
is a structural pattern that composes objects recursively to allow an open-ended number
of additional responsibilities. For example, a Decorator object containing a user interface
component can add a decoration like a border or shadow to the component, or it can
add functionality like scrolling and zooming. We can add two decorations simply by
nesting one Decorator object within another, and so on for additional decorations. To
accomplish this, each Decorator object must conform to the interface of its component
and must forward messages to it. The Decorator can do its job (such as drawing a
border around the component) either before or after forwarding a message.
Many structural patterns are related to some degree. We’ll discuss these relationships
at the end of the chapter.
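As a preview of the Decorator description above, a hedged Java sketch (the component names follow the book’s Decorator example):

    interface VisualComponent {
        void draw();
    }

    class TextField implements VisualComponent {
        public void draw() { System.out.println("draw text"); }
    }

    // Each decorator conforms to the component interface and forwards to the
    // component it wraps.
    abstract class Decorator implements VisualComponent {
        private final VisualComponent component;
        Decorator(VisualComponent component) { this.component = component; }
        public void draw() { component.draw(); }
    }

    class BorderDecorator extends Decorator {
        BorderDecorator(VisualComponent c) { super(c); }
        @Override
        public void draw() {
            super.draw();                       // let the component draw itself
            System.out.println("draw border");  // then add the decoration
        }
    }

new BorderDecorator(new TextField()) adds a border; nesting another decorator around it adds a second decoration, and so on.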

ADAPTER
Intent
Convert the interface of a class into another interface clients expect. Adapter lets
classes work together that couldn’t otherwise because of incompatible interfaces.

Also Known As
Wrapper

Motivation
Sometimes a toolkit class that’s designed for reuse isn’t reusable only because its
interface doesn’t match the domain-specific interface an application requires.
Consider for example a drawing editor that lets users draw and arrange graphical
elements (lines, polygons, text, etc.) into pictures and diagrams. The drawing
editor’s key abstraction is the graphical object, which has an editable shape and
can draw itself. The interface for graphical objects is defined by an abstract class
called Shape. The editor defines a subclass of Shape for each kind of graphical
object: a LineShape class for lines, a PolygonShape class for polygons, and so
forth.
Classes for elementary geometric shapes like LineShape and PolygonShape are
rather easy to implement, because their drawing and editing capabilities are
inherently limited. But a TextShape subclass that can display and edit text is
considerably more difficult to implement, since even basic text editing involves
complicated screen update and buffer management. Meanwhile, an off-the-shelf
user interface toolkit might already provide a sophisticated TextView class for
displaying and editing text. Ideally we’d like to reuse TextView to implement
TextShape, but the toolkit wasn’t designed with Shape classes in mind. So we
can’t use TextView and Shape objects interchangeably.
How can existing and unrelated classes like TextView work in an application that
expects classes with a different and incompatible interface? We could change the
TextView class so that it conforms to the Shape interface, but that isn’t an option
unless we have the toolkit’s source code. Even if we did, it wouldn’t make sense to
change TextView; the toolkit shouldn’t have to adopt domain-specific interfaces
just to make one application work.
Instead, we could define TextShape so that it adapts the TextView interface to
Shape’s. We can do this in one of two ways: (1) by inheriting Shape’s interface
and TextView’s implementation or (2) by composing a TextView instance within
a TextShape and implementing TextShape in terms of TextView’s interface. These
two approaches correspond to the class and object versions of the Adapter pattern.
We call TextShape an adapter.
Often the adapter is responsible for functionality the adapted class doesn’t pro-
vide. The diagram shows how an adapter can fulfill such responsibilities. The
user should be able to “drag” every Shape object to a new location interactively,
but TextView isn’t designed to do that. TextShape can add this missing function-
ality by implementing Shape’s CreateManipulator operation, which returns an
instance of the appropriate Manipulator subclass.
Manipulator is an abstract class for objects that know how to animate a Shape in
response to user input, like dragging the shape to a new location. There are sub-
classes of Manipulator for different shapes; TextManipulator, for example, is the
corresponding subclass for TextShape. By returning a TextManipulator instance,
TextShape adds the functionality that TextView lacks but Shape requires.

Applicability
Use the Adapter pattern when
• you want to use an existing class, and its interface does not match the one
you need.
• you want to create a reusable class that cooperates with unrelated or unfore-
seen classes, that is, classes that don’t necessarily have compatible interfaces.
• (object adapter only) you need to use several existing subclasses, but it’s
impractical to adapt their interface by subclassing every one. An object adapter
can adapt the interface of its parent class.

Participants
• Target (Shape)
- defines the domain-specific interface that Client uses.
• Client (DrawingEditor)
- collaborates with objects conforming to the Target interface.
• Adaptee (TextView)
- defines an existing interface that needs adapting.
• Adapter (TextShape)
- adapts the interface of Adaptee to the Target interface.

Collaborations
• Clients call operations on an Adapter instance. In turn, the adapter calls
Adaptee operations that carry out the request.

Consequences
Class and object adapters have different trade-offs. A class adapter
• adapts Adaptee to Target by committing to a concrete Adaptee class. As a
consequence, a class adapter won’t work when we want to adapt a class and
all its subclasses.
• lets Adapter override some of Adaptee’s behavior, since Adapter is a subclass
of Adaptee.
• introduces only one object, and no additional pointer indirection is needed
to get to the adaptee.
An object adapter
• lets a single Adapter work with many Adaptees—that is, the Adaptee itself
and all of its subclasses (if any). The Adapter can also add functionality to
all Adaptees at once.
• makes it harder to override Adaptee behavior. It will require subclassing
Adaptee and making Adapter refer to the subclass rather than the Adaptee
itself.
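To make these trade-offs concrete, here is a minimal object adapter in C++. It reuses the names from the Motivation; the operations shown (BoundingBox, GetOrigin, GetExtent) and their bodies are illustrative assumptions, not the book’s sample code:

    typedef float Coord;
    struct Point { Coord x, y; };

    class Shape {                      // Target
    public:
        virtual ~Shape() {}
        virtual void BoundingBox(Point& lower, Point& upper) const = 0;
    };

    class TextView {                   // Adaptee (assumed operations)
    public:
        void GetOrigin(Coord& x, Coord& y) const { x = 0; y = 0; }
        void GetExtent(Coord& w, Coord& h) const { w = 100; h = 20; }
    };

    class TextShape : public Shape {   // Adapter (object version)
    public:
        explicit TextShape(TextView* view) : _view(view) {}
        void BoundingBox(Point& lower, Point& upper) const override {
            Coord x, y, w, h;
            _view->GetOrigin(x, y);    // forward requests to the adaptee...
            _view->GetExtent(w, h);
            lower = Point{x, y};       // ...and express them in Shape's terms
            upper = Point{x + w, y + h};
        }
    private:
        TextView* _view;               // works with TextView or any subclass
    };

Because the adapter holds a pointer, any TextView subclass can be plugged in without changing TextShape.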

Here are other issues to consider when using the Adapter pattern:
1. How much adapting does Adapter do? Adapters vary in the amount of work they
do to adapt Adaptee to the Target interface. There is a spectrum of possible
work, from simple interface conversion—for example, changing the names of
operations—to supporting an entirely different set of operations. The amount
of work Adapter does depends on how similar the Target interface is to
Adaptee’s.
2. Pluggable adapters. A class is more reusable when you minimize the assump-
tions other classes must make to use it. By building interface adaptation into
a class, you eliminate the assumption that other classes see the same inter-
face. Put another way, interface adaptation lets us incorporate our class into
existing systems that might expect different interfaces to the class. Object-
Works\Smalltalk [Par90] uses the term pluggable adapter to describe classes
with built-in interface adaptation.
Consider a TreeDisplay widget that can display tree structures graphically.
If this were a special-purpose widget for use in just one application, then
we might require the objects that it displays to have a specific interface; that
is, all must descend from a Tree abstract class. But if we wanted to make
TreeDisplay more reusable (say we wanted to make it part of a toolkit of
useful widgets), then that requirement would be unreasonable. Applications
will define their own classes for tree structures. They shouldn’t be forced
to use our Tree abstract class. Different tree structures will have different
interfaces.
In a directory hierarchy, for example, children might be accessed with a
GetSubdirectories operation, whereas in an inheritance hierarchy, the corre-
sponding operation might be called GetSubclasses. A reusable TreeDisplay
widget must be able to display both kinds of hierarchies even if they use
different interfaces. In other words, the TreeDisplay should have interface
adaptation built into it.
We’ll look at different ways to build interface adaptation into classes in the
Implementation section.
3. Using two-way adapters to provide transparency. A potential problem with
adapters is that they aren’t transparent to all clients. An adapted object no
longer conforms to the Adaptee interface, so it can’t be used as is wherever
an Adaptee object can. Two-way adapters can provide such transparency.
Specifically, they’re useful when two different clients need to view an object
differently.
Consider the two-way adapter that integrates Unidraw, a graphical editor
framework [VL90], and QOCA, a constraint-solving toolkit [HHMV92].
Both systems have classes that represent variables explicitly: Unidraw has
StateVariable, and QOCA has ConstraintVariable. To make Unidraw work
with QOCA, ConstraintVariable must be adapted to StateVariable; to let
QOCA propagate solutions to Unidraw, StateVariable must be adapted to
ConstraintVariable.
The solution involves a two-way class adapter ConstraintStateVariable, a
subclass of both StateVariable and ConstraintVariable, that adapts the two
interfaces to each other. Multiple inheritance is a viable solution in this case
because the interfaces of the adapted classes are substantially different. The
two-way class adapter conforms to both of the adapted classes and can work
in either system.

Implementation
Although the implementation of Adapter is usually straightforward, here are
some issues to keep in mind:
1. Implementing class adapters in C++. In a C++ implementation of a class adapter,
Adapter would inherit publicly from Target and privately from Adaptee.
Thus Adapter would be a subtype of Target but not of Adaptee.
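A rough sketch of that arrangement, again with assumed operation names from the Motivation:

    typedef float Coord;
    struct Point { Coord x, y; };

    class Shape {                      // Target
    public:
        virtual ~Shape() {}
        virtual void BoundingBox(Point& lower, Point& upper) const = 0;
    };

    class TextView {                   // Adaptee (assumed operations)
    public:
        void GetOrigin(Coord& x, Coord& y) const { x = 0; y = 0; }
        void GetExtent(Coord& w, Coord& h) const { w = 100; h = 20; }
    };

    // Public inheritance supplies the Target interface; private inheritance
    // supplies the Adaptee's implementation without exposing its interface.
    class TextShape : public Shape, private TextView {
    public:
        void BoundingBox(Point& lower, Point& upper) const override {
            Coord x, y, w, h;
            GetOrigin(x, y);           // inherited (privately) from TextView
            GetExtent(w, h);
            lower = Point{x, y};
            upper = Point{x + w, y + h};
        }
    };

Clients of TextShape see only the Shape interface; the TextView inheritance is an implementation detail.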
2. Pluggable adapters. Let’s look at three ways to implement pluggable adapters
for the TreeDisplay widget described earlier, which can lay out and display
a hierarchical structure automatically.
The first step, which is common to all three of the implementations discussed
here, is to find a “narrow” interface for Adaptee, that is, the smallest subset
of operations that lets us do the adaptation. A narrow interface consisting of
only a couple of operations is easier to adapt than an interface with dozens
of operations. For TreeDisplay, the adaptee is any hierarchical structure. A
minimalist interface might include two operations, one that defines how to
present a node in the hierarchical structure graphically, and another that
retrieves the node’s children.
The narrow interface leads to three implementation approaches:
(a) Using abstract operations. Define corresponding abstract operations for the
narrow Adaptee interface in the TreeDisplay class. Subclasses must im-
plement the abstract operations and adapt the hierarchically structured
object. For example, a DirectoryTreeDisplay subclass will implement
these operations by accessing the directory structure.
(b) Using delegate objects. In this approach, TreeDisplay forwards requests for
accessing the hierarchical structure to a delegate object. TreeDisplay can
use a different adaptation strategy by substituting a different delegate.
For example, suppose there exists a DirectoryBrowser that uses a Tree-
Display. DirectoryBrowser might make a good delegate for adapting
TreeDisplay to the hierarchical directory structure. In dynamically typed
languages like Smalltalk or Objective-C, this approach only requires an
interface for registering the delegate with the adapter. Then TreeDisplay simply forwards the requests to the delegate. NEXTSTEP [Add94] uses
this approach heavily to reduce subclassing.
Statically typed languages like C++ require an explicit interface defin-
ition for the delegate. We can specify such an interface by putting the
narrow interface that TreeDisplay requires into an abstract TreeAcces-
sorDelegate class. Then we can mix this interface into the delegate of
our choice—DirectoryBrowser in this case—using inheritance. We use
single inheritance if the DirectoryBrowser has no existing parent class,
multiple inheritance if it does. Mixing classes together like this is eas-
ier than introducing a new TreeDisplay subclass and implementing its
operations individually.
(c) Parameterized adapters. The usual way to support pluggable adapters in
Smalltalk is to parameterize an adapter with one or more blocks. The
block construct supports adaptation without subclassing. A block can
adapt a request, and the adapter can store a block for each individual
request. In our example, this means TreeDisplay stores one block for
converting a node into a GraphicNode and another block for accessing
a node’s children.
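In C++ terms, approach (b) might be sketched as below, and std::function members give a rough analogue of the Smalltalk blocks in (c). TreeAccessorDelegate and GraphicNode come from the text; everything else here is an assumption for illustration:

    #include <functional>
    #include <vector>

    class GraphicNode { /* graphical presentation of one node */ };

    // (b) Delegate objects: the narrow interface factored into an abstract
    // delegate class. Nodes are passed as void* purely for illustration.
    class TreeAccessorDelegate {
    public:
        virtual ~TreeAccessorDelegate() {}
        virtual GraphicNode* CreateGraphicNode(void* node) = 0;
        virtual std::vector<void*> GetChildren(void* node) = 0;
    };

    class TreeDisplay {
    public:
        explicit TreeDisplay(TreeAccessorDelegate* d) : _delegate(d) {}
        void BuildTree(void* node) {
            _delegate->CreateGraphicNode(node);          // present this node
            for (void* child : _delegate->GetChildren(node))
                BuildTree(child);                        // recurse over children
        }
    private:
        TreeAccessorDelegate* _delegate;
    };

    // (c) A rough C++ analogue of the Smalltalk block approach: store one
    // callable per request instead of defining a delegate class.
    struct PluggableTreeDisplay {
        std::function<GraphicNode*(void*)> createGraphicNode;
        std::function<std::vector<void*>(void*)> getChildren;
    };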

COMPOSITE
Intent
Compose objects into tree structures to represent part-whole hierarchies. Com-
posite lets clients treat individual objects and compositions of objects uniformly.

Motivation
Graphics applications like drawing editors and schematic capture systems let
users build complex diagrams out of simple components. The user can group
components to form larger components, which in turn can be grouped to form still
larger components. A simple implementation could define classes for graphical
primitives such as Text and Lines plus other classes that act as containers for these
primitives.
But there’s a problem with this approach: Code that uses these classes must treat
primitive and container objects differently, even if most of the time the user treats
them identically. Having to distinguish these objects makes the application more
complex. The Composite pattern describes how to use recursive composition so
that clients don’t have to make this distinction.
The key to the Composite pattern is an abstract class that represents both primi-
tives and their containers. For the graphics system, this class is Graphic. Graphic
declares operations like Draw that are specific to graphical objects. It also declares
operations that all composite objects share, such as operations for accessing and
managing its children.
The subclasses Line, Rectangle, and Text (see preceding class diagram) define
primitive graphical objects. These classes implement Draw to draw lines, rectan-
gles, and text, respectively. Since primitive graphics have no child graphics, none
of these subclasses implements child-related operations.
The Picture class defines an aggregate of Graphic objects. Picture implements
Draw to call Draw on its children, and it implements child-related operations ac-
cordingly. Because the Picture interface conforms to the Graphic interface, Picture
objects can compose other Pictures recursively.
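A minimal C++ sketch of this class structure (the book’s sample code differs in detail; Add’s signature here is an assumption):

    #include <list>

    class Graphic {                          // Component
    public:
        virtual ~Graphic() {}
        virtual void Draw() = 0;
        virtual void Add(Graphic*) {}        // default: primitives have no children
    };

    class Line : public Graphic {            // Leaf
    public:
        void Draw() override { /* draw a line */ }
    };

    class Picture : public Graphic {         // Composite
    public:
        void Draw() override {
            for (Graphic* g : _children)     // forward Draw to each child
                g->Draw();
        }
        void Add(Graphic* g) override { _children.push_back(g); }
    private:
        std::list<Graphic*> _children;
    };

Because Picture conforms to Graphic, a Picture can be added to another Picture just like a Line can.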

Applicability
Use the Composite pattern when
• you want to represent part-whole hierarchies of objects.
• you want clients to be able to ignore the difference between compositions of
objects and individual objects. Clients will treat all objects in the composite
structure uniformly.

Participants
• Component (Graphic)
- declares the interface for objects in the composition.
- implements default behavior for the interface common to all classes, as
appropriate.
- declares an interface for accessing and managing its child components.
- (optional) defines an interface for accessing a component’s parent in the
recursive structure, and implements it if that’s appropriate.
• Leaf (Rectangle, Line, Text, etc.)
- represents leaf objects in the composition. A leaf has no children.
- defines behavior for primitive objects in the composition.
• Composite (Picture)
- defines behavior for components having children.
- stores child components.
- implements child-related operations in the Component interface.
• Client
- manipulates objects in the composition through the Component interface.

Collaborations
• Clients use the Component class interface to interact with objects in the com-
posite structure. If the recipient is a Leaf, then the request is handled directly.
If the recipient is a Composite, then it usually forwards requests to its child
components, possibly performing additional operations before and/or after
forwarding.

Consequences
The Composite pattern
• defines class hierarchies consisting of primitive objects and composite ob-
jects. Primitive objects can be composed into more complex objects, which in
turn can be composed, and so on recursively. Wherever client code expects a
primitive object, it can also take a composite object.
• makes the client simple. Clients can treat composite structures and indi-
vidual objects uniformly. Clients normally don’t know (and shouldn’t care)
whether they’re dealing with a leaf or a composite component. This simplifies
client code, because it avoids having to write tag-and-case-statement-style
functions over the classes that define the composition.
• makes it easier to add new kinds of components. Newly defined Composite
or Leaf subclasses work automatically with existing structures and client
code. Clients don’t have to be changed for new Component classes.
• can make your design overly general. The disadvantage of making it easy
to add new components is that it makes it harder to restrict the components
of a composite. Sometimes you want a composite to have only certain com-
ponents. With Composite, you can’t rely on the type system to enforce those
constraints for you. You’ll have to use run-time checks instead.

Implementation
There are many issues to consider when implementing the Composite pattern:
1. Explicit parent references. Maintaining references from child components to
their parent can simplify the traversal and management of a composite struc-
ture. The parent reference simplifies moving up the structure and deleting
a component. Parent references also help support the Chain of Responsibil-
ity (223) pattern.
The usual place to define the parent reference is in the Component class.
Leaf and Composite classes can inherit the reference and the operations that
manage it.
With parent references, it’s essential to maintain the invariant that all children
of a composite have as their parent the composite that in turn has them as
children. The easiest way to ensure this is to change a component’s parent
only when it’s being added or removed from a composite. If this can be
implemented once in the Add and Remove operations of the Composite
class, then it can be inherited by all the subclasses, and the invariant will be
maintained automatically.
2. Sharing components. It’s often useful to share components, for example, to
reduce storage requirements. But when a component can have no more than
one parent, sharing components becomes difficult.
A possible solution is for children to store multiple parents. But that can lead
to ambiguities as a request propagates up the structure. The Flyweight (195)
pattern shows how to rework a design to avoid storing parents altogether. It
works in cases where children can avoid sending parent requests by exter-
nalizing some or all of their state.
3. Maximizing the Component interface. One of the goals of the Composite pattern
is to make clients unaware of the specific Leaf or Composite classes they’re
using. To attain this goal, the Component class should define as many com-
mon operations for Composite and Leaf classes as possible. The Component
class usually provides default implementations for these operations, and
Leaf and Composite subclasses will override them.
However, this goal will sometimes conflict with the principle of class hierar-
chy design that says a class should only define operations that are meaningful
to its subclasses. There are many operations that Component supports that
don’t seem to make sense for Leaf classes. How can Component provide a
default implementation for them?
Sometimes a little creativity shows how an operation that would appear to
make sense only for Composites can be implemented for all Components by
moving it to the Component class. For example, the interface for accessing
children is a fundamental part of a Composite class but not necessarily Leaf
classes. But if we view a Leaf as a Component that never has children, then we
can define a default operation for child access in the Component class that
never returns any children. Leaf classes can use the default implementation,
but Composite classes will reimplement it to return their children.
The child management operations are more troublesome and are discussed
in the next item.
4. Declaring the child management operations. Although the Composite class imple-
ments the Add and Remove operations for managing children, an important
issue in the Composite pattern is which classes declare these operations in the
Composite class hierarchy. Should we declare these operations in the Com-
ponent and make them meaningful for Leaf classes, or should we declare
and define them only in Composite and its subclasses?
The decision involves a trade-off between safety and transparency:
• Defining the child management interface at the root of the class hierarchy
gives you transparency, because you can treat all components uniformly.
It costs you safety, however, because clients may try to do meaningless
things like add and remove objects from leaves.
• Defining child management in the Composite class gives you safety,
because any attempt to add or remove objects from leaves will be caught
at compile-time in a statically typed language like C++. But you lose
transparency, because leaves and composites have different interfaces.
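For the transparent option, one common default, sketched below as an illustration rather than the book’s sample code, is to declare the child operations in Component with an implementation that fails at run-time; Composite then overrides them:

    #include <stdexcept>

    class Component {
    public:
        virtual ~Component() {}
        // Declared here for transparency; the defaults signal a meaningless
        // request at run-time, since the type system can't catch it.
        virtual void Add(Component*) { throw std::logic_error("not a composite"); }
        virtual void Remove(Component*) { throw std::logic_error("not a composite"); }
    };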
5. Should Component implement a list of Components? You might be tempted to
define the set of children as an instance variable in the Component class
where the child access and management operations are declared. But putting
the child pointer in the base class incurs a space penalty for every leaf, even
though a leaf never has children. This is worthwhile only if there are relatively
few children in the structure.
6. Child ordering. Many designs specify an ordering on the children of Com-
posite. In the earlier Graphics example, ordering may reflect front-to-back
ordering. If Composites represent parse trees, then compound statements
can be instances of a Composite whose children must be ordered to reflect
the program.
When child ordering is an issue, you must design child access and man-
agement interfaces carefully to manage the sequence of children. The Itera-
tor (257) pattern can guide you in this.
7. Caching to improve performance. If you need to traverse or search compositions
frequently, the Composite class can cache traversal or search information
about its children. The Composite can cache actual results or just information
that lets it short-circuit the traversal or search. For example, the Picture class
from the Motivation example could cache the bounding box of its children.
During drawing or selection, this cached bounding box lets the Picture avoid
drawing or searching when its children aren’t visible in the current window.
Changes to a component will require invalidating the caches of its parents.
This works best when components know their parents. So if you’re using
caching, you need to define an interface for telling composites that their
caches are invalid.
8. Who should delete components? In languages without garbage collection, it’s
usually best to make a Composite responsible for deleting its children when
it’s destroyed. An exception to this rule is when Leaf objects are immutable
and thus can be shared.
9. What’s the best data structure for storing components? Composites may use a
variety of data structures to store their children, including linked lists, trees,
arrays, and hash tables. The choice of data structure depends (as always) on
efficiency. In fact, it isn’t even necessary to use a general-purpose data struc-
ture at all. Sometimes composites have a variable for each child, although
this requires each subclass of Composite to implement its own management
interface. See Interpreter (243) for an example.

DECORATOR

Intent
Attach additional responsibilities to an object dynamically. Decorators provide a
flexible alternative to subclassing for extending functionality.

Also Known As
Wrapper

Motivation
Sometimes we want to add responsibilities to individual objects, not to an entire
class. A graphical user interface toolkit, for example, should let you add properties
like borders or behaviors like scrolling to any user interface component.
One way to add responsibilities is with inheritance. Inheriting a border from
another class puts a border around every subclass instance. This is inflexible,
however, because the choice of border is made statically. A client can’t control
how and when to decorate the component with a border.
A more flexible approach is to enclose the component in another object that adds
the border. The enclosing object is called a decorator. The decorator conforms to
the interface of the component it decorates so that its presence is transparent to the
component’s clients. The decorator forwards requests to the component and may
perform additional actions (such as drawing a border) before or after forwarding.
Transparency lets you nest decorators recursively, thereby allowing an unlimited
number of added responsibilities.
For example, suppose we have a TextView object that displays text in a window.
TextView has no scroll bars by default, because we might not always need them.
When we do, we can use a ScrollDecorator to add them. Suppose we also want to
add a thick black border around the TextView. We can use a BorderDecorator to
add this as well. We simply compose the decorators with the TextView to produce
the desired result.
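A minimal sketch of this structure in C++ (Draw is the only operation shown; BorderDecorator’s details are assumptions):

    class VisualComponent {                  // Component interface
    public:
        virtual ~VisualComponent() {}
        virtual void Draw() = 0;
    };

    class TextView : public VisualComponent {
    public:
        void Draw() override { /* draw the text */ }
    };

    // Decorator conforms to VisualComponent and forwards to the wrapped one.
    class Decorator : public VisualComponent {
    public:
        explicit Decorator(VisualComponent* c) : _component(c) {}
        void Draw() override { _component->Draw(); }
    private:
        VisualComponent* _component;
    };

    class BorderDecorator : public Decorator {
    public:
        BorderDecorator(VisualComponent* c, int width)
            : Decorator(c), _width(width) {}
        void Draw() override {
            Decorator::Draw();               // draw the component first...
            DrawBorder(_width);              // ...then the added embellishment
        }
    private:
        void DrawBorder(int) { /* draw a border of the given width */ }
        int _width;
    };

Nesting then reads directly in code: wrapping a TextView in a ScrollDecorator (defined analogously) and that in a BorderDecorator yields a bordered, scrolling text view.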

Applicability
Use Decorator
• to add responsibilities to individual objects dynamically and transparently,
that is, without affecting other objects.
• for responsibilities that can be withdrawn.
• when extension by subclassing is impractical. Sometimes a large number
of independent extensions are possible and would produce an explosion of
subclasses to support every combination. Or a class definition may be hidden
or otherwise unavailable for subclassing.

Participants
• Component (VisualComponent)
- defines the interface for objects that can have responsibilities added to
them dynamically.
• ConcreteComponent (TextView)
- defines an object to which additional responsibilities can be attached.
• Decorator
- maintains a reference to a Component object and defines an interface that
conforms to Component’s interface.
• ConcreteDecorator (BorderDecorator, ScrollDecorator)
- adds responsibilities to the component.

Collaborations
• Decorator forwards requests to its Component object. It may optionally per-
form additional operations before and after forwarding the request.

Consequences
The Decorator pattern has at least two key benefits and two liabilities:
1. More flexibility than static inheritance. The Decorator pattern provides a more
flexible way to add responsibilities to objects than can be had with static
(multiple) inheritance. With decorators, responsibilities can be added and
removed at run-time simply by attaching and detaching them. In contrast,
inheritance requires creating a new class for each additional responsibil-
ity (e.g., BorderedScrollableTextView, BorderedTextView). This gives rise to
many classes and increases the complexity of a system. Furthermore, provid-
ing different Decorator classes for a specific Component class lets you mix
and match responsibilities.
Decorators also make it easy to add a property twice. For example, to give
a TextView a double border, simply attach two BorderDecorators. Inheriting
from a Border class twice is error-prone at best.
2. Avoids feature-laden classes high up in the hierarchy. Decorator offers a pay-
as-you-go approach to adding responsibilities. Instead of trying to support
all foreseeable features in a complex, customizable class, you can define
a simple class and add functionality incrementally with Decorator objects.
Functionality can be composed from simple pieces. As a result, an application
needn’t pay for features it doesn’t use. It’s also easy to define new kinds of
Decorators independently from the classes of objects they extend, even for
unforeseen extensions. Extending a complex class tends to expose details
unrelated to the responsibilities you’re adding.
3. A decorator and its component aren’t identical. A decorator acts as a transparent
enclosure. But from an object identity point of view, a decorated component
is not identical to the component itself. Hence you shouldn’t rely on object
identity when you use decorators.
4. Lots of little objects. A design that uses Decorator often results in systems
composed of lots of little objects that all look alike. The objects differ only
in the way they are interconnected, not in their class or in the value of
their variables. Although these systems are easy to customize by those who
understand them, they can be hard to learn and debug.

Implementation
Several issues should be considered when applying the Decorator pattern:
1. Interface conformance. A decorator object’s interface must conform to the inter-
face of the component it decorates. ConcreteDecorator classes must therefore
inherit from a common class (at least in C++).
2. Omitting the abstract Decorator class. There’s no need to define an abstract
Decorator class when you only need to add one responsibility. That’s often
the case when you’re dealing with an existing class hierarchy rather than
designing a new one. In that case, you can merge Decorator’s responsibility
for forwarding requests to the component into the ConcreteDecorator.
3. Keeping Component classes lightweight. To ensure a conforming interface, com-
ponents and decorators must descend from a common Component class.
It’s important to keep this common class lightweight; that is, it should fo-
cus on defining an interface, not on storing data. The definition of the data
representation should be deferred to subclasses; otherwise the complexity
of the Component class might make the decorators too heavyweight to use
in quantity. Putting a lot of functionality into Component also increases the
probability that concrete subclasses will pay for features they don’t need.
4. Changing the skin of an object versus changing its guts. We can think of a deco-
rator as a skin over an object that changes its behavior. An alternative is to
change the object’s guts. The Strategy (315) pattern is a good example of a
pattern for changing the guts.
Strategies are a better choice in situations where the Component class is
intrinsically heavyweight, thereby making the Decorator pattern too costly
to apply. In the Strategy pattern, the component forwards some of its behavior
to a separate strategy object. The Strategy pattern lets us alter or extend the
component’s functionality by replacing the strategy object.
For example, we can support different border styles by having the component
defer border-drawing to a separate Border object. The Border object is a
Strategy object that encapsulates a border-drawing strategy. By extending
the number of strategies from just one to an open-ended list, we achieve the
same effect as nesting decorators recursively.
In MacApp 3.0 [App89] and Bedrock [Sym93a], for example, graphical com-
ponents (called “views”) maintain a list of “adorner” objects that can attach
additional adornments like borders to a view component. If a view has any
adorners attached, then it gives them a chance to draw additional embellish-
ments. MacApp and Bedrock must use this approach because the View class
is heavyweight. It would be too expensive to use a full-fledged View just to
add a border.

12
Q

GOF

  • Strategy
  • Template Method
  • Mediator
A

Behavioral Patterns
Behavioral patterns are concerned with algorithms and the assignment of responsibili-
ties between objects. Behavioral patterns describe not just patterns of objects or classes
but also the patterns of communication between them. These patterns characterize
complex control flow that’s difficult to follow at run-time. They shift your focus away
from flow of control to let you concentrate just on the way objects are interconnected.
Behavioral class patterns use inheritance to distribute behavior between classes. This
chapter includes two such patterns. Template Method (325) is the simpler and more
common of the two. A template method is an abstract definition of an algorithm. It
defines the algorithm step by step. Each step invokes either an abstract operation or
a primitive operation. A subclass fleshes out the algorithm by defining the abstract
operations. The other behavioral class pattern is Interpreter (243), which represents
a grammar as a class hierarchy and implements an interpreter as an operation on
instances of these classes.
Behavioral object patterns use object composition rather than inheritance. Some de-
scribe how a group of peer objects cooperate to perform a task that no single object
can carry out by itself. An important issue here is how peer objects know about each
other. Peers could maintain explicit references to each other, but that would increase
their coupling. In the extreme, every object would know about every other. The Me-
diator (273) pattern avoids this by introducing a mediator object between peers. The
mediator provides the indirection needed for loose coupling.
Chain of Responsibility (223) provides even looser coupling. It lets you send requests to
an object implicitly through a chain of candidate objects. Any candidate may fulfill the
request depending on run-time conditions. The number of candidates is open-ended,
and you can select which candidates participate in the chain at run-time.
The Observer (293) pattern defines and maintains a dependency between objects. The
classic example of Observer is in Smalltalk Model/View/Controller, where all views
of the model are notified whenever the model’s state changes.
Other behavioral object patterns are concerned with encapsulating behavior in an object
and delegating requests to it. The Strategy (315) pattern encapsulates an algorithm in
an object. Strategy makes it easy to specify and change the algorithm an object uses.
The Command (233) pattern encapsulates a request in an object so that it can be passed
as a parameter, stored on a history list, or manipulated in other ways. The State (305)
pattern encapsulates the states of an object so that the object can change its behavior
when its state object changes. Visitor (331) encapsulates behavior that would otherwise
be distributed across classes, and Iterator (257) abstracts the way you access and traverse
objects in an aggregate.

STRATEGY
Intent
Define a family of algorithms, encapsulate each one, and make them interchange-
able. Strategy lets the algorithm vary independently from clients that use it.

Also Known As
Policy

Motivation
Many algorithms exist for breaking a stream of text into lines. Hard-wiring all such
algorithms into the classes that require them isn't desirable for several reasons:
• Clients that need linebreaking get more complex if they include the line-
breaking code. That makes clients bigger and harder to maintain, especially
if they support multiple linebreaking algorithms.
• Different algorithms will be appropriate at different times. We don't want to
support multiple linebreaking algorithms if we don't use them all.
• It's difficult to add new algorithms and vary existing ones when linebreaking
is an integral part of a client.
We can avoid these problems by defining classes that encapsulate different line-
breaking algorithms. An algorithm that's encapsulated in this way is called a
strategy.
Suppose a Composition class is responsible for maintaining and updating the
linebreaks of text displayed in a text viewer. Linebreaking strategies aren't im-
plemented by the class Composition. Instead, they are implemented separately
by subclasses of the abstract Compositor class. Compositor subclasses implement
different strategies:
• SimpleCompositor implements a simple strategy that determines linebreaks
one at a time.
• TeXCompositor implements the TeX algorithm for finding linebreaks. This
strategy tries to optimize linebreaks globally, that is, one paragraph at a time.
• ArrayCompositor implements a strategy that selects breaks so that each row
has a fixed number of items. It's useful for breaking a collection of icons into
rows, for example.
A Composition maintains a reference to a Compositor object. Whenever a Compo-
sition reformats its text, it forwards this responsibility to its Compositor object. The
client of Composition specifies which Compositor should be used by installing
the Compositor it desires into the Composition.
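A minimal sketch of this arrangement in C++ (the Compose and Repair operation names follow the book’s example, but the signatures here are simplified assumptions):

    class Compositor {                       // Strategy
    public:
        virtual ~Compositor() {}
        virtual void Compose() = 0;          // compute linebreaks
    };

    class SimpleCompositor : public Compositor {
    public:
        void Compose() override { /* break lines one at a time */ }
    };

    class Composition {                      // Context
    public:
        explicit Composition(Compositor* c) : _compositor(c) {}
        void SetCompositor(Compositor* c) { _compositor = c; }
        void Repair() { _compositor->Compose(); }  // forward to the strategy
    private:
        Compositor* _compositor;
    };

The client installs the desired strategy, e.g. Composition quick(new SimpleCompositor), and thereafter talks only to the Composition.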

Applicability
Use the Strategy pattern when
• many related classes differ only in their behavior. Strategies provide a way
to configure a class with one of many behaviors.
• you need different variants of an algorithm. For example, you might de-
fine algorithms reflecting different space/time trade-offs. Strategies can be
used when these variants are implemented as a class hierarchy of algo-
rithms [HO87].
• an algorithm uses data that clients shouldn’t know about. Use the Strategy
pattern to avoid exposing complex, algorithm-specific data structures.
• a class defines many behaviors, and these appear as multiple conditional
statements in its operations. Instead of many conditionals, move related
conditional branches into their own Strategy class.

Participants
• Strategy (Compositor)
- declares an interface common to all supported algorithms. Context uses
this interface to call the algorithm defined by a ConcreteStrategy.
• ConcreteStrategy (SimpleCompositor, TeXCompositor, ArrayCompositor)
- implements the algorithm using the Strategy interface.
• Context (Composition)
- is configured with a ConcreteStrategy object.
- maintains a reference to a Strategy object.
- may define an interface that lets Strategy access its data.

Collaborations
• Strategy and Context interact to implement the chosen algorithm. A context
may pass all data required by the algorithm to the strategy when the algorithm
is called. Alternatively, the context can pass itself as an argument to Strategy
operations. That lets the strategy call back on the context as required.
• A context forwards requests from its clients to its strategy. Clients usually
create and pass a ConcreteStrategy object to the context; thereafter, clients
interact with the context exclusively. There is often a family of ConcreteStrategy
classes for a client to choose from.

Consequences
The Strategy pattern has the following benefits and drawbacks:
1. Families of related algorithms. Hierarchies of Strategy classes define a family of
algorithms or behaviors for contexts to reuse. Inheritance can help factor out
common functionality of the algorithms.
2. An alternative to subclassing. Inheritance offers another way to support a
variety of algorithms or behaviors. You can subclass a Context class directly
to give it different behaviors. But this hard-wires the behavior into Context. It
mixes the algorithm implementation with Context’s, making Context harder
to understand, maintain, and extend. And you can’t vary the algorithm
dynamically. You wind up with many related classes whose only difference
is the algorithm or behavior they employ. Encapsulating the algorithm in
separate Strategy classes lets you vary the algorithm independently of its
context, making it easier to switch, understand, and extend.
3. Strategies eliminate conditional statements. The Strategy pattern offers an alter-
native to conditional statements for selecting desired behavior. When differ-
ent behaviors are lumped into one class, it’s hard to avoid using conditional statements to select the right behavior. Encapsulating the behavior in sepa-
rate Strategy classes eliminates these conditional statements.
4. A choice of implementations. Strategies can provide different implementations
of the same behavior. The client can choose among strategies with different
time and space trade-offs.
5. Clients must be aware of different Strategies. The pattern has a potential draw-
back in that a client must understand how Strategies differ before it can
select the appropriate one. Clients might be exposed to implementation is-
sues. Therefore you should use the Strategy pattern only when the variation
in behavior is relevant to clients.
6. Communication overhead between Strategy and Context. The Strategy interface
is shared by all ConcreteStrategy classes whether the algorithms they imple-
ment are trivial or complex. Hence it’s likely that some ConcreteStrategies
won’t use all the information passed to them through this interface; simple
ConcreteStrategies may use none of it! That means there will be times when
the context creates and initializes parameters that never get used. If this is
an issue, then you’ll need tighter coupling between Strategy and Context.
7. Increased number of objects. Strategies increase the number of objects in an
application. Sometimes you can reduce this overhead by implementing
strategies as stateless objects that contexts can share. Any residual state is
maintained by the context, which passes it in each request to the Strategy object. Shared strategies should not maintain state across invocations. The
Flyweight (195) pattern describes this approach in more detail.

Implementation
Consider the following implementation issues:
1. Defining the Strategy and Context interfaces. The Strategy and Context interfaces
must give a ConcreteStrategy efficient access to any data it needs from a
context, and vice versa.
One approach is to have Context pass data in parameters to Strategy
operations—in other words, take the data to the strategy. This keeps Strategy
and Context decoupled. On the other hand, Context might pass data the
Strategy doesn’t need.
Another technique has a context pass itself as an argument, and the strategy
requests data from the context explicitly. Alternatively, the strategy can store
a reference to its context, eliminating the need to pass anything at all. Either
way, the strategy can request exactly what it needs. But now Context must
define a more elaborate interface to its data, which couples Strategy and
Context more closely.
The needs of the particular algorithm and its data requirements will deter-
mine the best technique.
2. Strategies as template parameters. In C++ templates can be used to configure
a class with a strategy. This technique is only applicable if (1) the Strategy
can be selected at compile-time, and (2) it does not have to be changed at
run-time.
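A sketch of the template technique (all names are illustrative):

    // Strategy fixed at compile time: no pointer, no virtual call, and no
    // abstract Strategy class is needed.
    template <class AStrategy>
    class Context {
    public:
        void Operation() { _strategy.DoAlgorithm(); }
    private:
        AStrategy _strategy;
    };

    struct SimpleStrategy {
        void DoAlgorithm() { /* one concrete algorithm */ }
    };

    // Usage: Context<SimpleStrategy> c; c.Operation();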
3. Making Strategy objects optional. The Context class may be simplified if it’s
meaningful not to have a Strategy object. Context checks to see if it has
a Strategy object before accessing it. If there is one, then Context uses it
normally. If there isn’t a strategy, then Context carries out default behavior.
The benefit of this approach is that clients don’t have to deal with Strategy
objects at all unless they don’t like the default behavior.

TEMPLATE METHOD
Intent
Define the skeleton of an algorithm in an operation, deferring some steps to
subclasses. Template Method lets subclasses redefine certain steps of an algorithm
without changing the algorithm’s structure.

Motivation
Consider an application framework that provides Application and Document
classes. The Application class is responsible for opening existing documents stored
in an external format, such as a file. A Document object represents the information
in a document once it's read from the file.
Applications built with the framework can subclass Application and Document to
suit specific needs. For example, a drawing application defines DrawApplication
and DrawDocument subclasses; a spreadsheet application defines Spreadsheet-
Application and SpreadsheetDocument subclasses.
The Application class’s OpenDocument operation defines each step for opening
a document. It checks if the document can be opened, creates the application-
specific Document object, adds it to its set of documents, and reads the Document
from a file.
We call OpenDocument a template method. A template method defines an algo-
rithm in terms of abstract operations that subclasses override to provide concrete
behavior. Application subclasses define the steps of the algorithm that check if
the document can be opened (CanOpenDocument) and that create the Document
(DoCreateDocument). Document classes define the step that reads the document
(DoRead). The template method also defines an operation that lets Application
subclasses know when the document is about to be opened (AboutToOpenDocu-
ment), in case they care.
By defining some of the steps of an algorithm using abstract operations, the tem-
plate method fixes their ordering, but it lets Application and Document subclasses
vary those steps to suit their needs.
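A sketch of OpenDocument as a C++ template method, following the steps above; the signatures are assumptions, and the access-control conventions discussed later under Implementation are applied:

    class Document {
    public:
        virtual ~Document() {}
        virtual void DoRead(const char* name) = 0;  // defined by Document subclasses
    };

    class Application {
    public:
        // The template method: nonvirtual, so subclasses vary the steps but
        // not the ordering of the algorithm.
        void OpenDocument(const char* name) {
            if (!CanOpenDocument(name))              // step subclasses define
                return;
            Document* doc = DoCreateDocument();      // factory method
            if (doc) {
                AddDocument(doc);
                AboutToOpenDocument(doc);            // hook: default does nothing
                doc->DoRead(name);
            }
        }
    protected:
        virtual bool CanOpenDocument(const char* name) = 0;
        virtual Document* DoCreateDocument() = 0;
        virtual void AboutToOpenDocument(Document*) {}   // hook operation
    private:
        void AddDocument(Document*) { /* add to the set of open documents */ }
    };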

Applicability
The Template Method pattern should be used
• to implement the invariant parts of an algorithm once and leave it up to
subclasses to implement the behavior that can vary.
• when common behavior among subclasses should be factored and localized
in a common class to avoid code duplication. This is a good example of
“refactoring to generalize” as described by Opdyke and Johnson [OJ93].
You first identify the differences in the existing code and then separate the
differences into new operations. Finally, you replace the differing code with
a template method that calls one of these new operations.
• to control subclass extensions. You can define a template method that calls
“hook” operations (see Consequences) at specific points, thereby permitting
extensions only at those points.

Participants
• AbstractClass (Application)
- defines abstract primitive operations that concrete subclasses define to
implement steps of an algorithm.
- implements a template method defining the skeleton of an algorithm. The
template method calls primitive operations as well as operations defined
in AbstractClass or those of other objects.
• ConcreteClass (MyApplication)
- implements the primitive operations to carry out subclass-specific steps of
the algorithm.

Collaborations
• ConcreteClass relies on AbstractClass to implement the invariant steps of the
algorithm.

Consequences
Template methods are a fundamental technique for code reuse. They are partic-
ularly important in class libraries, because they are the means for factoring out
common behavior in library classes.
Template methods lead to an inverted control structure that’s sometimes referred
to as “the Hollywood principle,” that is, “Don’t call us, we’ll call you” [Swe85].
This refers to how a parent class calls the operations of a subclass and not the
other way around.
Template methods call the following kinds of operations:
• concrete operations (either on the ConcreteClass or on client classes);
• concrete AbstractClass operations (i.e., operations that are generally useful
to subclasses);
• primitive operations (i.e., abstract operations);
• factory methods (see Factory Method (107)); and
• hook operations, which provide default behavior that subclasses can extend
if necessary. A hook operation often does nothing by default.
It’s important for template methods to specify which operations are hooks (may
be overridden) and which are abstract operations (must be overridden). To reuse
an abstract class effectively, subclass writers must understand which operations
are designed for overriding.

Implementation
Three implementation issues are worth noting:
1. Using C++ access control. In C++, the primitive operations that a template
method calls can be declared protected members. This ensures that they
are only called by the template method. Primitive operations that must be overridden are declared pure virtual. The template method itself should not
be overridden; therefore you can make the template method a nonvirtual
member function.
2. Minimizing primitive operations. An important goal in designing template
methods is to minimize the number of primitive operations that a subclass
must override to flesh out the algorithm. The more operations that need
overriding, the more tedious things get for clients.
3. Naming conventions. You can identify the operations that should be overrid-
den by adding a prefix to their names. For example, the MacApp framework
for Macintosh applications [App89] prefixes template method names with
“Do-”: “DoCreateDocument”, “DoRead”, and so forth.

MEDIATOR
Intent
Define an object that encapsulates how a set of objects interact. Mediator promotes
loose coupling by keeping objects from referring to each other explicitly, and it
lets you vary their interaction independently.

Motivation
Object-oriented design encourages the distribution of behavior among objects.
Such distribution can result in an object structure with many connections between
objects; in the worst case, every object ends up knowing about every other.
Though partitioning a system into many objects generally enhances reusability,
proliferating interconnections tend to reduce it again. Lots of interconnections
make it less likely that an object can work without the support of others—the
system acts as though it were monolithic. Moreover, it can be difficult to change
the system’s behavior in any significant way, since behavior is distributed among
many objects. As a result, you may be forced to define many subclasses to cus-
tomize the system’s behavior.
As an example, consider the implementation of dialog boxes in a graphical user
interface.
Often there are dependencies between the widgets in the dialog. For example,
a button gets disabled when a certain entry field is empty. Selecting an entry
in a list of choices called a list box might change the contents of an entry field.
Conversely, typing text into the entry field might automatically select one or more
corresponding entries in the list box. Once text appears in the entry field, other
buttons may become enabled that let the user do something with the text, such as
changing or deleting the thing to which it refers.
Different dialog boxes will have different dependencies between widgets. So even
though dialogs display the same kinds of widgets, they can’t simply reuse stock
widget classes; they have to be customized to reflect dialog-specific dependencies.
Customizing them individually by subclassing will be tedious, since many classes
are involved.
You can avoid these problems by encapsulating collective behavior in a separate
mediator object. A mediator is responsible for controlling and coordinating the
interactions of a group of objects. The mediator serves as an intermediary that
keeps objects in the group from referring to each other explicitly. The objects only
know the mediator, thereby reducing the number of interconnections.
For example, FontDialogDirector can be the mediator between the widgets in
a dialog box. A FontDialogDirector object knows the widgets in a dialog and
coordinates their interaction.
Here’s the succession of events by which a list box’s selection passes to an entry
field:
1. The list box tells its director that it’s changed.
2. The director gets the selection from the list box.
3. The director passes the selection to the entry field.
4. Now that the entry field contains some text, the director enables button(s)
for initiating an action (e.g., “demibold,” “oblique”).
Note how the director mediates between the list box and the entry field. Widgets
communicate with each other only indirectly, through the director. They don’t
have to know about each other; all they know is the director. Furthermore, because
the behavior is localized in one class, it can be changed or replaced by extending
or replacing that class.
DialogDirector is an abstract class that defines the overall behavior of a dia-
log. Clients call the ShowDialog operation to display the dialog on the screen.
CreateWidgets is an abstract operation for creating the widgets of a dialog. Wid-
getChanged is another abstract operation; widgets call it to inform their director
that they have changed. DialogDirector subclasses override CreateWidgets to cre-
ate the proper widgets, and they override WidgetChanged to handle the changes.
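A minimal sketch of these classes in C++ (member details are assumptions); a subclass like FontDialogDirector would override WidgetChanged to implement the event sequence listed above:

    class Widget;

    class DialogDirector {                   // Mediator
    public:
        virtual ~DialogDirector() {}
        void ShowDialog() { CreateWidgets(); /* then display the dialog */ }
        virtual void WidgetChanged(Widget* changed) = 0;
    protected:
        virtual void CreateWidgets() = 0;
    };

    class Widget {                           // Colleague base class
    public:
        explicit Widget(DialogDirector* d) : _director(d) {}
        virtual ~Widget() {}
        void Changed() { _director->WidgetChanged(this); }  // tell the director
    private:
        DialogDirector* _director;           // the only object a widget knows
    };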

Applicability
Use the Mediator pattern when
• a set of objects communicate in well-defined but complex ways. The resulting
interdependencies are unstructured and difficult to understand.
• reusing an object is difficult because it refers to and communicates with many
other objects.
• a behavior that’s distributed between several classes should be customizable
without a lot of subclassing.

Participants
• Mediator (DialogDirector)
- defines an interface for communicating with Colleague objects.
• ConcreteMediator (FontDialogDirector)
- implements cooperative behavior by coordinating Colleague objects.
- knows and maintains its colleagues.
• Colleague classes (ListBox, EntryField)
- each Colleague class knows its Mediator object.
- each colleague communicates with its mediator whenever it would have
otherwise communicated with another colleague.

Collaborations
• Colleagues send and receive requests from a Mediator object. The mediator
implements the cooperative behavior by routing requests between the appro-
priate colleague(s).

Consequences
The Mediator pattern has the following benefits and drawbacks:
1. It limits subclassing. A mediator localizes behavior that otherwise would be
distributed among several objects. Changing this behavior requires subclass-
ing Mediator only; Colleague classes can be reused as is.
2. It decouples colleagues. A mediator promotes loose coupling between col-
leagues. You can vary and reuse Colleague and Mediator classes indepen-
dently.
3. It simplifies object protocols. A mediator replaces many-to-many interactions
with one-to-many interactions between the mediator and its colleagues. One-
to-many relationships are easier to understand, maintain, and extend.
4. It abstracts how objects cooperate. Making mediation an independent concept
and encapsulating it in an object lets you focus on how objects interact apart
from their individual behavior. That can help clarify how objects interact in
a system.
5. It centralizes control. The Mediator pattern trades complexity of interaction
for complexity in the mediator. Because a mediator encapsulates protocols,
it can become more complex than any individual colleague. This can make
the mediator itself a monolith that’s hard to maintain.

Implementation
The following implementation issues are relevant to the Mediator pattern:
1. Omitting the abstract Mediator class. There’s no need to define an abstract
Mediator class when colleagues work with only one mediator. The abstract
coupling that the Mediator class provides lets colleagues work with different
Mediator subclasses, and vice versa.
2. Colleague-Mediator communication. Colleagues have to communicate with
their mediator when an event of interest occurs. One approach is to im-
plement the Mediator as an Observer using the Observer (293) pattern. Col-
league classes act as Subjects, sending notifications to the mediator whenever
they change state. The mediator responds by propagating the effects of the
change to other colleagues.
Another approach defines a specialized notification interface in Mediator
that lets colleagues be more direct in their communication. Smalltalk/V for
Windows uses a form of delegation: When communicating with the media-
tor, a colleague passes itself as an argument, allowing the mediator to identify
the sender. The Sample Code uses this approach, and the Smalltalk/V im-
plementation is discussed further in the Known Uses.

13
Q

Patterns of Enterprise Application Architecture:

  • Web Server Patterns
  • Concurrency Patterns
  • Base Patterns
A

Domain Logic Patterns

Transaction Script
Organizes business logic by procedures where each procedure handles a single request from the presentation.
Most business applications can be thought of as a series of transactions. A transaction may view some information as organized in a particular way; another will make changes to it. Each interaction between a client system and a server system contains a certain amount of logic. In some cases this can be as simple as displaying information in the database. In others it may involve many steps of validations and calculations.
A Transaction Script organizes all this logic primarily as a single procedure, making calls directly to the database or through a thin database wrapper. Each transaction will have its own Transaction Script, although common subtasks can be broken into subprocedures.

Domain Model
An object model of the domain that incorporates both behavior and data.
At its worst business logic can be very complex. Rules and logic describe many different cases and slants of behavior, and it’s this complexity that objects were designed to work with. A Domain Model creates a web of interconnected objects, where each object represents some meaningful individual, whether as large as a corporation or as small as a single line on an order form.

Table Module
One of the key messages of object orientation is bundling the data with the behavior that uses it. The traditional object-oriented approach is based on objects with identity, along the lines of Domain Model (116). Thus, if we have an Employee class, any instance of it corresponds to a particular employee. This scheme works well because once we have a reference to an employee, we can execute operations, follow relationships, and gather data on him.
One of the problems with Domain Model (116) is the interface with relational databases. In many ways this approach treats the relational database like a crazy aunt who's shut up in an attic and whom nobody wants to talk about. As a result you often need considerable programmatic gymnastics to pull data in and out of the database, transforming between two different representations of the data.
A Table Module organizes domain logic with one class per table in the database, and a single instance of a class contains the various procedures that will act on the data. The primary distinction with Domain Model (116) is that, if you have many orders, a Domain Model (116) will have one order object per order while a Table Module will have one object to handle all orders.

Service Layer
Defines an application’s boundary with a layer of services that establishes a set of available operations and coordinates the application’s response in each operation.
Enterprise applications typically require different kinds of interfaces to the data they store and the logic they implement: data loaders, user interfaces, integration gateways, and others. Despite their different purposes, these interfaces often need common interactions with the application to access and manipulate its data and invoke its business logic. The interactions may be complex, involving transactions across multiple resources and the coordination of several responses to an action. Encoding the logic of the interactions separately in each interface causes a lot of duplication.
A Service Layer defines an application’s boundary [Cockburn PloP] and its set of available operations from the perspective of interfacing client layers. It encapsulates the application’s business logic, controlling transactions and coordinating responses in the implementation of its operations.
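
A Service Layer sketch in TypeScript; the repository, notifier, and order shape are assumptions, and the point is that clients see one class of operations rather than the domain objects behind it:
interface Order { id: string; cancelled: boolean; }
interface OrderRepository { find(id: string): Order | undefined; save(order: Order): void; }
interface Notifier { send(message: string): void; }

class OrderService {
  constructor(private orders: OrderRepository, private notifier: Notifier) {}
  // One available operation: coordinates domain work and responses.
  cancelOrder(orderId: string): void {
    const order = this.orders.find(orderId);
    if (!order) throw new Error('No such order: ' + orderId);
    order.cancelled = true;          // domain work, inside one transaction
    this.orders.save(order);
    this.notifier.send('Order ' + orderId + ' cancelled');
  }
}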

Web Presentation Patterns

Model View Controller
Model View Controller (MVC) is one of the most quoted (and most misquoted) patterns around. It started as a framework developed by Trygve Reenskaug for the Smalltalk platform in the late 1970s. Since then it has played an influential role in most UI frameworks and in the thinking about UI design.

Page Controller
An object that handles a request for a specific page or action on a Web site.
Most people’s basic Web experience is with static HTML pages. When you request static HTML you pass to the Web server the name and path for an HTML document stored on it. The key notion is that each page on the Web site is a separate document on the server. With dynamic pages things can get much more interesting since there’s a much more complex relationship between path names and the file that responds. However, the approach of one path leading to one file that handles the request is a simple model to understand.
As a result, Page Controller has one input controller for each logical page of the Web site. That controller may be the page itself, as it often is in server page environments, or it may be a separate object that corresponds to that page.

Front Controller
A controller that handles all requests for a Web site.
In a complex Web site there are many similar things you need to do when handling a request. These things include security, internationalization, and providing particular views for certain users. If the input controller behavior is scattered across multiple objects, much of this behavior can end up duplicated. Also, it’s difficult to change behavior at runtime.
The Front Controller consolidates all request handling by channeling requests through a single handler object. This object can carry out common behavior, which can be modified at runtime with decorators. The handler then dispatches to command objects for behavior particular to a request.
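
A Front Controller sketch in TypeScript; the request shape, the security check, and the command registry are invented for illustration:
interface WebRequest { path: string; user?: string; }
interface Command { process(request: WebRequest): string; }

class FrontController {
  private commands = new Map<string, Command>();
  register(path: string, command: Command): void { this.commands.set(path, command); }
  handle(request: WebRequest): string {
    if (!request.user) return '401 Unauthorized';   // common behavior lives in one place
    const command = this.commands.get(request.path);
    return command ? command.process(request) : '404 Not Found';  // dispatch to a command object
  }
}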

Template View
Renders information into HTML by embedding markers in an HTML page.
Writing a program that spits out HTML is often more annoying than you might imagine. Although programming languages are better at creating text than they used to be (some of us remember character handling in Fortran and standard Pascal), creating and concatenating string constructs is still painful. If there isn’t much to do, it isn’t too bad, but a whole HTML page is a lot of text manipulation.
With static HTML pages - those that don’t change from request to request - you can use nice WYSIWYG editors. Even those of us who like raw text editors find it easier to just type in the text and tags rather than fiddle with string concatenation in a programming language.
Of course the issue is with dynamic Web pages - those that take the results of something like database queries and embed them into the HTML. The page looks different with each result, and as a result regular HTML editors aren’t up to the job.
The best way to work is to compose the dynamic Web page as you do a static page but put in markers that can be resolved into calls to gather dynamic information. Since the static part of the page acts as a template for the particular response, I call this a Template View.
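
A tiny Template View sketch in TypeScript; the {{marker}} syntax and regex substitution are a stand-in for a real template engine:
function renderTemplate(template: string, model: Record<string, string>): string {
  // Each {{marker}} in the static page resolves to a model value at render time.
  return template.replace(/\{\{(\w+)\}\}/g, (_, key: string) => model[key] ?? '');
}

const page = '<h1>Hello, {{name}}</h1><p>You have {{count}} messages.</p>';
console.log(renderTemplate(page, { name: 'Ada', count: '3' }));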

Transform View
A view that processes domain data element by element and transforms it into HTML.
When you issue requests for data to the domain and data source layers, you get back all the data you need to satisfy them, but without the formatting you need to make a proper Web page. The role of the view in Model View Controller (330) is to render this data into a Web page. Using Transform View means thinking of this as a transformation where you have the model’s data as input and its HTML as output.
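
A Transform View sketch in TypeScript; the Album type is an assumption, and a pure function walks the domain data element by element and emits HTML:
interface Album { title: string; artist: string; }

function albumsToHtml(albums: Album[]): string {
  const rows = albums
    .map(a => '<tr><td>' + a.title + '</td><td>' + a.artist + '</td></tr>')
    .join('');
  return '<table>' + rows + '</table>';
}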

Two Step View
Turns domain data into HTML in two steps: first by forming some kind of logical page, then rendering the logical page into HTML.
If you have a Web application with many pages, you often want a consistent look and organization to the site. If every page looks different, you end up with a site that users find confusing. You may also want to make global changes to the appearance of the site easily, but common approaches using Template View (350) or Transform View (361) make this difficult because presentation decisions are often duplicated across multiple pages or transform modules. A global change can force you to change several files.
Two Step View deals with this problem by splitting the transformation into two stages. The first transforms the model data into a logical presentation without any specific formatting; the second converts that logical presentation into the actual formatting needed. This way you can make a global change by altering the second stage, or you can support multiple output looks and feels with one second stage each.
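
A Two Step View sketch in TypeScript; the logical-page shape and both stages are invented for illustration:
interface LogicalPage { title: string; items: string[]; }

// Stage one: domain data to a format-free logical page.
function firstStage(albums: { title: string }[]): LogicalPage {
  return { title: 'Albums', items: albums.map(a => a.title) };
}

// Stage two: logical page to concrete HTML. Restyling every page on the
// site means editing only this stage.
function secondStage(page: LogicalPage): string {
  const items = page.items.map(i => '<li>' + i + '</li>').join('');
  return '<h1>' + page.title + '</h1><ul>' + items + '</ul>';
}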

Application Controller
A centralized point for handling screen navigation and the flow of an application.
Some applications contain a significant amount of logic about the screens to use at different points, which may involve invoking certain screens at certain times in an application. This is the wizard style of interaction, where the user is led through a series of screens in a certain order. In other cases we may see screens that are only brought in under certain conditions, or choices between different screens that depend on earlier input.
To some degree the various Model View Controller (330) input controllers can make some of these decisions, but as an application gets more complex this can lead to duplicated code as several controllers for different screens need to know what to do in a certain situation.
You can remove this duplication by placing all the flow logic in an Application Controller. Input controllers then ask the Application Controller for the appropriate commands for execution against a model and the correct view to use depending on the application context.

Offline Concurrency Patterns

Optimistic Offline Lock
Prevents conflicts between concurrent business transactions by detecting a conflict and rolling back the transaction.
Often a business transaction executes across a series of system transactions. Once outside the confines of a single system transaction, we can’t depend on our database manager alone to ensure that the business transaction will leave the record data in a consistent state. Data integrity is at risk once two sessions begin to work on the same records and lost updates are quite possible. Also, with one session editing data that another is reading an inconsistent read becomes likely.
Optimistic Offline Lock solves this problem by validating that the changes about to be committed by one session don’t conflict with the changes of another session. A successful pre-commit validation is, in a sense, obtaining a lock indicating it’s okay to go ahead with the changes to the record data. So long as the validation and the updates occur within a single system transaction the business transaction will display consistency.
Whereas Pessimistic Offline Lock (426) assumes that the chance of session conflict is high and therefore limits the system’s concurrency, Optimistic Offline Lock assumes that the chance of conflict is low. The expectation that session conflict isn’t likely allows multiple users to work with the same data at the same time.
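
A common way to implement this is a version number per record; the TypeScript sketch below uses an in-memory store as a stand-in for the database:
interface VersionedRecord { id: string; version: number; data: string; }

class RecordStore {
  private records = new Map<string, VersionedRecord>();
  read(id: string): VersionedRecord | undefined { return this.records.get(id); }
  // The pre-commit validation: a stale version means another session won.
  update(candidate: VersionedRecord): void {
    const current = this.records.get(candidate.id);
    if (!current || current.version !== candidate.version) {
      throw new Error('Conflict: record changed by another session');
    }
    this.records.set(candidate.id, { ...candidate, version: candidate.version + 1 });
  }
}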

Pessimistic Offline Lock
Prevents conflicts between concurrent business transactions by allowing only one business transaction at a time to access data.
Since offline concurrency involves manipulating data for a business transaction that spans multiple requests, the simplest approach would seem to be having a system transaction open for the whole business transaction. Sadly, however, this doesn’t always work well because transaction systems aren’t geared to work with long transactions. For that reason you have to use multiple system transactions, at which point you’re left to your own devices to manage concurrent access to your data.
The first approach to try is Optimistic Offline Lock (416). However, that pattern has its problems. If several people access the same data within a business transaction, one of them will commit easily but the others will conflict and fail. Since the conflict is only detected at the end of the business transaction, the victims will do all the transaction work only to find at the last minute that the whole thing will fail and their time will have been wasted. If this happens a lot on lengthy business transactions the system will soon become very unpopular.
Pessimistic Offline Lock prevents conflicts by avoiding them altogether. It forces a business transaction to acquire a lock on a piece of data before it starts to use it, so that, most of the time, once you begin a business transaction you can be pretty sure you’ll complete it without being bounced by concurrency control.
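
A minimal lock-manager sketch in TypeScript; the record/session identifiers are assumptions, and a second session is refused up front rather than failing at commit time:
class LockManager {
  private owners = new Map<string, string>();  // recordId -> owning sessionId
  acquire(recordId: string, sessionId: string): void {
    const owner = this.owners.get(recordId);
    if (owner && owner !== sessionId) {
      throw new Error('Record ' + recordId + ' is locked by another session');
    }
    this.owners.set(recordId, sessionId);
  }
  release(recordId: string, sessionId: string): void {
    if (this.owners.get(recordId) === sessionId) this.owners.delete(recordId);
  }
}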

Coarse-Grained Lock
Locks a set of related objects with a single lock.
Objects can often be edited as a group. Perhaps you have a customer and its set of addresses. If so, when using the application it makes sense to lock all of these items if you want to lock any one of them. Having a separate lock for individual objects presents a number of challenges. First, anyone manipulating them has to write code that can find them all in order to lock them. This is easy enough for a customer and its addresses, but it gets tricky as you get more locking groups. And what if the groups get complicated? Where is this behavior when your framework is managing lock acquisition? If your locking strategy requires that an object be loaded in order to be locked, such as with Optimistic Offline Lock (416), locking a large group affects performance. And with Pessimistic Offline Lock (426) a large lock set is a management headache and increases lock table contention.
A Coarse-Grained Lock is a single lock that covers many objects. It not only simplifies the locking action itself but also frees you from having to load all the members of a group in order to lock them.

Implicit Lock
Allows framework or layer supertype code to acquire offline locks.
The key to any locking scheme is that there are no gaps in its use. Forgetting to write a single line of code that acquires a lock can render an entire offline locking scheme useless. Failing to retrieve a read lock where other transactions use write locks means you might not get up-to-date session data; failing to use a version count properly can result in unknowingly writing over someone’s changes. Generally, if an item might be locked anywhere it must be locked everywhere. Ignoring its application’s locking strategy allows a business transaction to create inconsistent data. Not releasing locks won’t corrupt your record data, but it will eventually bring productivity to a halt. Because offline concurrency management is difficult to test, such errors might go undetected by all of your test suites.
One solution is to not allow developers to make such a mistake. Locking tasks that cannot be overlooked should be handled not explicitly by developers but implicitly by the application. The fact that most enterprise applications make use of some combination of framework, Layer Supertypes (475), and code generation provides us with ample opportunity to facilitate Implicit Lock.

Base Patterns
Gateway
An object that encapsulates access to an external system or resource.
Interesting software rarely lives in isolation. Even the purest object-oriented system often has to deal with things that aren’t objects, such as relational database tables, CICS transactions, and XML data structures.
When accessing external resources like this, you’ll usually get APIs for them. However, these APIs are naturally going to be somewhat complicated because they take the nature of the resource into account. Anyone who needs to understand a resource needs to understand its API - whether JDBC and SQL for relational databases or W3C or JDOM for XML. Not only does this make the software harder to understand, it also makes it much harder to change should you shift some data from a relational database to an XML message at some point in the future.
The answer is so common that it’s hardly worth stating. Wrap all the special API code into a class whose interface looks like a regular object. Other objects access the resource through this Gateway, which translates the simple method calls into the appropriate specialized API.
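
A Gateway sketch in TypeScript; the raw directory API below is a hypothetical stand-in for an awkward external interface:
interface RawDirectoryApi {   // hypothetical low-level external API
  bind(dn: string, secret: string): void;
  search(base: string, filter: string): string[][];
}

class UserDirectoryGateway {
  constructor(private api: RawDirectoryApi) {}
  // A simple method call translated into the specialized API underneath.
  findEmail(userName: string): string | undefined {
    this.api.bind('cn=app,ou=system', 'secret');
    const entries = this.api.search('ou=people', '(uid=' + userName + ')');
    return entries[0]?.[0];   // translate the raw result into a plain value
  }
}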

Mapper
An object that sets up a communication between two independent objects.
Sometimes you need to set up communications between two subsystems that still need to stay ignorant of each other. This may be because you can’t modify them or you can but you don’t want to create dependencies between the two or even between them and the isolating element.

Layer Supertype
A type that acts as the supertype for all types in its layer.
It’s not uncommon for all the objects in a layer to have methods you don’t want to have duplicated throughout the system. You can move all of this behavior into a common Layer Supertype.
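
A Layer Supertype sketch in TypeScript; the identity-based equality is an invented example of behavior shared across a layer:
abstract class DomainObject {
  constructor(readonly id: string) {}
  // Behavior every object in the domain layer needs, written once.
  equals(other: DomainObject): boolean { return this.id === other.id; }
}

class Customer extends DomainObject {}
class Invoice extends DomainObject {}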

Separated Interface
Defines an interface in a separate package from its implementation.
As you develop a system, you can improve the quality of its design by reducing the coupling between the system’s parts. A good way to do this is to group the classes into packages and control the dependencies between them. You can then follow rules about how classes in one package can call classes in another - for example, one that says that classes in the domain layer may not call classes in the presentation package.
However, you might need to invoke methods that contradict the general dependency structure. If so, use Separated Interface to define an interface in one package but implement it in another. This way a client that needs the dependency to the interface can be completely unaware of the implementation. The Separated Interface provides a good plug point for Gateway (466).

Registry
A well-known object that other objects can use to find common objects and services.
When you want to find an object you usually start with another object that has an association to it, and use the association to navigate to it. Thus, if you want to find all the orders for a customer, you start with the customer object and use a method on it to get the orders. However, in some cases you won’t have an appropriate object to start with. You may know the customer’s ID number but not have a reference. In this case you need some kind of lookup method - a finder - but the question remains: How do you get to the finder?
A Registry is essentially a global object, or at least it looks like one - even if it isn’t as global as it may appear.
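
A Registry sketch in TypeScript; the PersonFinder and its lookup are hypothetical, and the static accessor hides how global the instance really is:
class PersonFinder {
  find(id: number): string { return 'person ' + id; }  // stand-in lookup
}

class Registry {
  private static soleInstance = new Registry();
  private readonly finder = new PersonFinder();
  static personFinder(): PersonFinder { return Registry.soleInstance.finder; }
}

// Usage: no starting object needed, just the well-known Registry.
const person = Registry.personFinder().find(1234);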

Value Object
A small simple object, like money or a date range, whose equality isn’t based on identity.
With object systems of various kinds, I’ve found it useful to distinguish between reference objects and Value Objects. Of the two a Value Object is usually the smaller; it’s similar to the primitive types present in many languages that aren’t purely object-oriented.

Money
Represents a monetary value.
A large proportion of the computers in this world manipulate money, so it’s always puzzled me that money isn’t actually a first class data type in any mainstream programming language. The lack of a type causes problems, the most obvious surrounding currencies. If all your calculations are done in a single currency, this isn’t a huge problem, but once you involve multiple currencies you want to avoid adding your dollars to your yen without taking the currency differences into account. The more subtle problem is with rounding. Monetary calculations are often rounded to the smallest currency unit. When you do this it’s easy to lose pennies (or your local equivalent) because of rounding errors.
The good thing about object-oriented programming is that you can fix these problems by creating a Money class that handles them. Of course, it’s still surprising that none of the mainstream base class libraries actually do this.
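
A Money sketch in TypeScript; storing an integer count of the smallest currency unit (the details here are one plausible design, not a standard library class) avoids silently losing pennies to rounding:
class Money {
  private constructor(readonly cents: number, readonly currency: string) {}
  static dollars(amount: number): Money { return new Money(Math.round(amount * 100), 'USD'); }
  add(other: Money): Money {
    if (other.currency !== this.currency) throw new Error('Currency mismatch');
    return new Money(this.cents + other.cents, this.currency);
  }
  // Value Object equality: based on the values, not object identity.
  equals(other: Money): boolean {
    return this.cents === other.cents && this.currency === other.currency;
  }
}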

Special Case
A subclass that provides special behavior for particular cases.
Nulls are awkward things in object-oriented programs because they defeat polymorphism. Usually you can invoke foo freely on a variable reference of a given type without worrying about whether the item is the exact type or a subclass. With a strongly typed language you can even have the compiler check that the call is correct. However, since a variable can contain null, you may run into a runtime error by invoking a message on null, which will get you a nice, friendly stack trace.
If it's possible for a variable to be null, you have to remember to surround it with null test code so you'll do the right thing if a null is present. Often the right thing is the same in many contexts, so you end up writing similar code in lots of places - committing the sin of code duplication.
Nulls are a common example of such problems and others crop up regularly. In number systems you have to deal with infinity, which has special rules for things like addition that break the usual invariants of real numbers. One of my earliest experiences in business software was with a utility customer who wasn't fully known, referred to as "occupant." All of these imply altering the usual behavior of the type.
Instead of returning null, or some odd value, return a Special Case that has the same interface as what the caller expects.
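
A Special Case sketch in TypeScript, using the "occupant" example from above; the Customer interface and defaults are invented for illustration:
interface Customer { name(): string; billingPlan(): string; }

class RealCustomer implements Customer {
  constructor(private customerName: string, private plan: string) {}
  name(): string { return this.customerName; }
  billingPlan(): string { return this.plan; }
}

// Same interface as a real customer, with sensible default behavior.
class UnknownCustomer implements Customer {
  name(): string { return 'occupant'; }
  billingPlan(): string { return 'basic'; }
}

// Callers never test for null; they just use whatever comes back.
function findCustomer(id: string, db: Map<string, Customer>): Customer {
  return db.get(id) ?? new UnknownCustomer();
}
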
Plugin
Links classes during configuration rather than compilation.
Separated Interface (476) is often used when application code runs in multiple runtime environments, each requiring different implementations of particular behavior. Most developers supply the correct implementation by writing a factory method. Suppose you define your primary key generator with a Separated Interface (476) so that you can use a simple in-memory counter for unit testing but a database-managed sequence for production. Your factory method will most likely contain a conditional statement that looks at a local environment variable, determines if the system is in test mode, and returns the correct key generator. Once you have a few factories you have a mess on your hands. Establishing a new deployment configuration - say "execute unit tests against in-memory database without transaction control" or "execute in production mode against DB2 database with full transaction control" - requires editing conditional statements in a number of factories, rebuilding, and redeploying. Configuration shouldn't be scattered throughout your application, nor should it require a rebuild or redeployment. Plugin solves both problems by providing centralized, runtime configuration.
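
A Plugin sketch in TypeScript, using the key-generator example above; the configuration table is a simplified stand-in for a configuration-file-driven lookup:
interface IdGenerator { nextId(): string; }

class InMemoryCounter implements IdGenerator {
  private n = 0;
  nextId(): string { return String(++this.n); }
}

class DbSequence implements IdGenerator {
  nextId(): string { return 'db-id'; }   // would really call the database
}

// One central, configuration-driven lookup replaces conditionals
// scattered across many factories.
const plugins: Record<string, new () => IdGenerator> = {
  test: InMemoryCounter,
  production: DbSequence,
};
const mode = 'test';   // would come from a config file, not the source
const generator: IdGenerator = new plugins[mode]();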

Service Stub
Removes dependence upon problematic services during testing.
Enterprise systems often depend on access to third-party services such as credit scoring, tax rate lookups, and pricing engines. Any developer who has built such a system can speak to the frustration of being dependent on resources completely out of his control. Feature delivery is unpredictable, and as these services are often remote, reliability and performance can suffer as well.
At the very least these problems slow the development process. Developers sit around waiting for the service to come back on line or maybe put some hacks into the code to compensate for yet-to-be-delivered features. Much worse, and quite likely, such dependencies will lead to times when tests can’t execute. When tests can’t run the development process is broken.
Replacing the service during testing with a Service Stub that runs locally, fast, and in memory improves your development experience.
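
A Service Stub sketch in TypeScript, using the tax-rate example; the interface, rates, and states are invented for illustration:
interface TaxService { taxRate(stateCode: string): number; }

// The production implementation would make a remote call; the stub
// answers instantly, locally, with canned data.
class TaxServiceStub implements TaxService {
  taxRate(stateCode: string): number {
    return stateCode === 'CA' ? 0.0725 : 0.05;  // invented rates
  }
}

function totalWithTax(amount: number, state: string, taxes: TaxService): number {
  return amount * (1 + taxes.taxRate(state));
}

console.log(totalWithTax(100, 'CA', new TaxServiceStub()));  // tests run fast and offline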

Record Set
An in-memory representation of tabular data.
In the last twenty years, the dominant way to represent data in a database has been the tabular relational form. Backed by database companies big and small, and a fairly standard query language, almost every new development I see uses relational data.
On top of this has come a wealth of tools for building UIs quickly. These data-aware UI frameworks rely on the fact that the underlying data is relational, and they provide UI widgets of various kinds that make it easy to view and manipulate this data with almost no programming.
The dark side of these environments is that, while they make display and simple updates ridiculously easy, they have no real facilities in which to place business logic. Any validations beyond “is this a valid date,” and any business rules or computations have no good place to go. Either they’re jammed into the database as stored procedures or they’re mingled with UI code.
The idea of the Record Set is to give you your cake and let you eat it, by providing an in-memory structure that looks exactly like the result of an SQL query but can be generated and manipulated by other parts of the system.

14
Q

Flux:

  • Structure and Data Flow
  • Dispatcher
  • Stores
  • Actions
A

Overview
Flux is a pattern for managing data flow in your application. The most important concept is that data flows in one direction. As we go through this guide we’ll talk about the different pieces of a Flux application and show how they form unidirectional cycles that data can flow through.

Flux Parts

  • Dispatcher
  • Store
  • Action
  • View

Dispatcher
The dispatcher receives actions and dispatches them to stores that have registered with the dispatcher. Every store will receive every action. There should be only one singleton dispatcher in each application.
Example:
1. User types in title for a todo and hits enter.
2. The view captures this event and dispatches an “add-todo” action containing the title of the todo.
3. Every store will then receive this action.
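
A minimal dispatcher sketch in TypeScript; this is an illustrative reduction of the idea, not Facebook's actual Dispatcher API:
interface Action { type: string; [key: string]: unknown; }

class Dispatcher {
  private callbacks: Array<(action: Action) => void> = [];
  register(callback: (action: Action) => void): void {
    this.callbacks.push(callback);
  }
  dispatch(action: Action): void {
    for (const callback of this.callbacks) callback(action);  // every store receives every action
  }
}

const dispatcher = new Dispatcher();  // the application's single dispatcher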

Store
A store is what holds the data of an application. Stores will register with the application’s dispatcher so that they can receive actions. The data in a store must only be mutated by responding to an action. There should not be any public setters on a store, only getters. Stores decide what actions they want to respond to. Every time a store’s data changes it must emit a “change” event. There should be many stores in each application.
Examples:
1. Store receives an “add-todo” action.
2. It decides it is relevant and adds the todo to the list of things that need to be done today.
3. The store updates its data and then emits a “change” event.
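
A minimal store sketch in TypeScript, assuming the Dispatcher sketch above; the todo data and action fields are illustrative:
class TodoStore {
  private todos: string[] = [];
  private listeners: Array<() => void> = [];
  constructor(dispatcher: Dispatcher) {
    dispatcher.register(action => {
      if (action.type === 'add-todo') {       // the store decides which actions matter to it
        this.todos.push(String(action.title));
        this.emitChange();                    // every mutation emits a "change" event
      }
    });
  }
  getTodos(): readonly string[] { return this.todos; }  // getters only, no public setters
  addChangeListener(listener: () => void): void { this.listeners.push(listener); }
  private emitChange(): void { for (const listener of this.listeners) listener(); }
}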

Actions
Actions define the internal API of your application. They capture the ways in which anything might interact with your application. They are simple objects that have a “type” field and some data.
Actions should be semantic and descriptive of the action taking place. They should not describe implementation details of that action. Use “delete-user” rather than breaking it up into “delete-user-id”, “clear-user-data”, “refresh-credentials” (or however the process works). Remember that all stores will receive the action and can know they need to clear the data or refresh credentials by handling the same “delete-user” action.
Examples:
When a user clicks “delete” on a completed todo a single “delete-todo” action is dispatched:
{
  type: 'delete-todo',
  todoID: '1234',
}

Views
Data from stores is displayed in views. Views can use whatever framework you want (in most examples here we will use React). When a view uses data from a store it must also subscribe to change events from that store. Then when the store emits a change the view can get the new data and re-render. If a component ever uses a store and does not subscribe to it then there is likely a subtle bug waiting to be found. Actions are typically dispatched from views as the user interacts with parts of the application’s interface.
Example:
1. The main view subscribes to the TodoStore.
2. It accesses a list of the Todos and renders them in a readable format for the user to interact with.
3. When a user types in the title of a new Todo and hits enter the view tells the Dispatcher to dispatch an action.
4. All stores receive the dispatched action.
5. The TodoStore handles the action and adds another Todo to its internal data structure, then emits a “change” event.
6. The main view is listening for the “change” event. It gets the event, gets new data from the TodoStore, and then re-renders the list of Todos in the user interface.

Flow of data
We can piece the parts of Flux above into a diagram describing how data flows through the system.
1. Views send actions to the dispatcher.
2. The dispatcher sends actions to every store.
3. Stores send data to the views.

Link:
https://github.com/facebook/flux/tree/master/examples/flux-concepts
