Architecture
What is the “environment”?
The application environment consists of three main areas:
- Infrastructure
- Configuration
- Dependencies
Infrastructure is the most important element of the environment, as it defines where the application will run, the specific configuration needs, and how dependencies need to interact with the application.
Configuration is the next most important aspect of the application environment. Configuration dictates both how the application behaves in a given infrastructure and how the infrastructure behaves in relation to the underlying application.
Dependencies are all the different modules or systems an application depends on, from libraries to services or other applications.
From <https://clarive.com/why-environment-provisioning/>

CAP theorem
From <https://en.wikipedia.org/wiki/CAP_theorem>
No distributed system is safe from network failures, thus network partitioning generally has to be tolerated. In the presence of a partition, one is then left with two options: consistency or availability. When choosing consistency over availability, the system will return an error or a time-out if particular information cannot be guaranteed to be up to date due to network partitioning. When choosing availability over consistency, the system will always process the query and try to return the most recent available version of the information, even if it cannot guarantee it is up to date due to network partitioning.
Database systems designed with traditional ACID guarantees in mind, such as RDBMS, choose consistency over availability, whereas systems designed around the BASE philosophy, common in the NoSQL movement for example, choose availability over consistency.
Many NoSQL stores compromise consistency (in the sense of the CAP theorem) in favor of availability, partition tolerance, and speed. Barriers to the greater adoption of NoSQL stores include the use of low-level query languages (instead of SQL, for instance the lack of ability to perform ad hoc joins across tables), lack of standardized interfaces, and huge previous investments in existing relational databases. Most NoSQL stores lack true ACID transactions.
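A toy sketch of the trade-off described above, assuming a simulated partition flag (real systems detect partitions via timeouts and quorums): the CP-style replica refuses to answer during a partition, while the AP-style replica returns its latest local, possibly stale, value.

```python
# CP vs AP behaviour during a (simulated) network partition.
class Replica:
    def __init__(self):
        self.value = None
        self.partitioned = False  # simulated: True when this node cannot reach its peers

    def write(self, value):
        self.value = value

class CPReplica(Replica):
    def read(self):
        # Consistency over availability: return an error rather than risk a stale answer.
        if self.partitioned:
            raise TimeoutError("cannot guarantee the value is up to date during a partition")
        return self.value

class APReplica(Replica):
    def read(self):
        # Availability over consistency: always answer with the most recent local value,
        # which may be stale relative to writes applied on the other side of the partition.
        return self.value

cp, ap = CPReplica(), APReplica()
cp.write("v1"); ap.write("v1")
cp.partitioned = ap.partitioned = True   # the network splits
print("AP read:", ap.read())             # "v1", possibly stale
try:
    cp.read()
except TimeoutError as err:
    print("CP read:", err)
```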
What is distributed caching and when is it used?
Today’s web, mobile and IoT applications need to operate at web scale, anticipating millions of users, terabytes of data and sub-millisecond response times, as well as operating on multiple devices around the world.
Distributed caching solves many common problems with data access, improving performance, manageability and scalability, but what is it and how can it benefit businesses?
What is distributed caching?
Caching has become the de facto technology to boost application performance as well as reduce costs. The primary goal of caching is to alleviate bottlenecks that come with traditional databases. By caching frequently used data in memory – rather than making database round trips – application response times can be dramatically improved.
Distributed caching is simply an extension of this concept, but the cache is configured to span multiple servers. It's commonly used in cloud computing and virtualised environments, where different servers contribute a portion of their memory to a pool that can then be accessed by virtual machines. This also makes it a much more scalable option.
The data stored in a distributed cache is quite simply whatever is accessed the most, and its contents change over time as items that haven't been requested in a while are evicted.
Distributed caching can also substantially lower capital and operating costs by reducing workloads on backend systems and reducing network usage. In particular, if the application runs on a relational database such as Oracle, which requires high-end, costly hardware in order to scale, distributed caching that runs on low-cost commodity servers can reduce the need to add expensive resources.
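A minimal sketch of the cache-aside idea described above. A plain in-process dict with a TTL stands in for a real distributed cache such as Redis or Memcached, and `fetch_from_db` is a hypothetical placeholder for the database round trip.

```python
import time

CACHE = {}            # key -> (value, expires_at); stand-in for a distributed cache
TTL_SECONDS = 60      # entries that are not refreshed within the TTL age out

def fetch_from_db(key):
    # Hypothetical placeholder for the expensive database round trip.
    time.sleep(0.05)
    return f"row-for-{key}"

def get(key):
    entry = CACHE.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                               # cache hit: no database round trip
    value = fetch_from_db(key)                        # cache miss: go to the database
    CACHE[key] = (value, time.time() + TTL_SECONDS)   # populate the cache for next time
    return value

print(get("user:42"))   # miss -> hits the "database"
print(get("user:42"))   # hit  -> served from cache
```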
What makes distributed caching effective?
The requirements for effective distributed caching are fairly straightforward. Enterprises generally factor six key criteria into their evaluation, but how important they are depends on the specific situation.
- Performance: Specific performance requirements are driven by the underlying application. For a given workload, the cache must meet and sustain the application’s required steady-state performance targets for latency and throughput. Efficiency of performance is a related factor that impacts cost, complexity and manageability.
- Scalability: As the workload increases, the cache must continue to deliver the same performance. The cache must be able to scale linearly, easily, affordably and without adversely impacting application performance and availability.
- Availability: Data needs to always be available during both planned and unplanned interruptions, so the cache must ensure availability of data 24/7.
- Manageability: The use of a cache should not place undue burden on the operations team. It should be reasonably quick to deploy and easy to monitor and manage.
- Simplicity: Adding a cache to a deployment should not introduce unnecessary complexity, or make more work for developers.
- Affordability: Cost is always a consideration with any IT decision, both upfront implementation as well as ongoing costs. An evaluation should consider total cost of ownership, including license fees as well as hardware, services, maintenance and support.
From <https://www.itpro.co.uk/virtualisation/30271/our-5-minute-guide-to-distributed-caching>
Microservices
There are no rules, just tradeoffs!
Back in 1986, Fred Brooks, author of The Mythical Man-Month, said that in software engineering there are no silver bullets. In other words, there are no techniques or technologies that, if adopted, would give you a 10X boost in productivity.
From <https://livebook.manning.com/#!/book/microservice-patterns/chapter-1/v-9/143>
http://microservices.io/patterns/index.html
"When you book a flight on aa.com,
delta.com, or united.com, you’re seeing some of these concepts in action. When
you choose a seat, you don’t actual get assigned it, you reserve it. When you
book your flight, you don’t actually have a ticket. You get an email later
telling you you’ve been confirmed/Ticketed. Have you ever had a plane change
and be assigned a different seat for the actual flight? Or been to the gate and
heard them ask for volunteers to give up their seat because they oversold the flight?
These are all examples of transactional boundaries, eventual consistency,
compensating transactions, and even apologies at work.
The moral of the story here is that data, data integration, data boundaries, enterprise usage patterns, distributed systems theory, timing, etc., are all the hard parts of microservices (since microservices is really just distributed systems!). I’m seeing too much confusion around technology (“if I use Spring Boot I’m doing microservices”, “I need to solve service discovery and load balancing in the cloud before I can do microservices”, “I must have a single database per microservice”) and useless “rules” regarding microservices. Don’t worry. Once the big vendors have come and sold you all the fancy suites of products (mmm… SOA ring a bell?), you’ll still be left to do the hard parts listed above.
Another perceived "disadvantage" to this approach is that it takes an enterprise significantly longer to gain agreement amongst business owners on what the transactional boundaries are. The desire to get "something" out the door quickly often trumps good design and causes issues down the road. We need to push DDD thinking upstream to the business users as well, which will help them understand the tradeoffs they are making ahead of time."
SOA vs Microservices:
Some critics of the microservice architecture claim that it is nothing new and that it is just SOA. At a very high level there are some similarities: SOA and the microservice architecture are architectural styles that structure a system as a set of services. But once you dig deeper you encounter significant differences.
SOA and the microservice architecture usually use different technology stacks. SOA applications typically use heavyweight technologies such as SOAP and other WS-* standards. They often use an ESB, a 'smart pipe' that contains business and message-processing logic, to integrate the services. Applications built using the microservice architecture tend to use lightweight, open-source technologies. The services communicate via 'dumb pipes', such as a message broker, or lightweight protocols such as REST or gRPC.
SOA and the microservice architecture also differ in how they treat data. SOA applications typically have a global data model and share databases. In contrast, as mentioned earlier, in the microservice architecture each service has its own database. Moreover, as I describe in chapter 2, each service is usually considered to have its own domain model.
Another key difference between SOA and the microservice architecture is the size of the services. SOA is typically used to integrate large, complex monolithic applications. While services in a microservice architecture are not always tiny, they are almost always much smaller. As a result, an SOA application usually consists of a few large services, whereas a microservices-based application will consist of tens or hundreds of smaller services.
Microservices Pros and Cons:
Pros:
- Enables the continuous delivery and deployment of large, complex applications
- Each service is a small, maintainable application
- Services are independently deployable
- Services are independently scalable
- The microservice architecture enables teams to be autonomous
- Easily experiment with and adopt new technologies
- Improved fault isolation
Cons:
- Finding the right set of services is challenging. If you decompose a system incorrectly you will build a distributed monolith, a system consisting of coupled services that must be deployed together. It has the drawbacks of both the monolithic architecture and the microservice architecture.
- Distributed systems are complex. Your organization’s developers must have sophisticated software development and delivery skills in order to successfully use microservices. The microservice architecture also introduces significant operational complexity. There are many more moving parts – multiple instances of different types of service – that must be managed in production. To successfully deploy microservices you need a high level of automation. You must use technologies such as:
  - Automated deployment tooling such as Netflix Spinnaker
  - An off-the-shelf PaaS such as Pivotal Cloud Foundry or Red Hat OpenShift
  - A Docker orchestration platform such as Docker Swarm or Kubernetes
- Deploying features that span multiple services requires careful coordination
- Deciding when to adopt the microservice architecture is difficult. Using the microservice architecture makes it much more difficult to iterate rapidly, so a startup should almost certainly begin with a monolithic application. For complex applications, however, such as a consumer-facing web application or SaaS application, it is usually the right choice.
Monolithic hell:
- Too complex; hard for a developer to understand the entire program
- Slow day-to-day development
- Hard to go Agile; everything takes too long
- Hard to scale; modules have different resource requirements
- Poor reliability; one broken piece can break everything
- Requires a long-term commitment to the technology stack
Software architecture has very little to do with functional requirements. Architecture matters because of how it affects the non-functional requirements, the "-ilities": maintainability, extensibility, testability, availability, scalability…
Domain-Driven Design (DDD)
Event-Driven Design
From <https://vaughnvernon.co/?p=838>
Disadvantages of the above model:
- It’s more complicated
- Difficult to debug
- Since you have a delay when seeing events, you cannot make any assumptions about what other systems know (which you cannot do anyway, but it’s more pronounced in this model)
- More difficult to operationalize
- You have to pay even more attention to the CAP theorem and the technologies you choose to implement your storage/queues
How to decompose?
- Decompose by verb or use case and define services that are responsible for particular actions, e.g. a Shipping Service that’s responsible for shipping complete orders.
- Decompose by nouns or resources by defining a service that is responsible for all operations on entities/resources of a given type, e.g. an Account Service that is responsible for managing user accounts (both styles are sketched below).
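A rough sketch of the two decomposition styles, using the Shipping Service and Account Service examples above; the class and method names are illustrative assumptions, not a prescribed interface.

```python
class ShippingService:
    """Verb/use-case oriented: owns the 'ship complete orders' action."""
    def ship_order(self, order_id: str) -> None:
        print(f"shipping order {order_id}")

class AccountService:
    """Noun/resource oriented: owns all operations on Account entities."""
    def create_account(self, email: str) -> dict:
        return {"id": 1, "email": email}

    def get_account(self, account_id: int) -> dict:
        return {"id": account_id}
```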
Use the Saga pattern for consistency when two-phase commit (2PC) is not an option.
Implement each business transaction that spans multiple services as a saga. A saga is a sequence of local transactions. Each local transaction updates the database and publishes a message or event to trigger the next local transaction in the saga. If a local transaction fails because it violates a business rule, then the saga executes a series of compensating transactions that undo the changes that were made by the preceding local transactions.
There are two ways of coordinating sagas (see the sketch after this list):
- Choreography - each local transaction publishes domain events that trigger local transactions in other services
- Orchestration - an orchestrator (object) tells the participants what local transactions to execute
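A minimal sketch of the orchestration approach, assuming each step is a pair of (local transaction, compensating transaction); the step names are invented for illustration. On failure, the orchestrator runs the compensations of the already-committed steps in reverse order.

```python
def run_saga(steps):
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception as err:
            print(f"saga failed ({err}); running compensating transactions...")
            for undo in reversed(completed):   # undo in reverse order
                undo()
            return False
    return True

def charge_card():
    raise RuntimeError("payment declined")     # simulated failure in the last local transaction

# Hypothetical "create order" saga spanning three services.
steps = [
    (lambda: print("Order Service: create order (PENDING)"),
     lambda: print("Order Service: mark order REJECTED")),
    (lambda: print("Customer Service: reserve credit"),
     lambda: print("Customer Service: release credit")),
    (charge_card, lambda: None),               # failing step has nothing of its own to undo
]
run_saga(steps)
```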
Pros:
- It enables an application to maintain data consistency across multiple services without using distributed transactions
Cons:
- The programming model is more complex. For example, a developer must design compensating transactions that explicitly undo changes made earlier in a saga.
- In order to be reliable, a service must atomically update its database and publish an event. It cannot use the traditional mechanism of a distributed transaction that spans the database and the message broker. Instead, it must use one of the patterns listed on that page, such as the transactional outbox.
From <http://microservices.io/patterns/data/saga.html>
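A minimal sketch of the transactional outbox idea mentioned above: the business change and the outgoing event are written in the same local database transaction, and a separate relay later reads the outbox table and publishes to the broker. The `orders` and `outbox` tables and column names here are assumptions for illustration, using SQLite.

```python
import json, sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "event_type TEXT, payload TEXT, published INTEGER DEFAULT 0)")

def create_order(order_id):
    with conn:  # both inserts commit (or roll back) atomically in one local transaction
        conn.execute("INSERT INTO orders (id, state) VALUES (?, 'PENDING')", (order_id,))
        conn.execute("INSERT INTO outbox (event_type, payload) VALUES (?, ?)",
                     ("OrderCreated", json.dumps({"orderId": order_id})))

def relay_once(publish):
    # A message relay polls the outbox (or tails the transaction log) and publishes events.
    rows = conn.execute(
        "SELECT id, event_type, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, event_type, payload in rows:
        publish(event_type, payload)
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()

create_order(42)
relay_once(lambda t, p: print(f"publishing {t}: {p}"))
```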
- Observability patterns
- UI patterns
From <http://microservices.io/patterns/microservices.html>

In 2017, AWS comprised more than 90 services spanning a wide range including computing, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things. The most popular include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). Most services are not exposed directly to end users, but instead offer functionality through APIs for developers to use in their applications. Amazon Web Services’ offerings are accessed over HTTP, using the REST architectural style and SOAP protocol.
Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm.[8] All services are billed based on usage, but each service measures usage in varying ways. As of 2017, AWS owns a dominant 34% of all cloud (IaaS, PaaS) market share, while the next three competitors Microsoft, Google, and IBM have 11%, 8%, and 6% respectively, according to Synergy Group.[9][10]
From <https://en.wikipedia.org/wiki/Amazon_Web_Services>
Useful Tools for Managing Complexity of Microservice Architecture
- Containers, Clustering and Orchestration, IaC
- Cloud Infrastructure, Serverless
- API Gateway
- Enterprise Service Bus
- Service Discovery
From <https://blog.byndyusoft.com/useful-tools-for-managing-complexity-of-microservice-architecture-109a2289acc>
Deal with Cross-Cutting Concerns
In the microservices world a great deal of time is spent discussing cross-cutting concerns and how to manage them. These are factors that sit across any application (“cutting across them”) and generally focus on the non-functional aspects of software development. Examples of cross-cutting concerns include:
Auditing | Logging | Persistence | Security | Enterprise Modelling | Exception Handling | Configuration Management | State Management | Transactionality
From <https://nordicapis.com/creating-a-microservices-framework-at-cibc-a-case-study/>
Build your microservices using a microservice chassis framework, which handles cross-cutting concerns.
Examples:
From <http://microservices.io/patterns/microservice-chassis.html>
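A rough sketch of the chassis idea in miniature: cross-cutting concerns (logging, timing, exception handling) live in one reusable wrapper instead of being re-implemented in every handler. The decorator and handler names are assumptions; a real chassis framework (e.g. Spring Boot with Spring Cloud in the Java world) covers far more, such as configuration and service discovery.

```python
import functools, logging, time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chassis")

def chassis(handler):
    """Wrap a service handler with shared cross-cutting behaviour."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        log.info("-> %s %s", handler.__name__, kwargs or args)      # auditing / logging
        try:
            result = handler(*args, **kwargs)
            log.info("<- %s ok in %.1f ms", handler.__name__,
                     (time.perf_counter() - start) * 1000)
            return result
        except Exception:
            log.exception("!! %s failed", handler.__name__)         # centralized exception handling
            raise
    return wrapper

@chassis
def get_account(account_id: int) -> dict:
    return {"id": account_id, "status": "active"}

get_account(7)
```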
Microsoft Microservice Architecture
https://www.microsoft.com/net/learn/architecture
Docker containers, images, and registries
A Docker image is a static representation of the application, its configuration and its dependencies.
A Docker container is an instance of an image running on a Docker host.
Docker images can be stored in a registry (on-prem or cloud – trusted registry, Azure, AWS, Google…).
However, the monolithic approach is common because the development of the application is initially easier than for microservices approaches. Thus, many organizations develop using this architectural approach. While some organizations have had good enough results, others are hitting limits. Many organizations designed their applications using this model because tools and infrastructure made it too difficult to build service-oriented architectures (SOA) years ago, and they did not see the need until the application grew.
Services typically need to call one another. In a monolithic application, services invoke one another through language-level method or procedure calls. In a traditional distributed system deployment, services run at fixed, well-known locations (hosts and ports) and so can easily call one another using HTTP/REST or some RPC mechanism. However, a modern microservice-based application typically runs in a virtualized or containerized environment where the number of instances of a service and their locations change dynamically.
From <https://microservices.io/patterns/server-side-discovery.html>
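A toy sketch of the registry lookup that both client-side and server-side discovery rely on (in server-side discovery this lookup happens in a router or load balancer rather than in the caller). The in-memory dict, service name, and addresses are assumptions for illustration; real deployments use a registry such as Eureka, Consul, or Kubernetes DNS.

```python
import random

REGISTRY = {}   # service name -> set of "host:port" instances

def register(service, instance):
    # Instances register themselves (or are registered by the platform) as they start.
    REGISTRY.setdefault(service, set()).add(instance)

def deregister(service, instance):
    REGISTRY.get(service, set()).discard(instance)

def resolve(service):
    # Resolve a logical service name to one currently-registered instance.
    instances = REGISTRY.get(service)
    if not instances:
        raise LookupError(f"no registered instances of {service}")
    return random.choice(sorted(instances))   # naive load balancing across instances

register("order-service", "10.0.0.5:8080")
register("order-service", "10.0.0.9:8080")
print("calling", resolve("order-service"))
deregister("order-service", "10.0.0.5:8080")  # instance stops; callers keep working
print("calling", resolve("order-service"))
```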

