Highly Available and Fault-Tolerant Architectures

Date: Aug 17, 2021

In this sample chapter from AWS Certified Solutions Architect - Associate (SAA-C02) Cert Guide, you will explore topics related to exam Domain 1: Design Resilient Architectures, including Disaster Recovery and Business Continuity, Automating AWS Architecture, and Elastic Beanstalk.

This chapter covers the following topics:

- Comparing Architecture Designs
- Disaster Recovery and Business Continuity
- Automating AWS Architecture
- Elastic Beanstalk

This chapter covers content that’s important to the following exam domain/objective:

- Domain 1: Design Resilient Architectures

The terms high availability and fault tolerance take on unique meanings in today’s hosted cloud. Amazon wants to make sure that you understand how these terms differ in a cloud environment from an on-premises situation. For example, you might be accustomed to using clustering as a common method to protect database data. In the cloud, clustering has largely been replaced with horizontal scaling and placing multiple copies of databases in different data centers. You could still build a cluster in the cloud, but there are better ways to achieve high availability and fault tolerance—or at least Amazon and many customers feel there are better ways. Ultimately, no matter the terminology used, your application should always be up and available when the end user needs it, and your data records should always be available. Both your application and your data should be able to handle problems in the backend: multiple web servers need to be available to take over the compute workload, and your data needs to be stored in numerous physical locations.

The questions on the AWS Certified Solutions Architect - Associate (SAA-C02) exam test your understanding of the services at AWS that enable you to design application stacks that are highly available and fault tolerant.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 3-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Q&A Sections.”

Table 3-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section | Questions
Comparing Architecture Designs | 1, 2
Disaster Recovery and Business Continuity | 3, 4
Automating AWS Architecture | 5, 6
Elastic Beanstalk | 7, 8

  1. Which of the following is a characteristic of applications that are designed to be highly available?

    1. Minimal downtime

    2. Increased performance

    3. Higher level of security

    4. Cost-effective operation

  2. Which of the following is a characteristic of applications that are designed with a high level of fault tolerance?

    1. Automatic recovery from most failures

    2. Higher speed

    3. Higher performance

    4. Cost-effective operation

  3. What does the term RPO stand for?

    1. Reliability point objective

    2. Recovery point objective

    3. Restore point objective

    4. Return to an earlier phase of operation

  4. What does the term RTO stand for?

    1. Recovery tier objective

    2. Recovery time objective

    3. The amount of data that will be lost

    4. The time the application will be down

  5. What is the purpose of using CloudFormation?

    1. To recover from failures

    2. To automate the building of AWS infrastructure components

    3. To deploy applications with automation

    4. To document manual tasks

  6. What CloudFormation component is used to check deployment changes to existing infrastructure?

    1. CloudFormation template

    2. Stack

    3. Change set

    4. JSON script

  7. What two components does Elastic Beanstalk deploy?

    1. Infrastructure and storage

    2. Application and infrastructure

    3. Compute and storage

    4. Containers and instances

  8. What term defines application updates that are applied to a new set of EC2 instances?

    1. Rolling

    2. Immutable

    3. All at once

    4. Blue/green

Comparing Architecture Designs

Successful applications hosted at AWS have some semblance of high availability and/or fault tolerance in the underlying design. As you prepare for the AWS Certified Solutions Architect - Associate (SAA-C02) exam, you need to understand these concepts in conjunction with the services that can provide a higher level of redundancy and availability. After all, if an application is designed to be highly available, it will have a higher level of redundancy; therefore, regardless of the potential failures that may happen from time to time, your application will continue to function at a working level without massive, unexpected downtime.

The same is true with a fault-tolerant design: if your application can tolerate operational issues such as degraded latency, speed, or durability, its design will be able to overcome most of these issues most of the time. Nothing is perfect, of course, but you can ensure high availability and fault tolerance for hosted applications by properly designing the application.

Designing for High Availability

Imagine that your initial foray into the AWS cloud involves hosting an application on a single large EC2 instance. This would be the simplest approach in terms of architecture, but there will eventually be issues with availability, such as the instance failing or needing to be taken down for maintenance. There are also design issues related to application state and data records stored on the same local instance.

As a first step, you could split the application into a web tier and a database tier with separate instances to provide some hardware redundancy; this improves availability because the two systems are unlikely to fail at the same time. If the mandate for this application is that it should be down for no more than 44 hours in any given 12-month period, this equates to 99.5% uptime. Say that tests show that the web server actually has 90% availability and the database server has 95% availability, as shown in Figure 3-1. Because every component in the chain (hardware, operating system, database engine, and network connectivity) must be up for the application to work, this design results in a total availability of 85.5%.

FIGURE 3-1 Availability Calculation for a Simple Hosted Application
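The math behind these numbers is simple probability and is worth sketching, assuming component failures are independent: availabilities of components in series multiply, while redundant components in parallel fail only if every copy fails at once.

Series (web tier and database tier): 0.90 × 0.95 = 0.855, or 85.5% total availability
Parallel (two 90% web servers): 1 − (0.10 × 0.10) = 0.99, or 99% for the web tier

These are idealized figures; the totals shown in the following figures also factor in the other components of each stack.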

This amount of uptime and availability obviously needs to be increased to meet the availability mandate. Adding an additional web server to the design increases the overall availability to 94.5%, as shown in Figure 3-2.

FIGURE 3-2 Increasing Application Availability by Adding Compute

The design now has some additional fault tolerance for the web tier so that one server can fail and the application will still be available. However, you also need to increase the database tier availability because, after all, the database tier is where the important data is stored. Adding two replica database servers to the database tier, along with a third web server in the web tier, results in both tiers achieving a total availability figure of 99.8% and achieving the SLA goal, as shown in Figure 3-3.

FIGURE 3-3 Adding Availability to Both Tiers

Adding Fault Tolerance

You can apply the same availability principles just discussed to the AWS data centers that host the application tiers. To do so, you increase application availability and add fault tolerance by designing with multiple availability zones, as shown in Figure 3-4.

FIGURE 3-4 Hosting Across Multiple AZs to Increase Availability and Fault Tolerance

Remember that each AWS availability zone has at least one physical data center. AWS data centers are built for high-availability designs, are connected together with high-speed/low-latency links, and are located far enough away from each other that a single natural disaster won’t affect all of the data centers in each AWS region at the same time. Thanks to this design, your application has high availability, and because of the additional availability, the application has more tolerance for failures when they occur.

Removing Single Points of Failure

Eliminating as many single points of failure as possible in your application stack design will also greatly increase high availability and fault tolerance. A single point of failure is any component of your application stack whose failure causes the rest of the integrated application to fail. Take some time to review Table 3-2, which lists possible mitigation paths for single points of failure.

Table 3-2 Avoiding Single Points of Failure

Possible Single Point of Failure | Mitigation Plan | Reason
On-premises DNS | Route 53 DNS | Anycast DNS services hosted across all AWS regions, with health checks and DNS failover.
Third-party load balancer | Elastic Load Balancing (ELB) services | ELB instances form a massive regional server farm with EIP addresses for fast failover.
Web/app server | ELB/Auto Scaling for each tier | Compute resources scale automatically up and down to meet demand.
RDS database servers | Redundant data nodes (primary/standby) | Synchronous replication between primary and standby nodes provides two copies of updated data.
EBS data storage | Snapshots and retention schedule | Snapshots can be copied across regions, adding additional redundancy.
Authentication problem | Redundant authentication nodes | Multiple Active Directory domain controllers provide alternate authentication options.
Data center failure | Multiple availability zones | Each region has multiple AZs, providing high-availability and failover design options.
Regional disaster | Multi-region deployment with Route 53 | Route 53 routing policies provide geo-redundancy for applications hosted across regions.

AWS recommends that you use a load balancer to balance the load of requests between multiple servers and to increase overall availability and reliability. The servers themselves are physically located in separate availability zones targeted directly by the load balancer. However, a single load-balancing appliance could be a single point of failure; if the load balancer failed, there would be no access to the application. You could add a second load balancer, but doing so would make the design more complicated, as failover to the redundant load balancer would require a DNS update for the client, and that would take some time. AWS gets around this problem through the use of elastic IP addresses that allow for rapid IP remapping, as shown in Figure 3-5. The elastic IP address is assigned to multiple load balancers, and if one load balancer is not available, the elastic IP address attaches itself to another load balancer.

FIGURE 3-5 Using Elastic IP Addresses to Provide High Availability

You can think of the elastic IP address as being able to float between resources as required. This software component is the mainstay of providing high-availability infrastructure at AWS. You can read further details about elastic IP addresses in Chapter 8, “Networking Solutions for Workloads.”

Table 3-3 lists AWS services that can be improved with high availability, fault tolerance, and redundancy.

Table 3-3 Planning for High Availability, Fault Tolerance, and Redundancy

AWS Service | High Availability | Fault Tolerance | Redundancy | Multi-Region
EC2 instance | Additional instance | Multiple availability zones | Auto Scaling | Route 53 health checks
EBS volume | Cluster design | Snapshots | AMI | Copy AMI/snapshot
Load balancer | Multiple AZs | Elastic IP addresses | Server farm | Route 53 geoproximity load balancing options
Containers | Elastic Container Service (ECS) | Fargate management | Application Load Balancer/Auto Scaling | Regional service; not multi-region
RDS deployment | Multiple AZs | Synchronous replication | Snapshots/backup of EBS data volumes and transaction records | Regional service; not multi-region
Custom EC2 database | Multiple AZs and replicas | Asynchronous/synchronous replication options | Snapshots/backup of EBS volumes | Custom high-availability and failover designs across regions with Route 53 traffic policies
Aurora (MySQL/PostgreSQL) | Replication across three AZs | Multiple writers | Clustered shared storage (virtual SAN) | Global database hosted and replicated across multiple AWS regions
DynamoDB (NoSQL) | Replication across three AZs | Multiple writers | Continuous backup to S3 | Global table replicated across multiple AWS regions
Route 53 | Health checks | Failover routing | Multivalue answer routing | Geolocation/geoproximity routing
S3 bucket | Same-region replication | Built-in | Built-in | Cross-region replication

Disaster Recovery and Business Continuity

How do you work around service failure at AWS? You must design for failure. Each customer must use the tools and services available at AWS to create an application environment with the goal of 100% availability. When failures occur at AWS, automated processes must be designed and in place to ensure proper failover with minimal to zero data loss. You can live with compute loss, although it might be painful; data loss is unacceptable—and it does not have to happen.

It is important to understand the published AWS uptime figures. For example, Amazon Relational Database Service (RDS) has been designed to fail a mere 52 minutes per year, but this does not mean you can schedule this potential downtime. As another example, just because Route 53, AWS’s DNS service, is designed for 100% uptime does not mean that Route 53 will not have issues. The published uptime figures are not guarantees; instead, they are what AWS strives for—and typically achieves.

When a cloud service fails, you are out of business during that timeframe. Failures are going to happen, and designing your AWS hosted applications for maximum uptime is the goal. You also must consider all the additional external services that allow you to connect to AWS: your telco, your ISP, and all the other moving bits. Considering all services in the equation, it’s difficult—and perhaps impossible—to avoid experiencing some downtime.

There are two generally agreed upon metrics that define disaster recovery:

- Recovery point objective (RPO): The maximum acceptable amount of data loss, measured in time. If your RPO is 1 hour, your replication or backup processes must ensure that stored data records are never more than 1 hour out of date.
- Recovery time objective (RTO): The maximum acceptable length of time that your application can be offline after a failure before service must be restored.

A customer operating in the AWS cloud needs to define acceptable values for RPO and RTO based on their needs and requirements and build them into a service-level agreement.

Backup and Restoration

On-premises DR has traditionally involved regular backups to tape and storage of the tapes off-site, in a safe location. This approach works, but recovery takes time. AWS offers several services that can assist you in designing a backup and restoration process that is much more effective than the traditional DR design. Most, if not all, third-party backup vendors have built-in native connectors to directly write to S3 as the storage target. Backups can be uploaded and written to S3 storage using a public Internet connection or using a faster private Direct Connect or VPN connection.

Pilot Light Solution

When you design a pilot light disaster recovery configuration, your web, application, and primary database servers are on premises and fully operational. Copies of the web and application servers are built on EC2 instances in the AWS cloud and are ready to go but are not turned on. Your on-premises primary database server replicates updates and changes to the standby database server hosted in the AWS cloud, as shown in Figure 3-6. When you are planning which AWS region to use for your disaster recovery site, the compliance rules and regulations that your company follows dictate which regions can be used. In addition, you want the region and availability zones to be as close as possible to your physical corporate location.

FIGURE 3-6 Pilot Light Setup

When a disaster occurs, the web and application instances, along with any other required infrastructure such as a load balancer at AWS, are initiated. The standby database server at AWS needs to be set as the primary database server, and the on-premises DNS services have to be configured to redirect traffic to the AWS cloud as the disaster recovery site, as shown in Figure 3-7. The RTO to execute a pilot light deployment is certainly faster than in the backup and restoration scenario, and there is no data loss, but there is no access to the hosted application at AWS until configuration is complete. The key to a successful pilot light solution is to have all of your initial preconfiguration work automated with CloudFormation templates, ensuring that your infrastructure is built and ready to go as fast as possible. CloudFormation automation is discussed later in this chapter.

FIGURE 3-7 Pilot Light Response
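The sort of CloudFormation pre-staging that a pilot light design relies on can be sketched with a template condition. This is a minimal sketch, not a complete DR template; the DisasterMode parameter, AMI ID, and instance type are hypothetical placeholders. The web server is declared in the template but created only when the stack is updated with DisasterMode set to true:

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  DisasterMode:
    Type: String
    Default: 'false'
    AllowedValues: ['true', 'false']
Conditions:
  ActivateDR: !Equals [!Ref DisasterMode, 'true']
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Condition: ActivateDR              # created only during a declared disaster
    Properties:
      ImageId: ami-0ff8a91497e77f667   # placeholder: a pre-built web server AMI
      InstanceType: t2.medium

Updating the stack with DisasterMode=true launches the predefined web tier in minutes, which is what keeps the pilot light RTO low.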

Warm Standby Solution

A warm standby solution speeds up recovery time because all the components in the warm standby stack are already in active operation—hence the term warm—but at a smaller scale of operation. Your web, application, and database servers are all in operation, including the load balancer, as shown in Figure 3-8.

FIGURE 3-8 Warm Standby Setup

The key variable is that the warm standby application stack is not active for production traffic until disaster strikes. When a disaster occurs, you recover by increasing the capacity of your web and application servers by changing the size or number of EC2 instances and reconfiguring DNS records to reroute the traffic to the AWS site, as shown in Figure 3-9. Because all resources were already active, the recovery time is shorter than with a pilot light solution; however, a warm standby solution is more expensive than a pilot light option as more resources are running 24/7.

FIGURE 3-9 Warm Standby Response

An application that requires less downtime with minimal data loss could also be deployed by using a warm standby design across two AWS regions. The entire workload is deployed to both AWS regions, using a separate application stack for each region.

Because data replication occurs across multiple AWS regions, the data will eventually be consistent, but the time required to replicate to both locations could be substantial. By using a read-local/write-global strategy, you could define one region as the primary for all database writes. Data would be replicated for reads to the other AWS region. If the primary database region then fails, failover to the passive site occurs. Obviously, this design has plenty of moving parts to consider and manage. This design could also take advantage of multiple availability zones within a single AWS region instead of using two separate regions.

Hot Site Solution

If you need RPO and RTO to be very low, you might want to consider deploying a hot site solution with active-active components running both on premises and in the AWS cloud. The secret sauce of the hot site is Route 53, the AWS DNS service. The database is mirrored and synchronously replicated, and the web and application servers are load balanced and in operation, as shown in Figure 3-10. The application servers are synchronized to the live data located on premises.

FIGURE 3-10 Hot Site Setup

Both application tiers are already in full operation; if there’s an issue with one of the application stacks, traffic gets redirected automatically, as shown in Figure 3-11. With AWS, you can use Auto Scaling to scale resources to meet the capacity requirements if the on-premises resources fail. A hot site solution is architected for disaster recovery events.

FIGURE 3-11 Hot Site Response
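Because Route 53 does the heavy lifting in a hot site design, a minimal sketch of the DNS side may help; the hosted zone ID, domain names, and endpoints below are hypothetical placeholders, not values from this chapter. A health check watches the on-premises endpoint, and Route 53 automatically answers with the AWS endpoint if that check fails:

Resources:
  PrimaryHealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTPS
        FullyQualifiedDomainName: onprem.example.com   # placeholder on-premises endpoint
        ResourcePath: /health
  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z1EXAMPLE          # placeholder hosted zone
      Name: app.example.com
      Type: CNAME
      TTL: '60'
      SetIdentifier: primary-on-premises
      Failover: PRIMARY
      HealthCheckId: !Ref PrimaryHealthCheck
      ResourceRecords:
        - onprem.example.com
  SecondaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z1EXAMPLE
      Name: app.example.com
      Type: CNAME
      TTL: '60'
      SetIdentifier: secondary-aws
      Failover: SECONDARY
      ResourceRecords:
        - my-load-balancer.us-east-1.elb.amazonaws.com   # placeholder AWS endpoint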

Multi-Region Active-Active Application Deployment

An active-active deployment of an application across multiple AWS regions adds a level of redundancy and availability, but it has an increased cost as each region hosts a complete application stack.

For the AWS Certified Solutions Architect - Associate (SAA-C02) exam, it is important to know that for an automated multi-region deployment, AWS offers Amazon Aurora, a relational database solution for maintaining the consistency of database records across the two AWS regions. Aurora, which is a relational database that is PostgreSQL and MySQL compatible, can function as a single global database operating across multiple AWS regions. An Aurora global database has one primary region and up to five read-only secondary regions. Cross-region replication latency with Aurora is typically around 1 second. Aurora allows you to create up to 16 database instances in each AWS region; these instances all remain up to date because Aurora storage is a clustered, shared virtual SAN storage solution.

With Aurora, if your primary region faces a disaster, one of the secondary regions can be promoted to take over the reading and writing responsibilities, as shown in Figure 3-12. Aurora cluster recovery can be accomplished in less than 1 minute. Applications that use this type of database design would have an effective RPO of 1 second and an RTO of less than 1 minute. Web and application servers in both AWS regions are placed behind elastic load-balancing services (ELB) at each tier level and also use Auto Scaling to automatically scale each application stack, when required, based on changes in application demand. Keep in mind that Auto Scaling can function across multiple availability zones; if one availability zone fails, Auto Scaling can compensate by adding the required compute resources in another availability zone.

FIGURE 3-12 Aurora DB Cluster with Multiple Writes

The AWS Certified Solutions Architect - Associate (SAA-C02) exam is likely to ask you to consider best practices based on various scenarios. There are many potential solutions to consider in the real world, and Amazon wants to ensure that you know about a variety of DR solutions.

The AWS Service-Level Agreement (SLA)

Many technical people over the years have described cloud service-level agreements (SLAs) as being inadequate—especially when they compare cloud SLAs with on-premises SLAs. Imagine that you were a cloud provider with multiple data centers and thousands of customers. How would you design an SLA? You would probably tell your customers something like “We do the best job we can, but computers do fail, as we know.” If you think about the difficulty of what to offer a customer when hardware and software failures occur, you will probably come up with the same solution that all cloud providers have arrived at: AWS SLAs commit each service to a defined monthly uptime percentage and offer service credits when that commitment is not met.

With AWS, many core services have separate SLAs. Certainly, the building blocks of any application—compute, storage, CDN, and DNS services—have defined service levels (see Table 3-4). However, an SLA does not really matter as much as how you design your application to get around failures when they occur.

Table 3-4 Service Levels at AWS

AWS Service | General Service Commitment
EC2 instances | 99.99%
EBS volumes | 99.99%
RDS | 99.95%
S3 buckets | 99.9%
Route 53 | 100%
CloudFront | 99.9%
Aurora | 99.99%
DynamoDB | 99.99%
Elastic Load Balancing | 99.99%

As you think about AWS cloud service SLAs, keep in mind that each service is going to fail, and you’re not going to have any warning of these failures. This is about the only guarantee you have when hosting applications in the public cloud: The underlying cloud services are going to fail unexpectedly. AWS services are typically stable for months, but failures do happen unexpectedly.

Most of the failures that occur in the cloud are compute failures. An instance that is powering an application server, a web server, a database server, or a caching server fails. What happens to your data? Your data in the cloud is replicated, at the very least, within the AZ where your instances are running. (Ideally, your data records reside on multiple EBS volumes.) This does not mean you can’t lose data in the cloud; if you never back up your data, you will probably lose it. And because customers are solely in charge of their own data, 100% data retention is certainly job one.

Automating AWS Architecture

Many systems have been put in place over the years to help successfully manage and deploy complicated software applications on complicated hardware stacks.

If you look at the AWS cloud as an operating system hosted on the Internet, the one characteristic of AWS that stands above all others is the level of integrated automation used to deploy, manage, and recover AWS services. There is not a single AWS service offered that is not heavily automated for deployment and in its overall operation. When you order a virtual private network (VPN), it’s created and available in seconds. If you order an Elastic Compute Cloud (EC2) instance, either through the AWS Management Console or by using the AWS command-line interface (CLI) tools, it’s created and available in minutes. Automated processes provide the just-in-time response you want when you order cloud services.

AWS services are being changed, enhanced, and updated 24/7, with features and changes appearing every day. AWS as a whole is deployed and maintained using a combination of developer agility and automated processes; it is able to move quickly and easily with a partnership of developers, system operations, project managers, network engineers, and security professionals working together from the initial design stages, through the development process, to production and continual updates.

AWS wasn’t always so automated and regimented. In the early days, Amazon was a burgeoning online e-commerce bookseller. Not so very long ago, you would order a virtual machine from a cloud provider and wait several days for an email to arrive, telling you that your service was ready to go. As the Amazon e-commerce site became more popular, new problems appeared in the realm of scaling online resources to match customers’ needs. Over time, Amazon developed rules for all developers, mandating that each underlying service that supported the Amazon store be accessible through a core set of shared application programming interfaces (APIs) available to all developers, and that each service be built and maintained on a common core of compute and storage resources.

Amazon built and continues to build its hosting environment using mandated internal processes, which can be described as a mixture of the following:

- Shared APIs that expose every underlying service to all internal developers
- A common core of compute and storage resources on which every service is built
- Automated deployment, monitoring, scaling, and logging of every service
- Avoidance of manual processes wherever possible

Many people at AWS are working together in an effective manner to make hundreds of changes to the AWS hardware and software environment every month. In addition, all AWS services are being monitored, scaled, rebuilt, and logged through completely automated processes. Amazon avoids using manual processes, and your long-term goal should be for your company to do the same.

As your experience with AWS grows, you’re going to want to start using automation to help run and manage your day-to-day AWS operations and to solve problems when they occur. There are numerous services available that don’t cost anything additional to use aside from the time it takes to become competent in using them. This might sound too good to be true, but most of Amazon’s automation services are indeed free to use; you are charged only for the AWS compute and storage resources that each service uses.

Automation services will always manage your resources more effectively than you can manually. At AWS, the automation of infrastructure is typically called infrastructure as code (IaC). When you create resources using the AWS Management Console, in the background AWS uses automated processes running scripts to finish the creation and management of those resources.

Regardless of how you define your own deployment or development process, there are a number of powerful tools in the AWS toolbox that can help you automate procedures:

- CloudFormation, for automating the creation and updating of infrastructure stacks
- Service Catalog, for controlling who can deploy approved CloudFormation templates
- Elastic Beanstalk, for deploying applications along with the infrastructure they require
- OpsWorks, for Chef- and Puppet-based configuration management
- CodeCommit, for version control of application code and templates

Each of these services is discussed in this chapter.

Automating Infrastructure with CloudFormation

The second and third times you deploy EC2 instances using the AWS Management Console, you will probably not perform the steps the same way you did the first time. Even if you do manage to complete a manual task with the same steps, by the tenth installation your needs will have changed or better options will have become available. A manual process rarely stays the same over time. To make changes easier, you can automate even the simplest manual processes at AWS.

If you peek under the hood at any management service running at AWS, you’ll find the process command set driven by JavaScript Object Notation (JSON) scripts. At the GUI level, you use the Management Console to fill in the blanks; when you click Create, JSON scripts are executed in the background to carry out your requests. AWS’s extensive use of JSON is similar to Microsoft Azure’s extensive use of PowerShell; at AWS, JSON scripts are used internally for many tasks and processes. Creating security policies with Identity and Access Management (IAM) and working with CloudFormation are two examples that you will commonly come across.

If you use Windows EC2 instances at AWS, you can also use PowerShell scripting. Both Microsoft Azure and AWS heavily rely on automation tools for everyday deployments. In the background, Azure relies heavily on PowerShell scripting and automation.

CloudFormation is an AWS-hosted orchestration engine that works with JSON or YAML templates to deploy AWS resources on demand or on predefined triggers (see Figure 3-13). AWS uses CloudFormation extensively, and so can you. More than 300,000 AWS customers use CloudFormation to manage deployment of just about everything, including all infrastructure stack deployments. A CloudFormation template declares the infrastructure stack to be created, and the CloudFormation engine automatically deploys and links the needed resources. You can add additional control variables to each CloudFormation template to manage and control the precise order of the installation of resources.

FIGURE 3-13 The CloudFormation Console

Consider this comparison of the manual AWS deployment process and the automated process that starts with CloudFormation:

- Manual deployment: Resources are created by clicking through the Management Console. The steps are rarely documented, are hard to repeat exactly, and drift over time as people and options change.
- CloudFormation deployment: Resources are declared in a template that can be version controlled, reviewed, reused across accounts and regions, and executed the same way every time.

CloudFormation Components

CloudFormation works with templates, stacks, and change sets. A CloudFormation template is an AWS resource blueprint that can create a complete application stack or a single stack component, such as a VPC network complete with multiple subnets, Internet gateways, and NAT services, all automatically deployed and configured. A change set is a preview that helps you visualize how proposed changes will affect AWS resources deployed by a CloudFormation template.

CloudFormation Templates

Each CloudFormation template is a text file that follows either JSON or YAML formatting standards. CloudFormation accepts files saved with .json, .yaml, or .txt extensions. Each template can deploy or update multiple AWS resources or a single resource, such as a VPC or an EC2 instance. Example 3-1 shows a CloudFormation template in JSON format, and Example 3-2 displays the same information in YAML format. It’s really a matter of personal preference which format you use; you might find YAML easier to read because it is more self-documenting, which could be helpful in the long term.

Example 3-1 CloudFormation Template in JSON Format

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "EC2 instance",
  "Resources": {
    "EC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0ff8a91497e77f667",
        "InstanceType": "t1.micro"
      }
    }
  }
}

Example 3-2 CloudFormation Template in YAML Format

AWSTemplateFormatVersion: '2010-09-09'
Description: EC2 instance
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0ff8a91497e77f667
      InstanceType: t1.micro

CloudFormation templates can have multiple sections, as shown in Example 3-3. However, the only mandatory section is Resources. As with any other template or script, the better the internal documentation, the more usable a CloudFormation template is—for the author as well as other individuals. It is highly recommended to use the Metadata section for comments to ensure that users understand the script.

Example 3-3 Valid Sections in a CloudFormation Template

"AWSTemplateFormatVersion": "version date",
"AWSTemplateFormatVersion": "2010-09-09"
<TemplateFormatVersion: Defines the current CF template version>
 
"Description": "Here are the additional details about this template
and what it does",
<Description: Describes the template: must always follow the version
section>
 
"Metadata": {
   "Metadata" : {
   "Instances" :  {"Description : "Details about the instances"},
   "Databases" : {"Description : "Details about the databases"}
  }
},
<Metadata: Additional information about the resources being deployed
by the template>
 
   "Parameters" : {
   "InstanceTypeParameter" : {
       "Type" : "String" ,
       "Default" : "t2.medium" , 
       "AllowedValues" : ["t2.medium", "m5.large", "m5.xlarge"],
       "Description" : "Enter t2.medium, m.5large, or m5.xlarge.
       Default is t2.medium."
  }
}
<Parameters: Defines the AWS resource values allowed to be selected
and used by your template>
 
"Mappings" : {
     "RegionMap" : [
         "us-east-1           : { "HVM64 : "ami-0bb8a91508f77f868"},
         "us-west-1          : { "HVM64 : "ami-0cdb828fd58c52239"},
         "eu-west-1          : { "HVM64 : "ami-078bb4163c506cd88"},
         "us-southeast-1    : { "HVM64 : "ami-09999b978cc4dfc10"},
         "us-northeast-1    : { "HVM64 : "ami-06fd42961cd9f0d75"}
  }
}
<Mappings: Defines conditional parameters defined by a "key"; in this
example, the AWS region and a set of AMI values to be used>
 
  "Conditions": {
  "CreateTestResources": {"Fn::Equals" : [{"Ref" : "EnvType"}, "test"]}
},
<Conditions: Defines dependencies between resources, such as the
order when resources are created, or where resources are created.
For example, "test" deploys the stack in the test environment >
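Two sections round out most production templates: Resources, the only mandatory section, and Outputs. The following brief sketch uses the same annotated style; the bucket name and output value are illustrative additions, not part of the original example:

"Resources": {
  "S3Bucket": {
    "Type": "AWS::S3::Bucket",
    "Properties": { "BucketName": "my-example-bucket" }
  }
},
<Resources: The only mandatory section; declares the AWS resources
that the stack creates>
 
"Outputs": {
  "BucketURL": {
    "Description": "Website URL of the deployed bucket",
    "Value": {"Fn::GetAtt": ["S3Bucket", "WebsiteURL"]}
  }
}
<Outputs: Values returned after stack creation, such as resource
names and endpoints>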

CloudFormation Stacks

AWS has many sample CloudFormation templates that you can download from the online CloudFormation documentation, as shown in Figure 3-14, and deploy in many AWS regions. A CloudFormation stack can be as simple as a single VPC or as complex as a complete three-tier application stack, complete with the required network infrastructure and associated services.

FIGURE 3-14 AWS Sample Stacks and CloudFormation Templates

CloudFormation can be useful for deploying infrastructure at AWS, including in the following areas:

- Network infrastructure: VPCs, subnets, gateways, and NAT services
- Complete application stacks: web, application, and database tiers along with their supporting services
- Multi-account and multi-region deployments, using stack sets
- Self-serve deployments of approved stacks, published through Service Catalog

Creating an EC2 Instance

Example 3-4 provides a simple example that shows how to create an EC2 instance using a CloudFormation template. The template parameters are easily readable from top to bottom. Under Properties, the AMI ID, subnet ID, and EC2 instance type must all already be present in the AWS region where the template is executed; if they are not, the deployment will fail. If there are issues in the CloudFormation script during deployment, CloudFormation rolls back and removes any infrastructure that the template created. The Ref statement is used in this template to attach the elastic IP (EIP) address to the EC2 instance that was deployed, referenced under Resources as EC2Machine.

Example 3-4 CloudFormation Template for Creating an EC2 Instance

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "EC2 Instance Template",
  "Resources": {
    "EC2Machine": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0ff407a7042afb0f0",
        "NetworkInterfaces": [{
          "DeviceIndex": "0",
          "DeleteOnTermination": "true",
          "SubnetId": "subnet-7c6dd651"
        }],
        "InstanceType": "t2.small"
      }
    },
    "EIP": {
      "Type": "AWS::EC2::EIP",
      "Properties": {
        "Domain": "vpc"
      }
    },
    "VpcIPAssoc": {
      "Type": "AWS::EC2::EIPAssociation",
      "Properties": {
        "InstanceId": {
          "Ref": "EC2Machine"
        },
        "AllocationId": {
          "Fn::GetAtt": ["EIP", "AllocationId"]
        }
      }
    }
  }
}
Updating with Change Sets

Change sets allow you to preview how your existing AWS resources will be modified when a deployed CloudFormation resource stack needs to be updated (see Figure 3-15). You select an original CloudFormation template to edit and input the desired set of changes. CloudFormation then analyzes your requested changes against the existing CloudFormation stack and produces a change set that you can review and approve or cancel.

FIGURE 3-15 Using Change Sets with CloudFormation

You can create multiple change sets for various comparison purposes. Once a change set is created, reviewed, and approved, CloudFormation updates your current resource stack.

CloudFormation Stack Sets

A stack set allows you to create a single CloudFormation template to deploy, update, or delete AWS infrastructure across multiple AWS regions and AWS accounts. When a CloudFormation template deploys infrastructure across multiple accounts and AWS regions, as shown in Figure 3-16, you must ensure that the AWS resources that the template references are available in each AWS account and region. For example, EC2 instances, EBS volumes, and key pairs are always created in a specific region; these region-specific resources must be copied to each AWS region where the CloudFormation template is executed. It is also important to review global resources such as IAM roles and S3 buckets that are being created by the CloudFormation template to make sure there are no naming conflicts during creation, as global resources must be unique across all AWS regions.

FIGURE 3-16 A Stack Set with Two AWS Target Accounts

Once a stack set is updated, all instances of the stack that were created are updated as well. For example, if you have 10 AWS accounts across 3 AWS regions, 30 stack instances are updated when the primary stack set is updated. If a stack set is deleted, all of its corresponding stack instances are also deleted.

A stack set is first created in a single AWS account. Before additional stack instances can be created from the primary stack set, trust relationships using IAM roles must be created between the initial AWS administrator account and the desired target accounts.

For testing purposes, one example available in the AWS CloudFormation console is a sample stack set that allows you to enable AWS Config across selected AWS regions or accounts. Keep in mind that AWS Config allows you to control AWS account compliance by defining rules that monitor specific AWS resources to ensure that the desired level of compliance has been followed.

Third-Party Solutions

There are a number of third-party solutions, such as Chef, Puppet, Ansible, and Terraform, for performing automated deployments of compute infrastructure. CloudFormation does not replace these third-party products but can be a useful tool for building automated solutions for your AWS infrastructure if you don’t use one of the third-party orchestration tools. AWS has a managed service called OpsWorks that comes in three flavors and might be useful to your deployments at AWS if your company currently uses Chef or Puppet:

- AWS OpsWorks Stacks: Manages applications and servers hosted at AWS using Chef recipes
- AWS OpsWorks for Chef Automate: A fully managed Chef Automate server hosted at AWS
- AWS OpsWorks for Puppet Enterprise: A fully managed Puppet Enterprise master hosted at AWS

AWS Service Catalog

Using a CloudFormation template provides great power for creating, modifying, and updating AWS infrastructure. Creating AWS infrastructure always costs money. Say that you would like to control who gets to deploy specific CloudFormation templates. You can use Service Catalog to manage the distribution of CloudFormation templates as a product list to an AWS account ID, an AWS Organizations account, or an organizational unit contained within an AWS organization. Service Catalog is composed of portfolios, as shown in Figure 3-17, each of which is a collection of one or more products.

FIGURE 3-17 Portfolios in Service Catalog

When an approved product is selected, Service Catalog delivers a CloudFormation template to CloudFormation, which then executes the template and creates the product. Third-party products hosted in the AWS Marketplace are also supported by Service Catalog; such software appliances are bundled with a CloudFormation template.

Each IAM user in an AWS account can be granted access to a Service Catalog portfolio of multiple approved products. Because products are built using common CloudFormation templates, any AWS infrastructure components, including EC2 instances and databases hosted privately in a VPC, can be deployed. In addition, VPC endpoints using AWS PrivateLink allow private access to the AWS Service Catalog service.

When you’re creating Service Catalog products, you can use constraints with IAM roles to limit the level of administrative access to the resources contained in the stack being deployed by the product itself. You can also assign service actions for rebooting, starting, or stopping deployed EC2 instances, as shown in Figure 3-18.

FIGURE 3-18 IAM Group Constraints Controlled by Service Catalog

In addition, you can add rules that control any parameter values that the end user enters. For example, you could mandate that specific subnets must be used for a stack deployment. You can also define rules that control in which AWS account and region a product can launch.

If you list deployed products by version number, you can allow end users to select the latest versions of products so they can update older versions of currently deployed products. In this way, you can use CloudFormation and Service Catalog together to create a self-serve portal for developers consisting of portfolios and products.
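To make this concrete, here is a minimal sketch of a portfolio with one product, expressed as a CloudFormation template itself; the names and template URL are hypothetical placeholders, not values from this chapter:

Resources:
  DevPortfolio:
    Type: AWS::ServiceCatalog::Portfolio
    Properties:
      DisplayName: Developer self-serve portfolio   # hypothetical portfolio name
      ProviderName: IT Operations
  WebProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: Standard web stack                      # hypothetical approved product
      Owner: IT Operations
      ProvisioningArtifactParameters:
        - Info:
            LoadTemplateFromURL: https://s3.amazonaws.com/example-bucket/webstack.yaml   # placeholder template location
  PortfolioProduct:
    Type: AWS::ServiceCatalog::PortfolioProductAssociation
    Properties:
      PortfolioId: !Ref DevPortfolio
      ProductId: !Ref WebProduct

Granting an IAM group access to the portfolio is then all that its members need to launch the approved stack from the Service Catalog console.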

Elastic Beanstalk

When moving to the AWS cloud, developers typically have little time and budget but must develop a web application or migrate an existing web app into the AWS cloud while adhering to the company’s compliance standards. The web application needs to be reliable, able to scale, and easy to update. In such situations, Elastic Beanstalk can be of some help.

Elastic Beanstalk, which has been around since 2011, was launched as a platform as a service (PaaS) offering from AWS to help enable developers to easily deploy web applications hosted on AWS Linux and Windows EC2 instances in the AWS cloud. As briefly mentioned earlier in this chapter, Elastic Beanstalk automates both application deployment, as shown in Figure 3-19, and the deployment of the required infrastructure components, including single or multiple EC2 instances behind an elastic load balancer hosted in an Auto Scaling group. Monitoring of your Elastic Beanstalk environment is carried out with CloudWatch metrics that track the health of your application. Elastic Beanstalk also integrates with AWS X-Ray, which can help you monitor and debug the internals of your hosted application.

FIGURE 3-19 Elastic Beanstalk Creating Infrastructure and Installing an Application

Elastic Beanstalk supports a number of development platforms, including Java (Apache HTTP or Tomcat), PHP (Apache HTTP), Node.js (Nginx or Apache HTTP), Python (Apache HTTP), Ruby (Passenger), .NET (IIS), and the Go language. Elastic Beanstalk allows you to deploy different runtime environments across multiple technology stacks that can all be running at AWS at the same time; the technology stacks can be EC2 instances or Docker containers.

Developers can use Elastic Beanstalk to quickly deploy and test applications on a predefined infrastructure stack. If an application does not pass the test, the infrastructure can be quickly discarded at little cost. Keep in mind that Elastic Beanstalk is not a development environment (like Visual Studio). An application must be written and ready to go before Elastic Beanstalk is useful. After an application has been written, debugged, and approved from your Visual Studio (or Eclipse) development environment combined with the associated AWS Toolkit, you need to upload your code. Then you can create and upload a configuration file that details the infrastructure that needs to be built. Elastic Beanstalk completes the deployment process for the infrastructure and the application. The original goal for Elastic Beanstalk was to greatly reduce the timeframe for hardware procurement for applications (which in some cases took weeks or months).

Elastic Beanstalk is useful in organizations that are working with a DevOps mentality, where the developer is charged with assuming some operational duties. Elastic Beanstalk can help developers automate tasks and procedures that were previously carried out by administrators and operations folks when an application was hosted in an on-premises data center. Elastic Beanstalk carries out the following tasks automatically:

- Provisions the required infrastructure, including EC2 instances, the load balancer, and the Auto Scaling group
- Installs and deploys the uploaded application version
- Monitors the health of the application and its instances
- Applies managed platform updates during defined maintenance windows, if enabled

Elastic Beanstalk is free of charge to use; you are charged only for the resources used for the deployment and hosting of your applications. The AWS resources that you use are provisioned within your AWS account, and you have full control of these resources. In contrast, with other PaaS solutions, the provider blocks access to the infrastructure resources. At any time, you can go into the Elastic Beanstalk configuration of your application and make changes, as shown in Figure 3-20. Although Beanstalk functions like a PaaS service, you can tune and change the infrastructure resources as desired.

FIGURE 3-20 Modifying the Capacity of the Elastic Beanstalk Application Infrastructure

Applications supported by Elastic Beanstalk include simple HTTPS web applications and applications with worker nodes that can be subscribed to Amazon Simple Queue Service (SQS) queues to carry out more complex, longer-running processes.

After an application has been deployed by Elastic Beanstalk, you can have AWS automatically update the selected application platform environment by enabling managed platform updates, which can be deployed during a defined maintenance window. These updates include minor platform version updates and security patching but not major platform updates to the web services being used; major updates must be initiated manually.

Database support for Elastic Beanstalk includes any application that can be installed on an EC2 instance, RDS database options, and DynamoDB. A database can be provisioned by Elastic Beanstalk during launch or can be exposed to the application through the use of environment variables. You can also choose to deploy the instances that host your applications in multiple AZs and control your application’s HTTPS security and authentication by deploying an Application Load Balancer.
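Because Elastic Beanstalk environments are themselves AWS resources, they can even be declared in a CloudFormation template. The following is a hedged sketch only: the application name and sizes are illustrative, and the solution stack string is an example of the format; actual solution stack names are version specific and change over time:

Resources:
  HRApp:
    Type: AWS::ElasticBeanstalk::Application
    Properties:
      ApplicationName: hr-portal        # hypothetical application
  HREnv:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: !Ref HRApp
      SolutionStackName: 64bit Amazon Linux 2 v5.4.9 running Node.js 14   # example only; names are version specific
      OptionSettings:
        - Namespace: aws:autoscaling:asg
          OptionName: MinSize
          Value: '2'
        - Namespace: aws:autoscaling:asg
          OptionName: MaxSize
          Value: '4'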

Updating Elastic Beanstalk Applications

You can deploy new versions of an application to your Elastic Beanstalk environment in several ways, depending on the complexity of the application. During updates, Elastic Beanstalk archives the old application version in an S3 bucket. The methods available for updating Elastic Beanstalk applications include the following:

- All at once: The new version is deployed to all existing instances simultaneously, with a short outage while the update completes.
- Rolling: The new version is deployed in batches, so only part of the fleet is out of service at any time.
- Rolling with additional batch: A new batch of instances is launched first so that full capacity is maintained during the rolling update.
- Immutable: The new version is installed on a completely new set of EC2 instances; the old instances are removed only after the new set passes health checks.
- Blue/green: A separate environment running the new version is created, and traffic is switched over, typically by swapping DNS records.

Deployment Methodologies

Developers getting ready to create their first application in the cloud can look to a number of rules that are generally accepted for successfully creating applications that run exclusively in the public cloud.

Several years ago, Heroku cofounder Adam Wiggins released a suggested blueprint called the Twelve-Factor App methodology for creating native software as a service (SaaS) applications hosted in the public cloud. These guidelines can be viewed as a set of best practices to consider using when deploying applications in the cloud. Of course, depending on your deployment methods, you may quibble with some of the rules—and that’s okay. There are many methodologies available to deploy applications. There are also many complementary management services hosted at AWS that greatly speed up the development process, regardless of the model used.

The development and operational model that you choose to embrace will follow one of several established development and deployment paths.

Before deploying applications in the cloud, you should carefully review your current development process and perhaps consider taking some of the steps in the Twelve-Factor App Methodology, which are described in the following sections. Your applications that are hosted in the cloud also need infrastructure; as a result, these rules for proper application deployment in the cloud don’t stand alone; cloud infrastructure is also a necessary part of the rules. The following sections look at the 12 rules of the Twelve-Factor App Methodology from an infrastructure point of view and identify the AWS services that can help with each rule. This information can help you understand both the rules and the AWS services that can be useful in application development.

Rule 1: Use One Codebase That Is Tracked with Version Control to Allow Many Deployments

In development circles, this rule is non-negotiable; it must be followed. Creating an application usually involves three separate environments: development, testing, and production (see Figure 3-22). The same codebase should be used in each environment, whether it’s the developer’s laptop, a set of testing server EC2 instances, or the production EC2 instances. Operating systems, off-the-shelf software, dynamic-link libraries (DLLs), development environments, and application code are always defined and controlled by versions. Each version of application code needs to be stored separately and securely in a safe location. Multiple AWS environments can take advantage of multiple availability zones and multiple VPCs.

FIGURE 3-22 One Codebase, Regardless of Location

Developers typically use code repositories such as GitHub to store their code. As your codebase undergoes revisions, each revision needs to be tracked; after all, a single codebase might be responsible for thousands of deployments, and documenting and controlling the separate versions of the codebase just makes sense. Amazon has its own code repository, called CodeCommit, that may be more useful than GitHub for applications hosted at AWS.

At the infrastructure level at Amazon, it is important to consider dependencies. The AWS infrastructure components to keep track of include the following:

- AMIs used for web, application, and database servers
- EBS volume snapshots
- Container images
- CloudFormation templates and their versions

AWS CodeCommit

CodeCommit is a hosted AWS version control service with no storage size limits (see Figure 3-23). It allows AWS customers to privately store their source and binary code, which are automatically encrypted at rest and in transit, at AWS. CodeCommit allows customers to store code versions at AWS rather than at GitHub without worrying about running out of storage space. CodeCommit is also HIPAA eligible and supports Payment Card Industry Data Security Standard (PCI DSS) and ISO 27001 standards.

FIGURE 3-23 A CodeCommit Repository

CodeCommit supports common Git commands and, as mentioned earlier, places no limits on file size, file type, or repository size. CodeCommit is designed for collaborative software development environments; when developers make multiple file changes, CodeCommit manages the changes across multiple files. S3 buckets also support file versioning, but S3 versioning is really meant for recovery of older versions of files; it is not designed for collaborative software development environments. As a result, S3 buckets are better suited for files that are not source code.
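If you manage infrastructure as code, the repository itself can be declared in a CloudFormation template. A minimal sketch, with a hypothetical repository name:

Resources:
  AppRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: hr-portal-src                          # hypothetical repository name
      RepositoryDescription: Single codebase tracked for all environments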

Rule 2: Explicitly Declare and Isolate Dependencies

Any application that you have written or will write depends on some specific components, such as a database, a specific operating system version, a required utility, or a software agent that needs to be present. You should document these dependencies so you know the components and the version of each component required by the application. Applications that are being deployed should never rely on the assumed existence of required system components; instead, each dependency needs to be declared and managed by a dependency manager to ensure that only the defined dependencies will be installed with the codebase. A dependency manager uses a configuration file to determine what dependency to get, what version of the dependency to get, and what repository to get it from. If there is a specific version of system tools that the codebase always requires, perhaps the system tools could be added to the operating system that the codebase will be installed on. However, over time, software versions for every component will change. An example of a dependency manager could be Composer, which is used with PHP projects, or Maven, which can be used with Java projects. Another benefit of using a dependency manager is that the versions of your dependencies will be the same versions used in the development, testing, and production environments.

If there is duplication with operating system versions, the operating system and its feature set can also be controlled by AMI versions, and CodeCommit can be used to host the different versions of the application code. CloudFormation also includes a number of helper scripts that can allow you to automatically install and configure applications, packages, and operating system services that execute on EC2 Linux and Windows instances. The following are a few examples of these helper scripts (a short template sketch follows the list):

- cfn-init: Reads resource metadata from the template and installs packages, creates files, and starts or stops services
- cfn-signal: Signals CloudFormation when an instance or application has finished provisioning, so the stack can proceed or roll back
- cfn-get-metadata: Retrieves metadata for a resource defined in the template
- cfn-hup: A daemon that detects changes in resource metadata and runs user-specified actions when updates occur
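Here is a hedged sketch of how cfn-init is typically wired up: metadata on the instance describes the desired packages and services, and the UserData script calls cfn-init at boot to apply it. The AMI ID is a placeholder:

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []                  # install Apache from the yum repository
          services:
            sysvinit:
              httpd:
                enabled: 'true'
                ensureRunning: 'true'    # make sure Apache starts and stays running
    Properties:
      ImageId: ami-0ff8a91497e77f667     # placeholder Amazon Linux AMI
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}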

Rule 3: Store Configuration in the Environment

Your codebase should be the same in the development, testing, and production environments. However, your database instances or your S3 buckets will have different paths, or URLs, used in testing or development. Obviously, a local database shouldn’t be stored on a compute instance operating as a web server or as an application server. Other configuration components, such as API keys, plus database credentials for access and authentication, should never be hard-coded. You can use AWS Secrets Manager for storing database credentials and secrets, and you can use IAM roles for accessing data resources at AWS, including S3 buckets, DynamoDB tables, and RDS databases. You can use API Gateway to host and manage your APIs.

Development frameworks define environment variables through the use of configuration files. Separating your application configuration from the application code allows you to reuse your backing services in different environments, using environment variables to point to the desired resource from the development, testing, or production environment. Amazon has a few services that can help centrally store application configurations (a short template sketch follows the list):

- AWS Systems Manager Parameter Store: Hierarchical, versioned storage for configuration data and plaintext or encrypted parameter values
- AWS Secrets Manager: Encrypted storage and automatic rotation for database credentials, API keys, and other secrets
- AWS AppConfig: Managed creation, validation, and controlled rollout of application configurations
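As one example of pulling configuration from the environment rather than hard-coding it, a CloudFormation template can resolve an AMI ID from Parameter Store at deployment time. In this sketch, the public parameter path is the one AWS publishes for the latest Amazon Linux 2 AMI; the instance type is illustrative:

Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId      # resolved from Parameter Store at deployment time
      InstanceType: t3.small

The same pattern keeps database endpoints and bucket names out of the codebase: change the parameter, not the code.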

Rule 4: Treat Backing Services as Attached Resources

All infrastructure services at AWS can be defined as backing services, and AWS services can be accessed by HTTPS private endpoints. Backing services hosted at AWS are connected over the AWS private network and include databases (for example, Relational Database Service [RDS], DynamoDB), shared storage (for example, S3 buckets, Elastic File System [EFS]), Simple Mail Transfer Protocol (SMTP) services, queues (for example, Simple Queue Service [SQS]), caching systems (such as ElastiCache, which manages Memcached or Redis in-memory queues or databases), and monitoring services (for example, CloudWatch, Config, CloudTrail).

Under certain conditions, backing services should be completely swappable; for example, a MySQL database hosted on premises should be able to be swapped with a hosted copy of the database at AWS without requiring a change to application code; the only variable that needs to change is the resource handle in the configuration file that points to the database location.

Rule 5: Separate the Build and Run Stages

If you are creating applications that will be updated, whether on a defined schedule or at unpredictable times, you will want to have defined stages during which testing can be carried out on the application state before it is approved and moved to production. Amazon has several such PaaS services that work with multiple stages. As discussed earlier in this chapter, Elastic Beanstalk allows you to upload and deploy your application code combined with a configuration file that builds the AWS environment and deploys your application.

The Elastic Beanstalk build stage could retrieve your application code from the defined repo storage location, which could be an S3 bucket. Developers could also use the Elastic Beanstalk CLI to push application code commits to AWS CodeCommit. When you run the CLI command eb create or eb deploy to create or update an Elastic Beanstalk environment, the selected application version is pulled from the defined CodeCommit repository, and the application and required environment are uploaded to Elastic Beanstalk. Other AWS services that work with deployment stages include the following:

- AWS CodePipeline, which models the build, test, and deployment stages of a release as a pipeline
- AWS CodeBuild, which compiles source code and runs tests at each stage
- AWS CodeDeploy, which automates deployments of the approved build to EC2 instances, Lambda functions, or on-premises servers

Rule 6: Execute an App as One or More Stateless Processes

Stateless processes provide fault tolerance for the instances running your applications by separating the important data records being worked on by the application and storing them in a centralized storage location such as an SQS message queue. An example of a stateless design is a workflow that uses an SQS message queue to add a corporate watermark to all training videos uploaded to an associated S3 bucket (see Figure 3-24). A number of EC2 instances could be subscribed to the watermark SQS queue; every time a video is uploaded to the S3 bucket, a message is sent to the watermark SQS queue. The EC2 servers subscribed to the SQS queue poll for any updates to the queue; when an update message is received by a subscribed server, the server carries out the work of adding a watermark to the video and then stores the video in another S3 bucket.

FIGURE 3-24 Using SQS Queues to Provide Stateless Memory-Resident Storage for Applications
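The watermark worker in Figure 3-24 might look something like the following Python sketch, assuming a hypothetical queue URL and a hypothetical add_watermark() helper that performs the actual video processing:

```python
import boto3

sqs = boto3.client("sqs")
# Hypothetical queue URL for the watermark workflow described above.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/watermark-queue"

while True:
    # Long polling: wait up to 20 seconds for a new "video uploaded" message.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        add_watermark(message["Body"])  # hypothetical worker function
        # Deleting the message marks the work as complete; if this worker
        # crashes first, the message becomes visible to another instance.
        sqs.delete_message(
            QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
        )
```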

Other options for keeping state outside your instances at AWS include ElastiCache for user session state, DynamoDB tables, and S3 storage.

Let’s look at an example of how these services can solve availability and reliability problems. Say that a new employee at your company needs to create a profile on the first day of work. The profile application runs on a local server, and each new hire needs to enter pertinent information. Each screen of information is stored within the application running on the local server until the profile creation is complete. This local application is known to fail without warning, causing problems and wasting time. You decide to move the profile application to the AWS cloud. A proper redesign hosts the application on multiple EC2 instances behind a load balancer in separate availability zones, providing availability and reliability, while a component such as an SQS queue retains the user information in a redundant data store. Then, if one of the application servers crashes during the profile creation process, another server takes over, and the process completes successfully.

Data that needs to persist for an undefined period of time should always be stored in a redundant storage service such as a DynamoDB database table, an S3 bucket, an SQS queue, or a shared file store such as EFS. When the user profile creation is complete, the application can store the relevant records and can communicate with the end user by using Amazon Simple Email Service (SES).

Rule 7: Export Services via Port Binding

Instead of relying on a web server installed on the local host and reachable only on a local port, make each service accessible by binding it to an external port and exposing it through an external URL. For example, all web requests can be carried out by binding the web service to an external port from which it is accessed. The service port that the application needs to connect to is defined by the development environment’s configuration file (see the section “Rule 3: Store Configuration in the Environment,” earlier in this chapter). The same web service can then be used by different applications and by the development, testing, and production environments.
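A minimal port-binding sketch using only the Python standard library; the PORT variable name is an assumption, supplied by the environment's configuration (Rule 3) rather than hard-coded:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

# The port comes from the environment's configuration, so the same
# service can be exported on any host in any environment.
port = int(os.environ.get("PORT", "8080"))
HTTPServer(("0.0.0.0", port), Health).serve_forever()
```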

Rule 8: Scale Out via the Process Model

If your application can’t scale horizontally, it’s not designed for dynamic cloud operation. Many AWS services are designed to scale horizontally automatically, including EC2 Auto Scaling groups, Elastic Load Balancing, DynamoDB, Aurora, and Lambda; one illustration follows.
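As one illustration, a target tracking scaling policy tells an Auto Scaling group to add or remove instances (processes) to hold a metric near a target value, rather than resizing a single larger instance; the group name and target below are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group; target tracking keeps average CPU
# near 50% by scaling the fleet out and in automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="webapp-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```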

Rule 9: Maximize Robustness with Fast Startup and Graceful Shutdown

User session information can be stored in Amazon ElastiCache or in in-memory queues, and application state can be stored in SQS message queues. Application configuration and bindings, source code, and backing services are hosted by AWS managed services, each with its own levels of redundancy and durability. Data is stored in a persistent backing storage location such as S3 buckets, RDS, or DynamoDB databases (and possibly EFS or FSx shared storage). Because no state lives on the instance itself, application processes can start quickly and shut down gracefully, and applications with no local dependencies and integrated hosted redundant services can be managed and controlled by a number of AWS management services.
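Because no state lives on the instance, a worker process can shut down gracefully on SIGTERM, as in this sketch; process_next_message() is a hypothetical unit of work, such as handling one SQS message:

```python
import signal
import sys

shutting_down = False

def handle_sigterm(signum, frame):
    # Stop taking new work; any in-flight SQS messages simply become
    # visible again after their visibility timeout, so no data is lost.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    process_next_message()  # hypothetical unit of work

sys.exit(0)
```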

Rule 10: Keep Development, Staging, and Production as Similar as Possible

With this rule, similar does not refer to the number of instances or the size of database instances and supporting infrastructure. Each environment must use exactly the same codebase but can differ in the number of instances or database servers deployed; aside from the infrastructure components, everything else must remain the same. CloudFormation can be used to automatically build each environment using a single template file with conditions that define what infrastructure resources to build for the development, testing, and production environments.
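A sketch of deploying one template to every environment with boto3; the stack names, template URL, and EnvironmentType parameter are assumptions standing in for your own template's parameters and conditions:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# One template builds every environment; a hypothetical EnvironmentType
# parameter drives Conditions inside the template (e.g., instance count),
# while the codebase and resource definitions stay identical.
for environment in ("development", "testing", "production"):
    cloudformation.create_stack(
        StackName=f"profiles-{environment}",
        TemplateURL="https://s3.amazonaws.com/mybucket/profiles.yaml",
        Parameters=[
            {"ParameterKey": "EnvironmentType", "ParameterValue": environment}
        ],
    )
```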

Rule 11: Treat Logs as Event Streams

In development, testing, and production environments, each running process log stream must be stored externally. At AWS, logging is designed as event streams. CloudWatch log groups or S3 buckets can be created to store EC2 instances’ operating system and application logs. CloudTrail logs, which track all API calls to the AWS account, can also be streamed to CloudWatch Logs for further analysis. Third-party monitoring solutions support AWS and can interface with S3 bucket storage. All logs and reports generated at AWS by EC2 instances or AWS managed services eventually end up in an S3 bucket.
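A minimal sketch of treating a log line as an event streamed to CloudWatch Logs, assuming hypothetical group and stream names that do not already exist:

```python
import time

import boto3

logs = boto3.client("logs")
GROUP, STREAM = "/myapp/web", "i-0abc123-example"  # hypothetical names

logs.create_log_group(logGroupName=GROUP)
logs.create_log_stream(logGroupName=GROUP, logStreamName=STREAM)

# Each log line is shipped as a timestamped event rather than written
# to a local file, so the stream outlives the instance that produced it.
logs.put_log_events(
    logGroupName=GROUP,
    logStreamName=STREAM,
    logEvents=[
        {"timestamp": int(time.time() * 1000), "message": "user profile created"}
    ],
)
```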

Rule 12: Run Admin/Management Tasks as One-Off Processes

Administrative processes should be executed the same way regardless of environment. For example, if an application requires a manual process, the steps to carry it out must remain the same whether they are executed in the development, testing, or production environment.
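One way to keep a one-off administrative task identical across environments is to ship it in the same codebase and drive it from the same configuration as the application itself, as in this hypothetical migration sketch:

```python
import os
import sys

def migrate_database():
    """Hypothetical one-off task: apply pending schema migrations."""
    endpoint = os.environ["DATABASE_URL"]  # same Rule 3 configuration
    print(f"Applying migrations against {endpoint}")

if __name__ == "__main__" and sys.argv[1:] == ["migrate"]:
    # Run as a one-off process from the same codebase and environment
    # as the application: python app.py migrate
    migrate_database()
```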

The goal in presenting the rules of the Twelve-Factor App methodology is to help you think about your applications and infrastructure and, over time, implement as many of the rules as possible. This can be extremely difficult for applications that are simply lifted and shifted to the cloud. Newer applications developed entirely in the cloud should attempt to follow these rules as closely as possible.

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here; Chapter 14, “Final Preparation”; and the exam simulation questions in the Pearson Test Prep Software Online.

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 3-5 lists these key topics and the page number on which each is found.

Table 3-5 Chapter 3 Key Topics

Key Topic Element | Description | Page Number
Figure 3-2 | Increasing application availability by adding compute resources | 89
Figure 3-4 | Increasing fault tolerance | 90
Table 3-2 | Avoiding single points of failure | 91
Figure 3-5 | Using elastic IP addresses for high availability | 92
Table 3-3 | Planning for high availability, fault tolerance, and redundancy | 93
List | Recovery point objective | 94
List | Recovery time objective | 95
Figure 3-9 | Warm standby response | 98
Figure 3-11 | Hot site response | 100
Paragraph | CloudFormation templates | 107
Section | Third-party solutions | 114
Figure 3-19 | Using Elastic Beanstalk to create infrastructure and install an application | 117
List | Elastic Beanstalk tasks | 118
Paragraph | Updating Elastic Beanstalk applications | 119

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

service-level agreement (SLA), recovery point objective (RPO), recovery time objective (RTO), pilot light, warm standby, hot site, multi-region, Information Technology Infrastructure Library (ITIL), Scrum, Agile, JavaScript Object Notation (JSON)

Q&A

The answers to these questions appear in Appendix A. For more practice with exam format questions, use the Pearson Test Prep Software Online.

  1. How do you achieve AWS’s definition of high availability when designing applications?

  2. What AWS service is not initially designed with fault tolerance or high availability?

  3. What is the easiest way to remove single points of failure when operating in the AWS cloud?

  4. What is the difference between a pilot light solution and a warm standby solution?

  5. What is the difference between RPO and RTO?

  6. What two AWS database services allow you to deploy globally across multiple AWS regions?

  7. How can you build a self-serve portal for developers at AWS and control what architecture is deployed?

  8. What service allows you to deploy an architectural solution for an application that you have already written?

