
Pass Your Amazon AWS Certified Cloud Practitioner Exam Easy!

100% Real Amazon AWS Certified Cloud Practitioner Exam Questions & Answers, Accurate & Verified By IT Experts

Instant Download, Free Fast Updates, 99.6% Pass Rate

Amazon AWS Certified Cloud Practitioner Practice Test Questions in VCE Format

File | Votes | Size | Date
Amazon.passguide.AWS Certified Cloud Practitioner.v2023-09-22.by.iris.512q.vce | 2 | 14.16 MB | Sep 22, 2023
Amazon.test-inside.AWS Certified Cloud Practitioner.v2022-01-24.by.benjamin.496q.vce | 1 | 13.05 MB | Jan 24, 2022
Amazon.practiceexam.AWS Certified Cloud Practitioner.v2021-12-29.by.lincoln.478q.vce | 1 | 643.74 KB | Dec 29, 2021
Amazon.passit4sure.AWS Certified Cloud Practitioner.v2021-11-03.by.anthony.449q.vce | 1 | 599.15 KB | Nov 03, 2021
Amazon.pass4sure.AWS Certified Cloud Practitioner.v2021-09-14.by.annie.427q.vce | 1 | 582.15 KB | Sep 14, 2021
Amazon.selftestengine.AWS Certified Cloud Practitioner.v2021-08-02.by.wangyan.414q.vce | 1 | 554.09 KB | Aug 02, 2021
Amazon.passguide.AWS Certified Cloud Practitioner.v2021-06-28.by.scarlett.395q.vce | 1 | 532.73 KB | Jun 28, 2021
Amazon.examquestions.AWS Certified Cloud Practitioner.v2021-04-04.by.mason.380q.vce | 1 | 515.5 KB | Apr 06, 2021
Amazon.questionspaper.AWS Certified Cloud Practitioner.v2020-09-03.by.carter.221q.vce | 2 | 305.24 KB | Sep 03, 2020
Amazon.test-king.AWS Certified Cloud Practitioner.v2020-07-21.by.robyn.198q.vce | 2 | 277.92 KB | Jul 21, 2020
Amazon.pass4sures.AWS Certified Cloud Practitioner.v2020-05-26.by.wangchao.37q.vce | 2 | 51.69 KB | May 26, 2020
Amazon.examlabs.AWS Certified Cloud Practitioner.v2020-04-08.by.djamel.165q.vce | 2 | 243.14 KB | Apr 08, 2020
Amazon.examlabs.AWS Certified Cloud Practitioner.v2020-02-07.by.eleanor.130q.vce | 4 | 200.22 KB | Feb 07, 2020

Amazon AWS Certified Cloud Practitioner Practice Test Questions, Exam Dumps

Amazon AWS Certified Cloud Practitioner (CLF-C01) exam dumps, practice test questions, study guide and video training course to help you study and pass quickly and easily. To open the Amazon AWS Certified Cloud Practitioner exam dumps and practice test questions in VCE format, you need the Avanset VCE Exam Simulator.

Understanding Core AWS Services

8. Understanding Basics of Firewall

Hi everyone, and welcome back to the Knowledge Portal video series. Today we are going to talk about a very interesting topic: firewalls and TCP/IP communication. So let's start. Generally speaking, there are hundreds of thousands of devices on the Internet, so a person sitting in a remote forest can access a server located in any part of the world. Among all those devices, mobile phones, tablets, computers, there are good guys, and there are also bad guys, by which I mean hackers. Now let's say we have a server connected to the Internet, for example an EC2 instance that you just launched from the AWS console. Since this EC2 instance is connected to the internet and, let's assume, has a public IP, everyone on the Internet can connect to it, so for now there is no protection for the server at all.

This is where the firewall comes in. The firewall acts as the first layer of defense: any device that wants to connect to this particular server has to go via the firewall. In very simple terms, the main purpose of a firewall is to allow connections to the server from trusted sources and to block connections from untrusted ones, so firewalling is a very simple methodology. The question now is: how exactly does the firewall do this?

Let's take a simple example where there are five services running on this server, Apache, SSH, FTP, SMTP, and MySQL, and each service is associated with a port. A firewall is attached, and one firewall rule states that SSH, TCP port 22, is allowed only from one specific trusted source IP. What this means is that the firewall will only allow connections on port 22 (SSH) from that source. So when the user who has that specific IP address does an SSH to this server, the connection first goes to the firewall, and the firewall evaluates whether to allow or deny it. Here it is allowed, so the firewall lets the user connect to the SSH service. Now take a second example, where a hacker with a different IP also tries to SSH to this server. The firewall checks its rule table, and since the hacker's IP address is not present there, it blocks the connection and prevents that IP from reaching the back-end server. That is a very simple example of how a firewall works, and at this level most people understand the fundamentals of how a firewall works.
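To make that allow/deny decision concrete, here is a minimal sketch in Python of the kind of stateless rule check described above. It is an illustration only, not how any real firewall is implemented, and the rule and packet values (the trusted IP, the ports) are made up for the example.

```python
# Illustration only: a stateless allow-list with made-up example values.
# One rule allows TCP port 22 (SSH) from a single trusted source IP;
# anything that matches no rule is denied.
RULES = [
    {"protocol": "tcp", "dst_port": 22, "src_ip": "203.0.113.10"},
]

def firewall_decision(packet: dict) -> str:
    """Return 'allow' if the packet matches any rule, otherwise 'deny'."""
    for rule in RULES:
        if (packet["protocol"] == rule["protocol"]
                and packet["dst_port"] == rule["dst_port"]
                and packet["src_ip"] == rule["src_ip"]):
            return "allow"
    return "deny"

print(firewall_decision({"protocol": "tcp", "dst_port": 22, "src_ip": "203.0.113.10"}))  # allow
print(firewall_decision({"protocol": "tcp", "dst_port": 22, "src_ip": "198.51.100.7"}))  # deny
```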
But since we are aiming to become certified security specialists, we should understand the functioning of firewalls in more depth. The real question is: how does the firewall know that a connection is coming from a particular IP address and that the user is trying to connect to a particular port? So let's get into the nitty-gritty of TCP/IP and see how the firewall checks these various fields using the TCP/IP packet headers. I hope you are aware of the basic three-way handshake: in TCP/IP communication, before the data transfer takes place, there is a formal exchange, like a hello, between the client and the server, and this is a simple example of that handshake which I have captured with the help of tcpdump.

Below the capture you see the TCP header, and on the right side the IP header. What the firewall does is, any time a user makes a request to the server, evaluate these particular fields, and these fields are basically the TCP and IP header fields. Let's take one example. You see the ether type, which is IPv4; IPv4 is the version; and the length shown is 74. That is followed by the source IP, the IP the request is coming from, here an address in the 172.21.55.x range, then the destination IP, then the source port, 55427, which sits in the TCP header, and the destination port, which is 80. Then there is a flag associated with the packet, S, which stands for SYN; if you go to the TCP header you will see this SYN flag, followed by the sequence number and the window size.

Basically, every packet that travels over TCP/IP carries similar header fields, and what the firewall does is evaluate these fields and check them against its rules to decide whether to allow the packet or not. We'll look into how exactly this works. So, to revise: the firewall evaluates four major things, the source port, the destination port, the source IP, and the destination IP, and you can see each of them in the first SYN packet that we captured. Based on these values, the firewall extracts the fields and evaluates whether to allow the packet or not. One thing to remember is that we are only covering the very basic concepts of firewalls here; it actually goes into a lot more detail, which we will not be covering right now.
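As a rough illustration of where those values live, the following standard-library Python sketch pulls the source IP, destination IP, source port, destination port, and SYN flag out of a raw IPv4+TCP packet. The demo packet at the bottom is hand-built with documentation addresses; a real packet would come from a capture tool such as tcpdump or Wireshark.

```python
import socket
import struct

def extract_firewall_fields(packet: bytes):
    """Pull the values a packet filter inspects out of a raw IPv4+TCP packet:
    source IP, destination IP, source port, destination port, and the SYN flag."""
    ihl = (packet[0] & 0x0F) * 4                  # IPv4 header length in bytes
    src_ip = socket.inet_ntoa(packet[12:16])      # source address field
    dst_ip = socket.inet_ntoa(packet[16:20])      # destination address field
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    syn = bool(packet[ihl + 13] & 0x02)           # SYN bit in the TCP flags byte
    return src_ip, dst_ip, src_port, dst_port, syn

# Tiny demo: a hand-built 20-byte IPv4 header plus a 20-byte TCP SYN header,
# with example addresses and the same kind of ports seen in the capture above.
ip_hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                     socket.inet_aton("203.0.113.10"), socket.inet_aton("198.51.100.1"))
tcp_hdr = struct.pack("!HHIIBBHHH", 55427, 80, 0, 0, 5 << 4, 0x02, 8192, 0, 0)
print(extract_firewall_fields(ip_hdr + tcp_hdr))
# ('203.0.113.10', '198.51.100.1', 55427, 80, True)
```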
So let's look at a simple firewall rule that says to allow SSH only from one particular IP. As soon as the firewall receives a packet from a user, it first checks the packet's source IP. Here the source IP is the 172.21.55.x address, which does not match the rule. It can then check the port the user is trying to connect to, which is 80, and in the firewall rule port 80 is not allowed; only port 22 is. As a result, the firewall simply rejects this packet, and the user cannot connect to that port at all. Now let's look at a second, very similar example. The firewall is again allowing port 22 only from one specific trusted IP. This time the source IP is actually the IP of the allowed client, so the first field matches. The second field is the destination port: it is 80, while the firewall only allows destination port 22. So the client is trying to connect to the server on port 80, but the firewall only permits port 22, and that is why this packet is also dropped by the firewall. I hope you understood the basics of the TCP and IP packet headers.

Now let's do a practical, which will be extremely helpful for our understanding. I have my favourite Linux box here; let me log in, open up the terminal, and start my favourite tool, Wireshark. Wireshark is basically a tool to capture network traffic, which includes the TCP and IP packets. I'll click on Start, and currently there is no traffic coming in. Now let me run curl against kplabs.in. Okay, we got the output from kplabs.in, and if you open Wireshark now you'll see a slew of new packets. The first are the DNS packets, and after DNS you see the SYN packets we talked about, the three-way handshake, and after the three-way handshake you can see that the data transfer has actually begun. Now, as soon as my machine sends that SYN packet to the server, it first reaches the firewall, and what the firewall evaluates is something we have already seen. There is an IP header and a TCP header. The first thing the firewall checks is the source IP, which in my case is my machine's IP; it looks into its rule table and sees whether that source IP is allowed. Then, looking further into the TCP header, it checks the destination port, which is 80, and it allows the connection only if traffic from that source IP to destination port 80 is permitted. In our case we got this nice little reply back, which means the firewall is allowing this particular connection and not blocking the request. So I hope you got the basic concept of the TCP/IP header.

Let's do one more thing and try it out in our AWS console. I have an EC2 instance running on a particular IP address, so let me open that up. If I telnet to port 22 of this specific EC2 instance, you can see that I am connected on port 22. The reason I am connected is that in the security groups, which are basically the Amazon version of a firewall, this particular IP address is allowed. So let's remove that rule. Now nothing is allowed, so the firewall will not permit any connection to the server, and if I try to connect again, you see it does not let me in, because the connection is being blocked at the firewall level itself. And trust me, a firewall is extremely important; every day it can block thousands of attacks. So I would really encourage you to understand the TCP/IP header, try the practical we did with the help of Wireshark, and get familiar with these particular fields, the source port, destination port, source IP, and destination IP. This will help us understand the firewall in great detail, because we'll be doing a detailed firewall session in the upcoming videos.
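If you would rather script the security-group part of this demo than click through the console, a sketch along the following lines should work with boto3. The security group ID and the source CIDR are placeholders, and AWS credentials and region are assumed to be configured already.

```python
import boto3

ec2 = boto3.client("ec2")              # region and credentials assumed configured
GROUP_ID = "sg-0123456789abcdef0"      # placeholder security group ID
MY_IP = "203.0.113.10/32"              # placeholder trusted source CIDR

ssh_rule = [{
    "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
    "IpRanges": [{"CidrIp": MY_IP, "Description": "allow SSH from my IP"}],
}]

# Allow SSH from the trusted IP (the state before we removed the rule in the demo).
ec2.authorize_security_group_ingress(GroupId=GROUP_ID, IpPermissions=ssh_rule)

# Remove the same rule again, after which the telnet test to port 22 fails.
ec2.revoke_security_group_ingress(GroupId=GROUP_ID, IpPermissions=ssh_rule)
```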

9. Network ACL

Hey everyone, and welcome back. In today's video we will be discussing the network ACL, so let's look at some of the important points about network ACLs. The first one is that network ACLs are stateless in nature. The second is that they operate at the subnet level, instead of at the instance level like the security group. This can be understood with the diagram here: you have an EC2 instance, and the security group is associated with the network interface card attached to that EC2 instance, so the security group operates at the instance level. Network ACLs, however, operate at the subnet level. A specific subnet can contain hundreds of instances, and one rule in the network ACL will affect all of the instances associated with that subnet. The third important pointer is that every subnet in a VPC must be associated with a network ACL; generally, whenever you create a new VPC, it automatically creates a default network ACL for you. And the fourth is that, by default, that network ACL allows all inbound and outbound traffic. These are the default network ACL rules that get generated when you create a VPC.

So let's understand why the network ACL is important. Say a company XYZ is getting a lot of attacks from one random IP address. The company has more than 500 servers, and the security team has decided to block that specific IP in the firewall for all the servers. How can you achieve that? Generally, if an organisation has an application that is accessible over the internet, the security groups will typically have something like allow 443 or allow 80. In such cases you cannot block a specific IP; that is simply not possible with security groups. If you allow all the traffic and then want to block one specific IP, a security group cannot express that, and even if it could, it would be painful, because there are 500 servers and you don't want to add a blacklisted IP to all 500 security groups associated with those servers. The better way is the network ACL: if those 500 servers are within the same subnet, you block that IP at the network ACL level, and that's about it.

So let me quickly show you how the network ACL looks and how you can configure rules there. This is the EC2 console, and I have one EC2 instance; we'll be doing the demos on it. If you look into this EC2 instance, it has a public IPv4 address, and within its firewall rules it allows ports 80 and 22. Now, let's say I want to block only one IP from accessing port 80 on this EC2 instance. That is not really possible with the security group, so we need to take the network ACL route. To get to the network ACLs we go to the VPC console and select a VPC; this is the VPC where our EC2 instance is created. If you go a bit down under Security, you have Network ACLs, and there is one network ACL with Default set to yes, meaning it is the default network ACL that was created automatically. This network ACL is associated with the six subnets that are part of the VPC, so a single network ACL can be associated with multiple subnets within the VPC.
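As a rough sketch of that use case in code, a single deny entry on the subnet's network ACL could be added with boto3 roughly as follows; the network ACL ID and the attacker's CIDR below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0123456789abcdef0"      # placeholder network ACL ID
ATTACKER = "198.51.100.77/32"          # placeholder IP to blacklist

# One deny rule at the subnet level covers every instance in the subnet.
# Rule number 99 is evaluated before the default "allow all" rule 100.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=99,
    Protocol="-1",            # -1 means all protocols
    RuleAction="deny",
    Egress=False,             # inbound rule
    CidrBlock=ATTACKER,
)
```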
Now, within the network ACL you have the inbound rules and the outbound rules. Within the inbound rules there are two rules present, and within the outbound rules there are also two. There is a rule number associated with each of them: the first one has rule number 100 and allows all traffic, and the second one has the star and denies all traffic. One important part to remember is that in a network ACL, the lower the rule number, the higher its priority. If traffic matches a specific rule, the network ACL will either allow or deny it based on the configuration you set there, and once the traffic matches a rule it will not look at the rules below it; it just follows what that rule says. This is why, since 100 is the lower number, all the traffic is allowed, and the same applies on the outbound side.

Let's verify whether what I am saying is correct. I'll copy the public IP, go to my CLI, and try to ping this specific EC2 instance, and currently you see we are getting the ping replies back. Great, so connectivity is present. Now let's modify the inbound rules. I'll add one more rule, give it rule number 99, match all traffic just like the first rule, set it to deny this time, and click on save. So now you have two rules that are almost identical, one deny and one allow, with different rule numbers and therefore different priorities. Now, if you try to ping, you'll see the ping is blocked, because there is one rule that denies all traffic and it has priority 99. As soon as the network ACL receives the traffic from my network, it evaluates it against the rules you have set, and since we specified all traffic, that rule matches; and because it matches, the network ACL blocks the traffic and does not look at the next rule at all. Now let's change this rule number from 99 to 101 and click on save. This time the allow rule has the higher priority, and you can see we get a response again. Great.

So I hope you understand at a high level what a network ACL is all about. One important part to remember is that a single network ACL can be associated with multiple subnets, and you can also create a rule for the use case we were discussing. Currently, this is my IP address. Let's say you are allowing traffic from everyone, and there is one IP address that keeps trying to attack you. You can create a rule, give it the number 99, specify that source IP as a /32, set it to deny, and save. Now, if traffic originates from that source address, it matches rule number 99 and is blocked there. Traffic from any other source does not match that rule, so the network ACL looks at the next rule; the next rule says allow all, and hence that traffic is allowed. So this is the high-level overview of the network ACL.
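The first-match, lowest-rule-number-wins behaviour demonstrated above can be sketched in a few lines of Python. This is only an illustration of the evaluation order, not how AWS implements it, and the CIDRs are example values.

```python
import ipaddress

# Rules are evaluated in ascending rule-number order; the first match decides.
RULES = [
    {"number": 99,  "cidr": "198.51.100.77/32", "action": "deny"},   # blacklisted IP
    {"number": 100, "cidr": "0.0.0.0/0",        "action": "allow"},  # everyone else
]

def evaluate(source_ip: str) -> str:
    addr = ipaddress.ip_address(source_ip)
    for rule in sorted(RULES, key=lambda r: r["number"]):
        if addr in ipaddress.ip_network(rule["cidr"]):
            return rule["action"]      # first match wins; later rules are ignored
    return "deny"                      # the implicit "*" rule at the bottom

print(evaluate("198.51.100.77"))  # deny  (caught by rule 99)
print(evaluate("203.0.113.10"))   # allow (falls through to rule 100)
```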
Do remember that the default network ACL created along with a VPC allows everything; however, when you create a custom network ACL, the behaviour is different. Let me actually show you this. Let's go ahead and create a network ACL: I'll give it a name, associate it with a VPC, and create it. All right, so this is the custom network ACL, and within the custom network ACL you see that, by default, everything is denied, for both inbound and outbound. If the network ACL is the default one, that is, you have not created it in a custom way, it will allow everything. So that's the high-level overview of the network ACL. I hope this video has been informative for you, and I look forward to seeing you in the next video.

10. Introduction to Block & Object Storage Mechanism

Hey everyone, and welcome back to the Knowledge Portal video series. In today's lecture we will be speaking about block versus object storage, so let's get started with a high-level overview of the difference between them. In very simple terms, in block storage the data is stored in terms of blocks. Let's assume this is a storage device, for example a hard disk drive. With block storage, whenever you create a file system like NTFS, ext3, or ext4, the file system divides the storage into blocks, and once the blocks are created, the data is stored in those blocks.

Let me show you an example. I'll log in to one of my servers, and there is a command that will show you the block size of a device, here /dev/sda. If I run lsblk, I have two devices: one is sda, which is 25 GB in size, and the second is sdb, which is a swap partition. On sda I have an ext4-based file system, and if you run blockdev against it, it shows the block size of this specific storage device. Going back to the slides: this file system is divided into 4,096-byte blocks. So if you have a data set of 8,192 bytes, that is, eight KB, then two blocks will be allocated; if you have twelve KB of data, three blocks will be allocated. It all works on the basis of blocks. One more important thing to remember is that data stored in blocks is normally read or written a whole block at a time. And most of the file systems we talk about, NTFS or FAT for USB drives and Windows, ext4 or XFS for Linux, are all based on block storage. With block storage, every block has a specific address, and applications can access a specific block through a SCSI call using that block address. Another important thing to remember is that there is no storage-side metadata associated with a block other than its address. So if a file, let's assume an image file, has been stored, there is no metadata: just by looking at the blocks I cannot determine what file is stored there, and there is no description or owner either. We'll look at what I really mean by metadata when we talk about object storage.
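The block arithmetic mentioned above, eight KB of data occupying two 4,096-byte blocks and twelve KB occupying three, is just a ceiling division, for example:

```python
import math

BLOCK_SIZE = 4096  # bytes, the block size reported by blockdev in the demo

def blocks_needed(data_size_bytes: int) -> int:
    """Number of whole blocks a file of this size occupies."""
    return math.ceil(data_size_bytes / BLOCK_SIZE)

print(blocks_needed(8 * 1024))   # 2 blocks for 8 KB
print(blocks_needed(12 * 1024))  # 3 blocks for 12 KB
```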
Object storage is a data storage architecture where the data is stored as an object, as opposed to blocks of storage. The data is not divided into blocks: even if you have, let's assume, ten MB of data, it is stored directly as one object. A very important point here is that an object is defined as the file, which is the data, along with all of its metadata, combined together into a single object. Let me show you this. AWS S3 is a type of object-based storage, so if I go to AWS S3 and open up any bucket, I have a PNG file there, and if I click on it and look within the properties, you will find there is metadata attached. The metadata shown is the content type, which is image/png.

This metadata basically tells you what the object is all about: whether it is a text file, a song or MP3 file, or an image file. This is the metadata that S3 automatically adds when you upload a file, and you can also create custom metadata according to your requirements. So that is possible, and this is the major difference: what we call an object in S3 is not only the file but also the metadata connected with that specific file; the file, along with all its metadata, makes up an object. That is the second point, and the object ID, which is a unique ID associated with the object, is derived from a combination of the object's content and its metadata. We'll look into this when the right time comes; otherwise it becomes much more complicated.

Now, for the difference between block storage and object storage, one of the most famous differences I can mention is that object storage can be accessed via an API. You have an HTTP-based interface, so you can call the object: if this were a text file, you could actually open it in the browser over HTTP and see the contents from the browser itself. You cannot really do that directly with block storage, so that is one of the major differences between object storage and block storage. Also, object storage is not as fast as block storage, but it has its own importance in the industry.
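As a small example of the object-equals-data-plus-metadata idea, uploading a file with custom metadata through boto3 and then reading the metadata back could look roughly like this; the bucket name, key, local file name, and metadata values are placeholders.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-kplabs-bucket"   # placeholder bucket name
KEY = "demo/kplabs.png"            # placeholder object key

# The object is the data plus its metadata: the content type, plus any
# custom key/value pairs we choose to attach ourselves.
with open("kplabs.png", "rb") as body:      # placeholder local file
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=body,
        ContentType="image/png",
        Metadata={"owner": "knowledge-portal", "purpose": "demo"},
    )

# head_object returns the metadata without downloading the data itself.
head = s3.head_object(Bucket=BUCKET, Key=KEY)
print(head["ContentType"])   # image/png
print(head["Metadata"])      # {'owner': 'knowledge-portal', 'purpose': 'demo'}
```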

11. Instance Store Volumes

Hey everyone, and welcome back to the Knowledge Portal video series. Today we'll be speaking about the instance store, so let's begin. The AWS instance store is, by definition, temporary block storage for use with EC2. What do I mean by that? In a nutshell, an instance store is a temporary storage device where we can put our data for the time being; in Linux terms, it is comparable to a temporary file system like /tmp. And what happens to data in a temporary file system? After you stop and start the instance, the data within that file system is lost, and the analogy for an instance store is very similar.

What really happens with an instance store is that the storage device, the hard disk drive, is directly part of the physical server that is hosting the virtual machine. In the previous lecture we discussed the virtualization environment and the architecture of a cloud provider: you have physical servers, on top of them some virtualization technology, be it Xen, KVM, VMware vSphere, or Hyper-V, depending on the provider, and on top of the hypervisor you have the virtual machines. Let's assume each of these virtual machines belongs to an individual customer, so there are four virtual machines and therefore four customers. In this scenario we are using the storage device of that particular host server, and the problem with this architecture is: what happens if the host server, or the storage device in it, fails? If the storage device of the host server fails, the storage of all four VMs fails with it, and that is very dangerous; what if a customer is running some critical database there? You cannot rely on this kind of setup; it is not an optimal scenario. So in the ideal, real-world case the cloud service providers do not rely on this host-local storage: the storage is a network cluster, and that cluster is mounted into each individual VM. We will discuss this in the relevant section, but just to give you an overview, in the ideal case the local storage device of the server is not used; storage from something like a network-attached storage (NAS) system is mounted on the relevant servers. Anyway, that is not important for now. Coming to the second point, the instance store is located on a disk that is physically attached to the host computer, so I hope you got the meaning of that. And the third point is that the size of the instance store varies depending on the instance type; you will understand this when we do the practical.

So let me switch back to the AWS console and click on Launch Instance. If you go to the community AMIs and scroll down a bit, you see there are two root device types: one is EBS and the other is instance store. EBS is like permanent storage; an instance store is like temporary storage, and each of them comes with its own advantages. Let's filter on instance store for now, and for our use case let me pick the first AMI in the list; I'll use that AMI and click on Launch. And if you look, it does not allow you to select all the instance types; there are only specific instance types that you can use. One thing to keep in mind here is the instance size in the name, small versus medium.
Now, m1.small comes with one vCPU, 1.7 GB of RAM, and 160 GB of instance storage; if I scroll up a bit, you will see the instance store is 160 GB, and if you go to the medium size, the instance store is 410 GB. One of the advantages here is that this storage comes at no extra cost: the storage part of the instance store is included, you do not have to pay for it separately, and this is one of the reasons why instance stores are frequently preferred. So if you select the medium size, you actually get a 410 GB storage device that comes along with that particular instance. In our case I'll use the medium size, go to configure the instance details, use the default VPC for the time being, and click on Add Storage. You see there are no extra storage devices listed here; if you click directly on Review and Launch, the instance store will be attached automatically. However, we will click on Add New Volume, and when you see where the instance store is attached, you cannot really modify any of its parameters. This relates directly to the third point, that the instance store size varies depending on your instance type, which we have already discussed: one type has 160 GB, and if you go with another instance type, it has a different storage device attached to it. So you cannot change the size of the instance store for a particular instance type. I'll click on Review and Launch and let it launch; I'll name this instance "Store". One thing to notice now is that if you look down at the root device type, you see instance store here, whereas for the instances we launched previously, the root device type was EBS. Again, EBS is like permanent storage, a proper hard disk drive, while a root device of instance store means temporary storage.

Until it finishes launching, let's complete the second slide, which will give us a further understanding of the instance store. The data in an instance store is lost in the following situations, and this is very important: we have already said that instance storage is temporary storage, so in what scenarios is the data in that temporary storage lost? First, if the underlying disk drive fails. We have already discussed that the instance store uses the underlying host storage, so if the underlying storage of the host server fails, the instance store data is lost. The second is if the instance stops, and for an instance-store-backed instance stopping actually means termination, as we will see. The third is if the instance terminates. The next point is that instance store volumes are included in the cost of the EC2 instance, so they are quite cost-effective: we saw that by launching an m1.small instance, we get 160 GB of instance storage without having to pay anything extra.
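If you want to check programmatically how much instance storage a given instance type ships with, instead of reading it off the launch wizard, something along these lines should work with boto3; the instance type names are just examples.

```python
import boto3

ec2 = boto3.client("ec2")

# Ask EC2 how much instance storage, if any, these example types include.
resp = ec2.describe_instance_types(InstanceTypes=["m5d.large", "m5.large"])

for itype in resp["InstanceTypes"]:
    name = itype["InstanceType"]
    if itype.get("InstanceStorageSupported"):
        size_gb = itype["InstanceStorageInfo"]["TotalSizeInGB"]
        print(f"{name}: {size_gb} GB of instance store included")
    else:
        print(f"{name}: EBS-only, no instance store")
```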
Now that the instance is up, the practical will make it clearer what an instance store really is. I'll copy the public IP and test whether I am able to connect. Okay, I am able to connect; let me SSH in as ec2-user. Oops, I need to specify the key. Okay, I'm logged in, and with a sudo su we are root. Ideally, one way to recognise an instance store from within the instance itself is to run df -h: you will see a device of roughly 147 GB mounted at an ephemeral mount point, something like /media/ephemeral0, and ephemeral essentially means temporary. So this particular device is a temporary storage device; this is very important to understand. The second way to check is to go to the console and look at the root device type, which will be either EBS or instance store.

Now let's do some use-case testing. I'll create a folder under root, which I'll name backup, and inside backup I'll create a test file, say kplabs.txt. nano is not installed, so let me just do an echo: "This is an instance store lecture", redirect it into kplabs.txt, and cat it. So what we have essentially done is create a folder, and inside the folder we have created a text file called kplabs.txt with a simple one-line sentence in it. Now, it is very important to understand that the data within the instance store will be lost if the instance stops; if the instance merely restarts, the data will not be lost. Let's verify this, because it is a very important point, and I'll show you an interesting thing first: if you go to the instance state menu, you see you cannot really stop this instance; you can only reboot or terminate it. If you were to stop this particular EC2 instance, it would automatically be terminated. So let me reboot this instance; I'll click on Reboot, just to see whether what the documentation says, and what we are studying, holds true. Let's wait.

And here is one of the more interesting real-world scenarios I can tell you about. Just a few months ago, we were planning maintenance on a server; it was a database server that needed some kind of maintenance, so we had to stop the instance. Before stopping it, we made sure we had everything ready: we made a checklist of the patches and everything that had to be applied, and all that was needed was a shutdown for a few minutes, after which, once the activity was completed, we would start the instance again. At the last moment, just before we were about to shut it down, one of the system administrators verified the root device type of the instance, and that was a very important observation, because if we had shut down an instance-store-backed instance, it would have been a nightmare. We'll see what I mean by this. So it is very important that, before you shut down an instance, you check that it is not instance-store backed.

Now I'll connect back to the server again. Okay, we are connected. Just to confirm the data is still there: I go to the backup folder, and you can see that kplabs.txt is still present. So during a restart of EC2, the instance store is not affected. For those who are curious about what happens on a shutdown: AWS does not let us stop this instance from the console, but for those crazy guys who want to experiment, we can shut it down from within the server itself. So I'll run the halt command here, which shuts the machine down, and let's see what happens. Okay, I have run the halt command; let me refresh and wait for a minute. The status has changed to initializing, and it should keep changing; now the instance is shutting down, so let's see what really happens after that. It is still shutting down; patience is a virtue if you want to learn a few things. Okay, and as you can see, it has been terminated.
So as soon as we shut down an instance that is backed by the instance store, it gets terminated, and this is very important to understand. Make sure that whenever you stop an instance, its root device type is EBS and not instance store. Those are the fundamentals of the instance store. Again, if you want to do the practical yourself, you can go ahead and do that, but these instance types might not come under the free tier, and you might have to pay if you want to try out the practical aspects of this topic.
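The root-device check that saved us in that maintenance story can also be scripted. A rough boto3 sketch, with a placeholder instance ID, might look like this:

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance ID

resp = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
instance = resp["Reservations"][0]["Instances"][0]

# 'ebs' means a stop is safe; 'instance-store' means stopping is not
# supported and the local data would disappear with the instance.
if instance["RootDeviceType"] == "ebs":
    print("Root device is EBS, safe to stop for maintenance.")
    # ec2.stop_instances(InstanceIds=[INSTANCE_ID])
else:
    print("Instance-store backed! Do not expect the data to survive a shutdown.")
```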

12. Introduction to Elastic Block Store

Hey everyone, and welcome back. In today's video we will be discussing the Elastic Block Store, which is also referred to as EBS. AWS Elastic Block Store provides persistent block storage volumes for EC2 instances. By persistent, I mean that even if you stop and start your EC2 instance, your data will still remain; in the case of instance store volumes, when you stop and start, your data does not persist. That is the great thing about EBS, and it is the reason why EBS is used in production environments. Each EBS volume is designed for more than 99% availability, and the data is automatically replicated within its availability zone.

This can be better understood with the diagram here: you have the server, the virtualization layer, and various EC2 instances. An EC2 instance will not store its data within the host server, because if, say, four EC2 instances are running on one server and there is an issue with that server, for example a hard disk failure, then the data of all four EC2 instances would be corrupted or lost. That is why storing data on the host server itself is never a good idea. The second problem with that approach is, again, that if the hard disk gets corrupted, the data is not replicated anywhere. This is why EBS works more like network-attached storage: the drives sit outside the host and are mounted over the network to your EC2 instances, and they replicate each other, so even if one volume goes down, the data is still present on the other volumes that are part of the replication group. That is another advantage of EBS. But keep in mind that, while AWS quotes this availability and says the data is replicated, I have seen cases, specifically because I have worked in enterprises with thousands of servers, where EBS volumes had issues and data was completely lost. In fact, we had one incident where AWS sent a mail saying there was an issue with EBS, and our entire production database server went down because of it. So, although EBS is good, do remember that you should always take a backup of the data on your EBS volumes, because even EBS volumes can have issues. Those are just some real-world things that I wanted to share.
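Since the advice above is to always back up the data on your EBS volumes, here is a minimal boto3 sketch of taking an EBS snapshot; the volume ID and description are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"   # placeholder EBS volume ID

# Point-in-time backup of the volume; snapshots are stored independently
# of the volume, so they survive even if the volume itself has issues.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Backup of the production database volume",  # example description
)
print("Created snapshot:", snapshot["SnapshotId"])
```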
The third important aspect of EBS is that it is elastic in nature, which allows a dynamic increase in capacity and performance and lets you change the volume type of live volumes; we'll understand this in a moment. Now, this slide is very important to remember: AWS EC2 is regarded as a compute service, and compute generally refers to CPU and memory. If you look at the EC2 pricing or the available EC2 instance types, you will see the vCPUs, the memory, and a storage column that often just says "EBS only". So EC2 is a compute service, which generally means virtual CPU and memory, whereas storage is a different thing altogether: there are various storage options like EBS, instance store volumes, et cetera.

So let's jump to the practical and understand some of these factors in more detail. I'm in my EC2 console, and currently I have one EC2 instance running. If you go a bit down, you will see that this EC2 instance has a root device type of EBS, and the root device is /dev/xvda. If I click there, I can open the volume ID, and it shows me a lot of information, like the size of the volume, the volume type, and the IOPS associated with it. Now, if I want to change these attributes, I am able to do that. In case you remember, we discussed that EBS is elastic in nature, which allows a dynamic increase in capacity and performance. So if I click on Modify Volume and, say, the current volume size is eight GB and I want to change it to ten GB, I can do that; along with that, if I want to change the volume type, say from gp2 to Provisioned IOPS, I can do that as well.

So I hope you got a high-level overview of EBS. Do remember that EBS sits on a separate layer altogether: EBS volumes are not inside the server, they are attached to your instances over the network, which is why EBS can be referred to as network-attached. There is a network in between, and the data travels through that network: when this EC2 instance wants to store some data, the data is routed through the network to the EBS volumes, which are replicated and elastic in nature, allowing us to increase the volume size as well as its capacity and performance.
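The same resize and re-type operation shown in the console can also be done through the API. A rough boto3 sketch, with a placeholder volume ID and example size and IOPS values, could look like this:

```python
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"   # placeholder EBS volume ID

# Grow the volume from 8 GiB to 10 GiB and switch it to Provisioned IOPS (io1)
# while it stays attached and in use; 500 IOPS keeps within io1's 50:1
# IOPS-to-size ratio for a 10 GiB volume.
resp = ec2.modify_volume(
    VolumeId=VOLUME_ID,
    Size=10,
    VolumeType="io1",
    Iops=500,
)
print(resp["VolumeModification"]["ModificationState"])
```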

Go to the testing centre with ease of mind when you use Amazon AWS Certified Cloud Practitioner VCE exam dumps, practice test questions and answers. The Amazon AWS Certified Cloud Practitioner (CLF-C01) certification practice test questions and answers, study guide, exam dumps and video training course in VCE format help you study with ease. Prepare with confidence and study using Amazon AWS Certified Cloud Practitioner exam dumps and practice test questions and answers in VCE format from ExamCollection.

Read More


Comments
* The most recent comments are at the top
  • AWS Guy
  • United Kingdom
  • Apr 26, 2020

@Mad Belo1, congratulations! Is the premium file word for word with its Q&A's with the ones you saw in the exams?

  • Mad Belo1
  • United States
  • Apr 23, 2020

Passed today!! Test results wont come in until the next 5 days. Premium DUMP is FULLY valid!! Good Luck!!!

  • Tristan
  • United States
  • Feb 10, 2020

The practice questions prepared me very well. I passed.

  • Vinilus
  • United States
  • Feb 06, 2020

Passed yesterday 903/1000, also you need study by your self too, due to include new services that AWS launched.

  • Vinilus
  • United States
  • Feb 05, 2020

Passed Today, but you must reinforce with your own-training

  • Fredrick
  • Canada
  • Feb 02, 2020

Premium dumps is valid. only a few new questions, Good luck!

  • Practicante
  • Peru
  • Dec 30, 2019

Hello guys, is the premium exam valid? thank you

  • OD
  • United States
  • Dec 09, 2019

Please is the AWS Certified Cloud Practitioner Premium still valid
Thanks!

  • noor parkar
  • United Arab Emirates
  • Nov 15, 2019

looking for vce simulator. where i can get it.

  • Sesejive
  • United States
  • Oct 23, 2019

I am writing the exam next weekend in the US, is the prep4sure by Abrielle still valid?

  • Rod
  • Italy
  • Oct 22, 2019

Anybody has any update if the premium is valid?

  • peter
  • United States
  • Oct 15, 2019

is the premium dump valid??

  • Carlos
  • Spain
  • Sep 14, 2019

Just took the exam yesterday friday 13th September. Have to say the exam are like 65 q. This free dump has only 32, from which, i got like less than half of the questions. So it helps, but get the official technical essentials PDF and do some quick labs. It asked a lot about billing, trust advisor and some otehr tools. Just with the PDF its not enough for sure.

  • Stpn2me
  • United States
  • Aug 24, 2019

Just took the test this morning 24 Aug 2019. I passed. Yes the dump is valid. You still need to study. Know the cost estimations. Know what services bring what benefits.

  • Edsoa
  • Brazil
  • Aug 18, 2019

Its valid?

  • Kalidas
  • United States
  • Aug 14, 2019

Hello, Please confirm this dump for AWS Cloud Practitioner (32q free and 65q Premium) is valid?

  • Mav
  • United Kingdom
  • Aug 12, 2019

Is this dump still valid in the uk?

  • Richard
  • Netherlands
  • Jul 08, 2019

any practitioner dumps available?
Please share....

  • sravani
  • India
  • Jul 08, 2019

Preparing for aws cloud practioner exam

  • Mahesh
  • United States
  • Apr 27, 2019

Please share the AWS cloud Practioner exam dumps

  • bernard
  • Singapore
  • Mar 23, 2019

is this already available?
AWS Certified Cloud Practitioner

  • VB
  • India
  • Mar 22, 2019

Where can I put hase practioner aws exam?



SPECIAL OFFER: GET 10% OFF

Pass your Exam with ExamCollection's PREMIUM files!

  • ExamCollection Certified Safe Files
  • Guaranteed to have ACTUAL Exam Questions
  • Up-to-Date Exam Study Material - Verified by Experts
  • Instant Downloads

Use Discount Code:

MIN10OFF

