CompTIA CV0-003 Exam Dumps & Practice Test Questions
Question 1:
An organization experienced a datacenter failure and transitioned operations to its disaster recovery (DR) site, which successfully maintained business functions for a week. Now that the primary site has been restored, the organization intends to fail operations back. Before doing so, it needs to synchronize the block-level storage at the primary site with all the updates made at the DR site during that week.
Which approach would be the most efficient for this synchronization while minimizing downtime?
A. Set up replication
B. Copy the data across both sites
C. Restore incremental backups
D. Restore full backups
Correct Answer: A
Explanation:
When reverting operations from a disaster recovery (DR) site back to the primary data center, ensuring data consistency is essential. Any changes that occurred while the DR site was in use—such as file updates, new records, or transactions—must be accurately reflected at the primary site before failback occurs. The goal is to synchronize the primary site with the latest data while keeping operational downtime to a minimum.
The most efficient and reliable approach to achieve this is setting up replication from the DR site to the primary datacenter. Specifically, block-level replication enables near-real-time synchronization of data. This method ensures that all changes made while operating from the DR site are transferred back incrementally and automatically. It minimizes both the risk of data loss and the amount of manual intervention required.
In contrast, Option B, manually copying data between the sites, introduces potential issues. It’s time-consuming, error-prone, and difficult to scale for large volumes of data. Additionally, this approach lacks built-in data consistency checks, which increases the risk of corruption or incomplete data transfer.
Option C, restoring incremental backups, may seem more efficient than full backups but still has drawbacks. It assumes that all incremental backups were taken correctly and that none are missing or corrupted. Restoring them also takes time and may involve service disruption, especially when dependencies need to be reestablished manually.
Option D, restoring full backups, is the least efficient approach. Full backups involve restoring the entire data set, which is both resource-intensive and slow. This process increases the risk of overwriting recent data unless detailed differential logic is used, making it unsuitable for time-sensitive operations.
Therefore, Option A: Set up replication stands out as the most efficient, scalable, and robust solution. It not only automates the synchronization process but also provides real-time data accuracy with minimal disruption. This ensures that the primary datacenter is brought fully up to date before resuming its role in hosting production workloads.
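To make the block-level idea concrete, here is a minimal Python sketch (a conceptual illustration only, not any vendor's replication engine; the volume file names are placeholders) showing why incremental, block-level synchronization moves far less data than a full copy: block checksums are compared and only the blocks that changed at the DR site are rewritten at the primary site.

```python
import hashlib

BLOCK_SIZE = 4096  # 4 KiB blocks, a common block size

def block_checksums(path):
    """Return a list of SHA-256 digests, one per fixed-size block."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).digest())
    return digests

def sync_changed_blocks(source_path, target_path):
    """Copy only the blocks that differ from source (DR copy) to target (primary copy)."""
    src_sums = block_checksums(source_path)
    dst_sums = block_checksums(target_path)
    changed = 0
    with open(source_path, "rb") as src, open(target_path, "r+b") as dst:
        for i, digest in enumerate(src_sums):
            if i >= len(dst_sums) or digest != dst_sums[i]:
                src.seek(i * BLOCK_SIZE)
                dst.seek(i * BLOCK_SIZE)
                dst.write(src.read(BLOCK_SIZE))
                changed += 1
    return changed

# Example: resynchronize the primary copy from the DR copy after a week of changes
# print(sync_changed_blocks("dr_volume.img", "primary_volume.img"), "blocks updated")
```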
Question 2:
A developer needs a virtual machine (VM) for machine learning tasks that require maximum performance. Specifically, the VM must have exclusive access to a complete GPU without sharing it with other workloads.
As the cloud environment supports different GPU provisioning models, which configuration would best satisfy the requirement for dedicated GPU access?
A. Virtual GPU
B. External GPU
C. Passthrough GPU
D. Shared GPU
Correct Answer: C
Explanation:
When a virtual machine (VM) is intended to run machine learning workloads, particularly training of large models, GPU performance is a critical factor. Machine learning algorithms—especially deep learning models—require intensive parallel computation. To support this need, the VM must access a dedicated GPU that is not shared with other processes or virtual instances.
The ideal solution in this scenario is a Passthrough GPU. With this method, a physical GPU is assigned directly to the VM using PCI passthrough. The VM then interacts with the GPU as if it were directly connected hardware, giving it uninterrupted, near-native performance. This setup allows high-efficiency training, optimal hardware utilization, and minimal overhead, which is essential for AI and ML workloads.
Looking at the alternatives:
Option A: Virtual GPU (vGPU) allows multiple VMs to share a single physical GPU by abstracting GPU resources into virtual slices. While vGPU is well-suited for graphics-intensive but less computationally demanding workloads—such as 3D rendering or VDI—it does not provide the full GPU performance required for advanced machine learning training. The shared nature of vGPU can lead to performance variability due to resource contention.
Option B: External GPU (eGPU) involves connecting a GPU externally, typically via high-speed connections like Thunderbolt. eGPUs are more common in personal or workstation environments and are not typically deployed in cloud infrastructures or enterprise-scale virtualization scenarios. They're also limited by the bandwidth and performance constraints of external interfaces.
Option D: Shared GPU, similar to vGPU, involves multiple virtual machines accessing the same GPU. While it enables resource optimization across various VMs, it significantly impacts consistency and performance for tasks requiring dedicated processing power.
In conclusion, Option C: Passthrough GPU is the most suitable configuration for the given requirement. It provides dedicated access, reduces latency, and ensures the VM can fully leverage the GPU’s computational capabilities. For cloud administrators provisioning high-performance environments for machine learning, passthrough GPU offers the best balance of performance, isolation, and reliability.
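For a sense of what passthrough looks like on the host side, the short sketch below (assuming a Linux/KVM host; the sysfs paths are standard Linux locations, but setup steps vary by hypervisor) lists each display-class PCI device and the kernel driver it is bound to. A GPU staged for passthrough is typically bound to vfio-pci rather than the vendor driver, so exactly one VM can claim the entire device.

```python
import glob
import os

def pci_gpu_driver_bindings():
    """List display-class PCI devices and the kernel driver each is bound to.

    A GPU intended for passthrough is usually bound to 'vfio-pci' on the host
    instead of the vendor driver (e.g. 'nvidia'), so the hypervisor can hand
    the whole device to a single VM.
    """
    bindings = {}
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        try:
            with open(os.path.join(dev, "class")) as f:
                dev_class = f.read().strip()
        except OSError:
            continue
        if not dev_class.startswith("0x03"):  # PCI class 0x03xxxx = display controller
            continue
        driver_link = os.path.join(dev, "driver")
        driver = (os.path.basename(os.path.realpath(driver_link))
                  if os.path.exists(driver_link) else "unbound")
        bindings[os.path.basename(dev)] = driver
    return bindings

if __name__ == "__main__":
    for address, driver in pci_gpu_driver_bindings().items():
        print(f"{address}: {driver}")
```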
Question 3:
Your organization is transitioning its on-premises database system to the cloud in order to gain scalability, reduce the need for manual maintenance, and benefit from built-in features such as automated backups and performance optimization.
Which cloud service model best aligns with this approach?
A. Platform as a Service (PaaS)
B. Infrastructure as a Service (IaaS)
C. Container as a Service (CaaS)
D. Software as a Service (SaaS)
Correct Answer: A
Explanation:
When selecting a cloud service model for hosting a database, it’s essential to understand how much of the underlying infrastructure and software stack the organization wants to manage. The scenario presented highlights the need for a solution that handles infrastructure and database management tasks automatically, allowing developers to focus solely on data-related activities and application development. The most fitting option in this context is Platform as a Service (PaaS).
PaaS offers a managed environment where the cloud provider handles the provisioning and maintenance of the hardware, operating system, storage, networking, and the database engine itself. Users interact directly with the database service—writing queries, modeling data, and integrating the database into applications—without worrying about installing patches, configuring backups, or manually scaling resources. Examples include Amazon RDS, Azure SQL Database, and Google Cloud SQL.
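As a concrete illustration of how little the team manages under PaaS, here is a minimal boto3 sketch (assuming AWS; the identifiers, sizes, and credentials are illustrative placeholders) that provisions a managed database with automated backups and a provider-managed standby:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_db_instance(
    DBInstanceIdentifier="app-db",              # hypothetical instance name
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    MasterUsername="appadmin",
    MasterUserPassword="use-a-secrets-manager-instead",
    AllocatedStorage=100,                       # GiB
    BackupRetentionPeriod=7,                    # automated daily backups kept for 7 days
    MultiAZ=True,                               # provider-managed standby for failover
    StorageEncrypted=True,
)
print(response["DBInstance"]["DBInstanceStatus"])
```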
Let’s explore why the other service models are less suitable:
B. Infrastructure as a Service (IaaS): IaaS provides virtual machines and basic infrastructure. While flexible, this model requires users to install and maintain the database software, operating system, patches, and backup routines. It’s a better fit for teams needing granular control over the entire stack, which contradicts the goal of reducing operational overhead.
C. Container as a Service (CaaS): CaaS allows you to deploy and manage containerized applications. While a containerized database can be deployed in this model, it typically requires a high degree of system administration, orchestration with tools like Kubernetes, and deep knowledge of container management—again, counterproductive to the team's goals.
D. Software as a Service (SaaS): SaaS offers fully managed software applications like CRM systems or email services. While databases are used in SaaS platforms, users don’t interact with the database engine directly or design custom schemas. This model is not suitable for teams needing hands-on access to a customizable database.
In conclusion, PaaS delivers the right balance of abstraction and control, automating administrative tasks while still allowing development teams full access to the database functionalities. It aligns perfectly with the organization’s objectives of simplified management and developer productivity.
Question 4:
Users in the drafting department, who rely heavily on CAD and 3D modeling tools, have started experiencing noticeable slowdowns in their virtual desktop environments. These applications require intensive graphical rendering. As the VDI administrator, you need to investigate the issue.
Which system resource should you examine first to identify and resolve the rendering performance issues?
A. GPU (Graphics Processing Unit)
B. CPU (Central Processing Unit)
C. Storage
D. Memory (RAM)
Correct Answer: A
Explanation:
In Virtual Desktop Infrastructure (VDI) environments, especially those supporting users engaged in graphically demanding work like 3D modeling, video editing, or CAD design, the Graphics Processing Unit (GPU) becomes the most critical component. These workloads require advanced rendering and image processing capabilities that go far beyond what the central processor (CPU) or system memory can handle alone.
When users report that graphical rendering performance has dropped, the first resource to examine should be the GPU. This includes checking whether virtual GPUs (vGPUs) have been properly assigned, whether GPU passthrough is configured correctly, and whether the hardware GPU is being overloaded by multiple users. Monitoring tools provided by VDI platforms (like VMware vSphere, Citrix, or NVIDIA GRID) can help identify usage patterns and performance bottlenecks.
Here’s why the GPU should be the primary focus:
Graphics-intensive applications rely on the GPU to handle real-time rendering, 3D visualizations, and hardware acceleration.
If virtual desktops are not assigned sufficient GPU resources, users will experience lag, low frame rates, and poor visual quality.
Misconfigured drivers or outdated GPU firmware can further degrade performance, even when resources appear adequate.
Let’s now consider why the other options are less relevant in this specific situation:
B. CPU: While important for general application processing, the CPU is not responsible for rendering graphics-intensive workloads. Unless the application offloads tasks to the CPU (which is inefficient for rendering), checking the CPU should come later in the troubleshooting process.
C. Storage: Sluggish storage could cause delays when loading large design files, but it wouldn’t directly impact live rendering performance once those files are loaded into memory or processed by the GPU.
D. Memory (RAM): Low RAM might lead to overall performance degradation or application crashes, but it’s not usually the cause of reduced rendering speeds unless there’s a severe memory leak or the system is swapping memory excessively.
In summary, when users report poor rendering in a VDI setup tailored for graphical workloads, GPU performance should be the first area of investigation. Ensuring adequate allocation, monitoring utilization, and verifying proper configuration can quickly uncover and resolve the root cause.
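A quick way to begin that investigation, assuming NVIDIA GPUs or vGPUs and that the nvidia-smi utility is present (other vendors ship comparable tools), is to take a utilization snapshot such as the Python sketch below:

```python
import subprocess

def gpu_utilization_snapshot():
    """Print per-GPU utilization and memory use as reported by nvidia-smi."""
    query = "--query-gpu=index,name,utilization.gpu,memory.used,memory.total"
    result = subprocess.run(
        ["nvidia-smi", query, "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.strip().splitlines():
        index, name, util, mem_used, mem_total = [field.strip() for field in line.split(",")]
        print(f"GPU {index} ({name}): {util}% busy, {mem_used}/{mem_total} MiB memory in use")

if __name__ == "__main__":
    gpu_utilization_snapshot()
```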
Question 5:
An organization’s Chief Information Security Officer (CISO) is conducting a detailed audit of the company’s security framework. As part of this process, the CISO needs to identify every asset within the enterprise that contains vulnerabilities, is out of compliance, or poses unresolved risks. The CISO also wants access to a document that outlines existing risk mitigation strategies, their implementation status, and any remaining residual risk for each asset. To facilitate strategic decisions, the document must offer centralized visibility into each risk, including threat likelihood, business impact, and associated controls.
Which document would best fulfill the CISO’s requirements?
A. Service Level Agreement (SLA)
B. Disaster Recovery (DR) plan
C. Security Operations Center (SOC) procedures
D. Risk Register
Correct Answer: D
Explanation:
The most appropriate document to meet the CISO’s requirements is the Risk Register. A Risk Register is a centralized and comprehensive tool used by organizations to identify, evaluate, and monitor risks across assets and processes. It plays a vital role in enterprise risk management and is particularly useful during audits or security program evaluations.
The Risk Register captures a wide range of information that is critical for informed decision-making. For each risk entry, the document typically includes:
A clear description of the risk
The asset or system affected
The assessed likelihood and potential impact
The individual or team responsible for managing the risk
Mitigation strategies and current status
Residual risk levels after mitigation
For a CISO seeking insight into which assets have known vulnerabilities or compliance issues, the Risk Register provides a structured overview. It not only lists known issues but also outlines which actions have been taken to reduce risks and whether those efforts have fully addressed the concern or left residual exposure.
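To illustrate the structure (the field names and the example entry are purely illustrative, not a prescribed format), a risk register row can be modeled roughly like this:

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    """One row of a risk register, mirroring the fields listed above."""
    risk_id: str
    description: str
    affected_asset: str
    owner: str
    likelihood: Level
    impact: Level                         # business impact on the same scale
    mitigations: list = field(default_factory=list)
    mitigation_status: str = "planned"    # e.g. planned / in progress / complete
    residual_risk: str = "unassessed"

# Hypothetical entry a CISO might review during an audit
entry = RiskEntry(
    risk_id="R-042",
    description="Unpatched OS on legacy finance server",
    affected_asset="fin-app-01",
    owner="Infrastructure team",
    likelihood=Level.HIGH,
    impact=Level.HIGH,
    mitigations=["Apply vendor patches", "Restrict network access"],
    mitigation_status="in progress",
    residual_risk="medium until patching completes",
)
print(entry.risk_id, entry.mitigation_status, entry.residual_risk)
```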
Now, let’s consider why the other options are not suitable:
A. SLA (Service Level Agreement): This agreement outlines the performance standards between service providers and clients. It focuses on service delivery expectations—not on tracking or managing risks associated with assets or vulnerabilities.
B. Disaster Recovery (DR) Plan: A DR plan provides instructions for restoring IT services after a disruption. While vital for continuity, it does not catalog known risks or describe mitigation status.
C. SOC Procedures: These documents guide the Security Operations Center in handling daily operations and incidents. While they touch on detection and response, they don’t offer a detailed or strategic view of risks across the organization.
Thus, the Risk Register is the document that most effectively satisfies the CISO’s need for visibility into known risks, mitigation efforts, and decision-making support.
Question 6:
A cloud engineer is managing infrastructure in a public cloud setup where all resources are currently hosted within a single virtual network (VPC or VNet). As the cloud environment scales, the engineer finds that there are no more available IP addresses in the current network, blocking the deployment of new servers. The engineer needs to implement a long-term, scalable solution that enables continued resource growth without impacting existing services.
What should the engineer do to resolve the IP limitation and support future expansion?
A. Create a new VPC or Virtual Network and connect it to the current one using network peering
B. Use dynamic routing within the existing network
C. Enable DHCP on the current network to allocate IPs automatically
D. Subscribe to an IP Address Management (IPAM) service for more public IP addresses
Correct Answer: A
Explanation:
The best solution to resolve IP exhaustion in a cloud environment is to create a new Virtual Network (VNet/VPC) and establish peering with the existing one. Virtual network peering allows two separate virtual networks to communicate privately using internal IP addresses without routing through the internet. This not only increases the available IP address pool but also ensures high availability and performance with minimal disruption to existing infrastructure.
In cloud platforms such as AWS, Azure, or Google Cloud, virtual networks are assigned a CIDR block that defines the number of usable IP addresses. Once exhausted, the network cannot accommodate new resources unless the range is expanded—which may not be possible or desirable in live environments. Creating a new VNet or VPC offers a cleaner, safer solution.
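As a sketch of what that looks like in practice (assuming AWS; the CIDR ranges and resource IDs are placeholders), the engineer would create a second VPC with a non-overlapping address range, peer it with the exhausted one, and add routes so both ranges are reachable privately:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

existing_vpc_id = "vpc-0123456789abcdef0"            # placeholder: the exhausted VPC
new_vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")    # must not overlap the existing CIDR
new_vpc_id = new_vpc["Vpc"]["VpcId"]

# Peer the two VPCs and accept the peering request (same account in this sketch).
peering = ec2.create_vpc_peering_connection(VpcId=existing_vpc_id, PeerVpcId=new_vpc_id)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Each side still needs a route to the other CIDR via the peering connection,
# for example in the existing VPC's route table:
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",            # placeholder route table ID
    DestinationCidrBlock="10.1.0.0/16",
    VpcPeeringConnectionId=peering_id,
)
```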
Why the other options fall short:
B. Dynamic routing: While dynamic routing helps manage traffic between networks, it does not address the fundamental issue—lack of IP addresses. It cannot extend the IP space.
C. Enabling DHCP: DHCP automates IP address allocation but does not increase the available number of IPs. If the pool is empty, DHCP cannot assign addresses.
D. IPAM service: IPAM solutions help manage and organize IP address usage across environments. However, subscribing to IPAM does not expand the IP range of an existing network nor fix immediate address shortages.
By peering a new VNet with the original, the cloud engineer maintains secure and efficient connectivity while significantly expanding capacity for new cloud resources. This is the most scalable and seamless option for managing growth in a modern cloud infrastructure.
Question 7:
A system administrator has been assigned to move a legacy application running on a physical server in an on-premises environment to the cloud. The requirement is to transfer the full operating system, along with all its applications and configurations, into a cloud-hosted virtual machine without rebuilding the system manually.
Which migration method is most suitable for this task?
A. V2V (Virtual to Virtual)
B. V2P (Virtual to Physical)
C. P2P (Physical to Physical)
D. P2V (Physical to Virtual)
Correct Answer: D
Explanation:
The scenario clearly describes the need to migrate a complete physical server—including the operating system, applications, settings, and data—to a virtual machine hosted in a cloud environment. The most appropriate solution for this type of migration is P2V, which stands for Physical to Virtual.
P2V is a process that converts a physical server into a virtual machine image. This image includes all the essential elements of the original server: the operating system, installed applications, user settings, and data. Tools such as VMware vCenter Converter, Microsoft Virtual Machine Converter, AWS Server Migration Service, and Azure Migrate are commonly used to perform P2V migrations. These tools simplify the transition by automating much of the conversion process and minimizing the chances of error.
A significant benefit of P2V is that it helps organizations move legacy or custom applications that are hard to reinstall or reconfigure. Instead of rebuilding the server environment from scratch, the system administrator can simply replicate the existing server and deploy it as a virtual machine in the cloud. This reduces downtime and helps maintain consistency.
Let’s look at the incorrect options:
A. V2V (Virtual to Virtual): This refers to migrating one virtual machine to another virtual platform, such as from VMware to Hyper-V. It does not involve a physical machine and is irrelevant here.
B. V2P (Virtual to Physical): This is the reverse of what is needed. It involves deploying a virtual machine back to physical hardware, which is not the requirement in this case.
C. P2P (Physical to Physical): This migration involves moving data or OS setups from one physical machine to another, typically for hardware upgrades or replacements, and not for virtualization.
Given that the task is to convert a physical machine into a virtual one for deployment in the cloud, P2V is the only method that aligns with the objective. It ensures that the entire environment is preserved and replicated accurately in a cloud-based virtual machine.
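To give a flavor of what happens under the hood (a simplified sketch only; the dedicated migration tools named above automate the full workflow, including driver injection and boot fix-ups), the core of P2V is capturing the physical disk and converting it into a virtual disk format, for example with qemu-img:

```python
import subprocess

# Assumes qemu-img is installed and the physical server's disk has already been
# captured as a raw image; file names are placeholders.
source_image = "physical-server-disk.raw"   # raw capture of the physical disk
target_image = "physical-server-disk.vhd"   # virtual disk a hypervisor or cloud import can consume

subprocess.run(
    ["qemu-img", "convert", "-f", "raw", "-O", "vpc", source_image, target_image],
    check=True,
)
print(f"Converted {source_image} -> {target_image}")
```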
Question 8:
During a security audit, a cloud administrator discovers that Sales team members have unintended access to a financial application, which should be exclusive to the Finance department. Upon review, it’s found that this occurred because the Sales group was mistakenly added under the Finance group, giving Sales team members elevated permissions.
Considering that the organization uses Single Sign-On (SSO), which access control model needs to be corrected to fix this misconfiguration?
A. Discretionary Access Control (DAC)
B. Attribute-Based Access Control (ABAC)
C. Mandatory Access Control (MAC)
D. Role-Based Access Control (RBAC)
Correct Answer: D
Explanation:
The access control issue described stems from improper group structuring, where members of the Sales team are receiving permissions intended only for the Finance department due to nested roles or groups. This type of configuration problem is best understood—and resolved—within the framework of Role-Based Access Control (RBAC).
RBAC is a widely used access control method in enterprise environments. It organizes permissions based on roles, which are then assigned to users or user groups. In this case, roles like "Sales" and "Finance" likely have different access levels. However, the Sales team inadvertently inherited the Finance role’s permissions because of group nesting. This violates the principle of least privilege, which states that users should only have the access necessary to perform their specific job functions.
Fixing this issue involves revisiting the structure of roles and ensuring that no inappropriate inheritance exists between them. Roles should be designed to be mutually exclusive when their responsibilities and access requirements differ significantly—as is the case with Sales and Finance.
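The following minimal sketch (role names and permissions are illustrative) shows how nested roles leak permissions and how removing the nesting restores least privilege:

```python
role_permissions = {
    "Sales": {"crm.read", "crm.write"},
    "Finance": {"finance-app.read", "finance-app.write"},
}

# Misconfiguration: Sales was nested under Finance, so it inherits Finance's permissions.
role_parents = {"Sales": ["Finance"]}

def effective_permissions(role):
    """Resolve a role's permissions, following any parent-role nesting."""
    perms = set(role_permissions.get(role, set()))
    for parent in role_parents.get(role, []):
        perms |= effective_permissions(parent)
    return perms

print(effective_permissions("Sales"))   # includes the finance-app permissions: unintended

# The fix: remove the nesting so the two roles stay mutually exclusive.
role_parents["Sales"] = []
print(effective_permissions("Sales"))   # back to only the CRM permissions
```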
Now, let's evaluate why the other options are not suitable:
A. Discretionary Access Control (DAC): DAC places control in the hands of the resource owner, allowing them to grant access at their discretion. It’s not typically based on organizational roles or group hierarchies, so it doesn’t apply to this nested group issue.
B. Attribute-Based Access Control (ABAC): ABAC uses dynamic attributes (such as department, time of day, location) to determine access rights. While powerful and flexible, this model doesn’t rely on group membership structures, so it's not the model in question here.
C. Mandatory Access Control (MAC): MAC enforces access based on strict rules defined by a central authority, often using security labels or classifications. It’s typically found in military or government systems, and it does not align with the group-based setup described.
In summary, the issue stems from how users are granted access via roles, and the solution lies in properly designing and isolating those roles. Therefore, the most relevant model—and the one requiring revision—is RBAC (Role-Based Access Control).
Question 9:
A cloud administrator is tasked with configuring a virtual machine (VM) template that will be used to deploy multiple instances for a web application. The administrator wants to ensure consistency across all VMs while reducing configuration time.
Which of the following is the BEST approach?
A. Use an infrastructure as code (IaC) tool to create each VM manually
B. Create a snapshot of an existing running VM and replicate it
C. Develop a VM image with preinstalled software and settings
D. Deploy VMs using separate ISO files and scripts per instance
Correct Answer: C
Explanation:
The key concern in this scenario is consistency and efficiency during deployment. The administrator aims to deploy multiple virtual machines while ensuring each instance maintains uniform settings, configurations, and application requirements.
Option C is the most efficient and scalable method. Creating a VM image (also known as a golden image) allows the administrator to capture a standardized virtual machine configuration that includes preinstalled operating systems, application software, security settings, and network configurations. Once created, this image can be reused to deploy multiple VM instances quickly and with identical setups. This practice significantly reduces human error, speeds up provisioning, and ensures configuration compliance.
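For example, on AWS the workflow might look like the boto3 sketch below (instance IDs, names, and counts are placeholders): capture the fully configured reference VM as an image once, then launch identical instances from it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Capture the reference VM (with software and settings preinstalled) as a golden image.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",       # placeholder: the configured reference VM
    Name="webapp-golden-image-v1",
)
image_id = image["ImageId"]

# Wait until the image is available before launching from it.
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# Stamp out identical web servers from the one image.
ec2.run_instances(
    ImageId=image_id,
    InstanceType="t3.medium",
    MinCount=5,
    MaxCount=5,
)
```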
Option A refers to infrastructure as code (IaC), a powerful automation method, but the option describes using it to create each VM manually, which defeats the purpose of automation. While IaC tools like Terraform or Ansible can streamline deployment, they are most effective when paired with prebuilt VM images rather than manual, per-VM setups.
Option B, using a snapshot, captures the current state of a running VM, including memory and disk state. While snapshots are useful for backup or rollback, they are not ideal for scalable deployments, especially across environments or over extended timeframes.
Option D, deploying each VM individually using ISO files and custom scripts, is a time-consuming and error-prone approach. It lacks the efficiency and consistency needed for large-scale VM provisioning.
Therefore, the best answer is C: using a VM image provides a balance of consistency, speed, and maintainability across multiple deployments.
Question 10:
A company is migrating critical workloads to the cloud. The IT team wants to ensure that sensitive customer data is protected both in transit and at rest.
Which of the following should the team implement to meet this requirement?
A. VPN tunnels for internal traffic only
B. Encryption using TLS and disk-level encryption
C. Role-based access control (RBAC)
D. A cloud firewall and antivirus tools
Correct Answer: B
Explanation:
Protecting sensitive customer data during a cloud migration requires ensuring security both while data is moving (in transit) and when it is stored (at rest). This is a fundamental requirement of cloud data protection and is often mandated by compliance frameworks such as GDPR, HIPAA, and PCI-DSS.
Option B is the correct answer because it covers both aspects of data protection. TLS (Transport Layer Security) ensures that data is encrypted as it travels between systems, preventing interception or tampering. Disk-level encryption, also known as data-at-rest encryption, secures data stored on virtual drives or cloud-based storage services. Together, these technologies provide a strong foundation for data confidentiality and integrity.
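The sketch below (assuming AWS for storage and a hypothetical application endpoint; the URL, IDs, and key alias are placeholders) shows both halves of the requirement: TLS protecting data in transit and an encrypted volume protecting data at rest.

```python
import boto3
import requests

# In transit: an HTTPS endpoint negotiates TLS, so the request body is encrypted on the wire.
resp = requests.post(
    "https://api.example.com/customers",        # hypothetical application endpoint
    json={"name": "Jane Doe"},
    timeout=10,
)
resp.raise_for_status()

# At rest: create the block storage volume with encryption enabled so anything
# written to it is encrypted with a managed key.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                                    # GiB
    Encrypted=True,
    KmsKeyId="alias/customer-data-key",          # hypothetical KMS key alias
)
```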
Option A, while beneficial for securing traffic between on-premises and cloud networks, does not address data at rest and is limited to network-level protection. A VPN tunnel encrypts traffic between endpoints but does not inherently protect data stored on cloud servers or services.
Option C, RBAC (role-based access control), is important for managing who can access what, but it does not encrypt data. It helps enforce least privilege access, reducing the risk of unauthorized access but does not meet the core requirement of protecting data in motion or at rest.
Option D, combining a cloud firewall and antivirus, offers protection against threats like malware and unauthorized access. However, these tools do not ensure that data is encrypted, especially during transmission or storage.
In a secure cloud environment, end-to-end encryption is a must-have. By applying TLS for transit and disk encryption for rest, organizations ensure that even if the data is intercepted or the storage media is compromised, it remains unintelligible without the proper decryption keys. This layered approach aligns with best practices in cloud security and directly addresses the requirements stated in the scenario.
Hence, the best option is B.