Microsoft AZ-303 Exam Dumps & Practice Test Questions

Question 1:

Which two features are the primary advantages of implementing Azure Virtual Network (VNet) Peering? (Select two.)

A. Enables secure communication between virtual networks in the same or different Azure regions
B. Allows virtual networks to share a public IP address for external connectivity
C. Facilitates seamless connectivity between Azure and on-premises infrastructures
D. Supports communication between Azure VMs across VNets without VPN
E. Automatically balances traffic loads between peered virtual networks to improve fault tolerance

Correct Answers: A and D

Explanation:

Azure Virtual Network Peering is a core feature within Microsoft Azure’s networking capabilities. It allows administrators to connect two or more virtual networks (VNets) across the same region (regional peering) or across different Azure regions (global peering). The peering is accomplished using Azure’s private backbone, which avoids traffic going over the public internet, thereby enhancing security and reducing latency.

Option A is accurate because peering enables secure, direct communication between virtual networks in both local and global Azure environments. Once two VNets are peered, resources like virtual machines can communicate with each other as if they were part of the same network. The traffic between the peered networks stays private, traveling over Microsoft's internal backbone instead of the public internet (note that peering keeps traffic off the public internet but does not itself encrypt it).

Option D is also correct. VNet peering allows Azure virtual machines in different VNets to interact without the need for VPN gateways. This is a substantial advantage, as it eliminates the complexity of setting up and maintaining VPN tunnels, certificates, or external routing configurations. Since communication occurs within Azure’s infrastructure, it’s both faster and more reliable than VPN-based connections.
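As a rough illustration, the sketch below shows how such a peering might be created programmatically with the Azure SDK for Python (azure-mgmt-network). The subscription ID, resource group, and VNet names are placeholders, and keep in mind that a matching peering must normally be created in the opposite direction as well before traffic will flow.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values for illustration only.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "network-rg"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

remote_vnet_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/virtualNetworks/vnet-b"
)

# Create the peering from vnet-a to vnet-b; a second peering from
# vnet-b back to vnet-a is also required for two-way connectivity.
poller = client.virtual_network_peerings.begin_create_or_update(
    RESOURCE_GROUP,
    "vnet-a",
    "vnet-a-to-vnet-b",
    {
        "remote_virtual_network": {"id": remote_vnet_id},
        "allow_virtual_network_access": True,   # let VMs in each VNet reach the other
        "allow_forwarded_traffic": False,
        "allow_gateway_transit": False,
    },
)
print(poller.result().peering_state)
```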

Now, let’s analyze the incorrect options:

  • Option B is incorrect because public IP addresses are not shared between peered virtual networks. Public IPs remain associated with specific Azure resources, and peering does not merge or route outbound traffic through a shared IP pool.

  • Option C is inaccurate because VNet peering does not support hybrid scenarios (i.e., connecting Azure VNets to on-premises networks). For such integration, services like VPN Gateway or Azure ExpressRoute are required.

  • Option E is also wrong. VNet peering does not provide automatic load balancing or fault tolerance. Load distribution must be explicitly designed using services like Azure Load Balancer or Application Gateway, and peering merely facilitates communication—not traffic management.

To conclude, VNet peering is a lightweight yet powerful solution for enabling private, low-latency communication between Azure virtual networks without external routing infrastructure. Thus, the two correct answers are A and D.

Question 2:

Which two Azure Active Directory (Azure AD) features are most directly responsible for strengthening the authentication process? (Choose two.)

A. Azure AD Conditional Access
B. Azure AD B2B Collaboration
C. Azure AD Multi-Factor Authentication (MFA)
D. Azure AD Application Proxy
E. Azure AD Self-Service Password Reset (SSPR)

Correct Answers: A and C

Explanation:

Microsoft Azure Active Directory (Azure AD) provides enterprise-grade identity and access management tools to secure access to applications and data. Two of the most authentication-focused security mechanisms within Azure AD are Conditional Access policies and Multi-Factor Authentication (MFA).

Option A – Conditional Access – is a rules-based engine that helps organizations implement automated access control decisions based on conditions like user location, device compliance, login risk, and the application being accessed. Conditional Access policies add intelligence to the authentication process by enforcing adaptive rules. For instance, users logging in from a high-risk country may be blocked or required to complete MFA. This conditional enforcement significantly raises the security level of the authentication process by aligning access with organizational risk tolerance.

Option C – Multi-Factor Authentication (MFA) – enhances authentication by requiring two or more verification methods. This could be a combination of something the user knows (e.g., password), something they have (e.g., a smartphone with an authenticator app), or something they are (e.g., fingerprint). Even if a user's credentials are compromised, an attacker cannot proceed without the second factor, making MFA one of the most effective tools to prevent unauthorized access and data breaches.
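To show how these two features work together, here is a minimal sketch that creates a Conditional Access policy requiring MFA for all users via the Microsoft Graph conditionalAccessPolicy resource. Token acquisition is omitted, and the policy name and report-only state are assumptions made for the example.

```python
import requests

# Assumes a Graph access token with Policy.ReadWrite.ConditionalAccess
# permission has already been obtained (acquisition omitted here).
GRAPH_TOKEN = "<access-token>"

policy = {
    "displayName": "Require MFA for all users (example)",
    # Start in report-only mode so sign-ins are evaluated but not yet enforced.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```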

Now, for the incorrect options:

  • Option B, B2B collaboration, enables secure sharing of organizational resources with external users, like partners or contractors. While it supports secure access, it primarily focuses on identity federation and resource sharing, not direct authentication hardening.

  • Option D, Application Proxy, provides secure remote access to on-premises applications. It plays a role in securing application access, but does not fundamentally change how users authenticate.

  • Option E, Self-Service Password Reset, allows users to reset their own passwords, enhancing convenience and reducing support tickets. However, it addresses password recovery, not authentication strength.

In conclusion, the two Azure AD capabilities that directly fortify user authentication are Conditional Access and Multi-Factor Authentication. These features create a layered, adaptive security model that mitigates common threats like credential theft, phishing, and brute-force attacks. Therefore, the correct answers are A and C.

Question 3:

When designing a monitoring solution for Azure virtual machines, which two Azure services are most appropriate for capturing and analyzing performance metrics? (Select two options.)

A. Azure Monitor
B. Azure Traffic Manager
C. Azure Log Analytics
D. Azure Application Insights
E. Azure Network Watcher

Correct Answers: A and C

Explanation:

Monitoring virtual machines (VMs) in Azure requires a structured approach using purpose-built services that can collect, analyze, and respond to system-level telemetry data. Among the tools available, Azure Monitor and Azure Log Analytics are the two most essential services for comprehensive metric tracking and analysis for virtual machines.

Azure Monitor (Option A) is Azure’s core platform for collecting and evaluating telemetry across all Azure resources. It enables real-time monitoring of VM performance by gathering platform-level metrics such as CPU usage, memory consumption, disk I/O, and network throughput. It also supports setting up alerts, creating performance dashboards, and automating responses to metric thresholds. With the optional installation of diagnostic agents on VMs, Azure Monitor can access more detailed performance and health insights. As a central component of the Azure observability suite, Azure Monitor ensures system health and rapid issue resolution.

Azure Log Analytics (Option C) works in conjunction with Azure Monitor and provides advanced analytical capabilities. It stores log and metric data in a Log Analytics Workspace, where users can run complex queries using Kusto Query Language (KQL) to discover patterns, troubleshoot incidents, or generate reports. Log Analytics is especially powerful for performing deep inspections of historical performance data, identifying anomalies, and setting custom alerting criteria.
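As an illustration, a query against a Log Analytics Workspace can be scripted with the azure-monitor-query library and KQL, as in the hedged sketch below. The workspace ID is a placeholder, and the Perf table is assumed to be populated by the monitoring agent on the VMs.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Average CPU utilization per VM, bucketed by hour, over the last day.
kql = """
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
| order by AvgCpu desc
"""

response = client.query_workspace(WORKSPACE_ID, kql, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(list(row))
```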

The remaining options do not align directly with the task of monitoring virtual machine metrics:

  • Azure Traffic Manager (Option B) is a DNS-based load balancing service that distributes incoming traffic across endpoints globally but does not offer metric collection or analysis capabilities.

  • Azure Application Insights (Option D) is focused on monitoring the performance and usage of applications, especially web services, and is not designed to capture infrastructure-level data like VM CPU or disk metrics.

  • Azure Network Watcher (Option E) specializes in network diagnostics and traffic analysis, offering tools such as packet capture and topology views, but it does not provide core performance data for VMs.

To summarize, for a comprehensive VM monitoring strategy in Azure, Azure Monitor delivers essential real-time metrics and alerting, while Azure Log Analytics empowers deeper, more flexible data analysis. Therefore, A and C are the best-suited services for tracking and evaluating Azure virtual machine performance.

Question 4:

When selecting a managed disk for an Azure virtual machine, which two considerations are most crucial for ensuring optimal disk performance? (Select two options.)

A. Required IOPS for the workload
B. Number of virtual CPUs (vCPUs) in the VM
C. Geographical location of the storage account
D. Nature of the workload (e.g., transactional vs. archival)
E. Type of storage redundancy (LRS, ZRS)

Correct Answers: A and D

Explanation:

To ensure high performance for an Azure virtual machine, especially when attaching managed disks, it is critical to consider disk characteristics that directly affect read/write speed, throughput, and responsiveness. The two most essential factors in this regard are IOPS requirements and the type of workload being executed.

Option A, IOPS (Input/Output Operations Per Second), is a primary performance metric that indicates how many read/write operations a disk can handle per second. Azure offers different types of managed disks — such as Standard HDD, Standard SSD, Premium SSD, and Ultra Disk — each designed for varying performance levels. For example, Premium SSDs and Ultra Disks offer high IOPS suitable for data-intensive applications like SQL databases or real-time transaction processing. If the disk does not support the IOPS your application requires, it may suffer from latency, reduced throughput, or degraded user experience.

Option D, the type of workload, is equally important because different workloads demand different disk performance characteristics. For instance, transactional workloads, like OLTP databases, need low latency and high IOPS, which makes Premium or Ultra Disks more appropriate. Conversely, non-transactional workloads, such as archival storage or backup data, can operate efficiently on Standard HDDs or Standard SSDs, which are more cost-effective but offer lower performance.
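To make this concrete, the sketch below provisions a Premium SSD managed disk with the azure-mgmt-compute SDK; the SKU name encodes the disk type, and the subscription, resource group, and size are placeholder assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "db-rg"                # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Premium_LRS selects a Premium SSD; for archival workloads Standard_LRS
# (Standard HDD) or StandardSSD_LRS would usually be chosen instead.
poller = client.disks.begin_create_or_update(
    RESOURCE_GROUP,
    "sql-data-disk",
    {
        "location": "eastus",
        "sku": {"name": "Premium_LRS"},
        "disk_size_gb": 512,  # larger Premium disks also carry higher baseline IOPS
        "creation_data": {"create_option": "Empty"},
    },
)
disk = poller.result()
print(disk.name, disk.sku.name)
```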

Let's now examine why the other options are less relevant:

  • Option B involves the number of vCPUs in the VM. While the VM size caps the maximum IOPS and throughput the VM can drive, it doesn't directly determine which disk type to select. VM sizing is a broader performance consideration but secondary to disk-specific capabilities.

  • Option C, which refers to the storage account's location, is not applicable for managed disks. Azure abstracts the underlying storage accounts for managed disks, and each disk is automatically placed in the same region as its VM.

  • Option E, storage redundancy, impacts data availability and durability, not performance. Whether you choose LRS (Locally Redundant Storage) or ZRS (Zone-Redundant Storage), it won’t significantly alter disk IOPS or latency under normal conditions.

In summary, matching the disk type with your workload’s IOPS demands and operational characteristics is vital for achieving high performance and cost efficiency in Azure. Thus, A and D are the most important considerations when selecting a managed disk.

Question 5:

You are designing a solution in Microsoft Azure for a client who needs to host a web application that must remain available even during planned maintenance or regional outages.

Which two Azure services should you recommend to ensure high availability and resiliency for this web app? (Choose 2.)

A. Azure Traffic Manager
B. Azure Availability Zones
C. Azure DevTest Labs
D. Azure Advisor
E. Azure Backup

Correct Answers: A and B

Explanation:

The Microsoft AZ-303 exam evaluates a candidate's ability to implement and monitor Azure infrastructures, with a focus on high availability, disaster recovery, identity, security, and integration.

In this scenario, the client needs a web application that is resilient and highly available across different failure domains. To achieve this, the correct Azure services must be used to distribute the application load and maintain service continuity even in the face of planned maintenance or regional outages.

Azure Traffic Manager (Option A) is a DNS-based traffic load balancer that enables distribution of traffic to different Azure regions, improving availability and responsiveness. Traffic Manager supports several routing methods, including geographic and performance-based routing. In case one region becomes unavailable, it can redirect traffic to another healthy endpoint, ensuring continued availability of the web app.

Azure Availability Zones (Option B) are physically separate zones within an Azure region. Deploying the application across multiple zones protects against datacenter failures and provides higher availability. If one zone goes down due to hardware failure or power outage, the others can continue running the application.
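Tying the two together, the hedged sketch below (using the azure-mgmt-trafficmanager SDK) creates a priority-routed Traffic Manager profile that fails over between web app endpoints in two regions, while each regional deployment would itself span availability zones. The endpoint resource IDs, DNS name, and health-probe path are placeholder assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "webapp-rg"           # placeholder

client = TrafficManagerManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

profile = client.profiles.create_or_update(
    RESOURCE_GROUP,
    "webapp-tm",
    {
        "location": "global",
        "traffic_routing_method": "Priority",  # fail over to the next priority on outage
        "dns_config": {"relative_name": "webapp-contoso-example", "ttl": 30},
        "monitor_config": {"protocol": "HTTPS", "port": 443, "path": "/health"},
        "endpoints": [
            {
                "name": "primary-eastus",
                "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
                "target_resource_id": "<resource-id-of-east-us-web-app>",
                "priority": 1,
            },
            {
                "name": "secondary-westeurope",
                "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
                "target_resource_id": "<resource-id-of-west-europe-web-app>",
                "priority": 2,
            },
        ],
    },
)
print(profile.dns_config.fqdn)
```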

Azure DevTest Labs (Option C) is used for creating development and testing environments quickly and is not intended for production high-availability scenarios.

Azure Advisor (Option D) provides best practice recommendations but does not directly contribute to availability or resiliency.

Azure Backup (Option E) helps with data recovery and long-term retention but does not address application uptime or traffic distribution.

In the context of the AZ-303 exam, understanding how to architect solutions that span availability zones and regions using services like Traffic Manager and Availability Zones is essential. These services are critical for designing robust solutions that meet enterprise-grade uptime SLAs and business continuity objectives.

Question 6:

When setting up an Azure Storage account, which two configuration options are the most critical for ensuring that your data remains highly durable and readily available? (Choose 2.)

A. Selecting the type of storage replication (e.g., LRS, GRS)
B. Activating Azure Blob Indexer to improve search functionality
C. Enabling Azure Storage Service Encryption (SSE)
D. Using a custom domain name for the storage endpoint
E. Choosing between Standard or Premium performance tiers

Correct Answers: A and E

Explanation:

To achieve maximum durability and availability of your stored data in Azure, two of the most crucial storage account configurations are the replication type and the performance tier. These two directly affect how resilient your data is to failures and how reliably it can be accessed.

Option A—replication type—is arguably the most essential factor in ensuring data protection and access in adverse conditions. Azure offers several replication strategies:

  • LRS (Locally Redundant Storage): Replicates data three times within a single data center. It protects against hardware failures but not regional disasters.

  • ZRS (Zone-Redundant Storage): Replicates across multiple availability zones in a region, offering higher fault tolerance.

  • GRS (Geo-Redundant Storage): Copies data to a secondary region, hundreds of miles away, guarding against regional outages.

  • RA-GRS (Read-Access GRS): Adds read access to the secondary region, improving availability further.

Selecting ZRS, GRS, or RA-GRS increases both durability (survivability of data) and availability (accessibility even during failures).

Option E, the performance tier, also influences availability. Premium tiers typically use SSD-backed storage, which not only improves speed but also enhances reliability and consistency in data retrieval. While performance tiers are often associated with speed, they also impact infrastructure robustness, reducing latency and helping ensure continuous uptime under high load.
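Both settings are chosen when the account is created, as in this hedged sketch with the azure-mgmt-storage SDK. The SKU name combines the performance tier and the replication type; the subscription, account name, and region are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "data-rg"             # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# "Standard_GRS" = Standard performance tier + geo-redundant replication.
# Other examples: "Standard_LRS", "Standard_ZRS", "Standard_RAGRS", "Premium_LRS".
poller = client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    "contosodataexample01",  # must be globally unique, lowercase letters and digits
    {
        "location": "eastus",
        "sku": {"name": "Standard_GRS"},
        "kind": "StorageV2",
    },
)
account = poller.result()
print(account.name, account.sku.name)
```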

Now, examining the less relevant options:

  • B (Blob Indexer): Helps with content discovery but doesn't impact the durability or availability of data itself.

  • C (SSE): Ensures data is encrypted at rest, enhancing security, not durability or uptime.

  • D (Custom domain): Improves user access experience and branding but has no effect on backend storage resilience or fault tolerance.

In conclusion, for organizations prioritizing data reliability and uptime, the replication method and performance tier are the key Azure Storage settings. These ensure that your data is both preserved through failures and accessible when you need it most.

Question 7:

Which two Azure services are most commonly used for analyzing and managing log data to gain insights into system security and operations? (Choose 2.)

A. Azure Sentinel
B. Azure Log Analytics
C. Azure Active Directory
D. Azure Automation
E. Azure Traffic Analytics

Correct Answers: A and B

Explanation:

In the Azure ecosystem, analyzing and managing log data is vital for understanding both security threats and operational health. The two most effective tools for these purposes are Azure Sentinel and Azure Log Analytics.

Option A, Azure Sentinel, is Microsoft’s cloud-native SIEM (Security Information and Event Management) platform. It is designed for real-time threat detection, incident investigation, and automated response. Sentinel pulls in data from multiple sources—including firewalls, devices, identity platforms, and cloud workloads. It offers built-in AI to identify unusual behavior, generate alerts, and even trigger automation to mitigate threats. Crucially, Sentinel is fully integrated with Log Analytics and uses the same Kusto Query Language (KQL) to query log data efficiently.

Option B, Azure Log Analytics, is the backbone of many Azure monitoring solutions. It collects, stores, and queries log and telemetry data from various Azure and on-premises resources. Users can create custom dashboards, set up proactive alerts, and generate insights into system performance, resource usage, or unexpected behavior. Log Analytics also integrates with services like Azure Monitor and Sentinel, making it a central platform for both security analysis and operational diagnostics.
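Because both services query the same workspace with KQL, a security investigation can be scripted the same way as an operational one. The sketch below counts failed Azure AD sign-ins by user and source IP; it assumes the SigninLogs table is being exported to the workspace and uses the azure-monitor-query library with a placeholder workspace ID.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Failed sign-ins in the last 24 hours, grouped by user and source IP address.
kql = """
SigninLogs
| where ResultType != "0"
| summarize Failures = count() by UserPrincipalName, IPAddress
| order by Failures desc
"""

response = client.query_workspace(WORKSPACE_ID, kql, timespan=timedelta(hours=24))
for table in response.tables:
    for row in table.rows:
        print(list(row))
```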

Now, let’s consider the incorrect options:

  • Option C, Azure Active Directory (Azure AD), is a user identity and access management service. While it logs sign-in and activity data, it doesn’t offer the querying or analytic depth of Sentinel or Log Analytics. It feeds data into these tools, but isn't a log analysis platform itself.

  • Option D, Azure Automation, is focused on automating repetitive tasks like VM management or patching. Though it logs its own tasks, it's not used to analyze logs broadly across services.

  • Option E, Azure Traffic Analytics, provides insights into network flow data, mainly from Network Security Groups (NSGs). It is valuable for network-level visibility, but its scope is too narrow to be considered a general log analysis tool.

In summary, for a comprehensive, cross-environment solution to collect, monitor, and analyze log data—particularly for security and operations—Azure Sentinel and Azure Log Analytics are unmatched. These tools form the core of Azure’s observability and threat detection ecosystem.

Question 8:

You are building a web application in Azure that must remain accessible to users around the world, even if a regional failure occurs.

Which two Azure services should you use to ensure high availability across global regions? (Select 2)

A. Azure Traffic Manager
B. Azure Application Gateway
C. Azure VPN Gateway
D. Azure Blob Storage
E. Azure Load Balancer

Correct Answers: A and E

Explanation:

Creating a globally available and resilient web application in Azure requires leveraging services that provide intelligent traffic routing and regional redundancy. Two Azure services specifically suited for this purpose are Azure Traffic Manager and Azure Load Balancer.

Azure Traffic Manager (Option A) is a DNS-based traffic distribution service designed to direct user traffic efficiently across global endpoints. It improves availability by automatically routing traffic based on rules such as geographic location, performance, or priority. In case one region becomes unresponsive, Traffic Manager can redirect users to an alternate, healthy region. This ensures that a regional outage doesn’t affect global accessibility, making it essential for geo-distributed applications.

Azure Load Balancer (Option E) is used to distribute traffic within a specific Azure region. It supports both internal and external load balancing scenarios and maintains application reliability by automatically routing traffic only to healthy backend instances. If one virtual machine (VM) fails, the Load Balancer shifts requests to another functioning instance, ensuring consistent performance and minimizing downtime within the region.
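For reference, a regional Standard Load Balancer can be created with the azure-mgmt-network SDK as in the sketch below. The parameter dictionary mirrors the REST model, and the subscription, resource group, public IP, and backend pool names are placeholder assumptions for illustration only.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "web-rg"              # placeholder
LB_NAME = "web-lb"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Resource ID prefix used to wire the frontend, pool, and probe into the rule.
lb_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
         f"/providers/Microsoft.Network/loadBalancers/{LB_NAME}")

poller = client.load_balancers.begin_create_or_update(RESOURCE_GROUP, LB_NAME, {
    "location": "eastus",
    "sku": {"name": "Standard"},
    "frontend_ip_configurations": [{
        "name": "frontend",
        # Assumes a Standard public IP named "web-pip" already exists.
        "public_ip_address": {"id": f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/"
                                    f"{RESOURCE_GROUP}/providers/Microsoft.Network/"
                                    "publicIPAddresses/web-pip"},
    }],
    "backend_address_pools": [{"name": "web-pool"}],
    "probes": [{
        "name": "http-probe", "protocol": "Http", "port": 80,
        "request_path": "/", "interval_in_seconds": 15, "number_of_probes": 2,
    }],
    "load_balancing_rules": [{
        "name": "http-rule", "protocol": "Tcp",
        "frontend_port": 80, "backend_port": 80,
        "frontend_ip_configuration": {"id": f"{lb_id}/frontendIPConfigurations/frontend"},
        "backend_address_pool": {"id": f"{lb_id}/backendAddressPools/web-pool"},
        "probe": {"id": f"{lb_id}/probes/http-probe"},  # only healthy VMs receive traffic
    }],
})
print(poller.result().provisioning_state)
```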

Let’s evaluate the incorrect options:

  • Azure Application Gateway (Option B) operates at Layer 7 and provides application-level features such as SSL offloading and Web Application Firewall (WAF). While useful within a single region, it doesn’t handle global traffic distribution on its own. It complements Traffic Manager but isn’t a substitute for global failover capabilities.

  • Azure VPN Gateway (Option C) is designed for secure communication between on-premises networks and Azure. While critical for hybrid connectivity, it doesn't serve any purpose in managing traffic for public-facing web applications or enhancing availability.

  • Azure Blob Storage (Option D) is a scalable storage service for unstructured data. It can be geo-redundant, which aids in data durability, but it does not handle incoming web traffic or balance requests across compute instances.

In conclusion, Azure Traffic Manager and Azure Load Balancer form a powerful duo: one manages global routing, while the other balances load within a region. Together, they help you achieve high availability for web applications used worldwide.

Question 9:

When developing a disaster recovery strategy for virtual machines using Azure Site Recovery, which two features are part of its core functionality? (Select 2)

A. Replicating VMs to an alternative region
B. Manual initiation of VM failover
C. Automatic patching of VMs
D. Continuous backups to Azure Storage
E. Built-in integration with Azure Backup

Correct Answers: A and B

Explanation:

Azure Site Recovery (ASR) is a core component of Microsoft’s business continuity and disaster recovery (BCDR) strategy. It is designed to minimize service disruptions by replicating workloads—including virtual machines—to a secondary location, ensuring continuity in the event of outages or failures.

Option A is correct because VM replication to another Azure region or on-premises site is one of Site Recovery’s foundational features. This replication occurs in near real-time, enabling the environment to be brought online in the backup location if the primary region experiences a failure. This ensures high availability and quick recovery with minimal data loss.

Option B is also accurate since ASR allows for manual failover. In a real disaster scenario or even during a scheduled test, administrators can manually trigger the failover to switch operations to the secondary site. This control is important as it allows organizations to assess the situation and initiate failover only when needed. It also allows for non-disruptive disaster recovery drills to validate recovery plans.

Now let’s analyze the incorrect choices:

  • Option C mentions automated patching, which is not part of Azure Site Recovery’s feature set. Patch management is typically handled via Azure Automation or Windows Update, and while important, it's unrelated to ASR's replication or failover functionalities.

  • Option D confuses Site Recovery with Azure Backup. While both support resilience, they serve different purposes. Azure Backup is meant for point-in-time recovery of data through backups, whereas Site Recovery focuses on continuity and quick failover during outages. Continuous backup is not part of ASR’s capabilities.

  • Option E incorrectly implies that Azure Backup is directly integrated with Site Recovery. In practice, these are two separate services with different goals. Site Recovery manages VM replication and orchestration for disaster recovery, while Azure Backup is concerned with data protection and restore capabilities. Although they can coexist in an organization’s broader recovery strategy, they operate independently.

In summary, Azure Site Recovery focuses on two key capabilities: replicating virtual machines to a secondary region and enabling manual failover during disaster events. These ensure minimal service disruption and make A and B the correct answers.

Question 10:

You are building a solution to efficiently deploy and manage a fleet of virtual machines in Azure. Which two features offered by Azure Automation are most effective for automating this process? (Select two options.)

A. Azure Automation Runbooks
B. Azure Automation Desired State Configuration (DSC)
C. Azure Automation Webhooks
D. Azure Resource Manager (ARM) templates
E. Azure Automation Inventory

Correct Answers: A and B

Explanation:

Effectively managing and deploying virtual machines (VMs) in a scalable cloud environment like Microsoft Azure requires automation to ensure reliability, speed, and consistency. Azure Automation is a service specifically designed to help automate frequent administrative tasks and enforce configuration standards across virtual infrastructure. Two core features of Azure Automation that directly support VM deployment and lifecycle management are Runbooks and Desired State Configuration (DSC).

Option A: Azure Automation Runbooks
Runbooks are scripts that automate repetitive tasks in Azure. They can be written in PowerShell or Python and are used to manage operations such as VM provisioning, starting and stopping instances, applying updates, and orchestrating maintenance tasks. These Runbooks can be triggered manually, scheduled to run at specific times, or even initiated via external systems using webhooks.

When deploying multiple VMs, Runbooks can simplify the process by automating the end-to-end workflow—from setting up virtual networks and storage accounts to spinning up VMs and applying initial configurations. This not only speeds up deployment but also reduces the likelihood of human error, ensuring consistency across environments.
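As a rough illustration, a Python runbook for fleet management might look like the sketch below, which deallocates every VM in a resource group (for example, on a nightly schedule). The subscription and resource group names are placeholders, and inside Azure Automation you would typically authenticate with the account's managed identity rather than a locally configured credential.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "fleet-rg"            # placeholder

# In an Automation runbook this would normally resolve to the managed identity.
client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Stop (deallocate) every VM in the resource group to avoid compute charges.
pollers = []
for vm in client.virtual_machines.list(RESOURCE_GROUP):
    print(f"Deallocating {vm.name} ...")
    pollers.append(client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name))

for poller in pollers:
    poller.result()  # wait for each deallocation to complete

print("All VMs deallocated.")
```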

Option B: Azure Automation Desired State Configuration (DSC)
DSC provides a declarative platform for ensuring systems remain in a specific, predefined configuration. You define how each VM should be configured—including installed software, system settings, and services—and Azure Automation DSC ensures that each machine conforms to that configuration. If any drift occurs (i.e., if the system deviates from the expected configuration), DSC can detect it and automatically correct it, ensuring systems remain compliant and secure.

This capability is especially valuable in large environments where maintaining uniformity across VMs is critical. By leveraging DSC, IT teams can ensure operational consistency and enhance system resilience.

Why the other options are not ideal for this scenario:

  • Option C: Azure Automation Webhooks
    While Webhooks allow external triggers to start Runbooks, they do not directly contribute to VM deployment or configuration. They serve as a method of initiating automation, rather than managing the infrastructure itself.

  • Option D: Azure Resource Manager (ARM) templates
    Although ARM templates are powerful for provisioning infrastructure in a declarative way, they are part of the Azure Resource Manager framework—not Azure Automation. They work well for initial deployments, but they don’t handle ongoing management tasks like configuration enforcement.

  • Option E: Azure Automation Inventory
    This feature helps track installed software and configuration details across VMs, making it useful for reporting and audits. However, it does not provide any mechanism for deploying or configuring virtual machines.

Conclusion:

The two most effective Azure Automation tools for VM deployment and ongoing configuration are Runbooks and DSC. Together, they provide a complete automation solution that simplifies provisioning, enforces compliance, and supports ongoing operational management.

