Cisco 350-901 Exam Dumps & Practice Test Questions

Question 1:

A customer has requested a highly available application with minimal service disruption. Their key concerns are reducing the time it takes to restore services after a failure (RTO) and limiting the acceptable amount of data loss during an outage (RPO). 

Which design strategy best supports high availability while ensuring both low RTO and low RPO?

A. Active/passive setup with timely data synchronization between data centers for seamless operations.
B. Active/passive setup with no requirement for timely data synchronization.
C. Active/active setup where data synchronization is not required to be timely.
D. Active/active setup with timely data synchronization between data centers for uninterrupted request processing.

Correct Answer: D

Explanation:

When designing highly available applications, it’s critical to focus on two main recovery metrics: Recovery Time Objective (RTO) and Recovery Point Objective (RPO). RTO defines how quickly services can be restored after a failure, while RPO refers to the maximum acceptable amount of data loss measured in time.

To meet these goals, organizations typically use either active/passive or active/active architectures:

Active/Passive Architecture (Options A & B):
In an active/passive setup, one environment handles all live traffic, while the passive environment stays idle and is only activated during a failure. Although this setup can provide basic failover capabilities, it often results in higher RTO and RPO. This is because failover involves time-consuming processes such as spinning up services and redirecting traffic. If data synchronization between the active and passive sites is not timely (as stated in Option B), the backup environment might not have the latest data, increasing potential data loss. Even if synchronization is timely (Option A), the downtime due to switching still impacts RTO.

Active/Active Architecture (Options C & D):
In contrast, an active/active architecture involves multiple environments actively handling requests at the same time. This model inherently reduces both RTO and RPO. Since both environments are live, there’s no need for service reactivation after a failure — traffic can instantly shift to the other site. However, this advantage only holds if the data between the sites is synchronized in real time or near-real time. Without timely synchronization (Option C), even an active/active setup risks inconsistent data, leading to potential service errors and data loss.

Why Option D is Correct: An active/active setup with timely data synchronization ensures high availability, seamless traffic handling, and minimal data loss. It eliminates the wait involved in failover and ensures both environments remain current with identical data. This design supports continuous availability, immediate recovery (low RTO), and near-zero data loss (low RPO).
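As a simplified illustration (not part of the exam scenario), the sketch below shows how a client or gateway layer might spread requests across two active sites and retry against the peer site on failure; the endpoint URLs and paths are hypothetical.

    import requests

    # Hypothetical endpoints for two active data centers serving the same application.
    ACTIVE_SITES = [
        "https://app.dc-east.example.com",
        "https://app.dc-west.example.com",
    ]

    def send_request(path, payload):
        """Try each active site in turn. Because both sites hold synchronized data,
        a request that fails against one site can be retried against the other
        without waiting for a failover event (low RTO) or losing writes (low RPO)."""
        last_error = None
        for base_url in ACTIVE_SITES:
            try:
                response = requests.post(f"{base_url}{path}", json=payload, timeout=2)
                response.raise_for_status()
                return response.json()
            except requests.RequestException as exc:
                last_error = exc  # fall through and try the next active site
        raise RuntimeError("All active sites unavailable") from last_error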

Therefore, for applications requiring minimal downtime and high resilience, Option D offers the most robust and effective design strategy.

Question 2:

In a cloud-native application project using Python, Ruby, and JavaScript, a CI/CD pipeline is triggered whenever code changes occur. The pipeline automates code building, testing, and deployment. 

Which of the following steps is unnecessary in this pipeline given the languages and the cloud-native setup?

A. Deploying the code to staging or production environments.
B. Building containers that include the code and its dependencies.
C. Compiling the code.
D. Running automated tests to validate functionality.

Correct Answer: C

Explanation:

CI/CD pipelines are fundamental to automating the lifecycle of cloud-native applications. These pipelines typically consist of multiple stages — code build, testing, packaging (often in containers), and deployment. However, the exact steps depend on the nature of the application and the languages used.

This question focuses on a project developed in Python, Ruby, and JavaScript — all of which are interpreted languages, not compiled. This is a crucial distinction.

Let’s evaluate each option:

A. Deploy to staging/production:
This is a standard and necessary CI/CD step. After the code passes testing, it must be pushed to a live or staging environment for further validation or end-user access. Omitting deployment would defeat the purpose of continuous delivery.

B. Build containers with code and dependencies:
Containerization (e.g., using Docker) is a best practice in cloud-native development. It ensures consistency across environments and makes scaling and deployment much easier. Packaging code and dependencies into containers is essential, especially for microservices and Kubernetes-based deployments.

C. Compile code (Correct Answer):
This is the only step that is unnecessary in this scenario. Python, Ruby, and JavaScript are all interpreted, which means they do not require compilation into binary form before execution. Instead, they run via their respective interpreters (Python interpreter, Ruby interpreter, Node.js for JavaScript). While some performance enhancements might include bytecode compilation (e.g., .pyc files in Python), these are not typical CI/CD build steps and are managed automatically.

D. Run automated tests:
Automated testing is critical in any CI/CD pipeline. It helps catch bugs early, ensures code quality, and supports faster, safer deployments. Whether using unit tests, integration tests, or end-to-end testing frameworks, this step must be included in every modern pipeline.

Since compilation is unnecessary for interpreted languages like Python, Ruby, and JavaScript, Option C is the correct answer. Omitting this step streamlines the CI/CD process without affecting the functionality or reliability of the pipeline.

Question 3:

You are building a cloud-native application based on the 12-factor app methodology. This approach is designed to help developers build scalable, maintainable, and portable applications by adhering to a set of best practices. 

When considering how the application should handle logging, which two options reflect correct implementation of logging as per 12-factor principles?

A. Application code writes its event stream to stdout
B. Application log streams are archived in multiple replicated databases
C. Application log streams are sent to log indexing and analysis systems
D. Application code writes its event stream to specific log files
E. Log files are aggregated into a single file on individual nodes

Correct Answers: A and C

Explanation:

The 12-factor app methodology provides a framework for designing modern web applications that are optimized for cloud environments. It encourages practices that lead to better scalability, resilience, and simplicity across different environments. One of its core principles is how applications should handle logging.

Logging, in this context, is not just about writing output for developers but about providing a continuous and consistent stream of events that external systems can consume for monitoring, debugging, and auditing.

Option A — Writing the application’s event stream to stdout (standard output) is a key recommendation in the 12-factor methodology. This design decouples log generation from log storage and analysis. Instead of managing log files within the application, logs written to stdout can be automatically captured by the underlying platform (such as Kubernetes or Docker) and sent to external log management systems. This simplifies log processing and ensures portability.
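In Python, for example, this principle amounts to configuring the standard logging module to write to stdout rather than to a file; a minimal sketch:

    import logging
    import sys

    # 12-factor style: the app writes its event stream to stdout and lets the
    # platform (Docker, Kubernetes, etc.) capture and route it.
    logging.basicConfig(
        stream=sys.stdout,
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )

    logger = logging.getLogger("orders-service")
    logger.info("order %s created", 12345)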

Option B — Archiving logs in replicated databases contradicts the 12-factor principles. Databases are not optimized for high-throughput log ingestion or real-time log analysis. Using databases for log storage introduces additional complexity, performance overhead, and scaling challenges. The focus of 12-factor logging is stream-oriented processing, not archival or structured storage within relational systems.

Option C — Sending logs to centralized indexing and analysis systems such as Splunk or the Elastic (ELK) Stack, often via forwarders like Fluentd, is in line with the 12-factor methodology. Once logs are written to stdout, the platform can capture them and forward them to these external tools, which provide rich features like full-text search, filtering, and alerting that help developers and operations teams maintain application health.

Option D — Writing logs directly to specific files on disk is discouraged. This approach is not portable, especially in containerized or serverless environments, where file systems are ephemeral and hard to access externally. Log files can be difficult to aggregate across distributed systems and often require additional tooling to ship logs to centralized systems.

Option E — Aggregating logs into a single file on each node similarly violates the principle of separating log generation from log management. This approach complicates multi-instance or multi-node deployments, where visibility into logs from individual containers or services becomes fragmented.

In summary, the 12-factor methodology strongly advocates for writing logs to stdout and leveraging external systems for storage and analysis. These practices promote better decoupling, observability, and scalability in modern application environments. Therefore, the two correct statements are A (the application writes its event stream to stdout) and C (log streams are sent to log indexing and analysis systems).

Question 4:

A company is operating a large-scale microservices-based application deployed in multiple geographically distributed data centers. The system is designed to tolerate regional failures by running in at least three separate regions. However, users frequently report that the application feels sluggish. Analysis of the container orchestration logs reveals that containers often fail and restart. 

Which action should the organization take to enhance application resiliency without increasing infrastructure footprint?

A. Update the base image of the containers
B. Test the application on a different cloud provider
C. Increase the number of containers running per service
D. Insert try/catch error handling in the application code

Correct Answer: C

Explanation:

In a microservices architecture deployed across multiple cloud regions, resiliency refers to the application’s ability to maintain availability and performance in the face of component failures. In the scenario described, despite redundancy at the infrastructure level, users are experiencing degraded performance due to frequent container faults and restarts.

This situation suggests that the current number of containers per service may be insufficient to absorb faults gracefully or maintain performance under load. In microservices environments, services should be designed to scale horizontally, meaning additional instances (or containers) of a service can be deployed to handle increased load or fault conditions.

Option A — Updating the base image may address certain vulnerabilities or outdated dependencies, but it is unlikely to solve performance degradation stemming from frequent container crashes. Unless the image itself is the root cause (e.g., corrupted binaries or misconfigured runtime), this action offers limited improvement to system resiliency.

Option B — Migrating to another cloud provider could be a lengthy and expensive process. Moreover, it does not guarantee a resolution, especially if the performance issues are tied to the application’s architecture, not the underlying infrastructure. Cloud portability is useful, but it’s not the most direct way to address in-place resiliency challenges.

Option C — Increasing the number of containers per service is the most effective and immediate solution. By scaling out, the system introduces more redundancy at the service level, reducing the impact of a single container failing. It ensures that other healthy instances can continue processing requests, preserving user experience. Container orchestration platforms like Kubernetes are designed to manage such scaling seamlessly, allowing automatic scheduling and balancing of additional containers across the cluster.
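As a hedged sketch of what this looks like in practice on Kubernetes, an operator could raise the replica count of an affected service with a single scale command; the deployment name, namespace, and replica count below are hypothetical.

    import subprocess

    # Scale a hypothetical "checkout" deployment to 6 replicas, adding redundancy
    # at the service level without adding new nodes or regions.
    subprocess.run(
        ["kubectl", "scale", "deployment/checkout", "--replicas=6", "-n", "shop"],
        check=True,
    )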

Option D — Adding try/catch blocks in the code improves error handling but does not address the system’s capacity to absorb failure or reduce container restarts. While robust error handling is a valuable coding practice, it’s not sufficient to solve structural or resource-level issues causing container failure.

Therefore, the best way to enhance resiliency while maintaining the current infrastructure size is to increase the number of containers per microservice. This mitigates the effects of failures by leveraging redundancy and load distribution, key principles of resilient system design in cloud-native applications.

Question 5:

A development team is building a web application expected to handle up to 1000 requests per second. 

To ensure the system remains responsive and performs reliably during peak traffic periods, which design technique should be adopted?

A. Use algorithms like random early detection to deny excessive requests
B. Set user-specific request limits and reject requests from users who exceed them
C. Limit connections to 1000 concurrent users and block additional connections
D. Queue all incoming requests and process them sequentially

Correct Answer: A

Explanation:

In high-traffic environments, ensuring consistent application performance under load is critical. This question centers on identifying a strategy that allows a web application to serve up to 1000 requests per second without degrading performance. The correct approach is to use Random Early Detection (RED) or a similar algorithm, which effectively manages congestion by selectively dropping or denying requests before the system reaches capacity.

Option A is correct because RED proactively monitors queue sizes and begins to randomly discard low-priority or excessive requests when thresholds are approached. This ensures that the system doesn't get overwhelmed and can continue to serve legitimate traffic efficiently. RED is often used in networking and queuing systems to maintain throughput and reduce latency during periods of congestion.
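A minimal sketch of the RED idea in application terms, assuming a request queue whose depth can be sampled, is shown below; the threshold values and names are illustrative only.

    import random

    MIN_THRESHOLD = 200    # below this queue depth, accept everything
    MAX_THRESHOLD = 800    # above this queue depth, reject everything

    def admit_request(queue_depth: int) -> bool:
        """RED-style admission: as the queue grows between the two thresholds,
        reject a randomly chosen, increasing fraction of requests so the system
        sheds load gradually instead of collapsing at full capacity."""
        if queue_depth < MIN_THRESHOLD:
            return True
        if queue_depth >= MAX_THRESHOLD:
            return False
        drop_probability = (queue_depth - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
        return random.random() > drop_probability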

Option B, rate limiting per user, is useful for preventing abuse or excessive use by specific clients. However, it doesn’t offer a holistic solution to managing overall system load: if many users arrive at once, each staying within their individual limit, the aggregate load can still overwhelm the system. It’s a user-level control rather than a system-wide load management mechanism.

Option C, limiting the number of active connections to 1000, can prevent overloading but at the cost of usability and user experience. New users would be blocked entirely, even if the system could have served them efficiently with smarter load handling.

Option D, queuing all requests and processing them sequentially, is impractical for real-time applications with high request volumes. It introduces latency and reduces throughput as each request must wait its turn. Users may experience delays or timeouts, especially during peak traffic.

RED and similar congestion-avoidance techniques provide dynamic control over traffic volume. By intelligently denying requests early in the pipeline when usage starts to spike, RED prevents resource exhaustion and helps the system maintain performance even under stress. These techniques work well with horizontally scalable cloud infrastructure, enabling elastic expansion and graceful degradation rather than total failure.

In conclusion, for applications that demand high throughput and responsiveness, proactive traffic shaping strategies like RED are essential. They balance performance and stability, ensuring continued service availability without rejecting legitimate users unnecessarily.


Question 6:

A company runs a cloud-native application based on a microservices architecture across multiple geographic regions. The app has been experiencing performance issues, and logs reveal frequent container crashes followed by automatic restarts. 

What two design strategies would best enhance fault visibility and help diagnose the root causes of these failures? (Choose two.)

A. Automatically terminate containers with the highest failure frequency
B. Use trace tagging to follow service-to-service transactions
C. Log exceptions and trigger immediate alerts
D. Record all failures directly into the application's datastore
E. Set up SNMP alerts for slow network links

Correct Answers: B, C

Explanation:

In microservices-based systems—especially those spread across multiple data centers or cloud regions—managing fault detection and diagnosis is complex. Since each service may interact with many others, identifying the root cause of a failure requires visibility into how requests flow through the system.

Option B is correct because implementing distributed tracing via tagging methodologies (like using trace IDs or span IDs) allows engineers to monitor and follow each request as it moves across multiple services. These traces help identify performance bottlenecks, failed service calls, or latency introduced at specific stages of the application. Tools such as OpenTelemetry, Jaeger, or Zipkin enable developers to capture end-to-end transaction data in distributed environments. This is vital when services fail silently or intermittently.
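A bare-bones sketch of trace tagging, without a full tracing library, is shown below: each incoming request carries (or is assigned) a trace ID that is logged and forwarded on every downstream call. The header name and service URLs are hypothetical.

    import logging
    import uuid

    import requests

    logger = logging.getLogger("inventory-service")

    def handle_request(incoming_headers: dict) -> dict:
        # Reuse the caller's trace ID if present, otherwise start a new trace.
        trace_id = incoming_headers.get("X-Trace-Id", str(uuid.uuid4()))
        logger.info("trace=%s handling inventory lookup", trace_id)

        # Propagate the same trace ID to the next service so the whole
        # transaction can be stitched together in the tracing backend.
        response = requests.get(
            "http://pricing-service.local/api/prices",  # hypothetical downstream service
            headers={"X-Trace-Id": trace_id},
            timeout=2,
        )
        logger.info("trace=%s pricing responded with %s", trace_id, response.status_code)
        return response.json()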

Option C is also correct. Logging exceptions combined with real-time alerts allows teams to quickly detect and respond to issues. Exception logs can capture the service name, error type, stack trace, and contextual metadata, making it easier to debug problems. When paired with alerting systems (like PagerDuty or Opsgenie), the operations team can be notified instantly, reducing downtime and mitigating performance impacts.
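Likewise, a hedged sketch of exception logging paired with an immediate alert might look like the following; the webhook URL and business logic are placeholders for whatever alerting integration and services the team actually uses.

    import logging

    import requests

    logger = logging.getLogger("orders-service")
    ALERT_WEBHOOK = "https://alerts.example.com/hooks/on-call"  # placeholder endpoint

    def charge_customer(order):
        # Placeholder for real business logic; raises to simulate a fault.
        raise RuntimeError("payment gateway timeout")

    def process_order(order):
        try:
            charge_customer(order)
        except Exception:
            # Capture the full stack trace and context in the log stream ...
            logger.exception("order processing failed: order_id=%s", order.get("id"))
            # ... and notify the on-call team right away.
            requests.post(
                ALERT_WEBHOOK,
                json={"service": "orders", "order_id": order.get("id")},
                timeout=2,
            )
            raise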

Option A, removing containers with frequent failures, might temporarily reduce visible errors but fails to address the underlying issue. Automatically removing containers may suppress symptoms while allowing the actual cause—such as a memory leak, misconfiguration, or external dependency issue—to persist. Additionally, in auto-scaling environments, the orchestrator may simply replace removed containers, creating a loop of hidden instability.

Option D, logging every failure directly to the datastore, is inefficient and potentially harmful to performance. Writing large volumes of error logs to a transactional datastore can overwhelm the system and introduce new bottlenecks, especially during large-scale failures. Centralized logging solutions like ELK or Fluentd are better suited for this purpose.

Option E, implementing SNMP alerts for slow network links, addresses a different layer of the stack. While helpful for network diagnostics, it won’t assist in identifying microservice-specific issues such as container crashes or inter-service faults.

In summary, distributed tracing and real-time exception logging with notifications are fundamental techniques for enhancing fault detection and observability in large-scale, containerized applications. Together, they provide the visibility and actionable insights necessary to maintain system reliability and quickly resolve issues.

Question 7:

In a continuous integration (CI) pipeline, tools like OWASP Dependency-Check are used to inspect software components and identify security or compliance issues. 

What are two types of problems that such tools are designed to detect in application dependencies? (Choose two.)

A. Publicly disclosed vulnerabilities related to the included dependencies
B. Mismatches in coding styles and conventions in the included dependencies
C. Incompatible licenses in the included dependencies
D. Test case failures introduced by bugs in the included dependencies
E. Buffer overflows that occur due to interactions among included dependencies

Correct Answers: A and C

Explanation:

In today's development workflows, particularly those that follow Continuous Integration (CI) practices, integrating third-party libraries or modules is extremely common. These dependencies help speed up development by offering reusable functionalities. However, they can also introduce risks if not regularly analyzed and maintained. To mitigate these risks, organizations use dependency scanning tools like OWASP Dependency-Check, Snyk, or WhiteSource. These tools analyze dependency trees to detect issues such as known security vulnerabilities or license incompatibilities.

A. Publicly disclosed vulnerabilities are one of the primary focus areas for dependency scanners. When vulnerabilities in libraries are made public (often listed in databases like the CVE - Common Vulnerabilities and Exposures list), tools compare your project's library versions to this list. If your project includes a version known to have a flaw—say, a remote code execution exploit or a SQL injection vulnerability—the tool flags it so your team can update or replace the affected component.
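Conceptually, this check is a comparison of the project's dependency list against an advisory feed. The hedged sketch below illustrates the idea with a made-up advisory dictionary rather than the real CVE/NVD data that tools like OWASP Dependency-Check consume.

    # Illustrative only: a real scanner pulls advisories from CVE/NVD feeds.
    KNOWN_VULNERABILITIES = {
        ("example-http-lib", "2.1.0"): ["EXAMPLE-ADV-001 (remote code execution)"],
    }

    PROJECT_DEPENDENCIES = {
        "example-http-lib": "2.1.0",
        "example-json-lib": "1.4.2",
    }

    def scan(dependencies):
        findings = []
        for name, version in dependencies.items():
            findings.extend(KNOWN_VULNERABILITIES.get((name, version), []))
        return findings

    print(scan(PROJECT_DEPENDENCIES))  # -> ['EXAMPLE-ADV-001 (remote code execution)']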

C. Incompatible licenses are also a critical concern in software development. Different open-source licenses come with varying restrictions and obligations. For example, using a library with a copyleft license (like GPL) in a commercial application could legally require you to open-source your code. Dependency-checking tools can detect these incompatibilities, helping organizations avoid legal complications related to software distribution or usage rights.

In contrast, B, D, and E are not typically within the scope of what these tools evaluate:

  • B (Coding style mismatches): Style or formatting inconsistencies are generally handled by static code analysis tools like linters, not dependency-checkers.

  • D (Test case failures): Failures during test execution are usually identified by unit or integration test frameworks, not by scanning tools that look only at metadata.

  • E (Buffer overflows from combined dependencies): Buffer overflows are a runtime behavior and generally require dynamic analysis, fuzz testing, or static security analysis—not simple dependency metadata scanning.

In summary, tools like OWASP Dependency-Check are essential in CI pipelines for maintaining application security and license compliance. They help development teams manage risk proactively by highlighting known vulnerabilities and license issues in third-party components.

Question 8:

A network operations team needs to automate the deployment and configuration of their tools in a cloud environment. Their solution must be fully ephemeral and version-controlled using Git. They will use VMs in a cloud platform, configure open-source tools, and manage remote network devices. 

Which combination of tools best supports this approach?

A. Ansible
B. Ansible and Terraform
C. NSO
D. Terraform
E. Ansible and NSO

Correct Answer: B

Explanation:

This scenario describes a need for fully automated, repeatable, and version-controlled infrastructure management, with the ability to deploy, test, and configure both cloud-based and networked systems. To satisfy these requirements, the most appropriate combination of tools is Terraform and Ansible.

Terraform is an Infrastructure as Code (IaC) tool that allows users to define and provision infrastructure using a declarative configuration language (HCL). It is ideal for managing cloud resources such as virtual machines, networking, and storage. Since the operations team wants their automation environment to be ephemeral—meaning it can be created and destroyed easily—Terraform’s idempotent behavior and ability to destroy and recreate environments cleanly are critical. It ensures that every deployment starts from a clean slate, avoiding drift and ensuring consistency.

Once infrastructure is provisioned using Terraform, there’s a need to configure the operating system, install packages, deploy services, and enforce desired states. This is where Ansible comes in. Ansible is an agentless automation and configuration management tool that uses SSH and simple YAML playbooks. It's well-suited for configuring the VMs deployed by Terraform, installing tools, running scripts, and interacting with external devices such as routers or switches. Storing Ansible playbooks and Terraform code in Git also enables version control and change tracking, aligning with DevOps best practices.
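As a hedged sketch of how the two tools chain together, a small wrapper script kept in the same Git repository could provision with Terraform and then configure with Ansible; the directory, inventory, and playbook names below are hypothetical.

    import subprocess

    def deploy():
        # 1. Provision the ephemeral cloud infrastructure (VMs, networks) from code.
        subprocess.run(["terraform", "init"], cwd="infra", check=True)
        subprocess.run(["terraform", "apply", "-auto-approve"], cwd="infra", check=True)

        # 2. Configure the freshly created VMs and the remote network devices.
        subprocess.run(
            ["ansible-playbook", "-i", "inventory/hosts.ini", "site.yml"],
            cwd="config",
            check=True,
        )

    def teardown():
        # Destroy everything so the environment stays fully ephemeral.
        subprocess.run(["terraform", "destroy", "-auto-approve"], cwd="infra", check=True)

    if __name__ == "__main__":
        deploy()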

Here’s why the other options fall short:

  • A. Ansible alone is not a complete fit. It excels at configuration, and its cloud modules can create some resources, but it lacks Terraform's declarative state management and lifecycle handling, so provisioning and cleanly destroying ephemeral cloud environments is far less seamless.

  • C. NSO (Cisco Network Services Orchestrator) is a powerful tool for managing complex network services, especially in service provider environments. However, it is not designed for provisioning general cloud infrastructure or managing ephemeral environments.

  • D. Terraform alone provisions infrastructure but cannot configure it post-deployment.

  • E. Ansible and NSO could manage configuration, but this combo lacks a robust method for provisioning ephemeral cloud environments, making it incomplete for the described needs.

Combining Terraform for infrastructure provisioning and Ansible for configuration provides a fully automated, reproducible, and scalable solution. This pairing is a popular choice in modern DevOps and NetDevOps workflows.

Question 9:

Which HTTP response code indicates that the request was successful and a new resource was created as a result?

A. 200
B. 201
C. 202
D. 204

Correct Answer: B

Explanation:

In RESTful API development, understanding HTTP status codes is crucial for correctly interpreting responses from servers. Each status code provides important feedback about what happened with a client's request.

The HTTP status code 201 Created is specifically used to indicate that a new resource has been successfully created on the server. This code is typically returned in response to a POST request when the server creates a new resource, such as adding a new device to an inventory system or provisioning a new network service.

Let’s break down the other options:

  • A. 200 OK: This is a generic success response, indicating that the request was successful and the server has returned the requested data. It’s commonly used for GET or PUT requests but does not signify that a new resource was created.

  • C. 202 Accepted: This code means the request has been accepted for processing, but the processing is not yet complete. It's often used for asynchronous operations, such as when a network controller queues a request for later execution.

  • D. 204 No Content: This indicates that the request was successful but there is no content to return in the response body. It’s commonly used after successful DELETE operations.

For API developers working with Cisco platforms such as Cisco DNA Center, Cisco Meraki, or Webex APIs, using and interpreting HTTP status codes correctly is essential. When creating new resources—such as network profiles, virtual devices, or access policies—a 201 Created response confirms that the operation was successful and the server has generated a new resource, often returning the URI of the new object in the Location header.
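A short client-side sketch of this pattern, using a hypothetical REST endpoint, checks for 201 and reads the Location header of the newly created resource:

    import requests

    # Hypothetical inventory API; the exact endpoint and payload depend on the platform.
    response = requests.post(
        "https://api.example.com/v1/devices",
        json={"hostname": "edge-router-01", "site": "branch-12"},
        timeout=5,
    )

    if response.status_code == 201:
        # The server created a new resource; Location points at it.
        print("Created:", response.headers.get("Location"))
    else:
        print("Unexpected status:", response.status_code)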

Thus, the correct status code to confirm successful creation of a resource is 201.

Question 10:

Which protocol does Cisco use in its Model-Driven Telemetry for streaming structured data from devices to external collectors?

A. SNMP
B. NETCONF
C. gRPC
D. RESTCONF

Correct Answer: C

Explanation:

Cisco’s Model-Driven Telemetry (MDT) is a modern method for real-time network monitoring and data collection. Unlike traditional polling methods (e.g., SNMP), telemetry pushes data from the device to a collector at a configured frequency, improving efficiency and allowing near real-time visibility.

Among the protocols available, gRPC (Google Remote Procedure Call) is the protocol most widely used in MDT. gRPC is a high-performance, open-source universal RPC framework that uses HTTP/2 as its transport protocol and Protocol Buffers (Protobuf) as its serialization format. This allows for efficient and compact streaming of telemetry data.

Cisco devices like IOS-XE and NX-OS use gRPC in conjunction with YANG data models to define and stream telemetry data. For example, a device can stream interface statistics, CPU usage, and routing information continuously using gRPC without the need for polling.

Let’s consider the other options:

  • A. SNMP (Simple Network Management Protocol): While widely used for network monitoring, SNMP is a pull-based protocol and lacks the granularity and performance of model-driven telemetry. It’s not used in MDT.

  • B. NETCONF (Network Configuration Protocol): This protocol is used for configuration and state data retrieval using YANG models. While it's YANG-based like MDT, it is not used for streaming telemetry.

  • D. RESTCONF: Like NETCONF, RESTCONF is used to retrieve or configure data using YANG models over HTTP/HTTPS. It is not optimized for real-time streaming.

Therefore, gRPC is the correct answer because it is specifically optimized for streaming structured data efficiently in Cisco’s MDT architecture. Understanding this is crucial for developers working with Cisco platforms such as IOS XE, NX-OS, or Cisco SD-WAN, where real-time analytics and automated insights rely heavily on telemetry streams.

