ECCouncil 312-38 Exam Dumps & Practice Test Questions
A network security analyst observes repeated attempts from a single external IP address targeting TCP port 3389, which is typically used for Remote Desktop Protocol (RDP).
Which two actions should the analyst take immediately to best reduce the risk of a potential security breach? (Select two options.)
A. Create an ACL rule to drop traffic from the source IP address
B. Enable Network Address Translation (NAT) on the RDP server
C. Restrict RDP service access to a VPN or jump-host
D. Disable port forwarding of port 3389 on the perimeter firewall
E. Change the RDP service to use the UDP protocol instead of TCP
Correct Answers: C and D
Explanation:
When abnormal activity is detected on TCP port 3389, the default port for Remote Desktop Protocol (RDP), it often indicates attempts to exploit vulnerabilities or conduct a brute-force attack. In such cases, the primary objective of the analyst should be to minimize the system’s exposure and control access to the service.
Option C — Restricting RDP access to a VPN or jump-host — is one of the most effective and industry-recommended practices. This limits RDP exposure to internal or secured network contexts. A VPN ensures that only authenticated users can reach the internal network, adding a secure barrier before RDP access is even attempted. Similarly, using a jump-host or bastion server creates a controlled access point with centralized monitoring and logging, which strengthens accountability and prevents direct internet exposure.
Option D — Disabling port forwarding for TCP 3389 on the perimeter firewall — is another highly effective step. Port forwarding allows external users to connect to internal services, and disabling it will block unsolicited inbound RDP traffic from the internet. This move effectively shields the RDP service from external threats, stopping most brute-force attempts at the firewall level.
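To make the change concrete, the hedged Python sketch below shows a host-level analogue of this control on a Windows server: it invokes netsh to add an inbound block rule for TCP 3389. The rule name is an arbitrary example, and in the scenario above the authoritative change would still be made on the perimeter firewall itself.

```python
# Sketch: block unsolicited inbound RDP (TCP 3389) at the Windows host firewall.
# Assumes a Windows host and administrative privileges; the rule name is arbitrary.
import subprocess

def block_inbound_rdp(rule_name: str = "Block-Inbound-RDP-3389") -> None:
    """Add a Windows Firewall rule that drops inbound TCP 3389 connections."""
    subprocess.run(
        [
            "netsh", "advfirewall", "firewall", "add", "rule",
            f"name={rule_name}",
            "dir=in",            # inbound traffic
            "action=block",      # drop matching packets
            "protocol=TCP",
            "localport=3389",    # RDP listening port
        ],
        check=True,
    )

if __name__ == "__main__":
    block_inbound_rdp()
```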
Option A, creating an ACL to drop traffic from a known malicious IP, may offer temporary relief, but attackers can easily switch IP addresses or use botnets. This is a reactive, short-lived measure and does not fundamentally protect the service from being exposed.
Option B, enabling NAT, is irrelevant in this context. NAT simply translates IP addresses and does not restrict access or protect against unauthorized connections. It does not provide a security boundary in itself.
Option E, switching RDP to UDP, does not reduce attack surface or brute-force risk. RDP is typically used over TCP because it provides session reliability. UDP lacks features like guaranteed delivery and does not enhance security in this scenario.
In conclusion, the most impactful and sustainable strategies to mitigate risk are to restrict RDP access to secure internal pathways (VPN or jump-host) and remove direct internet exposure by disabling port forwarding. These measures significantly reduce the potential attack surface.
Which two built-in Windows command-line utilities are most appropriate for collecting volatile data during a live forensic acquisition? (Choose two.)
A. netstat
B. fsutil
C. wmic process
D. volatility
E. tasklist
Correct Answers: A and E
Explanation:
Live forensic acquisition involves gathering information from a system that is actively running. Unlike disk or memory images taken during post-mortem investigations, live forensics focuses on volatile data—processes, network connections, and session details that will disappear upon shutdown or restart.
Option A, netstat, is a critical built-in utility that reveals active network connections, listening ports, and associated IP addresses. This helps responders determine whether the host is communicating with suspicious external systems. By analyzing netstat output, analysts can spot unauthorized open ports or connections to known malicious IPs, giving clues about data exfiltration or command-and-control channels.
Option E, tasklist, is another built-in Windows tool that displays currently running processes, including PID (Process ID), session name, and memory usage. It’s a vital tool for quickly spotting unusual or rogue processes that could be malware or unauthorized scripts. Attackers often disguise malicious code under legitimate-looking process names, and tasklist can help identify anomalies in memory usage or naming patterns.
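As a rough illustration of how these two utilities are typically captured during a live response, the hedged Python sketch below runs netstat and tasklist and writes their timestamped output to an evidence folder. The output directory and file names are arbitrary choices for the example.

```python
# Sketch: capture volatile data (network connections and running processes)
# from a live Windows host using only built-in utilities.
# Assumes Windows; output directory and file names are arbitrary for the example.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

COMMANDS = {
    "netstat": ["netstat", "-ano"],   # connections, listening ports, owning PIDs
    "tasklist": ["tasklist", "/v"],   # running processes with verbose details
}

def collect_volatile_data(out_dir: str = "evidence") -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = Path(out_dir)
    target.mkdir(parents=True, exist_ok=True)
    for name, cmd in COMMANDS.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        (target / f"{stamp}_{name}.txt").write_text(result.stdout)

if __name__ == "__main__":
    collect_volatile_data()
```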
Option B, fsutil, while powerful for file system manipulation and diagnostics, is generally not suited for volatile data collection. It provides information like volume details and file system behavior—not relevant to detecting active threats or intrusions.
Option C, wmic process, is a valid command for process monitoring, but Microsoft has deprecated WMIC in recent Windows versions in favor of PowerShell-based tools. While still usable on some systems, it is less standard today than tasklist and is not the go-to tool for live forensics anymore.
Option D, volatility, is an advanced and popular forensic framework—but it is not built into Windows. It is a third-party tool used primarily for analyzing memory dumps offline, not for live forensic collection. It requires installation and setup, which could alter the system and is not ideal during an initial investigation.
In conclusion, the two most reliable and non-intrusive tools built into Windows for live forensic use are netstat (for network connections) and tasklist (for process inspection). These tools enable investigators to collect real-time, volatile data with minimal system impact.
Question 3:
You are configuring an Intrusion Prevention System (IPS) in inline mode, where it directly analyzes and acts on live traffic.
To ensure that the IPS efficiently detects threats without degrading network performance, which two configuration methods should be applied?
A. Disable all signature categories except those mapped to the organization’s threat model
B. Set the IPS to log-only mode for the first 30 days before enabling blocking
C. Increase the inspection buffer size to the maximum supported value
D. Enable rate-based detection thresholds for known noisy signatures
E. Duplicate all traffic to a span port for out-of-band analysis
Correct Answers: A and D
Explanation:
When an IPS operates inline, it intercepts and processes every packet in real time. While this is effective for immediate threat mitigation, it can introduce latency or packet loss if the IPS becomes overloaded, so careful performance tuning is necessary.
Option A, disabling all signature categories except those mapped to the organization’s threat model, is a best practice in IPS configuration. Most IPS devices ship with thousands of signatures, many of which are not relevant to a specific organization. Enabling all of them can cause:
High CPU usage
Increased latency
False positives
By disabling irrelevant signatures (e.g., SCADA signatures in a non-industrial environment), you reduce the processing burden and focus only on critical threats relevant to your infrastructure. This leads to better performance and more meaningful detection.
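A hedged sketch of this kind of tuning is shown below: it reads a plain-text rule file and comments out any active rule whose classtype is not on the organization’s allow-list. The file name and category names are example values; real IPS products normally expose this through their own policy editor.

```python
# Sketch: keep only signature categories mapped to the organization's threat model.
# The rule file name and the classtype values are example assumptions;
# production IPS appliances normally manage this through their policy interface.
RELEVANT_CLASSTYPES = {"trojan-activity", "web-application-attack", "attempted-admin"}

def tune_rule_file(path: str = "local.rules") -> None:
    tuned = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            rule = line.rstrip("\n")
            # Disable active rules whose classtype is not in the allow-list.
            if rule.startswith("alert") and not any(
                f"classtype:{ct}" in rule for ct in RELEVANT_CLASSTYPES
            ):
                rule = "# disabled (outside threat model): " + rule
            tuned.append(rule)
    with open(path, "w", encoding="utf-8") as handle:
        handle.write("\n".join(tuned) + "\n")

if __name__ == "__main__":
    tune_rule_file()
```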
Option B, setting the IPS to log-only mode for the first 30 days, might help establish a baseline or reduce false positives early on, but it does not improve performance once the system is actively blocking. It delays protection rather than tuning inline performance; it is a staging technique, not a performance optimization.
Option C, increasing the inspection buffer size to the maximum supported value, may seem helpful, but in practice it:
Uses more memory
Can introduce delay
Doesn't address root performance issues
It can help in specific traffic scenarios with bursty loads, but blindly setting this to the maximum can backfire by introducing unnecessary latency.
Option D, enabling rate-based detection thresholds for known noisy signatures, addresses a second common performance problem. Some signatures, especially those monitoring high-frequency or low-severity events, generate excessive alerts or consume a disproportionate amount of processing time. Examples include:
ICMP sweeps
Port scanning
Misconfigured applications
Rate-based detection allows the IPS to ignore or suppress repeated alerts after a defined threshold is exceeded. This avoids alert storms and processing bottlenecks, keeping the system responsive and focused on real threats.
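To make the idea concrete, here is a hedged, product-agnostic Python sketch of a rate-based threshold: once a given signature fires more than a set number of times per source within a time window, further alerts from that source are suppressed. The threshold values are arbitrary; real IPS platforms implement this with their own event-filter or rate-limit settings.

```python
# Sketch: rate-based suppression of noisy signatures (product-agnostic illustration).
# Threshold values are arbitrary; real IPS platforms expose equivalent settings.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 60     # sliding window length
MAX_ALERTS = 5          # alerts allowed per (signature, source) within the window

_recent = defaultdict(deque)

def should_alert(signature_id: int, source_ip: str, now: Optional[float] = None) -> bool:
    """Return True if this alert should be raised, False if it is suppressed."""
    now = time.time() if now is None else now
    window = _recent[(signature_id, source_ip)]
    # Drop events that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ALERTS:
        return False            # threshold exceeded: suppress
    window.append(now)
    return True

# Example: the sixth alert from the same host within a minute is dropped.
if __name__ == "__main__":
    decisions = [should_alert(2100001, "10.0.0.5", now=100.0 + i) for i in range(6)]
    print(decisions)  # [True, True, True, True, True, False]
```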
Option E, duplicating all traffic to a SPAN port for out-of-band analysis, describes a passive IDS deployment, not an inline IPS. If traffic is mirrored to a SPAN port for monitoring, the device is no longer inline, so this option is irrelevant to optimizing inline performance.
The best performance tuning strategies for an inline IPS are:
A. Streamlining signature sets to only those relevant to your threat model
D. Managing noisy signatures with rate-based thresholds
These methods minimize unnecessary processing while maintaining effective threat detection.
During the containment phase of a security incident, which two types of logs are most effective for tracing how an attacker moves between systems within the network? (Choose two.)
A. DNS resolver logs
B. DHCP lease logs
C. NetFlow/IPFIX flow records
D. Web proxy access logs
E. Endpoint EDR agent telemetry
Correct Answers: C and E
Explanation:
When a security incident is underway and has entered the containment phase, one of the main objectives is to understand the scope of the breach—specifically, how an attacker has moved from one system to another. This behavior is known as lateral movement, and identifying it helps security teams contain the threat and prevent further compromise.
Two of the most powerful log types for identifying lateral movement are NetFlow/IPFIX flow records and Endpoint Detection and Response (EDR) telemetry.
NetFlow/IPFIX (Option C) logs come from network devices like routers and switches and capture metadata about network traffic flows. These records include details such as source and destination IP addresses, ports, protocols, and data volume. While they don’t contain the full packet payloads, they are incredibly useful for observing communication patterns between internal hosts. By analyzing NetFlow data, security analysts can detect suspicious behaviors like:
Lateral access between machines that don’t typically communicate
Scanning activity from a compromised host
Unusual volumes or timing of internal connections
This visibility helps map the attacker’s movement across the network infrastructure.
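As a hedged illustration, the Python sketch below scans flow records exported to CSV and flags internal-to-internal connections on ports commonly abused for lateral movement (SMB, RDP, WinRM). The CSV layout and port list are assumptions for the example; a real investigation would query the flow collector directly.

```python
# Sketch: flag possible lateral movement in flow records exported to CSV.
# The CSV column names and the "suspicious" port list are assumptions for this
# example; real deployments would query the NetFlow/IPFIX collector instead.
import csv
import ipaddress

INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("192.168.0.0/16")]
LATERAL_PORTS = {445, 3389, 5985, 5986}   # SMB, RDP, WinRM

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def find_lateral_candidates(path: str = "flows.csv"):
    """Yield internal-to-internal flows on ports commonly used for lateral movement."""
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            src, dst, dport = row["src_ip"], row["dst_ip"], int(row["dst_port"])
            if is_internal(src) and is_internal(dst) and dport in LATERAL_PORTS:
                yield src, dst, dport, row.get("bytes", "")

if __name__ == "__main__":
    for src, dst, dport, size in find_lateral_candidates():
        print(f"{src} -> {dst}:{dport} ({size} bytes)")
```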
EDR telemetry (Option E) provides granular, host-level data about what is happening on each endpoint. This includes process creation, command-line arguments, file system access, user logins, and network activity. EDR agents also detect the use of known attack tools such as PsExec, WMI, or PowerShell, which are often used in lateral movement. EDR tools can also correlate activities across multiple systems, helping responders reconstruct the attack timeline.
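A hedged sketch of the endpoint side is shown below: it scans exported process-creation telemetry (for example, a CSV dump of EDR or Sysmon events) for command lines that reference tools frequently used in lateral movement. The export format and tool list are assumptions for illustration; real EDR platforms expose this through their own hunting interfaces.

```python
# Sketch: search exported process-creation telemetry for lateral-movement tooling.
# The CSV export format (host, process, command_line columns) is an assumption;
# real EDR platforms expose this through their own query/hunting interfaces.
import csv

SUSPECT_TOKENS = ("psexec", "wmic ", "winrs", "-encodedcommand", "invoke-command")

def hunt_lateral_tools(path: str = "process_events.csv"):
    """Yield events whose command line mentions common lateral-movement tools."""
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            cmdline = row.get("command_line", "").lower()
            if any(token in cmdline for token in SUSPECT_TOKENS):
                yield row["host"], row["process"], row["command_line"]

if __name__ == "__main__":
    for host, process, cmdline in hunt_lateral_tools():
        print(f"[{host}] {process}: {cmdline}")
```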
Now, let’s examine why the other options are less suitable:
DNS logs (Option A) are useful for identifying external communications, such as command-and-control traffic, but they offer little insight into purely internal movements unless the attacker is using internal DNS names.
DHCP lease logs (Option B) can help identify which IP was assigned to which host, which aids in mapping, but they do not show actual communication or behavior.
Web proxy logs (Option D) track outbound web traffic but are not helpful for peer-to-peer or internal network movement.
In conclusion, to track lateral movement effectively, security teams should rely on NetFlow/IPFIX for network-level visibility and EDR telemetry for endpoint-level behavioral insights.
Which two methods are most effective at preventing buffer overflow attacks against network-facing applications? (Choose two.)
A. Enable Data Execution Prevention (DEP) on the server
B. Turn off code obfuscation in the source code
C. Deploy an Intrusion Detection System (IDS)
D. Use input validation to sanitize user input
E. Use a firewall to block traffic on untrusted ports
Correct Answers: A and D
Explanation:
Buffer overflow vulnerabilities occur when a program attempts to write more data into a memory buffer than it can hold. If not properly handled, this can lead to memory corruption, crashes, or arbitrary code execution—often by injecting malicious payloads. Preventing such attacks requires both secure coding practices and system-level defenses.
Data Execution Prevention (DEP) (Option A) is a hardware- and software-based feature that marks certain areas of memory as non-executable. Attackers often try to inject executable code into memory regions intended for data, such as the stack or heap. DEP prevents the system from running code from these memory segments, thereby thwarting one of the most common techniques in buffer overflow exploits. It’s a foundational system-level defense in modern operating systems and is especially effective when combined with other technologies like Address Space Layout Randomization (ASLR).
Input validation (Option D) is a fundamental coding best practice. Buffer overflows often exploit the lack of proper checks on user-provided input. If an application accepts input without enforcing limits or formats, an attacker can deliberately craft input that exceeds the expected size of a buffer, leading to overflow. By validating input—checking lengths, allowed characters, and formats—you can prevent malicious or oversized data from ever reaching vulnerable memory allocations, thereby stopping the attack before it begins.
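For illustration, the hedged Python sketch below shows the validation pattern: enforce a maximum length and an allow-list of characters before the input reaches anything that copies it into a fixed-size buffer. The field name, length limit, and character set are arbitrary examples; in a C/C++ service the same checks would sit directly in front of the vulnerable copy.

```python
# Sketch: validate user-supplied input before it reaches a fixed-size buffer.
# The field name, length limit, and character allow-list are arbitrary examples;
# in a native (C/C++) service the same checks would precede the buffer copy.
import re

MAX_USERNAME_LEN = 64
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")

def validate_username(value: str) -> str:
    """Return the value if it passes length and character checks, else raise."""
    if not value or len(value) > MAX_USERNAME_LEN:
        raise ValueError("username missing or exceeds maximum length")
    if not USERNAME_PATTERN.match(value):
        raise ValueError("username contains disallowed characters")
    return value

if __name__ == "__main__":
    print(validate_username("alice.smith"))        # accepted
    try:
        validate_username("A" * 500)               # oversized input is rejected
    except ValueError as err:
        print("rejected:", err)
```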
Now consider why the other options fall short:
Disabling obfuscation (Option B) does nothing to improve security. Obfuscation makes reverse engineering harder but doesn’t affect memory handling or buffer safety.
IDS (Option C) tools can detect some types of attack patterns, but they are reactive and often miss novel or encrypted payloads. They’re not suitable as primary defenses against buffer overflows.
Firewalls (Option E) control traffic based on ports and IPs, but they cannot prevent exploitation of a vulnerable application on an open and legitimate service port.
In summary, the two most effective preventative measures are enabling DEP and implementing strict input validation. These strategies tackle both the memory exploitation technique and the source of untrusted input.
Which two techniques are most effective for maintaining data confidentiality on a corporate network? (Choose two.)
A. Use IPsec to encrypt data as it travels across the network
B. Store sensitive data using MD5 hashing
C. Enable TLS for email communications
D. Use full disk encryption on employee laptops
E. Transfer files securely between systems using SSH
Correct Answers: A and D
Explanation:
Data confidentiality is a foundational concept in cybersecurity that focuses on ensuring that information is only accessible to authorized individuals and systems. It protects against unauthorized disclosure or exposure of sensitive data, whether it is in transit over a network or stored on a device.
One highly effective approach to protecting data in transit is IPsec (Internet Protocol Security). IPsec works at the network layer to provide end-to-end encryption and integrity verification for data traveling across IP networks. It secures communication by encrypting IP packets and using cryptographic keys to prevent unauthorized access during transmission. This is particularly important in scenarios such as remote work or internal systems that communicate over potentially vulnerable segments of the network. IPsec is widely used in enterprise VPNs and inter-office communication because it allows secure tunnels to be formed across untrusted infrastructure like the public internet.
Another critical method for protecting data at rest is Full Disk Encryption (FDE). This solution encrypts the entire contents of a storage device (e.g., laptop hard drive), ensuring that if the device is lost or stolen, its contents cannot be accessed without proper decryption credentials. FDE protects all files, including the operating system and system files, and automatically encrypts data as it's written to disk and decrypts it during access by authenticated users. This is especially important for mobile employees or in industries where devices contain customer or financial information.
Now, let’s consider the incorrect choices:
MD5 hashing (B) is not suitable for confidentiality. Hashing is designed for verifying data integrity, not hiding the content. Moreover, MD5 is outdated and vulnerable to collision attacks, making it an insecure choice even for integrity checking.
TLS (C) does provide encryption for data in transit, such as emails or HTTPS web traffic. However, while useful, it offers less comprehensive network-level protection than IPsec and is often limited to specific applications (like SMTP or HTTPS), not the entire network flow.
SSH (E) encrypts terminal sessions and file transfers, such as with SFTP, but it is typically used on a per-connection basis. While secure, it does not scale easily for enterprise-wide confidentiality needs the way IPsec and FDE do.
In summary, IPsec (A) and Full Disk Encryption (D) are the two most effective and scalable methods to maintain data confidentiality across a corporate network.
What are two types of cyberattacks that can be prevented or reduced by implementing DNS security extensions (DNSSEC)? (Choose two.)
A. DNS cache poisoning
B. DNS amplification
C. Denial of Service (DoS)
D. Man-in-the-middle (MitM) DNS attacks
E. Domain hijacking
Correct Answers: A and D
Explanation:
DNSSEC, or Domain Name System Security Extensions, enhances the traditional DNS protocol by adding a layer of authentication and integrity checking to DNS responses. It doesn’t encrypt DNS data but instead uses digital signatures and public key cryptography to validate that the DNS data received by a client has not been tampered with and truly comes from an authoritative source.
One of the primary threats that DNSSEC addresses is DNS cache poisoning (A). In a typical DNS attack of this kind, a malicious actor injects false DNS records into a resolver’s cache, redirecting users to fraudulent websites without their knowledge. Because traditional DNS doesn’t validate the authenticity of a response, it's possible for attackers to spoof DNS replies. DNSSEC prevents this by requiring resolvers to verify the digital signature attached to each DNS record. If the signature does not match or is missing, the resolver rejects the record, thereby eliminating the opportunity for attackers to poison the cache with unauthorized entries.
Another major attack mitigated by DNSSEC is the man-in-the-middle (MitM) attack involving DNS responses (D). In such attacks, a threat actor intercepts DNS queries or responses to substitute fraudulent data, possibly redirecting the user to a malicious IP address. With DNSSEC in place, any attempt to modify DNS responses would cause the signature validation to fail, allowing the resolver to detect and discard tampered data. Although DNSSEC does not prevent the interception itself, it ensures that data integrity is preserved, making the attack ineffective.
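As a hedged illustration of how validation surfaces to a client, the Python sketch below (using the third-party dnspython package) sends a query with the DNSSEC-OK bit set and reports whether the validating resolver set the Authenticated Data (AD) flag on the answer. The resolver address and domain names are placeholders.

```python
# Sketch: check whether a validating resolver reports DNSSEC-authenticated data.
# Uses the third-party dnspython package; the resolver IP and domains are placeholders.
import dns.flags
import dns.resolver

def dnssec_validated(domain: str, resolver_ip: str = "1.1.1.1") -> bool:
    """Return True if the resolver set the AD (Authenticated Data) flag for this answer."""
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [resolver_ip]
    resolver.use_edns(0, dns.flags.DO, 1232)   # set the DNSSEC-OK bit on the query
    answer = resolver.resolve(domain, "A")
    return bool(answer.response.flags & dns.flags.AD)

if __name__ == "__main__":
    for name in ("example.com", "example.org"):
        print(name, "AD flag set:", dnssec_validated(name))
```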
Now let’s review the incorrect options:
DNS amplification (B) is a type of DDoS attack that uses vulnerable DNS servers to flood a target with large amounts of data. DNSSEC does not prevent this kind of abuse—in fact, DNSSEC increases the size of DNS responses, potentially making amplification worse if not properly configured.
DoS attacks (C) are generic service disruption attempts and are typically mitigated with rate-limiting, firewalls, or load balancing—not DNSSEC.
Domain hijacking (E) refers to unauthorized changes to domain registration records at the registrar level. DNSSEC secures name resolution, not domain ownership or control, so it cannot prevent domain hijacking.
In conclusion, the implementation of DNSSEC significantly reduces the risk of cache poisoning and MitM attacks by enabling DNS resolvers to validate the authenticity of DNS data. The correct answers are therefore A and D.
Your organization uses Microsoft 365 and has Exchange Online mailboxes. You want to ensure that all external email messages are clearly marked to help users distinguish them from internal emails. What should you do?
A. Configure a mail flow rule to prepend a disclaimer for messages from external senders
B. Enable external tagging via the Microsoft 365 Defender portal
C. Set up a transport rule to block external email
D. Modify the default spam filter policy to tag external senders
Correct Answer: A
Explanation:
Marking external emails is a common requirement for security-conscious organizations. The goal is to visually flag messages originating outside the organization so that users are more cautious when interacting with them — especially if these emails contain links, attachments, or unusual requests.
Option A, configuring a mail flow rule (transport rule) to prepend a disclaimer, is a well-established method for labeling emails from external sources. These rules can detect if the sender is outside the organization and automatically add a header or message banner (e.g., "External Email – Use Caution") to the email body or subject. This approach is customizable and easy to implement through the Exchange admin center.
Option B, enabling external tagging, is a new feature in Microsoft 365 that automatically applies an "External" label in the message header and Outlook client UI for messages from outside the organization. However, this option is not configurable for visual disclaimers or banners, and is only visible in supported clients. While useful, it’s not as comprehensive as a mail flow rule for customizable messaging.
Option C, blocking external email, would be counterproductive unless the organization has no need for outside communication—which is rare.
Option D, modifying the spam filter policy, deals with spam detection and thresholds. It cannot insert disclaimers or labels for regular, legitimate external emails.
Thus, while external tagging (Option B) is helpful, Option A provides the greatest control and visibility, making it the most effective solution for this requirement.
Your organization has a hybrid Exchange environment with Exchange Server 2016 on-premises and Exchange Online.
A user reports they are unable to access a shared mailbox hosted on Exchange Online from their on-premises Outlook client. What is the likely cause?
A. Autodiscover service not published externally
B. Hybrid Modern Authentication is not configured
C. Mailbox permissions not synced via Azure AD Connect
D. User is not licensed in Microsoft 365
Correct Answer: B
Explanation:
In hybrid Exchange deployments, accessing shared mailboxes across environments (from on-premises to Exchange Online, and vice versa) requires proper authentication and connectivity configuration. If a user on an on-premises Exchange client attempts to access a shared mailbox in Exchange Online, they must authenticate to Microsoft 365 services from their Outlook client.
The most likely cause in this scenario is that Hybrid Modern Authentication (HMA) is not configured. Without HMA, users with Outlook clients connected to on-premises Exchange cannot properly authenticate to access shared resources in Exchange Online. HMA allows single sign-on (SSO) using OAuth 2.0 and enables clients to access cloud resources securely.
Let’s analyze the other options:
Option A, the Autodiscover service, is necessary for Outlook configuration. However, in a hybrid setup, Autodiscover is usually configured to direct clients appropriately. If Autodiscover were not working, users wouldn’t connect to their own mailbox, let alone a shared one.
Option C, Azure AD Connect syncing mailbox permissions, is irrelevant. Azure AD Connect syncs user and group identities but does not sync mailbox permissions, which are handled separately through mailbox delegation or hybrid permissions.
Option D, Microsoft 365 licensing, applies to the shared mailbox only if it exceeds size or feature thresholds. Shared mailboxes under 50 GB do not require a license. More importantly, if the user trying to access the shared mailbox is not licensed, they can still access it if they are on-prem and properly authenticated via hybrid.
Thus, without Hybrid Modern Authentication, Outlook clients on-premises cannot authenticate securely to access cloud mailboxes, especially shared ones. Enabling HMA resolves these cross-premises access issues and supports a seamless hybrid experience.